Posts tagged python

Profiling your web application with wsgid request filters

It’s always important to know how your web app is performing, and there are many ways to do this: measuring memory consumption, for example. This blog post will explore how you can measure response time in your live application.

We will see how to do this using a new feature available in wsgid 0.7.0: RequestFilters.

Request Filters

This feature works in a very simple way: wsgid gives you the opportunity to run external code inside the regular request/response flow. Wsgid defines two distinct interfaces, IPreRequestFilter and IPostRequestFilter. By implementing either of these interfaces, your code will be called on every request.

Since we are injecting external code into the execution flow of our application, wsgid must ensure that a failing filter does not crash a successful request. So unless you call sys.exit(0) or crash the Python interpreter (believe me, it happens!), all requests should complete as usual.

IPreRequestFilter Interface

This interface has just one method (code here):

class IPreRequestFilter(plugnplay.Interface):

    def process(self, m2message, environ):
        pass

The m2message parameter is an instance of wsgid.core.Message, the parsed mongrel2 message. The environ parameter is the WSGI environ, as described by the PEP-333 specification.

Your code can modify the environ freely. This modified environ will be passed to the running WSGI app.
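
As an illustration, here is a minimal sketch of a pre-request filter that stamps the moment the request entered wsgid into the environ. The import path of the interface and the environ key are assumptions made for this example, not documented wsgid API:

import time

from plugnplay import Plugin
from wsgid.core.plugins import IPreRequestFilter  # assumed import path


class RequestStartFilter(Plugin):
    '''
    Stores the time the request entered wsgid inside the WSGI environ,
    so the application (or a post filter) can use it later.
    '''
    implements = [IPreRequestFilter, ]

    def process(self, m2message, environ):
        # 'wsgid.request_start' is an arbitrary key chosen for this example.
        environ['wsgid.request_start'] = time.time()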

IPostRequestFilter Interface

This interface has two methods (code here):

class IPostRequestFilter(plugnplay.Interface):

    def process(self, m2message, status, headers, body):
        pass

    def exception(self, m2message, e):
        pass

Since this filter is called after the WSGI app has run, the filter receives all the values returned by it. These are the raw values as defined by PEP-333. The exception method is called when the WSGI app call fails for any reason (usually an unhandled exception). From the wsgid perspective, even if the application returns an HTTP 500 it is considered a successful run. If you want to detect error HTTP responses, you should inspect the status parameter received by the process method.

Important note: if the WSGI app call fails, only the exception method will be called. It does not make sense to call the process method, since the WSGI app did not return any values.
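
To make the status inspection concrete, here is a minimal sketch of a post-request filter that logs 5xx responses and unhandled exceptions. Again, the import path of the interface is an assumption for the example:

import logging

from plugnplay import Plugin
from wsgid.core.plugins import IPostRequestFilter  # assumed import path

logger = logging.getLogger(__name__)


class ErrorLoggerFilter(Plugin):
    '''
    Logs server errors returned by the WSGI app and unhandled exceptions.
    '''
    implements = [IPostRequestFilter, ]

    def process(self, m2message, status, headers, body):
        # status is the raw PEP-333 status string, e.g. '500 Internal Server Error'.
        if status.startswith('5'):
            logger.warning('Server error response: %s', status)
        return (status, headers, body)

    def exception(self, m2message, e):
        logger.error('WSGI app raised an unhandled exception: %r', e)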

Simple examples

The wsgid source-code includes two simple examples of request filters. The first is the WorkerPidFilter. This just adds a header to the response containing the PID of the wsgid worker that ran the WSGI app. The code is very simple:

class WorkerPidFilter(Plugin):
    '''
    Simple filter that adds one more response header containing the
    pid of the Wsgid worker that was running the WSGI application
    '''
    implements = [IPostRequestFilter, ]

    def process(self, message, status, headers, body):
        return (status, headers + [('X-Worker', str(os.getpid()))], body)

    def exception(self, message, e):
        pass

Another example is a filter that calculates the total elapsed time of your request. You can take a look at the code here.

A more real world example

These example filters work very well for simple cases. When it comes to real-world production applications, we may have to do things a little differently. First of all, when writing a request filter we must consider the time the filter takes to run: the less it takes, the less it interferes with the request’s response time.

We could, for example, save the response times in a database, but we know that this wouldn’t be a great idea in the end! So the idea here is to save the information for later processing. We could write it to the logs: you can do this by just importing wsgid.core.log. This is a standard Python Logger and it writes the information to your app’s main log file (the file is located inside your wsgid app folder at logs/wsgid.log). But by doing this we would have to keep parsing the application log, which is not a very good idea either.
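
If you do go the logging route, it could be as simple as the sketch below (assuming wsgid.core.log really exposes a standard logging.Logger, as described above; the helper function is hypothetical):

from wsgid.core import log  # assumed to be a standard logging.Logger


def report_timing(path, elapsed_ms):
    # Ends up in logs/wsgid.log inside your wsgid app folder.
    log.info('path=%s elapsed=%.2fms', path, elapsed_ms)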

Another idea is to use a queue server, such as RabbitMQ. This way we keep the filter execution time very low and we have all messages saved to be processed later, without the need to parse any log file.

Attached to the RabbitMQ server we would have a separate process, reading all the messages and storing them wherever we like: in a database, in a Graphite server, or any other place.

The idea behind this implementation

The idea is very simple. Since your filter receives the parsed mongrel2 message, you have access to the connection ID of the client making the current request. This can be your primary key when measuring the values. In your pre-request filter you can send to the queue server, along with the connection ID, all the HTTP headers, the current time and any other information you want.

When the post-request filter runs, you send the connection ID, the status returned by the WSGI app, the response headers and the current time again. With this information you can not only calculate the response time (per request URI, if you like) but also all sorts of statistics about your requests: User-Agent percentages, average Content-Length, the most/least visited URIs, HTTP status distribution (how many 200’s, 400’s and 500’s your WSGI app is returning), to cite just a few.
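
Here is a rough sketch of that idea using the pika client. The queue name, the connection-id attribute on the mongrel2 message and the interface import paths are all assumptions for the example; a real filter would also handle AMQP reconnection instead of connecting at import time:

import json
import time

import pika
from plugnplay import Plugin
from wsgid.core.plugins import IPreRequestFilter, IPostRequestFilter  # assumed import path

# One connection per worker process; a production filter would handle reconnects.
_connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
_channel = _connection.channel()
_channel.queue_declare(queue='wsgid.metrics')


def _publish(payload):
    _channel.basic_publish(exchange='', routing_key='wsgid.metrics',
                           body=json.dumps(payload))


class RequestStartedFilter(Plugin):
    implements = [IPreRequestFilter, ]

    def process(self, m2message, environ):
        _publish({'conn_id': getattr(m2message, 'conn_id', None),  # attribute name assumed
                  'phase': 'start',
                  'time': time.time(),
                  'path': environ.get('PATH_INFO'),
                  'user_agent': environ.get('HTTP_USER_AGENT')})


class RequestFinishedFilter(Plugin):
    implements = [IPostRequestFilter, ]

    def process(self, m2message, status, headers, body):
        _publish({'conn_id': getattr(m2message, 'conn_id', None),
                  'phase': 'end',
                  'time': time.time(),
                  'status': status,
                  'headers': headers})
        return (status, headers, body)

    def exception(self, m2message, e):
        _publish({'conn_id': getattr(m2message, 'conn_id', None),
                  'phase': 'error',
                  'time': time.time(),
                  'error': repr(e)})

A separate consumer on the other side of the queue can then pair the start and end messages by connection ID and compute response times and any other statistics it wants.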

I will leave this implementation for a future post, as I intend to calculate all this information for the wsgid (http://wsgid.com) website and for my personal website (http://daltonmatos.com). This will be perfectly possible since they both run on wsgid.

Thanks for reading and I hope you enjoyed it!


How to configure wsgid and mongrel2 to handle arbitrary sized requests

Intro

Wsgid is a generic handler for the mongrel2 web server. It acts as a bridge between WSGI and mongrel2 and makes it possible to run any WSGI python application using mongrel2 as the front end. To know more about both projects visit their official websites at: http://wsgid.com and http://mongrel2.org

In this post we will see how to configure mongrel2 and wsgid so your application will be able to handle arbitrary sized requests very easily and with a low memory usage.

Why?

The problem with receiving big requests is that, depending on how your front-end server deals with them, you can easily run out of resources, and in a worst-case scenario your application can stop responding for a while. If your application exposes any POST endpoint, you should be prepared to handle some big requests along the way.

The big picture

Mongrel2 has a very clever way to handle big requests without consuming all your server’s resources, in fact while consuming almost nothing but bandwidth. Basically, what it does is dump the entire request to disk, using almost no additional memory. Here is how it works:

When the request comes in, mongrel2 sends a special message to the back-end handler containing only the original headers. This message announces the start of the request, and it’s up to the handler to accept or deny it. To accept the request, your handler simply does nothing. To deny it, your handler must send a 0-length message back to mongrel2. Thanks to mongrel2’s async model, you can send this deny message at any time; it does not need to be immediately after receiving the request-start message.

If you happen to be using wsgid (I hope you are!), this happens without your WSGI application ever knowing. All your application will see (if anything) is the wsgi.input attribute of the WSGI environ (if you happen to write a WSGI framework) or just request.POST (or something similar, depending on which framework you’re using). It’s all transparent.

After the request is completely dumped, mongrel2 sends another message notifying the end of the upload, and that’s when wsgid actually calls your application. During the whole upload process, your application does not even know that a new request has come in and is being handled.

All of this happens while mongrel2 is already dumping the request content to disk. If your handler happens to deny the request, mongrel2 closes the connection and removes the temporary file. To learn more about how mongrel2 notifies the handlers, you can read the online manual: http://mongrel2.org/static/mongrel2-manual.html.

Configuring mongrel2 to handle big requests

The configuration of the server is extremely simple. All you have to do is tell it where to dump any big request that may arrive. You may ask: how big must a request be before it is dumped to disk? You decide this and tell mongrel2 about it.

To tell it where to dump the request, set the upload.temp_store option to the correct path. This path must include the filename template; an example could be /uploads/m2.XXXXXX. It must be a mkstemp(3)-compatible template. See the manpage for more details.

The size threshold is set with limits.content_length. This is the biggest request, in bytes, that mongrel2 will handle without dumping it to disk.
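
Putting the two options together, the relevant settings could look something like this (the exact syntax depends on how your mongrel2 config is written, and the 20 MB threshold is only an example):

upload.temp_store = '/uploads/m2.XXXXXX'
limits.content_length = 20971520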

Configuring wsgid to understand what mongrel2 is saying

Since mongrel2 is saving requests on disk, wsgid must be able to open these files and pass their contents to the application being run. It’s important to note that the path chosen in upload.temp_store is always relative to where your mongrel2 is chrooted, so somehow we must tell wsgid where that is.

Fortunately this new release of wsgid comes with a new command line option: --mongrel2-chroot. You just have to pass this to your wsgid instance to be able to handle big requests.

Alternatively, you can add this option to your wsgid.json configuration file if you want. This can be accomplished with a simple command:

$ wsgid config --app-path=/path/to/your/wsgid-app --mongrel2-chroot=/path/to/mongrel2/chroot

This will save the new option to your config file. Just restart your wsgid instance and it will re-read the config file.

Working without knowing where mongrel2 is chrooted

There is another way to handle these big requests. In this approach you don’t need to pass --mongrel2-chroot to all your wsgid instances. The trick here is to mount the same device in many different places. The easiest way to do this is to have mongrel2’s temp_store on a separate partition (or logical volume, if you are using LVM). Let’s see an example:

Suppose we have mongrel2 chrooted at /var/mongrel2/ and configured this way:

upload.temp_store = '/uploads/m2.XXXXXX'

Since mongrel2 assumes /uploads is relative to its chroot, we must mount this device at the right place.

# mount /dev/vg01/uploads /var/mongrel2/uploads

Now our logical volume is mounted at /var/mongrel2/uploads. There is one last step before we can start serving big requests with wsgid. When mongrel2 sends the upload-started message to wsgid, which contains the path of the temporary file, the path received by wsgid is the same one we put in mongrel2’s config. So wsgid will try to read /uploads/m2.384Tdg (for example) and will obviously fail. We need a way to make /uploads/ also available to wsgid (which normally is not chrooted anywhere), and the trick to do this is to re-mount the same device at a different place. This is how we do it:

# mount /dev/vg01/uploads /uploads

So now the device where mongrel2 writes the temporary files is available to all wsgid processes that need to read them. Remember that mongrel2 does not remove any of these files, because it obviously does not know when your app is done with them. So it’s up to you to clean them up.

If your /uploads directory is part of a bigger volume, or is not on a separate one, you can still use this multi-mount approach: just use the --bind option of mount(8). This way you can re-mount any directory at another place. Read the mount man page for more details.
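
For example, using the same paths as above, a bind mount would look like this:

# mount --bind /var/mongrel2/uploads /uploads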

Conclusion

This is another cool mongrel2 feature that the latest wsgid release (0.5.0) already supports. So now wsgid supports all the major features mongrel2 provides and is becoming more mature, finding its way toward being a production-ready tool.

The complete changelog for this release is available at wsgid.com/changelog, and you can grab your copy of wsgid at wsgid.com.

Thanks for reading and enjoy!


Deploying your django application with mongrel2 and wsgid

Some time ago, when people talked about webapp deployment they were probably talking about the LAMP stack. Since the appearance of the nginx web server this has changed quite a bit. In this post you will get to know another stack, one that uses neither apache nor nginx but is equally interesting and quite scalable too. We will be talking about mongrel2 for the front end and wsgid for the back-end worker processes.

TL;DR

In this post you will learn how to configure mongrel2 and wsgid to run your WSGI application. If you are interested in web servers, web application deployment and system administration, read on!


How I’m planning to test one of my projects by writing another one

The purpose

Some time ago, I started writing wsgid. It is a project that brings you the ability to run any WSGI application with the mongrel2 web server. Later I had the idea of writing a web application that could help me test wsgid in a real production environment. Both wsgid’s official website and my personal website are hosted using wsgid and mongrel2 as the backend, but neither of them even uses a database (and maybe won’t for a long time), and since they are very simple, low-traffic websites, I think they aren’t good enough to test wsgid.

I thought about the kinds of applications that would be better suited to testing wsgid and ended up deciding to write my own blog engine. This same engine will soon power my own blog, self-hosted and managed by me.

Wait, another blog engine? Why?

Yes, it will be a blog engine, another one! =) Before you think about NIH, I will try to explain why I decided to start this project. The very first thing I did when I thought about writing one more blog engine was to look for existing ones. But it could not be just any blog engine; it had to obey some rules:

  • Written in Python
  • Uses a WSGI framework
  • Bonus points for engines written in Django

These rules are obvious. If I’m trying to test wsgid (which is a WSGI gateway), the application must be written in Python and be compliant with the WSGI specification. The last one was a bonus point because I’m currently learning Django. I found that there are plenty of Django blog engines, but one consideration made me not choose any of them: I want to learn something new, and I think the best way to do this is to have a project. And not just any project, but one that you will really use. Writing the code of my own blog is a good way to keep the project alive, evolving and getting better and better.

Another point that helped me with this decision was the desire to self-host my applications. This will give me very important knowledge and experience in system administration, servers, deployment, clusters and much more. Recently I had two amazing opportunities to work with these topics, and I’m sure that if I had already been managing my own infrastructure for a while I could have done much better in those two interviews.

As of the writing of this blog post, my blog is still hosted on wordpress.com. I plan to migrate to my own servers as soon as this new project becomes minimally usable. This will be good for many reasons I have already mentioned, plus one more: I will be able to have my own domain name and not pay any more money for it, since I already own daltonmatos.com.

Wish me luck

Starting a new project is always a great responsibility: first to yourself, second to the people who follow your projects and, most important, to whoever uses your project. So today I decided to start this new project: the blog engine. It has no name yet; the only certainty is that I will create it and host my own blog with it. The code will be hosted on github, where I publish all my code. So if you are interested, follow me there and stay tuned!

Thank you for reading!


My personal website is finally launched: daltonmatos.com


Today is a very important day. After a long time, I have finally managed to launch my own website. I had bought the domain quite a while ago but had not yet sat down to write the site’s content and code.

The site is a django application and runs with the help of a project of mine already mentioned here on the blog, wsgid. For now the blog stays here on wordpress.com, but I intend to migrate it there so I will have everything centralized. I still have to find a blog engine so I can migrate the blog.

Well, that’s it! Go there and take a look: http://daltonmatos.com


Running your WSGI app as a *nix daemon

Almost every time we think about web apps (at least I do) we think about the apache stack. This post will show you a different method to deploy your apps, starting by not using apache as the web server.

First of all, we need to know a key piece of all this:

Mongrel2

Mongrel2 is a web server written by Zed A. Shaw. The server is language agnostic, which means it just doesn’t care in which language you wrote your app; all you have to do is follow some very simple rules and you will be able to plug mongrel2 and your app together.

The first rule is that your app must know how to communicate through a queue, in this case ØMQ (http://zeromq.org), and the second one is that you must follow the very simple protocol specified by mongrel2. You can (and must!) learn more about mongrel2 on the official website: http://mongrel2.org.

The key feature of mongrel2 is that using a queue to communicate with the applications makes it possible to run the apps decoupled from the server; this means your app doesn’t run as a mod_something or as a child process of the web server. Another advantage is that ØMQ can communicate over TCP, which is how you can run your app on a machine other than the one where mongrel2 is running.

The ability to run your app this way, that is, on different machines, is fantastic, because whenever needed you can add a new node to your cluster and start new instances of your app. All these instances will connect to the same mongrel2, and the requests will be load-balanced among all instances using a round-robin policy.

wsgid

wsgid is a project I developed to make it possible to run your WSGI apps with the mongrel2 web server. wsgid is the bridge between mongrel2 and the WSGI specification (PEP-333); this means that just by conforming to the WSGI spec your app is ready to run with mongrel2+wsgid.

wsgid speaks both mongrel2’s protocol and WSGI. Also, from now on your app will be a separate process, with its own PID and everything else a process has. This brings you some advantages:

  • It can run as any user on your operating system;
  • Being an OS process, it is automatically subject to whatever the OS can impose on a process, e.g. bandwidth limits, memory limits, CPU scheduling, etc.;
  • It can run inside a chroot, in case you are running untrusted code;
  • Many others.

Starting new instances of your app is as easy as:

   $ wsgid --app-path=/path/to/wsgid-app-folder/ --recv=tcp://127.0.0.1:8888 --send=tcp://127.0.0.1:8889 \
      --workers=4

In this very simple example we are connecting to a ØMQ endpoint on the same machine where the app will be running, but nothing prevents us from using --recv=tcp://192.168.0.2:8888, this IP being the address of another node in the cluster. Also in this example, the option --workers=4 starts 4 processes that will respond to requests for your app.

wsgid, key features

Besides facilitating the deployment of WSGI applications with mongrel2, wsgid also has other important features:

workers

With this option you can start any number of processes at once. That means you don’t need to run the same wsgid command 4 times if you want four instances, although you can do that without any problem. This option gives you the possibility of passing --workers=4, which gives you the same result.

keep alive

With the keep-alive option, wsgid automatically restarts any process that has finished its execution. This means that a call to wsgid with --workers=4 --keep-alive will always keep 4 processes working for your application.

hot deploy

Whenever wsgid restarts one of the processes, the code of your application is reloaded; after all, it is a new process being created. That means you can update the source code of your application and just send SIGTERM to all its workers; wsgid will restart each of them, this time already running the new source code.
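
For example, once you have identified the worker PIDs (with ps, for instance), a hot deploy boils down to something like this (the PID is a placeholder):

   $ kill -TERM <worker-pid>    # repeat for each worker; with --keep-alive it comes back running the new code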

chroot

wsgid can automatically chroot to the location where the application is. This makes the application run isolated, thus creating a somewhat safer (though not entirely safe) environment in case you are running untrusted code.

Plugable system for Application Loading

wsgid has a very simple but very powerful plugin sub-system. With it you can write your own application loader, in case wsgid is not able to load your WSGI app. With such a sub-system, it will be very simple to add support for other WSGI frameworks so that more applications can make use of this project.

That’s it! I hope you enjoyed the project; any feedback is more than welcome. To learn more about the project visit the official website: http://wsgid.com.

But remember, use what works best for you, use what best serves you. I’m not saying that the solution mongrel2+wsgid is the best ever, I’m just saying that this combination is fantastic and should be considered whenever you are thinking of publishing a new WSGI application.

Just out of curiosity, the wsgid.com website is a django application and runs with wsgid.
