Gunicorn timeout exception for specific URLs - Python

I use Gunicorn behind NGINX for my Python web service.
I observe the following problem: even though client_max_body_size is set big enough in nginx's config, I still get a 502 error when I try to upload big files. I believe that's because the upload does not finish within the 90-second timeout set when I run my gunicorn workers.
So, my question: is it possible to specify a timeout exception for certain URLs in the Gunicorn command line or config? For example, I want a higher timeout for the file-upload URL and a smaller one for all other URLs. If not, what workaround could be implemented?
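For reference, the setup looks roughly like this (names and values are placeholders):

# nginx config (inside the server block)
client_max_body_size 500M;

# how the gunicorn workers are started, with the 90-second timeout
gunicorn --workers 4 --timeout 90 --bind 127.0.0.1:8000 myapp.wsgi:application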
Thanks in advance!

Related

Uninterrupted nginx + 2 gunicorns setup

So what's the trick? Nginx is facing the client. Normally the requests are forwarded to gunicorn A at port 80.
You can't update the code in place, since something might go wrong. So you do a fresh code checkout and launch a separate gunicorn B on some port, say 5678.
Once you test the new code on a development/testing database, you:
Adjust gunicorn B to point to the production database, but do not send any requests to it yet.
Stop gunicorn A. Nginx now, ever so briefly, responds with an error.
Set nginx to point to gunicorn B, still at port 5678.
Restart nginx.
Is this about right? Do you just write a script to run the four actions as fast as possible, to minimize the window (between steps 2 and 4) during which the server responds with an error?
Nginx supports configuration reloading. Using this feature, updating your application can work like this:
Start a new instance, Gunicorn B.
Adjust the nginx configuration to forward traffic to Gunicorn B.
Reload the nginx configuration with nginx -s reload. After this, Gunicorn B will serve new requests, while Gunicorn A will still finish serving old requests.
Wait for the old nginx worker process to exit (which means all requests initiated before the reload are now done) and then stop Gunicorn A.
Assuming your application works correctly with two concurrent instances, this gives you a zero-downtime update.
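A minimal sketch of that flow, assuming Gunicorn A listens on 127.0.0.1:8000 and Gunicorn B on 127.0.0.1:5678 (ports, paths, and module names are illustrative):

# nginx upstream before the switch
upstream app { server 127.0.0.1:8000; }  # Gunicorn A

# 1. start Gunicorn B from the fresh checkout
gunicorn --bind 127.0.0.1:5678 --workers 4 myproject.wsgi:application

# 2. edit the upstream to server 127.0.0.1:5678; (Gunicorn B), then:
nginx -t          # validate the new configuration first
nginx -s reload   # old nginx workers finish in-flight requests, then exit

# 3. once the old nginx workers have exited, stop Gunicorn A
kill $(cat /run/gunicorn-a.pid)  # assumes A wrote a pid file here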
The relevant excerpt from the nginx documentation:
Once the master process receives the signal to reload configuration, it checks the syntax validity of the new configuration file and tries to apply the configuration provided in it. If this is a success, the master process starts new worker processes and sends messages to old worker processes, requesting them to shut down. Otherwise, the master process rolls back the changes and continues to work with the old configuration. Old worker processes, receiving a command to shut down, stop accepting new connections and continue to service current requests until all such requests are serviced. After that, the old worker processes exit.

Daphne Django file upload size limitations

I am using Daphne for both WebSocket and HTTP connections. I am running 4 worker containers, and right now everything runs locally in a docker container.
My daphne server fails if I try to upload a file that is 400 MB. It works fine for small files up to 15 MB.
My docker container quits with exit code 137. I don't get any error in the daphne logs; the daphne container just dies, but the worker containers keep running.
Does anyone know if there is a way to increase the upload limit on daphne, or am I missing something else?
I start the daphne server with:
daphne -b 0.0.0.0 -p 8001 project.asgi:channel_layer --access-log=${LOGS}/daphne.access.log
This is because daphne loads the entire HTTP POST request body completely, and immediately, before handing control over to Django with Channels.
All 400 MB are loaded into RAM, so your docker container died from running out of memory (exit code 137 corresponds to SIGKILL, which is what the kernel's OOM killer sends).
This happens even before Django checks the size of the request body. See here
There is an open ticket here
If you want to avoid this right now, use uvicorn instead of daphne. Uvicorn passes control to Django in chunks, and depending on the FILE_UPLOAD_MAX_MEMORY_SIZE Django setting you will receive a temporary file on your hard disk (not in RAM). But you need to write your own AsyncHttpConsumer or AsgiHandler, because AsgiHandler and AsgiRequest from channels do not support a chunked body either. This will be possible after the PR.
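A sketch of that workaround, assuming your asgi.py exposes an application object (the module path and threshold are illustrative; 2621440 bytes is Django's documented default):

# run the ASGI app under uvicorn instead of daphne
uvicorn project.asgi:application --host 0.0.0.0 --port 8001

# settings.py: bodies above this many bytes are spooled to a
# temporary file on disk instead of being kept in RAM
FILE_UPLOAD_MAX_MEMORY_SIZE = 2621440  # 2.5 MB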

Service inside docker container stops after some time

I have deployed a REST service inside a docker container using uWSGI and nginx.
When I run this Python Flask REST service inside the docker container, it works fine for the first hour, but after some time nginx and the REST service stop for some reason.
Has anyone faced a similar issue?
Is there any known fix for this issue?
Consider doing a docker ps -a to get the stopped container's identifier.
-a here just means listing all of the containers you have on your machine.
Then do docker inspect and look for the LogPath attribute.
Open up the container's log file and see if you could identify the root cause on why the process died inside the container. (You might need root permission to do this)
Note: a process can die because of anything, e.g. a code fault.
If nothing suspicious shows up in the log file, then you might want to check the State attribute. Also check the ExitCode attribute and see if you can work backwards to figure out which line of your application could have exited with that code.
Also check the OOMKilled flag; if it is true, it means your container was killed due to an out-of-memory error.
If you still can't figure out why, you might need to add more logging to your application to get more insight into why it died.
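Putting those checks together (the container ID is a placeholder):

docker ps -a                                  # find the stopped container's ID
docker inspect <container_id> | grep LogPath  # locate the container's log file
docker inspect --format '{{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' <container_id>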

Python Twitter Bot w/ Heroku Error: R10 Boot Timeout

I have developed a simple python twitter bot which periodically executes various functions using the following libraries:
TwitterFollowBot==2.0.2
schedule==0.3.2
The application works fine when I execute it on my computer, and I wanted to migrate it to Heroku so it could run independently. Upon executing it on Heroku it works as it should for 60 seconds before timing out:
Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
After researching this, I found out that Heroku assigns the port dynamically, and my application must bind to whichever port it is given. From another thread I read that a possible solution required altering my Procfile, so I appended the PORT variable to the end:
Procfile: web: python app.py $PORT
This had no effect, so I tried it again with ${PORT}.
I also tried switching web: to bot: (which stopped my application from executing properly).
I found other solutions to this issue that worked for Node, or for Python applications using Django, Flask, etc. However, I was unable to find a solution for a plain .py application. Is this even possible? Or should I build my app with Flask and attempt one of the other fixes?
If it doesn't serve any web content then you don't need to run a web process. Call it something else, like bot, and then do:
heroku ps:scale web=0
heroku ps:scale bot=1
and you won't get any more R10s.
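So the Procfile entry would look something like this (assuming app.py is the bot's entry point, as above):

Procfile: bot: python app.py

A bot process is never sent HTTP traffic, so Heroku does not expect it to bind to $PORT, and the boot-timeout check no longer applies.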

Slow Flask on mod_wsgi on OpenShift

I have some performance issues with my Flask application on OpenShift.
I need to get some images from the database and display them on the web page. For this task, I have created a simple method:
from flask import request, make_response

@app.route('/getImage/')
def getImageFromUrl(url=None):
    return make_response(getImageFromDb(request.args['url']))
There are at most 10 images per page, and the problem is that this is slow... very slow.
On my local machine, started with app.run() (even in debug mode), it is super fast, so I assume the problem is something in mod_wsgi.
There are also these error messages in the log files:
Exception KeyError: KeyError(140116433057760,) in <module 'threading' from '/usr/lib64/python2.6/threading.pyc'> ignored
and
[error] server reached MaxClients setting, consider raising the MaxClients setting
What is happening and what should I do to speed the things up?
The KeyError exception is caused by gevent, I guess; there should be more code in the question :) To avoid it, import gevent before everything else.
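In practice, "import gevent before all" usually means monkey-patching at the very top of the WSGI entry point; a sketch:

from gevent import monkey
monkey.patch_all()  # must run before anything else is imported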
server reached MaxClients setting seems to be an Apache error, and should be investigated through the logs and the MaxClients and ServerLimit settings.
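With the prefork MPM those directives live in the Apache configuration; a sketch with illustrative values (MaxClients cannot exceed ServerLimit):

<IfModule prefork.c>
    ServerLimit  256
    MaxClients   256
</IfModule>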
The KeyError is usually because you are using an old version of mod_wsgi. Use mod_wsgi 3.3 or later, which has changes to accommodate the Python changes that caused this.
