I'm pretty inexperienced with gunicorn. I have it installed within a virtual env and am trying to serve a pyramid app with the following:
env/bin/gunicorn --pid /home/staging/gunicorn.pid --bind 0.0.0.0:8000 pyzendoc:main
However, every time a request is sent I get the following traceback from gunicorn:
2013-10-30 14:16:20 [1284] [ERROR] Error handling request
Traceback (most recent call last):
File "/home/staging/api/env/local/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 126, in handle_request
respiter = self.wsgi(environ, resp.start_response)
TypeError: main() takes exactly 1 argument (2 given)
I'm guessing that main in the gunicorn invocation refers to the main function in the Pyramid app's __init__.py, but that function takes (global_config, **settings) as arguments, so maybe gunicorn is somehow looking at the wrong callable. Has anyone seen anything similar before?
Thanks
C
The invocation pyzendoc:main expects to find a callable with an (environ, start_response) signature, i.e. a WSGI app, which you don't have until main(global_conf, **settings) is called and returns one. A better option is to use gunicorn_paster, as shown here.
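If you want to stick with plain gunicorn instead, another route (a rough sketch; production.ini stands in for whatever your PasteDeploy config file is actually called) is a tiny wsgi.py that builds the app up front and exposes it as a module-level callable:

# wsgi.py -- sketch: build the Pyramid WSGI app before gunicorn sees it
from pyramid.paster import get_app

# get_app() invokes main(global_config, **settings) via PasteDeploy
# and returns the finished WSGI callable
application = get_app('production.ini', 'main')

and then point gunicorn at that module-level app:

env/bin/gunicorn --pid /home/staging/gunicorn.pid --bind 0.0.0.0:8000 wsgi:application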
We're using a Flask app, served by gunicorn + Nginx in a docker container. Roughly once or twice every 24 hours, our logger displays the following message:
Dec 31 00:27:36 Computer logger: [12/31/2019 08:27:36] backend ERROR: 404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1949, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1925, in dispatch_request
    self.raise_routing_exception(req)
  File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1907, in raise_routing_exception
    raise request.routing_exception
  File "/usr/local/lib/python3.6/site-packages/flask/ctx.py", line 350, in match_request
    result = self.url_adapter.match(return_rule=True)
  File "/usr/local/lib/python3.6/site-packages/werkzeug/routing.py", line 1799, in match
    import pdb
werkzeug.exceptions.NotFound: 404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
What this affects, or what causes it, is totally mysterious, and the traceback is not helpful in the slightest. There is no noticeable change on the user-facing side. I've tried many things, including starting the app in debug mode. My most recent attempt was to go into /usr/local/lib/python3.6/site-packages/werkzeug/routing.py itself and change the behavior of the error. What happens, though, is that the logged message above simply prints out my code changes but doesn't seem to actually run them (i.e. everything appears to run exactly the same way no matter how I change that file; the changes themselves are logged, not the effects of running the changed code).
How does one debug an unresponsive Werkzeug 404 error like this? What options/settings/methods am I missing? And why would changes to the code of routing.py (for example) be ignored, yet printed to the screen, as if Python were just echoing the source between line x and line y?
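One thing that can at least narrow it down (a rough sketch, not from the original post; it assumes the Flask application object is named app) is a 404 handler that logs exactly what was requested and by whom:

# log the offending URL and referrer whenever Flask raises a 404
from flask import request

@app.errorhandler(404)
def log_not_found(error):
    app.logger.warning("404 for %s (referrer: %s)", request.url, request.referrer)
    return "Not Found", 404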
I am trying to deploy an aiohttp web app, but can't figure out how to get the app to serve over a unix socket, which I think I need in order to get nginx and gunicorn to talk to each other.
Simple example app from aiohttp documentation saved as app.py:
import asyncio
from aiohttp import web


@asyncio.coroutine
def hello(request):
    return web.Response(body=b'Hello')

app = web.Application()
app.router.add_route('GET', '/', hello)

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    handler = app.make_handler()
    f = loop.create_server(handler, '0.0.0.0', 8080)
    srv = loop.run_until_complete(f)
    try:
        loop.run_forever()
    except KeyboardInterrupt:
        pass
    finally:
        loop.run_until_complete(handler.finish_connections(1.0))
        srv.close()
        loop.run_until_complete(srv.wait_closed())
        loop.run_until_complete(app.finish())
    loop.close()
Running this with gunicorn directly works:
gunicorn -k aiohttp.worker.GunicornWebWorker -b 0.0.0.0:8000 app:app
But when I try to bind it to a unix socket instead, I get the following error.
gunicorn -k aiohttp.worker.GunicornWebWorker -b unix:my_sock.sock app:app
Traceback:
[2015-08-09 12:26:05 -0700] [26898] [INFO] Booting worker with pid: 26898
[2015-08-09 12:26:06 -0700] [26898] [ERROR] Exception in worker process:
Traceback (most recent call last):
File "/home/claire/absapp/venv/lib/python3.4/site- packages/gunicorn/arbiter.py", line 507, in spawn_worker
worker.init_process()
File "/home/claire/absapp/venv/lib/python3.4/site-packages/aiohttp/worker.py", line 28, in init_process
super().init_process()
File "/home/claire/absapp/venv/lib/python3.4/site-packages/gunicorn/workers/base.py", line 124, in init_process
self.run()
File "/home/claire/absapp/venv/lib/python3.4/site-packages/aiohttp/worker.py", line 34, in run
self.loop.run_until_complete(self._runner)
File "/usr/lib/python3.4/asyncio/base_events.py", line 268, in run_until_complete
return future.result()
File "/usr/lib/python3.4/asyncio/futures.py", line 277, in result
raise self._exception
File "/usr/lib/python3.4/asyncio/tasks.py", line 236, in _step
result = next(coro)
File "/home/claire/absapp/venv/lib/python3.4/site-packages/aiohttp/worker.py", line 81, in _run
handler = self.make_handler(self.wsgi, *sock.cfg_addr)
TypeError: make_handler() takes 4 positional arguments but 11 were given
[2015-08-09 12:26:06 -0700] [26898] [INFO] Worker exiting (pid: 26898)
I came across something in an aiohttp issue (https://github.com/KeepSafe/aiohttp/issues/136) that uses the socket module to create a socket and pass it to loop.create_server(), but I just couldn't get anything to work. (I also don't know whether the app in that code is the same web.Application object.)
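What I was trying, roughly (a sketch based on that issue; the socket path is mine, and this replaces the create_server line in the __main__ block of app.py above):

import socket

# build the unix socket by hand and pass it to create_server via sock=
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind('my_sock.sock')

f = loop.create_server(handler, sock=sock)
srv = loop.run_until_complete(f)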
Does anybody know how I can make this work? Thanks!
The problem is that GunicornWebWorker doesn't support unix domain sockets. It comes from GunicornWebWorker.make_handler(self, app, host, port), which expects host and port parameters. Obviously you don't have them when you're using a unix socket; you have a path to the socket instead.
Let's take a look at the beginning of GunicornWebWorker._run():
def _run(self):
    for sock in self.sockets:
        handler = self.make_handler(self.wsgi, *sock.cfg_addr)
        ...
In case of -b localhost:8000, sock.cfg_addr is ['localhost', 8000], but for -b unix:my_sock.sock it's just the string 'my_sock.sock'. This is where TypeError: make_handler() takes 4 positional arguments but 11 were given comes from: the * unpacks the string into one argument per character instead of unpacking a (host, port) pair.
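A quick illustration of that unpacking difference (nothing aiohttp-specific, just plain Python):

# plain-Python illustration of the unpacking difference
def make_handler(app, host, port):
    pass

make_handler('app', *['localhost', 8000])  # OK: the list unpacks into two extra arguments
make_handler('app', *'my_sock.sock')       # TypeError: the string unpacks into one argument per character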
The quick way to fix it is to subclass GunicornWebWorker and redefine GunicornWebWorker.make_handler() to ignore host and port. They are not used anyway. You can do it like this:
import logging

from aiohttp import worker


class FixedGunicornWebWorker(worker.GunicornWebWorker):

    def make_handler(self, app, *args):
        # host/port (or the socket path) arrive in *args and are simply ignored
        if hasattr(self.cfg, 'debug'):
            is_debug = self.cfg.debug
        else:
            is_debug = self.log.loglevel == logging.DEBUG

        return app.make_handler(
            logger=self.log,
            debug=is_debug,
            timeout=self.cfg.timeout,
            keep_alive=self.cfg.keepalive,
            access_log=self.log.access_log,
            access_log_format=self.cfg.access_log_format)
NOTE: You'll need to have the module with the fixed worker on your PYTHONPATH, otherwise Gunicorn won't be able to locate it. For example, if you put the fixed worker in a file called fixed_worker.py in the same directory you run gunicorn from, you can use it like this:
$ PYTHONPATH="`pwd`:$PYTHONPATH" gunicorn -k fixed_worker.FixedGunicornWebWorker -b unix:my_sock.sock app:app
UPD: I've also opened an issue in the aiohttp repository.
I have a basic application written in CherryPy. It looks somewhat like this:
import cherrypy

class API():
    @cherrypy.expose
    def index(self):
        return "<h3>Its working!<h3>"

if __name__ == '__main__':
    cherrypy.config.update({
        'server.socket_host': '127.0.0.1',
        'server.socket_port': 8082,
    })
    cherrypy.quickstart(API())
I would like to deploy this application with gunicorn, possibly with multiple workers. gunicorn starts when I run this in the terminal:
gunicorn -b localhost:8082 -w 4 test_app:API
But every time I try to access the default method, it gives an internal server error. Running it standalone with CherryPy works, however.
Here is the error:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/sync.py", line 130, in handle
self.handle_request(listener, req, client, addr)
File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/sync.py", line 171, in handle_request
respiter = self.wsgi(environ, resp.start_response)
TypeError: this constructor takes no arguments
I have a fairly large CherryPy application that I would like to deploy using gunicorn. Is there any way to mix CherryPy and gunicorn?
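For what it's worth, the traceback shows why this fails: gunicorn treats test_app:API as a WSGI callable and calls it as API(environ, start_response), hence "this constructor takes no arguments". One rough way to bridge the two (a sketch, untested against this exact app) is to mount the CherryPy tree and point gunicorn at the resulting WSGI application:

# test_app.py -- sketch: give gunicorn a real WSGI callable
import cherrypy

class API():
    @cherrypy.expose
    def index(self):
        return "<h3>Its working!<h3>"

# cherrypy.tree.mount() returns a cherrypy.Application, which speaks WSGI
wsgi_app = cherrypy.tree.mount(API(), '/')

and then, for example:

gunicorn -b localhost:8082 -w 4 test_app:wsgi_app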
I'm using Django 1.6 and Django-ImageKit 3.2.1.
I'm trying to generate images asynchronously with ImageKit. Async image generation works locally but not on the production server.
I'm using Celery and I've tried both:
IMAGEKIT_DEFAULT_CACHEFILE_BACKEND = 'imagekit.cachefiles.backends.Async'
IMAGEKIT_DEFAULT_CACHEFILE_BACKEND = 'imagekit.cachefiles.backends.Celery'
Using the Simple backend (synchronous) instead of Async or Celery works fine on the production server. So I don't understand why the asynchronous backend gives me the following ImportError (pulled from the Celery log):
[2014-04-05 21:51:26,325: CRITICAL/MainProcess] Can't decode message body: DecodeError(ImportError('No module named s3utils',),) [type:u'application/x-python-serialize' encoding:u'binary' headers:{}]
body: '\x80\x02}q\x01(U\x07expiresq\x02NU\x03utcq\x03\x88U\x04argsq\x04cimagekit.cachefiles.backends\nCelery\nq\x05)\x81q\x06}bcimagekit.cachefiles\nImageCacheFile\nq\x07)\x81q\x08}q\t(U\x11cachefile_backendq\nh\x06U\x12ca$
Traceback (most recent call last):
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/messaging.py", line 585, in _receive_callback
decoded = None if on_m else message.decode()
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/message.py", line 142, in decode
self.content_encoding, accept=self.accept)
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 184, in loads
return decode(data)
File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
self.gen.throw(type, value, traceback)
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 59, in _reraise_errors
reraise(wrapper, wrapper(exc), sys.exc_info()[2])
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 55, in _reraise_errors
yield
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 184, in loads
return decode(data)
File "/opt/python/run/venv/lib/python2.6/site-packages/kombu/serialization.py", line 64, in pickle_loads
return load(BytesIO(s))
DecodeError: No module named s3utils
s3utils is what defines my AWS S3 bucket paths. I'll post it if need be, but the strange thing, I think, is that the synchronous backend has no problem importing s3utils while the asynchronous one does... and it fails ONLY on the production server, not locally.
I'd be SO grateful for any help debugging this. I've been wrestling with this for days. I'm still learning Django and Python, so I'm hoping this is a stupid mistake on my part. My Google-fu has failed me.
As I hinted at in my comment above, this kind of thing is usually caused by forgetting to restart the worker.
It's a common gotcha with Celery. The workers are a separate process from your web server, so they have their own copies of your code loaded. Just like with your web server, if you change your code you need to reload it so the worker sees the change. The web server talks to your worker not by running code directly, but by passing serialized messages via the broker that say something like "call the function do_something()". The worker then reads that message and (here's the tricky part) calls its version of do_something(). So even if you restart your web server (so that it has the new version of your code), if you forget to reload the worker (which is what actually calls the function), the old version of the function will be called. In other words, you need to restart the worker any time you make a change to your tasks.
You might want to check out the autoreload option for development. It could save you some headaches.
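Concretely, that means bouncing the worker after every code change; for local development, Celery 3.x also ships an experimental autoreload flag. A rough example, assuming your Celery app lives in a module called proj and the worker runs under a service named celery (adjust both to your setup):

# restart the worker after deploying new code (service name depends on your setup)
sudo service celery restart

# during development: experimental auto-reload of task modules (Celery 3.x)
celery worker -A proj -l info --autoreload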
I've got a Flask app that I'm trying to deploy using Gunicorn and nginx. However, although it works fine locally, it throws a TemplateNotFound error when I run in with Gunicorn on my remote server.
I'm not sure how to even start debugging this, let alone why it's failing... I would love help on the former, if not the latter. I thought maybe it was a permissions issue, so I chmod'd the templates folder to 777... no luck. Here are all the relevant details:
install script
Starting with a bare Ubuntu 10.04 install, I run this to set up the server and pull in my code: https://github.com/total-impact/total-impact-deploy/blob/master/deploy.sh. Then I put this nginx config file at /etc/nginx/sites-available/total-impact:
server {
    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
Finally, I navigate to the app directory, run gunicorn web:app, and hit the server's IP address. This generates a 500 in the browser, and the following output on the command line:
stack trace:
root@jc:/home/ti/total-impact-webapp/totalimpactwebapp# gunicorn web:app
2012-05-28 23:15:06 [15313] [INFO] Starting gunicorn 0.14.3
2012-05-28 23:15:06 [15313] [INFO] Listening at: http://127.0.0.1:8000 (15313)
2012-05-28 23:15:06 [15313] [INFO] Using worker: sync
2012-05-28 23:15:06 [15316] [INFO] Booting worker with pid: 15316
2012-05-28 23:15:12,274 - totalimpactwebapp.core - ERROR - Exception on / [GET]
Traceback (most recent call last):
File "/usr/local/lib/python2.6/dist-packages/flask/app.py", line 1292, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.6/dist-packages/flask/app.py", line 1062, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.6/dist-packages/flask/app.py", line 1060, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.6/dist-packages/flask/app.py", line 1047, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/ti/total-impact-webapp/totalimpactwebapp/web.py", line 60, in home
return render_template('index.html', commits=False)
File "/usr/local/lib/python2.6/dist-packages/flask/templating.py", line 120, in render_template
return _render(ctx.app.jinja_env.get_template(template_name),
File "/usr/local/lib/python2.6/dist-packages/jinja2/environment.py", line 719, in get_template
return self._load_template(name, self.make_globals(globals))
File "/usr/local/lib/python2.6/dist-packages/jinja2/environment.py", line 693, in _load_template
template = self.loader.load(self, name, globals)
File "/usr/local/lib/python2.6/dist-packages/jinja2/loaders.py", line 115, in load source, filename, uptodate = self.get_source(environment, name)
File "/usr/local/lib/python2.6/dist-packages/flask/templating.py", line 61, in get_source
raise TemplateNotFound(template)
TemplateNotFound: index.html
Are your templates in [app root]/templates/?
If so, check to be sure your path is correct. Put this as the first line in the view that handles your homepage:
return app.root_path
If that path is what you expect to see - or if you're using Blueprints or another method that changes the default Jinja environment somehow - then it's a little more complicated.
Oddly, Jinja doesn't seem to have a jinja2.Environment.FileSystemLoader.get_search_path() method. I assumed it would have one :(
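That said, jinja2's FileSystemLoader does keep its directories in a searchpath attribute, so a throwaway debug view can dump both paths at once (a sketch; it assumes the stock setup where app.jinja_loader is a FileSystemLoader):

# temporary debug view: where Flask thinks the app lives, and where Jinja searches for templates
@app.route('/_debug_paths')
def debug_paths():
    return '%s | %s' % (app.root_path, app.jinja_loader.searchpath)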
Today, I experienced identical problems after a long period of my Flask app behaving quite normally (i.e. not throwing TemplateNotFound exceptions). None of the approaches mentioned by others here hit the mark or seemed appropriate (e.g. app.debug, path manipulation).
Instead, I tracked it down to the standard Flask app initialisation line:
app = Flask(__name__)
I had changed __name__ to another value (to get access to a named logger), not expecting all this carnage to unfold :-) Don't change this value unless you are very familiar with Flask internals.
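If the only goal is a named logger, a safer pattern (my sketch; the logger name is arbitrary) is to leave the constructor argument alone and create the logger separately:

import logging

from flask import Flask

# keep __name__: Flask derives root_path from the import name,
# and root_path is what anchors the templates/ folder lookup
app = Flask(__name__)

# a separately named logger, without touching Flask's import name
log = logging.getLogger('myapp.web')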
I have just spent 2 hours in a very similar situation and thought I'd post what ended up being the solution.
I was suddenly getting TemplateNotFound errors in the Apache logs from my Flask application, in production, which had been running fine until then. This resulted in 500 errors across the site.
The first issue was that the TemplateNotFound errors did not show up unless I had Flask's "debug" flag on -- there was no sign of any problems at all in the Apache log, despite LogLevel being set to info.
Running the app "locally" (Flask listening on localhost:5000) was fine (one can test the pages via wget 127.0.0.1:5000). It turned out that a copy of the main web-app Python code had somehow landed in the directory above where it should have been. That copy was imported by WSGI first and, as a result, the relative path to the templates was wrong.