I am trying to deploy an aiohttp web app, but can't figure out how to get the app to serve over a unix socket, which I think I need in order to get nginx and gunicorn to talk to each other.
A simple example app from the aiohttp documentation, saved as app.py:
import asyncio
from aiohttp import web

@asyncio.coroutine
def hello(request):
    return web.Response(body=b'Hello')

app = web.Application()
app.router.add_route('GET', '/', hello)

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    handler = app.make_handler()
    f = loop.create_server(handler, '0.0.0.0', 8080)
    srv = loop.run_until_complete(f)
    try:
        loop.run_forever()
    except KeyboardInterrupt:
        pass
    finally:
        loop.run_until_complete(handler.finish_connections(1.0))
        srv.close()
        loop.run_until_complete(srv.wait_closed())
        loop.run_until_complete(app.finish())
        loop.close()
Running this with gunicorn directly works:
gunicorn -k aiohttp.worker.GunicornWebWorker -b 0.0.0.0:8000 app:app
But when I try to bind it instead to a unix socket, I get the following errors.
gunicorn -k aiohttp.worker.GunicornWebWorker -b unix:my_sock.sock app:app
Traceback:
[2015-08-09 12:26:05 -0700] [26898] [INFO] Booting worker with pid: 26898
[2015-08-09 12:26:06 -0700] [26898] [ERROR] Exception in worker process:
Traceback (most recent call last):
File "/home/claire/absapp/venv/lib/python3.4/site-packages/gunicorn/arbiter.py", line 507, in spawn_worker
worker.init_process()
File "/home/claire/absapp/venv/lib/python3.4/site-packages/aiohttp/worker.py", line 28, in init_process
super().init_process()
File "/home/claire/absapp/venv/lib/python3.4/site-packages/gunicorn/workers/base.py", line 124, in init_process
self.run()
File "/home/claire/absapp/venv/lib/python3.4/site-packages/aiohttp/worker.py", line 34, in run
self.loop.run_until_complete(self._runner)
File "/usr/lib/python3.4/asyncio/base_events.py", line 268, in run_until_complete
return future.result()
File "/usr/lib/python3.4/asyncio/futures.py", line 277, in result
raise self._exception
File "/usr/lib/python3.4/asyncio/tasks.py", line 236, in _step
result = next(coro)
File "/home/claire/absapp/venv/lib/python3.4/site-packages/aiohttp/worker.py", line 81, in _run
handler = self.make_handler(self.wsgi, *sock.cfg_addr)
TypeError: make_handler() takes 4 positional arguments but 11 were given
[2015-08-09 12:26:06 -0700] [26898] [INFO] Worker exiting (pid: 26898)
I came across an aiohttp issue (https://github.com/KeepSafe/aiohttp/issues/136)
that uses the socket module to create a socket and pass it to loop.create_server(), but I just couldn't get anything to work. (I also don't know whether the app in that code is the same web.Application object.)
Does anybody know how I can make this work? Thanks!
The problem is that GunicornWebWorker doesn't support unix domain sockets. It comes from GunicornWebWorker.make_handler(self, app, host, port), which expects host and port parameters. Obviously you don't have those when using a unix socket; you have a path to the socket instead.
Let's take a look at the beginning of GunicornWebWorker._run():
def _run(self):
    for sock in self.sockets:
        handler = self.make_handler(self.wsgi, *sock.cfg_addr)
    ...
In the case of -b localhost:8000, sock.cfg_addr is ['localhost', 8000], but for -b unix:my_sock.sock it is just the string 'my_sock.sock'. That is where the error TypeError: make_handler() takes 4 positional arguments but 11 were given comes from: the * unpacks the string character by character instead of unpacking a two-element list.
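To see the unpacking problem in isolation, here is a small sketch (with hypothetical names, not the actual worker code): applying * to a string passes one argument per character, while applying it to a ['host', port] list passes exactly two.

```python
def make_handler(app, host, port):
    """Stand-in for the worker method, which expects exactly host and port."""
    return (app, host, port)

# TCP case: cfg_addr is a two-element list, so unpacking works.
make_handler("wsgi-app", *["localhost", 8000])

# Unix-socket case: cfg_addr is a plain string, so * unpacks it
# character by character and the call blows up.
try:
    make_handler("wsgi-app", *"my.sock")
except TypeError as e:
    print(e)  # make_handler() takes 3 positional arguments but 8 were given
```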
The quick way to fix it is to subclass GunicornWebWorker and redefine make_handler() to ignore host and port; they aren't used anyway. You can do it like this:
import logging

from aiohttp import worker

class FixedGunicornWebWorker(worker.GunicornWebWorker):
    def make_handler(self, app, *args):
        if hasattr(self.cfg, 'debug'):
            is_debug = self.cfg.debug
        else:
            is_debug = self.log.loglevel == logging.DEBUG
        return app.make_handler(
            logger=self.log,
            debug=is_debug,
            timeout=self.cfg.timeout,
            keep_alive=self.cfg.keepalive,
            access_log=self.log.access_log,
            access_log_format=self.cfg.access_log_format)
NOTE: You'll need the package containing the fixed worker on your PYTHONPATH; otherwise Gunicorn won't be able to locate it. For example, if you put the fixed worker in a fixed_worker.py file in the same directory you run gunicorn from, you can use it like this:
$ PYTHONPATH="`pwd`:$PYTHONPATH" gunicorn -k fixed_worker.FixedGunicornWebWorker -b unix:my_sock.sock app:app
UPD: I have also opened an issue in the aiohttp repository.
I just installed celery flower.
It is working great for showing me real-time tasks, which queue they were processed on, cpu usage, and the processing time.
I also want to have access to the broker page so I can monitor queue lengths.
The issue I am having is with SSL.
The broker page returns a 500. Looking at the logs, I am seeing the following stack trace.
2020-12-24T21:19:21.828079+00:00 app[web.1]: [W 201224 21:19:21 connection:255] Secure redis scheme specified (rediss) with no ssl options, defaulting to insecure SSL behaviour.
2020-12-24T21:19:21.854471+00:00 app[web.1]: [W 201224 21:19:21 connection:255] Secure redis scheme specified (rediss) with no ssl options, defaulting to insecure SSL behaviour.
2020-12-24T21:19:21.878474+00:00 app[web.1]: [E 201224 21:19:21 web:1793] Uncaught exception GET /broker (...)
2020-12-24T21:19:21.878479+00:00 app[web.1]: HTTPServerRequest(protocol='http', host='...herokuapp.com', method='GET', uri='/broker', version='HTTP/1.1', remote_ip='...')
2020-12-24T21:19:21.878480+00:00 app[web.1]: Traceback (most recent call last):
2020-12-24T21:19:21.878481+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/tornado/web.py", line 1704, in _execute
2020-12-24T21:19:21.878481+00:00 app[web.1]: result = await result
2020-12-24T21:19:21.878482+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/tornado/gen.py", line 234, in wrapper
2020-12-24T21:19:21.878482+00:00 app[web.1]: yielded = ctx_run(next, result)
2020-12-24T21:19:21.878482+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/tornado/gen.py", line 162, in _fake_ctx_run
2020-12-24T21:19:21.878483+00:00 app[web.1]: return f(*args, **kw)
2020-12-24T21:19:21.878483+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flower/views/broker.py", line 31, in get
2020-12-24T21:19:21.878485+00:00 app[web.1]: http_api=http_api, broker_options=broker_options, broker_use_ssl=broker_use_ssl)
2020-12-24T21:19:21.878485+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flower/utils/broker.py", line 237, in __new__
2020-12-24T21:19:21.878486+00:00 app[web.1]: return RedisSsl(broker_url, *args, **kwargs)
2020-12-24T21:19:21.878486+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flower/utils/broker.py", line 220, in __init__
2020-12-24T21:19:21.878486+00:00 app[web.1]: super(RedisSsl, self).__init__(broker_url, *args, **kwargs)
2020-12-24T21:19:21.878487+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flower/utils/broker.py", line 134, in __init__
2020-12-24T21:19:21.878487+00:00 app[web.1]: self.redis = self._get_redis_client()
2020-12-24T21:19:21.878488+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flower/utils/broker.py", line 155, in _get_redis_client
2020-12-24T21:19:21.878488+00:00 app[web.1]: return redis.Redis(**self._get_redis_client_args())
2020-12-24T21:19:21.878489+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/flower/utils/broker.py", line 225, in _get_redis_client_args
2020-12-24T21:19:21.878489+00:00 app[web.1]: client_args.update(self.broker_use_ssl)
2020-12-24T21:19:21.878554+00:00 app[web.1]: TypeError: 'NoneType' object is not iterable
2020-12-24T21:19:21.881667+00:00 app[web.1]: [W 201224 21:19:21 connection:255] Secure redis scheme specified (rediss) with no ssl options, defaulting to insecure SSL behaviour.
It looks like I need to pass the cert to broker_use_ssl somehow, but I am not sure where or how.
This is all deployed on Heroku. The rediss URL is the one from my production application; Flower runs as a separate app.
On celery I have {"ssl_cert_reqs": ssl.CERT_NONE}.
Flower deployed on Heroku looks like this.
requirements.txt is as follows:
celery==4.4.4
future==0.18.2
flower==0.9.7
redis==3.5.3
Then a Procfile where I try to pass in ssl.CERT_NONE (which evaluates to 0). It doesn't work:
web: flower --port=$PORT --broker=$BROKER_URL --basic_auth=$FLOWER_BASIC_AUTH --broker_use_ssl={"ssl_cert_reqs": 0}
Can anyone shed some light on how to setup these configuration options?
Thank you
It seems a fix related to broker_use_ssl was merged to master two days ago; I'm not sure whether it only improves things or fixes this bug. There's a related issue here. Note that the latest release (published four days ago) doesn't contain this fix yet.
Anyway, here are some things you can try:
Something may be off in the way you're passing the value of --broker_use_ssl - maybe you need to escape the quotes, e.g. --broker_use_ssl={\"ssl_cert_reqs\": 0} or --broker_use_ssl="{\"ssl_cert_reqs\": 0}".
Try passing your settings via a configuration file instead of the command line, e.g. flower --conf=celeryconfig.py - that way you don't have to deal with escaping and can set the value exactly as you did ({"ssl_cert_reqs": 0}).
Use the master branch to see whether the latest commit solves your problem.
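The configuration-file approach from the second suggestion could look like the following (a hypothetical celeryconfig.py sketch mirroring the value from the question; note that ssl.CERT_NONE evaluates to 0):

```python
# celeryconfig.py -- hypothetical sketch of the config-file approach
import ssl

# Same setting as --broker_use_ssl={"ssl_cert_reqs": 0} on the command line
broker_use_ssl = {"ssl_cert_reqs": ssl.CERT_NONE}
```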
Good luck!
I am running gunicorn in async mode behind an nginx reverse proxy. Both run in separate Docker containers on the same VM, in the host network, and everything works fine as long as I don't configure max_requests to autorestart workers after a certain number of requests. With autorestart configured, the worker reboot is not handled correctly: errors are thrown and responses fail. I need this setting to work around memory leaks and to prevent gunicorn and other application components from crashing.
Gunicorn log:
2020-08-07 06:55:23 [1438] [INFO] Autorestarting worker after current request.
2020-08-07 06:55:23 [1438] [ERROR] Socket error processing request.
Traceback (most recent call last):
File "/opt/mapproxy/lib/python3.5/site-packages/gunicorn/workers/base_async.py", line 65, in handle
util.reraise(*sys.exc_info())
File "/opt/mapproxy/lib/python3.5/site-packages/gunicorn/util.py", line 625, in reraise
raise value
File "/opt/mapproxy/lib/python3.5/site-packages/gunicorn/workers/base_async.py", line 38, in handle
listener_name = listener.getsockname()
OSError: [Errno 9] Bad file descriptor
Gunicorn is running with the following configuration:
bind = '0.0.0.0:8081'
worker_class = 'eventlet'
workers = 8
timeout = 60
no_sendfile = True
max_requests = 1000
max_requests_jitter = 500
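As a side note on how the two restart settings interact (per gunicorn's documented behavior, each worker adds a random jitter so that workers don't all restart at the same moment), a rough sketch of the per-worker restart threshold:

```python
# Illustrative sketch of gunicorn's documented max_requests /
# max_requests_jitter semantics -- not gunicorn's actual code.
import random

max_requests = 1000
max_requests_jitter = 500

# Each worker effectively restarts after max_requests + randint(0, jitter) requests.
threshold = max_requests + random.randint(0, max_requests_jitter)
print(threshold)  # somewhere between 1000 and 1500
```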
I'm following along with a video series for loose reference as I build the back-end of my first website using Flask. One of the videos has me writing this function verbatim, to be used as a decorator:
from functools import wraps

from flask import flash, redirect, session, url_for

# add this as a decorator to pages that require a login
def login_required(f):
    @wraps(f)
    def wrap(*args, **kwargs):
        if 'logged_in' in session:
            return f(*args, **kwargs)
        else:
            flash('You need to login.')
            return redirect(url_for('login'))
    return wrap
which is used to redirect to a login page if a page requires it. This works fine when testing the app using python3 -m flask run, but I get an internal server error with the following stack trace when I run gunicorn main:main:
[2020-07-13 18:33:52 -0400] [6272] [INFO] Listening at: http://127.0.0.1:8000 (6272)
[2020-07-13 18:33:52 -0400] [6272] [INFO] Using worker: sync
[2020-07-13 18:33:52 -0400] [6274] [INFO] Booting worker with pid: 6274
[2020-07-13 18:33:59 -0400] [6274] [ERROR] Error handling request /
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/gunicorn/workers/sync.py", line 134, in handle
self.handle_request(listener, req, client, addr)
File "/usr/local/lib/python3.8/dist-packages/gunicorn/workers/sync.py", line 175, in handle_request
respiter = self.wsgi(environ, resp.start_response)
File "/home/john/Developer/runjs/main.py", line 15, in wrap
if 'logged_in' in session:
File "/home/john/.local/lib/python3.8/site-packages/werkzeug/local.py", line 379, in <lambda>
__contains__ = lambda x, i: i in x._get_current_object()
File "/home/john/.local/lib/python3.8/site-packages/werkzeug/local.py", line 306, in _get_current_object
return self.__local()
File "/home/john/.local/lib/python3.8/site-packages/flask/globals.py", line 38, in _lookup_req_object
raise RuntimeError(_request_ctx_err_msg)
RuntimeError: Working outside of request context.
I'm not sure what exactly is wrong with my function that causes it to break when run through Gunicorn. I have tried researching this, but none of the suggested fixes have seemed pertinent to my specific scenario and I am confused as to why it works locally.
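One detail worth noting about the @wraps line in the decorator above: it copies the wrapped function's name and docstring onto the wrapper, which Flask relies on when registering view endpoints. A pure-functools sketch (function names are illustrative):

```python
from functools import wraps

def login_required(f):
    @wraps(f)  # without this, the decorated function's __name__ would be 'wrap'
    def wrap(*args, **kwargs):
        return f(*args, **kwargs)
    return wrap

@login_required
def dashboard():
    """Render the dashboard."""
    return "ok"

print(dashboard.__name__)  # dashboard
```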
I have a basic application written in CherryPy. It looks somewhat like this:
import cherrypy

class API():
    @cherrypy.expose
    def index(self):
        return "<h3>It's working!</h3>"

if __name__ == '__main__':
    cherrypy.config.update({
        'server.socket_host': '127.0.0.1',
        'server.socket_port': 8082,
    })
    cherrypy.quickstart(API())
I would like to deploy this application with gunicorn, possibly with multiple workers. Gunicorn starts when I run this in the terminal:
gunicorn -b localhost:8082 -w 4 test_app:API
But every time I try to access the default method, it gives an internal server error. However, running it standalone with CherryPy works.
Here is the error:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/sync.py", line 130, in handle
self.handle_request(listener, req, client, addr)
File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/sync.py", line 171, in handle_request
respiter = self.wsgi(environ, resp.start_response)
TypeError: this constructor takes no arguments
I have a fairly large CherryPy application that I would like to deploy using gunicorn. Is there any way to mix CherryPy and gunicorn?
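For what it's worth, the error above can be reproduced in isolation (a minimal sketch, not the actual gunicorn code): test_app:API resolves to the class itself, so gunicorn invokes it like a WSGI callable, API(environ, start_response), which is really an attempted instantiation with two constructor arguments the class doesn't accept. Under Python 3 the message wording differs slightly from the Python 2 traceback shown, but the failure mode is the same.

```python
class API:
    # no __init__, so the class accepts no constructor arguments
    pass

def start_response(status, headers):
    pass

try:
    # What gunicorn effectively does when pointed at test_app:API
    API({"REQUEST_METHOD": "GET"}, start_response)
except TypeError as e:
    print(e)  # the two WSGI arguments hit the constructor
```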
I'm pretty inexperienced with gunicorn. I have it installed within a virtualenv and am trying to serve a Pyramid app with the following:
env/bin/gunicorn --pid /home/staging/gunicorn.pid --bind 0.0.0.0:8000 pyzendoc:main
However, every time a request is sent, I get the following trace from gunicorn:
2013-10-30 14:16:20 [1284] [ERROR] Error handling request
Traceback (most recent call last):
File "/home/staging/api/env/local/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 126, in handle_request
respiter = self.wsgi(environ, resp.start_response)
TypeError: main() takes exactly 1 argument (2 given)
I'm guessing that main in the gunicorn invocation refers to the main function in the Pyramid app's __init__.py, but that function takes (global_config, **settings) as arguments, so maybe gunicorn is somehow looking at the wrong callable. Has anyone seen anything similar before?
Thanks
C
The invocation pyzendoc:main expects to find a callable that accepts the (environ, start_response) signature (i.e. a WSGI app), which you don't have until main(global_conf, **settings) returns one. A better option is to use gunicorn_paster, as shown here.
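If you'd rather stick with plain gunicorn, another common pattern is to call the factory at import time and point gunicorn at the resulting WSGI callable. A sketch with illustrative names (the stub below stands in for what a real Pyramid factory would return via config.make_wsgi_app()):

```python
# wsgi.py -- hypothetical module; 'main' stands in for the Pyramid app factory
def main(global_config, **settings):
    # A real Pyramid factory would return config.make_wsgi_app();
    # here a stub WSGI app keeps the sketch self-contained.
    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello"]
    return app

# 'gunicorn wsgi:application' now receives a real WSGI callable
application = main({})
```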