I installed RabbitMQ, Celery, Flask and Python, but when I try to run a celery worker to test it, it does not work. This is the error that pops up in the cmd:
[2019-01-18 09:56:37,443: WARNING/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
Traceback (most recent call last):
File "c:\users\ansonkho\anaconda3\lib\site-packages\celery\worker\consumer\consumer.py", line 317, in start
blueprint.start(self)
File "c:\users\ansonkho\anaconda3\lib\site-packages\celery\bootsteps.py", line 119, in start
step.start(parent)
File "c:\users\ansonkho\anaconda3\lib\site-packages\celery\worker\consumer\mingle.py", line 40, in start
self.sync(c)
File "c:\users\ansonkho\anaconda3\lib\site-packages\celery\worker\consumer\mingle.py", line 44, in sync
replies = self.send_hello(c)
File "c:\users\ansonkho\anaconda3\lib\site-packages\celery\worker\consumer\mingle.py", line 57, in send_hello
replies = inspect.hello(c.hostname, our_revoked._data) or {}
Below is my code:
from celery import Celery
app = Celery('test_celery', broker='amqp://myuser:mypassword@localhost/myvhost', backend='rpc://')
[2019-01-18 09:56:37,443: WARNING/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
As the error says, the connection to the broker was lost. You need to start RabbitMQ before the worker can connect to it; the consumer reports "Connection to broker lost" because the broker is not running.
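Once RabbitMQ is up, you can verify that the broker is reachable before starting the worker. A minimal sketch (reusing the broker URL from your question):

from celery import Celery

app = Celery('test_celery',
             broker='amqp://myuser:mypassword@localhost/myvhost',
             backend='rpc://')

if __name__ == '__main__':
    # Fails quickly with a clear error if RabbitMQ is not running or unreachable.
    with app.connection() as conn:
        conn.ensure_connection(max_retries=3)
        print('Broker connection OK')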
I am running gunicorn in async mode behind an nginx reverse proxy. Both run in separate Docker containers on the same VM in the host network, and everything works fine as long as I don't configure max_requests to restart workers after a certain number of requests. With autorestart configured, the restart of workers is not handled correctly: it throws errors and causes failed responses. I need these settings to mitigate memory leaks and prevent gunicorn and other application components from crashing.
Gunicorn log:
2020-08-07 06:55:23 [1438] [INFO] Autorestarting worker after current request.
2020-08-07 06:55:23 [1438] [ERROR] Socket error processing request.
Traceback (most recent call last):
File "/opt/mapproxy/lib/python3.5/site-packages/gunicorn/workers/base_async.py", line 65, in handle
util.reraise(*sys.exc_info())
File "/opt/mapproxy/lib/python3.5/site-packages/gunicorn/util.py", line 625, in reraise
raise value
File "/opt/mapproxy/lib/python3.5/site-packages/gunicorn/workers/base_async.py", line 38, in handle
listener_name = listener.getsockname()
OSError: [Errno 9] Bad file descriptor
Gunicorn is running with the following configuration:
bind = '0.0.0.0:8081'
worker_class = 'eventlet'
workers = 8
timeout = 60
no_sendfile = True
max_requests = 1000
max_requests_jitter = 500
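For clarity, this is what I expect the last two settings to do (comments reflect my understanding of gunicorn's worker recycling):

# Recycle a worker after it has handled roughly this many requests,
# to keep memory leaks bounded.
max_requests = 1000

# Add a random per-worker offset (between 0 and 500) to max_requests,
# so the 8 workers don't all restart at the same moment.
max_requests_jitter = 500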
I use websocket_server to provide a one-way (server to client) WebSocket connection.
I have several threads on the server which query an API at given intervals (while True: ... time.sleep(60)) and then call server.send_message() to update the client. All of this works fine.
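Each thread boils down to something like this (simplified sketch; the API query is a placeholder):

import json
import time

def calendar(server, client):
    # Runs in its own thread and pushes fresh data to the client every minute.
    while True:
        events = fetch_calendar_events()  # placeholder for the real API query
        server.send_message(client, json.dumps({"calendar": events}))
        time.sleep(60)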
From time to time, for no apparent reason, I get a crash:
Exception in thread Thread-3:
Traceback (most recent call last):
File "C:\Python35\lib\threading.py", line 914, in _bootstrap_inner
self.run()
File "C:\Python35\lib\threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "D:/Dropbox/dev/domotique/webserver.py", line 266, in calendar
server.send_message(client, json.dumps({"calendar": events}))
File "C:\Python35\lib\site-packages\websocket_server\websocket_server.py", line 71, in send_message
self._unicast_(client, msg)
File "C:\Python35\lib\site-packages\websocket_server\websocket_server.py", line 119, in _unicast_
to_client['handler'].send_message(msg)
File "C:\Python35\lib\site-packages\websocket_server\websocket_server.py", line 194, in send_message
self.send_text(message)
File "C:\Python35\lib\site-packages\websocket_server\websocket_server.py", line 240, in send_text
self.request.send(header + payload)
BrokenPipeError: [WinError 10058] A request to send or receive data was disallowed because the socket had already been shut down in that direction with a previous shutdown call
There is no shutdown call in my code. What else can shut a websocket down?
The WebSocket client can ask the server to close the connection (or directly close it). From the library's code:
if not b1:
    logger.info("Client closed connection.")
    self.keep_alive = 0
    return
if opcode == CLOSE_CONN:
    logger.info("Client asked to close connection.")
    self.keep_alive = 0
    return
You could check self.keep_alive to know if the socket is still open.
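For example, you could skip clients whose handler is no longer alive and guard the send itself, since the client can disappear between the check and the send. A sketch, assuming you keep the client dict the library hands you:

def safe_send(server, client, payload):
    handler = client.get('handler')
    # keep_alive is set to 0 by the library once the client closes or asks to close.
    if handler is None or not handler.keep_alive:
        return False
    try:
        server.send_message(client, payload)
        return True
    except (BrokenPipeError, OSError):
        # The socket was shut down between the check and the send.
        return False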
I have a celery worker, running on an Ubuntu 14.04 server, that reads from and writes to an MSSQL Server database using pyodbc and the FreeTDS driver. When the SQL Server goes down, the function fails as expected and the celery worker starts trying to clean up and get ready for the next task. At that point, the worker calls django's connection.close() method. This method appears to send a command to roll back any incomplete transactions. Since the server is down, this throws an exception that is not caught by the celery worker. The worker then hangs and neither releases the task nor moves on to the next task.
I tried overriding the on_failure and after_return methods for the function and calling connection.close() there (as specified in other answers), but that didn't work. I suspect it is either because when I call connection.close() it hits the same issue and just bubbles the exception up, or because celery's cleanup code runs before those two methods get called.
Any ideas on how to either catch this exception before it gets to celery, or avoid it altogether?
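For reference, the override looks roughly like this (simplified; 'xxx' is the database alias and the class name is illustrative):

# corespring/tasks.py (simplified)
from celery import Task
from django.db import connections

class MSSQLCleanupTask(Task):
    abstract = True

    def after_return(self, status, retval, task_id, args, kwargs, einfo):
        connections['xxx'].close()

    def on_failure(self, exc, task_id, args, kwargs, einfo):
        connections['xxx'].close()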
Below is the stack trace of the exception:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 283, in trace_task
state, retval, uuid, args, kwargs, None,
File "/var/www/cortex/corespring/tasks.py", line 13, in after_return
connections['xxx'].close()
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/init.py", line 317, in close
self.connection.close()
File "/usr/local/lib/python2.7/dist-packages/pyodbc.py", line 2642, in close
self.rollback()
File "/usr/local/lib/python2.7/dist-packages/pyodbc.py", line 2565, in rollback
check_success(self, ret)
File "/usr/local/lib/python2.7/dist-packages/pyodbc.py", line 987, in check_success
ctrl_err(SQL_HANDLE_DBC, ODBC_obj.dbc_h, ret, ODBC_obj.ansi)
File "/usr/local/lib/python2.7/dist-packages/pyodbc.py", line 965, in ctrl_err
raise DatabaseError(state,err_text)
DatabaseError: (u'08S01', u'[08S01] [FreeTDS][SQL Server]Write to the server failed')
I'm trying to make a worker run only one task at a time, then shut down. I've got the shutdown part working correctly (some background here: "celery trying shutdown worker by raising SystemExit in task_postrun signal but always hangs and the main process never exits"), but when it shuts down, I'm getting an error:
[2013-02-13 12:19:05,689: CRITICAL/MainProcess] Couldn't ack 1, reason:AttributeError("'NoneType' object has no attribute 'method_writer'",)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/kombu/transport/base.py", line 104, in ack_log_error
self.ack()
File "/usr/local/lib/python2.7/site-packages/kombu/transport/base.py", line 99, in ack
self.channel.basic_ack(self.delivery_tag)
File "/usr/local/lib/python2.7/site-packages/amqplib/client_0_8/channel.py", line 1742, in basic_ack
self._send_method((60, 80), args)
File "/usr/local/lib/python2.7/site-packages/amqplib/client_0_8/abstract_channel.py", line 75, in _send_method
self.connection.method_writer.write_method(self.channel_id,
AttributeError: 'NoneType' object has no attribute 'method_writer'
Why is this happening? Not only does it not ack, but it also purges all of the other tasks that are left in the queue (big problem).
How do I fix this?
UPDATE
Below is the stack trace with everything updated (pip install -U kombu amqp amqplib celery):
[2013-02-13 11:58:05,357: CRITICAL/MainProcess] Internal error: AttributeError("'NoneType' object has no attribute 'method_writer'",)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/worker/__init__.py", line 372, in process_task
req.execute_using_pool(self.pool)
File "/usr/local/lib/python2.7/dist-packages/celery/worker/job.py", line 219, in execute_using_pool
timeout=task.time_limit)
File "/usr/local/lib/python2.7/dist-packages/celery/concurrency/base.py", line 137, in apply_async
**options)
File "/usr/local/lib/python2.7/dist-packages/celery/concurrency/base.py", line 27, in apply_target
callback(target(*args, **kwargs))
File "/usr/local/lib/python2.7/dist-packages/celery/worker/job.py", line 333, in on_success
self.acknowledge()
File "/usr/local/lib/python2.7/dist-packages/celery/worker/job.py", line 439, in acknowledge
self.on_ack(logger, self.connection_errors)
File "/usr/local/lib/python2.7/dist-packages/kombu/transport/base.py", line 98, in ack_log_error
self.ack()
File "/usr/local/lib/python2.7/dist-packages/kombu/transport/base.py", line 93, in ack
self.channel.basic_ack(self.delivery_tag)
File "/usr/local/lib/python2.7/dist-packages/amqp/channel.py", line 1562, in basic_ack
self._send_method((60, 80), args)
File "/usr/local/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 57, in _send_method
self.connection.method_writer.write_method(
AttributeError: 'NoneType' object has no attribute 'method_writer'
Exiting in task_postrun is not recommended as task_postrun is executed outside of the "task body" error handling.
Exactly what happens when a task calls sys.exit is not well defined, and it actually depends on the pool being used. With multiprocessing the child process will simply be replaced by a new one; in other pools the worker will shut down, but this is something that is likely to change so that it's consistent with the multiprocessing behavior. Calling exit outside of the task body is regarded as an internal error (crash).
The "task body" is whatever executes at task.__call__()
I think maybe a better solution for this would be to use a custom execution strategy:
from celery.worker import strategy
from functools import wraps

@staticmethod
def shutdown_after_strategy(task, app, consumer):
    default_handler = strategy.default(task, app, consumer)

    def _shutdown_to_exit_after(fun):
        @wraps(fun)
        def _inner(*args, **kwargs):
            try:
                return fun(*args, **kwargs)
            finally:
                # Raised after the task body has run, so the worker exits afterwards.
                raise SystemExit()
        return _inner
    return _shutdown_to_exit_after(default_handler)

@celery.task(Strategy=shutdown_after_strategy)
def shutdown_after():
    print('will shutdown after this')
This isn't exactly beautiful, but the execution strategy is there to optimize task execution and not to be easily extendable (the worker "precompiles" the execution path for each task type by caching Task.Strategy).
In Celery 3.1 you can extend the worker and consumer using "bootsteps", so likely there will be a pretty solution then.
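For reference, a bootstep is registered on the app like this in 3.1 (a bare skeleton only, not the one-task-then-exit logic itself):

from celery import Celery, bootsteps

app = Celery('myapp', broker='amqp://')

class ExampleWorkerStep(bootsteps.StartStopStep):
    # Started and stopped together with the worker; custom shutdown
    # logic would hook in here.
    def start(self, worker):
        print('worker starting')

    def stop(self, worker):
        print('worker stopping')

app.steps['worker'].add(ExampleWorkerStep)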
I've been trying to use RabbitMQ from within my gevent program via the Pika library (monkey-patched by gevent), but gevent keeps randomly throwing a timeout error.
What should I do? Is there another library I could use?
WARNING:root:Document not found, retrying primary.
Traceback (most recent call last):
...
File "/usr/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 32, in __init__
BaseConnection.__init__(self, parameters, None, reconnection_strategy)
File "/usr/lib/python2.7/dist-packages/pika/adapters/base_connection.py", line 50, in __init__
reconnection_strategy)
File "/usr/lib/python2.7/dist-packages/pika/connection.py", line 170, in __init__
self._connect()
File "/usr/lib/python2.7/dist-packages/pika/connection.py", line 228, in _connect
self.parameters.port or spec.PORT)
File "/usr/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 44, in _adapter_connect
self._handle_read()
File "/usr/lib/python2.7/dist-packages/pika/adapters/base_connection.py", line 151, in _handle_read
data = self.socket.recv(self._suggested_buffer_size)
File "/usr/lib/python2.7/dist-packages/gevent/socket.py", line 427, in recv
wait_read(sock.fileno(), timeout=self.timeout, event=self._read_event)
File "/usr/lib/python2.7/dist-packages/gevent/socket.py", line 169, in wait_read
switch_result = get_hub().switch()
File "/usr/lib/python2.7/dist-packages/gevent/hub.py", line 164, in switch
return greenlet.switch(self)
timeout: timed out
Pika is not ideally suited for use with gevent because Pika implements its own asynchronous connection handling to RabbitMQ based on non-blocking sockets, which just does not fit well with gevent's own implementation.
You may want to consider using py-amqplib or kombu instead.
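For example, a minimal round trip with kombu looks like this (broker URL and queue name are placeholders):

from kombu import Connection

with Connection('amqp://guest:guest@localhost//') as conn:
    queue = conn.SimpleQueue('example-queue')
    queue.put({'hello': 'world'})

    message = queue.get(block=True, timeout=5)
    print(message.payload)  # {'hello': 'world'}
    message.ack()
    queue.close()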
I'm also having timeout problems with using Pika in a Django/Gunicorn application. I played with raising connection_attempts or increasing the timeout but RabbitMQ always closed the connection with a handshake error. The latter seems to indicate that Pika never transmitted any data on the socket.
The cause for the timeouts could be this libevent bug - at least in my environment the script attached to the bug is able to reproduce the issue.
You could try upgrading to gevent>=1.0 (at the time of writing not released yet):
wget http://gevent.googlecode.com/files/gevent-1.0b4.tar.gz
pip install gevent-1.0b4.tar.gz