I'm trying to use APScheduler with a PostgreSQL database via an asyncpg connection. I thought it would work, because asyncpg supports SQLAlchemy (ref). But it isn't working, and to make it even worse, I don't understand the error message, so I don't even have a guess what to google for.
import asyncio
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore

def simple_job():
    print('This was an easy job!')

scheduler = AsyncIOScheduler()
jobstore = SQLAlchemyJobStore(url='postgresql+asyncpg://user:password@localhost:5432/public')
scheduler.add_jobstore(jobstore)

# schedule a simple job
scheduler.add_job(simple_job, 'cron', second='15', id='heartbeat',
                  coalesce=True, misfire_grace_time=5, replace_existing=True)
scheduler.start()
Versions:
python 3.7
APScheduler==3.7.0
asyncpg==0.22.0
SQLAlchemy==1.4.3
Error Message and traceback:
Traceback (most recent call last):
File "C:/Users/d/PycharmProjects/teamutils/utils/automation.py", line 320, in <module>
scheduler.start()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\apscheduler\schedulers\asyncio.py", line 45, in start
super(AsyncIOScheduler, self).start(paused)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\apscheduler\schedulers\base.py", line 163, in start
store.start(self, alias)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\apscheduler\jobstores\sqlalchemy.py", line 68, in start
self.jobs_t.create(self.engine, True)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\sql\schema.py", line 940, in create
bind._run_ddl_visitor(ddl.SchemaGenerator, self, checkfirst=checkfirst)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 2979, in _run_ddl_visitor
with self.begin() as conn:
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 2895, in begin
conn = self.connect(close_with_result=close_with_result)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 3067, in connect
return self._connection_cls(self, close_with_result=close_with_result)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 91, in __init__
else engine.raw_connection()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 3146, in raw_connection
return self._wrap_pool_connect(self.pool.connect, _connection)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 3113, in _wrap_pool_connect
return fn()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 301, in connect
return _ConnectionFairy._checkout(self)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 755, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 419, in checkout
rec = pool._do_get()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\impl.py", line 145, in _do_get
self._dec_overflow()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\langhelpers.py", line 72, in __exit__
with_traceback=exc_tb,
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\compat.py", line 198, in raise_
raise exception
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\impl.py", line 142, in _do_get
return self._create_connection()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 247, in _create_connection
return _ConnectionRecord(self)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 362, in __init__
self.__connect(first_connect_check=True)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 605, in __connect
pool.logger.debug("Error on connect(): %s", e)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\langhelpers.py", line 72, in __exit__
with_traceback=exc_tb,
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\compat.py", line 198, in raise_
raise exception
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 599, in __connect
connection = pool._invoke_creator(self)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\create.py", line 578, in connect
return dialect.connect(*cargs, **cparams)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\default.py", line 548, in connect
return self.dbapi.connect(*cargs, **cparams)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\dialects\postgresql\asyncpg.py", line 744, in connect
await_only(self.asyncpg.connect(*arg, **kw)),
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\_concurrency_py3k.py", line 48, in await_only
"greenlet_spawn has not been called; can't call await_() here. "
sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_() here. Was IO attempted in an unexpected place? (Background on this error at: http://sqlalche.me/e/14/xd2s)
sys:1: RuntimeWarning: coroutine 'connect' was never awaited
I looked up the provided link, but I'm not getting any smarter from it. So it would be nice if somebody could tell me what is going on, so I can search for a solution on my own (a solution would be okay too, of course).
Sorry for this "open" question, but my understanding is so limited that I don't know what to ask for.
I think the problem is in APScheduler.
What is happening is that scheduler.start() attempts to create the job table in your database. But your database URL is specified with +asyncpg, and there is no coroutine running (i.e. no async def context) when APScheduler tries to create the table. Hence the "coroutine 'connect' was never awaited" error.
After reading the APScheduler code, I think "integrates with asyncio" is a little misleading - specifically, the scheduler can run on asyncio, but the JobStore itself has no provision for an asyncio database connection.
You can get it working by removing +asyncpg from the connection URL used with APScheduler.
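For instance, a minimal sketch of the corrected job store, with the same credentials as above and SQLAlchemy's default synchronous PostgreSQL driver (psycopg2):
jobstore = SQLAlchemyJobStore(url='postgresql://user:password@localhost:5432/public')
scheduler.add_jobstore(jobstore)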
Note that it would still be possible to use async database calls within job functions through a separate asyncpg connection, as sketched below.
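A rough sketch of that idea (the query, schedule and job id are illustrative only, and it assumes AsyncIOScheduler's support for coroutine jobs):
import asyncpg

async def async_job():
    # open a dedicated asyncpg connection inside the coroutine job itself
    conn = await asyncpg.connect('postgresql://user:password@localhost:5432/public')
    try:
        value = await conn.fetchval('SELECT 1')
        print('Async job got:', value)
    finally:
        await conn.close()

scheduler.add_job(async_job, 'interval', seconds=30, id='async_heartbeat')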
I am getting an error while executing code in an RQ worker:
File "/usr/local/bin/rq", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/rq/cli/cli.py", line 75, in wrapper
return ctx.invoke(func, cli_config, *args[1:], **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/rq/cli/cli.py", line 236, in worker
worker.work(burst=burst, logging_level=logging_level)
File "/usr/local/lib/python3.7/site-packages/rq/worker.py", line 493, in work
self.execute_job(job, queue)
File "/usr/local/lib/python3.7/site-packages/rq/worker.py", line 662, in execute_job
self.fork_work_horse(job, queue)
File "/usr/local/lib/python3.7/site-packages/rq/worker.py", line 599, in fork_work_horse
self.main_work_horse(job, queue)
File "/usr/local/lib/python3.7/site-packages/rq/worker.py", line 677, in main_work_horse
success = self.perform_job(job, queue)
File "/usr/local/lib/python3.7/site-packages/rq/worker.py", line 781, in perform_job
self.prepare_job_execution(job)
File "/usr/local/lib/python3.7/site-packages/rq/worker.py", line 706, in prepare_job_execution
registry.add(job, timeout, pipeline=pipeline)
File "/usr/local/lib/python3.7/site-packages/rq/registry.py", line 47, in add
return pipeline.zadd(self.key, score, job.id)
File "/usr/local/lib/python3.7/site-packages/redis/client.py", line 2263, in zadd
for pair in iteritems(mapping):
File "/usr/local/lib/python3.7/site-packages/redis/_compat.py", line 123, in iteritems
return iter(x.items())
AttributeError: 'int' object has no attribute 'items'
14:04:29 Moving job to 'failed' queue (work-horse terminated unexpectedly; waitpid returned 256)
I am trying to run this code in decoupled containers on OpenShift. The same images work locally. I guess the only differences are that, when trying locally, I was running the RQ worker on the host system instead of in a container, and that my local machine has both Python 2 and 3 while OpenShift has Python 3 only.
Can anyone explain why it started behaving like this? I suspect it's due to the Python version, but I don't know how to run Redis/the RQ worker with Python 2.
That is a known issue, resulting from redis-py's new version 3 - see https://github.com/rq/rq/issues/1014
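The underlying cause is that redis-py 3 changed the zadd signature from positional score/member arguments to a mapping, so rq's pipeline.zadd(self.key, score, job.id) hands redis-py 3 an int where a dict is expected. A minimal illustration of the API change (key and member names are made up):
import redis

r = redis.Redis()
r.zadd('myset', {'job-id': 1234.0})  # redis-py 3.x: mapping of member -> score
# r.zadd('myset', 1234.0, 'job-id')  # redis-py 2.x style; fails on 3.x as above
The usual fix is to upgrade rq to a release that supports redis-py 3, or to pin redis below 3.0 until you can.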
I'm using Celery 3.1.19 with scheduled tasks. I start the process like so:
celery beat --app=my_app.celery.app:app --pidfile=/usr/local/celerybeat.pid --schedule=/usr/local/celerybeat-schedule -l INFO
I've had a couple of occurrences where the celery beat process terminates after an nslookup failure. This causes future scheduled tasks to not run. Eventually I notice and restart celery beat.
As far as I can tell, the hostname it's trying to look up is my RabbitMQ host. The nslookup failures are temporary: the hostname is correct, and evidently there was a blip in name resolution. Ideally that would not crash the process; instead it would retry until the hostname lookup succeeded.
Questions:
Is this expected behavior?
Is there a common way to ensure that the scheduler keeps running?
Do people have a system to watch the process and restart if it crashes? (One common approach is sketched after the stack trace below.)
Stack trace:
Message Error: Couldn't apply scheduled task ping: Error opening socket: hostname lookup failed
File "/usr/local/bin/celery", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/celery/__main__.py", line 30, in main
main()
File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 81, in main
cmd.execute_from_commandline(argv)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 770, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 311, in execute_from_commandline
return self.handle_argv(self.prog_name, argv[1:])
File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 762, in handle_argv
return self.execute(command, argv)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 694, in execute
).run_from_argv(self.prog_name, argv[1:], command=argv[0])
File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 315, in run_from_argv
sys.argv if argv is None else argv, command)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 377, in handle_argv
return self(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 274, in __call__
ret = self.run(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/beat.py", line 79, in run
return beat().run()
File "/usr/local/lib/python2.7/dist-packages/celery/apps/beat.py", line 83, in run
self.start_scheduler()
File "/usr/local/lib/python2.7/dist-packages/celery/apps/beat.py", line 112, in start_scheduler
beat.start()
File "/usr/local/lib/python2.7/dist-packages/celery/beat.py", line 473, in start
File "/usr/local/lib/python2.7/dist-packages/celery/beat.py", line 221, in tick
What I am doing: imagine that you have several workflows that need to execute. These workflows contain tasks, and the targets of the tasks are different hosts.
The fastest way to do this is to run every workflow inside its own process, and to run those processes in parallel.
I am trying to use Python multiprocessing to execute a remote function that I call with the help of Celery. My program runs fine if I just run one process, but when I run more than one process, I get the error below. As far as I can tell, the issue is concurrent publishing on the same channel; channels should not be shared between threads, processes, etc.
How can I make Celery resolve this? Is it a parameter that I should pass to the 'celeryd' command, or do I need to handle it in my Python program?
Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "testHello.py", line 16, in test_hello_aux
print output.get()
File "/usr/local/lib/python2.7/dist-packages/celery/result.py", line 169, in get
no_ack=no_ack,
File "/usr/local/lib/python2.7/dist-packages/celery/backends/amqp.py", line 155, in wait_for
on_interval=on_interval)
File "/usr/local/lib/python2.7/dist-packages/celery/backends/amqp.py", line 229, in consume
no_ack=no_ack, accept=self.accept) as consumer:
File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 359, in __init__
self.revive(self.channel)
File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 371, in revive
self.declare()
File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 381, in declare
queue.declare()
File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 505, in declare
self.queue_declare(nowait, passive=False)
File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 531, in queue_declare
nowait=nowait)
File "/usr/local/lib/python2.7/dist-packages/amqp/channel.py", line 1254, in queue_declare
self._send_method((50, 10), args)
File "/usr/local/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 56, in _send_method
self.channel_id, method_sig, args, content,
File "/usr/local/lib/python2.7/dist-packages/amqp/method_framing.py", line 221, in write_method
write_frame(1, channel, payload)
File "/usr/local/lib/python2.7/dist-packages/amqp/transport.py", line 177, in write_frame
frame_type, channel, size, payload, 0xce,
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
error: [Errno 32] Broken pipe
Process Process-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "testHello.py", line 16, in test_hello_aux
print output.get()
File "/usr/local/lib/python2.7/dist-packages/celery/result.py", line 169, in get
no_ack=no_ack,
File "/usr/local/lib/python2.7/dist-packages/celery/backends/amqp.py", line 155, in wait_for
on_interval=on_interval)
File "/usr/local/lib/python2.7/dist-packages/celery/backends/amqp.py", line 229, in consume
no_ack=no_ack, accept=self.accept) as consumer:
File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 359, in __init__
Process Process-3:
self.revive(self.channel)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 371, in revive
self.declare()
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 381, in declare
queue.declare()
File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 504, in declare
self.run()
self.exchange.declare(nowait)
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 166, in declare
self._target(*self._args, **self._kwargs)
nowait=nowait, passive=passive,
File "testHello.py", line 16, in test_hello_aux
File "/usr/local/lib/python2.7/dist-packages/amqp/channel.py", line 613, in exchange_declare
print output.get()
File "/usr/local/lib/python2.7/dist-packages/celery/result.py", line 169, in get
no_ack=no_ack,
File "/usr/local/lib/python2.7/dist-packages/celery/backends/amqp.py", line 155, in wait_for
on_interval=on_interval)
File "/usr/local/lib/python2.7/dist-packages/celery/backends/amqp.py", line 229, in consume
no_ack=no_ack, accept=self.accept) as consumer:
File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 359, in __init__
self._send_method((40, 10), args)
File "/usr/local/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 56, in _send_method
self.channel_id, method_sig, args, content,
File "/usr/local/lib/python2.7/dist-packages/amqp/method_framing.py", line 221, in write_method
self.revive(self.channel)
File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 371, in revive
self.declare()
File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 381, in declare
write_frame(1, channel, payload)
queue.declare()
File "/usr/local/lib/python2.7/dist-packages/amqp/transport.py", line 177, in write_frame
File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 504, in declare
frame_type, channel, size, payload, 0xce,
File "/usr/lib/python2.7/socket.py", line 224, in meth
self.exchange.declare(nowait)
File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 166, in declare
nowait=nowait, passive=passive,
File "/usr/local/lib/python2.7/dist-packages/amqp/channel.py", line 620, in exchange_declare
return getattr(self._sock,name)(*args)
error: [Errno 32] Broken pipe
(40, 11), # Channel.exchange_declare_ok
File "/usr/local/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 67, in wait
self.channel_id, allowed_methods)
File "/usr/local/lib/python2.7/dist-packages/amqp/connection.py", line 237, in _wait_method
self.method_reader.read_method()
File "/usr/local/lib/python2.7/dist-packages/amqp/method_framing.py", line 189, in read_method
raise m
error: [Errno 104] Connection reset by peer
Process Process-4:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "testHello.py", line 16, in test_hello_aux
print output.get()
File "/usr/local/lib/python2.7/dist-packages/celery/result.py", line 169, in get
no_ack=no_ack,
File "/usr/local/lib/python2.7/dist-packages/celery/backends/amqp.py", line 155, in wait_for
on_interval=on_interval)
File "/usr/local/lib/python2.7/dist-packages/celery/backends/amqp.py", line 229, in consume
no_ack=no_ack, accept=self.accept) as consumer:
File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 359, in __init__
self.revive(self.channel)
File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 371, in revive
self.declare()
File "/usr/local/lib/python2.7/dist-packages/kombu/messaging.py", line 381, in declare
queue.declare()
File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 505, in declare
self.queue_declare(nowait, passive=False)
File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 531, in queue_declare
nowait=nowait)
File "/usr/local/lib/python2.7/dist-packages/amqp/channel.py", line 1258, in queue_declare
(50, 11), # Channel.queue_declare_ok
File "/usr/local/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 67, in wait
self.channel_id, allowed_methods)
File "/usr/local/lib/python2.7/dist-packages/amqp/connection.py", line 270, in _wait_method
self.wait()
File "/usr/local/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 69, in wait
return self.dispatch_method(method_sig, args, content)
File "/usr/local/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 87, in dispatch_method
return amqp_method(self, args)
File "/usr/local/lib/python2.7/dist-packages/amqp/connection.py", line 526, in _close
(class_id, method_id), ConnectionError)
UnexpectedFrame: Basic.publish: (505) UNEXPECTED_FRAME - expected content header for class 60, got non content header frame instead
celery --version 3.1.11 (Cipater)
amq --version 0.9.1
When using Celery you should not need to use the python multiprocessing module. Celery takes care of everything for you.
Define your task in a file called tasks.py:
from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def add(x, y):
    return x + y
Now assume the add function is actually whatever you would like to run in parallel. Let's also consider terms: parallel means at the same time, while async means not synchronously. I cannot guarantee your tasks will be run at the same time, though I can guarantee they will not be run synchronously. For that reason, let's stick with the term async.
Celery has Canvas, a set of primitives for async flow control. The two you would be interested in here are group and chord. group allows you to run a group of async tasks and block on the results of all of them (accomplishing what you were attempting with your join). chord provides the same functionality as group, but fires a callback when all of the tasks complete.
An example of the calling code:
WAIT_TIME = 10 # however long you are willing to wait for your tasks
from tasks import add
from celery import group
future = group(add.s(i**i, i**i) for i in xrange(10))()
results = future.get(timeout=WAIT_TIME)
Celery tasks are automatically run in their own process (the workers you spawn) and do not require you to create further processes yourself.
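Since chord was mentioned above but not shown, here is a sketch in the same setup; tsum is an assumed extra task in tasks.py that simply returns the sum of a list:
from celery import chord
from tasks import add, tsum  # tsum(numbers) is assumed to return sum(numbers)

# run the whole group, then call tsum once with the list of all results
result = chord(add.s(i, i) for i in xrange(10))(tsum.s())
print result.get(timeout=WAIT_TIME)  # 2 * (0 + 1 + ... + 9) == 90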
I ran into an annoying (though not critical) problem when I tried to combine the Tornado and pyzmq ioloops as described in the official pyzmq documentation.
I have a process running a Tornado (T) server which accepts REST API requests from clients (C) and proxies them through ZMQ transport to another process (Z) that does the real work.
C <-> T <-> Z
If C closes the connection before Z replies to T, T (Tornado) outputs a long series of exception traces (see at the bottom). Imagine the following example:
import tornado.ioloop
from tornado.web import Application, RequestHandler, asynchronous
from zmq.eventloop import ioloop
import time

def time_consuming_task():
    time.sleep(5)

class TestHandler(RequestHandler):
    def get(self, arg):
        print "Test arg", arg
        time_consuming_task()
        print "Ok, time to reply"
        self.write("Reply")

if __name__ == "__main__":
    app = tornado.web.Application(
        [
            (r"/test/([0-9]+)", TestHandler)
        ])
    ioloop.install()
    app.listen(8080)
    tornado.ioloop.IOLoop.instance().start()
This example doesn't actually talk to any ZMQ peer; it just attaches the pyzmq ioloop to Tornado's ioloop. Though, that's enough to illustrate the problem.
From console one, run the server:
% python example.py
From console two, run the client and interrupt it before the server replies (i.e. within the 5 seconds):
% curl -is http://localhost:8080/test/1
^C
The output of the server is:
Test arg 1
Ok, time to reply
WARNING:root:Read error on 24: [Errno 54] Connection reset by peer
ERROR:root:Uncaught exception GET /test/1 (::1)
HTTPRequest(protocol='http', host='localhost:8080', method='GET', uri='/test/1', version='HTTP/1.1', remote_ip='::1', body='', headers={'Host': 'localhost:8080', 'Accept': '*/*', 'User-Agent': 'curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r zlib/1.2.5'})
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/tornado/web.py", line 1023, in _execute
self.finish()
File "/Library/Python/2.7/site-packages/tornado/web.py", line 701, in finish
self.request.finish()
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 433, in finish
self.connection.finish()
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 187, in finish
self._finish_request()
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 223, in _finish_request
self.stream.read_until(b("\r\n\r\n"), self._header_callback)
File "/Library/Python/2.7/site-packages/tornado/iostream.py", line 153, in read_until
self._try_inline_read()
File "/Library/Python/2.7/site-packages/tornado/iostream.py", line 386, in _try_inline_read
if self._read_to_buffer() == 0:
File "/Library/Python/2.7/site-packages/tornado/iostream.py", line 421, in _read_to_buffer
chunk = self._read_from_socket()
File "/Library/Python/2.7/site-packages/tornado/iostream.py", line 402, in _read_from_socket
chunk = self.socket.recv(self.read_chunk_size)
error: [Errno 54] Connection reset by peer
ERROR:root:Cannot send error response after headers written
ERROR:root:Uncaught exception, closing connection.
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/tornado/iostream.py", line 304, in wrapper
callback(*args)
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 262, in _on_headers
self.request_callback(self._request)
File "/Library/Python/2.7/site-packages/tornado/web.py", line 1412, in __call__
handler._execute(transforms, *args, **kwargs)
File "/Library/Python/2.7/site-packages/tornado/web.py", line 1025, in _execute
self._handle_request_exception(e)
File "/Library/Python/2.7/site-packages/tornado/web.py", line 1065, in _handle_request_exception
self.send_error(500, exc_info=sys.exc_info())
File "/Library/Python/2.7/site-packages/tornado/web.py", line 720, in send_error
self.finish()
File "/Library/Python/2.7/site-packages/tornado/web.py", line 700, in finish
self.flush(include_footers=True)
File "/Library/Python/2.7/site-packages/tornado/web.py", line 660, in flush
self.request.write(headers + chunk, callback=callback)
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 429, in write
self.connection.write(chunk, callback=callback)
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 177, in write
assert self._request, "Request closed"
AssertionError: Request closed
ERROR:root:Exception in callback
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pyzmq-2.2.0-py2.7-macosx-10.7-intel.egg/zmq/eventloop/ioloop.py", line 434, in _run_callback
callback()
File "/Library/Python/2.7/site-packages/tornado/iostream.py", line 304, in wrapper
callback(*args)
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 262, in _on_headers
self.request_callback(self._request)
File "/Library/Python/2.7/site-packages/tornado/web.py", line 1412, in __call__
handler._execute(transforms, *args, **kwargs)
File "/Library/Python/2.7/site-packages/tornado/web.py", line 1025, in _execute
self._handle_request_exception(e)
File "/Library/Python/2.7/site-packages/tornado/web.py", line 1065, in _handle_request_exception
self.send_error(500, exc_info=sys.exc_info())
File "/Library/Python/2.7/site-packages/tornado/web.py", line 720, in send_error
self.finish()
File "/Library/Python/2.7/site-packages/tornado/web.py", line 700, in finish
self.flush(include_footers=True)
File "/Library/Python/2.7/site-packages/tornado/web.py", line 660, in flush
self.request.write(headers + chunk, callback=callback)
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 429, in write
self.connection.write(chunk, callback=callback)
File "/Library/Python/2.7/site-packages/tornado/httpserver.py", line 177, in write
assert self._request, "Request closed"
AssertionError: Request closed
NOTE: It seems to be a pyzmq-related problem, because it disappears after excluding the pyzmq ioloop.
The server doesn't die and can still be used by other clients, so the problem is not critical. Though, it's very annoying to find these huge, confusing traces in log files.
So, are there any well-known methods to solve this problem?
Thanks.
It's not a ZMQ problem. A request can be closed for reasons other than a timeout. The only real problem here is that Tornado raises AssertionError, which is very generic, instead of a more specific exception.
If you are sure that you don't want these exceptions in your log files, do something like this:
try:
    time_consuming_task()
except AssertionError as e:
    if e.message == 'Request closed':
        logging.info('Bad, annoying client, came to us again!')
    else:
        # re-raise anything we did not expect, keeping the original traceback
        raise
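One caveat with the snippet above: in the traces shown, the AssertionError is raised from self.finish()/flush() after the handler body has already returned, so wrapping time_consuming_task() alone may not catch it; depending on your Tornado version, you may need to place the try/except around the code that actually writes the response instead.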