I'm using Celery with Redis to run some background tasks, but each time a task is called, it creates a new connection to Redis. I'm on Heroku and my Redis to Go plan allows for 10 connections. I'm quickly hitting that limit and getting a "max number of clients reached" error.
How can I ensure that Celery queues the tasks on a single connection rather than opening a new one each time?
EDIT - including the full traceback
File "/app/.heroku/venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 111, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/api/object_wrapper.py", line 166, in __call__
self._nr_instance, args, kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/hooks/framework_django.py", line 447, in wrapper
return wrapped(*args, **kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 77, in wrapped_view
return view_func(*args, **kwargs)
File "/app/feedback/views.py", line 264, in zencoder_webhook_handler
tasks.process_zencoder_notification.delay(webhook)
File "/app/.heroku/venv/lib/python2.7/site-packages/celery/app/task.py", line 343, in delay
return self.apply_async(args, kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/celery/app/task.py", line 458, in apply_async
with app.producer_or_acquire(producer) as P:
File "/usr/local/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/app/.heroku/venv/lib/python2.7/site-packages/celery/app/base.py", line 247, in producer_or_acquire
with self.amqp.producer_pool.acquire(block=True) as producer:
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/connection.py", line 705, in acquire
R = self.prepare(R)
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/pools.py", line 54, in prepare
p = p()
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/pools.py", line 45, in <lambda>
return lambda: self.create_producer()
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/pools.py", line 42, in create_producer
return self.Producer(self._acquire_connection())
File "/app/.heroku/venv/lib/python2.7/site-packages/celery/app/amqp.py", line 160, in __init__
super(TaskProducer, self).__init__(channel, exchange, *args, **kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/messaging.py", line 83, in __init__
self.revive(self.channel)
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/messaging.py", line 174, in revive
channel = self.channel = maybe_channel(channel)
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/connection.py", line 879, in maybe_channel
return channel.default_channel
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/connection.py", line 617, in default_channel
self.connection
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/connection.py", line 610, in connection
self._connection = self._establish_connection()
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/connection.py", line 569, in _establish_connection
conn = self.transport.establish_connection()
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/transport/virtual/__init__.py", line 722, in establish_connection
self._avail_channels.append(self.create_channel(self))
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/transport/virtual/__init__.py", line 705, in create_channel
channel = self.Channel(connection)
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/transport/redis.py", line 271, in __init__
self.client.info()
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/api/object_wrapper.py", line 166, in __call__
self._nr_instance, args, kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/api/function_trace.py", line 81, in literal_wrapper
return wrapped(*args, **kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/client.py", line 344, in info
return self.execute_command('INFO')
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/transport/redis.py", line 536, in execute_command
conn.send_command(*args)
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/connection.py", line 273, in send_command
self.send_packed_command(self.pack_command(*args))
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/connection.py", line 256, in send_packed_command
self.connect()
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/api/object_wrapper.py", line 166, in __call__
self._nr_instance, args, kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/api/function_trace.py", line 81, in literal_wrapper
return wrapped(*args, **kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/connection.py", line 207, in connect
self.on_connect()
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/connection.py", line 233, in on_connect
if self.read_response() != 'OK':
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/connection.py", line 283, in read_response
raise response
ResponseError: max number of clients reached
I ran into the same problem on Heroku with CloudAMQP. I do not know why, but I had no luck assigning low integers to the BROKER_POOL_LIMIT setting.
Ultimately, I found that setting BROKER_POOL_LIMIT=None or BROKER_POOL_LIMIT=0 mitigated my issue. According to the Celery docs, this disables the connection pool. So far this has not been a noticeable issue for me; however, I'm not sure whether it might be for you.
Link to relevant info: http://celery.readthedocs.org/en/latest/configuration.html#broker-pool-limit
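A minimal sketch of that workaround in a settings module (the setting name comes from the docs above; whether disabling the pool is acceptable depends on your throughput):
BROKER_POOL_LIMIT = None  # or 0; both disable the broker connection pool, so
                          # connections are opened per use and closed when released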
I wish I were using Redis, because there is a specific option to limit the number of connections: CELERY_REDIS_MAX_CONNECTIONS.
http://docs.celeryproject.org/en/3.0/configuration.html#celery-redis-max-connections (for 3.0)
http://docs.celeryproject.org/en/latest/configuration.html#celery-redis-max-connections (for 3.1)
http://docs.celeryproject.org/en/master/configuration.html#celery-redis-max-connections (for dev)
The MongoDB backend has a similar setting.
Given these backend settings, I have no idea what BROKER_POOL_LIMIT actually does. Hopefully CELERY_REDIS_MAX_CONNECTIONS solves your problem.
I'm one of those folks using CloudAMQP, and the AMQP backend does not have its own connection limit parameter.
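Since you are on Redis, a minimal sketch of that option (the setting name is from the docs linked above; the value of 8 is just an illustrative cap under your plan's limit):
CELERY_REDIS_MAX_CONNECTIONS = 8  # cap the Redis result-backend connection pool below the 10-connection plan limit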
Try these settings:
CELERY_IGNORE_RESULT = True  # do not store task results in the result backend
CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True  # but still store errors, so failures remain visible
I had a similar issue involving the number of connections and Celery. It wasn't on Heroku, and it was Mongo rather than Redis, though.
I initiated the connection outside of the task function definition at the task module level. At least for Mongo this allowed the tasks to share the connection.
Hope that helps.
https://github.com/instituteofdesign/wander/blob/master/wander/tasks.py
import mongoengine

# The connection is made once, at module import time, so all tasks in the
# worker process share it instead of opening a new one per task call.
mongoengine.connect('stored_messages')

@celery.task(default_retry_delay=61)
def pull(settings, google_settings, user, folder, messageid):
    '''
    Pulls a message from zimbra and stores it in Mongo
    '''
    try:
        imap = imap_connect(settings, user)
        imap.select(folder, True)
        .......
Related
We have a producer-consumer model on RabbitMQ, and for some time now I have been getting this error:
File "newrelic/api/background_task.py", line 117, in wrapper
return wrapped(*args, **kwargs)
File "crs_consumer.py", line 49, in process_message
message.ack()
File "kombu/message.py", line 123, in ack
self.channel.basic_ack(self.delivery_tag, multiple=multiple)
File "amqp/channel.py", line 1407, in basic_ack
return self.send_method(
File "amqp/abstract_channel.py", line 70, in send_method
conn.frame_writer(1, self.channel_id, sig, args, content)
File "amqp/method_framing.py", line 186, in write_frame
write(buffer_store.view[:offset])
File "amqp/transport.py", line 347, in write
self._write(s)
What could be the reason behind this error?
python==3.9, kombu==5.0.2
I'm trying to use APScheduler with a PostgreSQL DB via an asyncpg connection. I thought it would work, because asyncpg supports SQLAlchemy (ref). But it isn't working, and to make it even worse, I don't understand the error message, so I don't even have a guess what to google for.
import asyncio

from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore

def simple_job():
    print('This was an easy job!')

scheduler = AsyncIOScheduler()
jobstore = SQLAlchemyJobStore(url='postgresql+asyncpg://user:password@localhost:5432/public')
scheduler.add_jobstore(jobstore)

# schedule a simple job
scheduler.add_job(simple_job, 'cron', second='15', id='heartbeat',
                  coalesce=True, misfire_grace_time=5, replace_existing=True)

scheduler.start()
Versions:
python 3.7
APScheduler==3.7.0
asyncpg==0.22.0
SQLAlchemy==1.4.3
Error Message and traceback:
Traceback (most recent call last):
File "C:/Users/d/PycharmProjects/teamutils/utils/automation.py", line 320, in <module>
scheduler.start()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\apscheduler\schedulers\asyncio.py", line 45, in start
super(AsyncIOScheduler, self).start(paused)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\apscheduler\schedulers\base.py", line 163, in start
store.start(self, alias)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\apscheduler\jobstores\sqlalchemy.py", line 68, in start
self.jobs_t.create(self.engine, True)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\sql\schema.py", line 940, in create
bind._run_ddl_visitor(ddl.SchemaGenerator, self, checkfirst=checkfirst)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 2979, in _run_ddl_visitor
with self.begin() as conn:
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 2895, in begin
conn = self.connect(close_with_result=close_with_result)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 3067, in connect
return self._connection_cls(self, close_with_result=close_with_result)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 91, in __init__
else engine.raw_connection()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 3146, in raw_connection
return self._wrap_pool_connect(self.pool.connect, _connection)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 3113, in _wrap_pool_connect
return fn()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 301, in connect
return _ConnectionFairy._checkout(self)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 755, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 419, in checkout
rec = pool._do_get()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\impl.py", line 145, in _do_get
self._dec_overflow()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\langhelpers.py", line 72, in __exit__
with_traceback=exc_tb,
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\compat.py", line 198, in raise_
raise exception
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\impl.py", line 142, in _do_get
return self._create_connection()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 247, in _create_connection
return _ConnectionRecord(self)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 362, in __init__
self.__connect(first_connect_check=True)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 605, in __connect
pool.logger.debug("Error on connect(): %s", e)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\langhelpers.py", line 72, in __exit__
with_traceback=exc_tb,
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\compat.py", line 198, in raise_
raise exception
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 599, in __connect
connection = pool._invoke_creator(self)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\create.py", line 578, in connect
return dialect.connect(*cargs, **cparams)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\default.py", line 548, in connect
return self.dbapi.connect(*cargs, **cparams)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\dialects\postgresql\asyncpg.py", line 744, in connect
await_only(self.asyncpg.connect(*arg, **kw)),
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\_concurrency_py3k.py", line 48, in await_only
"greenlet_spawn has not been called; can't call await_() here. "
sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_() here. Was IO attempted in an unexpected place? (Background on this error at: http://sqlalche.me/e/14/xd2s)
sys:1: RuntimeWarning: coroutine 'connect' was never awaited
I looked up the provided link, but couldn't make much sense of it. So it would be nice if somebody could tell me what is going on, so I can search for a solution on my own (though a solution would be okay too, of course).
Sorry for this "open" question, but my understanding is so limited that I don't know what to ask for.
I think the problem is in APScheduler.
What is happening is that scheduler.start() attempts to create the job table in your database. But since your database URL specifies +asyncpg, and there is no async coroutine running (i.e. no async def) when APScheduler tries to create the table, you get the "coroutine 'connect' was never awaited" error.
After reading the APScheduler code, I think "integrates with asyncio" is a little misleading: the scheduler itself can run on asyncio, but the job store has no provision for an asyncio database connection.
You can get it working by removing +asyncpg from the connection URL used with APScheduler.
Note that it would still be possible to use async DB calls within job functions via a separate asyncpg connection.
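A minimal sketch of that fix, reusing the hypothetical credentials from the question (it assumes a synchronous driver such as psycopg2 is installed, since the plain postgresql:// URL falls back to it):

from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore

# the job store uses a regular blocking connection; only the URL changes
jobstore = SQLAlchemyJobStore(url='postgresql://user:password@localhost:5432/public')
scheduler = AsyncIOScheduler()  # the scheduler itself still runs on the asyncio event loop
scheduler.add_jobstore(jobstore)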
I am getting an error while executing code in an RQ worker:
File "/usr/local/bin/rq", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/rq/cli/cli.py", line 75, in wrapper
return ctx.invoke(func, cli_config, *args[1:], **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/rq/cli/cli.py", line 236, in worker
worker.work(burst=burst, logging_level=logging_level)
File "/usr/local/lib/python3.7/site-packages/rq/worker.py", line 493, in work
self.execute_job(job, queue)
File "/usr/local/lib/python3.7/site-packages/rq/worker.py", line 662, in execute_job
self.fork_work_horse(job, queue)
File "/usr/local/lib/python3.7/site-packages/rq/worker.py", line 599, in fork_work_horse
self.main_work_horse(job, queue)
File "/usr/local/lib/python3.7/site-packages/rq/worker.py", line 677, in main_work_horse
success = self.perform_job(job, queue)
File "/usr/local/lib/python3.7/site-packages/rq/worker.py", line 781, in perform_job
self.prepare_job_execution(job)
File "/usr/local/lib/python3.7/site-packages/rq/worker.py", line 706, in prepare_job_execution
registry.add(job, timeout, pipeline=pipeline)
File "/usr/local/lib/python3.7/site-packages/rq/registry.py", line 47, in add
return pipeline.zadd(self.key, score, job.id)
File "/usr/local/lib/python3.7/site-packages/redis/client.py", line 2263, in zadd
for pair in iteritems(mapping):
File "/usr/local/lib/python3.7/site-packages/redis/_compat.py", line 123, in iteritems
return iter(x.items())
AttributeError: 'int' object has no attribute 'items'
14:04:29 Moving job to 'failed' queue (work-horse terminated unexpectedly; waitpid returned 256)
I am trying to run this code in decoupled containers on OpenShift. The same images work locally. I guess the only difference is that, when trying locally, I was running the RQ worker on the system rather than in a container, and my local machine has both Python 2 and 3 while OpenShift has Python 3 only.
Can anyone explain why it started behaving like this? I suspect it's due to the Python version, but I don't know how to run Redis/the RQ worker with Python 2.
That is a known issue, resulting from redis-py's new version 3 - see https://github.com/rq/rq/issues/1014
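Until you can upgrade rq, one workaround (a sketch; adjust to however you manage dependencies) is to pin redis-py below 3.0, whose changed zadd() signature is what triggers the AttributeError above:

# requirements.txt
rq             # keep whatever rq version you are currently on
redis<3.0      # redis-py 3 changed zadd() to take a mapping, which older rq does not pass

Per the linked issue, upgrading rq to a release that supports redis-py 3 avoids the pin entirely.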
There are a number of unanswered questions about this topic.
The challenge is utilising Celery tasks that can access the database via SQLAlchemy. Without a proper integration with Flask, we get the typical out-of-context error message.
Hence this is my attempt to integrate Flask and Celery, which seems to be working fine. As the next step I would like to invoke that task when '/' is hit. However, I get a connection-refused message (see below for the trace).
wsgi_fb.py
from celery import Celery
from flask import Flask
from flask_restful import Api  # assuming Flask-RESTful for Api/Resource
# config, config_celery, db and Index come from the project's own modules (omitted here)

def make_celery(the_app):
    the_celery = Celery(the_app.import_name,
                        backend=the_app.config['CELERY_RESULT_BACKEND'],
                        broker=the_app.config['BROKER_URL_CELERY'])
    the_celery.config_from_object(config_celery)
    the_celery.conf.update(the_app.config)
    task_base = the_celery.Task

    class ContextTask(task_base):
        abstract = True

        def __call__(self, *args, **kwargs):
            # run every task body inside the Flask application context
            with the_app.app_context():
                return task_base.__call__(self, *args, **kwargs)

    the_celery.Task = ContextTask
    return the_celery

app = Flask(__name__)
app.config.from_object(config)
celery_app = make_celery(app)
celery_app.config_from_object(config_celery)
celery_app.set_current()
api = Api(app)
db.init_app(app)
api.add_resource(Index, '/')
facebook_bot.py
from flask_restful import Resource  # assuming Flask-RESTful, as in wsgi_fb.py

import tasks

class Index(Resource):
    def get(self):
        tasks.send_tag_batches.delay()
        return 'OK'
tasks.py
from celery import current_app

@current_app.task
def send_tag_batches():
    ...
    return 'ok'
StackTrace:
File "/Users/houmie/projects/chasebot/src/facebook/resource/facebook_bot.py", line 22, in get
tasks.send_tag_batches.delay()
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/celery/app/task.py", line 453, in delay
return self.apply_async(args, kwargs)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/celery/app/task.py", line 565, in apply_async
**dict(self._get_exec_options(), **options)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/celery/app/base.py", line 354, in send_task
reply_to=reply_to or self.oid, **options
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/celery/app/amqp.py", line 310, in publish_task
**kwargs
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/kombu/messaging.py", line 172, in publish
routing_key, mandatory, immediate, exchange, declare)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/kombu/connection.py", line 457, in _ensured
interval_max)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/kombu/connection.py", line 369, in ensure_connection
interval_start, interval_step, interval_max, callback)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/kombu/utils/__init__.py", line 246, in retry_over_time
return fun(*args, **kwargs)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/kombu/connection.py", line 237, in connect
return self.connection
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/kombu/connection.py", line 742, in connection
self._connection = self._establish_connection()
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/kombu/connection.py", line 697, in _establish_connection
conn = self.transport.establish_connection()
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/kombu/transport/pyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/amqp/connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/amqp/connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/amqp/transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/amqp/transport.py", line 95, in __init__
raise socket.error(last_err)
OSError: [Errno 61] Connection refused
After running through the basic example for flask-celery (it runs fine as far as I can tell), I'm trying to integrate this into my own project. Basically, I'm using the below:
from flask import Blueprint, jsonify, request, session
from flask.views import MethodView
from celery.decorators import task

blueprint = Blueprint('myapi', __name__)

class MyAPI(MethodView):
    def get(self, tag):
        return get_resource.apply_async(tag)

@task(name="get_task")
def get_resource(tag):
    pass
With the same setup as in the example, I'm getting this error:
Traceback (most recent call last):
File "/x/venv/lib/python2.7/site-packages/flask/app.py", line 1518, in __call__
return self.wsgi_app(environ, start_response)
File "/x/venv/lib/python2.7/site-packages/flask/app.py", line 1506, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/x/venv/lib/python2.7/site-packages/flask/app.py", line 1504, in wsgi_app
response = self.full_dispatch_request()
File "/x/venv/lib/python2.7/site-packages/flask/app.py", line 1264, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/x/venv/lib/python2.7/site-packages/flask/app.py", line 1262, in full_dispatch_request
rv = self.dispatch_request()
File "/x/venv/lib/python2.7/site-packages/flask/app.py", line 1248, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/x/venv/lib/python2.7/site-packages/flask/views.py", line 84, in view
return self.dispatch_request(*args, **kwargs)
File "/x/venv/lib/python2.7/site-packages/flask/views.py", line 151, in dispatch_request
return meth(*args, **kwargs)
File "/x/api/modules/document/document.py", line 14, in get
return get_resource.apply_async(tag)
File "/x/venv/lib/python2.7/site-packages/celery/app/task/__init__.py", line 449, in apply_async
publish = publisher or self.app.amqp.publisher_pool.acquire(block=True)
File "/x/venv/lib/python2.7/site-packages/kombu/connection.py", line 657, in acquire
R = self.prepare(R)
File "/x/venv/lib/python2.7/site-packages/kombu/pools.py", line 54, in prepare
p = p()
File "/x/venv/lib/python2.7/site-packages/kombu/pools.py", line 45, in <lambda>
return lambda: self.create_producer()
File "/x/venv/lib/python2.7/site-packages/celery/app/amqp.py", line 265, in create_producer
pub = self.app.amqp.TaskPublisher(conn, auto_declare=False)
File "/x/venv/lib/python2.7/site-packages/celery/app/amqp.py", line 328, in TaskPublisher
return TaskPublisher(*args, **self.app.merge(defaults, kwargs))
File "/x/venv/lib/python2.7/site-packages/celery/app/amqp.py", line 158, in __init__
super(TaskPublisher, self).__init__(*args, **kwargs)
File "/x/venv/lib/python2.7/site-packages/kombu/compat.py", line 61, in __init__
super(Publisher, self).__init__(connection, self.exchange, **kwargs)
File "/x/venv/lib/python2.7/site-packages/kombu/messaging.py", line 79, in __init__
self.revive(self.channel)
File "/x/venv/lib/python2.7/site-packages/kombu/messaging.py", line 168, in revive
channel = channel.default_channel
File "/x/venv/lib/python2.7/site-packages/kombu/connection.py", line 581, in default_channel
self.connection
File "/x/venv/lib/python2.7/site-packages/kombu/connection.py", line 574, in connection
self._connection = self._establish_connection()
File "/x/venv/lib/python2.7/site-packages/kombu/connection.py", line 533, in _establish_connection
conn = self.transport.establish_connection()
File "/x/venv/lib/python2.7/site-packages/kombu/transport/amqplib.py", line 279, in establish_connection
connect_timeout=conninfo.connect_timeout)
File "/x/venv/lib/python2.7/site-packages/kombu/transport/amqplib.py", line 89, in __init__
super(Connection, self).__init__(*args, **kwargs)
File "/x/venv/lib/python2.7/site-packages/amqplib/client_0_8/connection.py", line 129, in __init__
self.transport = create_transport(host, connect_timeout, ssl)
File "/x/venv/lib/python2.7/site-packages/amqplib/client_0_8/transport.py", line 281, in create_transport
return TCPTransport(host, connect_timeout)
File "/x/venv/lib/python2.7/site-packages/amqplib/client_0_8/transport.py", line 85, in __init__
raise socket.error, msg
error: [Errno 111] Connection refused
I'm using Redis, and if I install RabbitMQ I get another error, which I don't understand right now. The broker should be Redis, but is it not finding it, or what? Can anyone give me more of a clue what is going on here? Do I need to import something else, etc.? The point is, there is very little beyond the bare-bones example, and this makes no sense to me.
The most I've been able to determine is that in the API module there is no access to 'celery', and when it tries to put data there at the app level, the celery there falls back to some defaults, which aren't set up because I'm pointing to Redis. Just a guess. I haven't been able to import information into the module; I've only determined that calling anything on 'celery' (for example, outputting celery.conf) from the app causes an error, although I could import celery.task.
This is the broker config the application is using, direct from the example:
BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = "redis"
CELERY_REDIS_HOST = "localhost"
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 0
EDIT:
If you'd like to see a demo: https://github.com/thrisp/flask-celery-example
As it turns out, having BROKER_TRANSPORT = 'redis' in your settings is important for the setup I put forth here (and in the git example). I'm not entirely sure why it isn't in the example bits, but it is needed in mine: without it, Celery wants to dump everything onto a default AMQP queue.
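For reference, the complete broker configuration that worked for me would then look like this (a sketch; host, port and DB number are the defaults from the block above):

BROKER_TRANSPORT = 'redis'  # without this, messages fall through to the default AMQP transport
BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = "redis"
CELERY_REDIS_HOST = "localhost"
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 0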
EDIT2:
Also, and this is rather a big deal: using the upcoming version of Celery simplifies many issues when using it with Flask, making all of this unnecessary.
You must configure Redis to bind to localhost. In /etc/redis/redis.conf, uncomment the line with:
bind 127.0.0.1
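After restarting Redis, a quick connectivity check from Python (a sketch; it uses the same redis-py client that Celery's Redis transport relies on):

import redis

r = redis.StrictRedis(host='127.0.0.1', port=6379, db=0)
print(r.ping())  # True once Redis is listening on localhost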