There are a number of unanswered questions about this topic.
The challenge is utilising Celery tasks that can access the database via SQLAlchemy. Without a proper integration with Flask, we would get the typical out-of-context error message.
Hence this is my attempt to integrate Flask and Celery, which seems to be working fine. As the next step I would like to invoke that task when '/' is hit. However, I get a connection-refused error (see the trace below).
wsgi_fb.py
from celery import Celery
from flask import Flask
from flask_restful import Api

def make_celery(the_app):
    the_celery = Celery(the_app.import_name,
                        backend=the_app.config['CELERY_RESULT_BACKEND'],
                        broker=the_app.config['BROKER_URL_CELERY'])
    the_celery.config_from_object(config_celery)
    the_celery.conf.update(the_app.config)
    task_base = the_celery.Task

    class ContextTask(task_base):
        abstract = True

        def __call__(self, *args, **kwargs):
            with the_app.app_context():
                return task_base.__call__(self, *args, **kwargs)

    the_celery.Task = ContextTask
    return the_celery

app = Flask(__name__)
app.config.from_object(config)
celery_app = make_celery(app)
celery_app.config_from_object(config_celery)
celery_app.set_current()
api = Api(app)
db.init_app(app)
api.add_resource(Index, '/')
facebook_bot.py
from flask_restful import Resource

import tasks

class Index(Resource):
    def get(self):
        tasks.send_tag_batches.delay()
        return 'OK'
tasks.py
from celery import current_app

@current_app.task
def send_tag_batches():
    ...
    return 'ok'
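Before digging further into the Celery wiring, it is worth confirming that a broker is actually listening on the host/port in BROKER_URL_CELERY, since Errno 61 is a plain TCP connection refusal, not a Flask or Celery problem. A minimal stdlib sketch (the host and port are assumptions; substitute the values from your broker URL):

```python
import socket

def broker_reachable(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds, False on refusal/timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError is Errno 61 on macOS
        return False

# e.g. broker_reachable('localhost', 5672) for RabbitMQ's default port
```

If this returns False, the broker process simply isn't running (or is bound to another interface), and no amount of Flask integration will fix the trace below.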
StackTrace:
File "/Users/houmie/projects/chasebot/src/facebook/resource/facebook_bot.py", line 22, in get
tasks.send_tag_batches.delay()
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/celery/app/task.py", line 453, in delay
return self.apply_async(args, kwargs)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/celery/app/task.py", line 565, in apply_async
**dict(self._get_exec_options(), **options)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/celery/app/base.py", line 354, in send_task
reply_to=reply_to or self.oid, **options
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/celery/app/amqp.py", line 310, in publish_task
**kwargs
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/kombu/messaging.py", line 172, in publish
routing_key, mandatory, immediate, exchange, declare)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/kombu/connection.py", line 457, in _ensured
interval_max)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/kombu/connection.py", line 369, in ensure_connection
interval_start, interval_step, interval_max, callback)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/kombu/utils/__init__.py", line 246, in retry_over_time
return fun(*args, **kwargs)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/kombu/connection.py", line 237, in connect
return self.connection
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/kombu/connection.py", line 742, in connection
self._connection = self._establish_connection()
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/kombu/connection.py", line 697, in _establish_connection
conn = self.transport.establish_connection()
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/kombu/transport/pyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/amqp/connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/amqp/connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/amqp/transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "/Users/houmie/.pyenv/versions/3.5.1/envs/venv35/lib/python3.5/site-packages/amqp/transport.py", line 95, in __init__
raise socket.error(last_err)
OSError: [Errno 61] Connection refused
Related
I have a Django project and have set up a Celery worker. I have a test Task in which I attempt to create and save an object to the database:
from time import sleep

from celery import shared_task

def get_genome():
    return Genome(genome_name='test-genome2', genome_sequence='AAAAA', organism='phage')

@shared_task
def test():
    sleep(10)
    g = get_genome()
    g.save()
I call the task in a view using test.delay(). The sleep command and the get_genome command execute within the Celery worker; however, calling .save() returns the following error:
[2020-11-09 10:53:09,131: ERROR/ForkPoolWorker-8] Task genome.tasks.test[cdd748a9-f889-4dae-bec6-3f869f96daf9] raised unexpected: TypeError('connect() argument 3 must be str, not None')
Traceback (most recent call last):
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/celery/app/trace.py", line 409, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/celery/app/trace.py", line 701, in __protected_call__
return self.run(*args, **kwargs)
File "/home/daemon/MAS/genome/tasks.py", line 14, in test
g.save()
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/django/db/models/base.py", line 753, in save
self.save_base(using=using, force_insert=force_insert,
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/django/db/models/base.py", line 790, in save_base
updated = self._save_table(
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/django/db/models/base.py", line 895, in _save_table
results = self._do_insert(cls._base_manager, using, fields, returning_fields, raw)
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/django/db/models/base.py", line 933, in _do_insert
return manager._insert(
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/django/db/models/query.py", line 1254, in _insert
return query.get_compiler(using=using).execute_sql(returning_fields)
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1395, in execute_sql
with self.connection.cursor() as cursor:
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/django/db/backends/base/base.py", line 259, in cursor
return self._cursor()
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/django/db/backends/base/base.py", line 235, in _cursor
self.ensure_connection()
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/django/db/backends/base/base.py", line 200, in connect
self.connection = self.get_new_connection(conn_params)
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/django/db/backends/mysql/base.py", line 234, in get_new_connection
return Database.connect(**conn_params)
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/MySQLdb/__init__.py", line 130, in Connect
return Connection(*args, **kwargs)
File "/home/daemon/miniconda/envs/mas/lib/python3.8/site-packages/MySQLdb/connections.py", line 185, in __init__
super().__init__(*args, **kwargs2)
TypeError: connect() argument 3 must be str, not None
How do I configure Celery so it can properly use the Django ORM and save objects?
I am using Django version 3.1.2 and Celery version 5.0.2.
@iklinac's comment led me to figure out what the issue was. I was pulling the password for my database from an environment variable:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'OPTIONS': {
            'host': 'mas-sql-server',
            'database': 'mas',
            'user': 'root',
            'password': os.getenv('MYSQL_ROOT_PASSWORD')
        }
    }
}
This environment variable was not available in the environment in which I was running the Celery worker.
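One way to catch this class of problem earlier is to fail at settings-import time when the variable is missing, rather than deep inside MySQLdb.connect() with a confusing TypeError. A small sketch (the helper name is mine, not from Django):

```python
import os

def require_env(name):
    """Return the environment variable's value, or raise immediately if unset."""
    value = os.getenv(name)
    if value is None:
        raise RuntimeError(
            "%s is not set; export it in the environment the Celery worker runs in" % name
        )
    return value

# In settings.py: 'password': require_env('MYSQL_ROOT_PASSWORD')
```

With this, starting a worker in an environment that lacks the variable fails with a readable message instead of at the first database write.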
I am currently trying to use SQLAlchemy to create a table in a database in a Flask application, but I am getting an error that I am unable to resolve.
Can you please help me resolve it?
I am totally new to this; I just enrolled in the Udacity web development nanodegree and this is what I have to do there.
My code -
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://postgres@localhost:5432/DB#'
db = SQLAlchemy(app)

class Person(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(10), nullable=False)

db.create_all()

@app.route('/')
def index():
    return "Hello the free world"

if __name__ == '__main__':
    app.run()
Error -
/usr/bin/python3 /home/kinjalk/Demo_scripts/app.py
/home/kinjalk/.local/lib/python3.6/site-packages/flask_sqlalchemy/__init__.py:835: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and '
Traceback (most recent call last):
File "/home/kinjalk/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2285, in _wrap_pool_connect
return fn()
File "/home/kinjalk/.local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 363, in connect
return _ConnectionFairy._checkout(self)
File "/home/kinjalk/.local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 773, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/home/kinjalk/.local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 492, in checkout
rec = pool._do_get()
File "/home/kinjalk/.local/lib/python3.6/site-packages/sqlalchemy/pool/impl.py", line 139, in _do_get
self._dec_overflow()
File "/home/kinjalk/.local/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 69, in __exit__
exc_value, with_traceback=exc_tb,
File "/home/kinjalk/.local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
raise exception
File "/home/kinjalk/.local/lib/python3.6/site-packages/sqlalchemy/pool/impl.py", line 136, in _do_get
return self._create_connection()
File "/home/kinjalk/.local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 308, in _create_connection
return _ConnectionRecord(self)
File "/home/kinjalk/.local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 437, in __init__
self.__connect(first_connect_check=True)
File "/home/kinjalk/.local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 657, in __connect
pool.logger.debug("Error on connect(): %s", e)
File "/home/kinjalk/.local/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 69, in __exit__
exc_value, with_traceback=exc_tb,
File "/home/kinjalk/.local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
raise exception
File "/home/kinjalk/.local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 652, in __connect
connection = pool._invoke_creator(self)
File "/home/kinjalk/.local/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
return dialect.connect(*cargs, **cparams)
File "/home/kinjalk/.local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 490, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/home/kinjalk/.local/lib/python3.6/site-packages/psycopg2/__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: fe_sendauth: no password supplied
And I am getting a very long error like this, which I am not able to post in full here as StackOverflow is not allowing it.
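The final line, fe_sendauth: no password supplied, means the connection URI carries no password for the postgres role. A sketch of building a URI that does include one (the user, password, and database name below are placeholders, not values from the question):

```python
from urllib.parse import quote_plus

def pg_uri(user, password, host, port, dbname):
    """Build a postgresql:// URI, percent-encoding the password so special characters survive."""
    return "postgresql://%s:%s@%s:%d/%s" % (user, quote_plus(password), host, port, dbname)

# e.g. app.config['SQLALCHEMY_DATABASE_URI'] = pg_uri('postgres', 'p@ss', 'localhost', 5432, 'mydb')
```

Percent-encoding matters because characters like @ or / in a raw password would be parsed as part of the URI structure.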
I am creating an API using Python and the Flask library. I have some problems with SSL. I am running the API on an Ubuntu 16.04.6 LTS server.
from flask import Flask
from flask import request
from OpenSSL import SSL
from werkzeug import serving

app = Flask(__name__)

context = SSL.Context(SSL.PROTOCOL_TLSv1_2)
context.load_cert_chain('PATH_TO_PUBLIC_KEY', 'PATH_TO_PRIVATE_KEY')

@app.route('/example', methods=['POST'])
def sayHallo():
    return "Hallo!"

if __name__ == '__main__':
    serving.run_simple("0.0.0.0", 5000, app, ssl_context=context)
The API and its connection work over HTTP, but adding SSL to the code gives me the error:
Traceback (most recent call last):
File "/usr/local/bin/flask", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/flask/cli.py", line 966, in main
cli.main(prog_name="python -m flask" if as_module else None)
File "/usr/local/lib/python2.7/dist-packages/flask/cli.py", line 586, in main
return super(FlaskGroup, self).main(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/click/decorators.py", line 64, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/flask/cli.py", line 848, in run_command
app = DispatchingApp(info.load_app, use_eager_loading=eager_loading)
File "/usr/local/lib/python2.7/dist-packages/flask/cli.py", line 305, in __init__
self._load_unlocked()
File "/usr/local/lib/python2.7/dist-packages/flask/cli.py", line 330, in _load_unlocked
self._app = rv = self.loader()
File "/usr/local/lib/python2.7/dist-packages/flask/cli.py", line 388, in load_app
app = locate_app(self, import_name, name)
File "/usr/local/lib/python2.7/dist-packages/flask/cli.py", line 240, in locate_app
__import__(module_name)
__import__(module_name)
File "/var/www/api/app.py", line 7, in <module>
context = SSL.Context(SSL.PROTOCOL_TLSv1_2)
AttributeError: 'module' object has no attribute 'PROTOCOL_TLSv1_2'
According to [PyOpenSSL]: class OpenSSL.SSL.Context(method):
Parameters: method - One of SSLv2_METHOD, SSLv3_METHOD, SSLv23_METHOD, or TLSv1_METHOD.
So, you should use:
context = SSL.Context(SSL.TLSv1_2_METHOD)
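If you'd rather avoid pyOpenSSL entirely, Werkzeug's run_simple also accepts an ssl.SSLContext from the standard library, where the TLS-1.2 constant the question reached for does exist (the cert/key paths below are placeholders):

```python
import ssl

# Stdlib equivalent of an explicit TLS 1.2 context; on newer Pythons,
# ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) with minimum_version set is preferred.
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
# context.load_cert_chain('PATH_TO_PUBLIC_KEY', 'PATH_TO_PRIVATE_KEY')
```

The confusion arose because PROTOCOL_TLSv1_2 lives in the stdlib ssl module, while pyOpenSSL names its method constants TLSv1_2_METHOD.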
I'm using Celery with Redis to run some background tasks, but each time a task is called, it creates a new connection to Redis. I'm on Heroku and my Redis to Go plan allows for 10 connections. I'm quickly hitting that limit and getting a "max number of clients reached" error.
How can I ensure that Celery queues the tasks on a single connection rather than opening a new one each time?
EDIT - including the full traceback
File "/app/.heroku/venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 111, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/api/object_wrapper.py", line 166, in __call__
self._nr_instance, args, kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/hooks/framework_django.py", line 447, in wrapper
return wrapped(*args, **kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 77, in wrapped_view
return view_func(*args, **kwargs)
File "/app/feedback/views.py", line 264, in zencoder_webhook_handler
tasks.process_zencoder_notification.delay(webhook)
File "/app/.heroku/venv/lib/python2.7/site-packages/celery/app/task.py", line 343, in delay
return self.apply_async(args, kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/celery/app/task.py", line 458, in apply_async
with app.producer_or_acquire(producer) as P:
File "/usr/local/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/app/.heroku/venv/lib/python2.7/site-packages/celery/app/base.py", line 247, in producer_or_acquire
with self.amqp.producer_pool.acquire(block=True) as producer:
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/connection.py", line 705, in acquire
R = self.prepare(R)
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/pools.py", line 54, in prepare
p = p()
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/pools.py", line 45, in <lambda>
return lambda: self.create_producer()
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/pools.py", line 42, in create_producer
return self.Producer(self._acquire_connection())
File "/app/.heroku/venv/lib/python2.7/site-packages/celery/app/amqp.py", line 160, in __init__
super(TaskProducer, self).__init__(channel, exchange, *args, **kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/messaging.py", line 83, in __init__
self.revive(self.channel)
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/messaging.py", line 174, in revive
channel = self.channel = maybe_channel(channel)
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/connection.py", line 879, in maybe_channel
return channel.default_channel
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/connection.py", line 617, in default_channel
self.connection
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/connection.py", line 610, in connection
self._connection = self._establish_connection()
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/connection.py", line 569, in _establish_connection
conn = self.transport.establish_connection()
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/transport/virtual/__init__.py", line 722, in establish_connection
self._avail_channels.append(self.create_channel(self))
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/transport/virtual/__init__.py", line 705, in create_channel
channel = self.Channel(connection)
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/transport/redis.py", line 271, in __init__
self.client.info()
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/api/object_wrapper.py", line 166, in __call__
self._nr_instance, args, kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/api/function_trace.py", line 81, in literal_wrapper
return wrapped(*args, **kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/client.py", line 344, in info
return self.execute_command('INFO')
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/transport/redis.py", line 536, in execute_command
conn.send_command(*args)
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/connection.py", line 273, in send_command
self.send_packed_command(self.pack_command(*args))
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/connection.py", line 256, in send_packed_command
self.connect()
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/api/object_wrapper.py", line 166, in __call__
self._nr_instance, args, kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/api/function_trace.py", line 81, in literal_wrapper
return wrapped(*args, **kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/connection.py", line 207, in connect
self.on_connect()
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/connection.py", line 233, in on_connect
if self.read_response() != 'OK':
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/connection.py", line 283, in read_response
raise response
ResponseError: max number of clients reached
I ran into the same problem on Heroku with CloudAMQP. I do not know why, but I had no luck assigning low integers to the BROKER_POOL_LIMIT setting.
Ultimately, I found that setting BROKER_POOL_LIMIT=None or BROKER_POOL_LIMIT=0 mitigated my issue. According to the Celery docs, this disables the connection pool. So far this has not been a noticeable issue for me; however, I'm not sure if it might be for you.
Link to relevant info: http://celery.readthedocs.org/en/latest/configuration.html#broker-pool-limit
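In settings terms, the workaround above amounts to a one-line change (Celery 3.x-style uppercase setting names, as used elsewhere in this question):

```python
# Disable the broker connection pool; each publish opens and closes its own connection.
BROKER_POOL_LIMIT = None  # 0 has the same effect
```

The trade-off is per-task connection setup cost in exchange for never holding idle pooled connections against the plan's client limit.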
I wish I was using Redis, because there is a specific option to limit the number of connections: CELERY_REDIS_MAX_CONNECTIONS.
http://docs.celeryproject.org/en/3.0/configuration.html#celery-redis-max-connections (for 3.0)
http://docs.celeryproject.org/en/latest/configuration.html#celery-redis-max-connections (for 3.1)
http://docs.celeryproject.org/en/master/configuration.html#celery-redis-max-connections (for dev)
The MongoDB has a similar backend setting.
Given these backend settings, I have no idea what BROKER_POOL_LIMIT actually does. Hopefully CELERY_REDIS_MAX_CONNECTIONS solves your problem.
I'm one of those folks using CloudAMQP, and the AMQP backend does not have its own connection limit parameter.
Try these settings:
CELERY_IGNORE_RESULT = True
CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True
I had a similar issue involving the number of connections and Celery. It wasn't on Heroku, and it was Mongo rather than Redis, though.
I initiated the connection outside of the task function definition at the task module level. At least for Mongo this allowed the tasks to share the connection.
Hope that helps.
https://github.com/instituteofdesign/wander/blob/master/wander/tasks.py
mongoengine.connect('stored_messages')

@celery.task(default_retry_delay=61)
def pull(settings, google_settings, user, folder, messageid):
    '''
    Pulls a message from zimbra and stores it in Mongo
    '''
    try:
        imap = imap_connect(settings, user)
        imap.select(folder, True)
        .......
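The same pattern in a minimal, library-agnostic sketch; the object created here is a stand-in for whatever client (Mongo, Redis, ...) you are using:

```python
_connection = None

def get_connection():
    """Create the client once, on first use, then reuse it for every task in this worker process."""
    global _connection
    if _connection is None:
        _connection = object()  # stand-in for e.g. mongoengine.connect('stored_messages')
    return _connection
```

Because each call after the first returns the same object, every task executed by the worker process shares one connection instead of opening its own.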
After running through the basic example for flask-celery (which runs fine as far as I can tell), I'm trying to integrate it into my own project. Basically, I'm using this below:
from flask import Blueprint, jsonify, request, session
from flask.views import MethodView
from celery.decorators import task

blueprint = Blueprint('myapi', __name__)

class MyAPI(MethodView):
    def get(self, tag):
        return get_resource.apply_async(tag)

@task(name="get_task")
def get_resource(tag):
    pass
With the same setup as in the example, I'm getting this error:
Traceback (most recent call last):
File "/x/venv/lib/python2.7/site-packages/flask/app.py", line 1518, in __call__
return self.wsgi_app(environ, start_response)
File "/x/venv/lib/python2.7/site-packages/flask/app.py", line 1506, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/x/venv/lib/python2.7/site-packages/flask/app.py", line 1504, in wsgi_app
response = self.full_dispatch_request()
File "/x/venv/lib/python2.7/site-packages/flask/app.py", line 1264, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/x/venv/lib/python2.7/site-packages/flask/app.py", line 1262, in full_dispatch_request
rv = self.dispatch_request()
File "/x/venv/lib/python2.7/site-packages/flask/app.py", line 1248, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/x/venv/lib/python2.7/site-packages/flask/views.py", line 84, in view
return self.dispatch_request(*args, **kwargs)
File "/x/venv/lib/python2.7/site-packages/flask/views.py", line 151, in dispatch_request
return meth(*args, **kwargs)
File "/x/api/modules/document/document.py", line 14, in get
return get_resource.apply_async(tag)
File "/x/venv/lib/python2.7/site-packages/celery/app/task/__init__.py", line 449, in apply_async
publish = publisher or self.app.amqp.publisher_pool.acquire(block=True)
File "/x/venv/lib/python2.7/site-packages/kombu/connection.py", line 657, in acquire
R = self.prepare(R)
File "/x/venv/lib/python2.7/site-packages/kombu/pools.py", line 54, in prepare
p = p()
File "/x/venv/lib/python2.7/site-packages/kombu/pools.py", line 45, in <lambda>
return lambda: self.create_producer()
File "/x/venv/lib/python2.7/site-packages/celery/app/amqp.py", line 265, in create_producer
pub = self.app.amqp.TaskPublisher(conn, auto_declare=False)
File "/x/venv/lib/python2.7/site-packages/celery/app/amqp.py", line 328, in TaskPublisher
return TaskPublisher(*args, **self.app.merge(defaults, kwargs))
File "/x/venv/lib/python2.7/site-packages/celery/app/amqp.py", line 158, in __init__
super(TaskPublisher, self).__init__(*args, **kwargs)
File "/x/venv/lib/python2.7/site-packages/kombu/compat.py", line 61, in __init__
super(Publisher, self).__init__(connection, self.exchange, **kwargs)
File "/x/venv/lib/python2.7/site-packages/kombu/messaging.py", line 79, in __init__
self.revive(self.channel)
File "/x/venv/lib/python2.7/site-packages/kombu/messaging.py", line 168, in revive
channel = channel.default_channel
File "/x/venv/lib/python2.7/site-packages/kombu/connection.py", line 581, in default_channel
self.connection
File "/x/venv/lib/python2.7/site-packages/kombu/connection.py", line 574, in connection
self._connection = self._establish_connection()
File "/x/venv/lib/python2.7/site-packages/kombu/connection.py", line 533, in _establish_connection
conn = self.transport.establish_connection()
File "/x/venv/lib/python2.7/site-packages/kombu/transport/amqplib.py", line 279, in establish_connection
connect_timeout=conninfo.connect_timeout)
File "/x/venv/lib/python2.7/site-packages/kombu/transport/amqplib.py", line 89, in __init__
super(Connection, self).__init__(*args, **kwargs)
File "/x/venv/lib/python2.7/site-packages/amqplib/client_0_8/connection.py", line 129, in __init__
self.transport = create_transport(host, connect_timeout, ssl)
File "/x/venv/lib/python2.7/site-packages/amqplib/client_0_8/transport.py", line 281, in create_transport
return TCPTransport(host, connect_timeout)
File "/x/venv/lib/python2.7/site-packages/amqplib/client_0_8/transport.py", line 85, in __init__
raise socket.error, msg
error: [Errno 111] Connection refused
I'm using redis, and if I install rabbitmq I get another error, which I do not understand right now -- the broker should be redis, but it isn't finding it, or what? Can anyone give me more of a clue about what is going on here? Do I need to import something else? The point is, there is very little beyond the bare-bones example, and this makes no sense to me.
The most I've been able to determine is that in the API module there is no access to 'celery', so when it tries to put data there at the app level, it falls back to some defaults, which aren't installed because I'm pointing to redis. Just a guess. I haven't been able to import that information into the module; I've only determined that calling anything 'celery' (for example, outputting celery.conf) from the app causes an error -- although I could import celery.task.
This is the broker config the application is using, direct from the example:
BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = "redis"
CELERY_REDIS_HOST = "localhost"
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 0
EDIT:
If you'd like to see a demo: https://github.com/thrisp/flask-celery-example
As it turns out, having BROKER_TRANSPORT = 'redis' in your settings is important for the setup I put forth here (and in the git example). I'm not entirely sure why it isn't in the example bits, but without it Celery wants to dump everything onto a default amqp queue.
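In settings terms, the fix described above is just one extra line on top of the broker config quoted earlier (a sketch; adjust the host and db number to your deployment):

```python
BROKER_TRANSPORT = 'redis'  # without this, tasks were routed to a default amqp queue
BROKER_URL = 'redis://localhost:6379/0'
```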
EDIT2:
Also, and this is rather a big deal: the upcoming version of Celery greatly simplifies use with Flask, making all of this unnecessary.
You must configure redis to bind to localhost. In /etc/redis/redis.conf, uncomment the line with
bind 127.0.0.1