Python Flask: RQ Worker raising KeyError because of environment variable

I'm trying to set up a Redis queue and a worker to process the queue with my Flask app. I'm implementing this to handle a task that sends emails. I'm a little confused because the stack trace seems to say that my 'APP_SETTINGS' environment variable is not set, when it is in fact set.
Before starting the app, Redis, or the worker, I set APP_SETTINGS:
export APP_SETTINGS="project.config.DevelopmentConfig"
However, when an item gets added to the queue, here's the stack trace:
17:00:00 *** Listening on default...
17:00:59 default: project.email.sendMailInBG(<flask_mail.Message object at 0x7fc930e1c3d0>) (aacf9546-5558-4db8-9232-5f36c25d521b)
17:01:00 KeyError: 'APP_SETTINGS'
Traceback (most recent call last):
File "/home/tony/pyp-launch/venv/local/lib/python2.7/site-packages/rq/worker.py", line 588, in perform_job
rv = job.perform()
File "/home/tony/pyp-launch/venv/local/lib/python2.7/site-packages/rq/job.py", line 498, in perform
self._result = self.func(*self.args, **self.kwargs)
File "/home/tony/pyp-launch/venv/local/lib/python2.7/site-packages/rq/job.py", line 206, in func
return import_attribute(self.func_name)
File "/home/tony/pyp-launch/venv/local/lib/python2.7/site-packages/rq/utils.py", line 150, in import_attribute
module = importlib.import_module(module_name)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/tony/pyp-launch/project/__init__.py", line 24, in <module>
app.config.from_object(os.environ['APP_SETTINGS'])
File "/home/tony/pyp-launch/venv/lib/python2.7/UserDict.py", line 40, in __getitem__
raise KeyError(key)
KeyError: 'APP_SETTINGS'
17:01:00 Moving job to u'failed' queue
17:01:00
17:01:00 *** Listening on default...
email.py
from flask.ext.mail import Message
from project import app, mail
from redis import Redis
from rq import use_connection, Queue

q = Queue(connection=Redis())

def send_email(to, subject, template, emailable):
    if emailable == True:
        msg = Message(
            subject,
            recipients=[to],
            html=template,
            sender=app.config['MAIL_DEFAULT_SENDER']
        )
        q.enqueue(sendMailInBG, msg)
    else:
        print("no email sent, emailable set to: " + str(emailable))

def sendMailInBG(msgContent):
    with app.test_request_context():
        mail.send(msgContent)
worker.py
import os
import redis
from rq import Worker, Queue, Connection

listen = ['default']
redis_url = os.getenv('REDISTOGO_URL', 'redis://localhost:6379')
conn = redis.from_url(redis_url)

if __name__ == '__main__':
    with Connection(conn):
        worker = Worker(list(map(Queue, listen)))
        worker.work()
I'd really appreciate another set of eyes on this. I can't for the life of me figure out what's going on here.

Thanks to the prompting of @danidee, I discovered that the environment variables need to be defined in each terminal. Hence, APP_SETTINGS was defined for the actual app, but not for the worker.
The solution was to set APP_SETTINGS in the worker terminal.
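For example, in the terminal that will run the worker, set the variable first and then start the worker (a minimal sketch; it assumes worker.py is launched from the project root):
export APP_SETTINGS="project.config.DevelopmentConfig"
python worker.py
Alternatively, project/__init__.py can fall back to a default configuration instead of raising a hard KeyError; a one-line sketch, assuming DevelopmentConfig is an acceptable default when the variable is missing:
app.config.from_object(os.environ.get('APP_SETTINGS', 'project.config.DevelopmentConfig'))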

Related

Redis queue Retry does not work with the interval argument

I am trying to use the rq Retry functionality by following the rq documentation, but it does not work when using the interval argument.
python version: 3.8.0
rq version: 1.10.0
somewhere.py
def my_func():
    print('Start...')
    asdsa  # Here a NameError is raised
A script that enqueues my_func with retry functionality
from redis import Redis
from rq import Retry, Queue
from somewhere import my_func

r = Redis("localhost",
          6379,
          socket_connect_timeout=1,
          decode_responses=True,
          )
q = Queue(connection=r)
q.enqueue(my_func, retry=Retry(max=3, interval=10))
I was expecting to see the worker run my_func 3 times with 10-second intervals in between, but it actually runs it only once. The worker output:
17:35:19 Worker rq:worker:1801215fdd1040b2aee962cccceff587: started, version 1.10.1
17:35:19 Subscribing to channel rq:pubsub:1801215fdd1040b2aee962cccceff587
17:35:19 *** Listening on default...
17:35:22 default: somewhere.my_func() (dc051976-598a-4863-8d15-6813c61d1377)
1
17:35:22 Traceback (most recent call last):
File "/home/user/Documents/Projects/Aquacrop/aquacrop/aquacrop-api/env/lib/python3.8/site-packages/rq/worker.py", line 1061, in perform_job
rv = job.perform()
File "/home/user/Documents/Projects/Aquacrop/aquacrop/aquacrop-api/env/lib/python3.8/site-packages/rq/job.py", line 821, in perform
self._result = self._execute()
File "/home/user/Documents/Projects/Aquacrop/aquacrop/aquacrop-api/env/lib/python3.8/site-packages/rq/job.py", line 844, in _execute
result = self.func(*self.args, **self.kwargs)
File "./somewhere.py", line 3, in my_func
somewhere
NameError: name 'somewhere' is not defined
If I do not use the interval argument, the worker retries the function 3 times as expected.
What am I doing wrong?
As stated here and here, retries with an interval are scheduled to run in the future, so one has to run the worker with the --with-scheduler flag, like:
rq worker --url redis://localhost:6379 --with-scheduler

Celery + Azure Service Bus (Broker) = claim is empty or token is invalid

I am trying to use Azure Service Bus as the broker for my Celery app.
I have patched together the solution by referring to various sources.
The goal is to use Azure Service Bus as the broker and PostgreSQL as the backend.
I created an Azure Service Bus namespace and copied the credentials for the RootManageSharedAccessKey to the Celery app.
Following is the tasks.py:
from time import sleep
from celery import Celery
from kombu.utils.url import safequote

SAS_policy = safequote("RootManageSharedAccessKey")  # SAS Policy
SAS_key = safequote("1234222zUY28tRUtp+A2YoHmDYcABCD")  # Primary key from the previous SS
namespace = safequote("bluenode-dev")

app = Celery('tasks', backend='db+postgresql://afsan.gujarati:admin@localhost/local_dev',
             broker=f'azureservicebus://{SAS_policy}:{SAS_key}=@{namespace}')

@app.task
def divide(x, y):
    sleep(30)
    return x/y
When I try to run the Celery app using the following command:
celery -A tasks worker --loglevel=INFO
I get the following error
[2020-10-09 14:00:32,035: CRITICAL/MainProcess] Unrecoverable error: AzureHttpError('Unauthorized\n<Error><Code>401</Code><Detail>claim is empty or token is invalid. TrackingId:295f7c76-770e-40cc-8489-e0eb56248b09_G5S1, SystemTracker:bluenode-dev.servicebus.windows.net:$Resources/Queues, Timestamp:2020-10-09T20:00:31</Detail></Error>')
Traceback (most recent call last):
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/transport/virtual/base.py", line 918, in create_channel
return self._avail_channels.pop()
IndexError: pop from empty list
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/azure/servicebus/control_client/servicebusservice.py", line 1225, in _perform_request
resp = self._filter(request)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/azure/servicebus/control_client/_http/httpclient.py", line 211, in perform_request
raise HTTPError(status, message, respheaders, respbody)
azure.servicebus.control_client._http.HTTPError: Unauthorized
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/worker/worker.py", line 203, in start
self.blueprint.start(self)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/bootsteps.py", line 365, in start
return self.obj.start()
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/worker/consumer/consumer.py", line 311, in start
blueprint.start(self)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/worker/consumer/connection.py", line 21, in start
c.connection = c.connect()
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/worker/consumer/consumer.py", line 398, in connect
conn = self.connection_for_read(heartbeat=self.amqheartbeat)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/worker/consumer/consumer.py", line 404, in connection_for_read
return self.ensure_connected(
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/celery/worker/consumer/consumer.py", line 430, in ensure_connected
conn = conn.ensure_connection(
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/connection.py", line 383, in ensure_connection
self._ensure_connection(*args, **kwargs)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/connection.py", line 435, in _ensure_connection
return retry_over_time(
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/utils/functional.py", line 325, in retry_over_time
return fun(*args, **kwargs)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/connection.py", line 866, in _connection_factory
self._connection = self._establish_connection()
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/connection.py", line 801, in _establish_connection
conn = self.transport.establish_connection()
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/transport/virtual/base.py", line 938, in establish_connection
self._avail_channels.append(self.create_channel(self))
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/transport/virtual/base.py", line 920, in create_channel
channel = self.Channel(connection)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/kombu/transport/azureservicebus.py", line 64, in __init__
for queue in self.queue_service.list_queues():
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/azure/servicebus/control_client/servicebusservice.py", line 313, in list_queues
response = self._perform_request(request)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/azure/servicebus/control_client/servicebusservice.py", line 1227, in _perform_request
return _service_bus_error_handler(ex)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/azure/servicebus/control_client/_serialization.py", line 569, in _service_bus_error_handler
return _general_error_handler(http_error)
File "/Users/afsan.gujarati/.pyenv/versions/3.8.1/envs/celery-servicebus/lib/python3.8/site-packages/azure/servicebus/control_client/_common_error.py", line 41, in _general_error_handler
raise AzureHttpError(message, http_error.status)
azure.common.AzureHttpError: Unauthorized
<Error><Code>401</Code><Detail>claim is empty or token is invalid. TrackingId:295f7c76-770e-40cc-8489-e0eb56248b09_G5S1, SystemTracker:bluenode-dev.servicebus.windows.net:$Resources/Queues, Timestamp:2020-10-09T20:00:31</Detail></Error>
I don't see a straight solution for this anywhere. What am I missing?
P.S. I did not create the queue in Azure Service Bus. I am assuming that Celery will create the queue by itself when the Celery app is executed.
P.P.S. I also tried to use the exact same credentials with Python's Service Bus client and it seemed to work. It feels like a Celery issue, but I am not able to figure out exactly what.
If you want to use the Azure Service Bus transport to connect to Azure Service Bus, the broker URL should be azureservicebus://{SAS policy name}:{SAS key}@{Service Bus Namespace}.
For example, get the key for the RootManageSharedAccessKey policy under Shared access policies, then use the following code:
from celery import Celery
from kombu.utils.url import safequote

SAS_policy = "RootManageSharedAccessKey"  # SAS Policy
# Primary key from the previous SS
SAS_key = safequote("X/*****qyY=")
namespace = "bowman1012"

app = Celery('tasks', backend='db+postgresql://<>@localhost/<>',
             broker=f'azureservicebus://{SAS_policy}:{SAS_key}@{namespace}')

@app.task
def add(x, y):
    return x + y

Error trying to connect Celery through SQS using STS

I'm trying to use Celery with SQS as the broker. In order to use SQS from my container I need to assume a role, and for that I'm using STS. My code looks like this:
import os

import boto3
from celery import Celery

role_info = {
    'RoleArn': 'arn:aws:iam::xxxxxxx:role/my-role-execution',
    'RoleSessionName': 'roleExecution'
}

sts_client = boto3.client('sts', region_name='eu-central-1')
credentials = sts_client.assume_role(**role_info)

aws_access_key_id = credentials["Credentials"]['AccessKeyId']
aws_secret_access_key = credentials["Credentials"]['SecretAccessKey']
aws_session_token = credentials["Credentials"]["SessionToken"]

os.environ["AWS_ACCESS_KEY_ID"] = aws_access_key_id
os.environ["AWS_SECRET_ACCESS_KEY"] = aws_secret_access_key
os.environ["AWS_DEFAULT_REGION"] = 'eu-central-1'
os.environ["AWS_SESSION_TOKEN"] = aws_session_token

broker = "sqs://"
backend = 'redis://redis-service:6379/0'

celery = Celery('tasks', broker=broker, backend=backend)
celery.conf["task_default_queue"] = 'my-queue'
celery.conf["broker_transport_options"] = {
    'region': 'eu-central-1',
    'predefined_queues': {
        'my-queue': {
            'url': 'https://sqs.eu-central-1.amazonaws.com/xxxxxxx/my-queue'
        }
    }
}
In the same file I have the following task:
@celery.task(name='my-queue.my_task')
def my_task(content) -> int:
    print("hello")
    return 0
When I execute this code, I get the following error:
[2020-09-24 10:38:03,602: CRITICAL/MainProcess] Unrecoverable error: ClientError('An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied.',)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 921, in create_channel
return self._avail_channels.pop()
IndexError: pop from empty list
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/celery/worker/worker.py", line 208, in start
self.blueprint.start(self)
File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 369, in start
return self.obj.start()
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 318, in start
blueprint.start(self)
File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/connection.py", line 23, in start
c.connection = c.connect()
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 405, in connect
conn = self.connection_for_read(heartbeat=self.amqheartbeat)
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 412, in connection_for_read
self.app.connection_for_read(heartbeat=heartbeat))
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 439, in ensure_connected
callback=maybe_shutdown,
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 422, in ensure_connection
callback, timeout=timeout)
File "/usr/local/lib/python3.6/site-packages/kombu/utils/functional.py", line 341, in retry_over_time
return fun(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 275, in connect
return self.connection
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 823, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 778, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 941, in establish_connection
self._avail_channels.append(self.create_channel(self))
File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 923, in create_channel
channel = self.Channel(connection)
File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 100, in __init__
self._update_queue_cache(self.queue_name_prefix)
File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 105, in _update_queue_cache
resp = self.sqs.list_queues(QueueNamePrefix=queue_name_prefix)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 337, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 656, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied.
If I use boto3 directly without Celery, I'm able to connect to the queue and retrieve data without this error. I don't know why Celery/Kombu tries to list queues when I specify the predefined_queues configuration, which is meant to avoid this behavior (from the docs):
If you want Celery to use a set of predefined queues in AWS, and to never attempt to list SQS queues, nor attempt to create or delete them, pass a map of queue names to URLs using the predefined_queue_urls setting
Source here
Does anyone know what is happening? How should I modify my code to make it work? It seems that Celery is not using the credentials at all.
The versions I'm using:
celery==4.4.7
boto3==1.14.54
kombu==4.5.0
Thanks!
PS: I created an issue on GitHub to track whether this might be a library error or not...
I solved the problem by updating the dependencies to the latest versions:
celery==5.0.0
boto3==1.14.54
kombu==5.0.2
pycurl==7.43.0.6
I was able to get celery==4.4.7 and kombu==4.6.11 working by setting the following configuration option:
celery.conf["task_create_missing_queues"] = False
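Putting the two answers together, here is a minimal sketch of the relevant configuration, reusing the placeholder queue name and URL from the question (whether the extra flag is still needed depends on the celery/kombu versions):
celery.conf["task_create_missing_queues"] = False  # per the answer above: don't try to declare/create queues
celery.conf["task_default_queue"] = 'my-queue'
celery.conf["broker_transport_options"] = {
    'region': 'eu-central-1',
    'predefined_queues': {
        'my-queue': {
            'url': 'https://sqs.eu-central-1.amazonaws.com/xxxxxxx/my-queue'
        }
    }
}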

Airflow Scheduler Crashes when setting Postgres celery result_backend

I'm trying to implement Apache Airflow with the CeleryExecutor. For the database I use Postgres, and for the Celery message queue I use Redis. With the LocalExecutor everything works fine, but when I set the CeleryExecutor in airflow.cfg and want to set the Postgres database as the result_backend
result_backend = postgresql+psycopg2://airflow_user:*******@localhost/airflow
I get this error when running the Airflow scheduler no matter which DAG I trigger:
[2020-03-18 14:14:13,341] {scheduler_job.py:1382} ERROR - Exception when executing execute_helper
Traceback (most recent call last):
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/kombu/utils/objects.py", line 42, in __get__
return obj.__dict__[self.__name__]
KeyError: 'backend'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/airflow/jobs/scheduler_job.py", line 1380, in _execute
self._execute_helper()
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/airflow/jobs/scheduler_job.py", line 1441, in _execute_helper
if not self._validate_and_run_task_instances(simple_dag_bag=simple_dag_bag):
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/airflow/jobs/scheduler_job.py", line 1503, in _validate_and_run_task_instances
self.executor.heartbeat()
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/airflow/executors/base_executor.py", line 130, in heartbeat
self.trigger_tasks(open_slots)
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 205, in trigger_tasks
cached_celery_backend = tasks[0].backend
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/local.py", line 146, in __getattr__
return getattr(self._get_current_object(), name)
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/app/task.py", line 1037, in backend
return self.app.backend
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/kombu/utils/objects.py", line 44, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/app/base.py", line 1227, in backend
return self._get_backend()
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/app/base.py", line 944, in _get_backend
self.loader)
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/app/backends.py", line 74, in by_url
return by_name(backend, loader), url
File "<PATH_TO_VIRTUALENV>/lib/python3.6/site-packages/celery/app/backends.py", line 60, in by_name
backend, 'is a Python module, not a backend class.'))
celery.exceptions.ImproperlyConfigured: Unknown result backend: 'postgresql'. Did you spell that correctly? ('is a Python module, not a backend class.')
The exact same connection string works when used for the database connection:
sql_alchemy_conn = postgresql+psycopg2://airflow_user:*******@localhost/airflow
Setting Redis as the Celery result_backend works, but I read it is not the recommended way.
result_backend = redis://localhost:6379/0
Does anyone see what I am doing wrong?
You need to add the db+ prefix to the database connection string:
f"db+postgresql+psycopg2://{user}:{password}#{host}/{database}"
This is also mentioned in the docs: https://docs.celeryproject.org/en/stable/userguide/configuration.html#database-url-examples
You need to add the db+ prefix to the database connection string:
result_backend = db+postgresql://airflow_user:*******@localhost/airflow

Error 111 connection refused (Python, celery, redis)

I tried to get all the active/scheduled/reserved tasks in redis:
from celery.task.control import inspect
inspect_obj = inspect()
inspect_obj.active()
inspect_obj.scheduled()
inspect_obj.reserved()
But I was greeted with the following errors.
My virtual environment is HubblerAPI.
I am using this from the EC2 console.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/celery/app/control.py", line 81, in active
return self._request('dump_active', safe=safe)
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/celery/app/control.py", line 71, in _request
timeout=self.timeout, reply=True,
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/celery/app/control.py", line 316, in broadcast
limit, callback, channel=channel,
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/kombu/pidbox.py", line 283, in _broadcast
chan = channel or self.connection.default_channel
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/kombu/connection.py", line 771, in default_channel
self.connection
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/kombu/connection.py", line 756, in connection
self._connection = self._establish_connection()
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/kombu/connection.py", line 711, in _establish_connection
conn = self.transport.establish_connection()
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/kombu/transport/pyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/amqp/connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/amqp/connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/amqp/transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "/home/ec2-user/HubblerAPI/local/lib/python3.4/site-packages/amqp/transport.py", line 95, in __init__
raise socket.error(last_err)
OSError: [Errno 111] Connection refused
My celery config file is as follows:
BROKER_TRANSPORT = 'redis'
BROKER_TRANSPORT_OPTIONS = {
    'queue_name_prefix': 'dev-',
    'wait_time_seconds': 10,
    'polling_interval': 30,
    # The polling interval decides the number of seconds to sleep
    # between unsuccessful polls
    'visibility_timeout': 3600 * 5,
    # If a task is not acknowledged within the visibility_timeout, the
    # task will be redelivered to another worker and executed.
}
CELERY_MESSAGES_DB = 6
BROKER_URL = "redis://%s:%s/%s" % (AWS_REDIS_ENDPOINT, AWS_REDIS_PORT,
                                   CELERY_MESSAGES_DB)
What am I doing wrong here? The error log suggests that it's not using the Redis broker.
It looks like your Python code doesn't recognize your configs, since it is attempting to use RabbitMQ's AMQP protocol instead of the configured broker.
I suggest the following:
https://docs.celeryq.dev/en/stable/getting-started/backends-and-brokers/redis.html
Your configs look similar to Django configs for Celery, yet it doesn't seem you are using Celery with Django.
https://docs.celeryq.dev/en/latest/django/first-steps-with-django.html
The issue was using "BROKER_URL" instead of "CELERY_BROKER_URL" in settings.py. Celery wasn't finding the URL and was defaulting to the RabbitMQ port instead of the Redis port.
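A minimal sketch of the fix in Django's settings.py, assuming the Celery app loads its settings from Django with the CELERY_ namespace (e.g. app.config_from_object('django.conf:settings', namespace='CELERY')); the Redis endpoint variables are the ones from the question's config:
CELERY_BROKER_URL = "redis://%s:%s/%s" % (AWS_REDIS_ENDPOINT, AWS_REDIS_PORT, CELERY_MESSAGES_DB)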
