max number of clients reached celery - python

I am using django with celery and redis.
I keep getting this error: redis.exceptions.ResponseError: max number of clients reached
I am using Heroku and my Redis backend has a maximum of 400 connections. I'm running 20 dynos for the main app and 5 dynos for Celery. How do I set the maximum number of connections? I've tried putting it in my celery.py like so:
from __future__ import absolute_import
import os
from celery import Celery

# Set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'imdowntags.settings')

from django.conf import settings

app = Celery('imdowntags',
             broker=os.environ['REDIS_URL'],
             backend=os.environ['REDIS_URL'],
             include=['imdowntags.tasks'])

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

app.conf.update(CELERY_REDIS_MAX_CONNECTIONS=20)
I've also tried putting
CELERY_REDIS_MAX_CONNECTIONS = 20
inside my settings.py.

You are probably hitting a plan limit on the Redis server itself. If you are using the Heroku Redis add-on, consult the plan descriptions: https://elements.heroku.com/addons/heroku-redis
You should expect at least one connection per dyno: the web dynos need connections to queue work, and the Celery workers need connections to pop it off. 20 web dynos plus 5 Celery dynos easily puts you over the limit on the free tier.
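If you also want to cap how many connections each process opens, a minimal sketch in settings.py (old-style uppercase settings to match the question; the numbers are illustrative, not recommendations):

# Cap the broker connection pool kept by each web/worker process.
BROKER_POOL_LIMIT = 1
# Cap connections the Redis result backend may open per process.
CELERY_REDIS_MAX_CONNECTIONS = 2
# Ask the Redis transport not to grow past this many connections.
BROKER_TRANSPORT_OPTIONS = {'max_connections': 2}

Even with these caps, the total is roughly connections-per-process times the number of processes across all dynos, so the plan's limit is usually the real constraint.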

Related

Heroku / Redis Connection error on Python/Django Project

I'm trying to set up Django so it sends an automatic email when a certain date in my models is reached. I set up a Heroku Redis server and am trying to connect to it. I created a simple task to test whether Celery is working, but it always returns the following error:
Error while reading from socket: (10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)
I set up Celery according to the website:
celery.py:
import os
from celery import Celery

# Set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'incFleet.settings')

app = Celery('incFleet')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django apps.
app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
Tasks.py
import datetime
from celery import shared_task, task
from time import sleep
from .models import trucks
from datetime import datetime

@shared_task()
def sleepy(duration):
    sleep(duration)
    return 0

# @shared_task()
# def send_warning():
My views:
def index(request):
    sleepy.delay(5)
    return render(request, 'Inventory/index.html')
And my settings.py
# Celery Broker - Redis
CELERY_BROKER_URL = 'redis://:p0445df1196b44ba70a9bd0c84545315fec8a5dcbd77c8e8c4bd22ff4cd0a2ff4@ec2-54-167-58-171.compute-1.amazonaws.com:11900/'
CELERY_RESULT_BACKEND = 'redis://:p0445df1196b44ba70a9bd0c84545315fec8a5dcbd77c8e8c4bd22ff4cd0a2ff4@ec2-54-167-58-171.compute-1.amazonaws.com:11900/'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
After a while I realized the issue was the network in my office: the port I was trying to reach was closed. As soon as I activated Redis on the actual server, it started working fine.
Check the actual Redis URL with:
heroku config | grep REDIS
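Rather than hardcoding the URL in settings.py (Heroku can rotate the Redis credentials), a minimal sketch that reads it from the environment, assuming the add-on sets a REDIS_URL config var:

# settings.py -- sketch assuming Heroku provides a REDIS_URL config var
import os

REDIS_URL = os.environ.get('REDIS_URL', 'redis://localhost:6379/0')
CELERY_BROKER_URL = REDIS_URL
CELERY_RESULT_BACKEND = REDIS_URL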

Celery Remote task Blocking Request

I have a problem calling a remote Celery task from a REST endpoint.
In my case the tasks run on one machine, and the REST API runs on another machine.
from flask import Flask

app = Flask(__name__)
celery_obj = //CELERY .

@app.route("/task1")
def func():
    celery_obj.send_task(name="tasks.task1", args=[])
When I start the application and send a request to the /task1 endpoint, the Flask app does not reply with anything.
What is the reason for this problem?
Please help.
celery_obj needs to be a Celery application pointed at the broker you are sending the task to, with at least the broker URL specified.
e.g.,
from celery.app import Celery
celery = Celery(broker='redis://127.0.0.1/1')
celery.send_task('task.name', kwargs={})
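Putting that together with the question's route, a minimal sketch (the broker URL and task name are placeholders for your own):

from flask import Flask, jsonify
from celery import Celery

app = Flask(__name__)
# The producer only needs to reach the same broker the remote worker consumes from.
celery_obj = Celery(broker='redis://127.0.0.1:6379/1')

@app.route("/task1")
def func():
    # send_task queues the task by its registered name; the task code lives on the worker.
    result = celery_obj.send_task("tasks.task1", args=[])
    return jsonify({"task_id": result.id})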

Why isn't celery periodic task working?

I'm trying to create a periodic task within a Django app.
I added this to my settings.py:
from datetime import timedelta
CELERYBEAT_SCHEDULE = {
    'get_checkins': {
        'task': 'api.tasks.get_checkins',
        'schedule': timedelta(seconds=1),
    },
}
I'm just getting started with Celery and haven't figured out which broker I want to use, so I added this as well to just bypass the broker for the time being:
if DEBUG:
    CELERY_ALWAYS_EAGER = True
I also created a celery.py file in my project folder:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'testproject.settings')
app = Celery('testproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
Inside my app, called api, I made a tasks.py file:
from celery import shared_task

@shared_task
def get_checkins():
    print('hello from get checkins')
I'm running the worker and beat with celery -A testproject worker --beat -l info
It starts up fine and I can see the task registered under [tasks], but I don't see any jobs getting logged. There should be one per second. Can anyone tell me why this isn't executing?
I looked at your post and don't see any mention of the broker you are using with Celery.
Have you installed a broker such as RabbitMQ? Is it running, or is it logging some kind of error?
Celery needs a broker to send and receive messages.
Check the documentation on choosing a broker: http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html#choosing-a-broker
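As a sketch, assuming you pick Redis and run it locally (RabbitMQ works the same way with an amqp:// URL), the broker goes in settings.py; since your celery.py uses namespace='CELERY', the key needs the CELERY_ prefix:

# settings.py -- sketch assuming a local Redis server on the default port
CELERY_BROKER_URL = 'redis://localhost:6379/0'

# Then start the broker (e.g. `redis-server`) and run:
#   celery -A testproject worker --beat -l info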

Specify number of processes for eventlet wsgi server

I am trying to add websocket functionality to an existing application. The existing structure of the app is
In /server/__init__.py:
from connexion import App
...
connexion_app = App(__name__, specification_dir='swagger/') # Create Connexion App
app = connexion_app.app # Configure Flask Application
...
connexion_app.add_api('swagger.yaml', swagger_ui=True) # Initialize Connexion api
In startserver.py:
from server import connexion_app
connexion_app.run(
    processes=8,
    debug=True
)
In this way, I was able to specify the number of processes. There are some long-running tasks that make it necessary to have as many processes as possible.
I have modified the application to include websocket functionality, as below, but it seems I now only have one process available. Once the application starts one of the long-running tasks, all API calls hang. Also, if the long-running task fails, the application is stuck in a hanging state.
In /server/__init__.py:
from connexion import App
import socketio
...
connexion_app = App(__name__, specification_dir='swagger/') # Create Connexion App
sio = socketio.Server() # Create SocketIO for websockets
app = connexion_app.app # Configure Flask Application
...
connexion_app.add_api('swagger.yaml', swagger_ui=True) # Initialize Connexion api
In startserver.py:
import socketio
import eventlet
from server import sio
from server import app
myapp = socketio.Middleware(sio, app)
eventlet.wsgi.server(eventlet.listen(('', 5000)), myapp)
What am I missing here?
(side note: If you have any resources available to better understand the behemoth of the Flask object, please point me to them!!)
Exact answer to the question: Eventlet's built-in WSGI server does not support multiple processes.
Best way to get a solution for the described problem: share a single file containing the absolute minimum code required to reproduce it, perhaps at https://github.com/eventlet/eventlet/issues or any other channel you prefer.
In the meantime, things to poke at: call eventlet.monkey_patch(), and isolate Eventlet and the long blocking calls in separate threads or processes; a sketch of both follows.
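A sketch of those two ideas against the question's startserver.py (long_running_job and run_job_without_blocking are hypothetical names used for illustration):

# startserver.py -- sketch; adapt to your app
import eventlet
eventlet.monkey_patch()  # patch the stdlib before anything else imports socket/threading

import socketio
from eventlet import tpool

from server import sio
from server import app

def long_running_job(arg):
    # Placeholder for one of the blocking calls. tpool.execute runs it in
    # eventlet's OS thread pool, so it doesn't freeze the green-threaded server.
    pass

def run_job_without_blocking(arg):
    return tpool.execute(long_running_job, arg)

myapp = socketio.Middleware(sio, app)
eventlet.wsgi.server(eventlet.listen(('', 5000)), myapp)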

Celery + Redis losing connection

I have a very simple Celery task that runs a (long running) shell script:
import os
from celery import Celery
os.environ['CELERY_TIMEZONE'] = 'Europe/Rome'
os.environ['TIMEZONE'] = 'Europe/Rome'
app = Celery('tasks', backend='redis', broker='redis://OTHER_SERVER:6379/0')
@app.task(name='ct.execute_script')
def execute_script(command):
    return os.system(command)
I have this task running on server MY_SERVER and I launch it from OTHER_SERVER, which also runs the Redis database.
The task seems to run successfully (I see the result of executing the script on the filesystem), but then I always start getting the following error:
INTERNAL ERROR: ConnectionError('Error 111 connecting to localhost:6379. Connection refused.',)
What could it be? Why is it trying to contact localhost when I've set the Redis server to redis://OTHER_SERVER:6379/0, and that part works (since the task is launched)? Thanks
When you set the backend argument, Celery uses it as the result backend.
In your code, backend='redis' tells Celery to use a local Redis server as the result backend.
You see the ConnectionError because Celery can't save the result to that local Redis server.
You can disable the result backend, start a local Redis server, or point the backend at OTHER_SERVER; a sketch of the last option follows the references.
ref:
http://celery.readthedocs.org/en/latest/getting-started/first-steps-with-celery.html#keeping-results
http://celery.readthedocs.org/en/latest/configuration.html#celery-result-backend
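A minimal sketch of pointing the backend at OTHER_SERVER, keeping the rest of the question's setup unchanged (OTHER_SERVER stands for the real hostname, as in the question):

import os
from celery import Celery

# Point both the broker and the result backend at the remote Redis,
# instead of the implicit localhost that backend='redis' resolves to.
app = Celery('tasks',
             backend='redis://OTHER_SERVER:6379/0',
             broker='redis://OTHER_SERVER:6379/0')

@app.task(name='ct.execute_script')
def execute_script(command):
    return os.system(command)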
