I'm using celery 3.0.11 and djcelery 3.0.11 with python 2.7 and django 1.3.4.
I'm trying to run celeryd as a daemon and I've followed instructions from http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html
When I run the workers using celeryd as described in the link with a python (non-django) configuration, the daemon comes up.
When I run the workers using python manage.py celery worker --loglevel=info to test the workers, they come up fine and start to consume messages.
But when I run celeryd with a Django configuration, i.e. using manage.py celeryd_multi, I just get a message that says:
> Starting nodes...
> <node_name>.<user_name>: OK
But I don't see any daemon running and my messages obviously don't get consumed. There is an empty log file (the one that's configured in the celeryd config file).
I've tried this with a very basic django project as well and I get the same result.
I'm wondering if I'm missing some basic configuration piece. Since I don't get any errors and I don't have any logs, I'm stuck. Running the init script with sh -x doesn't show anything unusual either.
Has anyone experienced this before or does anyone have any suggestions on what I can try?
Thanks,
For now I've switched to using supervisord instead of celeryd and I have no issues running multiple workers.
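In case it is useful to anyone, the supervisord program section I use looks roughly like the sketch below. Paths, user and node names are placeholders; the command is the same manage.py invocation that already worked interactively.
[program:celery]
command=/path/to/virtualenv/bin/python /path/to/project/manage.py celery worker --loglevel=info -n worker%(process_num)d
directory=/path/to/project
user=celeryuser
numprocs=2
process_name=%(program_name)s_%(process_num)d
stdout_logfile=/var/log/celery/%(program_name)s_%(process_num)d.log
redirect_stderr=true
autostart=true
autorestart=true
stopwaitsecs=600
On stop, supervisord sends SIGTERM, which the worker treats as a warm shutdown; stopwaitsecs gives running tasks time to finish before a kill is sent.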
I am building an app and I am trying to run some tasks every day. I saw some answers, blogs and tutorials about using Celery, and I liked the idea of using Celery for doing background jobs.
But I have some questions about Celery:
As mentioned in the Celery documentation, after setting up a Celery task I have to run a command like celery -A proj worker -l INFO, which starts a worker that processes the tasks. So my questions are:
Do I have to stop the running server to execute this command, and what if I deploy the Django project with Celery on Heroku or PythonAnywhere?
Do I have to run the command every time, or can I execute this command first and then start the server?
If I have to run this command every time to perform background tasks, how is that possible when deploying to Heroku?
Will Celery's background tasks keep running if I only execute python manage.py runserver in a terminal?
Why am I in doubt?
What I think is that when running celery -A proj worker -l INFO, it keeps processing tasks in that terminal, so I cannot execute runserver in the same terminal.
Any help would be much appreciated. Thank you.
> Do I have to run the command every time, or can I execute this command first and then start the server?
Dockerize your Celery and write your own script for auto-run.
You can't run the Celery worker and the Django application in one terminal simultaneously, because both are long-running programs that need to run in parallel. So you should use two terminals: one for Django and another for the Celery worker.
I highly recommend reading this Heroku development article on using Celery and Django on Heroku.
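For Heroku specifically, the usual pattern (and what that article walks through) is to declare the web process and the Celery worker as separate entries in a Procfile, so each runs in its own dyno. A minimal sketch, assuming the project is called proj and the web process is served with gunicorn:
web: gunicorn proj.wsgi --log-file -
worker: celery -A proj worker -l INFO
Locally you can approximate this with two terminals (or with heroku local, which runs the Procfile entries for you).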
This is a funny Stack Overflow question, because I have an answer, but the answer is a few years old. I can't find much newer content, yet the issue seems like it would be quite high profile.
I am using docker-compose to start a few containers. Two of them use standard postgres and redis images.
The others run Django 2.2.9 (and Celery). This is a development environment, and I start them with docker-compose, like this:
command: ./manage.py runserver 0.0.0.0:80
docker-compose stop sends a SIGINT. The redis and postgres containers exit quickly.
The Django containers don't; docker-compose stop loses patience and kills them.
(And PyCharm currently has infinite patience and doesn't send a kill until I force it.)
This post from 2015 referring to Django 1.9 (http://blog.lotech.org/fix-djangos-runserver-when-run-under-docker-or-pycharm.html) says that
"The quick fix is to specifically listen for SIGINT and SIGTERM in
your manage.py, and sys.kill() when you get them. So modify your
manage.py to add a signal handler:"
and it shows how. The fix of changing manage.py to catch SIGINT works and is only a handful of lines, although it doesn't help for Celery, which has its own startup.
So I can carry forward my own version of manage.py and patch Celery as well, but is this really still the way to fix this?
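For context, the change that post describes boils down to something like the following manage.py sketch (the settings module name is a placeholder, and this is my paraphrase of the approach rather than the post's exact code):
#!/usr/bin/env python
import os
import signal
import sys

def handle_stop(signum, frame):
    # Exit promptly so docker-compose stop doesn't have to wait out its timeout and kill us.
    sys.exit(0)

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # placeholder settings module
    # Register handlers before the command runs so SIGINT/SIGTERM from docker stop are honoured.
    signal.signal(signal.SIGINT, handle_stop)
    signal.signal(signal.SIGTERM, handle_stop)
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)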
I see the Dockerfile could have
STOPSIGNAL SIGINT
but it doesn't make any difference, I suppose because the entry point is managed by docker-compose.
Use the list variant of command:
command: ["./manage.py", "runserver", "0.0.0.0:80"]
See https://hynek.me/articles/docker-signals/ for details on why. (In short, the shell form runs your command via /bin/sh -c, so the shell becomes PID 1 and does not forward signals to manage.py; the exec/list form makes manage.py PID 1, so it receives SIGTERM/SIGINT directly.)
I'm having an issue that's come up multiple times before, but none of the previous answers seem to help me here.
I'm running Celery (via Docker/Kubernetes) with a Redis back-end. I'm using this command:
celery worker --uid 33 -A finimize_django --loglevel=DEBUG -E
(I've just set it to debug now)
I am using celery==4.3.0 and redis==3.2.1.
Whenever I run celery -A app_name status I get:
Error: No nodes replied within time constraint.
What's weird is that Celery seems to be working fine. I can see tasks being processed, and even when I monitor Redis, things seem to be running successfully. This has also been running fine in production for months, only for this to start happening last week.
It is causing a problem because my liveness probe kills the pod when it sees this error.
How can I debug the underlying issue? There are no errors in the log output.
Thanks!
I had the same issue, or at least a very similar one. I managed to fix it in my project by pinning kombu to version 4.6.3. According to this issue on the Celery GitHub, it is a problem with 4.6.4. A really insidious problem to debug, but I hope this helps!
As suggested in the first answer and in the issue on GitHub, downgrading kombu to version 4.6.3 by itself was not enough; I also had to change my command a bit and use this one:
timeout 120s celery inspect ping -A run:celery -b ${REDIS_URL} -d celery@$HOSTNAME
Notice the Redis URL and the hostname.
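If, like the original question, you are running this from a Kubernetes liveness probe, the probe ends up looking something like the sketch below (timings are just examples, and REDIS_URL is assumed to be set in the container's environment):
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - "timeout 120s celery inspect ping -A run:celery -b ${REDIS_URL} -d celery@$HOSTNAME"
  initialDelaySeconds: 30
  periodSeconds: 60
  timeoutSeconds: 130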
When simultaneous AJAX requests are sent to runserver, it gets killed.
I know runserver used to be single-threaded, but the existence of the --nothreading option suggests it is now multithreaded by default. Still, my runserver gets killed.
I am running on django==1.10 and python==2.7
How do I stop runserver from getting killed?
Or is this because of Python's multithreading limitations?
Through trial and error, I found a way to stop runserver from getting killed.
--nothreading actually solved my problem automagically.
So the final command to run the dev server is:
django-admin runserver 1.2.3.4:8000 --nothreading
or
python manage.py runserver 1.2.3.4:8000 --nothreading
Happy I am :-)
System Info
Ubuntu 12.04 LTS
Django 1.5.5
Python 2.7.3
Celery 3.1.9
I am running this on a Vagrant virtual machine (provisioned with Puppet) and attempting to set up Celery to run the worker as a daemon, as described in the Celery docs here as well as the Celery setup for Django described here. I am using a virtualenv for the project, located at
/home/vagrant/virtualenvs/myproj
The actual project files are located at
/srv/myproj
I have been able to start the worker and the beat scheduler without issue from the /srv/myproj directory using these command-line statements:
~/virtualenvs/myproj/bin/celery -A app beat
~/virtualenvs/myproj/bin/celery worker -A app
Both beat and the worker start without issue, and the scheduled task is passed to the worker and executed. The problem arises when I attempt to run them as background processes. I am using the init scripts found in the Celery GitHub repo in /etc/init.d/ and the following configuration settings in my celeryd and celerybeat files located in /etc/default:
CELERY_BIN="/home/vagrant/virtualenvs/myproj/bin/celery"
CELERYD_CHDIR="/srv/myproj"
Attempting to run the services as sudo with
sudo service celeryd start
sudo service celerybeat start
causes an error message to be thrown. I believe this is because it is using the system Python installation under /usr/lib instead of the Python in the virtualenv. The error thrown is a "cannot import name" error (the package exists in the virtualenv but not globally, hence my assumption).
I also noticed that the "Running the worker as a daemon" page states that workers should run as unprivileged users, and that you should start workers and beat using multi or the --detach option. That way I was able to start the worker (though not beat), but all the .log and .pid files are created in my current directory instead of where I've specified in the /etc/default/celeryd config file.
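Presumably that happens because /etc/default/celeryd is only sourced by the init script, not by the celery binary itself, so when starting the worker directly with multi or --detach the log and pid locations have to be passed on the command line, something like this (paths are just examples):
cd /srv/myproj
~/virtualenvs/myproj/bin/celery multi start worker1 -A app --loglevel=INFO --pidfile="/var/run/celery/%n.pid" --logfile="/var/log/celery/%n.log"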
Does anyone have a solution for getting celery to work in a virtualenv? I feel like I'm really close and am overlooking some simple part of the configuration.
I was eventually able to get this working by using supervisor and setting the environment variables in the [program:celery] environment option.
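For anyone in a similar situation, the relevant part of my supervisor config was roughly the following (user and paths match my setup above; adjust to taste). Pointing command at the virtualenv's celery binary and putting the virtualenv's bin directory first on PATH is what made the imports resolve:
[program:celery]
command=/home/vagrant/virtualenvs/myproj/bin/celery worker -A app --loglevel=INFO
directory=/srv/myproj
user=vagrant
environment=PATH="/home/vagrant/virtualenvs/myproj/bin:/usr/local/bin:/usr/bin:/bin"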