I'm currently writing a Flask application and am going to deploy it on OpenShift. In my dev environment I start my worker using
celery worker -A wsgi.app
My question is: how do I start my Celery worker on OpenShift? If I start it in the OpenShift shell, the process is killed as soon as I exit the shell, so my background workers never run and the Flask application never works correctly.
I really appreciate any help. Thanks.
Why not try something like this:
celery multi start worker1 \
--pidfile="$HOME/run/celery/%n.pid" \
--logfile="$HOME/log/celery/%n.log"
as mentioned here.
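A hedged note: the same celery multi command can later stop or restart the detached worker, as long as you point it at the same pidfile. A minimal sketch, assuming the paths above:

# wait for running tasks to finish, then stop the detached worker
celery multi stopwait worker1 --pidfile="$HOME/run/celery/%n.pid"

# or restart it in place after a code change
celery multi restart worker1 \
    --pidfile="$HOME/run/celery/%n.pid" \
    --logfile="$HOME/log/celery/%n.log"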
Related
I am building an app and I am trying to run some tasks every day. I saw some answers, blogs and tutorials about using Celery, and I liked the idea of using Celery for doing background jobs.
But I have some questions about Celery:
As mentioned in the Celery documentation, after setting up a Celery task I have to run a command like celery -A proj worker -l INFO, which starts a worker that processes the tasks. So my questions are:
Do I have to stop the running server to execute this command? And what if I deploy the Django project with Celery on Heroku or PythonAnywhere?
Do I have to run this command every time, or can I execute it first and then start the server?
If I have to run this command every time to perform background tasks, how is that possible when deploying to Heroku?
Will Celery's background tasks keep running after executing python manage.py runserver in only one terminal?
Why am I in doubt?
What I think is: while celery -A proj worker -l INFO is running and processing tasks, I cannot execute runserver in the same terminal.
Any help would be much appreciated. Thank you.
Should I have to run this command every time, or can I execute it first and then start the server?
Dockerize your Celery worker and write your own script for auto-run.
You can't run the Celery worker and the Django application in one terminal simultaneously, because both of them are programs that need to keep running in parallel. So you should use two terminals: one for Django and another for the Celery worker.
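For example, during development (a sketch, assuming the project is named proj as in the question):

# terminal 1: the Django development server
python manage.py runserver

# terminal 2: the Celery worker
celery -A proj worker -l INFO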
I highly recommend reading this Heroku Dev Center article on using Celery and Django on Heroku.
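On Heroku the same idea applies: you declare both processes in a Procfile and Heroku runs each one in its own dyno. A minimal sketch, assuming a project named proj and gunicorn as the web server (gunicorn is an assumption, not something from the question):

web: gunicorn proj.wsgi --log-file -
worker: celery -A proj worker -l INFO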
Suppose I am running a worker dyno on Heroku with the following Python code:
import time

time.sleep(60 * 60)  # keep the dyno busy for an hour
exit()  # the process ends here, but Heroku restarts the worker dyno
I want to stop the worker completely. This code ends the program, but the dyno starts again, which has the same effect as heroku ps:restart worker. What should I write in the code to have the same effect as heroku ps:scale worker=0? Is this possible? If not, what are my alternatives?
The worker processes defined in your Procfile should be always-on workers, the same as the web processes you define there.
For one-off tasks that exit afterwards you can use a one-off worker dyno. You can also easily define them in the Heroku Scheduler.
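From the Heroku CLI that looks roughly like this (python manage.py my_task is a hypothetical command standing in for your one-off job):

# run a one-off task in its own dyno; the dyno exits when the command finishes
heroku run:detached python manage.py my_task

# scale the always-on worker down to zero, or back up, by hand
heroku ps:scale worker=0
heroku ps:scale worker=1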
After I deploy my Django project, all I need to do is touch the uwsgi_touch file and uWSGI will gracefully restart its workers. But what about Celery? Right now I just restart Celery manually whenever the code base of the Celery tasks changes. But even if I do it manually, I still can't be sure that I won't kill a running Celery task.
Any solutions?
A better way to manage Celery workers is to use supervisor:
$ pip install supervisor
$ cd /path/to/your/project
$ echo_supervisord_conf > supervisord.conf
Add this to your supervisord.conf file:
[program:celeryworker]
command=/path/to/celery worker -A yourapp -l info
stdout_logfile=/path/to/your/logs/celeryd.log
stderr_logfile=/path/to/your/logs/celeryd.log
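Optionally, a few more settings in the same [program:celeryworker] section make supervisor start the worker when supervisord starts, restart it if it crashes, and give running tasks time to finish on shutdown (a sketch; the values are assumptions you can tune):

directory=/path/to/your/project
autostart=true
autorestart=true
; give running tasks up to 10 minutes to finish before the worker is killed
stopwaitsecs=600
killasgroup=true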
Now start supervisor with the supervisord command in your terminal and use supervisorctl to manage the process.
To restart, run:
$ supervisorctl restart celeryworker
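If supervisord was started with a config file outside the default search path, pass the same file to supervisorctl; checking or stopping the worker looks like this (a small usage sketch):

$ supervisorctl -c supervisord.conf status celeryworker
$ supervisorctl -c supervisord.conf stop celeryworker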
I've found the answer in the Celery FAQ:
http://docs.celeryproject.org/en/2.2/faq.html#how-do-i-shut-down-celeryd-safely
Use the TERM signal, and the worker will finish all currently
executing jobs and shut down as soon as possible. No tasks should be
lost.
You should never stop celeryd with the KILL signal (-9), unless you’ve
tried TERM a few times and waited a few minutes to let it get a chance
to shut down. As if you do tasks may be terminated mid-execution, and
they will not be re-run unless you have the acks_late option set
(Task.acks_late / CELERY_ACKS_LATE).
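In practice that graceful shutdown is just a TERM signal sent to the worker's main process. A sketch, assuming the worker writes its pid to a pidfile (the path is an assumption):

# graceful shutdown: finish currently executing tasks, accept no new ones
kill -TERM "$(cat /var/run/celery/worker1.pid)"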
I am running Celery tasks on a remote server. I am using the following command to run Celery:
python manage.py celery worker --beat
The Celery tasks run according to schedule, but when I press Ctrl+C or close the session window, it stops.
I need a command to keep Celery running permanently.
You can run it as a daemon. See:
http://celery.readthedocs.org/en/latest/tutorials/daemonizing.html
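The linked guide drives a generic init script from a config file such as /etc/default/celeryd. A minimal sketch of that file, assuming a Django project named proj living in /opt/proj and a dedicated celery user (all paths and names here are assumptions):

# name of the node to start (just one here)
CELERYD_NODES="worker1"
# absolute path to the celery binary and the app it should load
CELERY_BIN="/usr/local/bin/celery"
CELERY_APP="proj"
# extra command-line arguments (the question runs the worker with --beat)
CELERYD_OPTS="--beat"
# working directory, pid file, log file and log level
CELERYD_CHDIR="/opt/proj"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_LOG_LEVEL="INFO"
# run the worker as an unprivileged user
CELERYD_USER="celery"
CELERYD_GROUP="celery"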
I have written an Upstart job to run Celery on my Ubuntu server. Here's my configuration file, called celeryd.conf:
# celeryd - runs the celery daemon
#
# This task is run on startup to run the celery daemon
description "run celery daemon"
start on startup
expect fork
respawn
exec su - trakklr -c "/app/trakklr/src/trakklr celeryd --events --beat --loglevel=debug --settings=production"
When I execute sudo service celeryd start, the celeryd process starts just fine and all of the worker processes start fine.
...but when I execute sudo service celeryd stop, it stops most of the processes, but a few are left hanging.
Why is this happening? I'm using Celery 2.5.3.
Here's an issue from the GitHub tracker:
https://github.com/celery/django-celery/issues/142
I still use init.d to run celery so this may not apply. With that in mind, stopping the celery service sends the TERM signal to celery. This tells the workers not to accept new tasks but it does not terminate existing tasks. Therefore, depending on how long your tasks take to execute you may see tasks for some time after telling celery to stop. Eventually, they will all shut down unless you have some other problem.
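If you want to verify whether anything is actually left over after a stop, a quick check from the shell (just a sketch):

# list any celery processes that are still alive
ps aux | grep "[c]elery"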
I wasn't able to figure this out, but it seemed to be an issue with my older Celery version. I found this mentioned on their issue tracker, and I guess it points to the same problem:
https://github.com/celery/django-celery/issues/142
I upgraded my celery and django-celery to the 3.x.x versions and this issue was gone.