How to run celery as daemon with normal celery command - python

I have a Django app for which I am using Celery tasks to perform CSV processing in the background. I installed rabbitmq-server with sudo apt-get install rabbitmq-server, and the RabbitMQ server was installed and is running successfully.
I have some Celery task code in a tasks.py module inside an app, and I run the worker like this:
celery -A app.tasks worker --loglevel=info
This works fine and processes the CSV files in the background successfully, but now I just want to daemonize the above command. I searched for an option to daemonize it, but I didn't find any argument like -D to pass to the command. So is there any way I can daemonize the above command and keep Celery running?

I think you're looking for the --detach option. [1]
But it is recommended that you use something like systemd.
The Celery docs have a whole page on this topic. [2]
[1] http://celery.readthedocs.org/en/latest/reference/celery.bin.base.html#daemon-options
[2] http://celery.readthedocs.org/en/latest/tutorials/daemonizing.html
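As a rough sketch, a systemd unit for the worker above might look like the following. The user, working directory and virtualenv path here are assumptions; adjust them to your own layout:

```ini
# /etc/systemd/system/celery-worker.service (hypothetical paths)
[Unit]
Description=Celery worker for app
After=network.target rabbitmq-server.service

[Service]
Type=simple
User=www-data
WorkingDirectory=/srv/app
ExecStart=/srv/app/venv/bin/celery -A app.tasks worker --loglevel=info
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable and start it with sudo systemctl enable --now celery-worker.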

supervisor (controlled via supervisorctl) would be a better bet here.
Installation: sudo apt-get install supervisor
The main configuration file of supervisor is here: /etc/supervisor/supervisord.conf
Run vim /etc/supervisor/supervisord.conf to inspect it. At the bottom of the file, you'll notice:
[include]
files = /etc/supervisor/conf.d/*.conf
This basically means that config files for your projects can be stored in /etc/supervisor/conf.d/ and they will be automatically included.
Run sudo vim /etc/supervisor/conf.d/myapp.conf. Your configuration may look like this:
[program:myapp]
command={{ your celery commands without curly braces }}
directory=/directory/to/myapp
autostart=true
autorestart=true
stderr_logfile=/var/log/myapp.err.log
stdout_logfile=/var/log/myapp.out.log
To restart the service: sudo service supervisor restart
To re-read after making updates to any *.conf file: sudo supervisorctl reread
To apply the updates: sudo supervisorctl update
To check the status of a specific program: sudo supervisorctl status myapp
Check your log files for more status data.

Related

Celery worker displaying "unknown option -A" on windows

My Celery worker suddenly stopped working, displaying an error message saying unknown option -A.
I am running Celery 5.0.0 on Windows within a Python virtual environment.
The command is
pipenv run celery worker -A <celery_file> -l info
Error message is as follows:
Usage: celery worker [OPTIONS]
Try 'celery worker --help' for help.
Error: no such option: -A
Please let me know why this error is occurring, as I am unable to find the cause of it.
The worker subcommand has no -A flag; I think you want to pass it at the celery level, before worker.
Like this:
pipenv run celery -A <celery_file> worker -l info
Now, I am not on Windows so I can't verify, but this is in line with the commands in the official documentation on workers:
$ celery -A proj worker -l info
3.1.25 was the last version that worked on Windows (just tested on my Win10 machine):
pip install celery==3.1.25
In your Python interpreter, type the following commands:
>>> import os
>>> import sys
>>> os.path.dirname(sys.executable)
'C:\\python\\python'
Note that Celery has dropped support for Windows (since v4).
"c:\python\python" -m celery -A your-application worker -Q your-queue -l info --concurrency=300
or, using another format:
celery worker --app=app.app --pool=your-pool --loglevel=INFO
The correct way (for those using pipenv) to start the worker should be something like pipenv run celery -A <package.module> worker -l info. Note that -A comes before the worker command, as it is a general Celery option. Look at pipenv run celery --help for more details.
Also, I notice you use the latest Celery 5.0.0; they have changed the command-line handling, so switching to 5.0.0 may cause problems with some of your old startup scripts.

ERROR: CANT_REREAD: The directory named as part of the path /home/app/logs/celery.log does not exist

I'm following a tutorial on how to use Celery on my Django production server.
When I get to the bit where it says:
Now reread the configuration and add the new process:
sudo supervisorctl reread
sudo supervisorctl update
When I run sudo supervisorctl reread in my server (Ubuntu 16.04) terminal, it returns this:
ERROR: CANT_REREAD:
The directory named as part of the path /home/app/logs/celery.log does not exist.
in section 'app-celery' (file: '/etc/supervisor/conf.d/app-celery.conf')
I've followed all of the instructions prior to this, including installing supervisor and creating a file named app-celery.conf (the tutorial calls it mysite-celery.conf) in the folder /etc/supervisor/conf.d.
If you're curious my app-celery.conf file looks like this:
[program:app-celery]
command=/home/app/bin/celery worker -A draft1 --loglevel=INFO
directory=/home/app/draft1
user=zorgan
numprocs=1
stdout_logfile=/home/app/logs/celery.log
stderr_logfile=/home/app/logs/celery.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
stopasgroup=true
; Set Celery priority higher than default (999)
; so, if rabbitmq is supervised, it will start first.
priority=1000
Any idea what the problem is?
Supervisor is not able to create the folder /home/app/logs/ on its own.
You can create it manually using mkdir and then restart the supervisor service:
mkdir /home/app/logs
sudo service supervisor restart
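Note that supervisor will not create a missing log directory itself; it only opens the logfile inside an existing one. As a quick sanity check of the fix (using a scratch directory as a stand-in for the real /home/app/logs, so the demo does not touch your home directory):

```shell
# Scratch stand-in for /home/app/logs; mkdir -p creates all missing
# parents and is a no-op if the directory already exists.
LOGDIR="$(mktemp -d)/home/app/logs"
mkdir -p "$LOGDIR"
touch "$LOGDIR/celery.log"   # supervisor could now open its logfile here
```

In the real fix, run mkdir -p /home/app/logs (with sudo if needed) so both stdout_logfile and stderr_logfile can be opened.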
I added my username to the supervisord.conf file under the [unix_http_server] section like so:
[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0770 ; socket file mode (default 0700)
chown=appuser:supervisor ;(username:group)
This seemed to work; time will tell if it continues working after I manage to solve the rest of the supervisor issues.

Autoreload flask on file save using heroku local [duplicate]

Finally I migrated my development env from runserver to gunicorn/nginx.
It'd be convenient to replicate the autoreload feature of runserver with gunicorn, so the server automatically restarts when the source changes. Otherwise I have to restart the server manually with kill -HUP.
Any way to avoid the manual restart?
While this is an old question, you should know that ever since version 19.0, gunicorn has had the --reload option.
So now no third-party tools are needed.
One option would be to use --max-requests to limit each spawned process to serving only one request, by adding --max-requests 1 to the startup options. Every newly spawned process will see your code changes, and in a development environment the extra startup time per request should be negligible.
Bryan Helmig came up with this and I modified it to use run_gunicorn instead of launching gunicorn directly, to make it possible to just cut and paste these 3 commands into a shell in your django project root folder (with your virtualenv activated):
pip install watchdog -U
watchmedo shell-command --patterns="*.py;*.html;*.css;*.js" --recursive --command='echo "${watch_src_path}" && kill -HUP `cat gunicorn.pid`' . &
python manage.py run_gunicorn 127.0.0.1:80 --pid=gunicorn.pid
I use git push to deploy to production and set up git hooks to run a script. The advantage of this approach is you can also do your migration and package installation at the same time. https://mikeeverhart.net/2013/01/using-git-to-deploy-code/
mkdir -p /home/git/project_name.git
cd /home/git/project_name.git
git init --bare
Then create a script /home/git/project_name.git/hooks/post-receive.
#!/bin/bash
GIT_WORK_TREE=/path/to/project git checkout -f
source /path/to/virtualenv/bin/activate
pip install -r /path/to/project/requirements.txt
python /path/to/project/manage.py migrate
sudo supervisorctl restart project_name
Make sure to chmod u+x post-receive, and add the user to sudoers so it can run sudo supervisorctl without a password. https://www.cyberciti.biz/faq/linux-unix-running-sudo-command-without-a-password/
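For the passwordless sudo part, the sudoers entry might look like this sketch (always edit with visudo; the user name git and the supervisorctl path are assumptions, check yours with which supervisorctl):

```ini
# /etc/sudoers.d/git-deploy (hypothetical file name and user)
git ALL=(ALL) NOPASSWD: /usr/bin/supervisorctl
```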
From my local / development server, I set up git remote that allows me to push to the production server
git remote add production ssh://user_name@production-server/home/git/project_name.git
# initial push
git push production +master:refs/heads/master
# subsequent push
git push production master
As a bonus, you will get to see all the prompts as the script is running. So you will see if there is any issue with the migration/package installation/supervisor restart.

how to use celeryd.conf file provided in production

This is the file provided at https://github.com/ask/django-celery/blob/master/contrib/supervisord/celeryd.conf. How can I run this conf file?
I am running my django app using gunicorn
; =======================================
; celeryd supervisor example for Django
; =======================================
[program:celery]
command=/path/to/project/manage.py celeryd --loglevel=INFO
directory=/path/to/project
user=nobody
numprocs=1
stdout_logfile=/var/log/celeryd.log
stderr_logfile=/var/log/celeryd.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
Thanks
That configuration file can't be run on its own; it is for use with supervisord.
You need to install supervisord (if you have pip, use pip install supervisor), create a configuration file for it with echo_supervisord_conf, and then paste the contents of the file above into your supervisord configuration file, as described in the supervisord documentation under Adding a program.
So basically run the following at your shell:
pip install supervisor
echo_supervisord_conf | sudo tee /etc/supervisord.conf
wget -O - -o /dev/null https://raw.github.com/ask/django-celery/master/contrib/supervisord/celeryd.conf | sudo tee -a /etc/supervisord.conf
sudo $EDITOR /etc/supervisord.conf
and edit the config file to your heart's content.

nginx and supervisor setup in Ubuntu

I set up django-gunicorn-nginx by following this tutorial: http://ijcdigital.com/blog/django-gunicorn-and-nginx-setup/ Up to the nginx setup, everything works. Then I installed supervisor and configured it, rebooted my server, and checked; it shows 502 Bad Gateway. I'm using Ubuntu 12.04 LTS.
/etc/supervisor/conf.d/qlimp.conf
[program: qlimp]
directory = /home/nirmal/project/qlimp/qlimp.sh
user = nirmal
command = /home/nirmal/project/qlimp/qlimp.sh
stdout_logfile = /path/to/supervisor/log/file/logfile.log
stderr_logfile = /path/to/supervisor/log/file/error-logfile.log
Then I restarted supervisor and ran supervisorctl start qlimp, and I'm getting this error:
unix:///var/run/supervisor.sock no such file
Is there any problem in my supervisor setup?
Thanks!
That there is no socket file probably means that supervisor isn't running. One reason might be that your qlimp.conf file has some sort of error in it. If you run
sudo service supervisor start
you can see whether or not this is the case. If supervisor is already running, it will say so. And if it is hitting an error, it will usually give you a more helpful error message than supervisorctl.
I have met the same issue, and after several attempts, here is the solution:
1. Remove the apt-get supervisor version:
sudo apt-get remove supervisor
2. Find and kill any supervisor process still running in the background:
sudo ps -ef | grep supervisor
3. Get the newest version (the apt-get version was 3.0a8):
sudo pip install supervisor==3.0b2
(or the easy_install equivalent)
4. Echo the config file (requires root permission):
echo_supervisord_conf > /etc/supervisord.conf
5. Start supervisord:
sudo supervisord
6. Enter supervisorctl:
sudo supervisorctl
Everything is done. Have fun!
Try this
cd /etc/supervisor
sudo supervisord
sudo supervisorctl restart all
Are you sure that supervisord is installed and running? Is there a socket file present at /var/run/supervisor.sock?
The error indicates that supervisorctl, the control CLI, cannot reach the UNIX socket to communicate with supervisord, the daemon.
You could also check /etc/supervisor/supervisord.conf and see if the values for the unix_http_server and supervisorctl sections match.
Note that this is an Ubuntu-level problem, not a problem with Python, Django or nginx, and as such this question probably belongs on Server Fault.
On Ubuntu 16+ this seems to be caused by the switch to systemd; this workaround may fix it for new servers:
# Make sure Supervisor comes up after a reboot.
$ sudo systemctl enable supervisor
# Bring Supervisor up right now.
$ sudo systemctl start supervisor
and then check the status of your program (iconic.conf in my example):
$ sudo supervisorctl status iconic
PS: Make sure gunicorn does not have any problems of its own while running.
The error may be because you don't have the required privilege.
You may be able to fix it this way: open your terminal, run vim /etc/supervisord.conf to edit the file, and find these lines:
[unix_http_server]
;file=/tmp/supervisor.sock ; (the path to the socket file)
;chmod=0700 ; socket file mode (default 0700)
and delete the semicolon at the start of ;file=/tmp/supervisor.sock and ;chmod=0700, then restart supervisord.
Make sure that in /etc/supervisord.conf the following two sections exist:
[unix_http_server]
file=/tmp/supervisor.sock ; path to your socket file
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
You can do something like this:
sudo touch /var/run/supervisor.sock
sudo chmod 777 /var/run/supervisor.sock
sudo service supervisor restart
It definitely works; try this.
In my case, Supervisor was not running. To spot the issue I ran:
sudo systemctl status supervisor.service
The problem was that I had my logs pointing to a non-existing directory, so I just had to create it.
I hope it helps :)
touch /var/run/supervisor.sock
sudo supervisord -c /etc/supervisor/supervisord.conf
and afterwards:
supervisorctl restart all
If you want to find the running supervisord process:
ps -ef | grep supervisord
If you want to kill that process (using the PID from the output above, e.g. 2503):
kill -s SIGTERM 2503
Create a conf file and add the lines below.
Remember that in order to run Nginx under supervisor, you have to disable the autostart on system boot that was activated when you installed Nginx.
https://askubuntu.com/questions/177041/nginx-disable-autostart
Note: all processes managed by supervisor must run in the foreground ("daemon off" mode).
[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
autostart=true
autorestart=true
startretries=5
stopasgroup=true
stopsignal=QUIT
numprocs=1
startsecs=0
process_name=WebServer(Nginx)
stderr_logfile=/var/log/nginx/error.log
stderr_logfile_maxbytes=10MB
stdout_logfile=/var/log/nginx/access.log
stdout_logfile_maxbytes=10MB
sudo supervisorctl reread && sudo supervisorctl update
I have faced this error several times.
If the server is a newly created instance:
It might be because of some wrong config, a mistake made during setup, or supervisor not being enabled.
Try restarting your supervisor and reconnecting to the EC2 instance,
or
try reinstalling supervisor (@Scen),
or
try the approaches mentioned by @Yuvaraj Loganathan and @Dinesh Sunny. Mostly, though, you might end up creating a new instance.
If the server was running perfectly for a long time but then suddenly stopped
and threw
unix:///var/run/supervisor.sock no such file on sudo supervisorctl status:
It may be due to high memory usage; in my case, the usage reported in the login banner (shown at the top after connecting to the EC2 instance via ssh or PuTTY) was 99% of 7.69 GB.
You can upgrade your EC2 instance, or delete extra files such as logs (/var/log) or zips to free up space. Be careful not to delete any system files.
Restart supervisor
sudo service supervisor restart
Check sudo supervisorctl status
