Supervisord error "child process was not spawned" - python

I created a bash script, start_queue.sh, to run a Python script.
Content of start_queue.sh:
python /tmp/my_python_script.py &
It works when I run it in a terminal, but I want to manage it using supervisord, since I already have a few Django websites managed by supervisord.
But I just get this error on start:
supervisor: couldn't exec /tmp/start_queue.sh: ENOEXEC
supervisor: child process was not spawned
This is how I configured it in supervisord:
[group:scriptgroup]
programs=script_1
[program:script_1]
command=/tmp/start_queue.sh
directory=/tmp/
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/x.log
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=50
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
loglevel=info
Is it possible to manage a background process with supervisord? What did I do wrong here? Please help.

Add #!/bin/sh at the beginning of the script. supervisord execs the script file directly, and without a shebang line the kernel does not know how to execute it; that is exactly what the ENOEXEC ("exec format error") error means.

You also need your managed program to stay in the foreground. The trailing & in your script detaches the Python process, so supervisord ends up supervising only the short-lived shell, and killing that shell does not kill the worker it spawned. Drop the & (or point command directly at the Python script).
Note: supervisor is a Python package that controls processes.
For more, see the supervisord documentation.
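Putting the two answers together, a corrected start_queue.sh might look like this (a sketch based on the script in the question):
#!/bin/sh
# Run the worker in the foreground; exec replaces the shell with the
# Python process, so supervisord's stop/restart signals reach it directly
exec python /tmp/my_python_script.py
Then make it executable with chmod +x /tmp/start_queue.sh. With the & removed, supervisord supervises the Python process itself, and STOP and RESTART behave as expected.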

Related

Python Daemon dies

I created a daemon process with this library (link to site).
I connect through SSH and start the process with python myDaemon.py start.
I use a loop within the daemon method to do my tasks, but as soon as I log out, the daemon stops (dies).
Does this happen because I save the PID file under my user and not in the root folder?
Anyone have an idea? I can deliver code, but not now at thread creation. (+3h)
Use a shebang line in your Python script and make it executable with:
chmod +x test.py
Use nohup ("no hangup") to keep a program running in the background even after you close your terminal:
nohup /path/to/test.py &
Do not forget the & to put it in the background.
To find the process again, run:
ps ax | grep test.py
Another way would be to make it an Upstart job, as sketched below.
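A minimal sketch, assuming an Upstart-based system and using a hypothetical job name and script path (saved as /etc/init/mydaemon.conf):
# /etc/init/mydaemon.conf -- hypothetical job name and script path
description "my python daemon"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /usr/bin/python /path/to/myDaemon.py
Upstart then keeps the process alive across logouts, restarts it if it dies, and you control it with sudo start mydaemon and sudo stop mydaemon.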

Launching a program through an sh script in Supervisord

I have a script run.sh which launches a python pub-sub listener as follows:
export MY_ENV_VAR='/root/config/'
python /usr/local/lib/python2.7/dist-packages/listener/main.py
And I set up Supervisord to run my script as follows:
[program:Listener]
command=/bin/bash run.sh
directory=/root/listener
process_name=%(program_name)s
autostart=true
autorestart=true
startretries=3
My question is: when I go to my Supervisord UI at port 9001 and press STOP next to the Listener line, do I really stop my listener? I have the impression that, since supervisord is pointing to the .sh script, it does not stop the Python script when I click STOP.
You can try specifying the
stopasgroup=true
parameter in your configuration file, so that supervisord sends the stop signal to the whole process group, child processes included:
http://supervisord.org/configuration.html
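Applied to the configuration above, only the last line is new (per the supervisord docs, stopasgroup=true also implies killasgroup=true, so the same holds when supervisord has to escalate to SIGKILL):
[program:Listener]
command=/bin/bash run.sh
directory=/root/listener
process_name=%(program_name)s
autostart=true
autorestart=true
startretries=3
stopasgroup=true
With this set, pressing STOP in the web UI signals the whole process group, so the Python listener started by run.sh is stopped along with the shell.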

Deploy Django project with Celery

After I deploy my Django project, all I need to do is touch the uwsgi_touch file, and uWSGI will gracefully restart its workers. But what about Celery? Right now I just restart Celery manually whenever the code base of the Celery tasks changes. But even doing it manually, I still can't be sure that I won't kill a running Celery task.
Any solutions?
A better way to manage Celery workers is to use Supervisor:
$ pip install supervisor
$ cd /path/to/your/project
$ echo_supervisord_conf > supervisord.conf
Add this to your supervisord.conf file:
[program:celeryworker]
command=/path/to/celery worker -A yourapp -l info
stdout_logfile=/path/to/your/logs/celeryd.log
stderr_logfile=/path/to/your/logs/celeryd.log
Now start Supervisor with the supervisord command in your terminal, and use supervisorctl to manage the process.
To restart, you can do:
$ supervisorctl restart celeryworker
I found the answer in the Celery FAQ:
http://docs.celeryproject.org/en/2.2/faq.html#how-do-i-shut-down-celeryd-safely
Use the TERM signal, and the worker will finish all currently
executing jobs and shut down as soon as possible. No tasks should be
lost.
You should never stop celeryd with the KILL signal (-9), unless you’ve
tried TERM a few times and waited a few minutes to let it get a chance
to shut down. As if you do, tasks may be terminated mid-execution, and
they will not be re-run unless you have the acks_late option set
(Task.acks_late / CELERY_ACKS_LATE).
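Tying the two answers together: supervisord already sends TERM on stop and restart by default, so you can give the worker time to finish its current tasks before supervisord escalates to KILL by raising stopwaitsecs. A sketch, reusing the program block from the first answer (the 600-second value is an arbitrary assumption):
[program:celeryworker]
command=/path/to/celery worker -A yourapp -l info
stdout_logfile=/path/to/your/logs/celeryd.log
stderr_logfile=/path/to/your/logs/celeryd.log
; wait up to 10 minutes after TERM before supervisord sends KILL
stopwaitsecs=600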

Supervisord process control - stopping a single subprocess

We're using Supervisord to run workers started by our Gearman job server. To remove a job from the queue, we have to run:
$ sudo killall supervisord
to kill all Supervisord subprocesses so the job doesn't respawn when removed, then
$ gearman -n -w -f FUNCTION_NAME > /dev/null
to remove the job completely from the server.
Is there a way to kill only one Supervisord subprocess instead of using killall? For instance, if we have multiple jobs running and a single job is running longer than it should, or starts throwing errors, how can we kill the subprocess and remove the job from the server without killing all subprocesses?
Yes: use supervisorctl to interact with supervisord. If you need to do so programmatically, there's a web service interface; see the sketch below.
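For example, assuming a program named worker_a in the supervisord configuration:
$ supervisorctl stop worker_a
And programmatically, via the XML-RPC web service interface (a Python 2 sketch matching the era of these posts; it assumes the inet_http_server section is enabled on port 9001):
import xmlrpclib

# connect to supervisord's XML-RPC endpoint
server = xmlrpclib.Server('http://localhost:9001/RPC2')
# stop just the one misbehaving subprocess; the name is hypothetical
server.supervisor.stopProcess('worker_a')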

How to run server script indefinitely

I would like to run an asynchronous program on a remote Linux server indefinitely. This script doesn't output anything to the server itself (other than occasionally writing information to a MySQL database). So far the only option I have been able to find is the nohup command:
nohup script_name &
From what I understand, nohup allows the command to keep running even after I log out of my SSH session, while the '&' character lets the command run in the background. My question is simple: is this the best way to do what I would like? I am only trying to run a single script for long periods of time, occasionally stopping it to make updates.
Also, if nohup is indeed the best option, what is the proper way to terminate the script when I need to? There seems to be some disagreement over what is the best way to kill a nohup process.
Thanks
What you are basically asking is "How do I create a daemon process?" What you want to do is "daemonize", there are many examples of this floating around on the web. The process is basically that you fork(), the child creates a new session, the parent exits, the child duplicates and then closes open file handles to the controlling terminal (STDIN, STDOUT, STDERR).
There is a package available that will do all of this for you called python-daemon.
To perform graceful shutdowns, look at the signal library for creating a signal handler.
Also, searching the web for "python daemon" will bring up many reimplementations of the common C daemonizing process: http://code.activestate.com/recipes/66012/
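A minimal sketch using python-daemon together with a signal handler (run() and the sleep interval are placeholders):
import signal
import time

import daemon

def shutdown(signum, frame):
    # clean up here if needed, then exit gracefully
    raise SystemExit(0)

def run():
    while True:
        # ... do the periodic work here ...
        time.sleep(60)

# DaemonContext forks, detaches from the controlling terminal and closes
# stdio; signal_map installs the handler inside the daemonized process
with daemon.DaemonContext(signal_map={signal.SIGTERM: shutdown}):
    run()
Because the process is fully detached from the terminal, logging out of the SSH session no longer affects it, and kill -TERM <pid> shuts it down cleanly.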
If you can modify the script, then you can catch SIGHUP signals and avoid the need for nohup. In a bash script you would write:
trap " echo ignoring hup; " SIGHUP
You can employ the same technique to terminate the program: catch, say, a SIGUSR1 signal in a handler, set a flag and then gracefully exit from your main loop. This way you can send this signal of your choice to stop your program in a predictable way.
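In Python, that flag-based technique looks roughly like this (a sketch; the loop body is a placeholder):
import signal
import time

stop_requested = False

def handle_usr1(signum, frame):
    # only record the request; the main loop exits at a safe point
    global stop_requested
    stop_requested = True

signal.signal(signal.SIGUSR1, handle_usr1)

while not stop_requested:
    # ... do one unit of work ...
    time.sleep(1)
Sending kill -USR1 <pid> then stops the program predictably, at a point you chose.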
There are some situations when you want to execute/start a script on a remote machine/server (one that will terminate automatically) and disconnect from the server.
e.g., a script running on a box which, when executed: 1) takes a model and copies it to a cluster (remote server), 2) creates a script for running a simulation with the model and pushes it to the server, 3) starts that script on the server and disconnects, and 4) the script thus started runs the simulation on the server and, once completed (it can take days), copies the results back to the client.
I would use the following command:
ssh remoteserver 'nohup /path/to/script </dev/null >nohup.out 2>&1 &'
eg:
echo '#!/bin/bash
rm -rf statuslist
mkdir statuslist
chmod u+x ~/monitor/concat.sh
chmod u+x ~/monitor/script.sh
nohup ./monitor/concat.sh &
' > script.sh
chmod u+x script.sh
rsync -azvp script.sh remotehost:/tmp
ssh remotehost '/tmp/script.sh </dev/null >nohup.out 2>&1 &'
Hope this helps ;-)
That is the simplest way to do it if you want to (or have to) avoid changing the script itself. If the script is always to be run like this, you can write a mini script containing the line you just typed and run that instead (or use an alias, if appropriate); for example:
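A hypothetical two-line wrapper:
#!/bin/sh
# start_script.sh -- wraps the nohup invocation so you don't retype it
nohup /path/to/script_name &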
To answer your second question:
$ nohup ./test &
[3] 11789
$ Sending output to nohup.out
$ jobs
[1]- Running emacs *h &
[3]+ Running nohup ./test &
$ kill %3
$ jobs
[1]- Running emacs *h &
[3]+ Exit 143 nohup ./test
Ctrl+C works too (it sends a SIGINT), as does kill (which sends a SIGTERM by default).
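Note that jobs and kill %3 only work from the shell that started the job. After logging back in over SSH, find the process by name or PID first (the script name here is a placeholder):
$ pgrep -f script_name
11789
$ kill 11789
Plain kill sends SIGTERM, which gives the script a chance to exit cleanly (or to run a signal handler like the one described above).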
