I have a script that activates a Python venv and runs a server in the background. I save the PID when I start the process and kill that PID once I am done, but the process does not always get killed.
My question is: can I run the process under a name, and then kill it using pkill name afterwards? And how would that look?
#!/bin/bash
ROOT_DIR=$(pwd)
activate(){
  source "$ROOT_DIR/.venv/bin/activate"
  python3 src/server.py -l & pid=$!  # <== This is the process
  python3 src/client.py localhost 8080
}
activate
sleep 10
kill "$pid"
printf "\n\nServer is done, terminating processes..."
You can run programs with a specific command name by using the bash builtin exec. Note that exec replaces the shell with the command, so you have to run it in a subshell environment like:
( exec -a my_new_name my_old_command ) &
However, it probably won't help you much, because this sets the command line (argv[0]), which is apparently different from the command name. Executing the above snippet will show your process as "my_new_name" in top or htop, for example, but pkill and killall filter by the command name by default and will thus not find a process called "my_new_name". (pkill -f, which matches against the full command line instead, should still find it.)
While it is interesting how one can start a command with a different name than the executable, it is most likely not the cause of your problem. PIDs never change, so I assume the problem lies somewhere else.
My best guess is that the server binds a socket to listen on a specific port. If the program is killed rather than shut down gracefully, the port remains occupied (typically because the old socket lingers in the TIME_WAIT state) and is only freed by the kernel after some time. If the program is restarted after a short period, it finds the port already occupied and prints a misleading message saying it is already running. If that is indeed the cause of your problem, I would strongly consider implementing a graceful shutdown for the server (even just closing the socket in a destructor or something similar could help).
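If that is the case, setting SO_REUSEADDR on the listening socket and closing it from a SIGTERM handler usually avoids the "address already in use" symptom. A minimal sketch of the idea — the host, port, and echo handler are assumptions for illustration, not taken from the actual server.py:

import signal
import socket
import sys

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Allow rebinding the port even while the old socket lingers in TIME_WAIT.
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("localhost", 8080))  # assumed host/port
    srv.listen(1)

    def shutdown(signum, frame):
        srv.close()  # release the port right away
        sys.exit(0)

    # kill <pid> sends SIGTERM by default, so handle it for a clean exit.
    signal.signal(signal.SIGTERM, shutdown)

    while True:
        conn, _addr = srv.accept()
        conn.sendall(b"hello\n")  # placeholder for the real protocol
        conn.close()

if __name__ == "__main__":
    main()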
I think you should use systemd for this case:
https://github.com/torfsen/python-systemd-tutorial
I've coded a stock trading bot in Python 3. I have it hosted on a server (Ubuntu 18.10) that I use iTerm to SSH into. I'm wondering how to keep the script actively running so that when I exit my session it won't kill the active process.
Basically, I want to SSH into my server, start the script, then close out and come back when the day's over to stop the process.
You could use nohup and add & at the end of your command to safely exit your session without killing the original process. For example, if your script's name is script.py:
nohup python3 script.py &
Normally, when you run a command with & and exit the shell afterwards, the shell terminates the sub-command with the hangup signal (kill -SIGHUP <pid>). This can be prevented with nohup, which catches the signal and ignores it so that it never reaches the actual application.
You can use screen
sudo apt-get install screen
screen
./run-my-script
Press Ctrl-A then D to detach from your screen.
From there you will be able to close your SSH terminal. Come back later and run:
screen -ls
screen -r $screen_running
$screen_running is usually the first five digits you see after listing the screens. You can then check whether your script is still running, and if you've added logging, see where in the process you are.
Using tmux is a good option. Alternatively, you can run the command with an & at the end, which lets it run in the background.
https://tmuxcheatsheet.com/
I came here looking for nohup python3 script.py &.
The right solution for this thread is screen or tmux. :)
The task is to use Python to run a remote process in the background and immediately close the SSH session.
I have a remote script named 'start' under server:PATH/. The 'start' script does nothing but launch a long-lived background program; it has one line:
nohup PATH/Xprogram &
When I use the Python subprocess module to call my remote 'start' script, it does start OK. But the issue is: the SSH connection seems to persist, since I am getting stdout from the remote Xprogram (it is a long-lived program that writes to stdout). Does this indicate the SSH connection is still there?
All I need is to call the start script without blocking and then forget about it (leave the long-lived program running, close SSH, release resources).
My Python call looks like this:
ret = subprocess.Popen(["ssh", "xxx@servername", "PATH/start"])
If I use ret.terminate() after the command, it kills the long-lived program too.
I have also tried the spur module; basically the same thing happens.
===== update =====
@Dunes' answer solves the problem. Based on his answer, I did more digging and found this link very helpful.
My understanding of this is: basically, if any file descriptor is still held by your process (e.g. stdout held by my XProgram), the SSH session won't exit. Redirecting stdout/stderr to /dev/null effectively closes those file descriptors and lets the SSH session exit normally.
Solution:
ret = subprocess.Popen(["ssh", "xxx@servername", "PATH/start >/dev/null 2>&1"])
After playing about a bit, I found that nohup doesn't seem to properly disconnect the child process from the parent SSH session (as it should). This means you have to manually close stdout or point it at a file, e.g.
Using bash:
ssh user@host "nohup PATH/XProgram >&- &"
Shell agnostic (as far as I know):
ssh user@host "nohup PATH/XProgram >/dev/null 2>&1 &"
In python:
from shlex import split
from subprocess import Popen
p = Popen(split('ssh user@host "nohup PATH/XProgram >&- &"'))
p.communicate() # returns (None, None)
Try
subprocess.Popen(["ssh", "xxx#servername", "nohup PATH/start & disown"])
For me,
subprocess.Popen(["ssh", "xxx#servername", "nohup sleep 1000 & disown"])
lets my script exit immediately while leaving sleep running on the server for a while.
When your script dies, an ssh process is left on your system, but killing it doesn't kill the remote process.
I need to keep my Python program running in the background on my Raspberry Pi after I close the SSH connection, because I need to save Phidget information to a SQL DB.
I tried to do this with nohup, but it seems the Python program isn't even executed, because when I look into the MySQL DB after doing the below, nothing has been inserted.
I type in:
pi@raspi ~/MyProjekt $ sudo nohup python sensorReader.py &
[1] 8580
and when I try to check whether this process exists with:
ps -A | grep 8580
it returns nothing.
Am I doing something wrong?
How can I keep the Python program running after closing the SSH connection?
I would recommend running your Python program from a cron @reboot job.
To edit your root cronjobs use
sudo crontab -e
And add the line
@reboot sudo python full_path/MyProjekt/sensorReader.py
Then reboot your pi with:
sudo reboot
And then confirm that your process is running:
ps -aux | grep python
I don't think this is an SSH connection issue; from what you say, the program seems to execute and exit. Does your .py run in an infinite loop? If not, you shouldn't expect it to stay alive.
Then, about keeping a process alive after the parent has terminated (the shell, in your case): nohup is the answer. It means ignoring HUP signals (those sent when the terminal or parent session goes away).
The & just means 'execute in the background'.
The cron solution is good if your program is meant to do something periodically, but if it should stay alive waiting for some event (like listening on a socket), I would prefer to create an init script, so that the program runs as a daemon at boot time and only in the desired runlevels.
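To illustrate the infinite-loop point: a program shaped like the sketch below stays alive in the background until it is killed, whereas a script that inserts one row and returns will exit immediately, nohup or not. read_sensor and insert_row are hypothetical stand-ins for the Phidget and MySQL code:

import time

def read_sensor():
    return 42  # hypothetical stand-in for the Phidget read

def insert_row(value):
    print("inserting", value)  # hypothetical stand-in for the MySQL INSERT

# The loop is what keeps the process alive after the SSH session closes;
# without it the script finishes and exits right away.
while True:
    insert_row(read_sensor())
    time.sleep(60)  # assumed sampling interval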
In a project I am working on, there is some code that starts up a long-running process using sudo:
subprocess.Popen(['sudo', '/usr/bin/somecommand', ...])
I would like to clean up this process when the parent exits. Currently, the subprocess keeps running when the parent exits (re-parented to init, of course).
I am not sure of the best solution to this problem. The code is limited to only running certain commands via sudo, and granting blanket authority to run sudo kill would be sketchy at best.
I don't have an open pipe to the child process that I can close (the child process is not reading from stdin), and I am not able to modify the code of the child process.
Are there any other mechanisms that might work in this situation?
First of all, I'll just answer the question. Though I do not think it is a good thing to do, it is what you asked for: I would wrap that child process in a small program that listens on stdin. You can then sudo that program; it can run the process, knows its PID, and has the rights needed to kill the process when you ask it to through stdin.
However, such a situation generally means sudo with no password and poor security. The more common technique is to lower your program's privileges, not to elevate them. In that case you create a runner program that is started by the superuser; it starts your main program with lowered privileges and listens on a pipe. When a command needs to be run, your main program tells the runner, and the runner does the job; when the command needs to be terminated, you again tell the runner via the pipe.
The common rules are:
If you need superuser rights, give them to the topmost parent process.
If a child process needs to perform a privileged operation, it asks the top-level process to do it on its behalf.
The top-level process should be kept as small as possible and do as little as possible. The larger it is, the more security holes it creates.
That's what many applications do. The first example that comes to mind is the Apache web server (at least on *nix), which has a small top-level program and pre-forked worker processes that do not run as root/wheel/whatever the superuser is.
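A minimal sketch of the wrapper idea from the first paragraph — a small program you run under sudo that starts the command and kills it when told to over stdin; the "stop" protocol is an assumption for illustration:

import subprocess
import sys

# This wrapper itself runs under sudo; it starts the privileged command
# and keeps the right to kill it, so the unprivileged parent never needs
# a blanket "sudo kill".
proc = subprocess.Popen(["/usr/bin/somecommand"])

# Block until the parent sends "stop" -- or until the parent exits,
# which closes our stdin pipe and makes readline() return "".
line = sys.stdin.readline()
if line.strip() == "stop" or line == "":
    proc.terminate()  # try SIGTERM first ...
    try:
        proc.wait(timeout=5)
    except subprocess.TimeoutExpired:
        proc.kill()  # ... then SIGKILL if it refuses to die

Because the wrapper reads from stdin, the child is also cleaned up when the parent simply exits and the pipe closes, which is exactly the cleanup-on-exit behaviour the question asks for.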
This will raise OSError: [Errno 1] Operation not permitted on the last line:
import subprocess

p = subprocess.Popen(['sudo', '/usr/bin/somecommand', ...], stdout=subprocess.PIPE)
print(p.stdout.read())
p.terminate()
Assuming sudo will not ask for a password, one workaround is to make a shell script which calls sudo …
#!/bin/sh
sudo /usr/bin/somecommand
… and then do this in Python:
p = subprocess.Popen("/path/to/script.sh", cwd="/path/to")
print p.stdout.read()
p.terminate()
I would like to run an asynchronous program on a remote Linux server indefinitely. The script doesn't output anything to the server itself (other than occasionally writing information to a MySQL database). So far the only option I have been able to find is the nohup command:
nohup script_name &
From what I understand, nohup lets the command keep running even after I log out of my SSH session, while the & character lets the command run in the background. My question is simple: is this the best way to do what I would like? I am only trying to run a single script for long periods of time, occasionally stopping it to make updates.
Also, if nohup is indeed the best option, what is the proper way to terminate the script when I need to? There seems to be some disagreement over the best way to kill a nohup process.
Thanks
What you are basically asking is "How do I create a daemon process?" What you want to do is "daemonize"; there are many examples of this floating around on the web. The process is basically: fork(), the child creates a new session, the parent exits, and the child duplicates and then closes the open file handles to the controlling terminal (stdin, stdout, stderr).
There is a package available that will do all of this for you called python-daemon.
To perform graceful shutdowns, look at the signal library for creating a signal handler.
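A minimal sketch combining both suggestions, assuming the python-daemon package (pip install python-daemon) is installed; do_work is a hypothetical stand-in for the script's real main loop:

import signal
import time

import daemon

running = True

def stop(signum, frame):
    # SIGTERM handler: flip the flag so the main loop exits cleanly.
    global running
    running = False

def do_work():
    time.sleep(10)  # hypothetical stand-in for the real work

# DaemonContext performs the classic daemonizing dance described above:
# fork, new session, detach from the controlling terminal.
with daemon.DaemonContext(signal_map={signal.SIGTERM: stop}):
    while running:
        do_work()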
Also, searching the web for "python daemon" will bring up many reimplementations of the common C daemonizing process: http://code.activestate.com/recipes/66012/
If you can modify the script, then you can catch SIGHUP signals and avoid the need for nohup. In a bash script you would write:
trap " echo ignoring hup; " SIGHUP
You can employ the same technique to terminate the program: catch, say, a SIGUSR1 signal in a handler, set a flag, and then gracefully exit from your main loop. This way you can send a signal of your choice to stop your program in a predictable way, as sketched below.
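Since the thread is mostly about Python scripts, here is the same idea in Python — ignore SIGHUP, and treat SIGUSR1 as a polite stop request. The flag-checking loop is a sketch, not the asker's actual code:

import signal
import time

stop_requested = False

def request_stop(signum, frame):
    global stop_requested
    stop_requested = True

# Survive the terminal hangup without needing nohup ...
signal.signal(signal.SIGHUP, signal.SIG_IGN)
# ... and stop cleanly on `kill -USR1 <pid>`.
signal.signal(signal.SIGUSR1, request_stop)

while not stop_requested:
    time.sleep(1)  # placeholder for the real work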
There are some situations when you want to execute/start a script on a remote machine/server (which will terminate automatically) and disconnect from the server.
E.g. a script running on a box which, when executed, 1) takes a model and copies it to a cluster (remote server); 2) creates a script for running a simulation with the model and pushes it to the server; 3) starts the script on the server and disconnects; 4) the script thus started then runs the simulation on the server and, once completed (it will take days), copies the results back to the client.
I would use the following command:
ssh remoteserver 'nohup /path/to/script </dev/null >nohup.out 2>&1 &'
e.g.:
echo '#!/bin/bash
rm -rf statuslist
mkdir statuslist
chmod u+x ~/monitor/concat.sh
chmod u+x ~/monitor/script.sh
nohup ./monitor/concat.sh &
' > script.sh
chmod u+x script.sh
rsync -azvp script.sh remotehost:/tmp
ssh remotehost '/tmp/script.sh </dev/null >nohup.out 2>&1 &'
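If you are driving this from Python, as earlier in the thread, the same command can be issued with subprocess; a sketch assuming the paths from the example above:

import subprocess

# The redirections run on the remote side, so no file descriptor keeps the
# SSH channel open and the call returns as soon as the remote shell exits.
subprocess.run(
    ["ssh", "remotehost", "nohup /tmp/script.sh </dev/null >nohup.out 2>&1 &"],
    check=True,
)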
Hope this helps ;-)
That is the simplest way to do it if you want to (or have to) avoid changing the script itself. If the script is always to be run like this, you can write a mini script containing the line you just typed and run that instead (or use an alias, if appropriate).
To answer your second question:
$ nohup ./test &
[3] 11789
$ Sending output to nohup.out
$ jobs
[1]- Running emacs *h &
[3]+ Running nohup ./test &
$ kill %3
$ jobs
[1]- Running emacs *h &
[3]+ Exit 143 nohup ./test
Ctrl+C works too (it sends SIGINT), as does kill (which sends SIGTERM by default).