What are the alternatives to "python3 sample_program.py &" via ssh? - python

I am running a Python script, sample_program.py, on a remote machine via SSH. I log into the machine and run
python3 sample_program.py &
then log off with the command exit. Unfortunately, the script stops running after a few minutes.
What else could I use to run Python scripts remotely without keeping the terminal open?

nohup
nohup python3 sample_program.py &
is the simplest way (man nohup):
nohup - run a command immune to hangups, with output to a non-tty
and IMHO it is installed everywhere.
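For the script in the question, a minimal sketch that also keeps the output and remembers the process ID (the log and pid file names are just placeholders) would be:
nohup python3 sample_program.py > sample_program.log 2>&1 &
echo $! > sample_program.pid    # $! is the PID of the last background job; keep it so you can kill the script later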

at
You can use the at command. The at utility executes commands at a later time: it reads commands from standard input and groups them together as an at-job, to be executed later.
For more information, options, and examples, see the Ubuntu Manpage Repository.
Example (at reads the commands to run from standard input, so pipe them in):
echo "python3 sample_program.py" | at now + 8 hours
You can also use convenient shorthands, like tomorrow or noon, as in
echo "tweet fore" | at teatime
Independently of any terminal
ssh root@remoteserver '/root/backup.sh </dev/null >/var/log/root-backup.log 2>&1 &'
You need to close all file descriptors that are connected to the ssh socket, because the ssh session won't close as long as some remote process has the socket open. If you aren't interested in the script's output (presumably because the script itself takes care of writing to a log file), redirect it to /dev/null (but note that this will hide errors such as not being able to start the script).
Using nohup has no useful effect here. nohup arranges for the program it runs not to receive a HUP signal if the program's controlling terminal disappears, but here there is no terminal in the first place, so nothing is going to send a SIGHUP to the process out of the blue. Also, nohup redirects standard output and standard error (but not standard input) to a file, but only if they're connected to a terminal, which, again, they aren't.
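Adapted to the question (user name, host, and log path are placeholders), that looks like:
ssh user@remoteserver 'python3 /path/to/sample_program.py </dev/null >/tmp/sample_program.log 2>&1 &'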
You can set a cron job.
For example, if the time now is 14:39:00 and today is Friday, 30 August, you can add the following cron job (to be executed 8 hours from now) to your crontab file using the crontab -e command:
39 22 30 8 5 /path/to/python3 /path/to/sample_program.py
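If you would rather add the entry non-interactively instead of opening the crontab editor, a common idiom (paths are placeholders) is:
( crontab -l 2>/dev/null; echo "39 22 30 8 5 /usr/bin/python3 /path/to/sample_program.py" ) | crontab -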

Add the shebang to the start of your script:
#!/usr/bin/python3
Give it permission to execute (in this example the script itself is simply named python3, with no extension):
chmod +x python3
Execute it remotely:
sudo nohup ./python3 >/dev/null 2>&1 &
This way it runs as a background process, detached from the terminal, and you are not writing an unnecessary nohup.out file.
You do not even need the .py file extension in Linux, nor do you need to use more characters than needed:
python3 python3.py
is just the same as
./python3
It just needs the shebang and to be executable.

Related

continue program even after logout [duplicate]

I have a Python script, bgservice.py, and I want it to run all the time, because it is part of the web service I am building. How can I make it run continuously even after I log out of SSH?
Run nohup python bgservice.py & to get the script to ignore the hangup signal and keep running. Output will be put in nohup.out.
Ideally, you'd run your script with something like supervise so that it can be restarted if (when) it dies.
If you've already started the process, and don't want to kill it and restart under nohup, you can send it to the background, then disown it.
Ctrl+Z (suspend the process)
bg (restart the process in the background)
disown %1 (assuming this is job #1, use jobs to determine)
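Put together in one terminal session, the sequence looks like this:
# the script is running in the foreground; press Ctrl+Z to suspend it, then:
bg           # resume it in the background
jobs         # confirm its job number (assume it is %1)
disown %1    # remove it from the job table so logout won't send it SIGHUP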
Running a Python Script in the Background
First, you need to add a shebang line in the Python script which looks like the following:
#!/usr/bin/env python3
This path is necessary if you have multiple versions of Python installed, and /usr/bin/env will ensure that the first Python interpreter in your $PATH environment variable is used. You can also hardcode the path of your Python interpreter (e.g. #!/usr/bin/python3), but this is not flexible and not portable to other machines. Next, you'll need to set the permissions of the file to allow execution:
chmod +x test.py
Now you can run the script with nohup which ignores the hangup signal. This means that you can close the terminal without stopping the execution. Also, don’t forget to add & so the script runs in the background:
nohup /path/to/test.py &
If you did not add a shebang to the file you can instead run the script with this command:
nohup python /path/to/test.py &
The output will be saved in the nohup.out file, unless you specify the output file like here:
nohup /path/to/test.py > output.log &
nohup python /path/to/test.py > output.log &
If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
# doesn't create nohup.out
nohup command >/dev/null 2>&1
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
# runs in background, still doesn't create nohup.out
nohup command >/dev/null 2>&1 &
You can find the process and its process ID with this command:
ps ax | grep test.py
# or
# list of running Python processes
ps -fA | grep python
ps stands for process status
If you want to stop the execution, you can kill it with the kill command:
kill PID
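If you would rather not copy the PID by hand, pkill can match on the full command line, much like the grep above (note it signals every process whose command line contains test.py):
pkill -f test.py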
You could also use GNU screen which just about every Linux/Unix system should have.
If you are on Ubuntu/Debian, its enhanced variant byobu is rather nice too.
You might consider turning your python script into a proper python daemon, as described here.
python-daemon is a good tool that can be used to run Python scripts as a background daemon process rather than a forever-running script. You will need to modify existing code a bit, but it's plain and simple.
If you are facing problems with python-daemon, there is another utility, supervisor, that will do the same for you, but in this case you won't have to write any code (or modify existing code), as it is an out-of-the-box solution for daemonizing processes.
Alternate answer: tmux
ssh into the remote machine
type tmux at the command prompt
start the process you want inside tmux, e.g. python3 main.py
leave the tmux session with Ctrl+b then d
It is now safe to exit the remote machine. When you come back, use tmux attach to re-enter the tmux session.
If you want to start multiple sessions, name each session with Ctrl+b then $, then type your session name.
To list all sessions, use tmux list-sessions.
To attach to a running session, use tmux attach-session -t <session-name>.
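Condensed into one sequence (the session name sim and main.py are just placeholders):
ssh user@remoteserver            # 1. log in
tmux new -s sim                  # 2. start a named tmux session
python3 main.py                  # 3. launch the long-running script inside it
# 4. detach with Ctrl+b then d, then exit the ssh session
tmux attach-session -t sim       # 5. later: reattach and check on it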
You can nohup it, but I prefer screen.
Here is a simple solution inside python using a decorator:
import os, time

def daemon(func):
    def wrapper(*args, **kwargs):
        if os.fork(): return
        func(*args, **kwargs)
        os._exit(os.EX_OK)
    return wrapper

@daemon
def my_func(count=10):
    for i in range(0, count):
        print('parent pid: %d' % os.getppid())
        time.sleep(1)

my_func(count=10)
# still in the parent process
time.sleep(2)
# after 2 seconds the function my_func lives on its own
You can of course replace the content of your bgservice.py file in place of my_func.
Try this:
nohup python -u <your file name>.py >> <your log file>.log &
You can run the above command inside a screen session and then detach from the screen.
Now you can tail the logs of your Python script with: tail -f <your log file>.log
To kill your script, you can use the ps aux and kill commands.
The zsh shell has an option to make all background processes run with nohup.
In ~/.zshrc add the lines:
setopt nocheckjobs #don't warn about bg processes on exit
setopt nohup #don't kill bg processes on exit
Then you just need to run a process like so: python bgservice.py &, and you no longer need to use the nohup command.
I know not many people use zsh, but it's a really cool shell which I would recommend.
If what you need is that the process should run forever no matter whether you are logged in or not, consider running the process as a daemon.
supervisord is a great out-of-the-box solution that can be used to daemonize any process. It has a companion control utility, supervisorctl, that can be used to monitor processes being run by supervisor.
You don't have to write any extra code or modify existing scripts to make this work. Moreover, the verbose documentation makes this process much simpler.
After scratching my head for hours around python-daemon, supervisor is the solution that worked for me in minutes.
Hope this helps someone trying to make python-daemon work.
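For reference, a minimal supervisor program definition, written here as a shell heredoc; the program name, script path, log paths, and the /etc/supervisor/conf.d include directory are assumptions that vary between distributions:
sudo tee /etc/supervisor/conf.d/bgservice.conf > /dev/null <<'EOF'
[program:bgservice]
command=/usr/bin/python3 /path/to/bgservice.py
autostart=true
autorestart=true
stdout_logfile=/var/log/bgservice.out.log
stderr_logfile=/var/log/bgservice.err.log
EOF
sudo supervisorctl reread     # pick up the new config file
sudo supervisorctl update     # start the newly added program
sudo supervisorctl status bgservice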
You can also use Yapdi:
Basic usage:
import yapdi

daemon = yapdi.Daemon()
retcode = daemon.daemonize()

# This would run in daemon mode; output is not visible
if retcode == yapdi.OPERATION_SUCCESSFUL:
    print('Hello Daemon')

Is there a way to wait (in a bash script) for a python program (started in a new terminal) to end before executing the next command?

To automate simulations, I created a single bash script (see below) running 3 Python programs in parallel, interconnected with sockets, each in a new terminal (the behavior of the software requires the 3 subprograms to be run in separate terminals).
#!/bin/bash
gnome-terminal -- ./orchestrator.py -c config/orchestrator/configWorstCaseRandomV.yml -v
gnome-terminal -- ./server.py -c config/server/config.yml -s config/server/systems/ault14mix.yml --simulate -v
gnome-terminal -- ./simulation/traces/simulate_traces.py -m September
What I want to do is re-execute the same 3 Python programs but with different parameters, ONLY after the 3 previous programs have ended. But here the 3 programs are started at the same time, and I have no way of knowing in my bash script when they end.
I tried simply adding the corresponding commands at the end of my script, but logically it doesn't wait for the previous simulation to complete before starting the next 3 programs.
So my question is: is there a way of knowing when a program run in another terminal has ended, before executing the next lines of a bash script?
I assume gnome-terminal is used only to run programs in parallel, and without it, each Python script would run in the foreground.
In this case you may run them in the background using the ampersand character & (instead of gnome-terminal), get their process IDs with $!, and finally wait for them with the wait command, e.g.:
#!/bin/bash
./orchestrator.py -c config/orchestrator/configWorstCaseRandomV.yml -v &
pid1=$!
./server.py -c config/server/config.yml -s config/server/systems/ault14mix.yml --simulate -v &
pid2=$!
./simulation/traces/simulate_traces.py -m September &
pid3=$!
wait $pid1 $pid2 $pid3
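To chain a second round with different parameters, append it after the wait; the second-round config names here are placeholders:
# runs only after the first three programs have exited
./orchestrator.py -c config/orchestrator/secondRun.yml -v &
./server.py -c config/server/config2.yml --simulate -v &
./simulation/traces/simulate_traces.py -m October &
wait    # with no arguments, wait blocks until all remaining background children exit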

How to run multiple Python scripts from command line?

I have ten Python scripts in the same directory. How can I run all of these from the command line so that they work in the background?
I use SSH terminal to connect to server CentOS and run Python script as:
python index.py
But when I close the SSH client terminal, the process dies.
You can use & to make things run in the background, and nohup so they continue after logout, such as
nohup python index.py &
If you want to run multiple things this way, it's probably easiest to just make a script to start them all (with a shell of your choice):
#!/bin/bash
nohup python index1.py &
nohup python index2.py &
...
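If the scripts follow a naming pattern, a loop keeps that start-up script short (a sketch; the index*.py glob and the per-script log names are assumptions):
#!/bin/bash
for script in index*.py; do
    nohup python "$script" > "${script%.py}.log" 2>&1 &
done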
As long as you don't need to interact with the scripts once they are started (and don't need any stdout printing) this could be pretty easily automated with another python script using the subprocess module:
import subprocess

for script in listofscripts:  # listofscripts: your list of script filenames
    # use subprocess.run() for Python 3.x (this blocks until each script terminates)
    subprocess.call(["python", script], *args)  # use subprocess.Popen if you want non-blocking

*args is a link (its coloring got overwritten by the code highlighting).
Also of note: stdout/stderr printing is possible, just more work.

Recover python process after ssh broken pipe [duplicate]

I'm working on a Linux machine through SSH (Putty). I need to leave a process running during the night, so I thought I could do that by starting the process in background (with an ampersand at the end of the command) and redirecting stdout to a file.
To my surprise, that doesn't work. As soon as I close the Putty window, the process is stopped.
How can I prevent that from happening??
Check out the "nohup" program.
I would recommend using GNU Screen. It allows you to disconnect from the server while all of your processes continue to run. I don't know how I lived without it before I knew it existed.
When the session is closed the process receives the SIGHUP signal which it is apparently not catching. You can use the nohup command when launching the process or the bash built-in command disown -h after starting the process to prevent this from happening:
> help disown
disown: disown [-h] [-ar] [jobspec ...]
By default, removes each JOBSPEC argument from the table of active jobs.
If the -h option is given, the job is not removed from the table, but is
marked so that SIGHUP is not sent to the job if the shell receives a
SIGHUP. The -a option, when JOBSPEC is not supplied, means to remove all
jobs from the job table; the -r option means to remove only running jobs.
daemonize? nohup? SCREEN? (tmux ftw, screen is junk ;-)
Just do what every other app has done since the beginning -- double fork.
# ((exec sleep 30)&)
# grep PPid /proc/`pgrep sleep`/status
PPid: 1
# jobs
# disown
bash: disown: current: no such job
Bang! Done :-) I've used this countless times on all types of apps and many old machines. You can combine with redirects and whatnot to open a private channel between you and the process.
Create as coproc.sh:
#!/bin/bash
IFS=
run_in_coproc () {
    echo "coproc[$1] -> main"
    read -r; echo $REPLY
}

# dynamic-coprocess-generator. nice.
_coproc () {
    local i o e n=${1//[^A-Za-z0-9_]}; shift
    exec {i}<> <(:) {o}<> >(:) {e}<> >(:)
    . /dev/stdin <<COPROC "${@}"
    (("\$@")&) <&$i >&$o 2>&$e
    $n=( $o $i $e )
COPROC
}

# pi-rads-of-awesome?
for x in {0..5}; do
    _coproc COPROC$x run_in_coproc $x
    declare -p COPROC$x
done

for x in COPROC{0..5}; do
. /dev/stdin <<RUN
    read -r -u \${$x[0]}; echo \$REPLY
    echo "$x <- main" >&\${$x[1]}
    read -r -u \${$x[0]}; echo \$REPLY
RUN
done
and then
# ./coproc.sh
declare -a COPROC0='([0]="21" [1]="16" [2]="23")'
declare -a COPROC1='([0]="24" [1]="19" [2]="26")'
declare -a COPROC2='([0]="27" [1]="22" [2]="29")'
declare -a COPROC3='([0]="30" [1]="25" [2]="32")'
declare -a COPROC4='([0]="33" [1]="28" [2]="35")'
declare -a COPROC5='([0]="36" [1]="31" [2]="38")'
coproc[0] -> main
COPROC0 <- main
coproc[1] -> main
COPROC1 <- main
coproc[2] -> main
COPROC2 <- main
coproc[3] -> main
COPROC3 <- main
coproc[4] -> main
COPROC4 <- main
coproc[5] -> main
COPROC5 <- main
And there you go, spawn whatever. The <(:) opens an anonymous pipe via process substitution, which dies, but the pipe sticks around because you have a handle to it. I usually do a sleep 1 instead of : because it's slightly racy and I'd get a "file busy" error; that never happens if a real command is run (e.g. command true).
"heredoc sourcing":
. /dev/stdin <<EOF
[...]
EOF
This works on every single shell I've ever tried, including busybox/etc (initramfs). I've never seen it done before; I independently discovered it while prodding. Who knew source could accept args? But it often serves as a much more manageable form of eval, if there is such a thing.
nohup blah &
Substitute your process name for blah!
Personally, I like the 'batch' command.
$ batch
> mycommand -x arg1 -y arg2 -z arg3
> ^D
This stuffs it into the background, and then mails the results to you. It's a part of cron.
As others have noted, to run a process in the background so that you can disconnect from your SSH session, you need to have the background process properly disassociate itself from its controlling terminal - which is the pseudo-tty that the SSH session uses.
You can find information about daemonizing processes in books such as Stevens' "UNIX Network Programming, Vol 1, 3rd Edn" or Rochkind's "Advanced Unix Programming".
I recently (in the last couple of years) had to deal with a recalcitrant program that did not daemonize itself properly. I ended up dealing with that by creating a generic daemonizing program - similar to nohup but with more controls available.
Usage: daemonize [-abchptxV][-d dir][-e err][-i in][-o out][-s sigs][-k fds][-m umask] -- command [args...]
-V print version and exit
-a output files in append mode (O_APPEND)
-b both output and error go to output file
-c create output files (O_CREAT)
-d dir change to given directory
-e file error file (standard error - /dev/null)
-h print help and exit
-i file input file (standard input - /dev/null)
-k fd-list keep file descriptors listed open
-m umask set umask (octal)
-o file output file (standard output - /dev/null)
-s sig-list ignore signal numbers
-t truncate output files (O_TRUNC)
-p print daemon PID on original stdout
-x output files must be new (O_EXCL)
The double-dash is optional on systems not using the GNU getopt() function; it is necessary (or you have to specify POSIXLY_CORRECT in the environment) on Linux etc. Since double-dash works everywhere, it is best to use it.
You can still contact me (firstname dot lastname at gmail dot com) if you want the source for daemonize.
However, the code is now (finally) available on GitHub in my SOQ (Stack Overflow Questions) repository as file daemonize-1.10.tgz in the packages sub-directory.
For most processes you can pseudo-daemonize using this old Linux command-line trick:
# ((mycommand &)&)
For example:
# ((sleep 30 &)&)
# exit
Then start a new terminal window and:
# ps aux | grep sleep
Will show that sleep 30 is still running.
What you have done is start the process as a child of a child, and when you exit, the hangup signal that would normally trigger the process to exit doesn't cascade down to the grand-child, leaving it as an orphan process, still running.
I prefer this "set it and forget it" approach: no need to deal with nohup, screen, tmux, I/O redirection, or any of that stuff.
On a Debian-based system (on the remote machine)
Install:
sudo apt-get install tmux
Usage:
tmux
run commands you want
To rename session:
Ctrl+B then $
set Name
To exit session:
Ctrl+B then D
(this leaves the tmux session). Then, you can log out of SSH.
When you need to come back/check on it again, start up SSH, and enter
tmux attach -t session_name
It will take you back to your tmux session.
If you use screen to run a process as root, beware of the possibility of privilege elevation attacks. If your own account gets compromised somehow, there will be a direct way to take over the entire server.
If this process needs to be run regularly and you have sufficient access on the server, a better option would be to use cron to run the job. You could also use init.d (the super daemon) to start your process in the background, and it can terminate as soon as it's done.
nohup is very good if you want to log your details to a file. But once it goes into the background, you cannot give it a password if your script asks for one. I think you must try screen. It's a utility you can install on your Linux distribution using yum (for example, on CentOS: yum install screen), then access your server via PuTTY or another client and type screen in your shell. It will open screen[0] in PuTTY. Do your work. You can create more screens (screen[1], screen[2], etc.) in the same PuTTY session.
Basic commands you need to know:
To start screen
screen
To create next screen
ctrl+a+c
To move to next screen you created
ctrl+a+n
To detach
ctrl+a+d
While your work runs, you can close PuTTY. The next time you log in via PuTTY, type
screen -r
to reconnect to your screen session, and you will see your process still running. To leave a screen entirely, type exit.
For more details see man screen.
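Condensed into one sequence (the session name and script name are placeholders):
screen -S mysession           # start a named screen session
python your_script.py         # run your script inside it
# detach with Ctrl+a then d; you can now close PuTTY
screen -r mysession           # later: reattach and check on it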
nohup allows a client process not to be killed when its parent process is killed, for example when you log out. Even better, use:
nohup /bin/sh -c "echo \$\$ > $pidfile; exec $FOO_BIN $FOO_CONFIG " > /dev/null
nohup makes the process you start immune to the termination that your SSH session and its child processes receive when you log out. The command I gave stores the PID of the application in a pid file, so that you can correctly kill it later, and lets the process keep running after you have logged out.
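With the PID stored like that, stopping the process later is just (assuming $pidfile still points at the same file):
kill "$(cat "$pidfile")"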
Use screen. It is very simple to use and works like vnc for terminals.
http://www.bangmoney.org/presentations/screen.html
There's also the daemon command of the open-source libslack package.
daemon is quite configurable and takes care of all the tedious daemon stuff such as automatic restart, logging, and pidfile handling.
If you're willing to run X applications as well - use xpra together with "screen".
I would also go for the screen program (I know someone else already answered screen, but this is a completion).
Not only may &, Ctrl+Z, bg, disown, nohup, etc. give you the nasty surprise that your job still gets killed when you log off (I don't know why, but it did happen to me; it didn't bother me because I switched to using screen, though I guess anthonyrisinger's double-forking solution would fix that), screen also has a major advantage over just backgrounding:
screen will background your process without losing interactive control of it.
And by the way, this is a question I would never have asked in the first place :) ... I have used screen since my very beginnings of doing anything in any unix ... I (almost) NEVER work in a unix/linux shell without starting screen first ... and I should stop now, or I'll start an endless presentation of how good screen is and what it can do for you ... look it up by yourself, it is worth it ;)
Append this string to your command: >&- 2>&- <&- &. >&- means close stdout. 2>&- means close stderr. <&- means close stdin. & means run in the background. This works to programmatically start a job via ssh, too:
$ ssh myhost 'sleep 30 >&- 2>&- <&- &'
# ssh returns right away, and your sleep job is running remotely
$
The accepted answer suggests using nohup. I would rather suggest using pm2. Using pm2 over nohup has many advantages, like keeping the application alive, maintaining log files for the application, and lots of other features. For more detail check this out.
To install pm2 you need to download npm. For Debian based system
sudo apt-get install npm
and for Redhat
sudo yum install npm
Or you can follow these instruction.
After installing npm use it to install pm2
npm install pm2@latest -g
Once it's done, you can start your application with
$ pm2 start app.js # Start, Daemonize and auto-restart application (Node)
$ pm2 start app.py # Start, Daemonize and auto-restart application (Python)
For process monitoring use following commands:
$ pm2 list # List all processes started with PM2
$ pm2 monit # Display memory and cpu usage of each app
$ pm2 show [app-name]    # Show all information about the application
Manage processes using either app name or process id or manage all processes together:
$ pm2 stop <app_name|id|'all'|json_conf>
$ pm2 restart <app_name|id|'all'|json_conf>
$ pm2 delete <app_name|id|'all'|json_conf>
Log files can be found in
$HOME/.pm2/logs    # contains all application logs
Binary executables can also be run using pm2. You have to make a change in the JSON file: change "exec_interpreter": "node" to "exec_interpreter": "none" (see the attributes section).
#include <stdio.h>
#include <unistd.h>   /* not part of the standard C library */

int main(void)
{
    printf("Hello World\n");
    sleep(100);
    printf("Hello World\n");
    return 0;
}
Compile the above code:
gcc -o hello hello.c
and run it with pm2 in the background:
pm2 start ./hello
I used the screen command. This link has details on how to do this:
https://www.rackaid.com/blog/linux-screen-tutorial-and-how-to/#starting
On systemd/Linux, systemd-run is a nice tool to launch session-independent processes.
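For example (the unit name and script path are placeholders; a system-wide transient unit keeps running after your SSH session ends):
sudo systemd-run --unit=sample_program /usr/bin/python3 /path/to/sample_program.py
systemctl status sample_program      # check whether it is still running
journalctl -u sample_program         # read its output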

nohup not logging in nohup.out

I am running a python script of web2py and want to log its output. I am using following command
nohup python /var/www/web2py/web2py.py -S cloud -M -N -R applications/cloud/private/process.py >>/var/log/web2pyserver.log 2>&1 &
The process is running but it is not logging into the file. I have tried without nohup also, but it is still the same.
The default logging of nohup in nohup.out is also not working.
Any suggestion what might be going wrong?
Nothing to worry about. The Python process run under nohup was actually writing the file in buffered batches, so I could see the output only after quite some time, not instantaneously.
nohup will try to create the file in the local directory. Can you create a file in the folder you are running it from?
If you've got commas in your print statements, there's a good chance it's due to buffering. You can put a sys call (presumably sys.stdout.flush()) in your code, or, when you run it under nohup, just add the -u option and you'll disable std(in|out|err) buffering.
Don't worry about this; it is because of the buffering mechanism. Running your Python script with the -u flag will solve the problem:
nohup python -u code.py > code.log &
or just
nohup python -u code.py &
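The same unbuffered behaviour can also be requested through the environment, in case you cannot easily change the command line (PYTHONUNBUFFERED is a standard Python environment variable):
nohup env PYTHONUNBUFFERED=1 python code.py > code.log 2>&1 &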
