Say I am running a nohup job with a python file:
nohup python -u train_model.py 2>&1&
If I make a change to the python file and save it, will the nohup job be affected?
Once you run the Python program, with or without nohup, the script is read from disk and loaded into memory; changing the source file afterwards will not affect the already-running process.
Changes will only take effect if you run the script again, either after the current process completes or in parallel with it.
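If you want to check this for yourself, here is a quick sketch (the log file name train.log is just an assumption; the script name comes from the question above):

# start a long-running script in the background
nohup python -u train_model.py > train.log 2>&1 &
# edit and save train_model.py in another window;
# the running process keeps executing the code it loaded at startup
tail -f train.log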
Related
I have a Python script bgservice.py and I want it to run all the time, because it is part of a web service I am building. How can I make it keep running even after I log out of SSH?
Run nohup python bgservice.py & to get the script to ignore the hangup signal and keep running. Output will be put in nohup.out.
Ideally, you'd run your script with something like supervise so that it can be restarted if (when) it dies.
If you've already started the process, and don't want to kill it and restart under nohup, you can send it to the background, then disown it.
Ctrl+Z (suspend the process)
bg (resume the process in the background)
disown %1 (assuming this is job #1; use jobs to check)
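Put together, a sketch of the whole sequence looks like this (job number 1 is an assumption; check with jobs):

# the script is currently running in the foreground
# press Ctrl+Z to suspend it, then:
bg          # resume it in the background
jobs        # confirm its job number
disown %1   # detach it from the shell so logging out won't kill it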
Running a Python Script in the Background
First, you need to add a shebang line in the Python script which looks like the following:
#!/usr/bin/env python3
This line is useful if you have multiple versions of Python installed: /usr/bin/env ensures that the first Python interpreter on your $PATH environment variable is used. You can also hardcode the path of your Python interpreter (e.g. #!/usr/bin/python3), but this is less flexible and not portable to other machines. Next, you'll need to set the permissions of the file to allow execution:
chmod +x test.py
Now you can run the script with nohup which ignores the hangup signal. This means that you can close the terminal without stopping the execution. Also, don’t forget to add & so the script runs in the background:
nohup /path/to/test.py &
If you did not add a shebang to the file you can instead run the script with this command:
nohup python /path/to/test.py &
The output will be saved in the nohup.out file, unless you specify the output file like here:
nohup /path/to/test.py > output.log &
nohup python /path/to/test.py > output.log &
If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
# doesn't create nohup.out
nohup command >/dev/null 2>&1
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
# runs in background, still doesn't create nohup.out
nohup command >/dev/null 2>&1 &
You can find the process and its process ID with this command:
ps ax | grep test.py
# or
# list all running Python processes
ps -fA | grep python
ps stands for process status
If you want to stop the execution, you can kill it with the kill command:
kill PID
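Alternatively, pgrep and pkill can do the lookup and the kill in one step; a short sketch, reusing the test.py name from above:

pgrep -f test.py   # print the PID(s) of processes whose command line matches test.py
pkill -f test.py   # send SIGTERM to those processes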
You could also use GNU screen which just about every Linux/Unix system should have.
If you are on Ubuntu/Debian, its enhanced variant byobu is rather nice too.
You might consider turning your python script into a proper python daemon, as described here.
python-daemon is a good tool that can be used to run Python scripts as a background daemon process rather than as a forever-running script. You will need to modify the existing code a bit, but it's plain and simple.
If you are facing problems with python-daemon, there is another utility, supervisor, that will do the same for you; in that case you won't have to write any code (or modify existing code), as it is an out-of-the-box solution for daemonizing processes.
Alternate answer: tmux
ssh into the remote machine
type tmux at the command prompt
start the process you want inside tmux, e.g. python3 main.py
leave the tmux session with Ctrl+b then d
It is now safe to exit the remote machine. When you come back, use tmux attach to re-enter the tmux session.
If you want to start multiple sessions, name each session with Ctrl+b then $, then type your session name.
to list all sessions use tmux list-sessions
to attach a running session use tmux attach-session -t <session-name>.
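Condensed, the tmux workflow above looks roughly like this (the session name train is an assumption):

tmux new -s train              # start a named session
python3 main.py                # run your script inside it
# detach with Ctrl+b then d, log out, come back later...
tmux list-sessions             # see which sessions exist
tmux attach-session -t train   # re-enter the session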
You can nohup it, but I prefer screen.
Here is a simple solution inside python using a decorator:
import os, time

def daemon(func):
    def wrapper(*args, **kwargs):
        if os.fork(): return        # parent returns immediately
        func(*args, **kwargs)       # child runs the wrapped function
        os._exit(os.EX_OK)          # child exits without parent cleanup
    return wrapper

@daemon
def my_func(count=10):
    for i in range(0, count):
        print('parent pid: %d' % os.getppid())
        time.sleep(1)

my_func(count=10)
# still in the parent process
time.sleep(2)
# after 2 seconds the parent exits and my_func lives on on its own
You can of course put the contents of your bgservice.py file in place of my_func.
Try this:
nohup python -u <your file name>.py >> <your log file>.log &
You can run the above command inside a screen session and then detach from it.
Now you can tail the logs of your Python script with: tail -f <your log file>.log
To kill your script, you can use the ps -aux and kill commands.
The zsh shell has an option to make all background processes run with nohup.
In ~/.zshrc add the lines:
setopt nocheckjobs #don't warn about bg processes on exit
setopt nohup #don't kill bg processes on exit
Then you just need to run a process like so: python bgservice.py &, and you no longer need to use the nohup command.
I know not many people use zsh, but it's a really cool shell which I would recommend.
If what you need is that the process should run forever no matter whether you are logged in or not, consider running the process as a daemon.
supervisord is a great out-of-the-box solution that can be used to daemonize any process. It comes with a controlling utility, supervisorctl, that can be used to monitor the processes run by supervisor.
You don't have to write any extra code or modify existing scripts to make this work, and the thorough documentation makes the whole process much simpler.
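For illustration, a minimal supervisord program section might look like the sketch below; the config path, program name, and script path are assumptions, not something from the original setup:

cat > /etc/supervisor/conf.d/bgservice.conf <<'EOF'
[program:bgservice]
command=/usr/bin/python3 /path/to/bgservice.py
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/bgservice.log
EOF
supervisorctl reread    # pick up the new config file
supervisorctl update    # start the newly added program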
After scratching my head for hours over python-daemon, supervisor is the solution that worked for me in minutes.
Hope this helps someone trying to make python-daemon work.
You can also use Yapdi:
Basic usage:
import yapdi

daemon = yapdi.Daemon()
retcode = daemon.daemonize()

# This would run in daemon mode; output is not visible
if retcode == yapdi.OPERATION_SUCCESSFUL:
    print('Hello Daemon')
My question seems quite easy, but for some reason I could not find a quick answer to it. I have a Python script that I want to run from the terminal command line (on an Ubuntu Linux server), and that works for me. But then I can't use the command line until the script ends. The script takes a long time to run, and I would like to continue using the command line to perform other tasks in the meantime. How can I run the script so that it keeps working without occupying the command line? And how can I see the active processes that are running on the server, to check whether a given process is running?
Run the script with the command:
python script.py
then append & and follow it with another command such as echo "123":
The script takes a long time to run, and I would like to continue using the command line to perform other tasks.
It seems that you want to run the process in the background; please try the following:
python script.py &
echo "123"
It should start script.py and then output 123 immediately (without waiting for script.py to finish).
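From the same shell you can also keep an eye on the backgrounded job; a brief sketch:

jobs     # list background jobs started from this shell
fg %1    # bring job 1 back to the foreground if you need to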
How can I see the active processes that are running on the server, to see if a process is running?
Use the ps command:
ps -ef
will list all processes; you will probably want to filter the output to find the one you are interested in.
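For example, a sketch that filters the listing for the script above (the bracket trick stops grep from matching its own process):

ps -ef | grep "[s]cript.py"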
I want to run two Python applications in Docker on different ports.
My shell script is below; its name is serverRun.sh:
exec python __server_code.py &
exec python server_time_test.py &
In the Dockerfile, I am trying to run these two Python applications:
RUN ["chmod", "+x", "./serverRun.sh"]
It did not work. Any idea?
You must keep one process in the foreground, so remove the last & and don't use exec:
cd /rfk-thrift/nlp_search
python __server_code.py &
python server_time_test.py
And this in Dockerfile:
RUN chmod +x ./serverRun.sh
CMD ./serverRun.sh
It looks like your RUN command just sets the file permissions; it doesn't actually execute the script. Perhaps change that command.
The RUN instruction is only executed during the build; it won't be running when you launch the container. You want to use ENTRYPOINT or CMD instead.
Your shell script must not terminate, so keep it alive, for example with a while loop around a long sleep (see the sketch below).
Docker detects the termination of the entrypoint and stops the container.
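A minimal sketch of such an entrypoint, assuming the two scripts from the question, could look like this:

#!/bin/sh
# start both services in the background
python __server_code.py &
python server_time_test.py &
# keep a foreground process alive so Docker does not stop the container
while true; do sleep 60; done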
Eventually you may want to use an entrypoint (written in Bash or another scripting language) that can detect application crashes, so that your container exits on failure.
I have published a fully-featured Bash entrypoint here which does exactly this, and more.
I have a python script which executes the following command:
commands.getstatusoutput("./some_program -bg; sleep 10; kill -SIGUSR2 `pidof some_program`; kill -9 `pidof some_program`")
In short, this calls a program (some_program) and puts it in the background, then, after 10s, it sends a signal to the program to prompt it to dump some data into a csv file, after which we kill the program.
When I run the python script manually, it dumps the output to the csv file as expected. When I run the program through a Jenkins build however, the csv file is not created.
Any ideas?
Sounds like a permission problem to me; are you running Jenkins with a different user? I would chmod 777 the folder temporarily, just to see if that's the issue.
– gerosalesc
It turns out it was an ownership issue. – user2548144
I am trying to run a Python script from cron, using crontab to run the command as a regular user instead of root. The script has the shebang #!/usr/bin/env python at the top, and I ran chmod +x on it to make it executable.
The script works when run from a shell, but not from crontab. I checked /var/log/cron and saw that the script does run, but nothing from stdout or stderr is printed anywhere.
Finally, I made a small script that prints the date and a string, scheduled it every minute, and that one worked, yet this script does not. I am not sure why I am getting these inconsistent results...
Here is my entry in crontab
SHELL=/bin/bash
#min #hour day-of-month month day-of-week command
#-------------------------------------------------------------------------
*/5 * * * * /path/to/script/script.py
Here is the source code of my script, which will not run from crontab but does run from a shell when called like ./script.py. The script is executable after I use chmod +x on it, by the way:
#!/usr/bin/env python
# create a file and write to it
import time

def create_and_write():
    with open("py-write-out.out", "a+") as w:
        w.write("writing to file. hello from python. :D\n")
        w.write("the time is: " + str(time.asctime()) + "\n")
        w.write("--------------------------------\n")

def main():
    create_and_write()

if __name__ == "__main__":
    main()
EDIT
I figured out what was going wrong: I needed to use absolute file paths in the script; otherwise it would write to a directory I had not planned for.
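Equivalently, you can keep relative paths in the script and have the cron entry change into the script's directory first; a sketch along those lines (the paths and log file are assumptions):

*/5 * * * * cd /path/to/script && ./script.py >> /path/to/script/cron.log 2>&1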
OK, I guess this thread is still going to help googlers.
I use a workaround to run Python scripts from cron jobs; in fact, Python scripts need to be handled with delicate care in a cron job.
So I let a bash script take care of all of it. I create a bash script that contains the command to run the Python script, then schedule a cron job to run the bash script. You can even use a redirect to log the output of the bash command that executes the Python script. For example:
@reboot /home/user/auto_delete.sh
auto_delete.sh may contain the following lines:
#!/bin/sh
echo $(date) >> bash_cron_log.txt
/usr/bin/python /home/user/auto_delete.py >> bash_cron_log.txt
So I don't need to worry about cron jobs crashing on Python scripts.
You can resolve this yourself by following these steps:
Modify the cron entry as /path/to/script/script.py > /tmp/log 2>&1
Now, let the cron run
Now read the file /tmp/log
You will find out the reason for the issue you are facing, so that you can fix it.
In my experience, the issue is mostly with the environment.
Cron does not set the environment variables your login shell has, so you may have to set them explicitly for your script in the crontab (see the sketch below).
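A sketch of setting the environment directly in the crontab (the PATH value and log path are assumptions):

PATH=/usr/local/bin:/usr/bin:/bin
*/5 * * * * /path/to/script/script.py > /tmp/log 2>&1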
I would like to emphasise one more thing. I was trying to run a Python script from cron using the same trick (a shell script wrapper), and this time it didn't run. It was an @reboot cron entry. So I used redirection in the crontab, as mentioned in one of the comments above, and understood the problem.
I was using a lot of file handles in the Python script, and cron runs the script from the user's home directory. In that case Python could not find the files used for the file handles and crashed.
What I used for troubleshooting was a crontab entry like the one below. It runs start.sh and sends all stderr and stdout to the file status.txt, and in status.txt I got an error saying that the file I used for a file handle was not found. That was right, because the Python script was executed by cron from the user's home directory, so the script was searching for the files in the home directory only.
@reboot /var/www/html/start.sh > /cronStatus/status.txt 2>&1
This writes everything that happens during the cron execution to the status.txt file, so you can see the error there. I would again advise running Python scripts via bash scripts for cron jobs. SOLUTION: either use the full path for every file used in the script (which wasn't feasible for me, since I don't want the script to be location dependent), or execute the script from the correct directory.
So I created my cron as below-
@reboot cd /var/www/html/ && /var/www/html/start.sh
This cron entry first changes to the correct directory and then starts the script. Now I don't have to worry about hardcoding the full path for every file in the script. Yes, it may sound lazy ;)
And my start.sh looks like-
#!/bin/sh
/usr/bin/python /var/www/html/script.py
Hope it helps
Regards,
Kriss
I had a similar issue but with a slightly different scenario: I had a bash script (run.sh) and a Python script (hello.py). When I typed
sh run.sh
from the script directory, it worked; if I added the sh run.sh command in my crontab, it did not work.
To troubleshoot the issue I added the following line in my run.sh script:
printenv > env.txt
In this way you can see the environment variables in use when you (or crontab) run the script. By comparing the env.txt generated by the manual run with the one from the crontab run, I noticed that the PWD variable was different. In my case I was able to resolve it by adding the following line at the top of my .sh script:
cd /PYTHON_SCRIPT_ABSOLUTE_PATH/
Hope this could help you!
Another possible reason is that the way you checked whether the script executed or not is itself wrong.