Simple way to keep a Python script running? - python

I want a simple script to stay running in the background. It currently looks like this:
import keyboard
while True:
    keyboard.wait('q')
    keyboard.send('ctrl+6')
Now, this already works (when q is pressed, it also presses ctrl+6), but I guess there has to be a more efficient way of keeping a program running so it can act on input.
I would rather not use an infinite while loop.
I'm on Windows
Thanks :)

You can use nohup:
nohup python script.py > log.txt 2>&1 &
If you want to check whether the command is running:
ps aux | grep script
user 5124 <- process_id 1.0 0.3 214588 13852 pts/4 Sl+ 11:19 0:00 python script.py
To kill the running command:
kill -9 [process_id]

It depends on the platform you are using. In Linux from the terminal you can run your script with & at the end to start it as a background process:
python script.py &
You can then find your background process with:
ps -ef | grep script.py
And kill the process:
kill <pid number>
In Windows, it's a bit more complex, but the answer is here.
Note: I would add a time.sleep(0.025) call to your script (as mentioned in the comments).
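Since the question also asks about avoiding the explicit while loop: the keyboard package used in the question has a callback-based API as well. A minimal sketch, assuming the same keyboard module as in the question (add_hotkey registers a handler, and wait() with no argument simply blocks without busy-waiting):
import keyboard

# register a handler instead of looping explicitly
keyboard.add_hotkey('q', lambda: keyboard.send('ctrl+6'))

# block forever; the hotkey callback still fires
keyboard.wait()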

I was searching for the same thing and landed here.
Here is another solution. It's working fine for me; maybe it will be useful to someone else too.
Use the -i option at the command line. Thus, if your Python script is foo.py, execute the following at the command line:
python -i foo.py
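For illustration, a hypothetical foo.py (the file name and contents are just placeholders):
# foo.py - any script whose state you want to keep around
data = [x * x for x in range(5)]
print('script finished, data is ready')
Running python -i foo.py executes the script and then drops you into the interactive prompt instead of exiting, so data is still defined and the interpreter keeps running until you quit it yourself.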

Related

continue program even after logout [duplicate]

I have a Python script bgservice.py and I want it to run all the time, because it is part of the web service I am building. How can I make it run continuously even after I log out of SSH?
Run nohup python bgservice.py & to get the script to ignore the hangup signal and keep running. Output will be put in nohup.out.
Ideally, you'd run your script with something like supervise so that it can be restarted if (when) it dies.
If you've already started the process, and don't want to kill it and restart under nohup, you can send it to the background, then disown it.
Ctrl+Z (suspend the process)
bg (restart the process in the background)
disown %1 (assuming this is job #1, use jobs to determine)
Running a Python Script in the Background
First, you need to add a shebang line in the Python script which looks like the following:
#!/usr/bin/env python3
This path is necessary if you have multiple versions of Python installed, and /usr/bin/env will ensure that the first Python interpreter in your $PATH environment variable is taken. You can also hardcode the path of your Python interpreter (e.g. #!/usr/bin/python3), but this is not flexible and not portable to other machines. Next, you'll need to set the permissions of the file to allow execution:
chmod +x test.py
Now you can run the script with nohup which ignores the hangup signal. This means that you can close the terminal without stopping the execution. Also, don’t forget to add & so the script runs in the background:
nohup /path/to/test.py &
If you did not add a shebang to the file you can instead run the script with this command:
nohup python /path/to/test.py &
The output will be saved in the nohup.out file, unless you specify the output file like here:
nohup /path/to/test.py > output.log &
nohup python /path/to/test.py > output.log &
If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
# doesn't create nohup.out
nohup command >/dev/null 2>&1
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
# runs in background, still doesn't create nohup.out
nohup command >/dev/null 2>&1 &
You can find the process and its process ID with this command:
ps ax | grep test.py
# or
# list of running Python processes
ps -fA | grep python
ps stands for process status
If you want to stop the execution, you can kill it with the kill command:
kill PID
You could also use GNU screen which just about every Linux/Unix system should have.
If you are on Ubuntu/Debian, its enhanced variant byobu is rather nice too.
You might consider turning your python script into a proper python daemon, as described here.
python-daemon is a good tool that can be used to run Python scripts as a background daemon process rather than a forever-running script. You will need to modify existing code a bit, but it's plain and simple.
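A minimal sketch of what the modified script might look like with python-daemon; run() is just a placeholder for your existing code:
import daemon  # pip install python-daemon

def run():
    # your existing long-running code goes here
    ...

# DaemonContext forks the process into the background and
# detaches it from the controlling terminal
with daemon.DaemonContext():
    run()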
If you are facing problems with python-daemon, there is another utility, supervisor, that will do the same for you, but in this case you won't have to write any code (or modify existing code), as it is an out-of-the-box solution for daemonizing processes.
Alternate answer: tmux
ssh into the remote machine
type tmux into the command line
start the process you want inside the tmux e.g. python3 main.py
leave the tmux session with Ctrl+b then d
It is now safe to exit the remote machine. When you come back, use tmux attach to re-enter the tmux session.
If you want to start multiple sessions, name each session using Ctrl+b then $, then type your session name.
to list all sessions, use tmux list-sessions
to attach a running session use tmux attach-session -t <session-name>.
You can nohup it, but I prefer screen.
Here is a simple solution inside Python using a decorator:
import os, time

def daemon(func):
    def wrapper(*args, **kwargs):
        if os.fork(): return
        func(*args, **kwargs)
        os._exit(os.EX_OK)
    return wrapper

@daemon
def my_func(count=10):
    for i in range(0, count):
        print('parent pid: %d' % os.getppid())
        time.sleep(1)

my_func(count=10)
# still in parent thread
time.sleep(2)
# after 2 seconds the function my_func lives on its own
You can of course replace the content of your bgservice.py file in place of my_func.
Try this:
nohup python -u <your file name>.py >> <your log file>.log &
You can run the above command inside screen and then detach from the screen session.
Now you can tail logs of your python script by: tail -f <your log file>.log
To kill your script, you can use the ps aux and kill commands.
The zsh shell has an option to make all background processes run with nohup.
In ~/.zshrc add the lines:
setopt nocheckjobs #don't warn about bg processes on exit
setopt nohup #don't kill bg processes on exit
Then you just need to run a process like so: python bgservice.py &, and you no longer need to use the nohup command.
I know not many people use zsh, but it's a really cool shell which I would recommend.
If what you need is that the process should run forever no matter whether you are logged in or not, consider running the process as a daemon.
supervisord is a great out of the box solution that can be used to daemonize any process. It has another controlling utility supervisorctl that can be used to monitor processes that are being run by supervisor.
You don't have to write any extra code or modify existing scripts to make this work. Moreover, the thorough documentation makes the process much simpler.
After scratching my head for hours around python-daemon, supervisor is the solution that worked for me in minutes.
Hope this helps someone who is struggling to make python-daemon work.
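For reference, a minimal sketch of a supervisor program definition, assuming a file dropped into /etc/supervisor/conf.d/ (the program name and paths are placeholders):
[program:bgservice]
command=python /path/to/bgservice.py
directory=/path/to
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/bgservice.log
After adding the file, supervisorctl reread followed by supervisorctl update typically picks up the new program.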
You can also use Yapdi:
Basic usage:
import yapdi
daemon = yapdi.Daemon()
retcode = daemon.daemonize()
# This would run in daemon mode; output is not visible
if retcode == yapdi.OPERATION_SUCCESSFUL:
    print('Hello Daemon')

How to run python script in linux terminal console and continue using command line?

My question seems quite easy, but for some reason I did not find a quick answer to it. I have a Python script that I want to run from the terminal command line (Ubuntu Linux server), which works for me. But then I can't use the command line until the script ends. The script takes a long time to run, and I would like to continue using the command line to perform other tasks. How can I keep the script running without its progress being shown on the command line? And how can I see the active processes running on the server, to check whether a given process is running?
Run the script command:
python script.py
Then append & and follow it with the next command (e.g. echo "123"), as shown below.
The script takes a long time to run, and I would like to continue
using the command line to perform other tasks.
It seems that you want to run said process in the background; please try pasting the following:
python script.py &
echo "123"
It should start your script.py and then output 123 (without waiting until script.py ends).
how can I see the active processes that are running on the server to
see if a process is running?
Using the ps command:
ps -ef
will list all processes, which you will probably want to filter to find the one you are interested in.

How to stop a shell script (running a infinite loop ) from python?

I have a shell script (test.sh, example shown below) which has an infinite while loop and prints some data to the screen.
I am calling all my .sh scripts from Python, and I need to stop test.sh before calling my other commands.
I am using Python 2.7, and the Linux system is on proprietary hardware where I cannot install any Python modules.
Here is my test.sh
#!/bin/sh
while :
do
    echo "this code is in infinite while loop"
    sleep 1
done
Here is my Python script:
import subprocess as SP
SP.call(['./test.sh']) # I need to stop the test.sh in order for python to
# go and execute more commands and call
# another_script.sh
# some code statements
SP.call(['./another_script.sh'])
Well, a quick Google search made me look into subprocess.call and subprocess.Popen, and Popen has a terminate method, but it doesn't work for me (or I'm doing something wrong here):
cmd=['test.sh']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
p.terminate()
Any other suggestions on how I can stop test.sh from Python are highly appreciated.
PS: I don't mind running test.sh for, say, T seconds and then stopping it.
I use tmux for these types of processes; Python has a good package, libtmux, which should solve your problem.
Basically you create a tmux session:
import libtmux
server = libtmux.Server()
session = server.new_session(session_name='my_session_name')
then you create a window to run the command in
window = session.new_window(attach=False, window_name='my_window_name')
command = './my_bash_file.sh'
window.select_pane('0').send_keys(command, enter=True)
You'll be able to run subsequent commands right after this one. To access the tmux session from your bash terminal, use tmux attach -t my_session_name; you'll then be in the tmux window that ran your bash script.
To kill the tmux window, use window.kill_window(); there are a lot of options, so look at the libtmux docs.
The project aileen has some useful tmux commands if you want to see some more implementations.
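If installing libtmux is not an option (the question says no Python modules can be added), here is a minimal standard-library sketch, assuming it is acceptable to let test.sh run for a fixed number of seconds and then stop it:
import subprocess as SP
import time

# no shell=True, so terminate() signals test.sh itself
p = SP.Popen(['./test.sh'])

time.sleep(30)   # T seconds; 30 is a placeholder
p.terminate()    # send SIGTERM to the shell script
p.wait()         # reap the process before moving on

SP.call(['./another_script.sh'])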

How to Trap Error and Re-Launch Python from Shell

I have a large Python program running on a Raspberry Pi, and every week or two it will get overloaded and throw an out of memory error. I want to trap those errors and call a shell script "kill-and-relaunch.sh" (code below) that will kill the running Python processes and re-launch the program...so it needs to run the shell command as an entirely separate process. Two questions: (1) what is the best method to call the shell that will survive killing the original Python process; and (2) where would I put the error trapping code in a Python program that is already running in multiple processes...do I need to have the error trapping in each process?
Here is the shell command I want to call:
kill $(ps aux | grep '[p]ython -u home_security.py' | awk '{print $2}')
cd ~/raspsecurity
source ~/.profile
workon py3cv34
nohup python -u home_security.py &
Thank you for any suggestions.
Perhaps subprocess might help?
import subprocess

# do something
try:
    ...  # trap the anticipated error zone
except:  # Best if you catch the specific error anticipated instead of a catch-all.
    # log the error if you wish
    subprocess.run(my_ps_script)
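For the first part of the question (a call that survives killing the original Python process), the relaunch script can be started in its own session so it is not taken down together with its parent. A minimal sketch, assuming Python 3; the path to kill-and-relaunch.sh is a placeholder:
import subprocess

def relaunch():
    # start_new_session=True puts the child in a new session/process group,
    # so killing this Python process does not kill the relaunch script
    subprocess.Popen(['/path/to/kill-and-relaunch.sh'],
                     start_new_session=True)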
You could fire your shell script in a cronjob and send the error (or all) output to a file (as described here: https://stackoverflow.com/a/7526988/7727137).

I run a program in a shell script. How do I know if it has loaded? (Linux)

I wrote a shell script that runs a python program. Then I want to load a file into the program using xdotool. Right now this is my code:
#!/bin/bash
cd ~/Folder
python program.py &
sleep 10
WID=$(xdotool search --onlyvisible program)
....
I really don't like my solution of just waiting 10 seconds so that the program is loaded. Is there a better way?
This really depends. If program.py is supposed to finish before going on to xdotool, then you might want to use && instead of &. A single & means you want the command to execute and then move on to the next command as soon as possible, without waiting for it to finish. A double && means you want to wait for the execution to be done, and then only continue if it exited with a zero (success) status. You could also just remove the & or use ; if you want to run the next command regardless of program.py's success. Here's what I'm getting at:
#!/bin/bash
cd ~/Folder
python program.py &&
WID=$(xdotool search --onlyvisible program)
...
If program.py is supposed to continue running while you run the xdotool command, but you need program.py to reach some ready state before you continue, then you're right in using &, but you need to monitor program.py somehow and get a signal from it that it is OK to continue. An ancient way of doing this is to simply let program.py create/touch/edit a file, and when that file is detected you continue. A more advanced way would be to use network sockets or similar, but I'd advise against it if you really don't need anything fancy. Anyway, if you make program.py create a signal file, say /tmp/.program_signal_file, when things are ready, all you have to do is:
#!/bin/bash
cd ~/Folder
python program.py &
until [ -f /tmp/.program_signal_file ]
do
    sleep 1
done
WID=$(xdotool search --onlyvisible program)
rm /tmp/.program_signal_file
...
In this last solution program.py and xdotool are both running at the same time. If that's what you were looking for.
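For completeness, the change on the program.py side can be as small as touching the signal file once loading is done. A sketch, assuming the same path as in the script above:
# inside program.py, after everything has finished loading
with open('/tmp/.program_signal_file', 'w'):
    pass  # creating the empty file is the whole signal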
