How to start a script from another script in Python

I have created a script that checks whether another script is running:
import os
import datetime
import time
from time import ctime

logname = 'nemo_logs/nemo_log_file_' + time.strftime('%Y-%m-%d') + '.txt'
statinfo = os.stat(logname)
for i in range(1):
    first_size = statinfo.st_size
    time.sleep(10)
    statinfo = os.stat(logname)  # stat the file again, otherwise st_size can never change
    if statinfo.st_size > first_size:
        print("SCRIPT IS RUNNING")
    else:
        print("SCRIPT IS NOT RUNNING. TRYING TO KILL THE SCRIPT...")
        os.system("pkill -9 -f selenium_nemo.py")
        print("SCRIPT KILLED. TRYING TO RESTART THE SCRIPT...")
        os.system("python selenium_nemo.py")
        print("SCRIPT STARTED")
If the script's log file keeps growing then we are OK, but if the script gets stuck for some reason I want to stop it and restart it once more.
First I kill the script, and then I execute os.system("python selenium_nemo.py"). The script starts, but it runs inside my main script. How can I start selenium_nemo.py in another process?
(The for loop is for later.)
Thanks

You can use the subprocess module for this.
Check out this answer for more.
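For example, a minimal sketch (hypothetical, built around the selenium_nemo.py name from the question; the log-size check is left out):

import subprocess

# Popen returns immediately and runs the script in a separate process,
# so it does not block or run inside the monitoring script.
child = subprocess.Popen(["python", "selenium_nemo.py"])
print("started selenium_nemo.py with PID", child.pid)

# later, if the script looks stuck, terminate and start it again
child.kill()
child = subprocess.Popen(["python", "selenium_nemo.py"])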

Related

How can I run a function before the process is killed?

I have a Python script. How can I do something before the process is terminated?
For example, I want to print something to the display or write to a log file.
OS: Windows 11 x64
python version: 3.8.10
I tried this one, but it doesn't work:
import signal
import sys
import time

def handle_interrupt(signum, frame):
    # signal handlers are called with (signal number, current stack frame)
    print("Handling interrupt")
    sys.exit(0)

if __name__ == '__main__':
    signal.signal(signal.SIGTERM, handle_interrupt)
    for x in range(100):
        print(x)
        time.sleep(1)
Update:
If the process is terminated from the task manager, or cmd is closed, or the process is terminated with taskkill /F /PID {PID}, I want to run some function or write something to a log file.
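A minimal sketch of the usual approach (not from the question; an atexit hook covers normal interpreter shutdown, and signal handlers cover the cases where Windows actually delivers a signal; a forced kill such as taskkill /F or Task Manager's "End task" terminates the process without running any Python code):

import atexit
import signal
import sys
import time

def on_exit():
    # runs on normal interpreter shutdown (end of script, sys.exit, unhandled exception)
    print("writing shutdown entry to the log")

def on_signal(signum, frame):
    # runs only if the OS delivers the signal to this process;
    # taskkill /F (TerminateProcess) never reaches this handler
    sys.exit(0)  # triggers the atexit hook

if __name__ == '__main__':
    atexit.register(on_exit)
    signal.signal(signal.SIGTERM, on_signal)
    if hasattr(signal, 'SIGBREAK'):  # Windows-only signal (Ctrl+Break)
        signal.signal(signal.SIGBREAK, on_signal)
    for x in range(100):
        print(x)
        time.sleep(1)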

Unable to prevent program from printing an error

I took a piece of code from here:
https://stackoverflow.com/a/10457565/13882705
which is
# I have used os commands for a while
# this program will try to close a firefox window every ten seconds
import os
import time

# creating a forever loop
while 1:
    os.system("TASKKILL /F /IM firefox.exe")
    time.sleep(10)
It terminates the process, if it is running, using the os module.
But if the program does not find the app we mentioned, it prints:
ERROR: The process "firefox.exe" not found.
Is there a way to make the program print "application not found" just once and wait until the program is rerun?
It is fine even if it just prints "Application Not found".
Use subprocess.run instead of os.system so you have more control:
import subprocess
import time

while True:
    proc = subprocess.run(["TASKKILL", "/F", "/IM", "firefox.exe"], stderr=subprocess.PIPE)
    if proc.returncode != 0:
        print("Application not found.")
        break  # since the application isn't here we just exit
    time.sleep(10)

Python input() blocks subprocesses from executing

I have a Python script that accepts user input. Different user inputs trigger different functionality. The functionality in question here is one that spawns multiple processes. Here is the script, main.py.
import time
import threading
import concurrent.futures as cf

def executeparallelprocesses():
    numprocesses = 2
    durationseconds = 10
    futures = []
    print('Submit jobs as new processes.')
    with cf.ProcessPoolExecutor(max_workers=numprocesses) as executor:
        for i in range(numprocesses):
            futures.append(executor.submit(workcpu, 500, durationseconds))
            print('job submitted')
        print('all jobs submitted')
        print('Wait for jobs to complete.', flush=True)
        for future in cf.as_completed(futures):
            future.result()
        print('All jobs done.', flush=True)

def workcpu(x, durationseconds):
    print('Job executing in new process.')
    start = time.time()
    while time.time() - start < durationseconds:
        x * x

def main():
    while True:
        cmd = input('Press ENTER\n')
        if cmd == 'q':
            break
        thread = threading.Thread(target=executeparallelprocesses)
        thread.start()
        time.sleep(15)

if __name__ == '__main__':
    main()
When this script is invoked from the terminal, it works as expected (i.e., the subprocesses execute). Specifically, notice the two lines "Job executing in new process." in the example run that follows:
(terminal prompt $) python3 main.py
Press ENTER
Submit jobs as new processes.
Press ENTER
job submitted
job submitted
all jobs submitted
Wait for jobs to complete.
Job executing in new process.
Job executing in new process.
All jobs done.
q
(terminal prompt $)
THE PROBLEM:
When the script is invoked from another program, the subprocesses are not executed. Here is the driver script, driver.py:
import time
import subprocess
from subprocess import PIPE
args = ['python3', 'main.py']
p = subprocess.Popen(args, bufsize=0, stdin=PIPE, universal_newlines=True)
time.sleep(1)
print('', file=p.stdin, flush=True)
time.sleep(1)
print('q', file=p.stdin, flush=True)
time.sleep(20)
Notice how "Job executing in new process." is not present in the output from the example run that follows:
(terminal prompt $) python3 driver.py
Press ENTER
Submit jobs as new processes.
Press ENTER
job submitted
job submitted
all jobs submitted
Wait for jobs to complete.
(terminal prompt $)
It seems like the cmd = input('Press ENTER\n') statement in main.py is blocking and preventing the subprocesses from executing. Strangely, commenting out the second time.sleep(1) statement in driver.py causes the main.py subprocesses to spawn as expected. Another way to make this "work" is to add time.sleep(1) inside the loop of main.py, right after thread.start().
This time-sensitive code is brittle. Is there a robust way to do this?
The problem lies in how you try to communicate with the second script using stdin=PIPE - try the following instead for the second script:
import time
import subprocess
from subprocess import PIPE
args = ['python', 'junk.py']
p = subprocess.Popen(args, bufsize=0, stdin=PIPE, universal_newlines=True)
p.communicate(input='\nq\n')
time.sleep(20)
Output:
Press ENTER
Submit jobs as new processes.
Press ENTER
job submitted
job submitted
all jobs submitted
Wait for jobs to complete.
Job executing in new process.
Job executing in new process.
All jobs done.
Process finished with exit code 0
Note that, instead of inserting timeouts everywhere, you should probably look into joining (waiting for) the completed processes, but that goes beyond the question.
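For instance, a small sketch (same idea, file name kept from the answer) that waits for the child to finish instead of sleeping for a fixed 20 seconds:

import subprocess
from subprocess import PIPE

args = ['python', 'junk.py']
p = subprocess.Popen(args, bufsize=0, stdin=PIPE, universal_newlines=True)
p.communicate(input='\nq\n')  # sends both lines, then blocks until the child exits
print('child finished with return code', p.returncode)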
I tried ShadowRanger's suggestion to add a call to multiprocessing.set_start_method():
import multiprocessing  # needed at the top of main.py

if __name__ == '__main__':
    multiprocessing.set_start_method('spawn')
    main()
This solved the problem for me. I will read the documentation to learn more about this.

Python: kill a subprocess (that starts another process) and start it again

I'm trying to make a Python script that starts the program livestreamer (which in turn starts the program mplayer), and after 10 seconds it should kill the program, or the subprocess. Here is my current code that doesn't work; I think I know why, but I don't know how to solve it.
I think the problem is that the subprocess starts livestreamer and then the program livestreamer starts the program mplayer. Python doesn't know about mplayer and can't close it. How would I be able to kill both livestreamer and mplayer after 10 seconds and then start them again in a loop?
I'm using Ubuntu 14.04 (Linux) and Python 2.7.6.
import subprocess
import time
import os
import sys
import signal

url = "http://new.livestream.com/accounts/398160/events/3155348"
home = os.environ['HOME']

if not os.geteuid() == 0:
    if not os.path.exists('/%s/.config/livestreamer' % home):
        os.makedirs('/%s/.config/livestreamer' % home)
    lscfg = open('%s/.config/livestreamer/config' % home, 'w+')
    lscfg.write("player=mplayer -geometry 0%:0% -nomouseinput -loop 100 -noborder -fixed-vo")
    lscfg.close()

cmd = "livestreamer %s best --player-continuous-http --player-no-close" % url

while True:
    proc1 = subprocess.Popen(cmd.split(), shell=False)
    time.sleep(10)
    proc1.kill()
Solution:
import subprocess
import time
import os
import sys
import signal

url = "http://new.livestream.com/accounts/398160/events/3155348"
home = os.environ['HOME']

if not os.geteuid() == 0:
    if not os.path.exists('/%s/.config/livestreamer' % home):
        os.makedirs('/%s/.config/livestreamer' % home)
    lscfg = open('%s/.config/livestreamer/config' % home, 'w+')
    lscfg.write("player=mplayer -geometry 0%:0% -nomouseinput -loop 100 -noborder -fixed-vo")
    lscfg.close()

cmd = "livestreamer %s best --player-continuous-http --player-no-close" % url

# restarting the player every 10th minute to catch up on possible delay
while True:
    proc1 = subprocess.Popen(cmd.split(), shell=False)
    time.sleep(600)
    os.system("killall -9 mplayer")
    proc1.kill()
As you can see, os.system("killall -9 mplayer") was the command to kill the mplayer process.
In your code you kill livestreamer but not mplayer, so mplayer will continue running.
By using kill on your subprocess you send it SIGKILL, which ends the subprocess immediately without giving it a chance to clean up or kill its own children, so mplayer lives on (and becomes an orphaned process).
You have no reference to your subprocess's child, mplayer, but if you can get its PID you can kill it with os.kill(...):
os.kill(process_pid, signal.SIGTERM)
Using os.system("killall -9 mplayer") was the easy way to solve this. Mind that this kills every running mplayer process; that is not a problem in my case, but it may be for other cases.
while True:
    proc1 = subprocess.Popen(cmd.split(), shell=False)
    time.sleep(600)
    os.system("killall -9 mplayer")
    proc1.kill()
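An alternative sketch (not from the answer, assuming Linux): start livestreamer in its own process group and kill the whole group. Since children normally inherit the process group, this should take mplayer down as well without touching unrelated mplayer instances:

import os
import signal
import subprocess
import time

url = "http://new.livestream.com/accounts/398160/events/3155348"
cmd = "livestreamer %s best --player-continuous-http --player-no-close" % url

while True:
    # preexec_fn=os.setsid puts livestreamer (and the mplayer it spawns)
    # into a new session/process group led by the child
    proc1 = subprocess.Popen(cmd.split(), preexec_fn=os.setsid)
    time.sleep(600)
    os.killpg(os.getpgid(proc1.pid), signal.SIGKILL)  # kills livestreamer and mplayer together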

Daemon thread launching software won't die

I'm trying to write a small script which will use plink.exe (from the same folder) to create an SSH tunnel (on Windows).
I'm basically using os.system to launch the command:
import os
import time
import threading
from os.path import join, dirname, realpath

pc_tunnel_command = '-ssh -batch -pw xxxx -N -L 1234:host1:5678 user#host2'

if __name__ == '__main__':
    t = threading.Thread(target=os.system,
                         args=(join(dirname(realpath(__file__)), 'plink.exe ') + pc_tunnel_command,))
    t.daemon = True
    t.start()
    # without this line it will die. I guess that plink doesn't have enough time to start.
    time.sleep(5)
    print 'Should die now'
However, it seems that the thread (and plink.exe) keeps running. Why is this happening? Is there any way to force the thread to close? Is there a better way to launch plink?
I want plink.exe to die when my program ends. Using a daemon thread was my plan for having the tunnel run in the background and then die when my main code exits.
BTW - same thing happens with subprocess.call.
You can use the atexit and signal modules to register callbacks that will explicitly kill the process when your program exits normally or receives SIGTERM, respectively:
import sys
import time
import atexit
import signal
import subprocess
from functools import partial
from os.path import join, dirname, realpath

pc_tunnel_command = '-ssh -batch -pw xxxx -N -L 1234:host1:5678 user#host2'

def handle_exit(p, *args):
    print("killing it")
    p.terminate()
    sys.exit(0)

if __name__ == '__main__':
    p = subprocess.Popen(join(dirname(realpath(__file__)), 'plink.exe ') + pc_tunnel_command, shell=True)
    func = partial(handle_exit, p)
    signal.signal(signal.SIGTERM, func)
    atexit.register(func)
    print("Should die now")
The one thing that is odd about the behavior you described is that I would have expected your program to exit after your sleep call, but leave plink running in the background, rather than having your program hang until the os.system call completes. That's the behavior I see on Linux, at least. In any case, explicitly terminating the child process should solve the issue for you.
os.system does not return until the child process exits. The same is true for subprocess.call. That's why your thread is sitting there, waiting for plink to finish. You can probably use subprocess.Popen to launch the process asynchronously and then exit. In any case, the additional thread you are creating is unnecessary.
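A minimal sketch of that idea (plink.exe path and arguments taken from the question): Popen returns immediately, so no extra thread is needed, and the handle can be used to close the tunnel explicitly before exiting:

import subprocess
from os.path import join, dirname, realpath

pc_tunnel_command = '-ssh -batch -pw xxxx -N -L 1234:host1:5678 user#host2'

# Popen starts plink and returns right away; the script keeps running
p = subprocess.Popen(join(dirname(realpath(__file__)), 'plink.exe ') + pc_tunnel_command, shell=True)

# ... do the real work here ...

p.terminate()  # explicitly close the tunnel before the script exits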
