I have two Python scripts that use two different cameras for a project I am working on, and I am trying to run them both from a third script (or from one another, either way is fine).
import os
os.system('python 1.py')
os.system('python 2.py')
My problem, however, is that they don't run at the same time; I have to quit the first one before the next one opens. I also tried doing it from bash with the & shell operator:
python 1.py &
python 2.py &
This does in fact make them both run, but the issue is that they then run endlessly in the background, and I need a reasonably easy way to close them. Any suggestions on how to avoid the issues with these approaches?
You could do it with multiprocessing
import os
import time
import psutil
from multiprocessing import Process

def run_program(cmd):
    # Function that the processes will run
    os.system(cmd)

# Initiate the processes with the desired arguments
program1 = Process(target=run_program, args=('python 1.py',))
program2 = Process(target=run_program, args=('python 2.py',))

# Start both processes simultaneously
program1.start()
program2.start()

def kill(proc_pid):
    # Kill a process and all of its children by PID
    process = psutil.Process(proc_pid)
    for proc in process.children(recursive=True):
        proc.kill()
    process.kill()

# Wait 5 seconds and kill the first program
time.sleep(5)
kill(program1.pid)
program1.join()

# Wait another second and kill the second program
time.sleep(1)
kill(program2.pid)
program2.join()

# Print the current status of our programs
print('1.py alive status: {}'.format(program1.is_alive()))
print('2.py alive status: {}'.format(program2.is_alive()))
One possible method is to use systemd to control your processes (i.e. treat them as daemons).
This is how I control my Python servers, since they need to run in the background, completely detached from the current tty, so that I can close my connection to the machine and the processes keep running. You can then stop them later using systemctl, as explained below.
Instructions:
Create a .service file and save it in /etc/systemd/system, with contents along the lines of:
[Unit]
Description=daemon one
[Service]
ExecStart=/path/to/1.py
and repeat with one going to 2.py.
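Note that ExecStart expects an absolute path to an executable, so either make 1.py executable with a suitable shebang line, or invoke the interpreter explicitly. As a rough sketch (the interpreter path, script location, and extra directives here are assumptions to adapt), a slightly fuller unit file could look like:
[Unit]
Description=daemon one
[Service]
ExecStart=/usr/bin/python3 /path/to/1.py
WorkingDirectory=/path/to
Restart=on-failure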
Then you can use systemctl to control your daemons.
First reload all config files with:
systemctl daemon-reload
then start either of your daemons (where my_daemon.service is one of your unit files):
systemctl start my_daemon
it should now be running and you should find it in:
systemctl list-units
You can also check its status with:
systemctl status my_daemon
and stop/restart them with:
systemctl stop|restart my_daemon
Use subprocess.Popen. This will create a child process and give you a handle to it, including its pid.
from subprocess import Popen

pid = Popen(["python", "1.py"]).pid
And then check out these functions for communicating with the child process and checking if it is still running.
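For instance, here is a minimal sketch (script name taken from the question) that keeps the Popen object around so you can check on the child and stop it; the 5-second wait is arbitrary:
import time
from subprocess import Popen

proc = Popen(["python", "1.py"])  # start the child script
print(proc.pid)                   # its process ID

time.sleep(5)
if proc.poll() is None:           # None means it is still running
    proc.terminate()              # ask it to stop
proc.wait()                       # reap the child and get its return code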
Related
I've got a long-running Python script that I want to be able to end from another Python script. Ideally what I'm looking for is some way of getting the process ID of the first script and checking from the second script, via that ID, whether it is still running. Additionally, I'd like to be able to terminate that long-running process.
Any cool shortcuts exist to make this happen?
Also, I'm working in a Windows environment.
I just recently found an alternative answer here: Check to see if python script is running
You could get your own PID (Process Identifier) through
import os
os.getpid()
and to kill a process in Unix
import os, signal
os.kill(5383, signal.SIGKILL)
to kill in Windows use
import subprocess as s

def killProcess(pid):
    s.Popen('taskkill /F /PID {0}'.format(pid), shell=True)
You can send the PID to the other program, or you could search the process list to find the name of the other script and kill it with the code above.
I hope that helps you.
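If you go the process-list route, psutil (used in an earlier answer here) can search running processes by command line. A hedged sketch, with the script name being just an example:
import psutil

def kill_by_script_name(script_name):
    # Look through all running processes and kill any whose
    # command line mentions the given script name.
    for proc in psutil.process_iter(['pid', 'cmdline']):
        cmdline = proc.info.get('cmdline') or []
        if any(script_name in part for part in cmdline):
            proc.kill()

kill_by_script_name('1.py')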
You're looking for the subprocess module.
import subprocess as sp
extProc = sp.Popen(['python','myPyScript.py']) # runs myPyScript.py
status = sp.Popen.poll(extProc) # status should be 'None'
sp.Popen.terminate(extProc) # closes the process
status = sp.Popen.poll(extProc) # status should now be something other than 'None' ('1' in my testing)
subprocess.Popen starts the external python script, equivalent to typing 'python myPyScript.py' in a console or terminal.
The status from subprocess.Popen.poll(extProc) will be 'None' if the process is still running, and (for me) 1 if it has been closed from within this script. Not sure about what the status is if it has been closed another way.
This worked for me under Windows 11 and PyQt5:
import subprocess

app = subprocess.Popen('python3 MySecondApp.py')
# ... later, when you want to stop it:
app.terminate()
where MyFirstApp.py is the caller script (the one running), MySecondApp.py is the called script, and app is the Popen handle for the called script.
I am developing some Python (version 3.6.1) code to install an application in Windows 7. The code used is this:
import subprocess

winCMD = r'"C:\PowerBuild\setup.exe" /v"/qr /l C:\PowerBuild\TUmsi.log"'
output = subprocess.check_call(winCMD, shell = True)
The application is installed successfully. The problem is that it always requires a reboot after it finishes (a popup appears with the message "You must restart your system for the configuration changes made to … to take effect. Click Yes to restart now or No if you plan to restart later.").
I tried to insert the "/forcerestart" parameter (source here) into the installation command, but it still stops and asks for the reboot:
def installApp():
    winCMD = r'"C:\PowerBuild\setup.exe" /v"/qr /forcerestart /l C:\PowerBuild\TUmsi.log"'
    output = subprocess.check_call(winCMD, shell = True)
Another attempt was to add a follow-up command like the one below, but since the previous command never finishes (as far as I understand), I realized it would never be called:
rebootSystem = 'shutdown -t 0 /r /f'
subprocess.Popen(rebootSystem, stdout=subprocess.PIPE, shell=True)
Has anyone had such an issue and managed to solve it?
As an ugly workaround, if you're not time-critical but you want to emphasise the "automatic" aspect, why not
run the installCMD in a thread
wait sufficiently long to be sure that the command has completed
perform the shutdown
like this:
import subprocess
import threading
import time

def installApp():
    winCMD = r'"C:\PowerBuild\setup.exe" /v"/qr /l C:\PowerBuild\TUmsi.log"'
    output = subprocess.check_call(winCMD, shell = True)

t = threading.Thread(target=installApp)
t.start()

time.sleep(1800)  # half an hour should be enough

rebootSystem = 'shutdown -t 0 /r /f'
subprocess.Popen(rebootSystem, stdout=subprocess.PIPE, shell=True)
Another (safer) way would be to find out which file is created last in the installation, and monitor for its existence in a loop like this:
while not os.path.isfile("somefile"):
    time.sleep(60)
time.sleep(60)  # another minute for safety
# perform the reboot
To be clean, you'd have to use subprocess.Popen for the installation process, export it as global and call terminate() on it in the main process, but since you're calling a shutdown that's not necessary.
(to be clean, we wouldn't have to do that hack in the first place)
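For completeness, a rough sketch of that cleaner variant (same assumed paths as above): start the installer with subprocess.Popen, keep the handle in a global, and let the main thread terminate it instead of rebooting.
import subprocess

installer = None  # global handle to the installer process

def installApp():
    global installer
    winCMD = r'"C:\PowerBuild\setup.exe" /v"/qr /l C:\PowerBuild\TUmsi.log"'
    installer = subprocess.Popen(winCMD, shell=True)
    installer.wait()

# run installApp in a thread as above; the main thread can then stop the
# installer at any time with:
#     if installer is not None:
#         installer.terminate()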
I start this program, all.py:
import subprocess
import os
scripts_to_run = ['AppFlatForRent.py','AppForSale.py','CommercialForSale.py','LandForSale.py','MultipleUnitsForSale.py','RentalWanted.py','RentCommercial.py','RoomsForRent.py','RoomsWanted.py','ShortTermDaily.py','ShortTermMonthly.py','VillaHouseForRent.py','VillaHouseForSale.py']
for s in scripts_to_run:
    subprocess.Popen(["python", os.path.join(os.getcwd(), s)])
It runs 13 programs at a time. The problem is that in Sublime Text, unlike with other programs, cancelling the build does not stop this particular program; it just keeps running (I know because the scripts keep inserting values into the database and that doesn't stop).
I want to be able to stop them from the terminal.
Any help?
There are two approaches you can take.
The shell approach
If you only want to kill the child processes after the main app has finished, but don't want the main app to handle this itself for whatever reason (mostly for debugging purposes), you can do it from the terminal:
kill $(ps aux |grep -E 'python[[:space:]][[:alnum:]]+.py' |awk '{print $2}')
Reading the command from the inside out: ps aux lists all running processes; grep -E 'python[[:space:]][[:alnum:]]+.py' finds all scripts executed by Python whose names end with .py (check Note 1 for more details); and awk '{print $2}' prints the second column, which is the PID. kill then receives that list of PIDs.
Note 1: the regular expression in the example above is just for demonstration purposes; it only matches scripts executed with a relative path, like python script.py, and does not match processes like python /path/to/script.py. Make sure to adapt the regular expression to your specific needs.
Note 2: this approach is risky because it can kill unwanted applications, make sure you know what you are doing before using it.
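If pkill is available on your system, a simpler (and equally risky) alternative is to match on the script name in the full command line, repeating per script or using a pattern that covers all of them, for example:
pkill -f AppFlatForRent.py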
The Python approach
The other approach offers more control, and is implemented in the main application itself.
You can make sure that all child processes are exited when the main application ends by keeping track of all the processes you created, and killing them afterwards.
Example usage:
First change your process spawning code to keep the Popen objects of the running processes for later usage:
running_procs = []
for s in scripts_to_run:
    running_procs.append(
        subprocess.Popen(["python", os.path.join(os.getcwd(), s)])
    )
Then define the do_clean() function that will iterate through them and terminate them:
def do_clean():
    for p in running_procs:
        p.kill()
You can call this function manually whenever you wish to do this, or you can use the atexit module to do this when the application is terminating.
The atexit module defines a single function to register cleanup
functions. Functions thus registered are automatically executed upon
normal interpreter termination.
Note: The functions registered via this module are not called when the
program is killed by a signal not handled by Python, when a Python
fatal internal error is detected, or when os._exit() is called.
For example:
import atexit
atexit.register(do_clean)
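As the quoted documentation says, atexit handlers do not run when the program is killed by a signal that Python does not handle. If you also want the cleanup to run on, say, SIGTERM, you can register a signal handler that exits cleanly; a small sketch under that assumption:
import signal
import sys

def handle_term(signum, frame):
    # sys.exit() raises SystemExit, so the interpreter shuts down normally
    # and the atexit-registered do_clean() still runs.
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_term)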
To stop all child scripts, you could call the .terminate() or .kill() methods:
import sys
import time
from subprocess import Popen

# start child processes
children = [Popen([sys.executable or 'python', scriptname])
            for scriptname in scripts_to_run]

time.sleep(30)  # wait 30 seconds

for p in children:
    p.terminate()  # or p.kill()
for p in children:
    p.wait()  # wait until they exit
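If you want to give each child a chance to shut down cleanly before force-killing it, a small variation (the 5-second grace period is an arbitrary choice) is:
from subprocess import TimeoutExpired

for p in children:
    p.terminate()              # ask each child to exit
for p in children:
    try:
        p.wait(timeout=5)      # give it a few seconds to comply
    except TimeoutExpired:
        p.kill()               # force-kill if it is still running
        p.wait()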
I have a problem with the way signals are propagated within a process group. Here is my situation and an explanation of the problem:
I have an application, that is launched by a shell script (with a su). This shell script is itself launched by a python application using subprocess.Popen
I call os.setpgrp as a preexec_function and have verified using ps that the bash script, the su command and the final application all have the same pgid.
Now when I send signal USR1 to the bash script (the leader of the process group), sometimes the application sees the signal, and sometimes it does not. I can't figure out why I get this random behavior (the signal is seen by the app about 50% of the time).
Here is the example code I am testing against:
Python launcher :
#!/usr/bin/env python
import os
import subprocess

p = subprocess.Popen( ["path/to/bash/script"], stdout=…, stderr=…, preexec_fn=os.setpgrp )
# loop to write stdout and stderr of the subprocess to a file
# note that I use fcntl.fcntl(p.stdXXX.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
p.wait()
Bash script :
#!/bin/bash
set -e
set -u
cd /usr/local/share/gios/exchange-manager
CONF=/etc/exchange-manager.conf
[ -f $CONF ] && . $CONF
su exchange-manager -p -c "ruby /path/to/ruby/app"
Ruby application :
#!/usr/bin/env ruby
Signal.trap("USR1") do
puts "Received SIGUSR1"
exit
end
while true do
sleep 1
end
So when I send the signal to the bash wrapper (from a terminal or from the Python application), sometimes the Ruby application sees the signal and sometimes it does not. I don't think it's a logging issue, as I have tried replacing the puts with a method that writes directly to a different file.
Do you have any idea what the root cause of my problem could be, and how to fix it?
Your signal handler is doing too much. If you exit from within the signal handler, you cannot be sure that your buffers are properly flushed; in other words, you may not be exiting your program gracefully. Also be careful of new signals being received while the program is already inside a signal handler.
Try to modify your Ruby source to exit the program from the main loop as soon as an "exit" flag is set, and don't exit from the signal handler itself.
Your Ruby application becomes:
#!/usr/bin/env ruby
$done = false

Signal.trap("USR1") do
  $done = true
end

until $done do
  sleep 1
end
puts "** graceful exit"
Which should be much safer.
For real programs, you may consider using a Mutex to protect your flag variable.
I am trying to run multiple Python scripts in parallel in Windows 7 (and 10). I am running them all from another Python script, which performs more functions on the files the scripts are editing. I want the external script to wait until the other scripts are done running. I have tried start /w, but that made each script wait before closing the console window.
Essentially what I want to do is for Python to wait until the 3 processes are done. The last script is just a print("done"), and is meaningless for all I care. This is important for me to solve with 3 processes because I need to do the same thing with 30. (On a server, there are enough available threads.)
This is the CMD command I am trying to run.
os.system("start python node1.py & start python node2.py & start python node3.py && start /w printstatement.py")
Any suggestions?
Use subprocess.Popen instead of os.system. You'll get 3 Popen instances that you can wait on. For example:
import subprocess
procs = [subprocess.Popen(['python', 'node{}.py'.format(n)])
         for n in range(1, 4)]
retcodes = [p.wait() for p in procs]
If you want separate console windows, like how CMD's start command works, then add the option creationflags=subprocess.CREATE_NEW_CONSOLE to the Popen call (Windows only). If you instead want separate consoles that don't create windows, use creationflags=CREATE_NO_WINDOW (0x08000000). In this case they still have console standard I/O; it's just not rendered to a window.
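For instance, a minimal sketch (Windows only; script names taken from the question) that opens each node script in its own console window and then waits for all of them, as the question asks:
import subprocess

procs = [subprocess.Popen(['python', 'node{}.py'.format(n)],
                          creationflags=subprocess.CREATE_NEW_CONSOLE)
         for n in range(1, 4)]

retcodes = [p.wait() for p in procs]
print("done")  # runs only after all three node scripts have finished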
Solution using asyncio:
import asyncio
commands = [
'python node1.py',
'python node2.py',
]
async def run_command(command):
    task = await asyncio.create_subprocess_exec(*command.split())
    await task.wait()
combined_task = asyncio.gather(*(run_command(command) for command in commands))
asyncio.get_event_loop().run_until_complete(combined_task)
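On Python 3.7 and later, asyncio.run() is the simpler entry point; a sketch of the same idea:
import asyncio

commands = [
    'python node1.py',
    'python node2.py',
]

async def run_command(command):
    proc = await asyncio.create_subprocess_exec(*command.split())
    await proc.wait()

async def main():
    # run all commands concurrently and wait for them all to finish
    await asyncio.gather(*(run_command(command) for command in commands))

asyncio.run(main())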