I currently have a Python program that calls a MATLAB script as batch like so:
matlab = QProcess()
matlab.start('matlab -noFigureWindows -batch "cd(\'users/script_directory/\'); MyScript"')
#^ command to start MATLAB batch process in CMD
The issue I'm running into is that once this batch process starts, there's no way to kill it. So if my Python app gets force-closed, the MATLAB script keeps running and causes all sorts of issues, meaning I need to kill the process on app close.
I'm calling the MATLAB script as a QProcess and get the following message when I force-close the Python app before the MATLAB script finishes executing:
QProcess: Destroyed while process ("matlab") is still running.
With this, how do I stop the batch MATLAB process? Pressing Ctrl+C in CMD sometimes kills the process, but I need something consistent for the Python app to work correctly.
Similarly, can I just have it 'force quit' or 'restart' batch MATLAB or anything along those lines to clear all running processes?
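For a normal app close, one option (a minimal sketch, assuming PyQt5; MainWindow and MyScript are placeholder names) is to keep a reference to the QProcess and kill it from closeEvent:

from PyQt5.QtCore import QProcess
from PyQt5.QtWidgets import QMainWindow

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.matlab = QProcess(self)
        self.matlab.start('matlab -noFigureWindows -batch "MyScript"')

    def closeEvent(self, event):
        # Kill MATLAB if it is still running when the window closes
        if self.matlab.state() != QProcess.NotRunning:
            self.matlab.kill()  # TerminateProcess on Windows
            self.matlab.waitForFinished(3000)
        event.accept()

Note that closeEvent only runs on a normal close; a force-kill of the Python app skips it, which is where a cleanup pass like the one below comes in.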
A brute-force way to kill it would be to terminate any running MATLAB process via psutil (the process and system utilities library) at the start of your application:

import psutil

# Terminate every running MATLAB process ('MATLAB.exe' on Windows)
for process in psutil.process_iter():
    if process.name().lower() == 'matlab.exe':
        process.terminate()
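On Windows, psutil's terminate() is an alias for kill() (both call TerminateProcess), so this stops MATLAB immediately; wrapping the loop body in a try/except psutil.NoSuchProcess guard also avoids a race if a process exits between iteration and termination.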
Related
I have a multi-threaded C program which runs perfectly fine as a standalone program. It uses pthread_create and pthread_join to execute some logic. Now I am trying to execute this code from Python using subprocess. However, when executing via subprocess, it seems subprocess returns as soon as the main thread exits, but I wish to wait for the entire code to finish executing. Is it possible to do this?
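For what it's worth, subprocess itself only returns once the child process exits; a sketch like the following (./mythreaded is a placeholder for the compiled binary) blocks until every thread the C main() has joined is done, because the process doesn't terminate until main() returns:

import subprocess

# run() blocks until the child process terminates; as long as the C
# main() calls pthread_join on every worker before returning, this
# returns only after all threads have finished.
result = subprocess.run(['./mythreaded'], capture_output=True, text=True)
print(result.returncode)

If the child appears to return early, the usual culprit is main() returning (or calling exit()) before joining its threads, which tears the whole process down.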
I'm writing a Python program that runs under Mac OS and Linux, and I want to run some logic in a multiprocessing.Process. That logic will take a while, and I want it to continue running even after my program has finished and exited; i.e., I want the main process not to wait for the auxiliary process to finish, but to exit as soon as its own work is done.
I made a few experiments, and it seems that this behavior is the default when using subprocess, but I can't make it happen using multiprocessing.Process, even when I run set_start_method('spawn').
Do you know of a way to get multiprocessing.Process to behave this way?
Looks like starting a new process, and then calling os.fork from it does the trick.
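A minimal sketch of that trick (POSIX-only, since it relies on os.fork; the function names are placeholders):

import multiprocessing
import os
import time

def long_running_logic():
    # stand-in for the work that should outlive the main program
    time.sleep(60)

def launcher():
    # Double-fork trick: the grandchild created here is not tracked by
    # multiprocessing, so nothing waits for it at interpreter exit.
    if os.fork() == 0:
        long_running_logic()
        os._exit(0)  # skip inherited cleanup handlers
    # the parent (the Process) returns immediately

if __name__ == '__main__':
    p = multiprocessing.Process(target=launcher)
    p.start()
    p.join()  # joins only the short-lived launcher
    # main can now exit; the forked grandchild keeps running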
In my current application, I have a Python 2.7 script called main.py that launches another Python 2.7 script called calculator.py using GNU Parallel like the following:
os.system("seq 10000 | parallel -N0 -j 50 nohup python calculator.py &")
print "Done"
This works pretty well, with one exception: I need to resume executing other commands in main.py (that is, the lines after the os.system call, e.g. the print "Done" line) only after all 10000 instances spawned with GNU Parallel finish running.
Is there a proper way to do that? Solutions with os.spawn and Python 2.7 subprocess are both welcome, but using GNU Parallel is absolutely mandatory.
EDIT: Here are my requirements:
1) it is crucial to me that the many instances of calculator.py that are spawned keep running if the terminal closes (hence the nohup)
2) I need it to not block current terminal session (hence the &)
3) I need the print "Done" line in the example above to execute only after all 10000 jobs have finished
If achieving all of the above at the same time is not possible, I could manually keep a log of all launched processes and then force the rest of the main.py code to continue only after all those processes end. This, of course, is a cumbersome last-resort option.
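One way to reconcile the three requirements (a sketch, not a drop-in fix): drop the trailing & so that Python, not the terminal, blocks on parallel, and launch main.py itself under nohup so that closing the terminal neither blocks nor kills anything:

import subprocess

# Blocks main.py (not the terminal) until parallel has finished all
# 10000 jobs; nohup keeps the workers alive if the terminal goes away.
subprocess.call("seq 10000 | parallel -N0 -j 50 nohup python calculator.py", shell=True)
print "Done"

main.py would then be started as nohup python main.py & from the shell, which covers requirements 1 and 2, while the blocking call() covers 3.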
I have an issue with interrupting a subprocess.Popen. This is the setup:
I have a Tkinter GUI and it is running another Python script using Popen. This inner script (let's call it running_script) runs a shell script via Popen, and that shell script executes a piece of C++ code, so the hierarchy looks like this:
GUI
  \running_script
    \shell-script
      \c++
running_script works so that if it receives an interrupt, it sends SIGINT to shell-script. If I run shell-script with my piece of C++ code via running_script alone and press CTRL+C, everything works like a charm. However, if the GUI runs running_script as a Popen, the SIGINT is sent to running_script, which receives it properly and forwards the interrupt to shell-script; but instead of the inner process (the C++ code) terminating, shell-script terminates itself and the C++ process continues running, as if it had been started in the background, which it was not. When I execute ps -xaf, the tree looks like this:
GUI
  \running_script
    \shell-script <defunct>
c++
So to reiterate: when I run it without the GUI it works like a charm, but with the GUI it behaves as explained above. I've tried sending shell-script a SIGTERM instead of SIGINT as well; the result is the same.
You could catch SIGINT in the shell script and make it forward SIGINT to the C++ program, e.g.:
#!/bin/bash
# Start the C++ program in the background and remember its PID
./cpp_program &
procpid=$!

# Forward SIGINT to the C++ process when the script is interrupted
function killit() {
    kill -SIGINT "$procpid"
}
trap killit SIGINT

# ... rest of the script ...

# Wait in the shell itself so the trap can fire while we block
wait "$procpid"
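If forwarding from the shell is awkward, another angle (a sketch, assuming a POSIX system; the running_script path is a placeholder) is to have the GUI start the whole tree in its own session and signal the process group, so the C++ grandchild receives the SIGINT directly:

import os
import signal
import subprocess

# start_new_session=True puts running_script and everything it spawns
# into a new session/process group
proc = subprocess.Popen(['./running_script'], start_new_session=True)

# ... later, to interrupt the whole tree:
os.killpg(os.getpgid(proc.pid), signal.SIGINT)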
I am trying to set up a Python script to be executed automatically.
I started with a small piece of code:
import datetime

with open("out.txt", "a") as f:
    f.write(datetime.datetime.now().isoformat())
The task starts all right and executes (the file is modified), but it never ends in the Task Scheduler.
this and this exist on SO, but have no real answer. The only workaround proposed in those threads is to force the task to end after a given time in Windows, but that requires knowing how long the Python script will take, which will not be the case for my actual task.
How can the Task Scheduler know that a Python script is finished?
I run it the following way in the Task Scheduler:
program : cmd
arguments : /c C:\python27\python.exe C:\path\of\script.py
execute in : C:\path\of\
I tried some variations around this, like executing python instead of cmd, but it didn't change anything. I had hoped the /c switch would force the task to close.
As Gaurav Pundir mentioned, adding sys.exit(0) should end the script properly and thus the task. However, you do need import sys in order to use sys.exit(0). Hope this helps!
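Applied to the script above, that looks like this:

import sys
import datetime

with open("out.txt", "a") as f:
    f.write(datetime.datetime.now().isoformat())

sys.exit(0)  # explicitly end the interpreter so the task is marked finished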
It looks like a bug to me.
Try looking for the Python console under Task Manager.
If it is not there, then the program has exited successfully.
I have the same issue on Windows 10: the Python script ran successfully and there is no Python console under Task Manager, yet the scheduled task's status still says 'Running'.
There seems to be no correct fix for this issue with CMD as the intermediate launcher.
There is an [End] command in the Task Scheduler GUI, but clicking it will always terminate only the CMD/batch file, leaving the spawned python.exe process orphaned.
The real problem: there doesn't seem to be any way for cmd to pass the terminate signal on to python.exe, and neither can the task engine reliably determine whether python.exe is alive or not.
I ran into the same problem: the Python file didn't stop in the Task Scheduler. I imported sys and wrote sys.exit(0), but I still had the same problem.
Finally, I decided to press "Update", which solved my problem; the status of the task was then "Ready" and not "Running". For information, I use Windows 11.