I'm running an application from within my code, and it rewrites files which I need to read later on in the code. There is no output that goes directly into my program. I can't get my code to wait until the subprocess has finished; it just goes ahead and reads the unchanged files.
I've tried subprocess.Popen.wait(), subprocess.call(), and subprocess.check_call(), but none of them work for my problem. Does anyone have any idea how to make this work? Thanks.
Edit: Here is the relevant part of my code:
os.chdir(r'C:\Users\Jeremy\Documents\FORCAST\dusty')
t = subprocess.Popen('start dusty.exe', shell=True)
t.wait()
os.chdir(r'C:\Users\Jeremy\Documents\FORCAST')
Are you using the object returned by subprocess.Popen()?
p = subprocess.Popen(command)
p.wait()
should work.
Are you sure that the command does not end instantly?
If you execute a program with
t = subprocess.Popen(prog, shell=True)
Python won't throw an error, regardless of whether the program exists or not. If you try to start a non-existent program with Popen and shell=False, you will get an error. My guess would be that your program either doesn't exist in the folder or doesn't execute. Try running it from the Python IDLE environment with shell=False and see if you get a new window.
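A quick way to see the difference (a sketch; the program name is deliberately one that does not exist):

```python
import subprocess

# With shell=False (the default), a missing program raises immediately:
try:
    subprocess.Popen(["no_such_program"])
except FileNotFoundError:
    print("shell=False reports the missing program")

# With shell=True, the shell itself starts fine, so Python raises
# nothing; the failure only shows up in the child's exit code.
p = subprocess.Popen("no_such_program", shell=True)
p.wait()
print("shell=True exit code:", p.returncode)  # non-zero
```

So with shell=True, wait() happily returns as soon as the shell exits, even if the program it was asked to run never started.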
I have in my root directory
$ cat pssa.py
import subprocess,sys
p = subprocess.Popen(["powershell.exe", ".\\pre-commit.ps1"],
                     stdout=sys.stdout, stderr=sys.stderr, shell=True)
p.communicate()
pre-commit.ps1 returns 1, so it's in error, but
python pssa.py
returns 0.
Forgive my complete lack of Python skills, but I'm stuck. I'd be grateful for help suggesting how python pssa.py can return the error code from the PowerShell script.
I think I read somewhere that Popen does not wait for the script to finish. So: is there another method I can use that does wait, and that in turn can read the return code from PowerShell?
Python is installed on Windows. The idea with the above is to be able to use, for example, pre-commit run meaningfully on Windows. Right now, pre-commit run executes the PowerShell script but does not fail as I would like it to.
Popen.communicate() waits for the subprocess to finish and sets returncode on the Popen object. You can use it like this:
import subprocess, sys
p = subprocess.Popen(["powershell.exe", ".\\pre-commit.ps1"],
                     stdout=sys.stdout, stderr=sys.stderr, shell=True)
outs, errs = p.communicate()
code = p.returncode
I have a Python 3.9 script that starts another process in a new console. The 'other process' keeps running even after the original one has completed.
This is what I have on Windows:
# startup.py script
# =================
import sys, subprocess
if __name__ == '__main__':
    print('start startup.py script')
    arguments = ['python', 'other_process.py']
    arguments.extend(sys.argv[1:])
    subprocess.Popen(
        arguments,
        creationflags=subprocess.CREATE_NEW_CONSOLE,
    )
    print('end startup.py script')
It works great. I invoke startup.py in one console and pass it a --help flag, which is simply forwarded to the other_process.py script. The other_process.py script then runs in its own new console, and it keeps running after the original startup.py script has already finished. That's exactly what I need.
The subprocess.CREATE_NEW_CONSOLE parameter doesn't work on Linux. I've heard that setting shell=True would have a similar effect, but it doesn't spawn a new console.
How can I get the same effect on Linux?
Unix doesn’t provide this option/service, but you can run a terminal emulator:
subprocess.Popen(["gnome-terminal","--"]+arguments)
There isn’t a standard means of finding which terminal emulator to use (or even which are available), unfortunately. Checking shutil.which for a few common ones might be the right idea; from Wikipedia’s list, I’d recommend gnome-terminal, konsole, and xterm. You still then have to deal with the slightly different syntax to run a command in each.
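A sketch of that idea: probe a few common emulators with shutil.which and launch the first one found. The emulator list and the per-emulator flags are assumptions you may need to adapt for your system.

```python
import shutil
import subprocess

def open_in_terminal(arguments):
    """Run `arguments` in the first terminal emulator found on PATH."""
    launchers = [
        ("gnome-terminal", ["--"]),  # gnome-terminal -- cmd arg...
        ("konsole", ["-e"]),         # konsole -e cmd arg...
        ("xterm", ["-e"]),           # xterm -e cmd arg...
    ]
    for name, flags in launchers:
        path = shutil.which(name)
        if path is not None:
            return subprocess.Popen([path, *flags, *arguments])
    raise RuntimeError("no known terminal emulator found")
```

Calling open_in_terminal(['python', 'other_process.py', '--help']) would then roughly correspond to the CREATE_NEW_CONSOLE behaviour on Windows.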
So I have been trying for hours, even days, to figure out how to run two Python files simultaneously.
I have tried subprocesses, multiprocessing, bash, and whatnot. I must be doing something wrong; I just don't know what.
I have two Python files, and I want to run them in parallel, but note that neither of them ends. I want, while the first file is open and running, to run a second file.
Everything I have tried only opens the first file and stops there, since that script is supposed to run 24/7. Note that when I tried to use a separate bash file, it, for some reason, opened in Git and then closed, doing nothing. I'm really desperate at this point.
Please provide detailed answers with code; I have been scanning the whole internet (Stack Overflow included), I have tried everything, and nothing seems to be working.
import subprocess
import LemonBot_General
import LemonBot_Time
import multiprocessing

def worker(file):
    subprocess.Popen(["python3 LemonBot_Time.py"], stdout=subprocess.PIPE)
    subprocess.Popen(["python3 LemonBot_General.py"], stdout=subprocess.PIPE)

if __name__ == '__main__':
    files = ["LemonBot_General.py", "LemonBot_Time.py"]
    for i in files:
        p = multiprocessing.Process(target=worker, args=(i,))
        p.start()
This is the latest thing I tried, and it didn't work.
I also tried the subprocess commands alone; that didn't work either.
The bash file didn't work either.
EDIT: NEITHER of the files FINISH. I want to run them in parallel.
You should be able to use Popen from subprocess; this worked for me. If you remove the p.wait() line, the second file will quit as soon as the first file finishes.
import time
import subprocess
p = subprocess.Popen(['python', 'test_file.py'])
time.sleep(5)
print("Working well")
p.wait()
Use an os.system('python3 myprogram.py') call inside a threading.Thread() for each file.
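A minimal sketch of that suggestion, using the file names from the question:

```python
import os
import threading

files = ["LemonBot_General.py", "LemonBot_Time.py"]

# One thread per script; os.system blocks inside its thread,
# so the two scripts run side by side.
threads = [threading.Thread(target=os.system, args=(f"python3 {f}",))
           for f in files]
for t in threads:
    t.start()
# No join() here: neither script is meant to finish, so joining
# would block forever.
```

Note that os.system gives you no handle on the child process; if you later need its output or return code, the subprocess.Popen approach above is the better fit.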
I'm generating a text file that is later processed by an external program. This must be done 1000 times, so I use a subprocess.call() inside a loop, once for each text file I want to process.
The first call of subprocess.call() works perfectly.
The second call fails and the Python program exits with a []Stop.
- There is no debug output.
- Both processes remain stopped, but still appear in the process list.
I have tried subprocess.call() and subprocess.Popen(), and the outcome is the same. I have tried running it with the same text file as the first execution, and it also fails, so the culprit is the subprocess.call() function for sure.
This is the line that calls the external program:
subprocess.call(['/bin/bash', '-i', '-c', 'nucplot textfile.txt'])
The program is a simple binary, but it must use the environment variables of its installation to work properly, hence the use of /bin/bash with those options. If I try using a shell, it doesn't work.
Is there anything else I need to do after calling subprocess.call() in order for it to flush its internal state?
Try using subprocess.check_output:
https://docs.python.org/3/library/subprocess.html#subprocess.check_output
output = subprocess.check_output(['/bin/bash', '-i', '-c', 'nucplot textfile.txt'])
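For the loop itself, one hedged guess about the hang: an interactive bash (-i) may stop waiting for terminal input, so redirecting stdin away from the terminal can help. A sketch with subprocess.run (Python 3.5+); the file names are illustrative:

```python
import subprocess

for i in range(1000):
    textfile = f"textfile_{i}.txt"
    result = subprocess.run(
        ["/bin/bash", "-i", "-c", f"nucplot {textfile}"],
        stdin=subprocess.DEVNULL,   # keep interactive bash from reading the terminal
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(f"nucplot failed on {textfile}:", result.stderr)
```

Capturing stderr per call also gives you the debug output that was missing when the processes silently stopped.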
I need to run another Python script, one that generates data, from the script I'm currently working on. I use subprocess to run it:
cmd = 'python /home/usr/script.py arg1 arg2 arg3'
subprocess.Popen(cmd, shell=True)
But I have a problem. That script generates a few directories in the 'current directory', meaning the directory it was run in. And I can't modify the script, because it's not mine. How do I set the current directory to the one where I want the data?
Another small problem is that when I run subprocess.Popen(), my script doesn't end. Should I run it in another way?
The best way is to use subprocess.call instead (it waits and terminates; Popen without the corresponding wait() may create a zombie process) and to use the cwd= parameter to specify the current directory for the subprocess:
cmd = ['python','/home/usr/script.py','arg1','arg2','arg3']
return_code = subprocess.call(cmd, cwd="/some/dir")
(Also pass the command as a list and drop shell=True; you don't need it here.)