I am running a Python script from within ESRI's ArcMap, and it calls another Python script (or at least attempts to) using the subprocess module. However, the system window it executes in (a DOS window) comes up only very briefly, just long enough for me to see there is an error, but it goes away too quickly for me to actually read it and see what the error is!
Does anyone know of a way to "pause" the DOS window or possibly pipe the output of it to a file or something using python?
Here is my code that calls the script that pops up the DOS window and has the error in it:
py_path2 = r"C:\Python25\python.exe"
py_script2 = r"C:\DataDownload\PythonScripts\DownloadAdministrative.py"
subprocess.call([py_path2, py_script2])
Much appreciated!
Cheers
subprocess.call accepts the same arguments as Popen. See http://docs.python.org/library/subprocess.html
You are especially interested in the stderr argument, I think. Perhaps something like this would help:
err = open('logfile', 'w')
subprocess.call([py_path2, py_script2], stderr=err)
err.close()
You could do more if you used Popen directly, without wrapping it around in call.
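For instance, here is a sketch of using Popen directly to capture both streams so the error text survives after the console window closes. The child command below is a stand-in that just writes to stderr; substitute your own [py_path2, py_script2] list:

```python
import subprocess
import sys

# Stand-in child that writes to stderr; replace with [py_path2, py_script2].
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stderr.write('boom')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, err = proc.communicate()  # waits for the child and collects both streams
print(err.decode())            # the error text is now preserved for reading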
Try adding a raw_input() call at the end of your script (it's input() in Python 3).
This will pause the script and wait for keyboard input. If the script raises an exception, you will need to catch it and then issue the call.
Also, there are ways to read the stdout and stderr streams of your command, try looking at subprocess.Popen arguments at http://docs.python.org/library/subprocess.html.
I am using a 3rd-party python module which is normally called through terminal commands. When called through terminal commands it has a verbose option which prints to terminal in real time.
I then have another python program which calls the 3rd-party program through subprocess. Unfortunately, when called through subprocess the terminal output no longer flushes, and is only returned on completion (the process takes many hours so I would like real-time progress).
I can see the source code of the 3rd-party module and it does not set printing to be flushed such as print('example', flush=True). Is there a way to force the flushing through my module without editing the 3rd-party source code? Furthermore, can I send this output to a log file (again in real time)?
Thanks for any help.
The issue is most likely that many programs behave differently when run interactively in a terminal than when run as part of a pipeline (i.e. called using subprocess). It has very little to do with Python itself and more with the Unix/Linux architecture.
As you have noted, it is possible to force a program to flush stdout even when run in a pipeline, but it requires changes to the source code, by manually adding stdout.flush() calls.
Another way to print to the screen is to "trick" the program into thinking it is working with an interactive terminal, using a so-called pseudo-terminal. There is a supporting module for this in the Python standard library, namely pty. Using that, you will not explicitly call subprocess.run (or Popen or ...). Instead you have to use the pty.spawn call:
import os
import pty

def prout(fd):
    # Read everything the child writes to its pseudo-terminal and print it;
    # returning None at the end signals EOF to pty.spawn.
    data = os.read(fd, 1024)
    while data:
        print(data.decode(), end="")
        data = os.read(fd, 1024)

pty.spawn("./callee.py", prout)
As can be seen, this requires a special function for handling stdout. Here above, I just print it to the terminal, but of course it is possible to do other thing with the text as well (such as log or parse...)
Another way to trick the program is to use an external program called unbuffer. Unbuffer will run your script and make the program think (as with the pty call) that it is called from a terminal. This is arguably simpler if unbuffer is installed, or you are allowed to install it on your system (it is part of the expect package). All you have to do then is to change your subprocess call to:
p=subprocess.Popen(["unbuffer", "./callee.py"], stdout=subprocess.PIPE)
and then of course handle the output as usual, e.g. with some code like
for line in p.stdout:
    print(line.decode(), end="")
print(p.communicate()[0].decode(), end="")
or similar. But this last part I think you have already covered, as you seem to be doing something with the output.
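One more option worth mentioning: since the callee here is itself a Python script, you can set PYTHONUNBUFFERED=1 in its environment, which forces the child interpreter to flush every print with no pty or unbuffer needed. A minimal sketch, with a trivial inline child standing in for the real third-party script:

```python
import os
import subprocess
import sys

# PYTHONUNBUFFERED=1 makes the child Python flush stdout on every print.
env = dict(os.environ, PYTHONUNBUFFERED="1")
cmd = [sys.executable, "-c", "print('progress 1'); print('progress 2')"]
lines = []
with subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env, text=True) as p:
    for line in p.stdout:
        print(line, end="")        # arrives as soon as the child prints it
        lines.append(line.strip())
```

Each line can also be appended to a log file inside the loop, giving you the real-time log you asked about.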
I have a compiled program called program that takes in 1 argument called 2phase_eff. I would like to run this program from python but also be able to view its progress (it outputs various progress messages) on the shell in real time. So far I have succeeded in actually running it and viewing output after it is done running using the following code:
import subprocess
subprocess.Popen("program 2phase_eff", stdout=subprocess.PIPE, shell=True).communicate()
Yes this does output all the intermediate stuff at the very end but there are two problems
I cannot see the cmd shell and
The output is not in real time
How can I tweak the above command to fulfill above two objectives? Thanks.
To show a console window for the child, pass creationflags=subprocess.CREATE_NEW_CONSOLE to your call to subprocess.Popen(). How that window is shown can be controlled with the startupinfo argument: create a subprocess.STARTUPINFO(), set its dwFlags to include STARTF_USESHOWWINDOW, and set its wShowWindow attribute to one of the SW_* show-window constants (an integer between 0 and 10, where 0 is SW_HIDE). You can find the full list of values at https://msdn.microsoft.com/en-us/library/windows/desktop/bb762153(v=vs.85).aspx .
The lack of real-time output is a consequence of using Popen.communicate() in combination with stdout=subprocess.PIPE. The output is buffered in memory until the subprocess completes. That is because .communicate() returns you the output, and you don't get that until the method call returns.
You could try passing it a file descriptor instead, and poll that.
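For the real-time part, a sketch of reading the child's stdout line by line instead of calling communicate(); the inline child below is a stand-in for "program 2phase_eff":

```python
import subprocess
import sys

# Stand-in for "program 2phase_eff": read stdout line by line as it appears.
cmd = [sys.executable, "-c", "print('step 1'); print('step 2')"]
seen = []
with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as p:
    for line in p.stdout:
        print(line, end="")       # shown as soon as each line is emitted
        seen.append(line.strip())
```

Note this still depends on the child actually flushing its output; a C program that buffers when not attached to a terminal needs one of the pty/unbuffer tricks discussed above.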
I run a Python Discord bot. I import some modules and have some events. Now and then, it seems like the script gets killed for some unknown reason. Maybe because of an error/exception or some connection issue maybe? I'm no Python expert but I managed to get my bot working pretty well, I just don't exactly understand how it works under the hood (since the program does nothing besides waiting for events). Either way, I'd like it to restart automatically after it stops.
I use Windows 10 and start my program either by double-clicking on it or through pythonw.exe if I don't want the window. What would be the best approach to verify whether my program is still running (it doesn't have to be instant; the check could be done every X minutes)? I thought of using a batch file or another Python script, but I have no idea how to do such a thing.
Thanks for your help.
You can write another Python script (B) that launches your original script (A) using Popen from the subprocess module. In B, wait for A to finish; if A exits with a non-zero error code, restart it from B.
Here is an example of python_code_B.py:
import subprocess

filename = 'my_python_code_A.py'

while True:
    # .wait() blocks until the child exits and returns its exit code.
    p = subprocess.Popen('python ' + filename, shell=True).wait()
    # If 'my_python_code_A.py' exited with an error code, the loop repeats
    # and restarts it; otherwise the program breaks out of the loop.
    if p != 0:
        continue
    else:
        break
This will generally work on both Unix and Windows systems (tested on Windows 7 and 10).
Also, run python_code_B.py from a real terminal, meaning a command prompt or shell, not from inside IDLE.
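A shell-free variant of the same watchdog, a sketch in which the inline child simulates 'my_python_code_A.py' exiting cleanly:

```python
import subprocess
import sys

# Rerun the script until it exits cleanly; the inline child here is a
# stand-in that simply exits with status 0.
cmd = [sys.executable, "-c", "import sys; sys.exit(0)"]
while True:
    rc = subprocess.run(cmd).returncode
    if rc != 0:
        continue   # crashed: restart it
    break          # clean exit: stop restarting
```

Passing the command as a list avoids shell=True, which is generally safer and more portable.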
For the problem you stated, I prefer to use a subprocess call to rerun the Python script, or to use try blocks.
This might be helpful to you.
Check this sample try block:
try:
    import xyz  # a module that may not exist, or any code that may raise
except Exception:
    pass  # continue with the next line of code
I have a Python program from which I spawn a sub-program to process some files without holding up the main program. I'm currently using bash for the sub-program, started with a command and two parameters like this:
result = os.system('sub-program.sh file.txt file.txt &')
That works fine, but I (eventually!) realised that I could use Python for the sub-program, which would be far preferable, so I have converted it. The simplest way of spawning it might be:
result = os.system('python3 sub-program.py file.txt file.txt &')
Some research has shown several more sophisticated alternatives, but I have the impression that the latest and most approved method is this one:
subprocess.Popen(["python3", "-u", "sub-program.py"])
Am I correct in thinking that that is the most appropriate way of doing it? Would anyone recommend a different method and why? Simple would be good as I'm a bit of a Python novice.
If this is the recommended method, I can probably work out what the "-u" does and how to add the parameters for myself.
Optional extras:
Send a message back from the sub-program to the main program.
Make the sub-program quit when the main program does.
Yes, using subprocess is the recommended way to go according to the documentation:
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function.
However, subprocess.Popen may not be what you're looking for. As opposed to os.system, it creates a Popen object corresponding to the subprocess, and you have to wait on that object yourself if you want to block until the subprocess completes, e.g.:
proc = subprocess.Popen(["python3", "-u", "sub-program.py"])
do_something()
res = proc.wait()
If you want to just run a program and wait for completion you should probably use subprocess.run (or maybe subprocess.call, subprocess.check_call or subprocess.check_output) instead.
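A minimal sketch of the subprocess.run variant, which waits for the child and hands back a CompletedProcess (the inline child is a stand-in for sub-program.py):

```python
import subprocess
import sys

# run() waits for the child and returns a CompletedProcess with the
# exit code and (optionally) the captured output.
result = subprocess.run(
    [sys.executable, "-c", "print('done')"],
    capture_output=True,
    text=True,
)
print(result.returncode, result.stdout, end="")
```

Note that run() blocks, so it suits the "run and wait" case rather than the fire-and-forget spawning in the question.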
Thanks skyking!
With
import subprocess
at the beginning of the main program, this does what I want:
with open('output.txt', 'w') as f:
    subprocess.Popen(['python3', 'spawned.py', parameter1, parameter2], stdout=f)
The first line opens a file for the output of the sub-program started in the second line. The list contains the command for the sub-program: the interpreter, the script name, and two parameters. The parameters are available in the sub-program in sys.argv[1] and sys.argv[2]. After that comes the stdout=f argument, which sends the sub-program's output to the text file opened above.
Is there any particular reason it has to be another program entirely? Why not just spawn another process which runs one of the functions defined within your script?
I suggest that you read up on multiprocessing. Python has module just for that: https://docs.python.org/dev/library/multiprocessing.html
Here you can find info on spawning new processes, communicating between them and syncronizing them.
Be warned though that if you want to really speed up your file processing you'll want to use processes instead of threads: because of Python's Global Interpreter Lock, threads cannot run CPU-bound work in parallel and may even slow you down.
Also check out this page: https://pymotw.com/2/multiprocessing/basics.html
It has some code samples that will help you out a lot.
Don't forget this guard in your script:
if __name__ == '__main__':
It is very important ;)
Caveat: new to Python.
Wanting to hear from professionals who actually use it:
What are the main differences between subprocess.Popen() and subprocess.call() and when is it best to use each one?
Unless you want to read why I was thinking about this question or what to center your answer around, you may stop reading now.
I was inspired to ask this question because I was working through an issue in a script where I started using subprocess.Popen(), eventually called a system pause, and then wanted to delete the .exe that created the system pause. I noticed that with Popen() the commands all seemed to run together (the delete of the .exe gets executed before the .exe is closed), even though I tried adding communicate().
Here is fake code for what I'm describing above:
subprocess.Popen(r'type pause.exe > c:\worker.exe', shell=True).communicate()
subprocess.Popen(r'c:\worker.exe', shell=True).communicate()
subprocess.Popen(r'del c:\worker.exe', shell=True).communicate()
subprocess.call(*popenargs, **kwargs)

Run command with arguments. Wait for command to complete, then return the returncode attribute.
If you create a Popen object, you must call its wait() method yourself.
If you use call, that's done for you.
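In other words, these two forms are equivalent (the inline child is a stand-in for any command):

```python
import subprocess
import sys

cmd = [sys.executable, "-c", "print('hi')"]

# call() runs the command, waits, and returns the exit code...
rc1 = subprocess.call(cmd)

# ...which is the same as creating a Popen object and waiting yourself.
proc = subprocess.Popen(cmd)
rc2 = proc.wait()
```

Use Popen when you need to do something between starting the child and waiting for it (or to interact with its streams); otherwise call (or, in modern Python, subprocess.run) is simpler.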