I have a compiled program called program that takes one argument, 2phase_eff. I would like to run this program from Python and also view its progress (it outputs various progress messages) in the shell in real time. So far I have succeeded in running it and viewing its output after it has finished, using the following code:
import subprocess
subprocess.Popen("program 2phase_eff", stdout=subprocess.PIPE, shell=True).communicate()
Yes, this does output all the intermediate messages, but only at the very end, and there are two problems:

1. I cannot see the cmd shell, and
2. the output is not in real time.

How can I tweak the above command to fulfil these two objectives? Thanks.
To show the command shell you need to tell subprocess.Popen() how the console window should be created. The simplest option is to pass creationflags=subprocess.CREATE_NEW_CONSOLE, which gives the child its own visible console window. For finer control there is the startupinfo parameter: its wShowWindow field takes the same show values as the nShowCmd parameter of ShellExecute() (an integer between 0 and 10, where 0 is SW_HIDE), provided you also set the STARTF_USESHOWWINDOW flag. You can find a full list of the available values at https://msdn.microsoft.com/en-us/library/windows/desktop/bb762153(v=vs.85).aspx .
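A minimal sketch of the new-console route, assuming Windows and the command from the question; this also gives real-time output, since the child writes straight to its own console:

import subprocess

# Windows-only sketch: give the child its own visible console window.
# Without stdout=PIPE, its progress messages appear there in real time.
proc = subprocess.Popen("program 2phase_eff",
                        creationflags=subprocess.CREATE_NEW_CONSOLE)
proc.wait()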
The lack of real-time output is a consequence of using Popen.communicate() in combination with stdout=subprocess.PIPE: the output is buffered in memory until the subprocess completes, because .communicate() collects all of it and only hands it back when the method call returns.
You could try passing it a file descriptor instead and polling that, or read the pipe yourself while the process runs, as sketched below.
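For example, a minimal sketch that streams the pipe line by line, assuming the program writes its progress messages to stdout (stderr is merged in for good measure):

import subprocess

# Read the child's output as it is produced instead of waiting for
# communicate() to hand everything back at the end.
proc = subprocess.Popen("program 2phase_eff",
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT,
                        shell=True)
for line in proc.stdout:
    print(line.decode(), end="")  # echo each progress message immediately
proc.wait()

Note that if the program itself buffers its output when attached to a pipe, the lines will still arrive late; the next question below deals with exactly that.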
I am using a 3rd-party python module which is normally called through terminal commands. When called through terminal commands it has a verbose option which prints to terminal in real time.
I then have another python program which calls the 3rd-party program through subprocess. Unfortunately, when called through subprocess the terminal output no longer flushes, and is only returned on completion (the process takes many hours so I would like real-time progress).
I can see the source code of the 3rd-party module and it does not set printing to be flushed such as print('example', flush=True). Is there a way to force the flushing through my module without editing the 3rd-party source code? Furthermore, can I send this output to a log file (again in real time)?
Thanks for any help.
The issue is most likely that many programs behave differently when run interactively in a terminal than when run as part of a pipeline (i.e. called using subprocess). It has very little to do with Python itself, but a lot to do with how Unix/Linux buffers stdio.
As you have noted, it is possible to force a program to flush stdout even when run in a pipeline, but it requires changes to the source code, adding stdout.flush() calls by hand.
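One addition to the original answer: since the callee here is itself a Python script, it can be forced into unbuffered mode from the outside, with no source edits, by running it under python -u or with PYTHONUNBUFFERED=1 in its environment. A minimal sketch, assuming the child is ./callee.py as in the examples below and progress.log is a made-up log file name:

import subprocess
import sys

# Run the child interpreter unbuffered (-u); its prints then reach the
# pipe immediately, so we can stream them and log them in real time.
with open("progress.log", "w") as log:
    p = subprocess.Popen([sys.executable, "-u", "./callee.py"],
                         stdout=subprocess.PIPE)
    for line in p.stdout:
        text = line.decode()
        print(text, end="")   # real-time echo to the terminal
        log.write(text)       # and to a log file, as asked
    p.wait()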
Another way to get real-time output is to "trick" the program into thinking it is talking to an interactive terminal, using a so-called pseudo-terminal. There is a supporting module for this in the Python standard library, namely pty. Using that, you do not call subprocess.run (or Popen or ...) explicitly. Instead you use pty.spawn, passing it a callback that is invoked with the master file descriptor whenever the child produces output; the bytes the callback returns are echoed to your own stdout:
import os
import pty

def prout(fd):
    # Called by pty.spawn() each time the child produces output.
    # We must return the bytes we read; pty.spawn() then forwards them
    # to our own stdout, so they appear in real time.
    data = os.read(fd, 1024)
    return data

pty.spawn("./callee.py", prout)
As can be seen, this requires a special function for handling stdout. In the sketch above it just hands the data back to be printed on the terminal, but the callback is also the natural place to do other things with the text (such as logging or parsing it).
Another way to trick the program is to use an external program called unbuffer. unbuffer takes your command as input and makes the program think (as with the pty call) that it is called from a terminal. This is arguably simpler if unbuffer is installed, or you are allowed to install it on your system (it is part of the expect package). All you have to do then is change your subprocess call to
p = subprocess.Popen(["unbuffer", "./callee.py"], stdout=subprocess.PIPE)
and then handle the output as usual, e.g. by streaming it line by line:

for line in p.stdout:
    print(line.decode(), end="")

or by collecting it all when the process finishes:

print(p.communicate()[0].decode(), end="")

But this last part I think you have already covered, as you seem to be doing something with the output.
I have a script that adds variables to the environment. That script is called through
subprocess.call('. myscript.sh', shell=True)
Is there a way I can get the modified environment and use it in my next subprocess call? This question shows you can get the output of one call and chain it into another call: Python subprocess: chaining commands with subprocess.run. Is there something similar for passing the environment?
You'll have to output the variables' contents somehow. You're spawning a new process, which will not propagate its environment variables back to the parent, so your Python app will not see those values.
You could either make the script echo those to some file, or to the standard output if possible.
(Technically, it would be possible to stop the process and extract the values if you really wanted to hack that, but it's a bad idea.)
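A minimal sketch of the echo-to-stdout route (assuming myscript.sh exports only simple values without embedded newlines, since the parse below is naive, and next_command is a placeholder):

import subprocess

# Source the script in a shell, then dump the resulting environment
# to stdout and parse it back into a dict.
out = subprocess.check_output(". ./myscript.sh && env",
                              shell=True, text=True)
env = dict(line.split("=", 1)
           for line in out.splitlines() if "=" in line)

# Pass the captured environment on to the next call:
subprocess.call(["next_command"], env=env)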
I have a Python application which I want to use as a multi-terminal handler. I want each object to have its own terminal, separated from the rest, each running its own instance, exactly like when I run two or more separate terminals in Linux (/bin/sh or /bin/bash).
sample: (just logic not code)
first_terminal = terminalInstance()
second_terminal = terminalInstance()
first_result = first_terminal.doSomething("command")
second_result = second_terminal.doSomething("command")
I actually need each terminal to grab stdin & stdout in a virtual environment and control them; this is why they must be separate. Is this possible in Python? I've seen a lot of code handling a single terminal, but how do you do it with multiple terminals?
PS: I don't want to include while loops (if possible), since I want to scale from dealing with 2 terminals to as many as my system can handle. Is it possible to control them by reference, giving each terminal a reference and then calling on that object to issue a command?
The pexpect module (https://pypi.python.org/pypi/pexpect/), among others, allows you to launch programs via a pseudo-tty, which "allows your script to spawn a child application and control it as if a human were typing commands."
You can easily spawn multiple commands, each running in a separate pseudo-tty and represented by a separate object, and you can interact with each object separately. There is a lot of flexibility as to when/how you interact. You can send input to them, and read their output, either blocking or non-blocking, and incorporating timeouts and alternative outputs.
Here's a trivial session example (run bash, have it execute an "ls" command, gather the first line of output).
import pexpect

x = pexpect.spawn("/bin/bash", encoding="utf-8")
x.sendline("ls")
x.expect("\n")   # End of echoed command
x.expect("\n")   # End of first line of output
print(x.before)  # Print first line of output
Note that you'll receive all the output from the terminal, typically including an echoed copy of every character you send to it. If running something like a shell, you might also need to set the shell prompt (or determine the shell prompt in use) and use that in parsing the output (i.e. in finding the end of each command's output).
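For the multi-terminal setup in the question, pexpect's replwrap helper takes care of that prompt bookkeeping; each wrapper owns its own pseudo-tty, which maps naturally onto the terminalInstance idea. A sketch (names adapted from the question's pseudocode):

from pexpect import replwrap

# Each REPLWrapper runs its own bash in its own pseudo-terminal.
first_terminal = replwrap.bash()
second_terminal = replwrap.bash()

# run_command() sends the command, waits for the prompt to return,
# and hands back everything the shell printed; no while loop needed.
first_result = first_terminal.run_command("ls")
second_result = second_terminal.run_command("pwd")
print(first_result)
print(second_result)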
I was reading about sys.stdout.flush() in Python, and I found this example a lot:
import sys, time

for i in range(10):
    print i,
    #sys.stdout.flush()
    time.sleep(1)
It is often said that it makes a difference with/without the "sys.stdout.flush()".
However, when I called this script from command prompt, it didn't make a difference in my case. Both printed numbers to the screen in real time.
I used Python 2.7.5 on Windows.
Why is that happening?
p.s. In another example which printed the output through subprocess.PIPE instead of to the screen directly, I did observe a difference of the buffering.
What am I missing?
Using flush will generally guarantee that flushing is done, but assuming the reverse relationship is a logical fallacy, akin to:
Dogs are animals.
This is an animal.
Therefore this is a dog.
In other words, not using flush does not guarantee flushing will not happen.
Interestingly enough, using Python 2.7.8 under Cygwin on Windows 8.1, I see the opposite behaviour: everything is batched up until the end. It may be different with Windows-native Python, and it may also be different from within IDLE.
See stdio buffering. In brief, the default buffering modes are:

- stdin is always buffered
- stderr is always unbuffered
- if stdout is a terminal, buffering is automatically set to line buffered; otherwise it is set to fully buffered
For me, the example you gave prints:

In cmd:
- all the numbers upon exit in Cygwin's python
- one by one in Win32 python

In mintty:
- all upon exit with both pythons
- one by one with the -u option in both
- sys.stdout.isatty() returns False!
So, it looks like msvcrt's stdout is unbuffered when it points to a terminal. A test with a simple C program shows the same behaviour.
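To reproduce the difference the question mentions with subprocess.PIPE, here is a small self-contained sketch (Python 3 syntax; the inline child program is made up for the demo):

import subprocess
import sys

# The same counting loop, but run behind a pipe: stdout is no longer
# a terminal, so it becomes block-buffered and nothing arrives until
# the child exits, unless the flush line is uncommented.
child = (
    "import sys, time\n"
    "for i in range(10):\n"
    "    print(i)\n"
    "    # sys.stdout.flush()  # uncomment for real-time output\n"
    "    time.sleep(1)\n"
)
p = subprocess.Popen([sys.executable, "-c", child],
                     stdout=subprocess.PIPE)
for line in p.stdout:
    print(line.decode(), end="")
p.wait()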
I am running a Python script from within ESRI's ArcMap, and it calls another Python script (or at least attempts to) using the subprocess module. However, the system window it executes in (a DOS window) comes up only very briefly, long enough for me to see that there is an error, but it goes away too quickly for me to actually read what the error is!
Does anyone know of a way to "pause" the DOS window or possibly pipe the output of it to a file or something using python?
Here is my code that calls the script that pops up the DOS window and has the error in it:
py_path2="C:\Python25\python.exe"
py_script2="C:\DataDownload\PythonScripts\DownloadAdministrative.py"
subprocess.call([py_path2, py_script2])
Much appreciated!
Cheers
subprocess.call accepts the same arguments as Popen. See http://docs.python.org/library/subprocess.html
You are especially interested in the stderr argument, I think. Perhaps something like this would help:
err = open('logfile', 'w')
subprocess.call([py_path2, py_script2], stderr=err)
err.close()
You could do more if you used Popen directly, without wrapping it around in call.
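For instance, a sketch of capturing both streams with Popen and printing them once the child exits, using the paths defined in the question:

import subprocess

# Merge stderr into stdout and collect everything the child prints,
# so the error message survives after the DOS window closes.
p = subprocess.Popen([py_path2, py_script2],
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
output, _ = p.communicate()
print(output.decode())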
Try adding a raw_input() call at the end of your script (it's input() in Python 3).
This will pause the script and wait for keyboard input. If the script raises an exception, you will need to catch it first and then issue the call, as sketched below.
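A sketch of that pattern for the end of the called script (main() is a stand-in for whatever DownloadAdministrative.py actually does):

import traceback

try:
    main()  # hypothetical entry point of the script
except Exception:
    traceback.print_exc()  # show the error in the DOS window
finally:
    raw_input("Press Enter to close...")  # input() in Python 3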
Also, there are ways to read the stdout and stderr streams of your command; try looking at the subprocess.Popen arguments at http://docs.python.org/library/subprocess.html.