Read Python STDOUT in realtime - python

My code is as follows. Basically, this module runs the required command and captures its output line by line, but when the command prints something to STDOUT and then takes just over a second to return to the command prompt, child.stdout.read(1) hangs. If I run a normal command this way it prints everything as expected; it only hangs in the particular case where the command writes to STDOUT and then takes some time to return to the prompt. Please help.
New code:
import subprocess

def run_command(shell_command):
    '''run the required command and print the log'''
    child = subprocess.Popen(shell_command, shell=True, stdout=subprocess.PIPE)
    (stdoutdata, stderrdata) = child.communicate()
    print stdoutdata
    print "Exiting.."
Error:
File "upgrade_cloud.py", line 62, in <module>
stop_cloud()
File "upgrade_cloud.py", line 49, in stop_cloud
run_command(shell_command)
File "upgrade_cloud.py", line 33, in run_command
(stdoutdata, stderrdata) = child.communicate()
File "/usr/lib/python2.6/subprocess.py", line 693, in communicate
stdout = self.stdout.read()
KeyboardInterrupt

Here's your problem:
child.wait()
This line causes Python to wait for the child process to exit. If the child process tries to print a lot of data to stdout, it will block waiting for Python to read said data. Since Python is waiting for the child process and the child process is waiting for Python, you get a deadlock.
I would recommend using subprocess.check_output() instead of subprocess.Popen. You could also use the Popen.communicate() method instead of the .wait() method.
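For the real-time, line-by-line behaviour the title asks about, one pattern is to keep Popen but drain the pipe while the child is still running. A minimal sketch (Python 2 syntax to match the code above; merging stderr into stdout is an assumption, not something the question requires):

import subprocess

def run_command(shell_command):
    '''run the required command and print its output as it arrives'''
    child = subprocess.Popen(shell_command, shell=True,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT)
    # Read line by line while the child is still running, so the pipe
    # never fills up and neither side blocks waiting on the other.
    for line in iter(child.stdout.readline, ''):
        print line,
    child.stdout.close()
    child.wait()
    print "Exiting.."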

Related

Reading output from terminal using subprocess.run

I'm writing a python script to parse a value from a JSON file, run a tool called Googler with a couple of arguments including the value from the JSON file, and then save the output of the tool to a file (CSV preferred, but that's for another day).
So far the code is:
import json
import os
import subprocess
import time

with open("test.json") as json_file:
    json_data = json.load(json_file)

test = (json_data["search_term1"]["apparel"]["biba"])
#os.system("googler -N -t d1 "+test) shows the output, but can't write to a file.
result = subprocess.run(["googler", "-N", "-t", "d1", test], stdout=subprocess.PIPE, universal_newlines=True)
print(result.stdout)
When I run the above script, nothing happens; the terminal just sits blank until I send a keyboard interrupt, and then I get this error:
Traceback (most recent call last):
  File "script.py", line 12, in <module>
    result = subprocess.run(["googler", "-N", "-t", "d1", test], stdout=subprocess.PIPE, universal_newlines=True)
  File "/usr/lib/python3.5/subprocess.py", line 695, in run
    stdout, stderr = process.communicate(input, timeout=timeout)
  File "/usr/lib/python3.5/subprocess.py", line 1059, in communicate
    stdout = self.stdout.read()
KeyboardInterrupt
I tried replacing the test variable with a string literal; same error. The same line works with something like "ls", "-l", "/dev/null".
How do I extract the output of this tool and write it to a file?
Your googler command works in interactive mode. It never exits, so your program is stuck.
You want googler to run the search, print the output and then exit.
From the docs, I think --np (or --noprompt) is the right parameter for that. I didn't test.
result = subprocess.run(["googler", "-N", "-t", "d1", "--np", test], stdout=subprocess.PIPE, universal_newlines=True)

How does one create custom output stream for subprocess.call

I am trying to get realtime output of a subprocess.call by defining my own output stream, but it doesn't seem to work.
Reason: I want to run a subprocess and send the output of that call both to stdout (in realtime, so I can look at the script and see current progress) and to a log file.
print_process.py:
import time

while True:
    print("Things")
    time.sleep(1)
mainprocess.py
import subprocess
import io

class CustomIO(io.IOBase):
    def write(self, str):
        print("CustomIO: %s" % str)
        # logging to be implemented here

customio = CustomIO()
subprocess.call(["python3", "print_process.py"], stdout=customio)
But when I run this code I get this error message:
Traceback (most recent call last):
  File "call_test.py", line 9, in <module>
    subprocess.call(["python3", "print_process.py"], stdout=customio)
  File "/usr/lib/python3.4/subprocess.py", line 537, in call
    with Popen(*popenargs, **kwargs) as p:
  File "/usr/lib/python3.4/subprocess.py", line 823, in __init__
    errread, errwrite) = self._get_handles(stdin, stdout, stderr)
  File "/usr/lib/python3.4/subprocess.py", line 1302, in _get_handles
    c2pwrite = stdout.fileno()
io.UnsupportedOperation: fileno
So, does anyone have any clue whether this is possible?
Am I inheriting the wrong base class?
Am I not overloading the proper methods?
Or am I completely off the rails and should I be going about this in a completely different way?
If you want to process the output of a subprocess, you need to pass stdout=subprocess.PIPE. However, call() and run() will both wait until the process is finished before making it available, so you cannot handle it in real time using these functions.
You need to use subprocess.Popen:
import subprocess as sp

def handle_output(output_line):
    ...

my_process = sp.Popen(["python3", "print_process.py"],
                      stdout=sp.PIPE,
                      universal_newlines=True)  # changes stdout from bytes to text

for line in my_process.stdout:
    handle_output(line)

my_process.wait()
Update: Make sure to flush the output buffer in your child process:
while True:
    print("Things", flush=True)
    time.sleep(1)
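To get the tee behaviour the question asks for (console and log file at once), the loop above can simply do both writes instead of calling a separate handle_output. A minimal sketch, with "process.log" as an arbitrary log file name:

with open("process.log", "a") as log_file:
    for line in my_process.stdout:
        print(line, end="")    # echo to the console in real time
        log_file.write(line)   # and append the same line to the log
my_process.wait()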
You need to specify an open stream with a file descriptor. fileno isn't implemented for io.IOBase because it is just an in-memory stream:
Frequently Used Arguments
stdin, stdout and stderr specify the executed program’s standard
input, standard output and standard error file handles, respectively.
Valid values are PIPE, DEVNULL, an existing file descriptor (a
positive integer), an existing file object, and None. PIPE indicates
that a new pipe to the child should be created. DEVNULL indicates that
the special file os.devnull will be used. With the default settings of
None, no redirection will occur;
So you might use sockets, pipes, and open files as stdout; the file descriptor is passed to the child process as its stdout. I haven't used sockets with subprocess.Popen myself, but I expect them to work: what matters here is the file descriptor handed to the child, not what type of object that descriptor points to.
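As a concrete illustration of the file-descriptor point, an ordinary open file works where the CustomIO object failed, because it has a real fileno() the child can inherit. A sketch ("output.log" is an arbitrary name); note this logs without echoing to the console, so for both at once the PIPE loop above is still the way to go:

import subprocess

# A real file has a file descriptor, so the child writes to it directly.
with open("output.log", "w") as log_file:
    subprocess.call(["python3", "print_process.py"], stdout=log_file)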

Python: pass ctrl-c to a process run via os.spawnvpe

In my python script I have:
os.spawnvpe(os.P_WAIT, cmd[0], cmd, os.environ)
where cmd is something like ['mail', '-b', emails,...] which allows me to run mail interactively and go back to the python script after mail finishes.
The only problem is when I press Ctrl-C. It seems that "both mail and the python script react to it" (*), whereas I'd prefer that while mail is running, only mail reacts, and no exception is raised by python. Is it possible to achieve this?
(*) What happens exactly on the console is:
^C
(Interrupt -- one more to kill letter)
Traceback (most recent call last):
  File "./tutster.py", line 104, in <module>
    cmd(cmd_run)
  File "./tutster.py", line 85, in cmd
    code = os.spawnvpe(os.P_WAIT, cmd[0], cmd, os.environ)
  File "/usr/lib/python3.4/os.py", line 868, in spawnvpe
    return _spawnvef(mode, file, args, env, execvpe)
  File "/usr/lib/python3.4/os.py", line 819, in _spawnvef
    wpid, sts = waitpid(pid, 0)
KeyboardInterrupt
and then the mail is in fact sent (which is already bad, because the intention was to kill it), but the body is empty and the content is sent as an attachment with a .bin extension.
Wrap it in a try/except statement:
try:
    os.spawnvpe(os.P_WAIT, cmd[0], cmd, os.environ)
except KeyboardInterrupt:
    pass
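If the goal is that only mail reacts to Ctrl-C, an alternative sketch is to install a temporary no-op SIGINT handler in the parent around the call. Unlike SIG_IGN, a Python-level handler is reset to the default in the child on exec, so mail keeps its normal interrupt behaviour; on Python 3.5+ the interrupted waitpid is retried automatically (PEP 475). This is a sketch, not tested against mail itself:

import os
import signal

def run_interactive(cmd):
    # Temporarily make the parent shrug off Ctrl-C; the spawned child
    # still receives SIGINT and handles it however it likes.
    old_handler = signal.signal(signal.SIGINT, lambda signum, frame: None)
    try:
        return os.spawnvpe(os.P_WAIT, cmd[0], cmd, os.environ)
    finally:
        signal.signal(signal.SIGINT, old_handler)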

How to execute a function callback as a subprocess in python?

I'm currently learning how to use subprocesses; for this (and some other) reasons I've bought a book on working with them. It's a good book and I'm not having trouble understanding it. It starts by explaining how to execute shell commands as subprocesses.
I've had a programming problem for ages, and with subprocesses I could solve it, but I need to execute a function callback as a subprocess.
I have this code to echo something but it's a shell command:
import subprocess

proc = subprocess.Popen(['echo', 'Hello, this is child process speaking'],
                        stdout=subprocess.PIPE, shell=True)
out, err = proc.communicate()
print(out.decode('utf-8'))
I want this callback to be executed as a subprocess:
def callb():
    import time as t
    print('2')
    t.sleep(2)
    print('1')
    t.sleep(2)
    print('0')
I just tried to execute this callback like this (it was a simple, naive idea):
proc = subprocess.Popen(callb())
but this gives me the following error:
Traceback (most recent call last):
  File "/root/testfile.py", line 6, in <module>
    proc = subprocess.Popen(callb())
  File "/usr/lib/python3.3/subprocess.py", line 818, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.3/subprocess.py", line 1321, in _execute_child
    args = list(args)
TypeError: 'NoneType' object is not iterable
The strange thing is that it does execute the callback, but then it raises this error!
What did I do wrong? Did I forget something?
The subprocess module is not suitable for executing Python callbacks; you want to look at the multiprocessing module instead. The first few examples with Process and Pool seem to do what you want.
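A minimal sketch of the multiprocessing version of the countdown callback (note the function itself is passed as target, without calling it, which is exactly what the Popen(callb()) attempt got wrong):

import time
from multiprocessing import Process

def callb():
    print('2')
    time.sleep(2)
    print('1')
    time.sleep(2)
    print('0')

if __name__ == '__main__':
    proc = Process(target=callb)  # no parentheses: pass the function, don't call it
    proc.start()
    proc.join()   # wait for the countdown to finish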

NameError: name 'buffer' is not defined with Ant Based framework batch file

I'm using a python script to execute an Ant-based framework batch file (Helium.bat):
subprocess.Popen('hlm '+commands, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
However the script will always stop and display the following error when it executes the .bat file:
import codecs
  File "C:\Python25\lib\codecs.py", line 1007, in <module>
    strict_errors = lookup_error("strict")
  File "C:\Python25\lib\encodings\__init__.py", line 31, in <module>
    import codecs, types
  File "C:\Python25\lib\types.py", line 36, in <module>
    BufferType = buffer
NameError: name 'buffer' is not defined
If I execute the .bat directly on the command line, there is no issue.
I think at least part of the problem is how you're executing the batch file. Give this a try:
import subprocess

# execute the batch file as a separate process and echo its output
Popen_kwargs = { 'stdout': subprocess.PIPE, 'stderr': subprocess.STDOUT,
                 'universal_newlines': True }
with subprocess.Popen('hlm ' + commands, **Popen_kwargs).stdout as output:
    for line in output:
        print line,
This passes different arguments to Popen. The differences are: this version removes shell=True, which isn't needed for a batch file; it sets stderr=subprocess.STDOUT, which redirects stderr to the same place stdout is going so that no error messages are missed; and it adds universal_newlines=True to make the output more readable.
Another difference is that it reads and prints the output from the Popen process, which effectively makes the Python script running the batch file wait until it has finished executing before continuing, which I suspect is important.
