I'm writing some code that runs a few shell commands from Python, and ideally I would like to integrate the output from these commands into the logger that I'm using. I know I can divert stdout into a file / socket as follows:
call('<a-shell-cmd>', shell=True, stdout=myFile)
but I'd rather avoid the hassle of opening a temporary file, looping over the file to write the output, closing the file, deleting the file, and so on. If there's any way I can send the output directly to the logger, that would seem a lot neater. Any ideas?
Use the subprocess module.
Tip: you can go to the documentation for a particular version of python via http://docs.python.org/release/<major>.<minor>/
From Python 2.7 and above:
output = subprocess.check_output(["command", "arg1"])  # note: don't combine a list of args with shell=True
In Python 2.4:
process = subprocess.Popen(["command", "arg1"], stdout=subprocess.PIPE)
stdout, stderr = process.communicate()  # waits for the process to exit
# not shown: you could instead loop on Popen.poll() to wait for process death,
# reading from the pipe to fill an output buffer as you go
print stdout
Below Python 2.4:
output = os.popen('ls').read()  # os.popen returns a file object; read it to get the text
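If the goal is to send the output directly to your logger with no temporary file at all, one option is to read the child's pipe line by line. A minimal sketch, assuming the standard logging module and a logger you have already configured:
import logging
import subprocess

logger = logging.getLogger(__name__)  # assumes logging is already configured

# stream the command's output into the logger as it arrives
process = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT)
for line in iter(process.stdout.readline, b''):
    logger.info(line.decode().rstrip())
process.stdout.close()
process.wait()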
Use os.popen
output = os.popen('ls').read()
You can then log output, or pass it straight to your logger at the call site.
Related
I have a small python file which just outputs a string:
#!/usr/bin/env python
print("This is a Test")
I can call this python script from another python script like so:
subprocess.call(['python', 'myPythonFile.py'])
And I can see the 'This is a Test' in my source python program.
But I want to call this script from a running Daemon as described here: https://gist.github.com/andreif/cbb71b0498589dac93cb
When I put the call to
subprocess.call(['python', 'myPythonFile.py'])
in MyDaemon.Run, I DO NOT see the output.
How can I do this?
Try using the check_output function to see the actual output in your console
subprocess.check_output(['python', 'myPythonFile.py'])
You can find more info in the subprocess docs
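Note that a daemon has no console for the output to appear on, so it is usually more useful to capture what check_output returns and hand it to a logger. A minimal sketch, assuming logging is already configured inside the daemon:
import logging
import subprocess

logger = logging.getLogger(__name__)

output = subprocess.check_output(['python', 'myPythonFile.py'])
logger.info(output.decode().rstrip())  # ends up wherever your logger writes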
subprocess.call can send its output to a file
temp_path = '/tmp/output.tmp'  # better: use tempfile.mkstemp() for a safe unique name
with open(temp_path, 'w') as output:
    response = subprocess.call(
        ['python', 'myPythonFile.py'],
        stdout=output,
        stderr=output,
    )
with open(temp_path, 'r') as output:
    log_text = output.read()
    # decide what to do next, or perhaps just feed log_text into your logging system
A daemon process is characterised by having no controlling terminal, because it is detached from whatever started the daemon. The daemon process is not connected to any console, by definition.
So, if that daemon runs another process:
I want to call this script from a running Daemon
then there is still no controlling terminal, and standard output is by default connected to the null device.
You will need to have the daemon process arrange to have its output somewhere. For example, a log file.
Try the python daemon library for a way to create daemons and nominate specific files (e.g. a log file you opened) to remain open in the daemon process.
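A minimal sketch, assuming the python-daemon package; its DaemonContext accepts stdout and stderr arguments that point the daemon's standard streams at a file you opened:
import daemon
import subprocess

log_file = open('/tmp/mydaemon.log', 'w+')  # hypothetical log path

with daemon.DaemonContext(stdout=log_file, stderr=log_file):
    # inside the context there is no terminal; the child's output
    # lands in the log file instead of the null device
    subprocess.call(['python', 'myPythonFile.py'])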
process = subprocess.check_output(BACKEND + "mainbgw setup " + str(NUM_USERS),
                                  shell=True, stderr=subprocess.STDOUT)
I am using the above statement to run a C program from a Django-based Python server for some computations. The program has some printf() statements whose output I would like to see on stdout while the server is running and executing the subprocess. How can that be done?
If you actually don't need the output to be available to your python code as a string, you can just use os.system, or subprocess.call without redirecting stdout elsewhere. Then stdout of your C program will just go directly to stdout of your python program.
If you need both streaming stdout and access to the output as a string, you should use subprocess.Popen (or the old popen2.popen4) to obtain a file object for the output stream, then repeatedly read lines from the stream until you have exhausted it, keeping a concatenated copy of all the data you grabbed along the way.
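A sketch of such a loop; the command here is hypothetical, and the program is assumed to write line-buffered output:
import subprocess

proc = subprocess.Popen(['./mainbgw', 'setup', '10'],  # hypothetical command
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
chunks = []
for line in iter(proc.stdout.readline, b''):
    print(line.decode().rstrip())  # stream each line as it arrives
    chunks.append(line)            # and keep a copy
proc.wait()
output = b''.join(chunks)          # the full output as one string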
I'm calling rtmpdump via subprocess and trying to redirect its output to a file. The problem is that I simply can't redirect it.
I first tried setting sys.stdout to the opened file. This works for, say, ls, but not for rtmpdump. I also tried setting sys.stderr just to make sure, and that didn't work either.
I then tried appending ">> file" to the command line, but again it doesn't seem to work.
Also for the record, for some reason, Eclipse prints rtmpdump's output even if I use subprocess.call instead of subprocess.check_output, and without having to call the print method. This is black magic!
Any suggestions?
Edit: Here's some sample code.
# /!\ note: need to use os.chdir first to get to the folder with rtmpdump!
command = './rtmpdump -r rtmp://oxy.videolectures.net/video/ -y 2007/pascal/bootcamp07_vilanova/keller_mikaela/bootcamp07_keller_bss_01 -a video -s http://media.videolectures.net/jw-player/player.swf -w ffa4f0c469cfbe1f449ec42462e8c3ba16600f5a4b311980bb626893ca81f388 -x 53910 -o test.flv'
split_command = shlex.split(command)
subprocess.call(split_command)
sys.stdout is Python's idea of the parent's output stream.
In any case you want to change the child's output stream.
subprocess.call and subprocess.Popen take named parameters for the output streams.
So open the file you want to output to and then pass that as the appropriate argument to subprocess.
f = open("outputFile","wb")
subprocess.call(argsArray,stdout=f)
Your talk of using >> suggests you are using shell=True, or think you are passing your arguments to the shell. In any case it is better to use the array form of subprocess, which avoids an unnecessary shell process and any weirdness from the shell.
EDIT:
So I downloaded RTMPDump and tried it out; it would appear the messages are emitted on stderr.
With the following program, nothing appears on the program's output, and the rtmpdump logs went into the stderr.txt file:
#!/usr/bin/env python
import os
import subprocess
RTMPDUMP="./rtmpdump"
assert os.path.isfile(RTMPDUMP)
command = [RTMPDUMP, '-r', 'rtmp://oxy.videolectures.net/video/',
           '-y', '2007/pascal/bootcamp07_vilanova/keller_mikaela/bootcamp07_keller_bss_01',
           '-a', 'video',
           '-s', 'http://media.videolectures.net/jw-player/player.swf',
           '-w', 'ffa4f0c469cfbe1f449ec42462e8c3ba16600f5a4b311980bb626893ca81f388',
           '-x', '53910', '-o', 'test.flv']
stdout = open("stdout.txt","wb")
stderr = open("stderr.txt","wb")
subprocess.call(command,stdout=stdout,stderr=stderr)
See the link on getting the output from subprocess on SO
Getting the entire output from subprocess.Popen
https://stackoverflow.com/questions/tagged/subprocess
I guess the way would be to collect the output and write it to a file directly, or provide file descriptors to which your output can be written.
Something like this:
f = open('dump.txt', 'wb')
p = subprocess.Popen(args, stdout=f, stderr=subprocess.STDOUT, shell=True)
I want to spawn (fork?) multiple Python scripts from my program (written in Python as well).
My problem is that I want to dedicate one terminal to each script, because I'll gather their output using pexpect.
I've tried using pexpect, os.execlp, and os.forkpty, but none of them does what I expect.
I want to spawn the child processes and forget about them (they will process some data, write the output to the terminal which I could read with pexpect and then exit).
Is there any library/best practice/etc. to accomplish this job?
p.s. Before you ask why I would write to STDOUT and read from it, I shall say that I don't write to STDOUT, I read the output of tshark.
See the subprocess module
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several other, older modules and functions, such as:
os.system
os.spawn*
os.popen*
popen2.*
commands.*
From Python 3.5 onwards you can do:
import subprocess
result = subprocess.run(['python', 'my_script.py', '--arg1', val1])
if result.returncode != 0:
    print('script returned error')
By default the child inherits the parent's stdout and stderr, so its output appears directly on your terminal.
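If you also need the output as a string rather than on the terminal, you can capture it instead. A sketch: stdout=subprocess.PIPE works from 3.5 onwards, while the shorter capture_output=True only arrived in 3.7:
import subprocess

result = subprocess.run(['python', 'my_script.py'],
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
print(result.stdout.decode())  # everything the script printed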
I don't understand why you need expect for this. tshark should send its output to stdout, and only for some strange reason would it send it to stderr.
Therefore, what you want should be:
import subprocess
fp = subprocess.Popen(("/usr/bin/tshark", "option1", "option2"), stdout=subprocess.PIPE).stdout
# now, whenever you are ready, read stuff from fp
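For example, assuming tshark emits one record per line, reading fp from the snippet above could look like:
for line in iter(fp.readline, b''):
    print(line.decode().rstrip())  # or hand each line to your own parser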
Do you want to dedicate one terminal, or one Python shell?
You already have some useful answers for Popen and subprocess, but you could also use pexpect if you're already planning on using it anyway.
#for multiple python shells
import pexpect
#make your commands however you want them, this is just one method
mycommand1 = "print 'hello first python shell'"
mycommand2 = "print 'this is my second shell'"
#add a "for" statement if you want
child1 = pexpect.spawn('python')
child1.sendline(mycommand1)
child2 = pexpect.spawn('python')
child2.sendline(mycommand2)
Make as many children/shells as you want and then use child.before or child.after (they are attributes, not methods) to get your responses.
Of course you would want to add definitions or classes to be sent instead of "mycommand1", but this is just a simple example.
If you wanted to make a bunch of terminals in Linux, you just need to replace the 'python' in the pexpect.spawn line with your shell command.
Note: I haven't tested the above code. I'm just replying from past experience with pexpect.
In a script, I want to run a .exe with some command line parameters such as "-a", and then redirect the standard output of the program to a file.
How can I implement that?
You can redirect directly to a file using subprocess.
import subprocess
with open('output.txt', 'w') as output_f:
    p = subprocess.Popen('Text/to/execute with-arg',
                         stdout=output_f,
                         stderr=output_f)
Easiest is os.system("the.exe -a >thefile.txt"), but there are many other ways, for example with the subprocess module in the standard library.
You can do something like this, e.g. to read the output of ls -l (or any other command):
p = subprocess.Popen(["ls","-l"],stdout=subprocess.PIPE)
print p.stdout.read() # or put it in a file
You can do a similar thing for stderr/stdin,
but as Alex mentioned, if you just want the output in a file, just redirect the command's output to a file.
If you just want to run the executable and wait for the results, Anurag's solution is probably the best. I needed to respond to each line of output as it arrived, and found the following worked:
1) Create an object with a write(text) method and redirect stdout to it (sys.stdout = obj). In your write method, deal with the output as it arrives (see the sketch after step 3).
2) Run a method in a separate thread with something like the following code:
p = subprocess.Popen('Text/to/execute with-arg', stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, shell=False)
while p.poll() is None:
    print p.stdout.readline().strip()
Because you've redirected stdout, PIPE will send the output to your write method line by line. If you're not certain you're going to get line breaks, read(amount) works too, I believe.
3) Remember to redirect stdout back to the default: sys.stdout = sys.__stdout__
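A sketch of the step-1 object; the class name is made up, and any object with a write method will do:
import sys

class StdoutCatcher(object):  # hypothetical name
    def __init__(self):
        self.lines = []
    def write(self, text):
        self.lines.append(text)   # deal with each chunk as it arrives

catcher = StdoutCatcher()
sys.stdout = catcher              # step 1: redirect stdout
# ... start the reader thread from step 2 here ...
sys.stdout = sys.__stdout__       # step 3: restore the default stdout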
Although the title (.exe) makes this sound like a Windows problem, I have to share that the accepted answer (subprocess.Popen() with stdout/stderr arguments) didn't work for me on Mac OS X (10.8) with Python 2.7.
I had to use subprocess.check_output() (Python 2.7 and above) to make it work. Example:
import subprocess
cmd = 'ls -l'
out = subprocess.check_output(cmd, shell=True)
with open('my.log', 'w') as f:
    f.write(out)  # the with block closes the file; no explicit close() needed
Note that this solution writes all the accumulated output out when the program finishes.
If you want to monitor the log file during the run, you may want to try something else.
In my own case, I only cared about the end result.