It seems that using shell=True in the first process of a chain somehow drops the stdout from downstream tasks:
p1 = Popen(['echo','hello'], stdout=PIPE)
p2 = Popen('cat', stdin=p1.stdout, stdout=PIPE)
p2.communicate()
# outputs correctly ('hello\n', None)
Making the first process use shell=True kills the output somehow...
p1 = Popen(['echo','hello'], stdout=PIPE, shell=True)
p2 = Popen('cat', stdin=p1.stdout, stdout=PIPE)
p2.communicate()
# outputs incorrectly ('\n', None)
shell=True on the second process doesn't seem to matter. Is this expected behavior?
When you pass shell=True, Popen expects a single string argument, not a list. So when you do this:
p1 = Popen(['echo','hello'], stdout=PIPE, shell=True)
What happens is this:
execve("/bin/sh", ["/bin/sh", "-c", "echo", "hello"], ...)
That is, it calls sh -c "echo", and hello is effectively ignored (technically it becomes the shell's $0 rather than an argument to echo). So the shell runs echo with no arguments, which prints just \n, which is why you see that in your output.
If you use shell=True, you need to do this:
p1 = Popen('echo hello', stdout=PIPE, shell=True)
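For reference, here is a sketch of the corrected pipeline from the question with shell=True on the first process:
from subprocess import Popen, PIPE

p1 = Popen('echo hello', stdout=PIPE, shell=True)  # single string, parsed by the shell
p2 = Popen('cat', stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits first
print(p2.communicate())  # ('hello\n', None) on Python 2, (b'hello\n', None) on Python 3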
Related
I'm running several commands chained together as one shell line:
e.g. cd foo/bar; ../../run_this -arg1 -arg2="yeah_ more arg1 arg2" arg3=/my/path finalarg
Running with:
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
(out, err) = p.communicate()
But this prints the output to the screen (Python 2.7.5), and out is an empty string.
You have shell=True, so you're basically reading the standard output of the shell spawned, not the standard output of the program you want to run.
I'm guessing you're using shell=True to accommodate the directory changing. Fortunately, subprocess can take care of that for you (by passing a directory via the cwd keyword argument):
import subprocess
import shlex
directory = 'foo/bar'
cmd = '../../run_this -arg1 -arg2="yeah_ more arg1 arg2" arg3=/my/path finalarg'
p = subprocess.Popen(shlex.split(cmd), cwd=directory, stdout=subprocess.PIPE)
(out, err) = p.communicate()
As suggested in a comment, I added stderr redirection as well and that worked:
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT)
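If run_this writes (part of) its output to stderr, the same idea still works without shell=True; a sketch that keeps the cwd/shlex approach from above and merges stderr into stdout, using the command from the question:
import subprocess
import shlex

directory = 'foo/bar'
cmd = '../../run_this -arg1 -arg2="yeah_ more arg1 arg2" arg3=/my/path finalarg'
p = subprocess.Popen(shlex.split(cmd), cwd=directory,
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
(out, err) = p.communicate()  # err is None; anything written to stderr ends up in out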
I have searched a lot here and on Google, trying to find out why the stderr from my first command is not visible in the final stderr. I know of other approaches like check_output (Python 3) and commands (Python 2), but I want to write my own cross-platform version. Here is the problem:
import subprocess
p1 = subprocess.Popen('dirr', shell=True, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
p2 = subprocess.Popen('find "j"', shell=True, stdin=p1.stdout, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
p1.stdout.close()
output, error = p2.communicate()
print(output, '<----->', error)
I also tried redirecting stderr=subprocess.STDOUT, but this didn't change things.
Can you please tell me how to redirect the stderr from the first command so that I can see it in the final stdout or stderr?
To see the stderr of the first command in the stdout/stderr of the second command, the second command has to read stderr1 (e.g., from its stdin2) and print that content to its own stdout2/stderr2.
Example
#!/usr/bin/env python
import sys
from subprocess import Popen, PIPE, STDOUT
p1 = Popen([sys.executable, '-c', """import sys; sys.stderr.write("stderr1")"""],
           stderr=STDOUT, stdout=PIPE)
p2 = Popen([sys.executable, '-c', """import sys
print(sys.stdin.read() + " stdout2")
sys.stderr.write("stderr2")"""],
           stdin=p1.stdout, stderr=PIPE, stdout=PIPE)
p1.stdout.close()
output, error = p2.communicate()
p1.wait()
print(output, '<----->', error)
Output
('stderr1 stdout2\n', '<----->', 'stderr2')
For p2, you shouldn't be using subprocess.PIPE for either of the outputs because you're not piping it to another program.
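In other words, if p2's output should simply appear on the terminal, a minimal sketch (reusing the commands from the example above) would drop the PIPEs for p2 and just wait:
from subprocess import Popen, PIPE, STDOUT
import sys

p1 = Popen([sys.executable, '-c', 'import sys; sys.stderr.write("stderr1")'],
           stderr=STDOUT, stdout=PIPE)
# no stdout/stderr arguments: p2 inherits the parent's streams and prints directly to the terminal
p2 = Popen([sys.executable, '-c', 'import sys; print(sys.stdin.read() + " stdout2")'],
           stdin=p1.stdout)
p1.stdout.close()
p2.wait()
p1.wait()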
I have a case to want to execute the following shell command in Python and get the output,
echo This_is_a_testing | grep -c test
I could use this Python code to execute the above shell command in Python:
>>> import subprocess
>>> subprocess.check_output("echo This_is_a_testing | grep -c test", shell=True)
'1\n'
However, as I do not want to use the shell=True option, I tried the following Python code:
>>> import subprocess
>>> p1 = subprocess.Popen(["echo", "This_is_a_testing"], stdout=subprocess.PIPE)
>>> p2 = subprocess.Popen(["grep", "-c", "test"], stdin=p1.stdout)
>>> p1.stdout.close()
>>> p2.communicate()
(None, None)
I wonder why the output is None, even though I have read the description at http://docs.python.org/library/subprocess.html#subprocess.PIPE
Have I missed something in my code? Any suggestions or ideas? Thanks in advance.
>>> import subprocess
>>> mycmd=subprocess.getoutput('df -h | grep home | gawk \'{ print $1 }\' | cut -d\'/\' -f3')
>>> mycmd
'sda6'
>>>
Please look here:
>>> import subprocess
>>> p1 = subprocess.Popen(["echo", "This_is_a_testing"], stdout=subprocess.PIPE)
>>> p2 = subprocess.Popen(["grep", "-c", "test"], stdin=p1.stdout)
>>> 1
p1.stdout.close()
>>> p2.communicate()
(None, None)
>>>
Here you get 1 as output right after you write p2 = subprocess.Popen(["grep", "-c", "test"], stdin=p1.stdout); do not overlook this output in the context of your question: it is grep's count, printed straight to the terminal because p2 was not given stdout=subprocess.PIPE.
If you want the output back in Python instead, pass stdout=subprocess.PIPE as an argument to the second Popen:
>>> p1 = subprocess.Popen(["echo", "This_is_a_testing"], stdout=subprocess.PIPE)
>>> p2 = subprocess.Popen(["grep", "test"], stdin=p1.stdout, stdout=subprocess.PIPE)
>>> p2.communicate()
('This_is_a_testing\n', None)
>>>
From the manual:
to get anything other than None in the result tuple, you need to give
stdout=PIPE and/or stderr=PIPE
p2 = subprocess.Popen(["grep", "-c", "test"], stdin=p1.stdout, stdout=subprocess.PIPE)
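Putting it together, a sketch of the corrected snippet from the question:
import subprocess

p1 = subprocess.Popen(["echo", "This_is_a_testing"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["grep", "-c", "test"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # allow p1 to receive SIGPIPE if p2 exits first
print(p2.communicate())  # ('1\n', None) on Python 2, (b'1\n', None) on Python 3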
While the accepted answer is correct/working, another option would be to use the Popen.communicate() method to pass something to a process' stdin:
>>> import subprocess
>>> p2 = subprocess.Popen(["grep", "-c", "test"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
>>> p2.communicate("This_is_a_testing")
('1\n', None)
>>> print p2.returncode
0
>>>
This removes the need to execute another command just to redirect its output, if the output is already known in the Python script itself.
However, communicate() has the side effect that it waits for the process to terminate, so if asynchronous execution is needed or desired, using two processes might be the better option.
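Note that the snippet above is Python 2. On Python 3, communicate() expects bytes unless the pipes are opened in text mode, for example (universal_newlines=True also works on Python 2):
import subprocess

p2 = subprocess.Popen(["grep", "-c", "test"], stdin=subprocess.PIPE,
                      stdout=subprocess.PIPE, universal_newlines=True)
out, err = p2.communicate("This_is_a_testing")  # ('1\n', None)
print(p2.returncode)  # 0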
This answer is similar to the ones mentioned earlier, with a little extra formatting. I wanted to get exactly the same output as the normal shell command with a pipe, on Python 3.
import subprocess
p1 = subprocess.Popen(["ls", "-l", "."], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["grep", "May"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()
output = p2.communicate()[0].decode()  # bytes -> str on Python 3
for s in output.splitlines():
    print(s)
I have the following command:
$ ffmpeg -i http://url/1video.mp4 2>&1 | perl -lane 'print $1 if /(\d+x\d+)/'
640x360
I'm trying to set the output of this command into a python variable. Here is what I have so far:
>>> from subprocess import Popen, PIPE
>>> p1 = Popen(['ffmpeg', '-i', 'http://url/1video.mp4', '2>&1'], stdout=PIPE)
>>> p2=Popen(['perl','-lane','print $1 if /(\d+x\d+)/'], stdin=p1.stdout, stdout=PIPE)
>>> dimensions = p2.communicate()[0]
''
What am I doing incorrectly here, and how would I get the correct value for dimensions?
In general, you can replace a shell pipeline with this pattern:
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
output = p2.communicate()[0]
However, in this case, no pipeline is necessary:
import subprocess
import shlex
import re
url = 'http://url/1video.mp4'
proc = subprocess.Popen(shlex.split('ffmpeg -i {f}'.format(f=url)),
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        universal_newlines=True)  # so the lines read below are str, not bytes, on Python 3
dimensions = None
for line in proc.stderr:  # ffmpeg prints its information to stderr
    match = re.search(r'(\d+x\d+)', line)
    if match:
        dimensions = match.group(1)
        break
print(dimensions)
No need to call perl from within Python.
If you have the output from ffmpeg in a variable (say output), you can do something like this:
print re.search(r'(\d+x\d+)', output).group()
Note the “shell” argument to subprocess.Popen: this specifies whether the command you pass is parsed by the shell or not.
That “2>&1” is one of those things that needs to be parsed by a shell, otherwise FFmpeg (like most programs) will try to treat it as a filename or option value.
The Python sequence that most closely mimics the original would probably be more like
p1 = subprocess.Popen("ffmpeg -i http://url/1video.mp4 2>&1", shell = True, stdout = subprocess.PIPE)<BR>
p2 = subprocess.Popen(r"perl -lane 'print $1 if /(\d+x\d+)/'", shell = True, stdin = p1.stdout, stdout = subprocess.PIPE)<BR>
dimensions = p2.communicate()[0]
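If you would rather avoid the shell entirely, the 2>&1 part corresponds to stderr=subprocess.STDOUT on the first Popen; a rough equivalent:
import subprocess

p1 = subprocess.Popen(['ffmpeg', '-i', 'http://url/1video.mp4'],
                      stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
p2 = subprocess.Popen(['perl', '-lane', r'print $1 if /(\d+x\d+)/'],
                      stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()
dimensions = p2.communicate()[0]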
I wrote a script to run a command-line program with different input arguments and grab a certain line from the output. I have the following running in a loop:
p1 = subprocess.Popen(["program", args], stderr=subprocess.STDOUT, stdout=subprocess.PIPE, shell=False)
p2 = subprocess.Popen(["grep", phrase], stdin=p1.stdout, stdout=subprocess.PIPE, shell=False)
p1.wait()
p2.wait()
p = str(p2.stdout.readlines())
print 'p is ', p
One problem is that there is only output after the loop is finished running. I want to print something each time a process is finished. How can I do that?
Also, I want to have the option of displaying the output of p1. But I can't grab it with p1.stdout.readlines() without breaking p2. How can I do this?
I was thinking that I could just not make the call to grep, store the output of p1 and search for the phrase, but there's a lot of output, so this way seems pretty inefficient.
Any suggestions would be greatly appreciated. Thanks!
Here's a quick hack that worked for me on Linux. It might work for you, depending on your requirements. It uses tee as a filter that, if you pass print_all to your script, will duplicate an extra copy to /dev/tty (hey, I said it was a hack):
#!/usr/bin/env python
import subprocess
import sys
phrase = "bar"
if len(sys.argv) > 1 and sys.argv[1] == 'print_all':
    tee_args = ['tee', '/dev/tty']
else:
    tee_args = ['tee']
p1 = subprocess.Popen(["./program"], stderr=subprocess.STDOUT, stdout=subprocess.PIPE, shell=False)
p2 = subprocess.Popen(tee_args, stdin=p1.stdout, stdout=subprocess.PIPE, shell=False)
p3 = subprocess.Popen(["grep", phrase], stdin=p2.stdout, stdout=subprocess.PIPE, shell=False)
p1.wait()
p2.wait()
p3.wait()
p = str(p3.stdout.readlines())
print 'p is ', p
With the following as contents for program:
#!/bin/sh
echo foo
echo bar
echo baz
Example output:
$ ./foo13.py
p is ['bar\n']
$ ./foo13.py print_all
foo
bar
baz
p is ['bar\n']
Try calling sys.stdout.flush() after each print statement.
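For example, a minimal Python 2 sketch (the loop body here is just a stand-in for the question's p1/p2 calls):
import sys

for i in range(3):
    p = "result %d" % i   # stands in for the grep output gathered in the question's loop
    print 'p is ', p
    sys.stdout.flush()    # force the line out now instead of when the buffer is finally flushed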