I want to execute the following shell command in Python and get its output:
echo This_is_a_testing | grep -c test
I can use this Python code to run it:
>>> import subprocess
>>> subprocess.check_output("echo This_is_a_testing | grep -c test", shell=True)
'1\n'
However, as I do not want to use the shell=True option, I tried the following Python code:
>>> import subprocess
>>> p1 = subprocess.Popen(["echo", "This_is_a_testing"], stdout=subprocess.PIPE)
>>> p2 = subprocess.Popen(["grep", "-c", "test"], stdin=p1.stdout)
>>> p1.stdout.close()
>>> p2.communicate()
(None, None)
I wonder why the result is (None, None); I have read the description at http://docs.python.org/library/subprocess.html#subprocess.PIPE.
Have I missed something in my code? Any suggestions or ideas? Thanks in advance.
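One simple option is subprocess.getoutput() (Python 3), which runs the command through a shell and returns its output as a string, so it carries the same caveats as shell=True: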
>>> import subprocess
>>> mycmd=subprocess.getoutput('df -h | grep home | gawk \'{ print $1 }\' | cut -d\'/\' -f3')
>>> mycmd
'sda6'
>>>
Please look here:
>>> import subprocess
>>> p1 = subprocess.Popen(["echo", "This_is_a_testing"], stdout=subprocess.PIPE)
>>> p2 = subprocess.Popen(["grep", "-c", "test"], stdin=p1.stdout)
>>> 1
p1.stdout.close()
>>> p2.communicate()
(None, None)
>>>
Here you get 1 printed as output right after you write p2 = subprocess.Popen(["grep", "-c", "test"], stdin=p1.stdout): because p2's stdout was not redirected, grep writes its count straight to the terminal. Do not ignore this output in the context of your question.
If this is what you want, then pass stdout=subprocess.PIPE as an argument to the second Popen:
>>> p1 = subprocess.Popen(["echo", "This_is_a_testing"], stdout=subprocess.PIPE)
>>> p2 = subprocess.Popen(["grep", "test"], stdin=p1.stdout, stdout=subprocess.PIPE)
>>> p2.communicate()
('This_is_a_testing\n', None)
>>>
From the manual:
to get anything other than None in the result tuple, you need to give
stdout=PIPE and/or stderr=PIPE
p2 = subprocess.Popen(["grep", "-c", "test"], stdin=p1.stdout, stdout=subprocess.PIPE)
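With that change, the original session returns the count from communicate() instead of printing it to the terminal. A sketch of the expected session ('1\n' assumes grep -c finds the single match):
>>> p1 = subprocess.Popen(["echo", "This_is_a_testing"], stdout=subprocess.PIPE)
>>> p2 = subprocess.Popen(["grep", "-c", "test"], stdin=p1.stdout, stdout=subprocess.PIPE)
>>> p1.stdout.close()
>>> p2.communicate()
('1\n', None)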
While the accepted answer is correct and works, another option is to use the Popen.communicate() method to pass something to the process's stdin:
>>> import subprocess
>>> p2 = subprocess.Popen(["grep", "-c", "test"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
>>> p2.communicate("This_is_a_testing")
('1\n', None)
>>> print p2.returncode
0
>>>
This removes the need to execute another command just to redirect its output, if the output is already known in the Python script itself.
However, communicate() has the side effect that it waits for the process to terminate. If asynchronous execution is needed or desired, using two processes might be the better option.
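Note that on Python 3, communicate() expects bytes unless the pipes are opened in text mode. A minimal sketch of the same call with text=True (Python 3.7+; use universal_newlines=True on older versions):
>>> import subprocess
>>> p2 = subprocess.Popen(["grep", "-c", "test"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
>>> p2.communicate("This_is_a_testing")
('1\n', None)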
This answer is similar to the ones mentioned earlier, with a little more formatting. I wanted to get exactly the same output as the normal shell command with a pipe, on Python 3.
import subprocess
p1 = subprocess.Popen(["ls", "-l", "."], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["grep", "May"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # Allow p1 to receive a SIGPIPE if p2 exits.
out, _ = p2.communicate()
for s in out.decode().splitlines():
    print(s)
I tested the following command in bash (Linux) and it works fine:
awk '/string1\/parameters\/string2/' RS= myfile | grep Value | sed 's/.*"\(.*\)"[^"]*$/\1/'
Now I have to call it from a Python script, with string1 and string2 being Python variables.
I tried it with os.popen, but I could not figure out how to splice the variables into the command string.
Any ideas how to solve this issue?
Thank you in advance for your help!
You can replace the shell pipeline with Popen:
from subprocess import PIPE, Popen
# Pass argument lists directly; shlex.split() would mangle the backslash escapes
# in the awk pattern, and the RS= assignment must stay a separate argument.
p1 = Popen(["awk", r"/string1\/parameters\/string2/", "RS=", "myfile"], stdout=PIPE)
p2 = Popen(["grep", "Value"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # Allow p1 to receive a SIGPIPE if p2 exits.
p3 = Popen(["sed", r's/.*"\(.*\)"[^"]*$/\1/'], stdin=p2.stdout, stdout=PIPE)
p2.stdout.close()  # Allow p2 to receive a SIGPIPE if p3 exits.
output = p3.communicate()[0]
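Since string1 and string2 are Python variables in your script, you can splice them into the awk pattern with format() and no shell quoting is needed. A minimal sketch, assuming the variables hold the literal text to match:
string1, string2 = "foo", "bar"  # hypothetical values of your Python variables
pattern = r"/{}\/parameters\/{}/".format(string1, string2)
p1 = Popen(["awk", pattern, "RS=", "myfile"], stdout=PIPE)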
You can use subprocess.check_output() with the variables being substituted into the command using format():
cmd = """awk '/{}\/parameters\/{}/' RS= myfile | grep Value | sed 's/.*"\(.*\)"[^"]*$/\1/'""".format('string1', 'string2')
cmd_output = subprocess.check_output(cmd, shell=True)
But note the warnings regarding the use of shell=True in the referenced documentation.
An alternative is to set up the pipeline yourself using Popen():
import shlex
from subprocess import Popen, PIPE
awk_cmd = r"""awk '/{}\/parameters\/{}/' RS= myfile""".format('s1', 's2')
grep_cmd = 'grep Value'
sed_cmd = r"""sed 's/.*"\(.*\)"[^"]*$/\1/'"""  # r prefix keeps \1 from becoming the control character \x01
p_awk = Popen(shlex.split(awk_cmd), stdout=PIPE)
p_grep = Popen(shlex.split(grep_cmd), stdin=p_awk.stdout, stdout=PIPE)
p_sed = Popen(shlex.split(sed_cmd), stdin=p_grep.stdout, stdout=PIPE)
for p in p_awk, p_grep:
    p.stdout.close()
stdout, stderr = p_sed.communicate()
print stdout
It seems that using shell=True in the first process of a chain somehow drops the stdout from downstream tasks:
p1 = Popen(['echo','hello'], stdout=PIPE)
p2 = Popen('cat', stdin=p1.stdout, stdout=PIPE)
p2.communicate()
# outputs correctly ('hello\n', None)
Making the first process use shell=True kills the output somehow...
p1 = Popen(['echo','hello'], stdout=PIPE, shell=True)
p2 = Popen('cat', stdin=p1.stdout, stdout=PIPE)
p2.communicate()
# outputs incorrectly ('\n', None)
shell=True on the second process doesn't seem to matter. Is this expected behavior?
When you pass shell=True, Popen expects a single string argument, not a list. So when you do this:
p1 = Popen(['echo','hello'], stdout=PIPE, shell=True)
What happens is this:
execve("/bin/sh", ["/bin/sh", "-c", "echo", "hello"], ...)
That is, it calls sh -c "echo", and hello is effectively ignored (technically it becomes a positional argument to the shell). So the shell runs echo, which prints \n, which is why you see that in your output.
If you use shell=True, you need to do this:
p1 = Popen('echo hello', stdout=PIPE, shell=True)
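Because the shell now parses the command, you can also hand it an entire pipeline in one string and skip the second Popen. A minimal sketch (the result is a str on Python 2 and bytes on Python 3):
p = Popen("echo hello | cat", shell=True, stdout=PIPE)
out, _ = p.communicate()  # out is 'hello\n'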
I have the following command:
$ ffmpeg -i http://url/1video.mp4 2>&1 | perl -lane 'print $1 if /(\d+x\d+)/'
640x360
I'm trying to get the output of this command into a Python variable. Here is what I have so far:
>>> from subprocess import Popen, PIPE
>>> p1 = Popen(['ffmpeg', '-i', 'http://url/1video.mp4', '2>&1'], stdout=PIPE)
>>> p2=Popen(['perl','-lane','print $1 if /(\d+x\d+)/'], stdin=p1.stdout, stdout=PIPE)
>>> dimensions = p2.communicate()[0]
''
What am I doing incorrectly here, and how would I get the correct value for dimensions?
In general, you can replace a shell pipeline with this pattern:
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
output = p2.communicate()[0]
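On Python 3, output here will be a bytes object; decode it if you need text. A minimal sketch, assuming the tool emits UTF-8:
text = output.decode("utf-8")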
However, in this case, no pipeline is necessary:
import re
import shlex
import subprocess
url = 'http://url/1video.mp4'
proc = subprocess.Popen(shlex.split('ffmpeg -i {f}'.format(f=url)),
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
dimensions = None
for line in proc.stderr:
    # proc.stderr yields bytes on Python 3, so decode before matching
    match = re.search(r'(\d+x\d+)', line.decode())
    if match:
        dimensions = match.group(1)
        break
print(dimensions)
No need to call perl from within python.
If you have the output from ffmpeg in a variable, say output, you can do something like this:
print re.search(r'(\d+x\d+)', output).group()
Note the “shell” argument to subprocess.Popen: this specifies whether the command you pass is parsed by the shell or not.
That “2>&1” is one of those things that needs to be parsed by a shell, otherwise FFmpeg (like most programs) will try to treat it as a filename or option value.
The Python sequence that most closely mimics the original would probably be more like
p1 = subprocess.Popen("ffmpeg -i http://url/1video.mp4 2>&1", shell = True, stdout = subprocess.PIPE)<BR>
p2 = subprocess.Popen(r"perl -lane 'print $1 if /(\d+x\d+)/'", shell = True, stdin = p1.stdout, stdout = subprocess.PIPE)<BR>
dimensions = p2.communicate()[0]
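If you would rather avoid the shell entirely, the 2>&1 redirection can be expressed with stderr=subprocess.STDOUT on the first process instead. A minimal sketch (untested against a real stream URL):
p1 = subprocess.Popen(["ffmpeg", "-i", "http://url/1video.mp4"],
                      stderr=subprocess.STDOUT, stdout=subprocess.PIPE)
p2 = subprocess.Popen(["perl", "-lane", r"print $1 if /(\d+x\d+)/"],
                      stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # Allow p1 to receive a SIGPIPE if p2 exits.
dimensions = p2.communicate()[0]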
I can run this normally on the command line in Linux:
$ tar c my_dir | md5sum
But when I try to call it with Python I get an error:
>>> subprocess.Popen(['tar','-c','my_dir','|','md5sum'],shell=True)
<subprocess.Popen object at 0x26c0550>
>>> tar: You must specify one of the `-Acdtrux' or `--test-label' options
Try `tar --help' or `tar --usage' for more information.
You have to use subprocess.PIPE. Also, to split the command, you should use shlex.split() to prevent strange behaviour in some cases:
from subprocess import Popen, PIPE
from shlex import split
p1 = Popen(split("tar -c my_dir"), stdout=PIPE)
p2 = Popen(split("md5sum"), stdin=p1.stdout)
p1.stdout.close()  # Allow p1 to receive a SIGPIPE if p2 exits.
But to make an archive and generate its checksum, you should use Python built-in modules tarfile and hashlib instead of calling shell commands.
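For example, a minimal sketch with tarfile and hashlib (note that the archive bytes, and hence the checksum, will generally differ from GNU tar's output; this version also builds the archive in memory):
import hashlib
import io
import tarfile

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:  # plain, uncompressed tar
    tar.add("my_dir")
print(hashlib.md5(buf.getvalue()).hexdigest())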
Ok, I'm not sure why but this seems to work:
subprocess.call("tar c my_dir | md5sum",shell=True)
Anyone know why the original code doesn't work?
What you actually want is to run a shell subprocess with the shell command as a parameter:
>>> subprocess.Popen(['sh', '-c', 'echo hi | md5sum'], stdout=subprocess.PIPE).communicate()
('764efa883dda1e11db47671c4a3bbd9e -\n', None)
>>> from subprocess import Popen,PIPE
>>> import hashlib
>>> proc = Popen(['tar','-c','/etc/hosts'], stdout=PIPE)
>>> stdout, stderr = proc.communicate()
>>> hashlib.md5(stdout).hexdigest()
'a13061c76e2c9366282412f455460889'
>>>
I would try this on Python 3.8.10:
import subprocess
proc1 = subprocess.run(['tar c my_dir'], stdout=subprocess.PIPE, shell=True)
proc2 = subprocess.run(['md5sum'], input=proc1.stdout, stdout=subprocess.PIPE, shell=True)
print(proc2.stdout.decode())
Key points (as outlined in my solution to the related question https://stackoverflow.com/a/68323133/12361522):
use subprocess.run()
no splitting of the shell command and its parameters, i.e. ['tar c my_dir'] or ["tar c my_dir"]
stdout=subprocess.PIPE for all processes
input=proc1.stdout chains the output of the previous process into the input of the next one
enable the shell with shell=True
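If you prefer to avoid shell=True, the same subprocess.run() chain also works with the arguments split into lists. A minimal sketch:
import subprocess

proc1 = subprocess.run(["tar", "c", "my_dir"], stdout=subprocess.PIPE)
proc2 = subprocess.run(["md5sum"], input=proc1.stdout, stdout=subprocess.PIPE)
print(proc2.stdout.decode())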
I wrote a script to run a command-line program with different input arguments and grab a certain line from the output. I have the following running in a loop:
p1 = subprocess.Popen(["program", args], stderr=subprocess.STDOUT, stdout=subprocess.PIPE, shell=False)
p2 = subprocess.Popen(["grep", phrase], stdin=p1.stdout, stdout=subprocess.PIPE, shell=False)
p1.wait()
p2.wait()
p = str(p2.stdout.readlines())
print 'p is ', p
One problem is that the output only appears after the loop has finished running. I want to print something each time a process finishes. How can I do that?
Also, I want to have the option of displaying the output of p1. But I can't grab it with p1.stdout.readlines() without breaking p2. How can I do this?
I was thinking that I could just not make the call to grep, store the output of p1 and search for the phrase, but there's a lot of output, so this way seems pretty inefficient.
Any suggestions would be greatly appreciated. Thanks!
Here's a quick hack that worked for me on Linux. It might work for you, depending on your requirements. It uses tee as a filter that, if you pass print_all to your script, will duplicate an extra copy to /dev/tty (hey, I said it was a hack):
#!/usr/bin/env python
import subprocess
import sys
phrase = "bar"
if len(sys.argv) > 1 and sys.argv[1] == 'print_all':
    tee_args = ['tee', '/dev/tty']
else:
    tee_args = ['tee']
p1 = subprocess.Popen(["./program"], stderr=subprocess.STDOUT, stdout=subprocess.PIPE, shell=False)
p2 = subprocess.Popen(tee_args, stdin=p1.stdout, stdout=subprocess.PIPE, shell=False)
p3 = subprocess.Popen(["grep", phrase], stdin=p2.stdout, stdout=subprocess.PIPE, shell=False)
p1.wait()
p2.wait()
p3.wait()
p = str(p3.stdout.readlines())
print 'p is ', p
With the following as contents for program:
#!/bin/sh
echo foo
echo bar
echo baz
Example output:
$ ./foo13.py
p is ['bar\n']
$ ./foo13.py print_all
foo
bar
baz
p is ['bar\n']
Try calling sys.stdout.flush() after each print statement.
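A minimal sketch of the idea, in the question's Python 2 style (the range loop stands in for your loop over input arguments):
import sys

for i in range(3):
    print 'p is ', i
    sys.stdout.flush()  # flush so each line appears as soon as it is printed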