I want to record voice, process it, then play it back on the fly. The command below should make this clearer.
arecord -D plughw:USER -f S16_LE -t raw | python my_app.py | aplay -D plughw:USER -f S16_LE
I was able to capture input from arecord in my_app.py by using raw_input(). Now I'm stuck on how to send my_app.py's output to aplay.
I've tried using subprocess inside my_app.py to run aplay:
cmdPlay = 'aplay -D plughw:USER -f S16_LE'
p = subprocess.Popen(
    cmdPlay,
    stdout=subprocess.PIPE,
    stdin=subprocess.PIPE,
    stderr=subprocess.PIPE,
    shell=True
)
p.communicate(input=dataHex.decode('hex'))
But I heard no voice played on the speaker.
So, how do you pipe between a Python app and other processes (as input and output)?
N.B.: Please also suggest a better method than raw_input(), if any, for capturing data from arecord.
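For reference, a minimal sketch of what my_app.py could look like if it stays inside the shell pipeline, reading raw bytes from stdin and writing them back to stdout (Python 3; process_chunk is a hypothetical placeholder for the actual processing):

import sys

CHUNK = 4096  # bytes per read; an arbitrary block size

def process_chunk(data):
    # hypothetical placeholder for the actual audio processing
    return data

while True:
    data = sys.stdin.buffer.read(CHUNK)  # binary-safe read, unlike raw_input()
    if not data:
        break  # arecord closed the pipe
    sys.stdout.buffer.write(process_chunk(data))
    sys.stdout.buffer.flush()  # flush each chunk to keep playback latency low

With this, the arecord ... | python my_app.py | aplay ... pipeline carries the audio end to end, and no subprocess call is needed inside the script.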
I have a script that saves 5-second-long videos locally and emits the file name of each.
Here's the bash command
ffmpeg -i http://0.0.0.0:8080/stream/video.mjpeg -vcodec copy -map 0 -f segment -segment_time 2 -loglevel 40 -segment_format mp4 capture-%05d.mp4 2>&1 | grep --line-buffered -Eo "segment:.+ended" | gawk -F "'" '{print $2; system("")}' | xargs -n1
If I run this command in the terminal, it will return the expected file name, as such
capture-00001.mp4
Notice that the xargs command at the end easily lets me pass the file name to a Python script as a new argument. But now I want to execute this command within Python itself, specifically getting the file name with subprocess.
Here's what I've done so far. When running the script, the terminal prints the file name as expected, but it is never passed as a string to fName. I've tried subprocess.check_output, but it never returns anything because the command continuously captures videos and saves them locally.
FFMPEG_SCRIPT = r"""ffmpeg -i http://0.0.0.0:8080/stream/video.mjpeg -vcodec copy -map 0 -f segment -segment_time 2 -loglevel 40 -segment_format mp4 capture-%05d.mp4 2>&1 | grep --line-buffered -Eo "segment:.+ended" | gawk -F "'" '{print $2; system("")}' | xargs -n1 """
try:
    fName = subprocess.check_call(FFMPEG_SCRIPT, stderr=subprocess.STDOUT, shell=True).decode('utf-8')
    print(">>> {}".format(fName))
except subprocess.CalledProcessError as e:
    print(e.output)
import subprocess
from subprocess import PIPE
fName = subprocess.Popen("test.bat", stdin=PIPE, stdout=PIPE)
(stdout, stderr) = fName.communicate()
print(">>> {}".format(fName))
print(stdout)
test.bat is a simple echo yes since I can't test with your script, and the resulting output of print(stdout) is b'yes\r\n'. If the file name is the only thing the script prints, it shouldn't be too hard to extract it.
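For instance, a one-line sketch of that extraction, assuming the file name is the only output:

fName = stdout.decode('utf-8').strip()  # e.g. b'capture-00001.mp4\r\n' -> 'capture-00001.mp4'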
That is a common problem when you pipe commands. The IO subsystem handles output differently on a terminal than on files or pipes. On a terminal, output is flushed on each newline; on files or pipes, that does not happen unless the program explicitly asks for a flush.
It is not a problem in pipelines where the first command ends on an end of file, because everything is flushed before the command exits and the write end of the pipe is closed. The next command then sees an end of file and everything propagates smoothly.
But when the first program contains an infinite reading loop, it queues its output and nothing is flushed until the buffer is full, and buffers are huge on modern systems. In that case the pipeline still works, but you cannot see any output until a buffer fills.
Here is a version based on Alexandre Cox's proposal, with a piped command to check a mount:
import subprocess
fName = subprocess.Popen('mount | grep sda3 | cut -d " " -f 3', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
(stdout, stderr) = fName.communicate()
print(">>> {}".format(fName))
print(stdout)
If the mount exists, the output is:
>>> <subprocess.Popen object at 0x7f9b5eb2fdd8>
b'/mnt/sda3\n'
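For the original ffmpeg pipeline, which never exits, you can avoid both the buffering trap and the blocking check_output by reading the pipe line by line as names arrive. A minimal sketch, assuming the FFMPEG_SCRIPT string from the question:

import subprocess

proc = subprocess.Popen(FFMPEG_SCRIPT, shell=True, stdout=subprocess.PIPE)
for line in iter(proc.stdout.readline, b''):  # yields each file name as it is printed
    fName = line.decode('utf-8').strip()
    print(">>> {}".format(fName))

The grep --line-buffered and gawk system("") calls in the script already force the producer side to flush, which is what makes line-by-line reading work here.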
I want to run this command using subprocess.call:
ls -l folder | wc -l
My code in Python file is here:
subprocess.call(["ls","-l","folder","|","wc","-l"])
I got an error message like this:
ls: cannot access |: No such file or directory
ls: cannot access wc: No such file or directory
It seems the | wc part can't be handled by subprocess.call.
How can I fix it?
Try out the shell option, using a string as the first parameter:
subprocess.call("ls -l folder | wc -l",shell=True)
Although this works, note that using shell=True is not recommended, since it can introduce a security issue through shell injection.
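If part of the command comes from outside, one way to keep shell=True safer is to pass the untrusted value as a positional shell argument rather than splicing it into the command string. A sketch (folder is a hypothetical variable holding the untrusted value):

import subprocess

folder = "some folder"  # possibly untrusted input
# with shell=True and a list, the extra items become $0, $1, ... in the shell;
# "$1" is expanded as data and never parsed as shell code
subprocess.call(['ls -l "$1" | wc -l', '_', folder], shell=True)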
You can set up a command pipeline by connecting one process's stdout to another's stdin. In your example, errors and the final output are written to the screen, so I didn't try to redirect them. This is generally preferable to something like communicate because, instead of waiting for one program to complete before starting another (and incurring the expense of moving the data into the parent), they run in parallel.
import subprocess
p1 = subprocess.Popen(["ls","-l"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["wc","-l"], stdin=p1.stdout)
# close the pipe in the parent; it's still open in the children
p1.stdout.close()
p2.wait()
p1.wait()
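If you want the count back in Python instead of on the screen, the same pipeline works with the final stdout captured. A sketch:

import subprocess

p1 = subprocess.Popen(["ls", "-l", "folder"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["wc", "-l"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits first
out, _ = p2.communicate()
print(out.decode().strip())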
You'll need to implement the piping logic yourself to make it work properly. Note that subprocess.call() returns an exit code, not a process object, so Popen is needed here:
def piped_call(prog1, prog2):
    p1 = subprocess.Popen(prog1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p1.communicate()
    if err:
        print(err)
        return None
    else:
        p2 = subprocess.Popen(prog2, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        return p2.communicate(out)
You could try using subprocess.PIPE, assuming you wanted to avoid using subprocess.call(..., shell=True).
import subprocess
# Run 'ls', sending output to a PIPE (shell equiv.: ls -l | ... )
ls = subprocess.Popen('ls -l folder'.split(),
                      stdout=subprocess.PIPE)
# Read output from 'ls' as input to 'wc' (shell equiv.: ... | wc -l)
wc = subprocess.Popen('wc -l'.split(),
                      stdin=ls.stdout,
                      stdout=subprocess.PIPE)
# Trap stdout and stderr from 'wc'
out, err = wc.communicate()
if err:
    print(err.strip())
if out:
    print(out.strip())
For Python 3, keep in mind that the communicate() method used here returns a bytes object instead of a string.
In this case you will need to convert the output to a string using decode():
if err:
    print(err.strip().decode())
if out:
    print(out.strip().decode())
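On Python 3.5+, the same two-stage pipeline can also be written with subprocess.run, feeding the first stage's captured output into the second. A sketch:

import subprocess

ls = subprocess.run(['ls', '-l', 'folder'], stdout=subprocess.PIPE)
wc = subprocess.run(['wc', '-l'], input=ls.stdout, stdout=subprocess.PIPE)
print(wc.stdout.decode().strip())

Unlike the Popen version, this moves all of the data through the parent process, which is fine for small outputs but not for large ones.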
I want to execute the bash command
'/bin/echo </verbosegc> >> /tmp/jruby.log'
in Python using Popen. The code does not raise any exception, but no change is made to jruby.log after execution. The Python code is shown below.
>>> command='/bin/echo </verbosegc> >> '+fullpath
>>> command
'/bin/echo </verbosegc> >> /tmp/jruby.log'
>>> process = subprocess.Popen(command.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
>>> output= process.communicate()[0]
>>> output
'</verbosegc> >> /tmp/jruby.log\n'
I also printed out process.pid and then checked the pid using ps -ef | grep pid. The result shows that the process has finished.
Just pass a file object if you want to append the output to a file; you cannot redirect to a file unless you set shell=True:
command = ['/bin/echo', '</verbosegc>']
with open('/tmp/jruby.log', "a") as f:
    subprocess.check_call(command, stdout=f, stderr=subprocess.STDOUT)
The first argument to subprocess.Popen is the array ['/bin/echo', '</verbosegc>', '>>', '/tmp/jruby.log']. When the first argument to subprocess.Popen is an array, it does not launch a shell to run the command, and the shell is what's responsible for interpreting >> /tmp/jruby.log to mean "write output to jruby.log".
In order to make the >> redirection work in this command, you'll need to pass command directly to subprocess.Popen() as a single string, with shell=True, so that a shell interprets it. You'll also need to quote the first argument (or else the shell will interpret the "<" and ">" characters in ways you don't want):
command = '/bin/echo "</verbosegc>" >> /tmp/jruby.log'
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
Consider the following:
command = [ 'printf "%s\n" "$1" >>"$2"',  # shell script to execute
            '',                           # $0 in shell
            '</verbosegc>',               # $1
            '/tmp/jruby.log' ]            # $2
subprocess.Popen(command, shell=True)
The first argument is a shell script referring to $1 and $2, which are in turn passed as separate arguments. Keeping data separate from code, rather than trying to substitute the former into the latter, is a precaution against shell injection (think of this as an analog to SQL injection).
Of course, don't actually do anything like this in Python -- the native primitives for file IO are far more appropriate.
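For instance, a sketch of the native approach:

# append the line directly; no subprocess needed
with open('/tmp/jruby.log', 'a') as f:
    f.write('</verbosegc>\n')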
Have you tried without splitting the command and using shell=True? My usual format is:
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
output = process.stdout.read() # or .readlines()
I have been writing some Python code, and in my code I was using the "commands" module.
The code was working as I intended, but then I noticed in the Python docs that commands has been deprecated and will be removed in Python 3, and that I should use "subprocess" instead.
"OK," I think, "I don't want my code to go straight to legacy status, so I should change that right now."
The thing is, subprocess.Popen seems to prepend a nasty string to the start of any output, e.g.
<subprocess.Popen object at 0xb7394c8c>
All the examples I see have it there, it seems to be accepted as given that it is always there.
This code:
#!/usr/bin/python
import subprocess
output = subprocess.Popen("ls -al", shell=True)
print output
produces this:
<subprocess.Popen object at 0xb734b26c>
brettg@underworld:~/dev$ total 52
drwxr-xr-x 3 brettg brettg 4096 2011-05-27 12:38 .
drwxr-xr-x 21 brettg brettg 4096 2011-05-24 17:40 ..
<trunc>
Is this normal? If I use it as part of a larger program that outputs various formatted details to the console it messes everything up.
I'm using the command to obtain the IP address for an interface by using ifconfig along with various greps and awks to scrape the address.
Consider this code:
#!/usr/bin/python
import commands,subprocess
def new_get_ip (netif):
    address = subprocess.Popen("/sbin/ifconfig " + netif + " | grep inet | grep -v inet6 | awk '{print $2}' | sed 's/addr://'i", shell=True)
    return address
def old_get_ip (netif):
    address = commands.getoutput("/sbin/ifconfig " + netif + " | grep inet | grep -v inet6 | awk '{print $2}' | sed 's/addr://'i")
    return address
print "OLD IP is :",old_get_ip("eth0")
print ""
print "NEW IP is :",new_get_ip("eth0")
This returns:
brettg@underworld:~/dev$ ./IPAddress.py
OLD IP is : 10.48.16.60
NEW IP is : <subprocess.Popen object at 0xb744270c>
brettg@underworld:~/dev$ 10.48.16.60
Which is fugly to say the least.
Obviously I am missing something here. I am new to Python, of course, so I'm sure it is me doing the wrong thing, but various Google searches have been fruitless to this point.
What if I want cleaner output? Do I have to manually trim the offending output or am I invoking subprocess.Popen incorrectly?
The "ugly string" is what it should be printing. Python is correctly printing out repr(subprocess.Popen(...)), just as it would if you said print(open('myfile.txt')).
Furthermore, Python has no knowledge of what is being output to stdout. The output you are seeing is not from Python, but from the process's stdout and stderr being redirected to your terminal as spam; it does not even go through the Python process. It's as if you ran a program someprogram & without redirecting its stdout and stderr to /dev/null, and then tried to run another command: you'd occasionally see spam from the program. To repeat and clarify:
<subprocess.Popen object at 0xb734b26c> <-- output of python program
brettg@underworld:~/dev$ total 52 <-- spam from your shell, not from python
drwxr-xr-x 3 brettg brettg 4096 2011-05-27 12:38 . <-- spam from your shell, not from python
drwxr-xr-x 21 brettg brettg 4096 2011-05-24 17:40 .. <-- spam from your shell, not from python
...
In order to capture stdout, you must use the .communicate() function, like so:
#!/usr/bin/python
import subprocess
output = subprocess.Popen(["ls", "-a", "-l"], stdout=subprocess.PIPE).communicate()[0]
print output
Furthermore, you never want to use shell=True, as it is a security hole (a major security hole with unsanitized inputs, a minor one with no input because it allows local attacks by modifying the shell environment). For security reasons and also to avoid bugs, you generally want to pass in a list rather than a string. If you're lazy you can do "ls -al".split(), which is frowned upon, but it would be a security hole to do something like ("ls -l %s"%unsanitizedInput).split().
See the subprocess module documentation for more information.
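For instance, a sketch of handling untrusted input safely with the list form (user_path is a hypothetical variable):

import subprocess
import sys

user_path = sys.argv[1]  # untrusted input, e.g. from the command line
# the list form passes user_path as a single argument; no shell ever parses it
output = subprocess.Popen(["ls", "-l", user_path],
                          stdout=subprocess.PIPE).communicate()[0]
print(output.decode())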
Here is how to get stdout and stderr from a program using the subprocess module:
from subprocess import Popen, PIPE, STDOUT
cmd = 'echo Hello World'
p = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
output = p.stdout.read()
print output
results:
b'Hello World\r\n'
You can also run commands with PowerShell and see the results:
from subprocess import Popen, PIPE, STDOUT
cmd = 'powershell.exe ls'
p = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
output = p.stdout.read()
The variable output does not contain a string; it is the Popen object returned by subprocess.Popen(). You don't need to print it. The code,
import subprocess
output = subprocess.Popen("ls -al", shell=True)
works perfectly, but without the ugly <subprocess.Popen object at 0xb734b26c> being printed.
I've currently got a Bash command being executed (via Python's subprocess.Popen) which reads from stdin, does something, and outputs to stdout. Something along the lines of:
pid = subprocess.Popen(["-c", "cmd1 | cmd2"],
                       stdin=subprocess.PIPE,
                       stdout=subprocess.PIPE,
                       shell=True)
output_data = pid.communicate("input data\n")
Now, what I want to do is to change that to execute another command in that same subshell that will alter the state before the next commands execute, so my shell command line will now (conceptually) be:
cmd0; cmd1 | cmd2
Is there any way to have the input sent to cmd1 instead of cmd0 in this scenario? I'm assuming the output will include cmd0's output (which will be empty) followed by cmd2's output.
cmd0 shouldn't actually read anything from stdin; does that make a difference in this situation?
I know this is probably just a dumb way of doing this, I'm trying to patch in cmd0 without altering the other code too significantly. That said, I'm open to suggestions if there's a much cleaner way to approach this.
Execute cmd0 and cmd1 in a subshell and redirect /dev/null as stdin for cmd0:
(cmd0 </dev/null; cmd1) | cmd2
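Plugged back into the question's Popen call (using the plain-string shell=True form; cmd0, cmd1, cmd2 stand for the real commands), a sketch:

import subprocess

# the subshell gives cmd0 /dev/null on stdin, so the piped
# "input data" is still intact when cmd1 starts reading
pid = subprocess.Popen("(cmd0 </dev/null; cmd1) | cmd2",
                       stdin=subprocess.PIPE,
                       stdout=subprocess.PIPE,
                       shell=True)
output_data = pid.communicate(b"input data\n")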
I don't think you should have to do anything special. If cmd0 doesn't touch stdin, it'll be intact for cmd1. Try for yourself:
ls | ( echo "foo"; sed 's/^/input: /')
(Using ls as an arbitrary command to produce a few lines of input for the pipeline)
And the additional pipe to cmd2 doesn't affect the input either, of course.
Ok, I think I may be able to duplicate the stdin file descriptor to a temporary one, close it, run cmd0, then restore it before running cmd1:
exec 3<&0; exec 0<&-; cmd0; exec 0<&3; cmd1 | cmd2
Not sure if it's possible to redirect stdin in this way though, and I can't test this at the moment.
http://tldp.org/LDP/abs/html/io-redirection.html
http://tldp.org/LDP/abs/html/x17601.html