I just want to build a little Python music client on my Raspberry Pi. I installed "mpg321" and it works great, but now my problem: after sending the command
os.system("mpg321 -R testPlayer")
python waits for user input like play, pause or quit. If I type this in my terminal, the player pauses the music or quits. Perfect, but I want Python to do that, so I send the command
os.system("LOAD test.mp3")
where LOAD is the command for loading this mp3. But nothing happens. When I quit the player via the terminal, I get the error:
sh: 1: LOAD: not found
I think this means that
os.system("mpg321 -R testPlayer")
takes over the whole process, and only after I quit it does Python try to execute the command LOAD. So how do I get these things to work together?
My code:
import os

class PyMusic:
    def __init__(self):
        print "initial stuff later"

    def playFile(self, fileName, directory = ""):
        os.system("mpg321 -R testPlayer")
        os.system("LOAD test.mp3")

if __name__ == "__main__":
    pymusic = PyMusic()
    pymusic.playFile("test.mp3")
Thanks for your help!
First, you should almost never be using os.system. See the subprocess module.
One major advantage of using subprocess is that you can choose whatever behavior you want—run it in the background, start it and wait for it to finish (and throw an exception if it returns non-zero), interact with its stdin and stdout explicitly, whatever makes sense.
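For instance, a quick sketch of two of those options, assuming only the standard subprocess module and the file name from the question:

import subprocess

subprocess.Popen(['mpg321', 'test.mp3'])       # start it in the background and keep going
subprocess.check_call(['mpg321', 'test.mp3'])  # start it, wait, and raise CalledProcessError on a non-zero exit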
Here, you're not trying to run another command "LOAD test.mp3", you're trying to pass that as input to the existing process. So:
p = subprocess.Popen(['mpg321', '-R', 'testPlayer'], stdin=subprocess.PIPE)
Then you can do this:
p.stdin.write('LOAD test.mp3\n')
This is roughly equivalent to doing this from the shell:
echo -e 'LOAD test.mp3\n' | mpg321 -R testPlayer
However, you should probably read about communicate, because whenever it's possible to figure out how to make your code work with communicate, it's a lot simpler than trying to deal with generic I/O (especially if you've never coded with pipes, sockets, etc. before).
Or, if you're trying to interact with a command-line UI (e.g., you can't send the command until you get the right prompt), you may want to look at an "expect" library. There are a few of these to choose from, so you should search at PyPI to find the right one for you (although I can say that I've used pexpect successfully in the past, and the documentation is full of samples that get the ideas across a lot more quickly than most expect documentation does).
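For example, a hedged sketch with pexpect (it assumes mpg321's generic control mode prints an "@R ..." banner line when it starts; check the output of mpg321 -R on your system before relying on that):

import pexpect

child = pexpect.spawn('mpg321 -R testPlayer')
child.expect('@R')                # wait for the control-interface banner (assumed)
child.sendline('LOAD test.mp3')   # sendline appends the newline for you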
You are looking for a way to send data to stdin. Here is an example of this using Popen:
from subprocess import Popen, PIPE, STDOUT
p = Popen(['mpg321', '-R', 'testPlayer'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
mpg321_stdout = p.communicate(input='LOAD test.mp3\n')[0]
print(mpg321_stdout)
You set up pipes for stdin and stdout; then, after you start the process, communicate() writes your input to stdin and reads back stdout. Be sure to terminate each command with a newline.
I have created a simple method that executes a command the way you would in the terminal:
from subprocess import PIPE, run

class Command_Line():
    @staticmethod
    def execute(command):
        result = run(command, stdout=PIPE, stderr=PIPE, universal_newlines=True, shell=True)
        print(result)
        return result.stdout
My problem with the code above is that it does not wait until the task/process is done. Let's say I use ffmpeg to change the frame rate of a video via the following code:
import Command_Line as cmd
cmd.execute('ffmpeg -i "000000004.avi" -c copy -y -r 30 "000000004.avi"')
The problem is the output video, because the process does not complete. I've searched for how to wait, e.g. "Is there a way to check if a subprocess is still running?", but could not incorporate it into my code. Can you share your experience with this?
Thanks
According to the Python documentation, subprocess.run waits for the process to end.
The problem is that ffmpeg overwrites the input file if the input and output files are the same and therefore the output video becomes unusable.
subprocess.run() is synchronous, which means that the system will wait until it finishes before moving on to the next command. subprocess.Popen() does the same thing but asynchronously (the system will not wait for it to finish). You can try reloading your file with importlib.reload; it may pick up your generated file then.
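Since the underlying issue is that the output overwrites the input, a minimal sketch (using the filename from the question) is to let ffmpeg write to a temporary file and replace the original only after it has finished successfully:

import os
import subprocess

src = "000000004.avi"
tmp = "000000004.tmp.avi"
# check=True raises CalledProcessError instead of silently leaving a broken file
subprocess.run(["ffmpeg", "-i", src, "-c", "copy", "-y", "-r", "30", tmp], check=True)
os.replace(tmp, src)  # atomic on the same filesystem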
In the past week I installed a Terraria 1.3.5.3 server on Ubuntu 18.04, for playing online with friends. This server should be powered on 24/7, without any GUI, and only accessed over SSH on the internal LAN.
My friends asked me if there is a way for them to control the server, e.g. send a message, via the internal in-game chat, so I thought of using a special character ($) in front of the desired command ('$say something' or '$save', for instance) and a Python program that reads the terminal output via a pipe, interprets the command and sends it back with a bash command.
I followed these instructions to install the server:
https://www.linode.com/docs/game-servers/host-a-terraria-server-on-your-linode
And configured my router to forward a dedicated port to the Terraria server.
Everything is working fine, but I am really struggling to make Python send a command via the "terrariad" bash script described in the link above.
Here is the code used to send a command, in Python:
import subprocess
subprocess.Popen("terrariad save", shell=True)
This works fine, but if I try to input a string with a space:
import subprocess
subprocess.Popen("terrariad \"say something\"", shell=True)
it cuts the command off at the space character and outputs this in the terminal:
: say
Instead of the desired:
: say something
<Server>something
What could I do to solve this problem?
I tried so many things but I get the same result.
P.S. If I send the command manually in the SSH PuTTY terminal, it works!
Edit 1:
I abandoned the Python solution for now; I'll try it with bash instead, which seems the more logical way to do this.
Edit 2:
I found the "terrariad" script expect just one argument, but the Popen is splitting my argument into two no matter the method I use, as my input string has one space char in the middle. Like this:
Expected:
terrariad "say\ something"
$1 = "say something"
But I get this from Python's Popen:
subprocess.Popen("terrariad \"say something\"", shell=True)
$1 = "say
$2 = something"
No matter whether I pass it as a list:
subprocess.Popen(["terrariad", "say something"])
$1 = "say
$2 = something"
Or put a backslash before the space character; it always splits the argument when it reaches a space.
Edit 3:
Looking into the bash script I could understand what is going on when I send a command... Basically it uses the "stuff" command, from the screen program, to send characters to the Terraria screen session:
screen -S terraria -X stuff $send
$send is a printf command:
send="`printf \"$*\r\"`"
And it seems to me that if I run the bash script from Python, the result is different from running it on the command line. How is this possible? Is this a bug or a bad implementation of the function?
Thanks!
I finally came up with a solution to this, using pipes instead of the Popen approach.
It seems to me that Popen isn't the best solution for running bash scripts, as described in How to do multiple arguments with Python Popen?, the link that SiHa sent in the comments (Thanks!):
"However, using Python as a wrapper for many system commands is not really a good idea. At the very least, you should be breaking up your commands into separate Popens, so that non-zero exits can be handled adequately. In reality, this script seems like it'd be much better suited as a shell script.".
So I came up with a solution using a FIFO file:
First, create a FIFO to be used as a pipe, in the desired directory (for instance, /samba/terraria/config):
mkfifo cmdOutput
*/samba/terraria - this is the directory I created in order to easily edit the scripts and save and load maps to the server from another computer; it is shared with Samba (https://linuxize.com/post/how-to-install-and-configure-samba-on-ubuntu-18-04/)
Then I created a Python script to read the screen output and write to the pipe file (I know, there are probably other ways to do this):
import shlex, os

outputFile = os.open("/samba/terraria/config/cmdOutput", os.O_WRONLY)
print("python script has started!")

while 1:
    line = input()
    print(line)
    cmdPosition = line.find("&")
    if cmdPosition != -1:
        cmd = slice(cmdPosition + 1, len(line))
        cmdText = line[cmd]
        os.write(outputFile, bytes(cmdText + "\r\r", 'utf-8'))
        os.write(outputFile, bytes("say Command executed!!!\r\r", 'utf-8'))
Then I edited the terraria.service file to call this script, piped from TerrariaServer, and redirect the errors to another file:
ExecStart=/usr/bin/screen -dmS terraria /bin/bash -c "/opt/terraria/TerrariaServer.bin.x86_64 -config /samba/terraria/config/serverconfig.txt < /samba/terraria/config/cmdOutput 2>/samba/terraria/config/errorLog.txt | python3 /samba/terraria/scripts/allowCommands.py"
*/samba/terraria/scripts/allowCommands.py - where my script is.
**/samba/terraria/config/errorLog.txt - saves a log of errors to a file.
Now I can send commands like 'noon' or 'dawn' to change the in-game time, save the world and back it up with the Samba server before boss fights, do other stuff if I have some time XD, and have the terminal showing what is going on with the server.
Is there any way to display the output of a shell command in Python, as the command runs?
I have the following code to send commands to a specific shell (in this case, /bin/tcsh):
import subprocess
import select

cmd = subprocess.Popen(['/bin/tcsh'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
poll = select.poll()
poll.register(cmd.stdout.fileno(), select.POLLIN)

# The list "commands" holds a list of shell commands
for command in commands:
    cmd.stdin.write(command)
    # Must include this to ensure data is passed to the child process
    cmd.stdin.flush()
    ready = poll.poll()
    if ready:
        result = cmd.stdout.readline()
        print result
Also, I got the code above from this thread, but I am not sure I understand how the polling mechanism works.
What exactly is registered above?
Why do I need the variable ready if I don't pass any timeout to poll.poll()?
Yes, it is entirely possible to display the output of a shell command as the command runs. There are two requirements:
1) The command must flush its output.
Many programs buffer their output differently according to whether the output is connected to a terminal, a pipe, or a file. If they are connected to a pipe, they might write their output in much bigger chunks much less often. For each program that you execute, consult its documentation. Some versions of /bin/cat, for example, have the -u switch.
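As a hedged sketch of one common workaround: wrap the child command with GNU coreutils' stdbuf so its stdout stays line-buffered even when it writes to a pipe. Here "some_command" is just a placeholder, and stdbuf only helps programs that rely on stdio's default buffering:

import subprocess

p = subprocess.Popen(['stdbuf', '-oL', 'some_command'], stdout=subprocess.PIPE)
for line in iter(p.stdout.readline, b''):
    print(line)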
2) You must read it piecemeal, and not all at once.
Your program must be structured to read one piece at a time from the output stream. This means that you ought not do any of these, each of which reads the entire stream in one go:
cmd.stdout.read()
for i in cmd.stdout:
cmd.stdout.readlines()
But instead, you could do one of these:
while not_dead_yet:
    line = cmd.stdout.readline()

for line in iter(cmd.stdout.readline, b''):
    pass
Now, for your three specific questions:
Is there any way to display the output of a shell command in Python, as the command runs?
Yes, but only if the command you are running outputs as it runs and doesn't save it up for the end.
What exactly is registered above?
The file descriptor which, when read, makes available the output of the subprocess.
Why do I need the variable ready if I don't pass any timeout to poll.poll()?
You don't. You also don't need the poll(). It is possible, if your commands list is fairly large, that you might need to poll() both the stdin and stdout streams to avoid a deadlock. But if your commands list is fairly modest (less than 5 kilobytes), then you will be OK just writing it at the beginning.
Here is one possible solution:
#! /usr/bin/python
import subprocess
import select

# Critical: all of this must fit inside ONE pipe() buffer
commands = ['echo Start\n', 'date\n', 'sleep 10\n', 'date\n', 'exit\n']

cmd = subprocess.Popen(['/bin/tcsh'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# The list "commands" holds a list of shell commands
for command in commands:
    cmd.stdin.write(command)
    # Must include this to ensure data is passed to the child process
    cmd.stdin.flush()

for line in iter(cmd.stdout.readline, b''):
    print line
ok, so my google-fu really kind of sucks and I was unable to find an answer, hopefully you folks can help me ^_^
OK, so what I thought would be a simple script is seemingly not communicating with its subprocess correctly; I'm running this line by line. I'm also using the mpg123 player, and this is a Linux system (well, a Raspberry Pi).
from subprocess import Popen, PIPE, STDOUT
p = Popen(["mpg123", "-C", "test.mp3"], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
# wait a few seconds before entering this; "q" without a newline is how the player's controls quit if it were run as "mpg123 -C test.mp3" on the command line
p.communicate(input='q')[0]
I can run stdout.read() on it just fine, but using communicate for input just makes it hang, and p.stdin.write('q') seemingly does nothing at all. This is Python related, though I have a feeling I'm also not looking in the right place in the mpg123 documentation. Please be kind, as I'm exceptionally new to this ^_^
Check what arguments your mpg123 version understands. The following works on my machine:
#!/usr/bin/env python3
import time
from subprocess import Popen, PIPE, DEVNULL, STDOUT
p = Popen(["mpg123", "-K", "test.mp3"], stdin=PIPE, stdout=DEVNULL, stderr=STDOUT)
# wait a little
time.sleep(3)
# send command "Skip song", wait for the player to exit
p.communicate(b'n')[0]
It starts playing the file, waits ~3 seconds, and exits.
This is an awful solution, but it works in a pinch. I'm using this as a patch because for some reason, I cannot get Python libraries to play properly on my Raspberry Pi.
If you start mpg123 in remote mode (mpg123 -R), you can send commands to it far more easily:
import subprocess as sp

proc = sp.Popen(["mpg123", "-R"], stdin=sp.PIPE)
Then, you can send commands to its stdin attribute.
Note:
The commands are different. To pause, it's "pause", not " " (the space key), for example. Run mpg123 -R in a console, then send it the help command to see a list of commands.
The commands need to be newline terminated.
From the docs:
-R, --remote
Activate generic control interface. mpg123 will then read and execute commands from stdin. Basic usage is ''load <file>'' to play some file, plus the obvious ''pause'' command. ''jump <frame>'' will jump/seek to a given point (MPEG frame number). Issue ''help'' to get a full list of commands and syntax.
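Putting the notes above together, a minimal sketch (assuming mpg123 is installed and test.mp3 exists in the working directory):

import subprocess as sp

proc = sp.Popen(["mpg123", "-R"], stdin=sp.PIPE)
proc.stdin.write(b"load test.mp3\n")   # commands must be newline-terminated
proc.stdin.flush()
# ... some time later ...
proc.stdin.write(b"pause\n")
proc.stdin.flush()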
Background: I have a Python subprocess that connects to a shell-like application, which uses the readline library to handle input, and that app has a TAB-complete routine for command input, just like bash. The child process is spawned, like so:
import subprocess

def get_cli_subprocess_handle():
    return subprocess.Popen(
        '/bin/myshell',
        shell=False,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
    )
Everything works great, except tab-completion. Whenever my Python program passes the tab character '\t' to the subprocess, I get 5 spaces on STDIN instead of triggering the readline library's tab-complete routine. :(
Question: What can I send to the subprocess's STDIN to trigger the child's tab-complete function? Maybe asked another way: How do I send the TAB key as opposed to the TAB character, if that is even possible?
Related but Unanswered and Derailed:
trigger tab completion for python batch process built around readline
The shell-like application is probably differentiating between a terminal being connected to stdin and a pipe being connected to it. Many Unix utilities do just that to optimise their buffering (line vs. block), and shell-like utilities are likely to disable command-completion facilities on batch input (i.e. a pipe) to avoid unexpected results. Command completion is really an interactive feature which requires terminal input.
Check out the pty module and try using a master/slave pair as the pipe for your subprocess.
There really is no such thing as sending a tab key to a pipe. A pipe can only accept strings of bits, and if the tab character isn't doing it, there may not be a solution.
There is a project that does something similar called pexpect. Just looking at its interact() code, I'm not seeing anything obvious that makes it work where yours doesn't. Given that, the most likely explanation is that pexpect actually does some work to make itself look like a pseudo-terminal. Perhaps you could incorporate its code for that?
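A hedged sketch of that route with pexpect: it runs the child on a pseudo-terminal, so readline inside /bin/myshell sees a terminal and TAB can trigger its completion routine (the partial command 'som' is made up purely for illustration):

import pexpect

child = pexpect.spawn('/bin/myshell')
child.send('som\t')                       # the TAB character, no newline
child.expect(pexpect.TIMEOUT, timeout=1)  # collect whatever the shell echoes back
print(child.before)                       # e.g. the completed command or a list of candidates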
Based on isedev's answer, I modified my code as follows:
import os, pty, subprocess

def get_cli_subprocess_handle():
    masterPTY, slaveTTY = pty.openpty()
    return masterPTY, slaveTTY, subprocess.Popen(
        '/bin/myshell',
        shell=False,
        stdin=slaveTTY,
        stdout=slaveTTY,
        stderr=slaveTTY,
    )
Using this returned tuple, I was able to perform select.select([masterPTY],[],[]) and os.read(masterPTY, 1024) as needed, and I wrote to the master-pty with a function that is very similar to a private method in the pty module source:
def write_all(masterPTY, data):
    """Successively write all of data into a file-descriptor."""
    while data:
        chars_written = os.write(masterPTY, data)
        data = data[chars_written:]
    return data
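Hypothetical usage of the two helpers above: send a partial command plus a TAB through the pty master and read back whatever the child prints (for example, a completion list). The prefix 'som' is made up for illustration, and note that os.write() wants bytes on Python 3:

import os, select

masterPTY, slaveTTY, proc = get_cli_subprocess_handle()
write_all(masterPTY, b'som\t')                 # the TAB reaches a terminal, not a plain pipe
ready, _, _ = select.select([masterPTY], [], [], 1.0)
if ready:
    print(os.read(masterPTY, 1024))            # whatever the shell echoed back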
Thanks to all for the good solutions. Hope this example helps someone else. :)