Run a command in an interactively opened bash with Python - python

I use this script to start an interactive VLC console:
import os
import sys

if len(sys.argv) > 1:
    tmp = os.popen('vlc -I rc --novideo --noaudio --rc-fake-tty -q udp://#1.2.3.4:1234').read()
else:
    print "Error: no input"
Now, in this opened console, I would like to run the 'info' command. How can I do this?
If I type this in bash:
vlc -I rc --novideo --noaudio --rc-fake-tty -q udp://#1.2.3.4:1234
It shows this
VLC media player 2.0.8 Twoflower (revision 2.0.8a-0-g68cf50b)
VLC media player 2.0.8 Twoflower
Command Line Interface initialized. Type `help' for help.
>
and it waits for a command.

This can be done with pure pipes, but it's going to be hard. And even harder if you use os.popen() instead of using subprocess.
The right way to script an interactive program is to use a higher-level library that's designed to make it easy, like pexpect. Then you just write something like:
import pexpect
child = pexpect.spawn('vlc -I rc --novideo --noaudio --rc-fake-tty -q udp://#1.2.3.4:1234')
child.expect('>')
child.sendline('info')
response = child.before
However, a much better solution is to not run VLC in interactive mode; just run it in batch mode and pass it commands. Going out of your way to have it treat your input as a TTY just so you can try to figure out how to act like a human at a TTY is making things harder for no good reason.
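For example, here is a minimal sketch of that batch-mode idea, reusing the exact command line from the question and simply feeding the rc interface a fixed list of commands on stdin (untested against your stream):

import subprocess

# Feed VLC's rc interface a whole script of commands, then read everything
# it printed once it has quit.
cmds = 'info\nquit\n'
proc = subprocess.Popen(
    ['vlc', '-I', 'rc', '--novideo', '--noaudio', '--rc-fake-tty', '-q',
     'udp://#1.2.3.4:1234'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True)
out, _ = proc.communicate(cmds)  # send the commands and wait for VLC to exit
print(out)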
Or, even better, use libVLC instead. As you can see from that link, there are Python bindings for it.
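For instance, a rough sketch with the python-vlc bindings (assuming pip install python-vlc; the exact calls may vary between versions, so check the bindings' documentation):

import vlc

# Open the same stream directly through libVLC instead of driving the
# rc interface; parse() fills in roughly the metadata that 'info' shows.
player = vlc.MediaPlayer('udp://#1.2.3.4:1234')
player.play()
media = player.get_media()
media.parse()
print(media.get_meta(vlc.Meta.Title))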
If you really want to do it interactively, and you want to do it manually over pipes, you will have to be very careful. If you don't mind just deadlocking on any unexpected results, you can do something like this:
import subprocess

child = subprocess.Popen(['vlc', '-I', 'rc', '--novideo', '--noaudio',
                          '--rc-fake-tty', '-q', 'udp://#1.2.3.4:1234'],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
def split_on_prompts():
    rbuf = ''
    while True:
        newbuf = child.stdout.read()
        rbuf += newbuf
        out, prompt, rest = rbuf.partition('\n>')
        if prompt:
            yield out
            rbuf = rest
        if not newbuf:
            yield rest
            return
output = split_on_prompts()
banner = next(output)
child.stdin.write('info\n')
response = next(output)
# etc.
As you can see, this is a lot less fun.
And if you insist on using os.popen instead, even though it's deprecated and even more painful to use: you obviously can't write to it if you open the pipe in the default 'r' mode, just like any other file-like object, and of course tacking .read() on the end means you don't even have the popen object anymore; you've just stored the first buffer it gave you and then leaked the handle. If you open it in 'r+' mode instead, if that works on your platform, and store the popen object itself, you can use it much like the subprocess.Popen object above, using child.write and child.read instead of child.stdin.write and child.stdout.read.

Related

Run script to send in-game Terraria server commands

In the past week I installed a Terraria 1.3.5.3 server on Ubuntu 18.04, for playing online with friends. The server should be powered on 24/7, without any GUI, and only be accessed by SSH on the internal LAN.
My friends asked me if there is a way for them to control the server, e.g. send a message, via the internal in-game chat, so I thought of using a special character ($) in front of the desired command ('$say something' or '$save', for instance) and a Python program that reads the terminal output via a pipe, interprets the command and sends it back with a bash command.
I followed these instructions to install the server:
https://www.linode.com/docs/game-servers/host-a-terraria-server-on-your-linode
And configured my router to forward a dedicated port to the Terraria server.
Everything is working fine, but I really struggle to make Python send a command via the "terrariad" bash script described in the link above.
Here is the code used to send a command, in Python:
import subprocess
subprocess.Popen("terrariad save", shell=True)
This works fine, but if I try to input a string with a space:
import subprocess
subprocess.Popen("terrariad \"say something\"", shell=True)
it cuts the command at the space character and outputs this on the terminal:
: say
Instead of the desired:
: say something
<Server>something
What could I do to solve this problem?
I tried so many things but I get the same result.
P.S. If I send the command manually in the SSH PuTTY terminal, it works!
Edit 1:
I abandoned the Python solution; for now I'll try it with bash instead, it seems more logical to do it this way.
Edit 2:
I found that the "terrariad" script expects just one argument, but Popen is splitting my argument into two no matter which method I use, because my input string has a space character in the middle. Like this:
Expected:
terrariad "say\ something"
$1 = "say something"
But I get this from Python's Popen:
subprocess.Popen("terrariad \"say something\"", shell=True)
$1 = "say
$2 = something"
No matter if I pass it as a list:
subprocess.Popen(["terrariad", "say something"])
$1 = "say
$2 = something"
Or put a backslash before the space character; it always splits the argument when it reaches a space.
Edit 3:
Looking at the bash script I could understand what is going on when I send a command... Basically it uses the command "stuff", from the screen program, to send characters to the Terraria screen session:
screen -S terraria -X stuff $send
$send is a printf command:
send="`printf \"$*\r\"`"
And it seems to me that if I run the bash file from Python, it has a different result than running it from the command line. How is this possible? Is this a bug or a bad implementation of the function?
Thanks!
I finally came up with a solution to this, using pipes instead of the Popen solution.
It seems to me that Popen isn't the best solution for running bash scripts, as described in How to do multiple arguments with Python Popen?, the link that SiHa sent in the comments (Thanks!):
"However, using Python as a wrapper for many system commands is not really a good idea. At the very least, you should be breaking up your commands into separate Popens, so that non-zero exits can be handled adequately. In reality, this script seems like it'd be much better suited as a shell script.".
So I came up with a solution using a FIFO file:
First, create a FIFO to be used as a pipe, in the desired directory (for instance, /samba/terraria/config):
mkfifo cmdOutput
*/samba/terraria - this is the directory I created in order to easily edit the scripts and save and load maps to the server from another computer; it is shared with Samba (https://linuxize.com/post/how-to-install-and-configure-samba-on-ubuntu-18-04/)
Then I created a Python script to read the screen output and write to the pipe file (I know, there are probably other ways to do this):
import shlex, os

outputFile = os.open("/samba/terraria/config/cmdOutput", os.O_WRONLY)
print("python script has started!")

while 1:
    line = input()
    print(line)
    cmdPosition = line.find("&")
    if cmdPosition != -1:
        cmd = slice(cmdPosition + 1, len(line))
        cmdText = line[cmd]
        os.write(outputFile, bytes(cmdText + "\r\r", 'utf-8'))
        os.write(outputFile, bytes("say Command executed!!!\r\r", 'utf-8'))
Then I edited the terraria.service file to call this script, piped from TerrariaServer, and redirected the errors to another file:
ExecStart=/usr/bin/screen -dmS terraria /bin/bash -c "/opt/terraria/TerrariaServer.bin.x86_64 -config /samba/terraria/config/serverconfig.txt < /samba/terraria/config/cmdOutput 2>/samba/terraria/config/errorLog.txt | python3 /samba/terraria/scripts/allowCommands.py"
*/samba/terraria/scripts/allowCommands.py - where my script is.
**/samba/terraria/config/errorLog.txt - saves a log of errors to a file.
Now I can send commands like 'noon' or 'dawn' to change the in-game time, save the world and back it up with the Samba server before boss fights, do other stuff if I have some time XD, and have the terminal showing what is going on with the server.

Suppress output of subprocess

I want to use the subprocess module to control some processes spawned via ssh.
By searching and testing I found that this works:
import subprocess
import os
import time

node = 'guest#localhost'
my_cmd = ['sleep', '1000']
devnull = open(os.devnull, 'wb')
cmd = ['ssh', '-t', '-t', node] + my_cmd
p = subprocess.Popen(cmd, stderr=devnull, stdout=devnull)

while True:
    time.sleep(1)
    print 'Normal output'
The -t -t option I provide allows me to terminate the remote process instead of just the ssh command. But this also scrambles my program's output, as newlines are no longer effective, making it one long and hard-to-read string.
How can I keep ssh from affecting the formatting of my Python program's output?
Sample output:
guest:~$ python2 test.py
Normal output
Normal output
Normal output
Normal output
Normal output
Normal output
Normal output
(First ctrl-c)
Normal output
Normal output
Normal output
(Second ctrl-c)
^CTraceback (most recent call last):
File "test.py", line 13, in <module>
time.sleep(1)
KeyboardInterrupt
Ok, the output is now clear. I do not know exactly why, but the command ssh -t -t puts the local terminal in raw mode. It makes sense anyway, because it is intended to allow you to directly use curses programs (such as vi) on the remote, and in that case no conversion should be done, not even the simple \n -> \r\n that allows a plain newline to leave the cursor in the first column. But I could not find a reference to this in the ssh documentation.
It (-t -t) allows you to kill the remote process because raw mode lets the Ctrl + C be sent to the remote instead of being processed by the local tty driver.
IMHO, this is a design smell, because you only use a side effect of the pty allocation to pass a Ctrl + C to the remote, and you suffer from another side effect, which is raw mode on the local system. You should rather redirect the standard input (stdin=subprocess.PIPE) and explicitly send a chr(3) when you type the special character on the local keyboard, or install a signal handler for SIGINT that does it.
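A minimal, untested sketch of that signal-handler idea, reusing the placeholder host and sleep command from the question:

import signal
import subprocess

# Keep the remote pty (-t -t) so the interrupt character is interpreted there,
# but make the local stdin a pipe so the local terminal is not put in raw mode.
p = subprocess.Popen(['ssh', '-t', '-t', 'guest#localhost', 'sleep', '1000'],
                     stdin=subprocess.PIPE)

def forward_interrupt(signum, frame):
    p.stdin.write(b'\x03')  # explicitly pass Ctrl-C (ETX, chr(3)) to the remote
    p.stdin.flush()

signal.signal(signal.SIGINT, forward_interrupt)
p.wait()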
Alternatively, as a workaround, you can simply use something like os.system("stty opost -igncr") (or better, its subprocess equivalent) after starting the remote command, to reset the local terminal to an acceptable mode.
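The subprocess equivalent of that workaround is just (a sketch; the stty flags are the ones suggested above):

import subprocess

# Re-enable output post-processing on the controlling terminal after
# ssh -t -t has switched it to raw mode.
subprocess.call(['stty', 'opost', '-igncr'])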

python subprocess.Popen stdin.write

I'm new to Python and would like to open a Windows cmd prompt, start a process, leave the process running and then issue commands to that same running process.
The commands will change, so I can't just include them in the cmdline variable below. Also, the process takes 10-15 seconds to start, so I don't want to waste time waiting for the process to start and run commands each time; I just want to start the process once and run quick commands as needed in the same process.
I was hoping to use subprocess.Popen to make this work, though I am open to better methods. Note that the process I want to run is not cmd; I'm just using it as an example.
import subprocess
cmdline = ['cmd', '/k']
cmd = subprocess.Popen(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
cmd.stdin.write("echo hi") #would like this to be written to the cmd prompt
print cmd.stdout.readline() #would like to see 'hi' readback
cmd.stdin.write("echo hi again") #would like this to be written to the cmd prompt
print cmd.stdout.readline() #would like to see 'hi again' readback
The results aren't what I expect. It seems as though the stdin.write commands aren't actually getting through, and the readline freezes up with nothing to read.
I have tried popen.communicate() instead of write/readline, but it kills the process. I have tried setting bufsize in the Popen line, but that didn't make much difference.
Your comments suggest that you are confusing command-line arguments with input via stdin. Namely, the fact that the system-console.exe program accepts a script=filename parameter does not imply that you can send it the same string as a command via stdin; e.g., the python executable accepts -c "print(1)" as command-line arguments, but it is a SyntaxError if you pass it as a command to the Python shell.
Therefore, the first step is to use the correct syntax. Suppose the system-console.exe accepts a filename by itself:
#!/usr/bin/env python3
import time
from subprocess import Popen, PIPE

with Popen(r'C:\full\path\to\system-console.exe -cli -',
           stdin=PIPE, bufsize=1, universal_newlines=True) as shell:
    for _ in range(10):
        print('capture.tcl', file=shell.stdin, flush=True)
        time.sleep(5)
Note: if you've redirected more than one stream e.g., stdin, stdout then you should read/write both streams concurrently (e.g., using multiple threads) otherwise it is very easy to deadlock your program.
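A minimal sketch of that pattern, reading stdout on a background thread while the main thread writes commands (cmd /k is just the stand-in from the question):

import subprocess
import threading

def drain(stream):
    # Keep consuming the child's output so its stdout buffer never fills
    # up and blocks the child.
    for line in stream:
        print('child said:', line.rstrip())

child = subprocess.Popen(['cmd', '/k'], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, universal_newlines=True)
reader = threading.Thread(target=drain, args=(child.stdout,), daemon=True)
reader.start()

child.stdin.write('echo hi\n')
child.stdin.flush()
child.stdin.write('exit\n')
child.stdin.flush()
child.wait()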
Related:
Q: Why not just use a pipe (popen())? -- mandatory reading for Unix environment but it might also be applicable for some programs on Windows
subprocess readline hangs waiting for EOF -- code example on how to pass multiple inputs, read multiple outputs using subprocess, pexpect modules.
The second and the following steps might have to deal with buffering issues on the side of the child process (out of your hands on Windows), with whether system-console allows redirecting its stdin/stdout or works with a console directly, and with character encoding issues (how the various commands in the pipeline encode text).
Here is some code that I tested and is working on Windows 10, Quartus Prime 15.1 and Python 3.5
import subprocess

class altera_system_console:
    def __init__(self):
        sc_path = r'C:\altera_lite\15.1\quartus\sopc_builder\bin\system-console.exe --cli --disable_readline'
        self.console = subprocess.Popen(sc_path, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def read_output(self):
        rtn = ""
        loop = True
        i = 0
        match = '% '
        while loop:
            out = self.console.stdout.read1(1)
            if bytes(match[i], 'utf-8') == out:
                i = i + 1
                if i == len(match):
                    loop = False
            else:
                rtn = rtn + out.decode('utf-8')
        return rtn

    def cmd(self, cmd_string):
        self.console.stdin.write(bytes(cmd_string + '\n', 'utf-8'))
        self.console.stdin.flush()

c = altera_system_console()
print(c.read_output())
c.cmd('set jtag_master [lindex [get_service_paths master] 0]')
print(c.read_output())
c.cmd('open_service master $jtag_master')
print(c.read_output())
c.cmd('master_write_8 $jtag_master 0x00 0xFF')
print(c.read_output())
You need to use iter if you want to see the output in real time:
import subprocess

cmdline = ['cmd', '/k']
cmd = subprocess.Popen(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
cmd.stdin.write("echo hi\n")  # would like this to be written to the cmd prompt
for line in iter(cmd.stdout.readline, ""):
    print line
cmd.stdin.write("echo hi again\n")  # would like this to be written to the cmd prompt
Not sure exactly what you are trying to do, but if you want to input certain data when you get certain output, then I would recommend using pexpect.
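For instance, a rough sketch of that approach. pexpect's classic spawn is Unix-only; on Windows its pexpect.popen_spawn.PopenSpawn class plays a similar role, and the prompt pattern below is just a guess for cmd:

from pexpect.popen_spawn import PopenSpawn

child = PopenSpawn('cmd /k', encoding='utf-8')
child.expect('>')        # wait for the cmd prompt
child.sendline('echo hi')
child.expect('>')
print(child.before)      # everything printed before the next prompt
child.sendline('exit')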

Sending commands to a subprocess in python

I just want to build a little Python music client on my Raspberry Pi. I installed "mpg321" and it works great, but now to my problem. After sending the command
os.system("mpg321 -R testPlayer")
python waits for user input like play, pause or quit. If I type this in my terminal, the player pauses the music or quits. Perfect, but I want Python to do that, so I send the command
os.system("LOAD test.mp3")
where LOAD is the command for loading this mp3. But nothing happens. When I quit the player via terminal I get the error:
sh: 1: LOAD: not found
I think this means that
os.system("mpg321 -R testPlayer")
takes over the whole process, and only after I quit it does Python try to execute the command LOAD. So how do I get these things to work together?
My code:
import os

class PyMusic:
    def __init__(self):
        print "initial stuff later"

    def playFile(self, fileName, directory=""):
        os.system("mpg321 -R testPlayer")
        os.system("LOAD test.mp3")

if __name__ == "__main__":
    pymusic = PyMusic()
    pymusic.playFile("test.mp3")
Thanks for your help!
First, you should almost never be using os.system. See the subprocess module.
One major advantage of using subprocess is that you can choose whatever behavior you want—run it in the background, start it and wait for it to finish (and throw an exception if it returns non-zero), interact with its stdin and stdout explicitly, whatever makes sense.
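For example, a short sketch of the start-and-wait behavior (the file name is just a placeholder):

import subprocess

# Play a single file, wait for mpg321 to finish, and raise
# CalledProcessError if it exits with a non-zero status.
subprocess.check_call(['mpg321', 'test.mp3'])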
Here, you're not trying to run another command "LOAD test.mp3", you're trying to pass that as input to the existing process. So:
p = subprocess.Popen(['mpg321', '-R', 'testPlayer'], stdin=subprocess.PIPE)
Then you can do this:
p.stdin.write('LOAD test.mp3\n')
This is roughly equivalent to doing this from the shell:
echo -e 'LOAD test.mp3\n' | mpg321 -R testPlayer
However, you should probably read about communicate, because whenever it's possible to figure out how to make your code work with communicate, it's a lot simpler than trying to deal with generic I/O (especially if you've never coded with pipes, sockets, etc. before).
Or, if you're trying to interact with a command-line UI (e.g., you can't send the command until you get the right prompt), you may want to look at an "expect" library. There are a few of these to choose from, so you should search at PyPI to find the right one for you (although I can say that I've used pexpect successfully in the past, and the documentation is full of samples that get the ideas across a lot more quickly than most expect documentation does).
You are looking for a way to send data to stdin. Here is an example of this using Popen:
from subprocess import Popen, PIPE, STDOUT

p = Popen(['mpg321', '-R', 'testPlayer'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
mpg321_stdout = p.communicate(input='LOAD test.mp3\n')[0]
print(mpg321_stdout)
You set up pipes for stdin and stdout; then, after you start your process, you write to stdin and read from stdout via communicate. Be sure to send newlines (carriage returns).

How to run and control a command-line program from Python?

I have a Python script which produces an output file. I need to feed this output file to a command-line program. Is there any way I could call the command-line program from Python and control it so it processes the file?
I tried to run this code
import os
import subprocess
import sys
proc = subprocess.Popen(["program.exe"], stdin=subprocess.PIPE)
proc.communicate(input=sys.argv[1]) #here the filename should be entered
proc.communicate(input=sys.argv[2]) #choice 1
proc.communicate(input=sys.argv[3]) #choice 2
Is there any way I could enter the input coming from the command line? Also, though the cmd program opens, the interface flickers after I run the code.
Thanks.
Note: platform is windows
Have a look at http://docs.python.org/library/subprocess.html. It's the current way to go when starting external programs. There are many examples, and you have to check for yourself which one fits your needs best.
You could do os.system(somestr), which lets you execute somestr as a command on the command line. However, this has been scrutinized over time for being insecure, etc. (will post a link as soon as I find it).
As a result, it has conventionally been replaced with subprocess.Popen.
Hope this helps
Depending on how much control you need, you might find it easier to use pexpect, which makes parsing the output of the program rather easy and can also easily be used to talk to the program's stdin. Check out the website; they have some nice examples.
If your target program is expecting the input on STDIN, you can redirect it using a pipe:
python myfile.py | someprogram
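The same redirection can be done from inside Python, for instance (a sketch; the program and file names are placeholders):

import subprocess

# Feed a previously generated output file to the target program's stdin.
with open('outputfile') as f:
    subprocess.check_call(['someprogram'], stdin=f)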
As I just answered another question regarding subprocess, there is a better alternative!
Please have a look at the great library python sh; it is a full-fledged subprocess interface for Python that allows you to call any program as if it were a function, and, more importantly, it's pleasingly Pythonic.
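A rough sketch of what that looks like (assuming pip install sh; the program name and options are placeholders):

import sh

# Call the external program as if it were a function; short keyword
# arguments such as o='outputfile' are turned into '-o outputfile'.
result = sh.someprogram('inputfilename.txt', o='outputfile')
print(result.exit_code)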
Besides redirecting the data stream with pipes, you can also process a command line such as:
mycode.py -o outputfile inputfilename.txt
You must import sys
import sys
and in your main function:
ii = 1
infile = None
outfile = None
# let's process the command line
while ii < len(sys.argv):
    arg = sys.argv[ii]
    if arg == '-o':
        ii = ii + 1
        outfile = sys.argv[ii]
    else:
        infile = arg
    ii = ii + 1
Of course, you can add some file checking, etc...
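For what it's worth, here is a sketch of the same parsing done with the standard argparse module (the option names mirror the example above):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-o', dest='outfile', help='output file name')
parser.add_argument('infile', help='input file name')
args = parser.parse_args()  # e.g. mycode.py -o outputfile inputfilename.txt
print(args.infile, args.outfile)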
