Running multiple Bash commands interactively from Python - python

I have just come across pexpect and have been figuring out how to use it to automate various tasks I would otherwise have to perform manually in a command shell.
Here's an example script:
import pexpect, sys
child = pexpect.spawn("bash", timeout=60)
child.logfile = sys.stdout
child.sendline("cd /workspace/my_notebooks/code_files")
child.expect('#')
child.sendline('ls')
child.expect('#')
child.sendline('git add .')
child.expect('#')
child.sendline('git commit')
child.expect('#')
child.sendline('git push origin main')
child.expect('Username .*:')
child.sendline(<my_github_username>)
child.expect('Password .*:')
child.sendline(<my_github_password>)
child.expect('#')
child.expect(pexpect.EOF)
(I know these particular tasks do not necessarily require pexpect, just trying to understand its best practices.)
Now, the above works. It cds to my local repo folder, lists the files there, stages my commits, and pushes to Github with authentication, all the while providing real-time output to the Python stdout. But I have two areas I'd like to improve:
Firstly, calling .expect('#') after every line I run in Bash (even ones that don't require interactivity) is a little tedious. (And I'm not sure whether/why it always seems to work regardless of what was printed to stdout - although so far it does.) Ideally I could just clump them into one multiline string and dispense with all those expects. Isn't there a more natural way to automate the non-interactive parts of the script, e.g. a multiline string of Bash commands separated by ';' or '&&' or '||'?
Secondly, if you run a script like the above you'll see it times out after 60 seconds sharp, then yields a TimeoutError in Python. Although - assuming the job fits within 60 seconds - it gets done, I would prefer something which (1) doesn't take unnecessarily long, (2) doesn't risk cutting off a >60 second process midway, (3) doesn't end the whole thing giving me an error in Python. Can we instead have it come to a natural end, i.e., when the shell processes are finished, that's when it stops running in Python too? (If (2) and (3) can be addressed, I could probably just set an enormous timeout value - not sure if there is better practice though.)
What's the best way of rewriting the code above? I grouped these two issues in one question because my guess is there is a generally better way of using pexpect, which could solve both problems (and probably others I don't even know I have yet!), and in general I'd invite being shown the best way of doing this kind of task.

You don't need to wait for # between each command. You can just send all the commands and ignore the shell prompts. The shell buffers all the inputs.
You only need to wait for the username and password prompts, and then the final # after the last command.
You also need to send an exit command at the end, otherwise you won't get EOF.
import pexpect, sys
child = pexpect.spawn("bash", timeout=60)
child.logfile = sys.stdout
child.sendline("cd /workspace/my_notebooks/code_files")
child.sendline('ls')
child.sendline('git add .')
child.sendline('git commit')
child.sendline('git push origin main')
child.expect('Username .*:')
child.sendline(<my_github_username>)
child.expect('Password .*:')
child.sendline(<my_github_password>)
child.expect('#')
child.sendline('exit')
child.expect(pexpect.EOF)
If you're running into the 60 second timeout, you can use timeout=None to disable this. See pexpect timeout with large block of data from child
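For example, you could keep the default timeout for the quick steps and lift it only for the waits that may legitimately take a long time (a minimal sketch; only the final expects change):
child.expect('#', timeout=None)          # wait however long the push takes
child.sendline('exit')
child.expect(pexpect.EOF, timeout=None)  # let bash end naturally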
You could also combine multiple commands in a single line:
import pexpect, sys
child = pexpect.spawn("bash", timeout=60)
child.logfile = sys.stdout
child.sendline("cd /workspace/my_notebooks/code_files && ls && git add . && git commit && git push origin main')
child.expect('Username .*:')
child.sendline(<my_github_username>)
child.expect('Password .*:')
child.sendline(<my_github_password>)
child.expect('#')
child.sendline('exit')
child.expect(pexpect.EOF)
Using && between the commands ensures that it stops if any of them fails.
In general I wouldn't recommend using pexpect for this at all. Make a shell script that does everything you want, and run the script with a single subprocess.Popen() call.
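For example (a minimal sketch; build_and_push.sh is a hypothetical script containing the cd/ls/git commands above):
import subprocess
import sys

# Run the whole script in one child process; its output streams to the
# terminal in real time, and returncode tells you whether anything failed
# (assuming the script itself uses `set -e` or chains commands with `&&`).
result = subprocess.run(["bash", "build_and_push.sh"])
sys.exit(result.returncode)
subprocess.run() is used here as the modern convenience wrapper; a bare subprocess.Popen() followed by wait() would work the same way.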

Related

Know if a subprocess is not stuck by its prints to stdout

I have a subprocess that I am running with:
proc = subprocess.Popen("python -u my_script.py", shell=True)
my_script.py should print regularly to stdout, and I have another unrelated process listening to this output, so I can't redirect the output anywhere else.
I want to ensure that the process is really printing regularly and hasn't got stuck in some loop, etc. Is there a way to check whether stdout has been written to within some amount of time?
Are there any other options to reach this goal?
EDIT
I am using windows
You can create a named pipe with mkfifo and use tee to send your script's output to both the process listening for it and the pipe.
mkfifo blarg
my_script.py | tee blarg | your_greedy_data_processing_instance
tail -f blarg
Instead of tail you can use an arbitrarily complicated script to study the output and the state of the process generating it (timers, pid checks).
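For instance, the monitoring script could be a small Python watcher on the pipe that flags the writer as stuck if nothing arrives for a while (a sketch under the same POSIX assumptions as the tee approach; the pipe name blarg and the 30-second threshold are arbitrary):
import os
import select

PIPE_PATH = "blarg"   # the fifo created with mkfifo
TIMEOUT = 30          # seconds of silence before we consider the writer stuck

# Opening the fifo for reading blocks until the writer side is opened.
fd = os.open(PIPE_PATH, os.O_RDONLY)
try:
    while True:
        ready, _, _ = select.select([fd], [], [], TIMEOUT)
        if not ready:
            print("no output for %d seconds - process may be stuck" % TIMEOUT)
            break
        data = os.read(fd, 4096)
        if not data:                      # EOF: the writer closed the pipe
            print("writer finished")
            break
        # inspect `data` here if needed (timers, pid checks, ...)
finally:
    os.close(fd)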
It appears that the access time and modification time of /dev/stdout is updated regularly. Note, however, that /dev/stdout will always be a soft link -- er, a symbolic link, I mean -- to the file handle of stdout for the process that's checking /dev/stdout. I.e., /dev/stdout links to /proc/self/fd/1.
So it seems that you could check the first file descriptor of your process to see if its modification time has changed, e.g.:
$ stat -c %y -L /proc/10830/fd/1
2021-05-13 02:34:00.367857061
-L means act on the target of the soft link, not the soft link itself; -c %y is just asking for the modification time. This Python script is running as process 10830 on my system right now, and it's occasionally updating the modification time (about every 8 seconds):
>>> import time
>>> while True: time.sleep(1); print("still alive")
still alive
still alive
still alive
....
You should Google this answer to be sure that the behavior I'm seeing is reliable, though, because I've never read anything about it before.
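If the behavior does hold, the same check is easy to do from Python with os.stat (a sketch; the PID 10830 and the 15-second threshold are placeholders):
import os
import time

pid = 10830        # PID of the process being monitored (placeholder)
STALE_AFTER = 15   # seconds without a write before we call it stuck

# os.stat follows the /proc/<pid>/fd/1 symlink by default, like `stat -L`.
st = os.stat("/proc/%d/fd/1" % pid)
age = time.time() - st.st_mtime
if age > STALE_AFTER:
    print("stdout has not been written to for %.1f seconds" % age)
else:
    print("stdout was last written %.1f seconds ago" % age)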
Alternatively, you could either (a) trust that the script is fine -- which it will, of course, always be (unless it's catching exceptions and refusing to exit even if it can no longer do anything useful, in which case you should change it to die the way it should), or (b) set up a daemon to do something like send a signal to the script, at which point the script could send a signal to the daemon to say "I'm still alive." There's literally no reason to do that, in my opinion, but how you write your programs is up to you.
So assuming that you want to press forward with this, here's a trivial example of the daemon that would monitor the script you want to make sure isn't stuck in a loop or something:
import time
import signal
import os
import sys

# keep a timestamp of when we receive a response
response_timestamp = time.time()

# add code here to get the process ID of the other script
other_pid = 0

def sig_handler(signum, frame):
    global response_timestamp
    response_timestamp = time.time()

if __name__ == '__main__':
    # make sure that when we receive SIGBREAK, sig_handler() gets called
    signal.signal(signal.SIGBREAK, sig_handler)
    while True:
        # send SIGBREAK to "other_pid"
        os.kill(other_pid, signal.SIGBREAK)
        time.sleep(15)
        if time.time() - 20 > response_timestamp:
            print("the other process is frozen")
            sys.exit(1)  # os.EX_SOFTWARE is Unix-only; use a plain non-zero code on Windows
Then you add this to the other script that you're monitoring:
import signal
import os

# add code here to get the process ID of the monitoring daemon
other_pid = 0

def sig_handler(signum, frame):
    # reply to the daemon so it knows this process is still alive
    os.kill(other_pid, signal.SIGBREAK)

# register the handler so the daemon's SIGBREAK actually reaches sig_handler()
signal.signal(signal.SIGBREAK, sig_handler)

...
...
(rest of your script)
Now be aware that the only thing this will do is make sure that the process isn't completely frozen. Regrettably, Windows doesn't have a great deal of options when it comes to signals: SIGBREAK was the best one that I saw, but note that it's the signal received by a process when you hit CTRL+BREAK to interrupt the program (so if you manually hit CTRL+BREAK in the window running the Python program, it won't kill it, it will just make it call sig_handler()).
I would also be remiss if I did not inform you that even though this will probably work just fine, it is not safe to do almost anything inside of a signal handler function. It's bad form and may blow up on you unexpectedly, but in practice, it's pretty safe.

Run script to send in-game Terraria server commands

In the past week I installed a Terraria 1.3.5.3 server on Ubuntu 18.04, for playing online with friends. The server should be powered on 24/7, without any GUI, and only be accessed over SSH on the internal LAN.
My friends asked me if there is a way for them to control the server, e.g. send a message, via the internal in-game chat, so I thought of using a special character ($) in front of the desired command ('$say something' or '$save', for instance) and a Python program that reads the terminal output via a pipe, interprets the command and sends it back with a bash command.
I followed these instructions to install the server:
https://www.linode.com/docs/game-servers/host-a-terraria-server-on-your-linode
And configured my router to forward a dedicated port to the Terraria server.
Everything is working fine, but I'm really struggling to make Python send a command via the "terrariad" bash script described in the link above.
Here is the code used to send a command, in Python:
import subprocess
subprocess.Popen("terrariad save", shell=True)
This works fine, but if I try to input a string with a space:
import subprocess
subprocess.Popen("terrariad \"say something\"", shell=True)
it stops the command at the space character and outputs this on the terminal:
: say
Instead of the desired:
: say something
<Server>something
What could I do to solve this problem?
I tried so many things but I get the same result.
P.S. If I send the command manually in the SSH PuTTY terminal, it works!
Edit 1:
I abandoned the Python solution; for now I'll try it with bash instead, it seems more logical to do it this way.
Edit 2:
I found that the "terrariad" script expects just one argument, but Popen is splitting my argument into two no matter what method I use, since my input string has a space character in the middle. Like this:
Expected:
terrariad "say\ something"
$1 = "say something"
But I get this from Python's Popen:
subprocess.Popen("terrariad \"say something\"", shell=True)
$1 = "say
$2 = something"
No matter whether I pass it as a list:
subprocess.Popen(["terrariad", "say something"])
$1 = "say
$2 = something"
Or put a \ before the space character - it always splits the variable when it reaches a space.
Edit 3:
Looking into the bash script I could understand what is going on when I send a command... Basically it uses the "stuff" command from the screen program to send characters to the Terraria screen session:
screen -S terraria -X stuff $send
$send is a printf command:
send="`printf \"$*\r\"`"
And it seems to me that if I run the bash file from Python, it gives a different result than running it from the command line. How is this possible? Is this a bug or a bad implementation of the function?
Thanks!
I finally came up with a solution to this, using pipes instead of the Popen solution.
It seems to me that Popen isn't the best way to run bash scripts, as described in How to do multiple arguments with Python Popen?, the link that SiHa sent in the comments (Thanks!):
"However, using Python as a wrapper for many system commands is not really a good idea. At the very least, you should be breaking up your commands into separate Popens, so that non-zero exits can be handled adequately. In reality, this script seems like it'd be much better suited as a shell script.".
So I came up with the solution, using a fifo file:
First, create a fifo to be used as a pipe, in the desired directory (for instance, /samba/terraria/config):
mkfifo cmdOutput
*/samba/terraria - this is the directory I created in order to easily edit the scripts and save and load maps to the server from another computer; it is shared with Samba (https://linuxize.com/post/how-to-install-and-configure-samba-on-ubuntu-18-04/)
Then I created a Python script to read the screen output and write to the pipe file (I know, there are probably other ways to do this):
import shlex, os

outputFile = os.open("/samba/terraria/config/cmdOutput", os.O_WRONLY)
print("python script has started!")

while 1:
    line = input()
    print(line)
    cmdPosition = line.find("&")
    if cmdPosition != -1:
        cmd = slice(cmdPosition + 1, len(line))
        cmdText = line[cmd]
        os.write(outputFile, bytes(cmdText + "\r\r", 'utf-8'))
        os.write(outputFile, bytes("say Command executed!!!\r\r", 'utf-8'))
Then I edited the terraria.service file to call this script, piped from the Terraria server, and redirected the errors to another file:
ExecStart=/usr/bin/screen -dmS terraria /bin/bash -c "/opt/terraria/TerrariaServer.bin.x86_64 -config /samba/terraria/config/serverconfig.txt < /samba/terraria/config/cmdOutput 2>/samba/terraria/config/errorLog.txt | python3 /samba/terraria/scripts/allowCommands.py"
*/samba/terraria/scripts/allowCommands.py - where my script is.
**/samba/terraria/config/errorLog.txt - saves a log of errors to a file.
Now I can send commands like 'noon' or 'dawn' to change the in-game time, save the world and back it up with the Samba server before boss fights, do other stuff if I have some time XD, and have the terminal showing what is going on with the server.

Python script which runs multiple shell commands and waits for the result

I'm trying to write a python wrapper for building some software. I need to automate building it hundreds of times with different configurations, which means I can't just autogen.sh ; ./configure ; make ; make install. Some of the configurations I use require running a script which conditionally sets up some environment variables. What I want is to be able to do something like this:
import subprocess
import sys

command = './autogen.sh'
ret = subprocess.call(command.split())
if ret != 0:
    sys.exit(ret)

command = './script.sh ; ./configure <configure-flags>'
ret = subprocess.call(command.split())
if ret != 0:
    sys.exit(ret)

command = 'make'
ret = subprocess.call(command.split())
if ret != 0:
    sys.exit(ret)

command = 'make install'
ret = subprocess.call(command.split())
if ret != 0:
    sys.exit(ret)
The problem I'm running into is that the environment variables set in script.sh are not preserved for configure. I saw a partial solution in Sending multiple commands to a bash shell which must share an environment, but that involves flushing the commands to stdin and polling for a result, which won't really work when you have a really long makefile (mine takes about 10-20 minutes), and it also doesn't give you the return value, which I need to know whether the build was successful or not.
Does anyone know a better way to do this?
If you have a script that sets variables you want to access afterwards, you must source it (similar to what other languages call "include").
Instead of
command = './script.sh ; ./configure <configure-flags>'
ret = subprocess.call(command.split())
you can do
command = ["bash", "-c", "source script.sh; ./configure"]
subprocess.call(command)
The basic problem here is that environment variables are copied only "downward" (from parent to child), never "upward" (child to parent). Your python script is a parent. It runs a shell command, which is therefore a child; the shell runs more commands, which are children of the shell (and therefore grandchildren of the Python process).
To make environment variables persist, you'll need to import them upwards somehow. Exactly how is up to you. A common technique (used in shell scripts as well as in Python) is to have the exporter print the values it wants set, then have the shell or Python process read that output and do the setting. (I see that's what the post you linked-to does.)
For instance, a child process might print:
CONFIG_PATH=/path/to/config/file
(or the same with export added) and then the outer shell would simply eval this. This implies a great deal of trust: what if the child process prints rm -rf /, for instance? One can apply rules (regular expression matching, for instance) to the output before executing it, or even manually (or automatically) parse it but not execute the result until after a verification step.
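A minimal sketch of that pattern in Python could look like this (script.sh is the asker's script; restricting the import to simple NAME=value lines is a crude stand-in for the verification step):
import os
import re
import subprocess

# Source the script in a child bash, then dump the resulting environment.
out = subprocess.check_output(
    ["bash", "-c", "source ./script.sh >/dev/null && env"],
    text=True,
)

# Import only simple NAME=value lines back into this process's environment.
for line in out.splitlines():
    m = re.match(r"^([A-Za-z_][A-Za-z0-9_]*)=(.*)$", line)
    if m:
        os.environ[m.group(1)] = m.group(2)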
Another method is to write the configuration to a file, and have the parent read the file. This is pretty much the same technique, but using a file for the communications depot, instead of fiddling with stdin and stdout. It has several more issues (naming the file, and knowing when to read it).
(There are, of course, many build and/or test frameworks written in Python. I'm not going to recommend any specific ones as I don't have that much experience with them.)

Paramiko exec command failure based on time

I have been searching and fooling around with this problem for 2 days now. Firstly, some context in the form of (summarised) code.
def setService(self, ...
    ssh_client = self.conn.get_ssh_client(hostname, username=username, password=password)
    setCommand = str('service ' + service_name + ' ' + status)
    stdin, stdout, stderr = ssh_client.exec_command(setCommand)
    # time.sleep(2)
    return ...
Secondly. The whole codebase uses the same code, and everything works except for these "service foobar stop" and "service foobar start" commands. They cause a Read Error (in ssh/auth.log) and the command does not actually take effect. All other commands using this setup work fine (we do about 10 different commands). It happens on all target machines, from both dev machines, so I am ruling out ssh configs.
But if I add any time-delaying code after the exec_command (in the commented position), it works. A sleep(2), or a loop doing some debug printing, makes it work fine. The Read Errors disappear from auth.log and the service starts/stops as it should. Removing the sleep, or whatever it may be, breaks it again.
We "hack"-fixed it by leaving a sleep in there, but I do not completely understand why it happens, or why stalling in the function fixes it.
Are we returning too quickly, before the exec was finished on the remote side? I do not think so, it seems to be blocking (returning into stdin, stderr, stdout).
Any advice on this would be highly appreciated.
Note: exec_command(command) is non-blocking.
I usually try to read the output from the buffer (which consumes some time before returning), or I use a time.sleep, which you've used in this case.
If you use stdout.read() or stdout.readlines() (and you should), it forces your script to read the output in the stdout buffer, and in turn wait for exec_command to finish.
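In the setService example, that could look something like this (a sketch; recv_exit_status() also blocks until the remote command finishes, so the sleep is no longer needed):
stdin, stdout, stderr = ssh_client.exec_command(setCommand)
output = stdout.read()                           # blocks until the command completes
exit_status = stdout.channel.recv_exit_status()  # 0 means the service command succeeded
if exit_status != 0:
    print("command failed:", stderr.read())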

Sending commands to a subprocess in Python

I just want to build a little Python music client on my Raspberry Pi. I installed "mpg321" and it works great, but now my problem. After sending the command
os.system("mpg321 -R testPlayer")
python waits for user input like play, pause or quit. If I type this in my terminal, the player pauses the music or quits. Perfect, but I want Python to do that, so I send the command
os.system("LOAD test.mp3")
where LOAD is the command for loading this mp3. But nothing happens. When I quit the player via the terminal I get the error:
sh: 1: LOAD: not found
I think this means that
os.system("mpg321 -R testPlayer")
takes over the whole process, and after I quit it Python tries to execute the command LOAD. So how do I get these things to work together?
My code:
import os

class PyMusic:

    def __init__(self):
        print "initial stuff later"

    def playFile(self, fileName, directory = ""):
        os.system("mpg321 -R testPlayer")
        os.system("LOAD test.mp3")

if __name__ == "__main__":
    pymusic = PyMusic()
    pymusic.playFile("test.mp3")
Thanks for your help!
First, you should almost never be using os.system. See the subprocess module.
One major advantage of using subprocess is that you can choose whatever behavior you want—run it in the background, start it and wait for it to finish (and throw an exception if it returns non-zero), interact with its stdin and stdout explicitly, whatever makes sense.
Here, you're not trying to run another command "LOAD test.mp3", you're trying to pass that as input to the existing process. So:
import subprocess
p = subprocess.Popen(['mpg321', '-R', 'testPlayer'], stdin=subprocess.PIPE)
Then you can do this:
p.stdin.write('LOAD test.mp3\n')
This is roughly equivalent to doing this from the shell:
echo -e 'LOAD test.mp3\n' | mpg321 -R testPlayer
However, you should probably read about communicate, because whenever it's possible to figure out how to make your code work with communicate, it's a lot simpler than trying to deal with generic I/O (especially if you've never coded with pipes, sockets, etc. before).
Or, if you're trying to interact with a command-line UI (e.g., you can't send the command until you get the right prompt), you may want to look at an "expect" library. There are a few of these to choose from, so you should search at PyPI to find the right one for you (although I can say that I've used pexpect successfully in the past, and the documentation is full of samples that get the ideas across a lot more quickly than most expect documentation does).
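With pexpect, a rough equivalent could look like the following sketch (it assumes mpg321's -R mode reads LOAD and QUIT commands from stdin, as in the question; the prompts and timing are not verified here):
import pexpect

player = pexpect.spawn("mpg321 -R testPlayer")
player.sendline("LOAD test.mp3")   # start playback
# ... send PAUSE, LOAD other.mp3, etc. as needed ...
player.sendline("QUIT")            # ask the player to exit
player.expect(pexpect.EOF)         # wait for it to finish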
You are looking for a way to send data to stdin. Here is an example of this using Popen:
from subprocess import Popen, PIPE, STDOUT
p = Popen(['mpg321', '-R', 'testPlayer'], stdout=PIPE, stdin=PIPE, stderr=STDOUT, universal_newlines=True)  # text mode so communicate() accepts a str
mpg123_stdout = p.communicate(input='LOAD test.mp3\n')[0]
print(mpg123_stdout)
You set up pipes for stdin and stdout; then, after you start your process, you write to stdin and read from stdout. Be sure to terminate each command with a newline.
