Python subprocess - run multiple shell commands over SSH

I am trying to open an SSH pipe from one Linux box to another, run a few shell commands, and then close the SSH.
I don't have control over the packages on either box, so something like fabric or paramiko is out of the question.
I have had luck using the code at the bottom of this question to run one bash command, in this case "uptime", but am not sure how to issue one command after another. I'm expecting something like:
sshProcess = subprocess.call('ssh ' + <remote client>, <subprocess stuff>)
lsProcess = subprocess.call('ls', <subprocess stuff>)
lsProcess.close()
uptimeProcess = subprocess.call('uptime', <subprocess stuff>)
uptimeProcess.close()
sshProcess.close()
What part of the subprocess module am I missing?
Thanks
pingtest = subprocess.call("ping -c 1 %s" % <remote client>, shell=True, stdout=open('/dev/null', 'w'), stderr=subprocess.STDOUT)

if pingtest == 0:
    print '%s: is alive' % <remote client>

    # Uptime + CPU Load averages
    print 'Attempting to get uptime...'
    sshProcess = subprocess.Popen('ssh ' + <remote client>, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    sshOutput, stderr = sshProcess.communicate()
    print sshOutput

    uptimeProcess = subprocess.Popen('uptime', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    uptimeOutput, stderr = uptimeProcess.communicate()
    print 'Uptime : ' + uptimeOutput.split('up ')[1].split(',')[0]
else:
    print "%s: did not respond" % <remote client>

Basically, if you call subprocess it creates a local subprocess, not a remote one, so you have to interact with the ssh process itself, along the lines of the code below.
Be aware that if you construct the remote commands dynamically, the input is susceptible to shell injection, and the END line then has to be a unique marker. To avoid relying on the uniqueness of the END line, an easier way is to run a separate ssh command for each remote command.
from __future__ import print_function, unicode_literals
import subprocess

sshProcess = subprocess.Popen(['ssh',
                               '-tt',
                               <remote client>],
                              stdin=subprocess.PIPE,
                              stdout=subprocess.PIPE,
                              universal_newlines=True,
                              bufsize=0)
sshProcess.stdin.write("ls .\n")
sshProcess.stdin.write("echo END\n")
sshProcess.stdin.write("uptime\n")
sshProcess.stdin.write("logout\n")
sshProcess.stdin.close()

for line in sshProcess.stdout:
    if line == "END\n":
        break
    print(line, end="")

# to catch the lines up to logout
for line in sshProcess.stdout:
    print(line, end="")
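For comparison, the "separate ssh command per remote command" alternative mentioned above could look roughly like this. A minimal sketch; the host name is a placeholder:
import subprocess

host = "user@remotehost"  # placeholder
for remote_cmd in ("ls .", "uptime"):
    # One ssh round-trip per command; no END marker needed.
    result = subprocess.check_output(["ssh", host, remote_cmd],
                                     universal_newlines=True)
    print(result, end="")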

Related

Log output of background process to a file

I have a time-consuming SNMP walk task to perform, which I am running as a background process using Popen. How can I capture the output of this background task in a log file? In the code below, I am trying to do an snmpwalk on each IP in ip_list and log all the results to abc.txt. However, the generated file abc.txt is empty.
Here is my sample code below -
import subprocess
import sys

f = open('abc.txt', 'a+')
ip_list = ["192.163.1.104", "192.163.1.103", "192.163.1.101"]

for ip in ip_list:
    cmd = "snmpwalk.exe -t 1 -v2c -c public "
    cmd = cmd + ip
    print(cmd)
    p = subprocess.Popen(cmd, shell=True, stdout=f)
    p.wait()

f.close()
print("File output - " + open('abc.txt', 'r').read())
the sample output from the command can be something like this for each IP -
sysDescr.0 = STRING: Software: Whistler Version 5.1 Service Pack 2 (Build 2600)
sysObjectID.0 = OID: win32
sysUpTimeInstance = Timeticks: (15535) 0:02:35.35
sysContact.0 = STRING: unknown
sysName.0 = STRING: UDLDEV
sysLocation.0 = STRING: unknown
sysServices.0 = INTEGER: 72
sysORID.4 = OID: snmpMPDCompliance
I have already tried Popen, but it does not log output to a file when the command is a time-consuming background process. It works when I run something quick like ls/dir. Any help is appreciated.
The main issue here, I assume, is the expectation of what Popen does and how it works.
p.wait() will wait for the process to finish before continuing; that is why ls, for instance, works but more time-consuming tasks don't appear to. And there's nothing flushing the output to the file automatically in the meantime.
The way you've set it up is meant to work as:
Execute command
Wait for exit
Catch output
and then work with the result. For your use case, you'd be better off using an alternative library, or using stdout=subprocess.PIPE and catching the output yourself, which would mean something along the lines of:
import subprocess
import sys

ip_list = ["192.163.1.104", "192.163.1.103", "192.163.1.101"]

with open('abc.txt', 'a+') as output:
    for ip in ip_list:
        print(cmd := f"snmpwalk.exe -t 1 -v2c -c public {ip}")
        # text=True so stdout yields str, matching the '' sentinel below
        process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                                   text=True)  # Be wary of shell=True
        while process.poll() is None:
            for c in iter(lambda: process.stdout.read(1), ''):
                if c != '':
                    output.write(c)

with open('abc.txt', 'r') as log:
    print("File output: " + log.read())
The key things to take away here are process.poll(), which checks whether the process has finished, and, while it hasn't, catching the output with process.stdout.read(1), one character at a time. If you know newlines are coming, you can switch those three lines to output.write(process.stdout.readline()) and you're all set.
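A rough sketch of that line-oriented variant, using the same command as above (one IP shown for brevity):
import subprocess

cmd = "snmpwalk.exe -t 1 -v2c -c public 192.163.1.104"
with open('abc.txt', 'a+') as output:
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, text=True)
    # Copy whole lines as they arrive; the '' sentinel marks EOF.
    for line in iter(process.stdout.readline, ''):
        output.write(line)
    process.wait()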

ssh + here-document syntax with Python

I'm trying to run a set of commands through ssh from a Python script. I came upon the here-document concept and thought: cool, let me implement something like this:
command = ( ( 'ssh user@host /usr/bin/bash <<EOF\n'
            + 'cd %s \n'
            + 'qsub %s\n'
            + 'EOF' ) % (test_dir, jobfile) )
try:
    p = subprocess.Popen( command.split(), stdout=subprocess.PIPE, stderr=subprocess.STDOUT )
except:
    print ('from subprocess.Popen( %s )' % command.split() )
    raise Exception
# endtry
Unfortunately, here is what I get:
bash: warning: here-document at line 0 delimited by end-of-file (wanted `EOF')
Not sure how I can code up that end-of-file statement (I'm guessing the newline chars get in the way here?)
I've done a search on the website but there seem to be no Python examples of this sort...
Here is a minimal working example; the key is that after << EOF the remaining string should not be split. Note that command.split() is only called once.
import subprocess

# My bash is at /usr/local/bin/bash, your mileage may vary.
command = 'ssh user@host /usr/local/bin/bash'
heredoc = ('<< EOF \n'
           'cd Downloads \n'
           'touch test.txt \n'
           'EOF')
command = command.split()
command.append(heredoc)
print command

try:
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
except Exception as e:
    print e
Verify by checking that the created file test.txt shows up in the Downloads directory on the host that you ssh'd into.
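If the heredoc quoting ever gets awkward, the stdin approach from the first question above also works here: feed the script to the remote shell through ssh's standard input. A minimal sketch, where the host, directory, and job file are placeholders:
import subprocess

script = "cd /path/to/test_dir\nqsub jobfile.sh\n"  # placeholder commands
p = subprocess.Popen(["ssh", "user@host", "/usr/bin/bash"],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT,
                     universal_newlines=True)
out, _ = p.communicate(script)  # sends the script, then EOF
print(out)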

Not able to give inputs to subprocess(process which runs adb shell command) after 100 iterations

I want to run a stress test for adb (Android Debug Bridge) shell. (adb shell in this respect is just a command-line tool provided by Android phones.)
I create a subprocess from Python, and in this subprocess I execute the 'adb shell' command. There are some commands which have to be given to this subprocess, which I am providing via the stdin of the subprocess.
Everything seems to be fine, but when I run a stress test, after around 100 iterations the command which I give to stdin does not reach the subprocess. If I run the commands in a separate terminal they run fine, so the problem is with this stdin.
Can anyone tell me what I am doing wrong? Below is the code sample:
class ADB():
    def __init__(self):
        self.proc = subprocess.Popen('adb shell', stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                                     stderr=subprocess.STDOUT, shell=True, bufsize=0)

    def provideAMcommand(self, testParam):
        try:
            cmd1 = "am startservice -n com.test.myapp/.ADBSupport -e \"" + "command" + "\" \"" + "test" + "\""
            cmd2 = " -e \"" + "param" + "\"" + " " + testParam
            print cmd1 + cmd2
            sys.stdout.flush()
            self.proc.stdin.write(cmd1 + cmd2 + "\n")
        except:
            raise Exception("Phone is not Connected to Desktop or ADB is not available \n")
If it works for the first few commands but blocks later, then you might have forgotten to read from self.proc.stdout, which (as the docs warn) can lead to the OS pipe buffer filling up and blocking the child process.
To discard the output, redirect it to os.devnull:
import os
from subprocess import Popen, PIPE, STDOUT
DEVNULL = open(os.devnull, 'wb')
# ...
self.proc = Popen(['adb', 'shell'], stdin=PIPE, stdout=DEVNULL, stderr=STDOUT)
# ...
self.proc.stdin.write(cmd1 + cmd2 + "\n")
self.proc.stdin.flush()
There is the pexpect module, which might be a better tool for dialog-based interaction (if you want to both read and write intermittently).
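A rough sketch of that dialog style with pexpect; the prompt pattern here is an assumption, adjust it to whatever your adb shell actually prints:
import pexpect

PROMPT = r"\$ "  # assumed prompt pattern; real devices vary

child = pexpect.spawn("adb shell")
child.expect(PROMPT)
child.sendline("am startservice -n com.test.myapp/.ADBSupport -e command test")
child.expect(PROMPT)          # wait until the command has finished
print(child.before.decode())  # output produced before the prompt returned
child.sendline("exit")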
In provideAMcommand you are writing to and flushing the stdout of your main process. That will not send anything to the stdin of the child process you have created with Popen. The following code creates a new bash child process, a bit like the code in your __init__:
import subprocess as sp
cproc = sp.Popen("bash", stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE, shell=True)
Now, the easiest way to communicate with that child process is the following:
#Send command 'ls' to bash.
out, err = cproc.communicate("ls")
This will send the text "ls" and EOF to bash (equal to running a bash script with only the text "ls" in it). Bash will execute the ls command and then quit. Anything that bash or ls write to stdout and stderr will end up in the variables out and err respectively. I have not used the adb shell, but I guess it behaves like bash in this regard.
If you just want your child process to print to the terminal, don't specify the stdout and stderr arguments to Popen.
You can check the exit code of the child, and raise an exception if it is non-zero (indicating an error):
if cproc.returncode != 0:
    raise Exception("Child process returned non-zero exit code")

How to control background process in linux

I need to write a script in Linux which can start a background process using one command and stop the process using another.
The specific application is to capture userspace and kernel logs for Android.
The following command should start taking logs:
$ mylogscript start
The following command should stop the logging:
$ mylogscript stop
Also, the commands should not block the terminal. For example, once I send the start command, the script should run in the background and I should be able to do other work in the terminal.
Any pointers on how to implement this in perl or python would be helpful.
EDIT:
Solved: https://stackoverflow.com/a/14596380/443889
I got the solution to my problem. The solution essentially involves starting a subprocess in Python and sending a signal to kill the process when done.
Here is the code for reference:
#!/usr/bin/python

import subprocess
import sys
import os
import signal

U_LOG_FILE_PATH = "u.log"
K_LOG_FILE_PATH = "k.log"
U_COMMAND = "adb logcat > " + U_LOG_FILE_PATH
K_COMMAND = "adb shell cat /proc/kmsg > " + K_LOG_FILE_PATH
LOG_PID_PATH = "log-pid"

def start_log():
    if os.path.isfile(LOG_PID_PATH):
        print "log process already started, found file: ", LOG_PID_PATH
        return

    file = open(LOG_PID_PATH, "w")

    print "starting log process: ", U_COMMAND
    proc = subprocess.Popen(U_COMMAND,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, preexec_fn=os.setsid)
    print "log process1 id = ", proc.pid
    file.write(str(proc.pid) + "\n")

    print "starting log process: ", K_COMMAND
    proc = subprocess.Popen(K_COMMAND,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, preexec_fn=os.setsid)
    print "log process2 id = ", proc.pid
    file.write(str(proc.pid) + "\n")

    file.close()

def stop_log():
    if not os.path.isfile(LOG_PID_PATH):
        print "log process not started, can not find file: ", LOG_PID_PATH
        return

    print "terminating log processes"
    file = open(LOG_PID_PATH, "r")
    log_pid1 = int(file.readline())
    log_pid2 = int(file.readline())
    file.close()

    print "log-pid1 = ", log_pid1
    print "log-pid2 = ", log_pid2

    os.killpg(log_pid1, signal.SIGTERM)
    print "logprocess1 killed"
    os.killpg(log_pid2, signal.SIGTERM)
    print "logprocess2 killed"

    subprocess.call("rm " + LOG_PID_PATH, shell=True)

def print_usage(str):
    print "usage: ", str, "[start|stop]"

# Main script
if len(sys.argv) != 2:
    print_usage(sys.argv[0])
    sys.exit(1)

if sys.argv[1] == "start":
    start_log()
elif sys.argv[1] == "stop":
    stop_log()
else:
    print_usage(sys.argv[0])
    sys.exit(1)

sys.exit(0)
There are a couple of different approaches you can take here:
1. Signals - install a signal handler and use, typically, SIGHUP to tell the process to restart ("start") and SIGTERM to stop it ("stop").
2. A named pipe or other IPC mechanism. The background process has a separate thread that simply reads from the pipe and, when something comes in, acts on it. This method relies on having a separate executable that opens the pipe and sends messages ("start", "stop", "set loglevel 1", or whatever you fancy); see the FIFO sketch after this answer.
I'm sorry, I haven't implemented either of these in Python (and in perl I haven't really written anything), but I doubt it's very hard - there's bound to be ready-made Python code for dealing with named pipes.
Edit: Another method that just struck me: simply daemonize the program at start, and then let the "stop" invocation find your daemonized process (e.g. by reading the "pidfile" that you stashed somewhere suitable) and send it a SIGTERM to terminate.
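A minimal sketch of option 2, assuming a FIFO path and message names made up for illustration:
import os

FIFO = "/tmp/mylogscript.fifo"  # made-up path
if not os.path.exists(FIFO):
    os.mkfifo(FIFO)

running = True
while running:
    # Opening a FIFO for reading blocks until a writer connects,
    # e.g.:  echo stop > /tmp/mylogscript.fifo
    with open(FIFO) as pipe:
        for msg in pipe:
            msg = msg.strip()
            if msg == "stop":
                running = False
            elif msg.startswith("set loglevel"):
                pass  # act on other control messages here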
I don't know if this is the optimum way to do it in perl, but for example:
system("sleep 60 &")
This starts a background process that will sleep for 60 seconds without blocking the terminal. The ampersand in shell means to do something in the background.
A simple mechanism for telling the process when to stop is to have it periodically check for the existence of a certain file. If the file exists, it exits.
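A tiny sketch of that stop-file mechanism on the Python side; the sentinel file name is made up:
import os
import time

STOP_FILE = "/tmp/mylogscript.stop"  # made-up sentinel path

while not os.path.exists(STOP_FILE):
    # ... do one round of logging work here ...
    time.sleep(1)

os.remove(STOP_FILE)  # clean up so the next "start" works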

catching stdout in realtime from subprocess

I want to subprocess.Popen() rsync.exe in Windows, and print the stdout in Python.
My code works, but it doesn't catch the progress until a file transfer is done! I want to print the progress for each file in real time.
Using Python 3.1 now since I heard it should be better at handling IO.
import subprocess, time, os, sys

cmd = "rsync.exe -vaz -P source/ dest/"
p, line = True, 'start'

p = subprocess.Popen(cmd,
                     shell=True,
                     bufsize=64,
                     stdin=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     stdout=subprocess.PIPE)

for line in p.stdout:
    print(">>> " + str(line.rstrip()))
    p.stdout.flush()
Some rules of thumb for subprocess.
Never use shell=True. It needlessly invokes an extra shell process to call your program.
When calling processes, arguments are passed around as lists. sys.argv in python is a list, and so is argv in C. So you pass a list to Popen to call subprocesses, not a string.
Don't redirect stderr to a PIPE when you're not reading it.
Don't redirect stdin when you're not writing to it.
Example:
import subprocess, time, os, sys

cmd = ["rsync.exe", "-vaz", "-P", "source/", "dest/"]

p = subprocess.Popen(cmd,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)

for line in iter(p.stdout.readline, b''):
    print(">>> " + line.rstrip().decode())
That said, it is probable that rsync buffers its output when it detects that it is connected to a pipe instead of a terminal. This is the default behavior - when connected to a pipe, programs must explicitly flush stdout for real-time results; otherwise the standard C library will buffer it.
To test for that, try running this instead:
cmd = [sys.executable, 'test_out.py']
and create a test_out.py file with the contents:
import sys
import time
print ("Hello")
sys.stdout.flush()
time.sleep(10)
print ("World")
Executing that subprocess should give you "Hello" and wait 10 seconds before giving "World". If that happens with the Python code above and not with rsync, it means rsync itself is buffering its output, so you are out of luck.
A solution would be to connect directly to a pty, using something like pexpect.
I know this is an old topic, but there is a solution now. Call rsync with the option --outbuf=L. Example:
cmd = ['rsync', '-arzv', '--backup', '--outbuf=L', 'source/', 'dest']
p = subprocess.Popen(cmd,
                     stdout=subprocess.PIPE)

for line in iter(p.stdout.readline, b''):
    print '>>> {}'.format(line.rstrip())
Depending on the use case, you might also want to disable the buffering in the subprocess itself.
If the subprocess will be a Python process, you could do this before the call:
os.environ["PYTHONUNBUFFERED"] = "1"
Or alternatively pass this in the env argument to Popen.
Otherwise, if you are on Linux/Unix, you can use the stdbuf tool. E.g. like:
cmd = ["stdbuf", "-oL"] + cmd
See also here about stdbuf or other options.
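For the env-argument variant mentioned above, a minimal sketch (the child command is a placeholder):
import os
import subprocess

cmd = ["python", "child_script.py"]  # placeholder Python child process
env = dict(os.environ, PYTHONUNBUFFERED="1")
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env)

for line in iter(p.stdout.readline, b''):
    print(line.rstrip().decode())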
On Linux, I had the same problem of getting rid of the buffering. I finally used "stdbuf -o0" (or, unbuffer from expect) to get rid of the PIPE buffering.
proc = Popen(['stdbuf', '-o0'] + cmd, stdout=PIPE, stderr=PIPE)
stdout = proc.stdout
I could then use select.select on stdout.
See also https://unix.stackexchange.com/questions/25372/
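A rough sketch of that select.select() polling loop from the answer above (Linux/Unix only; the command is a placeholder):
import os
import select
from subprocess import Popen, PIPE, STDOUT

cmd = ["your_command"]  # placeholder
proc = Popen(["stdbuf", "-o0"] + cmd, stdout=PIPE, stderr=STDOUT)

while True:
    # Block until stdout has data (or the pipe reaches EOF).
    ready, _, _ = select.select([proc.stdout], [], [])
    chunk = os.read(proc.stdout.fileno(), 4096)
    if not chunk:  # EOF: the child closed its stdout
        break
    print(chunk.decode(), end="")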
for line in p.stdout:
    ...
always blocks until the next line-feed.
For "real-time" behaviour you have to do something like this:
while True:
    inchar = p.stdout.read(1)
    if inchar:  # neither empty string nor None
        print(str(inchar), end='')  # or end=None to flush immediately
    else:
        print('')  # flush for implicit line-buffering
        break
The while-loop is left when the child process closes its stdout or exits.
read()/read(-1) would block until the child process closed its stdout or exited.
Your problem is:
for line in p.stdout:
    print(">>> " + str(line.rstrip()))
    p.stdout.flush()
the iterator itself has extra buffering.
Try doing like this:
while True:
    line = p.stdout.readline()
    if not line:
        break
    print line
You cannot get stdout to print unbuffered to a pipe (unless you can rewrite the program that prints to stdout), so here is my solution:
Redirect stdout to stderr, which is not buffered. '<cmd> 1>&2' should do it. Open the process as follows: myproc = subprocess.Popen('<cmd> 1>&2', shell=True, stderr=subprocess.PIPE) (shell=True is needed so the redirection is interpreted).
You cannot distinguish stdout from stderr, but you get all output immediately.
Hope this helps anyone tackling this problem.
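A minimal sketch of that trick ("your_command" is a placeholder):
import subprocess

# Route the program's stdout onto stderr (which stdio leaves unbuffered),
# then read everything back from the stderr pipe.
proc = subprocess.Popen("your_command 1>&2", shell=True,
                        stderr=subprocess.PIPE)

for line in iter(proc.stderr.readline, b''):
    print(line.rstrip().decode())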
To avoid buffering of the output, you might want to try pexpect:
import pexpect

child = pexpect.spawn(launchcmd, args, timeout=None)
while True:
    try:
        child.expect('\n')
        print(child.before)
    except pexpect.EOF:
        break
PS : I know this question is pretty old, still providing the solution which worked for me.
PPS: got this answer from another question
p = subprocess.Popen(command,
                     bufsize=0,
                     universal_newlines=True)
I am writing a GUI for rsync in Python and had the same problems. This problem troubled me for several days until I found this in the docs:
If universal_newlines is True, the file objects stdout and stderr are opened as text files in universal newlines mode. Lines may be terminated by any of '\n', the Unix end-of-line convention, '\r', the old Macintosh convention or '\r\n', the Windows convention. All of these external representations are seen as '\n' by the Python program.
It seems that rsync outputs '\r' while a transfer is in progress.
If you run something like this in a thread, and save the ffmpeg_time value somewhere accessible (e.g. as an attribute on the object that owns the method), it works very nicely, even with threading in tkinter; the loop prints the elapsed-time stamps as ffmpeg emits them:
import re
import subprocess

input = 'path/input_file.mp4'
output = 'path/output_file.mp4'
command = "ffmpeg -y -v quiet -stats -i \"" + str(input) + "\" -metadata title=\"#alaa_sanatisharif\" -preset ultrafast -vcodec copy -r 50 -vsync 1 -async 1 \"" + output + "\""

process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                           universal_newlines=True, shell=True)
for line in process.stdout:
    reg = re.search(r'\d\d:\d\d:\d\d', line)
    ffmpeg_time = reg.group(0) if reg else ''
    print(ffmpeg_time)
Change the stdout from the rsync process to be unbuffered.
p = subprocess.Popen(cmd,
                     shell=True,
                     bufsize=0,  # 0=unbuffered, 1=line-buffered, else buffer-size
                     stdin=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     stdout=subprocess.PIPE)
I've noticed that there is no mention of using a temporary file as an intermediate. The following gets around the buffering issues by outputting to a temporary file, and lets you parse the data coming from rsync without connecting to a pty. I tested the following on a Linux box; the output of rsync tends to differ across platforms, so the regular expressions to parse the output may vary:
import subprocess, time, tempfile, re

# mkstemp() returns an OS-level file descriptor plus the file's path,
# one end for the child to write to and a path for us to read from.
pipe_output, file_name = tempfile.mkstemp()
cmd = ["rsync", "-vaz", "-P", "/src/", "/dest"]
p = subprocess.Popen(cmd, stdout=pipe_output,
                     stderr=subprocess.STDOUT)

while p.poll() is None:
    # p.poll() returns None while the program is still running
    # sleep for 1 second
    time.sleep(1)
    last_line = open(file_name).readlines()
    # it's possible that it hasn't output yet, so continue
    if len(last_line) == 0:
        continue
    last_line = last_line[-1]
    # Matching to "[bytes downloaded] number% [speed] number:number:number"
    match_it = re.match(".* ([0-9]*)%.* ([0-9]*:[0-9]*:[0-9]*).*", last_line)
    if not match_it:
        continue
    # in this case, the percentage is stored in match_it.group(1),
    # time in match_it.group(2). We could do something with it here...
In Python 3, here's a solution which takes a command off the command line and delivers nicely decoded strings in real time as they are received.
Receiver (receiver.py):
import subprocess
import sys

cmd = sys.argv[1:]
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)

for line in p.stdout:
    print("received: {}".format(line.rstrip().decode("utf-8")))
A simple example program that generates real-time output (dummy_out.py):
import time
import sys

for i in range(5):
    print("hello {}".format(i))
    sys.stdout.flush()
    time.sleep(1)
Output:
$ python receiver.py python dummy_out.py
received: hello 0
received: hello 1
received: hello 2
received: hello 3
received: hello 4
