Save output of bash command into variable inside if..else statement - python

I have the following function:
import subprocess
import sys

def check_process_running(pid_name):
    if subprocess.call(["pgrep", pid_name]):
        print pid_name + " is not running"
    else:
        print pid_name + " is running and has PID="

check_process_running(sys.argv[1])
If I run the script it gives me:
$ ./test.py firefox
22977
firefox is running and has PID=
I need to get pid_num to work with the process further. I've learnt that if I want to create a variable holding the PID value 22977 shown above, I can use:
tempvar = subprocess.Popen(['pgrep', sys.argv[1]], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
pid_num = tempvar.stdout.read()
print pid_num
22977
Is there a solution where the construction of tempvar is not needed, where the PID is picked up and saved into the variable pid_num within the if..else statement, as in my function? Or, what is the most straightforward way to create the pid_num variable with just one call into the shell using subprocess, while keeping the function as simple as it is now?
EDIT:
With the solution below I was able to reconstruct the statement, keep it simple, and get pid_num to work with the process further:
def check_process_running(pid_name):
    pid_num = subprocess.Popen(['pgrep', pid_name], stdout=subprocess.PIPE, stderr=subprocess.STDOUT).communicate()[0]
    if pid_num:
        print pid_name + " is running and has PID=" + pid_num
    else:
        print pid_name + " is not running"

pid_number = subprocess.Popen(['pgrep', sys.argv[1]], stdout=subprocess.PIPE, stderr=subprocess.STDOUT).communicate()[0]
maybe? or probably better
pid_number = subprocess.check_output(['pgrep', sys.argv[1]])
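One caveat with check_output: pgrep exits non-zero when nothing matches, which makes check_output raise CalledProcessError. A minimal sketch keeping the function shape from the question (note that pgrep may print several PIDs, one per line):
import subprocess
import sys

def check_process_running(pid_name):
    try:
        # check_output raises CalledProcessError on pgrep's non-zero exit,
        # i.e. when no process matches pid_name
        pid_num = subprocess.check_output(['pgrep', pid_name]).strip()
        print pid_name + " is running and has PID=" + pid_num
    except subprocess.CalledProcessError:
        print pid_name + " is not running"

check_process_running(sys.argv[1])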

Related

How to execute a non-blocking script in python and get its return code?

I am trying to execute a non-blocking bash script from python and to get its return code. Here is my function so far:
def run_bash_script(script_fullname, logfile):
    my_cmd = ". " + script_fullname + " >" + logfile + " 2>&1"
    p = subprocess.Popen(my_cmd, shell=True)
    os.waitpid(p.pid, 0)
    print(p.returncode)
As you can see, all the output is redirected into a log file, which I can monitor while the bash process is running.
However, the last line just prints 'None' instead of a useful exit code.
What am I doing wrong here?
You should use p.wait() rather than os.waitpid(). os.waitpid() is a low-level API; it knows nothing about the Popen object, so it cannot update p.returncode.
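A corrected version of the function might look like this (a sketch, keeping the shell redirection from the question):
import subprocess

def run_bash_script(script_fullname, logfile):
    my_cmd = ". " + script_fullname + " >" + logfile + " 2>&1"
    p = subprocess.Popen(my_cmd, shell=True)
    p.wait()  # wait() reaps the child and sets p.returncode
    print(p.returncode)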

Exit if the called python script encounters an error

I have a central python script that calls various other python scripts and looks like this:
os.system("python " + script1 + args1)
os.system("python " + script2 + args2)
os.system("python " + script3 + args3)
Now, I want to exit from my central script if any of the sub-scripts encounter an error.
What happens with the current code is that if, say, script1 encounters an error, the console displays that error and the central script moves on to calling script2, and so on.
I want to display the encountered error and immediately exit my central code.
What is the best way to do this?
Overall this is a terrible way to execute a series of commands from within Python. However, here's a minimal way to handle it:
#!python
import os, sys

for script, args in some_tuple_of_commands:
    exit_code = os.system("python " + script + args)
    if exit_code > 0:
        print("Error %d running 'python %s %s'" % (
            exit_code, script, args), file=sys.stderr)
        sys.exit(exit_code)
But honestly, this is all horrible. It's almost always a bad idea to concatenate strings and pass them to your shell for execution from within any programming language.
Look at the subprocess module for much more sane handling of subprocesses in Python.
Also consider trying the sh or the pexpect third party modules depending on what you're trying to do with input or output.
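For instance, a minimal sketch of the same loop with subprocess (assuming, as above, that some_tuple_of_commands holds (script, args) pairs and that args is a whitespace-separated argument string):
import subprocess
import sys

for script, args in some_tuple_of_commands:
    try:
        # check=True raises CalledProcessError on any non-zero exit code
        subprocess.run(["python", script] + args.split(), check=True)
    except subprocess.CalledProcessError as e:
        sys.exit(e.returncode)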
You can try subprocess
import subprocess, sys

try:
    output = subprocess.check_output("python test.py", shell=True)
    print(output)
except subprocess.CalledProcessError as e:
    print(e)
    sys.exit(1)
print("hello world")
I don't know if it's ideal for you but enclosing these commands in a function seems a good idea to me:
This relies on the fact that os.system() returns the raw wait() status: 0 on success, and a non-zero value (for example 256 when the process exits with code 1) on failure.
def runscripts():
    if os.system("python " + script1 + args1): return -1  # returns -1 if script1 fails
    if os.system("python " + script2 + args2): return -2  # returns -2 if script2 fails
    if os.system("python " + script3 + args3): return -3  # pretty obvious
    return 0

runscripts()
# or if you want to exit the main program
if runscripts(): sys.exit(1)
Invoking the operating system like that is a security breach waiting to happen. One should use the subprocess module, because it is more powerful and does not invoke a shell (unless you specifically tell it to). In general, avoid invoking shell whenever possible (see this post).
You can do it like this:
import subprocess
import sys
# create a list of commands
# each command to subprocess.run must be a list of arguments, e.g.
# ["python", "echo.py", "hello"]
cmds = [("python " + script + " " + args).split()
        for script, args in [(script1, args1), (script2, args2), (script3, args3)]]

def captured_run(arglist):
    """Run a subprocess and return the output and returncode."""
    proc = subprocess.run(  # PIPE captures the output
        arglist, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return proc.stdout, proc.stderr, proc.returncode

for cmd in cmds:
    stdout, stderr, rc = captured_run(cmd)
    # do whatever with stdout, stderr (note that they are bytestrings)
    if rc != 0:
        sys.exit(rc)
If you don't care about the output, just remove the subprocess.PIPE stuff and return only the returncode from the function. You may also want to add a timeout to the execution, see the subprocess docs linked above for how to do that.
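For example, as a sketch: passing timeout to subprocess.run makes it raise subprocess.TimeoutExpired if the command runs too long (60 seconds here is an arbitrary choice):
proc = subprocess.run(arglist, stdout=subprocess.PIPE,
                      stderr=subprocess.PIPE, timeout=60)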

Simultaneously wait() for multiple subprocess.Popen commands, then exit

I'm trying to run an unknown number of commands and capture their stdout in a file. However, I am presented with a difficulty when attempting to p.wait() on each instance. My code looks something like this:
print "Started..."
for i, cmd in enumerate(commands):
i = "output_%d.log" % i
p = Popen(cmd, shell=True, universal_newlines=True, stdout=open(i, 'w'))
p.wait()
print "Done!"
I'm looking for a way to execute everything in commands simultaneously and exit the current script only when each and every single process has been completed. It would also help to be informed when each command returns an exit code.
I've looked at some answers, including this one by J.F. Sebastian and tried to adapt it to my situation by changing args=(p.stdout, q) to args=(p.returncode, q) but it ended up exiting immediately and running in the background (possibly due to shell=True?), as well as not responding to any keys pressed inside the bash shell... I don't know where to go with this.
Jeremy Brown's answer also helped, sort of, but select.epoll() was throwing an AttributeError exception.
Is there any other seamless way or trick to make it work? It doesn't need to be cross platform, a solution for GNU/Linux and macOS would be much appreciated. Thanks in advance!
A big thanks to Adam Matan for the biggest hint towards the solution. This is what I came up with, and it works flawlessly:
It initiates each Thread object in parallel
It starts each instance simultaneously
Finally it waits for each exit code without blocking other threads
Here is the code:
import threading
import subprocess
...
def run(cmd):
    name = cmd.split()[0]
    out = open("%s_log.txt" % name, 'w')
    err = open('/dev/null', 'w')
    p = subprocess.Popen(cmd.split(), stdout=out, stderr=err)
    p.wait()
    print name + " completed, return code: " + str(p.returncode)
...
proc = [threading.Thread(target=run, args=(cmd,)) for cmd in commands]  # note the trailing comma: args must be a tuple
[p.start() for p in proc]
[p.join() for p in proc]
print "Done!"
I would rather have added this as a comment, since I was working off of Jack of all Spades' answer, but I had trouble getting the original args=(cmd) form of that command to work: without a trailing comma it is not a tuple, so Thread unpacked my command string.
Here's my edit for Python 3:
import subprocess
import threading

commands = ['sleep 2', 'sleep 4', 'sleep 8']

def run(cmd):
    print("Command %s" % cmd)
    name = cmd.split(' ')[0]
    print("name %s" % name)
    out = open('/tmp/%s_log.txt' % name, 'w')
    err = open('/dev/null', 'w')
    p = subprocess.Popen(cmd.split(' '), stdout=out, stderr=err)
    p.wait()
    print(name + " completed, return code: " + str(p.returncode))

proc = [threading.Thread(target=run, kwargs={'cmd': cmd}) for cmd in commands]
[p.start() for p in proc]
[p.join() for p in proc]
print("Done!")

Use Jython Subprocess to send commands to bash shell

I need to send a number of subsequent commands to one bash shell in a Jython engine.
Executing commands one by one, with os.system(s) or subprocess.call(s, ...), does not work, as a new shell is created every time.
I hope someone has an idea; the following 3 tests are not a sufficient solution.
Sample commands:
cd /home/xxx/dir1/dir2
pwd
cd ..
pwd
In this first test, the commands are executed, but the output is only retrieved at the end.
def testRun1():
    # Actual Output
    # run 0
    # run 1
    # run 2
    # /home/usr/dir1/dir2
    # /home/usr/dir1
    # /home/usr
    print 'All output is shown at the end...'
    proc = subprocess.Popen('/bin/bash',
                            shell=True,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            )
    for i in range(3):
        print 'run ' + str(i)
        proc.stdin.write('pwd\n')
        proc.stdin.write('cd ..\n')
    output = proc.communicate()[0]
    print output
Whereas the 'desired output' is
# run 0
# /home/usr/dir1/dir2
# run 1
# /home/usr/dir1
# run 2
# /home/usr
This second attempt delivers what we want, but the output is only shown when the Jython script is interrupted.
def testRun2():
    # Weird: it is what we want, but all output is blocked until CTRL-C is pressed
    # run 0
    # /home/usr/dir1/dir2
    # run 1
    # /home/usr/dir1
    # run 2
    # /home/usr
    proc = subprocess.Popen('/bin/bash',
                            shell=True,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            )
    for i in range(3):
        print 'run ' + str(i)
        proc.stdin.write('pwd\n')
        proc.stdin.write('cd ..\n')
        print 'start to print output'
        for line in proc.stdout:
            print(line.decode("utf-8"))
        print_remaining(proc.stdout)
    print 'printed output'
This last tryout crashes in the second run because a stream was closed.
def testRun3():
    # This fails with error
    # ValueError: I/O operation on closed file
    proc = subprocess.Popen('/bin/bash',
                            shell=True,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            )
    for i in range(3):
        print 'run ' + str(i)
        proc.stdin.write('pwd\n')
        proc.stdin.write('cd ..\n')
        output = proc.communicate()[0]
        print output
The troubles you're having are only partially to do with subprocess. Pipes are fundamentally the wrong IPC mechanism for this job. To put an interactive command interpreter under scripted control, what you want is a pseudoterminal, and even then it's not as simple as reading and writing.
The Python standard library doesn't have any built-in modules that do pseudoterminal handling for you, unless they've added something very recently that I'm not aware of. However, the third-party package pexpect can do it, and it's geared for exactly the thing you are trying to do.
Using the basic pexpect API:
import pexpect
def testRun4():
proc = pexpect.spawn("/bin/bash")
for i in range(3):
proc.expect(":^[^$#]*[$#] *")
print("run", i)
proc.sendline("pwd")
proc.expect("^[^$#]*[$#] *")
print(proc.before)
proc.sendline("cd ..")
With pexpect.replwrap, it's a little more involved to set up but then the loop is tidier:
def testRun5():
    proc = pexpect.replwrap.REPLWrapper(
        "/bin/bash",
        orig_prompt="^[^$#]*[$#] *",
        prompt_change="PS1='{}'; PS2='{}'")
    for i in range(3):
        print("run", i)
        print(proc.run_command("pwd"))
        proc.run_command("cd ..")

How to control background process in linux

I need to write a script in Linux which can start a background process using one command and stop the process using another.
The specific application is to take userspace and kernel logs for android.
The following command should start taking logs:
$ mylogscript start
The following command should stop the logging:
$ mylogscript stop
Also, the commands should not block the terminal. For example, once I send the start command, the script should run in the background and I should be able to do other work on the terminal.
Any pointers on how to implement this in perl or python would be helpful.
EDIT:
Solved: https://stackoverflow.com/a/14596380/443889
I got the solution to my problem. It essentially involves starting subprocesses in Python and sending a signal to kill the processes when done.
Here is the code for reference:
#!/usr/bin/python
import subprocess
import sys
import os
import signal

U_LOG_FILE_PATH = "u.log"
K_LOG_FILE_PATH = "k.log"
U_COMMAND = "adb logcat > " + U_LOG_FILE_PATH
K_COMMAND = "adb shell cat /proc/kmsg > " + K_LOG_FILE_PATH
LOG_PID_PATH = "log-pid"

def start_log():
    if os.path.isfile(LOG_PID_PATH):
        print "log process already started, found file: ", LOG_PID_PATH
        return
    file = open(LOG_PID_PATH, "w")
    print "starting log process: ", U_COMMAND
    proc = subprocess.Popen(U_COMMAND,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, preexec_fn=os.setsid)
    print "log process1 id = ", proc.pid
    file.write(str(proc.pid) + "\n")
    print "starting log process: ", K_COMMAND
    proc = subprocess.Popen(K_COMMAND,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, preexec_fn=os.setsid)
    print "log process2 id = ", proc.pid
    file.write(str(proc.pid) + "\n")
    file.close()

def stop_log():
    if not os.path.isfile(LOG_PID_PATH):
        print "log process not started, can not find file: ", LOG_PID_PATH
        return
    print "terminating log processes"
    file = open(LOG_PID_PATH, "r")
    log_pid1 = int(file.readline())
    log_pid2 = int(file.readline())
    file.close()
    print "log-pid1 = ", log_pid1
    print "log-pid2 = ", log_pid2
    os.killpg(log_pid1, signal.SIGTERM)
    print "logprocess1 killed"
    os.killpg(log_pid2, signal.SIGTERM)
    print "logprocess2 killed"
    subprocess.call("rm " + LOG_PID_PATH, shell=True)

def print_usage(str):
    print "usage: ", str, "[start|stop]"

# Main script
if len(sys.argv) != 2:
    print_usage(sys.argv[0])
    sys.exit(1)
if sys.argv[1] == "start":
    start_log()
elif sys.argv[1] == "stop":
    stop_log()
else:
    print_usage(sys.argv[0])
    sys.exit(1)
sys.exit(0)
There are a couple of different approaches you can take on this:
1. Signal - you install a signal handler, typically using SIGHUP to signal the process to restart ("start") and SIGTERM to stop it ("stop").
2. Use a named pipe or other IPC mechanism. The background process has a separate thread that simply reads from the pipe, and when something comes in, acts on it. This method relies on having a separate executable file that opens the pipe, and sends messages ("start", "stop", "set loglevel 1" or whatever you fancy).
I'm sorry, I haven't implemented either of these in Python (and I haven't really written anything in Perl), but I doubt it's very hard - there's bound to be ready-made Python code to deal with named pipes; see the sketch below.
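To illustrate the named-pipe idea, here is a minimal Python sketch (the path /tmp/logctl and the command names are hypothetical):
import os

FIFO = "/tmp/logctl"  # hypothetical control pipe
if not os.path.exists(FIFO):
    os.mkfifo(FIFO)

# background process: opening the FIFO blocks until a writer connects
with open(FIFO) as f:
    for line in f:
        cmd = line.strip()
        if cmd == "stop":
            break  # tear down logging and exit
The controlling command would then just be something like echo stop > /tmp/logctl from the shell.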
Edit: Another method that just struck me is that you simply daemonise the program at start, and then let the "stop" version find your daemonised process (e.g. by reading the "pidfile" that you stashed somewhere suitable) and send it a SIGTERM to terminate.
I don't know if this is the optimum way to do it in perl, but for example:
system("sleep 60 &")
This starts a background process that will sleep for 60 seconds without blocking the terminal. The ampersand in shell means to do something in the background.
A simple mechanism for telling the process when to stop is to have it periodically check for the existence of a certain file. If the file exists, it exits.
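As a Python sketch of that file-check idea (the stop-file path and do_logging_work() are placeholders):
import os
import time

while not os.path.exists("/tmp/stop_logging"):  # hypothetical stop file
    do_logging_work()  # placeholder for one unit of logging work
    time.sleep(1)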
