Python loop synchronization - python

I am calling an external process multiple times, in a loop. In pseudocode:

for i in xrange(1, 100):
    call external proc which inserts a row into a table

The problem here is that whenever the external process is called, it runs in a separate thread, which could take any amount of time to finish. So Python continues with the next iteration while the previous call is still running, and the next insertion runs into a row lock, which prevents it.
What is the ideal way to wait for the process to complete, under the following constraints:
I cannot modify the way the external process works.
I know I can, but I do not want to use a hack like time.sleep
I cannot modify any DB settings.
The code for calling the external proc is:
def run_query(query, username, password):
    result = None
    try:
        process = subprocess.Popen("<path to exe> -u " + username + " -p " + password + " " + query,
                                   shell=True,
                                   stdout=subprocess.PIPE,
                                   stderr=subprocess.PIPE)
        result, error = process.communicate()
        if error != '':
            _pretty_error('stderr', error)
    except OSError, error:
        _pretty_error('OSError', str(error))
    return result

You have several options according to the subprocess documentation:
Calling process.wait() after running process = subprocess.Popen(...)
Using subprocess.call instead of Popen
Using subprocess.check_call instead of Popen

Depending on how the result looks, one way would be to use wait():
process = subprocess.Popen("<path to exe> -u " + username + " -p " + password + " " + query,
                           shell=True,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
retcode = process.wait()
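One caveat worth noting: with stdout=PIPE and stderr=PIPE, wait() alone can deadlock if the child fills the pipe buffer before exiting, whereas communicate(), as already used in run_query, drains the pipes and waits for the exit in one call. A minimal sketch of the safe pattern:

process = subprocess.Popen("<path to exe> -u " + username + " -p " + password + " " + query,
                           shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
result, error = process.communicate()  # drains both pipes and waits for exit
retcode = process.returncode           # exit status is available afterwards

So if the loop still overlaps the insertions, the external exe itself is probably detaching into a background thread or process before the row is inserted.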

You can try to start the process like:
retcode = subprocess.call(["<path to exe>", "-u", username, "-p", password, query],
                          shell=False)
This way the main process sleeps until the subprocess ends, but you don't get its output. Note that subprocess.call returns the exit code, not a process object.
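The third option from the list, subprocess.check_call, behaves like subprocess.call but raises CalledProcessError when the exit code is non-zero, so the loop stops on the first failed insertion instead of silently continuing. A minimal sketch, in the same Python 2 style as the question:

try:
    subprocess.check_call(["<path to exe>", "-u", username, "-p", password, query],
                          shell=False)
except subprocess.CalledProcessError, error:
    _pretty_error('CalledProcessError', str(error))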

Related

When using Python to run an adb shell command, how can I kill a running program in the adb shell?

On an Android device, if I enter adb shell in iTerm and then run a program that continuously outputs data (like getevent or logcat), I can kill the program with CTRL+C and return to the adb shell (not to the OS shell).
What I want to know is: how can I do the same thing in Python?
Here's my sample code:
import subprocess
import threading
import time

def thread_func(process):
    while True:
        out = process.stdout.readline()
        line = out.decode()
        if not line:
            print("EOF")
            break
        print(line.replace('\n', ''))

def thread_read(process):
    t = threading.Thread(target=thread_func, args=[process])
    t.start()

process = subprocess.Popen("adb shell", shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
thread_read(process)
time.sleep(1)

cmd = "ls"
print("\n=====> send " + cmd)
process.stdin.write((cmd + "\n").encode())
process.stdin.flush()
time.sleep(1)

cmd = "getevent"
print("\n=====> send " + cmd)
process.stdin.write((cmd + "\n").encode())
process.stdin.flush()
# after sending 'getevent' I can continuously read event data from stdout
# but how can I kill getevent and get back to the adb shell environment so that I can send a new command?
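No answer is recorded here, but one workaround, sketched below as an assumption rather than the canonical fix, is to run each long-lived command in its own adb shell subprocess and terminate that subprocess from Python; closing the local adb client normally takes the remote getevent down with it, while the original interactive shell stays usable:

import subprocess
import time

# run the long-lived command in its own adb shell process
event_proc = subprocess.Popen(["adb", "shell", "getevent"],
                              stdout=subprocess.PIPE)
time.sleep(5)            # consume events for a while (read event_proc.stdout here)
event_proc.terminate()   # kill the local adb client; the remote getevent exits with it
event_proc.wait()
# the interactive 'adb shell' Popen from above is untouched and can take new commands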

Getting subprocess.Popen stdout when running by cron

I want to get a service status and, if it's not up, send the status (stdout) by email.
This script is scheduled to run every hour by cron.
When running manually, the following works fine:
import os
import subprocess

def is_service_running(name):
    with open(os.devnull, 'wb') as hide_output:
        proc = subprocess.Popen(['service', name, 'status'], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
        output = proc.stdout.read()
        exit_code = proc.wait()
        return exit_code == 0, output
But when running by cron, output is empty.
How can I capture stdout when running by cron?
Thank you
The problem wasn't cron but shell=True.
With shell=True, Popen expects a single command string; if you pass a list, only its first element is used as the command (on POSIX).
So when I updated my call to:
proc = subprocess.Popen(['service ' + name + ' status'], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
everything worked.
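A cleaner variant, shown here as an editor's sketch rather than the asker's fix, is to drop shell=True and keep the argument list, which sidesteps the string-versus-list confusion entirely:

import subprocess

# argument-list form: no shell is involved, so no quoting pitfalls
proc = subprocess.Popen(['service', name, 'status'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, _ = proc.communicate()
exit_code = proc.returncode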

How to run a new process and establish a connection with it without any shell script?

My purpose is to create a new process running another program and establish a long-lived connection with it (the ability to write to its stdin and read results), i.e. not an atomic write-read operation followed by killing the created process. I have to use program code only, not any shell command.
Here is my code:
import subprocess

proc = subprocess.Popen(['myprog', '-l'], shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# proc was kept
# after some waiting I try to send proc a command
if proc.returncode == None:
    proc.stdin.write(b'command')
    response = proc.communicate()[0]

This code either returns an empty string (if one transaction was committed) or raises BrokenPipeError (if run in a loop).
Does proc stay alive after the first proc.communicate()? What approach do I need to get control of proc's stdin/stdout?
You are checking for proc.returncode == None.
But per the subprocess documentation, returncode is None while the process is still running; it is set only after the process terminates (0 on success, a negative value if killed by a signal on POSIX), and it is only updated by calls such as poll(), wait(), or communicate(). Note also that communicate() waits for the process to terminate and closes its pipes, which is why a second write raises BrokenPipeError.
Second, if you have long-running processes, you should either adjust and handle the timeout, or disable it.
Third: you should really, really avoid shell=True in Popen; it is a huge security risk.
Here is an example of how I normally deal with Popen:
from shlex import split as sh_split
from subprocess import PIPE, Popen, TimeoutExpired

def launch(command, cwd=None, stdin=None, timeout=15):
    with Popen(
        sh_split(command), universal_newlines=True,
        cwd=cwd, stdout=PIPE, stderr=PIPE, stdin=PIPE
    ) as proc:
        try:
            out, err = proc.communicate(input=stdin, timeout=timeout)
        except TimeoutExpired:
            proc.kill()
            out, err = proc.communicate()
    return proc.returncode, out.splitlines(), err.splitlines()
This is for short-lived processes, but I hope you can see how stdin, stdout and stderr handling is done.
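A quick usage sketch (the command is illustrative only):

# exit code, stdout lines, and stderr lines come back as a tuple
code, out_lines, err_lines = launch('ls -l', timeout=5)
if code == 0:
    for line in out_lines:
        print(line)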

ping command doesn't return when pinging a remote server that is down

I have a function in a Python script that checks whether a remote server is up, using ping. If the server is not up, the function should wait until it is, and only then return. But the function never returns when the remote server is down; when the server is up, it returns fine.
I have tried both subprocess and os.system, and various ping parameters like -c, -w and -W, but nothing seems to help. Any ideas on what I might be doing wrong?
Here is the code:
def waitTillUp():
    command = "ping -c 1 -W 2 " + remoteServer
    response = os.system(command)
    if response == 0:
        print "UP\n"
    else:
        print "Down\n"
    '''
    args = shlex.split(command)
    p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, err = p.communicate()
    logging.debug("Waiting for the Remote Server to be up")
    while "4 packets transmitted, 4 received" not in output:
        logging.debug("Waiting for the remoteServer to be up")
        p = subprocess.Popen(args, stdout=subprocess.PIPE)
        output, err = p.communicate()
    '''
Ideally it should loop till the server is up, but just for the sake of checking whether the call returns I have put in an if/else. When the remote server does come up, the script is still stuck. Once I hit Enter in the window, it returns. Any suggestions are welcome.
Update #2
Right now I am just trying the bare call below, without checking the response at all. I expected the program to fall through to main and end, but it does not even do that.
def waitTillUp():
    command = "ping -c 1 " + Params.storageArray
    response = os.system(command)
I am using a lot of subprocess.Popen calls one after the other, so is it possible that some buffer is not getting cleared, or something of that sort? Could this be the reason for the weird behavior?
Update #3
The problem is probably in the reboot call to the remote server, made before the code I pasted. After changing a few things, I realized that the function doing the reboot is where execution stops returning.
This is the complete code. With logging statements I determined that the reboot call, marked below as the "CULPRIT CALL", is where execution gets stuck; it does not proceed until it gets an ENTER key from the user.
def waitTillUp():
    command = "ping -c 1 " + remoteServer
    response = os.system(command)

def execCmd(op, command):
    logging.info("Executing %s operation (command: %s)" % (op, command))
    args = shlex.split(command)
    sys.stdout.flush()
    p = subprocess.Popen(args, stdout=subprocess.PIPE)
    output, err = p.communicate()
    logging.debug("Output of %s operation: %s" % (op, output))

def install():
    execCmd("chmod", "ssh root@" + remoteServer + " chmod +x ~/OS*.bin")
    execCmd("reboot", "ssh root@" + remoteServer + " reboot -nf")  ### CULPRIT CALL
    waitTillUp()

def main():
    install()
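No accepted answer appears in the thread, but the need to press ENTER strongly suggests ssh is still attached to the terminal's stdin and competing with the local shell for keystrokes. A hedged sketch of two mitigations, reusing the question's own function names:

import os
import shlex
import subprocess
import time

def execCmd(op, command):
    args = shlex.split(command)
    # redirect stdin away from the terminal so ssh cannot hold it
    # (equivalently, add -n to the ssh command line)
    devnull = open(os.devnull, 'r')
    p = subprocess.Popen(args, stdout=subprocess.PIPE, stdin=devnull)
    output, err = p.communicate()
    devnull.close()
    return output

def waitTillUp(remoteServer):
    # poll with single pings until one succeeds, sleeping between attempts
    while os.system("ping -c 1 -W 2 " + remoteServer) != 0:
        time.sleep(5)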

python ssh stdout multiple

In Python, I am trying to connect through ssh and run multiple commands one by one.
This code works fine and the output is printed to my screen:
cmd = ['ssh', '-t', '-t', 'user@host']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE)
p.stdin.write('pwd\n')
p.stdin.write('ls -l\n')
p.stdin.write('exit\n')
p.stdin.close()
My problem is when I try to grab each response in a string. I have tried this, but the read function blocks:
cmd = ['ssh', '-t', '-t', 'user@host']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write('pwd\n')
st1 = p.stdout.read()
p.stdin.write('ls -l\n')
st2 = p.stdout.read()
p.stdin.close()
I agree with Alp that it's probably easier to use a library to handle the connection logic for you. pexpect is one way to go. Below is an example with paramiko (http://docs.paramiko.org/en/1.13/):
import paramiko

host = 'myhost'
port, user, password = 22, 'myuser', 'mypass'
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.load_system_host_keys()
client.connect(host, port, user, password, timeout=10)
command = 'ls -l'
stdin, stdout, stderr = client.exec_command(command)
errors = stderr.read()
output = stdout.read()
client.close()
The read() call is blocking because, when called with no argument, read() will read from the stream in question until it encounters EOF.
If your use case is as simple as your example code, a cheap workaround is to defer reading from p.stdout until after you close the connection:
cmd = ['ssh', '-t', '-t', 'deploy@pdb0']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write('pwd\n')
p.stdin.write('ls -l\n')
p.stdin.write('exit\n')
p.stdin.close()
outstr = p.stdout.read()
You'll then have to parse outstr to separate the output of the different commands. (Looking for occurrences of the remote shell prompt is probably the most straightforward way to do that.)
If you need to read the complete output of one command before sending another, you have several problems. First, this can block:
p.stdin.write('pwd\n')
st1 = p.stdout.read()
because the command you write to p.stdin might be buffered. You need to flush the command before looking for output:
p.stdin.write('pwd\n')
p.stdin.flush()
st1 = p.stdout.read()
The read() call will still block, though. What you want to do is call read() with a specified buffer size and read the output in chunks until you encounter the remote shell prompt again. But even then you'll still need to use select to check the status of p.stdout to make sure you don't block.
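Purely as an illustration of the paragraph above (this is not code from the answer), a select-plus-os.read loop avoids blocking because os.read returns whatever bytes are currently available instead of waiting for a full buffer; the sketch assumes Python 2 str pipes, as in the question:

import os
import select

def read_until_prompt(p, prompt='$ ', chunk=1024, wait=0.5):
    buf = ''
    fd = p.stdout.fileno()
    while not buf.endswith(prompt):
        # only read once select reports data, so the read cannot hang
        ready, _, _ = select.select([fd], [], [], wait)
        if not ready:
            break  # nothing arrived within the wait window
        data = os.read(fd, chunk)  # returns the bytes available, up to chunk
        if not data:
            break  # EOF: the remote side closed the stream
        buf += data
    return buf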
There's a library called pexpect that implements that logic for you. It'll be much easier to use that (or, even better, pxssh, which specializes pexpect for use over ssh connections), as getting everything right is rather hairy, and different OSes behave somewhat differently in edge cases. (Take a look at pexpect.spawn.read_nonblocking() for an example of how messy it can be.)
Even cleaner, though, would be to use paramiko, which provides a higher level abstraction to doing things over ssh connections. In particular, look at the example usage of the paramiko.client.SSHClient class.
Thanks to both of you for your answers. To keep it simple I have updated my code with:
def getAnswer(p, cmnd):
    # send my command
    if len(cmnd) > 0:
        p.stdin.write(cmnd + '\n')
        p.stdin.flush()
    # get reply -- read until the prompt is received
    st = ""
    while True:
        char = p.stdout.read(1)
        st += char
        if char == '>':
            return st

cmd = ['ssh', '-t', '-t', 'user@host']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# discard welcome message
getAnswer(p, '')
st1 = getAnswer(p, 'pwd')
st2 = getAnswer(p, 'ls -l')
...
p.stdin.write('exit\n')
p.stdin.flush()
p.stdin.close()
p.stdout.close()

This is not perfect but works fine. To detect a prompt I simply wait for a '>'; this could be improved by first sending an 'echo $PS1' and building a regexp accordingly.
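A sketch of that improvement, with the caveat that $PS1 may contain unexpanded escape sequences like \u or \h on some shells, and that a pty may echo the command back first, so treat this as an assumption to verify:

import re

# ask the remote shell for its prompt and build a regexp from the reply
p.stdin.write('echo $PS1\n')
p.stdin.flush()
ps1 = p.stdout.readline().strip()  # assumes the prompt fits on one line
prompt_re = re.compile(re.escape(ps1) + r'\s*$')
# getAnswer could then test prompt_re.search(st) instead of char == '>'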
