I want to get a service's status and, if it is not up, send the status (stdout) by email.
This script is scheduled to run every hour by cron.
When run manually, the following works fine:
import os
import subprocess

def is_service_running(name):
    with open(os.devnull, 'wb') as hide_output:
        proc = subprocess.Popen(['service', name, 'status'],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                                shell=True)
        output = proc.stdout.read()
        exit_code = proc.wait()
    return exit_code == 0, output
But when run by cron, output is empty.
How can I capture stdout when the script runs under cron?
Thank you
The problem wasn't cron but shell=True.
Apparently, when using shell=True, Popen expects a single string rather than a list.
So when I updated my call to:
proc = subprocess.Popen(['service ' + name + ' status'], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
everything worked.
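For reference, the usual alternative is the opposite fix: keep the argument list and drop shell=True, which also avoids shell injection through name. A minimal sketch of that variant:
import subprocess

def is_service_running(name):
    # Argument list with the default shell=False: no shell parsing, no injection risk.
    proc = subprocess.Popen(['service', name, 'status'],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, _ = proc.communicate()
    return proc.returncode == 0, output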
I use this bit of code to start the server as a subprocess and put the stdout into a text file.
with open('serverlog.txt', 'w') as outfile:
    proc = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=outfile, shell=False)
and then use this to send a command to the subprocess via the communicate method:
if message.content[:5] == "++say":
    userMessage = message.content[6:]
    proc.communicate(input=f"say {userMessage}".encode())
but once this block of code is reached, the program hangs.
My goal is to create a new process running another program and establish a long-lived connection with it (the ability to write to its stdin and read results), i.e. not an atomic write-read followed by killing the created process. I have to use program code only, not any shell command.
Here is my code:
import subprocess

proc = subprocess.Popen(['myprog', '-l'], shell=True,
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# proc was kept
# after some waiting I try to send proc some command
if proc.returncode == None:
    proc.stdin.write(b'command')
    response = proc.communicate()[0]
This code returns either an empty string (if a single transaction was committed) or raises BrokenPipeError (if it runs in a loop).
Does proc stay alive after the first proc.communicate()? What approach do I need to use to keep control of proc's stdin/stdout?
You are checking for proc.returncode == None.
But if you read the subprocess documentation, returncode is only set by poll(), wait(), or (indirectly) communicate(); immediately after Popen it is always None, even if the child has already exited, so that check tells you nothing on its own. Call proc.poll() first to refresh it.
Second, if you have long-running processes, you should either adjust and handle the timeout, or disable it.
Third: you should really, really avoid shell=True in Popen; it is a huge security risk.
Here is an example of how I normally deal with Popen:
from shlex import split as sh_split
from subprocess import PIPE, Popen, TimeoutExpired

def launch(command, cwd=None, stdin=None, timeout=15):
    with Popen(
        sh_split(command), universal_newlines=True,
        cwd=cwd, stdout=PIPE, stderr=PIPE, stdin=PIPE
    ) as proc:
        try:
            out, err = proc.communicate(input=stdin, timeout=timeout)
        except TimeoutExpired:
            proc.kill()
            out, err = proc.communicate()
    return proc.returncode, out.splitlines(), err.splitlines()
This is for short-lived processes, but I hope you can see how stdin, stdout and stderr handling is done.
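For a genuinely long-lived child you keep the Popen object around instead of letting communicate() close everything. A minimal sketch, assuming the myprog from the question reads one command per line and replies with exactly one line:
from subprocess import PIPE, Popen

# Keep the child alive; write and flush one command at a time.
proc = Popen(['myprog', '-l'], stdin=PIPE, stdout=PIPE,
             universal_newlines=True)

def ask(command):
    # poll() refreshes returncode; None means the child is still running.
    if proc.poll() is not None:
        raise RuntimeError('child exited with %d' % proc.returncode)
    proc.stdin.write(command + '\n')
    proc.stdin.flush()              # do not let the command sit in the buffer
    return proc.stdout.readline()   # assumes exactly one reply line per command

print(ask('command'))
proc.stdin.close()
proc.wait()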
I would like to run a server and view real-time results in a textarea on an HTML page. Is it possible? For now I only get the output when the command has finished, but I would like to see it while the server is running.
I tried:
r = subprocess.Popen(argServer, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, close_fds=True)
stdout, stderr = r.communicate()
print """<textarea>""" + stdout + """</textarea>"""
but it does not work.
In Python, I am trying to connect through ssh and run multiple commands one by one.
This code works fine and the output is printed to my screen:
cmd = ['ssh', '-t', '-t', 'user@host']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE)
p.stdin.write('pwd\n')
p.stdin.write('ls -l\n')
p.stdin.write('exit\n')
p.stdin.close()
My problem comes when I try to grab each response in a string. I have tried this, but the read function blocks:
cmd = ['ssh', '-t', '-t', 'user@host']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write('pwd\n')
st1 = p.stdout.read()
p.stdin.write('ls -l\n')
st2 = p.stdout.read()
p.stdin.close()
I agree with Alp that it's probably easier to have a library do the connection logic for you. pexpect is one way to go. Below is an example with paramiko: http://docs.paramiko.org/en/1.13/
import paramiko
host = 'myhost'
port, user, password = 22, 'myuser', 'mypass'
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.load_system_host_keys()
client.connect(host, port, user, password, timeout=10)
command = 'ls -l'
stdin, stdout, stderr = client.exec_command(command)
errors = stderr.read()
output = stdout.read()
client.close()
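Since the question is about running several commands in one session, note that each exec_command call opens a fresh channel, so a simple loop keeps the outputs separate. A minimal sketch (host and credentials are placeholders):
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('myhost', 22, 'myuser', 'mypass', timeout=10)

results = {}
for command in ('pwd', 'ls -l'):
    # Each command runs on its own channel; outputs never mix.
    stdin, stdout, stderr = client.exec_command(command)
    results[command] = stdout.read()

client.close()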
The read() call is blocking because, when called with no argument, read() will read from the stream in question until it encounters EOF.
If your use case is as simple as your example code, a cheap workaround is to defer reading from p.stdout until after you close the connection:
cmd = ['ssh', '-t', '-t', 'deploy@pdb0']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write('pwd\n')
p.stdin.write('ls -l\n')
p.stdin.write('exit\n')
p.stdin.close()
outstr = p.stdout.read()
You'll then have to parse outstr to separate the output of the different commands. (Looking for occurrences of the remote shell prompt is probably the most straightforward way to do that.)
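For instance, if the remote prompt happens to end in '$ ' (an assumption; adjust the pattern to the real shell), a rough split could look like this:
import re

# Hypothetical prompt pattern: a line ending in '$ '.
PROMPT = re.compile(r'^[^\n]*\$ ', re.MULTILINE)

# Each chunk between prompts is one command's output.
chunks = [c.strip() for c in PROMPT.split(outstr) if c.strip()]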
If you need to read the complete output of one command before sending another, you have several problems. First, this can block:
p.stdin.write('pwd\n')
st1 = p.stdout.read()
because the command you write to p.stdin might be buffered. You need to flush the command before looking for output:
p.stdin.write('pwd\n')
p.stdin.flush()
st1 = p.stdout.read()
The read() call will still block, though. What you want to do is call read() with a specified buffer size and read the output in chunks until you encounter the remote shell prompt again. But even then you'll still need to use select to check the status of p.stdout to make sure you don't block.
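A rough sketch of that loop, assuming a POSIX system and, again, a '$ ' prompt:
import os
import select

def read_until_prompt(p, prompt=b'$ ', timeout=5.0):
    buf = b''
    while not buf.endswith(prompt):
        # select tells us whether a read on the pipe would block.
        ready, _, _ = select.select([p.stdout], [], [], timeout)
        if not ready:
            break  # nothing arrived within the timeout; stop rather than hang
        # os.read returns whatever is available (up to 1024 bytes), unlike
        # p.stdout.read(1024), which may wait for a full 1024 bytes.
        chunk = os.read(p.stdout.fileno(), 1024)
        if not chunk:
            break  # EOF: the child closed its stdout
        buf += chunk
    return buf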
There's a library called pexpect that implements that logic for you. It'll be much easier to use that (or, even better, pxssh, which specializes pexpect for use over ssh connections), as getting everything right is rather hairy, and different OSes behave somewhat differently in edge cases. (Take a look at pexpect.spawn.read_nonblocking() for an example of how messy it can be.)
Even cleaner, though, would be to use paramiko, which provides a higher level abstraction to doing things over ssh connections. In particular, look at the example usage of the paramiko.client.SSHClient class.
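For comparison, the pxssh route is only a few lines (host and credentials are placeholders):
from pexpect import pxssh

s = pxssh.pxssh()
s.login('myhost', 'myuser', 'mypass')
s.sendline('pwd')
s.prompt()          # waits until the remote prompt is seen again
print(s.before)     # everything the command printed before the prompt
s.sendline('ls -l')
s.prompt()
print(s.before)
s.logout()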
Thanks to both of you for your answers. To keep it simple I have updated my code to:
def getAnswer(p, cmnd):
    # send my command
    if len(cmnd) > 0:
        p.stdin.write(cmnd + '\n')
        p.stdin.flush()
    # get reply -- until prompt received
    st = ""
    while True:
        char = p.stdout.read(1)
        st += char
        if char == '>':
            return st
cmd = ['ssh', '-t', '-t', 'user@host']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# discard welcome message
getAnswer(p, '')
st1 = getAnswer(p, 'pwd')
st2 = getAnswer(p, 'ls -l')
...
p.stdin.write('exit\n')
p.stdin.flush()
p.stdin.close()
p.stdout.close()
This is not perfect but works fine. To detect a prompt I simply wait for a '>'; this could be improved by first sending 'echo $PS1' and building a regexp accordingly.
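A rough sketch of that improvement, reusing p from above and assuming the shell answers echo $PS1 with a single line (PS1 escape sequences such as \u or \h would still need extra handling):
import re

# Ask the remote shell for its prompt string.
p.stdin.write('echo $PS1\n')
p.stdin.flush()
ps1 = p.stdout.readline().strip()

# Match the literal prompt at the end of the accumulated output.
prompt_re = re.compile(re.escape(ps1) + r'\s*$')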
I am calling an external process multiple times, in a loop. To give you pseudocode:
for i in xrange(1, 100):
    call external proc which inserts a row into a table
The problem here is that whenever the external process is called, it runs in a separate thread and can take any amount of time, so Python continues with the loop. This causes the next insertion to run into a row lock, which prevents it.
What is the ideal way to wait for the process to complete, under the following constraints:
I cannot modify the way the external process works.
I know I can, but I do not want to use a hack like time.sleep.
I cannot modify any DB settings.
The code for calling the external proc is:
def run_query(query, username, password):
    try:
        process = subprocess.Popen("<path to exe> -u " + username + " -p " + password + " " + query,
                                   shell=True,
                                   stdout=subprocess.PIPE,
                                   stderr=subprocess.PIPE)
        result, error = process.communicate()
        if error != '':
            _pretty_error('stderr', error)
    except OSError, error:
        _pretty_error('OSError', str(error))
    return result
You have several options according to the subprocess documentation:
Calling process.wait() after running process = subprocess.Popen(...)
Using subprocess.call instead of Popen
Using subprocess.check_call instead of Popen
Depending on how the result looks, one way would be to use wait():
process = subprocess.Popen("<path to exe> -u " + username + " -p " + password + " " + query,
                           shell=True,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
retcode = process.wait()
You can try to start the process like:
process = subprocess.call(["<path to exe>", "-u", username, "-p", password, query],
                          shell=False)
This way the main process sleeps until the subprocess ends, but you don't get output.
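check_call from the list above behaves like call but raises an exception on a non-zero exit code, which saves the manual returncode check. A minimal sketch (the executable path is a placeholder, as above):
import subprocess

try:
    # Blocks until the child exits; raises CalledProcessError on non-zero exit.
    subprocess.check_call(["<path to exe>", "-u", username, "-p", password, query],
                          shell=False)
except subprocess.CalledProcessError as e:
    _pretty_error('exit code', str(e.returncode))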