Nagios Python script: wait for execution

I have this script, run by Nagios, which checks the provider API to see whether mitigation is enabled and reports back. I just copied an example from Nagios; I have no Python knowledge whatsoever. The problem is that the script sometimes needs 10 seconds to run and Python just continues, so I need it to wait for the command to finish.
I found some working examples using subprocess, but I don't know how to apply the .readline() and .strip() calls to the command.
This is the original script:
#!/usr/bin/python
import os, sys
mitigation_enabled = os.popen("/usr/local/nagios/libexec/check_mitigation.py | grep auto | awk '{print $2}'").readline().strip()
if mitigation_enabled == "false":
    print "OK - Mitigation disabled."
    sys.exit(0)
elif mitigation_enabled == "true":
    print "WARNING - Mitigation enabled."
    sys.exit(1)
else:
    print "UNKNOWN - mitigation status unknown."
    sys.exit(2)
So how do I do this with subprocess: wait for the external script to finish, and apply .readline() and .strip() to its output?
In short: how do I make this work? :)
Thanks!

You are complaining that the ancient, historic os.popen API permits "short reads" of zero bytes.
Yes, that is correct; it is working as designed.
I recommend you use subprocess directly.
Also, two nits about awk '{print $2}':
Delete the grep by invoking awk '/auto/ {print $2}'.
Delete the overhead of the awk child by using split().
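For example, a minimal sketch of that approach (assuming check_mitigation.py prints a line such as "auto false"):
import subprocess

# check_output runs the script and waits for it to exit before returning
output = subprocess.check_output(
    ["/usr/local/nagios/libexec/check_mitigation.py"]).decode()

mitigation_enabled = ""
for line in output.splitlines():
    if "auto" in line:                        # replaces the grep
        mitigation_enabled = line.split()[1]  # replaces awk '{print $2}'
        break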


Retrieve value from Python shell - beginner question

I can use this Python file from a library to make a read request of a temperature sensor value via the BACnet protocol, by running this command from the terminal:
echo 'read 12345:2 analogInput:2 presentValue' | py -3.9 ReadProperty.py
And I can see in the console that the value 67.02999114990234 is returned as expected.
I apologize if this question seems real silly and entry level, but could I call this script and assign the sensor reading to a variable? Any tips greatly appreciated.
For example, if I run this:
import subprocess
read = "echo 'read 12345:2 analogInput:2 presentValue' | python ReadProperty.py"
sensor_reading = subprocess.check_output(read, shell=True)
print("sensor reading is: ",sensor_reading)
It will just print 0, but I'm hoping to figure out a way to print the sensor reading of 67.02999114990234. I think what is happening under the hood is that the BACnet library brings up some sort of shell that uses stdin/stdout/flush.
os.system does not return the output from stdout, but the exit code of the executed program/code.
See the docs for more information:
On Unix, the return value is the exit status of the process encoded in the format specified for wait().
On Windows, the return value is that returned by the system shell after running command.
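A quick illustration of that encoding on Unix:
import os

rc = os.system("exit 3")   # runs a shell command whose exit status is 3
print(rc)                  # 768 on Unix: the status is encoded as in wait()
print(os.WEXITSTATUS(rc))  # 3: the actual exit code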
To get the output from stdout into your program, you have to use the subprocess module. There are plenty of tutorials out there on how to use subprocess, but this is an easy way:
import subprocess
read = "echo 'read 12345:2 analogInput:2 presentValue' | py -3.9 ReadProperty.py"
sensor_reading = subprocess.check_output(read, shell=True)
print(sensor_reading)
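check_output returns raw bytes, so to turn the reading into a number you can decode and convert it (assuming the script prints just the value):
sensor_value = float(sensor_reading.decode().strip())
print("sensor reading is:", sensor_value)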

Running multiple Bash commands interactively from Python

I have just come across pexpect and have been figuring out how to use it to automate various tasks I would otherwise have to carry out manually in a command shell.
Here's an example script:
import pexpect, sys
child = pexpect.spawn("bash", timeout=60)
child.logfile = sys.stdout
child.sendline("cd /workspace/my_notebooks/code_files")
child.expect('#')
child.sendline('ls')
child.expect('#')
child.sendline('git add .')
child.expect('#')
child.sendline('git commit')
child.expect('#')
child.sendline('git push origin main')
child.expect('Username .*:')
child.sendline(<my_github_username>)
child.expect('Password .*:')
child.sendline(<my_github_password>)
child.expect('#')
child.expect(pexpect.EOF)
(I know these particular tasks do not necessarily require pexpect, just trying to understand its best practices.)
Now, the above works. It cds to my local repo folder, lists the files there, stages my commits, and pushes to GitHub with authentication, all the while providing real-time output to the Python stdout. But I have two areas I'd like to improve:
Firstly, .expect('#') between every line I would run in Bash (that doesn't require interactivity) is a little tedious. (And I'm not sure whether / why it always seems to work, whatever was the output in stdout - although so far it does.) Ideally I could just clump them into one multiline string and dispense with all those expects. Isn't there a more natural way to automate parts of the script, e.g. as a multiline string of Bash commands separated by ';' or '&&' or '||'?
Secondly, if you run a script like the above you'll see it times out after 60 seconds sharp, then yields a TimeoutError in Python. Although - assuming the job fits within 60 seconds - it gets done, I would prefer something which (1) doesn't take unnecessarily long, (2) doesn't risk cutting off a >60 second process midway, (3) doesn't end the whole thing giving me an error in Python. Can we instead have it come to a natural end, i.e., when the shell processes are finished, that's when it stops running in Python too? (If (2) and (3) can be addressed, I could probably just set an enormous timeout value - not sure if there is better practice though.)
What's the best way of rewriting the code above? I grouped these two issues in one question because my guess is there is a generally better way of using pexpect, which could solve both problems (and probably others I don't even know I have yet!), and in general I'd invite being shown the best way of doing this kind of task.
You don't need to wait for # between each command. You can just send all the commands and ignore the shell prompts. The shell buffers all the inputs.
You only need to wait for the username and password prompts, and then the final # after the last command.
You also need to send an exit command at the end, otherwise you won't get EOF.
import pexpect, sys
child = pexpect.spawn("bash", timeout=60)
child.logfile = sys.stdout
child.sendline("cd /workspace/my_notebooks/code_files")
child.sendline('ls')
child.sendline('git add .')
child.sendline('git commit')
child.sendline('git push origin main')
child.expect('Username .*:')
child.sendline(<my_github_username>)
child.expect('Password .*:')
child.sendline(<my_github_password>)
child.expect('#')
child.sendline('exit')
child.expect(pexpect.EOF)
If you're running into the 60 second timeout, you can use timeout=None to disable it. See pexpect timeout with large block of data from child.
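For example (per the pexpect docs, a timeout of None means wait indefinitely):
child = pexpect.spawn("bash", timeout=None)  # no timeout for any expect()
# ...or disable it only where the long wait actually happens:
child.expect(pexpect.EOF, timeout=None)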
You could also combine multiple commands in a single line:
import pexpect, sys
child = pexpect.spawn("bash", timeout=60)
child.logfile = sys.stdout
child.sendline("cd /workspace/my_notebooks/code_files && ls && git add . && git commit && git push origin main')
child.expect('Username .*:')
child.sendline(<my_github_username>)
child.expect('Password .*:')
child.sendline(<my_github_password>)
child.expect('#')
child.sendline('exit')
child.expect(pexpect.EOF)
Using && between the commands ensures that it stops if any of them fails.
In general I wouldn't recommend using pexpect for this at all. Make a shell script that does everything you want, and run the script with a single subprocess.Popen() call.
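A sketch of that alternative, with a hypothetical push.sh collecting the commands above (authentication would have to come from a git credential helper rather than an interactive prompt):
import subprocess

# push.sh is hypothetical: the cd, ls, git add/commit/push steps in one script
proc = subprocess.Popen(["bash", "/workspace/push.sh"])
proc.wait()  # blocks until the script finishes; no artificial timeout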

Run script to send in-game Terraria server commands

In the past week I installed a Terraria 1.3.5.3 server on Ubuntu 18.04, for playing online with friends. This server runs 24/7, without any GUI, accessed only by SSH on the internal LAN.
My friends asked me if there is a way for them to control the server, e.g. send a message, via the in-game chat, so I thought of using a special character ($) in front of the desired command ('$say something' or '$save', for instance) and a Python program that reads the terminal output via a pipe, interprets the command, and sends it back with a bash command.
I followed these instructions to install the server:
https://www.linode.com/docs/game-servers/host-a-terraria-server-on-your-linode
And configured my router to forward a dedicated port to the Terraria server.
Everything is working fine, but I really struggle to make Python send a command via the "terrariad" bash script described in the link above.
Here is the code used to send a command, in Python:
import subprocess
subprocess.Popen("terrariad save", shell=True)
This works fine, but if I try to input a string with a space:
import subprocess
subprocess.Popen("terrariad \"say something\"", shell=True)
it stops the command at the space character, and outputs this on the terminal:
: say
Instead of the desired:
: say something
<Server>something
What can I do to solve this problem?
I have tried so many things but I get the same result.
P.S. If I send the command manually in the SSH PuTTY terminal, it works!
Edit 1:
I abandoned the Python solution; for now I'll try it with bash instead, which seems the more logical way to do this.
Edit 2:
I found the "terrariad" script expect just one argument, but the Popen is splitting my argument into two no matter the method I use, as my input string has one space char in the middle. Like this:
Expected:
terrariad "say\ something"
$1 = "say something"
But this is what I get from Python's Popen:
subprocess.Popen("terrariad \"say something\"", shell=True)
$1 = "say
$2 = something"
No matter if I pass it as a list:
subprocess.Popen(["terrariad", "say something"])
$1 = "say
$2 = something"
Or use a backslash before the space character: it always splits the variable when it reaches a space.
Edit 3:
Looking in the bash script I could understand what is going on when I send a command... Basically it uses the "stuff" command, from the screen program, to send characters to the Terraria screen session:
screen -S terraria -X stuff $send
$send is a printf command:
send="`printf \"$*\r\"`"
And it seems to me that if I run the bash file from Python, it has a different result than running it from the command line. How is this possible? Is this a bug or a bad implementation of the function?
Thanks!
I finally came up with a solution to this, using pipes instead of Popen.
It seems to me that Popen isn't the best way to run bash scripts, as described in "How to do multiple arguments with Python Popen?", the link that SiHa sent in the comments (thanks!):
"However, using Python as a wrapper for many system commands is not really a good idea. At the very least, you should be breaking up your commands into separate Popens, so that non-zero exits can be handled adequately. In reality, this script seems like it'd be much better suited as a shell script."
So I came up with a solution using a FIFO file:
First, create a FIFO to be used as a pipe, in the desired directory (for instance, /samba/terraria/config):
mkfifo cmdOutput
*/samba/terraria - this is the directory I created in order to easily edit scripts and save/load maps to the server from another computer; it is shared with Samba (https://linuxize.com/post/how-to-install-and-configure-samba-on-ubuntu-18-04/)
Then I created a Python script that reads the screen output and writes to the pipe file (I know, there are probably other ways to do this):
import os

# Open the FIFO for writing; this blocks until the server opens it for reading
outputFile = os.open("/samba/terraria/config/cmdOutput", os.O_WRONLY)
print("python script has started!")
while 1:
    line = input()  # read one line of the server's output
    print(line)     # echo it so the screen session still shows it
    cmdPosition = line.find("&")
    if cmdPosition != -1:
        # everything after the '&' is treated as a server command
        cmdText = line[cmdPosition + 1:]
        os.write(outputFile, bytes(cmdText + "\r\r", 'utf-8'))
        os.write(outputFile, bytes("say Command executed!!!\r\r", 'utf-8'))
Then I edited the terraria.service file to call this script, piped from the Terraria server, and redirect errors to another file:
ExecStart=/usr/bin/screen -dmS terraria /bin/bash -c "/opt/terraria/TerrariaServer.bin.x86_64 -config /samba/terraria/config/serverconfig.txt < /samba/terraria/config/cmdOutput 2>/samba/terraria/config/errorLog.txt | python3 /samba/terraria/scripts/allowCommands.py"
*/samba/terraria/scripts/allowCommands.py - where my script is.
**/samba/terraria/config/errorLog.txt - saves a log of errors to a file.
Now I can send commands like 'noon' or 'dawn' to change the in-game time, save the world and back it up with the Samba server before boss fights, do other stuff if I have some time XD, and have the terminal showing what is going on with the server.

Linux Python application PID existence check

I have a strange problem with auto-running my Python application. As everybody knows, to run this kind of app I need to run the command:
python app_script.py
Now I try to run this app from crontab, using a simple script to check whether the app is already running. If it isn't, the script starts the application.
#!/bin/bash
pidof appstart.py >/dev/null
if [[ $? -ne 0 ]] ; then
    python /path_to_my_app/appstart.py &
fi
The bad side of this approach is that, when checking the PID, pidof matches only the first word of the command in the ps aux table, which in this example will always be python, skipping the script name (appstart.py). So when I run another Python-based app, the check fails... Does anybody know how to check this in a proper way?
This might be a question better suited for Unix & Linux Stack Exchange.
However, it's common to use pgrep instead of pidof for applications like yours:
$ pidof appstart.py # nope
$ pidof python # works, but it can be different python
16795
$ pgrep appstart.py # nope, it would match just 'python', too
$ pgrep -f appstart.py # -f is for 'full', it searches the whole commandline (so it finds appstart.py)
16795
From man pgrep: The pattern is normally only matched against the process name. When -f is set, the full command line is used.
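If you want to keep the wrapper logic but do it from Python, the same check can be sketched like this (note the pattern must not match the checking process itself):
import subprocess

# pgrep -f matches the full command line, so 'python .../appstart.py' is found
running = subprocess.call(["pgrep", "-f", "appstart.py"],
                          stdout=subprocess.DEVNULL) == 0
if not running:
    subprocess.Popen(["python", "/path_to_my_app/appstart.py"])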
Maybe you should instead check for a PID file created by your application?
This will even help you track different instances of the same script if needed. Something like this:
#!/usr/bin/env python3
import os
import sys
import atexit

PID_file = "/tmp/app_script.pid"
PID = str(os.getpid())

if os.path.isfile(PID_file):
    sys.exit('{} already exists!'.format(PID_file))
open(PID_file, 'w').write(PID)

def cleanup():
    os.remove(PID_file)

atexit.register(cleanup)
# DO YOUR STUFF HERE
After that you'll be able to check whether the file exists, and if it does, retrieve the PID of your script.
[ -f /tmp/app_script.pid ] && ps up $(cat /tmp/app_script.pid) >/dev/null && echo "Started" || echo "Not Started"
You could also do the whole thing in Python, without the bash script around it, by creating a pidfile somewhere writable.
import os
import sys

pidpath = os.path.abspath('/tmp/myapp.pid')

def myfunc():
    """
    Your logic goes here
    """
    return

if __name__ == '__main__':
    # check for existing pidfile and fail if true
    if os.path.exists(pidpath):
        print('Script already running.')
        sys.exit(1)
    else:
        # otherwise write current pid to file
        with open(pidpath, 'w') as _f:
            _f.write(str(os.getpid()))
        try:
            # call your function
            myfunc()
        except Exception as e:
            # report the problem in case something breaks
            print('Exception: {}'.format(e))
            sys.exit(1)
        finally:
            # clean up after yourself whether it failed or not
            os.remove(pidpath)

Redirect to a file the output of a remote Python app started through SSH

[EDITED]
I have a Python app on a remote server that I need to debug; when I run the app locally it prints some debug information (including Python tracebacks) that I need to monitor.
Thanks to jeremy I got to monitor the output file using tail -F, and studying his code I arrived at the following variation of his command:
ssh root@$IP 'nohup python /root/python/run_dev_server.py &>> /var/log/myapp.log &'
This gets me almost exactly what I want: logging information and Python tracebacks. But I do not get any of the information displayed using print from Python, which I need.
so I also tried his command:
ssh root@$IP 'nohup python /root/python/run_dev_server.py 2>&1 >> /var/log/myapp.log &'
It logs the program's print output to the file, and also the logging information, but all the tracebacks are lost, so I cannot debug the Python exceptions.
Is there a way I can capture all the information produced by the app?
Thanks in advance for any suggestion.
I would suggest doing something like this:
/usr/bin/nohup COMMAND ARGS >> /var/log/COMMAND.log 2>&1 &
/bin/echo $! > /var/run/COMMAND.pid
nohup keeps the process alive after your terminal/SSH session is closed, and the redirection appends all stdout and stderr to /var/log/COMMAND.log for you to tail -F later on. (Note the order: stdout is redirected to the file first, then stderr is pointed at stdout; writing 2>&1 before the >> would leave stderr on the terminal.)
To get the stacktrace output (which you could print to stdout, or do something fancy like email it), add the following lines to your Python code.
import sys
import traceback

_old_excepthook = sys.excepthook

def myexcepthook(exctype, value, tb):
    # if exctype == KeyboardInterrupt: # handle keyboard events here
    # Print the traceback to stdout for now; you could email it instead.
    traceback.print_tb(tb)
    # Chain to the original hook; note it takes the traceback object tb,
    # not the traceback module.
    _old_excepthook(exctype, value, tb)

sys.excepthook = myexcepthook
This will catch all exceptions (including keyboard interrupts, so be careful).
