Nagios event handler executing Python script

I am stumped as to what this issue could be. The Nagios log no longer reports any errors, but nothing ever gets written to my file.
import argparse
import datetime

def log_something(host_name host_address, attempt_number):
    with open('file', 'a+') as log:
        log.write('called function at %s with args %s' % (str(datetime.datetime.now()), locals()))

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('host_name')
    parser.add_argument('host_address')
    parser.add_argument('attempt_number')
    args = parser.parse_args()
    log_something(args.host_name, args.host_address, args.attempt_number)

if __name__ == "__main__":
    main()
And in my commands.cfg
define command {
    command_name    my_command
    command_line    $USER1$/my_command.py $HOSTNAME$ $HOSTADDRESS$ $HOSTATTEMPT$
}
And in my host config
define host {
    ...
    event_handler    my_command
}
And in the nagios log (journalctl -xe)
HOST ALERT: test-router;UP;HARD;5;PING OK - Packet loss = 0%, RTA = 0.96 ms
Jan 31 15:38:47 nagios-server.nagios[9212]: HOST EVENT HANDLER: test-router;UP;HARD;5;my_command
Nothing is written to the file and no errors are reported. When there were syntax errors in my script, the Nagios log would print whatever was reported to stderr; one of those was a file permission issue, which I fixed by creating the file in the same folder and running chmod 777 on everything. Besides, if that were still an issue it should be logged.
Anyone have any idea what's going on here?

I figured something out. It seems as if you define where the output of nagios event handlers goes in the nagios.cfg file.
# TEMP PATH
# This is path where Nagios can create temp files for service and
# host check results, etc.
temp_path=/tmp
I think this means that the output of the plugin checks gets sent to the temp folder. However, changing that path seems to have no effect, and the output of my Python script still gets written to a file in the /tmp folder. This is on CentOS 7.
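Given that, one way to make the log location deterministic is to open an absolute path in the handler script instead of a bare relative filename. A minimal sketch (the path below is illustrative; it just needs to be writable by the nagios user):
import datetime

# hypothetical location; any path writable by the nagios user works
LOG_PATH = '/var/log/nagios/event_handler.log'

def log_something(host_name, host_address, attempt_number):
    with open(LOG_PATH, 'a+') as log:
        log.write('called function at %s with args %s\n'
                  % (datetime.datetime.now(), locals()))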

Check your function; it seems you're missing the "," in the function parameters:
def log_something(host_name host_address, attempt_number):
It should be:
def log_something(host_name, host_address, attempt_number):

Get Python Ansible_Runner module STDOUT Key Value

I'm using the ansible_runner Python module as below:
import ansible_runner
r = ansible_runner.run(private_data_dir='/tmp/demo', playbook='test.yml')
When I execute the above code, it prints the playbook output directly instead of returning it. What I want is to save the stdout content into a Python variable for further text processing.
Did you read the manual at https://ansible-runner.readthedocs.io/en/stable/python_interface/? There is an example where you add another parameter, called output_fd, which can be a file handle instead of sys.stdout.
Sadly, this is a parameter of the run_command function, and the documentation is not very good. A look into the source code at https://github.com/ansible/ansible-runner/blob/devel/ansible_runner/interface.py could help you.
According to the implementation details in https://github.com/ansible/ansible-runner/blob/devel/ansible_runner/runner.py it looks like the run() function always prints to stdout.
According to the interface, there is a boolean flag run(json_mode=True) that stores the response as JSON (I expect in r instead of stdout), and there is another boolean flag, quiet.
I played around a little bit. The relevant option to avoid output on stdout is quiet=True as a run() argument.
ansible_runner catches the output and writes it to a file in the artifacts directory. Every run() produces that directory, as described in https://ansible-runner.readthedocs.io/en/stable/intro/#runner-artifacts-directory-hierarchy. So there is a file called stdout in the artifact directory; it contains the details, and you can read it as JSON.
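A minimal sketch of that, assuming the /tmp/demo setup from the question (the returned Runner object also exposes the captured stdout artifact as a file handle):
import ansible_runner

# quiet=True suppresses the console output; r.stdout is an open
# file handle onto the stdout file in the artifact directory
r = ansible_runner.run(private_data_dir='/tmp/demo', playbook='test.yml', quiet=True)
output = r.stdout.read()
print(output)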
But the returned object already contains some relevant data. Here is my example:
import logging
import ansible_runner

playbook = 'playbook.yml'
private_data_dir = 'data/'  # existing folder with inventory etc.
work_dir = 'playbooks/'     # contains playbook and roles

try:
    logging.debug('Running ansible playbook {} with private data dir {} in project dir {}'.format(playbook, private_data_dir, work_dir))
    runner = ansible_runner.run(
        private_data_dir=private_data_dir,
        project_dir=work_dir,
        playbook=playbook,
        quiet=True,
        json_mode=True
    )
    processed = runner.stats.get('processed')
    failed = runner.stats.get('failures')
    # TODO inform backend
    for host in processed:
        if host in failed:
            logging.error('Host {} failed'.format(host))
        else:
            logging.debug('Host {} backed up'.format(host))
    # 'inventory' is defined elsewhere in my script
    logging.error('Playbook runs into status {} on inventory {}'.format(runner.status, inventory.get('name')))
    if runner.rc != 0:
        pass  # we have an overall failure
    else:
        pass  # success message
except BaseException as err:
    logging.error('Could not process ansible playbook {}\n{}'.format(inventory.get('name'), err))
So this outputs all processed hosts and reports failures per host. More detailed output can be found in the stdout file in the artifact directory.
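If you need per-task detail programmatically as well, the returned Runner object also exposes the emitted job events; a small sketch based on the documented events attribute:
# runner.events yields one dict per Ansible job event, in emission order
for event in runner.events:
    if event.get('event') == 'runner_on_failed':
        # each event may carry the task's captured stdout
        print(event.get('stdout', ''))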

Bash to Python calling WLST and passing parameters

I am something of a beginner with Python, so I am writing a bash wrapper script that passes some arguments to a WLST (Jython) script that reads these values.
for example:
./serverArgUpdate.sh myAdminServer 40100 'server_MS1, server_MS2' -Dweblogic.logs2=debug
However, in my WLST script I read the values but am having an issue with argument 4, server_MS1, server_MS2:
import sys

### Domain Properties ####
adminURL = sys.argv[1]
domainUserName = sys.argv[2]
domainPassword = sys.argv[3]
svrName = str(sys.argv[4])
print("The server names are "+svrName)
setarg = sys.argv[5]

############### Connecting to AdminServer #################################
def connectAdmin():
    try:
        connect(userConfigFile=domainUserName, userKeyFile=domainPassword, url=adminURL)
    except:
        banner('Unable to find admin server..')
        exit()

def banner(line):
    print('='*80)
    print(line)
    print('='*80)

########################################################################
def updateSvrArg():
    for i in svrName:
        cd('/Servers/'+i+'/ServerStart/'+i)
        ...

############### Main Script #####################################
if __name__ == "main":
    try:
        updateSvrArg()
        disconnect()
    except:
        dumpStack()
        banner("There was an error with the script: Check the properties file or check if someone has the lock/edit on the console")
        cancelEdit(defaultAnswer='y')
        exit(exitcode=0)
    sys.exit(1)
While executing
wlst.sh serverArgUpdate.py t3://myAdminServer:40100 systemKeys/system_dev.config systemKeys/system_dev.key server_MS1,server_MS2 -Dweblogic.logs=debug
Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
The server names are server_MS1
Error: No domain or domain template has been read.
Error: No domain or domain template has been read.
========================================
The server name is s
========================================
Basically it should read
svrName = ('server_MS1', 'server_MS2')
for this to work.
How do I pass 'server_MS1','server_MS2' into the svrName variable from the bash script to WLST?
Any help much appreciated.
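For what it's worth, one common approach is to split the comma-separated argument inside the WLST script itself; this also explains the "The server name is s" output above, since iterating over a string yields single characters. A sketch:
svrNames = str(sys.argv[4]).split(',')   # ['server_MS1', 'server_MS2']
for i in svrNames:
    cd('/Servers/' + i + '/ServerStart/' + i)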

Python Fabric won't pass in variable

I had a script that was working. I made one small change and now it stopped working. The top version works, while the bottom one fails.
def makelocalconfig(file="TEXT"):
    host = env.host_string
    filename = file
    conf = open('/home/myuser/verify_yslog_conf/%s/%s' % (host, filename), 'r')
    comment = open('/home/myuser/verify_yslog_conf/%s/localconfig.txt' % host, 'w')
    for line in conf:
        comment.write(line)
    comment.close()
    conf.close()

def makelocalconfig(file="TEXT"):
    host = env.host_string
    filename = file
    path = host + "/" + filename
    pwd = local("pwd")
    conf = open('%s/%s' % (pwd, path), 'r')
    comment = open('%s/%s/localconfig.txt' % (pwd, host), 'w')
    for line in conf:
        comment.write(line)
    comment.close()
    conf.close()
For troubleshooting purposes I added print pwd and print path lines to make sure the variables were getting filled correctly. pwd comes up empty. Why isn't this variable being set correctly? I use this same format of
var = sudo("cmd")
all the time. Is local different from sudo and run?
In short, you may need to add capture=True:
pwd = local("pwd", capture=True)
local runs a command locally:
a convenience wrapper around the use of the builtin Python subprocess
module with shell=True activated.
run runs a command on a remote server and sudo runs a remote command as super-user.
There is also a note in the documentation:
local is not currently capable of simultaneously printing and capturing output, as run/sudo do. The capture kwarg allows you to switch between printing and capturing as necessary, and defaults to False.
When capture=False, the local subprocess’ stdout and stderr streams are hooked up directly to your terminal, though you may use the global output controls output.stdout and output.stderr to hide one or both if desired. In this mode, the return value’s stdout/stderr values are always empty.
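Applied to the failing version from the question, a minimal sketch of the fix (same directory layout assumed):
def makelocalconfig(file="TEXT"):
    host = env.host_string
    filename = file
    path = host + "/" + filename
    pwd = local("pwd", capture=True)  # with capture=True, pwd holds the command's stdout
    conf = open('%s/%s' % (pwd, path), 'r')
    comment = open('%s/%s/localconfig.txt' % (pwd, host), 'w')
    for line in conf:
        comment.write(line)
    comment.close()
    conf.close()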

Fabric Python run command does not display remote server command "history"

I can't seem to figure this one out, but when I do a very simple test against localhost and have Fabric execute run('history'), the resulting output on the command line is blank.
Nor does this work: run('history > history_dump.log')
Here is the complete FabFile script below, obviously I'm missing something here.
-- FabFile.py
from fabric.api import run, env, hosts, roles, parallel, cd, task, settings, execute
from fabric.operations import local, put

deploymentType = "LOCAL"

if (deploymentType == "LOCAL"):
    env.roledefs = {
        'initial': ['127.0.0.1'],
        'webservers': ['127.0.0.1'],
        'dbservers': ['127.0.0.1']
    }
env.use_ssh_config = False

# Get History
# -------------------------------------------------------------------------------------
@task
@roles('initial')
def showHistoryCommands():
    print("Logging into %s and accessing the command history " % env.host_string)
    run('history')  # does not display anything
    run('history > history_dump.log')  # does not write anything out
    print("Completed displaying the command history")
Any suggestions/solutions would be most welcomed.
History is a shell builtin, and the non-interactive shell that run() uses doesn't maintain any history, so it doesn't work like a normal command. I think your best bet would be to try to read the history file from the filesystem:
local('cat ~/.bash_history')
or
run('cat ~/.bash_history')
Substitute for the appropriate history file path.
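If you'd rather not hard-code the path, a hedged variant is to let the remote shell resolve it:
run('cat "${HISTFILE:-$HOME/.bash_history}"')  # falls back to ~/.bash_history when HISTFILE is unset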
To expand a bit after some research: the command succeeds when run, but for some reason Fabric neither captures nor prints the output, or it is something about the way history prints its output. Other builtin commands like env work fine, so for now I don't know exactly what is going on.

Fabric used as library not working

I cannot get fabric working when used as a library within my own python scripts. I made a very short example fabfile.py to demonstrate my problem:
#!/usr/bin/env python
from fabric.api import *

print("Hello")

def test():
    with settings(host_string='myIp', user="myUser", password="myPassword"):
        run("hostname")

if __name__ == '__main__':
    test()
Running fab works like a charm:
$ fab test
Hello
[myIp] run: hostname
[myIp] out: ThisHost
[myIp] out:
Done.
Disconnecting from myUser@myIp... done.
Ok, now, running the python script without fab seems to break somewhere:
$ python fabfile.py
Hello
[myIp] run: hostname
It immediately returns, so it does not even seem to wait for a response. Maybe there are errors, but I don't see how to output them.
I am running this script inside my vagrant virtual machine. As fab executes without any errors, I guess this should not be a problem!
UPDATE
The script seems to crash, as it does not execute anything after the first run(). local, on the other hand, works!
We executed the script on a co-worker's laptop and it ran without any issues. I am using Python 2.6.5 on Ubuntu 10.04 with Fabric 1.5.1, so I guess the problem lies somewhere in there! Is there any way to debug this properly?
I've experienced a similar issue: the fab command exited without error, with just a blank line, on the first run()/sudo() command.
So I put the run() command into a try/except block and printed the traceback:
from fabric.api import env, run
from fabric.colors import green

def do_something():
    print(green("Executing on %(host)s as %(user)s" % env))
    try:
        run("uname -a")
    except:
        import traceback
        tb = traceback.format_exc()
        print(tb)
I saw that the script exited in fabric's network.py at line 419, where it caught an EOFError or TypeError. I modified the script to:
...
except (EOFError, TypeError) as err:
    print err
    # Print a newline (in case user was sitting at prompt)
    print('')
    sys.exit(0)
...
which then printed out:
connect() got an unexpected keyword argument 'sock'
So I removed the sock keyword argument in the connect method a few lines above, and it worked like a charm. I guess it is a problem with the paramiko version, which does not allow the sock keyword.
Versions:
Python 2.7.3
Fabric >= 1.5.3
paramiko 1.10.0
If you look at the fab command, it looks like this:
sys.exit(
    load_entry_point('Fabric==1.4.3', 'console_scripts', 'fab')()
)
This means it looks for a block labeled console_scripts in a file called entry_points.txt in the Fabric package and executes the method listed there, in this case fabric.main:main.
When we look at this method, we see argument parsing, interesting fabfile importing, and then:
if fabfile:
    docstring, callables, default = load_fabfile(fabfile)
    state.commands.update(callables)

....

for name, args, kwargs, arg_hosts, arg_roles, arg_exclude_hosts in commands_to_run:
    execute(
        name,
        hosts=arg_hosts,
        roles=arg_roles,
        exclude_hosts=arg_exclude_hosts,
        *args, **kwargs
    )
with some experimentation we can come up with something like:
from fabric import state
from fabric.api import *
from fabric.tasks import execute
from fabric.network import disconnect_all

def test():
    with settings(host_string='host', user="user", password="password"):
        print run("hostname")

if __name__ == '__main__':
    state.commands.update({'test': test})
    execute("test")
    if state.output.status:
        print("\nDone.")
    disconnect_all()
This is obviously very incomplete, but perhaps you only need to add the disconnect_all() line at the end of your script.
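Applied to the fabfile from the question, that would look roughly like this (a sketch under the same myIp/myUser assumptions, not a tested fix):
#!/usr/bin/env python
from fabric.api import *
from fabric.network import disconnect_all

def test():
    with settings(host_string='myIp', user="myUser", password="myPassword"):
        run("hostname")

if __name__ == '__main__':
    try:
        test()
    finally:
        disconnect_all()  # close the cached SSH sessions so the interpreter can exit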
