I have a bash script for performing passive checks, i.e., checks submitted by an external agent/application. I tried converting the bash script into Python, but when I execute the file I don't see any response on my Nagios Core interface for my passive check result.
import os
import datetime
CommandFile='/usr/local/nagios/var/rw/nagios.cmd'
datetime = datetime.datetime.now()
os.stat(CommandFile)
f = open(CommandFile, 'w')
f.write("/bin/echo " + str(datetime) + " PROCESS_SERVICE_CHECK_RESULT;compute-1;python dummy;0;I am dummy python")
f.close()
My bash script is:
#!/bin/sh
# Write a command to the Nagios command file to cause
# it to process a service check result
echocmd="/bin/echo"
CommandFile="/usr/local/nagios/var/rw/nagios.cmd"
# get the current date/time in seconds since UNIX epoch
datetime=`date +%s`
# create the command line to add to the command file
cmdline="[$datetime] PROCESS_SERVICE_CHECK_RESULT;host-name;dummy bash;0;I am dummy bash"
# append the command to the end of the command file
`$echocmd $cmdline >> $CommandFile`
I changed my code and now it's working perfectly fine. I can see the response in the Nagios interface.
import time
import sys
HOSTNAME = "compute-1"
service = "python dummy"
return_code = "0"
text = "python dummy is working .....I am python dummy"
timestamp = int(time.time())
nagios_cmd = open("/usr/local/nagios/var/rw/nagios.cmd", "w")
nagios_cmd.write("[{timestamp}] PROCESS_SERVICE_CHECK_RESULT;{hostname};{service};{return_code};{text}\n".format(
    timestamp=timestamp,
    hostname=HOSTNAME,
    service=service,
    return_code=return_code,
    text=text))
nagios_cmd.close()
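For reuse, a minimal sketch that wraps the working version in a function (hedged: same command file and message format as above; the two fixes that mattered are the epoch timestamp in square brackets and writing the raw command instead of a literal /bin/echo string):

import time

def submit_passive_check(hostname, service, return_code, text,
                         command_file="/usr/local/nagios/var/rw/nagios.cmd"):
    # Nagios expects: [epoch] PROCESS_SERVICE_CHECK_RESULT;host;service;code;plugin output
    line = "[{0}] PROCESS_SERVICE_CHECK_RESULT;{1};{2};{3};{4}\n".format(
        int(time.time()), hostname, service, return_code, text)
    with open(command_file, "w") as nagios_cmd:
        nagios_cmd.write(line)

submit_passive_check("compute-1", "python dummy", 0, "I am python dummy")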
import sys
import subprocess
command = 'C:\Program Files (x86)\Windows Kits\10\Debuggers\x64 -y ' + sys.argv[1] + ' -i ' + sys.argv[2] + ' -z ' + sys.argv[3] + ' -c "!analyze" '
process = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
I tried this code. I am trying to take input of crash dump name and exe location, and then I have to display user-understandable crash analysis output. How to do that using Python scripting? Is it easier with CPP scripting?
take input of crash dump name and exe location and then I have to display user-understandable crash analysis output.
It seems you want to parse the text output of the !analyze command. You can do that, but you should be aware that this command can have a lot of different output.
Let me assume you're analyzing a user mode crash dump. In such a case, I would first run a few simpler commands to check whether you got a legit dump. You may try the following commands:
|| to check the dump type (should be "user")
| to get the name of the executable (should match your application)
lmvm <app> to check the version number of your executable
If everything is fine, you can go on:
.exr -1: distinguish between a crash and a hang. A 0x80000003 breakpoint is more likely a hang or nothing at all.
This may help you decide if you should run !analyze or !analyze -hang.
How to do that using Python scripting?
[...] \Windows Kits\10\Debuggers\x64 -y ' + [...]
This path contains backslashes, so you want to escape them or use an r-string like r"C:\Program Files (x86)\Windows Kits\10\...".
You should probably start an executable here to make it work. cdb.exe is the command line version of WinDbg.
command.split()
This will not only split the arguments but also the path to the executable. Thus subprocess.Popen() will try to run an application called C:\Program, which does not exist.
This can fail in even more places, depending on which arguments in sys.argv[] contain spaces.
I suggest that you pass the options as they are:
command = r'C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\cdb.exe'
arguments = [command]
arguments.extend(['-y', sys.argv[1]]) # Symbol path
arguments.extend(['-i', sys.argv[2]]) # Image path
arguments.extend(['-z', sys.argv[3]]) # Dump file
arguments.extend(['-c', '!analyze']) # Command(s) for analysis
process = subprocess.Popen(arguments, stdout=subprocess.PIPE)
Note that there's no split() involved, which could split at the wrong positions.
Side note: -i may not work as expected. If you receive the crash dump from clients, they may have a different version than the one you have on disk. Set up a proper symbol server to mitigate this (the srv* symbol path in the working example below does exactly that).
Is it easier with CPP scripting?
It will be different, not easier.
Working example
This is Python code that takes the above into account. It's still a bit hacky because of the delays etc., but there's no real indicator other than time and output for deciding when a command has finished. This succeeds with Python 3.8 on a crash dump of Windows Explorer.
import re
import subprocess
import sys
import threading
import time
class ReaderThread(threading.Thread):
    def __init__(self, stream):
        super().__init__()
        self.buffer_lock = threading.Lock()
        self.stream = stream  # underlying stream for reading
        self.output = ""  # holds console output which can be retrieved by getoutput()

    def run(self):
        """
        Reads the stream line by line and caches the result.
        :return: when the underlying stream was closed.
        """
        while True:
            line = self.stream.readline()  # readline() will block and wait for \r\n
            if len(line) == 0:  # this only applies if the stream was closed; otherwise there is always \r\n
                break
            with self.buffer_lock:
                self.output += line

    def getoutput(self, timeout=0.1):
        """
        Get the console output that has been cached until now.
        If there's still output incoming, it will keep waiting in steps of 1/10 second
        until no new output has been detected.
        :return: the output cached since the last call.
        """
        temp = ""
        while True:
            time.sleep(timeout)
            if self.output == temp:
                break  # no new output for 100 ms, assume it's complete
            else:
                temp = self.output
        with self.buffer_lock:
            temp = self.output
            self.output = ""
        return temp
command = r'C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\cdb.exe'
arguments = [command]
arguments.extend(['-y', r"srv*D:\debug\symbols*https://msdl.microsoft.com/download/symbols"])  # Symbol path, may use sys.argv[1]
# arguments.extend(['-i', sys.argv[2]])  # Image path
arguments.extend(['-z', sys.argv[3]])  # Dump file
arguments.extend(['-c', ".echo LOADING DONE"])
process = subprocess.Popen(arguments, stdout=subprocess.PIPE, stdin=subprocess.PIPE, universal_newlines=True)

reader = ReaderThread(process.stdout)
reader.start()

result = ""
while not re.search("LOADING DONE", result):
    result = reader.getoutput()  # ignore initial output

def dbg(command):
    process.stdin.write(command + "\r\n")
    process.stdin.flush()
    return reader.getoutput()

result = dbg("||")
if "User mini" not in result:
    raise Exception("Not a user mode dump")
else:
    print("Yay, it's a user mode dump")

result = dbg("|")
if "explorer" not in result:
    raise Exception("Not an explorer crash")
else:
    print("Yay, it's an Explorer crash")

result = dbg("lm vm explorer")
if re.search(r"^\s*File version:\s*10\.0\..*$", result, re.M):
    print("That's a recent version for which we should analyze crashes")
else:
    raise Exception("That user should update to a newer version before we spend effort on old bugs")

dbg("q")
If you don't want to use WinDbg, which is a GUI, use cdb.exe: it is the console-mode WinDbg and outputs all results to the terminal.
Here is a demo:
F:\>cdb -c "!analyze -v;qq" -z testdmp.dmp | grep -iE "bucket|owner"
DEFAULT_BUCKET_ID: BREAKPOINT
Scope: DEFAULT_BUCKET_ID (Failure Bucket ID prefix)
BUCKET_ID
FOLLOWUP_NAME: MachineOwner
BUCKET_ID: BREAKPOINT_ntdll!LdrpDoDebuggerBreak+30
BUCKET_ID_IMAGE_STR: ntdll.dll
BUCKET_ID_MODULE_STR: ntdll
BUCKET_ID_FUNCTION_STR: LdrpDoDebuggerBreak
BUCKET_ID_OFFSET: 30
BUCKET_ID_MODTIMEDATESTAMP: c1bb301
BUCKET_ID_MODCHECKSUM: 1f647b
BUCKET_ID_MODVER_STR: 10.0.18362.778
BUCKET_ID_PREFIX_STR: BREAKPOINT_
FAILURE_BUCKET_ID: BREAKPOINT_80000003_ntdll.dll!LdrpDoDebuggerBreak
Followup: MachineOwner
grep is a general-purpose string parser.
It is built into Linux.
It is available for Windows in several places:
on 32-bit, you can use it from the GnuWin32 package / Cygwin;
on 64-bit, you can find it in Git.
You can also use the native findstr.exe.
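For example, the same demo with the native findstr.exe instead of grep (space-separated search strings are OR'ed and /i makes the match case-insensitive):

F:\>cdb -c "!analyze -v;qq" -z testdmp.dmp | findstr /i "bucket owner"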
:\>dir /b f:\git\usr\bin\gr*
grep.exe
groups.exe
or in msys / mingw / Cygwin / wsl / third party clones /
:\>dir /b /s *grep*.exe
F:\git\mingw64\bin\x86_64-w64-mingw32-agrep.exe
F:\git\mingw64\libexec\git-core\git-grep.exe
F:\git\usr\bin\grep.exe
F:\git\usr\bin\msggrep.exe
F:\msys64\mingw64\bin\msggrep.exe
F:\msys64\mingw64\bin\pcregrep.exe
F:\msys64\mingw64\bin\x86_64-w64-mingw32-agrep.exe
F:\msys64\usr\bin\grep.exe
F:\msys64\usr\bin\grepdiff.exe
F:\msys64\usr\bin\msggrep.exe
F:\msys64\usr\bin\pcregrep.exe
Or you can write your own simple string parser in Python / JavaScript / TypeScript / C / C++ / Ruby / Rust / whatever.
Here is a sample Python script that looks up a word and prints the matching lines:
import sys
for line in sys.stdin:
    if "BUCKET" in line:
        print(line)
Let's check this out:
:\>dir /b *.py
pyfi.py
:\>cat pyfi.py
import sys
for line in sys.stdin:
    if "BUCKET" in line:
        print(line)
:\>cdb -c "!analyze -v ;qq" -z f:\testdmp.dmp | python pyfi.py
DEFAULT_BUCKET_ID: BREAKPOINT
Scope: DEFAULT_BUCKET_ID (Failure Bucket ID prefix)
BUCKET_ID
BUCKET_ID: BREAKPOINT_ntdll!LdrpDoDebuggerBreak+30
BUCKET_ID_IMAGE_STR: ntdll.dll
BUCKET_ID_MODULE_STR: ntdll
BUCKET_ID_FUNCTION_STR: LdrpDoDebuggerBreak
BUCKET_ID_OFFSET: 30
BUCKET_ID_MODTIMEDATESTAMP: c1bb301
BUCKET_ID_MODCHECKSUM: 1f647b
BUCKET_ID_MODVER_STR: 10.0.18362.778
BUCKET_ID_PREFIX_STR: BREAKPOINT_
FAILURE_BUCKET_ID: BREAKPOINT_80000003_ntdll.dll!LdrpDoDebuggerBreak
NOTE:
using cmd in my profile - working
using sch.cmd in my profile - working
using Task Scheduler, running sch.cmd with "Run only when user is logged on" - working
using Task Scheduler, running sch.cmd with "Run whether user is logged on or not" - NOT WORKING. Also, A.py runs for 1 or 2 seconds and then shuts down without giving a result.
NOTE:
All files run fine and fetch correct results when started from cmd in my profile, but not with the Windows Task Scheduler in my profile.
All permissions are already granted. I want to list the file names, but both methods, glob.glob and os.listdir, are not working: they return an empty list [ ] or print nothing at all.
Task Scheduler settings (Elasticsearch is already running and I want to run A.py):
Program/script:
cmd
Add arguments:
/c sche.cmd > yash.txt
Start in:
D:\path\
sche.cmd contains:
#echo on
cmd /k "cd /d D:\path\env\Scripts\ & activate & pythonw.exe & cd /d D:\path\files & python A.py"
Now, NOTE: there is a space in the path '\\11.11.11.11\d$\E\ELE In A5\S\A', but the restriction is that I just can't change my path, as this is the production path.
NOTE: "Run whether user is logged on or not" and "Run with highest privileges" are enabled.
A.py is:
from elasticsearch import Elasticsearch
import os, glob
import datetime as dt
from datetime import datetime
from dateutil import parser
ES_HOST = {"host": "localhost", "port": 9250}
es = Elasticsearch(hosts=[ES_HOST])
n = 0
a=b=c=d=0
ip = '11.11.11.11'
username = 'abcdefgh'
password = '12345678'
use_dict = {}
use_dict['remote'] = r'\\11.11.11.11\d$\E\ELE In A5\S\A'  # raw string so the UNC path keeps its leading \\
use_dict['password'] = '12345678'
use_dict['username'] = 'abcdefgh'
folder_path = r'\\11.11.11.11\d$\E\ELE In A5\S\A'
print("############", folder_path)
dirContents = os.listdir(r'\\11.11.11.11\d$\E\ELE In A5\S\A')
print("yahoo")
if len(dirContents) == 0:
    print('Folder is Empty')
    n = 0
else:
    print('Folder is Not Empty')
    for filename in glob.glob(os.path.join(folder_path, '*')):
        with open(filename, 'r', encoding="ISO-8859-1") as f:
            text = f.read()
            {DO SOMETHING}
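Since a task run with "Run whether user is logged on or not" has no visible console, a hedged diagnostic sketch (the log file path is an assumption) that writes what the script sees to a file can make the failure visible:

# Hypothetical debugging wrapper: log the user, working directory and listing
# to a file, since print() output is invisible in a non-interactive scheduled task
import os, traceback

folder_path = r'\\11.11.11.11\d$\E\ELE In A5\S\A'
with open(r'D:\path\files\task_debug.log', 'a') as log:
    try:
        log.write("user=%s cwd=%s\n" % (os.environ.get('USERNAME'), os.getcwd()))
        log.write("contents=%r\n" % os.listdir(folder_path))
    except Exception:
        log.write(traceback.format_exc())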
More details are already in the unanswered question: Running python script in Service account by using windows task scheduler.
I'm trying to run a python script in crontab.
5 0 * * * python /home/hadoop/import_openapp.py >> /home/hadoop/openapp.out 2>&1
The Python script is something like this:
import sys
import datetime
from fabric.api import local
ystd = datetime.date.today() - datetime.timedelta(days=1)
c = ystd.strftime('%Y-%m-%d')
print(c)
print('Start to format file ...')
......
print('Start to upload on HDFS ...')
local("/home/hadoop/hadoop/bin/hadoop fs -put " + finalfile + " /user/hadoop/yunying/openapp")
print('Start to upload on MaxCompute ...')
......
When the cron job runs, the log file looks like this:
2016-07-01
Start to format file ...
Start to upload on HDFS ...
[localhost] local: /home/hadoop/hadoop/bin/hadoop fs -put /data/uxin/nsq_client_active_collect/hadoop/openappfinal.log /user/hadoop/yunying/openapp
And then the process is over. I cannot find it in ps -ef | grep python.
Why does it come to an end when it reaches local()?
It is likely that the PYTHONPATH is not set up correctly for whatever user cron is using to run the script. Print out the path to a debug file to check:
with open('/path/to/debug_file.txt', 'wt') as f:
    f.write(str(sys.path))
Try adding the line:
sys.path.append("/path/to/fabric.api")
before importing local
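A minimal sketch of that fix (the site-packages path is an assumption; use whatever path the debug file above reports):

import sys
# Assumed location of the site-packages directory that contains fabric;
# substitute the path you found in the debug file
sys.path.append("/usr/local/lib/python2.7/site-packages")
from fabric.api import local  # now resolvable under cron's minimal environment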
You can also dynamically get the location of the file being run using:
import os
os.path.realpath(__file__)
This will allow you to use relative paths if you need them.
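For instance, a small sketch that builds a path relative to the script itself (the file name is hypothetical):

import os

# Directory containing this script, regardless of cron's working directory
script_dir = os.path.dirname(os.path.realpath(__file__))
# Hypothetical file; any path relative to the script works the same way
debug_file = os.path.join(script_dir, "debug_file.txt")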
I had a script that was working. I made one small change and now it stopped working. The top version works, while the bottom one fails.
def makelocalconfig(file="TEXT"):
    host = env.host_string
    filename = file
    conf = open('/home/myuser/verify_yslog_conf/%s/%s' % (host, filename), 'r')
    comment = open('/home/myuser/verify_yslog_conf/%s/localconfig.txt' % host, 'w')
    for line in conf:
        comment.write(line)
    comment.close()
    conf.close()
def makelocalconfig(file="TEXT"):
    host = env.host_string
    filename = file
    path = host + "/" + filename
    pwd = local("pwd")
    conf = open('%s/%s' % (pwd, path), 'r')
    comment = open('%s/%s/localconfig.txt' % (pwd, host), 'w')
    for line in conf:
        comment.write(line)
    comment.close()
    conf.close()
For troubleshooting purposes I added print pwd and print path lines to make sure the variables were getting filled correctly. pwd comes up empty. Why isn't this variable being set correctly? I use this same format of
var = sudo("cmd")
all the time. Is local different from sudo and run?
In short, you may need to add capture=True:
pwd = local("pwd", capture=True)
local runs a command locally:
a convenience wrapper around the use of the builtin Python subprocess module with shell=True activated.
run runs a command on a remote server and sudo runs a remote command as super-user.
There is also a note in the documentation:
local is not currently capable of simultaneously printing and capturing output, as run/sudo do. The capture kwarg allows you to switch between printing and capturing as necessary, and defaults to False.
When capture=False, the local subprocess’ stdout and stderr streams are hooked up directly to your terminal, though you may use the global output controls output.stdout and output.stderr to hide one or both if desired. In this mode, the return value’s stdout/stderr values are always empty.
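Applied to the failing function above, a minimal sketch (same logic, only the output capture added):

from fabric.api import env, local

def makelocalconfig(file="TEXT"):
    host = env.host_string
    path = host + "/" + file
    pwd = local("pwd", capture=True)  # returns the command's stdout instead of printing it
    conf = open('%s/%s' % (pwd, path), 'r')
    comment = open('%s/%s/localconfig.txt' % (pwd, host), 'w')
    for line in conf:
        comment.write(line)
    comment.close()
    conf.close()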
OpenShift has these default environment variables:
# $_ENV['OPENSHIFT_INTERNAL_IP'] - IP Address assigned to the application
# $_ENV['OPENSHIFT_GEAR_NAME'] - Application name
# $_ENV['OPENSHIFT_GEAR_DIR'] - Application dir
# $_ENV['OPENSHIFT_DATA_DIR'] - For persistent storage (between pushes)
# $_ENV['OPENSHIFT_TMP_DIR'] - Temp storage (unmodified files deleted after 10 days)
How do I reference them in a Python script?
Example script: create a log file in the log directory and a log in the data directory:
from time import strftime
now= strftime("%Y-%m-%d %H:%M:%S")
fn = "${OPENSHIFT_LOG_DIR}/test.log"
fn2 = "${OPENSHIFT_DATA_DIR}/test.log"
#fn = "test.txt"
input = "appended text " + now + " \n"
with open(fn, "ab") as f:
f.write(input)
with open(fn2, "ab") as f:
f.write(input)
Can these scripts be used with cron?
EDIT: the bash file:
#! /bin/bash
#date >> ${OPENSHIFT_LOG_DIR}/new.log
source $OPENSHIFT_HOMEDIR/python-2.6/virtenv/bin/activate
python file.py
date >> ${OPENSHIFT_DATA_DIR}/new2data.log
import os
os.getenv("OPENSHIFT_INTERNAL_IP")
should work.
So with your example, modify it to:
import os
OPENSHIFT_LOG_DIR = os.getenv("OPENSHIFT_LOG_DIR")
fn = os.path.join(OPENSHIFT_LOG_DIR, "test.log")
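Putting it together, a minimal sketch of the corrected script (text mode "a" instead of "ab", since a str is being written):

import os
from time import strftime

now = strftime("%Y-%m-%d %H:%M:%S")
log_dir = os.getenv("OPENSHIFT_LOG_DIR")
data_dir = os.getenv("OPENSHIFT_DATA_DIR")

line = "appended text " + now + " \n"
for directory in (log_dir, data_dir):
    with open(os.path.join(directory, "test.log"), "a") as f:
        f.write(line)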
And yes, you can call this Python script with a cron job by referencing your bash script if you want, like this for example:
#!/bin/bash
date >> ${OPENSHIFT_LOG_DIR}/status.log
chmod +x status
cd ${OPENSHIFT_REPO_DIR}/wsgi/crawler
nohup python file.py 2>&1 &
Those OPENSHIFT_* variables are provided as environment variables on OpenShift, so $_ENV["OPENSHIFT_LOG_DIR"] is an example of getting the value inside a PHP script.
In Python, the equivalent would just be os.getenv("OPENSHIFT_LOG_DIR").
Made edits to Calvin's post above and submitted 'em.
Re: the question of where file.py exists: use os.getenv("OPENSHIFT_REPO_DIR") as the base directory where all your code is located on the gear where your app is running.
So if your file is located in .openshift/misc/file.py -- then just use:
os.path.join(os.getenv("OPENSHIFT_REPO_DIR"), ".openshift", "misc", "file.py")
to get the full path.
Or in bash, the equivalent would be:
$OPENSHIFT_REPO_DIR/.openshift/misc/file.py
HTH