Get Windows Process/File Description in Python

I have the following code so far that tells me every time a new process is created.
import wmi

c = wmi.WMI()
process_watcher = c.Win32_Process.watch_for("creation")

while True:
    new_process = process_watcher()
    print(new_process.Caption)
    print(new_process.ExecutablePath)
This works fine, but what I'm really trying to do is get at the process's description, because while the filename of what I'm looking for might change, the description does not. I can't find anything in Win32_Process or win32file that gets me the file description, though. Does anybody know how to do this?
Thanks!

while True:
    try:
        new_process = process_watcher()
        proc_owner = new_process.GetOwner()
        proc_owner = "%s\\%s" % (proc_owner[0], proc_owner[2])
        create_date = new_process.CreationDate
        executable = new_process.ExecutablePath
        cmdline = new_process.CommandLine
        pid = new_process.ProcessId
        parent_pid = new_process.ParentProcessId
        privileges = "N/A"
        process_log_message = "%s,%s,%s,%s,%s,%s,%s,\r\n" % (create_date, proc_owner, executable, cmdline, pid, parent_pid, privileges)
        print(process_log_message)
        log_to_file(process_log_message)
    except Exception:
        pass
Hope this helps :)
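Addressing the original question about the description itself: Win32_Process doesn't expose it, but it lives in the executable's version resource, which pywin32's win32api.GetFileVersionInfo can read. A Windows-only sketch (the helper name is mine, not from the thread; assumes pywin32 is installed):

```python
import win32api

def file_description(path):
    """Return the FileDescription string from an executable's version resource."""
    try:
        # The version resource can hold several language/codepage pairs;
        # query them first, then build the query key for the first pair.
        langs = win32api.GetFileVersionInfo(path, '\\VarFileInfo\\Translation')
        lang, codepage = langs[0]
        key = '\\StringFileInfo\\%04x%04x\\FileDescription' % (lang, codepage)
        return win32api.GetFileVersionInfo(path, key)
    except Exception:
        return None  # file has no version resource
```

Calling this with new_process.ExecutablePath inside the watch loop would give you the description alongside the caption.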

Related

MegaRAID nagios monitoring

I have been beating my head against the wall, trying to figure out what is wrong with the following nagios plugin I wrote. When I run the following code:
#!/usr/bin/env python3
import paramiko
import os.path
import sys

OK = 0
WARNING = 1
CRITICAL = 2
DEPENDENT = 3
UNKNOWN = 4

active = str("Active")
online = str("Online")
optimal = str("Optimal")
k = str("OK")
degrade = str("Degraded")
fail = str("Failed")

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    hostname='<hostname>',
    username='<service account>',
    key_filename=os.path.join(os.path.expanduser('~'), ".ssh", "id_rsa.pub")
)
stdin, stdout, stderr = client.exec_command("sudo /opt/MegaRAID/MegaCli/MegaCli64 -ShowSummary -a0")
check = str(stdout.read().decode('ascii'))
client.close()

OK_STR = str("RAID is OK!")
WARN_STR = str("Warning! Something is wrong with the RAID!")
CRIT_STR = str("CRITICAL! THE RAID IS BROKEN")
UNK_STR = str("Uh oh! Something ain't right?")

print(check)
if degrade in check:
    print(WARN_STR) and sys.exit(WARNING)
elif fail in check:
    print(CRIT_STR) and sys.exit(CRITICAL)
elif 'Exit Code: 0x00' in check:
    print(OK_STR) and sys.exit(OK)
else:
    sys.exit(UNKNOWN) and print(UNK_STR)
I receive the output that I expect when running it from the CLI. If I change the logic in the if statement, the exit code changes, which I have verified using 'echo $?'.
However, in my LibreNMS front end, I receive no stdout message and an exit code of 1, which is not how my code behaves in the terminal. If anybody can find something wrong with my code, I would really appreciate the help.
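One thing worth checking in the script above (an observation, not from the thread): `print(...) and sys.exit(...)` never actually calls sys.exit, because print() returns None, which is falsy and short-circuits the `and`. A minimal demonstration:

```python
import subprocess
import sys
import textwrap

# print() returns None, so `print(...) and sys.exit(1)` short-circuits
# and the script falls through to a normal exit (status 0).
snippet = textwrap.dedent("""
    import sys
    print("warning") and sys.exit(1)
""")
result = subprocess.run([sys.executable, "-c", snippet])
print(result.returncode)  # 0, not 1
```

Writing the two statements on separate lines (print first, then sys.exit) avoids the surprise.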

Adding simple menu in python

All, I wrote a small Python script to parse data out of a log file, and I was able to extract what I need. Now I am trying to create a menu so that the user can choose which data they want to parse rather than all of the log content. I am having a little trouble figuring out how to do it; could someone please help me get started on making a menu? I am a newbie to Python.
This is what I have so far:
import re

with open('temp.log') as f:
    lines = f.readlines()

data = []
for line in lines:
    date = re.match(r'\d{2} \w+ \d{2}', line).group()
    time = line.split()[3]
    ids = line.split()[4]
    try:
        agent = re.search(r'agent:\s(.*?),', line).group()
    except:
        agent = 'agent:'
    try:
        errID = re.search(r'ErrIdText:\s(.*?),', line).group()
    except:
        errID = 'ErrIdText:'
    try:
        clear = re.search(r'clearedID:\s(.*?)\)', line).group()
    except:
        clear = 'clearedID:'
    row = [date, time, ids, agent, errID, clear]
    data.append(row)

for row in data:
    print(row)
So I want to make a menu so user can choose if they only want to parse out the date and the agent name for example.
You can use click to implement your menu through the command line. It will parse the arguments for you, and you will be able to filter the operations. It is also easy to understand and implement for simple tasks. For example:
import re
import click

date_pattern = re.compile(r'\d{2} \w+ \d{2}')
agent_pattern = re.compile(r'agent:\s(.*?),')
err_pattern = re.compile(r'ErrIdText:\s(.*?),')
clear_pattern = re.compile(r'clearedID:\s(.*?)\)')

@click.command()
@click.option('--filter-agent', is_flag=True, default=False, help='Filter agent')
@click.option('--filter-err-id', is_flag=True, default=False, help='Filter Error ID')
@click.option('--filter-cleared-id', is_flag=True, default=False, help='Filter Cleared ID')
@click.argument('filename')
def get_valid_rows(filter_agent, filter_err_id, filter_cleared_id, filename):
    with open(filename) as f:
        lines = f.readlines()
    data = []
    for line in lines:
        date = date_pattern.match(line).group()
        time = line.split()[3]
        ids = line.split()[4]
        row = [date, time, ids]
        if filter_agent:
            try:
                agent = agent_pattern.search(line).group()
            except:
                agent = 'agent:'
            row.append(agent)
        if filter_err_id:
            try:
                errID = err_pattern.search(line).group()
            except:
                errID = 'ErrIdText:'
            row.append(errID)
        if filter_cleared_id:
            try:
                clear = clear_pattern.search(line).group()
            except:
                clear = 'clearedID:'
            row.append(clear)
        data.append(row)
    # Do everything else

if __name__ == "__main__":
    get_valid_rows()
It'll even generate a well-formatted help message for you:

Usage: parselog.py [OPTIONS] FILENAME

Options:
  --filter-agent       Filter agent
  --filter-err-id      Filter Error ID
  --filter-cleared-id  Filter Cleared ID
  --help               Show this message and exit.
You could edit it to your liking to achieve exactly what you want.
That's a very broad question, but what you need is either a GUI (like Tkinter or PyQt) or a command-line interface (which you could implement yourself, or build using a library like docopt).
The command-line option will be a lot simpler to implement, however.
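If you'd rather avoid a dependency, a roll-your-own text menu built on input() is enough for simple cases. A sketch (the field names and regexes come from the question; the menu structure and function name are my own):

```python
import re

# Fields the user can choose from, keyed by menu number
FIELDS = {
    "1": ("date",  re.compile(r'\d{2} \w+ \d{2}')),
    "2": ("agent", re.compile(r'agent:\s(.*?),')),
    "3": ("errID", re.compile(r'ErrIdText:\s(.*?),')),
}

def choose_fields():
    """Print a numbered menu and return the chosen (name, pattern) pairs."""
    print("Select fields to parse (comma-separated, e.g. 1,2):")
    for key, (name, _) in FIELDS.items():
        print("  %s) %s" % (key, name))
    choices = input("> ").split(",")
    return [FIELDS[c.strip()] for c in choices if c.strip() in FIELDS]
```

You would then apply only the chosen patterns to each log line instead of all of them.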

Python on Crontab does not execute bash script

import subprocess as sub
import re
import os
from datetime import datetime as influx_timestap
from influxdb import InfluxDBClient
from collections import OrderedDict

insert_json = []
hostname = str(sub.check_output('hostname')).strip()
location = str(sub.check_output(['ps -ef | grep mgr'], shell=True)).split()
current_dir = os.getcwd()
print("script executed")

gg_location_pattern = re.compile(r'mgr\.prm$')
gg_process_pattern = re.compile(r'^REPLICAT|^EXTRACT')

for index in location:
    if gg_location_pattern.search(index) is not None:
        gg_location = index[:-14]
        os.chdir(gg_location)
        print("checkpoint1")
        get_lag = sub.check_output(str(current_dir) + '/ggsci_test.sh', shell=True)
        print("checkpoint2")
        processes = get_lag.split("\n")
        for process in processes:
            if gg_process_pattern.search(process) is not None:
                lag_at_chkpnt = int(process.split()[3].split(":")[0]) * 3600 + int(process.split()[3].split(":")[1]) * 60 + int(process.split()[3].split(":")[2])
                time_since_chkpnt = int(process.split()[4].split(":")[0]) * 3600 + int(process.split()[4].split(":")[1]) * 60 + int(process.split()[4].split(":")[2])
                process_dict = OrderedDict({"measurement": "GoldenGate_Mon_" + str(hostname) + "_Graph",
                                            "tags": {"hostname": hostname, "process_name": process.split()[2]},
                                            "time": influx_timestap.now().isoformat('T'),
                                            "fields": {"process_type": process.split()[0], "process_status": process.split()[1],
                                                       "lag_at_chkpnt": lag_at_chkpnt, "time_since_chkpnt": time_since_chkpnt}})
                insert_json.append(process_dict)

host = 'xxxxxxxx'
port = 'x'
user = 'x'
password = 'x'
dbname = 'x'
print("before client")
client = InfluxDBClient(host, port, user, password, dbname)
client.write_points(insert_json)
print("after client")
This code works perfectly when run manually, but it does not work from crontab. After searching the internet I found suggestions to change or set the PATH variable in the crontab. I changed my PATH variable, and it is still not working.
The crontab log file shows "checkpoint1" and nothing after that, so the failing line is get_lag = sub.check_output(str(current_dir) + '/ggsci_test.sh', shell=True).
What can I do from here?
Take care,
It looks like your external script (ggsci_test.sh) has some issue with paths or a general failure.
From the Python subprocess documentation about subprocess.check_output:

If the return code was non-zero it raises a CalledProcessError. The CalledProcessError object will have the return code in the returncode attribute and any output in the output attribute.

So that's the reason why you see the error when catching it, but cannot continue.
You should therefore check whether your shell script has any issues that need to be solved first.
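To see what the script actually reports when run under cron, a small wrapper around check_output that captures stderr and prints everything on failure can help. A sketch (the wrapper name is mine, not from the thread):

```python
import subprocess as sub

def run_logged(cmd):
    """Run cmd via the shell; on failure, print the return code and captured output."""
    try:
        # stderr=sub.STDOUT folds the script's error messages into the result,
        # so they end up in the cron log instead of being lost
        return sub.check_output(cmd, shell=True, stderr=sub.STDOUT)
    except sub.CalledProcessError as exc:
        print("command failed with return code", exc.returncode)
        print("output:", exc.output.decode())
        raise
```

Running the failing line as run_logged(str(current_dir) + '/ggsci_test.sh') would show why ggsci_test.sh fails in the cron environment.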

monitoring a text site (json) using python

I'm working on a program to grab the variant ID from this website:
https://www.deadstock.ca/collections/new-arrivals/products/nike-air-max-1-cool-grey.json
I'm using this code:
import json
import requests
import time

endpoint = "https://www.deadstock.ca/collections/new-arrivals/products/nike-air-max-1-cool-grey.json"
req = requests.get(endpoint)
reqJson = json.loads(req.text)

for id in reqJson['product']:
    name = id['title']
    print(name)
I don't know what to do here in order to grab the names of the items. If you visit the link you will see that the name is under 'title'. If you could help me with this, that would be awesome.
I get the error message "TypeError: string indices must be integers", so I'm not too sure what to do.
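The TypeError comes from iterating over reqJson['product']: it is a dict, so the loop yields its keys (strings), and indexing a string with 'title' fails. A minimal illustration with the same shape as the endpoint's JSON (the data below is a stand-in, not fetched from the site):

```python
import json

# Minimal stand-in for the endpoint's response: 'product' maps to a dict
reqJson = json.loads('{"product": {"title": "Nike Air Max 1 Cool Grey", "id": 123}}')

# Iterating reqJson['product'] would yield key strings like "title";
# index the dict directly instead:
name = reqJson['product']['title']
print(name)  # Nike Air Max 1 Cool Grey
```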
Your biggest problem right now is that you are adding items to the list before you check whether they're in it, so everything comes back as already in the list.
Looking at your code, I think what you want to do is combine things into a single for loop.
Also, as a heads up, you shouldn't use a variable name like list, as it shadows the built-in Python type list.
list = []  # You really should change this to something else

def check_endpoint():
    endpoint = ""
    req = requests.get(endpoint)
    reqJson = json.loads(req.text)
    for id in reqJson['threads']:  # For each id in threads list
        PID = id['product']['globalPid']  # Get current PID
        if PID in list:
            print('checking for new products')
        else:
            title = id['product']['title']
            Image = id['product']['imageUrl']
            ReleaseType = id['product']['selectionEngine']
            Time = id['product']['effectiveInStockStartSellDate']
            send(title, PID, Image, ReleaseType, Time)
            print('{} added to database'.format(PID))
            list.append(PID)  # Add PID to the list
    return

def main():
    while True:
        check_endpoint()
        time.sleep(20)
    return

if __name__ == "__main__":
    main()

Google Drive SDK - changes( ) Key Error

I'm checking Drive for changes using the following code:
deltaDict = drive_service.changes().list(includeDeleted=True, startChangeId=driveRC.deltaCursor).execute()
if not str(driveRC.deltaCursor) == str(deltaDict['largestChangeId']):
    print('*** Change Detected ***')
    fileItems = deltaDict['items']
    for item in fileItems:
        isDeleted = item['deleted']
        theFile = item['file']
        fileID = theFile['id']
        fileLabels = theFile['labels']
        fileName = theFile['title']
        isTrashed = fileLabels['trashed']
and this was working fine for some time. At the moment, however, I'm seeing the error:
theFile = item['file']
KeyError: 'file'
but looking at the documentation, this looks to me like it should work. Can anyone spot what I'm missing?
Thanks in advance for any help.
According to the documentation, item['file'] is present only if the file has not been deleted, so you can only use it if item['deleted'] is False or at least wrap it in a try/except block.
for item in fileItems:
    isDeleted = item['deleted']
    try:
        theFile = item['file']
        # Rest of your code
    except KeyError:
        print("Item deleted")
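An equivalent approach checks the 'deleted' flag first with dict.get(), which avoids relying on an exception for control flow. A sketch using synthetic change items shaped like the API response (the sample data is mine, not from the thread):

```python
# Synthetic change items: deleted changes carry no 'file' key
fileItems = [
    {'deleted': False, 'file': {'id': 'abc', 'title': 'report.txt',
                                'labels': {'trashed': False}}},
    {'deleted': True},
]

titles = []
for item in fileItems:
    if item.get('deleted'):
        print('Item deleted')
        continue
    theFile = item['file']  # only present when the change is not a deletion
    titles.append(theFile['title'])

print(titles)  # ['report.txt']
```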
