Record Error from CMD in output with Python

I am trying to run the code below, which runs the commands in a file line by line and writes the results from cmd into a new file. Each command looks something like 'ping (host name)', with many hosts and one line per host.
Some hosts fail in cmd, as in no response is found. Usually when that happens the code breaks, which is why I have the try/except below, but I am struggling to make the except section record the failed items in the same document (if that is possible).
So, for example, if 'ping (host name3)' failed, I want it to record that message and store it in the file.
If you have a better way of doing all of this, please let me know!
import pathlib
import subprocess

command_path = pathlib.Path(r"path to the file with commands")
command_file = command_path.joinpath('command file.txt')
commands = command_file.read_text().splitlines()
#print(commands)
try:
    for command in commands:
        #Args = command.split()
        #print(f"/Running: {Args[0]}")
        outputfile = subprocess.check_output(command)
        print(outputfile.decode("utf-8"))
        results_path = command_path.joinpath("Passed_Results.txt")
        results = open(results_path, "a")
        results.write('\n' + outputfile.decode("utf-8"))
        results.close()
except:
    #this is where I need help.
    pass
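One way to record the failures, as a sketch rather than a drop-in fix: move the try/except inside the loop so one bad host does not abort the rest, and catch subprocess.CalledProcessError to write the failure message to a second file (the Failed_Results.txt name is my own choice):

import pathlib
import subprocess

command_path = pathlib.Path(r"path to the file with commands")
commands = command_path.joinpath('command file.txt').read_text().splitlines()
passed_path = command_path.joinpath("Passed_Results.txt")
failed_path = command_path.joinpath("Failed_Results.txt")

for command in commands:
    try:
        # check=True makes a non-zero exit code raise CalledProcessError
        result = subprocess.run(command.split(), capture_output=True,
                                text=True, check=True)
        with open(passed_path, "a", encoding="utf-8") as f:
            f.write('\n' + result.stdout)
    except subprocess.CalledProcessError as e:
        # record which command failed, plus whatever output it produced
        with open(failed_path, "a", encoding="utf-8") as f:
            f.write('\n{} failed (exit {}):\n{}'.format(
                command, e.returncode, e.stdout or e.stderr))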

I got a response on a different question that I was able to adapt into this. I essentially broke my entire code down and rewrote it as follows, and this worked for me. However, if you are able to provide insight on a faster processing time for this, please let me know!
import pathlib
import shlex
import subprocess

cmds_file = pathlib.Path(r"C:\Users'path to file here'").joinpath("Newfile.txt")
output_file = pathlib.Path(r"C:\Users'path to file here'").joinpath("All_Results.txt")

with open(cmds_file, encoding="utf-8") as commands, \
        open(output_file, "w", encoding="utf-8") as output:
    for command in commands:
        command = shlex.split(command)
        output.write(f"\n# {shlex.join(command)}\n")
        output.flush()
        subprocess.run(command, stdout=output, encoding="utf-8")
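On the faster processing time: most of the runtime here is spent waiting for each command to finish, so one option is to run them concurrently and write the captured output in input order. An untested sketch, where max_workers=8 is just a starting guess:

import concurrent.futures
import pathlib
import shlex
import subprocess

cmds_file = pathlib.Path(r"C:\Users'path to file here'").joinpath("Newfile.txt")
output_file = pathlib.Path(r"C:\Users'path to file here'").joinpath("All_Results.txt")

def run_one(line):
    # capture output per command instead of sharing one file handle across threads
    proc = subprocess.run(shlex.split(line), capture_output=True, encoding="utf-8")
    return "\n# {}\n{}{}".format(line.strip(), proc.stdout, proc.stderr)

with open(cmds_file, encoding="utf-8") as commands:
    lines = [line for line in commands if line.strip()]

with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool, \
        open(output_file, "w", encoding="utf-8") as output:
    # pool.map yields results in input order, so the output file stays ordered
    for block in pool.map(run_one, lines):
        output.write(block)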

Related

Get local DNS settings in Python

Is there any elegant and cross-platform (Python) way to get the local DNS settings?
It could probably work with a complex combination of modules such as platform and subprocess, but maybe there is already a good module, such as netifaces, which can retrieve it at a low level and save some "reinventing the wheel" effort.
Less ideally, one could probably query something like dig, but I find that "noisy" because it would issue an extra request instead of just reading something which already exists locally.
Any ideas?
Using subprocess you could do something like this on a macOS or Linux system:
import subprocess

process = subprocess.Popen(['cat', '/etc/resolv.conf'],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
print(stdout, stderr)
or do something like this
import subprocess

with open('dns.txt', 'w') as f:
    process = subprocess.Popen(['cat', '/etc/resolv.conf'], stdout=f)
The first example sends the output to stdout and the second to a file.
Maybe this one will solve your problem:
import subprocess

def get_local_dns(cmd_):
    with open('dns1.txt', 'w+') as f:
        with open('dns_log1.txt', 'w+') as flog:
            try:
                process = subprocess.Popen(cmd_, stdout=f, stderr=flog)
            except FileNotFoundError as e:
                flog.write(f"Error while executing this command {str(e)}")

linux_cmd = ['cat', '/etc/resolv.conf']
windows_cmd = ['windows_command', 'parameters']
commands = [linux_cmd, windows_cmd]

if __name__ == "__main__":
    for cmd in commands:
        get_local_dns(cmd)
Thanks @MasterOfTheHouse.
I ended up writing my own function. It's not so elegant, but it does the job for now. There's plenty of room for improvement, but well...
import os
import subprocess

def get_dns_settings() -> dict:
    # Initialize the output variables
    dns_ns, dns_search = [], ''
    # For Unix based OSs
    if os.path.isfile('/etc/resolv.conf'):
        for line in open('/etc/resolv.conf', 'r'):
            if line.strip().startswith('nameserver'):
                nameserver = line.split()[1].strip()
                dns_ns.append(nameserver)
            elif line.strip().startswith('search'):
                search = line.split()[1].strip()
                dns_search = search
    # If it is not a Unix based OS, try "the Windows way"
    elif os.name == 'nt':
        cmd = 'ipconfig /all'
        raw_ipconfig = subprocess.check_output(cmd)
        # Convert the bytes into a string
        ipconfig_str = raw_ipconfig.decode('cp850')
        # Convert the string into a list of lines
        ipconfig_lines = ipconfig_str.split('\n')
        for n in range(len(ipconfig_lines)):
            line = ipconfig_lines[n]
            # Parse nameserver in current line and next ones
            if line.strip().startswith('DNS-Server'):
                nameserver = ':'.join(line.split(':')[1:]).strip()
                dns_ns.append(nameserver)
                next_line = ipconfig_lines[n + 1]
                # If there's too much blank at the beginning, assume we have
                # another nameserver on the next line
                if len(next_line) - len(next_line.strip()) > 10:
                    dns_ns.append(next_line.strip())
                    next_next_line = ipconfig_lines[n + 2]
                    if len(next_next_line) - len(next_next_line.strip()) > 10:
                        dns_ns.append(next_next_line.strip())
            elif line.strip().startswith('DNS-Suffix'):
                dns_search = line.split(':')[1].strip()
    return {'nameservers': dns_ns, 'search': dns_search}

print(get_dns_settings())
By the way... how did you manage to write two answers with the same account?
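As one more option, assuming a third-party dependency is acceptable: the dnspython package builds its resolver from the system configuration (resolv.conf on Unix, the registry on Windows), so a sketch like this may avoid the manual parsing entirely:

# pip install dnspython
import dns.resolver

resolver = dns.resolver.Resolver()  # loads the system DNS configuration
print(resolver.nameservers)         # list of nameserver IPs
print(resolver.search)              # configured search domains, if any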

Python - Loop through log files with some logic

Good Morning,
I have taken on a small project, solely for the purposes of learning python.
So far the script will ssh to another server to obtain a list of nodes which are down. I want to change this, though, to have it store the list of down nodes in a tmp file, and the next day compare one to the other and only work with nodes which are down today that weren't down yesterday. But that part can wait...
The issue I'm seeing at the moment is with searching for various strings in a number of log files: if the line count for a particular node exceeds a certain number, rather than the lines being sent to the terminal, a message should be sent instead saying "too many log entries; entries saved to /tmp/...".
Here's what I have so far, but it doesn't really do what I want.
Also, if you have any other advice for my script, I would be infinitely grateful. I am learning, but it's sinking in slowly! :)
#!/usr/bin/python
#
from subprocess import *
import sys
from glob import glob
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-f', metavar='logname', help='logfile to check')
args = parser.parse_args()

ssh = Popen(["ssh", "root@srv1", 'check_nodes'],
            shell=False,
            stdout=PIPE,
            stderr=PIPE)
result = ssh.stdout.readlines()

down_nodes = []
status_list = ["down", "admindown"]

if result == []:
    error = ssh.stderr.readlines()
    print >>sys.stderr, "ERROR: %s" % error
else:
    for line in result[1:]:
        columns = line.split()
        status = columns[3]
        nodes = columns[2]
        if status in status_list:
            down_nodes.append(nodes)

if args.f:
    logs = [args.f]
else:
    try:
        logs = glob("/var/log/*")
    except IndexError:
        print "Something is wrong. Logs not available."
        sys.exit(1)

valid_errors = ["error", "MCE", "CATERR"]

for log in logs:
    with open(log, "r") as tmp_log:
        open_log = tmp_log.readlines()
        for line in open_log:
            for down_nodes in open_log:
                if valid_errors in open_log:
                    print valid_errors
What I have so far sort of works in testing, but it just finds the errors in valid_errors; it doesn't find lines that have both a down node and a valid error at the same time. Also, with a date, maybe something like: lines in a log that contain a down node, a valid error, and a date string less than 3 days old, or something.
As of Friday, I hadn't used Python for anything! I've worked only with shell scripts and always found that a bash script is perfect for what I need. So I am a beginner... :)
Thanks
Jon
I've broken down my specific question if it makes it clearer. Essentially, at the moment I'm just trying to find a line in a bunch of log files that contains any of the down_nodes AND any of the valid_errors. My code:
for node in down_nodes:
    for log in logs:
        with open(log) as log1:
            open_log = log1.readlines()
            for line in open_log:
                # match lines that name this node and any of the valid errors
                if node in line and any(err in line for err in valid_errors):
                    print line
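For the date condition, one approach is to parse the timestamp at the front of each line and compare it against a cutoff. A sketch written for Python 3, assuming syslog-style timestamps such as 'Mar 14 09:26:31' (adjust the format string to whatever your logs actually use):

from datetime import datetime, timedelta
from glob import glob

logs = glob("/var/log/*")
down_nodes = ["node01", "node02"]          # placeholder values
valid_errors = ["error", "MCE", "CATERR"]

cutoff = datetime.now() - timedelta(days=3)

def recent(line):
    # syslog timestamps omit the year, so assume the current one
    try:
        stamp = datetime.strptime(line[:15], "%b %d %H:%M:%S")
        return stamp.replace(year=datetime.now().year) >= cutoff
    except ValueError:
        return False  # line does not start with a timestamp

for log in logs:
    with open(log) as log1:
        for line in log1:
            if (any(node in line for node in down_nodes)
                    and any(err in line for err in valid_errors)
                    and recent(line)):
                print(line.rstrip())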

Redirecting Output From a Program to a File with Python: Specific Bug

I've been trying to run a Java program and capture its STDOUT output to a file from a Python script. The idea is to run test files through my program and check if the output matches the answers.
Per this and this SO question, using subprocess.call is the way to go. In the code below, I am doing subprocess.call(command, stdout=f) where f is the file I opened.
The resulting file is empty, and I can't quite understand why.
import glob
import subprocess

test_path = '/path/to/my/testfiles/'
class_path = '/path/to/classfiles/'
jar_path = '/path/to/external_jar/'
test_pattern = 'test_case*'
temp_file = 'res'

tests = glob.glob(test_path + test_pattern)  # find all test files
for i, tc in enumerate(tests):
    with open(test_path + temp_file, 'w') as f:
        # cd into directory where the class files are and run the program
        command = ('cd {p} ; java -cp {cp} package.MyProgram {tc_p}'
                   .format(p=class_path,
                           cp=jar_path,
                           tc_p=test_path + tc))
        # execute the command and direct all STDOUT to file
        subprocess.call(command.split(), stdout=f, stderr=subprocess.STDOUT)
        # diff is just a lambda func that uses os.system('diff')
        exec_code = diff(answers[i], test_path + temp_file)
        if exec_code == BAD:
            scream(':(')
I checked the docs for subprocess and they recommended using subprocess.run (added in Python 3.5). The run method returns an instance of CompletedProcess, which has a stdout field. I inspected it, and stdout was an empty string. This explained why the file f I tried to create was empty.
Even though the exit code was 0 (success) from subprocess.call, it didn't mean that my Java program actually got executed, presumably because cd is a shell construct: without shell=True, the ';' and the java invocation are passed to cd as literal arguments instead of being run as a second command. I ended up fixing this bug by breaking command down into two parts.
If you notice, I initially tried to cd into the correct directory and then execute the Java program, all in one command. I ended up removing cd from command and doing os.chdir(class_path) instead. The command now contained only the string to run the Java program. This did the trick.
So, the code looked like this:
good_code = 0
# Assume the same variables defined as in the original question
os.chdir(class_path) # get into the class files directory first
for i, tc in enumerate(tests):
with open(test_path+temp_file, 'w') as f:
# run the program
command = 'java -cp {cp} package.MyProgram {tc_p}'
.format(cp=jar_path,
tc_p=test_path + tc)
# runs the command and redirects it into the file f
# stores the instance of CompletedProcess
out = subprocess.run(command.split(), stdout=f)
# you can access useful info now
assert out.returncode == good_code
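A side note on the design: subprocess.run also accepts a cwd argument, which runs only the child in the given directory instead of changing the whole interpreter's working directory with os.chdir. A minimal sketch, with placeholder paths standing in for the question's variables:

import subprocess

# placeholder values for illustration
class_path = '/path/to/classfiles/'
command = ['java', '-cp', '/path/to/external_jar/',
           'package.MyProgram', '/path/to/testfile']

with open('res', 'w') as f:
    # cwd applies only to the child process; the parent is untouched
    out = subprocess.run(command, stdout=f, cwd=class_path)
print(out.returncode)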

bcp randomly fails in a batch of 100+ jobs

I have a Python program that generates over 300 files and uses bcp to move them to MSSQL. There is a high level of concurrency as about 21 files are being generated and bcp'd in at the same time. Here is the critical part of the program:
cmd = ['bcp', self.bcptbl, 'IN', outfile, '-f', 'bcpfmt.fmt', '-m1', '-U', uid, '-S', self.srv, '-P', pwd]
subprocess.check_output(cmd)
Three batch threads go at a time, with 7 sub-threads each, so 21 concurrent processes. At a random file, bcp fails with the error:
[Microsoft][SQL Server Native Client 11.0]Unable to open BCP host data-file
The error might have something to do with the way I create the file before bcp is invoked:
with open(outfile, 'a') as outf:
    proc = Popen('ext_prog.exe', stdin=PIPE, stdout=outf, stderr=PIPE)
    _, err = proc.communicate(input='\n'.join(patterns).encode('latin1'))
Something tells me that the file handle is not released by the external program, even though opening and closing the file is seemingly handled by me.
This is not a typical error, as permissions, folders, paths, etc. are all set up correctly, since it copies 80 to 150 files successfully before failing.
The bcp call in the code above failed frequently until I inserted the following check before it:
@staticmethod
def wait_file_is_ready(outfile):
    try:
        with open(outfile, 'r'):
            print("File {} is ready for reading".format(outfile))
    except BaseException as e:
        print("File {} is not ready: {}".format(outfile, e))
My reasoning is that Windows does not mark the file as closed in time, so opening and closing it myself helps. This fixed 99% of the errors, but with the massive job I ran today it came back to haunt me.
Things I tried to recover from the error:
- Adding a 1-hour sleep before re-running the same bcp command - fails
- Making a copy of the input file and re-running the bcp command - fails
- Running the bcp command manually from the command line - always works
More detailed code excerpt:
import os
import queue
import subprocess
from random import randint
from subprocess import Popen, PIPE
from threading import Thread

MAX_THREADS = 7

def start_batch(self):
    ts = []
    self.patternq = queue.Queue()
    self.bcptbl = '"tempdb.dbo.outtbl_{}"'.format(randint(0, 1E15))
    for thread_no in range(MAX_THREADS):
        tname = "thread_{:02}_of_{}".format(thread_no, MAX_THREADS)
        t = Thread(name=tname, target=self.load, args=(thread_no,))
        t.start()
        ts.append(t)
    for t in ts:
        t.join()

def load(self, thread_no):
    outfile = "d:\\tmp\\outfile_{}_{:02}.temp".format(
        randint(0, 1E15), thread_no)
    try:
        os.unlink(outfile)
    except FileNotFoundError:
        pass
    while True:
        try:
            patterns = self.patternq.get_nowait()
        except queue.Empty:
            break
        with open(outfile, 'a') as outf:
            proc = Popen('ext_prog.exe', stdin=PIPE, stdout=outf, stderr=PIPE)
            _, err = proc.communicate(input='\n'.join(patterns).encode('latin1'))
        cmd = ['bcp', self.bcptbl, 'IN', outfile, '-f', 'bcpfmt.fmt',
               '-m1', '-U', uid, '-S', self.srv, '-P', pwd]
        try:
            subprocess.check_output(cmd)
        except subprocess.CalledProcessError as e:
            # OK, it failed because "Unable to open BCP host data-file"
            # How can I recover from it?
            raise
I went around the problem by using ODBC to insert the records, slowly and carefully. That worked 2 out of 3 times. Here is the error I got on the third iteration:
os.unlink(outfile)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'd:\tmp\outfile_678410328373703.temp_03'
And a feasible explanation for the error, found in the MS forums:
Seems to be a long-standing issue and a bug inherent to Windows 7. There has been no "official" statement from MS acknowledging this bug, hence a patch or fix release is unlikely. Some users in the thread above have offered "fixes", but they are way too time-consuming and inefficient with regard to workflow productivity. This shouldn't be an issue one has to deal with when purchasing a new OS...
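One more avenue that may be worth ruling out, offered as an assumption rather than a confirmed diagnosis: on Windows, a handle that is open for redirection while another thread calls Popen can be inherited by that unrelated child process, which then keeps the file locked even after your own code closes it. A sketch of a mitigation is to serialize just the process creation so no two children are spawned at the same instant:

import threading
from subprocess import Popen, PIPE

spawn_lock = threading.Lock()  # shared by all worker threads

def spawn_ext_prog(outfile, patterns):
    with open(outfile, 'a') as outf:
        # hold the lock only around Popen so that children spawned by
        # other threads cannot inherit this file's handle mid-creation
        with spawn_lock:
            proc = Popen('ext_prog.exe', stdin=PIPE, stdout=outf, stderr=PIPE)
        _, err = proc.communicate(input='\n'.join(patterns).encode('latin1'))
    return err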

Script to capture everything on screen

So I have this python3 script that does a lot of automated testing for me; it takes roughly 20 minutes to run, and some user interaction is required. It also uses paramiko to ssh to a remote host for a separate test.
Eventually, I would like to hand this script over to the rest of my team; however, it has one feature missing: evidence collection!
I need to capture everything that appears on the terminal to a file. I have been experimenting with the Linux command 'script', but I cannot find an automated method of starting script and then executing my script.
I have a command in /usr/bin/:
script log_name;python3.5 /home/centos/scripts/test.py
When I run my command, it just stalls. Any help would be greatly appreciated!
Thanks :)
Is a redirection of the output to a file what you need?
python3.5 /home/centos/scripts/test.py > output.log 2>&1
Or if you want to keep the output on the terminal AND save it into a file:
python3.5 /home/centos/scripts/test.py 2>&1 | tee output.log
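If you'd rather capture everything from inside the script itself, here is a minimal sketch (my own illustration, not from the thread) that tees sys.stdout into a log file; note it only captures Python-level prints, not output written directly by child processes:

import sys

class Tee:
    # write everything to the real stream and to a log file
    def __init__(self, stream, logfile):
        self.stream = stream
        self.logfile = logfile
    def write(self, data):
        self.stream.write(data)
        self.logfile.write(data)
    def flush(self):
        self.stream.flush()
        self.logfile.flush()

log = open('output.log', 'w')
sys.stdout = Tee(sys.__stdout__, log)
print("this line goes to the terminal and to output.log")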
I needed to do this, and ended up with a solution that combined pexpect and ttyrec.
ttyrec produces output files that can be played back with a few different player applications - I use TermTV and IPBT.
If memory serves, I had to use pexpect to launch ttyrec (as well as my test's other commands) because I was using Jenkins to schedule the execution of my test, and pexpect seemed to be the easiest way to get a working interactive shell in a Jenkins job.
In your situation you might be able to get away with using just ttyrec, and skip the pexpect step - try running ttyrec -e command as mentioned in the ttyrec docs.
Finally, on the topic of interactive shells, there's an alternative to pexpect named "empty" that I've had some success with too - see http://empty.sourceforge.net/. If you're running Ubuntu or Debian you can install empty with apt-get install empty-expect
I actually managed to do it in python3. It took a lot of work, but here is the Python solution:
from subprocess import Popen, PIPE

# LOG_RUN_OUTPUT (the log file path) and ssh (a paramiko.SSHClient
# instance) are defined elsewhere in the script.

def record_log(output):
    try:
        with open(LOG_RUN_OUTPUT, 'a') as file:
            file.write(output)
    except:
        with open(LOG_RUN_OUTPUT, 'w') as file:
            file.write(output)

def execute(cmd, store=True):
    proc = Popen(cmd.encode("utf8"), shell=True, stdout=PIPE, stderr=PIPE)
    output = "\n".join(out.decode() for out in proc.communicate())
    template = '''Command:\n====================\n%s\nResult:\n====================\n%s'''
    output = template % (cmd, output)
    print(output)
    if store:
        record_log(output)
    return output

# SSH function
def ssh_connect(start_message, host_id, user_name, key, stage_commands):
    print(start_message)
    try:
        ssh.connect(hostname=host_id, username=user_name, key_filename=key, timeout=120)
    except:
        print("Failed to connect to " + host_id)
    for command in stage_commands:
        try:
            ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command(command)
        except:
            input("Paused, because " + command + " failed to run.\n Please verify and press enter to continue.")
        else:
            template = '''Command:\n====================\n%s\nResult:\n====================\n%s'''
            # decode the byte streams before formatting them into the template
            output = (ssh_stderr.read() + ssh_stdout.read()).decode()
            output = template % (command, output)
            record_log(output)
            print(output)
