How to print the file path and line number during program execution? - python

I am running a Python program written by someone else. The program has a lot of classes, methods, loops, etc.
Assume that I get an error: Python's error report contains the line number and the nested call information, and ends with a KeyError.
I want something like this:
While the program executes, the output I can see in the terminal should contain the filename (or path of the file) and the line number it is currently executing, and the error reporting should stay as usual.
Example (output):
file1 line1: normal output (if any)
file1 line2: normal output (if any)
file2 line1: normal output (if any)
file2 line2: normal output (if any)
file1 line9: normal output (if any)
file1 line10: normal output (if any)
.......................................

You just need to set up a log format of your choice that includes lineno and pathname. Consider this file, called test.py:
import logging
logging.basicConfig(
    format="%(pathname)s line%(lineno)s: %(message)s",
    level=logging.INFO
)
logging.info("test message")
And when you run this program,
$ python3 test.py
test.py line6: test message
Just paste the above snippet into any of your files, and make sure it is executed before any logging calls.
Here's a list of all parameters you can use in logging formats: logging.LogRecord

Use two modules from Python's standard library, linecache and sys:
import linecache
import sys

def PrintException():
    exc_type, exc_obj, tb = sys.exc_info()  # third value is the traceback
    f = tb.tb_frame
    lineno = tb.tb_lineno
    filename = f.f_code.co_filename
    linecache.checkcache(filename)
    line = linecache.getline(filename, lineno, f.f_globals)
    print('{}, {} "{}" (Error: {})'.format(filename, lineno, line.strip(), exc_obj))

try:
    print(1 / 0)  # place the code you need to check here
except Exception:
    PrintException()
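On Python 3, the standard traceback module produces the same information (filename, line number, offending source line) without walking frame objects by hand; a minimal sketch:

```python
import traceback

def print_exception():
    # format_exc() renders the traceback of the exception currently
    # being handled, including filename, line number and source line.
    print(traceback.format_exc())

try:
    1 / 0  # place the code you need to check here
except ZeroDivisionError:
    print_exception()
```

`traceback.print_exc()` does the same thing in one call if you don't need the text as a string.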

Related

Get Python Ansible_Runner module STDOUT Key Value

I'm using the ansible_runner Python module as below:
import ansible_runner
r = ansible_runner.run(private_data_dir='/tmp/demo', playbook='test.yml')
When I execute the above code, it shows the output directly without my printing it from Python. What I want is to save the stdout content into a Python variable for further text processing.
Did you read the manual at https://ansible-runner.readthedocs.io/en/stable/python_interface/ ? There is an example where you add another parameter, called output_fd, which can be a file handle instead of sys.stdout.
Sadly, this is a parameter of the run_command function, and the documentation is not very good. A look into the source code at https://github.com/ansible/ansible-runner/blob/devel/ansible_runner/interface.py may help.
According to the implementation details in https://github.com/ansible/ansible-runner/blob/devel/ansible_runner/runner.py it looks like, the run() function always prints to stdout.
According to the interface, there is a boolean flag run(json_mode=True) that stores the response as JSON (I expect in r instead of stdout), and there is another boolean flag quiet.
I played around a little bit. The relevant option to avoid output to stdout is quiet=True as run() attribute.
Ansible_Runner catches the output and writes it to a file in the artifacts directory. Every run() command produces that directory as described in https://ansible-runner.readthedocs.io/en/stable/intro/#runner-artifacts-directory-hierarchy. So there is a file called stdout in the artifact directory. It contains the details. You can read it as JSON.
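For example, assuming the documented artifacts/<ident>/stdout layout, a small helper (the function name is mine, not part of the library) could read back the newest captured output:

```python
from pathlib import Path

def read_runner_stdout(private_data_dir):
    """Return the contents of the newest artifacts/<ident>/stdout file.

    Assumes the artifact layout documented by ansible-runner:
    <private_data_dir>/artifacts/<ident>/stdout. The helper itself
    is a sketch, not a library function.
    """
    candidates = sorted(
        Path(private_data_dir).glob("artifacts/*/stdout"),
        key=lambda p: p.stat().st_mtime,
    )
    if not candidates:
        raise FileNotFoundError("no stdout artifact found")
    return candidates[-1].read_text()
```

After a run(), something like `output = read_runner_stdout('/tmp/demo')` would give you the captured text for further processing.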
But the returned object also already contains some relevant data. Here is my example:
import logging
import ansible_runner

playbook = 'playbook.yml'
private_data_dir = 'data/'   # existing folder with inventory etc.
work_dir = 'playbooks/'      # contains playbook and roles
# `inventory` is a dict defined elsewhere in the surrounding application

try:
    logging.debug('Running ansible playbook {} with private data dir {} in project dir {}'.format(playbook, private_data_dir, work_dir))
    runner = ansible_runner.run(
        private_data_dir=private_data_dir,
        project_dir=work_dir,
        playbook=playbook,
        quiet=True,
        json_mode=True
    )
    processed = runner.stats.get('processed')
    failed = runner.stats.get('failures')
    # TODO inform backend
    for host in processed:
        if host in failed:
            logging.error('Host {} failed'.format(host))
        else:
            logging.debug('Host {} backed up'.format(host))
    logging.error('Playbook ran into status {} on inventory {}'.format(runner.status, inventory.get('name')))
    if runner.rc != 0:
        pass  # we have an overall failure
    else:
        pass  # success message
except BaseException as err:
    logging.error('Could not process ansible playbook {}\n{}'.format(inventory.get('name'), err))
So this outputs all processed hosts and reports failures per host. More detailed output can be found in the stdout file in the artifacts directory.

How to get 'print()', 'os.system()' and 'subprocess.run()' output to be displayed in both console and log file?

Initially, I have a simple program that prints the whole output to the console.
Initial Code to display output in the console only
import os, subprocess
print("1. Before")
os.system('ver')
subprocess.run('whoami')
print('\n2. After')
Output in console
1. Before
Microsoft Windows [Version 10]
user01
2. After
Then, I decided to have a copy on a log file (log.txt) too while maintaining the original output to the console.
So, this is the new code.
import os, subprocess, sys

old_stdout = sys.stdout
log_file = open("log.txt", "w")
sys.stdout = log_file
print("1. Before")        # This appears in log.txt only, NOT in the console
os.system('ver')          # This appears in the console only, NOT in log.txt
subprocess.run('whoami')  # This appears in the console only, NOT in log.txt
print('\n2. After')       # This appears in log.txt only, NOT in the console
sys.stdout = old_stdout
log_file.close()
Unfortunately, this didn't work as expected. Some of the output appeared only on the console (os.system('ver') and subprocess.run('whoami')), while the print() output went only to the log.txt file and no longer appeared on the console.
Output in console
Microsoft Windows [Version 10]
user01
Output in log.txt file
1. Before
2. After
I was hoping to get similar output on both console and log.txt file. Is this possible?
What's wrong with my new code? Please let me know how to fix this.
Desired Output in both console and log.txt file
1. Before
Microsoft Windows [Version 10]
user01
2. After
The most appropriate way to handle this is with logging. Here's an example:
This is the Python 2.6+ and 3.x version of how you can do it (print() can't be overridden before 2.6).
import logging

log = logging.getLogger()
log.setLevel(logging.DEBUG)  # let the handlers decide what to keep

# How should our message appear?
formatter = logging.Formatter('%(message)s')

# This prints to screen
ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
ch.setFormatter(formatter)
log.addHandler(ch)

# This prints to file
fh = logging.FileHandler('/path/to/output_file.txt')
fh.setLevel(logging.DEBUG)
fh.setFormatter(formatter)
log.addHandler(fh)

def print(*args, **kwargs):
    log.debug(*args)
This option gives you the use of logging levels. For instance, you can scatter debug logging throughout your code for when the application starts acting funny; swap the screen handler's level to logging.DEBUG and suddenly you're getting that output on screen. Notice that in the above example we have two different logging levels, one for screen and another for file, so this will produce different output in each destination. You can remedy this by setting both to logging.INFO (or logging.DEBUG, etc.). (See the full docs relating to log levels here.)
In the above example, I've overridden print(), but I'd recommend instead that you just use the logger directly, with log.debug('Variable xyz: {}'.format(xyz)) or log.info('Some stuff that you want printed.')
Full logging documentation.
There's another, easier way to do it with overriding, but not quite so robust:
try:
    # Python 2
    import __builtin__
except ImportError:
    # Python 3
    import builtins as __builtin__

logfile = '/path/to/logging_file.log'

def print(*args, **kwargs):
    """Your custom print() function."""
    with open(logfile, 'a') as f_out:  # 'a' so the log survives multiple calls
        f_out.write(str(args[0]))
        f_out.write('\n')
        # Uncomment the line below if you want to tail the log or otherwise
        # need that info written to disk ASAP.
        # f_out.flush()
    return __builtin__.print(*args, **kwargs)
There is no magic done by the system: file pointers such as stdout and stderr have to be handled explicitly by your code. For example, stdout is one such file pointer, and you can write to it like this:
log_file_pointer = open('log.txt', 'wt')
print('print_to_fp', file=log_file_pointer)
# Note: print will call log_file_pointer.write('print_to_fp') followed by write('\n')
Based on your requirement, to make one call handle more than one file pointer you need a wrapper function like this:
def print_fps(content, files=()):
    # the `file` argument of print does zero magic: it can only
    # handle one file pointer at a time.
    for fi in files:
        print(content, file=fi)
Then you can make the magic happen (output to both screen and file):
import sys
log_file_pointer = open('log.txt', 'wt')
print_fps('1. Before', files=[log_file_pointer, sys.stdout])
print_fps('\n2. After', files=[log_file_pointer, sys.stdout])
After finishing the print part, let's move on to the system calls. When you run any command in the operating system, the results come back on the default system file pointers stdout and stderr. In Python 3, you can capture those results as bytes with subprocess.Popen. Running the code below, what you want is the result on stdout.
import subprocess
p = subprocess.Popen("whoami", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
# stdout: b'user01'
# stderr: b''
Yet again, you can call the wrapper function written above to produce output on both stdout and the targeted file pointer.
print_fps(stdout, files=[log_file_pointer, sys.stdout])
Finally, combining all the code above (plus one more convenience function):
import subprocess, sys

def print_fps(content, files=()):
    for fi in files:
        print(content, file=fi)

def get_stdout(command):
    p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p.communicate()
    # Note: the original idea was to return the raw stdout
    # return stdout
    # but in the asker's scenario the raw bytes need UTF-8 decoding,
    # plus stripping of the '\r\n' newline:
    return stdout.decode().replace('\r\n', '')

log_file_pointer = open('log.txt', 'wt')
print_fps('1. Before', files=[log_file_pointer, sys.stdout])
print_fps(get_stdout('ver'), files=[log_file_pointer, sys.stdout])
print_fps(get_stdout('whoami'), files=[log_file_pointer, sys.stdout])
print_fps('\n2. After', files=[log_file_pointer, sys.stdout])
Note: because the output of Popen is in bytes, you may need to decode it to get rid of the b'' prefix. stdout.decode() decodes the bytes to a UTF-8 str.
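On Python 3.7+, subprocess.run with capture_output=True collapses the Popen/communicate dance into one call; a sketch of an equivalent get_stdout (the function name is reused from above for illustration):

```python
import subprocess

def get_stdout(command):
    # capture_output=True (Python 3.7+) collects stdout and stderr;
    # text=True decodes them to str, so no manual .decode() is needed.
    result = subprocess.run(command, shell=True,
                            capture_output=True, text=True)
    return result.stdout.strip()
```

`.strip()` also removes the trailing newline, so the '\r\n' replacement from the combined example above is no longer necessary.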

shell multipipe broken with multiple python scripts

I am trying to get the stdout of a python script to be shell-piped in as stdin to another python script like so:
find ~/test -name "*.txt" | python my_multitail.py | python line_parser.py
It should print output, but nothing comes out of it.
Please note that this works:
find ~/test -name "*.txt" | python my_multitail.py | cat
And this works too:
echo "bla" | python line_parser.py
my_multitail.py prints out the new content of the .txt files:
from multitail import multitail
import sys

filenames = sys.stdin.readlines()
# we get rid of the trailing '\n'
for index, filename in enumerate(filenames):
    filenames[index] = filename.rstrip('\n')

for fn, line in multitail(filenames):
    print '%s: %s' % (fn, line),
    sys.stdout.flush()
When a new line is added to the .txt file ("hehe") then my_multitail.py prints:
/home/me/test2.txt: hehe
line_parser.py simply prints out what it gets on stdin:
import sys

for line in sys.stdin:
    print "line=", line
There is something I must be missing. Please community help me :)
There's a hint if you run your line_parser.py interactively:
$ python line_parser.py
a
b
c
line= a
line= b
line= c
Note that I hit Ctrl+D to provoke an EOF after entering the 'c'. You can see that it slurps up all the input before it starts iterating over the lines. Since this is a pipeline and you're continuously sending output through it, the EOF never arrives and it never starts processing. You'll need to choose a different way of iterating over stdin, for example:
import sys

line = sys.stdin.readline()
while line:
    print "line=", line
    line = sys.stdin.readline()
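In Python 3, file iteration no longer has this read-ahead problem, but the same streaming loop can also be written with iter() and its sentinel form; a sketch (the parse_lines name is mine):

```python
import sys

def parse_lines(stream):
    # iter(callable, sentinel) calls stream.readline() until it
    # returns '' (EOF), so each line is handled as soon as it
    # arrives rather than after the pipe closes.
    for line in iter(stream.readline, ''):
        print("line=", line, end='')
```

In line_parser.py you would hook it up with `parse_lines(sys.stdin)` at the bottom of the script.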

Python Command Line Arguments Try/Except

I want to create a program that takes two command line arguments: the first is the name of a file to open for parsing, and the second is the flag -s. If the user provides the wrong number of arguments, or the other argument is not -s, it should print the message "Usage: [-s] file_name" and terminate the program using exit.
Next, I want my program to attempt to open the file for reading. The program should read each line and return a count of every float, every integer, and every other kind of string that is neither. However, if opening the file fails, it should raise an exception, print "Unable to open [filename]", and quit using exit.
I've been looking up lots of stuff on the internet about command lines in Python but I've ended up more confused. So here's my attempt at it so far from what I've researched.
from optparse import OptionParser

def command_line():
    parser = OptionParser()
    parser.add_option("--file", "-s")
    options, args = parser.parse_args()
    if options.a and options.b:
        parser.error("Usage: [-s] file_name")
        exit

def read_file():
    # Try:
    #     Open input file
    # Except:
    #     print "Unable to open [filename]"
    #     Exit
    pass
from optparse import OptionParser
import sys, os

def command_line():
    parser = OptionParser("%prog [-s] file_name")
    parser.add_option("-s", dest="filename",
                      metavar="file_name", help="my help message")
    options, args = parser.parse_args()
    if not options.filename:
        parser.print_help()
        sys.exit()
    return options.filename

def read_file(fn):
    if os.path.isfile(fn):
        typecount = {}
        with open(fn) as f:
            for line in f:
                for i in line.split():
                    try:
                        t = type(eval(i))
                    except (NameError, SyntaxError):
                        # token is a plain string, not an int/float literal
                        t = type(i)
                    if t in typecount:
                        typecount[t] += 1
                    else:
                        typecount[t] = 1
        print(typecount)
    else:
        print("Unable to open {}".format(fn))
        sys.exit()

read_file(command_line())
So step by step:
options.a is not defined unless you define an option --a or (preferably) set dest="a".
using the built-in parser.print_help() is better than making your own, you get -h/--help for free then.
you never called your function command_line, therefore never getting any errors, as the syntax was correct. I set the commandline to pass only the filename as a return value, but there are better ways of doing this for when you have more options/arguments.
When it comes to read_file, instead of using try-except for the file I recommend os.path.isfile, which checks whether the file exists. (It does not check that the file has the right format, though.)
We then create a dictionary of types and loop over all lines, evaluating tokens separated by whitespace (space, newline, tab). If your values are separated by e.g. a comma, you need to use line.split(',').
If you want to use the counts later in your script, you might want to return typecount instead of printing it.
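Note that optparse has been deprecated since Python 2.7; for reference, the same command_line could be sketched with argparse, keeping the usage string the question asks for:

```python
import argparse
import sys

def command_line(argv=None):
    # argparse replaces the deprecated optparse; the usage string
    # matches the "Usage: [-s] file_name" message from the question.
    parser = argparse.ArgumentParser(usage="%(prog)s [-s] file_name")
    parser.add_argument("-s", dest="filename", metavar="file_name",
                        help="file whose tokens should be counted")
    args = parser.parse_args(argv)
    if not args.filename:
        parser.print_usage()
        sys.exit(1)
    return args.filename
```

Passing argv explicitly (instead of letting argparse read sys.argv) also makes the function easy to test.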

Python output on both console and file

I'm writing a code to analyze PDF file. I want to display the output on the console as well as to have a copy of the output in a file, I used this code save the output in a file:
import sys
sys.stdout = open('C:\\users\\Suleiman JK\\Desktop\\file.txt',"w")
print "test"
But could I display the output on the console as well, without using classes? I'm not good with them.
(This answer uses Python 3 and you will have to adapt it if you prefer Python 2.)
Start by importing the Python logging package (and sys for accessing the standard output stream):
import logging
import sys
In your entry point, set up a handler for both the standard output stream and your output file:
targets = logging.StreamHandler(sys.stdout), logging.FileHandler('test.log')
and configure the logging package to output only the message without the log level:
logging.basicConfig(format='%(message)s', level=logging.INFO, handlers=targets)
Now you can use it:
>>> logging.info('testing the logging system')
testing the logging system
>>> logging.info('second message')
second message
>>> print(*open('test.log'), sep='')
testing the logging system
second message
sys.stdout can point to any object that has a write method, so you can create a class that writes to both a file and the console.
import sys

class LoggingPrinter:
    def __init__(self, filename):
        self.out_file = open(filename, "w")
        self.old_stdout = sys.stdout
        # this object will take over `stdout`'s job
        sys.stdout = self
    # executed when the user does a `print`
    def write(self, text):
        self.old_stdout.write(text)
        self.out_file.write(text)
    # executed when `with` block begins
    def __enter__(self):
        return self
    # executed when `with` block ends
    def __exit__(self, type, value, traceback):
        # we don't want to log anymore; restore the original stdout object
        sys.stdout = self.old_stdout
        self.out_file.close()

print "Entering section of program that will be logged."
with LoggingPrinter("result.txt"):
    print "I've got a lovely bunch of coconuts."
print "Exiting logged section of program."
Result:
Console:
Entering section of program that will be logged.
I've got a lovely bunch of coconuts.
Exiting logged section of program.
result.txt:
I've got a lovely bunch of coconuts.
This method may be preferable to codesparkle's in some circumstances, because you don't have to replace all your existing prints with logging.info. Just put everything you want logged into a with block.
You could make a function which prints both to console and to file. You can either do it by switching stdout, e.g. like this:
def print_both(file, *args):
    temp = sys.stdout  # save the console output stream
    print ' '.join([str(arg) for arg in args])
    sys.stdout = file
    print ' '.join([str(arg) for arg in args])
    sys.stdout = temp  # set stdout back to console output
or by using file write method (I suggest using this unless you have to use stdout)
def print_both(file, *args):
    toprint = ' '.join([str(arg) for arg in args])
    print toprint
    file.write(toprint + '\n')  # print adds a newline, write does not
Note that:
The file argument passed to the function must be opened outside of the function (e.g. at the beginning of the program) and closed outside of the function (e.g. at the end of the program). You should open it in append mode.
Passing *args to the function allows you to pass arguments the same way you do to a print function. So you pass arguments to print...
...like this:
print_both(open_file_variable, 'pass arguments as if it is', 'print!', 1, '!')
Otherwise, you'd have to turn everything into a single argument, i.e. a single string. It would look like this:
print_both(open_file_variable, 'you should concatenate ' + str(4334654) + ' arguments together')
I still suggest you learn to use classes properly, you'd benefit from that very much. Hope this helps.
I was too lazy to write a function, so when I needed to print to both the console and a file I wrote this quick and (not so) dirty code:
import sys
...
with open('myreport.txt', 'w') as f:
    for out in [sys.stdout, f]:
        print('some data', file=out)
        print('some more data', file=out)
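A related Python 3 option, not shown in the answers above, is contextlib.redirect_stdout combined with a small tee object; the Tee class here is a hypothetical helper, a minimal sketch of the same idea as LoggingPrinter:

```python
import contextlib
import sys

class Tee:
    """Hypothetical minimal tee: forwards every write to several streams."""
    def __init__(self, *streams):
        self.streams = streams
    def write(self, text):
        for s in self.streams:
            s.write(text)
        return len(text)
    def flush(self):
        for s in self.streams:
            s.flush()

# Everything printed inside the block goes to both the console and myreport.txt.
with open('myreport.txt', 'w') as f:
    with contextlib.redirect_stdout(Tee(sys.stdout, f)):
        print('some data')
```

Unlike swapping sys.stdout by hand, redirect_stdout restores the original stream automatically when the block exits, even on an exception.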
