Python stdout traceback as separate lines in Loggly

I'm running a small Python web app on Heroku and I've drained its logs to Loggly. When an exception is raised, the traceback appears as separate lines in Loggly, which of course makes it hard to search.
How do you make tracebacks appear as a single log on Loggly?
Example:

You should set up Python logging according to the instructions on this page:
https://www.loggly.com/docs/python-http/
Then modify Step 3 (where you send the log events) so that it logs an exception, as follows:
import logging
import logging.config
import loggly.handlers

logging.config.fileConfig('python.conf')
logger = logging.getLogger('myLogger')

logger.info('Test log')
try:
    main_loop()
except Exception:
    logger.exception("Fatal error in main loop")
You will see that the exception appears as a single log event:
{ "loggerName":"myLogger", "asciTime":"2015-08-04 15:09:00,220", "fileName":"test_log.py", "logRecordCreationTime":"1438726140.220768", "functionName":"<module>", "levelNo":"40", "lineNo":"15", "time":"220", "levelName":"ERROR", "message":"Fatal error in main loop"}
Traceback (most recent call last):
File "./test_log.py", line 13, in <module>
main_loop()
NameError: name 'main_loop' is not defined
}
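For reference, the python.conf loaded by fileConfig() above uses the standard fileConfig layout; a sketch along the lines of the linked Loggly docs looks roughly like this (TOKEN is a placeholder for your own customer token, and the formatter simply mirrors the JSON fields shown in the output above):
; python.conf (sketch; replace TOKEN with your Loggly customer token)
[loggers]
keys=root,myLogger

[handlers]
keys=HTTPSHandler

[formatters]
keys=jsonFormat

[logger_root]
handlers=HTTPSHandler
level=INFO

[logger_myLogger]
handlers=HTTPSHandler
level=INFO
qualname=myLogger
propagate=0

[handler_HTTPSHandler]
class=loggly.handlers.HTTPSHandler
formatter=jsonFormat
args=('https://logs-01.loggly.com/inputs/TOKEN/tag/python','POST')

[formatter_jsonFormat]
format={ "loggerName":"%(name)s", "asciTime":"%(asctime)s", "fileName":"%(filename)s", "logRecordCreationTime":"%(created)f", "functionName":"%(funcName)s", "levelNo":"%(levelno)s", "lineNo":"%(lineno)d", "time":"%(msecs)d", "levelName":"%(levelname)s", "message":"%(message)s"}
The key point is that logger.exception() logs at ERROR level with exc_info, so the traceback is appended to one formatted message and arrives in Loggly as a single event rather than as separate lines.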

Related

python: Handling log output of module during program execution

I'm setting up a logger in my script like shown at the bottom. This works fine for my purposes and logs my __main__ log messages and those of any modules I use to stdout and a log file.
During program execution, a call into a module I'm using, xarray.open_dataset(file, engine="cfgrib"), raises an error under some conditions and produces the following log output:
2023-02-18 10:02:06,731 cfgrib.dataset ERROR skipping variable: paramId==228029 shortName='i10fg'
Traceback (most recent call last):
...
How can I access this output during program execution?
The raised error in the cfgrib module is handled there gracefully and program execution can continue, but the logic of my program requires that I access the error message, in particular the part saying shortName='i10fg' in order to handle the error exhaustively.
Here is how my logger is set up:
# assumed imports (not shown in the original snippet): lg is the stdlib
# logging module and clg is presumably the coloredlogs package
import logging as lg
import coloredlogs as clg

# `settings` is read elsewhere from the config file (settings.toml)

def init_log():
    """initialize logging
    returns logger using log settings from the config file (settings.toml)
    """
    # all settings from a settings file with reasonable defaults
    lg.basicConfig(
        level=settings.logging.log_level,
        format=settings.logging.format,
        filemode=settings.logging.logfile_mode,
        filename=settings.logging.filename,
    )
    mylogger = lg.getLogger(__name__)
    stream = lg.StreamHandler()
    mylogger.addHandler(stream)
    clg.install(
        level=settings.logging.log_level,
        logger=mylogger,
        fmt="%(asctime)s %(levelname)s:\t%(message)s",
    )
    return mylogger
# main
log = init_log()
log.info('...reading files...')
I went through the python logging documentation and cookbook. While this contains ample examples on how to modify logging for various purposes, I could not find an example for accessing and reacting to a log message during program execution.
The exception in my logs looks like this:
2023-02-20 12:22:37,209 cfgrib.dataset ERROR skipping variable: paramId==228029 shortName='i10fg'
Traceback (most recent call last):
  File "/home/foo/projects/windgrabber/.venv/lib/python3.10/site-packages/cfgrib/dataset.py", line 660, in build_dataset_components
    dict_merge(variables, coord_vars)
  File "/home/foo/projects/windgrabber/.venv/lib/python3.10/site-packages/cfgrib/dataset.py", line 591, in dict_merge
    raise DatasetBuildError(
cfgrib.dataset.DatasetBuildError: key present and new value is different: key='time' value=Variable(dimensions=('time',), data=array([1640995200, 1640998800, 1641002400, ..., 1672520400, 1672524000,
       1672527600])) new_value=Variable(dimensions=('time',), data=array([1640973600, 1641016800, 1641060000, 1641103200, 1641146400,
       1641189600, 1641232800, 1641276000, 1641319200, 1641362400,
I cannot catch the Exception directly for some reason:
...
import sys
from cfgrib.dataset import DatasetBuildError
...

try:
    df = xr.open_dataset(file, engine="cfgrib").to_dataframe()
    # triggering error manually like with the two lines below works as expected
    # raise Exception()
    # raise DatasetBuildError()
except Exception as e:
    print('got an Exception')
    print(e)
    print(e.args)
except BaseException as e:
    print('got a BaseException')
    print(e.args)
except DatasetBuildError as e:
    print(e)
except:
    print('got any and all exception')
    type, value, traceback = sys.exc_info()
    print(type)
    print(value)
    print(traceback)
Unless I uncomment the two lines where I raise the exception manually, the except clauses are never triggered, even though I can see the DatasetBuildError in my logs.
Not sure if this has any bearing, but while I can see the Exception as quoted above in my file log, it is not printed to stdout.
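One way to get at that message while the program keeps running, assuming cfgrib emits it through the standard logging machinery on the cfgrib.dataset logger (which the log lines above suggest), is to attach your own handler that simply keeps the records so the rest of the program can inspect them; the CollectingHandler class and the shortName parsing below are only illustrative, not part of cfgrib:
import logging
import re

class CollectingHandler(logging.Handler):
    """Keep every record handed to this handler so it can be inspected later."""
    def __init__(self):
        super().__init__(level=logging.ERROR)
        self.records = []

    def emit(self, record):
        self.records.append(record)

collector = CollectingHandler()
logging.getLogger("cfgrib.dataset").addHandler(collector)

# ... xr.open_dataset(file, engine="cfgrib") is called here ...

for record in collector.records:
    match = re.search(r"shortName='([^']+)'", record.getMessage())
    if match:
        print("cfgrib skipped variable:", match.group(1))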

How to catch errors thrown by logging library itself in Python

I accidentally called logging.info() the wrong way in Python 3, and the library itself printed an error to the console, but the FileHandler failed to capture that error. Is there a way to catch all errors, no matter where they are thrown?
The error message looks like:
--- Logging error ---
...
TypeError: not all arguments converted during string formatting
Call stack:
File "<ipython-input-12-5ba547bc4aeb>", line 1, in <module>
logging.info(1,1)
Message: 1
Arguments: (1,)
Copying the following code reproduces my question. The log file captures logging.info() and the ZeroDivisionError, but it fails to capture the error messages thrown by the logging library itself.
import logging
logger_formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s: %(message)s')
logger_handler = logging.FileHandler('/Users/your_name/Desktop/logging.log')
logger_handler.setLevel(logging.DEBUG)
logger_handler.setFormatter(logger_formatter)
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
root_logger.addHandler(logger_handler)
try:
    logging.info('test')
    logging.info(1,1)
    1/0
except:
    logging.exception("")
Log file output:
2021-03-10 18:07:32,315 - root - INFO: test
2021-03-10 18:07:32,324 - root - ERROR:
Traceback (most recent call last):
  File "<ipython-input-1-6a4f609a80ca>", line 17, in <module>
    1/0
ZeroDivisionError: division by zero
Logging all errors that can happen during logging is impossible, because the error may be one that breaks logging itself; if that in turn triggered another logging call, it would lead to an infinite loop. You can, however, implement custom error handling by overriding handleError on your handlers and, if you are feeling particularly brave, attempt to write a log entry within that error handler. Based on your code it would look something like this:
import logging

class MyFileHandler(logging.FileHandler):
    def handleError(self, record):
        logging.error('There was an error when writing a log. Msg was: ' + str(record.msg))

logger_handler = MyFileHandler('custom.log')
logger_handler.setLevel(logging.DEBUG)
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
root_logger.addHandler(logger_handler)

try:
    logging.info('test')
    logging.info(1,1)
    1/0
except:
    logging.exception("")
Of course if you would rather have an exception that bubbles up instead you could just raise from handleError.
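A minimal sketch of that variant (the class name is just illustrative):
import logging

class StrictFileHandler(logging.FileHandler):
    """FileHandler that re-raises logging errors instead of printing them to stderr."""
    def handleError(self, record):
        # handleError() is invoked from the except block inside emit(),
        # so a bare raise re-raises the original formatting/IO error and
        # lets it bubble up to the logging call site.
        raise

root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
root_logger.addHandler(StrictFileHandler('custom.log'))

logging.info(1, 1)  # now raises TypeError instead of printing "--- Logging error ---"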

logging exception into log file in python

I have to create one single log file for the entire application, and if any module in the application raises an exception, it should go into the except block and the error should be written to the log file. This log file should not be overwritten; instead, if exceptions are raised by multiple modules, all of them should be logged into the same single log file.
I have tried the logger with the code below, but it is not creating a log file:
import logging

with open("D:\logfile\log.txt", "w") as log:
    logging.basicConfig(filename=log, level=logging.DEBUG, format='%(asctime)s %(levelname)s %(name)s %(message)s')
    logger = logging.getLogger(__name__)
    try:
        1/0
    except ZeroDivisionError as err:
        logger.error(err)
Also, using the with clause needs indentation, and I do not want to put all modules under one with clause. Instead I want to simply create one log file at the beginning of the program, and as the program executes and modules raise exceptions, those exceptions should be written into that one log file.
I don't know the logging module well, but some googling suggests that it supports writing the traceback into the logfile. Also, your code didn't seem to work on my machine (Python 3.8.5), so I edited it so that it works.
logging.exception('text') logs the traceback to the logfile, and you can specify a message which will be displayed beforehand.
The code:
import logging

#with open("log.txt", "w") as log:
logging.basicConfig(filename='log.txt', level=logging.DEBUG, format='%(asctime)s %(levelname)s %(name)s %(message)s')
logger = logging.getLogger(__name__)

try:
    1/0
except ZeroDivisionError:
    logging.exception('Oh no! Error. Here is the traceback info:')
The logfile:
2020-08-20 08:31:02,310 ERROR root Oh no! Error. Here is the traceback info:
Traceback (most recent call last):
  File "logger.py", line 7, in <module>
    1/0
ZeroDivisionError: division by zero
This has the advantage that the whole traceback is logged, which is usually more helpful.
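Since the question asks for one log file shared by the whole application, here is a minimal sketch of that pattern (the module and file names are only illustrative): configure logging once at program start, and let every other module simply call logging.getLogger(__name__).
# main.py - configure logging once, at program start
import logging

import worker  # any module of your application

# basicConfig's default filemode is 'a', so the file is appended to, not overwritten
logging.basicConfig(filename='log.txt', level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(name)s %(message)s')

try:
    worker.run()
except Exception:
    logging.exception('Unhandled error in main')

# worker.py - no logging configuration needed here
import logging

logger = logging.getLogger(__name__)

def run():
    try:
        1/0
    except ZeroDivisionError:
        logger.exception('Error inside worker.run')
        raise
Every logger created with getLogger(__name__) propagates to the root logger configured in main.py, so exceptions from all modules end up in the same log.txt.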

Python: How to write error in the console in txt file?

I have a Python script which every 10 minutes sends me an email with everything written to the console. I am running it with crontab on my Ubuntu 18.04 VPS.
Sometimes it doesn't send the mail, so I assume that when an error happens execution stops. How can I get the errors written to a txt file so I can analyze them?
Logging Module
To demonstrate the logging module, this is the general approach:
import logging
# Create a logging instance
logger = logging.getLogger('my_application')
logger.setLevel(logging.INFO) # you can set this to be DEBUG, INFO, ERROR
# Assign a file-handler to that instance
fh = logging.FileHandler("file_dir.txt")
fh.setLevel(logging.INFO) # again, you can set this differently
# Format your logs (optional)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter) # This will set the format to the file handler
# Add the handler to your logging instance
logger.addHandler(fh)
try:
    raise ValueError("Some error occurred")
except ValueError as e:
    logger.exception(e)  # Will send the errors to the file
And if I cat file_dir.txt
2019-03-14 14:52:50,676 - my_application - ERROR - Some error occurred
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
ValueError: Some error occurred
Print to File
As I pointed out in the comments, you could accomplish this with print as well (I'm not sure you will be applauded for it)
# Set your stdout pointer to a file handler
with open('my_errors.txt', 'a') as fh:
    try:
        raise ValueError("Some error occurred")
    except ValueError as e:
        print(e, file=fh)
cat my_errors.txt
Some error occurred
Note that logging.exception includes the traceback in this case, which is one of the many huge benefits of that module
Edit
In the interest of completeness, the traceback module leverages a similar approach as print, where you can supply a file handle:
import traceback
import sys
with open('error.txt', 'a') as fh:
    try:
        raise ValueError("Some error occurred")
    except ValueError as e:
        e_type, e_val, e_tb = sys.exc_info()
        traceback.print_exception(e_type, e_val, e_tb, file=fh)
This will include all of the information you want from logging
You can use the logging module as suggested in the comments (possibly superior but outside the scope of my knowledge), or catch the errors with try and except like:
try:
    pass
    # run the code you currently have
except Exception as e:  # catch ALLLLLL errors!!!
    print(e)  # or more likely you'd want something like "email_to_me(e)"
Catching all exceptions like this is generally frowned upon, because should your program fail for whatever reason, the failure gets gobbled up in the except clause. A better approach is to figure out which specific error you are encountering, such as IndexError, and then catch just that error:
try:
    pass
    # run the code you currently have
except IndexError as e:  # catch only indexing errors!!!
    print(e)  # or more likely you'd want something like "email_to_me(e)"
To be able to debug, and not just know the kind of error that happened, you can also get the error stack using the traceback module (usually imported at the top of the module):
import traceback

try:
    my_function()
except Exception as e:
    print(e)
    traceback.print_exc()
Then run your code 'my_code.py' from the console using >>:
python my_code.py >> my_prints.txt 2>&1
All the prints of your code will then be written into this .txt file, including the printed error and its stack (the 2>&1 also redirects stderr, where traceback.print_exc() writes by default, into the same file). This is very useful in your case, or when running code in Docker, if you want to detach from it with ctrl+p+q and still know what was printed.

How to do debugging prints in Python?

I have a function like this....
def validate_phone(raw_number, debug=False):
I want the debug flag to control whether it outputs logging statements. For example:
if (debug):
    print('Before splitting numbers', file=sys.stderr)
split_version = raw_number.split('-')
if (debug):
    print('After splitting numbers', file=sys.stderr)
That code is very repetitive however. What is the cleanest (DRYest?) way to handle such if-flag-then-log logic?
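For comparison with the answers below, the non-repetitive version replaces the flag checks with a logger; a minimal sketch that keeps the validate_phone signature from the question:
import logging
import sys

logging.basicConfig(stream=sys.stderr, format='%(message)s')
logger = logging.getLogger(__name__)

def validate_phone(raw_number, debug=False):
    # one switch instead of an `if debug:` guard before every print
    logger.setLevel(logging.DEBUG if debug else logging.WARNING)
    logger.debug('Before splitting numbers')
    split_version = raw_number.split('-')
    logger.debug('After splitting numbers')
    return split_version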
I agree that using logging is the best solution for printing debugging information while running a Python script. I wrote a DebugPrint module that makes using the logger easier:
#DebugPrint.py
import logging
import os
import time

DEBUGMODE = True

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger('=>')

#DebugPrint.py
#DbgPrint=logging.debug
def DbgPrint(*args, **kwargs):
    if DEBUGMODE:
        #get module, class, function, linenumber information
        import inspect
        className = None
        try:
            className = inspect.stack()[2][0].f_locals['self'].__class__.__name__
        except:
            pass
        modName = None
        try:
            modName = os.path.basename(inspect.stack()[2][1])
        except:
            pass
        lineNo = inspect.stack()[2][2]
        fnName = None
        try:
            fnName = inspect.stack()[2][3]
        except:
            pass
        DbgText = "line#{}:{}->{}->{}()".format(lineNo, modName, className, fnName)
        argCnt = len(args)
        kwargCnt = len(kwargs)
        #print("argCnt:{} kwargCnt:{}".format(argCnt,kwargCnt))
        fmt = ""
        fmt1 = DbgText + ":" + time.strftime("%H:%M:%S") + "->"
        if argCnt > 0:
            fmt1 += (argCnt - 1) * "%s,"
            fmt1 += "%s"
            fmt += fmt1
        if kwargCnt > 0:
            fmt2 = "%s"
            args += ("{}".format(kwargs),)
            if len(fmt) > 0:
                fmt += "," + fmt2
            else:
                fmt += fmt2
        #print("fmt:{}".format(fmt))
        log.debug(fmt, *args)

if __name__ == "__main__":
    def myTest():
        print("Running myTest()")
        DbgPrint("Hello", "World")

    myTest()
If the DEBUGMODE variable is false, nothing will be printed.
If it is true, the sample code above prints out:
DEBUG:=>:16:24:14:line#78:DebugPrint.py->None->myTest():->Hello,World
Now I'm going to test DebugPrint with a module that defines a class.
#testDebugPrint.py
from DebugPrint import DbgPrint

class myTestClass(object):
    def __init__(self):
        DbgPrint("Initializing the class")

    def doSomething(self, arg):
        DbgPrint("I'm doing something with {}".format(arg))

if __name__ == '__main__':
    test = myTestClass()
    test.doSomething("a friend!")
When this script is run the output is as follows:
DEBUG:=>:16:25:02:line#7:testDebugPrint.py->myTestClass->__init__():->Initializing the class
DEBUG:=>:16:25:02:line#10:testDebugPrint.py->myTestClass->doSomething():->I'm doing something with a friend!
Note that the module name, class name, function name and line number printed on the console are correct, as is the time the statement was printed.
I hope that you will find this utility useful.
I would use the logging module for it. It's made for it.
> cat a.py
import logging

log = logging.getLogger(__name__)

def main():
    log.debug('This is debug')
    log.info('This is info')
    log.warn('This is warn')
    log.fatal('This is fatal')
    try:
        raise Exception("this is exception")
    except Exception:
        log.warn('Failed with exception', exc_info=True)
        raise

if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser(description='something')
    parser.add_argument(
        '-v', '--verbose', action='count', default=0, dest='verbosity')
    args = parser.parse_args()
    logging.basicConfig()
    logging.getLogger().setLevel(logging.WARN - 10 * args.verbosity)
    main()
> python a.py
WARNING:__main__:This is warn
CRITICAL:__main__:This is fatal
WARNING:__main__:Failed with exception
Traceback (most recent call last):
  File "a.py", line 12, in main
    raise Exception("this is exception")
Exception: this is exception
Traceback (most recent call last):
  File "a.py", line 27, in <module>
    main()
  File "a.py", line 12, in main
    raise Exception("this is exception")
Exception: this is exception
> python a.py -v
INFO:__main__:This is info
WARNING:__main__:This is warn
CRITICAL:__main__:This is fatal
WARNING:__main__:Failed with exception
Traceback (most recent call last):
  File "a.py", line 12, in main
    raise Exception("this is exception")
Exception: this is exception
Traceback (most recent call last):
  File "a.py", line 27, in <module>
    main()
  File "a.py", line 12, in main
    raise Exception("this is exception")
Exception: this is exception
> python a.py -vv
DEBUG:__main__:This is debug
INFO:__main__:This is info
WARNING:__main__:This is warn
CRITICAL:__main__:This is fatal
WARNING:__main__:Failed with exception
Traceback (most recent call last):
  File "a.py", line 12, in main
    raise Exception("this is exception")
Exception: this is exception
Traceback (most recent call last):
  File "a.py", line 27, in <module>
    main()
  File "a.py", line 12, in main
    raise Exception("this is exception")
Exception: this is exception
import logging
logging.basicConfig(filename='example.log',level=logging.DEBUG)
logging.debug('This message should go to the log file')
You can change the logging level to DEBUG/INFO/WARNING/ERROR/CRITICAL, and the logging module can also record a timestamp for you; that is configurable as well.
Check the Python 3 logging HOWTO.
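For example, adding %(asctime)s to the format string puts the timestamp on every line (the filename and message below are just placeholders):
import logging

# level and format are both configurable; %(asctime)s adds the timestamp
logging.basicConfig(filename='example.log',
                    level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(message)s')

logging.info('This line will be prefixed with a timestamp and level')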
You ought to use the logging module:
import logging
import sys

# Get the "root" level logger.
root = logging.getLogger()

# Set the log level to debug.
root.setLevel(logging.DEBUG)

# Add a handler to output log messages as a stream (to a file/handle)
# in this case, sys.stderr
ch = logging.StreamHandler(sys.stderr)

# This handler can log debug messages - multiple handlers could log
# different "levels" of log messages.
ch.setLevel(logging.DEBUG)

# Format the output to include the time, etc.
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)

# Add the handler to the root logger...
root.addHandler(ch)

# Get a logger object - this is what we use to emit log messages.
logger = logging.getLogger(__name__)

# Generate a bunch of log messages as a demonstration
for i in xrange(100):
    logger.debug('The value of i is: %s', i)

# Demonstrate a useful example of logging full exception tracebacks
# if the logger will output debug, but a warning in other modes.
try:
    a = 'a' + 12
except Exception as e:
    # Only log exceptions if debug is enabled..
    if logger.isEnabledFor(logging.DEBUG):
        # Log the traceback w/ the exception message.
        logger.exception('Something went wrong: %s', e)
    else:
        logger.warning('Something went wrong: %s', e)
Read more about it here: https://docs.python.org/2/library/logging.html
To disable the debug logging, just set the level (logging.DEBUG) to something else (like logging.INFO). Note that it's also quite easy to redirect the messages elsewhere (like a file), or to send some messages (debug) to one place and other messages (warning) to another, as sketched below.
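A minimal sketch of that split, assuming you want full debug detail in a file and only warnings on the console (the debug.log filename is arbitrary):
import logging
import sys

root = logging.getLogger()
root.setLevel(logging.DEBUG)

# everything (DEBUG and up) goes to a file...
file_handler = logging.FileHandler('debug.log')
file_handler.setLevel(logging.DEBUG)
root.addHandler(file_handler)

# ...while only WARNING and above reach the console
console_handler = logging.StreamHandler(sys.stderr)
console_handler.setLevel(logging.WARNING)
root.addHandler(console_handler)

logging.debug('file only')
logging.warning('file and console')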
