Store unhandled exceptions in log file - python

I am trying to build a logger, logger_script.py, for my Python scripts that:

- outputs a log file with a customizable log level;
- outputs console output with a customizable log level (not necessarily equal to the log file's);
- logs unhandled exceptions both to the log file and to the console.

I achieved the first two points by following the answer to "https://stackoverflow.com/questions/29087297/is-there-a-way-to-change-the-filemode-for-a-logger-object-that-is-not-configured/29087645". I adapted it to my needs and it now looks like:
import sys
import logging

def create_logger(log_filename, logfile_level, console_level):
    # create logger and file handler
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    fh = logging.FileHandler(log_filename, mode='w')
    fh.setLevel(logfile_level)
    # create console handler with independent log level
    ch = logging.StreamHandler(stream=sys.stdout)
    ch.setLevel(console_level)
    formatter = logging.Formatter('[%(asctime)s] %(levelname)8s: %(message)s' +
                                  ' (%(filename)s:%(lineno)s)',
                                  datefmt='%m-%d, %H:%M:%S')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    if logger.hasHandlers():  # clear pre-existing handlers (?)
        logger.handlers.clear()
    logger.addHandler(ch)
    logger.addHandler(fh)
    return logger

# create the logger: file log + console output
logger = create_logger("LogFile.log", logging.INFO, logging.WARNING)
##### piece of code with handled exceptions: #####
beta = 3
while beta > -3:
    try:
        2 / beta
        logger.info("division successful".rjust(20))
    except ZeroDivisionError:
        logger.exception("ZeroDivisionError".rjust(20))
    beta -= 1

##### piece of code with unhandled exception #####
gamma = 1/0
However, when running code with an unhandled exception (see the last line), it does not get passed to the log file, only to the console. I followed the advice from "Logging uncaught exceptions in Python" and added the following snippet:
def handle_exception(exc_type, exc_value, exc_traceback):
    if issubclass(exc_type, KeyboardInterrupt):
        sys.__excepthook__(exc_type, exc_value, exc_traceback)
        return
    logger.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback))

sys.excepthook = handle_exception
either before or after the logger creation lines, but it does not work in my case.
How can I make unhandled exceptions appear in the log, together with their traceback?
I would like to avoid encapsulating the whole code in a try/except statement.

I would like to avoid encapsulating the whole code in a try/except statement.
I'd recommend putting the whole code in a def main() then - that would already be useful if you decide to reuse parts of your code, since no code gets executed when importing the file. :)
Then you can do a catch-all in if __name__ == "__main__":
if __name__ == "__main__":
    try:
        main()
    except Exception as e:
        logger.exception("Program stopped because of a critical error.")
        raise
[That's literally how I do it in my own code. It doesn't need any tricks; I added it so that unexpected exceptions get recorded and I can debug later.]
raise re-raises the same error that was caught - so it's uncaught to the outside world. And .exception does the same thing as .error but includes the exception info on its own.
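To make that last point concrete, here is a minimal self-contained sketch (logger name and in-memory stream are mine, for demonstration only) showing that logger.exception() inside an except block records the traceback on its own:

```python
import io
import logging

# Sketch: logger.exception(msg) behaves like logger.error(msg, exc_info=True),
# so the log entry includes the full traceback of the current exception.
stream = io.StringIO()
log = logging.getLogger("exception_demo")
log.addHandler(logging.StreamHandler(stream))
log.propagate = False  # keep the record out of the root logger

try:
    1 / 0
except Exception:
    log.exception("Program stopped because of a critical error.")
    # equivalent form: log.error("Program stopped ...", exc_info=True)

output = stream.getvalue()
print("ZeroDivisionError" in output)  # True: the traceback was logged
```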

How to catch errors thrown by logging library itself in Python

I accidentally called logging.info() the wrong way in Python 3, and the library itself printed an error to the console, but the FileHandler failed to catch that error. So is there a way to catch all errors, no matter where they are thrown?
The error message looks like:
--- Logging error ---
...
TypeError: not all arguments converted during string formatting
Call stack:
  File "<ipython-input-12-5ba547bc4aeb>", line 1, in <module>
    logging.info(1,1)
Message: 1
Arguments: (1,)
Running the following code reproduces my question: the log file catches logging.info() and the ZeroDivisionError, but it fails to catch the error messages thrown by the logging library itself.
import logging

logger_formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s: %(message)s')
logger_handler = logging.FileHandler('/Users/your_name/Desktop/logging.log')
logger_handler.setLevel(logging.DEBUG)
logger_handler.setFormatter(logger_formatter)
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
root_logger.addHandler(logger_handler)

try:
    logging.info('test')
    logging.info(1,1)
    1/0
except:
    logging.exception("")
Log file output:
2021-03-10 18:07:32,315 - root - INFO: test
2021-03-10 18:07:32,324 - root - ERROR:
Traceback (most recent call last):
  File "<ipython-input-1-6a4f609a80ca>", line 17, in <module>
    1/0
ZeroDivisionError: division by zero
Logging all errors that can happen during logging is impossible, because an error may be one that breaks logging itself - and if that in turn triggered another logging call, it would lead to an infinite loop. You can, however, implement custom error handling by overriding handleError on your handlers and, if you are feeling particularly brave, attempting to write a log within that error handler. Based on your code, it would look something like this:
import logging

class MyFileHandler(logging.FileHandler):
    def handleError(self, record):
        logging.error('There was an error when writing a log. Msg was: ' + str(record.msg))

logger_handler = MyFileHandler('custom.log')
logger_handler.setLevel(logging.DEBUG)
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
root_logger.addHandler(logger_handler)

try:
    logging.info('test')
    logging.info(1,1)
    1/0
except:
    logging.exception("")
Of course, if you would rather have an exception that bubbles up instead, you could just raise from handleError.
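A minimal sketch of that bubble-up variant (the class and logger names are mine, not from the question): a handler whose handleError() re-raises, so a record that fails to format raises at the call site instead of printing "--- Logging error ---" to stderr:

```python
import io
import logging

# Handler.emit() calls handleError() from inside an except block, so a
# bare `raise` here re-raises the original formatting error to the caller.
class StrictHandler(logging.StreamHandler):
    def handleError(self, record):
        raise

log = logging.getLogger("strict_demo")
log.addHandler(StrictHandler(io.StringIO()))
log.propagate = False  # keep the record away from the root logger

caught = None
try:
    log.error("oops %d %d", "x")  # bad arguments: formatting fails in emit()
except TypeError as e:
    caught = e

print(type(caught).__name__)  # TypeError
```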

Python Log File is not created unless basicConfig is called on top before any functions

I have a script that processes CSVs and loads them into a database. My intern mentor wanted us to use a log file to capture what's going on, and he wanted it to be flexible, so one can use a config.ini file to edit where the log file is created. As a result I did just that, using a config file with key-value pairs in a dict from which I can extract the path to the log file. These are excerpts from my code where the log file is created and used:
dirconfig_file = r"C:\Users\sys_nsgprobeingestio\Documents\dozie\odfs\venv\odfs_tester_history_dirs.ini"
start_time = datetime.now()

def process_dirconfig_file(config_file_from_sysarg):
    try:
        if Path.is_file(dirconfig_file_Pobj):
            parseddict = {}
            configsects_set = set()
            for sect in config.sections():
                configsects_set.add(sect)
                for k, v in config.items(sect):
                    # print('{} = {}'.format(k, v))
                    parseddict[k] = v
            print(parseddict)
            try:
                if ("log_dir" not in parseddict or parseddict["log_dir"] == "" or "log_dir" not in configsects_set):
                    raise Exception(f"Error: Your config file is missing 'logfile path' or properly formatted [log_file] section for this script to run. Please edit config file to include logfile path to capture errors")
            except Exception as e:
                # raise Exception(e)
                logging.exception(e)
                print(e)

parse_dict = process_dirconfig_file(dirconfig_file)
logfilepath = parse_dict["log_dir"]
log_file_name = start_time.strftime(logfilepath)
print(log_file_name)

logging.basicConfig(
    filename=log_file_name,
    level=logging.DEBUG,
    format='[Probe Data Quality] %(asctime)s - %(name)s %(levelname)-7.7s %(message)s'
    # can you explain this Tenzin?
)

if __name__ == '__main__':
    try:
        startTime = datetime.now()
        db_instance = dbhandler(parse_dict["db_string"])
        odfs_tabletest_dict = db_instance['odfs_tester_history_files']
        odf_history_from_csv_to_dbtable(db_instance)
        # print("test exception")
        print(datetime.now() - startTime)
    except Exception as e:
        logging.exception(e)
        print(e)
Doing this, no file is created. The script runs with no errors, but no log file is created. I've tried several things, including using a hardcoded log file name instead of reading it from the config file, but that didn't work either.
The only thing that works is when the log file is created at the top, before any functions. Why is this?
When you call your process_dirconfig_file function, the logging configuration has not been set yet, so no file could have been created. The script executes top to bottom. It would be similar to doing something like this:
import sys

# default logging points to stdout/stderr kind of like this
my_logger = sys.stdout
my_logger.write("Something")

# Then you've pointed logging to a file
my_logger = open("some_file.log", 'w')
my_logger.write("Something else")
Only Something else would be written to our some_file.log, because my_logger pointed somewhere else beforehand.
Much the same is happening here. By default, the logging.debug and logging.info functions do nothing, because without further configuration their messages fall below the default WARNING threshold. logging.warning, logging.error, and logging.exception will always at least write to stderr out of the box.
Also, I don't think a try without a matching except is valid Python. And I wouldn't just print an exception raised by that function; I'd probably re-raise and have the program crash:
def process_dirconfig_file(config_file_from_sysarg):
    try:
        # Don't use logging.<anything> yet
        ~snip~
    except Exception as e:
        # Just raise or don't use try/except at all until
        # you have a better idea of what you want to do in this circumstance
        raise
Especially since you are trying to use the logger while validating that its configuration is correct.
The fix? Don't use the logger until after you've determined it's ready.

Python: How to write errors from the console to a txt file?

I have a Python script which every 10 minutes sends me an email with everything written to the console. I am running it with crontab on my Ubuntu 18.04 VPS.
Sometimes it doesn't send the mail, so I assume that when an error happens execution stops - but how can I get the errors written to a txt file so I can analyze them?
Logging Module
To demonstrate the approach with the logging module, this is the general approach:
import logging

# Create a logging instance
logger = logging.getLogger('my_application')
logger.setLevel(logging.INFO)  # you can set this to be DEBUG, INFO, ERROR

# Assign a file-handler to that instance
fh = logging.FileHandler("file_dir.txt")
fh.setLevel(logging.INFO)  # again, you can set this differently

# Format your logs (optional)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)  # This will set the format to the file handler

# Add the handler to your logging instance
logger.addHandler(fh)

try:
    raise ValueError("Some error occurred")
except ValueError as e:
    logger.exception(e)  # Will send the errors to the file
And if I cat file_dir.txt:

2019-03-14 14:52:50,676 - my_application - ERROR - Some error occurred
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
ValueError: Some error occurred
Print to File
As I pointed out in the comments, you could accomplish this with print as well (though I'm not sure you will be applauded for it):
# Set your stdout pointer to a file handler
with open('my_errors.txt', 'a') as fh:
try:
raise ValueError("Some error occurred")
except ValueError as e:
print(e, file=fh)
cat my_errors.txt
Some error occurred
Note that logging.exception includes the traceback in this case, which is one of the many huge benefits of that module.
Edit
In the interest of completeness, the traceback module takes a similar approach to print, in that you can supply a file handle:
import traceback
import sys

with open('error.txt', 'a') as fh:
    try:
        raise ValueError("Some error occurred")
    except ValueError as e:
        e_type, e_val, e_tb = sys.exc_info()
        traceback.print_exception(e_type, e_val, e_tb, file=fh)
This will include all of the information you want from logging
You can use the logging module as suggested in the comments (possibly superior, but outside the scope of my knowledge), or catch the errors with try and except like:
try:
    pass
    # run the code you currently have
except Exception as e:  # catch ALLLLLL errors!!!
    print(e)  # or more likely you'd want something like "email_to_me(e)"
Catching all exceptions is generally frowned upon, because should your program fail for whatever reason, the failure gets gobbled up in the except clause. A better approach is to figure out which specific error you are encountering, such as IndexError, and then catch just that error:
try:
    pass
    # run the code you currently have
except IndexError as e:  # catch only indexing errors!!!
    print(e)  # or more likely you'd want something like "email_to_me(e)"
To be able to debug, and not only know the kind of error that happened, you can also get the error stack using the traceback module (usually among the imports at the start of a module):
import traceback

try:
    my_function()
except Exception as e:
    print(e)
    traceback.print_exc()
Then run your code my_code.py in the console with:

python my_code.py >> my_prints.txt 2>&1

All the prints of your code will then be written to this .txt file, including the printed error and its stack (the 2>&1 part also redirects stderr, which is where tracebacks are printed). This is very useful in your case, or when running code in Docker, if you want to detach from it with ctrl+p+q and still know what is printed.

Python, logging pathname in decorator

When I log an error in a decorator, the logged pathname is not what I want.
logging.conf:
[loggers]
keys=root
[handlers]
keys=console
[formatters]
keys=console
[logger_root]
...
[handler_console]
...
[formatter_console]
format=%(levelname)s - File "%(pathname)s", line %(lineno)s, %(funcName)s: %(message)s
Normally, logging an error in file /home/lizs/test/app.py:
def app():
    try:
        a  # error, on line 12
    except Exception as err:
        logging.getLogger().error(str(err))
Debug message on console:
ERROR - File "/home/lizs/test/app.py", line 12, app: global name 'a' is not defined
The above logging pathname (/home/lizs/test/app.py) is what I want. But when I use a decorator:
/home/lizs/test/app.py:
from decorators import logging_decorator

@logging_decorator
def app():
    a
/home/lizs/test/decorators.py:
def logging_decorator(func):
    def error_log():
        try:
            func()  # on line 10
        except Exception as err:
            logging.getLogger().error(str(err))
    return error_log
The debug message:
ERROR - File "/home/lizs/test/decorators.py", line 10, error_log: global name 'a' is not defined
Now the logging pathname is the pathname of the decorator (/home/lizs/test/decorators.py).
How can I set the logging pathname to /home/lizs/test/app.py when I use the decorator?
Solution:
Try this:
app.py:
from decorators import logging_decorator

@logging_decorator
def app():
    a

app()
decorators.py:
import logging
import inspect

# init logger
logger = logging.getLogger()
formatter = logging.Formatter('%(levelname)s - File %(real_pathname)s,'
                              ' line %(real_lineno)s, %(real_funcName)s: %(message)s')
console_handle = logging.StreamHandler()
console_handle.setFormatter(formatter)
logger.addHandler(console_handle)

def logging_decorator(func):
    def error_log():
        try:
            func()
        except Exception as err:
            logger.error(err, extra={'real_pathname': inspect.getsourcefile(func),  # path to source file
                                     'real_lineno': inspect.trace()[-1][2],  # line number from trace
                                     'real_funcName': func.__name__})  # function name
    return error_log
Explanation:
According to the docs, you can pass a dictionary as the extra argument to populate the __dict__ of the LogRecord created for the logging event with user-defined attributes. These custom attributes can then be used as you like.
Thus, because we can't modify pathname directly, this approach with real_pathname is the most straightforward one possible.
Links:
inspect.getsourcefile
inspect.trace
logging.message
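As a standalone illustration of the extra mechanism (the attribute name request_id is made up for this demo), any key you pass appears as an attribute on the LogRecord and can be referenced from the format string:

```python
import io
import logging

# Each key in `extra` becomes a LogRecord attribute the formatter can use.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s [%(request_id)s] %(message)s"))
log = logging.getLogger("extra_demo")
log.addHandler(handler)
log.propagate = False

log.error("lookup failed", extra={"request_id": "abc-123"})
print(stream.getvalue().strip())  # ERROR [abc-123] lookup failed
```

If a key named in the format string is missing from extra, formatting the record fails, so this pattern works best with a dedicated handler or a Formatter that supplies defaults.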
Your problem is that your exception handler is one level above where the exception was initially raised, so you will have to examine the stack trace and manually build a LogRecord with the correct file/line information:
import sys
import traceback
import logging

def logging_decorator(func):
    def error_log():
        try:
            func()  # on line 10
        except Exception as err:
            tb = sys.exc_info()[2]  # extract the current exception info
            exc_tup = traceback.extract_tb(tb)[-1]  # extract the deepest stack frame
            logger = logging.getLogger()
            # manually build a LogRecord from that stack frame
            lr = logger.makeRecord(logger.name,
                                   logging.ERROR, exc_tup[0], exc_tup[1],
                                   str(err), {}, None, exc_tup[2])
            logger.handle(lr)  # and ask the logging system to process it
    return error_log

Print exception with stack trace to file

I'm trying to put a simple log into my script. This log should tell me where the error is, with as much of the information needed to repair the script as possible.
I've put a print-to-file of str(e) into each except, but it provides very little information about what is going wrong.
How can I make it more elaborate? For example, the whole uncaught-exception text which I can see in the console?
try:
    ...  # code
except Exception as e:
    print_to_file('log.txt', str(e))
Try this:
import traceback

try:
    1/0
except Exception as e:
    with open('log.txt', 'a') as f:
        f.write(str(e))
        f.write(traceback.format_exc())
If you want a better solution, you should use a Logger, which manages timestamps, file size, and rotation for you (via a logging handler).
This is an example with a logger, timestamps, and rotation:
import logging
from logging.handlers import RotatingFileHandler
import traceback

logger = logging.getLogger("Rotating Log")
logger.setLevel(logging.ERROR)
handler = RotatingFileHandler("log.txt", maxBytes=10000, backupCount=5)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)

try:
    1/0
except Exception as e:
    logger.error(str(e))
    logger.error(traceback.format_exc())
