open xml file error to log - python

When I run my Python script from the command line, I get an error when trying to retrieve an XML file. I would like to write this error message to a log file.
import logging

logger = logging.getLogger('fileChecker')
hdlr = logging.FileHandler('./fileChecker.log')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.WARNING)

try:
    fileXML = mylibrary.getXML(enf, name)
    logger.info('got xml file ok')
except TypeError:
    errorCount += 1
    logger.error('we have a problem')
mylibrary.getXML() is throwing an error here that I can see in the command-line output.
How do I get that error written to my log file?

You can catch the exception object that was raised:
try:
    fileXML = mylibrary.getXML(enf, name)
    logger.info('got xml file ok')
except TypeError as e:
    errorCount += 1
    logger.error('We have a problem: ' + str(e))
Note that if you're using the Python logging module, there is also the Logger.exception method.
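For instance, a minimal sketch of the same except block using Logger.exception, which logs at ERROR level and appends the current traceback automatically:

try:
    fileXML = mylibrary.getXML(enf, name)
except TypeError:
    errorCount += 1
    # Logger.exception logs at ERROR level and adds the traceback of the active exception
    logger.exception('we have a problem')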

Store unhandled exceptions in log file

I am trying to build a logger logger_script.py for my python scripts that:
outputs a log file with customizable log level.
outputs a console output with customizable log level (not necessarily equal to the log file's one)
logs unhandled exceptions both to the log file and to the console
I achieved the first two points by following the answer to "https://stackoverflow.com/questions/29087297/is-there-a-way-to-change-the-filemode-for-a-logger-object-that-is-not-configured/29087645". I adapted it to my needs and it now looks like this:
import sys
import logging

def create_logger(log_filename, logfile_level, console_level):
    # create logger and file handler
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    fh = logging.FileHandler(log_filename, mode='w')
    fh.setLevel(logfile_level)
    # create console handler with independent log level
    ch = logging.StreamHandler(stream=sys.stdout)
    ch.setLevel(console_level)
    formatter = logging.Formatter('[%(asctime)s] %(levelname)8s: %(message)s' +
                                  ' (%(filename)s:%(lineno)s)',
                                  datefmt='%m-%d, %H:%M:%S')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    if logger.hasHandlers():  # clear pre-existing handlers (?)
        logger.handlers.clear()
    logger.addHandler(ch)
    logger.addHandler(fh)
    return logger

# create the logger: file log + console output
logger = create_logger("LogFile.log", logging.INFO, logging.WARNING)

##### piece of code with handled exceptions: #####
beta = 3
while beta > -3:
    try:
        2/beta
        logger.info("division successful".rjust(20))
    except ZeroDivisionError:
        logger.exception("ZeroDivisionError".rjust(20))
    beta -= 1

##### piece of code with unhandled exception #####
gamma = 1/0
However, when running code with an unhandled exception (see the last line), the exception does not get passed to the log file, only to the console.
I followed the advice from "Logging uncaught exceptions in Python" and added the following snippet:
def handle_exception(exc_type, exc_value, exc_traceback):
    if issubclass(exc_type, KeyboardInterrupt):
        sys.__excepthook__(exc_type, exc_value, exc_traceback)
        return
    logger.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback))

sys.excepthook = handle_exception
either before or after the logger creation lines, but it does not work in my case.
How can I make unhandled exceptions appear in the log, together with their traceback?
I would like to avoid encapsulating the whole code into a try: except statement.
Since you would like to avoid encapsulating the whole code in a try: except statement, I'd recommend putting the whole code in a def main() instead - that is already useful if you decide to reuse parts of your code, since nothing gets executed when the file is imported. :)
Then you can do a catch-all in the if __name__ == "__main__": block:
if __name__ == "__main__":
    try:
        main()
    except Exception as e:
        logger.exception("Program stopped because of a critical error.")
        raise
[That's literally how I do it in my own code. It doesn't need any tricks; I added it so that unexpected exceptions get logged and I can debug them later.]
raise re-raises the same error that was caught, so to the outside world it remains unhandled. And .exception does the same thing as .error but includes the exception info on its own (it is equivalent to calling .error with exc_info=True).

How to catch errors thrown by logging library itself in Python

I accidentally called logging.info() the wrong way in Python 3, and the library itself printed an error to the console, but the FileHandler failed to capture it. Is there a way to catch all errors, no matter where they are thrown?
The error message looks like:
--- Logging error ---
...
TypeError: not all arguments converted during string formatting
Call stack:
File "<ipython-input-12-5ba547bc4aeb>", line 1, in <module>
logging.info(1,1)
Message: 1
Arguments: (1,)
The following code reproduces my question. The log file captures logging.info() and the ZeroDivisionError, but it fails to capture the error messages thrown by the logging library itself.
import logging

logger_formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s: %(message)s')
logger_handler = logging.FileHandler('/Users/your_name/Desktop/logging.log')
logger_handler.setLevel(logging.DEBUG)
logger_handler.setFormatter(logger_formatter)
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
root_logger.addHandler(logger_handler)

try:
    logging.info('test')
    logging.info(1,1)
    1/0
except:
    logging.exception("")
Log file output:
2021-03-10 18:07:32,315 - root - INFO: test
2021-03-10 18:07:32,324 - root - ERROR:
Traceback (most recent call last):
File "<ipython-input-1-6a4f609a80ca>", line 17, in <module>
1/0
ZeroDivisionError: division by zero
Logging every error that can happen during logging is impossible, because the error may be one that breaks logging itself, and if that in turn triggered a logging call it would lead to an infinite loop. You can, however, implement custom error handling by overriding handleError on your handlers and, if you are feeling particularly brave, attempt to write a log within that error handler. Based on your code it would look something like this:
import logging

class MyFileHandler(logging.FileHandler):
    def handleError(self, record):
        logging.error('There was an error when writing a log. Msg was: ' + str(record.msg))

logger_handler = MyFileHandler('custom.log')
logger_handler.setLevel(logging.DEBUG)
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
root_logger.addHandler(logger_handler)

try:
    logging.info('test')
    logging.info(1,1)
    1/0
except:
    logging.exception("")
Of course, if you would rather have an exception that bubbles up instead, you could just raise from handleError.
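A minimal sketch of that variant: handleError is invoked from inside the except block of Handler.emit, so a bare raise there re-raises the original emit failure (the subclass name here is just an illustration):

import logging

class StrictFileHandler(logging.FileHandler):
    def handleError(self, record):
        # re-raise the formatting/writing error that emit() just swallowed
        raise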

How to correctly use place holders for log file path/name within a config file?

I am trying to set the log file name dynamically based on the path where the python script is running.
I have a config file that looks like this (it does contain additional configuration; what I am showing here is just the parts relevant to logging):
[loggers]
keys = root
[logger_root]
handlers = screen,file
level = NOTSET
[formatters]
keys = simple,complex
[formatter_simple]
format = %(asctime)s - %(name)s - %(levelname)s - %(message)s
[formatter_complex]
format = %(asctime)s - %(name)s - %(levelname)s - %(module)s : %(lineno)d - %(message)s
[handlers]
keys = file,screen
[handler_file]
class = handlers.TimedRotatingFileHandler
interval = midnight
backupcount = 5
formatter = complex
level = DEBUG
args = ('%(logfile)s',)
[handler_screen]
class = StreamHandler
formatter = simple
level = INFO
args = (sys.stdout,)
and my code looks like this:
if not os.path.exists(DEFAULT_CONFIG_FILE) or not os.path.isfile(DEFAULT_CONFIG_FILE):
    msg = '%s configuration file does not exist!', config_file
    logging.getLogger(__name__).error(msg)
    raise ValueError(msg)
try:
    logfilename = os.path.join(
        os.path.dirname(__file__), 'logs', 'camera.log')
    config.read(DEFAULT_CONFIG_FILE)
    logging.config.fileConfig(
        config, defaults={'logfile': logfilename}, disable_existing_loggers=False)
    logging.info(f'{DEFAULT_CONFIG_FILE} configuration file was loaded.')
except Exception as e:
    logging.getLogger(__name__).error(
        'Failed to load configuration from %s!', DEFAULT_CONFIG_FILE)
    logging.getLogger(__name__).debug(str(e), exc_info=True)
    raise e
I am trying to get the logs written to a subdirectory called logs, located in the path the script is running from.
What I actually get is a log file called %(logfile)s
I suppose that this is something really obvious but I am just going in circles!
Any help greatly appreciated.
Well, I have found one solution.
I still do not understand why the original approach did not work, and would really appreciate it if someone who knows could add another answer, but this achieves the same result:
[handler_file]
class = handlers.TimedRotatingFileHandler
interval = midnight
backupcount = 5
formatter = complex
level = DEBUG
args = (os.getcwd()+'/logs/camera.log',)
with this code:
if not os.path.exists(DEFAULT_CONFIG_FILE) or not os.path.isfile(DEFAULT_CONFIG_FILE):
    msg = '%s configuration file does not exist!', config_file
    logging.getLogger(__name__).error(msg)
    raise ValueError(msg)
try:
    config.read(DEFAULT_CONFIG_FILE)
    logging.config.fileConfig(
        config, disable_existing_loggers=False)
    logging.info(f'{DEFAULT_CONFIG_FILE} configuration file was loaded.')
except Exception as e:
    logging.getLogger(__name__).error(
        'Failed to load configuration from %s!', DEFAULT_CONFIG_FILE)
    logging.getLogger(__name__).debug(str(e), exc_info=True)
    raise e
I did eventually get the '%(logfile)s' parsing to substitute the variables correctly.
The issue was that the defaults should have been passed to the initialisation of the ConfigParser, not to logging.config.fileConfig. Like this:
logfile = os.path.join(os.path.dirname(__file__), 'logs', 'camera.log')
config = ConfigParser(defaults={'logfile': logfile})

if not os.path.exists(DEFAULT_CONFIG_FILE) or not os.path.isfile(DEFAULT_CONFIG_FILE):
    msg = '%s configuration file does not exist!', config_file
    logging.getLogger(__name__).error(msg)
    raise ValueError(msg)
try:
    config.read(DEFAULT_CONFIG_FILE)
    logging.config.fileConfig(
        config, disable_existing_loggers=False)
    logging.info(f'{DEFAULT_CONFIG_FILE} configuration file was loaded.')
except Exception as e:
    logging.getLogger(__name__).error(
        'Failed to load configuration from %s!', DEFAULT_CONFIG_FILE)
    logging.getLogger(__name__).debug(str(e), exc_info=True)
    raise e
and not like this:
logfile = os.path.join(os.path.dirname(__file__), 'logs', 'camera.log')
logging.config.fileConfig(
    config,
    defaults={'logfile': logfile},
    disable_existing_loggers=False)
However, I then started running into unicode errors:
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape
Caused, I think, by the \Users in the path.
I gave up at this point and simply set the working directory to the directory where the script is installed:
os.path.dirname(__file__)
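If the \Users in a hard-coded path literal really was the culprit, a raw string or forward slashes avoids the escape problem; a small illustration with hypothetical paths (not from the original answer):

# bad = 'C:\Users\me\logs\camera.log'        # SyntaxError: \U starts a \UXXXXXXXX unicode escape
good = r'C:\Users\me\logs\camera.log'         # raw string: backslashes stay literal
also_good = 'C:/Users/me/logs/camera.log'     # forward slashes also work on Windows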
On Windows, I got your initial configuration to work by adding as_posix()
logging.config.fileConfig(
    config, defaults={"logfile": path.as_posix()},
)
where path is a pathlib.Path instance (from pathlib import Path)
[handler_file]
class = FileHandler
formatter = file
args=('%(logfile)s',)
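For illustration, path could be built with pathlib like this (a sketch; the exact location is an assumption, not part of the original answer):

from pathlib import Path

# hypothetical: keep the log in a logs/ subdirectory next to the script
path = Path(__file__).parent / 'logs' / 'camera.log'
path.parent.mkdir(parents=True, exist_ok=True)  # make sure logs/ exists before logging starts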

Python: How to write errors shown in the console to a txt file?

I have a Python script which every 10 minutes sends me an email with everything written to the console. I am running it with crontab on my Ubuntu 18.04 VPS.
Sometimes it doesn't send the mail, so I assume that when an error happens execution stops. How can I get the errors written to a txt file so I can analyze them?
Logging Module
To demonstrate the approach with the logging module, this would be the general idea:
import logging

# Create a logging instance
logger = logging.getLogger('my_application')
logger.setLevel(logging.INFO)  # you can set this to be DEBUG, INFO, ERROR

# Assign a file-handler to that instance
fh = logging.FileHandler("file_dir.txt")
fh.setLevel(logging.INFO)  # again, you can set this differently

# Format your logs (optional)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)  # This will set the format to the file handler

# Add the handler to your logging instance
logger.addHandler(fh)

try:
    raise ValueError("Some error occurred")
except ValueError as e:
    logger.exception(e)  # Will send the errors to the file
And if I cat file_dir.txt
2019-03-14 14:52:50,676 - my_application - ERROR - Some error occurred
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
ValueError: Some error occurred
Print to File
As I pointed out in the comments, you could accomplish this with print as well (I'm not sure you will be applauded for it)
# Set your stdout pointer to a file handler
with open('my_errors.txt', 'a') as fh:
    try:
        raise ValueError("Some error occurred")
    except ValueError as e:
        print(e, file=fh)
cat my_errors.txt
Some error occurred
Note that logging.exception includes the traceback in this case, which is one of the many huge benefits of that module
Edit
In the interest of completeness, the traceback module leverages a similar approach as print, where you can supply a file handle:
import traceback
import sys

with open('error.txt', 'a') as fh:
    try:
        raise ValueError("Some error occurred")
    except ValueError as e:
        e_type, e_val, e_tb = sys.exc_info()
        traceback.print_exception(e_type, e_val, e_tb, file=fh)
This will include all of the information you want from logging
You can use the logging module as suggested in the comments (possibly superior but outside the scope of my knowledge), or catch the errors with try and except like:
try:
    pass
    # run the code you currently have
except Exception as e:  # catch ALLLLLL errors!!!
    print(e)  # or more likely you'd want something like "email_to_me(e)"
Catching all exceptions like this is generally frowned upon, because if your program fails for any reason the failure gets gobbled up in the except clause. A better approach is to figure out what specific error you are encountering, such as IndexError, and catch only that error:
try:
    pass
    # run the code you currently have
except IndexError as e:  # catch only indexing errors!!!
    print(e)  # or more likely you'd want something like "email_to_me(e)"
To be able to debug, and not only know the kind of error that happened, you can also get the error stack using the traceback module (usually imported at the top of the module):
import traceback

try:
    my_function()
except Exception as e:
    print(e)
    traceback.print_exc()
Then run your code my_code.py from the console using >>:
python my_code.py >> my_prints.txt 2>&1
(the 2>&1 also redirects stderr, which is where traceback.print_exc() writes by default). All the prints of your code will then be written to this .txt file, including the printed error and its stack. This is very useful in your case, and also while running code in a Docker container if you want to detach from it with ctrl+p+q and still know what is printed.
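Since the script runs from cron every 10 minutes, the same redirection can also live directly in the crontab entry; a sketch with placeholder paths (adjust to your setup):

*/10 * * * * /usr/bin/python3 /path/to/my_code.py >> /path/to/my_prints.txt 2>&1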

Print exception with stack trace to file

I'm trying to put a simple log into my script. This log should tell me where the error is and give me as much info as possible to repair the script.
I've put a print-to-file of str(e) into each except, but it provides very little info about what is going wrong.
How could I make it more detailed? For example, the whole uncaught-exception text that I can see in the console?
try:
    #code
except Exception as e:
    print_to_file(log.txt, str(e))
Try this:
import traceback

try:
    1/0
except Exception as e:
    with open('log.txt', 'a') as f:
        f.write(str(e))
        f.write(traceback.format_exc())
If you want a better solution, you should use a logger that manages timestamps, file size, and rotation for you (via a logging handler).
This is an example with a logger, timestamps, and rotation:
import logging
from logging.handlers import RotatingFileHandler
import traceback

logger = logging.getLogger("Rotating Log")
logger.setLevel(logging.ERROR)
handler = RotatingFileHandler("log.txt", maxBytes=10000, backupCount=5)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)

try:
    1/0
except Exception as e:
    logger.error(str(e))
    logger.error(traceback.format_exc())
