I have to create one single log file for the entire application. If any module raises an exception, it should go into the except block and the error should be written to the log file. The log file should not be overwritten; if multiple modules raise exceptions, all of them should be logged to that one single log file.
I have tried the logger with the code below, but it is not creating the log file:
import logging

with open("D:\logfile\log.txt", "w") as log:
    logging.basicConfig(filename=log, level=logging.DEBUG, format='%(asctime)s %(levelname)s %(name)s %(message)s')
    logger = logging.getLogger(__name__)
    try:
        1/0
    except ZeroDivisionError as err:
        logger.error(err)
Also, using a with clause requires indentation, and I do not want to put all modules under one with clause. Instead, I want to simply create one log file at the beginning of the program, and as the program executes and modules raise exceptions, those exceptions should be written to that one log file.
I don't know the logging module well, but some googling suggests that it supports writing the traceback into the logfile. Also, your code didn't seem to work on my machine (Python v3.8.5), so I edited it so it works.
logging.exception('text') logs the traceback to the logfile, and you can specify a message which will be displayed beforehand.
The code:
import logging

#with open("log.txt", "w") as log:
logging.basicConfig(filename='log.txt', level=logging.DEBUG, format='%(asctime)s %(levelname)s %(name)s %(message)s')
logger = logging.getLogger(__name__)

try:
    1/0
except ZeroDivisionError:
    logging.exception('Oh no! Error. Here is the traceback info:')
The logfile:
2020-08-20 08:31:02,310 ERROR root Oh no! Error. Here is the traceback info:
Traceback (most recent call last):
  File "logger.py", line 7, in <module>
    1/0
ZeroDivisionError: division by zero
This has the advantage that the whole traceback is logged, which is usually more helpful.
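To cover the multi-module requirement from the question: a minimal sketch (the module name mymodule is made up for illustration) is to call basicConfig once at program start; every module then just asks for its own logger with getLogger(__name__), and because those loggers propagate to the root logger, all exceptions end up in the same file:
# main.py - configure logging once, at program start
import logging

logging.basicConfig(filename='log.txt', level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(name)s %(message)s')

import mymodule  # hypothetical module

try:
    mymodule.do_work()
except Exception:
    logging.exception('Unhandled error in main')
# mymodule.py - no configuration here, just a module-level logger
import logging

logger = logging.getLogger(__name__)

def do_work():
    try:
        1/0
    except ZeroDivisionError:
        logger.exception('do_work failed')  # propagates to the root logger's file handler
Since basicConfig opens the file in append mode by default, repeated runs and multiple modules keep adding to the same log file rather than overwriting it.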
I accidentally called logging.info() the wrong way in Python 3, and the library itself printed an error to the console, but the FileHandler failed to catch it. So, is there a way to catch all errors, no matter where they are thrown?
The error message looks like:
--- Logging error ---
...
TypeError: not all arguments converted during string formatting
Call stack:
File "<ipython-input-12-5ba547bc4aeb>", line 1, in <module>
logging.info(1,1)
Message: 1
Arguments: (1,)
Copying the following code reproduces my question. The log file captures logging.info() and the ZeroDivisionError, but it fails to capture the error messages thrown by the logging library itself.
import logging

logger_formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s: %(message)s')
logger_handler = logging.FileHandler('/Users/your_name/Desktop/logging.log')
logger_handler.setLevel(logging.DEBUG)
logger_handler.setFormatter(logger_formatter)

root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
root_logger.addHandler(logger_handler)

try:
    logging.info('test')
    logging.info(1,1)
    1/0
except:
    logging.exception("")
Log file output:
2021-03-10 18:07:32,315 - root - INFO: test
2021-03-10 18:07:32,324 - root - ERROR:
Traceback (most recent call last):
  File "<ipython-input-1-6a4f609a80ca>", line 17, in <module>
    1/0
ZeroDivisionError: division by zero
Logging every error that can happen during logging is impossible, because the error may be one that breaks logging itself, and if that in turn triggered a logging call it would lead to an infinite loop. You can, however, implement custom error handling by overriding handleError on your handlers and, if you are feeling particularly brave, attempt to write a log within that error handler. Based on your code it would look something like this:
import logging

class MyFileHandler(logging.FileHandler):
    def handleError(self, record):
        logging.error('There was an error when writing a log. Msg was: ' + str(record.msg))

logger_handler = MyFileHandler('custom.log')
logger_handler.setLevel(logging.DEBUG)

root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
root_logger.addHandler(logger_handler)

try:
    logging.info('test')
    logging.info(1,1)
    1/0
except:
    logging.exception("")
Of course, if you would rather have an exception that bubbles up instead, you could simply raise from handleError.
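For example, a small variant (my sketch, not part of the original answer) that re-raises instead of logging:
import logging

class StrictFileHandler(logging.FileHandler):
    def handleError(self, record):
        # handleError is invoked from inside an except block in emit(),
        # so a bare raise re-raises the original formatting error
        raise
Attach it to the root logger exactly like MyFileHandler above; a bad call such as logging.info(1,1) will then raise at the call site instead of being silently noted.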
I have a Python script which every 10 minutes sends me an email with everything written to the console. I am running it with crontab on my Ubuntu 18.04 VPS.
Sometimes it doesn't send the mail, so I assume that when an error happens, execution stops. How can I get the errors written to a txt file so I can analyze them?
Logging Module
To demonstrate the approach with the logging module, this would be the general approach
import logging

# Create a logging instance
logger = logging.getLogger('my_application')
logger.setLevel(logging.INFO)  # you can set this to be DEBUG, INFO, ERROR

# Assign a file-handler to that instance
fh = logging.FileHandler("file_dir.txt")
fh.setLevel(logging.INFO)  # again, you can set this differently

# Format your logs (optional)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)  # This will set the format to the file handler

# Add the handler to your logging instance
logger.addHandler(fh)

try:
    raise ValueError("Some error occurred")
except ValueError as e:
    logger.exception(e)  # Will send the errors to the file
And if I cat file_dir.txt
2019-03-14 14:52:50,676 - my_application - ERROR - Some error occurred
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
ValueError: Some error occurred
Print to File
As I pointed out in the comments, you could accomplish this with print as well (I'm not sure you will be applauded for it)
# Open a file handle and print the error into it
with open('my_errors.txt', 'a') as fh:
    try:
        raise ValueError("Some error occurred")
    except ValueError as e:
        print(e, file=fh)
cat my_errors.txt
Some error occurred
Note that logging.exception includes the traceback in this case, which is one of the many huge benefits of that module
Edit
In the interest of completeness, the traceback module leverages a similar approach as print, where you can supply a file handle:
import traceback
import sys

with open('error.txt', 'a') as fh:
    try:
        raise ValueError("Some error occurred")
    except ValueError as e:
        e_type, e_val, e_tb = sys.exc_info()
        traceback.print_exception(e_type, e_val, e_tb, file=fh)
This will include all of the information you want from logging
You can use the logging module as suggested in the comments (possibly superior but outside the scope of my knowledge), or catch the errors with try and except like:
try:
    pass
    # run the code you currently have
except Exception as e:  # catch ALLLLLL errors!!!
    print(e)  # or more likely you'd want something like "email_to_me(e)"
Catching all exceptions like this is generally frowned upon, because should your program fail for whatever reason, the error will get gobbled up in the except clause. A better approach is to figure out what specific error you are encountering, like IndexError, and catch just that specific error:
try:
    pass
    # run the code you currently have
except IndexError as e:  # catch only indexing errors!!!
    print(e)  # or more likely you'd want something like "email_to_me(e)"
To be able to debug, and not only know the kind of error that happened, you can also get the error stack using the traceback module (usually imported at the top of your modules):
import traceback

try:
    my_function()
except Exception as e:
    print(e)
    traceback.print_exc()
Then run your code my_code.py from the console using:
python my_code.py >> my_prints.txt
All the prints of your code will then be written to this .txt file, including the printed error and its stack. (Note that traceback.print_exc() writes to stderr by default, so either redirect stderr as well, e.g. with 2>&1, or pass file=sys.stdout.) This is very useful in your case, or while running code in a Docker container if you want to detach from it with ctrl+p+q and still know what is printed.
I am coding a tool in Python and I want to put all the errors, and only the errors (computations which didn't go through as they should have), into a single log file. Additionally, I would like a different text in the error log file for each section of my code, in order to make the log file easy to interpret. How do I code this? Much appreciation to whoever can help with this!
Check out the python module logging. This is a core module for unifying logging not only in your own project but potentially in third party modules too.
For a minimal logging file example, this is taken directly from the documentation:
import logging
logging.basicConfig(filename='example.log',level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
Which results in the contents of example.log:
DEBUG:root:This message should go to the log file
INFO:root:So should this
WARNING:root:And this, too
However, I personally recommend using the yaml configuration method (requires pyyaml):
#logging_config.yml
version: 1
disable_existing_loggers: False
formatters:
  standard:
    format: '%(asctime)s [%(levelname)s] %(name)s - %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    level: INFO
    formatter: standard
    stream: ext://sys.stdout
  file:
    class: logging.FileHandler
    level: DEBUG
    formatter: standard
    filename: output.log
  email:
    class: logging.handlers.SMTPHandler
    level: WARNING
    mailhost: smtp.gmail.com
    fromaddr: to@address.co.uk
    toaddrs: to@address.co.uk
    subject: Oh no, something's gone wrong!
    credentials: [email, password]
    secure: []
root:
  level: DEBUG
  handlers: [console, file, email]
  propagate: True
Then to use, for example:
import logging.config
import yaml

with open('logging_config.yml', 'r') as config:
    logging.config.dictConfig(yaml.safe_load(config))

logger = logging.getLogger(__name__)

logger.info("This will go to the console and the file")
logger.debug("This will only go to the file")
logger.error("This will go everywhere")

try:
    list = [1, 2, 3]
    print(list[10])
except IndexError:
    logger.exception("This will also go everywhere")
This prints:
2018-07-18 13:29:21,434 [INFO] __main__ - This will go to the console and the file
2018-07-18 13:29:21,434 [ERROR] __main__ - This will go everywhere
2018-07-18 13:29:21,434 [ERROR] __main__ - This will also go everywhere
Traceback (most recent call last):
  File "C:/Users/Chris/Desktop/python_scratchpad/a.py", line 16, in <module>
    print(list[10])
IndexError: list index out of range
While the contents of the log file is:
2018-07-18 13:35:55,423 [INFO] __main__ - This will go to the console and the file
2018-07-18 13:35:55,424 [DEBUG] __main__ - This will only go to the file
2018-07-18 13:35:55,424 [ERROR] __main__ - This will go everywhere
2018-07-18 13:35:55,424 [ERROR] __main__ - This will also go everywhere
Traceback (most recent call last):
  File "C:/Users/Chris/Desktop/python_scratchpad/a.py", line 15, in <module>
    print(list[10])
IndexError: list index out of range
Of course, you can add or remove handlers, formatters, etc, or do all of this in code (see the Python documentation) but this is my starting point whenever I use logging in a project. I find it helpful to have the configuration in a dedicated config file rather than polluting my project with defining logging in code.
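If you prefer to keep the configuration in code rather than YAML, roughly the same setup (trimmed here to the console and file handlers, with the same names as in the YAML above) can be passed to dictConfig as a plain dictionary:
import logging
import logging.config

config = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {'format': '%(asctime)s [%(levelname)s] %(name)s - %(message)s'},
    },
    'handlers': {
        'console': {'class': 'logging.StreamHandler', 'level': 'INFO',
                    'formatter': 'standard', 'stream': 'ext://sys.stdout'},
        'file': {'class': 'logging.FileHandler', 'level': 'DEBUG',
                 'formatter': 'standard', 'filename': 'output.log'},
    },
    'root': {'level': 'DEBUG', 'handlers': ['console', 'file']},
}

logging.config.dictConfig(config)
logger = logging.getLogger(__name__)
logger.debug('This only goes to output.log')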
If I understand the question correctly, the request was to capture only the errors in a dedicated log file, and I would do that differently.
I would stick to the best-known method (BKM) that all modules in the package define their own logger objects (logger = logging.getLogger(__name__)).
I'd leave them without any handlers; whenever they emit, they will look up the hierarchy tree for handlers that actually take care of the emitted messages.
At the root logger, I would add a dedicated FileHandler(filename='errors.log') and set the log level of that handler to logging.ERROR.
That means that whenever a logger from the package emits something, this dedicated file handler will discard anything below ERROR and write only ERROR and CRITICAL messages into the file.
You could still add a global StreamHandler and a regular FileHandler to your root logger. Since you won't change their log levels, they will stay at logging.NOTSET and will log everything that is emitted from the loggers in the package.
And to answer the second part of the question, the handlers can define their own formatting. So for the handler that handles only the errors, you could set the formatter to something like this: %(name)s::%(funcName)s:%(lineno)d - %(message)s, which basically means it will print:
the logger name (and if you used the convention to define loggers in every *.py file using __name__, then name will actually hold the hierarchical path to your module file (e.g. my_pkg.my_sub_pkg.module))
the funcName will hold the function from where the log was emitted and
lineno is the line number in the module file where the log was emitted.
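A minimal sketch of that layout (the file names and the example module code are my own assumptions, not from the original answer):
# run once at application start-up
import logging

error_handler = logging.FileHandler('errors.log')
error_handler.setLevel(logging.ERROR)  # only ERROR and CRITICAL reach this file
error_handler.setFormatter(logging.Formatter('%(name)s::%(funcName)s:%(lineno)d - %(message)s'))

console_handler = logging.StreamHandler()  # level left at NOTSET, so it shows everything

root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(error_handler)
root.addHandler(console_handler)

# in any module of the package
logger = logging.getLogger(__name__)

def compute():
    try:
        return 1/0
    except ZeroDivisionError:
        logger.exception('computation failed')  # ends up in errors.log and on the console

compute()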
I have a function like this....
def validate_phone(raw_number, debug=False):
I want the debug flag to control whether it outputs logging statements. For example:
if (debug):
    print('Before splitting numbers', file=sys.stderr)

split_version = raw_number.split('-')

if (debug):
    print('After splitting numbers', file=sys.stderr)
That code is very repetitive however. What is the cleanest (DRYest?) way to handle such if-flag-then-log logic?
I agree that using logging is the best solution for printing debugging information while running a python script. I wrote a DebugPrint module that helps facilitate using the logger more easily:
#DebugPrint.py
import logging
import os
import time

DEBUGMODE=True

logging.basicConfig(level=logging.DEBUG)
log=logging.getLogger('=>')

#DebugPrint.py
#DbgPrint=logging.debug
def DbgPrint(*args, **kwargs):
    if DEBUGMODE:
        #get module, class, function, linenumber information
        import inspect
        className = None
        try:
            className = inspect.stack()[2][0].f_locals['self'].__class__.__name__
        except:
            pass
        modName=None
        try:
            modName = os.path.basename(inspect.stack()[2][1])
        except:
            pass
        lineNo=inspect.stack()[2][2]
        fnName=None
        try:
            fnName = inspect.stack()[2][3]
        except:
            pass
        DbgText="line#{}:{}->{}->{}()".format(lineNo, modName,className, fnName)
        argCnt=len(args)
        kwargCnt=len(kwargs)
        #print("argCnt:{} kwargCnt:{}".format(argCnt,kwargCnt))
        fmt=""
        fmt1=DbgText+":"+time.strftime("%H:%M:%S")+"->"
        if argCnt > 0:
            fmt1+=(argCnt-1)*"%s,"
            fmt1+="%s"
            fmt+=fmt1
        if kwargCnt>0:
            fmt2="%s"
            args+=("{}".format(kwargs),)
            if len(fmt)>0:
                fmt+=","+fmt2
            else:
                fmt+=fmt2
        #print("fmt:{}".format(fmt))
        log.debug(fmt,*args)

if __name__=="__main__":
    def myTest():
        print("Running myTest()")
        DbgPrint("Hello","World")

    myTest()
If the DEBUGMODE variable is false, nothing will be printed.
If it is true, the sample code above prints out:
DEBUG:=>:16:24:14:line#78:DebugPrint.py->None->myTest():->Hello,World
Now I'm going to test DebugPrint with a module that defines a class.
#testDebugPrint.py
from DebugPrint import DbgPrint

class myTestClass(object):
    def __init__(self):
        DbgPrint("Initializing the class")

    def doSomething(self, arg):
        DbgPrint("I'm doing something with {}".format(arg))

if __name__=='__main__':
    test=myTestClass()
    test.doSomething("a friend!")
When this script is run the output is as follows:
DEBUG:=>:16:25:02:line#7:testDebugPrint.py->myTestClass->__init__():->Initializing the class
DEBUG:=>:16:25:02:line#10:testDebugPrint.py->myTestClass->doSomething():->I'm doing something with a friend!
Note that the module name, class name, function name and line number printed to the console are correct, as is the time the statement was printed.
I hope that you will find this utility useful.
I would use the logging module for it. It's made for it.
> cat a.py
import logging

log = logging.getLogger(__name__)


def main():
    log.debug('This is debug')
    log.info('This is info')
    log.warn('This is warn')
    log.fatal('This is fatal')
    try:
        raise Exception("this is exception")
    except Exception:
        log.warn('Failed with exception', exc_info=True)
        raise


if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser(description='something')
    parser.add_argument(
        '-v', '--verbose', action='count', default=0, dest='verbosity')
    args = parser.parse_args()
    logging.basicConfig()
    logging.getLogger().setLevel(logging.WARN - 10 * args.verbosity)

    main()
> python a.py
WARNING:__main__:This is warn
CRITICAL:__main__:This is fatal
WARNING:__main__:Failed with exception
Traceback (most recent call last):
  File "a.py", line 12, in main
    raise Exception("this is exception")
Exception: this is exception
Traceback (most recent call last):
  File "a.py", line 27, in <module>
    main()
  File "a.py", line 12, in main
    raise Exception("this is exception")
Exception: this is exception
> python a.py -v
INFO:__main__:This is info
WARNING:__main__:This is warn
CRITICAL:__main__:This is fatal
WARNING:__main__:Failed with exception
Traceback (most recent call last):
  File "a.py", line 12, in main
    raise Exception("this is exception")
Exception: this is exception
Traceback (most recent call last):
  File "a.py", line 27, in <module>
    main()
  File "a.py", line 12, in main
    raise Exception("this is exception")
Exception: this is exception
> python a.py -vv
DEBUG:__main__:This is debug
INFO:__main__:This is info
WARNING:__main__:This is warn
CRITICAL:__main__:This is fatal
WARNING:__main__:Failed with exception
Traceback (most recent call last):
  File "a.py", line 12, in main
    raise Exception("this is exception")
Exception: this is exception
Traceback (most recent call last):
  File "a.py", line 27, in <module>
    main()
  File "a.py", line 12, in main
    raise Exception("this is exception")
Exception: this is exception
import logging
logging.basicConfig(filename='example.log',level=logging.DEBUG)
logging.debug('This message should go to the log file')
You can change the logging level to DEBUG/INFO/WARNING/ERROR/CRITICAL. The logging module can also record a timestamp for you, and that is configurable as well.
Check the Python 3 logging HOWTO for details.
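For example, to also record a timestamp and raise the threshold so only warnings and above are kept (a small sketch extending the snippet above):
import logging

logging.basicConfig(filename='example.log', level=logging.WARNING,
                    format='%(asctime)s %(levelname)s %(message)s')

logging.debug('This is now filtered out')
logging.warning('This is written to example.log with a timestamp')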
You ought to use the logging module:
import logging
import sys

# Get the "root" level logger.
root = logging.getLogger()

# Set the log level to debug.
root.setLevel(logging.DEBUG)

# Add a handler to output log messages as a stream (to a file/handle)
# in this case, sys.stderr
ch = logging.StreamHandler(sys.stderr)

# This handler can log debug messages - multiple handlers could log
# different "levels" of log messages.
ch.setLevel(logging.DEBUG)

# Format the output to include the time, etc.
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)

# Add the handler to the root logger...
root.addHandler(ch)

# Get a logger object - this is what we use to emit log messages.
logger = logging.getLogger(__name__)

# Generate a bunch of log messages as a demonstration
for i in xrange(100):
    logger.debug('The value of i is: %s', i)

# Demonstrate a useful example of logging full exception tracebacks
# if the logger will output debug, but a warning in other modes.
try:
    a = 'a' + 12
except Exception as e:
    # Only log exceptions if debug is enabled..
    if logger.isEnabledFor(logging.DEBUG):
        # Log the traceback w/ the exception message.
        logger.exception('Something went wrong: %s', e)
    else:
        logger.warning('Something went wrong: %s', e)
Read more about it here: https://docs.python.org/2/library/logging.html
To quiet the logging, just set the level (logging.DEBUG) to something higher (like logging.INFO). Note that it's also quite easy to redirect the messages elsewhere (like a file), or to send some messages (debug) to one place and other messages (warning) to another.
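As a rough sketch of that kind of routing (the file name debug.log is just an example): send everything to a file, but only warnings and above to the console:
import logging
import sys

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

file_handler = logging.FileHandler('debug.log')
file_handler.setLevel(logging.DEBUG)        # everything goes to the file

console_handler = logging.StreamHandler(sys.stderr)
console_handler.setLevel(logging.WARNING)   # only warnings and errors reach the console

logger.addHandler(file_handler)
logger.addHandler(console_handler)

logger.debug('file only')
logger.warning('file and console')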
I'm running a small python web app on Heroku and I've drained the logs to loggly. When an exception is raised, the traceback appears as separate lines in loggly. This is of course hard to search.
How do you make tracebacks appear as a single log on Loggly?
You should set up python logging according to the instructions in this page:
https://www.loggly.com/docs/python-http/
Modify Step 3 (where you are sending the log events) so that you can send an exception, as follows:
import logging
import logging.config
import loggly.handlers

logging.config.fileConfig('python.conf')
logger = logging.getLogger('myLogger')

logger.info('Test log')

try:
    main_loop()
except Exception:
    logger.exception("Fatal error in main loop")
You will see that the exception appears as a single log event:
{ "loggerName":"myLogger", "asciTime":"2015-08-04 15:09:00,220", "fileName":"test_log.py", "logRecordCreationTime":"1438726140.220768", "functionName":"<module>", "levelNo":"40", "lineNo":"15", "time":"220", "levelName":"ERROR", "message":"Fatal error in main loop"}
Traceback (most recent call last):
  File "./test_log.py", line 13, in <module>
    main_loop()
NameError: name 'main_loop' is not defined
}