Python logging working on Windows but not Mac OS

# Logging
cur_flname = os.path.splitext(os.path.basename(__file__))[0]
LOG_FILENAME = constants.log_dir + os.sep + 'Log_' + cur_flname + '.txt'
util.make_dir_if_missing(constants.log_dir)
# Logging levels are DEBUG, INFO, WARNING, ERROR, and CRITICAL
logging.basicConfig(filename=LOG_FILENAME, level=logging.INFO, filemode='w',
                    format='%(asctime)s %(levelname)s %(module)s - %(funcName)s: %(message)s',
                    datefmt="%m-%d %H:%M")
# Output to screen
logger = logging.getLogger(cur_flname)
logger.addHandler(logging.StreamHandler())
I use the code above to create a logger that prints messages to the screen and simultaneously writes them to a text file.
On Windows, the messages are output both to the file and to the screen. On Mac OS X 10.9.5, however, they are only output to the file. I am using Python 2.7.
Any ideas on how to fix this?

From your question it is clear that you have no problem creating the logger name or the log file name, nor with logging to a file, so I will keep that part simplified to keep my code succinct.
First thing: your solution seems correct to me, as logging.StreamHandler sends output to sys.stderr by default. You might have some code around (not shown in your question) that modifies sys.stderr, or you may be running your code on OS X in such a way that output to stderr is not shown (but is really written).
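A quick way to test that hypothesis (my suggestion, not part of the original answer) is to write to sys.stderr directly, bypassing logging entirely:
import sys

# If this line does not appear on the console either, the environment
# (shell redirection, IDE, launcher) is hiding stderr, not the logging setup.
sys.stderr.write("stderr is visible\n")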
Solution with logging
Put the following code into with_logging.py:
import logging
import sys

logformat = "%(asctime)s %(levelname)s %(module)s - %(funcName)s: %(message)s"
datefmt = "%m-%d %H:%M"

logging.basicConfig(filename="app.log", level=logging.INFO, filemode="w",
                    format=logformat, datefmt=datefmt)

stream_handler = logging.StreamHandler(sys.stderr)
stream_handler.setFormatter(logging.Formatter(fmt=logformat, datefmt=datefmt))

logger = logging.getLogger("app")
logger.addHandler(stream_handler)

logger.info("information")
logger.warning("warning")

def fun():
    logger.info(" fun inf")
    logger.warning("fun warn")

if __name__ == "__main__":
    fun()
Run it with $ python with_logging.py and you shall see the expected log records (properly formatted) both in the log file app.log and on stderr.
Note: if you do not see them on stderr, something is hiding your stderr output. To check, change the stream passed to StreamHandler to sys.stdout.
Solution with logbook
logbook is a Python package providing an alternative to stdlib logging. I am showing it here to illustrate the main difference from stdlib logging: with logbook, use and configuration seem simpler to me.
Here is the previous solution rewritten as with_logbook.py:
import logbook
import sys

logformat = ("{record.time:%m-%d %H:%M} {record.level_name} {record.module} - "
             "{record.func_name}: {record.message}")
kwargs = {"level": logbook.INFO, "format_string": logformat, "bubble": True}

logbook.StreamHandler(sys.stderr, **kwargs).push_application()
logbook.FileHandler("app.log", **kwargs).push_application()

logger = logbook.Logger("app")

logger.info("information")
logger.warning("warning")

def fun():
    logger.info(" fun inf")
    logger.warning("fun warn")

if __name__ == "__main__":
    fun()
The main difference is that you do not have to attach handlers to loggers; your loggers simply emit log records.
These records are handled by whatever handlers are in place. One method is to create a handler and call push_application() on it, which puts the handler onto the stack of handlers in use.
As before, we need two handlers: one for the file, another for stderr.
If you run this script with $ python with_logbook.py, you shall see exactly the same results as with logging.
Separate logging setup from module code: short_logbook.py
With stdlib logging I really do not like the introductory dances needed to make the logger work. I want to simply emit log records, as simply as possible.
The following example is a modification of the previous one. Instead of setting up logging at the very beginning of the module, I do it just before the code is run, inside the if __name__ == "__main__" block (you may do the same in any other place).
For practical reasons, I separated the module and the calling code into two files:
File funmodule.py
from logbook import Logger

log = Logger(__name__)

log.info("information")
log.warning("warning")

def fun():
    log.info(" fun inf")
    log.warning("fun warn")
Notice that there are really only two lines of code related to making logging available.
Then create the calling code and put it into short_logbook.py:
import sys
import logbook

if __name__ == "__main__":
    logformat = ("{record.time:%m-%d %H:%M} {record.level_name} {record.module}"
                 " - {record.func_name}: {record.message}")
    kwargs = {"level": logbook.INFO, "format_string": logformat, "bubble": True}
    logbook.StreamHandler(sys.stderr, **kwargs).push_application()
    logbook.FileHandler("app.log", **kwargs).push_application()

    from funmodule import fun
    fun()
Running the code, you will see it working the same way as before; only the logger name will be funmodule.
Note that I did the from funmodule import fun after logging was set up. If I did it at the top of the short_logbook.py file, the first two log records from funmodule.py would not be seen, as they happen during module import.
EDIT: added another logging solution to make the comparison fair
One more stdlib logging solution
To compare logbook and stdlib logging fairly, I rewrote the final logbook example using logging.
funmodule_logging.py looks like:
import logging

log = logging.getLogger(__name__)

log.info("information")
log.warning("warning")

def fun():
    log.info(" fun inf")
    log.warning("fun warn")
    log.error("no fun at all")
and short_logging.py looks as follows:
import sys
import logging

if __name__ == "__main__":
    logformat = "%(asctime)s %(levelname)s %(module)s - %(funcName)s: %(message)s"
    datefmt = "%m-%d %H:%M"
    logging.basicConfig(filename="app.log", level=logging.INFO, filemode="w",
                        format=logformat, datefmt=datefmt)
    stream_handler = logging.StreamHandler(sys.stderr)
    stream_handler.setFormatter(logging.Formatter(fmt=logformat, datefmt=datefmt))
    logger = logging.getLogger("funmodule_logging")
    logger.addHandler(stream_handler)

    from funmodule_logging import fun
    fun()
The functionality is very similar.
I still struggle with logging. stdlib logging is not easy to grasp, but it is in the stdlib and offers some nice things, like logging.config.dictConfig, which allows configuring logging from a dictionary.
logbook was much simpler to start with, but it is a bit slower at the moment and lacks dictConfig. Anyway, these differences are not relevant to your question.
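Since logging.config.dictConfig came up, here is a minimal sketch (my illustration, not from the original answer) that reproduces the file-plus-stderr setup of the examples above from a single dictionary:
import logging.config

logging.config.dictConfig({
    "version": 1,
    "formatters": {
        "default": {
            "format": "%(asctime)s %(levelname)s %(module)s - %(funcName)s: %(message)s",
            "datefmt": "%m-%d %H:%M",
        },
    },
    "handlers": {
        # FileHandler for app.log, StreamHandler defaults to sys.stderr
        "file": {"class": "logging.FileHandler", "filename": "app.log",
                 "mode": "w", "formatter": "default"},
        "stderr": {"class": "logging.StreamHandler", "formatter": "default"},
    },
    "root": {"level": "INFO", "handlers": ["file", "stderr"]},
})

logging.getLogger("app").info("configured via dictConfig")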

Related

Create log file named after filename of caller script

I have a logger.py file which initialises logging.
import logging

logger = logging.getLogger(__name__)

def logger_init():
    import os
    import inspect
    global logger
    logger.setLevel(logging.DEBUG)

    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)
    logger.addHandler(ch)

    fh = logging.FileHandler(os.getcwd() + os.path.basename(__file__) + ".log")
    fh.setLevel(level=logging.DEBUG)
    logger.addHandler(fh)
    return None

logger_init()
I have another script caller.py that calls the logger.
from logger import *
logger.info("test log")
What happens is that a log file called logger.log is created, containing the logged messages.
What I want is for this log file to be named after the caller script's filename. So, in this case, the created log file should be named caller.log instead.
I am using Python 3.7.
It is immensely helpful to consolidate logging in one location. I learned this the hard way. It is easier to debug when events are sorted by time, and logging to the same file is thread-safe. There are solutions for multiprocessing logging as well.
The log format can then contain the module name, function name, and even the line number from which the log call was made. This is invaluable. You can find the list of attributes you can include automatically in a log message in the Python logging documentation (the LogRecord attributes table).
Example format:
format='[%(asctime)s] [%(module)s.%(funcName)s] [%(levelname)s] %(message)s'
Example log message
[2019-04-03 12:29:48,351] [caller.work_func] [INFO] Completed task 1.
You can get the filename of the main script from the first item in sys.argv, but if you want the calling module rather than the main script, check the answers on this question.
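A minimal sketch of that idea (the helper name init_logging is mine, not from the original post):
import logging
import os
import sys

def init_logging():
    # sys.argv[0] is the path of the script that was invoked, e.g. caller.py,
    # so running `python caller.py` produces caller.log.
    script = os.path.splitext(os.path.basename(sys.argv[0]))[0]
    logging.basicConfig(
        filename=script + ".log",
        level=logging.DEBUG,
        format='[%(asctime)s] [%(module)s.%(funcName)s] [%(levelname)s] %(message)s',
    )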

Python logging multiple modules each module log to separate files

I have a main script and multiple modules. Right now I have logging set up so that all the logging from all modules goes into the same log file, which gets hard to debug. I would like each module to log to its own file. I would also like the output of the requests module to go into the log of the module that used it; I don't know if this is even possible. I searched everywhere and tried everything I could think of, but it always comes back to either logging everything into one file, or setting up logging in each module and initiating the script from my main module instead of importing it as a module.
main.py
import logging
import logging.handlers
import other_script

console_debug = True

log = logging.getLogger()

def setup_logging():
    # `path` is defined elsewhere in the original script
    filelog = logging.handlers.TimedRotatingFileHandler(path + 'logs/api/api.log',
                                                        when='midnight', interval=1, backupCount=3)
    filelog.setLevel(logging.DEBUG)
    fileformatter = logging.Formatter('%(asctime)s %(name)-15s %(levelname)-8s %(message)s')
    filelog.setFormatter(fileformatter)
    log.addHandler(filelog)
    if console_debug:
        console = logging.StreamHandler()
        console.setLevel(logging.DEBUG)
        formatter = logging.Formatter('%(name)-15s: %(levelname)-8s %(message)s')
        console.setFormatter(formatter)
        log.addHandler(console)

if __name__ == '__main__':
    setup_logging()
other_script.py
import requests
import logging
log = logging.getLogger(__name__)
One very basic concept of Python logging is that every file, stream, or other destination logs go to corresponds to one Handler. So if you want every module to log to a different file, you have to give every module its own handler. This can also be done from a central place. In your main.py you could add this to make the other_script module log to a separate file:
other_logger = logging.getLogger('other_script')
other_logger.addHandler(logging.FileHandler('other_file'))
other_logger.propagate = False
The last line is only required if you add a handler to the root logger. If you keep propagate at the default True, all logs will also be sent to the root logger's handlers. In your scenario it might be better not to use the root logger at all, and instead use a specific named logger like getLogger('__main__') in main.
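A minimal sketch of this central setup (the helper name add_module_file and the second module name are my assumptions):
import logging

def add_module_file(logger_name, filename):
    mod_logger = logging.getLogger(logger_name)
    handler = logging.FileHandler(filename)
    handler.setFormatter(logging.Formatter('%(asctime)s %(name)-15s %(levelname)-8s %(message)s'))
    mod_logger.addHandler(handler)
    mod_logger.propagate = False  # keep these records away from the root logger's handlers
    return mod_logger

add_module_file('other_script', 'other_script.log')
add_module_file('another_module', 'another_module.log')  # hypothetical second module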

Python logging create handlers in wrapper script for logger in imported module

I am writing an API (Python 2.7.x), and I have a worker script for it which does nothing on its own but can be wrapped by a variety of higher-level scripts (i.e. one that feeds the worker data from CSV, one from a DB, etc.). The current task requires me to:
log INFO+ to console
log a certain set of INFO+ events to a .csv file
log ALL events to a distinct .log file
I've distilled my code to the following examples:
# SuperExample.py
import logging
import SubExample

def main():
    logging.basicConfig(level=logging.INFO)
    verbose_log = 'debug.log'
    data_log = 'data.csv'
    format_string = '%(asctime)s::%(name)s::%(levelname)s::%(message)s'
    formatter = logging.Formatter(format_string)

    # verbose log is a typical event log used for debugging
    verbose = logging.FileHandler(verbose_log, mode='w')
    verbose.setLevel(logging.DEBUG)
    verbose.setFormatter(formatter)
    SubExample.logger.addHandler(verbose)

    # data log will eventually have a different formatter and a filter in
    # order to get a narrow set of events, formatted for post-processing ease
    data = logging.FileHandler(data_log, mode='w')
    data.setLevel(logging.INFO)
    data.setFormatter(formatter)
    SubExample.logger.addHandler(data)

    logging.info('Started')
    SubExample.do_something()
    logging.info('Finished')

if __name__ == '__main__':
    main()
and
# SubExample.py
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

def do_something():
    logger.debug('hey look I am doing something!')
    logger.debug('now I am doing something else!!')
    logger.info('this is my result!!!')
which gives me what I want in my files, but gives me this in my console:
INFO:root:Started
DEBUG:SubExample:hey look I am doing something!
DEBUG:SubExample:now I am doing something else!!
INFO:SubExample:this is my result!!!
INFO:root:Finished
I've read about the logging module and its best practices, but very little of the example code works exactly the way it's described once libraries get involved. So, my first question is: is this a basically sane approach? I haven't actually seen anyone else attach handlers to the subscript's logger from the wrapper script, but it seems to do what I want.
And my second question is: why do the DEBUG statements get to the console? I would think that logging.basicConfig(level=logging.INFO) should prevent this.
In the SuperExample.py file, I removed the basicConfig step and instead did this:
# SuperExample.py
import logging
import SubExample

def main():
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    logger.addHandler(console)
    ...
    ...
    logger.info('Started')
    ...
    logger.info('Finished')
In the SubExample.py file:
# SubExample.py
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
console = logging.StreamHandler()
console.setLevel(logging.INFO)
logger.addHandler(console)

def do_something():
    ...
The rest of the code is the same as yours. When I run SuperExample.py, this is the output:
test_project ~$ python SuperExample.py
Started
this is my result!!!
Finished
The debug.log file has this:
2017-10-25 16:18:08,292::SubExample::DEBUG::hey look I am doing something!
2017-10-25 16:18:08,292::SubExample::DEBUG::now I am doing something else!!
2017-10-25 16:18:08,292::SubExample::INFO::this is my result!!!
The data.csv file has this:
2017-10-25 16:18:08,292::SubExample::INFO::this is my result!!!
So it seems the right way to do this is to add a StreamHandler to the logger in each of your modules and set its level to whatever you want logged from there to the console. Also, whenever you call logging.getLogger(), you HAVE to set that logger's level to get the expected behavior from the handlers.
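For what it's worth, the reason the DEBUG lines reached the console in the original setup: when a record propagates up to an ancestor logger's handlers, only the handlers' levels are checked, not the ancestor loggers' levels. basicConfig(level=logging.INFO) sets the root logger's level, but the console handler it installs stays at NOTSET, so SubExample's DEBUG records sail through. An alternative sketch (my suggestion, not from the original answer) is to raise the level of the root handler itself instead of adding per-module console handlers:
import logging

logging.basicConfig(level=logging.INFO)
for handler in logging.getLogger().handlers:
    handler.setLevel(logging.INFO)  # DEBUG records from child loggers are now filtered here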

Multiple streamhandlers

I am trying to beef up the logging in my Python scripts, and I would be grateful if you could share best practices with me. For now I have created this little script (I should say that I run Python 3.4):
import logging
import io
import sys

def Streamhandler(stream, level, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"):
    ch = logging.StreamHandler(stream)
    ch.setLevel(level)
    formatter = logging.Formatter(format)
    ch.setFormatter(formatter)
    return ch

# get the root logger
logger = logging.getLogger()

stream = io.StringIO()
logger.addHandler(Streamhandler(stream, logging.WARN))

stream_error = io.StringIO()
logger.addHandler(Streamhandler(stream_error, logging.ERROR))

logger.addHandler(Streamhandler(stream=sys.stdout, level=logging.DEBUG))

print(logger)
for h in logger.handlers:
    print(h)
    print(h.level)

# 'application' code: goes to the root logger!
logging.debug('debug message')
logging.info('info message')
logging.warning('warn message')
logging.error('error message')
logging.critical('critical message')

print(stream.getvalue())
print(stream_error.getvalue())
I have three handlers; two of them write into an io.StringIO (this seems to work). I need this to simplify testing, but also to send logs via an HTTP email service. Then there is a StreamHandler for the console. However, logging.debug and logging.info messages are ignored on the console here, despite the level being set explicitly low enough?!
First, you didn't set the level on the logger itself:
logger.setLevel(logging.DEBUG)
Also, you define a logger but make your calls on logging, which calls the root logger. Not that it makes any difference in your case: since you didn't specify a name for your logger, logging.getLogger() returns the root logger.
As for "best practices", it really depends on how complex your scripts are and, of course, on your logging needs.
For self-contained simple scripts with simple use cases (single known environment, no concurrent execution, simple logging to a file or stderr, etc.), a simple call to logging.basicConfig() and direct calls to logging.whatever() are usually good enough.
For anything more complex, it's better to use a distinct config file, either in ini format or as a Python dict (using logging.config.dictConfig), split your script into distinct modules or packages, each defining its own named logger (with logger = logging.getLogger(__name__)), and keep the script itself as just the "runner" for your code: configure logging, import modules, parse command-line args, and call the main function, preferably in a try/except block so as to properly log any unhandled exception before crashing; a sketch of that layout follows.
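A minimal sketch of that "runner" layout (the function name main and the simulated failure are illustrative assumptions):
import logging

log = logging.getLogger(__name__)

def main():
    log.info("doing the real work")
    raise RuntimeError("boom")  # simulate a crash

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    try:
        main()
    except Exception:
        log.exception("unhandled exception")  # logs the message plus the full traceback
        raise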
A logger has a threshold level too; you need to set it to DEBUG first:
logger.setLevel(logging.DEBUG)

Logging over multiple modules

How can I log everything using Python's logging module to one text file, across multiple modules?
Main.py:
import logging
import logging.handlers

logging.basicConfig(format='localhost - - [%(asctime)s] %(message)s', level=logging.DEBUG)
log_handler = logging.handlers.RotatingFileHandler('debug.out', maxBytes=2048576)
log = logging.getLogger('logger')
log.addHandler(log_handler)

import test
Test.py:
import logging
log = logging.getLogger('logger')
log.error('test')
debug.out stays empty. I'm not sure what to try next, even after reading the logging documentation.
Edit: Fixed with the code above.
Set the correct logging level (ERROR or lower, if you want to get all messages with level ERROR or higher) and add a handler to write all messages into a file. For more details, have a look at https://docs.python.org/2/howto/logging-cookbook.html.
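A minimal two-file sketch of that advice (filenames are illustrative); the key point is to configure the handler once, in the entry script, before importing modules that log:
# main.py
import logging

logging.basicConfig(filename='debug.out', level=logging.DEBUG,
                    format='localhost - - [%(asctime)s] %(message)s')

import test  # imported after logging is configured

# test.py
import logging

log = logging.getLogger(__name__)
log.error('test')  # propagates to the root logger's file handler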
