Logging over multiple modules - Python

How can I log everything with Python's 'logging' module to one text file, across multiple modules?
Main.py:
import logging
import logging.handlers  # RotatingFileHandler lives in logging.handlers, not logging
logging.basicConfig(format='localhost - - [%(asctime)s] %(message)s', level=logging.DEBUG)
log_handler = logging.handlers.RotatingFileHandler('debug.out', maxBytes=2048576)
log = logging.getLogger('logger')
log.addHandler(log_handler)
import test
Test.py:
import logging
log = logging.getLogger('logger')
log.error('test')
debug.out stays empty. I'm not sure what to try next, even after reading the logging documentation.
Edit: Fixed with the code above.

Set an appropriate logging level (ERROR or lower, if you want to capture all messages of level ERROR and above) and add a handler to write all messages into a file. For more details have a look at https://docs.python.org/2/howto/logging-cookbook.html.
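For reference, a minimal working sketch of the same two-file setup (handler and format details copied from the question; attaching the handler to the root logger instead of a named 'logger' is one way to make every module's records end up in the file):

Main.py:
import logging
import logging.handlers

handler = logging.handlers.RotatingFileHandler('debug.out', maxBytes=2048576)
handler.setFormatter(logging.Formatter('localhost - - [%(asctime)s] %(message)s'))
root = logging.getLogger()  # configure the root logger once, in the entry script
root.setLevel(logging.DEBUG)
root.addHandler(handler)

import test  # imported only after logging is configured

Test.py:
import logging

log = logging.getLogger(__name__)  # a child logger; its records propagate to the root handlers
log.error('test')  # ends up in debug.out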

Related

Python logging multiple modules each module log to separate files

I have a main script and multiple modules. Right now the logging is set up so that all modules log into the same file, which gets hard to debug. I would like each module to log into its own file. I would also like the output of the requests module to go into the log of whichever module used it. I don't know if this is even possible. I have searched everywhere and tried everything I could think of, but it always comes back to everything logging into one file, or to setting up logging in each module and initiating the script from my main module instead of importing it.
main.py
import logging, logging.handlers
import other_script

console_debug = True
log = logging.getLogger()

def setup_logging():
    filelog = logging.handlers.TimedRotatingFileHandler(path + 'logs/api/api.log',
                                                        when='midnight', interval=1, backupCount=3)
    filelog.setLevel(logging.DEBUG)
    fileformatter = logging.Formatter('%(asctime)s %(name)-15s %(levelname)-8s %(message)s')
    filelog.setFormatter(fileformatter)
    log.addHandler(filelog)
    if console_debug:
        console = logging.StreamHandler()
        console.setLevel(logging.DEBUG)
        formatter = logging.Formatter('%(name)-15s: %(levelname)-8s %(message)s')
        console.setFormatter(formatter)
        log.addHandler(console)

if __name__ == '__main__':
    setup_logging()
other_script.py
import requests
import logging
log = logging.getLogger(__name__)
One very basic concept of Python logging is that every destination log records go to (a file, a stream, or anything else) corresponds to one Handler. So if you want every module to log to a different file, you have to give every module its own handler. This can also be done from a central place. In your main.py you could add this to make the other_script module log to a separate file:
other_logger = logging.getLogger('other_script')
other_logger.addHandler(logging.FileHandler('other_file'))
other_logger.propagate = False
The last line is only required if you also add a handler to the root logger. If you keep propagate at its default of True, all logs will be sent to the root logger's handlers too. In your scenario it might be better not to use the root logger at all, and to use a specific named logger such as getLogger('__main__') in main.
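A sketch of the full idea, assuming module names 'other_script' and 'api_client' and illustrative file names:

import logging

def file_logger(name, filename):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(logging.FileHandler(filename))
    logger.propagate = False  # keep these records away from the root handlers
    return logger

file_logger('other_script', 'other_script.log')
api_log = file_logger('api_client', 'api_client.log')

# requests does most of its logging through the 'urllib3' logger (older
# versions used 'requests.packages.urllib3'); attaching the same file handler
# sends that output into the chosen module's file. This only works cleanly if
# a single module is meant to "own" the requests output.
logging.getLogger('urllib3').addHandler(api_log.handlers[0])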

Python Logging - Only for own imported modules

Referring to this question here: LINK
How can I set up a config that logs only my root script and my own sub-scripts? The linked question asked about disabling logging for all imported modules, but that is not my intention.
My root setup:
import logging
from exchangehandler import send_mail
log_wp = logging.getLogger(__name__)
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s - %(levelname)s [%(filename)s]: %(name)s %(funcName)20s - Message: %(message)s',
                    datefmt='%d.%m.%Y %H:%M:%S',
                    filename='C:/log/myapp.log',
                    filemode='a')
handler = logging.StreamHandler()
log_wp.addHandler(handler)
log_wp.debug('This is from root')
send_mail('address@eg.com', 'Request', 'Hi there')
My sub-module exchangehandler.py:
import logging
log_wp = logging.getLogger(__name__)
def send_mail(mail_to, mail_subject, mail_body, mail_attachment=None):
    log_wp.debug('Hey this is from exchangehandler.py!')
    m.send_and_save()
myapp.log:
16.07.2018 10:27:40 - DEBUG [test_script.py]: __main__ <module> - Message: This is from root
16.07.2018 10:28:02 - DEBUG [exchangehandler.py]: exchangehandler send_mail - Message: Hey this is from exchangehandler.py!
16.07.2018 10:28:02 - DEBUG [folders.py]: exchangelib.folders get_default_folder - Message: Testing default <class 'exchangelib.folders.SentItems'> folder with GetFolder
16.07.2018 10:28:02 - DEBUG [services.py]: exchangelib.services get_payload - Message: Getting folder ArchiveDeletedItems (archivedeleteditems)
16.07.2018 10:28:02 - DEBUG [services.py]: exchangelib.services get_payload - Message: Getting folder ArchiveInbox (archiveinbox)
My problem is that the log file also contains a lot of output from the exchangelib module, which is imported in exchangehandler.py. Either the imported exchangelib module is configured incorrectly, or I have made a mistake. How can I reduce the log output to only my own logging messages?
EDIT:
An extract of folders.py from the exchangelib module. This is not anything that I have written:
import logging
log = logging.getLogger(__name__)
def get_default_folder(self, folder_cls):
    try:
        # Get the default folder
        log.debug('Testing default %s folder with GetFolder', folder_cls)
        # Use cached instance if available
        for f in self._folders_map.values():
            if isinstance(f, folder_cls) and f.has_distinguished_name:
                return f
        return folder_cls.get_distinguished(account=self.account)
The imported exchangelib module is not configured at all when it comes to logging. You are configuring it implicitly by calling logging.basicConfig() in your main module.
exchangelib does create loggers and logs to them, but by default these loggers do not have handlers and formatters attached, so they don't do anything visible. What they do is propagate up to the root logger, which by default also has no handlers and formatters attached.
By calling logging.basicConfig in your main module, you actually attach handlers to the root logger. Your own desired loggers propagate to the root logger, hence the messages are written to the handlers, but the same is true for the exchangelib loggers from that point onwards.
You have at least two options here. You can explicitly configure "your" named logger(s):
main module
import logging
log_wp = logging.getLogger(__name__) # or pass an explicit name here, e.g. "mylogger"
hdlr = logging.StreamHandler()
fhdlr = logging.FileHandler("myapp.log")
log_wp.addHandler(hdlr)
log_wp.addHandler(fhdlr)
log_wp.setLevel(logging.DEBUG)
The above is very simplified. To explicitly configure multiple named loggers, refer to the logging.config HOWTO.
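For illustration, a minimal logging.config.dictConfig sketch that configures only a named logger (the logger and file names are illustrative):

import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'plain': {'format': '%(asctime)s %(name)s %(levelname)s %(message)s'},
    },
    'handlers': {
        'file': {'class': 'logging.FileHandler',
                 'filename': 'myapp.log',
                 'formatter': 'plain'},
    },
    'loggers': {
        # only this named logger gets the file handler; exchangelib's loggers
        # stay unconfigured and the root logger gets no handlers at all
        'mylogger': {'handlers': ['file'], 'level': 'DEBUG'},
    },
})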
If you rather want to stick to just using the root logger (configured via basicConfig()), you can also explicitly disable the undesired loggers after exchangelib has been imported and these loggers have been created:
logging.getLogger("exchangelib.folders").disabled = True
logging.getLogger("exchangelib.services").disabled = True
If you don't know the names of the loggers to disable, logging has a dictionary holding all the known loggers. So you could temporarily do this to see all the loggers your program creates:
# e.g. after the line 'log_wp.addHandler(handler)'
print([k for k in logging.Logger.manager.loggerDict])
Using the dict would also allow you to do something like this (note that the dict can also contain PlaceHolder objects, so it is safer to iterate over the names):
for name in logging.Logger.manager.loggerDict:
    if name.startswith('exchangelib'):
        logging.getLogger(name).disabled = True

Python logging working on Windows but not Mac OS

# Logging
cur_flname = os.path.splitext(os.path.basename(__file__))[0]
LOG_FILENAME = constants.log_dir + os.sep + 'Log_' + cur_flname + '.txt'
util.make_dir_if_missing(constants.log_dir)
logging.basicConfig(filename=LOG_FILENAME, level=logging.INFO, filemode='w',
                    format='%(asctime)s %(levelname)s %(module)s - %(funcName)s: %(message)s',
                    datefmt="%m-%d %H:%M")  # Logging levels are DEBUG, INFO, WARNING, ERROR, and CRITICAL
# Output to screen
logger = logging.getLogger(cur_flname)
logger.addHandler(logging.StreamHandler())
I use the code above to create a logger that prints messages to the screen and simultaneously writes them to a text file.
On Windows, the messages get output to both file and screen. However, on Mac OS X 10.9.5, they only get output to the file. I am using Python 2.7.
Any ideas on how to fix this?
From your question it is clear that you have no problem with creating the logger name, the log file name, or with logging to a file, so I will keep this part simplified to keep my code succinct.
First thing: your solution looks correct to me, as logging.StreamHandler sends output to sys.stderr by default. You might have some code around (not shown in your question) which modifies sys.stderr, or you may be running your code on OSX in such a way that output to stderr is not shown (but is really being emitted).
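A quick way to check whether stderr output reaches your terminal at all is to write to it directly:

import sys
sys.stderr.write("stderr is visible\n")  # if this text does not appear, stderr is being hidden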
Solution with logging
Put the following code into with_logging.py:
import logging
import sys

logformat = "%(asctime)s %(levelname)s %(module)s - %(funcName)s: %(message)s"
datefmt = "%m-%d %H:%M"

logging.basicConfig(filename="app.log", level=logging.INFO, filemode="w",
                    format=logformat, datefmt=datefmt)
stream_handler = logging.StreamHandler(sys.stderr)
stream_handler.setFormatter(logging.Formatter(fmt=logformat, datefmt=datefmt))

logger = logging.getLogger("app")
logger.addHandler(stream_handler)

logger.info("information")
logger.warning("warning")

def fun():
    logger.info(" fun inf")
    logger.warning("fun warn")

if __name__ == "__main__":
    fun()
Run it with $ python with_logging.py and you shall see the expected log records (properly formatted) in the log file app.log and on stderr too.
Note: if you do not see them on stderr, something is hiding your stderr output. To check, change the stream passed to StreamHandler to sys.stdout.
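That is, a one-line change in the script above:

stream_handler = logging.StreamHandler(sys.stdout)  # stdout instead of the stderr default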
Solution with logbook
logbook is a Python package providing an alternative to stdlib logging. I am showing it here to highlight the main difference: with logbook, use and configuration seem simpler to me.
The previous solution rewritten as with_logbook.py:
import logbook
import sys

logformat = ("{record.time:%m-%d %H:%M} {record.level_name} {record.module} - "
             "{record.func_name}: {record.message}")
kwargs = {"level": logbook.INFO, "format_string": logformat, "bubble": True}

logbook.StreamHandler(sys.stderr, **kwargs).push_application()
logbook.FileHandler("app.log", **kwargs).push_application()

logger = logbook.Logger("app")

logger.info("information")
logger.warning("warning")

def fun():
    logger.info(" fun inf")
    logger.warning("fun warn")

if __name__ == "__main__":
    fun()
The main difference is that you do not have to attach handlers to loggers; your loggers simply emit log records.
These records are then handled by whatever handlers are in place. One method is to create a handler and call push_application() on it. This puts the handler onto the stack of handlers in use.
As before, we need two handlers, one for the file and another for stderr.
If you run this script with $ python with_logbook.py, you shall see exactly the same results as with logging.
Separate logging setup from module code: short_logbook.py
With stdlib logging I really do not like the introductory dances needed to make the logger work. I want to simply emit some log records, and to do that as simply as possible.
The following example is a modification of the previous one. Instead of setting up logging at the very beginning of the module, I do it just before the code is run, inside the if __name__ == "__main__" block (you may do the same in any other place).
For practical reasons, I separated the module and the calling code into two files:
File funmodule.py:
from logbook import Logger

log = Logger(__name__)

log.info("information")
log.warning("warning")

def fun():
    log.info(" fun inf")
    log.warning("fun warn")
Notice that there are really only two lines of code related to making logging available.
Then create the calling code and put it into short_logbook.py:
import sys
import logbook

if __name__ == "__main__":
    logformat = ("{record.time:%m-%d %H:%M} {record.level_name} {record.module}"
                 " - {record.func_name}: {record.message}")
    kwargs = {"level": logbook.INFO, "format_string": logformat, "bubble": True}

    logbook.StreamHandler(sys.stderr, **kwargs).push_application()
    logbook.FileHandler("app.log", **kwargs).push_application()

    from funmodule import fun
    fun()
Running the code, you will see it working the same way as before; only the logger name will be funmodule.
Note that I did the from funmodule import fun after logging was set up. If I did it at the top of the short_logbook.py file, the first two log records from funmodule.py would not be seen, as they happen during module import.
EDIT: added another logging solution to allow a fair comparison.
One more stdlib logging solution
To have a fair comparison of logbook and logging, I rewrote the final logbook example using logging.
funmodule_logging.py looks like:
import logging

log = logging.getLogger(__name__)

log.info("information")
log.warning("warning")

def fun():
    log.info(" fun inf")
    log.warning("fun warn")
    log.error("no fun at all")
and short_logging.py looks as follows:
import sys
import logging

if __name__ == "__main__":
    logformat = "%(asctime)s %(levelname)s %(module)s - %(funcName)s: %(message)s"
    datefmt = "%m-%d %H:%M"

    logging.basicConfig(filename="app.log", level=logging.INFO, filemode="w",
                        format=logformat, datefmt=datefmt)
    stream_handler = logging.StreamHandler(sys.stderr)
    stream_handler.setFormatter(logging.Formatter(fmt=logformat, datefmt=datefmt))

    logger = logging.getLogger("funmodule_logging")
    logger.addHandler(stream_handler)

    from funmodule_logging import fun
    fun()
Functionality is very similar.
I still struggle with logging. stdlib logging is not easy to grasp, but it is in the stdlib and offers some nice things like logging.config.dictConfig, which allows configuring logging from a dictionary. logbook was much simpler to start with, but it is a bit slower at the moment and lacks dictConfig. Anyway, these differences are not relevant to your question.

Logging basicConfig not creating log file when I run in PyCharm?

When I run the code below in a terminal, it creates a log file:
import logging
logging.basicConfig(filename='ramexample.log',level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
but when I run the same code (with a different filename='ram.log') in PyCharm, it does not create any log file. Why?
import logging
logging.basicConfig(filename='ram.log',level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
What do I have to do to create a log file with PyCharm?
I encountered the same issue and found that none of the answers previously provided here would work. Maybe this issue was solved long ago for Ramnath Reddy, but I could not find the correct answer anywhere online.
Luckily, I found a solution from a colleague's code by adding the following lines before logging.basicConfig().
# Remove all handlers associated with the root logger object.
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)
Try it and see if it helps, for whoever has the same issue.
Python 3.8: A new option, force, has been made available to automatically remove the root handlers while calling basicConfig().
For example:
logging.basicConfig(filename='ramexample.log', level=logging.DEBUG, force=True)
See logging.basicConfig parameters:
force: If this keyword argument is specified as true, any existing handlers attached to the root logger are removed and closed, before carrying out the configuration as specified by the other arguments.
I can't remember where I got this, otherwise I would have provided a link. But I had the same problem some time ago in Jupyter notebooks, and this fixed it:
import logging
logger = logging.getLogger()
fhandler = logging.FileHandler(filename='mylog.log', mode='a')
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fhandler.setFormatter(formatter)
logger.addHandler(fhandler)
logger.setLevel(logging.DEBUG)
The reason this error happens is this:
The call to basicConfig() should come before any calls to debug(), info(), etc.
If debug() or info() is called first, it implicitly configures the root logger with default settings, so the later basicConfig() call becomes a no-op and cannot create and write to a new file.
In the example below, logging.info() is called right before logging.basicConfig():
Don't:
import logging
logging.info("root") # call to info too early
logging.basicConfig(filename="rec/test.log", level=logging.DEBUG) # no file created
Maximas is right: the file path is relative to the execution environment. However, instead of writing down the absolute path, you could try the dynamic path resolution approach:
import os, logging
filename = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'ram.log')
logging.basicConfig(filename=filename, level=logging.DEBUG)
This places ram.log in the same directory as the file that contains the above code (that's what __file__ is used for).
Using force solves it, like this:
logging.basicConfig(filename="test.log", force=True)
This does create a log when run from the Python terminal within PyCharm. You need to check the working directory of that terminal (try dir on Windows or pwd on Linux/Mac). Instead of just putting in ram.log, use the full file path of where you would like the file to appear.
E.g.:
logging.basicConfig(filename='/Users/Donkey/Test/ram.log', level=logging.DEBUG)
I used to get this error, but I solved it by adding force=True in basicConfig:
logging.basicConfig(level=logging.INFO, filename='C:\\Users\\sukal\\PycharmProjects\\Test\\Logs\\Automation.log', format=Log_Format, force=True)
import logging

class LogGen:
    @staticmethod
    def loggen():
        logger = logging.getLogger()
        fhandler = logging.FileHandler(filename='.\\logs\\automation.log', mode='a')
        formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
        fhandler.setFormatter(formatter)
        logger.addHandler(fhandler)
        logger.setLevel(logging.INFO)
        return logger
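One caveat with this class: every call to loggen() attaches another FileHandler to the root logger, so repeated calls duplicate every record. A sketch of a guard against that (function and file names are illustrative):

import logging

def get_logger():
    logger = logging.getLogger()
    if not logger.handlers:  # attach the handler only on the first call
        fhandler = logging.FileHandler(filename='automation.log', mode='a')
        fhandler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
        logger.addHandler(fhandler)
        logger.setLevel(logging.INFO)
    return logger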

Python logging over multiple files

I've read through the logging module documentation and whilst I may have missed something obvious, the code I've got doesn't appear to be working as intended. I'm using Python 2.6.4.
My program consists of several different python files, from which I want to send logging messages to a text file and, potentially, the screen. I imagine this is a common thing to do so I'm messing this up somewhere.
What my code is doing at the minute is logging to the text file correctly, kinda. But the logging to the screen is duplicated: once with the specified formatting, and once without. Also, when I turn off the screen output, I still get the text printed once, which I don't want - I just want it to be logged to the file.
Anyway, some code:
#logger.py
import logging
from logging.handlers import RotatingFileHandler
import os

def setup_logging(logdir=None, scrnlog=True, txtlog=True, loglevel=logging.DEBUG):
    logdir = os.path.abspath(logdir)
    if not os.path.exists(logdir):
        os.mkdir(logdir)

    log = logging.getLogger('stumbler')
    log.setLevel(loglevel)

    log_formatter = logging.Formatter("%(asctime)s - %(levelname)s :: %(message)s")

    if txtlog:
        txt_handler = RotatingFileHandler(os.path.join(logdir, "Stumbler.log"), backupCount=5)
        txt_handler.doRollover()
        txt_handler.setFormatter(log_formatter)
        log.addHandler(txt_handler)
        log.info("Logger initialised.")

    if scrnlog:
        console_handler = logging.StreamHandler()
        console_handler.setFormatter(log_formatter)
        log.addHandler(console_handler)
Nothing unusual there.
#core.py
import logging

corelog = logging.getLogger('stumbler.core')  # From what I understand of the docs, this should work :/

class Stumbler:
    [...]
    corelog.debug("Messages and rainbows...")
The screen output shows how this is being duplicated:
2010-01-08 22:57:07,587 - DEBUG :: SCANZIP: Checking zip contents, file: testscandir/testdir1/music.mp3
DEBUG:stumbler.core:SCANZIP: Checking zip contents, file: testscandir/testdir1/music.mp3
2010-01-08 22:57:07,587 - DEBUG :: SCANZIP: Checking zip contents, file: testscandir/testdir2/subdir/executable.exe
DEBUG:stumbler.core:SCANZIP: Checking zip contents, file: testscandir/testdir2/subdir/executable.exe
Although the text file is getting the correctly formatted output, turning the screen logging off in logger.py still leaves the incorrectly formatted output displayed.
From what I understand of the docs, since corelog is a child of the "stumbler" logger, calling corelog.debug() should use that formatting and output the logs as such.
Apologies for the essay over such a trivial issue.
TL;DR: How do I do logging from multiple files?
Are you sure no other logging setup is being done in anything you import?
The incorrect output in your console logs looks like the default configuration for a logger, so something else may be setting that up.
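One quick way to check is to inspect the root logger's handlers before your own setup runs:

import logging
# any handlers listed here were attached by an import or an earlier basicConfig() call
print(logging.root.handlers)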
Running this quick test script:
import logging
from logging.handlers import RotatingFileHandler
import os
def setup_logging(logdir=None, scrnlog=True, txtlog=True, loglevel=logging.DEBUG):
logdir = os.path.abspath(logdir)
if not os.path.exists(logdir):
os.mkdir(logdir)
log = logging.getLogger('stumbler')
log.setLevel(loglevel)
log_formatter = logging.Formatter("%(asctime)s - %(levelname)s :: %(message)s")
if txtlog:
txt_handler = RotatingFileHandler(os.path.join(logdir, "Stumbler.log"), backupCount=5)
txt_handler.doRollover()
txt_handler.setFormatter(log_formatter)
log.addHandler(txt_handler)
log.info("Logger initialised.")
if scrnlog:
console_handler = logging.StreamHandler()
console_handler.setFormatter(log_formatter)
log.addHandler(console_handler)
setup_logging('/tmp/logs')
corelog = logging.getLogger('stumbler.core')
corelog.debug("Messages and rainbows...")
yields this result:
2010-01-08 15:39:25,335 - DEBUG :: Messages and rainbows...
and in my /tmp/logs/Stumbler.log
2010-01-08 15:39:25,335 - INFO :: Logger initialised.
2010-01-08 15:39:25,335 - DEBUG :: Messages and rainbows...
This worked as expected when I ran it in Python 2.4, 2.5, and 2.6.4.
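If something you import has indeed attached a handler to the root logger, a sketch of two possible remedies:

import logging

# option 1: keep 'stumbler' records away from the root handlers entirely
logging.getLogger('stumbler').propagate = False

# option 2: strip whatever handlers were attached to the root logger
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)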
