Logging basicConfig not creating log file when I run in PyCharm?

When I run the code below in the terminal, it creates a log file:
import logging
logging.basicConfig(filename='ramexample.log',level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
but when I run the same code (with a different filename='ram.log') in PyCharm, it doesn't create any log file. Why?
import logging
logging.basicConfig(filename='ram.log',level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
What do I have to do to create a log file with PyCharm?

I encountered the same issue and found that none of the answers previously provided here worked. Maybe this issue was solved long ago for Ramnath Reddy, but I could not find the correct answer anywhere online.
Luckily, I found a solution in a colleague's code: add the following lines before logging.basicConfig().
# Remove all handlers associated with the root logger object.
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)
Try it and see if it helps anyone who has the same issue.
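Putting it together, a minimal sketch of the fix in context (reusing the question's ram.log filename):
import logging

# Clear any handlers a previous run or an import has already attached
# to the root logger, then configure from a clean slate.
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)

logging.basicConfig(filename='ram.log', level=logging.DEBUG)
logging.debug('This message should go to the log file')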
Python 3.8: A new option, force, has been made available to automatically remove the root handlers while calling basicConfig().
For example:
logging.basicConfig(filename='ramexample.log', level=logging.DEBUG, force=True)
See logging.basicConfig parameters:
force: If this keyword argument is specified as true, any existing handlers attached to the root logger are removed and closed, before carrying out the configuration as specified by the other arguments.

I can't remember where I got this, otherwise I would have provided a link. But I had the same problem some time ago in Jupyter notebooks, and this fixed it:
import logging
logger = logging.getLogger()
fhandler = logging.FileHandler(filename='mylog.log', mode='a')
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fhandler.setFormatter(formatter)
logger.addHandler(fhandler)
logger.setLevel(logging.DEBUG)

The reason this error happens is this:
The call to basicConfig() should come before any calls to debug(), info(), etc.
If a logging call comes first, it implicitly configures the root logger with a default handler, so the later basicConfig() call does nothing and cannot create and write a new file.
Here I called logging.info() right before logging.basicConfig().
Don't:
import logging
logging.info("root") # call to info too early
logging.basicConfig(filename="rec/test.log", level=logging.DEBUG) # no file created

Maximas is right: the file path is relative to the execution environment. However, instead of writing down the absolute path, you could try the dynamic path resolution approach:
import os

filename = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'ram.log')
logging.basicConfig(filename=filename, level=logging.DEBUG)
This places ram.log in the same directory as the file that contains the above code (that's what __file__ is used for).

Using force solves it, like this:
logging.basicConfig(filename="test.log", force=True)

This does create a log file when run from the terminal within PyCharm. You need to check the working directory of that terminal (try dir on Windows or pwd on Linux/macOS), because the file is created relative to it. Instead of just putting in ram.log, use the full path of where you would like the file to appear.
E.g.
logging.basicConfig(filename='/Users/Donkey/Test/ram.log', level=logging.DEBUG)

I used to get this error, but I solved it by adding force=True to basicConfig:
Log_Format = '%(asctime)s - %(levelname)s - %(message)s'  # example; the original code defines Log_Format elsewhere
logging.basicConfig(level=logging.INFO, filename='C:\\Users\\sukal\\PycharmProjects\\Test\\Logs\\Automation.log', format=Log_Format, force=True)

import logging

class LogGen:
    @staticmethod
    def loggen():
        logger = logging.getLogger()
        fhandler = logging.FileHandler(filename='.\\logs\\automation.log', mode='a')
        formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
        fhandler.setFormatter(formatter)
        logger.addHandler(fhandler)
        logger.setLevel(logging.INFO)
        return logger
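Usage would then look something like this (a sketch assuming the .\logs directory already exists):
logger = LogGen.loggen()
logger.info("automation run started")  # appended to .\logs\automation.log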

Related

Create log file named after filename of caller script

I have a logger.py file which initialises logging.
import logging

logger = logging.getLogger(__name__)

def logger_init():
    import os
    import inspect
    global logger

    logger.setLevel(logging.DEBUG)

    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)
    logger.addHandler(ch)

    fh = logging.FileHandler(os.getcwd() + os.path.basename(__file__) + ".log")
    fh.setLevel(level=logging.DEBUG)
    logger.addHandler(fh)
    return None

logger_init()
I have another script caller.py that calls the logger.
from logger import *
logger.info("test log")
What happens is a log file called logger.log will be created containing the logged messages.
What I want is the name of this log file to be named after the caller script filename. So, in this case, the created log file should have the name caller.log instead.
I am using Python 3.7.
It is immensely helpful to consolidate logging to one location. I learned this the hard way. It is easier to debug when events are sorted by time and it is thread-safe to log to the same file. There are solutions for multiprocessing logging.
The log format can, then, contain the module name, function name and even line number from where the log call was made. This is invaluable. You can find a list of attributes you can include automatically in a log message here.
Example format:
format='[%(asctime)s] [%(module)s.%(funcName)s] [%(levelname)s] %(message)s'
Example log message:
[2019-04-03 12:29:48,351] [caller.work_func] [INFO] Completed task 1.
You can get the filename of the main script from the first item in sys.argv, but if you want to get the caller module not the main script, check the answers on this question.
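A minimal sketch of the sys.argv approach (the format string is the example from above; the rest is illustrative):
import logging
import os
import sys

# Derive the log filename from the main script: running caller.py yields caller.log.
script_name = os.path.splitext(os.path.basename(sys.argv[0]))[0]
logging.basicConfig(
    filename=script_name + '.log',
    format='[%(asctime)s] [%(module)s.%(funcName)s] [%(levelname)s] %(message)s',
    level=logging.DEBUG,
)
logger = logging.getLogger(__name__)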

Python logging with multiple modules, each module logging to a separate file

I have a main script and multiple modules. Right now I have logging set up so that all the logging from all modules goes into the same log file. It gets hard to debug when it's all in one file, so I would like to separate each module into its own log file. I would also like to see the output of the requests module in the log of the module that used it. I don't know if this is even possible. I searched everywhere and tried everything I could think of, but it always comes back to either logging everything into one file, or setting up logging in each module and initiating the script from my main module instead of importing it as a module.
main.py
import logging, logging.handlers
import other_script

console_debug = True
path = ''  # base directory for the log files; assumed to be defined elsewhere in the original

log = logging.getLogger()

def setup_logging():
    filelog = logging.handlers.TimedRotatingFileHandler(path + 'logs/api/api.log',
                                                        when='midnight', interval=1, backupCount=3)
    filelog.setLevel(logging.DEBUG)
    fileformatter = logging.Formatter('%(asctime)s %(name)-15s %(levelname)-8s %(message)s')
    filelog.setFormatter(fileformatter)
    log.addHandler(filelog)
    if console_debug:
        console = logging.StreamHandler()
        console.setLevel(logging.DEBUG)
        formatter = logging.Formatter('%(name)-15s: %(levelname)-8s %(message)s')
        console.setFormatter(formatter)
        log.addHandler(console)

if __name__ == '__main__':
    setup_logging()
other_script.py
import requests
import logging
log = logging.getLogger(__name__)
One very basic concept of Python logging is that every file, stream, or other place that logs go to corresponds to one Handler. So if you want every module to log to a different file, you will have to give every module its own handler. This can also be done from a central place. In your main.py you could add this to make the other_script module log to a separate file:
other_logger = logging.getLogger('other_script')
other_logger.addHandler(logging.FileHandler('other_file'))
other_logger.propagate = False
The last line is only required if you add a handler to the root logger. If you keep propagate at the default True, all logs will be sent to the root logger's handlers too. In your scenario it might be better not to use the root logger at all, and instead use a specific named logger like getLogger('__main__') in main.
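A minimal sketch of that central setup, assuming module names other_script and __main__ (the filenames here are illustrative):
import logging

def setup_module_logger(module_name, filename):
    # Give one module's logger its own file handler.
    logger = logging.getLogger(module_name)
    handler = logging.FileHandler(filename)
    handler.setFormatter(logging.Formatter('%(asctime)s %(name)-15s %(levelname)-8s %(message)s'))
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)
    logger.propagate = False  # keep these records out of the root logger's handlers
    return logger

setup_module_logger('other_script', 'other_script.log')
setup_module_logger('__main__', 'main.log')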

Multiple streamhandlers

I am trying to beef up the logging in my Python scripts and I would be grateful if you could share best practices with me. For now, I have created this little script (I should say that I run Python 3.4):
import logging
import io
import sys
def Streamhandler(stream, level, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"):
    ch = logging.StreamHandler(stream)
    ch.setLevel(level)
    formatter = logging.Formatter(format)
    ch.setFormatter(formatter)
    return ch
# get the root logger
logger = logging.getLogger()
stream = io.StringIO()
logger.addHandler(Streamhandler(stream, logging.WARN))
stream_error = io.StringIO()
logger.addHandler(Streamhandler(stream_error, logging.ERROR))
logger.addHandler(Streamhandler(stream=sys.stdout, level=logging.DEBUG))
print(logger)
for h in logger.handlers:
    print(h)
    print(h.level)
# 'application' code # goes to the root logger!
logging.debug('debug message')
logging.info('info message')
logging.warning('warn message')
logging.error('error message')
logging.critical('critical message')
print(stream.getvalue())
print(stream_error.getvalue())
I have three handlers; two of them write into an io.StringIO (this seems to work). I need this to simplify testing but also to send logs via an HTTP email service. And then there is a StreamHandler for the console. However, logging.debug and logging.info messages are ignored on the console here despite setting the level explicitly low enough?!
First, you didn't set the level on the logger itself:
logger.setLevel(logging.DEBUG)
Also, you define a logger but make your calls on logging, which calls the root logger. Not that it makes any difference in your case: since you didn't specify a name for your logger, logging.getLogger() returns the root logger anyway.
With regard to "best practices", it really depends on how complex your scripts are and, of course, on your logging needs.
For self-contained simple scripts with simple use cases (single known environment, no concurrent execution, simple logging to a file or stderr, etc.), a simple call to logging.basicConfig() and direct calls to logging.whatever() are usually good enough.
For anything more complex, it's better to use a distinct config file, either in INI format or as a Python dict (using logging.config.dictConfig), split your script into distinct modules or packages each defining its own named logger (with logger = logging.getLogger(__name__)), and keep your script itself only as the "runner" for your code, i.e. configure logging, import modules, parse command-line args and call the main function, preferably in a try/except block so as to properly log any unhandled exception before crashing.
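As a rough illustration of the dict-based variant, a minimal sketch (the handler and file names are illustrative, not a definitive layout):
import logging.config

LOGGING_CONFIG = {
    'version': 1,
    'formatters': {
        'default': {'format': '[%(asctime)s] [%(name)s] [%(levelname)s] %(message)s'},
    },
    'handlers': {
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'app.log',
            'formatter': 'default',
            'level': 'DEBUG',
        },
    },
    'root': {'handlers': ['file'], 'level': 'DEBUG'},
}

logging.config.dictConfig(LOGGING_CONFIG)
logging.getLogger(__name__).info('configured via dictConfig')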
A logger has a threshold level too, you need to set it to DEBUG first:
logger.setLevel(logging.DEBUG)

Logging over multiple modules

How can I log everything using Python 'logging' to one text file, over multiple modules?
Main.py:
import logging
import logging.handlers

logging.basicConfig(format='localhost - - [%(asctime)s] %(message)s', level=logging.DEBUG)
log_handler = logging.handlers.RotatingFileHandler('debug.out', maxBytes=2048576)
log = logging.getLogger('logger')
log.addHandler(log_handler)
import test
Test.py:
import logging
log = logging.getLogger('logger')
log.error('test')
debug.out stays empty. I'm not sure what to try next, even after reading the logging documentation.
Edit: Fixed with the code above.
Set the correct logging level (ERROR or lower if you want to get all messages with level ERROR or higher) and add a handler to write all messages into a file. For more details, have a look at https://docs.python.org/2/howto/logging-cookbook.html.
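A minimal sketch of that advice applied to the question's 'logger' logger (reusing the question's debug.out filename):
import logging
import logging.handlers

log = logging.getLogger('logger')
log.setLevel(logging.ERROR)  # threshold low enough to pass ERROR records
handler = logging.handlers.RotatingFileHandler('debug.out', maxBytes=2048576)
log.addHandler(handler)

log.error('test')  # now lands in debug.out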

Python logging over multiple files

I've read through the logging module documentation and whilst I may have missed something obvious, the code I've got doesn't appear to be working as intended. I'm using Python 2.6.4.
My program consists of several different python files, from which I want to send logging messages to a text file and, potentially, the screen. I imagine this is a common thing to do so I'm messing this up somewhere.
What my code is doing at the minute is logging to the text file correctly, kind of. But logging to the screen is being duplicated: one line with the specified formatting, and one without. Also, when I turn off the screen output, I'm still getting the text printed once, which I don't want; I just want it logged to the file.
Anyway, some code:
#logger.py
import logging
from logging.handlers import RotatingFileHandler
import os
def setup_logging(logdir=None, scrnlog=True, txtlog=True, loglevel=logging.DEBUG):
    logdir = os.path.abspath(logdir)

    if not os.path.exists(logdir):
        os.mkdir(logdir)

    log = logging.getLogger('stumbler')
    log.setLevel(loglevel)

    log_formatter = logging.Formatter("%(asctime)s - %(levelname)s :: %(message)s")

    if txtlog:
        txt_handler = RotatingFileHandler(os.path.join(logdir, "Stumbler.log"), backupCount=5)
        txt_handler.doRollover()
        txt_handler.setFormatter(log_formatter)
        log.addHandler(txt_handler)
        log.info("Logger initialised.")

    if scrnlog:
        console_handler = logging.StreamHandler()
        console_handler.setFormatter(log_formatter)
        log.addHandler(console_handler)
Nothing unusual there.
#core.py
import logging
corelog = logging.getLogger('stumbler.core') # From what I understand of the docs, this should work :/
class Stumbler:
    [...]
    corelog.debug("Messages and rainbows...")
The screen output shows how this is being duplicated:
2010-01-08 22:57:07,587 - DEBUG :: SCANZIP: Checking zip contents, file: testscandir/testdir1/music.mp3
DEBUG:stumbler.core:SCANZIP: Checking zip contents, file: testscandir/testdir1/music.mp3
2010-01-08 22:57:07,587 - DEBUG :: SCANZIP: Checking zip contents, file: testscandir/testdir2/subdir/executable.exe
DEBUG:stumbler.core:SCANZIP: Checking zip contents, file: testscandir/testdir2/subdir/executable.exe
Although the text file is getting the correctly formatted output, turning the screen logging off in logger.py still leaves the incorrectly formatted output displayed.
From what I understand of the docs, since corelog is a child of the "stumbler" logger, calling corelog.debug() should use that formatting and output the logs as such.
Apologies for the essay over such a trivial issue.
TL;DR: How do I do logging from multiple files?
Are you sure no other logging setup is being done in anything you import?
The incorrect output in your console logs looks like the default configuration for a logger, so something else may be setting that up.
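One quick way to verify that, as a sketch: inspect the root logger's handlers before your own setup runs.
import logging

# An empty list here means nothing has configured the root logger yet;
# a StreamHandler already present points at a stray basicConfig() call
# (or a logging call made before any configuration) in an imported module.
print(logging.root.handlers)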
Running this quick test script:
import logging
from logging.handlers import RotatingFileHandler
import os

def setup_logging(logdir=None, scrnlog=True, txtlog=True, loglevel=logging.DEBUG):
    logdir = os.path.abspath(logdir)

    if not os.path.exists(logdir):
        os.mkdir(logdir)

    log = logging.getLogger('stumbler')
    log.setLevel(loglevel)

    log_formatter = logging.Formatter("%(asctime)s - %(levelname)s :: %(message)s")

    if txtlog:
        txt_handler = RotatingFileHandler(os.path.join(logdir, "Stumbler.log"), backupCount=5)
        txt_handler.doRollover()
        txt_handler.setFormatter(log_formatter)
        log.addHandler(txt_handler)
        log.info("Logger initialised.")

    if scrnlog:
        console_handler = logging.StreamHandler()
        console_handler.setFormatter(log_formatter)
        log.addHandler(console_handler)
setup_logging('/tmp/logs')
corelog = logging.getLogger('stumbler.core')
corelog.debug("Messages and rainbows...")
yields this result:
2010-01-08 15:39:25,335 - DEBUG :: Messages and rainbows...
and in my /tmp/logs/Stumbler.log:
2010-01-08 15:39:25,335 - INFO :: Logger initialised.
2010-01-08 15:39:25,335 - DEBUG :: Messages and rainbows...
This worked as expected when I ran it in Python 2.4, 2.5, and 2.6.4.
