Currently I am logging all my test output from the console: whatever is displayed in the console is also written to the log file. However, if an exception is raised during script execution it is not logged, even though it shows up in the console.
Following is my logger class:
import sys

class Logging(object):
    def __init__(self, *files):
        self.files = files

    def write(self, obj):
        for f in self.files:
            f.write(obj)

sys.stdout = functions.Logging(sys.stdout, logfile)
Thanks,
Tejas
Exceptions get written to sys.stderr, so you have to set up a logger for that file handle, too.
You may want to look into using the logging module for this sort of thing, though.
I need some advice for a problem:
I am trying to run unit tests which are started by a campaign runner.
Runner: HTMLTestRunner
The tests are arranged in a package (Testcases). Each testcase is a Python module with a class in it. So: Testmodule -> TestClass -> test_funct
Now, to extend logging (for different reasons, like saving logs only for specific errors and starting/stopping a screen recording) I wrote a decorator which should handle my logfiles. That means:
When a decorated testcase is started, the log file handler should point at a temporary file
During the testcase run, the logfile should receive ALL log messages from the project
When the testcase is finished, the logfile should be copied to another directory (timestamp appended to the name) and the old logfile should be deleted.
So far this all basically works, but the content of the logfile is not correct. I have tried many things: sometimes not all log messages are captured, sometimes the number of messages is doubled, and sometimes the messages only start in the middle of the testcase. Can you help me with this issue?
Initialization of logger in My Campaign:
def main():
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    formatter = logging.Formatter(my_format_string)
    console.setFormatter(formatter)
    logging.getLogger().addHandler(console)

    logger = logging.getLogger("campaign")
    logger.setLevel(logging.INFO)
Decorators in other module:
from functools import wraps

def extended_logging(func):
    @wraps(func)
    def wrapped_func(*args, **kwargs):
        root = logging.getLogger("campaign")
        for hndlr in list(root.handlers):  # iterate over a copy while removing
            root.removeHandler(hndlr)
        console = logging.StreamHandler()
        console.setLevel(logging.INFO)
        formatter = logging.Formatter(my_format_string)
        console.setFormatter(formatter)
        root.addHandler(console)
        filehandler = logging.FileHandler(config.LOG_FILEPATH, mode='w+')
        formatter = logging.Formatter(my_format_string)
        filehandler.setFormatter(formatter)
        root.addHandler(filehandler)
        logger.info("This should actually work")
        try:
            time.sleep(5)
            func(*args, **kwargs)
        except Exception as error:  # *** This is just dummy error handling for the test
            logger.info("An exception occurred")
            logger.info("This testcase is false")
            raise Exception
        finally:
            # *** Copies and renames the temp logfile and removes it afterwards
            save_logs(None, None, "module", "1")
    return wrapped_func
Definition of TC is like this:
logger = logging.getLogger(f"campaign.{__name__}")

class TestClass(unittest.TestCase):
    @extended_logging
    def test_mylittlemoduletest(self):
        logger.info("Doing my testcase now")
        # here something will be done
Background
I have a very large python application that launches command-line utilities to get pieces of data it needs. I currently just redirect the python launcher script to a log file, which gives me all of the print() output, plus the output of the command-line utilities, i.e.:
python -m launcher.py &> /root/out.log
Problem
I've since implemented a proper logger via logging, which lets me format the logging statements more precisely, limit log file size, etc. I've swapped out most of my print() statements with calls to my logger. However, I have a problem: none of the output from the command-line applications appears in my log. It instead gets dumped to the console. Also, the programs aren't all launched the same way: some are launched via popen(), some by exec(), some by os.system(), etc.
Question
Is there a way to globally redirect all stdout/stderr text to my logging function, without having to rewrite/modify the code that launches these command-line tools? I tried setting the following, which I found in another question:
sys.stderr.write = lambda s: logger.error(s)
However, it fails with "sys.stderr.write is read-only".
While this is not a full answer, it may show you a redirection approach you can adapt to your particular case. This is how I did it a while back. Although I cannot remember why I did it this way, or what limitation I was trying to work around, the following redirects stdout and stderr to a class for print() statements. The class then writes to both screen and file:
import os
import sys
import datetime

class DebugLogger():
    def __init__(self, filename):
        timestamp = datetime.datetime.strftime(datetime.datetime.utcnow(),
                                               '%Y-%m-%d-%H-%M-%S-%f')
        # build up full path to filename
        logfile = os.path.join(os.path.dirname(sys.executable),
                               filename + timestamp)
        self.terminal = sys.stdout
        self.log = open(logfile, 'a')

    def write(self, message):
        timestamp = datetime.datetime.strftime(datetime.datetime.utcnow(),
                                               ' %Y-%m-%d-%H:%M:%S.%f')
        # write to screen
        self.terminal.write(message)
        # write to file
        self.log.write(timestamp + ' - ' + message)
        self.flush()

    def flush(self):
        self.terminal.flush()
        self.log.flush()
        os.fsync(self.log.fileno())

    def close(self):
        self.log.close()

def main(debug=False):
    if debug:
        filename = 'blabla'
        sys.stdout = DebugLogger(filename)
        sys.stderr = sys.stdout
    print('test')

if __name__ == '__main__':
    main(debug=True)
import sys
import io
import logging

logger = logging.getLogger(__name__)

class MyStream(io.IOBase):
    def write(self, s):
        logger.error(s)

sys.stderr = MyStream()
print('This is an error', file=sys.stderr)
This makes all calls to sys.stderr go to the logger.
The original stream is always available in sys.__stderr__.
I have a Python program running which continuously logs to a file using a TimedRotatingFileHandler. From time to time I want to collect the log files without closing the program, so I simply cut & paste the log files to a different location.
The program doesn't crash when I do this, but it doesn't log to any file anymore either.
After looking at the source code of BaseRotatingFileHandler:
def emit(self, record):
    try:
        if self.shouldRollover(record):
            self.doRollover()
        logging.FileHandler.emit(self, record)
    except Exception:
        self.handleError(record)
I figured I could subclass TimedRotatingFileHandler and reimplement its emit function like this:
def emit(self, record):
    try:
        if not os.path.exists(self.baseFilename) or self.shouldRollover(record):
            self.doRollover()
        logging.FileHandler.emit(self, record)
    except Exception:
        self.handleError(record)
The following snippet seems to achieve what I want, but I am not sure my approach is the best way to solve the problem. Is there a better way to achieve this, or am I doing it right?
import logging.handlers
import logging
import time
import os

class TimedRotatingFileHandler(logging.handlers.TimedRotatingFileHandler):
    def __init__(self, filename, **kwargs):
        super().__init__(filename, **kwargs)

    def emit(self, record):
        try:
            if not os.path.exists(self.baseFilename) or self.shouldRollover(record):
                self.doRollover()
            logging.FileHandler.emit(self, record)
        except Exception:
            self.handleError(record)

logging.basicConfig(handlers=[TimedRotatingFileHandler('test.log')])

for x in range(10):
    time.sleep(1)
    logging.critical('test')
EDIT:
I applied the solution provided by @danny. In order to keep the rotating file handler capabilities I created a hybrid class of TimedRotatingFileHandler and WatchedFileHandler:
class WatchedTimedRotatingFileHandler(logging.handlers.TimedRotatingFileHandler, logging.handlers.WatchedFileHandler):
    def __init__(self, filename, **kwargs):
        super().__init__(filename, **kwargs)
        self.dev, self.ino = -1, -1
        self._statstream()

    def emit(self, record):
        self.reopenIfNeeded()
        super().emit(record)
Since open files cannot be moved on Windows, it is safe to say the OP is not looking for a cross-platform solution.
In that case, logging.handlers.WatchedFileHandler is an appropriate solution to logging continuing when the file being logged to is changed.
From docs:
The WatchedFileHandler class, located in the logging.handlers module,
is a FileHandler which watches the file it is logging to. If the file
changes, it is closed and reopened using the file name.
There is no combined rotating-and-watched file handler available, so with this solution you will have to move rotation to logrotate or similar.
Alternatively, use your patch to TimedRotatingFileHandler, which will need to close and re-open the file handle correctly, in addition to the code you already have, to match what WatchedFileHandler does.
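A minimal usage sketch of WatchedFileHandler (the logger name and the app.log filename are placeholders):

```python
import logging
import logging.handlers

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

# Reopens the file by name whenever it is moved or deleted (POSIX only).
handler = logging.handlers.WatchedFileHandler("app.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("logged before any rotation")
```

After an external tool moves app.log away, the next emit notices the changed inode and recreates the file under the original name.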
The most cross-platform way to handle it is to take only the "old" files, leaving the most recent file in place.
This works without changing the handler implementation and will not fail on different platforms.
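Sketched out, that can be as simple as moving everything except the live file (the filenames and the archive directory here are assumptions):

```python
import glob
import os
import shutil

def collect_rotated(basename="app.log", dest="archive"):
    """Move only the rotated backups (e.g. app.log.2023-01-01),
    leaving the live file for the handler to keep writing to."""
    os.makedirs(dest, exist_ok=True)
    for path in glob.glob(basename + ".*"):
        shutil.move(path, os.path.join(dest, os.path.basename(path)))
```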
I want to write a logger which I can use in multiple modules. I must be able to enable and disable it from one place. And it must be reusable.
Following is the scenario.
switch_module.py
class Brocade(object):
    def __init__(self, ip, username, password):
        ...

    def connect(self):
        ...

    def disconnect(self):
        ...

    def switch_show(self):
        ...
switch_module_library.py
import switch_module

class Keyword_Mapper(object):
    def __init__(self, keyword_to_execute):
        self._brocade_object = switch_module.Brocade(ip, username, password)
        ...

    def map_keyword_to_command(self):
        ...
application_gui.py
class GUI:
    # I can open a file containing keywords for the Brocade switch
    # in this GUI in a tab and tree widget (it uses PyQt, which I don't know).
    # Each tab is a QThread and there could be multiple tabs.
    # Each tab is accompanied by an Execute button.
    # On pressing Execute it will read the strings/keywords from the file,
    # create an object of the Keyword_Mapper class, call
    # map_keyword_to_command, execute the command on the Brocade
    # switch and log the results. Currently I am logging the result
    # only from the Keyword_Mapper class.
The problem I have is how to write a logger which can be enabled and disabled at will, and which logs to one file as well as the console from all three modules.
I tried writing a global logger in __init__.py and importing it in all three modules. I had to give it a common name so that they all log to the same file, but then I ran into trouble because of the multi-threading. Later I created a logger which logs to a file with the thread id in its name, so that I can have one log per thread.
What if I am required to log to a different file rather than the same one?
I have gone through the Python logging documentation but am unable to get a clear picture of how to write a proper, reusable logging system.
I have gone through this link
Is it better to use root logger or named logger in Python
but due to the GUI created by someone other than me using PyQt and multi-threading I am unable to get my head around logging here.
In my project I only use the root logger (I don't have the time to create named loggers, even though it would be nice). So if you don't want to use a named logger, here is a quick solution:
I created a function to set up the logger quickly:
import logging

def initLogger(level=logging.DEBUG):
    if level == logging.DEBUG:
        # Display more stuff when in debug mode
        logging.basicConfig(
            format='%(levelname)s-%(module)s:%(lineno)d-%(funcName)s: %(message)s',
            level=level)
    else:
        # Display less stuff for info mode
        logging.basicConfig(format='%(levelname)s: %(message)s', level=level)
I created a package for it so that I can import it anywhere.
Then, in my top level I have:
import logging
import LoggingTools

if __name__ == '__main__':
    # Configure logger
    LoggingTools.initLogger(logging.DEBUG)
    #LoggingTools.initLogger(logging.INFO)
Depending on whether I am debugging or not, I use the corresponding statement.
Then in each other files, I just use the logging:
import logging

class MyClass():
    def __init__(self):
        logging.debug("Debug message")
        logging.info("Info message")
I can create a named child logger, so that all the logs output by that logger are marked with its name. I can use that logger exclusively in my function/class/whatever.
However, if that code calls out to functions in another module that uses the logging module's functions directly (which proxy to the root logger), how can I ensure that those log messages go through the same logger (or are at least logged in the same way)?
For example:
main.py
import logging
import other

def do_stuff(logger):
    logger.info("doing stuff")
    other.do_more_stuff()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("stuff")
    do_stuff(logger)
other.py
import logging

def do_more_stuff():
    logging.info("doing other stuff")
Outputs:
$ python main.py
INFO:stuff:doing stuff
INFO:root:doing other stuff
I want to be able to cause both log lines to be marked with the name 'stuff', and I want to be able to do this only changing main.py.
How can I cause the logging calls in other.py to use a different logger without changing that module?
This is the solution I've come up with:
Use thread-local data to store the contextual information, and a Filter on the root logger's handlers to add this information to LogRecords before they are emitted.
import threading
import logging

context = threading.local()
context.name = None

class ContextFilter(logging.Filter):
    def filter(self, record):
        if context.name is not None:
            record.name = "%s.%s" % (context.name, record.name)
        return True
This is fine for me, because I'm using the logger name to indicate what task was being carried out when this message was logged.
I can then use context managers or decorators to make logging from a particular passage of code all appear as though it was logged from a particular child logger.
import contextlib
import functools

@contextlib.contextmanager
def logname(name):
    old_name = context.name
    if old_name is None:
        context.name = name
    else:
        context.name = "%s.%s" % (old_name, name)
    try:
        yield
    finally:
        context.name = old_name

def as_logname(name):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            with logname(name):
                return f(*args, **kwargs)
        return wrapper
    return decorator
So then, I can do:
with logname("stuff"):
    logging.info("I'm doing stuff!")
    do_more_stuff()
or:
@as_logname("things")
def do_things():
    logging.info("Starting to do things")
    do_more_stuff()
The key thing being that any logging that do_more_stuff() does will be logged as if it were logged with either a "stuff" or "things" child logger, without having to change do_more_stuff() at all.
This solution would have problems if you were going to have different handlers on different child loggers.
Use logging.setLoggerClass so that all loggers used by other modules use your logger subclass (emphasis mine):
Tells the logging system to use the class klass when instantiating a logger. The class should define __init__() such that only a name argument is required, and the __init__() should call Logger.__init__(). This function is typically called before any loggers are instantiated by applications which need to use custom logger behavior.
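A minimal sketch of that suggestion (the subclass name and the "stuff" prefix are made up for illustration). Note that this only affects loggers created after the setLoggerClass() call; it does not change the root logger that bare logging.info() calls go through:

```python
import logging

class StuffLogger(logging.Logger):
    """Hypothetical subclass: files every record under the 'stuff' namespace."""
    def makeRecord(self, name, *args, **kwargs):
        return super().makeRecord("stuff." + name, *args, **kwargs)

# Must run before other modules create their loggers.
logging.setLoggerClass(StuffLogger)

logger = logging.getLogger("other")  # now a StuffLogger instance
```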
This is what logging handlers (the handlers in the logging module) are for. In addition to creating your logger, you create one or more handlers to send the logging information to various places and add them to the root logger. Most modules that do logging create a logger that they use for their own purposes, but depend on the controlling script to create the handlers. Some frameworks decide to be super helpful and add handlers for you.
Read the logging docs; it's all there.
(edit)
logging.basicConfig() is a helper function that adds a single handler to the root logger. You can control the format string it uses with the 'format=' parameter. If all you want to do is have all modules display "stuff", then use logging.basicConfig(level=logging.INFO, format="%(levelname)s:stuff:%(message)s").
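As a sketch of the handler-based setup described above (the filename and format strings are arbitrary choices): the controlling script configures the root logger once, and every module's records, named or not, flow through these handlers:

```python
import logging

root = logging.getLogger()
root.setLevel(logging.INFO)

# Console handler: short format for interactive use.
console = logging.StreamHandler()
console.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))
root.addHandler(console)

# File handler: timestamps for the permanent record.
filehandler = logging.FileHandler("app.log")
filehandler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(message)s"))
root.addHandler(filehandler)
```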
The logging.{info,warning,…} functions just call the respective methods on a Logger object called root (cf. the logging module source), so if you know the other module only calls the functions exported by the logging module, you can overwrite the logging module in other's namespace with your logger object:
import logging
import other

def do_stuff(logger):
    logger.info("doing stuff")
    other.do_more_stuff()

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("stuff")
    # Overwrite other.logging with the just-created logger object:
    other.logging = logger
    do_stuff(logger)