I'd like to know if there's a way to log the same thing in two files in twisted.
Let's say this is the code, now I'd like the same output going to "logs.log" to be redirected to sys.stdout.
if __name__ == "__main__":
    log.startLogging(open("logs.log", 'a'))
    log.startLogging(sys.stdout)
This is absolutely possible and easier than ever before if you're on the latest version of Twisted.
from sys import stdout
from twisted.logger import Logger, textFileLogObserver, globalLogBeginner
# start the global logger with two observers
logfile = open('log.log', 'a')
globalLogBeginner.beginLoggingTo([
    textFileLogObserver(stdout),
    textFileLogObserver(logfile)])

log = Logger()
log.info('hello world')
log.debug('hello world')
You can also implement your own observer if you want custom handling of the messages.
I want to use a memory logger in my project. It keeps track of the last n logging records. A minimal example main file looks like this:
import sys
import logging
from logging import StreamHandler
from test_module import do_stuff
logger = logging.getLogger(__name__)
class MemoryHandler(StreamHandler):
    def __init__(self, n_logs: int):
        StreamHandler.__init__(self)
        self.n_logs = n_logs
        self.my_records = []

    def emit(self, record):
        self.my_records.append(self.format(record))
        self.my_records = self.my_records[-self.n_logs:]

    def to_string(self):
        return '\n'.join(self.my_records)
if __name__ == '__main__':
    logging.basicConfig(stream=sys.stdout, level=logging.INFO)
    mem_handler = MemoryHandler(n_logs=10)
    logger.addHandler(mem_handler)
    logger.info('hello')
    do_stuff()
    print(mem_handler.to_string())
The test module I am importing do_stuff from looks like this:
import logging
logger = logging.getLogger(__name__)
def do_stuff():
    logger.info('doing stuff')
When I run the main script, two log statements appear: the one from main and the one from do_stuff. But the memory handler only receives "hello", not "doing stuff":
INFO:__main__:hello
INFO:test_module:doing stuff
hello
I assume this is because mem_handler is not added to the test_module logger. I can fix it by adding the mem_handler explicitly:
logging.getLogger('test_module').addHandler(mem_handler)
But in general I don't want to list all modules and add the mem_handler manually. How can I add the mem_handler to all loggers in my project?
The Python logging system is federated: there is a tree-like structure that mirrors the package structure. The structure works by logger name, and the levels are separated by dots.
If you use the module's __name__ to get the logger, the name will be equivalent to the dotted path of the module, for example:
package.subpackage.module
In this federated system, a message is sent up the logger hierarchy (unless one of the loggers is explicitly configured with propagate=False).
So, the best way to add a handler is to add it to the root logger on the top of the structure and make sure all loggers below propagate.
You can get the root logger with logging.getLogger() (without any name) and then add handlers or other configuration as you like.
I am currently designing an application where calling a specific function "export_log" will output all the previous log history regarding user interaction with the app from the start. I initially stored all the logs in lists and then outputted to a file, but I was looking for a better way. I came across the Python logging module which seems perfect for my needs, however I'm a bit unfamiliar regards to getting it to work properly.
Below is my minimum working example. func1 is basically the function that does the logging; it outputs "foo" every time it is called by func2. At the moment, a blank file is created the first time the script runs, and running the script again appends to that file, which is not what I expect. I want to store the whole history of output generated by func1.
What should I do to make sure that export_log function outputs a log file only when it is called on demand?
import logging

logger = logging.getLogger('log_file')
logger.setLevel(logging.INFO)

def export_log():
    fh = logging.FileHandler('logFile.log')
    fh.setLevel(logging.INFO)
    logger.addHandler(fh)

# run 3 times
def func1():
    logger.info("foo")

def func2():
    for i in range(3):
        func1()

func2()
export_log()
# Desired output upon calling the export_log() function:
# foo
# foo
# foo
# Actual output:
# File is initially blank after running script once.
# always appends "foo" every time the script is run again
Edit:
The export_log function is required because it is called by a Tkinter button: Button(controls, text="Export", command = export_log). I basically want to output to file all the log history only when the Button is clicked.
In this case a possible solution is to log into an io.StringIO() buffer and, when the button is pressed, write its contents to a file:
import io
import logging
import shutil

stream = io.StringIO()
handler = logging.StreamHandler(stream)
logger = logging.getLogger("mylogger")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

def export_log():
    handler.flush()
    with open("logFile.log", "w") as fd:
        stream.seek(0)
        shutil.copyfileobj(stream, fd)

# ...
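A related alternative, if it fits your needs: the standard library ships logging.handlers.MemoryHandler, which buffers records in memory and writes them to a target handler only when flushed. A sketch (file and logger names are illustrative):

```python
import logging
import logging.handlers

logger = logging.getLogger("buffered")
logger.setLevel(logging.INFO)

# buffer up to 1000 records; delay=True means the file is not even
# opened until something is actually written to it
target = logging.FileHandler("logFile.log", mode="w", delay=True)
mem = logging.handlers.MemoryHandler(
    capacity=1000, flushLevel=logging.ERROR, target=target)
logger.addHandler(mem)

logger.info("foo")   # buffered in memory, nothing written yet

def export_log():
    mem.flush()      # now the buffered records are written to logFile.log
```

Note that MemoryHandler also flushes automatically when the buffer fills up or when a record at flushLevel (ERROR by default) arrives, which may or may not be what you want here.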
I'm seeing a behavior that I have no way of explaining... Here's my simplified setup:
module x:
import logging
logger = logging.getLogger('x')
def test_debugging():
    logger.debug('Debugging')
test for module x:
import logging
import unittest
from x import test_debugging
class TestX(unittest.TestCase):
    def test_test_debugging(self):
        test_debugging()

if __name__ == '__main__':
    logger = logging.getLogger('x')
    logger.setLevel(logging.DEBUG)
    # logging.debug('another test')
    unittest.main()
If I uncomment the logging.debug('another test') line, I can also see the log from x. Note, it is not a typo: I'm calling debug on logging, not on the logger from module x. And if I call debug on logger, I don't see the logs.
What is this, I can't even?..
In your setup, you didn't actually configure logging. Although the configuration can be pretty complex, it would suffice to set the log level in your example:
if __name__ == '__main__':
    # note I configured logging, setting e.g. the level globally
    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger('x')
    logger.setLevel(logging.DEBUG)
    unittest.main()
This will create a simple StreamHandler with a predefined output format that prints all the log records to stderr. I suggest you quickly look over the relevant docs for more info.
Why did it work with the logging.debug call? Because the module-level logging.{info,debug,warning,error} functions all call logging.basicConfig() internally when the root logger has no handlers, so once you called logging.debug, you had configured logging implicitly.
Edit: let's take a quick look under the hood at what the logging.{info,debug,error,warning} functions actually do. Take the following snippet:
import logging
logger = logging.getLogger('mylogger')
logger.warning('hello world')
If you run the snippet, hello world will not be handled by any configured handler (on Python 3.2+ a "last resort" handler still writes the bare message to stderr; on older versions nothing is printed at all), and this is correct! Why? Because you didn't actually specify how the log records should be treated: should they be printed to stdout, or written to a file, or maybe sent to some server that will email them to the recipients? The logger mylogger will receive the log record hello world, but it doesn't know yet what to do with it. So, to actually print the record, let's do some configuration for the logger:
import logging
logger = logging.getLogger('mylogger')
formatter = logging.Formatter('Logger received message %(message)s at time %(asctime)s')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.warning('hello world')
We have now attached a handler that handles each record by printing it to stderr (the default stream of StreamHandler) in the format specified by formatter. Now the record hello world will be printed. We could attach more handlers, and the record would be handled by each of them. Example: attach another StreamHandler and you will notice that the record is now printed twice.
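That "handled by each handler" behaviour is easy to verify with a small sketch that logs into a StringIO buffer instead of the console (names are illustrative):

```python
import io
import logging

logger = logging.getLogger('dupdemo')
logger.setLevel(logging.INFO)
logger.propagate = False   # keep the root logger out of this demo

buf = io.StringIO()
# two handlers on the same logger: each one handles every record
logger.addHandler(logging.StreamHandler(buf))
logger.addHandler(logging.StreamHandler(buf))

logger.warning('hello world')
# buf now contains the message twice, once per handler
```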
So, what's with the logging functions now? If you have some simple program that has only one logger that should print the messages and that's all, you can replace the manual configuration by using convenience logging functions:
import logging
logging.warning('hello world')
This will configure the root logger to print messages to stderr by adding a StreamHandler to it with a default formatter, so you don't have to configure it yourself. After that, it will tell the root logger to process the record hello world. Merely a convenience, nothing more. If you want to explicitly trigger this basic configuration of the root logger, issue
logging.basicConfig()
with or without the additional configuration parameters.
Now, let's go through my first code snippet once again:
logging.basicConfig(level=logging.DEBUG)
After this line, the root logger will print all log records with level DEBUG and higher to the command line.
logger = logging.getLogger('x')
logger.setLevel(logging.DEBUG)
We did not attach any handler to this logger, so why are the records still being printed? This is because, by default, any logger propagates its log records up to the root logger. So the logger x does not print the records itself (it has not been configured for that), but it passes each record further up to the root logger, which knows how to print it.
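A minimal sketch of that propagation behaviour (logger names are illustrative):

```python
import logging

# configure only the root logger
logging.basicConfig(level=logging.DEBUG)

child = logging.getLogger('x')   # has no handlers of its own
child.setLevel(logging.DEBUG)
child.debug('printed via the root handler')   # propagates up to the root

child.propagate = False
child.debug('this record is silently dropped')  # no handler sees it now
```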
I want to write a logger which I can use in multiple modules. I must be able to enable and disable it from one place. And it must be reusable.
Following is the scenario.
switch_module.py
class Brocade(object):
    def __init__(self, ip, username, password):
        ...

    def connect(self):
        ...

    def disconnect(self):
        ...

    def switch_show(self):
        ...
switch_module_library.py
import switch_module
class Keyword_Mapper(object):
    def __init__(self, keyword_to_execute):
        self._brocade_object = switch_module.Brocade(ip, username, password)
        ...

    def map_keyword_to_command(self):
        ...
application_gui.py
class GUI:
    # I can open a file containing keywords for the Brocade switch
    # in this GUI in a tab and tree widget (it uses PyQt, which I don't know).
    # Each tab is a QThread and there could be multiple tabs.
    # Each tab is accompanied by an execute button.
    # On pressing execute it will read the strings/keywords from the file,
    # create an object of the Keyword_Mapper class and call its
    # map_keyword_to_command method, execute the command on the Brocade
    # switch and log the results. Currently I am logging the result
    # only from the Keyword_Mapper class.
The problem I have is how to write a logger that can be enabled and disabled at will, and that logs to one file as well as to the console, from all three modules.
I tried writing a global logger in __init__.py and importing it in all three modules, giving it a common name so that they all log to the same file. But then I ran into trouble because of the multi-threading, and later created a logger that logs to a file with the thread id in its name, so that I get one log per thread.
What if I am required to log to a different file rather than the same one?
What if I am required to log to different file rather than the same file?
I have gone through the Python logging documentation but am unable to get a clear picture of how to write a proper, reusable logging system.
I have gone through this link
Is it better to use root logger or named logger in Python
but due to the GUI being created by someone other than me using PyQt, and the multi-threading, I am unable to get my head around logging here.
In my project I only use the root logger (I don't have the time to create named loggers, even if it would be nice). So if you don't want to use a named logger, here is a quick solution:
I created a function to set up logger quickly:
import logging
def initLogger(level=logging.DEBUG):
    if level == logging.DEBUG:
        # Display more detail when in debug mode
        logging.basicConfig(
            format='%(levelname)s-%(module)s:%(lineno)d-%(funcName)s: %(message)s',
            level=level)
    else:
        # Display less detail in info mode
        logging.basicConfig(format='%(levelname)s: %(message)s', level=level)
I created a package for it so that I can import it anywhere.
Then, in my top level I have:
import logging
import LoggingTools

if __name__ == '__main__':
    # Configure logger
    LoggingTools.initLogger(logging.DEBUG)
    #LoggingTools.initLogger(logging.INFO)
Depending on whether I am debugging or not, I use the corresponding statement.
Then in each other files, I just use the logging:
import logging

class MyClass():
    def __init__(self):
        logging.debug("Debug message")
        logging.info("Info message")
Is there a better way to print only when a file is run as a script, i.e. when __name__ == '__main__'?
I have some scripts that I also import and use parts of.
Something like the below will work but is ugly, and would have to be defined in each script separately:
def printif(s):
    if globals()['__name__'] == '__main__':
        print(s)
    return
I looked briefly at some of Python's logging libraries but would prefer a lighter solution...
edit:
I ended up doing something like this:
# mylog.py
import sys
import logging
log = logging.getLogger()
#default logging level
log.setLevel(logging.WARNING)
log.addHandler(logging.StreamHandler(sys.stdout))
And from the script:
from mylog import log
...
log.info(...)
log.warning(...)
...
if __name__ == '__main__':
    # override when run as a script
    log.setLevel(logging.INFO)
This scheme has minimal code duplication, per script log levels, and a project-wide default level...which is exactly what I wanted.
run_as_script = False

def printif(s):
    if run_as_script:
        print(s)
    return

if __name__ == '__main__':
    run_as_script = True
In light of user318904's comment on my other answer, I'll provide an alternative (although this may not work in all cases, it might just be "good enough").
For a separate module:
import sys
def printif(s):
    if sys.argv[0] != '':
        print(s)
Using a logging library is really not that heavyweight:
import logging
log = logging.getLogger('myscript')
def dostuff(...):
    ....
    log.info('message!')
    ...

if __name__ == '__main__':
    import sys
    log.setLevel(logging.INFO)
    log.addHandler(logging.StreamHandler(sys.stdout))
    ...
One wart is the "No handlers could be found for logger "myscript"" message that logging prints by default if you import this module (rather than run it as a script) and call your function without setting up logging. It'll be gone in Python 3.2. For Python 2.7, you can shut it off by adding
log.addHandler(logging.NullHandler())
at the top, and for older versions you'd have to define a NullHandler class like this:
class NullHandler(logging.Handler):
    def emit(self, record):
        pass
Looking back at all this, I say: go with Gerrat's suggestion. I'll leave mine here, for completeness.