How many times was logging.error() called?

Maybe it just doesn't exist, as I cannot find it. But using Python's logging package, is there a way to query a Logger to find out how many times a particular function was called? For example, how many errors/warnings were reported?

The logging module doesn't appear to support this. In the long run you'd probably be better off creating a new module and adding this feature by subclassing the items in the existing logging module, but you could also achieve this behavior pretty easily with a decorator:
class CallCounted:
    """Decorator to determine number of calls for a method"""
    def __init__(self, method):
        self.method = method
        self.counter = 0

    def __call__(self, *args, **kwargs):
        self.counter += 1
        return self.method(*args, **kwargs)

import logging
logging.error = CallCounted(logging.error)
logging.error('one')
logging.error('two')
print(logging.error.counter)
Output:
ERROR:root:one
ERROR:root:two
2

You can also add a new Handler to the logger which counts all calls:
class MsgCounterHandler(logging.Handler):
    level2count = None

    def __init__(self, *args, **kwargs):
        super(MsgCounterHandler, self).__init__(*args, **kwargs)
        self.level2count = {}

    def emit(self, record):
        l = record.levelname
        if l not in self.level2count:
            self.level2count[l] = 0
        self.level2count[l] += 1
You can then use the dict afterwards to output the number of calls.
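For instance, a brief usage sketch (the wiring below is illustrative, not part of the original answer):

import logging

counter_handler = MsgCounterHandler()
logging.getLogger().addHandler(counter_handler)

logging.warning('something looks off')
logging.error('something failed')

print(counter_handler.level2count)  # e.g. {'WARNING': 1, 'ERROR': 1}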

There's a warnings module that, to an extent, does some of that.
You might want to add this counting feature to a customized Handler. The problem is that there are a million handlers and you might want to add it to more than one kind.
You might want to add it to a Filter, since that's independent of the Handlers in use.
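For illustration, a minimal counting Filter might look like this (the CounterFilter name and counts attribute are made up here):

import logging

class CounterFilter(logging.Filter):
    """Counts records per level without suppressing any of them."""
    def __init__(self):
        super().__init__()
        self.counts = {}

    def filter(self, record):
        self.counts[record.levelname] = self.counts.get(record.levelname, 0) + 1
        return True  # always let the record through

counting = CounterFilter()
logging.getLogger().addFilter(counting)
logging.error('boom')
print(counting.counts)  # {'ERROR': 1}

Note that a filter attached to a logger only sees records logged directly on that logger; attach it to a handler if it should also see records propagated from child loggers.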

Based on Rolf's answer and how to write a dictionary to a file, here is another solution which stores the counts in a json file. If the json file already exists and continue_counts=True, the counts are restored on initialisation.
import json
import logging
import logging.handlers
import os

class MsgCounterHandler(logging.Handler):
    """
    A handler class which counts the logging records by level and
    periodically writes the counts to a json file.
    """
    level2count_dict = None

    def __init__(self, filename, continue_counts=True, *args, **kwargs):
        """
        Initialize the handler.

        PARAMETER
        ---------
        continue_counts: bool, optional
            defines if the counts should be loaded and restored if the json
            file exists already.
        """
        super(MsgCounterHandler, self).__init__(*args, **kwargs)
        filename = os.fspath(filename)
        self.baseFilename = os.path.abspath(filename)
        self.continue_counts = continue_counts
        # if another instance of this class is created, reuse the existing counts
        if self.level2count_dict is None:
            self.level2count_dict = self.load_counts_from_file()

    def emit(self, record):
        """
        Count a record. If its level is not in the dict yet, add it,
        then write the updated counts to the json file.
        """
        level = record.levelname
        if level not in self.level2count_dict:
            self.level2count_dict[level] = 0
        self.level2count_dict[level] += 1
        self.flush()

    def flush(self):
        """
        Write the count dictionary to the json file.
        """
        self.acquire()
        try:
            with open(self.baseFilename, 'w') as f:
                json.dump(self.level2count_dict, f)
        finally:
            self.release()

    def load_counts_from_file(self):
        """
        Load the dictionary from a json file or create an empty dictionary.
        """
        if os.path.exists(self.baseFilename) and self.continue_counts:
            try:
                with open(self.baseFilename) as f:
                    level2count_dict = json.load(f)
            except Exception as e:
                logging.warning(f'Failed to load counts with: {e}')
                level2count_dict = {}
        else:
            level2count_dict = {}
        return level2count_dict
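A brief usage sketch (the file name is illustrative):

import logging

count_handler = MsgCounterHandler('log_counts.json')
logging.getLogger().addHandler(count_handler)

logging.error('something failed')
# log_counts.json now contains something like {"ERROR": 1}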

Related

Does pydispatcher run the handler function in a background thread?

Upon looking up event handler modules, I came across pydispatcher, which seemed beginner friendly. My use case for the library is that I want to send a signal if my queue size is over a threshold. The handler function can then start processing and removing items from the queue (and subsequently do a bulk insert into the database).
I would like the handler function to run in the background. I am aware that I can simply overwrite the queue.append() method checking for the queue size and calling the handler function asynchronously, but I would like to implement the listener-dispatcher model to keep the logic clean and separated.
Does pydispatcher do this out of the box? If not, is there another module that can help me do this? Would I need to manage the access to the queue, since there might be multiple threads processing and appending to the queue at the same time?
Note that in my use case there is only a single dispatcher and event handler.
I've recently released the Akuanduba module, which may help you with this task. There's a single example in the repository that may help you understand how it works, and it seems similar to what you want.
Anyway, I'll try to explain here a way of implementing your code with Akuanduba:
First you could make a data frame that would hold your queue:
# Mandatory imports
from Akuanduba.core.messenger.macros import *
from Akuanduba.core.constants import *
from Akuanduba.core import NotSet, AkuandubaDataframe

# Your imports go here:
from queue import Queue

class MyQueue (AkuandubaDataframe):

    def __init__(self, name):
        # Mandatory stuff
        AkuandubaDataframe.__init__(self, name)
        self.__queue = Queue()

    def getQueue(self):
        return self.__queue

    def putQueue(self, val):
        self.__queue.put(val)

    def getQueueSize(self):
        return self.__queue.qsize()

    #
    # "toRawObj" method is a mandatory method that delivers a dict with
    # the desired data for file saving
    #
    def toRawObj(self):
        d = {
            "Queue": self.getQueue(),
        }
        return d
Then you could make a TriggerCondition that would check the queue size:
from Akuanduba.core import StatusCode, NotSet, StatusTrigger
from Akuanduba.core.messenger.macros import *
from Akuanduba.core import TriggerCondition
import time

class CheckQueueSize (TriggerCondition):

    def __init__(self, name, maxSize):
        TriggerCondition.__init__(self, name)
        self._name = name
        self._maxSize = maxSize

    def initialize(self):
        return StatusCode.SUCCESS

    def execute(self):
        size = self.getContext().getHandler("MyQueue").getQueueSize()
        if size > self._maxSize:  # compare against the threshold given in __init__
            return StatusTrigger.TRIGGERED
        else:
            return StatusTrigger.NOT_TRIGGERED

    def finalize(self):
        return StatusCode.SUCCESS
Make a tool that would be your handler function:
# Mandatory imports
from Akuanduba.core import AkuandubaTool, StatusCode, NotSet, retrieve_kw
# Your imports go here:

class SampleTool(AkuandubaTool):

    def __init__(self, name, **kw):
        # Mandatory stuff
        AkuandubaTool.__init__(self, name)

    def initialize(self):
        # Lock the initialization. After that, this tool cannot be initialized again
        self.init_lock()
        return StatusCode.SUCCESS

    def execute(self, context):
        #
        # DO SOMETHING HERE
        #
        # Always return SUCCESS
        return StatusCode.SUCCESS

    def finalize(self):
        self.fina_lock()
        return StatusCode.SUCCESS
And finally, make a main script in order to make it all work together:
# Akuanduba imports
from Akuanduba.core import Akuanduba, LoggingLevel, AkuandubaTrigger
from Akuanduba import ServiceManager, ToolManager, DataframeManager

# This sample's imports
import MyQueue, CheckQueueSize, SampleTool

# Creating your handler
your_handler = SampleTool("Your Handler's name")

# Creating dataframes
queue = MyQueue("MyQueue")

# Creating trigger
trigger = AkuandubaTrigger("Sample Trigger Name", triggerType='or')

# Append conditions and tools to the trigger just by adding them.
# Tools appended to the trigger will only run when the trigger is
# StatusTrigger.TRIGGERED, and will run in the order they've been appended
trigger += CheckQueueSize("CheckQueueSize condition", MAX_QUEUE_SIZE)
trigger += your_handler

# Creating Akuanduba
manager = Akuanduba("Akuanduba", level=LoggingLevel.INFO)

# Appending tools
#
# ToolManager += TOOL_1
# ToolManager += TOOL_2
#
ToolManager += trigger

# Appending dataframes
DataframeManager += queue

# Initializing
manager.initialize()
manager.execute()
manager.finalize()
That way, you'd have clean and separated code.

How to buffer logs from multithreaded function calls so that they can be logged in the order the functions finish?

the problem
I'm trying to use the concurrent.futures library to run a function on a list of "things". The code looks something like this.
import concurrent.futures
import logging

logger = logging.getLogger(__name__)

def process_thing(thing, count):
    logger.info(f'starting processing for thing {count}')
    # Do some io related stuff
    logger.info(f'finished processing for thing {count}')

def process_things_concurrently(things):
    with concurrent.futures.ThreadPoolExecutor() as executor:
        futures = []
        for count, thing in enumerate(things):
            futures.append(executor.submit(process_thing, thing, count))
        for future in concurrent.futures.as_completed(futures):
            future.result()
As the code is now, the logging can happen in any order.
For example:
starting processing for thing 2
starting processing for thing 1
finished processing for thing 2
finished processing for thing 1
I want to change the code so that the records for a particular call of process_thing() are buffered until the future finishes.
In other words, all of the records for a particular call stick together. These 'groups' of records are ordered by when the call finished.
So from the example above, the log output would instead look like:
starting processing for thing 2
finished processing for thing 2
starting processing for thing 1
finished processing for thing 1
what I've tried
I tried making a logger for each call that would have its own custom handler, possibly subclassing BufferingHandler. But eventually there will be lots of "things" and I read that making a lot of loggers is bad.
I'm open to anything that works! Thanks.
Here's a little recipe for a DelayedLogger class that puts all calls to the logger's methods into a list instead of actually performing them, until you finally do a flush, at which point they are all fired off.
from functools import partial

class DelayedLogger:
    def __init__(self, logger):
        self.logger = logger
        self._call_stack = []  # list of (method, args, kwargs) tuples
        self._delayed_methods = {
            name: partial(self._delayed_method_proxy, getattr(logger, name))
            for name in ["info", "debug", "warning", "error", "critical"]
        }

    def __getattr__(self, name):
        """ Proxy getattr to self.logger, except for self._delayed_methods. """
        return self._delayed_methods.get(name, getattr(self.logger, name))

    def _delayed_method_proxy(self, method, *args, **kwargs):
        self._call_stack.append((method, args, kwargs))

    def flush(self):
        """ Flush self._call_stack to the real logger. """
        for method, args, kwargs in self._call_stack:
            method(*args, **kwargs)
        self._call_stack = []
In your example, you could use it like so:
import logging
logger = logging.getLogger(__name__)

def process_thing(thing, count):
    dlogger = DelayedLogger(logger)
    dlogger.info(f'starting processing for thing {count}')
    # Do some io related stuff
    dlogger.info(f'finished processing for thing {count}')
    dlogger.flush()

process_thing(None, 10)
There may be ways to beautify this or make it more compact, but it should get the job done if that's what you really want.
First I modified @Jeronimo's answer to come up with this:
import typing
from functools import partial

class DelayedLogger:
    class ThreadLogger:
        """to be logged from a single thread"""
        def __init__(self, logger):
            self._call_stack = []  # list of (method, args, kwargs) tuples
            self.logger = logger
            self._delayed_methods = {
                name: partial(self._delayed_method_proxy, getattr(logger, name))
                for name in ["info", "debug", "warning", "error", "critical"]
            }

        def __getattr__(self, name):
            """ Proxy getattr to self.logger, except for self._delayed_methods. """
            return self._delayed_methods.get(name, getattr(self.logger, name))

        def _delayed_method_proxy(self, method, *args, **kwargs):
            self._call_stack.append((method, args, kwargs))

        def flush(self):
            """ Flush self._call_stack to the real logger. """
            for method, args, kwargs in self._call_stack:
                method(*args, **kwargs)
            self._call_stack = []

    def __init__(self, logger):
        self.logger = logger
        self._thread_loggers: typing.Dict[int, "DelayedLogger.ThreadLogger"] = {}

    def new_thread(self, count):
        """Make a new sub-logger that writes to the call stack in its slot"""
        new_logger = self.ThreadLogger(self.logger)
        self._thread_loggers[count] = new_logger
        return new_logger

    def get_thread(self, count):
        return self._thread_loggers[count]
Which can be used like this
delayed_logger = DelayedLogger(logger)
with concurrent.futures.ThreadPoolExecutor() as executor:
    futures = []
    for count, thing in enumerate(things):
        futures.append(executor.submit(process_thing,
                                       count,
                                       thing,
                                       logger=delayed_logger.new_thread(count)))
    for future in concurrent.futures.as_completed(futures):
        count = future.result()
        delayed_logger.get_thread(count).flush()
The problem here is that process_thing() now needs to take the logger as an argument, and the logger is limited in scope. If process_thing() calls subroutines, then their logging won't be delayed.
Probably the solution is just to not try to do this at all. Instead threads can make a log filter or some other way to distinguish their messages.
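For example, a minimal sketch of that simpler route, tagging records with the thread name instead of buffering them (the format string is just one option):

import concurrent.futures
import logging

# include the thread name so interleaved lines can be grouped afterwards
logging.basicConfig(format='%(threadName)s %(levelname)s %(message)s',
                    level=logging.INFO)
logger = logging.getLogger(__name__)

def process_thing(thing, count):
    logger.info('starting processing for thing %s', count)
    logger.info('finished processing for thing %s', count)

with concurrent.futures.ThreadPoolExecutor() as executor:
    for count, thing in enumerate(['a', 'b', 'c']):
        executor.submit(process_thing, thing, count)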

Thread-local Python Print

Are there any built-in ways to have different threads have different destinations for print() and similar?
I'm exploring the creation of an interactive Python environment, so I can't just use print() from module spamegg. It has to be the globally available one with no arguments.
You can replace sys.stdout with an object that checks the current thread and writes to the appropriate file:
import sys, threading

class CustomOutput(object):
    def __init__(self):
        # the "softspace" slot is used internally by Python's print
        # to keep track of whether to prepend space to the
        # printed expression
        self.softspace = 0
        self._old_stdout = None

    def activate(self):
        self._old_stdout = sys.stdout
        sys.stdout = self

    def deactivate(self):
        sys.stdout = self._old_stdout
        self._old_stdout = None

    def write(self, s):
        # actually write to an open file obtained from an attribute
        # on the current thread
        threading.current_thread().open_file.write(s)

    def writelines(self, seq):
        for s in seq:
            self.write(s)

    def close(self):
        pass

    def flush(self):
        pass

    def isatty(self):
        return False
It is possible to do what you're asking, although it's complicated and clunky and possibly not portable, and I don't think it's what you want to do.
Your objection to just using spamegg.print is:
I'm exploring the creation of an interactive Python environment, so I can't just use print() from module spamegg. It has to be the globally available one with no arguments.
But the solution to that is easy: Just use print from module spamegg in your code, and from spamegg import print in the interactive interpreter. That's all there is to it.
For that matter, there's no good reason this even needs to be called print in the first place. If all of your code used some other output function with a different name, you could do the same thing in the interactive interpreter.
But how does that let each thread have a different destination?
The easy way to do that is to just look up the destination in a threading.local().
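A minimal sketch of that idea (the names _destinations and set_print_target are made up here):

import sys
import threading

_destinations = threading.local()

def set_print_target(fileobj):
    """Call from each thread to route its print() output."""
    _destinations.target = fileobj

class ThreadLocalStdout:
    """Looks up the current thread's destination on every write."""
    def write(self, s):
        getattr(_destinations, 'target', sys.__stdout__).write(s)

    def flush(self):
        getattr(_destinations, 'target', sys.__stdout__).flush()

sys.stdout = ThreadLocalStdout()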
But if you really want to do both parts of this the hard way, you can.
To do the global print the hard way, you can either have spamegg replace the builtin print instead of just giving you a way to shadow it, or have it replace sys.stdout, so the builtin print with default arguments will print somewhere else.
import builtins

_real_print = builtins.print

def _print(*args, **kwargs):
    # my_output_destination is a placeholder for wherever output should go
    kwargs.setdefault('file', my_output_destination)
    _real_print(*args, **kwargs)

builtins.print = _print
import io
import sys

class MyStdOut(io.TextIOBase):
    ...  # __init__, write, etc. go here

sys.stdout = MyStdOut()
That still requires having MyStdOut use a thread-local target.
Alternatively, you can compile or wrap each thread function in its own custom globals environment that replaces __builtins__ and/or sys from the default, allowing you to give a different one to each thread from the start. For example:
from functools import partial
from threading import Thread
from types import FunctionType

class MyThread(Thread):
    def __init__(self, group=None, target=None, *args, **kwargs):
        if target:
            g = target.__globals__.copy()
            g['__builtins__'] = g['__builtins__'].copy()
            # make_output_for_new_thread is a placeholder for your own factory
            output = make_output_for_new_thread()
            g['__builtins__']['print'] = partial(print, file=output)
            target = FunctionType(target.__code__, g, target.__name__,
                                  target.__defaults__, target.__closure__)
        super().__init__(group, target, *args, **kwargs)
I might have a solution for you, but it's quite a bit more complicated than just print.
class ClusteredLogging(object):
    '''
    Gathers all logs performed inside a with statement and flushes
    them to mainHandler at once on exit.

    Good for multithreaded applications that have to log several
    lines at once.
    '''
    def __init__(self, mainHandler, formatter):
        self.mainHandler = mainHandler
        self.formatter = formatter
        self.buffer = StringIO()
        self.handler = logging.StreamHandler(self.buffer)
        self.handler.setFormatter(formatter)

    def __enter__(self):
        rootLogger = logging.getLogger()
        rootLogger.addHandler(self.handler)

    def __exit__(self, t, value, tb):
        rootLogger = logging.getLogger()
        rootLogger.removeHandler(self.handler)
        self.handler.flush()
        self.buffer.flush()
        rootLogger.addHandler(self.mainHandler)
        logging.info(self.buffer.getvalue().strip())
        rootLogger.removeHandler(self.mainHandler)
Using this, you can create a log handler for each thread and configure them to store logs in different locations.
Keep in mind that this was developed with a slightly different goal in mind (see comments), but you can adapt it by taking the handler-juggling feature of ClusteredLogging as a starting point.
And some test code:
import concurrent.futures
try:
    from StringIO import StringIO
except ImportError:
    from io import StringIO
import logging
import sys

# put ClusteredLogging here

if __name__ == "__main__":
    formatter = logging.Formatter('%(asctime)s %(levelname)8s\t%(message)s')
    onlyMessageFormatter = logging.Formatter("%(message)s")

    mainHandler = logging.StreamHandler(sys.stdout)
    mainHandler.setFormatter(onlyMessageFormatter)
    mainHandler.setLevel(logging.DEBUG)

    rootLogger = logging.getLogger()
    rootLogger.setLevel(logging.DEBUG)

    def logSomethingLong(label):
        with ClusteredLogging(mainHandler, formatter):
            for i in range(15):
                logging.info(label + " " + str(i))

    labels = ("TEST", "EXPERIMENT", "TRIAL")
    executor = concurrent.futures.ProcessPoolExecutor()
    futures = [executor.submit(logSomethingLong, label) for label in labels]
    concurrent.futures.wait(futures)

Logging variable data with new format string

I use the logging facility with Python 2.7.3. The documentation for this Python version says:
the logging package pre-dates newer formatting options such as str.format() and string.Template. These newer formatting options are supported...
I like the 'new' format with curly braces. So I'm trying to do something like:
log = logging.getLogger("some.logger")
log.debug("format this message {0}", 1)
And get an error:
TypeError: not all arguments converted during string formatting
What am I missing here?
P.S. I don't want to use
log.debug("format this message {0}".format(1))
because in this case the message is always being formatted regardless of logger level.
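To make the concern concrete, a small sketch of the difference (logger names and levels here are illustrative):

import logging
logging.basicConfig(level=logging.INFO)  # DEBUG is suppressed
log = logging.getLogger("some.logger")

# %-style: merging the message and arguments is deferred until a handler
# actually needs the text, so this suppressed record costs no formatting
log.debug("format this message %s", 1)

# str.format(): the message is built eagerly even though DEBUG is off
log.debug("format this message {0}".format(1))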
EDIT: take a look at the StyleAdapter approach in @Dunes' answer; unlike this answer, it lets you use alternative formatting styles without boilerplate when calling the logger's methods (debug(), info(), error(), etc.).
From the docs (Use of alternative formatting styles):
Logging calls (logger.debug(), logger.info() etc.) only take positional parameters for the actual logging message itself, with keyword parameters used only for determining options for how to handle the actual logging call (e.g. the exc_info keyword parameter to indicate that traceback information should be logged, or the extra keyword parameter to indicate additional contextual information to be added to the log). So you cannot directly make logging calls using str.format() or string.Template syntax, because internally the logging package uses %-formatting to merge the format string and the variable arguments. There would be no changing this while preserving backward compatibility, since all logging calls which are out there in existing code will be using %-format strings.
And:
There is, however, a way that you can use {}- and $- formatting to construct your individual log messages. Recall that for a message you can use an arbitrary object as a message format string, and that the logging package will call str() on that object to get the actual format string.
Copy-paste this into a wherever module:
class BraceMessage(object):
    def __init__(self, fmt, *args, **kwargs):
        self.fmt = fmt
        self.args = args
        self.kwargs = kwargs

    def __str__(self):
        return self.fmt.format(*self.args, **self.kwargs)
Then:
from wherever import BraceMessage as __
log.debug(__('Message with {0} {name}', 2, name='placeholders'))
Note: actual formatting is delayed until it is necessary, e.g., if DEBUG messages are not logged then the formatting is not performed at all.
Here is another option that does not have the keyword problems mentioned in Dunes' answer. It can only handle positional ({0}) arguments and not keyword ({foo}) arguments. It also does not require two calls to format (using the underscore). It does have the ick-factor of subclassing str:
import logging

class BraceString(str):
    def __mod__(self, other):
        return self.format(*other)

    def __str__(self):
        return self

class StyleAdapter(logging.LoggerAdapter):
    def __init__(self, logger, extra=None):
        super(StyleAdapter, self).__init__(logger, extra)

    def process(self, msg, kwargs):
        if kwargs.pop('style', "%") == "{":  # optional
            msg = BraceString(msg)
        return msg, kwargs
You use it like this:
logger = StyleAdapter(logging.getLogger(__name__))
logger.info("knights:{0}", "ni", style="{")
logger.info("knights:{}", "shrubbery", style="{")
Of course, you can remove the check noted with # optional to force all messages through the adapter to use new-style formatting.
Note for anyone reading this answer years later: Starting with Python 3.2, you can use the style parameter with Formatter objects:
Logging (as of 3.2) provides improved support for these two additional formatting styles. The Formatter class has been enhanced to take an additional, optional keyword parameter named style. This defaults to '%', but other possible values are '{' and '$', which correspond to the other two formatting styles. Backwards compatibility is maintained by default (as you would expect), but by explicitly specifying a style parameter, you get the ability to specify format strings which work with str.format() or string.Template.
The docs provide the example
logging.Formatter('{asctime} {name} {levelname:8s} {message}', style='{')
Note that in this case you still can't call the logger with the new format. I.e., the following still won't work:
logger.info("knights:{say}", say="ni") # Doesn't work!
logger.info("knights:{0}", "ni") # Doesn't work either
This was my solution to the problem when I found that logging only uses printf-style formatting. It allows logging calls to remain the same, with no special syntax such as log.info(__("val is {}", "x")). The change required to the code is to wrap the logger in a StyleAdapter.
import logging
from inspect import getargspec

class BraceMessage(object):
    def __init__(self, fmt, args, kwargs):
        self.fmt = fmt
        self.args = args
        self.kwargs = kwargs

    def __str__(self):
        return str(self.fmt).format(*self.args, **self.kwargs)

class StyleAdapter(logging.LoggerAdapter):
    def __init__(self, logger):
        self.logger = logger

    def log(self, level, msg, *args, **kwargs):
        if self.isEnabledFor(level):
            msg, log_kwargs = self.process(msg, kwargs)
            self.logger._log(level, BraceMessage(msg, args, kwargs), (),
                             **log_kwargs)

    def process(self, msg, kwargs):
        return msg, {key: kwargs[key]
                     for key in getargspec(self.logger._log).args[1:] if key in kwargs}
Usage is:
log = StyleAdapter(logging.getLogger(__name__))
log.info("a log message using {type} substitution", type="brace")
It's worth noting that this implementation has problems if keywords used for brace substitution include level, msg, args, exc_info, extra or stack_info. These are argument names used by the log method of Logger. If you need to use one of these names then modify process to exclude them, or just remove log_kwargs from the _log call. On a further note, this implementation also silently ignores misspelled keywords meant for the Logger (e.g. ectra).
An easier solution would be to use the excellent logbook module:
import logbook
import sys
logbook.StreamHandler(sys.stdout).push_application()
logbook.debug('Format this message {k}', k=1)
Or the more complete:
>>> import logbook
>>> import sys
>>> logbook.StreamHandler(sys.stdout).push_application()
>>> log = logbook.Logger('MyLog')
>>> log.debug('Format this message {k}', k=1)
[2017-05-06 21:46:52.578329] DEBUG: MyLog: Format this message 1
UPDATE: There is now a package on PyPI called bracelogger that implements the solution detailed below.
To enable brace-style formatting for log messages, we can monkey-patch a bit of the logger code.
The following patches the logging module to create a get_logger function that will return a logger that uses the new-style formatting for every log record that it handles.
import functools
import logging
import types

def _get_message(record):
    """Replacement for logging.LogRecord.getMessage
    that uses the new-style string formatting for
    its messages"""
    msg = str(record.msg)
    args = record.args
    if args:
        if not isinstance(args, tuple):
            args = (args,)
        msg = msg.format(*args)
    return msg

def _handle_wrap(fcn):
    """Wrap the handle function to replace the passed in
    record's getMessage function before calling handle"""
    @functools.wraps(fcn)
    def handle(record):
        record.getMessage = types.MethodType(_get_message, record)
        return fcn(record)
    return handle

def get_logger(name=None):
    """Get a logger instance that uses new-style string formatting"""
    log = logging.getLogger(name)
    if not hasattr(log, "_newstyle"):
        log.handle = _handle_wrap(log.handle)
        log._newstyle = True
    return log
Usage:
>>> log = get_logger()
>>> log.warning("{!r}", log)
<logging.RootLogger object at 0x4985a4d3987b>
Notes:
Fully compatible with normal logging methods (just replace logging.getLogger with get_logger)
Will only affect specific loggers created by the get_logger function (doesn't break 3rd party packages).
The formatting of the message is delayed until it is output (or not at all if the log message is filtered).
Args are stored on logging.LogRecord objects as usual (useful in some cases with custom log handlers).
Works in all versions of Python from 2.7 to 3.10.
Try logging.setLogRecordFactory in Python 3.2+:
import collections.abc
import logging

class _LogRecord(logging.LogRecord):
    def getMessage(self):
        msg = str(self.msg)
        if self.args:
            # collections.abc.Mapping on modern Pythons
            # (collections.Mapping was removed in 3.10)
            if isinstance(self.args, collections.abc.Mapping):
                msg = msg.format(**self.args)
            else:
                msg = msg.format(*self.args)
        return msg

logging.setLogRecordFactory(_LogRecord)
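A brief usage sketch once the factory is installed:

logging.basicConfig(level=logging.INFO)
logging.getLogger(__name__).info("now {} formatting works", "brace")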
I created a custom Formatter, called ColorFormatter, that handles the problem like this:
class ColorFormatter(logging.Formatter):
    def format(self, record):
        # previous stuff, copy from logging.py…
        try:  # Allow {} style
            message = record.getMessage()  # printf
        except TypeError:
            message = record.msg.format(*record.args)
        # later stuff…
This keeps it compatible with various libraries.
The drawback is that it is probably not performant, due to potentially attempting to format the string twice.
A similar solution to pR0Ps', wrapping getMessage in LogRecord by wrapping makeRecord (instead of handle, as in their answer) in instances of Logger that should be new-formatting-enabled:
import logging

def getLogger(name):
    log = logging.getLogger(name)

    def Logger_makeRecordWrapper(name, level, fn, lno, msg, args, exc_info,
                                 func=None, extra=None, sinfo=None):
        self = log
        record = logging.Logger.makeRecord(self, name, level, fn, lno, msg,
                                           args, exc_info, func, extra, sinfo)

        def LogRecord_getMessageNewStyleFormatting():
            self = record
            msg = str(self.msg)
            if self.args:
                msg = msg.format(*self.args)
            return msg

        record.getMessage = LogRecord_getMessageNewStyleFormatting
        return record

    log.makeRecord = Logger_makeRecordWrapper
    return log
I tested this with Python 3.5.3.
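Usage is analogous to the other answers; a quick sketch:

log = getLogger(__name__)
log.addHandler(logging.StreamHandler())
log.setLevel(logging.INFO)
log.info("new-style {} via a wrapped {}", "formatting", "makeRecord")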
I combined string.Formatter (to add a pprint.pformat type conversion) with setLogRecordFactory and setLoggerClass from logging. There's one neat trick: I create an extra nested tuple for the args argument of the Logger._log method and then unpack it in LogRecord.__init__, which avoids overriding Logger.makeRecord. Using log.f wraps every attribute (the log methods, on purpose) with use_format, so you don't have to write it explicitly. This solution is backward compatible.
from collections import namedtuple
from collections.abc import Mapping
from functools import partial
from pprint import pformat
from string import Formatter
import logging

Logger = logging.getLoggerClass()
LogRecord = logging.getLogRecordFactory()

class CustomFormatter(Formatter):
    def format_field(self, value, format_spec):
        if format_spec.endswith('p'):
            value = pformat(value)
            format_spec = format_spec[:-1]
        return super().format_field(value, format_spec)

custom_formatter = CustomFormatter()

class LogWithFormat:
    def __init__(self, obj):
        self.obj = obj

    def __getattr__(self, name):
        return partial(getattr(self.obj, name), use_format=True)

ArgsSmuggler = namedtuple('ArgsSmuggler', ('args', 'smuggled'))

class CustomLogger(Logger):
    def __init__(self, *ar, **kw):
        super().__init__(*ar, **kw)
        self.f = LogWithFormat(self)

    def _log(self, level, msg, args, *ar, use_format=False, **kw):
        super()._log(level, msg, ArgsSmuggler(args, use_format), *ar, **kw)

class CustomLogRecord(LogRecord):
    def __init__(self, *ar, **kw):
        args = ar[5]
        # RootLogger uses CustomLogRecord but not CustomLogger,
        # so only unpack an ArgsSmuggler instance
        args, use_format = args if isinstance(args, ArgsSmuggler) else (args, False)
        super().__init__(*ar[:5], args, *ar[6:], **kw)
        self.use_format = use_format

    def getMessage(self):
        return self.getMessageWithFormat() if self.use_format else super().getMessage()

    def getMessageWithFormat(self):
        msg = str(self.msg)
        args = self.args
        if args:
            fmt = custom_formatter.format
            msg = fmt(msg, **args) if isinstance(args, Mapping) else fmt(msg, *args)
        return msg

logging.setLogRecordFactory(CustomLogRecord)
logging.setLoggerClass(CustomLogger)

log = logging.getLogger(__name__)
log.info('%s %s', dict(a=1, b=2), 5)
log.f.info('{:p} {:d}', dict(a=1, b=2), 5)
Here's something real simple that works:
import logging

debug_logger: logging.Logger = logging.getLogger("app.debug")

def mydebuglog(msg: str, *args, **kwargs):
    if debug_logger.isEnabledFor(logging.DEBUG):
        debug_logger.debug(msg.format(*args, **kwargs))
Then:
mydebuglog("hello {} {val}", "Python", val="World")

pygtk gtk.Builder.connect_signals onto multiple objects?

I am updating some code from using libglade to GtkBuilder, which is supposed to be the way of the future.
With gtk.glade, you could call glade_xml.signal_autoconnect(...) repeatedly to connect signals onto objects of different classes corresponding to different windows in the program. However Builder.connect_signals seems to work only once, and (therefore) to give warnings about any handlers that aren't defined in the first class that's passed in.
I realize I can connect them manually but this seems a bit laborious. (Or for that matter I could use some getattr hackery to let it connect them through a proxy to all the objects...)
Is it a bug there's no function to hook up handlers across multiple objects? Or am I missing something?
Someone else has a similar problem http://www.gtkforums.com/about1514.html which I assume means this can't be done.
Here's what I currently have. Feel free to use it, or to suggest something better:
class HandlerFinder(object):
    """Searches for handler implementations across multiple objects.
    """
    # See <http://stackoverflow.com/questions/4637792> for why this is
    # necessary.

    def __init__(self, backing_objects):
        self.backing_objects = backing_objects

    def __getattr__(self, name):
        for o in self.backing_objects:
            if hasattr(o, name):
                return getattr(o, name)
        else:
            raise AttributeError("%r not found on any of %r"
                                 % (name, self.backing_objects))
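Since connect_signals just looks up attributes by handler name on the object it's given, the finder can be passed straight in (the window names here are illustrative):

builder.connect_signals(HandlerFinder([main_window, prefs_window]))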
I have been looking for a solution to this for some time and found that it can be done by passing a dict of all the handlers to connect_signals.
The inspect module can extract methods using inspect.getmembers(instance, predicate=inspect.ismethod). These can then be merged into a single dictionary using dict.update(), watching out for duplicate function names such as on_delete.
Example code:
import inspect
...
handlers = {}
for c in [win2, win3, win4, self]:  # self is the main window
    methods = inspect.getmembers(c, predicate=inspect.ismethod)
    handlers.update(methods)
builder.connect_signals(handlers)
This will not pick up alias method names declared using #alias. For an example of how to do that, see the code for Builder.py, at def dict_from_callback_obj.
I'm only a novice but this is what I do, maybe it can inspire;-)
I instantiate the major components from a 'control' and pass the builder object so that the instantiated object can make use of any of the builder objects (mainwindow in example) or add to the builder (aboutDialog example). I also pass a dictionary (dic) where each component adds "signals" to it.
Then the 'connect_signals(dic)' is executed.
Of course I need to do some manual signal connecting when I need to pass user arguments to the callback method, but those are few.
# modules/control.py
class Control:
    def __init__(self):
        # Load the builder obj
        guibuilder = gtk.Builder()
        guibuilder.add_from_file("gui/mainwindow.ui")
        # Create a dictionary to store signals from loaded components
        dic = {}
        # Instantiate the components...
        aboutdialog = modules.aboutdialog.AboutDialog(guibuilder, dic)
        mainwin = modules.mainwindow.MainWindow(guibuilder, dic, self)
        ...
        guibuilder.connect_signals(dic)
        del dic
# modules/aboutdialog.py
class AboutDialog:
    def __init__(self, builder, dic):
        dic["on_OpenAboutWindow_activate"] = self.on_OpenAboutWindow_activate
        self.builder = builder

    def on_OpenAboutWindow_activate(self, menu_item):
        self.builder.add_from_file("gui/aboutdialog.ui")
        self.aboutdialog = self.builder.get_object("aboutdialog")
        self.aboutdialog.run()
        self.aboutdialog.destroy()
# modules/mainwindow.py
class MainWindow:
    def __init__(self, builder, dic, controller):
        self.control = controller
        # get gui xml and/or signals
        dic["on_file_new_activate"] = self.control.newFile
        dic["on_file_open_activate"] = self.control.openFile
        dic["on_file_save_activate"] = self.control.saveFile
        dic["on_file_close_activate"] = self.control.closeFile
        ...
        # get needed gui objects
        self.mainWindow = builder.get_object("mainWindow")
        ...
Edit: an alternative way to automatically attach signals to callbacks:
Untested code
# NOTE: this sketch is meant to live inside a class (hence the self references)
def start_element(name, attrs):
    if name == "signal":
        if attrs["handler"]:
            handler = attrs["handler"]
            # Insert code to verify if handler is part of the collection
            # we want.
            self.handlerList.append(handler)

def extractSignals(uiFile):
    import xml.parsers.expat
    p = xml.parsers.expat.ParserCreate()
    p.StartElementHandler = self.start_element
    p.ParseFile(uiFile)

self.handlerList = []
extractSignals(uiFile)

for handler in handlerList:
    dic[handler] = eval(''.join(["self.", handler, "_cb"]))
builder.connect_signals({
    "on_window_destroy": gtk.main_quit,
    "on_buttonQuit_clicked": gtk.main_quit,
})
