I run this earlier in the code:
watch("httpstream")
Subsequently, any py2neo command that triggers HTTP traffic results in verbose logging. How can I stop the effect of watch() logging without creating a new Graph instance?
You don't. That is, I've not written a way to do that.
The watch function is intended only as a debugging utility for an interactive console session. You shouldn't need to use it in an application.
You can set the logging level to a higher value, for example:
import logging
logging.getLogger("httpstream").setLevel(logging.WARNING)
Get Logger Information
You can enumerate a list of all available loggers:
print(logging.Logger.manager.loggerDict.keys())
Then, you can use either
logging.getLogger("httpstream").getEffectiveLevel()
or
logging.getLogger("httpstream").isEnabledFor(logging.DEBUG)
to get the logging level.
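Putting those together, a short sketch that walks every registered logger and reports its effective level (the report layout here is just one possible presentation):
import logging

# Enumerate every registered logger and report its effective level.
for name in logging.Logger.manager.loggerDict:
    logger = logging.getLogger(name)
    print(name,
          logging.getLevelName(logger.getEffectiveLevel()),
          logger.isEnabledFor(logging.DEBUG))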
Related
From reading the docs about logging, I know I can use basicConfig to set the format of logging records, for example:
import logging
logging.basicConfig(format="%(levelname)s - %(message)s")
My question is, can I specify different formats for different levels? What I'm trying to achieve is to format logging records as Azure Pipelines logging commands, so that for example the Python code:
logging.error("some error")
Will be printed as:
##[error]some error
Now I know I can use %(levelname)s, but I don't want to rely on the correspondence between Azure Pipelines logging commands and Python's "logging" levels. For example, Python's logging has an info level, but Azure Pipelines does not.
There's an Azure doc specifically for Python logging which you might find useful. If I understand it correctly (I've never used Azure), you should be able to send it Python logs without doing anything special.
However, the log messages shown as part of this Azure chart are plainly a bit different. If you want your messages to appear like that, one option is to set some custom levels with the level-names Azure prefers. The doc you referenced has a section on that, although it warns against doing it in complex libraries.
If you want to reuse standard levels, but rename "info" -> "command" or some such, you're probably going to need a separate formatter for each such rename.
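For illustration, here is one shape such a per-level formatter could take; the ##[error]/##[warning] prefixes are assumed from the question, and the fallback format is arbitrary:
import logging

class AzureFormatter(logging.Formatter):
    # One pre-built formatter per level that needs a special prefix.
    FORMATTERS = {
        logging.ERROR: logging.Formatter("##[error]%(message)s"),
        logging.WARNING: logging.Formatter("##[warning]%(message)s"),
    }
    DEFAULT = logging.Formatter("%(levelname)s - %(message)s")

    def format(self, record):
        # Dispatch on the record's level; fall back to the default format.
        return self.FORMATTERS.get(record.levelno, self.DEFAULT).format(record)

handler = logging.StreamHandler()
handler.setFormatter(AzureFormatter())
logging.basicConfig(handlers=[handler], level=logging.DEBUG)

logging.error("some error")  # prints: ##[error]some error
This keeps the standard Python levels and only changes how each one is rendered, so nothing depends on Azure and Python agreeing on level names.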
I want to log everything:
Function entered + values of parameters + function exited
Result of every assignment or operation
etc.
Is it possible to log "everything" in a Python execution without instrumenting the code?
Since things are executing in a VM, it should be possible to configure this at the VM level (hopefully?).
I'm using PyCharm, but I could do it via the command line if necessary.
There's this existing question: How to do logging at function entry, inside and exit in Python but it doesn't address how to log the result of variable assignments.
You would need to use the trace module and/or perhaps the pdb module. They may not give you everything you need, but it would be a starting point. The logging module doesn't work at such a low level as you seem to want.
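As a rough sketch of what that machinery offers, sys.settrace (the hook the trace module builds on) can log function entry, arguments, and return values without touching the traced code; the add function here is just a stand-in:
import sys

def tracer(frame, event, arg):
    # 'call' fires on function entry; f_locals holds the arguments there.
    if event == "call":
        print("entering %s with %r" % (frame.f_code.co_name, frame.f_locals))
    # 'return' fires on function exit; arg is the return value.
    elif event == "return":
        print("leaving %s -> %r" % (frame.f_code.co_name, arg))
    return tracer  # keep tracing inside the called frame

def add(a, b):
    return a + b

sys.settrace(tracer)
add(2, 3)  # entering add with {'a': 2, 'b': 3} / leaving add -> 5
sys.settrace(None)
Capturing the result of every assignment would additionally mean handling 'line' events and diffing frame.f_locals, which is where this approach gets heavy.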
I'm using Advanced Python Scheduler in a Python script. The main program defines a log by calling logging.basicConfig with the file name of the log that I want. This log is also set to "DEBUG" as the logging level, since that's what I need at present for my script.
Unfortunately, because logging.basicConfig has been set up in this manner, apscheduler writes its log entries to the same log file. There are an awful lot of these, especially since I have one scheduled task that runs every minute.
Is there any way to redirect apscheduler's log output to another log file (without changing apscheduler's code) while using my log file for my own script? I.e. is there a way to change the file name for each module's output within my script?
I tried reading the module page and the HOWTO for logging, but could not find an answer to this.
Set the logger level for apscheduler to your desired value (e.g. WARNING, to avoid seeing DEBUG and INFO messages from apscheduler), like this:
logging.getLogger('apscheduler').setLevel(logging.WARNING)
You will still get messages for WARNING and higher severities. To direct messages from apscheduler into a separate file, use:
aplogger = logging.getLogger('apscheduler')
aplogger.propagate = False
aplogger.setLevel(logging.WARNING) # or whatever
aphandler = logging.FileHandler(...) # as per what you want
aplogger.addHandler(aphandler)
Ensure the above code is only called once (otherwise you will add multiple FileHandler instances - probably not what you want).
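One possible guard (a sketch; the 'apscheduler.log' path is a made-up placeholder) is to configure the logger only when it has no handlers yet:
import logging

aplogger = logging.getLogger('apscheduler')
if not aplogger.handlers:  # only configure on the first call
    aplogger.propagate = False
    aplogger.setLevel(logging.WARNING)
    aplogger.addHandler(logging.FileHandler('apscheduler.log'))  # placeholder path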
Maybe you want to call logging.getLogger("apscheduler") and set up its log file there? See this answer: https://stackoverflow.com/a/2031557/782168
I'm writing a large hardware simulation library in Python3. For logging, I use the Python3 Logging module.
For controlling debug messages with method-level granularity, I learned "on the street" (ok, here at StackOverflow) to create sub-loggers within each method I wanted to log from:
# getChild() is a Logger method, so obtain a parent logger first
logger = logging.getLogger(__name__)
sub_logger = logger.getChild("new_sublogger_name")
sub_logger.setLevel(logging.DEBUG)

# Sample debug message
sub_logger.debug("This is a debug message...")
By changing the call to setLevel(), the user is able to enable/disable debugging messages on a per-method basis.
Now the Boss Man doesn't like this approach. He's advocating a single point at which all logging messages in the library can be enabled/disabled with the same method-level granularity. (This was to be accomplished by writing our own Python logging library, BTW.)
Not wanting to re-invent the logging wheel, I proposed to instead continue to use the Python Logging library, but instead use Filters to allow single-point control of logging messages.
Having not used Python Logging Filters very often, is there a consensus on using Filters vs Sublogger.setLevel() for this application? What are the pros/cons of each method?
I'm quite used to setLevel() after using it for a while, but that may be coloring my objectivity. I DO NOT, however, wish to waste everyone's time writing another Python logging library.
I think the existing logging module does what you want. The trick is to separate the place where you call setLevel() (a configuration operation) from the places where you call getChild() (ongoing logging operations).
import logging

logger = logging.getLogger('mod1')

def fctn1():
    logger.getChild('fctn1').debug('I am chatty')
    # do stuff (notice, no setLevel)

def fctn2():
    logger.getChild('fctn2').debug('I am even more chatty')
    # do stuff (notice, no setLevel)
Notice there is no setLevel() there, which makes sense: why call setLevel() on every call, and since when does a method know what logging level the user wants?
You set your logging levels in a configuration step at the beginning of the program. You can do it with the dictionary-based configuration, a Python module that does a bunch of setLevel() calls, or even something you cook up with ini files or whatever. But basically it boils down to:
def config_logger():
    logging.getLogger('abc.def').setLevel(logging.INFO)
    logging.getLogger('mod1').setLevel(logging.WARN)
    logging.getLogger('mod1.fctn1').setLevel(logging.DEBUG)
    # (etc...)
Now, if you want to get fancy with filters, you can use them to inspect the stack frame and pull the method name out for you. But that gets more complicated.
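For illustration, a filter would not even need to inspect the stack: every LogRecord already carries funcName. A minimal sketch (the MethodFilter class and the enabled set are hypothetical names, not an established pattern):
import logging

class MethodFilter(logging.Filter):
    # Let DEBUG records through only for explicitly enabled functions.
    def __init__(self, enabled):
        super().__init__()
        self.enabled = enabled

    def filter(self, record):
        if record.levelno > logging.DEBUG:
            return True  # never suppress higher-severity messages
        return record.funcName in self.enabled

handler = logging.StreamHandler()
handler.addFilter(MethodFilter({"fctn1"}))  # the single point of control

lib_logger = logging.getLogger('mod1')
lib_logger.addHandler(handler)
lib_logger.setLevel(logging.DEBUG)
Attaching the filter to a single handler gives you one place to enable or disable methods, at the cost of a per-record check.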
I'm working with Django-nonrel on Google App Engine, which forces me to use logging.debug() instead of print().
The "logging" module is provided by Django, but I'm having a rough time using it instead of print().
For example, if I need to verify the content held in the variable x, I will put
logging.debug('x is: %s' % x). But if the program crashes soon after (without flushing the stream), then it never gets printed.
So for debugging, I need debug() to be flushed before the program exits on error, and this is not happening.
I think this may work for you, assuming you're only using one (or the default) handler:
>>> import logging
>>> logger = logging.getLogger()
>>> logging.debug('wat wat')
>>> logger.handlers[0].flush()
It's kind of frowned upon in the documentation, though.
Application code should not directly instantiate and use instances of Handler. Instead, the Handler class is a base class that defines the interface that all handlers should have and establishes some default behavior that child classes can use (or override).
http://docs.python.org/2/howto/logging.html#handler-basic
And it could be a performance drain, but if you're really stuck, this may help with your debugging.
If the use case is that you have a python program that should flush its logs when exiting, use logging.shutdown().
From the python documentation:
logging.shutdown()
Informs the logging system to perform an orderly shutdown by flushing and closing all handlers. This should be called at application exit and no further use of the logging system should be made after this call.
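A minimal sketch of that pattern (risky_operation is a hypothetical stand-in for whatever might crash):
import logging

logging.basicConfig(filename="debug.log", level=logging.DEBUG)

def risky_operation():
    raise RuntimeError("boom")  # stand-in for the crashing code

try:
    logging.debug("x is: %s", 42)
    risky_operation()
finally:
    logging.shutdown()  # flush and close all handlers even on a crash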
Django logging relies on the standard python logging module.
This module has a module-level method, logging.shutdown(), which flushes all of the handlers and shuts down the logging system (i.e. logging can no longer be used after it is called).
Inspecting the code of this function shows that currently (python 2.7) the logging module holds a list of weak references to all handlers in a module-level variable called _handlerList so all of the handlers can be flushed by doing something like
[h_weak_ref().flush() for h_weak_ref in logging._handlerList]
Because this solution uses the internals of the module, @Mike's solution above is better, but it relies on having access to a logger. It can be generalized as follows:
[h.flush() for h in my_logger.handlers]
A simple function that always works for recording your debug messages while programming. Don't use it in production, since it will not rotate:
def make_log(message):
    import datetime
    with open('mylogfile.log', 'a') as f:
        f.write(f"{datetime.datetime.now()} {message}\n")
Then use it as:
make_log('my message to register')
When moving to production, just comment out the last two lines:
def make_log(message):
    import datetime
    #with open('mylogfile.log', 'a') as f:
    #    f.write(f"{datetime.datetime.now()} {message}\n")