I'm looking at how to log to syslog from within my Python app, and I found there are two ways of doing it:
Using syslog.syslog() routines
Using the logging module's SysLogHandler
Which is the best option to use, and what are the advantages/disadvantages of each? I really don't know which one I should use.
syslog.syslog() can only be used to send messages to the local syslogd. SysLogHandler can be used as part of a comprehensive, configurable logging subsystem, and can log to remote machines.
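For example, here is a minimal sketch (the hostname is a placeholder) of SysLogHandler sending records to a remote syslog server, which syslog.syslog() cannot do:
import logging
import logging.handlers

logger = logging.getLogger('myapp')
logger.setLevel(logging.INFO)

# Point the handler at a remote syslog daemon (hypothetical host; 514/UDP is the default port).
handler = logging.handlers.SysLogHandler(address=('logs.example.com', 514))
logger.addHandler(handler)

logger.info('hello from the network')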
The logging module is a more comprehensive solution that can potentially handle all of your log messages, and it is very flexible. For instance, you can set up multiple handlers for your logger, each set to log at a different level. You can have a SysLogHandler for sending errors to syslog, a FileHandler for debugging logs, and an SMTPHandler to email the really critical messages to ops. You can also define a hierarchy of loggers within your modules, each with its own level, so you can enable/disable messages from specific modules, such as:
import logging
logger = logging.getLogger('package.stable_module')
logger.setLevel(logging.WARNING)
And in another module:
import logging
logger = logging.getLogger('package.buggy_module')
logger.setLevel(logging.DEBUG)
The log messages in both of these modules will be sent, depending on the level, to the 'package' logger and ultimately to the handlers you've defined. You can also add handlers directly to the module loggers, and so on. If you've followed along this far and are still interested, I recommend jumping to the logging tutorial for more details.
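To make the multiple-handlers idea above concrete, here is a minimal sketch (the file name, mail host, and addresses are hypothetical, and the syslog address may need adjusting for your platform):
import logging
import logging.handlers

logger = logging.getLogger('package')
logger.setLevel(logging.DEBUG)

# Errors and above go to the local syslog daemon ('/dev/log' works on Linux).
syslog_handler = logging.handlers.SysLogHandler(address='/dev/log')
syslog_handler.setLevel(logging.ERROR)

# Everything from DEBUG up goes to a file.
file_handler = logging.FileHandler('debug.log')
file_handler.setLevel(logging.DEBUG)

# Critical messages are mailed to ops (hypothetical SMTP host and addresses).
mail_handler = logging.handlers.SMTPHandler(
    mailhost='smtp.example.com',
    fromaddr='app@example.com',
    toaddrs=['ops@example.com'],
    subject='Critical error in package')
mail_handler.setLevel(logging.CRITICAL)

for handler in (syslog_handler, file_handler, mail_handler):
    logger.addHandler(handler)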
So far, one disadvantage of logging.handlers.SysLogHandler has not been mentioned: it does not let you set options like LOG_ODELAY, LOG_NOWAIT, or LOG_PID. On the other hand, LOG_CONS and LOG_PERROR can be achieved by adding more handlers, and LOG_NDELAY is effectively the default, because the connection is opened when the handler is instantiated.
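For comparison, a minimal sketch of how the syslog module itself exposes those options (the ident string 'myapp' is just an example):
import syslog

# openlog accepts the ident, the option flags, and the facility up front.
syslog.openlog(ident='myapp',
               logoption=syslog.LOG_PID | syslog.LOG_NDELAY,
               facility=syslog.LOG_USER)
syslog.syslog(syslog.LOG_WARNING, 'something looks off')
syslog.closelog()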
I just discovered this very strange behaviour of the logging module in Spyder:
import logging
logging.getLogger("hurricane").handlers
Out[2]: [] # expected
logging.getLogger("tornado").handlers
Out[3]: [<StreamHandler <stderr> (NOTSET)>] # Where does that StreamHandler come from?!
Note that these are the first lines from a freshly started interpreter. So I haven't imported tornado or any other package except logging. Yet, unlike any other logger I tried to get, it comes with a StreamHandler attached.
Why?
Related question: How to prevent Python Tornado from logging to stdout/console?
I think Tornado uses logging for its normal-operation request logging. This is not a great idea: an application is supposed to work exactly as it did before if all logging is disabled (aside from the logging itself, of course), but numerous web server authors use logging functionality as a convenient way to do part of their normal operation rather than reserving it for diagnostics. I believe you can turn this off using a logging=none configuration option; see this page for more information.
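If the option route doesn't fit your setup, a minimal sketch of doing it by hand (my own assumption, not Tornado's documented API; the logger names follow Tornado's conventions) is to detach the handler from Tornado's loggers before they log anything:
import logging

for name in ('tornado', 'tornado.access', 'tornado.application', 'tornado.general'):
    tornado_logger = logging.getLogger(name)
    tornado_logger.handlers.clear()   # drop the StreamHandler Tornado attached
    tornado_logger.propagate = False  # and keep its records away from the root logger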
I have been trying to figure out the difference between the Python logger and the Celery logger, specifically the difference between the commands below, but cannot find a good answer.
I am using Celery v3 with Django 1.10.
from celery.utils.log import get_logger
logger = get_logger(__name__)
...
from celery.utils.log import get_task_logger
logger = get_task_logger(__name__)
...
import logging
logger = logging.getLogger(__name__)
The Celery documentation (latest, v3.1) is very thin on this topic.
I have looked at similar questions such as this one, but it is still unclear which one to use, why to use it, and specifically what the differences are. I am looking for a clear, concise answer.
I am also using Sentry in my production environment. How does this choice affect the Sentry logs? i.e. these common settings
From experience, using get_task_logger gets you a few things of importance, especially with Sentry:
Auto prepending task names to your log output
The ability to set log handling rules at a higher level than just module (I believe it's actually setting the logger name to celery.task)
Probably most importantly for a Sentry setup, it hooks the logging into the log handlers that Sentry makes use of.
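For context, a minimal sketch of the usual get_task_logger pattern (the app and task names are made up):
from celery import Celery
from celery.utils.log import get_task_logger

app = Celery('proj')                     # hypothetical app name
logger = get_task_logger(__name__)       # returns a logger parented under celery.task

@app.task
def add(x, y):
    logger.info('Adding %s + %s', x, y)  # output gets the task's name prepended
    return x + y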
Important: There is a bit of extra config that needs to go into Celery registration for Sentry:
https://docs.sentry.io/clients/python/integrations/celery/
You may be able to get errors to flow into Sentry without some of this setup, but I think this will give you the best traces and details, and it ensures that things like expected exceptions declared via throws are properly ignored.
I am starting to explore Python 3.5's logging package. I set logging up with these two commands in the main file
import logging
logging.basicConfig(filename=r'fractal.log', level=logging.DEBUG)
In addition I inserted
import logging
in another module.
All my desired log entries showed up correctly in the log file, but in addition to those I got these two, which were not AFAIK generated by my logging code:
DEBUG:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): ichart.finance.yahoo.com
DEBUG:requests.packages.urllib3.connectionpool:http://ichart.finance.yahoo.com:80 "GET /table.csv?d=11&b=1&e=6&s=%5EIRX&g=d&c=1900&a=0&f=2016&ignore=.csv HTTP/1.1" 200 None
These log entries were clearly not generated in the main module. Also, they do not show up when I set the logging level to INFO.
Does anyone know why I am getting these log entries? Are they evidence of problems with my code?
This is because you are setting the level of the root logger, and the loggers used by the requests module propagate their messages up to it. You can fix this by explicitly silencing the requests logger:
logging.getLogger('requests').setLevel(logging.ERROR)
This will silence all the DEBUG-level messages from requests.
Alternatively, you could use your own logger instead of relying on the root logger. There are various ways to do this (e.g. logging.config.dictConfig could be used), but you can easily do it manually, something along the lines of:
import logging

# Use a dedicated module-level logger with its own file handler,
# so other libraries' messages don't end up in your log file.
logger = logging.getLogger(__name__)
logger.addHandler(logging.FileHandler('fractal.log'))
logger.setLevel(logging.DEBUG)
In an application I want to collect messages related to some dedicated part of the processing, and then show these messages later at user request. I would like to report severity (e.g. info, warning), time of message, etc., so for this I considered to use the Python standard logging module to collect the messages and related information. However, I don't want these messages to go to a console or file.
Is there a way to create a Python logger, using logging, where the messages are kept internally (in memory) only, until read out by the application? I would expect the code to start something like:
log = logging.getLogger('my_logger')
... some config of log for internal only; not to console
log.error('Just some error')
... some code to get/clear messages in log until now
I have tried to look in logging — Logging facility for Python, but most examples are for immediate output to file or console, so an example of internal logging, or a reference, would be appreciated.
You should just use another handler. You could use a StreamHandler over an io.StringIO that would simply log to memory:
import io
import logging

log = logging.getLogger('my_logger')
memlog = io.StringIO()                         # in-memory text buffer
log.addHandler(logging.StreamHandler(memlog))
All logging sent to log can be found in memlog.getvalue()
Of course, this is just a simple handler that concatenates everything into one single string, although from Python 3.2 onwards each record is terminated, by default, with a \n. For more specific requirements, you could have a look at a QueueHandler or implement a dedicated handler, as sketched below.
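For example, a minimal sketch (the class name ListHandler is made up) of a dedicated handler that keeps the raw LogRecord objects in memory, so severity and time can be reported later and then cleared:
import logging

class ListHandler(logging.Handler):
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)        # keep the full record, not just its text

log = logging.getLogger('my_logger')
memhandler = ListHandler()
log.addHandler(memhandler)
log.propagate = False                      # keep messages away from the root logger/console

log.error('Just some error')

for record in memhandler.records:          # read out at user request
    print(record.levelname, record.created, record.getMessage())
memhandler.records.clear()                 # clear collected messages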
References: logging.handlers in the Python Standard Library reference manual.
I've been struggling with multiprocessing logging for some time, for many reasons.
One of my reasons is: why another get_logger?
Of course I've seen this question, and it seems the logger that multiprocessing.get_logger returns does some "process-shared lock" magic to make log handling smooth.
So, today I looked into the multiprocessing code of Python 2.7 (/multiprocessing/util.py), and found that this logger is just a plain logging.Logger, and there's barely any magic around it.
Here's the description in the Python documentation, right before the get_logger function:
Some support for logging is available. Note, however, that the logging
package does not use process shared locks so it is possible (depending
on the handler type) for messages from different processes to get
mixed up.
So if you use the wrong kind of logging handler, can messages get mixed up even with the get_logger logger?
I've used a program that uses get_logger for logging for some time.
It prints logs via a StreamHandler and they (seemingly) never get mixed up.
Now my theory is:
multiprocessing.get_logger doesn't do process-shared locks at all
StreamHandler works for multiprocessing, but FileHandler doesn't
the major purpose of this get_logger logger is to track processes' life cycles, and to provide an easy-to-get, ready-to-use logger that already logs the process's name/ID and the like
Here's the question:
Is my theory right?
How/Why/When do you use this get_logger?
Yes, I believe you're right that multiprocessing.get_logger() doesn't do process-shared locks; as you say, the docs even state this. Despite all the upvotes, it looks like the question you link to is mistaken in stating that it does (to give it the benefit of the doubt, it was written over a decade ago, so perhaps that was the case at one point).
Why does multiprocessing.get_logger() exist then? The docs say that it:
Returns the logger used by multiprocessing. If necessary, a new one will be created.
When first created the logger has level logging.NOTSET and no default handler. Messages sent to this logger will not by default propagate to the root logger.
In other words, by default the multiprocessing module will not produce any log output, since its logger has no handler attached and its messages do not propagate to the root logger.
If you were to have a problem with your code that you suspected to be an issue with multiprocessing, that lack of log output wouldn't be helpful for debugging, and that's what multiprocessing.get_logger() exists for: it returns the logger used by the multiprocessing module itself, so that you can override the default logging configuration, get some logs out of it, and see what it's doing.
Since you asked how to use multiprocessing.get_logger(), you'd call it like so and configure the logger in the usual fashion, for example:
import logging
import multiprocessing

logger = multiprocessing.get_logger()
formatter = logging.Formatter('[%(levelname)s/%(processName)s] %(message)s')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# now run your multiprocessing code
That said, you may actually want to use multiprocessing.log_to_stderr() instead, for convenience; as per the docs:
This function performs a call to get_logger() but in addition to returning the logger created by get_logger, it adds a handler which sends output to sys.stderr using format '[%(levelname)s/%(processName)s] %(message)s'
i.e. it saves you needing to set up quite so much logging config yourself, and you can instead start debugging your multiprocessing issue with just:
import logging
import multiprocessing

logger = multiprocessing.log_to_stderr()
logger.setLevel(logging.INFO)

# now run your multiprocessing code
To reiterate though, that's just a normal module logger that's being configured and used, i.e. there's nothing special or process-safe about it. It just lets you see what's happening inside the multiprocessing module itself.
This answer is not about get_logger specifically, but perhaps you can use the approach suggested in this post? Note that the QueueHandler/QueueListener classes are available for earlier Python versions via the logutils package (available on PyPI, too).
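For reference, a minimal sketch of that QueueHandler/QueueListener approach (details assumed here, not taken from the linked post): worker processes put their records on a shared queue, and a single listener in the parent process writes them out, so records from different processes don't get interleaved.
import logging
import logging.handlers
import multiprocessing

def worker(queue):
    logger = logging.getLogger('worker')
    logger.addHandler(logging.handlers.QueueHandler(queue))  # ship records to the queue
    logger.setLevel(logging.INFO)
    logger.info('hello from %s', multiprocessing.current_process().name)

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    # Only the listener touches the real handler, so output never gets mixed up.
    listener = logging.handlers.QueueListener(queue, logging.StreamHandler())
    listener.start()
    workers = [multiprocessing.Process(target=worker, args=(queue,)) for _ in range(3)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
    listener.stop()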