Python logging shows no sys.stdout output when called from a different thread

I now have a strange problem with logging in my multithreaded Python application. Whenever I debug the application, I see the logging output on stdout as expected, such as:
2016-11-05 21:51:36,851 (connectionpool.py:735 MainThread) INFO - requests.packages.urllib3.connectionpool: "Starting new HTTPS connection (1): api.telegram.org"
2016-11-05 21:51:41,920 (converter.py:16 WorkerThread1) DEBUG - converter: "resizing file test/test_input/"
2016-11-05 21:51:50,199 (bot.py:221 WorkerThread1) ERROR - __main__: "MoviePy error: failed to read the duration of file test/test_input/.
However, when I run the code without the debugger, all the logs from WorkerThread1 disappear, leaving only the MainThread ones. The code is unchanged and the error still occurs, so I guess it has something to do with multithreading. WorkerThread1 is started by the pyTelegramBotAPI framework. I send my logs to sys.stdout:
import logging
import sys

formatter = logging.Formatter(
    '%(asctime)s (%(filename)s:%(lineno)d %(threadName)s) %(levelname)s - %(name)s: "%(message)s"')
stream_handler = logging.StreamHandler(sys.stdout)
stream_handler.setFormatter(formatter)

root = logging.getLogger()
root.setLevel(logging.NOTSET)
root.addHandler(stream_handler)
Any ideas?
Update: it definitely has to do with multithreading, because when I tell the framework to use only one thread, the logging messages appear. pyTelegramBotAPI uses WorkerThread and ThreadPool to implement concurrency, as exemplified here.

You have to show more code. In the code you have shown, stream_handler is created, but then handler is used in addHandler.
My guess is that your logging level is improperly set, causing all the logs from WorkerThread1 not to be logged.
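For what it's worth, records emitted from a worker thread reach the root logger's handlers exactly like records from the main thread, so threading by itself shouldn't hide them. A minimal sketch you can run to check that (the thread name and logger names below are made up for illustration):
import logging
import sys
import threading

# Root logger with a single stdout handler, mirroring the question's setup.
formatter = logging.Formatter(
    '%(asctime)s (%(threadName)s) %(levelname)s - %(name)s: "%(message)s"')
stream_handler = logging.StreamHandler(sys.stdout)
stream_handler.setFormatter(formatter)
root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(stream_handler)

def worker():
    # Records from this thread propagate to the root handler like any other.
    logging.getLogger("worker").debug("hello from the worker thread")

t = threading.Thread(target=worker, name="WorkerThread1")
t.start()
t.join()
logging.getLogger(__name__).info("hello from the main thread")
If both lines show up, the suppression comes from how the framework (re)configures logging or levels, not from the threads themselves.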

Related

Unexpected behavior of the Python logging module with info and warning messages

I have a question regarding logging module in Python.
If I instantiate a logger and set its level to INFO without adding any handlers, then I expect that info messages will be printed on the screen.
However, in reality, warning messages are printed but info messages are not.
After adding explicitly a handler, both levels are printed.
Precisely, the following small script:
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.info("This is an info message")
logger.warning("This is a warning message")
print("*** Add a stream handler explicitly")
handler = logging.StreamHandler()
logger.addHandler(handler)
logger.info("This is an info message")
logger.warning("This is a warning message")
gives output
This is a warning message
*** Add a stream handler explicitly
This is an info message
This is a warning message
(checked in Python 3.7.6 and 3.8.2).
I would expect that either no messages are printed without a handler, or both levels are printed after setting level to INFO.
I suggest you use loguru. It's a really simple tool to use and, for me, very intuitive.
You can add a default logger just using this line:
from loguru import logger
If you want to change the logging level, just set up the logger like this:
logger.remove() # This disables the default logger
logger.add(sys.stderr, level="INFO") # This adds a default logger (format and colors included) but at the logging level you need
And that's it. All the scripts that use the loguru logger in your code will be set up like this. By default it outputs to stderr, but you can change that to whatever you need.
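Putting that together, a minimal runnable sketch of the setup (the messages are just placeholders):
import sys
from loguru import logger

logger.remove()                       # drop the default sink
logger.add(sys.stderr, level="INFO")  # re-add stderr at the level you want
logger.info("visible at INFO and above")
logger.debug("filtered out, below INFO")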

Python logging issue?

I am writing the code below to log messages to a file:
#!/usr/bin/python3
import logging
## init logger
logger = logging.getLogger(__name__)
## create handler file
handler = logging.FileHandler('mylog3.log')
## set log level
handler.setLevel(logging.INFO)
## add handler to logger
logger.addHandler(handler)
l = [1, 2, 3, 4]
i = 3
logger.info('This is an info message')
logger.warning('This is a warning message')
logger.debug('This is a debug message. I=%d l=%s', i, l)
logger.error('This is an error message. l =%s', l)
logger.fatal('This is a fatal error message')
The problem I am seeing is that no matter what settings I give, it always prints the warning, error and fatal error messages to the file. The debug and info messages are never written.
I verified that I see the issue both with PyCharm on Windows and with Python 3 on my Ubuntu Linux machine, so it must be that I am doing something wrong. What can it be? I am at a loss!
Both loggers and handlers have levels. You've set the level for the handler, but not for the logger - and that defaults to WARNING. If you do e.g. logger.setLevel(logging.DEBUG), you should see DEBUG and INFO messages as well (if you don't set a level on the handler at all) or INFO messages but not DEBUG messages (if you set the handler's level to logging.INFO).
See this diagram for more info on the information flow in logging.
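To make the two gates concrete, here is a minimal sketch (reusing the file name from the question) of a record clearing the logger's level but not the handler's:
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)      # first gate: the logger (defaults to WARNING via the root logger)

handler = logging.FileHandler('mylog3.log')
handler.setLevel(logging.INFO)      # second gate: the handler
logger.addHandler(handler)

logger.debug('passes the logger, dropped by the handler')
logger.info('passes both gates and is written to mylog3.log')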
The debug and info messages are not shown because the logger itself is still at its default effective level, WARNING. handler.setLevel(logging.INFO) only sets the handler's threshold; the logger discards DEBUG and INFO records before the handler ever sees them. Add logger.setLevel(logging.INFO) (or logging.DEBUG) to fix this.

How to share a file between modules for logging in python

I want to log messages from different modules in Python to a file. I also need to print some messages to the console for debugging purposes. I used the logging module for this, but it sends every record of the given severity and above to the file or console.
I want only some messages logged to the file, and they should not include the messages meant for the console.
Similarly, the console messages should not contain the messages logged to the file.
My approach would be a singleton class that shares the file write operation between the various modules.
Is there an easier approach than this in Python?
EDIT:
I am new to Python. Here is the sample program I tried:
import logging

logger = logging.getLogger('simple_example')
logger.setLevel(logging.INFO)
# create file handler which only logs critical messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.CRITICAL)
# create console handler with a lower log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(ch)
logger.addHandler(fh)
# 'application' code
logger.debug('debug message')
logger.info('info message')
logger.warning('warn message')
logger.error('error message')
logger.critical('critical message')
Console prints :
2015-02-03 15:36:00,651 - simple_example - ERROR - error message
2015-02-03 15:36:00,651 - simple_example - CRITICAL - critical message
#I don't want critical messages in console.
Here is a script that creates two loggers; use one for logging to a file and the other for logging to stdout. The question is: on which criterion do you choose between stdout and the file, knowing (from your question) that you don't want the criterion to be the log level (debug, error, critical...)?
#!/usr/bin/python
import logging
logger_stdout = logging.getLogger('logger_stdout')
logger_stdout.setLevel(logging.DEBUG)
sh = logging.StreamHandler()
sh.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
logger_stdout.addHandler(sh)
logger_stdout.debug('stdout debug message')
logger_file = logging.getLogger('logger_file')
logger_file.setLevel(logging.DEBUG)
fh = logging.FileHandler("foo.log")
fh.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
logger_file.addHandler(fh)
logger_file.debug('file debug message')
And when I run this script:
D:\jrx\jrxpython
λ python essai.py
2015-02-03 11:12:07,210 - logger_stdout - DEBUG - stdout debug message
D:\jrx\jrxpython
λ cat foo.log
2015-02-03 11:12:07,224 - logger_file - DEBUG - file debug message
D:\jrx\jrxpython
λ
CRITICAL is higher than ERROR:
You can verify this yourself:
>>> import logging
>>> print logging.CRITICAL
50
>>> print logging.ERROR
40
>>>
There are two cases in logging:
Logging within the same process - you should have several handlers with different logging levels based on how verbose the logs should be. A higher level means less output; that's why DEBUG is the lowest predefined log level - it writes everything, for debugging purposes.
Logging from different processes - you should set up several loggers; they can be accessed from anywhere in your code using logging.getLogger(name). This returns the same logger every time, so the logger setup persists throughout the code and only needs to be executed once.
The first case demonstrates that you can't have an "error but not critical" log with levels alone, since that is the opposite of how levels work. You can have a "critical but not error" log, which is less verbose; this is probably what you want.
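If you really do need "error but not critical" on the console anyway, handler levels cannot express it, but a record filter attached to the console handler can. A minimal sketch of that idea (this uses logging.Filter and is not part of the answers above):
import logging
import sys

class MaxLevelFilter(logging.Filter):
    """Drop records above a given level."""
    def __init__(self, max_level):
        super().__init__()
        self.max_level = max_level
    def filter(self, record):
        return record.levelno <= self.max_level

logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)

ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logging.ERROR)
ch.addFilter(MaxLevelFilter(logging.ERROR))  # ERROR passes, CRITICAL is dropped
logger.addHandler(ch)

logger.error('shown on the console')
logger.critical('kept out of the console')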

Unable to log debug messages to the system log in Python

I'm using Python 3.4 on Mac OS X. I have the following code to set up a logger:
import logging
import logging.handlers

LOGGER = logging.getLogger(PROGRAM_NAME)
LOGGER.setLevel(logging.DEBUG)
LOGGER.propagate = False

LOGGER_FH = logging.FileHandler(WORKING_DIR + "/syslog.log", 'a')
LOGGER_FH.setLevel(logging.DEBUG)
LOGGER_FH.setFormatter(logging.Formatter('%(name)s: [%(levelname)s] %(message)s'))
LOGGER.addHandler(LOGGER_FH)

LOGGER_SH = logging.handlers.SysLogHandler(address='/var/run/syslog',
                                           facility=logging.handlers.SysLogHandler.LOG_USER)
LOGGER_SH.setLevel(logging.DEBUG)
LOGGER_SH.setFormatter(logging.Formatter('%(name)s: [%(levelname)s] %(message)s'))
LOGGER.addHandler(LOGGER_SH)
The FileHandler works perfectly, and I'm able to see all expected messages at all logging levels show up in the log file. The SysLogHandler doesn't work correctly: I'm unable to see any LOGGER.info() or LOGGER.debug() messages in the syslog output. I can see error and warning messages, but not info or debug. Even tweaking the /etc/syslog.conf file has no effect (even after explicitly reloading the syslog daemon with launchctl). What am I missing here?
Try: address='/dev/log'
It's a bit confusing from the docs, but address is expected to be a Unix domain socket, not a path to an ordinary file.
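A minimal sketch of pointing the handler at the syslog socket (which path applies is platform-dependent: /dev/log on most Linux systems, /var/run/syslog on macOS):
import logging
import logging.handlers

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

# address is a Unix domain socket path (or a (host, port) tuple for UDP),
# not a log file to append to.
sh = logging.handlers.SysLogHandler(address='/dev/log')
sh.setLevel(logging.DEBUG)
sh.setFormatter(logging.Formatter('%(name)s: [%(levelname)s] %(message)s'))
logger.addHandler(sh)

logger.debug("debug message routed to the system log")
Even then, the syslog daemon's own configuration decides which priorities it actually records, so low-priority messages can still be dropped on the daemon side.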

How to programmatically tell Celery to send all log messages to stdout or stderr?

How does one turn on celery logging programmatically?
From the terminal, this works fine:
celery worker -l DEBUG
When I call get_task_logger(__name__).debug('hello'), I can see the message come up in the terminal (both stdout and stderr are displayed). I can even import logging and call logger.info('hi') and see that too (both work).
However, while developing a task, I prefer to use a test module and call the task function directly rather than firing up a whole worker. But I can't see the log messages. I understand that Celery is redirecting everything to its internal apparatus, but I want to see the log messages on the stdout too.
How do I tell Celery to send a copy of the log messages back to stdout?
I've read a bunch of online articles about logging, but it seems that a number of logging-related configuration variables in Celery have been deprecated, and it's unclear to me from the docs what the supported path is today.
Here is an example module that creates a celery object and attempts to log output. Nothing shows in the terminal.
Example mymodule.py:
from celery import Celery
import logging
from celery.utils.log import get_task_logger
app = Celery('test')
app.config_from_object('myfile', True)
get_task_logger(__name__).warn('hello world')
logging.getLogger(__name__).warn('hello world 2')
EDIT
I know that I can redirect some of the output back to the terminal by adding a handler:
log = get_task_logger(__name__)
h = logging.StreamHandler(sys.stdout)
log.addHandler(h)
But is there a "Celery way" to do this? Maybe one that also gives me the Celery-formatted lines of text:
[2014-03-02 15:51:32,949: WARNING] hello world
I have been looking at the same issue...
What seems to work best is to use the signal handler, according to http://docs.celeryproject.org/en/latest/userguide/signals.html#after-setup-logger
In your celery.py file use:
from celery.signals import after_setup_logger
import logging

@after_setup_logger.connect
def logger_setup_handler(logger, **kwargs):
    my_handler = MyLogHandler()
    my_handler.setLevel(logging.DEBUG)
    my_formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')  # custom formatter
    my_handler.setFormatter(my_formatter)
    logger.addHandler(my_handler)
    logging.info("My log handler connected -> Global Logging")

if __name__ == '__main__':
    app.start()
Then you can define MyLogHandler() however you wish.
To send the logs to STDOUT you should also be able to use (I have not tested it):
my_handler = logging.StreamHandler(sys.stdout)
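Combining the signal hook with a plain StreamHandler, a minimal sketch of what this might look like in the module from the question (the format string here is just an example, not Celery's own):
import logging
import sys

from celery import Celery
from celery.signals import after_setup_logger

app = Celery('test')

@after_setup_logger.connect
def setup_stdout_logging(logger, **kwargs):
    # Attach an extra handler so a copy of every record also goes to stdout.
    handler = logging.StreamHandler(sys.stdout)
    handler.setLevel(logging.DEBUG)
    handler.setFormatter(logging.Formatter('[%(asctime)s: %(levelname)s] %(message)s'))
    logger.addHandler(handler)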
