How can I log stdout and stderr outputs using fb-hydra? - python

I am trying to log stdout and stderr into a file.
I found the custom.yaml file in the facebookresearch/hydra GitHub repo.
# @package _group_
version: 1
formatters:
  simple:
    format: '[%(levelname)s] - %(message)s'
handlers:
  console:
    class: logging.StreamHandler
    formatter: simple
    stream: ext://sys.stdout
root:
  handlers: [console]
disable_existing_loggers: False
I figured out that I can create a custom job_logging config file and log stderr by editing the stream line as below:
stream: ext://sys.stderr
However, I want to log both stderr and stdout at the same time.
I am having a hard time figuring it out. Does anyone know how I can do it by changing the config file?

Hydra forwards your config (as a primitive dictionary) to logging.config.dictConfig.
This is more of a question about that API than about Hydra.
Do you know how to do this via logging.config.dictConfig without Hydra?
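For reference, here is a minimal sketch of that dictConfig shape in plain Python (no Hydra), with two handlers attached to the root logger, one on stdout and one on stderr; the handler names console_out/console_err are arbitrary, and the same structure can be carried over into a custom job_logging YAML:
import logging
import logging.config

config = {
    'version': 1,
    'formatters': {
        'simple': {'format': '[%(levelname)s] - %(message)s'},
    },
    'handlers': {
        # One StreamHandler per stream; both reuse the same formatter.
        'console_out': {
            'class': 'logging.StreamHandler',
            'formatter': 'simple',
            'stream': 'ext://sys.stdout',
        },
        'console_err': {
            'class': 'logging.StreamHandler',
            'formatter': 'simple',
            'stream': 'ext://sys.stderr',
        },
    },
    'root': {'handlers': ['console_out', 'console_err']},
    'disable_existing_loggers': False,
}

logging.config.dictConfig(config)
logging.getLogger(__name__).warning("this goes to both stdout and stderr")
Note that with this exact config every record is emitted to both streams. If you instead want to split records (say, INFO and below to stdout, WARNING and up to stderr), set a level on each handler, plus a filter on the stdout handler to cap its maximum level.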

Related

Log streams are a random hash instead of the logger name

So recently I moved my app into a Docker container.
I noticed that the log streams of the log group changed their names to some random hash.
(Screenshots of the stream names before and after the move to Docker omitted.)
The logger in each file is initialized as
logger = logging.getLogger(__name__)
The logger's config is set up inside the __main__ with
import logging.config
import yaml

def setup_logger(config_file):
    with open(config_file) as log_config:
        config_yml = log_config.read()
    config_dict = yaml.safe_load(config_yml)
    logging.config.dictConfig(config_dict)
with the config loaded from this file
version: 1
disable_existing_loggers: False
formatters:
  json:
    format: "[%(asctime)s] %(process)d %(levelname)s %(name)s:%(funcName)s:%(lineno)s - %(message)s"
  plaintext:
    format: "%(asctime)s %(levelname)s %(name)s - %(message)s"
    datefmt: "%Y-%m-%d %H:%M:%S"
handlers:
  console:
    class: logging.StreamHandler
    formatter: plaintext
    level: INFO
    stream: ext://sys.stdout
root:
  level: DEBUG
  propagate: True
  handlers: [console]
The docker image is run with the flags
--log-driver=awslogs \
--log-opt awslogs-group=XXXXX \
--log-opt awslogs-create-group=true \
Is there a way to keep the original log stream names?
That's how the awslogs driver works.
Per the documentation, you can control the name somewhat using the awslogs-stream-prefix option:
The awslogs-stream-prefix option allows you to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task to which the container belongs. If you specify a prefix with this option, then the log stream takes the following format:
prefix-name/container-name/ecs-task-id
If you don't specify a prefix with this option, then the log stream is named after the container ID that is assigned by the Docker daemon on the container instance. Because it is difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
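For example (the prefix name myservice is hypothetical), the option slots in next to the flags above:
--log-opt awslogs-stream-prefix=myservice \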
You cannot change this behavior if you're using the awslogs driver. The only option would be to disable the log driver and use the AWS SDK to put the events into CloudWatch manually, but I don't think that'd be a good idea.
To be clear, your container settings/code don't affect the stream name at all when using awslogs - the log driver is just redirecting all of the container's STDOUT to CloudWatch.

python logging printing empty lines for print messages

We are trying to introduce python logging in our existing python project.
As we already have a lot of print statements in the code, we decided to redirect all print output to the logging file using the statements below.
import logging, sys
import os
from logging.handlers import TimedRotatingFileHandler
formatter = logging.Formatter("%(asctime)s - %(pathname)s - %(funcName)s - %(levelname)s - %(message)s")
log_path = '/logs/server.log'
handler = TimedRotatingFileHandler(filename=log_path, when='midnight')
handler.setFormatter(formatter)
log = logging.getLogger()
log.setLevel(logging.DEBUG)
log.addHandler(handler)
sys.stderr.write = log.error
sys.stdout.write = log.info
However, it is printing two lines per message, with the second one empty:
2020-09-21 09:03:05,978 - utils.py - logger - INFO - <_io.TextIOWrapper name='<stderr>' mode='w' encoding='UTF-8'>
2020-09-21 09:03:05,978 - utils.py - logger - INFO -
2020-09-21 09:03:05,978 - utils.py - logger - INFO - 1600693385 Mon Sep 21 09:03:05 2020 19882 Registering functions
2020-09-21 09:03:05,978 - utils.py - logger - INFO -
It works for logging functions like info and error; it only gives this issue for print.
It would be a great help if someone knows the cause.
As additional detail, we are using gunicorn as the server and falcon as the REST framework.
You should not modify the write functions of stdout and stderr; instead, add a stream handler for info and one for error (one on stdout, the other on stderr). The empty second line appears because print() calls write() twice, once for the message and once for the trailing newline, and each call becomes its own log record.
You can find an example on SO here: https://stackoverflow.com/a/31459386/14306518
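If you do want to keep the existing print statements untouched, a common pattern is to replace sys.stdout/sys.stderr with a small file-like wrapper that buffers until a full line arrives, so the separate write() call for the newline never becomes an empty record. A minimal sketch (the StreamToLogger name is just illustrative):
import logging
import sys

class StreamToLogger:
    """File-like object that forwards complete lines to a logger."""
    def __init__(self, logger, level):
        self.logger = logger
        self.level = level
        self._buffer = ""

    def write(self, message):
        # print() calls write() twice per call: once for the text and once
        # for the trailing newline. Buffer until a newline arrives so the
        # second call doesn't produce an empty log record.
        self._buffer += message
        while "\n" in self._buffer:
            line, self._buffer = self._buffer.split("\n", 1)
            if line:
                self.logger.log(self.level, line)

    def flush(self):
        if self._buffer:
            self.logger.log(self.level, self._buffer)
            self._buffer = ""

log = logging.getLogger()
sys.stdout = StreamToLogger(log, logging.INFO)
sys.stderr = StreamToLogger(log, logging.ERROR)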

python logging shows no sys.stdout output when called from a different thread

I now have a strange problem with logging in my multithreaded python application. Whenever I debug the application, I properly see the logging output on stdout, such as:
2016-11-05 21:51:36,851 (connectionpool.py:735 MainThread) INFO - requests.packages.urllib3.connectionpool: "Starting new HTTPS connection (1): api.telegram.org"
2016-11-05 21:51:41,920 (converter.py:16 WorkerThread1) DEBUG - converter: "resizing file test/test_input/"
2016-11-05 21:51:50,199 (bot.py:221 WorkerThread1) ERROR - __main__: "MoviePy error: failed to read the duration of file test/test_input/.
However, when I run the code without debugging, all the logs from WorkerThread1 disappear, leaving only the MainThread ones. The code is unchanged and the problem remains. I guess it has something to do with multithreading. WorkerThread1 is started by the pyTelegramBotAPI framework. I have my logs output to sys.stdout:
formatter = logging.Formatter(
    '%(asctime)s (%(filename)s:%(lineno)d %(threadName)s) %(levelname)s - %(name)s: "%(message)s"')
stream_handler = logging.StreamHandler(sys.stdout)
stream_handler.setFormatter(formatter)
root = logging.getLogger()
root.setLevel(logging.NOTSET)
root.addHandler(stream_handler)
Any ideas?
Update: it has 100% to do with multithreading, because when I tell the framework to use only one thread, the logging messages appear. pyTelegramBotAPI uses WorkerThread and ThreadPool to implement concurrency, as exemplified here.
You have to show more code. In the code which you have shown, stream_handler is created, but then handler is used in addHandler.
My guess would be that your logging level is improperly set, causing all the logs from WorkerThread1 to not be logged.
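In case it helps, a minimal sketch of the same setup with an explicit root level (instead of NOTSET), configured once in the main thread before any workers start, so every thread inherits the same handler:
import logging
import sys
import threading

formatter = logging.Formatter(
    '%(asctime)s (%(filename)s:%(lineno)d %(threadName)s) '
    '%(levelname)s - %(name)s: "%(message)s"')
stream_handler = logging.StreamHandler(sys.stdout)
stream_handler.setFormatter(formatter)

root = logging.getLogger()
root.setLevel(logging.DEBUG)      # explicit level instead of NOTSET
root.addHandler(stream_handler)

def work():
    # Records from worker threads go through the same root handler.
    logging.getLogger("converter").debug("resizing file ...")

worker = threading.Thread(target=work, name="WorkerThread1")
worker.start()
worker.join()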

Unable to log debug messages to the system log in Python

I'm using Python 3.4 on Mac OSX. I have the following code to set up a logger:
import logging
import logging.handlers

LOGGER = logging.getLogger(PROGRAM_NAME)
LOGGER.setLevel(logging.DEBUG)
LOGGER.propagate = False
LOGGER_FH = logging.FileHandler(WORKING_DIR + "/syslog.log", 'a')
LOGGER_FH.setLevel(logging.DEBUG)
LOGGER_FH.setFormatter(logging.Formatter('%(name)s: [%(levelname)s] %(message)s'))
LOGGER.addHandler(LOGGER_FH)
LOGGER_SH = logging.handlers.SysLogHandler(
    address='/var/run/syslog',
    facility=logging.handlers.SysLogHandler.LOG_USER)
LOGGER_SH.setLevel(logging.DEBUG)
LOGGER_SH.setFormatter(logging.Formatter('%(name)s: [%(levelname)s] %(message)s'))
LOGGER.addHandler(LOGGER_SH)
The FileHandler works perfectly, and I'm able to see all expected messages at all logging levels show up in the log. The SysLogHandler doesn't work correctly: I'm unable to see any LOGGER.info() or LOGGER.debug() messages in the syslog output. I can see error and warning messages, but not info or debug. Even tweaking the /etc/syslog.conf file has no effect (even after explicitly reloading the syslog daemon with launchctl). What am I missing here?
Try: address='/dev/log'
It's a bit confusing from the docs, but address is expected to be a unix domain socket, not a file.
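A minimal sketch, assuming a Linux-style /dev/log socket (on macOS the socket path is /var/run/syslog, and the system's syslog configuration may still drop DEBUG/INFO messages); passing a (host, port) tuple would send over UDP instead:
import logging
import logging.handlers

logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)

# address must be the syslog unix domain socket (or a (host, port) tuple),
# not a regular file path.
handler = logging.handlers.SysLogHandler(
    address='/dev/log',
    facility=logging.handlers.SysLogHandler.LOG_USER)
handler.setFormatter(logging.Formatter('%(name)s: [%(levelname)s] %(message)s'))
logger.addHandler(handler)

logger.debug("debug message via syslog")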

Python service launched with Popen has logging stderr & stdout redirection problems

I want to log everything from my service; I'm launching it with:
launcher.py:
import subprocess

subprocess.Popen(['myservice.py'])
service.py:
import logging
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
handler = logging.FileHandler('/var/log/myservice.log')
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
log.addHandler(handler)
.....
But the problem is that stdout and stderr aren't written to myservice.log.
I was testing with:
with open("/var/log/myservice_out.log", "w+") as out, open("/var/log/myservice_err.log", "w+") as err:
    subprocess.Popen(['myservice.py'], stdout=out, stderr=err)
But this isn't using logging, and I want to log all messages (stderr, stdout) in only one file. Thanks in advance.
Everything is normal.
The logging module does NOT redirect stdout and stderr. It CAN write to stdout/stderr depending on which handler you configure, but it's not supposed to redirect stdout or stderr.
It seems you really want to use os.dup2() in service.py, or to specify the stdout and stderr arguments to subprocess.Popen in launcher.py.
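Two minimal sketches of those options, assuming the same /var/log/myservice.log path as above:
# Option 1 (launcher.py): merge stderr into stdout and write both to one file.
import subprocess

with open('/var/log/myservice.log', 'a') as log_file:
    subprocess.Popen(['myservice.py'], stdout=log_file, stderr=subprocess.STDOUT)

# Option 2 (service.py): duplicate the log file's descriptor over the
# process's own stdout/stderr with os.dup2(); this also captures output
# from C extensions and child processes, though it bypasses the logging
# module's formatting.
import os
import sys

log_fd = os.open('/var/log/myservice.log', os.O_WRONLY | os.O_APPEND | os.O_CREAT)
os.dup2(log_fd, sys.stdout.fileno())
os.dup2(log_fd, sys.stderr.fileno())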
