My Python code generates a log file using the logging framework, and all INFO messages are captured in the log file. I integrated my program with Robot Framework, and now the log file is not generated; instead, the INFO messages are printed in log.html. I understand this is because Robot Framework's existing logger is being called and hence the INFO messages are directed to log.html. I don't want this behavior to change: I still want the user-defined log file to be generated separately with just the INFO level messages.
How can I achieve this?
Python Code --> Logging Library --> "Log File"
RobotFramework --> Python Code --> Logging Library --> by default "log.html"
When you run your Python code directly, you can set the log file name yourself.
But when you run it through Robot Framework, the output file is by default log.html (since Robot uses the same logging library internally that you are using), so your logging configuration is overridden by Robot Framework's.
That is why you see the messages in log.html instead of your own file.
You can also refer to Robot Framework not creating file or writing to it.
Hope it helps!
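One possible workaround (a minimal sketch with placeholder names, assuming Robot Framework attaches its own handler to the root logger) is to give your code a dedicated named logger with its own FileHandler and disable propagation, so records are not also passed to the root logger's handlers:

import logging

my_logger = logging.getLogger('my_app')          # placeholder logger name
my_logger.setLevel(logging.INFO)
my_logger.propagate = False                      # keep records out of the root logger's handlers

file_handler = logging.FileHandler('my_app_info.log', mode='a')   # placeholder file name
file_handler.setFormatter(logging.Formatter('%(asctime)s : %(message)s'))
my_logger.addHandler(file_handler)

my_logger.info('This INFO message is written to my_app_info.log, not to log.html')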
The issue has been fixed now, which was a very minor one. But am still analyzing it deeper, will update when I am clear on the exact cause.
This was the module that I used:

import logging

def call_logger(logger_name, logFile):
    level = logging.INFO
    l = logging.getLogger(logger_name)
    if not getattr(l, 'handler_set', None):
        formatter = logging.Formatter('%(asctime)s : %(message)s')
        fileHandler = logging.FileHandler(logFile, mode='a')
        fileHandler.setFormatter(formatter)
        streamHandler = logging.StreamHandler()
        streamHandler.setFormatter(formatter)
        l.setLevel(level)
        l.addHandler(fileHandler)
        l.addHandler(streamHandler)
        l.handler_set = True
When I changed the parameter "logFile" to a different name, "log_file", it worked.
Looks like "logFile" was a built-in Robot keyword.
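For reference, a usage sketch of the module above (the logger name and file name here are placeholders, not part of the original code):

import logging

call_logger('my_app', 'my_info.log')                        # configure once
logging.getLogger('my_app').info('Written to my_info.log')  # reuse the same named logger anywhere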
I deploy Greengrass components onto my EC2 instance. The deployed Greengrass components generate logs that wrap around my Python log.
What is causing the "wrapping" around it? How can I remove these wraps?
For example, in the line below the outer text (originally shown in bold) wraps the original Python log.
The inner log (originally shown in emphasis) is generated by my Python log formatter.
2022-12-13T23:59:56.926Z [INFO] (Copier) com.bolt-data.iot.RulesEngineCore: stdout. [2022-12-13 23:59:56,925][DEBUG ][iot-ipc] checking redis pub-sub health (io_thread[140047617824320]:pub_sub_redis:_connection_health_nanny_task:61).
{scriptName=services.com.bolt-data.iot.RulesEngineCore.lifecycle.Run, serviceName=com.bolt-data.iot.RulesEngineCore, currentState=RUNNING}
The following is my Python log formatter:

import logging
from sys import stdout

formatter = logging.Formatter(
    fmt="[%(asctime)s][%(levelname)-7s][%(name)s] %(message)s (%(threadName)s[%(thread)d]:%(module)s:%(funcName)s:%(lineno)d)"
)
# TODO: when we're running in the lambda function, don't stream to stdout
_handler = logging.StreamHandler(stream=stdout)
_handler.setLevel(get_level_from_environment())
_handler.setFormatter(formatter)
By default, Greengrass Nucleus captures the stdout and stderr streams from the processes it manages, including custom components. It then outputs each line of the logs with the prefix and suffix you have highlighted in bold. This cannot be changed. You can switch the format from TEXT to JSON, which can make the log easier for a machine to parse (check greengrass-nucleus-component-configuration - logging.format).
import logging
from logging.handlers import RotatingFileHandler
logger = logging.Logger(__name__)
_handler = RotatingFileHandler("mylog.log")
# additional _handler configuration
logger.addHandler(_handler)
If you want to output a log containing only what your application generates, change the logger configuration to write to a file, as in the snippet above. You can write the file in the work folder of the component or in another location, such as /var/log.
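To make that concrete, here is a sketch (file path, rotation sizes, and logger setup are assumptions, not the original component code) that combines the formatter from the question with a file-only handler, so the component's own log lines never go through stdout and therefore are not wrapped by the Nucleus:

import logging
from logging.handlers import RotatingFileHandler

formatter = logging.Formatter(
    fmt="[%(asctime)s][%(levelname)-7s][%(name)s] %(message)s (%(threadName)s[%(thread)d]:%(module)s:%(funcName)s:%(lineno)d)"
)

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# Rotate at ~5 MB, keep 3 backups; the path is a placeholder under /var/log
file_handler = RotatingFileHandler("/var/log/rules-engine.log", maxBytes=5_000_000, backupCount=3)
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)

logger.debug("written only to the rotating file, not to stdout")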
I'm using logging to log some information.
I use both a FileHandler and a StreamHandler in my logger, to output messages to the console and save them to a specific file.
In the console, to highlight some important messages, I use \033 escape codes. But these codes also end up in my log file and show up as weird symbols. The following is my current usage example:
import logging

logger = logging.getLogger('SayHello')
file_hdlr = logging.FileHandler('test.log')
file_hdlr.setFormatter(logging.Formatter('%(message)s'))
console_hdlr = logging.StreamHandler()
console_hdlr.setFormatter(logging.Formatter('%(message)s'))
logger.addHandler(file_hdlr)
logger.addHandler(console_hdlr)
logger.setLevel(logging.INFO)
logger.info('\033[1;31mHello World\033[0m')
Output in console:
Hello World (in red)
Output in log file:
[1;31mHello World[0m
How can I ignore the \033 and other color codes in my file handler? Should I override the FileHandler class? Thanks a lot!
After several hours of tracing, I found that FileHandler inherits from StreamHandler: FileHandler.emit calls StreamHandler.emit to write the message to the file.
To ignore the control sequences in the message, I had to override both FileHandler and StreamHandler, say CustomFileHandler and CustomStreamHandler, respectively.
CustomFileHandler.emit calls CustomStreamHandler.emit to write the message, after removing the control sequences from it. With a regular expression they are easy to find.
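As a minimal sketch of the idea (the class name and regex are mine, not from the original answer), it is enough to subclass FileHandler and strip the escape sequences from a copy of the record before it is written, so the console handler still shows the colors:

import logging
import re

# Matches ANSI SGR sequences such as \033[1;31m and \033[0m
ANSI_ESCAPE = re.compile(r'\x1b\[[0-9;]*m')

class PlainFileHandler(logging.FileHandler):
    """FileHandler that removes ANSI color codes before writing to the file."""
    def emit(self, record):
        # Work on a copy so the colored message still reaches the StreamHandler
        record = logging.makeLogRecord(record.__dict__)
        record.msg = ANSI_ESCAPE.sub('', str(record.msg))
        super().emit(record)

logger = logging.getLogger('SayHello')
logger.setLevel(logging.INFO)

file_hdlr = PlainFileHandler('test.log')
file_hdlr.setFormatter(logging.Formatter('%(message)s'))
logger.addHandler(file_hdlr)

console_hdlr = logging.StreamHandler()
console_hdlr.setFormatter(logging.Formatter('%(message)s'))
logger.addHandler(console_hdlr)

logger.info('\033[1;31mHello World\033[0m')   # colored in the console, plain in test.log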
I set up a basic Python logger that writes to a log file and to stdout. When I run my program locally, log messages sent with logging.info appear as expected in the file and in the console. However, when I run the same program remotely via ssh -n user@server python main.py, neither the console nor the file shows any logging.info messages.
This is the code used to set up the logger:
import logging
import time

def setup_logging(model_type, dataset):
    file_name = dataset + "_" + model_type + time.strftime("_%Y%m%d_%H%M")
    logging.basicConfig(
        level=logging.INFO,
        format="[%(levelname)-5.5s %(asctime)s] %(message)s",
        datefmt='%H:%M:%S',
        handlers=[
            logging.FileHandler("log/{0}.log".format(file_name)),
            logging.StreamHandler()
        ])
I already tried the following things:
Sending a message to logging.warning: those messages appear as expected on the root logger. However, even without setting up the logger and falling back to the default configuration, logging.info messages do not show up.
The file and folder permissions seem to be alright and an empty file is created on disk.
Using print works as usual as well
If you look into the source code of the basicConfig function, you will see that the configuration is applied only when there are no handlers on the root logger:
_acquireLock()
try:
    force = kwargs.pop('force', False)
    if force:
        for h in root.handlers[:]:
            root.removeHandler(h)
            h.close()
    if len(root.handlers) == 0:
        handlers = kwargs.pop("handlers", None)
        if handlers is None:
            ...
I think one of the libraries you use configures logging on import. And as you can see from the sample above, one of the solutions is to use the force=True argument.
A possible disadvantage is that several popular data-science libraries keep a reference to the loggers they configure, so when you reconfigure logging yourself, their old loggers with the old handlers are still there and do not see your changes. In that case you will need to clean out the handlers for those loggers as well.
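A sketch of that fix, reusing the configuration from the question (force= requires Python 3.8+; the file name here is a placeholder):

import logging

logging.basicConfig(
    level=logging.INFO,
    format="[%(levelname)-5.5s %(asctime)s] %(message)s",
    datefmt='%H:%M:%S',
    handlers=[
        logging.FileHandler("log/example.log"),
        logging.StreamHandler()
    ],
    force=True)   # remove any handlers a library attached to the root logger on import

logging.info("this now reaches both the file and the console")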
I have found a lot of documentation and tutorials such as the official logging config docs, the official logging cookbook, and this nice tutorial by Fang.
Each of them has gotten me near to an answer, but not quite. My question is this:
When using Config Files, how can I use a logger with 2 separate handlers at 2 separate levels?
To clarify, here is an example of my YAML file:
---
version: 1
handlers:
  debug_console:
    class: logging.StreamHandler
    level: DEBUG
    .
    .
    .
  info_file_handler:
    class: logging.handlers.RotatingFileHandler
    level: INFO
    .
    .
    .
loggers:
  dev:
    handlers: [debug_console, info_file_handler]
  test:
    handlers: [info_file_handler]
root:
  handlers: [info_file_handler]
I want to have two ways to run the logger, where one way (dev) is more verbose than the other. Moreover, when running the dev logger, I want it to have two different levels for the two different handlers.
This is a snippet of the code to try to launch the logger:
import logging.config
import yaml

with open('logging.yaml', 'r') as f:
    log_cfg = yaml.safe_load(f.read())
logging.config.dictConfig(log_cfg)
my_logger = logging.getLogger('dev')
The dictConfig line above works correctly. I say this because when the code logs to the console, I see dev as the name in the printed output. (I have edited the YAML, but it contains %(name)s in the format.)
But there is something wrong with my_logger. Even though it is tied to the name of dev, none of the rest of the attributes seem to have been set. Specifically, I see:
>>> my_logger
<Logger dev_model (WARNING)>
I don't know the logging module well enough to understand where the problem is. What I want is:
When I activate the 'dev' logger, I want to launch 2 handlers, one which is at the DEBUG level and writes to console, the other which is at the INFO level and writes to a file.
How can this be done?
If I understand the question correctly, the problem is caused by the fact that the logger itself has a log level, not just the handlers. A logger's level defaults to WARNING, which seems to be what is set on your logger. If a generated message has a lower priority than the logger's level, it never even makes it to the handlers.
So try setting the logger's level to DEBUG. info_file_handler will still ignore any messages more verbose than its own level.
As for this part:
none of the rest of the attributes seem to have been set.
What happens there is that the logger's repr() method is called to convert the Logger to some sort of string representation in order to render it, and that representation is not guaranteed to show all the attributes of the object.
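To make the suggestion concrete, here is a minimal dictConfig sketch (the formatter and file name are assumptions, and only the dev logger is shown) where the logger's own level is DEBUG while each handler keeps its own threshold:

import logging
import logging.config

LOG_CFG = {
    'version': 1,
    'formatters': {
        'plain': {'format': '[%(levelname)s] %(name)s: %(message)s'},
    },
    'handlers': {
        'debug_console': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'plain',
        },
        'info_file_handler': {
            'class': 'logging.handlers.RotatingFileHandler',
            'level': 'INFO',
            'formatter': 'plain',
            'filename': 'app.log',   # placeholder file name
        },
    },
    'loggers': {
        'dev': {
            'level': 'DEBUG',         # the logger itself must be DEBUG, not the default WARNING
            'handlers': ['debug_console', 'info_file_handler'],
        },
    },
}

logging.config.dictConfig(LOG_CFG)
my_logger = logging.getLogger('dev')
my_logger.debug('console only: below INFO, so the file handler drops it')
my_logger.info('console and file')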
Such a long question... Too long for me to understand well.
But I think you misunderstand how handlers work. Actually, the logger itself doesn't output anything; the handlers do.
So let's say you set DEBUG on the dev logger: it will pass logs >= DEBUG to all of its handlers. The debug_console handler will then process logs >= DEBUG, but info_file_handler will only process logs >= INFO. Setting DEBUG on the dev logger won't make info_file_handler output logs < INFO. So you can indeed have two separate levels, one >= DEBUG that goes to the console and another >= INFO that goes to the file.
I am presuming I understand you rightly...
EDIT: The reason that the :root: was appearing was because I typed logging.error(...) instead of logger.error(...). This sent the message to the root logger, which changed the general formatting, including the logger name shown. Correcting it to logger and just adding the error name to the message creates the correct output.
OLD TEXT
With the logging package I am trying to log a custom error. When doing so, the error message shows up, but the custom error exception appears to show up as root. A simple example is shown below, running on Python 3.6.6.
Input looks like:
import logging
import logging.config
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
HANDLER = logging.FileHandler('test.txt')
logger.addHandler(HANDLER)
FORMATTER = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
HANDLER.setFormatter(FORMATTER)
class FailedToDoSimpleTaskError(Exception):
    pass

def fail_todo_thing():
    raise FailedToDoSimpleTaskError('It was a good attempt though')

try:
    fail_todo_thing()
except FailedToDoSimpleTaskError as err:
    logging.error(err)
Output looks like:
__main__-ERROR:root:It was a good attempt though
What I am trying to understand is why it shows up with :root:, and if there is any way to have it show up instead with :FailedToDoSimpleTaskError: ?
As you stated in your edit, if you replace logging.error(err) with logger.error(err), you get the correct output: __main__ - ERROR - It was a good attempt though.
The reason for root appearing in your message is explained in the logging documentation. In that documentation you can also find out about changing the format of displayed messages.
Since you used logging.error(err), you were not invoking the logger that has the format you wanted; hence the root word appeared.
You should also have noticed that when using logging.error(err) the message goes to the console and is not added to your log file. And if you try logging.info, no message is written at all, because the default level of the root logger is WARNING; the INFO level you set applies to your own logger (the one the HANDLER is attached to), not to the root logger. So logging.info("Info test") produces nothing in the console or in your log file, whereas logger.info("Info test") writes "Info test" to your log file.
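A short sketch of the corrected call, using the names from the question (the expected file content follows from the '%(name)s - %(levelname)s - %(message)s' formatter; the second line is one way to include the exception name in the message, as mentioned in the edit):

try:
    fail_todo_thing()
except FailedToDoSimpleTaskError as err:
    logger.error(err)                                # test.txt: __main__ - ERROR - It was a good attempt though
    logger.error("%s: %s", type(err).__name__, err)  # test.txt: __main__ - ERROR - FailedToDoSimpleTaskError: It was a good attempt though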