Log disabling in Python

I am new to the logging module.
logging.basicConfig(level=logging.DEBUG)
logging.disable = True
As per my understanding, this should disable debug logs, but when it is executed it still prints debug logs.
I only have debug logs to print; I don't have critical or info logs. So how can I disable these debug logs?

logging.disable is a method, not a configurable attribute.
You can disable logging with it:
https://docs.python.org/2/library/logging.html#logging.disable
To disable all, call:
logging.disable(logging.DEBUG)
This will disable all logs of level DEBUG and below.
To re-enable all logging, call logging.disable(logging.NOTSET), as NOTSET is the lowest level.
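A minimal sketch of that behaviour (the messages are just placeholders):
import logging

logging.basicConfig(level=logging.DEBUG)
logging.debug('visible')             # printed, DEBUG is enabled

logging.disable(logging.DEBUG)       # suppress DEBUG and everything below
logging.debug('hidden')              # no longer printed

logging.disable(logging.NOTSET)      # lift the override
logging.debug('visible again')       # printed again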

The level argument you've set to logging.DEBUG in logging.basicConfig is the lowest level of logging that will be displayed.
The order of logging levels is documented here.
If you don't want to display DEBUG messages, you can either set logging.basicConfig(level=logging.INFO) or disable levels via logging.disable(logging.DEBUG).

You can also change to level=logging.CRITICAL and receive only critical logs.
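A minimal sketch of that approach (the messages are just placeholders):
import logging

logging.basicConfig(level=logging.CRITICAL)   # only CRITICAL records pass the root logger's level
logging.debug('suppressed')
logging.critical('still printed')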

Related

Add python logger to stream logs to CloudWatch within Fargate

I have a Docker container with a Python script (Python 3.8), which I execute in AWS Fargate via Airflow (ECSOperator). The script streams several logs to CloudWatch using the awslogs driver defined in the task definition. I can see all the logs correctly in CloudWatch, but the problem is that my logs are always attached to a main log message; that is, my logs are shown inside another log message.
Here is an example of a log, where the first 3 columns are injected automatically, whereas the rest of the message refers to my custom log:
[2021-11-04 17:23:22,026] {{ecs.py:317}} INFO - [2021-11-04T17:22:47.719000] 2021-11-04 17:22:47,718 - myscript - WARNING - testing log message
Thus, no matter which log level I set, the first log message is always INFO. It seems to be something that Fargate adds automatically. I would like my log message to stream directly to CloudWatch without being wrapped inside another log message, just:
[2021-11-04T17:22:47.719000] 2021-11-04 17:22:47,718 - myscript - WARNING - testing log message
I assume that I'm not configuring the logger correctly or that I have to get another logger, but I don't know how to do it properly. These are some of the approaches I followed and the results I obtained.
Prints
If I use prints within my code, the log messages are placed on stdout, so they are streamed to CloudWatch through the awslogs driver.
[2021-11-04 17:23:22,026] {{ecs.py:317}} INFO - testing log message
Logging without configuration
If I use the logger without any ConsoleHandler or StreamHandler configured, the generated log messages are equal to the ones created with prints.
import logging
logger = logging.getLogger(__name__)
logger.warning('testing log message')
[2021-11-04 17:23:22,026] {{ecs.py:317}} INFO - testing log message
Logging with StreamHandler
If I configure a StreamHandler with a formatter, then my log is attached to the main log, as stated before. Thus, it just replaces the string message (last column) with the new formatted log message.
import logging
import sys
logger = logging.getLogger(__name__)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.warning('testing log message')
[2021-11-04 17:23:22,026] {{ecs.py:317}} INFO - [2021-11-04T17:22:47.719000] 2021-11-04 17:22:47,718 - myscript - WARNING - testing log message
This is the log configuration defined within the task definition:
"logConfiguration": {
"logDriver": "awslogs",
"secretOptions": [],
"options": {
"awslogs-group": "/ecs/my-group",
"awslogs-region": "eu-west-1",
"awslogs-stream-prefix": "ecs"
}
EDIT 1
I've been investigating the logs in CloudWatch and I found out that the logs are streamed to 2 different log groups, since I'm using Airflow to launch Fargate.
Airflow group: Airflow automatically creates a log group named <airflow_environment>-Task where it places the logs generated within the tasks. Here, it seems that Airflow wraps my custom logs within its own log messages, which are always INFO. When visualizing the logs from the Airflow UI, it shows the logs obtained from this log group.
ECS group: this is the log group defined in the TaskDefinition (/ecs/my-group). In this group, the logs are streamed as they are, without being wrapped.
Hence, the problem seems to be with Airflow, as it wraps the logs within its own logger and shows these logs in the Airflow UI. Anyway, the logs are correctly delivered and formatted in the log group defined in the TaskDefinition.
Probably a bit late and you may have already solved it, but I think the solution is here. Fargate probably pre-configures a log handler, like Lambda does, given that the awslogs configuration is in the task definition.
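If that is the case, a rough sketch would be to drop whatever handlers were pre-configured on the root logger and install your own (this is only an assumption about how the runtime wires logging; the format string is the one from the question):
import logging
import sys

root = logging.getLogger()
# Remove whatever handlers the runtime may have attached already
for handler in list(root.handlers):
    root.removeHandler(handler)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
root.addHandler(handler)
root.setLevel(logging.INFO)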

Why doesn't this logging setting work for INFO?

import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.info('good')
logger.warning('bad')
It only prints out 'bad', not 'good'. What's the issue with setLevel(logging.INFO)?
You never configured handlers, so the logging system is using the "last resort" handler, which defaults to a level of WARNING. Even if the logger levels say to log a message, it won't be handled unless there's a handler configured to handle it.
Run logging.basicConfig with the settings you want, to perform basic handler configuration.
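A minimal sketch of that fix:
import logging

logging.basicConfig(level=logging.INFO)   # configures a handler on the root logger

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
logger.info('good')      # now printed
logger.warning('bad')    # printed as before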

Does anyone know how to suppress all Airflow "info" level logs, but not suppress application-specific logs?

Airflow 1.10.1 has an attribute called "logging_level" that I believe is tied to the Python logging level. When the value is INFO or lower, the output logs are too verbose and unnecessary in deployments.
Rather, I want to log just Airflow framework errors plus everything I want my application to log. That way I cut the logging down to something minimal, mostly in the context of the application, and only keep Airflow framework/execution errors.
In a particular PythonOperator I wrote log messages at 5 different levels to see what happens to them when I modify the airflow.cfg logging_level.
logging.debug('******************* HELLO debug *******************')
logging.info('******************* HELLO info *******************')
logging.warning('******************* HELLO warning *******************')
logging.error('******************* HELLO error *******************')
logging.critical('******************* HELLO critical *******************')
The idea being that by changing the airflow.cfg attribute for logging_level from debug to info to warning, I can see less and less of the airflow logs, and just leave the application specific logs I want.
Step 1: logging_level = DEBUG
Here's the log from the task, which has messages at all levels from debug upward.
Step 2: logging_level = INFO
As expected, the logs do not include debug level messages.
Step 3: logging_level = WARNING
When we go up from INFO to WARNING, the file is empty. I was expecting the warning, error, and critical messages in the file and the Airflow messages suppressed, since the log did not contain anything from Airflow above the INFO level.
Step 4: logging_level = ERROR
The same problem here again, the file is empty. I expected to get the error and critical messages, but the file is empty.
Note: in the last two screenshots, it's not that the path is invalid; it seems Airflow just displays the path to the log file when the file has no content.
So my question is:
1) Is this just an Airflow bug?
2) Am I not using this properly? Or do I need to do something else in order to suppress Airflow logs at INFO and below in production, and just keep my application-specific logs?
If you look at your log screenshots, your log messages are actually wrapped in an INFO log. If you want to actually change the log level within the task log, and not have your message wrapped, you can pull the logger off the task instance (from the **kwargs) and use it directly, as opposed to generically calling logging.warning().
Here is an example:
def your_python_callable(**kwargs):
    log = kwargs["ti"].log
    log.warning("******HELLO Debug******")
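For completeness, a sketch of how that callable might be wired into a task on Airflow 1.10 (the task_id and the dag variable are placeholders; provide_context=True is what makes **kwargs, and hence ti, available there):
from airflow.operators.python_operator import PythonOperator

def your_python_callable(**kwargs):
    # The task instance carries the logger that writes to the task log
    log = kwargs["ti"].log
    log.warning("******HELLO warning******")

task = PythonOperator(
    task_id="log_example",              # placeholder task id
    python_callable=your_python_callable,
    provide_context=True,               # needed on Airflow 1.10 to receive **kwargs
    dag=dag,                            # assumes an existing DAG object named dag
)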

Two Pylons logger handlers (Sentry/Raven and console) for the same qualname

I have a Pylons/TurboGears app. I would like the same logger (as specified by the qualname property) to use two different log handlers, each with its own log level.
The Sentry / Raven logger should receive only WARN+ level SQLAlchemy messages, and the console logger should receive INFO+ level SQLAlchemy messages.
Here's my abbreviated ini file:
[loggers]
keys = root, sqlalchemy_console, sqlalchemy_sentry
[handlers]
keys = console, sentry
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console, sentry
[logger_sqlalchemy_console]
level = INFO
handlers = console
qualname = sqlalchemy.engine
propagate = 0
[logger_sqlalchemy_sentry]
level = WARN
handlers = sentry
qualname = sqlalchemy.engine
propagate = 0
However, the logger_sqlalchemy_sentry seems to override logger_sqlalchemy_console and steal its messages. This occurs regardless of the order of loggers in the ini file.
Is it possible using Pylons to log the same logger/qualname to multiple places with different levels?
If so, is it possible for Sentry/Raven to be one of those loggers? Is there something wrong with my ini file, or is there a bug in Raven?
The problem you're having is that you're configuring the sqlalchemy.engine logger twice. The logger sections correspond to instances of logging.Logger, the objects returned by logging.getLogger(qualname). Only one object can be returned by that call; you can't set up more than one logger with the same qualname.
What you need is multiple handlers for that logger, in the same way that you gave your root logger multiple handlers. You can then specify the desired log level on the individual handlers.
Unfortunately, fileConfig() doesn't give you an easy way to configure the same handler with different log levels depending on the logger that originated the record, so you'll need to set up duplicate handler sections for both the root and sqlalchemy.engine loggers in order to have different log levels for them.
You're getting loggers and handlers mixed up - as TokenMacGuy says, you need two handlers for the logger named sqlalchemy.engine. Configure the StreamHandler (console) with level INFO and a SentryHandler (sentry) with level WARNING, and the sqlalchemy.engine logger with a level of DEBUG or INFO. Then you should get the desired result. (Even when DEBUG messages are logged, the levels on the handlers will prevent them emitting events which are less than their level.)
Unless you're limited to Python 2.6 or earlier, it's worth considering using the logging.config.dictConfig API in preference to logging.config.fileConfig, if you can. The dictConfig API allows better control over logging configuration than the older fileConfig API, which will not be further developed.
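A rough dictConfig sketch of that setup (the Sentry handler class path is the one the raven package ships; the DSN and exact handler arguments are placeholders for your environment):
import logging.config

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'generic': {'format': '%(asctime)s %(levelname)-5.5s [%(name)s] %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'INFO',
            'formatter': 'generic',
        },
        'sentry': {
            'class': 'raven.handlers.logging.SentryHandler',   # adjust to your Raven/Sentry setup
            'level': 'WARNING',
            'dsn': 'https://<key>@sentry.example.com/1',        # placeholder DSN
        },
    },
    'loggers': {
        'sqlalchemy.engine': {
            'level': 'INFO',
            'handlers': ['console', 'sentry'],
            'propagate': False,
        },
    },
    'root': {
        'level': 'INFO',
        'handlers': ['console', 'sentry'],
    },
}

logging.config.dictConfig(LOGGING)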

syslog.syslog vs SysLogHandler

I'm looking at how to log to syslog from within my Python app, and I found there are two ways of doing it:
Using syslog.syslog() routines
Using the logging module's SysLogHandler
Which is the best option to use, and what are the advantages/disadvantages of each one? I really don't know which one I should use.
syslog.syslog() can only be used to send messages to the local syslogd. SysLogHandler can be used as part of a comprehensive, configurable logging subsystem, and can log to remote machines.
The logging module is a more comprehensive solution that can potentially handle all of your log messages, and is very flexible. For instance, you can set up multiple handlers for your logger and each can be set to log at a different level. You can have a SysLogHandler for sending errors to syslog, a FileHandler for debugging logs, and an SMTPHandler to email the really critical messages to ops. You can also define a hierarchy of loggers within your modules, each with its own level, so you can enable/disable messages from specific modules, such as:
import logging
logger = logging.getLogger('package.stable_module')
logger.setLevel(logging.WARNING)
And in another module:
import logging
logger = logging.getLogger('package.buggy_module')
logger.setLevel(logging.DEBUG)
The log messages in both of these modules will be sent, depending on the level, to the 'package' logger and ultimately to the handlers you've defined. You can also add handlers directly to the module loggers, and so on. If you've followed along this far and are still interested, then I recommend jumping to the logging tutorial for more details.
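A sketch of the kind of setup described above (the syslog address and file path are placeholders):
import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger('package')

# Errors and above go to the local syslog daemon
syslog_handler = SysLogHandler(address='/dev/log')   # placeholder; use ('host', 514) for a remote syslog server
syslog_handler.setLevel(logging.ERROR)

# Everything from DEBUG up also goes to a file
file_handler = logging.FileHandler('debug.log')      # placeholder path
file_handler.setLevel(logging.DEBUG)

logger.addHandler(syslog_handler)
logger.addHandler(file_handler)
logger.setLevel(logging.DEBUG)

logger.error('goes to syslog and the file')
logger.debug('goes to the file only')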
There is one disadvantage of logging.handlers.SysLogHandler that has not been mentioned yet: you can't set options like LOG_ODELAY, LOG_NOWAIT, or LOG_PID. On the other hand, LOG_CONS and LOG_PERROR can be achieved by adding more handlers, and LOG_NDELAY is effectively the default, because the connection opens when the handler is instantiated.
