Log who ran a python script: cron or human? - python

I created a Python script which is usually run by a cron job, but the script can at times be run manually by a human. Is it possible to determine who ran the script and save that in a log file?
I'm using Python's logging library. It seems the LogRecord name attribute only shows root as the logger used to log the call.
log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'

How about using command line options?
https://docs.python.org/3/library/argparse.html
When triggering the script from cron, pass a special argument that defaults to something else when it isn't explicitly set.
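As a sketch of that idea: the crontab entry passes the flag explicitly, and the argparse default covers manual runs. The flag name --invoked-by and the helper function below are illustrative, not from the original script:

```python
import argparse
import logging

log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'

def detect_invoker(argv=None):
    """Return 'cron' when the caller passes --invoked-by cron,
    and default to 'human' for manual runs."""
    parser = argparse.ArgumentParser()
    parser.add_argument('--invoked-by', default='human',
                        choices=['human', 'cron'])
    args = parser.parse_args(argv)
    return args.invoked_by

if __name__ == '__main__':
    logging.basicConfig(filename='script.log', format=log_format,
                        level=logging.INFO)
    # The invoker shows up in the message, so grepping the log
    # tells cron runs and manual runs apart
    logging.info('script started (invoked by: %s)', detect_invoker())
```

The crontab entry would then look something like `*/5 * * * * /usr/bin/python3 /path/to/script.py --invoked-by cron`, while a human typing `python3 script.py` gets "human" logged by default.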

Related

python logging in AWS Fargate, datetime duplicated

I'm trying to use the Python logging module in AWS Fargate. The same application should also work locally, so I'd like to use a custom logger for local use while keeping the CloudWatch logs intact.
This is what I'm doing:
if logging.getLogger().hasHandlers():
    log = logging.getLogger()
    log.setLevel(logging.INFO)
else:
    from logging.handlers import RotatingFileHandler
    log = logging.getLogger('sm')
    log.root.setLevel(logging.INFO)
    ...
But I get this in cloudwatch:
2023-02-08T13:06:27.317+01:00 08/02/2023 12:06 - sm - INFO - Starting
And this locally:
08/02/2023 12:06 - sm - INFO - Starting
I thought Fargate was already defining a logger, but apparently the following has no effect:
logging.getLogger().hasHandlers()
Ideally this should be the desired log in cloudwatch:
2023-02-08T13:06:27.317+01:00 sm - INFO - Starting
Fargate just runs docker containers. It doesn't do any setup of your Python code that happens to be running in that docker container for you. It doesn't even know or care that you are running Python code.
Anything written to STDOUT/STDERR by the primary process of the docker container gets sent to CloudWatch Logs, so if you want to be compatible with ECS CloudWatch Logs just make sure you are sending logs in the format you want to the console.
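A minimal sketch of that advice: point the root logger at stdout and let the awslogs driver forward whatever appears there. The format below is chosen to match the desired output in the question; it is an assumption, not something ECS mandates:

```python
import logging
import sys

# Route the root logger to stdout; in an ECS task using the awslogs
# driver, anything the primary process writes there lands in CloudWatch
logging.basicConfig(stream=sys.stdout,
                    format='%(name)s - %(levelname)s - %(message)s',
                    level=logging.INFO)

logging.getLogger('sm').info('Starting')
```

Because CloudWatch prefixes each line with its own ingestion timestamp, there is no need for %(asctime)s in the format, which is exactly what produced the duplicated datetime above.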
You can use Python's logging.basicConfig to configure the root logger. The module-level debug, info, warning, error and critical functions call basicConfig automatically if no handlers are defined on the root logger.
logging.basicConfig(filename='test.log', format='%(filename)s: %(message)s',
                    level=logging.DEBUG)
Set the logging format to include the details you need as arguments:
logging.basicConfig(format='%(asctime)s %(name)s - %(levelname)s - %(message)s', level=logging.INFO)
Use this to format logs in CloudWatch. I found one Stack Overflow answer with a detailed explanation here
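Building on that, one way to keep a single code path for local and CloudWatch runs is to switch the format on an environment check and pass force=True (Python 3.8+) so basicConfig replaces any handler that was configured earlier. The in_cloud flag here is an assumption about how you detect the environment — it could, for example, come from an environment variable set in the task definition:

```python
import logging

def configure_logging(in_cloud):
    """Drop %(asctime)s when running in the cloud, since CloudWatch
    already prefixes every line with an ingestion timestamp."""
    fmt = ('%(name)s - %(levelname)s - %(message)s' if in_cloud
           else '%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    # force=True removes any handlers configured before this call,
    # including ones a runtime may have pre-installed
    logging.basicConfig(format=fmt, level=logging.INFO, force=True)
```

Locally you get the timestamped lines, while in the container the same logger emits "sm - INFO - Starting" and CloudWatch supplies the timestamp.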

python logging in aws lambda

Something just doesn't click internally for me with Python's logging, despite reading the documentation.
I have this code
import logging
logging.basicConfig(level=logging.INFO,format='%(levelname)s::%(message)s')
LOG = logging.getLogger(__name__)
LOG.info("hey")
If I run it from bash I get this:
INFO::hey
If I run it in an AWS Lambda, the "hey" doesn't show up at all in the logs.
I then did a test setting the level on the logger by adding this:
LOG.setLevel(logging.INFO)
Run from bash I get the same thing I got before (desired format), but run from the lambda this shows up in the logs:
[INFO] 2022-02-14T23:30:43.39Z eb94600a-af45-4124-99b6-d9651d6a3cf6 hey
Okay... that is pretty odd. The format is not the same as from bash.
I thought I could rationalize the first example because the output on bash actually goes to stderr, and I assumed the AWS Lambda logs just don't capture that. But the second example also goes to stderr on bash, yet it shows up in the Lambda logs, with the wrong format. So clearly I am missing something.
What is going on under the hood here?
When your Lambda runs, a harness is running that does some basic bootstrap and then loads your module and invokes it. Part of that bootstrap in the AWS Lambda Python Runtime replaces the standard Python logger with its own:
logger_handler = LambdaLoggerHandler(log_sink)
logger_handler.setFormatter(
    logging.Formatter(
        "[%(levelname)s]\t%(asctime)s.%(msecs)dZ\t%(aws_request_id)s\t%(message)s\n",
        "%Y-%m-%dT%H:%M:%S",
    )
)
logger_handler.addFilter(LambdaLoggerFilter())
This behavior is formally documented by AWS in "AWS Lambda function logging in Python", under the section "Logging library".
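This also explains the first observation: basicConfig is a no-op when the root logger already has a handler, which in Lambda it does. A common workaround (a sketch, not an official AWS recipe) is to reuse the pre-installed handler but swap in your own formatter:

```python
import logging

def setup_logging():
    """Reformat whatever handler the runtime pre-installed on the
    root logger; fall back to basicConfig outside Lambda."""
    root = logging.getLogger()
    if root.handlers:
        # Lambda case: keep the runtime's handler, change only the format
        for handler in root.handlers:
            handler.setFormatter(logging.Formatter('%(levelname)s::%(message)s'))
        root.setLevel(logging.INFO)
    else:
        # bash case: no handler yet, basicConfig installs one
        logging.basicConfig(level=logging.INFO,
                            format='%(levelname)s::%(message)s')
```

Note that this discards the aws_request_id field the default Lambda formatter includes, which may or may not be acceptable for your logs.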

Add python logger to stream logs to CloudWatch within Fargate

I have a docker container with a Python script (Python 3.8), which I execute in AWS Fargate via Airflow (ECSOperator). The script streams several logs to CloudWatch using the awslogs driver defined in the task definition. I'm able to see all the logs correctly in CloudWatch, but the problem is that my logs are always attached to a main log message, that is, they are displayed inside another log message.
Here is an example of a log, where the first 3 columns are injected automatically, whereas the rest of the message refers to my custom log:
[2021-11-04 17:23:22,026] {{ecs.py:317}} INFO - [2021-11-04T17:22:47.719000] 2021-11-04 17:22:47,718 - myscript - WARNING - testing log message
Thus, no matter which log level I set, the first log message is always INFO. It seems to be something that Fargate adds automatically. I would like my log message to stream directly to CloudWatch without being wrapped in another log message, just:
[2021-11-04T17:22:47.719000] 2021-11-04 17:22:47,718 - myscript - WARNING - testing log message
I assume that I'm not configuring the logger correctly or that I have to get another logger, but I don't know how to do it properly. These are some of the approaches I followed and the results I obtained.
Prints
If I use prints within my code, the log messages are placed on stdout, so they are streamed to CloudWatch through the awslogs driver.
[2021-11-04 17:23:22,026] {{ecs.py:317}} INFO - testing log message
Logging without configuration
If I use the logger without any ConsoleHandler or StreamHandler configured, the generated log messages are equal to the ones created with prints.
import logging
logger = logging.getLogger(__name__)
logger.warning('testing log message')
[2021-11-04 17:23:22,026] {{ecs.py:317}} INFO - testing log message
Logging with StreamHandler
If I configure a StreamHandler with a formatter, then my log is attached to the main log, as stated before. It just replaces the string message (last column) with the new formatted log message.
import logging
logger = logging.getLogger(__name__)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.warning('testing log message')
[2021-11-04 17:23:22,026] {{ecs.py:317}} INFO - [2021-11-04T17:22:47.719000] 2021-11-04 17:22:47,718 - myscript - WARNING - testing log message
This is the log configuration defined within the task definition:
"logConfiguration": {
    "logDriver": "awslogs",
    "secretOptions": [],
    "options": {
        "awslogs-group": "/ecs/my-group",
        "awslogs-region": "eu-west-1",
        "awslogs-stream-prefix": "ecs"
    }
}
EDIT 1
I've been investigating the logs in Cloudwatch and I found out that the logs are streaming to 2 different log groups, since I'm using Airflow to launch fargate.
Airflow group: Airflow automatically creates a log group named <airflow_environment>-Task where it places the logs generated within the tasks. Here, it seems that Airflow wraps my custom logs inside its own log messages, which are always INFO. When visualizing the logs from the Airflow UI, it shows the logs obtained from this log group.
ECS group: this is the log group defined in the TaskDefinition (/ecs/my-group). In this group, the logs are streamed as they are, without being wrapped.
Hence, the problem seems to be with Airflow, as it wraps the logs within its own logger and shows these logs in the Airflow UI. Anyway, logs are correctly delivered and formatted in the log group defined in the TaskDefinition.
Probably a bit late and you have already solved it, but I think the solution is here: Fargate probably pre-configures a log handler, as Lambda does, given that there is awslogs configuration in the task definition.

How to disable discord.py logger?

So I've tried to add a logger to my Discord bot, to see the logs in a file and not just in the console, because it's irritating to reset the app and find out that the logs I need have already been destroyed. I set it up like this:
logging.basicConfig(filename='CoronaLog.log', level=logging.DEBUG, format='%(levelname)s %(asctime)s %(message)s')
And I learned the hard way that the discord.py library installs its own logger, so now my logs look like one big mess. Is there any way to disable discord.py's logging, or at least send it to another file?
EDIT: I've tried creating two loggers, so it would look like this:
logging.basicConfig(filename='discord.log', level=logging.DEBUG, format='%(levelname)s %(asctime)s %(message)s')
nonDiscordLog = logging.getLogger('discord')
handler = logging.FileHandler(filename='CoronaLog.log', encoding='utf-8', mode='w')
handler.setFormatter(logging.Formatter('%(levelname)s %(asctime)s:%(name)s: %(message)s'))
nonDiscordLog.addHandler(handler)
So the discord log would be written to the discord.log file, as the basic config says, and when executed like this:
nonDiscordLog.info("execution took %s seconds \n" % (time.time() - startTime))
it would log into the CoronaLog.log file, although it didn't really change anything.
discord.py is in this regard terribly unintuitive for everyone but beginners; anyway, after consulting the docs you can find that this behavior can be avoided with:
import discord
client = discord.Client(intents=discord.Intents.all())
# Your regular code here
client.run(__YOURTOKEN__, log_handler=None)
Of course, you can supply your own logger instead of None, but to answer your question exactly, this is how you can disable discord.py's logging.
There's actually a bit more that you can do with the default logging, and you can read all about it in the official docs.
https://discordpy.readthedocs.io/en/latest/logging.html says:
"discord.py logs errors and debug information via the logging python module. It is strongly recommended that the logging module is configured, as no errors or warnings will be output if it is not set up. Configuration of the logging module can be as simple as:
import logging
logging.basicConfig(level=logging.INFO)
Placed at the start of the application. This will output the logs from discord as well as other libraries that use the logging module directly to the console."
Maybe try configuring logging a different way? Because when logging starts, it appears to initialize discord.py's logging. Maybe try:
import logging
# Setup logging...
import discord
Maybe if you import it afterwards, it won't set it up.

Django management command doesn't show logging output from python library

I have written a Python module that I'm using in my Django app, so that it can be generalised. I have a Django management command that calls this library.
Inside the library I've tried to use proper Python logging. I have written a simple command-line programme that calls the library and dumps logging data to stdout, and that works. However, when this library is called from the Django management command, there is logging output from the management command but none from the library. It is as if there were no logging calls in the library.
I would like the logging output from the library and the django management command to appear on stdout. As if I just used print in both the django management command and the library.
Inside the django management command, I call this, so that all logging output goes to the terminal. I want logging output from this django management command and the library to go to the terminal.
import logging
logger = logging.getLogger(__name__)

class Command(BaseCommand):
    def handle(self, *args, **options):
        # ...
        ch = logging.StreamHandler()
        ch.setLevel(logging.DEBUG)
        formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
        ch.setFormatter(formatter)
        logger.addHandler(ch)
        logger.setLevel(logging.DEBUG)
Inside the library, I do logger = logging.getLogger(__name__) at the top of the file as well.
I am new to using the Python logging framework (usually I just print ;)), so I might be making a simple mistake or asking for something that's deliberately impossible. This is Django 1.6.5 and Python 2.7.6. My library is pure Python and single threaded; it's all very simple.
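No answer was recorded here, so this is a sketch of the usual cause and fix: a handler added to the management command's own __name__ logger only sees records logged to that logger or its children. The library logs under its own module name, so its records propagate up to the root logger instead, which has no handler. Attaching the handler to the root logger makes both sets of records visible:

```python
import logging

def configure_logging():
    """Attach the stream handler to the root logger so records from
    every module (the command and the library) reach it via propagation."""
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)
    ch.setFormatter(
        logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
    root = logging.getLogger()       # the root logger, not __name__
    root.addHandler(ch)
    root.setLevel(logging.DEBUG)
```

One caveat: Django's LOGGING setting can also attach handlers or silence loggers (disable_existing_loggers), so if the library still stays quiet after this change, that configuration is the next place to look.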
