How to configure a logging system in one file in Python

I have two files.
The first is a TCP server; the second is a Flask app. They are one project, but they run in separate Docker containers.
They should write logs to the same file, since they belong to the same project.
I tried to create my own logging library and import it into both files.
I have tried lots of things.
First, I deleted the code below:
if logger.hasHandlers():
    logger.handlers.clear()
When I delete it, I get the same logs twice.
My structure:
docker-compose.yml
Dockerfile
loggingLib.py
app.py
tcp.py
requirements.txt
...
My latest logging code:
from logging.handlers import RotatingFileHandler
from datetime import datetime
import logging
import time
import os, os.path

project_name = "proje_name"

def get_logger():
    if not os.path.exists("logs/"):
        os.makedirs("logs/")
    now = datetime.now()
    file_name = now.strftime(project_name + '-%H-%M-%d-%m-%Y.log')
    log_handler = RotatingFileHandler('logs/' + file_name, mode='a', maxBytes=10000000, backupCount=50)
    formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(funcName)s - %(message)s', '%d-%b-%y %H:%M:%S')
    formatter.converter = time.gmtime
    log_handler.setFormatter(formatter)
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)
    if logger.hasHandlers():
        logger.handlers.clear()
    logger.addHandler(log_handler)
    return logger
It works, but only in one file.
If app.py starts first, only it writes logs; the other file doesn't write any logs.

Anything that directly uses files – config files, log files, data files – is a little trickier to manage in Docker than running locally. For logs in particular, it's usually better to set your process to log directly to stdout. Docker will collect the logs, and you can review them with docker logs. In this setup, without changing your code, you can configure Docker to send the logs somewhere else or use a log collector like fluentd or logstash to manage the logs.
In your Python code, you will usually want to configure the detailed logging setup once, at the top level, on the root logger:
import logging

def main():
    logging.basicConfig(
        format='%(asctime)s - %(levelname)s - %(funcName)s - %(message)s',
        datefmt='%d-%b-%y %H:%M:%S',
        level=logging.INFO
    )
    ...
and in each individual module you can just get a local logger, which will inherit the root logger's setup:
import logging
LOGGER = logging.getLogger(__name__)
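For this project, a minimal sketch could look like the following (the file names come from the question; the setup function and the log message are placeholders):

# loggingLib.py - shared by both containers
import logging

def setup_logging():
    # configure the root logger once per process
    logging.basicConfig(
        format='%(asctime)s - %(levelname)s - %(funcName)s - %(message)s',
        datefmt='%d-%b-%y %H:%M:%S',
        level=logging.INFO,
    )

# app.py and tcp.py each do this at startup
import logging
from loggingLib import setup_logging

setup_logging()
LOGGER = logging.getLogger(__name__)
LOGGER.info('service started')  # goes to stderr by default; Docker collects it per container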
With its default setup, Docker will capture log messages into JSON files on disk. If a long-running container generates a large volume of log messages, this can lead to local disk exhaustion (it has no effect on the memory available to processes). The Docker logging documentation advises using the local file logging driver, which does automatic log rotation. In a Compose setup you can specify logging: options:
version: '3.8'
services:
  app:
    image: ...
    logging:
      driver: local
You can also configure log rotation on the default JSON File logging driver:
version: '3.8'
services:
  app:
    image: ...
    logging:
      driver: json-file  # default, can be omitted
      options:
        max-size: "10m"
        max-file: "50"
You "shouldn't" directly access the logs, but they are in a fairly stable format in /var/lib/docker, and tools like fluentd and logstash know how to collect them.
If you ever decide to run this application in a cluster environment like Kubernetes, that will have its own log-management system, but again designed around containers that directly log to their stdout. You would be able to run this application unmodified in Kubernetes, with appropriate cluster-level configuration to forward the logs somewhere. Retrieving a log file from opaque storage in a remote cluster can be tricky to set up.

Related

Log streams are random hash instead of logger name

So recently I moved my app into a Docker container.
I noticed that the log streams of the log group changed their names to some random hash.
Before moving to Docker, the stream names matched the logger names; after moving, each stream is a random hash.
The logger in each file is initialized as
logger = logging.getLogger(__name__)
The logger's config is set up inside the __main__ with
import logging.config
import yaml

def setup_logger(config_file):
    with open(config_file) as log_config:
        config_yml = log_config.read()
        config_dict = yaml.safe_load(config_yml)
        logging.config.dictConfig(config_dict)
with the config loaded from this file:
version: 1
disable_existing_loggers: False
formatters:
  json:
    format: "[%(asctime)s] %(process)d %(levelname)s %(name)s:%(funcName)s:%(lineno)s - %(message)s"
  plaintext:
    format: "%(asctime)s %(levelname)s %(name)s - %(message)s"
    datefmt: "%Y-%m-%d %H:%M:%S"
handlers:
  console:
    class: logging.StreamHandler
    formatter: plaintext
    level: INFO
    stream: ext://sys.stdout
root:
  level: DEBUG
  propagate: True
  handlers: [console]
The Docker image is run with the flags:
--log-driver=awslogs \
--log-opt awslogs-group=XXXXX \
--log-opt awslogs-create-group=true \
Is there a way to keep the original log stream names?
That's how the awslogs driver works.
Per the documentation, you can control the name somewhat using the awslogs-stream-prefix option:
The awslogs-stream-prefix option allows you to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task to which the container belongs. If you specify a prefix with this option, then the log stream takes the following format:
prefix-name/container-name/ecs-task-id
If you don't specify a prefix with this option, then the log stream is named after the container ID that is assigned by the Docker daemon on the container instance. Because it is difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
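Note that awslogs-stream-prefix is an ECS task-definition setting. If you're launching the container with plain docker run as in the question, the closest control Docker itself offers is naming the stream directly with the awslogs-stream option; a sketch, with an arbitrary stream name, which only makes sense when a single container writes to it:

--log-driver=awslogs \
--log-opt awslogs-group=XXXXX \
--log-opt awslogs-create-group=true \
--log-opt awslogs-stream=my-app-stream \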
You cannot change this behavior if you're using the awslogs driver. The only option would be to disable the log driver and use the AWS SDK to put the events into CloudWatch manually, but I don't think that'd be a good idea.
To be clear, your container settings/code don't affect the stream name at all when using awslogs - the log driver is just redirecting all of the container's STDOUT to CloudWatch.

Python logging not recording debug messages

I have created a package and I'm adding a logging file to log debug information. Here is my __init__.py file:
import logging

logging.basicConfig(filename='logs/tmp.log',
                    format='%(levelname)s %(asctime)s :: %(message)s',
                    level=logging.DEBUG)
logger = logging.getLogger('logs/tmp.log')
logger.debug('First debug!')
When I ran the code, the file logs/tmp.log wasn't even created, so I created the file manually just in case logging needs the file to exist, but it's not working either. Any ideas on what I am doing wrong?
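One likely explanation, offered as a guess that fits the symptoms: logging.basicConfig() silently does nothing if the root logger already has handlers (for example, if any code logged something before this __init__.py ran), and it also will not create a missing logs/ directory. A more defensive sketch (force= requires Python 3.8+):

import logging
import os

os.makedirs('logs', exist_ok=True)  # basicConfig will not create missing directories

logging.basicConfig(filename='logs/tmp.log',
                    format='%(levelname)s %(asctime)s :: %(message)s',
                    level=logging.DEBUG,
                    force=True)  # Python 3.8+: replaces any handlers configured earlier

logger = logging.getLogger(__name__)  # a logger name is a label, not a file path
logger.debug('First debug!')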

Logging with WSGI server and flask application

I am using a WSGI server to spawn the servers for my web application, and I am having a problem with information logging.
This is how I am running the app:
from gevent import monkey; monkey.patch_all()
from gevent import pywsgi as wsgi  # import assumed; the code below calls wsgi.WSGIServer
from logging.handlers import RotatingFileHandler
import logging
from app import app  # this imports app

# create a file to store weblogs
log = open(ERROR_LOG_FILE, 'w'); log.seek(0); log.truncate();
log.write("Web Application Log\n"); log.close();

log_handler = RotatingFileHandler(ERROR_LOG_FILE, maxBytes=1000000, backupCount=1)
formatter = logging.Formatter(
    "[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s"
)
log_handler.setFormatter(formatter)
app.logger.setLevel(logging.DEBUG)
app.logger.addHandler(log_handler)

# run the application
server = wsgi.WSGIServer(('0.0.0.0', 8080), app)
server.serve_forever()
However, on running the application it is not logging anything. I guess it must be because of WSGI server because app.logger works in the absence of WSGI. How can I log information when using WSGI?
According to the gevent pywsgi documentation, you need to pass your log handler object to the WSGIServer object at creation:
log – If given, an object with a write method to which request (access) logs will be written. If not given, defaults to sys.stderr. You may pass None to disable request logging. You may use a wrapper, around e.g., logging, to support objects that don’t implement a write method. (If you pass a Logger instance, or in general something that provides a log method but not a write method, such a wrapper will automatically be created and it will be logged to at the INFO level.)
error_log – If given, a file-like object with write, writelines and flush methods to which error logs will be written. If not given, defaults to sys.stderr. You may pass None to disable error logging (not recommended). You may use a wrapper, around e.g., logging, to support objects that don’t implement the proper methods. This parameter will become the value for wsgi.errors in the WSGI environment (if not already set). (As with log, wrappers for Logger instances and the like will be created automatically and logged to at the ERROR level.)
so you should be able to do wsgi.WSGIServer(('0.0.0.0', 8080), app, log=app.logger)
You can log like this:
import logging
import logging.handlers as handlers
.
.
.
logger = logging.getLogger('MainProgram')
logger.setLevel(10)  # 10 == logging.DEBUG
logHandler = handlers.RotatingFileHandler('filename.log', maxBytes=1000000, backupCount=1)
logger.addHandler(logHandler)
logger.info("Logging configuration done")
.
.
.
# run the application
server = wsgi.WSGIServer(('0.0.0.0', 8080), app, log=logger)
server.serve_forever()

How to programmatically tell Celery to send all log messages to stdout or stderr?

How does one turn on celery logging programmatically?
From the terminal, this works fine:
celery worker -l DEBUG
When I call get_task_logger(__name__).debug('hello'), I can see the message come up in the terminal (stdout and stderr are both displayed). I can even import logging and call logger.info('hi') and see that too (both work).
However, while developing a task, I prefer to use a test module and call the task function directly rather than firing up a whole worker. But I can't see the log messages. I understand that Celery is redirecting everything to its internal apparatus, but I want to see the log messages on the stdout too.
How do I tell Celery to send a copy of the log messages back to stdout?
I've read a bunch of online articles about logging, but it seems that a number of logging-related configuration vars from Celery have been deprecated, and it's unclear to me from the docs what the supported path is today.
Here is an example module that creates a celery object and attempts to log output. Nothing shows in the terminal.
Example mymodule.py:
from celery import Celery
import logging
from celery.utils.log import get_task_logger

app = Celery('test')
app.config_from_object('myfile', True)

get_task_logger(__name__).warn('hello world')
logging.getLogger(__name__).warn('hello world 2')
EDIT
I know that I can redirect some of the output back to the terminal by adding a handler:
import sys

log = get_task_logger(__name__)
h = logging.StreamHandler(sys.stdout)
log.addHandler(h)
But is there a "Celery way" to do this? Maybe one that also gives me the Celery-formatted lines of text.
[2014-03-02 15:51:32,949: WARNING] hello world
I have been looking at the same issue...
What seems to work best is to use the signal handler, according to http://docs.celeryproject.org/en/latest/userguide/signals.html#after-setup-logger
In your celery.py file use:
from celery.signals import after_setup_logger
import logging

@after_setup_logger.connect
def logger_setup_handler(logger, **kwargs):
    my_handler = MyLogHandler()
    my_handler.setLevel(logging.DEBUG)
    my_formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')  # custom formatter
    my_handler.setFormatter(my_formatter)
    logger.addHandler(my_handler)

    logging.info("My log handler connected -> Global Logging")

if __name__ == '__main__':
    app.start()
Then you can define MyLogHandler() as you wish.
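MyLogHandler isn't defined in the answer; as a minimal illustration, it can be any logging.Handler subclass, for example:

import logging

class MyLogHandler(logging.Handler):
    # placeholder handler: format each record and print it
    def emit(self, record):
        print(self.format(record))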
To send the logs to STDOUT you should also be able to use (I have not tested it):
my_handler = logging.StreamHandler(sys.stdout)

Why is my log level not being used when using loadapp from paste.deploy?

I want to temporarily turn on debug messages in a production Pyramid web project, so I adjusted the production.ini file, pushed it to Heroku, and saw only ERROR and WARN level messages.
That seemed odd, since if I start the Pyramid application like the following on my local PC, I get messages at all log levels.
env/bin/pserve production.ini
OK, so that's not exactly how it runs on Heroku; it is actually run from a little bit of Python that looks like this (in a file called runapp.py):
import os
from paste.deploy import loadapp
from waitress import serve

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 5000))
    app = loadapp('config:production.ini', relative_to='.')
    serve(app, host='0.0.0.0', port=port)
Now, sure enough if I do this on my local PC I get the same behavior as when it is deployed to Heroku (hardly surprising).
python runapp.py
My question is, what am I missing here? Why does running it the second way result in no log messages other than ERROR and WARN being output to standard out? Surely, since it is using the same production.ini file it should work the same as if I use the pserve process?
Here is my logging section from production.ini:
###
# logging configuration
# http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/logging.html
###

[loggers]
keys = root, test

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = DEBUG
handlers = console

[logger_test]
level = DEBUG
handlers = console
qualname = test

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = DEBUG
formatter = generic

[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s
PasteDeploy does not actually assume responsibility for configuring logging. This is a little quirk where the INI file is dual-purposed. There are sections that PasteDeploy cares about, and there are sections that logging.config.fileConfig cares about, and both must be run to fully load an INI file.
If you follow the pyramid wrappers for doing this, you'd do:
pyramid.paster.setup_logging(inipath)
pyramid.paster.get_app(inipath)
The main reason you would use these instead of doing it yourself is that they support doing "the right thing" when inipath contains a section specifier like development.ini#myapp, which fileConfig would crash on.
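Applied to the runapp.py from the question, that would look something like this (a sketch keeping the same host/port handling):

import os
from pyramid.paster import get_app, setup_logging
from waitress import serve

if __name__ == "__main__":
    inipath = 'production.ini'
    setup_logging(inipath)  # applies the logging half of the INI file via fileConfig
    app = get_app(inipath)  # loads the WSGI app, the part PasteDeploy handles
    port = int(os.environ.get("PORT", 5000))
    serve(app, host='0.0.0.0', port=port)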
