Logging with a WSGI server and a Flask application - Python

I am using a WSGI server to spawn the servers for my web application, and I am having a problem with information logging.
This is how I am running the app:
from gevent import monkey; monkey.patch_all()
from gevent import pywsgi as wsgi  # gevent's WSGI server
from logging.handlers import RotatingFileHandler
import logging
from app import app  # this imports the Flask app
# create/initialise the file that stores the web logs
with open(ERROR_LOG_FILE, 'w') as log:
    log.write("Web Application Log\n")
log_handler = RotatingFileHandler(ERROR_LOG_FILE, maxBytes=1000000, backupCount=1)
formatter = logging.Formatter(
    "[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s"
)
log_handler.setFormatter(formatter)
app.logger.setLevel(logging.DEBUG)
app.logger.addHandler(log_handler)
# run the application
server = wsgi.WSGIServer(('0.0.0.0', 8080), app)
server.serve_forever()
However, when I run the application it does not log anything. I guess it must be because of the WSGI server, because app.logger works in the absence of WSGI. How can I log information when using the WSGI server?

According to the gevent pywsgi documentation, you need to pass your log handler object to the WSGIServer object at creation:
log – If given, an object with a write method to which request (access) logs will be written. If not given, defaults to sys.stderr. You may pass None to disable request logging. You may use a wrapper, around e.g., logging, to support objects that don’t implement a write method. (If you pass a Logger instance, or in general something that provides a log method but not a write method, such a wrapper will automatically be created and it will be logged to at the INFO level.)
error_log – If given, a file-like object with write, writelines and flush methods to which error logs will be written. If not given, defaults to sys.stderr. You may pass None to disable error logging (not recommended). You may use a wrapper, around e.g., logging, to support objects that don’t implement the proper methods. This parameter will become the value for wsgi.errors in the WSGI environment (if not already set). (As with log, wrappers for Logger instances and the like will be created automatically and logged to at the ERROR level.)
So you should be able to do wsgi.WSGIServer(('0.0.0.0', 8080), app, log=app.logger).
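For example, a minimal sketch reusing the handler-equipped app.logger from the question (error_log is optional; per the docs quoted above, a Logger passed to log is written to at the INFO level and one passed to error_log at the ERROR level):
# pass the Flask logger to gevent so request and error output land in the same file
server = wsgi.WSGIServer(
    ('0.0.0.0', 8080), app,
    log=app.logger,        # access log lines -> app.logger at INFO
    error_log=app.logger,  # error log lines -> app.logger at ERROR
)
server.serve_forever()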

You can log like this -
import logging
import logging.handlers as handlers
.
.
.
logger = logging.getLogger('MainProgram')
logger.setLevel(logging.DEBUG)  # 10 is the numeric value of logging.DEBUG
logHandler = handlers.RotatingFileHandler('filename.log', maxBytes=1000000, backupCount=1)
logger.addHandler(logHandler)
logger.info("Logging configuration done")
.
.
.
# run the application
server = wsgi.WSGIServer(('0.0.0.0', 8080), app, log=logger)
server.serve_forever()

Related

How to configure the logging system in one file in Python

I have two files: the first is a TCP server and the second is a Flask app. They belong to one project, but each runs in a separate Docker container.
Because they are part of the same project, they should write their logs to the same file.
I tried to create my own logging library and imported it into both files, and I have tried lots of things.
First, I deleted the code below:
if logger.hasHandlers():
    logger.handlers.clear()
but when I delete it, I get every log message twice.
My project structure:
docker-compose
Dockerfile
loggingLib.py
app.py
tcp.py
requirements.txt
.
.
.
My latest logging code:
from logging.handlers import RotatingFileHandler
from datetime import datetime
import logging
import time
import os, os.path
project_name= "proje_name"
def get_logger():
if not os.path.exists("logs/"):
os.makedirs("logs/")
now = datetime.now()
file_name = now.strftime(project_name + '-%H-%M-%d-%m-%Y.log')
log_handler = RotatingFileHandler('logs/'+file_name,mode='a', maxBytes=10000000, backupCount=50)
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(funcName)s - %(message)s ', '%d-%b-%y %H:%M:%S')
formatter.converter = time.gmtime
log_handler.setFormatter(formatter)
logger = logging.getLogger(__name__)
logger.setLevel(level=logging.INFO)
if (logger.hasHandlers()):
logger.handlers.clear()
logger.addHandler(log_handler)
return logger
It works, but only in one file: whichever file runs first (for example app.py) writes logs, and the other file does not write any logs at all.
Anything that directly uses files – config files, log files, data files – is a little trickier to manage in Docker than running locally. For logs in particular, it's usually better to set your process to log directly to stdout. Docker will collect the logs, and you can review them with docker logs. In this setup, without changing your code, you can configure Docker to send the logs somewhere else or use a log collector like fluentd or logstash to manage the logs.
In your Python code, you will usually want to configure the detailed logging setup once, at the top level, on the root logger:
import logging
def main():
    logging.basicConfig(
        format='%(asctime)s - %(levelname)s - %(funcName)s - %(message)s ',
        datefmt='%d-%b-%y %H:%M:%S',
        level=logging.INFO
    )
    ...
and in each individual module you can just get a local logger, which will inherit the root logger's setup
import logging
LOGGER = logging.getLogger(__name__)
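For example, the tcp.py module from the question would then need nothing beyond that module-level logger; the handlers and format come from the root configuration done in main() (the function name here is just an illustration):
import logging

LOGGER = logging.getLogger(__name__)

def handle_connection(addr):
    # no handler setup here; the record propagates up to the root logger
    LOGGER.info("accepted connection from %s", addr)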
With its default setup, Docker will capture log messages into JSON files on disk. If you generate a large amount of log messages in a long-running container, it can lead to local disk exhaustion (it will have no effect on memory available to processes). The Docker logging documentation advises using the local file logging driver, which does automatic log rotation. In a Compose setup you can specify logging: options:
version: '3.8'
services:
  app:
    image: ...
    logging:
      driver: local
You can also configure log rotation on the default JSON File logging driver:
version: '3.8'
services:
  app:
    image: ...
    logging:
      driver: json-file  # default, can be omitted
      options:
        max-size: "10m"
        max-file: "50"
You "shouldn't" directly access the logs, but they are in a fairly stable format in /var/lib/docker, and tools like fluentd and logstash know how to collect them.
If you ever decide to run this application in a cluster environment like Kubernetes, that will have its own log-management system, but again designed around containers that directly log to their stdout. You would be able to run this application unmodified in Kubernetes, with appropriate cluster-level configuration to forward the logs somewhere. Retrieving a log file from opaque storage in a remote cluster can be tricky to set up.

Logging at Gevent application

I'm trying to use the standard Python logging module alongside gevent. I have monkey-patched threading and I expect logging to work in my app:
import gevent.monkey
gevent.monkey.patch_all()
import logging
logger = logging.getLogger()
fh = logging.FileHandler('E:\\spam.log')
fh.setLevel(logging.DEBUG)
logger.addHandler(fh)
def foo():
    logger.debug('Running in foo')
    gevent.sleep(0)
    logger.debug('Explicit context switch to foo again')

def bar():
    logger.debug('Explicit context to bar')
    gevent.sleep(0)
    logger.debug('Implicit context switch back to bar')

gevent.joinall([
    gevent.spawn(foo),
    gevent.spawn(bar),
])
Unfortunately, E:\spam.log is empty and I can see no output in the console. It seems that either I haven't configured logging properly or gevent does not support it at all (which I don't believe, because the gevent documentation says that it does). So, how can I log in a gevent app?
You haven't configured it correctly. You need to set the DEBUG level on the logger rather than the handler, otherwise the logger's default level (WARNING) causes your debug messages to be dropped. Try doing a
logger.setLevel(logging.DEBUG)
and it should work.
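Applied to the snippet above, the fix is one extra line; a sketch (the handler-level call becomes optional once the logger's own level allows DEBUG records through):
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)            # the logger itself must pass DEBUG records

fh = logging.FileHandler('E:\\spam.log')
fh.setLevel(logging.DEBUG)                # optional: handlers default to handling everything they receive
logger.addHandler(fh)

logger.debug('This now reaches spam.log')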

How to programmatically tell Celery to send all log messages to stdout or stderr?

How does one turn on celery logging programmatically?
From the terminal, this works fine:
celery worker -l DEBUG
When I call get_task_logger(__name__).debug('hello'), I can see the message come up in the terminal (stdout and stderr are both displayed). I can even import logging and call logger.info('hi') and see that too (both work).
However, while developing a task, I prefer to use a test module and call the task function directly rather than firing up a whole worker. But I can't see the log messages. I understand that Celery is redirecting everything to its internal apparatus, but I want to see the log messages on the stdout too.
How do I tell Celery to send a copy of the log messages back to stdout?
I've read a bunch of online articles about logging, but it seems that a number of Celery's logging-related configuration variables have been deprecated, and it's unclear to me from the docs what the supported path is today.
Here is an example module that creates a celery object and attempts to log output. Nothing shows in the terminal.
Example mymodule.py:
from celery import Celery
import logging
from celery.utils.log import get_task_logger
app = Celery('test')
app.config_from_object('myfile', True)
get_task_logger(__name__).warn('hello world')
logging.getLogger(__name__).warn('hello world 2')
EDIT
I know that I can redirect some of the output back to the terminal by adding a handler:
import sys
log = get_task_logger(__name__)
h = logging.StreamHandler(sys.stdout)
log.addHandler(h)
But is there a "Celery way" to do this? Maybe one that also gives me the Celery-formatted lines of text:
[2014-03-02 15:51:32,949: WARNING] hello world
I have been looking at the same issue...
What seems to work best is to use the signal handler, according to http://docs.celeryproject.org/en/latest/userguide/signals.html#after-setup-logger
In your celery.py file use:
from celery.signals import after_setup_logger
import logging

@after_setup_logger.connect
def logger_setup_handler(logger, **kwargs):
    my_handler = MyLogHandler()
    my_handler.setLevel(logging.DEBUG)
    my_formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')  # custom formatter
    my_handler.setFormatter(my_formatter)
    logger.addHandler(my_handler)
    logging.info("My log handler connected -> Global Logging")

if __name__ == '__main__':
    app.start()
You can then define MyLogHandler() however you wish.
To send the logs to STDOUT you should also be able to use (I have not tested it):
my_handler = logging.StreamHandler(sys.stdout)
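Putting those pieces together, a hedged sketch of a celery.py that mirrors the worker's log output to stdout could look like this (the app name and format string are just examples, not part of the original answer):
import sys
import logging

from celery import Celery
from celery.signals import after_setup_logger

app = Celery('test')

@after_setup_logger.connect
def mirror_to_stdout(logger, **kwargs):
    # attach a plain StreamHandler so every record also reaches stdout
    handler = logging.StreamHandler(sys.stdout)
    handler.setLevel(logging.DEBUG)
    handler.setFormatter(logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
    logger.addHandler(handler)

if __name__ == '__main__':
    app.start()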

Python Tornado - disable logging to stderr

I have a minimalistic Tornado application:
import tornado.ioloop
import tornado.web
class PingHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("pong\n")

if __name__ == "__main__":
    application = tornado.web.Application([("/ping", PingHandler), ])
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
Tornado keeps reporting error requests to stderr:
WARNING:tornado.access:404 GET / (127.0.0.1) 0.79ms
Question: I want to prevent it from logging these error messages. How?
Tornado version 3.1; Python 2.6
It's clear that "someone" is initializing the logging subsystem when we start Tornado. Here is the code from ioloop.py that reveals the mystery:
def start(self):
    if not logging.getLogger().handlers:
        # The IOLoop catches and logs exceptions, so it's
        # important that log output be visible.  However, python's
        # default behavior for non-root loggers (prior to python
        # 3.2) is to print an unhelpful "no handlers could be
        # found" message rather than the actual log entry, so we
        # must explicitly configure logging if we've made it this
        # far without anything.
        logging.basicConfig()
basicConfig is called and configures a default stderr handler.
So to set up proper logging for Tornado access, you need to:
Add a handler to the tornado.access logger: logging.getLogger("tornado.access").addHandler(...)
Disable propagation for that logger: logging.getLogger("tornado.access").propagate = False. Otherwise messages will arrive BOTH at your handler AND at stderr.
Both steps are shown in the sketch below.
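For example, sending the access lines to a file instead of stderr (the file name here is just an illustration):
import logging

access_log = logging.getLogger("tornado.access")
access_log.setLevel(logging.INFO)
access_log.addHandler(logging.FileHandler("access.log"))  # illustrative destination
access_log.propagate = False  # keep the messages out of the root/stderr handler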
The previous answer was correct, but a little incomplete. This will send everything to the NullHandler:
hn = logging.NullHandler()
hn.setLevel(logging.DEBUG)
logging.getLogger("tornado.access").addHandler(hn)
logging.getLogger("tornado.access").propagate = False
You could also quite simply (in one line) do:
logging.getLogger('tornado.access').disabled = True

How to use native Python logging and/or the Twisted logger for multiple daemon/cron scripts?

I log to one file from the different scripts of one program, such as cron tasks, Twisted daemons (HTTP servers with some data), etc.
If I use the default Python logging in a base class, such as
import logging
import logging.handlers
....
self.__fname = open(logname, 'a')
logging.basicConfig(format=FORMAT, filename=logname, handler=logging.handlers.RotatingFileHandler)
self._log = logging.getLogger(self.__pname)
self._log.setLevel(loglevel)
logging.warn('%s %s \033[0m' % (self.__colors[colortype], msg))
then it works normally, sending the output of all scripts to one file, but some important parts of the default Twisted log are missing, such as info about HTTP requests/headers, etc.
If I instead use Twisted logging, such as
from twisted.python.logfile import DailyLogFile
from twisted.python import log
from twisted.application.service import Application
....
application = Application("foo")
log.startLogging(DailyLogFile.fromFullPath(logname))
print '%s %s \033[0m' % (self.__colors[colortype], msg)
then it works with the additional data, but there is trouble with logging from the different scripts; it looks like the problem comes from the cron tasks. They seem to switch the output context, and part of the logging output goes missing and is never restored.
Of course, the cron tasks run without the Twisted reactor, but they use Twisted logging.
What should I do so that logging captures all the data printed by both the Twisted and cron parts of the app?
Thanks for any help!
I think the point is that you should not use DailyLogFile, but instead use PythonLoggingObserver to redirect the Twisted log to the standard-library logging module:
from twisted.python import log
observer = log.PythonLoggingObserver()
observer.start()
log.msg('%s %s \033[0m' % (self.__colors[colortype], msg))
You might also want to see the example in the docs: http://twistedmatrix.com/documents/current/core/howto/logging.html#auto3
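With that observer in place, both the Twisted daemons and the plain cron scripts can share a single standard-library configuration; a sketch (the file name and format are assumptions, not part of the original code):
import logging
import logging.handlers

from twisted.python import log

# one shared stdlib logging setup, used by Twisted daemons and cron scripts alike
handler = logging.handlers.RotatingFileHandler('combined.log', maxBytes=1000000, backupCount=5)
handler.setFormatter(logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s'))
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)

# route Twisted's own messages (HTTP requests, errors, etc.) into stdlib logging
observer = log.PythonLoggingObserver(loggerName='twisted')
observer.start()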
