I've been trying multiple things on this one, but with no success.
I want to save logs to a file (SQLAlchemy logs, app debug logs, stack traces on errors, etc.).
I'm starting uwsgi with the following command:
uwsgi --ini-paste-logged myapp.ini
And here is the content of the ini file (where apiservice is my package):
[loggers]
keys = root, apiservice, sqlalchemy
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console
[logger_apiservice]
level = DEBUG
handlers =
qualname = apiservice
[logger_sqlalchemy]
level = INFO
handlers =
qualname = sqlalchemy.engine
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s
[uwsgi]
socket = /tmp/myapp-uwsgi.sock
virtualenv = /var/www/myapp/env
pidfile = ./uwsgi.pid
daemonize = ./uwsgi.log
master = true
processes = 4
The uwsgi.log contains only the request log, without any actual logging output.
I've tried INI options like:
paste: config:%p
paste-logger: %p
logto: file
Nothing seems to work.
Apparently, the uwsgi config section was fine.
After a closer look at uwsgi.log, even though the server launched and was running successfully, you could see an error:
ImportError: No module named script.util.logging_config
I installed the following packages to solve the problem:
pip install pastescript
pip install pastedeploy
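For context, here is roughly what --ini-paste-logged does under the hood. This is an illustrative sketch, not uWSGI's actual implementation, but it shows why both packages are needed: the fileConfig helper lives in PasteScript, and loadapp lives in PasteDeploy.
# Rough sketch of what "uwsgi --ini-paste-logged myapp.ini" does at startup
# (an approximation for illustration, not uWSGI's real code)
from paste.deploy import loadapp                          # pip install pastedeploy
from paste.script.util.logging_config import fileConfig   # pip install pastescript

# Wires up the [loggers]/[handlers]/[formatters] sections of the INI...
fileConfig('myapp.ini')
# ...then loads the WSGI app defined in the same file.
app = loadapp('config:myapp.ini', relative_to='.')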
Related
I have two files.
The first is a TCP server; the second is a Flask app. They are one project, but they run inside separate Docker containers.
They should write logs to the same file, since they belong to the same project.
I tried to create my own logging library and import it into both files.
I've tried lots of things.
First, I deleted the code below:
if logger.hasHandlers():
    logger.handlers.clear()
When I delete it, I get the same logs twice.
My structure:
docker-compose
Dockerfile
loggingLib.py
app.py
tcp.py
requirements.txt
...
My latest logging code:
from logging.handlers import RotatingFileHandler
from datetime import datetime
import logging
import time
import os, os.path
project_name = "proje_name"

def get_logger():
    if not os.path.exists("logs/"):
        os.makedirs("logs/")
    now = datetime.now()
    file_name = now.strftime(project_name + '-%H-%M-%d-%m-%Y.log')
    log_handler = RotatingFileHandler('logs/' + file_name, mode='a', maxBytes=10000000, backupCount=50)
    formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(funcName)s - %(message)s ', '%d-%b-%y %H:%M:%S')
    formatter.converter = time.gmtime
    log_handler.setFormatter(formatter)
    logger = logging.getLogger(__name__)
    logger.setLevel(level=logging.INFO)
    if logger.hasHandlers():
        logger.handlers.clear()
    logger.addHandler(log_handler)
    return logger
It works, but only in one file. If app.py runs first, only it writes logs; the other file doesn't write any logs.
Anything that directly uses files – config files, log files, data files – is a little trickier to manage in Docker than running locally. For logs in particular, it's usually better to set your process to log directly to stdout. Docker will collect the logs, and you can review them with docker logs. In this setup, without changing your code, you can configure Docker to send the logs somewhere else or use a log collector like fluentd or logstash to manage the logs.
In your Python code, you will usually want to configure the detailed logging setup at the top level, on the root logger:
import logging

def main():
    logging.basicConfig(
        format='%(asctime)s - %(levelname)s - %(funcName)s - %(message)s ',
        datefmt='%d-%b-%y %H:%M:%S',
        level=logging.INFO
    )
    ...
and in each individual module you can just get a local logger, which will inherit the root logger's setup
import logging
LOGGER = logging.getLogger(__name__)
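Then, for example, tcp.py (and likewise app.py) only needs its module-level logger; messages propagate up to the root logger configured in main(). A minimal sketch, where handle_connection and addr are hypothetical names:
# tcp.py -- sketch; handle_connection/addr are hypothetical
import logging

LOGGER = logging.getLogger(__name__)

def handle_connection(addr):
    # Propagates to the root logger set up by basicConfig in main(),
    # whose output Docker captures for `docker logs`.
    LOGGER.info("connection from %s", addr)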
With its default setup, Docker will capture log messages into JSON files on disk. If you generate a large amount of log messages in a long-running container, it can lead to local disk exhaustion (it will have no effect on memory available to processes). The Docker logging documentation advises using the local file logging driver, which does automatic log rotation. In a Compose setup you can specify logging: options:
version: '3.8'
services:
  app:
    image: ...
    logging:
      driver: local
You can also configure log rotation on the default json-file logging driver:
version: '3.8'
services:
  app:
    image: ...
    logging:
      driver: json-file  # default, can be omitted
      options:
        max-size: "10m"
        max-file: "50"
You "shouldn't" directly access the logs, but they are in a fairly stable format in /var/lib/docker, and tools like fluentd and logstash know how to collect them.
If you ever decide to run this application in a cluster environment like Kubernetes, that will have its own log-management system, but again designed around containers that directly log to their stdout. You would be able to run this application unmodified in Kubernetes, with appropriate cluster-level configuration to forward the logs somewhere. Retrieving a log file from opaque storage in a remote cluster can be tricky to set up.
I am running a Python Flask app using uWSGI and nginx. I am having trouble getting the app modules to log. If I run the Flask app by itself, I can see the logs properly formatted, but under uWSGI I see 'no handlers could be found for logger...' and the logs are missing; print statements show up fine. Could someone help with what I am doing wrong?
Thanks
I run uwsgi as
/usr/local/bin/uwsgi --ini /root/uwsgi.ini
# cat /root/uwsgi.ini
[uwsgi]
base=/root/mainapp
app = mainapp
module = %(app)
pythonpath = %(base)
socket = /tmp/mainapp.sock
chmod-socket = 666
callable = app
logto = /var/log/mainapp/app.log
paste-logger = %p
[formatters]
keys: detailed
[handlers]
keys: console
[loggers]
keys: root, module1, module2, module3
[formatter_detailed]
format: %(asctime)s %(name)s:%(levelname)s %(module)s:%(lineno)d: %(message)s
[handler_console]
class: StreamHandler
args: []
formatter: detailed
[logger_root]
level: DEBUG
handlers:
[logger_module1]
level: DEBUG
qualname: module1
handlers: console
[logger_module2]
level: DEBUG
qualname: module2
handlers: console
[logger_module3]
level: DEBUG
qualname: module3
handlers: console
and in the module, I call:
import logging
log = logging.getLogger('module1')
log.info('hello world')
Have you configured Flask to use the logger? Off the top of my head, something like this should work:
app.config['LOG_FILE'] = 'application.log'

# Configure logger.
if not app.debug:
    import logging
    from logging import FileHandler
    file_handler = FileHandler(app.config['LOG_FILE'])
    file_handler.setLevel(logging.WARNING)
    app.logger.addHandler(file_handler)
Looks like I have to instantiate Flask's app.logger before I can do anything. This did the trick: add a handler to app.logger and then load the file config:
import logging
import logging.config
shandler = logging.StreamHandler()
shandler.setLevel(logging.DEBUG)
app.logger.addHandler(shandler)
logging.config.fileConfig('logging.conf')
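Put together, the entry module that uWSGI imports ends up looking something like this. This is a sketch with illustrative names; disable_existing_loggers=False is optional but keeps loggers created during imports from being silenced:
# mainapp.py -- sketch; module/app names are illustrative
import logging
import logging.config

# Configure logging before any module-level loggers are used.
logging.config.fileConfig('logging.conf', disable_existing_loggers=False)

from flask import Flask

app = Flask(__name__)

shandler = logging.StreamHandler()
shandler.setLevel(logging.DEBUG)
app.logger.addHandler(shandler)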
I am trying to set up logging for a Python Pyramid Waitress server. I have followed the docs here:
Pyramid logging and here: Pyramid PasteDeploy logging. I have tried both methods, and neither has yielded any logging output from Waitress. My own logging works perfectly.
I have set the Waitress logging level to DEBUG and I get nothing, even when I remove server files; Waitress fails silently.
How do you set up logging for a Pyramid Waitress server so I can see files being requested, missing-file errors, etc.?
Method 1:
Setup from code:
import logging
logging.basicConfig()
logger = logging.getLogger('waitress')
logger.setLevel(logging.DEBUG)
Method 2:
Starting the server with pserve development.ini, where the development.ini file sets up the logging as below:
[app:main]
use = egg:MyProject
pyramid.reload_templates = true
pyramid.debug_authorization = false
pyramid.debug_notfound = false
pyramid.debug_routematch = false
pyramid.default_locale_name = en
pyramid.includes =
pyramid_debugtoolbar
[server:main]
use = egg:waitress#main
host = 0.0.0.0
port = 6543
[loggers]
keys = root, myproject, waitress
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console
[logger_myproject]
level = DEBUG
handlers =
qualname = myproject
[logger_waitress]
level = DEBUG
handlers =
qualname = waitress
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s
The logging configuration actually works. Here I demonstrate a simple view that emits a logging message via the waitress logger:
from pyramid.view import view_config

@view_config(route_name='hello_baby',
             request_method='GET',
             renderer='string')
def hello_baby(request):
    import logging
    logger = logging.getLogger('waitress')
    logger.info('Hello baby!')
    return 'Hi there'
You should be able to see the logging message when you hit the page. The reason you didn't see messages from Waitress is that it emits no log messages for its common routines; it only logs when something goes wrong, as you can see in the source code.
For more about Python logging, you can read my article: Good logging practice in Python.
I got console logging (for all requests) to show up by using Paste's translogger; a good example is at http://flask.pocoo.org/snippets/27/.
Here's the relevant section of my .ini:
[app:main]
use = egg:${:app}
filter-with = translogger
[filter:translogger]
use = egg:Paste#translogger
# these are the option default values (see http://pythonpaste.org/modules/translogger.html)
# logger_name='wsgi'
# format=None
# logging_level=20
# setup_console_handler=True
# set_logger_level=10
The access log and root log for my Flask app were set up with the help of zc.buildout. That's fine. Now I wonder: how do I get logging from my own app? I know how to use the logging library, but paster just does not log it to the console or anywhere.
Thanks
Here's my config:
[loggers]
keys = root, wsgi, myapp
[handlers]
keys = console, accesslog
[formatters]
keys = generic, accesslog
[formatter_generic]
format = %(asctime)s %(levelname)s [%(name)s] %(message)s
[formatter_accesslog]
format = %(message)s
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[handler_accesslog]
class = FileHandler
args = (os.path.join(r'.', 'access.log'), 'a')
level = INFO
formatter = accesslog
[logger_root]
level = INFO
handlers = console
[logger_wsgi]
level = INFO
handlers = accesslog
qualname = wsgi
propagate = 0
[logger_myapp]
level = DEBUG
handlers = console
qualname = myapp
[filter:translogger]
use = egg:Paste#translogger
setup_console_handler = False
logger_name = wsgi
[app:main]
use = egg:myapp#debug
filter-with = translogger
...
Here's how I tried to log:
import logging as log

def myfunc():
    log.debug("show me the log")
After a long time, this logging problem was resolved. According to the manual, Flask configures its own logger, so to do logging, use flask.Flask.logger.
Documentation is here:
http://flask.pocoo.org/docs/api/#flask.Flask.logger
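In other words, log through the app's own logger rather than the bare logging module. A minimal sketch (the route and message are illustrative):
from flask import Flask

app = Flask(__name__)

@app.route('/')
def myfunc():
    # Goes through Flask's preconfigured logger instead of the root logger.
    app.logger.debug("show me the log")
    return "ok"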
I want to temporarily turn on debug messages in a production Pyramid web project, so I adjusted the production.ini file, pushed it to Heroku, and saw only ERROR- and WARN-level messages.
That seemed odd, since if I start the Pyramid application like the following on my local PC, I get messages at all log levels:
env/bin/pserve production.ini
OK, so that's not exactly how it runs on Heroku; it is actually run from a little bit of Python that looks like this (in a file called runapp.py):
import os

from paste.deploy import loadapp
from waitress import serve

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 5000))
    app = loadapp('config:production.ini', relative_to='.')
    serve(app, host='0.0.0.0', port=port)
Now, sure enough if I do this on my local PC I get the same behavior as when it is deployed to Heroku (hardly surprising).
python runapp.py
My question is, what am I missing here? Why does running it the second way result in no log messages other than ERROR and WARN being output to standard out? Surely, since it is using the same production.ini file it should work the same as if I use the pserve process?
Here is my logging section from production.ini:
###
# logging configuration
# http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/logging.html
###
[loggers]
keys = root, test
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = DEBUG
handlers = console
[logger_test]
level = DEBUG
handlers = console
qualname = test
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = DEBUG
formatter = generic
[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s
PasteDeploy does not actually assume responsibility for configuring logging. This is a little quirk where the INI file is dual-purposed. There are sections that PasteDeploy cares about, and there are sections that logging.config.fileConfig cares about, and both must be run to fully load an INI file.
If you use the Pyramid wrappers for this, you'd do:
pyramid.paster.setup_logging(inipath)
pyramid.paster.get_app(inipath)
The main reason you would use these instead of doing it yourself is that they support doing "the right thing" when inipath contains a section specifier like development.ini#myapp, which fileConfig would crash on.
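Applied to the runapp.py above, that looks something like this (a sketch under the same assumptions as the original script):
import os

from pyramid.paster import get_app, setup_logging
from waitress import serve

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 5000))
    inipath = 'production.ini'
    setup_logging(inipath)  # runs the INI through logging.config.fileConfig
    app = get_app(inipath)  # then loads the WSGI app, as loadapp did
    serve(app, host='0.0.0.0', port=port)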