Flask is writing access logs to the STDERR stream instead of STDOUT. How can I change this configuration so that access logs go to STDOUT and application errors to STDERR?
open-cricket [master] python3 flaskr.py > stdout.log 2> stderr.log &
[1] 11929
open-cricket [master] tail -f stderr.log
* Running on http://127.0.0.1:9001/
127.0.0.1 - - [11/Mar/2015 16:23:25] "GET /?search=Sachin+Tendulkar+stats HTTP/1.1" 200 -
127.0.0.1 - - [11/Mar/2015 16:23:25] "GET /favicon.ico HTTP/1.1" 404 -
I'll assume you're using the flask development server.
Flask's development server is based on werkzeug, whose WSGIRequestHandler is, in turn, based on BaseHTTPServer in the standard library.
As you'll notice, WSGIRequestHandler overrides the logging methods log_request, log_error and log_message to use its own logging.Logger - so you can simply override it as you wish, in the spirit of IJade's answer.
If you go down that route, I think it'd be cleaner to add your own handlers instead, and split the stdout and stderr output using a filter.
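As a rough sketch of that filter approach (the MaxLevelFilter class and the handler wiring are my own, not part of werkzeug; I'm also assuming your werkzeug version only installs its default handler when the logger has none, so registering these first takes over):

```python
import logging
import sys

class MaxLevelFilter(logging.Filter):
    """Let through only records at or below a given level."""
    def __init__(self, max_level):
        super().__init__()
        self.max_level = max_level

    def filter(self, record):
        return record.levelno <= self.max_level

log = logging.getLogger('werkzeug')
log.setLevel(logging.INFO)

# Access lines (logged at INFO) go to stdout...
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.addFilter(MaxLevelFilter(logging.INFO))
log.addHandler(stdout_handler)

# ...while WARNING and above go to stderr.
stderr_handler = logging.StreamHandler(sys.stderr)
stderr_handler.setLevel(logging.WARNING)
log.addHandler(stderr_handler)
```

With this in place, `python3 flaskr.py > stdout.log 2> stderr.log` should split the streams the way you wanted.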
Note, however, that all this is very implementation specific - there's really no proper interface to do what you want.
Although it's tangential to your actual question, I feel I really must ask - why are you worried about what goes into each log on a development server?
If you're bumping into this kind of problem, shouldn't you be running a real web server already?
Here is what you need to do: import logging and set the level you want to log at.
import logging
log = logging.getLogger('werkzeug')
log.setLevel(logging.ERROR)
Related
I wanted to raise the logging level in order to cut the spam of all the 2XX responses in my logs. Following the Django docs and other SO questions, I changed the level of the django.server logger to WARNING in settings.py. Unfortunately, this change has no effect when I run my server using daphne. To verify, I ran my app with manage.py runserver, and there the config worked.
I've also tried changing log level (using LOGGING = {} in settings.py) of all loggers in logging.root.manager.loggerDict, also unsuccessfully.
Does anyone have an idea of how to either determine which logger emits messages like the one below under daphne, or how to force daphne to respect the django.server log level, assuming it works the way I expect?
Sample log message I'm talking about:
127.0.0.1:33724 - - [21/Apr/2021:21:45:13] "GET /api/foo/bar" 200 455
I found the answer myself. In case someone has the same problem:
After diving into the Daphne source code, it looks like, as of now, it does not use the standard logging library but relies on its own class that writes all access log messages to stdout or to a given file. I do not see a way to raise the log level.
Consult https://github.com/django/daphne/blob/main/daphne/access.py for more details.
Set the verbosity to 0 and you are done:
daphne -v 0 xxx.asgi:application
I'm deploying some containers via Kubernetes on Google Cloud; they contain a Django project run under uWSGI.
I'm using the Stackdriver logging tool to view the logs. The problem is that all the entries are shown with severity ERROR even though they are not errors. It seems that the uWSGI log is written to stderr or something like that.
In the picture you can see that Django uses the INFO level, but it is received as ERROR by Stackdriver.
This is how I set up uWSGI:
[uwsgi]
master = true
socket = :3031
chdir = .
wsgi-file = docker.wsgi
processes = 4
threads = 2
socket-timeout = 90
harakiri = 90
http = :8000
env = prometheus_multiproc_dir=multi
enable-threads = yes
lazy-apps = yes
pidfile=/tmp/project-master.pid
Kubernetes logs written to stderr are always tagged as ERROR -- this is hard-coded in the Stackdriver logging agent. Similarly, logs written to stdout are always tagged with INFO.
If you can configure your application to write non-error log messages to stdout, please do so. Another possible approach is to write the logs to a file, run the "tail -f" command on that file as a sidecar container in the same pod, and look for your logs in the Stackdriver Logs Viewer under the sidecar container instead. Finally, you might consider writing your logs directly to the Stackdriver Logging API, which gives you full control over the contents of each entry.
This answer helped me find the solution. With the option logger-req=stdio, uWSGI request logs get the correct level in Stackdriver.
Example of uwsgi.ini:
[uwsgi]
logger-req=stdio
I have a Python program that I'm trying to add logging to. It is logging, but it is not adding the timestamp to each entry. I've researched it, read the docs, etc., and it looks correctly written to me, but it isn't giving me the timestamp. What is incorrect here?
lfname = "TestAPIlog.log"
logging.basicConfig(filename=lfname, format='%(asctime)s %(message)s',
                    filemode='w', level=logging.WARNING)
logging.info('Started')
The top of the log produced by the above looks like this:
INFO:root:Started
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): api.uber.com
DEBUG:requests.packages.urllib3.connectionpool:"GET /v1/estimates/price?end_latitude=39.762239&start_latitude=39.753385&server_token=MY_TOKEN_HERE&start_longitude=-104.998454&end_longitude=-105.013322 HTTP/1.1" 200 None
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): api.uber.com
DEBUG:requests.packages.urllib3.connectionpool:"GET /v1/estimates/price?
Thanks,
Chop
It looks as if something has already called basicConfig() by the time your code runs. As documented, basicConfig() does nothing if logging has any handlers configured. It's only intended as a one-off, simple configuration approach. Check the other parts of your code which you didn't post here.
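To illustrate (a minimal sketch, not your exact program): on Python 3.8+ you can pass force=True so basicConfig() replaces whatever handlers were configured earlier; on older versions you'd have to remove the existing handlers from the root logger yourself.

```python
import logging

# Reproduce the failure mode: a handler configured earlier (e.g. by an
# imported library) makes a later basicConfig() call a silent no-op.
logging.getLogger().addHandler(logging.StreamHandler())

# force=True (Python 3.8+) removes the existing handlers first, so the
# filename and format actually take effect.
logging.basicConfig(
    filename='TestAPIlog.log',
    format='%(asctime)s %(message)s',
    filemode='w',
    level=logging.WARNING,
    force=True,
)

# Note: with level=WARNING, logging.info('Started') would be filtered
# out anyway; use logging.warning() or set level=logging.INFO.
logging.warning('Started')
```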
How can one view the Google App Engine logs outside the Admin console?
I'm developing, so using dev_appserver.py/the Admin Console and would like to see the logs as the records are emitted.
I'd like to monitor the logging output in a console with standard Unix tools, e.g. less/grep/etc., but there doesn't seem to be an option to redirect the logging from the dev_appserver.py command. I can't open a new file from within GAE (e.g. with a FileHandler), so file handlers won't work, and I think using a socket/UDP handler would be a bit of overkill (if it's even possible).
I'm hopeful there are other options to view the log.
Thanks for reading.
The default logger sends logging output to stderr. Use your shell's method of redirecting stderr to a file (in tcsh: (dev_appserver.py > /dev/tty) >& your_logfile.txt; your shell may vary).
You can also use the logging module in python to change the logger to send directly to a file if you detect it's running locally (os.environ['SERVER_SOFTWARE'].startswith('Dev'))
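That idea might look like the following (the function name, log path, and format string are mine, not anything GAE provides):

```python
import logging
import os

def setup_dev_logging(logfile='dev_appserver.log'):
    """Attach a FileHandler when running under the dev server, so the
    log can be tailed with standard Unix tools; no-op in production."""
    if os.environ.get('SERVER_SOFTWARE', '').startswith('Dev'):
        handler = logging.FileHandler(logfile)
        handler.setFormatter(
            logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
        logging.getLogger().addHandler(handler)
        return handler
    return None
```

Then `tail -f dev_appserver.log` (or less/grep on it) works as usual.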
You can download the logs using the request_logs parameter of appcfg.py
http://code.google.com/appengine/docs/python/tools/uploadinganapp.html#Downloading_Logs
Edit:
This person came up with a way to send logs over XMPP. His solution is for GAE Java, but this could be adapted to python.
http://www.professionalintellectualdevelopment.com/
http://code.google.com/p/gae-xmpp-logger/
I have three processes running under my Twisted reactor: Orbited, WSGI (running Django), and Twisted itself.
I am currently using
log.startLogging(sys.stdout)
When all the logs are directed to the same place, there is too much flooding.
One line of my log from WSGI is like this:
2010-08-16 02:21:12-0500 [-] 127.0.0.1 - - [16/Aug/2010:07:21:11 +0000] "GET /statics/js/monitor_rooms.js HTTP/1.1" 304 - "http://localhost:11111/chat/monitor_rooms" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.8) Gecko/20100723 Ubuntu/10.04 (lucid) Firefox/3.6.8"
The time is basically repeated twice. I think I should use my own formatter, but unfortunately I cannot find one in Twisted's docs (there's nothing on logging there).
What's the best way to deal with logging from three sources?
What kwargs do I pass, and to which function in twisted.log, to set up my own formatter? (startLogging doesn't take one.)
Is there a better solution than what I have suggested? (I am not very experienced in setting up loggers.)
You can use the system keyword argument to twisted.python.log.msg to customize the message.
Assuming you've got:
log.msg("Service ready for eBusiness!", system="enterprise")
You'll get logging output like this:
2010-08-16 02:21:12-0500 [enterprise] Service ready for eBusiness!
You could then have each of your services add system="wsgi/orbited/..." to their log.msg and log.err calls.
I found this digging through the source last time I was working with Twisted.
Heh. I am thinking about exactly this problem. What I've come up with is a separate Twisted app that logs messages it receives over a socket. You can configure Python logging to send to a socket and you can configure Twisted's logging to send to Python logging. So you can get everything to send logging messages to a single process (which then uses Python's logging to log them to disk).
I have some initial proof of concept code at http://www.acooke.org/cute/APythonLog0.html
The main thing missing is a way to indicate which message came from which source. I'm not sure how best to add that yet (one approach would be to run the service on three different ports and use a different prefix for each).
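For what it's worth, the stdlib alone can demonstrate that wiring, and the logger name rides along with each pickled record, which also solves the which-source-was-it problem without extra ports. A sketch (the logger names and the in-process thread are illustrative; the collector would normally be its own process):

```python
import logging
import logging.handlers
import pickle
import socketserver
import struct
import threading

received = []  # LogRecords collected by the "central" process

class LogRecordStreamHandler(socketserver.StreamRequestHandler):
    """Unpickle the length-prefixed LogRecords that SocketHandler sends."""
    def handle(self):
        while True:
            header = self.rfile.read(4)
            if len(header) < 4:
                break
            slen = struct.unpack('>L', header)[0]
            data = self.rfile.read(slen)
            record = logging.makeLogRecord(pickle.loads(data))
            received.append(record)  # a real collector would log to disk here

# Collector side: listen on an ephemeral port, serve in the background.
server = socketserver.TCPServer(('localhost', 0), LogRecordStreamHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Source side: each process (Twisted, WSGI, Orbited) just attaches a
# SocketHandler pointing at the collector.
log = logging.getLogger('wsgi')
log.setLevel(logging.INFO)
sock_handler = logging.handlers.SocketHandler('localhost', port)
log.addHandler(sock_handler)
log.info('hello from wsgi')
sock_handler.close()
```

On the receiving end, record.name ('wsgi' above) tells you which source emitted each message.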
PS How's the Orbited working out? That's on my list next...