I have three processes running under my Twisted reactor: Orbited, WSGI (running Django), and Twisted itself.
I am currently using
log.startLogging(sys.stdout)
When all the logs are directed to the same place, there is too much flooding.
One line of my WSGI log looks like this:
2010-08-16 02:21:12-0500 [-] 127.0.0.1 - - [16/Aug/2010:07:21:11 +0000] "GET /statics/js/monitor_rooms.js HTTP/1.1" 304 - "http://localhost:11111/chat/monitor_rooms" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.8) Gecko/20100723 Ubuntu/10.04 (lucid) Firefox/3.6.8"
The timestamp basically appears twice. I think I should use my own formatter, but unfortunately I can't find anything about it in Twisted's docs (there's nothing on logging there).
What's the best way to deal with logging from 3 sources?
Which kwargs do I pass to which function in twisted.log to set up my own formatter? (startLogging doesn't hold the answer.)
What is a better solution than what I have suggested? (I am not really experienced in setting up loggers.)
You can use the system keyword argument to twisted.python.log.msg to customize the message.
Assuming you've got:
log.msg("Service ready for eBusiness!", system="enterprise")
You'll get logging output like this:
2010-08-16 02:21:12-0500 [enterprise] Service ready for eBusiness!
You could then have each of your services pass its own tag (system="wsgi", system="orbited", and so on) to their log.msg and log.err calls.
I found this digging through the source last time I was working with Twisted.
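If you also want to control the line format itself, I didn't find anything in the docs either, but you can install your own observer function with startLoggingWithObserver. A minimal sketch using the legacy twisted.python.log API (the observer name and format here are just illustrative):

import sys
from twisted.python import log

def plain_observer(event):
    # event is a dict with keys like 'message', 'system', 'isError', 'time'
    text = log.textFromEventDict(event)  # flattens the 'message'/'format' keys
    if text is None:
        return
    sys.stdout.write("[%s] %s\n" % (event.get("system", "-"), text))

log.startLoggingWithObserver(plain_observer, setStdout=False)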
Heh, I've been thinking about exactly this problem. What I've come up with is a separate Twisted app that logs messages it receives over a socket. You can configure Python logging to send to a socket, and you can configure Twisted's logging to send to Python logging, so you can get everything to send its log messages to a single process (which then uses Python's logging to write them to disk).
I have some initial proof of concept code at http://www.acooke.org/cute/APythonLog0.html
The main thing missing is that it would be nice to indicate which message came from which source. I'm not sure how best to add that yet (one approach would be to run the service on three different ports and use a different prefix for each).
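The sending side is small, though. A rough sketch, assuming the collector from the linked post unpickles LogRecords the way SocketHandler sends them:

import logging
import logging.handlers
from twisted.python import log

# Bridge Twisted's log events into the stdlib logging module
observer = log.PythonLoggingObserver(loggerName="twisted")
observer.start()

# Ship every stdlib logging record to the collector process over TCP
root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(logging.handlers.SocketHandler(
    "localhost", logging.handlers.DEFAULT_TCP_LOGGING_PORT))

Giving each source its own logger name (e.g. logging.getLogger("wsgi")) would stamp every record with its origin, which might be simpler than running three ports.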
PS How's the Orbited working out? That's on my list next...
I wanted to raise the logging level in order to reduce spam from all the 2XX responses in the logs. Following the Django docs and other SO questions, I changed the level of the django.server logger to WARNING in settings.py. Unfortunately, this change has no effect when I run my server using daphne. To verify the config, I ran the app with manage.py runserver, and there it worked.
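Roughly, the settings.py change looked like this (reconstructed; handler details omitted):

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "loggers": {
        "django.server": {
            "level": "WARNING",
        },
    },
}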
I've also tried changing the log level (using LOGGING = {} in settings.py) of every logger in logging.root.manager.loggerDict, also without success.
Does anyone have an idea of how to either determine which logger produces messages like the one below under daphne, or how to force daphne to respect the django.server level, assuming it works as I expect?
Sample log message I'm talking about:
127.0.0.1:33724 - - [21/Apr/2021:21:45:13] "GET /api/foo/bar" 200 455
I found an answer myself. In case someone else has the same problem:
After diving into the Daphne source code, it looks like, as of now, it does not use the standard logging library but relies on its own class that writes all access log messages to stdout or to a given file. I do not see a possibility of raising the log level.
Consult https://github.com/django/daphne/blob/main/daphne/access.py for more details.
Set verbosity to 0 and you are done:
daphne -v 0 xxx.asgi:application
I'm reading up on the WSGI specification and tried implementing a simple WSGI server from scratch, testing it on a simple Flask application. Currently it
opens a socket listener
passes each incoming connection to another thread to handle it
the handler parses the request, creates the environ, passes it to the flask app object, and returns the response.
Overall, it seems to work. Conceptually, what more does a real server, e.g. gunicorn, do? I'm asking in terms of the basic functionality, not in terms of supporting more features (e.g. different protocols). What makes a server better, e.g. why is gunicorn suitable for production, but wsgiref is not?
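Condensed, the core of what I have looks roughly like this (an illustrative sketch; it skips request bodies, HTTP_* headers and all error handling):

import socket
import sys
import threading
from io import BytesIO

def serve(app, host="127.0.0.1", port=8000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn, app)).start()

def handle(conn, app):
    # Naive parsing: assumes the whole request arrives in one recv and has no body
    raw = conn.recv(65536).decode("latin-1")
    method, target, _ = raw.split("\r\n", 1)[0].split(" ", 2)
    path, _, query = target.partition("?")
    environ = {
        "REQUEST_METHOD": method,
        "PATH_INFO": path,
        "QUERY_STRING": query,
        "SERVER_NAME": "localhost",
        "SERVER_PORT": "8000",
        "SERVER_PROTOCOL": "HTTP/1.1",
        "wsgi.version": (1, 0),
        "wsgi.url_scheme": "http",
        "wsgi.input": BytesIO(b""),
        "wsgi.errors": sys.stderr,
        "wsgi.multithread": True,
        "wsgi.multiprocess": False,
        "wsgi.run_once": False,
    }
    response = []
    def start_response(status, headers, exc_info=None):
        response[:] = (status, headers)
    body = app(environ, start_response)
    head = "HTTP/1.1 %s\r\n" % response[0]
    head += "".join("%s: %s\r\n" % (k, v) for k, v in response[1])
    conn.sendall(head.encode("latin-1") + b"\r\n" + b"".join(body))
    conn.close()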
My 2c is that getting something working is pretty easy; it's just that HTTP is such an old/complex standard that it takes a lot of work to get all the edge cases working nicely:
how well does it tolerate errors in the WSGI client code
HTTP 0.9, 1.0, 1.1 or 2/SPDY?
how do you handle malicious clients that send a byte every 10 seconds (see the sketch after this list)
the various Keep-Alive and Transfer-Encoding variants seem to end up consuming a lot of code
does it do Content-Encoding as well
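On the slow-client point alone, even a crude defence means extra machinery. For example, a hypothetical patch to a toy server like the one in the question:

# Crude slowloris mitigation inside the connection handler:
# drop connections that stall mid-read
conn.settimeout(10)  # allow at most 10s of silence per recv
try:
    raw = conn.recv(65536)
except socket.timeout:
    conn.close()
    return

Real servers go much further: per-phase timeouts, header-size limits, connection caps per IP, and so on.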
Flask is writing access logs to the STDERR stream instead of STDOUT. How do I change this configuration so that access logs go to STDOUT and application errors to STDERR?
open-cricket [master] python3 flaskr.py > stdout.log 2> stderr.log &
[1] 11929
open-cricket [master] tail -f stderr.log
* Running on http://127.0.0.1:9001/
127.0.0.1 - - [11/Mar/2015 16:23:25] "GET /?search=Sachin+Tendulkar+stats HTTP/1.1" 200 -
127.0.0.1 - - [11/Mar/2015 16:23:25] "GET /favicon.ico HTTP/1.1" 404 -
I'll assume you're using the flask development server.
Flask's development server is based on Werkzeug, whose WSGIRequestHandler is, in turn, based on BaseHTTPServer from the standard library.
As you'll notice, WSGIRequestHandler overrides the logging methods log_request, log_error and log_message to use its own logging.Logger, so you can simply override it as you wish, in the spirit of IJade's answer.
If you go down that route, I think it'd be cleaner to add your own FileHandler instead and split the stdout and stderr output using a filter.
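A rough sketch of that split, using StreamHandlers since the targets here are stdout/stderr (MaxLevelFilter is a made-up name; werkzeug emits access lines at INFO):

import logging
import sys

class MaxLevelFilter(logging.Filter):
    # Let through only records at or below a given level
    def __init__(self, max_level):
        super().__init__()
        self.max_level = max_level
    def filter(self, record):
        return record.levelno <= self.max_level

log = logging.getLogger('werkzeug')  # the dev server's logger
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.addFilter(MaxLevelFilter(logging.INFO))  # access lines
stderr_handler = logging.StreamHandler(sys.stderr)
stderr_handler.setLevel(logging.WARNING)  # warnings and errors only
log.addHandler(stdout_handler)
log.addHandler(stderr_handler)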
Note, however, that all this is very implementation specific: there's really no proper interface to do what you want.
Although it's tangential to your actual question, I feel I really must ask: why are you worried about what goes into each log on a development server?
If you're bumping into this kind of problem, shouldn't you be running a real web server already?
Here is what you need to do: import logging and set the level at which you want to log.
import logging

log = logging.getLogger('werkzeug')  # the dev server logs requests here
log.setLevel(logging.ERROR)          # drop INFO-level access lines
I need to implement a fairly simple Django server that serves some HTTP requests and also listens to a RabbitMQ message queue that streams information into the Django app (to be written to the DB). The data must be written to the DB in a synchronized order, so I can't use the obvious celery/rabbit configuration. I was told there is no way to do this in the same Django project, since Django listens for HTTP requests in its own process and can't spawn another process to listen for Rabbit, forcing me to add another Python/Django project for the rabbit/DB-write part, working with the same models the HTTP-bound Django project works with. You can smell the trouble with this config from here. Any ideas how to solve this?
Thanks!
If anyone else bumps into this problem:
The solution is to run a RabbitMQ consumer from a different process (but in the same Django codebase) than Django itself (not through WSGI, etc.; you have to start it by itself).
The consumer connects to the appropriate RabbitMQ queues and writes the data into the Django models. The usual Django process(es) then act as a "read model" of the data inserted/updated/created/deleted as delivered by the message queue (RabbitMQ or other) from a remote process.
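A minimal sketch of such a standalone consumer, assuming pika as the RabbitMQ client; the project, app, model and queue names are all hypothetical:

# consume.py: run as its own process, e.g. python consume.py
import os

import django
import pika  # assumed RabbitMQ client library

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
django.setup()  # load the app registry so the ORM models are importable

from myapp.models import Event  # hypothetical model

def on_message(channel, method, properties, body):
    # A single channel delivers messages in queue order, so DB writes
    # happen in the order they were published
    Event.objects.create(payload=body.decode())
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="events", durable=True)
channel.basic_consume(queue="events", on_message_callback=on_message)
channel.start_consuming()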
I'm running a script in Python that takes a long time to process. The thing is, if the function takes too long to run, I guess nginx hits a timeout set in its configuration, which raises some kind of error and prevents the function from running to completion.
I just want to know where I can increase the timeout value, because I've tried some directives in the nginx conf file, such as:
uwsgi_connect_timeout 75;
uwsgi_send_timeout 75;
uwsgi_read_timeout 75;
keepalive_timeout 650;
but none of this worked.
Thanks in advance
The problem with just extending the timeout is that no matter how much longer you set it to, you will run into limitations somewhere along the line: either the web server, the browser, or your geocode calls. If it is something that routinely fails n times in a request, then you can't really make any guarantees.
So rather than having the client request hang on a long-running process (and by extension risk a server timeout), why don't you use something like celery to run those geocode tasks, and on the client side submit the request via JavaScript and poll the server for the answer via AJAX until it gets a response?
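The shape of that, assuming Celery with a Redis broker and Django views; every name below is illustrative:

# tasks.py: move the slow geocode work out of the request cycle
from celery import Celery

app = Celery("geo", broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/0")

@app.task
def geocode(address):
    ...  # the long-running work runs in a worker, free of nginx's timeout

# views.py: submit, then let the browser poll via AJAX
from django.http import JsonResponse

from tasks import geocode

def submit(request):
    task = geocode.delay(request.GET["address"])
    return JsonResponse({"task_id": task.id})

def poll(request, task_id):
    result = geocode.AsyncResult(task_id)
    if result.ready():
        return JsonResponse({"done": True, "result": result.result})
    return JsonResponse({"done": False})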
I also had a Bad Gateway error in an NGINX + uWSGI configuration, and for the sake of people who google this question: it might be a missing uwsgi Python plugin. Please see: uWSGI configuration issue: uwsgi fails without any error message.
I tried everything written in the above answers as well as in other places, but nothing worked.
My solution was changing the socket in both the uwsgi.conf and nginx.conf files.