Python 3.4.2 logging not displaying timestamp

I have a Python program that I'm trying to add logging to. It is logging, but it is not adding the timestamp to each entry. I've researched it, read the docs, etc., and it looks correctly written to me, but there's no timestamp. What is wrong here?
lfname = "TestAPIlog.log"
logging.basicConfig(filename=lfname, format='%(asctime)s %(message)s',
                    filemode='w', level=logging.WARNING)
logging.info('Started')
The top of the log produced by the above looks like this:
INFO:root:Started
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): api.uber.com
DEBUG:requests.packages.urllib3.connectionpool:"GET /v1/estimates/price?end_latitude=39.762239&start_latitude=39.753385&server_token=MY_TOKEN_HERE&start_longitude=-104.998454&end_longitude=-105.013322 HTTP/1.1" 200 None
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): api.uber.com
DEBUG:requests.packages.urllib3.connectionpool:"GET /v1/estimates/price?
Thanks,
Chop

It looks as if something has already called basicConfig() (or otherwise installed a handler) by the time your code runs. As documented, basicConfig() does nothing if the root logger already has handlers configured. It's only intended as a one-off, simple configuration approach. Check the other parts of your code which you didn't post here.
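If that is the case, one workaround (a sketch, assuming the stray handler sits on the root logger; on Python 3.8+ you could instead pass force=True to basicConfig) is to strip the existing handlers before configuring:

```python
import logging

# basicConfig() is a no-op once the root logger has handlers, so
# remove any that were installed earlier, then reconfigure.
root = logging.getLogger()
for handler in root.handlers[:]:
    root.removeHandler(handler)

logging.basicConfig(filename="TestAPIlog.log",
                    format='%(asctime)s %(message)s',
                    filemode='w', level=logging.WARNING)

# Note: with level=logging.WARNING, logging.info('Started') would be
# filtered out; use warning() or lower the level to see it.
logging.warning('Started')
```

Separately, note that with level=logging.WARNING your logging.info('Started') call would be dropped even after the configuration takes effect.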

Related

Django.server logging configuration not working in Django Daphne

I wanted to raise the logging level in order to cut down on log spam from all the 2XX responses. Following the Django docs and other SO questions, I changed the level of the django.server logger to WARNING in settings.py. Unfortunately, this change has no effect when I run my server using daphne. To verify, I ran my app using manage.py runserver, and there the config worked.
I've also tried changing the log level (using LOGGING = {} in settings.py) of every logger in logging.root.manager.loggerDict, also unsuccessfully.
Does anyone have an idea how to either determine which logger emits messages like the one below under daphne, or how to force daphne to respect the django.server config, assuming it works as I expect?
Sample log message I'm talking about:
127.0.0.1:33724 - - [21/Apr/2021:21:45:13] "GET /api/foo/bar" 200 455
I found an answer myself; in case someone has the same problem:
After diving into the Daphne source code, it appears that (as of now) it does not use the standard logging library for access logs; it relies on its own class that writes all access-log messages to stdout or a given file. I do not see a possibility of raising the log level.
Consult https://github.com/django/daphne/blob/main/daphne/access.py for more details.
Set verbosity to 0 and you are done:
daphne -v 0 xxx.asgi:application

Using Python requests library on a virtual machine with jupyterhub is extremely slow

I set up a virtual machine with JupyterHub. When I use the requests package, it is extremely slow.
When running the following:
import logging
import requests
logging.basicConfig() # you need to initialize logging, otherwise you will not see anything from requests
logging.getLogger().setLevel(logging.DEBUG)
requests_log = logging.getLogger("requests.packages.urllib3")
requests_log.setLevel(logging.DEBUG)
requests_log.propagate = True
session = requests.Session()
response = session.get("http://www.google.de")
I get this Debugging messages
DEBUG:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): www.google.de
DEBUG:requests.packages.urllib3.connectionpool:http://www.google.de:80 "GET / HTTP/1.1" 200 4477
I have to wait more than a minute between the two debug messages.
When I run the same code on my local machine with a regular Jupyter setup (using anaconda) it works without any problem.
I could imagine I did something wrong with the network settings, but I can't locate the problem.
Thanks for your help.
UPDATE:
I tested urllib3 - same problem. When I use regular Python via bash, it works just fine; when I use IPython via bash, it is very slow.

Python Flask writes access log to STDERR

Flask is writing access logs to STDERR stream instead of STDOUT. How to change this configuration so that access logs go to STDOUT and application errors to STDERR?
open-cricket [master] python3 flaskr.py > stdout.log 2> stderr.log &
[1] 11929
open-cricket [master] tail -f stderr.log
* Running on http://127.0.0.1:9001/
127.0.0.1 - - [11/Mar/2015 16:23:25] "GET /?search=Sachin+Tendulkar+stats HTTP/1.1" 200 -
127.0.0.1 - - [11/Mar/2015 16:23:25] "GET /favicon.ico HTTP/1.1" 404 -
I'll assume you're using the flask development server.
Flask's development server is based on werkzeug, whose WSGIRequestHandler is, in turn, based on BaseHTTPServer from the standard library.
As you'll notice, WSGIRequestHandler overrides the logging methods log_request, log_error and log_message to use its own logging.Logger - so you can simply override it as you wish, in the spirit of IJade's answer.
If you go down that route, I think it'd be cleaner to add your own handler instead, and split the stdout and stderr output using a filter.
Note, however, that all this is very implementation specific - there's really no proper interface to do what you want.
Although it's tangential to your actual question, I feel I really must ask - why are you worried about what goes into each log on a development server?
If you're bumping into this kind of problem, shouldn't you be running a real web server already?
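A minimal sketch of that handler/filter split, assuming you attach it to werkzeug's logger and use ERROR as the cutoff (both are illustrative choices): records below ERROR go to stdout, ERROR and above to stderr.

```python
import logging
import sys

log = logging.getLogger('werkzeug')
log.handlers.clear()  # drop werkzeug's own handler, if it installed one

# Access-log style records (below ERROR) go to stdout...
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.addFilter(lambda record: record.levelno < logging.ERROR)
log.addHandler(stdout_handler)

# ...while errors go to stderr.
stderr_handler = logging.StreamHandler(sys.stderr)
stderr_handler.setLevel(logging.ERROR)
log.addHandler(stderr_handler)
```

With this in place, `python3 flaskr.py > stdout.log 2> stderr.log` should route the access lines and the errors into separate files.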
Here is what you have to do: import logging and set the level you need to log at.
import logging
log = logging.getLogger('werkzeug')
log.setLevel(logging.ERROR)

Disable grequests error logs on console

Is there any way to disable grequests's logging to the console? My application returns an error in the requests part:
Timeout: (<requests.packages.urllib3.connectionpool.HTTPConnectionPool object at 0x10daa1a50>, 'Connection to 116.231.213.50 timed out. (connect timeout=5)')
<Greenlet at 0x10d92bf50: <bound method AsyncRequest.send of <grequests.AsyncRequest object at 0x10da97990>>(stream=False)> failed with Timeout
I found this to disable requests's logging but no luck with the grequests.
If you used the approach in your link to disable requests logging (e.g. logging.getLogger("requests").setLevel(logging.CRITICAL)), then it should work for grequests too. Have you tried it? If you have and it still doesn't behave as you would like, configure logging to show logger names (e.g. via basicConfig using %(name)s in the format string); you will then see exactly which loggers are producing the messages, and you can silence them using the same approach as for the requests logger.
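A sketch of that approach (the logger names below are the usual ones for older requests versions, which vendored urllib3; verify them against the %(name)s output in your own logs):

```python
import logging

# Show logger names so the noisy source can be identified.
logging.basicConfig(format='%(name)s %(levelname)s %(message)s')

# Then raise the threshold on the offending loggers.
for name in ("requests", "requests.packages.urllib3"):
    logging.getLogger(name).setLevel(logging.CRITICAL)
```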

Twisted logging

I have 3 processes running under my twisted reactor: Orbited, WSGI (running django), and Twisted itself.
I am currently using
log.startLogging(sys.stdout)
When all the log are directed to the same place, there is too much flooding.
One line of my log from WSGI is like this:
2010-08-16 02:21:12-0500 [-] 127.0.0.1 - - [16/Aug/2010:07:21:11 +0000] "GET /statics/js/monitor_rooms.js HTTP/1.1" 304 - "http://localhost:11111/chat/monitor_rooms" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.8) Gecko/20100723 Ubuntu/10.04 (lucid) Firefox/3.6.8"
The time is repeated twice, basically. I think I should use my own formatter, but unfortunately I can't find it in Twisted's docs (there's nothing on logging there).
What's the best way to deal with logging from 3 sources?
What kwargs do I pass, and to which function in twisted.log, to set up my own formatter? (startLogging doesn't contain the answer.)
What would be a better solution than what I have suggested? (I am not really experienced in setting up loggers.)
You can use the system keyword argument to twisted.python.log.msg to customize the message.
Assuming you've got:
log.msg("Service ready for eBusiness!", system="enterprise")
You'll get logging output like this:
2010-08-16 02:21:12-0500 [enterprise] Service ready for eBusiness!
You could then have each of your services add system="wsgi/orbited/..." to their log.msg and log.err calls.
I found this digging through the source last time I was working with Twisted.
Heh. I am thinking about exactly this problem. What I've come up with is a separate Twisted app that logs messages it receives over a socket. You can configure Python logging to send to a socket and you can configure Twisted's logging to send to Python logging. So you can get everything to send logging messages to a single process (which then uses Python's logging to log them to disk).
I have some initial proof of concept code at http://www.acooke.org/cute/APythonLog0.html
The main thing missing is that it would be nice to indicate which message came from which source. Not sure how best to add that yet (one approach would be to run the service on three different ports and have a different prefix for each).
PS How's the Orbited working out? That's on my list next...
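The standard-library half of that setup can be sketched like this (the collector address is a placeholder for the separate logging process; on the Twisted side, twisted.python.log.PythonLoggingObserver bridges Twisted log events into Python logging):

```python
import logging
import logging.handlers

# Ship all Python logging records to a separate collector process
# over TCP; DEFAULT_TCP_LOGGING_PORT is 9020.
socket_handler = logging.handlers.SocketHandler(
    'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)

root = logging.getLogger()
root.addHandler(socket_handler)
root.setLevel(logging.DEBUG)
```

The collector process on the other end would unpickle the records it receives and log them to disk; the stdlib logging cookbook's socket-listener recipe covers that side.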
