Django mod_python logging issue

I'm using Django with mod_python on a remote server and need to add some logging.
To try a test log, I added this code to settings.py:
import logging
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s',
                    filename='/home/voip_maker/django.log',
                    filemode='a+')
logging.debug("hello world!")
logging.info("this is some interesting info!")
logging.error("this is an error!")
Then I restart Apache and open my project via the web,
but nothing is written to the log file.
Can you please help me with this issue: how must I configure logging so that messages actually reach the log file?
Thanks very much,

Django has made logging even simpler. Please check the docs below:
https://docs.djangoproject.com/en/1.4/topics/logging/
All you need to do is add a LOGGING dictionary to your settings.py file.
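For reference, a minimal LOGGING dictionary of the kind those docs describe could look like the sketch below; the formatter and handler names are illustrative, and the file path is the one from the question.
# settings.py -- sketch of the LOGGING setting (Django's dictConfig format)
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {'format': '%(asctime)s %(levelname)s %(message)s'},
    },
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': '/home/voip_maker/django.log',
            'formatter': 'verbose',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}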

Related

Async logging in Django

I have created a simple weather web app in Django using an API. Logging is enabled and logs are written to files on Windows. I want logging to be asynchronous, that is, done at the end of execution. How can we do async logging in Django?
We can only create async views in Django.
There is the python-logstash package, which has an async way of logging, but it stores logs in a database on a remote instance
(the alternative being to store logs in an SQLite3 db). A file logging option is not present in it.
Moreover, async support is still new in Django and many complexities around it remain unresolved. It might cause memory overhead, which
can degrade performance. Please find some links below for reference.
https://pypi.org/project/python-logstash/
https://docs.djangoproject.com/en/3.1/topics/async/#:~:text=New%20in%20Django%203.0.,have%20efficient%20long%2Drunning%20requests.
https://deepsource.io/blog/django-async-support/
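For what it's worth, the standard library itself has a way to make file logging non-blocking without third-party packages: put a QueueHandler in front of a QueueListener that owns the real FileHandler. A minimal sketch (the logger name and file name are made up for illustration):
import logging
import logging.handlers
import queue

log_queue = queue.Queue(-1)  # unbounded queue between callers and the listener

# Handlers on the caller-facing logger only enqueue records, which is cheap
logger = logging.getLogger('weather')  # illustrative logger name
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.QueueHandler(log_queue))

# The listener drains the queue in a background thread and does the slow file I/O
file_handler = logging.FileHandler('weather.log')  # illustrative file name
file_handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
listener = logging.handlers.QueueListener(log_queue, file_handler)
listener.start()

logger.info('request handled')  # returns immediately; the write happens later
listener.stop()  # flushes any remaining records, e.g. at shutdown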
You can use the logging module from the Python standard library:
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)  # let INFO and DEBUG messages through
# Set a file as the output (the file name here is just an example)
handler = logging.FileHandler('django.log')
# Formatter template
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
# Add the formatter to the handler
handler.setFormatter(formatter)
# Add the handler to the logger
logger.addHandler(handler)
# Your messages will be added to the file
logger.error("this is an error message")
logger.info("this is an info message")
logger.warning("this is a warning message")
Official documentation: https://docs.python.org/3/library/logging.html
I hope I helped you!
I can also advise starting the Django project like this. The downside is that nothing will be output to the console, but it will work faster than in middleware:
nohup python manage.py runserver > file.log

Logging to scrapyd log file of each job

I have my Scrapy app deployed to scrapyd, and in the log file of each job:
http://{host}:6800/logs/{project_name}/{spider_name}/{job_id}.log
I'm not seeing the log lines I wrote with the logger I defined, but if I change the code to use self.logger.info(...), they do show up in the job's log file.
LOGGER = logging.getLogger(__name__)
LOGGER.info('...') # this does not show up in the log file
self.logger.info('...') # this shows up in the log file
Can anyone provide some insight, please?
Found the reason: I had added
LOGGER.propagate = False
which stops records from propagating up to the root logger, where the per-job log file handler is attached.

Logging and Python bokeh compatibility

I am using import logging to record what my Bokeh server does and I want to save the output to a file with a .log extension, but when I run the Bokeh server, the file is not created and no operations are saved to the .log file.
Part of the code I wrote is below.
Could it be that I am making a mistake in the code, or does the Bokeh server not work with logging this way?
import logging

LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"
logging.basicConfig(filename="test.log",
                    level=logging.DEBUG,
                    format=LOG_FORMAT,
                    filemode="w")
logger = logging.getLogger()
When you use bokeh serve %some_python_file%, the Bokeh server is started right away, but your code is executed only when you actually open the URL that points to the Bokeh document that you fill in that code.
bokeh serve configures logging by using logging.basicConfig as well, and calling this function again does not override anything - that's just how logging.basicConfig works.
Instead of using logging directly, you should just create and configure your own logger:
import logging

LOG_FORMAT = "%(levelname)s %(asctime)s - %(message)s"
file_handler = logging.FileHandler(filename='test.log', mode='w')
file_handler.setFormatter(logging.Formatter(LOG_FORMAT))
logger = logging.getLogger(__name__)
logger.addHandler(file_handler)
logger.setLevel(logging.DEBUG)
logger.info('Hello there')
Eugene's answer is correct. Calling logging.basicConfig() a second time does not have any effect. Nevertheless, if you are using Python >= 3.8, you can pass force=True, which removes all existing handlers from the root logger and sets up a new one. This practically means that your own logging.basicConfig() will just work:
logging.basicConfig(..., force=True)
docs
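Under that Python >= 3.8 assumption, the question's own configuration would then look roughly like this:
import logging

# force=True (Python 3.8+) removes the handlers that bokeh serve already
# attached to the root logger before applying this configuration
logging.basicConfig(filename="test.log",
                    level=logging.DEBUG,
                    format="%(levelname)s %(asctime)s - %(message)s",
                    filemode="w",
                    force=True)

logging.getLogger().info("Hello there")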

Formatting Flask app logs in json

I'm working with a Python/Flask application and trying to get the logs to be formatted (by line) in json.
Using the python-json-logger package, I've modified the formatter for the app.logger as follows:
from pythonjsonlogger import jsonlogger
formatter = jsonlogger.JsonFormatter(
    '%(asctime) %(levelname) %(module) %(funcName) %(lineno) %(message)')
app.logger.handlers[0].setFormatter(formatter)
This works as expected. Any messages passed to app.logger are correctly formatted in json.
However the application is also automatically logging all requests. This information shows up in stdout as follows:
127.0.0.1 - - [19/Jun/2015 12:22:03] "GET /portal/ HTTP/1.1" 200 -
I want this information to be formatted in json as well. I've been looking for the logger/code that is responsible for creating this output with no success.
Where is this output generated?
Are there mechanisms to change the formatting of this logged information?
When you use the app.logger attribute for the first time, Flask sets up some log handlers:
- a debug handler that is set to log level DEBUG and that filters on app.debug being true;
- a production handler that is set to ERROR.
You can remove those handlers again yourself, by running:
app.logger.handlers[:] = []
However, the log line you see is not logged by Flask, but by the WSGI server. Both the built-in Werkzeug server (used when you use app.run()) and various other WSGI servers do this. Gunicorn for example uses the default Python logger to record access.
The built-in WSGI server should never be used in production (it won't scale well, and is not battle-hardened against malicious attackers).
For Gunicorn, you can disable log propagation to keep the logging separate:
logging.getLogger('gunicorn').propagate = False
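If, rather than separating them, you want the Gunicorn access lines in JSON as well (which is what the question asks for), one possible sketch, assuming the app runs behind Gunicorn and its access logger already has handlers attached, is to reuse the python-json-logger formatter on 'gunicorn.access':
import logging
from pythonjsonlogger import jsonlogger

# 'gunicorn.access' is the logger Gunicorn uses for request lines; whether it
# has handlers at this point depends on how Gunicorn logging is configured
gunicorn_access = logging.getLogger('gunicorn.access')
json_formatter = jsonlogger.JsonFormatter(
    '%(asctime) %(levelname) %(message)')
for handler in gunicorn_access.handlers:
    handler.setFormatter(json_formatter)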
Flask internally uses the Werkzeug server. Those logs are printed by it, not by Flask. You can access the Werkzeug logger with logging.getLogger('werkzeug') and configure it as you wish. For example:
import logging

werkzeug = logging.getLogger('werkzeug')
if len(werkzeug.handlers) == 1:
    formatter = logging.Formatter('%(message)s', '%Y-%m-%d %H:%M:%S')
    werkzeug.handlers[0].setFormatter(formatter)
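Building on that, and since the question wants JSON specifically, here is a sketch that sets the python-json-logger formatter from above on the Werkzeug handlers instead of a plain one (assuming the dev server has already attached its default handler):
import logging
from pythonjsonlogger import jsonlogger

werkzeug_logger = logging.getLogger('werkzeug')
json_formatter = jsonlogger.JsonFormatter(
    '%(asctime) %(levelname) %(message)')
# apply the JSON formatter to whatever handlers Werkzeug has attached
for handler in werkzeug_logger.handlers:
    handler.setFormatter(json_formatter)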

Django: How do I get logging working?

I've added the following to my settings.py file:
import logging
...
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s',
                    filename=os.path.join(rootdir, 'django.log'),
                    filemode='a+')
And in views.py, I've added:
import logging
log = logging.getLogger(__name__)
...
log.info("testing 123!")
Unfortunately, no log file is being created. Any ideas what I am doing wrong? Also, is there a better method I should be using for logging? I am doing this on Webfaction.
Python logging for Django is fine somewhere like Webfaction. If you were on a cloud-based provider (e.g. Amazon EC2) where you had a number of servers, it might be worth looking at either logging to a key-value DB or using Python logging over the network.
Your logging setup code in settings.py looks fine, but I'd check that you can write to rootdir -- your syslog might show errors, but it's more likely that Django would be throwing a 500 if it couldn't log properly.
Which leads me to note that the only major difference in my logging (also on WebFaction) is that I do:
import logging
logging.info("Something here")
instead of log.info
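Going back to the permissions point above, a quick way to rule it out is a one-off check like this sketch (rootdir is a placeholder for the same rootdir used in settings.py):
import os

rootdir = '/path/to/project'  # placeholder: use the same rootdir as in settings.py
# the process running Django must be able to create/append django.log here
print(os.access(rootdir, os.W_OK))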
