I have a Django app on a Linux server. In one of the views, some form of print command is executed, and some string gets printed. How can I find out what the printed string was? Is there some log in which these things are kept?
The output should be in the terminal where Django was started. (If you didn't start it directly, I don't believe there's a way to read it.)
As linkedlinked pointed out, it's best not to use print, because it can cause exceptions! But that's not the only reason: there are modules (like logging) made for exactly this purpose, and they have a lot more options.
This site (even though it's from 2008) confirms my statements:
If you want to know what’s going on inside a view, the quickest way is to drop in a print statement. The development server outputs any print statements directly to the terminal; it’s the server-side alternative to a JavaScript alert().
If you want to be a bit more sophisticated with your logging, it's worth turning to Python's logging module (part of the standard library). You can configure it in your settings.py; the article describes what to do (see the site).
For debugging purposes you could also enable debug mode or use the django-debug-toolbar.
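Enabling debug mode is a one-line change in settings.py (a sketch; django-debug-toolbar needs its own installation steps, which vary by version):
# settings.py
DEBUG = True  # never leave this enabled in production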
Hope it helps! :)
Never use print, as once you deploy, it will print to stdout and WSGI will break.
Use logging instead. For development purposes it's really easy to set up. In your project's __init__.py:
import logging
from django.conf import settings
# Pull optional overrides from settings.py, falling back to sensible defaults
fmt = getattr(settings, 'LOG_FORMAT', None)
lvl = getattr(settings, 'LOG_LEVEL', logging.DEBUG)
logging.basicConfig(format=fmt, level=lvl)
logging.debug("Logging started on %s for %s", logging.root.name, logging.getLevelName(lvl))
Now everything you log goes to stderr, in this case, your terminal.
logging.debug("Oh hai!")
Plus, you can control the verbosity in your settings.py with a LOG_LEVEL setting.
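For example, your settings.py might define (the setting names are the ones the snippet above looks up; the values here are just illustrative):
import logging
# settings.py -- read by the getattr() calls in __init__.py above
LOG_FORMAT = '%(asctime)s %(levelname)s %(message)s'
LOG_LEVEL = logging.INFO  # raise to WARNING or ERROR to quiet things down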
The print shows up fine with "./manage.py runserver" or other variations - like Joschua mentions, it shows up in the terminal where you started it. If you're running FCGI from cron or such, that just gets dumped into nothingness and you lose it entirely.
For places where I want "print"-like warnings or notices to come out, I use an instance of Python's logger that pushes to syslog to capture the output and put it someplace. I set up the logger in one of the modules as it gets loaded - models.py was the place I picked, just for its convenience, since I knew it would always get evaluated before requests came rolling in.
import logging, logging.handlers
# One-time setup: route messages for the "djangosyslog" logger to the system log
logger = logging.getLogger("djangosyslog")
hdlr = logging.handlers.SysLogHandler(facility=logging.handlers.SysLogHandler.LOG_DAEMON)
formatter = logging.Formatter('%(filename)s: %(levelname)s: %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
Then, when you want to send a message to the logger in your views or wherever:
logger = logging.getLogger("djangosyslog")
logger.warning("Protocol problem: %s", "connection reset")
There's .error(), .critical(), and more - check out http://docs.python.org/library/logging.html for the nitty gritty details.
Rob Hudson's debug toolbar is great if you're looking for that kind of debug information - I use it frequently in development myself. It gives you data about the current request and response, including the SQL used to generate any given page. You can inject into that data like a print by shoving the strings you're interested in into the context/response - but I found that to be a bit difficult to deal with.
A warning: if you try to deploy code with print statements under WSGI, expect things to break. Use the logging module instead.
If you are using an Apache2 server to run your Django application and have enabled the access and error logs, your print statements will show up in the error log.
While your application is running, do the following as the root user in Linux:
tail -f /path-to-error-file.log
Apache2 logs are usually located in /var/log/apache2/. The output will appear there whenever a print statement in your view executes.
I have a Python script with error handling using the logging module. Although the script works when imported into Google Colab, it doesn't log the errors to the log file.
As an experiment, I tried the following script in Google Colab just to see if it writes a log at all:
import logging
logging.basicConfig(filename="log_file_test.log",
                    filemode='a',
                    format='%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s',
                    datefmt='%H:%M:%S',
                    level=logging.DEBUG)
logging.info("This is a test log ..")
To my dismay, it didn't even create a log file named log_file_test.log. I tried running the same script locally, and it did produce a file log_file_test.log with the following text:
13:20:53,441 root INFO This is a test log ..
What is it that I am missing here?
For the time being, I am replacing the error logs with print statements, but I assume that there must be a workaround to this.
Perhaps you've reconfigured your environment somehow? (Try Runtime menu -> Reset all runtimes...) Your snippet works exactly as written for me.
logging.basicConfig can be run just once*
Any subsequent call to basicConfig is ignored.
* unless you are on Python 3.8+ and use the flag force=True
logging.basicConfig(filename='app.log',
                    level=logging.DEBUG,
                    force=True,  # resets any previous configuration
                    )
There are two workarounds.
(1) You can easily reset the Colab workspace with this command
exit
Wait for it to come back and try your commands again.
(2) But if you plan to do the reset more than once and/or are learning to use logging, it may be better to use the %%python magic to run the entire cell in a subprocess, as sketched below.
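A minimal sketch of that approach, reusing the file name from the question (%%python runs the cell in a fresh subprocess, so no previously configured handlers interfere):
%%python
import logging
logging.basicConfig(filename="log_file_test.log",
                    filemode='a',
                    level=logging.DEBUG)
logging.info("This is a test log ..")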
What is it that I am missing here?
A deeper understanding of how logging works. It is a bit tricky, but there are many good web pages explaining the gotchas, for example https://realpython.com/python-logging
This answer covers the issue.
You have to:
Clear existing log handlers from the environment with logging.root.removeHandler
Set the log level with logging.getLogger('RootLogger').setLevel(logging.DEBUG)
Setting the level with logging.basicConfig alone did not work for me. A combined sketch follows.
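A minimal sketch of that sequence, reusing the file name from the question:
import logging
# Clear any handlers the Colab environment has already installed on the root logger
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)
logging.basicConfig(filename="log_file_test.log", level=logging.DEBUG)
logging.getLogger().setLevel(logging.DEBUG)
logging.info("This is a test log ..")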
I have a piece of software (2100 SLOC) that I want to run in two different versions: one version provides verbose debug information on the console, and the other is an optimized release version.
My goal is to have a single git branch to maintain. Is there a way to mark the debug parts of the code and signal the Python interpreter to ignore these parts of the code?
Possible applications include: print statements, Python's logging facility, profiling, and assertions [Edit: assertions are apparently ignored when the -O flag is set].
I think you are overcomplicating this. You should ideally not have different code paths for development and production environments, just different configuration, otherwise it becomes harder to be sure whether your tests actually reflect how the code will behave when deployed. Things like profiling and debugging the code should be external to that process, things you run on your codebase rather than part of your codebase.
If all you're concerned with is the logging, just set different output levels in different environments. Assuming you have a standard library logging setup, you could do something like:
import logging
import os

logging.basicConfig(
    level=getattr(logging, os.getenv('LOG_LEVEL', 'DEBUG')),
    ...
)
in your entry point. This way you can set an explicit LOG_LEVEL environment variable (one of the allowed level names) in your production environment, and default to DEBUG for development. Alternatively, make the default the production level (e.g. ERROR) and set it explicitly in your development environments. You should then only output messages via logging, and not use print at all.
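Since getattr would raise a confusing AttributeError for a misspelled level name, a slightly more defensive variant (just a sketch) validates the variable first:
import logging
import os
# Fail fast on a bad LOG_LEVEL value instead of crashing inside getattr
level_name = os.getenv('LOG_LEVEL', 'DEBUG').upper()
level = getattr(logging, level_name, None)
if not isinstance(level, int):
    raise ValueError(f"Invalid LOG_LEVEL: {level_name!r}")
logging.basicConfig(level=level)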
You should also let the loggers handle any string interpolation, i.e. using:
logger.info('hello %s', 'world')
rather than:
logger.info('hello %s' % 'world') # or logger.info('hello {}'.format('world'))
So that if that logging level isn't active, the logging module can skip the string interpolation for you entirely.
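A small illustration of the difference (the Expensive class is just a stand-in with a visible formatting side effect):
import logging

class Expensive:
    def __str__(self):
        print("__str__ was called")  # visible marker, for demonstration only
        return "value"

logging.basicConfig(level=logging.ERROR)  # INFO messages are disabled
logging.info("lazy: %s", Expensive())     # nothing logged, __str__ never runs
logging.info("eager: %s" % Expensive())   # nothing logged, but __str__ runs anyway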
I found an answer here:
if __debug__:
    doSomething()
To set __debug__ to False, you have to run Python with either the -O or -OO flag.
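A quick way to see the effect (debug_demo.py is a hypothetical file name):
# debug_demo.py
if __debug__:
    print("debug build: verbose diagnostics enabled")
else:
    print("optimized build: debug code skipped")
# Run as:  python debug_demo.py     -> debug build
#          python -O debug_demo.py  -> optimized build
# Note that -O also strips assert statements.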
I use Robot Framework 3.0 under Python 2.7.8. Robot Framework's documentation (http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#programmatic-logging-apis) states that
In addition to the new public logging API, Robot Framework offers a built-in support to Python's standard logging module. This works so that all messages that are received by the root logger of the module are automatically propagated to Robot Framework's log file.
I made a short library file to test this:
from logging import debug, error, info, warn
def try_logging():
info("This is merely a humble info message.")
debug("Most users never saw me.")
warn("I warn you about something.")
error("Something bad happened.")
My test case is:
*** Test Cases ***
Logtest
Try logging
When I run it, the case is PASSED, but nothing is logged into the HTML log. The test execution log has the suite, the case, and the keyword as it should, but when I expand them nothing is logged except the "Start / End / Elapsed" line.
How could I forward the Python logger messages to Robot Framework? As you can see, the so-called automatic propagation is not working automatically. My goal is to write a library that can be run with or without Robot Framework.
Thanks for your help in advance.
After hours of code digging I managed to find the answer. I think it is worth sharing, as it may help you if you run into a similar issue.
In my case I had some unused libraries imported. One of them was a class that was instantiated when Robot Framework imported the library file. This object had some logger settings that messed up the defaults, which is why I got nothing in the Robot log.
Without it I got the expected results and automatic propagation worked fine.
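If you hit something similar, a quick diagnostic (just a sketch) is to inspect the root logger at import time, since Robot Framework's automatic propagation hooks into the root logger:
import logging
# See what level and handlers are attached to the root logger before Robot takes over
root = logging.getLogger()
print("root level:", logging.getLevelName(root.level))
print("root handlers:", root.handlers)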
I'm working with Django-nonrel on Google App Engine, which forces me to use logging.debug() instead of print().
The "logging" module is provided by Django, but I'm having a rough time using it instead of print().
For example, if I need to verify the content held in the variable x, I will put
logging.debug('x is: %s' % x). But if the program crashes soon after (without flushing the stream), then it never gets printed.
So for debugging, I need debug() to be flushed before the program exits on error, and this is not happening.
I think this may work for you, assuming you're only using one (or the default) handler:
>>> import logging
>>> logger = logging.getLogger()
>>> logging.debug('wat wat')
>>> logger.handlers[0].flush()
It's kind of frowned upon in the documentation, though.
Application code should not directly instantiate and use instances of Handler. Instead, the Handler class is a base class that defines the interface that all handlers should have and establishes some default behavior that child classes can use (or override).
http://docs.python.org/2/howto/logging.html#handler-basic
And it could be a performance drain, but if you're really stuck, this may help with your debugging.
If the use case is that you have a python program that should flush its logs when exiting, use logging.shutdown().
From the python documentation:
logging.shutdown()
Informs the logging system to perform an orderly shutdown by flushing and closing all handlers. This should be called at application exit and no further use of the logging system should be made after this call.
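A minimal sketch of using it (app.log and risky_operation are hypothetical names):
import logging

logging.basicConfig(filename="app.log", level=logging.DEBUG)

def risky_operation():
    raise RuntimeError("simulated crash")  # stand-in for code that may fail

try:
    logging.debug("x is: %s", "some value")
    risky_operation()
finally:
    logging.shutdown()  # flush and close all handlers before the process exits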
Django logging relies on the standard python logging module.
This module has a module-level method, logging.shutdown(), which flushes all of the handlers and shuts down the logging system (i.e. logging can no longer be used after it is called).
Inspecting the code of this function shows that currently (Python 2.7) the logging module holds a list of weak references to all handlers in a module-level variable called _handlerList, so all of the handlers can be flushed by doing something like
[h_weak_ref().flush() for h_weak_ref in logging._handlerList]
Because this solution uses the internals of the module, @Mike's solution above is better; it relies on having access to a logger, and can be generalized as follows:
[h.flush() for h in my_logger.handlers]
A simple function that always works for recording your debug messages while programming. Don't use it in production, since the file will not rotate:
def make_log(message):
    import datetime
    with open('mylogfile.log','a') as f:
        f.write(f"{datetime.datetime.now()} {message}\n")
then use it as
make_log('my message to register')
When you move to production, just comment out the last two lines:
def make_log(message):
    import datetime
    #with open('mylogfile.log','a') as f:
    #    f.write(f"{datetime.datetime.now()} {message}\n")
It seems logging.debug() doesn't appear in GAE logs, but logging.error() does.
Does anyone have an idea how can I make logging.debug() appear in the GAE logs?
Logging in Python can be set to a different level, so that only messages at or above a specified level appear in the log file. Try changing the logging level on the root logger (note that the logging module itself has no setLevel function):
logging.getLogger().setLevel(logging.DEBUG)
I observed that on the SDK server, debug logging really does disappear. In production I get full debug logs. This may be because of the way I call webapp.WSGIApplication:
application = webapp.WSGIApplication([
    ('/', Homepage)],
    debug=True)
Do you also use debug=True? (Actually, I always wondered what exactly it was meant to do.)