In my code, I have a few debug lines using the logging module.
However, when running the program, I see a lot of other debug messages that are not from my code.
They appear to come from the other modules I use. Is there a way to disable the debug messages that do not come from my own code (modules)?
If not, what is the common practice?
logging.getLogger().setLevel(logging.CRITICAL) will disable pretty much all non-critical logging across all loggers that don't set a level of their own (I think, at least...).
After that you can give your own logger a more reasonable threshold:
logging.getLogger("my_logger").setLevel(logging.DEBUG)
I think that should work ... without having tested it extensively.
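Put together, that could look like this minimal sketch (the "my_logger" and "some.other.module" names are just placeholders):

import logging

# basicConfig attaches a handler whose own level is NOTSET, so the
# filtering below is done purely by logger levels.
logging.basicConfig(format="%(name)s %(levelname)s: %(message)s")

# Silence every logger that relies on the root logger's level,
# i.e. loggers that never call setLevel themselves.
logging.getLogger().setLevel(logging.CRITICAL)

# Re-enable full output for your own logger only.
my_log = logging.getLogger("my_logger")
my_log.setLevel(logging.DEBUG)

my_log.debug("this still shows up")
logging.getLogger("some.other.module").debug("this one is filtered out")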
I use Robot Framework 3.0 under Python 2.7.8. Robot Framework's documentation (http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#programmatic-logging-apis) states that:
In addition to the new public logging API, Robot Framework offers a built-in support to Python's standard logging module. This works so that all messages that are received by the root logger of the module are automatically propagated to Robot Framework's log file.
I made a short library file to test this:
from logging import debug, error, info, warn

def try_logging():
    info("This is merely a humble info message.")
    debug("Most users never saw me.")
    warn("I warn you about something.")
    error("Something bad happened.")
My test case is:
*** Test Cases ***
Logtest
    Try Logging
When I run it, the case passes, but nothing is logged into the HTML log. The test execution log shows the suite, the test case and the keyword as it should, but when I expand them nothing is shown except the "Start / End / Elapsed" line.
How could I forward the Python logger messages to Robot? As you can see, the so-called automatic propagation is not working automatically. My goal is to write a library that can be run with or without Robot Framework.
Thanks for your help in advance.
After hours of code digging I managed to find the answer. I think it is worth sharing, as it may help you if you run into a similar issue.
In my case I had some unused libraries imported. One of them contained a class that was instantiated when Robot Framework imported the library file. This object changed some logger settings away from the defaults, which is why nothing showed up in the Robot log.
After removing that import I got the expected results and the automatic propagation worked fine.
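If the goal is a library that can run both with and without Robot Framework, a pattern like the following sketch works under that constraint (the __main__ guard is my own addition, not something Robot Framework requires): records sent through the standard logging module reach the root logger and end up in Robot's log when the file is used as a test library, and on the console when the module is run standalone.

import logging

def try_logging():
    # Plain stdlib logging: when this file is imported as a Robot Framework
    # library, these records reach the root logger and are propagated into
    # Robot's log file automatically.
    logging.info("This is merely a humble info message.")
    logging.debug("Most users never saw me.")
    logging.warning("I warn you about something.")
    logging.error("Something bad happened.")

if __name__ == "__main__":
    # Only configure handlers when running standalone, so we do not
    # interfere with Robot Framework's own logging setup.
    logging.basicConfig(level=logging.DEBUG)
    try_logging()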
I want to create task-specific log files that reflect what's happening during a particular operation, but I want these logging messages to also go to the primary Django log.
My current solution, which seems to work fine at first glance, is something like:
from logging import getLogger, FileHandler

logger = getLogger("%s:%s" % (__name__, task_id))
handler = FileHandler(task_log_file)
logger.addHandler(handler)
# Work
logger.removeHandler(handler)
As I said, this works, but the main issue that occurs to me is that this logger isn't really temporary -- from what I've read of logging.Manager each logger will just hang around indefinitely until shutdown. In this case, when I'm done I know I won't use the logger again (okay, technically I might, but that will be rare), and assuming the system is stable this could be running through hundreds of thousands of tasks.
Is there a "right" way to do this?
You could implement some kind of mark-and-sweep for your per-task loggers and their file handlers, or just use a singleton pattern: keep a single logger around and only attach and detach the per-task file handler. See the sketch below.
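A minimal sketch of that singleton-style approach (the logger name "myapp.tasks" and the task_log_file/task_id variables are placeholders, not fixed names): one module-level logger is reused for every task, and only the FileHandler is temporary, so no extra loggers accumulate in logging.Manager. Because propagate is left at its default of True, the records also reach the root/Django handlers.

import logging
from contextlib import contextmanager

# One logger for all tasks; the per-task state lives in the handler only.
task_logger = logging.getLogger("myapp.tasks")  # placeholder name

@contextmanager
def task_log(task_log_file):
    handler = logging.FileHandler(task_log_file)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    task_logger.addHandler(handler)
    try:
        yield task_logger
    finally:
        task_logger.removeHandler(handler)
        handler.close()

# Usage inside a task:
# with task_log("/tmp/task-%s.log" % task_id) as log:
#     log.info("working on task %s", task_id)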
Does anyone know how to reduce the verbosity of logging output from dev_appserver.py?
The noise level of these logs is just driving me crazy. I know how to do this kind of config in Java with log4j, but am really lost here on google app engine python.
Solution 1.
You can instruct the logging library to only emit records at or above a given level by calling setLevel() on the root logger, e.g. logging.getLogger().setLevel(logging.WARNING). If you set this threshold above the level of the messages you don't want, the unwanted messages from dev_appserver are filtered out.
To make your log messages show up, you need to do one of the following:
Ensure your own messages are logged at or above the threshold you set (probably WARNING).
Configure and use your own custom logger. Then you can control the logging level for your logger independently of the root logger used by the dev server (see the sketch below).
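A rough sketch of the second option, assuming the handler installed by dev_appserver on the root logger does not itself filter by level (the "myapp" logger name is just a placeholder):

import logging

# Raise the root logger's threshold so dev_appserver's chatty INFO/DEBUG
# records (which rely on the root logger's level) are dropped.
logging.getLogger().setLevel(logging.WARNING)

# Give your own code a named logger with its own, lower level. Its records
# propagate to the root handlers regardless of the root logger's level,
# so your DEBUG output still shows up.
app_log = logging.getLogger("myapp")
app_log.setLevel(logging.DEBUG)

app_log.debug("visible despite the WARNING threshold on the root logger")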
Solution 2.
The workaround above is a little annoying because you either have to avoid the DEBUG and INFO levels, or you have to create your own logger.
Another solution is to comment out the offending log messages from dev_appserver.py (and related modules). This would be quite a pain to do by hand, but I've written a tool which replaces logging calls in all files in a given folder (and its subfolders) - check out my post Python logging and performance: how to have your cake and eat it too.
I have a Django app on a Linux server. In one of the views, some form of print command is executed, and some string gets printed. How can I find out what the printed string was? Is there some log in which these things are kept?
The output should be in the terminal where Django was started. (If you didn't start it directly, I don't believe there's a way to read it.)
As linkedlinked pointed out, it's best not to use print, because it can cause exceptions! But that's not the only reason: there are modules (like logging) made for exactly this purpose, and they have a lot more options.
This site (even though it's from 2008) confirms my point:
If you want to know what’s going on inside a view, the quickest way is to drop in a print statement. The development server outputs any print statements directly to the terminal; it’s the server-side alternative to a JavaScript alert().
If you want to be a bit more sophisticated with your logging, it's worth turning to Python's logging module (part of the standard library). You can configure it in your settings.py: the article describes what to do (see the site).
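In a current Django project the usual place for that configuration is the LOGGING setting in settings.py (dictConfig format); a minimal sketch that sends your own app's messages to the console might look like this (the "myapp" logger name is a placeholder):

# settings.py
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        "myapp": {                  # placeholder for your app's logger name
            "handlers": ["console"],
            "level": "DEBUG",
        },
    },
}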
For debugging purposes you could also enable debug mode or use the django-debug-toolbar.
Hope it helps! :)
Never use print: once you deploy, it will print to stdout and WSGI will break.
Use logging instead. For development purposes it is really easy to set up. In your project's __init__.py:
import logging
from django.conf import settings

# Optional LOG_FORMAT / LOG_LEVEL settings; fall back to the default format and DEBUG.
fmt = getattr(settings, 'LOG_FORMAT', None)
lvl = getattr(settings, 'LOG_LEVEL', logging.DEBUG)

logging.basicConfig(format=fmt, level=lvl)
logging.debug("Logging started on %s for %s" % (logging.root.name, logging.getLevelName(lvl)))
Now everything you log goes to stderr, in this case, your terminal.
logging.debug("Oh hai!")
Plus, you can control the verbosity in your settings.py with a LOG_LEVEL setting.
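For example (LOG_LEVEL and LOG_FORMAT are the custom setting names read in the snippet above, not built-in Django settings):

# settings.py
import logging

LOG_LEVEL = logging.INFO   # hide DEBUG noise
LOG_FORMAT = "%(asctime)s %(levelname)s %(name)s: %(message)s"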
The print shows up fine with "./manage.py runserver" or other variations - as Joschua mentions, it shows up in the terminal where you started it. If you're running FCGI from cron or the like, that output just gets dumped into nothingness and you lose it entirely.
For places where I want print-like warnings or notices to come out, I use an instance of Python's logger that pushes to syslog to capture the output and put it somewhere. I set up the logger in one of the modules as it gets loaded - models.py was the place I picked, just for convenience and because I knew it would always get evaluated before requests came rolling in.
import logging, logging.handlers
logger = logging.getLogger("djangosyslog")
hdlr = logging.handlers.SysLogHandler(facility=logging.handlers.SysLogHandler.LOG_DAEMON)
formatter = logging.Formatter('%(filename)s: %(levelname)s: %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
Then when you want to send a message to the logger from your views or wherever:
logger = logging.getLogger("djangosyslog")
logger.warning("Protocol problem: %s", "connection reset")
There's .error(), .critical(), and more - check out http://docs.python.org/library/logging.html for the nitty-gritty details.
Rob Hudson's debug toolbar is great if you're looking for that debug information - I use it frequently in development myself. It gives you data about the current request and response, including the SQL used to generate any given page. You can inject the strings you're interested in into that context/response, much like a print - but I found that to be a bit difficult to deal with.
A warning: if you try to deploy code with print statements under WSGI, expect things to break. Use the logging module instead.
If you are using an Apache2 server to run your Django application and have the access and error logs enabled, your print statements will be written to the error log.
While your application is running, do the following as the root user on Linux:
tail -f /path-to-error-file.log
Apache2 logs are usually located in /var/log/apache2/.
Output will appear there whenever a print call in one of your functions is executed.