I am using MySQLdb in Python, and I found that MySQLdb prints some log output to my console, mixed with the logs printed by my own code. Could I supply a logging.conf file that specifies a logger for MySQLdb, and then control where its output goes and at which level?
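In general, the logging module lets you attach handlers and levels to any named logger via logging.config.dictConfig. A minimal sketch, assuming (hypothetically) that MySQLdb emitted its messages through a logger named "MySQLdb" - the logger name and filename here are placeholders:

```python
import logging
import logging.config

# Hypothetical config: assumes MySQLdb logs under a logger named "MySQLdb".
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "mysql_file": {
            "class": "logging.FileHandler",
            "filename": "mysqldb.log",  # placeholder path
        },
    },
    "loggers": {
        "MySQLdb": {
            "handlers": ["mysql_file"],
            "level": "WARNING",       # only WARNING and above reach the file
            "propagate": False,       # keep it out of the root/console output
        },
    },
}
logging.config.dictConfig(LOGGING)
```

With propagate set to False, records from that logger go only to the file handler, so they no longer mix with your own console logs.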
I haven't been able to find a solution to this problem elsewhere, even though it seems it should be a common one.
import logging
logging.basicConfig(filename='logs.log', level=logging.INFO)
logging.info('something happened')
I want to create and write to the log file, but instead it displays the messages in the notebook output cell.
Solved: the root of the problem was that subsequent calls to logging.basicConfig do nothing; only the first call affects the configuration of the root logger. So nothing worked until I restarted the kernel in my notebook.
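One way around that, assuming Python 3.8 or later: basicConfig accepts force=True, which removes any handlers already attached to the root logger before applying the new configuration:

```python
import logging

# force=True (Python 3.8+) removes handlers that an earlier basicConfig call
# or the notebook environment already attached to the root logger.
logging.basicConfig(filename='logs.log', level=logging.INFO, force=True)
logging.info('something happened')
```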
I've just tried your code in Jupyter Notebook. It ran just fine. I'm using Python 3.6 and this is on Windows 10:
In [1]: import logging
logging.basicConfig(filename='logs.log', level=logging.INFO)
logging.info('something happened')
In [2]: with open("logs.log") as log:
   ...:     print(log.read())
INFO:root:something happened
Logging in Python is surprisingly difficult to configure - IMO one of the worst libraries in the Python universe. You might try the logbook package, which claims to offer a better experience.
In my opinion, you are missing a call like this:
logger = logging.getLogger("mymodule")
logger.info("blub")
Sorry, I first ignored the bit about Jupyter, but that's an important clue.
Jupyter sets up a logger so it can capture logging for you and print it - but that disables basicConfig.
This thread contains information on how to circumvent the problem: Get Output From the logging Module in IPython Notebook. The answer from skulz00 seems to be OK.
Log to both a file and the Jupyter output:
import logging

logger = logging.getLogger()          # root logger; Jupyter's handler already prints it
logger.setLevel(logging.DEBUG)
fh = logging.FileHandler('example.log')
fh.setLevel(logging.DEBUG)
logger.addHandler(fh)
logger.info('Hello world')
If you are encountering this issue in JupyterLab, it might also be because of this problem: JupyterLab does not seem to refresh log files that are already open in its viewer, even after they are updated on disk. That's why I recommend viewing log files in a dedicated application such as Visual Studio Code.
I'm using Paramiko in Python to run commands on a box through SSH. How do I use Paramiko's logging? I mean, force it to write logs (to a file or the terminal) and set the log level.
Paramiko names its loggers, so simply:
import logging
import paramiko
logging.basicConfig()
logging.getLogger("paramiko").setLevel(logging.WARNING) # for example
See the logging cookbook for some more examples.
You can also use log_to_file from paramiko.util to log directly to a file.
paramiko.util.log_to_file("<log_file_path>", level="WARN")
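Under the hood this is ordinary stdlib logging; a rough equivalent of log_to_file using logging directly (the filename and format string below are my own choices) would be:

```python
import logging

# Attach a FileHandler to the logger paramiko writes to ("paramiko")
logger = logging.getLogger("paramiko")
logger.setLevel(logging.DEBUG)
handler = logging.FileHandler("paramiko.log")
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))
logger.addHandler(handler)

logger.debug("transport opened")  # normally paramiko itself emits these
```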
I know this has been discussed here before, but I haven't found a solution that works for me. I already have a Python script that I wrote, and I currently have it run at boot. What I would like to do is log all output, including any print statements to the console and any error messages that come up. I do like the logging module and would prefer to use it over looking at all the output on the console. Any suggestions?
If you manage your script using supervisor it will automatically handle all logging of stdout/stderr for you.
Additionally, it can automatically restart your script if it crashes.
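A sketch of what the relevant supervisor program section might look like (the program name and all paths below are placeholders):

```ini
[program:myscript]
command=python /path/to/myscript.py
autostart=true
autorestart=true
; supervisor redirects the process's stdout/stderr to these files
stdout_logfile=/var/log/myscript.out.log
stderr_logfile=/var/log/myscript.err.log
```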
I'm completely new to working with Python (I have a PHP background)
I'm used to PHP's Error Handling and Logging. Yet, with Python I'm getting a 500 Error on a very simple script that should work.
Is there a way to turn on Error Handling with Python? As 500 Error doesn't tell me much of anything, except that something is not right.
I have looked on the net for an answer to this, but I'm not finding a solution to what should be very obvious.
Your question is asking how to see errors or exceptions (not how to handle them, though of course you need to handle those errors as well), and from the 500 error and the PHP background it seems you are doing web programming with Python.
There are many ways to do so in Python; here are some that I tend to use:
in production, use the logging module to log errors. There are lots of tools that can help you, such as Sentry. You can read more on logging here.
in development, run your script with python -m pdb myscript.py, which starts it under the debugger; then enter c (continue) to run until an error occurs, at which point you can inspect program state at the interactive pdb (Python debugger) prompt.
in development, if you are using a framework (or your script relies on WSGI), you can use Werkzeug's graphical, interactive, JavaScript-based in-browser debugger, which lets you debug at every level of the call stack.
And, to point out the most obvious one: the Python interpreter will print a stack trace if the program crashes, e.g.:
→ python -c 'pprint "hello"'
File "<string>", line 1
pprint "hello"
^
SyntaxError: invalid syntax
# print.py is just one line (with the deliberate typo): pprint 'hello world'
→ python print.py
File "print.py", line 1
pprint 'hello world'
^
SyntaxError: invalid syntax
UPDATE:
It seems like you aren't using any framework, and you are behind a host which, from the look of it, didn't tell you exactly how it is serving your Python script. Since all you want is to see the stack trace in the browser, you can do the following depending on what your host uses:
If your script is running behind a server via CGI, all you need to do is to use the Python cgitb module to print the stack trace on the HTML page:
import cgitb
cgitb.enable()
However, it is very likely the shared hosting you signed up for is using mod_python with Apache, so turning on PythonDebug in the Apache config file should print the stack trace in the browser:
PythonDebug On
For Apache with mod_wsgi, there's a better article written than I could summarize here: mod_wsgi Debugging Techniques
We have recently switched to py.test for Python testing (which is fantastic, btw). However, I'm trying to figure out how to control the log output (i.e. from the built-in Python logging module). We have pytest-capturelog installed and it works as expected; when we want to see logs we can pass the --nologcapture option.
However, how do you control the logging level (e.g. info, debug, etc.) and also filter the logging (if you're only interested in a specific module)? Are there existing plugins for py.test to achieve this, or do we need to roll our own?
Thanks,
Jonny
Installing and using the pytest-capturelog plugin could satisfy most of your pytest/logging needs. If something is missing you should be able to implement it relatively easily.
As Holger said you can use pytest-capturelog:
def test_foo(caplog):
    caplog.setLevel(logging.INFO)
    pass
If you don't want to use pytest-capturelog, you can use a stdout StreamHandler in your logging config so pytest will capture the log output. Here is an example basicConfig:
import logging
import sys

logging.basicConfig(level=logging.DEBUG, stream=sys.stdout)
A bit of a late contribution, but I can recommend pytest-logging for a simple drop-in log-capture solution. After pip install pytest-logging you can control the verbosity of your logs (displayed on screen) with:
$ py.test -s -v tests/your_test.py
$ py.test -s -vv tests/your_test.py
$ py.test -s -vvvv tests/your_test.py
etc. NB: the -s flag is important - without it, py.test will filter out all the sys.stderr information.
Pytest now has native support for logging control via the caplog fixture; no need for plugins.
You can specify the logging level for a particular logger or by default for the root logger:
import logging

import pytest

def test_bar(caplog):
    caplog.set_level(logging.CRITICAL, logger='root.baz')
Pytest also captures log output in caplog.records so you can assert logged levels and messages. For further information see the official documentation here and here.
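For example, a sketch of asserting on the captured records (the logger name and message here are made up):

```python
import logging

def test_payload(caplog):
    caplog.set_level(logging.INFO)
    logging.getLogger("root.baz").info("payload sent")
    # caplog.records is a list of the captured logging.LogRecord objects
    assert caplog.records[0].levelname == "INFO"
    assert "payload sent" in caplog.text
```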
A bit of an even later contribution: you can try pytest-logger. The novelty of this plugin is logging to the filesystem: pytest provides a nodeid for each test item, which can be used to organize a per-session logs directory (with the help of pytest's tmpdir facility and its test-case begin/end hooks).
You can configure multiple handlers (with levels) for the terminal and the filesystem separately, and provide your own command-line options for filtering loggers/levels to make it work for your specific test environment - e.g. by default you can log everything to the filesystem and a small fraction to the terminal, which can be changed per session with the --log option if needed. The plugin does nothing by default if the user defines no hooks.