Why is logging not working when I use pytest? - python

When I run pytest, I am not seeing any log messages. How can I fix this? I tried to search for a pytest.ini file, but it is not present locally. I am new to pytest and need some help.
import test_todo_lib
import logging

logging.basicConfig(format='%(asctime)s - %(message)s', level=logging.CRITICAL)

# NOTE: Run pytest with the --capture=tee-sys option in order to display standard output
def test_basic_unit_tests(browser):
    test_page = test_todo_lib.ToDoAppPage(browser)
    test_page.load()
    logging.info("Check the basic functions")

Related

How to set the log level for an imported module?

Write your code with a nice logger
import logging
import os  # needed for os.getcwd() and os.path below

def init_logging():
    logFormatter = logging.Formatter("[%(asctime)s] %(levelname)s::%(module)s::%(funcName)s() %(message)s")
    rootLogger = logging.getLogger()

    LOG_DIR = os.getcwd() + '/' + 'logs'
    if not os.path.exists(LOG_DIR):
        os.makedirs(LOG_DIR)
    fileHandler = logging.FileHandler("{0}/{1}.log".format(LOG_DIR, "g2"))
    fileHandler.setFormatter(logFormatter)
    rootLogger.addHandler(fileHandler)
    rootLogger.setLevel(logging.DEBUG)

    consoleHandler = logging.StreamHandler()
    consoleHandler.setFormatter(logFormatter)
    rootLogger.addHandler(consoleHandler)
    return rootLogger

logger = init_logging()
works as expected. Logging using logger.debug("Hello! :)") logs to file and console.
In a second step you want to import an external module which also logs using the logging module:
Install it using pip3 install pymisp (or any other external module)
Import it using from pymisp import PyMISP (or any other external module)
Create an object of it using self.pymisp = PyMISP(self.ds_model.api_url, self.ds_model.api_key, False, 'json') (or any other...)
What happens now is that every debug log output from the imported module is logged to the log file and the console. The question is how to set a different (higher) log level for the imported module only.
As Meet Sinoja and anishtain4 pointed out in the comments, the best and most generic method is to retrieve the logger by the name of the imported module as follows:
import logging
import some_module_with_logging
logging.getLogger("some_module_with_logging").setLevel(logging.WARNING)
Another option (though not recommended if the generic method above works) is to extract the module's logger variable and customize it to your needs. Most third-party modules store it in a module-level variable called logger or _log. In your case:
import logging
import pymisp
pymisp.logger.setLevel(logging.INFO)
# code of module goes here
A colleague of mine helped with this question:
Get a named logger: yourLogger = logging.getLogger('your_logger')
Add a filter to each handler to prevent it from printing/saving logs other than yours:
for handler in logging.root.handlers:
    handler.addFilter(logging.Filter('your_logger'))
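A runnable sketch of that filter approach, with hypothetical logger names ('your_logger' and 'noisy_module' are illustrative): a logging.Filter constructed with a name passes only records from that logger and its descendants, so attaching it to every root handler mutes everything else:

```python
import io
import logging

stream = io.StringIO()
root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(logging.StreamHandler(stream))

# Filter('your_logger') lets through records named 'your_logger'
# or 'your_logger.<child>'; all other records are dropped by this handler.
for handler in logging.root.handlers:
    handler.addFilter(logging.Filter('your_logger'))

logging.getLogger('your_logger').info("kept")
logging.getLogger('noisy_module').info("dropped")
```

Note the filter sits on the handler, so it silences third-party records without touching the third-party loggers themselves.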

Python How can I set up global logger setting in multi python files?

I just started with Python and I am struggling to use the logger. I have two Python files: app.py and liba.py. I want to set up logging in app.py and use it from liba.py (and other libraries). Do you have any good ideas, or can you share any references?
file structure
entry.py
lib/liba.py
app.py
#! /usr/bin/env python3
import logging
logger = logging.getLogger(__name__)
from lib import liba
handler = logging.FileHandler('/tmp/app.log', 'a+')
logger.addHandler(handler)
logger.warn('sample')
liba.out()
lib/liba.py
#! /usr/bin/env python3
import logging
logger = logging.getLogger(__name__)
def out():
    logger.warn('liba')
run python
$ python3 app.py
liba
app.py writes its log to the log file; liba.py does not write its log into the file. I want to save both logs in the same file.
Do like so:
app.py
#! /usr/bin/env python3
import logging
logger = logging.getLogger()
handler = logging.FileHandler('/tmp/app.log', 'a+')
logger.addHandler(handler)
logger.warn('sample')
from lib import liba
liba.out()
lib/liba.py
#! /usr/bin/env python3
import logging
def out():
    logging.warn('liba')
You don't need to instantiate a logger in your modules unless you want to configure handlers, which you should only do in your main script. All logging then goes to the root logger, which is what you get when calling logging.getLogger() with no name. I like to use it this way because you don't need to match names across all your modules for it to work: in your modules you just emit log messages with logging.warn('blabla'). You also need to make sure you define all your handlers before any call to logging.warn is made; otherwise a default handler will take its place.
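A condensed, runnable sketch of that pattern: handlers are configured once on the (unnamed) root logger, and a stand-in for lib/liba.py just calls the module-level logging functions, which route to the root logger's handlers (the StringIO stream replaces /tmp/app.log for demonstration):

```python
import io
import logging

stream = io.StringIO()                       # stands in for /tmp/app.log
root = logging.getLogger()                   # no name -> the root logger
root.addHandler(logging.StreamHandler(stream))

def out():                                   # stands in for lib/liba.py's out()
    logging.warning('liba')                  # no logger object needed in modules

logging.warning('sample')
out()
print(stream.getvalue())                     # both records land in one handler
```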

How does Python logging get its configuration?

I am used to Python logging; it works fine. logging.basicConfig(...) is set in one module (a some.py file), and then we can use logging everywhere. Obviously, logging is global.
The question is how logging finds its settings when we do not call the module where basicConfig(...) appears (the some.py file). Does logging scan all the packages?
Even when logging.basicConfig(...) is put into an any.py and that module is never imported or used anywhere, the logging settings take effect!
To understand logging you have to dive into the sources of Python's standard library.
Here is the trick:
#/usr/lib/python3.2/logging/__init__.py
...
root = RootLogger(WARNING)
Logger.root = root
Logger.manager = Manager(Logger.root)
...
# and
def basicConfig(**kwargs):
    ...
    hdlr = StreamHandler(stream)
    fs = kwargs.get("format", BASIC_FORMAT)
    dfs = kwargs.get("datefmt", None)
    style = kwargs.get("style", '%')
    fmt = Formatter(fs, dfs, style)
    hdlr.setFormatter(fmt)
    root.addHandler(hdlr)
So, when you call basicConfig() with certain parameters, the root logger is configured.
Finally getLogger:
def getLogger(name=None):
    """
    Return a logger with the specified name, creating it if necessary.

    If no name is specified, return the root logger.
    """
    if name:
        return Logger.manager.getLogger(name)
    else:
        return root
I think there is no magic scanning here.
Try to test it this way in a separate test directory:
test/main.py:
import logging
logging.info('test')
test/any.py:
import logging
logging.basicConfig(filename='test.log', level=logging.INFO)
python main.py
Result: NO test.log file.
Now let's update the test:
test/main.py:
import logging
import any
logging.info('test')
python main.py
Result: new test.log file with INFO:root:test string inside.
So I guess that any.py is, in your case, imported somehow, despite your expectations.
You can easily find out where any.py is imported from; just add a few lines there:
test/any.py:
from traceback import print_stack
print_stack()
...
python main.py
Result:
File "main.py", line 2, in <module>
    import any
File "any.py", line 2, in <module>
    print_stack()
This stack shows that any.py is imported from main.py.
I hope you will find where it is imported from in your case.

Logging does not work

I am using a standard logging configuration, set in the settings.py file and accessed in the program, but I get the error:
No handlers could be found for logger
It works when run from the console but does not work when run from Eclipse.
The code is as follows:
import logging
from config import settings

logger = logging.getLogger('engine')

class ReplyUser(object):
    def __init__(self):
        logger.info("Initalizes ReplyUser")

    def myfun(self):
        logger.info("Hi")
        print "hi"
I am guessing the problem is in the path Eclipse is using: it cannot find settings.py, and since the handler information is stored in settings.py, we get the error.

Django unit testing - Why can't I just run ./tests.py on myApp?

So I'm very familiar with manage.py test myapp, but I can't figure out how to make my tests.py work as a stand-alone executable. You may be wondering why I would want to do this. Well, I'm working (now) in Eclipse and I can't seem to figure out how to set up the tool to simply run this command. Regardless, it would be very nice to simply wrap tests.py so I can just run it directly.
Here is what my tests.py looks like.
"""
This simply tests myapp
"""
import sys
import logging
from django.test import TestCase
from django.conf import settings
from django.test.utils import get_runner
class ModelTest(TestCase):
    def test_model_test1(self):
        """
        This is test 1
        """
        self.failUnlessEqual(1 + 1, 2)

    def test_model_test2(self):
        """
        This is test 2
        """
        self.failUnlessEqual(1 + 1, 2)

    def test_model_test3(self):
        """
        This is test 3
        """
        self.failUnlessEqual(1 + 1, 2)

def run_tests():
    test_runner = get_runner(settings)
    failures = test_runner([], verbosity=9, interactive=False)
    sys.exit(failures)

if __name__ == '__main__':
    # Setup Logging
    loglevel = logging.DEBUG
    logging.basicConfig(format="%(levelname)-8s %(asctime)s %(name)s %(message)s",
                        datefmt='%m/%d/%y %H:%M:%S', stream=sys.stdout)
    log = logging.getLogger("")
    run_tests()
I think the solution lies on this line, but I can't seem to figure out what the first argument needs to be in order for it to magically start working:
failures = test_runner([], verbosity=9, interactive=False)
Thanks for helping!!
**** Updates ****
What I am looking to do (Doh!) is to simply run "myApp" tests. The problem is that this works (and chmod is not the problem) but it wants to run the entire test suite. I don't want that. I just want to run the myApp test suite.
Thanks again!
You could create an "External Tool" configuration for your project, such as:
Location: ${project_loc}/src/${project_name}/manage.py
Working Directory: ${project_loc}/src/${project_name}/
Arguments: test ${string_prompt}
This will run manage.py test <whatever name you type in the string prompt>.
The values above assume that you created a pydev project in Eclipse and then housed your Django project in the pydev src directory. It also assumes that you have the project name for pydev be the same name of your Django project. It will use the currently selected project in the package explorer to determine project_loc and project_name.
a. This should be the first line of your code file (tests.py):
#!/usr/bin/env python
b. Run $ chmod +x tests.py
