Python - CherryPy testing - set session data?

When running a pytest unit test against a CherryPy server, using a cherrypy.helper.CPWebCase subclass, how can I set data for the session object? I tried calling cherrypy.session['foo'] = 'bar' as I would inside a real CherryPy request, but that just gave an "AttributeError: '_Serving' object has no attribute 'session'".
For reference, a test case might look something like this (pulled from the CherryPy Docs with minor edits):
import cherrypy
from cherrypy.test import helper

from MyApp import Root

class SimpleCPTest(helper.CPWebCase):
    def setup_server():
        cherrypy.tree.mount(Root(), "/", {'/': {'tools.sessions.on': True}})
    setup_server = staticmethod(setup_server)

    def check_two_plus_two_equals_four(self):
        # <code to set session variable to 2 here>
        # This is the question: How do I set a session variable?
        self.getPage("/")
        self.assertStatus('200 OK')
        self.assertHeader('Content-Type', 'text/html;charset=utf-8')
        self.assertBody('4')
And the handler might look something like this (or anything else, it makes no difference whatsoever):
class Root:
    @cherrypy.expose
    def test_handler(self):
        # get a random session variable and do something with it
        number_var = cherrypy.session.get('Number')
        # Add two. This will fail if the session variable has not been set,
        # or is not a number
        number_var = number_var + 2
        return str(number_var)
It's safe to assume that the config is correct, and sessions work as expected.
I could, of course, write a CherryPy page that takes a key and value as arguments, and then sets the specified session value, and call that from my test code (EDIT: I've tested this, and it does work). That, however, seems kludgy, and I'd really want to limit it to testing only somehow if I went down that road.

What you are trying to achieve is usually referred to as mocking.
While running tests, you usually want to 'mock' some of the resources you access with dummy objects that have the same interface (duck typing). This can be achieved with monkey patching. To simplify the process you may use unittest.mock.patch as either a context manager or a method/function decorator.
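As a generic, CherryPy-independent illustration of the mechanism (the Store class below is made up purely for the demonstration), patch.object swaps an attribute for the duration of a with block and restores the original afterwards:

```python
from unittest.mock import patch

class Store:
    # Hypothetical resource we want to mock during a test
    data = {'Number': 1}

# Inside the block, Store.data refers to the replacement dict;
# on exit, the original attribute is restored automatically.
with patch.object(Store, 'data', {'Number': 2}):
    assert Store.data['Number'] == 2

assert Store.data['Number'] == 1  # original value is back
```

The create=True flag used later in the answer tells patch to create the attribute if it does not already exist, which is exactly the situation with cherrypy.session outside a request.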
Please find below the working example with context manager option:
==> MyApp.py <==

import cherrypy

class Root:
    _cp_config = {'tools.sessions.on': True}

    @cherrypy.expose
    def test_handler(self):
        # get a random session variable and do something with it
        number_var = cherrypy.session.get('Number')
        # Add two. This will fail if the session variable has not been set,
        # or is not a number
        number_var = number_var + 2
        return str(number_var)
==> cp_test.py <==

from unittest.mock import patch

import cherrypy
from cherrypy.test import helper
from cherrypy.lib.sessions import RamSession

from MyApp import Root

class SimpleCPTest(helper.CPWebCase):
    @staticmethod
    def setup_server():
        cherrypy.tree.mount(Root(), '/', {})

    def test_check_two_plus_two_equals_four(self):
        # <code to set session variable to 2 here>
        sess_mock = RamSession()
        sess_mock['Number'] = 2
        with patch('cherrypy.session', sess_mock, create=True):
            # Inside this block all manipulations of `cherrypy.session`
            # actually access the `sess_mock` instance instead
            self.getPage("/test_handler")
            self.assertStatus('200 OK')
            self.assertHeader('Content-Type', 'text/html;charset=utf-8')
            self.assertBody('4')
Now you may safely run test as follows:
$ py.test -sv cp_test.py
============================================================================================================ test session starts =============================================================================================================
platform darwin -- Python 3.5.2, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- ~/.pyenv/versions/3.5.2/envs/cptest-pyenv-virtualenv/bin/python3.5
cachedir: .cache
rootdir: ~/src/cptest, inifile:
collected 2 items
cp_test.py::SimpleCPTest::test_check_two_plus_two_equals_four PASSED
cp_test.py::SimpleCPTest::test_gc <- ../../.pyenv/versions/3.5.2/envs/cptest-pyenv-virtualenv/lib/python3.5/site-packages/cherrypy/test/helper.py PASSED

Related

What is the best way to allow user to configure a Python package

I have a situation like this. I am creating a Python package. That package needs to use Redis, so I want to allow the user of the package to define the Redis URL.
Here's how I attempted to do it:
bin/main.py

from logging import basicConfig, DEBUG

from my_package.main import run
from my_package.config import config

basicConfig(filename='logs.log', level=DEBUG)

# the user defines the redis url
config['redis_url'] = 'redis://localhost:6379/0'

run()
my_package/config.py

config = {
    "redis_url": None
}

my_package/main.py

from .config import config

def run():
    print(config["redis_url"])  # prints None instead of what I want
Unfortunately, it doesn't work. In main.py the value of config["redis_url"] is None instead of the url defined in bin/main.py file. Why is that? How can I make it work?
I could pass the config to the run() function, but then if I run some other function I would need to pass the config to that function as well. Ideally I'd like to set it just once.
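For reference, the mechanism the question relies on does work when all importers share one module object. Below is a minimal single-file sketch (collapsing the three files above into one, so it is an illustration of the intended pattern, not a diagnosis of the actual failure): a module-level dict is a single shared object, so mutating it in place, rather than rebinding the name, is visible to everyone holding a reference.

```python
# Stand-in for my_package/config.py: one shared, module-level dict
config = {"redis_url": None}

# Stand-in for my_package/main.py: reads the shared dict at call time
def run():
    return config["redis_url"]

# Stand-in for bin/main.py: mutate the dict in place (config[...] = ...),
# never rebind it (config = {...}), or importers keep the old object
config["redis_url"] = "redis://localhost:6379/0"
print(run())  # prints redis://localhost:6379/0
```

A common way this breaks in practice is the package being imported under two different module names (e.g. a script importing `config` directly while the package imports `my_package.config`), which yields two separate dicts.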

How to Dynamically Change pytest's tmpdir Base Directory

As per the pytest documentation, it is possible to override the default temporary directory setting as follows:
py.test --basetemp=/base_dir
When the tmpdir fixture is then used in a test ...
def test_new_base_dir(tmpdir):
    print(str(tmpdir))
    assert False
... something like the following would then be printed to the screen:
/base_dir/test_new_base_dir_0
This works as intended and for certain use cases can be very useful.
However, I would like to be able to change this setting on a per-test (or perhaps I should say a "per-fixture") basis. Is such a thing possible?
I'm close to just rolling my own tmpdir based on the code for the original, but would rather not do this -- I want to build on top of existing functionality where I can, not duplicate it.
As an aside, my particular use case is that I am writing a Python module that will act on different kinds of file systems (NFS4, etc), and it would be nice to be able to yield the functionality of tmpdir to be able to create the following fixtures:
def test_nfs3_stuff(nfs3_tmpdir):
    # ... test NFS3 functionality

def test_nfs4_stuff(nfs4_tmpdir):
    # ... test NFS4 functionality
In the sources of TempdirFactory, the .config.option.basetemp attribute is used to store the basetemp, so you can set it directly before use:
import pytest
import time
import os

def mktemp_db(tmpdir_factory, db):
    basetemp = None
    if 'PYTEST_TMPDIR' in os.environ:
        basetemp = os.environ['PYTEST_TMPDIR']
    if basetemp:
        tmpdir_factory.config.option.basetemp = basetemp
    if db == "db1.db":
        tmpdb = tmpdir_factory.mktemp('data1_').join(db)
    elif db == "db2.db":
        tmpdb = tmpdir_factory.mktemp('data2_').join(db)
    return tmpdb

@pytest.fixture(scope='session')
def empty_db(tmpdir_factory):
    tmpdb = mktemp_db(tmpdir_factory, 'db1.db')
    print("* " + str(tmpdb))
    time.sleep(5)
    return tmpdb

@pytest.fixture(scope='session')
def empty_db2(tmpdir_factory):
    tmpdb = mktemp_db(tmpdir_factory, 'db2.db')
    print("* " + str(tmpdb))
    time.sleep(5)
    return tmpdb

def test_empty_db(empty_db):
    pass

def test_empty_db2(empty_db2):
    pass
>set PYTEST_TMPDIR=./tmp
>python.exe -m pytest -q -s test_my_db.py
* c:\tests\tmp\data1_0\db1.db
.* c:\tests\tmp\data2_0\db2.db
.
2 passed in 10.03 seconds
There didn't appear to be a nice solution to the problem as posed in the question, so I settled on making two calls to py.test:
- passing in a different --basetemp for each;
- marking (using @pytest.mark.my_mark) which tests need the special treatment of a non-standard basetemp;
- passing -k my_mark or -k-my_mark into each call.

What is the Python best practice to detect, then influence, behavior if in development?

I'm new to Python and I want to know the Pythonic way to make behavior differ between my runtime test environment and production.
My use case is a decorator that needs different parameters in test versus production runs.
It looks like this:
# buffer_time should be 0 in test and, let's say, 5000 in prod
@retry(stop_max_attempt_number=7, wait_fixed=buffer_time)
def wait_until_volume_status_is_in_use(self, volume):
    if volume.status != 'in use':
        log_fail('Volume status is ... %s' % volume.status)
        volume.update()
    return volume.status
One solution is to use OS environment variables.
At the top of a file I can write something like this:
# some variation of this
buffer_time = 0 if os.environ['MODE'] == 'TEST' else 5000

class Guy(object):
    # Body where buffer_time is used in decorator
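A minimal runnable sketch of this environment-variable approach, wrapped in a helper so the lookup happens in one place (MODE and the 0/5000 values come from the snippet above; os.environ.get is used so a missing MODE falls back to the production value instead of raising KeyError):

```python
import os

def get_buffer_time():
    # 'TEST' mode gets no wait; anything else (including an unset MODE)
    # gets the production buffer of 5000 ms.
    return 0 if os.environ.get('MODE') == 'TEST' else 5000
```

A test run can then set MODE=TEST in its environment before the module under test is imported, without any change to the production code path.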
Another solution is to use a settings file:

# settings.py
def init():
    global RETRY_BUFFER
    RETRY_BUFFER = 5000

# __init__.py
import settings
settings.init()

# test file
from my_module import settings
settings.RETRY_BUFFER = 0
from my_module.class_file import MyKlass
# Do Tests

# class file
import settings
buffer_time = settings.RETRY_BUFFER

class Guy(object):
    # Body where buffer_time is used in decorator
Ultimately my problem with both solutions is that they both use shared state.
I would like to know the standard way to accomplish this.

How to stop cherrypy auto-reload from changing the process name on Debian?

I am developing a project using cherrypy, on Debian. At my work, the administrators want to see the name of the project instead of "python" displayed when using commands like ps -e. However, when cherrypy auto-reloads after a source file is modified, it changes the process name back.
For example, if I take the most basic cherrypy tutorial and save it under NameToSee.py:
#!/usr/bin/python
import cherrypy

class HelloWorld(object):
    @cherrypy.expose
    def index(self):
        return "Hello world!"

if __name__ == '__main__':
    cherrypy.quickstart(HelloWorld())
By adding the shebang at the start, when I launch it $ ./NameToSee.py &, I get a process (say 31051) whose name is "NameToSee.py":
$ head /proc/31051/status
Name: NameToSee.py
State: S (sleeping)
However, whenever I change the source code files (for example, by adding an empty line), the process name changes:
$ head /proc/31051/status
Name: python
State: S (sleeping)
So, my question is: Can I get both cherrypy auto-reload and custom process name ? If not, can I remove the cherrypy auto-reload?
I am running on debian wheezy, with python 2.7.3 and cherrypy 3.2.2
This sample covers both cases:
import cherrypy
from cherrypy.process.plugins import SimplePlugin

PROC_NAME = 'sample'

def set_proc_name(newname):
    """
    Set the process name.

    Source: http://stackoverflow.com/a/923034/298371
    """
    from ctypes import cdll, byref, create_string_buffer
    libc = cdll.LoadLibrary('libc.so.6')
    buff = create_string_buffer(len(newname) + 1)
    buff.value = newname
    libc.prctl(15, byref(buff), 0, 0, 0)  # 15 == PR_SET_NAME

class NamedProcess(SimplePlugin):
    """
    Set the name of the process every time the engine starts.
    """
    def start(self):
        self.bus.log("Setting the name as '{}'".format(PROC_NAME))
        set_proc_name(PROC_NAME)

class HelloWorld(object):
    @cherrypy.expose
    def index(self):
        return "Hello world!"

def run_without_autoreload():
    set_proc_name(PROC_NAME)
    cherrypy.quickstart(HelloWorld(), config={
        'global': {
            'engine.autoreload.on': False
        }
    })

def run_with_autoreload():
    # Works under any configuration, but for the sake of the
    # question this works with the autoreloader enabled.
    NamedProcess(cherrypy.engine).subscribe()
    cherrypy.quickstart(HelloWorld())

if __name__ == '__main__':
    run_with_autoreload()
    # run_without_autoreload()
You can test it with:
    cat /proc/`pgrep sample`/status
(or maybe just use pgrep)
And as a final recommendation, consider using the "production" environment (see the docs), which also includes disabling the autoreload plugin.
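A minimal sketch of that last recommendation, as a config fragment (the 'production' environment preset is described in the CherryPy configuration docs; among other debug conveniences it turns the autoreloader off):

```python
import cherrypy

# Config fragment: switch to the documented 'production' environment
# preset, which disables engine.autoreload among other debug settings.
cherrypy.config.update({'environment': 'production'})
```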

Django + Pytest + Selenium

I recently switched from Django's TestCase classes to the third party pytest system. This allowed me to speed up my test suite significantly (by a factor of 5), and has been a great experience overall.
I do have issues with Selenium though. I've made a simple fixture to include the browser in my tests:
@pytest.yield_fixture
def browser(live_server, transactional_db, admin_user):
    driver_ = webdriver.Firefox()
    driver_.server_url = live_server.url
    driver_.implicitly_wait(3)
    yield driver_
    driver_.quit()
But for some reason, The database is not properly reset between tests. I have a test similar to
class TestCase:
    def test_some_unittest(db):
        # Create some items
        # ...

    def test_with_selenium(browser):
        # The items from the above testcase exist in this testcase
The objects created in test_some_unittest are present in test_with_selenium. I'm not really sure how to solve this.
Switching from django.test.TestCase in favour of pytest means employing the pytest-django plugin, and your tests should look like this:
class TestSomething(object):
    def setup_method(self, method):
        pass

    @pytest.mark.django_db
    def test_something_with_dg(self):
        assert True
That above all means no inheritance from django.test.TestCase (which is a derivation of the Python standard unittest framework).
The @pytest.mark.django_db marker means your test case will run in a transaction, which will be rolled back once the test case is over.
The first occurrence of the django_db marker will also trigger the Django migrations.
Beware of using database calls in special pytest methods such as setup_method, for it's unsupported and otherwise problematic:
django-pytest setup_method database issue
def _django_db_fixture_helper(transactional, request, _django_cursor_wrapper):
    if is_django_unittest(request):
        return

    if transactional:
        _django_cursor_wrapper.enable()

        def flushdb():
            """Flush the database and close database connections"""
            # Django does this by default *before* each test
            # instead of after.
            from django.db import connections
            from django.core.management import call_command

            for db in connections:
                call_command('flush', verbosity=0,
                             interactive=False, database=db)
            for conn in connections.all():
                conn.close()

        request.addfinalizer(_django_cursor_wrapper.disable)
        request.addfinalizer(flushdb)
    else:
        if 'live_server' in request.funcargnames:
            return
        from django.test import TestCase

        _django_cursor_wrapper.enable()
        _django_cursor_wrapper._is_transactional = False
        case = TestCase(methodName='__init__')
        case._pre_setup()
        request.addfinalizer(_django_cursor_wrapper.disable)
        request.addfinalizer(case._post_teardown)
As I see, you use pytest-django (which is fine).
From this code of it, it doesn't flush the db if it's a non-transactional db.
So in your 'other' tests you'd have to use transactional_db, and then it will be isolated as you wanted.
So your code will look like:
class TestCase:
    def test_some_unittest(transactional_db):
        # Create some items
        # ...

    def test_with_selenium(browser):
        # The items from the above testcase do not exist in this testcase
However, an improvement to pytest-django could be that the flush is performed before, not after, the yield of the fixture value, which makes much more sense. It's not so important what's in the teardown; it's important that the setup is correct.
As a side suggestion, for the browser fixture you can just use the pytest-splinter plugin.

Categories

Resources