I'm using pytest, boto3, and AWS, and I want dynamic assertions with parametrized tests. How can I improve this code so it only asserts on a specific group of subnet IDs?
production_private_ids = ["subnet-08f6d70b65b5cxx38", "subnet-0b6aaaf1ce207xx03", "subnet-0e54fda8f811fxxd8"]) ....
nonproduction_private_ids = ["subnet-11f6xx0b65b5cxx38", "subnet-116aaaf1ce207xx99", "subnet-11xxfda8f811fxx77"]) ....
@pytest.mark.parametrize("subnet", ["production_private_ids", "nonproduction_private_ids", "nonproduction_public_ids", "production_public_ids"])
# if environment == production, then only check that production_private_ids exist in team_subnet
def test_sharing_subnets_exist(subnet, accountid):
    team_subnet = get_team_subnets(accountid)
    assert subnet in team_subnet

# if environment == nonproduction, then only check that nonproduction_private_ids exist in team_subnet
def test_sharing_subnets_exist(subnet, accountid):
    team_subnet = get_team_subnets(accountid)
    assert subnet in team_subnet
One common practice is to set and read environment variables to determine which environment you are running in.
For example, you can set a variable isProduction=1 in the environment. In your code you then check it with os.environ.get('isProduction') == '1' (note that environment variable values are always strings).
You can even store the private IDs themselves in environment variables, e.g. to keep them out of the codebase.
For example, on the non-production machine the environment can have the following variables:
id1="subnet-11f6xx0b65b5cxx38"
id2="subnet-116aaaf1ce207xx99"
id3="subnet-11xxfda8f811fxx77"
And another set on the production machine:
id1="subnet-08f6d70b65b5cxx38"
id2="subnet-0b6aaaf1ce207xx03"
id3="subnet-0e54fda8f811fxxd8"
In the code you then do:
import os
private_ids = [os.environ['id1'], os.environ['id2'], os.environ['id3']]
So you'll pick up the right configuration on each machine. Just make sure the environment variables are correctly sourced in your workflow/test flow.
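Putting the two ideas together, here is a minimal sketch that parametrizes only the relevant ID group. The ENVIRONMENT variable name is an assumption; get_team_subnets and the accountid fixture come from your code:

import os
import pytest

production_private_ids = ["subnet-08f6d70b65b5cxx38", "subnet-0b6aaaf1ce207xx03", "subnet-0e54fda8f811fxxd8"]
nonproduction_private_ids = ["subnet-11f6xx0b65b5cxx38", "subnet-116aaaf1ce207xx99", "subnet-11xxfda8f811fxx77"]

# choose the ID group once, at import time (ENVIRONMENT is an assumed variable name)
ids = production_private_ids if os.environ.get('ENVIRONMENT') == 'production' else nonproduction_private_ids

@pytest.mark.parametrize("subnet", ids)
def test_sharing_subnets_exist(subnet, accountid):
    # get_team_subnets and accountid are from the question's code
    team_subnet = get_team_subnets(accountid)
    assert subnet in team_subnet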
You can parametrize the tests via metafunc if you need to execute additional logic on parametrization. Example:
import os
import pytest

production_private_ids = [...]
nonproduction_private_ids = [...]

def pytest_generate_tests(metafunc):
    # if the test has `subnet` in its args, parametrize it now
    if 'subnet' in metafunc.fixturenames:
        # replace with your environment check
        if os.environ.get('NAME', None) == 'production':
            ids = production_private_ids
        else:
            ids = nonproduction_private_ids
        metafunc.parametrize('subnet', ids)

def test_sharing_subnets_exist(subnet, accountid):
    team_subnet = get_team_subnets(accountid)
    assert subnet in team_subnet
Now running pytest ... will check only non-production IDs, while NAME="production" pytest ... will check only production IDs.
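Note that pytest_generate_tests is a collection hook, so if you define it in conftest.py instead of the test module, the environment-dependent parametrization applies to every collected test that takes a subnet argument. Invocation then looks like this (test_subnets.py is a hypothetical file name):

pytest test_subnets.py                      # parametrizes with nonproduction_private_ids
NAME=production pytest test_subnets.py      # parametrizes with production_private_ids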
I'm using the dependency_injector Python library and having issues testing my use cases.
I have these containers:
class TestAdapters(containers.DeclarativeContainer):
    repository: RepositoryInterface = providers.Singleton(
        MockRepository
    ).add_args(1)
    ...  # More adapters here

class Adapters(containers.DeclarativeContainer):
    repository: RepositoryInterface = providers.Singleton(
        RedisRepository
    ).add_args(1)
    ...

class UseCases(containers.DeclarativeContainer):
    adapters = providers.DependenciesContainer()
    example_use_case: ExampleUseCaseInterface = providers.Factory(
        ExampleUseCase, repository=adapters.respository
    )
    ...
def get_adapter():
    import os
    if os.getenv("ENVIRONMENT") == "TEST":
        return TestAdapters()
    return Adapters()
In my API, I set the dependencies like this:
# FastAPI main.py file
def get_application() -> FastAPI:
    container = UseCases(adapters=get_adapter())
    container.wire(modules=["adapters.api.v1.controllers"])
    application = FastAPI(
        title=settings.PROJECT_NAME,
    )
    application.include_router(router)
    application.middleware("http")(catch_exceptions_middleware)
    return application
But I'm trying to test my use cases in isolation, without necessarily going through the API. So my strategy is to create a use_cases fixture, which returns a UseCases container with the right Adapters container set:
# In global conftest.py
@pytest.fixture
def use_cases():
    container = UseCases(adapters=TestAdapters())
    container.wire(modules=["adapters.api.v1.controllers"])
    yield container
    container.unwire()
But when I run any test with the use_cases fixture, I get this error:
dependency_injector.errors.Error: Dependency "UseCases.adapters.respository" is not defined
What am I missing here? Is there a better way to do it?
It turns out there was nothing wrong with my implementation. My problem was a typo in my use case Factory on the UseCases container.
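For reference, the error message itself points at the misspelling: the Factory referenced adapters.respository instead of adapters.repository. The corrected provider:

class UseCases(containers.DeclarativeContainer):
    adapters = providers.DependenciesContainer()
    example_use_case: ExampleUseCaseInterface = providers.Factory(
        ExampleUseCase, repository=adapters.repository  # was: adapters.respository
    )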
I have some pytest tests that all access a fixture whose scope is module. I want to move the duplicated parts of the tests into a common place and access them from there.
Specifically, in the sample code below, each of the test methods in test/test_blah.py has the variable dsn, which is the device under test's serial number. I couldn't figure out how to extract this common code. I tried accessing the dut in TestBase, but couldn't make it work.
# my_pytest/__init__.py
import pytest

@pytest.fixture(scope="module")
def device_fixture(request):
    config = getattr(request.module, 'config', {})
    device = get_device(config.get('dsn'))
    assert device is not None
    return device

...some other code...
# test/base.py
class TestBase:
    def common_method_1(self):
        pass

    def common_method_2(self):
        pass
# test/test_blah.py
from base import TestBase
import my_pytest
from my_pytest import device_fixture as dut  # 'dut' stands for 'device under test'

class TestBlah(TestBase):
    def test_001(self, dut):
        dsn = dut.get_serialno()
        ...
        # how to extract the dsn = dut.get_serialno() into
        # something common so I can keep these tests more DRY?

    def test_002(self, dut):
        dsn = dut.get_serialno()
        ...

    def test_003(self, dut):
        dsn = dut.get_serialno()
        ...
If I understand your question correctly: put your fixtures in conftest.py and they will be available as arguments for your test functions. No need to import anything; you just define:
@pytest.fixture(scope='module')
def dut():
    return 'something'
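Building on that, a minimal sketch that also extracts the repeated dsn = dut.get_serialno() call into its own fixture (assuming get_device is importable from your my_pytest package):

# conftest.py
import pytest
from my_pytest import get_device  # assumption: get_device lives in your package

@pytest.fixture(scope="module")
def dut(request):
    config = getattr(request.module, 'config', {})
    device = get_device(config.get('dsn'))
    assert device is not None
    return device

@pytest.fixture(scope="module")
def dsn(dut):
    # derived fixture: tests can now request `dsn` directly
    return dut.get_serialno()

# test_blah.py
def test_001(dsn):
    ...  # use dsn without repeating the lookup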
As per the pytest documentation, it is possible to override the default temporary directory setting as follows:
py.test --basetemp=/base_dir
When the tmpdir fixture is then used in a test ...
def test_new_base_dir(tmpdir):
    print(str(tmpdir))
    assert False
... something like the following would then be printed to the screen:
/base_dir/test_new_base_dir_0
This works as intended and for certain use cases can be very useful.
However, I would like to be able to change this setting on a per-test (or perhaps I should say a "per-fixture") basis. Is such a thing possible?
I'm close to just rolling my own tmpdir based on the code for the original, but would rather not do this -- I want to build on top of existing functionality where I can, not duplicate it.
As an aside, my particular use case is that I am writing a Python module that will act on different kinds of file systems (NFS4, etc), and it would be nice to be able to yield the functionality of tmpdir to be able to create the following fixtures:
def test_nfs3_stuff(nfs3_tmpdir):
    ...  # test NFS3 functionality

def test_nfs4_stuff(nfs4_tmpdir):
    ...  # test NFS4 functionality
In the sources of TempdirFactory, .config.option.basetemp is the attribute that stores the base temporary directory, so you can set it directly before first use:
import pytest
import time
import os

def mktemp_db(tmpdir_factory, db):
    basetemp = None
    if 'PYTEST_TMPDIR' in os.environ:
        basetemp = os.environ['PYTEST_TMPDIR']
    if basetemp:
        tmpdir_factory.config.option.basetemp = basetemp
    if db == "db1.db":
        tmpdb = tmpdir_factory.mktemp('data1_').join(db)
    elif db == "db2.db":
        tmpdb = tmpdir_factory.mktemp('data2_').join(db)
    return tmpdb

@pytest.fixture(scope='session')
def empty_db(tmpdir_factory):
    tmpdb = mktemp_db(tmpdir_factory, 'db1.db')
    print("* " + str(tmpdb))
    time.sleep(5)
    return tmpdb

@pytest.fixture(scope='session')
def empty_db2(tmpdir_factory):
    tmpdb = mktemp_db(tmpdir_factory, 'db2.db')
    print("* " + str(tmpdb))
    time.sleep(5)
    return tmpdb

def test_empty_db(empty_db):
    pass

def test_empty_db2(empty_db2):
    pass
>set PYTEST_TMPDIR=./tmp
>python.exe -m pytest -q -s test_my_db.py
* c:\tests\tmp\data1_0\db1.db
.* c:\tests\tmp\data2_0\db2.db
.
2 passed in 10.03 seconds
There didn't appear to be a nice solution to the problem as posed in the question, so I settled on making two calls to py.test:
Passing a different --basetemp into each.
Marking (using @pytest.mark.my_mark) which tests needed the special treatment of a non-standard basetemp.
Passing -k my_mark or -k-my_mark (i.e. not my_mark) into each call.
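Concretely, that amounts to something like the following (the marker name and path are illustrative):

@pytest.mark.my_mark
def test_nfs4_stuff(nfs4_tmpdir):
    ...

# call 1: marked tests against the non-standard base directory
py.test --basetemp=/base_dir -k my_mark
# call 2: the remaining tests with the default basetemp
py.test -k "not my_mark"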
I'm new to Python and want to know the Pythonic way to make behavior differ between my runtime test environment and production.
My use case is I'm using a decorator that needs different parameters in test vs production runs.
It looks like this:
# buffer_time should be 0 in test and, let's say, 5000 in prod
@retry(stop_max_attempt_number=7, wait_fixed=buffer_time)
def wait_until_volume_status_is_in_use(self, volume):
    if volume.status != 'in use':
        log_fail('Volume status is ... %s' % volume.status)
    volume.update()
    return volume.status
One solution is to use OS environment variables.
At the top of a file I can write something like this:
# some variation of this
buffer_time = 0 if os.environ['MODE'] == 'TEST' else 5000

class Guy(object):
    # Body where buffer_time is used in decorator
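Fleshed out, that variant looks like this. A minimal sketch, assuming the @retry decorator comes from the retrying library (its parameter names match the question's) and reusing the question's log_fail helper:

import os
from retrying import retry

# environment variables are strings, hence the comparison with 'TEST'
buffer_time = 0 if os.environ.get('MODE') == 'TEST' else 5000

class Guy(object):
    @retry(stop_max_attempt_number=7, wait_fixed=buffer_time)
    def wait_until_volume_status_is_in_use(self, volume):
        if volume.status != 'in use':
            log_fail('Volume status is ... %s' % volume.status)
        volume.update()
        return volume.status

Note that buffer_time is evaluated once at import time, so MODE has to be set before the module is imported; that is exactly the shared-state wrinkle mentioned below.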
Another solution is to use a settings file:
# settings.py
def init():
    global RETRY_BUFFER
    RETRY_BUFFER = 5000

# __init__.py
import settings
settings.init()

# test file
from my_module import settings
settings.RETRY_BUFFER = 0
from my_module.class_file import MyKlass
# Do Tests

# class file
import settings
buffer_time = settings.RETRY_BUFFER

class Guy(object):
    # Body where buffer_time is used in decorator
Ultimately, my problem with both solutions is that they rely on shared state.
I would like to know what is the standard way to accomplish this.
I've been thinking about ways to automatically setup configuration in my Python applications.
I usually use the following type of approach:
'''config.py'''
class Config(object):
    MAGIC_NUMBER = 44
    DEBUG = True

class Development(Config):
    LOG_LEVEL = 'DEBUG'

class Production(Config):
    DEBUG = False
    REPORT_EMAIL_TO = ["ceo@example.com", "chief_ass_kicker@example.com"]
Typically, when running the app in different ways, I do something like:
from config import Development, Production

def do_something(self):
    if self.conf.DEBUG:
        pass

def __init__(self, config='Development'):
    if config == "production":
        self.conf = Production
    else:
        self.conf = Development
I like working like this because it makes sense; however, I'm wondering if I can somehow integrate this into my git workflow too.
A lot of my applications have separate scripts, or modules that can be run alone, thus there isn't always a monolithic application to inherit configurations from some root location.
It would be cool if a lot of these scripts and separate modules could check which branch is currently checked out and make their default configuration decisions based upon that, e.g., by looking for a class in config.py that shares the name of the currently checked-out branch.
Is that possible, and what's the cleanest way to achieve it?
Is it a good/bad idea?
I'd prefer spinlok's method, but yes, you can do pretty much anything you want in your __init__, e.g.:
import inspect, subprocess, sys

def __init__(self, config='via_git'):
    if config == 'via_git':
        gitsays = subprocess.check_output(['git', 'symbolic-ref', 'HEAD'])
        cbranch = gitsays.rstrip('\n').replace('refs/heads/', '', 1)
        # now you know which branch you're on...
        tbranch = cbranch.title()  # foo -> Foo, for class name conventions
        classes = dict(inspect.getmembers(sys.modules[__name__], inspect.isclass))
        if tbranch in classes:
            print 'automatically using', tbranch
            self.conf = classes[tbranch]
        else:
            print 'on branch', cbranch, 'so falling back to Production'
            self.conf = Production
    elif config == 'production':
        self.conf = Production
    else:
        self.conf = Development
This is, um, "slightly tested" (python 2.7). Note that check_output will raise an exception if git can't get a symbolic ref, and this also depends on your working directory. You can of course use other subprocess functions (to provide a different cwd for instance).
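One porting note: on Python 3, check_output returns bytes, so the branch lookup needs a decode first. A sketch:

import subprocess

def current_branch():
    # git prints e.g. b'refs/heads/master\n'; decode before doing string work
    out = subprocess.check_output(['git', 'symbolic-ref', 'HEAD'])
    return out.decode().rstrip('\n').replace('refs/heads/', '', 1)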