I've been thinking about ways to automatically set up configuration in my Python applications.
I usually use the following type of approach:
'''config.py'''
class Config(object):
    MAGIC_NUMBER = 44
    DEBUG = True

class Development(Config):
    LOG_LEVEL = 'DEBUG'

class Production(Config):
    DEBUG = False
    REPORT_EMAIL_TO = ["ceo@example.com", "chief_ass_kicker@example.com"]
Typically, when I'm running the app in different ways, I do something like this:

from config import Development, Production

class App(object):  # whichever class holds the configuration
    def __init__(self, config='Development'):
        if config == "production":
            self.conf = Production
        else:
            self.conf = Development

    def do_something(self):
        if self.conf.DEBUG:
            pass
I like working like this because it makes sense, but I'm wondering whether I can also integrate it into my git workflow.
A lot of my applications have separate scripts or modules that can be run on their own, so there isn't always a monolithic application to inherit configuration from some root location.
It would be cool if these scripts and separate modules could check which branch is currently checked out and make their default configuration decisions based on that, e.g., by looking for a class in config.py whose name matches the name of the currently checked-out branch.
Is that possible, and what's the cleanest way to achieve it?
Is it a good/bad idea?
I'd prefer spinlok's method, but yes, you can do pretty much anything you want in your __init__, e.g.:
import inspect, subprocess, sys

def __init__(self, config='via_git'):
    if config == 'via_git':
        # ask git for the symbolic ref of HEAD, e.g. 'refs/heads/master'
        gitsays = subprocess.check_output(['git', 'symbolic-ref', 'HEAD'])
        cbranch = gitsays.rstrip('\n').replace('refs/heads/', '', 1)
        # now you know which branch you're on...
        tbranch = cbranch.title()  # foo -> Foo, for class name conventions
        classes = dict(inspect.getmembers(sys.modules[__name__], inspect.isclass))
        if tbranch in classes:
            print 'automatically using', tbranch
            self.conf = classes[tbranch]
        else:
            print 'on branch', cbranch, 'so falling back to Production'
            self.conf = Production
    elif config == 'production':
        self.conf = Production
    else:
        self.conf = Development
This is, um, "slightly tested" (Python 2.7). Note that check_output will raise an exception if git can't get a symbolic ref, and that the result also depends on your working directory. You can of course use other subprocess functions (to provide a different cwd, for instance).
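For instance, a minimal sketch of pointing the git call at a specific checkout rather than the process's current directory (the path is just a placeholder):

# Sketch: run git in an explicit working directory via the cwd argument.
branch_ref = subprocess.check_output(
    ['git', 'symbolic-ref', 'HEAD'],
    cwd='/path/to/your/repo',  # hypothetical checkout location
)
current_branch = branch_ref.rstrip('\n').replace('refs/heads/', '', 1)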
Related
I'm using the dependency_injector Python library and I'm having issues testing my use cases.
I have these containers:
class TestAdapters(containers.DeclarativeContainer):
    repository: RepositoryInterface = providers.Singleton(
        MockRepository
    ).add_args(1)
    ...  # More adapters here

class Adapters(containers.DeclarativeContainer):
    repository: RepositoryInterface = providers.Singleton(
        RedisRepository
    ).add_args(1)
    ...
class UseCases(containers.DeclarativeContainer):
    adapters = providers.DependenciesContainer()

    example_use_case: ExampleUseCaseInterface = providers.Factory(
        ExampleUseCase, repository=adapters.respository
    )
    ...
def get_adapter():
    import os

    if os.getenv("ENVIRONMENT") == "TEST":
        return TestAdapters()
    return Adapters()
In my API, I set up the dependencies like this:
# FastAPI main.py file
def get_application() -> FastAPI:
    container = UseCases(adapters=get_adapter())
    container.wire(modules=["adapters.api.v1.controllers"])

    application = FastAPI(
        title=settings.PROJECT_NAME,
    )
    application.include_router(router)
    application.middleware("http")(catch_exceptions_middleware)
    return application
But I'm trying to test my use cases in isolation, without necessarily going through the API. So my strategy is to create a use_cases fixture that returns the UseCases container with the right Adapters container set:
# In the global conftest.py
@pytest.fixture
def use_cases():
    container = UseCases(adapters=TestAdapters())
    container.wire(modules=["adapters.api.v1.controllers"])
    yield container
    container.unwire()
But when I run any test that uses the use_cases fixture, I get this error:
dependency_injector.errors.Error: Dependency "UseCases.adapters.respository" is not defined
What am I missing here? Is there a better way to do this?
There was nothing wrong with my implementation. My problem was a typo in the use_case Factory in the UseCases container: repository was misspelled as respository.
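For reference, the corrected provider only changes the misspelled keyword argument:

class UseCases(containers.DeclarativeContainer):
    adapters = providers.DependenciesContainer()

    example_use_case: ExampleUseCaseInterface = providers.Factory(
        ExampleUseCase, repository=adapters.repository  # was: adapters.respository
    )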
I am trying to use pytest-xdist for running parallel tests.
It works and I see improved performance.
To improve things further, I wanted to provide it with multiple databases by using django_db_modify_db_settings_xdist_suffix.
I override that function in my conftest.py.
Since I have four workers, I created four databases manually. Then I use conftest.py to modify settings.DATABASES so that tests run against DBNAME_ (with the worker suffix appended) instead of DBNAME.
I verified that my settings.DATABASES has changed and is correct.
But the queries are still going to the old database DBNAME (which is no longer in my settings.DATABASES).
What could be going wrong?
Do I need to make any other change, or is the change in the conftest.py fixture enough?
Thanks in advance for any help or direction.
EDIT:
My conftest.py has a lot of stuff in it.
At the end of django_db_modify_db_settings_xdist_suffix, if I log settings.DATABASES it shows the correct and expected info, but the queries still go to a different database.
conftest.py is the same in both runs (pytest -n4 and plain pytest). Since it depends on the xdist suffix, it only modifies settings.DATABASES in the "-n 4" run.
The two functions I think are relevant here:
#pytest.fixture(scope="session")
def django_db_setup(
request,
django_test_environment,
django_db_blocker,
django_db_use_migrations,
django_db_keepdb,
django_db_createdb,
django_db_modify_db_settings,
):
pass
And
#pytest.fixture(scope="session")
def django_db_modify_db_settings_xdist_suffix(request):
from django.conf import settings
default = settings.DATABASES["default"].copy()
settings.DATABASES.clear()
settings.DATABASES["default"] = default
xdist_suffix = None
xdist_worker_id = get_xdist_worker_id(request)
if xdist_worker_id != 'master':
xdist_suffix = xdist_worker_id
if xdist_suffix:
for db_settings in settings.DATABASES.values():
test_name = db_settings.get("TEST", {}).get("NAME")
if not test_name:
test_name = "test_{}".format(db_settings["NAME"])
db_settings.setdefault("TEST", {})
db_settings["TEST"]["NAME"] = "{}_{}".format(test_name, xdist_suffix)
db_settings["NAME"] = db_settings["TEST"]["NAME"]
I am doing something similar to what you have done, though it may come across as simpler or more elegant. To my mind, the issue lies more on the Django side than on the pytest-xdist side.
I have used pytest-xdist to scale concurrent stress testing, and the hook that seems most relevant to your question is pytest_configure_node: it uses the gateway id to send a setting to the remote worker, which lets you distinguish between nodes.
def pytest_configure_node(self, node: WorkerController) -> None:
    """Set something peculiar for each node."""
    node.workerinput['SPECIAL'] = get_db_for_node(node.gateway.id)
You would need to implement get_db_for_node(gateway_id: str) -> str yourself.
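A minimal sketch of what that function might look like, assuming the four pre-created databases are named after the xdist gateway ids (the naming scheme is a placeholder, not from the question):

def get_db_for_node(gateway_id: str) -> str:
    # xdist gateway ids look like 'gw0', 'gw1', ...; map each one to a
    # database that was created ahead of time.
    return "DBNAME_{}".format(gateway_id)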
Then, in the worker, you can use config.workerinput to read the value back:
#pytest.fixture(scope="session")
def special(pytestconfig: Config) -> str:
if not hasattr(pytestconfig, 'workerinput'):
log.exception('fixture requires xdist')
return ''
return pytestconfig.workerinput['SPECIAL']
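From there, one possible (untested) way to tie this back to the question is to consume that per-worker value when overriding pytest-django's django_db_modify_db_settings fixture in conftest.py, for example:

import pytest
from django.conf import settings


@pytest.fixture(scope="session")
def django_db_modify_db_settings(special):
    # Sketch only: 'special' carries the database name chosen on the
    # controller by get_db_for_node(); leave the defaults alone when the
    # run is not distributed.
    if special:
        settings.DATABASES["default"]["NAME"] = special
        settings.DATABASES["default"].setdefault("TEST", {})["NAME"] = special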
I have code that is driven by a configuration file called config.py, which defines a class called Config containing all the configuration options. Since the config file can be located anywhere in the user's storage, I use importlib.util to import it (as described in this answer). I want to test this functionality with unittest for different configurations. How do I do it? A simple answer would be to make a different file for every possible config I want to test and then pass its path to the config loader, but that is not what I want. What I basically need is to implement the Config class myself and fake it as if it were the actual config file. How do I achieve this?
EDIT: Here is the code I want to test:
import os
import re
import traceback
import importlib.util
from typing import Any

from blessings import Terminal

term = Terminal()


class UnknownOption(Exception):
    pass


class MissingOption(Exception):
    pass
def perform_checks(config: Any):
    checklist = {
        "required": {
            "root": [
                "flask",
                "react",
                "mysql",
                "MODE",
                "RUN_REACT_IN_DEVELOPMENT",
                "RUN_FLASK_IN_DEVELOPMENT",
            ],
            "flask": ["HOST", "PORT", "config"],
            # More options
        },
        "optional": {
            "react": [
                "HTTPS",
                # More options
            ],
            "mysql": ["AUTH_PLUGIN"],
        },
    }
    # Check for missing required options
    for kind in checklist["required"]:
        prop = config if kind == "root" else getattr(config, kind)
        for val in checklist["required"][kind]:
            if not hasattr(prop, val):
                raise MissingOption(
                    "Error while parsing config: "
                    + f"{prop}.{val} is a required config "
                    + "option but is not specified in the configuration file."
                )
    def unknown_option(option: str):
        raise UnknownOption(
            "Error while parsing config: Found an unknown option: " + option
        )
    # Check for unknown options
    for val in vars(config):
        attr = getattr(config, val)
        if not re.match("__[a-zA-Z0-9_]*__", val) and not callable(attr):
            if val in checklist["optional"]:
                for ch_val in vars(attr):
                    if not re.match("__[a-zA-Z0-9_]*__", ch_val) and not callable(
                        getattr(attr, ch_val)
                    ):
                        if ch_val not in checklist["optional"][val]:
                            unknown_option(f"Config.{val}.{ch_val}")
            else:
                unknown_option(f"Config.{val}")
    # Check for illegal options
    if config.react.HTTPS == "true":
        # HTTPS was set to true but no cert file was specified
        if not hasattr(config.react, "SSL_KEY_FILE") or not hasattr(
            config.react, "SSL_CRT_FILE"
        ):
            raise MissingOption(
                "config.react.HTTPS was set to True without specifying a key file and a crt file, which is illegal"
            )
        else:
            # Files were specified but are non-existent
            if not os.path.exists(config.react.SSL_KEY_FILE):
                raise FileNotFoundError(
                    f"The file at { config.react.SSL_KEY_FILE } was set as the key file "
                    + "in configuration but was not found."
                )
            if not os.path.exists(config.react.SSL_CRT_FILE):
                raise FileNotFoundError(
                    f"The file at { config.react.SSL_CRT_FILE } was set as the certificate file "
                    + "in configuration but was not found."
                )
def load_from_pyfile(root: str = None):
    """
    This loads the configuration from a `config.py` file located in the project root
    """
    PROJECT_ROOT = root or os.path.abspath(
        ".." if os.path.abspath(".").split("/")[-1] == "lib" else "."
    )
    config_file = os.path.join(PROJECT_ROOT, "config.py")
    print(f"Loading config from {term.green(config_file)}")

    # Load the config file
    spec = importlib.util.spec_from_file_location("", config_file)
    config = importlib.util.module_from_spec(spec)
    # Execute the script
    spec.loader.exec_module(config)
    # Not needed anymore
    del spec, config_file

    # Load the mode from the environment variable and
    # if it is not specified use development mode
    MODE = int(os.environ.get("PROJECT_MODE", -1))

    conf: Any
    try:
        conf = config.Config()
        conf.load(PROJECT_ROOT, MODE)
    except Exception:
        print(term.red("Fatal: There was an error while parsing the config.py file:"))
        traceback.print_exc()
        print("This error is non-recoverable. Aborting...")
        exit(1)

    print("Validating configuration...")
    perform_checks(conf)
    print(
        "Configuration",
        term.green("OK"),
    )
Without seeing a bit more of your code, it's tough to give a terribly direct answer, but most likely, you want to use Mocks
In the unit test, you would use a mock to replace the Config class for the caller/consumer of that class. You then configure the mock to give the return values or side effects that are relevant to your test case.
Based on what you've posted, you may not need any mocks, just fixtures: that is, examples of Config that exercise a given case. In fact, it would probably be best to do exactly what you suggested originally and just make a few sample configs that exercise all the cases that matter.
It's not clear why that is undesirable. In my experience, it's much easier to read and understand a test with a coherent fixture than to deal with mocking and constructing objects in the test class. Also, you would find this much easier to test if you broke the perform_checks function into parts, e.g., at the points where you have comments.
However, you can construct the Config objects however you like and pass them to the check function in a unit test. It's a common pattern in Python development to use dict fixtures. Remembering that in Python objects, including modules, expose an interface much like a dictionary, suppose you had a unit test like this:
from unittest import TestCase

from your_code import MissingOption, perform_checks


class TestConfig(TestCase):
    def test_perform_checks(self):
        dummy_callable = lambda x: x
        config_fixture = {
            'key1': 'string_val',
            'key2': ['string_in_list', 'other_string_in_list'],
            'key3': {'sub_key': 'nested_val_string', 'callable_key': dummy_callable},
            # this is your in-place fixture:
            # you make the keys and values that correspond to the feature of the Config file under test.
        }
        perform_checks(config_fixture)
        self.assertTrue(True)  # I would suggest returning True from the function instead, but this covers the happy-path case

    def test_perform_checks_invalid(self):
        config_fixture = {}
        with self.assertRaises(MissingOption):
            perform_checks(config_fixture)

    # more tests of more cases
You can also override the setUp() method of the unittest class if you want to share fixtures among tests. One way to do this would be to set up a valid fixture, then make the invalidating changes you want to test in each test method.
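A small sketch of that pattern, using types.SimpleNamespace to stand in for the imported config module (your_code, the attribute names, and the test are only illustrative):

from types import SimpleNamespace
from unittest import TestCase

from your_code import MissingOption, perform_checks


class TestConfigChecks(TestCase):
    def setUp(self):
        # Shared baseline config mirroring the checklist in the question;
        # each test mutates just the part it wants to exercise.
        self.config = SimpleNamespace(
            flask=SimpleNamespace(HOST="0.0.0.0", PORT=5000, config="production"),
            react=SimpleNamespace(HTTPS="false"),
            mysql=SimpleNamespace(),
            MODE=1,
            RUN_REACT_IN_DEVELOPMENT=False,
            RUN_FLASK_IN_DEVELOPMENT=False,
        )

    def test_missing_flask_port(self):
        del self.config.flask.PORT
        with self.assertRaises(MissingOption):
            perform_checks(self.config)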
As per the pytest documentation, it is possible to override the default temporary directory setting as follows:
py.test --basetemp=/base_dir
When the tmpdir fixture is then used in a test ...
def test_new_base_dir(tmpdir):
    print(str(tmpdir))
    assert False
... something like the following would then be printed to the screen:
/base_dir/test_new_base_dir_0
This works as intended and for certain use cases can be very useful.
However, I would like to be able to change this setting on a per-test (or perhaps I should say a "per-fixture") basis. Is such a thing possible?
I'm close to just rolling my own tmpdir based on the code for the original, but would rather not: I want to build on top of existing functionality where I can, not duplicate it.
As an aside, my particular use case is that I am writing a Python module that will act on different kinds of file systems (NFS4, etc.), and it would be nice to reuse the functionality of tmpdir to create fixtures like these:
def test_nfs3_stuff(nfs3_tmpdir):
    ...  # test NFS3 functionality

def test_nfs4_stuff(nfs4_tmpdir):
    ...  # test NFS4 functionality
In the sources of TempdirFactory, .config.option.basetemp is the attribute used to store the base temporary directory, so you can set it directly before use:
import pytest
import time
import os


def mktemp_db(tmpdir_factory, db):
    basetemp = None
    if 'PYTEST_TMPDIR' in os.environ:
        basetemp = os.environ['PYTEST_TMPDIR']
    if basetemp:
        tmpdir_factory.config.option.basetemp = basetemp
    if db == "db1.db":
        tmpdb = tmpdir_factory.mktemp('data1_').join(db)
    elif db == "db2.db":
        tmpdb = tmpdir_factory.mktemp('data2_').join(db)
    return tmpdb


@pytest.fixture(scope='session')
def empty_db(tmpdir_factory):
    tmpdb = mktemp_db(tmpdir_factory, 'db1.db')
    print("* " + str(tmpdb))
    time.sleep(5)
    return tmpdb


@pytest.fixture(scope='session')
def empty_db2(tmpdir_factory):
    tmpdb = mktemp_db(tmpdir_factory, 'db2.db')
    print("* " + str(tmpdb))
    time.sleep(5)
    return tmpdb


def test_empty_db(empty_db):
    pass


def test_empty_db2(empty_db2):
    pass
-
>set PYTEST_TMPDIR=./tmp
>python.exe -m pytest -q -s test_my_db.py
* c:\tests\tmp\data1_0\db1.db
.* c:\tests\tmp\data2_0\db2.db
.
2 passed in 10.03 seconds
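If the per-filesystem fixtures from the question are the real goal, a rough, untested alternative is to skip basetemp entirely and create directories under the relevant mount points with tempfile (the paths below are placeholders, the base directories must already exist, and unlike tmpdir nothing is cleaned up automatically):

import tempfile

import py
import pytest

NFS3_BASE = '/mnt/nfs3/pytest-tmp'  # hypothetical NFS3 mount
NFS4_BASE = '/mnt/nfs4/pytest-tmp'  # hypothetical NFS4 mount


def _fs_tmpdir(base):
    # Fresh directory under the given mount point, returned as a
    # py.path.local to mirror what the tmpdir fixture hands out.
    return py.path.local(tempfile.mkdtemp(dir=base))


@pytest.fixture
def nfs3_tmpdir():
    return _fs_tmpdir(NFS3_BASE)


@pytest.fixture
def nfs4_tmpdir():
    return _fs_tmpdir(NFS4_BASE)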
There didn't appear to be a nice solution to the problem as posed in the question, so I settled on making two calls to py.test:
Passing a different --basetemp to each.
Marking (using @pytest.mark.my_mark) which tests needed the special treatment of a non-standard basetemp.
Passing -k my_mark or -k-my_mark to each call.
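Concretely, the two invocations might look something like this (the paths and mark name are only examples; with newer pytest versions you would typically select marked tests with -m instead of -k):

py.test --basetemp=/mnt/special/pytest-tmp -k my_mark
py.test --basetemp=/tmp/pytest-tmp -k-my_mark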
I'm new to Python and I want to know the Pythonic way to vary behavior between my run-time test environment and production.
My use case is that I'm using a decorator that needs different parameters in test runs and production runs.
It looks like this:
# buffer_time should be 0 in test and, say, 5000 in prod
@retry(stop_max_attempt_number=7, wait_fixed=buffer_time)
def wait_until_volume_status_is_in_use(self, volume):
    if volume.status != 'in use':
        log_fail('Volume status is ... %s' % volume.status)
    volume.update()
    return volume.status
One solution is to use OS environment variables.
At the top of a file I can write something like this:

# some variation of this
import os

buffer_time = 0 if os.environ.get('MODE') == 'TEST' else 5000

class Guy(object):
    # Body where buffer_time is used in the decorator
    ...
Another solution is to use a settings file:

# settings.py
def init():
    global RETRY_BUFFER
    RETRY_BUFFER = 5000

# __init__.py
import settings
settings.init()

# test file
from my_module import settings
settings.RETRY_BUFFER = 0

from my_module.class_file import MyKlass
# Do Tests

# class file
import settings

buffer_time = settings.RETRY_BUFFER

class Guy(object):
    # Body where buffer_time is used in the decorator
    ...
Ultimately, my problem with both solutions is that they rely on shared global state.
I would like to know the standard way to accomplish this.