Trying to use transitions package as per examples provided here https://github.com/pytransitions/transitions
For some reason, neither of the two approaches shown below provides typing suggestions for the registered evaporate() trigger (at least in PyCharm 2019.1.2 for Windows x64).
At the same time, these triggers can still be used.
What can be done to have these triggers suggested as I type?
from transitions import Machine

class Matter(Machine):
    def say_hello(self): print("hello, new state!")
    def say_goodbye(self): print("goodbye, old state!")

    def __init__(self):
        states = ['solid', 'liquid', 'gas']
        Machine.__init__(self, states=states, initial='liquid')
        self.add_transition('melt', 'solid', 'liquid')

testmatter = Matter()
testmatter.add_transition('evaporate', 'liquid', 'gas')
testmatter.evaporate()
Out: True
testmatter.get_model_state(testmatter)
Out: <State('gas')#14748976>
class Matter2:
    pass

testmatter2 = Matter2()
machine = Machine(model=testmatter2, states=['solid', 'liquid', 'gas', 'plasma'], initial='liquid')
machine.add_transition('evaporate', 'liquid', 'gas')
testmatter2.evaporate()
Out: True
transitions adds triggers to the model instance (Matter) at runtime. This cannot be predicted by IDEs before the initialization code is actually executed. Imho, this is the biggest disadvantage of the way in which transitions works (but again imho, it is also its strength when dealing with dynamic state machines or state machines created/received at runtime, but that's another story).
If you use an interactive shell with code completion (IPython), you will see that evaporate will be suggested (completion is based on __dir__ calls to the model):
from transitions import Machine

class Model:
    pass

model = Model()

>>> model.e # TAB -> nothing

# model will be decorated during machine initialization
machine = Machine(model, states=['A', 'B'],
                  transitions=[['evaporate', 'A', 'B']], initial='A')

>>> model.e # TAB -> completion!
But I assume that's not how you plan to code. So how can we give hints to the introspection?
Easiest solution: Use a docstring for your model to announce triggers.
from transitions import Machine

class Model:
    """My dynamically extended Model

    Attributes:
        evaporate(callable): dynamically added method
    """

model = Model()
# [1]
machine = Machine(model, states=['A', 'B'],
                  transitions=[['evaporate', 'A', 'B']], initial='A')
model.eva  # code completion! 'evaporate' will be suggested even before it is added at [1]
The problem here is that the IDE relies on the docstring being correct. So if the docstring method (masked as an attribute) is called evaparate, the IDE will always suggest that, even though you later add evaporate.
Use pyi files and PEP484 (PyCharm workaround)
Unfortunately, PyCharm does not consider attributes in docstrings for code completion, as you correctly pointed out (see this discussion for more details). We need to use another approach. We can create so-called pyi files to provide hints to PyCharm. Those files are named identically to their .py counterparts but are solely used by IDEs and other tools and must not be imported (see this post). Let's create a file called sandbox.pyi:
# sandbox.pyi
class Model:
    evaporate = None  # type: callable
And now let's create the actual code file sandbox.py (I don't name my playground files 'test' because that always startles pytest...)
# sandbox.py
from transitions import Machine

class Model:
    pass

## Having the type hints right here would enable code completion BUT
## would prevent transitions from decorating the model, as it does not override
## already defined model attributes and methods.
# class Model:
#     evaporate = None  # type: callable

model = Model()
# machine initialization
model.ev  # code completion
This way you have code completion AND transitions will correctly decorate the model. The downside is that you have another file to worry about which might clutter your project.
If you want to generate pyi files automatically, you could have a look at stubgen or extend Machine to generate event stubs of models for you.
from transitions import Machine

class Model:
    pass

class PyiMachine(Machine):

    def generate_pyi(self, filename):
        with open(f'{filename}.pyi', 'w') as f:
            for model in self.models:
                f.write(f'class {model.__class__.__name__}:\n')
                for event in self.events:
                    f.write(f'    def {event}(self, *args, **kwargs) -> bool: pass\n')
                f.write('\n\n')

model = Model()
machine = PyiMachine(model, states=['A', 'B'],
                     transitions=[['evaporate', 'A', 'B']], initial='A')
machine.generate_pyi('sandbox')

# PyCharm can now correctly infer the type of success
success = model.evaporate()
model.to_A()  # a dynamically added method which is now visible thanks to the pyi file
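For illustration, the generated sandbox.pyi would then contain roughly the following stubs (to_A and to_B come from the auto transitions that Machine adds by default):

# sandbox.pyi -- roughly what generate_pyi('sandbox') writes for this machine
class Model:
    def to_A(self, *args, **kwargs) -> bool: pass
    def to_B(self, *args, **kwargs) -> bool: pass
    def evaporate(self, *args, **kwargs) -> bool: pass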
Alternative: Generate machine configurations FROM docstrings
A similar issue has been already discussed in the issue tracker of transitions (see https://github.com/pytransitions/transitions/issues/383). You could also generate the machine configuration from the model's docstring:
import transitions
import inspect
import re

class DocMachine(transitions.Machine):
    """Parses states and transitions from model definitions"""

    # checks for 'attribute: value' pairs (including [arrays]) in docstrings
    re_pattern = re.compile(r"(\w+):\s*\[?([^\]\n]+)\]?", re.MULTILINE)

    def __init__(self, model, *args, **kwargs):
        conf = {k: v for k, v in self.re_pattern.findall(model.__doc__)}
        if 'states' not in kwargs:
            kwargs['states'] = [x.strip() for x in conf.get('states', '').split(',')]
        if 'initial' not in kwargs and 'initial' in conf:
            kwargs['initial'] = conf['initial'].strip()
        super(DocMachine, self).__init__(model, *args, **kwargs)
        for name, method in inspect.getmembers(model, predicate=inspect.ismethod):
            doc = method.__doc__ if method.__doc__ else ""
            conf = {k: v for k, v in self.re_pattern.findall(doc)}
            # if a docstring contains "source:" we assume it is a trigger definition
            if "source" not in conf:
                continue
            else:
                conf['source'] = [s.strip() for s in conf['source'].split(', ')]
                conf['source'] = conf['source'][0] if len(conf['source']) == 1 else conf['source']
            if "dest" not in conf:
                conf['dest'] = None
            else:
                conf['dest'] = conf['dest'].strip()
            self.add_transition(trigger=name, **conf)

    # override safeguard which usually prevents accidental overrides
    def _checked_assignment(self, model, name, func):
        setattr(model, name, func)
class Model:
    """A state machine model
    states: [A, B]
    initial: A
    """

    def go(self):
        """processes information
        source: A
        dest: B
        conditions: always_true
        """

    def cycle(self):
        """an internal transition which will not exit the current state
        source: *
        """

    def always_true(self):
        """returns True... always"""
        return True

    def on_exit_B(self):  # no docstring
        raise RuntimeError("We left B. This should not happen!")
m = Model()
machine = DocMachine(m)

assert m.is_A()
m.go()
assert m.is_B()
m.cycle()
try:
    m.go()  # this will raise a MachineError since 'go' is not defined for state B
    assert False
except transitions.MachineError:
    pass
This is a very simple docstring-to-machine-configuration parser which does not take care of all eventualities that could be part of a docstring. It assumes that every method whose docstring contains "source:" is supposed to be a trigger. It does, however, also address the issue of documentation: using such a machine would make sure that at least some documentation for the developed machine exists.
Related
I have a method in a Python module db_access called read_all_rows that takes two strings as parameters and returns a list:
def read_all_rows(table_name='', mode=''):
    return []  # just an example
I can mock this method non-conditionally using pytest like:
mocker.patch('db_access.read_all_rows', return_value=['testRow1', 'testRow2'])
But I want to mock its return value in pytest depending on the table_name and mode parameters, so that it returns different values for different parameters and combinations of them, and to keep this as simple as possible.
The pseudocode of what I want:
when(db_access.read_all_rows).called_with('table_name1', any_string()).then_return(['testRow1'])
when(db_access.read_all_rows).called_with('table_name2', 'mode1').then_return(['testRow2', 'tableRow3'])
when(db_access.read_all_rows).called_with('table_name2', 'mode2').then_return(['testRow2', 'tableRow3'])
You can see that the 1st call is mocked with the any_string() placeholder.
I know that it can be achieved with side_effect like
def mock_read_all_rows(table_name='', mode=''):
    ...
mocker.patch('db_access.read_all_rows', side_effect=mock_read_all_rows)
but it is not very convenient because you need to add an extra function, which makes the code cumbersome. Even with a lambda it is not much better, because you would need to handle all conditions manually.
How could this be solved in a shorter and more readable way (ideally with a single line of code for each mock condition)?
P.S. In Java's Mockito this can easily be achieved with a single line of code per condition, like
when(dbAccess.readAllRows(eq("tableName1"), any())).thenReturn(List.of(value1, value2));
...
but can I do this with Python's pytest mocker.patch?
There is no off-the-shelf one-liner like the one you know from the Mockito library.
The Pythonic way to mock based on input arguments is described in this answer.
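For reference, that approach boils down to something like the following sketch (reusing the db_access module and the values from the question; fake_read_all_rows is a name made up here):

import db_access  # the module from the question

# one replacement function encodes all the conditions manually
def fake_read_all_rows(table_name='', mode=''):
    if table_name == 'table_name1':
        return ['testRow1']
    if table_name == 'table_name2' and mode in ('mode1', 'mode2'):
        return ['testRow2', 'tableRow3']
    return []

def test_read_all_rows(mocker):
    mocker.patch('db_access.read_all_rows', side_effect=fake_read_all_rows)
    assert db_access.read_all_rows('table_name1', 'anything') == ['testRow1']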
However, nothing can stop you from creating your own single-line wrapper and therefore having your own mocking interface (or domain-specific language (DSL)).
This simple wrapper should give you an idea. Place this class in a test helper module.
# tests/helper/mocking.py

class ConditionalMock:
    def __init__(self, mocker, path):
        self._side_effects = {}
        self._default = None
        self._raise_if_not_matched = True
        self.mock = mocker.patch(path, new=self._replacement)

    def expect(self, condition, return_value):
        # wrap a single argument so expect('a', ...) matches the call args ('a',)
        condition = condition if isinstance(condition, tuple) else (condition,)
        self._side_effects[condition] = return_value
        return self

    def default_return(self, default):
        self._raise_if_not_matched = False
        self._default = default
        return self

    def _replacement(self, *args):
        if args in self._side_effects:
            return self._side_effects[args]
        if self._raise_if_not_matched:
            raise AssertionError(f'Arguments {args} not expected')
        return self._default
Now import it in the tests just as usual and use the new one-line conditional mocking interface:
# tests/mocking_test.py
import time

from .helper.mocking import ConditionalMock

def test_conditional_mocker(mocker) -> None:
    ConditionalMock(mocker, 'time.strftime').expect('a', return_value='b').expect((1, 2), return_value='c')
    assert time.strftime('a') == 'b'
    assert time.strftime(1, 2) == 'c'
And result:
$ pytest tests/mocking_test.py
================================================================================================================================================= test session starts ==================================================================================================================================================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
plugins: mock-3.6.1
collected 1 item
tests/mocking_test.py .
Problem description
I've been trying to add MyPy support for my implementation of a slightly modified Enum class, where it is possible to define certain containers for enum values.
My enum and descriptor classes definitions (enums.py file):
from __future__ import annotations

import inspect
from enum import Enum, EnumMeta
from typing import Iterable, Any, Generic, TypeVar

T = TypeVar("T", bound=Iterable[Any])
K = TypeVar("K", bound=dict[Any, Any])

class EnumMapper(Generic[T]):
    """Simple descriptor class that enables definitions of iterables within Enum classes."""

    def __init__(self, container: T):
        self._container = container

    def __get__(self, instance: Any, instance_cls: Any) -> T:
        return self._container

class StrictEnumMapper(EnumMapper[K]):
    """
    Alias for `EnumMapper` descriptor class that requires all enum values
    to be present in the definition of the mapping before the Enum class can be
    instantiated.
    Only dict-based mappings are supported!
    """

    def __init__(self, container: K):
        if not issubclass(type(container), dict):
            raise ValueError(
                f"{StrictEnumMapper.__name__} supports only dict-based containers, {type(container)} provided"
            )
        super().__init__(container)

class NamespacedEnumMeta(EnumMeta):
    """
    Metaclass checking Enum-based classes for `StrictEnumMapper` fields
    and ensuring that all enum values are provided within such fields.
    E.g.
    >>> class Items(metaclass=NamespacedEnumMeta):
    >>>     spam = "spam"
    >>>     eggs = "eggs"
    >>>     foo = "foo"
    >>>
    >>>     food_preferences = StrictEnumMapper({
    >>>         spam: "I like spam!",
    >>>         eggs: "I really don't like eggs...",
    >>>     })
    will not instantiate and a `RuntimeError` informing about the missing mapping
    for `Items.foo` will be raised.
    This class takes the burden of remembering to add new enum values to the mapping
    off programmers' shoulders by doing it automatically at runtime.
    The app will simply not start and inform about the mistake.
    """

    def __new__(
        mcs,
        cls,
        bases: tuple[type, ...],
        namespace: dict[str, Any],  # namespace is actually of _EnumDict type
    ):
        enum_values = [namespace[member] for member in namespace._member_names]  # type: ignore
        strict_enum_mappers_violated = [
            field_name
            for (field_name, field) in namespace.items()
            if (
                inspect.ismethoddescriptor(field)
                and issubclass(type(field), StrictEnumMapper)
                and not all(enum in field._container for enum in enum_values)
            )
        ]
        if strict_enum_mappers_violated:
            raise RuntimeError(
                f"The following {cls} fields do not contain all possible "
                + f"enum values: {strict_enum_mappers_violated}"
            )
        return EnumMeta.__new__(mcs, cls, bases, namespace)

class NamespacedEnum(Enum, metaclass=NamespacedEnumMeta):
    """Extension of the basic Enum class, allowing for EnumMapper and StrictEnumMapper usages."""
Example usage of the classes (main.py file):
from enums import NamespacedEnum, StrictEnumMapper

class Food(NamespacedEnum):
    spam = "spam"
    eggs = "eggs"
    foo = "foo"

    reactions = StrictEnumMapper(
        {
            spam: "I like it",
            eggs: "I don't like it...",
            foo: "I like?",
        }
    )

if __name__ == "__main__":
    print(Food.reactions[Food.eggs.value])

    try:
        class ActionEnum(NamespacedEnum):
            action1 = "action1"
            action2 = "action2"

            func_for_action = StrictEnumMapper({
                action1: lambda: print("Doing action 1"),
            })
    except RuntimeError as e:
        print(f"Properly detected missing enum value in `func_for_action`. Error message: {e}")
Running the above should result in:
$ python main.py
I don't like it...
Properly detected missing enum value in `func_for_action`. Error message: The following ActionEnum fields do not contain all possible enum values: ['func_for_action']
Running MyPy (version 0.910) returns the following error:
$ mypy --python-version 3.9 enums.py main.py
main.py:19: error: Value of type "Food" is not indexable
Found 1 error in 1 file (checked 2 source files)
Is there some kind of MyPy magic/annotations hack that would allow me to explicitly inform MyPy that the descriptors are not converted to enum values at runtime? I don't want to use # type: ignore on each line where the "mappers" are used.
EDIT (on 6th Oct 2021):
Just to rephrase my issue: I don't want to "teach" MyPy to detect missing enum values in a StrictEnumMapper definition; its metaclass already does that for me. I want MyPy to properly detect the type of the reactions field in the Food class example above: it should determine that this field is not a Food enum member but actually a dict.
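In other words, a small sketch of the desired behaviour (using mypy's reveal_type on the example above):

from main import Food  # the example module above

reveal_type(Food.reactions)  # mypy currently reports "Food" here (hence "not indexable");
                             # the desired result would be a dict type such as dict[str, str]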
The rationale of my approach
(this part is optional, read if you want to understand my idea behind this approach and maybe suggest a better solution that I may not be aware of)
In my projects I often define enums that encapsulate a set of possible values, e.g. for a Django ChoiceField (I'm aware of the TextChoices class, but for the sake of this issue let's assume it wasn't the tool I wanted to use), etc.
I've noticed that there is often a need to define some corresponding value/operation for each of the defined enum values. Usually such actions are defined using a separate function that has branching paths for the enum values, e.g.
def perform_action(action_identifier: ActionEnum) -> None:
    if action_identifier == ActionEnum.action1:
        run_action1()
    elif action_identifier == ActionEnum.action2:
        run_action2()
    [...]
    else:
        raise ValueError("Unrecognized enum value")
The problems arise when we update the enum and add new possible values. Programmers have to go through all the code and update the places where the enum is used. I have prepared my own solution for this called NamespacedEnum: a simple class inheriting from the Enum class, with a slightly extended metaclass. This metaclass, along with my descriptors, allows defining containers within the class definition. What's more, the metaclass takes care of ensuring that all enum values exist in the mapper. To make it more clear, the above example of the perform_action() function would be rewritten using my class like this:
class MyAction(NamespacedEnum):
    action1 = "action1"
    action2 = "action2"
    [...]

    action_for_value = StrictEnumMapper({
        action1: run_action1,
        action2: run_action2,
        [...]
    })

def perform_action(action_identifier: MyAction) -> None:
    MyAction.action_for_value[action_identifier.value]()
I find it a lot easier to use this approach in projects that rely heavily on enums. I also think that this approach is better than defining separate containers for enum values outside of enum class definition because of the namespace clutter.
If I understand you correctly, you want to statically check that a dictionary contains a key-value pair for every variant of a given enum.
This approach is probably overkill but you can write a custom mypy plugin to support this:
First, the runtime check:
# file: test.py
from enum import Enum
from typing import TypeVar, Dict, Type

E = TypeVar('E', bound='Enum')
T = TypeVar('T')

# check that all enum variants have an entry in the dict
def enum_extras(enum: Type[E], **kwargs: T) -> Dict[E, T]:
    missing = ', '.join(e.name for e in enum if e.name not in kwargs)
    assert not missing, f"Missing enum mappings: {missing}"

    extras = {
        enum[key]: value for key, value in kwargs.items()
    }
    return extras

# a dummy enum
class MyEnum(Enum):
    x = 1
    y = 2

# oops, misspelled the key 'y'
names = enum_extras(MyEnum, x="ex", zzz="why")
This raises AssertionError: Missing enum mappings: y at runtime but mypy does not find the error.
Here is the plugin:
# file: plugin.py
from typing import Optional, Callable, Type

from mypy.plugin import Plugin, FunctionContext
from mypy.types import Type as MypyType

# this is needed for mypy to discover the plugin
def plugin(_: str) -> Type[Plugin]:
    return MyPlugin

# we define a plugin
class MyPlugin(Plugin):
    # this callback looks at every function call
    def get_function_hook(self, fullname: str) -> Optional[Callable[[FunctionContext], MypyType]]:
        if fullname == 'test.enum_extras':
            # return a custom checker for our special function
            return enum_extras_plugin
        return None

# this is called for every function call of `enum_extras`
def enum_extras_plugin(ctx: FunctionContext) -> MypyType:
    # these are the dictionary keys
    _, dict_keys = ctx.arg_names
    # the first argument of the function call (a function that returns an enum class)
    first_argument = ctx.arg_types[0][0]
    try:
        # the names of the enum variants
        enum_keys = first_argument.ret_type.type.names.keys()
    except AttributeError:
        return ctx.default_return_type
    # check that every variant has a key in the dict
    missing = enum_keys - set(dict_keys)
    if missing:
        missing_keys = ', '.join(missing)
        # signal an error if a variant is missing
        ctx.api.fail(f"Missing value for enum variants: {missing_keys}", ctx.context)
    return ctx.default_return_type
To use the plugin we need a mypy.ini:
[mypy]
plugins = plugin.py
And now mypy will find missing keys:
test.py:35: error: Missing value for enum variants: y
Found 1 error in 1 file (checked 1 source file)
Note: I was using python=3.6 and mypy=0.910
Warning: when you pass something unexpected to enum_extras, mypy will crash. Some more error handling could help.
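One possible guard, as a sketch (reusing the names from the plugin above):

from mypy.plugin import FunctionContext
from mypy.types import Type as MypyType

def enum_extras_plugin(ctx: FunctionContext) -> MypyType:
    # bail out early for unexpected call shapes (e.g. no first argument)
    # instead of indexing into ctx.arg_types blindly
    if len(ctx.arg_types) != 2 or not ctx.arg_types[0]:
        return ctx.default_return_type
    _, dict_keys = ctx.arg_names
    # ... continue with the enum-variant check shown above ...
    return ctx.default_return_type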
I am working on a project with multiple directories, each having a number of Python scripts, and it involves the use of certain key parameters I pass via a YAML config file.
Currently the method used is naive, I'd say: it simply parses the YAML into a Python dictionary, which is then imported in the other scripts where the values are accessed.
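For context, that naive approach looks roughly like this (a sketch; config.yaml and the key name are made up for illustration, and PyYAML is assumed):

# config_loader.py -- parse the YAML once into a plain dict and import it everywhere
import yaml

with open("config.yaml") as f:
    CONFIG = yaml.safe_load(f)

# in other scripts:
#   from config_loader import CONFIG
#   batch_size = CONFIG["batch_size"]   # hypothetical key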
From what I could find, there is:
the Abseil library, which can be used for accessing flags across different scripts, but using it is cumbersome;
another approach using a class (preferably a singleton), putting all global variables in it and exposing an instance of that class to the other scripts.
I wanted to ask: is there any other library that can be used for this purpose? And what is the most Pythonic methodology to deal with it?
Any help would be really appreciated. Thanks!
To make global values accessible across modules I use the class (singleton) method.
The code I list below is also in my GitHub Gist: https://gist.github.com/auphofBSF/278206afff675cd30377f4894a5b2b1d
My generic GlobalValues singleton class and its usage are as follows. This class is located in a subdirectory below the main script. In the example of use attached below, I place the GlobalValues class in a file globals.py in the folder myClasses.
class GlobalValues:
    """
    a Singleton class to serve the GlobalValues

    USAGE: (first time)
        from myClasses.globals import GlobalValues
        global_values = GlobalValues()
        global_values.<new value> = ...
        ... = global_values.<value>

    USAGE: (second and n'th time, in the same module or other modules)
        NB adjust `from myClasses.globals` depending on the relative path to this module
        from myClasses.globals import GlobalValues
        global_values = GlobalValues.get_instance()
        global_values.<new value> = ...
        ... = global_values.<value>
    """

    __instance = None

    DEFAULT_LOG_LEVEL = "CRITICAL"

    @staticmethod
    def get_instance():
        """ Static access method. """
        if GlobalValues.__instance is None:
            GlobalValues()
        return GlobalValues.__instance

    def __init__(self):
        """ Virtually private constructor. """
        if GlobalValues.__instance is not None:
            raise Exception("This class is a singleton! Once created, use global_values = GlobalValues.get_instance()")
        else:
            GlobalValues.__instance = self
My example of use is as follows.
Example File layout
<exampleRootDir>
    Example_GlobalValues_Main.py    # THIS is the main
    myClasses                       # A folder
        globals.py                  # for the singleton class GlobalValues
        exampleSubModule.py         # demonstrates use in submodules
Example_GlobalValues_Main.py
print(
"""
----------------------------------------------------------
Example of using a singleton Class as a Global value store
The files in this example are in these folders
file structure:
<exampleRootDir>
Example_GlobalValues_Main.py #THIS is the main
myClasses # A folder
globals.py #for the singleton class GlobalValues
exampleSubModule.py # demonstrates use in submodules
-----------------------------------------------------------
"""
)
from myClasses.globals import GlobalValues
globalvalues = GlobalValues()  # The only place an instance of GlobalValues is created
print(f"MAIN: global DEFAULT_LOG_LEVEL is {globalvalues.DEFAULT_LOG_LEVEL}")
globalvalues.DEFAULT_LOG_LEVEL = "DEBUG"
print(f"MAIN: global DEFAULT_LOG_LEVEL is now {globalvalues.DEFAULT_LOG_LEVEL}")
#Add a new global value:
globalvalues.NEW_VALUE = "hello"
#demonstrate using global modules in another module
from myClasses import exampleSubModule
print(f"MAIN: globalvalues after opening exampleSubModule are now {vars(globalvalues)}")
print("----------------- Completed -------------------------------")
exampleSubModule.py is as follows and is located in the myClasses folder
"""
Example SubModule using the GlobalValues Singleton Class
"""
# observe where the globals module is in relation to this module (. = same directory)
from .globals import GlobalValues
# get the singleton instance of GlobalValues, cannot instantiate a new instance
exampleSubModule_globalvalues = GlobalValues.get_instance()
print(f"exampleSubModule: values in GlobalValues are: {vars(exampleSubModule_globalvalues)}")
#Change a value
exampleSubModule_globalvalues.NEW_VALUE = "greetings from exampleSubModule"
#add a new value
exampleSubModule_globalvalues.SUBMODULE = "exampleSubModule"
I use the AzureML SDK for Python to define a Run and log parameters as shown below.
run = Run.get_context()
run.parent.log("param1", 25)
run.parent.log("param2", 100)
run.parent.log("param3", 10)
run.parent.log("param4", 40)
The problem is that I can only see param1 and param2 in Machine Learning Service Workspace. Is there any limitation on the number of variables?
The short answer is NO, after I reviewed the source code of azureml-core, which I got by decompressing the azureml_core-1.0.85-py2.py3-none-any.whl file.
The key source code is here.
# azureml_core-1.0.85-py2.py3-none-any.whl\azureml\core\run.py

class Run(_RunBase):
    .......

    @classmethod
    def get_context(cls, allow_offline=True, used_for_context_manager=False, **kwargs):
        """Return current service context.

        Use this method to retrieve the current service context for logging metrics and uploading files. If
        ``allow_offline`` is True (the default), actions against the Run object will be printed to standard
        out.

        .. remarks::

            This function is commonly used to retrieve the authenticated Run object
            inside of a script to be submitted for execution via experiment.submit(). This run object is both
            an authenticated context to communicate with Azure Machine Learning services and a conceptual container
            within which metrics, files (artifacts), and models are contained.

            .. code-block:: python

                run = Run.get_context()  # allow_offline=True by default, so can be run locally as well
                ...
                run.log("Accuracy", 0.98)
                run.log_row("Performance", epoch=e, error=err)

        :param cls: Indicates class method.
        :param allow_offline: Allow the service context to fall back to offline mode so that the training script
            can be tested locally without submitting a job with the SDK. True by default.
        :type allow_offline: bool
        :param kwargs: A dictionary of additional parameters.
        :type kwargs: dict
        :return: The submitted run.
        :rtype: azureml.core.run.Run
        """
        try:
            experiment, run_id = cls._load_scope()
            # Querying for the run instead of initializing to load current state
            if used_for_context_manager:
                return _SubmittedRun(experiment, run_id, **kwargs)
            return _SubmittedRun._get_instance(experiment, run_id, **kwargs)
        except RunEnvironmentException as ex:
            module_logger.debug("Could not load run context %s, switching offline: %s", ex, allow_offline)
            if allow_offline:
                module_logger.info("Could not load the run context. Logging offline")
                return _OfflineRun(**kwargs)
            else:
                module_logger.debug("Could not load the run context and allow_offline set to False")
                raise RunEnvironmentException(inner_exception=ex)

class _OfflineRun(ChainedIdentity):
    def __init__(self, parent_logger=None, run_id=None, **kwargs):
        self._run_id = "OfflineRun_{}".format(uuid4()) if run_id is None else run_id
        super(_OfflineRun, self).__init__(
            _ident=self._run_id,
            _parent_logger=parent_logger if parent_logger is not None else module_logger)

    ....

    def log(self, name, value, description=""):
        self._emit("scalar", name, value)

    ....

    def _emit(self, type, name, value):
        print("Attempted to log {0} metric {1}:\n{2}".format(type, name, value))
The run object returned by the get_context function is running in offline mode, so its log function is just an alias of print.
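Based on the source above, you can verify which kind of run object you actually got, for example:

from azureml.core import Run

run = Run.get_context()
# when the script is not submitted through the SDK, get_context() falls back to an
# _OfflineRun (see the source above), whose log() only prints to standard output
print(type(run))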
I've been thinking about ways to automatically set up configuration in my Python applications.
I usually use the following type of approach:
'''config.py'''

class Config(object):
    MAGIC_NUMBER = 44
    DEBUG = True

class Development(Config):
    LOG_LEVEL = 'DEBUG'

class Production(Config):
    DEBUG = False
    REPORT_EMAIL_TO = ["ceo@example.com", "chief_ass_kicker@example.com"]
Typically, when I'm running the app in different ways I could do something like:
from config import Development, Production

def do_something(self):
    if self.conf.DEBUG:
        pass

def __init__(self, config='Development'):
    if config == "production":
        self.conf = Production
    else:
        self.conf = Development
I like working like this because it makes sense, however I'm wondering if I can somehow integrate this into my git workflow too.
A lot of my applications have separate scripts, or modules that can be run alone, thus there isn't always a monolithic application to inherit configurations from some root location.
It would be cool if a lot of these scripts and separate modules could check which branch is currently checked out and make their default configuration decisions based on that, e.g., by looking for a class in config.py that shares the name of the currently checked-out branch.
Is that possible, and what's the cleanest way to achieve it?
Is it a good/bad idea?
I'd prefer spinlok's method, but yes, you can do pretty much anything you want in your __init__, e.g.:
import inspect, subprocess, sys

def __init__(self, config='via_git'):
    if config == 'via_git':
        gitsays = subprocess.check_output(['git', 'symbolic-ref', 'HEAD'])
        cbranch = gitsays.rstrip('\n').replace('refs/heads/', '', 1)
        # now you know which branch you're on...
        tbranch = cbranch.title()  # foo -> Foo, for class name conventions
        classes = dict(inspect.getmembers(sys.modules[__name__], inspect.isclass))
        if tbranch in classes:
            print 'automatically using', tbranch
            self.conf = classes[tbranch]
        else:
            print 'on branch', cbranch, 'so falling back to Production'
            self.conf = Production
    elif config == 'production':
        self.conf = Production
    else:
        self.conf = Development
This is, um, "slightly tested" (python 2.7). Note that check_output will raise an exception if git can't get a symbolic ref, and this also depends on your working directory. You can of course use other subprocess functions (to provide a different cwd for instance).
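For example, a sketch (the repository path is a placeholder): cwd can be passed to check_output as well, so the branch is resolved against an explicit repository rather than the current working directory.

import subprocess

# resolve the branch of an explicit repository path instead of the current directory
gitsays = subprocess.check_output(['git', 'symbolic-ref', 'HEAD'], cwd='/path/to/repo')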