Let's say I have something as simple as this:
a.py:

from b import ClassB

class ClassA:
    @classmethod
    def do_class_a_thing(cls):
        return

    def action(self):
        print(ClassB.do_class_b_thing())
b.py:

from a import ClassA

class ClassB:
    @classmethod
    def do_class_b_thing(cls):
        return

    def action(self):
        print(ClassA.do_class_a_thing())
Obviously this won't work as written, due to the circular imports. But what is the least ugly way to still have ClassA use a method from ClassB, while ClassB uses a method from ClassA?
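One common way to break the cycle (a minimal sketch, assuming the two files above) is to defer one of the imports to call time, so that neither module needs the other while it is being imported:

# a.py
class ClassA:
    @classmethod
    def do_class_a_thing(cls):
        return "a thing"

    def action(self):
        # Imported at call time: by now b.py has finished loading,
        # so there is no circular-import problem.
        from b import ClassB
        print(ClassB.do_class_b_thing())

Alternatively, moving the shared class methods into a third module that both a.py and b.py import removes the cycle entirely.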
I am designing a Python class where I would write a method, say import_modules(), to which I would pass a list of modules to be imported into the class. Is it possible to import these modules at runtime for the same class?
class Base():
    def import_modules(self, modules):
        # import all the passed modules into this class
        pass

    def use_module(self):
        imported_module.some_function()
You can import modules at runtime with __import__ and set attributes on an object with __setattr__(name, value):
class Base():
    def import_modules(self, modules):
        for m in modules:
            self.__setattr__(m, __import__(m))

    def use_module(self):
        print(self.sys.platform)

b = Base()
b.import_modules(['sys'])
b.use_module()
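One caveat worth noting: for dotted names, __import__('os.path') returns the top-level package (os), not the submodule. importlib.import_module avoids that; a minimal sketch of the same idea (the underscore renaming is just one hypothetical way to turn dotted names into valid attribute names):

import importlib

class Base:
    def import_modules(self, modules):
        for name in modules:
            # import_module returns the named module itself,
            # even for dotted paths like 'os.path'
            setattr(self, name.replace('.', '_'), importlib.import_module(name))

b = Base()
b.import_modules(['os.path'])
print(b.os_path.join('a', 'b'))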
With Python and nosetests I have the following setup:
- package
- __init__.py
- test1.py
- test2.py
The __init__.py module contains a setup function:

def setup():
    print("Setup called")
    var = 42
which will later be used to create a unique identifier (different between test runs, but the same for all the tests inside the package).
How can the tests themselves access this variable (in this example, var)? The test scripts are just stubs:
from nose.tools import assert_true

class TestSuite(object):
    def test1(self):
        # How to get the content of 'var' here?
        assert_true(True)
Is there a Pythonic way to do this, or should I just use an environment variable?
nose calls .setup() methods inside classes:
class Test:
    def setup(self):
        self.var = 1

    def test_print_var(self):
        print(self.var)
This also applies to methods inherited from elsewhere:
class TestBase:
    def setup(self):
        self.var = 1

class Test(TestBase):
    def test_print_var(self):
        print(self.var)
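If the value really must be created once per package (by the setup() in __init__.py) rather than per class, one option, sketched here under the assumption that the package is importable by name, is to store it at module level and read it from the tests at call time:

# package/__init__.py
var = None

def setup():
    # nose runs this once before any test in the package
    global var
    print("Setup called")
    var = 42

# package/test1.py
import package

class TestSuite(object):
    def test1(self):
        # read at call time, after the package-level setup() has run
        assert package.var == 42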
Main goal: automatically register classes (by a string) in a factory so they can be created dynamically at run time from that string; the classes can each live in their own file rather than being grouped in one file.
I have a couple of classes which all inherit from the same base class, and each defines a string as its type.
A user wants to get an instance of one of these classes but only knows its type at run time.
Therefore I have a factory to create an instance given a type.
I didn't want to hard-code a chain of if/then statements, so I have a metaclass that registers all the subclasses of the base class:
class MetaRegister(type):
    # We use __init__ rather than __new__ here because we want
    # to modify attributes of the class *after* it has been created.
    def __init__(cls, name, bases, dct):
        if not hasattr(cls, 'registry'):
            # This is the base class: create an empty registry.
            cls.registry = {}
        else:
            # This is a derived class: add cls to the registry.
            interface_id = cls().get_model_type()
            cls.registry[interface_id] = cls
        super(MetaRegister, cls).__init__(name, bases, dct)
The problem is that for this to work, the factory has to import all the subclasses (so the metaclass runs).
To fix this you can use from X import *,
but for that to work you need to define an __all__ variable in the package's __init__.py listing all the subclasses.
I don't want to hard-code the subclasses, because that defeats the purpose of using the metaclass.
I can go over the files in the package using:
import glob
from os.path import dirname, basename, isfile

modules = glob.glob(dirname(__file__) + "/*.py")
__all__ = [basename(f)[:-3] for f in modules if isfile(f)]
This works great, but the project needs to compile to a single .so file, which rules out relying on the file system.
So how can I create instances at run time without hard-coding the types?
Is there a way to populate an __all__ variable at run time without touching the filesystem?
In Java I'd probably decorate the class with an annotation and then look up all the classes with that annotation at run time; is there something similar in Python?
I know there are decorators in Python, but I'm not sure I can use them in this way.
Edit 1:
Each subclass must be in its own file:
- Models
-- __init__.py
-- ModelFactory.py
-- Regression
--- __init__.py
--- Base.py
--- Subclass1.py
--- Subclass2ExtendsSubclass1.py
Edit 2: Some code to illustrate the problem:
+ main.py
|__ Models
|__ __init__.py
|__ ModelFactory.py
|__ Regression
|__ __init__.py
|__ Base.py
|__ SubClass.py
|__ ModelRegister.py
main.py

from models.ModelFactory import ModelFactory

if __name__ == '__main__':
    ModelFactory()
ModelFactory.py

from models.regression.Base import registry
import models.regression

class ModelFactory(object):
    def get(self, some_type):
        return registry[some_type]
ModelRegister.py

class ModelRegister(type):
    # We use __init__ rather than __new__ here because we want
    # to modify attributes of the class *after* it has been created.
    def __init__(cls, name, bases, dct):
        print cls.__name__
        if not hasattr(cls, 'registry'):
            # This is the base class: create an empty registry.
            cls.registry = {}
        else:
            # This is a derived class: add cls to the registry.
            interface_id = cls().get_model_type()
            cls.registry[interface_id] = cls
        super(ModelRegister, cls).__init__(name, bases, dct)
Base.py

from models.regression.ModelRegister import ModelRegister

class Base(object):
    __metaclass__ = ModelRegister

    def get_model_type(self):
        return "BASE"
SubClass.py

from models.regression.Base import Base

class SubClass(Base):
    def get_model_type(self):
        return "SUB_CLASS"
Running it, you can see only "Base" is printed.
Using a decorator gives the same results.
A simple way to register classes at runtime is to use decorators:
registry = {}

def register(cls):
    registry[cls.__name__] = cls
    return cls

@register
class Foo(object):
    pass

@register
class Bar(object):
    pass
This will work if all of your classes are defined in the same module, and if that module is imported at runtime. Your situation, however, complicates things. First, you want to define your classes in different modules. This means that we must be able to dynamically determine which modules exist within our package at runtime. This would be straightforward using Python's pkgutil module; however, you also state that you are using Nuitka to compile your package into an extension module, and pkgutil doesn't work with such extension modules.
I cannot find any documented way of determining the modules contained within a Nuitka extension module from within Python. If one does exist, the decorator approach above would work after dynamically importing each submodule.
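For reference, in the uncompiled case the pkgutil approach would look roughly like this (a sketch, assuming a package named mypkg with a plugins subpackage, as in the layout below):

import importlib
import pkgutil

import mypkg.plugins

def import_all_plugins():
    # Import every submodule of mypkg.plugins so that the class
    # definitions (and hence the @register decorators) actually run.
    for info in pkgutil.iter_modules(mypkg.plugins.__path__):
        importlib.import_module('mypkg.plugins.' + info.name)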
As it is, I believe the most straightforward solution is to write a script that generates an __init__.py before compiling. Suppose we have the following package structure:
.
├── __init__.py
├── plugins
│ ├── alpha.py
│ └── beta.py
└── register.py
The "plugins" are contained within the plugins directory. The contents of the files are:
# register.py
# -----------
registry = {}

def register(cls):
    registry[cls.__name__] = cls
    return cls

# __init__.py
# -----------
from . import plugins
from . import register

# ./plugins/alpha.py
# ------------------
from ..register import register

@register
class Alpha(object):
    pass

# ./plugins/beta.py
# -----------------
from ..register import register

@register
class Beta(object):
    pass
As it stands, importing the package above will not result in any of the classes being registered, because the class definitions never run: the modules containing them are never imported. The remedy is to automatically generate an __init__.py for the plugins folder. Below is a script which does exactly this; it can be made part of your compilation process.
import pathlib

root = pathlib.Path('./mypkg/plugins')
exclude = {'__init__.py'}

def gen_modules(root):
    for entry in root.iterdir():
        if entry.suffix == '.py' and entry.name not in exclude:
            yield entry.stem

with (root / '__init__.py').open('w') as fh:
    for module in gen_modules(root):
        fh.write('from . import %s\n' % module)
Placing this script one directory above your package root (assuming your package is called mypkg) and running it yields:
from . import alpha
from . import beta
Now for the test: we compile the package:
nuitka --module mypkg --recurse-to=mypkg
and try importing it, checking to see if all of the classes were properly registered:
>>> import mypkg
>>> mypkg.register.registry
{'Beta': <class 'mypkg.plugins.beta.Beta'>,
'Alpha': <class 'mypkg.plugins.alpha.Alpha'>}
Note that the same approach will work with metaclasses for registering the plugin classes; I simply preferred to use decorators here.
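For completeness, a metaclass variant of the same registry might look like this (a sketch, independent of the Nuitka specifics above):

registry = {}

class RegisterMeta(type):
    def __init__(cls, name, bases, dct):
        super().__init__(name, bases, dct)
        if bases:  # skip the root base class itself
            registry[cls.__name__] = cls

class PluginBase(metaclass=RegisterMeta):
    pass

class Gamma(PluginBase):  # registered as a side effect of class creation
    pass

assert registry == {'Gamma': Gamma}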
If the classes use your metaclass, you don't need from X import * to get them registered; a plain import X is enough. As soon as the module containing the classes is imported, the classes are created and become available in your metaclass registry.
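A minimal illustration (assuming the Base/SubClass modules from the question, and Python 2, which the __metaclass__ attribute implies):

import models.regression.SubClass  # imported only for its side effect

# The import executed the class body, so the metaclass has already
# registered SubClass:
from models.regression.Base import Base
print(Base.registry)  # {'SUB_CLASS': <class '...SubClass'>}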
I would do this with dynamic imports.
models/regression/base.py:

class Base(object):
    def get_type(self):
        return "BASE"
models/regression/subclass.py:

from models.regression.base import Base

class SubClass(Base):
    def get_type(self):
        return "SUB_CLASS"

__myclass__ = SubClass
loader.py:

from importlib import import_module

class_name = "subclass"
module = import_module("models.regression.%s" % class_name)
model = module.__myclass__()
print(model.get_type())
And there are empty __init__.py files in models/ and models/regression/.
With:
nuitka --recurse-none --recurse-directory models --module loader.py
The resulting loader.so contains all the modules under the models/ subdirectory.
I'm trying to use Sphinx and autodoc for a large set of Python modules. How can I document a class from one module that has been imported and instantiated in another module?
# module1.py
class Class1():
    def method1(self):
        pass

# module2.py
import module1

class Class2():
    class1 = module1.Class1()
I want the class1 instance in Class2 to show up in the docs and link back to module1's documentation.
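For illustration, one way autodoc can pick up an attribute like this is a #: doc comment together with a cross-reference role; a sketch, assuming both modules are already covered by automodule directives:

# module2.py
import module1

class Class2():
    #: An instance of :class:`module1.Class1`; autodoc renders this
    #: comment as the attribute's documentation, and Sphinx turns the
    #: :class: role into a link back to module1's page.
    class1 = module1.Class1()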