I want to make something like a plugin system, but I can't get it working. To be specific, I have some requirements.
I have a main script that should search for other Python scripts in a ./plugins directory and load them.
The main script searches, via globals(), for classes that inherit from Base.
If I place these classes in the main file itself, it works very well, but I can't get it to work the way I want.
Is it possible to do this in Python?
I tried something like this:
source: plugins/test.py
class SomeClass(Base):
    def __init__(self):
        self.name = "Name of plugin"
The main script just executes some methods on this class.
You could either import the Python file dynamically or use the exec statement (make sure to define a context to execute in, otherwise the context you use the statement in will be used). Then use Base.__subclasses__, assuming Base is a new-style class, or call a function from the imported plugin module. In the latter case, you must provide a plugin-registration mechanism.
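For illustration, here is a minimal sketch of the dynamic-import approach using Python 3's importlib (on Python 2, imp.load_module plays the same role, see the other answers). It assumes the plugins import Base from a shared module, here called pluginbase, which is a made-up name:

import importlib.util
import pathlib

from pluginbase import Base  # hypothetical shared module defining Base

def load_plugins(plugin_dir="plugins"):
    # Import every .py file under ./plugins; executing a module
    # defines its classes, which registers any Base subclasses.
    for path in pathlib.Path(plugin_dir).glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)

load_plugins()
for plugin_cls in Base.__subclasses__():
    print(plugin_cls().name)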
Use http://docs.python.org/2/library/imp.html#imp.load_module
For py3 I think there is importlib but I don't know how to use that one offhand.
Try importing the modules using imp -- imp.load_module will let you create namespace names dynamically if you need to. Then you can use inspect.getmembers() with inspect.isclass() to find all the classes defined in the imported module (example code in this answer), and test those for being subclasses of your plugin base.
...or, more pythonically, just use hasattr to find out if the imported classes 'quack like a duck' (i.e., have the methods you expect from your plugin).
PS - I'm assuming you're asking about Python 2.x. It's a good idea to tag the post with the version number in future.
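To make the two variants concrete, here is a rough sketch (shown with Python 3's importlib rather than imp; the inspect calls are identical either way). Base and plugins/test.py come from the question; the run method in the duck-typing variant is just a made-up example of "the methods you expect from your plugin":

import importlib.util
import inspect

from pluginbase import Base  # hypothetical shared module defining Base

spec = importlib.util.spec_from_file_location("test_plugin", "plugins/test.py")
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

# Variant 1: explicit subclass test
plugins = [cls for _, cls in inspect.getmembers(module, inspect.isclass)
           if issubclass(cls, Base) and cls is not Base]

# Variant 2: duck typing -- accept anything that quacks like a plugin
quackers = [cls for _, cls in inspect.getmembers(module, inspect.isclass)
            if hasattr(cls, "run")]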
I'm trying to put together a small build system in Python that generates Ninja files for my C++ project. Its behavior should be similar to CMake; that is, a bldfile.py script defines rules and targets and optionally recurses into one or more directories by calling bld.subdir(). Each bldfile.py script has a corresponding bld.File object. When the bldfile.py script is executing, the bld global should be predefined as that file's bld.File instance, but only in that module's scope.
Additionally, I would like to take advantage of Python's bytecode caching somehow, but the .pyc file should be stored in the build output directory instead of in a __pycache__ directory alongside the bldfile.py script.
I know I should use importlib (requiring Python 3.4+ is fine), but I'm not sure how to:
Load and execute a module file with custom globals.
Re-use the bytecode caching infrastructure.
Any help would be greatly appreciated!
Injecting globals into a module before execution is an interesting idea. However, I think it conflicts with several points of the Zen of Python. In particular, it requires writing code in the module that depends on global values which are not explicitly defined, imported, or otherwise obtained - unless you know the particular procedure required to call the module.
This may be an obvious or slick solution for the specific use case but it is not very intuitive. In general, (Python) code should be explicit. Therefore, I would go for a solution where parameters are explicitly passed to the executing code. Sounds like functions? Right:
bldfile.py
def exec(bld):
    print('Working with bld:', bld)
    # ...
calling the module:
# set bld
# Option 1: static import
import bldfile
bldfile.exec(bld)
# Option 2: dynamic import if bldfile.py is located dynamically
import importlib.util
spec = importlib.util.spec_from_file_location("unique_name", "subdir/subsubdir/bldfile.py")
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
module.exec(bld)
That way no code (apart from the function definition) is executed when importing the module. The exec function needs to be called explicitly, and when looking at the code inside exec it is clear where bld comes from.
I studied importlib's source code, and since I don't intend to make a reusable Loader, it seems like a lot of unnecessary complexity. So I just settled on creating a module with types.ModuleType, adding bld to the module's __dict__, compiling and caching the bytecode with compile, and executing the module with exec. At a low level, that's basically all importlib does anyway.
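For anyone landing here, a minimal sketch of that approach (exec_bldfile and its signature are illustrative, not the actual build-system API, and the bytecode-cache handling for writing the .pyc into the build output directory is omitted):

import types

def exec_bldfile(path, bld):
    # Build a module object by hand and inject bld into its namespace
    module = types.ModuleType("bldfile")
    module.__file__ = path
    module.bld = bld  # the injected "global"
    # Compile and execute the source in the module's namespace
    with open(path) as f:
        code = compile(f.read(), path, "exec")
    exec(code, module.__dict__)
    return module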
It is also possible to work around the limitation by using a dummy module from which the target module loads its globals:
# service.py
import importlib

module = importlib.import_module('userset')
module.user = user  # inject the value before config imports it
module = importlib.import_module('config')

# config.py
from userset import *
# now you can use user from service.py
I am trying to write a kind of wrapper for testing Python modules.
This wrapper simulates an environment and then starts another Python module via execfile.
There are many Python modules to be tested (>200).
Inside those modules there are hard-coded variables that contain absolute file paths which do not exist in my simulated environment (and which I cannot create). Those paths point to option files that the script reads in. There is always exactly one option file per module, and the path to this option file is always saved in the same global variable (what I mean: the variable name is the same in each module: optionFile).
optionFile = "Path to Option file"
My thought was that I could pre-set the global variable optionFile with an existing path before executing the test module. But of course this alone won't help, since the executed module will just overwrite optionFile with the hard-coded value when it runs.
I wondered if there might be a way to override the __setattr__ function of the globals object so that it does nothing for certain variable names, but my attempts were not successful. Do you think this could work, and do you have any suggestions?
Based on the first impressions gathered here, it seems it is not possible to alter the __setattr__ of the globals object (though at first I didn't understand why not...).
So the answer seems to be "No".
EDIT:
The reason this does not work is that there is no single global "globals" object. Instead, each module has its own namespace with its own global variables. This namespace is created once the module is loaded. It is certainly possible to alter that global namespace, but only after the module has already been loaded (which does not help in my application scenario). Thanks to BrenBarn for the clarification.
END OF EDIT
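A quick demonstration of the point above: a module's namespace is a plain dict, so there is simply no __setattr__ hook to override on it:

import types

m = types.ModuleType("demo")
print(type(m.__dict__))          # a plain dict, not a customizable object
m.__dict__["optionFile"] = "x"   # assignment goes straight into the dict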
A workaround for my specific problem described here is to replace Python's built-in open function instead.
# Keep a reference to the actual built-in open function
realOpen = open

# Shadow the name "open" to implement our own routing logic
def open(filename, mode='r'):
    if filename.endswith(".opt"):
        print "Rerouting opening command"
        return realOpen("myCentralOptionFile.opt", "r")
    else:
        return realOpen(filename, mode)
Attention: This workaround has nothing to do with the title anymore.
I already use this function to convert a string to a class object.
But now I have defined a new module. How can I implement the same functionality there?
import sys

def str2class(str):
    return getattr(sys.modules[__name__], str)
I tried to think of an example, but it is hard to come up with one. Anyway, the main problem is probably a file path problem.
If you really need an example, the GitHub code is here.
The Chain.py file needs to perform an auto-action mechanism, and right now it fails.
New approach:
Now I have put all the files in one folder and it works, but when I split them into modules it fails. So if the problem is in a module file, how can I convert the string to the corresponding class object?
Thanks for your help.
You can do this by accessing the namespace of the module directly:
import module
f = module.__dict__["func_name"]
# f is now a function and can be called:
f()
One of the greatest things about Python is that the internals are accessible to you, and that they fit the language paradigm. A name (of a variable, class, function, whatever) in a namespace is actually just a key in a dictionary that maps to that name's value.
If you're interested in what other language internals you can play with, try running dir() on things. You'd be surprised by the number of hidden methods available on most of the objects.
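As a quick illustration with a standard-library module:

import math

# A module's namespace really is a dict: attribute access and dict
# lookup on module.__dict__ are equivalent.
print(math.__dict__["sqrt"](9.0))   # 3.0, same as math.sqrt(9.0)
print(math.__dict__ is vars(math))  # True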
You probably should write this function like this:
def str2class(s):
    return globals()[s]
It's really clearer and works even if __name__ is set to __main__.
As I write it, it seems almost surreal to me that I'm actually experiencing this problem.
I have a list of objects. Each of these objects are of instances of an Individual class that I wrote.
Thus, conventional wisdom says that isinstance(myObj, Individual) should return True. However, this was not the case. So I thought there was a bug in my programming and printed type(myObj), which to my surprise printed instance, while myObj.__class__ gave me Individual!
>>> type(pop[0])
<type 'instance'>
>>> isinstance(pop[0], Individual) # with all the proper imports
False
>>> pop[0].__class__
Genetic.individual.Individual
I'm stumped! What gives?
EDIT: My Individual class
from itertools import count

class Individual:
    ID = count()

    def __init__(self, chromosomes):
        # managed as a list, as order is used to identify chromosomal
        # functions (i.e. chromosome i encodes functionality f)
        self.chromosomes = chromosomes[:]
        self.id = self.ID.next()

    # other methods
This error indicates that the Individual class somehow got created twice. You created pop[0] with one version of Individual, and you are checking isinstance against the other one. Although they are pretty much identical, Python doesn't know that, and isinstance fails. To verify this, check that pop[0].__class__ is Individual evaluates to False.
Normally classes don't get created twice (unless you use reload) because modules are imported only once, and all class objects effectively remain singletons. However, using packages and relative imports can leave a trap that leads to a module being imported twice. This happens when a script (started with python bla, as opposed to being imported from another module with import bla) contains a relative import. When running the script, Python doesn't know that its imports refer to the Genetic package, so it processes them as absolute, creating a top-level individual module with its own individual.Individual class. Another module correctly imports the Genetic package, which ends up importing Genetic.individual, resulting in the creation of the doppelganger, Genetic.individual.Individual.
To fix the problem, make sure that your script only uses absolute imports, such as import Genetic.individual even if a relative import like import individual appears to work just fine. And if you want to save on typing, use import Genetic.individual as individual. Also note that despite your use of old-style classes, isinstance should still work, since it predates new-style classes. Having said that, it would be highly advisable to switch to new-style classes.
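A small sketch of the fix, assuming the Genetic/individual.py layout from the question:

# import individual                       # don't: running the script this way
#                                         # creates a duplicate top-level module
import Genetic.individual as individual   # absolute import, one copy only

ind = individual.Individual(chromosomes=[])
print(isinstance(ind, individual.Individual))  # now True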
You need to use new-style classes that inherit from
class ClassName(object):
    pass
From your example, you are using old-style classes that inherit from
class Classname:
    pass
EDIT: As @user4815162342 said,
>>> type(pop[0])
<type 'instance'>
is caused by using an old-style class, but this is not the cause of your issues with isinstance. You should instead make sure you don't create the class in more than one place, or if you do, use distinct names. Importing it more than once should not be an issue.
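A quick Python 2 demonstration of the type() difference described above:

class Old:              # old-style class
    pass

class New(object):      # new-style class
    pass

print(type(Old()))      # <type 'instance'>
print(type(New()))      # <class '__main__.New'>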
Let's assume we have a system of modules that exists only in production. At testing time these modules do not exist, but I would still like to write tests for the code that uses them. Let's also assume that I know how to mock all the necessary objects from those modules. The question is: how do I conveniently add module stubs into the current hierarchy?
Here is a small example. The functionality I want to test is placed in a file called actual.py:
actual.py:
def coolfunc():
    from level1.level2.level3_1 import thing1
    from level1.level2.level3_2 import thing2
    do_something(thing1)
    do_something_else(thing2)
In my test suite I already have everything I need: I have thing1_mock and thing2_mock. Also I have a testing function. What I need is to add level1.level2... into current module system. Like this:
tests.py
import sys
import actual

class SomeTestCase(TestCase):
    thing1_mock = mock1()
    thing2_mock = mock2()

    def setUp(self):
        sys.modules['level1'] = what should I do here?

    @patch('level1.level2.level3_1.thing1', thing1_mock)
    @patch('level1.level2.level3_2.thing2', thing2_mock)
    def test_some_case(self):
        actual.coolfunc()
I know that I can substitute sys.modules['level1'] with an object containing another object and so on. But that seems like a lot of code to me. I assume there must be a much simpler and prettier solution; I just cannot find it.
So, since no one helped me with my problem, I decided to solve it myself. Here is a micro-lib called surrogate which allows one to create stubs for non-existing modules.
The lib can be used with mock like this:
from surrogate import surrogate
from mock import patch

@surrogate('this.module.doesnt.exist')
@patch('this.module.doesnt.exist', whatever)
def test_something():
    from this.module.doesnt import exist
    do_something()
First the @surrogate decorator creates stubs for the non-existing modules, then the @patch decorator can alter them. Just like @patch, @surrogate decorators can be used "in plural", stubbing more than one module path. All stubs exist only for the lifetime of the decorated function.
If anyone gets any use out of this lib, that would be great :)
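For reference, the core idea behind surrogate can also be hand-rolled: register empty ModuleType stubs in sys.modules for each segment of the dotted path, so that imports (and mock.patch) can resolve them. stub_module_path below is an illustrative helper, not part of the surrogate API:

import sys
import types

def stub_module_path(path):
    parts = path.split(".")
    for i in range(len(parts)):
        name = ".".join(parts[:i + 1])
        module = sys.modules.setdefault(name, types.ModuleType(name))
        if i > 0:
            # make the child reachable as an attribute of its parent
            setattr(sys.modules[".".join(parts[:i])], parts[i], module)

stub_module_path("level1.level2.level3_1")
from level1.level2 import level3_1  # the import now succeeds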