Engine.py imports several classes and stores instances of them as attributes on self:
Engine.py
from api import api
from cloud import cloud

class Engine(object):
    def __init__(self, env):
        session = dict()
        self.api = api.API(session)
        self.cloud = cloud.CLOUD(session)
api.py
class API(object):
    def __init__(self, session):
        self.session = session

    def api_keyword(self):
        return SOMETHING
My question is: how can I use the keywords defined in api.py and cloud.py while importing ONLY Engine.py into the robot file?
test.robot
*** Settings ***
Library    Engine.py    ${env}

*** Test Cases ***
python class test
    [Tags]    class
    Engine.api.api_keyword
And I get this error message:
No keyword with name 'Engine.api.api_keyword' found.
Robot Framework maps to keywords only the methods of the library class itself; your Engine class does not expose any methods from api and cloud - it uses them internally, but doesn't define any as its own.
So here's your first solution - create wrapper methods for everything you need in your cases:

def an_api_method(self):
    self.api.something()
And now you'll have the An API Method keyword at your disposal in the cases.
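For instance, sticking to the classes from the question, Engine.py with such a wrapper could look like this (a sketch; Robot Framework derives the keyword name from the method name):

from api import api
from cloud import cloud

class Engine(object):
    def __init__(self, env):
        session = dict()
        self.api = api.API(session)
        self.cloud = cloud.CLOUD(session)

    def api_keyword(self):
        # delegate to the wrapped API instance; Robot exposes this
        # method as the keyword `Api Keyword`
        return self.api.api_keyword()

In the test case you would then call Api Keyword instead of Engine.api.api_keyword.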
Solution two - make your class inherit the other two:

class Engine(api.API, cloud.CLOUD):

and your cases will have access to all their public methods.
This one is more involved - you'll have to call their constructors (with super()), and if you maintain state in your class, you'll have to accommodate that. I.e. more drastic code changes are needed.
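For example (a sketch, assuming API and CLOUD both take the session argument as in the question):

from api import api
from cloud import cloud

class Engine(api.API, cloud.CLOUD):
    def __init__(self, env):
        session = dict()
        # simplest form: call each parent constructor explicitly;
        # a cooperative super().__init__() chain is the cleaner alternative
        api.API.__init__(self, session)
        cloud.CLOUD.__init__(self, session)

With this, api_keyword is a method of Engine itself, so Robot Framework picks it up as the Api Keyword keyword.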
The third solution doesn't require any changes to the Engine code - but, disclaimer: I don't know whether it will work :) (I'm not at a computer).
It consists of two calls - first to use Get Library Instance to get the object of your imported library (from the Builtin library), and then - Call Method:
${ref} =    Get Library Instance    Engine
Call Method    ${ref.api}    api_keyword
Introduction
In Java, dependency injection works as pure OOP: you provide an interface to be implemented, and in your framework code you accept an instance of a class that implements the defined interface.
Now for Python you can do the same, but I think that approach carries too much overhead in the case of Python. So how would you implement it in the Pythonic way?
Use Case
Say this is the framework code:
class FrameworkClass():
    def __init__(self, ...):
        ...

    def do_the_job(self, ...):
        # some stuff
        # depending on some external function
The Basic Approach
The most naive (and maybe the best?) way is to require the external function to be supplied to the FrameworkClass constructor, and then be invoked from the do_the_job method.
Framework Code:
class FrameworkClass():
    def __init__(self, func):
        self.func = func

    def do_the_job(self, ...):
        # some stuff
        self.func(...)
Client Code:
def my_func():
    # my implementation
    ...

framework_instance = FrameworkClass(my_func)
framework_instance.do_the_job(...)
Question
The question is short. Is there any better, commonly used, Pythonic way to do this? Or maybe any libraries supporting such functionality?
UPDATE: Concrete Situation
Imagine I'm developing a micro web framework which handles authentication using tokens. This framework needs a function that takes the ID obtained from the token and returns the user corresponding to that ID.
Obviously, the framework does not know anything about users or any other application-specific logic, so the client code must inject the user-getter functionality into the framework to make the authentication work.
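To make that concrete, here is a rough sketch of the basic approach applied to the token scenario (all names are hypothetical, purely illustrative):

class AuthFramework:
    def __init__(self, get_user_by_id):
        # the client injects the user lookup; the framework stays
        # ignorant of application-specific storage
        self.get_user_by_id = get_user_by_id

    def authenticate(self, token):
        user_id = self._decode(token)        # framework logic
        return self.get_user_by_id(user_id)  # injected dependency

    def _decode(self, token):
        # stand-in for real token decoding
        return token.split(":", 1)[1]

# client code
users = {"42": "mary"}
framework = AuthFramework(lambda user_id: users.get(user_id))
print(framework.authenticate("token:42"))  # prints: mary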
See Raymond Hettinger - Super considered super! - PyCon 2015 for an argument about how to use super and multiple inheritance instead of DI. If you don't have time to watch the whole video, jump to minute 15 (but I'd recommend watching all of it).
Here is an example of how to apply what's described in this video to your example:
Framework Code:
class TokenInterface():
    def getUserFromToken(self, token):
        raise NotImplementedError

class FrameworkClass(TokenInterface):
    def do_the_job(self, ...):
        # some stuff
        self.user = super().getUserFromToken(...)
Client Code:
class SQLUserFromToken(TokenInterface):
    def getUserFromToken(self, token):
        # load the user from the database
        return user

class ClientFrameworkClass(FrameworkClass, SQLUserFromToken):
    pass

framework_instance = ClientFrameworkClass()
framework_instance.do_the_job(...)
This will work because the Python MRO guarantees that the client's getUserFromToken method is called (as long as super() is used). The code would have to change if you're on Python 2.x.
One added benefit here is that this raises an exception if the client does not provide an implementation.
Of course, this is not really dependency injection, it's multiple inheritance and mixins, but it is a Pythonic way to solve your problem.
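A runnable toy version of the above, to show the MRO in action (the fake token lookup is mine, for illustration):

class TokenInterface:
    def getUserFromToken(self, token):
        raise NotImplementedError

class FrameworkClass(TokenInterface):
    def do_the_job(self, token):
        # the framework only knows the interface method
        return super().getUserFromToken(token)

class SQLUserFromToken(TokenInterface):
    def getUserFromToken(self, token):
        # stand-in for a real database lookup
        return {"t-123": "mary"}.get(token)

class ClientFrameworkClass(FrameworkClass, SQLUserFromToken):
    pass

# MRO: ClientFrameworkClass -> FrameworkClass -> SQLUserFromToken -> TokenInterface
print(ClientFrameworkClass().do_the_job("t-123"))  # prints: mary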
The way we do dependency injection in our project is by using the inject lib. Check out the documentation. I highly recommend using it for DI. It makes little sense with just one function, but starts making lots of sense when you have to manage multiple data sources, etc.
Following your example it could be something similar to:
# framework.py
class FrameworkClass():
    def __init__(self, func):
        self.func = func

    def do_the_job(self):
        # some stuff
        self.func()
Your custom function:
# my_stuff.py
def my_func():
    print('aww yiss')
Somewhere in the application you want to create a bootstrap file that keeps track of all the defined dependencies:
# bootstrap.py
import inject

from .framework import FrameworkClass
from .my_stuff import my_func

def configure_injection(binder):
    binder.bind(FrameworkClass, FrameworkClass(my_func))

inject.configure(configure_injection)
And then you could consume the code this way:
# some_module.py (has to be loaded with bootstrap.py already loaded somewhere in your app)
import inject

from .framework import FrameworkClass

framework_instance = inject.instance(FrameworkClass)
framework_instance.do_the_job()
I'm afraid this is as Pythonic as it can get (the module has some Python sweetness like decorators to inject by parameter, etc. - check the docs), as Python does not have fancy stuff like interfaces, and type hinting is a recent, optional addition.
So it would be very hard to answer your question directly. I think the true question is: does Python have some native support for DI? And the answer is, sadly: no.
Some time ago I wrote a dependency injection microframework with the ambition to make it Pythonic - Dependency Injector. Here's what your code could look like when you use it:
"""Example of dependency injection in Python."""
import logging
import sqlite3
import boto.s3.connection
import example.main
import example.services
import dependency_injector.containers as containers
import dependency_injector.providers as providers
class Platform(containers.DeclarativeContainer):
"""IoC container of platform service providers."""
logger = providers.Singleton(logging.Logger, name='example')
database = providers.Singleton(sqlite3.connect, ':memory:')
s3 = providers.Singleton(boto.s3.connection.S3Connection,
aws_access_key_id='KEY',
aws_secret_access_key='SECRET')
class Services(containers.DeclarativeContainer):
"""IoC container of business service providers."""
users = providers.Factory(example.services.UsersService,
logger=Platform.logger,
db=Platform.database)
auth = providers.Factory(example.services.AuthService,
logger=Platform.logger,
db=Platform.database,
token_ttl=3600)
photos = providers.Factory(example.services.PhotosService,
logger=Platform.logger,
db=Platform.database,
s3=Platform.s3)
class Application(containers.DeclarativeContainer):
"""IoC container of application component providers."""
main = providers.Callable(example.main.main,
users_service=Services.users,
auth_service=Services.auth,
photos_service=Services.photos)
Here is a link to a more extensive description of this example: http://python-dependency-injector.ets-labs.org/examples/services_miniapp.html
Hope it can help a bit. For more information please visit:
GitHub https://github.com/ets-labs/python-dependency-injector
Docs http://python-dependency-injector.ets-labs.org/
Dependency injection is a simple technique that Python supports directly. No additional libraries are required. Using type hints can improve clarity and readability.
Framework Code:
class UserStore():
    """
    The base class for accessing a user's information.
    The client must extend this class and implement its methods.
    """
    def get_name(self, token):
        raise NotImplementedError

class WebFramework():
    def __init__(self, user_store: UserStore):
        self.user_store = user_store

    def greet_user(self, token):
        user_name = self.user_store.get_name(token)
        print(f'Good day to you, {user_name}!')
Client Code:
class AlwaysMaryUser(UserStore):
    def get_name(self, token):
        return 'Mary'

class SQLUserStore(UserStore):
    def __init__(self, db_params):
        self.db_params = db_params

    def get_name(self, token):
        # TODO: Implement the database lookup
        raise NotImplementedError

client = WebFramework(AlwaysMaryUser())
client.greet_user('user_token')
The UserStore class and type hinting are not required for implementing dependency injection. Their primary purpose is to provide guidance to the client developer. If you remove the UserStore class and all references to it, the code still works.
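To illustrate that last point, here is a duck-typed sketch with the base class removed (same behaviour; any object with a get_name method will do):

class WebFramework:
    def __init__(self, user_store):
        # no type hint, no base class: duck typing only
        self.user_store = user_store

    def greet_user(self, token):
        print(f'Good day to you, {self.user_store.get_name(token)}!')

class AlwaysMaryUser:
    def get_name(self, token):
        return 'Mary'

WebFramework(AlwaysMaryUser()).greet_user('user_token')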
After playing around with some of the DI frameworks in Python, I've found they feel a bit clunky compared to how simple DI is in other ecosystems such as .NET Core. This is mostly because the wiring happens via things like decorators that clutter the code and make it hard to simply add DI to or remove it from a project, or via matching on variable names.
I've recently been working on a dependency injection framework that instead uses typing annotations to do the injection, called Simple-Injection. Below is a simple example:
from simple_injection import ServiceCollection


class Dependency:
    def hello(self):
        print("Hello from Dependency!")


class Service:
    def __init__(self, dependency: Dependency):
        self._dependency = dependency

    def hello(self):
        self._dependency.hello()


collection = ServiceCollection()
collection.add_transient(Dependency)
collection.add_transient(Service)

collection.resolve(Service).hello()
# Outputs: Hello from Dependency!
This library supports service lifetimes and binding services to implementations.
One of the goals of this library is to make it easy to add to an existing application so you can see how you like it before committing to it: all it requires is that your application has appropriate typings, and then you build the dependency graph at the entry point and run it.
Hope this helps. For more information, please see
github: https://github.com/BradLewis/simple-injection
docs: https://simple-injection.readthedocs.io/en/latest/
pypi: https://pypi.org/project/simple-injection/
A very easy and Pythonic way to do dependency injection is importlib.
You could define a small utility function
import importlib

def inject_method_from_module(modulename, methodname):
    """
    Dynamically loads a method from a module.
    """
    mod = importlib.import_module(modulename)
    return getattr(mod, methodname, None)
And then you can use it:
myfunction = inject_method_from_module("mypackage.mymodule", "myfunction")
myfunction("a")
In mypackage/mymodule.py you define myfunction:

def myfunction(s):
    print("myfunction in mypackage.mymodule called with parameter:", s)
You could of course also use a class MyClass instead of the function myfunction. If you define the value of methodname in a settings.py file, you can load different versions of methodname depending on the value in the settings file. Django uses such a scheme to define its database connection.
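A sketch of that settings-driven variant (the settings.py contents and the dotted path are hypothetical):

# settings.py (hypothetical)
USER_GETTER = "mypackage.mymodule.myfunction"

# framework side
import importlib
import settings

def load_from_path(dotted_path):
    # split "package.module.attribute" into module path and attribute name
    modulename, methodname = dotted_path.rsplit(".", 1)
    mod = importlib.import_module(modulename)
    return getattr(mod, methodname)

myfunction = load_from_path(settings.USER_GETTER)
myfunction("a")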
I think that DI and possibly AOP are not generally considered Pythonic because of typical Python developers' preferences, rather than language features.
As a matter of fact you can implement a basic DI framework in <100 lines, using metaclasses and class decorators.
For a less invasive solution, these constructs can be used to plug custom implementations into a generic framework.
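For a flavour of what such a small construct can look like, here is a minimal class-decorator registry sketch (entirely illustrative, not a reference implementation):

# tiny illustrative registry; names are made up for this sketch
_registry = {}

def provides(name):
    """Class decorator that registers a class under a service name."""
    def decorator(cls):
        _registry[name] = cls
        return cls
    return decorator

def inject(name, *args, **kwargs):
    """Instantiate whatever implementation is registered under name."""
    return _registry[name](*args, **kwargs)

@provides("user_store")
class InMemoryUserStore:
    def get_name(self, token):
        return "Mary"

store = inject("user_store")
print(store.get_name("token"))  # prints: Mary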
There is also Pinject, an open source python dependency injector by Google.
Here is an example
>>> import pinject
>>> class OuterClass(object):
...     def __init__(self, inner_class):
...         self.inner_class = inner_class
...
>>> class InnerClass(object):
...     def __init__(self):
...         self.forty_two = 42
...
>>> obj_graph = pinject.new_object_graph()
>>> outer_class = obj_graph.provide(OuterClass)
>>> print(outer_class.inner_class.forty_two)
42
And here is the source code
Because of how Python implements OOP, IoC and dependency injection are not standard practices in the Python world. But the approach seems promising even for Python.
Passing dependencies as arguments is a non-Pythonic approach. Python is an OOP language with a beautiful and elegant OOP model, which provides more straightforward ways to maintain dependencies.
Defining classes full of abstract methods just to imitate an interface type is weird too.
Huge wrapper-on-wrapper workarounds create code overhead.
I also don't like using libraries when all I need is a small pattern.
So my solution is:
# Framework internal
def MetaIoC(name, bases, namespace):
    cls = type("IoC{}".format(name), tuple(), namespace)
    return type(name, bases + (cls,), {})


# Entities level
class Entity:
    def _lower_level_meth(self):
        raise NotImplementedError

    @property
    def entity_prop(self):
        return super(Entity, self)._lower_level_meth()


# Adapters level
class ImplementedEntity(Entity, metaclass=MetaIoC):
    __private = 'private attribute value'

    def __init__(self, pub_attr):
        self.pub_attr = pub_attr

    def _lower_level_meth(self):
        print('{}\n{}'.format(self.pub_attr, self.__private))


# Infrastructure level
if __name__ == '__main__':
    ENTITY = ImplementedEntity('public attribute value')
    ENTITY.entity_prop
EDIT:
Be careful with this pattern. I used it in a real project and it turned out not to be that good an approach. I wrote a post on Medium about my experience with the pattern.
I'm trying to understand how to make RPC calls using Python. I have a stupid server that defines a class and exposes a method that creates instances of that class:
# server.py
class Greeter(object):
    def __init__(self, name):
        self.name = name

    def greet(self):
        return "Hi {}!".format(self.name)

def greeter_factory(name):
    return Greeter(name)

some_RPC_framework.register(greeter_factory)
and a client that wants to get an instance of the Greeter:
# client.py
greeter_factory = some_RPC_framework.proxy(uri_given_by_server)
h = greeter_factory("Heisemberg")
print("Server returned:", h.greet())
The problem is that I've found no framework that can return instances of user-defined objects: at best, they return a dict with the attributes of the object (for example, Pyro4).
In the past I've used Java RMI, where you can specify a codebase on the server where the client can download the compiled classes, if it needs to. Is there something like this for Python? Maybe some framework that can serialize objects along with the class bytecode to let the client have a full-working instance of the class?
Pyro can do this to a certain extent. You can register custom class (de)serializers when using the default serializer. Or you can decide to use the pickle serializer, but that has severe security implications. See http://pythonhosted.org/Pyro4/clientcode.html#serialization
What Pyro won't do for you, even when using the pickle serializer, is transfer the actual bytecode that makes up the module definition. The client, in your case, has to be able to import the module defining your classes in the regular way. There's no code transportation.
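As a sketch of the custom (de)serializer route with Pyro4 (hook registration as documented in the link above; the dict layout is my own choice):

import Pyro4.util
from server import Greeter  # the class from the question

def greeter_to_dict(obj):
    # called by Pyro when it serializes a Greeter
    return {"__class__": "server.Greeter", "name": obj.name}

def dict_to_greeter(classname, d):
    # called by Pyro on the receiving side to rebuild the instance
    return Greeter(d["name"])

Pyro4.util.SerializerBase.register_class_to_dict(Greeter, greeter_to_dict)
Pyro4.util.SerializerBase.register_dict_to_class("server.Greeter", dict_to_greeter)

Both sides need these registrations and, as noted above, the client still needs the class importable.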
You can consider using

payload = cPickle.dumps(Greeter(name))

on the server side, and on the client side, once the payload is received:

h = cPickle.loads(payload)

to get the instance of the Greeter object that the server created. Note that, as with Pyro, this serializes only instance state: the client still needs to be able to import the Greeter class definition locally.
I have the following class in Python:
class SDK(object):
    URL = 'http://example.com'

    def checkUrl(self, url):
        # some code
        ...

    class api:
        def innerMethod(self, url):
            data = self.checkUrl(url)
            # rest of code
but when I try to access checkUrl from api, I get an error. I try to call the nested method with:
sdk = SDK()
sdk.api.innerMethod('http://stackoverflow.com')
Is there any simple way to call inner class methods, or (if not) to structure methods into inner objects? Any suggestion will be appreciated.
Edit:
class code:
class SDK(object):
    def run(self, method, *param):
        pass

    class api:
        def checkDomain(self, domain):
            json = self.run('check', domain)
            return json
run code:
sdk = SDK()
result = sdk.api().checkDomain('stackoverflow.com')
The SDK class is not a parent of the api class in your example, i.e. api does not inherit from SDK, they are merely nested.
Therefore the self object in your api.innerMethod method is only an instance of the api class and doesn't provide access to methods of the SDK class.
I strongly recommend getting more knowledgeable about object-oriented programming concepts and grasping what the issue is here. It will help you tremendously.
As for using modules to achieve something along these lines, you can, for example, pull everything from the SDK class into an sdk.py file, which would be the sdk module.
sdk.py:
URL = 'http://example.com'

def checkUrl(url):
    # some code
    ...

class api:
    def innerMethod(self, url):
        data = checkUrl(url)
        # rest of code
main.py:
import sdk
api = sdk.api()
api.innerMethod('http://stackoverflow.com')
Or you may go even further and transform sdk to a package with api being a module inside it.
See https://docs.python.org/2/tutorial/modules.html for details on how to use modules and packages.
If you want a method to act as a classmethod, you have to tell Python about it:
class SDK:
    class api:
        @classmethod
        def foo(cls):
            return 1
Then you can access it like:
SDK.api.foo()
Depending on what you're trying to do, this smells kind of un-pythonic. If it's just the namespace you care about, you'd typically use a module.
I'm currently writing an application which allows the user to extend it via a 'plugin' type architecture. They can write additional python classes based on a BaseClass object I provide, and these are loaded against various application signals. The exact number and names of the classes loaded as plugins is unknown before the application is started, but are only loaded once at startup.
During my research into the best way to tackle this I've come up with two common solutions.
Option 1 - Roll your own using imp, pkgutil, etc.
See for instance, this answer or this one.
Option 2 - Use a plugin manager library
Randomly picking a couple
straight.plugin
yapsy
this approach
My question is - on the proviso that the application must be restarted in order to load new plugins - is there any benefit to the above methods over something inspired by this SO answer and this one, such as:
import inspect
import sys

import my_plugins

def predicate(c):
    # filter to classes
    return inspect.isclass(c)

def load_plugins():
    for name, obj in inspect.getmembers(sys.modules['my_plugins'], predicate):
        obj.register_signals()
Are there any disadvantages to this approach compared to the ones above (other than that all the plugins must be in the same file)? Thanks!
EDIT
Comments request further information... the only additional thing I can think to add is that the plugins use the blinker library to provide signals that they subscribe to. Each plugin may subscribe to different signals of different types and hence must have its own specific "register" method.
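For context, a plugin's register method with blinker might look roughly like this (a sketch; the signal name is made up, and BaseClass is the base class mentioned above):

from blinker import signal

class MyPlugin(BaseClass):
    def register_signals(self):
        # subscribe this plugin's handler to an application signal
        # ('document-saved' is a hypothetical signal name)
        signal('document-saved').connect(self.on_saved)

    def on_saved(self, sender, **kwargs):
        print('MyPlugin reacting to', sender)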
Since Python 3.6, a new class method, __init_subclass__, is available; it is called on a base class whenever a new subclass is created.
This method can further simplify the solution offered by will-hart above, by removing the metaclass.
The __init_subclass__ method was introduced with PEP 487: Simpler customization of class creation. The PEP comes with a minimal example for a plugin architecture:
It is now possible to customize subclass creation without using a metaclass. The new __init_subclass__ classmethod will be called on the base class whenever a new subclass is created:
class PluginBase:
    subclasses = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.subclasses.append(cls)


class Plugin1(PluginBase):
    pass


class Plugin2(PluginBase):
    pass
The PEP example above stores references to the classes in the PluginBase.subclasses field.
If you want to store instances of the plugin classes, you can use a structure like this:
class Plugin:
    """Base class for all plugins. Singleton instances of subclasses are
    created automatically and stored in the Plugin.plugins class field."""
    plugins = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.plugins.append(cls())


class MyPlugin1(Plugin):
    def __init__(self):
        print("MyPlugin1 instance created")

    def do_work(self):
        print("Do something")


class MyPlugin2(Plugin):
    def __init__(self):
        print("MyPlugin2 instance created")

    def do_work(self):
        print("Do something else")


for plugin in Plugin.plugins:
    plugin.do_work()
which outputs:
MyPlugin1 instance created
MyPlugin2 instance created
Do something
Do something else
The metaclass approach is useful for this issue in Python < 3.6 (see quasoft's answer for Python 3.6+). It is very simple and acts automatically on any imported module. In addition, complex logic can be applied to plugin registration with very little effort.
The metaclass approach works as follows:
1) A custom PluginMount metaclass is defined which maintains a list of all plugins.
2) A Plugin class is defined which sets PluginMount as its metaclass.
3) When a class deriving from Plugin - for instance MyPlugin - is imported, it triggers the __init__ method on the metaclass. This registers the plugin and performs any application-specific logic and event subscription.
Alternatively, if you want to hook instance creation rather than class creation, you can put similar logic in the metaclass's __call__ method, which runs whenever an instance of a Plugin-derived class is created.
class PluginMount(type):
    """
    A plugin mount point derived from:
    http://martyalchin.com/2008/jan/10/simple-plugin-framework/

    Acts as a metaclass which creates anything inheriting from Plugin
    """
    def __init__(cls, name, bases, attrs):
        """Called when a Plugin derived class is imported"""
        if not hasattr(cls, 'plugins'):
            # Called when the metaclass is first instantiated
            cls.plugins = []
        else:
            # Called when a plugin class is imported
            cls.register_plugin(cls)

    def register_plugin(cls, plugin):
        """Add the plugin to the plugin list and perform any registration logic"""
        # create a plugin instance and store it
        # optionally you could just store the plugin class and lazily instantiate
        instance = plugin()

        # save the plugin reference
        cls.plugins.append(instance)

        # apply plugin logic - in this case connect the plugin to blinker signals
        # this must be defined in the derived class
        instance.register_signals()
Then a base plugin class which looks like:
class Plugin(object):
    """A plugin which must provide a register_signals() method"""
    # Python 2 syntax; on Python 3 (< 3.6) you would instead write:
    # class Plugin(metaclass=PluginMount): ...
    __metaclass__ = PluginMount
Finally, an actual plugin class would look like the following:
class MyPlugin(Plugin):
    def register_signals(self):
        print "Class created and registering signals"

    def other_plugin_stuff(self):
        print "I can do other plugin stuff"
Plugins can be accessed from any python module that has imported Plugin:
for plugin in Plugin.plugins:
    plugin.other_plugin_stuff()
See the full working example
The approach from will-hart was the most useful one for me!
Since I needed more control, I wrapped the Plugin base class in a function:
def get_plugin_base(name='Plugin',
                    cls=object,
                    metaclass=PluginMount):

    def iter_func(self):
        for mod in self._models:
            yield mod

    bases = not isinstance(cls, tuple) and (cls,) or cls

    class_dict = dict(
        _models=None,
        session=None
    )
    class_dict['__iter__'] = iter_func

    return metaclass(name, bases, class_dict)
and then:
from plugin import get_plugin_base

Plugin = get_plugin_base()
This allows adding additional base classes or switching to another metaclass.
I have a JSON parser library (ijson) with a test suite using unittest. The library actually has several parsing implementations — "backends" — in the form of modules with an identical API. I want to automatically run the test suite several times, once for each available backend. My goals are:
I want to keep all tests in one place as they are backend agnostic.
I want the name of the currently used backend to be visible in some fashion when a test fails.
I want to be able to run a single TestCase or a single test, as unittest normally allows.
So what's the best way to organize the test suite for this? Write a custom test runner? Let TestCases load backends themselves? Imperatively generate separate TestCase classes for each backend?
By the way, I'm not married to the unittest library in particular, and I'm open to trying another one if it solves the problem. But unittest is preferable since I already have the test code in place.
One common way is to group all your tests together in one class with an abstract method that creates an instance of the backend (if you need to create multiple instances in a test), or expects setUp to create an instance of the backend.
You can then create subclasses that create the different backends as needed.
If you are using a test loader that automatically detects TestCase subclasses, you'll probably need to make one change: don't make the common base class a subclass of TestCase: instead treat it as a mixin, and make the backend classes subclass from both TestCase and the mixin.
For example:
class BackendTests:
    def make_backend(self):
        raise NotImplementedError

    def test_one(self):
        backend = self.make_backend()
        # perform a test on the backend

class FooBackendTests(unittest.TestCase, BackendTests):
    def make_backend(self):
        # Create an instance of the "foo" backend:
        return foo_backend

class BarBackendTests(unittest.TestCase, BackendTests):
    def make_backend(self):
        # Create an instance of the "bar" backend:
        return bar_backend
When building a test suite from the above, you will have independent test cases FooBackendTests.test_one and BarBackendTests.test_one that test the same feature on the two backends.
I took James Henstridge's idea of a mixin class holding all the tests, but the actual test cases are then generated imperatively, since backends may fail on import, in which case we don't want to test them:
import unittest
from importlib import import_module

class BackendTests(object):
    def test_something(self):
        # using self.backend
        ...

# Generating real TestCase classes for each importable backend
for name in ['backend1', 'backend2', 'backend3']:
    try:
        classname = '%sTest' % name.capitalize()
        locals()[classname] = type(
            classname,
            (unittest.TestCase, BackendTests),
            {'backend': import_module('backends.%s' % name)},
        )
    except ImportError:
        pass
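Since the generated classes land in the module's namespace under their own names, the standard unittest machinery keeps working; for instance (a sketch, assuming this file is a normal test module):

if __name__ == '__main__':
    unittest.main()  # discovers Backend1Test, Backend2Test, ... like any TestCase

A single backend's case or test can still be selected by name in the usual way (goal 3), e.g. by passing module.Backend1Test.test_something to the unittest command line.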