I have a class that represents an object, and a bunch of methods that modify this object's state with no obvious return value. In C# I would declare all of these methods as void and see no alternative. But in Python I am tempted to make all the methods return self, so that I can write awesome one-liners like this:
classname().method1().method2().method3()
Is this Pythonic, or otherwise acceptable in Python?
Here is an email from Guido van Rossum (the author of Python) on this topic: https://mail.python.org/pipermail/python-dev/2003-October/038855.html
I'd like to explain once more why I'm so adamant that sort() shouldn't
return 'self'.
This comes from a coding style (popular in various other languages, I
believe especially Lisp revels in it) where a series of side effects
on a single object can be chained like this:
x.compress().chop(y).sort(z)
which would be the same as
x.compress()
x.chop(y)
x.sort(z)
I find the chaining form a threat to readability; it requires that the
reader must be intimately familiar with each of the methods. The
second form makes it clear that each of these calls acts on the same
object, and so even if you don't know the class and its methods very
well, you can understand that the second and third call are applied to
x (and that all calls are made for their side-effects), and not to
something else.
I'd like to reserve chaining for operations that return new values,
like string processing operations:
y = x.rstrip("\n").split(":").lower()
There are a few standard library modules that encourage chaining of
side-effect calls (pstat comes to mind). There shouldn't be any new
ones; pstat slipped through my filter when it was weak.
It is an excellent idea for APIs where you are building state through methods. SQLAlchemy, for example, uses this to great effect:
>>> from sqlalchemy.orm import aliased
>>> adalias1 = aliased(Address)
>>> adalias2 = aliased(Address)
>>> for username, email1, email2 in \
... session.query(User.name, adalias1.email_address, adalias2.email_address).\
... join(adalias1, User.addresses).\
... join(adalias2, User.addresses).\
...         filter(adalias1.email_address=='jack@google.com').\
...         filter(adalias2.email_address=='j25@yahoo.com'):
... print(username, email1, email2)
Note that it doesn't return self in many cases; it returns a clone of the current object with a certain aspect altered. This way you can create divergent chains from a shared base: base = instance.method1().method2(), then foo = base.method3() and bar = base.method4().
In the above example, the Query object returned by a Query.join() or Query.filter() call is not the same instance, but a new instance with the filter or join applied to it.
It uses a Generative base class to build upon; so rather than return self, the pattern used is:
def method(self):
    clone = self._generate()
    clone.foo = 'bar'
    return clone
which SQLAlchemy further simplified by using a decorator:
from functools import wraps

def _generative(func):
    @wraps(func)
    def decorator(self, *args, **kw):
        new_self = self._generate()
        func(new_self, *args, **kw)
        return new_self
    return decorator

class FooBar(GenerativeBase):
    @_generative
    def method(self):
        self.foo = 'bar'
All the @_generative-decorated method has to do is make the alterations on the copy; the decorator takes care of producing the copy, binding the method to the copy rather than the original, and returning it to the caller for you.
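The snippets above assume a GenerativeBase that provides _generate(); here is a minimal sketch of what that could look like (the shallow-copy strategy is an assumption for illustration, not SQLAlchemy's actual implementation):

class GenerativeBase(object):
    def _generate(self):
        # Make an empty instance of the same class and shallow-copy
        # the state across, leaving the original object untouched.
        clone = self.__class__.__new__(self.__class__)
        clone.__dict__ = self.__dict__.copy()
        return clone

With that in place, base = FooBar() followed by derived = base.method() leaves base without a foo attribute, while derived.foo == 'bar'.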
Here is an example which demonstrates a scenario where this may be a good technique:
class A:
    def __init__(self, x):
        self.x = x

    def add(self, y):
        self.x += y
        return self

    def multiply(self, y):
        self.x *= y
        return self

    def get(self):
        return self.x

a = A(0)
print(a.add(5).multiply(2).get())
In this case you are able to create an object in which the order in which the operations are performed is strictly determined by the order of the method calls, which might make the code more readable (but also longer).
If you so desire, you can use a decorator here. It will stand out to someone looking through your code to see the interface, and you don't have to explicitly return self from every function (which could be annoying if you have multiple exit points).
import functools

def fluent(func):
    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        # Assume it's a method.
        self = args[0]
        func(*args, **kwargs)
        return self
    return wrapped

class Foo(object):
    @fluent
    def bar(self):
        print("bar")

    @fluent
    def baz(self, value):
        print("baz: {}".format(value))

foo = Foo()
foo.bar().baz(10)
Prints:
bar
baz: 10
Like the question posted here, I want to create a class that inherits from another class passed as an argument.
class A():
    def __init__(self, args):
        pass  # stuff

class B():
    def __init__(self, args):
        pass  # stuff

class C():
    def __init__(self, cls, args):
        self.inherit(cls, args)  # the desired (non-existent) behaviour

args = ...  # arguments to create instances of A and B
class_from_A = C(A, args)  # instance of C inheriting from A
class_from_B = C(B, args)  # instance of C inheriting from B
I want to do this so that I can keep track of the calls I make to different web APIs. The thought is that I am just adding my own functionality to any API-type object. The problem with the solution to the linked question is that I don't want to have to go through an additional "layer" to use the API-type object: I want to say obj.get_data() instead of obj.api.get_data().
I've tried looking into how super() works but haven't come across anything that would help (although I could easily have missed something). Any help would be nice, and I'm open to other approaches for what I'm trying to do; however, just out of curiosity, I'd like to know if this is possible.
I don't think it's possible, because __init__ is called after __new__, which is where you would specify base classes. But I think you can achieve your goal of tracking API calls using a metaclass. Since you didn't give any examples of what tracking the calls means, I'll leave you with an example metaclass which counts method calls; you can adapt it to your needs.
Another alternative would be to subclass A and B with methods that track whatever you need and just return super().whatever(), as sketched below. I think I'd prefer that approach unless A and B contain too many methods to be worth managing like that.
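A minimal sketch of that subclassing alternative (TrackedA, get_data and the _record helper are hypothetical names for illustration; the question only implies that the API class has a get_data method):

class TrackedA(A):
    def get_data(self, *args, **kwargs):
        self._record("get_data")  # whatever "tracking a call" means for you
        return super().get_data(*args, **kwargs)

    def _record(self, name):
        # Hypothetical helper: keep a simple list of the calls made.
        self.calls = getattr(self, "calls", [])
        self.calls.append(name)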
Here's an implementation of such a call-counting metaclass from python-course.eu, by Bernd Klein; see that site for more detail.
class FuncCallCounter(type):
    """ A Metaclass which decorates all the methods of the
        subclass using call_counter as the decorator
    """

    @staticmethod
    def call_counter(func):
        """ Decorator for counting the number of function
            or method calls to the function or method func
        """
        def helper(*args, **kwargs):
            helper.calls += 1
            return func(*args, **kwargs)
        helper.calls = 0
        helper.__name__ = func.__name__
        return helper

    def __new__(cls, clsname, superclasses, attributedict):
        """ Every method gets decorated with the decorator call_counter,
            which will do the actual call counting
        """
        for attr in attributedict:
            if callable(attributedict[attr]) and not attr.startswith("__"):
                attributedict[attr] = cls.call_counter(attributedict[attr])
        return type.__new__(cls, clsname, superclasses, attributedict)
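A quick usage sketch (Python 3 metaclass syntax; the Api class and get_data method are illustrative names, not from the question):

class Api(metaclass=FuncCallCounter):
    def get_data(self):
        pass

api = Api()
api.get_data()
api.get_data()
print(Api.get_data.calls)  # -> 2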
I have a decorator which simply caches return values (called @cached in my example) and I wish to use it in conjunction with @property. This works just fine normally. The problem I am facing occurs when I try to use the expire attribute added by @cached.
def cached(f):
    cache = [None]
    def inner(*args, **kwargs):
        if cache[0] is None:
            cache[0] = f(*args, **kwargs)
        return cache[0]
    def expire():
        cache[0] = None
    inner.expire = expire
    return inner

class Example(object):
    @property
    @cached
    def something_expensive(self):
        print("expensive")
        return "hello"

e = Example()
e.something_expensive
e.something_expensive.expire()  # AttributeError: 'str' object has no attribute 'expire'
How can I get access to the expire function that was added to the inner function, after it has been replaced by @property? I understand why this doesn't work; I am interested in a way of working around the problem.
Some restrictions:
I cannot change the @cached decorator; it's in a library I don't control.
I would really rather not remove @property, because I want to call expire in my unit tests, and properties make my code much nicer to use.
One solution that I think is rather bad (because in reality I have a lot of properties that I want to do this for) is:
class Example(object):
    @cached
    def _something_expensive(self):
        return "hello"

    @property
    def something_expensive(self):
        return self._something_expensive()
You can access it using the class dictionary:
type(e).__dict__['something_expensive'].fget.expire()
In general e.something_expensive is equivalent to:
type(e).__dict__['something_expensive'].__get__(e, type(e))
For more details, read the Descriptor HowTo Guide.
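A quick way to convince yourself of that equivalence, using the Example class and instance e from the question:

prop = type(e).__dict__['something_expensive']  # the property object itself
# Both of these evaluate the property and return the same cached value:
assert prop.__get__(e, type(e)) == e.something_expensive
# The undecorated cached function (and its expire attribute) lives on fget:
prop.fget.expire()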
Note that inside the expire function you must mutate the cache list from the enclosing cached function rather than rebind the name; an assignment like cache = [None] inside expire would simply create a new local variable. Item assignment (cache[0] = None, as in the question) mutates the list in place, and so does:
def expire():
    del cache[:]
    cache.append(None)
In Python 3 it's even easier: the nonlocal keyword lets you rebind cache from the enclosing scope directly.
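A sketch of that nonlocal variant (Python 3 only; note this only helps if you control the decorator, which the question rules out):

def cached(f):
    cache = None
    def inner(*args, **kwargs):
        nonlocal cache
        if cache is None:
            cache = f(*args, **kwargs)
        return cache
    def expire():
        nonlocal cache  # rebind the enclosing variable, not a new local
        cache = None
    inner.expire = expire
    return inner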
I'm working on a class that basically allows for method chaining, for setting some attributes on different dictionaries it stores.
The syntax is as follows:
d = Test()
d.connect().setAttrbutes(Message=Blah, Circle=True, Key=True)
But there can also be other instances, so, for example:
d = Test()
d.initialise().setAttrbutes(Message=Blah)
Now, I believe that I can override the "setAttrbutes" function; I just don't want to create a function for each of the dictionaries. Instead I want to capture the name of the previously chained function. So in the examples above I would be given "connect" and "initialise", and would therefore know which dictionary to store the attributes in.
I hope this makes sense. Any ideas would be greatly appreciated :)
EDIT:
Would this work / be a good workaround for the above problem?
Using method overloading, I can have the following methods:
def setAttrbutes(self, Name="Foo", Message="", Circle=False):
    print("Attrbutes method called for 'Foo'")

def setAttrbutes(self, Name="Boo", Message=""):
    print("Attrbutes method called for 'Boo'")
So which method gets called would depend on the name that is used. For example, in main, if I have the following:
d.setAttrbutes(Name="Foo", Message="Hello world", Circle=True) # this will call the first
d.setAttrbutes(Name="Boo", Message="Hello world") # this will call the second
Would this work, and, if not, why?
This is almost certainly a bad idea… but it is doable, in a few different ways.
Most simply, you can just have each function save its name in the object, e.g.:
import functools

def stash_name(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        self._stashed_name = func.__name__
        return func(self, *args, **kwargs)
    return wrapper

class Test(object):
    @stash_name
    def foo(self, x):
        print(x)

    @stash_name
    def bar(self):
        print()
Now, after calling d.connect(), d._stashed_name will be "connect".
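For example, with the Test class above:

t = Test()
t.foo(42)               # prints 42
print(t._stashed_name)  # -> "foo"
t.bar()
print(t._stashed_name)  # -> "bar"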
At the opposite extreme, if you want to get really hacky, you can do this without any cooperation from the preceding method. Just use sys._getframe(1) to find your calling context, then you can examine the frame's f_code to see how you were called.
You can use the dis module to see the real bytecode, but basically it will look like this pseudo-bytecode:
LOAD_NAME d
LOAD_ATTR connect
<possibly other ops to prepare arguments>
CALL_FUNCTION 1 (or any other CALL_FUNCTION_* variant)
LOAD_ATTR setAttributes
<various other ops to prepare arguments>
CALL_FUNCTION 0
In this case, you can either get the attribute name from the LOAD_ATTR, or get the value that was pushed and look at its im_func.__name__, depending which one you want.
Of course there will be other cases that don't look like this. For example, let's say I called it as getattr(d, ''.join(['con', 'nect']))() instead of d.connect(). Or I looked up the unbound method and built a bound method on the fly. Or… What would you want to do in each such case? If you have the answers to all such cases, you can work out the rule that generates those answers, then figure out how to get that from the bytecode.
Since you tacked on a second, completely different, question, here's a second answer.
Would this work / be a good workaround for the above problem?
Using method overloading, I can have the following methods:
No, you can't. Python does not have method overloading. If you def a method with the same name as a previous method, it just replaces the first one entirely.
There are ways to simulate method overloading by dispatching on the argument values manually within the method body. For example:
def _setAttrbutes_impl1(self, Name, Message, Circle):
    pass

def _setAttrbutes_impl2(self, Name, Message):
    pass

def setAttrbutes(self, Name=None, Message="", Circle=None):
    if Circle is None:
        return self._setAttrbutes_impl2("Boo" if Name is None else Name, Message)
    else:
        return self._setAttrbutes_impl1("Foo" if Name is None else Name, Message, Circle)
But this is rarely useful.
My Situation
I'm currently working on a project in Python which I want to use to learn a bit more about software architecture. I've read a few texts and watched a couple of talks about dependency injection, and learned to love how clearly constructor injection shows the dependencies of an object.
However, I'm kind of struggling with how to get a dependency passed to an object. I decided NOT to use a DI framework since:
I don't have enough knowledge of DI to specify my requirements and thus cannot choose a framework.
I want to keep the code free of more "magical" stuff, since I have the feeling that introducing a seldom-used framework drastically decreases readability (more code to read, of which only a small part is used).
Thus, I'm using custom factory functions to create objects and explicitly pass their dependencies:
# Business and Data Objects
class Foo:
    def __init__(self, bar):
        self.bar = bar

    def do_stuff(self):
        print(self.bar)

class Bar:
    def __init__(self, prefix):
        self.prefix = prefix

    def __str__(self):
        return str(self.prefix) + "Hello"

# Wiring up dependencies
def create_bar():
    return Bar("Bar says: ")

def create_foo():
    return Foo(create_bar())

# Starting the application
f = create_foo()
f.do_stuff()
Alternatively, if Foo has to create a number of Bars itself, it gets the creator function passed through its constructor:
# Business and Data Objects
class Foo:
    def __init__(self, create_bar):
        self.create_bar = create_bar

    def do_stuff(self, times):
        for _ in range(times):
            bar = self.create_bar()
            print(bar)

class Bar:
    def __init__(self, greeting):
        self.greeting = greeting

    def __str__(self):
        return self.greeting

# Wiring up dependencies
def create_bar():
    return Bar("Hello World")

def create_foo():
    return Foo(create_bar)

# Starting the application
f = create_foo()
f.do_stuff(3)
While I'd love to hear improvement suggestions on the code, this is not really the point of this post. However, I feel that this introduction is required to understand
My Question
While the above looks rather clear, readable, and understandable to me, I run into a problem when the prefix dependency of Bar is required to be identical for all Bars created in the context of one Foo object, and is thus coupled to the Foo object's lifetime. As an example, consider a prefix which implements a counter (see the code examples below for implementation details).
I have two ideas for how to realize this; however, neither of them seems perfect to me:
1) Pass Prefix through Foo
The first idea is to add a constructor parameter to Foo and make it store the prefix in each Foo instance.
The obvious drawback is that it mixes up the responsibilities of Foo: it controls the business logic AND provides one of the dependencies to Bar. Once Bar no longer requires the dependency, Foo has to be modified. That seems like a no-go to me. Since I don't really think this should be a solution, I did not post the code here, but have provided it on pastebin for the very interested reader ;)
2) Use Functions with State
Instead of placing the Prefix object inside Foo, this approach encapsulates it inside the create_foo function. By creating one Prefix per Foo object and referencing it in an anonymous function using lambda, I keep the details (a.k.a. there-is-a-prefix-object) away from Foo and inside my wiring logic. Of course a named function would work too (but lambda is shorter).
# Business and Data Objects
class Foo:
    def __init__(self, create_bar):
        self.create_bar = create_bar

    def do_stuff(self, times):
        for _ in range(times):
            bar = self.create_bar()
            print(bar)

class Bar:
    def __init__(self, prefix):
        self.prefix = prefix

    def __str__(self):
        return str(self.prefix) + "Hello"

class Prefix:
    def __init__(self, name):
        self.name = name
        self.count = 0

    def __str__(self):
        self.count += 1
        return self.name + " " + str(self.count) + ": "

# Wiring up dependencies
def create_bar(prefix):
    return Bar(prefix)

def create_prefix(name):
    return Prefix(name)

def create_foo(name):
    prefix = create_prefix(name)
    return Foo(lambda: create_bar(prefix))

# Starting the application
f1 = create_foo("foo1")
f2 = create_foo("foo2")
f1.do_stuff(3)
f2.do_stuff(2)
f1.do_stuff(2)
This approach seems much more useful to me. However, I'm not sure about common practices and thus fear that having state inside functions is not really recommended. Coming from a Java/C++ background, I'd expect a function to depend on its parameters, its class members (if it's a method), or some global state. Thus, a parameterless function that does not use global state would have to return exactly the same value every time it is called. This is not the case here: once the returned object is modified (which means that the counter in prefix has been increased), the function returns an object which has a different state than it had when being returned the first time.
Is this assumption just caused by my limited experience with Python, and do I have to change my mindset, i.e. think not of functions but of something callable? Or is supplying functions with state an unintended misuse of lambda?
3) Using a Callable Class
To overcome my doubts about stateful functions, I could use a callable class, where the create_foo function of approach 2 would be replaced by this:
class BarCreator:
    def __init__(self, prefix):
        self.prefix = prefix

    def __call__(self):
        return create_bar(self.prefix)

def create_foo(name):
    return Foo(BarCreator(create_prefix(name)))
While this seems a usable solution for me, it is sooo much more verbose.
Summary
I'm not absolutely sure how to handle the situation. Although I prefer number 2, I still have my doubts. Furthermore, I still hope that someone comes up with a more elegant way.
Please comment if there is anything you think is too vague or could possibly be misunderstood. I will improve the question as far as my abilities allow me to do :)
All examples should run under Python 2.7 and Python 3; if you experience any problems, please report them in the comments and I'll try to fix my code.
If you want to inject a callable object but don't want it to have a complex setup -- if, as in your example, it's really just binding to a single input value -- you could try using functools.partial to pair a function with a value:
import functools

def factory_function(arg):
    # processing here
    return configured_object_based_on_arg  # placeholder

class Consumer(object):
    def __init__(self, injection):
        self._injected = injection

    def use_injected_value(self):
        print(self._injected())

injectable = functools.partial(factory_function, 'this is the configuration argument')
example = Consumer(injectable)
example.use_injected_value()  # prints the result of your factory function and argument
As an aside, if you're creating a dependency-injection setup like your option 3, you probably want to put the knowledge about how to do the configuration into a factory class rather than doing it inline as you're doing here. That way you can swap out factories if you want to choose between strategies. It's not functionally very different (unless the creation is more complex than this example and involves persistent state), but it's more flexible down the road if the code looks like:
factory = FooBarFactory()
bar1 = factory.create_bar()
alt_factory = FooBlahFactory(extra_info)
bar2 = alt_factory.create_bar()
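For completeness, a minimal sketch of such a factory (FooBarFactory and FooBlahFactory are the names from the snippet above; their bodies are assumptions built on the question's Bar and create_prefix):

class FooBarFactory(object):
    """Factory that knows how to wire up Bar instances one way."""
    def create_bar(self):
        return Bar(create_prefix("foo1"))

class FooBlahFactory(object):
    """Alternative factory; extra_info configures the bars it produces."""
    def __init__(self, extra_info):
        self.extra_info = extra_info

    def create_bar(self):
        return Bar(create_prefix(self.extra_info))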
I've attended a session about the decorator pattern where most of the examples provided were in Java. For instance, in the example below a Pizza object is decorated with two toppings:
Pizza vegPizza = new ToppingType1(new ToppingType2(new Pizza()));
In Python I've seen decorators being used in scenarios like this:
@applyTopping2
@applyTopping1
def makePizza():
    pass
Though here I can see that the makePizza function is being decorated with two functions, it differs a lot from the class-based approach of Java.
My question is: do Python decorators strictly implement the decorator pattern, or is the implementation a little different while the idea stays the same?
PS: I am not sure where to look up the standard definition of the decorator pattern. Wikipedia gives an example in a dynamic language (JS), but my question still holds :)
You should read this, which explains what Python's decorators are.
The "decorators" we talk about with concern to Python are not exactly the same thing as the DecoratorPattern [...]. A Python decorator is a specific change to the Python syntax that allows us to more conveniently alter functions and methods (and possibly classes in a future version). This supports more readable applications of the DecoratorPattern but also other uses as well.
So you will be able to implement the "classical" decorator pattern with Python's decorators, but you will also be able to do much more (fun ;)) things with them.
In your case, it could look like:
def applyTopping1(functor):
    def wrapped():
        base_pizza = functor()
        base_pizza.add("mozzarella")
        return base_pizza
    return wrapped

def applyTopping2(functor):
    def wrapped():
        base_pizza = functor()
        base_pizza.add("ham")
        return base_pizza
    return wrapped
And then you will get a Margherita :)
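Putting it together with the question's makePizza (this assumes a Pizza class with an add() method, which the question only implies):

@applyTopping2
@applyTopping1
def makePizza():
    return Pizza()  # assumed: Pizza() starts plain and supports .add(topping)

pizza = makePizza()  # a Pizza with mozzarella and ham added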
Not a real answer, just a fun example of how one can do things in Python compared to static languages:
def pizza_topping_decorator_factory(topping):
    def fresh_topping_decorator(pizza_function):
        def pizza_with_extra_topping(*args, **kw):
            return pizza_function(*args, **kw) + ", " + topping
        return pizza_with_extra_topping
    return fresh_topping_decorator

mozzarela = pizza_topping_decorator_factory("mozarella")
anchove = pizza_topping_decorator_factory("anchove")

@anchove
@mozzarela
def pizza():
    return "Pizza"
When pasting this code into the Python console:
>>>
>>> print pizza()
Pizza, mozarella, anchove
The @decorator syntax in Python is nothing more than syntactic sugar.
The code
@decorator
def foo():
    pass
is nothing more than
def foo():
    pass
foo = decorator(foo)
The final result just depends on what you want it to be.
You may use a class as a decorator, in which case foo is passed to the class's __init__ and replaced by the resulting instance.
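A minimal sketch of that class-as-decorator case (the Logged name is just for illustration):

class Logged(object):
    def __init__(self, func):
        # foo is passed in here at decoration time
        self.func = func

    def __call__(self, *args, **kwargs):
        print("calling {}".format(self.func.__name__))
        return self.func(*args, **kwargs)

@Logged
def foo():
    pass

foo()  # prints "calling foo"; foo is now a Logged instance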
You may use an object instantiated with some parameters as a decorator, like in the code below:
class Decorator(object):
    def __init__(self, arg1, arg2):
        pass

    def __call__(self, foo):
        # Receives the decorated function; return it (or a wrapper around it).
        return foo

@Decorator(arg1_value, arg2_value)
def foo():
    pass
It is only you who decides what your code will look like and what it will do, not any pattern or somebody from the Java world :)