Does anyone know if there is a way to instantiate a function of a class for not yet instantiated objects in Python? I would like something like this:
class C():
    def __init__(self, var):
        self.var = var

    def f1(self):
        self.var += 1

    def f2(self):
        self.var += 2

cond = True
if cond:
    f = C.f1
else:
    f = C.f2

for i in xrange(int(1e7)):
    a = C(1)
    for j in xrange(int(1e3)):
        a.f()  # desired: call whichever function was chosen above
The goal is to be able to pick 'f' as 'min', 'max' or 'mean' for NumPy arrays once at the beginning, rather than checking in each loop iteration which function to use.
The types of a and b are numpy.ndarray. You have imported ndarray, so you can simply call ndarray.min on a and b:
f = ndarray.min
print f(a), f(b)
Here, ndarray.min(x) is equivalent to x.min().
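For example, a self-contained sketch (the example arrays are made up for illustration):
import numpy as np
from numpy import ndarray

a = np.array([3, 1, 2])
b = np.array([9, 7, 8])

f = ndarray.min    # select the function once, up front
print(f(a), f(b))  # 1 7 -- identical to a.min() and b.min()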
Edit: getting numpy.ndarray.min without explicit knowledge of the type of the result of a call to rand:
f = type(rand(int())).min
Note that you still need to know that this type has a min function.
One does not "instantiate" a function; one instantiates objects, as instances of classes.
Now, one calls a method defined on a class klass through the class itself: klass.method(foo).
If rand.min is a function which can be called on a, then one would simply do rand.min(a) or f(a).
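Applied to the class from the question, a minimal sketch (in Python 3, C.f1 is a plain function that takes the instance explicitly):
class C(object):
    def __init__(self, var):
        self.var = var
    def f1(self):
        self.var += 1
    def f2(self):
        self.var += 2

cond = True
f = C.f1 if cond else C.f2  # choose the function once, up front

a = C(1)
f(a)          # equivalent to a.f1()
print(a.var)  # 2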
def f(obj):
    print('attr =', obj.attr)

class Foo:
    attr = 100
    attr_val = f

x = Foo()
print(x.attr)
x.attr_val()
Output:
100
attr = 100
I got this code from Real Python, but I don't understand how x is passed into the function f. Can someone explain that to me? Thanks.
x is an instance of the class; when you call x.attr_val(), Python automatically passes the instance itself as the first argument to the function (arguments like this are conventionally named self).
attr_val is what is called an instance method. When you call it on an instance of Foo, the instance is passed as the first argument automatically, effectively running: f(x)
If you were writing the methods yourself (for example a custom __init__), the standard practice is to declare a self parameter to make this self-reference explicit.
Thus, a more verbose variant would be:
def f(obj):
    print('attr =', obj.attr)

class Foo:
    def __init__(self):
        self.attr = 100

    def attr_val(self):
        f(self)  # or "return f(self)"

x = Foo()
x.attr_val()
# attr = 100
I would like to freeze one or more function arguments at runtime, by calling a function that freezes these arguments, from another file. For example, imagine I have the following functions:
def f1(in_func):
    in_func()

def f2():
    print('f2')

def freeze_arg(in_func):
    '''Freeze f1's in_func parameter to the given in_func. Redefine f1.'''
Is there a way to define freeze_arg in this example so that I can call it from another file and change the definition of f1? Note that f1 and freeze_arg are defined in the same module. In other words, I want to import all the functions above and then call freeze_arg. A simple implementation like:
import functools

def freeze_arg(in_func):
    f1 = functools.partial(f1, in_func)
won't work, because a local f1 variable is created that shadows the original f1 function (the call crashes with an UnboundLocalError as a result). Naively, however, I expected the following to work:
def freeze_arg(in_func):
    global f1
    f1 = functools.partial(f1, in_func)

# in another file
freeze_arg(f2)
f1()
But it throws an error saying that f1 expects a positional argument:
TypeError: f1() missing 1 required positional argument: 'in_func'
Note that this code does work if the call to freeze_arg happens in the same file.
Another solution that did not work is the following:
frozen_func = None

def freeze_arg(in_func):
    global frozen_func
    frozen_func = functools.partial(f1, in_func)

# in another file:
freeze_arg(f2)
frozen_func()
throws:
TypeError: 'NoneType' object is not callable
So my question is: how do I define freeze_arg so that I can call it after import and redefine f1 that way? f1 is defined in the same file as freeze_arg. All of the examples above work when all functions are defined in the same file, but not when the functions are imported from another file. A workaround would be to simply reassign the functions outside of the freeze_arg function, or to make a class whose methods can be used instead of the original functions. These workarounds are, however, not what I am looking for.
Also note that decorators are not an option, since they run at import time and I want to change the function arguments at runtime.
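For reference, the cross-file behavior described above matches how from imports bind names: from mymodule import f1 copies the current binding of f1 into the importing file, so a later global f1 = ... rebinding inside mymodule never reaches that copy (and frozen_func is copied while it is still None). Looking the function up through the module object on every call sidesteps the stale binding. A minimal sketch, where the module name mymodule is hypothetical:
# mymodule.py
import functools

def f1(in_func):
    in_func()

def f2():
    print('f2')

def freeze_arg(in_func):
    global f1
    f1 = functools.partial(f1, in_func)

# main.py -- note: import the module, not the names
import mymodule

mymodule.freeze_arg(mymodule.f2)
mymodule.f1()  # prints 'f2' -- the rebinding is visible through the module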
I would make a parameterized class, my_functor below. It takes a parameter, some_param, and returns a class -
def my_functor(some_param):
    class my_class:
        def __init__(self, a, b, c):
            self.a = a
            self.b = b
            self.c = c
        def __str__(self):
            return f"<{some_param}> {self.a} {self.b} {self.c}"
    return my_class
Calling my_functor builds a new class with a "frozen" parameter -
a_type = my_functor("A")
b_type = my_functor("B")
print(a_type("alice", "brenda", "claire"))
print(a_type("denise", "erica", "francisa"))
print(b_type("apple", "berry", "carrot"))
print(b_type("dill", "eggplant", "fennel"))
<A> alice brenda claire
<A> denise erica francisa
<B> apple berry carrot
<B> dill eggplant fennel
The class can be parameterized with many parameters of varying types -
def my_functor(some_param, serializer):
    class my_class:
        def __init__(self, a, b, c):
            self.a = a
            self.b = b
            self.c = c
        def serialize(self):
            return serializer(f"<{some_param}>", self.a, self.b, self.c)
    return my_class
Here's an example with a "stringifying" serializer -
a_type = my_functor("A", lambda *x: ",".join(x))
print(a_type("alice", "brenda", "claire").serialize())
print(a_type("denise", "erica", "francisa").serialize())
<A>,alice,brenda,claire
<A>,denise,erica,francisa
And another type that serializes using a plain list -
b_type = my_functor("B", list)
print(b_type("apple", "berry", "carrot").serialize())
print(b_type("dill", "eggplant", "fennel").serialize())
['<B>', 'apple', 'berry', 'carrot']
['<B>', 'dill', 'eggplant', 'fennel']
I was trying to store a reference to an unbound method and noticed that it is being automatically bound. See the example below. Is there a more elegant way to store an unbound method in the class without binding it?
def unbound_method():
    print("YEAH!")

class A:
    bound_method = unbound_method
    unbound_methods = [unbound_method]

a = A()
a.unbound_methods[0]()  # succeeds
a.bound_method()        # fails
# TypeError: unbound_method() takes 0 positional arguments but 1 was given
This is not a standard "do you know about @staticmethod?" question.
What I'm trying to achieve is to provide a way for children of the class to provide another handler for certain situations. I do not control unbound_method itself; it is provided by some library.
def unbound_method_a():
    print("YEAH!")

def unbound_method_b():
    print("WAY MORE YEAH!")

class A:
    bound_method = unbound_method_a

class B(A):
    bound_method = unbound_method_b

a = A()
a.bound_method()  # fails; desired output: YEAH!

b = B()
b.bound_method()  # fails; desired output: WAY MORE YEAH!
It can be achieved by wrapping the unbound method in some dummy object like a list, or in a bound method that just drops the self reference, like this:
def unbound_method_a():
    print("YEAH!")

def unbound_method_b():
    print("WAY MORE YEAH!")

class A:
    def call_unbound_method(self):
        return unbound_method_a()

class B(A):
    def call_unbound_method(self):
        return unbound_method_b()

a = A()
a.call_unbound_method()
# YEAH!

b = B()
b.call_unbound_method()
# WAY MORE YEAH!
Not as far as I know. Would it be so bad if you just replaced
a.bound_method()
with
A.bound_method()
?
I can't think of a situation in which the first one can't be replaced by the second one.
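One refinement for the inheritance case from the question: looking the function up on the instance's class, rather than a hard-coded class, preserves subclass overrides. A sketch, assuming Python 3, where a function stored as a class attribute is reachable through the class as a plain function:
def unbound_method_a():
    print("YEAH!")

def unbound_method_b():
    print("WAY MORE YEAH!")

class A:
    bound_method = unbound_method_a

class B(A):
    bound_method = unbound_method_b

b = B()
A.bound_method()        # YEAH! -- fixed class, ignores B's override
type(b).bound_method()  # WAY MORE YEAH! -- dispatches on b's actual class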
I am trying to dynamically create classes in Python and am relatively new to classes and class inheritance. Basically, I want my final object to have different types of history depending on different needs. I have a solution, but I feel there must be a better way. I dreamed up something like this:
class A:
    def __init__(self):
        self.history = {}
    def do_something(self):
        pass

class B:
    def __init__(self):
        self.history = []
    def do_something_else(self):
        pass

class C(A, B):
    def __init__(self, a=False, b=False):
        if a:
            A.__init__(self)
        elif b:
            B.__init__(self)

use1 = C(a=True)
use2 = C(b=True)
You probably don't really need that, and this is probably an XY problem, but those happen regularly when you are learning a language. You should be aware that you typically don't need to build huge class hierarchies with Python like you do with some other languages. Python employs "duck typing" -- if a class has the method you want to use, just call it!
Also, by the time __init__ is called, the instance already exists. You can't (easily) change it out for a different instance at that time (though, really, anything is possible).
If you really want to be able to instantiate a class and receive what are essentially instances of completely different classes depending on what you passed to the constructor, the simple, straightforward thing to do is use a function that returns instances of different classes.
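For instance, a minimal factory along those lines (make_instance is an illustrative name; A and B are the question's classes):
def make_instance(a=False, b=False):
    """Return an instance of a completely different class per flag."""
    if a:
        return A()
    if b:
        return B()
    raise ValueError("pass a=True or b=True")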
However, for completeness, you should know that classes can define a __new__ method, which gets called before __init__. This method can return an instance of the class, or an instance of a completely different class, or whatever the heck it wants. So, for example, you can do this:
class A(object):
    def __init__(self):
        self.history = {}
    def do_something(self):
        print("Class A doing something", self.history)

class B(object):
    def __init__(self):
        self.history = []
    def do_something_else(self):
        print("Class B doing something", self.history)

class C(object):
    def __new__(cls, a=False, b=False):
        if a:
            return A()
        elif b:
            return B()

use1 = C(a=True)
use2 = C(b=True)
use3 = C()

use1.do_something()
use2.do_something_else()
print(use3 is None)
This works with either Python 2 or 3. With 3 it returns:
Class A doing something {}
Class B doing something []
True
I'm assuming that for some reason you can't change A and B, and you need the functionality of both.
Maybe what you need are two different classes:
class CAB(A, B):
    '''uses A's __init__'''

class CBA(B, A):
    '''uses B's __init__'''

use1 = CAB()
use2 = CBA()
The goal is to dynamically create a class.
I don't really recommend dynamically creating a class. You can use a function to do this instead, and you can easily do things like pickle the instances because their classes are available in the global namespace of the module:
def make_C(a=False, b=False):
    if a:
        return CAB()
    elif b:
        return CBA()
But if you insist on "dynamically creating the class":
def make_C(a=False, b=False):
    if a:
        return type('C', (A, B), {})()
    elif b:
        return type('C', (B, A), {})()
And usage either way is:
use1 = make_C(a=True)
use2 = make_C(b=True)
I was thinking about the very same thing and came up with a helper function that defines and returns a class inheriting from the type provided as an argument. The solution presented itself when I was working on a named value class: I wanted a value that could have its own name but otherwise behave as a regular variable. The idea could be useful mostly for debugging, I think. Here is the code:
def getValueClass(thetype):
    """Helper function for getting the `Value` class.

    Gets the named value class, based on `thetype`.
    """
    # if thetype not in (int, float, complex):  # if needed
    #     raise TypeError("The type is not numeric.")
    class Value(thetype):
        __text_signature__ = '(value, name: str = "")'
        __doc__ = f"A named value of type `{thetype.__name__}`"

        def __init__(self, value, name: str = ""):
            """Value(value, name) -- a named value"""
            self._name = name

        def __new__(cls, value, name: str = ""):
            instance = super().__new__(cls, value)
            return instance

        def __repr__(self):
            return f"{super().__repr__()}"

        def __str__(self):
            return f"{self._name} = {super().__str__()}"

    return Value
Some examples:
IValue = getValueClass(int)
FValue = getValueClass(float)
CValue = getValueClass(complex)
iv = IValue(3, "iv")
print(f"{iv!r}")
print(iv)
print()
fv = FValue(4.5, "fv")
print(f"{fv!r}")
print(fv)
print()
cv = CValue(7 + 11j, "cv")
print(f"{cv!r}")
print(cv)
print()
print(f"{iv + fv + cv = }")
The output:
3
iv = 3
4.5
fv = 4.5
(7+11j)
cv = (7+11j)
iv + fv + cv = (14.5+11j)
When working in IDLE, the variables seem to behave as built-in types, except when printing:
>>> vi = IValue(4, "vi")
>>> vi
4
>>> print(vi)
vi = 4
>>> vf = FValue(3.5, 'vf')
>>> vf
3.5
>>> vf + vi
7.5
>>>
Say I have a class that looks like this:
class Test(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b
        self.c = self.a + self.b
I would like the value of self.c to change whenever the value of attributes self.a or self.b changes for the same instance.
e.g.
test1 = Test(2, 4)
print test1.c  # prints 6
test1.a = 3
print test1.c  # still prints 6
I know why it would still print 6, but is there a mechanism I could use to fire an update to self.c when self.a has changed? Or is my only option to have a method that returns the value of self.c based on the current state of self.a and self.b?
Yes, there is! It's called properties.
Read Only Properties
class Test(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    @property
    def c(self):
        return self.a + self.b
With the above code, c is now a read-only property of the Test class.
Mutable Properties
You can also give a property a setter, which would make it read/write and allow you to set its value directly. It would look like this:
class Test(object):
    def __init__(self, c=SomeDefaultValue):
        self._c = c

    @property
    def c(self):
        return self._c

    @c.setter
    def c(self, value):
        self._c = value
However, in this case, it would not make sense to have a setter for self.c, since its value depends on self.a and self.b.
What does @property mean?
The @property bit is an example of something called a decorator. A decorator wraps the function (or class) it decorates inside another function (the wrapper returned by the decorator). After a function has been decorated, calling it actually calls that wrapper, with the original function captured inside and the call's arguments passed along. Usually (but not always!) the wrapper does something interesting and then calls the original (decorated) function as normal. For example:
def my_decorator(thedecoratedfunction):
    def wrapped(*allofthearguments):
        print("This function has been decorated!")   # something interesting
        thedecoratedfunction(*allofthearguments)     # calls the function as normal
    return wrapped

@my_decorator
def myfunction(arg1, arg2):
    pass
This is equivalent to:
def myfunction(arg1, arg2):
    pass

myfunction = my_decorator(myfunction)
So this means in the class example above, instead of using the decorator you could also do this:
def c(self):
    return self.a + self.b
c = property(c)
They are exactly the same thing. @property is just syntactic sugar that routes access to myobject.c through the property's getter and setter (deleters are also an option).
Wait - How does that work?
You might be wondering why simply doing this once:
myfunction = my_decorator(myfunction)
...results in such a drastic change! So that, from now on, when calling:
myfunction(arg1, arg2)
...you are actually calling my_decorator(myfunction), with arg1, arg2 sent to the interior wrapped function inside of my_decorator. And not only that, but all of this happens even though you didn't even mention my_decorator or wrapped in your function call at all!
All of this works by virtue of something called a closure. When the function is passed into the decorator in this way (e.g., property(c)), the function's name is re-bound to the wrapped version of the function instead of the original function, and the original function's arguments are always passed to wrapped instead of the original function. This is simply the way that closures work, and there's nothing magical about it. Here is some more information about closures.
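As a minimal illustration of a closure on its own, separate from decorators (a sketch using Python 3's nonlocal):
def make_counter():
    count = 0                # free variable captured by the closure
    def increment():
        nonlocal count       # rebind the captured variable
        count += 1
        return count
    return increment

counter = make_counter()
print(counter())  # 1
print(counter())  # 2 -- count persists between calls inside the closure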
Descriptors
So to summarize so far: @property is just a way of wrapping the class method inside of the property() function so the wrapped class method is called instead of the original, unwrapped class method. But what is the property function? What does it do?
The property function adds something called a descriptor to the class. Put simply, a descriptor is an object that can have separate get, set, and delete methods. When you do this:
@property
def c(self):
    return self._c
...you are adding a descriptor to the Test class called c, and defining the get method (actually, __get__()) of the c descriptor as equal to the c(self) method.
When you do this:
@c.setter
def c(self, value):
    self._c = value
...you are defining the set method (actually, __set__()) of the c descriptor as equal to the c(self,value) method.
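To make this concrete, here is a minimal hand-written descriptor that behaves roughly like the property-based Test above (the class name CDescriptor is illustrative):
class CDescriptor(object):
    """A minimal stand-in for the object property() builds."""
    def __get__(self, obj, objtype=None):
        return obj._c   # the "getter"
    def __set__(self, obj, value):
        obj._c = value  # the "setter"

class Test(object):
    c = CDescriptor()   # the descriptor instance lives on the class

    def __init__(self, c=0):
        self._c = c

t = Test(5)
print(t.c)  # 5 -- attribute access routed through CDescriptor.__get__
t.c = 9     #   -- assignment routed through CDescriptor.__set__
print(t.c)  # 9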
Summary
An amazing amount of stuff is accomplished by simply adding @property to your def c(self) method! In practice, you probably don't need to understand all of this right away to begin using it. However, I recommend keeping in mind that when you use @property, you are using decorators, closures, and descriptors, and if you are at all serious about learning Python it would be well worth your time to investigate each of these topics on their own.
The simplest solution is to make c a read-only property:
class Test(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    @property
    def c(self):
        return self.a + self.b
Now every time you access test_instance.c, it calls the property getter and calculates the appropriate value from the other attributes. In use:
>>> t = Test(2, 4)
>>> t.c
6
>>> t.a = 3
>>> t.c
7
Note that this means that you cannot set c directly:
>>> t.c = 6
Traceback (most recent call last):
File "<pyshell#16>", line 1, in <module>
t.c = 6
AttributeError: can't set attribute