Change only certain arguments in a class/method, hold others constant - python

I have a class & method, each with several arguments: my_class(a,b,c).my_method(d,e,f) and I'd like to be able to only change a single argument, while holding the others constant.
Constantly copy-pasting the other, constant arguments seems bad, so I'd like to create a new object wrapper_fct where I reference my_class but only provide the one argument I want to change, b, without always having to specify the remaining arguments. What would wrapper_fct() look like?
For example, wrapper_fct(my_class, b1) would return my_class(a,b1,c).my_method(d,e,f), and wrapper_fct(my_class, b2) would return my_class(a,b2,c).my_method(d,e,f).
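(For concreteness, one possible shape for such a wrapper, assuming the constants a, c, d, e, f are already defined in the enclosing scope, might be:)
def wrapper_fct(input_class, b):
    return input_class(a, b, c).my_method(d, e, f)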
Here's an example in practice:
Loop through just the variable b and evaluate several classes/methods for each new instance of b, and append the results in a list.
I can currently do this in a for loop:
mylist1 = []  # init lists (append results here)
mylist2 = []
mylist3 = []
for b in [1,2,3,4,5]:
    mylist1.append( my_class1(a,b,c).my_method(d,e,f) )
    mylist2.append( my_class2(a,b,c).my_method(d,e,f) )
    mylist3.append( my_class3(a,b,c).my_method(d,e,f) )
    ...
But it seems better to create a function loop_through_B() and use the wrapper_fct(my_class,b) as specified above. Not sure if it's the ideal solution, but maybe something like:
def loop_through_B(input_class, b_values=[1,2,3,4,5]):
    mylist = []
    for b in b_values:
        mylist.append( wrapper_fct(input_class, b) )
    return mylist
loop_through_B(my_class1) # would I also have to specify the method here as well?
loop_through_B(my_class2)
loop_through_B(my_class3)
Extra Question: how would I add the ability to vary method arguments, or even multiple class & method arguments?

After @chepner pointed me in the right direction, I think the best solution is to use a lambda function:
wrapper_fct = lambda b: my_class1(a,b,c).my_method(d,e,f)
In this case, I can vary b as much as I want while holding the class arguments a,c, and method arguments d,e,f constant. Note that with lambda functions, I can also vary the method arguments and/or the class arguments. For example:
wrapper_fct_multiple = lambda b, e: my_class1(a,b,c).my_method(d,e,f)
It is also possible to do this with functools.partial, but it's not obvious to me how I would specify both class & method arguments with functools.
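(For the record, a rough sketch of how functools.partial could cover both the class and the method arguments: partial freezes arguments of a single callable, so one option is to wrap the two steps in a small helper first. The helper name construct_and_call is made up here, and the sketch assumes my_class1 and the constants a, c, d, e, f are already in scope.)
from functools import partial

def construct_and_call(a, b, c, d, e, f):
    return my_class1(a, b, c).my_method(d, e, f)

# freeze everything except the class argument b:
wrapper_fct_partial = partial(construct_and_call, a=a, c=c, d=d, e=e, f=f)
wrapper_fct_partial(b=1)
wrapper_fct_partial(b=2)

# leaving the method argument e unfrozen as well lets you vary both:
wrapper_fct_multiple_partial = partial(construct_and_call, a=a, c=c, d=d, f=f)
wrapper_fct_multiple_partial(b=1, e=2)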
Anyway, here is the solution implementation using lambda:
# define the "wrapper function" outside the loop
wrapper_fct = lambda b: my_class1(a,b,c).my_method(d,e,f)
# define the function I want to use to loop through B:
def loop_through_B(class_wrapper, b_values):
    mylist = []
    for b in b_values:
        mylist.append( class_wrapper(b) )
    return mylist
# run:
loop_through_B(wrapper_fct, b_values=[1,2,3,4,5])
# Can make additional wrapper_fct2, wrapper_fct3, for my_class2, my_class3 ...

You can pass the method a dictionary of arguments, and change what the method sees by selectively updating it when calling the method.
Here's what I mean:
class MyClass:
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
    def my_method(self, kwargs):
        return sum(kwargs[key] for key in kwargs.keys())
    def __repr__(self):
        classname = type(self).__name__
        args = ', '.join(f'{v!r}' for v in (self.a, self.b, self.c))
        return f'{classname}({args})'

instance = MyClass('a','b','c')
print(instance)  # -> MyClass('a', 'b', 'c')

kwargs = dict(d=1, e=2, f=3)
print(instance.my_method(kwargs))  # -> 6
print(instance.my_method(dict(kwargs, e=38)))  # -> 42

Related

Calling class function saved as attribute

I have two classes and a "use" class function that performs multiple actions using the various attributes as inputs, so that I can just call the one function and get different results based on which class object is referenced. Where I am getting stuck is that I want part of the use function to check the 'effect' attribute and call the function found there. Currently, the function named in effect is called when the object c is defined, and everything I have tried within the use function has no effect or returns None, since I don't have a return statement in the add and sub functions.
I've provided a simplified example code below. C has 9 attributes and 10 different class functions that I would want to use in the effect spot. I plan on having 50+ different C objects, so not having to write out specific functions for each one would be spectacular.
In this example, the print(p.h) at the end returns 101, showing that defining the C object calls the add function I put in the attribute:
M = []

class P:
    def __init__(p, h, s):
        p.h = h
        p.s = s

class C:
    def __init__(y, name, d, f, effect):
        y.name = name
        y.effect = effect
        y.d = d
        y.f = f

    def use(c):
        M.append(c)
        p.h -= y.d
        p.s += y.f
        effect

    def add(c, x):
        p.h += x

    def sub(c, x):
        p.h -= x

p = P(100)
c = C('test1', add(1), 1)
print(p.h)
I have tried the add and sub functions as both class and standalone, which didn't seem to make a difference, calling y.effect as though it were a function which just returns 'none' as mentioned, and adding the property decorator, which threw an error, probably because I don't quite understand what that is supposed to do yet.
When accessing methods of classes C and P, you will only be able to access them from the class namespace. C.add will work, but add will not.
Furthermore if you call c.add(4) it will be the same thing as calling C.add(c, 4) because it will implicitly pass the instance c into the class (C) method C.add. You cannot call add(1) or c.add(1) before your instance c is initialized by the __init__ method.
Typically when writing python, it is most clear to always name the first argument to your class method self to make it clear that self refers to the instance and that it will automatically get passed into the function.
Furthermore, because you do not instantiate p until after you define your class C, you won't be able to access p from inside your C class unless you pass p into the function or save it as an attribute of c.
Not totally sure what you are going for here but I made some modifications which might help.
#!/usr/bin/env python
M = []

class P:
    def __init__(self, h, s):
        # self refers to your instance of p
        self.h = h
        self.s = s

class C:
    def __init__(self, name, p, d, f, effect):
        # self refers to your instance of c
        self.p = p  # save an instance of p as an attribute of your instance of c
        self.name = name
        self.effect = effect
        self.d = d
        self.f = f

    def use(self):
        M.append(self)
        self.p.h -= self.d
        self.p.s += self.f
        return self.effect

    def add(self, x):
        # self refers to your instance of c and gets implicitly passed in as the first argument
        # when c.add(2) is called
        self.p.h += x

    def sub(self, x):
        self.p.h -= x

p = P(100, 50)
c = C('test1', p, d=2, f=1, effect="a")
c.add(1)
print(p.h)
This code isn't runnable as-is, so there may be other problems with your real code that aren't possible to discern from this example, but when you say:
function named in effect is called when the object c is defined
that's happening not because of what's inside your C class, but the line where you construct your C:
c= C('test1', add(1), 1)
The expression add(1) isn't passing the add function, it's calling the add function and passing its result. To pass a function as an argument, just pass the function itself without the ():
c = C('test1', add, 1)
Note that in the code you provided there is no function called add in the current scope, and your C.use does not call effect, which is a different problem.
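Putting both fixes together, a rough sketch of storing a function in effect and actually calling it inside use (the class is trimmed down here, and the argument passed to the stored callable is purely illustrative):
class C:
    def __init__(self, name, d, f, effect):
        self.name = name
        self.d = d
        self.f = f
        self.effect = effect      # store the function itself, not its result

    def use(self):
        self.effect(self, 1)      # call the stored function when use() runs

    def add(self, x):
        print(f'{self.name} adds {x}')

    def sub(self, x):
        print(f'{self.name} subtracts {x}')

c = C('test1', 1, 1, C.add)       # pass C.add itself, no parentheses
c.use()                           # -> test1 adds 1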

Set default value to variable declared later on? [duplicate]

I'm trying to use
def my_function(a,b)
If I try to print the function like this,
print(my_function()), when the values start from none, I get
"missing 2 required positional arguments".
I want to use default values so that when I use print(my_function()),
a=10 and b=a.
So I tried
def my_function(a=10,b=a)
and I got a not defined.
I don't want to define a beforehand with global.
Is it possible? Or something like this:
def my_function(a, b):
    if a == None:
        a = 10
    if b == None:
        b = a
This didn't work either when I used print(my_function()).
You can set the default to None:
def my_function(a=10, b=None):
    if b is None:
        b = a
Here the default for a is 10, and b is set to a if left to the default value.
If you need to accept None as well, pick a different, unique default to act as a sentinel. An instance of object() is an oft-used convention:
_sentinel = object()

def my_function(a=10, b=_sentinel):
    if b is _sentinel:
        b = a
Now you can call my_function(11, None) and b will be set to None, or call it without specifying b (e.g. my_function() or my_function(42)), and b will be set to whatever a was set to.
Unless a parameter has a default (e.g. is a keyword parameter), it is required.
The function my_function(a, b) expects two positional arguments without default values, so it can't be called without passing them.
So the main question is: how can we define two arguments so that the second is set to the first if it is not passed?
There are two ways to do this:
kwargs unpacking
def my_function(a=10, **kwargs):
    b = kwargs.get('b', a)
sentinel as default value
_sentinel = object()

def my_function(a=10, b=_sentinel):
    if b is _sentinel:
        b = a
Hmm! I think you can't define my_function like this, but you can use a decorator to hide the default-value computation.
For example:
import functools

def my_decorator(f):
    @functools.wraps(f)
    def wrapper(a=10, b=None):
        if b is None:
            b = a
        return f(a, b)
    return wrapper
You can then define your function like this:
@my_decorator
def my_function(a, b):
    return (a, b)
You can use your function with zero, one or two parameters:
>>> print(my_function())
(10, 10)
>>> print(my_function(5))
(5, 5)
>>> print(my_function(5, 12))
(5, 12)

Closure after function definition

Is it possible to define a closure for a function which is already defined?
For example I'd like to have a "raw" function and a function which already has some predefined values set by a surrounding closure.
Here is some code showing what I can do with a closure to add predefined variables to a function definition:
def outer(a, b, c):
    def fun(d):
        print(a + b + c - d)
    return fun

foo = outer(4, 5, 6)
foo(10)
Now I want to have a definition of fun outside of a wrapping closure function, to be able to call fun either with variables from a closure or by passing variables directly. I know that I need to redefine a function to make it usable in a closure, thus I tried using lambda for it:
def fun(a, b, c, d):  # raw function
    print(a + b + c - d)

def clsr(func):  # make a "closure" decorator
    def wrap(*args):
        return lambda *args: func(*args)
    return wrap

foo = clsr(fun)(5, 6, 7)  # make a closure with values already defined
foo(10)  # raises TypeError: fun() missing 3 required positional arguments: 'a', 'b', and 'c'
fun(5, 6, 7, 10)  # prints 8
What I also tried is using wraps from functools, but I was not able to make it work.
But is this even possible? And if yes: Is there any module which already implements decorators for this?
You can just define the wrap on the fly:
def fun(a, b, c, d): # raw function
print(a + b + c - d)
def closed(d): fun(5,6,7,d)
closed(10)
You can use this with lambda, but @juanpa points out you should not if there is no reason to. The above code will result in 8. This method, by the way, is not Python specific; most languages would support it.
But if you need a closure in the sense that it relies on the wrapper's variables, then no, and there is good reason not to: this would essentially create a non-working function that relies on wrapping. In this case using a class may be better:
class fun:
    def __init__(self, *args):  # can use specific things, not just *args
        self.args = args        # or meaningful names
    def __call__(self, a, b, c, d):  # raw function
        print(a + b + c - d, self.args)

def closed(d):
    fun("some", 3, "more", ['args'])(5, 6, 7, d)

closed(10)
Or use *args/**kwargs directly and pass extra variables through that. Otherwise I am not familiar with an "inner function" construct that only works after wrapping.
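A rough sketch of that last suggestion, with made-up names and an illustrative extra keyword argument:
def fun(a, b, c, d, **extra):  # raw function that tolerates extra keyword data
    print(a + b + c - d, extra)

def closed(d, **extra):
    fun(5, 6, 7, d, **extra)  # forward whatever extra context the wrapper was given

closed(10, note="from wrapper")  # prints: 8 {'note': 'from wrapper'}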

Unpacking instance variables by making container iterable

I just want to be able to unpack the instance variables of class foo, for example:
x = foo("name", "999", "24", "0.222")
a, b, c, d = *x
a, b, c, d = [*x]
I am not sure as to which is the correct method for doing so when implementing my own __iter__ method, however, the latter is the one that has worked with mixed "success". I say mixed because doing so with the presented code appears to alter the original instance object x, such that it is no longer valid.
class foo:
    def __init__(self, a, b, c, d):
        self.a = a
        self.b = b
        self.c = c
        self.d = d
    def __iter__(self):
        return iter([a, b, c, d])
I have read the myriad posts on this site regarding __iter__, __next__, generators etc., and also a Python book and docs.python.org, and seem unable to figure out what I am not understanding. I've gathered that __iter__ needs to return an iterable (which can just be self, but I am not sure how that works for what I want). I've also tried various ways of playing around with implementing __next__ and iterating over vars(foo).items(), either by casting to a list or as a dictionary, with no success.
I don't believe this is a duplicate post on account that the only similar questions I've seen present a single list sequence object attribute or employ a range of numbers instead of a four non-container variables.
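(As an aside, the __iter__ above could also be written as a generator, which is one common way to satisfy the iteration protocol without returning self; this is only a sketch using the attribute names from the question:)
def __iter__(self):
    # a generator function returns an iterator when called, which is what the protocol needs
    yield self.a
    yield self.b
    yield self.c
    yield self.d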
If you want the instance's variables, you should access them through self:
def __iter__(self):
    return iter([self.a, self.b, self.c, self.d])
with this change,
a, b, c, d = list(x)
will get you the variables.
You could go the riskier route of using vars(x) or x.__dict__, sorting it by variable name (which is also why it is a limited approach: the variables are not stored in any guaranteed order), and extracting the second element of each tuple. But I would say the iterator is definitely better.
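For reference, a rough sketch of that vars()-based alternative, reusing the foo class from the question (it only works out here because the attribute names a, b, c, d happen to sort into the desired order):
x = foo("name", "999", "24", "0.222")
a, b, c, d = (value for _name, value in sorted(vars(x).items()))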
You can store the arguments in an attribute (self.e below) or return them on function call:
class foo:
    def __init__(self, *args):
        self.a, self.b, self.c, self.d = self.e = args
    def __call__(self):
        return self.e
x = foo("name", "999", "24", "0.222")
a, b, c, d = x.e
# or
a, b, c, d = x()

