Generating separate instances; passing list of instances as *args - python

I have created 5 instances of a class and need to call another class with arguments from the instances like so:
class Clas2:
    ...
    def __init__(self, arg1, arg2):
        self.time = 0
        ...
    def run_loop(self, arg, *args):
        ...
        while True:
            self.fn_with_args(*args)
            ...
            if self.time == 35:
                fn_print_exit()
                break
    def fn_with_args(self, *args):
        for i in args:
            if i != "none":
                fn_other()
clas1_ob1 = Clas1(arg1, arg2)
clas1_ob2 = Clas1(arg1, arg2)
clas1_ob3 = Clas1(arg1, arg2)
clas1_ob4 = Clas1(arg1, arg2)
clas1_ob5 = Clas1(arg1, arg2)
clas2_ob1 = Clas2(arg)
clas2_ob1.run_loop(clas1_ob1, "none")
clas2_ob1.run_loop(clas1_ob2, clas1_ob1)
clas2_ob1.run_loop(clas1_ob3, clas1_ob1, clas1_ob2)
clas2_ob1.run_loop(clas1_ob4, clas1_ob1, clas1_ob2, clas1_ob3)
clas2_ob1.run_loop(clas1_ob5, clas1_ob1, clas1_ob2, clas1_ob3, clas1_ob4)
It's obviously pretty ugly, but it does work at the moment. However, I would of course prefer not to have to write out each instance and then write out each call. I would prefer to run for loops in both cases.
I could, of course, use lists, but when I tried it, it did not work because I could not iterate through lists of objects in the function fn_with_args(). *args, however, can be iterated through. So my question is: how can I pass *args into the run_loop() call so I can simply call it once?
Or, if there is a way I can iterate over the list of objects, I suppose that is an option too; however, I would prefer not to do this because it requires many more lines of code and a fair bit of restructuring.
Any and all input is appreciated; let me know if I need to explain more.
Thanks
PS: I realize also that I could simply pass the elements of the object into a list, but this presents different problems for the program and seems counter to OOP structure.
EDIT--->
Example:
number = 5
clas2_ob1 = Clas2(arg)
for i in range(number):
    clas1_ob1 = Clas1(arg1, arg2)
    clas2_ob1.run_loop(clas1_ob1, "none")
# This is the structure I want but this will obviously just overwrite 5 times
# The arguments in this case also become incorrect

It's not hard to do what you want using a slice of a list of potential arguments:
class1_objs = [Clas1(arg1, arg2) for _ in range(5)]  # create Clas1 objects in a list
class2_obj = Clas2(arg)
for i, obj in enumerate(class1_objs):  # loop over objects and indexes with enumerate
    class2_obj.run_loop(obj, *(class1_objs[:i] or ["none"]))  # use index to slice the list
I'd suggest, however, that you might want to redesign your method signatures to simply accept two arguments: an object and a sequence. The code you've shown packs and unpacks the arguments over and over, which is somewhat wasteful if you don't need it. Just get rid of the *s in all of the function definitions and their calls and you'll have simpler, more efficient code.
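As a rough illustration of that redesign, here is a minimal sketch (Clas1 is reduced to a stub and the loop body is elided, since the original helpers aren't shown):
class Clas1:
    def __init__(self, arg1, arg2):  # stub standing in for the real class
        self.arg1, self.arg2 = arg1, arg2

class Clas2:
    def __init__(self):
        self.time = 0

    def run_loop(self, obj, others):
        # 'others' is a plain list (possibly empty); no * unpacking anywhere
        for other in others:
            pass  # do something with each previously created object

class1_objs = [Clas1(1, 2) for _ in range(5)]
class2_obj = Clas2()
for i, obj in enumerate(class1_objs):
    class2_obj.run_loop(obj, class1_objs[:i])  # pass the slice directly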

Related

Passing arguments to Python function as a data structure

I am using a 3rd party library function which has a large number of positional and named arguments. The function is called from numerous points in my code with identical arguments/values.
For ease of maintenance I don't want to hard-code the dozens of identical arguments multiple times throughout my code. I was hoping there was a way of storing them once in a data structure so I just need to pass the data structure. Along the lines of the following:
Assume the signature of the function I'm calling looks like:
def lib_function(arg1, arg2, arg3=None, arg4=None):
Assume that throughout my code I want to call it with values of
a for arg1,
b for arg2
d for arg4
(and I'm not using arg3).
I've tried defining a data structure as follows:
arguments = ('a', 'b', {'arg4':'d'})
and using it like this:
res = lib_function(arguments)
but this is obviously not correct - the tuple and dict are not unpacked during the call but are handled as a single first argument.
Is there a way of doing this in Python?
The obvious alternative is to proxy lib_function in my code, with the arguments hard-coded in the proxy which is then called with no arguments. Something like:
def proxy_lib_function():
    return lib_function('a', 'b', arg4='d')
res = proxy_lib_function()
However I wanted to check there isn't a more Pythonic way of doing this.
Separate positional and named arguments and use asterisk unpacking:
def lib_function(arg1, arg2, arg3=None, arg4=None):
    print(locals())
args = ("a", "b")
kwargs = {"arg4": "d"}
res = lib_function(*args, **kwargs)
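If you specifically want a single object to carry around (like the tuple-plus-dict in the question), one common pattern, sketched here, is to store the pair and unpack both halves at the call site, or to freeze everything into a zero-argument callable with functools.partial:
from functools import partial

# option 1: keep the arguments in one object and unpack both halves when calling
arguments = (("a", "b"), {"arg4": "d"})  # (positional args, keyword args)
res = lib_function(*arguments[0], **arguments[1])

# option 2: freeze the arguments into a single callable that takes no arguments
call_lib = partial(lib_function, "a", "b", arg4="d")
res = call_lib()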
Your function is:
def lib_function(arg1, arg2, arg3=None, arg4=None):
Define another function:
def yourFunction(x):
    return lib_function(x[0], x[1], x[2], x[3])
The "data structure" would be:
data = [yourData1, yourData2, yourData3, yourData4]
Then you can call your new function with this:
yourFunction(data)
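A concrete usage sketch (my own values, not from the original answer); note that a placeholder is needed for arg3, since the elements are passed positionally:
data = ['a', 'b', None, 'd']  # None stands in for the unused arg3
res = yourFunction(data)      # equivalent to lib_function('a', 'b', None, 'd')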

Can I implement a function or better a decorator that makes func(a1)(a2)(a3)...(an) == func(a1, a2, a3,...,an)? [duplicate]

On Codewars.com I encountered the following task:
Create a function add that adds numbers together when called in succession. So add(1) should return 1, add(1)(2) should return 1+2, ...
While I'm familiar with the basics of Python, I've never encountered a function that is able to be called in such succession, i.e. a function f(x) that can be called as f(x)(y)(z).... Thus far, I'm not even sure how to interpret this notation.
As a mathematician, I'd suspect that f(x)(y) is a function that assigns to every x a function g_{x} and then returns g_{x}(y) and likewise for f(x)(y)(z).
Should this interpretation be correct, Python would allow me to dynamically create functions, which seems very interesting to me. I've searched the web for the past hour but wasn't able to find a lead in the right direction. Since I don't know what this programming concept is called, however, this may not be too surprising.
How do you call this concept and where can I read more about it?
I don't know whether this is function chaining as much as it's callable chaining, but, since functions are callables, I guess there's no harm done. Either way, there are two ways I can think of doing this:
Sub-classing int and defining __call__:
The first way would be with a custom int subclass that defines __call__ which returns a new instance of itself with the updated value:
class CustomInt(int):
    def __call__(self, v):
        return CustomInt(self + v)
Function add can now be defined to return a CustomInt instance, which, as a callable that returns an updated value of itself, can be called in succession:
>>> def add(v):
... return CustomInt(v)
>>> add(1)
1
>>> add(1)(2)
3
>>> add(1)(2)(3)(44) # and so on..
50
In addition, as an int subclass, the returned value retains the __repr__ and __str__ behavior of ints. For more complex operations though, you should define other dunders appropriately.
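For instance, here is a sketch of what such an extra dunder might look like (my own example, not from the original answer): overriding __add__ so arithmetic on the result stays chainable instead of decaying to a plain int.
class CustomInt(int):
    def __call__(self, v):
        return CustomInt(self + v)

    def __add__(self, other):
        # keep the result a callable CustomInt rather than a plain int
        return CustomInt(int(self) + other)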
As #Caridorc noted in a comment, add could also be simply written as:
add = CustomInt
Renaming the class to add instead of CustomInt also works similarly.
Define a closure, requires extra call to yield value:
The only other way I can think of involves a nested function that requires an extra empty argument call in order to return the result. I'm not using nonlocal and opt for attaching attributes to the function objects to make it portable between Pythons:
def add(v):
    def _inner_adder(val=None):
        """
        if val is None we return _inner_adder.v
        else we increment and return ourselves
        """
        if val is None:
            return _inner_adder.v
        _inner_adder.v += val
        return _inner_adder
    _inner_adder.v = v  # save value
    return _inner_adder
This continuously returns itself (_inner_adder) which, if a val is supplied, increments it (_inner_adder.v += val) and, if not, returns the value as it is. Like I mentioned, it requires an extra () call in order to return the incremented value:
>>> add(1)(2)()
3
>>> add(1)(2)(3)() # and so on..
6
You can hate me, but here is a one-liner :)
add = lambda v: type("", (int,), {"__call__": lambda self, v: self.__class__(self + v)})(v)
Edit: OK, how does this work? The code is identical to the answer by #Jim, but everything happens on a single line.
type can be used to construct new types: type(name, bases, dict) -> a new type. For name we provide an empty string, as a name is not really needed in this case. For bases (a tuple) we provide (int,), which is identical to inheriting from int. dict holds the class attributes, where we attach the __call__ lambda.
self.__class__(self + v) is identical to return CustomInt(self + v)
The new type is constructed and returned within the outer lambda.
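A quick check (my own usage example) that the one-liner behaves like the class-based version:
>>> add(1)(2)(3)
6
>>> add(10)(-4)
6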
If you want to define a function that can be called multiple times, you need to return a callable object each time (for example a function); otherwise you have to create your own object that defines a __call__ attribute, in order for it to be callable.
The next point is that you need to preserve all the arguments, which in this case means you might want to use coroutines or a recursive function. But note that coroutines are much more optimized/flexible than recursive functions, especially for such tasks.
Here is a sample function using coroutines that preserves its latest state. Note that it can't be called in succession, since the return value is an integer, which is not callable, but you might think about turning this into your expected object ;-).
def add():
    current = yield
    while True:
        value = yield current
        current = value + current
it = add()
next(it)
print(it.send(10))
print(it.send(2))
print(it.send(4))
10
12
16
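One possible way to turn this into the "expected object" the answer hints at (my own sketch, not part of the original answer) is to wrap the generator in a small class whose __call__ sends the next value:
class Adder:
    """Wraps the add() coroutine so it can be called in succession."""
    def __init__(self, start):
        self._it = add()
        next(self._it)                      # prime the coroutine
        self._value = self._it.send(start)

    def __call__(self, v):
        self._value = self._it.send(v)
        return self

    def __repr__(self):
        return str(self._value)

print(Adder(1)(2)(3))  # 6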
Simply:
class add(int):
    def __call__(self, n):
        return add(self + n)
If you are willing to accept an additional () in order to retrieve the result you can use functools.partial:
from functools import partial
def add(*args, result=0):
    return partial(add, result=sum(args) + result) if args else result
For example:
>>> add(1)
functools.partial(<function add at 0x7ffbcf3ff430>, result=1)
>>> add(1)(2)
functools.partial(<function add at 0x7ffbcf3ff430>, result=3)
>>> add(1)(2)()
3
This also allows specifying multiple numbers at once:
>>> add(1, 2, 3)(4, 5)(6)()
21
If you want to restrict it to a single number you can do the following:
def add(x=None, *, result=0):
    return partial(add, result=x + result) if x is not None else result
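Usage then looks the same as before, minus the multi-number calls (my own example):
>>> add(1)(2)(3)()
6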
If you want add(x)(y)(z) to readily return the result and be further callable then sub-classing int is the way to go.
The pythonic way to do this would be to use dynamic arguments:
def add(*args):
    return sum(args)
This is not the answer you're looking for, and you may know this, but I thought I would give it anyway, because if someone is wondering about doing this not out of curiosity but for real work, they should probably have the "right thing to do" answer.

Passing objects around an event queue in Python

So I have a relatively convoluted setup for something I'm working on, explained as follows:
This is Python, and more of a rough outline, but it covers everything I need. The process_next function is the same as in my real code, so feel free to clean that up if you want.
# timer event that runs every .1 second and processes events in a queue
def some_event_timer():
    events.process_next()

class Event_queue:
    def __init__(self):
        self.events = []
    def push(self, event, parameters):
        self.events.append((event, parameters))
    def process_next(self):
        event = self.events.pop(0)
        event[0](event[1])

class Foo:
    def __init__(self, start_value=1):
        self.value = start_value
    def update_value(self, multiple):
        self.value *= multiple
    def return_bah(self):
        return self.value + 3

class Bar:
    def __init__(self, number1, number2):
        self.init = number1
        self.add = number2
    def print_alt_value(self, in_value):
        print in_value * (self.init + self.add)
That is a barebones of what I have, but it illustrates my problem:
Doing the below
events2 = Event_queue()
foo1 = Foo(4)  # ----> foo1.value = 4 here
bar1 = Bar(4, 2)
events2.push(foo1.update_value, 1.5)
events2.push(bar1.print_alt_value, foo1.value)
events2.push(bar1.print_alt_value, foo1.return_bah())
events2.process_next()  # ----> should process update_value to change foo1.value to 6
events2.process_next()  # ----> should process print_alt_value in the Bar class - expected 36
events2.process_next()  # ----> should process print_alt_value - expected 54
I initially expected my output to be 36, i.e. 6 * (4 + 2).
I know why it's not: foo1.value and foo1.return_bah() get passed as evaluated parameters (correct term?).
What I really want is to pass the reference to the variable or the reference to the method, rather than having it evaluate when I put it in my event queue.
Can anyone help me?
I tried searching, but I couldn't piece together what I wanted exactly.
To get what I have now, I initially looked at these threads:
Calling a function of a module from a string with the function's name in Python
Use a string to call function in Python
But I don't see how to support parameters from that properly or how to support passing another function or reference to a variable from those.
I suppose at least for the method call, I could perhaps pass the parameter as foo1.return_bah and evaluate it in the process_next method, but I was hoping for a general way that would accept both standard variables and method calls, as the event queue will take both.
Thank you for the help
Update edit:
So I followed the suggestion below and got really close, but:
Ok, so I followed your queue suggestion and got really close to what I want, but I don't completely understand the first part about multiple functions.
I want to be able to call a dictionary of objects with this as well.
for example:
names = ["test1", "test2"]
names_objs = {}
for name in names:
    names_objs[name] = Foo(4)
Then when attempting to push via lambda
for name in names_list:
    events2.push(lambda: names_objs[name].update_value(2))
doesn't work. When the event actually gets processed, it only runs on whatever names_objs[name] references at that moment, and if the name variable is no longer valid or has been modified outside the function, it is wrong.
This actually wasn't surprising, but adding a:
name_obj_hold = names_objs[name]
then pushing that didn't work either; it again only operates on whatever name_obj_hold last referenced.
Can someone clarify the multiple-functions thing? I'm afraid I'm having trouble wrapping my head around it.
Basically I need the initial method lookup evaluated, so that something like:
names_objs[name].some_func(#something in here#)
gets the proper method associated with the right class instance, but the #something in here# doesn't get evaluated (whether it is a variable or another function call) until it actually gets called from the event queue.
Instead of passing in the function to call func1 and the arguments that should be passed to the function, pass in a function func2 that calls func1 with the arguments that should be passed in.
d = {"a": 1}

def p(val):
    print val

def func1():
    p(d["a"])

def call_it(func):
    func()

call_it(func1)
d["a"] = 111
call_it(func1)
Within func1, d["a"] is not evaluated until func1 actually executes.
For your purposes, your queue would change to:
from collections import deque

class EventQueue(object):
    def __init__(self):
        self.events = deque()
    def push(self, callable):
        self.events.append(callable)
    def process_next(self):
        self.events.popleft()()
collections.deque will be faster at popping from the front of the queue than a list.
And to use the EventQueue, you can use lambdas for quick anonymous function.
events2 = EventQueue()
foo1 = Foo(4)
bar1 = Bar(4, 2)
events2.push(lambda: foo1.update_value(1.5))
events2.push(lambda: bar1.print_alt_value(foo1.value))
events2.push(lambda: bar1.print_alt_value(foo1.return_bah()))
events2.process_next()
events2.process_next() # 36.0
events2.process_next() # 54.0
For Edit:
In this case you need to "capture" the value in a variable that is more tightly scoped than the loop. You can use a normal function and partial() to achieve this.
from functools import partial

for name in names_list:
    def update(name):
        names_objs[name].update_value(2)
    events2.push(partial(update, name))
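A default-argument lambda achieves the same capture without the helper function; this is a common alternative, not part of the original answer:
for name in names_list:
    # name=name binds the current value of name at definition time
    events2.push(lambda name=name: names_objs[name].update_value(2))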

how to make my own mapping type in python

I have created a class MyClass that contains a lot of simulation data. The class groups simulation results for different simulations that have a similar structure. The results can be retrieved with a MyClass.get(foo) method. It returns a dictionary with simulationID/array pairs, array being the value of foo for each simulation.
Now I want to implement a method in my class to apply any function to all the arrays for foo. It should return a dictionary with simulationID/function(foo) pairs.
For a function that does not need additional arguments, I found the following solution very satisfying (comments always welcome :-) ):
def apply(self, function, variable):
    result = {}
    for k, v in self.get(variable).items():
        result[k] = function(v)
    return result
However, for a function requiring additional arguments, I don't see how to do it in an elegant way. A typical operation would be the integration of foo with bar as x-values, like np.trapz(foo, x=bar), where both foo and bar can be retrieved with MyClass.get(...).
I was thinking in this direction:
def apply(self, function_call):
    """
    function_call should be a string with the complete expression to evaluate
    eg: MyClass.apply('np.trapz(QHeat, time)')
    """
    result = {}
    for SID in self.simulations:
        result[SID] = eval(function_call, locals=...)
    return result
The problem is that I don't know how to pass the locals mapping object. Or maybe I'm looking in a wrong direction. Thanks on beforehand for your help.
Roel
You have two ways. The first is to use functools.partial:
import functools

foo = self.get('foo')
bar = self.get('bar')
callable = functools.partial(func, foo, x=bar)
self.apply(callable, variable)
The second approach is to use the same technique used by partial: you can define a function that accepts an arbitrary argument list:
def apply(self, function, variable, *args, **kwds):
    result = {}
    for k, v in self.get(variable).items():
        result[k] = function(v, *args, **kwds)
    return result
Note that in both cases the function signature remains unchanged. I don't know which one I'd choose, maybe the first, but I don't know the context you are working in.
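As a usage sketch of the second approach (my own example; my_obj stands for a hypothetical MyClass instance, and it assumes the extra x-values are the same for every simulation):
import numpy as np

time = np.linspace(0.0, 10.0, 101)  # shared x-values for all simulations
result = my_obj.apply(np.trapz, 'QHeat', x=time)  # {SID: np.trapz(QHeat_array, x=time), ...}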
I tried to recreate (the relevant part of) the class structure the way I am guessing it is set up on your side (it's always handy if you can provide a simplified code example for people to play/test).
What I think you are trying to do is translate variable names to variables that are obtained from within the class and then use those variables in a function that was passed in as well. In addition to that since each variable is actually a dictionary of values with a key (SID), you want the result to be a dictionary of results with the function applied to each of the arguments.
class test:
    def get(self, name):
        if name == "valA":
            return {"1": "valA1", "2": "valA2", "3": "valA3"}
        elif name == "valB":
            return {"1": "valB1", "2": "valB2", "3": "valB3"}
    def apply(self, function, **kwargs):
        arg_dict = {fun_arg: self.get(sim_args) for fun_arg, sim_args in kwargs.items()}
        result = {}
        for SID in arg_dict[kwargs.keys()[0]]:
            fun_kwargs = {fun_arg: sim_dict[SID] for fun_arg, sim_dict in arg_dict.items()}
            result[SID] = function(**fun_kwargs)
        return result

def joinstrings(string_a, string_b):
    return string_a + string_b

my_test = test()
result = my_test.apply(joinstrings, string_a="valA", string_b="valB")
print result
So the apply method gets an argument dictionary, gets the class specific data for each of the arguments and creates a new argument dictionary with those (arg_dict).
The SID keys are obtained from this arg_dict and for each of those, a function result is calculated and added to the result dictionary.
The result is:
{'1': 'valA1valB1', '3': 'valA3valB3', '2': 'valA2valB2'}
The code can be altered in many ways, but I thought this would be the most readable. It is of course possible to join the dictionaries instead of using the SID's from the first element etc.

Running bunch of python methods on a single piece of data

I would like to run a set of methods given some data. I was wondering how I can remove or choose different methods to be run. I would like to group them within a larger method so I can call it, and it will go along the lines of a test case.
In code: Now these are the methods that process the data. I may sometimes want to run all three or a subset thereof to collect information on this data set.
def one(self):
    pass

def two(self):
    pass

def three(self):
    pass
I would like to be able to call all of these methods with another call so I don't have to type out "run this; run this". I am looking for an elegant way to run a bunch of methods through one call so I can pick and choose which get run.
Desired result
def run_methods(self, variables):
    # runs all three or a subset of them
I hope I have been clear in my question. I am just looking for an elegant way to do this. Like in Java with reflection.
Please and thanks.
Send the methods you want to run as a parameter:
def runmethods(self, variables, methods):
    for method in methods:
        method(variables)
then call something like:
self.runmethods(variables, (method1, method2))
This is the nice thing about having functions as first-class objects in Python.
For the OP's question in the comment (different parameters for the functions), a dirty solution (sorry for that):
def rest(a, b):
    print a - b

def sum(a, b):
    print a + b

def run(adictio):
    for method, (a, b) in adictio.iteritems():
        method(a, b)

mydictio = {rest: (3, 2), sum: (4, 5)}
run(mydictio)
You could use other containers to send methods together with their variables, but it is nice to see a function as the key of a dictionary.
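For example (my own sketch, not part of the original answer), a list of (method, args) tuples works just as well and also preserves the order in which the methods run:
def run_list(pairs):
    for method, (a, b) in pairs:
        method(a, b)

run_list([(rest, (3, 2)), (sum, (4, 5))])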
If your methods/functions use different numbers of parameters, you cannot use
for method, (a,b) in adictio.iteritems():
because it expects the same number of parameters for all methods. In this case you can use *args:
def rest(*args):
    a, b = args
    print a - b

def sum(*args):
    a, b, c, d, e = args
    print a + b + c + d + e

def run(adictio):
    for method, params in adictio.iteritems():
        method(*params)

mydictio = {rest: (3, 2), sum: (4, 5, 6, 7, 8)}
run(mydictio)
If you normally run all the functions but sometimes have exceptions, then it would be useful to have them run by default but allow them to be optionally disabled, like this:
def doWalkDog():
    pass

def doFeedKid():
    pass

def doTakeOutTrash():
    pass

def doChores(walkDog=True, feedKid=True, takeOutTrash=True):
    if walkDog: doWalkDog()
    if feedKid: doFeedKid()
    if takeOutTrash: doTakeOutTrash()

# if the kid is at grandma's...
# we still walk the dog and take out the trash
doChores(feedKid=False)
To answer the question in the comment regarding passing arbitrary values:
def runmethods(self, methods):
    for method, args in methods.iteritems():
        method(*args[0], **args[1])

self.runmethods({methodA: ([arg1, arg2], {'kwarg1': 'one', 'kwarg2': 'two'}),
                 methodB: ([arg1], {'kwarg1': 'one'})})
But at this point, it's looking like more code than it's worth!
