I'm new to programming and I've been stuck on this issue and would really like some help!
One of the parameters in my function is optional, but can take on multiple default values based on another function. Both functions take in the same input (among others). When I try to assign a default using the function as illustrated below:
def func(foo):
    # returns different values of some variable k based on foo

def anotherFunc(foo, bar, k=func(foo)):
    # this is the same foo input as the previous function
I get the following error:
NameError: name 'foo' is not defined
The thing is, the user can call 'anotherFunc' with any value of 'k' they want, which complicates things. Is there any way to use a function call with arguments as a default parameter value in another function? Or is there any way for me to set multiple default values of 'k' based on the previous function while still allowing the user to choose their own 'k' if they want?
Thanks!
At the moment the function is defined, foo is only a placeholder for the first argument; default values are evaluated at definition time, so foo has no value yet. It only gets a value when the function is called, at which point it can be used inside the function body, like so:
def another_func(foo, bar, k=None):
    if k is None:
        k = func(foo)
    ...
You would probably want to do something like:
def func(foo):
    return foo

def anotherfunc(foo, bar, k=None):
    if k is None:
        k = func(foo)
    # process whatever
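As a quick check of both call styles, here is a minimal sketch in which func just doubles foo for illustration:
def func(foo):
    # hypothetical default computation
    return foo * 2

def anotherfunc(foo, bar, k=None):
    if k is None:
        k = func(foo)  # default derived from foo at call time
    return foo, bar, k

print(anotherfunc(3, 'x'))        # (3, 'x', 6)  -> k computed from foo
print(anotherfunc(3, 'x', k=10))  # (3, 'x', 10) -> caller-supplied k wins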
Related
So I use a bunch of files. Each file will trigger when, let's say, variable x = function. I know this is confusing, but pretty much I need to be able to use a variable whose value determines which function gets called. I am using Python for this.
Based on your question, it looks like you want some sort of factory where the function to call is determined by the value of the variable passed in.
Here's a simple way of doing it:
x = 2  # determines which function to call

# possible functions to call
def f0(p): print('called f0', p)
def f1(p): print('called f1', p)
def f2(p): print('called f2', p)
def f3(p): print('called f3', p)

lstFunc = [f0, f1, f2, f3]  # create list of functions
lstFunc[x]('test')  # x=2, call function at index 2 (f2)
Output
called f2 test
For something more complicated, you would use a function which returns another function based on the variable value. In this example, I'm just using a list of functions.
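For instance, a minimal sketch of such a factory, reusing the f0..f3 defined above (the dict and the name make_handler are just made up for illustration):
def make_handler(x):
    # map each possible value of x to the function it should trigger
    handlers = {0: f0, 1: f1, 2: f2, 3: f3}
    return handlers[x]

make_handler(2)('test')  # prints: called f2 test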
Currently I'm using a list of strings with the names of functions to define the flow of my software:
flow = [
    "func1",
    "func2",
    "func3",
    "func4",
    "func5"
]
Then I iterate over the flow and call each one, passing the options:
options = {}
[getattr(__import__(phase), phase)(options) for phase in flow]
I would like to know if it is possible to do the same, but avoiding side effects, using reduce. Currently this approach makes each function receive the options, but the functions don't have to return the options for the next one, so I end up mutating an options dict that is declared in another scope.
Thanks.
You can indeed use functools.reduce (which is sometimes called fold in other functional programming languages like Haskell) to call the functions.
In that case, however, you will need to define a function taking two parameters: the old accumulator value and the element itself. You simply ignore the old value and call the function on the element.
So for a generic function f(x) and a list lst, you can do this with (note that reduce takes the initial value as a third positional argument, not as a keyword):
functools.reduce(lambda _, x: f(x), lst, 0)
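For instance, with a trivial f, this just calls f on each element and ends up with the last return value (a small sketch):
import functools

def f(x):
    print('calling f with', x)
    return x

result = functools.reduce(lambda _, x: f(x), [1, 2, 3], 0)
print(result)  # 3 - the return value of the last call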
So in your case that would be:
options = {}
functools.reduce(lambda _, phase: getattr(__import__(phase), phase)(options), flow, 0)
EDIT:
After rereading your question, it appears to me that each of the functions takes options as input and generates the "new" options that should be passed to the next function. Well, the return value of the first function is the first parameter of the lambda for the next function. So you can fold it together like:
first_options = {}
functools.reduce(lambda options, phase: getattr(__import__(phase), phase)(options), flow, first_options)
This will result in something equivalent to:
options_0 = first_options
options_1 = getattr(__import__(flow[0]), flow[0])(options_0)
options_2 = getattr(__import__(flow[1]), flow[1])(options_1)
# ...
return options_n
but of course this happens inside the reduce.
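A self-contained way to see the chaining, using plain local functions instead of __import__ (the function bodies here are made up for illustration):
from functools import reduce

def func1(options):
    return {**options, 'step1': True}

def func2(options):
    return {**options, 'step2': True}

flow = [func1, func2]
result = reduce(lambda options, phase: phase(options), flow, {})
print(result)  # {'step1': True, 'step2': True}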
So reduce takes a function, say reduce_func, that takes 2 arguments. When it goes through a list, it uses the first two items as the parameters of reduce_func for the first call; on each subsequent call it uses the previous return value as the first parameter and the next item in the list as the second. This means that, for you, reduce_func needs to be the following:
def reduce_func(param, f):
    return f(param)
and your list needs to be the following:
[options, func1, func2, func3, func4]
Now, I used a list of functions and didn't use import. Instead of f, you could pass in '[module].[function]' as a string (call the parameter something like func_str), and do some splitting and importing inside reduce_func as setup.
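A rough sketch of that string-based variant, assuming each entry in the list looks like 'module.function' (importlib is used here instead of __import__, and the module/function names in the comment are hypothetical):
import importlib
from functools import reduce

def reduce_func(param, func_str):
    # func_str is assumed to look like 'module.function'
    module_name, func_name = func_str.rsplit('.', 1)
    func = getattr(importlib.import_module(module_name), func_name)
    return func(param)

# options = {}
# result = reduce(reduce_func, [options, 'mypackage.func1', 'mypackage.func2'])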
I am stuck in a situation where I need to pass some value (which will always be random/different) returned from one function to another function, and the sequence in which the functions will be called is undefined, since it is determined at run time based on user input.
For example,
def func1(some_value):
    # Use some_value for whatever purpose
    # Some code
    return some_random_value

def func2(some_value):
    # Use some_value for whatever purpose
    # Some code
    return some_random_value

def func3(some_value):
    # Use some_value for whatever purpose
    # Some code
    return some_random_value
So let's assume func2 is called first: some initial/default value is passed as the parameter some_value, and the function returns some_random_value. Now, I don't know which function will be called next, but whichever it is, the some_random_value returned by the previous function (in this case func2) should be passed as the parameter some_value to the next called function (let it be func1). And this process goes on and on.
What could be the recommended way to achieve this? Should this be done using a global variable whose value is amended each time a function runs to store the function's return value? If yes, then how?
More specifically
A CLI will allow the user to choose some action, and an appropriate function will be called according to this action. The last returned value from a function should stay in memory until the application ends. After a function performs its task, it will return a value. That value is required when any other function is called via a CLI action. Again, the next function will process some data using the last function's return value, and then return some processed value, which will later be used by the next function or CLI action.
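In other words, the intended flow looks roughly like this (a minimal sketch with made-up function bodies; the hard-coded action list stands in for the user's CLI choices):
def func1(some_value):
    return some_value + 1

def func2(some_value):
    return some_value * 2

def func3(some_value):
    return some_value - 3

dispatch = {'1': func1, '2': func2, '3': func3}

current = 0  # initial/default value
for action in ['2', '1', '3']:  # stands in for user CLI choices
    current = dispatch[action](current)  # last return value feeds the next call
print(current)  # -2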
I was thinking that instead of returning the value from any of those functions, I could create a global variable with a default value:
common_data = 'some string'
And then in every function definition, add:
global common_data
common_data = 'new processed string'
This would ensure that any subsequent function call gets the value last saved in common_data by the previous function.
But this seems to be a non-recommended solution, at least I think so.
Please allow me to edit or elaborate this question if I am unable to explain my situation properly.
Thank you
I'll deliver on this with recursion (note that it never stops, so it will eventually raise a RecursionError). ^^
from random import choice
from random import randint

def get_fs(f):
    return [x for x in (func1, func2, func3) if x != f]

def func1(some_value, fs):
    # Use some_value for whatever purpose
    # Some code
    f = choice(fs)
    print("func1", f.__name__)
    return f(randint(1, 10), get_fs(func1))

def func2(some_value, fs):
    # Use some_value for whatever purpose
    # Some code
    f = choice(fs)
    print("func2", f.__name__)
    return f(randint(1, 10), get_fs(func2))

def func3(some_value, fs):
    # Use some_value for whatever purpose
    # Some code
    f = choice(fs)
    print("func3", f.__name__)
    return f(randint(1, 10), get_fs(func3))

def main():
    functions = [func2, func3]
    func1(randint(1, 10), functions)

if __name__ == '__main__':
    main()
I'm wanting to replace keywords with values from an associated dictionary.
file1.py
import file2
file2.replacefunction('Some text','a_unique_key', string_variable1)
file2.replacefunction('Other text','another_unique_key', string_variable2)
file2.replacefunction('More text','unique_key_3', string_variable2)
string_variable1, used in the first function call, is a local variable in file1.py and is therefore accessible as a parameter in the function. It is intentionally a different variable than the one later used in that parameter position.
file2.py
import re

keywords = {
    "a_unique_key": "<b>Some text</b>",
    "another_unique_key": "<b>Other text</b>",
    "unique_key_3": "<b>More text</b>",
}

def replacefunction(str_to_replace, replacement_key, dynamic_source):
    string_variable2 = re.sub(str_to_replace, keywords[replacement_key], dynamic_source)
    return string_variable2  # <-- this variable needs to be accessible
The replacement values in the keywords dictionary are more complicated than shown above, and just demonstrated like this for brevity.
The problem occurs at the second call to replacefunction in file1.py - it cannot access string_variable2, which is the result of the first function call.
I have seen that the way to access a variable produced in a function outside of that function is to do something like:
def helloworld():
    a = 5
    return a

mynewvariable = helloworld()
print(mynewvariable)  # 5 is printed
But this approach won't work in this situation, because the function needs to work on a string that is updated after each function call, i.e.:
do this to string 2 # changes occur to string 2
do this to string 2 # changes occur to string 2
do this to string 2 # changes occur to string 2
I can achieve the required functionality without a function but was just trying to minimise code.
Is there any way to access a variable from outside a function, directly as a variable and not by assigning the function's return value?
Don't confuse variables with values. The name string_variable2 references a value, and you just return that from your function.
Where you call the function, you assign the returned value to a local variable, and use that reference to pass it into the next function call:
string_variable2 = file2.replacefunction('Some text','a_unique_key', string_variable1)
string_variable2 = file2.replacefunction('Other text','another_unique_key', string_variable2)
file2.replacefunction('More text','unique_key_3', string_variable2)
Here the replacefunction returns something, that is stored in string_variable2, and then passed to the second call. The return value of the second function call is again stored (using the same name here), and passed to the third call. And so on.
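Since the question mentions trying to minimise code, the same chaining could also be written as a loop (a sketch, assuming the same replacefunction, keys, and string_variable1 from file1.py above):
replacements = [
    ('Some text', 'a_unique_key'),
    ('Other text', 'another_unique_key'),
    ('More text', 'unique_key_3'),
]

result = string_variable1
for text, key in replacements:
    result = file2.replacefunction(text, key, result)  # each result feeds the next call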
I have the following Python code using the Twisted API.
def function(self, filename):
    def results(result):
        # do something
        ...

    for i in range(int(numbers)):
        name = ...  # something that has to do with the value of i
        df = function_which_returns_a_defer(name)
        df.addCallback(results)
It uses the Twisted API. What I want to achieve is to pass to the callback function (results) the value of name that is constructed in every iteration, along with the deferred's result of course, without changing the content of function_which_returns_a_defer(). For every result of function_which_returns_a_defer, the corresponding value of name should be passed to results() so it can do something with it. I.e.: at the first iteration, when execution reaches results, I need the function to hold the result of the deferred object along with the value of name when i=0; then when i=1 the deferred's result should arrive together with that iteration's value of name, and so on. So each time I need the deferred's result together with the name it was called with. When I try to use the value of name directly inside results(), it always holds the value of the last iteration, which makes sense, since function_which_returns_a_defer(name) has not returned by then.
You can pass extra arguments to a Deferred callback at the Deferred.addCallback call site by simply passing those arguments to Deferred.addCallback:
def function(self, filename):
    def results(result, name):
        # do something
        ...

    for i in range(int(numbers)):
        name = ...  # something that has to do with the value of i
        df = function_which_returns_a_defer(name)
        df.addCallback(results, name)
You can also pass arguments by keyword:
df.addCallback(results, name=name)
All arguments passed like this to addCallback (or addErrback) are passed on to the callback function.
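A tiny self-contained illustration of the same mechanism (using defer.succeed so it runs without starting a reactor; the names and values here are made up):
from twisted.internet import defer

def results(result, name):
    print('got', result, 'for', name)

for i in range(3):
    name = 'item-%d' % i
    d = defer.succeed(i * 10)  # stands in for function_which_returns_a_defer(name)
    d.addCallback(results, name)  # each callback receives its own name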