Scipy nquad integration with arguments - python

I tried to integrate in Python using nquad. The problem is that when I try to pass extra arguments to the function being integrated, nquad seems to pass those parameters to the bounds callables instead of to the function. I searched the internet and found that this was a bug in scipy.__version__ < 0.18.0 that was fixed later, but I have version 1.1.0 and the problem persists. What should I do? A simplified example is below:
>>> from scipy.integrate import nquad
>>> ranges0 = lambda x: [x, 2 * x]
>>> ranges = [ranges0, [1, 2]]
>>> func = lambda x0, x1, t0, t1: x0 + x1 + t0 + t1
>>> nquad(func, ranges, args=(1,2))
TypeError: <lambda>() takes exactly 1 argument (3 given)

I did some digging in the documentation for nquad and I have found this excerpt:
If an element of ranges is a callable, then it will be called with all of the integration arguments available, as well as any parametric arguments. e.g. if func = f(x0, x1, x2, t0, t1), then ranges[0] may be defined as either (a, b) or else as (a, b) = range0(x1, x2, t0, t1).
In other words, when defining a func with 4 parameters, each callable in the ranges list must take exactly 4 minus its (1-based) position parameters. Since your ranges0 is in the first place of the ranges list, it will be passed 4 - 1 = 3 parameters; if you put it in the next place in the list, it will be passed 4 - 2 = 2 parameters. The same holds for all further places, and the count never drops below 2, because every range callable is called with:
all of the integration arguments available
Here the problem is not related to scipy; it is a logical error on your part.
In summary, a function in the ranges list can never accept one and only one argument when there are two parametric arguments.
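A minimal sketch of the fix, assuming you want x0 to run from x1 to 2*x1: give ranges0 the full signature, even though it ignores the extra arguments.
from scipy.integrate import nquad

# the range for x0 is called as ranges0(x1, t0, t1), so it must accept
# all three parameters even though only x1 is used
ranges0 = lambda x1, t0, t1: [x1, 2 * x1]
ranges = [ranges0, [1, 2]]
func = lambda x0, x1, t0, t1: x0 + x1 + t0 + t1
print(nquad(func, ranges, args=(1, 2)))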

Related

Argument Unpacking while Defining Function in Python

I am trying to pass a list into the definition of a function in order to create new variables. The use case here is to run scipy's curve_fit to find optimal parameters of the function. I want this function to be able to take any number of variables dynamically, without specifically typing in all the variables I want it to optimize/solve for (b1, rate_1, etc.). Right now I have a list of the variables to include, but I can't seem to get the function to create them as new parameters in the function definition, which it looks like I need to do.
I'm familiar with using * in a function call as seen below, but it seems that is for when the function is already defined and you're calling it. I want to do something similar in the definition of the function, so the function itself recognizes b1, rate_1, etc. as parameters that it can solve for using curve_fit.
My starter code:
def get_optimal_adstock_multivariate(x_var_names):
    y = np.array(final_df['Count Of Solutions'])
    # make list of coefficient variables (b1, b2, etc.) and make new variables for each rate (rate_1, rate_2, etc.)
    coef_vars = []
    rates = []
    for i in range(0, len(x_var_names)):
        coef_vars.append("b" + str(i+1))
        rates.append("rate_" + str(i+1))
    coef_vars_rates = coef_vars + rates

    def f(final_df, b0, *coef_vars_rates):  # using * to pass b1, rate_1, b2, rate_2, etc. as parameters (unpacking the list)
        # need this function to recognize final_df, b0, b1, rate_1, etc. as variables
You cannot directly make the function recognize these variables, unfortunately.
You can use keyword arguments with a double-* and address them through the resulting dictionary object:
def f(a, b, **kwargs):
    # extra keyword arguments land in the kwargs dict, e.g. kwargs["b0"]
    ...
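For instance, a runnable sketch of that idea (the names here are made up for illustration):
def f(a, b, **kwargs):
    # every extra keyword argument lands in the kwargs dict
    return a + b + kwargs["b0"]

print(f(1, 2, b0=10))  # prints 13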
Posting this in hope that it might help you, though I am aware it's not a full solution to your problem.
I don't understand what you are trying to do with x_var_names.
We can define equivalent functions:
In [260]: a, b, c = 1, 2, 3
In [261]: def foo0(x, *args):
     ...:     print(x, args)
     ...:
In [262]: def foo1(x, a, b, c):
     ...:     print(x, a, b, c)
     ...:
and call them with a list of variables or with the actual variables:
In [263]: alist=[a,b,c]
In [264]: foo0(100,*alist)
100 (1, 2, 3)
In [265]: foo1(100,*alist)
100 1 2 3
In [266]: foo0(100,a,b,c)
100 (1, 2, 3)
In [267]: foo1(100,a,b,c)
100 1 2 3
Or if I refine the print in foo0, I get the same display from both:
In [268]: def foo2(x, *args):
     ...:     print(x, *args)
     ...:
In [269]: foo2(100,a,b,c)
100 1 2 3
Look at curve_fit. It will work with either the foo1 or foo2 signature:
f(xdata, *params)
The number of params is determined by p0, though the docs also mention determining them by introspection; by that I think it can deduce that foo1 expects 3 values.
Don't confuse variable names in the global environment with the tuple of values passed via curve_fit to your function. Local names can also be different. For example,
def foo3(x, *args):
    a, b, c = args
    print(x, a, b, c)
uses local unpacking, where the a,b,c are just convenient ways of referencing the values in the function.
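For example, here is a minimal sketch of fitting a model with a dynamic number of parameters; the model and data are made up, and p0 is what tells curve_fit how many values to pass into *args:
import numpy as np
from scipy.optimize import curve_fit

def model(x, b0, *coef_rates):
    # first half of coef_rates are coefficients, second half are rates
    half = len(coef_rates) // 2
    coefs, rates = coef_rates[:half], coef_rates[half:]
    return b0 + sum(b * np.exp(-r * x) for b, r in zip(coefs, rates))

xdata = np.linspace(0, 5, 50)
ydata = 1 + 2 * np.exp(-0.5 * xdata)

# p0 has three entries, so model is called as model(x, b0, coef, rate)
popt, pcov = curve_fit(model, xdata, ydata, p0=[1.0, 1.0, 1.0])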

Python Scipy accesses local variables in another function?

In Scipy, one-dimensional integration of a function with multiple parameters is achieved by defining the function with extra parameters and supplying their constant values through the args argument of quad.
This example is from the Scipy Reference Guide:
>>> from scipy.integrate import quad
>>> def integrand(x, a, b):
...     return a * x + b
>>> a = 2
>>> b = 1
>>> I = quad(integrand, 0, 1, args=(a,b))
>>> I
(2.0, 2.220446049250313e-14)
https://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html
I always thought that local variables defined within functions are inaccessible outside the definition. Here, that seems not to be true, since quad only requires integrand as the function argument, and it automatically knows that the variables used are (x, a, b) (and hence that (a, b) are taken as parameters in the integration).
What is happening here? What am I missing?
quad doesn't know anything about what variables integrand uses. It doesn't know that the arguments are called a and b. It only sees the values of the global variables a and b, and it passes those values positionally to the integrand. In other words, the code would work the same if you did
x = 2
y = 1
I = quad(integrand, 0, 1, args=(x,y))
quad does not even know how many arguments integrand accepts, other than that it accepts at least one. quad passes the value of the variable of integration as the first argument. You, the user, have to know that, and pass the right number. If you don't (for instance if you didn't pass args, or just passed one arg), you'll get an error. quad just blindly passes along whatever arguments you give it.
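Conceptually, quad behaves like this sketch (not scipy's actual implementation): it evaluates f(point, *args) at each quadrature point, knowing nothing about the parameter names.
def naive_quad(f, a, b, args=(), n=1000):
    # crude midpoint rule: f is called blindly as f(x, *args)
    h = (b - a) / float(n)
    return sum(f(a + (i + 0.5) * h, *args) for i in range(n)) * h

def integrand(x, a, b):
    return a * x + b

print(naive_quad(integrand, 0, 1, args=(2, 1)))  # close to 2.0, same as quad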

SymPy lambdify with dot()

Take an undefined function that happens to be named dot, and make it part of lambdify:
import numpy
import sympy
class dot(sympy.Function):
    pass
x = sympy.Symbol('x')
a = sympy.Matrix([1, 0, 0])
f = sympy.lambdify(x, dot(a.T, x))
x = numpy.array([3, 2, 1])
print(f(x))
Surprise: This actually works!
Apparently, the string "dot" is somehow extracted and replaced by an implementation of the dot-product. Does anyone know which?
The result of the above is [3]. I would, however, like to get the scalar 3. (How) can I modify f() to achieve that?
I'm not a sympy user; however, quoting the documentation for lambdify:
If not specified differently by the user, SymPy functions are replaced
as far as possible by either python-math, numpy (if available) or
mpmath functions - exactly in this order. To change this behavior, the
“modules” argument can be used. It accepts:
the strings “math”, “mpmath”, “numpy”, “numexpr”, “sympy”
any modules (e.g. math)
dictionaries that map names of sympy functions to arbitrary functions
lists that contain a mix of the arguments above, with higher priority given to entries appearing first.
So it seems it will use Python's math functions where possible, falling back to numpy's versions if numpy is installed, and otherwise to mpmath; the documentation then describes how to modify this behaviour.
In your case, just provide a modules value containing a dictionary that maps the name dot to a function that returns a scalar, as you want.
An example of what I mean:
>>> import numpy as np
>>> import sympy
>>> class dot(sympy.Function): pass
...
>>> x = sympy.Symbol('x')
>>> a = sympy.Matrix([1,0,0])
>>> f = sympy.lambdify(x, dot(a.T, x), modules=[{'dot': lambda x, y: np.dot(x, y)[0]}, 'numpy'])
>>> y = np.array([3,2,1])
>>> print(f(y))
3
>>> print(type(f(y)))
<class 'numpy.int64'>
As you can see, by manipulating the modules argument you can achieve what you want. My implementation here is absolutely naive, but you can generalize it:
>>> def my_dot(x, y):
...     res = np.dot(x, y)
...     if res.ndim == 1 and res.size == 1:
...         return res[0]
...     return res
This function checks whether the result of the normal dot is effectively a scalar, and if so returns the plain scalar; otherwise it returns the same result as np.dot.
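Plugging my_dot into the modules argument then yields the plain scalar (continuing the session above):
>>> f = sympy.lambdify(x, dot(a.T, x), modules=[{'dot': my_dot}, 'numpy'])
>>> f(np.array([3, 2, 1]))
3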

How to pass tuple to a Matlab function from Python

I have a Matlab function that I'm calling from a python script:
import matlab.engine
eng = matlab.engine.start_matlab()
t = (1,2,3)
z = eng.tstFnc(t)
print z
The function tstFnc is as follows:
function [ z ] = tstFnc( a, b, c )
z = a + b + c
This does not work, however, as Matlab receives one input instead of three. Could this be made to work?
Note: this is a simplified case of what I want to do. In the actual problem I have a variable number of lists that I pass into a Matlab function, which is interpreted in the Matlab function using varargin.
As noted in the comments, the arguments need to be unpacked instead of passed as a single tuple.
z = eng.tstFnc(*t)
This causes a call to tstFnc with len(t) arguments instead of a single tuple argument. Similarly you could just pass each argument individually.
z = eng.tstFnc(1, 2, 3)
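For the varargin case mentioned in the question, the same unpacking applies with a variable number of lists; a sketch, assuming each list is wrapped as a MATLAB array first:
import matlab.engine
eng = matlab.engine.start_matlab()

# hypothetical data; matlab.double turns each Python list into a MATLAB array
lists = [matlab.double([1, 2, 3]), matlab.double([4, 5, 6])]
z = eng.tstFnc(*lists)  # each array arrives as a separate varargin entry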

Functions as arguments to functions

I saw this example in a Python book, which showcases how to use a function as an argument to another function:
def diff2(f, x, h=1E-6):
    r = (f(x-h) - 2*f(x) + f(x+h))/float(h*h)
    return r

def g(t):
    return t**(-6)
t = 1.2
d2g = diff2(g, t)
print d2g
My question is, how does this script work without providing an argument to function g? The line in question is:
d2g = diff2(g,t)
Shouldn't it be done like:
d2g = diff2(g(t), t)
g is passed as an argument to diff2. In diff2, that argument is called f, so inside diff2 the name f refers to the function g. When diff2 calls f(x-h) (and the other calls it does), it is calling g, and providing the argument.
In other words, when you do diff2(g, t), you are telling diff2 that g is the function to call. The arguments to g are provided in diff2:
f(x-h) # calls g with x-h as the argument
f(x) # calls g with x as the argument
f(x+h) # calls g with x+h as the argument
If you called diff2(g(t), t), you would be passing the result of g(1.2) as the argument. g would be called before calling diff2, and diff2 would then fail when it tries to call f, because f would be a number (the value g(1.2)) instead of a function.
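To see that failure concretely (using the definitions above):
d2g = diff2(g(t), t)
# TypeError: 'float' object is not callable, because f is now the number g(1.2)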
The functions in question are somewhat arbitrary and perhaps hard to follow. Let's consider a simpler example: a function sum which takes two numbers a and b and returns their sum. We can just as easily define another function prod, which returns their product.
def sum(a, b):
    return a + b

def prod(a, b):
    return a * b
Let's say we have another function compute, which takes as its arguments the operation (a function), and two operands (two numbers to call the function on). In compute, we call the given operation on the arguments.
def compute(fn, a, b):
    return fn(a, b)
We can compute different things. We can compute the sum of two numbers:
compute(sum, 1, 3)
We can compute the product of two numbers:
compute(prod, 1, 3)
Basically, without parentheses after the function name, we're not actually calling the function; it's just another object in the namespace (which happens to be a function we can call). We don't call the function until inside compute, when we do fn(a, b).
Let's see what the console outputs look like:
>>> compute(sum,1,3)
4
>>> compute(prod,1,3)
3
>>> sum
<function sum at mem-address>
>>> prod
<function prod at mem-address>
>>> sum(1,2)
3
