How to perform this kind of integral in python using scipy:
$$\int_{0}^{1} f(x) \, dx \int_{0}^{x} g(y) \, dy \int_{0}^{x} h(z) \,dz $$
I tried using tplquad, but I can't figure out how to code the fact that the two inner integrals are independent of each other, yet both have x as their upper limit.
SciPy's tplquad expects a function of at least three arguments in the order (z, y, x). Nothing easier than that:
def fgh(z, y, x):
    return f(x)*g(y)*h(z)
To specify the bounds, lambda functions are quite useful. Example code for the quadrature then reads:
tplquad(fgh, a=0, b=1,
        gfun=lambda x: 0, hfun=lambda x: x,
        qfun=lambda x, y: 0, rfun=lambda x, y: x)
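For completeness, here is a minimal runnable sketch; the integrands f, g, and h below are hypothetical placeholders, chosen only so the example executes:

import numpy as np
from scipy.integrate import tplquad

# Hypothetical integrands -- substitute your own f, g, h
def f(x): return np.cos(x)
def g(y): return y
def h(z): return np.exp(z)

def fgh(z, y, x):
    return f(x)*g(y)*h(z)

result, abserr = tplquad(fgh, a=0, b=1,
                         gfun=lambda x: 0, hfun=lambda x: x,
                         qfun=lambda x, y: 0, rfun=lambda x, y: x)
print(result, abserr)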
Let's say I have f(x) := x^2 - 10, which intersects twice with g(x) = 0. This is implemented as follows:
from scipy.optimize import fsolve
import pylab
import numpy

def function_a(x):  # f(x)
    return x**2 - 10

def function_b(x):  # g(x)
    return 0

result = fsolve(lambda x: function_a(x) - function_b(x), 0)

x = numpy.linspace(-10, 10, 100)
pylab.plot(x, [function_a(y) for y in x],
           x, [function_b(y) for y in x],
           result, function_a(result), 'ro')
pylab.show()
scipy.optimize.fsolve returns the second intersection. However, I would like to get the first intersection every time. How could I achieve this?
By first intersection, I mean the intersection with the lower x value.
(Posting my comment as the answer as requested.)
Apparently, this function, and others in scipy.optimize, only find a single root, not necessarily all roots. If you set your guess to -1, it gives you the left root.
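For example, reusing the code from the question:

result = fsolve(lambda x: function_a(x) - function_b(x), -1)
# result is now approximately [-3.162], the intersection with the lower x value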
Also, perhaps consider sympy if you have well-defined, known equations, as I believe it can be used to find all the roots.
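A minimal sketch with SymPy's solve, using the f(x) from the question:

from sympy import symbols, solve

x = symbols('x')
solve(x**2 - 10, x)   # -> [-sqrt(10), sqrt(10)], both roots at once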
I'm currently working through some exercises on multivariable calculus and thought I would have a go at writing my own function to determine the gradient and Hessian of any function at a defined point. I'm having issues when attempting to substitute coordinate values into the resulting matrices for an arbitrary function. I've already managed to solve specific examples, but my attempt at a function that handles a user-defined function isn't working correctly.
from sympy import simplify, derive_by_array

def multivariable_function(function, variables, substitute=(0, 0)):
    """Determines gradient and Hessian matrices for a multivariable function.

    Args:
        function: the multivariable (SymPy) expression
        variables: list of SymPy symbols
        substitute: coordinate to evaluate at, default (0, 0)

    Returns:
        gradient/Hessian matrices for the given coordinate

    To do:
        Include sympy symbol() generation within function
    """
    # derive_by_array returns the gradient of a multivariable function
    Gradient = simplify(derive_by_array(function, variables))
    # applying derive_by_array twice returns the Hessian matrix
    Hessian = simplify(derive_by_array(derive_by_array(function, variables), variables))
    # This line currently isn't doing anything
    Gradient.subs(zip(variables, substitute))
    return Gradient, Hessian
This is the basic function in operation so far:
multivariable_function((x**2)*(y**3) + exp(2*x + x*y - 1) - (x**3 + 3*y**2)**2, (x, y))
which yields the unsubstituted result. I am, however, aiming to substitute the desired coordinate values into the gradient and Hessian matrices. I managed to achieve the desired result using the following:
from sympy import *
# Variables used must be defined as SymPy symbols.
x, y, z, K, T, r, σ, h, a, f, μ, c, t, m, x1, x2, x3 = symbols('x, y, z, K, T, r, σ, h, a, f, μ, c, t, m, x1, x2, x3')
init_printing(use_unicode=False)  # pretty-print the answers with plain ASCII characters

function = (x**2)*(y**3) + exp(2*x + x*y - 1) - (x**3 + 3*y**2)**2
Gradient_1 = simplify(derive_by_array(function, (x, y)))
Hessian_1 = simplify(derive_by_array(derive_by_array(function, (x, y)), (x, y)))
Gradient_1.subs(x, 0).subs(y, 0), Hessian_1.subs(x, 0).subs(y, 0)
After viewing the issue raised here, it seems zipping the two lists should enable the subs() function to work, but it currently doesn't for me. I attempted to loop through 'variables' and 'substitute' to apply .subs() sequentially, but I'm finding the function only works if the method is chained for all replacement variables, as in the example above.
Does anyone know how I can apply the .subs() n times for a given coordinate to yield the relevant gradient/hessian matrices?
The variable Gradient is of type
sympy.tensor.array.dense_ndim_array.ImmutableDenseNDimArray
Like almost all SymPy objects, with the exception of mutable matrices, it is immutable. The method subs does not modify it in place; it returns a new object, which needs to be assigned:
Gradient = Gradient.subs(zip(variables, substitute))
Hessian = Hessian.subs(zip(variables, substitute))
Then the function works as expected, returning
([2*exp(-1), 0], [[4*exp(-1), exp(-1)], [exp(-1), 0]])
But I suggest not passing generators to subs; there are outstanding issues involving that. Convert to a list or a dict first, to be safe. (There is also a difference there: should substitutions be consecutive or simultaneous, although this does not matter when substituting numbers for symbols.)
subs_dict = dict(zip(variables, substitute))
Gradient = Gradient.subs(subs_dict)
Hessian = Hessian.subs(subs_dict)
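With those assignments in place, the call from the question evaluates the matrices at (0, 0) as desired:

from sympy import symbols, exp

x, y = symbols('x y')
expr = (x**2)*(y**3) + exp(2*x + x*y - 1) - (x**3 + 3*y**2)**2
multivariable_function(expr, (x, y))
# -> ([2*exp(-1), 0], [[4*exp(-1), exp(-1)], [exp(-1), 0]])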
I'm trying to construct a function that returns the derivative of f, a function of one variable.
The return value should be a function approximating the derivative f'
using the symmetric difference quotient, so that the returned function will compute (f(x+h) - f(x-h)) / (2h).
The function should start like this:
def derivative(f, x):
which should approximate the derivative of function f around the point x.
Does anyone have a clue what type of code I can use to construct this type of function?
/Alex
For a general function f(x), you can straightforwardly obtain a numerical approximation to its first derivative by the standard (second-order) approximation (f(x+h) - f(x-h)) / (2h). The main challenge is to choose h small compared to the lengthscale over which f(x) shows non-quadratic variation, but sufficiently large to avoid round-off errors when subtracting nearby values of f(x).
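To illustrate that tradeoff, here is a small experiment using cos, whose exact derivative is known:

import math

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

# Exact derivative of cos at 1.0 is -sin(1.0), about -0.8414709848
for h in (1e-1, 1e-5, 1e-13):
    print(h, central_diff(math.cos, 1.0, h))
# Too large an h gives truncation error; too small an h gives round-off error.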
However, if you want an algebraic method of differentiating your function, then things are more challenging. The easy cases are where f(x) is known to be a polynomial, so can be represented by a vector of coefficients of powers of x. In that case, numpy.polyder() can be used to compute the coefficients of the n'th derivative.
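For instance (coefficients are listed highest power first):

import numpy as np

coeffs = [1, 0, 0]    # represents f(x) = x**2
np.polyder(coeffs)    # -> array([2, 0]), i.e. f'(x) = 2*x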
For more complicated functions, you may want to look at SymPy.
Both the numpy.polyder() and SymPy options require you to represent your function in a way that is specialized to these particular tools. I'm not aware of any method that can take an ordinary Python function and construct another function that implements the exact derivative.
What do you want the function to return?
If you want the value of the derivative at a certain x, you probably need three arguments:
def derivative(f, h, x):
    return (f(x + h) - f(x - h)) / (2 * h)
If you want to get a function which calculates the above for any x, you can use:
def derivative(f, h):
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)
Your best bet would probably be to use SymPy, which can do symbolic integration and differentiation, among other things:
>>> from sympy import *
>>> x, y, z = symbols('x y z')
>>> diff(x**2, x)
2*x
First, you can define a function f (for example, f(x) = x^2):
def f(x): return x ** 2
Next, using the definition of the derivative:
def derivative(function, x, accuracy=20):  # 'accuracy' is an optional argument; it defaults to 20
    step = 1 / accuracy
    return (function(x + step) - function(x - step)) / (step * 2)
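Usage, with the f defined above:

print(derivative(f, 3))   # -> 6.0, matching the exact derivative f'(3) = 2*3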
~~~~~~~~~~~~~~~~~~~~~
By the way, I believe this is a typo:
def derivative(f, h):
Since you are approximating the derivative of function f around the point x, it should be:
def derivative(f, x):
As shown in my code above.
I want to differentiate the following equation:
from sympy import *
init_printing()
x, t, r, phi = symbols('x, t, r, phi')
# this is how I want to do it
eq = Eq(x(t), r*phi(t))
eq.diff(t)
The result is differentiated only on the left side. I would like it to be evaluated on both sides. Is that possible in a simple way?
Currently I do the following:
Eq(eq.lhs.diff(t), eq.rhs.diff(t))
Borrowing some of the logic from Sympy: working with equalities manually, you can do something like this:
eq.func(*map(lambda x: diff(x, t), eq.args))
A bit ugly, but it works. Alternatively, you could lift the .do() method from that answer and use it if you're going to do this many times.
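Put together as a runnable sketch (x and phi declared as undefined functions for clarity):

from sympy import symbols, Function, Eq, diff

t, r = symbols('t r')
x, phi = symbols('x phi', cls=Function)

eq = Eq(x(t), r*phi(t))

# Rebuild the equality from the derivative of each of its arguments
eq.func(*map(lambda e: diff(e, t), eq.args))
# -> Eq(Derivative(x(t), t), r*Derivative(phi(t), t))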
I'm trying to generate a generic polynomial fit using SciPy's curve_fit method. My current simplified code looks like the following:
import functools
import scipy.optimize

def __fit_polynom_order_6(self, data):
    def func(x, c1=None, c2=None, c3=None, c4=None, c5=None, c6=None):
        return c1*x + c2*x**2 + c3*x**3 + c4*x**4 + c5*x**5 + c6*x**6

    x, y = data[:, 0], data[:, 1]
    popt, pcov = scipy.optimize.curve_fit(func, x, y)

    func_fit = functools.partial(func, c1=popt[0], c2=popt[1], c3=popt[2],
                                 c4=popt[3], c5=popt[4], c6=popt[5])
    return func_fit
Now I also want to do fits with polynomials of order n, and thus generate a generic function __fit_polynom_order_n(self, n, data) that builds the polynomial automatically and does essentially the same thing as my function above, but with an arbitrary polynomial.
My attempts at doing this have all come to nothing. Can you help? Thanks in advance!
There is already a function for that, np.polyfit:
fit = np.polyfit(x, y, n)
On the other hand, your func does not have a constant term. Is that on purpose?
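A short sketch of how this might look (the data here is hypothetical):

import numpy as np

x = np.linspace(0, 1, 50)
y = 2*x**3 - x + 0.1*np.random.randn(50)   # hypothetical noisy data

coeffs = np.polyfit(x, y, 3)      # coefficients, highest power first
y_fit = np.polyval(coeffs, x)     # evaluate the fitted polynomial at x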
If you wish to write your own polyfit-type method, you might want to study the source code for np.polyfit. You'll see that the problem is set up as a linear matrix equation and solved with np.linalg.lstsq, rather than the more general-purpose scipy.optimize.curve_fit.
# set up least squares equation for powers of x
lhs = vander(x, order)
rhs = y
c, resids, rank, s = lstsq(lhs, rhs, rcond)
Useful reference:
np.vander -- aha, this can be used to evaluate the polynomial at x. If you want to eliminate the constant term, you'd have to chop off the right-most column returned by np.vander.
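Putting those pieces together, here is a minimal sketch of the generic order-n fit the question asks for, written as a standalone function (with no constant term, as in the original func):

import numpy as np

def fit_polynom_order_n(n, data):
    """Fit c1*x + c2*x**2 + ... + cn*x**n (no constant term) and
    return a callable that evaluates the fitted polynomial."""
    x, y = data[:, 0], data[:, 1]
    # np.vander(x, n + 1) has columns x**n, ..., x**1, x**0;
    # chop off the right-most column to eliminate the constant term
    lhs = np.vander(x, n + 1)[:, :-1]
    coeffs, *_ = np.linalg.lstsq(lhs, y, rcond=None)
    # np.polyval expects highest power first including a constant, so append 0
    return lambda x_new: np.polyval(np.append(coeffs, 0.0), x_new)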