The only example/docs I can find are on the Scipy docs page.
To test it, I'm looking at the time-independent Schrödinger equation in a 1D infinite potential well. This has a neat analytic solution, found by solving the DE and inserting the boundary conditions ψ(0) = 0, ψ(L) = 0, and the requirement that the wavefunction normalize to 1. But this question applies to solving any DE where the BCs we know aren't initial values.
You can solve it numerically with SciPy's solve_ivp by starting with ψ(0) = 0 and cheating: placing ψ'(0) appropriately using the analytic solution. You can then use the shooting method to find an appropriate E value, e.g. one that satisfies the normalization condition above.
These are two different sets of BCs: ψ(0) = 0 and normalization are shared by both; the analytic approach adds a second value of ψ (at L), while the IVP approach adds an initial value of ψ'. SciPy's solve_bvp seems to offer a way to use the first set of BCs numerically (without the cheat of inserting ψ'), but I can't get it working. This pseudocode describes the problem and is how I expect the API to behave:
bcs = {0: (0, None), L: (0, None)} # Two BCs on ψ; no BCs on derivative
x_span = (0, L)
sol = solve_bvp(rhs, bcs, x_span)
In reality, the code looks something like this, and I can't get it to work:
def bc(ψ_a, ψ_b):
    return np.array([ψ_a[0], ψ_b[0]])

x_span = (0, L)
x_eval = np.linspace(x_span[0], x_span[1], int(1e5))
x_guess = np.array([0, L])
ψ_guess = np.array([[0, 1], [0, -1]])

res = solve_bvp(rhs_1d, bc, x_guess, ψ_guess)
I have no idea how to build the bc function, don't know why the guesses are set up the way they are, and am unsure how I can guess a value for ψ without also inserting a guess for ψ' (the docs imply you can). Also of note, the docs show an example implying you can use solve_bvp for a normalization BC as well, but I'm not sure how to approach that (the example is too sparse).
The equivalent, working IVP code, for reference (compare to my solve_bvp pseudocode):
Python code:
ψ_0 = (0, np.sqrt(2/L) * n*np.pi/L)
x_span = (0, L)
sol = solve_ivp(rhs_1d, x_span, ψ_0)
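To show what I mean by shooting, here is a minimal self-contained sketch in reduced units (ħ = m = 1, so ψ'' = -2Eψ). The rhs_1d here takes E as an extra argument, which my real code does not, and the bracket around the analytic eigenvalue is just for illustration:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

L = 1.0

def rhs_1d(x, y, E):
    ψ, dψ = y
    return [dψ, -2*E*ψ]

def ψ_at_L(E):
    # integrate from ψ(0) = 0 with an arbitrary nonzero slope; the slope
    # only scales the solution, so it drops out of the root condition
    sol = solve_ivp(rhs_1d, (0, L), [0.0, 1.0], args=(E,), rtol=1e-8)
    return sol.y[0, -1]

n = 1
E_analytic = (n*np.pi/L)**2 / 2          # known answer, used only to bracket
E_n = brentq(ψ_at_L, 0.5*E_analytic, 1.5*E_analytic)  # root of ψ(L; E) = 0
# normalization is then imposed afterwards by rescaling ψ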
For the eigenvalue problem
-u''+V(x)u = c*u
with boundary conditions
u(0)=0=u(L)
and normalization
int(u(x)^2, x=0 to L)=1
set up the integral as a third component of the state. With the eigenvalue as a parameter, this gives four degrees of freedom in total, allowing for four boundary conditions; the additional two are that the integral is zero at 0 and has the value 1 at L.
import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt

# some length
L = 10

# some potential function
def V(x):
    return 1 + (2*x - L)**2

# the ODE function: y = [u, u', S], where S(x) is the integral of u^2 up to x
def odesys(x, y, p):
    u, v, S = y
    c = p[0]
    return [v, (V(x) - c)*u, u**2]

# the boundary conditions
def boundary(y0, yL, p):
    return [y0[0], yL[0], y0[2], yL[2] - 1]
With the initial guess you select, more or less, which eigenfunction/eigenvalue you will get.
n = 11
w = (np.pi*n)/L
x_init = np.linspace(0, L, 4*n + 1)
u_init = np.sin(w*x_init)
v_init = np.cos(w*x_init)*w
y_init = [u_init, v_init, x_init/L]
There is no need to put too many points into the guess; just enough that the structure of the first component is faithfully represented.
Then call the solver with the prepared data. Take note that the default tolerance is 1e-3; if you want better, you have to allow for a finer subdivision. If everything runs fine, plot the solution.
res = solve_bvp(odesys, boundary, x_init, y_init, p=[w**2], max_nodes=10000, tol=1e-6)
print(res.message)
if res.success:
    x_disp = np.linspace(0, L, 3001)
    y_disp = res.sol(x_disp)
    plt.plot(x_disp, y_disp[0])
    plt.title(r"eigenfunction to eigenvalue $\lambda=%.6f$" % res.p[0])
    plt.grid()
    plt.show()
I would like to solve this kind of system of equations:
a*85**b+c=100
a*90**b+c=66
a*92**b+c=33
I tried this
import scipy.optimize

def fun(variables):
    (a, b, c) = variables
    eq0 = a*85**b + c - 100
    eq1 = a*90**b + c - 66
    eq2 = a*92**b + c - 33
    return [eq0, eq1, eq2]

result = scipy.optimize.fsolve(fun, (1, -1, 0))
print(result)
But I get ValueError: Integers to negative integer powers are not allowed.
Then I tried the equivalent
import numpy as np

def fun(variables):
    (a, b, c) = variables
    eq0 = np.log(a) + b*np.log(85) - np.log(100 - c)
    eq1 = np.log(a) + b*np.log(90) - np.log(66 - c)
    eq2 = np.log(a) + b*np.log(92) - np.log(33 - c)
    return [eq0, eq1, eq2]

result = scipy.optimize.fsolve(fun, (1, -1, 0))
print(result)
I get a solution, but it is equal to the initial values (1, -1, 0).
Thus, when I test fun(result), I get values different from zero.
I have noticed that the same problem occurs for this example:
import scipy.optimize

def fun(variables):
    (x, y) = variables
    eqn_1 = x**2 + y - 4
    eqn_2 = x + y**2 + 3
    return [eqn_1, eqn_2]

result = scipy.optimize.fsolve(fun, (0.1, 1))
print(result)
fun(result)
Would anyone know how I could do this? Thank you.
PS: I posted here about sympy last week:
Resolution of multiple equations (with exponential)
When the initial condition is not well known, it is sometimes best to try other methods first. For small problems, simplex minimization is useful:
import numpy as np
from scipy.optimize import minimize

def func(x):
    a, b, c = x
    eq0 = np.log(a) + b*np.log(85) - np.log(100 - c)
    eq1 = np.log(a) + b*np.log(90) - np.log(66 - c)
    eq2 = np.log(a) + b*np.log(92) - np.log(33 - c)
    return eq0**2 + eq1**2 + eq2**2

def func_vec(x):
    a, b, c = x
    eq0 = np.log(a) + b*np.log(85) - np.log(100 - c)
    eq1 = np.log(a) + b*np.log(90) - np.log(66 - c)
    eq2 = np.log(a) + b*np.log(92) - np.log(33 - c)
    return eq0, eq1, eq2

out = minimize(func, [1, 1, 0], method="Nelder-Mead", options={"maxfev": 100000})
print("roots:", out.x)
print("value at roots:", func_vec(out.x))
# roots: [ 7.87002460e+11 -1.07401055e-09 -7.87002456e+11]
# value at roots: (6.0964566728216596e-12, -1.2086331935279304e-11, 6.235012506294879e-12)
Note, I also tried [1,1,1] as an initial condition and found it converged to the wrong solution. Further increasing maxfev from 1e5 to 1e7 allowed [1,1,1] to converge to the proper solution, but perhaps there are better methods to solve this; one possibility is sketched below.
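For what it's worth, here is a sketch of one such method, working on the three original equations rather than the log-transformed ones. Subtracting the equations pairwise eliminates c, and taking the ratio of the two differences eliminates a, leaving a single equation in b that brentq can solve from a sign-changing bracket:

import numpy as np
from scipy.optimize import brentq

def h(b):
    # 33*(85^b - 90^b) = 34*(90^b - 92^b) at the solution;
    # h(1) = -97 < 0 and h(30) > 0, so the bracket (1, 30) works
    return 33*(85.0**b - 90.0**b) - 34*(90.0**b - 92.0**b)

b = brentq(h, 1, 30)
a = 34 / (85.0**b - 90.0**b)   # back-substitute for a, then c
c = 100 - a * 85.0**b
print(a, b, c)                 # check: a*92**b + c comes out at ~33

Note that a comes out negative (and c > 100), which is why the log-transformed system above cannot represent this root: log(a) and log(100-c) are undefined there.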
I have the following function, which I need to minimize using the least squares method (I am using lmfit):
y = a * exp(-x/b) + c
I have for example following data:
profitlist = [-10000, 100.00, 1000.00, 100000.00, 1000000.00]
utilitylist = [0, 0.2, 0.4, 0.6, 1]
The app returns the following error:
ValueError: NaN values detected in your input data or the output of your objective/model function - fitting algorithms cannot handle this! Please read https://lmfit.github.io/lmfit-py/faq.html#i-get-errors-from-nan-in-my-fit-what-can-i-do for more information.
The problem seems to be that exp(-x/b) returns inf or -inf if profitList contains a bigger negative number (-1000 worked, -100000 did not), so it probably overflows.
The values in profitList can be very large floats, and they are not always the same. So how can I optimize with these huge numbers? It seems that lmfit does not support decimal numbers, which would fix the issue... What can I do to make it work?
import numpy as np
from lmfit import Parameters, minimize, fit_report

class LeastSquares:
    def __init__(self, profitList, utilityList):
        self.profitList = np.asarray(profitList)
        self.utilityList = np.asarray(utilityList)

    def function(self, params, x):
        a = params["a"]
        b = params["b"]
        c = params["c"]
        return a * np.exp(-x/b) + c

    def residual(self, params, x, y):
        return (y - self.function(params, x))**2

    def setParameters(self, a_start, b_start, c_start):
        parameters = Parameters()
        parameters.add(name="a", value=a_start, min=None, max=0, vary=True)
        parameters.add(name="b", value=b_start, vary=True, min=0.1, max=None)
        parameters.add(name="c", value=c_start, vary=True)
        return parameters

    def startOptimalization(self):
        parameters = self.setParameters(-1, 1, 1)
        result = minimize(self.residual, parameters,
                          args=(self.profitList, self.utilityList), method="leastsq")
        result.params.pretty_print()
        print(fit_report(result))
        print("SSE")
        print(np.sum(result.residual))
As you see, numpy.exp(arg) gives Infinity for any argument greater than ~709, and you will need to avoid such extreme values. The underlying solvers simply cannot solve them. Since your argument for arg is -x/b, you need to make sure that b is not so small as to blow up the argument to numpy.exp().
In fact, your code shows that you do set a lower bound on b of 0.1.
But with values of profitList extending to 1e6, that lower bound is too small to prevent Infinity; your lower limit on b would have to be around 1,400.
If your values for profitList are changing for each optimization run, you may need to do something like this (in your startOptimalization):
parameters = self.setParameters(-1, 1, 1)
parameters['b'].min = max(abs(self.profitList))/700.0
result = minimize(self.residual, parameters, args=(self.profitList, self.utilityList), method="leastsq")
result.params.pretty_print()
Also, when fitting exponential changes, it is often helpful to compute your exponential model function and then take the residual as the difference between the logarithm of your data and the logarithm of your model, effectively doing the fit in log space, which is how you would likely plot the data.
And finally, don't take the square or the sum of squares of the difference yourself; just return the residual array with its sign intact. That is, you will probably be better off using something like:
def residual(self, params, x, y):
    return np.log(y) - np.log(self.function(params, x))
I have the following problem:
I'm trying to create a matrix which will map a point a_i -> b_i, whilst ensuring that all other points from the space A are mapped to points inside the space B.
The difficulty is that I'm using a linprog function to check if a point is within a space, and this returns a Boolean, so I'm not sure how to use this as a constraint in optimisation.
Here are my relevant functions, cleaned up:
import numpy as np
from scipy.optimize import linprog, minimize, NonlinearConstraint

def x_to_matrix(x):
    n = round(np.sqrt(len(x)))
    return np.array(x).reshape((n, n))

def matrix_to_vector(M):
    return M.flatten()

def check_point_within_polytope(point, polytope_points):
    """Uses linprog to check if a point can be decomposed as a convex sum
    of other points. The `c` vector is null, and `A` gives the basis points
    together with the requirement that `x` sums to 1."""
    number_of_points = len(polytope_points)
    c = np.zeros(number_of_points)
    A = np.r_[np.array(polytope_points).T, np.ones((1, number_of_points))]
    b = np.r_[point, np.ones(1)]
    lp = linprog(c, A_eq=A, b_eq=b)
    return lp.success

def make_T(local_points, target_points):
    """Challenge here is as follows. We want to create a matrix which maps
    a particular local point to a particular target point. At the same time,
    we want to make sure that the matrix maps all other points from the
    local space into the target space.

    The difficulty is in enacting these constraints. The way we check that
    a point is within a space is using linprog, which returns a Boolean
    rather than a numerical result."""
    target_space = target_points + local_points
    target_point = target_points[0]
    local_point = local_points[0]

    def function_for_T(x):
        M = x_to_matrix(x)
        new_point = np.dot(M, local_point)
        return np.linalg.norm(target_point - new_point)

    def check_local_point_happy(x, local_point):
        M = x_to_matrix(x)
        new_point = np.dot(M, local_point)
        return check_point_within_polytope(new_point, target_space)

    nonlinear_constraints = []
    for point in local_points:
        # bind the loop variable now, not at call time
        fun_here = lambda x, p=point: check_local_point_happy(x, p)
        # this is where I'm stuck: fun_here returns a Boolean, so I don't
        # know what to pass NonlinearConstraint as bounds
        nonlinear_constraints.append(NonlinearConstraint(fun_here, 1, 1))

    X0 = matrix_to_vector(np.eye(len(target_point)))
    sol = minimize(function_for_T, method='SLSQP', x0=X0,
                   constraints=nonlinear_constraints)
    return sol
Is there some way to use my check_point_within_polytope function as a nonlinear constraint? Or alternatively, is there some much better way of doing this? It feels like there must be, since the constraints are, ultimately, linear (see the sketch below for what I mean).
Any help much appreciated!
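Edit: to make "ultimately linear" concrete, here is a hedged sketch of the reformulation I have in mind (names are illustrative, and it assumes the target space is described by its vertices). Membership of M·v in conv{t_1, ..., t_p} means there exist weights q >= 0 with sum(q) = 1 and Tᵀq = M·v; since M enters these conditions linearly, the whole search for M becomes a single linear feasibility problem in (vec(M), q):

import numpy as np
from scipy.optimize import linprog

def find_map(local_pts, target_pts):
    """Sketch: find M with M @ local_pts[0] == target_pts[0] and every
    M @ v inside the convex hull of target_pts, via one big linprog."""
    V = np.asarray(local_pts, float)    # (m, n) local points
    T = np.asarray(target_pts, float)   # (p, n) target polytope vertices
    m, n = V.shape
    p = T.shape[0]
    nM = n * n                          # variables: vec(M), then m blocks of p weights
    nvar = nM + m * p
    A_rows, b_rows = [], []
    for i, v in enumerate(V):
        for r in range(n):              # M @ v equals the convex combination
            row = np.zeros(nvar)
            row[r*n:(r+1)*n] = v        # coefficients of M[r, :] in (M @ v)[r]
            row[nM + i*p: nM + (i+1)*p] = -T[:, r]
            A_rows.append(row); b_rows.append(0.0)
        row = np.zeros(nvar)            # weights sum to 1
        row[nM + i*p: nM + (i+1)*p] = 1.0
        A_rows.append(row); b_rows.append(1.0)
    for r in range(n):                  # pin local_pts[0] exactly onto target_pts[0]
        row = np.zeros(nvar)
        row[r*n:(r+1)*n] = V[0]
        A_rows.append(row); b_rows.append(T[0, r])
    bounds = [(None, None)]*nM + [(0, None)]*(m*p)  # M free, weights nonnegative
    lp = linprog(np.zeros(nvar), A_eq=np.array(A_rows), b_eq=np.array(b_rows),
                 bounds=bounds)
    return lp.x[:nM].reshape(n, n) if lp.success else None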
In the following code I am trying to implement the following:
write a function naturalSpline that implements cubic spline interpolation with natural boundary conditions
Use a tridiagonal solver to solve the arising tridiagonal system for the first derivatives.
The prototype of the function should read yy=naturalSpline(x,y,xx) where (x,y) are the input points and data, and xx are the points where the data should be interpolated.
I figured I would first start with the second bullet point, creating the tridiagonal solver; this is just the Thomas algorithm. I spent some time on this part of the code and have formatted it below. But now I am trying to finish the first and third bullet points, and I am not sure how to use what I have done already to finish them. Looking for some help with this! Thanks in advance.
import numpy as np

def TDMA(a, b, c, d):
    """Thomas algorithm: solve a tridiagonal system with subdiagonal a,
    diagonal b, superdiagonal c and right-hand side d."""
    n = len(d)
    w = np.zeros(n - 1, float)
    g = np.zeros(n, float)
    p = np.zeros(n, float)

    w[0] = c[0]/b[0]
    g[0] = d[0]/b[0]

    for i in range(1, n - 1):
        w[i] = c[i]/(b[i] - a[i-1]*w[i-1])
    for i in range(1, n):
        g[i] = (d[i] - a[i-1]*g[i-1])/(b[i] - a[i-1]*w[i-1])

    p[n-1] = g[n-1]
    for i in range(n - 1, 0, -1):
        p[i-1] = g[i-1] - w[i-1]*p[i]
    return p

# full matrix, for checking the banded arguments below
A = np.array([[10, 2, 0, 0], [3, 10, 4, 0], [0, 1, 7, 5], [0, 0, 3, 4]], dtype=float)
a = np.array([3., 1, 3])
b = np.array([10., 10., 7., 4.])
c = np.array([2., 4., 5.])
d = np.array([3, 4, 5, 6.])
print(TDMA(a, b, c, d))
This gives the correct output; I even tested it against np.linalg.solve(A, d) to make sure it was correct:
[ 0.14877589 0.75612053 -1.00188324 2.25141243]
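(For reference, that check can be written as follows, using the full matrix A defined above:)

print(np.allclose(TDMA(a, b, c, d), np.linalg.solve(A, d)))  # True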
For each interval [x_k, x_(k+1)], you can solve the four equations
p_k(x_k) = f(x_k) = y_k
p_k'(x_k) = f'(x_k) = d_k
p_k(x_(k+1)) = f(x_(k+1)) = y_(k+1)
p_k'(x_(k+1)) = f'(x_(k+1)) = d_(k+1)
(without checking your code, I assume that this is what you did).
From this, you can construct a dict
{'polynomials': [ [a_0, ..., d_0], ..., [a_24, ..., d_24] ],
'knots': [x_0, ..., x_24]}
For each x of your 250 points, you check for which k the point x lies in the interval [x_k, x_(k+1)] and evaluate p_k(x).
All of this is straightforward mathematics and Python coding. If something is not clear, you are better off learning more about both fields, instead of getting specialized advice on this website.
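That said, here is a minimal sketch of the whole construction, reusing the TDMA solver and the numpy import from the question. Instead of storing a dict of polynomial coefficients, it evaluates the Hermite cubic of each interval directly; the tridiagonal system is the standard one for the first derivatives d_k, with the natural conditions p''(x_0) = p''(x_{n-1}) = 0 as the first and last rows:

def naturalSpline(x, y, xx):
    x, y, xx = map(np.asarray, (x, y, xx))
    n = len(x)
    h = np.diff(x)                      # interval widths
    # tridiagonal system for the first derivatives d_0 .. d_{n-1}
    a = np.zeros(n - 1)
    b = np.zeros(n)
    c = np.zeros(n - 1)
    rhs = np.zeros(n)
    b[0], c[0], rhs[0] = 2.0, 1.0, 3.0*(y[1] - y[0])/h[0]        # p''(x_0) = 0
    a[-1], b[-1], rhs[-1] = 1.0, 2.0, 3.0*(y[-1] - y[-2])/h[-1]  # p''(x_{n-1}) = 0
    for k in range(1, n - 1):           # continuity of p'' at the interior knots
        a[k-1] = 1.0/h[k-1]
        b[k] = 2.0*(1.0/h[k-1] + 1.0/h[k])
        c[k] = 1.0/h[k]
        rhs[k] = 3.0*((y[k] - y[k-1])/h[k-1]**2 + (y[k+1] - y[k])/h[k]**2)
    d = TDMA(a, b, c, rhs)
    # locate the interval of each query point, then evaluate its Hermite cubic
    k = np.clip(np.searchsorted(x, xx) - 1, 0, n - 2)
    t = (xx - x[k])/h[k]
    return ((2*t**3 - 3*t**2 + 1)*y[k] + (t**3 - 2*t**2 + t)*h[k]*d[k]
            + (-2*t**3 + 3*t**2)*y[k+1] + (t**3 - t**2)*h[k]*d[k+1])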
I am trying to use scipy.integrate.quad to integrate a function over a very large range (0..10,000). The function is zero over most of its range but has a spike in a very small range (e.g. 1,602..1,618).
When integrating, I would expect the output to be positive, but I guess that quad's adaptive sampling never sees the spike and so returns zero. What I would like to know is: is there a way to overcome this (e.g. by using a different algorithm, some other parameter, etc.)? I don't usually know where the spike is going to be, so I can't just split the integration range and sum the parts (unless somebody has a good idea on how to do that).
Thanks!
Sample output:
>>>scipy.integrate.quad(weighted_ftag_2, 0, 10000)
(0.0, 0.0)
>>>scipy.integrate.quad(weighted_ftag_2, 0, 1602)
(0.0, 0.0)
>>>scipy.integrate.quad(weighted_ftag_2, 1602, 1618)
(3.2710994652983256, 3.6297354011338712e-014)
>>>scipy.integrate.quad(weighted_ftag_2, 1618, 10000)
(0.0, 0.0)
You might want to try other integration methods, such as the integrate.romberg() method.
Alternatively, you can get the location of the point where your function is large with weighted_ftag_2(x_samples).argmax(), and then use some heuristics to cut the integration interval around the maximum of your function (which is located at x_samples[weighted_ftag_2(x_samples).argmax()]). You must tailor the list of sampled abscissas (x_samples) to your problem: it must always contain points in the region where your function is maximum.
More generally, any specific information about the function to be integrated can help you get a good value for its integral. I would combine a method that works well for your function (one of the many methods offered by Scipy) with a reasonable splitting of the integration interval (for instance along the lines suggested above).
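A hedged sketch of that splitting heuristic (the sampling density and the spike half-width are assumptions you would tune to your problem): sample the integrand to locate the spike, then pass that location to quad through its points argument, which tells the adaptive algorithm where the local difficulty sits:

import numpy as np
from scipy.integrate import quad

def spike_aware_quad(f, a=0.0, b=10000.0, n_samples=100001, width=50.0):
    x_samples = np.linspace(a, b, n_samples)
    x_peak = x_samples[np.vectorize(f)(x_samples).argmax()]   # where f is largest
    breaks = [max(a, x_peak - width), x_peak, min(b, x_peak + width)]
    # `points` marks known difficult spots inside (a, b) for the subdivision
    return quad(f, a, b, points=breaks, limit=200)

value, err = spike_aware_quad(weighted_ftag_2)   # weighted_ftag_2 from the question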
How about evaluating your function f() over each integer range [x, x+1),
and adding up e.g. romb(), as EOL suggests, where it's > 0:
import numpy as np
from scipy.integrate import romb

def romb_non0(f, a=0, b=10000, nromb=2**6 + 1, verbose=1):
    """ sum romb() over the [x, x+1) where f != 0 """
    sum_romb = 0
    for x in range(a, b):
        y = f(np.arange(x, x + 1, 1./nromb))
        if y.any():
            r = romb(y, 1./nromb)
            sum_romb += r
            if verbose:
                print("info romb_non0: %d %.3g" % (x, r))  # , y
    return sum_romb

#...........................................................................
if __name__ == "__main__":
    np.set_printoptions(2, threshold=100, suppress=True)  # .2f

    def f(x):
        return x if (10 <= x).all() and (x <= 12).all() \
            else np.zeros_like(x)

    romb_non0(f, verbose=1)