scipy.integrate.solve_bvp for Neumann boundary conditions? - python

The question is what it says on the tin. I'm trying to numerically solve a boundary value problem, and my friend is asking whether this solver works for our conditions. The page for the solver only gives the general form bc(y(a), y(b), p) = 0, but our problem has y(0) = some constant value and y'(b) = 0, i.e. a Neumann condition at b. Would you need to rewrite the function as a first-order reduction, like in the shooting method?

I suppose that you are solving some second-order ODE
y''(t) = F(t, y(t), y'(t))
For the first-order system, use the state vector u = [y, v] = [y, y']. Then the minimal code to call the BVP solver is:
y0 = someconstantvalue
def ode(t, u):
    y, v = u
    return [v, F(t, y, v)]
def bc(u0, ub):
    return [u0[0] - y0, ub[1]]  # y(0) = y0,  y'(b) = 0
t_init = [0, b]
u_init = [[y0, y0], [0, 0]]
res = solve_bvp(ode, bc, t_init, u_init)
Always check res.success, or at least print res.message. You might need a better initial guess; for non-linear problems, different initial guesses can give different solutions. res.sol contains the solution as an interpolating function; for generating plots it is better to use this function than the sparse collection of internal nodes in res.x and res.y.
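For a fully runnable illustration, here is a minimal sketch using a made-up right-hand side y'' = -y on [0, b] with y(0) = 1 and y'(b) = 0; the constant, the interval and the ODE are placeholders you would swap for your own problem.

import numpy as np
from scipy.integrate import solve_bvp

# Hypothetical example problem: y'' = -y, y(0) = 1, y'(b) = 0
b = 2.0
y0 = 1.0

def ode(t, u):
    y, v = u                          # u[0] = y, u[1] = y'
    return np.vstack([v, -y])

def bc(u0, ub):
    return np.array([u0[0] - y0,      # Dirichlet condition at t = 0
                     ub[1]])          # Neumann condition y'(b) = 0

t_init = np.linspace(0, b, 5)
u_init = np.zeros((2, t_init.size))
u_init[0] = y0                        # crude constant initial guess

res = solve_bvp(ode, bc, t_init, u_init)
print(res.message)

t_plot = np.linspace(0, b, 200)
print(res.sol(t_plot)[0][:5])         # interpolated y values for plotting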

Related

Assistance in solving a linear system of equations with least_squares

I'm hoping to just get some assistance conceptually about how to solve a linear system of equations with penalty functions. Example code is at the bottom.
Let's say I'm trying to fit this equation:
Ie=IaX+IbY+IcZ
where Ie, Ia, Ib, and Ic are constants, and X,Y,Z are variables
I could easily solve this system of equations using scipy.least_squares, but I want to constrain the system with two constraints.
1. X+Y+Z=1
2. X,Y,Z > 0 and X,Y,Z < 1
To do this, I then modified the above function.
Using X+Y+Z=1, I solved for Z and substituted it, giving
Ie-Ic = X(Ia-Ic) + Y(Ib-Ic)
which has the form Ax=B with A = [Ia-Ic, Ib-Ic] and B = [Ie-Ic], solved with bounds (0, 1).
This solves the second criterion (X,Y,Z > 0 and X,Y,Z < 1), but it does not solve the first.
To resolve that, an additional constraint is needed, X+Y < 1, and this I don't quite know how to impose.
So I presume least_squares has some built-in penalty function for its bounds, i.e.
chi^2 = ||Ax - B||^2 + P
where P is the penalty: if X, Y or Z falls outside its bounds, then P = 10000,
giving a high chi-squared value and thereby enforcing the constraints.
I don't know how to add a further condition of this kind, e.g. if X+Y > 1 then P = 10000, or something similar along those lines.
In short, least_squares enables you to set bounds on the individual values, but I'd like to set some further constraints, and I don't quite know how to do this with least_squares. I've seen additional inequality constraint options in scipy.minimize, but I don't quite know how to apply that to a linear system of equations with the format Ax=B.
So, as example code, let's say I've already done the calculations and obtained my A matrix of constants and my B vector. I use least squares, get my values for X and Y, and can calculate Z since X+Y+Z=1. The issue is that in my minimization I did not set a constraint that X+Y<1, so in some cases you can actually get values where X+Y>1. I'd like to find a method where I can set that additional constraint, in addition to the bounds on the individual variables:
import numpy as np
from scipy.optimize import lsq_linear

Ax = np.array([[1, 2], [2, 4], [3, 4]])
B = np.array([0, 1, 2])
solution = lsq_linear(Ax, B, lsq_solver='lsmr', bounds=(0, 1))
X = solution.x[0]
Y = solution.x[1]
Z = 1 - sum(solution.x)
If minimize is the solution here, can you please show me how to set it up given the above matrix of A and array of B?
Any advice, tips, or help to point me in the right direction is greatly appreciated!
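For what it's worth, one crude way to realize the penalty idea described above is to append an extra penalty residual to the vector that least_squares minimizes. This is only a sketch with the same toy A and B as above and a hand-picked penalty weight, not necessarily the best route:

import numpy as np
from scipy.optimize import least_squares

A = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 4.0]])   # same toy matrix as above
B = np.array([0.0, 1.0, 2.0])

def residuals(x):
    res = A @ x - B
    # crude penalty for the extra constraint X + Y <= 1 (zero when satisfied)
    penalty = 1e4 * max(0.0, x[0] + x[1] - 1.0)
    return np.append(res, penalty)

sol = least_squares(residuals, x0=np.array([0.3, 0.3]), bounds=(0, 1))
X, Y = sol.x
Z = 1 - X - Y
print(X, Y, Z)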
Edit:
So I found something similar on here: Minimizing Least Squares with Algebraic Constraints and Bounds
So I thought I'd apply it to my case, but I don't think I've been able to apply it properly.
import numpy as np
from scipy.optimize import minimize, lsq_linear

Ax = np.array([[1, 2], [2, 4], [3, 4]])
B = np.array([0, 1, 2])

def fun(x, a1, a2, y):
    fun_output = x[0]*a1 + x[1]*a2
    return np.sum((fun_output - y)**2)

cons = [{"type": "eq", "fun": lambda x: x[0] + x[1] - 1}]
bnds = [(0, 1), (0, 1)]
xinit = np.array([1, 1])
solution = minimize(fun, args=(Ax[:, 0], Ax[:, 1], B), x0=xinit, bounds=bnds, constraints=cons)
solution_2 = lsq_linear(Ax, B, bounds=(0, 1))
print(solution.x)
print(solution_2.x)
The issue is that the output of this differs from lsq_linear, and I almost always get a value of Z very close to zero regardless of what the input arrays are. I don't know if I'm setting this up or understanding it correctly.
Your initial guess xinit is not feasible and doesn't satisfy your constraint.
IMO, solving the initial problem directly as a constrained nonlinear optimization problem (NLP) instead of rewriting it is the easier approach. Assuming you have all the data points Ia, Ib, Ic and Ie (you didn't provide all of them), you can use the following code snippet which is in the same vein as the linked answer of mine in your question.
from scipy.optimize import minimize
import numpy as np

def fun_to_fit(coeffs, *args):
    x, y, z = coeffs
    Ia, Ib, Ic = args
    return Ia*x + Ib*y + Ic*z

def objective(coeffs, *args):
    Ia, Ib, Ic, Ie = args
    residual = Ie - fun_to_fit(coeffs, Ia, Ib, Ic)
    return np.sum(residual**2)

# Constraint: x + y + z == 1
con = [{'type': 'eq', 'fun': lambda coeffs: np.sum(coeffs) - 1}]

# bounds
bounds = [(0, 1), (0, 1), (0, 1)]

# initial guess (fulfils the constraint and lies within the bounds)
x0 = np.array([0.25, 0.5, 0.25])

# your given data points
#Ia = np.array(...)
#Ib = np.array(...)
#Ic = np.array(...)
#Ie = np.array(...)

# solve the NLP
res = minimize(lambda coeffs: objective(coeffs, Ia, Ib, Ic, Ie), x0=x0, bounds=bounds, constraints=con)
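As a quick sanity check (continuing from the snippet above, with purely made-up toy arrays in place of the real data), the returned coefficients should lie inside the bounds and sum to 1:

# Hypothetical toy data, only for illustration; the snippet above must have been run first
Ia = np.array([1.0, 2.0, 3.0])
Ib = np.array([2.0, 4.0, 4.0])
Ic = np.array([0.5, 1.0, 1.5])
Ie = np.array([1.2, 2.5, 3.0])

res = minimize(lambda coeffs: objective(coeffs, Ia, Ib, Ic, Ie),
               x0=x0, bounds=bounds, constraints=con)
print(res.x, res.x.sum())   # coefficients in [0, 1], summing to ~1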

Imposing monotonicity with scipy.optimize.minimize

I am trying to minimize a function of a vector of length 20, but I want to constrain the solution to be monotonic, i.e.
x[1] <= x[2]... <= x[20]
I have tried to implement this in the following way using "constraints" for this routine:
cons = tuple([{'type':'ineq', 'fun': lambda x: x[i]- x[i-1]} for i in range(1, len(node_vals))])
res = sp.optimize.minimize(localisation, b, args=(d), constraints = cons) #optimize
However, the results I get are not monotonic, even when the initial guess b is; it seems the optimizer is completely ignoring the constraints. What could be going wrong? I have also tried changing the constraint to x[i]**3 - x[i+1]**3 to make it "smoother", but it didn't help at all. My objective function, localisation, is the integral of the solution to an eigenvalue problem whose parameters are defined beforehand:
def localisation(node_vals, domain): # calculate localisation for solutions with piecewise linear grading
    f = piecewise(node_vals, domain) # create piecewise linear function using given values at nodes
    #plt.plot(domain, f(domain))
    M = diff_matrix(f(domain)) # differentiation matrix created from piecewise linear function
    m = np.concatenate(([0], get_solutions(M)[1][:, 0], [0]))
    integral = num_int(domain, m)
    return integral
You didn't post a minimal reproducible example that we can run. However, did you try specifying which optimization algorithm SciPy should use? Something like this:
res = sp.optimize.minimize(localisation, b, args=(d), constraints=cons, method='SLSQP')
I'm having a very similar problem but with additional upper and lower bounds on the monotonicity property. I'm tackling the problem like this (maybe it helps you):
Using the Trust-Region Constrained Algorithm given by scipy. This provides us a way of dealing with linear constraints in a matrix-manner:
lb <= A.dot(x) <= ub
where lb and ub are the lower and upper bounds of this constraint problem, and A is the matrix representing the linear constraints.
every row of matrix A is a linear term which defines a constraint
If, for example, you require x[0] <= x[1], this can be transformed into x[0] - x[1] <= 0, which as a row of the linear constraint matrix A looks like [1, -1, 0, ...], provided of course that the corresponding entry of the upper bound vector is 0 (the reverse is also possible, but either way, having at least one of the two bounds makes this easy).
Setting up enough of these inequalities, and possibly merging several of them, gives a matrix sufficient to express the whole monotonicity requirement.
Hope this helps a bit; it did the job for my problem.
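To make the matrix formulation concrete, here is a minimal sketch with a made-up objective (the original localisation function isn't runnable here); it builds the row-wise difference matrix for x[i] - x[i+1] <= 0 and hands it to minimize through LinearConstraint with the trust-constr method:

import numpy as np
from scipy.optimize import minimize, LinearConstraint

n = 20

# Hypothetical objective purely for illustration: stay close to a target vector
target = np.linspace(0.0, 1.0, n)
def objective(x):
    return np.sum((x - target) ** 2)

# Row i of A encodes x[i] - x[i+1]; requiring A @ x <= 0 enforces x[i] <= x[i+1]
A = np.zeros((n - 1, n))
for i in range(n - 1):
    A[i, i] = 1.0
    A[i, i + 1] = -1.0
monotone = LinearConstraint(A, -np.inf, 0.0)

x0 = np.full(n, 0.5)
res = minimize(objective, x0, method='trust-constr', constraints=[monotone])
print(np.all(np.diff(res.x) >= -1e-9))   # True: the solution is (numerically) non-decreasing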

Performance issue with Scipy's solve_bvp and coupled differential equations

I'm facing a problem while trying to implement the coupled differential equation below (also known as single-mode coupling equation) in Python 3.8.3. As for the solver, I am using Scipy's function scipy.integrate.solve_bvp, whose documentation can be read here. I want to solve the equations in the complex domain, for different values of the propagation axis (z) and different values of beta (beta_analysis).
The problem is that it is extremely slow (not manageable) compared with an equivalent implementation in Matlab using the functions bvp4c, bvpinit and bvpset. Evaluating the first few iterations of both executions, they return the same result, except for the resulting mesh, which is much larger in the case of Scipy. The mesh sometimes even saturates to the maximum value.
The equation to be solved is shown here below, along with the boundary conditions function.
import h5py
import numpy as np
from scipy import integrate

def coupling_equation(z_mesh, a):
    ka_z = k  # Global
    z_a = z  # Global
    a_p = np.empty_like(a).astype(complex)
    for idx, z_i in enumerate(z_mesh):
        beta_zf_i = np.interp(z_i, z_a, beta_zf)  # Get beta at the desired point of the mesh
        ka_z_i = np.interp(z_i, z_a, ka_z)  # Get ka at the desired point of the mesh
        coupling_matrix = np.empty((2, 2), complex)
        coupling_matrix[0] = [-1j * beta_zf_i, ka_z_i]
        coupling_matrix[1] = [ka_z_i, 1j * beta_zf_i]
        a_p[:, idx] = np.matmul(coupling_matrix, a[:, idx])  # Apply the coupling matrix
    return a_p

def boundary_conditions(a_a, a_b):
    return np.hstack(((a_a[0] - 1), a_b[1]))
Moreover, I couldn't find a way to pass k, z and beta_zf as arguments of the function coupling_equation, given that the fun argument of the solve_bvp function must be a callable with the parameters (x, y). My approach is to define some global variables, but I would appreciate any help on this too if there is a better solution.
The analysis function which I am trying to code is:
def analysis(k, z, beta_analysis, max_mesh):
    s11_analysis = np.empty_like(beta_analysis, dtype=complex)
    s21_analysis = np.empty_like(beta_analysis, dtype=complex)
    initial_mesh = np.linspace(z[0], z[-1], 10)  # Initial mesh of 10 samples along L
    mesh = initial_mesh
    # a_init must be complex in order to solve the problem in a complex domain
    a_init = np.vstack((np.ones(np.size(initial_mesh)).astype(complex),
                        np.zeros(np.size(initial_mesh)).astype(complex)))
    for idx, beta in enumerate(beta_analysis):
        print(f"Iteration {idx}: beta_analysis = {beta}")
        global beta_zf
        beta_zf = beta * np.ones(len(z))  # Global variable so as to use it in coupling_equation(x, y)
        a = integrate.solve_bvp(fun=coupling_equation,
                                bc=boundary_conditions,
                                x=mesh,
                                y=a_init,
                                max_nodes=max_mesh,
                                verbose=1)
        # mesh = a.x  # Mesh for the next iteration
        # a_init = a.y  # Initial guess for the next iteration, corresponding to the current solution
        s11_analysis[idx] = a.y[1][0]
        s21_analysis[idx] = a.y[0][-1]
    return s11_analysis, s21_analysis
I suspect that the problem has something to do with the initial guess that is being passed to the different iterations (see commented lines inside the loop in the analysis function). I try to set the solution of one iteration as the initial guess for the following one (which should reduce the time needed by the solver), but it is even slower, which I don't understand. Maybe I missed something, because it is my first time trying to solve differential equations.
The parameters used for the execution are the following:
f2 = h5py.File(r'path/to/file', 'r')
k = np.array(f2['k']).squeeze()
z = np.array(f2['z']).squeeze()
f2.close()

analysis_points = 501
max_mesh = 1e6

beta_0 = 3e2
beta_low = 0  # Lower value of the frequency for the analysis
beta_up = beta_0  # Upper value of the frequency for the analysis
beta_analysis = np.linspace(beta_low, beta_up, analysis_points)

s11_analysis, s21_analysis = analysis(k, z, beta_analysis, max_mesh)
Any ideas on how to improve the performance of these functions? Thank you all in advance, and sorry if the question is not well formulated; I welcome any suggestions about it.
Edit: Added some information about performance and sizing of the problem.
In practice, I can't find a relation that determines the number of times coupling_equation is called. It must be a matter of the internal operation of the solver. I checked the number of calls in one iteration by printing a line, and it happened on 133 occasions (this was one of the fastest). This must be multiplied by the number of iterations over beta. For the analyzed one, the solver returned this:
Solved in 11 iterations, number of nodes 529.
Maximum relative residual: 9.99e-04
Maximum boundary residual: 0.00e+00
The shapes of a and z_mesh are correlated, since z_mesh is a vector whose length corresponds to the size of the mesh, recalculated by the solver each time it calls coupling_equation. Given that a contains the amplitudes of the progressive and regressive waves at each point of z_mesh, the shape of a is (2, len(z_mesh)).
In terms of computation time, I only managed to complete 19 iterations in about 2 hours with Python. In this case the initial iterations were faster, but they start to take more time as the mesh grows, until the mesh saturates at the maximum allowed value. I think this is because of the value of the input coupling coefficients at that point, because it also happens when no loop over beta_analysis is executed (just the solve_bvp call for the intermediate value of beta). Matlab, instead, managed to return a solution for the entire problem in just 6 minutes, approximately. If I pass the result of the last iteration as the initial guess (commented lines in the analysis function), the mesh overflows even faster and it is impossible to get more than a couple of iterations.
Based on semi-random inputs, we can see that max_mesh is sometimes reached. This means that coupling_equation can be called with a quite big z_mesh and a arrays. The problem is that coupling_equation contains a slow pure-Python loop iterating on each column of the arrays. You can speed the computation up a lot using Numpy vectorization. Here is an implementation:
def coupling_equation_fast(z_mesh, a):
    ka_z = k  # Global
    z_a = z  # Global
    a_p = np.empty(a.shape, dtype=np.complex128)
    beta_zf_i = np.interp(z_mesh, z_a, beta_zf)  # Get beta at the desired points of the mesh
    ka_z_i = np.interp(z_mesh, z_a, ka_z)  # Get ka at the desired points of the mesh
    # Fast manual matrix multiplication
    a_p[0] = (-1j * beta_zf_i) * a[0] + ka_z_i * a[1]
    a_p[1] = ka_z_i * a[0] + (1j * beta_zf_i) * a[1]
    return a_p
This code provides a similar output with semi-random inputs compared to the original implementation but is roughly 20 times faster on my machine.
Furthermore, I do not know if max_mesh happens to be big with your inputs too and even if this is normal/intended. It may make sense to decrease the value of max_mesh in order to reduce the execution time even more.
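On the side question about globals: as far as I know, solve_bvp (unlike solve_ivp) does not accept an args parameter, but you can bind the extra data with a closure or functools.partial instead. A minimal sketch, reusing the names from the snippets above (z, k, beta_zf, mesh, a_init, max_mesh, boundary_conditions):

from functools import partial
import numpy as np
from scipy import integrate

def coupling_equation_params(z_mesh, a, z_a, ka_z, beta_zf):
    # Same computation as coupling_equation_fast, but with all data passed explicitly
    a_p = np.empty(a.shape, dtype=np.complex128)
    beta_zf_i = np.interp(z_mesh, z_a, beta_zf)
    ka_z_i = np.interp(z_mesh, z_a, ka_z)
    a_p[0] = (-1j * beta_zf_i) * a[0] + ka_z_i * a[1]
    a_p[1] = ka_z_i * a[0] + (1j * beta_zf_i) * a[1]
    return a_p

# Bind the extra data; the result is a callable of (x, y), as solve_bvp expects
ode = partial(coupling_equation_params, z_a=z, ka_z=k, beta_zf=beta_zf)
res = integrate.solve_bvp(ode, boundary_conditions, mesh, a_init, max_nodes=max_mesh)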

CVXPY optimization problem. DCPError: Problem does not follow DCP rules

We have the constraints set up very well and are now looking at the objective function of the partner preference problem. We can construct a 20x20 symmetric worker-to-worker matrix, each element being the combined score between two workers. However, the matrix is not positive semidefinite and we cannot progress any further. We are using the cvxpy Python package. Please help!
Piece of code to clarify:
import numpy as np
import pandas as pd
import cvxpy as cp

part_pref = pd.read_excel('Preferred Partners Updated.xlsx')
part_pref_np = np.array(part_pref)[:, 1:21]  # 20x20 symmetric matrix with partner preference

y = cp.Variable((20, 4), integer=True)

objective = cp.Maximize(-cp.sum(y.T @ part_pref_np @ y))

constraints = [y >= 0,
               y <= 1,
               cp.sum(y, axis=1) == 1,
               cp.sum(y, axis=0) >= b,
               cp.sum(y, axis=0) <= c,
               (skill_np - d).T @ y >= e,
               cp.multiply((skill_np[:, 2] - 4), y[:, 0]) >= 0,
               cp.multiply((skill_np[:, 0] - 4), y[:, 3]) >= 0,
               f @ x[:, 0] >= 1]

problem = cp.Problem(objective, constraints)
problem.solve()
raises error
DCPError: Problem does not follow DCP rules.
I suspect the problem is in
cp.Maximize(-cp.sum(y.T @ part_pref_np @ y))
Can you try with
cp.Maximize(0)
and see what happens. I expect this will deliver a feasible solution.
CVXPY only allows convex quadratic objectives (concave ones when maximizing), and I think your objective is not (it depends on the data, so I cannot verify this). If so, you can linearize the objective, as the decision variables are 0-1 variables. It is not so trivial to do this in CVXPY, however, as y is already a 2-dimensional variable (CVXPY only supports 0-, 1- and 2-dimensional variables).
I am not completely sure why you are using a quadratic objective here. Maybe you can think about a linear one.
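To illustrate the linearization idea on a small, self-contained example (a single binary vector rather than the 20x4 assignment variable, and a random score matrix, so purely hypothetical): each product x[i]*x[j] of binary variables is replaced by an auxiliary variable w[i, j] with the standard linking constraints, which turns the quadratic objective into a linear one.

import cvxpy as cp
import numpy as np

n = 5
rng = np.random.default_rng(0)
Q = rng.standard_normal((n, n))      # arbitrary (possibly indefinite) score matrix
Q = (Q + Q.T) / 2

x = cp.Variable(n, boolean=True)
w = cp.Variable((n, n))              # w[i, j] stands in for the product x[i] * x[j]

cons = []
for i in range(n):
    for j in range(n):
        cons += [w[i, j] <= x[i],
                 w[i, j] <= x[j],
                 w[i, j] >= x[i] + x[j] - 1,
                 w[i, j] >= 0]
cons += [cp.sum(x) == 2]             # an illustrative side constraint

# The quadratic x.T @ Q @ x becomes the linear expression sum(Q * w)
objective = cp.Maximize(cp.sum(cp.multiply(Q, w)))
prob = cp.Problem(objective, cons)
prob.solve()                         # requires a mixed-integer-capable solver (e.g. GLPK_MI, CBC, GUROBI)
print(x.value)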

How to find all zeros of a function using numpy (and scipy)?

Suppose I have a function f(x) defined between a and b. This function can have many zeros, but also many asymptotes. I need to retrieve all the zeros of this function. What is the best way to do it?
Actually, my strategy is the following:
I evaluate my function on a given number of points
I detect whether there is a change of sign
I find the zero between the points that are changing sign
I verify if the zero found is really a zero, or if this is an asymptote
import numpy
import scipy.optimize

U = numpy.linspace(a, b, 100)  # evaluate function at 100 different points
c = f(U)
s = numpy.sign(c)
for i in range(100 - 1):
    if s[i] + s[i+1] == 0:  # opposite signs
        u = scipy.optimize.brentq(f, U[i], U[i+1])
        z = f(u)
        if numpy.isnan(z) or abs(z) > 1e-3:
            continue
        print('found zero at {}'.format(u))
This algorithm seems to work, except I see two potential problems:
It will not detect a zero that doesn't cross the x axis (for example, a function like f(x) = x**2). However, I don't think that can occur with the function I'm evaluating.
If the discretization points are too far apart, there could be more than one zero between them, and the algorithm could fail to find them.
Do you have a better strategy (still efficient) to find all the zeros of a function?
I don't think it's important for the question, but for those who are curious, I'm dealing with characteristic equations of wave propagation in optical fiber. The function looks like this (where V and ell are previously defined, and ell is a positive integer):
def f(u):
    w = numpy.sqrt(V**2 - u**2)
    jl = scipy.special.jn(ell, u)
    jl1 = scipy.special.jnjn(ell-1, u)
    kl = scipy.special.jnkn(ell, w)
    kl1 = scipy.special.jnkn(ell-1, w)
    return jl / (u*jl1) + kl / (w*kl1)
Why are you limited to numpy? Scipy has a package that does exactly what you want:
http://docs.scipy.org/doc/scipy/reference/optimize.nonlin.html
One lesson I've learned: numerical programming is hard, so don't do it :)
Anyway, if you're dead set on building the algorithm yourself, the doc page on scipy I linked (takes forever to load, btw) gives you a list of algorithms to start with. One method that I've used before is to discretize the function to the degree that is necessary for your problem. (That is, tune \delta x so that it is much smaller than the characteristic size in your problem.) This lets you look for features of the function (like changes in sign). AND, you can compute the derivative of a line segment (probably since kindergarten) pretty easily, so your discretized function has a well-defined first derivative. Because you've tuned the dx to be smaller than the characteristic size, you're guaranteed not to miss any features of the function that are important for your problem.
If you want to know what "characteristic size" means, look for some parameter of your function with units of length or 1/length. That is, for some function f(x), assume x has units of length and f has no units. Then look for the things that multiply x. For example, if you want to discretize cos(\pi x), the parameter that multiplies x (if x has units of length) must have units of 1/length. So the characteristic size of cos(\pi x) is 1/\pi. If you make your discretization much smaller than this, you won't have any issues. To be sure, this trick won't always work, so you may need to do some tinkering.
I found it's relatively easy to implement your own root finder using scipy.optimize.fsolve.
Idea: find the zeros in the interval (start, stop) with step size step by calling fsolve repeatedly with a changing x0. Use a relatively small step size to find all the roots.
It can only search for zeros in one dimension (other dimensions must be fixed). If you have other needs, I would recommend using sympy to calculate an analytical solution.
Note: it may not always find all the zeros, but I have seen it give relatively good results. I also put the code in a gist, which I will update if needed.
import numpy as np
import scipy
from scipy.optimize import fsolve
from matplotlib import pyplot as plt
# Defined below
r = RootFinder(1, 20, 0.01)
args = (90, 5)
roots = r.find(f, *args)
print("Roots: ", roots)
# plot results
u = np.linspace(1, 20, num=600)
fig, ax = plt.subplots()
ax.plot(u, f(u, *args))
ax.scatter(roots, f(np.array(roots), *args), color="r", s=10)
ax.grid(color="grey", ls="--", lw=0.5)
plt.show()
Example output:
Roots: [ 2.84599497 8.82720551 12.38857782 15.74736542 19.02545276]
RootFinder definition
import numpy as np
import scipy
from scipy.optimize import fsolve
from matplotlib import pyplot as plt

class RootFinder:
    def __init__(self, start, stop, step=0.01, root_dtype="float64", xtol=1e-9):
        self.start = start
        self.stop = stop
        self.step = step
        self.xtol = xtol
        self.roots = np.array([], dtype=root_dtype)

    def add_to_roots(self, x):
        if (x < self.start) or (x > self.stop):
            return  # outside range
        if any(abs(self.roots - x) < self.xtol):
            return  # root already found
        self.roots = np.append(self.roots, x)

    def find(self, f, *args):
        current = self.start
        for x0 in np.arange(self.start, self.stop + self.step, self.step):
            if x0 < current:
                continue
            x = self.find_root(f, x0, *args)
            if x is None:  # no root found
                continue
            current = x
            self.add_to_roots(x)
        return self.roots

    def find_root(self, f, x0, *args):
        x, _, ier, _ = fsolve(f, x0=x0, args=args, full_output=True, xtol=self.xtol)
        if ier == 1:
            return x[0]
        return None
Test function
The scipy.special.jnjn does not exist anymore, but I created a similar test function for this case.
def f(u, V=90, ell=5):
    w = np.sqrt(V ** 2 - u ** 2)
    jl = scipy.special.jn(ell, u)
    jl1 = scipy.special.yn(ell - 1, u)
    kl = scipy.special.kn(ell, w)
    kl1 = scipy.special.kn(ell - 1, w)
    return jl / (u * jl1) + kl / (w * kl1)
The main problem I see with this is whether you can actually find all the roots --- as has already been mentioned in the comments, this is not always possible. If you are sure that your function is not completely pathological (sin(1/x) was already mentioned), the next question is what your tolerance is for missing a root or several of them. Put differently, it's about how far you are prepared to go to make sure you did not miss any --- to the best of my knowledge, there is no general method for isolating all the roots for you, so you'll have to do it yourself. What you show is a reasonable first step already. A couple of comments:
Brent's method is indeed a good choice here.
First of all, deal with the divergences. Since your function has Bessel functions in the denominators, you can first solve for their roots -- better, look them up in, e.g., Abramowitz and Stegun (MathWorld link). This will be better than the ad hoc grid you're using.
Once you've found two roots or divergences, x_1 and x_2, run the search again in the interval [x_1+epsilon, x_2-epsilon]. Continue until no more roots are found (Brent's method is guaranteed to converge to a root, provided there is one).
If you cannot enumerate all the divergences, you might want to be a little more careful when verifying that a candidate is indeed a divergence: given x, don't just check that f(x) is large; check that, e.g., |f(x-epsilon/2)| > |f(x-epsilon)| for several values of epsilon (1e-8, 1e-9, 1e-10, something like that).
If you want to make sure you don't have roots which simply touch zero, look for the extrema of the function, and for each extremum, x_e, check the value of f(x_e).
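A minimal sketch of the bracket-and-subdivide idea described above, written against a generic callable f on [a, b]; the helper for the pole locations is hypothetical (for the fiber equation you could, e.g., take Bessel zeros from scipy.special.jn_zeros), and this is not the original poster's code:

import numpy as np
from scipy.optimize import brentq

def roots_in(f, a, b, n_grid=200, eps=1e-9, tol=1e-3):
    """Find sign-change roots of f on [a, b], rejecting candidates where |f| stays large."""
    roots = []
    x = np.linspace(a + eps, b - eps, n_grid)
    y = f(x)
    for xl, xr, yl, yr in zip(x[:-1], x[1:], y[:-1], y[1:]):
        if not (np.isfinite(yl) and np.isfinite(yr)) or yl * yr > 0:
            continue
        r = brentq(f, xl, xr)
        if abs(f(r)) < tol:          # a divergence also changes sign, so check the value
            roots.append(r)
    return roots

# Usage sketch: split the search interval at the known poles first,
# then look for roots strictly inside each sub-interval.
# poles = sorted(known_poles(...))                 # hypothetical helper
# edges = [a] + poles + [b]
# all_roots = [r for lo, hi in zip(edges[:-1], edges[1:]) for r in roots_in(f, lo, hi)]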
I've also encountered this problem when solving equations like f(z) = 0 where f was a holomorphic function. I wanted to be sure not to miss any zero, and finally developed an algorithm based on the argument principle.
It helps to find the exact number of zeros lying in a complex domain. Once you know the number of zeros, it is easier to find them. There are, however, two concerns which must be taken into account:
Take care with multiplicity: when solving (z-1)^2 = 0, you'll get two zeros, as z=1 is counted twice.
If the function is meromorphic (and thus contains poles), each pole reduces the count of zeros and breaks the attempt to count them.
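For illustration, here is a minimal, generic sketch of the counting step (not the answerer's actual algorithm): the winding number of f around 0 along a rectangle boundary equals the number of zeros minus the number of poles inside, counted with multiplicity.

import numpy as np

def zeros_minus_poles(f, x_min, x_max, y_min, y_max, n=2000):
    """Argument principle: winding number of f along the rectangle boundary."""
    # Sample the boundary counter-clockwise (f must have no zero/pole on it)
    bottom = x_min + np.linspace(0, 1, n) * (x_max - x_min) + 1j * y_min
    right  = x_max + 1j * (y_min + np.linspace(0, 1, n) * (y_max - y_min))
    top    = x_max - np.linspace(0, 1, n) * (x_max - x_min) + 1j * y_max
    left   = x_min + 1j * (y_max - np.linspace(0, 1, n) * (y_max - y_min))
    contour = np.concatenate([bottom, right, top, left])

    values = f(contour)
    # Sum the phase increments of f along the contour; needs fine enough sampling
    phase_jumps = np.angle(values[1:] / values[:-1])
    return int(round(np.sum(phase_jumps) / (2 * np.pi)))

# f(z) = (z - 1)**2 * (z + 2): a double zero at 1 and a simple zero at -2
f = lambda z: (z - 1) ** 2 * (z + 2)
print(zeros_minus_poles(f, -3, 3, -1, 1))   # -> 3 (counted with multiplicity)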
