I usually rely on Wolfram Mathematica for this kind of thing, but I've been delving into Python recently and the results are so much better. Basically, I'm looking for numerical solutions for systems like the following.
system:

    r0*r2 + r1*r3 = 0
    r0*r1 + r1*r2 + r2*r3 + r0*r3 < 0
    r0**2 + r1**2 + r2**2 + r3**2 - 4*(r0 + r1 + r2 + r3)**2 < 0
Well, I know that there are solutions, because Wolfram Mathematica found a single one (0.0858875,0.0116077,-0.156661,1.15917). What I tried to do in Python is this brute force code.
import numpy as np

START = -3
END = 3
STEP = 0.1

for r0 in np.arange(START, END, STEP):
    for r1 in np.arange(START, END, STEP):
        for r2 in np.arange(START, END, STEP):
            for r3 in np.arange(START, END, STEP):
                eq0 = r0*r2 + r1*r3
                eq1 = r0*r1 + r1*r2 + r2*r3 + r0*r3
                eq2 = r0**2 + r1**2 + r2**2 + r3**2 - 4*(r0 + r1 + r2 + r3)**2
                if eq0 == 0 and eq1 < 0 and eq2 < 0:
                    print(r0, r1, r2, r3)
Edit: I'm okay with things like -0.00001 < eq0 < 0.00001 instead of eq0 == 0.
Well, although it didn't find solutions in this case, the brute-force method worked well for other systems I'm dealing with, particularly when there are fewer equations and variables. From four variables on, it becomes really difficult.
I'm sorry if I'm asking too much. I'm completely new to Python, so I also don't know if this is actually trivial. Maybe fsolve would be useful? I'm not sure if it works with inequalities. Also, even when the systems I encounter have only equalities, they always have more variables than equations, like this one:
system2:

    r0*r1 + r1*r2 + r2*r3 + r3*r4 + r4*r5 + r0*r5 = 0
    r0*r3 + r1*r4 + r2*r5 = 0
    r0**2 + r1**2 + r2**2 + r3**2 + r4**2 + r5**2 - 3*(r0 + r1 + r2 + r3 + r4 + r5)**2 = 0
Hence 'fsolve' is not appropriate, right?
As soon as your system contains inequalities, you need to formulate it as an optimization problem and solve it with scipy.optimize.minimize. Otherwise, you can use scipy.optimize.root or scipy.optimize.fsolve to solve the equation system. Note that the optimization formulation is essentially what happens behind the scenes in root and fsolve anyway, i.e. both solve a least-squares problem internally.
In general, the problem

    g_1(x) = 0, ..., g_m(x) = 0
    h_1(x) < 0, ..., h_p(x) < 0

can be formulated as

    min  g_1(x)**2 + ... + g_m(x)**2
    s.t. -1.0*(h_1(x) + eps) >= 0
         ...
         -1.0*(h_p(x) + eps) >= 0

where eps is a small tolerance used to model the strict inequalities.
Hence, you can solve your first problem as follows:
import numpy as np
from scipy.optimize import minimize

def obj(r):
    return (r[0]*r[2] + r[1]*r[3])**2

eps = 1.0e-6

constrs = [
    {'type': 'ineq', 'fun': lambda r: -1.0*(r[0]*r[1] + r[1]*r[2] + r[2]*r[3] + r[0]*r[3] + eps)},
    {'type': 'ineq', 'fun': lambda r: -1.0*(np.sum(r**2) - 4*(np.sum(r))**2 + eps)}
]

# res.x contains the solution
res = minimize(obj, x0=np.ones(4), constraints=constrs)
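To sanity-check the result, you can verify that the optimizer converged, that the objective value is essentially zero and that both inequality constraints hold at res.x. A small sketch continuing the code above (not part of the original answer):

print(res.success)                          # should be True
print(obj(res.x))                           # should be close to 0
print([c['fun'](res.x) for c in constrs])   # each entry should be >= 0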
Your second problem can be solved similarly; there you only need to remove the constraints (a sketch of that variant follows after the next code block). Alternatively, you can use root, where it's worth mentioning that it solves F(x) = 0 for a function F: R^N -> R^N, i.e. a function of N variables that returns an N-dimensional vector. In case your function has fewer equations than variables, you can simply fill up the vector with zeros:
import numpy as np
from scipy.optimize import root

def F(r):
    vals = np.zeros(r.size)
    vals[0] = np.dot(r[:5], r[1:]) + r[0]*r[5]
    vals[1] = r[0]*r[3] + r[1]*r[4] + r[2]*r[5]
    vals[2] = np.sum(r**2) - 3*np.sum(r)**2
    return vals

# res.x contains your solution
res = root(F, x0=np.ones(6))
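And here is a minimal sketch of the constraint-free minimize variant mentioned above, using the same three equations (the starting point is an arbitrary choice; note that all three equations are homogeneous, so the trivial point r = 0 is also a solution and it can help to try several starting points):

import numpy as np
from scipy.optimize import minimize

def obj2(r):
    # sum of squared residuals of the three equations of the second system
    eq0 = np.dot(r[:5], r[1:]) + r[0]*r[5]
    eq1 = r[0]*r[3] + r[1]*r[4] + r[2]*r[5]
    eq2 = np.sum(r**2) - 3*np.sum(r)**2
    return eq0**2 + eq1**2 + eq2**2

res2 = minimize(obj2, x0=np.ones(6))
print(res2.x, obj2(res2.x))   # obj2(res2.x) should be close to 0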
Not really an answer, but you can simplify this a lot using itertools.product:
from itertools import product
import numpy as np
START = -3
END = 3
STEP = 0.1
for r0, r1, r2, r3 in product(np.arange(START, END, STEP), repeat=4):
    print(r0, r1, r2, r3)
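For instance, combined with the equations from the question and the tolerance mentioned in the edit, the same brute-force search looks like this (purely illustrative; TOL is an arbitrary choice):

from itertools import product
import numpy as np

START = -3
END = 3
STEP = 0.1
TOL = 1e-5   # tolerance standing in for eq0 == 0

for r0, r1, r2, r3 in product(np.arange(START, END, STEP), repeat=4):
    eq0 = r0*r2 + r1*r3
    eq1 = r0*r1 + r1*r2 + r2*r3 + r0*r3
    eq2 = r0**2 + r1**2 + r2**2 + r3**2 - 4*(r0 + r1 + r2 + r3)**2
    if abs(eq0) < TOL and eq1 < 0 and eq2 < 0:
        print(r0, r1, r2, r3)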
Not sure if your problem is a root-finding problem or a minimization-with-constraints problem, but take a look at scipy.optimize and scipy.optimize.linprog; maybe one of those methods can be bent to your application.
Here is the question and what I have so far, but I do not know how to proceed from here to get C1.
Solve the IVP (1/x + 2y^2x)dx + (2yx^2 - cos(y))dy = 0, y(1) = pi. Give an implicit solution
from sympy import *
x = symbols('x')
y = Function('y')
deq = diff(y(x),x) + (1/x + 2*y(x)**2*x)/(2*y(x)*x**2 - cos(y(x)))
ysoln = dsolve(deq, y(x))
The following should work:
from sympy import *
x = symbols('x')
y = Function('y')
deq = diff(y(x),x)*(2*y(x)*x**2 - cos(y(x))) + (1/x + 2*y(x)**2*x)
# this leads to an error
# ysoln = dsolve(deq, y(x), ics={y(0): pi})
# so we do it our own way
ysoln = dsolve(deq, y(x))
C1 = solve(ysoln.subs(x, 1).subs(y(1), pi), 'C1')[0]
ysoln = ysoln.subs('C1', C1)
print(ysoln)
# Eq(x**2*y(x)**2 + log(x) - sin(y(x)), pi**2)
My SymPy version couldn't solve the equation in the form you had, so I had to restructure deq a bit. It's probably just a small problem caused by the division.
Note that this likely will not work for every ODE. It works here because one can solve for a unique value of C1 given the initial condition. Also, in the future SymPy might not use C1 as the name of the arbitrary constant, and calls such as .subs('C1', C1) will not work in that case.
As an interactive session though, the above method will work just fine.
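As a quick check (a sketch, not part of the original answer), substituting the initial condition into the implicit solution printed above should reduce the equation to True:

from sympy import *

x = symbols('x')
y = Function('y')

# the implicit solution printed above, with C1 already replaced by pi**2
ysoln = Eq(x**2*y(x)**2 + log(x) - sin(y(x)), pi**2)

# the initial condition y(1) = pi should make the equation hold identically
print(ysoln.subs(x, 1).subs(y(1), pi))   # True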
Definition of the problem
I am trying to calculate the points of intersection of geometrical objects, such as two planes and a sphere, in Python.
Let's consider for example these three objects:

    x = 0
    z = 0
    x**2 + y**2 + z**2 = 1

This system gives two solutions: (0, 1, 0) and (0, -1, 0).
I would like to know if there is a Python library that can help develop a solver to calculate these intersections. I am looking for something that works like Wolfram Alpha, where we can input three equations and it returns all the possible solutions (when there is a finite number of them, for simplicity).
What I tried
I tried with SymPy, but it returns []:
from sympy.solvers import solve
from sympy import Symbol
x = Symbol('x')
y = Symbol('y')
z = Symbol('z')
solve(z, x, x**2 + y**2 + z**2 -1)
I then tried with scipy:
import numpy as np
from scipy.optimize import fsolve

def f(x):
    y = np.zeros(3)
    y[2] = x[2]
    y[0] = x[0]
    y[1] = x[0]**2 + x[1]**2 + x[2]**2 - 1
    return y

x0 = np.array([10, 10, 10])
solution = fsolve(f, x0)
print(solution[0], solution[1], solution[2])
but it only returns one of the two solutions:
6.79746218330325e-28 1.0000000000000002 -2.3528179942097343e-35
I also tried with gekko, and still it only returns one possible solution (which depends on the initial guess):
from gekko import GEKKO
m = GEKKO()
x = m.Var(value = 1)
y = m.Var(value = 1)
z = m.Var(value = 1)
m.Equation(x == 0)
m.Equation(z == 0)
m.Equation(x**2 + y**2+z**2 ==1)
m.solve()
fsolve from scipy, and every other function of this kind that I personally know of, will accept an arbitrary input function but return only one solution.
One workaround, if you have an idea where the other solution is, would be to make a second call to fsolve with an x0 value that is closer to that solution (see https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html).
If you alternatively know the range in which you want to find solutions, the easiest way is to evaluate the function on an array of points and check where the value changes sign (this would be doing it from scratch), as in the sketch below.
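A from-scratch sketch of that sign-change scan, applied to the sphere/plane example from the question: with x = 0 and z = 0 substituted in, the remaining equation is f(y) = y**2 - 1 (the scan range and resolution are arbitrary choices):

import numpy as np

ys = np.linspace(-2.0, 2.0, 4000)
f = ys**2 - 1.0
brackets = np.where(np.diff(np.sign(f)) != 0)[0]   # indices where f changes sign
for i in brackets:
    print("root bracketed between y =", ys[i], "and y =", ys[i + 1])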
I found the solution with SymPy. Apparently it's one of the few (if not the only) libraries that can find analytical solutions, and it returns more than just one solution. Also, we don't need to pass initial guesses. In my question, there was an error in the example I posted with sympy. This is how I solved the system:
from sympy.solvers import solve
import sympy as sp

x = sp.Symbol('x')
y = sp.Symbol('y')
z = sp.Symbol('z')

sp.solve([z, x, (x**2 + y**2 + z**2) - 1], x, y, z)
Result: [(0, -1, 0), (0, 1, 0)]
I need an ODE solver for a stiff problem, similar to MATLAB's ode15s.
For my problem I need to check how many steps (calculations) are needed for different initial values and compare this to my own ODE solver.
I tried using
solver = scipy.integrate.ode(f)
solver.set_integrator('vode', method='bdf', order=15, nsteps=3000)
solver.set_initial_value(u0, t0)
And then integrating with:
i = 0
while solver.successful() and solver.t < tf:
    solver.integrate(tf, step=True)
    i += 1
print(i)
Where tf is the end of my time interval.
The function used is defined as:
def func(self, t, u):
    u1 = u[1]
    u2 = mu * (1 - numpy.dot(u[0], u[0]))*u[1] - u[0]
    return numpy.array([u1, u2])
Which, with the initial value u0 = [2, 0], is a stiff problem.
This means that the number of steps should not depend on my constant mu.
But it does.
I think the odeint method can solve this as a stiff problem, but then I have to pass in the whole t vector and therefore have to fix the number of steps in advance, and this defeats the point of my assignment.
Is there any way to use odeint with an adaptive step size between t0 and tf?
Or can you see anything I'm missing in my use of the vode integrator?
I'm seeing something similar; with the 'vode' solver, changing methods between 'adams' and 'bdf' doesn't change the number of steps very much. (By the way, there is no point in using order=15; the maximum order of the 'bdf' method of the 'vode' solver is 5, and the maximum order of the 'adams' method is 12. If you leave the argument out, it should use the maximum by default.)
odeint is a wrapper of LSODA. ode also provides a wrapper of LSODA: change 'vode' to 'lsoda'. Unfortunately, the 'lsoda' solver ignores the step=True argument of the integrate method. The 'lsoda' solver does much better than 'vode' with method='bdf'.
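For example, a minimal sketch of that change, using the system from the question with mu = 10000 and tf = 10 (the values used further down; nsteps is just a generous cap):

from scipy.integrate import ode

mu = 10000.0

def f(t, u):
    # the same stiff system as in the question
    return [u[1], mu*(1 - u[0]*u[0])*u[1] - u[0]]

solver = ode(f)
solver.set_integrator('lsoda', nsteps=3000)   # instead of 'vode', method='bdf'
solver.set_initial_value([2.0, 0.0], 0.0)
solver.integrate(10.0)   # 'lsoda' ignores step=True, so integrate straight to tf
print(solver.t, solver.y)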
You can get an upper bound on the number of steps that were used by initializing tvals = [] and, inside func, doing tvals.append(t). When the solver completes, set tvals = np.unique(tvals). The length of tvals tells you the number of time values at which your function was evaluated. This is not exactly what you want, but it does show a huge difference between using the 'lsoda' solver and the 'vode' solver with method 'bdf'. The number of steps used by the 'lsoda' solver is on the same order as you quoted for MATLAB in your comment. (I used mu=10000, tf=10.)
Update: It turns out that, at least for a stiff problem, it makes a huge difference for the 'vode' solver if you provide a function to compute the Jacobian matrix.
The script below runs the 'vode' solver with both methods, and it runs the 'lsoda' solver. In each case, it runs the solver with and without the Jacobian function. Here's the output it generates:
vode adams jac=None len(tvals) = 517992
vode adams jac=jac len(tvals) = 195
vode bdf jac=None len(tvals) = 516284
vode bdf jac=jac len(tvals) = 55
lsoda jac=None len(tvals) = 49
lsoda jac=jac len(tvals) = 49
The script:
from __future__ import print_function

import numpy as np
from scipy.integrate import ode


def func(t, u, mu):
    tvals.append(t)
    u1 = u[1]
    u2 = mu*(1 - u[0]*u[0])*u[1] - u[0]
    return np.array([u1, u2])


def jac(t, u, mu):
    j = np.empty((2, 2))
    j[0, 0] = 0.0
    j[0, 1] = 1.0
    j[1, 0] = -mu*2*u[0]*u[1] - 1
    j[1, 1] = mu*(1 - u[0]*u[0])
    return j


mu = 10000.0
u0 = [2, 0]
t0 = 0.0
tf = 10

for name, kwargs in [('vode', dict(method='adams')),
                     ('vode', dict(method='bdf')),
                     ('lsoda', {})]:
    for j in [None, jac]:
        solver = ode(func, jac=j)
        solver.set_integrator(name, atol=1e-8, rtol=1e-6, **kwargs)
        solver.set_f_params(mu)
        solver.set_jac_params(mu)
        solver.set_initial_value(u0, t0)

        tvals = []
        i = 0
        while solver.successful() and solver.t < tf:
            solver.integrate(tf, step=True)
            i += 1

        print("%-6s %-8s jac=%-5s " %
              (name, kwargs.get('method', ''), j.func_name if j else None),
              end='')
        tvals = np.unique(tvals)
        print("len(tvals) =", len(tvals))
I'm implementing a very simple Susceptible-Infected-Recovered model with a steady population for an idle side project - normally a pretty trivial task. But I'm running into solver errors using either PysCeS or SciPy, both of which use lsoda as their underlying solver. This only happens for particular values of a parameter, and I'm stumped as to why. The code I'm using is as follows:
import numpy as np
from pylab import *
import scipy.integrate as spi
#Parameter Values
S0 = 99.
I0 = 1.
R0 = 0.
PopIn= (S0, I0, R0)
beta= 0.50
gamma=1/10.
mu = 1/25550.
t_end = 15000.
t_start = 1.
t_step = 1.
t_interval = np.arange(t_start, t_end, t_step)
#Solving the differential equation. Solves over t for initial conditions PopIn
def eq_system(PopIn, t):
    '''Defining SIR System of Equations'''
    #Creating an array of equations
    Eqs = np.zeros((3))
    Eqs[0] = -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
    Eqs[1] = beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1]
    Eqs[2] = gamma*PopIn[1] - mu*PopIn[2]
    return Eqs

SIR = spi.odeint(eq_system, PopIn, t_interval)
This produces the following error:
lsoda-- at current t (=r1), mxstep (=i1) steps
taken on this call before reaching tout
In above message, I1 = 500
In above message, R1 = 0.7818108252072E+04
Excess work done on this call (perhaps wrong Dfun type).
Run with full_output = 1 to get quantitative information.
Normally when I encounter a problem like that, there's something terminally wrong with the equation system I set up, but I can't see anything wrong with it here. Weirdly, it also works if you change mu to something like 1/15550. In case it was something wrong with the system, I also implemented the model in R as follows:
require(deSolve)

sir.model <- function (t, x, params) {
  S <- x[1]
  I <- x[2]
  R <- x[3]
  with (
    as.list(params),
    {
      dS <- -beta*S*I/(S+I+R) - mu*S + mu*(S+I+R)
      dI <- beta*S*I/(S+I+R) - gamma*I - mu*I
      dR <- gamma*I - mu*R
      res <- c(dS, dI, dR)
      list(res)
    }
  )
}

times <- seq(0, 15000, by=1)
params <- c(
  beta <- 0.50,
  gamma <- 1/10,
  mu <- 1/25550
)
xstart <- c(S = 99, I = 1, R = 0)
out <- as.data.frame(lsoda(xstart, times, sir.model, params))
This also uses lsoda, but seems to be going off without a hitch. Can anyone see what's going wrong in the Python code?
I think that for the parameters you've chosen you're running into problems with stiffness - due to numerical instability the solver's step size is getting pushed into becoming very small in regions where the slope of the solution curve is actually quite shallow. The Fortran solver lsoda, which is wrapped by scipy.integrate.odeint, tries to switch adaptively between methods suited to 'stiff' and 'non-stiff' systems, but in this case it seems to be failing to switch to stiff methods.
Very crudely you can just massively increase the maximum allowed steps and the solver will get there in the end:
SIR = spi.odeint(eq_system, PopIn, t_interval, mxstep=5000000)
A better option is to use the object-oriented ODE solver scipy.integrate.ode, which allows you to explicitly choose whether to use stiff or non-stiff methods:
import numpy as np
from pylab import *
import scipy.integrate as spi


def run():
    #Parameter Values
    S0 = 99.
    I0 = 1.
    R0 = 0.
    PopIn = (S0, I0, R0)
    beta = 0.50
    gamma = 1/10.
    mu = 1/25550.
    t_end = 15000.
    t_start = 1.
    t_step = 1.
    t_interval = np.arange(t_start, t_end, t_step)

    #Solving the differential equation. Solves over t for initial conditions PopIn
    def eq_system(t, PopIn):
        '''Defining SIR System of Equations'''
        #Creating an array of equations
        Eqs = np.zeros((3))
        Eqs[0] = -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
        Eqs[1] = beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1]
        Eqs[2] = gamma*PopIn[1] - mu*PopIn[2]
        return Eqs

    ode = spi.ode(eq_system)

    # BDF method suited to stiff systems of ODEs
    ode.set_integrator('vode', nsteps=500, method='bdf')
    ode.set_initial_value(PopIn, t_start)

    ts = []
    ys = []
    while ode.successful() and ode.t < t_end:
        ode.integrate(ode.t + t_step)
        ts.append(ode.t)
        ys.append(ode.y)

    t = np.vstack(ts)
    s, i, r = np.vstack(ys).T

    fig, ax = subplots(1, 1)
    ax.hold(True)
    ax.plot(t, s, label='Susceptible')
    ax.plot(t, i, label='Infected')
    ax.plot(t, r, label='Recovered')
    ax.set_xlim(t_start, t_end)
    ax.set_ylim(0, 100)
    ax.set_xlabel('Time')
    ax.set_ylabel('Percent')
    ax.legend(loc=0, fancybox=True)
    return t, s, i, r, fig, ax
Output: a plot of the Susceptible, Infected and Recovered curves (percent of the population) over time.
The infected population PopIn[1] decays to zero. Apparently, (normal) numerical imprecision leads to PopIn[1] becoming negative (approx. -3.549e-12) near t=322.9. Then eventually the solution blows up near t=7818.093, with PopIn[0] going toward +infinity and PopIn[1] going toward -infinity.
Edit: I removed my earlier suggestion for a "quick fix". It was a questionable hack.
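If you want to see this in your own output, here is a small sketch (it assumes the SIR array and t_interval from the question, computed with the mxstep workaround from the other answer so that the integration actually finishes):

import numpy as np

i_vals = SIR[:, 1]                 # column 1 of SIR is the infected population I(t)
print("min I(t) in the returned output:", i_vals.min())
neg = np.where(i_vals < 0)[0]
if neg.size:
    print("I(t) first drops below zero near t =", t_interval[neg[0]])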
I'm working with Python/numpy/scipy to write a small ray tracer. Surfaces are modelled as two-dimensional functions giving a height above a normal plane. I reduced the problem of finding the point of intersection between ray and surface to finding the root of a function with one variable. The functions are continuous and continuously differentiable.
Is there a way to do this more efficiently than simply looping over all the functions, using scipy root finders (and maybe using multiple processes)?
Edit: The functions are the difference between a linear function representing the ray and the surface function, constrained to a plane of intersection.
The following example shows how to calculate the roots of 1 million copies of the function x**(a+1) - b (all with different a and b) in parallel using the bisection method. It takes about 12 seconds here.
import numpy

def F(x, a, b):
    return numpy.power(x, a+1.0) - b

N = 1000000
a = numpy.random.rand(N)
b = numpy.random.rand(N)

x0 = numpy.zeros(N)
x1 = numpy.ones(N) * 1000.0

max_step = 100
for step in range(max_step):
    x_mid = (x0 + x1)/2.0
    F0 = F(x0, a, b)
    F1 = F(x1, a, b)
    F_mid = F(x_mid, a, b)

    x0 = numpy.where( numpy.sign(F_mid) == numpy.sign(F0), x_mid, x0 )
    x1 = numpy.where( numpy.sign(F_mid) == numpy.sign(F1), x_mid, x1 )

    error_max = numpy.amax(numpy.abs(x1 - x0))
    print "step=%d error max=%f" % (step, error_max)
    if error_max < 1e-6: break
The basic idea is to simply run all the usual steps of a root finder in parallel on a vector of variables, using a function that can be evaluated on a vector of variables and equivalent vector(s) of parameters that define the individual component functions. Conditionals are replaced with a combination of masks and numpy.where(). This can continue until all roots have been found to the required precision, or alternately until enough roots have been found that it is worth to remove them from the problem and continue with a smaller problem that excludes those roots.
The functions I chose to solve are arbitrary, but it helps if the functions are well-behaved; in this case all functions in the family are monotonic and have exactly one positive root. Additionally, for the bisection method we need guesses for the variable that give different signs of the function, and those happen to be quite easy to come up with here as well (the initial values of x0 and x1).
The above code uses perhaps the simplest root finder (bisection), but the same technique could be easily applied to Newton-Raphson, Ridder's, etc. The fewer conditionals there are in a root finding method, the better suited it is to this. However, you will have to reimplement any algorithm you want, there is no way to use an existing library root finder function directly.
The above code snippet is written with clarity in mind, not speed. Avoiding the repetition of some calculations, in particular evaluating the function only once per iteration instead of 3 times, speeds this up to 9 seconds, as follows:
...

F0 = F(x0, a, b)
F1 = F(x1, a, b)

max_step = 100
for step in range(max_step):
    x_mid = (x0 + x1)/2.0
    F_mid = F(x_mid, a, b)

    mask0 = numpy.sign(F_mid) == numpy.sign(F0)
    mask1 = numpy.sign(F_mid) == numpy.sign(F1)

    x0 = numpy.where( mask0, x_mid, x0 )
    x1 = numpy.where( mask1, x_mid, x1 )
    F0 = numpy.where( mask0, F_mid, F0 )
    F1 = numpy.where( mask1, F_mid, F1 )

...
For comparison, using scipy.optimize.bisect() to find one root at a time takes ~94 seconds:
for i in range(N):
    x_root = scipy.optimize.bisect(lambda x: F(x, a[i], b[i]), x0[i], x1[i], xtol=1e-6)
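To illustrate the earlier point about applying the same vectorized technique to other methods, here is a from-scratch sketch of vectorized Newton-Raphson for the same family of functions (the derivative is computed analytically; the iteration cap and tolerance are arbitrary choices):

import numpy

def F(x, a, b):
    return numpy.power(x, a + 1.0) - b

def dF(x, a, b):
    # analytic derivative of F with respect to x
    return (a + 1.0) * numpy.power(x, a)

N = 1000000
a = numpy.random.rand(N)
b = numpy.random.rand(N)

# start to the right of every root (the roots b**(1/(a+1)) all lie in (0, 1));
# for this increasing, convex family Newton then converges monotonically
x = numpy.ones(N)
for step in range(50):
    x = x - F(x, a, b) / dF(x, a, b)
    residual_max = numpy.amax(numpy.abs(F(x, a, b)))
    if residual_max < 1e-12:
        break

print("max |F(x)| after %d steps: %g" % (step + 1, residual_max))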
Sometime in the past few years, scipy.optimize.newton gained vectorization support. Applied to the example from the other answer, it now looks like this:
import numpy as np
from scipy import optimize

def F(x, a, b):
    return np.power(x, a+1.0) - b

N = 1000000
a = np.random.rand(N)
b = np.random.rand(N)

optimize.newton(F, np.zeros(N), args=(a, b))
This runs just as fast as the vectorized bisection method in the other answer.