How to combine polynomials in matrix operations in Sympy? - python

I'm doing some matrix operations, sometimes involving matrices whose entries have constant values.
But for some reason, I cannot get the matrix operation to combine the results into a single polynomial, even when the result is simple. For example, consider the following:
from sympy.matrices import *
import sympy
x = sympy.symbols('x')
Poly_matrix = sympy.Matrix([[sympy.Poly(x, x)], [sympy.Poly(x, x)]])
constant_matrix = sympy.Matrix([[0, 1]])
constant_matrix_poly = constant_matrix.applyfunc(lambda val: sympy.Poly(val, x))
# this doesn't combine them
result1 = constant_matrix * Poly_matrix
print(result1)
>>>> Matrix([[Poly(0, x, domain='ZZ') + Poly(x, x, domain='ZZ')]])
# even THIS doesn't combine them when I convert the constants to Polynomials
result = constant_matrix_poly * Poly_matrix
print(result)
>>>> Matrix([[Poly(0, x, domain='ZZ') + Poly(x, x, domain='ZZ')]])
The problem with this, is that when I try to extract the expression, and turn this result into a different polynomial, I get the following error:
# This is where the trouble starts
sympy.Poly(result[0].as_expr(), x)
sympy.Poly(result1[0].as_expr(), x)
And the resulting error is a long traceback, ending with:
PolynomialError: Poly(x, x, domain='ZZ') contains an element of the set of generators.
To be even more specific, the trouble is with result[0].as_expr(): even though result[0] is still an object of type Poly (so the method as_expr() is available), the returned expression still contains the unevaluated Poly terms, and sympy.Poly() cannot build a polynomial from it.
Why is it that these polynomials do not automatically get combined into one?
Or is there another way for me to call sympy.Poly(result[0].as_expr(), x)?
EDIT: Here are some questions with a similar error (although sometimes caused by something different):
sympy: PolynomialError: cos(a) contains an element of the generators set
Sympy Error when using POLY with SQRT

I stumbled upon this issue some time ago while running some code from a post. Looking into the source code of the matrix multiplication function _eval_matrix_mul on line 174 of dense.py, it turns out that sympy.Add is used to perform the addition during the computation rather than the + operator. Therefore Poly.add is never invoked, and an unevaluated Add expression is created instead.
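To see the difference concretely, here is a minimal sketch (the unevaluated form quoted above is exactly what sympy.Add produces):
from sympy import Poly, symbols
x = symbols('x')
p = Poly(x, x)
# the + operator dispatches to Poly.add, so the terms combine:
print(p + p)  # Poly(2*x, x, domain='ZZ')
# matrix multiplication instead builds its sums via sympy.Add, which does not
# dispatch to Poly.add - hence entries like Poly(0, x, ...) + Poly(x, x, ...)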
Further inspection of the source code reveals that there is a PolyMatrix class that rewrites the matrix multiplication functions for polynomials. This class works as expected, as shown in the following. However, it does not show up anywhere in the documentation for unknown reasons, so use it with caution. The docstring in the linked source code provides basic documentation for the class.
from sympy import Poly
from sympy.abc import x
from sympy.polys.polymatrix import PolyMatrix
mat = PolyMatrix([[Poly(x, x)], [1]])
const_mat = PolyMatrix([[4, 3]])
print(mat.shape, const_mat.shape)
print(mat.T * mat)
print(mat * mat.T)
print(2 * mat)
print(2 * mat + const_mat.T)
Output:
(2, 1) (1, 2)
Matrix([[Poly(x**2 + 1, x, domain='ZZ')]])
Matrix([[Poly(x**2, x, domain='ZZ'), Poly(x, x, domain='ZZ')], [Poly(x, x, domain='ZZ'), 1]])
Matrix([[Poly(2*x, x, domain='ZZ')], [2]])
Matrix([[Poly(2*x + 4, x, domain='ZZ')], [5]])
Another alternative is to use Expr.collect, which has the same functionality as sympy.collect, as shown in the following:
from sympy import Poly, Matrix
from sympy.abc import x
mat = Matrix([[Poly(x, x)], [1]])
result = mat.T * mat
simplified = result.applyfunc(lambda p: p.collect(x))
print(simplified)
Output:
Matrix([[Poly(x**2 + 1, x, domain='ZZ')]])
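A further option, not taken from either approach above (a sketch): drop the Poly wrappers before multiplying, so the entries combine as plain expressions, and re-wrap the result at the end:
from sympy import Poly, Matrix, symbols
x = symbols('x')
Poly_matrix = Matrix([[Poly(x, x)], [Poly(x, x)]])
constant_matrix = Matrix([[0, 1]])
expr_matrix = Poly_matrix.applyfunc(lambda p: p.as_expr())  # plain expressions add normally
result = constant_matrix * expr_matrix  # Matrix([[x]])
print(Poly(result[0], x))               # Poly(x, x, domain='ZZ')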

If all you want is the characteristic polynomial, use charpoly. This is more efficient than eigenvals, because sometimes symbolic roots can be expensive to calculate.
Sample
lamda = symbols('lamda')
p = M.charpoly(lamda)
factor(p)
(λ - 5)²⋅(λ - 3)⋅(λ + 2)
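A minimal runnable version of the sample (M is hypothetical here, chosen so that its eigenvalues 5, 5, 3, -2 reproduce the output above; any square Matrix works):
from sympy import diag, symbols, factor
M = diag(5, 5, 3, -2)       # hypothetical example matrix
lamda = symbols('lamda')
p = M.charpoly(lamda)       # characteristic polynomial as a PurePoly
print(factor(p.as_expr()))  # (lamda - 5)**2*(lamda - 3)*(lamda + 2); factor order may vary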

Here is one way to do this:
# even though this prints as two terms (which is kind of weird)
print(result[0].as_expr())
# ...this collapses it into one
exec('poly_expr = ' + str(result[0].as_expr()))
print(poly_expr)
# ...which allows the expressions to be combined
sympy.Poly(poly_expr, x)
However, this seems like a very hackish solution. Is there an easier way to do this?


Problems using numpy.piecewise

1. The core problem and question
I will provide an executable example below, but let me first walk you through the problem.
I am using solve_ivp from scipy.integrate to solve an initial value problem (see documentation). In fact I have to call the solver twice: once to integrate forward in time, and once backward. (I would have to go unnecessarily deep into my concrete problem to explain why this is necessary, but please trust me here--it is!)
sol0 = solve_ivp(rhs,[0,-1e8],y0,rtol=10e-12,atol=10e-12,dense_output=True)
sol1 = solve_ivp(rhs,[0, 1e8],y0,rtol=10e-12,atol=10e-12,dense_output=True)
Here rhs is the right-hand-side function of the initial value problem y'(t) = rhs(t,y). In my case, y has six components y[0] to y[5]. y0=y(0) is the initial condition. [0,±1e8] are the respective integration ranges, one forward and one backward in time. rtol and atol are tolerances.
Importantly, you see that I flagged dense_output=True, which means that the solver does not only return the solutions on the numerical grids, but also as interpolation functions sol0.sol(t) and sol1.sol(t).
My main goal now is to define a piecewise function, say sol(t) which takes the value sol0.sol(t) for t<0 and the value sol1.sol(t) for t>=0. So the main question is: How do I do that?
I thought that numpy.piecewise should be the tool of choice to do this for me. But I am having trouble using it, as you will see below, where I show you what I have tried so far.
2. Example code
The code in the box below solves the initial value problem of my example. Most of the code is the definition of the rhs function, the details of which are not important to the question.
import numpy as np
from scipy.integrate import solve_ivp
# aux definitions and constants
sin=np.sin; cos=np.cos; tan=np.tan; sqrt=np.sqrt; pi=np.pi;
c = 299792458
Gm = 5.655090674872875e26
# define right hand side function of initial value problem, y'(t) = rhs(t,y)
def rhs(t,y):
    p,e,i,Om,om,f = y
    sinf=np.sin(f); cosf=np.cos(f); Q=sqrt(p/Gm); opecf=1+e*cosf;
    R = Gm**2/(c**2*p**3)*opecf**2*(3*(e**2 + 1) + 2*e*cosf - 4*e**2*cosf**2)
    S = Gm**2/(c**2*p**3)*4*opecf**3*e*sinf
    rhs = np.zeros(6)
    rhs[0] = 2*sqrt(p**3/Gm)/opecf*S
    rhs[1] = Q*(sinf*R + (2*cosf + e*(1 + cosf**2))/opecf*S)
    rhs[2] = 0
    rhs[3] = 0
    rhs[4] = Q/e*(-cosf*R + (2 + e*cosf)/opecf*sinf*S)
    rhs[5] = sqrt(Gm/p**3)*opecf**2 + Q/e*(cosf*R - (2 + e*cosf)/opecf*sinf*S)
    return rhs
# define initial values, y0
y0=[3.3578528933149297e13,0.8846,2.34921,3.98284,1.15715,0]
# integrate twice from t = 0, once backward in time (sol0) and once forward in time (sol1)
sol0 = solve_ivp(rhs,[0,-1e8],y0,rtol=10e-12,atol=10e-12,dense_output=True)
sol1 = solve_ivp(rhs,[0, 1e8],y0,rtol=10e-12,atol=10e-12,dense_output=True)
The solution functions can be addressed from here by sol0.sol and sol1.sol respectively. As an example, let's plot the 4th component:
from matplotlib import pyplot as plt
t0 = np.linspace(-1,0,500)*1e8
t1 = np.linspace( 0,1,500)*1e8
plt.plot(t0,sol0.sol(t0)[4])
plt.plot(t1,sol1.sol(t1)[4])
plt.title('plot 1')
plt.show()
3. Failing attempts to build piecewise function
3.1 Build vector valued piecewise function directly out of sol0.sol and sol1.sol
def sol(t): return np.piecewise(t,[t<0,t>=0],[sol0.sol,sol1.sol])
t = np.linspace(-1,1,1000)*1e8
print(sol(t))
This leads to the following error in piecewise in line 628 of .../numpy/lib/function_base.py:
TypeError: NumPy boolean array indexing assignment requires a 0 or 1-dimensional input, input has 2 dimensions
I am not sure, but I do think this is because of the following: In the documentation of piecewise it says about the third argument:
funclist : list of callables, f(x,*args,**kw), or scalars
[...]. It should take a 1d array as input and give an 1d array or a scalar value as output. [...].
I suppose the problem is that the solution in my case has six components, so evaluated on a time grid the output would be a 2d array. Can someone confirm that this is indeed the problem? I think this really limits the usefulness of piecewise by a lot.
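A minimal sketch of that constraint (illustrative, not part of my original attempts): scalar- or 1d-valued functions work, while a 2d-valued one triggers exactly the TypeError quoted above.
import numpy as np
t = np.linspace(-1, 1, 9)
# 1d output per piece: works
print(np.piecewise(t, [t < 0, t >= 0], [np.sin, np.cos]))
# 2d output per piece (e.g. 6 components per time) raises the TypeError:
# np.piecewise(t, [t < 0, t >= 0], [lambda s: np.zeros((6, s.size))] * 2)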
3.2 Try the same, but just for one component (e.g. for the 4th)
def sol4(t): return np.piecewise(t,[t<0,t>=0],[sol0.sol(t)[4],sol1.sol(t)[4]])
t = np.linspace(-1,1,1000)*1e8
print(sol4(t))
This results in this error in line 624 of the same file as above:
ValueError: NumPy boolean array indexing assignment cannot assign 1000 input values to the 500 output values where the mask is true
Unlike with the previous error, here I unfortunately have no idea so far why it is not working.
3.3 Similar attempt, however first defining functions for the 4th component
def sol40(t): return sol0.sol(t)[4]
def sol41(t): return sol1.sol(t)[4]
def sol4(t): return np.piecewise(t,[t<0,t>=0],[sol40,sol41])
t = np.linspace(-1,1,1000)
plt.plot(t,sol4(t))
plt.title('plot 2')
plt.show()
Now this does not result in an error, and I can produce a plot; however, the plot doesn't look like it should (it should look like plot 1 above). Here too, I have no clue so far what is going on.
I am thankful for any help!
You can take a look at the numpy.piecewise source code. There is nothing special in this function, so I suggest doing everything manually.
def sol(t):
    ans = np.empty((6, len(t)))
    ans[:, t<0] = sol0.sol(t[t<0])
    ans[:, t>=0] = sol1.sol(t[t>=0])
    return ans
Regarding your failed attempts: yes, piecewise expects the functions to return a 1d array. Your second attempt failed because the documentation says the funclist argument should be a list of functions or scalars, but you passed a list of arrays. Contrary to the documentation it does work with arrays too, but they must be the same size as the masks t < 0 and t >= 0, like:
def sol4(t): return np.piecewise(t,[t<0,t>=0],[sol0.sol(t[t<0])[4],sol1.sol(t[t>=0])[4]])
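As a usage sketch (assuming sol0, sol1 and pyplot from the question):
t = np.linspace(-1, 1, 1000) * 1e8
y = sol(t)         # shape (6, 1000)
plt.plot(t, y[4])  # 4th component of the stitched solution, matching plot 1
plt.show()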

Least Squares method in practice

Very simple regression task. I have three variables x1, x2, x3 with some random noise, and I know the target equation: y = q1*x1 + q2*x2 + q3*x3. Now I want to find the target coefficients q1, q2, q3 and evaluate the performance of the prediction using the mean Relative Squared Error (RSE), (Prediction/Real - 1)^2.
From my research I see that this is an ordinary least squares problem, but I can't work out from the examples on the internet how to solve this particular problem in Python. Let's say I have data:
import numpy as np
sourceData = np.random.rand(1000, 3)
koefs = np.array([1, 2, 3])
target = np.dot(sourceData, koefs)
(In real life the data are noisy and not normally distributed.) How do I find these koefs using a least squares approach in Python? Any library may be used.
@ayhan made a valuable comment.
And there is a problem with your code: there is actually no noise in the measurements you collect. The input data is random, but after the multiplication you don't add any additional noise.
I've added some noise to your measurements and used the least squares formula to fit the parameters. Here's my code:
data = np.random.rand(1000,3)
true_theta = np.array([1,2,3])
true_measurements = np.dot(data, true_theta)
noise = np.random.rand(1000) * 1
noisy_measurements = true_measurements + noise
estimated_theta = np.linalg.inv(data.T @ data) @ data.T @ noisy_measurements
The estimated_theta will be close to true_theta. If you don't add noise to the measurements, they will be equal.
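As a quick sanity check of that claim (a sketch):
# with zero noise the recovery is exact up to floating point
exact_theta = np.linalg.inv(data.T @ data) @ data.T @ true_measurements
print(np.allclose(exact_theta, true_theta))  # True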
I've used the Python 3 matrix multiplication syntax. You could use np.dot instead of @.
That makes the code longer, so I've split the formula:
MTM_inv = np.linalg.inv(np.dot(data.T, data))
MTy = np.dot(data.T, noisy_measurements)
estimated_theta = np.dot(MTM_inv, MTy)
You can read up on least squares here: https://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)#The_general_problem
UPDATE:
Or you could just use the builtin least squares function:
np.linalg.lstsq(data, noisy_measurements)
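Note that np.linalg.lstsq returns more than the solution; a usage sketch:
# returns (solution, residuals, rank, singular values)
estimated_theta, residuals, rank, sv = np.linalg.lstsq(data, noisy_measurements, rcond=None)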
In addition to @lhk's answer, I have found the great scipy least_squares function. It is easy to get the requested behavior with it: we can provide a custom function that returns the residuals, and form the Relative Squared Error instead of the absolute squared difference:
import numpy as np
from scipy.optimize import least_squares
data = np.random.rand(1000,3)
true_theta = np.array([1,2,3])
true_measurements = np.dot(data, true_theta)
noise = np.random.rand(1000) * 1
noisy_measurements = true_measurements + noise
# noisy_measurements[-1] = data[-1] @ (1000 * true_theta)  # uncomment this outlier to see how much better the Relative Squared Error estimator works than the default abs diff for this case
def my_func(params, x, y):
    # if we change this line to: (x @ params) - y, we get the same result as np.linalg.lstsq
    res = (x @ params) / y - 1
    return res
x0 = np.ones(3)  # initial guess for the parameters (needed by least_squares)
res = least_squares(my_func, x0, args=(data, noisy_measurements))
estimated_theta = res.x
Also, we can provide a custom loss via the loss argument, a function that will process the residuals and form the final loss.
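For example, with one of the built-in robust losses:
# loss='soft_l1' is one of the built-in loss options of scipy.optimize.least_squares
res_robust = least_squares(my_func, x0, args=(data, noisy_measurements), loss='soft_l1')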

Sympy gives numerical numpy output in dtype object format

I am using the following piece of code to create symbolic Sympy expressions for the spherical harmonics functions Y_l^m (4-pi-normalized over the full sphere) and their theta derivatives and then want to evaluate them on some evenly spaced grid in theta and phi coordinates:
import numpy as np
from math import pi, cos, sin
import sympy
from sympy import Ynm, simplify, diff, lambdify
from sympy.abc import n,m,theta,phi
resol = 2.5
dtheta_rad_ylm = -resol * pi/180.0
dphi_rad_ylm = resol * pi/180.0
thetaarr_rad_ylm_symm = np.arange(pi+dtheta_rad_ylm/2.0,dtheta_rad_ylm/2.0,dtheta_rad_ylm)
phiarr_rad_ylm = np.arange(0.0,2*pi,dphi_rad_ylm)
phi_grid_rad_ylm, theta_grid_rad_ylm_symm = np.meshgrid(phiarr_rad_ylm, thetaarr_rad_ylm_symm)
lmax = len(thetaarr_rad_ylm_symm)//2 - 1
nmax = (lmax+1)*(lmax+2)//2
ylms_symm_full = np.zeros((lmax+1, lmax+1, len(thetaarr_rad_ylm_symm), len(phiarr_rad_ylm)))
dylms_symm_full = np.zeros((lmax+1, lmax+1, len(thetaarr_rad_ylm_symm), len(phiarr_rad_ylm)))
for n in np.arange(0,lmax+1):
    for m in np.arange(0,n+1):
        print("generating resol %s, y_%d_%d" % (resol,n,m))
        ylm_symbolic = simplify(2 * sympy.sqrt(sympy.pi) * Ynm(n,m,theta,phi).expand(func=True))
        dylm_symbolic = simplify(diff(ylm_symbolic, theta))
        # activate and deactivate comments for second-question-related error
        # error appears later than the first-question-related error!
        ylm_lambda = lambdify((theta,phi), sympy.N(ylm_symbolic), "numpy")
        dylm_lambda = lambdify((theta,phi), sympy.N(dylm_symbolic), "numpy")
        # ylm_lambda = lambdify((theta,phi), ylm_symbolic, "numpy")
        # dylm_lambda = lambdify((theta,phi), dylm_symbolic, "numpy")
        # activate and deactivate comments for first-question-related error
        ylm_symm_full = np.asarray(ylm_lambda(theta_grid_rad_ylm_symm, phi_grid_rad_ylm), dtype=complex)
        dylm_symm_full = np.asarray(dylm_lambda(theta_grid_rad_ylm_symm, phi_grid_rad_ylm), dtype=complex)
        # ylm_symm_full = ylm_lambda(theta_grid_rad_ylm_symm, phi_grid_rad_ylm)
        # dylm_symm_full = dylm_lambda(theta_grid_rad_ylm_symm, phi_grid_rad_ylm)
        if n == 0 and m == 0:
            ylm_symm_full = np.tile(ylm_symm_full, (len(thetaarr_rad_ylm_symm), len(phiarr_rad_ylm)))
            dylm_symm_full = np.tile(dylm_symm_full, (len(thetaarr_rad_ylm_symm), len(phiarr_rad_ylm)))
        ylms_symm_full[n,m,:,:] = np.real(ylm_symm_full)
        dylms_symm_full[n,m,:,:] = np.real(dylm_symm_full)
There are several other packages providing the functionality of generating numeric Y_l^m without symbolic expressions, like scipy.special.sph_harm. However, it is crucial for me to get an "exact" derivative, i.e. not using any numerical differentiation method as e.g. finite differences (np.gradient). Therefore after getting the symbolic formula for the Y_l^m and simplifying those "as much as possible", lambda functions are created using the numpy backend (to be able to do vectorized calculations) and those are then evaluated on the grid. Finally I only need the real part of the spherical harmonics (I know that I could also create real spherical harmonics with Znm instead of Ynm, but...).
Two questions:
Mostly, the numerical output is then given as a usual 2d numpy array of dtype complex or np.complex128. In some cases however, Sympy generates the array with dtype object; this particularly affects the high-l spherical harmonics. Array entries are then displayed as complex 1-tuples instead of just complex numbers. The problem is that taking the real part of that array has no effect, which then results in an error when it is broadcast into an array that has a real dtype. Is there any particular reason for this? I do not see any immediate one, since the output is not inhomogeneous. Is there any way to change this without having to cast it additionally to dtype complex using np.asarray? The cast takes additional computation time and makes the program slightly more complicated, but more importantly it is confusing.
You may also have noted that I use sympy.N to evaluate the expression already before I create the lambda function. The reason is that the prefactors in front of the spherical harmonics are in some cases of long format, and numpy, for whatever reason, cannot compute the sqrt of that number. Note that this is not true in general (np.sqrt(9L) = 3.0), but in this case there's an error message stating that the long object has no attribute sqrt. I suppose this is also related to the lambda function generation. Is there any way to tell Sympy to give the symbolic expression in float format every time? Or, better, to somehow modify the lambdify call?
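One way to phrase the latter, as a sketch that merely packages my sympy.N workaround into the lambdify call:
# evaluate all constant prefactors to floats before code generation,
# so the generated numpy code never sees Python longs
def lambdify_float(args, expr, modules="numpy"):
    return lambdify(args, sympy.N(expr), modules)
# hypothetical usage, mirroring the loop above:
# ylm_lambda = lambdify_float((theta, phi), ylm_symbolic)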
The code block should be stand-alone and testable if you want to check these issues. Just remove the sympy.N and the np.asarray expressions. The first question relates to the error that appears earlier. Y_l^m generation up to lmax (here 35) takes roughly 10-15 minutes.
Thanks in advance for your help!
UPDATE: Here are some minimal, complete and verifiable examples. For both please import the required packages:
import numpy as np
from math import pi, cos, sin
import sympy
from sympy import Ynm, simplify, diff, lambdify
from sympy.abc import n,m,theta,phi
Error #1: object dtype problem at n = 31, m = 1:
# minimal, complete and verifiable example (MCVe) #1
# error message:
#---> 43 dylms_symm_full[n,m,:,:] = np.real(dylm_symm_full)
#TypeError: can't convert complex to float
ylm_symbolic = simplify(2 * sympy.sqrt(sympy.pi) * Ynm(31,1,theta,phi).expand(func=True))
dylm_symbolic = simplify(diff(ylm_symbolic, theta))
ylm_lambda = lambdify((theta,phi), ylm_symbolic, "numpy")
dylm_lambda = lambdify((theta,phi), dylm_symbolic, "numpy")
ylm_symm_full = ylm_lambda(theta_grid_rad_ylm_symm, phi_grid_rad_ylm)
dylm_symm_full = dylm_lambda(theta_grid_rad_ylm_symm, phi_grid_rad_ylm)
ylms_symm_full = np.zeros((len(thetaarr_rad_ylm_symm), len(phiarr_rad_ylm)))
dylms_symm_full = np.zeros((len(thetaarr_rad_ylm_symm), len(phiarr_rad_ylm)))
ylms_symm_full[:,:] = np.real(ylm_symm_full)
dylms_symm_full[:,:] = np.real(dylm_symm_full)
print(ylm_symm_full)
print(dylm_symm_full)
Error #2: long sqrt attribute problem at n = 32, m = 29:
# minimal, complete and verifiable example (MCVe) #2
# error message:
#---> 33 ylm_symm_full = np.asarray(ylm_lambda(theta_grid_rad_ylm_symm, phi_grid_rad_ylm), dtype=complex)
#/opt/local/anaconda/anaconda-2.2.0/lib/python2.7/site-packages/numpy/__init__.pyc in <lambda>(_Dummy_4374, _Dummy_4375)
#AttributeError: 'long' object has no attribute 'sqrt'
ylm_symbolic = simplify(2 * sympy.sqrt(sympy.pi) * Ynm(32,29,theta,phi).expand(func=True))
dylm_symbolic = simplify(diff(ylm_symbolic, theta))
ylm_lambda = lambdify((theta,phi), ylm_symbolic, "numpy")
dylm_lambda = lambdify((theta,phi), dylm_symbolic, "numpy")
ylm_symm_full = np.asarray(ylm_lambda(theta_grid_rad_ylm_symm, phi_grid_rad_ylm), dtype=complex)
dylm_symm_full = np.asarray(dylm_lambda(theta_grid_rad_ylm_symm, phi_grid_rad_ylm), dtype=complex)
ylms_symm_full = np.zeros((len(thetaarr_rad_ylm_symm), len(phiarr_rad_ylm)))
dylms_symm_full = np.zeros((len(thetaarr_rad_ylm_symm), len(phiarr_rad_ylm)))
ylms_symm_full[:,:] = np.real(ylm_symm_full)
dylms_symm_full[:,:] = np.real(dylm_symm_full)
print(ylm_symbolic)  # the symbolic Y_32^29 expression
print(type(175844649714253329810))  # the number that causes the problem
The question of why your code produces an object array on occasion isn't something we can easily answer without a MCVe - and it can't just happen on occasion, it has to be reproducible.
However, if the array is object dtype, it can easily be converted to complex with
arr.astype(np.complex)
With a copy=False parameter you could apply it to all results without much computational cost.
arr.astype(np.complex, copy=False).real
The elements of the object version aren't tuples; they are scalar complex values, and just print that way.
In [187]: arr=np.random.rand(3)+np.random.rand(3)*1j
In [188]: arrO=arr.astype(object)
In [189]: arrO
Out[189]:
array([(0.6129476673822528+0.09323924558124808j),
(0.9540542895309456+0.81929476753951j),
(0.8068967867200485+0.9494305517611881j)], dtype=object)
In [190]: type(arrO[0])
Out[190]: complex
In [191]: arr.real
Out[191]: array([ 0.61294767, 0.95405429, 0.80689679])
In [193]: arrO[0]
Out[193]: (0.6129476673822528+0.09323924558124808j)
In [194]: arrO.astype(np.complex).real
Out[194]: array([ 0.61294767, 0.95405429, 0.80689679])
Some math operations do 'bleed-through' to elements of an object array, but real is not one of them. So as you note np.real(arrO) does not produce what you want.
Looking more at your code, including the stuff that scrolls off the screen, I see you are using:
np.asarray(dylm_lambda(...), dtype=complex)
That's the same as my astype(complex, copy=False).
For an already complex array the computational cost is minimal. For an object array it has to make a new array, and the cost is more substantial. But if you can't figure out why sympy is creating the object array, you have to live with the cost.

Plot arbitrary 2-D function in python/pyplot like Matlab's Ezplot

I'm looking for a way to generate a plot similar to how ezplot works in MATLAB in that I can type:
ezplot('x^2 + y^2 = y + 5')
and get a graph ready to go for any arbitrary function. I'm only worrying about the case where I have both an x and a y.
I only have the function, and I'd really rather not go about trying to calculate all the y values for some given x range if I didn't have to.
The few solutions I've seen suggested are either about decision boundaries (which this is not; there is no test data or anything, just an arbitrary function) or are for functions already defined as y = f(x), which doesn't really help me.
I would somewhat accept it if there were a good way to mimic Wolfram|Alpha's solve functionality ("solve x^2 + y^2 = y + 5 for y" will give me two functions I could then graph separately), but I'd rather have the ezplot approach, as that's more or less instant within MATLAB.
I think you could use sympy plotting and parse_expr for this. For your example, this would work as follows:
from sympy.plotting import plot_implicit
from sympy.parsing.sympy_parser import parse_expr
def ezplot(s):
    # parse_expr doesn't parse the = sign, so split on it
    lhs, rhs = s.replace("^", "**").split("=")
    eqn_lhs = parse_expr(lhs)
    eqn_rhs = parse_expr(rhs)
    plot_implicit(eqn_lhs - eqn_rhs)
ezplot('x^2 + y^2 = y + 5')
This can be made as general as needed
You could use sympy to solve the equation and then use the resulting functions for plotting y over x:
import numpy
import sympy
x = sympy.Symbol('x')
y = sympy.Symbol('y')
f = sympy.solve(x**2 + y**2 - y - 5, [y])
print(f)
xpts = (numpy.arange(10.) - 5) / 10
ypts = sympy.lambdify(x, f, 'numpy')(xpts)
# then e.g.: pylab.scatter(xpts, ypts)
@EdSmith's solution works fine. Nevertheless, I have another suggestion: you can plot a contour. You can rewrite your function as f(x, y) = 0 and then use this code:
from numpy import mgrid, pi
import matplotlib.pyplot as plt
def ezplot(f):
    # 51j in mgrid's complex-step notation means "51 points", not a step of 51
    x, y = mgrid[-2*pi:2*pi:51j, -2*pi:2*pi:51j]
    z = f(x, y)
    # draw only the zero-level contour of z = f(x, y)
    ezplt = plt.contour(x, y, z, [0], colors='k')
    return ezplt
That's the main idea. Of course, you can generalize it like the MATLAB function: general intervals for x and y, passing the function as a string, etc.
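For example, the question's equation can then be drawn like this (a usage sketch for the ezplot above):
# x^2 + y^2 = y + 5 rewritten as f(x, y) = 0
ezplot(lambda x, y: x**2 + y**2 - y - 5)
plt.show()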

integrate.ode sets t0 values outside of my data range

I would like to solve the ODE dy/dt = -2y + data(t), between t=0..3, for y(t=0)=1.
I wrote the following code:
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d
t = np.linspace(0, 3, 4)
data = [1, 2, 3, 4]
linear_interpolation = interp1d(t, data)
def func(y, t0):
    print('t0', t0)
    return -2*y + linear_interpolation(t0)
soln = odeint(func, 1, t)
When I run this code, I get several errors like this:
ValueError: A value in x_new is above the interpolation range.
odepack.error: Error occurred while calling the Python function named func
My interpolation range is between 0.0 and 3.0.
Printing the value of t0 in func, I realized that t0 is actually sometimes above my interpolation range: 3.07634612585, 3.0203768998, 3.00638459329, ... That's why linear_interpolation(t0) raises the ValueError exceptions.
I have a few questions:
how does integrate.ode make t0 vary? Why does it make t0 exceed the upper bound (3.0) of my interpolation range?
in spite of these errors, integrate.ode returns an array which seems to contain correct values. So, should I just catch and ignore these errors? Should I ignore them whatever the differential equation(s), the t range and the initial condition(s)?
if I shouldn't ignore these errors, what is the best way to avoid them? I have two suggestions:
in interp1d, I could set bounds_error=False and fill_value=data[-1], since the t0 outside of my interpolation range seem to be close to t[-1]:
linear_interpolation = interp1d(t, data, bounds_error=False, fill_value=data[-1])
But first I would like to be sure that, with any other func and any other data, t0 will always remain close to t[-1]. For example, if integrate.ode chooses a t0 below my interpolation range, the fill_value would still be data[-1], which would not be correct. Maybe knowing how integrate.ode makes t0 vary would help me be sure of that (see my first question).
in func, I could enclose the linear_interpolation call in a try/except block, and, when I catch a ValueError, call linear_interpolation again but with t0 truncated:
def func(y, t0):
    try:
        interpolated_value = linear_interpolation(t0)
    except ValueError:
        interpolated_value = linear_interpolation(int(t0))  # truncate t0
    return -2*y + interpolated_value
At least this solution permits linear_interpolation to still raise an exception if integrate.ode makes t0 >= 4.0 or t0 <= -1.0, so I can be alerted to incoherent behavior. But it is not really readable, and the truncation seems a little arbitrary to me.
Maybe I'm just over-thinking these errors. Please let me know.
It is normal for the odeint solver to evaluate your function at time values past the last requested time. Most ODE solvers work this way--they take internal time steps with sizes determined by their error control algorithm, and then use their own interpolation to evaluate the solution at the times requested by the user. Some solvers (e.g. the CVODE solver in the Sundials library) allow you to specify a hard bound on the time, beyond which the solver is not allowed to evaluate your equations, but odeint does not have such an option.
If you don't mind switching from scipy.integrate.odeint to scipy.integrate.ode, it looks like the "dopri5" and "dop853" solvers don't evaluate your function at times beyond the requested time. Two caveats:
The ode solvers use a different convention for the order of the arguments that define the differential equation. In the ode solvers, t is the first argument. (Yeah, I know, grumble, grumble...)
The "dopri5" and "dop853" solvers are for non-stiff systems. If your problem is stiff, they should still give correct answers, but they will do a lot more work than a stiff solver would do.
Here's a script that shows how to solve your example. To emphasize the change in the arguments, I renamed func to rhs.
import numpy as np
from scipy.integrate import ode
from scipy.interpolate import interp1d
t = np.linspace(0, 3, 4)
data = [1, 2, 3, 4]
linear_interpolation = interp1d(t, data)
def rhs(t, y):
    """The "right-hand side" of the differential equation."""
    # print('t', t)
    return -2*y + linear_interpolation(t)
# Initial condition
y0 = 1
solver = ode(rhs).set_integrator("dop853")
solver.set_initial_value(y0)
k = 0
soln = [y0]
while solver.successful() and solver.t < t[-1]:
    k += 1
    solver.integrate(t[k])
    soln.append(solver.y)
# Convert the list to a numpy array.
soln = np.array(soln)
The rest of this answer looks at how you could continue to use odeint.
If you are only interested in linear interpolation, you could simply extend your data linearly, using the last two points of the data. A simple way to extend the data array is to append the value 2*data[-1] - data[-2] to the end of the array, and do the same for the t array. If the last time step in t is small, this might not be a sufficiently long extension to avoid the problem, so in the following, I've used a more general extension.
Example:
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d
t = np.linspace(0, 3, 4)
data = [1, 2, 3, 4]
# Slope of the last segment.
m = (data[-1] - data[-2]) / (t[-1] - t[-2])
# Amount of time by which to extend the interpolation.
dt = 3.0
# Extended final time.
t_ext = t[-1] + dt
# Extended final data value.
data_ext = data[-1] + m*dt
# Extended arrays.
extended_t = np.append(t, t_ext)
extended_data = np.append(data, data_ext)
linear_interpolation = interp1d(extended_t, extended_data)
def func(y, t0):
    print('t0', t0)
    return -2*y + linear_interpolation(t0)
soln = odeint(func, 1, t)
If simply using the last two data points to extend the interpolator linearly is too crude, then you'll have to use some other method to extrapolate a little beyond the final t value given to odeint.
Another alternative is to include the final t value as an argument to func, and explicitly handle t values larger than it in func. Something like this, where extrapolation is something you'll have to figure out:
def func(y, t0, tmax):
    if t0 > tmax:
        f = -2*y + extrapolation(t0)
    else:
        f = -2*y + linear_interpolation(t0)
    return f
soln = odeint(func, 1, t, args=(t[-1],))
