SciPy step response plot seems to break for some values

I'm using SciPy instead of MATLAB in a control systems class to plot the step responses of LTI systems. It's worked great so far, but I've run into an issue with a very specific system. With this code:
from numpy import min
from scipy import linspace
from scipy.signal import lti, step
from matplotlib import pyplot as p
# Create an LTI transfer function from coefficients
tf = lti([64], [1, 16, 64])
# Step response (redo it to get better resolution)
t, s = step(tf)
t, s = step(tf, T = linspace(min(t), t[-1], 200))
# Plotting stuff
p.plot(t, s)
p.xlabel('Time / s')
p.ylabel('Displacement / m')
p.show()
The code as-is displays a flat line. If I modify the final coefficient of the denominator to 64.0000001 (i.e., tf = lti([64], [1, 16, 64.0000001])) then it works as it should, showing an underdamped step response. Setting the coefficient to 63.9999999 also works. Changing all the coefficients to have explicit decimal places (i.e., tf = lti([64.0], [1.0, 16.0, 64.0])) doesn't affect anything, so I guess it's not a case of integer division messing things up.
Is this a bug in SciPy, or am I doing something wrong?

This is a limitation of the implementation of the step function. It uses a matrix exponential to find the step response, and it doesn't handle repeated poles well. (Your system has a repeated pole at -8.)
Instead of using step, you can use the function scipy.signal.step2:
In [253]: from scipy.signal import lti, step2
In [254]: sys = lti([64], [1, 16, 64])
In [255]: t, y = step2(sys)
In [256]: plot(t, y)
Out[256]: [<matplotlib.lines.Line2D at 0x5ec6b90>]
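For completeness, here is a self-contained version of the fix as a minimal sketch. np.roots is used only to confirm the repeated pole, and note that step2 has since been deprecated in favor of an improved step in recent SciPy releases, so the distinction matters mainly for older versions:
import numpy as np
from scipy.signal import lti, step2
from matplotlib import pyplot as p
# The denominator s**2 + 16*s + 64 factors as (s + 8)**2:
print(np.roots([1, 16, 64]))  # [-8. -8.]  -> repeated pole at -8
tf = lti([64], [1, 16, 64])
# step2 solves the ODE directly (via lsim2) and copes with repeated poles
t, s = step2(tf)
p.plot(t, s)
p.xlabel('Time / s')
p.ylabel('Displacement / m')
p.show()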

Related

Trying to port lsqcurvefit from Matlab's Optimization Toolbox to Python using curve_fit

I am trying to reimplement Matlab's lsqcurvefit in Python using curve_fit, with no success. Below is the Matlab code I am trying to port:
myfun = @(x,xdata)(exp(x(1))./ xdata.^exp(x(2))) - x(3);
xstart = [4, -2, 54];
pX = [2, 3, 13, 12, 38, 39];
pY = [12.7595, 8.7857, -11.8802, -10.9528, -15.4390, -15.3083];
try
    fittedmodel = lsqcurvefit(myfun, xstart, double(pX), double(pY), [], [], optimset('Display', 'off'));
    disp("fitted model:");
    disp(fittedmodel);
catch
end
Below is my matlab output:
fitted model:
4.8389 3.3577 -2.0000
Below is my Python code:
from scipy.optimize import curve_fit
import numpy as np
pX = [2, 3, 13, 12, 38, 39]
pY = [12.7595, 8.7857, -11.8802, -10.9528, -15.4390, -15.3083]
def myfun(x, xdata):
    temp_val_1 = np.exp(x[0])
    temp_val_2 = np.exp(x[1])
    temp_val_3 = x[2]
    temp_val_4 = np.power(xdata, temp_val_2)
    temp_val_5 = np.divide(temp_val_1, temp_val_4)
    temp_val_6 = temp_val_5 - temp_val_3
    return temp_val_6
popt, pcov = curve_fit(myfun, pX, pY, p0=([4, -2, 54]))
print(popt, "\n", pcov)
and below is my Python output:
myfun() takes 2 positional arguments but 4 were given
I understand that there is something wrong with the inputs, but I don't understand what to change to solve this and receive the same results as I do with matlab.
Here are a few hints to get you started:
Note that curve_fit expects a function with the signature f(xdata, *x), where x is your optimization variable, i.e. the coefficients being fitted. That's just the other way around compared to Matlab's lsqcurvefit. The notation *x is Python-specific and denotes a variable number of arguments.
Additionally, you don't need the np.power and np.divide functions. The usual mathematical operators are overloaded for NumPy arrays and are applied elementwise, so for two arrays a / b is equivalent to Matlab's a ./ b. Consequently, it's more convenient to write (and to read):
def myfun(xdata, *x):
    return np.exp(x[0]) / xdata**np.exp(x[1]) - x[2]
I obtain the following coefficients:
[ 4.01234549 -0.47409326 21.70045585]
However, there seems to be an overflow in the term np.exp(x[1]), so it might be worth reformulating the objective function or increasing the floating-point precision, e.g. by using long doubles (dtype=np.float128).
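Putting those hints together, a minimal runnable sketch might look like the following. Naming the coefficients explicitly instead of using *x is an optional stylistic choice, and the fitted values can differ from Matlab's, since the problem is sensitive to the starting point:
import numpy as np
from scipy.optimize import curve_fit
pX = np.asarray([2, 3, 13, 12, 38, 39], dtype=float)
pY = np.asarray([12.7595, 8.7857, -11.8802, -10.9528, -15.4390, -15.3083])
# curve_fit passes xdata first, then each coefficient as its own argument
def myfun(xdata, a, b, c):
    return np.exp(a) / xdata**np.exp(b) - c
popt, pcov = curve_fit(myfun, pX, pY, p0=[4, -2, 54])
print(popt)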

quadpy IntegrationError: Tolerances (abs: 1.49e-08, rel: 1.49e-08) could not be reached with the given max_num_subintervals (= 50)

I am using quadpy to integrate a function in python.
Function
import numpy as np
T = 2*np.pi
def ex1(t):
    return np.where(np.logical_and((t%T>=0), (t%T<np.pi)), t%T, np.pi)
The function is periodic; the following code plots it:
import matplotlib.pyplot as plt
x = np.linspace(0, 6*T, 1000)
plt.plot(x, ex1(x))
plt.grid(True)
plt.show()
Problem
I am trying to integrate this function:
from scipy.integrate import quad
import quadpy
print(quadpy.quad(ex1, 0, 3))
print(quad(ex1, 0, 3))
produces
(array(4.5), array(1.41692995e-19))
(4.5, 4.9960036108132044e-14)
On an interval from 0 to 3, everything works fine.
However, if I increase the interval to e.g. 4, scipy still works.
print(quad(ex1, 0, 4))
produces
(7.631568411183528, 1.0717732083155035e-08)
but
print(quadpy.quad(ex1, 0, 4))
produces
IntegrationError: Tolerances (abs: 1.49e-08, rel: 1.49e-08) could not be reached with the given max_num_subintervals (= 50).
Questions
How do I prevent this error? I tried adding an argument called max_num_subintervals but that did not seem to work.
Am I using quadpy correctly for what I am trying to do? I started using it because I wanted to integrate complex-valued functions, which scipy does not support, and I would like a one-size-fits-all solution, hence using quadpy even for these easier examples where scipy would suffice.
This was a quadpy bug after all. Fixed now:
import numpy as np
from scipy.integrate import quad
import quadpy
T = 2 * np.pi
def ex1(t):
    return np.where(np.logical_and((t % T >= 0), (t % T < np.pi)), t % T, np.pi)
print(quadpy.__version__)
print(quadpy.quad(ex1, 0, 4))
print(quad(ex1, 0, 4))
0.14.7
(array(7.63156841), array(1.13827355e-16))
(7.631568411183528, 1.0717732083161812e-08)
You're probably using an outdated quadpy version.
After some more experimenting I have found that quadpy's quad method takes the same arguments as scipy's quad method, which are documented here:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.html#scipy.integrate.quad
Using the optional arguments epsabs, epsrel and limit, I can thus prevent the error:
print(quadpy.quad(ex1, 0, 4, epsabs=1e-1, epsrel=1e-1, limit=100))
produces
(array(7.6323447), array(0.01666253))
However, scipy's quad method reaches an error tolerance of about 1e-08, which I really can't reproduce with quadpy, even when setting limit to a very high value like 1000000.
Why is that?
Update: it turns out the behaviour I experienced is a bug, now reported at https://github.com/nschloe/quadpy/issues/255.
In general, however, this answer addresses my initial question.

Solving dynamical ODE System with Python

I am trying to solve a dynamical system with three state variables V1, V2, I3 and then plot these in a 3D plot. My code so far looks as follows:
from scipy.integrate import ode
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import math
def ID(V, a, b):
    return a*(math.exp(b*V) - math.exp(-b*V))
def dynamical_system(t, z, C1, C2, L, R1, R2, R3, RN, a, b):
    V1, V2, I3 = z
    f = [(1/C1)*(V1*(1/RN - 1/R1) - ID(V1-V2, a, b) - (V1-V2)/R2),
         (1/C2)*(ID(V1-V2, a, b) + (V1-V2)/R2 - I3),
         (1/L)*(-I3*R3 + V2)]
    return f
# Create an `ode` instance to solve the system of differential
# equations defined by `dynamical_system`, and set the solver method to 'dopri5'.
solver = ode(dynamical_system)
solver.set_integrator('dopri5')
# Set the initial value z(0) = z0.
C1=10
C2=100
L=0.32
R1=22
R2=14.5
R3=100
RN=6.9
a=2.295*10**(-5)
b=3.0038
solver.set_f_params(C1,C2,L,R1,R2,R3,RN,a,b)
t0 = 0.0
z0 = [-3, 0.5, 0.25] #here you can set the inital values V1,V2,I3
solver.set_initial_value(z0, t0)
# Create the array `t` of time values at which to compute
# the solution, and create an array to hold the solution.
# Put the initial value in the solution array.
t1 = 25
N = 200 #number of iterations
t = np.linspace(t0, t1, N)
sol = np.empty((N, 3))
sol[0] = z0
# Repeatedly call the `integrate` method to advance the
# solution to time t[k], and save the solution in sol[k].
k = 1
while solver.successful() and solver.t < t1:
    solver.integrate(t[k])
    sol[k] = solver.y
    k += 1
xlim = (-4,1)
ylim= (-1,1)
zlim=(-1,1)
fig=plt.figure()
ax=fig.gca(projection='3d')
#ax.view_init(35,-28)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_zlim(zlim)
print sol[:,0]
print sol[:,1]
print sol[:,2]
ax.plot3D(sol[:,0], sol[:,1], sol[:,2], 'gray')
plt.show()
Printing the arrays that should hold the solutions (sol[:,0] etc.) shows that they are apparently just filled with the initial value. Can anyone help? Thanks!
Use from __future__ import division.
I can't reproduce your problem: I see a gradual change from -3 to -2.46838127, from 0.5 to 0.38022886 and from 0.25 to 0.00380239 (with a sharp change from 0.25 to 0.00498674 in the first step). This is with Python 3.7.0, NumPy version 1.15.3 and SciPy version 1.1.0.
Given that you are using Python 2.7, integer division may be the culprit here. Quite a number of your constants are integer, and you have a bunch of 1/<constant> integer divisions in your equation.
Indeed, if I replace / with // in my version (for Python 3), I can reproduce your problem.
Simply add from __future__ import division at the top of your script to solve your problem.
Also add from __future__ import print_function at the top and replace print <something> with print(<something>), and your script is fully Python 3 and 2 compatible.
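A minimal sketch of the difference (runnable under both Python 2 and 3):
from __future__ import division  # must be the first statement in the file
C1 = 10
print(1 / C1)   # 0.1 with the future import; 0 under plain Python 2
print(1 // C1)  # 0: floor division, the old behaviour of / in Python 2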

Laguerre polynomials in python using scipy, lack of convergence?

The Laguerre polynomials don't seem to be converging at some orders, as can be demonstrated by running the following code.
import numpy as np
from sympy import mpmath as mp
from scipy.special import genlaguerre as genlag
from sympy.mpmath import laguerre as genlag2
from matplotlib import pyplot as plt
def laguerre(x, r_ord, phi_ord, useArbitraryPrecision=False):
    if (r_ord < 30 and phi_ord < 30) and not useArbitraryPrecision:
        polyCoef = genlag(r_ord, phi_ord)
        out = np.polyval(polyCoef, x)
    else:
        fun = lambda arg: genlag2(r_ord, phi_ord, arg)
        fun2 = np.frompyfunc(genlag2, 3, 1)
        # fun2 = np.vectorize(fun)
        out = fun2(r_ord, phi_ord, x)
    return out
r_ord = 29
phi_ord = 29
f = lambda x, useArb : mp.log10(laguerre(x, 29, 29, useArb))
mp.plot(lambda y : f(y, True) - f(y, False), [0, 200], points = 1e3)
plt.show()
I was wondering if anyone knew what is going on, or of any accuracy limitations of the scipy function? Do you recommend I simply use the mpmath function? At first I thought it might stop working beyond a certain order, but for (100, 100) it seems to work just fine.
By running
mp.plot([lambda y : f(y, True), lambda y: f(y, False)], [0, 200], points = 1e3)
you get a plot where the discrepancy between the two becomes pretty clear.
Any help appreciated.
Let me know if anything needs clarification.
Using polyval with high-order polynomials (about n > 20) is in general a bad idea, because evaluating a polynomial from its coefficients (in the power basis) starts producing large floating-point errors at high orders. The warning in the Scipy documentation tries to tell you that.
You should use scipy.special.eval_genlaguerre(r_ord, phi_ord, float(x)) instead of genlaguerre + polyval; it uses a more stable numerical algorithm for evaluating the polynomial.
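For illustration, here is a small sketch comparing the two evaluation routes; the order and evaluation point are arbitrary choices:
import numpy as np
from scipy.special import genlaguerre, eval_genlaguerre
n, alpha, x = 29, 29, 150.0
# Route 1: expand into power-basis coefficients, then evaluate with polyval.
# The coefficients are huge and alternate in sign, so cancellation destroys
# precision at high order.
via_polyval = np.polyval(genlaguerre(n, alpha), x)
# Route 2: direct evaluation with a numerically stable recurrence.
via_eval = eval_genlaguerre(n, alpha, x)
print(via_polyval, via_eval)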
Instead of using scipy.special.eval_genlaguerre to evaluate a high-degree polynomial as pv suggested, you can also use numpy.polynomial.Laguerre as explained in the NumPy documentation.
Unfortunately, it doesn't seem to provide a function for generalized Laguerre polynomials.
import numpy as np
from numpy.polynomial import Laguerre
p = Laguerre([1, -2, 1])
x = np.arange(5)
p(x)
Output: array([0. , 0.5, 2. , 4.5, 8. ]). Note that the coefficients are in the Laguerre basis, so p = L0 - 2*L1 + L2, which equals x**2/2.

Using adaptive step sizes with scipy.integrate.ode

The (brief) documentation for scipy.integrate.ode says that two methods (dopri5 and dop853) have stepsize control and dense output. Looking at the examples and the code itself, I can only see a very simple way to get output from an integrator. Namely, it looks like you just step the integrator forward by some fixed dt, get the function value(s) at that time, and repeat.
My problem has pretty variable timescales, so I'd like to just get the values at whatever time steps it needs to evaluate to achieve the required tolerances. That is, early on, things are changing slowly, so the output time steps can be big. But as things get interesting, the output time steps have to be smaller. I don't actually want dense output at equal intervals, I just want the time steps the adaptive function uses.
EDIT: Dense output
A related notion (almost the opposite) is "dense output", whereby the steps taken are as large as the stepper cares to take, but the values of the function are interpolated (usually with accuracy comparable to that of the stepper) to whatever points you want. The Fortran code underlying scipy.integrate.ode is apparently capable of this, but ode does not expose that interface. odeint, on the other hand, is based on a different code, and evidently does do dense output. (You can print something every time your right-hand side is called and see that this happens at times that have nothing to do with the requested output times.)
So I could still take advantage of adaptivity, as long as I could decide on the output time steps I want ahead of time. Unfortunately, for my favorite system, I don't even know what the approximate timescales are as functions of time, until I run the integration. So I'll have to combine the idea of taking one integrator step with this notion of dense output.
EDIT 2: Dense output again
Apparently, scipy 1.0.0 introduced support for dense output through a new interface. In particular, they recommend moving away from scipy.integrate.odeint and towards scipy.integrate.solve_ivp, which has a dense_output keyword. If set to True, the returned object has an attribute sol that you can call with an array of times; it returns the integrated function's values at those times. That still doesn't solve the problem for this question, but it is useful in many cases.
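For reference, a minimal sketch of that newer interface, reusing the logistic example from the answers below:
import numpy as np
from scipy.integrate import solve_ivp
def logistic(t, y, r=0.01):
    return r * y * (1.0 - y)
res = solve_ivp(logistic, (0.0, 5000.0), [1e-5], dense_output=True)
print(res.t)                                  # the adaptively chosen time steps
print(res.sol(np.linspace(0.0, 5000.0, 9)))  # interpolated values at arbitrary times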
Since SciPy 0.13.0:
The intermediate results from the dopri family of ODE solvers can now be accessed by a solout callback function.
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt
def logistic(t, y, r):
    return r * y * (1.0 - y)
r = .01
t0 = 0
y0 = 1e-5
t1 = 5000.0
backend = 'dopri5'
# backend = 'dop853'
solver = ode(logistic).set_integrator(backend)
sol = []
def solout(t, y):
    sol.append([t, *y])
solver.set_solout(solout)
solver.set_initial_value(y0, t0).set_f_params(r)
solver.integrate(t1)
sol = np.array(sol)
plt.plot(sol[:,0], sol[:,1], 'b.-')
plt.show()
Result: (plot: the logistic S-curve, with a marker at each accepted solver step)
The result seems to be slightly different from Tim D's, although they both use the same backend. I suspect this has to do with the FSAL (first-same-as-last) property of dopri5: in Tim's approach, I think the result k7 from the seventh stage is discarded, so k1 is calculated afresh.
Note: There's a known bug with set_solout not working if you set it after setting initial values. It was fixed as of SciPy 0.17.0.
I've been looking at this to try to get the same result. It turns out you can use a hack to get the step-by-step results by setting nsteps=1 in the ode instantiation. It will generate a UserWarning at every step (this can be caught and suppressed).
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt
import warnings
def logistic(t, y, r):
    return r * y * (1.0 - y)
r = .01
t0 = 0
y0 = 1e-5
t1 = 5000.0
#backend = 'vode'
backend = 'dopri5'
#backend = 'dop853'
solver = ode(logistic).set_integrator(backend, nsteps=1)
solver.set_initial_value(y0, t0).set_f_params(r)
# suppress Fortran-printed warning
solver._integrator.iwork[2] = -1
sol = []
warnings.filterwarnings("ignore", category=UserWarning)
while solver.t < t1:
    solver.integrate(t1, step=True)
    sol.append([solver.t, solver.y])
warnings.resetwarnings()
sol = np.array(sol)
plt.plot(sol[:,0], sol[:,1], 'b.-')
plt.show()
Result: (plot: the logistic curve again, with a marker at each internal step)
The integrate method accepts a boolean argument step that tells the method to return a single internal step. However, it appears that the 'dopri5' and 'dop853' solvers do not support it.
The following code shows how you can get the internal steps taken by the solver when the 'vode' solver is used:
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt
def logistic(t, y, r):
    return r * y * (1.0 - y)
r = .01
t0 = 0
y0 = 1e-5
t1 = 5000.0
backend = 'vode'
#backend = 'dopri5'
#backend = 'dop853'
solver = ode(logistic).set_integrator(backend)
solver.set_initial_value(y0, t0).set_f_params(r)
sol = []
while solver.successful() and solver.t < t1:
    solver.integrate(t1, step=True)
    sol.append([solver.t, solver.y])
sol = np.array(sol)
plt.plot(sol[:,0], sol[:,1], 'b.-')
plt.show()
Result: (plot: the logistic curve, with markers at the steps chosen by vode)
FYI, although an answer has already been accepted, I should point out for the historical record that dense output and arbitrary sampling from anywhere along the computed trajectory are natively supported in PyDSTool. This also includes a record of all the adaptively determined time steps used internally by the solver. PyDSTool interfaces with both dopri853 and radau5 and auto-generates the C code necessary to interface with them, rather than relying on (much slower) Python function callbacks for the right-hand-side definition. None of these features are natively or efficiently provided in any other Python-focused solver, to my knowledge.
Here's another option that should also work with dopri5 and dop853. Basically, the solver calls the logistic() function as often as it needs to calculate intermediate values, so that's where we store the results. Bear in mind that this records every right-hand-side evaluation, including internal Runge-Kutta stages and rejected steps, so the samples are denser than (and not identical to) the accepted step points:
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt
sol = []
def logistic(t, y, r):
    sol.append([t, y])
    return r * y * (1.0 - y)
r = .01
t0 = 0
y0 = 1e-5
t1 = 5000.0
# Maximum number of steps that the integrator is allowed
# to do along the whole interval [t0, t1].
N = 10000
#backend = 'vode'
backend = 'dopri5'
#backend = 'dop853'
solver = ode(logistic).set_integrator(backend, nsteps=N)
solver.set_initial_value(y0, t0).set_f_params(r)
# Single call to solver.integrate()
solver.integrate(t1)
sol = np.array(sol)
plt.plot(sol[:,0], sol[:,1], 'b.-')
plt.show()
