I have a system of ODEs that I have so far solved via solve_ivp:
scipy.integrate.solve_ivp(fun=model, t_span=(0.0, t_end), y0=[s0])
My problem is that I want to solve the ODEs in a running simulation, where new values are constantly injected into the simulation and results are displayed. The simulation may run for several hours. My approach was to call solve_ivp repeatedly, roughly as follows (this lets me show intermediate results and fetch new data, which is not shown here):
t = 0.0
s = s0
while t < t_end:
    result = scipy.integrate.solve_ivp(fun=model, t_span=(t, t + t_step), y0=[s])
    s = result.y[0][-1]
    t += t_step
I wrote a few test cases where I solved the ODEs analytically, and for reasons I do not fully understand yet, repeatedly calling solve_ivp is always closer to the analytical solution (without manually tweaking solve_ivp's parameters). My question is more along these lines: is there something wrong with this approach, or is there functionality in scipy or another package that is better suited to my needs?
A poorly documented option is to use the integrators that underlie solve_ivp directly, for example RK45. When an external event changes something, you need to restart the integration, because these integrators carry internal state (the adapted step size, and for multi-step methods the history of prior steps) that is used to construct the value at the next time point. If you change values or the derivative function mid-integration, you will introduce subtle, hard-to-understand bugs.
Here is an example that changes the derivative function based on an event external to the integrator. In this case, the event is simply based on the wall-clock time. Note that this example will give different results on different machines, depending on execution speed.
import datetime

import matplotlib.pyplot as plt
from scipy import integrate

def fun1(t, y):
    return -y

def fun2(t, y):
    return 2 * y

def use_fun2():
    # external "event": which half of the current wall-clock second we are in
    return datetime.datetime.utcnow().time().microsecond >= 500000

time_bound = 20.

# max_step used to make sure this runs slow enough
# try changing this to see the difference
max_step = 0.0001

y0 = [5]
fun = fun1
rk45 = integrate.RK45(fun, 0, y0, time_bound, max_step=max_step)

t = []
y = []
while rk45.status == "running" and rk45.y[0] < 200:
    if use_fun2():
        if fun is fun1:
            fun = fun2
            # restart the integrator from its current state with the new RHS
            rk45 = integrate.RK45(fun, rk45.t, rk45.y, time_bound, max_step=max_step)
    else:
        if fun is fun2:
            fun = fun1
            rk45 = integrate.RK45(fun, rk45.t, rk45.y, time_bound, max_step=max_step)
    t.append(rk45.t)
    y.append(rk45.y)
    rk45.step()

plt.plot(t, y)
plt.show()
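For the original chunked-loop question, note that solve_ivp itself can also cover the "show intermediate results" part. Here is a sketch (my own assumption of how it maps onto the asker's model, t_step and t_end) using dense_output=True, which attaches an interpolant to each chunk:

import numpy as np
from scipy import integrate

def model(t, y):
    return -y  # placeholder right-hand side

t, t_end, t_step = 0.0, 10.0, 1.0
y = np.array([5.0])
while t < t_end:
    # integrate one chunk; new external data could be injected here
    sol = integrate.solve_ivp(model, (t, t + t_step), y, dense_output=True)
    ts = np.linspace(t, t + t_step, 50)
    ys = sol.sol(ts)   # smooth values inside the chunk, e.g. for display
    y = sol.y[:, -1]   # restart state for the next chunk
    t += t_step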
To check some of my results more easily, I used an Excel sheet to make a few diagrams. However, I noticed something really awkward.
EDIT:
So let me present the problem in another way; I found something that represents what I don't understand in my code.
import numpy as np
from scipy.integrate import odeint
A = []
def F(y, z):
    global A
    a = y[0]
    b = y[1]
    A.append(a)
    return [a, b]
y0 = [1, 1]
z = np.linspace(0, 1, 101)
y = odeint(F, y0, z)
print(len(z), len(A))
The question is why the length of z and A are different (e.g. 101 and 55)?
To my mind, during the solving, a should vary len(z) times, and so should A. So it looks like the linspace has no effect on the solving of the equations. Or perhaps I haven't understood the usage of linspace in Python.
The solution via odeint uses an implicit linear multi-step method with adaptive internal time stepping, implemented via a PECE predictor-corrector scheme. The "E" there stands for "evaluation", which means that in each internal integration step the ODE function is called twice. You might get fewer internal steps than the input time list has entries; the output array is interpolated from the internal time steps, so that you can have multiple output values per internal step. But the other extreme is also possible: to reach the requested tolerances, the internal step size may be so small that one output time step requires multiple internal steps.
If the problem were more stiff, there would be even more calls: periodically for the numerical approximation of the Jacobian, and possibly multiple calls per step for the Newton-like corrector, or just multiple simple correction steps, which is then called PE(CE)^d.
For comparison, look at the explicit classical RK4 method: there you have 4 evaluations of the ODE function per time step. The Dormand-Prince method of ode45 has 6+1 evaluations per time step; however, there too the internal time steps need not correspond to the time sample list passed to the method, and the requested output samples are interpolated from the internal steps.
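You can inspect the internal step and evaluation counts directly via odeint's full_output flag; a minimal sketch using the question's setup:

import numpy as np
from scipy.integrate import odeint

def F(y, z):
    return [y[0], y[1]]

z = np.linspace(0, 1, 101)
y, info = odeint(F, [1, 1], z, full_output=True)
# cumulative internal steps and ODE-function evaluations up to the last output time
print(info['nst'][-1], info['nfe'][-1])  # typically quite different from len(z)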
Usually when you do this by hand there's no problem, as the 1/r usually gets cancelled by another r. But doing this numerically with scipy.misc.derivative works like a charm only for r different from zero. Of course, as soon as I ask for r = 0, I get a division by zero, which I expected. So how else could I calculate this numerically? I insist that everything has to be done numerically, as my functions are now so complicated that I won't be able to find a derivative manually. Thank you!
My code:
rAtheta = lambda _r: _r*Atheta(_r, theta, z, t)

if r != 0:
    return derivative(rAtheta, r, dx=1e-10, order=3)/r
else:
    # What should go here so that it doesn't blow up when calculating the gradient?
tl;dr: use symbolic differentiation, or complex step differentiation if that fails
If you insist on using numerical methods, you really have to approximate the limit of the derivative as r->0 one way or the other.
I suggest trying complex step differentiation. The idea is to use complex arguments inside the function you're trying to differentiate; this usually gets rid of the numerical instability imposed by standard finite-difference schemes. The result is a procedure that needs complex arithmetic (hooray numpy, and Python in general!) but in turn can be much more stable at small dx values.
Here's another point: complex step differentiation uses
F′(x0) = Im(F(x0+ih))/h + O(h^2)
Let's apply this to your r=0 case:
F′(0) = Im(F(ih))/h + O(h^2)
There are no singularities even for r=0! Choose a small h, possibly the same dx you're passing to your function, and use that:
def rAtheta(_r):
    # note that named lambdas are usually frowned upon
    return _r*Atheta(_r, theta, z, t)

tol = 1e-10
dr = 1e-12

if np.abs(r) > tol:  # or math.fabs or your favourite other abs
    return derivative(rAtheta, r, dx=dr, order=3)/r
else:
    return rAtheta(r + 1j*dr).imag/dr/r
Here is the above in action for f = r*ln(r):
The result is smooth throughout, even though the points below r=1e-10 were computed with complex step differentiation.
Very important note: notice the separation between tol and dr in the code. The former is used to determine when to switch between methods, and the latter is used as a step in complex step differentiation. Look what happens when tol=dr=1e-10:
The result is a smooth but wrong function below r=1e-10! That's why you always have to be careful with numerical differentiation. And I wouldn't advise going much below that value for dr, as machine precision will bite you sooner or later.
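As an aside, here is a minimal check (on np.exp, whose derivative at 1 is e; a toy case, not part of the r=0 machinery above) showing why the complex step tolerates tiny steps while central differences do not:

import numpy as np

for h in [1e-6, 1e-10, 1e-14]:
    central = (np.exp(1 + h) - np.exp(1 - h)) / (2 * h)  # suffers cancellation
    cstep = np.imag(np.exp(1 + 1j * h)) / h              # no subtraction involved
    print(h, abs(central - np.e), abs(cstep - np.e))

The central-difference error blows up as h shrinks, while the complex-step error stays at machine-precision level.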
But why stop here? I'm fairly certain that your functions could be written in a vectorized way, i.e. they could accept an array of radial points. Using complex step differentiation, you don't have to loop over the radial points (which you would have to resort to with scipy.misc.derivative). Example:
import numpy as np
import matplotlib.pyplot as plt

def Atheta(r, *args):
    return r*np.log(r)  # <-- vectorized expression

def rAtheta(r):
    return r*Atheta(r)  # ,theta,z,t) <-- vectorized as much as Atheta is

def vectorized_difffun(rlist):
    r = np.asarray(rlist)
    dr = 1e-12
    return (rAtheta(r + 1j*dr)).imag/dr/r

rarr = np.logspace(-12, -2, 20)
darr = vectorized_difffun(rarr)

plt.figure()
plt.loglog(rarr, np.abs(darr), '.-')
plt.xlabel(r'$r$')
plt.ylabel(r'$|\frac{1}{r} \frac{d}{dr}(r^2 \ln r)|$')
plt.tight_layout()
plt.show()
The result should be familiar:
Having cleared up the fun weirdness that is complex step differentiation, I should note that you should strongly consider using symbolic math. In cases like this, when 1/r factors disappear exactly, it wouldn't hurt to reach that conclusion exactly. After all, double precision is still just double precision.
For this you'll need the sympy module: define your function symbolically once, differentiate it symbolically once, turn the simplified result into a numpy function using sympy.lambdify, and use that numerical function as much as you need (assuming the whole process runs in finite time and the resulting function is not too slow to use). Example:
import sympy as sym
# only needed for the example:
import numpy as np
import matplotlib.pyplot as plt
r = sym.symbols('r')
f = r*sym.ln(r)
df = sym.diff(r*f,r)
res_sym = sym.simplify(df/r)
res_num = sym.lambdify(r,res_sym,'numpy')
rarr = np.logspace(-12,-2,20)
darr = res_num(rarr)
plt.figure()
plt.loglog(rarr,np.abs(darr),'.-')
plt.xlabel(r'$r$')
plt.ylabel(r'$|\frac{1}{r} \frac{d}{dr}(r^2 \ln r)|$')
plt.tight_layout()
plt.show()
resulting in
As you can see, the resulting function was vectorized thanks to lambdify using numpy during the conversion from symbolic to numeric. Obviously, the symbolic solution is the best one, as long as the resulting function is not so complicated as to make its practical use impossible. I urge you to first try the symbolic version, and if for some reason it's not applicable, switch to complex step differentiation, with due caution.
I want to calculate and plot the gradient of any scalar function of two variables. If you really want a concrete example, let's say f=x^2+y^2, where x goes from -10 to 10, and the same for y. How do I calculate and plot grad(f)? The solution should be a vector and I should see vector lines. I am new to Python, so please use simple words.
EDIT:
@Andras Deak: thank you for your post. I tried what you suggested, and instead of your test function (fun=3*x^2-5*y^2) I used the function that I defined as V(x,y); this is how the code looks, but it reports an error:
import numpy as np
import math
import sympy
import matplotlib.pyplot as plt

def V(x, y):
    t = []
    for k in range(1, 3):
        for l in range(1, 3):
            t.append(0.000001*np.sin(2*math.pi*k*0.5)/((4*(math.pi)**2)*(k**2+l**2)))
            term = t * np.sin(2*math.pi*k*x/0.004) * np.cos(2*math.pi*l*y/0.004)
    return term
    return term.sum()

x, y = sympy.symbols('x y')
fun = V(x, y)
gradfun = [sympy.diff(fun, var) for var in (x, y)]
numgradfun = sympy.lambdify([x, y], gradfun)

X, Y = np.meshgrid(np.arange(-10, 11), np.arange(-10, 11))
graddat = numgradfun(X, Y)

plt.figure()
plt.quiver(X, Y, graddat[0], graddat[1])
plt.show()
AttributeError: 'Mul' object has no attribute 'sin'
And let's say I remove sin; then I get another error:
TypeError: can't multiply sequence by non-int of type 'Mul'
I read the tutorial for sympy and it says "The real power of a symbolic computation system such as SymPy is the ability to do all sorts of computations symbolically". I get this; I just don't get why I cannot multiply the x and y symbols with float numbers.
What is the way around this? :( Help please!
UPDATE
@Andras Deak: I wanted to make things shorter, so I removed many constants from the original formulas for V(x,y) and Cn*Dm. As you pointed out, that caused the sin function to always return 0 (I just noticed). Apologies for that. I will update the post later today once I have read your comment in detail. Big thanks!
UPDATE 2
I changed the coefficients in my expression for the voltage and this is the result:
It looks good, except that the arrows point in the opposite direction (they are supposed to go out of the reddish dot and into the blue one). Do you know how I could change that? And if possible, could you please tell me how to increase the size of the arrows? I tried what was suggested in another topic (Computing and drawing vector fields):
skip = (slice(None, None, 3), slice(None, None, 3))
This plots only every third arrow and lets matplotlib do the autoscaling, but it doesn't work for me (nothing happens when I add this, for any number that I enter).
You were already of huge help; I cannot thank you enough!
Here's a solution using sympy and numpy. This is the first time I've used sympy, so others could probably come up with much better and more elegant solutions.
import sympy
#define symbolic vars, function
x,y=sympy.symbols('x y')
fun=3*x**2-5*y**2
#take the gradient symbolically
gradfun=[sympy.diff(fun,var) for var in (x,y)]
#turn into a bivariate lambda for numpy
numgradfun=sympy.lambdify([x,y],gradfun)
Now you can use numgradfun(1,3) to compute the gradient at (x,y)==(1,3). This function can then be used for plotting, which you said you can do.
For plotting, you can use, for instance, matplotlib's quiver, like so:
import numpy as np
import matplotlib.pyplot as plt
X,Y=np.meshgrid(np.arange(-10,11),np.arange(-10,11))
graddat=numgradfun(X,Y)
plt.figure()
plt.quiver(X,Y,graddat[0],graddat[1])
plt.show()
UPDATE
You added a specification for the function to be computed. It contains products of terms depending on x and y, which seems to break my above solution. I managed to come up with a new one to suit your needs. However, your function seems to make little sense. From your edited question:
t.append(0.000001*np.sin(2*math.pi*k*0.5)/((4*(math.pi)**2)* (k**2+l**2)))
term = t* np.sin(2 * math.pi * k * x/0.004) * np.cos(2 * math.pi * l * y/0.004)
On the other hand, from your corresponding comment to this answer:
V(x,y) = Sum over n and m of [Cn * Dm * sin(2*pi*n*x) * cos(2*pi*m*y)]; the sum goes from -10 to 10; Cn and Dm are coefficients, and I calculated that Ck*Dl = sin(2*pi*k)/(k^2 + l^2) (I used k and l here as one pair of the indices from the sum over n and m).
I have several problems with this: both sin(2*pi*k) and sin(2*pi*k/2) (the two competing versions of the prefactor) are always zero for integer k, giving you a constant zero V at every (x,y). Furthermore, in your code you have magical frequency factors in the trigonometric functions, which are missing from the comment. If you rescale your x by a factor of 4e-3, you drastically change the spatial dependence of your function (changing the wavelength by roughly a factor of a thousand). So you should really decide what your function is.
So here's a solution, where I assumed
V(x,y)=sum_{k,l = 1 to 10} C_{k,l} * sin(2*pi*k*x)*cos(2*pi*l*y), with
C_{k,l}=sin(2*pi*k/4)/((4*pi^2)*(k^2+l^2))*1e-6
This is a combination of your various versions of the function, with the modification of sin(2*pi*k/4) in the prefactor in order to have a non-zero function. I expect you to be able to fix the numerical factors to your actual needs, after you figure out the proper mathematical model.
So here's the full code:
import sympy as sp
import numpy as np
import matplotlib.pyplot as plt
def CD(k, l):
    #return sp.sin(2*sp.pi*k/2)/((4*sp.pi**2)*(k**2+l**2))*1e-6
    return sp.sin(2*sp.pi*k/4)/((4*sp.pi**2)*(k**2+l**2))*1e-6

def Vkl(x, y, k, l):
    return CD(k, l)*sp.sin(2*sp.pi*k*x)*sp.cos(2*sp.pi*l*y)

def V(x, y, kmax, lmax):
    k, l = sp.symbols('k l', integers=True)
    return sp.summation(Vkl(x, y, k, l), (k, 1, kmax), (l, 1, lmax))
#define symbolic vars, function
kmax=10
lmax=10
x,y=sp.symbols('x y')
fun=V(x,y,kmax,lmax)
#take the gradient symbolically
gradfun=[sp.diff(fun,var) for var in (x,y)]
#turn into bivariate lambda for numpy
numgradfun=sp.lambdify([x,y],gradfun,'numpy')
numfun=sp.lambdify([x,y],fun,'numpy')
#plot
X,Y=np.meshgrid(np.linspace(-10,10,51),np.linspace(-10,10,51))
graddat=numgradfun(X,Y)
fundat=numfun(X,Y)
hf=plt.figure()
hc=plt.contourf(X,Y,fundat,np.linspace(fundat.min(),fundat.max(),25))
plt.quiver(X,Y,graddat[0],graddat[1])
plt.colorbar(hc)
plt.show()
I defined your V(x,y) function using some auxiliary functions for transparency. I left the summation cut-offs as literal parameters, kmax and lmax: in your code these were 3, in your comment they were said to be 10, and in any case they should be infinity.
The gradient is taken the same way as before, but when converting to a numpy function using lambdify you have to pass an additional string parameter, 'numpy'. This will allow the resulting numpy lambda to accept array input (essentially it will use np.sin instead of math.sin, and the same for cos).
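As a minimal illustration of that parameter (a toy example, independent of the V above):

import numpy as np
import sympy as sp

x = sp.symbols('x')
f = sp.lambdify(x, sp.sin(x), 'numpy')   # uses np.sin under the hood
print(f(np.array([0.0, np.pi/2])))       # -> [0. 1.]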
I also changed the definition of the grid from arange to np.linspace: this is usually more convenient. Since your function is almost constant at integer grid points, I created a denser mesh for plotting (51 points, while keeping your original limits of (-10,10) fixed).
For clarity I included a few more plots: a contourf to show the value of the function (contour lines should always be orthogonal to the gradient vectors), and a colorbar to indicate the value of the function. Here's the result:
The composition is obviously not the best, but I didn't want to stray too much from your specifications. The arrows in this figure are actually hardly visible, but as you can see (and also evident from the definition of V) your function is periodic, so if you plot the same thing with smaller limits and less grid points, you'll see more features and larger arrows.
I'm trying to interpolate some data for the purpose of plotting. For instance, given N data points, I'd like to be able to generate a "smooth" plot, made up of 10*N or so interpolated data points.
My approach is to generate an N-by-10*N matrix and compute the inner product of the original vector and the matrix I generated, yielding a 1-by-10*N vector. I've already worked out the math I'd like to use for the interpolation, but my code is pretty slow. I'm pretty new to Python, so I'm hopeful that some of the experts here can give me some ideas of ways I can try to speed up my code.
I think part of the problem is that generating the matrix requires 10*N^2 calls to the following function:
def sinc(x):
    import math
    try:
        return math.sin(math.pi * x) / (math.pi * x)
    except ZeroDivisionError:
        return 1.0
(This comes from sampling theory. Essentially, I'm attempting to recreate a signal from its samples, and upsample it to a higher frequency.)
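(Concretely, the formula I'm approximating is the Whittaker-Shannon reconstruction
x(t) = sum_n x[n] * sinc((t - n*T)/T)
with sample period T; each matrix entry sinc((Tsf*i - Tso*j)/Tso) is the j-th term of this sum evaluated at the output time Tsf*i.)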
The matrix is generated by the following:
def resampleMatrix(Tso, Tsf, o, f):
    from numpy import array as npar
    retval = []
    for i in range(f):
        retval.append([sinc((Tsf*i - Tso*j)/Tso) for j in range(o)])
    return npar(retval)
I'm considering breaking the task up into smaller pieces because I don't like the idea of an N^2 matrix sitting in memory. I could probably make resampleMatrix into a generator function and do the inner product row by row, but I don't think that will speed up my code much until I start paging stuff in and out of memory.
Thanks in advance for your suggestions!
This is upsampling. See Help with resampling/upsampling for some example solutions.
A fast way to do this (for offline data, like your plotting application) is to use FFTs. This is what SciPy's native scipy.signal.resample() function does. It assumes a periodic signal, though, so it's not exactly the same. See this reference:
Here’s the second issue regarding time-domain real signal interpolation, and it’s a big deal indeed. This exact interpolation algorithm provides correct results only if the original x(n) sequence is periodic within its full time interval.
Your function assumes the signal's samples are all 0 outside of the defined range, so the two methods will diverge away from the center point. If you pad the signal with lots of zeros first, it will produce a very close result. There are several more zeros past the edge of the plot not shown here:
Cubic interpolation won't be correct for resampling purposes. This example is an extreme case (near the sampling frequency), but as you can see, cubic interpolation isn't even close. For lower frequencies it should be pretty accurate.
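For reference, a minimal sketch of the FFT-based route mentioned above (scipy.signal.resample; the test signal is my own toy choice):

import numpy as np
from scipy.signal import resample

N, H = 32, 10
n = np.arange(N)
y = np.sin(2 * np.pi * 3 * n / N)   # a periodic toy signal
yup = resample(y, H * N)            # FFT-based upsampling to 10*N points
xup = np.arange(H * N) / H          # matching fine time grid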
If you want to interpolate data in a quite general and fast way, splines and polynomials are very useful. SciPy provides the scipy.interpolate module for this; you can find many examples in the official documentation.
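For instance, a minimal sketch with interp1d (the sample data here is just an illustration):

import numpy as np
from scipy.interpolate import interp1d

x = np.linspace(0, 10, 11)
y = np.sin(x)
f = interp1d(x, y, kind='cubic')   # cubic spline through the samples
xfine = np.linspace(0, 10, 101)    # ~10x denser grid for a smooth plot
yfine = f(xfine)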
Your question isn't entirely clear; you're trying to optimize the code you posted, right?
Rewriting sinc like this should speed it up considerably. This implementation avoids checking that the math module is imported on every call, doesn't do attribute access three times, and replaces exception handling with a conditional expression:
from math import sin, pi

def sinc(x):
    return (sin(pi * x) / (pi * x)) if x != 0 else 1.0
You could also try to avoid creating the matrix twice (and holding it twice in memory in parallel) by creating a numpy.array directly (not from a list of lists):
import numpy

def resampleMatrix(Tso, Tsf, o, f):
    retval = numpy.zeros((f, o))
    for i in xrange(f):
        for j in xrange(o):
            retval[i][j] = sinc((Tsf*i - Tso*j)/Tso)
    return retval
(replace xrange with range on Python 3.0 and above)
Finally, you can create rows with numpy.arange and call numpy.sinc on each row, or even on the entire matrix:
def resampleMatrix(Tso, Tsf, o, f):
    retval = numpy.zeros((f, o))
    for i in xrange(f):
        # row i holds (Tsf*i - Tso*j)/Tso == Tsf*i/Tso - j for j = 0..o-1
        retval[i] = numpy.arange(Tsf*i / Tso, Tsf*i / Tso - o, -1.0)
    return numpy.sinc(retval)
This should be significantly faster than your original implementation. Try different combinations of these ideas and test their performance, see which works out the best!
I'm not quite sure what you're trying to do, but there are some speedups you can apply when creating the matrix. Braincore's suggestion to use numpy.sinc is a first step, but the second is to realize that numpy functions want to work on numpy arrays, where they can do loops at C speed, much faster than operating on individual elements.
import numpy

def resampleMatrix(Tso, Tsf, o, f):
    # broadcasting builds the full f-by-o matrix of differences in one shot
    retval = numpy.sinc((Tsf*numpy.arange(f)[:, numpy.newaxis]
                         - Tso*numpy.arange(o)[numpy.newaxis, :])/Tso)
    return retval
The trick is that by indexing the aranges with numpy.newaxis, numpy converts the array of shape (f,) to one of shape (f, 1), and the array of shape (o,) to shape (1, o). At the subtraction step, numpy "broadcasts" each input to act as an f-by-o array and then does the subtraction. ("Broadcasting" is numpy's term, reflecting the fact that no additional copy is made to stretch the (f, 1) array to (f, o).)
Now numpy.sinc can iterate over all the elements in compiled code, much faster than any for loop you could write.
(There's an additional speed-up available if you do the division before the subtraction, especially since in the latter term the division cancels the multiplication.)
The only drawback is that you now pay for an extra N-by-10*N array to hold the difference. This might be a deal-breaker if N is large and memory is an issue.
Otherwise, you should be able to write this using numpy.convolve. From what little I just learned about sinc-interpolation, I'd say you want something like numpy.convolve(orig,numpy.sinc(numpy.arange(j)),mode="same"). But I'm probably wrong about the specifics.
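One concrete way to phrase sinc interpolation as a convolution (a sketch under my own assumptions, filling in the specifics hedged above: zero-stuff by the upsampling factor, then convolve with a truncated sinc kernel):

import numpy as np

def sinc_upsample(x, H, K=10):
    # zero-stuff: place the original samples H apart
    z = np.zeros(len(x) * H)
    z[::H] = x
    # truncated sinc kernel spanning K original samples on each side
    m = np.arange(-K * H, K * H + 1)
    kernel = np.sinc(m / H)   # np.sinc(t) == sin(pi*t)/(pi*t)
    return np.convolve(z, kernel, mode='same')

Edge effects remain near the boundaries, and K trades accuracy for kernel length.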
If your only interest is to 'generate a "smooth" plot', I would just go with a simple polynomial spline curve fit:
For any two adjacent data points, the coefficients of a third-degree polynomial can be computed from the coordinates of those two data points and the two additional points to their left and right (disregarding boundary points). This will generate points on a nice smooth curve with a continuous first derivative. There's a straightforward formula for converting the 4 coordinates into 4 polynomial coefficients, but I don't want to deprive you of the fun of looking it up ;o).
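(For the record, one common instance of such a formula (my assumption of what's meant, since the answer leaves it to the reader) is the Catmull-Rom spline, which interpolates between p1 and p2 using their neighbours p0 and p3 to set the end slopes:

def catmull_rom(p0, p1, p2, p3, t):
    # evaluate the cubic through p1..p2 at t in [0, 1]
    return 0.5 * (2*p1
                  + (-p0 + p2)*t
                  + (2*p0 - 5*p1 + 4*p2 - p3)*t**2
                  + (-p0 + 3*p1 - 3*p2 + p3)*t**3)

At t=0 it returns p1 and at t=1 it returns p2, with slopes set by the neighbouring points so adjacent segments join smoothly.)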
Here's a minimal example of 1d interpolation with scipy -- not as much fun as reinventing, but.
The plot looks like sinc, which is no coincidence:
try googling spline resample "approximate sinc".
(Presumably less local / more taps ⇒ better approximation,
but I have no idea how local UnivariateSplines are.)
""" interpolate with scipy.interpolate.UnivariateSpline """
from __future__ import division
import numpy as np
from scipy.interpolate import UnivariateSpline
import pylab as pl
N = 10
H = 8
x = np.arange(N+1)
xup = np.arange( 0, N, 1/H )
y = np.zeros(N+1); y[N//2] = 100
interpolator = UnivariateSpline( x, y, k=3, s=0 ) # s=0 interpolates
yup = interpolator( xup )
np.set_printoptions( 1, threshold=100, suppress=True ) # .1f
print "yup:", yup
pl.plot( x, y, "green", xup, yup, "blue" )
pl.show()
Added feb 2010: see also basic-spline-interpolation-in-a-few-lines-of-numpy
A small improvement: use the built-in numpy.sinc(x) function, which runs in compiled C code.
A possible larger improvement: can you do the interpolation on the fly (as the plotting occurs)? Or are you tied to a plotting library that only accepts a matrix?
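For reference, numpy.sinc is the normalized sinc, so it matches the hand-rolled version above, including the x == 0 case, and works on whole arrays:

import numpy as np

x = np.array([-1.0, 0.0, 0.5, 1.0])
print(np.sinc(x))   # sin(pi*x)/(pi*x), with sinc(0) == 1.0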
I recommend that you check your algorithm, as this is a non-trivial problem. Specifically, I suggest you look up the article "Function Plotting Using Conic Splines" (IEEE Computer Graphics and Applications) by Hu and Pavlidis (1991). Their algorithm allows adaptive sampling of the function, so that the rendering time is smaller than with regularly spaced approaches.
The abstract follows:
A method is presented whereby, given a mathematical description of a function, a conic spline approximating the plot of the function is produced. Conic arcs were selected as the primitive curves because there are simple incremental plotting algorithms for conics already included in some device drivers, and there are simple algorithms for local approximations by conics. A split-and-merge algorithm for choosing the knots adaptively, according to shape analysis of the original function based on its first-order derivatives, is introduced.