I'm working with a spark-ignition combustion engine model and, for various reasons, I'm using Python to model the combustion. I'm trying to use an ODE solver, but the output is completely unrealistic. I found that the integration of the cylinder volume is wrong. I have already tried both the "odeint" and "ode" solvers, but the result is the same.
The code defines the derivative of the volume with respect to theta and integrates it to find the volume; I included the analytical equation for comparison.
Note: I had a similar problem in MATLAB, but that was because I used degrees in the trigonometric functions. Switching to radians solved it.
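For reference, the cylinder-volume relation implemented in the code below and its derivative with respect to theta are

V(\theta) = V_{tdc}\left[1 + \frac{r-1}{2}\left(1 - \cos\theta + \frac{1}{\varepsilon}\left(1 - \sqrt{1-\varepsilon^2\sin^2\theta}\right)\right)\right]

\frac{dV}{d\theta} = V_{tdc}\,\frac{r-1}{2}\left(\sin\theta + \frac{\varepsilon}{2}\,\frac{\sin 2\theta}{\sqrt{1-\varepsilon^2\sin^2\theta}}\right)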
The code follows:
from scipy.integrate import odeint
from scipy.integrate import ode
from scipy import integrate
import math
import sympy
from sympy import sqrt, sin, cos, tan, atan
from pylab import *
from RatesComp import *
V_real=np.zeros((100))
def Volume(V,theta):
    V_sol = V[0]
    dVdtheta = Vtdc*(r-1)/2 *( sin(theta) + eps/2*sin(2*theta)/sqrt(1-(eps**2)*sin(theta)**2))
    return [dVdtheta]
#Geometry
eps = 0.25; # half stroke to rod ratio, s/2l
r = 10; # compression ratio
Vtdc = 6.9813e-05 # volume at TDC
# Initial Conditions
theta0 = - pi
V_init = 0.0006283
theta = linspace(-pi,pi,100)
solve = odeint( Volume, V_init, theta)
# Analytical Result
Size = len(theta)
for i in range(0, Size, 1):
    V_real[i] = Vtdc*(1+(r-1)/2*(1-cos(theta[i])+ 1/eps*(1-(1-(eps**2)*sin(theta[i])**2)**0.5)))
figure(1)
plot(theta, solve[:,0],label="Comput")
plot(theta, V_real[0:Size],label="Real")
ylabel('Volume [m^3]')
xlabel('CA [Rad]')
legend()
grid(True)
show()
The figure I show is the cylinder volume: the analytical ("real") result and the computed one.
Can someone help me understand why this happens?
Apparently you are using Python 2. There the assignment r=10 gives r the type int, which leads to an unwanted integer division in (r-1)/2 in the 'real' solution. In the derivative function the float Vtdc is the first factor of the product, so the rest of that product is evaluated in floating point.
Thus change to r=10.0 or use (r-1.0)/2 or 0.5*(r-1).
And you should also set V_init = r*Vtdc as that is the value of V_real(-pi).
If you use Python 2, add from __future__ import division as the first line to get Python 3 division semantics, as described here: https://mail.python.org/pipermail/tutor/2008-March/060886.html
In Python 2, dividing two integer values gives an integer result, not a float. This may solve your problem without large changes to the code.
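As a minimal sketch, only the constants and the initial condition of the question's code need to change:
from __future__ import division  # optional: Python 3 division semantics in Python 2
# Geometry (floats, so (r-1)/2 is no longer an integer division)
eps = 0.25         # half stroke to rod ratio, s/2l
r = 10.0           # compression ratio
Vtdc = 6.9813e-05  # volume at TDC
# Initial condition consistent with the analytical solution at theta = -pi
V_init = r*Vtdc    # = V_real(-pi)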
Related
I am running into an issue with integration in Python returning incorrect values for an integral with a known analytical solution. The integral in question is
\int_0^\infty x^2 \exp\!\left(-\frac{x^2}{2\sigma^2}\right) dx
For the value of sigma I am using (1e-15), this integral evaluates to ~1.25e-45. However, when I use the scipy integrate package to calculate it I get zero, which I believe has to do with the precision required by the calculation.
#scipy method
import numpy as np
from scipy.integrate import quad
sigma = 1e-15
f = lambda x: (x**2) * np.exp(-x**2/(2*sigma**2))
#perform the integral and print the result
solution = quad(f,0,np.inf)[0]
print(solution)
0.0
Since precision was an issue, I also tried another recommended package, mpmath, which did not return 0 but was off by ~7 orders of magnitude from the correct answer. Testing larger values of sigma results in solutions very close to the corresponding exact values, but the result seems to get increasingly incorrect as sigma gets smaller.
#mpmath method
import numpy as np
import mpmath as mp
sigma = 1e-15
f = lambda x: (x**2) * mp.exp(-x**2/(2*sigma**2))
#perform the integral and print the result
solution = mp.quad(f,[0,np.inf])
print(solution)
2.01359486678988e-52
From here I could use some advice on getting a more accurate answer, as I would like to have some confidence applying python integration methods to integrals that cannot be solved analytically.
You should add extra points for the function as 'midpoints'; I added 100 points from 1e-100 to 1 to increase accuracy.
#mpmath method
import numpy as np
import mpmath as mp
sigma = 1e-15
f = lambda x: (x**2) * mp.exp(-x**2/(2*sigma**2))
#perform the integral and print the result
solution = mp.quad(f,[0,*np.logspace(-100,0,100),np.inf])
print(solution)
1.25286197427129e-45
Edit: turns out you need 10000 points instead of 100 points to get a more accurate result, of 1.25331413731554e-45, but it takes a few seconds to calculate.
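For reference, that denser subdivision is simply:
solution = mp.quad(f, [0, *np.logspace(-100, 0, 10000), np.inf])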
Most numerical integrators will run into issues with numbers that small due to floating point precision. One solution is to scale the integral before calculating. Letting q -> x/sigma, the integral becomes:
f = lambda q: sigma**3*(q**2) * np.exp(-q**2/2)
solution = quad(f, 0, np.inf)[0]
# solution: 1.2533156529417088e-45
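As a sanity check, the remaining integral of q**2 * exp(-q**2/2) from 0 to infinity equals sqrt(pi/2), so the exact value is sqrt(pi/2)*sigma**3:
import numpy as np
sigma = 1e-15
exact = np.sqrt(np.pi/2) * sigma**3   # closed form of the scaled integral
print(exact)  # ~1.2533e-45, matching the scipy result above to the solver's tolerance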
I am trying to compute the following integral in Python:
\gamma = \int_{r_{min}}^{r_{max}} f(r)\, g(r)\, dr
where the second factor of the integrand is
g(r) = a\, r^{\alpha}\, e^{-b r}
I am currently computing it numerically by using Simpson's rule:
import math
import numpy as np
from scipy.integrate import simps
r = np.linspace(rmin, rmax, 5000)
f_val = some_complicated_function(r, params)
g_val = a*np.multiply(r**alpha, [math.exp(-b*r_) for r_ in r])
gamma = simps(np.multiply(f_val, g_val), r)
However, the result is not accurate for small r values. I checked the values of g_val and they look like this:
array([2.48243025e-31, 1.62729999e-27, 3.31169129e-26, ...,
1.34177288e-13, 1.34053922e-13, 1.33930643e-13])
which is probably causing the underflow.
The most typical workaround would be to integrate the function analytically rather than numerically. However, the problem is that the function f(r) is very complicated and is not available as an explicit (analytic) expression.
Does anyone have an idea for computing this kind of integral more accurately?
Hello, I have to write a Python function to solve the Lorenz differential equations using a second-order Runge-Kutta method, with
sigma=10, r=28 and b=8/3
with initial conditions (x,y,z)=(0,1,0)
This is the code I wrote, but it throws an error saying "overflow encountered in double_scalars",
and I don't see what is wrong with the program.
from pylab import *
def runge_4(r0,a,b,n,f1,f2,f3):
    def f(r,t):
        x=r[0]
        y=r[1]
        z=r[2]
        fx=f1(x,y,z,t)
        fy=f2(x,y,z,t)
        fz=f3(x,y,z,t)
        return array([fx,fy,fz],float)
    h=(b-a)/n
    lista_t=arange(a,b,h)
    print(lista_t)
    X,Y,Z=[],[],[]
    for t in lista_t:
        k1=h*f(r0,t)
        print("k1=",k1)
        k2=h*f(r0+0.5*k1,t+0.5*h)
        print("k2=",k2)
        k3=h*f(r0+0.5*k2,t+0.5*h)
        print("k3=",k3)
        k4=h*f(r0+k3,t+h)
        print("k4=",k4)
        r0+=(k1+2*k2+2*k3+k4)/float(6)
        print(r0)
        X.append(r0[0])
        Y.append(r0[1])
        Z.append(r0[2])
    return array([X,Y,Z])
def f1(x,y,z,t):
    return 10*(y-x)
def f2(x,y,z,t):
    return 28*x-y-x*z
def f3(x,y,z,t):
    return x*y-(8.0/3.0)*z
#and I run it
r0=[1,1,1]
runge_4(r0,1,50,20,f1,f2,f3)
Solving differential equations numerically can be challenging. If you choose too high a step size, the solution accumulates large errors and can even become unstable, as in your case.
Either drastically reduce the step size h or just use the adaptive Runge-Kutta method provided by scipy:
from numpy import array, linspace
from scipy.integrate import solve_ivp
import pylab
from mpl_toolkits import mplot3d
def func(t, r):
    x, y, z = r
    fx = 10 * (y - x)
    fy = 28 * x - y - x * z
    fz = x * y - (8.0 / 3.0) * z
    return array([fx, fy, fz], float)
# and I run it
r0 = [0, 1, 0]
sol = solve_ivp(func, [0, 50], r0, t_eval=linspace(0, 50, 5000))
# and plot it
fig = pylab.figure()
ax = pylab.axes(projection="3d")
ax.plot3D(sol.y[0,:], sol.y[1,:], sol.y[2,:], 'blue')
pylab.show()
This solver uses a combination of 4th and 5th order Runge-Kutta methods and controls the deviation between them by adapting the step size. See the usage documentation here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html
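If you need tighter accuracy than the defaults (rtol=1e-3, atol=1e-6), you can also pass the tolerances explicitly, for example:
# same call as above, but with stricter error tolerances for the adaptive step control
sol = solve_ivp(func, [0, 50], r0, rtol=1e-8, atol=1e-10, t_eval=linspace(0, 50, 5000))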
You use a step size of h=2.5.
For RK4 the useful step sizes, given a Lipschitz constant L, are in the range L*h = 1e-3 to 0.1; one might get somewhat right-looking results up to L*h = 2.5. Above that the method turns chaotic, and any resemblance to the underlying ODE is lost.
The Lorenz system has a Lipschitz constant of about L=50 (see Chaos and continuous dependency of ODE solution), so h < 0.05 is absolutely required, h = 0.002 is better, and h = 2e-5 gives the numerically best results for this method.
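In the code above, h = (b-a)/n = (50-1)/20 ≈ 2.45, so L*h ≈ 120, far outside that range. With the same runge_4 function, a step size of about 0.002 corresponds to n ≈ 24500:
runge_4(r0, 1, 50, 24500, f1, f2, f3)   # h = 49/24500 = 0.002, so L*h = 0.1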
The overflow can be related to a division by zero or to exceeding the limit of a numeric type (here, float).
To figure out where and when it happens, you can set numpy.seterr('raise') and it will raise an exception, so you can debug and see what is happening. It seems your algorithm is diverging.
Here you can see how to use numpy.seterr:
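A minimal sketch, assuming the runge_4, f1, f2, f3 and r0 from the question are already defined:
import numpy as np
np.seterr('raise')   # turn warnings like "overflow encountered" into exceptions
try:
    runge_4(r0, 1, 50, 20, f1, f2, f3)
except FloatingPointError as e:
    print("floating point problem:", e)   # shows the step where the solution blows up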
I am a newbie to Python. I have a simple differential system, which consists of two variables and two differential equations, with initial conditions x0=1, y0=2:
dx/dt = 6*y
dy/dt = (2t - 3x)/(4y)
Now I am trying to solve these two differential equations, and I chose odeint. Here is my code:
import matplotlib.pyplot as pl
import numpy as np
from scipy.integrate import odeint
def func(z,b):
    x, y=z
    return [6*y, (b-3*x)/(4*y)]
z0=[1,2]
t = np.linspace(0,10,11)
b=2*t
xx=odeint(func, z0, b)
pl.figure(1)
pl.plot(t, xx[:,0])
pl.legend()
pl.show()
but the result is incorrect and there is an error message:
Excess work done on this call (perhaps wrong Dfun type).
Run with full_output = 1 to get quantitative information.
I don't know what is wrong with my code or how I can solve it.
Any help would be useful to me.
Apply a trick to desingularize the division by y, print all ODE function evaluations, plot both components, and use the right differential equation with the modified code:
import matplotlib.pyplot as pl
import numpy as np
from scipy.integrate import odeint
def func(z,t):
    x, y=z
    print(t, z)
    return [6*y, (2*t-3*x)*y/(4*y**2+1e-12)]
z0=[1,2]
t = np.linspace(0,1,501)
xx=odeint(func, z0, t)
pl.figure(1)
pl.plot(t, xx[:,0],t,xx[:,1])
pl.legend()
pl.show()
and you see that at t=0.64230232515 the singularity y=0 is reached, where y behaves like a square-root function at its apex. There is no way to continue across that singularity, as the slope of y goes to infinity. At this point the solution is no longer continuously differentiable, and thus this is the extremal point of the solution. The constant continuation afterwards is an artifact of the desingularization, not a valid solution.
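Continuing from the code above, a quick way to locate that point from the computed solution (a sketch; the 1e-3 threshold is arbitrary):
idx = np.argmax(xx[:,1] < 1e-3)   # first sample where y has essentially collapsed to zero
print(t[idx], xx[idx,1])          # approximately 0.642, the location of the singularity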
I want to calculate and plot the gradient of any scalar function of two variables. If you really want a concrete example, let's say f = x^2 + y^2, where x goes from -10 to 10 and the same for y. How do I calculate and plot grad(f)? The solution should be a vector field and I should see the vector arrows. I am new to Python, so please use simple words.
EDIT:
@Andras Deak: thank you for your post. I tried what you suggested, and instead of your test function (fun=3*x**2-5*y**2) I used a function that I defined as V(x,y); this is how the code looks, but it reports an error:
import numpy as np
import math
import sympy
import matplotlib.pyplot as plt
def V(x,y):
    t=[]
    for k in range (1,3):
        for l in range (1,3):
            t.append(0.000001*np.sin(2*math.pi*k*0.5)/((4*(math.pi)**2)* (k**2+l**2)))
            term = t* np.sin(2 * math.pi * k * x/0.004) * np.cos(2 * math.pi * l * y/0.004)
    return term
    return term.sum()
x,y=sympy.symbols('x y')
fun=V(x,y)
gradfun=[sympy.diff(fun,var) for var in (x,y)]
numgradfun=sympy.lambdify([x,y],gradfun)
X,Y=np.meshgrid(np.arange(-10,11),np.arange(-10,11))
graddat=numgradfun(X,Y)
plt.figure()
plt.quiver(X,Y,graddat[0],graddat[1])
plt.show()
AttributeError: 'Mul' object has no attribute 'sin'
And let's say I remove sin; then I get another error:
TypeError: can't multiply sequence by non-int of type 'Mul'
I read the SymPy tutorial and it says "The real power of a symbolic computation system such as SymPy is the ability to do all sorts of computations symbolically". I get this; I just don't get why I cannot multiply the x and y symbols by float numbers.
What is the way around this? :( Help please!
UPDATE
@Andras Deak: I wanted to make things shorter, so I removed many constants from the original formulas for V(x,y) and Cn*Dm. As you pointed out, that caused the sin function to always return 0 (I just noticed). Apologies for that. I will update the post later today when I have read your comment in detail. Big thanks!
UPDATE 2
I changed the coefficients in my expression for the voltage and this is the result:
It looks good except that the arrows point in the opposite direction (they are supposed to go out of the reddish dot and into the blue one). Do you know how I could change that? And if possible, could you please tell me how to increase the size of the arrows? I tried what was suggested in another topic (Computing and drawing vector fields):
skip = (slice(None, None, 3), slice(None, None, 3))
This plots only every third arrow and lets matplotlib do the autoscaling, but it doesn't work for me (nothing happens when I add it, for any number that I enter).
You have already been a huge help, I cannot thank you enough!
Here's a solution using sympy and numpy. This is the first time I've used sympy, so others could probably come up with much better and more elegant solutions.
import sympy
#define symbolic vars, function
x,y=sympy.symbols('x y')
fun=3*x**2-5*y**2
#take the gradient symbolically
gradfun=[sympy.diff(fun,var) for var in (x,y)]
#turn into a bivariate lambda for numpy
numgradfun=sympy.lambdify([x,y],gradfun)
Now you can use numgradfun(1,3) to compute the gradient at (x,y) == (1,3). This function can then be used for plotting, which you said you can do.
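For the test function above the gradient is [6*x, -10*y], so for instance:
print(numgradfun(1, 3))   # [6, -30]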
For plotting, you can use, for instance, matplotlib's quiver, like so:
import numpy as np
import matplotlib.pyplot as plt
X,Y=np.meshgrid(np.arange(-10,11),np.arange(-10,11))
graddat=numgradfun(X,Y)
plt.figure()
plt.quiver(X,Y,graddat[0],graddat[1])
plt.show()
UPDATE
You added a specification for your function to be computed. It contains the product of terms depending on x and y, which seems to break my above solution. I managed to come up with a new one to suit your needs. However, your function seems to make little sense. From your edited question:
t.append(0.000001*np.sin(2*math.pi*k*0.5)/((4*(math.pi)**2)* (k**2+l**2)))
term = t* np.sin(2 * math.pi * k * x/0.004) * np.cos(2 * math.pi * l * y/0.004)
On the other hand, from your corresponding comment to this answer:
V(x,y) = Sum over n and m of [Cn * Dm * sin(2*pi*n*x) * cos(2*pi*m*y)]; the sum goes from -10 to 10; Cn and Dm are coefficients, and I calculated that Ck*Dl = sin(2*pi*k)/(k^2 + l^2) (I used k and l here as the indices from the sum over n and m).
I have several problems with this: both sin(2*pi*k) and sin(2*pi*k/2) (the two competing versions of the prefactor) are always zero for integer k, giving you a constant zero V at every (x,y). Furthermore, in your code you have magical frequency factors in the trigonometric functions which are missing from the comment. If you multiply your x by 4e-3, you drastically change the spatial dependence of your function (by changing the wavelength by roughly a factor of a thousand). So you should really decide what your function is.
So here's a solution, where I assumed
V(x,y)=sum_{k,l = 1 to 10} C_{k,l} * sin(2*pi*k*x)*cos(2*pi*l*y), with
C_{k,l}=sin(2*pi*k/4)/((4*pi^2)*(k^2+l^2))*1e-6
This is a combination of your various versions of the function, with the modification of sin(2*pi*k/4) in the prefactor in order to have a non-zero function. I expect you to be able to fix the numerical factors to your actual needs, after you figure out the proper mathematical model.
So here's the full code:
import sympy as sp
import numpy as np
import matplotlib.pyplot as plt
def CD(k,l):
    #return sp.sin(2*sp.pi*k/2)/((4*sp.pi**2)*(k**2+l**2))*1e-6
    return sp.sin(2*sp.pi*k/4)/((4*sp.pi**2)*(k**2+l**2))*1e-6
def Vkl(x,y,k,l):
    return CD(k,l)*sp.sin(2*sp.pi*k*x)*sp.cos(2*sp.pi*l*y)
def V(x,y,kmax,lmax):
    k,l=sp.symbols('k l',integers=True)
    return sp.summation(Vkl(x,y,k,l),(k,1,kmax),(l,1,lmax))
#define symbolic vars, function
kmax=10
lmax=10
x,y=sp.symbols('x y')
fun=V(x,y,kmax,lmax)
#take the gradient symbolically
gradfun=[sp.diff(fun,var) for var in (x,y)]
#turn into bivariate lambda for numpy
numgradfun=sp.lambdify([x,y],gradfun,'numpy')
numfun=sp.lambdify([x,y],fun,'numpy')
#plot
X,Y=np.meshgrid(np.linspace(-10,10,51),np.linspace(-10,10,51))
graddat=numgradfun(X,Y)
fundat=numfun(X,Y)
hf=plt.figure()
hc=plt.contourf(X,Y,fundat,np.linspace(fundat.min(),fundat.max(),25))
plt.quiver(X,Y,graddat[0],graddat[1])
plt.colorbar(hc)
plt.show()
I defined your V(x,y) function using some auxiliary functions for transparency. I left the summation cut-offs as literal parameters, kmax and lmax: in your code these were 3, in your comment they were said to be 10, and anyway they should be infinity.
The gradient is taken the same way as before, but when converting to a numpy function using lambdify you have to set an additional string parameter, 'numpy'. This will allow the resulting numpy lambda to accept array input (essentially it will use np.sin instead of math.sin, and the same for cos).
I also changed the definition of the grid from arange to np.linspace: this is usually more convenient. Since your function is almost constant at integer grid points, I created a denser mesh for plotting (51 points, while keeping your original limits of (-10,10) fixed).
For clarity I included a few more plots: a contourf to show the value of the function (contour lines should always be orthogonal to the gradient vectors), and a colorbar to indicate the value of the function. Here's the result:
The composition is obviously not the best, but I didn't want to stray too much from your specifications. The arrows in this figure are actually hardly visible, but as you can see (and as is also evident from the definition of V) your function is periodic, so if you plot the same thing with smaller limits and fewer grid points, you'll see more features and larger arrows.