help, please - I can't understand my own code! lol
I'm fairly new to Python and, after many trials and errors, I got my code to work, but there is one particular part of it I don't understand.
In the code below, I'm solving a fairly basic ODE with scipy's odeint function. My goal is then to build on this blueprint for more complicated systems.
My question(s): How is it that I can call the method .reaction_rate_simple without any arguments and without parentheses? What does this mean in Python? Should I be using a static method somewhere here?
If anyone has any feedback on this, I'd love to hear it; maybe this is a crappy piece of code and there's a better way of solving it!
I am very thankful for any response and help!
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

class batch_reator:
    def __init__(self, C_init, volume_reactor, time_init, time_end):
        self.C_init = C_init
        self.volume = volume_reactor
        self.time_init = time_init
        self.time_end = time_end
        self.operation_time = (time_end - time_init)

    def reaction_rate_simple(self, concentration_t, t, stoch_factor, order, rate_constant):
        reaction_rate = stoch_factor * rate_constant * (concentration_t ** order)
        return reaction_rate

    def equations_system(self, kinetics):
        dCdt = kinetics
        return dCdt

C_init = 200
time_init, time_end = 0, 1000
rate_constant, volume_reactor, order, stoch_factor = 0.0001, 10, 1, -1
time_span = np.linspace(time_init, time_end, 100)

Batch_basic = batch_reator(C_init, volume_reactor, time_init, time_end)
kinetics = Batch_basic.reaction_rate_simple
sol = odeint(Batch_basic.equations_system(kinetics), Batch_basic.C_init, time_span, args=(stoch_factor, order, rate_constant))

plt.plot(time_span, sol)
plt.show()
I assume you are referring to the line
kinetics = Batch_basic.reaction_rate_simple
You are not calling it; you are saving the method in a variable and then passing that variable to equations_system(...), which simply returns it. I am not familiar with odeint, but according to the documentation it accepts a callable, which is what you are giving it.
In Python, functions, lambdas, and classes are all callable: they can be assigned to variables, passed to other functions, and called later as needed.
In this particular case the callback definition from the odeint docs says:

func : callable(y, t, …) or callable(t, y, …)
    Computes the derivative of y at t. If the signature is callable(t, y, ...), then the argument tfirst must be set True.
So the first two arguments are passed in by odeint, and the other three come from the args tuple you specified.
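To make this concrete, here is a minimal sketch (a made-up Decay class, not your reactor) showing that a bound method is an ordinary object you can store in a variable and call later, which is exactly what odeint does with it:

class Decay:
    def __init__(self, k):
        self.k = k

    def rate(self, c, t, order):
        # same argument layout as reaction_rate_simple: (y, t, extra args)
        return -self.k * c ** order

d = Decay(k=0.1)
f = d.rate             # no parentheses: stores the bound method itself
print(callable(f))     # True
print(f(5.0, 0.0, 1))  # calling it later: prints -0.5
# odeint(f, 5.0, time_span, args=(1,)) would then call f(y, t, 1) internally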
I'd like to calculate an integral of the form

F(ω) = ∫₀^∞ g(x)·e^(−iωx) dx

where I want the results as an array (to eventually plot them as a function of omega). I have
import numpy as np
import pylab as plt
from scipy import integrate

w = np.linspace(-5, 5, 1000)

def g(x):
    return np.exp(-2*x)

def complexexponential(x, w):
    return np.exp(-1j*w*x)

def integrand(x, w):
    return g(x)*complexexponential(x, w)

integrated = np.real(integrate.quad(integrand, 0, np.inf, args=(w)))
which gives me the error "supplied function does not return a valid float". I am not very familiar with Scipy's integrate functions. Many thanks for your help in advance!
Scipy's integrate.quad doesn't support vector output. If you loop over all your values of w and pass only one of them at a time as args, your code works fine.
It also doesn't handle complex integrands, which you can get around using the procedure outlined in this answer.
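A sketch of both workarounds combined, assuming the definitions from your snippet (the loop over w, and the real and imaginary parts integrated separately; the variable names are mine):

results = []
for wi in w:
    # one scalar w at a time; split the complex integrand into two real integrals
    re_part, _ = integrate.quad(lambda x: np.real(integrand(x, wi)), 0, np.inf)
    im_part, _ = integrate.quad(lambda x: np.imag(integrand(x, wi)), 0, np.inf)
    results.append(re_part + 1j*im_part)
results = np.array(results)

plt.plot(w, results.real)
plt.show()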
I want to solve this differential equation:
y′′ + 2y′ + 2y = cos(2x), with initial conditions:

1. y(1) = 2, y′(2) = 0.5
2. y′(1) = 1, y′(2) = 0.8
3. y(1) = 0, y(2) = 1
and my code is:
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def dU_dx(U, x):
    # U[0] = y, U[1] = y'
    return [U[1], -2*U[1] - 2*U[0] + np.cos(2*x)]

U0 = [1, 0]
xs = np.linspace(0, 10, 200)
Us = odeint(dU_dx, U0, xs)
ys = Us[:, 0]

plt.xlabel("x")
plt.ylabel("y")
plt.title("Damped harmonic oscillator")
plt.plot(xs, ys)
How can I incorporate these conditions?
Your initial conditions are not initial conditions, as they prescribe values at two different points. These are all boundary conditions.
def bc1(u1, u2): return [u1[0] - 2.0, u2[1] - 0.5]
def bc2(u1, u2): return [u1[1] - 1.0, u2[1] - 0.8]
def bc3(u1, u2): return [u1[0] - 0.0, u2[0] - 1.0]
You need a BVP solver to solve these boundary value problems.
You can either make your own solver using the shooting method, in case 1 as

from scipy.optimize import fsolve

def shoot(b): return odeint(dU_dx, [2, b], [1, 2])[-1, 1] - 0.5

b = fsolve(shoot, 0)[0]
N = 101  # number of output points
T = np.linspace(1, 2, N)
U = odeint(dU_dx, [2, b], T)

or use the secant method instead of scipy.optimize.fsolve; as the problem is linear, this should converge in 1, at most 2 steps.
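A sketch of that secant iteration, reusing the shoot function from above (the two starting values are arbitrary):

b0, b1 = 0.0, 1.0
f0, f1 = shoot(b0), shoot(b1)
for _ in range(10):  # the problem is linear, so this exits almost immediately
    b2 = b1 - f1 * (b1 - b0) / (f1 - f0)  # secant step
    b0, f0 = b1, f1
    b1, f1 = b2, shoot(b2)
    if abs(f1) < 1e-12:
        break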
Or you can use the scipy.integrate.solve_bvp solver (which is perhaps newer than the question). Your task is similar to the documented examples. Note that the argument order in the ODE function is switched in all newer solvers; even in odeint you can pass the option tfirst=True.
def dudx(x, u): return [u[1], np.cos(2*x) - 2*(u[1] + u[0])]
Solutions generated with solve_bvp; the nodes are the automatically generated subdivision of the integration interval, and their density tells how "non-flat" the ODE is in that region.
from scipy.integrate import solve_bvp

xplot = np.linspace(1, 2, 161)
for k, bc in enumerate([bc1, bc2, bc3]):
    res = solve_bvp(dudx, bc, [1.0, 2.0], [[0, 0], [0, 0]], tol=1e-5)
    print(res.message)
    l, = plt.plot(res.x, res.y[0], 'x')
    c = l.get_color()
    plt.plot(xplot, res.sol(xplot)[0], c=c, label="%d." % (k+1))
Solutions generated using the shooting method, with the initial values at x=0 as unknown parameters, to then obtain the solution trajectories for the interval [0,3].
x = np.linspace(0, 3, 301)
for k, bc in enumerate([bc1, bc2, bc3]):
    def shoot(u0):
        u = odeint(dudx, u0, [0, 1, 2], tfirst=True)
        return bc(u[1], u[2])
    u0 = fsolve(shoot, [0, 0])
    u = odeint(dudx, u0, x, tfirst=True)
    l, = plt.plot(x, u[:, 0], label="%d." % (k+1))
    c = l.get_color()
    plt.plot(x[::100], u[::100, 0], 'x', c=c)
You can also use the scipy.integrate.ode class. It is similar to scipy.integrate.odeint, but it accepts a jac parameter, the Jacobian df/dy of the right-hand side with respect to the state vector (with the question's naming, the derivative of dU_dx with respect to U).
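For illustration, a sketch of scipy.integrate.ode on this equation (note that it uses the f(t, y) argument order, and as an initial value solver it needs a chosen starting state, so it does not by itself handle the boundary conditions):

from scipy.integrate import ode
import numpy as np

def rhs(t, u):
    return [u[1], np.cos(2*t) - 2*u[1] - 2*u[0]]

def jac(t, u):
    # Jacobian of rhs with respect to u; constant for this linear ODE
    return [[0, 1], [-2, -2]]

solver = ode(rhs, jac).set_integrator('vode', method='bdf', with_jacobian=True)
solver.set_initial_value([1, 0], 0)
ts, ys = [], []
while solver.successful() and solver.t < 10:
    solver.integrate(solver.t + 0.05)
    ts.append(solver.t)
    ys.append(solver.y[0])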
So I have the function

f(x) = I_0·(exp(Q·x/(n·K·T)) − 1)

where Q, K and T are constants; for the sake of clarity I'll add the values

Q = 1.6×10⁻¹⁹
K = 1.38×10⁻²³
T = 77.6

and n and I_0 are the two parameters that I'm trying to fit.
My xdata is a list of experimental data points, as is my ydata. So as of yet this is my code:
from __future__ import division
import scipy.optimize as optimize
import numpy

# experimental data: xdata is voltage and ydata is current
xdata = numpy.array([1.07, 1.07994, 1.08752, 1.09355,
                     1.09929, 1.10536, 1.10819, 1.11321,
                     1.11692, 1.12099, 1.12435, 1.12814,
                     1.13181, 1.13594, 1.1382, 1.14147,
                     1.14443, 1.14752, 1.15023, 1.15231,
                     1.15514, 1.15763, 1.15985, 1.16291, 1.16482])
ydata = [0.00205,
         0.004136, 0.006252, 0.008252, 0.010401,
         0.012907, 0.014162, 0.016498, 0.018328,
         0.020426, 0.022234, 0.024363, 0.026509,
         0.029024, 0.030457, 0.032593, 0.034576,
         0.036725, 0.038703, 0.040223, 0.042352,
         0.044289, 0.046043, 0.048549, 0.050146]

def f(x, I0, N):
    # I0 = 7.85E-07
    # N = 3.185413895
    Q = 1.66E-19
    K = 1.38065E-23
    T = 77.3692
    return I0*(numpy.e**((Q*x)/(N*K*T)) - 1)

result = optimize.curve_fit(f, xdata, ydata)  # trying to fit I0 and N
But the answer doesn't give suitably optimized parameters.
Any help would be hugely appreciated. I realize there may be something obvious I am missing, I just can't see what it is!
I have tried this, and for some reason, if you throw out those constants so that the function becomes

def f(x, I0, N):
    return I0*(numpy.exp(x/N) - 1)

you get something reasonable:

1.86901114e-13, 4.41838309e-02
It's true that it gets better when we get rid of the constants. Define the function as

def f(x, A, B):
    return A*(numpy.exp(B*x) - 1)

and fit it with curve_fit; you'll get A, which is exactly I0 (A = I0), and B, from which you can obtain N simply via N = Q/(B*K*T). I managed to get a pretty good fit.
I think that when there are too many constants, the algorithm gets confused in some way.
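A sketch of that recovery step with the question's constants (the starting guess p0 is my own addition; curve_fit's default guess of all ones can stall on this exponential):

popt, pcov = optimize.curve_fit(f, xdata, ydata, p0=(1e-7, 10))
A, B = popt
Q, K, T = 1.6e-19, 1.38065e-23, 77.3692
I0 = A                # A is directly I0
N = Q / (B * K * T)   # invert B = Q/(N*K*T)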
How would I fit a straight line and a quadratic to the data set below using the leastsq function from scipy.optimize? I know how to do it with polyfit, but I need to use the leastsq function.
Here are the x and y data sets:
x: 1.0,2.5,3.5,4.0,1.1,1.8,2.2,3.7
y: 6.008,15.722,27.130,33.772,5.257,9.549,11.098,28.828
Can someone help me out please?
The leastsq() method finds the set of parameters that minimize the error function (the difference between yExperimental and yFit).
I used a tuple to pass the parameters, and lambda functions for the linear and quadratic fits.
leastsq starts from a first guess (an initial tuple of parameters) and tries to minimize the error function. At the end, if leastsq succeeds, it returns the list of parameters that best fit the data (I printed it to see).
I hope it works. Best regards.
from scipy.optimize import leastsq
import numpy as np
import matplotlib.pyplot as plt

def main():
    # data provided
    x = np.array([1.0, 2.5, 3.5, 4.0, 1.1, 1.8, 2.2, 3.7])
    y = np.array([6.008, 15.722, 27.130, 33.772, 5.257, 9.549, 11.098, 28.828])
    # here, create lambda functions for Line, Quadratic fit
    # tpl is a tuple that contains the parameters of the fit
    funcLine = lambda tpl, x: tpl[0]*x + tpl[1]
    funcQuad = lambda tpl, x: tpl[0]*x**2 + tpl[1]*x + tpl[2]
    # func is going to be a placeholder for funcLine, funcQuad or whatever
    # function we would like to fit
    func = funcLine
    # ErrorFunc is the difference between the func and the y "experimental" data
    ErrorFunc = lambda tpl, x, y: func(tpl, x) - y
    # tplInitial contains the "first guess" of the parameters
    tplInitial1 = (1.0, 2.0)
    # leastsq finds the set of parameters in the tuple tpl that minimizes
    # ErrorFunc = yfit - yExperimental
    tplFinal1, success = leastsq(ErrorFunc, tplInitial1[:], args=(x, y))
    print("linear fit", tplFinal1)
    xx1 = np.linspace(x.min(), x.max(), 50)
    yy1 = func(tplFinal1, xx1)
    # ------------------------------------------------
    # now the quadratic fit
    # ------------------------------------------------
    func = funcQuad
    tplInitial2 = (1.0, 2.0, 3.0)
    tplFinal2, success = leastsq(ErrorFunc, tplInitial2[:], args=(x, y))
    print("quadratic fit", tplFinal2)
    xx2 = xx1
    yy2 = func(tplFinal2, xx2)
    plt.plot(xx1, yy1, 'r-', x, y, 'bo', xx2, yy2, 'g-')
    plt.show()

if __name__ == "__main__":
    main()
With optimize.curve_fit the code is simpler: there is no need to define the residual (error) function.

from scipy import optimize
import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots()

# data
x = np.array([1.0, 2.5, 3.5, 4.0, 1.1, 1.8, 2.2, 3.7])
y = np.array([6.008, 15.722, 27.130, 33.772, 5.257, 9.549, 11.098, 28.828])

# modeling functions
def funcLine(x, a, b):
    return a*x + b

def funcQuad(x, a, b, c):
    return a*x**2 + b*x + c

# optimize constants for the linear function
constantsLine, _ = optimize.curve_fit(funcLine, x, y)
X = np.linspace(x.min(), x.max(), 50)
Y1 = funcLine(X, *constantsLine)

# optimize constants for the quadratic function
constantsQuad, _ = optimize.curve_fit(funcQuad, x, y)
Y2 = funcQuad(X, *constantsQuad)

plt.plot(X, Y1, 'r-', label='linear approximation')
plt.plot(x, y, 'bo', label='data points')
plt.plot(X, Y2, 'g-', label='quadratic approximation')
plt.legend()
ax.set_title("Nonlinear Least Square Problems", fontsize=18)
plt.show()
Here's a super simple example. Picture a paraboloid, like a bowl with sides growing like a parabola. If we put the bottom at coordinates (x, y) = (a, b) and then minimize the height of the paraboloid over all values of x and y, we would expect the minimum to be at x=a and y=b. Here's code that does this.
import random
from scipy.optimize import least_squares

a, b = random.randint(1, 1000), random.randint(1, 1000)
print("Expect", [a, b])

def f(args):
    # height of the paraboloid at (x, y), used as a single scalar residual
    x, y = args
    return (x-a)**2 + (y-b)**2

x0 = [-1, -3]
result = least_squares(fun=f, x0=x0)
print(result.x)
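Since least_squares minimizes half the sum of squared residuals, the more typical usage returns a vector of per-coordinate residuals; the minimizer lands in the same place either way:

def residuals(args):
    x, y = args
    # least_squares squares and sums these components itself
    return [x - a, y - b]

result = least_squares(fun=residuals, x0=x0)
print(result.x)  # again approximately [a, b]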