Computing simultaneous partial differential equations using FiPy in Python

I was trying to solve a PDE system using FiPy. Since I'm new to it, I'm unable to debug the error in my code: instead of a spatial pattern, it just produces different monochromatic squares over a period of time. The equations I'm trying to solve are:
∂u(x,y,t)/∂t = G_1(u,v) + d_11 ∇²u + d_12 ∇²v
∂v(x,y,t)/∂t = G_2(u,v) + d_21 ∇²u + d_22 ∇²v
where G_1(u,v) = u(1-u) - Euv/(u+Av) and G_2(u,v) = Bv(1-v) + Guv/(u+Av) - Dv, as implemented in the code below.
from fipy import *

nx = ny = 200
dx = dy = 0.25
L = dx*nx
dt = 0.01
E = 1.0
A = 0.91881  # alpha
B = 0.0327   # beta
D = 0.05     # delta
G = 0.15     # gamma
d11 = 0.1
d12 = 0.01
d21 = 0.5
d22 = 1.5

mesh = Grid2D(dx=dx, dy=dy, nx=nx, ny=ny)
u = CellVariable(name='u Variable', mesh=mesh)  # prey
v = CellVariable(name='v Variable', mesh=mesh)  # predator
u.setValue(GaussianNoiseVariable(mesh=mesh, mean=0.18, variance=0.01))
v.setValue(GaussianNoiseVariable(mesh=mesh, mean=0.6, variance=0.01))

eq_u = (TransientTerm(coeff=1, var=u)
        == u*(1 - u) - (u*v*E)/(u + A*v)
        + ImplicitDiffusionTerm(coeff=d11, var=u)
        + ImplicitDiffusionTerm(coeff=d12, var=v))
eq_v = (TransientTerm(coeff=1, var=v)
        == B*v*(1 - v) + (u*v*G)/(u + A*v) - D*v
        + ImplicitDiffusionTerm(coeff=d21, var=u)
        + ImplicitDiffusionTerm(coeff=d22, var=v))

# creating viewers
if __name__ == "__main__":
    viewer_u = Viewer(vars=u, datamin=0.16, datamax=0.18)
    viewer_u.plot()
    viewer_v = Viewer(vars=v, datamin=0.0, datamax=0.4)
    viewer_v.plot()

# solving
steps = 250
for step in range(steps):
    eq_u.solve(var=u, dt=dt)
    eq_v.solve(var=v, dt=dt)
    if __name__ == "__main__":
        viewer_u.plot()
        viewer_v.plot()

These equations are ODEs, not PDEs. FiPy may be able to solve them, but there are other tools much better optimized for solving ODEs, such as SciPy's odeint and scikits.odes.
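For illustration, here is a minimal sketch (an editorial addition, not from the original post) of the same reaction kinetics integrated as plain ODEs with scipy.integrate.odeint, reusing the parameter values from the question:
import numpy as np
from scipy.integrate import odeint

E, A, B, D, G = 1.0, 0.91881, 0.0327, 0.05, 0.15

def rhs(y, t):
    # reaction terms G_1 and G_2 from the question, with no spatial coupling
    u, v = y
    du = u*(1 - u) - E*u*v/(u + A*v)
    dv = B*v*(1 - v) + G*u*v/(u + A*v) - D*v
    return [du, dv]

t = np.linspace(0.0, 100.0, 1001)
sol = odeint(rhs, [0.18, 0.6], t)
print(sol[-1])  # settles near the fixed point (about u = 0.179, v = 0.598)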
I am not sure exactly what you mean by "It is not giving any pattern but just different monochromatic square(s) over a period of time." The random initial condition does not appear to evolve, but if I increase dt to 1.0, then V evolves to a uniform value of about 0.598 and U to about 0.179 (U initially appeared not to evolve for me, but that turned out to be a display glitch of some kind). There is no pattern in space (if that's what you were expecting) because these equations have no spatial dependence; they only have partial derivatives in time, so there is absolutely nothing to cause the variables to flux from one cell to another.
You appear to get away with it here, but your variables should always be defined on the same mesh, rather than u_mesh for U and v_mesh for V.
No-flux boundary conditions are the default in FiPy, so there's no reason to specify U.faceGrad.constrain, etc.
Furthermore, there are no fluxes in these equations (because they're ODEs), so setting boundary fluxes isn't meaningful.
Normally I would recommend making your equations more implicit, coupling them together, and sweeping, but none of this seems to have much effect on their evolution in this case.
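Coupling and sweeping in FiPy look roughly like this (a sketch reusing the names from the question's code; the terms must name their variables via var=, as they already do above):
eqn = eq_u & eq_v  # couple the two equations into one system
for step in range(steps):
    res = 1e10
    while res > 1e-4:  # sweep until the residual is small
        res = eqn.sweep(dt=dt)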

Related

Solve an equation (with a max statement) in Python

I am seeking a solution to the following equation in Python:
345 - 0.25*t = 37.5*x_a
with
t = max(0, 10-x_a)*(20-10) + max(0, 25-5*x_a)*(3-4) + max(0, 4-0.25*x_a)*(30-12.5)
x_a = ?
If there is more than one solution to the problem (I am not even sure whether this can happen mathematically), I want my code to return a positive(!) value for x_a that minimizes t.
With my previous knowledge of the basics of Python, Pandas, and NumPy, I have no clue how to tackle this problem. Can someone give me a hint?
For clarification: I inserted some exemplary numbers into the equation to make the problem easier to grasp. In my final code there might of course be different numbers for different scenarios; however, in every scenario x_a is the only unknown variable.
Update
I thought about the problem again and came up with the following solution, which yields the same result as the calculations done by Michał Mazur:
import itertools
import numpy as np
from sympy import Eq, Symbol, solve

x_a = Symbol('x_a')
possible_elements = np.array([10 - x_a, 25 - 5*x_a, 4 - 0.25*x_a])
assumptions = np.array(list(itertools.product([True, False], repeat=3)))

for assumption in assumptions:
    # zero out the max() terms assumed to be inactive under this scenario
    elements = assumption.astype(int) * possible_elements
    t = elements[0]*(20 - 10) + elements[1]*(3 - 4) + elements[2]*(30 - 12.5)
    eqn = Eq(300 - 0.25*t, 40*x_a)
    solution = solve(eqn)
    if len(solution) > 2:
        print('Warning! the code may suppress possible solutions')
    if len(solution) == 1:
        solution = solution[0]
        # accept the solution only if it is consistent with the assumption
        if (((float(possible_elements[0].subs(x_a, solution))) > 0) == assumption[0]) &\
           (((float(possible_elements[1].subs(x_a, solution))) > 0) == assumption[1]) &\
           (((float(possible_elements[2].subs(x_a, solution))) > 0) == assumption[2]):
            print('solution:', solution)
Compared to the already-suggested approach, this may have a little advantage in that it does not rely on testing all possible values, so it works for very small as well as very large solutions without taking a lot of time. However, it is probably only useful as long as you don't have more complex functions for t (even having, for example, five max(...) statements, and therefore 2^5 = 32 scenarios to test, seems quite cumbersome).
As far as I'm concerned, I just realized that my problem is even more complex than I thought. For my project, the calculations needed to derive the value of t are quite entangled and cannot be written in just one equation. However, it is still a function that relies only on x_a. So I am still hoping for a Python solution similar to the proposed Solver in Excel, or I will stick to the approach of simply testing all possible numbers.
If you are interested in a solution a bit different from the Python one, I will give you a hand. Open up Excel with the Solver extension and plug in the data you are interested in checking, as follows:
Into E2 you plug the formula for t given above; into E4 you plug in
=300-0.25*E2
Into F4 you plug:
=40*F2
Then you open up your Solver menu.
Into Set Objective you put the variable t, which you want to minimize.
Into Changing Variables you put x_a.
Into the Constraints menu you put the equality of cells E4 and F4.
You check "Make Unconstrained Variables Non-Negative", which prevents your x_a variable from going below 0. Your computation is strictly non-linear, so you leave that solving method selected.
You click Solve. The computed value is presented on the screen.
The Python approach I can think of:
minimumval = 10100
minxa = 10000
eps = 0.01
# brute force: scan x_a from 0 to 10 in steps of 0.0001
for i in range(100000):
    x_a = i/10000
    t = max(0, 10-x_a)*(20-10) + max(0, 25-5*x_a)*(3-4) + max(0, 4-0.25*x_a)*(30-12.5)
    val = abs(300 - 0.25*t - 40*x_a)
    if val < eps:            # equality satisfied to within eps
        if t < minimumval:
            minimumval = t
            minxa = x_a
It isn't a direct solution: it only controls the error in the equality to within eps. However, it gives a solution.
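A root-finding alternative (an editorial sketch, not part of the original answers): since t is piecewise linear and continuous in x_a, the residual of the equality is continuous, so scipy.optimize.brentq can locate the root directly, assuming it changes sign on the bracket:
from scipy.optimize import brentq

def t_of(x_a):
    return (max(0, 10 - x_a)*(20 - 10)
            + max(0, 25 - 5*x_a)*(3 - 4)
            + max(0, 4 - 0.25*x_a)*(30 - 12.5))

def residual(x_a):
    return 300 - 0.25*t_of(x_a) - 40*x_a

x_a = brentq(residual, 0, 10)  # assumes a sign change on [0, 10]
print(x_a, t_of(x_a))
Because t_of here is a black-box function of x_a only, this approach also works when t cannot be written out as a single equation.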

The odeint function from scipy.integrate does not follow the given time vector when solving the system

I am trying to solve an ODE matrix system in which one of the terms changes with time. To set it to the correct value, the code should search the predefined time vector for the current instant and use that index (named control in the code) to set the values in the matrix, as shown below:
import numpy as np
from scipy.integrate import odeint as ode

# (Here, I cut some objects that are not relevant to the issue and that I can guarantee are correct)

def susp(x, TT):
    dxdt = []
    control = 0
    dxdtlong = np.zeros((1, 4), float)
    # find the index of TT in the time vector
    for i in range(time_size):
        ttt = t[i]
        if TT != ttt:
            control += 1
        else:
            break
    W[0, 0] = zr[control]
    W[1, 0] = zrp[control]
    matriz1 = np.matrix(np.dot(A, x))
    matriz2 = np.matrix(np.dot(H, W))
    for j in range(len(matriz1)):
        for k in range(len(matriz1[0])):
            dxdtlong = matriz1[k][j] + matriz2[k][j]
    dxdt.append(dxdtlong[0, 0])
    dxdt.append(dxdtlong[0, 1])
    dxdt.append(dxdtlong[0, 2])
    dxdt.append(dxdtlong[0, 3])
    data = np.reshape(dxdt, np.size(dxdt))
    return data

x = ode(susp, x0_reshaped, t, full_output=1)
t is an equally spaced time vector created with linspace.
Oddly, when running this code, the second time point passed to the ODE function isn't t[1], but a value that isn't even in my time vector.
Can someone please explain to me what I am doing wrong?
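For context (an editorial note, not part of the original thread): odeint's underlying LSODA integrator chooses its own internal step sizes adaptively, so the right-hand-side function is called at times that need not coincide with the output grid t, and exact index matching therefore fails. A common pattern is to interpolate the forcing signals at the solver's time instead; a sketch, assuming t, zr, zrp, A, H, and W are defined as in the question:
import numpy as np

def susp(x, TT):
    # interpolate the inputs at the solver's current time TT
    W[0, 0] = np.interp(TT, t, zr)
    W[1, 0] = np.interp(TT, t, zrp)
    dxdt = np.dot(A, x.reshape(-1, 1)) + np.dot(H, W)
    return np.ravel(dxdt)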

Solving a non-linear first-order differential equation and getting a break in the plot

I am trying to solve an elliptic differential equation using the fourth-order Runge-Kutta method in Python.
After execution, I get only a very small part of the expected plot, along with the error:
"RuntimeWarning: invalid value encountered in double_scalars"
import numpy as np
import matplotlib.pyplot as plt

# Define constants
g = 9.8
L = 1.04

# Define the differential function
def fun(y, x):
    return -(2*(g/L)*(np.cos(y)-np.cos(np.pi/6)))**(1/2)

# Define variable arrays
x = np.zeros(1000)
y = np.zeros(1000)
y[0] = np.pi/6
dx = 0.5

# Runge-Kutta method
for i in range(len(y)-1):
    k1 = fun(x[i], y[i])
    k2 = fun(x[i]+dx/2, y[i]+dx*k1/2)
    k3 = fun(x[i]+dx/2, y[i]+dx*k2/2)
    k4 = fun(x[i]+dx, y[i]+dx*k3)
    y[i+1] = y[i]+dx/6*(k1+2*k2+2*k3+k4)
    x[i+1] = x[i]+dx

#print(y)
#print(x)
plt.plot(x, y)
plt.xlabel('Time')
plt.ylabel('Theta')
plt.grid()
The graph I obtain shows only a small part of the expected curve.
My question is: why am I getting this error message? Thanks for helping!
Several points lead to this behavior. First, you switched the order of the arguments in the ODE function, probably to make it compatible with odeint. Use the tfirst=True optional argument of odeint to avoid that and have the independent variable always first.
The actual source of the error is the term
(np.cos(y)-np.cos(np.pi/6))**(1/2)
Remember that in your version y receives the value x[i], so at some point the expression under the root becomes negative.
If you correct the first point, you will probably still encounter the second error, as the exact solution moves parabolically toward the fixed point, so the stages of RK4 are likely to overshoot it. One can fix that by providing a sufficiently safeguarded square-root function:
def softroot(x):
    return x/max(1e-12, abs(x))**0.5

# Define the differential function
def fun(x, y):
    return -(2*(g/L)*softroot(np.cos(y)-np.cos(np.pi/6)))

# Define variable arrays
dx = 0.01
x = np.arange(0, 1, dx)
y = np.zeros(x.shape)
y[0] = np.pi/6
...
...
results in a flat plot, since the solution already starts at the fixed point. Shifting the initial point slightly down, to y[0]=np.pi/6-1e-8, produces a jump to the fixed point below.
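Assembling the answer's fragments into one runnable script (an editorial assembly; the pieces themselves are from the answer above):
import numpy as np
import matplotlib.pyplot as plt

g = 9.8
L = 1.04

def softroot(x):
    return x/max(1e-12, abs(x))**0.5

def fun(x, y):  # independent variable first
    return -(2*(g/L)*softroot(np.cos(y)-np.cos(np.pi/6)))

dx = 0.01
x = np.arange(0, 1, dx)
y = np.zeros(x.shape)
y[0] = np.pi/6 - 1e-8  # start slightly off the fixed point

for i in range(len(x)-1):
    k1 = fun(x[i], y[i])
    k2 = fun(x[i]+dx/2, y[i]+dx*k1/2)
    k3 = fun(x[i]+dx/2, y[i]+dx*k2/2)
    k4 = fun(x[i]+dx, y[i]+dx*k3)
    y[i+1] = y[i] + dx/6*(k1+2*k2+2*k3+k4)

plt.plot(x, y)
plt.xlabel('Time')
plt.ylabel('Theta')
plt.grid()
plt.show()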

High memory usage when doing direct transcription with sympy equations

I used sympy to derive, via Lagrange, the equations of motion of my 3-link robot. The resulting equation of motion, in the form theta_dot_dot = f(theta, theta_dot), turned out very complicated, with a LOT of cos and sin terms. I then lambdified the functions for use with Drake, replacing all the sympy.sin and sympy.cos with drake.sin and drake.cos.
The final function can be evaluated numerically (i.e., given theta and theta_dot, find theta_dot_dot) reasonably efficiently, in the milliseconds range.
I then tried to use direct transcription to do trajectory optimization. Note that I did not use the DirectTranscription library; instead I manually added the necessary constraints.
The constraints are added roughly as follows:
for i in range(NUM_TIME_STEPS-1):
    print("Adding constraints for t = " + str(i))
    tau = mp.NewContinuousVariables(3, "tau_%d" % i)
    next_state = mp.NewContinuousVariables(8, "state_%d" % (i+1))
    for j in range(8):
        mp.AddConstraint(next_state[j] <= (state_over_time[i] + TIME_INTERVAL*derivs(state_over_time[i], tau))[j])
        mp.AddConstraint(next_state[j] >= (state_over_time[i] + TIME_INTERVAL*derivs(state_over_time[i], tau))[j])
    state_over_time[i+1] = next_state
    tau_over_time[i] = tau
The problem I'm facing right now is that on each iteration of adding constraints, my memory usage increases by around 70-100 MB. This means I cannot use more than about 50 time steps before the program crashes with an out-of-memory error.
I'm wondering what I can do to make trajectory optimization work for my robot. Obviously I can try to simplify (by hand or otherwise) the equations of motion... but is there anything else I can try? Is it even normal that the constraints take up so much memory? Am I doing something very wrong here?
You're pushing Drake's symbolic expressions through your complex equations. Making that better is a good goal, but you probably want to avoid it entirely by using the other overload of AddConstraint:
AddConstraint(your_method, lb, ub, vars)
https://drake.mit.edu/pydrake/pydrake.solvers.mathematicalprogram.html?highlight=addconstraint#pydrake.solvers.mathematicalprogram.MathematicalProgram.AddConstraint
That will use your Python code as a function pointer and should use autodiff instead of symbolic expressions.
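A rough sketch of that overload applied to the time-stepping constraint above (an editorial adaptation of the question's variables, shown only to illustrate the call pattern):
import numpy as np

def dynamics_defect(vars):
    # vars stacks [state (8), tau (3), next_state (8)]
    state, tau, next_state = vars[:8], vars[8:11], vars[11:]
    return next_state - (state + TIME_INTERVAL*derivs(state, tau))

mp.AddConstraint(dynamics_defect,
                 lb=np.zeros(8), ub=np.zeros(8),
                 vars=np.concatenate([state_over_time[i], tau, next_state]))
The solver then evaluates dynamics_defect numerically (with autodiff) instead of building a huge symbolic expression tree for every constraint.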

Is there a Python equivalent to the smooth.spline function in R

The smooth.spline function in R allows a tradeoff between roughness (as defined by the integrated square of the second derivative) and fitting the points (as defined by the sum of squared residuals). This tradeoff is controlled by the spar or df parameter. At one extreme you get the least-squares line, and at the other a very wiggly curve that passes through all of the data points (or through the mean if you have duplicated x values with different y values).
I have looked at scipy.interpolate.UnivariateSpline and other spline variants in Python; however, they seem to trade off only by adjusting the number of knots against a threshold (called s) on the allowed residual sum of squares. By contrast, smooth.spline in R allows knots at all the x values without necessarily producing a wiggly curve that hits all the points; the penalty comes from the second derivative.
Does Python have a spline fitting mechanism that behaves in this way? Allowing all knots but penalizing the second derivative?
You can use R functions in Python with rpy2:
import numpy as np
import matplotlib.pyplot as plt
import rpy2.robjects as robjects

r_y = robjects.FloatVector(y_train)
r_x = robjects.FloatVector(x_train)

r_smooth_spline = robjects.r['smooth.spline']  # extract the R function
spline1 = r_smooth_spline(x=r_x, y=r_y, spar=0.7)  # run the smoothing function
ySpline = np.array(robjects.r['predict'](spline1, robjects.FloatVector(x_smooth)).rx2('y'))
plt.plot(x_smooth, ySpline)
If you want to set lambda directly, spline1 = r_smooth_spline(x=r_x, y=r_y, lambda=42) doesn't work, because lambda is a reserved word in Python, but there is a solution: see "How to use the lambda argument of smooth.spline in RPy WITHOUT Python interprating it as lambda".
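The usual workaround (a sketch following that linked question) is to pass the reserved keyword via dictionary unpacking:
spline1 = r_smooth_spline(x=r_x, y=r_y, **{'lambda': 42})  # 'lambda' supplied as a string key, bypassing the keyword restriction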
To get the code running you first need to define the data x_train and y_train, and you can define x_smooth = np.array(np.linspace(-3, 5, 1920)) if you want to plot the result between -3 and 5 at full-HD resolution.
Note that this code is not fully compatible with Jupyter notebooks for the latest versions of rpy2. You can fix this with !pip install -Iv rpy2==3.4.2, as described in "NotImplementedError: Conversion 'rpy2py' not defined for objects of type '<class 'rpy2.rinterface.SexpClosure'>' only after I run the code twice".
I've been looking for exactly the same thing, but would rather not have to translate the code to Python. The Splinter package seems like an option, however: https://github.com/bgrimstad/splinter
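One more option (an editorial addition, not from the original thread): SciPy 1.10+ ships scipy.interpolate.make_smoothing_spline, which places knots at the data points and penalizes the integrated squared second derivative via lam, much like R's smooth.spline:
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import make_smoothing_spline

x = np.linspace(-3, 5, 50)                  # example data
y = np.sin(x) + 0.1*np.random.randn(50)

spl = make_smoothing_spline(x, y, lam=0.5)  # larger lam -> smoother curve
x_smooth = np.linspace(-3, 5, 1920)
plt.plot(x_smooth, spl(x_smooth))
plt.show()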
