I am trying to calculate the derivative of a general composite function using SymPy. In my specific case the script is the following:
from sympy import *
t=symbols('t')
p=Function('p')
x=Function('x')
v=diff(x(p(t)),t)
a=diff(v,t)
For the variable a this yields:
Derivative(p(t), t)**2*Derivative(x(p(t)), p(t), p(t)) + Derivative(p(t), t, t)*Subs(Derivative(x(_xi_1), _xi_1), (_xi_1,), (p(t),))
If I call doit(), the answer still contains a Subs object:
a.doit() #answer: Derivative(p(t), t)**2*Subs(Derivative(x(_xi_3), _xi_3, _xi_3), (_xi_3,), (p(t),)) + Derivative(x(p(t)), p(t))*Derivative(p(t), t, t)
Mathematically the answer is correct, but I need the output in the following format (without Subs objects):
Derivative(p(t), t)**2*Derivative(x(p(t)), p(t), p(t)) + Derivative(x(p(t)), p(t))*Derivative(p(t), t, t)
Is there any way to achieve the desired result? To be clear, this example is greatly simplified compared to my original expression, so I need a general way to get the desired output.
Indeed, repeated applications of doit() in this case result in flip-flopping between two forms of the expression: half the time the first addend has Subs, half the time it's the second.
But you can deal with the issue as follows:
for b in a.atoms(Subs):
    a = a.xreplace({b: b.doit()})
This returns Derivative(p(t), t)**2*Derivative(x(p(t)), p(t), p(t)) + Derivative(x(p(t)), p(t))*Derivative(p(t), t, t) as desired.
The trick is that atoms(Subs) is the set of all Subs objects in the expression, and doit is applied only to them, not to Derivative objects where it only messes things up. (Ideally, doit would not mess Derivative objects up in the first place...)
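If this comes up a lot, the same idea can be wrapped in a small helper (a sketch of my own, not part of the answer above; the name remove_subs is arbitrary):
from sympy import Subs

def remove_subs(expr):
    # evaluate only the Subs atoms, leaving Derivative objects untouched
    return expr.xreplace({s: s.doit() for s in expr.atoms(Subs)})

# usage: remove_subs(a) gives the Subs-free form shown above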
I want to use the fact that [b, bd] = 1, where [ ] is the commutator, to compute commutators of more complicated expressions using sympy instead of doing it by hand. Instead, I get huge expressions that contain the commutator, but it is not replaced by 1. Here's the code:
from sympy import *
from sympy.physics.secondquant import *
comm1=simplify(Commutator(B(0),Bd(0)).doit())
comm1
The output in this case is 1, which corresponds to the [b, bd] = 1 case. But if I input a more complicated expression such as
w1,w2,g=symbols('w1 w2 g')
H=w1*B(0)*Bd(0)+w2*B(1)*Bd(1)+g*Bd(0)*B(1)+conjugate(g)*Bd(1)*B(0)
comm2=simplify(Commutator(H,B(0)))
print(simplify(comm2))
I get
-g*(AnnihilateBoson(0)*CreateBoson(0)*AnnihilateBoson(1) - CreateBoson(0)*AnnihilateBoson(1)*AnnihilateBoson(0)) - w1*(AnnihilateBoson(0)*AnnihilateBoson(1)*CreateBoson(1) - AnnihilateBoson(1)*CreateBoson(1)*AnnihilateBoson(0)) + w1*(AnnihilateBoson(0)*CreateBoson(0)*AnnihilateBoson(0) - AnnihilateBoson(0)**2*CreateBoson(0)) - conjugate(g)*(AnnihilateBoson(0)*CreateBoson(1)*AnnihilateBoson(0) - CreateBoson(1)*AnnihilateBoson(0)**2)
which clearly would simplify quite a lot if [b, bd] = 1 were substituted. Does anyone know how to do this? Or could anyone point me to another tool capable of doing it?
The key trick is always to use the commutation relation to move either 'a' or 'a^\dagger' to the left, until you cannot do that anymore.
I don't have a good Sympy answer, but since you asked about other tools, here's a shameless plug about how you do it in Cadabra (https://cadabra.science) (which uses Sympy for various things, though not this particular computation). First set up the two sets of creation/annihilation operators using:
{a_{0}, ad_{0}}::NonCommuting;
{a_{1}, ad_{1}}::NonCommuting;
{a_{0}, ad_{0}, a_{1}, ad_{1}}::SortOrder.
They'll print nicer with
\bar{#}::Accent;
ad_{n?}::LaTeXForm("a^\dagger",n?,"").
Your Hamiltonian:
H:= w_{1} a_{0} ad_{0} + w_{2} a_{1} ad_{1} + g ad_{0} a_{1} + \bar{g} ad_{1} a_{0};
The commutator you want to compute:
ex:= #(H) a_{0} - a_{0} #(H);
Just expanding this (without simplification using the [a,ad]=1 commutator) is done with
distribute(ex);
sort_product(ex);
where the 2nd line moves operators with different subscripts through each other, but keeps the order of operators with the same subscripts. Applying the commutator until the expression no longer changes:
converge(ex):
    substitute(ex, $a_{n?} ad_{n?} = ad_{n?} a_{n?} + 1$)
    distribute(ex)
;
to finally give '-a_0 w_1 - a_1 g'.
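For completeness, here is a hedged SymPy-side sketch of my own (not part of the answer above, and using the sympy.physics.quantum boson operators rather than the secondquant B/Bd of the question): normal_ordered_form repeatedly applies [a, a†] = 1, much like the converge/substitute loop above.
from sympy import symbols, conjugate
from sympy.physics.quantum import Dagger, Commutator
from sympy.physics.quantum.boson import BosonOp
from sympy.physics.quantum.operatorordering import normal_ordered_form

w1, w2, g = symbols('w1 w2 g')
a0, a1 = BosonOp('a0'), BosonOp('a1')

H = w1*a0*Dagger(a0) + w2*a1*Dagger(a1) + g*Dagger(a0)*a1 + conjugate(g)*Dagger(a1)*a0
comm = Commutator(H, a0).doit().expand()
# independent=True treats differently named modes as commuting;
# the result should reduce to -w1*a0 - g*a1, matching the Cadabra output above
print(normal_ordered_form(comm, independent=True))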
I don't know if this question has been asked before on SO, but I will go ahead and post it here. I am attempting to solve a simple system with a PID controller; my system of differential equations is given below. I am basically attempting to code a very basic PID algorithm. The structure of my control u depends on both the derivative and the integral of the error term. I don't have any problem with the derivative term; it is the integral term that is creating the problem in my code. The problem crops up when I assign s=0 in the beginning and use it in my function, as described in my code below. Is there a way to bypass it? I tried declaring s and told as global variables, but it didn't solve my problem. In a nutshell, what I am doing is adding the state x1 every time and multiplying by dt (which is given by t-told).
Kindly help me iron out this issue; my code is attached below.
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt
plt.style.use('bmh')
t0=0
y0=[0.1,0.2]
kp,kd,ki=2,0.5,0.8
s,told=0,0
def pid(t,Y):
    x1,x2=Y[0],Y[1]
    e=x1-1
    de=x2
    s=(x1+s)
    integral=s*(t-told)
    told=t
    #ie=
    u=kp*e+kd*de+ki*integral
    x1dot=x2
    x2dot=u-5*x1-2*x2
    return [x1dot,x2dot]
solver=ode(pid).set_integrator('dopri5',rtol=1e-6,method='bdf',nsteps=1e5,max_step=1e-3)
solver.set_initial_value(y0,t0)
t1=10
dt=5e-3
sol = [ [yy] for yy in y0 ]
t=[t0]
while solver.successful() and solver.t<t1:
    solver.integrate(solver.t+dt)
    for k in range(2): sol[k].append(solver.y[k])
    t.append(solver.t)
print(len(sol[0]))
print(len(t))
x1=np.array(sol[0])
x2=np.array(sol[1])
e=x1-1
de=x2
u=kp*e+kd*de
for k in range(2):
    if k==0:
        plt.subplot(2,1,k+1)
        plt.plot(t,sol[k],label='x1')
        plt.plot(t,sol[k+1],label='x2')
        plt.legend(loc='lower right')
    else:
        plt.subplot(2,1,k+1)
        plt.plot(t,u)
plt.show()
You are making assumptions about the solver and the time steps that it visits that are not justified. With your hacking of the integral, even if it were mathematically sound (it should look like integral = integral + e*(t-told), which gives an order-1 integration method), you reduce the order of any integration method, probably down to 1; if you are lucky, only to order 2.
A mathematically correct way to implement this system is to introduce a third variable x3 for the integral of e, that is, the derivative of x3 is e. That the correct first-order system has to be of dimension 3 can be read off the fact that (eliminating e) your system contains 3 differentiation/integration operations. With that, your system becomes
def pid(t,Y):
    x1, x2, x3 = Y
    e = x1 - 1
    x1dot = x2
    edot = x1dot
    x3dot = e
    u = kp*e + kd*edot + ki*x3
    x2dot = u - 5*x1 - 2*x2
    return [x1dot, x2dot, x3dot]
Note that no global dynamic variables are necessary, only the constants (which could also be passed as parameters, whichever seems more efficient or readable).
Now you will also need an initial value for x3. It is not visible from the system what that integration constant should be, but your code seems to suggest 0.
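A hedged usage sketch (my own, reusing the solver setup from the question and taking x3(0) = 0 as suggested):
from scipy.integrate import ode

kp, kd, ki = 2, 0.5, 0.8          # constants used inside pid

solver = ode(pid).set_integrator('dopri5', rtol=1e-6, nsteps=100000, max_step=1e-3)
solver.set_initial_value([0.1, 0.2, 0.0], 0)   # [x1, x2, x3], with x3(0)=0

t, sol = [0.0], [[0.1], [0.2], [0.0]]
while solver.successful() and solver.t < 10:
    solver.integrate(solver.t + 5e-3)
    t.append(solver.t)
    for k in range(3):
        sol[k].append(solver.y[k])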
First of all, you need to include the s variable in the pid function:
def pid(s, t, Y): ...
Easiest solution I can see right now is to create a class with s and told as properties of this class:
class PIDSolver:
    def __init__(self):
        self.t0 = 0
        self.y0 = [0.1, 0.2]
        self.kp, self.kd, self.ki = 2, 0.5, 0.8
        self.s, self.told = 0, 0

    def pid(self, t, Y):
        x1, x2 = Y[0], Y[1]
        e = x1 - 1
        de = x2
        self.s = x1 + self.s
        integral = self.s*(t - self.told)
        self.told = t
        u = self.kp*e + self.kd*de + self.ki*integral
        x1dot = x2
        x2dot = u - 5*x1 - 2*x2
        return [x1dot, x2dot]
This solves the first part of your problem. Use pidsolver = PIDSolver() in the next part of your solution.
I solved this problem myself by using the set_f_params() method and passing a list as its argument. I also passed a third argument to pid(), i.e. pid(t, Y, arg), and finally assigned s, told = arg[0], arg[1].
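For reference, here is a minimal sketch of what that looks like (my own reconstruction of the description above; note that the updated s and told also have to be written back into the list, otherwise the accumulator never changes):
from scipy.integrate import ode

kp, kd, ki = 2, 0.5, 0.8

def pid(t, Y, arg):                 # arg is the mutable list set via set_f_params
    s, told = arg[0], arg[1]
    x1, x2 = Y
    e = x1 - 1
    de = x2
    s = x1 + s
    integral = s*(t - told)
    arg[0], arg[1] = s, t           # write the updated state back
    u = kp*e + kd*de + ki*integral
    return [x2, u - 5*x1 - 2*x2]

solver = ode(pid).set_integrator('dopri5', rtol=1e-6, nsteps=100000, max_step=1e-3)
solver.set_initial_value([0.1, 0.2], 0)
solver.set_f_params([0, 0])         # s=0, told=0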
I have the following problem: I have two sets of data (set T and set F) and the following functions:
x(T) = arctan(T - c0)
A(x(T)) = arctan(x(T) - c1)
B(x(T)) = arctan(x(T) - c2)
Y(x(T), F) = (A(x(T)) - B(x(T)))/2 - A(x(T))*arctan(F - c3) + B(x(T))*arctan(F - c4)
# where c0, c1, c2, c3, c4 are constants
Now I want to create a surface plot of Y. For that I would like to implement Y as a Python (numpy) function, which turns out to be quite complicated, because Y takes other functions as input.
Another idea of mine was to evaluate x, B and A on the data separately and store the results in numpy arrays. With those I could also get the output of Y, but I don't know which way is better for plotting the data, and I really would like to know how to write Y as a Python function.
Thank you very much for your help.
It is absolutely possible to use functions as input parameters to other functions. A use case could look like:
def plus_one(standard_input_parameter_like_int):
    return standard_input_parameter_like_int + 1

def apply_function(function_as_input, standard_input_parameter):
    return function_as_input(standard_input_parameter)

if __name__ == '__main__':
    print(apply_function(plus_one, 1))
I hope that helps to solve your specific problem.
[...] something like def s(x,y,z,*args,*args2): will yield an error.
This is perfectly normal, as only one variable-length non-keyword argument list is allowed per function (conventionally named *args, though the name itself is arbitrary). So if you remove the asterisk from the second one, you should actually be able to define s properly.
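For illustration (my own sketch, not from the question): a second catch-all can only be collected as keyword arguments.
def s(x, y, z, *args, **kwargs):
    # args collects extra positional values, kwargs collects extra named ones
    return x + y + z + sum(args) + sum(kwargs.values())

print(s(1, 2, 3, 4, 5, w=6))   # prints 21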
Regarding your initial question you could do something like:
import numpy as np

c = [0.2, -0.2, 0, 0, 0, 0]

def x(T):
    return np.arctan(T - c[0])

def A(xfunc, T):
    return np.arctan(xfunc(T) - c[1])

def B(xfunc, T):
    return np.arctan(xfunc(T) - c[2])

def Y(xfunc, Afunc, Bfunc, t, f):
    return (Afunc(xfunc, t) - Bfunc(xfunc, t))/2.0 - Afunc(xfunc, t)*np.arctan(f - c[3]) + Bfunc(xfunc, t)*np.arctan(f - c[4])

_tSet = np.linspace(-1, 1, 20)
_fSet = np.linspace(-1, 1, 20)
print(Y(x, A, B, _tSet, _fSet))
As you can see (and as you have probably already tested yourself, judging from your comment), you can use functions as arguments. And as long as you don't use any if conditions or other non-vectorized constructs in your 'sub'-functions, the top-level function should already be vectorized.
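Building on that, here is a hedged sketch (my own addition) of how the vectorized Y above could be turned into the surface plot asked about, using np.meshgrid and plot_surface:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D   # registers the 3d projection

T, F = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
Z = Y(x, A, B, T, F)        # works because Y only uses vectorized numpy calls

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(T, F, Z)
ax.set_xlabel('T')
ax.set_ylabel('F')
ax.set_zlabel('Y')
plt.show()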
I have a large (>2000 equations) system of ODE's that I want to solve with python scipy's odeint.
I have three problems that I want to solve (maybe I will have to ask 3 different questions?).
For simplicity, I will explain them here with a toy model, but please keep in mind that my system is large.
Suppose I have the following system of ODE's:
dS/dt = -beta*S
dI/dt = beta*S - gamma*I
dR/dt = gamma*I
with beta = c*p*I
where c, p and gamma are parameters that I want to pass to odeint.
odeint is expecting a file like this:
def myODEs(y, t, params):
    c, p, gamma = params
    beta = c*p
    S = y[0]
    I = y[1]
    R = y[2]
    dydt = [-beta*S*I,
            beta*S*I - gamma*I,
            - gamma*I]
    return dydt
that then can be passed to odeint like this:
myoutput = odeint(myODEs, [1000, 1, 0], np.linspace(0, 100, 50), args = ([c,p,gamma], ))
I generated a text file in Mathematica, say myODEs.txt, where each line of the file corresponds to the RHS of my system of ODE's, so it looks like this:
#myODEs.txt
-beta*S*I
beta*S*I - gamma*I
- gamma*I
My text file looks similar to what odeint is expecting, but I am not quite there yet.
I have three main problems:
How can I pass my text file so that odeint understands that this is the RHS of my system?
How can I define my variables in a smart way, that is, in a systematic way? Since there are >2000 of them, I cannot manually define them. Ideally I would define them in a separate file and read that as well.
How can I pass the parameters (there are a lot of them) as a text file too?
I read this question that is close to my problems 1 and 2 and tried to copy it (I directly put values for the parameters so that I didn't have to worry about my point 3 above):
systemOfEquations = []
with open("myODEs.txt", "r") as fp:
    for line in fp:
        systemOfEquations.append(line)

def dX_dt(X, t):
    vals = dict(S=X[0], I=X[1], R=X[2], t=t)
    return [eq for eq in systemOfEquations]
out = odeint(dX_dt, [1000,1,0], np.linspace(0, 1, 5))
but I got the error:
odepack.error: Result from function call is not a proper array of floats.
ValueError: could not convert string to float: -((12*0.01/1000)*I*S),
Edit: I modified my code to:
import numpy as np
import sympy
import regex                        # or: import re as regex
from scipy.integrate import odeint

systemOfEquations = []
with open("SIREquationsMathematica2.txt", "r") as fp:
    for line in fp:
        pattern = regex.compile(r'.+?\s+=\s+(.+?)$')
        expressionString = regex.search(pattern, line)
        systemOfEquations.append(sympy.sympify(expressionString))

def dX_dt(X, t):
    vals = dict(S=X[0], I=X[1], R=X[2], t=t)
    return [eq for eq in systemOfEquations]

out = odeint(dX_dt, [1000, 1, 0], np.linspace(0, 100, 50))
and this works (although I don't quite get what the first two lines of the for loop are doing). However, I would like to make the process of defining the variables more automatic, and I still don't know how to use this solution while passing parameters in a text file. Along the same lines, how can I define parameters (that will depend on the variables) inside the dX_dt function?
Thanks in advance!
This isn't a full answer, but rather some observations/questions, but they are too long for comments.
dX_dt is called many times by odeint, with a 1d array y and a scalar time t; any extra parameters are supplied through the args tuple. y is generated by odeint and varies with each step. dX_dt should be streamlined so it runs fast.
Usually an expression like [eq for eq in systemOfEquations] can be simplified to systemOfEquations. [eq for eq...] doesn't do anything meaningful. But there may be something about systemOfEquations that requires it.
I'd suggest you print out systemOfEquations (for this small 3-line case), both for your benefit and ours. You are using sympy to translate the strings from the file into equations. We need to see what it produces.
Note that myODEs is a function, not a file. It may be imported from a module, which of course is a file.
The point of vals = dict(S=X[0], I=X[1], R=X[2], t=t) is to produce a dictionary that the sympy expressions can work with. A more direct (and I think faster) dX_dt function would look like:
def myODEs(y, t, params):
    c, p, gamma = params
    beta = c*p
    dydt = [-beta*y[0]*y[1],
            beta*y[0]*y[1] - gamma*y[1],
            - gamma*y[1]]
    return dydt
I suspect that the dX_dt that runs sympy generated expressions will be a lot slower than a 'hardcoded' one like this.
I'm going to add the sympy tag, because, as written, that is the key to translating your text file into a function that odeint can use.
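For what it's worth, here is a hedged sketch of my own (using the toy myODEs.txt above and made-up parameter values) of how sympy.lambdify could compile the parsed strings into plain numeric functions once, so that dX_dt itself stays fast:
import numpy as np
import sympy
from scipy.integrate import odeint

# name the symbols explicitly so that "I" parses as a Symbol,
# not as sympy's imaginary unit
syms = sympy.symbols('S I R beta gamma')
local_dict = dict(zip('S I R beta gamma'.split(), syms))

with open("myODEs.txt") as fp:
    exprs = [sympy.sympify(line, locals=local_dict) for line in fp
             if line.strip() and not line.lstrip().startswith('#')]

# compile each right-hand side into a numeric function, once, up front
funcs = [sympy.lambdify(syms, e) for e in exprs]

def dX_dt(X, t, beta, gamma):
    return [f(X[0], X[1], X[2], beta, gamma) for f in funcs]

out = odeint(dX_dt, [1000, 1, 0], np.linspace(0, 100, 50),
             args=(12*0.01/1000, 0.1))    # illustrative beta, gamma values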
I'd be inclined to put the equation variability into the parameters passed via args, rather than into a list of sympy expressions.
That is replace:
dydt = [-beta*y[0]*y[1],
        beta*y[0]*y[1] - gamma*y[1],
        - gamma*y[1]]
with something like
arg12=np.array([-beta, beta, 0])
arg1 = np.array([0, -gamma, -gamma])
arg0 = np.array([0,0,0])
dydt = arg12*y[0]*y[1] + arg1*y[1] + arg0*y[0]
Once this is right, the argxx definitions can be moved outside dX_dt and passed via args. Now dX_dt is just a simple, and fast, calculation.
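A minimal sketch of that idea (my own, with illustrative parameter values), where the coefficient arrays are built once and passed in via args:
import numpy as np
from scipy.integrate import odeint

c, p, gamma = 12, 0.01/1000, 0.1          # illustrative values only
beta = c*p
arg12 = np.array([-beta, beta, 0])
arg1 = np.array([0, -gamma, -gamma])
arg0 = np.array([0, 0, 0])

def dX_dt(y, t, arg12, arg1, arg0):
    # the whole right-hand side as one vectorized expression
    return arg12*y[0]*y[1] + arg1*y[1] + arg0*y[0]

out = odeint(dX_dt, [1000, 1, 0], np.linspace(0, 100, 50),
             args=(arg12, arg1, arg0))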
This whole sympy approach may work fine, but I'm afraid that in practice it will be slow. But someone with more sympy experience may have other insights.
I want to get rid of an extra substitution which sympy makes when differentiating a user-defined composite function. The code is:
from sympy import *

t = Symbol('t')
u = Function('u')
f = Function('f')
U = Symbol('U')
pprint(diff(f(u(t), t), t))
The output is:
d                d        ⎛ d           ⎞│
──(f(u(t), t)) + ──(u(t))⋅⎜───(f(ξ₁, t))⎟│
dt               dt       ⎝dξ₁          ⎠│ξ₁=u(t)
I guess it does this because you can't differentiate w.r.t. u(t), so this is OK. What I want to do next is to substitute u(t) with another variable, say U, and then get rid of the extra substitution ξ₁:
⎛ d           ⎞│
⎜───(f(ξ₁, t))⎟│
⎝dξ₁          ⎠│ξ₁=U
To clarify, I want this output:
d              d     ⎛d          ⎞
──(f(U, t)) + ──(U)⋅⎜──(f(U, t))⎟
dt            dt    ⎝dU         ⎠
The reason is: when I Taylor expand a composite function like this, the extra substitutions make the output unreadable. Does anyone know how to do this? Any other solution is of course welcome.
Substituting is done with subs. If something is not evaluated you can force it with the doit method.
>>> diff(f(u(t),t),t).subs(u(t),U)
Derivative(U, t)*Subs(Derivative(f(_xi_1, t), _xi_1), (_xi_1,), (U,)) + Derivative(f(U, t), t)
>>> _.doit()
Derivative(f(U, t), t)
Check the tutorial! It has all these ideas presented nicely.