I am trying to separately compute the elements of a Taylor expansion and did not obtain the results I was supposed to. The function to approximate is x**321, and the first three elements of that Taylor expansion around x=1 should be:
1 + 321(x-1) + 51360(x-1)**2
For some reason, the code associated with the second term is not working.
See my code below.
import sympy as sy
import numpy as np
import math
import matplotlib.pyplot as plt
x = sy.Symbol('x')
f = x**321
x0 = 1
func0 = f.diff(x,0).subs(x,x0)*((x-x0)**0/math.factorial(0))
print(func0)
func1 = f.diff(x,1).subs(x,x0)*((x-x0)**1/math.factorial(1))
print(func1)
func2 = f.diff(x,2).subs(x,x0)*((x-x0)**2/math.factorial(2))
print(func2)
The prints I obtain running this code are
1
321*x - 321
51360*(x - 1)**2
I also used .evalf and .lambdify but the results were the same. I can't understand where the error is coming from.
x = sy.Symbol('x')
f = x**321
def fprime(x):
    return sy.diff(f,x)
DerivativeOfF = sy.lambdify((x), fprime(x), "numpy")
print(DerivativeOfF(1)*((x-x0)**1/math.factorial(1)))
321*x - 321
I'm obviously just starting with the language, so thank you for your help.
I found a beginner's guide on how to do a Taylor expansion in Python. Check it out; perhaps all your questions are answered there:
http://firsttimeprogrammer.blogspot.com/2015/03/taylor-series-with-python-and-sympy.html
I tested your code and it works fine. Like Bazingaa pointed out in the comments, it is just a matter of how sympy stores expressions internally. One could argue that for a computer it takes less RAM to store 321*x - 321 instead of 321*(x - 1)**1.
In your first output line it also gives you 1 instead of (x - 1)**0.
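To see that nothing is actually wrong, here is a small sketch (using sympy's factor) that checks the printed form against the expected one:
import sympy as sy
x = sy.Symbol('x')
printed = 321*x - 321      # what the code printed for the second term
expected = 321*(x - 1)     # sympy distributes the 321, so this is stored the same way
print(printed == expected)   # True: structurally the same expression
print(sy.factor(printed))    # 321*(x - 1): recover the factored form on demand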
Related
I was wondering if there is a way to define a function that is the derivative of another function. I'm new to Python so I don't know much; I tried looking up stuff that might be similar but nothing has worked so far. This is what I have for my code right now.
import sympy as sp
import math
x = sp.Symbol('x')
W = 15 #kN/m
E = 70 # Gpa
I = 52.9*10**(-6) #m**4
L = 3 #m
e = 0.01
xi = 1.8
y = 9
def f(x):
    return ( ( y*3*(math.pi**4)*E*I/(W*L) ) - ( 48*(L**3)*math.cos(math.pi*x/(2*L)) ) + ( 48*(L**3) ) + ( (math.pi**3)*(x**3) ) )/(3*L*(math.pi**3))**(1/2)

def derv(f,x):
    return sp.diff(f)

print(derv(f,x))
Also, I don't understand what x = sp.Symbol('x') does, so if someone could explain that, that would be awesome.
Any help is appreciated.
You are conflating two different things: Python functions like f and math functions, which you can express with sympy like y = π * x/3. f is a Python function that returns a sympy expression. sympy lets you stay in the world of symbolic math by defining variables like x = sp.Symbol('x'), so calling f() produces a symbolic math expression in x.
You can use sympy to find the derivative of the symbolic expression returned by f(), but you need to define it with the sympy version of the cos() function (and sp.pi if you want to keep π symbolic).
For example:
import sympy as sp
x = sp.Symbol('x')
W = 15 #kN/m
E = 70 # Gpa
I = 52.9*10**(-6) #m**4
L = 3 #m
e = 0.01
xi = 1.8
y = 9
def f(x):
    return ( ( y*3*(sp.pi**4)*E*I/(W*L) ) - ( 48*(L**3)*sp.cos(sp.pi*x/(2*L)) ) + ( 48*(L**3) ) + ( (sp.pi**3)*(x**3) ) )/(3*L*(sp.pi**3))**(1/2)

def derv(f,x):
    return sp.diff(f(x)) # pass the result of f() which is a sympy expression

derv(f,x)
You've programmed the function. It appears to be a simple function of two independent variables, x and y.
It could be that x = sp.Symbol('x') is how SymPy defines the independent variable x. I don't know if you also need one for y.
You know enough about calculus to know that you need a derivative. Do you know how to differentiate a function of a single independent variable? It helps to know the answer before you start coding.
( ( y*3*(math.pi**4)*E*I/(W*L) ) - ( 48*(L**3)*math.cos(math.pi*x/(2*L)) ) + ( 48*(L**3) ) + ( (math.pi**3)*(x**3) ) )/(3*L*(math.pi**3))**(1/2)
Looks simple.
There's only one term with y in it. The partial derivative w.r.t. y leaves you with 3*(math.pi**4)*E*I/(W*L).
There's only one term with C*x**3 in it. That's easy to differentiate: 3*C*x**2.
What's so hard? What's the problem?
In traditional programming, each function you write is translated to a series of commands that are then sent to the CPU, and the result of the calculation is returned. Therefore, symbolic manipulation, like what we humans do with algebra and calculus, doesn't make any sense to the computer. Sympy gets around this by overriding Python's normal arithmetic operators, allowing you to generate algebraic expressions that can be manipulated similarly to how we humans do math. That's what sp.Symbol('x') is doing: providing you with a symbolic variable you can work with (you're also naming it in sympy).
If you want to evaluate your derivative numerically, substitute the value you want to assign to x and call evalf (or pass the value through evalf's subs argument).
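For instance, a small sketch building on the derv() defined above (the value 1.8 is just the xi from the question, used as an illustrative point):
dfdx = derv(f, x)                  # symbolic derivative of f(x)
print(dfdx.evalf(subs={x: 1.8}))   # numeric value of the derivative at x = 1.8
# equivalently: dfdx.subs(x, 1.8).evalf()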
When using CVXPY, I frequently get "SolverError". Their doc just says this is caused by numerical issues, but no further information is given about how to avoid them.
The following code snippet is an example: the problem is trivial, but the 'CVXOPT' solver just throws "SolverError". It is true that if we change the solver to another one, like 'ECOS', the problem is solved as expected. But the point is that 'CVXOPT' should in principle solve this trivial problem, and it really baffles me why it doesn't work.
import numpy as np
import cvxpy as cv
np.random.seed(0)
temp = np.random.rand(5)
T = 2
x = cv.Variable(T)
u = cv.Variable(2, T)
pbs = []
for t in range(T):
    cost = cv.sum_squares(x[t]-temp[t])
    constr = [x[t] == u[0,t]+u[1,t],]
    pbs.append(cv.Problem(cv.Minimize(cost), constr))
prob = sum(pbs)
prob.solve(solver='CVXOPT')
Use prob.solve(solver='CVXOPT', kktsolver=cv.ROBUST_KKTSOLVER) to make the optimisation process more robust.
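For completeness, a small sketch of combining this with a fallback to ECOS (which the question reports already works); the try/except pattern is just one option, not something CVXPY requires:
from cvxpy.error import SolverError

try:
    prob.solve(solver='CVXOPT', kktsolver=cv.ROBUST_KKTSOLVER)
except SolverError:
    # if CVXOPT still hits numerical trouble, fall back to ECOS
    prob.solve(solver='ECOS')
print(prob.value)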
I'm pretty new to python and I got stuck on this:
I'd like to use scipy.optimize.minimize to maximize a function, and I'm having some problems with the extra arguments of the function I defined.
I looked for a solution in tons of answered questions but I can't find anything that solves my problem.
I saw in Structure of inputs to scipy minimize function how to pass extra arguments that one wants to be constant in the minimization of the function and my code seems fine to me from this point of view.
This is my code:
import numpy as np
from scipy.stats import pearsonr
import scipy.optimize as optimize
def min_pears_function(a,exp):
    (b,c,d,e)=a
    return (1-(pearsonr(b + exp[0] * c + exp[1] * d + exp[2],e)[0]))
a = (log_x,log_y,log_t,log_z) # where log_x, log_y, log_t and log_z are numpy arrays with same length
guess_PF=[0.6,2.0,0.2]
res = optimize.minimize(min_pears_function, guess_PF, args=(a,), options={'xtol': 1e-8, 'disp': True})
When running the code I get the following error:
ValueError: need more than 3 values to unpack
But I can't see what needed argument I'm missing. The function seems to work fine, so I guess the problem is in the optimize.minimize call?
Your error occurs here:
def min_pears_function(a,exp):
    # XXX: This is your error line
    (b,c,d,e)=a
    return (1-(pearsonr(b + exp[0] * c + exp[1] * d + exp[2],e)[0]))
This is because:
the initial value you pass to optimize.minimize is guess_PF, which has just three values ([0.6,2.0,0.2]).
this initial value is passed to min_pears_function as the variable a.
Did you mean for it to be passed as exp? Is it exp you wish to solve for? In that case, redefine the signature as:
def min_pears_function(exp, a):
    ...
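Putting it together, a minimal sketch of the corrected setup (the random arrays are hypothetical stand-ins for log_x, log_y, log_t and log_z, which the question does not show; Nelder-Mead with xatol is just one reasonable solver choice):
import numpy as np
from scipy.stats import pearsonr
import scipy.optimize as optimize

# hypothetical stand-in data; replace with the real log_x, log_y, log_t, log_z arrays
rng = np.random.default_rng(0)
log_x, log_y, log_t, log_z = (rng.random(100) for _ in range(4))

def min_pears_function(exp, a):
    b, c, d, e = a   # the four data arrays
    return 1 - pearsonr(b + exp[0]*c + exp[1]*d + exp[2], e)[0]

a = (log_x, log_y, log_t, log_z)
guess_PF = [0.6, 2.0, 0.2]
res = optimize.minimize(min_pears_function, guess_PF, args=(a,),
                        method='Nelder-Mead', options={'xatol': 1e-8, 'disp': True})
print(res.x)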
I have the following snippet of code
import sympy
a = sympy.symbols('a')
b = sympy.symbols('b')
c = sympy.symbols('c')
print((a*b).coeff(c,0))
print((a*b).as_independent(c)[0])
I don't understand why the two print statements print different output. According to the documentation of coeff:
You can select terms independent of x by making n=0; in this case
expr.as_independent(x)[0] is returned (and 0 will be returned instead
of None):
>>> (3 + 2*x + 4*x**2).coeff(x, 0)
3
Is this a bug in sympy, or am I missing something?
It's a bug. I have a pull request fixing it here.
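Until a fixed version is released, one possible workaround, following the documented equivalence quoted above, is to call as_independent directly instead of coeff(c, 0):
import sympy

a, b, c = sympy.symbols('a b c')
expr = a*b

# as_independent(c)[0] is what coeff(c, 0) is documented to return
print(expr.as_independent(c)[0])   # a*b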
I have a large (>2000 equations) system of ODEs that I want to solve with Python SciPy's odeint.
I have three problems that I want to solve (maybe I will have to ask 3 different questions?).
For simplicity, I will explain them here with a toy model, but please keep in mind that my system is large.
Suppose I have the following system of ODEs:
dS/dt = -beta*S
dI/dt = beta*S - gamma*I
dR/dt = gamma*I
with beta = c*p*I
where c, p and gamma are parameters that I want to pass to odeint.
odeint is expecting a file like this:
def myODEs(y, t, params):
    c, p, gamma = params
    beta = c*p
    S = y[0]
    I = y[1]
    R = y[2]
    dydt = [-beta*S*I,
            beta*S*I - gamma*I,
            - gamma*I]
    return dydt
that then can be passed to odeint like this:
myoutput = odeint(myODEs, [1000, 1, 0], np.linspace(0, 100, 50), args = ([c,p,gamma], ))
I generated a text file in Mathematica, say myODEs.txt, where each line of the file corresponds to the RHS of my system of ODEs, so it looks like this:
#myODEs.txt
-beta*S*I
beta*S*I - gamma*I
- gamma*I
My text file looks similar to what odeint is expecting, but I am not quite there yet.
I have three main problems:
How can I pass my text file so that odeint understands that this is the RHS of my system?
How can I define my variables in a smart way, that is, in a systematic way? Since there are >2000 of them, I cannot manually define them. Ideally I would define them in a separate file and read that as well.
How can I pass the parameters (there are a lot of them) as a text file too?
I read this question that is close to my problems 1 and 2 and tried to copy it (I directly put values for the parameters so that I didn't have to worry about my point 3 above):
systemOfEquations = []
with open("myODEs.txt", "r") as fp:
    for line in fp:
        systemOfEquations.append(line)

def dX_dt(X, t):
    vals = dict(S=X[0], I=X[1], R=X[2], t=t)
    return [eq for eq in systemOfEquations]

out = odeint(dX_dt, [1000,1,0], np.linspace(0, 1, 5))
but I got the error:
odepack.error: Result from function call is not a proper array of floats.
ValueError: could not convert string to float: -((12*0.01/1000)*I*S),
Edit: I modified my code to:
systemOfEquations = []
with open("SIREquationsMathematica2.txt", "r") as fp:
    for line in fp:
        pattern = regex.compile(r'.+?\s+=\s+(.+?)$')
        expressionString = regex.search(pattern, line)
        systemOfEquations.append( sympy.sympify( expressionString) )

def dX_dt(X, t):
    vals = dict(S=X[0], I=X[1], R=X[2], t=t)
    return [eq for eq in systemOfEquations]

out = odeint(dX_dt, [1000,1,0], np.linspace(0, 100, 50), )
and this works (I don't quite get what the first two lines of the for loop are doing). However, I would like to make the process of defining the variables more automatic, and I still don't know how to use this solution to also pass parameters in a text file. Along the same lines, how can I define parameters (that will depend on the variables) inside the dX_dt function?
Thanks in advance!
This isn't a full answer, but rather some observations/questions that are too long for comments.
dX_dt is called many times by odeint with a 1d array y and a scalar time t; any extra parameters are supplied via the args argument. y is generated by odeint and varies with each step, so dX_dt should be streamlined to run fast.
Usually an expression like [eq for eq in systemOfEquations] can be simplified to systemOfEquations. [eq for eq...] doesn't do anything meaningful. But there may be something about systemOfEquations that requires it.
I'd suggest you print out systemOfEquations (for this small 3-line case), both for your benefit and ours. You are using sympy to translate the strings from the file into equations. We need to see what it produces.
Note that myODEs is a function, not a file. It may be imported from a module, which of course is a file.
The point of vals = dict(S=X[0], I=X[1], R=X[2], t=t) is to produce a dictionary that the sympy expressions can work with. A more direct (and I think faster) dX_dt function would look like:
def myODEs(y, t, params):
    c, p, gamma = params
    beta = c*p
    dydt = [-beta*y[0]*y[1],
            beta*y[0]*y[1] - gamma*y[1],
            - gamma*y[1]]
    return dydt
I suspect that the dX_dt that runs sympy generated expressions will be a lot slower than a 'hardcoded' one like this.
I'm going to add the sympy tag because, as written, that is the key to translating your text file into a function that odeint can use.
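One possible way to do that translation (a sketch, assuming the symbol names S, I, R, beta and gamma are known in advance and that the file holds bare right-hand sides) is to sympify each line and lambdify the whole list once:
import numpy as np
import sympy as sp
from scipy.integrate import odeint

S, I, R, beta, gamma = sp.symbols('S I R beta gamma')
names = {'S': S, 'I': I, 'R': R, 'beta': beta, 'gamma': gamma}  # keeps 'I' a symbol, not sqrt(-1)

systemOfEquations = []
with open("myODEs.txt") as fp:
    for line in fp:
        if line.strip() and not line.startswith('#'):
            systemOfEquations.append(sp.sympify(line, locals=names))

# compile once to a fast numpy function; the per-step cost is then a single call
rhs = sp.lambdify((S, I, R, beta, gamma), systemOfEquations, "numpy")

def dX_dt(X, t, beta_val, gamma_val):
    return rhs(X[0], X[1], X[2], beta_val, gamma_val)

out = odeint(dX_dt, [1000, 1, 0], np.linspace(0, 100, 50),
             args=(12*0.01/1000, 0.1))   # hypothetical beta and gamma values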
I'd be inclined to put the equation variability in the parameters passed via args, rather than in a list of sympy expressions.
That is replace:
dydt = [-beta*y[0]*y[1],
        beta*y[0]*y[1] - gamma*y[1],
        - gamma*y[1]]
with something like
arg12=np.array([-beta, beta, 0])
arg1 = np.array([0, -gamma, -gamma])
arg0 = np.array([0,0,0])
dydt = arg12*y[0]*y[1] + arg1*y[1] + arg0*y[0]
Once this is right, the argxx definitions can be moved outside dX_dt and passed via args. Now dX_dt is just a simple, fast calculation.
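A sketch of that rearrangement, with hypothetical parameter values; the coefficient arrays are built once, outside dX_dt, and passed in via args:
import numpy as np
from scipy.integrate import odeint

c, p, gamma = 12, 0.01/1000, 0.1   # hypothetical values
beta = c*p

arg12 = np.array([-beta, beta, 0])      # multiplies y[0]*y[1]
arg1 = np.array([0, -gamma, -gamma])    # multiplies y[1]
arg0 = np.array([0, 0, 0])              # multiplies y[0]

def dX_dt(y, t, arg12, arg1, arg0):
    return arg12*y[0]*y[1] + arg1*y[1] + arg0*y[0]

out = odeint(dX_dt, [1000, 1, 0], np.linspace(0, 100, 50),
             args=(arg12, arg1, arg0))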
This whole sympy approach may work fine, but I'm afraid that in practice it will be slow. But someone with more sympy experience may have other insights.