Python - Solving equation equal to 0 using sympy

I'm trying to find the max height of a rocket launched using the equation for the height. I have the derivative already set, so now I need to solve for zero using the derivative. The equation I'm trying to solve for 0 is
-0.0052t^3 + 4.26t + 0.000161534t^3.751
The related code is as follows
def velocity(equation):
    time = Symbol('t')
    derivative = equation.diff(time)
    return derivative

def max_height():
    time = Symbol('t')
    equ = 2.13 * (time ** 2) - 0.0013 * (time ** 4) + 0.000034 * (time ** 4.751)
    return solve(Eq(velocity(equ), 0))

if __name__ == '__main__':
    t = Symbol('t')
    print(max_height())
I tried inserting the direct equation into the Eq, like so...
return solve(Eq(-0.0052t^3 + 4.26t + 0.000161534t^3.751, 0))
thinking the problem might be with the return type of velocity, but that didn't work. I also tried playing around with making them class functions, but that didn't seem to help either.
The result I'm getting is that it runs indefinitely until I stop it. When I do stop it, I get the following errors
Traceback (most recent call last):
File "C:\Users\...\main.py", line 40, in <module>
print(max_height())
File "C:\Users\...\main.py", line 29, in max_height
return solve(Eq(velocity(equ), 0))
File "C:\Users\...\venv\lib\site-packages\sympy\solvers\solvers.py", line 1095, in solve
solution = _solve(f[0], *symbols, **flags)
File "C:\Users\...\venv\lib\site-packages\sympy\solvers\solvers.py", line 1675, in _solve
u = unrad(f_num, symbol)
File "C:\Users\...\venv\lib\site-packages\sympy\solvers\solvers.py", line 3517, in unrad
neq = unrad(eq, *syms, **flags)
File "C:\Users\...\venv\lib\site-packages\sympy\solvers\solvers.py", line 3300, in unrad
eq = _mexpand(eq, recursive=True)
File "C:\Users\...\venv\lib\site-packages\sympy\core\function.py", line 2837, in _mexpand
was, expr = expr, expand_mul(expand_multinomial(expr))
File "C:\Users\...\venv\lib\site-packages\sympy\core\function.py", line 2860, in expand_mul
return sympify(expr).expand(deep=deep, mul=True, power_exp=False,
File "C:\Users\...\venv\lib\site-packages\sympy\core\cache.py", line 72, in wrapper
retval = cfunc(*args, **kwargs)
File "C:\Users\...\venv\lib\site-packages\sympy\core\expr.py", line 3630, in expand
expr, _ = Expr._expand_hint(
File "C:\Users\...\venv\lib\site-packages\sympy\core\expr.py", line 3555, in _expand_hint
arg, arghit = Expr._expand_hint(arg, hint, **hints)
File "C:\Users\...\venv\lib\site-packages\sympy\core\expr.py", line 3563, in _expand_hint
newexpr = getattr(expr, hint)(**hints)
File "C:\...\venv\lib\site-packages\sympy\core\mul.py", line 936, in _eval_expand_mul
n, d = fraction(expr)
File "C:\...\venv\lib\site-packages\sympy\simplify\radsimp.py", line 1113, in fraction
return Mul(*numer, evaluate=not exact), Mul(*denom, evaluate=not exact)
File "C:\...\venv\lib\site-packages\sympy\core\cache.py", line 72, in wrapper
retval = cfunc(*args, **kwargs)
File "C:\...\venv\lib\site-packages\sympy\core\operations.py", line 85, in __new__
c_part, nc_part, order_symbols = cls.flatten(args)
File "C:\...\venv\lib\site-packages\sympy\core\mul.py", line 523, in flatten
c_part.append(p)
Any help would be greatly appreciated.

There are several problems:
You can't use ^ for exponentiation. In Python you need **. See sympy - gotchas.
Another problem is that you create the variable t in three different places. This should be just one variable. If there is only one variable in an equation, equation.diff() automatically uses that one. If there are multiple, you also need to pass the correct variable to your velocity function.
Because your equation uses floats, sympy gets confused: it tries to find exact symbolic solutions, which works poorly with floats, which are by definition only approximations. The float in the exponent is especially hard for sympy. To cope, you can use sympy's numerical solver nsolve, which needs a seed value to start its number crunching. Depending on the seed, either 0, 40.50 or 87.55 is obtained.
Here is how the updated code could look:
from sympy import Symbol, Eq, nsolve

def velocity(equation):
    derivative = equation.diff()
    return derivative

def max_height():
    time = Symbol('t')
    equ = 2.13 * (time ** 2) - 0.0013 * (time ** 4) + 0.000034 * (time ** 4.751)
    return nsolve(Eq(velocity(equ), 0), 30)

print(max_height())
It could help to draw a plot (using lambdify() to make the functions accessible in matplotlib):
from sympy import Symbol, Eq, nsolve, lambdify
import matplotlib.pyplot as plt
import numpy as np

def velocity(equation, time):
    derivative = equation.diff(time)
    return derivative

def get_equation(time):
    return 2.13 * (time ** 2) - 0.0013 * (time ** 4) + 0.000034 * (time ** 4.751)

def max_height(equ, time):
    return [nsolve(Eq(velocity(equ, time), 0), initial_guess) for initial_guess in [0, 30, 500]]

time = Symbol('t')
equ = get_equation(time)
max_heights = max_height(equ, time)

equ_np = lambdify(time, equ)
vel_np = lambdify(time, velocity(equ, time))

xs = np.linspace(0, 105, 2000)
ys = equ_np(xs)
max_heights = np.array(max_heights)

plt.plot(xs, ys, label='equation')
plt.plot(xs, vel_np(xs), label='velocity')
plt.axhline(0, ls='--', color='black')
plt.scatter(max_heights, equ_np(max_heights), s=100, color='red', alpha=0.5, label='extremes')
plt.legend()
plt.show()
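As a small follow-up (a sketch, not part of the answer above): the values found by nsolve are the times at which the velocity is zero, so the maximum height itself comes from substituting such a time back into the height expression. Assuming the definitions from the plotting snippet are in scope:
# find the time of the extremum near t = 40 and evaluate the height there
t_extremum = nsolve(Eq(velocity(equ, time), 0), 40)
print(t_extremum)                  # roughly 40.50, as noted above
print(equ.subs(time, t_extremum))  # the height at that time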

Related

Strange behaviour in scipy.solve_ivp when using an implicit method

I recently ran into a question about integration and encountered a strange bug. I attempt a very simple problem using solve_ivp:
from scipy.integrate import solve_ivp
import numpy as np

def f(y, t):
    return y

y0 = [1, 1, 1, 1]
method = 'RK23'
s = solve_ivp(f, (0, 1), y0, method=method, t_eval=np.linspace(0, 1))
And it works fine. When I change to method='BDF' or method='Radau' I get an error:
Traceback (most recent call last):
File "<ipython-input-222-f11c4406e92c>", line 10, in <module>
s = solve_ivp(f, (0,1), y0, method=method, t_eval=np.linspace(0,1))
File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\integrate\_ivp\ivp.py", line 455, in solve_ivp
solver = method(fun, t0, y0, tf, vectorized=vectorized, **options)
File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\integrate\_ivp\radau.py", line 299, in __init__
self.jac, self.J = self._validate_jac(jac, jac_sparsity)
File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\integrate\_ivp\radau.py", line 345, in _validate_jac
J = jac_wrapped(t0, y0, self.f)
File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\integrate\_ivp\radau.py", line 343, in jac_wrapped
sparsity)
File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\integrate\_ivp\common.py", line 307, in num_jac
return _dense_num_jac(fun, t, y, f, h, factor, y_scale)
File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\integrate\_ivp\common.py", line 318, in _dense_num_jac
diff = f_new - f[:, None]
IndexError: too many indices for array
I also get an error with method = 'LSODA', although a different one (i.e. it happens with all implicit integrators). I do not get an error with any of the explicit integrators.
I tried this in Spyder with scipy version 1.0.0 and in Google Colab (scipy version 1.1.0), with the same results.
Is this a bug, or am I missing some argument I need for implicit integrators?
It appears that the Radau and BDF methods do not handle single-valued RHS functions. Making the function f above output a 1-D list solves your issue. Additionally, as mentioned by Weckesser in the comments, solve_ivp expects the RHS to be f(t, y) and not f(y, t).
Like this
def f(t, y):
    return [y]
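For reference, a minimal self-contained sketch of a call that works with the implicit solvers (an illustration of the two fixes above, not the answerer's exact code): the RHS uses the f(t, y) signature and returns an array-like with one entry per component of y.
from scipy.integrate import solve_ivp
import numpy as np

def f(t, y):
    # solve_ivp passes (t, y); return something array-like of the same length as y
    return y

y0 = [1, 1, 1, 1]
s = solve_ivp(f, (0, 1), y0, method='Radau', t_eval=np.linspace(0, 1))
print(s.y[:, -1])  # each component should be close to e, about 2.718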

scipy curve_fit doesn't like math module

While trying to create an example with scipy.optimize curve_fit I found that scipy seems to be incompatible with Python's math module. While function f1 works fine, f2 throws an error message.
from scipy.optimize import curve_fit
from math import sin, pi, log, exp, floor, fabs, pow
import numpy as np

x_axis = np.asarray([pi * i / 6 for i in range(-6, 7)])
y_axis = np.asarray([sin(i) for i in x_axis])

def f1(x, m, n):
    return m * x + n

coeff1, mat = curve_fit(f1, x_axis, y_axis)
print(coeff1)

def f2(x, m, n):
    return m * sin(x) + n

coeff2, mat = curve_fit(f2, x_axis, y_axis)
print(coeff2)
The full traceback is
Traceback (most recent call last):
File "/Documents/Programming/Eclipse/PythonDevFiles/so_test.py", line 49, in <module>
coeff2, mat = curve_fit(f2, x_axis, y_axis)
File "/usr/local/lib/python3.5/dist-packages/scipy/optimize/minpack.py", line 742, in curve_fit
res = leastsq(func, p0, Dfun=jac, full_output=1, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/scipy/optimize/minpack.py", line 377, in leastsq
shape, dtype = _check_func('leastsq', 'func', func, x0, args, n)
File "/usr/local/lib/python3.5/dist-packages/scipy/optimize/minpack.py", line 26, in _check_func
res = atleast_1d(thefunc(*((x0[:numinputs],) + args)))
File "/usr/local/lib/python3.5/dist-packages/scipy/optimize/minpack.py", line 454, in func_wrapped
return func(xdata, *params) - ydata
File "/Documents/Programming/Eclipse/PythonDevFiles/so_test.py", line 47, in f2
return m * sin(x) + n
TypeError: only length-1 arrays can be converted to Python scalars
The error message appears with lists and numpy arrays as input alike. It affects all the math functions I tested (see the functions in the import) and must have something to do with how the math module handles input data. This is most obvious with the pow() function: if I don't import it from math, curve_fit works properly with pow().
The obvious question - why does this happen and how can math functions be used with curve_fit?
P.S.: Please don't discuss, that one shouldn't fit the sample data with a linear fit. This was just chosen to illustrate the problem.
Be careful with numpy arrays: some operations work on arrays and others only on scalars!
Scipy's optimizers assume the input (initial point) to be a 1d-array, and things often go wrong in other cases (a list, for example, becomes an array, and if you assumed you were working on lists, things go haywire; those kinds of problems are common here on StackOverflow, and debugging them by eye is not easy; interacting with the code helps!).
import numpy as np
import math
x = np.ones(1)
np.sin(x)
> array([0.84147098])
math.sin(x)
> 0.8414709848078965 # this only works as numpy has dedicated support
# as indicated by the error-msg below!
x = np.ones(2)
np.sin(x)
> array([0.84147098, 0.84147098])
math.sin(x)
> TypeError: only size-1 arrays can be converted to Python scalars
To be honest: this is part of a very basic understanding of numpy and should be understood when using scipy's somewhat sensitive functions.
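To make the original example work, the usual fix is to use numpy's vectorized functions inside the model instead of the math module's scalar ones. A minimal sketch of the corrected fit (an illustration, not the answerer's code):
import numpy as np
from scipy.optimize import curve_fit

x_axis = np.asarray([np.pi * i / 6 for i in range(-6, 7)])
y_axis = np.sin(x_axis)

def f2(x, m, n):
    # np.sin operates elementwise on the whole array that curve_fit passes in
    return m * np.sin(x) + n

coeff2, mat = curve_fit(f2, x_axis, y_axis)
print(coeff2)  # approximately [1, 0]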

Sympy issues with plotting a piecewise function

I have defined a SymPy piecewise function to compute the Federal Income tax for 2017. I know the function is working as I have tried various inputs and compared it to verified tax calculators and it gives the same result.
However, when trying to plot the SymPy function, I get the error:
TypeError: '>=' not supported between instances of 'complex' and 'int'
I never defined any complex numbers.
def getFedTax(alpha, p, GI):
    # alpha is an array with the starting dollar amount of each tax bracket, not including 0
    # p is an array of the tax percentage corresponding to the interval BEFORE each alpha,
    #   i.e. [0, alpha0) corresponds to p0, [alpha0, alpha1) corresponds to p1, etc.
    # GI is the Gross Income for computation

    # create an array of the cumulative sums for each starting point in alpha
    cumsums = [0, p[0] * (alpha[0] - 0)]
    cumnum = cumsums[1]
    for i, num in enumerate(alpha[1:-1], start=1):
        cumsums.append(p[i] * (alpha[i] - alpha[i - 1]) + cumnum)
        cumnum = cumnum + p[i] * (alpha[i] - alpha[i - 1])
    cumsums.append(p[-2] * (alpha[-1] - alpha[-2]) + cumnum)

    # Create the argument list of tuples for the SymPy Piecewise function
    argtuples = []
    for n, bracstart in enumerate(alpha):
        if n == 0:
            argtuples.append((cumsums[0] + p[0] * x, And(0 <= x, x < alpha[0])))
        elif 0 < n and n < len(alpha) - 1:
            argtuples.append((cumsums[n] + p[n] * (x - alpha[n - 1]), And(alpha[n - 1] <= x, x < alpha[n])))
        else:
            argtuples.append((cumsums[-1] + p[-1] * (x - alpha[-1]), x > alpha[-1]))

    t = Piecewise(*argtuples)
    return round(t.subs(x, GI), 2), t

from sympy import Piecewise, And
from sympy.plotting.plot import plot
from sympy.abc import x

ti = getFedTax([9325.00, 37950.00, 91900.00, 191650.00, 416700.00, 418400.00],
               [0.10, 0.15, 0.25, 0.28, .33, .35, .396], 1000000)

plot(ti[1], (x, 1., 450000.00))
Full traceback:
runfile('C:/Users/galileo/Downloads/trial.py',
wdir='C:/Users/galileo/Downloads')
File "c:\users\galileo\appdata\local\programs\python\python36\lib\site-
packages\spyder\utils\site\sitecustomize.py", line 710, in runfile
execfile(filename, namespace)
File "c:\users\galileo\appdata\local\programs\python\python36\lib\site-
packages\spyder\utils\site\sitecustomize.py", line 101, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/galileo/Downloads/trial.py", line 39, in <module>
plot(ti[1],(x,1.,450000.00))
File "c:\users\galileo\appdata\local\programs\python\python36\lib\site-
packages\sympy\plotting\plot.py", line 1295, in plot
plots.show()
File "c:\users\galileo\appdata\local\programs\python\python36\lib\site-
packages\sympy\plotting\plot.py", line 196, in show
self._backend.show()
File "c:\users\galileo\appdata\local\programs\python\python36\lib\site-
packages\sympy\plotting\plot.py", line 1029, in show
self.process_series()
File "c:\users\galileo\appdata\local\programs\python\python36\lib\site-
packages\sympy\plotting\plot.py", line 908, in process_series
collection = self.LineCollection(s.get_segments())
File "c:\users\galileo\appdata\local\programs\python\python36\lib\site-
packages\sympy\plotting\plot.py", line 514, in get_segments
f_start = f(self.start)
File "c:\users\galileo\appdata\local\programs\python\python36\lib\site-
packages\sympy\plotting\experimental_lambdify.py", line 231, in __call__
result = self.lambda_func(args)
File "c:\users\galileo\appdata\local\programs\python\python36\lib\site-
packages\sympy\plotting\experimental_lambdify.py", line 316, in __call__
return self.lambda_func(*args, **kwargs)
File "<string>", line 1, in <lambda>
TypeError: '>=' not supported between instances of 'complex' and 'int'
Plotting of piecewise functions was bugged and seems to have been fixed in January 2018.
Upgrading to SymPy 1.2 (from 1.1.1, which was shipped by default with Anaconda) did the trick for me.
from sympy import symbols, Piecewise
from sympy.plotting import plot

x = symbols('x')
f = 2*x + 3
g = x + 1
p = Piecewise((-1, x < -1), (g, x <= 1), (f, True))
plot(p)
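If upgrading SymPy is not an option, one workaround (a sketch, assuming a purely numerical plot is acceptable, and not part of the original answer) is to evaluate the Piecewise with lambdify and plot it directly with matplotlib, bypassing sympy's plotting module:
import numpy as np
import matplotlib.pyplot as plt
from sympy import symbols, Piecewise, lambdify

x = symbols('x')
p = Piecewise((-1, x < -1), (x + 1, x <= 1), (2*x + 3, True))
p_np = lambdify(x, p, modules='numpy')  # the Piecewise becomes a numpy.select call

xs = np.linspace(-3, 3, 400)
plt.plot(xs, p_np(xs))
plt.show()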

Simulating the Ising Model in Python

I taught myself the Metropolis algorithm and decided to try coding it in Python. I chose to simulate the Ising model. I have an amateur understanding of Python, and with that, here is what I came up with -
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

def Ising_H(x, y):
    s = L[x, y] * (L[(x + 1) % l, y] + L[x, (y + 1) % l] + L[(x - 1) % l, y] + L[x, (y - 1) % l])
    H = -J * s
    return H

def mcstep(*args):  # One Monte-Carlo step - Metropolis algorithm
    x = np.random.randint(l)
    y = np.random.randint(l)
    i = Ising_H(x, y)
    L[x, y] *= -1
    f = Ising_H(x, y)
    deltaH = f - i
    if np.random.uniform(0, 1) > np.exp(-deltaH / T):
        L[x, y] *= -1
    mesh.set_array(L.ravel())
    return mesh,

def init_spin_config(opt):
    if opt == 'h':
        # Hot start
        L = np.random.randint(2, size=(l, l))  # l x l lattice with random spin configuration
        L[L == 0] = -1
        return L
    elif opt == 'c':
        # Cold start
        L = np.full((l, l), 1, dtype=int)  # l x l lattice with all +1
        return L

if __name__ == "__main__":
    l = 15     # Lattice dimension
    J = 0.3    # Interaction strength
    T = 2.0    # Temperature
    N = 1000   # Number of iterations of the MC step
    opt = 'h'
    L = init_spin_config(opt)  # Initial spin configuration

    # Simulation visualization
    fig = plt.figure(figsize=(10, 10), dpi=80)
    fig.suptitle("T = %0.1f" % T, fontsize=50)
    X, Y = np.meshgrid(range(l), range(l))
    mesh = plt.pcolormesh(X, Y, L, cmap=plt.cm.RdBu)
    a = animation.FuncAnimation(fig, mcstep, frames=N, interval=5, blit=True)
    plt.show()
Apart from a 'KeyError' from a Tkinter exception and white bands when I try 16x16 or anything above that, it looks and works fine. Now what I want to know is whether this is right, because -
I am uncomfortable with how I have used FuncAnimation to both do the Monte Carlo simulation AND animate my mesh plot - does that even make sense?
And how about that cold start? All I am getting is a red screen.
Also, please tell me about the KeyError and the white banding.
The 'KeyError' came up as -
Exception in Tkinter callback
Traceback (most recent call last):
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1540, in __call__
return self.func(*args)
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 590, in callit
func(*args)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/backends/backend_tkagg.py", line 147, in _on_timer
TimerBase._on_timer(self)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/backend_bases.py", line 1305, in _on_timer
ret = func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/animation.py", line 1049, in _step
still_going = Animation._step(self, *args)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/animation.py", line 855, in _step
self._draw_next_frame(framedata, self._blit)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/animation.py", line 873, in _draw_next_frame
self._pre_draw(framedata, blit)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/animation.py", line 886, in _pre_draw
self._blit_clear(self._drawn_artists, self._blit_cache)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/animation.py", line 926, in _blit_clear
a.figure.canvas.restore_region(bg_cache[a])
KeyError: <matplotlib.axes._subplots.AxesSubplot object at 0x7fd468b2f2d0>
You are asking a lot of questions at a time.
KeyError: cannot be reproduced. It's strange that it should only occur for some array sizes and not others. Possibly something is wrong with the backend; you may try a different one by placing these lines at the top of the script:
import matplotlib
matplotlib.use("Qt4Agg")
white bands: cannot be reproduced either, but possibly they come from an automated axes scaling. To avoid that, you can set the axes limits manually
plt.xlim(0,l-1)
plt.ylim(0,l-1)
Using FuncAnimation to do the Monte Carlo simulation is perfectly fine. Of course it's not the fastest method, but if you want to follow your simulation on the screen, there is nothing wrong with it. One may however ask why there should be only one spin flip per time step, but that is more a question about the physics than about programming.
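If you do want more than one attempted flip per frame, one possible variation (a sketch reusing the question's globals l, L, T, Ising_H and mesh, not something prescribed by the answer) is to perform a whole lattice sweep per animation frame:
def mcstep(*args):  # one animation frame = one full lattice sweep
    for _ in range(l * l):               # l*l attempted flips per frame
        x = np.random.randint(l)
        y = np.random.randint(l)
        i = Ising_H(x, y)
        L[x, y] *= -1                    # propose a flip
        f = Ising_H(x, y)
        deltaH = f - i
        if np.random.uniform(0, 1) > np.exp(-deltaH / T):
            L[x, y] *= -1                # reject: flip back
    mesh.set_array(L.ravel())
    return mesh,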
Red screen for cold start: In the case of the cold start, you initialize your grid with only 1s. That means the minimum and maximum value in the grid is 1. Therefore the colormap of the pcolormesh is normalized to the range [1,1] and is all red. In general you want the colormap to span [-1,1], which can be done using vmin and vmax arguments.
mesh = plt.pcolormesh(X, Y, L, cmap = plt.cm.RdBu, vmin=-1, vmax=1)
This should give you the expected behaviour also for the "cold start".

python fmin_slsqp - error with constraints

I am practicing with SciPy and I encountered an error when trying to use fmin_slsqp. I set up a problem in which I want to maximize an objective function, U, given a set of constraints.
I have two control variables, x[0,t] and x[1,t] and, as you can see, they are indexed by t (time periods). The objective function is:
def obj_fct(x, alpha, beta, Al):
    U = 0
    x[1, 0] = x0
    for t in trange:
        U = U - beta**t * ((Al[t] * L)**(1 - alpha) * x[1, t]**alpha - x[0, t])
    return U
The constraints are defined over these two variables and one of them links the variables from one period (t) to another (t-1).
def constr(x, alpha, beta, Al):
    return np.array([
        x[0, t],
        x[1, 0] - x0,
        x[1, t] - x[0, t] - (1 - delta) * x[1, t - 1]
    ])
Finally, here is the use of fmin_slsqp:
sol = fmin_slsqp(obj_fct, x_init, f_eqcons=constr, args=(alpha,beta,Al))
Leaving aside the fact that there are better ways to solve such dynamic problems, my question is about the syntax. When running this simple code, I get the following error:
Traceback (most recent call last):
File "xxx", line 34, in <module>
sol = fmin_slsqp(obj_fct, x_init, f_eqcons=constr, args=(alpha,beta,Al))
File "D:\Anaconda3\lib\site-packages\scipy\optimize\slsqp.py", line 207, in fmin_slsqp
constraints=cons, **opts)
File "D:\Anaconda3\lib\site-packages\scipy\optimize\slsqp.py", line 311, in _minimize_slsqp
meq = sum(map(len, [atleast_1d(c['fun'](x, *c['args'])) for c in cons['eq']]))
File "D:\Anaconda3\lib\site-packages\scipy\optimize\slsqp.py", line 311, in <listcomp>
meq = sum(map(len, [atleast_1d(c['fun'](x, *c['args'])) for c in cons['eq']]))
File "xxx", line 30, in constr
x[0,t],
IndexError: too many indices for array
[Finished in 0.3s with exit code 1]
What am I doing wrong?
The initial part of the code, assigning values to the parameters, is:
from scipy.optimize import fmin_slsqp
import numpy as np
T = 30
beta = 0.96
L = 1
x0 = 1
gl = 0.02
alpha = 0.3
delta = 0.05
x_init = np.array([1,0.1])
A_l0 = 1000
Al = np.zeros((T+1,1))
Al[1] = A_l0
trange = np.arange(1,T+1,1, dtype='Int8') # does not include period zero
for t in trange: Al[t] = A_l0*(1 + gl)**(t-1)
The array x passed to your objective and constraint functions will be a one-dimensional array (just like your x_init is). You can't index a one-dimensional array with two indices, so expressions such as x[1,0] and x[0,t] will generate an error.
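One way to keep the two-index notation (a sketch using the question's globals T, L, x0 and trange, not a drop-in solution) is to pass a flat array of length 2*(T+1) and reshape it inside each function before indexing:
import numpy as np

def obj_fct(x_flat, alpha, beta, Al):
    # x_flat arrives from fmin_slsqp as a 1-D array; give it a (variable, period) shape
    x = x_flat.reshape(2, T + 1)   # row 0 holds x[0, t], row 1 holds x[1, t]
    U = 0
    x[1, 0] = x0
    for t in trange:
        U = U - beta**t * ((Al[t, 0] * L)**(1 - alpha) * x[1, t]**alpha - x[0, t])
    return U

# the initial guess must then also be flat, with one entry per variable and period
x_init = 0.1 * np.ones(2 * (T + 1))
The constraint function would reshape x the same way before building its array.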
