I'm trying some basic practice with SymPy. I would like to symbolically take the second derivative of a function given in rectangular coordinates with respect to the radius parameter of polar coordinates.
I'd like a clean chain-rule symbolic expression that evaluates what it can and leaves unevaluated whatever can't be simplified further.
from sympy import *
init_session()
x, y, r, t = symbols('x y r t') # r (radius), t (angle theta)
f, g = symbols('f g', cls=Function)
g = f(x,y)
x = r * cos(t)
y = r * sin(t)
Derivative(g, r, 2).doit()
This code yields 0. Is there a way to get a symbolic representation of the answer, rather than 0?
Short answer:
Your commands are out of order.
Long answer:
x, y, r, t = symbols('x y r t') # r (radius), t (angle theta)
f, g = symbols('f g', cls=Function)
g = f(x,y)
Now x, y are Symbols, f is a Function, and g is an applied Function, i.e. the symbols x, y applied to f as f(x, y).
x = r * cos(t)
y = r * sin(t)
Now you redefine x and y as expressions of r and t. This does not affect g in the slightest!
Derivative(g, r, 2).doit()
Now you differentiate g with respect to r. Since g is still defined via the initial symbols x, y, it does not depend on r, so the derivative is zero.
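A quick way to verify this (a small check of my own, not part of the original code) is to inspect the free symbols of g:

print(g.free_symbols)  # {x, y} -- the original Symbols; r does not appear,
                       # which is why Derivative(g, r, 2) evaluates to 0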
To achieve what you want use this:
from sympy import *
r, t = symbols('r t') # r (radius), t (angle theta)
f = symbols('f', cls=Function)
x = r * cos(t)
y = r * sin(t)
g = f(x, y)
Derivative(g, r, 2).doit()
I also dropped all unnecessary Symbol definitions.
I want to solve a system of simultaneous differential equations based on the Lorenz equations:
def f(xyz, t, rho, sigma, beta):
    x, y, z = xyz
    return [sigma * (y - x),
            x * (rho - z) - y,
            x * y - beta * z]
I wrote this:
def f(xyz, t, rho, sigma, beta):
    x, y, z = xyz
    return [sigma * y(t).diff(t) + sigma * x + beta * y - 77,
            x + rho * y - 61]
So basically I have an extra derivative of y in the first equation, and when I try to take that derivative it says:
TypeError: 'numpy.float64' object is not callable
Can you tell me how to solve such problems, and how to handle second-order versions of them?
You have a linear system for the derivatives, call them xp, yp. Fortunately it is triangular, so you can solve it via back-substitution: first solve for yp, then insert that result to obtain xp.
def f(xy, t):
    x, y = xy
    yp = 61 - (4*x + y)
    xp = 77 - (2*yp + 2*x + 5*x)
    return xp, yp
In general you could employ a linear-system solver such as numpy.linalg.solve, or a more general root finder like scipy.optimize.fsolve, to extract the derivatives from a system of implicit equations (as far as possible; DAEs, i.e. systems of differential-algebraic equations, can also have this form).
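For illustration, here is a minimal sketch of that linear-system route, assuming the same concrete coefficients as the back-substitution code above:

import numpy as np

def f(xy, t):
    x, y = xy
    # Write both equations as A @ [xp, yp] = rhs, where A collects the
    # coefficients of the derivatives, then solve for xp and yp.
    A = np.array([[1.0, 2.0],
                  [0.0, 1.0]])
    rhs = np.array([77 - 7*x, 61 - 4*x - y])
    xp, yp = np.linalg.solve(A, rhs)
    return xp, yp

For a triangular system this is overkill, but it generalizes to couplings where plain back-substitution is no longer possible.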
The problem is that when you write y(t), Python thinks you are calling a function named y with argument t, but y is bound to a floating-point number, not a function.
Python has a dynamic type system, so when you write

x, y, z = xyz

Python binds the name y to the middle element of xyz, whatever its type. Here that element is a numpy.float64, which is not callable.
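A minimal reproduction of the error (my own illustration, outside the solver):

import numpy as np

y = np.float64(1.5)   # what y holds after unpacking the state vector
y(0.0)                # TypeError: 'numpy.float64' object is not callable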
So you want to solve the Lorenz differential equations in Python?
The link below may help you find the answer you are looking for:
https://www.programmersought.com/article/82154360499/
Or you can solve it like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from mpl_toolkits.mplot3d import Axes3D

def lorenz(state, t, sigma, beta, rho):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return [dx, dy, dz]
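To actually integrate and plot the attractor you could drive this with odeint along the following lines; treat it as a sketch, and note that the parameter values are the classic Lorenz choices, not taken from the question:

sigma, beta, rho = 10.0, 8.0/3.0, 28.0
t = np.linspace(0, 40, 4000)
sol = odeint(lorenz, [1.0, 1.0, 1.0], t, args=(sigma, beta, rho))

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(sol[:, 0], sol[:, 1], sol[:, 2])
plt.show()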
I have a set of data and want to put a parabolic fit over it. This already works with the polyfit function from numpy like this:
fit = np.polyfit(X, y, 2)
formula = np.poly1d(fit)
Now I want the parabola to have its peak at a fixed x value, with the fit still carried out as well as possible under that constraint. Is there a way to accomplish that?
From my data I know that the parabola will always be open downwards.
I think this is quite a difficult problem, since the x coordinate of the peak of a second-order polynomial (ax^2 + bx + c) always lies at x = -b/(2a).
One thing you could do is drop the b term and shift x by the desired peak value when fitting the polynomial, as in the code below. Note that I used scipy.optimize.curve_fit to fit the custom function func.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# generating a parabola with noise
np.random.seed(42)
x = np.linspace(-10, 10, 100)
y = 10 - (x - 2)**2 + np.random.normal(0, 5, x.shape)

# function to fit
def func(x, a, c):
    return a*x**2 + c

# desired x peak value
x_peak = 2
popt, pcov = curve_fit(func, x - x_peak, y)
y_fit = func(x - x_peak, *popt)

# plotting
plt.plot(x, y, 'k.')
plt.plot(x, y_fit)
plt.axvline(x_peak)
plt.show()
The script outputs a plot of the noisy data, the constrained fit, and a vertical line at x_peak.
Fixing a point on your parabola simplifies the problem, since you can rewrite your equation slightly in terms of a constant now:
y = A(x - B)**2 + C
Given the coefficients a, b, c in your original unconstrained fit, you have the relationships
a = A
b = -2AB
c = AB**2 + C
The only difference is that, since B is a fixed constant rather than a free parameter, np.polyfit no longer applies directly and you need to set up the least-squares problem yourself. Given arrays x, y and a constant B, the problem looks like this:
m = np.stack(((x - B)**2, np.ones_like(x)), axis=-1)
(A, C), *_ = np.linalg.lstsq(m, y, rcond=None)
You can then extract the usual coefficients a, b, c from the formulas above.
Here is a complete example, just like the one in the other answer:
B = 2
np.random.seed(42)
x = np.linspace(-10, 10, 100)
y = 10 - (x - B)**2 + np.random.normal(0, 5, x.shape)
m = np.stack(((x - B)**2, np.ones_like(x)), axis=-1)
(A, C), *_ = np.linalg.lstsq(m, y, rcond=None)
a = A
b = -2 * A * B
c = A * B**2 + C
y_fit = a * x**2 + b * x + c
You can drop a, b, c entirely and do
y_fit = A * (x - B)**2 + C
The result will be identical.
plt.plot(x, y, 'k.')
plt.plot(x, y_fit)
Without the condition on the location of the peak, the function to be fitted would be:

y = a x^2 + b x + c

With the condition that the peak is located at x = p, for a given p:

-b/(2a) = p
b = -2 a p

y = a x^2 - 2 a p x + c
y = a (x^2 - 2 p x) + c

Knowing p, one makes the change of variable:

X = x^2 - 2 p x

So from the data (x, y) one first computes the new data (X, y). Then a and c are obtained by linear regression:

y = a X + c
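A minimal NumPy sketch of this approach, reusing the noisy data and the fixed peak p = 2 from the other answers (my own illustration):

import numpy as np

np.random.seed(42)
x = np.linspace(-10, 10, 100)
y = 10 - (x - 2)**2 + np.random.normal(0, 5, x.shape)

p = 2
X = x**2 - 2*p*x            # change of variable
a, c = np.polyfit(X, y, 1)  # linear regression y = a*X + c
y_fit = a*X + c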
(Although there are a number of questions regarding how to best fit a plane to some 3D data on SO, I couldn't find an answer for this issue.)
Given N (x, y, z) points, I need the best fit plane
a*x + b*y + c*z + d = 0
defined through the a, b, c, d coefficients that minimize the mean of the orthogonal distances from the points to the plane. The orthogonal point-plane distance for a given point (x0, y0, z0) is defined as:

dist = |a*x0 + b*y0 + c*z0 + d| / sqrt(a^2 + b^2 + c^2)
I set up two methods (code below):
Singular-Value Decomposition (source)
Basin-Hopping minimization of the mean orthogonal distances
As I understand it, the SVD method should produce the exact best fit plane by minimizing the orthogonal distances analytically. What I find instead is that the BH method gives better results than the supposedly exact SVD method, even for a low number of BH runs.
By "better" I mean that the final mean orthogonal distance value is smaller with the BH method, than with the SVD method.
What am I missing here?
import numpy as np
import scipy.optimize as optimize


def perp_error(params, xyz):
    """
    Mean of the absolute values for the perpendicular distance of the
    'xyz' points, to the plane defined by the coefficients 'a,b,c,d' in
    'params'.
    """
    a, b, c, d = params
    x, y, z = xyz
    length = np.sqrt(a**2 + b**2 + c**2)
    return (np.abs(a * x + b * y + c * z + d) / length).mean()
def minPerpDist(x, y, z, N_min):
    """
    Basin-Hopping method, minimize mean absolute values of the
    orthogonal distances.
    """
    def unit_length(params):
        """
        Constrain the vector perpendicular to the plane to be of unit length.
        """
        a, b, c, d = params
        return a**2 + b**2 + c**2 - 1

    # Random initial guess for the a,b,c,d plane coefficients.
    initial_guess = np.random.uniform(-10., 10., 4)

    # Constrain the vector perpendicular to the plane to be of unit length.
    cons = ({'type': 'eq', 'fun': unit_length})
    min_kwargs = {"constraints": cons, "args": [x, y, z]}

    # Use Basin-Hopping to obtain the best fit coefficients.
    sol = optimize.basinhopping(
        perp_error, initial_guess, minimizer_kwargs=min_kwargs, niter=N_min)
    abcd = list(sol.x)

    return abcd
def SVD(X):
    """
    Singular value decomposition method.
    Source: https://gist.github.com/lambdalisue/7201028
    """
    # Find the average of points (centroid) along the columns.
    C = np.average(X, axis=0)

    # Create CX vector (centroid to point) matrix.
    CX = X - C

    # Singular value decomposition.
    U, S, V = np.linalg.svd(CX)

    # The last row of V corresponds to the smallest singular value,
    # i.e. the normal of the best fit plane.
    N = V[-1]

    # Extract a, b, c, d coefficients.
    x0, y0, z0 = C
    a, b, c = N
    d = -(a * x0 + b * y0 + c * z0)

    return a, b, c, d
# Generate a random plane.
seed = np.random.randint(100000)
print("Seed: {}".format(seed))
np.random.seed(seed)
a, b, c, d = np.random.uniform(-10., 10., 4)
print("Orig abc(d=1): {:.3f} {:.3f} {:.3f}\n".format(a / d, b / d, c / d))
# Generate random (x, y, z) points.
N = 200
x, y = np.random.uniform(-5., 5., (2, N))
z = -(a * x + b * y + d) / c
# Add scatter in z.
z = z + np.random.uniform(-.2, .2, N)
# Solve using SVD method.
a, b, c, d = SVD(np.array([x, y, z]).T)
print("SVD abc(d=1): {:.3f} {:.3f} {:.3f}".format(a / d, b / d, c / d))
# Orthogonal mean distance
print("Perp err: {:.5f}\n".format(perp_error((a, b, c, d), (x, y, z))))
# Solve using Basin-Hopping.
abcd = minPerpDist(x, y, z, 500)
a, b, c, d = abcd
print("BH abc(d=1): {:.3f} {:.3f} {:.3f}".format(a / d, b / d, c / d))
print("Perp err: {:.5f}".format(perp_error(abcd, (x, y, z))))
I believe I found the reason for the discrepancy.
When I minimize the perpendicular distance of points to a plane using Basin-Hopping, I am using the absolute-value point-plane distance:

d_abs = |a*x0 + b*y0 + c*z0 + d| / sqrt(a^2 + b^2 + c^2)

The SVD method, on the other hand, apparently minimizes the squared point-plane distance:

d_sqr = (a*x0 + b*y0 + c*z0 + d)^2 / (a^2 + b^2 + c^2)
If, in the code shared in the question, I use the squared distance in the perp_error() function instead of the absolute valued distance, both methods give the exact same answer.
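Concretely, the change amounts to swapping the objective for its squared version, e.g. (a sketch of the modification, using a new name to avoid clobbering the original function):

def perp_error_sq(params, xyz):
    # Mean squared point-plane distance; this is the quantity SVD minimizes.
    a, b, c, d = params
    x, y, z = xyz
    return ((a * x + b * y + c * z + d)**2 / (a**2 + b**2 + c**2)).mean()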
I'm trying to simplify an expression using sympy but the relational terms seem to disappear. A toy example is as follows:
import sympy
from sympy import *
x = Symbol('x')
y = Symbol('y')
z = Symbol('z')
If I run:
z * Eq(x, y)
Then the output is:
z*(x == y)
But if I try to simplify this using:
simplify(z * Eq(x, y))
Then the output is:
z
This is not what I would expect. Should I expect this behaviour, and if so, is there any way to prevent simplify from removing the relational term?
Thanks.
Logic (relational) and arithmetic operations cannot be combined directly like that; you have to express the intent either arithmetically or logically.
Supposing:
from sympy import *
x, y, z = symbols('x y z')
f = symbols('f', cls=Function)
For an arithmetic operation:

xeqy = Piecewise((1, Eq(x, y)), (0, True))  # 1 for x = y, 0 otherwise
f = z * xeqy                                # z for x = y, 0 otherwise
simplify(f)

For a logical operation:

f = And(z, Eq(x, y))  # z ∧ (x = y)
simplify(f)
I am trying to generate points that lie on the surface of a sphere centered at the origin in Python.
import numpy as np
from math import sin, cos

# r - the radius of the sphere
def createSphere(r):
    lst = []
    for z in range(-r, r+1):
        r_ = r - abs(z)
        if r_ == 0:
            lst.append((0, 0, r*np.sign(z)))
        else:
            for d in range(r_):
                lst.append((r_ * cos(d * (360/float(r_))),
                            r_ * sin(d * (360/float(r_))), z))
    return lst
It will return a list [(x1,y1,z1),...].
This is what the result looks like (plot of the generated points): the surface isn't smooth, and it looks somewhat like a cube with extra sharp corners.
Does anyone know what's wrong?
Thanks
Use the standard spherical-to-Cartesian coordinate transformation:
import math

pi = math.pi
sin = math.sin
cos = math.cos

def createSphere(r, N=10):
    lst = []
    thetas = [(2*pi*i)/N for i in range(N)]
    phis = [(pi*i)/N for i in range(N)]
    for theta in thetas:
        for phi in phis:
            x = r * sin(phi) * cos(theta)
            y = r * sin(phi) * sin(theta)
            z = r * cos(phi)
            lst.append((x, y, z))
    return lst
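To sanity-check the distribution, you can scatter-plot the points along these lines (my own sketch; assumes matplotlib is installed):

import matplotlib.pyplot as plt

pts = createSphere(5, N=20)
xs, ys, zs = zip(*pts)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(xs, ys, zs, s=5)
plt.show()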
Per the comments below: If you wish to vary the number of points depending on the height (or phi), you could let thetas depend on phi:
def createSphere(r, N=10):
    lst = []
    for phi in [(pi*i)/(N-1) for i in range(N)]:
        M = int(sin(phi)*(N-1)) + 1
        for theta in [(2*pi*i)/M for i in range(M)]:
            x = r * sin(phi) * cos(theta)
            y = r * sin(phi) * sin(theta)
            z = r * cos(phi)
            lst.append((x, y, z))
    return lst
Above, the key line is
M = int(sin(phi)*(N-1))+1
M will equal 1 when phi is 0 or pi, and it will equal N when phi equals pi/2 (at the "equator"). Note that this is just one possible way to define M. Instead of using sin, you could instead define a piecewise-linear function with the same boundary values, for example as sketched below.
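Such a piecewise-linear alternative could look like this (my own sketch, reusing the pi defined above; not from the original answer):

def M_linear(phi, N):
    # Linear in the distance from the equator: 1 at the poles
    # (phi = 0 or pi), N at the equator (phi = pi/2).
    frac = 1 - abs(phi - pi/2) / (pi/2)
    return int(frac * (N - 1)) + 1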