I was wondering if there's a way to find tangents to a curve from discrete data.
For example:
x = np.linspace(-100, 100, 100001)
y = np.sin(x)
so here x is only sampled at fixed grid points, but what if we want to find the tangent at a point between samples, something like x = 67.875?
I've been trying to figure out if numpy.interp would work, but so far no luck.
I also found a couple of similar examples, such as this one, but haven't been able to apply the techniques to my case :(
I'm new to Python and don't entirely know how everything works yet, so any help would be appreciated...
This is what I have so far:
from scipy import interpolate
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(-100,100,10000)
y = np.sin(x)
tck, u = interpolate.splprep([y])
ti = np.linspace(-100,100,10000)
dydx = interpolate.splev(ti,tck,der=1)
plt.plot(x,y)
plt.plot(ti,dydx[0])
plt.show()
A comment on this answer points out that there is a difference between splrep and splprep. For the 1D case you have here, splrep is completely sufficient.
You may also want to limit the range of your curve a bit to be able to see the oscillations.
from scipy import interpolate
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-15, 15, 1000)
y = np.sin(x)

tck = interpolate.splrep(x, y)           # fit a spline representation of y(x)
dydx = interpolate.splev(x, tck, der=1)  # evaluate its first derivative

plt.plot(x, y)
plt.plot(x, dydx, label="derivative")
plt.legend()
plt.show()
While this is how the code above would be made runnable, it does not provide a tangent yet. For the tangent you only need the derivative at a single point x0; you then have to plug it into the point-slope equation of a line, y = y0 + dydx*(x - x0), and actually use it, so this is partly a math question.
from scipy import interpolate
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-15, 15, 1000)
y = np.sin(x)

tck = interpolate.splrep(x, y)

x0 = 7.3
y0 = interpolate.splev(x0, tck)           # value of the spline at x0
dydx = interpolate.splev(x0, tck, der=1)  # slope of the spline at x0

# tangent line through (x0, y0) with slope dydx
tngnt = lambda x: dydx*x + (y0 - dydx*x0)

plt.plot(x, y)
plt.plot(x0, y0, "or")
plt.plot(x, tngnt(x), label="tangent")
plt.legend()
plt.show()
It should be noted that you do not need to use splines at all if the points you have are dense enough. In that case obtaining the derivative is just taking the differences between the nearest points.
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-15, 15, 1000)
y = np.sin(x)

x0 = 7.3
i0 = np.argmin(np.abs(x - x0))   # index of the sample closest to x0

# finite-difference slope between the two neighbouring samples
x1 = x[i0:i0+2]
y1 = y[i0:i0+2]
dydx, = np.diff(y1)/np.diff(x1)

tngnt = lambda x: dydx*x + (y1[0] - dydx*x1[0])

plt.plot(x, y)
plt.plot(x1[0], y1[0], "or")
plt.plot(x, tngnt(x), label="tangent")
plt.legend()
plt.show()
The result will be visually identical to the one above.
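As a side note (not part of the answer above, just a hedged alternative): numpy's np.gradient computes centered finite differences for the whole array in one call, so the slope near x0 could equally be taken from it.
import numpy as np

x = np.linspace(-15, 15, 1000)
y = np.sin(x)

x0 = 7.3
i0 = np.argmin(np.abs(x - x0))   # sample closest to x0

dydx = np.gradient(y, x)[i0]     # centered-difference slope at that sample
tngnt = lambda xx: dydx*xx + (y[i0] - dydx*x[i0])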
I am trying to fit a curve to a set of points using the numpy and scipy libraries, but I am getting a closed curve, as shown below.
Could anyone let me know how to fit the curve without it closing?
The code I followed is:
import numpy as np
from scipy.interpolate import splprep, splev
import matplotlib.pyplot as plt
coords = np.array([(3,8),(3,9),(4,10),(5,11),(6,11), (7,13), (9,13),(10,14),(11,14),(12,14),(14,16),(16,17),(17,18),(18,18),(19,18), (20,19),
(21,19),(22,20),(23,20),(24,21),(26,21),(27,21),(28,21),(30,21),(32,20),(33,20),(32,17),(33,16),(33,15),(34,12), (34,10),(33,10),
(33,9),(33,8),(33,6),(34,6),(34,5)])
tck, u = splprep(coords.T, u=None, s=0.0, per=1)
u_new = np.linspace(u.min(), u.max(), 1000)
x_new, y_new = splev(u_new, tck, der=0)
plt.plot(coords[:,1], coords[:,0], 'ro')
plt.plot(y_new, x_new, 'b--')
plt.show()
Output: (the fitted spline closes back on itself, joining the first and last points)
I need output without joining the 1st and last point.
Thank you.
Just set the per parameter to 0 in scipy.interpolate.splprep (per=1 requests a periodic, i.e. closed, spline):
tck, u = splprep(coords.T, u=None, s=0.0, per=0)
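For reference, this is a minimal sketch of the question's code with only that parameter changed (the coordinate list is shortened here just to keep the sketch compact):
import numpy as np
from scipy.interpolate import splprep, splev
import matplotlib.pyplot as plt

# a subset of the question's points, only to keep the example short
coords = np.array([(3, 8), (3, 9), (4, 10), (5, 11), (6, 11), (7, 13),
                   (9, 13), (10, 14), (11, 14), (12, 14), (14, 16)])

# per=0 fits an open (non-periodic) spline, so the first and last points
# are no longer joined
tck, u = splprep(coords.T, u=None, s=0.0, per=0)
u_new = np.linspace(u.min(), u.max(), 1000)
x_new, y_new = splev(u_new, tck, der=0)

plt.plot(coords[:, 1], coords[:, 0], 'ro')
plt.plot(y_new, x_new, 'b--')
plt.show()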
I have a handful of data points that cluster along a line in 3d space. I have the x,y,z data in a csv file that I want to import. I would like to find an equation that represents that line, or the plane perpendicular to that line, or whatever is mathematically correct. These data are independent of each other. Maybe there are better ways to do this than what I tried to do but...
I attempted to replicate an old post here that seemed to be doing exactly what I'm trying to do
Fitting a line in 3D
but it seems that updates over the past decade may have left the second part of the code not working? Or maybe I'm just doing something wrong. I've included the entire thing that I frankensteined together from it at the bottom. There are two lines that seem to be giving me a problem.
I've snipped them out here...
import numpy as np
pts = np.add.accumulate(np.random.random((10,3)))
x,y,z = pts.T
# this will find the slope and x-intercept of a plane
# parallel to the y-axis that best fits the data
A_xz = np.vstack((x, np.ones(len(x)))).T
m_xz, c_xz = np.linalg.lstsq(A_xz, z)[0]
# again for a plane parallel to the x-axis
A_yz = np.vstack((y, np.ones(len(y)))).T
m_yz, c_yz = np.linalg.lstsq(A_yz, z)[0]
# the intersection of those two planes and
# the function for the line would be:
# z = m_yz * y + c_yz
# z = m_xz * x + c_xz
# or:
def lin(z):
    x = (z - c_xz)/m_xz
    y = (z - c_yz)/m_yz
    return x, y
#verifying:
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure()
ax = Axes3D(fig)
zz = np.linspace(0,5)
xx,yy = lin(zz)
ax.scatter(x, y, z)
ax.plot(xx,yy,zz)
plt.savefig('test.png')
plt.show()
They return this warning, but no values...
FutureWarning: rcond parameter will change to the default of machine precision times max(M, N) where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass rcond=None, to keep using the old, explicitly pass rcond=-1.
m_xz, c_xz = np.linalg.lstsq(A_xz, z)[0]
FutureWarning: rcond parameter will change to the default of machine precision times max(M, N) where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass rcond=None, to keep using the old, explicitly pass rcond=-1.
m_yz, c_yz = np.linalg.lstsq(A_yz, z)[0]
I don't know where to go from here. I don't even actually need the plot, I just needed an equation and am ill-equipped to move forward. If anyone knows an easier way to do this, or can point me in the right direction, I'm willing to learn, but I'm very, very lost. Thank you in advance!!
Here is my entire frankensteined code in case that is what is causing the issue.
import pandas as pd
import numpy as np
mydataset = pd.read_csv('line1.csv')
x = mydataset.iloc[:,0]
y = mydataset.iloc[:,1]
z = mydataset.iloc[:,2]
data = np.concatenate((x[:, np.newaxis],
                       y[:, np.newaxis],
                       z[:, np.newaxis]),
                      axis=1)
# Calculate the mean of the points, i.e. the 'center' of the cloud
datamean = data.mean(axis=0)
# Do an SVD on the mean-centered data.
uu, dd, vv = np.linalg.svd(data - datamean)
# Now vv[0] contains the first principal component, i.e. the direction
# vector of the 'best fit' line in the least squares sense.
# Now generate some points along this best fit line, for plotting.
# we want it to have mean 0 (like the points we did
# the svd on). Also, it's a straight line, so we only need 2 points.
linepts = vv[0] * np.mgrid[-100:100:2j][:, np.newaxis]
# shift by the mean to get the line in the right place
linepts += datamean
# Verify that everything looks right.
import matplotlib.pyplot as plt
import mpl_toolkits.mplot3d as m3d
ax = m3d.Axes3D(plt.figure())
ax.scatter3D(*data.T)
ax.plot3D(*linepts.T)
plt.show()
# this will find the slope and x-intercept of a plane
# parallel to the y-axis that best fits the data
A_xz = np.vstack((x, np.ones(len(x)))).T
m_xz, c_xz = np.linalg.lstsq(A_xz, z)[0]
# again for a plane parallel to the x-axis
A_yz = np.vstack((y, np.ones(len(y)))).T
m_yz, c_yz = np.linalg.lstsq(A_yz, z)[0]
# the intersection of those two planes and
# the function for the line would be:
# z = m_yz * y + c_yz
# z = m_xz * x + c_xz
# or:
def lin(z):
    x = (z - c_xz)/m_xz
    y = (z - c_yz)/m_yz
    return x, y
print(x,y)
#verifying:
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure()
ax = Axes3D(fig)
zz = np.linspace(0,5)
xx,yy = lin(zz)
ax.scatter(x, y, z)
ax.plot(xx,yy,zz)
plt.savefig('test.png')
plt.show()
As was proposed in the old post you refer to, you could also make use of principal component analysis instead of a least squares approach. For that I suggest sklearn.decomposition.PCA from the sklearn package.
An example can be found below using the csv-file you provided.
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
mydataset = pd.read_csv('line1.csv')
x = mydataset.iloc[:,0]
y = mydataset.iloc[:,1]
z = mydataset.iloc[:,2]
coords = np.array((x, y, z)).T
pca = PCA(n_components=1)
pca.fit(coords)
direction_vector = pca.components_
print(direction_vector)
# Create plot
origin = np.mean(coords, axis=0)
euclidian_distance = np.linalg.norm(coords - origin, axis=1)
extent = np.max(euclidian_distance)
line = np.vstack((origin - direction_vector * extent,
                  origin + direction_vector * extent))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(coords[:, 0], coords[:, 1], coords[:,2])
ax.plot(line[:, 0], line[:, 1], line[:, 2], 'r')
plt.show()
You can get rid of the complaint from lstsq by adding rcond=None like this:
m_xz, c_xz = np.linalg.lstsq(A_xz, z, rcond=None)[0]
Is this the right decision for your situation? I have no idea. But there's more about it in the docs.
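For example, applying the same rcond=None argument to both fits from your snippet (names taken from your code) would look like this:
import numpy as np

pts = np.add.accumulate(np.random.random((10, 3)))
x, y, z = pts.T

# plane parallel to the y-axis that best fits the data
A_xz = np.vstack((x, np.ones(len(x)))).T
m_xz, c_xz = np.linalg.lstsq(A_xz, z, rcond=None)[0]

# again for a plane parallel to the x-axis
A_yz = np.vstack((y, np.ones(len(y)))).T
m_yz, c_yz = np.linalg.lstsq(A_yz, z, rcond=None)[0]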
When I run your code with your inputs it seems to run just fine, and I get values assigned to m_xz, c_xz, etc. If you don't output them explicitly with print(m_xz) (or whatever) then you won't see them.
m_xz
Out[42]: 5.186132604596112
c_xz
Out[43]: 62.5764694106141
Also, you reference your data in two different ways: you get x, y, and z from your csv, but then also put them into a numpy array. You can get rid of the duplication (and of pandas) by using numpy directly:
data = np.genfromtxt('line1.csv', delimiter=',', skip_header=1)
x = data[:,0]
y = data[:,1]
z = data[:,2]
I used matplotlib.pyplot.contour to draw a line, but the result is strange.
My python code:
import numpy as np
from matplotlib import pyplot as plt
N = 1000
E = np.linspace(-5,0,N)
V = np.linspace(0, 70,N)
E, V = np.meshgrid(E, V)
L = np.sqrt(-E)
R = -np.sqrt(E+V)/np.tan(np.sqrt(E+V))
plt.contour(V, E,(L-R),levels=[0])
plt.show()
The result is:
But when I use Mathematica, the result is different.
Mathematica code is:
ContourPlot[Sqrt[-en] == -Sqrt[en + V]/Tan[Sqrt[en + V]], {V, 0, 70}, {en, -5, 0}]
The result is:
The result that I want is Mathematica's result.
Why does matplotlib.pyplot.contour give the wrong result? I am very confused!
I would very much appreciate it if you could give me some ideas! Thank you very much!
The result given by matplotlib.pyplot.contour is numerically correct for the data you passed in, but mathematically not what you want: contour traces a level line wherever L - R changes sign, and the sign also flips at the poles of tan, where the function jumps between -inf and +inf without ever being zero.
Check what happens if you simply plot the tan(x):
import numpy as np
from matplotlib import pyplot as plt
x = np.linspace(0,2*np.pi,1000)
y = np.tan(x)
plt.plot(x,y)
plt.show()
You will get vertical lines at the poles. This is because consecutive points are connected.
You can circumvent this by setting points whose magnitude exceeds some threshold to np.inf. E.g. adding
y[np.abs(y)> 200] = np.inf
would result in a plot without those spurious vertical lines.
The same approach can be used for the contour.
import numpy as np
from matplotlib import pyplot as plt
N = 1000
x = np.linspace(0, 70,N)
y = np.linspace(-5,0,N)
X,Y = np.meshgrid(x, y)
F = np.sqrt(-Y) + np.sqrt(Y+X)/np.tan(np.sqrt(Y+X))
F[np.abs(F) > 200] = np.inf
plt.contour(X, Y, F, levels=[0])
plt.show()
I am using the Python code below to bias an absolute sine wave. I would like to have only the crest part of the wave and not the trough part, even after positive biasing. Here I am unable to achieve a continuous crest signal after positive biasing. Can anyone help me with this?
Use case: keeping the input signal above the threshold even when the threshold shifts dynamically.
import matplotlib.pyplot as plt
import numpy as np
Bias=5;
x=np.linspace(-20,20,1000);
y=np.abs(np.sin(x)+Bias);
#Bias=np.zeros_like(x); # This is not working
y[(y<=Bias)]= Bias + y # This is not working
plt.plot(x,y)
plt.grid()
plt.show()
It is a little bit unclear what you are asking... Maybe you want to try this, which shifts the whole rectified sine up by Bias:
import matplotlib.pyplot as plt
import numpy as np
Bias=5;
x = np.linspace(-20, 20, 1000);
y = np.abs(np.sin(x))
y = y + Bias
plt.plot(x, y)
plt.grid()
plt.show()
or this, which instead flattens everything at or below the bias level to the bias value:
import matplotlib.pyplot as plt
import numpy as np
Bias=5;
x=np.linspace(-20,20,1000);
y=np.abs(np.sin(x) + Bias);
y[(y<=Bias)]= Bias
plt.plot(x,y)
plt.grid()
plt.show()
I'm new to Python and have a question. I've figured out how to graph functions, but how do I plot points that indicate the maximum and minimum values? Here is my code, and I believe it graphs properly. Thank you.
import numpy as np
import matplotlib.pyplot as plt
def graph(formula, x_range):
    x = np.array(x_range)
    y = eval(formula)
    plt.plot(x, y)
    plt.show()

graph('-x**4 + 508 * x + 40', range(-10, 200))
Something like this? Inside your graph function, before plt.show(), y.argmax() and y.argmin() give the indices of the sampled maximum and minimum, which you can then mark:
n_max = y.argmax()
plt.plot(x[n_max], y[n_max], 'o')
n_min = y.argmin()
plt.plot(x[n_min], y[n_min], 'x')
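For completeness, a minimal runnable sketch with those lines folded into the graph function from the question (formula and range copied from there):
import numpy as np
import matplotlib.pyplot as plt

def graph(formula, x_range):
    x = np.array(x_range)
    y = eval(formula)       # evaluate the formula string on the x array
    plt.plot(x, y)
    # mark the sampled maximum and minimum
    n_max = y.argmax()
    plt.plot(x[n_max], y[n_max], 'o')
    n_min = y.argmin()
    plt.plot(x[n_min], y[n_min], 'x')
    plt.show()

graph('-x**4 + 508 * x + 40', range(-10, 200))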