I would like to know how numpy.gradient works. I used gradient to try to calculate group velocity (the group velocity of a wave packet is the derivative of frequency with respect to wavenumber, not a group of velocities). I fed it a 3-column array: the first 2 columns are the x and y coordinates, and the third column is the frequency at that point (x, y). I need to calculate the gradient, and I expected a 2D vector, the gradient definition being
df/dx*i + df/dy*j + df/dz*k
and my function being a function of x and y only, so I expected something like
df/dx*i + df/dy*j
But I got 2 arrays with 3 columns each, i.e. two 3-component vectors; at first I thought that the sum of the two would give me the vector I was searching for, but the z component doesn't vanish. I hope I've been sufficiently clear in my explanation. I would like to know how numpy.gradient works and whether it's the right choice for my problem. Otherwise I would like to know if there is any other Python function I can use.
What I mean is: I want to calculate the gradient of an array of values:
data=[[x1,x2,x3]...[x1,x2,x3]]
where x1, x2 are the point coordinates on a uniform grid (my points in the Brillouin zone) and x3 is the frequency value at that point. I also give as input the derivative step for the two directions:
stepx = abs(max(unique(data[:,0])) - min(unique(data[:,0]))) / (len(unique(data[:,0])) - 1)
and the same for the y direction.
I didn't build my data on a grid; I already have a grid, and this is why the kind examples given here in the answers do not help me.
A more fitting example should have a grid of points and values like the one I have:
data = []
for i in range(10):
    for j in range(10):
        data.append([i, j, i**2 + j**2])
data = array(data, dtype=float)
gx, gy = gradient(data)
Another thing I can add is that my grid is not square but has the shape of a polygon, being the Brillouin zone of a 2D crystal.
I've understood that numpy.gradient works properly only on a rectangular grid of values, which is not what I'm searching for. Even if I pad my data into such a grid, it would have lots of zeroes outside the polygon of my original data, and those would add spuriously large vectors to my gradient, hurting the precision of the calculation. This module seems to me more a toy than a tool; it has severe limitations, imho.
Problem solved using dictionaries.
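The post doesn't say what the dictionary-based solution looked like, but one way it could work is to key the frequency values by their (x, y) grid point and take central differences only where both neighbours exist, so points outside the Brillouin-zone polygon never enter the calculation. A rough sketch (the function name and the rounding tolerance are made up for illustration):

def polygon_gradient(data, stepx, stepy, ndigits=9):
    """Central-difference gradient on a partial (polygonal) uniform grid.

    data rows are [x, y, frequency]; a point missing a neighbour in some
    direction gets None for the derivative in that direction.
    """
    # key frequencies by their grid point; rounding guards against float noise
    freq = {(round(px, ndigits), round(py, ndigits)): f for px, py, f in data}

    grads = {}
    for (px, py) in freq:
        fxp = freq.get((round(px + stepx, ndigits), py))
        fxm = freq.get((round(px - stepx, ndigits), py))
        fyp = freq.get((px, round(py + stepy, ndigits)))
        fym = freq.get((px, round(py - stepy, ndigits)))
        dfdx = (fxp - fxm) / (2 * stepx) if fxp is not None and fxm is not None else None
        dfdy = (fyp - fym) / (2 * stepy) if fyp is not None and fym is not None else None
        grads[(px, py)] = (dfdx, dfdy)
    return grads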
You need to give gradient a matrix that describes your angular frequency values for your (x, y) points, e.g.
import numpy as np

def f(x, y):
    return np.sin(x + y)

x = y = np.arange(-5, 5, 0.05)
X, Y = np.meshgrid(x, y)
zs = np.array([f(x, y) for x, y in zip(np.ravel(X), np.ravel(Y))])
Z = zs.reshape(X.shape)

# np.gradient returns the derivative along axis 0 (rows, i.e. y) first,
# then along axis 1 (columns, i.e. x)
gy, gx = np.gradient(Z, 0.05, 0.05)
You can see what this looks like by plotting Z as a surface.
Here is how to interpret your gradient:
gx is a matrix that gives the change dz/dx at all points, e.g. gx[0][0] is dz/dx at (x0, y0). Visualizing gx helps in understanding it.
Since my data was generated from f(x, y) = sin(x + y), gy looks the same.
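If you want to reproduce the plots, something along these lines (a minimal matplotlib sketch, not from the original answer) draws Z as a surface and gx as an image:

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # only needed on older matplotlib versions

fig = plt.figure(figsize=(10, 4))

# surface of Z = sin(x + y)
ax1 = fig.add_subplot(1, 2, 1, projection='3d')
ax1.plot_surface(X, Y, Z, cmap='viridis')
ax1.set_title('Z = sin(x + y)')

# gx as an image: dz/dx at every grid point
ax2 = fig.add_subplot(1, 2, 2)
im = ax2.imshow(gx, extent=[-5, 5, -5, 5], origin='lower')
fig.colorbar(im, ax=ax2, label='dz/dx')
ax2.set_title('gx')

plt.show()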
Here is a more obvious example, using f(x, y) = sin(x), together with plots of f(x, y) and its gradients.
Update: let's take a look at the xy pairs.
This is the code I used:
def f(x, y):
    return np.sin(x)

x = y = np.arange(-3, 3, .05)
X, Y = np.meshgrid(x, y)
zs = np.array([f(x, y) for x, y in zip(np.ravel(X), np.ravel(Y))])
xy_pairs = np.array([str(x) + ',' + str(y) for x, y in zip(np.ravel(X), np.ravel(Y))])
Z = zs.reshape(X.shape)
xy_pairs = xy_pairs.reshape(X.shape)
gy, gx = np.gradient(Z, .05, .05)
Now we can look and see exactly what is happening. Say we wanted to know what point is associated with the value at Z[20][30]. Then...
>>> Z[20][30]
-0.99749498660405478
And the point is
>>> xy_pairs[20][30]
'-1.5,-2.0'
Is that right? Let's check.
>>> np.sin(-1.5)
-0.99749498660405445
Yes.
And what are our gradient components at that point?
>>> gy[20][30]
0.0
>>> gx[20][30]
0.070707731517679617
Do those check out?
dz/dy is always 0: check.
dz/dx = cos(x), and...
>>> np.cos(-1.5)
0.070737201667702906
Looks good.
You'll notice they aren't exactly correct. That is because my Z data isn't continuous: there is a step size of 0.05, so gradient can only approximate the rate of change.
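If you want to see exactly where that small discrepancy comes from: np.gradient uses a central difference in the interior, and the central difference of sin(x) with step h works out to cos(x)·sin(h)/h, which reproduces the value above almost exactly:

h = 0.05
approx = np.cos(-1.5) * np.sin(h) / h   # ~0.0707077, essentially gx[20][30]
exact = np.cos(-1.5)                    # ~0.0707372, the true derivative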
Related
Given is a geometric object, for simplicity a hemisphere with a certain radius. This is represented as a 2D matrix with the Z data being the height. Assuming that I cut the object along an arbitrary line, I want to calculate the area of the cut. My solution is to interpolate the hemisphere using scipy's RectBivariateSpline so that it is represented accurately.
import numpy as np
import scipy.interpolate as intp

radius = 15.
gridsize = 0.5
spectrum = np.arange(-radius, radius + gridsize, gridsize)
X, Y = np.meshgrid(spectrum, spectrum)
# hemisphere height, 0 outside the circle of the given radius
Z = np.where(np.sqrt(X**2 + Y**2) <= radius, np.sqrt(radius**2 - (X**2 + Y**2)), 0)
spline = intp.RectBivariateSpline(x=X[0, :], y=Y[:, 0], z=Z)

# Example coordinates of the cut
x0 = -4.78
x = -6.73
y0 = -15.
y = 15.
However, RectBivariateSpline only offers an area integral (which can be quickly checked by setting x0 = x or y0 = y). On the other hand, UnivariateSpline only takes a 1D array, which would only work if my cut happened to lie along one specific row or column of the matrix Z.
Since I want to perform this operation a few thousand times, I would need a comparably quick way to solve the integral (numerically or analytically doesn't matter as long as the error is somewhat negligible). Does anyone have an idea on how to do this?
It turned out that, for my case, it was sufficient to sample the spline along my cut (using numpy's arange to get equally spaced points) and then to integrate via the Simpson rule, which only requires points with a sufficiently small spacing (which can be controlled via arange's step parameter).
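A minimal sketch of that approach, assuming the cut is the straight segment from (x0, y0) to (x, y); it uses scipy.integrate.simpson (called simps in older SciPy versions) and an assumed sampling step of 0.01:

import numpy as np
from scipy.integrate import simpson

# parametrize the straight cut from (x0, y0) to (x, y)
length = np.hypot(x - x0, y - y0)
step = 0.01                            # assumed sampling step along the cut
t = np.arange(0.0, 1.0 + step, step)
xs = x0 + t * (x - x0)
ys = y0 + t * (y - y0)

heights = spline.ev(xs, ys)            # spline height at each sample point
area = simpson(heights, x=t * length)  # integrate the height over the distance along the cut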
I am experimenting with gradient descent and want to plot a contour of the gradient given independent variables x and y.
The optimization objective is to estimate a point given only a list of points and the distances to each of those points. I have a list of vectors of form [(x_1, y_1, d_1), ..., (x_n, y_n, d_n)] where d_i is the measured distance from the point to be estimated to the point (x_i, y_i), and I have a function g(x, y) that returns the gradient at the point (x, y). (The function g(x, y) uses the training vectors to calculate the gradient.)
The gradient descent algorithm works fine and arrives at a close estimate to the actual point coordinates. I want now to visualize the gradient as a contour map. I have the following for x and y values:
xlist = np.linspace(min([v[0] for v in vectors])-1, max([v[0] for v in vectors])+1, 100)
ylist = np.linspace(min([v[1] for v in vectors])-1, max([v[1] for v in vectors])+1, 100)
X, Y = np.meshgrid(xlist, ylist)
But now I need a Z value that maps each pair of coordinates in the grid mesh to g(x, y), and it needs to be the correct shape for the matplotlib contour plot. The examples I have seen have been useless because they all simply multiplied the x and y arrays to generate z values (which obviously will not work in this case), and all the tips, tricks, and SO answers I have encountered ultimately did not help.
How do I use my custom function g(x, y) to create the 2D Z array necessary for constructing a valid contour plot?
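One way to do it (a minimal sketch, assuming g(x, y) returns the gradient as a (gx, gy) pair and that the quantity you want to contour is its magnitude) is simply to evaluate g at every node of the mesh and fill a Z array of the same shape:

import numpy as np
import matplotlib.pyplot as plt

# evaluate the custom gradient function on every grid node
Z = np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        gx, gy = g(X[i, j], Y[i, j])
        Z[i, j] = np.hypot(gx, gy)   # scalar field (gradient magnitude) to contour

cs = plt.contour(X, Y, Z, levels=20)
plt.colorbar(cs)
plt.show()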
I can't quite wrap my head around how to extrapolate from a dataset where the points are not ordered, i.e. not monotonic in 'x', like so:
http://www.pic-host.org/images/2014/07/21/0b5ad6a11266f549.png
I gathered that I need to create a plot for the x and y values separately. So this is the code that gets me that (the points are ordered):
import numpy as np
import scipy.interpolate
import matplotlib.pyplot as plt

x = bananax
y = bananay
t = np.arange(x.shape[0], dtype=float)
t /= t[-1]
nt = np.linspace(0, 1, 100)
x1 = scipy.interpolate.spline(t, x, nt)   # scipy.interpolate.spline exists only in older SciPy releases
y1 = scipy.interpolate.spline(t, y, nt)
plt.plot(nt, x1, label='data x')
plt.plot(nt, y1, label='data y')
Now I have the interpolated splines. I guess I have to do the extrapolation for f(nt) = x1 and f(nt) = y1 respectively. I understand how to extrapolate from the data with a simple linear regression, but I'm missing how to get a more complex spline(?) extrapolated from it.
The aim is to let the extrapolated function follow the curvature of the data points (at one end, at least).
Cheers, and thanks!
I believe that you're on the right track in that you're creating a parametric curve (creating x(t) and y(t)), because the points are ordered. Part of the issue seems to be that the spline function gives you back discrete values rather than the form and parameters of the spline. scipy.optimize has some nice tools that will help you find functions rather than calculating points.
If you've got any insight into the underlying process generating the data, I suggest that you use it to help select a functional form for fitting. These more free-form methods give you a degree of flexibility to do so.
Fit x(t) and y(t) and hold onto the resulting fitting functions. They'll be generated with data from t = 0 to t = 1, but nothing* will stop you from evaluating them outside that range.
I can recommend the following links for guidance on curve fitting procedure:
short: http://glowingpython.blogspot.com/2011/05/curve-fitting-using-fmin.html
long: http://nbviewer.ipython.org/gist/keflavich/4042018
*almost nothing
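As a concrete example of that route, here is a minimal sketch using scipy.optimize.curve_fit, assuming (purely for illustration) a cubic polynomial form for both x(t) and y(t):

import numpy as np
from scipy.optimize import curve_fit

# illustrative functional form; pick one that matches what you know about the data
def cubic(t, a, b, c, d):
    return a * t**3 + b * t**2 + c * t + d

px, _ = curve_fit(cubic, t, x)   # parameters of x(t)
py, _ = curve_fit(cubic, t, y)   # parameters of y(t)

# the fitted functions can be evaluated outside [0, 1] to extrapolate
t_ext = np.linspace(-0.5, 1.5, 200)
x_ext = cubic(t_ext, *px)
y_ext = cubic(t_ext, *py)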
Thanks, this got me on the right track. What worked for me was:
x = bananax
y = bananay

#------ fit a spline to the coordinates; the x and y axes are interpolated against the parameter t
t = np.arange(x.shape[0], dtype=float)   # t indexes the data points
t /= t[-1]                               # t is now scaled to run from 0 to 1
nt = np.linspace(0, 1, 100)              # 100 evenly spaced parameter values between 0 and 1
x1 = scipy.interpolate.spline(t, x, nt)  # spline estimate of x at the parameter values nt
y1 = scipy.interpolate.spline(t, y, nt)

#------ create a new parameter range nnt over which the interpolated spline will be extrapolated
nnt = np.linspace(-1, 1, 100)            # values < 0 are extrapolated (the interpolation started at the tip, t = 0)
x1fit = np.polyfit(nt, x1, 3)            # fit a 3rd-order polynomial to the spline values; returns the coefficients
y1fit = np.polyfit(nt, y1, 3)
xpoly = np.poly1d(x1fit)                 # build callable polynomials from the coefficients obtained by polyfit
ypoly = np.poly1d(y1fit)
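The snippet above stops before actually evaluating the fit; presumably the final step is to evaluate the polynomials over the extended range nnt and plot the result, something like:

import matplotlib.pyplot as plt

x_extrap = xpoly(nnt)   # extrapolated x coordinates
y_extrap = ypoly(nnt)   # extrapolated y coordinates

plt.plot(x1, y1, label='interpolated data')
plt.plot(x_extrap, y_extrap, '--', label='polynomial extrapolation')
plt.legend()
plt.show()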
I have two columns of data, x and y. The y data takes the shape of the triangle wave shown below. As you can see, the triangle has 2 sections of positive gradient and 1 longer section with a negative gradient.
I would like to write a program that:
Queries whether the current entry in a vertical array has a positive or negative gradient with respect to the next entry in the array.
Then plots the y data against x, where y values with a positive gradient (and their respective x values) are plotted in one colour, and the points with a negative gradient in another colour.
How is this best done in Python?
import numpy as np
import numpy.ma as ma
import pylab

fn = 'filename.txt'
x = np.loadtxt(fn, unpack=True, usecols=[0])
y = np.loadtxt(fn, unpack=True, usecols=[1])

n = ma.masked_where(np.gradient(y) < 0, y)   # mask the points where the gradient is negative
p = ma.masked_where(np.gradient(y) > 0, y)   # mask the points where the gradient is positive
pylab.plot(x, n, 'r', x, p, 'g')
Does the trick for me!
I checked the available interpolation methods in scipy, but could not find a proper solution for my case.
Assume I have 100 points whose coordinates are random, e.g. their x and y positions are:
x = np.random.rand(100) * 100
y = np.random.rand(100) * 100
z = f(x, y)  # the point value calculated by some function
Now I want to get the point value z at new, evenly sampled coordinates (xnew and ynew):
xnew = range(100)
ynew = range(100)
How should I do this using bilinear sampling?
I know it is possible to do it point by point, e.g. find the 4 nearest random points and interpolate between them, but there have to be existing functions that make this easier.
Thanks a lot!
Use scipy.interpolate.griddata. It does exactly what you need:
import numpy
import scipy.interpolate

# griddata expects the interpolant coordinates as an array of shape (npoints, ndim)
interpolants = numpy.array([xnew, ynew]).T
# defaults to linear interpolation
znew = scipy.interpolate.griddata((x, y), z, interpolants)
http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html#scipy.interpolate.griddata
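If what you actually want is z over the full 100 × 100 grid (rather than only at the paired points (xnew[i], ynew[i])), you can pass meshgrid arrays as the interpolation targets; a small sketch:

import numpy as np
from scipy.interpolate import griddata

Xnew, Ynew = np.meshgrid(xnew, ynew)                       # full target grid
Znew = griddata((x, y), z, (Xnew, Ynew), method='linear')  # NaN outside the convex hull of the data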