For example, see the image below, which illustrates the problem for a simple 2D case. The label (N) and coordinates (x, y) of each point are known. I need to find all the point labels that lie within the red circle.
My actual problem is in 3D, and the points are not uniformly distributed.
A sample input file containing the coordinates of 7.25 M points is attached here: point file.
I tried the following piece of code
import numpy as np

C = [50, 50, 50]
R = 20
centroid = np.loadtxt('centroid')  # see the attached file

def dist(x, y):
    return sum((xi - yi)**2 for xi, yi in zip(x, y))

elabels = [i + 1 for i in range(len(centroid)) if dist(C, centroid[i]) <= R**2]
For a single search it takes ~10 minutes. Any suggestions to make it faster?
Thanks,
Prithivi
When using numpy, avoid using list comprehensions on arrays.
Your computation can be done using vectorized expressions like this:
centre = np.array((50., 50., 50.))
points = np.loadtxt('data')
distances2 = np.sum((points - centre)**2, axis=1)
points is an N x 3 array (three coordinates per point), so points - centre is also an N x 3 array, with centre broadcast across the rows.
(points - centre)**2 squares each element of the difference, and np.sum(..., axis=1) then sums the squared differences along axis 1, that is, across the columns.
To filter the array of positions, you can use boolean indexing
close = points[distances2<max_dist**2]
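Putting the pieces together, here is a self-contained sketch of the full vectorized search. The random stand-in data and the variable names are mine; in the question the points come from np.loadtxt('centroid'), and labels are 1-based indices:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(0, 100, size=(100_000, 3))  # stand-in for np.loadtxt('centroid')

centre = np.array((50., 50., 50.))
R = 20.0

# squared distance of every point to the centre, with no Python-level loop
distances2 = np.sum((points - centre)**2, axis=1)

mask = distances2 <= R**2
labels = np.nonzero(mask)[0] + 1  # 1-based labels, as in the question
inside = points[mask]             # the coordinates themselves
```

Comparing squared distances against R**2 avoids a square root over all 7.25 M points, which is the same trick the question's code already uses.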
You are calling the dist function very heavily. You could try to optimize it at a low level and check with the timeit Python module which version is more efficient. On my machine, I tried this one:
def dist(x, y):
    d0 = y[0] - x[0]
    d1 = y[1] - x[1]
    d2 = y[2] - x[2]
    return d0*d0 + d1*d1 + d2*d2
and timeit said it was more than 3 times quicker.
This one was just in the middle:
def dist(x, y):
    s = 0
    for i in range(len(x)):
        d = y[i] - x[i]
        s += d*d
    return s
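If many such radius searches are needed, a spatial index avoids rescanning all 7.25 M points on every query. This is not from the answers above, but a sketch of the idea using scipy.spatial.cKDTree: the tree is built once, and each subsequent radius query is then fast.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
points = rng.uniform(0, 100, size=(100_000, 3))  # stand-in for the real point file

tree = cKDTree(points)  # built once, reused for every search

# indices of all points within radius 20 of the centre
idx = tree.query_ball_point([50., 50., 50.], r=20.0)
labels = [i + 1 for i in idx]  # 1-based labels, as in the question
```

Building the tree takes some seconds for millions of points, so this only pays off when more than a handful of searches are performed.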
Related
I am asking for help to speed up my program by replacing the for loop described below with something using numpy and no for loop. It calculates the distance from a single point to every point in a 2D array holding multiple points. I have added code and comments below to make it clearer.
Thank you very much for your help.
# some random point
current_point = np.array([423, 629])
distances = []
# x and y coordinates of the points, saved in np arrays
# example: for three points the xi array could look like array([231, 521, 24])
xi = x[indices]
yi = y[indices]
# combined array holding both x and y coordinates, of shape (number of points, 2)
x_y = np.vstack((xi, yi)).reshape(-1, 2)
# this is the part that I need to speed up: calculating the distance from
# current_point to every point in x_y
for i in range(xi.shape[0]):
    b = np.array([xi[i], yi[i]])
    distance = np.linalg.norm(current_point - b)
    distances.append(distance)
min_distance = min(distances)
Replace your for loop with this one-liner:
distances = np.linalg.norm(x_y - current_point, axis=1)
You're not properly stacking the coordinate arrays: the stacked array should be transposed, not reshaped,
x_y = np.vstack((xi, yi)).T
and then the distances are
distances = np.linalg.norm(current_point - x_y, axis=-1)
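A complete minimal sketch combining the stacking fix with the distance computation. The coordinate arrays here are made up, since the asker's x, y, and indices aren't shown:

```python
import numpy as np

# hypothetical data standing in for x[indices] and y[indices]
xi = np.array([231., 521., 24.])
yi = np.array([100., 600., 30.])
current_point = np.array([423., 629.])

x_y = np.vstack((xi, yi)).T  # shape (number of points, 2)

# one distance per point, no Python loop
distances = np.linalg.norm(x_y - current_point, axis=1)
min_distance = distances.min()
nearest = x_y[np.argmin(distances)]  # the closest point itself
```

np.argmin also recovers which point is closest, which min(distances) alone cannot do.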
I have the following problem. Imagine you have a set of coordinates that are somewhat organized in a regular pattern, such as the one shown below.
What I want to do is automatically extract coordinates such that they are ordered from left to right and top to bottom. In addition, the total number of coordinates should be as large as possible, but only include coordinates such that the extracted set lies on a nearly rectangular grid (even if the coordinates have a different symmetry, e.g. hexagonal). I always want to extract coordinates that follow a rectangular unit-cell structure.
For the example shown above, the largest such orthorhombic set would be 8 x 8 coordinates (let's call these dimensions m x n), as framed by the red rectangle.
The problem is that the given coordinates are noisy and distorted.
My approach was to generate an artificial lattice and minimize its difference to the given coordinates, taking into account some rotation, shift, and simple distortion of the lattice. However, it turned out to be tricky to define a cost function that covers the complexity of the problem, i.e. minimizing the difference between the given coordinates and the fitted lattice while also maximizing the grid dimensions m x n.
If anyone has a smart idea how to tackle this problem, maybe also with machine-learning algorithms, I would be very thankful.
Here is the code that I have used so far:
A function to generate the artificial lattice with m x n coordinates that are spaced by a and b in the "n" and "m" directions. The angle theta allows for a rotation of the lattice.
def lattice(m, n, a, b, theta):
    coords = []
    for j in range(m):
        for i in range(n):
            coords.append([np.sin(theta)*a*i + np.cos(theta)*b*j,
                           np.cos(theta)*a*i - np.sin(theta)*b*j])
    return np.array(coords)
I used the following function to measure the mean minimal distance between points, which is a good starting point for fitting:
def mean_min_distance(coords):
    from scipy.spatial import distance
    cd = distance.cdist(coords, coords)
    cd_1 = np.where(cd == 0, np.nan, cd)
    return np.mean(np.nanmin(cd_1, axis=1))
The following function provides all possible combinations of m x n that could theoretically fit into the number of given coordinates l, whose arrangement is assumed to be unknown. The ability to limit m and n to minimal and maximal values is already included:
def get_all_mxn(l, min_m=2, min_n=2, max_m=None, max_n=None):
    poss = []
    if max_m is None:
        max_m = l + 1
    if max_n is None:
        max_n = l + 1
    for i in range(min_m, max_m):
        for j in range(min_n, max_n):
            if i * j <= l:
                poss.append([i, j])
    return np.array(poss)
The definition of the cost function I used (for one particular m x n pair), since I first wanted to get a good fit for a fixed m x n arrangement:
def cost(x0):
    a, b, theta, shift_a, shift_b, dd1 = x0
    # generate lattice
    l = lattice(m, n, a, b, theta)
    # distort lattice by affine transformation
    distortion_matr = np.array([[1, dd1], [0, 1]])
    l = np.dot(distortion_matr, l.T).T
    # shift lattice
    l = l + np.array((shift_b, shift_a))
    # some padding to make the lists the same length
    len_diff = coords.shape[0] - l.shape[0]
    l = np.append(l, (1e3, 1e3)*len_diff).reshape((l.shape[0] + len_diff, 2))
    # calculate all distances between all points
    cd = distance.cdist(coords, l)
    # minimum distance between each artificial lattice point and all coords
    cd_min = np.min(cd[:, :coords.shape[0] - len_diff], axis=0)
    # return the root mean square of all minimal distances
    return np.sqrt(np.sum(np.abs(cd_min)**2))
I then run the minimization:
from scipy.optimize import minimize

md = mean_min_distance(coords)
# initial guess
x0 = np.array((md, md, np.deg2rad(-3.), 3, 1, 0.12))
res = minimize(cost, x0)
However, the results are extremely dependent on the initial parameters x0, and I have not even included a fit of m and n yet.
Hi, I am fairly new and I hope you can answer my question or help me find a better method!
Say I have a set of x,y,z coordinates that I want to subdivide into arrays containing the points within a certain volume (dV) of the total volume of the x,y,z space.
I have been trying to sort the coordinates by their x value first and subdivide by some dx into a new dimension of the array; then, within each of these subdivisions, sort the y values and subdivide by dy; and then do the same along the z axis, giving the sorted and subdivided coordinates.
I have attempted to create an array to append the coordinate sets to...
def splitter(array1):
    xSortx = np.zeros([10, 1, 3])
    for j in range(0, 10):
        for i in range(len(array1)):
            if (j * dx) <= array1[i][0] < (j + 1)*dx:
                np.append(xSortx[j], array1[i])  # note: np.append returns a new array; this result is discarded
Everything seemed to be working except the append part; I have heard that append can be troublesome in Python. Another method I tried was to create the multidimensional matrix first in order to fill it, but I ran into the problem that I do not know how to create a multidimensional matrix that could have, for example, 1 entry in one index of the second dimension but 5 in the next, e.g. [[[0,0,0]], [[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]]].
I would really appreciate any tips or advice, let me know if this is not very clear and I will try to explain it more!
I believe this is what you want:
# define your working volume
Vmin = np.array([1,2,3])
Vmax = np.array([4,5,6])
DV = Vmax-Vmin
# define your subdividing unit
d = 0.5
N = np.ceil(DV / d).astype(int) # number of bins in each dimension
def splitter(array):
    # one empty list per bin, nested so that result[i][j][k] indexes bins along x, y, z
    result = [[[[] for k in range(N[2])] for j in range(N[1])] for i in range(N[0])]
    for p in array:
        i, j, k = ((p - Vmin) / d).astype(int)  # find the bin coordinates
        result[i][j][k].append(p)
    return result
# test the function
test = Vmin + np.random.rand(20, 3) * DV  # create 20 random points in the working volume
result = splitter(test)
for i in range(N[0]):
    for j in range(N[1]):
        for k in range(N[2]):
            print("points in bin:", Vmin + np.array([i, j, k]) * d)
            for p in result[i][j][k]:
                print(p)
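The per-point Python loop above can also be replaced by a fully vectorized binning step: compute every point's bin index at once with elementwise division, then group by a flattened bin index. A sketch of that idea, reusing the same Vmin, Vmax, d, and N conventions as the answer (the dictionary-of-bins representation is my choice, not from the answer):

```python
import numpy as np

Vmin = np.array([1., 2., 3.])
Vmax = np.array([4., 5., 6.])
d = 0.5
N = np.ceil((Vmax - Vmin) / d).astype(int)  # number of bins per axis

rng = np.random.default_rng(2)
pts = Vmin + rng.random((20, 3)) * (Vmax - Vmin)  # 20 random points in the volume

# bin coordinates for all points at once, clipped so boundary points stay in range
ijk = np.clip(((pts - Vmin) / d).astype(int), 0, N - 1)

# group points by a single flattened bin index
flat = np.ravel_multi_index(ijk.T, N)
bins = {b: pts[flat == b] for b in np.unique(flat)}
```

Each key of bins is a flat bin index (recoverable as i, j, k via np.unravel_index), and each value is the array of points that fell into that bin, so no fixed-size padding is needed.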
Does anyone know a good method to calculate the empirical/sample covariogram, if possible in Python?
This is a screenshot of a book which contains a good definition of covariogram:
If I understood it correctly, for a given lag/width h, I'm supposed to take all pairs of points separated by h (or by less than h), multiply their values, and, for each of those points, calculate its mean, defined here as m(x_i). According to the definition of m(x_i), computing m(x_1) means averaging the values located within distance h of x_1. This looks like a very intensive computation.
First of all, am I understanding this correctly? If so, what is a good way to compute this in a two-dimensional space? I tried to code it in Python (using numpy and pandas), but it takes a couple of seconds and I'm not even sure it's correct, which is why I will refrain from posting that code here. Here is another attempt, a very naive implementation:
from scipy.spatial.distance import pdist, squareform

distances = squareform(pdist(np.array(coordinates)))  # coordinates is an n x 2 array
z = np.array(z)  # z are the values
cutoff = np.max(distances)/3.0  # somewhat arbitrary cutoff
width = cutoff/15.0
widths = np.arange(0, cutoff + width, width)
Z = []
Cov = []
for w in np.arange(len(widths)-1):  # for each width
    # for each pairwise distance
    for i in np.arange(distances.shape[0]):
        for j in np.arange(distances.shape[1]):
            if distances[i, j] <= widths[w+1] and distances[i, j] > widths[w]:
                m1 = []
                m2 = []
                # when a distance is within a given width, calculate the means of
                # the points involved
                for x in np.arange(distances.shape[1]):
                    if distances[i, x] <= widths[w+1] and distances[i, x] > widths[w]:
                        m1.append(z[x])
                for y in np.arange(distances.shape[1]):
                    if distances[j, y] <= widths[w+1] and distances[j, y] > widths[w]:
                        m2.append(z[y])
                mean_m1 = np.array(m1).mean()
                mean_m2 = np.array(m2).mean()
                Z.append(z[i]*z[j] - mean_m1*mean_m2)
    Z_mean = np.array(Z).mean()  # calculate covariogram for width w
    Cov.append(Z_mean)  # collect covariances for all widths
However, now I have confirmed that there is an error in my code. I know that because I used the variogram to calculate the covariogram (covariogram(h) = covariogram(0) - variogram(h)) and I get a different plot:
And it is supposed to look like this:
Finally, if you know a Python/R/MATLAB library to calculate empirical covariograms, let me know. At least, that way I can verify what I did.
One could use np.cov, but if one does the calculation directly (which is very easy), there are more ways to speed this up.
First, make some fake data that has some spatial correlations. I'll do this by first making the spatial correlations, and then generating random data points using them, where the data is positioned according to the underlying map and also takes on its values.
Edit 1:
I changed the data-point generator so positions are purely random, but the z-values are proportional to the spatial map. I also changed the map so that the left and right halves are shifted relative to each other, creating negative correlation at large h.
from numpy import *
import math
import random
import matplotlib.pyplot as plt

S = 1000
N = 900
# first, make some fake data, with correlations on two spatial scales
# density map
x = linspace(0, 2*pi, S)
sx = sin(3*x)*sin(10*x)
density = .8*abs(outer(sx, sx))
density[:, :S//2] += .2
# make a point cloud motivated by this density
random.seed(10)  # so this can be repeated
points = []
while len(points) < N:
    v, ix, iy = random.random(), random.randint(0, S-1), random.randint(0, S-1)
    if True:  # v < density[ix, iy]:
        points.append([ix, iy, density[ix, iy]])
locations = array(points).transpose()
print(locations.shape)
plt.imshow(density, alpha=.3, origin='lower')
plt.plot(locations[1, :], locations[0, :], '.k')
plt.xlim((0, S))
plt.ylim((0, S))
plt.show()
# build these into the main data: all pairs into distances and z0 z1 values
L = locations
m = array([[math.sqrt((L[0, i]-L[0, j])**2 + (L[1, i]-L[1, j])**2), L[2, i], L[2, j]]
           for i in range(N) for j in range(N) if i > j])
Which gives:
The above is just the simulated data, and I made no attempt to optimize its production. I assume this is where the OP starts with the task below, since in a real situation the data already exists.
Now calculate the "covariogram" (which is much easier than generating the fake data, by the way). The idea here is to sort all the pairs and their associated values by h, and then index into these using ihvals. That is, summing up to index ihval gives the sum over N(h) in the equation, since it includes all pairs with h below the desired value.
Edit 2:
As suggested in the comments below, N(h) now includes only the pairs between h-dh and h, rather than all pairs between 0 and h (where dh is the spacing of the h-values in ihvals, i.e. S/1000 was used below).
# now do the real calculations for the covariogram
# sort by h and give clear names
i = argsort(m[:, 0])  # h sorting
h = m[i, 0]
zh = m[i, 1]
zsh = m[i, 2]
zz = zh*zsh
hvals = linspace(0, S, 1000)  # the values of h to use (S should be in units of distance; here I just used ints)
ihvals = searchsorted(h, hvals)
result = []
for i, ihval in enumerate(ihvals[1:]):
    start, stop = ihvals[i], ihval  # pairs with h between hvals[i] and hvals[i+1]
    N = stop - start
    if N > 0:
        mnh = sum(zh[start:stop])/N
        mph = sum(zsh[start:stop])/N
        szz = sum(zz[start:stop])/N
        C = szz - mnh*mph
        result.append([h[ihval], C])
result = array(result)
plt.plot(result[:, 0], result[:, 1])
plt.grid()
plt.show()
which looks reasonable to me, as one can see bumps or troughs at the expected h values, but I haven't done a careful check.
The main speedup here over np.cov is that one can precalculate all of the products zz. Otherwise, one would feed zh and zsh into cov for every new h, and all the products would be recalculated. The calculation could be sped up even more by keeping partial sums, i.e. adding the slice from ihvals[n-1] to ihvals[n] at each step n, but I doubt that will be necessary.
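In the same spirit, the per-bin sums can be computed without any Python loop at all, using np.digitize to assign each pair to an h-bin and np.bincount for the per-bin totals. A rough sketch on small made-up data (the variable names and random data are mine, not from the answer above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
pts = rng.random((n, 2))  # random 2D positions
z = rng.random(n)         # a value per point

# all unique pairs (i < j): separation h and the two z-values
i, j = np.triu_indices(n, k=1)
h = np.linalg.norm(pts[i] - pts[j], axis=1)
z1, z2 = z[i], z[j]

edges = np.linspace(0, h.max(), 16)
which = np.digitize(h, edges) - 1  # bin index per pair, in 0..len(edges)-1

counts = np.bincount(which, minlength=len(edges))
with np.errstate(divide="ignore", invalid="ignore"):  # empty bins give nan
    m1 = np.bincount(which, weights=z1, minlength=len(edges)) / counts
    m2 = np.bincount(which, weights=z2, minlength=len(edges)) / counts
    mzz = np.bincount(which, weights=z1*z2, minlength=len(edges)) / counts
cov = mzz - m1*m2  # per-bin covariogram estimate
```

Each entry of cov is mean(z1*z2) - mean(z1)*mean(z2) over the pairs whose separation falls in that h-bin, matching the binned definition used in the answer.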
I have 6 lists storing x,y,z coordinates of two sets of positions (3 lists each). I want to calculate the distance between each point in both sets. I have written my own distance function but it is slow. One of my lists has about 1 million entries.
I have tried cdist, but it produces a distance matrix and I do not understand what it means. Is there another inbuilt function that can do this?
If possible, use the numpy module to handle this kind of thing. It is a lot more efficient than using regular Python lists.
I am interpreting your problem like this
You have two sets of points
Both sets have the same number of points (N)
Point k in set 1 is related to point k in set 2. If each point is the coordinate of some object, I am interpreting it as set 1 containing the initial point and set 2 the point at some other time t.
You want to find the distance d(k) = dist(p1(k), p2(k)) where p1(k) is point number k in set 1 and p2(k) is point number k in set 2.
Assuming that your 6 lists are x1_coords, y1_coords, z1_coords and x2_coords, y2_coords, z2_coords respectively, then you can calculate the distances like this
import numpy as np
p1 = np.array([x1_coords, y1_coords, z1_coords])
p2 = np.array([x2_coords, y2_coords, z2_coords])
squared_dist = np.sum((p1-p2)**2, axis=0)
dist = np.sqrt(squared_dist)
The distance between p1(k) and p2(k) is now stored in the numpy array as dist[k].
As for speed: on my laptop with an "Intel(R) Core(TM) i7-3517U CPU @ 1.90GHz", the time to calculate the distance between two sets of points with N=1E6 is 45 ms.
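On the cdist confusion from the question: cdist(a, b) returns a matrix D in which D[i, j] is the distance between point i of the first set and point j of the second, so under the paired interpretation above the diagonal is the part you want. A tiny sketch with made-up points:

```python
import numpy as np
from scipy.spatial.distance import cdist

a = np.array([[0., 0., 0.], [1., 1., 1.]])  # set 1
b = np.array([[1., 0., 0.], [1., 1., 2.]])  # set 2

D = cdist(a, b)        # D[i, j] = distance between a[i] and b[j]
paired = D.diagonal()  # distance between a[k] and b[k]
```

For a million paired points, though, computing the full N x N matrix just to take its diagonal wastes memory; the direct subtraction shown above is the better fit.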
Although this solution also uses numpy, np.linalg.norm could be another option.
Say you have one point p0 = np.array([1,2,3]) and a second point p1 = np.array([4,5,6]). Then the quickest way to find the distance between the two would be:
import numpy as np
dist = np.linalg.norm(p0 - p1)
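The same call handles whole sets of paired points at once if you pass an axis, which reproduces the per-pair distances from the question (the sample points here are made up):

```python
import numpy as np

p1 = np.array([[1., 2., 3.], [0., 0., 0.]])  # set 1, one point per row
p2 = np.array([[4., 5., 6.], [1., 1., 1.]])  # set 2, one point per row

d = np.linalg.norm(p1 - p2, axis=1)  # one distance per row (pair)
```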
# Use the distance function in Cartesian 3D space:
import math

def distance(x1, y1, z1, x2, y2, z2):
    return math.sqrt((x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2)
You can use math.dist(A, B) (available since Python 3.8), with A and B each being a sequence of coordinates.
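For example, math.dist computes the Euclidean distance between two single points of any dimension, though unlike the numpy solutions it does not operate on whole arrays of points at once:

```python
import math

A = (1, 2, 3)
B = (4, 5, 6)
d = math.dist(A, B)  # sqrt(27), about 5.196
```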