I have the following matrix, which represents some points:
points = np.random.uniform(30, 50, size = (5,3))
# gives array([[ 45.98139489,  40.27871523,  41.91617071],
#              [ 41.1404787 ,  34.56098247,  35.91171313],
#              [ 34.46375465,  49.89872417,  39.04753134],
#              [ 49.28112722,  32.01837698,  32.83394596],
#              [ 48.96623168,  33.58271833,  33.54690091]])
Now each column is a coordinate. Each column has values within the range [30,50]. I want to map each column to different intervals. I know how to map points from an interval to another thanks to this question:
Algorithm to map an interval to a smaller interval
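For reference, that mapping takes a value x from a source interval [a, b] to a target interval [c, d]; a minimal scalar sketch of it (the helper name remap is just illustrative) looks like this:
def remap(x, a, b, c, d):
    # Linearly map x from the source interval [a, b] to the target interval [c, d]
    return (x - a) * (d - c) / (b - a) + c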
But I want something very fast that maps each column (possibly) to a different interval. For instance, suppose we have
intervals = np.array([[0, 10], [3,7], [100,200]])
Or we could keep them in separate arrays, e.g. xinterval = np.array([0, 10]); it doesn't matter.
My Slow try
I collected all the intervals in intervals and then applied the transformation to each column in a loop:
for col, interval in zip(range(points.shape[1]), intervals):
    points[:, col] = ((points[:,col] - min(points[:,col])) * (interval[1] - interval[0]) / (max(points[:,col]) - min(points[:,col]))) + interval[0]
Where, for simplicity, I have used each column's min-max range as the source interval; I could also just have used 30 and 50, like so:
for col, interval in zip(range(points.shape[1]), intervals):
    points[:, col] = ((points[:,col] - 30) * (interval[1] - interval[0]) / (50 - 30)) + interval[0]
Is there a faster way, without using a loop?
Straight-forward broadcasting
Here's one vectorized way making use of broadcasting -
mins = points.min(0)
a1 = (points - mins)* (intervals[:,1]-intervals[:,0])
a2 = points.max(0) - mins
out = a1/a2 + intervals[:,0]
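As a quick sanity check, each column of out should now span exactly its target interval, because the per-column minimum and maximum map onto the interval endpoints:
np.allclose(out.min(0), intervals[:,0])  # True
np.allclose(out.max(0), intervals[:,1])  # True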
Improvement: less broadcasting
Looking closely, we are performing broadcasting in a few places. Though broadcasting is a very efficient way to vectorize things, it still has some cost. We can improve on it by rearranging things so that the number of broadcasting steps drops to just two, compared to four before.
Hence, the modified one would be -
mins = points.min(0)
scale = (intervals[:,1]-intervals[:,0])/(points.max(0) - mins)
offset = mins*scale - intervals[:,0]
out = points*scale - offset
I. Broadcasting steps before:
Two at: (points - mins) * (intervals[:,1] - intervals[:,0]).
Two at: a1/a2 + intervals[:,0].
II. Broadcasting steps after the improvement:
One at points*scale and one at the subtraction that follows.
Runtime test
Approaches -
def app1(points, intervals):
    mins = points.min(0)
    a1 = (points - mins) * (intervals[:,1] - intervals[:,0])
    a2 = points.max(0) - mins
    out = a1/a2 + intervals[:,0]
    return out

def app2(points, intervals):
    mins = points.min(0)
    scale = (intervals[:,1] - intervals[:,0]) / (points.max(0) - mins)
    offset = mins*scale - intervals[:,0]
    out = points*scale - offset
    return out
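A quick correctness check (worth running before timing) confirms that the two variants agree up to floating-point error:
pts = np.random.uniform(30, 50, size=(5, 3))
ivs = np.array([[0, 10], [3, 7], [100, 200]])
np.allclose(app1(pts, ivs), app2(pts, ivs))  # True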
Timings -
In [104]: points = np.array([[ 45.98139489, 40.27871523, 41.91617071],
...: [ 41.1404787 , 34.56098247, 35.91171313],
...: [ 34.46375465, 49.89872417, 39.04753134],
...: [ 49.28112722, 32.01837698, 32.83394596],
...: [ 48.96623168, 33.58271833, 33.54690091]])
...: points = np.repeat(points, 100000,axis=0)
...:
...: intervals = np.array([[0, 10], [3,7], [100,200]])
...:
In [105]: %timeit app1(points, intervals)
10 loops, best of 3: 26.3 ms per loop
In [106]: %timeit app2(points, intervals)
100 loops, best of 3: 17.9 ms per loop
Related
I'm trying to apply an operation to each pair of rows separated by a distance d, and get the minimum (also maximum and mean) of the results for each d from 0 to N-1. For example, if Data=[1,2,3,4] and the operation is addition, Minimum=[2,3,4,5], Maximum=[8,7,6,5], and Mean=[5,5,5,5].
I have the following code that uses the ratio as the operation; it works OK for a small data size but takes more than 10 seconds for 10,000 rows. Since I will be working with data that can have 1,000,000 rows, what would be a better way to do this?
import pandas as pd
import numpy as np
low=250
high=5000
length=10
x=pd.DataFrame({'A': np.random.uniform(low, high=high, size=length)})
x['mean']=x['min']=x['max']=x['A'].copy()
for i in range(0, len(x)):
    ratio = x['A'] / x['A'].shift(i)
    x['mean'].iloc[[i]] = ratio.mean()
    x['max'].iloc[[i]] = ratio.max()
    x['min'].iloc[[i]] = ratio.min()
print(x)
Approach #1: For efficiency, and considering that you might have up to 1,000,000 rows, I would suggest working on the underlying array data in a similar-looking loopy solution, while using efficient array slicing so that each iteration operates on a gradually shrinking chunk of data. Together, these two changes should bring a noticeable performance boost.
Thus, an implementation would be -
a = x['A'].values
N = len(a)
out = np.zeros((N,4))
out[:,0] = a
for i in range(N):
    ratio = a[i:] / a[:N-i]
    out[i,1] = ratio.mean()
    out[i,2] = ratio.min()
    out[i,3] = ratio.max()
df_out = pd.DataFrame(out, columns=('A','mean','min','max'))
Approach #2: For a smaller data size, we can use a vectorized solution that creates a square 2D array of shape (N,N) holding shifted versions of the input data. We then mask out the upper triangular region with NaNs and finally use numpy.nanmean, numpy.nanmin and numpy.nanmax to perform the equivalents of the pandas mean, min and max operations -
a = x['A'].values
N = len(a)
r = np.arange(N)
shifting_idx = (r[:,None] - r)%N
vals = a[:,None]/a[shifting_idx]
upper_tri_mask = r[:,None] < r
vals[upper_tri_mask] = np.nan
out = np.zeros((N,4))
out[:,0] = a
out[:,1] = np.nanmean(vals, 0)
out[:,2] = np.nanmin(vals, 0)
out[:,3] = np.nanmax(vals, 0)
df_out = pd.DataFrame(out, columns=('A','mean','min','max'))
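One thing to keep in mind is that this approach materializes the full (N, N) vals array, so memory use grows quadratically with N; that is why it is suggested only for smaller sizes:
N = 10000
N * N * 8 / 1e9  # ~0.8 GB just for a float64 (N, N) array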
Runtime test
Approaches -
def org_app(x):
    x['mean'] = x['min'] = x['max'] = x['A'].copy()
    for i in range(0, len(x)):
        ratio = x['A'] / x['A'].shift(i)
        x['mean'].iloc[[i]] = ratio.mean()
        x['max'].iloc[[i]] = ratio.max()
        x['min'].iloc[[i]] = ratio.min()
    return x

def app1(x):
    a = x['A'].values
    N = len(a)
    out = np.zeros((N,4))
    out[:,0] = a
    for i in range(N):
        ratio = a[i:] / a[:N-i]
        out[i,1] = ratio.mean()
        out[i,2] = ratio.min()
        out[i,3] = ratio.max()
    return pd.DataFrame(out, columns=('A','mean','min','max'))
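As a quick spot-check before timing, an entry of the vectorized result can be compared against a directly computed shift (the index 3 here is just illustrative):
x_small = pd.DataFrame({'A': np.random.uniform(250, 5000, size=10)})
res = app1(x_small)
a = x_small['A'].values
np.isclose(res.loc[3, 'mean'], (a[3:] / a[:-3]).mean())  # True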
Timings -
In [3]: low=250
...: high=5000
...: length=10000
...: x=pd.DataFrame({'A': np.random.uniform(low, high=high, size=length)})
...:
In [4]: %timeit app1(x)
1 loop, best of 3: 185 ms per loop
In [5]: %timeit org_app(x)
1 loop, best of 3: 8.59 s per loop
In [6]: 8590.0/185
Out[6]: 46.432432432432435
A 46x+ speedup on 10,000 rows of data!
I have code that sequentially checks whether every pair of Cartesian coordinates in my DataFrame falls into certain enclosed geometric areas. But it is rather slow, I suspect because it is not vectorized. Here is an example:
import pandas as pd
from matplotlib.patches import Rectangle

r1 = Rectangle((0,0), 10, 10)
r2 = Rectangle((50,50), 10, 10)
df = pd.DataFrame([[1,2], [-1,5], [51,52]], columns=['x', 'y'])

for j in range(df.shape[0]):
    coordinates = df.x.iloc[j], df.y.iloc[j]
    if r1.contains_point(coordinates):
        df.loc[j, 'location'] = 0
    elif r2.contains_point(coordinates):
        df.loc[j, 'location'] = 1
Can someone propose an approach for speed-up?
It's better to extract the extents of the rectangular patches into arrays and then work directly on the coordinate data with NumPy.
def seqcheck_vect(df):
    xy = df[["x", "y"]].values
    e1 = np.asarray(r1.get_extents())
    e2 = np.asarray(r2.get_extents())
    r1m1, r1m2 = np.min(e1), np.max(e1)
    r2m1, r2m2 = np.min(e2), np.max(e2)
    out = np.where(((xy >= r1m1) & (xy <= r1m2)).all(axis=1), 0,
                   np.where(((xy >= r2m1) & (xy <= r2m2)).all(axis=1), 1, np.nan))
    return df.assign(location=out)
For the given sample, the function assigns locations 0, NaN and 1 to the three points.
benchmarks:
def loopy_version(df):
    for j in range(df.shape[0]):
        coordinates = df.x.iloc[j], df.y.iloc[j]
        if r1.contains_point(coordinates):
            df.loc[j, "location"] = 0
        elif r2.contains_point(coordinates):
            df.loc[j, "location"] = 1
        else:
            pass
    return df
testing on a DF of 10K rows:
np.random.seed(42)
df = pd.DataFrame(np.random.randint(0, 100, (10000,2)), columns=list("xy"))
# check if both give same outcome
loopy_version(df).equals(seqcheck_vect(df))
True
%timeit loopy_version(df)
1 loop, best of 3: 3.8 s per loop
%timeit seqcheck_vect(df)
1000 loops, best of 3: 1.73 ms per loop
So the vectorized approach is approximately 2,200 times faster than the loopy one.
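Note that the min/max trick above collapses each rectangle's extents to a single scalar range, which works here because each rectangle's x-range and y-range are identical. A sketch that keeps the x and y bounds separate (the name seqcheck_vect_xy is just illustrative) generalizes the same idea to arbitrary axis-aligned rectangles:
def seqcheck_vect_xy(df):
    xy = df[["x", "y"]].values
    e1 = np.asarray(r1.get_extents())  # [[x0, y0], [x1, y1]]
    e2 = np.asarray(r2.get_extents())
    in1 = ((xy >= e1[0]) & (xy <= e1[1])).all(axis=1)
    in2 = ((xy >= e2[0]) & (xy <= e2[1])).all(axis=1)
    return df.assign(location=np.where(in1, 0, np.where(in2, 1, np.nan)))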
I am trying to calculate a distance matrix for a long list of locations identified by latitude & longitude, using the Haversine formula implemented as a function that takes two coordinate tuples and returns the distance:
def haversine(point1, point2, miles=False):
    """ Calculate the great-circle distance between two points on the Earth's surface.
    :input: two 2-tuples, containing the latitude and longitude of each point
            in decimal degrees.
    Example: haversine((45.7597, 4.8422), (48.8567, 2.3508))
    :output: Returns the distance between the two points.
             The default unit is kilometers. Miles can be returned
             if the ``miles`` parameter is set to True.
    """
I can calculate the distance between all points using a nested for loop as follows:
data.head()
id coordinates
0 1 (16.3457688674, 6.30354512503)
1 2 (12.494749307, 28.6263955635)
2 3 (27.794615136, 60.0324947881)
3 4 (44.4269923769, 110.114216113)
4 5 (-69.8540884125, 87.9468778773)
using a simple function:
distance = {}

def haver_loop(df):
    for i, point1 in df.iterrows():
        distance[i] = []
        for j, point2 in df.iterrows():
            distance[i].append(haversine(point1.coordinates, point2.coordinates))
    return pd.DataFrame.from_dict(distance, orient='index')
But this takes quite a while given the quadratic time complexity, running at around 20 s for 500 points, and I have a much longer list. This has me looking at vectorization, and I've come across numpy.vectorize, but I can't figure out how to apply it in this context.
From the haversine function's definition, it looks pretty parallelizable. So, using one of the best vectorization tools NumPy has, namely broadcasting, and replacing the math functions with their NumPy ufunc equivalents, here's one vectorized solution -
# Get data as a Nx2 shaped NumPy array
data = np.array(df['coordinates'].tolist())
# Convert to radians
data = np.deg2rad(data)
# Extract col-1 and 2 as latitudes and longitudes
lat = data[:,0]
lng = data[:,1]
# Elementwise differences for latitudes & longitudes
diff_lat = lat[:,None] - lat
diff_lng = lng[:,None] - lng
# Finally calculate the haversine distances
d = np.sin(diff_lat/2)**2 + np.cos(lat[:,None])*np.cos(lat) * np.sin(diff_lng/2)**2
out = 2 * 6371 * np.arcsin(np.sqrt(d))  # distance matrix in kilometers
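If a labeled distance matrix is more convenient than a plain array, the result can be wrapped in a DataFrame (assuming the id column should label both axes):
dist_df = pd.DataFrame(out, index=df['id'].values, columns=df['id'].values)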
Runtime tests -
The other solution based on np.vectorize has shown some promise of a performance improvement over the original code, so this section compares the posted broadcasting-based approach against that one.
Function definitions -
def vectorize_based(df):
    haver_vec = np.vectorize(haversine, otypes=[np.int16])
    return df.groupby('id').apply(lambda x: pd.Series(haver_vec(df.coordinates, x.coordinates)))

def broadcasting_based(df):
    data = np.array(df['coordinates'].tolist())
    data = np.deg2rad(data)
    lat = data[:,0]
    lng = data[:,1]
    diff_lat = lat[:,None] - lat
    diff_lng = lng[:,None] - lng
    d = np.sin(diff_lat/2)**2 + np.cos(lat[:,None])*np.cos(lat) * np.sin(diff_lng/2)**2
    return 2 * 6371 * np.arcsin(np.sqrt(d))
Timings -
In [123]: # Input
...: length = 500
...: d1 = np.random.uniform(-90, 90, length)
...: d2 = np.random.uniform(-180, 180, length)
...: coords = tuple(zip(d1, d2))
...: df = pd.DataFrame({'id':np.arange(length), 'coordinates':coords})
...:
In [124]: %timeit vectorize_based(df)
1 loops, best of 3: 1.12 s per loop
In [125]: %timeit broadcasting_based(df)
10 loops, best of 3: 68.7 ms per loop
You would provide your function as an argument to np.vectorize(), and could then use it as an argument to pandas.groupby.apply as illustrated below:
haver_vec = np.vectorize(haversine, otypes=[np.int16])
distance = df.groupby('id').apply(lambda x: pd.Series(haver_vec(df.coordinates, x.coordinates)))
For instance, with sample data as follows:
length = 500
df = pd.DataFrame({'id':np.arange(length), 'coordinates':tuple(zip(np.random.uniform(-90, 90, length), np.random.uniform(-180, 180, length)))})
compare for 500 points:
def haver_vect(data):
    distance = data.groupby('id').apply(lambda x: pd.Series(haver_vec(data.coordinates, x.coordinates)))
    return distance
%timeit haver_loop(df): 1 loops, best of 3: 35.5 s per loop
%timeit haver_vect(df): 1 loops, best of 3: 593 ms per loop
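One caveat about otypes=[np.int16]: it makes np.vectorize cast every distance to a 16-bit integer, so the results are truncated to whole kilometers. If fractional distances matter, declare a floating-point output type instead:
haver_vec = np.vectorize(haversine, otypes=[np.float64])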
Start by getting all combinations using itertools.product:
import itertools
results = [(p1, p2, haversine(p1, p2)) for p1, p2 in itertools.product(points, repeat=2)]
That said, I'm not sure how fast it will be. This looks like it might be a duplicate of Python: speeding up geographic comparison.
Part of my Python program contains the following piece of code, where a new grid is calculated based on data found in the old grid.
The grid is a two-dimensional list of floats. The code uses three for-loops:
for t in xrange(0, t, step):
    for h in xrange(1, height-1):
        for w in xrange(1, width-1):
            new_gr[h][w] = gr[h][w] + gr[h][w-1] + gr[h-1][w] + t * gr[h+1][w-1] - 2 * (gr[h][w-1] + t * gr[h-1][w])
    gr = new_gr
return gr
The code is extremely slow for a large grid and a large time t.
I've tried to use Numpy to speed up this code, by substituting the inner loop
with:
J = np.arange(1, width-1)
new_gr[h][J] = gr[h][J] + gr[h][J-1] ...
But the results produced (the floats in the array) are about 10% smaller than
their list-calculation counterparts.
What loss of accuracy is to be expected when converting lists of floats to Numpy array of floats using np.array(pylist) and then doing a calculation?
How should I go about converting a triple for-loop to pretty and fast Numpy code? (or are there other suggestions for speeding up the code significantly?)
If gr is a list of floats, the first step if you are looking to vectorize with NumPy would be to convert gr to a NumPy array with np.array().
Next up, I am assuming that you have new_gr initialized with zeros of shape (height, width). The calculation performed by the two innermost loops is basically a 2D convolution, so you can use signal.convolve2d with an appropriate kernel. To decide on the kernel, look at the scaling factors applied to the neighbouring cells, arrange them into a 3 x 3 kernel and negate them to reproduce the calculation done in each iteration. Thus, you get a vectorized solution with the two innermost loops removed for better performance, like so -
import numpy as np
from scipy import signal

# Get the scaling factors and negate them to get the kernel
kernel = -np.array([[0, 1-2*t, 0], [-1, 1, 0], [t, 0, 0]])

# Initialize the output array, run the 2D convolution and set values into it
out = np.zeros((height, width))
out[1:-1, 1:-1] = signal.convolve2d(gr, kernel, mode='same')[1:-1, :-2]
Verify output and runtime tests
Define functions :
def org_app(gr, t):
    new_gr = np.zeros((height, width))
    for h in xrange(1, height-1):
        for w in xrange(1, width-1):
            new_gr[h][w] = gr[h][w] + gr[h][w-1] + gr[h-1][w] + t * gr[h+1][w-1] - 2 * (gr[h][w-1] + t * gr[h-1][w])
    return new_gr

def proposed_app(gr, t):
    kernel = -np.array([[0, 1-2*t, 0], [-1, 1, 0], [t, 0, 0]])
    out = np.zeros((height, width))
    out[1:-1, 1:-1] = signal.convolve2d(gr, kernel, mode='same')[1:-1, :-2]
    return out
Verify -
In [244]: # Inputs
...: gr = np.random.rand(40,50)
...: height,width = gr.shape
...: t = 1
...:
In [245]: np.allclose(org_app(gr,t),proposed_app(gr,t))
Out[245]: True
Timings -
In [246]: # Inputs
...: gr = np.random.rand(400,500)
...: height,width = gr.shape
...: t = 1
...:
In [247]: %timeit org_app(gr,t)
1 loops, best of 3: 2.13 s per loop
In [248]: %timeit proposed_app(gr,t)
10 loops, best of 3: 19.4 ms per loop
@Divakar, I tried a couple of variations on your org_app. The fully vectorized version is:
def org_app4(gr, t):
    new_gr = np.zeros((height, width))
    I = np.arange(1, height-1)[:,None]
    J = np.arange(1, width-1)
    new_gr[I,J] = gr[I,J] + gr[I,J-1] + gr[I-1,J] + t * gr[I+1,J-1] - 2 * (gr[I,J-1] + t * gr[I-1,J])
    return new_gr
While it runs at half the speed of your proposed_app, it is closer in style to the original, and thus may help with understanding how nested loops can be vectorized.
An important step is the conversion of I into a column array, so that together I,J index a block of values.
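A small illustration of that indexing step: broadcasting a column of row indices against a row of column indices selects a whole block at once.
gr = np.arange(12).reshape(3, 4)
I = np.arange(1, 3)[:,None]  # rows 1..2 as a column vector
J = np.arange(1, 3)          # columns 1..2
gr[I, J]                     # array([[ 5,  6], [ 9, 10]])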
In a U x U periodic domain, I simulate the dynamics of a 2D array whose entries denote x-y coordinates. At each time step, the "parent" entries are replaced by new coordinates selected from their normally distributed "offspring", keeping the array size the same. To illustrate:
import numpy as np
import random

np.random.seed(13)

def main(time_step=10):
    def dispersal(self, litter_size_):
        return np.random.multivariate_normal([self[0], self[1]],
                                             [[sigma**2*1, 0], [0, 1*sigma**2]], litter_size_) % U
    U = 10
    sigma = 2.
    parent = np.random.random(size=(4,2))*U
    for t in range(time_step):
        offspring = []
        for parent_id in range(len(parent)):
            litter_size = np.random.randint(1,4)  # 1-3 offspring produced per parent
            offspring.append(dispersal(parent[parent_id], litter_size))
        offspring = np.vstack(offspring)
        indices = np.arange(len(offspring))
        parent = offspring[np.random.choice(indices, 4, replace=False)]  # only 4 survive to parenthood
    return parent
However, the function is slow to run, as indicated by:
from timeit import timeit
timeit(main, number=10000)
which returns 40.13353896141052 seconds.
A quick check with cProfile identifies NumPy's multivariate_normal function as the bottleneck.
Is there a more efficient way to implement this operation?
Yeah, many NumPy functions have relatively high overhead when you call them on single numbers or tiny arrays, as multivariate_normal shows in this case. Because the number of offspring per parent is within the narrow range [1, 3], it's worthwhile to pre-compute the random samples. We can take samples around mean (0, 0) and, during the iteration, add the actual coordinates of the parents.
Also, the inner loop can be vectorized, resulting in:
def main_2(time_step=10, n_parent=4, max_offspring=3):
    U = 10
    sigma = 2.
    cov = [[sigma**2, 0], [0, sigma**2]]
    size = n_parent * max_offspring * time_step
    samples = np.random.multivariate_normal(np.zeros(2), cov, size)
    parents = np.random.rand(n_parent, 2) * U
    for _ in range(time_step):
        litter_size = np.random.randint(1, max_offspring+1, n_parent)
        n_offspring = litter_size.sum()
        parents = np.repeat(parents, litter_size, axis=0)
        offspring = (parents + samples[:n_offspring]) % U
        samples = samples[n_offspring:]
        parents = np.random.permutation(offspring)[:n_parent]
    return parents
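The interface is the same as before; a quick usage check:
np.random.seed(13)
parents = main_2(time_step=10)
parents.shape  # (4, 2): four surviving parents with x-y coordinates in the U x U domain
Note that size = n_parent * max_offspring * time_step pre-computes enough samples for the worst case in which every parent produces the maximum litter at every step, so the samples array can never run out.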
The timings I get are:
In [153]: timeit(main, number=1000)
Out[153]: 9.255848071099535
In [154]: timeit(main_2, number=1000)
Out[154]: 0.870663221881841