I have a data frame that contains two columns with numbers and a third column with repeating letters. Let's say something like this:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randint(0,100,size=(100, 2)), columns=list('xy'))
letters = ['A', 'B', 'C', 'D'] * int(len(df.index) / 4)
df['letters'] = letters
I want to create two new columns that compare the numbers in columns 'x' and 'y' to the average of their corresponding letter group. For example, one new column will contain 10 (if the value is at least 20% above the group mean), -10 (if it is at least 20% below the mean), or 0 otherwise.
I wrote the function below:
def scoreFunHigh(dataField, mean, diff, multip):
    upper = mean * (1 + diff)
    lower = mean * (1 - diff)
    if dataField > upper:
        return multip * 1
    elif dataField < lower:
        return multip * (-1)
    else:
        return 0
And then created the column as follows:
letterMeanX = df.groupby('letters')['x'].transform(np.nanmean)
df['letter x score'] = np.vectorize(scoreFunHigh)(df['x'], letterMeanX, 0.2, 10)
letterMeanY = df.groupby('letters')['y'].transform(np.nanmean)
df['letter y score'] = np.vectorize(scoreFunHigh)(df['y'], letterMeanY, 0.3, 5)
This works. However, I am getting the below runtime warning:
C:\Users\ ... \Python\Python38\lib\site-packages\numpy\lib\function_base.py:2167: RuntimeWarning: invalid value encountered in ? (vectorized)
outputs = ufunc(*inputs)
(Please note that if I run the exact same code as above, I do not get the warning. My real dataframe is much larger, and I am using several such functions for different data.)
What is the problem here? Is there a better way to set this up?
Thank you very much
The sample you give does not produce the RuntimeWarning, so we can't do much to help you diagnose it. I don't know whether a fuller traceback would provide any useful information.
But let's look at the calculations:
In [70]: np.vectorize(scoreFunHigh)(df['x'], letterMeanX, 0.2, 10)
Out[70]:
array([-10, 0, 10, -10, 0, 0, -10, -10, 10, 0, 0, 10, -10,
-10, 0, 10, 10, -10, 0, 10, -10, -10, -10, 10, 10, -10,
...
-10, 10, -10, 0, 0, 10, 10, 0, 10])
and with the df assignment:
In [74]: df['letter x score'] = np.vectorize(scoreFunHigh)(df['x'], letterMeanX,
...: 0.2, 10)
...:
In [75]: df
Out[75]:
x y letters letter x score
0 33 98 A -10
1 38 49 B 0
2 78 46 C 10
3 31 46 D -10
4 41 74 A 0
.. .. .. ... ...
95 51 4 D 0
96 70 4 A 10
97 74 74 B 10
98 54 70 C 0
99 87 44 D 10
Often np.vectorize gives problems because of the otypes issue (read the docs); if the trial calculation produces an integer, then the return dtype is set to that, giving problems if other values are floats. However, in this case the result can only be one of three values, [-10, 0, 10] (determined by the last parameter, multip).
The warning, such as you provide, suggests that some value(s) in the larger dataframe are wrong for the calculations in your scoreFunHigh function. But the warning doesn't give enough detail to say what.
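If you want to rule those two things out, here is a minimal diagnostic sketch (assuming the larger frame may contain NaNs, which commonly trigger this kind of warning):
# pin the return dtype instead of letting np.vectorize infer it from the first result
score = np.vectorize(scoreFunHigh, otypes=[float])(df['x'], letterMeanX, 0.2, 10)
# NaNs in the values or in the group means would feed NaN into the > / < comparisons
print(df['x'].isna().sum(), letterMeanX.isna().sum())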
It is relatively easy to apply real numpy vectorization to this problem, since it depends on two Series, df['x'] and letterMeanX, and two scalars.
In [111]: letterMeanX = df.groupby('letters')['x'].transform(np.nanmean)
In [112]: letterMeanX.shape
Out[112]: (100,)
In [113]: df['x'].shape
Out[113]: (100,)
In [114]: upper = letterMeanX *(1+0.2)
In [115]: lower = letterMeanX *(1-0.2)
In [116]: res = np.zeros(letterMeanX.shape,int)
In [117]: res[df['x']>upper] = 10
In [118]: res[df['x']<lower] = -10
In [119]: np.allclose(res, Out[70])
Out[119]: True
In other words, rather than applying the upper/lower comparison row by row, it applies it to the whole Series. It is still iterating, but in compiled numpy methods, which are much faster. np.vectorize is just a wrapper around an iteration; it still calls your Python function once for each row. Hopefully the performance disclaimer in the np.vectorize docs is clear enough.
Consider calling your function directly, with a slight adjustment to handle the conditional logic using numpy.select (or numpy.where). With this approach no Python-level loops are run, only vectorized operations on the Series and scalar parameters:
def scoreFunHigh(dataField, mean, diff, multip):
    conds = [dataField > mean * (1 + diff),
             dataField < mean * (1 - diff)]
    vals = [multip * 1, multip * (-1)]
    return np.select(conds, vals, default=0)
letterMeanX = df.groupby('letters')['x'].transform(np.nanmean)
df['letter x score'] = scoreFunHigh(df['x'], letterMeanX, 0.2, 10)
letterMeanY = df.groupby('letters')['y'].transform(np.nanmean)
df['letter y score'] = scoreFunHigh(df['y'], letterMeanY, 0.3, 5)
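Since numpy.where was mentioned as an alternative, the same logic can be written with nested np.where; a sketch (the name scoreFunHighWhere is just for illustration):
def scoreFunHighWhere(dataField, mean, diff, multip):
    # outer where handles the "above upper bound" case, inner where the "below lower bound" case
    return np.where(dataField > mean * (1 + diff), multip,
                    np.where(dataField < mean * (1 - diff), -multip, 0))

df['letter x score'] = scoreFunHighWhere(df['x'], letterMeanX, 0.2, 10)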
Here is a version that doesn't use np.vectorize:
def scoreFunHigh(val, mean, diff, multip):
    upper = mean * (1 + diff)
    lower = mean * (1 - diff)
    if val > upper:
        return multip * 1
    elif val < lower:
        return multip * (-1)
    else:
        return 0
letterMeanX = df.groupby('letters')['x'].apply(lambda x: np.nanmean(x))
df['letter x score'] = df.apply(
    lambda row: scoreFunHigh(row['x'], letterMeanX[row['letters']], 0.2, 10), axis=1)
Output
x y letters letter x score
0 52 76 A 0
1 90 99 B 10
2 87 43 C 10
3 44 73 D 0
4 49 3 A 0
.. .. .. ... ...
95 16 51 D -10
96 38 3 A 0
97 43 47 B 0
98 58 39 C 0
99 41 26 D 0
Related
I have a time-series A holding several values. I need to obtain a series B that is defined algebraically as follows:
B[t] = a * A[t] + b * B[t-1]
where we can assume B[0] = 0, and a and b are real numbers.
Is there any way to do this type of recursive computation in Pandas? Or do I have no choice but to loop in Python as suggested in this answer?
As an example of input:
A = pd.Series(np.random.randn(10,))
0 -0.310354
1 -0.739515
2 -0.065390
3 0.214966
4 -0.605490
5 1.293448
6 -3.068725
7 -0.208818
8 0.930881
9 1.669210
As I noted in a comment, you can use scipy.signal.lfilter. In this case (assuming A is a one-dimensional numpy array), all you need is:
B = lfilter([a], [1.0, -b], A)
Here's a complete script:
import numpy as np
from scipy.signal import lfilter
np.random.seed(123)
A = np.random.randn(10)
a = 2.0
b = 3.0
# Compute the recursion using lfilter.
# [a] and [1, -b] are the coefficients of the numerator and
# denominator, resp., of the filter's transfer function.
B = lfilter([a], [1, -b], A)
print(B)

# Compare to a simple loop.
B2 = np.empty(len(A))
for k in range(0, len(B2)):
    if k == 0:
        B2[k] = a*A[k]
    else:
        B2[k] = a*A[k] + b*B2[k-1]
print(B2)
print("max difference:", np.max(np.abs(B2 - B)))
The output of the script is:
[ -2.17126121e+00 -4.51909273e+00 -1.29913212e+01 -4.19865530e+01
-1.27116859e+02 -3.78047705e+02 -1.13899647e+03 -3.41784725e+03
-1.02510099e+04 -3.07547631e+04]
[ -2.17126121e+00 -4.51909273e+00 -1.29913212e+01 -4.19865530e+01
-1.27116859e+02 -3.78047705e+02 -1.13899647e+03 -3.41784725e+03
-1.02510099e+04 -3.07547631e+04]
max difference: 0.0
Another example, in IPython, using a pandas DataFrame instead of a numpy array:
If you have
In [12]: df = pd.DataFrame([1, 7, 9, 5], columns=['A'])
In [13]: df
Out[13]:
A
0 1
1 7
2 9
3 5
and you want to create a new column, B, such that B[k] = A[k] + 2*B[k-1] (with B[k] == 0 for k < 0), you can write
In [14]: df['B'] = lfilter([1], [1, -2], df['A'].astype(float))
In [15]: df
Out[15]:
A B
0 1 1
1 7 9
2 9 27
3 5 59
I have a set of objects and their positions over time. I would like to get the average distance between objects for each time point. An example dataframe is as follows:
time = [0, 0, 0, 1, 1, 2, 2]
x = [216, 218, 217, 280, 290, 130, 132]
y = [13, 12, 12, 110, 109, 3, 56]
car = [1, 2, 3, 1, 3, 4, 5]
df = pd.DataFrame({'time': time, 'x': x, 'y': y, 'car': car})
df
   time    x    y  car
0     0  216   13    1
1     0  218   12    2
2     0  217   12    3
3     1  280  110    1
4     1  290  109    3
5     2  130    3    4
6     2  132   56    5
The end result I would like to have is:
df2
average distance
between cars
time
0 1.55
1 10.05
2 53.04
Any idea how to proceed? I've been trying to apply the scipy.spatial.distance functions to the dataframe, but I'm not sure how to apply them to df.groupby('time') and then get the mean of all those distances.
Any help appreciated!
You could pass an array of the points to scipy.spatial.distance.pdist and it will calculate all pair-wise distances between points Xi and Xj for i < j. Then take the mean.
import numpy as np
from scipy import spatial
df.groupby('time').apply(lambda x: spatial.distance.pdist(np.array(list(zip(x.x, x.y)))).mean())
Outputs:
time
0 1.550094
1 10.049876
2 53.037722
dtype: float64
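The list(zip(...)) conversion isn't strictly needed; selecting the two columns gives pdist the (n, 2) array it expects directly. An equivalent, slightly shorter spelling (a sketch):
df.groupby('time').apply(lambda g: spatial.distance.pdist(g[['x', 'y']].to_numpy()).mean())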
For me, using apply or a for loop does not make much difference:
l1 = []
l2 = []
for y, x in df.groupby('time'):
    v = np.triu(spatial.distance.cdist(x[['x', 'y']].values, x[['x', 'y']].values), k=0)
    v = np.ma.masked_equal(v, 0)
    l2.append(np.mean(v))
    l1.append(y)
pd.DataFrame({'ave': l2}, index=l1)
Out[250]:
ave
0 1.550094
1 10.049876
2 53.037722
Building this up from first principles:
For each point at index n, it is necessary to compute the distance to all the points with index > n.
If the distance between two points is given by the formula:
np.sqrt((x0 - x1)**2 + (y0 - y1)**2)
then for an array of points in a dataframe, we can get all the distances and then calculate their mean:
distances = []
for i in range(len(df)-1):
    distances += np.sqrt((df.x[i+1:] - df.x[i])**2 + (df.y[i+1:] - df.y[i])**2).tolist()
np.mean(distances)
Expressing the same logic using pd.concat and a couple of helper functions:
def diff_sq(x, i):
    return (x.iloc[i+1:] - x.iloc[i])**2

def dist_df(x, y, i):
    d_sq = diff_sq(x, i) + diff_sq(y, i)
    return np.sqrt(d_sq)

def avg_dist(df):
    return pd.concat([dist_df(df.x, df.y, i) for i in range(len(df)-1)]).mean()
Then it is possible to use the avg_dist function with groupby:
df.groupby('time').apply(avg_dist)
# outputs:
time
0 1.550094
1 10.049876
2 53.037722
dtype: float64
You could also use the itertools package to define your own function, as follows:
import itertools
import numpy as np
def combinations(series):
    l = list()
    for item in itertools.combinations(series, 2):
        l.append((item[0] - item[1])**2)
    return l

df2 = df.groupby('time').agg(combinations)
df2['avg_distance'] = [np.mean(np.sqrt(pd.Series(df2.iloc[k, 0]) +
                                       pd.Series(df2.iloc[k, 1]))) for k in range(len(df2))]
df2.avg_distance.to_frame()
Then, the output is:
avg_distance
time
0 1.550094
1 10.049876
2 53.037722
Suppose
s = pd.Series(range(50))
0 0
1 1
2 2
3 3
...
48 48
49 49
How can I get a new series that consists of the sum of every n rows?
The expected result is like below, when n = 5:
0 10
1 35
2 60
3 85
...
8 210
9 235
Of course this can be accomplished with loc or iloc and a Python loop, but I believe it could be done simply in a Pandas way.
Also, this is a very simplified example; I don't expect an explanation of the sequence :). The actual data series I'm working with has a time index and the number of events that occurred in each second as the values.
GroupBy.sum
N = 5
s.groupby(s.index // N).sum()
0 10
1 35
2 60
3 85
4 110
5 135
6 160
7 185
8 210
9 235
dtype: int64
Chunk the index into groups of 5 and group accordingly.
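This relies on the default RangeIndex. If the index is something else (the question mentions a time index), grouping by a purely positional key gives the same result; a sketch:
s.groupby(np.arange(len(s)) // N).sum()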
numpy.reshape + sum
If the size is a multiple of N (or 5), you can reshape and add:
s.values.reshape(-1, N).sum(1)
# array([ 10, 35, 60, 85, 110, 135, 160, 185, 210, 235])
numpy.add.at
b = np.zeros(len(s) // N)
np.add.at(b, s.index // N, s.values)
b
# array([ 10., 35., 60., 85., 110., 135., 160., 185., 210., 235.])
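Since the actual data is described as per-second event counts on a time index, the same chunked sum can also be expressed as a time-based resample; a sketch that assumes a DatetimeIndex at one-second resolution (the date and series here are hypothetical stand-ins):
idx = pd.date_range('2021-01-01', periods=50, freq='s')
ts = pd.Series(range(50), index=idx)
ts.resample('5s').sum()   # one summed value per 5-second window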
The most efficient solution I can think of is f1() in my example below. It is orders of magnitude faster than using the groupby in the other answer.
Note that f1() doesn't work when the length of the array is not an exact multiple, e.g. if you want to sum a 3-item array every 2 items.
For those cases, you can use f1v2():
f1v2( [0,1,2,3,4] ,2 ) = [1,5,4]
My code is below. I have used timeit for the comparisons:
import timeit
import numpy as np
import pandas as pd
def f1(a, x):
    if isinstance(a, pd.Series):
        a = a.to_numpy()
    return a.reshape((int(a.shape[0] / x), int(x))).sum(1)

def f2(myarray, x):
    return [sum(myarray[n: n + x]) for n in range(0, len(myarray), x)]

def f3(myarray, x):
    s = pd.Series(myarray)
    out = s.groupby(s.index // x).sum()
    return out

def f1v2(a, x):
    # like f1, but handles a length that is not an exact multiple of x:
    # the leftover elements are summed into one extra group at the end
    if isinstance(a, pd.Series):
        a = a.to_numpy()
    mod = a.shape[0] % x
    if mod != 0:
        excl = a[-mod:]
        keep = a[: len(a) - mod]
        out = keep.reshape((int(keep.shape[0] / x), int(x))).sum(1)
        out = np.hstack((out, excl.sum()))
    else:
        out = a.reshape((int(a.shape[0] / x), int(x))).sum(1)
    return out
a = np.arange(0,1e6)
out1 = f1(a,2)
out2 = f2(a,2)
out3 = f3(a,2)
t1 = timeit.Timer( "f1(a,2)" , globals = globals() ).repeat(repeat = 5, number = 2)
t1v2 = timeit.Timer( "f1v2(a,2)" , globals = globals() ).repeat(repeat = 5, number = 2)
t2 = timeit.Timer( "f2(a,2)" , globals = globals() ).repeat(repeat = 5, number = 2)
t3 = timeit.Timer( "f3(a,2)" , globals = globals() ).repeat(repeat = 5, number = 2)
resdf = pd.DataFrame(index = ['min time'])
resdf['f1'] = [min(t1)]
resdf['f1v2'] = [min(t1v2)]
resdf['f2'] = [min(t2)]
resdf['f3'] = [min(t3)]
#the docs explain why it makes more sense to take the min than the avg
resdf = resdf.transpose()
resdf['% difference vs fastest'] = (resdf / resdf.min() - 1) * 100
b = np.array( [0,1,2,4,5,6,7] )
out1v2 = f1v2(b,2)
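A quick sanity check (a sketch) that the reshape-based versions agree with the groupby version and that the leftover element is handled:
assert np.allclose(f1(a, 2), f3(a, 2))   # exact-multiple case: f1 matches the groupby result
print(f1v2(b, 2))                        # [ 1  6 11  7] -- the leftover 7 becomes its own group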
What's the easiest way to sort evenly distributed values into a predefined number of groups?
data = {'impact':[10,30,20,10,90,60,50,40]}
df = pd.DataFrame(data,index=['a','b','c','d','e','f','g','h'])
print(df)
impact
a 10
b 30
c 20
d 10
e 90
f 60
g 50
h 40
numgroups = 4
group_targetsum = round(df.impact.sum() / numgroups, -1)
print(group_targetsum)
80.0
In the case above, I'd like to create 4 groups from df. The only criterion is that the sum of impact in each group should be approximately equal to group_targetsum; the sum can be above or below group_targetsum within a reasonable margin.
Ultimately, I'd like to separate these groups into their own dataframes, preserving index. Resulting in something like this:
print(df_a)
impact
e 90
print(df_b)
impact
c 20
f 60
print(df_c)
impact
a 10
d 10
g 50
print(df_d)
impact
b 30
h 40
The resulting dataframes don't need to be exactly this, as long as each group sums as close as possible to group_targetsum.
Assuming fairly similar values in the series, here's an approach using searchsorted -
In [150]: df
Out[150]:
impact
a 10
b 30
c 20
d 10
e 90
f 60
g 50
h 40
In [151]: a = df.values.ravel()
In [152]: shift_num = group_targetsum*np.arange(1,numgroups)
In [153]: idx = np.searchsorted(a.cumsum(), shift_num,'right')
In [154]: np.split(a, idx)
Out[154]: [array([10, 30, 20, 10]), array([90]), array([60]), array([50, 40])]
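The split above is on the raw values; since the question asks for separate dataframes with the index preserved, the same cut points can be applied to the frame itself (np.split accepts a DataFrame here too). A sketch:
df_groups = np.split(df, idx)   # list of sub-DataFrames, original index preserved
df_groups[1]
#    impact
# e      90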
Conceptually we'd just like to use a weighted version of qcut, but that doesn't exist in pandas at this time. Nevertheless, we can accomplish the same thing by combining cumsum and cut. The cumsum essentially gives us the weighting, and we then slice it up with cut.
(Note about 'csum_midpoint': without the midpoint adjustment, we'd end up assigning rows to groups based on where each row begins (in a cumulative sense), and hence end up with a bias towards binning into the higher groups. The midpoint adjustment can't make things perfectly even, but it helps. I believe this answer is mathematically the same as @Divakar's, with the exception of my use of the midpoint here and his use of 'right'.)
df['csum'] = df['impact'].cumsum()
df['csum_midpoint'] = (df.csum + df.csum.shift().fillna(0)) / 2.
df['grp'] = pd.cut( df.csum_midpoint, np.linspace(0,df['impact'].sum(),numgroups+1 ))
df.groupby( df.grp )['impact'].sum()
grp
(0, 77.5] 70
(77.5, 155] 90
(155, 232.5] 60
(232.5, 310] 90
Name: impact, dtype: int64
df
impact csum csum_midpoint grp
a 10 10 5.0 (0, 77.5]
b 30 40 25.0 (0, 77.5]
c 20 60 50.0 (0, 77.5]
d 10 70 65.0 (0, 77.5]
e 90 160 115.0 (77.5, 155]
f 60 220 190.0 (155, 232.5]
g 50 270 245.0 (232.5, 310]
h 40 310 290.0 (232.5, 310]
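To get the separate per-group dataframes the question asks for (index preserved), collect the groups and drop the helper columns; a sketch:
sub_dfs = [g[['impact']] for _, g in df.groupby('grp')]
sub_dfs[1]
#    impact
# e      90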
I have the following minimal code, which is too slow. For the 1000 rows I need, it takes about 2 minutes. I need it to run faster.
import time
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(0,1000,size=(1000, 4)), columns=list('ABCD'))
start_algorithm = time.time()
myunique = df['D'].unique()
for i in myunique:
    itemp = df[df['D'] == i]
    for j in myunique:
        jtemp = df[df['D'] == j]
I know that numpy can make it run much faster but keep in mind that I want to keep a part of the original dataframe (or array in numpy) for specific values of column 'D'. How can I improve its performance?
Avoid computing the sub-DataFrame df[df['D'] == i] more than once. The original code computes this len(myunique)**2 times. Instead you can compute this once for each i (that is, len(myunique) times in total), store the results, and then pair them together later. For example,
groups = [grp for di, grp in df.groupby('D')]
for itemp, jtemp in IT.product(groups, repeat=2):
    pass
import numpy as np
import pandas as pd
import itertools as IT

df = pd.DataFrame(np.random.randint(0,1000,size=(1000, 4)), columns=list('ABCD'))

def using_orig():
    myunique = df['D'].unique()
    for i in myunique:
        itemp = df[df['D'] == i]
        for j in myunique:
            jtemp = df[df['D'] == j]

def using_groupby():
    groups = [grp for di, grp in df.groupby('D')]
    for itemp, jtemp in IT.product(groups, repeat=2):
        pass
In [28]: %timeit using_groupby()
10 loops, best of 3: 63.8 ms per loop
In [31]: %timeit using_orig()
1 loop, best of 3: 2min 22s per loop
Regarding the comment:
I can easily replace itemp and jtemp with a=1 or print "Hello" so ignore that
The answer above addresses how to compute itemp and jtemp more efficiently. If itemp and jtemp are not central to your real calculation, then we would need to better understand what you really want to compute in order to suggest (if possible) a way to compute it faster.
Here's a vectorized approach to form the groups based on unique elements from "D" column -
# Sort the dataframe based on the sorted indices of column 'D'
df_sorted = df.iloc[df['D'].argsort()]
# In the sorted dataframe's 'D' column, find the shift/cut indices
# (places where elements change values, indicating change of groups).
# Cut the dataframe at those indices for the final groups with NumPy Split.
cut_idx = np.where(np.diff(df_sorted['D'])>0)[0]+1
df_split = np.split(df_sorted,cut_idx)
Sample testing
1] Form a sample dataframe with random elements:
>>> df = pd.DataFrame(np.random.randint(0,100,size=(5, 4)), columns=list('ABCD'))
>>> df
A B C D
0 68 68 90 39
1 53 99 20 85
2 64 76 21 19
3 90 91 32 36
4 24 9 89 19
2] Run the original code and print the results:
>>> myunique = df['D'].unique()
>>> for i in myunique:
...     itemp = df[df['D'] == i]
...     print(itemp)
...
A B C D
0 68 68 90 39
A B C D
1 53 99 20 85
A B C D
2 64 76 21 19
4 24 9 89 19
A B C D
3 90 91 32 36
3] Run the proposed code and print the results:
>>> df_sorted = df.iloc[df['D'].argsort()]
>>> cut_idx = np.where(np.diff(df_sorted['D'])>0)[0]+1
>>> df_split = np.split(df_sorted,cut_idx)
>>> for split in df_split:
...     print(split)
...
A B C D
2 64 76 21 19
4 24 9 89 19
A B C D
3 90 91 32 36
A B C D
0 68 68 90 39
A B C D
1 53 99 20 85