Determine the range of a value using a lookup table - Python

I have a df with numbers:
numbers = pd.DataFrame(columns=['number'], data=[
50,
65,
75,
85,
90
])
and a df with ranges (look up table):
ranges = pd.DataFrame(
columns=['range','range_min','range_max'],
data=[
['A',90,100],
['B',85,95],
['C',70,80]
]
)
I want to determine what range (in the second table) a value (in the first table) falls in. Please note that ranges overlap and the limits are inclusive.
Also note that while the example dataframe above has 3 ranges, this dataframe is generated dynamically and could have anywhere from 2 to 7 ranges.
Desired result:
numbers = pd.DataFrame(columns=['number','detected_range'], data=[
[50,'out_of_range'],
[65, 'out_of_range'],
[75,'C'],
[85,'B'],
[90,'overlap']  # could be A or B
])
I solved this with a for loop, but it doesn't scale well to the big dataset I am using, and the code is verbose and inelegant. See below:
numbers['detected_range'] = np.nan
for i, row1 in numbers.iterrows():
    for j, row2 in ranges.iterrows():
        if row1.number >= row2.range_min and row1.number <= row2.range_max:
            numbers.loc[i, 'detected_range'] = row2['range']
        elif (other cases...):
            ...and so on...
How could I do this?

You can use a bit of NumPy vectorized operations to generate masks, and use them to select your labels:
import numpy as np
a = numbers['number'].values # numpy array of numbers
r = ranges.set_index('range') # dataframe of min/max with labels as index
m1 = (a>=r['range_min'].values[:,None]).T # is number at or above each min (inclusive)
m2 = (a<=r['range_max'].values[:,None]).T # is number at or below each max (inclusive)
m3 = (m1&m2) # combine both conditions above
# NB. the two operations could be done without the intermediate variables m1/m2
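# e.g. m3 = (a>=r['range_min'].values[:,None]).T & (a<=r['range_max'].values[:,None]).T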
m4 = m3.sum(1) # how many matches?
# 0 -> out_of_range
# 2 -> overlap
# 1 -> get column name
# now we select the label according to the conditions
numbers['detected_range'] = np.select([m4==0, m4==2],              # out_of_range and overlap
                                      ['out_of_range', 'overlap'],
                                      # otherwise get column name
                                      default=np.take(r.index, m3.argmax(1))
                                      )
output:
number detected_range
0 50 out_of_range
1 65 out_of_range
2 75 C
3 85 B
4 90 overlap
Edit:
It works with any number of intervals in ranges.
Example output with an extra row ['D', 50, 51]:
number detected_range
0 50 D
1 65 out_of_range
2 75 C
3 85 B
4 90 overlap

Pandas IntervalIndex fits in here; however, since your ranges overlap, a for loop is the approach I'll use (for unique, non-overlapping intervals, IntervalIndex.get_indexer is a fast approach; see the sketch after the output below):
intervals = pd.IntervalIndex.from_arrays(ranges.range_min,
                                         ranges.range_max,
                                         closed='both')
box = []
for num in numbers.number:
    bools = intervals.contains(num)
    if bools.sum() == 1:
        box.append(ranges.range[bools].item())
    elif bools.sum() > 1:
        box.append('overlap')
    else:
        box.append('out_of_range')

numbers.assign(detected_range=box)
number detected_range
0 50 out_of_range
1 65 out_of_range
2 75 C
3 85 B
4 90 overlap
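For completeness, here is a minimal sketch of the get_indexer route mentioned above. It only applies when the lookup table has no overlaps (the no_overlap frame below is a hypothetical example, not the OP's data); get_indexer returns -1 for numbers that fall in no interval:
import numpy as np
import pandas as pd

# hypothetical non-overlapping lookup table
no_overlap = pd.DataFrame({'range': ['C', 'B'],
                           'range_min': [70, 85],
                           'range_max': [80, 95]})
iv = pd.IntervalIndex.from_arrays(no_overlap.range_min,
                                  no_overlap.range_max,
                                  closed='both')
pos = iv.get_indexer(numbers.number)   # position of the matching interval, -1 if none
numbers.assign(detected_range=np.where(pos == -1, 'out_of_range',
                                       no_overlap['range'].to_numpy()[pos]))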

Firstly, explode the ranges (using range_max + 1 so the inclusive upper limit is covered):
df1=ranges.assign(col1=ranges.apply(lambda ss:range(ss.range_min,ss.range_max+1),axis=1)).explode('col1')
df1
range range_min range_max col1
0 A 90 100 90
0 A 90 100 91
0 A 90 100 92
0 A 90 100 93
0 A 90 100 94
0 A 90 100 95
0 A 90 100 96
0 A 90 100 97
0 A 90 100 98
0 A 90 100 99
0 A 90 100 100
1 B 85 95 85
1 B 85 95 86
1 B 85 95 87
1 B 85 95 88
1 B 85 95 89
1 B 85 95 90
Secondly, judge which range each of the numbers in the first df falls into:
def function1(x):
    df11 = df1.loc[df1.col1 == x]
    if len(df11) == 0:
        return 'out_of_range'
    if len(df11) > 1:
        return 'overlap'
    return df11.iloc[0, 0]

numbers.assign(col2=numbers.number.map(function1))
number col2
0 50 out_of_range
1 65 out_of_range
2 75 C
3 85 B
4 90 overlap
the logic is simple and clear
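For larger inputs, a merge-based variant of the same idea is possible (a sketch, not part of the original answer; it assumes integer numbers as in the example, and casts col1 because explode yields an object column):
merged = numbers.merge(df1.astype({'col1': 'int64'})[['col1', 'range']],
                       how='left', left_on='number', right_on='col1')
labels = merged.groupby('number', sort=False)['range'].agg(
    lambda s: 'out_of_range' if s.isna().all()
    else ('overlap' if len(s) > 1 else s.iloc[0]))
numbers.assign(col2=numbers.number.map(labels))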

How to name the column when using the value_counts function in pandas?

I was counting the number of occurrences of angle and dist with the code below:
g = new_df.value_counts(subset=['Current_Angle','Current_dist'] ,sort = False)
the output:
current_angle current_dist 0
-50 30 1
-50 40 2
-50 41 6
-50 45 4
Try 1:
g.columns = ['angle','Distance','count','Percentage Missed'] - result: no change in the column names.
Try 2:
When I printed the columns using print(g.columns) I got the error AttributeError: 'Series' object has no attribute 'columns'.
I want to rename column 0 as count and add a new column percent missed to g, calculated as 100 minus the value in column 0.
Expected output
current_angle current_dist count percent missed
-50 30 1 99
-50 40 2 98
-50 41 6 94
-50 45 4 96
1. How can I modify the code? I mean, instead of value_counts, is there any other function that can give the expected output?
2. How can I get the expected output with the current method?
EDIT 1 (exceptional case)
data:
angle  distance  velocity
0      124       -3
50     24        -25
50     34        25
expected output:
count is calculated based on distance
angle  distance  velocity  count  percent missed
0      124       -3        1      99
50     24        -25       1      99
50     34        25        1      99
First add Series.reset_index, because DataFrame.value_counts returns a Series; use its name parameter to rename column 0 to count, then create the new column with Series.rsub, which subtracts from the right side, i.e. 100 - df['count']:
df = (new_df.value_counts(subset=['Current_Angle','Current_dist'] ,sort = False)
.reset_index(name='count')
.assign(**{'percent missed': lambda x: x['count'].rsub(100)}))
Or if need also set new columns names use DataFrame.set_axis:
df = (new_df.value_counts(subset=['Current_Angle','Current_dist'] ,sort = False)
.reset_index(name='count')
.set_axis(['angle','Distance','count'], axis=1)
.assign(**{'percent missed': lambda x: x['count'].rsub(100)}))
If need assign new columns names here is alternative solution:
df = (new_df.value_counts(subset=['Current_Angle','Current_dist'] ,sort = False)
.reset_index())
df.columns = ['angle','Distance','count']
df['percent missed'] = df['count'].rsub(100)
Assuming a DataFrame as input (if not, reset_index first), simply use rename and a subtraction:
df = df.rename(columns={'0': 'count'}) # assuming string '0' here, else use 0
df['percent missed'] = 100 - df['count']
output:
current_angle current_dist count percent missed
0 -50 30 1 99
1 -50 40 2 98
2 -50 41 6 94
3 -50 45 4 96
alternative: using groupby.size:
(new_df
.groupby(['current_angle','current_dist']).size()
.reset_index(name='count')
.assign(**{'percent missed': lambda d: 100-d['count']})
)
output:
current_angle current_dist count percent missed
0 -50 30 1 99
1 -50 40 2 98
2 -50 41 6 94
3 -50 45 4 96

Sample dataframe by value in column and keep all rows

I want to sample a Pandas dataframe using values in a certain column, but I want to keep all rows with values that are in the sample.
For example, in the dataframe below I want to randomly sample some fraction of the values in b, but keep all corresponding rows in a and c.
d = pd.DataFrame({'a': range(1, 101, 1),'b': list(range(0, 100, 4))*4, 'c' :list(range(0, 100, 2))*2} )
Desired example output from a 16% sample:
Out[66]:
a b c
0 1 0 0
1 26 0 50
2 51 0 0
3 76 0 50
4 4 12 6
5 29 12 56
6 54 12 6
7 79 12 56
8 18 68 34
9 43 68 84
10 68 68 34
11 93 68 84
12 19 72 36
13 44 72 86
14 69 72 36
15 94 72 86
I've tried sampling the series and merging back to the main data, like this:
In [66]: pd.merge(d, d.b.sample(int(.16 * d.b.nunique())))
This creates the desired output, but it seems inefficient. My real dataset has millions of values in b and hundreds of millions of rows. I know I could also use some version of `isin`, but that also is slow.
Is there a more efficient way to do this?
I really doubt that isin is slow:
uniques = df.b.unique()
# this may be the bottleneck
samples = np.random.choice(uniques, replace=False, size=int(0.16*len(uniques)))
# sampling here
df[df.b.isin(samples)]
You can profile the steps above. In case samples=... is slow, you can try:
idx = np.random.rand(len(uniques))
samples = uniques[idx<0.16]
Those took about 100 ms on my system on 10 million rows.
Note: d.b.sample(int(.16 * d.b.nunique())) does not sample 0.16 of the unique values in b.
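If you do want to keep the merge formulation, a small sketch that samples from the unique values instead (so it really takes 16% of the distinct b values) could look like this:
sampled_b = pd.Series(d.b.unique(), name='b').sample(frac=0.16)
d.merge(sampled_b)   # inner join keeps every row whose b value was sampled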

My data cleaning script is slow, any ideas on how to improve it?

I have data (CSV format) where the first column is an epoch timestamp (strictly increasing) and the other columns are cumulative (only increasing or staying equal).
Sample is as below:
df = pandas.DataFrame([[1515288240, 100, 50, 90, 70],[1515288241, 101, 60, 95, 75],[1515288242, 110, 70, 100, 80],[1515288239, 110, 70, 110, 85],[1515288241, 110, 75, 110, 85],[1515288243,110,70,110,85]],columns =['UNIX_TS','A','B','C','D'])
df =
id UNIX_TS A B C D
0 1515288240 100 50 90 70
1 1515288241 101 60 95 75
2 1515288242 110 70 100 80
3 1515288239 110 70 110 85
4 1515288241 110 75 110 85
5 1515288243 110 70 110 85
import pandas as pd

def clean(df, column_name, equl):
    i = 0
    while df.shape[0] - 2 >= i:
        if df[column_name].iloc[i] > df[column_name].iloc[i+1]:
            df.drop(df[column_name].iloc[[i+1]].index, inplace=True)
            continue
        elif df[column_name].iloc[i] == df[column_name].iloc[i+1] and equl == 1:
            df.drop(df[column_name].iloc[[i+1]].index, inplace=True)
            continue
        i += 1

clean(df, 'UNIX_TS', 1)
for col in df.columns[1:]:
    clean(df, col, 0)
df =
id UNIX_TS A B C D
0 1515288240 100 50 90 70
1 1515288241 101 60 95 75
2 1515288242 110 70 100 80
My script works as intended but it's too slow; does anybody have ideas on how to improve its speed?
I wrote the script to remove all the invalid data based on 2 rules:
UNIX_TS must be strictly increasing (because it's a time, it cannot flow backwards or pause),
the other columns are increasing and can stay constant, for example if one row is 100 the next row can be >= 100 but not less.
Based on these rules, indexes 3 and 4 are invalid because their UNIX_TS values (1515288239 and 1515288241) are less than that of index 2.
Index 5 is wrong because the value of B decreased.
IIUC, you can use:
cols = ['A', 'B', 'C', 'D']
# keep rows whose timestamp exceeds the running maximum of the previous timestamps
mask_1 = df['UNIX_TS'] > df['UNIX_TS'].cummax().shift().fillna(0)
# keep rows where every cumulative column is at or above its previous running maximum
mask_2 = (df[cols] >= df[cols].cummax().shift().fillna(0)).all(1)
df[mask_1 & mask_2]
Outputs
UNIX_TS A B C D
0 1515288240 100 50 90 70
1 1515288241 101 60 95 75
2 1515288242 110 70 100 80

Label a pandas column based on sign flips of another column

I have a pandas dataframe like this:
Index High Low MA(5)-MA(20)
0 100 90 -1
1 101 91 -2
2 102 92 +1
3 99 88 +2
I want to get the maximum of the highs when MA(5) - MA(20) is positive, and the minimum of the lows when it is negative.
The thing is that I want only the local maxima and minima, not the global ones, so the maximum and minimum have to be reset each time the sign of MA(5) - MA(20) flips.
I do not want to use a for loop since they are really slow in Python.
Any help?
You can use np.sign to get the sign of the last column. Perform a groupby operation, and use np.where to assign values accordingly.
v = np.sign(df['MA(5)-MA(20)']) < 1
g = df.groupby(v.ne(v.shift()).cumsum())
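# v flags rows where the MA difference is negative or zero; v.ne(v.shift()).cumsum()
# increments at every sign flip, so each run of same-sign rows gets its own group key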
df['Maxima/Minima'] = np.where(
    v, g.Low.transform('min'), g.High.transform('max')
)
df
Index High Low MA(5)-MA(20) Maxima/Minima
0 0 100 90 -1 90
1 1 101 91 -2 90
2 2 102 92 1 102
3 3 99 88 2 102
You'll notice that rows are assigned the local minima/maxima values according to their sign.
Is this what you need?
v = df['MA(5)-MA(20)'].gt(0).astype(int).diff().fillna(0).cumsum()
df.groupby(v).High.transform('max').mask(df['MA(5)-MA(20)'] < 0, df.groupby(v).Low.transform('min'))
0 90
1 90
2 102
3 102
Name: High, dtype: int64

Select slices/range of columns for each row in a pandas dataframe

Here is the problem:
import numpy
import pandas
dfl = pandas.DataFrame(numpy.random.randn(30,10))
now, I want the following cells put in a data frame:
For row 1: columns 3 to 6 (length = 4 cells),
For row 2: columns 4 to 7 (length = 4 cells),
For row 3: columns 1 to 4 (length = 4 cells),
etc...
Each of these ranges is always 4 cells wide, but the start/end columns differ.
The row-wise start points are in a list [3, 4, 1, ...] and so are the row-wise end points. The list of rows I'm interested in is also a list [1, 2, 3].
Finally, dfl has a datetime index which I would like to preserve
(meaning the end result should be a data frame with index dfl.index[[1, 2, 3]]).
Edit: range exceeds the frame
Some of the entries of the vector of row-wise start points are too large (say a row-wise start point of 9 in the example matrix above). In those cases, I just want all the available columns from the row-wise start point onward and then as many NaNs as necessary to get the right shape (so since 9 + 4 > 10, the corresponding row of the result data frame should be [9, 10, NaN, NaN]).
Using NumPy broadcasting to create all those column indices and then advanced-indexing into the array data -
import numpy as np
import pandas as pd

def extract_rows(dfl, starts, L, fillval=np.nan):
    a = dfl.values
    idx = np.asarray(starts)[:,None] + range(L)
    valid_mask = idx < dfl.shape[1]
    idx[~valid_mask] = 0
    val = a[np.arange(len(idx))[:,None], idx]
    return pd.DataFrame(np.where(valid_mask, val, fillval))
Sample runs -
In [541]: np.random.seed(0)
In [542]: dfl = pandas.DataFrame(numpy.random.randint(11,99,(3,10)))
In [543]: dfl
Out[543]:
0 1 2 3 4 5 6 7 8 9
0 55 58 75 78 78 20 94 32 47 98
1 81 23 69 76 50 98 57 92 48 36
2 88 83 20 31 91 80 90 58 75 93
In [544]: extract_rows(dfl, starts=[3,4,8], L=4, fillval=np.nan)
Out[544]:
0 1 2 3
0 78.0 78.0 20.0 94.0
1 50.0 98.0 57.0 92.0
2 75.0 93.0 NaN NaN
In [545]: extract_rows(dfl, starts=[3,4,8], L=4, fillval=-1)
Out[545]:
0 1 2 3
0 78 78 20 94
1 50 98 57 92
2 75 93 -1 -1
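If the datetime index mentioned in the question needs to be preserved, one option (a sketch; rows is a hypothetical list of the row positions to extract) is to reattach it afterwards:
rows = [0, 1, 2]                              # hypothetical row positions
out = extract_rows(dfl.iloc[rows], starts=[3, 4, 8], L=4)
out.index = dfl.index[rows]                   # keep the original (datetime) index labels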
Or we can use .iloc and enumerate:
l = [3, 4, 1]
pd.DataFrame(data=[dfl.iloc[x:x+1, y:y+4].values[0] for x, y in enumerate(l)])
Out[107]:
0 1 2 3
0 1.224124 -0.938459 -1.114081 -1.128225
1 -0.445288 0.445390 -0.154295 -1.871210
2 0.784677 0.997053 2.144286 -0.179895
