I have a dataframe with, say, 4 columns [['a','b','c','d']], to which I add a column ['total'] containing the row-wise sum of all the other columns. I then add another column ['growth of total'] with the growth rate of the total.
Some of the values in [['a','b','c','d']] are blank, rendering the ['total'] column invalid for those rows. I can easily get rid of these rows with df.dropna(how='any').
However, my growth rate will be invalid not only for rows with missing values in [['a','b','c','d']] but also for the rows that follow them. How do I drop all of these rows?
IIUC, you can use notnull with all to mask off any rows with NaN and any rows that follow NaN rows:
In [43]:
df = pd.DataFrame({'a':[0, np.nan, 2, 3, np.nan], 'b':[np.nan, 1, 2, 3, 4], 'c':[0, np.nan, 2, 3, 4]})
df
Out[43]:
a b c
0 0 NaN 0
1 NaN 1 NaN
2 2 2 2
3 3 3 3
4 NaN 4 4
In [44]:
df[df.notnull().all(axis=1) & df.shift().notnull().all(axis=1)]
Out[44]:
a b c
3 3 3 3
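A slightly leaner variant (a minimal sketch under the same setup) computes the row mask once and shifts it, instead of calling notnull twice:
mask = df.notnull().all(axis=1)          # True for rows with no NaN
df[mask & mask.shift(fill_value=False)]  # keep rows whose previous row is also clean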
Here's one option that I think does what you're looking for:
In [76]: df = pd.DataFrame(np.arange(40).reshape(10,4))
In [77]: df.iloc[1, 2] = np.nan  # .ix is removed in modern pandas; positional .iloc is equivalent here
In [78]: df.iloc[6, 1] = np.nan
In [79]: df['total'] = df.sum(axis=1, skipna=False)
In [80]: df
Out[80]:
0 1 2 3 total
0 0 1 2 3 6
1 4 5 NaN 7 NaN
2 8 9 10 11 38
3 12 13 14 15 54
4 16 17 18 19 70
5 20 21 22 23 86
6 24 NaN 26 27 NaN
7 28 29 30 31 118
8 32 33 34 35 134
9 36 37 38 39 150
In [81]: df['growth'] = df['total'].iloc[1:] - df['total'].values[:-1]
In [82]: df
Out[82]:
0 1 2 3 total growth
0 0 1 2 3 6 NaN
1 4 5 NaN 7 NaN NaN
2 8 9 10 11 38 NaN
3 12 13 14 15 54 16
4 16 17 18 19 70 16
5 20 21 22 23 86 16
6 24 NaN 26 27 NaN NaN
7 28 29 30 31 118 NaN
8 32 33 34 35 134 16
9 36 37 38 39 150 16
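Equivalently (a minimal sketch, assuming 'total' was built with skipna=False as above), the growth column can be written with diff, and the invalid rows dropped in one step:
df['growth'] = df['total'].diff()                 # total minus previous total; NaN propagates
df_clean = df.dropna(subset=['total', 'growth'])  # drops NaN-total rows and the rows after them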
Given the following dataframe:
date hy_code curr_idx_diff is_vol_price_up
24 2021/12/3 A 17 1
25 2021/12/6 A 2 0
26 2021/12/7 A 19 0
27 2021/12/8 A 20 0
28 2021/12/9 A 21 1
29 2021/12/10 A 22 0
30 2021/12/13 A 23 0
31 2021/12/14 A 24 0
32 2021/12/15 A 24 0
33 2021/12/16 A 20 1
34 2021/12/17 A 21 0
35 2021/12/20 A 22 0
36 2021/12/21 A 23 0
37 2021/12/22 A 24 0
38 2021/12/23 A 24 0
39 2021/12/24 A 19 0
40 2021/12/27 A 20 0
41 2021/12/28 A 21 0
42 2021/12/29 A 22 0
43 2021/12/30 A 23 1
24 2021/12/3 B 3 0
25 2021/12/6 B 4 0
26 2021/12/7 B 5 1
27 2021/12/8 B 6 0
28 2021/12/9 B 7 0
29 2021/12/10 B 8 0
30 2021/12/13 B 9 0
31 2021/12/14 B 10 0
32 2021/12/15 B 11 0
33 2021/12/16 B 12 0
34 2021/12/17 B 13 0
35 2021/12/20 B 14 0
36 2021/12/21 B 15 0
37 2021/12/22 B 16 0
38 2021/12/23 B 14 1
39 2021/12/24 B 12 0
40 2021/12/27 B 19 0
41 2021/12/28 B 20 0
42 2021/12/29 B 21 0
43 2021/12/30 B 20 0
Goal
I want to assign a column called bkb within each hy_code group, computing a rolling sum of is_vol_price_up whose window length is given per row by curr_idx_diff, like below:
df.assign(bkb=lambda x:x.groupby('hy_code')[['is_vol_price_up','curr_idx_diff']].agg(lambda x: x[0].rolling(x[1],1).sum()))
But I got KeyError: 0.
Expected Output
date hy_code curr_idx_diff is_vol_price_up bkb
24 2021/12/3 A 17 1 NaN
25 2021/12/6 A 2 0 1
26 2021/12/7 A 19 0 NaN
27 2021/12/8 A 20 0 NaN
28 2021/12/9 A 21 1 NaN
29 2021/12/10 A 22 0 NaN
30 2021/12/13 A 23 0 NaN
31 2021/12/14 A 24 0 NaN
32 2021/12/15 A 24 0 NaN
33 2021/12/16 A 20 1 NaN
34 2021/12/17 A 21 0 NaN
35 2021/12/20 A 22 0 NaN
36 2021/12/21 A 23 0 NaN
37 2021/12/22 A 24 0 NaN
38 2021/12/23 A 24 0 NaN
39 2021/12/24 A 19 0 NaN
40 2021/12/27 A 20 0 NaN
41 2021/12/28 A 21 0 NaN
42 2021/12/29 A 22 0 NaN
43 2021/12/30 A 23 1 NaN
24 2021/12/3 B 3 0 NaN
25 2021/12/6 B 4 0 NaN
26 2021/12/7 B 5 1 NaN
27 2021/12/8 B 6 0 NaN
28 2021/12/9 B 7 0 NaN
29 2021/12/10 B 8 0 NaN
30 2021/12/13 B 9 0 NaN
31 2021/12/14 B 10 0 NaN
32 2021/12/15 B 11 0 NaN
33 2021/12/16 B 12 0 NaN
34 2021/12/17 B 13 0 NaN
35 2021/12/20 B 14 0 NaN
36 2021/12/21 B 15 0 NaN
37 2021/12/22 B 16 0 NaN
38 2021/12/23 B 14 1 2
39 2021/12/24 B 12 0 1
40 2021/12/27 B 19 0 NaN
41 2021/12/28 B 20 0 NaN
42 2021/12/29 B 21 0 NaN
43 2021/12/30 B 20 0 2
For example:
index=25 (group A): rolling window 2 (curr_idx_diff=2), sum=1.
index=38 (group B): rolling window 14 (curr_idx_diff=14), sum=2.
index=24 (group B): rolling window 3 (curr_idx_diff=3), sum=NaN, because the B group only has 1 row available at that point.
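The KeyError: 0 arises because agg applies the lambda to each column separately, so x is a single Series and x[0] is a label lookup that fails (the index starts at 24). Since the window length differs per row, a fixed-window rolling cannot express this directly. Here is a minimal sketch of one way to produce the expected output, using a plain loop per group (assuming the dataframe shown above is df and its rows are already ordered by hy_code):
import numpy as np
import pandas as pd

def var_window_sum(g):
    # window for row i is g['curr_idx_diff'].iloc[i]; NaN when the group
    # does not yet contain enough rows to fill that window
    vals = g['is_vol_price_up'].to_numpy()
    wins = g['curr_idx_diff'].to_numpy()
    out = np.full(len(g), np.nan)
    for i, w in enumerate(wins):
        if i + 1 >= w:
            out[i] = vals[i - w + 1:i + 1].sum()
    return pd.Series(out, index=g.index)

df['bkb'] = df.groupby('hy_code', group_keys=False).apply(var_window_sum)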
I have a DataFrame with a column B as follows:
A B
0 0 20.00
1 1 35.00
2 2 75.00
3 3 29.00
4 4 125.00
5 5 16.00
6 6 52.50
7 7 NaN
8 8 NaN
9 9 NaN
10 10 NaN
11 11 NaN
12 12 NaN
13 13 239.91
14 14 22.87
15 15 52.74
16 16 37.20
17 17 27.44
18 18 57.01
19 19 29.88
I want to change the values of the column as follows:
if 0 < B < 10.0, replace the cell value of B with "0 to 10";
if 10.1 < B < 20.0, replace the cell value of B with "10 to 20";
and continue like this until the maximum range is reached.
I have tried
ds['B'] = np.where(ds['B'].between(10.0,20.0), "10 to 20", ds['B'])
But once I perform this operation, the column contains the string "10 to 20" (it becomes object dtype), so I cannot perform the operation again for the remaining values of the DataFrame. After this step, the DataFrame looks like this:
A B
0 0 10 to 20
1 1 35.0
2 2 75.0
3 3 29.0
4 4 125.0
5 5 10 to 20
6 6 52.5
7 7 nan
8 8 nan
9 9 nan
10 10 nan
11 11 nan
12 12 nan
13 13 239.91
14 14 22.87
15 15 52.74
16 16 37.2
17 17 27.44
18 18 57.01
19 19 29.88
And the following line: ds['B'] = np.where(ds['B'].between(20.0,30.0), "20 to 30", ds['B']) will throw TypeError: '>=' not supported between instances of 'str' and 'float'
How can I code this to change all of the values in the column to these range strings all at once?
Build your bins and labels and use pd.cut:
bins = np.arange(0, df["B"].max() // 10 * 10 + 20, 10).astype(int)  # +20 so the last bin covers the maximum
labels = [' to '.join(t) for t in zip(bins[:-1].astype(str), bins[1:].astype(str))]
df["B"] = pd.cut(df["B"], bins=bins, labels=labels)
>>> df
A B
0 0 10 to 20
1 1 30 to 40
2 2 70 to 80
3 3 20 to 30
4 4 120 to 130
5 5 10 to 20
6 6 50 to 60
7 7 NaN
8 8 NaN
9 9 NaN
10 10 NaN
11 11 NaN
12 12 NaN
13 13 230 to 240
14 14 20 to 30
15 15 50 to 60
16 16 30 to 40
17 17 20 to 30
18 18 50 to 60
19 19 20 to 30
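Note that pd.cut builds right-closed bins by default, which is why 20.00 lands in "10 to 20" above. If left-closed bins are wanted instead (so 20.00 would map to "20 to 30"), pass right=False:
df["B"] = pd.cut(df["B"], bins=bins, labels=labels, right=False)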
This can be done with much less code, since it is really just a matter of string formatting (note that an exact multiple of 10 such as 20.0 maps to "20 to 30" here, unlike pd.cut's right-closed bins):
ds['B'] = ds['B'].apply(lambda x: f'{int(x/10) if x>=10 else ""}0 to {int(x/10)+1}0' if pd.notnull(x) else x)
You can create a custom function that maps each value to a range string (for example, 19.0 maps to "10 to 20") and then apply this function to each value of the column.
I've written the code so that the minimum and maximum of the range generalize to the DataFrame, taking values that are multiples of 10.
import numpy as np
import pandas as pd

# copy and paste your DataFrame
ds = pd.read_clipboard()

# floor to the nearest multiple of 10
ds_min = ds['B'].min() // 10 * 10
# round to the nearest multiple of 10
ds_max = round(ds['B'].max(), -1)
# linspace needs an integer number of points
ranges = np.linspace(ds_min, ds_max, int((ds_max - ds_min) / 10) + 1)

def map_value_to_string(value):
    for idx in range(1, len(ranges)):
        low_value, high_value = ranges[idx - 1], ranges[idx]
        if low_value < value <= high_value:
            return f"{int(low_value)} to {int(high_value)}"
    return None  # NaN and out-of-range values fall through

ds['B'] = ds['B'].apply(map_value_to_string)
Output:
>>> ds
A B
0 0 10 to 20
1 1 30 to 40
2 2 70 to 80
3 3 20 to 30
4 4 120 to 130
5 5 10 to 20
6 6 50 to 60
7 7 None
8 8 None
9 9 None
10 10 None
11 11 None
12 12 None
13 13 230 to 240
14 14 20 to 30
15 15 50 to 60
16 16 30 to 40
17 17 20 to 30
18 18 50 to 60
19 19 20 to 30
I have 2 data frames, df1 and df2:
df1
Out[66]:
A B
0 1 11
1 1 2
2 1 32
3 1 42
4 1 54
5 1 66
6 2 16
7 2 23
8 3 13
9 3 24
10 3 35
11 3 46
12 3 51
13 4 12
14 4 28
15 4 39
16 4 49
df2
Out[80]:
B
0 32
1 42
2 13
3 24
4 35
5 39
6 49
I want to merge the dataframes, but at the same time include the value just before and/or just after each matched block within each group in column A. This is an example of the desired outcome:
df3
Out[93]:
A B
0 1 2
1 1 32
2 1 42
3 1 54
4 3 13
5 3 24
6 3 35
7 3 46
8 4 28
9 4 39
10 4 49
I'm trying to use merge, but that only keeps the portion of the data frames that coincides. Does anyone have an idea how to deal with this? Thanks!
Here's one way to do it using merge with indicator, groupby, and rolling. The centered rolling max over a window of 3 marks each matched row plus its immediate neighbours within the group:
df1[df1.merge(df2, on='B', how='left', indicator='Ind').eval('Found=Ind == "both"')
    .groupby('A')['Found']
    .apply(lambda x: x.rolling(3, center=True, min_periods=2).max()).astype(bool)]
Output:
A B
1 1 2
2 1 32
3 1 42
4 1 54
8 3 13
9 3 24
10 3 35
11 3 46
14 4 28
15 4 39
16 4 49
pd.concat([df1.groupby('A').min().reset_index(),
           pd.merge(df1, df2, on="B"),
           df1.groupby('A').max().reset_index()]
          ).reset_index(drop=True).drop_duplicates().sort_values(['A','B'])
A B
0 1 2
4 1 32
5 1 42
1 2 16
2 3 13
7 3 24
8 3 35
3 4 12
9 4 39
10 4 49
Breaking down each part:
#Get Minimum
df1.groupby('A').min().reset_index()
# Merge on B
pd.merge(df1,df2, on="B")
# Get Maximum
df1.groupby('A').max().reset_index()
# Reset the index and drop duplicate rows, since the merge may overlap with the min/max; then sort by 'A' and 'B'
.reset_index(drop=True).drop_duplicates().sort_values(['A','B'])
I have a dataframe whose values are column numbers for another dataframe. Is there a way to return the corresponding value from the other dataframe instead of just having the column index?
I basically want to match up the index between the Push and df dataframes. The values in the Push dataframe say which column I want to return from the df dataframe.
Push dataframe:
0 1
0 1 2
1 0 3
2 0 3
3 1 3
4 0 2
df dataframe:
0 1 2 3 4
0 10 11 22 33 44
1 10 11 22 33 44
2 10 11 22 33 44
3 10 11 22 33 44
4 10 11 22 33 44
return:
0 1
0 11 22
1 10 33
2 10 33
3 11 33
4 10 22
You can do it with np.take; however, this function works on the flattened array, so push must be shifted like this:
In [285]: push1 = push.values+np.arange(0,25,5)[:,None]
In [229]: pd.DataFrame(df.values.take(push1))
EDIT
Actually, I just reinvented np.choose:
In [24]: df
Out[24]:
0 1 2 3 4
0 0 1 2 3 4
1 10 11 12 13 14
2 20 21 22 23 24
3 30 31 32 33 34
4 40 41 42 43 44
In [25]: push
Out[25]:
0 1
0 1 2
1 0 3
2 0 3
3 1 3
4 0 2
In [27]: np.choose(push.T,df).T
Out[27]:
0 1
0 1 2
1 10 13
2 20 23
3 31 33
4 40 42
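For completeness, plain NumPy fancy indexing does the same row-wise column pick without the flatten/offset step (a minimal sketch, assuming push holds valid column positions for df):
import numpy as np
import pandas as pd

# pick df[row, push[row, k]] for every row and every column k of push
rows = np.arange(len(df))[:, None]   # shape (n, 1); broadcasts against push
result = pd.DataFrame(df.to_numpy()[rows, push.to_numpy()],
                      index=push.index, columns=push.columns)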
We use melt and then replace. Note that here df1 is your Push and df2 is your df:
df1.astype(str).replace(df2.melt().drop_duplicates().set_index('variable').value.to_dict())
Out[31]:
0 1
0 11 22
1 10 33
2 10 33
3 11 33
4 10 22
The goal here is to see how many unique values I have in my database. This is the code I have written:
apps = pd.read_csv('ConcatOwned1_900.csv', sep='\t', usecols=['appid'])
apps['appid'] = apps['appid'].astype(int)
apps_list = apps['appid'].unique()
b = apps.groupby('appid').size()
blist = b.unique()
print(len(apps_list), len(blist), len(set(b)))
>>> 7672 2164 2164
Why is there a difference between these methods?
As requested, I am posting some of my data:
Unnamed: 0 StudID No appid work work2
0 0 76561193665298433 0 10 nan 0
1 1 76561193665298433 1 20 nan 0
2 2 76561193665298433 2 30 nan 0
3 3 76561193665298433 3 40 nan 0
4 4 76561193665298433 4 50 nan 0
5 5 76561193665298433 5 60 nan 0
6 6 76561193665298433 6 70 nan 0
7 7 76561193665298433 7 80 nan 0
8 8 76561193665298433 8 100 nan 0
9 9 76561193665298433 9 130 nan 0
10 10 76561193665298433 10 220 nan 0
11 11 76561193665298433 11 240 nan 0
12 12 76561193665298433 12 280 nan 0
13 13 76561193665298433 13 300 nan 0
14 14 76561193665298433 14 320 nan 0
15 15 76561193665298433 15 340 nan 0
16 16 76561193665298433 16 360 nan 0
17 17 76561193665298433 17 380 nan 0
18 18 76561193665298433 18 400 nan 0
19 19 76561193665298433 19 420 nan 0
20 20 76561193665298433 20 500 nan 0
21 21 76561193665298433 21 550 nan 0
22 22 76561193665298433 22 620 6.0 3064
33 33 76561193665298434 0 10 nan 837
34 34 76561193665298434 1 20 nan 27
35 35 76561193665298434 2 30 nan 9
36 36 76561193665298434 3 40 nan 5
37 37 76561193665298434 4 50 nan 2
38 38 76561193665298434 5 60 nan 0
39 39 76561193665298434 6 70 nan 403
40 40 76561193665298434 7 130 nan 0
41 41 76561193665298434 8 80 nan 6
42 42 76561193665298434 9 100 nan 10
43 43 76561193665298434 10 220 nan 14
IIUC, based on the attached piece of the dataframe, it seems that you should analyze b.index, not the values of b. Just look:
b = apps.groupby('appid').size()
In [24]: b
Out[24]:
appid
10 2
20 2
30 2
40 2
50 2
60 2
70 2
80 2
100 2
130 2
220 2
240 1
280 1
300 1
320 1
340 1
360 1
380 1
400 1
420 1
500 1
550 1
620 1
dtype: int64
In [25]: set(b)
Out[25]: {1, 2}
But if you do it with b.index, you'll get the same values from all 3 methods:
blist = b.index.unique()
In [30]: len(apps_list), len(blist), len(set(b.index))
Out[30]: (23, 23, 23)
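For the stated goal of counting distinct appid values, pandas also has a direct method (a one-line sketch under the same setup):
print(apps['appid'].nunique())  # number of unique appid values, NaN excluded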