Can't find aggregation result column in Python Pandas

s = pd.Series(["08-10-2017", "08-10-2017", "08-10-2017", "09-10-2017", "09-10-2017", "09-10-2017", "10-10-2017", "10-10-2017", "10-10-2017", "11-10-2017", "11-10-2017", "11-10-2017", "12-10-2017", "12-10-2017", "12-10-2017", "13-10-2017", "13-10-2017", "13-10-2017", "14-10-2017", "14-10-2017"])
p = pd.DataFrame(data=s)
p.columns = ['date']
p.groupby('date').agg('count').reset_index().columns
Where is the 'count' column?

There is no 'count' column because 'date' is both the only column and the group key: after groupby('date') it moves into the index, leaving nothing to count, so the aggregation result is empty. I think you are looking for value_counts:
p.date.value_counts()
Out[1095]:
09-10-2017 3
13-10-2017 3
10-10-2017 3
12-10-2017 3
08-10-2017 3
11-10-2017 3
14-10-2017 2
Name: date, dtype: int64
And if you want to do it with groupby:
p.groupby('date').size()
And if you do want to use count:
p.groupby('date').agg({'date':'count'})
Out[1101]:
date
date
08-10-2017 3
09-10-2017 3
10-10-2017 3
11-10-2017 3
12-10-2017 3
13-10-2017 3
14-10-2017 2
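If you specifically want a DataFrame with a column named 'count', a minimal sketch (the name passed to reset_index is an arbitrary choice):
# size() counts the rows in each group; reset_index(name=...) turns the
# resulting Series into a two-column DataFrame: 'date' and 'count'
p.groupby('date').size().reset_index(name='count')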

Related

Find Average of Every Three Columns in Pandas dataframe

I am new to Python and pandas. I have a pandas dataframe with monthly columns ranging from 2000 (2000-01) to 2016 (2016-06).
I want to find the average of every three months and assign it to a new quarterly column (2000q1). I know I can do the following:
df['2000q1'] = df[['2000-01', '2000-02', '2000-03']].mean(axis=1)
df['2000q2'] = df[['2000-04', '2000-05', '2000-06']].mean(axis=1)
.
.
.
df['2016q2'] = df[['2016-04', '2016-05', '2016-06']].mean(axis=1)
But this is very tedious. I would appreciate it if someone could help me find a better way.
You can use groupby on columns; integer division of the column positions by 3 maps every three consecutive columns to the same group:
df.groupby(np.arange(len(df.columns))//3, axis=1).mean()
Or, since the column labels can be converted to datetime, you can use resample:
df.columns = pd.to_datetime(df.columns)
df.resample('Q', axis=1).mean()
Here's a demo:
cols = pd.date_range('2000-01', '2000-06', freq='MS')
cols = cols.strftime('%Y-%m')
cols
Out:
array(['2000-01', '2000-02', '2000-03', '2000-04', '2000-05', '2000-06'],
dtype='<U7')
df = pd.DataFrame(np.random.randn(10, 6), columns=cols)
df
Out:
2000-01 2000-02 2000-03 2000-04 2000-05 2000-06
0 -1.263798 0.251526 0.851196 0.159452 1.412013 1.079086
1 -0.909071 0.685913 1.394790 -0.883605 0.034114 -1.073113
2 0.516109 0.452751 -0.397291 -0.050478 -0.364368 -0.002477
3 1.459609 -1.696641 0.457822 1.057702 -0.066313 -0.910785
4 -0.482623 1.388621 0.971078 -0.038535 0.033167 0.025781
5 -0.016654 1.404805 0.100335 -0.082941 -0.418608 0.588749
6 0.684735 -2.007105 0.552615 1.969356 -0.614634 0.021459
7 0.382475 0.965739 -1.826609 -0.086537 -0.073538 -0.534753
8 1.548773 -0.157250 0.494819 -1.631516 0.627794 -0.398741
9 0.199049 0.145919 0.711701 0.305382 -0.118315 -2.397075
First alternative:
df.groupby(np.arange(len(df.columns))//3, axis=1).mean()
Out:
0 1
0 -0.053692 0.883517
1 0.390544 -0.640868
2 0.190523 -0.139108
3 0.073597 0.026868
4 0.625692 0.006805
5 0.496162 0.029067
6 -0.256585 0.458727
7 -0.159465 -0.231609
8 0.628781 -0.467487
9 0.352223 -0.736669
Second alternative:
df.columns = pd.to_datetime(df.columns)
df.resample('Q', axis=1).mean()
Out:
2000-03-31 2000-06-30
0 -0.053692 0.883517
1 0.390544 -0.640868
2 0.190523 -0.139108
3 0.073597 0.026868
4 0.625692 0.006805
5 0.496162 0.029067
6 -0.256585 0.458727
7 -0.159465 -0.231609
8 0.628781 -0.467487
9 0.352223 -0.736669
You can assign the result to a variable:
res = df.resample('Q', axis=1).mean()
Change column names as you like:
res = res.rename(columns=lambda col: '{}q{}'.format(col.year, col.quarter))
res
Out:
2000q1 2000q2
0 -0.053692 0.883517
1 0.390544 -0.640868
2 0.190523 -0.139108
3 0.073597 0.026868
4 0.625692 0.006805
5 0.496162 0.029067
6 -0.256585 0.458727
7 -0.159465 -0.231609
8 0.628781 -0.467487
9 0.352223 -0.736669
And attach this to your current DataFrame with:
pd.concat([df, res], axis=1)
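As a side note, recent pandas releases deprecate the axis=1 argument to both groupby and resample; assuming the columns have already been converted with pd.to_datetime as above, a transpose-based sketch that produces the same quarterly means:
# Transpose so the monthly labels become the row index, resample over the
# rows, then transpose back to the original orientation
df.T.resample('Q').mean().T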

Count values in column using pandas

I have data
member_id device_id event_date
19404 dfbc9d3230304cdfb0316cc32c41b67f [2016-04-28, 2016-04-27, 2016-04-26, 2016-04-22]
19555 176e307bd8714a00ac2b99276123f0a7 [2016-04-29, 2016-04-28, 2016-04-27, 2016-04-23]
19632 a6d4b631e09a4b31afef4c93472c7da3 [2016-04-29, 2016-04-28, 2016-04-27]
19792 0146b09048ce4c47af4bbc69e7999137 [2016-04-23, 2016-04-22, 2016-04-21, 2016-04-20]
20258 1510f9b4efc14183ad412eb54c9e058f [2016-04-09]
5f42f4d02d38456689e58d6a1b9a3e16 [2016-04-29, 2016-04-28, 2016-04-25, 2016-04-22]
and I need to count the values in the lists in the third column.
I tried len(); I thought it would return the length of each list, but it's wrong.
new = data.groupby(['member_id', 'device_id'])['event_date'].unique()
count() returns the total number of values, not the length of each list.
Assuming that you have a list of values in your last column l:
In [113]: df.l.map(len)
Out[113]:
0 4
1 4
2 3
3 4
4 1
5 4
Name: l, dtype: int64
If your last column is a string, you can convert it to a list first (regex=True is needed in recent pandas, where str.replace defaults to literal matching):
df.l.str.replace(r'[\[\]]', '', regex=True).str.split(r'\s*,\s*').map(len)
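Alternatively, if the strings are valid Python list literals, ast.literal_eval can parse them (a sketch; l is the hypothetical column name used above):
import ast
# Parse each string into a real list, then take its length
df.l.map(ast.literal_eval).map(len)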
Is this what you are looking for:
import pandas as pd
df = pd.DataFrame(columns=('member_id','device_id','event_date'),data=[
[19404,'dfbc9d3230304cdfb0316cc32c41b67f',['2016-04-28', '2016-04-27', '2016-04-26', '2016-04-22']],
[19555,'176e307bd8714a00ac2b99276123f0a7',['2016-04-29', '2016-04-28', '2016-04-27', '2016-04-23']],
[19632,'a6d4b631e09a4b31afef4c93472c7da3',['2016-04-29', '2016-04-28', '2016-04-27']],
[19792,'0146b09048ce4c47af4bbc69e7999137',['2016-04-23', '2016-04-22', '2016-04-21', '2016-04-20']],
[20258,'1510f9b4efc14183ad412eb54c9e058f',['2016-04-09']],
[20258,'5f42f4d02d38456689e58d6a1b9a3e16',['2016-04-29', '2016-04-28', '2016-04-25', '2016-04-22']]
])
new = df.groupby(['member_id', 'device_id'])['event_date']
for each_n in new:
    print(each_n[0], len(each_n[1].values[0]))
Output
(19404, 'dfbc9d3230304cdfb0316cc32c41b67f') 4
(19555, '176e307bd8714a00ac2b99276123f0a7') 4
(19632, 'a6d4b631e09a4b31afef4c93472c7da3') 3
(19792, '0146b09048ce4c47af4bbc69e7999137') 4
(20258, '1510f9b4efc14183ad412eb54c9e058f') 1
(20258, '5f42f4d02d38456689e58d6a1b9a3e16') 4
You can apply the len function to the grouped column via named aggregation (the old dict-renaming form of agg has been removed from pandas). The .iat[0] gets the first item in the group, which in this case is your list.
>>> df.groupby(['member_id', 'device_id'])['event_date'].agg(
...     event_count=lambda group: len(group.iat[0]))
event_count
member_id device_id
19404 dfbc9d3230304cdfb0316cc32c41b67f 4
19555 176e307bd8714a00ac2b99276123f0a7 4
19632 a6d4b631e09a4b31afef4c93472c7da3 3
19792 0146b09048ce4c47af4bbc69e7999137 4
20258 1510f9b4efc14183ad412eb54c9e058f 1
5f42f4d02d38456689e58d6a1b9a3e16 4
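As a side note, Series.str.len() also works when the values are lists rather than strings, so a groupby-free sketch over the original frame is simply:
# .str.len() returns the length of each element, list or string alike
df['event_date'].str.len()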

Drop pandas dataframe row based on max value of a column

I have a Dataframe like so:
p_rel y_BET sq_resid
1 0.069370 41.184996 0.292942
2 0.116405 43.101090 0.010953
3 0.173409 44.727748 0.036832
4 0.225629 46.681293 0.540616
5 0.250682 46.980616 0.128191
6 0.294650 47.446113 0.132367
7 0.322530 48.078038 0.235047
How do I get rid of the fourth row, given that it has the max value of sq_resid? Note: the max will change from dataset to dataset, so just removing the 4th row isn't enough.
I have tried several things; for example, I can replace the max value with NaN, which leaves the dataframe like below, but I haven't been able to remove the whole row.
p_rel y_BET sq_resid
1 0.069370 41.184996 0.292942
2 0.116405 43.101090 0.010953
3 0.173409 44.727748 0.036832
4 0.225629 46.681293 NaN
5 0.250682 46.980616 0.128191
6 0.294650 47.446113 0.132367
7 0.322530 48.078038 0.235047
You could just filter the df like so:
In [255]:
df.loc[df['sq_resid']!=df['sq_resid'].max()]
Out[255]:
p_rel y_BET sq_resid
1 0.069370 41.184996 0.292942
2 0.116405 43.101090 0.010953
3 0.173409 44.727748 0.036832
5 0.250682 46.980616 0.128191
6 0.294650 47.446113 0.132367
or drop using idxmax, which returns the row label of the max value:
In [257]:
df.drop(df['sq_resid'].idxmax())
Out[257]:
p_rel y_BET sq_resid
1 0.069370 41.184996 0.292942
2 0.116405 43.101090 0.010953
3 0.173409 44.727748 0.036832
5 0.250682 46.980616 0.128191
6 0.294650 47.446113 0.132367
7 0.322530 48.078038 0.235047
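Note that the two behave differently if the max is tied across several rows: the filter drops every row at the max, while drop with idxmax removes only the first occurrence. A sketch that drops all tied rows by label instead:
# Collect the labels of every row at the max, then drop them all
df.drop(df[df['sq_resid'] == df['sq_resid'].max()].index)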

Selecting columns from a pandas dataframe based on row conditions

I have a pandas dataframe
In [1]: df = pd.DataFrame(np.random.randn(10, 4))
Is there a way I can select only the columns whose last-row value is > 0?
The desired output would be a new dataframe with all rows, but only the columns where the last row is > 0.
In [201]: df = pd.DataFrame(np.random.randn(10, 4))
In [202]: df
Out[202]:
0 1 2 3
0 -1.380064 0.391358 -0.043390 -1.970113
1 -0.612594 -0.890354 -0.349894 -0.848067
2 1.178626 1.798316 0.691760 0.736255
3 -0.909491 0.429237 0.766065 -0.605075
4 -1.214366 1.907580 -0.583695 0.192488
5 -0.283786 -1.315771 0.046579 -0.777228
6 1.195634 -0.259040 -0.432147 1.196420
7 -2.346814 1.251494 0.261687 0.400886
8 0.845000 0.536683 -2.628224 -0.238449
9 0.246398 -0.548448 -0.295481 0.076117
In [203]: df.iloc[:, (df.iloc[-1] > 0).values]
Out[203]:
0 3
0 -1.380064 -1.970113
1 -0.612594 -0.848067
2 1.178626 0.736255
3 -0.909491 -0.605075
4 -1.214366 0.192488
5 -0.283786 -0.777228
6 1.195634 1.196420
7 -2.346814 0.400886
8 0.845000 -0.238449
9 0.246398 0.076117
Basically, this solution uses very basic pandas indexing, in particular the iloc indexer.
You can use the boolean series generated from the condition to index the columns of interest:
In [30]:
df = pd.DataFrame(np.random.randn(10, 4))
df
Out[30]:
0 1 2 3
0 -0.667736 -0.744761 0.401677 -1.286372
1 1.098134 -1.327454 1.409357 -0.180265
2 -0.105780 0.446195 -0.562578 -0.746083
3 1.366714 -0.685103 0.982354 1.928026
4 0.091040 -0.689676 0.425042 0.723466
5 0.798305 -1.454922 -0.017695 0.515961
6 -0.786693 1.496968 -0.112125 -1.303714
7 -0.211216 -1.321854 -0.892023 -0.583492
8 1.293255 0.936271 1.873870 0.790086
9 -0.699665 -0.953611 0.139986 -0.200499
In [32]:
df[df.columns[df.iloc[-1]>0]]
Out[32]:
2
0 0.401677
1 1.409357
2 -0.562578
3 0.982354
4 0.425042
5 -0.017695
6 -0.112125
7 -0.892023
8 1.873870
9 0.139986
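A more direct variant (a minimal sketch) uses .loc with the same boolean condition to mask the columns:
# ':' keeps all rows; the boolean Series selects the matching columns
df.loc[:, df.iloc[-1] > 0]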
Check out pandasql: https://pypi.python.org/pypi/pandasql
This blog post is a great tutorial for using SQL for Pandas DataFrames: http://blog.yhathq.com/posts/pandasql-sql-for-pandas-dataframes.html
This should get you started:
from pandasql import *
import pandas
def pysqldf(q):
    return sqldf(q, globals())
q = """
SELECT
*
FROM
df
WHERE
value > 0
ORDER BY 1;
"""
df = pysqldf(q)

Python Pandas: Get row by median value

I'm trying to get the row of the median value for a column.
I'm using data.median() to get the median value for 'column'.
id 30444.5
someProperty 3.0
numberOfItems 0.0
column 70.0
And data.median()['column'] is subsequently:
data.median()['column']
>>> 70.0
How can I get the row or index of the median value?
Is there anything similar to idxmax / idxmin?
I tried filtering, but it's not reliable in cases where multiple rows have the same value.
Thanks!
You can use rank and idxmin and apply it to each column:
import numpy as np
import pandas as pd
def get_median_index(d):
    # Percentile rank of every value; the median sits at rank 0.5
    ranks = d.rank(pct=True)
    # Distance of each value's rank from the median rank
    close_to_median = abs(ranks - 0.5)
    # Index label of the value whose rank is closest to the median
    return close_to_median.idxmin()
df = pd.DataFrame(np.random.randn(13, 4))
df
0 1 2 3
0 0.919681 -0.934712 1.636177 -1.241359
1 -1.198866 1.168437 1.044017 -2.487849
2 1.159440 -1.764668 -0.470982 1.173863
3 -0.055529 0.406662 0.272882 -0.318382
4 -0.632588 0.451147 -0.181522 -0.145296
5 1.180336 -0.768991 0.708926 -1.023846
6 -0.059708 0.605231 1.102273 1.201167
7 0.017064 -0.091870 0.256800 -0.219130
8 -0.333725 -0.170327 -1.725664 -0.295963
9 0.802023 0.163209 1.853383 -0.122511
10 0.650980 -0.386218 -0.170424 1.569529
11 0.678288 -0.006816 0.388679 -0.117963
12 1.640222 1.608097 1.779814 1.028625
df.apply(get_median_index, axis=0)
0 7
1 7
2 3
3 4
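For a single column, a shorter sketch with the same idea, assuming column 0 from the demo above: find the index of the value closest to the median.
(df[0] - df[0].median()).abs().idxmin()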
Maybe just: data[data['column'] == data.median()['column']] (though, as noted, this returns multiple rows when values tie at the median).
