I have a tall pandas dataframe called use with columns ID, Date, .... Each row is unique, but each ID has many rows, with at most one row per ID per date:
ID Date Other_data
1 1-1-01 10
2 1-1-01 23
3 1-1-01 0
1 1-2-01 11
3 1-2-01 1
1 1-3-01 9
2 1-3-01 20
3 1-3-01 2
I also have a list of unique IDs: ids = use['ID'].drop_duplicates()
I want to find the intersection of all of the dates, that is, only the dates for which each ID has data. The end result in this toy problem should be [1-1-01, 1-3-01]
Currently, I loop through, subsetting by ID and taking the intersection. Roughly speaking, it looks like this:
dates = use['Date'].drop_duplicates()
for i in ids:
    id_dates = use[(use['ID'] == i)]['Date'].values
    dates = set(dates).intersection(id_dates)
This strikes me as horrifically inefficient. What is a more efficient way to identify dates where each ID has data?
Thanks very much!
Using crosstab: a date where some ID has no row shows up as a column of the crosstab that contains a 0, so the dates you want are the columns with no zero in them. df.eq(0).any() (column-wise) finds the columns that do contain one.
df=pd.crosstab(use.ID,use.Date)
df
Out[856]:
Date 1-1-01 1-2-01 1-3-01
ID
1 1 1 1
2 1 0 1
3 1 1 1
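To finish that idea, keep the crosstab columns that contain no zero (a minimal sketch continuing from the df above):
# a date where every ID has data is a column with no zero in it
dates = df.columns[~df.eq(0).any()].tolist()
print(dates)
# ['1-1-01', '1-3-01']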
Find the unique IDs per date, then check if that's all of them.
gp = use.groupby('Date').ID.nunique()
gp[gp == use.ID.nunique()].index.tolist()
# ['1-1-01', '1-3-01']
I have a large dataset (df) with lots of columns and I am trying to get the total number of records for each day.
  | datetime   | id | col3 | col4 | col...
1 | 11-11-2020 | 7  | col3 | col4 | col...
2 | 10-11-2020 | 5  | col3 | col4 | col...
3 | 09-11-2020 | 5  | col3 | col4 | col...
4 | 10-11-2020 | 4  | col3 | col4 | col...
5 | 10-11-2020 | 4  | col3 | col4 | col...
6 | 07-11-2020 | 4  | col3 | col4 | col...
I want my result to be something like this
  | datetime   | id | col3 | col4 | col... | Count
6 | 07-11-2020 | 4  | col3 | col4 | col... | 1
3 |            | 5  | col3 | col4 | col... | 1
2 | 10-11-2020 | 5  | col3 | col4 | col... | 1
4 |            | 4  | col3 | col4 | col... | 2
1 | 11-11-2020 | 7  | col3 | col4 | col... | 1
I tried to use resample like this: df = df.groupby(['id','col3', pd.Grouper(key='datetime', freq='D')]).sum().reset_index(), and this is my result. I am still new to programming and pandas; I have read through the pandas docs but am still unable to get the output I want.
  | datetime   | id | col3 | col4 | col...
6 | 07-11-2020 | 4  | col3 | 1    | 0.0
3 | 07-11-2020 | 5  | col3 | 1    | 0.0
2 | 10-11-2020 | 5  | col3 | 1    | 0.0
4 | 10-11-2020 | 4  | col3 | 2    | 0.0
1 | 11-11-2020 | 7  | col3 | 1    | 0.0
try this:
df = df.groupby(['datetime','id','col3']).count()
If you want the count values for all columns based only on the date, then:
df.groupby('datetime').count()
You'll get a DataFrame that has the datetime as its index, with the column cells holding the number of entries for that date.
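If the Count column from the question is what's needed, i.e. one row per (datetime, id) pair plus the number of records that fell into it, a minimal sketch on made-up data of that shape (column names taken from the question, the other columns omitted) would be:
import pandas as pd

# made-up data in the shape of the question
df = pd.DataFrame({
    "datetime": ["11-11-2020", "10-11-2020", "09-11-2020",
                 "10-11-2020", "10-11-2020", "07-11-2020"],
    "id": [7, 5, 5, 4, 4, 4],
})

# size() counts the rows in each (datetime, id) group; reset_index names the count column
out = df.groupby(["datetime", "id"]).size().reset_index(name="Count")
print(out)
#      datetime  id  Count
# 0  07-11-2020   4      1
# 1  09-11-2020   5      1
# 2  10-11-2020   4      2
# 3  10-11-2020   5      1
# 4  11-11-2020   7      1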
I have a list of recorded diagnoses like this:
df = pd.DataFrame({
"DiagnosisTime": ["2017-01-01 08:23:00", "2017-01-01 08:23:00", "2017-01-01 08:23:03", "2017-01-01 08:27:00", "2019-12-31 20:19:39", "2019-12-31 20:19:39"],
"ID": [1,1,1,1,2,2]
})
There are multiple subjects, identified by an ID. Each subject may have one or more diagnoses, and each diagnosis may be comprised of multiple entries (because multiple things are recorded, though not in this example).
The individual diagnoses (with multiple rows) can, to some extent, be identified by the DiagnosisTime. However, sometimes there is a small delay while the data for one diagnosis is written, so I want to allow a tolerance of a few seconds when grouping by DiagnosisTime.
In this example I want a result as follows:
There are two diagnoses for ID 1: rows 0, 1, 2 and row 3. Note the slightly different DiagnosisTime in row 2 compared to rows 0 and 1. ID 2 has one diagnosis, comprised of rows 4 and 5.
For each ID I want to set the counter back to 1 (or 0 if thats easier).
This is how far I've come:
df["DiagnosisTime"] = pd.to_datetime(df["DiagnosisTime"])
df["diagnosis_number"] = df.groupby([pd.Grouper(freq='5S', key="DiagnosisTime"), 'ID']).ngroup()
I think I successfully identified the diagnoses within each ID (I'm not entirely sure about the Grouper), but I don't know how to reset the counter.
If that is not possible, I would also be satisfied with a function that returns all records of one ID that have the lowest diagnosis_number within that group.
You can use a lambda function with GroupBy.transform and factorize: within each ID, factorize renumbers the existing group labels from 0, so adding 1 restarts the counter at 1 for every subject:
df["diagnosis_number"] = (df.groupby('ID')['diagnosis_number']
.transform(lambda x: pd.factorize(x)[0]) + 1)
print (df)
DiagnosisTime ID diagnosis_number
0 2017-01-01 08:23:00 1 1
1 2017-01-01 08:23:00 1 1
2 2017-01-01 08:23:03 1 1
3 2017-01-01 08:27:00 1 2
4 2019-12-31 20:19:39 2 1
5 2019-12-31 20:19:39 2 1
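One caveat, as an aside rather than part of the answer above: pd.Grouper(freq='5S') uses fixed 5-second bins, so two timestamps only a couple of seconds apart can still land in different bins if a bin boundary falls between them. A gap-based sketch (assuming the same df as in the question, with DiagnosisTime already converted to datetime) instead starts a new diagnosis whenever the gap to the previous record of the same ID exceeds the tolerance:
import pandas as pd

tolerance = pd.Timedelta(seconds=5)
df = df.sort_values(["ID", "DiagnosisTime"])

# True whenever the gap to the previous row of the same ID exceeds the tolerance
new_diag = df.groupby("ID")["DiagnosisTime"].diff() > tolerance

# count the "new diagnosis" flags cumulatively within each ID, starting at 1
df["diagnosis_number"] = new_diag.astype(int).groupby(df["ID"]).cumsum() + 1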
I have a dataframe with individuals and their household IDs and I would like to create a variable that contains the household size.
I am using Python 3.7. I tried to use the groupby function combined with size (I also tried count). The idea is that, for each observation of an individual, I want to count the number of observations in the dataframe with the same household ID and store it in a new variable.
Consider that each observation has a household ID (hh_id) and that I would like to store the household size in the hh_size variable.
I tried the following:
df['hh_size'] = df.groupby('hh_id').size()
I expect hh_size variable to contain for each observation the household size. However, I get a column with only nan.
When I use df.groupby('hh_id').size() alone, I get the expected result, but I cannot manage to store it in the hh_size variable.
For example:
individual hh_id hh_size
1 1 2
2 1 2
3 2 1
4 3 1
Thanks,
Julien
If I understand correctly, you have to convert it to a new DataFrame with .to_frame(name='hh_size'), and you may have to reset the index.
import pandas as pd
df = pd.DataFrame({
'individual': [1,1,2,2,3,4],
'hh_id': [1,1,1,1,2,3],
})
sizes = df.groupby(['individual', 'hh_id']).size()
new_df = sizes.to_frame(name='hh_size').reset_index()
print(new_df)
Result:
individual hh_id hh_size
0 1 1 2
1 2 1 2
2 3 2 1
3 4 3 1
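If the goal is the per-row hh_size column from the question rather than a separate aggregated frame, a transform-based sketch (using the question's own example values) would be:
import pandas as pd

df = pd.DataFrame({
    "individual": [1, 2, 3, 4],
    "hh_id": [1, 1, 2, 3],
})

# transform('size') broadcasts each household's size back onto every row of that household
df["hh_size"] = df.groupby("hh_id")["hh_id"].transform("size")
print(df)
#    individual  hh_id  hh_size
# 0           1      1        2
# 1           2      1        2
# 2           3      2        1
# 3           4      3        1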
I am new to pandas. I'm trying to sort a column within each group. So far, I was able to group the first and second column values together and calculate the mean of the third column, but I am still struggling to sort that third column.
This is my input dataframe
This is my dataframe after applying groupby and mean function
I used the following line of code to group the input dataframe:
df_o=df.groupby(by=['Organization Group','Department']).agg({'Total Compensation':np.mean})
Please let me know how to sort the last column for each group in the 1st column using pandas.
It seems you need sort_values:
# to return a DataFrame, add the parameter as_index=False
df_o=df.groupby(['Organization Group','Department'],
as_index=False)['Total Compensation'].mean()
df_o = df_o.sort_values(['Total Compensation','Organization Group'])
Sample:
df = pd.DataFrame({'Organization Group':['a','b','a','a'],
'Department':['d','f','a','a'],
'Total Compensation':[1,8,9,1]})
print (df)
Department Organization Group Total Compensation
0 d a 1
1 f b 8
2 a a 9
3 a a 1
df_o=df.groupby(['Organization Group','Department'],
as_index=False)['Total Compensation'].mean()
print (df_o)
Organization Group Department Total Compensation
0 a a 5
1 a d 1
2 b f 8
df_o = df_o.sort_values(['Total Compensation','Organization Group'])
print (df_o)
Organization Group Department Total Compensation
1 a d 1
0 a a 5
2 b f 8
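If the intent is instead to keep each Organization Group block together and sort Total Compensation within it, swapping the sort keys (a small variation on the code above) should do that:
# group column first, then the value to sort inside each group
df_o = df_o.sort_values(['Organization Group', 'Total Compensation'])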
To start with, I have two problems; here is the first one:
I have a dataframe df in which the same userid appears many times, together with a date and some other, unimportant columns:
userid date
0 243 2014-04-01
1 234 2014-12-01
2 234 2015-11-01
3 589 2016-07-01
4 589 2016-03-01
I am currently trying to group by userid, sort the dates in descending order, and keep only the twelve most recent per user. My code looks like this:
df = df.groupby(['userid'], group_keys=False).agg(lambda x: x.sort_values(['date'], ascending=False, inplace=False).head(12))
And I get this error:
ValueError: cannot copy sequence with size 6 to array axis with dimension 12
At the moment my aim is to avoid splitting the dataframe into individual ones.
My second problem is more complex:
I am trying to find out whether the sorted dates (per userid group) are monthly consecutive. That means that if there is a date for one userid group, for example userid 234 with date 2014-04-01, the next entry below must be userid 234 with date 2014-03-01. The day does not matter; only the year and month are important.
Only such a run of 12 consecutive dates should be copied into another dataframe.
A second dataframe df2 contains the same userids, but each appears only once, together with another column 'code'. Here is an example:
userid code
0 433805 1
24 5448 0
48 3434 1
72 34434 1
96 3202 1
120 23766 1
153 39457 0
168 4113 1
172 3435 5
374 34093 1
To summarize: I am trying to check whether there are 12 consecutive months per userid and to copy every correct sequence into another dataframe. For this I also have to check the 'code' from df2.
This is a version of my code:
df['YearMonthDiff'] = df['date'].map(lambda x: 1000*x.year + x.month).diff()
df['id_before'] = df['userid'].shift()
final_df = pd.DataFrame()
for group in df.groupby(['userid'], group_keys=False):
    fi = group[1]
    if (fi['userid'] <> fi['id_before']) & group['YearMonthDiff'].all(-1.0) & df.loc[fi.userid]['code'] != 5:
        final_df.append(group['userid','date', 'consum'])
First I computed an integer from the date and took diff(). In other posts I saw that people shift the column to compare the values in the current row with the row before. Then I used groupby('userid') to iterate over the individual groups. Now it gets extra ugly: I tried to find the beginning of each userid group, check whether there are only consecutive months and the correct 'code', and finally append the group to the final dataframe.
One of the biggest problems is comparing a row with the following row. I can iterate over rows with iterrows(), but I cannot compare them without shift(). There is also the calendar module, but I will take a look at that over the weekend. Sorry for the mess, I am new to pandas.
Does anyone have an idea how to solve my problem?
For your first problem, try this:
df.groupby(by='userid').apply(lambda x: x.sort_values(by='date', ascending=False).iloc[[e for e in range(12) if e < len(x)]])
Using groupby and nlargest, we get the index values of the largest dates. Then we use .loc to get just those rows:
df.loc[df.groupby('userid').date.nlargest(12).index.get_level_values(1)]
Consider the dataframe df
import numpy as np
import pandas as pd

dates = pd.date_range('2015-08-08', periods=10)
df = pd.DataFrame(dict(
    userid=np.arange(2).repeat(4),
    date=np.random.choice(dates, 8, False)
))
print(df)
date userid
0 2015-08-12 0 # <-- keep
1 2015-08-09 0
2 2015-08-11 0
3 2015-08-15 0 # <-- keep
4 2015-08-13 1
5 2015-08-10 1
6 2015-08-17 1 # <-- keep
7 2015-08-16 1 # <-- keep
We'll keep the latest 2 dates per user id
df.loc[df.groupby('userid').date.nlargest(2).index.get_level_values(1)]
date userid
0 2015-08-12 0
3 2015-08-15 0
6 2015-08-17 1
7 2015-08-16 1
In case someone is interested, I solved my second problem as follows:
I cast the date to an int, calculated the difference, and shifted the userid down one row, as in my example above. Then follows this (based on a solution I found on Stack Overflow):
gr_ob = df.groupby('userid')
gr_dict = gr_ob.groups
final_df = pd.DataFrame(columns=['userid', 'date', 'consum'])
for group_name in gr_dict.keys():
    new_df = gr_ob.get_group(group_name)
    if (new_df['userid'].iloc[0] != new_df['id_before'].iloc[0]) & (new_df['YearMonthDiff'].iloc[1:] == -1.0).all() & (len(new_df) == 12):
        final_df = final_df.append(new_df[['userid', 'date', 'consum']])
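As an aside, not part of the original solution: DataFrame.append was removed in pandas 2.0, so on a current pandas the same loop can collect the pieces in a list and concatenate once at the end. A sketch, assuming df already carries the helper columns id_before and YearMonthDiff and the consum column as above:
import pandas as pd

pieces = []
for _, new_df in df.groupby('userid'):
    starts_new_user = new_df['userid'].iloc[0] != new_df['id_before'].iloc[0]
    consecutive_months = (new_df['YearMonthDiff'].iloc[1:] == -1.0).all()
    if starts_new_user and consecutive_months and len(new_df) == 12:
        pieces.append(new_df[['userid', 'date', 'consum']])

final_df = (pd.concat(pieces, ignore_index=True)
            if pieces else pd.DataFrame(columns=['userid', 'date', 'consum']))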