Filter Dates in Pandas - python

I currently have a dataset structured the following way:
id_number start_date end_date data1 data2 data3 ...
Basically, I have a whole bunch of ids, each with a certain date range, and then multiple columns of summary data. My problem is that I need yearly totals of the summary data. This means I need to get to a place where I can groupby year on a single occurrence of each document. However, it is not guaranteed that a document exists for a given year, and the date ranges can span multiple years. Any help would be greatly appreciated; I am quite stuck.
Sample dataframe:
df = pd.DataFrame([[1, '3/10/2002', '4/12/2005'], [1, '4/13/2005', '5/20/2005'], [1, '5/21/2005', '8/10/2009'], [2, '2/20/2012', '2/20/2015'], [3, '10/19/2003', '12/12/2012']])
df.columns = ['id_num', 'start', 'end']
df.start = pd.to_datetime(df['start'], format="%m/%d/%Y")
df.end = pd.to_datetime(df['end'], format="%m/%d/%Y")

Assuming we have a DataFrame df:
id_num start end value
0 1 2002-03-10 2005-04-12 1
1 1 2005-04-13 2005-05-20 2
2 1 2007-05-21 2009-08-10 3
3 2 2012-02-20 2015-02-20 4
4 3 2003-10-19 2012-12-12 5
we can create a row for each year for our start to end ranges with:
ys = [np.arange(s, e + 1) for s, e in zip(df['start'].dt.year, df['end'].dt.year)]
df = (pd.DataFrame(ys, index=df.index)
      .stack()
      .astype(int)
      .reset_index(level=1, drop=True)
      .to_frame('year')
      .join(df, how='left')
      .reset_index())
print(df)
Here we first build ys, a list of the years covered by each start-end range in our DataFrame; the df = ... chain then splits these year lists into separate rows and joins back to the original DataFrame (very similar to what's done in this post: How to convert column with list of values into rows in Pandas DataFrame).
Output:
index year id_num start end value
0 0 2002 1 2002-03-10 2005-04-12 1
1 0 2003 1 2002-03-10 2005-04-12 1
2 0 2004 1 2002-03-10 2005-04-12 1
3 0 2005 1 2002-03-10 2005-04-12 1
4 1 2005 1 2005-04-13 2005-05-20 2
5 2 2007 1 2007-05-21 2009-08-10 3
6 2 2008 1 2007-05-21 2009-08-10 3
7 2 2009 1 2007-05-21 2009-08-10 3
8 3 2012 2 2012-02-20 2015-02-20 4
9 3 2013 2 2012-02-20 2015-02-20 4
10 3 2014 2 2012-02-20 2015-02-20 4
11 3 2015 2 2012-02-20 2015-02-20 4
12 4 2003 3 2003-10-19 2012-12-12 5
13 4 2004 3 2003-10-19 2012-12-12 5
14 4 2005 3 2003-10-19 2012-12-12 5
15 4 2006 3 2003-10-19 2012-12-12 5
16 4 2007 3 2003-10-19 2012-12-12 5
17 4 2008 3 2003-10-19 2012-12-12 5
18 4 2009 3 2003-10-19 2012-12-12 5
19 4 2010 3 2003-10-19 2012-12-12 5
20 4 2011 3 2003-10-19 2012-12-12 5
21 4 2012 3 2003-10-19 2012-12-12 5
Note:
I changed the original ranges to test cases where some years are missing for an id_num; e.g. for id_num=1 we have the ranges 2002-2005, 2005-2005 and 2007-2009, so 2006 should not appear for id_num=1 in the output (and it doesn't, so the test passes).
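Alternatively, on pandas 0.25+ the same expansion can be written with DataFrame.explode; a minimal sketch, assuming the same df as above:
# one list of covered years per row, then one row per list element
df['year'] = [list(range(s, e + 1))
              for s, e in zip(df['start'].dt.year, df['end'].dt.year)]
df = df.explode('year').reset_index(drop=True)
On older pandas versions the stack-based approach above is the way to go.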

I've taken your example and added some random values so we have something to work with:
df = pd.DataFrame([[1, '3/10/2002', '4/12/2005'], [1, '4/13/2005', '5/20/2005'], [1, '5/21/2005', '8/10/2009'], [2, '2/20/2012', '2/20/2015'], [3, '10/19/2003', '12/12/2012']])
df.columns = ['id_num', 'start', 'end']
df.start = pd.to_datetime(df['start'], format="%m/%d/%Y")
df.end = pd.to_datetime(df['end'], format="%m/%d/%Y")
np.random.seed(0) # seeding the random values for reproducibility
df['value'] = np.random.random(len(df))
So far we have:
id_num start end value
0 1 2002-03-10 2005-04-12 0.548814
1 1 2005-04-13 2005-05-20 0.715189
2 1 2005-05-21 2009-08-10 0.602763
3 2 2012-02-20 2015-02-20 0.544883
4 3 2003-10-19 2012-12-12 0.423655
We want a value recorded for each given date, whether it is a beginning or an end, so we will treat all dates the same. We just want date + user + value:
tmp = df[['end', 'value']].copy()
tmp = tmp.rename(columns={'end': 'start'})
new = pd.concat([df[['start', 'value']], tmp], sort=True)
new['id_num'] = pd.concat([df.id_num, df.id_num])  # doubling the id numbers
Giving us:
start value id_num
0 2002-03-10 0.548814 1
1 2005-04-13 0.715189 1
2 2005-05-21 0.602763 1
3 2012-02-20 0.544883 2
4 2003-10-19 0.423655 3
0 2005-04-12 0.548814 1
1 2005-05-20 0.715189 1
2 2009-08-10 0.602763 1
3 2015-02-20 0.544883 2
4 2012-12-12 0.423655 3
Now we can group by ID number and year:
new = new.groupby(['id_num', new.start.dt.year])['value'].sum().reset_index(0).sort_index()
id_num value
start
2002 1 0.548814
2003 3 0.423655
2005 1 2.581956
2009 1 0.602763
2012 2 0.544883
2012 3 0.423655
2015 2 0.544883
And finally, for each user we expand the range to have every year in between, filling forward missing data:
new = (new.groupby('id_num')
       .apply(lambda x: x.reindex(pd.RangeIndex(x.index.min(), x.index.max() + 1)).ffill())
       .drop(columns='id_num'))
value
id_num
1 2002 0.548814
2003 0.548814
2004 0.548814
2005 2.581956
2006 2.581956
2007 2.581956
2008 2.581956
2009 0.602763
2 2012 0.544883
2013 0.544883
2014 0.544883
2015 0.544883
3 2003 0.423655
2004 0.423655
2005 0.423655
2006 0.423655
2007 0.423655
2008 0.423655
2009 0.423655
2010 0.423655
2011 0.423655
2012 0.423655
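If what's ultimately needed is a single yearly total of the summary data across all ids (the original goal), one more groupby on the year level finishes it; a sketch assuming the new frame from the last step, whose index is (id_num, year):
# sum values across all id_nums for each year
yearly_totals = new.groupby(level=1)['value'].sum()
print(yearly_totals)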

Related

Loop through timeseries and fill missing data - Python

I have a DF such as the one below:
ID  Year  Value
1   2007  1
1   2008  1
1   2009  1
1   2011  1
1   2013  1
1   2014  1
1   2015  1
2   2008  1
2   2010  1
2   2011  1
2   2012  1
2   2013  1
2   2014  1
3   2009  1
3   2010  1
3   2011  1
3   2012  1
3   2013  1
3   2014  1
3   2015  1
As you can see, in ID '1' I am missing values for 2010 and 2012; for ID '2' I am missing values for 2007, 2009 and 2015; and for ID '3' I am missing 2007 and 2008. So, I would like to fill these gaps with the value '1'. What I would like to achieve is below:
ID  Year  Value
1   2007  1
1   2008  1
1   2009  1
1   2010  1
1   2011  1
1   2012  1
1   2013  1
1   2014  1
1   2015  1
2   2007  1
2   2008  1
2   2009  1
2   2010  1
2   2011  1
2   2012  1
2   2013  1
2   2014  1
2   2015  1
3   2007  1
3   2008  1
3   2009  1
3   2010  1
3   2011  1
3   2012  1
3   2013  1
3   2014  1
3   2015  1
I have created the below so far; however, that only fills for one ID, and I was struggling to find a way to loop through each ID, adding a 'value' for each year that is missing:
idx = pd.date_range('2007', '2020', freq='Y')
DF.index = pd.DatetimeIndex(DF.index)
DF_s = DF.reindex(idx, fill_value=0)
Any ideas would be helpful, please.
I'm not sure I got what you want to achieve, but if you want to fill NaNs in the "Value" column between 2007 and 2015 (suggesting that there are more years where you don't want to fill the column), you could do something like this:
import math
import pandas as pd

df1 = pd.DataFrame({'ID': [1, 1, 1, 2, 2, 2],
                    'Year': [2007, 2010, 2020, 2007, 2010, 2015],
                    'Value': [1, None, None, None, 1, None]})

# Write a function with your logic
def func(x, y):
    return 0 if math.isnan(y) and 2007 <= x <= 2015 else y

# Apply it to the df and update the column
df1['Value'] = df1.apply(lambda x: func(x.Year, x.Value), axis=1)
# ID Year Value
# 0 1 2007 1.0
# 1 1 2010 0.0
# 2 1 2020 NaN
# 3 2 2007 0.0
# 4 2 2010 1.0
# 5 2 2015 0.0
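The same fill can also be done without apply, using a boolean mask; a sketch under the same assumptions (fill with 0 only where Value is NaN and Year falls in 2007-2015):
# rows where Value is missing and Year is inside the window
m = df1['Value'].isna() & df1['Year'].between(2007, 2015)
df1.loc[m, 'Value'] = 0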
Answering my own question :). I needed to apply a lambda function after doing the groupby('Org') that adds a NaN for each year that is missing. The reset_index effectively ungroups it back into the original list.
f = lambda x: x.reindex(pd.date_range(pd.to_datetime('2007'), pd.to_datetime('2020'), name='date', freq='Y'))
DF_fixed = DF.set_index('Year').groupby(['Org']).apply(f).drop(['Org'], axis=1)
DF_fixed = DF_fixed.reset_index()
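For the exact table shown in the question (integer ID, Year, Value columns), the same result can be had without any datetime conversion by reindexing onto the full (ID, Year) grid; a minimal sketch, where the 2007-2015 range and the fill value of 1 come from the question:
import pandas as pd

# every (ID, Year) combination that should exist in the output
full = pd.MultiIndex.from_product(
    [DF['ID'].unique(), range(2007, 2016)], names=['ID', 'Year'])

# missing (ID, Year) rows get Value=1, per the question
out = DF.set_index(['ID', 'Year']).reindex(full, fill_value=1).reset_index()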

How to count the number of older time stamps for each row (id) in Python

I am trying to create an aggregation by counting the number of historical incidents per unique ID based on date
ID Date
1 1/1/2010
1 1/1/2011
1 1/1/2012
2 1/1/2010
2 1/1/2011
The desired output would be:
ID Date Historical_Incidents
1 1/1/2010 0
1 1/1/2011 1
1 1/1/2012 2
2 1/1/2010 0
2 1/1/2011 1
I tried first grouping by ID, and counting the number of unique dates, then merging with the original dataframe:
data4.groupby('Id')['Date'].nunique()
I'm getting the number of "Dates" per ID, but I'm trying to get the number of "Dates" which occurred before, per ID.
Making similar, but non-identical data:
>>> df = pd.DataFrame([[2009, 1], [2010, 1], [2011, 1], [2009, 2], [2010, 2]], columns=list('AB'))
>>> df
A B
0 2009 1
1 2010 1
2 2011 1
3 2009 2
4 2010 2
Assuming they are all sorted by date (here, the A column):
>>> df['count'] = df.groupby('B').cumcount()
>>> df
A B count
0 2009 1 0
1 2010 1 1
2 2011 1 2
3 2009 2 0
4 2010 2 1
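If the rows are not already sorted, sort by id and date first so that cumcount really counts the earlier rows; a sketch with the same toy columns:
# sort within each id (B) by date (A), then count preceding rows per id
df = df.sort_values(['B', 'A'])
df['count'] = df.groupby('B').cumcount()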

Split int64 Pandas column in two

I've been given a dataset that has dates as an integer, using the format 52019 for May 2019. I've put it into a Pandas DataFrame, and I need to extract that date format into a month column and a year column, but I can't figure out how to do that for an int64 datatype or how to handle the two-digit months. So I want to take something like
ID Date
1 22019
2 32019
3 52019
5 102019
and make it become
ID Month Year
1 2 2019
2 3 2019
3 5 2019
5 10 2019
What should I do?
divmod
df['Month'], df['Year'] = np.divmod(df.Date, 10000)
df
ID Date Month Year
0 1 22019 2 2019
1 2 32019 3 2019
2 3 52019 5 2019
3 5 102019 10 2019
Without mutating the original DataFrame, using assign:
df.assign(**dict(zip(['Month', 'Year'], np.divmod(df.Date, 10000))))
ID Date Month Year
0 1 22019 2 2019
1 2 32019 3 2019
2 3 52019 5 2019
3 5 102019 10 2019
Using // and %
df['Month'], df['Year'] = df.Date // 10000, df.Date % 10000
df
Out[528]:
ID Date Month Year
0 1 22019 2 2019
1 2 32019 3 2019
2 3 52019 5 2019
3 5 102019 10 2019
Use:
s = pd.to_datetime(df.pop('Date'), format='%m%Y')  # convert to datetime; pop deletes the col
df['Month'], df['Year'] = s.dt.month, s.dt.year  # extract month and year
print(df)
ID Month Year
0 1 2 2019
1 2 3 2019
2 3 5 2019
3 5 10 2019
str.extract can handle the tricky part of figuring out whether the Month has 1 or 2 digits.
(df['Date'].astype(str)
.str.extract(r'^(?P<Month>\d{1,2})(?P<Year>\d{4})$')
.astype(int))
Month Year
0 2 2019
1 3 2019
2 5 2019
3 10 2019
You may also use string slicing if it's guaranteed your numbers have only 5 or 6 digits (if not, use str.extract above):
u = df['Date'].astype(str)
df['Month'], df['Year'] = u.str[:-4], u.str[-4:]
df
ID Date Month Year
0 1 22019 2 2019
1 2 32019 3 2019
2 3 52019 5 2019
3 5 102019 10 2019
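Note that the slices are strings, not numbers; if integer columns are wanted (matching the divmod output above), cast afterwards:
df[['Month', 'Year']] = df[['Month', 'Year']].astype(int)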

Merge and delete duplicates

I have two large datasets I want to merge which have a common column, "gene".
All entries are unique in df1
In [85]: df1
Out[85]:
gene
0 Cdk12
1 Cdk2ap1
2 Cdk7
3 Cdk8
4 Cdx2
5 Cenpa
6 Cenpa
7 Cenpa
8 Cenpc1
9 Cenpe
10 Cenpj
df2
Out[86]:
gene year DOI
0 Cdk12 2001 10.1038/35055500
1 Cdk12 2002 10.1038/nature01266
2 Cdk12 2002 10.1074/jbc.M106813200
3 Cdk12 2003 10.1073/pnas.1633296100
4 Cdk12 2003 10.1073/pnas.2336103100
5 Cdk12 2005 10.1093/nar/gni045
6 Cdk12 2005 10.1126/science.1112014
7 Cdk12 2008 10.1101/gr.078352.108
8 Cdk12 2011 10.1371/journal.pbio.1000582
9 Cdk12 2012 10.1074/jbc.M111.321760
10 Cdk12 2016 10.1038/cdd.2015.157
11 Cdk12 2017 10.1093/cercor/bhw081
12 Cdk2ap1 2001 10.1006/geno.2001.6474
13 Cdk2ap1 2001 10.1038/35055500
14 Cdk2ap1 2002 10.1038/nature01266
I want to keep the order of df1 because I am going to join that alongside a different dataset.
Dataframe 2 has many entries for each "gene" and I want only one for each gene.
The most recent value in "year" will decide which "gene" entry to keep.
I have tried:
reading the files into pandas and then naming the columns
df1 = pd.read_csv('T1inorderforMerge.csv', header = None)
df2 = pd.read_csv('T2inorderforMerge.csv', header = None)
df1.columns = ["gene"]
df2.columns = ["gene","year","DOI"]
I have tried all variations of the code below, i.e. changing the how argument and the order of the DataFrames:
df3 = pd.merge(df1, df2, on="gene", how="left")
I have tried vertical and horizontal stacking, which, obvious to some, didn't work. There is lots of other messy code I have tried as well, but I really want to see how/if I can do this using pandas.
I think one possible solution is to create helper columns that count the occurrences of each gene, and then merge on pairs: the first Cdk12 in df1 with the first Cdk12 in df2, the second Cdk12 with the second, and so on. Unique values are merged one-to-one, the classic way (because a is then always 0):
df1['a'] = df1.groupby('gene').cumcount()
df2['a'] = df2.groupby('gene').cumcount()
print (df1)
gene a
0 Cdk12 0
1 Cdk2ap1 0
2 Cdk7 0
3 Cdk8 0
4 Cdx2 0
5 Cenpa 0
6 Cenpa 1
7 Cenpa 2
8 Cenpc1 0
9 Cenpe 0
10 Cenpj 0
print (df2)
gene year DOI a
0 Cdk12 2001 10.1038/35055500 0
1 Cdk12 2002 10.1038/nature01266 1
2 Cdk12 2002 10.1074/jbc.M106813200 2
3 Cdk12 2003 10.1073/pnas.1633296100 3
4 Cdk12 2003 10.1073/pnas.2336103100 4
5 Cdk12 2005 10.1093/nar/gni045 5
6 Cdk12 2005 10.1126/science.1112014 6
7 Cdk12 2008 10.1101/gr.078352.108 7
8 Cdk12 2011 10.1371/journal.pbio.1000582 8
9 Cdk12 2012 10.1074/jbc.M111.321760 9
10 Cdk12 2016 10.1038/cdd.2015.157 10
11 Cdk12 2017 10.1093/cercor/bhw081 11
12 Cdk2ap1 2001 10.1006/geno.2001.6474 0
13 Cdk2ap1 2001 10.1038/35055500 1
14 Cdk2ap1 2002 10.1038/nature01266 2
df3 = pd.merge(df1, df2, on=["a", "gene"], how="left").drop('a', axis=1)
print (df3)
gene year DOI
0 Cdk12 2001.0 10.1038/35055500
1 Cdk2ap1 2001.0 10.1006/geno.2001.6474
2 Cdk7 NaN NaN
3 Cdk8 NaN NaN
4 Cdx2 NaN NaN
5 Cenpa NaN NaN
6 Cenpa NaN NaN
7 Cenpa NaN NaN
8 Cenpc1 NaN NaN
9 Cenpe NaN NaN
10 Cenpj NaN NaN
Rows whose gene has no matching pair also get NaNs. But if you need to process only the unique values in df1['gene'], then drop_duplicates is needed first in both DataFrames:
df1 = df1.drop_duplicates('gene')
df2 = df2.drop_duplicates('gene')
print (df1)
gene
0 Cdk12
1 Cdk2ap1
2 Cdk7
3 Cdk8
4 Cdx2
5 Cenpa
8 Cenpc1
9 Cenpe
10 Cenpj
print (df2)
gene year DOI
0 Cdk12 2001 10.1038/35055500
12 Cdk2ap1 2001 10.1006/geno.2001.6474
df3 = pd.merge(df1, df2, on="gene", how="left")
print (df3)
gene year DOI
0 Cdk12 2001.0 10.1038/35055500
1 Cdk2ap1 2001.0 10.1006/geno.2001.6474
2 Cdk7 NaN NaN
3 Cdk8 NaN NaN
4 Cdx2 NaN NaN
5 Cenpa NaN NaN
6 Cenpc1 NaN NaN
7 Cenpe NaN NaN
8 Cenpj NaN NaN
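Since the question states that the most recent value in "year" should decide which gene row to keep, a variant (my reading of that intent, not shown in the output above) is to sort df2 by year descending before dropping duplicates:
# keep the row with the most recent year per gene, then merge one-to-one
df2_latest = df2.sort_values('year', ascending=False).drop_duplicates('gene')
df3 = pd.merge(df1.drop_duplicates('gene'), df2_latest, on='gene', how='left')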
Not sure what type(df1) is, but:
In [1]: df1 = ['a', 'f', 'g']
In [2]: df2 = [['a', 7, True], ['g',8, False]]
In [3]: [[inner_item for inner_item in df2 if inner_item[0] == outer_item][0]
   ...:  if len([inner_item for inner_item in df2 if inner_item[0] == outer_item]) > 0
   ...:  else [outer_item, None, None]
   ...:  for outer_item in df1]
Out[3]: [['a', 7, True], ['f', None, None], ['g', 8, False]]
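The same lookup reads more clearly (and avoids scanning df2 twice per item) with a dict keyed on the first element; a sketch assuming plain lists, as above:
# map each gene to its full row; missing genes fall back to [gene, None, None]
lookup = {row[0]: row for row in df2}
result = [lookup.get(gene, [gene, None, None]) for gene in df1]
# [['a', 7, True], ['f', None, None], ['g', 8, False]]
If df2 could repeat a key, the dict would keep the last occurrence rather than the first.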

Python function definition on two lists

I have:
Year Month Year_month
2009 2 2009/2
2009 3 2009/3
2007 4 2007/3
2006 10 2006/10
and I would like to get:
Year_month
200902
200903
200704
200610
I would like to combine the year and month columns into the format as Year_month (i.e. replace the original one). How could I do it? The following approach seems not working in Python. Thanks.
def f(x, y):
    return x*100 + y

for i in range(0, filename.shape[0]):
    filename['Year_month'][i] = f(filename['year'][i], filename['month'][i])
I think you can use zfill:
df['Year_month'] = df.Year.astype(str) + df.Month.astype(str).str.zfill(2)
print(df)
Year Month Year_month
0 2009 2 200902
1 2009 3 200903
2 2007 4 200704
3 2006 10 200610
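If an integer Year_month is preferred over a string, the arithmetic from the question's own f(x, y) also works vectorized:
df['Year_month'] = df.Year * 100 + df.Month  # e.g. 2009*100 + 2 == 200902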
df = pd.read_clipboard()
Year Month Year_month
0 2009 2 2009/2
1 2009 3 2009/3
2 2007 4 2007/3
3 2006 10 2006/10
df['Year_month'] = df.apply(lambda row: str(row.Year) + str(row.Month).zfill(2), axis=1)
Year Month Year_month
0 2009 2 200902
1 2009 3 200903
2 2007 4 200704
3 2006 10 200610
