Adding repeating date column to pandas DataFrame - python

I am new to pandas and I am struggling to add a date column to my DataFrame df, which comes from a .csv file. The DataFrame has several unique ids, and each id has 120 months; I need to add a date column so that each id gets exactly the same 120 dates. The difficulty is that after the first id ends, the next id begins and the dates need to start over again. My data in the csv file looks like this:
month id
1 1593
2 1593
...
120 1593
1 8964
2 8964
...
120 8964
1 58944
...
Here is my code; I am not really sure how I should use the groupby method to add dates to my DataFrame based on id:
group=df.groupby('id')
group['date']=pd.date_range(start='2020/6/1', periods=120, freq='MS').shift(14,freq='D')
Please help me!!!

If you know how many sets of 120 you have, you can use this; just change the 2 at the end. This example repeats the 120 dates twice. You may have to adapt it for your specific use.
new_dates = list(pd.date_range(start='2020/6/1', periods=120, freq='MS').shift(14,freq='D'))*2
df = pd.DataFrame({'date': new_dates})
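If the number of ids is not known in advance, a small variation (a sketch, assuming df is sorted by id with exactly 120 rows per id) derives the repeat count from the data itself:
import pandas as pd
# Build the 120 shifted month-start dates once.
dates = pd.date_range(start='2020/6/1', periods=120, freq='MS').shift(14, freq='D')
# Repeat the block once per unique id instead of hardcoding the count.
df['date'] = list(dates) * df['id'].nunique()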

These two are equivalent, except that one uses a lambda:
import pandas

def repeatingDates(numIds):
    return [d.strftime('%Y/%m/%d')
            for d in pandas.date_range(start='2020/6/1', periods=120, freq='MS')] * numIds

repeatingDates = lambda numIds: [
    d.strftime('%Y/%m/%d')
    for d in pandas.date_range(start='2020/6/1', periods=120, freq='MS')] * numIds
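For example (a sketch, assuming df is sorted by id with 120 rows per id; note the function returns formatted strings, not Timestamps):
df['date'] = repeatingDates(df['id'].nunique())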

You can use Pandas transform. This is how I solved it:
dataf['dates'] = (dataf
                  .groupby("id")["month"]
                  .transform(lambda d: pd.date_range(start='2020/6/1',
                                                     periods=d.max(),
                                                     freq='MS').shift(14, freq='D')))
Results:
month id dates
0 1 1593 2020-06-15
1 2 1593 2020-07-15
2 3 1593 2020-08-15
3 1 8964 2020-06-15
4 2 8964 2020-07-15
5 1 58944 2020-06-15
6 2 58944 2020-07-15
7 3 58944 2020-08-15
8 4 58944 2020-09-15
Test data:
import io
import pandas as pd
dataf = pd.read_csv(io.StringIO("""
month,id
1,1593
2,1593
3,1593
1,8964
2,8964
1,58944
2,58944
3,58944
4,58944""")).astype(int)
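Since the month column already numbers each period, another option (a sketch, not from the answers above) is to map each month number directly to its date; this does not depend on row order or on every id having all 120 rows:
import pandas as pd
# One date per month number, shifted to the 15th as in the question.
dates = pd.date_range(start='2020/6/1', periods=120, freq='MS').shift(14, freq='D')
month_to_date = dict(zip(range(1, 121), dates))
dataf['dates'] = dataf['month'].map(month_to_date)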

Related

Problem with group by max period in dataframe pandas

I'm still a novice with Python and I'm having problems trying to group some data to show the record that has the highest (maximum) date. The dataframe is as follows:
...
I am trying the following:
df_2 = df.max(axis = 0)
df_2 = df.periodo.max()
df_2 = df.loc[df.groupby('periodo').periodo.idxmax()]
And it gives me back:
Timestamp('2020-06-01 00:00:00')
periodo 2020-06-01 00:00:00
valor 3.49136
Although the value for 'periodo' is correct, the one for 'valor' is not, since I need to obtain the corresponding complete record ('periodo' and 'valor'), not the maximum of each one. I have tried other ways but I can't get to what I want.
What do I need to do?
Thank you in advance, I will be attentive to your answers!
Regards!
# import packages we need, seed random number generator
import pandas as pd
import datetime
import random
random.seed(1)
Create example dataframe
start_date = datetime.date(2020, 1, 1)  # start date, matching the example output below
day_count = 10                          # number of days, matching the example output below
dates = [start_date + datetime.timedelta(n) for n in range(day_count)]
values = [random.randint(1,1000) for _ in dates]
df = pd.DataFrame(zip(dates,values),columns=['dates','values'])
ie df will be:
dates values
0 2020-01-01 389
1 2020-01-02 808
2 2020-01-03 215
3 2020-01-04 97
4 2020-01-05 500
5 2020-01-06 30
6 2020-01-07 915
7 2020-01-08 856
8 2020-01-09 400
9 2020-01-10 444
Select rows with highest entry in each column
You can do:
df[df['dates'] == df['dates'].max()]
(Or, if you want to use idxmax, you can do: df.loc[[df['dates'].idxmax()]].)
Returning:
dates values
9 2020-01-10 444
i.e. this is the row with the latest date.
And:
df[df['values'] == df['values'].max()]
(Or, using idxmax again: df.loc[[df['values'].idxmax()]], as in Scott Boston's answer.)
returning:
dates values
6 2020-01-07 915
i.e. this is the row with the highest value in the values column.
I think you need something like:
df.loc[[df['valor'].idxmax()]]
Here idxmax is used on the 'valor' column, and the resulting index then selects that full row.
MVCE:
import pandas as pd
import numpy as np
np.random.seed(123)
df = pd.DataFrame({'periodo': pd.date_range('2018-07-01', periods=600, freq='d'),
                   'valor': np.random.random(600) + 3})
df.loc[[df['valor'].idxmax()]]
Output:
periodo valor
474 2019-10-18 3.998918
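A one-liner alternative (not from the original answer) is DataFrame.nlargest, which likewise returns the complete row:
df.nlargest(1, 'valor')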

Pandas get the Month Ending Values from Series

I need to get the month-end balance from a series of entries.
Sample data:
date contrib totalShrs
0 2009-04-23 5220.00 10000.000
1 2009-04-24 10210.00 20000.000
2 2009-04-27 16710.00 30000.000
3 2009-04-30 22610.00 40000.000
4 2009-05-05 28909.00 50000.000
5 2009-05-20 38409.00 60000.000
6 2009-05-28 46508.00 70000.000
7 2009-05-29 56308.00 80000.000
8 2009-06-01 66108.00 90000.000
9 2009-06-02 78108.00 100000.000
10 2009-06-12 86606.00 110000.000
11 2009-08-03 95606.00 120000.000
The output would look something like this:
2009-04-30 40000
2009-05-31 80000
2009-06-30 110000
2009-07-31 110000
2009-08-31 120000
Is there a simple Pandas method?
I don't see how I can do this with something like a groupby?
Or would I have to do something like iterrows, find all the monthly entries, order them by date and pick the last one?
Thanks.
Use Grouper with GroupBy.last, forward fill missing values with ffill, and finish with Series.reset_index:
#if necessary
#df['date'] = pd.to_datetime(df['date'])
df = df.groupby(pd.Grouper(freq='M', key='date'))['totalShrs'].last().ffill().reset_index()
#alternative
#df = df.resample('M', on='date')['totalShrs'].last().ffill().reset_index()
#note: pandas >= 2.2 renames the month-end alias from 'M' to 'ME'
print (df)
date totalShrs
0 2009-04-30 40000.0
1 2009-05-31 80000.0
2 2009-06-30 110000.0
3 2009-07-31 110000.0
4 2009-08-31 120000.0
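If you want the integer balances shown in the question's expected output, a small follow-up (a sketch) is to cast once the forward fill has removed any NaN:
df['totalShrs'] = df['totalShrs'].astype(int)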
The following gives you the information you want, i.e. end-of-month values, though the format is not exactly what you asked for:
df['month'] = df['date'].str.split('-', expand=True)[1]  # split date column to get month column
newdf = pd.DataFrame(columns=df.columns)  # create a new dataframe for output
grouped = df.groupby('month')  # get grouped values
for g in grouped:  # for each group, get last row
    gdf = pd.DataFrame(data=g[1])
    newdf.loc[len(newdf), :] = gdf.iloc[-1, :]  # fill new dataframe with last row obtained
newdf = newdf.drop('date', axis=1)  # drop date column, since month column is there
print(newdf)
Output:
contrib totalShrs month
0 22610 40000 04
1 56308 80000 05
2 86606 110000 06
3 95606 120000 08

Grouping by column groups on a data frame in python pandas

I have a data frame with columns for every month of every year from 2000 to 2016
df.columns
output
Index(['2000-01', '2000-02', '2000-03', '2000-04', '2000-05', '2000-06',
'2000-07', '2000-08', '2000-09', '2000-10',
...
'2015-11', '2015-12', '2016-01', '2016-02', '2016-03', '2016-04',
'2016-05', '2016-06', '2016-07', '2016-08'],
dtype='object', length=200)
and I would like to group these columns by quarter.
I made a dictionary, believing the best approach would be to use groupby and then aggregate with mean:
m2q = {'2000q1': ['2000-01', '2000-02', '2000-03'],
'2000q2': ['2000-04', '2000-05', '2000-06'],
'2000q3': ['2000-07', '2000-08', '2000-09'],
...
'2016q2': ['2016-04', '2016-05', '2016-06'],
'2016q3': ['2016-07', '2016-08']}
but
df.groupby(m2q)
is not giving me the desired output.
In fact, it's giving me an empty grouping.
Any suggestions to make this grouping work?
Or perhaps a more Pythonic solution to categorize by quarters, taking the mean of the specified columns?
You can convert your index to a DatetimeIndex (example 1) or a PeriodIndex (example 2).
See also the Time Series / Date functionality section of the pandas documentation for more detail.
import numpy as np
import pandas as pd
idx = ['2000-01', '2000-02', '2000-03', '2000-04', '2000-05', '2000-06',
'2000-07', '2000-08', '2000-09', '2000-10', '2000-11', '2000-12']
df = pd.DataFrame(np.arange(12), index=idx, columns=['SAMPLE_DATA'])
print(df)
SAMPLE_DATA
2000-01 0
2000-02 1
2000-03 2
2000-04 3
2000-05 4
2000-06 5
2000-07 6
2000-08 7
2000-09 8
2000-10 9
2000-11 10
2000-12 11
# Handle your timeseries data with pandas timeseries / date functionality
df.index=pd.to_datetime(df.index)
example 1
print(df.resample('Q').sum())
SAMPLE_DATA
2000-03-31 3
2000-06-30 12
2000-09-30 21
2000-12-31 30
example 2
print(df.to_period('Q').groupby(level=0).sum())
SAMPLE_DATA
2000Q1 3
2000Q2 12
2000Q3 21
2000Q4 30
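Note that in the question the months are columns rather than the index, and the goal was the mean. A possible adaptation (a sketch, assuming every column label is one of the 'YYYY-MM' strings shown) is to transpose, group the transposed index by quarter, and transpose back:
quarterly = (df.T
             .groupby(pd.PeriodIndex(df.T.index, freq='Q'))
             .mean()
             .T)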

Adding days to column in pandas using separate column integers

I've tried datetime.timedelta on the series, as well as pd.DateOffset. Neither works. I do know I could iterate over this dataframe and add them manually, but I was looking for a vectorized approach.
Example:
d = {pd.Timestamp('2015-01-02'):{'days_delinquent':11}, pd.Timestamp('2015-01-15'):{'days_delinquent':23}}
>>> dataf = pd.DataFrame.from_dict(d,orient='index')
>>> dataf
days_delinquent
2015-01-02 11
2015-01-15 23
Just trying to add 11 and 23 days to the rows below. The column I'm adding to in real life is not the index, but I can obviously just make it the index when doing this.
I guess this wasn't self-explanatory, but the output would then be a new column with Date (the index in this case) + datetime.timedelta(days=dataf['days_delinquent']).
You can convert your days_delinquent column to timedelta64[D] (an offset in days) and add it to the index, e.g.:
import pandas as pd
d = {pd.Timestamp('2015-01-02'):{'days_delinquent':11}, pd.Timestamp('2015-01-15'):{'days_delinquent':23}}
df = pd.DataFrame.from_dict(d,orient='index')
df['returned_on'] = df.index + df.days_delinquent.astype('timedelta64[D]')
Much nicer (thanks DSM) is to use pd.to_timedelta, so the units are more easily changed if need be:
df['returned_on'] = df.index + pd.to_timedelta(df.days_delinquent, 'D')
Gives you:
days_delinquent returned_on
2015-01-02 11 2015-01-13
2015-01-15 23 2015-02-07
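Since the asker mentions the real column is not the index, the same idea works with an ordinary column (a sketch, where 'date' is a hypothetical datetime column):
df['returned_on'] = df['date'] + pd.to_timedelta(df['days_delinquent'], 'D')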
import pandas as pd

d = {pd.Timestamp('2015-01-02'): {'days_delinquent': 11},
     pd.Timestamp('2015-01-15'): {'days_delinquent': 23}}
df = pd.DataFrame.from_dict(d, orient='index')

def add_days(x):
    return x['index'] + pd.Timedelta(days=x['days_delinquent'])

df.reset_index().apply(add_days, axis=1)
Output:
0 2015-01-13
1 2015-02-07
dtype: datetime64[ns]
import datetime

dataf['result'] = [d + datetime.timedelta(delta)
                   for d, delta in zip(dataf.index, dataf.days_delinquent)]
dataf
Out[56]:
days_delinquent result
2015-01-02 11 2015-01-13
2015-01-15 23 2015-02-07

pandas DatetimeIndex between_time function (how to get a not_between_time)

I have a pandas df, and I use between_time with a and b to clean the data. How do I get the opposite, not-between_time behavior?
I know I can try something like:
df.between_time('00:00:00', a)
df.between_time(b, '23:59:59')
then combine the results and sort the new df. That's very inefficient, and it doesn't work for me as I have data between 23:59:59 and 00:00:00.
Thanks
You could find the index locations of rows with time between a and b, and then use df.index.difference to remove those from the index:
import pandas as pd
import io
text = '''\
date,time, val
20120105, 080000, 1
20120105, 080030, 2
20120105, 080100, 3
20120105, 080130, 4
20120105, 080200, 5
20120105, 235959.01, 6
'''
df = pd.read_csv(io.StringIO(text), parse_dates=[[0, 1]], index_col=0)
index = df.index
ivals = index.indexer_between_time('8:01:30', '8:02')
print(df.reindex(index.difference(index[ivals])))
yields
val
date_time
2012-01-05 08:00:00 1
2012-01-05 08:00:30 2
2012-01-05 08:01:00 3
2012-01-05 23:59:59.010000 6
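An alternative (a sketch, not from the original answer) is to build a boolean mask from indexer_between_time and invert it, which avoids the set difference and preserves the original row order:
import numpy as np
# Mark rows whose time falls between 8:01:30 and 8:02, then keep the rest.
mask = np.zeros(len(df), dtype=bool)
mask[df.index.indexer_between_time('8:01:30', '8:02')] = True
print(df[~mask])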
