Get values from .csv closest in time to values in another dataframe - python

I have two dataframes that I created with pandas and stored as .csv files. Each row of both dataframes has date and time columns, but the timestamps aren't necessarily the same. So, I want to create a combined pandas dataframe such that the two are joined on the basis of the CLOSEST times.
This is my first dataframe (with Filename, file_date and Avg_time columns). This is my second dataframe (with AST, f107 and kp columns). I want to get the kp and f107 values for each filename that are closest in date and time to the Avg_time column of each row in the first dataframe. How do I do this? Is there a merge with a method='nearest' kind of way to do this with pandas?

You can use pd.merge_asof (pandas 0.20.2 here) with direction='nearest':
pd.merge_asof(df1.sort_values(by='file_date'),
              df2.sort_values(by='AST'),
              left_on='file_date', right_on='AST',
              direction='nearest')
Output:
Filename file_date Avg_time AST f107 kp
0 Na1998319 1998-11-16 2:14 1998-11-15 23:00:00 121.8 2.3
1 Na1998320 1998-11-17 2:01 1998-11-16 23:00:00 118.0 2.3
2 Na1998321 1998-11-18 0:38 1998-11-17 23:00:00 112.2 2.3
3 Na1998322 1998-11-18 20:51 1998-11-17 23:00:00 112.2 2.3
4 Na1999020 1999-01-20 22:53 1999-01-19 23:00:00 231.3 0.7
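For a quick check, here is a minimal, self-contained sketch of the same call built from a couple of the rows shown above (the small frames are illustrative, not the original .csv data); note that both key columns must be datetime and sorted:
import pandas as pd

df1 = pd.DataFrame({
    'Filename': ['Na1998319', 'Na1998320'],
    'file_date': pd.to_datetime(['1998-11-16', '1998-11-17']),
    'Avg_time': ['2:14', '2:01'],
})
df2 = pd.DataFrame({
    'AST': pd.to_datetime(['1998-11-15 23:00', '1998-11-16 23:00']),
    'f107': [121.8, 118.0],
    'kp': [2.3, 2.3],
})

# merge_asof needs both keys sorted; direction='nearest' picks the closest
# AST to each file_date, whether earlier or later
merged = pd.merge_asof(df1.sort_values('file_date'),
                       df2.sort_values('AST'),
                       left_on='file_date', right_on='AST',
                       direction='nearest')
print(merged)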

Related

Groupby number of hours in a month in pandas

Could someone please explain how to group an hourly-based index in order to find how many hours of null values there are in a specific month? I am thinking of ending up with a dataframe that has a monthly index.
Given below is the dataframe, which has the timestamp as its index and another column that occasionally has null values.
timestamp            rel_humidity
1999-09-27 05:00:00  82.875
1999-09-27 06:00:00  83.5
1999-09-27 07:00:00  83.0
1999-09-27 08:00:00  80.6
1999-09-27 09:00:00  nan
1999-09-27 10:00:00  nan
1999-09-27 11:00:00  nan
1999-09-27 12:00:00  nan
I tried this but the resulting dataframe is not what I expected.
gap_in_month = OG_1998_2022_gaps.groupby(OG_1998_2022_gaps.index.month, OG_1998_2022_gaps.index.year).count()
I always struggle with the groupby function, so any help is highly appreciated. Thanks in advance!
If you need a 0 for months with no missing values, create a mask with Series.isna, convert the DatetimeIndex to month periods with DatetimeIndex.to_period and aggregate with sum (the True values of the mask are counted as 1), or alternatively use pd.Grouper:
gap_in_month = (OG_1998_2022_gaps['rel_humidity'].isna()
                   .groupby(OG_1998_2022_gaps.index.to_period('m')).sum())

# alternative with pd.Grouper
gap_in_month = (OG_1998_2022_gaps['rel_humidity'].isna()
                   .groupby(pd.Grouper(freq='m')).sum())
If you need only months that actually contain missing values, the solution is similar, but first filter with boolean indexing and then count the rows with GroupBy.size:
mask = OG_1998_2022_gaps['rel_humidity'].isna()
gap_in_month = (OG_1998_2022_gaps[mask]
                   .groupby(OG_1998_2022_gaps.index[mask].to_period('m')).size())

# alternative with pd.Grouper
gap_in_month = (OG_1998_2022_gaps[OG_1998_2022_gaps['rel_humidity'].isna()]
                   .groupby(pd.Grouper(freq='m')).size())
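As a quick, self-contained sketch of both counting variants on made-up data (the index and values here are illustrative, not the original dataset):
import numpy as np
import pandas as pd

# hourly index spanning two months, with a NaN every 7th hour (illustrative)
idx = pd.date_range('1999-09-27 05:00', periods=200, freq='H')
sample = pd.DataFrame({'rel_humidity': np.where(np.arange(200) % 7 == 0, np.nan, 80.0)},
                      index=idx)

# missing hours per month, including months with zero gaps
per_month = sample['rel_humidity'].isna().groupby(sample.index.to_period('m')).sum()

# only months that actually contain gaps
mask = sample['rel_humidity'].isna()
gaps_only = sample[mask].groupby(sample.index[mask].to_period('m')).size()
print(per_month, gaps_only, sep='\n')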
An alternative to groupby, which is (in my opinion) much nicer, is to use pd.Series.resample:
import numpy as np
import pandas as pd

# Some sample data with a DatetimeIndex:
series = pd.Series(
    np.random.choice([1.0, 2.0, 3.0, np.nan], size=2185),
    index=pd.date_range(start="1999-09-26", end="1999-12-26", freq="H"),
)

# Solution:
series.isna().resample("M").sum()
# Note that GroupBy.count and Resampler.count count the number of non-null values,
# whereas you seem to be looking for the opposite :)
In your case:
OG_1998_2022_gaps['rel_humidity'].isna().resample("M").sum()

Mapping two rows to one row in pandas

I have a dataframe a with 14 rows and another dataframe comp1sum with 7 rows. a has a date column covering 7 days at 12-hour intervals, which makes 14 rows, while comp1sum has a column with 7 days.
This is the comp1sum dataframe
And this is the a dataframe
I want to map two rows of dataframe a to a single row of the comp1sum dataframe, so that one day of dataframe a is mapped to one day of the comp1sum dataframe.
I have the following code for that
j = 0
for i in range(0, 7):
    a.loc[i, 'comp1_sum'] = comp_sum.iloc[j]['comp1sum']
    a.loc[i, 'comp2_sum'] = comp_sum.iloc[j]['comp2sum']
    j = j + 1
And its output is
dt_truncated comp1_sum
3 2015-02-01 00:00:00 142.0
10 2015-02-01 12:00:00 144.0
12 2015-02-03 00:00:00 145.0
2 2015-02-05 00:00:00 141.0
14 2015-02-05 12:00:00 NaN
The code is mapping the days from comp1sum based on the index of a and not based on the dates of a. I want 2015-02-01 00:00:00 to have the value 139.0, 2015-02-02 00:00:00 to have the value 140.0, and so on, such that increasing dates have increasing values.
I am not able to map it in such a way. Please help.
Edit 1: As per @Ssayan's answer, I am getting this error:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-255-77e55efca5f9> in <module>
3 # use the sorted index to iterate through the sorted dataframe
4 for i, idx in enumerate(a.index):
----> 5 a.loc[idx, 'comp1_sum'] = b.iloc[i//2]['comp1sum']
6 a.loc[idx,'comp2_sum'] = b.iloc[i//2]['comp2sum']
IndexError: single positional indexer is out-of-bounds
Your issue is that your DataFrame a is not sorted by date, so index 0 does not match the earliest date. When you use loc it uses the value of the index, not the position of the row in the table, so even after sorting the DataFrame the issue remains if you keep indexing with loc on 0..6.
One way out is to sort the DataFrame a by date and then use the sorted index to apply the values in the order you need.
# sort the dataframe by date
a = a.sort_values("dt_truncated")

# use the sorted index to iterate through the sorted dataframe
for i, idx in enumerate(a.index):
    a.loc[idx, 'val_1'] = b.iloc[i//2]['val1']
    a.loc[idx, 'val_2'] = b.iloc[i//2]['val2']
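If the loop becomes a bottleneck, here is a vectorized sketch of the same idea (assuming a has exactly two rows per day, b has one row per day in chronological order, and the same placeholder column names as above):
import numpy as np

# sort a so that row order matches chronological order
a = a.sort_values('dt_truncated').reset_index(drop=True)
b = b.reset_index(drop=True)

# row i of a takes its values from row i//2 of b, i.e. each daily row of b
# is used for two consecutive 12-hour rows of a
a[['val_1', 'val_2']] = b[['val1', 'val2']].iloc[np.arange(len(a)) // 2].to_numpy()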

Create common columns and transform time series like data

I have an Excel file which contains more than 30 sheets for different parameters like BP, heart rate, etc.
One of the dataframes (df1, created from one sheet of the Excel file) looks as shown below.
df1 = pd.DataFrame({'person_id': [1,1,1,1,2,2,2,2,3,3,3,3,3,3],
                    'level_1': ['H1Date','H1','H2Date','H2','H1Date','H1','H2Date','H2',
                                'H1Date','H1','H2Date','H2','H3Date','H3'],
                    'values': ['2006-10-30 00:00:00','6.6','2006-08-30 00:00:00','4.6',
                               '2005-10-30 00:00:00','6.9','2016-11-30 00:00:00','6.6',
                               '2006-10-30 00:00:00','6.6','2006-11-30 00:00:00','8.6',
                               '2106-10-30 00:00:00','16.6']})
Another dataframe (df2), from another sheet of the Excel file, can be generated using the code below.
df2 = pd.DataFrame({'person_id': [1,1,1,1,2,2,2,2,3,3,3,3,3,3],
                    'level_1': ['GluF1Date','GluF1','GluF2Date','GluF2','GluF1Date','GluF1','GluF2Date','GluF2',
                                'GluF1Date','GluF1','GluF2Date','GluF2','GluF3Date','GluF3'],
                    'values': ['2006-10-30 00:00:00','6.6','2006-08-30 00:00:00','4.6',
                               '2005-10-30 00:00:00','6.9','2016-11-30 00:00:00','6.6',
                               '2006-10-30 00:00:00','6.6','2006-11-30 00:00:00','8.6',
                               '2106-10-30 00:00:00','16.6']})
Similarly, there are more than 30 dataframes like this with values in the same format (date & measurement value), but the column names (H1, GluF1, H1Date, H100, H100Date, GluF1Date, P1, PDate, UACRDate, UACR100, etc.) are different.
What I am trying to do, based on an SO search, is shown below:
g = df1.level_1.str[-2:]  # Extracting column names
df1['lvl'] = df1.level_1.apply(lambda x: int(''.join(filter(str.isdigit, x))))  # Extracting level's number
df1 = df1.pivot_table(index=['person_id', 'lvl'], columns=g, values='values', aggfunc='first')
final = df1.reset_index(level=1).drop(['lvl'], axis=1)
The above code gives an output like this, which is not what I expected.
This doesn't work because g does not produce the same string (column name) for all records. My code would work if the substring extraction had produced a uniform output, but since the data is a sequence (H1, H2, ..., H100), I am not able to make it uniform.
I expect my output to look as shown below for each dataframe. Please note that a person can have 3 records (H1..H3), 10 records (H1..H10), or 100 records (e.g. H1...H100); all are possible.
(updated screenshot of the expected output)
Concatenate all even and all odd rows without using column names, then name the columns as needed:
res = pd.concat([df2.iloc[0::2, 0:3:2].reset_index(drop=True),
                 df2.iloc[1::2, 2].reset_index(drop=True)], axis=1)
res.columns = ['Person_ID', 'Date', 'Value']
Output:
Person_ID Date Value
0 1 2006-10-30 00:00:00 6.6
1 1 2006-08-30 00:00:00 4.6
2 2 2005-10-30 00:00:00 6.9
3 2 2016-11-30 00:00:00 6.6
4 3 2006-10-30 00:00:00 6.6
5 3 2006-11-30 00:00:00 8.6
6 3 2106-10-30 00:00:00 16.6
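Since the question mentions 30+ sheets with the same interleaved layout, the same even/odd split can be wrapped in a small helper. This is only a sketch (tidy_sheet is a hypothetical name) and it assumes the rows strictly alternate date/value within each person, as in df1 and df2 above:
import pandas as pd

def tidy_sheet(df):
    # even rows carry person_id plus the date (stored in 'values'),
    # odd rows carry the measurement value
    out = pd.concat([df.iloc[0::2, [0, 2]].reset_index(drop=True),
                     df.iloc[1::2, 2].reset_index(drop=True)], axis=1)
    out.columns = ['Person_ID', 'Date', 'Value']
    return out

tidy_h = tidy_sheet(df1)    # the 'H' sheet
tidy_glu = tidy_sheet(df2)  # the 'GluF' sheet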
Here is one way using unstack() with a small modification:
Assign a helper column k using df1.groupby(['person_id', df1.level_1.str[:2]]).cumcount().
Replace level_1 with df1.level_1.str[:2].
Set the index to ['person_id', 'level_1', 'k'] and unstack on the third level.
m = (df1.assign(k=df1.groupby(['person_id', df1.level_1.str[:2]]).cumcount(),
                level_1=df1.level_1.str[:2])
        .set_index(['person_id', 'level_1', 'k'])
        .unstack(2)
        .droplevel(1))
m.columns = ['Date', 'Values']
print(m)
Date Values
person_id
1 2006-10-30 00:00:00 6.6
1 2006-08-30 00:00:00 4.6
2 2005-10-30 00:00:00 6.9
2 2016-11-30 00:00:00 6.6
3 2006-10-30 00:00:00 6.6
3 2006-11-30 00:00:00 8.6
3 2106-10-30 00:00:00 16.6

pandas - efficiently computing minutely returns as columns on intraday data

I have a DataFrame that looks like such:
closingDate Time Last
0 1997-09-09 2018-12-13 00:00:00 1000
1 1997-09-09 2018-12-13 00:01:00 1002
2 1997-09-09 2018-12-13 00:02:00 1001
3 1997-09-09 2018-12-13 00:03:00 1005
I want to create a DataFrame with roughly 1440 columns labeled as timestamps, where the respective daily value is the return over the prior minute:
closingDate 00:00:00 00:01:00 00:02:00
0 1997-09-09 2018-12-13 -0.08 0.02 -0.001 ...
1 1997-09-10 2018-12-13 ...
My issue is that this is a very large DataFrame (several GB), and I need to do this operation multiple times. Time and memory efficiency are key, but time is more important. Is there some vectorized, built-in method to do this in pandas?
You can do this with some aggregation and by shifting your time series, which should result in more efficient calculations.
First aggregate your data by closingDate.
g = df.groupby("closingDate")
Next you can shift your data within each group so that each row is offset by one observation (one minute).
shifted = g.shift(periods=1)
This will create a new dataframe where the Last value will be from the previous minute. Now you can join to your original dataframe based on the index.
df = df.merge(shifted, left_index=True, right_index=True)
This adds the shifted columns to the new dataframe that you can use to do your difference calculation.
df["Diff"] = (df["Last_x"] - df["Last_y"]) / df["Last_y"]
You now have all the data you're looking for. If you need each minute to be its own column you can pivot the results. By grouping the closingDate and then applying the shift you avoid shifting dates across days. If you look at the first observation of each day you'll get a NaN since the values won't be shifted across separate days.
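A minimal sketch of that last pivot step, assuming the merge above kept the original minute timestamp as Time_x (pandas suffixes overlapping columns with _x/_y) and that it is a datetime column:
returns_wide = df.pivot_table(index='closingDate',            # one row per day
                              columns=df['Time_x'].dt.time,   # one column per minute of the day
                              values='Diff',
                              aggfunc='first')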

Pandas concatenate/join/group rows in a dataframe based on date

I have a pandas dataset like this:
Date WaterTemp Discharge AirTemp Precip
0 2012-10-05 00:00 10.9 414.0 39.2 0.0
1 2012-10-05 00:15 10.1 406.0 39.2 0.0
2 2012-10-05 00:45 10.4 406.0 37.4 0.0
...
63661 2016-10-12 14:30 10.5 329.0 15.8 0.0
63662 2016-10-12 14:45 10.6 323.0 19.4 0.0
63663 2016-10-12 15:15 10.8 329.0 23 0.0
I want to extend each row so that I get a dataset that looks like:
Date WaterTemp 00:00 WaterTemp 00:15 .... Discharge 00:00 ...
0 2012-10-05 10.9 10.1 414.0
There will be at most 72 readings for each date, so I should have 288 columns in addition to the date and index columns, and at most 1460 rows (4 years * 365 days per year, possibly minus some missing dates). Eventually, I will use the 288-column dataset in a classification task (I'll be adding the label later), so I need to convert this dataframe to a 2d array (sans datetime) to feed into the classifier, which means I can't simply group by date and then access the groups. I did try grouping based on date, but I was uncertain how to turn each group into a single row. I also looked at joining. It looks like joining could suit my needs (for example a join based on (day, month, year)), but I was uncertain how to split things into different pandas dataframes so that the join would work. What is a way to do this?
PS: I already know how to change the datetimes in my Date column to dates without the time.
I figured it out. I group the readings by the time of day of the reading. Each group is a dataframe in and of itself, so I then just need to concatenate the dataframes based on date. My code for the whole function is as follows.
import pandas

def readInData(filename):
    # read in files and remove missing values
    ds = pandas.read_csv(filename)
    ds = ds[ds.AirTemp != 'M']
    # set index to date
    ds['Date'] = pandas.to_datetime(ds.Date, yearfirst=True, errors='coerce')
    ds.Date = pandas.DatetimeIndex(ds.Date)
    ds.index = ds.Date
    # group by time (so group readings by time of day of reading, i.e. all readings at midnight)
    dg = ds.groupby(ds.index.time)
    # initialize the final dataframe
    df = pandas.DataFrame()
    for name, group in dg:  # for each group
        # each group is a dataframe
        try:
            # set unique column names except for date
            group.columns = ['Date', 'WaterTemp'+str(name), 'Discharge'+str(name),
                             'AirTemp'+str(name), 'Precip'+str(name)]
            # ensure date is the index
            group.index = group.Date
            # remove time from index
            group.index = group.index.normalize()
            # join based on date
            df = pandas.concat([df, group], axis=1)
        except:  # if the try/except block isn't here, this throws errors! (three for my dataset?)
            pass
    # remove duplicate date columns
    df = df.loc[:, ~df.columns.duplicated()]
    # since date is index, drop the first date column
    df = df.drop('Date', 1)
    # return the dataset
    return df
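For reference, a more compact alternative to the grouping loop would be a single pivot_table on the time of day. This is only a sketch, not part of the solution above; it assumes it runs inside readInData after ds.index has been set to the DatetimeIndex, and that the four measurement columns exist under exactly those names:
wide = ds.pivot_table(index=ds.index.normalize(),   # one row per calendar day
                      columns=ds.index.time,        # one column per reading time
                      values=['WaterTemp', 'Discharge', 'AirTemp', 'Precip'],
                      aggfunc='first')
# flatten the (parameter, time) column MultiIndex into labels like 'WaterTemp00:00:00'
wide.columns = [str(col) + str(t) for col, t in wide.columns]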
