I'm currently struggling with a problem where I'm trying to avoid for loops (even though they would be easier for me to understand) and instead use the 'pandas' approach.
The problem I'm facing is that I have a big dataframe of logs, allLogs, like:
index message date_time user_id
0 message1 2023-01-01 09:00:49 123
1 message2 2023-01-01 09:00:58 123
2 message3 2023-01-01 09:01:03 125
... etc
I'm doing analysis per user_id, for which I've written a function. This function needs a subset of the allLogs dataframe: all ids, messages and date_times per user_id. Think of it like: for each unique user_id I want to run the function.
This function calculates the time differences between consecutive messages and makes a Series of those time-deltas. I want to collect these into a separate dataframe, with a list/series/array of time-deltas for each unique user_id.
The current function looks like this:
def makeSeriesPerUser(df):
    df = df[['message','date_time']]
    df = df.drop_duplicates(['date_time','message'])
    df = df.sort_values(by='date_time', inplace = True)
    m1 = (df['message'] == df['message'].shift(-1))
    df = df[~(m1)]
    df = (df['date_time'].shift(-1) - df['date_time'])
    df = df.reset_index(drop=True)
    seconds = m1.astype('timedelta64[s]')
    return seconds
And I use allLogs.groupby('user_id').apply(lambda x: makeSeriesPerUser(x)) to apply it to my user_id groups.
How do I, instead of returning something and adding it to the existing dataframe, make a new dataframe that holds, for each unique user_id, a series of these time-deltas (each user has a different number of logs)?
You should just create a dict where the keys are the user IDs and the values are the relevant DataFrames per user. There is no need to keep everything in one giant DataFrame, unless you have millions of users with only a few records apiece.
First off, you should use chaining. It's much simpler to read.
Secondly, pd.DataFrame.groupby().apply can take the function itself; no lambda is required.
Your sort_values(inplace=True) returns None, so your df variable ends up as None. Removing inplace=True makes sort_values return the sorted DataFrame.
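A quick illustration of that pitfall (a minimal sketch with a made-up two-row frame):
import pandas as pd

df = pd.DataFrame({'date_time': ['2023-01-01 09:00:58', '2023-01-01 09:00:49']})
print(df.sort_values(by='date_time', inplace=True))  # prints None
print(df.sort_values(by='date_time'))                # returns the sorted DataFrame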
def makeSeriesPerUser(df):
    df = df[['message','date_time']]
    df = df.drop_duplicates(['date_time','message'])
    df = df.sort_values(by='date_time', inplace = True)
    m1 = (df['message'] == df['message'].shift(-1))
    df = df[~(m1)]
    df = (df['date_time'].shift(-1) - df['date_time'])
    df = df.reset_index(drop=True)
    seconds = m1.astype('timedelta64[s]')
    return seconds
Turns into
def extract_timedelta(df_grouped_by_user: pd.DataFrame) -> pd.Series:
    selected_columns = ['message', 'date_time']
    time_delta = (df_grouped_by_user[selected_columns]
                  .drop_duplicates(selected_columns)  # drop duplicate entries
                  ['date_time']                       # select the date_time column
                  .sort_values()                      # sort the date_times
                  .diff()                             # difference between consecutive date_times
                  .astype('timedelta64[s]')           # convert to seconds
                  .reset_index(drop=True)
                  )
    return time_delta
time_delta_df = allLogs.groupby('user_id').apply(extract_timedelta)
This returns the time-deltas grouped by each user_id. The result is actually a Series with a MultiIndex, where each index entry is a tuple of ('user_id', int).
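For example, the time-deltas for a single user can then be pulled out of the MultiIndex (assuming a user_id of 123 as in the sample data):
user_deltas = time_delta_df.loc[123]   # all time-deltas for user 123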
If you want a new dataframe with users as columns, then you can do this:
data = {group_name: extract_timedelta(group_df) for group_name, group_df in allLogs.groupby('user_id')}
time_delta_df = pd.DataFrame(data)
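Since each user has a different number of logs, the columns will have different lengths; pd.DataFrame simply pads the shorter Series with NaN/NaT.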
I'm having a bit of trouble understanding the documentation for this warning:
C:/Users/erasmuss/PycharmProjects/Sarah/farmdata.py:38: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  dfbreed['x'] = dfbreed.apply(testbreed, axis=1)
The code is basically to re-arrange and clean some data to make analysis easier.
The data is given row-by-row per animal, but has repetitions, blanks, and some other sparse values.
The idea is to basically stack rows into columns and grab the useful data (weight by date and final BCS) per animal.
[Initial DF: a few snippets of the dataframe (image)]
[Output format: output DF/csv (image)]
import pandas as pd
import numpy as np
#Function for cleaning up multiple entries of breeds
def testbreed(x):
    if x.first_valid_index() is None:
        return None
    else:
        return x[x.first_valid_index()]
#Read Data
df1 = pd.read_csv("farmdata.csv")
#Drop empty rows
df1.dropna(how='all', axis=1, inplace=True)
#Copy to extract Weights in DF2
df2 = df1.copy()
df2 = df2.drop(['BCS', 'Breed','Age'], axis=1)
#Pivot for ID names in DF1
df1 = df1.pivot(index='ID', columns='Date', values=['Breed','Weight', 'BCS'])
#Pivot for weights in DF2
df2 = df2.pivot(index='ID', columns='Date', values = 'Weight')
#Split out Breeds and BCS into individual dataframes w/Duplicate/missing data for each ID
df3 = df1.copy()
dfbreed = df3[['Breed']]
dfBCS = df3[['BCS']]
#Drop empty BCS columns
df1.dropna(how='all', axis=1, inplace=True)
#Shorten Breed and BCS to single Column by grabbing first value that is real. see function above
dfbreed['x'] = dfbreed.apply(testbreed, axis=1)
dfBCS['x'] = dfBCS.apply(testbreed, axis=1)
#Populate BCS and Breed into new DF
df5= pd.DataFrame(data=None)
df5['Breed'] = dfbreed['x']
df5['BCS'] = dfBCS['x']
#Join Weights
df5 = df5.join(df2)
#Write output
df5.to_csv(r'.\out1.csv')
I want to take the Breed and BCS dataframes, whose columns are multi-indexed first by Breed or BCS and then by date, and grab the first non-NaN value in each row of dates, setting that into a single column (e.g. 'Breed').
I had a lot of trouble getting the columns to pick the first valid values in-situ on the DF.
I found a work-around with a 2015 answer:
2015 Answer
which defined the function at the top.
Reading through the "setting a value on a copy of a slice" caveat makes sense intuitively, but I can't seem to find a way to make it work as a direct replacement or index-based assignment. Should I be looping through?
Trying the approach from the second answer here, I get
dfbreed.loc[:,'Breed'] = dfbreed['Breed'].apply(testbreed, axis=1)
dfBCS.loc[:, 'BCS'] = dfBCS['BCS'].apply(testbreed, axis=1)
which returns
ValueError: Must have equal len keys and value when setting with an iterable
I'm thinking this has something to do with the MultiIndex; the keys come up as:
MultiIndex([('Breed', '1/28/2021'),
('Breed', '2/12/2021'),
('Breed', '2/4/2021'),
('Breed', '3/18/2021'),
('Breed', '7/30/2021')],
names=[None, 'Date'])
MultiIndex([('BCS', '1/28/2021'),
('BCS', '2/12/2021'),
('BCS', '2/4/2021'),
('BCS', '3/18/2021'),
('BCS', '7/30/2021')],
names=[None, 'Date'])
Sorry for the long question(s?)
Can anyone help me out?
Thanks.
You created dfbreed as:
dfbreed = df3[['Breed']]
So it is a view of the original DataFrame (limited to just this one column).
Remember that a view does not have its own data buffer; it is only a way to "view"
a fragment of the original DataFrame, and pandas does not guarantee that writing through it will propagate back correctly.
When you attempt to perform dfbreed['x'] = dfbreed.apply(...), you are
writing into that view, which is why pandas raises the SettingWithCopyWarning.
To avoid this warning, create dfbreed as an "independent" DataFrame:
dfbreed = df3[['Breed']].copy()
Now dfbreed has its own data buffer and you are free to change the data.
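Applied to the code in the question, that would look something like this (a sketch using the same variable names):
# make independent copies instead of views of df3
dfbreed = df3[['Breed']].copy()
dfBCS = df3[['BCS']].copy()
# the column assignments no longer trigger SettingWithCopyWarning
dfbreed['x'] = dfbreed.apply(testbreed, axis=1)
dfBCS['x'] = dfBCS.apply(testbreed, axis=1)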
I have a dataframe which can be generated from the code as given below
df = pd.DataFrame({'person_id': [1, 2, 3],
                   'date1': ['12/31/2007', '11/25/2009', '10/06/2005'], 'val1': [2, 4, 6],
                   'date2': ['12/31/2017', '11/25/2019', '10/06/2015'], 'val2': [1, 3, 5],
                   'date3': ['12/31/2027', '11/25/2029', '10/06/2025'], 'val3': [7, 9, 11]})
I followed the below solution to convert it from wide to long
pd.wide_to_long(df, stubnames=['date', 'val'], i='person_id', j='grp').sort_index(level=0)
Though this works with the sample data, it doesn't work with my real data, which has more than 200 columns. Instead of person_id, my real data has subject_ID with values like DC0001, DC0002, etc. Does i always have to be numeric? Instead of reshaping, it adds the stub values as new columns in my dataset and returns zero rows.
This is what my real columns look like:
My real data might contain NAs as well, so do I have to fill them with default values for wide_to_long to work?
Can you please help me figure out what the issue might be? Any other approach that achieves the same result would also be helpful.
Try adding an additional argument to the function call which allows string suffixes.
pd.wide_to_long(.......................,suffix='\w+')
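For reference, applied to the sample frame from the question the full call would look roughly like this (a sketch; suffix=r'\w+' tells wide_to_long to accept any word characters after the stubnames instead of digits only):
import pandas as pd

df = pd.DataFrame({'person_id': [1, 2, 3],
                   'date1': ['12/31/2007', '11/25/2009', '10/06/2005'], 'val1': [2, 4, 6],
                   'date2': ['12/31/2017', '11/25/2019', '10/06/2015'], 'val2': [1, 3, 5],
                   'date3': ['12/31/2027', '11/25/2029', '10/06/2025'], 'val3': [7, 9, 11]})

# suffix accepts a regex; r'\w+' allows non-numeric suffixes as well
long_df = pd.wide_to_long(df, stubnames=['date', 'val'], i='person_id',
                          j='grp', suffix=r'\w+').sort_index(level=0)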
The issue is with your column names: the numbers used to convert from wide to long need to be at the end of your column names, or you need to specify a suffix argument so wide_to_long can match them. I think the easiest solution is to create a function that accepts a regex and the dataframe.
import pandas as pd
import re

def change_names(df, regex):
    # Select one of three column groups
    old_cols = df.filter(regex=regex).columns
    # Create list of new column names
    new_cols = []
    for col in old_cols:
        # Get the stubname of the original column
        stub = ''.join(re.split(r'\d', col))
        # Get the time point
        num = re.findall(r'\d+', col)  # returns a list like ['1']
        # Make new column name
        new_col = stub + num[0]
        new_cols.append(new_col)
    # Create dictionary mapping old column names to new column names
    dd = {oc: nc for oc, nc in zip(old_cols, new_cols)}
    # Rename columns
    df.rename(columns=dd, inplace=True)
    return df
tdf = pd.DataFrame({'person_id' :[1,2,3],'h1date': ['12/31/2007','11/25/2009','10/06/2005'],'t1val': [2,4,6],'h2date': ['12/31/2017','11/25/2019','10/06/2015'],'t2val':[1,3,5],'h3date': ['12/31/2027','11/25/2029','10/06/2025'],'t3val':[7,9,11]})
# Change date columns
tdf = change_names(tdf, 'date$')
tdf = change_names(tdf, 'val$')
print(tdf)
person_id hdate1 tval1 hdate2 tval2 hdate3 tval3
0 1 12/31/2007 2 12/31/2017 1 12/31/2027 7
1 2 11/25/2009 4 11/25/2019 3 11/25/2029 9
2 3 10/06/2005 6 10/06/2015 5 10/06/2025 11
This is quite a late answer, but I'm putting the solution here in case someone else finds it useful.
tdf = pd.DataFrame({'person_id' :[1,2,3],'h1date': ['12/31/2007','11/25/2009','10/06/2005'],'t1val': [2,4,6],'h2date': ['12/31/2017','11/25/2019','10/06/2015'],'t2val':[1,3,5],'h3date': ['12/31/2027','11/25/2029','10/06/2025'],'t3val':[7,9,11]})
## You can use m13op22 solution to rename your columns with numeric part at the
## end of the column name. This is important.
tdf = tdf.rename(columns={'h1date': 'hdate1', 't1val': 'tval1',
                          'h2date': 'hdate2', 't2val': 'tval2',
                          'h3date': 'hdate3', 't3val': 'tval3'})
## Then use the non-numeric portion, (in this example 'hdate', 'tval') as
## stubnames. The mistake you were doing was using ['date', 'val'] as stubnames.
df = pd.wide_to_long(tdf, stubnames=['hdate', 'tval'], i='person_id', j='grp').sort_index(level=0)
print(df)
I want to compute the week of the month for a specified date. I currently use the user-defined function below.
Input data frame:
Output data frame:
Here is what I have tried:
from math import ceil

def week_of_month(dt):
    """
    Returns the week of the month for the specified date.
    """
    first_day = dt.replace(day=1)
    dom = dt.day
    adjusted_dom = dom + first_day.weekday()
    return int(ceil(adjusted_dom / 7.0))
After this,
import datetime
import pandas as pd

df = pd.read_csv("input_dataframe.csv")
df.date = pd.to_datetime(df.date)
df['year_of_date'] = df.date.dt.year
df['month_of_date'] = df.date.dt.month
df['day_of_date'] = df.date.dt.day
wom = pd.Series()

# worker function for creating week of month series
def convert_date(t):
    global wom
    wom = wom.append(pd.Series(week_of_month(datetime.datetime(t[0], t[1], t[2]))), ignore_index=True)

# calling worker function for each row of dataframe
_ = df[['year_of_date','month_of_date','day_of_date']].apply(convert_date, axis=1)

# adding new computed column to dataframe
df['week_of_month'] = wom
# here this updated dataframe should look like Output data frame.
What this does is compute the week of the month for each row of the dataframe using the given function. It gets slower as the dataframe grows, and I currently have more than 10M rows.
I am looking for a faster way of doing this. What changes can I make to this code to vectorize this operation across all rows?
Thanks in advance.
Edit: what worked for me after reading the answers is the code below:
first_day_of_month = pd.to_datetime(df.date.values.astype('datetime64[M]'))
df['week_of_month'] = np.ceil((df.date.dt.day + first_day_of_month.weekday) / 7.0).astype(int)
The week_of_month method can be vectorized. It could be beneficial not to convert back to datetime objects row by row, and to instead use pandas-only methods.
import numpy as np

first_day_of_month = df.date.dt.to_period("M").dt.to_timestamp()
df["week_of_month"] = np.ceil((df.date.dt.day + first_day_of_month.dt.weekday) / 7.0).astype(int)
Just right off the bat, without even going into your code or mentioning X/Y problems, etc.: try to get a list of unique dates; I'm sure that among your 10M rows more than one date is a duplicate.
Steps (a sketch of these follows below):
1. Create a 2nd df that contains only the columns you need and no duplicates (drop_duplicates).
2. Run your function on the small dataframe.
3. Merge the large and small dfs.
4. (Optional) Drop the small one.
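A minimal sketch of those steps, assuming the df and the week_of_month function from the question:
# 1. small frame with only the unique dates
unique_dates = df[['date']].drop_duplicates()
# 2. run the expensive function only on the unique dates
unique_dates['week_of_month'] = unique_dates['date'].apply(week_of_month)
# 3. merge the result back onto the large frame
df = df.merge(unique_dates, on='date', how='left')
# 4. (optional) drop the small frame
del unique_dates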
This question has two parts:
1) Is there a better way to do this?
2) If NO to #1, how can I fix my date issue?
I have a dataframe as follows
GROUP DATE VALUE DELTA
A 12/20/2015 2.5 ??
A 11/30/2015 25
A 1/31/2016 8.3
B etc etc
B etc etc
C etc etc
C etc etc
This is a representation; there are close to 100 rows for each group (each row representing a unique date).
For each letter in GROUP, I want to find the change in value between successive dates. So for example for GROUP A I want the change between 11/30/2015 and 12/20/2015, which is -22.5. Currently I am doing the following:
df['DATE'] = pd.to_datetime(df['DATE'], infer_datetime_format=True)
df.sort_values('DATE', ascending=True)

df_out = []
for GROUP in df.GROUP.unique():
    x = df[df.GROUP == GROUP]
    x['VALUESHIFT'] = x['VALUE'].shift(+1)
    x['DELTA'] = x['VALUE'].sub(x['VALUESHIFT'])
    df_out.append(x)

df_out = pd.concat(df_out)
The challenge I am running into is that the dates are not sorted correctly, so when the shift takes place and I calculate the delta, it is not really the delta between successive dates.
Is this the right approach? If so, how can I fix my date issue? I have reviewed/tried the following to no avail:
Applying datetime format in pandas for sorting
how to make a pandas dataframe column into a datetime object showing just the date to correctly sort
doing calculations in pandas dataframe based on trailing row
Pandas - Split dataframe into multiple dataframes based on dates?
Answering my own question. This works:
df['DATE'] = pd.to_datetime(df['DATE'], infer_datetime_format=True)

df_out = []
for ID in df.GROUP.unique():
    x = df[df.GROUP == ID]
    x.sort_values('DATE', ascending=True, inplace=True)
    x['VALUESHIFT'] = x['VALUE'].shift(+1)
    x['DELTA'] = x['VALUE'].sub(x['VALUESHIFT'])
    df_out.append(x)

df_out = pd.concat(df_out)
1) Added inplace=True to sort_values.
2) Added the sort within the for loop.
3) Changed the loop variable from GROUP to ID, since GROUP is also the name of a column, which I imagine is considered sloppy.
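For what it's worth, the same per-group delta can also be computed without the explicit loop; a minimal sketch using groupby and diff (assuming the same column names):
df['DATE'] = pd.to_datetime(df['DATE'])
df = df.sort_values(['GROUP', 'DATE'])
# diff() within each group is equivalent to VALUE - VALUE.shift(1)
df['DELTA'] = df.groupby('GROUP')['VALUE'].diff()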
I have a dataframe with many rows. I am appending a column using data produced from a custom function, like this:
import numpy
df['new_column'] = numpy.vectorize(fx)(df['col_a'], df['col_b'])
# takes 180964.377 ms
It is working fine; what I am trying to do is speed it up. There is really only a small set of unique combinations of col_a and col_b, so many of the iterations are redundant. I was thinking maybe pandas would just figure that out on its own, but I don't think that is the case. Consider this:
print len(df.index) #prints 127255
df_unique = df.copy().drop_duplicates(['col_a', 'col_b'])
print len(df_unique.index) #prints 9834
I also convinced myself of the possible speedup by running this:
df_unique['new_column'] = numpy.vectorize(fx)(df_unique['col_a'], df_unique['col_b'])
# takes 14611.357 ms
Since there is a lot of redundant data, I am trying to update the large dataframe (df, 127255 rows) while only running the fx function the minimum number of times (9834 times). This is because of all the duplicate rows for col_a and col_b. Of course this means that there will be multiple rows in df that have the same values for col_a and col_b, but that is OK; the other columns of df are different and make each row unique.
Before I create a normal iterative for loop to loop through the df_unique dataframe and do a conditional update on df, I wanted to ask if there was a more "pythonic" neat way of doing this kind of update. Thanks a lot.
** UPDATE **
I created the simple for loop mentioned above, like this:
df = ...
df_unique = df.copy().drop_duplicates(['col_a', 'col_b'])
df_unique['new_column'] = np.vectorize(fx)(df_unique['col_a'], df_unique['col_b'])

for index, row in df_unique.iterrows():
    df.loc[(df['col_a'] == row['col_a']) & (df['col_b'] == row['col_b']), 'new_column'] = row['new_column']
# takes 165971.890 ms
So with this for loop there may be a slight performance increase but not nearly what I would have expected.
FYI
This is the fx function. It queries a mysql database.
def fx(d):
    exp_date = datetime.strptime(d.col_a, '%m/%d/%Y')
    if exp_date.weekday() == 5:
        exp_date -= timedelta(days=1)
    p = pandas.read_sql("select stat from table where a = '%s' and b_date = '%s';" % (d.col_a, exp_date.strftime('%Y-%m-%d')), engine)
    if len(p.index) == 0:
        return None
    else:
        return p.iloc[0].close
UPDATE:
If you can manage to read your three columns ['stat','a','b_date'] from table table into a tab DF, then you could merge it like this:
tab = pd.read_sql('select stat,a,b_date from table', engine)
df.merge(tab, left_on=[...], right_on=[...], how='left')
OLD answer:
you can merge/join your precalculated df_unique DF with the original df DF:
df['new_column'] = df.merge(df_unique, on=['col_a','col_b'], how='left')['new_column']
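Note that merge returns a new frame with a fresh RangeIndex, so the assignment above aligns correctly only if df also has a default RangeIndex. If it doesn't, you can bypass index alignment (a sketch of the same idea):
df['new_column'] = df.merge(df_unique, on=['col_a', 'col_b'], how='left')['new_column'].values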
MaxU's answer may already be what you want, but I'll show another approach which may be a bit faster (I didn't measure).
I assume that:
1. df[['col_a', 'col_b']] is sorted so that all identical entries are in consecutive rows (this is important).
2. df has a unique index (if not, you may create a temporary unique index).
I'll use the fact that df_unique.index is a subset of df.index.
# (keep='first' is actually default)
df_unique = df[['col_a', 'col_b']].drop_duplicates(keep='first').copy()
# You may try .apply instead of np.vectorize (I think it may be faster):
df_unique['result'] = df_unique.apply(fx, axis=1)
# Main part:
df['result'] = df_unique['result'] # uses 2.
df['result'].fillna(method='ffill', inplace=True) # uses 1.
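The assignment df['result'] = df_unique['result'] aligns on the index, so only the rows that survived drop_duplicates receive a value and every other row becomes NaN; the forward fill then copies each computed value down to the duplicate rows that follow it, which is why the frame has to be sorted on ['col_a', 'col_b'] first.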