Pandas, append column based on unique subset of column values - python

I have a dataframe with many rows. I am appending a column using data produced from a custom function, like this:
import numpy
df['new_column'] = numpy.vectorize(fx)(df['col_a'], df['col_b'])
# takes 180964.377 ms
It works fine; what I am trying to do is speed it up. There is really only a small group of unique combinations of col_a and col_b, so many of the iterations are redundant. I was thinking maybe pandas would figure that out on its own, but I don't think that is the case. Consider this:
print len(df.index) #prints 127255
df_unique = df.copy().drop_duplicates(['col_a', 'col_b'])
print len(df_unique.index) #prints 9834
I also convinced myself of the possible speedup by running this:
df_unique['new_column'] = numpy.vectorize(fx)(df_unique['col_a'], df_unique['col_b'])
# takes 14611.357 ms
Since there is a lot of redundant data, what I am trying to do is update the large dataframe (df, 127255 rows) while only running the fx function the minimum number of times (9834 times). Of course this means there will be multiple rows in df that share the same values for col_a and col_b, but that is OK; the other columns of df are different and make each row unique.
Before I create a normal iterative for loop to loop through the df_unique dataframe and do a conditional update on df, I wanted to ask if there was a more "pythonic" neat way of doing this kind of update. Thanks a lot.
** UPDATE **
I created the simple for loop mentioned above, like this:
df = ...
df_unique = df.copy().drop_duplicates(['col_a', 'col_b'])
df_unique['new_column'] = np.vectorize(fx)(df_unique['col_a'], df_unique['col_b'])
for index, row in df_unique.iterrows():
    df.loc[(df['col_a'] == row['col_a']) & (df['col_b'] == row['col_b']), 'new_column'] = row['new_column']
# takes 165971.890 ms
So with this for loop there may be a slight performance increase but not nearly what I would have expected.
FYI
This is the fx function. It queries a MySQL database.
def fx(d):
    exp_date = datetime.strptime(d.col_a, '%m/%d/%Y')
    if exp_date.weekday() == 5:
        exp_date -= timedelta(days=1)
    p = pandas.read_sql("select stat from table where a = '%s' and b_date = '%s';" % (d.col_a, exp_date.strftime('%Y-%m-%d')), engine)
    if len(p.index) == 0:
        return None
    else:
        return p.iloc[0].close

UPDATE:
If you can manage to read your three columns ['stat','a','b_date'] from table table into a tab DataFrame, you could merge it like this:
tab = pd.read_sql('select stat,a,b_date from table', engine)
df.merge(tab, left_on=[...], right_on=[...], how='left')
OLD answer:
You can merge/join your precalculated df_unique DF with the original df DF:
df['new_column'] = df.merge(df_unique, on=['col_a','col_b'], how='left')['new_column']
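To make that pattern concrete, here is a minimal, self-contained sketch with made-up data and a dummy fx (the column names follow the question; the data and fx are invented for illustration):
import numpy as np
import pandas as pd

# Toy frame with repeated (col_a, col_b) pairs.
df = pd.DataFrame({'col_a': ['x', 'x', 'y', 'y', 'y'],
                   'col_b': [1, 1, 2, 2, 3],
                   'other': range(5)})

def fx(a, b):
    # Stand-in for the expensive per-pair computation.
    return '%s-%s' % (a, b)

# Run fx once per unique pair, then merge the results back onto every row.
df_unique = df.drop_duplicates(['col_a', 'col_b']).copy()
df_unique['new_column'] = np.vectorize(fx)(df_unique['col_a'], df_unique['col_b'])
df = df.merge(df_unique[['col_a', 'col_b', 'new_column']], on=['col_a', 'col_b'], how='left')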

MaxU's answer may be already something you want. But I'll show another approach which may be a bit faster (I didn't measure).
I assume that:
1. df[['col_a', 'col_b']] is sorted so that all identical entries are in consecutive rows (this is important);
2. df has a unique index (if not, you may create a temporary unique index).
I'll use the fact that df_unique.index is a subset of df.index.
# (keep='first' is actually default)
df_unique = df[['col_a', 'col_b']].drop_duplicates(keep='first').copy()
# You may try .apply instead of np.vectorize (I think it may be faster):
df_unique['result'] = df_unique.apply(fx, axis=1)
# Main part:
df['result'] = df_unique['result'] # uses 2.
df['result'].fillna(method='ffill', inplace=True) # uses 1.
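The trick is that assigning a Series to a column aligns on the index, so only the first row of each group receives a value, and the forward fill then propagates it to the duplicates below it. A tiny sketch with made-up, pre-sorted data:
import pandas as pd

df = pd.DataFrame({'col_a': ['x', 'x', 'y', 'y'], 'col_b': [1, 1, 2, 2]})

df_unique = df[['col_a', 'col_b']].drop_duplicates(keep='first').copy()
df_unique['result'] = ['fx(x,1)', 'fx(y,2)']   # stand-in for the real fx output

df['result'] = df_unique['result']   # index alignment: rows 1 and 3 become NaN
df['result'] = df['result'].ffill()  # forward fill copies each value down to its duplicates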


How to make a new dataframe with output from Pandas apply function?

I'm currently struggling with a problem for which I'm trying not to use for loops (even though that would make it easier for me to understand) and instead use the 'pandas' approach.
The problem I'm facing is that I have a big dataframe of logs, allLogs, like:
index  message   date_time            user_id
0      message1  2023-01-01 09:00:49  123
1      message2  2023-01-01 09:00:58  123
2      message3  2023-01-01 09:01:03  125
...    etc
I'm doing analysis per user_id, for which I've written a function. This function needs a subset of the allLogs dataframe: all ids, messages and date_times per user_id. Think of it like: for each unique user_id I want to run the function.
This function calculates the time differences between consecutive messages and makes a Series with all those time-deltas. I want to turn this into a separate dataframe that holds, for each unique user_id, a big list/series/array of time-deltas.
The current function looks like this:
def makeSeriesPerUser(df):
    df = df[['message','date_time']]
    df = df.drop_duplicates(['date_time','message'])
    df = df.sort_values(by='date_time', inplace = True)
    m1 = (df['message'] == df['message'].shift(-1))
    df = df[~(m1)]
    df = (df['date_time'].shift(-1) - df['date_time'])
    df = df.reset_index(drop=True)
    seconds = m1.astype('timedelta64[s]')
    return seconds
And I use allLogs.groupby('user_id').apply(lambda x: makeSeriesPerUser(x)) to apply it to my user_id groups.
How do I, instead of returning something and adding it to the existing dataframe, make a new dataframe that holds, for each unique user_id, a series of these time-deltas (each user has a different number of logs)?
You should just create a dict where the keys are the user IDs and the values are the relevant DataFrames per user. There is no need to keep everything in one giant DataFrame, unless you have millions of users with only a few records apiece.
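For example, a dict of per-user frames can be built straight from groupby (a small sketch; allLogs, user_id and the sample id 123 come from the question):
# One sub-DataFrame per user, keyed by user_id.
frames_per_user = {user_id: group for user_id, group in allLogs.groupby('user_id')}
frames_per_user[123]   # all log rows for user 123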
First off, you should use chaining. It's much simpler to read.
Secondly, pd.DataFrame.groupby().apply can take the function itself; no lambda is required.
Your sort_values(..., inplace=True) returns None. Dropping inplace=True will give you back the sorted DataFrame.
def makeSeriesPerUser(df):
    df = df[['message','date_time']]
    df = df.drop_duplicates(['date_time','message'])
    df = df.sort_values(by='date_time', inplace = True)
    m1 = (df['message'] == df['message'].shift(-1))
    df = df[~(m1)]
    df = (df['date_time'].shift(-1) - df['date_time'])
    df = df.reset_index(drop=True)
    seconds = m1.astype('timedelta64[s]')
    return seconds
Turns into
def extract_timedelta(df_grouped_by_user: pd.DataFrame) -> pd.Series:
    selected_columns = ['message', 'date_time']
    time_delta = (df_grouped_by_user[selected_columns]
                  .drop_duplicates(selected_columns)  # drop duplicate entries
                  ['date_time']                       # select the date_time column
                  .sort_values()                      # sort the selected date_time column
                  .diff()                             # take the difference between consecutive rows
                  .astype('timedelta64[s]')           # convert to seconds
                  .reset_index(drop=True)
                  )
    return time_delta
time_delta_df = df.groupby('user_id').apply(extract_timedelta)
This returns a dataframe of timedeltas grouped by each user_id. The grouped dataframe is actually just a Series with a MultiIndex, where each index entry is a tuple of (user_id, int).
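For instance, a single user's deltas can be pulled out via the first index level (a small sketch; 123 is a user_id from the sample data):
time_delta_df.loc[123]                   # Series of time-deltas for user 123
time_delta_df.xs(123, level='user_id')   # equivalent, selecting by level name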
If you want a new dataframe with users as columns, then you want to do this:
data = {group_name: extract_timedelta(group_df) for group_name, group_df in messages_df.groupby('user_id')}
time_delta_df = pd.DataFrame(data)

Optimal way to create a column by matching two other columns

The first df I have is one that has station codes and names, along with lat/long (not as relevant), like so:
code name latitude longitude
I have another df with start/end dates for travel times. This df has only the station code, not the station name, like so:
start_date start_station_code end_date end_station_code duration_sec
I am looking to add columns that have the name of the start/end stations to the second df by matching the first df "code" and second df "start_station_code" / "end_station_code".
I am relatively new to pandas, and was looking for a way to optimize doing this as my current method takes quite a while. I use the following code:
for j in range(0, len(df_stations)):
    for i in range(0, len(df)):
        if(df_stations['code'][j] == df['start_station_code'][i]):
            df['start_station'][i] = df_stations['name'][j]
        if(df_stations['code'][j] == df['end_station_code'][i]):
            df['end_station'][i] = df_stations['name'][j]
I am looking for a faster method, any help is appreciated. Thank you in advance.
Use merge. If you are familiar with SQL, merge is the pandas equivalent of a JOIN (pass how='left' for a LEFT JOIN; the default is an inner join):
cols = ["code", "name"]
result = (
second_df
.merge(first_df[cols], left_on="start_station_code", right_on="code")
.merge(first_df[cols], left_on="end_station_code", right_on="code")
.rename(columns={"code_x": "start_station_code", "code_y": "end_station_code"})
)
The answer by #Code-Different is very nearly correct. However, the columns to be renamed are the name columns, not the code columns. For neatness you will likely want to drop the additional code columns that get created by the merges. Using your names for the dataframes, df and df_stations, the code needed to produce required_df is:
cols = ["code", "name"]
required_df = (
df
.merge(df_stations[cols], left_on="start_station_code", right_on="code")
.merge(df_stations[cols], left_on="end_station_code", right_on="code")
.rename(columns={"name_x": "start_station", "name_y": "end_station"})
.drop(columns = ['code_x', 'code_y'])
)
As you may notice, the merges mean that the dataframe acquires duplicate 'code' columns, which get suffixed automatically; this is the built-in default of merge. See https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html for more detail.
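If you would rather not rely on the automatic _x/_y suffixes, merge also accepts an explicit suffixes argument. A sketch of the same join with named suffixes (the suffix names here are my own choice, not from the question):
cols = ["code", "name"]
required_df = (
    df
    .merge(df_stations[cols], left_on="start_station_code", right_on="code")
    .merge(df_stations[cols], left_on="end_station_code", right_on="code",
           suffixes=("_start", "_end"))   # applied to the overlapping columns of the second merge
    .rename(columns={"name_start": "start_station", "name_end": "end_station"})
    .drop(columns=["code_start", "code_end"])
)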

Fill specific columns in a pandas dataframe rows with values from another dataframes values

I am trying to replace some missing and incorrect values in my master dataset by filling it in with correct values from two different datasets.
I created a miniature version of the full dataset like so (note the real dataset is several thousand rows long):
import pandas as pd
data = {'From': ['GA0251','GA5201','GA5551','GA510A','GA5171','GA5151'],
        'To': ['GA0201_T','GA5151_T','GA5151_R','GA5151_V','GA5151_P','GA5171_B'],
        'From_Latitude': [55.86630869,0,55.85508787,55.85594626,55.85692217,55.85669934],
        'From_Longitude': [-4.27138731,0,-4.24126866,-4.24446585,-4.24516129,-4.24358251],
        'To_Latitude': [55.86614756,0,55.85522197,55.85593762,55.85693878,0],
        'To_Longitude': [-4.271040979,0,-4.241466534,-4.244607602,-4.244905037,0]}
dataset_to_correct = pd.DataFrame(data)
However, some values in the From lat/long and the To lat/long are incorrect. I have two tables like the one below for each of From and To, which I would like to substitute into the table in place of the two values for that row.
Table of Corrected From lat/long:
data = {'Site': ['GA5151_T','GA5171_B'],
        'Correct_Latitude': [55.85952791,55.87044558],
        'Correct_Longitude': [55.85661767,-4.24358251]}
correct_to_coords = pd.DataFrame(data)
I would like to match this table to the From column and then replace the From_Latitude and From_Longitude with the correct values.
Table of Corrected To lat/long:
data = {'Site': ['GA5201','GA0251'],
        'Correct_Latitude': [55.857577,55.86616756],
        'Correct_Longitude': [-4.242770,-4.272140979]}
correct_from_coords = pd.DataFrame(data)
I would like to match this table to the To column and then replace the To_Latitude and To_Longitude with the correct values.
Is there a way to match the site in each table to the corresponding From or To column and then replace only the values in the respective columns?
I have tried using code from this answer (Elegant way to replace values in pandas.DataFrame from another DataFrame) but it seems to have no effect on the dataframe.
(correct_to_coords.set_index('Site').rename(columns = {'Correct_Latitude':'To_Latitude'}) .combine_first(dataset_to_correct.set_index('To')))
#zswqa's answer produces the right result; #Anurag Dabas's doesn't.
Another possible solution; it is a bit faster than the merge method in the other answer, although both are correct.
dataset_to_correct.set_index("To",inplace=True)
correct_to_coords.set_index("Site",inplace=True)
dataset_to_correct.loc[correct_to_coords.index, "To_Latitude"] = correct_to_coords["Correct_Latitude"]
dataset_to_correct.loc[correct_to_coords.index, "To_Longitude"] = correct_to_coords["Correct_Longitude"]
dataset_to_correct.reset_index(inplace=True)
dataset_to_correct.set_index("From",inplace=True)
correct_from_coords.set_index("Site",inplace=True)
dataset_to_correct.loc[correct_from_coords.index, "From_Latitude"] = correct_from_coords["Correct_Latitude"]
dataset_to_correct.loc[correct_from_coords.index, "From_Longitude"] = correct_from_coords["Correct_Longitude"]
dataset_to_correct.reset_index(inplace=True)
merge = dataset_to_correct.merge(correct_to_coords, left_on='To', right_on='Site', how='left')
merge.loc[(merge.To == merge.Site), 'To_Latitude'] = merge.Correct_Latitude
merge.loc[(merge.To == merge.Site), 'To_Longitude'] = merge.Correct_Longitude
# del merge['Site']
# del merge['Correct_Latitude']
# del merge['Correct_Longitude']
merge = merge.drop(columns = ['Site','Correct_Latitude','Correct_Longitude'])
merge = merge.merge(correct_from_coords, left_on='From', right_on='Site', how='left')
merge.loc[(merge.From == merge.Site), 'From_Latitude'] = merge.Correct_Latitude
merge.loc[(merge.From == merge.Site), 'From_Longitude'] = merge.Correct_Longitude
# del merge['Site']
# del merge['Correct_Latitude']
# del merge['Correct_Longitude']
merge = merge.drop(columns = ['Site','Correct_Latitude','Correct_Longitude'])
merge
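As an aside, the same index-aligned overwrite can also be expressed with DataFrame.update, which writes non-NaN values from another frame into matching index/column positions. A sketch for the To columns only, using the frames defined in the question (the From side would be handled the same way):
fixed = dataset_to_correct.set_index('To')
fixed.update(correct_to_coords.set_index('Site')
                              .rename(columns={'Correct_Latitude': 'To_Latitude',
                                               'Correct_Longitude': 'To_Longitude'}))
fixed = fixed.reset_index()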
Let's try a dual merge using merge() + pop() + fillna() + drop():
dataset_to_correct=dataset_to_correct.merge(correct_to_coords,left_on='To',right_on='Site',how='left').drop('Site',1)
dataset_to_correct['From_Latitude']=dataset_to_correct.pop('Correct_Latitude').fillna(dataset_to_correct['From_Latitude'])
dataset_to_correct['From_Longitude']=dataset_to_correct.pop('Correct_Longitude').fillna(dataset_to_correct['From_Longitude'])
dataset_to_correct=dataset_to_correct.merge(correct_from_coords,left_on='From',right_on='Site',how='left').drop('Site',1)
dataset_to_correct['To_Latitude']=dataset_to_correct.pop('Correct_Latitude').fillna(dataset_to_correct['To_Latitude'])
dataset_to_correct['To_Longitude']=dataset_to_correct.pop('Correct_Longitude').fillna(dataset_to_correct['To_Longitude'])

Append to a pd.DataFrame, dynamically allocating any new columns

I'm wanting to aggregate some API responses into a DataFrame.
The request consistently returns a number of JSON key-value pairs, let's say A, B, C. Occasionally, however, it will return A, B, C, D.
I would like something comparable to SQL's OUTER JOIN, which will simply add the new rows while filling the previously missing columns with NULL or some other placeholder.
The pandas join options insist upon imposing a unique suffix for each side; I really don't want this.
Am I looking at this the wrong way?
If there is no easy solution, I could just select a subset of the consistently available columns but I really wanted to download the lot and do the processing as a separate stage.
You can use pandas.concat, as it provides all the functionality required for your problem. Let this toy problem illustrate a possible solution.
import string
import numpy as np
import pandas as pd

# This generates random data with some key and value pairs.
def gen_data(_size):
    keys = list(string.ascii_uppercase)
    return dict((k, [v]) for k, v in zip(np.random.choice(keys, _size), np.random.randint(1000, size=_size)))

counter = 0
df = pd.DataFrame()
while True:
    if counter > 5:
        break
    # Receive the data
    new_data = gen_data(5)
    # Convert it to a DataFrame
    new_data = pd.DataFrame(new_data)
    # Append it to the stack; concat aligns on columns and fills missing ones with NaN
    df = pd.concat((df, new_data), axis=0, sort=True)
    counter += 1
df.reset_index(drop=True, inplace=True)
print(df.to_string())
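Note that growing a DataFrame with concat inside a loop copies the data on every iteration. If all the responses can be gathered first, a single concat at the end does the same column alignment (missing keys become NaN) with far less copying; a sketch under the same toy setup as above:
# Gather the raw responses first, then build the DataFrame in one go.
responses = [gen_data(5) for _ in range(6)]
df = pd.concat([pd.DataFrame(r) for r in responses], axis=0, sort=True, ignore_index=True)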

What's the fastest way to remove pyspark columns that are all the same value (e.g. 0)?

I have a massive pyspark dataframe and I need to remove any columns from the dataframe that are all 0's. I'm currently using this code below for this:
columns = list(masterdata_raw.schema.names)
zero_col = []
for col_ in columns:
    col_max = masterdata_raw.agg({col_: "max"}).collect()[0]
    col_min = masterdata_raw.agg({col_: "min"}).collect()[0]
    if col_max == 0 and col_min == 0:
        zero_col.append(col_)

# *********removing the features found above************
for col_ in zero_col:
    masterdata_raw = masterdata_raw.drop(col_)
Unfortunately this runs an aggregation twice per column and .collect()s the results twice as well... SLOW.
I tried to combine the aggregations already, but because I was trying to aggregate on the same column name within one dictionary, I only got one result back. I guess I could have added another column with a different name and then aggregated both at the same time, but that seemed inefficient as well.
What's the fastest way to filter out all of these 0 columns in pyspark?
Why not just:
for col_ in columns:
    if masterdata_raw.groupBy().sum(col_).collect()[0][0] == 0:
        masterdata_raw = masterdata_raw.drop(col_)
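On the double pass the question complains about: min and max for every column can be computed in a single aggregation by passing column expressions instead of a dict, so the whole table is scanned once. A sketch (masterdata_raw and columns are the names from the question):
from pyspark.sql import functions as F

# One job computes min and max of every column at once.
stats = masterdata_raw.agg(
    *[F.min(c).alias('min_' + c) for c in columns],
    *[F.max(c).alias('max_' + c) for c in columns]
).collect()[0]

zero_cols = [c for c in columns if stats['min_' + c] == 0 and stats['max_' + c] == 0]
masterdata_raw = masterdata_raw.drop(*zero_cols)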
