How to apply a custom Python function to a dictionary of dataframes

I have a dictionary that contains 3 dataframes.
How do I apply a custom function to each dataframe in the dictionary?
In simpler terms, I want to apply the function find_outliers as seen below
# User-defined function: find_outliers
# (I)
import numpy as np
import pandas as pd
from scipy import stats

outlier_threshold = 1.5
ddof = 0

def find_outliers(s: pd.Series):
    outlier_mask = np.abs(stats.zscore(s, ddof=ddof)) > outlier_threshold
    # replace boolean values with the corresponding CSS strings
    return ['background-color:blue' if val else '' for val in outlier_mask]
To the dictionary of dataframes dict_of_dfs below
# the dataset
import numpy as np
import pandas as pd
df = {
'col_A':['A_1001', 'A_1001', 'A_1001', 'A_1001', 'B_1002','B_1002','B_1002','B_1002','D_1003','D_1003','D_1003','D_1003'],
'col_X':[110.21, 191.12, 190.21, 12.00, 245.09,4321.8,122.99,122.88,134.28,148.14,161.17,132.17],
'col_Y':[100.22,199.10, 191.13,199.99, 255.19,131.22,144.27,192.21,7005.15,12.02,185.42,198.00],
'col_Z':[140.29, 291.07, 390.22, 245.09, 4122.62,4004.52,395.17,149.19,288.91,123.93,913.17,1434.85]
}
df = pd.DataFrame(df)
df
#dictionary_of_dataframes
#(II)
dict_of_dfs=dict(tuple(df.groupby('col_A')))
and lastly, flag outliers in each df of the dict_of_dfs
# end goal is to have find/flag outliers in each `df` of the `dict_of_dfs`
#(III)
desired_cols = ['col_X','col_Y','col_Z']
dict_of_dfs.style.apply(find_outliers, subset=desired_cols)
In summary, I want to apply (I) to (II) and finally flag the outliers as in (III).
Thanks for your attempt. :)
The desired output should look like this, but for each of the three dataframes (the example image is not reproduced here).

This may not be exactly what you want, but this is how I'd approach it. You'll have to work out the details of the function, because you have it written to receive a Series rather than a DataFrame. groupby().apply() will send each subset of rows to the function, and then you can perform the actions on that subset and return the result.
For consideration:
inside the function you may be able to handle all columns like so:
def find_outliers(x):
    for col in ['col_X', 'col_Y', 'col_Z']:
        outlier_mask = np.abs(stats.zscore(x[col], ddof=ddof)) > outlier_threshold
        x[col] = ['outlier' if val else '' for val in outlier_mask]
    return x

newdf = df.groupby('col_A').apply(find_outliers)
col_A col_X col_Y col_Z
0 A_1001 outlier
1 A_1001
2 A_1001
3 A_1001 outlier
4 B_1002 outlier
5 B_1002 outlier
6 B_1002
7 B_1002
8 D_1003 outlier
9 D_1003
10 D_1003
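If the goal is to keep the dict structure and get Styler-based highlighting (as in III) rather than replacing the values, a minimal sketch is to build a dict of Styler objects, one per group. This assumes the find_outliers function from (I) and the dict_of_dfs from (II); the styled objects render when displayed in a notebook:
desired_cols = ['col_X', 'col_Y', 'col_Z']
styled = {
    key: frame.style.apply(find_outliers, subset=desired_cols)
    for key, frame in dict_of_dfs.items()
}
# e.g. display the styled version of one group in Jupyter
styled['A_1001']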

Related

How can I improve the speed of pandas rows operations?

I have a large .csv file that has 11,000,000 rows and 3 columns: id, magh, mixid2.
What I have to do is select the rows with the same id and then check whether these rows all have the same mixid2; if they do, I remove the rows, and if they don't, I initialize a class with the information from the selected rows.
This is my code:
obs = obs.set_index('id')
obs = obs.sort_index()

# dropping elements with only one mixid2 and filling S
ID = obs.index.unique()
S = []
good_bye_list = []
for i in tqdm(ID):
    app = obs.loc[i]
    if len(np.unique([app['mixid2'],])) != 1:
        # fill the class list
        S.append(star(app['magh'].values, app['mixid2'].values, z_in))
    else:
        # drop
        good_bye_list.append(i)
obs = obs.drop(good_bye_list)
The .csv file is very large so it takes 40 min to compute everything.
How can I improve the speed??
Thank you for the help.
This is the .csv file:
id,mixid2,magh
3447001203296326,557,14.25
3447001203296326,573,14.25
3447001203296326,525,14.25
3447001203296326,541,14.25
3447001203296330,540,15.33199977874756
3447001203296330,573,15.33199977874756
3447001203296333,172,17.476999282836914
3447001203296333,140,17.476999282836914
3447001203296333,188,17.476999282836914
3447001203296333,156,17.476999282836914
3447001203296334,566,15.626999855041506
3447001203296334,534,15.626999855041506
3447001203296334,550,15.626999855041506
3447001203296338,623,14.800999641418455
3447001203296338,639,14.800999641418455
3447001203296338,607,14.800999641418455
3447001203296344,521,12.8149995803833
3447001203296344,537,12.8149995803833
3447001203296344,553,12.8149995803833
3447001203296345,620,12.809000015258787
3447001203296345,543,12.809000015258787
3447001203296345,636,12.809000015258787
3447001203296347,558,12.315999984741213
3447001203296347,542,12.315999984741213
3447001203296347,526,12.315999984741213
3447001203296352,615,12.11299991607666
3447001203296352,631,12.11299991607666
3447001203296352,599,12.11299991607666
3447001203296360,540,16.926000595092773
3447001203296360,556,16.926000595092773
3447001203296360,572,16.926000595092773
3447001203296360,524,16.926000595092773
3447001203296367,490,15.80799961090088
3447001203296367,474,15.80799961090088
3447001203296367,458,15.80799961090088
3447001203296369,639,15.175000190734865
3447001203296369,591,15.175000190734865
3447001203296369,623,15.175000190734865
3447001203296369,607,15.175000190734865
3447001203296371,460,14.975000381469727
3447001203296373,582,14.532999992370605
3447001203296373,614,14.532999992370605
3447001203296373,598,14.532999992370605
3447001203296374,184,14.659000396728516
3447001203296374,203,14.659000396728516
3447001203296374,152,14.659000396728516
3447001203296374,136,14.659000396728516
3447001203296374,168,14.659000396728516
3447001203296375,592,14.723999977111815
3447001203296375,608,14.723999977111815
3447001203296375,624,14.723999977111815
3447001203296375,92,14.723999977111815
3447001203296375,76,14.723999977111815
3447001203296375,108,14.723999977111815
3447001203296375,576,14.723999977111815
3447001203296376,132,14.0649995803833
3447001203296376,164,14.0649995803833
3447001203296376,180,14.0649995803833
3447001203296376,148,14.0649995803833
3447001203296377,168,13.810999870300293
3447001203296377,152,13.810999870300293
3447001203296377,136,13.810999870300293
3447001203296377,184,13.810999870300293
3447001203296378,171,13.161999702453613
3447001203296378,187,13.161999702453613
3447001203296378,155,13.161999702453613
3447001203296378,139,13.161999702453613
3447001203296380,565,13.017999649047852
3447001203296380,517,13.017999649047852
3447001203296380,549,13.017999649047852
3447001203296380,533,13.017999649047852
3447001203296383,621,13.079999923706055
3447001203296383,589,13.079999923706055
3447001203296383,605,13.079999923706055
3447001203296384,541,12.732000350952148
3447001203296384,557,12.732000350952148
3447001203296384,525,12.732000350952148
3447001203296385,462,12.784000396728516
3447001203296386,626,12.663999557495115
3447001203296386,610,12.663999557495115
3447001203296386,577,12.663999557495115
3447001203296389,207,12.416000366210938
3447001203296389,255,12.416000366210938
3447001203296389,223,12.416000366210938
3447001203296389,239,12.416000366210938
3447001203296390,607,12.20199966430664
3447001203296390,591,12.20199966430664
3447001203296397,582,16.635000228881836
3447001203296397,598,16.635000228881836
3447001203296397,614,16.635000228881836
3447001203296399,630,17.229999542236328
3447001203296404,598,15.970000267028807
3447001203296404,631,15.970000267028807
3447001203296404,582,15.970000267028807
3447001203296408,540,16.08799934387207
3447001203296408,556,16.08799934387207
3447001203296408,524,16.08799934387207
3447001203296408,572,16.08799934387207
3447001203296409,632,15.84000015258789
3447001203296409,616,15.84000015258789
Hello and welcome to StackOverflow.
In pandas the rule of thumb is that raw loops are almost always slower than the dedicated functions. To apply a function to the sub-DataFrames of rows that fulfill certain criteria, you can use groupby.
In your case the function is a bit unpythonic: filling S is a side effect, and deleting rows of the object you are currently iterating over is dangerous (with a dictionary, for example, you should never do this). That said, you can create a function like this:
def my_func(df):
    if df['mixid2'].nunique() == 1:
        return None
    else:
        S.append(df['mixid2'])
        return df
and apply it to your DataFrame via
S = []
obs.groupby('id').apply(my_func)
This iterates over all sub-DataFrames with the same id and drops them if there is exactly one unique value in mixid2; otherwise it appends the values to the list S.
The resulting DataFrame is 3 rows shorter
Out[38]:
id mixid2 magh
id
3447001203296326 0 3447001203296326 557 14.250000
1 3447001203296326 573 14.250000
... ... ... ...
3447001203296409 98 3447001203296409 632 15.840000
99 3447001203296409 616 15.840000
[97 rows x 3 columns]
and S contains 28 elements, which you could pass into the star constructor just as you did.
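If you prefer to avoid the side effect entirely, a fully vectorized filter is also possible. This is only a sketch, assuming the goal is to keep the ids whose rows have more than one distinct mixid2 (star and z_in are the question's own objects):
# boolean mask: True for rows whose id has more than one distinct mixid2
keep_mask = obs.groupby('id')['mixid2'].transform('nunique') > 1
obs_kept = obs[keep_mask]

# build the class list without deleting rows while iterating
S = [star(g['magh'].values, g['mixid2'].values, z_in)
     for _, g in obs_kept.groupby('id')]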
I guess you want to group by mixid2 and exclude all the elements whose mixid2 appears only once, using set_index. To get the original layout back, we use reset_index after the filtering.
# assuming 'id' and 'mixid2' are regular columns of obs at this point
counts = obs.groupby('mixid2')['id'].count()
df = obs.set_index('mixid2').loc[counts[counts > 1].index].reset_index()
df.shape
(44, 3)
I'm not entirely sure if I understood you correctly, but what you can do is first remove the duplicates in your dataframe and then use the groupby function to get all the remaining data points with the same id:
# dropping all duplicates based on id and mixid2
df.drop_duplicates(["id", "mixid2"], inplace=True)

# then iterate over all groups:
for index, grp in df.groupby(["id"]):
    pass  # do stuff here with the grp
Normally it is a good idea to rely on pandas internal functions, since they are mostly optimised quite well.
new_df = app.groupby(['id','mixid2'], as_index=False).agg('count')
new_df = new_df[new_df['magh'] > 1]
then pass new_df to your function.
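For completeness, if that count table is meant to filter the original rows, one way to map it back (this assumes the grouping is done on the full frame, and that id may currently be the index) is a merge on the surviving (id, mixid2) pairs:
flat = obs.reset_index()  # bring 'id' back as a column if it is the index
pair_counts = flat.groupby(['id', 'mixid2'], as_index=False)['magh'].count()
keep_pairs = pair_counts.loc[pair_counts['magh'] > 1, ['id', 'mixid2']]
filtered = flat.merge(keep_pairs, on=['id', 'mixid2'], how='inner')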

Create dataframe in a loop

I would like to create dataframes in a loop and afterwards use these dataframes in another loop. I tried the eval() function but it didn't work.
For example :
for i in range(5):
    df_i = df[(df.age == i)]
Here I would like to create df_0, df_1, etc., and then concatenate these new dataframes after some calculations:
final_df = pd.concat([df_0, df_1])
for i in range(2, 5):
    final_df = pd.concat([final_df, df_i])
You can create a dict of DataFrames x, with i as the dict keys:
np.random.seed(42)
df = pd.DataFrame({'age': np.random.randint(0, 5, 20)})

x = {}
for i in range(5):
    x[i] = df[df['age'] == i]

final = pd.concat(x.values())
Then you can refer to individual DataFrames as:
x[1]
Output:
age
5 1
13 1
15 1
And concatenate all of them with:
pd.concat(x.values())
Output:
age
18 0
5 1
13 1
15 1
2 2
6 2
...
This way is weird and not recommended, but it can be done.
Answer
for i in range(5):
    exec(f"df_{i} = df[df['age'] == {i}]")
def UDF(dfi):
    # do something in the user-defined function, then return the result
    return dfi

for i in range(5):
    exec(f"df_{i} = UDF(df_{i})")
final_df = pd.concat([df_0, df_1])
for i in range(2, 5):
    final_df = pd.concat([final_df, eval(f"df_{i}")])
Better Way 1
Using a list or a dict to store the dataframes is a better way, since you can access each dataframe by an index or a key.
Since another answer (@perl's) already shows the way using a dict, I will show the way using a list.
def UDF(dfi):
    # do something in the user-defined function, then return the result
    return dfi

dfs = [df[df['age'] == i] for i in range(5)]
final_df = pd.concat(map(UDF, dfs))
Better Way 2
Since you are using pandas.DataFrame, the groupby function is the 'pandas' way to do what you want (maybe, I guess, since I don't know exactly what you want to do. LOL).
def UDF(dfi):
    # do something in the user-defined function, then return the result
    return dfi

final_df = df.groupby('age').apply(UDF)
Reference: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html
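As a concrete, purely illustrative example of the groupby route, the placeholder UDF could be any per-group transformation; here it is a hypothetical function that numbers the rows inside each age group:
import numpy as np
import pandas as pd

np.random.seed(42)
df = pd.DataFrame({'age': np.random.randint(0, 5, 20)})

def UDF(dfi):
    # hypothetical per-group calculation: number the rows within each age group
    dfi = dfi.copy()
    dfi['rank_in_group'] = range(len(dfi))
    return dfi

final_df = df.groupby('age', group_keys=False).apply(UDF)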

Python Pandas aggregation with condition

I need to group my dataframe and use several aggregation functions on different columns, and some of these aggregations have conditions.
Here is an example. The data are all the orders from 2 customers, and I would like to calculate some information on each customer, like their order count, their total spending and their average spending.
import pandas as pd
data = {'order_id' : range(1,9),
'cust_id' : [1]*5 + [2]*3,
'order_amount' : [100,50,70,75,80,105,30,20],
'cust_days_since_reg' : [0,10,25,37,52,0,17,40]}
orders = pd.DataFrame(data)
aggregation = {'order_id' : 'count',
'order_amount' : ['sum', 'mean']}
cust = orders.groupby('cust_id').agg(aggregation).reset_index()
cust.columns = ['_'.join(col) for col in cust.columns.values]
This works fine and gives me the aggregated count, sum and mean per customer.
But I have to add an aggregation function with an argument and a condition: the amount a customer spent in their first X months (X must be customizable).
Since I need an argument in this aggregation, I tried:
def spendings_X_month(group, n_months):
    return group.loc[group['cust_days_since_reg'] <= n_months*30,
                     'order_amount'].sum()

aggregation = {'order_id' : 'count',
               'order_amount' : ['sum',
                                 'mean',
                                 lambda x: spendings_X_month(x, 1)]}
cust = orders.groupby('cust_id').agg(aggregation).reset_index()
But that last line gives me the error KeyError: 'cust_days_since_reg'.
It must be a scoping issue; the cust_days_since_reg column must not be visible in this situation.
I could calculate this last column separately and then join the resulting dataframe to the first, but there must be a better solution that does everything in only one groupby.
Could anyone help me with this problem please ?
Thank You
You cannot use agg, because each aggregation function works with only one column, so this kind of filtering based on another column is not possible.
Solution: use GroupBy.apply:
def spendings_X_month(group, n_months):
    a = group['order_id'].count()
    b = group['order_amount'].sum()
    c = group['order_amount'].mean()
    d = group.loc[group['cust_days_since_reg'] <= n_months*30,
                  'order_amount'].sum()
    cols = ['order_id_count', 'order_amount_sum', 'order_amount_mean', 'order_amount_spendings']
    return pd.Series([a, b, c, d], index=cols)

cust = orders.groupby('cust_id').apply(spendings_X_month, 1).reset_index()
print(cust)
cust_id order_id_count order_amount_sum order_amount_mean \
0 1 5.0 375.0 75.000000
1 2 3.0 155.0 51.666667
order_amount_spendings
0 220.0
1 135.0
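Another option, assuming pandas 0.25 or newer, is to precompute the conditional amount as an ordinary column and then stay with a single agg call via named aggregation (the column names below are just illustrative):
n_months = 1
orders['amount_first_X_months'] = orders['order_amount'].where(
    orders['cust_days_since_reg'] <= n_months * 30, 0)

cust = (orders.groupby('cust_id')
              .agg(order_id_count=('order_id', 'count'),
                   order_amount_sum=('order_amount', 'sum'),
                   order_amount_mean=('order_amount', 'mean'),
                   order_amount_spendings=('amount_first_X_months', 'sum'))
              .reset_index())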

How to implement a select-like function

I have a dataset in Python and its structure is like
Tree Species        number of trunks
------------------------------------
Acer rubrum         1
Quercus bicolor     1
Quercus bicolor     1
aabbccdd            0
and I have a question: can I implement a function similar to
Select sum(number of trunks)
from trees.data['Number of Trunks']
where x = trees.data["Tree Species"]
group by trees.data["Tree Species"]
in Python? x is an array containing five elements:
x = array(['Acer rubrum', 'Acer saccharum', 'Acer saccharinum',
'Quercus rubra', 'Quercus bicolor'], dtype='<U16')
What I want to do is map each element in x to trees.data["Tree Species"] and calculate the sum of the number of trunks; it should return an array like
array = (sum_num(Acer rubrum), sum_num(Acer saccharum), sum_num(Acer saccharinum),
         sum_num(Quercus rubra), sum_num(Quercus bicolor))
You may want to look at Python pandas. That will allow you to do something like
df.groupby('Tree Species')['Number of Trunks'].sum()
Please note that here df is whatever variable name you read your data frame into. I would recommend you look at pandas and lambda functions too.
You can do something like this:
import pandas as pd
df = pd.DataFrame()
tree_species = ["Acer rubrum", "Quercus bicolor", "Quercus bicolor", "aabbccdd"]
no_of_trunks = [1,1,1,0]
df["Tree Species"] = tree_species
df["Number of Trunks"] = no_of_trunks
df.groupby('Tree Species').sum() #This will create a pandas dataframe
df.groupby('Tree Species')['Number of Trunks'].sum() #This will create a pandas series.
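To get the result back as an array ordered like the x array from the question, one possible follow-up is to reindex the grouped sums by x (species that never occur in the data get 0):
sums = df.groupby('Tree Species')['Number of Trunks'].sum()
result = sums.reindex(x, fill_value=0).to_numpy()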
You can do the same thing by just using dictionaries too:
tree_species = ["Acer rubrum", "Quercus bicolor", "Quercus bicolor", "aabbccdd"]
no_of_trunks = [1,1,1,0]
d = {}
for key, trunk in zip(tree_species, no_of_trunks):
if not key in d.keys():
d[key] = 0
d[key] += trunk
print(d)

What is the Pythonic way to apply a function to a multi-index, multi-column dataFrame?

Given the multi-index, multi-column dataframe below, I want to apply LinearRegression to each block of this dataframe, for example the block "index (X, 1), column A", and compute the predicted dataframe as df_result.
A B
X 1 1997-01-31 -0.061332 0.630682
1997-02-28 -2.671818 0.377036
1997-03-31 0.861159 0.303689
...
1998-01-31 0.535192 -0.076420
...
1998-12-31 1.430995 -0.763758
Y 1 1997-01-31 -0.061332 0.630682
1997-02-28 -2.671818 0.377036
1997-03-31 0.861159 0.303689
...
1998-01-31 0.535192 -0.076420
...
1998-12-31 1.430995 -0.763758
Here is what I tried:
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
N = 24
dates = pd.date_range('19970101', periods=N, freq='M')
df=pd.DataFrame(np.random.randn(len(dates),2),index=dates,columns=list('AB'))
df2=pd.concat([df,df],keys=[('X','1'),('Y','1')])
regr = LinearRegression()
# df_result will be reassigned; copy the index and metadata from df2
df_result = df2.copy()
# I know the double loop below is not a clever idea. What is the right way?
for row in df2.index.to_series().unique():
    for col in df2.columns:
        # df2 can contain missing values
        lenX = np.count_nonzero(df2.ix[row[:1], col].notnull().values.ravel())
        X = np.array(range(lenX)).reshape(lenX, 1)
        y = df2.ix[row[:1], col]
        y = y[y.notnull()]
        # train the model
        regr.fit(X, y)
        df_result.ix[row[:1], col][:lenX] = regr.predict(X)
The problem is that the double loop above makes the computation quite slow: more than ten minutes for a 100 KB data set. What is the Pythonic way to do this?
EDIT:
A second issue, with the last line of the code above, is that I am working with a copy of a slice of the dataframe, so some columns of df_result are not updated by this operation.
EDIT2:
Some columns of the original data can contain missing values, and we cannot apply the regression directly to them. For example,
df2.ix[('X','1','1997-12-31')]['A']=np.nan
df2.ix[('Y','1','1998-12-31')]['A']=np.nan
I don't quite understand the row looping.
Anyhow, to maintain consistency in the numbers I put np.random.seed(1) at the top.
In short, I think you can achieve what you want with a function, groupby, and a call to .transform().
def do_regression(y):
    X = np.array(range(len(y))).reshape(len(y), 1)
    regr.fit(X, y)
    return regr.predict(X)

df_regressed = df2.groupby(level=[0,1]).transform(do_regression)
print df_regressed.head()
A B
X 1 1997-01-31 0.779476 -1.222119
1997-02-28 0.727184 -1.138630
1997-03-31 0.674892 -1.055142
1997-04-30 0.622601 -0.971653
1997-05-31 0.570309 -0.888164
which matches your df_result output.
print df_result.head()
A B
X 1 1997-01-31 0.779476 -1.222119
1997-02-28 0.727184 -1.138630
1997-03-31 0.674892 -1.055142
1997-04-30 0.622601 -0.971653
1997-05-31 0.570309 -0.888164
Oh, and a couple of alternatives for:
X=np.array(range(len(y))).reshape(len(y),1)
1.) X = np.expand_dims(range(len(y)), axis=1)
2.) X = np.arange(len(y))[:,np.newaxis]
Edit for missing data
OK, two suggestions:
Would it be legitimate to use the interpolate method to fill the null values?
df2 = df2.interpolate()
OR
do the regression on the non-null values and then put the nulls back in at the appropriate index positions:
def do_regression(y):
    x_s = np.arange(len(y))
    x_s_non_nulls = x_s[y.notnull().values]
    x_s_non_nulls = np.expand_dims(x_s_non_nulls, axis=1)
    y_non_nulls = y[y.notnull()]  # get the non-nulls
    regr.fit(x_s_non_nulls, y_non_nulls)  # regression
    results = regr.predict(x_s_non_nulls)
    # pop the nulls back in, one position at a time in increasing order
    for idx in np.where(y.isnull().values)[0]:
        results = np.insert(results, idx, np.NaN)
    return results
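The null-aware version can then be applied exactly like the first one; a minimal sketch of the call, plus a quick check that the NaN positions survive:
df_result = df2.groupby(level=[0, 1]).transform(do_regression)

# rows that were NaN in the input should still be NaN in the prediction
nan_counts = df_result.isnull().sum()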
