how to apply custom function to each row of pandas dataframe - python

I have the following example:
import pandas as pd
import numpy as np

df = pd.DataFrame([(0, 2, 5), (2, 4, None), (7, -5, 4), (1, None, None)])

def clean(series):
    start = np.min(list(series.index[pd.isnull(series)]))
    end = len(series)
    series[start:] = series[start-1]
    return series
My objective is to obtain a dataframe in which each row containing a None value is filled in with the last available numerical value.
So, for example, running this function on just the 3rd row of the dataframe would produce the following:
row = df.loc[3]  # .ix in older versions of pandas
test = clean(row)
test
0 1.0
1 1.0
2 1.0
Name: 3, dtype: float64
I cannot get this to work using the .apply() method, i.e. df.apply(clean, axis=1)
I should mention that this is a toy example - the custom function I would write in the real one is more dynamic in how it fills the values - so I am not looking for basic utilities like .ffill or .fillna

The apply method didn't work because, for a completely filled row, series.index[pd.isnull(series)] is empty, so your clean function has no index to start from.
So guard the mutation with a condition, i.e.
def clean(series):
    # Work on a copy for safety
    series = series.copy()
    # Only alter the series if it actually contains a null value;
    # for a completely filled row, series.index[pd.isnull(series)]
    # returns an empty Int64Index([], dtype='int64')
    if pd.isnull(series).any():
        start = np.min(list(series.index[pd.isnull(series)]))
        series[start:] = series[start-1]
    return series

df.apply(clean, axis=1)
Output:
0 1 2
0 0.0 2.0 5.0
1 2.0 4.0 4.0
2 7.0 -5.0 4.0
3 1.0 1.0 1.0
Hope this clarifies why apply didn't work. I also suggest considering pandas built-ins for cleaning the data rather than writing functions from scratch.

First, this one-liner solves your toy problem, though as you said it isn't what you want:
df.ffill(axis=1)
Next, I tried your code as-is:
df.apply(clean, axis=1)
# ...start = np.min(list(series.index[pd.isnull(series)]))...
# => ValueError: ('zero-size array to reduction operation minimum
#    which has no identity', 'occurred at index 0')
To understand the situation, test with a lambda function:
df.apply(lambda series:list(series.index[pd.isnull(series)]),axis=1)
0 []
1 [2]
2 []
3 [1, 2]
dtype: object
And the following expression raises the same ValueError:
import numpy as np
np.min([])
In conclusion, pandas.apply() works well, but the clean function doesn't.
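A minimal sketch of the guard that makes such a function safe on fully filled rows (the data here is illustrative):

```python
import pandas as pd

s = pd.Series([0.0, 2.0, 5.0])    # a fully filled row
null_idx = s.index[s.isna()]      # empty Index for this row
# Guard before taking min(): an empty index has no minimum
start = null_idx.min() if len(null_idx) else None
print(start)  # None
```

For a row like pd.Series([1.0, None, None]), the same expression returns 1, the first null position.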

Could you use fillna with backfill? I think this might be more efficient, if backfilling meets your scenario, i.e.
df.fillna(method='backfill')
However, this assumes the missing cells contain np.nan.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html

Related

Trying to compare two values in a pandas dataframe for max value

I've got a pandas dataframe, and I'm trying to fill a new column in the dataframe, which takes the maximum value of two values situated in another column of the dataframe, iteratively. I'm trying to build a loop to do this, and save time with computation as I realise I could probably do it with more lines of code.
for x in jac_input.index:
    jac_output['Max Load'][x] = jac_input[['load'][x], ['load'][x+1]].max()
However, I keep getting this error during the comparison
IndexError: list index out of range
Any ideas as to where I'm going wrong here? Any help would be appreciated!
Many things are wrong with your current code.
When you do ['abc'][x], x can only take the value 0 and this will return 'abc' as you are slicing a list. Not at all what you expect it to do (I imagine, slicing the Series).
For your code to be valid, you should do something like:
jac_input = pd.DataFrame({'load': [1, 0, 3, 2, 5, 4]})
for x in jac_input.index:
    print(jac_input['load'].loc[x:x+1].max())
output:
1
3
3
5
5
4
Also, when assigning, jac_output['Max Load'][x] = ... will likely trigger a SettingWithCopyWarning. You should rather use loc: jac_output.loc[x, 'Max Load'] = ...
But you do not need all that; use vectorized code instead!
You can perform rolling on the reversed dataframe:
jac_output['Max Load'] = jac_input['load'][::-1].rolling(2, min_periods=1).max()[::-1]
Or using concat:
jac_output['Max Load'] = pd.concat([jac_input['load'], jac_input['load'].shift(-1)], axis=1).max(1)
output (without assignment):
0 1.0
1 3.0
2 3.0
3 5.0
4 5.0
5 4.0
dtype: float64
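For completeness, the same pairwise maximum can also be sketched with numpy's elementwise maximum; fillna handles the last row, where shift(-1) leaves a NaN (a variant sketch, not part of the original answer):

```python
import numpy as np
import pandas as pd

jac_input = pd.DataFrame({'load': [1, 0, 3, 2, 5, 4]})
nxt = jac_input['load'].shift(-1).fillna(jac_input['load'])
# Elementwise max of each value and its successor
result = np.maximum(jac_input['load'], nxt)
print(result.tolist())  # [1.0, 3.0, 3.0, 5.0, 5.0, 4.0]
```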

Pandas - change cell value based on conditions from cell and from column

I have a Dataframe with a lot of "bad" cells. Let's say they all have -99.99 as their value, and I want to remove them (set them to NaN).
This works fine:
df[df == -99.99] = None
But actually I want to delete these cells ONLY if another cell in the same row is marked as 1 (e.g. in the column "Error").
I want to delete all -99.99 cells, but only where df["Error"] == 1.
The most straightforward solution I think is something like
df[(df == -99.99) & (df["Error"] == 1)] = None
but it gives me the error:
ValueError: cannot reindex from a duplicate axis
I tried every given solutions on the internet but I cant get it to work! :(
Since my Dataframe is big I don't want to iterate it (which of course, would work, but take a lot of time).
Any hint?
Try using broadcasting while passing numpy values:
# sample data, special value is -99
# sample data, special value is -99
df = pd.DataFrame([[-99, -99, 1], [2, -99, 2],
                   [1, 1, 1], [-99, 0, 1]],
                  columns=['a', 'b', 'Errors'])
# note the double square brackets
df[(df == -99) & (df[['Errors']] == 1).values] = np.nan
Output:
a b Errors
0 NaN NaN 1
1 2.0 -99.0 2
2 1.0 1.0 1
3 NaN 0.0 1
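An equivalent sketch using DataFrame.mask, which replaces values wherever the condition holds (column names follow the sample above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[-99, -99, 1], [2, -99, 2], [1, 1, 1], [-99, 0, 1]],
                  columns=['a', 'b', 'Errors'])
# Same broadcasting trick as above; mask() sets matching cells to NaN
cond = (df == -99) & (df[['Errors']] == 1).values
df = df.mask(cond)
```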
At least, this works (but with column iteration):
for i in df.columns:
    df.loc[df[i].isin([-99.99]) & df["Error"].isin([1]), i] = None

Returning date that corresponds with maximum value in pandas dataframe [duplicate]

How can I find the row for which the value of a specific column is maximal?
df.max() will give me the maximal value for each column, I don't know how to get the corresponding row.
Use the pandas idxmax function. It's straightforward:
>>> import pandas
>>> import numpy as np
>>> df = pandas.DataFrame(np.random.randn(5,3),columns=['A','B','C'])
>>> df
A B C
0 1.232853 -1.979459 -0.573626
1 0.140767 0.394940 1.068890
2 0.742023 1.343977 -0.579745
3 2.125299 -0.649328 -0.211692
4 -0.187253 1.908618 -1.862934
>>> df['A'].idxmax()
3
>>> df['B'].idxmax()
4
>>> df['C'].idxmax()
1
Alternatively you could also use numpy.argmax, such as numpy.argmax(df['A']) -- it provides the same thing, and appears at least as fast as idxmax in cursory observations.
idxmax() returns index labels, not integers.
Example: if you have string values as your index labels, like rows 'a' through 'e', you might want to know that the max occurs in row 4 (not in row 'd').
If you want the integer position of that label within the Index, you have to get it manually (which can be tricky now that duplicate row labels are allowed).
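One way to get the positional index manually is to drop to the NumPy layer, which is immune to duplicate labels (a sketch with illustrative data):

```python
import numpy as np
import pandas as pd

s = pd.Series([2, 4, 4, 3], index=['a', 'b', 'c', 'd'])
label = s.idxmax()                  # 'b': a label, not a position
pos = int(np.argmax(s.to_numpy()))  # 1: position of the (first) max
assert s.iloc[pos] == s.max()
```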
HISTORICAL NOTES:
idxmax() used to be called argmax() prior to 0.11
argmax was deprecated prior to 1.0.0 and removed entirely in 1.0.0
back as of Pandas 0.16, argmax used to exist and perform the same function (though appeared to run more slowly than idxmax).
argmax function returned the integer position within the index of the row location of the maximum element.
pandas moved to using row labels instead of integer indices. Positional integer indices used to be very common, more common than labels, especially in applications where duplicate row labels are common.
For example, consider this toy DataFrame with a duplicate row label:
In [19]: dfrm
Out[19]:
A B C
a 0.143693 0.653810 0.586007
b 0.623582 0.312903 0.919076
c 0.165438 0.889809 0.000967
d 0.308245 0.787776 0.571195
e 0.870068 0.935626 0.606911
f 0.037602 0.855193 0.728495
g 0.605366 0.338105 0.696460
h 0.000000 0.090814 0.963927
i 0.688343 0.188468 0.352213
i 0.879000 0.105039 0.900260
In [20]: dfrm['A'].idxmax()
Out[20]: 'i'
In [21]: dfrm.loc[dfrm['A'].idxmax()] # .ix instead of .loc in older versions of pandas
Out[21]:
A B C
i 0.688343 0.188468 0.352213
i 0.879000 0.105039 0.900260
So here a naive use of idxmax is not sufficient, whereas the old form of argmax would correctly provide the positional location of the max row (in this case, position 9).
This is exactly one of those nasty kinds of bug-prone behaviors in dynamically typed languages that makes this sort of thing so unfortunate, and worth beating a dead horse over. If you are writing systems code and your system suddenly gets used on some data sets that are not cleaned properly before being joined, it's very easy to end up with duplicate row labels, especially string labels like a CUSIP or SEDOL identifier for financial assets. You can't easily use the type system to help you out, and you may not be able to enforce uniqueness on the index without running into unexpectedly missing data.
So you're left hoping that your unit tests covered everything (they didn't, or more likely no one wrote any). Otherwise, most likely, you're just waiting to smack into this error at runtime, in which case you probably have to drop many hours' worth of work from the database you were outputting results to, bang your head against the wall in IPython trying to manually reproduce the problem, and finally figure out that it's because idxmax can only report the label of the max row. Then you're disappointed that no standard function automatically gets the positions of the max row for you, so you write a buggy implementation yourself, edit the code, and pray you don't run into the problem again.
You might also try idxmax:
In [5]: df = pandas.DataFrame(np.random.randn(10,3),columns=['A','B','C'])
In [6]: df
Out[6]:
A B C
0 2.001289 0.482561 1.579985
1 -0.991646 -0.387835 1.320236
2 0.143826 -1.096889 1.486508
3 -0.193056 -0.499020 1.536540
4 -2.083647 -3.074591 0.175772
5 -0.186138 -1.949731 0.287432
6 -0.480790 -1.771560 -0.930234
7 0.227383 -0.278253 2.102004
8 -0.002592 1.434192 -1.624915
9 0.404911 -2.167599 -0.452900
In [7]: df.idxmax()
Out[7]:
A 0
B 8
C 7
e.g.
In [8]: df.loc[df['A'].idxmax()]
Out[8]:
A 2.001289
B 0.482561
C 1.579985
Both answers above only return one index if there are multiple rows that take the maximum value. If you want all the rows, there does not seem to be a built-in function.
But it is not hard to do. Below is an example for a Series; the same can be done for a DataFrame:
In [1]: from pandas import Series, DataFrame
In [2]: s=Series([2,4,4,3],index=['a','b','c','d'])
In [3]: s.idxmax()
Out[3]: 'b'
In [4]: s[s==s.max()]
Out[4]:
b 4
c 4
dtype: int64
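The DataFrame analogue of the Series example above, using the same boolean-mask-against-the-max logic (illustrative data):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 0, 0, 0, 1], 'B': [0, 0, 0, 1, 0]})
# All rows where column 'A' attains its maximum (handles ties)
rows = df[df['A'] == df['A'].max()]
print(list(rows.index))  # [0, 4]
```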
df.iloc[df['columnX'].argmax()]
argmax() provides the positional index of the max value of columnX (in modern pandas); iloc can then be used to get that row of the DataFrame df.
A more compact and readable solution using query() is like this:
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randn(5, 3), columns=['A', 'B', 'C'])
print(df)
# find row with maximum A
df.query('A == A.max()')
It also returns a DataFrame instead of Series, which would be handy for some use cases.
Very simple: we have df as below and we want to print a row with max value in C:
A B C
x 1 4
y 2 10
z 5 9
In:
df.loc[df['C'] == df['C'].max()] # condition check
Out:
A B C
y 2 10
If you want the entire row instead of just the id, you can use df.nlargest and pass in how many 'top' rows you want and you can also pass in for which column/columns you want it for.
df.nlargest(2,['A'])
will give you the rows corresponding to the top 2 values of A.
use df.nsmallest for min values.
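A small sketch of nlargest with made-up data:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 5, 3], 'B': [9, 2, 7]})
top = df.nlargest(2, 'A')  # rows holding the two largest values of A
print(list(top.index))     # [1, 2]
```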
The direct ".argmax()" solution does not work for me.
The previous example provided by #ely
>>> import pandas
>>> import numpy as np
>>> df = pandas.DataFrame(np.random.randn(5,3),columns=['A','B','C'])
>>> df
A B C
0 1.232853 -1.979459 -0.573626
1 0.140767 0.394940 1.068890
2 0.742023 1.343977 -0.579745
3 2.125299 -0.649328 -0.211692
4 -0.187253 1.908618 -1.862934
>>> df['A'].argmax()
3
>>> df['B'].argmax()
4
>>> df['C'].argmax()
1
returns the following message :
FutureWarning: 'argmax' is deprecated, use 'idxmax' instead. The behavior of 'argmax'
will be corrected to return the positional maximum in the future.
Use 'series.values.argmax' to get the position of the maximum now.
So my solution is:
df['A'].values.argmax()
mx.iloc[0].idxmax()
This one line gives you the column label of the maximum value in a row of the dataframe; here mx is the dataframe and iloc[0] selects the row at position 0.
Considering this dataframe
[In]: df = pd.DataFrame(np.random.randn(4,3),columns=['A','B','C'])
[Out]:
A B C
0 -0.253233 0.226313 1.223688
1 0.472606 1.017674 1.520032
2 1.454875 1.066637 0.381890
3 -0.054181 0.234305 -0.557915
Assuming one wants to know the rows where column "C" is max, the following will do the work:
[In]: df[df['C'] == df['C'].max()]
[Out]:
A B C
1 0.472606 1.017674 1.520032
The idxmax of the DataFrame returns the label index of the row with the maximum value, and the behavior of argmax depends on the version of pandas (right now it returns a warning). If you want to use the positional index, you can do the following:
max_row = df['A'].values.argmax()
or
import numpy as np
max_row = np.argmax(df['A'].values)
Note that np.argmax(df['A']) behaves the same as df['A'].argmax().
Use:
data.iloc[data['A'].idxmax()]
data['A'].idxmax() finds the row location of the max value; data.iloc[] then returns that row. Note that idxmax returns a label, so this pairing is only safe with a default RangeIndex; with custom labels, use data.loc[...] instead.
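A quick illustration of the label-vs-position distinction with a custom index (data made up for the example):

```python
import pandas as pd

data = pd.DataFrame({'A': [3, 7, 5]}, index=[10, 20, 30])
# idxmax returns the *label* 20, so .loc (not .iloc) is the safe lookup
row = data.loc[data['A'].idxmax()]
print(row['A'])  # 7
```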
If there are ties in the maximum values, then idxmax returns the index of only the first max value. For example, in the following DataFrame:
A B C
0 1 0 1
1 0 0 1
2 0 0 0
3 0 1 1
4 1 0 0
idxmax returns
A 0
B 3
C 0
dtype: int64
Now, if we want all indices corresponding to max values, then we could use max + eq to create a boolean DataFrame, then use it on df.index to filter out indexes:
out = df.eq(df.max()).apply(lambda x: df.index[x].tolist())
Output:
A [0, 4]
B [3]
C [0, 1, 3]
dtype: object
What worked for me is:
df[df['colX'] == df['colX'].max()]
You then get the row(s) in your df with the maximum value of colX.
If you just want the index, you can add .index at the end of the query.

How do I "re-group" my Series after performing an apply() on a SeriesGroupBy?

I need to adapt an existing function, that essentially performs a Series.str.contains and returns the resulting Series, to be able to handle SeriesGroupBy as input.
As suggested by the pandas error message
Cannot access attribute 'str' of 'SeriesGroupBy' objects, try using the 'apply' method
I have tried to use apply() on the SeriesGroupBy object, which works in a way, but results in a Series object. I would now like to apply the same grouping as before, to this Series.
Original function
def contains(series, expression):
    return series.str.contains(expression)
My attempt so far
>>> import pandas as pd
... from functools import partial
...
... def _f(series, expression):
...     return series.str.contains(expression)
...
... def contains(grouped_series, expression):
...     result = grouped_series.apply(partial(_f, expression=expression))
...     return result
>>> df = pd.DataFrame(zip([1,1,2,2], ['abc', 'def', 'abq', 'bcq']), columns=['group', 'text'])
>>> gdf = df.groupby('group')
>>> gs = gdf['text']
>>> type(gs)
<class 'pandas.core.groupby.generic.SeriesGroupBy'>
>>> r = contains(gdf['text'], 'b')
>>> r
0 True
1 False
2 True
3 True
Name: text, dtype: bool
>>> type(r)
<class 'pandas.core.series.Series'>
The desired result would be a boolean series grouped by the same indices as the original grouped_series.
The actual result is a Series object without any grouping.
EDIT / CLARIFICATION:
The initial answers make me think I didn't stress the core of the problem enough. For the sake of the question, lets assume I cannot change anything outside of the contains(grouped_series, expression) function.
I think I know how to solve my problem if I approach it from another angle, and if I don't that would then become another question. The real world context makes it very complicated to change code outside of that one function. So I would really appreciate suggestions that work within that constraint.
So, let me rephrase the question as follows:
I'm looking for a function contains(grouped_series, expression), so that the following code works:
>>> df = pd.DataFrame(zip([1,1,2,2], ['abc', 'def', 'abq', 'bcq']), columns=['group', 'text'])
>>> grouped_series = contains(df.groupby('group')['text'], 'b')
>>> grouped_series.sum()
group
1 1.0
2 2.0
Name: text, dtype: float64
groupby is not needed unless you want to do something with the "group" -- like calculating its sum or checking whether all rows in the group contain the letter b. When you call apply on a GroupBy object, you can pass additional arguments to the function being applied as keywords:
def contains(frame, expression):
    return frame['text'].str.contains(expression).all()

df.groupby('group').apply(contains, expression='b')
Result:
group
1 False
2 True
dtype: bool
I like to think that the first parameter to the function being applied (frame) is a smaller view of the original dataframe, being chopped up by the groupby clause.
That said, apply is pretty slow compared to specialized aggregate functions like min, max or sum. Use those as much as possible and save apply for complex cases.
Following the advice of the error message, you could use apply:
df.groupby('group').apply(lambda x : x.text.str.contains('b'))
Out[10]:
group
1 0 True
1 False
2 2 True
3 True
Name: text, dtype: bool
If you want to put these indices into your data set and return a DataFrame, use reset_index:
df.groupby('group').apply(lambda x : x.text.str.contains('b')).reset_index()
Out[11]:
group level_1 text
0 1 0 True
1 1 1 False
2 2 2 True
3 2 3 True
_f has absolutely no relationship to the groups. The way to deal with this is to instead define a column prior to grouping (not a separate function), then group. Now that column (called 'to_sum') is part of your SeriesGroupBy object.
df.assign(to_sum = _f(df['text'], 'b')).groupby('group').to_sum.sum()
#group
#1 1.0
#2 2.0
#Name: to_sum, dtype: float64
If you don't need the entire DataFrame for your subsequent operations, you can sum the Series returned by _f using df to group (as they will share the same index)
_f(df['text'], 'b').groupby(df['group']).sum()
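A runnable sketch of that shared-index trick on the question's sample data:

```python
import pandas as pd

df = pd.DataFrame(list(zip([1, 1, 2, 2], ['abc', 'def', 'abq', 'bcq'])),
                  columns=['group', 'text'])
# The boolean Series shares df's index, so df['group'] can group it directly
result = df['text'].str.contains('b').groupby(df['group']).sum()
print(result.to_dict())  # {1: 1, 2: 2}
```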
You can just do this -- no need to apply anything to the grouped object:
df['eval'] = df['text'].str.contains('b')
eval is the name of the column you want to add; you can name it whatever you want.
df.groupby('group')['eval'].sum()
Run this after the first line. The result is
group
1 1.0
2 2.0

faster replacement of -1 and 0 to NaNs in column for a large dataset

The 'azdias' dataframe is my main dataset, and its metadata (feature summary) lies in the dataframe 'feat_info'. 'feat_info' lists, for every column, the values that should be treated as NaN.
Ex: column1 has [-1, 0] as NaN values, so my job is to find and replace every -1 and 0 in column1 with NaN.
I have tried following in jupyter notebook.
def NAFunc(x, miss_unknown_list):
    x_output = x
    for i in miss_unknown_list:
        try:
            miss_unknown_value = float(i)
        except ValueError:
            miss_unknown_value = i
        if x == miss_unknown_value:
            x_output = np.nan
            break
    return x_output

for cols in azdias.columns.tolist():
    NAList = feat_info[feat_info.attribute == cols]['missing_or_unknown'].values[0]
    azdias[cols] = azdias[cols].apply(lambda x: NAFunc(x, NAList))
Question 1: I am trying to impute these NaN values, but my code is very slow. How can I speed up the execution?
I have attached sample of both dataframes:
azdias_sample
AGER_TYP ALTERSKATEGORIE_GROB ANREDE_KZ CJT_GESAMTTYP FINANZ_MINIMALIST
0 -1 2 1 2.0 3
1 -1 1 2 5.0 1
2 -1 3 2 3.0 1
3 2 4 2 2.0 4
4 -1 3 1 5.0 4
feat_info_sample
attribute information_level type missing_or_unknown
AGER_TYP person categorical [-1,0]
ALTERSKATEGORIE_GROB person ordinal [-1,0,9]
ANREDE_KZ person categorical [-1,0]
CJT_GESAMTTYP person categorical [0]
FINANZ_MINIMALIST person ordinal [-1]
If the azdias dataset is obtained from read_csv or similar IO functions, the na_values keyword argument can be used to specify column-specific missing-value representations, so that the returned data frame already has NaN values in place from the very beginning. Sample code is shown below.
from ast import literal_eval
feat_info.set_index("attribute", inplace=True)
# A more concise but less efficient alternative is
# na_dict = feat_info["missing_or_unknown"].apply(literal_eval).to_dict()
na_dict = {attr: literal_eval(val) for attr, val in feat_info["missing_or_unknown"].items()}
df_azdias = pd.read_csv("azdias.csv", na_values=na_dict)
As for the data type, there is no built-in NaN representation for integer data types. Hence a float data type is needed. If the missing values are imputed using fillna, the downcast argument can be specified to make the returned series or data frame have an appropriate data type.
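If the data is already loaded, the same parsed dict can drive a single vectorized DataFrame.replace call; this sketch uses a tiny made-up sample in place of the real azdias/feat_info data:

```python
import numpy as np
import pandas as pd
from ast import literal_eval

azdias = pd.DataFrame({'AGER_TYP': [-1, -1, 2],
                       'CJT_GESAMTTYP': [2.0, 0.0, 3.0]})
feat_info = pd.DataFrame({'attribute': ['AGER_TYP', 'CJT_GESAMTTYP'],
                          'missing_or_unknown': ['[-1,0]', '[0]']})
# Parse the stringified lists once, then replace per-column in one call
na_dict = {row.attribute: literal_eval(row.missing_or_unknown)
           for row in feat_info.itertuples()}
azdias = azdias.replace({col: {v: np.nan for v in vals}
                         for col, vals in na_dict.items()})
```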
Try using the DataFrame's replace method. How about this?
for c in azdias.columns.tolist():
    replace_list = feat_info[feat_info['attribute'] == c]['missing_or_unknown'].values
    azdias[c] = azdias[c].replace(to_replace=list(replace_list), value=np.nan)
A couple things I'm not sure about without being able to execute your code:
In your example, you used .values[0]. Don't you want all the values?
I'm not sure if it's necessary to do to_replace=list(replace_list), it may work to just use to_replace=replace_list.
In general, I recommend thinking to yourself "surely Pandas has a function to do this for me." Often, they do. For performance with Pandas generally, avoid looping over and setting things. Vectorized methods tend to be much faster.
