How can I find the row for which the value of a specific column is maximal?
df.max() will give me the maximal value for each column, but I don't know how to get the corresponding row.
Use the pandas idxmax function. It's straightforward:
>>> import pandas
>>> import numpy as np
>>> df = pandas.DataFrame(np.random.randn(5,3),columns=['A','B','C'])
>>> df
A B C
0 1.232853 -1.979459 -0.573626
1 0.140767 0.394940 1.068890
2 0.742023 1.343977 -0.579745
3 2.125299 -0.649328 -0.211692
4 -0.187253 1.908618 -1.862934
>>> df['A'].idxmax()
3
>>> df['B'].idxmax()
4
>>> df['C'].idxmax()
1
Alternatively, you could also use numpy.argmax, as in numpy.argmax(df['A']); it provides the same thing and appears at least as fast as idxmax in cursory observations.
idxmax() returns index labels, not integers.
Example: if you have string values as your index labels, like rows 'a' through 'e', you might want to know that the max occurs in row 4 (not row 'd').
If you want the integer position of that label within the Index, you have to get it manually (which can be tricky now that duplicate row labels are allowed).
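For example, a minimal sketch of getting that position manually with Index.get_loc, reusing the df above (this assumes the index labels are unique; with duplicate labels, get_loc can return a slice or boolean mask instead of a single integer):
>>> label = df['A'].idxmax()
>>> df.index.get_loc(label)  # integer position of that label
3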
HISTORICAL NOTES:
idxmax() used to be called argmax() prior to 0.11.
argmax was deprecated prior to 1.0.0 and removed entirely in 1.0.0.
As of pandas 0.16, argmax still existed and performed the same function (though it appeared to run more slowly than idxmax).
The argmax function returned the integer position within the index of the row containing the maximum element.
pandas then moved to using row labels instead of integer indices. Positional integer indices used to be very common, more common than labels, especially in applications where duplicate row labels are common.
For example, consider this toy DataFrame with a duplicate row label:
In [19]: dfrm
Out[19]:
A B C
a 0.143693 0.653810 0.586007
b 0.623582 0.312903 0.919076
c 0.165438 0.889809 0.000967
d 0.308245 0.787776 0.571195
e 0.870068 0.935626 0.606911
f 0.037602 0.855193 0.728495
g 0.605366 0.338105 0.696460
h 0.000000 0.090814 0.963927
i 0.688343 0.188468 0.352213
i 0.879000 0.105039 0.900260
In [20]: dfrm['A'].idxmax()
Out[20]: 'i'
In [21]: dfrm.loc[dfrm['A'].idxmax()] # .ix instead of .loc in older versions of pandas
Out[21]:
A B C
i 0.688343 0.188468 0.352213
i 0.879000 0.105039 0.900260
So here a naive use of idxmax is not sufficient, whereas the old form of argmax would correctly provide the positional location of the max row (in this case, position 9).
This is exactly one of those nasty kinds of bug-prone behavior in dynamically typed languages that make this sort of thing so unfortunate, and worth beating a dead horse over. If you are writing systems code and your system suddenly gets used on data sets that were not cleaned properly before being joined, it's very easy to end up with duplicate row labels, especially string labels like a CUSIP or SEDOL identifier for financial assets. You can't easily use the type system to help you out, and you may not be able to enforce uniqueness on the index without running into unexpectedly missing data.
So you're left hoping that your unit tests covered everything (they didn't, or more likely no one wrote any tests). Otherwise, most likely, you're just left waiting to smack into this error at runtime. When you do, you probably have to drop many hours' worth of work from the database you were outputting results to, bang your head against the wall in IPython trying to reproduce the problem manually, and finally figure out that it's because idxmax can only report the label of the max row. Then you're disappointed that no standard function automatically gets the position of the max row for you, so you write a buggy implementation yourself, edit the code, and pray you don't run into the problem again.
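A sketch of that positional workaround, continuing the session above: go through the underlying NumPy array, whose argmax is always positional, then index back with iloc:
In [22]: dfrm['A'].values.argmax()   # NumPy argmax is positional, so duplicate labels don't matter
Out[22]: 9
In [23]: dfrm.iloc[dfrm['A'].values.argmax()]
Out[23]:
A    0.879000
B    0.105039
C    0.900260
Name: i, dtype: float64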
You might also try idxmax:
In [5]: df = pandas.DataFrame(np.random.randn(10,3),columns=['A','B','C'])
In [6]: df
Out[6]:
A B C
0 2.001289 0.482561 1.579985
1 -0.991646 -0.387835 1.320236
2 0.143826 -1.096889 1.486508
3 -0.193056 -0.499020 1.536540
4 -2.083647 -3.074591 0.175772
5 -0.186138 -1.949731 0.287432
6 -0.480790 -1.771560 -0.930234
7 0.227383 -0.278253 2.102004
8 -0.002592 1.434192 -1.624915
9 0.404911 -2.167599 -0.452900
In [7]: df.idxmax()
Out[7]:
A 0
B 8
C 7
e.g.
In [8]: df.loc[df['A'].idxmax()]
Out[8]:
A 2.001289
B 0.482561
C 1.579985
Both of the above answers return only one index if there are multiple rows that take the maximum value. If you want all such rows, there does not seem to be a built-in function.
But it is not hard to do. Below is an example for a Series; the same can be done for a DataFrame:
In [1]: from pandas import Series, DataFrame
In [2]: s=Series([2,4,4,3],index=['a','b','c','d'])
In [3]: s.idxmax()
Out[3]: 'b'
In [4]: s[s==s.max()]
Out[4]:
b 4
c 4
dtype: int64
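For a DataFrame, a minimal sketch of the same idea, using a boolean mask against the column max:
In [5]: df = DataFrame({'A': [2, 4, 4, 3]}, index=['a', 'b', 'c', 'd'])
In [6]: df[df['A'] == df['A'].max()]
Out[6]:
   A
b  4
c  4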
df.iloc[df['columnX'].argmax()]
argmax() provides the integer position of the max value in columnX; iloc can then be used to get the corresponding row of the DataFrame df.
A more compact and readable solution using query() is like this:
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randn(5,3), columns=['A','B','C'])
print(df)
# find row with maximum A
df.query('A == A.max()')
It also returns a DataFrame instead of Series, which would be handy for some use cases.
Very simple: we have df as below and we want to print the row with the max value in C:
A B C
x 1 4
y 2 10
z 5 9
In:
df.loc[df['C'] == df['C'].max()] # condition check
Out:
A B C
y 2 10
If you want the entire row instead of just the id, you can use df.nlargest: pass in how many 'top' rows you want, and the column(s) you want them for.
df.nlargest(2,['A'])
will give you the rows corresponding to the top 2 values of A.
Use df.nsmallest for min values.
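For example (a small sketch; note that both nlargest and nsmallest accept keep='all' to return every tied row rather than just the first n):
df.nsmallest(2, ['A'])             # rows with the two smallest values of A
df.nlargest(1, ['A'], keep='all')  # all rows tied for the maximum of A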
The direct ".argmax()" solution does not work for me.
The previous example provided by #ely
>>> import pandas
>>> import numpy as np
>>> df = pandas.DataFrame(np.random.randn(5,3),columns=['A','B','C'])
>>> df
A B C
0 1.232853 -1.979459 -0.573626
1 0.140767 0.394940 1.068890
2 0.742023 1.343977 -0.579745
3 2.125299 -0.649328 -0.211692
4 -0.187253 1.908618 -1.862934
>>> df['A'].argmax()
3
>>> df['B'].argmax()
4
>>> df['C'].argmax()
1
returns the following warning:
FutureWarning: 'argmax' is deprecated, use 'idxmax' instead. The behavior of 'argmax'
will be corrected to return the positional maximum in the future.
Use 'series.values.argmax' to get the position of the maximum now.
So my solution is:
df['A'].values.argmax()
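And to fetch the whole row rather than just the position, a one-line sketch combining it with iloc:
df.iloc[df['A'].values.argmax()]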
mx.iloc[0].idxmax()
This one line of code gives you the column label of the maximum value within a single row of the dataframe; here mx is the dataframe and iloc[0] selects the row at positional index 0.
Considering this dataframe
[In]: df = pd.DataFrame(np.random.randn(4,3),columns=['A','B','C'])
[Out]:
A B C
0 -0.253233 0.226313 1.223688
1 0.472606 1.017674 1.520032
2 1.454875 1.066637 0.381890
3 -0.054181 0.234305 -0.557915
Assuming one wants to know the rows where column "C" is max, the following will do the work:
[In]: df[df['C']==df['C'].max()]
[Out]:
A B C
1 0.472606 1.017674 1.520032
The idxmax of the DataFrame returns the label index of the row with the maximum value, and the behavior of argmax depends on the version of pandas (right now it returns a warning). If you want to use the positional index, you can do the following:
max_row = df['A'].values.argmax()
or
import numpy as np
max_row = np.argmax(df['A'].values)
Note that np.argmax(df['A']) behaves the same as df['A'].argmax().
Use:
data.iloc[data['A'].idxmax()]
data['A'].idxmax() finds the row label of the max value.
data.iloc[] returns that row. (This works here because the default RangeIndex makes labels and positions coincide; with a non-default index, use data.loc[] instead.)
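A small sketch of that caveat with a made-up non-default index:
import pandas as pd

data = pd.DataFrame({'A': [1, 3, 2]}, index=['x', 'y', 'z'])
data.loc[data['A'].idxmax()]     # works: idxmax returns the label 'y'
# data.iloc[data['A'].idxmax()]  # raises TypeError: labels are not positions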
If there are ties in the maximum values, then idxmax returns the index of only the first max value. For example, in the following DataFrame:
A B C
0 1 0 1
1 0 0 1
2 0 0 0
3 0 1 1
4 1 0 0
idxmax returns
A 0
B 3
C 0
dtype: int64
Now, if we want all indices corresponding to the max values, we can use max + eq to create a boolean DataFrame, then use it on df.index to filter the indexes:
out = df.eq(df.max()).apply(lambda x: df.index[x].tolist())
Output:
A [0, 4]
B [3]
C [0, 1, 3]
dtype: object
What worked for me is:
df[df['colX'] == df['colX'].max()]
You then get the row(s) in your df with the maximum value of colX.
If you just want the index, you can add .index at the end of the query.
I am trying to create a function in Python that checks whether the data in the dataframe follows a certain structure.
In my case I need to ensure that the id column is structured like this: ID0101-10.
Here is my code, but it is not working; I keep getting an indexing error:
i = 0
for i in df["id"]:
    if ('-' in df["id"]):
        df["id"].iloc[i] = df["id"].iloc[i]
        i += 1
    else:
        df.drop(df["id"].iloc[i])
        i += 1
If you're curious about my data, it's like this:
id name
ID0101-10 John
ID0101-11 Mary
8454 Test
MMMM MMMM
ID0101-01 Ben
MN87876 00.00
I am trying to clean my data by dropping the dummy values.
EDIT: I get this error:
TypeError: Cannot index by location index with a non-integer key
Any help is appreciated, thanks.
If I understand correctly, you can do this:
import pandas as pd
df = pd.DataFrame({'id':['ID0101-10', 'ID0101-11', '8454', 'MMMM', 'ID0101-01', 'MN87876'],
'name':['John', 'Mary', 'Test', 'MMMM', 'Ben', '00.00']})
result = df[df['id'].str.startswith('ID0101-')]
print(result)
Output:
id name
0 ID0101-10 John
1 ID0101-11 Mary
4 ID0101-01 Ben
As a general rule, you rarely need to loop over pandas dataframes; it's almost always faster to use native pandas functions.
For more complex matches you can use regular expressions: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.match.html
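For instance, a sketch with str.match; the exact pattern (ID, four digits, a hyphen, two digits) is my assumption about the intended format:
# keep only ids shaped like ID0101-10
result = df[df['id'].str.match(r'^ID\d{4}-\d{2}$')]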
I have a Python Pandas Dataframe, in which a column named status contains three kinds of possible values: ok, must read x more books, does not read any books yet, where x is an integer higher than 0.
I want to sort status values according to the order above.
Example:
name status
0 Paul ok
1 Jean must read 1 more books
2 Robert must read 2 more books
3 John does not read any book yet
I've found some interesting hints using pandas Categorical and map, but I don't know how to deal with the variable values embedded in the strings.
How can I achieve that?
Use:
a = df['status'].str.extract(r'(\d+)', expand=False).astype(float)
d = {'ok': a.max() + 1, 'does not read any book yet':-1}
df1 = df.iloc[(-df['status'].map(d).fillna(a)).argsort()]
print (df1)
name status
0 Paul ok
2 Robert must read 2 more books
1 Jean must read 1 more books
3 John does not read any book yet
Explanation:
First extract the integers with the regex \d+.
Then dynamically create a dictionary d for mapping the non-numeric values.
Replace the NaNs from map via fillna with the numeric Series.
Get the positions with argsort.
Select the sorted values with iloc.
You can use sorted with a custom key function to calculate the indices that would sort the array (much like numpy.argsort), then feed them to pd.DataFrame.iloc:
import numpy as np
import pandas as pd

df = pd.DataFrame({'name': ['Paul', 'Jean', 'Robert', 'John'],
                   'status': ['ok', 'must read 20 more books',
                              'must read 3 more books', 'does not read any book yet']})

def sort_key(x):
    # x is an (index, status) pair from enumerate
    if x[1] == 'ok':
        return -1
    elif x[1] == 'does not read any book yet':
        return np.inf
    else:
        return int(x[1].split()[2])

idx = [idx for idx, _ in sorted(enumerate(df['status']), key=sort_key)]
df = df.iloc[idx, :]
print(df)
name status
0 Paul ok
2 Robert must read 3 more books
1 Jean must read 20 more books
3 John does not read any book yet
I have a DataFrame, say one column is:
{'university': ['A', 'B', 'A', 'C']}
I want to change the column into:
{'university': [1, 2, 1, 3]}
according to an imaginary dict:
{'A': 1, 'B': 2, 'C': 3}
How can I get this done?
PS: I solved the original problem; it was something about my own computer settings. I changed the question accordingly to be more helpful.
I think you need map by the dict d:
df.university = df.university.map(d)
If you need to encode the object as an enumerated type or categorical variable, use factorize:
df.university = pd.factorize(df.university)[0] + 1
Sample:
d = {'A':1,'B':2,'C':3}
df = pd.DataFrame({'university':['A','B','A','C']})
df['a'] = df.university.map(d)
df['b'] = pd.factorize(df.university)[0] + 1
print (df)
university a b
0 A 1 1
1 B 2 2
2 A 1 1
3 C 3 3
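One caveat worth noting: map returns NaN for any value missing from the dict. A small sketch of a fallback (keeping the original value is my assumption about the desired behavior):
# values absent from d become NaN with plain map;
# fall back to the original value instead
df['a'] = df.university.map(d).fillna(df.university)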
I tried rewriting your function:
def given_value(column):
    columnlist = column.drop_duplicates()
    # reset to default monotonic increasing index (0, 1, 2, ...)
    columnlist = columnlist.reset_index(drop=True)
    #print (columnlist)
    # swap index and values into a new Series columnlist_rev
    columnlist_rev = pd.Series(columnlist.index, index=columnlist.values)
    # map by columnlist_rev
    column = column.map(columnlist_rev)
    return column
print (given_value(df.university))
0 0
1 1
2 0
3 2
Name: university, dtype: int64
AttributeError: 'DataFrame' object has no attribute 'column'
Your answer is written in the exception message! A DataFrame object doesn't have an attribute called column, which means you can't call DataFrame.column at any point in your code. I believe your problem exists outside of what you have posted here, likely somewhere near the part where you imported the data as a DataFrame for the first time. My guess is that when you were naming the columns, you did something like df.column = [university] instead of df.columns = [university]. The s matters. If you read the traceback closely, you'll be able to figure out precisely which line is throwing the error.
Also, in your posted function, you do not need the parameter df, as it is not used at any point in the process.
I have a dataframe with multiple columns. I would like to replace the value in a column called Discriminant. This value needs to be replaced only for a few rows, whenever a condition is met in another column called ids. I tried various methods; the most common seems to be the .loc method, but for some reason it doesn't work for me.
Here are the variations that I am unsuccessfully trying:
encodedid - variable used for condition checking
indices - variable used for subsetting the dataframe (starts from zero)
Variation 1:
df[df.ids == encodedid].loc[df.ids==encodedid, 'Discriminant'].values[indices] = 'Y'
Variation 2:
df[df['ids'] == encodedid].iloc[indices,:].set_value('questionid','Discriminant', 'Y')
Variation 3:
df.loc[df.ids==encodedid, 'Discriminant'][indices] = 'Y'
Variation 3 in particular has been disappointing: most posts on SO suggest it should work, but it gives me the following error:
ValueError: [ 0 1 2 3 5 6 7 8 10 11 12 13 14 16 17 18 19 20 21 22 23] not contained in the index
Any pointers will be highly appreciated.
You are slicing too much. Try something like this:
indexer = df[df.ids == encodedid].index
df.loc[indexer, 'Discriminant'] = 'Y'
.loc[] needs an index list and a column list. You can set the value of that slice easily using = 'what you need'.
Looking at your problem, you might want to set that for two columns at the same time, such as:
indexer = df[df.ids == encodedid].index
column_list = ['Discriminant', 'questionid']
df.loc[indexer, column_list] = 'Y'
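A minimal end-to-end sketch of the idiom with made-up data (the boolean mask can also be passed to .loc directly, which is equivalent to building the indexer first):
import pandas as pd

df = pd.DataFrame({'ids': [1, 2, 1, 3],
                   'Discriminant': ['N', 'N', 'N', 'N']})
encodedid = 1

# set Discriminant to 'Y' wherever ids matches the condition
df.loc[df.ids == encodedid, 'Discriminant'] = 'Y'
print(df)
#    ids Discriminant
# 0    1            Y
# 1    2            N
# 2    1            Y
# 3    3            N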
Maybe something like this. I don't have a DataFrame to test it, but:
df['Discriminant'] = np.where(df['ids'] == 'some_condition', 'replace', df['Discriminant'])