How to do column-wise operations in pandas? - python

I have a dataframe that looks something like:
sample  parameter1  parameter2  parameter3
A       9           6           3
B       4           5           7
C       1           5           8
and I want to do an operation that does something like:
for sample in dataframe:
df['new parameter'] = df[sample, parameter1]/df[sample, parameter2]
so far I have tried:
df2.loc['ratio'] = df2.loc['reads mapped']/df2.loc['raw total sequences']
but I get the error:
KeyError: 'the label [reads mapped] is not in the [index]'
when I know well that it is in the index, so I figure I am missing some concept somewhere. Any help is much appreciated!
I should add that the parameter values are floats, just in case that is a problem as well!

The .loc indexer expects row labels first, then column labels, so for a column-wise operation the following should work:
df2['ratio'] = df2.loc[:, 'reads mapped'] / df2.loc[:, 'raw total sequences']
You can find more info in the documentation.
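As a minimal sketch of the same idea applied to the sample frame from the question (column names assumed from the post):
import pandas as pd

df = pd.DataFrame({'sample': ['A', 'B', 'C'],
                   'parameter1': [9.0, 4.0, 1.0],
                   'parameter2': [6.0, 5.0, 5.0],
                   'parameter3': [3.0, 7.0, 8.0]})

# pandas aligns the two columns row by row, so this divides element-wise
df['new parameter'] = df['parameter1'] / df['parameter2']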

Related

pandas error in df.apply() only for a specific dataframe

Noticed something very strange in pandas. My dataframe (with 3 rows and 3 columns) looks like this:
When I try to extract ID and Name(separated by underscore) to their own columns using command below, it gives me an error:
df[['ID','Name']] = df.apply(lambda x: get_first_last(x['ID_Name']), axis=1, result_type='broadcast')
Error is:
ValueError: cannot broadcast result
Here's the interesting part though... When I delete the "From_To" column from the original dataframe, performing the same df.apply() to split ID_Name works perfectly fine, and I get the new columns like this:
I have checked a lot of SO answers but none seem to help. What did I miss here?
P.S. get_first_last is a very simple function like this:
def get_first_last(s):
    str_lis = s.split("_")
    return [str_lis[0], str_lis[1]]
From the documentation of pandas.DataFrame.apply:
'broadcast' : results will be broadcast to the original shape of the DataFrame, the original index and columns will be retained.
So the problem is that the original shape of your dataframe is (3, 3) while the result of your apply function has 2 columns, so you have a mismatch. That also explains why deleting "From_To" makes it work: the new shape is (3, 2), and now you have a match.
You can use 'expand' instead of 'broadcast' and you will have your expected result.
import pandas as pd

table = [
    ['1_john', 23, 'LoNDon_paris'],
    ['2_bob', 34, 'Madrid_milan'],
    ['3_abdellah', 26, 'Paris_Stockhom']
]
df = pd.DataFrame(table, columns=['ID_Name', 'Score', 'From_To'])
df[['ID', 'Name']] = df.apply(lambda x: get_first_last(x['ID_Name']), axis=1, result_type='expand')
Hope this helps!
This is definitely not a good use case for apply; you should rather do:
df[["ID", "Name"]] = df["ID_Name"].str.split("_", expand=True, n=1)
Which for your data will output (I took only first 2 columns from your data frame):
ID_Name Score ID Name
0 1_john 23 1 john
1 2_bob 34 2 bob
2 3_janet 45 3 janet
The n=1 is there in case a value contains multiple underscores (e.g. as part of the name), to make sure at most 2 columns are returned (otherwise the code above would fail).
For instance, if we slightly modify your data, we get the following output:
ID_Name Score ID Name
0 1_john 23 1 john
1 2_bob_jr 34 2 bob_jr
2 3_janet 45 3 janet
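For reference, a sketch of the modified data that would produce the output above (the '2_bob_jr' row is an assumption to illustrate the extra underscore):
import pandas as pd

df = pd.DataFrame({'ID_Name': ['1_john', '2_bob_jr', '3_janet'],
                   'Score': [23, 34, 45]})
# n=1 splits on the first underscore only, so 'bob_jr' survives intact
df[['ID', 'Name']] = df['ID_Name'].str.split('_', expand=True, n=1)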

Returning date that corresponds with maximum value in pandas dataframe [duplicate]

How can I find the row for which the value of a specific column is maximal?
df.max() will give me the maximal value for each column, but I don't know how to get the corresponding row.
Use the pandas idxmax function. It's straightforward:
>>> import pandas
>>> import numpy as np
>>> df = pandas.DataFrame(np.random.randn(5,3),columns=['A','B','C'])
>>> df
A B C
0 1.232853 -1.979459 -0.573626
1 0.140767 0.394940 1.068890
2 0.742023 1.343977 -0.579745
3 2.125299 -0.649328 -0.211692
4 -0.187253 1.908618 -1.862934
>>> df['A'].idxmax()
3
>>> df['B'].idxmax()
4
>>> df['C'].idxmax()
1
Alternatively you could also use numpy.argmax, such as numpy.argmax(df['A']) -- it provides the same thing, and appears at least as fast as idxmax in cursory observations.
idxmax() returns index labels, not integers.
Example: if you have string values as your index labels, like rows 'a' through 'e', you might want to know that the max occurs in row 4 (not row 'd').
If you want the integer position of that label within the Index, you have to get it manually (which can be tricky now that duplicate row labels are allowed).
HISTORICAL NOTES:
idxmax() used to be called argmax() prior to 0.11
argmax was deprecated prior to 1.0.0 and removed entirely in 1.0.0
as of pandas 0.16, argmax still existed and performed the same function (though it appeared to run more slowly than idxmax)
the argmax function returned the integer position within the index of the maximum element
pandas moved to using row labels instead of integer indices; positional integer indices used to be very common, more common than labels, especially in applications where duplicate row labels occur
For example, consider this toy DataFrame with a duplicate row label:
In [19]: dfrm
Out[19]:
A B C
a 0.143693 0.653810 0.586007
b 0.623582 0.312903 0.919076
c 0.165438 0.889809 0.000967
d 0.308245 0.787776 0.571195
e 0.870068 0.935626 0.606911
f 0.037602 0.855193 0.728495
g 0.605366 0.338105 0.696460
h 0.000000 0.090814 0.963927
i 0.688343 0.188468 0.352213
i 0.879000 0.105039 0.900260
In [20]: dfrm['A'].idxmax()
Out[20]: 'i'
In [21]: dfrm.loc[dfrm['A'].idxmax()] # .ix instead of .loc in older versions of pandas
Out[21]:
A B C
i 0.688343 0.188468 0.352213
i 0.879000 0.105039 0.900260
So here a naive use of idxmax is not sufficient, whereas the old form of argmax would correctly provide the positional location of the max row (in this case, position 9).
This is exactly one of those nasty kinds of bug-prone behaviors in dynamically typed languages that makes this sort of thing so unfortunate, and worth beating a dead horse over. If you are writing systems code and your system suddenly gets used on some data sets that are not cleaned properly before being joined, it's very easy to end up with duplicate row labels, especially string labels like a CUSIP or SEDOL identifier for financial assets. You can't easily use the type system to help you out, and you may not be able to enforce uniqueness on the index without running into unexpectedly missing data.
So you're left hoping your unit tests covered everything (they didn't, or more likely no one wrote any). Otherwise you're just waiting to smack into this error at runtime. When that happens, you get to drop many hours' worth of work from the database you were outputting results to, bang your head against the wall in IPython trying to manually reproduce the problem, and finally figure out that it's because idxmax can only report the label of the max row. Then comes the disappointment that no standard function automatically gets the position of the max row for you, followed by writing a buggy implementation yourself, editing the code, and praying you don't run into the problem again.
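If you do need the positional location and your labels may be duplicated, one defensive pattern (a sketch, not a standard function; it reuses the dfrm frame from the example above) is to drop down to NumPy:
import numpy as np

# positional index of the max, independent of row labels
pos = int(np.argmax(dfrm['A'].to_numpy()))
max_row = dfrm.iloc[pos]  # a single row, even when labels repeat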
You might also try idxmax:
In [5]: df = pandas.DataFrame(np.random.randn(10,3),columns=['A','B','C'])
In [6]: df
Out[6]:
A B C
0 2.001289 0.482561 1.579985
1 -0.991646 -0.387835 1.320236
2 0.143826 -1.096889 1.486508
3 -0.193056 -0.499020 1.536540
4 -2.083647 -3.074591 0.175772
5 -0.186138 -1.949731 0.287432
6 -0.480790 -1.771560 -0.930234
7 0.227383 -0.278253 2.102004
8 -0.002592 1.434192 -1.624915
9 0.404911 -2.167599 -0.452900
In [7]: df.idxmax()
Out[7]:
A 0
B 8
C 7
e.g.
In [8]: df.loc[df['A'].idxmax()]
Out[8]:
A 2.001289
B 0.482561
C 1.579985
Both answers above only return one index if multiple rows take the maximum value. If you want all such rows, there does not seem to be a built-in function.
But it is not hard to do. Below is an example for a Series; the same can be done for a DataFrame:
In [1]: from pandas import Series, DataFrame
In [2]: s=Series([2,4,4,3],index=['a','b','c','d'])
In [3]: s.idxmax()
Out[3]: 'b'
In [4]: s[s==s.max()]
Out[4]:
b 4
c 4
dtype: int64
df.iloc[df['columnX'].argmax()]
argmax() returns the positional index of the max value in columnX; iloc can then be used to get the corresponding row of the DataFrame df.
A more compact and readable solution using query() is like this:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(5, 3), columns=['A', 'B', 'C'])
print(df)

# find the row with maximum A
df.query('A == A.max()')
It also returns a DataFrame instead of a Series, which is handy for some use cases.
Very simple: we have df as below, and we want to print the row with the max value in C:
A  B   C
x  1   4
y  2  10
z  5   9
In:
df.loc[df['C'] == df['C'].max()] # condition check
Out:
A B C
y 2 10
If you want the entire row instead of just the id, you can use df.nlargest and pass in how many 'top' rows you want, along with the column(s) you want them for.
df.nlargest(2,['A'])
will give you the rows corresponding to the top 2 values of A.
use df.nsmallest for min values.
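A quick sketch on a random frame like the ones above (values will differ per run):
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(5, 3), columns=['A', 'B', 'C'])
df.nlargest(2, 'A')   # the 2 rows with the largest values in A
df.nsmallest(1, 'A')  # the row with the smallest value in A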
The direct ".argmax()" solution does not work for me.
The previous example provided by @ely
>>> import pandas
>>> import numpy as np
>>> df = pandas.DataFrame(np.random.randn(5,3),columns=['A','B','C'])
>>> df
A B C
0 1.232853 -1.979459 -0.573626
1 0.140767 0.394940 1.068890
2 0.742023 1.343977 -0.579745
3 2.125299 -0.649328 -0.211692
4 -0.187253 1.908618 -1.862934
>>> df['A'].argmax()
3
>>> df['B'].argmax()
4
>>> df['C'].argmax()
1
returns the following message:
FutureWarning: 'argmax' is deprecated, use 'idxmax' instead. The behavior of 'argmax'
will be corrected to return the positional maximum in the future.
Use 'series.values.argmax' to get the position of the maximum now.
So my solution is:
df['A'].values.argmax()
mx.iloc[0].idxmax()
This one-liner returns the column label of the maximum value in a row of the dataframe; here mx is the dataframe and iloc[0] selects the row at position 0.
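A small sketch of what that one-liner does (the mx frame here is made up for illustration):
import pandas as pd

mx = pd.DataFrame({'A': [3, 1], 'B': [7, 2], 'C': [5, 9]})
mx.iloc[0].idxmax()  # 'B', the column holding the max of the first row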
Considering this dataframe
[In]: df = pd.DataFrame(np.random.randn(4,3),columns=['A','B','C'])
[Out]:
A B C
0 -0.253233 0.226313 1.223688
1 0.472606 1.017674 1.520032
2 1.454875 1.066637 0.381890
3 -0.054181 0.234305 -0.557915
Assuming one wants to know the rows where column "C" is max, the following will do the work:
[In]: df[df['C'] == df['C'].max()]
[Out]:
A B C
1 0.472606 1.017674 1.520032
The idxmax of the DataFrame returns the label index of the row with the maximum value, and the behavior of argmax depends on the version of pandas (right now it returns a warning). If you want to use the positional index, you can do the following:
max_row = df['A'].values.argmax()
or
import numpy as np
max_row = np.argmax(df['A'].values)
Note that np.argmax(df['A']) behaves the same as df['A'].argmax().
Use:
data.loc[data['A'].idxmax()]
data['A'].idxmax() finds the index label of the row with the max value in column A
data.loc[...] returns that row (iloc only works here if the index is the default integer range, since idxmax returns a label, not a position)
If there are ties in the maximum values, then idxmax returns the index of only the first max value. For example, in the following DataFrame:
A B C
0 1 0 1
1 0 0 1
2 0 0 0
3 0 1 1
4 1 0 0
idxmax returns
A 0
B 3
C 0
dtype: int64
Now, if we want all indices corresponding to the max values, we can use max + eq to create a boolean DataFrame, then use it on df.index to filter the indexes:
out = df.eq(df.max()).apply(lambda x: df.index[x].tolist())
Output:
A [0, 4]
B [3]
C [0, 1, 3]
dtype: object
What worked for me is:
df[df['colX'] == df['colX'].max()]
You then get the row(s) in your df with the maximum value of colX.
If you just want the index, you can add .index at the end of the query.

Remove partial duplicate row using column value

I'm trying to clean data that contains a lot of partial duplicates, keeping only the first row of data when the key in Col A is duplicated.
   A     B    C    D
0  foo   bar  lor  ips
1  foo   bar
2  test  do   kin  ret
3  test  do
4  er    ed   ln   pr
Expected output after cleaning:
   A     B    C    D
0  foo   bar  lor  ips
1  test  do   kin  ret
2  er    ed   ln   pr
I have been looking at methods such as drop_duplicates or even groupby, but they don't really help in my case: the duplicates are partial, since some rows contain empty data and only share values in col A and B.
groupby partially works, but it doesn't return the transformed data; it just filters through.
I'm very new to pandas and pointers are appreciated. I could probably do it outside pandas, but I'm thinking there might be a better way.
Edit: sorry, I just noticed a mistake in the provided example ("test" had become "tes").
In your case, what counts as a partial duplicate? Please provide a more complete example. In the above example, instead of deduplicating on Col A you could try Col B.
The expected output can be obtained with the following snippet:
print (df.drop_duplicates(subset=['B']))
Note: the suggested solution only works for the above sample; it won't work when rows have a different Col A value but the same Col B value.
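If the key really is Col A, a sketch of the same idea keyed on that column (keep='first' is the default; shown for clarity):
import pandas as pd

df = pd.DataFrame({'A': ['foo', 'foo', 'test', 'test', 'er'],
                   'B': ['bar', 'bar', 'do', 'do', 'ed'],
                   'C': ['lor', '', 'kin', '', 'ln'],
                   'D': ['ips', '', 'ret', '', 'pr']})

# keep only the first row for each duplicated key in column A
cleaned = df.drop_duplicates(subset=['A'], keep='first').reset_index(drop=True)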

Optimization problem with Pandas apply and multiIndex search [duplicate]

This question already has an answer here:
How do you shift Pandas DataFrame with a multiindex?
(1 answer)
Closed 4 years ago.
So, I was wondering if I am doing this correctly, because maybe there is a much better way to do this and I am wasting a lot of time.
I have a 3 level index dataframe, like this:
IndexA  IndexB  IndexC  ColumnA  ColumnB
A       B       C1      HiA      HiB
A       B       C2      HiA2     HiB2
I need to do a search for every row, saving data from other rows. I know this sounds strange, but it makes sense with my data. For example:
I want to add ColumnB data from my second row to the first one, and vice-versa, like this:
IndexA  IndexB  IndexC  ColumnA  ColumnB  NewData
A       B       C1      HiA      HiB      HiB2
A       B       C2      HiA2     HiB2     HiB
In order to do this search, I do an apply on my df, like this:
df['NewData'] = df.apply(lambda r: my_function(df, r.IndexA, r.IndexB, r.IndexC), axis=1)
Where my function is:
def my_function(df, indexA, indexB, indexC):
    idx = pd.IndexSlice
    # Here I do calculations (subtraction) to know exactly which C I want
    # newIndexC = indexC - someConstantValue
    try:
        res = df.loc[idx[indexA, indexB, newIndexC], 'ColumnB']
        return res
    except KeyError:
        return -1
I tried to simplify a lot of this problem, sorry if it sounds confusing. Basically my data frame has 20 million rows, and this search takes 2 hours. I know it has to take a lot, because there are a lot of accesses, but I wanted to know if there could be a faster way to do this search.
More information:
On indexA I have different groups of values. Example: Countries.
On indexB I have different groups of dates.
On indexC I have different groups of values.
Answer:
df['NewData'] = df.groupby(level=['IndexA', 'IndexB'])['ColumnB'].shift(7)
All you're really doing is a shift. You can speed it up 1000x like this:
df['NewData'] = df['ColumnB'].shift(-someConstantValue)
You'll need to roll the top someConstantValue rows around to the bottom; I'm leaving that as an exercise.
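A minimal sketch of the shift-within-groups idea on the toy frame from the question (the offset of -1 is assumed here; in the real data it would be -someConstantValue):
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [('A', 'B', 'C1'), ('A', 'B', 'C2')],
    names=['IndexA', 'IndexB', 'IndexC'])
df = pd.DataFrame({'ColumnA': ['HiA', 'HiA2'],
                   'ColumnB': ['HiB', 'HiB2']}, index=idx)

# shift ColumnB within each (IndexA, IndexB) group; -1 pulls the next
# row's value up, so row C1 gets HiB2 (row C2 gets NaN until you roll
# the wrapped rows around, as noted above)
df['NewData'] = df.groupby(level=['IndexA', 'IndexB'])['ColumnB'].shift(-1)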

Pandas merge giving wrong output

OK, I have gone through some blogs related to this topic, but I am still getting the same problem. I have two dataframes. Both have a column X which has SHA2 values in them. It contains hex strings.
Example (Dataframe lookup)
X,Y
000000000E000394574D69637264736F66742057696E646F7773204861726477,7
0000000080000000000000090099000000040005000000000000008F2A000010,7
000000020000000000000000777700010000000000020000000040C002004600,24
0000005BC614437F6BE049237FA1DDD2083B5BA43A10175E4377A59839DC2B64,7
Example (Dataframe source)
X,Z
000000000E000394574D69637264736F66742057696E646F7773204861726477,'blah'
0000000080000000000000090099000000040005000000000000008F2A000010,'blah blah'
000000020000000000000000777700010000000000020000000040C002004600,'dummy'
etc.
So now I am doing
lookup['X'] = lookup['X'].astype(str)
source['X'] = source['X'].astype(str)
source['newcolumn'] = source.merge(lookup, on='X', how='inner')['Y']
The source has 160,000 rows and the lookup has around 500,000 rows.
Now, when the operation finishes, I get newcolumn but the values are wrong.
I have made sure that they are not being picked up from duplicate values of X, because there are no duplicate X in either table.
So, this is really making me feel dumb, and it has caused me quite a pain in my live systems. Can anyone suggest what the problem is?
I have now replaced the call with
def getReputation(lookupDF, value, lookupcolumn, default):
    lookupRows = lookupDF.loc[lookupDF['X'] == value]
    if lookupRows.shape[0] > 0:
        return lookupRows[lookupcolumn].values[0]
    else:
        return default
source['newcolumn'] = source.apply(lambda x: getReputation(lookup,x['X'],'Y',-1),axis=1)
This code works - but obviously it is BAD code and takes a horribly long time. I can multiprocess it - but the question remains: WHY is the merge failing?
Thanks for your help
Rgds
I'd use the map() method in this case.
First, set 'X' as the index in the lookup DF:
In [58]: lookup.set_index('X', inplace=True)
In [59]: lookup
Out[59]:
Y
X
000000000E000394574D69637264736F66742057696E646F7773204861726477 7
0000000080000000000000090099000000040005000000000000008F2A000010 7
000000020000000000000000777700010000000000020000000040C002004600 24
0000005BC614437F6BE049237FA1DDD2083B5BA43A10175E4377A59839DC2B64 7
In [60]: df['Y'] = df.X.map(lookup.Y)
In [61]: df
Out[61]:
X Z Y
0 000000000E000394574D69637264736F66742057696E646F7773204861726477 blah 7
1 0000000080000000000000090099000000040005000000000000008F2A000010 blah blah 7
2 000000020000000000000000777700010000000000020000000040C002004600 dummy 24
Actually your code is working properly for your sample DFs:
In [68]: df.merge(lookup, on='X', how='inner')
Out[68]:
X Z Y
0 000000000E000394574D69637264736F66742057696E646F7773204861726477 blah 7
1 0000000080000000000000090099000000040005000000000000008F2A000010 blah blah 7
2 000000020000000000000000777700010000000000020000000040C002004600 dummy 24
So check whether you have the same data and dtypes in the X column in both DFs.
You might have duplicate values in column X in the lookup data frame, but more likely this is due to index alignment; the following snippet will produce the right results:
output = source.merge(lookup, on='X', how='inner')
If you want to create a new column from the merge result instead, either the right df should have no duplicates or the indexes need to be adjusted accordingly. If you're sure there are no duplicate values, compare the indexes from the snippet above with those from your snippet for better understanding, and try resetting the indexes before merging.
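To illustrate the alignment pitfall with made-up toy frames: merge returns a new frame with a fresh RangeIndex, and assigning one of its columns back to source aligns on index labels, not on X.
import pandas as pd

source = pd.DataFrame({'X': ['aa', 'bb', 'cc'], 'Z': [1, 2, 3]})
lookup = pd.DataFrame({'X': ['cc', 'aa'], 'Y': [30, 10]})

merged = source.merge(lookup, on='X', how='inner')
# merged keeps only 'aa' and 'cc', reindexed 0..1, so assigning
# source['newcolumn'] = merged['Y'] would give row 1 of source ('bb',
# no match) the value 30 that belongs to 'cc'. Keeping the merged
# frame itself avoids the misalignment:
source = source.merge(lookup, on='X', how='left')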
