I search a Pandas DataFrame with loc, for example like this:
x = df.loc[df.index.isin(['one','two'])]
But I need only the first row of the result. If I use
x = df.loc[df.index.isin(['one','two'])].iloc[0]
I get an error when no row is found. Of course, I can select all the matching rows (as in the first example) and then check whether the result is empty. But I'm looking for a more efficient way (the DataFrame can be long). Is there one?
pandas.Index.duplicated
The pandas.Index object has a duplicated method that identifies all repeated values after the first occurrence.
x[~x.index.duplicated()]
If you wanted to combine this with your isin filter:
df[df.index.isin(['one', 'two']) & ~df.index.duplicated()]
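A minimal sketch (with hypothetical data) showing how the combined filter behaves, including the case where nothing matches:

import pandas as pd

# Hypothetical frame with a repeated index label
df = pd.DataFrame({'val': [1, 2, 3, 4]},
                  index=['one', 'one', 'two', 'three'])

# Keep only the first occurrence of each matching label
x = df[df.index.isin(['one', 'two']) & ~df.index.duplicated()]
print(x)  # rows 'one' (val=1) and 'two' (val=3)

# When no label matches, the result is simply an empty frame -- no IndexError
empty = df[df.index.isin(['missing']) & ~df.index.duplicated()]
print(empty.empty)  # True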
I have a huge 800k-row DataFrame whose key I need to look up in another DataFrame.
Initially I was looping through my two DataFrames and comparing the key values with a condition.
I was told merge could save time, but I can't make it work :(
Overall, here's the code I'm trying to adapt:
import pandas as pd
from tqdm import tqdm

mergeTwo = pd.read_json('merge/mergeUpdate.json')
matches = pd.read_csv('archive/matches.csv')

# First approach: nested iteration (works, but slow)
for indexOne, value in tqdm(mergeTwo.iterrows()):
    for index, match in matches.iterrows():
        if value["gameid"] == match["gameid"]:
            print(match)

# Second approach: merge inside a loop (fails with an unknown-key error)
for index, value in mergeTwo.iterrows():
    test = value.to_frame().merge(matches, on='gameid')
    print(test)
In the first case, my code works without problems.
In the second, I get an error about an unknown key (gameid).
Anyone got a solution?
Thanks in advance !
When you iterate over rows, each value is a Series, which the to_frame method turns into a one-column DataFrame whose index is the original column names. So you need to transpose it to make the second approach work:
for index, value in mergeTwo.iterrows():
    # note the .T after .to_frame()
    test = value.to_frame().T.merge(matches, on='gameid')
    print(test)
But iteration is redundant here; a single merge applied to the first frame should be enough:
mergeTwo.merge(matches, on='gameid', how='left')
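For illustration, a minimal sketch with hypothetical data showing that the single merge replaces the nested loops:

import pandas as pd

# Hypothetical miniature versions of the two frames
mergeTwo = pd.DataFrame({'gameid': [1, 2, 3], 'team': ['A', 'B', 'C']})
matches = pd.DataFrame({'gameid': [1, 3], 'winner': ['A', 'C']})

# One vectorised merge instead of iterating row by row
result = mergeTwo.merge(matches, on='gameid', how='left')
print(result)  # rows without a match get NaN in 'winner'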
I'm trying to iterate over a large DataFrame that has 32 fields and over a million rows.
What I'm trying to do is iterate over each row and check whether any of the other rows duplicate its information in 30 of the fields while differing in the other two.
I'd then like to store the ID info of the rows that meet these conditions.
So far I've been trying to figure out how to compare two rows with the code below. It seems to work when comparing single columns but throws an error when I try more than one column. Could anyone advise on how best to approach this?
for index in range(len(df)):
    for row in range(index, len(df)):
        if df.iloc[index][1:30] == df.iloc[row][1:30]:
            print(df.iloc[index])
As a general rule, you should always try not to iterate over the rows of a DataFrame.
It seems that what you need is the pandas duplicated() method. If you have a list of the 30 columns you want to use to determine duplicate rows, the code looks something like this:
df.duplicated(subset=['col1', 'col2', 'col3']) # etc.
Full example:
# Set up a test DataFrame
import pandas as pd
from io import StringIO

sub_df = pd.read_csv(
    StringIO("""ID;col1;col2;col3
One;451;42;31
Two;451;42;54
Three;513;31;31"""),
    sep=";"
)
Find which rows are duplicates in col1 and col2. Note that the default is that the first instance is not marked as a duplicate, but later duplicates are. This behaviour can be changed as described in the documentation I linked to above.
mask = sub_df.duplicated(["col1", "col2"])
This looks like:
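0    False
1     True
2    False
dtype: bool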
Now, filter using the mask.
sub_df["ID"][sub_df.duplicated(["col1", "col2"])]
Of course, you can do the last two steps in one line.
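For instance, a one-liner equivalent to the two steps above, using .loc:

sub_df.loc[sub_df.duplicated(["col1", "col2"]), "ID"]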
I've noticed three methods of selecting a column in a Pandas DataFrame:
First method of selecting a column using loc:
df_new = df.loc[:, 'col1']
Second method - seems simpler and faster:
df_new = df['col1']
Third method - most convenient:
df_new = df.col1
Is there a difference between these three methods? I don't think so, in which case I'd rather use the third method.
I'm mostly curious as to why there appear to be three methods for doing the same thing.
In the following situations, they behave the same:
Selecting a single column (df['A'] is the same as df.loc[:, 'A'] -> selects column A)
Selecting a list of columns (df[['A', 'B', 'C']] is the same as df.loc[:, ['A', 'B', 'C']] -> selects columns A, B and C)
Slicing by rows (df[1:3] is the same as df.iloc[1:3] -> selects rows 1 and 2. Note, however, if you slice rows with loc, instead of iloc, you'll get rows 1, 2 and 3 assuming you have a RangeIndex. See details here.)
However, [] does not work in the following situations:
You can select a single row with df.loc[row_label]
You can select a list of rows with df.loc[[row_label1, row_label2]]
You can slice columns with df.loc[:, 'A':'C']
These three cannot be done with [].
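A small sketch, on a hypothetical frame, of these three label-based selections that only .loc supports:

import pandas as pd

df = pd.DataFrame([[1, 2, 3], [4, 5, 6]],
                  columns=['A', 'B', 'C'],
                  index=['x', 'y'])

df.loc['x']          # single row by label; df['x'] would raise a KeyError
df.loc[['x', 'y']]   # list of row labels
df.loc[:, 'A':'C']   # slice of column labels; not expressible with []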
More importantly, if your selection involves both rows and columns, then assignment becomes problematic.
df[1:3]['A'] = 5
This selects rows 1 and 2 then selects column 'A' of the returning object and assigns value 5 to it. The problem is, the returning object might be a copy so this may not change the actual DataFrame. This raises SettingWithCopyWarning. The correct way of making this assignment is:
df.loc[1:3, 'A'] = 5
With .loc, you are guaranteed to modify the original DataFrame. It also allows you to slice columns (df.loc[:, 'C':'F']), select a single row (df.loc[5]), and select a list of rows (df.loc[[1, 2, 5]]).
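A short sketch of the difference, assuming a plain RangeIndex (note that the .loc label slice 1:3 is inclusive, so it covers rows 1, 2 and 3):

import pandas as pd

df = pd.DataFrame({'A': range(5)})

# Chained indexing: the intermediate df[1:3] may be a copy,
# so this can warn (SettingWithCopyWarning) and leave df unchanged
df[1:3]['A'] = 5

# Single .loc call: guaranteed to modify df itself
df.loc[1:3, 'A'] = 5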
Also note that these two were not included in the API at the same time. .loc was added much later as a more powerful and explicit indexer. See unutbu's answer for more detail.
Note: Getting columns with [] vs . is a completely different topic. . is only there for convenience. It only allows accessing columns whose names are valid Python identifiers (i.e. they cannot contain spaces and cannot start with a digit). It cannot be used when the names conflict with Series/DataFrame methods. It also cannot be used for non-existing columns (i.e. the assignment df.a = 1 won't work if there is no column a). Other than that, . and [] are the same.
loc is especially useful when the index is not numeric (e.g. a DatetimeIndex) because you can get rows with particular labels from the index:
df.loc['2010-05-04 07:00:00']
df.loc['2010-1-1 0:00:00':'2010-12-31 23:59:59', 'Price']
However [] is intended to get columns with particular names:
df['Price']
With [] you can also filter rows, but it is more elaborate:
df[df['Date'] < datetime.datetime(2010,1,1,7,0,0)]['Price']
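Putting the two styles side by side on a hypothetical price table indexed by timestamps:

import datetime
import pandas as pd

idx = pd.to_datetime(['2010-01-01 06:00', '2010-05-04 07:00', '2010-12-31 23:00'])
df = pd.DataFrame({'Date': idx, 'Price': [10.0, 12.5, 11.0]}, index=idx)

df.loc['2010-05-04 07:00:00']                # one row, by index label
df.loc['2010-01-01':'2010-12-31', 'Price']   # label slice plus a column
df[df['Date'] < datetime.datetime(2010, 1, 1, 7, 0, 0)]['Price']  # [] filtering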
If you're confused about which of these approaches is (at least) the recommended one for your use case, take a look at these brief instructions from the pandas tutorial:
When selecting subsets of data, square brackets [] are used.
Inside these brackets, you can use a single column/row label, a list
of column/row labels, a slice of labels, a conditional expression or
a colon.
Select specific rows and/or columns using loc when using the row and column names.
Select specific rows and/or columns using iloc when using the positions in the table.
You can assign new values to a selection based on loc/iloc.
I highlighted some of the points to make their use-case differences even more clear.
There seems to be a difference between df.loc[] and df[] when you create multiple new columns on a DataFrame.
You can refer to this question:
Is there a nice way to generate multiple columns using .loc?
Here, you can't create multiple new columns with df.loc[:, ['name1', 'name2']], but you can by just using double brackets, df[['name1', 'name2']]. (I wonder why they behave differently.)
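A sketch of that asymmetry on a hypothetical frame (the failing line is commented out so the snippet runs):

import pandas as pd

df = pd.DataFrame({'A': [1, 2]})

# Double brackets can create several new columns at once
df[['name1', 'name2']] = pd.DataFrame([[0, 0], [0, 0]])

# .loc refuses a list of labels that don't exist yet:
# df.loc[:, ['name3', 'name4']] = 0   # raises KeyError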
I am trying to filter specific rows with python-pandas:
df = pd.read_csv('file.csv', delimiter=',', header=None,engine='python', usecols=range(0, 7), error_bad_lines=False)
df = df.drop(df.index[9:86579])
df = df[df[[0,1]].apply(lambda r: r.str.contains('TestString1', case=False).any(), axis=1)]
df.to_csv("yourcsv.csv", index=False, header=None)#
Now how can I set a starting row? Rows 0-10 contain information and I want to start searching by keyword from row 11. But how?
Try this:
df.iloc[11:].to_csv("yourcsv.csv", index=False, header=None)
If you don't want to drop rows and only want to "see" your DataFrame from a certain row onward, you can use iloc:
df["column name"].iloc[11:].apply(function)
This takes everything from row 11 to the last one and applies your function.
DataFrame.iloc
Purely integer-location based indexing for selection by position.
Allowed inputs are:
An integer, e.g. 5.
A list or array of integers, e.g. [4, 3, 0].
A slice object with ints, e.g. 1:7.
A boolean array.
A callable function with one argument (the calling Series, DataFrame or Panel) and that returns valid output for indexing (one of the above)
.iloc[] is primarily integer position based (from 0 to length-1 of the axis), but may also be used with a boolean array.
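A quick sketch exercising each of the allowed input types on a hypothetical frame:

import pandas as pd

df = pd.DataFrame({'a': range(10)})

df.iloc[5]                      # an integer
df.iloc[[4, 3, 0]]              # a list of integers
df.iloc[1:7]                    # a slice of positions
df.iloc[(df['a'] > 6).values]   # a boolean array
df.iloc[lambda d: [0, 1]]       # a callable returning a valid indexer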
I am not sure what you mean by "Because my rows "0-10" consist information and I want to start searching by keyword from row 11".
If you mean that you need the first 10 rows to be used as a condition for making your filter working afterwards, then you can iterate by row and use np.where.
If this is not the case, then I believe the other two answers (John, Rafael) already solved your problem, so you can vote them up.