Can we index both the row and the column in pandas without using .iloc? The documentation says
With DataFrame, slicing inside of [] slices the rows.
But when I want to include both row and column in the same fashion, it is not working.
data = pandas.DataFrame(np.random.rand(10,5), columns = list('abcde'))
data[0:2] # only rows
data.iloc[0:2, 0:3] # works
data[0:2, 0:3] # raises an error in pandas, though similar syntax works in R
I agree that using iloc is probably the clearest solution, but indexing by row and column number simultaneously can be done with two separate indexing operations. Unless you use iloc, pandas has no way of knowing whether you are looking for the columns at positions 0-3 or columns named 0, 1, 2 and 3.
data[0:2][data.columns[0:3]]
This is fairly clear, though, in showing exactly what you are selecting. Otherwise, you'll have to drop down to array indexing to get your subset.
data.values[0:2,0:3]
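As a quick sanity check (a minimal sketch using a frame like the one in the question), all three routes return the same values:

import numpy as np
import pandas as pd

data = pd.DataFrame(np.random.rand(10, 5), columns=list('abcde'))

a = data.iloc[0:2, 0:3]                # positional, the recommended route
b = data[0:2][data.columns[0:3]]       # two chained selections
c = data.values[0:2, 0:3]              # plain NumPy array, loses the index

print(np.array_equal(a.to_numpy(), b.to_numpy()))  # True
print(np.array_equal(a.to_numpy(), c))             # True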
Related
With my df, I dropped index 3116:
df = df.drop(df.index[3116], axis=0)
However, when I later use a for loop over the rows of the df, there's an error at 3116. Is it not dropped correctly? When I use df.info(), there is one less row, so I would think the drop worked, but the loop still fails:
for i in range(df['ever_married'].count()):
    if df['ever_married'][i] == 'Yes':
        df['ever_married'][i] = 1
    elif df['ever_married'][i] == 'No':
        df['ever_married'][i] = 0
This brings:
'KeyError: 3116'
However, when I add the following before the first if block in the for loop:
if i == 3116:
    pass
The error goes away, but then the code doesn't do what I want, which is to convert all values in the column from object to int.
How can I fix this? Thank you!
If you drop the index at that position, then the dataframe has a non-contiguous index. Later, when you loop over it, you make the assumption that the index is contiguous:
for i in range(df['ever_married'].count()):
This will loop from 0 up to the number of rows in your dataframe, and does not skip any dropped index labels. There are four fixes you could choose from here:
Get rid of the loop. Series.map() could be applied to this problem, like so:
df['ever_married'] = df['ever_married'].map({'No': 0, 'Yes': 1})
This is both faster and more robust. It replaces No with 0, and Yes with 1, everywhere in the column.
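One caveat worth knowing: values that do not appear in the dict are mapped to NaN, so the result may come back as float. A minimal sketch:

import pandas as pd

s = pd.Series(['Yes', 'No', 'Unknown'])
print(s.map({'No': 0, 'Yes': 1}))
# 0    1.0
# 1    0.0
# 2    NaN   <- 'Unknown' is not in the dict, so it becomes NaN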
Index using .iloc[] instead of index labels. The .iloc[] indexer selects by position within the series or dataframe, rather than by label.
Example of how to set the value in column "ever_married" at row position i to 1:
df.iloc[i, df.columns.get_loc('ever_married')] = 1
Restore a contiguous index using .reset_index(). DataFrame.reset_index() can reset the index so that it is contiguous and does not skip any numbers.
Example:
df = df.reset_index(drop=True)
Use for i in df.index: to skip over missing rows.
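A sketch of solution 4, assuming the same df and column as above (assigning through .loc to avoid chained indexing):

for i in df.index:
    if df.loc[i, 'ever_married'] == 'Yes':
        df.loc[i, 'ever_married'] = 1
    elif df.loc[i, 'ever_married'] == 'No':
        df.loc[i, 'ever_married'] = 0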
Of the four solutions, I would suggest solution 1.
I am appending different dataframes to make one set. Occasionally, some values have the same index, so it stores the value as a series. Is there a quick way within Pandas to just overwrite the value instead of storing all the values as a series?
You weren't very clear. If you want to resolve the duplicated-index problem, the pd.DataFrame.reset_index() method will probably be enough. But if you have duplicate rows when you concat the dataframes, use the pd.DataFrame.drop_duplicates() method. Otherwise, share a bit of your code or be clearer.
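For illustration, a minimal sketch of these ideas on two concatenated frames with an overlapping index (the index.duplicated trick for "keep the last value" is an addition of mine, not something from your post):

import pandas as pd

df_a = pd.DataFrame({'val': [1, 2]}, index=[0, 1])
df_b = pd.DataFrame({'val': [3, 4]}, index=[1, 2])  # label 1 overlaps

combined = pd.concat([df_a, df_b])

# Option 1: make the index unique again, keeping every row.
unique_index = combined.reset_index(drop=True)

# Option 2: keep only the last value for each duplicated label,
# i.e. let the later frame overwrite the earlier one.
overwritten = combined[~combined.index.duplicated(keep='last')]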
I'm not sure the code below is what you're looking for.
Say you have two dataframes with one column each, the same index, and different values, and you want to overwrite the values in one dataframe with the other. You can do it with a simple loop and positional indexing.
import pandas as pd

df_1 = pd.DataFrame({'col_1': ['a', 'b', 'c', 'd']})
df_2 = pd.DataFrame({'col_1': ['q', 'w', 'e', 'r']})

rows = df_1.shape[0]
for idx in range(rows):
    # assign through a single .iloc call (column 0 is 'col_1')
    # to avoid chained assignment
    df_1.iloc[idx, 0] = df_2['col_1'].iloc[idx]
Then check df_1; you should get:
df_1
col_1
0 q
1 w
2 e
3 r
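A loop-free alternative (a sketch, assuming both frames have the same length) is to assign the underlying values in one step:

# Overwrite the whole column at once, ignoring df_2's index alignment.
df_1['col_1'] = df_2['col_1'].to_numpy()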
If this isn't what you want, let me know so I can help further.
I've noticed three methods of selecting a column in a Pandas DataFrame:
First method of selecting a column using loc:
df_new = df.loc[:, 'col1']
Second method - seems simpler and faster:
df_new = df['col1']
Third method - most convenient:
df_new = df.col1
Is there a difference between these three methods? I don't think so, in which case I'd rather use the third method.
I'm mostly curious as to why there appear to be three methods for doing the same thing.
In the following situations, they behave the same:
Selecting a single column (df['A'] is the same as df.loc[:, 'A'] -> selects column A)
Selecting a list of columns (df[['A', 'B', 'C']] is the same as df.loc[:, ['A', 'B', 'C']] -> selects columns A, B and C)
Slicing by rows (df[1:3] is the same as df.iloc[1:3] -> selects rows 1 and 2. Note, however, if you slice rows with loc, instead of iloc, you'll get rows 1, 2 and 3 assuming you have a RangeIndex. See details here.)
However, [] does not work in the following situations:
You can select a single row with df.loc[row_label]
You can select a list of rows with df.loc[[row_label1, row_label2]]
You can slice columns with df.loc[:, 'A':'C']
These three cannot be done with [].
More importantly, if your selection involves both rows and columns, then assignment becomes problematic.
df[1:3]['A'] = 5
This selects rows 1 and 2, then selects column 'A' of the returned object and assigns the value 5 to it. The problem is, the returned object might be a copy, so this may not change the actual DataFrame; pandas raises SettingWithCopyWarning for exactly this reason. The correct way of making this assignment is:
df.loc[1:3, 'A'] = 5
With .loc, you are guaranteed to modify the original DataFrame. It also allows you to slice columns (df.loc[:, 'C':'F']), select a single row (df.loc[5]), and select a list of rows (df.loc[[1, 2, 5]]).
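A minimal sketch of the difference (the exact warning behavior depends on your pandas version; with copy-on-write in pandas 2.x the chained form can silently do nothing):

import pandas as pd

df = pd.DataFrame({'A': [0, 0, 0, 0]})

df[1:3]['A'] = 5        # chained: writes into an intermediate object,
                        # may warn or be lost entirely
df.loc[1:3, 'A'] = 5    # single .loc call: always writes into df itself

print(df)               # rows 1, 2 and 3 of 'A' are 5 (label slice is inclusive)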
Also note that these two were not included in the API at the same time. .loc was added much later as a more powerful and explicit indexer. See unutbu's answer for more detail.
Note: Getting columns with [] vs . is a completely different topic. . is only there for convenience. It only allows accessing columns whose names are valid Python identifiers (i.e. they cannot contain spaces and they cannot start with a number). It cannot be used when the names conflict with Series/DataFrame methods, and it cannot be used for non-existing columns (i.e. the assignment df.a = 1 won't work if there is no column a). Other than that, . and [] are the same.
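A small sketch of the attribute-access caveat (recent pandas versions emit a UserWarning on the df.c line):

import pandas as pd

df = pd.DataFrame({'a': [1, 2]})
df.a          # works: same as df['a']
df['b'] = 3   # creates a new column 'b'
df.c = 3      # does NOT create a column: it just sets a Python attribute
print(df.columns.tolist())  # ['a', 'b'] -- no 'c'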
loc is especially useful when the index is not numeric (e.g. a DatetimeIndex) because you can get rows with particular labels from the index:
df.loc['2010-05-04 07:00:00']
df.loc['2010-1-1 0:00:00':'2010-12-31 23:59:59', 'Price']
However [] is intended to get columns with particular names:
df['Price']
With [] you can also filter rows, but it is more elaborate:
df[df['Date'] < datetime.datetime(2010,1,1,7,0,0)]['Price']
If you're confused about which of these approaches is (at least) the recommended one for your use-case, take a look at these brief instructions from the pandas tutorial:
When selecting subsets of data, square brackets [] are used. Inside these brackets, you can use a single column/row label, a list of column/row labels, a slice of labels, a conditional expression or a colon.
Select specific rows and/or columns using loc when using the row and column names.
Select specific rows and/or columns using iloc when using the positions in the table.
You can assign new values to a selection based on loc/iloc.
I highlighted some of the points to make their use-case differences even more clear.
There seems to be a difference between df.loc[] and df[] when you create multiple columns in a dataframe at once.
You can refer to this question:
Is there a nice way to generate multiple columns using .loc?
Here, you can't generate multiple columns using df.loc[:, ['name1', 'name2']], but you can by just using double brackets: df[['name1', 'name2']]. (I wonder why they behave differently.)
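A sketch of that difference, assuming a reasonably recent pandas (the column names here are made up):

import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [1, 2]})

# Creating two new columns at once with [] works:
df[['name1', 'name2']] = np.zeros((len(df), 2))

# The same through .loc fails, because .loc will not create
# multiple new column labels in one assignment:
# df.loc[:, ['name3', 'name4']] = 0   # raises KeyError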
I am trying to filter specific rows with python-pandas:
df = pd.read_csv('file.csv', delimiter=',', header=None,engine='python', usecols=range(0, 7), error_bad_lines=False)
df = df.drop(df.index[9:86579])
df = df[df[[0,1]].apply(lambda r: r.str.contains('TestString1', case=False).any(), axis=1)]
df.to_csv("yourcsv.csv", index=False, header=None)#
Now how can I set a starting row? My rows 0-10 contain other information, and I want to start searching by keyword from row 11. But how?
Try this:
df.iloc[11:].to_csv("yourcsv.csv", index=False, header=None)
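If you never need rows 0-10 at all, another option (a sketch reusing your read_csv call, minus the options that don't matter here) is to skip them at read time:

import pandas as pd

# Skip the first 11 lines of the file, so everything starts at row 11.
df = pd.read_csv('file.csv', delimiter=',', header=None,
                 usecols=range(0, 7), skiprows=11)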
If you don't want to drop rows but only want to "see" your dataframe from a certain row onward, you can use the .iloc indexer:
df["column name"].iloc[11:].apply(function)
In this example you take rows from position 11 to the last one and apply your function.
DataFrame.iloc
Purely integer-location based indexing for selection by position.
Allowed inputs are:
An integer, e.g. 5.
A list or array of integers, e.g. [4, 3, 0].
A slice object with ints, e.g. 1:7.
A boolean array.
A callable function with one argument (the calling Series, DataFrame or Panel) and that returns valid output for indexing (one of the above)
.iloc[] is primarily integer position based (from 0 to length-1 of the axis), but may also be used with a boolean array.
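A quick sketch of each allowed input on a toy frame:

import pandas as pd

df = pd.DataFrame({'a': range(5)})

df.iloc[3]                   # an integer: the fourth row
df.iloc[[4, 3, 0]]           # a list of integers
df.iloc[1:4]                 # a slice (end-exclusive, unlike .loc)
df.iloc[[True, False, True, False, True]]   # a boolean array, one per row
df.iloc[lambda d: [0, 1]]    # a callable returning valid indexing output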
I am not sure what you mean by "my rows 0-10 contain other information and I want to start searching by keyword from row 11".
If you mean that you need the first 10 rows to be used as a condition to make your filter work afterwards, then you can iterate by row and use np.where.
If this is not the case, then I believe the other two answers (John, Rafael) already solved your problem, so you can vote them up.
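For completeness, a minimal sketch of the np.where idea mentioned above (the column and keyword are made up):

import numpy as np
import pandas as pd

df = pd.DataFrame({0: ['TestString1', 'other', 'teststring1']})

# Build a flag column from a vectorized row-wise condition.
df['match'] = np.where(df[0].str.contains('TestString1', case=False), 1, 0)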