I'm using the .at function to try to collect every value under one column header into a list.
The file contains entries for country and population.
df = pandas.read_csv("file.csv")
population_list = []
df2 = df[df['country'] == "India"]
for i in range(len(df2)):
    population_list = df2.at[i, 'population']
This throws a KeyError. However, .at seems to work fine on the original dataframe. Is .at just not allowed in this case?
IIUC, you don't need to loop over your dataframe to get what you need. Simply use:
population_list = df2["population"].tolist()
If you really want to use the loop (not recommended when unnecessary), note that the index has likely changed after your filter, i.e. it no longer consists of consecutive integers.
Try:
for i in df2.index:
    population_list.append(df2.at[i, 'population'])
Note: in your code you keep reassigning the entire list to a single value instead of appending to it.
With at you pass an index label and a column name.
In the case of the "original" DataFrame everything works, because its index most likely contains consecutive values starting from 0.
But when you run df2 = df[df['country'] == "India"], df2 contains only a subset of the original rows, so its index no longer consists of consecutive numbers.
One possible solution is to call reset_index() on df2.
The index will then contain consecutive numbers again and your code should no longer raise an exception.
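A minimal sketch with made-up data to illustrate both the problem and the fix:
import pandas as pd

# hypothetical data, just to show what filtering does to the index
df = pd.DataFrame({'country': ['USA', 'India', 'India'],
                   'population': [331, 1380, 1390]})
df2 = df[df['country'] == 'India']
print(df2.index.tolist())         # [1, 2] - the original labels survive the filter

df2 = df2.reset_index(drop=True)  # drop=True discards the old index instead of keeping it as a column
print(df2.index.tolist())         # [0, 1] - consecutive again, so df2.at[i, ...] works in the loop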
Edit
But your code raises another doubt.
Remember that at returns a single value, taken from the cell at a particular index label and column, not a list.
So maybe it is enough to run:
population_India = df.set_index('country').at['India', 'population']
You don't need any list. You want to find just the population of India, a single value.
Related
I have a huge 800k-row dataframe, and I need to match it against another dataframe on a key.
Initially I was looping over both dataframes and comparing the key values with a condition.
I was told that merge could save time, but I can't get it to work :(
Overall, here's the code I'm trying to adapt:
import pandas as pd
from tqdm import tqdm

mergeTwo = pd.read_json('merge/mergeUpdate.json')
matches = pd.read_csv('archive/matches.csv')

for indexOne, value in tqdm(mergeTwo.iterrows()):
    for index, match in matches.iterrows():
        if value["gameid"] == match["gameid"]:
            print(match)

for index, value in mergeTwo.iterrows():
    test = value.to_frame().merge(matches, on='gameid')
    print(test)
The first loop version works without issue.
The second one complains about an unknown key (gameid).
Does anyone have a solution?
Thanks in advance !
When you iterate over rows, value is a Series, which the to_frame method turns into a one-column frame whose index is the original column names. So you need to transpose it to make the second approach work:
for index, value in mergeTwo.iterrows():
    # note .T after .to_frame
    test = value.to_frame().T.merge(matches, on='gameid')
    print(test)
But iteration is redundant here; a single merge applied to the whole frame is enough:
mergeTwo.merge(matches, on='gameid', how='left')
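To illustrate, a minimal sketch with made-up gameid values (the tiny frames below are stand-ins for mergeTwo and matches):
import pandas as pd

mergeTwo = pd.DataFrame({'gameid': [1, 2, 3], 'team': ['A', 'B', 'C']})
matches = pd.DataFrame({'gameid': [2, 3, 4], 'winner': ['B', 'C', 'D']})

# one merge pairs up rows wherever gameid matches; how='left' keeps every
# row of mergeTwo and fills NaN where no match exists
result = mergeTwo.merge(matches, on='gameid', how='left')
print(result)
#    gameid team winner
# 0       1    A    NaN
# 1       2    B      B
# 2       3    C      C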
With my df, I dropped index 3116:
df = df.drop(df.index[3116], axis=0)
However, when I later loop over the rows of the df, there's an error at 3116. Not sure why? Was it not dropped correctly? When I use df.info() there is one less row, so I would think the drop worked, but later there's an error:
for i in range(df['ever_married'].count()):
    if df['ever_married'][i] == 'Yes':
        df['ever_married'][i] = 1
    elif df['ever_married'][i] == 'No':
        df['ever_married'][i] = 0
This raises:
'KeyError: 3116'
However, when I add this before the first if block in the for loop:
if i == 3116:
    pass
The error goes away, but then the code doesn't do what I want, which is to convert all the values from object to int.
How can I fix this? Thank you!
If you drop the row at that position, the dataframe is left with a non-contiguous index. Later, when you loop over it, you assume the index is contiguous:
for i in range(df['ever_married'].count()):
This will loop from 0 to the number of rows in your dataframe, and does not skip the dropped index label. There are four fixes you could choose from here:
Get rid of the loop. Series.map() could be applied to this problem, like so:
df['ever_married'] = df['ever_married'].map({'No': 0, 'Yes': 1})
This is both faster and more robust. It replaces 'No' with 0 and 'Yes' with 1 everywhere in the column (values not present in the mapping become NaN).
Index using .iloc[] instead of index labels. The .iloc[] indexer selects by position within the series or dataframe, rather than by label.
Example of how to set the value in column "ever_married" at position i to 1:
df.iloc[i, df.columns.get_loc('ever_married')] = 1
Restore a contiguous index using .reset_index(). DataFrame.reset_index() can reset the index so that it is contiguous and does not skip any numbers.
Example:
df = df.reset_index(drop=True)
Use for i in df.index: to visit only the index labels that actually exist, for example:
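A minimal sketch of this option, using .loc for label-based assignment to avoid chained indexing:
for i in df.index:
    # df.index only yields labels that still exist, so the dropped 3116 is skipped
    if df.loc[i, 'ever_married'] == 'Yes':
        df.loc[i, 'ever_married'] = 1
    elif df.loc[i, 'ever_married'] == 'No':
        df.loc[i, 'ever_married'] = 0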
Of the four solutions, I would suggest solution 1.
I am trying to insert or add from one dataframe to another dataframe. I am going through the original dataframe looking for certain words in one column. When I find one of these terms I want to add that row to a new dataframe.
I get the row using:
entry = df.loc[df['A'] == item]
But when I try to add this row to another dataframe using .add, .insert, .update or other methods, I just get an empty dataframe.
I have also tried adding the column to a dictionary and turning that into a dataframe, but it writes data for the entire row rather than just the column value. So is there a way to add one specific row to a new dataframe from my existing variable?
So entry is a dataframe containing the rows you want to add?
You can simply concatenate two dataframes using the concat function if both have the same column names:
import pandas as pd
entry = df.loc[df['A'] == item]
concat_df = pd.concat([new_df, entry])
pandas.concat reference:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html
The append function expects a list of rows in this format:
[row_1, row_2, ..., row_N]
where each row holds the values for the columns.
So, assuming you're trying to add one row, you should use:
entry = df.loc[df['A'] == item]
df2 = df2.append([entry])
Notice that unlike Python's list.append, the DataFrame.append function returns a new object and does not modify the object it is called on. (Also note that DataFrame.append was deprecated in pandas 1.4 and removed in 2.0; on current versions use pd.concat([df2, entry]) instead.)
Not sure how large your operations will be, but from an efficiency standpoint you're better off appending all of the found rows to a list and concatenating them together once with pandas.concat, then using concat again to combine the found-entries dataframe with the dataframe you're inserting into. This is much faster than calling concat on every iteration. If you're searching from a list of items search_keys, then something like:
entries = []
for key in search_keys:
    entry = df.loc[df['A'] == key]
    entries.append(entry)
found_df = pd.concat(entries)
result_df = pd.concat([old_df, found_df])
I search a pandas DataFrame with loc, for example like this:
x = df.loc[df.index.isin(['one','two'])]
But I need only the first row of the result. If I use
x = df.loc[df.index.isin(['one','two'])].iloc[0]
I get an error in the case that no row is found. Of course, I can select all the rows (the first example) and then check whether the result is empty. But I'm looking for a more efficient way (the dataframe can be long). Is there one?
pandas.Index.duplicated
The pandas.Index object has a duplicated method that identifies all repeated values after the first occurrence.
x[~x.index.duplicated()]
If you want to combine this with your isin selection:
df[df.index.isin(['one', 'two']) & ~df.index.duplicated()]
I'm trying to remove the percent sign after a value in a pandas dataframe, relevant code:
for i in loansdata:
    if i.endswith('%'):
        i = i[:-1]
I was thinking that i = i[:-1] would set the new value, but it doesn't. How do I go about it? For clarity: if I print i inside the loop, it prints without the percent sign, but when I print the whole dataframe, nothing has changed.
Use str.replace to remove a specific character from a column:
df[col] = df[col].str.replace('%','')
Depending on what loansdata actually is, you're iterating over either its column names or the values of a single column.
You can't modify the row contents like that, and even if you could, you should avoid loops where a vectorised solution exists.
If % appears in multiple columns you can call the above for each column, but the .str accessor only exists for string (object) dtypes, for example:
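A minimal sketch, assuming loansdata is a DataFrame; select_dtypes picks out the string-typed columns, where the .str accessor is available:
for col in loansdata.select_dtypes('object').columns:
    loansdata[col] = loansdata[col].str.replace('%', '')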