Change values in Pandas Dataframe [Python]

I'm trying to remove the percent sign after a value in a pandas dataframe, relevant code:
for i in loansdata:
    if i.endswith('%'):
        i = i[:-1]
I was thinking that i = i[:-1] would set the new value, but it doesn't. How do I go about it? For clarity: if I print i inside the for loop, it prints without the percent sign. But if I print the whole dataframe, it has not changed.

Use str.replace to replace a specific character in a column:
df[col] = df[col].str.replace('%','')
Depending on what loansdata actually is, your loop is iterating over either the column names or the row values of a single column.
You can't modify the row contents like that, and even if you could, you should avoid loops where a vectorised solution exists.
If '%' appears in multiple columns you can call the above for each one, but note that the .str accessor only exists for string dtypes.
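A minimal runnable sketch of the vectorised approach, using made-up column names and data:

```python
import pandas as pd

# Hypothetical data: percentage values stored as strings with a trailing '%'
loansdata = pd.DataFrame({
    "int_rate": ["10.5%", "7.2%"],
    "util": ["55%", "80%"],
})

# Strip the '%' from each string column; .str only works on string dtypes
for col in ["int_rate", "util"]:
    loansdata[col] = loansdata[col].str.replace("%", "", regex=False)

print(loansdata["int_rate"].tolist())  # ['10.5', '7.2']
```

From here the cleaned columns can be converted to numbers with pd.to_numeric if needed.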

Related

Converting all pandas column: row to key:value pair json

I am trying to add a new column at the end of my pandas dataframe that will contain the values of the other cells in that row as key:value pairs. I have tried the following:
import json
df["json_formatted"] = df.apply(
    lambda row: json.dumps(row.to_dict(), ensure_ascii=False), axis=1
)
It creates the json_formatted column successfully with all the required data, but the problem is that it also adds json_formatted itself as an extra key inside the JSON. I don't want that. I want the JSON data to contain only the information from the original df columns. How can I do that?
Note: I made ensure_ascii=False because the column names are in Japanese characters.
Create a new variable holding the created column and add it afterwards:
json_formatted = df.apply(lambda row: json.dumps(row.to_dict(), ensure_ascii=False), axis=1)
df['json_formatted'] = json_formatted
This behaviour shouldn't normally happen; it is most likely caused by running the function more than once (you added the column, then ran df.apply on the same dataframe again).
You can avoid this by listing your columns explicitly: df[['col1', 'col2']].apply(...)
Apply is an expensive operation in pandas, and if performance matters it is better to avoid it. An alternative way to do this is:
df["json_formatted"] = [json.dumps(s, ensure_ascii=False) for s in df.T.to_dict().values()]
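A small runnable sketch of the idea, using to_dict(orient="records") (equivalent to df.T.to_dict().values()) and made-up column names. Because each row is serialised from the original columns before the new column is assigned, json_formatted cannot leak into the JSON:

```python
import json
import pandas as pd

df = pd.DataFrame({"name": ["abc", "def"], "qty": [1, 2]})

# Serialise each row dict before the new column exists
df["json_formatted"] = [
    json.dumps(row, ensure_ascii=False) for row in df.to_dict(orient="records")
]

print(df["json_formatted"].iloc[0])  # {"name": "abc", "qty": 1}
```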

DataFrame.at function not working after copying dataframe

I'm using the .at function to try and save all the values under one column header in a list.
The file contains entries for country and population.
df = pandas.read_csv("file.csv")
population_list = []
df2 = df[df['country'] == "India"]
for i in range(len(df2)):
    population_list = df2.at[i, 'population']
This is throwing a KeyError. However, the df.at seems to be working fine for the original dataframe. Is .at just not allowed in this case?
IIUC, you don't need to loop over your dataframe to get what you need. Simply use:
population_list = df2["population"].tolist()
If you really want to use the loop (not recommended when unnecessary), note that the index has likely changed after your filter, i.e. it no longer contains consecutive integers.
Try:
for i in df2.index:
    population_list.append(df2.at[i, 'population'])
Note: in your code you keep reassigning the entire list to a single value instead of appending to it.
at takes an index value and a column name.
In the case of the "original" DataFrame all is OK, because the index probably contains consecutive values starting from 0.
But when you run df2 = df[df['country'] == "India"], then df2 contains only a subset of the original rows, so the index no longer contains consecutive numbers.
One possible solution is to call reset_index() on df2.
The index will then again contain consecutive numbers and your code should raise no exception.
Edit
Your code raises other doubts, though.
Remember that at returns a single value, taken from the cell at a particular index value and column, not a list.
So maybe it is enough to run:
So maybe it is enough to run:
population_India = df.set_index('country').at['India', 'population']
You don't need any list if you want to find just the population of India, a single value.
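Both fixes can be sketched on a small made-up country/population table:

```python
import pandas as pd

df = pd.DataFrame({
    "country": ["India", "France", "India"],
    "population": [100, 67, 120],
})

# After filtering, the index is [0, 2] -- not consecutive
df2 = df[df["country"] == "India"]

# Fix 1: no loop at all
population_list = df2["population"].tolist()  # [100, 120]

# Fix 2: reset the index so positional lookups like .at[1, ...] work again
df2 = df2.reset_index(drop=True)
second = df2.at[1, "population"]  # 120
```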

Is there a way to reverse the dropping method in pandas?

I'm aware that you can use
df1 = df1[df1['Computer Name'] != 'someNameToBeDropped']
to drop the rows containing a given string.
What if I wanted to do it the other way around, i.e. drop everything except the values I have in a list of strings?
Is there a simple hack I haven't noticed?
Try this to keep only the rows whose value in column is in the given list:
df = df[df[column].isin(list_of_strings)]
Additionally, to exclude what's in the list:
df = df[~df[column].isin(list_of_strings)]
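A quick runnable sketch of both directions, with a hypothetical column and list:

```python
import pandas as pd

df = pd.DataFrame({"Computer Name": ["pc1", "pc2", "pc3"]})
keep = ["pc1", "pc3"]

# Keep only the rows whose value is in the list
kept = df[df["Computer Name"].isin(keep)]

# Or drop those rows instead, using ~ (boolean NOT) on the same mask
dropped = df[~df["Computer Name"].isin(keep)]

print(kept["Computer Name"].tolist())     # ['pc1', 'pc3']
print(dropped["Computer Name"].tolist())  # ['pc2']
```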

adding row from one dataframe to another

I am trying to insert or add from one dataframe to another dataframe. I am going through the original dataframe looking for certain words in one column. When I find one of these terms I want to add that row to a new dataframe.
I get the row by using:
entry = df.loc[df['A'] == item]
But when trying to add this row to another dataframe using .add, .insert, .update or other methods, I just get an empty dataframe.
I have also tried adding the column to a dictionary and turning that into a dataframe, but it writes data for the entire row rather than just the column value. So is there a way to add one specific row to a new dataframe from my existing variable?
So entry is a dataframe containing the rows you want to add?
You can simply concatenate two dataframes using the concat function if both have the same column names:
import pandas as pd
entry = df.loc[df['A'] == item]
concat_df = pd.concat([new_df,entry])
pandas.concat reference:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html
The DataFrame.append function also works here: it accepts another DataFrame, a Series or dict-like object, or a list of these.
So, assuming you're trying to add the matched rows, you could use:
entry = df.loc[df['A'] == item]
df2 = df2.append(entry)
Notice that unlike Python's list.append, DataFrame.append returns a new object rather than modifying the object it was called on. (It has also been deprecated in recent pandas versions in favour of pandas.concat.)
Not sure how large your operations will be, but from an efficiency standpoint you're better off appending all of the found rows to a list, concatenating them at once with pandas.concat, and then using concat again to combine the found-entries dataframe with the "insert into" dataframe. This is much faster than calling concat on every iteration. If you're searching from a list of items search_keys, then something like:
entries = []
for item in search_keys:
    entry = df.loc[df['A'] == item]
    entries.append(entry)
found_df = pd.concat(entries)
result_df = pd.concat([old_df, found_df])
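If the match is a simple equality test as in the loop above, the whole search can also be vectorised with isin. A sketch with hypothetical names and data:

```python
import pandas as pd

df = pd.DataFrame({"A": ["x", "y", "z"], "val": [1, 2, 3]})
old_df = pd.DataFrame({"A": ["w"], "val": [0]})
search_keys = ["x", "z"]

# One boolean mask instead of one .loc lookup per key
found_df = df[df["A"].isin(search_keys)]
result_df = pd.concat([old_df, found_df], ignore_index=True)

print(result_df["A"].tolist())  # ['w', 'x', 'z']
```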

Pandas: np.where overwriting values

I currently iterate through the rows of an excel file multiple times and write in "XYZ" to a new column when the row meets certain conditions.
My current code is:
df["new_column"] = np.where(fn == True, "XYZ", "")
The issue I face is that when the fn == True condition is not satisfied, I want to do absolutely nothing and move onto checking the next row of the excel file. I noticed that each time I iterate, the empty string replaces the "XYZ"s that are already marked in the file. Is there a way to prevent this from happening? Is there something I can do instead of empty string ("") to prevent overwriting?
Edit:
My dataframe is a huge financial Excel file with multiple columns and rows. This data set has columns like quantity, revenue, sales, etc. Basically, I have a list that contains about 50 conditionals. For each condition, I iterate through all the rows in the Excel and for the row that matches the condition, I wanted to put an "XYZ" in the df["new_column"] flagging that row. The df["new_column"] is an added column to the original dataframe. Then, I move onto the next condition up until the 50th conditional.
I think the problem is, is that the way I wrote code replaces the previous existing "XYZ" with empty string when I proceed onto check the other conditionals in the list. Basically, I want to find a way to lock "XYZ" in, so it can't become overwritten.
The fn is a helper function that returns a boolean depending on if the condition equals a row in the dataframe. While I iterate, if the condition matches a row, then this function returns True and marks the df["new_column"] with "XYZ". The helper function takes in multiple arguments to check if the current condition matches any of the rows in the dataframe. I hope this explanation helps!
You can try using a lambda with a helper function.
First, create the function; it must return the existing value when the condition is not met, otherwise it would erase it:
def checkIfTrue(FN, new):
    if new == "XYZ":
        return new  # keep a flag set on an earlier pass
    return "XYZ" if FN else new
Then apply this to the new column like that:
df['new_column'] = df.apply(lambda row: checkIfTrue(row["fn"], row["new_column"]), axis=1)
IIUC you want to use .loc[]:
df.loc[fn, "new_column"] = 'XYZ'
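A sketch of why .loc avoids the overwriting problem: it assigns only where the mask is True and leaves every other row untouched, so flags from earlier passes survive. The column name and conditions here are made up:

```python
import pandas as pd

df = pd.DataFrame({"revenue": [10, 50, 30], "new_column": ""})

# First conditional: flag high-revenue rows
df.loc[df["revenue"] > 40, "new_column"] = "XYZ"

# Second conditional: flags low-revenue rows without erasing earlier flags
df.loc[df["revenue"] < 20, "new_column"] = "XYZ"

print(df["new_column"].tolist())  # ['XYZ', 'XYZ', '']
```

Repeating this per conditional (roughly 50 times, per the question) never writes an empty string, so nothing is overwritten.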
