Replacing characters in a string in Python

I have a data frame that I iterate through and modify as follows:
filtered = pd.read_csv(fileloc)
for index, row in filtered.iterrows():
row["standardUpc"] = row["standardUpc"].replace("['","")
row["standardUpc"] = row["standardUpc"].replace("']","")
This is not working.

You shouldn't need to iterate over the df for this; instead you should be able to do:
filtered['standardUpc'] = filtered['standardUpc'].str.replace("['", "", regex=False)
filtered['standardUpc'] = filtered['standardUpc'].str.replace("']", "", regex=False)
There are ways to chain the two calls together (see the sketch below), but that should be the way to do the string replacement.
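For example, a minimal sketch of chaining the two replacements into one call (assuming the column holds strings such as "['012345']"):
import pandas as pd

filtered = pd.DataFrame({"standardUpc": ["['012345']", "['678901']"]})
# Strip the leading "['" and trailing "']" in a single regex pass;
# filtered["standardUpc"].str.strip("[]'") would achieve the same from both ends.
filtered["standardUpc"] = filtered["standardUpc"].str.replace(r"^\['|'\]$", "", regex=True)
print(filtered)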

Related

Iterate through different dataframes and apply a function to each one

I have 4 different dataframes containing time series data that all have the same structure.
My goal is to take each individual dataframe and pass it through a function I have defined that will group them by datestamp, sum the columns and return a new dataframe with the columns I want. So in total I want 4 new dataframes that have only the data I want.
I just looked through this post:
Loop through different dataframes and perform actions using a function
but applying this did not change my results.
Here is my code:
I am putting the dataframes in a list so I can iterate through them:
dfs = [vds, vds2, vds3, vds4]
This is the function I want to pass each dataframe through:
def VDS_pre(df):
    df = df.groupby(['datestamp', 'timestamp']).sum().reset_index()
    df = df.rename(columns={'datestamp': 'Date', 'timestamp': 'Time', 'det_vol': 'VolumeVDS'})
    df = df[['Date', 'Time', 'VolumeVDS']]
    return df
This is the loop I made to iterate through my dataframe list and pass each one through my function:
for df in dfs:
    df = VDS_pre(df)
However once I go through my loop and go to print out the dataframes, they have not been modified and look like they initially did. Thanks for the help!
However once I go through my loop and go to print out the dataframes, they have not been modified and look like they initially did.
Yes, this is actually the case. The reason why they have not been modified is:
Assigning to the loop variable in a for item in lst: loop affects neither lst itself nor the variables from which the list items got their values, as the following code demonstrates:
v1=1; v2=2; v3=3
lst = [v1,v2,v3]
for item in lst:
    item = 0
print(lst, v1, v2, v3) # gives: [1, 2, 3] 1 2 3
To achieve the result you expect, you can use a list comprehension together with Python's unpacking:
vds, vds2, vds3, vds4 = [VDS_pre(df) for df in [vds, vds2, vds3, vds4]]
or the following code, which uses a list of strings holding the variable names of the dataframes:
sdfs = ['vds', 'vds2', 'vds3', 'vds4']
for sdf in sdfs:
    exec(f'{sdf} = VDS_pre(eval(sdf))')
Now printing vds, vds2, vds3 and vds4 will output the modified dataframes.
Pandas operations return a new copy of the data. Your snippet stores the result in the df variable, which is never written back to your initial list. This is why you have no stored result after execution.
If you don't need to keep the original frames, you may simply overwrite them:
for i, df in enumerate(dfs):
    dfs[i] = VDS_pre(df)
Otherwise, use a second list and append the results to it:
l = []
for df in dfs:
    df2 = VDS_pre(df)
    l.append(df2)
Or, even better, use a list comprehension to rewrite this snippet as a single line of code.
Now you are able to store the result of your processing.
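A minimal sketch of that one-liner, assuming dfs holds the four dataframes as above:
# Rebuild the list with the processed frames in a single pass.
dfs = [VDS_pre(df) for df in dfs]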
Additionally, if your frames have the same structure and can be merged into a single frame, consider concatenating them first and then applying your function to the result. That would be the more idiomatic pandas approach.
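A rough sketch of that idea, reusing the column names from VDS_pre above; the extra 'source' grouping key is an addition here so that each original frame is still summed separately:
import pandas as pd

# Stack the four frames into one, tagging each row with the frame it came from.
combined = pd.concat([vds, vds2, vds3, vds4],
                     keys=['vds', 'vds2', 'vds3', 'vds4'],
                     names=['source', None]).reset_index(level='source')

# Group by the source tag as well, so the per-frame sums are preserved.
summed = (combined.groupby(['source', 'datestamp', 'timestamp']).sum()
                  .reset_index()
                  .rename(columns={'datestamp': 'Date',
                                   'timestamp': 'Time',
                                   'det_vol': 'VolumeVDS'}))
result = summed[['source', 'Date', 'Time', 'VolumeVDS']]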

How do I slice and select only the first characters, like you would for a character string, but for a numeric feature?

I am trying to slice the first 3 digits of the Zip Code feature and create a new variable from it.
zipcode = 75012
sliced_zipcode = str(zipcode)[:3]
# If you want the result in integer
three_digits_zipcode = int(str(zipcode)[:3])
# If you want to apply this in a dataframe
import pandas as pd
df['three_digits_zip'] = df['zipcode'].apply(lambda x: int(str(x)[:3]))
Zip = 300123
newZip = Zip//1000
print(newZip)
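As a side note, the integer-division trick only keeps the first three digits when you know how many digits the value has, so the divisor must match. A rough sketch applying it to a dataframe column (the column names are only illustrative):
import pandas as pd

df = pd.DataFrame({'zipcode': [75012, 75013, 76011]})
# For 5-digit zip codes, dividing by 100 keeps the first three digits;
# for 6-digit values (like 300123 above) divide by 1000 instead.
df['three_digits_zip'] = df['zipcode'] // 100
print(df)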

How do I format the output from regular expression matches into a simple list that I can use to select rows in a pandas data frame?

I tried to convert the actual matches [F.....] to a list or a list of strings, but they did not come out in the right format:
import re

regex = re.compile(r'(^A[0-9]....|^B[0-9]....|^C[0-9]....|^carp_P[0-9]....|^F[0-9]....|^H[0-9]....|^O[0-9]....|^Q[0-9]....)')
regex.split(text)
matches = re.findall(regex, text)
if matches:
    match_lst = []
    match_lst.append(matches)
    print(match_lst)
    match_str = ''.join(str(e) for e in match_lst)
    print(match_str)
If I create my own simple list, then my code works for accession numbers when I set that column as the index (F6XW64, etc. as the index):
rows = ['F6XW64', 'F6XRH6']
filename = 'Select_Columns_Macho.xlsx'
df2 = pd.read_excel(filename, sheet_name=0, header = 0)
df2.set_index('Accession', inplace = True)
df3 = df2.loc[rows]
print (df3)
The lists and string generated from my regular expression search above do not work in that code. It seems like a format issue, but I have not been able to figure it out.
I want to pull out rows of a pandas data frame that match a list of index labels, but the list generated from the regular expressions, and my attempts to format that output into a simple list like rows above, are not working.
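One likely cause of the format issue, as a rough sketch (the simplified pattern and sample text below are only illustrative, not the original data): re.findall already returns a flat list of strings, so appending that whole list to match_lst produces a nested list such as [['F6XW64', ...]]; the findall result can usually be used directly as the row labels.
import re
import pandas as pd

regex = re.compile(r'^(?:A|B|C|F|H|O|Q)[0-9]....', re.MULTILINE)
text = "F6XW64\nF6XRH6\nX00000"
rows = regex.findall(text)          # ['F6XW64', 'F6XRH6'] -- already a flat list of strings

df2 = pd.read_excel('Select_Columns_Macho.xlsx', sheet_name=0, header=0)
df2.set_index('Accession', inplace=True)
# Selecting only the labels that actually exist in the index avoids a KeyError
df3 = df2.loc[df2.index.intersection(rows)]
print(df3)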

How can I convert this for loop to a dataframe?

If I use print, I can print all the data. But when I use data=, it only shows me the value for i=2917. How can I convert this for loop to a dataframe?
import pandas as pd
df = pd.read_excel('C:/Users/aaaa/Desktop/rrrrr/twstock/1101.xlsx')
for i in range(1, 2917):
    data = '{:.6%}'.format((df['close'][i] / df['close'][i-1]) - 1)
You reassign data in every iteration of your for loop. Therefore data contains only the value for i = 2916.
How about creating a list and then appending your data to it inside the for loop?
data = []
for i in range(1, 2917):
    data.append('{:.6%}'.format((df['close'][i] / df['close'][i-1]) - 1))
print(data)
I would recommend using pandas vectorized methods for speed and cleanness:
df = pd.read_excel('C:/Users/aaaa/Desktop/rrrrr/twstock/1101.xlsx')
data = df["close"].pct_change()
Then you can convert it to a list of formatted strings, if desired, with something like:
string_list = ['{:.6%}'.format(x) for x in data.tolist()[1:]]
DON'T loop through the dataframe as kalehmann suggested; it's very inefficient. You can either call data = df["close"].pct_change() as Sven suggested, or, if you want a formula closer to the one you defined:
data = df['close'] / df['close'].shift(1) - 1
And then you can run:
data_list = ['{:.6%}'.format(x) for x in data.tolist()]

compare list of dictionaries to dataframe, show missing values

I have a list of dictionaries
example_list = [{'email':'myemail#email.com'},{'email':'another#email.com'}]
and a dataframe with an 'Email' column
I need to compare the list against the dataframe and return the values that are not in the dataframe.
I can certainly iterate over the list and check each entry against the dataframe, but I was looking for a more pythonic way, perhaps using a list comprehension or a map function on dataframes?
To return the values that are not in df['Email'], here are a couple of options involving set difference operations:
np.setdiff1d
emails = [d['email'] for d in example_list]
diff = np.setdiff1d(emails, df['Email']) # returns a list
set.difference
# returns a set
diff = set(d['email'] for d in example_list).difference(df['Email'])
One way is to take one set from another. For a functional solution you can use operator.itemgetter:
from operator import itemgetter
res = set(map(itemgetter('email'), example_list)) - set(df['Email'])
Note: the - operator is syntactic sugar for set.difference.
I ended up converting the list into a dataframe, comparing the two dataframes by merging them on a column, and then creating a dataframe out of the missing values
so, for example
example_list = [{'email':'myemail#email.com'},{'email':'another#email.com'}]
df_two = pd.DataFrame(example_list)
df_two = df_two.rename(columns={'email': 'Email'})  # match the column name used in df_one
common = df_one.merge(df_two, on=['Email'])
df_diff = df_one[~df_one.Email.isin(common.Email)]
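If what is actually needed is the reverse direction (the list values missing from the dataframe, as the question asks), a minimal sketch along the same lines:
# Rows of df_two (built from the list) whose Email never appears in df_one
missing = df_two[~df_two.Email.isin(df_one.Email)]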
