New DataFrame is being made when updating dataframe inside a loop - python

I am trying to make some changes to three dataframes in a loop, in this manner:
for sheet in [f1, f2, f3]:
    sheet = preprocess_df(sheet)
The preprocess_df function looks like this:
def preprocess_df(df):
    """Making a function to preprocess a dataframe individually rather than all three together"""
    # make column names uniform
    columns = [
        "Reporting_Type",
        "AA_name",
        "Date_DD/MM/YYYY",
        "Time_HHMMSS",
        "Type",
        "Name",
        "FI_Type",
        "Count_linked",
        "Average_timelag_FI_Notification",
        "FI_Ready_to_FI_request_ratio",
        "Count_Consent_Raised",
        "Actioned_to_raised_ratio",
        "Approved_to_raised_ratio",
        "FI_Ready_to_FI_request_ratio(Daily)",
        "Daily_Consent_Requests_Data_Delivered",
        "Total_Consent_Requests_Data_Delivered",
        "Consent_Requests_Data_Delivered_To_Raised_Ratio",
        "Daily_Consent_Requests_Raised",
        "Daily Consent_Requests_Data_Delivered_To_Raised_Ratio",
    ]
    # Keep only the first 19 columns
    df = df.iloc[:, :19]
    # Set the column names
    df.columns = columns
    return df
I am basically updating the column names and fixing the dataframe size. The issue I face is that the sheet variable does get updated if I print the dataframe inside the loop; however, the original f1, f2 and f3 dataframes don't get updated. I think this is because the sheet variable holds a copy of f1 etc. rather than referring to the same dataframe, which seems like an instance of the pass-by-reference versus pass-by-value distinction. Is there a way I could make in-place changes to all the sheets inside the loop?

Indeed, a new dataframe is created when you do df = df.iloc[:, :19], and assigning it to df only rebinds the name local to the function; the caller's dataframe is untouched.
However, you can get around this by using drop with inplace=True:
import pandas as pd
import numpy as np

def preprocess_df(df):
    columns = [
        "a",
        "b",
    ]  # Swap this list with yours
    df.drop(df.columns[2:], inplace=True, axis=1)  # Keep the first 2 columns; replace 2 with 19 in your code
    df.columns = columns

f1 = pd.DataFrame(np.arange(12).reshape(3, 4), columns=['A', 'B', 'C', 'D'])  # Just an example
preprocess_df(f1)  # You can put this in your for loop
print(f1)
The above code will output something like:
   a  b
0  0  1
1  4  5
2  8  9
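If you would rather avoid inplace (its use is often discouraged), an alternative sketch is to keep your original preprocess_df, which returns the modified dataframe, and rebind the caller's names to the results:
sheets = [preprocess_df(sheet) for sheet in [f1, f2, f3]]
f1, f2, f3 = sheets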

Related

Changes to pandas dataframe in for loop is only partially saved

I have two dfs, and want to manipulate them in some way with a for loop.
I have found that creating a new column within the loop updates the df, but other commands like set_index or dropping columns do not stick.
import pandas as pd
import numpy as np

gen1 = pd.DataFrame(np.random.rand(12, 3))
gen2 = pd.DataFrame(np.random.rand(12, 3))
df1 = pd.DataFrame(gen1)
df2 = pd.DataFrame(gen2)
all_df = [df1, df2]

for x in all_df:
    x['test'] = x[1] + 1
    x = x.set_index(0).drop(2, axis=1)
    print(x)
Note that when each df is printed inside the loop, both dfs show all the commands applied perfectly. But when I call either df afterwards, only the new column 'test' is there; the set_index and drop operations are undone.
Am I missing something as to why only one of the commands has been made permanent? Thank you.
Here's what's going on:
x is a variable that at the start of each iteration of your for loop initially refers to an element of the list all_df. When you assign to x['test'], you are using x to update that element, so it does what you want.
However, when you assign something new to x, you are simply causing x to refer to that new thing without touching the contents of what x previously referred to (namely, the element of all_df that you are hoping to change).
You could try something like this instead:
for x in all_df:
    x['test'] = x[1] + 1
    x.set_index(0, inplace=True)
    x.drop(2, axis=1, inplace=True)

print(df1)
print(df2)
Please note that using inplace is often discouraged (see here for example), so you may want to consider whether there's a way to achieve your objective using new DataFrame objects created based on df1 and df2.
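For instance, a minimal non-inplace sketch, assuming df1 and df2 as defined in the question, builds new frames and rebinds the names:
all_df = [x.assign(test=x[1] + 1).set_index(0).drop(2, axis=1) for x in all_df]
df1, df2 = all_df
print(df1)
print(df2)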

Create two new fields at once in pandas dataframe based off of calculations of other fields

I am iterating over a series of csv files as dataframes, eventually writing them all out to a common excel workbook.
In one of the many files, there are decimal GPS values (latitude, longitude) split into two columns (df[4] and df[5]) that I'm converting to degrees-minutes-seconds. That method returns a tuple that I'm attempting to park in two new fields called dmslat and dmslon in the same row of the original dataframe:
def convert_dd_to_dms(lat, lon):
    # does the math here
    return dmslat, dmslon

csv_dir = askdirectory()  # tkinter directory picker
os.chdir(csv_dir)

for f in glob.iglob("*.csv"):
    (csv_path, csv_name) = os.path.split(f)
    (csv_prefix, csv_ext) = os.path.splitext(csv_name)
    if csv_prefix[-3:] == "loc":
        df = pd.read_csv(f)
        df['dmslat'] = None
        df['dmslon'] = None
        for i, row in df.iterrows():
            fixed_coords = convert_dd_to_dms(row[4], row[5])
            row['dmslat'] = fixed_coords[0]
            row['dmslon'] = fixed_coords[1]
        print(df)
    # process the other files
So when I use a print() statement I can see the coords are properly calculated but they are not being committed to the dmslat/dmslon fields.
I have also tried assigning the new fields within the row iterator, but since I am at the row scale, it ends up overwriting the entire column with the new calculated value every time.
How can I get the results to (succinctly) populate the columns?
It would appear that df.iterrows() is resulting in a "copy" of each row, thus when you add/update the columns "dmslat" and "dmslon", you are modifying the copy, not the original dataframe. This can be confirmed by printing "row" after your assignments. You will see the row item was successfully updated, but the changes are not reflected in the original dataframe.
To modify the original dataframe, you can modify your code as such:
for i, row in df.iterrows():
    fixed_coords = convert_dd_to_dms(row[4], row[5])
    df.loc[i, 'dmslat'] = fixed_coords[0]
    df.loc[i, 'dmslon'] = fixed_coords[1]

print(df)
Using df.loc guarantees that the changes are made to the original dataframe.
I think you would do better to use apply rather than iterrows.
Here's a solution that is based on apply. I replaced your location calculation with a function named 'foo' which does some arbitrary calculation from two fields 'a' and 'b' to new values for 'a' and 'b'.
df = pd.DataFrame({"a": range(10), "b": range(10, 20)})

def foo(row):
    return (row["a"] + row["b"], row["a"] * row["b"])

new_df = df.apply(foo, axis=1).apply(pd.Series)
In the above code block, applying 'foo' returns a tuple for every row. Using apply again with pd.Series turns it into a data frame.
df[["a", "b"]] = new_df
df.head(3)
    a   b
0  10   0
1  12  11
2  14  24
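Since foo here is plain column arithmetic, the same update can also be written without apply at all; a fully vectorized sketch on the same toy data:
df = pd.DataFrame({"a": range(10), "b": range(10, 20)})
df["a"], df["b"] = df["a"] + df["b"], df["a"] * df["b"]  # the right side is evaluated before either column is reassigned
df.head(3)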

Add new columns and new column names in python

I have a CSV file in the following format:
Date,Time,Open,High,Low,Close,Volume
09/22/2003,00:00,1024.5,1025.25,1015.75,1022.0,720382.0
09/23/2003,00:00,1022.0,1035.5,1019.25,1022.0,22441.0
10/22/2003,00:00,1035.0,1036.75,1024.25,1024.5,663229.0
I would like to add 20 new columns to this file, the value of each new column is synthetically created by simply randomizing a set of numbers.
It would be something like this:
import pandas as pd
from random import randrange

df = pd.read_csv('dataset.csv')
print(len(df))
input()
for i in range(len(df)):
    # Data that already exist
    date = df.values[i][0]
    time = df.values[i][1]
    open_value = df.values[i][2]
    high_value = df.values[i][3]
    low_value = df.values[i][4]
    close_value = df.values[i][5]
    volume = df.values[i][6]
    # This is the new data
    prediction_1 = randrange(3)
    prediction_2 = randrange(3)
    prediction_3 = randrange(3)
    prediction_4 = randrange(3)
    prediction_5 = randrange(3)
    prediction_6 = randrange(3)
    prediction_7 = randrange(3)
    prediction_8 = randrange(3)
    prediction_9 = randrange(3)
    prediction_10 = randrange(3)
    prediction_11 = randrange(3)
    prediction_12 = randrange(3)
    prediction_13 = randrange(3)
    prediction_14 = randrange(3)
    prediction_15 = randrange(3)
    prediction_16 = randrange(3)
    prediction_17 = randrange(3)
    prediction_18 = randrange(3)
    prediction_19 = randrange(3)
    prediction_20 = randrange(3)
    # How to concatenate these data row by row in a matrix?
    # How to add new column names and save the file?
I would like to concatenate them (old+synthetic data) and, after that, I would like to add 20 new columns named 'synthetic1', 'synthetic2', ..., 'synthetic20', to the existing column names and then save the resulting new dataset in a new text file.
I could do that easily with NumPy, but here not all of the data is numeric, so I don't know how to do that (or whether it is possible). Is it possible to do that with Pandas or another library?
Here's a way you can do it:
import pandas as pd
import numpy as np

# create n_col synthetic columns; the number of rows must match the existing df
n_row = len(df)
n_col = 20
f = pd.DataFrame(np.random.randint(100, size=(n_row, n_col)),
                 columns=['synthetic' + str(x) for x in range(1, n_col + 1)])
df = pd.concat([df, f], axis=1)  # axis=1 adds the new columns alongside the old ones
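To cover the "save the resulting new dataset in a new text file" part of the question, pandas' to_csv can write the combined frame out; the filename here is just an example:
df.to_csv('dataset_with_synthetic.csv', index=False)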

Appending a column to data frame using Pandas in python

I'm trying some operations on an Excel file using pandas. I want to extract some columns from the Excel file, add another column to those extracted columns, and write all the columns to a new Excel file. To do this I have to append the new column to the old columns.
Here is my code:
import pandas as pd
#Reading ExcelFIle
#Work.xlsx is input file
ex_file = 'Work.xlsx'
data = pd.read_excel(ex_file,'Data')
#Create subset of columns by extracting columns D,I,J,AU from the file
data_subset_columns = pd.read_excel(ex_file, 'Data', parse_cols="D,I,J,AU")
#Compute new column 'Percentage'
#'Num Labels' and 'Num Tracks' are two different columns in given file
data['Percentage'] = data['Num Labels'] / data['Num Tracks']
data1 = data['Percentage']
print data1
#Here I'm trying to append data['Percentage'] to data_subset_columns
Final_data = data_subset_columns.append(data1)
print Final_data
Final_data.to_excel('111.xlsx')
No error is shown, but Final_data is not giving me the expected results (the data is not getting appended).
There is no need to explicitly append columns in pandas. When you calculate a new column, it is included in the dataframe, and when you export the dataframe to Excel, the new column will be included.
Try this, assuming 'Num Labels' and 'Num Tracks' are in "D,I,J,AU" [otherwise add them]:
import pandas as pd

# Note: parse_cols was later renamed to usecols in pandas
data_subset = pd.read_excel(ex_file, 'Data', parse_cols="D,I,J,AU")
data_subset['Percentage'] = data_subset['Num Labels'] / data_subset['Num Tracks']
data_subset.to_excel('111.xlsx')
The append function of a dataframe adds rows, not columns to the dataframe. Well, it does add columns if the appended rows have more columns than in the source dataframe.
DataFrame.append(other, ignore_index=False, verify_integrity=False)
Append rows of other to the end of this frame, returning a new object. Columns not in this frame are added as new columns.
I think you are looking for something like concat.
Combine DataFrame objects horizontally along the x axis by passing in axis=1.
>>> df1 = pd.DataFrame([['a', 1], ['b', 2]],
... columns=['letter', 'number'])
>>> df4 = pd.DataFrame([['bird', 'polly'], ['monkey', 'george']],
... columns=['animal', 'name'])
>>> pd.concat([df1, df4], axis=1)
  letter  number  animal    name
0      a       1    bird   polly
1      b       2  monkey  george
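Applied to the names in the question, and assuming the indexes of the extracted columns and the computed series line up (they do here, since both come from the same sheet), that would be:
Final_data = pd.concat([data_subset_columns, data1], axis=1)
Final_data.to_excel('111.xlsx')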

Creating an empty Pandas DataFrame, and then filling it

I'm starting from the pandas DataFrame documentation here: Introduction to data structures
I'd like to iteratively fill the DataFrame with values in a time series kind of calculation. I'd like to initialize the DataFrame with columns A, B, and timestamp rows, all 0 or all NaN.
I'd then add initial values and go over this data calculating the new row from the row before, say row[A][t] = row[A][t-1]+1 or so.
I'm currently using the code as below, but I feel it's kind of ugly and there must be a way to do this with a DataFrame directly or just a better way in general.
Note: I'm using Python 2.7.
import datetime as dt
import pandas as pd
import scipy as s

if __name__ == '__main__':
    base = dt.datetime.today().date()
    dates = [base - dt.timedelta(days=x) for x in range(0, 10)]
    dates.sort()

    valdict = {}
    symbols = ['A', 'B', 'C']
    for symb in symbols:
        valdict[symb] = pd.Series(s.zeros(len(dates)), dates)

    for thedate in dates:
        if thedate > dates[0]:
            for symb in valdict:
                valdict[symb][thedate] = 1 + valdict[symb][thedate - dt.timedelta(days=1)]

    print valdict
NEVER grow a DataFrame row-wise!
TLDR; (just read the bold text)
Most answers here will tell you how to create an empty DataFrame and fill it out, but no one will tell you that it is a bad thing to do.
Here is my advice: Accumulate data in a list, not a DataFrame.
Use a list to collect your data, then initialise a DataFrame when you are ready. Either a list-of-lists or list-of-dicts format will work, pd.DataFrame accepts both.
data = []
for row in some_function_that_yields_data():
    data.append(row)

df = pd.DataFrame(data)
pd.DataFrame converts the list of rows (where each row is a scalar value) into a DataFrame. If your function yields DataFrames instead, call pd.concat.
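For the DataFrame-yielding case, a minimal sketch (some_function_that_yields_dataframes is a stand-in name, not from the original answer):
pieces = list(some_function_that_yields_dataframes())
df = pd.concat(pieces, ignore_index=True)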
Pros of this approach:
It is always cheaper to append to a list and create a DataFrame in one go than it is to create an empty DataFrame (or one of NaNs) and append to it over and over again.
Lists also take up less memory and are a much lighter data structure to work with, append, and remove (if needed).
dtypes are automatically inferred (rather than assigning object to all of them).
A RangeIndex is automatically created for your data, instead of you having to take care to assign the correct index to the row you are appending at each iteration.
If you aren't convinced yet, this is also mentioned in the documentation:
Iteratively appending rows to a DataFrame can be more computationally intensive than a single concatenate. A better solution is to append those rows to a list and then concatenate the list with the original DataFrame all at once.
*** Update for pandas >= 1.4: append is now DEPRECATED! ***
As of pandas 1.4, append has now been deprecated! Use pd.concat instead. See the release notes
These options are horrible
append or concat inside a loop
Here is the biggest mistake I've seen from beginners:
df = pd.DataFrame(columns=['A', 'B', 'C'])
for a, b, c in some_function_that_yields_data():
    df = df.append({'A': a, 'B': b, 'C': c}, ignore_index=True)  # yuck
    # or similarly,
    # df = pd.concat([df, pd.DataFrame([{'A': a, 'B': b, 'C': c}])], ignore_index=True)
Memory is re-allocated for every append or concat operation you have. Couple this with a loop and you have a quadratic complexity operation.
The other mistake associated with df.append is that users tend to forget append is not an in-place function, so the result must be assigned back. You also have to worry about the dtypes:
df = pd.DataFrame(columns=['A', 'B', 'C'])
df = df.append({'A': 1, 'B': 12.3, 'C': 'xyz'}, ignore_index=True)
df.dtypes
A object # yuck!
B float64
C object
dtype: object
Dealing with object columns is never a good thing, because pandas cannot vectorize operations on those columns. You will need to do this to fix it:
df.infer_objects().dtypes
A int64
B float64
C object
dtype: object
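Note that infer_objects returns a converted copy rather than fixing df in place, so to keep the corrected dtypes you would assign the result back:
df = df.infer_objects()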
loc inside a loop
I have also seen loc used to append to a DataFrame that was created empty:
df = pd.DataFrame(columns=['A', 'B', 'C'])
for a, b, c in some_function_that_yields_data():
    df.loc[len(df)] = [a, b, c]
As before, you have not pre-allocated the amount of memory you need each time, so the memory is re-grown each time you create a new row. It's just as bad as append, and even more ugly.
Empty DataFrame of NaNs
And then, there's creating a DataFrame of NaNs, and all the caveats associated therewith.
df = pd.DataFrame(columns=['A', 'B', 'C'], index=range(5))
df
A B C
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 NaN NaN NaN
4 NaN NaN NaN
It creates a DataFrame of object columns, like the others.
df.dtypes
A object # you DON'T want this
B object
C object
dtype: object
Appending still has all the issues as the methods above.
for i, (a, b, c) in enumerate(some_function_that_yields_data()):
    df.iloc[i] = [a, b, c]
The Proof is in the Pudding
Timing these methods is the fastest way to see just how much they differ in terms of their memory and utility.
Benchmarking code for reference.
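If you want a quick check of your own, here is a minimal self-contained timing sketch (the helper names and sizes are arbitrary choices, not the original benchmark):
import timeit
import pandas as pd

def build_with_list(n):
    # accumulate rows in a list, build the frame once
    data = []
    for i in range(n):
        data.append({'A': i, 'B': i * 2, 'C': i * 3})
    return pd.DataFrame(data)

def build_with_concat(n):
    # grow the frame one row at a time (the anti-pattern)
    df = pd.DataFrame(columns=['A', 'B', 'C'])
    for i in range(n):
        df = pd.concat([df, pd.DataFrame([{'A': i, 'B': i * 2, 'C': i * 3}])],
                       ignore_index=True)
    return df

print(timeit.timeit(lambda: build_with_list(1000), number=5))    # fast
print(timeit.timeit(lambda: build_with_concat(1000), number=5))  # much slower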
Here's a couple of suggestions:
Use date_range for the index:
import datetime
import pandas as pd
import numpy as np
todays_date = datetime.datetime.now().date()
index = pd.date_range(todays_date-datetime.timedelta(10), periods=10, freq='D')
columns = ['A','B', 'C']
Note: we could create an empty DataFrame (with NaNs) simply by writing:
df_ = pd.DataFrame(index=index, columns=columns)
df_ = df_.fillna(0) # With 0s rather than NaNs
To do these type of calculations for the data, use a NumPy array:
data = np.array([np.arange(10)]*3).T
Hence we can create the DataFrame:
In [10]: df = pd.DataFrame(data, index=index, columns=columns)
In [11]: df
Out[11]:
A B C
2012-11-29 0 0 0
2012-11-30 1 1 1
2012-12-01 2 2 2
2012-12-02 3 3 3
2012-12-03 4 4 4
2012-12-04 5 5 5
2012-12-05 6 6 6
2012-12-06 7 7 7
2012-12-07 8 8 8
2012-12-08 9 9 9
If you simply want to create an empty data frame and fill it with some incoming data frames later, try this:
newDF = pd.DataFrame() #creates a new dataframe that's empty
newDF = newDF.append(oldDF, ignore_index = True) # ignoring index is optional
# try printing some data from newDF
print newDF.head() #again optional
In this example I am using this pandas doc to create a new data frame and then using append to write to the newDF with data from oldDF.
If I have to keep appending new data into this newDF from more than one oldDF, I just use a for loop to iterate over pandas.DataFrame.append().
Note: append() has been deprecated since version 1.4.0. Use concat() instead.
Initialize empty frame with column names
import pandas as pd
col_names = ['A', 'B', 'C']
my_df = pd.DataFrame(columns = col_names)
my_df
Add a new record to a frame
my_df.loc[len(my_df)] = [2, 4, 5]
You also might want to pass a dictionary:
my_dic = {'A':2, 'B':4, 'C':5}
my_df.loc[len(my_df)] = my_dic
Append another frame to your existing frame
col_names = ['A', 'B', 'C']
my_df2 = pd.DataFrame(columns = col_names)
my_df = my_df.append(my_df2)
Performance considerations
If you are adding rows inside a loop, consider performance. For roughly the first 1000 records my_df.loc performs better, but it gradually becomes slower as the number of records in the loop grows.
If you plan to do this inside a big loop (say 10M records or so), you are better off using a mixture of the two:
fill a temporary dataframe with loc until its size gets to around 1000, then append it to the original dataframe and empty the temporary one, as sketched below.
This can boost your performance by around 10 times.
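A rough sketch of that hybrid approach (row_source is a hypothetical generator standing in for your data; the chunk size of 1000 follows the advice above):
import pandas as pd

cols = ['A', 'B', 'C']
main_df = pd.DataFrame(columns=cols)
tmp = pd.DataFrame(columns=cols)

for row in row_source():            # row_source is hypothetical, yields list-like rows
    tmp.loc[len(tmp)] = row
    if len(tmp) >= 1000:
        main_df = pd.concat([main_df, tmp], ignore_index=True)
        tmp = pd.DataFrame(columns=cols)  # empty the temporary frame

main_df = pd.concat([main_df, tmp], ignore_index=True)  # flush the remainder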
Simply:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.zeros([rows, columns]))  # rows and columns are integers of your choosing
Then fill it.
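For example, with concrete (arbitrary) sizes:
df = pd.DataFrame(np.zeros([3, 2]))
df.iloc[0] = [1.5, 2.5]  # fill the first row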
Assume a dataframe with 19 rows
index = range(0, 19)
columns = ['A']
test = pd.DataFrame(index=index, columns=columns)
Keeping column A as a constant:
test['A'] = 10
Filling column b with a value that varies per loop iteration:
for x in range(0, 19):
    test.loc[[x], 'b'] = pd.Series([x], index=[x])
You can replace the first x in pd.Series([x], index=[x]) with any value.
This is my way to make a dynamic dataframe from several lists with a loop:
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [22, 12, 34, 22, 65, 24, 12, 11]
z = ['as', 'ss', 'wa', 'ss', 'er', 'fd', 'ga', 'mf']
names = ['Bob', 'Liz', 'chop']
The loop:
import pandas as pd

def dataF(x, y, z, names):
    res = []
    for t in zip(x, y, z):
        res.append(t)
    return pd.DataFrame(res, columns=names)
Result:
dataF(x, y, z, names)
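With the lists above, this should print something like:
   Bob  Liz chop
0    1   22   as
1    2   12   ss
2    3   34   wa
3    4   22   ss
4    5   65   er
5    6   24   fd
6    7   12   ga
7    8   11   mf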
# import pandas library
import pandas as pd
# create a dataframe
my_df = pd.DataFrame({"A": ["shirt"], "B": [1200]})
# show the dataframe
print(my_df)
