Generate a pandas dataframe with for-loop - python

I have generated a dataframe (called 'sectors') that stores information from my brokerage account (sector/industry, sub sector, company name, current value, cost basis, etc).
I want to avoid hard coding a filter for each sector or sub sector to find specific data. I have achieved this with the following code (I know, not very pythonic, but I am new to coding):
for x in set(sectors_df['Sector']):
    x_filt = sectors_df['Sector'] == x
    # value_in_sect takes the sum of all current values in a given sector
    value_in_sect = round(sectors_df.loc[x_filt]['Current Value'].sum(), 2)
    # pct_in_sect is the % of the sector in the overall portfolio (total equals the total value of all sectors)
    pct_in_sect = round((value_in_sect/total)*100, 2)
    print(x, value_in_sect, pct_in_sect)

for sub in set(sectors_df['Sub Sector']):
    sub_filt = sectors_df['Sub Sector'] == sub
    value_of_subs = round(sectors_df.loc[sub_filt]['Current Value'].sum(), 2)
    pct_of_subs = round((value_of_subs/total)*100, 2)
    print(sub, value_of_subs, pct_of_subs)
My print statements produce most of the information I want, although I am still working out how to compute the % of a sub sector within its own sector. Anyway, I would now like to put this information (value_in_sect, pct_in_sect, etc.) into DataFrames of their own. What would be the best, smartest, or most pythonic way to go about this? I am thinking a dictionary, and then creating a DataFrame from the dictionary, but I'm not sure.

The split-apply-combine process in pandas, specifically aggregation, is the best way to go about this. First I'll explain how this process would work manually, and then I'll show how pandas can do it in one line.
Manual split-apply-combine
Split
First, divide the DataFrame into groups of the same Sector. This involves getting a list of Sectors and figuring out which rows belong to each one (just like the first two lines of your code). The code below runs through the DataFrame and builds a dictionary whose keys are Sectors and whose values are the lists of indices of the rows in sectors_df that belong to them.
sectors_index = {}
for ix, row in sectors_df.iterrows():
    if row['Sector'] not in sectors_index:
        sectors_index[row['Sector']] = [ix]
    else:
        sectors_index[row['Sector']].append(ix)
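(As an aside, collections.defaultdict removes the need for the membership check; this is just an equivalent sketch of the same split, not something the rest of the answer depends on.)
from collections import defaultdict

# Equivalent to the loop above: map each Sector to the row indices that belong to it
sectors_index = defaultdict(list)
for ix, row in sectors_df.iterrows():
    sectors_index[row['Sector']].append(ix)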
Apply
Run the same function, in this case summing of Current Value and calculating its percentage share, on each group. That is, for each sector, grab the corresponding rows from the DataFrame and run the calculations in the next lines of your code. I'll store the results as a dictionary of dictionaries: {'Sector1': {'value_in_sect': 1234.56, 'pct_in_sect': 11.11}, 'Sector2': ... } for reasons that will become obvious later:
sector_total_value = {}
total_value = sectors_df['Current Value'].sum()
for sector, row_indices in sectors_index.items():
    sector_df = sectors_df.loc[row_indices]
    current_value = sector_df['Current Value'].sum()
    sector_total_value[sector] = {'value_in_sect': round(current_value, 2),
                                  'pct_in_sect': round(current_value/total_value * 100, 2)}
(see footnote 1 for a note on rounding)
Combine
Finally, collect the function results into a new DataFrame, where the index is the Sector. pandas can easily convert this nested dictionary structure into a DataFrame:
sector_total_value_df = pd.DataFrame.from_dict(sector_total_value, orient='index')
split-apply-combine using groupby
pandas makes this process very simple using the groupby method.
Split
The groupby method splits a DataFrame into groups by a column or multiple columns (or even another Series):
grouped_by_sector = sectors_df.groupby('Sector')
grouped_by_sector is similar to the index we built earlier, but the groups can be manipulated much more easily, as we can see in the following steps.
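(If you want to peek inside, the groupby object exposes essentially the same mapping we built by hand; 'Utilities' below is just a placeholder sector name.)
grouped_by_sector.groups                   # dict-like mapping of Sector -> row labels
grouped_by_sector.get_group('Utilities')   # the rows of one sector, as a DataFrame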
Apply
To calculate the total value in each group, select the column (or columns) to sum up and use the agg or aggregate method with the function you want to apply:
sector_total_value_df = grouped_by_sector['Current Value'].agg(value_in_sect=sum)
Combine
It's already done! The apply step already creates a DataFrame where the index is the Sector (the groupby column) and the value in the value_in_sect column is the result of the sum operation.
I've left out the pct_in_sect part because a) it can be more easily done after the fact:
sector_total_value_df['pct_in_sect'] = round(sector_total_value_df['value_in_sect'] / total_value * 100, 2)
sector_total_value_df['value_in_sect'] = round(sector_total_value_df['value_in_sect'], 2)
and b) it's outside the scope of this answer.
Most of this can be done easily in one line (see footnote 2 for including the percentage, and rounding):
sector_total_value_df = sectors_df.groupby('Sector')['Current Value'].agg(value_in_sect=sum)
For subsectors, there's one additional consideration: grouping should be done by Sector and Sub Sector rather than just Sub Sector, so that, for example, rows from Utilities/Gas and Energy/Gas aren't combined.
subsector_total_value_df = sectors_df.groupby(['Sector', 'Sub Sector'])['Current Value'].agg(value_in_sect=sum)
This produces a DataFrame with a MultiIndex with levels 'Sector' and 'Sub Sector', and a column 'value_in_sect'. For a final piece of magic, the percentage in Sector can be calculated quite easily:
subsector_total_value_df['pct_within_sect'] = round(subsector_total_value_df['value_in_sect'] / sector_total_value_df['value_in_sect'] * 100, 2)
which works because the 'Sector' index level is matched during division.
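(If you would rather work with plain columns than a MultiIndex, reset_index turns the index levels back into ordinary columns.)
subsector_total_value_df.reset_index()   # 'Sector' and 'Sub Sector' become regular columns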
Footnote 1. This deviates from your code slightly, because I've chosen to calculate the percentage using the unrounded total value, to minimize the error in the percentage. Ideally though, rounding is only done at display time.
Footnote 2. This one-liner generates the desired result, including percentage and rounding:
sector_total_value_df = sectors_df.groupby('Sector')['Current Value'].agg(
    value_in_sect=lambda c: round(sum(c), 2),
    pct_in_sect=lambda c: round(sum(c)/sectors_df['Current Value'].sum() * 100, 2),
)

Related

Updating a dataframe by referencing row values and column values without iterating

I am rusty with Pandas and Dataframes.
I have one dataframe (named data) with two columns (userid, date).
I have a second dataframe, incidence_matrix, where the rows are userids (the same userids in data) and the columns are dates (the same dates in data). This is how I construct incidence_matrix:
columns = pd.date_range(start='2020-01-01', end='2020-11-30', freq='M', closed='right')
index = data['USERID']
incidence_matrix = pd.DataFrame(index=index, columns=columns)
incidence_matrix = incidence_matrix.fillna(0)
I am trying to iterate over each (userid, date) pair in data, and using the values of each userid and date, update that corresponding cell in incidence_matrix to be 1.
In production, the data could be millions of rows, so I'd prefer not to iterate over it and instead use a vectorized approach.
How can (or should) the above be done?
I am running into errors when attempting to reference cells by name. For example, in my attempt below, the first print statement works, but the second doesn't recognize the date value as a label:
for index, row in data.iterrows():
    print(row['USERID'], row['POSTDATE'])
    print(incidence_matrix.loc[row['USERID']][row['POSTDATE']])
Thank you in advance.
Warning: the representation you have chosen is going to be pretty sparse in real life (user visits typically follow a Zipf law), leading to quite inefficient memory usage. You'd be better off representing your incidence as a tall and thin DataFrame, for example the output of:
data.groupby(['userid', data['date'].dt.to_period('M')]).count()
With this caveat out of the way:
import numpy as np
import pandas as pd

def add_new_data(data, incidence=None):
    delta_incidence = (
        data
        .groupby(['userid', data['date'].dt.to_period('M')])
        .count()
        .squeeze()
        .unstack('date', fill_value=0)
    )
    if incidence is None:
        return delta_incidence
    return incidence.combine(delta_incidence, np.add, fill_value=0).astype(int)
should do what you want. It re-indexes the previous value of incidence (if any) such that the outcome is a new DataFrame where the axes are the union of incidence and delta_incidence.
Here is a toy example, for testing:
def gen_data(n):
    return pd.DataFrame(
        dict(
            userid=np.random.choice('bob alice john james sophia'.split(), size=n),
            date=[
                (pd.Timestamp('2020-01-01') + v * pd.Timedelta('365 days')).round('s')
                for v in np.random.uniform(size=n)
            ],
        )
    )
# first time (no previous incidence)
data = gen_data(20)
incidence = add_new_data(data)
# new data arrives
data = gen_data(30)
incidence = add_new_data(data, incidence)

Python - How to optimize code to run faster? (lots of for loops in DataFrame)

I have code that works with an Excel file (SAP download) quite extensively (data transformation and calculation steps).
I need to loop through all the lines (a couple thousand rows) a few times. I had previously written code that added the DataFrame columns separately, so I could do everything in one for loop, which was of course quite quick; however, I had to change the data source, which meant a change in the raw data structure.
In the raw data the first 3 rows are empty, then comes a title row with the column names, then 2 more empty rows, and the 1st column is also empty. I decided to wipe these and assign the column names as headers (steps below); however, since then, separately adding the column names and later calculating everything in one for statement does not fill data into any of these specific columns.
How could I optimize this code?
I have deleted some calculation steps since they are quite long and make the code even less readable.
#This function adds new column to the dataframe
def NewColdfConverter(*args):
    for i in args:
        dfConverter[i] = ''  # previously used dfConverter[i] = NaN

#This function creates dataframe from excel file
def DataFrameCreator(path, sheetname):
    excelFile = pd.ExcelFile(path)
    global readExcel
    readExcel = pd.read_excel(excelFile, sheet_name=sheetname)
#calling my function to create dataframe
DataFrameCreator(filePath,sheetName)
dfConverter = pd.DataFrame(readExcel)
#dropping NA values from Orders column (right now called Unnamed)
dfConverter.dropna(subset=['Unnamed: 1'], inplace=True)
#dropping rows and deleting other unnecessary columns
dfConverter.drop(dfConverter.head(1).index, inplace=True)
dfConverter.drop(dfConverter.columns[[0,11,12,13,17,22,23,48]], axis = 1,inplace = True)
#renaming columns from Unnamed 1: etc to proper names
dfConverter = dfConverter.rename(columns={'Unnamed: 1': 'propername1', 'Unnamed: 2': 'propername2'})  # etc.
#calling new column function -> this Day column appears in the 1st for loop
NewColdfConverter("Day")
#example for loop that worked prior, but not working since new dataset and new header/column steps added:
for i in range(len(dfConverter)):
    #Day column-> floor Entry Date -1, if time is less than 5:00:00
    if dfConverter['Time'][i] <= time(hour=5, minute=0, second=0):
        dfConverter['Day'][i] = pd.to_datetime(dfConverter['Entry Date'][i]) - timedelta(days=1)
    else:
        dfConverter['Day'][i] = pd.to_datetime(dfConverter['Entry Date'][i])
The problem is that many columns build on one another, so I cannot compute them in one for loop. For instance, in the example below I need to calculate reqsWoSetUpValue so I can calculate requirementsValue, so I can calculate otherReqsValue, but I'm not able to do this within one for loop by assigning the values to dataframecolumn[i] row by row, because the values will just be missing, as if nothing happened.
(dfSorted is the same as dfConverter, but a sorted version of it)
#example code of getting reqsWoSetUpValue
for i in range(len(dfSorted)):
    reqsWoSetUpValue[i] = #calculationsteps...
#inserting column with value
dfSorted.insert(49, 'Reqs wo SetUp', reqsWoSetUpValue)
#getting requirements value with previously calculated Reqs wo SetUp column
for i in range(len(dfSorted)):
    requirementsValue[i] = #calc
dfSorted.insert(50, 'Requirements', requirementsValue)
#Calculating Other Reqs value with previously calculated Requirements column.
for i in range(len(dfSorted)):
    otherReqsValue[i] = #calc
dfSorted.insert(51, 'Other Reqs', otherReqsValue)
Does anyone have a clue why I can no longer do this in one for loop, by first adding all the columns with the function, like:
NewColdfConverter('Reqs wo setup', 'Requirements', 'Other reqs')
#then in 1 for loop:
for i in range(len(dfSorted)):
    dfSorted['Reqs wo setup'] = #calculationsteps
    dfSorted['Requirements'] = #calculationsteps
    dfSorted['Other reqs'] = #calculationsteps
Thank you
General comment: How to identify bottlenecks
To get started, you should try to identify which parts of the code are slow.
Method 1: time code sections using the time package
Wrap blocks of code in statements like this:
import time
t = time.time()
# do something
print("time elapsed: {:.1f} seconds".format(time.time() - t))
Method 2: use a profiler
E.g. Spyder has a built-in profiler. This allows you to check which operations are most time consuming.
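If you're not in Spyder, the standard library's cProfile gives a similar breakdown; a minimal sketch, assuming your code is wrapped in a main() function (the name is just a placeholder):
import cProfile

cProfile.run('main()', sort='cumtime')  # prints functions sorted by cumulative time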
Vectorize your operations
Your code will be orders of magnitude faster if you vectorize your operations. It looks like your loops are all avoidable.
For example, rather than calling pd.to_datetime on every row separately, you should call it on the entire column at once:
# slow (don't do this):
for i in range(len(dfConverter)):
    dfConverter['Day'][i] = pd.to_datetime(dfConverter['Entry Date'][i])
# fast (do this instead):
dfConverter['Day'] = pd.to_datetime(dfConverter['Entry Date'])
If you want to perform an operation on a subset of rows, you can also do this in a vectorized operation by using loc:
mask = dfConverter['Time'] <= time(hour=5,minute=0,second=0)
dfConverter.loc[mask,'Day'] = pd.to_datetime(dfConverter.loc[mask,'Entry Date']) - timedelta(days=1)
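Putting both ideas together, the whole Day column from your first loop can be built with no loop at all; a sketch, assuming 'Time' and 'Entry Date' are the columns from your code:
from datetime import time, timedelta
import pandas as pd

entry_dates = pd.to_datetime(dfConverter['Entry Date'])
mask = dfConverter['Time'] <= time(hour=5, minute=0, second=0)
# keep the entry date, except subtract one day where the time is at or before 05:00:00
dfConverter['Day'] = entry_dates.where(~mask, entry_dates - timedelta(days=1))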
Not sure this would improve performance, but you could calculate the dependent columns at the same time row by row with DataFrame.iterrows()
for index, data in dfSorted.iterrows():
    dfSorted['Reqs wo setup'][index] = #calculationsteps
    dfSorted['Requirements'][index] = #calculationsteps
    dfSorted['Other reqs'][index] = #calculationsteps
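If you do stick with iterrows, scalar assignment via .at avoids the chained indexing above (which can trigger SettingWithCopyWarning and silently fail to write back); a sketch with the calculations elided:
for index, data in dfSorted.iterrows():
    dfSorted.at[index, 'Reqs wo setup'] = ...   # calculation steps
    dfSorted.at[index, 'Requirements'] = ...    # calculation steps
    dfSorted.at[index, 'Other reqs'] = ...      # calculation steps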

How to multi-thread large number of pandas dataframe selection calls on large dataset

df is a dataframe containing 12 million+ unsorted rows.
Each row has a GROUP ID.
The end goal is to randomly select 1 row per unique GROUP ID, populating a new column named SELECTED where 1 means selected and 0 means not selected.
There may be 5000+ unique GROUP IDs.
I am seeking a better and faster solution than the following, potentially a multi-threaded one:
for sec in df['GROUP'].unique():
    sz = df.loc[df.GROUP == sec, ['SELECTED']].size
    sel = [0]*sz
    sel[random.randint(0, sz-1)] = 1
    df.loc[df.GROUP == sec, ['SELECTED']] = sel
You could try a vectorized version, which will probably speed things up if you have many classes.
import pandas as pd
import numpy as np

# get fake data
df = pd.DataFrame(np.random.rand(10))
df['GROUP'] = df[0].astype(str).str[2]

# mark one element of each group as selected
df['selected'] = df.index.isin(     # Is the current index in the selected list?
    df.groupby('GROUP')             # Get a GroupBy object.
    .apply(pd.Series.sample)        # Select one row from each group.
    .index.levels[1]                # The result index is (group, old_id) pairs; take the old_id level.
).astype(int)                       # Convert booleans to ints.
Note that this may fail if duplicate indices are present.
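On newer pandas (1.1 or later), GroupBy.sample offers a more direct route applied to the original df from the question; a sketch that assumes the index values of df are unique:
import numpy as np

selected_idx = df.groupby('GROUP').sample(n=1).index        # one random row label per GROUP
df['SELECTED'] = np.where(df.index.isin(selected_idx), 1, 0)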
I do not know pandas DataFrames, but if you simply set selected only where it needs to be one, and later treat the absence of the attribute as not selected, you could avoid updating all elements.
You may also do something like this :
selected = []
for sec in df['GROUP'].unique():
    selected.append(random.choice(sec))
or with list comprehensions
selected = [random.choice(sec) for sec in df['GROUP'].unique()]
Maybe this can speed it up, because you will not need to allocate new memory and update all elements of your dataframe.
If you really want multithreading have a look at concurrent.futures https://docs.python.org/3/library/concurrent.futures.html
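For completeness, here is a minimal, illustrative sketch of that idea applied to the selection problem; note that this kind of pandas work is largely CPU-bound, so the GIL often limits any real speedup and it is worth measuring first:
from concurrent.futures import ThreadPoolExecutor
import random

def pick_one(sec):
    # return the label of one randomly chosen row from this group
    idx = df.index[df['GROUP'] == sec]
    return idx[random.randint(0, len(idx) - 1)]

with ThreadPoolExecutor() as pool:
    chosen = list(pool.map(pick_one, df['GROUP'].unique()))

df['SELECTED'] = 0
df.loc[chosen, 'SELECTED'] = 1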

Pandas groupby: How do I use shifted values

I have a dataset that represents reoccurring events at different locations.
df = [Datetime location time event]
Each location can have 8-10 events that repeat. What I'm trying to do is build some information about how long it has been between consecutive events (they may not be the same event).
I am able to do this by splitting the df into sub-dfs and processing each location individually. But it would seem that groupby should be smarter than this. This also assumes that I know all the locations, which may vary from file to file.
df1 = df[(df['location'] == "Loc A")]
df1['delta'] = df1['time'] - df1['time'].shift(1)
df2 = df[(df['location'] == "Loc B")]
df2['delta'] = df2['time'] - df2['time'].shift(1)
...
...
What I would like to do is groupBy based on location...
dfg = df.groupby(['location'])
Then for each grouped location
Add a delta column
Shift and subtract to get the delta time between events
Questions:
Does groupby maintain the order of events?
Would a for loop that runs over the DF be better? That doesn't seem very pythonic.
Also, once you have a grouped df, is there a way to transform it back to a general dataframe? I don't think I need to do this, but thought it may be helpful in the future.
Thank you for any support you can offer.
http://pandas.pydata.org/pandas-docs/dev/groupby.html looks like it provides what you need.
groups = df.groupby('location').groups
or
for name, group in df.groupby('location'):
    # do stuff here
This will split the DataFrame into groups of rows with matching values in the location column.
Then you can sort the groups based on the time value and iterate through to create the deltas.
It appears that when you group by and select a column to act on, the data for each group is returned as a Series, to which a function can then be applied.
deltaTime = lambda x: (x - x.shift(1))
df['delta'] = df.groupby('location')['time'].apply(deltaTime)
This groups by location and returns the time column for each group.
Each sub-series is then passed to the function deltaTime.
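As an aside, GroupBy also provides diff, which does the shift-and-subtract in one step; sorting by time within each location first keeps the events in order:
df = df.sort_values(['location', 'time'])
df['delta'] = df.groupby('location')['time'].diff()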

Generate ranks based on data table

The Background:
I have been creating a script that, based on the input csv's that exist within an input directory, creates a 3-dimensional array to store the aggregated information. Each table within the array represents one of the pollution sources (e.g. one of the input csvs was Incinerators.csv; the created table holds the aggregated information about various pollutants released by incinerators on a watershed scale), each row represents the aggregated information by watershed (row 0 = headers), and each column is the amount of, and toxic equivalent of, each substance (col 0 = watershed ID).
For each substance in each watershed, the total released by all sources is calculated and stored in another array with the exact same layout addressable using totals[wsid][substance] by index or name based dictionary lookups.
The Question:
With this table of totals, I need to calculate each watershed's relative rank for the amount of each substance released compared to what is released in other watersheds.
I could use a couple of nested loops to go through each substance column and convert this into a list, sort the list, and then relate this back to the watershed ID... but this would not be a very clean solution. Zero values also need to be omitted from ranking and duplicate values should be given the same rank while decreasing total number being ranked.
Is there a smarter way to do this? Or a module where this is already implemented? (didn't see anything evident in pyTables)
One of the requirements is that the solution also remain simple enough that those with even less Python experience than I have will at least be able to understand the process. I can use up to Python 2.7.1.
The End Goal:
Generate HTML pages to be iframed from a Google Earth description bubble with the results. I have put a couple entirely unfinished sample outputs here.
For this I have created 2 functions
from operator import itemgetter

def sortTable(table, col):
    return sorted(table, key=itemgetter(col))
And
def buildRankTable(totalTable, fieldList, wsidList, subList, subDict, wsidDict):
    ## build rankTable to mimic other templates
    rankTable = newTemplateTable(wsidList, fieldList)
    ## add another row to track total number ranked for each substance
    numRanked = [0 for i in range(len(fieldList))]
    numRanked[0] = "TotalNoRanked"
    rankTable.append(numRanked)
    for substance in subList:
        tempTable = sortTable(totalTable, subDict[substance])
        exportCsv(tempTable, outdir + os.sep + "rankT_" + substance + ".csv")
        rankList = []
        ## extract the low-to-high list of wsid's, skipping non-floats (no measurement)
        for row in tempTable:
            if type(row[subDict[substance]]) == float:
                rankList.append(row[0])  ## build wsid list in ranked order
        numRanked[subDict[substance]] = len(rankList)
        ## by default, this ranks low to high; we want to rank high to low starting at 1
        rankList.reverse()
        ## with the list of ranked wsids, get the rank and save to rankTable
        for rank, wsid in enumerate(rankList):
            rankTable[wsidDict[wsid]][subDict[substance]] = rank + 1
    ## any 0 (default) values become 'NR' - No Rank
    for rowI in range(len(rankTable)):
        for colI in range(len(rankTable[rowI])):
            if rankTable[rowI][colI] == 0:
                rankTable[rowI][colI] = "NR"
    return rankTable
fieldList = list of fields in first row
wsidList = list of wsid's (remaining 595 rows)
subList = list of substances to be ranked
subDict = dictionary mapping each substance to its column index in totalTable
wsidDict = dictionary mapping each wsid to its row index in totalTable
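For comparison, if the totals ever live in a pandas DataFrame (say, a hypothetical totals_df with watershed IDs as the index and one column per substance), Series.rank can express the same idea: ties share a rank with method='min', and masking zeros first leaves them unranked. A sketch under those assumptions:
import pandas as pd

# highest release gets rank 1; zeros become NaN and are skipped by rank()
ranks = totals_df.mask(totals_df == 0).rank(method='min', ascending=False)
ranks = ranks.fillna('NR')  # unranked watersheds shown as 'NR'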
