Working with Pandas DataFrame / Sorting - python

I'm working with a big dataset in Excel, in which I'm trying to select the top 25 rows by a number, per index value.
The dataset looks like this:
The 'Final PAC ID' is the company number, which changes (this is not visible in the data shown). The 'PAC contribution' is the number I want to sort by.
So, for example, there may be 50 contributions made by company C00003590 to different candidates, each with an amount in 'PAC contribution'; I would like to keep the top 25 contributions per company.
I've tried working with dictionaries, creating a dictionary for each company and adding in the candidate numbers as a string key, and the contribution as a value.
The code I have so far is the following (this might be the completely wrong way to go about it though):
import pandas as pd

df1 = pd.read_excel('Test2.xlsx')

dict_company = {}
k1 = str(df1['Final PAC ID'])
k2 = str(df1['Candidate ID'])

for each in range(0, 100):
    dict_company[k1[each]] = {}
    dict_company[k1[each]] = k2[each]
    if each % 50 == 0:
        print(each)
        print(dict_company)

for each in range(0, 100):
    dict_company[k1][k2][each] = round(float(k1[each]))
    if each % 50:
        print(each)
        print(dict_company)

I think you need nlargest:
df1 = df.groupby('Final PAC ID')['PAC contribution'].nlargest(50)
If you need all columns:
cols = df.columns[~df.columns.isin(['PAC contribution','Final PAC ID'])].tolist()
df1 = (df.set_index(cols)
         .groupby('Final PAC ID')['PAC contribution']
         .nlargest(50)
         .reset_index())
Another solution (can be slower):
df1 = df.sort_values('PAC contribution', ascending=False).groupby('Final PAC ID').head(50)
Finally, save to Excel with to_excel:
df1.to_excel('filename.xlsx')
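For reference, here is a minimal, self-contained sketch of the approach on made-up data; the column names follow the question, while the values and the cutoff of 2 (use 25 for the real data) are invented for illustration:

import pandas as pd

# Toy data: two companies with a few contributions each (invented values)
df = pd.DataFrame({
    'Final PAC ID':     ['C00003590', 'C00003590', 'C00003590', 'C00000042', 'C00000042'],
    'Candidate ID':     ['H001', 'H002', 'H003', 'S001', 'S002'],
    'PAC contribution': [5000, 12000, 7500, 3000, 9000],
})

# Keep the top 2 contributions per company
top = (df.sort_values('PAC contribution', ascending=False)
         .groupby('Final PAC ID')
         .head(2))
print(top)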

df.groupby('Final PAC ID').head(50).reset_index(drop=True)

You can use groupby in conjunction with a dictionary comprehension here. The result is a dictionary containing your company names as keys and the sub-dataframes with the top 25 payments as values:
def aggregate(sub_df):
    return sub_df.sort_values('PAC contribution', ascending=False).head(25)

grouped = df.groupby('Final PAC ID')
results = {company: aggregate(sub_df)
           for company, sub_df in grouped}
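If the end goal is Excel output, a hedged follow-up sketch (the filename and per-sheet layout are invented for illustration) could write each company's top-25 frame to its own sheet:

import pandas as pd

# Assumes the `results` dictionary built above
with pd.ExcelWriter('top25_by_company.xlsx') as writer:
    for company, sub_df in results.items():
        # Excel caps sheet names at 31 characters
        sub_df.to_excel(writer, sheet_name=str(company)[:31], index=False)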

Related

How to assign a value to a cell in dataframe A based on a value in dataframe B, conditional on values of two other columns in B?

I'm really amateur-level with both Python and pandas, but I'm trying to solve an issue for work that's stumping me.
I have two dataframes, let's call them dfA and dfB:
dfA:
project_id Category Initiative
10
20
30
40
dfB:
project_id field_id value
10 100 lorem
10 200 lorem1
10 300 lorem2
20 200 ipsum
20 300 ipsum1
20 500 ipsum2
Let's say I know "Category" from dfA correlates to field_id "100" from dfB, and "Initiative" correlates to field_id "200".
I need to look through dfB and for a given project_id/field_id combination, take the corresponding value in the "value" column and place it in the correct cell in dfA.
The result would look like this:
dfA:
project_id  Category  Initiative
10          lorem     lorem1
20                    ipsum
30
40
Bonus difficulty: not every project in dfA exists in dfB, and not every field_id is used in every project_id.
I hope I've explained this well enough; I feel like there must be a relatively simple way to handle this that I'm missing.
You could do something like this, although it's not very elegant; there must be a better way. I had to use try/except for the cases where the project_id is not available in dfB. I put NaN values for the missing ones, but you can easily use empty strings instead.
import numpy as np

def get_value(row):
    try:
        res = dfB[(dfB['field_id'] == 100) & (dfB['project_id'] == row['project_id'])]['value'].iloc[0]
    except IndexError:
        res = np.nan
    row['Category'] = res
    try:
        res = dfB[(dfB['field_id'] == 200) & (dfB['project_id'] == row['project_id'])]['value'].iloc[0]
    except IndexError:
        res = np.nan
    row['Initiative'] = res
    return row

dfA = dfA.apply(get_value, axis=1)
EDIT: as mentioned in a comment, this is not very flexible since some values are hardcoded, but you can easily change that with something like the below. This way, if the field_ids change or you need to add/remove a column, you just update the dictionary.
columns_fields = {"Category": 100, "Initiative": 200}

def get_value(row):
    for key, value in columns_fields.items():
        try:
            res = dfB[(dfB['field_id'] == value) & (dfB['project_id'] == row['project_id'])]['value'].iloc[0]
        except IndexError:
            res = np.nan
        row[key] = res
    return row

dfA = dfA.apply(get_value, axis=1)
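As a hedged, vectorized alternative (not part of the original answer): pivoting dfB so that each field_id becomes its own column lets you fill dfA with plain column mapping instead of a row-wise apply. The frames below just reproduce the example data from the question:

import pandas as pd

dfA = pd.DataFrame({'project_id': [10, 20, 30, 40]})
dfB = pd.DataFrame({'project_id': [10, 10, 10, 20, 20, 20],
                    'field_id':   [100, 200, 300, 200, 300, 500],
                    'value':      ['lorem', 'lorem1', 'lorem2',
                                   'ipsum', 'ipsum1', 'ipsum2']})

# One column per field_id, indexed by project_id
wide = dfB.pivot(index='project_id', columns='field_id', values='value')

columns_fields = {'Category': 100, 'Initiative': 200}
for col, fid in columns_fields.items():
    # Missing projects or field_ids simply come through as NaN
    dfA[col] = dfA['project_id'].map(wide[fid]) if fid in wide.columns else pd.NA
print(dfA)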

How to divide a pandas data frame into sublists of n at a time?

I have a data frame made of tweets and their authors; there are 45 authors in total. I want to divide the data frame into groups of 2 authors at a time so that I can export them later into CSV files.
I tried using the following (given that the authors are in a column named 'B' and the tweets are in a column named 'A'):
I took the following from this question
df.set_index(keys=['B'],drop=False,inplace=True)
authors = df['B'].unique().tolist()
In order to separate the lists:
dgroups = []
for i in range(0, len(authors)-1, 2):
    dgroups.append(df.loc[df.B==authors[i]])
    dgroups.extend(df.loc[df.B==authors[i+1]])
but instead it gives me sub-lists like this:
dgroups = [['A'],['B'],
[tweet,author],
['A'],['B'],
[tweet,author2]]
Prior to this I was able to divide them correctly into 45 sub-lists, derived from the previously linked question, as follows:
for i in authors:
    groups.append(df.loc[df.B==i])
So how would I do that for 2 authors, or 3 authors, and so on?
EDIT: based on Jonathan Leon's answer below, I thought I would do the following, which works but isn't a dynamic solution and I guess is inefficient, especially if n > 3:
dgroups = []
for i in range(2, len(authors)+1, 2):
    tempset1 = []
    tempset2 = []
    tempset1 = df.loc[df.B==authors[i-2]]
    if (i-1 != len(authors)):
        tempset2 = df.loc[df.B==authors[i-1]]
        dgroups.append(tempset1.append(tempset2))
    else:
        dgroups.append(tempset1)
This imports the foreign-language text incorrectly, but the logic works to create a new CSV for every two authors.
import pandas as pd

df = pd.read_csv('TrainDataAuthorAttribution.csv')
# df.groupby('B').count()
authors = df.B.unique().tolist()
auths_in_subset = 2
for i in range(auths_in_subset, len(authors)+auths_in_subset, auths_in_subset):
    # print(authors[i-auths_in_subset:i])
    dft = df[df.B.isin(authors[i-auths_in_subset:i])]
    # print(dft)
    dft.to_csv('df' + str(i) + '.csv')
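On the "imports the foreign language incorrectly" point: that is usually an encoding problem rather than a logic problem. A hedged tweak, assuming the source file is UTF-8, is to pass the encoding explicitly on both read and write ('utf-8-sig' on output helps Excel display non-ASCII text correctly):

import pandas as pd

df = pd.read_csv('TrainDataAuthorAttribution.csv', encoding='utf-8')
# ... build each two-author subset `dft` as above, then:
# dft.to_csv('df' + str(i) + '.csv', index=False, encoding='utf-8-sig')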

How to sum up certain rows with Pandas and add the result to a defaultdict (large dataset)

I have a dataframe which consists of 5 million name entries. The structure is as follows:
dataframe
What one can read from this dataframe is, for instance, that the name Mary was given to 14 babies in the state of Alaska (AK) in the year 1910. But the name Mary was also given to newborns in other states and in the following years as well.
What I would like to identify is: what is the most frequently given name in this dataset overall, and how often was that name assigned?
I have tried this:
import pandas as pd
from collections import defaultdict

df = pd.read_csv("names.csv")
mask = df[["Name", "Count"]]

counter = 0
dd = defaultdict(int)
for pos, data in mask.iterrows():
    name = data["Name"]
    dd[name] = dd[name] + data["Count"]
    counter += 1
    if counter == 100000:
        break

print("Done!")

freq_name = 0
name = ""
for key, value in dd.items():
    if freq_name < value:
        freq_name = value
        name = key

print(name)
This code works pretty well, but only for up to 100,000 rows. When I use it on the full dataset, it takes ages.
Any idea or hint what I could improve would be greatly appreciated.
As suggested in the comments, you can use something like this:
df = pd.read_csv("names.csv")
name, total_count = max(df.groupby('Name').Count.sum().items(), key=lambda x: x[1])
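An equivalent hedged sketch using idxmax, with toy data invented for illustration (the real file has the Name/Count columns described above):

import pandas as pd

df = pd.DataFrame({'Name':  ['Mary', 'Mary', 'John', 'Anna'],
                   'State': ['AK', 'NY', 'AK', 'CA'],
                   'Count': [14, 100, 40, 55]})

totals = df.groupby('Name')['Count'].sum()         # total per name across states/years
name, total_count = totals.idxmax(), totals.max()
print(name, total_count)                           # Mary 114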

Mapping a column from one dataframe to another in pandas based on condition

I have two dataframes, df_inv and df_sales.
I need to add a column to df_inv with the salesperson's name, based on the doctor they are tagged to in df_sales. This would be a simple merge, I guess, if the salesperson-to-doctor relationship in df_sales were unique. But doctors change ownership between salespeople, and a row with an updated date is added with each transfer.
So if the invoice date is earlier than the updated date, the previous tagging should be used; if there is no previous tagging, it should show NaN. In other words, for each invoice_date in df_inv, the latest updated_date in df_sales that is not later than it should be used for tagging.
The resulting table should look like this:
[Final Table]
I am relatively new to programming, but I can usually find my way through problems. However, I cannot figure this one out. Any help is appreciated.
import pandas as pd
import numpy as np

df_inv = pd.read_excel(r'C:\Users\joy\Desktop\sales indexing\consolidated report.xlsx')
df_sales1 = pd.read_excel(r'C:\Users\joy\Desktop\sales indexing\Sales Person tagging.xlsx')
df_sales2 = df_sales1.sort_values('Updated Date', ascending=False)
df_sales = df_sales2.reset_index(drop=True)

sales_tag = []
sales_dup = []
counter = 0

for inv_dt, doc in zip(df_inv['Invoice_date'], df_inv['Doctor_Name']):
    for sal, ref, update in zip(df_sales['Sales Person'], df_sales['RefDoctor'],
                                df_sales['Updated Date']):
        if ref == doc:
            if update <= inv_dt and sal not in sales_dup:
                sales_tag.append(sal)
                sales_dup.append(ref)
                break
            else:
                pass
        else:
            pass
    sales_dup = []
    counter = counter + 1
    if len(sales_tag) < counter:
        sales_tag.append('none')
    else:
        pass

df_inv['sales_person'] = sales_tag
This appears to work.
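For larger frames, a hedged alternative (not part of the original post) is pandas.merge_asof, which does exactly this "most recent update not after the invoice date" lookup. The column names follow the code above; the toy data is invented for illustration:

import pandas as pd

df_inv = pd.DataFrame({
    'Invoice_date': pd.to_datetime(['2021-01-10', '2021-03-05', '2021-02-01']),
    'Doctor_Name':  ['Dr A', 'Dr A', 'Dr B'],
})
df_sales = pd.DataFrame({
    'RefDoctor':    ['Dr A', 'Dr A', 'Dr B'],
    'Sales Person': ['Tom', 'Jane', 'Ann'],
    'Updated Date': pd.to_datetime(['2020-12-01', '2021-02-15', '2021-02-20']),
})

# merge_asof needs both frames sorted by their "on" keys
result = pd.merge_asof(
    df_inv.sort_values('Invoice_date'),
    df_sales.sort_values('Updated Date'),
    left_on='Invoice_date', right_on='Updated Date',
    left_by='Doctor_Name', right_by='RefDoctor',
    direction='backward',   # most recent update on or before the invoice date
)
print(result[['Invoice_date', 'Doctor_Name', 'Sales Person']])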

How to combine two sets of data with differences in merge-index strings?

I want to merge two CSV files with soccer data. They hold different data on the same and on different games (partial overlap). Normally I would do a merge with df.merge, but the problem is that the nomenclature differs for some teams between the two datasets. E.g. "Atletic Bilbao" is called "Club Atletic" in the second set.
Therefore I would like to normalize the team naming across the two datasets so that I can do a simple df.merge operation on dates and team names. At the moment this would result in extra rows when a team has different names in the two sets.
So my main question is: how can I normalize the team names in the two sets easily, without having to analyse all the differences "by hand" and hardcode "replace" operations on one of the sets?
Dataset1 is downloadable here: https://data.fivethirtyeight.com/#soccer-spi
Dataset2 is not available freely, but it looks like this:
hometeam awayteam date homeproba drawproba awayproba homexg awayxg
Manchester United Leicester 2018-08-10 22:00:00 0.2812 0.3275 0.3913 1.5137 1.73813
--Edit after first comments--
So the main question is: how could I automatically analyse the naming differences between the two datasets? Helpful facts:
As the sets hold whole seasons, the overlap per team name is at least 30+ games.
Most of the teams have same names, name differences are the smaller part of the team names.
Most name differences have at least a common substring.
Both datasets have date-information of the games.
We know, a team plays only one game a day.
So if Dataset1 says:
1.1.2018 Real - Atletic Club
And Dataset2 says:
1.1.2018 Real - Atletic Bilbao
We should be able to analyse that: {'Atletic Club':'Atletic Bilbao'}
So this is how I finally solved it:
import pandas as pd
df_teamnames = pd.merge(dataset1,dataset2,on=['hometeam','date'])
df_teamnames = df_teamnames[['awayteam_x','awayteam_y']]
df_teamnames = df_teamnames.drop_duplicates()
This gives you a dataframe holding each team's name as it appears in both datasets, like this:
1 Marseille Marseille
2 Atletic Club Atletic Bilbao
...
Assuming your dates are compatible (and correct), this should probably work to generate a translation dictionary. This type of thing is always super fragile I think though, and you shouldn't really rely on it.
import pandas as pd

names_1 = dataset1['hometeam'].unique().tolist()
names_2 = dataset2['hometeam'].unique().tolist()

mapping_dict = dict()
for common_name in set(names_1).intersection(set(names_2)):
    mapping_dict[common_name] = common_name

unknown_1 = set(names_1).difference(set(names_2))
unknown_2 = set(names_2).difference(set(names_1))

trim_df1 = dataset1.loc[:, ['hometeam', 'awayteam', 'date']]
trim_df2 = dataset2.loc[:, ['hometeam', 'awayteam', 'date']]

# merge on home team and date; the away-team columns get the _1/_2 suffixes
aligned_data = trim_df1.merge(trim_df2, on=['hometeam', 'date'],
                              how='inner', suffixes=('_1', '_2'))

for unknown_name in unknown_1:
    matching_name = aligned_data.loc[aligned_data['awayteam_1'] == unknown_name, 'awayteam_2'].unique()
    if len(matching_name) != 1:
        raise ValueError("Couldn't find a unique match")
    mapping_dict[unknown_name] = matching_name[0]
    unknown_2.remove(matching_name[0])

if len(unknown_2) != 0:
    raise ValueError("We have extra team names for some reason")
