I have to convert data from a SQL table to pandas and display the output. The data is a sales table:
cust prod day month year state quant
0 Bloom Pepsi 2 12 2017 NY 4232
1 Knuth Bread 23 5 2017 NJ 4167
2 Emily Pepsi 22 1 201 CT 4404
3 Emily Fruits 11 1 2010 NJ 4369
4 Helen Milk 7 11 2016 CT 210
I have to transform this to find the average sales for each customer in each state for the year 2017:
CUST AVG_NY AVG_CT AVG_NJ
Bloom 28923 3241 1873
Sam 4239 872 142
Below is my code:
import pandas as pd
import psycopg2 as pg

engine = pg.connect("dbname='postgres' user='postgres' host='127.0.0.1' port='8800' password='sh'")
df = pd.read_sql('select * from sales', con=engine)

# Keep only the columns needed for the aggregation
df.drop(["prod", "day", "month"], axis=1, inplace=True)
df_main = df.loc[df.year == 2017]

# One sub-frame per state; .copy() avoids SettingWithCopyWarning on the in-place drop
df2 = df_main.loc[df_main.state == 'NY'].copy()
df2.drop("year", axis=1, inplace=True)
NY = df2.groupby(['cust']).mean()

df3 = df_main.loc[df_main.state == 'CT'].copy()
df3.drop("year", axis=1, inplace=True)
CT = df3.groupby(['cust']).mean()

df4 = df_main.loc[df_main.state == 'NJ'].copy()
df4.drop("year", axis=1, inplace=True)
NJ = df4.groupby(['cust']).mean()

NY = NY.join(CT, how='left', lsuffix='NY', rsuffix='_right')
NY = NY.join(NJ, how='left', lsuffix='NY', rsuffix='_right')
print(NY)
This gives me an output like:
quantNY quant_right quant
cust
Bloom 3201.500000 3261.0 2277.000000
Emily 2698.666667 1432.0 1826.666667
Helen 4909.000000 2485.5 2352.166667
I found a question showing how to rename the columns to get the output I need, but I am not sure whether the two lines below are the right way to join the dataframes:
NY = NY.join(CT,how='left',lsuffix = 'NY', rsuffix = '_right')
NY = NY.join(NJ,how='left',lsuffix = 'NY', rsuffix = '_right')
Is there a better way of doing this with Pandas?
Use pivot_table:
df.pivot_table(index=['year', 'cust'], columns='state',
               values='quant', aggfunc='mean').add_prefix('AVG_')
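For example, here is a minimal, self-contained sketch of the idea (a hypothetical hand-built frame stands in for the real sales table, and the 2017 slice is selected at the end):

import pandas as pd

# Hypothetical stand-in for the sales table pulled from Postgres
df = pd.DataFrame({
    'cust':  ['Bloom', 'Bloom', 'Emily', 'Helen'],
    'state': ['NY', 'CT', 'NY', 'NJ'],
    'year':  [2017, 2017, 2017, 2016],
    'quant': [4232, 3241, 2698, 210],
})

# One row per (year, cust), one AVG_<state> column per state
out = df.pivot_table(index=['year', 'cust'], columns='state',
                     values='quant', aggfunc='mean').add_prefix('AVG_')

# Only the 2017 averages, matching the desired output
print(out.loc[2017])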
I have a data frame df like:
age=14 gender=male loc=NY key=0012328434 Unnamed: 4
age=45 gender=female loc=CS key=834734hh43 pre="axe"
age=23 gender=female loc=CA key=545df35fdf NaN
..
..
age=65 gender=male loc=LA key=dfdf545dfg pre="cold"
And I need this df to have a header and remove the redundant data, like desired_df:
age gender loc key pre
14 male NY 0012328434 NaN
45 female CS 834734hh43 axe
23 female CA 545df35fdf NaN
..
..
65 male LA dfdf545dfg cold
what I tried to do:
df1 = df.str.split()
df_out = pd.DataFrame(df1.str[1::2].tolist(), columns=df1[0][0::2])
but this fails, clearly as I do not have a df name to begin with. Any help would be really appreciated.
# df = pd.read_csv(r'xyz.csv', header=None)
# Turn each row of 'key=value' strings into a dict, then build a frame from those dicts;
# the fillna placeholder keeps rows with missing cells parseable and is dropped at the end
df1 = (pd.DataFrame(df.fillna('NaN=NaN')
                      .apply(lambda x: dict(list(x.str.replace('"', '')
                                                  .str.split('='))), axis=1)
                      .to_list())
         .drop('NaN', axis=1))
age gender loc key pre
0 14 male NY 0012328434
1 45 female CS 834734hh43 axe
2 23 female CA 545df35fdf NaN
3 65 male LA dfdf545dfg cold
(Untested!)
headers = ['age', 'gender', 'loc', 'key', 'pre']
df.columns = headers
for name in df.columns:
    df[name] = df[name].str.removeprefix(f'{name}=')
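One caveat, assuming the sample data is representative: values such as pre="axe" keep their double quotes after removeprefix, so a follow-up strip may be needed:

# Strip the 'key=' prefix and any surrounding double quotes (removeprefix needs pandas >= 1.4)
for name in df.columns:
    df[name] = df[name].str.removeprefix(f'{name}=').str.strip('"')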
Let's say that I have this dataframe with four columns : "Name", "Value", "Ccy" and "Group" :
import pandas as pd
Name = ['ID', 'Country', 'IBAN','Dan_Age', 'Dan_city', 'Dan_country', 'Dan_sex', 'Dan_Age', 'Dan_country','Dan_sex' , 'Dan_city','Dan_country' ]
Value = ['TAMARA_CO', 'GERMANY','FR56','18', 'Berlin', 'GER', 'M', '22', 'FRA', 'M', 'Madrid', 'ESP']
Ccy = ['','','','EUR','EUR','USD','USD','','CHF', '','DKN','']
Group = ['0','0','0','1','1','1','1','2','2','2','3','3']
df = pd.DataFrame({'Name':Name, 'Value' : Value, 'Ccy' : Ccy,'Group':Group})
print(df)
Name Value Ccy Group
0 ID TAMARA_CO 0
1 Country GERMANY 0
2 IBAN FR56 0
3 Dan_Age 18 EUR 1
4 Dan_city Berlin EUR 1
5 Dan_country GER USD 1
6 Dan_sex M USD 1
7 Dan_Age 22 2
8 Dan_country FRA CHF 2
9 Dan_sex M 2
10 Dan_city Madrid DKN 3
11 Dan_country ESP 3
I want to represent this data differently before saving it to a CSV. I would like to group the duplicates in the column "Name" together with the associated values in "Value" and "Ccy", and store the data from "Value" and "Ccy" in the row (index) defined by the column "Group", so that the data is not mixed up.
Then, if a name is in group 0, it is general data, so I would like all the rows for that "Name" to be filled with the same value.
So I would like to get this result :
ID_Value Country_Value IBAN_Value Dan_age Dan_age_Ccy Dan_city_Value Dan_city_Ccy Dan_sex_Value
1 TAMARA GER FR56 18 EUR Berlin EUR M
2 TAMARA GER FR56 22 M
3 TAMARA GER FR56 Madrid DKN
I cannot find how to do the first part. With the code below, I do not get what I want, even if I remove the empty columns:
g = df.groupby(['Name']).cumcount()
df = df.set_index([g,'Name']).unstack().sort_index(level=1, axis=1)
df.columns = df.columns.map(lambda x: f'{x[0]}_{x[1]}')
Can anyone help me? Thank you!
You can use the following. See comments in code for each step:
import numpy as np  # needed for np.nan below

s = df.loc[df['Group'] == '0', 'Name'].tolist()  # used later for Condition 2 (the general data)
df['Name'] = pd.Categorical(df['Name'], categories=df['Name'].unique(), ordered=True)  # preserves order before pivoting
df = df.pivot(index='Group', columns='Name')  # transforms long-to-wide per expected output
for col in df.columns:
    if col[1] in s:
        df[col] = df[col].shift().ffill()  # Condition 2: propagate general data to every group
df = df.iloc[1:].replace('', np.nan).dropna(axis=1, how='all').fillna('')  # dataframe cleanup
df.columns = ['_'.join(col) for col in df.columns.swaplevel()]  # column name cleanup
df
Out[1]:
ID_Value Country_Value IBAN_Value Dan_Age_Value Dan_city_Value \
Group
1 TAMARA_CO GERMANY FR56 18 Berlin
2 TAMARA_CO GERMANY FR56 22
3 TAMARA_CO GERMANY FR56 Madrid
Dan_country_Value Dan_sex_Value Dan_Age_Ccy Dan_city_Ccy \
Group
1 GER M EUR EUR
2 FRA M
3 ESP DKN
Dan_country_Ccy Dan_sex_Ccy
Group
1 USD USD
2 CHF
3
From there, you can drop columns you don't want, change strings from "TAMARA_CO" to "TAMARA", "GERMANY" to "GER", use reset_index(drop=True), etc.
You can do this quite easily in only three steps:
1. Split your data frame into two parts: the "general data" (which we want as a series) and the more specific data. Each data frame now contains the same kinds of information.
2. The key part of your problem: reorganizing the data. All you need is the pandas pivot function; it does exactly what you need.
3. Add the general information and the pivoted data back together.
# Split the data
general = df[df.Group == "0"].set_index("Name")["Value"].copy()
main_df = df[df.Group != "0"]
# Pivot the data
result = main_df.pivot(index="Group", columns=["Name"],
                       values=["Value", "Ccy"]).fillna("")
result.columns = [f"{c[1]}_{c[0]}" for c in result.columns]
# Create a data frame with an identical row of general data for each group
general_df = pd.DataFrame([general] * len(result), index=result.index)
general_df.columns = [c + "_Value" for c in general_df.columns]
# Merge the data back together
result = general_df.merge(result, on="Group")
The result above does not have the exact column order you want, so you'd have to specify that manually (note that the pivoted column names follow the capitalization in the data, e.g. "Dan_Age_Value"):
final_cols = ["ID_Value", "Country_Value", "IBAN_Value",
              "Dan_Age_Value", "Dan_Age_Ccy", "Dan_city_Value",
              "Dan_city_Ccy", "Dan_sex_Value"]
result = result[final_cols]
This question already has answers here: Pandas Merging 101 (8 answers).
I have two dataframes in Pandas. What I want to achieve is: grab every 'Name' from DF1 and get the corresponding 'City' and 'State' from DF2.
For example, 'Dwight' from DF1 should return the corresponding values 'Miami' and 'Florida' from DF2.
DF1
Name Age Student
0 Dwight 20 Yes
1 Michael 30 No
2 Pam 55 No
. . . .
70000 Jim 27 Yes
DF1 has approx. 70,000 rows and 3 columns.
The second dataframe, DF2, has approx. 320,000 rows:
Name City State
0 Dwight Miami Florida
1 Michael Scranton Pennsylvania
2 Pam Austin Texas
. . . . .
325082 Jim Scranton Pennsylvania
Currently I have two functions, which return the values of 'City' and 'State' using a filter.
def read_city(id):
    filt = (df2['Name'] == id)
    if filt.any():
        field = df2[filt]['City'].values[0]
    else:
        field = ""
    return field

def read_state(id):
    filt = (df2['Name'] == id)
    if filt.any():
        field = df2[filt]['State'].values[0]
    else:
        field = ""
    return field
I am using the apply function to process all the values.
df['city_list'] = df['Name'].apply(read_city)
df['State_list'] = df['Name'].apply(read_state)
Computed this way, the result takes a long time: roughly 18 minutes to get back df['city_list'] and df['State_list'].
Is there a faster way to compute this? Since I am completely new to pandas, I would like to know if there is an efficient way.
I believe you can do a map:
s = df2.groupby('Name')[['City', 'State']].agg(list)  # note: the column is 'Name', capitalized
df['city_list'] = df['Name'].map(s['City'])
df['State_list'] = df['Name'].map(s['State'])
Or do a left merge once you have s:
df = df.merge(s.add_suffix('_list'), left_on='Name', right_index=True, how='left')
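If each Name maps to a single City/State (an assumption; the grouped lists above are what you want when names repeat), a plain lookup table avoids building lists:

# Keep the first City/State seen per Name and map the scalars directly
lookup = df2.drop_duplicates('Name').set_index('Name')
df['city_list'] = df['Name'].map(lookup['City'])
df['State_list'] = df['Name'].map(lookup['State'])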
I think you can do something like this:
# Dataframe DF1 (dummy data)
DF1 = pd.DataFrame(columns=['Name', 'Age', 'Student'], data=[['Dwight', 20, 'Yes'], ['Michael', 30, 'No'], ['Pam', 55, 'No'], ['Jim', 27, 'Yes']])
print("DataFrame DF1")
print(DF1)
# Dataframe DF2 (dummy data)
DF2 = pd.DataFrame(columns=['Name', 'City', 'State'], data=[['Dwight', 'Miami', 'Florida'], ['Michael', 'Scranton', 'Pennsylvania'], ['Pam', 'Austin', 'Texas'], ['Jim', 'Scranton', 'Pennsylvania']])
print("DataFrame DF2")
print(DF2)
# Merge on the 'Name' column, then rename the 'City' and 'State' columns
df = pd.merge(DF1, DF2, on=['Name']).rename(columns={'City': 'city_list', 'State': 'State_list'})
print("DataFrame final")
print(df)
Output:
DataFrame DF1
Name Age Student
0 Dwight 20 Yes
1 Michael 30 No
2 Pam 55 No
3 Jim 27 Yes
DataFrame DF2
Name City State
0 Dwight Miami Florida
1 Michael Scranton Pennsylvania
2 Pam Austin Texas
3 Jim Scranton Pennsylvania
DataFrame final
Name Age Student city_list State_list
0 Dwight 20 Yes Miami Florida
1 Michael 30 No Scranton Pennsylvania
2 Pam 55 No Austin Texas
3 Jim 27 Yes Scranton Pennsylvania
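One caveat: pd.merge defaults to an inner join, so DF1 rows whose Name never appears in DF2 are dropped. To keep them, mirroring the empty-string fallback in the original functions, a left join should do it:

# how='left' keeps every DF1 row; unmatched names get NaN
df = pd.merge(DF1, DF2, on=['Name'], how='left').rename(columns={'City': 'city_list', 'State': 'State_list'})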
I have two dataframes.
df1:
number 1 2 3
12354 mark 24 london
12356 jacob 25 denver
12357 luther 26 berlin
12358 john 27 tokyo
12359 marshall 28 cairo
12350 ted 29 delhi
and another, df2:
number
12354
12357
12359
I want to remove all rows in df1 whose number appears in df2.
Expected output:
0 1 2 3
12356 jacob 25 denver
12358 john 27 tokyo
12350 ted 29 delhi
Here is an example:
import pandas as pd
from io import StringIO
df1 = """
number,1,2,3
12354,mark,24,london
12356,jacob,25,denver
12357,luther,26,berlin
12358,john,27,tokyo
12359,marshall,28,cairo
12350,ted,29,delhi
"""
df2 = """
number
12354
12357
12359
"""
df_df2 = pd.read_csv(StringIO(df2), sep=',')
df_df1 = pd.read_csv(StringIO(df1), sep=',')
# Outer merge with an indicator column, then keep only the rows that came from df1 alone
df = pd.merge(df_df1, df_df2, indicator=True, how='outer').query('_merge == "left_only"')
df.drop(['_merge'], axis=1, inplace=True)
df.rename(columns={'number': '0'}, inplace=True)
print(df)
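Assuming you only need to filter on the number column, boolean indexing with isin gives the same rows more directly:

# Keep the df1 rows whose number does not appear in df2
print(df_df1[~df_df1['number'].isin(df_df2['number'])])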
How would I go about adding a new column to an existing dataframe by comparing it to another that is shorter in length and has a different index?
For example, if I have:
df1 = country code year
0 Armenia a 2016
1 Brazil b 2017
2 Turkey c 2016
3 Armenia d 2017
df2 = geoCountry 2016_gdp 2017_gdp
0 Armenia 10.499 10.74
1 Brazil 1,798.62 2,140.94
2 Turkey 857.429 793.698
and I want to end up with:
df1 = country code year gdp
0 Armenia a 2016 10.499
1 Brazil b 2017 2,140.94
2 Turkey c 2016 857.429
3 Armenia d 2017 10.74
How would I go about this? I attempted to use the answers outlined here and here to no avail. I also did the following, which takes too long on a 90,000-row dataframe:
for index, row in df1.iterrows():
    if row['country'] in list(df2.geoCountry):
        if row['year'] == 2016:
            df1['gdp'].append(df2[df2.geoCountry == str(row['country'])]['2016'])
        else:
            df1['gdp'].append(df2[df2.geoCountry == str(row['country'])]['2017'])
I guess this is what you're looking for:
df2 = df2.melt(id_vars='geoCountry', value_vars=['2016_gdp', '2017_gdp'],
               var_name='year')
df1['year'] = df1['year'].astype('int')
df2['year'] = df2['year'].str.slice(0, 4).astype('int')  # '2016_gdp' -> 2016
df1.merge(df2, left_on=['country', 'year'],
          right_on=['geoCountry', 'year'])[['country', 'code', 'year', 'value']]
Output:
country code year value
0 Armenia a 2016 10.499
1 Brazil b 2017 2,140.94
2 Turkey c 2016 857.429
3 Armenia d 2017 10.74
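If you want the merged column named gdp as in the question rather than value, a final rename should work; assuming the merge result above is assigned to a variable, say merged:

merged = merged.rename(columns={'value': 'gdp'})  # 'merged' is a hypothetical name for the merge result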
You mainly need the melt function:
df2.columns = df2.columns.str.split("_").str.get(0)  # '2016_gdp' -> '2016'
df2 = df2.rename(index=str, columns={"geoCountry": "country"})
df3 = pd.melt(df2, id_vars=['country'], value_vars=['2016', '2017'],
              var_name='year', value_name='gdp')
df3['year'] = df3['year'].astype(int)  # match df1's integer year for the merge
After this, you simply merge df1 with the above df3:
result = pd.merge(df1, df3, on=['country','year'])
Output:
pd.merge(df1, df3, on=['country','year'])
Out[36]:
country code year gdp
0 Armenia a 2016 10.499
1 Brazil b 2017 2140.940
2 Turkey c 2016 857.429
3 Armenia d 2017 10.740