I have a list of names of columns (cols) that exist in one dataframe.
I want to insert columns by those names in another dataframe.
So I am using a for loop to iterate the list and create the columns one by one:
cols = ['DEPTID', 'ACCOUNT', 'JRNL LINE DESCRIPTION', 'JRNL DATE', 'BASE AMOUNT', 'FOREIGN CURRENCY', 'FOREIGN AMOUNT', 'JRNL SOURCE']
for col in cols:
# "summary" and "obiee" are dataframes
summary.loc[obiee['mapid'], col] = obiee[col].tolist()
I would like to get rid of the for loop, however.
So I have tried multiple column assignment using the .loc syntax:
cols = ['DEPTID', 'ACCOUNT', 'JRNL LINE DESCRIPTION', 'JRNL DATE', 'BASE AMOUNT', 'FOREIGN CURRENCY', 'FOREIGN AMOUNT', 'JRNL SOURCE']
summary.loc[obiee['mapid'], cols] = obiee[cols]
but Pandas will throw an error:
KeyError: "['DEPTID' 'ACCOUNT' 'JRNL LINE DESCRIPTION' 'JRNL DATE' 'BASE AMOUNT'\n 'FOREIGN CURRENCY' 'FOREIGN AMOUNT' 'JRNL SOURCE'] not in index"
Is it not possible with this syntax? How can I do this otherwise?
join
You can create a new dataframe and then join. From your problem description and sample code, 'mapid' represents index values in the summary dataframe. join is made to merge on index. So by setting obiee's index to 'mapid' then taking the the appropriate columns, we can just use join.
summary.join(obiee.set_index('mapid')[cols])
you have a dataFrame df1 .. with some columns...
And you want those in a df2 ... all you need to do is just equate them as show below
df2 = pd.DataFrame({ 'A' : 1.,
....: 'B' : pd.Timestamp('20130102'),
....: 'C' : pd.Series(1,index=list(range(4)),dtype='float32'),
....: 'D' : np.array([3] * 4,dtype='int32'),
....: 'E' : pd.Categorical(["test","train","test","train"]),
....: 'F' : 'foo' })
df1 = pd.DataFrame({ 'G' : 1.,
....: 'H' : pd.Timestamp('20130102'),
....: 'I' : pd.Series(1,index=list(range(4)),dtype='float32'),
....: 'J' : np.array([3] * 4,dtype='int32'),
....: 'K' : pd.Categorical(["test","train","test","train"]),
....: 'L' : 'foo' })
df2['G'],df2['F'] = df1['G'],df1['H']
Related
Here's my df:
df=pd.DataFrame(
{
'Color': ['red','blue','red','red','green','red','yellow'],
'Type': ['Oil', 'Aluminium', 'Oil', 'Oil', 'Cement Paint', 'Synthetic Rubber', 'Emulsion'],
'Finish' : ['Satin', 'Matte', 'Matte', 'Satin', 'Semi-gloss', 'Satin', 'Satin'],
'Use' : ['Interior', 'Exterior', 'Interior', 'Interior', 'Exterior', 'Exterior', 'Exterior'],
'Price' : [55, 75, 60, 60, 55, 75, 50]
}
)
I want to create a pivot table that will output 'Color', 'color count', the percentage or weight or each count of color, and finally a total row, outputting the total color count next to 100%. Additionally, I'd like to add a header with today's date in the following format (02 - Nov).
Here is my current pivot with the aproximating inputs
today=datetime.date.today()
today_format=today.strftime("%d-m%")
pivot_table=pd.pivot_table(
data=df,
index='Color',
aggfunc={'Color':'count'}
)
df['Color'].value_counts(
normalize=True
).mul(100).round(1).astype(str) + '%'
Is there a way to add more information to the pivot as a header, total and extra column? Or just I just try to convert the pivot back to a DF and edit it from there?
The main difficulty I'm finding is that since I'm handling string data, when I 'aggfunc='sum' it actually adds the strings. And If I try to add 'margins=True, margins_name='Total count' I get the following error:
if isinstance(aggfunc[k], str):
KeyError: 'Type'
The desired table output would look something like this:
Updated Answer
Thanks to a great suggestion by Rabinzel, we can also have today's date as a column header as well:
df = (df['Color'].value_counts().reset_index().pivot_table(index = ['index'], aggfunc = np.sum, margins=True, margins_name='Total')
.assign(perc = lambda x: x['Color']/x.iloc[:-1]['Color'].sum() * 100)
.rename(columns = {'Color' : 'Color Count',
'perc' : '%'}))
new_cols = pd.MultiIndex.from_product([[datetime.today().strftime('%#d-%b')], df.columns])
df.columns = new_cols
df
2-Nov
Color Count %
index
blue 1 14.285714
green 1 14.285714
red 4 57.142857
yellow 1 14.285714
Total 7 100.000000
Assuming I have the following dataframe
df = pd.DataFrame(
{
'ID': ['AB01'],
'Col A': ["Yes"],
'Col B': ["L"],
'Col C': ["Yes"],
'Col D': ["L"],
'Col E': ["Yes"],
'Col F': ["L"],
'Type': [85]
}
)
I want to change all column names by changing it lowercase, replace space with underscore and adding string _filled to the end of name, except for columns named in list skip = ['ID', 'Type'].
How can I achieve this? I want the end resulting dataframe to have column names as ID, col_a_filled, col_b_filled......,Type
You can use df.rename along with a dict comprehension to get a nice one-liner:
df = df.rename(columns={col:col.lower().replace(" ", "_")+"_filled" for col in df.columns if col not in skip})
I have the following two dataframes:
df1 = pd.DataFrame([["blala Amazon", '02/30/2017', 'Amazon'], ["blala Amazon", '04/28/2017', 'Amazon'], ['blabla Netflix', '06/28/2017', 'Netflix']], columns=['text', 'date', 'keyword'])
df2 = pd.DataFrame([['01/28/2017', '3.4', '10.2'], ['02/30/2017', '3.7', '10.5'], ['03/28/2017', '6.0', '10.9']], columns=['dates', 'ReturnOnAssets.1', 'ReturnOnAssets.2'])
(perhaps it's clearer in the screenshots here: https://imgur.com/a/YNrWpR2)
The df2 is much larger than shown here - it contains columns for 100 companies. So for example, for the 10th company, the column names are: ReturnOnAssets.10, etc.
I have created a dictionary which maps the company names to the column names:
stocks = {'Microsoft':'','Apple' :'1', 'Amazon':'2', 'Facebook':'3',
'Berkshire Hathaway':'4', 'Johnson & Johnson':'5',
'JPMorgan' :'6', 'Alphabet': '7'}
and so on.
Now, what I am trying to achieve is adding a column "ReturnOnAssets" from d2 to d1, but for a specific company and for a specific date. So looking at df1, the first tweet (i.e. "text") contains a keyword "Amazon" and it was posted on 04/28/2017. I now need to go to df2 to the relevant column name for Amazon (i.e. "ReturnOnAssets.2") and fetch the value for the specified date.
So what I expect looks like this:
df1 = pd.DataFrame([["blala Amazon", '02/30/2017', 'Amazon', **'10.5'**], ["blala Amazon", '04/28/2017', 'Amazon', 'x'], ["blabla Netflix', '06/28/2017', 'Netflix', 'x']], columns=['text', 'date', 'keyword', 'ReturnOnAssets'])
By x I mean values which where not included in the example df1 and df2.
I am fairly new to pandas and I can't wrap my head around it. I tried:
keyword = df1['keyword']
txt = 'ReturnOnAssets.'+ stocks[keyword]
df1['ReturnOnAssets'] = df2[txt]
But I don't know how to fetch the relevant date, and also this gives me an error: "Series' objects are mutable, thus they cannot be hashed", which probably comes from the fact that I cannot just add a whole column of keywords to the text string.
I don't know how to achieve the operation I need to do, so I would appreciate help.
It can probably be shortened and you can add if statements to deal with when there are missing values.
import pandas as pd
import numpy as np
df1 = pd.DataFrame([["blala Amazon", '05/28/2017', 'Amazon'], ["blala Facebook", '04/28/2017', 'Facebook'], ['blabla Netflix', '06/28/2017', 'Netflix']], columns=['text', 'dates', 'keyword'])
df1
df2 = pd.DataFrame([['06/28/2017', '3.4', '10.2'], ['05/28/2017', '3.7', '10.5'], ['04/28/2017', '6.0', '10.9']], columns=['dates', 'ReturnOnAsset.1', 'ReturnOnAsset.2'])
#creating myself a bigger df2 to cover all the way to netflix
for i in range (9):
df2[('ReturnOnAsset.' + str(i))]=np.random.randint(1, 1000, df1.shape[0])
stocks = {'Microsoft':'0','Apple' :'1', 'Amazon':'2', 'Facebook':'3',
'Berkshire Hathaway':'4', 'Johnson & Johnson':'5',
'JPMorgan' :'6', 'Alphabet': '7', 'Netflix': '8'}
#new col where to store values
df1['ReturnOnAsset']=np.nan
for index, row in df1.iterrows():
colname=('ReturnOnAsset.' + stocks[row['keyword']] )
df1['ReturnOnAsset'][index]=df2.loc[df2['dates'] ==row['dates'] , colname]
Next time please give us a correct test data, I modified your dates and dictionary for match the first and second column (netflix and amazon values).
This code will work if and only if all dates from df1 are in df2 (Note that in df1 the column name is date and in df2 the column name is dates)
df1 = pd.DataFrame([["blala Amazon", '02/30/2017', 'Amazon'], ["blala Amazon", '04/28/2017', 'Amazon'], ['blabla Netflix', '02/30/2017', 'Netflix']], columns=['text', 'date', 'keyword'])
df2 = pd.DataFrame([['04/28/2017', '3.4', '10.2'], ['02/30/2017', '3.7', '10.5'], ['03/28/2017', '6.0', '10.9']], columns=['dates', 'ReturnOnAssets.1', 'ReturnOnAssets.2'])
stocks = {'Microsoft':'','Apple' :'5', 'Amazon':'2', 'Facebook':'3',
'Berkshire Hathaway':'4', 'Netflix':'1',
'JPMorgan' :'6', 'Alphabet': '7'}
df1["ReturnOnAssets"]= [ df2["ReturnOnAssets." + stocks[ df1[ "keyword" ][ index ] ] ][ df2.index[ df2["dates"] == df1["date"][index] ][0] ] for index in range(len(df1)) ]
df1
I have a data set which contains some headers which end with no break space hex code. Below is my attempt to get rid of that but it still exists there.
Input:
files=[file1,file2,file3]
for f in files:
for col in f.columns:
col = col.replace("\xc2\xa0", "")
col = col.replace(u'\xa0', u' ')
print(f.columns.values)
Output:
'Name' 'Date' 'rep_cur' 'Passenger Revenue\xa0' 'Cargo Revenue\xa0'
'Other Revenue\xa0' 'Total Cargo & Other Revenue' 'Total Revenue\xa0'
'% inc / (dec) to previous period' 'Employee Costs\xa0' 'Fuel and oil\xa0'
Use str.strip:
l = ['Name','Date','rep_cur','Passenger Revenue\xa0','Cargo Revenue\xa0',
'Other Revenue\xa0','Total Cargo & Other Revenue','Total Revenue\xa0',
'% inc / (dec) to previous period','Employee Costs\xa0','Fuel and oil\xa0']
new_l = [i.strip() for i in l]
Output:
['Name',
'Date',
'rep_cur',
'Passenger Revenue',
'Cargo Revenue',
'Other Revenue',
'Total Cargo & Other Revenue',
'Total Revenue',
'% inc / (dec) to previous period',
'Employee Costs',
'Fuel and oil']
for col in f.columns:
col = col.replace("\xc2\xa0", "")
col = col.replace(u'\xa0', u' ')
That does nothing to the actual col used for the iteration. That is pretty much equivalent to:
li = [1, 2, 3]
for n in li:
n = n + 1
print(li)
# [1, 2, 3]
A decent IDE should show you a warning along the lines of "n (or col in your example) is redefined with no usage".
Instead you should use the tools pandas provide, for example df.rename.
df = pd.DataFrame({'a\xa0': []})
print(df.rename(lambda col: col.replace('\xa0', ''), axis='columns'))
Note that .rename returns a new dataframe. You can use inplace=True to change the original dataframe:
df.rename(lambda col: col.replace('\xa0', ''), axis='columns', inplace=True)
If you don't want to be so fancy you could replace the columns' names yourself (which is similar to what your original code tried to do):
df.columns = [column.replace('\xa0', '') for col in df.columns]
I have a list of columns that I want to rename a portion of based on a list of values.
I am looking at a file which has 12 months of data and each month is a different column (I need to keep it in this specific format unfortunately). This file is generated once per month and I keep the column names more general since I have to do a lot of calculations on them based the month number (for example, I need to compare information against the average of month 8, 9, and 10 every month).
Here are the columns I want to rename:
['month_1_Sign',
'month_2_Sign',
'month_3_Sign',
'month_4_Sign',
'month_5_Sign',
'month_6_Sign',
'month_7_Sign',
'month_8_Sign',
'month_9_Sign',
'month_10_Sign',
'month_11_Sign',
'month_12_Sign',
'month_1_Actual',
'month_2_Actual',
'month_3_Actual',
'month_4_Actual',
'month_5_Actual',
'month_6_Actual',
'month_7_Actual',
'month_8_Actual',
'month_9_Actual',
'month_10_Actual',
'month_11_Actual',
'month_12_Actual',
'month_1_Target',
'month_2_Target',
'month_3_Target',
'month_4_Target',
'month_5_Target',
'month_6_Target',
'month_7_Target',
'month_8_Target',
'month_9_Target',
'month_10_Target',
'month_11_Target',
'month_12_Target']
Here are the names I want to place:
required_date_range = sorted(list(pd.Series(pd.date_range((dt.datetime.today().date() + pd.DateOffset(months=-13)), periods=12, freq='MS')).dt.strftime('%Y-%m-%d')))
['2015-03-01',
'2015-04-01',
'2015-05-01',
'2015-06-01',
'2015-07-01',
'2015-08-01',
'2015-09-01',
'2015-10-01',
'2015-11-01',
'2015-12-01',
'2016-01-01',
'2016-02-01']
So month_12 columns (month_12 is always the latest month) would be changed to '2016-02-01_Sign', '2016-02-01_Actual', '2016-02-01_Target' in this example.
I tried doing this but it doesn't change anything (trying to change the month_# with the actual date it refers to):
final.replace('month_10', required_date_range[9], inplace=True)
final.replace('month_11', required_date_range[10], inplace=True)
final.replace('month_12', required_date_range[11], inplace=True)
final.replace('month_1', required_date_range[0], inplace=True)
final.replace('month_2', required_date_range[1], inplace=True)
final.replace('month_3', required_date_range[2], inplace=True)
final.replace('month_4', required_date_range[3], inplace=True)
final.replace('month_5', required_date_range[4], inplace=True)
final.replace('month_6', required_date_range[5], inplace=True)
final.replace('month_7', required_date_range[6], inplace=True)
final.replace('month_8', required_date_range[7], inplace=True)
final.replace('month_9', required_date_range[8], inplace=True)
You could construct a dict and then call map on the split column str:
In [27]:
d = dict(zip([str(x) for x in range(1,13)], required_date_range))
d
Out[27]:
{'1': '2015-03-01',
'10': '2015-12-01',
'11': '2016-01-01',
'12': '2016-02-01',
'2': '2015-04-01',
'3': '2015-05-01',
'4': '2015-06-01',
'5': '2015-07-01',
'6': '2015-08-01',
'7': '2015-09-01',
'8': '2015-10-01',
'9': '2015-11-01'}
In [26]:
df.columns = df.columns.to_series().str.rsplit('_').str[1].map(d) + '_' + df.columns.to_series().str.rsplit('_').str[-1]
df.columns
Out[26]:
Index(['2015-03-01_Sign', '2015-04-01_Sign', '2015-05-01_Sign',
'2015-06-01_Sign', '2015-07-01_Sign', '2015-08-01_Sign',
'2015-09-01_Sign', '2015-10-01_Sign', '2015-11-01_Sign',
'2015-12-01_Sign', '2016-01-01_Sign', '2016-02-01_Sign',
'2015-03-01_Actual', '2015-04-01_Actual', '2015-05-01_Actual',
'2015-06-01_Actual', '2015-07-01_Actual', '2015-08-01_Actual',
'2015-09-01_Actual', '2015-10-01_Actual', '2015-11-01_Actual',
'2015-12-01_Actual', '2016-01-01_Actual', '2016-02-01_Actual',
'2015-03-01_Target', '2015-04-01_Target', '2015-05-01_Target',
'2015-06-01_Target', '2015-07-01_Target', '2015-08-01_Target',
'2015-09-01_Target', '2015-10-01_Target', '2015-11-01_Target',
'2015-12-01_Target', '2016-01-01_Target', '2016-02-01_Target'],
dtype='object')
You're going to want to use the .rename method instead of the .replace! For instance this code:
import pandas as pd
d = {'a': [1, 2, 4], 'b':[2,3,4],'c':[3,4,5]}
df = pd.DataFrame(d)
df.rename(columns={'a': 'x1', 'b': 'x2'}, inplace=True)
Changes the 'a' and 'b' column title to 'x1' and 'x2' respectively.
The first line of the renaming code you have would change to:
final.rename(columns={'month_10':required_date_range[9]}, inplace=True)
In fact you could do every column in that one command by adding entries to the columns dictionary argument.
final.rename(columns={'month_10':required_date_range[9],
'month_9':required_date-range[8], ... (and so on) }, inplace=True)
from collections import product
df = pd.DataFrame(np.random.rand(3, 12 * 3), columns=['month_' + str(c[0]) + '_' + c[1] for c in product(range(1, 13), ['Sign', 'Actual', 'Target'])])
First create a mapping to the relevant months.
mapping = {'month_' + str(n): date for n, date in enumerate(required_date_range, 1)}
>>> mapping
{'month_1': '2015-03-01',
'month_10': '2015-12-01',
'month_11': '2016-01-01',
'month_12': '2016-02-01',
'month_2': '2015-04-01',
'month_3': '2015-05-01',
'month_4': '2015-06-01',
'month_5': '2015-07-01',
'month_6': '2015-08-01',
'month_7': '2015-09-01',
'month_8': '2015-10-01',
'month_9': '2015-11-01'}
Then reassign columns, joining the mapped month (e.g. '2016-02-01') to the rest of the column name. This was done using a list comprehension.
df.columns = [mapping.get(c[:c.find('_', 6)]) + c[c.find('_', 6):] for c in cols]
>>> df.columns.tolist()
['2015-03-01_Sign',
'2015-04-01_Sign',
'2015-05-01_Sign',
'2015-06-01_Sign',
'2015-07-01_Sign',
'2015-08-01_Sign',
'2015-09-01_Sign',
'2015-10-01_Sign',
'2015-11-01_Sign',
'2015-12-01_Sign',
'2016-01-01_Sign',
'2016-02-01_Sign',
'2015-03-01_Actual',
'2015-04-01_Actual',
'2015-05-01_Actual',
'2015-06-01_Actual',
'2015-07-01_Actual',
'2015-08-01_Actual',
'2015-09-01_Actual',
'2015-10-01_Actual',
'2015-11-01_Actual',
'2015-12-01_Actual',
'2016-01-01_Actual',
'2016-02-01_Actual',
'2015-03-01_Target',
'2015-04-01_Target',
'2015-05-01_Target',
'2015-06-01_Target',
'2015-07-01_Target',
'2015-08-01_Target',
'2015-09-01_Target',
'2015-10-01_Target',
'2015-11-01_Target',
'2015-12-01_Target',
'2016-01-01_Target',
'2016-02-01_Target']