I have a data frame
cat input.csv
dwelling,wall,weather,occ,height,temp
5,2,Ldn,Pen,154.7,23.4
5,4,Ldn,Pen,172.4,28.7
3,4,Ldn,Pen,183.5,21.2
3,4,Ldn,Pen,190.2,30.3
To which I'm trying to apply the following function:
input_df = pd.read_csv('input.csv')
def folder_column(row):
    if row['dwelling'] == 5 and row['wall'] == 2:
        return 'folder1'
    elif row['dwelling'] == 3 and row['wall'] == 4:
        return 'folder2'
    else:
        return 0
I want to run the function on the input dataset and store the output in a separate data frame using something like this:
temp_df = pd.DataFrame()
temp_df = input_df['archetype_folder'] = input_df.apply(folder_column, axis=1)
But when I do this I only get the newly created 'archetype_folder' column in temp_df, when I would like all the original columns from input_df as well. Can anyone help? Note that I don't want to add the new 'archetype_folder' column to the original input_df. I've also tried this:
temp_df = input_df
temp_df['archetype_folder'] = temp_df.apply(folder_column, axis=1)
But when I run the second command both input_df and temp_df end up with the new column?
Any help is appreciated!
Use DataFrame.copy:
temp_df = input_df.copy()
temp_df['archetype_folder'] = temp_df.apply(folder_column, axis=1)
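For context on why the question's second attempt modified both frames: temp_df = input_df only binds a second name to the same DataFrame object, so a column written through either name shows up in both. A minimal check:
temp_df = input_df
print(temp_df is input_df)  # True: two names, one object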
You need to create a copy of the original DataFrame, then assign the return values of your function to it. Consider the following simple example:
import pandas as pd
def is_odd(row):
    return row.value % 2 == 1

df1 = pd.DataFrame({"value": [1, 2, 3], "name": ["uno", "dos", "tres"]})
df2 = df1.copy()
df2["odd"] = df1.apply(is_odd, axis=1)
print(df1)
print("=====")
print(df2)
gives output
value name
0 1 uno
1 2 dos
2 3 tres
=====
value name odd
0 1 uno True
1 2 dos False
2 3 tres True
You don't need apply here. Use boolean masks with .loc, which is more efficient:
temp_df = input_df.copy()
m1 = (input_df['dwelling'] == 5) & (input_df['wall'] == 2)
m2 = (input_df['dwelling'] == 3) & (input_df['wall'] == 4)
temp_df.loc[m1, 'archetype_folder'] = 'folder1'
temp_df.loc[m2, 'archetype_folder'] = 'folder2'
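Note that rows matching neither mask are left as NaN rather than the 0 default from the original function. A sketch of one way to keep that default, using numpy.select (an addition, not part of the answer above):
import numpy as np

# np.select takes the first matching condition's choice, else the default
temp_df['archetype_folder'] = np.select([m1, m2], ['folder1', 'folder2'], default=0)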
Related
I would like to use a function that produces multiple outputs to create multiple new columns in an existing pandas dataframe.
For example, say I have this test function which outputs 2 things:
def testfunc(TranspoId, LogId):
    thing1 = TranspoId + LogId
    thing2 = LogId - TranspoId
    return thing1, thing2
I can give those returned outputs to 2 different variables like so:
Thing1,Thing2 = testfunc(4,28)
print(Thing1)
print(Thing2)
I tried to do this with a dataframe in the following way:
data = {'Name':['Picard','Data','Guinan'],'TranspoId':[1,2,3],'LogId':[12,14,23]}
df = pd.DataFrame(data, columns = ['Name','TranspoId','LogId'])
print(df)
df['thing1','thing2'] = df.apply(lambda row: testfunc(row.TranspoId, row.LogId), axis=1)
print(df)
What I want is something that looks like this:
data = {'Name':['Picard','Data','Guinan'],'TranspoId':[1,2,3],'LogId':[12,14,23], 'Thing1':[13,16,26], 'Thing2':[11,12,20]}
df = pd.DataFrame(data, columns=['Name','TranspoId','LogId','Thing1','Thing2'])
print(df)
In the real world that function is doing a lot of heavy lifting, and I can't afford to run it twice, once for each new variable being added to the df.
I've been hitting myself in the head with this for a few hours. Any insights would be greatly appreciated.
I believe the best way is to change the order of operations and make a function that works on whole Series.
import pandas as pd
# Create function that deals with series
def testfunc(Series1, Series2):
    Thing1 = Series1 + Series2
    Thing2 = Series2 - Series1  # LogId - TranspoId, matching the original function
    return Thing1, Thing2
# Create df
data = {'Name':['Picard','Data','Guinan'],'TranspoId':[1,2,3],'LogId':[12,14,23]}
df = pd.DataFrame(data, columns = ['Name','TranspoId','LogId'])
# Apply function
Thing1,Thing2 = testfunc(df['TranspoId'],df['LogId'])
print(Thing1)
print(Thing2)
# Assign new columns
df = df.assign(Thing1 = Thing1)
df = df.assign(Thing2 = Thing2)
# print df
print(df)
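As a small follow-up, the two assign calls can be collapsed into one, since assign accepts multiple keyword arguments:
df = df.assign(Thing1=Thing1, Thing2=Thing2)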
Your function should return a Series that calculates the new columns in one pass. Then you can use DataFrame.apply() to add the new fields.
import pandas as pd
df = pd.DataFrame( {'TranspoId':[1,2,3], 'LogId':[4,5,6]})
def testfunc(row):
    new_cols = pd.Series([
        row['TranspoId'] + row['LogId'],
        row['LogId'] - row['TranspoId']])
    return new_cols

df[['thing1','thing2']] = df.apply(testfunc, axis=1)
print(df)
Output:
TranspoId LogId thing1 thing2
0 1 4 5 3
1 2 5 7 3
2 3 6 9 3
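An alternative sketch (not the answer above): with result_type='expand' (pandas 0.23+), the applied function can return a plain tuple and pandas expands it into columns:
import pandas as pd

df = pd.DataFrame({'TranspoId': [1, 2, 3], 'LogId': [4, 5, 6]})

def testfunc(row):
    # return a plain tuple; result_type='expand' splits it into columns
    return row['TranspoId'] + row['LogId'], row['LogId'] - row['TranspoId']

df[['thing1', 'thing2']] = df.apply(testfunc, axis=1, result_type='expand')
print(df)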
How do I do the same as the below code for a Dask dataframe?
df['new_column'] = 0
for i in range(len(df)):
    if (condition):
        df.loc[i, 'new_column'] = '1'
    else:
        df.loc[i, 'new_column'] = '0'
I want to add a new column to a dask dataframe and insert 0/1 to the new column.
In case you do not wish to compute as suggested by Rajnish kumar, you can also use something along the following lines:
import dask.dataframe as dd
import pandas as pd

my_df = [{"a": 1, "b": 2}, {"a": 2, "b": 3}]
df = pd.DataFrame(my_df)
dask_df = dd.from_pandas(df, npartitions=2)
dask_df["c"] = dask_df.apply(lambda x: x["a"] < 2,
                             axis=1,
                             meta=pd.Series(name="c", dtype=bool))
dask_df.compute()
Output:
a b c
0 1 2 True
1 2 3 False
The condition (here a check whether the entry in column "a" < 2) is applied on a row-by-row basis. Note that depending on your condition and the dependencies therein it might not be quite as straightforward, but in that case you could share additional information on what your condition entails.
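If the condition can be expressed as a vectorized operation, a sketch using map_partitions may be faster, since it runs a function once per pandas partition rather than once per row (add_flag and the column names here are illustrative):
import dask.dataframe as dd
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [2, 3]})
dask_df = dd.from_pandas(df, npartitions=2)

def add_flag(part):
    # each partition arrives as a plain pandas DataFrame
    part = part.copy()
    part["c"] = part["a"] < 2
    return part

dask_df = dask_df.map_partitions(add_flag)
print(dask_df.compute())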
You can't do that directly on a Dask DataFrame. You first need to compute it. Use this; it will work:
df = df.compute()
for i in range(len(df)):
    if (condition):
        df.loc[i, 'new_column'] = '1'
    else:
        df.loc[i, 'new_column'] = '0'
The reason behind this is that a Dask DataFrame is a lazy representation of the data, divided into dask-delayed tasks, so you cannot assign to it row by row. Note that compute() materializes the whole frame as an in-memory pandas DataFrame, so this only works when the data fits in memory. Hope it helps you.
I was going through these answers for a similar problem I was facing.
This worked for me.
def extractAndFill(df, datetimeColumnName):
    # Add 4 new columns for weekday, hour, month and year
    df['pickup_date_weekday'] = 0
    df['pickup_date_hour'] = 0
    df['pickup_date_month'] = 0
    df['pickup_date_year'] = 0
    # Iterate through each row and update the values for weekday, hour, month and year
    for index, row in df.iterrows():
        # Get weekday, hour, month and year
        w, h, m, y = extractDateParts(row[datetimeColumnName])
        # Update the values on the frame itself; assigning to the row copy
        # yielded by iterrows() would not persist back into df
        df.loc[index, 'pickup_date_weekday'] = w
        df.loc[index, 'pickup_date_hour'] = h
        df.loc[index, 'pickup_date_month'] = m
        df.loc[index, 'pickup_date_year'] = y
    return df
df1 = df1.compute()  # materialize the Dask frame as pandas first
df1 = extractAndFill(df1, 'pickup_datetime')
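As a sketch of a vectorized alternative, assuming pickup_datetime is already a datetime column: the .dt accessor derives all four parts without iterating rows (column names taken from the answer above):
df1['pickup_date_weekday'] = df1['pickup_datetime'].dt.weekday
df1['pickup_date_hour'] = df1['pickup_datetime'].dt.hour
df1['pickup_date_month'] = df1['pickup_datetime'].dt.month
df1['pickup_date_year'] = df1['pickup_datetime'].dt.year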
I have a dataframe "bb" like this:
Response Unique Count
I love it so much! 246_0 1
This is not bad, but can be better. 246_1 2
Well done, let's do it. 247_0 1
If Count is larger than 1, I would like to split the string and make the dataframe "bb" become this (result I expected):
Response Unique
I love it so much! 246_0
This is not bad 246_1_0
but can be better. 246_1_1
Well done, let's do it. 247_0
My code:
bb = DataFrame(bb[bb['Count'] > 1].Response.str.split(',').tolist(), index=bb[bb['Count'] > 1].Unique).stack()
bb = bb.reset_index()[[0, 'Unique']]
bb.columns = ['Response','Unique']
bb=bb.replace('', np.nan)
bb=bb.dropna()
print(bb)
But the result is like this:
Response Unique
0 This is not bad 246_1
1 but can be better. 246_1
How can I keep the original dataframe in this case?
First split only the values matching the condition into a new helper Series, then add counter values with GroupBy.cumcount, but only for duplicated index values identified by Index.duplicated:
s = df.loc[df.pop('Count') > 1, 'Response'].str.split(',', expand=True).stack()
df1 = df.join(s.reset_index(drop=True, level=1).rename('Response1'))
df1['Response'] = df1.pop('Response1').fillna(df1['Response'])
mask = df1.index.duplicated(keep=False)
df1.loc[mask, 'Unique'] += df1[mask].groupby(level=0).cumcount().astype(str).radd('_')
df1 = df1.reset_index(drop=True)
print(df1)
Response Unique
0 I love it so much! 246_0
1 This is not bad 246_1_0
2 but can be better. 246_1_1
3 Well done, let's do it. 247_0
EDIT: If you need _0 appended to all the other values too, remove the mask:
s = df.loc[df.pop('Count') > 1, 'Response'].str.split(',', expand=True).stack()
df1 = df.join(s.reset_index(drop=True, level=1).rename('Response1'))
df1['Response'] = df1.pop('Response1').fillna(df1['Response'])
df1['Unique'] += df1.groupby(level=0).cumcount().astype(str).radd('_')
df1 = df1.reset_index(drop=True)
print(df1)
Response Unique
0 I love it so much! 246_0_0
1 This is not bad 246_1_0
2 but can be better. 246_1_1
3 Well done, let's do it. 247_0_0
Step-wise we can solve this problem as follows:
Split your dataframe in two by Count
Use the explode_str function from a linked answer (shown below) to explode the string to rows
Group by the index and use cumcount to get the correct Unique column values
Finally concat the dataframes together again
df1 = df[df['Count'].ge(2)] # all rows which have a count 2 or higher
df2 = df[df['Count'].eq(1)] # all rows which have count 1
df1 = explode_str(df1, 'Response', ',') # explode the string to rows on comma delimiter
# Create the correct unique column
df1['Unique'] = df1['Unique'] + '_' + df1.groupby(df1.index).cumcount().astype(str)
df = pd.concat([df1, df2]).sort_index().drop('Count', axis=1).reset_index(drop=True)
Response Unique
0 I love it so much! 246_0
1 This is not bad 246_1_0
2 but can be better. 246_1_1
3 Well done, let's do it. 247_0
Function used from linked answer:
import numpy as np

def explode_str(df, col, sep):
    s = df[col]
    i = np.arange(len(s)).repeat(s.str.count(sep) + 1)
    return df.iloc[i].assign(**{col: sep.join(s).split(sep)})
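On pandas 0.25 or newer, the built-in DataFrame.explode can replace this helper for the df1 half above; a sketch:
df1 = df1.assign(Response=df1['Response'].str.split(',')).explode('Response')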
df1 = pd.read_excel(mxln) # Loads master xlsx for comparison
df2 = pd.read_excel(sfcn) # Loads student xlsx for comparison
difference = df2[df2 != df1] # Scans for differences
Wherever there is a difference, I want to store those cell locations in a list. It needs to be in the format 'A1' (not something like [1, 1]) so I can pass it through this:
redFill = PatternFill(start_color='FFEE1111', end_color='FFEE1111', fill_type='solid')
lsws['A1'].fill = redFill
lsfh.save(sfcn)
I've looked at solutions like this, but I couldn't get it to work/don't understand it. For example, the following doesn't work:
def highlight_cells():
    df1 = pd.read_excel(mxln)  # Loads master xlsx for comparison
    df2 = pd.read_excel(sfcn)  # Loads student xlsx for comparison
    difference = df2[df2 != df1]  # Scans for differences
    return ['background-color: yellow']
df2.style.apply(highlight_cells)
To get the difference cells from two pandas.DataFrames as Excel coordinates, you can do:
Code:
import numpy as np
from openpyxl.utils import get_column_letter as column_letter

def diff_cell_indices(dataframe1, dataframe2):
    x_ofs = dataframe1.columns.nlevels + 1
    y_ofs = dataframe1.index.nlevels + 1
    return [column_letter(x + x_ofs) + str(y + y_ofs) for
            y, x in zip(*np.where(dataframe1 != dataframe2))]
Test Code:
import pandas as pd
df1 = pd.read_excel('test.xlsx')
print(df1)
df2 = df1.copy()
df2.loc['R2', 'C'] = 1
print(df2)
print(diff_cell_indices(df1, df2))
Results:
B C
R2 2 3
R3 4 5
B C
R2 2 1
R3 4 5
['C2']
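To wire these coordinates into the openpyxl highlighting from the question (a sketch; lsws, redFill, lsfh and sfcn as defined there):
for cell in diff_cell_indices(df1, df2):
    lsws[cell].fill = redFill
lsfh.save(sfcn)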
Morning,
I have 3 Excel files that I have imported via read_excel. I am trying to create a DataFrame which takes the 'Ticker' column from each import, adds the title of the Excel file as a 'Sector' column, and appends them to each other to create a new DataFrame. This new DataFrame will then be exported to Excel.
AA = ['Aero&Def','REITs', 'Auto&Parts']

File = 'FTSEASX_'+AA[0]+'_Price.xlsx'
xlsx = pd.ExcelFile('C:/Users/Ben/'+File)
df = pd.read_excel(xlsx, 'Price_Data')
df = df[df.Identifier.notnull()]
df.fillna(0)

a = []
b = []
for i in df['Ticker']:
    a.append(i)
    b.append(AA[0])

raw_data = {'Ticker': a, 'Sector': b}
df2 = pd.DataFrame(raw_data, columns = ['Ticker', 'Sector'])

del AA[0]

for j in AA:
    File = 'FTSEASX_'+j+'_Price.xlsx'
    xlsx = pd.ExcelFile('C:/Users/Ben/'+File)
    df3 = pd.read_excel(xlsx, 'Price_Data')
    df3 = df3[df3.Identifier.notnull()]
    df3.fillna(0)
    a = []
    b = []
    for i in df3['Ticker']:
        a.append(i)
        b.append(j)
    raw_data = {'Ticker': a, 'Sector': b}
    df4 = pd.DataFrame(raw_data, columns = ['Ticker', 'Sector'])
    df5 = df2.append(df4)
I am currently getting the below, but obviously the 2nd import, titled 'REITs', is not getting captured.
Ticker Sector
0 AVON-GB Aero&Def
1 BA-GB Aero&Def
2 COB-GB Aero&Def
3 MGGT-GB Aero&Def
4 SNR-GB Aero&Def
5 ULE-GB Aero&Def
6 QQ-GB Aero&Def
7 RR-GB Aero&Def
8 CHG-GB Aero&Def
0 GKN-GB Auto&Parts
How would I go about achieving this? Or is there a better, more pythonic way of achieving this?
I would do it this way:
import pandas as pd
AA = ['Aero&Def','REITs', 'Auto&Parts']
# assuming that ['Ticker','Sector','Identifier'] columns are in 'B,D,E' Excel columns
xl_cols = 'B,D,E'

dfs = [pd.read_excel('FTSEASX_{0}_Price.xlsx'.format(f),
                     'Price_Data',
                     usecols=xl_cols,  # parse_cols= on older pandas versions
                     ).query('Identifier == Identifier')
       for f in AA]
df = pd.concat(dfs, ignore_index=True)
print(df[['Ticker', 'Sector']])
Explanation:
.query('Identifier == Identifier') gives you only those rows where Identifier is NOT NULL (using the fact that NaN == NaN is always False)
PS You don't want to loop through your data frames when working with Pandas...
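If the query idiom feels too clever, a sketch of an equivalent using dropna makes the intent explicit:
dfs = [pd.read_excel('FTSEASX_{0}_Price.xlsx'.format(f), 'Price_Data')
           .dropna(subset=['Identifier'])
       for f in AA]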