Add new column in the Dataset with condition? - python

I am new to pandas and Python, but I want to add a new column that collects all of a row's data and pastes it into the new column, for example:
df_final = pd.read_csv('df_final.csv')
House_No = df_final['House_No_'].copy()
Street = df_final['Street'].copy()
City = df_final['City'].copy()
District = df_final['District'].copy()
Postl_Code = df_final['Postl_Code'].copy()
# astype(str) converts each value; str(Postl_Code) would stringify the whole Series
df_final['Full_Address'] = (House_No + ' , ' + Street + ' , ' + City
                            + ' , ' + District + ' , ' + Postl_Code.astype(str))
The output is: when House_No is null, the new cell becomes null as well (see rows 7, 8 and 9 in the image).
How can I ignore the null cell and just take the rest of the row?
Thank you in advance.

import numpy as np
import pandas as pd
# sample data
np.random.seed(1)
df = pd.DataFrame(np.random.randint(1,10, size=(5,5)), columns=list('abcde'))
df.iloc[2, 0] = np.nan
# use np.where with join
df['new_col'] = np.where(df['a'].isna(),
                         df.iloc[:, 1:].astype(str).apply(', '.join, axis=1),
                         df.astype(str).apply(', '.join, axis=1))
a b c d e new_col
0 6.0 9 6 1 1 6.0, 9, 6, 1, 1
1 2.0 8 7 3 5 2.0, 8, 7, 3, 5
2 NaN 3 5 3 5 3, 5, 3, 5
3 8.0 8 2 8 1 8.0, 8, 2, 8, 1
4 7.0 8 7 2 1 7.0, 8, 7, 2, 1
Or, if you do not mind the nan appearing in the final output, simply do:
df['new_col1'] = df.astype(str).apply(', '.join, axis=1)
a b c d e new_col1
0 6.0 9 6 1 1 6.0, 9, 6, 1, 1
1 2.0 8 7 3 5 2.0, 8, 7, 3, 5
2 NaN 3 5 3 5 nan, 3, 5, 3, 5
3 8.0 8 2 8 1 8.0, 8, 2, 8, 1
4 7.0 8 7 2 1 7.0, 8, 7, 2, 1
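Applied to the original address columns, a more general pattern (a minimal sketch; df_final and the column names are taken from the question above) is to drop the missing cells of each row before joining, which handles a null in any column, not just House_No_:

import pandas as pd

df_final = pd.read_csv('df_final.csv')
cols = ['House_No_', 'Street', 'City', 'District', 'Postl_Code']

# dropna() removes the missing cells of each row, so a null in any
# address column is simply skipped instead of nulling the whole result
df_final['Full_Address'] = df_final[cols].apply(
    lambda row: ' , '.join(row.dropna().astype(str)), axis=1)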


Pandas replace with dict and condition

In pandas you have the function df.replace(), to which you can give a dict to change the values in a column:
df = pd.DataFrame({'A': [0, 1, 2, 3, 4],
                   'B': [5, 6, 7, 8, 9],
                   'C': ['a', 'b', 'c', 'd', 'e']})
df.replace({'A': {0: 10, 3: 100}})
Is it possible to add a condition to this? For example that it will only replace the values in the A column if the value in the B column is smaller than 8.
Using where:
df['A'] = df['A'].replace({0: 10, 3: 100}).where(df['B'].lt(8), df['A'])
output:
A B C
0 10 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
Try this:
df.update(df['A'][df['B'] < 8].replace({0: 10, 3: 100}))
Output:
>>> df
A B C
0 10.0 5 a
1 1.0 6 b
2 2.0 7 c
3 3.0 8 d
4 4.0 9 e
Notice how A at row 3 is not 100, but 3.0 (the old value), because B at row 3 is 8, which, per your condition, is not less than 8.
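An equivalent approach (a sketch against the same df) is to build the boolean mask once and apply replace only to the selected rows via .loc, which leaves the other rows and the column dtype untouched:

# replace values in 'A' only where 'B' < 8; rows failing the condition keep their value
mask = df['B'] < 8
df.loc[mask, 'A'] = df.loc[mask, 'A'].replace({0: 10, 3: 100})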

Mapping Tuple Dictionary to Multiple columns of a DataFrame

I've got a PDB DataFrame with residue insertion codes. Simplified example:
d = {'ATOM' : [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
     'residue_number' : [2, 2, 2, 3, 3, 3, 3, 3, 3, 5, 5, 5],
     'insertion' : ['', '', '', '', '', '', 'A', 'A', 'A', '', '', '']}
df = pd.DataFrame(data=d)
Dataframe:
ATOM residue_number insertion
0 1 2
1 2 2
2 3 2
3 4 3
4 5 3
5 6 3
6 7 3 A
7 8 3 A
8 9 3 A
9 10 5
10 11 5
11 12 5
I need to renumber the residues according to a different numbering and insertion scheme. Output from the renumbering script can be formatted into a dictionary of tuples, e.g.
my_dict = {(2,): 1, (3,): 2, (3, 'A') : 3, (5, ) : (4, 'A') }
Is it possible to map this dictionary of tuples onto the two columns ['residue_number', 'insertion']? The desired output would be:
ATOM residue_number insertion
0 1 1
1 2 1
2 3 1
3 4 2
4 5 2
5 6 2
6 7 3
7 8 3
8 9 3
9 10 4 A
10 11 4 A
11 12 4 A
I've been searching and banging my head on this for a few days; I've tried mapping and MultiIndex but can't seem to find a way to map a dictionary of tuples across multiple columns. I feel like I'm thinking about it wrong somehow. Thanks for any advice!
In this case I think you need to define a function that takes your old residue_number and insertion as input and outputs the new ones. For that, I will work directly from the df, so, to avoid extra coding, I will redefine the keys of your my_dict from (2,) to (2, '').
Here is the code:
import pandas as pd

d = {'ATOM' : [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
     'residue_number' : [2, 2, 2, 3, 3, 3, 3, 3, 3, 5, 5, 5],
     'insertion' : ['', '', '', '', '', '', 'A', 'A', 'A', '', '', '']}
df = pd.DataFrame(data=d)

# Our new dict with keys and values as tuples
my_new_dict = {(2,''): (1,''), (3,''): (2,''), (3,'A'): (3,''), (5,''): (4,'A')}

# A function that maps a tuple (residue_number, insertion) to the
# new residue_number and insertion values
def new_residue_number(residue_number, insertion, my_new_dict):
    # keys are tuples
    key = (residue_number, insertion)
    # return new residue_number and insertion values
    return my_new_dict[key]

# Example to see how this works
print(new_residue_number(2, '', my_new_dict))   # Output: (1, '')
print(new_residue_number(5, '', my_new_dict))   # Output: (4, 'A')
print(new_residue_number(3, 'A', my_new_dict))  # Output: (3, '')

# Now we apply this to our df and save the result in two new columns
df[['new_residue_number', 'new_insertion']] = df.apply(
    lambda row: pd.Series(new_residue_number(row['residue_number'],
                                             row['insertion'], my_new_dict)),
    axis=1)
I hope this can solve your problem!
I think we can create a DataFrame from your dictionary after modifying it to set all values as tuples, and then use DataFrame.join. I think this is easier (and recommended) if we convert the blank values of the insertion column to NaN.
import numpy as np

new_df = (df.assign(insertion=df['insertion'].replace(r'^\s*$', np.nan, regex=True)
                                             .mask(df['insertion'].isnull()))
            .join(pd.DataFrame({x: (y if isinstance(y, tuple) else (y, np.nan))
                                for x, y in my_dict.items()},
                               index=['new_residue_number', 'new_insertion']).T,
                  on=['residue_number', 'insertion'])
            .fillna('')
            .drop(['residue_number', 'insertion'], axis=1)
            .rename(columns={'new_residue_number': 'residue_number',
                             'new_insertion': 'insertion'}))
print(new_df)
ATOM residue_number insertion
0 1 1.0
1 2 1.0
2 3 1.0
3 4 2.0
4 5 2.0
5 6 2.0
6 7 3.0
7 8 3.0
8 9 3.0
9 10 4.0 A
10 11 4.0 A
11 12 4.0 A
Detail
print(pd.DataFrame({x: (y if isinstance(y, tuple) else (y, np.nan))
                    for x, y in my_dict.items()},
                   index=['new_residue_number', 'new_insertion']).T)
new_residue_number new_insertion
2 NaN 1 NaN
3 NaN 2 NaN
A 3 NaN
5 NaN 4 A
The logic here is a simple merge, but we need to do a lot of work to turn that dictionary into a DataFrame suitable for mapping. I'd reconsider whether you can store the renumbering output as my final `s` DataFrame from the start.
#Turn the dict into a mapping
s = pd.DataFrame(my_dict.values())[0].explode().to_frame()
s['idx'] = s.groupby(level=0).cumcount()
s = (s.pivot(columns='idx', values=0)
.rename_axis(None, axis=1)
.rename(columns={0: 'new_res', 1: 'new_ins'}))
s.index = pd.MultiIndex.from_tuples([*my_dict.keys()], names=['residue_number', 'insertion'])
s = s.reset_index().fillna('') # Because you have '' not NaN
# residue_number insertion new_res new_ins
#0 2 1
#1 3 2
#2 3 A 3
#3 5 4 A
The mapping is now a merge. I'll leave all columns in for clarity of the logic, but you can use the commented out code to drop the original columns and rename the new columns.
df = df.merge(s, how='left')
# Your real output with
#df = (df.merge(s, how='left')
# .drop(columns=['residue_number', 'insertion'])
# .rename(columns={'new_res': 'residue_number',
# 'new_ins': 'insertion'}))
ATOM residue_number insertion new_res new_ins
0 1 2 1
1 2 2 1
2 3 2 1
3 4 3 2
4 5 3 2
5 6 3 2
6 7 3 A 3
7 8 3 A 3
8 9 3 A 3
9 10 5 4 A
10 11 5 4 A
11 12 5 4 A
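If you prefer to avoid building a mapping DataFrame altogether, a compact variant (a sketch, assuming my_dict and df as defined in the question) is to normalise the dictionary once so every key and value is a (number, insertion) pair, then look the zipped columns up directly:

# pad every key/value out to a (residue_number, insertion) pair
norm = {(k + ('',))[:2]: (v if isinstance(v, tuple) else (v, ''))
        for k, v in my_dict.items()}

# look each (residue_number, insertion) row up in the normalised dict
df[['residue_number', 'insertion']] = [
    norm[pair] for pair in zip(df['residue_number'], df['insertion'])]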

Add multiple dataframes based on one column

I have several hundred dataframes with the same column names, like this:
df1
wave num stlines fwhm EWs MeasredWave
0 4050.32 3 0.28269 0.07365 22.16080 4050.311360
1 4208.98 5 0.48122 0.08765 44.90035 4208.972962
2 4374.94 9 0.71483 0.11429 86.96497 4374.927110
3 4379.74 9 0.31404 0.09107 30.44271 4379.760601
4 4398.01 14 0.50415 0.09845 52.83236 4398.007473
5 5520.50 1 0.06148 0.12556 8.21685 5520.484742
df2
wave num stlines fwhm EWs MeasredWave
0 4050.32 3 0.28616 0.07521 22.91064 4050.327388
1 4208.98 6 0.48781 0.08573 44.51609 4208.990029
2 4374.94 9 0.71548 0.11437 87.10152 4374.944513
3 4379.74 10 0.31338 0.09098 30.34791 4379.778009
4 4398.01 15 0.49950 0.08612 45.78707 4398.020367
5 4502.21 9 0.56362 0.10114 60.67868 4502.223123
6 4508.28 3 0.69554 0.11600 85.88428 4508.291777
7 4512.99 2 0.20486 0.08891 19.38745 4512.999332
8 5520.50 1 0.06148 0.12556 8.21685 5520.484742
That's how I'm reading them:
path_to_files = '/home/Desktop/computed_2d/'
lst = []
for filen in dir1:  # dir1: the list of file names in path_to_files
    df = pd.read_table(path_to_files + filen, skiprows=0,
                       usecols=(0, 1, 2, 3, 4, 8),
                       names=['wave', 'num', 'stlines', 'fwhm', 'EWs', 'MeasredWave'],
                       delimiter=r'\s+')
    lst.append(df)
The desired result should look like this:
wave num stlines fwhm EWs MeasredWave
0 4050.32 3.0 0.284425 0.074430 22.535720 4050.319374
1 4208.98 5.5 0.484515 0.086690 44.708220 4208.981496
2 4374.94 9.0 0.715155 0.114330 87.033245 4374.935812
3 4379.74 9.5 0.313710 0.091025 30.395310 4379.769305
4 4398.01 14.5 0.501825 0.092285 49.309715 4398.013920
5 4502.21 9 0.56362 0.10114 60.67868 4502.223123
6 4508.28 3 0.69554 0.11600 85.88428 4508.291777
7 4512.99 2 0.20486 0.08891 19.38745 4512.999332
8 5520.50 1.0 0.061480 0.125560 8.216850 5520.484742
As you can see, the numbers of rows are not the same. Now I want to take the average of all the dataframes based on the first column, wave, and I want to make sure that each index of column wave of df1 gets added to the correct index of df2.
You can merge the two dataframes on the wave column with how='outer', strip the _x/_y suffixes so matching columns share a name, and take the average of the respective columns:
df3 = pd.merge(df1, df2, on=['wave'], how='outer')
df4 = df3.rename(columns=lambda x: x.split('_')[0]).T
df4.groupby(df4.index).mean().T
Out:
EWs MeasredWave fwhm num stlines wave
0 22.535720 4050.319374 0.074430 3.0 0.284425 4050.32
1 44.708220 4208.981496 0.086690 5.5 0.484515 4208.98
2 87.033245 4374.935812 0.114330 9.0 0.715155 4374.94
3 30.395310 4379.769305 0.091025 9.5 0.313710 4379.74
4 49.309715 4398.013920 0.092285 14.5 0.501825 4398.01
5 8.216850 5520.484742 0.125560 1.0 0.061480 5520.50
6 60.678680 4502.223123 0.101140 9.0 0.563620 4502.21
7 85.884280 4508.291777 0.116000 3.0 0.695540 4508.28
8 19.387450 4512.999332 0.088910 2.0 0.204860 4512.99
Here is an example to do what you need:
import pandas as pd
df1 = pd.DataFrame({'A': [0, 1, 2, 3],
'B': [0, 1, 2, 3],
'C': [0, 1, 2, 3],
'D': [0, 1, 2, 3]},
index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': [4, 5, 6, 7],
'B': [4, 5, 6, 7],
'C': [4, 5, 6, 7],
'D': [4, 5, 6, 7]},
index=[0, 1, 2, 3])
df3 = pd.DataFrame({'A': [8, 9, 10, 11],
'B': [8, 9, 10, 11],
'C': [8, 9, 10, 11],
'D': [8, 9, 10, 11]},
index=[0, 1, 2, 3])
df4 = pd.concat([df1, df2, df3])
df5 = pd.concat([df1, df2, df3], ignore_index=True)
print(df4)
print('\n\n')
print(df5)
print(f"Average of column A = {df4['A'].mean()}")
You will have
A B C D
0 0 0 0 0
1 1 1 1 1
2 2 2 2 2
3 3 3 3 3
0 4 4 4 4
1 5 5 5 5
2 6 6 6 6
3 7 7 7 7
0 8 8 8 8
1 9 9 9 9
2 10 10 10 10
3 11 11 11 11
A B C D
0 0 0 0 0
1 1 1 1 1
2 2 2 2 2
3 3 3 3 3
4 4 4 4 4
5 5 5 5 5
6 6 6 6 6
7 7 7 7 7
8 8 8 8 8
9 9 9 9 9
10 10 10 10 10
11 11 11 11 11
Average of column A = 5.5
The answer from @Naga Kiran is great. I updated the whole solution here:
import pandas as pd
df1 = pd.DataFrame(
{'wave' : [4050.32, 4208.98, 4374.94, 4379.74, 4398.01, 5520.50],
'num' : [3, 5, 9, 9, 14, 1],
'stlines' : [0.28269, 0.48122, 0.71483, 0.31404, 0.50415, 0.06148],
'fwhm' : [0.07365, 0.08765, 0.11429, 0.09107, 0.09845, 0.12556],
'EWs' : [22.16080, 44.90035, 86.96497, 30.44271, 52.83236, 8.21685],
'MeasredWave' : [4050.311360, 4208.972962, 4374.927110, 4379.760601, 4398.007473, 5520.484742]},
index=[0, 1, 2, 3, 4, 5])
df2 = pd.DataFrame(
{'wave' : [4050.32, 4208.98, 4374.94, 4379.74, 4398.01, 4502.21, 4508.28, 4512.99, 5520.50],
'num' : [3, 6, 9, 10, 15, 9, 3, 2, 1],
'stlines' : [0.28616, 0.48781, 0.71548, 0.31338, 0.49950, 0.56362, 0.69554, 0.20486, 0.06148],
'fwhm' : [0.07521, 0.08573, 0.11437, 0.09098, 0.08612, 0.10114, 0.11600, 0.08891, 0.12556],
'EWs' : [22.91064, 44.51609, 87.10152, 30.34791, 45.78707, 60.67868, 85.88428, 19.38745, 8.21685],
'MeasredWave' : [4050.327388, 4208.990029, 4374.944513, 4379.778009, 4398.020367, 4502.223123, 4508.291777, 4512.999332, 5520.484742]},
index=[0, 1, 2, 3, 4, 5, 6, 7, 8])
df3 = pd.merge(df1, df2, on='wave', how='outer')
df4 = df3.rename(columns = lambda x: x.split('_')[0]).T
df5 = df4.groupby(df4.index).mean().T
df6 = df5[['wave', 'num', 'stlines', 'fwhm', 'EWs', 'MeasredWave']]
df7 = df6.sort_values('wave', ascending = True).reset_index(drop=True)
df7
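Since the real input is several hundred frames, a more scalable variant (a minimal sketch, reusing the lst built in the reading loop above and assuming all files share the same columns) is to concatenate everything once and group on wave, so each wave is averaged over however many files actually contain it:

import pandas as pd

# stack all frames vertically, then average every column per wave value
combined = pd.concat(lst, ignore_index=True)
result = (combined.groupby('wave', as_index=False)
                  .mean()
                  .sort_values('wave')
                  .reset_index(drop=True))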

create a dataframe from 3 other dataframes in python

I am trying to create a new df which summarises my key information, by taking that information from 3 (say) other dataframes.
dfdate = {'x1': [2, 4, 7, 5, 6],
          'x2': [2, 2, 2, 6, 7],
          'y1': [3, 1, 4, 5, 9]}
dfdate = pd.DataFrame(dfdate, index=range(5))
dfqty = {'x1': [1, 2, 6, 6, 8],
         'x2': [3, 1, 1, 7, 5],
         'y1': [2, 4, 3, 2, 8]}
dfqty = pd.DataFrame(dfqty, index=range(5))
dfprices = {'x1': [0, 2, 2, 4, 4],
            'x2': [2, 0, 0, 3, 4],
            'y1': [1, 3, 2, 1, 3]}
dfprices = pd.DataFrame(dfprices, index=range(5))
Let us say the above 3 dataframes are my data. Say, some dates, qty, and prices of goods. My new df is to be constructed from the above data:
rng = len(dfprices.columns) * len(dfprices.index)  # This is the len of the new df
dfnew = pd.DataFrame(np.nan, index=range(rng),
                     columns=['Letter', 'Number', 'date', 'qty', 'price'])
Now, this is where I struggle to piece my stuff together. I am trying to take all the data in dfdate and put it into a column in the new df, and the same with dfqty and dfprices (so three 3x5 matrices essentially go to a 1x15 vector each and are placed into the new df).
As well as that, I need a couple of columns in dfnew as identifiers, taken from the names of the columns of the old dfs.
I've tried for loops but to no avail, and I don't know how to convert a df to a Series. But my desired output is:
dfnew:
'Lettercol','Numbercol', 'date', 'qty', 'price'
0 X 1 2 1 0
1 X 1 4 2 2
2 X 1 7 6 2
3 X 1 5 6 4
4 X 1 6 8 4
5 X 2 2 3 2
6 X 2 2 1 0
7 X 2 2 1 0
8 X 2 6 7 3
9 X 2 7 5 4
10 Y 1 3 2 1
11 Y 1 1 4 3
12 Y 1 4 3 2
13 Y 1 5 2 1
14 Y 1 9 8 3
where the numbers 0-14 are the index.
letter = letter from col header in DFs
number = number from col header in DFs
next 3 columns are data from the orig df's
(don't ask why the original data is in that funny format :)
thanks so much. my last Q wasn't well received so have tried to make this one better, thanks
Use:
#list of DataFrames
dfs = [dfdate, dfqty, dfprices]
#list comprehension with reshape
comb = [x.unstack() for x in dfs]
#join together
df = pd.concat(comb, axis=1, keys=['date', 'qty', 'price'])
#remove second level of MultiIndex and index to column
df = df.reset_index(level=1, drop=True).reset_index().rename(columns={'index':'col'})
#extract all values without first by indexing [1:] and first letter by [0]
df['Number'] = df['col'].str[1:]
df['Letter'] = df['col'].str[0]
cols = ['Letter', 'Number', 'date', 'qty', 'price']
#change order of columns
df = df.reindex(columns=cols)
print (df)
Letter Number date qty price
0 x 1 2 1 0
1 x 1 4 2 2
2 x 1 7 6 2
3 x 1 5 6 4
4 x 1 6 8 4
5 x 2 2 3 2
6 x 2 2 1 0
7 x 2 2 1 0
8 x 2 6 7 3
9 x 2 7 5 4
10 y 1 3 2 1
11 y 1 1 4 3
12 y 1 4 3 2
13 y 1 5 2 1
14 y 1 9 8 3
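A shorter route to the same table (a sketch, reusing dfdate, dfqty and dfprices from the question; the 'measure' and 'col' level names are my own) is to concatenate the three frames side by side with named column levels and stack the shared x1/x2/y1 level into rows:

# concat with a dict gives a two-level column index: (measure, col)
wide = pd.concat({'date': dfdate, 'qty': dfqty, 'price': dfprices}, axis=1)
wide.columns.names = ['measure', 'col']

# stack the col level into the rows, then split it into Letter/Number
long = (wide.stack('col')
            .reset_index(level='col')
            .sort_values('col', kind='stable')
            .reset_index(drop=True))
long['Letter'] = long['col'].str[0]
long['Number'] = long['col'].str[1:]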

Pandas Dataframe: Expand rows with lists to multiple row with desired indexing for all columns

I have time series data in a pandas dataframe, with the index as the time at the start of each measurement and columns containing lists of values recorded at a fixed sampling rate (the difference between consecutive indices divided by the number of elements in the list).
Here is the what it looks like:
Time A B ....... Z
0 [1, 2, 3, 4] [1, 2, 3, 4]
2 [5, 6, 7, 8] [5, 6, 7, 8]
4 [9, 10, 11, 12] [9, 10, 11, 12]
6 [13, 14, 15, 16] [13, 14, 15, 16 ]
...
I want to expand each row in all the columns to multiple rows such that:
Time A B .... Z
0 1 1
0.5 2 2
1 3 3
1.5 4 4
2 5 5
2.5 6 6
.......
So far I am thinking along these lines (the code doesn't work):
def expand_row(dstruc):
    for i in range(len(dstruc)):
        for j in range(1, len(dstruc[i])):
            dstruc.loc[i + j/len(dstruc[i])] = dstruc[i][j]
        dstruc.loc[i] = dstruc[i][0]
    return dstruc

expanded = testdf.apply(expand_row)
I also tried using split(',') and stack() together but I am not able to fix my indexing appropriately.
import numpy as np
import pandas as pd

df = pd.DataFrame({key: list(zip(*[iter(range(1, 17))]*4)) for key in list('ABC')},
                  index=range(0, 8, 2))

result = pd.DataFrame.from_items(
    [(index, zipped) for index, row in df.iterrows() for zipped in zip(*row)],
    orient='index', columns=df.columns)
result.index.name = 'Time'

grouped = result.groupby(level=0)
increment = (grouped.cumcount() / grouped.size())
result.index = result.index + increment
print(result)
yields
In [183]: result
Out[183]:
A B C
Time
0.00 1 1 1
0.25 2 2 2
0.50 3 3 3
0.75 4 4 4
2.00 5 5 5
2.25 6 6 6
2.50 7 7 7
2.75 8 8 8
4.00 9 9 9
4.25 10 10 10
4.50 11 11 11
4.75 12 12 12
6.00 13 13 13
6.25 14 14 14
6.50 15 15 15
6.75 16 16 16
Explanation:
One way to loop over the contents of list is to use a list comprehension:
In [172]: df = pd.DataFrame({key: zip(*[iter(range(1, 17))]*4) for key in list('ABC')}, index=range(2,10,2))
In [173]: [(index, zipped) for index, row in df.iterrows() for zipped in zip(*row)]
Out[173]:
[(0, (1, 1, 1)),
(0, (2, 2, 2)),
...
(6, (15, 15, 15)),
(6, (16, 16, 16))]
Once you have the values in the above form, you can build the desired DataFrame with pd.DataFrame.from_items:
result = pd.DataFrame.from_items([(index, zipped) for index, row in df.iterrows() for zipped in zip(*row)], orient='index', columns=df.columns)
result.index.name = 'Time'
yields
In [175]: result
Out[175]:
A B C
Time
0 1 1 1
0 2 2 2
...
6 15 15 15
6 16 16 16
To compute the increments to be added to the index, you can group by the index and find the ratio of the cumcount to the size of each group:
In [176]: grouped = result.groupby(level=0)
In [177]: increment = (grouped.cumcount()/grouped.size())
In [179]: result.index = result.index + increment
In [199]: result.index
Out[199]:
Int64Index([ 0.0, 0.25, 0.5, 0.75, 2.0, 2.25, 2.5, 2.75, 4.0, 4.25, 4.5,
4.75, 6.0, 6.25, 6.5, 6.75],
dtype='float64', name=u'Time')
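On recent pandas, pd.DataFrame.from_items is gone (it was removed in pandas 1.0), and the whole reshape can be done with DataFrame.explode instead; a minimal sketch, assuming pandas >= 1.3 (for multi-column explode) and the same df as above:

# explode every list-like column at once: one output row per element
result = df.explode(list(df.columns))

# spread each group evenly over its original index step (2 time units here)
grouped = result.groupby(level=0)
result.index = result.index + grouped.cumcount() / grouped.size() * 2
result.index.name = 'Time'
print(result)

Note the * 2 factor makes the increments fill the whole gap between indices (0, 0.5, 1.0, 1.5, ...), matching the spacing in the question's desired output.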
Probably not ideal, but this can be done using groupby and applying a function which returns the expanded DataFrame for each row (here the time difference is assumed to be fixed at 2.0, and Time is a regular column rather than the index):
def expand(x):
    data = {c: x[c].iloc[0] for c in x if c != 'Time'}
    n = len(data['A'])
    step = 2.0 / n
    data['Time'] = [x['Time'].iloc[0] + i*step for i in range(n)]
    return pd.DataFrame(data)

print(df.groupby('Time').apply(expand).set_index('Time', drop=True))
Output:
A B
Time
0.0 1 1
0.5 2 2
1.0 3 3
1.5 4 4
2.0 5 5
2.5 6 6
3.0 7 7
3.5 8 8
4.0 9 9
4.5 10 10
5.0 11 11
5.5 12 12
6.0 13 13
6.5 14 14
7.0 15 15
7.5 16 16
Say the dataframe to be expanded is named df_to_expand; you could do the following using eval:
df_expanded_list = []
for coln in df_to_expand.columns:
    _df = df_to_expand[coln].apply(
        lambda x: pd.Series(eval(x),
                            index=[coln + '_' + str(i) for i in range(len(eval(x)))]))
    df_expanded_list.append(_df)
df_expanded = pd.concat(df_expanded_list, axis=1)
References:
covert a string which is a list into a proper list python
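If the cells really are strings like '[1, 2, 3, 4]', ast.literal_eval is a safer drop-in for eval, since it only parses Python literals and cannot execute arbitrary code; a small sketch with a hypothetical two-row frame:

import ast
import pandas as pd

# hypothetical input: list-valued cells stored as strings
df_to_expand = pd.DataFrame({'A': ['[1, 2, 3, 4]', '[5, 6, 7, 8]']})

# literal_eval turns each string back into a real list before expansion
df_expanded = df_to_expand['A'].apply(lambda x: pd.Series(ast.literal_eval(x)))
print(df_expanded)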
