I have two databases:
Database 1 has the name of each observation and the different measurements that were taken for each observation, e.g.:
Database 2 contains the same observation names (but not all of them) and a carbon measurement for each.
I want to do these steps:
- add an empty column to database 1
- if a name in database 2 is also in database 1, take its carbon value and put it in the new column;
  if not, leave it empty.
I have tried to write something, but it's really just the beginning and I feel stuck:
NaN = np.nan
df['carbon'] = NaN
for i in df.loc['name']:
    if i in df_chemo.loc['sample name'] is in df.loc['name']:
I know it is just the beginning but I feel like I don't know how to write what I want.
My end goal: add a new column to database 1 that holds the values from database 2 only where the names match.
What you are looking for is the merge method:
df = df.merge(df_chemo, how='left', on='name')
Example:
import pandas as pd
df1 = pd.DataFrame({'x':[1,2,1,4], 'y':[11,22,33,44]})
print(df1, end='\n ------------- \n')
df2 = pd.DataFrame({'x':[1,2,5,7], 'z':list('abcd')})
print(df2, end='\n ------------- \n')
print(df1.merge(df2, on='x', how='left'))
Output:
x y
0 1 11
1 2 22
2 1 33
3 4 44
-------------
x z
0 1 a
1 2 b
2 5 c
3 7 d
-------------
x y z
0 1 11 a
1 2 22 b
2 1 33 a
3 4 44 NaN
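Mapped back to the original question (a sketch, assuming the observation names live in a column called 'name' in df and a column called 'sample name' in df_chemo, and that the carbon values sit in a column called 'carbon'), the key columns have different names, so you pass left_on/right_on instead of on:
# left join: keep every row of df, pull in 'carbon' where the names match
df = df.merge(
    df_chemo[['sample name', 'carbon']],  # column names are assumptions from the question
    how='left',
    left_on='name',
    right_on='sample name',
).drop(columns='sample name')             # drop the duplicated key column

# rows of df with no match in df_chemo get NaN in the new 'carbon' column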
Related
I'd like to concatenate two dataframes A, B to a new one without duplicate rows (if rows in B already exist in A, don't add):
Dataframe A:
I II
0 1 2
1 3 1
Dataframe B:
I II
0 5 6
1 3 1
New Dataframe:
I II
0 1 2
1 3 1
2 5 6
How can I do this?
The simplest way is to just do the concatenation, and then drop duplicates.
>>> df1
A B
0 1 2
1 3 1
>>> df2
A B
0 5 6
1 3 1
>>> pandas.concat([df1,df2]).drop_duplicates().reset_index(drop=True)
A B
0 1 2
1 3 1
2 5 6
The reset_index(drop=True) is to fix up the index after the concat() and drop_duplicates(). Without it you will have an index of [0,1,0] instead of [0,1,2]. This could cause problems for further operations on this dataframe down the road if it isn't reset right away.
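For illustration, here is the same concatenation without the reset_index call, using the dataframes above; the surviving rows keep their original labels, which is where the [0,1,0] index comes from:
>>> pandas.concat([df1,df2]).drop_duplicates()
   A  B
0  1  2
1  3  1
0  5  6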
In case you already have a duplicate row in DataFrame A that you want to keep, concatenating and then dropping duplicate rows will remove it as well.
In that case you can first create a helper column with a cumulative count, then drop duplicates, and finally drop the helper column: identical rows within one DataFrame get different counts (0, 1, ...), so they survive drop_duplicates, while a row repeated across the two DataFrames gets the same count in both and is dropped. Whether you need this depends on your use case, but it is common with time-series data.
Here is an example:
df_1 = pd.DataFrame([
{'date':'11/20/2015', 'id':4, 'value':24},
{'date':'11/20/2015', 'id':4, 'value':24},
{'date':'11/20/2015', 'id':6, 'value':34},])
df_2 = pd.DataFrame([
{'date':'11/20/2015', 'id':4, 'value':24},
{'date':'11/20/2015', 'id':6, 'value':14},
])
df_1['count'] = df_1.groupby(['date','id','value']).cumcount()
df_2['count'] = df_2.groupby(['date','id','value']).cumcount()
df_tot = pd.concat([df_1,df_2], ignore_index=False)
df_tot = df_tot.drop_duplicates()
df_tot = df_tot.drop(['count'], axis=1)
>>> df_tot
date id value
0 11/20/2015 4 24
1 11/20/2015 4 24
2 11/20/2015 6 34
1 11/20/2015 6 14
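To see why this keeps the duplicate row from df_1, it may help to look at df_1 right after the cumcount step: the second identical row gets count 1, so it no longer collides with the first one (or with the count-0 row coming from df_2) when drop_duplicates runs.
>>> df_1
         date  id  value  count
0  11/20/2015   4     24      0
1  11/20/2015   4     24      1
2  11/20/2015   6     34      0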
I'm surprised that pandas doesn't offer a native solution for this task.
I don't think that it's efficient to just drop the duplicates if you work with large datasets (as Rian G suggested).
It is probably most efficient to use sets to find the non-overlapping indices. Then use list comprehension to translate from index to 'row location' (boolean), which you need to access rows using iloc[,]. Below you find a function that performs the task. If you don't choose a specific column (col) to check for duplicates, then indexes will be used, as you requested. If you chose a specific column, be aware that existing duplicate entries in 'a' will remain in the result.
import pandas as pd

def append_non_duplicates(a, b, col=None):
    if ((a is not None and type(a) is not pd.core.frame.DataFrame)
            or (b is not None and type(b) is not pd.core.frame.DataFrame)):
        raise ValueError('a and b must be of type pandas.core.frame.DataFrame.')
    if a is None:
        return b
    if b is None:
        return a
    if col is not None:
        aind = a.iloc[:, col].values
        bind = b.iloc[:, col].values
    else:
        aind = a.index.values
        bind = b.index.values
    take_rows = list(set(bind) - set(aind))
    take_rows = [i in take_rows for i in bind]
    # DataFrame.append was removed in pandas 2.0; pd.concat does the same job here
    return pd.concat([a, b.iloc[take_rows, :]])
# Usage
a = pd.DataFrame([[1,2,3],[1,5,6],[1,12,13]], index=[1000,2000,5000])
b = pd.DataFrame([[1,2,3],[4,5,6],[7,8,9]], index=[1000,2000,3000])
append_non_duplicates(a,b)
# 0 1 2
# 1000 1 2 3 <- from a
# 2000 1 5 6 <- from a
# 5000 1 12 13 <- from a
# 3000 7 8 9 <- from b
append_non_duplicates(a,b,0)
# 0 1 2
# 1000 1 2 3 <- from a
# 2000 1 5 6 <- from a
# 5000 1 12 13 <- from a
# 2000 4 5 6 <- from b
# 3000 7 8 9 <- from b
Another option:
concatenation = pd.concat([
    dfA,
    dfB[~dfB['I'].isin(dfA['I'])],  # <-- keep only the rows of dfB whose value in column 'I' does not appear in dfA
])
The object concatenation will be:
I II
0 1 2
1 3 1
2 5 6
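The isin filter above only compares column 'I'. If rows should only count as duplicates when every column matches, one alternative (a sketch, not taken from the answers above) is a left merge with indicator=True, keeping the rows of dfB that have no full-row match in dfA:
import pandas as pd

dfA = pd.DataFrame({'I': [1, 3], 'II': [2, 1]})
dfB = pd.DataFrame({'I': [5, 3], 'II': [6, 1]})

# '_merge' is 'both' for rows of dfB that also exist in dfA, 'left_only' otherwise
only_in_b = dfB.merge(dfA, on=list(dfA.columns), how='left', indicator=True)
only_in_b = only_in_b[only_in_b['_merge'] == 'left_only'].drop(columns='_merge')

print(pd.concat([dfA, only_in_b], ignore_index=True))
#    I  II
# 0  1   2
# 1  3   1
# 2  5   6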
Let's say I created a dataframe as follows:
headings = {'A': [], 'B': []}
data = pd.DataFrame(headings)
Then as part of a for loop, the output of a function adds values to the data frame but only under 'B':
order = [1, 2, 3, 4]
for x in order:
    output = function(x)
    data = data.append(output)
Which produces this:
     A   B
0  NaN  23
1  NaN  24
2  NaN  26
3  NaN  27
I want to add x itself under 'A' as part of the for loop, so the output looks like this:
A B
0 1 23
1 2 24
2 3 26
3 4 27
I can't reveal the actual function/data as it's sensitive, but let me know if you need further info.
One way is to replace the NaN values with the value of interest inside the loop:
for x in order:
    output = function(x)
    data = data.append(output)
    data.fillna(value=x, inplace=True)  # inplace=True modifies data in place
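An alternative sketch, assuming function(x) returns a one-row DataFrame as above: write x into column 'A' directly before appending, which avoids filling any unrelated NaN values, and concatenate once at the end instead of appending inside the loop:
pieces = []
for x in order:
    output = function(x)
    output['A'] = x            # record the input value alongside the function's output
    pieces.append(output)

data = pd.concat(pieces, ignore_index=True)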
I have two pandas DF. Of unequal sizes. For example :
Df1
id value
a 2
b 3
c 22
d 5
Df2
id value
c 22
a 2
Now I want to extract from DF1 those rows which have the same id as in DF2. My first approach is to run 2 for loops, with something like:
x = []
for i in range(len(DF2)):
    for j in range(len(DF1)):
        if DF2['id'][i] == DF1['id'][j]:
            x.append(DF1.iloc[j])
This works, but with 2 files of 400,000 lines in one and 5,000 in the other, I need an efficient Pythonic + pandas way.
import pandas as pd
data1={'id':['a','b','c','d'],
'value':[2,3,22,5]}
data2={'id':['c','a'],
'value':[22,2]}
df1=pd.DataFrame(data1)
df2=pd.DataFrame(data2)
finaldf=pd.concat([df1,df2],ignore_index=True)
Output after concat
id value
0 a 2
1 b 3
2 c 22
3 d 5
4 c 22
5 a 2
Final Output
finaldf.drop_duplicates()
id value
0 a 2
1 b 3
2 c 22
3 d 5
You can concat the dataframes, keep only the rows whose id is duplicated (i.e. appears in both frames), then drop_duplicates to keep just the first occurrence:
m = pd.concat((df1,df2))
m[m.duplicated('id',keep=False)].drop_duplicates()
id value
0 a 2
2 c 22
You can try this:
df = df1[df1.set_index(['id']).index.isin(df2.set_index(['id']).index)]
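A simpler variant of the same idea (a sketch, assuming the column is literally named 'id' in both frames) filters on the column values directly instead of going through the index:
df = df1[df1['id'].isin(df2['id'])]
print(df)
#   id  value
# 0  a      2
# 2  c     22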
I have 3 different data-frames which can be generated using the code given below
import numpy as np
import pandas as pd

data_file = pd.DataFrame({'person_id':[1,2,3],'gender': ['Male','Female','Not disclosed'],'ethnicity': ['Chinese','Indian','European'],'Marital_status': ['Single','Married','Widowed'],'Smoke_status':['Yes','No','No']})
map_file= pd.DataFrame({'gender': ['1.Male','2. Female','3. Not disclosed'],'ethnicity': ['1.Chinese','2. Indian','3.European'],
'Marital_status':['1.Single','2. Married','3 Widowed'],'Smoke_status':['1. Yes','2. No',np.nan]})
hash_file = pd.DataFrame({'keys':['gender','ethnicity','Marital_status','Smoke_status','Yes','No','Male','Female','Single','Married','Widowed','Chinese','Indian','European'],'values':[21,22,23,24,125,126,127,128,129,130,131,141,142,0]})
And an empty dataframe into which the output should be filled can be generated using the code below:
columns = ['person_id','obsid','valuenum','valuestring','valueid']
obs = pd.DataFrame(columns=columns)
What I am trying to achieve is shown in the table where you can see the rules and description of how data is to be filled
I did try a for-loop approach, but as soon as I unstack it, I lose the column names and am not sure how to proceed further.
a = 1
for i in range(len(data_file)):
    df_temp = data_file[i:a]
    a = a + 1
    df_temp = df_temp.unstack()
    df_temp = df_temp.to_frame().reset_index()
How can I get my output dataframe to be filled as shown below (PS: I have shown it only for person_id = 1 and 4 columns)? In reality I have more than 25k persons and 400 columns for each person, so any elegant and efficient approach, unlike my for loop, is helpful.
After discussion in chat and removing the duplicates from the data, it is possible to use:
s = hash_file.set_index('VARIABLE')['concept_id']
df1 = map_file.melt().dropna(subset=['value'])
df1[['valueid','valuestring']] = df1.pop('value').str.extract(r'(\d+)\.(.+)')
df1['valuestring'] = df1['valuestring'].str.strip()
columns = ['studyid','obsid','valuenum','valuestring','valueid']
obs = data_file.melt('studyid', value_name='valuestring').sort_values('studyid')
#merge by 2 columns variable, valuestring
obs = (obs.merge(df1, on=['variable','valuestring'], how='left')
.rename(columns={'valueid':'valuenum'}))
obs['obsid'] = obs['variable'].map(s)
obs['valueid'] = obs['valuestring'].map(s)
#map by only one column variable
s1 = df1.drop_duplicates('variable').set_index('variable')['valueid']
obs['valuenum_new'] = obs['variable'].map(s1)
obs = obs.reindex(columns + ['valuenum_new'], axis=1)
print (obs)
#compare number of non missing rows
print (len(obs.dropna(subset=['valuenum'])))
print (len(obs.dropna(subset=['valuenum_new'])))
Here is an alternative approach using DataFrame.melt and Series.map:
# Solution for pandas V 0.24.0 +
columns = ['person_id','obsid','valuenum','valuestring','valueid']
# Create map Series
hash_map = hash_file.set_index('keys')['values']
value_map = map_file.stack().str.split(r'\.\s?', expand=True).set_index(1, append=True).droplevel(0)[0]
# Melt and add mapped columns
obs = data_file.melt(id_vars=['person_id'], value_name='valuestring')
obs['obsid'] = obs.variable.map(hash_map)
obs['valueid'] = obs.valuestring.map(hash_map).astype('Int64')
obs['valuenum'] = obs[['variable', 'valuestring']].apply(tuple, axis=1).map(value_map)
# Reindex and sort for desired output
obs.reindex(columns=columns).sort_values('person_id')
[out]
person_id obsid valuenum valuestring valueid
0 1 21 1 Male 127
3 1 22 1 Chinese 141
6 1 23 1 Single 129
9 1 24 1 Yes 125
1 2 21 2 Female 128
4 2 22 2 Indian 142
7 2 23 2 Married 130
10 2 24 2 No 126
2 3 21 3 Not disclosed NaN
5 3 22 3 European 0
8 3 23 3 Widowed 131
11 3 24 2 No 126