Check Series label does not exist in a separate DataFrame - python

I'm iterating over two separate dataframes, where one dataframe is a subset of the other. I need to ensure that only the columns in the set (df1) which are not contained in the subset (df2) pass the conditional statement.
In this case, that means comparing the Series object from each iteration over df1 to the dataframe df2. Ideally I would like to compare just the labels associated with each column, not the values contained in the columns. My code is below. Any help would be greatly appreciated!
for i in df1:
    for j in df2:
        if df1[i] not in df2:
            ...  # do some stuff between df1[i] and df2[j]

To find out if the values of df1 are in df2 you can use:
df1.isin(df2)
To find all values in df1 that are not in df2 you can use:
df1[~df1.isin(df2)]
The values that are in both df1 and df2 will be NaN in this case.
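If the goal is to compare just the column labels rather than the values, Index.difference gives the labels of df1 that are missing from df2. A minimal sketch (the sample data here is made up):
import pandas as pd

df1 = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6]})  # made-up example data
df2 = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})

# Column labels of df1 that do not appear in df2
extra_cols = df1.columns.difference(df2.columns)

for col in extra_cols:
    # ...do some stuff with df1[col] here
    print(col, df1[col].tolist())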

Related

How to compare two columns in different pandas dataframes, store the differences in a 3rd dataframe

I need to compare two dataframes, df1 (blue) and df2 (orange), store only the rows of df2 (orange) that are not in df1 in a separate dataframe, and then add those rows to df1, assigning function 6 and sector 20 to the employees that were not present in df1 (blue).
I know how to find the differences between the dataframes and store them in a third dataframe, but I'm stuck trying to figure out how to store only the rows of df2 that are not in df1.
You can try this:
Get a list with the IDs of orange (df2) that you want to keep
Filter df2 with that list
Append the result to df1
df1 --> blue, df2 --> orange
import pandas as pd

df2['Function'] = 6
df2['Sector'] = 20
ids_df2_keep = [e for e in df2['ID'] if e not in list(df1['ID'])]
df2 = df2[df2['ID'].isin(ids_df2_keep)]
df1 = df1.append(df2)  # note: append was removed in pandas 2.0; use pd.concat([df1, df2]) there
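As a quick check, here is the same approach run end to end on small made-up data (the Function/Sector values for the existing df1 rows are invented):
import pandas as pd

df1 = pd.DataFrame({'ID': [125, 134, 156], 'Name': ['John', 'Mary', 'Bill'],
                    'Function': [1, 2, 2], 'Sector': [10, 11, 12]})  # made-up values
df2 = pd.DataFrame({'ID': [125, 139, 133], 'Name': ['John', 'Joana', 'Linda']})

df2['Function'] = 6
df2['Sector'] = 20
ids_df2_keep = [e for e in df2['ID'] if e not in list(df1['ID'])]
df2 = df2[df2['ID'].isin(ids_df2_keep)]
df1 = pd.concat([df1, df2])  # works on both old and new pandas
print(df1)  # IDs 139 and 133 are added with Function 6 and Sector 20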
This has been answered in "pandas get rows which are NOT in other dataframe".
Build the set of common rows with a merge, then simply select the rows of df2 that are not in it.
~ negates the expression, selecting all rows that are NOT IN the common set instead of IN it.
common = df1.merge(df2, on=['ID','Name'])
df = df2[(~df2['ID'].isin(common['ID'])) & (~df2['Name'].isin(common['Name']))]
This was tested using some of your data:
df1 = pd.DataFrame({'ID':[125,134,156],'Name':['John','Mary','Bill'],'func':[1,2,2]})
df2 = pd.DataFrame({'ID':[125,139,133],'Name':['John','Joana','Linda']})
Output is:
    ID   Name
1  139  Joana
2  133  Linda

Selecting rows from another boolean dataframe in Python

I have two dataframes, df1 and df2.
df1 contains integers and df2 contains booleans.
df1 and df2 are exactly the same size (like both are 10x10).
I would like to create a df3 that takes the data from df1 only where the value in the same location in df2 is True. All False positions would be replaced by NaN in df3.
Thanks in advance!
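A minimal sketch of one way to do this, assuming df1 and df2 share the same index and columns: DataFrame.where keeps the values where the condition is True and fills everything else with NaN.
import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.arange(9).reshape(3, 3), columns=['a', 'b', 'c'])   # made-up integers
df2 = pd.DataFrame(np.random.rand(3, 3) > 0.5, columns=['a', 'b', 'c'])   # made-up booleans

# Keep df1's value where df2 is True, NaN where df2 is False
df3 = df1.where(df2)
# df1[df2] gives the same result for aligned boolean DataFrames
print(df3)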

combining dataframes and adding values on common date index

I have many dataframes with one column (same name in all) whose indexes are date ranges. I want to merge/combine these dataframes into one, summing the values where any dates are common. Below is a simplified example:
import numpy as np
import pandas as pd

range1 = pd.date_range('2021-10-01','2021-11-01')
range2 = pd.date_range('2021-11-01','2021-12-01')
df1 = pd.DataFrame(np.random.rand(len(range1),1), columns=['value'], index=range1)
df2 = pd.DataFrame(np.random.rand(len(range2),1), columns=['value'], index=range2)
Here '2021-11-01' appears in both df1 and df2, with different values.
I would like to obtain a single dataframe of 62 rows (32 + 31 - 1) where the 2021-11-01 row contains the sum of its values in df1 and df2.
We can use pd.concat() on the two dataframes, then df.reset_index() to get a new regular integer index, rename the date column, and then use df.groupby().sum().
df = pd.concat([df1,df2]) # this gives 63 rows by 1 column, where the column is the values and the dates are the index
df = df.reset_index() # moves the dates to a column, now called 'index', and makes a new integer index
df = df.rename(columns={'index':'Date'}) #renames the column
df.groupby('Date').sum()
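If you prefer to keep the dates as the index, the same result can be had in one step by grouping on the index level (using the df1 and df2 defined in the question above):
import pandas as pd

# Sum values for duplicate dates directly on the DatetimeIndex; gives 62 rows
df = pd.concat([df1, df2]).groupby(level=0).sum()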

How to merge two different dataframe with different columns

As someone who is super new to merge/append in Python, I am trying to merge two different DFs together.
DF1 has 2 columns, Text and ID, and 100 rows.
DF2 has 3 columns, Text, ID, and Match, and 20 rows.
My goal is to combine the two DFs so the "Match" column from DF2 is merged into DF1.
The Match column is all True values, so after the merge the other 80 rows in DF1 can be NaN and I can fix that later.
Thank you to everyone for the help and support!
Try a left merge using .merge(), like this:
DF_out = DF1.merge(DF2, on=['Text', 'ID'], how='left')
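A tiny made-up example (the Text and ID values are invented) showing what the left merge produces:
import pandas as pd

DF1 = pd.DataFrame({'Text': ['a', 'b', 'c'], 'ID': [1, 2, 3]})
DF2 = pd.DataFrame({'Text': ['a'], 'ID': [1], 'Match': [True]})

DF_out = DF1.merge(DF2, on=['Text', 'ID'], how='left')
print(DF_out)  # rows of DF1 with no counterpart in DF2 get NaN in the Match column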

Copying dataframes columns into another dataframe

I have two dataframes, df1 and df2, where df1 has 9 columns and df2 has 8 columns. I want to replace the first 8 columns of df1 with those of df2. How can this be done? I tried with iloc but was not able to succeed.
Following are the files:
https://www.filehosting.org/file/details/842516/tpkA0t2vAtkrqKTb/df1.csv for df1
https://www.filehosting.org/file/details/842517/8XpizwCAX79p9rrZ/df2.csv for df2
import pandas as pd
df1=pd.DataFrame({0:[1,1,1,0,0,0],1:[0,1,0,0,0,0],2:[1,1,1,0,0,0],3:[0,0,0,2,3,4],4:[0,0,0,0,1,0],5:[0,0,0,2,1,2]})
df2=pd.DataFrame({6:[2,2,2,0,0,0],7:[0,2,0,0,0,0],8:[2,2,2,0,0,0],'d':[0,0,0,2,3,4],'e':[0,0,0,0,1,0],'f':[0,0,0,2,1,2]})
z=pd.concat([df1.iloc[:,3:],df2.iloc[:,0:3]],axis=1)
Here I have concatenated columns 3 onward (0-based) of the first dataframe with the first 3 columns of the second dataframe. In the same way, you can concatenate whichever rows or columns you want.
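Applied to the shapes in the question (df1 with 9 columns, df2 with 8), a sketch along the same lines, assuming both frames have the same number of rows, could be:
import pandas as pd

# Option 1: take all 8 columns of df2 plus the last column of df1
# (keeps df2's column names; assumes the two frames share the same row index)
new_df1 = pd.concat([df2, df1.iloc[:, 8:]], axis=1)

# Option 2: keep df1's column names and overwrite only the values
df1.iloc[:, :8] = df2.values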
