Pandas: create a dataframe relating a column to other two columns - python

I have a dataframe with three columns: A, B, C. Let's say A and B are integer series ranging from 0 to 10. I'd like to create a new dataframe in which the unique values of A are the index, the unique values of B are the columns, and each cell holds the mean value of C at the intersection A=i, B=j.
So for instance if we grouped the dataframe like this:
Cvalues = df.groupby(['A','B'],as_index=False).mean()
in the (i,j) position of the dataframe I'd like to create there would be:
Cvalues.loc[Cvalues.A==i].loc[Cvalues.B==j].C
What is the easiest way to do that?

You are almost there. You can either pivot your Cvalues, or better yet, directly go for pivot_table and utilize its built-in option of aggfunc.
import pandas as pd

df = pd.DataFrame({'A': [2, 0, 1, 1, 2, 0, 1, 0],
                   'B': [1, 2, 1, 0, 1, 2, 1, 1],
                   'C': [10, 20, 30, 40, 50, 60, 70, 80]})
Recommended One-Liner:
res = df.pivot_table(index='A', columns='B', values='C', aggfunc='mean')
Making Your Method Work:
Cvalues = df.groupby(['A','B'],as_index=False).mean()
res = Cvalues.pivot(index='A', columns='B', values='C')
Not that there is much reason to, but you can also make this a little more compact:
res = df.groupby(['A','B'],as_index=False).mean().pivot(index='A', columns='B', values='C')
Here is the result of both ways:
B       0     1     2
A
0     NaN  80.0  40.0
1    40.0  50.0   NaN
2     NaN  30.0   NaN
where, at the intersection of A=2 and B=1: 30.0 = (10 + 50)/2
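As a quick sanity check, you can read that same cell straight off the pivoted frame:
res.loc[2, 1]   # 30.0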

How to iterate over rows and multiple columns in pandas?

I have a dataframe (df1) and I want to replace the values in the columns V2 and V3 if they have the same value as V1.
import pandas as pd
import numpy as np
df_start = pd.DataFrame({"ID": [1, 2, 3, 4, 5],
                         "V1": [10, 5, 15, 20, 20],
                         "V2": [10, 5, 20, 17, 15],
                         "V3": [10, 25, 15, 10, 20]})
df_end = pd.DataFrame({"ID": [1, 2, 3, 4, 5],
                       "V1": [10, 5, 15, 20, 20],
                       "V2": [np.nan, np.nan, 20, 17, 15],
                       "V3": [np.nan, 25, np.nan, 10, np.nan]})
I know iterrows is not recommended but I don't know what I should do.
You can use mask:
For a separate dataframe, use assign:
df_end = df_start.assign(**df_start[['V2','V3']]
                         .mask(df_start[['V2','V3']].eq(df_start['V1'], axis=0)))
To modify the input dataframe, just assign in place:
df_start[['V2','V3']] = (df_start[['V2','V3']]
                         .mask(df_start[['V2','V3']].eq(df_start['V1'], axis=0)))
   ID  V1    V2    V3
0   1  10   NaN   NaN
1   2   5   NaN  25.0
2   3  15  20.0   NaN
3   4  20  17.0  10.0
4   5  20  15.0   NaN
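To see exactly what mask receives, you can print the comparison frame built by eq (computed on the original df_start; True marks the cells that get blanked out):
print(df_start[['V2','V3']].eq(df_start['V1'], axis=0))
#       V2     V3
# 0   True   True
# 1   True  False
# 2  False   True
# 3  False  False
# 4  False   True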
You'll still use a regular loop to go through the columns, but the apply function is your best friend for this kind of row-wise operation. If you need info from more than one column (here you're comparing some column against "V1"), use apply on the DataFrame and specify the axis. If you only need info from one column (like making a column that doubles values from V1 if they're even), you can use apply on just a Series.
For both versions, the argument you pass is a lambda expression. If you apply it to a DataFrame like you do here, the x represents a row whose values can be indexed by column name. Finally, you assign the result back to a new or existing column in your DataFrame.
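A hypothetical illustration of the single-column (Series) case from the parenthetical, doubling the even values of V1 (the doubled name is made up for this example):
doubled = df_start["V1"].apply(lambda v: v * 2 if v % 2 == 0 else v)
# 0    20
# 1     5
# 2    15
# 3    40
# 4    40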
Assuming that df_start and df_end represent your planned input and output:
cols = ["V2","V3"]
for col in cols:
df_start[col] = df.apply(lambda x[col] if x[col] != x["V1"] else np.nan, axis=1]
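After the loop, df_start should match the df_end you posted (a quick check, assuming the fresh frames defined above):
print(df_start.equals(df_end))   # True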

How to "partially transpose" dataframe in Pandas?

I have csv file like this:
A,B,C,X
a,a,a,1.0
a,a,a,2.1
a,b,b,1.2
a,b,b,2.4
a,b,b,3.6
b,c,c,1.1
b,c,d,1.0
(A, B, C) is a "primary key" in this dataset, that means this set of columns should be unique. What I need to do is to find duplicates and present associated values (X column) in separate columns, like this:
A,B,C,X1,X2,X3
a,a,a,1.0,2.1,
a,b,b,1.2,2.4,3.6
I somehow know how to find duplicates and aggregate X values into tuples:
df = (data.groupby(['A', 'B', 'C'])
          .filter(lambda group: len(group) > 1)
          .groupby(['A', 'B', 'C'])
          .aggregate(tuple))
This is basically what I need, but I struggle with transforming it further.
I don't know how many duplicates for a given key I have in my data, so I need to find some max value and compute columns:
df['items'] = df['X'].apply(lambda x: len(x))
columns = [f'x_{i}' for i in range(1, df['items'].max() + 1)]
and then create a new dataframe with those columns:
df2 = pd.DataFrame(df['X'].tolist(), columns=columns)
But at this point I've lost the index :shrug:
This page on Pandas docs suggests I should use something like this:
df.pivot(columns=columns, values=['X'])
because df already contains an index, but I get this (confusing) error:
KeyError: "None of [Index(['x_1', 'x_2'], dtype='object')] are in the [columns]"
What am I missing here?
I originally marked this as a duplicate of the infamous pivot question, but since this is a bit different, here's an answer:
(df.assign(col=df.groupby(['A','B','C']).cumcount().add(1))
   .pivot_table(index=['A','B','C'], columns='col', values='X')
   .add_prefix('X')
   .reset_index()
)
Output:
col  A  B  C   X1   X2   X3
0    a  a  a  1.0  2.1  NaN
1    a  b  b  1.2  2.4  3.6
2    b  c  c  1.1  NaN  NaN
3    b  c  d  1.0  NaN  NaN
Note: this only differs from the linked question/answer in that you groupby/pivot on a set of columns instead of one column.
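To see what the helper column contributes, here is the cumcount numbering on the sample data; it simply numbers the occurrences within each (A, B, C) group, and those numbers become the X1/X2/X3 columns:
df.groupby(['A','B','C']).cumcount().add(1)
# 0    1
# 1    2
# 2    1
# 3    2
# 4    3
# 5    1
# 6    1
# dtype: int64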

comparing each value in two columns

How can I compare two columns in a dataframe and create a new column based on the difference of those two columns efficiently?
I have a feature in my table that has a lot of missing values, and I need to backfill that information using other tables in the database that contain the same feature. I have used np.select to compare the feature in my original table with the same feature in another table, but I feel like there should be an easier method.
Eg: pd.DataFrame({'A': [1,2,3,4,np.nan], 'B':[1,np.nan,30,4,np.nan]})
I expect the new column to contain values [1,2,"different",4,np.nan]. Any help will be appreciated!
pandas.Series.combine_first or pandas.DataFrame.combine_first could be useful here. These operate like a SQL COALESCE and combine the two columns by choosing the first non-null value if one exists.
df = pd.DataFrame({'A': [1,2,3,4,np.nan], 'B':[1,np.nan,30,4,np.nan]})
C = df.A.combine_first(df.B)
C looks like:
0 1.0
1 2.0
2 3.0
3 4.0
4 NaN
Then, to capture your requirement that two different non-null values should give "different" when combined, just find those indices and update the values.
mask = ~df.A.isna() & ~df.B.isna() & (df.A != df.B)
C[mask] = 'different'
C now looks like:
0 1
1 2
2 different
3 4
4 NaN
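If you prefer a single expression, the two steps collapse into one with Series.where (a sketch, reusing the mask defined above):
df['C'] = df.A.combine_first(df.B).where(~mask, 'different')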
Another way is to use pd.DataFrame.iterrows with nunique:
import pandas as pd
df['C'] = [s['A'] if s.nunique()<=1 else 'different' for _, s in df.iterrows()]
Output:
     A     B          C
0  1.0   1.0          1
1  2.0   NaN          2
2  3.0  30.0  different
3  4.0   4.0          4
4  NaN   NaN        NaN

Python adding two dataframes based on index (edited)

(no idea how to introduce a matrix here for readability)
I have two dataframes built with pandas in Python.
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Index': ['0','1','2'], 'number': [3,'dd',1], 'people': [3,'s',3]})
df1 = df1.set_index('Index')
df2 = pd.DataFrame({'Index': ['0','1','2'], 'quantity': [3,2,'hi'], 'persons': [1,5,np.nan]})
I would like to sum the columns pairwise, based on Index. The columns do not have the same names and may contain strings (I have in fact 50 columns in each df). I want to consider nan as 0. The result should look like:
df3
Index  column 1  column 2
0      6         4
1      nan       nan
2      nan       nan
I was wondering how this could be done.
Note:
For sure a double while or for would do the trick, just not very elegant...
indices = 0
while indices < len(df1.index):
    columna = 0
    while columna < numbercolumns:
        df3.iloc[indices, columna] = df1.iloc[indices, columna] + df2.iloc[indices, columna]
        columna += 1
    indices += 1
Thank you.
You can try concatenating both dataframes, then summing within each index group. Align the column names first, coerce the strings to NaN, and pass min_count=2 so a group sums to NaN unless both frames contribute a numeric value, which reproduces your expected output:
df2 = df2.set_index('Index')
df2.columns = df1.columns  # align column names positionally
res = (pd.concat([df1.apply(pd.to_numeric, errors='coerce'),
                  df2.apply(pd.to_numeric, errors='coerce')])
         .groupby('Index').sum(min_count=2))
Out:
       number  people
Index
0         6.0     4.0
1         NaN     NaN
2         NaN     NaN
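Since both frames have the same shape and the rows correspond positionally, you can also skip the groupby entirely (a sketch; the num1, num2, and df3 names are made up here, and plain addition propagates NaN, which matches your expected output):
num1 = df1.apply(pd.to_numeric, errors='coerce')
num2 = df2.apply(pd.to_numeric, errors='coerce')
df3 = pd.DataFrame(num1.values + num2.values, index=df1.index, columns=df1.columns)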

Pandas column bind (cbind) two data frames

I've got a dataframe df_a with id information:
    unique_id lacet_number
15    5570613  TLA-0138365
24    5025490  EMP-0138757
36    4354431  DXN-0025343
and another dataframe df_b, with the same number of rows that I know correspond to the rows in df_a:
     latitude  longitude
0  -93.193560  31.217029
1  -93.948082  35.360874
2 -103.131508  37.787609
What I want to do is simply concatenate the two horizontally (similar to cbind in R) and get:
   unique_id lacet_number    latitude  longitude
0    5570613  TLA-0138365  -93.193560  31.217029
1    5025490  EMP-0138757  -93.948082  35.360874
2    4354431  DXN-0025343 -103.131508  37.787609
What I have tried:
df_c = pd.concat([df_a, df_b], axis=1)
which gives me an outer join.
    unique_id lacet_number    latitude  longitude
0         NaN          NaN  -93.193560  31.217029
1         NaN          NaN  -93.948082  35.360874
2         NaN          NaN -103.131508  37.787609
15    5570613  TLA-0138365         NaN        NaN
24    5025490  EMP-0138757         NaN        NaN
36    4354431  DXN-0025343         NaN        NaN
The problem is that the indices of the two dataframes do not match. I read the documentation for pandas.concat and saw there is an ignore_index option, but that only applies to the concatenation axis (in my case, the columns), so it is not the right choice here. So my question is: is there a simple way to achieve this?
If you're sure the rows correspond, then to avoid index alignment just call reset_index(drop=True) on df_a first; this resets its index values to start from 0:
df_c = pd.concat([df_a.reset_index(drop=True), df_b], axis=1)
DataFrame.join
While concat is fine, it's simpler to join:
C = df_a.join(df_b)
This still assumes aligned indexes, so reset_index as needed. In the OP's example, df_b's index is already the default, so we only need to reset df_a:
C = df_a.reset_index(drop=True).join(df_b)
#   unique_id lacet_number    latitude  longitude
# 0   5570613  TLA-0138365  -93.193560  31.217029
# 1   5025490  EMP-0138757  -93.948082  35.360874
# 2   4354431  DXN-0025343 -103.131508  37.787609
You can use set_axis to make the index labels of one frame match the other's, then concatenate horizontally or join. Unlike resetting the index, this preserves the index labels of one of the dataframes (here, df_b's).
joined_df = pd.concat([df_a.set_axis(df_b.index), df_b], axis=1)
# or using `join`
joined_df = df_a.set_axis(df_b.index).join(df_b)
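With df_b's default RangeIndex, all of the variants above produce the same frame (a quick check, assuming df_c and joined_df from the snippets above):
print(df_c.equals(joined_df))   # True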
