I have the following dummy dataframe:
df = pd.DataFrame({'Col1':['a,b,c,d', 'e,f,g,h', 'i,j,k,l,m'],
                   'Col2':['aa~bb~cc~dd', np.nan, 'ii~jj~kk~ll~mm']})
Col1 Col2
0 a,b,c,d aa~bb~cc~dd
1 e,f,g,h NaN
2 i,j,k,l,m ii~jj~kk~ll~mm
The real dataset has shape (500000, 90).
I need to unnest these values to rows and I'm using the new explode method for this, which works fine.
The problem is the NaNs: these will cause unequal lengths after the explode, so I need to fill them with the same number of delimiters as the filled values. In this case ~~~, since row 1 has three commas.
expected output
Col1 Col2
0 a,b,c,d aa~bb~cc~dd
1 e,f,g,h ~~~
2 i,j,k,l,m ii~jj~kk~ll~mm
Attempt 1:
df['Col2'].fillna(df['Col1'].str.count(',')*'~')
Attempt 2:
np.where(df['Col2'].isna(), df['Col1'].str.count(',')*'~', df['Col2'])
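Both attempts presumably fail at the same step: an integer Series cannot be multiplied by a string, so df['Col1'].str.count(',') * '~' raises a TypeError (numpy has no multiply loop for int64 and str). Routing the count through Python's string repetition sidesteps that; a small sketch of the same idea:
# build the '~' runs in Python, where int * str repetition is defined
df['Col2'].fillna(df['Col1'].str.count(',').map(lambda n: '~' * n))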
The following works, but I feel like there's an easier method for this:
characters = df['Col1'].str.replace(r'\w', '', regex=True).str.replace(',', '~')
df['Col2'] = df['Col2'].fillna(characters)
print(df)
Col1 Col2
0 a,b,c,d aa~bb~cc~dd
1 e,f,g,h ~~~
2 i,j,k,l,m ii~jj~kk~ll~mm
d1 = df.assign(Col1=df['Col1'].str.split(',')).explode('Col1')[['Col1']]
d2 = df.assign(Col2=df['Col2'].str.split('~')).explode('Col2')[['Col2']]
final = pd.concat([d1,d2], axis=1)
print(final)
  Col1 Col2
0    a   aa
0    b   bb
0    c   cc
0    d   dd
1    e
1    f
1    g
1    h
2    i   ii
2    j   jj
2    k   kk
2    l   ll
2    m   mm
Question: is there an easier and more generalized method for this? Or is my method fine as is?
pd.concat
delims = {'Col1': ',', 'Col2': '~'}
pd.concat({
    k: df[k].str.split(delims[k], expand=True)
    for k in df}, axis=1
).stack()
     Col1 Col2
0 0     a   aa
  1     b   bb
  2     c   cc
  3     d   dd
1 0     e  NaN
  1     f  NaN
  2     g  NaN
  3     h  NaN
2 0     i   ii
  1     j   jj
  2     k   kk
  3     l   ll
  4     m   mm
This loops on columns in df. It may be wiser to loop on keys in the delims dictionary.
delims = {'Col1': ',', 'Col2': '~'}
pd.concat({
    k: df[k].str.split(delims[k], expand=True)
    for k in delims}, axis=1
).stack()
Same thing, different look
delims = {'Col1': ',', 'Col2': '~'}
def f(c): return df[c].str.split(delims[c], expand=True)
pd.concat(map(f, delims), keys=delims, axis=1).stack()
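If a flat result like the final frame in the question is preferred, the inner level of the stacked index (the split position) can simply be dropped afterwards; a small sketch reusing the pieces above:
out = pd.concat({k: df[k].str.split(delims[k], expand=True)
                 for k in delims}, axis=1).stack()
out.reset_index(level=1, drop=True)  # keep only the original row label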
One way is using str.repeat and fillna(); not sure how efficient this is, though:
df.Col2.fillna(pd.Series(['~']*len(df)).str.repeat(df.Col1.str.count(',')))
0 aa~bb~cc~dd
1 ~~~
2 ii~jj~kk~ll~mm
Name: Col2, dtype: object
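One caveat, I believe: the helper Series is created with a fresh RangeIndex, so the fillna alignment only works when df itself has a default index. Building the helper on df.index avoids that; a hedged variant:
# same idea, but index-safe: the helper shares df's index
df['Col2'].fillna(
    pd.Series('~', index=df.index).str.repeat(df['Col1'].str.count(','))
)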
Just split the dataframe into two:
df1 = df.dropna()
df2 = df.drop(df1.index)
d1 = df1['Col1'].str.split(',').explode()
d2 = df1['Col2'].str.split('~').explode()
d3 = df2['Col1'].str.split(',').explode()
# DataFrame.append was removed in pandas 2.0, so concat the leftover rows instead
final = pd.concat([pd.concat([d1, d2], axis=1), d3.to_frame()], sort=False)
Out[77]:
Col1 Col2
0 a aa
0 b bb
0 c cc
0 d dd
2 i ii
2 j jj
2 k kk
2 l ll
2 m mm
1 e NaN
1 f NaN
1 g NaN
1 h NaN
zip_longest can be useful here, given you don't need the original Index. It will work regardless of which column has more splits:
from itertools import zip_longest, chain
df = pd.DataFrame({'Col1':['a,b,c,d', 'e,f,g,h', 'i,j,k,l,m', 'x,y'],
                   'Col2':['aa~bb~cc~dd', np.nan, 'ii~jj~kk~ll~mm', 'xx~yy~zz']})
# Col1 Col2
#0 a,b,c,d aa~bb~cc~dd
#1 e,f,g,h NaN
#2 i,j,k,l,m ii~jj~kk~ll~mm
#3 x,y xx~yy~zz
l = [zip_longest(*x, fillvalue='')
     for x in zip(df.Col1.str.split(',').fillna(''),
                  df.Col2.str.split('~').fillna(''))]
pd.DataFrame(chain.from_iterable(l))
    0   1
0   a  aa
1   b  bb
2   c  cc
3   d  dd
4   e
5   f
6   g
7   h
8   i  ii
9   j  jj
10  k  kk
11  l  ll
12  m  mm
13  x  xx
14  y  yy
15     zz
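To restore the original column names, they can be passed to the constructor (note the zip_longest objects in l are single-use iterators, so l must be rebuilt first if the DataFrame above was already materialized):
pd.DataFrame(chain.from_iterable(l), columns=['Col1', 'Col2'])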
Related
I have a DataFrame like this:
>>> df = pd.DataFrame({'a': list('ABCD'), 'b': ['E',np.nan,np.nan,'F']})
a b
0 A E
1 B NaN
2 C NaN
3 D F
I am trying to fill each NaN with the value from the previous column of the next row, then drop that next row. In other words, I want to combine the two rows containing NaNs into a single row without NaNs, like this:
a b
0 A E
1 B C
2 D F
I have tried various flavors of df.fillna(method="<bfill/ffill>"), but none of them gave me the expected output.
I haven't found any other question about this exact problem; here's one that comes close. Also, that DataFrame is actually built from a list of DataFrames with .concat(), as you may notice from the indexes. I mention this because the operation may be easier to do on the individual pieces than on the combined frame.
I have found some suggestions to use shift and combine_first, but none of them worked for me. You may try these too.
I also found a whole article about filling NaN values, but it doesn't cover a problem/answer like mine.
OK, I misunderstood what you wanted to do the first time; the dummy example was a bit ambiguous.
Here is another:
>>> df = pd.DataFrame({'a': list('ABCD'), 'b': ['E',np.nan,np.nan,'F']})
a b
0 A E
1 B NaN
2 C NaN
3 D F
To my knowledge, this operation does not exist with pandas, so we will use numpy to do the work.
First transform the dataframe to a numpy array and flatten it to one dimension. Then drop the NaNs using pandas.isna, which works on a wider range of types than numpy.isnan, reshape the array back to its original number of columns, and convert back to a dataframe:
array = df.to_numpy().flatten()
pd.DataFrame(array[~pd.isna(array)].reshape(-1,df.shape[1]), columns=df.columns)
output:
a b
0 A E
1 B C
2 D F
It is also working for more complex examples, as long as the NaN pattern is conserved among columns with NaNs:
In:
a b c d
0 A H A2 H2
1 B NaN B2 NaN
2 C NaN C2 NaN
3 D I D2 I2
4 E NaN E2 NaN
5 F NaN F2 NaN
6 G J G2 J2
Out:
a b c d
0 A H A2 H2
1 B B2 C C2
2 D I D2 I2
3 E E2 F F2
4 G J G2 J2
In:
a b c
0 A F H
1 B NaN NaN
2 C NaN NaN
3 D NaN NaN
4 E G I
Out:
a b c
0 A F H
1 B C D
2 E G I
In case the columns with NaNs do not share the same pattern, such as:
a b c d
0 A H A2 NaN
1 B NaN B2 NaN
2 C NaN C2 H2
3 D I D2 I2
4 E NaN E2 NaN
5 F NaN F2 NaN
6 G J G2 J2
You can apply the operation per group of two columns:
def elementwise_shift(df):
    array = df.to_numpy().flatten()
    return pd.DataFrame(array[~pd.isna(array)].reshape(-1, df.shape[1]), columns=df.columns)

# group adjacent column pairs: labels [0, 0, 1, 1, ...]
(df.groupby(np.repeat(np.arange(df.shape[1] // 2), 2), axis=1)
   .apply(elementwise_shift)
)
output:
a b c d
0 A H A2 B2
1 B C C2 H2
2 D I D2 I2
3 E F E2 F2
4 G J G2 J2
You can do this in two steps with a placeholder column. First fill all the NaNs in column b with the a values from the next row, then apply the filtering. In this example I use ffill with a limit of 1 to filter out all NaN values after the first; there's probably a better method.
import pandas as pd
import numpy as np
df=pd.DataFrame({"a":[1,2,3,3,4],"b":[1,2,np.nan,np.nan,4]})
# Fill all nans:
df['new_b'] = df['b'].fillna(df['a'].shift(-1))
df = df[df['b'].ffill(limit=1).notna()].copy() # .copy() to avoid SettingWithCopyWarning on the later assignment
df = df.drop('b', axis=1).rename(columns={'new_b': 'b'})
print(df)
# output:
# a b
# 0 1 1
# 1 2 2
# 2 3 3
# 4 4 4
Suppose I have a dataframe looking something like this:
  col1 col2 col3 col4
0    A    B    F    O
1    A         G    Q
2    A    C    G    P
3    A         H
4    A    D    I
5    A    D    I
6    A         J    U
7    A    E         J
How can I shift the columns if the column value is empty?
  col1 col2 col3 col4
0    A    B    F    O
1    A    G    Q
2    A    C    G    P
3    A    H
4    A    D    I
5    A    D    I
6    A    J    U
7    A    E    J
I thought I could check the current column and, if it's empty, take the next column's value and then set the next column to empty.
for col in df.columns:
    df[col] = np.where((df[col] == ''), df[f'col{int(col[-1])+1}'], df[col])
    df[f'col{int(col[-1])+1}'] = np.where((df[col] == ''), '', df[col])
But I am failing somewhere. Sample df below.
df = pd.DataFrame(
    {
        'col1': ['A','A','A','A','A','A','A','A'],
        'col2': ['B','','C','','D','D','','E'],
        'col3': ['F','G','G','H','I','I','J',''],
        'col4': ['O','Q','P','','','','U','J']
    }
)
One way is to use np.argsort:
s = df.to_numpy()
orders = np.argsort(s=='', axis=1, kind='mergesort')
df[:] = s[np.arange(len(s))[:,None],orders]
Output:
  col1 col2 col3 col4
0    A    B    F    O
1    A    G    Q
2    A    C    G    P
3    A    H
4    A    D    I
5    A    D    I
6    A    J    U
7    A    E    J
Note:
A very similar approach can be found in this question.
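A design note: kind='mergesort' matters here because it is a stable sort, so within each row the non-empty values keep their original left-to-right order.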
Replace empty string with NaN
df = df.replace('', np.nan)
Apply dropna row-wise
odf = df.apply(lambda x: pd.Series(x.dropna().values), axis=1)
To retain column names,
odf.columns = df.columns
NOTE: It is always good to represent missing data with NaN
Output
col1 col2 col3 col4
0 A B F O
1 A G Q NaN
2 A C G P
3 A H NaN NaN
4 A D I NaN
5 A D I NaN
6 A J U NaN
7 A E J NaN
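For reference, the same idea as one chained expression, assuming at least one row is complete (so the column count is preserved) and that empty strings are wanted back in the output:
odf = (df.replace('', np.nan)
         .apply(lambda x: pd.Series(x.dropna().values), axis=1)
         .fillna(''))
odf.columns = df.columns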
You can count the number of empty values in each column, sort the columns by that count, and finally reorder the dataframe accordingly.
counts = {}
for col in df.columns.to_list():
    counts[col] = (df[col] == '').sum()  # based on the example you have provided
# Then sort the dictionary based on counts.
counts = dict(sorted(counts.items(), key=lambda item: item[1]))
# Assign back to the dataframe.
df = df[[*counts.keys()]]
df
  col1 col3 col2 col4
0    A    F    B    O
1    A    G         Q
2    A    G    C    P
3    A    H
4    A    I    D
5    A    I    D
6    A    J         U
7    A         E    J
Say, I have one data frame df:
a b c d e
0 1 2 dd 5 Col1
1 2 3 ee 9 Col2
2 3 4 ff 1 Col4
There's another dataframe df2:
Col1 Col2 Col3
0 1 2 4
1 2 3 5
2 3 4 6
I need to add a Sum column to the first dataframe, where each row sums the values of the df2 column named in column e of df1.
Expected output
a b c d e Sum
0 1 2 dd 5 Col1 6
1 2 3 ee 9 Col2 9
2 3 4 ff 1 Col4 0
The Sum value in the last row is 0 because Col4 doesn't exist in df2.
What I tried: writing some lambdas and apply functions. Wasn't able to do it.
I'd greatly appreciate the help. Thank you.
Try:
df['Sum'] = df.e.map(df2.sum()).fillna(0)
df
Out[89]:
a b c d e Sum
0 1 2 dd 5 Col1 6.0
1 2 3 ee 9 Col2 9.0
2 3 4 ff 1 Col4 0.0
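For clarity: df2.sum() returns a Series indexed by column name, which map then looks up for each value of e; keys missing from df2 (like Col4) become NaN, hence the fillna(0):
df2.sum()
# Col1     6
# Col2     9
# Col3    15
# dtype: int64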
Try this. The following solution sums all values of a particular column if it is present in df2, using the apply method, and returns 0 if no such column exists in df2.
df1.loc[:,"sum"]=df1.loc[:,"e"].apply(lambda x: df2.loc[:,x].sum() if(x in df2.columns) else 0)
Use .iterrows() to iterate through a dataframe, pulling out the values for each row as well as the index.
A nested for loop style of iteration can be used to grab the needed values from the second dataframe and apply them to the first.
import pandas as pd
df1 = pd.DataFrame(data={'a': [1,2,3], 'b': [2,3,4], 'c': ['dd', 'ee', 'ff'], 'd': [5,9,1], 'e': ['Col1','Col2','Col3']})
df2 = pd.DataFrame(data={'Col1': [1,2,3], 'Col2': [2,3,4], 'Col3': [4,5,6]})
df1['Sum'] = None
for index, value in df1.iterrows():
    total = 0  # avoid shadowing the built-in sum
    for index2, value2 in df2.iterrows():
        total += value2[value['e']]
    df1.loc[index, 'Sum'] = total  # .loc avoids chained-assignment issues
Output:
a b c d e Sum
0 1 2 dd 5 Col1 6
1 2 3 ee 9 Col2 9
2 3 4 ff 1 Col3 15
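Since the inner loop only ever sums one column of df2, it can be replaced with a single sum call per row, avoiding the quadratic iteration; a sketch of the same loop:
for index, value in df1.iterrows():
    df1.loc[index, 'Sum'] = df2[value['e']].sum()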
I have two dataframes like this
import pandas as pd
import numpy as np
df1 = pd.DataFrame({
    'key': list('AAABBCCAAC'),
    'prop1': list('xyzuuyxzzz'),
    'prop2': list('mnbnbbnnnn')
})
df2 = pd.DataFrame({
    'key': list('ABBCAA'),
    'prop1': [np.nan] * 6,
    'prop2': [np.nan] * 6,
    'keep_me': ['stuff'] * 6
})
key prop1 prop2
0 A x m
1 A y n
2 A z b
3 B u n
4 B u b
5 C y b
6 C x n
7 A z n
8 A z n
9 C z n
key prop1 prop2 keep_me
0 A NaN NaN stuff
1 B NaN NaN stuff
2 B NaN NaN stuff
3 C NaN NaN stuff
4 A NaN NaN stuff
5 A NaN NaN stuff
I now want to populate columns prop1 and prop2 in df2 using the values from df1. For each key, df1 has at least as many rows as df2 (in the example above: 5 A's vs 3 A's, 2 B's vs 2 B's, and 3 C's vs 1 C). For each key, I want to fill df2 using the first n rows for that key from df1.
So, my expected outcome for df2 would be:
key prop1 prop2 keep_me
0 A x m stuff
1 B u n stuff
2 B u b stuff
3 C y b stuff
4 A y n stuff
5 A z b stuff
As key is not unique, I cannot simply build a dictionary and then use .map.
I was hoping that something along these lines would work:
pd.concat([df2.set_index('key'), df1.set_index('key')], axis=1, join='inner')
but that fails with
ValueError: Shape of passed values is (5, 22), indices imply (5, 10)
as - I guess - the index contains non-unique values.
How can I get my desired output?
Because duplicate key values are possible, one solution is to create a helper counter column in both DataFrames with GroupBy.cumcount; the missing values in df2 can then be replaced, aligned by the MultiIndex built from the key and g columns, with DataFrame.fillna:
df1['g'] = df1.groupby('key').cumcount()
df2['g'] = df2.groupby('key').cumcount()
print (df1)
key prop1 prop2 g
0 A x m 0
1 A y n 1
2 A z b 2
3 B u n 0
4 B u b 1
5 C y b 0
6 C x n 1
7 A z n 3
8 A z n 4
9 C z n 2
print (df2)
key prop1 prop2 keep_me g
0 A NaN NaN stuff 0
1 B NaN NaN stuff 0
2 B NaN NaN stuff 1
3 C NaN NaN stuff 0
4 A NaN NaN stuff 1
5 A NaN NaN stuff 2
df = (df2.set_index(['key','g'])
         .fillna(df1.set_index(['key','g']))
         .reset_index(level=1, drop=True)
         .reset_index())
print (df)
key prop1 prop2 keep_me
0 A x m stuff
1 B u n stuff
2 B u b stuff
3 C y b stuff
4 A y n stuff
5 A z b stuff
Another solution is to build a dict from df1 first and then pop its elements to fill the NaNs in df2:
d = df1.groupby(by='key').apply(lambda x: x.values.tolist()).to_dict()
df2[['key','prop1','prop2']] = pd.DataFrame(df2.key.apply(lambda x: d[x].pop(0)).tolist())
key prop1 prop2 keep_me
0 A x m stuff
1 B u n stuff
2 B u b stuff
3 C y b stuff
4 A y n stuff
5 A z b stuff
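One caveat: pop(0) mutates the lists stored in d, so the dictionary is consumed while df2 is being filled and must be rebuilt before running this a second time.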
I'm trying to concatenate Pandas DataFrame columns with NaN values.
In [96]: df = pd.DataFrame({'col1' : ["1","1","2","2","3","3"],
    ...:                    'col2' : ["p1","p2","p1",np.nan,"p2",np.nan],
    ...:                    'col3' : ["A","B","C","D","E","F"]})
In [97]: df
Out[97]:
col1 col2 col3
0 1 p1 A
1 1 p2 B
2 2 p1 C
3 2 NaN D
4 3 p2 E
5 3 NaN F
In [98]: df['concatenated'] = df['col2'] +','+ df['col3']
In [99]: df
Out[99]:
col1 col2 col3 concatenated
0 1 p1 A p1,A
1 1 p2 B p2,B
2 2 p1 C p1,C
3 2 NaN D NaN
4 3 p2 E p2,E
5 3 NaN F NaN
Instead of the NaN values in the "concatenated" column, I want to get "D" and "F" respectively for this example. How can I do that?
I don't think your problem is trivial. However, here is a workaround using numpy vectorization:
In [49]: def concat(*args):
    ...:     strs = [str(arg) for arg in args if not pd.isnull(arg)]
    ...:     return ','.join(strs) if strs else np.nan
    ...: np_concat = np.vectorize(concat)
    ...:
In [50]: np_concat(df['col2'], df['col3'])
Out[50]:
array(['p1,A', 'p2,B', 'p1,C', 'D', 'p2,E', 'F'],
dtype='|S64')
In [51]: df['concatenated'] = np_concat(df['col2'], df['col3'])
In [52]: df
Out[52]:
col1 col2 col3 concatenated
0 1 p1 A p1,A
1 1 p2 B p2,B
2 2 p1 C p1,C
3 2 NaN D D
4 3 p2 E p2,E
5 3 NaN F F
[6 rows x 4 columns]
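Note that np.vectorize is essentially a Python-level loop under the hood, so this is a readability convenience rather than a performance optimization.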
You could first replace NaNs with empty strings, for the whole dataframe or the column(s) you desire.
In [6]: df = df.fillna('')
In [7]: df['concatenated'] = df['col2'] +','+ df['col3']
In [8]: df
Out[8]:
col1 col2 col3 concatenated
0 1 p1 A p1,A
1 1 p2 B p2,B
2 2 p1 C p1,C
3 2 D ,D
4 3 p2 E p2,E
5 3 F ,F
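If the leading comma on those rows is unwanted, it can be stripped afterwards:
df['concatenated'] = df['concatenated'].str.strip(',')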
We can use stack, which drops the NaNs, then use groupby.agg with ','.join to join the strings:
df['concatenated'] = df[['col2', 'col3']].stack().groupby(level=0).agg(','.join)
col1 col2 col3 concatenated
0 1 p1 A p1,A
1 1 p2 B p2,B
2 2 p1 C p1,C
3 2 NaN D D
4 3 p2 E p2,E
5 3 NaN F F
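Here stack drops the NaNs and keeps the original row label as the outer index level, so groupby(level=0) regroups the surviving strings per row before joining them.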