Probably a duplicate, but I'm not even sure what to search for.
If I have a pandas dataframe like so:
index  RH   LH   Data1  Data2  . . .
1      A1   A2   A      B
2      B1   NaN  C      D
3      NaN  C2   E      F
And I want to re-index as so:
index  Data1  Data2
A1     A      B
A2     A      B
B1     C      D
C2     E      F
Is there a simple-ish way to do this? Or should I just do a pair of for loops?
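For reference, the sample frame can be reconstructed like this (a sketch; column names and dtypes are taken from the layout above):

import pandas as pd
import numpy as np

df = pd.DataFrame({'RH':    ['A1', 'B1', np.nan],
                   'LH':    ['A2', np.nan, 'C2'],
                   'Data1': ['A', 'C', 'E'],
                   'Data2': ['B', 'D', 'F']})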
You can use DataFrame.set_index with a list of all the columns other than RH and LH, reshape with DataFrame.stack, remove the last level with DataFrame.reset_index and drop=True, convert the remaining levels back to columns, and finally build the new index with DataFrame.set_index:
cols = df.columns.difference(['RH', 'LH']).tolist()
df = (df.set_index(cols)
        .stack()
        .reset_index(len(cols), drop=True)
        .reset_index(name='idx')
        .set_index('idx'))
print(df)
    Data1 Data2
idx
A1      A     B
A2      A     B
B1      C     D
C2      E     F
Or use DataFrame.melt with DataFrame.dropna, drop the variable column, and finally create the index from the idx column:
df = (df.melt(cols, value_name='idx')
        .dropna(subset=['idx'])
        .drop('variable', axis=1)
        .set_index('idx'))
print(df)
    Data1 Data2
idx
A1      A     B
B1      C     D
A2      A     B
C2      E     F
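Note the melt route interleaves the RH and LH values; if you want the same ordering as the first approach, sorting the new index restores it (a small sketch):

df = df.sort_index()  # A1, A2, B1, C2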
I have a use case where it is natural to compute each cell of a pd.DataFrame as a function of the corresponding index and column, i.e.
import pandas as pd
import numpy as np
data = np.empty((3, 3))
data[:] = np.nan
df = pd.DataFrame(data=data, index=[1, 2, 3], columns=['a', 'b', 'c'])
print(df)
>      a    b    c
> 1  NaN  NaN  NaN
> 2  NaN  NaN  NaN
> 3  NaN  NaN  NaN
and I'd like (this is only a mock example) to get a result that is a function f(index, column):
>     a   b   c
> 1  a1  b1  c1
> 2  a2  b2  c2
> 3  a3  b3  c3
To accomplish this I need something different from apply or applymap, where the lambda gets the coordinates in terms of the index and column, i.e.
def my_cell_map(ix, col):
    return col + str(ix)
Here it is possible to use numpy: add the index values to the column names with broadcasting and pass the result to the DataFrame constructor:
# broadcast: (3,) column names + (3, 1) stringified index -> (3, 3) array
a = df.columns.to_numpy() + df.index.astype(str).to_numpy()[:, None]
df = pd.DataFrame(a, index=df.index, columns=df.columns)
print(df)
    a   b   c
1  a1  b1  c1
2  a2  b2  c2
3  a3  b3  c3
EDIT: For processing by column name it is possible to use x.name together with the index values:
def f(x):
    return x.name + x.index.astype(str)

df = df.apply(f)
print(df)
    a   b   c
1  a1  b1  c1
2  a2  b2  c2
3  a3  b3  c3
EDIT1: For your function it is necessary to use another lambda to loop over the index values:
def my_cell_map(ix, col):
    return col + str(ix)

def f(x):
    return x.index.map(lambda y: my_cell_map(y, x.name))

df = df.apply(f)
print(df)
    a   b   c
1  a1  b1  c1
2  a2  b2  c2
3  a3  b3  c3
EDIT2: It is also possible to loop over the index and column values and set each cell with loc, though for a large DataFrame performance will be slow:
for c in df.columns:
    for i in df.index:
        df.loc[i, c] = my_cell_map(i, c)
print(df)
    a   b   c
1  a1  b1  c1
2  a2  b2  c2
3  a3  b3  c3
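Alternatively (a sketch, not from the original answer, reusing the same my_cell_map), the whole frame can be built in one pass with a dictionary comprehension, which avoids the repeated .loc assignment:

# one list of cell values per column; index is reattached in the constructor
df = pd.DataFrame({c: [my_cell_map(i, c) for i in df.index] for c in df.columns},
                  index=df.index)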
I have N dataframes:
df1:
time  data
1.0   a1
2.0   b1
3.0   c1
df2:
time  data
1.0   a2
2.0   b2
3.0   c2
df3:
time  data
1.0   a3
2.0   b3
3.0   c3
I want to merge all of them on time, thus getting:
time  data1  data2  data3
1.0   a1     a2     a3
2.0   b1     b2     b3
3.0   c1     c2     c3
I can assure the time values are the same in all dataframes.
How can I do this in pandas?
One idea is to use concat on the list of DataFrames; it is only necessary to create an index from time for each DataFrame first. Also, to avoid duplicated column names, the keys parameter is added, but it creates a MultiIndex in the output, so map with format is used to flatten it:
dfs = [df1, df2, df3]
dfs = [x.set_index('time') for x in dfs]
df = pd.concat(dfs, axis=1, keys=range(1, len(dfs) + 1))
# flatten the (key, name) MultiIndex columns to data1, data2, data3
df.columns = df.columns.map('{0[1]}{0[0]}'.format)
df = df.reset_index()
print(df)
   time data1 data2 data3
0   1.0    a1    a2    a3
1   2.0    b1    b2    b3
2   3.0    c1    c2    c3
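An alternative sketch (not part of the original answer) chains DataFrame.merge with functools.reduce, assuming every frame has exactly the columns time and data:

from functools import reduce

dfs = [df1, df2, df3]
# rename each data column up front so the merged result gets data1, data2, ...
dfs = [x.rename(columns={'data': f'data{i}'}) for i, x in enumerate(dfs, 1)]
df = reduce(lambda left, right: left.merge(right, on='time'), dfs)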
Hi, I have data (in Excel and in a text file as well) like:
C1  C2  C3
1   p   a
1   q   b
2   r   c
2   s   d
And I want the output like:
C1  C2   C3
1   p,q  a,b
2   r,s  c,d
How can I group the data based on column values?
I am open to anything: any library, any language, any tool, like Python, bash, or even Excel.
I think we can do this using pandas in Python, but I haven't used it before.
Any leads appreciated.
First use pandas.read_excel; the output is a DataFrame:
df = pd.read_excel('file.xlsx')
Then you can use groupby with agg and ','.join:
df = df.groupby('C1').agg(','.join).reset_index()
print(df)
   C1   C2   C3
0   1  p,q  a,b
1   2  r,s  c,d
If there are more columns in df and you need to filter only C2 and C3:
df = df.groupby('C1')[['C2', 'C3']].agg(','.join).reset_index()
print(df)
   C1   C2   C3
0   1  p,q  a,b
1   2  r,s  c,d
To save to an Excel file use DataFrame.to_excel, omitting the index:
df.to_excel('file.xlsx', index=False)
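If you work from the text file instead, the same groupby applies after pandas.read_csv; this is a sketch that assumes a whitespace-separated layout and uses placeholder file names:

import pandas as pd

df = pd.read_csv('file.txt', sep=r'\s+')  # 'file.txt' is a placeholder name
df = df.groupby('C1').agg(','.join).reset_index()
df.to_csv('out.txt', sep=' ', index=False)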
I have two dataframes df1 and df2 with key as index.
dict_1 = {'key': [1, 1, 1, 2, 2, 3], 'col1': ['a1', 'b1', 'c1', 'd1', 'e1', 'f1']}
df1 = pd.DataFrame(dict_1).set_index('key')
dict_2 = {'key': [1, 1, 2], 'col2': ['a2', 'b2', 'c2']}
df2 = pd.DataFrame(dict_2).set_index('key')
df1:
col1
key
1 a1
1 b1
1 c1
2 d1
2 e1
3 f1
df2:
col2
key
1 a2
1 b2
2 c2
Note that there are unequal numbers of rows for each index. I want to concatenate these two dataframes such that I have the following dataframe (say df3).
df3:
col1 col2
key
1 a1 a2
1 b1 b2
2 d1 c2
i.e. concatenate the two columns so that the new dataframe has the lesser (of df1 and df2) number of rows for each index.
I tried
pd.concat([df1, df2], axis=1)
but I get the following error:
ValueError: Shape of passed values is (2, 17), indices imply (2, 7)
My question: how can I concatenate df1 and df2 to get df3? Should I use DataFrame.merge instead? If so, how?
Merge/join alone will get you a lot of (hard to get rid of) duplicates. But a little trick will help:
# number the duplicates within each key: 1, 2, 3, ...
df1['count1'] = 1
df1['count1'] = df1['count1'].groupby(df1.index).cumsum()
df1
Out[198]:
    col1  count1
key
1     a1       1
1     b1       2
1     c1       3
2     d1       1
2     e1       2
3     f1       1
The same thing for df2:
df2['count2'] = 1
df2['count2'] = df2['count2'].groupby(df2.index).cumsum()
And finally:
df_aligned = df1.reset_index().merge(df2.reset_index(),
                                     left_on=['key', 'count1'],
                                     right_on=['key', 'count2'])
df_aligned
Out[199]:
   key col1  count1 col2  count2
0    1   a1       1   a2       1
1    1   b1       2   b2       2
2    2   d1       1   c2       1
Now you can restore the key index with set_index('key') and drop the no-longer-needed count1 and count2 columns.
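A minimal sketch of that cleanup step, using the names above:

df3 = (df_aligned
       .drop(columns=['count1', 'count2'])
       .set_index('key'))
print(df3)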
The biggest problem, and the reason you are not going to be able to line up the two frames the way you want, is that your keys are duplicated. How are you going to line up the a1 value in df1 with the a2 value in df2 when a1, b1, c1 (in df1) and a2, b2 (in df2) all share the same key?
Using merge is what you'll want if you can resolve the key issues:
df3 = df1.merge(df2, left_index=True, right_index=True, how='inner')
You can use inner, outer, left or right for how.
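For illustration (a sketch using the question's df1 and df2), this inner merge on the duplicated keys expands every pairwise combination per key, which is exactly the duplication the counter trick in the answer above avoids:

df3 = df1.merge(df2, left_index=True, right_index=True, how='inner')
# key 1 contributes 3 * 2 = 6 rows, key 2 contributes 2 * 1 = 2, key 3 none
print(len(df3))  # 8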
I have two data frames:
df1
A1 B1
1 a
2 s
3 d
and
df2
A1 B1
1 a
2 x
3 d
I want to compare df1 and df2 on column B1. The column A1 can be used to join. I want to know:
Which rows are different in df1 and df2 with respect to column B1?
Whether there is a mismatch in the values of column A1, e.g. whether df2 is missing some values that are present in df1 and vice versa. And if so, which ones?
I tried using merge and join but that is not what I am looking for.
I've edited the raw data to illustrate the case of A1 keys in one dataframe but not the other.
When doing your merge, you want to specify an 'outer' merge so that you can see these items with an A1 key in one dataframe but not the other.
I've included the suffixes '_1' and '_2' to indicate the dataframe source (_1 = df1 and _2 = df2) of column B1.
df1 = pd.DataFrame({'A1': [1, 2, 3, 4], 'B1': ['a', 'b', 'c', 'd']})
df2 = pd.DataFrame({'A1': [1, 2, 3, 5], 'B1': ['a', 'd', 'c', 'e']})
df3 = df1.merge(df2, how='outer', on='A1', suffixes=['_1', '_2'])
df3['check'] = df3.B1_1 == df3.B1_2
>>> df3
   A1 B1_1 B1_2  check
0   1    a    a   True
1   2    b    d  False
2   3    c    c   True
3   4    d  NaN  False
4   5  NaN    e  False
To check for missing A1 keys in df1 and df2:
# A1 value missing in `df1`
>>> df3[df3.B1_1.isnull()]
   A1 B1_1 B1_2  check
4   5  NaN    e  False
# A1 value missing in `df2`
>>> df3[df3.B1_2.isnull()]
   A1 B1_1 B1_2  check
3   4    d  NaN  False
EDIT
Thanks to @EdChum (the source of all Pandas knowledge...), the indicator parameter of merge makes the missing-key check even more direct:
df3 = df1.merge(df2, how='outer', on='A1', suffixes=['_1', '_2'], indicator=True)
df3['check'] = df3.B1_1 == df3.B1_2
>>> df3
   A1 B1_1 B1_2      _merge  check
0   1    a    a        both   True
1   2    b    d        both  False
2   3    c    c        both   True
3   4    d  NaN   left_only  False
4   5  NaN    e  right_only  False
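With the indicator column in place, the missing-key and mismatch checks can be expressed directly against _merge (a small sketch):

# keys present only in df2 (missing from df1)
print(df3[df3['_merge'] == 'right_only'])
# keys present only in df1 (missing from df2)
print(df3[df3['_merge'] == 'left_only'])
# keys present in both frames but with differing B1 values
print(df3[(df3['_merge'] == 'both') & ~df3['check']])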