I am seeing some weird behavior from pandas; maybe it's just me, but I am expecting a different result from what I am getting.
So, assuming that I have a multi-index DataFrame such as:
import pandas as pd
df = pd.DataFrame(index=list('abcde'), data={'A': range(5), 'B': range(5)})
df_first = pd.concat({'ticker1': df, 'ticker2': df, 'ticker3': df}, axis=1)
df_first.columns = df_first.columns.rename(('ticker', 'variables'))
df_first
Out[91]:
ticker ticker1 ticker2 ticker3
variables A B A B A B
a 0 0 0 0 0 0
b 1 1 1 1 1 1
c 2 2 2 2 2 2
d 3 3 3 3 3 3
e 4 4 4 4 4 4
and a second DataFrame with the same level names but reversed, such as:
df2 = pd.DataFrame(index=list('abcde'), data={'ticker1': range(5), 'ticker2': range(5)})
df_sec = pd.concat({'C': df2, 'D': df2, 'E': df2}, axis=1)
df_sec.columns = df_sec.columns.rename(('variables', 'ticker'))
df_sec
Out[93]:
variables C D E
ticker ticker1 ticker2 ticker1 ticker2 ticker1 ticker2
a 0 0 0 0 0 0
b 1 1 1 1 1 1
c 2 2 2 2 2 2
d 3 3 3 3 3 3
e 4 4 4 4 4 4
As you can see, the levels have the same names but are reversed. When I concat these two DataFrames on axis=1, it mixes up my columns:
pd.concat([df_first, df_sec], axis=1)
Out[94]:
ticker ticker1 ticker2 ticker3 C D E
variables A B A B A B ticker1 ticker2 ticker1 ticker2 ticker1 ticker2
a 0 0 0 0 0 0 0 0 0 0 0 0
b 1 1 1 1 1 1 1 1 1 1 1 1
c 2 2 2 2 2 2 2 2 2 2 2 2
d 3 3 3 3 3 3 3 3 3 3 3 3
e 4 4 4 4 4 4 4 4 4 4 4 4
I know I can swap levels first and get the expected result, such as:
pd.concat([df_first, df_sec.swaplevel(0, 1, 1)], axis=1)
Out[95]:
ticker ticker1 ticker2 ticker3 ticker1 ticker2 ticker1 ticker2 ticker1 ticker2
variables A B A B A B C C D D E E
a 0 0 0 0 0 0 0 0 0 0 0 0
b 1 1 1 1 1 1 1 1 1 1 1 1
c 2 2 2 2 2 2 2 2 2 2 2 2
d 3 3 3 3 3 3 3 3 3 3 3 3
e 4 4 4 4 4 4 4 4 4 4 4 4
But is there a way to concat based on the level names directly?
Thanks.
I can't think of anything that doesn't manipulate the columns index in some way. But this gets close to what you asked for: namely, it operates on the level name.
ln = 'variables'
pd.concat([df_first.stack(ln), df_sec.stack(ln)]).unstack(ln)
OR
ln = 'ticker'
pd.concat([df_first.stack(ln), df_sec.stack(ln)], axis=1).unstack(ln)
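Another name-driven option, as a small sketch rather than an official recipe: reorder_levels accepts level names, so you can align df_sec's column levels to df_first's by name before concatenating.
# reorder df_sec's column levels to match df_first's level names,
# then concat as usual
aligned = df_sec.reorder_levels(df_first.columns.names, axis=1)
pd.concat([df_first, aligned], axis=1)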
I have a dataframe df:
(A,B) (B,C) (D,B) (E,F)
0 3 0 1
1 1 3 0
2 2 4 2
I want to split it into different columns for all columns in df as shown below:
A B B C D B E F
0 0 3 3 0 0 1 1
1 1 1 1 3 3 0 0
2 2 2 2 4 4 2 2
and add similar columns together:
A B C D E F
0 3 3 0 1 1
1 5 1 3 0 0
2 8 2 4 2 2
How can I achieve this using pandas?
With pandas, you can use this:
out = (
    df
    .T                                       # one row per original header
    .reset_index()                           # the header strings become a column
    .assign(col=lambda x: x.pop("index").str.strip("()").str.split(","))  # '(A,B)' -> ['A', 'B']
    .explode("col")                          # one row per letter
    .groupby("col", as_index=False).sum()    # add duplicated letters together
    .set_index("col")
    .T                                       # back to the original orientation
    .rename_axis(None, axis=1)
)
# Output:
print(out)
A B C D E F
0 0 3 3 0 1 1
1 1 5 1 3 0 0
2 2 8 2 4 2 2
This assumes the column names are actual tuples, e.g. ('A', 'B'):
pd.concat([pd.DataFrame([df[i].tolist()] * len(i), index=list(i)) for i in df.columns]).sum(level=0).T
result:
A B C D E F
0 0 3 3 0 1 1
1 1 5 1 3 0 0
2 2 8 2 4 2 2
If a FutureWarning occurs (sum(level=0) is deprecated in newer pandas), use the following code instead:
pd.concat([pd.DataFrame([df[i].tolist()] * len(i), index=list(i)) for i in df.columns]).groupby(level=0).sum().T
This gives the same result.
Build a MultiIndex in the columns with Series.str.findall, then concat copies of the DataFrame with each level removed and sum the duplicates:
df.columns = df.columns.str.findall(r'(\w+)').map(tuple)
df = (pd.concat([df.droplevel(x, axis=1) for x in range(df.columns.nlevels)], axis=1)
        .groupby(level=0, axis=1)
        .sum())
print (df)
A B C D E F
0 0 3 3 0 1 1
1 1 5 1 3 0 0
2 2 8 2 4 2 2
To write the output to a file without the index, use:
df.to_csv('file.csv', index=False)
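Note that groupby(..., axis=1) is deprecated in recent pandas versions; a sketch of an equivalent without it is to transpose, group on the index, and transpose back:
df = df.T.groupby(level=0).sum().T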
You can use findall to extract the variables in the header, then melt and explode, and finally pivot_table:
out = (df
       .reset_index().melt('index')
       .assign(variable=lambda d: d['variable'].str.findall(r'(\w+)'))
       .explode('variable')
       .pivot_table(index='index', columns='variable', values='value', aggfunc='sum')
       .rename_axis(index=None, columns=None)
)
Output:
A B C D E F
0 0 3 3 0 1 1
1 1 5 1 3 0 0
2 2 8 2 4 2 2
Reproducible input:
df = pd.DataFrame({'(A,B)': [0, 1, 2],
                   '(B,C)': [3, 1, 2],
                   '(D,B)': [0, 3, 4],
                   '(E,F)': [1, 0, 2]})
Printing/saving without the index:
print(out.to_string(index=False))
A B C D E F
0 3 3 0 1 1
1 5 1 3 0 0
2 8 2 4 2 2
# as file
out.to_csv('yourfile.csv', index=False)
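For reference, here is the same accumulation written as a plain loop over the headers, using the reproducible input above (a sketch, not from the original answers):
import re
# every letter found in a header receives that header's column values
letters = sorted({p for c in df.columns for p in re.findall(r'\w+', c)})
out = pd.DataFrame(0, index=df.index, columns=letters)
for c in df.columns:
    for p in re.findall(r'\w+', c):
        out[p] += df[c]
print(out)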
My Problem is as follows:
I have a dataframe df which has 5 columns say ('A', 'B', 'C', 'D', 'E')
Now I am looking to combine these columns, for some other purpose, based on sets of columns, say GP1 = ['A', 'B', 'D'] and GP2 = ['C', 'E'], from which I will create two new columns:
df['Group1'] = df[GP1].min(axis=1)
df['Group2'] = df[GP2].max(axis=1)
However, depending on the data, column 'A' (or 'D' or 'B', or maybe all of them) may be missing from the first set, or column 'C' or 'E' (or both) may be missing from the second set.
So I am looking for code that checks whether any column from the first or second set is missing, creates 'Group1' or 'Group2' only when all the columns of that group exist, and skips creating the new column when any column in the set is missing.
How can I achieve that? I was trying for loops, but they weren't helping and the logic was becoming complicated.
An example when all the columns in both sets are there:
df_in
A B C D E
1 2 3 4 5
2 4 6 2 3
1 0 2 4 2
df_out
A B C D E Group1 Group2
1 2 3 4 5 1 5
2 4 6 2 3 2 6
1 0 2 4 2 0 2
An example when, say, column E from the second group is not there:
df_in
A B C D
1 2 3 4
2 4 6 2
1 0 2 4
df_out
A B C D Group1
1 2 3 4 1
2 4 6 2 2
1 0 2 4 0
When both A & D are missing from set A ( and only B is there from set/group 1)
df_in
B C E
2 3 5
4 6 3
0 2 2
df_out
B C E Group2
2 3 5 5
4 6 3 6
0 2 2 2
The following case, when A from set 1 is missing and C from set 2 is missing:
df_in
B D E
2 4 5
4 2 3
0 4 2
df_out
B D E
2 4 5
4 2 3
0 4 2
Any help in this direction will be immensely appreciated. Thanks
Here you go, I think you can use this:
df_out = (df_in.assign(Group1=df_in.reindex(gp1, axis=1).dropna().min(axis=1),
                       Group2=df_in.reindex(gp2, axis=1).dropna().max(axis=1))
               .dropna(axis=1, how='all'))
MCVE:
df_in = pd.read_clipboard() #Read from copy of df_in in the question above
print(df_in)
# A B C D E
# 0 1 2 3 4 5
# 1 2 4 6 2 3
# 2 1 0 2 4 2
gp1 = ['A','B','D']
gp2 = ['C','E']
df_out = (df_in.assign(Group1=df_in.reindex(gp1, axis=1).dropna().min(axis=1),
                       Group2=df_in.reindex(gp2, axis=1).dropna().max(axis=1))
               .dropna(axis=1, how='all'))
print(df_out)
# A B C D E Group1 Group2
# 0 1 2 3 4 5 1 5
# 1 2 4 6 2 3 2 6
# 2 1 0 2 4 2 0 2
df_in_copy=df_in.copy() #make a copy to reuse later
df_in = df_in.drop('E', axis=1) #Drop Col E
print(df_in)
# A B C D
# 0 1 2 3 4
# 1 2 4 6 2
# 2 1 0 2 4
df_out = (df_in.assign(Group1=df_in.reindex(gp1, axis=1).dropna().min(axis=1),
                       Group2=df_in.reindex(gp2, axis=1).dropna().max(axis=1))
               .dropna(axis=1, how='all'))
print(df_out)
# A B C D Group1
# 0 1 2 3 4 1
# 1 2 4 6 2 2
# 2 1 0 2 4 0
df_in = df_in_copy.copy() #Restore df_in from the copy made earlier
df_in = df_in.drop(['A','D'], axis=1) #Drop Columns A and D
print(df_in)
# B C E
# 0 2 3 5
# 1 4 6 3
# 2 0 2 2
df_out = (df_in.assign(Group1=df_in.reindex(gp1, axis=1).dropna().min(axis=1),
                       Group2=df_in.reindex(gp2, axis=1).dropna().max(axis=1))
               .dropna(axis=1, how='all'))
print(df_out)
# B C E
# 0 2 3 5
# 1 4 6 3
# 2 0 2 2
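If you prefer the explicit existence check the question asks for, a minimal sketch (assuming gp1 and gp2 as defined above) is:
for name, (cols, how) in {'Group1': (gp1, 'min'), 'Group2': (gp2, 'max')}.items():
    if set(cols).issubset(df_in.columns):
        # create the column only when every member of the group exists
        df_in[name] = getattr(df_in[cols], how)(axis=1)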
I have the following dataframe:
Name B C D E
1 A 1 2 2 7
2 A 7 1 1 7
3 B 1 1 3 4
4 B 2 1 3 4
5 B 3 1 3 4
What I'm trying to do is obtain a new DataFrame in which, for rows with the same "Name", the values in the "B" column are continuous. Hence, in this example, for rows with "Name" = A, the DataFrame would have to be padded with B values ranging from 1 to 7, and the values for columns C, D, E in the padded rows should be 0.
Name B C D E
1 A 1 2 2 7
2 A 2 0 0 0
3 A 3 0 0 0
4 A 4 0 0 0
5 A 5 0 0 0
6 A 6 0 0 0
7 A 7 1 1 7
8 B 1 1 3 4
9 B 2 1 3 4
10 B 3 1 3 4
What I've done so far is turn the B column values for each "Name" into continuous values:
new_idx = df_.groupby('Name').apply(lambda x: np.arange(x.index.min(), x.index.max() + 1)).apply(pd.Series).stack()
and then reindexing the original df (having set B as the index) using this new Series, but I'm having trouble reindexing with duplicates. Any help would be appreciated.
You can use:
import numpy as np

def f(x):
    a = np.arange(x.index.min(), x.index.max() + 1)
    x = x.reindex(a, fill_value=0)
    return x
new_idx = (df.set_index('B')
             .groupby('Name')
             .apply(f)
             .drop('Name', axis=1)
             .reset_index()
             .reindex(columns=df.columns))
print (new_idx)
Name B C D E
0 A 1 2 2 7
1 A 2 0 0 0
2 A 3 0 0 0
3 A 4 0 0 0
4 A 5 0 0 0
5 A 6 0 0 0
6 A 7 1 1 7
7 B 1 1 3 4
8 B 2 1 3 4
9 B 3 1 3 4
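An alternative sketch (my own, assuming B holds integers) builds the full (Name, B) index up front and reindexes once:
full_idx = pd.MultiIndex.from_tuples(
    [(name, b)
     for name, g in df.groupby('Name')['B']
     for b in range(g.min(), g.max() + 1)],
    names=['Name', 'B'])
out = (df.set_index(['Name', 'B'])
         .reindex(full_idx, fill_value=0)   # missing B values get 0 in C, D, E
         .reset_index())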
When using the drop method of a pandas.DataFrame, it accepts lists of column names but not tuples, despite the documentation saying that "list-like" arguments are acceptable. Am I reading the documentation incorrectly? I would expect my MWE to work.
MWE
import pandas as pd
df = pd.DataFrame({k: range(5) for k in list('abcd')})
df.drop(['a', 'c'], axis=1) # Works
df.drop(('a', 'c'), axis=1) # Errors
Versions - Using Python 2.7.12, Pandas 0.20.3.
The problem is that pandas uses tuples to select within a MultiIndex:
np.random.seed(345)
mux = pd.MultiIndex.from_arrays([list('abcde'), list('cdefg')])
df = pd.DataFrame(np.random.randint(10, size=(4,5)), columns=mux)
print (df)
a b c d e
c d e f g
0 8 0 3 9 8
1 4 3 4 1 7
2 4 0 9 6 3
3 8 0 3 1 5
df = df.drop(('a', 'c'), axis=1)
print (df)
b c d e
d e f g
0 0 3 9 8
1 3 4 1 7
2 0 9 6 3
3 0 3 1 5
Same as:
df = df[('a', 'c')]
print (df)
0 8
1 4
2 4
3 8
Name: (a, c), dtype: int32
Pandas treats tuples as multi-index values, so try this instead:
In [330]: df.drop(list(('a', 'c')), axis=1)
Out[330]:
b d
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4
Here is an example of deleting rows (axis=0, the default) in a multi-index DataFrame:
In [342]: x = df.set_index(np.arange(len(df), 0, -1), append=True)
In [343]: x
Out[343]:
a b c d
0 5 0 0 0 0
1 4 1 1 1 1
2 3 2 2 2 2
3 2 3 3 3 3
4 1 4 4 4 4
In [344]: x.drop((0,5))
Out[344]:
a b c d
1 4 1 1 1 1
2 3 2 2 2 2
3 2 3 3 3 3
4 1 4 4 4 4
In [345]: x.drop([(0,5), (4,1)])
Out[345]:
a b c d
1 4 1 1 1 1
2 3 2 2 2 2
3 2 3 3 3 3
So when you specify a tuple, pandas treats it as a MultiIndex label.
I used this to delete a column keyed by a tuple:
del df3[('val1', 'val2')]
and it got deleted.
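To summarize the distinction (a short sketch; drop(columns=...) requires pandas 0.21+):
labels = ('a', 'c')
df.drop(list(labels), axis=1)  # list-like of labels: drops columns 'a' and 'c'
df.drop(columns=list(labels))  # keyword form, pandas 0.21+
# df.drop(labels, axis=1)      # a tuple is read as one MultiIndex label -> KeyError here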
I want to add multiple columns to a pandas DataFrame and set them equal to an existing column. Is there a simple way of doing this? In R I would do:
df <- data.frame(a=1:5)
df[c('b','c')] <- df$a
df
a b c
1 1 1 1
2 2 2 2
3 3 3 3
4 4 4 4
5 5 5 5
In pandas this results in KeyError: "['b' 'c'] not in index":
df = pd.DataFrame({'a': np.arange(1,6)})
df[['b','c']] = df.a
You can use the .assign() method:
In [31]: df.assign(b=df['a'], c=df['a'])
Out[31]:
a b c
0 1 1 1
1 2 2 2
2 3 3 3
3 4 4 4
4 5 5 5
or a slightly more creative approach:
In [41]: cols = list('bcdefg')
In [42]: df.assign(**{col:df['a'] for col in cols})
Out[42]:
a b c d e f g
0 1 1 1 1 1 1 1
1 2 2 2 2 2 2 2
2 3 3 3 3 3 3 3
3 4 4 4 4 4 4 4
4 5 5 5 5 5 5 5
another solution:
In [60]: pd.DataFrame(np.repeat(df.values, len(cols)+1, axis=1), columns=['a']+cols)
Out[60]:
a b c d e f g
0 1 1 1 1 1 1 1
1 2 2 2 2 2 2 2
2 3 3 3 3 3 3 3
3 4 4 4 4 4 4 4
4 5 5 5 5 5 5 5
NOTE: as @Cpt_Jauchefuerst mentioned in the comments, DataFrame.assign(z=1, a=1) will add columns in alphabetical order - i.e. first a will be added to the existing columns and then z.
A pd.concat approach:
df = pd.DataFrame(dict(a=range(5)))
pd.concat([df.a] * 5, axis=1, keys=list('abcde'))
a b c d e
0 0 0 0 0 0
1 1 1 1 1 1
2 2 2 2 2 2
3 3 3 3 3 3
4 4 4 4 4 4
Turns out you can use a loop to do this:
for i in ['b','c']: df[i] = df.a
You can set them individually if you're only dealing with a few columns:
df['b'] = df['a']
df['c'] = df['a']
or you can use a loop as you discovered.
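A related sketch (my own) that creates all the copies at once by concatenating a dict of Series, sidestepping the KeyError from the multi-column assignment:
# keys of the dict become the new column names
new = pd.concat({c: df['a'] for c in ['b', 'c']}, axis=1)
df = pd.concat([df, new], axis=1)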