I have a DataFrame mydataframe with many columns (over 75) and a default numeric index:
Col1 Col2 Col3 ... Coln
I need to rearrange the column positions as follows:
Col1 Col3 Col2 ... Coln
I can get the index of Col2 using:
mydataframe.columns.get_loc("Col2")
but I can't figure out how to swap them without manually listing all the columns and rearranging them in a list.
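A minimal sketch of how those get_loc positions could drive the swap, without listing every column by hand (the helper name swap_columns is my own, not from the question):

def swap_columns(df, c1, c2):
    # Look up the two positions, swap the names in a copy of the column list,
    # and reindex the frame with the new order.
    cols = list(df.columns)
    i, j = df.columns.get_loc(c1), df.columns.get_loc(c2)
    cols[i], cols[j] = cols[j], cols[i]
    return df[cols]

mydataframe = swap_columns(mydataframe, 'Col2', 'Col3')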
Try:
new_cols = ['Col1', 'Col3', 'Col2'] + list(df.columns[3:])
df = df[new_cols]
How to proceed:
store the names of columns in a list;
swap the names in that list;
apply the new order to the dataframe.
code:
l = list(df)                                  # all column names, in order
i1, i2 = l.index('Col2'), l.index('Col3')     # positions of the two columns
l[i2], l[i1] = l[i1], l[i2]                   # swap the names in the list
df = df[l]                                    # reorder the dataframe
I'm imagining, as the other answers assume, that you want to swap the positions of 2 columns regardless of where they are.
This is a creative approach:
Create a dictionary that defines which columns get switched with what.
Define a function that takes a column name and returns an ordering.
Use that function as a key for sorting.
d = {'Col3': 'Col2', 'Col2': 'Col3'}           # which columns get switched with what
k = lambda x: df.columns.get_loc(d.get(x, x))  # sort key: position of the swapped-in name
df[sorted(df, key=k)]
Col0 Col1 Col3 Col2 Col4
0 0 1 3 2 4
1 5 6 8 7 9
2 10 11 13 12 14
3 15 16 18 17 19
4 20 21 23 22 24
Setup
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(25).reshape(5, 5)).add_prefix('Col')
Using np.r_ to create an array of column indices:
Given a sample dataframe as follows:
df:
col1 col2 col3 col4 col5 col6 col7 col8 col9 col10
0 0 1 2 3 4 5 6 7 8 9
1 10 11 12 13 14 15 16 17 18 19
i, j = df.columns.slice_locs('col2', 'col10')
df[df.columns[np.r_[:i, i+1, i, i+2:j]]]
Out[142]:
col1 col3 col2 col4 col5 col6 col7 col8 col9 col10
0 0 2 1 3 4 5 6 7 8 9
1 10 12 11 13 14 15 16 17 18 19
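To see what that fancy index evaluates to on the 10-column sample above (a quick check, not part of the original answer):

import numpy as np

i, j = 1, 10                      # what slice_locs('col2', 'col10') returns here
print(np.r_[:i, i+1, i, i+2:j])   # [0 2 1 3 4 5 6 7 8 9] -> col1, col3, col2, col4..col10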
Related
I have a dataframe as such:
Col1 Col2 Col3 ... Col64 Col1 Volume Col2 Volume ... Col64 Volume Col1 Value Col2 Value ... Col64 Value
2 3 4 5 5 7 9 3 5
3 4 5 11 8 6 5 6 5
5 3 4 6 10 11 5 3 4
I want to multiply Col1 with Col1 Volume, divide by Col1 Value, and place the result in a new column called 'Col1 Result'.
Similarly, multiply Col2 with Col2 Volume, divide by Col2 Value, and place the result in a new column called 'Col2 Result'.
I wish to do this for every row of those columns.
The output should be as follows, and these columns should be appended to the existing dataframe:
Col1 Result Col2 Result
3.33 4.2
6 4.8
16.6 8.25
...
How can I perform this operation? It also has to be a 1-to-1 multiplication, that is, only the first row of Col1 should be multiplied with the first row of Col1 Volume and divided by the first row of Col1 Value.
Doing it manually would take a lot of time.
Use DataFrame.filter to get all the columns ending with Volume and Value (the $ anchors the end of the string), remove the substrings, then select those columns from df, multiply and divide them, add a suffix with DataFrame.add_suffix, replace missing values with 0 and join the result back to the original DataFrame:
df1 = df.filter(regex='Volume$').rename(columns=lambda x: x.replace(' Volume',''))
df2 = df.filter(regex='Value$').rename(columns=lambda x: x.replace(' Value',''))
df = df.join(df[df1.columns].mul(df1).div(df2).add_suffix(' Result').fillna(0))
print (df)
Col1 Col2 Col3 Col64 Col1 Volume Col2 Volume Col64 Volume \
0 2 3 4 5 5 7 9
1 3 4 5 11 8 6 5
Col1 Value Col2 Value Col64 Value Col1 Result Col2 Result Col64 Result
0 3 5 7 3.333333 4.2 6.428571
1 6 5 7 4.000000 4.8 7.857143
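For reference, a minimal construction of the sample frame behind the output above (values taken from the printout; only a few of the 64 column groups are included):

import pandas as pd

df = pd.DataFrame({
    'Col1': [2, 3], 'Col2': [3, 4], 'Col3': [4, 5], 'Col64': [5, 11],
    'Col1 Volume': [5, 8], 'Col2 Volume': [7, 6], 'Col64 Volume': [9, 5],
    'Col1 Value': [3, 6], 'Col2 Value': [5, 5], 'Col64 Value': [7, 7],
})

Running the three lines above against this frame reproduces the printed result.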
I have two dataframes, first one is:
col1 col2 col3
1 14 2 6
2 12 3 3
3 9 4 2
Second one is:
col4 col5 col6
2 14 2 6
3 12 3 3
I want to combine them, keeping the index values from the second one and the row values from the first one.
The result will be like this:
col1 col2 col3
2 12 3 3
3 9 4 2
My solution was pd.concat([df2, df1], axis=1, join='inner').drop(df2.columns, axis=1), but I believe there is a more efficient way to do this.
You can use the index from df2 with loc on df1:
df1.loc[df2.index]
Output:
col1 col2 col3
2 12 3 3
3 9 4 2
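If df2's index could contain labels that are missing from df1, a reindex-based variant (an alternative sketch, not from the original answer) avoids the KeyError that .loc would raise for missing labels:

df1.reindex(df2.index)   # missing labels become NaN rows instead of raising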
I have a dataframe d1 with a MultiIndex of col1 and col2:
col3 col4 col5
col1 col2
1 2 3 4 5
2 3 4 5 6
And another dataframe d2 with the exact same structure:
col3 col4 col5
col1 col2
20 30 40 50 60
2 3 44 55 66
How do I do d1.append(d2) so that it becomes the following, where the new rows override the previous keys:
col3 col4 col5
col1 col2
1 2 3 4 5
20 30 40 50 60
2 3 44 55 66
Try combine_first:
out = d2.combine_first(d1)
You could use pandas.concat and keep the last occurrence of each key:
pd.concat([df1, df2]).groupby(level=[0, 1]).last()
@BENY's answer is more user friendly and readable.
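For reference, a minimal construction of the two frames (index levels and values taken from the question). Both approaches keep d2's values for the duplicated key (2, 3), though the resulting row order may differ from the listing above:

import pandas as pd

d1 = pd.DataFrame({'col1': [1, 2], 'col2': [2, 3],
                   'col3': [3, 4], 'col4': [4, 5], 'col5': [5, 6]}).set_index(['col1', 'col2'])
d2 = pd.DataFrame({'col1': [20, 2], 'col2': [30, 3],
                   'col3': [40, 44], 'col4': [50, 55], 'col5': [60, 66]}).set_index(['col1', 'col2'])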
Hello, I have a df such as:
I wondered how I can subset rows where:
COL1 contains a string "ok"
COL2 > 4
COL3 < 4
Here is an example:
COL1 COL2 COL3
AB_ok_7 5 2
AB_ok_4 2 5
AB_uy_2 5 2
AB_ok_2 2 2
U_ok_7 12 3
I should display only:
COL1 COL2 COL3
AB_ok_7 5 2
U_ok_7 12 3
Like this:
In [2288]: df[df['COL1'].str.contains('ok') & df['COL2'].gt(4) & df['COL3'].lt(4)]
Out[2288]:
COL1 COL2 COL3
0 AB_ok_7 5 2
4 U_ok_7 12 3
You can use boolean indexing, chaining all the conditions:
m = df['COL1'].str.contains('ok')
m1 = df['COL2'].gt(4)
m2 = df['COL3'].lt(4)
df[m & m1 & m2]
COL1 COL2 COL3
0 AB_ok_7 5 2
4 U_ok_7 12 3
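An equivalent one-liner with DataFrame.query (a sketch; the string-accessor call needs engine='python'):

df.query("COL1.str.contains('ok') and COL2 > 4 and COL3 < 4", engine='python')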
I have a dataframe like this:
df1
col1 col2 col3 col4
1 2 A S
3 4 A P
5 6 B R
7 8 B B
I have another data frame:
df2
col5 col6 col3
9 10 A
11 12 R
I want to join these two data frames: if a value of col3 or col4 of df1 matches a col3 value of df2, it should join.
The final data frame will look like this:
df3
col1 col2 col3 col5 col6
1 2 A 9 10
3 4 A 9 10
5 6 R 11 12
If the col3 value is present in col3 of df2, join via col3; otherwise join via col4, if that value is present in col3 of df2.
What is the most efficient way to do this using pandas/python?
Use a double merge with the default inner join; for the second merge, filter out the rows already matched in df3, and finally concat them together:
df3 = df1.drop('col4', axis=1).merge(df2, on='col3')
df4 = (df1.drop('col3', axis=1).rename(columns={'col4':'col3'})
.merge(df2[~df2['col3'].isin(df1['col3'])], on='col3'))
df = pd.concat([df3, df4],ignore_index=True)
print (df)
col1 col2 col3 col5 col6
0 1 2 A 9 10
1 3 4 A 9 10
2 5 6 R 11 12
EDIT: Use left joins and finally combine_first:
df3 = df1.drop('col4', axis=1).merge(df2, on='col3', how='left')
df4 = (df1.drop('col3', axis=1).rename(columns={'col4':'col3'})
.merge(df2, on='col3', how='left'))
df = df3.combine_first(df4)
print (df)
col1 col2 col3 col5 col6
0 1 2 A 9.0 10.0
1 3 4 A 9.0 10.0
2 5 6 B 11.0 12.0
3 7 8 B NaN NaN
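For reference, a minimal construction of the two sample frames (values taken from the question) that both snippets above can be run against:

import pandas as pd

df1 = pd.DataFrame({'col1': [1, 3, 5, 7], 'col2': [2, 4, 6, 8],
                    'col3': ['A', 'A', 'B', 'B'], 'col4': ['S', 'P', 'R', 'B']})
df2 = pd.DataFrame({'col5': [9, 11], 'col6': [10, 12], 'col3': ['A', 'R']})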