Pandas. How to sort a DataFrame without changing index? - python

df2 = pd.DataFrame({
    "A": [26, 2, 3],
    "B": [0, 7, 1],
    "C": [7, 5, 4]
}, index=list('abc'))
df2
Output:
A B C
a 26 0 7
b 2 7 5
c 3 1 4
df2.sort_values(['B', 'A'], ascending=[False, True]) gives:
A B C
b 2 7 5
c 3 1 4
a 26 0 7
The index is now shuffled into the new order, but I want it to stay the same even after sorting. The ignore_index parameter just resets the index to 0 through n-1, and sort_index doesn't help either, because the index values may not be in lexicographical order.

You can add the index back after sorting:
df2 = df2.sort_values(['B', 'A'], ascending=[False, True]).reset_index(drop=True)
df2['index'] = ['a', 'b', 'c']
df2.set_index('index', inplace=True)
print(df2)
A B C
index
a 2 7 5
b 3 1 4
c 26 0 7
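If you'd rather not hard-code the labels, a variant of the same idea (a minimal sketch, starting from the original df2) saves the index before sorting and reattaches it positionally:
original_index = df2.index
df2 = df2.sort_values(['B', 'A'], ascending=[False, True])
df2.index = original_index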

Use the DataFrame constructor:
df2 = pd.DataFrame({
    "A": [26, 2, 3],
    "B": [0, 7, 1],
    "C": [7, 5, 4]
}, index=list('abc'))
print(df2)
Output:
A B C
a 26 0 7
b 2 7 5
c 3 1 4
Create a new DataFrame with the constructor:
df2 = pd.DataFrame(df2.sort_values(['B', 'A'], ascending=[False, True]).to_numpy(),
                   index=df2.index, columns=df2.columns)
print(df2)
Output:
A B C
a 2 7 5
b 3 1 4
c 26 0 7
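A related one-liner, as a hedged sketch: assign the sorted values back in place with .loc, which keeps both the index and the column labels untouched (this assumes the columns share a compatible dtype, as the all-integer columns here do):
df2.loc[:] = df2.sort_values(['B', 'A'], ascending=[False, True]).to_numpy()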

Related

Successively filling in a new column of a pandas DataFrame

I would like to extend an existing pandas DataFrame and fill the new column successively:
df = pd.DataFrame({'col1': [1, 2, 3, 4, 5, 6], 'col2': [7, 8, 9, 10, 11, 12]})
df['col3'] = pd.Series(['a' for x in df[:3]])
df['col3'] = pd.Series(['b' for x in df[3:4]])
df['col3'] = pd.Series(['c' for x in df[4:]])
I would expect a result as follows:
col1 col2 col3
0 1 7 a
1 2 8 a
2 3 9 a
3 4 10 b
4 5 11 c
5 6 12 c
However, my code fails and I get:
col1 col2 col3
0 1 7 a
1 2 8 a
2 3 9 NaN
3 4 10 NaN
4 5 11 NaN
5 6 12 NaN
What is wrong?
Use the loc accessor:
df = pd.DataFrame({'col1': [1, 2, 3, 4, 5, 6], 'col2': [7, 8, 9, 10, 11, 12]})
df.loc[:2,'col3'] = 'a'
df.loc[3,'col3'] = 'b'
df.loc[4:,'col3'] = 'c'
df
   col1  col2 col3
0     1     7    a
1     2     8    a
2     3     9    a
3     4    10    b
4     5    11    c
5     6    12    c
As @Amirhossein Kiani and @Emma note in the comments, you're never using df itself to assign values, so there is no need to slice it. Since you can assign a list to a DataFrame column, the following suffices:
df['col3'] = ['a'] * 3 + ['b'] + ['c'] * (len(df) - 4)
You can also use numpy.select to assign values. The idea is to create a list of boolean conditions for certain index ranges and select values accordingly: if the index is less than 3, select 'a'; if the index is 3 (i.e. between 3 and 4, left-inclusive), select 'b'; otherwise fall back to the default 'c'.
import numpy as np
df['col3'] = np.select([df.index<3, df.index.to_series().between(3, 4, inclusive='left')], ['a','b'], 'c')
Output:
col1 col2 col3
0 1 7 a
1 2 8 a
2 3 9 a
3 4 10 b
4 5 11 c
5 6 12 c
Every time you do something like df['col3'] = pd.Series(['a' for x in df[:3]]), you assign a brand-new Series to the column col3, overwriting the previous assignment; and because the Series is shorter than the DataFrame, the unmatched index positions are filled with NaN. One alternative is to build the new column in full first, then assign it to the df.
df = pd.DataFrame({'col1': [1, 2, 3, 4, 5, 6], 'col2': [7, 8, 9, 10, 11, 12]})
new_col = ['a' for _ in range(3)] + ['b'] + ['c' for _ in range(4, len(df))]
df['col3'] = pd.Series(new_col)
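The NaN pattern in the failing output comes from index alignment: when a Series is assigned to a column, pandas matches on index labels and fills unmatched rows with NaN, and each assignment replaces the entire column. A minimal illustration of the alignment rule:
import pandas as pd
df = pd.DataFrame({'col1': [1, 2, 3, 4, 5, 6]})
s = pd.Series(['b'], index=[3])  # a Series defined only at index label 3
df['col3'] = s                   # aligned on index: every other row gets NaN
print(df['col3'].tolist())       # [nan, nan, nan, 'b', nan, nan]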

Lookup Values by Corresponding Column Header in Pandas 1.2.0 or newer

The operation pandas.DataFrame.lookup is "Deprecated since version 1.2.0", and has since invalidated a lot of previous answers.
This post attempts to function as a canonical resource for looking up corresponding row/col pairs in pandas versions 1.2.0 and newer.
Standard LookUp Values With Default Range Index
Given the following DataFrame:
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
Col A B
0 B 1 5
1 A 2 6
2 A 3 7
3 B 4 8
I would like to be able to look up the corresponding value in the column specified in Col, so that my result looks like:
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 B 4 8 8
Standard LookUp Values With a Non-Default Index
Non-Contiguous Range Index
Given the following DataFrame:
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]},
                  index=[0, 2, 8, 9])
Col A B
0 B 1 5
2 A 2 6
8 A 3 7
9 B 4 8
I would like to preserve the index but still find the correct corresponding Value:
Col A B Val
0 B 1 5 5
2 A 2 6 2
8 A 3 7 3
9 B 4 8 8
MultiIndex
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]},
                  index=pd.MultiIndex.from_product([['C', 'D'], ['E', 'F']]))
Col A B
C E B 1 5
F A 2 6
D E A 3 7
F B 4 8
I would like to preserve the index but still find the correct corresponding Value:
Col A B Val
C E B 1 5 5
F A 2 6 2
D E A 3 7 3
F B 4 8 8
LookUp with Default For Unmatched/Not-Found Values
Given the following DataFrame
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'C'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
Col A B
0 B 1 5
1 A 2 6
2 A 3 7
3 C 4 8 # Column C does not correspond with any column
I would like to look up the corresponding value if one exists; otherwise I'd like it to default to 0:
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 C 4 8 0 # Default value 0 since C does not correspond
LookUp with Missing Values in the lookup Col
Given the following DataFrame:
Col A B
0 B 1 5
1 A 2 6
2 A 3 7
3 NaN 4 8 # <- Missing Lookup Key
I would like any NaN values in Col to result in a NaN value in Val
Col A B Val
0 B 1 5 5.0
1 A 2 6 2.0
2 A 3 7 3.0
3 NaN 4 8 NaN # NaN to indicate missing
Standard LookUp Values With Any Index
The documentation on Looking up values by index/column labels recommends using NumPy indexing via factorize and reindex as the replacement for the deprecated DataFrame.lookup.
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]},
                  index=[0, 2, 8, 9])
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
df
Col A B Val
0 B 1 5 5
2 A 2 6 2
8 A 3 7 3
9 B 4 8 8
factorize is used to encode the values of the column as an "enumerated type".
idx, col = pd.factorize(df['Col'])
# idx = array([0, 1, 1, 0], dtype=int64)
# col = Index(['B', 'A'], dtype='object')
Notice that B corresponds to 0 and A corresponds to 1. reindex is used to ensure that columns appear in the same order as the enumeration:
df.reindex(columns=col)
B A # B appears first (location 0), A appears second (location 1)
0 5 1
1 6 2
2 7 3
3 8 4
We need to create an appropriate range indexer compatible with NumPy indexing.
The standard approach is to use np.arange based on the length of the DataFrame:
np.arange(len(df))
[0 1 2 3]
Now NumPy indexing will work to select values from the DataFrame:
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
[5 2 3 8]
*Note: This approach will always work regardless of the type of index.
MultiIndex
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]},
                  index=pd.MultiIndex.from_product([['C', 'D'], ['E', 'F']]))
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
Col A B Val
C E B 1 5 5
F A 2 6 2
D E A 3 7 3
F B 4 8 8
Why use np.arange and not df.index directly?
Standard Contiguous Range Index
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx]
In this case only there is no error, because np.arange(len(df)) happens to coincide with df.index.
df
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 B 4 8 8
Non-Contiguous Range Index Error
Raises IndexError:
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]},
                  index=[0, 2, 8, 9])
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx]
df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx]
IndexError: index 8 is out of bounds for axis 0 with size 4
MultiIndex Error
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]},
                  index=pd.MultiIndex.from_product([['C', 'D'], ['E', 'F']]))
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx]
Raises IndexError:
df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx]
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
LookUp with Default For Unmatched/Not-Found Values
There are a few approaches.
First let's look at what happens by default if there is a non-corresponding value:
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'C'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
# Col A B
# 0 B 1 5
# 1 A 2 6
# 2 A 3 7
# 3 C 4 8
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
Col A B Val
0 B 1 5 5.0
1 A 2 6 2.0
2 A 3 7 3.0
3 C 4 8 NaN # NaN Represents the Missing Value in C
If we look at why the NaN values are introduced, we will find that when factorize goes through the column it will enumerate all groups present regardless of whether they correspond to a column or not.
For this reason, when we reindex the DataFrame we will end up with the following result:
idx, col = pd.factorize(df['Col'])
# idx = array([0, 1, 1, 2], dtype=int64)
# col = Index(['B', 'A', 'C'], dtype='object')
df.reindex(columns=col)
B A C
0 5 1 NaN
1 6 2 NaN
2 7 3 NaN
3 8 4 NaN # reindex adds the missing column with the default `NaN`
If we want to specify a default value, we can specify the fill_value argument of reindex which allows us to modify the behaviour as it relates to missing column values:
idx, col = pd.factorize(df['Col'])
# idx = array([0, 1, 1, 2], dtype=int64)
# col = Index(['B', 'A', 'C'], dtype='object')
df.reindex(columns=col, fill_value=0)
B A C
0 5 1 0
1 6 2 0
2 7 3 0
3 8 4 0 # Notice reindex adds the missing column with the specified value `0`
This means that we can do:
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(
    columns=col,
    fill_value=0  # Default value for missing column values
).to_numpy()[np.arange(len(df)), idx]
df:
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 C 4 8 0
*Notice the dtype of the column is int, since NaN was never introduced, and, therefore, the column type was not changed.
LookUp with Missing Values in the lookup Col
factorize has a default na_sentinel=-1, meaning that when NaN values appear in the column being factorized, the resulting idx value is -1.
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', np.nan],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
# Col A B
# 0 B 1 5
# 1 A 2 6
# 2 A 3 7
# 3 NaN 4 8 # <- Missing Lookup Key
idx, col = pd.factorize(df['Col'])
# idx = array([ 0, 1, 1, -1], dtype=int64)
# col = Index(['B', 'A'], dtype='object')
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
# Col A B Val
# 0 B 1 5 5
# 1 A 2 6 2
# 2 A 3 7 3
# 3 NaN 4 8 4 <- Value From A
This -1 means that, by default, we'll be pulling from the last column when we reindex. Notice that col still only contains the values B and A, meaning that we will end up with the value from A in Val for the last row.
The easiest way to handle this is to fillna Col with some value that cannot be found in the column headers.
Here I use the empty string '':
idx, col = pd.factorize(df['Col'].fillna(''))
# idx = array([0, 1, 1, 2], dtype=int64)
# col = Index(['B', 'A', ''], dtype='object')
Now when I reindex, the '' column will contain NaN values meaning that the lookup produces the desired result:
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', np.nan],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
idx, col = pd.factorize(df['Col'].fillna(''))
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
df:
Col A B Val
0 B 1 5 5.0
1 A 2 6 2.0
2 A 3 7 3.0
3 NaN 4 8 NaN # Missing as expected
Other Approaches to LookUp
There are 2 other approaches to performing this operation:
apply (Intuitive, but quite slow)
apply can be used on axis=1 in order to use the Column values as the key:
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
df['Val'] = df.apply(lambda row: row[row['Col']], axis=1)
df
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 B 4 8 8
This operation will work regardless of index type:
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]},
                  index=[0, 2, 8, 9])
# Col A B
# 0 B 1 5
# 2 A 2 6
# 8 A 3 7
# 9 B 4 8
df['Val'] = df.apply(lambda row: row[row['Col']], axis=1)
df:
Col A B Val
0 B 1 5 5
2 A 2 6 2
8 A 3 7 3
9 B 4 8 8
When dealing with missing/non-corresponding values, Series.get can be used to remedy the issue:
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'C', np.nan],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
# Col A B
# 0 B 1 5
# 1 A 2 6
# 2 C 3 7 <- Non Corresponding
# 3 NaN 4 8 <- Missing
df['Val'] = df.apply(lambda row: row.get(row['Col']), axis=1)
Col A B Val
0 B 1 5 5.0
1 A 2 6 2.0
2 C 3 7 NaN # Missing value
3 NaN 4 8 NaN # Missing value
With Default Value
df['Val'] = df.apply(lambda row: row.get(row['Col'], default=-1), axis=1)
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 C 3 7 -1 # Default -1
3 NaN 4 8 -1 # Default -1
apply is extremely flexible and modifications are straightforward; however, the general iterative approach, as well as all the individual Series lookups, can become extremely costly for large DataFrames.
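To see the cost difference on your own data, a small timeit harness can be used (a sketch only; the size and repetition count below are arbitrary, and no timings are claimed here):
import timeit
setup = '''
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': np.random.choice(['A', 'B'], 10_000),
                   'A': np.arange(10_000),
                   'B': np.arange(10_000)})
'''
# factorize + reindex + NumPy indexing
print(timeit.timeit(
    "idx, col = pd.factorize(df['Col']); "
    "df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]",
    setup=setup, number=10))
# row-wise apply
print(timeit.timeit(
    "df.apply(lambda row: row[row['Col']], axis=1)",
    setup=setup, number=10))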
get_indexer (limited)
Index.get_indexer can be used to convert the column values into an indexer for the DataFrame. This means there is no need to reindex the DataFrame, as the indexer corresponds to the DataFrame as a whole.
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
df['Val'] = df.to_numpy()[df.index, df.columns.get_indexer(df['Col'])]
df
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 B 4 8 8
This approach is reasonably fast; however, missing values are represented by -1, meaning that if a value is missing it will grab the value from the -1 column (the last column in the DataFrame).
import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8],
                   'Col': ['B', 'A', 'A', 'C']})
# A B Col <- Col is now the Last Col
# 0 1 5 B
# 1 2 6 A
# 2 3 7 A
# 3 4 8 C <- Notice Col `C` does not correspond to a Valid Column Header
df['Val'] = df.to_numpy()[df.index, df.columns.get_indexer(df['Col'])]
df:
A B Col Val
0 1 5 B 5
1 2 6 A 2
2 3 7 A 3
3 4 8 C C # <- Value from the last column in the DataFrame (index -1)
It is also notable that not reindexing the DataFrame means converting the entire DataFrame to NumPy, which can be very costly if there are many unrelated columns that all need to be converted:
import numpy as np
import pandas as pd
df = pd.DataFrame({1: 10,
                   2: 20,
                   3: 't',
                   4: 40,
                   5: np.nan,
                   'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
df['Val'] = df.to_numpy()[df.index, df.columns.get_indexer(df['Col'])]
df.to_numpy()
[[10 20 't' 40 nan 'B' 1 5 5]
[10 20 't' 40 nan 'A' 2 6 2]
[10 20 't' 40 nan 'A' 3 7 3]
[10 20 't' 40 nan 'B' 4 8 8]]
Compared to the reindexing approach which only contains columns relevant to the column values:
df.reindex(columns=['B', 'A']).to_numpy()
[[5 1]
[6 2]
[7 3]
[8 4]]
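If you want get_indexer's speed but also need to guard against non-corresponding values, one possible workaround (a sketch, not from the original post) is to mask the -1 positions after the lookup:
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'C'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
indexer = df.columns.get_indexer(df['Col'])  # -1 where Col has no matching header
vals = df.to_numpy()[np.arange(len(df)), indexer]
df['Val'] = np.where(indexer == -1, np.nan, vals)  # overwrite the bogus -1 lookups
# Row 3 ('C') now gets NaN instead of the value from the last column.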
Another option is to build a tuple of the lookup columns, pivot the dataframe, and select the relevant columns with the tuples:
cols = [(ent, ent) for ent in df.Col.unique()]
df.assign(Val=df.pivot(index=None, columns='Col')
                .reindex(columns=cols)
                .ffill(axis=1)
                .iloc[:, -1])
Col A B Val
0 B 1 5 5.0
2 A 2 6 2.0
8 A 3 7 3.0
9 B 4 8 8.0
Another possible method is to use melt:
df['value'] = (df.melt('Col', ignore_index=False)
                 .loc[lambda x: x['Col'] == x['variable'], 'value'])
print(df)
# Output:
Col A B value
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 B 4 8 8
This method also works with missing/non-corresponding values (using the DataFrame with 'C' and NaN in Col from above):
df['value'] = (df.melt('Col', ignore_index=False)
                 .loc[lambda x: x['Col'] == x['variable'], 'value'])
print(df)
# Output
Col A B value
0 B 1 5 5.0
1 A 2 6 2.0
2 C 3 7 NaN
3 NaN 4 8 NaN
You can replace .loc[...] with query(...); it's a little slower, although more expressive:
df['value'] = df.melt('Col', ignore_index=False).query('Col == variable')['value']

Is there a way to match serial numbers from two dataframes and add a list of Series (from rows) from df2 into a new column in df1 (Python, pandas)

As the title says, I am looking to generate a list (or other dtype) of all matching serial numbers from df2 and store them inside a new column in df1, so that when I pull up a record (product) from df1, I can find all the review scores for that product, matched up by serial number.
data1 = {'serialNumbers': [1, 2, 3, 4, 5],
         'product': ['a', 'b', 'c', 'd', 'e']}
data2 = {'reviewScore': [5, 1, 4, 1, 5, 2, 4, 3, 1, 3, 4],
         'serialNumbers': [1, 1, 1, 1, 3, 4, 4, 2, 3, 3, 4],
         'otherData': ['a', 'b', 'c', 'd', 'e', 'a', 'b', 'c', 'd', 'e', 'a']}
df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)
df1
serialNumbers product
0 1 a
1 2 b
2 3 c
3 4 d
4 5 e
df2
reviewScore serialNumbers
0 5 1
1 1 1
2 4 1
3 1 1
4 5 3
5 2 4
6 4 4
7 3 2
8 1 3
9 3 3
10 4 4
desired output:
serialNumbers product reviewData
0 1 a [5 : a , 1 : b, 4 : c, 1 : d]
1 2 b [3 : c]
2 3 c [5 : e, 1 : d, 3 : e]
3 4 d [2 : a, 4 : b, 4 : a]
4 5 e []
You can use a combination of pd.merge, groupby, and agg. Let's break down the code below:
We are left-merging df1 with a grouped version of df2, which means we keep all information from df1 and attach onto it the result of the groupby.
The tolist() within the agg function returns all the reviewScores per serialNumber.
res = pd.merge(df1,
               df2.groupby('serialNumbers').agg({'reviewScore': lambda x: x.tolist()}).reset_index(),
               how='left')
which prints:
serialNumbers product reviewScore
0 1 a [5, 1, 4, 1]
1 2 b [3]
2 3 c [5, 1, 3]
3 4 d [2, 4, 4]
4 5 e NaN
EDIT 1:
Given your updated question, try this:
df2['temp'] = df2['reviewScore'].astype(str) + ' : ' + df2['otherData'].astype(str)
res = pd.merge(df1,
               df2.groupby('serialNumbers').agg({'temp': lambda x: x.tolist()}).reset_index(),
               how='left')
which prints:
serialNumbers product temp
0 1 a [5 : a, 1 : b, 4 : c, 1 : d]
1 2 b [3 : c]
2 3 c [5 : e, 1 : d, 3 : e]
3 4 d [2 : a, 4 : b, 4 : a]
4 5 e NaN
Note that I am not sure this is the most efficient (or most pythonic) way to do this, but I think it can get you what you need.
EDIT 2:
df2['temp1'] = df2[['reviewScore','otherData']].values.tolist()
res = pd.merge(df1,
               df2.groupby('serialNumbers').agg({'temp1': lambda x: x.tolist()}).reset_index(),
               how='left')
serialNumbers product temp1
0 1 a [[5, a], [1, b], [4, c], [1, d]]
1 2 b [[3, c]]
2 3 c [[5, e], [1, d], [3, e]]
3 4 d [[2, a], [4, b], [4, a]]
4 5 e NaN
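If you want the empty list shown in the desired output instead of NaN for unmatched serial numbers (serial 5 here), one hedged follow-up, continuing from the res of EDIT 2, is to replace the missing entries after the merge:
res['temp1'] = res['temp1'].apply(lambda x: x if isinstance(x, list) else [])
print(res.loc[4, 'temp1'])  # [] instead of NaN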

Pass lists of columns to Pandas DataFrame instead of lists of rows

I am trying to create a DataFrame like this:
column_names= ["a", "b", "c"]
vals = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
df = pd.DataFrame(vals, columns=column_names)
Which results in the following DataFrame:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
I suppose this is the expected result. However, I am trying to achieve this result:
a b c
0 1 4 7
1 2 5 8
2 3 6 9
Where each nested list in vals corresponds to a whole column instead of a row.
Is there a way to get the above DataFrame without changing the way the data is passed to the constructor? Or even a method I can call to transpose the DataFrame?
Just zip it:
df = pd.DataFrame(dict(zip(column_names, vals)))
Outputs:
a b c
0 1 4 7
1 2 5 8
2 3 6 9
Try the column naming in a different step -
column_names= ["a", "b", "c"]
vals = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
df = pd.DataFrame(vals).T
df.columns = column_names
a b c
0 1 4 7
1 2 5 8
2 3 6 9
Or if you can use numpy, you can do it in one step -
import numpy as np
column_names= ["a", "b", "c"]
vals = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
df = pd.DataFrame(vals.T, columns=column_names)
print(df)
a b c
0 1 4 7
1 2 5 8
2 3 6 9
Use transpose (df.T):
In [3397]: df = df.T.reset_index(drop=True)
In [3398]: df.columns = column_names
In [3399]: df
Out[3399]:
a b c
0 1 4 7
1 2 5 8
2 3 6 9
If using the constructor, it is simpler to use zip with unpacking of the nested lists via *:
column_names= ["a", "b", "c"]
vals = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
df = pd.DataFrame(zip(*vals), columns=column_names)
print (df)
a b c
0 1 4 7
1 2 5 8
2 3 6 9
Solutions if the DataFrame was already created:
df = pd.DataFrame(vals, columns=column_names)
Use DataFrame.T and reassign columns with index values:
df1 = df.T
df1.columns, df1.index = df1.index, df1.columns
print (df1)
a b c
0 1 4 7
1 2 5 8
2 3 6 9
One line solution with transpose, DataFrame.set_axis and DataFrame.reset_index:
df1 = df.T.set_axis(column_names, axis=1).reset_index(drop=True)
print (df1)
a b c
0 1 4 7
1 2 5 8
2 3 6 9
Or transpose only the NumPy array (thank you @Henry Yik); note that this in-place assignment works here because the frame is square:
df.loc[:] = df.T.to_numpy()

change column name using index

import pandas as pd
d = {'one': [1, 2, 3, 4, 5],
     'one': [9, 8, 7, 6, 5],
     'three': ['a', 'b', 'c', 'd', 'e']}
df = pd.DataFrame(d)
I have a bigger DataFrame with multiple columns having the same name.
I want to change a column name by its number, as in R,
e.g. colnames(df)[2] = 'two'.
I want to change the second column name 'one' to 'two', and I want to do
that in Python.
I think the simplest is to assign new column names with np.arange or range:
# a valid dictionary must have unique keys
d = {'one1': [1, 2, 3, 4, 5],
     'one2': [9, 8, 7, 6, 5],
     'three': ['a', 'b', 'c', 'd', 'e']}
df = pd.DataFrame(d)
df.columns = ['one'] * 2 + ['three']
print (df)
one one three
0 1 9 a
1 2 8 b
2 3 7 c
3 4 6 d
4 5 5 e
import numpy as np
df.columns = np.arange(len(df.columns))
#alternative
#df.columns = range(len(df.columns))
print (df)
0 1 2
0 1 9 a
1 2 8 b
2 3 7 c
3 4 6 d
4 5 5 e
Then select by name:
print (df[1])
0 9
1 8
2 7
3 6
4 5
Name: 1, dtype: int64
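For a positional rename closer to the R idiom colnames(df)[2] = 'two', one option (a sketch) is to copy the labels to a list, replace the entry at the desired position, and assign the list back; this works even when column names are duplicated:
import pandas as pd
df = pd.DataFrame({'one1': [1, 2, 3, 4, 5],
                   'one2': [9, 8, 7, 6, 5],
                   'three': ['a', 'b', 'c', 'd', 'e']})
df.columns = ['one'] * 2 + ['three']  # recreate the duplicate-name frame
cols = df.columns.tolist()
cols[1] = 'two'  # position 1 is the second column (R's colnames(df)[2])
df.columns = cols
print(df.columns)  # Index(['one', 'two', 'three'], dtype='object')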
