How to create multiple columns in Pandas Dataframe? - python

I have data as shown in the terminal, and I need it converted to the Excel-sheet format shown in the file, with multiple levels in the columns.
I researched this and tried many different things without reaching my goal. Eventually I found transpose, which gave me the shape I need, but unfortunately it reshapes columns into rows, so the data ordering comes out wrong.
Current result:
Desired result:
What can I try next?

You can use the pivot() function and reorder the multi-column levels.
Before that, index/group the data by repeated iterations/rounds:
import itertools

import pandas as pd

data = [
    (2, 0, 0, 1),
    (10, 2, 5, 3),
    (2, 0, 0, 0),
    (10, 1, 1, 1),
    (2, 0, 0, 0),
    (10, 1, 2, 1),
]
columns = ["player_number", "cel1", "cel2", "cel3"]
df = pd.DataFrame(data=data, columns=columns)

# the number of rows per player equals the number of rounds
df_nbr_plr = df[["player_number"]].groupby("player_number").agg(cnt=("player_number", "count"))

# tag each row with its round: repeat each round number once per player
df["round"] = list(itertools.chain.from_iterable(
    itertools.repeat(x, df_nbr_plr.shape[0]) for x in range(df_nbr_plr.iloc[0, 0])
))
[Out]:
   player_number  cel1  cel2  cel3  round
0              2     0     0     1      0
1             10     2     5     3      0
2              2     0     0     0      1
3             10     1     1     1      1
4              2     0     0     0      2
5             10     1     2     1      2
Now, pivot and reorder the column levels:
df = df.pivot(index="round", columns="player_number").reorder_levels([1,0], axis=1).sort_index(axis=1)
[Out]:
player_number    2              10
              cel1 cel2 cel3 cel1 cel2 cel3
round
0                0    0    1    2    5    3
1                0    0    0    1    1    1
2                0    0    0    1    2    1
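As a side note, the round index built with itertools above can also be produced with groupby().cumcount() (a more compact sketch, not part of the original answer):
# each player's n-th appearance belongs to round n
df["round"] = df.groupby("player_number").cumcount()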

This can also be done with unstack after setting player_number as an index level. You then have to reorder the MultiIndex columns and fill missing values/drop duplicate rows, though:
import pandas as pd

data = {"player_number": [2, 10, 2, 10, 2, 10],
        "cel1": [0, 2, 0, 1, 0, 1],
        "cel2": [0, 5, 0, 1, 0, 2],
        "cel3": [1, 3, 0, 1, 0, 1],
        }
df = pd.DataFrame(data).set_index('player_number', append=True)
df = df.unstack('player_number').reorder_levels([1, 0], axis=1).sort_index(axis=1)  # unstack, reorder and sort the columns
df = df.ffill().iloc[1::2].reset_index(drop=True)  # fill the gaps and keep every second row
df.to_excel('output.xlsx')
Output: the same multi-level table as in the first answer, written to output.xlsx.

Related

How to create a new DataFrame repeating rows using indexes from original DF

I have a DataFrame of generated random agents. However, I want to expand them to match the population I am looking for, so I need to repeat rows according to my sampled indexes.
Here is my loop-based code, which takes forever:
import pandas as pd

df = pd.DataFrame({'a': [0, 1, 2]})
sampled_indexes = [0, 0, 1, 1, 2, 2, 2]

# build the enlarged frame one row at a time (slow)
new_df = pd.DataFrame(columns=['a'])
for i, idx in enumerate(sampled_indexes):
    new_df.loc[i] = df.loc[idx]
Then, the original DataFrame:
df
   a
0  0
1  1
2  2
should give me this enlarged new DataFrame:
new_df
   a
0  0
1  0
2  1
3  1
4  2
5  2
6  2
So, this loop is too slow with a DataFrame of 34,000 or more rows.
How can I do this in a simpler, faster way?
Reindex the dataframe with sampled_indexes, then reset the index.
df.reindex(sampled_indexes).reset_index(drop=True)
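Applied to the example above, this one-liner reproduces the enlarged frame:
import pandas as pd

df = pd.DataFrame({'a': [0, 1, 2]})
sampled_indexes = [0, 0, 1, 1, 2, 2, 2]

# duplicate rows by repeating index labels, then renumber
new_df = df.reindex(sampled_indexes).reset_index(drop=True)
print(new_df)  # same 7-row result as the loop version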
You can do DataFrame.merge:
df = pd.DataFrame({'a': [0, 1, 2]})
sampled_indexes = [0, 0, 1, 1, 2, 2, 2]
print( df.merge(pd.DataFrame({'a': sampled_indexes})) )
Prints:
   a
0  0
1  0
2  1
3  1
4  2
5  2
6  2
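One caveat (my note, not the answerer's): merge matches on the values of the shared column a, not on the index, so it reproduces the desired result here only because the values of a happen to equal their index labels. An index-based lookup avoids that assumption:
# select rows by index label, allowing repeats
df.loc[sampled_indexes].reset_index(drop=True)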

Removing certain Rows from subset of df

I have a pandas DataFrame. All the columns to the right of column #2 may only contain the value 0 or 1. If a row holds any other value in those columns, I want to remove that entire row from the DataFrame.
So I created a subset of the DataFrame containing only the columns to the right of #2.
Then I found the indices of the rows that had values other than 0 or 1 and deleted them from the original DataFrame.
See the code below, please:
import pandas as pd

# reading the data file
data = pd.read_csv('MyData.csv')

# all the columns to the right of column #2 may only contain the value 0 or 1,
# so "prod" is a subset of the data df containing these columns
prod = data.iloc[:, 2:]

index_prod = prod[(prod != 0) & (prod != 1)].dropna().index
data = data.drop(index_prod)
However, when I run this, index_prod is empty, so nothing gets dropped at all.
Edit: a friend just told me that the data is not numeric, and he fixed it by converting it to numeric. Can anyone advise how I could find that out myself? All the columns looked numeric to me: all numbers.
You can check the dtypes with DataFrame.dtypes:
print(data.dtypes)
Or list the non-numeric columns:
import numpy as np

print(data.columns.difference(data.select_dtypes(np.number).columns))
Then convert all columns except the first 2 to numeric:
data.iloc[:,2:] = data.iloc[:,2:].apply(lambda x: pd.to_numeric(x, errors='coerce'))
Or all columns:
data = data.apply(lambda x: pd.to_numeric(x, errors='coerce'))
And finally apply the filtering solution:
subset = data.iloc[:,2:]
data1 = data[subset.isin([0,1]).all(axis=1)]
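For completeness, here is a minimal end-to-end sketch with a made-up frame standing in for MyData.csv (which isn't shown), illustrating how to spot the object column and then filter:
import pandas as pd

# stand-in data: flag2 is read as strings, so it shows up as object
data = pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"],
                     "flag1": [0, 1, 2], "flag2": ["0", "1", "x"]})
print(data.dtypes)

# coerce the flag columns to numeric, then keep rows that are all 0/1
flags = data.iloc[:, 2:].apply(pd.to_numeric, errors='coerce')
print(data[flags.isin([0, 1]).all(axis=1)])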
Let's say you have this dataframe:
data = {'A': [1, 2, 3, 4, 5], 'B': [0, 1, 4, 3, 1], 'C': [2, 1, 0, 3, 4]}
df = pd.DataFrame(data)
   A  B  C
0  1  0  2
1  2  1  1
2  3  4  0
3  4  3  3
4  5  1  4
And you want to delete the rows whose value in column B is not 0 or 1. We can accomplish that with:
subset = df.iloc[:, 1:2]  # just column B
index = subset[(subset != 0) & (subset != 1)].dropna().index
df = df.drop(index)
df
   A  B  C
0  1  0  2
1  2  1  1
4  5  1  4
df = df.reset_index(drop=True)
df
   A  B  C
0  1  0  2
1  2  1  1
2  5  1  4
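To enforce the rule on several columns at once (as in the original question, where every column to the right of #2 is checked), the isin approach from the first answer generalizes (my note, same toy frame):
# keep only rows where every checked column is 0 or 1
df = df[df.iloc[:, 1:].isin([0, 1]).all(axis=1)].reset_index(drop=True)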

Pandas merge duplicate DataFrame columns preserving column names

How can I merge duplicate DataFrame columns and also keep all original column names?
e.g. If I have the DataFrame
df = pd.DataFrame({"col1": [0, 0, 1, 2, 5, 3, 7],
                   "col2": [0, 1, 2, 3, 3, 3, 4],
                   "col3": [0, 1, 2, 3, 3, 3, 4]})
I can remove the duplicate columns (yes, the transpose is slow for large DataFrames) with
df.T.drop_duplicates().T
but this only preserves one column name per set of duplicate columns:
   col1  col2
0     0     0
1     0     1
2     1     2
3     2     3
4     5     3
5     3     3
6     7     4
How can I keep the information on which columns were merged? e.g. something like
   [col1]  [col2, col3]
0       0             0
1       0             1
2       1             2
3       2             3
4       5             3
5       3             3
6       7             4
Thanks!
# group columns by their values
grouped_columns = df.groupby(list(df.values), axis=1).apply(lambda g: g.columns.tolist())

# pick one column from each group of columns
unique_df = df.loc[:, grouped_columns.str[0]]

# build a new name for each group; a list can't serve as a column name, so join the names
unique_df.columns = grouped_columns.apply("-".join)
unique_df
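Note that groupby(..., axis=1) is deprecated in recent pandas releases. A version of the same idea that groups the column names instead (my sketch, not the answerer's original code) could look like this:
import pandas as pd

df = pd.DataFrame({"col1": [0, 0, 1, 2, 5, 3, 7],
                   "col2": [0, 1, 2, 3, 3, 3, 4],
                   "col3": [0, 1, 2, 3, 3, 3, 4]})

# group the column names by the tuple of values in each column
groups = df.columns.to_series().groupby(df.apply(tuple)).agg(list)

# keep the first column of each group and rename it with the joined names
unique_df = df.loc[:, groups.str[0]]
unique_df.columns = groups.apply("-".join)
print(unique_df)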
I also used T and tuple to group by:
def f(x):
    d = x.iloc[[0]]
    d.index = ['-'.join(x.index.tolist())]
    return d

df.T.groupby(df.apply(tuple), group_keys=False).apply(f).T

pandas assign value based on mean

Let's say I have a DataFrame column. I want to create a new column where the value for a given observation is 1 if the corresponding value in the old column is above average, and 0 if the value is average or below.
What's the fastest way of doing this?
Say you have the following DataFrame:
df = pd.DataFrame({'A': [1, 4, 6, 2, 8, 3, 7, 1, 5]})
df['A'].mean()
Out: 4.111111111111111
Comparison against the mean will get you a boolean vector. You can cast that to integer:
df['B'] = (df['A'] > df['A'].mean()).astype(int)
or use np.where:
df['B'] = np.where(df['A'] > df['A'].mean(), 1, 0)
df
Out:
   A  B
0  1  0
1  4  0
2  6  1
3  2  0
4  8  1
5  3  0
6  7  1
7  1  0
8  5  1

Pandas dataframe merge and element-wide multiplication

I have a dataframe like
df1 = pd.DataFrame({'name':['al', 'ben', 'cary'], 'bin':[1.0, 1.0, 3.0], 'score':[40, 75, 15]})
   bin  name  score
0    1    al     40
1    1   ben     75
2    3  cary     15
and a dataframe like
df2 = pd.DataFrame({'bin': [1.0, 2.0, 3.0, 4.0, 5.0],
                    'x': [1, 1, 0, 0, 0],
                    'y': [0, 0, 1, 1, 0],
                    'z': [0, 0, 0, 1, 0]})
   bin  x  y  z
0    1  1  0  0
1    2  1  0  0
2    3  0  1  0
3    4  0  1  1
4    5  0  0  0
What I want to do is extend df1 with the columns 'x', 'y', and 'z', filled with score only where the bin matches and the respective 'x', 'y', 'z' value is 1, not 0.
I’ve gotten as far as
df3 = pd.merge(df1, df2, how='left', on=['bin'])
   bin  name  score  x  y  z
0    1    al     40  1  0  0
1    1   ben     75  1  0  0
2    3  cary     15  0  1  0
but I don't see an elegant way to get the score values into the correct 'x', 'y', etc. columns (my real-life problem has over a hundred such columns, so assigning df3['x'] = df3['score'] * df3['x'] column by column might be rather slow).
You can just get a list of the columns you want to multiply the scores by and then use the apply function:
cols = [each for each in df2.columns if each not in ('name', 'bin')]
df3 = pd.merge(df1, df2, how='left', on=['bin'])
df3[cols] = df3.apply(lambda x: x['score'] * x[cols], axis=1)
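A vectorized alternative (my sketch, not part of this answer) multiplies all of the columns at once with mul along axis=0, which avoids the per-row apply and should scale better to hundreds of columns:
# multiply every x/y/z column by that row's score in one vectorized call
df3[cols] = df3[cols].mul(df3['score'], axis=0)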
This may not be much faster than iterating, but here is another idea.
Import numpy and define the columns covered in the operation:
import numpy as np

columns = ['x', 'y', 'z']
score_col = 'score'
Construct a numpy array of the score column, reshaped to match the number of columns in the operation:
score_matrix = np.repeat(df3[score_col].values, len(columns))
score_matrix = score_matrix.reshape(len(df3), len(columns))
Multiply by the columns and assign back to the dataframe:
df3[columns] = score_matrix * df3[columns]
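The repeat/reshape pair can also be collapsed into plain numpy broadcasting (same setup as above): an (n, 1) column of scores times the (n, len(columns)) block:
# broadcast the score column across the selected columns
df3[columns] = df3[columns].to_numpy() * df3[score_col].to_numpy()[:, None]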
