Given dataframe:
df = pd.DataFrame({'a': [1, 2, 4, 5, 6, 8],
                   'b': [5, 6, 4, 8, 9, 6],
                   'c': [6, 3, 3, 7, 8, 4],
                   'd': [1, 2, 3, 8, 7, 3],
                   'e': [3, 2, 4, 4, 6, 2],
                   'f': [3, 2, 6, 4, 5, 5]})
I want to divide/split df into several parts (2, 3, 4, …, n parts).
Desired output:
df1 =
a b c d e f
0 1 5 6 1 3 3
1 2 6 3 2 2 2
df2 =
a b c d e f
2 4 4 3 3 4 6
3 5 8 7 8 4 4
df3 =
a b c d e f
4 6 9 8 7 6 5
5 8 6 4 3 2 5
UPDATED
The real data is not evenly divisible! Real data: 4351 rows × 3 columns.
Use qcut to split; how you want to store the result afterwards is up to you:
import pandas as pd
gp = df.groupby(pd.qcut(range(df.shape[0]), 3)) # N = 3
d = {f'df{i+1}': x[1] for i, x in enumerate(gp)}
d['df1']
# a b c d e f
#0 1 5 6 1 3 3
#1 2 6 3 2 2 2
Assuming your DataFrame can be evenly divided into n chunks:
import numpy as np

n = 3
dfs = [df.loc[i] for i in np.split(df.index, n)]
dfs is a list containing 3 dataframes.
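If the length is not evenly divisible (as in the updated question with 4351 rows), np.array_split handles the remainder by making the first few chunks one row longer. A minimal sketch on a 7-row toy frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': range(7)})  # 7 rows, not divisible by 3

n = 3
parts = np.array_split(df, n)       # no error on uneven length
print([len(p) for p in parts])      # [3, 2, 2]
```

Unlike np.split, array_split never raises on uneven input, which makes it a drop-in choice for the 4351-row case.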
Related
I have the following df:
df = pd.DataFrame({'a': [1,2,3,4,2], 'b': [3,4,1,0,4], 'c':[1,2,3,1,0], 'd':[3,2,4,1,4]})
I want to generate a combination of totals from these 4 columns, which equals 4 x 3 x 2 = 24 total combinations minus duplicates. I want the results in the same df.
I want something that looks like this (partial results shown):
A combo of a_b is the same as b_a, and therefore I wouldn't want such a calculation since it's a duplicate.
Is there a way to calculate all combinations and exclude duplicate totals?
import itertools as it
orig_cols = df.columns
for r in range(2, df.shape[1] + 1):
    for cols in it.combinations(orig_cols, r):
        df["_".join(cols)] = df.loc[:, cols].sum(axis=1)
Needs some looping, but over the column combinations rather than the dataframe itself. We take the 2-, 3-, …, N-column combinations of the column names, where N is the number of columns, then form each new _-joined column as the row-wise sum.
In [11]: df
Out[11]:
a b c d a_b a_c a_d b_c b_d c_d a_b_c a_b_d a_c_d b_c_d a_b_c_d
0 1 3 1 3 4 2 4 4 6 4 5 7 5 7 8
1 2 4 2 2 6 4 4 6 6 4 8 8 6 8 10
2 3 1 3 4 4 6 7 4 5 7 7 8 10 8 11
3 4 0 1 1 4 5 5 1 1 2 5 5 6 2 6
4 2 4 0 4 6 2 6 4 8 4 6 10 6 8 10
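To make the combination step concrete, here is what itertools.combinations yields for the pair case on this df's column names:

```python
import itertools as it

cols = ['a', 'b', 'c', 'd']
pairs = list(it.combinations(cols, 2))
print(pairs)
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```

Combinations are order-free, so ('b', 'a') never appears alongside ('a', 'b'); that is exactly the duplicate exclusion the question asks for.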
I want to use the dataframe.melt function from the pandas library to convert data from rows into a column while keeping the first column's value. I've also tried .pivot, but it doesn't work well. Please look at the example below:
ID Alphabet Unspecified: 1 Unspecified: 2
0 1 A G L
1 2 B NaN NaN
2 3 C H NaN
3 4 D I M
4 5 E J NaN
5 6 F K O
Into this:
ID Alphabet
0 1 A
1 1 G
2 1 L
3 2 B
4 3 C
5 3 H
6 4 D
7 4 I
8 4 M
9 5 E
10 5 J
11 6 F
12 6 K
13 6 O
Try (assuming ID is unique and sorted):
df = (
pd.melt(df, "ID")
.sort_values("ID", kind="stable")
.drop(columns="variable")
.dropna()
.reset_index(drop=True)
.rename(columns={"value": "Alphabet"})
)
print(df)
Prints:
ID Alphabet
0 1 A
1 1 G
2 1 L
3 2 B
4 3 C
5 3 H
6 4 D
7 4 I
8 4 M
9 5 E
10 5 J
11 6 F
12 6 K
13 6 O
Don't melt but rather stack; this will directly drop the NaNs and keep the order per row:
out = (df
.set_index('ID')
.stack().droplevel(1)
.reset_index(name='Alphabet')
)
Output:
ID Alphabet
0 1 A
1 1 G
2 1 L
3 2 B
4 3 C
5 3 H
6 4 D
7 4 I
8 4 M
9 5 E
10 5 J
11 6 F
12 6 K
13 6 O
One option is with pivot_longer from pyjanitor:
# pip install pyjanitor
import pandas as pd
import janitor
(df
.pivot_longer(
index = 'ID',
names_to = 'Alphabet',
names_pattern = ['.+'],
sort_by_appearance = True)
.dropna()
)
ID Alphabet
0 1 A
1 1 G
2 1 L
3 2 B
6 3 C
7 3 H
9 4 D
10 4 I
11 4 M
12 5 E
13 5 J
15 6 F
16 6 K
17 6 O
In the code above, names_pattern accepts a list of regular expressions to match the desired columns; all the matches are collated into a single column, named Alphabet via names_to.
This question already has answers here:
count rows by certain combination of row values pandas
(3 answers)
Closed 4 years ago.
I am trying to create a new column which will contain for each row the count of a specific value in the whole dataset.
I have the following dataframe:
import pandas as pd
df = pd.DataFrame({'a': [1,2,3,4,5], 'b': [2,3,4,5,6], 'c':['or','ta','fl','or','fl'], 'd':[5,9,1,3,7]})
I would like to add a column e which counts, for each row, how many times the value of column c appears in the dataset, like this:
df = pd.DataFrame({'a': [1,2,3,4,5], 'b': [2,3,4,5,6], 'c':['or','ta','fl','or','fl'], 'd':[5,9,1,3,7], 'e':[2,1,2,2,2]})
a b c d
0 1 2 or 5
1 2 3 ta 9
2 3 4 fl 1
3 4 5 or 3
4 5 6 fl 7
I tried to iterate over the whole dataset, but it didn't work:
def getSum(c):
    return df[df == c].sum()

def createE(df):
    for index, row in df.iterrows():
        row['e'] = getSum(row['c'])
    return df
a b c d e
0 1 2 or 5 2
1 2 3 ta 9 1
2 3 4 fl 1 2
3 4 5 or 3 2
4 5 6 fl 7 2
Use GroupBy.transform for this, passing 'count' as the function:
df['e'] = df.groupby('c')['c'].transform('count')
And now:
print(df)
prints:
a b c d e
0 1 2 or 5 2
1 2 3 ta 9 1
2 3 4 fl 1 2
3 4 5 or 3 2
4 5 6 fl 7 2
You can map each value in the column c to its count.
Setup
>>> df = pd.DataFrame({'a': [1,2,3,4,5], 'b': [2,3,4,5,6], 'c':['or','ta','fl','or','fl'], 'd':[5,9,1,3,7]})
>>> df
a b c d
0 1 2 or 5
1 2 3 ta 9
2 3 4 fl 1
3 4 5 or 3
4 5 6 fl 7
Solution
>>> df['e'] = df.c.map(df.c.value_counts())
>>> df
a b c d e
0 1 2 or 5 2
1 2 3 ta 9 1
2 3 4 fl 1 2
3 4 5 or 3 2
4 5 6 fl 7 2
I am working on a huge dataframe and trying to create a new column based on a condition in another column. Right now I have a big while-loop, and this calculation takes too much time. Is there an easier way to do it, with lambda for example?
def promo(dataframe, a):
    i = 0
    while i < len(dataframe) - 1:
        i = i + 1
        if dataframe.iloc[i-1, 5] >= a:
            dataframe.iloc[i-1, 6] = 1
        else:
            dataframe.iloc[i-1, 6] = 0
    return dataframe
Don't use loops in pandas; they are slow compared to a vectorized solution. Instead, convert the boolean mask to integers with astype, which maps True and False to 1 and 0:
import pandas as pd

dataframe = pd.DataFrame({'A': list('abcdef'),
                          'B': [4, 5, 4, 5, 5, 4],
                          'C': [7, 8, 9, 4, 2, 3],
                          'D': [1, 3, 5, 7, 1, 0],
                          'E': list('aaabbb'),
                          'F': [5, 3, 6, 9, 2, 4],
                          'G': [5, 3, 6, 9, 2, 4]})
a = 5
dataframe['new'] = (dataframe.iloc[:, 5] >= a).astype(int)
print(dataframe)
A B C D E F G new
0 a 4 7 1 a 5 5 1
1 b 5 8 3 a 3 3 0
2 c 4 9 5 a 6 6 1
3 d 5 4 7 b 9 9 1
4 e 5 2 1 b 2 2 0
5 f 4 3 0 b 4 4 0
If you want to overwrite the 7th column:
a = 5
dataframe.iloc[:, 6] = (dataframe.iloc[:, 5] >= a).astype(int)
print(dataframe)
A B C D E F G
0 a 4 7 1 a 5 1
1 b 5 8 3 a 3 0
2 c 4 9 5 a 6 1
3 d 5 4 7 b 9 1
4 e 5 2 1 b 2 0
5 f 4 3 0 b 4 0
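If you later need more than two outcomes from the condition, np.select generalizes the same vectorized idea. A sketch, not part of the original question, using only the F column:

```python
import numpy as np
import pandas as pd

dataframe = pd.DataFrame({'F': [5, 3, 6, 9, 2, 4]})
a = 5

# first matching condition wins; rows matching none get the default
conditions = [dataframe['F'] > a, dataframe['F'] == a]
choices = [2, 1]
dataframe['new'] = np.select(conditions, choices, default=0)
print(dataframe['new'].tolist())  # [1, 0, 2, 2, 0, 0]
```

Like the astype(int) trick, this stays fully vectorized, so it scales to huge dataframes without a Python-level loop.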
I have a DataFrame that looks something like this:
import numpy as np
import pandas as pd
df = pd.DataFrame([['d', 5, 6], ['a', 6, 6], ['index', 5, 8], ['b', 3, 1], ['b', 5, 6],
                   ['index', 6, 7], ['e', 2, 3], ['c', 5, 6], ['index', 5, 8]],
                  columns=['A', 'B', 'C'])
I want to select all the lines surrounding each 'index' row and create several dataframes. I want to obtain them all as:
dataframe1:
       A  B  C
1      a  6  6
2  index  5  8
3      b  3  1
dataframe2:
       A  B  C
4      b  5  6
5  index  6  7
6      e  2  3
dataframe3:
       A  B  C
7      c  5  6
8  index  5  8
index_list = df.index[df['A'] == 'index'].tolist()  # positions where df['A'] == 'index'
new_df = []  # empty list to collect the dataframes
for i in index_list:
    # take the row before, the 'index' row, and the row after
    new_df.append(df.iloc[i-1:i+2])
This creates a list of dataframes; you can access them as new_df[0], new_df[1], etc., or use a loop to print them out:
for part in new_df:
    print(f'{part}\n')
       A  B  C
1      a  6  6
2  index  5  8
3      b  3  1

       A  B  C
4      b  5  6
5  index  6  7
6      e  2  3

       A  B  C
7      c  5  6
8  index  5  8
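An alternative sketch that avoids the try/except entirely: compute the marker positions with enumerate and rely on iloc slices, which clamp automatically at the frame boundaries (max() guards the start):

```python
import pandas as pd

df = pd.DataFrame([['d', 5, 6], ['a', 6, 6], ['index', 5, 8], ['b', 3, 1], ['b', 5, 6],
                   ['index', 6, 7], ['e', 2, 3], ['c', 5, 6], ['index', 5, 8]],
                  columns=['A', 'B', 'C'])

# positional indices of the marker rows
marker_pos = [i for i, v in enumerate(df['A']) if v == 'index']

# one row before, the marker, one row after; slices past the end are truncated
new_df = [df.iloc[max(p - 1, 0):p + 2] for p in marker_pos]
```

Using enumerate positions rather than index labels keeps this correct even if the DataFrame does not have the default RangeIndex.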