Merge rows if cells are equal - pandas / python

I have this df:
import pandas as pd
df = pd.DataFrame({'Time' : ['s_1234','s_1234', 's_1234', 's_5678', 's_8998','s_8998' ],
'Control' : ['A', '', '','B', 'C', ''],
'tot_1' : ['1', '1', '1','1', '1', '1'],
'tot_2' : ['2', '2', '2','2', '2', '2']})
--------
Time Control tot_1 tot_2
0 1234 A 1 2
1 1234 A 1 2
2 1234 1 2
3 5678 B 1 2
4 8998 C 1 2
5 8998 1 2
I would like rows with an equal Time value to be merged into one row. I would also like the "tot_1" and "tot_2" columns to be added together. And finally I would like to keep the Control value if one is present. Like:
Time Control tot_1 tot_2
0 1234 A 3 6
1 5678 B 1 2
2 8998 C 2 4

Your data is different from the example df.
Construct the df:
import pandas as pd
df = pd.DataFrame({'Time' : ['s_1234','s_1234', 's_1234', 's_5678', 's_8998','s_8998' ],
'Control' : ['A', '', '','B', 'C', ''],
'tot_1' : ['1', '1', '1','1', '1', '1'],
'tot_2' : ['2', '2', '2','2', '2', '2']})
df.Time = df.Time.str.split("_").str[1]
df = df.astype({"tot_1": int, "tot_2": int})
Group by Time and aggregate the values.
df.groupby('Time').agg({"Control": "first", "tot_1": "sum", "tot_2": "sum"}).reset_index()
Time Control tot_1 tot_2
0 1234 A 3 6
1 5678 B 1 2
2 8998 C 2 4
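Note that "first" simply takes the first row of each group, which works here because the non-empty Control value happens to come first. If that is not guaranteed, one option (a sketch, not part of the original answer) is to turn empty strings into missing values first, since the groupby "first" aggregation skips NA:
out = (df.replace({"Control": {"": pd.NA}})
         .groupby("Time", as_index=False)
         .agg({"Control": "first", "tot_1": "sum", "tot_2": "sum"}))
print(out)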
EDIT for comment: Not sure if that's the best way to do it, but you could construct your agg information like this:
n = 2
agg_ = {"Control": "first"} | {f"tot_{i+1}": "sum" for i in range(n)}
df.groupby('Time').agg(agg_).reset_index()
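The dict-merge operator | used above requires Python 3.9+. On older versions the same agg mapping can be built with dict unpacking (an equivalent sketch):
n = 2
agg_ = {"Control": "first", **{f"tot_{i+1}": "sum" for i in range(n)}}
df.groupby('Time').agg(agg_).reset_index()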

Related

Intercalate pandas dataframe columns when they are in pairs

The desired result is this:
id name
1 A
2 B
3 C
4 D
5 E
6 F
7 G
8 H
Currently I do it this way:
import pandas as pd
df = pd.DataFrame({'home_id': ['1', '3', '5', '7'],
'home_name': ['A', 'C', 'E', 'G'],
'away_id': ['2', '4', '6', '8'],
'away_name': ['B', 'D', 'F', 'H']})
id_col = pd.concat([df['home_id'], df['away_id']])
name_col = pd.concat([df['home_name'], df['away_name']])
result = pd.DataFrame({'id': id_col, 'name': name_col})
result = result.sort_index().reset_index(drop=True)
print(result)
But this approach relies on the index to interleave the rows, which can produce wrong results when there are duplicate indexes.
How can I intercalate the column values so that the order is always:
the home values of the 1st row, then the away values of the 1st row, then the home values of the 2nd row, then the away values of the 2nd row, and so on?
try this:
out = pd.DataFrame(df.values.reshape(-1, 2), columns=['ID', 'Name'])
print(out)
>>>
ID Name
0 1 A
1 2 B
2 3 C
3 4 D
4 5 E
5 6 F
6 7 G
7 8 H
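This works because the columns are already ordered home_id, home_name, away_id, away_name, so reshaping the raw values two at a time yields id/name pairs in home-then-away order. If that column order is not guaranteed, a slightly more defensive sketch selects the columns explicitly first:
cols = ['home_id', 'home_name', 'away_id', 'away_name']
out = pd.DataFrame(df[cols].values.reshape(-1, 2), columns=['ID', 'Name'])
print(out)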
Similar to Python's zip, you can iterate through both dataframes:
home = pd.DataFrame(df[['home_id', 'home_name']].values, columns=('id', 'name'))
away = pd.DataFrame(df[['away_id', 'away_name']].values, columns=('id', 'name'))

def zip_dataframes(df1, df2):
    # Alternate one row from each frame, then stitch the collected rows back into a DataFrame.
    rows = []
    for i in range(len(df1)):
        rows.append(df1.iloc[i, :])
        rows.append(df2.iloc[i, :])
    return pd.concat(rows, axis=1).T

zip_dataframes(home, away)
id name
0 1 A
0 2 B
1 3 C
1 4 D
2 5 E
2 6 F
3 7 G
3 8 H
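Note that the result keeps the original row labels (0, 0, 1, 1, ...). If a clean 0..n index is wanted, a final reset (my addition, not part of the answer) does it:
zip_dataframes(home, away).reset_index(drop=True)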
You can do this using pd.wide_to_long with a little column header renaming:
import pandas as pd
df = pd.DataFrame({'home_id': ['1', '3', '5', '7'],
'home_name': ['A', 'C', 'E', 'G'],
'away_id': ['2', '4', '6', '8'],
'away_name': ['B', 'D', 'F', 'H']})
dfr = df.rename(columns=lambda x: '_'.join(x.split('_')[::-1])).reset_index()
df_out = (pd.wide_to_long(dfr, ['id', 'name'], 'index', 'No', sep='_', suffix='.*')
.reset_index(drop=True)
.sort_values('id'))
df_out
Output:
id name
0 1 A
4 2 B
1 3 C
5 4 D
2 5 E
6 6 F
3 7 G
7 8 H
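One caveat (based on the string dtype of the example data): sort_values('id') sorts lexicographically, which happens to be fine for these single-digit ids but would misorder multi-digit ones such as '10'. Casting first avoids that, e.g.:
df_out = df_out.astype({'id': int}).sort_values('id').reset_index(drop=True)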

Dataframe groupby certain column and repeat the row n times

I would like to get df_output from df_input in the code below. It basically repeats the rows 2 times, grouped by the date column. The repeat tag should also be included.
import pandas as pd
df_input = pd.DataFrame( [
['01/01', '1', '10'],
['01/01', '2', '5'],
['01/02', '1', '9'],
['01/02', '2', '7'],
], columns=['date','type','value'])
df_output = pd.DataFrame( [
['01/01', '1', '10', '1'],
['01/01', '2', '5', '1'],
['01/01', '1', '10', '2'],
['01/01', '2', '5', '2'],
['01/02', '1', '9', '1'],
['01/02', '2', '7', '1'],
['01/02', '1', '9', '2'],
['01/02', '2', '7', '2'],
], columns=['date','type','value', 'repeat'])
print(df_output)
I thought about grouping by the date column and repeating the rows n times, but could not find the right code.
You can use GroupBy.apply per date, and pandas.concat:
N = 2
out = (df_input
       .groupby(['date'], group_keys=False)
       .apply(lambda d: pd.concat([d]*N))
)
output:
date type value
0 01/01 1 10
1 01/01 2 5
0 01/01 1 10
1 01/01 2 5
2 01/02 1 9
3 01/02 2 7
2 01/02 1 9
3 01/02 2 7
With "repeat" column:
N = 2
out = (df_input
       .groupby(['date'], group_keys=False)
       .apply(lambda d: pd.concat([d.assign(repeat=n+1) for n in range(N)]))
)
output:
date type value repeat
0 01/01 1 10 1
1 01/01 2 5 1
0 01/01 1 10 2
1 01/01 2 5 2
2 01/02 1 9 1
3 01/02 2 7 1
2 01/02 1 9 2
3 01/02 2 7 2
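The same result can also be built without GroupBy.apply, by concatenating N tagged copies of the whole frame and stable-sorting by date (a sketch, not from the original answer):
N = 2
out = (pd.concat([df_input.assign(repeat=n+1) for n in range(N)])
         .sort_values('date', kind='stable')
         .reset_index(drop=True))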

Explode single DataFrame row into multiple ones

My DataFrame has some columns where each value can be "1", "2", "3" or "any". Here is an example:
>>> df = pd.DataFrame({'a': ['1', '2', 'any', '3'], 'b': ['any', 'any', '3', '1']})
>>> df
a b
0 1 any
1 2 any
2 any 3
3 3 1
In my case, "any" means that the value can be "1", "2" or "3". I would like to generate all possible rows using only values "1", "2" and "3" (or, in general, any list of values that I might have). Here is the expected output for the example above:
a b
0 1 1
1 1 2
2 1 3
3 2 1
4 2 2
5 2 3
6 3 3
7 3 1
I got this output with this kind of ugly and complicated approach:
a = df['a'].replace('any', '1,2,3').apply(lambda x: eval(f'[{str(x)}]')).explode()
result = pd.merge(df.drop(columns=['a']), a, left_index=True, right_index=True)
b = result['b'].replace('any', '1,2,3').apply(lambda x: eval(f'[{str(x)}]')).explode()
result = pd.merge(result.drop(columns=['b']), b, left_index=True, right_index=True)
result = result.drop_duplicates().reset_index(drop=True)
Is there any simpler and/or nicer approach?
You can replace the string any with, e.g., '1,2,3', then split and explode (the x.name check leaves other columns, such as the c column visible in the output below, untouched):
(df.replace('any', '1,2,3')
   .apply(lambda x: x.str.split(',') if x.name in ['a','b'] else x)
   .explode('a').explode('b')
   .drop_duplicates(['a','b'])
)
Output:
a b c
0 1 1 1
0 1 2 1
0 1 3 1
1 2 1 1
1 2 2 1
1 2 3 1
2 3 3 1
3 3 1 1
I would not use eval and string manipulations, but just replace 'any' with a set of values
import pandas as pd
df = pd.DataFrame({'a': ['1', '2', 'any', '3'], 'b': ['any', 'any', '3', '1']})
df['c'] = '1'
df[df == 'any'] = {'1', '2', '3'}
for col in df:
    df = df.explode(col)
df = df.drop_duplicates().reset_index(drop=True)
print(df)
This gives the result
a b c
0 1 2 1
1 1 3 1
2 1 1 1
3 2 2 1
4 2 3 1
5 2 1 1
6 3 3 1
7 3 1 1

find duplicated groups in dataframe

I have a dataframe as described below and I need to find the duplicate groups based on the columns value1, value2 & value3 (rows should be grouped by id).
I need to fill the column 'duplicated' with True if the group appears elsewhere in the table, and with False if the group is unique.
Note: each group has a different id.
df = pd.DataFrame({'id': ['A', 'A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'C', 'D', 'D', 'D'],
'value1': ['1', '2', '3', '4', '1', '2', '1', '2', '3', '4', '1', '2', '3'],
'value2': ['1', '2', '3', '4', '1', '2', '1', '2', '3', '4', '1', '2', '3'],
'value3': ['1', '2', '3', '4', '1', '2', '1', '2', '3', '4', '1', '2', '3'],
'duplicated' : []
})
expected result is:
I tried this, but it compares rows; I need to compare groups (grouped by id):
import pandas as pd
data = pd.read_excel('C:/Users/path/Desktop/example.xlsx')
# keep=False: mark all duplicates as True
data['duplicates'] = data.duplicated(subset= ["value1","value2","value3"], keep=False)
data.to_excel('C:/Users/path/Desktop/example_result.xlsx',index=False)
and I got:
Note: the order of the records in the two groups doesn't matter.
This may not be very efficient, but it works if duplicated groups have the same "order".
import pandas as pd
df = pd.DataFrame({'id': ['A', 'A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'C', 'D', 'D', 'D'],
'value1': ['1', '2', '3', '4', '1', '2', '1', '2', '3', '4', '1', '2', '3'],
'value2': ['1', '2', '3', '4', '1', '2', '1', '2', '3', '4', '1', '2', '3'],
'value3': ['1', '2', '3', '4', '1', '2', '1', '2', '3', '4', '1', '2', '3'],
'duplicated': [False] * 13
})
def check_dup(df, col1, col2):
    # Checks if two groups are duplicates.
    # First checks the sizes; if they are equal, then checks the actual values.
    df1 = df[df['id'] == col1][['value1', 'value2', 'value3']]
    df2 = df[df['id'] == col2][['value1', 'value2', 'value3']]
    if df1.size != df2.size:
        return False
    return (df1.values == df2.values).all()

id_unique = set(df['id'].values)            # set of unique ids
id_dic = dict.fromkeys(id_unique, False)    # dict for the "duplicated" value of each id

for id1 in id_unique:
    for id2 in id_unique - {id1}:
        if check_dup(df, id1, id2):
            id_dic[id1] = True
            break

# Update 'duplicated' column on df
for id_ in id_dic:
    df.loc[df['id'] == id_, 'duplicated'] = id_dic[id_]

print(df)
id value1 value2 value3 duplicated
0 A 1 1 1 True
1 A 2 2 2 True
2 A 3 3 3 True
3 A 4 4 4 True
4 B 1 1 1 False
5 B 2 2 2 False
6 C 1 1 1 True
7 C 2 2 2 True
8 C 3 3 3 True
9 C 4 4 4 True
10 D 1 1 1 False
11 D 2 2 2 False
12 D 3 3 3 False
You can do it like this.
First, sort_values just in case, set_index on id, and stack to change the shape of your data, getting a single column with to_frame:
df_ = (df.sort_values(by=["value1", "value2", "value3"])
         .set_index('id')[["value1", "value2", "value3"]]
         .stack()
         .to_frame()
)
Second, append a set_index with a cumcount per id, drop the index level holding the original column names (value1, ...), unstack to get one row per id, fillna with a placeholder value, and use duplicated:
s_dup = (df_.set_index([df_.groupby('id').cumcount()], append=True)
            .reset_index(level=1, drop=True)[0]
            .unstack()
            .fillna(0)
            .duplicated(keep=False))
print (s_dup)
id
A True
B False
C True
D False
dtype: bool
Now you can just map to the original dataframe:
df['dup'] = df['id'].map(s_dup)
print (df)
id value1 value2 value3 dup
0 A 1 1 1 True
1 A 2 2 2 True
2 A 3 3 3 True
3 A 4 4 4 True
4 B 1 1 1 False
5 B 2 2 2 False
6 C 2 2 2 True
7 C 1 1 1 True
8 C 3 3 3 True
9 C 4 4 4 True
10 D 1 1 1 False
11 D 2 2 2 False
12 D 3 3 3 False

How to normalize just one column of a dataframe while keeping the others unaffected?

Assuming we have a df as follows:
id A B
50 1 5
60 2 6
70 3 7
80 4 8
I would like to know how I can normalize just column B, between 0 and 1, while keeping the other columns, id and A, completely unaffected.
Edit 1: If I do the following
import pandas as pd
df = pd.DataFrame({ 'id' : ['50', '60', '70', '80'],
'A' : ['1', '2', '3', '4'],
'B' : ['5', '6', '7', '8']
})
from sklearn import preprocessing
min_max_scaler = preprocessing.MinMaxScaler()
X_minmax = min_max_scaler.fit_transform(df.values[:,[2]])
I get the X_minmax as follows
0
0.333333
0.666667
1
I want these 4 values to be placed in column B of the dataframe df, without changing the other 2 columns, so that it looks like below:
id A B
50 1 0
60 2 0.333333
70 3 0.666667
80 4 1
You can reassign the value of the column; for a 0 to 1 range use min-max scaling (converting to numeric first, since the example stores strings):
df['B'] = df['B'].astype(float)
df['B'] = (df['B'] - df['B'].min()) / (df['B'].max() - df['B'].min())
You might want to do something like this.
import pandas as pd
import sklearn.preprocessing as preprocessing

df = pd.DataFrame({'id': [50, 60, 70, 80], 'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8]})
# MinMaxScaler works on 2D arrays, hence the reshape(-1, 1)
float_array = df['B'].values.astype(float).reshape(-1, 1)
min_max_scaler = preprocessing.MinMaxScaler()
scaled_array = min_max_scaler.fit_transform(float_array)
df['B'] = scaled_array
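An equivalent sketch that skips the manual reshape by passing a one-column frame (scikit-learn accepts DataFrame input):
df['B'] = min_max_scaler.fit_transform(df[['B']].astype(float)).ravel()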
