The desired result is this:
id name
1 A
2 B
3 C
4 D
5 E
6 F
7 G
8 H
Currently I do it this way:
import pandas as pd
df = pd.DataFrame({'home_id': ['1', '3', '5', '7'],
                   'home_name': ['A', 'C', 'E', 'G'],
                   'away_id': ['2', '4', '6', '8'],
                   'away_name': ['B', 'D', 'F', 'H']})
id_col = pd.concat([df['home_id'], df['away_id']])
name_col = pd.concat([df['home_name'], df['away_name']])
result = pd.DataFrame({'id': id_col, 'name': name_col})
result = result.sort_index().reset_index(drop=True)
print(result)
But this approach relies on the index to realign the columns, which can produce errors when there are duplicate index values.
How can I interleave the column values so that the order is always:
home of the 1st row, then away of the 1st row, then home of the 2nd row, then away of the 2nd row, and so on?
try this:
out = pd.DataFrame(df.values.reshape(-1, 2), columns=['ID', 'Name'])
print(out)
>>>
ID Name
0 1 A
1 2 B
2 3 C
3 4 D
4 5 E
5 6 F
6 7 G
7 8 H
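Note that the reshape trick relies on the dataframe's columns being in exactly the order home_id, home_name, away_id, away_name. A sketch of a slightly safer variant that selects the columns explicitly first (the explicit cols list is the only addition):
cols = ['home_id', 'home_name', 'away_id', 'away_name']  # fix the column order explicitly
out = pd.DataFrame(df[cols].to_numpy().reshape(-1, 2), columns=['ID', 'Name'])
print(out)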
Similar to Python's zip, you can iterate through both dataframes:
home = pd.DataFrame(df[['home_id', 'home_name']].values, columns=('id', 'name'))
away = pd.DataFrame(df[['away_id', 'away_name']].values, columns=('id', 'name'))
def zip_dataframes(df1, df2):
    # interleave rows: df1 row 0, df2 row 0, df1 row 1, df2 row 1, ...
    rows = []
    for i in range(len(df1)):
        rows.append(df1.iloc[i, :])
        rows.append(df2.iloc[i, :])
    return pd.concat(rows, axis=1).T
zip_dataframes(home, away)
id name
0 1 A
0 2 B
1 3 C
1 4 D
2 5 E
2 6 F
3 7 G
3 8 H
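The result keeps the per-frame indexes (0, 0, 1, 1, ...); if you want a fresh 0..7 range instead, a reset_index afterwards should do it:
out = zip_dataframes(home, away).reset_index(drop=True)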
You can do this using pd.wide_to_long with a little column header renaming:
import pandas as pd
df = pd.DataFrame({'home_id': ['1', '3', '5', '7'],
                   'home_name': ['A', 'C', 'E', 'G'],
                   'away_id': ['2', '4', '6', '8'],
                   'away_name': ['B', 'D', 'F', 'H']})
dfr = df.rename(columns=lambda x: '_'.join(x.split('_')[::-1])).reset_index()
df_out = (pd.wide_to_long(dfr, ['id', 'name'], 'index', 'No', sep='_', suffix='.*')
            .reset_index(drop=True)
            .sort_values('id'))
df_out
Output:
id name
0 1 A
4 2 B
1 3 C
5 4 D
2 5 E
6 6 F
3 7 G
7 8 H
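If the scrambled index bothers you, a reset_index after the sort gives a clean 0..7 range; a sketch (the key= argument needs pandas 1.1+, and casting to int keeps the sort correct if ids ever grow past one digit):
df_out = df_out.sort_values('id', key=lambda s: s.astype(int)).reset_index(drop=True)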
I have this df:
import pandas as pd
df = pd.DataFrame({'Time': ['s_1234', 's_1234', 's_1234', 's_5678', 's_8998', 's_8998'],
                   'Control': ['A', '', '', 'B', 'C', ''],
                   'tot_1': ['1', '1', '1', '1', '1', '1'],
                   'tot_2': ['2', '2', '2', '2', '2', '2']})
--------
Time Control tot_1 tot_2
0 1234 A 1 2
1 1234 A 1 2
2 1234 1 2
3 5678 B 1 2
4 8998 C 1 2
5 8998 1 2
I would like rows with the same Time value to be merged into one row. I would also like the "tot_1" and "tot_2" columns to be summed. And finally I would like to keep the Control value if one is present. Like:
Time Control tot_1 tot_2
0 1234 A 3 6
1 5678 B 1 2
2 8998 C 2 4
Your data is different than the example df.
Construct the df:
import pandas as pd
df = pd.DataFrame({'Time': ['s_1234', 's_1234', 's_1234', 's_5678', 's_8998', 's_8998'],
                   'Control': ['A', '', '', 'B', 'C', ''],
                   'tot_1': ['1', '1', '1', '1', '1', '1'],
                   'tot_2': ['2', '2', '2', '2', '2', '2']})
df.Time = df.Time.str.split("_").str[1]
df = df.astype({"tot_1": int, "tot_2": int})
Group by Time and aggregate the values.
df.groupby('Time').agg({"Control": "first", "tot_1": "sum", "tot_2": "sum"}).reset_index()
Time Control tot_1 tot_2
0 1234 A 3 6
1 5678 B 1 2
2 8998 C 2 4
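"first" works here because the non-empty Control value happens to come first in each group. If that isn't guaranteed, one option (a sketch; the empty-string replacement is my own addition) is to treat empty strings as missing so "first" skips them:
import numpy as np

# empty strings become NaN, which GroupBy "first" skips over
df['Control'] = df['Control'].replace('', np.nan)
df.groupby('Time').agg({"Control": "first", "tot_1": "sum", "tot_2": "sum"}).reset_index()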
EDIT for comment: Not sure if that's the best way to do it, but you could construct your agg information like this:
n = 2
agg_ = {"Control": "first"} | {f"tot_{i+1}": "sum" for i in range(n)}
df.groupby('Time').agg(agg_).reset_index()
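If you don't know n in advance, you could derive the summed columns from their names instead; a sketch, assuming every such column starts with "tot_":
agg_ = {"Control": "first"} | {c: "sum" for c in df.columns if c.startswith("tot_")}
df.groupby('Time').agg(agg_).reset_index()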
I have 2 dataframes:
df1 = pd.DataFrame({'A': [1, 2, 3, 4],
                    'B': [5, 6, 7, 8],
                    'D': [9, 10, 11, 12]})
and
df2 = pd.DataFrame({'type': ['A', 'B', 'C', 'D', 'E'],
                    'color': ['yellow', 'green', 'red', 'pink', 'black'],
                    'size': ['S', 'M', 'L', 'S', 'M']})
I want to map information from df2 onto the headers of df1, so the result has headers like A (yellow,S), B (green,M), D (pink,S).
How can I do this? Many thanks :)
Use rename with a mapping built by aggregating the values with DataFrame.agg:
df1 = pd.DataFrame({'A1': [1, 2, 3, 4],
                    'B': [5, 6, 7, 8],
                    'D': [9, 10, 11, 12]})
s = df2.set_index('type', drop=False).agg(','.join, axis=1)
df1 = df1.rename(columns=s)
print (df1)
A1 B,green,M D,pink,S
0 1 5 9
1 2 6 10
2 3 7 11
3 4 8 12
For the () format, a little more processing is needed:
s = df2.set_index('type').agg(','.join, axis=1).add(')').radd('(')
s = s.index +' ' + s
df1 = df1.rename(columns=s)
print (df1)
A (yellow,S) B (green,M) D (pink,S)
0 1 5 9
1 2 6 10
2 3 7 11
3 4 8 12
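The same mapping can also be built with a plain dict comprehension, which may read more clearly; a sketch equivalent to the second variant (mapper is a name of my choosing):
mapper = {t: f'{t} ({c},{s})'
          for t, c, s in zip(df2['type'], df2['color'], df2['size'])}
df1 = df1.rename(columns=mapper)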
I have a dataframe as described below, and I need to find the duplicate groups based on the columns value1, value2 & value3 (rows should be grouped by id).
I need to fill the column 'duplicated' with True if the group appears elsewhere in the table, and with False if the group is unique.
note: each group has a different id.
df = pd.DataFrame({'id': ['A', 'A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'C', 'D', 'D', 'D'],
                   'value1': ['1', '2', '3', '4', '1', '2', '1', '2', '3', '4', '1', '2', '3'],
                   'value2': ['1', '2', '3', '4', '1', '2', '1', '2', '3', '4', '1', '2', '3'],
                   'value3': ['1', '2', '3', '4', '1', '2', '1', '2', '3', '4', '1', '2', '3'],
                   'duplicated': [None] * 13  # to be filled in
                   })
expected result: the same dataframe with 'duplicated' set to True for the rows of ids A and C and to False for B and D.
I tried this, but it compares individual rows; I need to compare whole groups (grouped by id):
import pandas as pd
data = pd.read_excel('C:/Users/path/Desktop/example.xlsx')
# keep=False marks all duplicates as True
data['duplicates'] = data.duplicated(subset=["value1", "value2", "value3"], keep=False)
data.to_excel('C:/Users/path/Desktop/example_result.xlsx',index=False)
and I got row-level duplicate flags instead of group-level ones.
note: the order of the records within matching groups doesn't matter
This may not be very efficient, but it works if duplicated groups have the same row order.
import pandas as pd
df = pd.DataFrame({'id': ['A', 'A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'C', 'D', 'D', 'D'],
                   'value1': ['1', '2', '3', '4', '1', '2', '1', '2', '3', '4', '1', '2', '3'],
                   'value2': ['1', '2', '3', '4', '1', '2', '1', '2', '3', '4', '1', '2', '3'],
                   'value3': ['1', '2', '3', '4', '1', '2', '1', '2', '3', '4', '1', '2', '3'],
                   'duplicated': [False] * 13
                   })
def check_dup(df, col1, col2):
    # Checks if two groups are duplicates.
    # First checks the sizes, if they are equal then checks actual values.
    df1 = df[df['id'] == col1][['value1', 'value2', 'value3']]
    df2 = df[df['id'] == col2][['value1', 'value2', 'value3']]
    if df1.size != df2.size:
        return False
    return (df1.values == df2.values).all()

id_unique = set(df['id'].values)  # set of unique ids
id_dic = dict.fromkeys(id_unique, False)  # dict for "duplicated" value for each id
for id1 in id_unique:
    for id2 in id_unique - {id1}:
        if check_dup(df, id1, id2):
            id_dic[id1] = True
            break

# Update 'duplicated' column on df
for id_ in id_dic:
    df.loc[df['id'] == id_, 'duplicated'] = id_dic[id_]
print(df)
id value1 value2 value3 duplicated
0 A 1 1 1 True
1 A 2 2 2 True
2 A 3 3 3 True
3 A 4 4 4 True
4 B 1 1 1 False
5 B 2 2 2 False
6 C 1 1 1 True
7 C 2 2 2 True
8 C 3 3 3 True
9 C 4 4 4 True
10 D 1 1 1 False
11 D 2 2 2 False
12 D 3 3 3 False
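Since the question says record order within a group doesn't matter, a sketch of an order-insensitive variant: reduce each group to a canonical sorted tuple of rows and flag the duplicated tuples (group_key and value_cols are names of my choosing):
value_cols = ['value1', 'value2', 'value3']

def group_key(g):
    # canonical form: rows sorted, then frozen into a hashable tuple of tuples
    return tuple(map(tuple, g.sort_values(value_cols).to_numpy()))

keys = df.groupby('id')[value_cols].apply(group_key)
df['duplicated'] = df['id'].map(keys.duplicated(keep=False))
print(df)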
You can do it like this.
First, sort_values just in case, set_index on id, and stack to reshape your data into a single column with to_frame:
df_ = (df.sort_values(by=["value1", "value2", "value3"])
         .set_index('id')[["value1", "value2", "value3"]]
         .stack()
         .to_frame())
Second, append a per-id cumcount to the index with set_index, drop the index level holding the original column names (value1, ...), unstack to get one row per id, fillna with a placeholder value, and use duplicated:
s_dup = (df_.set_index([df_.groupby('id').cumcount()], append=True)
            .reset_index(level=1, drop=True)[0]
            .unstack()
            .fillna(0)
            .duplicated(keep=False))
print (s_dup)
id
A True
B False
C True
D False
dtype: bool
Now you can just map to the original dataframe:
df['dup'] = df['id'].map(s_dup)
print (df)
id value1 value2 value3 dup
0 A 1 1 1 True
1 A 2 2 2 True
2 A 3 3 3 True
3 A 4 4 4 True
4 B 1 1 1 False
5 B 2 2 2 False
6 C 1 1 1 True
7 C 2 2 2 True
8 C 3 3 3 True
9 C 4 4 4 True
10 D 1 1 1 False
11 D 2 2 2 False
12 D 3 3 3 False
I have the following pandas DataFrame:
import pandas as pd
df = pd.DataFrame({
    'code': ['eq150', 'eq150', 'eq152', 'eq151', 'eq151', 'eq150'],
    'reg': ['A', 'C', 'H', 'P', 'I', 'G'],
    'month': ['1', '2', '4', '2', '1', '1']
})
df
code reg month
0 eq150 A 1
1 eq150 C 2
2 eq152 H 4
3 eq151 P 2
4 eq151 I 1
5 eq150 G 1
Expected Output:
1 2 3 4
eq150 A, G C
eq152 H
eq151 I P
If you want the output to include the empty month-3 column as well:
all_cols = list(map(str, range(df.month.astype(int).min(),
                               df.month.astype(int).max() + 1)))
df_cols = list(df.month.unique())
add_cols = list(set(all_cols)-set(df_cols))
df = df.pivot_table(index='code',
                    columns='month',
                    aggfunc=','.join
                    ).reg.rename_axis(None).rename_axis(None, axis=1).fillna('')
for col in add_cols: df[col] = ''
df = df[all_cols]
df
1 2 3 4
eq150 A,G C
eq151 I P
eq152 H
Use pivot_table with DataFrame.reindex to add the missing months:
df['month'] = df['month'].astype(int)
r = range(df['month'].min(), df['month'].max() + 1)
df1 = (df.pivot_table(index='code',
                      columns='month',
                      values='reg',
                      aggfunc=','.join,
                      fill_value='')
         .reindex(r, fill_value='', axis=1))
print (df1)
month 1 2 3 4
code
eq150 A,G C
eq151 I P
eq152 H
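The leftover month/code axis labels are only cosmetic; if you want the frame to match the expected table exactly, they can be stripped the same way the other answer does:
df1 = df1.rename_axis(None).rename_axis(None, axis=1)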
I have mydf below, which I have sorted on a dummy time column and the id:
mydf = pd.DataFrame(
    {
        'id': ['A', 'B', 'B', 'C', 'A', 'C', 'A'],
        'time': [1, 4, 3, 5, 2, 6, 7],
        'val': ['a', 'b', 'c', 'd', 'e', 'f', 'g']
    }
).sort_values(['id', 'time'], ascending=False)
mydf
id time val
5 C 6 f
3 C 5 d
1 B 4 b
2 B 3 c
6 A 7 g
4 A 2 e
0 A 1 a
I want to add a column (last_val) which, for each unique id, holds the latest val based on the time column. Entries for which there is no last_val can be dropped. The output in this example would look like:
mydf
id time val last_val
5 C 6 f d
1 B 4 b c
6 A 7 g e
4 A 2 e a
Any ideas?
Use DataFrameGroupBy.shift after sort_values(['id', 'time'], ascending=False) (already done in the question), then remove the rows with missing values with DataFrame.dropna:
mydf['last_val'] = mydf.groupby('id')['val'].shift(-1)
mydf = mydf.dropna(subset=['last_val'])
A similar solution, which instead removes the last row per id with duplicated:
mydf['last_val'] = mydf.groupby('id')['val'].shift(-1)
mydf = mydf[mydf['id'].duplicated(keep='last')]
print (mydf)
id time val last_val
5 C 6 f d
1 B 4 b c
6 A 7 g e
4 A 2 e a
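For reference, the dropna and duplicated variants keep exactly the same rows here; a quick sanity check (a sketch on a fresh copy of the question's frame, assuming val has no missing values, since dropna keys off the shifted column):
fresh = pd.DataFrame({
    'id': ['A', 'B', 'B', 'C', 'A', 'C', 'A'],
    'time': [1, 4, 3, 5, 2, 6, 7],
    'val': ['a', 'b', 'c', 'd', 'e', 'f', 'g']
}).sort_values(['id', 'time'], ascending=False)

base = fresh.assign(last_val=fresh.groupby('id')['val'].shift(-1))
via_dropna = base.dropna(subset=['last_val'])
via_duplicated = base[base['id'].duplicated(keep='last')]
assert via_dropna.equals(via_duplicated)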