Reindex Pandas MultiIndex - python

I am trying to create a new index for a dataframe created from a ROOT file. I'm using uproot to read the file with the following commands:
upfile_muon = uproot.open(file_prefix_muon + '.root')
tree_muon = upfile_muon['ntupler']['tree']
df_muon = tree_muon.pandas.df(['vh_sim_r', 'vh_sim_phi', 'vh_sim_z', 'vh_sim_tp1', 'vh_sim_tp2',
                               'vh_type', 'vh_station', 'vh_ring', 'vh_sim_theta'],
                              entrystop=args.max_events)
This creates a MultiIndex pandas dataframe with entry and subentry as its two index levels. I want to filter out every entry whose group of subentries has length 3 or less. I do that with the following loop, which also builds the lists I later use to slice the dataframe down to the data I need (a vectorized alternative is sketched after the loop):
import pandas as pd

a = 0
bad_entries = 0
entries = []   # entry numbers that survive the filter
nuindex = []   # new sequential entry number for every surviving row
tru = 0
while a < args.max_events:
    if df_muon.loc[(a), :].shape[0] > 3:
        entries.append(a)
        b = 0
        while b < df_muon.loc[(a), :].shape[0]:
            nuindex.append(tru)
            b = b + 1
        tru = tru + 1
    else:
        bad_entries = bad_entries + 1
    a = a + 1
df_muon = df_muon.loc[pd.IndexSlice[entries, :], :]
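For reference, the same filter can be written without an explicit Python loop; this is a sketch of one vectorized alternative (not the original code), assuming the outer index level is named 'entry' as in the output below:
# Keep only entries that have more than 3 subentries.
sizes = df_muon.groupby(level='entry').size()     # number of rows per entry
keep = sizes[sizes > 3].index                     # entry labels that pass the cut
df_muon = df_muon.loc[pd.IndexSlice[keep, :], :]
Either way, the filtered dataframe is the same.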
So now my dataframe looks like this
vh_sim_r vh_sim_phi vh_sim_z vh_sim_tp1 vh_sim_tp2 vh_type vh_station vh_ring vh_sim_theta
entry subentry
0 0 149.724701 -124.728081 793.598755 0 0 3 2 1 10.684152
1 149.236725 -124.180763 796.001221 -1 -1 3 2 1 10.618716
2 149.456131 -124.687302 796.001221 0 0 3 2 1 10.633972
3 92.405533 -126.913628 539.349976 0 0 4 1 1 9.721958
4 149.345184 -124.332527 839.810669 0 0 1 2 1 10.083608
5 176.544983 -123.978333 964.500000 0 0 2 3 1 10.372764
6 194.614502 -123.764595 1054.994995 0 0 2 4 1 10.451831
7 149.236725 -124.180763 796.001221 -1 -1 3 2 1 10.618716
8 149.456131 -124.687302 796.001221 0 0 3 2 1 10.633972
9 92.405533 -126.913628 539.349976 0 0 4 1 1 9.721958
10 149.345184 -124.332527 839.810669 0 0 1 2 1 10.083608
11 176.544983 -123.978333 964.500000 0 0 2 3 1 10.372764
12 194.614502 -123.764595 1054.994995 0 0 2 4 1 10.451831
1 0 265.027252 -3.324370 796.001221 0 0 3 2 1 18.415092
1 272.908997 -3.531896 839.903625 0 0 1 2 1 18.000479
2 299.305176 -3.531351 923.885132 0 0 1 3 1 17.950438
3 312.799255 -3.499015 964.500000 0 0 2 3 1 17.968519
4 328.321442 -3.530087 1013.620056 0 0 1 4 1 17.947645
5 181.831726 -1.668625 567.971252 0 0 3 1 1 17.752077
6 265.027252 -3.324370 796.001221 0 0 3 2 1 18.415092
7 197.739120 -2.073746 615.796265 0 0 1 1 1 17.802410
8 272.908997 -3.531896 839.903625 0 0 1 2 1 18.000479
9 299.305176 -3.531351 923.885132 0 0 1 3 1 17.950438
10 312.799255 -3.499015 964.500000 0 0 2 3 1 17.968519
11 328.321442 -3.530087 1013.620056 0 0 1 4 1 17.947645
12 356.493073 -3.441958 1065.694946 0 0 2 4 2 18.495964
2 0 204.523163 -124.065643 839.835571 0 0 1 2 1 13.686690
1 135.439163 -122.568153 567.971252 0 0 3 1 1 13.412345
2 196.380875 -123.940300 796.001221 0 0 3 2 1 13.858652
3 129.801193 -122.348656 539.349976 0 0 4 1 1 13.531607
4 224.134796 -124.194283 923.877441 0 0 1 3 1 13.636631
5 237.166031 -124.181770 964.500000 0 0 2 3 1 13.814683
6 246.809235 -124.196938 1013.871643 0 0 1 4 1 13.681540
7 259.389587 -124.164017 1054.994995 0 0 2 4 1 13.813211
8 204.523163 -124.065643 839.835571 0 0 1 2 1 13.686690
9 196.380875 -123.940300 796.001221 0 0 3 2 1 13.858652
10 129.801193 -122.348656 539.349976 0 0 4 1 1 13.531607
11 224.134796 -124.194283 923.877441 0 0 1 3 1 13.636631
12 237.166031 -124.181770 964.500000 0 0 2 3 1 13.814683
13 246.809235 -124.196938 1013.871643 0 0 1 4 1 13.681540
14 259.389587 -124.164017 1054.994995 0 0 2 4 1 13.813211
3 0 120.722900 -22.053474 615.786621 0 0 1 1 4 11.091969
1 170.635376 -23.190208 793.598755 0 0 3 2 1 12.134683
2 110.061127 -21.370941 539.349976 0 0 4 1 1 11.533570
3 164.784668 -23.263920 814.977478 0 0 1 2 1 11.430829
4 192.868652 -23.398684 948.691345 0 0 1 3 1 11.491603
5 199.817978 -23.325649 968.900024 0 0 2 3 1 11.652840
6 211.474625 -23.265354 1038.803833 0 0 1 4 1 11.506759
7 216.406830 -23.275047 1059.395020 0 0 2 4 1 11.545199
8 170.612457 -23.136520 793.598755 -1 -1 3 2 1 12.133101
5 0 179.913177 -14.877813 615.749207 0 0 1 1 1 16.287615
1 160.188034 -14.731569 565.368774 0 0 3 1 1 15.819215
2 240.671204 -15.410946 793.598755 0 0 3 2 1 16.870745
3 166.238678 -14.774992 586.454590 0 0 1 1 1 15.826117
4 241.036865 -15.400753 815.009399 0 0 1 2 1 16.475443
5 281.086792 -15.534301 948.707581 0 0 1 3 1 16.503710
6 288.768768 -15.577776 968.900024 0 0 2 3 1 16.596043
7 309.145935 -15.533208 1038.588745 0 0 1 4 1 16.576143
8 312.951233 -15.579374 1059.395020 0 0 2 4 1 16.457436
9 312.313416 -16.685022 1059.395020 -1 -1 2 4 1 16.425705
Now my goal is to find a way to change the 5 in the entry index to a 4. I want to do this in a way that automates the process: I can have a huge number of entries (~20,000), let my filter delete the unusable entries, and then have the remaining entries renumbered sequentially from 0 to the last unfiltered entry. I've tried all sorts of commands but have had no luck. Is there a way to do this directly?

df_muon = (df_muon
.reset_index() # Get the multi-index back as columns
.replace({'entry': 5}, {'entry': 4}) # Replace 5 in column 'entry' with 4
.set_index(['entry', 'subentry']) # Go back to the multi-index
)
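That handles the literal 5 → 4 replacement. To renumber every surviving entry sequentially from 0 (the general behaviour asked for above), here is a minimal sketch, assuming the filtered frame is df_muon with index levels named 'entry' and 'subentry':
# ngroup() labels the surviving entries 0, 1, 2, ... in order of appearance,
# and cumcount() renumbers the subentries within each entry.
grouped = df_muon.groupby(level='entry', sort=False)
df_muon.index = pd.MultiIndex.from_arrays(
    [grouped.ngroup(), grouped.cumcount()],
    names=['entry', 'subentry'])
This closes every gap the filter leaves (5 becomes 4, and so on) without hard-coding any entry numbers.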

Related

Is there any way to convert the columns in a Pandas DataFrame using its mirror image DataFrame structure?

The df I have is:
0 1 2
0 0 0 0
1 0 0 1
2 0 1 0
3 0 1 1
4 1 0 0
5 1 0 1
6 1 1 0
7 1 1 1
I want to obtain a DataFrame with the columns reversed (a mirror image):
0 1 2
0 0 0 0
1 1 0 0
2 0 1 0
3 1 1 0
4 0 0 1
5 1 0 1
6 0 1 1
7 1 1 1
Is there any way to do that?
You can check
df[:] = df.iloc[:,::-1]
df
Out[959]:
0 1 2
0 0 0 0
1 1 0 0
2 0 1 0
3 1 1 0
4 0 0 1
5 1 0 1
6 0 1 1
7 1 1 1
Here is a slightly more verbose, but likely more efficient, solution, as it doesn't require rewriting the data. It only renames and reorders the columns:
cols = df.columns
df.columns = df.columns[::-1]
df = df.loc[:,cols]
Or shorter variant:
df = df.iloc[:,::-1].set_axis(df.columns, axis=1)
Output:
0 1 2
0 0 0 0
1 1 0 0
2 0 1 0
3 1 1 0
4 0 0 1
5 1 0 1
6 0 1 1
7 1 1 1
There are other ways, but here's one solution:
df[df.columns] = df[reversed(df.columns)]
Output:
0 1 2
0 0 0 0
1 1 0 0
2 0 1 0
3 1 1 0
4 0 0 1
5 1 0 1
6 0 1 1
7 1 1 1

How to count consecutive same values in a pythonic way that looks iterative

So I am trying to count the number of consecutive identical values in a dataframe and put that information into a new column, but I want the count to look iterative, i.e. to build up row by row.
Here is what I have so far:
df = pd.DataFrame(np.random.randint(0,3, size=(15,4)), columns=list('ABCD'))
df['subgroupA'] = (df.A != df.A.shift(1)).cumsum()
dfg = df.groupby(by='subgroupA', as_index=False).apply(lambda grp: len(grp))
dfg.rename(columns={None: 'numConsec'}, inplace=True)
df = df.merge(dfg, how='left', on='subgroupA')
df
Here is the result:
A B C D subgroupA numConsec
0 2 1 1 1 1 1
1 1 2 1 0 2 2
2 1 0 2 1 2 2
3 0 1 2 0 3 1
4 1 0 0 1 4 1
5 0 2 2 1 5 2
6 0 2 1 1 5 2
7 1 0 0 1 6 1
8 0 2 0 0 7 4
9 0 0 0 2 7 4
10 0 2 1 1 7 4
11 0 2 2 0 7 4
12 1 2 0 1 8 1
13 0 1 1 0 9 1
14 1 1 1 0 10 1
The problem is that in the numConsec column I don't want the full count on every row; I want it to reflect the running count as you read down the dataframe. My dataframe is too large to loop through and build the counts row by row, as that would be too slow. I need to do it in a pythonic way and make it look like this:
A B C D subgroupA numConsec
0 2 1 1 1 1 1
1 1 2 1 0 2 1
2 1 0 2 1 2 2
3 0 1 2 0 3 1
4 1 0 0 1 4 1
5 0 2 2 1 5 1
6 0 2 1 1 5 2
7 1 0 0 1 6 1
8 0 2 0 0 7 1
9 0 0 0 2 7 2
10 0 2 1 1 7 3
11 0 2 2 0 7 4
12 1 2 0 1 8 1
13 0 1 1 0 9 1
14 1 1 1 0 10 1
Any ideas?
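One common approach, sketched here (it is not part of the original post), is to keep the subgroupA labels but replace the merged group size with a cumulative count inside each subgroup:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 3, size=(15, 4)), columns=list('ABCD'))
df['subgroupA'] = (df.A != df.A.shift(1)).cumsum()
# cumcount() numbers the rows 0, 1, 2, ... within each subgroup, so adding 1
# gives the running count of consecutive equal values seen so far.
df['numConsec'] = df.groupby('subgroupA').cumcount() + 1
This avoids the groupby/merge round trip and never loops over the rows.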

Create Duplicate Rows and Change Values in Specific Columns

How can I create x duplicates of a given row in the dataframe and change one or more values in specific columns? The new rows are then added to the end of the same dataframe.
A B C D E F
0 1 1 0 1 1 0
1 2 2 1 1 1 0
2 2 2 1 1 1 0
3 2 2 1 1 1 0
4 1 1 0 1 1 0 <- Create 25 Duplicates of this row (4) and change variable C to 1
5 1 1 0 1 1 0
6 2 2 1 1 1 0
7 2 2 1 1 1 0
8 2 2 1 1 1 0
9 1 1 0 1 1 0
I repeat only 10 times to keep the length of the result reasonable.
# Number of repeats |
# v
df.append(df.loc[[4] * 10].assign(C=1), ignore_index=True)
A B C D E F
0 1 1 0 1 1 0
1 2 2 1 1 1 0
2 2 2 1 1 1 0
3 2 2 1 1 1 0
4 1 1 0 1 1 0
5 1 1 0 1 1 0
6 2 2 1 1 1 0
7 2 2 1 1 1 0
8 2 2 1 1 1 0
9 1 1 0 1 1 0
10 1 1 1 1 1 0
11 1 1 1 1 1 0
12 1 1 1 1 1 0
13 1 1 1 1 1 0
14 1 1 1 1 1 0
15 1 1 1 1 1 0
16 1 1 1 1 1 0
17 1 1 1 1 1 0
18 1 1 1 1 1 0
19 1 1 1 1 1 0
Per comments, try:
df.append(df.loc[[4] * 10].assign(**{'C': 1}), ignore_index=True)
I am using repeat and reindex:
s = df.iloc[[4], :]  # pick the row you want to repeat
s = s.reindex(s.index.repeat(45))  # repeat the row the given number of times
# s = pd.DataFrame([df.iloc[4, :].tolist()] * 25)  # if you need more speed, use this line instead of the above
s.loc[:, 'C'] = 1  # change the value
pd.concat([df, s])  # append to the original df
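Note that the append-based answers above rely on DataFrame.append, which has been removed in recent pandas versions; the same idea is now usually written with pd.concat. A sketch, assuming the row of interest is index 4 and 10 repeats as above:
# Equivalent of df.append(df.loc[[4] * 10].assign(C=1), ignore_index=True)
# on pandas versions without DataFrame.append.
dupes = df.loc[[4] * 10].assign(C=1)
df = pd.concat([df, dupes], ignore_index=True)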

Python: append dataframes such that only the columns remain the same

I have the following dataframes in python pandas:
A:
1 2 3 4 5 6 7 8 9 10
0 1 1 1 1 1 1 1 0 0 1 1
B:
1 2 3 4 5 6 7 8 9 10
1 0 1 1 1 1 1 1 0 0 1 0
C:
1 2 3 4 5 6 7 8 9 10
2 0 1 1 1 0 0 0 0 0 1 0
I want to concatenate them such that the column titles remain the same while the row indices and values get appended, so the new dataframe is:
df:
1 2 3 4 5 6 7 8 9 10
0 1 1 1 1 1 1 1 0 0 1 1
1 0 1 1 1 1 1 1 0 0 1 0
2 0 1 1 1 0 0 0 0 0 1 0
I have tried using append and concat but neither seems to produce the output I am trying to achieve. Any suggestions?
Here is what I tried:
df = pd.concat([df,pd.concat([A,B,C], ignore_index=True)], axis=1)
This is a plain vanilla concat
pd.concat([A, B, C])
1 2 3 4 5 6 7 8 9 10
0 1 1 1 1 1 1 1 0 0 1 1
1 0 1 1 1 1 1 1 0 0 1 0
2 0 1 1 1 0 0 0 0 0 1 0
A simple pd.concat will do the job; you overcomplicated the task a little bit:
pd.concat([A,B,C], axis=0, ignore_index=True)

Transform dataframe to have a row for every observation at every time point

I have the following short dataframe:
A B C
1 1 3
2 1 3
3 2 3
4 2 3
5 0 0
I want the output to look like this:
A B C
1 1 3
2 1 3
3 0 0
4 0 0
5 0 0
1 1 3
2 1 3
3 2 3
4 2 3
5 0 0
Use pd.MultiIndex.from_product with the unique As and Bs, then reindex:
cols = list('AB')
mux = pd.MultiIndex.from_product([df.A.unique(), df.B.unique()], names=cols)
df.set_index(cols).reindex(mux, fill_value=0).reset_index()
A B C
0 1 1 3
1 1 2 0
2 1 0 0
3 2 1 3
4 2 2 0
5 2 0 0
6 3 1 0
7 3 2 3
8 3 0 0
9 4 1 0
10 4 2 3
11 4 0 0
12 5 1 0
13 5 2 0
14 5 0 0
