Dividing pairs of data frame rows by common names - python

I have a dataframe which looks like this:
         1   2
a_value  2   8
a_ref    4   2
b_value  6  10
b_ref    3  15
c_value  7   3
Note that some indices come in pairs of name_value and name_ref, while others do not.
I want to find those pairs, and for each pair get four rows in my new dataframe: name_value, name_ref, name_ref/name_value and name_value/name_ref, so my output dataframe looks like this:
                 1       2
a_value        2.0   8.000
a_ref          4.0   2.000
a_value/a_ref  0.5   4.000
a_ref/a_value  2.0   0.250
b_value        6.0  10.000
b_ref          3.0  15.000
b_value/b_ref  2.0   0.666
b_ref/b_value  0.5   1.500
I currently do it by iterating over the indices, looking for ones that end with _value and then trying to find the matching _ref, but knowing pandas, it seems there should be an easier way, maybe using groupby somehow. So... is there?

This may not be the most elegant solution, but it works. First, let's find the common keys:
import numpy as np
keys = np.intersect1d(df.index.str.extract("(.+)_value").dropna(),
                      df.index.str.extract("(.+)_ref").dropna())
# array(['a', 'b'], dtype=object)
Next, select the matching refs and values:
refs = df.loc[keys + "_ref"]
values = df.loc[keys + "_value"]
Make a copy of each dataframe and assign them the keys as indexes:
values1 = values.copy()
values1.index = keys
refs1 = refs.copy()
refs1.index = keys
Perform the division and update the indexes once again:
ratios = values1 / refs1
ratios.index += "_value" + "/" + ratios.index + "_ref"
ratios1 = refs1 / values1
ratios1.index += "_ref" + "/" + ratios1.index + "_value"
Put everything together and sort:
pd.concat([refs, values, ratios, ratios1]).sort_index()
#                  1          2
# a_ref          4.0   2.000000
# a_ref/a_value  2.0   0.250000
# a_value        2.0   8.000000
# a_value/a_ref  0.5   4.000000
# b_ref          3.0  15.000000
# b_ref/b_value  0.5   1.500000
# b_value        6.0  10.000000
# b_value/b_ref  2.0   0.666667
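If you prefer something more compact, a plain loop over the common keys also works. This is just a sketch reusing keys and df from above (pandas already imported as pd); it keeps the value/ref/ratio row order from the question instead of sorting:
pieces = []
for key in keys:
    value_row = df.loc[key + "_value"]
    ref_row = df.loc[key + "_ref"]
    # build the four rows for this key, then transpose so keys become the index
    pieces.append(pd.DataFrame({
        key + "_value": value_row,
        key + "_ref": ref_row,
        key + "_value/" + key + "_ref": value_row / ref_row,
        key + "_ref/" + key + "_value": ref_row / value_row,
    }).T)
result = pd.concat(pieces)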

Related

dynamically replace a value within a range, and change the next closest value

I am looking for an elegant way to select columns that contain a value under 15, and if they do, I want to change it to 1. I also want to change the next closest number to 2. Any suggestions would be great. I can subset accordingly but am stuck on dynamically adapting the next closest number.
The df I have:
df = pd.DataFrame(data={'a': [1, 1, 13, 23, 40],
                        'b': [89.87, 1, 12, 4, 8],
                        'c': [45, 12, 901, 12, 29]}).astype(float)
The df I want:
expected = pd.DataFrame(data={'a': [1, 1, 1, 2, 40],
                              'b': [2, 1, 1, 1, 1],
                              'c': [45, 1, 901, 1, 2]}).astype(float)
You can use boolean masks and DataFrame.mask:
mask = df.lt(15) # values lower than 15
mask2 = df.eq(df.mask(mask).min()) # min values, excluding values below 15
df.mask(mask, 1).mask(mask2, 2) # replacing mask with 1, mask2 with 2
Output:
      a    b      c
0   1.0  2.0   45.0
1   1.0  1.0    1.0
2   1.0  1.0  901.0
3   2.0  1.0    1.0
4  40.0  1.0    2.0

Improve efficiency of selecting values from dataframe by index

I have a simulation that uses pandas Dataframes to describe objects in a hierarchy. To achieve this, I have used a MultiIndex to show the route to a child object.
Parent df
        par_val
a b
0 0.0  0.366660
  1.0  0.613888
1 2.0  0.506531
  3.0  0.327356
2 4.0  0.684335
  0.0  0.013800
3 1.0  0.590058
  2.0  0.179399
4 3.0  0.790628
  4.0  0.310662
Child df
           child_val
a b   c
0 0.0 0    0.528217
  1.0 0    0.515479
1 2.0 0    0.719221
  3.0 0    0.785008
2 4.0 0    0.249344
  0.0 0    0.455133
3 1.0 0    0.009394
  2.0 0    0.775960
4 3.0 0    0.639091
  4.0 0    0.150854
0 0.0 1    0.319277
  1.0 1    0.571580
1 2.0 1    0.029063
  3.0 1    0.498197
2 4.0 1    0.424188
  0.0 1    0.572045
3 1.0 1    0.246166
  2.0 1    0.888984
4 3.0 1    0.818633
  4.0 1    0.366697
This implies that objects (0,0,0) and (0,0,1) in the child DataFrame are both characterised by the values at (0,0) in the parent DataFrame.
When a function is performed on the child dataframe for a certain subset of 'a', it may therefore need to grab a value from the parent. My current solution locates the value from the parent DataFrame by index within the solution function:
import pandas as pd
import numpy as np
import time
from matplotlib import pyplot as plt

r = range(10, 1000, 10)
dt = []
for i in r:
    start = time.time()
    df_par = pd.DataFrame(
        {'a': np.repeat(np.arange(5), i/5),
         'b': np.append(np.arange(i/2), np.arange(i/2)),
         'par_val': np.random.rand(i)
         }).set_index(['a', 'b'])
    df_child = pd.concat([df_par[[]]] * 2, keys=[0, 1], names=['c'])\
        .reorder_levels(['a', 'b', 'c'])
    df_child['child_val'] = np.random.rand(i * 2)
    df_child['solution'] = np.nan

    def solution(row, df_par, var):
        data_level = len(df_par.index.names)
        index_filt = tuple([row.name[i] for i in range(data_level)])
        sol = df_par.loc[index_filt, 'par_val'] / row.child_val
        return sol

    a_mask = df_child.index.get_level_values('a') == 0
    df_child.loc[a_mask, 'solution'] = df_child.loc[a_mask].apply(solution,
                                                                  df_par=df_par,
                                                                  var=10,
                                                                  axis=1)
    stop = time.time()
    dt.append(stop - start)

plt.plot(r, dt)
plt.show()
The solution function is becoming very costly for large numbers of iterations in the simulation:
(plot: iterations on x vs. time in seconds on y)
Is there a more efficient method of calculating this? I have considered including 'par_val' in the child df, but I was trying to avoid this, as the very large number of repetitions reduces the number of simulations I can fit in RAM.
par_val is a float64, which takes 8 bytes per value. If the child data frame has 1 million rows, that's 8 MB of memory (before the OS's memory compression feature kicks in). If it has 1 billion rows, then yes, I would worry about the memory impact.
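If you want to check the actual footprint before deciding, pandas can report it directly. A quick sketch, assuming the df_child frame from your code:
# bytes used by the child_val column alone (index excluded)
print(df_child['child_val'].memory_usage(index=False, deep=True))
# per-column breakdown for the whole frame
print(df_child.memory_usage(deep=True))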
The bigger performance bottleneck, though, is in your df_child.loc[a_mask].apply(..., axis=1) line. This makes pandas use a slow Python loop instead of the much faster vectorized code. In SQL, we call the loop approach "row-by-agonizing-row" and it's an anti-pattern. You generally want to avoid .apply(..., axis=1) for this reason.
Here's one way to improve the performance without changing df_par or df_child:
a_mask = df_child.index.get_level_values('a') == 0
child_val = df_child.loc[a_mask, 'child_val'].droplevel(-1)
solution = df_par.loc[child_val.index, 'par_val'] / child_val
df_child.loc[a_mask, 'solution'] = solution.to_numpy()
Before / After: (timing plots omitted)

How do I create a for loop in Python to copy specific positions in a dataframe to another one?

I have a dataframe called data6, with 6000 rows, and I want to copy into a new 2000-row dataframe, called result, only the Month column values where the level column value is 1.
How do I create a for loop with this rule?
Now:
in:  data6 = df1[['level', 'Month']]
     print(data6)
out:
   level   Month
0    1.0  101.52
1    2.0  101.52
2    3.0  101.52
3    1.0  111.89
4    2.0  111.89
Expected after the for loop:
in:  print(result)
out:
   level   Month
0    1.0  101.52
1    1.0  111.89
2    1.0  112.27
3    1.0   89.57
4    1.0  110.35
Use boolean indexing; see the pandas docs section "Indexing and selecting data".
# if level is a float
result = data6[data6.level == 1.0].reset_index(drop=True)
# if level is a string
result = data6[data6.level == '1.0'].reset_index(drop=True)
# if you only want the month column
result = pd.DataFrame(data6.Month[data6.level == 1.0]).reset_index(drop=True) # or '1.0'

Pandas - Outer Join on Column with Repeating Values

This is my first question on Stack Overflow; please let me know how I can help you help me if my question is unclear.
Goal: Use Python and Pandas to Outer join (or merge) Data Sets containing different experimental trials where the "x" axis of each trial is extremely similar but has some deviations. Most importantly, the "x" axis increases, hits a maximum and then decreases, often overlapping with previously existing "x" points.
Problem: When I go to join/merge the datasets on "x", the "x" column is sorted, messing up the order of the collected data and making it impossible to plot it correctly.
Here is a small example of what I am trying to do:
The site wouldn't let me add pictures because I am new, so here is the code to generate these example data sets.
Data Sets:
Import:
import numpy as np
import pandas as pd
import random as rand
Code :
T1 = {'x': np.array([1, 1.5, 2, 2.5, 3, 3.5, 4, 5, 2, 1]),
      'y': np.array([10000, 8500, 7400, 6450, 5670, 5100, 4600, 4500, 8400, 9000]),
      'z': np.array(rand.sample(range(0, 10000), 10))}
T2 = {'x': np.array([1, 2, 3, 4, 5, 6, 7, 2, 1.5, 1]),
      'y': np.array([10500, 7700, 5500, 4560, 4300, 3900, 3800, 5400, 8400, 8800]),
      'z': np.array(rand.sample(range(0, 10000), 10))}
Trial1 = pd.DataFrame(T1)
Trial2 = pd.DataFrame(T2)
Attempt to Merge/Join:
WomboCombo = Trial1.join(Trial2,how='outer',lsuffix=1,rsuffix=2, on='x')
WomboCombo2 = pd.merge(left=Trial1, right= Trial2, how = 'outer', left
Attempt to split into two parts, increasing and decreasing part (manually found row number where data "x" starts decreasing):
Trial1Inc = Trial1[0:8]
Trial2Inc = Trial2[0:7]
Result - Merge works well, join messes with the "x" column, not sure why:
Trial1Inc.merge(Trial2Inc,on='x',how='outer', suffixes=[1,2])
Incrementing section Merge Result
Trial1Inc.join(Trial2Inc,on='x',how='outer', lsuffix=1,rsuffix=2)
Incrementing section Join Result
Hopefully my example is clear: the "x" column in Trial 1 increases until 5, then decreases back towards 0. In Trial 2, I altered the test a bit because I noticed that I needed data at a slightly higher "x" value. Trial 2 increases until 7 and then quickly decreases back towards 0.
My end goal is to plot the average of all y values (where there is overlap between the trials) against the corresponding x values.
If there is overlap I can add error bars. Pandas is almost perfect for what I am trying to do because an Outer join adds null values where there is no overlap and is capable of horizontally concatenating the two trials when there is overlap.
All that's left now is to figure out how to join on the "x" column but maintain its order of increasing values and then decreasing values. The reason it is important for me to first increase "x" and then decrease it is that, when looking at the "y" values, the initial "y" value at a given "x" is greater than the "y" value when "x" is decreasing (e.g. in trial 1, when x=1, y=10000; later in the trial, when we come back to x=1, y=9000). This trend is important. When pandas sorts the column before merging, instead of a clean curve showing a decrease in "y" as "x" increases and then the reverse, there are vertical downward jumps at any point where the data was joined.
I would really appreciate any help with either:
A) a perfect solution that lets me join on "x" when "x" contains duplicates
B) an efficient way to split the data sets into increasing "x" and decreasing "x" so that I can merge the increasing and decreasing sections of each trial separately and then vertically concat them.
Hopefully I did an okay job explaining the problem I would like to solve. Please let me know if I can clarify anything,
Thanks for the help!
I think @xyzjayne's idea of splitting the dataframe is a great one.
Splitting Trial1 and Trial2:
# index of max x value in Trial2
t2_max_index = Trial2.index[Trial2['x'] == Trial2['x'].max()].tolist()
# split Trial2 by max value
trial2_high = Trial2.loc[:t2_max_index[0]].set_index('x')
trial2_low = Trial2.loc[t2_max_index[0]+1:].set_index('x')
# index of max x value in Trial1
t1_max_index = Trial1.index[Trial1['x'] == Trial1['x'].max()].tolist()
# split Trial1 by max value
trial1_high = Trial1.loc[:t1_max_index[0]].set_index('x')
trial1_low = Trial1.loc[t1_max_index[0]+1:].set_index('x')
Once we split the dataframes, we join the high halves together and the low halves together:
WomboCombo_high = trial1_high.join(trial2_high, how='outer', lsuffix='1', rsuffix='2', on='x').reset_index()
WomboCombo_low = trial1_low.join(trial2_low, how='outer', lsuffix='1', rsuffix='2', on='x').reset_index()
We now combine them together into one dataframe, WomboCombo:
WomboCombo = WomboCombo_high.append(WomboCombo_low)
OUTPUT:
     x       y1      z1       y2      z2
0  1.0  10000.0  3425.0  10500.0  3061.0
1  1.5   8500.0  5059.0      NaN     NaN
2  2.0   7400.0  2739.0   7700.0  7090.0
3  2.5   6450.0  9912.0      NaN     NaN
4  3.0   5670.0  2099.0   5500.0  1140.0
5  3.5   5100.0  9637.0      NaN     NaN
6  4.0   4600.0  7581.0   4560.0  9584.0
7  5.0   4500.0  8616.0   4300.0  3940.0
8  6.0      NaN     NaN   3900.0  5896.0
9  7.0      NaN     NaN   3800.0  6211.0
0  2.0   8400.0  3181.0   5400.0  9529.0
2  1.5      NaN     NaN   8400.0  3260.0
1  1.0   9000.0  4280.0   8800.0  8303.0
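Side note: DataFrame.append was removed in pandas 2.0, so on newer versions the last step can be written with concat instead (same result, assuming WomboCombo_high and WomboCombo_low from above):
# equivalent to WomboCombo_high.append(WomboCombo_low) on older pandas
WomboCombo = pd.concat([WomboCombo_high, WomboCombo_low])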
One possible solution is to give your trial rows specific IDs and then merge on the IDs. That should keep the x values from being sorted.
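A rough sketch of that idea, assuming it means numbering the rows within each trial and doing the outer merge on that number (my interpretation, not code from the original post):
t1 = Trial1.reset_index().rename(columns={'index': 'row_id'})
t2 = Trial2.reset_index().rename(columns={'index': 'row_id'})
# merging on the positional row ID keeps the original row order, so x is never sorted;
# rows are paired by position rather than by x value
combo = t1.merge(t2, on='row_id', how='outer', suffixes=('1', '2'))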
Here's what I was trying out, but it doesn't address varying numbers of data points. I like gym-hh's answer, though it's not clear to me that you wanted two columns of y,z pairs. So you could combine his ideas and this code to get what you need.
Trial1['index1'] = Trial1.index
Trial2['index1'] = Trial2.index
WomboCombo = Trial1.append(Trial2)
WomboCombo.sort_values(by=['index1'],inplace=True)
WomboCombo
Output:
     x      y     z  index1
0  1.0  10000  7148       0
0  1.0  10500  2745       0
1  1.5   8500   248       1
1  2.0   7700  9505       1
2  2.0   7400  6380       2
2  3.0   5500  3401       2
3  2.5   6450  6183       3
3  4.0   4560  5281       3
4  3.0   5670    99       4
4  5.0   4300  8864       4
5  3.5   5100  5132       5
5  6.0   3900  7570       5
6  4.0   4600  9951       6
6  7.0   3800  7447       6
7  2.0   5400  3713       7
7  5.0   4500  3863       7
8  1.5   8400  8776       8
8  2.0   8400  1592       8
9  1.0   9000  2167       9
9  1.0   8800   782       9

Multiply many columns pandas

I have a data frame like this, but with many more columns. I would like to multiply each pair of adjacent columns, put the product of the two in a new column beside them called Sub_pro, and at the end have the total sum of all Sub_pro columns in a column called F_Pro, with the precision reduced to 3 decimal places. I don't know how to get the Sub_pro columns. Below is my code.
import pandas as pd
df = pd.read_excel("C:dummy")
df['F_Pro'] = ("Result" * "Attribute").sum(axis=1)
df.round(decimals=3)
print (df)
Input
Id     Result  Attribute  Result1  Attribute1
 1     0.5621       0.56      536    0.005642
 2     0.5221     0.5677     2.15          93
 3   0.024564       5.23    6.489           8
 4  11.564256      4.005  0.45556        5.25
 5     0.6123     0.4798   0.6667        5.10
Desired output
Id     Result  Attribute      Sub_Pro  Result1  Attribute1  Sub_pro1        F_Pro
 1     0.5621       0.56     0.314776      536    0.005642  3.024112     3.338888
 2     0.5221     0.5677   0.29639617     2.15          93    199.95  200.2463962
 3   0.024564       5.23   0.12846972    6.489           8    51.912  52.04046972
 4  11.564256      4.005  46.31484528  0.45556        5.25   2.39169  48.70653528
 5     0.6123     0.4798   0.29378154   0.6667         5.1   3.40017   3.69395154
Because you have several columns with similar names, here is one way using filter. To see how it works on your df, do df.filter(like='Result') and you get the columns whose name contains Result:
      Result    Result1
0   0.562100  536.00000
1   0.522100    2.15000
2   0.024564    6.48900
3  11.564256    0.45556
4   0.612300    0.66670
You can create an array containing the columns 'Sub_Pro':
import numpy as np
arr_sub_pro = np.round(df.filter(like='Result').values * df.filter(like='Attribute').values, 3)
and you get the values of the Sub_pro columns in arr_sub_pro:
array([[3.1500e-01, 3.0240e+00],
       [2.9600e-01, 1.9995e+02],
       [1.2800e-01, 5.1912e+01],
       [4.6315e+01, 2.3920e+00],
       [2.9400e-01, 3.4000e+00]])
Now you need to add them at the right position in the dataframe; I think a for loop is necessary:
for nb, col in zip(range(arr_sub_pro.shape[1]), df.filter(like='Attribute').columns):
    df.insert(df.columns.get_loc(col) + 1, 'Sub_pro{}'.format(nb), arr_sub_pro[:, nb])
Here I get the location of the column Attribute(nb) and insert column nb of arr_sub_pro at the next position.
To add the column 'F_Pro', you can do:
df.insert(len(df.columns), 'F_Pro', arr_sub_pro.sum(axis=1))
the final df looks like:
   Id     Result  Attribute  Sub_pro0    Result1  Attribute1  Sub_pro1  \
0   1   0.562100     0.5600     0.315  536.00000    0.005642     3.024
1   2   0.522100     0.5677     0.296    2.15000   93.000000   199.950
2   3   0.024564     5.2300     0.128    6.48900    8.000000    51.912
3   4  11.564256     4.0050    46.315    0.45556    5.250000     2.392
4   5   0.612300     0.4798     0.294    0.66670    5.100000     3.400

     F_Pro
0    3.339
1  200.246
2   52.040
3   48.707
4    3.694
import pandas as pd

src = "/opt/repos/pareto/test/stack/data.csv"
df = pd.read_csv(src)

def multiply(x):
    res = x.copy()
    keys_len = len(x)
    idx = 1
    while idx + 1 < keys_len:
        left = x.iloc[idx]
        right = x.iloc[idx + 1]
        new_key = "sub_prod_{}".format(idx)
        # Multiply and round to three decimal places.
        res[new_key] = round(left * right, 3)
        # Step over the pair so only each Result/Attribute pair is multiplied.
        idx = idx + 2
    return res

res_df = df.apply(lambda x: multiply(x), axis=1)
This solves the problem, but you then need to reorder the columns; you could also iterate over the keys instead of making a deep copy of the full row. I hope the code helps.
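A sketch of the remaining steps mentioned above, adding the F_Pro total and putting the columns back in order (the hard-coded column names assume the example input from the question):
sub_cols = [c for c in res_df.columns if c.startswith("sub_prod_")]
res_df["F_Pro"] = res_df[sub_cols].sum(axis=1).round(3)
ordered = ["Id", "Result", "Attribute", "sub_prod_1",
           "Result1", "Attribute1", "sub_prod_3", "F_Pro"]
res_df = res_df[ordered]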
Here's one way using NumPy and a dictionary comprehension:
# extract NumPy array for relevant columns
A = df.iloc[:, 1:].values
n = int(A.shape[1] / 2)

# calculate products and feed to pd.DataFrame
prods = pd.DataFrame({'Sub_Pro_' + str(i): np.prod(A[:, 2*i: 2*(i+1)], axis=1)
                      for i in range(n)})

# calculate sum of product rows
prods['F_Pro'] = prods.sum(axis=1)

# join to original dataframe
df = df.join(prods)
print(df)
   Id     Result  Attribute    Result1  Attribute1  Sub_Pro_0   Sub_Pro_1  \
0   1   0.562100     0.5600  536.00000    0.005642   0.314776    3.024112
1   2   0.522100     0.5677    2.15000   93.000000   0.296396  199.950000
2   3   0.024564     5.2300    6.48900    8.000000   0.128470   51.912000
3   4  11.564256     4.0050    0.45556    5.250000  46.314845    2.391690
4   5   0.612300     0.4798    0.66670    5.100000   0.293782    3.400170

        F_Pro
0    3.338888
1  200.246396
2   52.040470
3   48.706535
4    3.693952
