How to pivot dataframe and transpose 1 row - python

I want to pivot this dataframe and convert the columns to a second level multiindex or column.
Original dataframe:
Type VC C B Security
0 Standard 2 2 2 A
1 Standard 16 13 0 B
2 Standard 52 35 2 C
3 RI 10 10 0 A
4 RI 10 15 31 B
5 RI 10 15 31 C
Desired dataframe:
Type A B C
0 Standard VC 2 16 52
1 Standard C 2 13 35
2 Standard B 2 0 2
3 RI VC 10 10 10
11 RI C 10 15 15
12 RI B 0 31 31

You could try as follows:
Use df.pivot and then transpose using df.T.
Next, chain df.sort_index to rearrange the entries, and apply df.swaplevel to change the order of the MultiIndex.
Lastly, consider getting rid of Security as the columns.name, and adding an index name for the unnamed level, e.g. Subtype here.
If you want the MultiIndex as columns, you can of course simply use df.reset_index at this stage.
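For a reproducible setup, the original frame can be rebuilt from the question's table:
import pandas as pd

df = pd.DataFrame({'Type': ['Standard', 'Standard', 'Standard', 'RI', 'RI', 'RI'],
                   'VC': [2, 16, 52, 10, 10, 10],
                   'C': [2, 13, 35, 10, 15, 15],
                   'B': [2, 0, 2, 0, 31, 31],
                   'Security': ['A', 'B', 'C', 'A', 'B', 'C']})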
res = (df.pivot(index='Security', columns='Type').T
       .sort_index(level=[1, 0], ascending=[False, False])
       .swaplevel(0))
res.columns.name = None
res.index.names = ['Type', 'Subtype']
print(res)
A B C
Type Subtype
Standard VC 2 16 52
C 2 13 35
B 2 0 2
RI VC 10 10 10
C 10 15 15
B 0 31 31
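And if you would rather have 'Type' and 'Subtype' as regular columns instead of a MultiIndex, as noted above, one extra call does it:
res = res.reset_index()  # 'Type' and 'Subtype' become ordinary columns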

Related

How to group dataframe by column and receive new column for every group

I have the following dataframe:
df = pd.DataFrame({'timestamp' : [10,10,10,20,20,20], 'idx': [1,2,3,1,2,3], 'v1' : [1,2,4,5,1,9], 'v2' : [1,2,8,5,1,2]})
timestamp idx v1 v2
0 10 1 1 1
1 10 2 2 2
2 10 3 4 8
3 20 1 5 5
4 20 2 1 1
5 20 3 9 2
I'd like to group the data by timestamp and calculate the following aggregate statistic:
np.sum(v1 * v2) for every timestamp; for timestamp 10 that is 1*1 + 2*2 + 4*8 = 37. I'd like to see the following result:
timestamp idx v1 v2 stat
0 10 1 1 1 37
1 10 2 2 2 37
2 10 3 4 8 37
3 20 1 5 5 44
4 20 2 1 1 44
5 20 3 9 2 44
I'm trying to do the following:
def calc_some_stat(d):
    return np.sum(d.v1 * d.v2)

df.loc[:, 'stat'] = df.groupby('timestamp').apply(calc_some_stat)
But the stat column comes back as all NaN values - what is wrong in my code?
We want groupby transform here, not groupby apply:
df['stat'] = (df['v1'] * df['v2']).groupby(df['timestamp']).transform('sum')
If we really want to use the function, we need to join the aggregated result back to scale it up to the original rows:
def calc_some_stat(d):
    return np.sum(d.v1 * d.v2)

df = df.join(
    df.groupby('timestamp').apply(calc_some_stat)
      .rename('stat'),  # needed to use join, but this also sets the column name
    on='timestamp'
)
df:
timestamp idx v1 v2 stat
0 10 1 1 1 37
1 10 2 2 2 37
2 10 3 4 8 37
3 20 1 5 5 44
4 20 2 1 1 44
5 20 3 9 2 44
The issue is that groupby apply is producing summary information:
timestamp
10 37
20 44
dtype: int64
This does not assign back to the DataFrame naturally as there are only 2 rows when the initial DataFrame has 6. We either need to use join to scale these 2 rows up to align with the original DataFrame, or we can avoid all of this using groupby transform which is designed to produce a:
like-indexed DataFrame on each group and return a DataFrame having the same indexes as the original object filled with the transformed values
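To make the intermediate values concrete (computed from the question's data), here is a quick sketch of what transform broadcasts:
prod = df['v1'] * df['v2']  # elementwise product: 1, 4, 32, 25, 1, 18
print(prod.groupby(df['timestamp']).sum())
# timestamp
# 10    37
# 20    44
# dtype: int64
transform('sum') then repeats these two totals across all six rows, which is exactly the shape needed for the assignment.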

Finding difference between two columns of a dataframe along with groupby

I saw a primitive version of this question here, but my dataframe has different names and I want to calculate separately for them:
A B C
0 a 3 5
1 a 6 9
2 b 3 8
3 b 11 19
I want to group by A and then find the difference between alternate B and C values, something like this:
A B C dA
0 a 3 5 6
1 a 6 9 NaN
2 b 3 8 16
3 b 11 19 NaN
I tried:
df['dA']=df.groupby('A')(['C']-['B'])
df['dA']=df.groupby('A')['C']-df.groupby('A')['B']
Neither of these helped. What mistake am I making?
IIUC, here is one way to perform the calculation:
# create the data frame
from io import StringIO
import pandas as pd
data = '''idx A B C
0 a 3 5
1 a 6 9
2 b 3 8
3 b 11 19
'''
df = pd.read_csv(StringIO(data), sep=r'\s+', engine='python').set_index('idx')
Now, compute dA. I take the last value of C less the first value of B, grouped by A. (Is this right? Or is it max(C) less min(B)?) If you're guaranteed to have the A values in pairs, then @BenT's shift() would be more concise.
dA = ((df.groupby('A')['C'].transform('last') -
       df.groupby('A')['B'].transform('first'))
      .drop_duplicates()
      .rename('dA'))
print(pd.concat([df, dA], axis=1))
A B C dA
idx
0 a 3 5 6.0
1 a 6 9 NaN
2 b 3 8 16.0
3 b 11 19 NaN
I used groupby().transform() to preserve index values, to support the concat operation.
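For completeness, a minimal sketch of that shift()-based alternative; it assumes each A value appears exactly twice, as in the question's data:
# assumes exactly two rows per A group
df['dA'] = df.groupby('A')['C'].shift(-1) - df['B']
print(df)
#      A   B   C    dA
# idx
# 0    a   3   5   6.0
# 1    a   6   9   NaN
# 2    b   3   8  16.0
# 3    b  11  19   NaN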

pandas modifying sections then recombining

I have been working on modifying an Excel document with pandas. I only need to work with small sections at a time, so breaking each into a separate DataFrame, modifying it, and then recombining it into the whole seems like the best solution. Is this feasible? I've tried a couple of options with merge() and concat(), but they don't give me the results I am looking for.
As previously stated, when I tried using the merge() function to recombine them with the larger table I just got a memory error, and when I tested it with smaller dataframes, rows weren't maintained.
Here's a small-scale example of what I am looking to do:
import pandas as pd

df1 = pd.DataFrame({'A': [1, 2, 3, 5, 6],
                    'B': [3, 10, 11, 13, 324],
                    'C': [64, '', '', '', ''],
                    'D': [32, 45, 67, 80, 100]})  # example df
print(df1)
df2 = df1[['B', 'C']]  # section taken
df2.at[2, 'B'] = 1  # modify area
print(df2)
df1 = df1.merge(df2)  # merge dataframes
print(df1)
output:
A B C D
0 1 3 64 32
1 2 10 45
2 3 11 67
3 5 13 80
4 6 324 100
B C
0 3 64
1 10
2 1
3 13
4 324
A B C D
0 1 3 64 32
1 2 10 45
2 5 13 80
3 6 324 100
what I would like to see
A B C D
0 1 3 64 32
1 2 10 45
2 3 11 67
3 5 13 80
4 6 324 100
B C
0 3 64
1 10
2 1
3 13
4 324
A B C D
0 1 3 64 32
1 2 10 45
2 3 1 67
3 5 13 80
4 6 324 100
As I said before, in my actual code I just get a MemoryError if I try this, due to the size of the dataframe.
No need for merging here, you can just re-assign back the values from df2 into df1:
...
df1.loc[df2.index, df2.columns] = df2  # recover changes into original dataframe
print(df1)
giving as expected:
A B C D
0 1 3 64 32
1 2 10 45
2 3 1 67
3 5 13 80
4 6 324 100
df1.update(df2) gives the same result (thanks to Quang Hoang for pointing this out).
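A minimal sketch of that update-based variant, reusing the question's example frames:
df2 = df1[['B', 'C']].copy()  # copy so the edit doesn't touch df1 via a view
df2.at[2, 'B'] = 1            # same modification as before
df1.update(df2)               # aligns on index/columns and writes back in place
print(df1)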

Pandas: Iterate group by object

Given a data frame as following:
In [8]:
df
Out[8]:
Experiment SampleVol Mass
0 A 1 11
1 A 1 12
2 A 2 20
3 A 2 17
4 A 2 21
5 A 3 28
6 A 3 29
7 A 4 35
8 A 4 38
9 A 4 35
10 B 1 12
11 B 1 11
12 B 2 22
13 B 2 24
14 B 3 30
15 B 3 33
16 B 4 37
17 B 4 42
18 C 1 8
19 C 1 7
20 C 2 17
21 C 2 19
22 C 3 29
23 C 3 30
24 C 3 31
25 C 4 41
26 C 4 44
27 C 4 42
I would like to run a correlation study on the data frame for each Experiment. The study I want to conduct is to calculate the correlation of 'SampleVol' with the mean 'Mass' per sample volume.
The groupby function can help me to get the mean of masses.
grp = df.groupby(['Experiment', 'SampleVol'])
grp.mean()
Out[17]:
Mass
Experiment SampleVol
A 1 11.500000
2 19.333333
3 28.500000
4 36.000000
B 1 11.500000
2 23.000000
3 31.500000
4 39.500000
C 1 7.500000
2 18.000000
3 30.000000
4 42.333333
I understand that for each data frame I should use some numpy function to compute the correlation coefficient. But now my question is: how can I iterate over the data frames for each Experiment?
Following is an example of the desired output.
Out[18]:
Experiment Slope Intercept
A 0.91 0.01
B 1.1 0.02
C 0.95 0.03
Thank you very much.
You'll want to group on just the 'Experiment' column, rather than the two columns as you have above. You can iterate through the groups and perform a simple linear regression on the grouped values using the code below:
from scipy import stats
import pandas as pd
import numpy as np

grp = df.groupby('Experiment')
output = pd.DataFrame(columns=['Slope', 'Intercept'])
for name, group in grp:
    slope, intercept, r_value, p_value, std_err = stats.linregress(group['SampleVol'], group['Mass'])
    output.loc[name] = [slope, intercept]
print(output)
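If SciPy isn't available, a rough equivalent is np.polyfit, where a degree-1 fit returns the slope and then the intercept; a minimal sketch:
# least-squares line per group via np.polyfit
output = df.groupby('Experiment').apply(
    lambda g: pd.Series(np.polyfit(g['SampleVol'], g['Mass'], 1),
                        index=['Slope', 'Intercept']))
print(output)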
For those curious, this is how I generated the dummy data and what it looks like:
df = pd.DataFrame()
df['Experiment'] = np.array(pd.date_range('2018-01-01', periods=12, freq='6h').strftime('%a'))
df['SampleVol'] = np.random.uniform(1,5,12)
df['Mass'] = np.random.uniform(10,42,12)
References:
How to loop over grouped Pandas dataframe?
scipy.stats.linregress — SciPy v1.0.0 Reference Guide
Group By: split-apply-combine — pandas 0.22.0 documentation

Efficiently adding calculated rows based on index values to a pandas DataFrame

I have a pandas DataFrame in the following format:
a b c
0 0 1 2
1 3 4 5
2 6 7 8
3 9 10 11
4 12 13 14
5 15 16 17
I want to append a calculated row that performs some math based on a given items index value, e.g. adding a row that sums the values of all items with an index value < 2, with the new row having an index label of 'Red'. Ultimately, I am trying to add three rows that group the index values into categories:
A row with the sum of item values where index value are < 2, labeled as 'Red'
A row with the sum of item values where index values are 1 < x < 4, labeled as 'Blue'
A row with the sum of item values where index values are > 3, labeled as 'Green'
Ideal output would look like this:
a b c
0 0 1 2
1 3 4 5
2 6 7 8
3 9 10 11
4 12 13 14
5 15 16 17
Red 3 5 7
Blue 15 17 19
Green 27 29 31
My current solution involves transposing the DataFrame, applying a map function for each calculated column and then re-transposing, but I would imagine pandas has a more efficient way of doing this, likely using .append().
EDIT:
My inelegant pre-set-list solution (it originally used .transpose(), but I improved it using .groupby() and .append()):
df = pd.DataFrame(np.arange(18).reshape((6, 3)), columns=['a', 'b', 'c'])
df['x'] = ['Red', 'Red', 'Blue', 'Blue', 'Green', 'Green']
df2 = df.groupby('x').sum()
df = df.append(df2)
del df['x']
I much prefer the flexibility of BrenBarn's answer (see below).
Here is one way:
def group(ix):
    if ix < 2:
        return "Red"
    elif 2 <= ix < 4:
        return "Blue"
    else:
        return "Green"
>>> print(d)
a b c
0 0 1 2
1 3 4 5
2 6 7 8
3 9 10 11
4 12 13 14
5 15 16 17
>>> print(d.append(d.groupby(d.index.to_series().map(group)).sum()))
a b c
0 0 1 2
1 3 4 5
2 6 7 8
3 9 10 11
4 12 13 14
5 15 16 17
Blue 15 17 19
Green 27 29 31
Red 3 5 7
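Note that DataFrame.append was removed in pandas 2.0; on modern versions the equivalent is pd.concat:
result = pd.concat([d, d.groupby(d.index.to_series().map(group)).sum()])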
For the general case, you need to define a function (or dict) to handle the mapping to different groups. Then you can just use groupby and its usual abilities.
For your particular case, it can be done more simply by directly slicing on the index value as Dan Allan showed, but that will break down if you have a more complex case where the groups you want are not simply definable in terms of contiguous blocks of rows. The method above will also easily extend to situations where the groups you want to create are not based on the index but on some other column (i.e., group together all rows whose value in column X is within range 0-10, or whatever).
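As a sketch of that last point, with a hypothetical value column 'X' and ranges, pd.cut builds the group labels and the rest is the same groupby:
# hypothetical: bin rows by which range their 'X' value falls into
bins = pd.cut(df['X'], bins=[0, 10, 20], labels=['0-10', '10-20'])
df.append(df.groupby(bins).sum())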
The role of "transpose," which you say you used in your unshown solution, might be played more naturally by the orient keyword argument, which is available when you construct a DataFrame from a dictionary.
In [23]: df
Out[23]:
a b c
0 0 1 2
1 3 4 5
2 6 7 8
3 9 10 11
4 12 13 14
5 15 16 17
In [24]: totals = {'Red': df.loc[:1].sum(),
                   'Blue': df.loc[2:3].sum(),
                   'Green': df.loc[4:].sum()}
In [25]: DataFrame.from_dict(totals, orient='index')
Out[25]:
a b c
Blue 15 17 19
Green 27 29 31
Red 3 5 7
In [26]: df.append(_)
Out[26]:
a b c
0 0 1 2
1 3 4 5
2 6 7 8
3 9 10 11
4 12 13 14
5 15 16 17
Blue 15 17 19
Green 27 29 31
Red 3 5 7
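If the Red/Blue/Green ordering from the question matters, the summary frame can be reordered before appending, e.g.:
summary = DataFrame.from_dict(totals, orient='index').loc[['Red', 'Blue', 'Green']]
df.append(summary)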
Based on the numbers in your example, I assume that by "> 3" you mean index values of 4 and above.
