Reduce number of columns in a pandas DataFrame - python

I'm trying to create a violin plot in seaborn. The input is a pandas DataFrame, and it looks like in order to separate the data along the x axis I need to differentiate on a single column. I currently have a DataFrame that has floating point values for several sensors:
>>> df.columns
Index(['SensorA', 'SensorB', 'SensorC', 'SensorD', 'group_id'], dtype='object')
That is, each Sensor[A-Z] column contains a bunch of numbers:
>>> df['SensorA'].head()
0    0.072706
1    0.072698
2    0.072701
3    0.072303
4    0.071951
Name: SensorA, dtype: float64
And for this problem, I'm only interested in 2 groups:
>>> df['group_id'].unique()
array(['1', '2'], dtype=object)
I want each Sensor to be a separate violin along the x axis.
I think this means I need to convert this into something of the form:
>>> df.columns
Index(['Value', 'Sensor', 'group_id'], dtype='object')
where the Sensor column in the new DataFrame contains the text "SensorA", "SensorB", etc., the Value column contains the values that were originally in each Sensor[A-Z] column, and the group information is preserved.
I could then create a violinplot using the following command:
ax = sns.violinplot(x="Sensor", y="Value", hue="group_id", data=df)
I'm thinking I kind of need to do a reverse pivot. Is there an easy way of doing this?

Use pandas' melt function:
import pandas as pd

df = pd.DataFrame({'SensorA': [1, 3, 4, 5, 6], 'SensorB': [5, 2, 3, 6, 7],
                   'SensorC': [7, 4, 8, 1, 10], 'group_id': [1, 2, 1, 1, 2]})
# melt turns the wide sensor columns into rows of (Sensor, value) pairs,
# keeping group_id as an identifier on every row
df = pd.melt(df, id_vars='group_id', var_name='Sensor')
print(df)
gives
    group_id   Sensor  value
0          1  SensorA      1
1          2  SensorA      3
2          1  SensorA      4
3          1  SensorA      5
4          2  SensorA      6
5          1  SensorB      5
6          2  SensorB      2
7          1  SensorB      3
8          1  SensorB      6
9          2  SensorB      7
10         1  SensorC      7
11         2  SensorC      4
12         1  SensorC      8
13         1  SensorC      1
14         2  SensorC     10
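With the data in long form, the violin plot from the question follows directly; a minimal sketch, renaming melt's default value column to match the question's call:

import seaborn as sns

df = df.rename(columns={'value': 'Value'})
# one violin per Sensor along the x axis, split by group_id
ax = sns.violinplot(x='Sensor', y='Value', hue='group_id', data=df)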

It may not be the best way, but it works (AFAIU):
import pandas as pd

df = pd.DataFrame({'SensorA': [1, 3, 4, 5, 6], 'SensorB': [5, 2, 3, 6, 7],
                   'SensorC': [7, 4, 8, 1, 10], 'group_id': [1, 2, 1, 1, 2]})
groupedID = df.groupby('group_id')
df1 = pd.DataFrame()
for groupNum in groupedID.groups.keys():
    # stack the sensor columns of this group into one long Series
    dfSensors = groupedID.get_group(groupNum).filter(regex='Sen').stack()
    _, sensorNames = zip(*dfSensors.index)
    df2 = pd.DataFrame({'Sensor': sensorNames, 'Value': dfSensors.values,
                        'group_id': groupNum})
    df1 = pd.concat([df1, df2])
print(df1)
Output:
    Sensor  Value  group_id
0  SensorA      1         1
1  SensorB      5         1
2  SensorC      7         1
3  SensorA      4         1
4  SensorB      3         1
5  SensorC      8         1
6  SensorA      5         1
7  SensorB      6         1
8  SensorC      1         1
0  SensorA      3         2
1  SensorB      2         2
2  SensorC      4         2
3  SensorA      6         2
4  SensorB      7         2
5  SensorC     10         2
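For what it's worth, this produces the same rows as the one-line melt in the answer above; a quick sanity check, assuming the df and df1 defined here:

melted = pd.melt(df, id_vars='group_id', var_name='Sensor', value_name='Value')
lhs = df1.sort_values(['group_id', 'Sensor', 'Value']).reset_index(drop=True)
rhs = (melted[['Sensor', 'Value', 'group_id']]
       .sort_values(['group_id', 'Sensor', 'Value'])
       .reset_index(drop=True))
print(lhs.equals(rhs))  # True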

Related

Is it possible to combine agg and value_counts in single line with Pandas

Given a df:
   a  b  ngroup
0  1  3       0
1  1  4       0
2  1  1       0
3  3  7       2
4  4  4       2
5  1  1       4
6  2  2       4
7  1  1       4
8  6  6       5
I would like to compute the sum of multiple columns (i.e., a and b) grouped by the column ngroup.
In addition, I would like to count the number of elements in each group.
Based on these two conditions, the expected output is as below:
   a   b  nrow_same_group  ngroup
   3   8                3       0
   7  11                2       2
   4   4                3       4
   6   6                1       5
The following code does the job:
import pandas as pd

df = pd.DataFrame(list(zip([1, 1, 1, 3, 4, 1, 2, 1, 6],
                           [3, 4, 1, 7, 4, 1, 2, 1, 6],
                           [0, 0, 0, 2, 2, 4, 4, 4, 5])),
                  columns=['a', 'b', 'ngroup'])

grouped_df = df.groupby(['ngroup'])
df1 = grouped_df[['a', 'b']].agg('sum').reset_index()
df2 = df['ngroup'].value_counts().reset_index()
df2.sort_values('index', axis=0, ascending=True, inplace=True,
                kind='quicksort', na_position='last')
df2.reset_index(drop=True, inplace=True)
df2.rename(columns={'index': 'ngroup', 'ngroup': 'nrow_same_group'}, inplace=True)
df = pd.merge(df1, df2, on=['ngroup'])
However, I wonder whether there exists a built-in pandas operation that achieves something similar in a single line.
You can do it using only groupby + agg.
import pandas as pd

df = pd.DataFrame(list(zip([1, 1, 1, 3, 4, 1, 2, 1, 6],
                           [3, 4, 1, 7, 4, 1, 2, 1, 6],
                           [0, 0, 0, 2, 2, 4, 4, 4, 5])),
                  columns=['a', 'b', 'ngroup'])

res = (
    df.groupby('ngroup', as_index=False)
      .agg(a=('a', 'sum'), b=('b', 'sum'),
           nrow_same_group=('a', 'size'))
)
Here the keyword arguments passed to agg are "named aggregation" tuples: the first element is the column to aggregate and the second is the aggregation function to apply to it. The keyword names become the labels of the resulting columns.
Output:
>>> res
   ngroup  a   b  nrow_same_group
0       0  3   8                3
1       2  7  11                2
2       4  4   4                3
3       5  6   6                1
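Equivalently, the tuples can be written out explicitly with pd.NamedAgg (the tuple form is shorthand for it):

res = (
    df.groupby('ngroup', as_index=False)
      .agg(a=pd.NamedAgg(column='a', aggfunc='sum'),
           b=pd.NamedAgg(column='b', aggfunc='sum'),
           nrow_same_group=pd.NamedAgg(column='a', aggfunc='size'))
)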
First aggregate a and b with sum, then compute the size of each group and assign it to the nrow_same_group column:
g = df.groupby('ngroup')
g.sum().assign(nrow_same_group=g.size())
        a   b  nrow_same_group
ngroup
0       3   8                3
2       7  11                2
4       4   4                3
5       6   6                1

expand pandas groupby results to initial dataframe

Say I have a dataframe df and group it by a few columns into dfg, taking the median of one of its columns. How could I then take those median values and expand them out into a new column of the original df, associated with the respective group conditions? This means there will be duplicates, but I will be using this column for a subsequent calculation, and having the medians in a column makes that possible.
Example data:
import numpy as np
import pandas as pd

data = {'idx': [1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2],
        'condition1': [1,1,2,2,3,3,4,4,1,1,2,2,3,3,4,4],
        'condition2': [1,2,1,2,1,2,1,2,1,2,1,2,1,2,1,2],
        'values': np.random.normal(0, 1, 16)}
df = pd.DataFrame(data)
dfg = df.groupby(['idx', 'condition2'], as_index=False)['values'].median()
Example of the desired result (note the duplicates corresponding to the correct conditions):
    idx  condition1  condition2    values   medians
0     1           1           1  0.350310  0.656355
1     1           1           2 -0.291736 -0.024304
2     1           2           1  1.593545  0.656355
3     1           2           2 -1.275154 -0.024304
4     1           3           1  0.075259  0.656355
5     1           3           2  1.054481 -0.024304
6     1           4           1  0.962400  0.656355
7     1           4           2  0.243128 -0.024304
8     2           1           1  1.717391  1.155406
9     2           1           2  0.788847  1.006583
10    2           2           1  1.145891  1.155406
11    2           2           2 -0.492063  1.006583
12    2           3           1 -0.157029  1.155406
13    2           3           2  1.224319  1.006583
14    2           4           1  1.164921  1.155406
15    2           4           2  2.042239  1.006583
I believe you need GroupBy.transform with median for the new column:
df['medians'] = df.groupby(['idx', 'condition2'])['values'].transform('median')
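The broadcast medians then make the subsequent calculation the question mentions straightforward; for example, a hypothetical deviation-from-group-median column:

# hypothetical follow-up: deviation of each value from its group median
df['deviation'] = df['values'] - df['medians']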

Comparing the value of a row in a certain column to the values in other columns

Using pandas, I'm trying to determine whether the value in a certain column is greater than the values in all the other columns in the same row.
To do this I'm looping through the rows of a dataframe and using the all function to compare the values in the other columns; but this throws the error "string indices must be integers".
It seems like this should work; what's wrong with this approach?
for row in dataframe:
    if all(i < row['col1'] for i in [row['col2'], row['col3'], row['col4'], row['col5']]):
        row['newcol'] = 'value'
Iterating over a DataFrame yields its column labels, which are strings, so row['col1'] is indexing into a string; that is where the error comes from. Rather than looping, build a boolean mask and pass it to loc:
df.loc[df['col1'] > df.loc[:, 'col2':'col5'].max(axis=1), 'newcol'] = 'newvalue'
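A minimal, self-contained demonstration of that line (column names assumed from the question):

import pandas as pd

df = pd.DataFrame({'col1': [9, 2, 7],
                   'col2': [1, 5, 3],
                   'col3': [4, 4, 4],
                   'col4': [0, 8, 1],
                   'col5': [2, 2, 2]})
# rows where col1 beats the max of col2..col5 get the new value
df.loc[df['col1'] > df.loc[:, 'col2':'col5'].max(axis=1), 'newcol'] = 'newvalue'
print(df)
#    col1  col2  col3  col4  col5    newcol
# 0     9     1     4     0     2  newvalue
# 1     2     5     4     8     2       NaN
# 2     7     3     4     1     2  newvalue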
The main problem, in my opinion, is using a loop for vectorisable logic.
Below is an example of how your logic can be implemented using numpy.where.
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randint(0, 9, (5, 10)))
# compare column 1 against the max of every *other* column in the row;
# comparing against df.max(axis=1) would always be False, since the max
# includes column 1 itself
df['new_col'] = np.where(df[1] > df.drop(columns=1).max(axis=1),
                         'col1_is_max',
                         'col1_not_max')
Result (random data, so yours will differ):
   0  1  2  3  4  5  6  7  8  9       new_col
0  4  1  3  8  3  2  5  1  1  2  col1_not_max
1  2  7  1  2  5  3  5  1  8  5  col1_not_max
2  1  8  2  5  7  4  0  3  6  3   col1_is_max
3  6  4  2  1  7  2  0  8  3  2  col1_not_max
4  0  1  3  3  0  3  7  4  4  1  col1_not_max

Create new dataframe by groups based on another dataframe

I don't have much experience with working with pandas. I have a pandas dataframe as shown below.
df = pd.DataFrame({'A': [1, 2, 1],
                   'start': [1, 3, 4],
                   'stop': [3, 4, 8]})
I would like to create a new dataframe by iterating through the rows and appending to a resulting dataframe. For example, row 1 of the input (A=1, start=1, stop=3) should generate the sequence of numbers [1, 2, 3] in a seq column, each paired with its corresponding A value, 1:
   A  seq
   1    1
   1    2
   1    3
   2    3
   2    4
   1    4
   1    5
   1    6
   1    7
   1    8
So far, I've managed to identify what function to use to iterate through the rows of the pandas dataframe.
Here's one way with apply:
import numpy as np

(df.set_index('A')
   .apply(lambda x: pd.Series(np.arange(x['start'], x['stop'] + 1)), axis=1)
   .stack()
   .to_frame('seq')
   .reset_index(level=1, drop=True)
   .astype('int')
)
Out:
   seq
A
1    1
1    2
1    3
2    3
2    4
1    4
1    5
1    6
1    7
1    8
If you want to use loops:
In [1164]: data = []

In [1165]: for _, x in df.iterrows():
      ...:     data += [[x.A, y] for y in range(x.start, x.stop+1)]
      ...:

In [1166]: pd.DataFrame(data, columns=['A', 'seq'])
Out[1166]:
   A  seq
0  1    1
1  1    2
2  1    3
3  2    3
4  2    4
5  1    4
6  1    5
7  1    6
8  1    7
9  1    8
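If performance matters on larger inputs, a mostly vectorized sketch (assuming the same df; repeat and arange are standard pandas/numpy calls) that avoids appending row by row:

import numpy as np
import pandas as pd

n = df['stop'] - df['start'] + 1  # length of each row's sequence
out = pd.DataFrame({
    # repeat each A value once per element of its sequence
    'A': df['A'].repeat(n).to_numpy(),
    # concatenate the per-row ranges start..stop into one array
    'seq': np.concatenate([np.arange(a, b + 1)
                           for a, b in zip(df['start'], df['stop'])]),
})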
To add to the answers above, here's a function that interprets the dataframe input shown into the form the poster wants:
def gen_df_permutations(perm_def_df):
    m_list = []
    for i in perm_def_df.index:
        row = perm_def_df.loc[i]
        for n in range(row.start, row.stop + 1):
            r_list = [row.A, n]
            m_list.append(r_list)
    return m_list
Call it, referencing the specification dataframe:
gen_df_permutations(df)
Or optionally call it wrapped in a dataframe creation function to return a final dataframe output:
pd.DataFrame(gen_df_permutations(df),columns=['A','seq'])
   A  seq
0  1    1
1  1    2
2  1    3
3  2    3
4  2    4
5  1    4
6  1    5
7  1    6
8  1    7
9  1    8
N.B. the first column there is the dataframe index, which can be removed or ignored as requirements allow.
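If the index is unwanted in the printed output, one small sketch is to render the frame without it:

out = pd.DataFrame(gen_df_permutations(df), columns=['A', 'seq'])
print(out.to_string(index=False))  # prints the frame without the index column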

Stacked plot from pandas dataframe

I would like to create a stacked bar plot from the following dataframe:
   VALUE     COUNT  RECL_LCC  RECL_PI
0      1  15686114         3        1
1      2  27537963         1        1
2      3  23448904         1        2
3      4   1213184         1        3
4      5  14185448         3        2
5      6  13064600         3        3
6      7  27043180         2        2
7      8  11732405         2        1
8      9  14773871         2        3
There would be 2 bars in the plot: one for RECL_LCC and one for RECL_PI. Each bar would have 3 sections corresponding to the unique values of RECL_LCC and RECL_PI (i.e., 1, 2, 3), summing COUNT for each section. So far, I have something like this:
df = df.convert_objects(convert_numeric=True)
sub_df = df.groupby(['RECL_LCC','RECL_PI'])['COUNT'].sum().unstack()
sub_df.plot(kind='bar',stacked=True)
However, the resulting plot has three bars, one per RECL_LCC value. Any idea on how to obtain 2 bars (RECL_LCC and RECL_PI) instead of these 3?
Your problem is that the dtypes are not numeric, so no aggregation function will work on them while they are strings. You can convert each offending column like so:
df['col'] = df['col'].astype(int)
or just call convert_objects on the df:
df.convert_objects(convert_numeric=True)
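Note that convert_objects has since been deprecated and removed from pandas; pd.to_numeric is the current replacement. As for getting the two bars the question asks for, one approach (a sketch, not the only option) is to melt both RECL_* fields into a single class column first:

import pandas as pd

# modern replacement for convert_objects(convert_numeric=True)
df['COUNT'] = pd.to_numeric(df['COUNT'])

# reshape so RECL_LCC and RECL_PI become two categories of one "field"
# column, then stack the class values 1/2/3 within each bar
long = df.melt(id_vars='COUNT', value_vars=['RECL_LCC', 'RECL_PI'],
               var_name='field', value_name='cls')
sub = long.groupby(['field', 'cls'])['COUNT'].sum().unstack()
sub.plot(kind='bar', stacked=True)  # two bars, each with three segments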
