How to plot one column's "usage" of another column in pandas - python

I would like to plot one variable as a constant (Total_Cap in this case) and layer the maxused_Cap and meanused_Cap values on top of it. Essentially I want the visual of a stacked bar plot, but I do not want the totals aggregated together: the height of the bar for each site should always be just the value of Total_Cap, with the other two values layered on.
example df:
SITE  Total_Cap  maxused_Cap  meanused_Cap
A     4          3            2
B     8          7            4
C     12         11           5
D     16         13           10
I tried this code, but it simply adds the values together when plotting the bars:
x= df4[['SITE','maxused_cap','Total_Cap']]
y= x.set_index('SITE')
z=y.groupby('SITE')
z.plot.bar(stacked=True).mean()
plt.show()

IIUC, this does what you want: rescale maxused_Cap and meanused_Cap so that together they sum to Total_Cap, then stack them:
df.set_index('SITE', inplace=True)
df[['maxused_Cap', 'meanused_Cap']].div(
    (df['maxused_Cap'] + df['meanused_Cap']) / df['Total_Cap'],
    axis=0).plot.bar(stacked=True, figsize=(8, 6));
Out: [stacked bar chart: each SITE's bar has total height Total_Cap, split into the rescaled maxused_Cap and meanused_Cap segments]
Set up the dataframe:
import pandas as pd
import io
t = '''
SITE Total_Cap maxused_Cap meanused_Cap
A 4 3 2
B 8 7 4
C 12 11 5
D 16 13 10'''
df = pd.read_csv(io.StringIO(t), sep=r'\s+')
df
Out:
SITE Total_Cap maxused_Cap meanused_Cap
0 A 4 3 2
1 B 8 7 4
2 C 12 11 5
3 D 16 13 10
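An alternative, if you want the bar heights to literally equal Total_Cap with the usage values drawn in front of (not summed with) the total: a minimal matplotlib sketch, assuming the df built above with SITE as the index (the colors and widths are arbitrary choices):
import matplotlib.pyplot as plt

# Assumes df is indexed by SITE, e.g. df = df.set_index('SITE').
# Full-height bars show Total_Cap; the narrower bars drawn afterwards sit
# in front of them at the same x positions, so nothing is summed.
ax = df['Total_Cap'].plot.bar(color='lightgrey', figsize=(8, 6), label='Total_Cap')
df['maxused_Cap'].plot.bar(ax=ax, color='steelblue', width=0.35, label='maxused_Cap')
df['meanused_Cap'].plot.bar(ax=ax, color='orange', width=0.2, label='meanused_Cap')
ax.legend()
plt.show()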

Related

How to select the last 3 dates in Python

I have a dataset that looks like this:
ID date
1 01-01-2012
1 05-02-2012
1 25-06-2013
1 14-12-2013
1 10-04-2014
2 19-05-2012
2 07-08-2014
2 10-09-2014
2 27-11-2015
2 01-12-2015
3 15-04-2013
3 17-05-2015
3 22-05-2015
3 30-10-2016
3 02-11-2016
I am working with Python and I would like to select the last 3 dates for each ID. Here is the dataset I would like to have:
ID date
1 25-06-2013
1 14-12-2013
1 10-04-2014
2 10-09-2014
2 27-11-2015
2 01-12-2015
3 22-05-2015
3 30-10-2016
3 02-11-2016
I used this code to select the very last date for each ID:
df_2 = df.sort_values(by=['date']).drop_duplicates(subset='ID', keep='last')
But how can I select more than one date (for example the last 3 dates, the last 4 dates, etc.)?
You might use groupby and tail in the following way to get the last 2 items from each group:
import pandas as pd
df = pd.DataFrame({'ID':[1,1,1,2,2,2,3,3,3],'value':['A','B','C','D','E','F','G','H','I']})
df2 = df.groupby('ID').tail(2)
print(df2)
Output:
ID value
1 1 B
2 1 C
4 2 E
5 2 F
7 3 H
8 3 I
Note that for simplicity's sake I used different (already sorted) data to build df.
You can try this:
df.sort_values(by=['date']).groupby('ID').tail(3).sort_values(['ID', 'date'])
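One caveat: the date column in the question holds dd-mm-yyyy strings, and sorting those as text is not chronological (e.g. '01-12-2015' sorts before '07-08-2014'). A sketch that parses the dates first, assuming the column names from the question:
import pandas as pd

# Parse dd-mm-yyyy strings so the sort is chronological rather than alphabetical.
df['date'] = pd.to_datetime(df['date'], format='%d-%m-%Y')
last3 = df.sort_values('date').groupby('ID').tail(3).sort_values(['ID', 'date'])
print(last3)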
I tried this, but with a non-datetime data type:
import pandas as pd
import numpy as np

a = [1,1,1,1,1,2,2,2,2,2,3,3,3,3,3]
b = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o']
arr = np.array([a, b])  # 2 x 15 array (values become strings)
df = pd.DataFrame(arr.T, columns=['ID', 'Date'])
# tail gives you the last n elements of each group
df_ = df.groupby('ID').tail(3)
df_
Output:
ID Date
2 1 c
3 1 d
4 1 e
7 2 h
8 2 i
9 2 j
12 3 m
13 3 n
14 3 o

Pandas - How to swap column contents leaving label sequence intact?

I am using pandas v0.25.3 and am inexperienced, but learning.
I have a dataframe and would like to swap the contents of two columns leaving the columns labels and sequence intact.
df = pd.DataFrame ({"A": [(1),(2),(3),(4)],
'B': [(5),(6),(7),(8)],
'C': [(9),(10),(11),(12)]})
This yields a dataframe:
A B C
0 1 5 9
1 2 6 10
2 3 7 11
3 4 8 12
I want to swap column contents B and C to get
A B C
0 1 9 5
1 2 10 6
2 3 11 7
3 4 12 8
I have tried looking at pd.DataFrame.values, which sent me to numpy arrays and advanced slicing, and I got lost.
What's the simplest way to do this?
You can assign a numpy array:
# pandas 0.24+
df[['B','C']] = df[['C','B']].to_numpy()
# older pandas versions
df[['B','C']] = df[['C','B']].values
Or use DataFrame.assign:
df = df.assign(B = df.C, C = df.B)
print(df)
A B C
0 1 9 5
1 2 10 6
2 3 11 7
3 4 12 8
Or just use:
df['B'], df['C'] = df['C'], df['B'].copy()
print(df)
Output:
A B C
0 1 9 5
1 2 10 6
2 3 11 7
3 4 12 8
You can also swap the labels:
df.columns = ['A','C','B']
If your DataFrame is very large, I believe this would require less from your computer than copying all the data.
If the order of the columns is important, you can then reorder them:
df = df.reindex(['A','B','C'], axis=1)
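If you need to swap columns often, a small helper can wrap the array-assignment approach shown above; swap_cols is a hypothetical name, not a pandas API:
import pandas as pd

def swap_cols(df, c1, c2):
    """Return a copy of df with the contents of columns c1 and c2 swapped."""
    out = df.copy()
    out[[c1, c2]] = out[[c2, c1]].to_numpy()
    return out

df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [5, 6, 7, 8], 'C': [9, 10, 11, 12]})
print(swap_cols(df, 'B', 'C'))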

How do I convert my 2D numpy array to a pandas dataframe with given categories?

I have an array called 'values' which features 2 columns of mean reaction time data from 10 individuals. The first column refers to data collected for a single individual in condition A, the second for that same individual in condition B:
array([[451.75 , 488.55555556],
[552.44444444, 590.40740741],
[629.875 , 637.62962963],
[454.66666667, 421.88888889],
[637.16666667, 539.94444444],
[538.83333333, 516.33333333],
[463.83333333, 448.83333333],
[429.2962963 , 497.16666667],
[524.66666667, 458.83333333]])
I would like to plot these data using seaborn, to display the mean values and the connected single values for each individual across the two conditions. What is the simplest way to convert the array 'values' into a 3-column DataFrame, where one column holds all the values, another holds a label marking each value as condition A or condition B, and a final column gives a number for each individual (i.e., 1-10)? For example, as follows:
Value Condition Individual
451.75 A 1
488.56 B 1
488.55 A 2
...etc
You can do that using pd.melt:
pd.DataFrame(values, columns=['A', 'B']).reset_index().melt(id_vars='index') \
    .rename(columns={'index': 'Individual'})
Individual variable value
0 0 A 451.750000
1 1 A 552.444444
2 2 A 629.875000
3 3 A 454.666667
4 4 A 637.166667
5 5 A 538.833333
6 6 A 463.833333
7 7 A 429.296296
8 8 A 524.666667
9 0 B 488.555556
10 1 B 590.407407
11 2 B 637.629630
12 3 B 421.888889
13 4 B 539.944444
14 5 B 516.333333
15 6 B 448.833333
16 7 B 497.166667
17 8 B 458.833333
This should work:
import pandas as pd
import numpy as np
np_array = np.array([[451.75 , 488.55555556],
[552.44444444, 590.40740741],
[629.875 , 637.62962963],
[454.66666667, 421.88888889],
[637.16666667, 539.94444444],
[538.83333333, 516.33333333],
[463.83333333, 448.83333333],
[429.2962963 , 497.16666667],
[524.66666667, 458.83333333]])
pd_df = pd.DataFrame(np_array, columns=["A", "B"])
num_individuals = len(pd_df.index)
pd_df = pd_df.melt()
pd_df["INDIVIDUAL"] = [(i)%(num_individuals) + 1 for i in pd_df.index]
pd_df
variable value INDIVIDUAL
0 A 451.750000 1
1 A 552.444444 2
2 A 629.875000 3
3 A 454.666667 4
4 A 637.166667 5
5 A 538.833333 6
6 A 463.833333 7
7 A 429.296296 8
8 A 524.666667 9
9 B 488.555556 1
10 B 590.407407 2
11 B 637.629630 3
12 B 421.888889 4
13 B 539.944444 5
14 B 516.333333 6
15 B 448.833333 7
16 B 497.166667 8
17 B 458.833333 9
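For the plotting step the question mentions (mean values plus connected single values per individual), a hedged seaborn sketch, assuming the melted frame is renamed to the question's column names (a shortened values array is used here for brevity):
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

values = np.array([[451.75, 488.56], [552.44, 590.41], [629.88, 637.63]])
tidy = (pd.DataFrame(values, columns=['A', 'B'])
        .reset_index()
        .melt(id_vars='index')
        .rename(columns={'index': 'Individual',
                         'variable': 'Condition',
                         'value': 'Value'}))

# One grey line per individual across the two conditions...
sns.lineplot(data=tidy, x='Condition', y='Value',
             units='Individual', estimator=None, color='grey', alpha=0.5)
# ...with the condition means drawn on top in black.
sns.pointplot(data=tidy, x='Condition', y='Value', color='black')
plt.show()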

Selecting the top 50% names from the columns of a pandas dataframe

I have a pandas dataframe that looks like this. The rows and the columns have the same name.
name a b c d e f g
a 10 5 4 8 5 6 4
b 5 10 6 5 4 3 3
c - 4 9 3 6 5 7
d 6 9 8 6 6 8 2
e 8 5 4 4 14 9 6
f 3 3 - 4 5 14 7
g 4 5 8 9 6 7 10
I can get the 5 largest values with df['column_name'].nlargest(n=5), but how do I return the largest 50% in descending order? Is there anything built into pandas for it, or do I have to write a function? I am quite new to python. Please help me out.
UPDATE: So let's take column a into consideration; it has the values 10, 5, -, 6, 8, 3 and 4. I have to sum them all up and get the top 50% of them. The total in this case is 36, so 50% of these values would be 18. So from column a I want to select 10 and 8 only. Similarly, I want to go through all the other columns and select 50%.
Sorting is flexible :)
df.sort_values('column_name', ascending=False).head(int(df.shape[0] * .5))
Update: the frac argument is available only on .sample(), not on .head or .tail. df.sample(frac=.5) does give 50%, but head and tail expect only an int; df.head(frac=.5) fails with TypeError: head() got an unexpected keyword argument 'frac'
Note: on int() vs round()
int(3.X) == 3 # True where 0 <= X <= 9
round(3.45) == 3 # True
round(3.5) == 4 # True
So when doing .head(int/round ...) do think of what behaviour fits your need.
Updated: Requirements
So let's take column a into consideration and it has values like 10,
5,-,6,8,3 and 4. I have to sum all of them up and get the top 50% of
them. so the total, in this case, is 36. 50% of these values would be
18. So from column a, I want to select 10 and 8 only. Similarly, I want to go through all the other columns and select 50%. -Matt
A silly hack would be to sort, take the cumulative sum, divide it by the sum total to find where the middle falls, and then use that to select part of your sorted column, e.g.:
import io
import pandas as pd

data = pd.read_csv(
    io.StringIO("""name a b c d e f g
a 10 5 4 8 5 6 4
b 5 10 6 5 4 3 3
c - 4 9 3 6 5 7
d 6 9 8 6 6 8 2
e 8 5 4 4 14 9 6
f 3 3 - 4 5 14 7
g 4 5 8 9 6 7 10"""),
    sep=' ', index_col='name'
).apply(pd.to_numeric, errors='coerce', downcast='signed')

# Sort descending, then keep the rows whose running share of the
# column total stays within 50%.
a_sorted = data[['a']].sort_values(by='a', ascending=False)
x = a_sorted[(a_sorted.cumsum() / a_sorted.sum()) <= .5].dropna()
print(x)
Outcome:
        a
name
a    10.0
e     8.0
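The same cumulative-sum idea can be applied to every column at once. A sketch, assuming the data frame built above (top_half is a helper name of mine, not a pandas function):
def top_half(col):
    """Keep the sorted values whose running share of the column total is <= 50%."""
    s = col.dropna().sort_values(ascending=False)
    return s[s.cumsum() / s.sum() <= 0.5]

tops = {c: top_half(data[c]) for c in data.columns}
print(tops['a'])  # expected: 10 (row a) and 8 (row e)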
You could sort the data frame and display only 90% of the data
df.sort_values('column_name',ascending=False).head(round(0.9*len(df)))
data.csv
name,a,b,c,d,e,f,g
a,10,5,4,8,5,6,4
b,5,10,6,5,4,3,3
c,-,4,9,3,6,5,7
d,6,9,8,6,6,8,2
e,8,5,4,4,14,9,6
f,3,3,-,4,5,14,7
g,4,5,8,9,6,7,10
test.py
#!/bin/python
import pandas as pd
def percentageOfList(l, p):
    return l[0:int(len(l) * p)]
df = pd.read_csv('data.csv')
print(percentageOfList(df.sort_values('b', ascending=False)['b'], 0.9))

Reduce number of columns in a pandas DataFrame

I'm trying to create a violin plot in seaborn. The input is a pandas DataFrame, and it looks like in order to separate the data along the x axis I need to differentiate on a single column. I currently have a DataFrame that has floating point values for several sensors:
>>>df.columns
Index(['SensorA', 'SensorB', 'SensorC', 'SensorD', 'group_id'], dtype='object')
That is, each Sensor[A-Z] column contains a bunch of numbers:
>>>df['SensorA'].head()
0 0.072706
1 0.072698
2 0.072701
3 0.072303
4 0.071951
Name: SensorA, dtype: float64
And for this problem, I'm only interested in 2 groups:
>>>df['group_id'].unique()
'1', '2'
I want each Sensor to be a separate violin along the x axis.
I think this means I need to convert this into something of the form:
>>>df.columns
Index(['Value', 'Sensor', 'group_id'], dtype='object')
where the Sensor column in the new DataFrame contains the text "SensorA", "SensorB", etc., the Value column in the new DataFrame contains the values that were original in each Sensor[A-Z] column, and the group information is preserved.
I could then create a violinplot using the following command:
ax = sns.violinplot(x="Sensor", y="Value", hue="group_id", data=df)
I'm thinking I kind of need to do a reverse pivot. Is there an easy way of doing this?
Use pandas' melt function:
import pandas as pd

df = pd.DataFrame({'SensorA': [1, 3, 4, 5, 6],
                   'SensorB': [5, 2, 3, 6, 7],
                   'SensorC': [7, 4, 8, 1, 10],
                   'group_id': [1, 2, 1, 1, 2]})
df = pd.melt(df, id_vars='group_id', var_name='Sensor')
print(df)
gives
group_id Sensor value
0 1 SensorA 1
1 2 SensorA 3
2 1 SensorA 4
3 1 SensorA 5
4 2 SensorA 6
5 1 SensorB 5
6 2 SensorB 2
7 1 SensorB 3
8 1 SensorB 6
9 2 SensorB 7
10 1 SensorC 7
11 2 SensorC 4
12 1 SensorC 8
13 1 SensorC 1
14 2 SensorC 10
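With the melted frame, the violin plot from the question should follow directly. Note that melt names the value column 'value' (lowercase) unless you pass value_name, so using the sample data above:
import seaborn as sns
import matplotlib.pyplot as plt

ax = sns.violinplot(x='Sensor', y='value', hue='group_id', data=df)
plt.show()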
Maybe it's not the best way, but it works (AFAIU):
import pandas as pd

df = pd.DataFrame({'SensorA': [1, 3, 4, 5, 6], 'SensorB': [5, 2, 3, 6, 7],
                   'SensorC': [7, 4, 8, 1, 10], 'group_id': [1, 2, 1, 1, 2]})
groupedID = df.groupby('group_id')
df1 = pd.DataFrame()
for groupNum in groupedID.groups.keys():
    dfSensors = groupedID.get_group(groupNum).filter(regex='Sen').stack()
    _, sensorNames = zip(*dfSensors.index)
    df2 = pd.DataFrame({'Sensor': sensorNames, 'Value': dfSensors.values,
                        'group_id': groupNum})
    df1 = pd.concat([df1, df2])
print(df1)
Output:
Sensor Value group_id
0 SensorA 1 1
1 SensorB 5 1
2 SensorC 7 1
3 SensorA 4 1
4 SensorB 3 1
5 SensorC 8 1
6 SensorA 5 1
7 SensorB 6 1
8 SensorC 1 1
0 SensorA 3 2
1 SensorB 2 2
2 SensorC 4 2
3 SensorA 6 2
4 SensorB 7 2
5 SensorC 10 2
