I aim to create a plot similar to the image where the '2020 Q4' data is in the same column as '2020'.
So far I was only able to place the 2020 Q4 data simply as an extra column.
The data is provided as a DataFrame like in the code below:
# DataFrame using arrays.
import pandas as pd
# initialize data of lists.
data = {'A': [10, 15, 20, 26, 27, 35, 15],
        'B': [20, 25, 32, 33, 50, 52, 8],
        'C': [30, 35, 41, 49, 52, 53, 25]}
# Creates pandas DataFrame.
df = pd.DataFrame(data, index=['2015', '2016', '2017', '2018',
                               '2019', '2020', '2020 Q4'])
# plotting the data
df.plot(kind='bar',stacked=True)
Two problems have to be addressed here. First, the Q4 data has to be moved out of its own row and into the '2020' row above. Second, the corresponding columns (A and A Q4, etc.) need similar colors to make it clear that they belong to the same category; Matplotlib's tab20 colormap, whose colors come in dark/light pairs, comes in handy here. Here is one approach:
from matplotlib import pyplot as plt
# DataFrame using arrays.
import pandas as pd
import numpy as np
# initialize data of lists.
data = {'A': [10, 15, 20, 26, 27, 35, 15],
        'B': [20, 25, 32, 33, 50, 52, 8],
        'C': [30, 35, 41, 49, 52, 53, 25]}
# Creates pandas DataFrame.
df = pd.DataFrame(data, index=['2015', '2016', '2017', '2018',
                               '2019', '2020', '2020 Q4'])
#get column names
columns = df.columns
#store data of last row and create a new dataframe without the last row
val_q4 = df.iloc[-1].values
df1 = df.iloc[:-1]
#alternatively, one could simply overwrite df, since the Q4 row is not needed anymore:
#df = df.iloc[:-1]
#interleave an extra " Q4" column after each original column
new_columns = [f(item) for item in columns for f in (lambda x: x, lambda x: x + " Q4")]
df1 = df1.reindex(columns=new_columns)
#store the Q4 data in the last remaining row ('2020'), only in the " Q4" columns
df1.iloc[-1, range(1, 2*len(columns), 2)] = val_q4
#create corresponding color pairs using the tab20 colormap
colors = plt.cm.tab20(np.linspace(0, 1, 20))
df1.plot(kind='bar',stacked=True, color=colors)
plt.show()
Sample output:
Restrictions: it relies on your data structure, i.e. the rows are already sorted and the last two rows are "Year" and "Year Q4". tab20 limits the approach to 10 columns A, B, C, ..., J, because beginning with K the colors are no longer unique. However, stacked bar graphs with more than 10 categories should be outlawed anyhow.
You can introduce Q4 columns for A, B and C and initialise them to zero for all years except 2020. This should give you the desired result.
For example, see this updated code:
# DataFrame using arrays.
import pandas as pd
# initialize data of lists.
data = {'A': [10, 15, 20, 26, 27, 35, 15],
        'B': [20, 25, 32, 33, 50, 52, 8],
        'C': [30, 35, 41, 49, 52, 53, 25]}
# additional data, initialised with zero for all years
add_data = {'AQ4': [0]*5,
            'BQ4': [0]*5,
            'CQ4': [0]*5}
# take the last element of lists A, B and C and append it to the add_data dict
add_data['AQ4'].append(data['A'].pop())
add_data['BQ4'].append(data['B'].pop())
add_data['CQ4'].append(data['C'].pop())
# merge the two dicts
data.update(add_data)
# Creates pandas DataFrame.
df = pd.DataFrame(data, index=['2015', '2016', '2017', '2018', '2019', '2020'])
# plotting the data
df.plot(kind='bar',stacked=True)
Note that I have removed the '2020 Q4' index when creating the DataFrame.
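If you also want each Q4 stack to share a color family with its base column, you could combine this with the tab20 idea from the first answer. A minimal sketch, relying on the column order A, B, C, AQ4, BQ4, CQ4 that data.update(add_data) produces:
from matplotlib import pyplot as plt
# tab20 indices 0/1, 2/3, 4/5 form dark/light pairs; order them to match A, B, C, AQ4, BQ4, CQ4
colors = plt.cm.tab20([0, 2, 4, 1, 3, 5])
df.plot(kind='bar', stacked=True, color=colors)
plt.show()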
Related
I have a dataframe named df1 with 4 columns. I want to use 2 of those columns as lists in an exponential smoothing forecast.
I've found code that gives the desired result, but I want to replace the example lists in it with the columns of the dataframe I have.
Here is my existing code and result:
import pandas as pd
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing
# example data
data = pd.DataFrame({
    'ASIN': ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B'],
    'UnitsOrdered': [18, 29, 22, 16, 18, 19, 16, 29, 18, 26]
})
# apply exponential smoothing for each ASIN
smoothed_data = []
for asin, group in data.groupby('ASIN'):
    units_ordered = np.asarray(group['UnitsOrdered'])
    model = ExponentialSmoothing(units_ordered, trend='add', seasonal=None).fit()
    forecast = model.forecast(steps=1)
    smoothed_data.append({
        'ASIN': asin,
        'UnitsOrdered': units_ordered,
        'Forecast': forecast[0]
    })
# combine results into a new dataframe
smoothed_df = pd.DataFrame(smoothed_data)
print(smoothed_df)
Result:
Item UnitsOrdered Forecast
0 A [18, 29, 22, 16, 18] 16.700000
1 B [19, 16, 29, 18, 26] 26.399755
Now I have a dataframe with columns named "Item" and "UnitsOrdered", and I want to use those columns in this code instead of the example lists for A and B.
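A minimal sketch of that adaptation, assuming your dataframe is called df1 and has the columns 'Item' and 'UnitsOrdered' (the df1 built here is only a hypothetical stand-in for your real data):
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
# hypothetical stand-in for your df1 -- replace with your real dataframe
df1 = pd.DataFrame({
    'Item': ['X', 'X', 'X', 'X', 'X', 'Y', 'Y', 'Y', 'Y', 'Y'],
    'UnitsOrdered': [18, 29, 22, 16, 18, 19, 16, 29, 18, 26]
})
# same loop as above, only grouped by the 'Item' column instead of 'ASIN'
smoothed_data = []
for item, group in df1.groupby('Item'):
    units_ordered = np.asarray(group['UnitsOrdered'])
    model = ExponentialSmoothing(units_ordered, trend='add', seasonal=None).fit()
    forecast = model.forecast(steps=1)
    smoothed_data.append({'Item': item,
                          'UnitsOrdered': units_ordered,
                          'Forecast': forecast[0]})
smoothed_df = pd.DataFrame(smoothed_data)
print(smoothed_df)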
I want to convert 3 rows into a multi-level column header in a pandas dataframe.
Sample dataframe is,
df = pd.DataFrame({'a': ['foo_0', 'bar_0', 1, 2, 3], 'b': ['foo_0', 'bar_0', 11, 12, 13],
                   'c': ['foo_1', 'bar_1', 21, 22, 23], 'd': ['foo_1', 'bar_1', 31, 32, 33]})
The expected output looks like the image below, wherein the yellow-colored rows form the multi-level column header.
Thank you,
-Nilesh
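A minimal sketch for the sample above, assuming the three header levels are meant to be the existing column names plus the values in the first two rows (adjust the slice if your real data differs):
import pandas as pd
df = pd.DataFrame({'a': ['foo_0', 'bar_0', 1, 2, 3], 'b': ['foo_0', 'bar_0', 11, 12, 13],
                   'c': ['foo_1', 'bar_1', 21, 22, 23], 'd': ['foo_1', 'bar_1', 31, 32, 33]})
# build a three-level MultiIndex from the column names and the first two rows
df.columns = pd.MultiIndex.from_arrays([df.columns, df.iloc[0], df.iloc[1]])
# drop the rows that became part of the header and reset the row index
df = df.iloc[2:].reset_index(drop=True)
print(df)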
I want to select rows based on two conditions
df.Length.str.isnumeric() == False & df.Type == "Type1"
and change the values of all matching rows, in a specific column (Length), to values from a list, such as:
[120, 2151, 215, 25, 2451]
Thank you!
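A minimal sketch of that selection and assignment, using a hypothetical dataframe with columns Length (strings) and Type, and assuming the list has exactly as many entries as there are matching rows:
import pandas as pd
# hypothetical example data -- replace with your real dataframe
df = pd.DataFrame({'Length': ['100', 'abc', '200', 'x', 'y', '300', 'z', 'w'],
                   'Type': ['Type1', 'Type1', 'Type2', 'Type1', 'Type1', 'Type1', 'Type1', 'Type1']})
new_values = [120, 2151, 215, 25, 2451]
# each condition needs its own parentheses, otherwise & binds before the comparisons
mask = (~df['Length'].str.isnumeric()) & (df['Type'] == 'Type1')
df.loc[mask, 'Length'] = new_values
print(df)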
EDITED
Following your comment, I came up with this shorter solution.
import pandas as pd
import math
df = pd.DataFrame(
    [
        [10, 15, float('NaN')],
        [25, 30, 35],
        [40, 45, 50],
        [55, 60, float('NaN')]
    ], columns=list('ABC'))
pre_computed_list = [20, 65]
row_indices = df['C'][df['C'].apply(math.isnan)].index.tolist()
# assign via .loc so the values land in the original frame (avoids chained indexing)
df.loc[row_indices, 'C'] = pre_computed_list
I am trying to draw subplots using two DataFrames (predicted and observed) that have exactly the same structure; the first column is the index.
The code below creates a new index when they are concatenated using pd.melt and pd.concat.
As you can see in the figure, the index of the orange line is changed from 1-5 to 6-10.
I was wondering if some could fix the code below to keep the same index for the orange line:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
actual = pd.DataFrame({'a': [5, 8, 9, 6, 7, 2],
                       'b': [89, 22, 44, 6, 44, 1]})
predicted = pd.DataFrame({'a': [7, 2, 13, 18, 20, 2],
                          'b': [9, 20, 4, 16, 40, 11]})
# Creating a tidy-dataframe to input under seaborn
merged = pd.concat([pd.melt(actual), pd.melt(predicted)]).reset_index()
merged['category'] = ''
merged.loc[:len(actual)*2,'category'] = 'actual'
merged.loc[len(actual)*2:,'category'] = 'predicted'
g = sns.FacetGrid(merged, col="category", hue="variable")
g.map(plt.plot, "index", "value", alpha=.7)
g.add_legend();
The orange line ('variable' == 'b') doesn't have an index of 0-5 because of how you used melt. If you look at pd.melt(actual), the index doesn't match what you are expecting, IIUC.
Here is how I would rearrange the dataframe:
merged = pd.concat([actual, predicted], keys=['actual', 'predicted'])
merged.index.names = ['category', 'index']
merged = merged.reset_index()
merged = pd.melt(merged, id_vars=['category', 'index'], value_vars=['a', 'b'])
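The plotting code from the question should then work unchanged on the rearranged frame, since it still has 'category', 'variable', 'index' and 'value' columns:
g = sns.FacetGrid(merged, col="category", hue="variable")
g.map(plt.plot, "index", "value", alpha=.7)
g.add_legend()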
Set the ignore_index parameter to False to preserve the index, e.g.
df = df.melt(var_name='species', value_name='height', ignore_index=False)
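A sketch of how that could look for the question's actual/predicted frames (assuming pandas >= 1.1, where melt gained the ignore_index parameter):
a = actual.melt(ignore_index=False).reset_index()
p = predicted.melt(ignore_index=False).reset_index()
merged = pd.concat([a.assign(category='actual'), p.assign(category='predicted')])
g = sns.FacetGrid(merged, col="category", hue="variable")
g.map(plt.plot, "index", "value", alpha=.7)
g.add_legend()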
I am pretty new to Python and hence I need your help on the following:
I have two tables (dataframes):
Table 1 has all the data and it looks like this:
The GenDate column has the generation day.
The Date column has dates.
Columns D and onwards have different values.
I also have the following table:
Column I has "keywords" that can be found in the header of Table 1
Column K has dates that should be in column C of table 1
My goal is to produce a table like the following:
I have omitted a few columns for illustration purposes.
Every column of Table 1 should be split based on the Type that is written in its header.
E.g. A_Weeks: the type Weeks corresponds to 3 splits: Week1, Week2 and Week3.
Each one of these splits has a specific Date.
In the new table, 3 columns should be created, using A_ followed by the split name:
A_Week1, A_Week2 and A_Week3.
For each of these columns, the value that corresponds to the Date of that split should be used.
I hope the explanation is good.
Thanks
You can get the desired table with the following code (follow the comments and check the pandas API reference to learn about the functions used):
import numpy as np
import pandas as pd
# initial data
t_1 = pd.DataFrame(
    {'GenDate': [1, 1, 1, 2, 2, 2],
     'Date': [10, 20, 30, 10, 20, 30],
     'A_Days': [11, 12, 13, 14, 15, 16],
     'B_Days': [21, 22, 23, 24, 25, 26],
     'A_Weeks': [110, 120, 130, 140, np.nan, 160],
     'B_Weeks': [210, 220, 230, 240, np.nan, 260]})
# initial data
t_2 = pd.DataFrame(
    {'Type': ['Days', 'Days', 'Days', 'Weeks', 'Weeks'],
     'Split': ['Day1', 'Day2', 'Day3', 'Week1', 'Week2'],
     'Date': [10, 20, 30, 10, 30]})
# create multiindex
t_1 = t_1.set_index(['GenDate', 'Date'])
# pivot 'Date' level of MultiIndex - unstack it from index to columns
# and drop columns with all NaN values
tt_1 = t_1.unstack().dropna(axis=1)
# tt_1 is what you need with multi-level column labels
# map to rename columns
t_2 = t_2.set_index(['Type'])
mapping = {
    type_: dict(zip(
        t_2.loc[type_, :].loc[:, 'Date'],
        t_2.loc[type_, :].loc[:, 'Split']))
    for type_ in t_2.index.unique()}
# new column names
new_columns = list()
for letter_type, date in tt_1.columns.values:
    letter, type_ = letter_type.split('_')
    new_columns.append('{}_{}'.format(letter, mapping[type_][date]))
tt_1.columns = new_columns
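For the sample data above, the renamed frame should then end up with one column per letter/split combination, which you can check with:
print(tt_1.columns.tolist())
# expected something like (the Week columns for Date 20 are dropped because of the NaN):
# ['A_Day1', 'A_Day2', 'A_Day3', 'B_Day1', 'B_Day2', 'B_Day3',
#  'A_Week1', 'A_Week2', 'B_Week1', 'B_Week2']
print(tt_1)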