Waterfall chart with Plotly - Update Traces - python

I'm creating a waterfall plot for three categories, as shown in the code below:
import plotly.graph_objects as go

fig = go.Figure()
fig.add_trace(go.Waterfall(
    x = [["Category 1", "Category 1", "Category 1", "Category 1", "Category 1", "Category 1", "Category 1",
          "Category 2", "Category 2", "Category 2", "Category 2", "Category 2", "Category 2", "Category 2",
          "Category 3", "Category 3", "Category 3", "Category 3", "Category 3", "Category 3", "Category 3"
         ],
         ["Gross Income", "Taxes", "Net Revenue", "CPV", "Variable Expenses", "Recurrent Capex", "EBITDA",
          "Gross Income", "Taxes", "Net Revenue", "CPV", "Variable Expenses", "Recurrent Capex", "EBITDA",
          "Gross Income", "Taxes", "Net Revenue", "CPV", "Variable Expenses", "Recurrent Capex", "EBITDA"
         ]
        ],
    measure = ["absolute", "relative", "relative", "relative", "relative", "relative", "total",
               "absolute", "relative", "relative", "relative", "relative", "relative", "total",
               "absolute", "relative", "relative", "relative", "relative", "relative", "total"
              ],
    y = [1693, -296, 1501, -897, -27, -45, 532,
         1439.05, -251.6, 1275.85, -762.44, -22.95, -38.25, 452.2,
         1134.31, -198.32, 1005.67, -600.99, -18.09, -30.150, 356.44
        ]
))
This code produces the chart shown at https://i.stack.imgur.com/bDf2g.png
What I want to do next is change the color of the 'Gross Income' bars to green, so that only EBITDA would stand out with a different style.
I tried:
fig.update_traces(marker_color="LightSeaGreen", selector=dict(x='Gross Income'))
It doesn't work, though. Does anyone know how to do it?
Thanks

This is tricky because, for waterfall charts in Plotly, the marker colors are assigned based on whether a bar is increasing, decreasing or a total; they cannot be assigned per category.
However, with a rather ugly hack we can make the plot appear to have the desired color in the "Gross Income" category: plot the gross income bar separately for each of the three categories, give it the same value as in the original trace, and classify it as "relative" so that the argument increasing = {"marker":{"color":"lightseagreen"}} makes them all lightseagreen. Note: this only works because they all happen to be positive values.
Then, because each overlapping gross income bar has to be added as a separate trace, we need to offset these bars so they sit on top of the bars from your original waterfall figure. Trial and error suggests offset=-0.4 looks approximately correct. Since these additional bars are purely visual, I also disabled their hover info and kept them out of the legend.
import plotly.graph_objects as go

fig = go.Figure()
fig.add_trace(go.Waterfall(
    x = [["Category 1", "Category 1", "Category 1", "Category 1", "Category 1", "Category 1", "Category 1",
          "Category 2", "Category 2", "Category 2", "Category 2", "Category 2", "Category 2", "Category 2",
          "Category 3", "Category 3", "Category 3", "Category 3", "Category 3", "Category 3", "Category 3"
         ],
         ["Gross Income", "Taxes", "Net Revenue", "CPV", "Variable Expenses", "Recurrent Capex", "EBITDA",
          "Gross Income", "Taxes", "Net Revenue", "CPV", "Variable Expenses", "Recurrent Capex", "EBITDA",
          "Gross Income", "Taxes", "Net Revenue", "CPV", "Variable Expenses", "Recurrent Capex", "EBITDA"
         ]
        ],
    measure = ["absolute", "relative", "relative", "relative", "relative", "relative", "total",
               "absolute", "relative", "relative", "relative", "relative", "relative", "total",
               "absolute", "relative", "relative", "relative", "relative", "relative", "total"
              ],
    y = [1693, -296, 1501, -897, -27, -45, 532,
         1439.05, -251.6, 1275.85, -762.44, -22.95, -38.25, 452.2,
         1134.31, -198.32, 1005.67, -600.99, -18.09, -30.150, 356.44
        ]
))

# add an overlapping "Gross Income" bar in each category, colored lightseagreen
# (the Category 2 value must be the positive 1439.05 so it matches the original bar)
for category, value in zip(["Category 1", "Category 2", "Category 3"], [1693, 1439.05, 1134.31]):
    fig.add_trace(go.Waterfall(
        x = [[category], ["Gross Income"]],
        measure = ["relative"],
        y = [value],
        increasing = {"marker": {"color": "lightseagreen"}},
        offset = -0.4,
        connector = {"visible": False},
        showlegend = False,
        hoverinfo = 'skip',
    ))

fig.show()
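As a side note (not needed for the hack above), the waterfall trace does let you set the default direction-based colors, i.e. one color for all increasing bars, one for decreasing bars and one for totals. These are per-direction settings, not per-category, which is why the overlay is still needed for "Gross Income". A minimal sketch, with placeholder colors:

# Optional: restyle the original trace's direction-based defaults.
# fig.data[0] is the first (full) waterfall trace added above.
fig.data[0].update(
    decreasing={"marker": {"color": "indianred"}},  # placeholder color
    totals={"marker": {"color": "steelblue"}},      # placeholder color for the EBITDA totals
)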

How do I iterate through a list of dictionaries?

I need to take an inputted time, for example "12:20", and print a 5x3 ASCII clock representation of it. But I don't know how to iterate through a list of dictionaries, which I think is the simplest way to solve this problem.
time = input("enter a time HH:MM")
my_list = [
    {"0": "000", "1": " 1 ","2":"222","3":"333","4":"44","5":"555","6":"666","7":"777","8":"888","9":"999"},
    {"0": "000", "1": "11 ", "2": " 2", "3":" 3","4":"4 4","5":"5 ","6":"6 ","7":" 7","8":"8 8","9":"9 9"},
    {"0": "000", "1": " 1 ", "2": "222", "3":"333","4":"444","5":"555","6":"666","7":" 7","8":"888","9":"999"},
    {"0": "000", "1": " 1 ", "2": "2 ", "3":" 3","4":" 4","5":" 5","6":"6 6","7":" 7","8":"8 8","9":" 9"},
    {"0": "000", "1": "111", "2": "222", "3":"333","4":" 4","5":"555","6":"666","7":" 7","8":"888","9":" 9"}
]
for i in my_list:
    for l in my_list.keys():
        if l == time[i]:
            print(my_list[i][l])
I tried making a list of dictionaries with two for loops: one for iterating through the list and one for iterating through each dictionary. If the input is 12:20, I need to print a 5x3 rendering of 12:20 like so:
 1  222   222 000
11    2 :   2 0 0
 1  222   222 0 0
 1  2   : 2   0 0
111 222   222 000
You were almost there; you just overlooked a few fundamentals: you have to build the entire line before you print it, names matter, and you can't print a colon if you don't include one in your segments.
import re

time_valid = re.compile(r'^\d{2}:\d{2}$')
while not time_valid.match((time := input("enter a time HH:MM: "))):
    # keep asking this question til the user gets it right
    pass

segments = [
    {"0":"000", "1":" 1 ", "2":"222", "3":"333", "4":"4 4", "5":"555", "6":"666", "7":"777", "8":"888", "9":"999", ":":" "},
    {"0":"0 0", "1":"11 ", "2":" 2", "3":" 3", "4":"4 4", "5":"5 ", "6":"6 ", "7":" 7", "8":"8 8", "9":"9 9", ":":":"},
    {"0":"0 0", "1":" 1 ", "2":"222", "3":"333", "4":"444", "5":"555", "6":"666", "7":" 7", "8":"888", "9":"999", ":":" "},
    {"0":"0 0", "1":" 1 ", "2":"2 ", "3":" 3", "4":" 4", "5":" 5", "6":"6 6", "7":" 7", "8":"8 8", "9":" 9", ":":":"},
    {"0":"000", "1":"111", "2":"222", "3":"333", "4":" 4", "5":"555", "6":"666", "7":" 7", "8":"888", "9":" 9", ":":" "}
]

for segment in segments:
    line = ''
    for c in time:  # gather the entire line before printing
        line = f'{line} {segment[c]}'
    print(line)
With very little work this can be made into a console clock.
import threading
from datetime import datetime
from os import system, name

# repeating timer
class Poll(threading.Timer):
    def run(self):
        while not self.finished.wait(self.interval):
            self.function(*self.args, **self.kwargs)

segments = [
    {"0":"000", "1":" 1 ", "2":"222", "3":"333", "4":"4 4", "5":"555", "6":"666", "7":"777", "8":"888", "9":"999", ":":" "},
    {"0":"0 0", "1":"11 ", "2":" 2", "3":" 3", "4":"4 4", "5":"5 ", "6":"6 ", "7":" 7", "8":"8 8", "9":"9 9", ":":":"},
    {"0":"0 0", "1":" 1 ", "2":"222", "3":"333", "4":"444", "5":"555", "6":"666", "7":" 7", "8":"888", "9":"999", ":":" "},
    {"0":"0 0", "1":" 1 ", "2":"2 ", "3":" 3", "4":" 4", "5":" 5", "6":"6 6", "7":" 7", "8":"8 8", "9":" 9", ":":":"},
    {"0":"000", "1":"111", "2":"222", "3":"333", "4":" 4", "5":"555", "6":"666", "7":" 7", "8":"888", "9":" 9", ":":" "}
]

def display():
    # get time
    time = datetime.now().strftime("%H:%M:%S")
    # clear console
    system(('clear', 'cls')[name == 'nt'])
    # draw console
    for segment in segments:
        line = ''
        for c in time:
            # illustrates a simple method to replace graphics
            line = f'{line} {segment[c].replace(c, chr(9608))}'
        print(line)

# start clock
Poll(.1, display).start()
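One note on the design: since Poll subclasses threading.Timer, the inherited cancel() method sets the finished event the run() loop waits on, so the clock can be stopped cleanly if you keep a reference to it. A small sketch:

# keep a reference so the clock can be stopped later
clock = Poll(.1, display)
clock.start()

# ... later, e.g. from another thread or a shutdown hook:
clock.cancel()  # sets the "finished" event; the run() loop exits on its next wait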

how to convert json response to excel using python

This is the response I am getting:
{
"value": [
{
"id": "/providers/Microsoft.Billing/Departments/1234/providers/Microsoft.Billing/billingPeriods/201903/providers/Microsoft.Consumption/usageDetails/usageDetails_Id1",
"name": "usageDetails_Id1",
"type": "Microsoft.Consumption/usageDetails",
"kind": "legacy",
"tags": {
"env": "newcrp",
"dev": "tools"
},
"properties": {
"billingAccountId": "xxxxxxxx",
"billingAccountName": "Account Name 1",
"billingPeriodStartDate": "2019-03-01T00:00:00.0000000Z",
"billingPeriodEndDate": "2019-03-31T00:00:00.0000000Z",
"billingProfileId": "xxxxxxxx",
"billingProfileName": "Account Name 1",
"accountName": "Account Name 1",
"subscriptionId": "00000000-0000-0000-0000-000000000000",
"subscriptionName": "Subscription Name 1",
"date": "2019-03-30T00:00:00.0000000Z",
"product": "Product Name 1",
"partNumber": "Part Number 1",
"meterId": "00000000-0000-0000-0000-000000000000",
"meterDetails": null,
"quantity": 0.7329,
"effectivePrice": 0.000402776395232,
"cost": 0.000295194820065,
"unitPrice": 4.38,
"billingCurrency": "CAD",
"resourceLocation": "USEast",
"consumedService": "Microsoft.Storage",
"resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/Resource Group 1/providers/Microsoft.Storage/storageAccounts/Resource Name 1",
"resourceName": "Resource Name 1",
"invoiceSection": "Invoice Section 1",
"costCenter": "DEV",
"resourceGroup": "Resource Group 1",
"offerId": "Offer Id 1",
"isAzureCreditEligible": false,
"chargeType": "Usage",
"benefitId": "00000000-0000-0000-0000-000000000000",
"benefitName": "Reservation_purchase_03-09-2018_10-59"
}
},
{
"id": "/providers/Microsoft.Billing/Departments/1234/providers/Microsoft.Billing/billingPeriods/201903/providers/Microsoft.Consumption/usageDetails/usageDetails_Id1",
"name": "usageDetails_Id1",
"type": "Microsoft.Consumption/usageDetails",
"kind": "legacy",
"tags": {
"env": "newcrp",
"dev": "tools"
},
"properties": {
"billingAccountId": "xxxxxxxx",
"billingAccountName": "Account Name 1",
"billingPeriodStartDate": "2019-03-01T00:00:00.0000000Z",
"billingPeriodEndDate": "2019-03-31T00:00:00.0000000Z",
"billingProfileId": "xxxxxxxx",
"billingProfileName": "Account Name 1",
"accountName": "Account Name 1",
"subscriptionId": "00000000-0000-0000-0000-000000000000",
"subscriptionName": "Subscription Name 1",
"date": "2019-03-30T00:00:00.0000000Z",
"product": "Product Name 1",
"partNumber": "Part Number 1",
"meterId": "00000000-0000-0000-0000-000000000000",
"meterDetails": null,
"quantity": 0.7329,
"effectivePrice": 0.000402776395232,
"cost": 0.000295194820065,
"unitPrice": 4.38,
"billingCurrency": "CAD",
"resourceLocation": "USEast",
"consumedService": "Microsoft.Storage",
"resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/Resource Group 1/providers/Microsoft.Storage/storageAccounts/Resource Name 1",
"resourceName": "Resource Name 1",
"invoiceSection": "Invoice Section 1",
"costCenter": "DEV",
"resourceGroup": "Resource Group 1",
"offerId": "Offer Id 1",
"isAzureCreditEligible": false,
"chargeType": "Usage",
"benefitId": "00000000-0000-0000-0000-000000000000",
"benefitName": "Reservation_purchase_03-09-2018_10-59"
}
}
]
}
code:
import pandas as pd

frame = pd.DataFrame()
for i in range(len(json_output['value'])):
    df1 = pd.DataFrame(data={'kind': json_output['value'][i]['kind'],
                             'id': json_output['value'][i]['id'],
                             'tags': json_output['value'][i]['tags'],
                             'name': json_output['value'][i]['name'],
                             'type': json_output['value'][i]['type'],
                             'billingAccountid': json_output['value'][i]['properties']['billingAccountId']},
                       index=[i])
    print(df1)
    frame = frame.append(df1)
frame.to_csv('datt.csv')
Can you please help me convert this data into CSV?
I am looking for id, name, type, kind, tags, billingAccountId, resourceName, etc. as columns.
I tried to convert it into a DataFrame, but it didn't work.
Finally I tried the Python code above, but it puts null into the tags column.
Note: I want to keep tags in dict format (for now).
I tried your code after first loading the JSON into json_output.
The issue is that tags is a dictionary: you pass it without selecting any key, so pandas cannot align it with the row index and it ends up as None/NaN.
If you are not comfortable splitting tags into separate columns, you can concatenate its values instead:
'tags':json_output['value'][i]['tags']['env']+json_output['value'][i]['tags']['dev']
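For what it's worth, pandas.json_normalize can do most of this flattening while leaving the tags dict intact. A minimal sketch, assuming the response shown above has already been parsed into json_output (the to_excel line needs openpyxl installed):

import pandas as pd

# json_output is assumed to be the parsed API response shown above
df = pd.json_normalize(json_output["value"])

# json_normalize expands nested objects into dotted column names, e.g.
# "properties.billingAccountId"; strip the prefix so the columns read cleanly
df.columns = [c.replace("properties.", "") for c in df.columns]

# put tags back as one dict per row (json_normalize split it into
# "tags.env" / "tags.dev"), then drop the split columns
df["tags"] = [item.get("tags") for item in json_output["value"]]
df = df.drop(columns=[c for c in df.columns if c.startswith("tags.")])

df.to_csv("datt.csv", index=False)
# or, for an actual Excel file:
# df.to_excel("datt.xlsx", index=False)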

Convert JSON table to JSON tree

I have the results of an SQL query in JSON format
value = [
{"Machine": "Mach 1", "Device": "Dev a", "Identifier": "HMI 1"},
{"Machine": "Mach 1", "Device": "Dev a", "Identifier": "HMI 2"},
{"Machine": "Mach 1", "Device": "Dev b", "Identifier": "HMI 3"},
{"Machine": "Mach 1", "Device": "Dev c", "Identifier": "HMI 5"},
{"Machine": "Mach 2", "Device": "Dev c", "Identifier": "HMI 6"},
{"Machine": "Mach 2", "Device": "Dev d", "Identifier": "HMI 7"},
{"Machine": "Mach 3", "Device": "Dev e", "Identifier": "HMI 8"}
]
I'm trying to generate a tree of the form:
Tree to be generated
[ ]- Mach 1
 +[ ]- Dev a
 |  +-- HMI 1
 |  +-- HMI 2
 +[ ]- Dev b
 |  +-- HMI 3
 +[ ]- Dev c
    +-- HMI 5
[ ]- Mach 2
 +[ ]- Dev c
 |  +-- HMI 6
 +[ ]- Dev d
    +-- HMI 7
[ ]- Mach 3
 +[ ]- Dev e
    +-- HMI 8
The output of the function is to be used by Inductive Automation's Perspective Tree component which expects it in the format:
items = [
{
"label": "Mach 1",
"expanded": true,
"data": "",
"items": [
{
"label": "Dev a",
"expanded": true,
"data": "",
"items": [
{
"label": "HMI 1",
"expanded": true,
"data": {
"Identifier": "HMI1",
"Device": "Dev a",
"Machine": "Mach 1"
},
"items": []
},
{
"label": "HMI 2",
"expanded": true,
"data": {
"Identifier": "HMI2",
"Device": "Dev a",
"Machine": "Mach 1"
},
"items": []
}
]
},
{
"label": "Dev b",
"expanded": true,
"data": "",
"items": [
{
"label": "HMI 3",
"expanded": true,
"data": {
"Identifier": "HMI3",
"Device": "Dev b",
"Machine": "Mach 1"
},
"items": []
}
]
}
]
},
…
I have created some linear Python code for a tree depth of three, but I'd like to modify it to work automatically with tree depths from 1 to 6 (or so) returned by the SQL query. (The sample input and output above are three levels deep.) Unfortunately I can't figure out how to rework this with recursion for a variable number of columns.
Figure 1. The results of my lazy code (available on request).
Can anyone suggest an approach using Python, the scripting language of the Ignition application I'm using?
Many thanks.
You would need to provide the order in which the keys should be used to drill down in the hierarchy. This is good practice, as the order of the keys in a dictionary might not represent the desired order.
Once you have these keys as a list, you could use it to iteratively dig deeper into the hierarchy.
def makeForest(values, levels):
    items = []                  # The top level result array
    paths = {}                  # Objects keyed by path
    root = {"items": items}     # Dummy: super root of the forest
    for data in values:
        parent = root
        path = ""
        for key in levels:
            label = data[key]
            path += repr([label])
            node = paths.get(path, None)
            if not node:
                node = {
                    "label": data[key],
                    "expanded": True,
                    "data": "",
                    "items": []
                }
                paths[path] = node
                parent["items"].append(node)
            parent = node
        parent["data"] = data
    return items
# Example use:
value = [{"Machine": "Mach 1", "Device": "Dev a", "Identifier": "HMI 1"},{"Machine": "Mach 1", "Device": "Dev a", "Identifier": "HMI 2"},{"Machine": "Mach 1", "Device": "Dev b", "Identifier": "HMI 3"},{"Machine": "Mach 1", "Device": "Dev c", "Identifier": "HMI 5"},{"Machine": "Mach 2", "Device": "Dev c", "Identifier": "HMI 6"},{"Machine": "Mach 2", "Device": "Dev d", "Identifier": "HMI 7"},{"Machine": "Mach 3", "Device": "Dev e", "Identifier": "HMI 8"}]
forest = makeForest(value, ["Machine", "Device", "Identifier"])
print(forest)
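If the Perspective binding expects a JSON string rather than a Python list, the result can be serialized with the standard json module (worth checking how your particular binding or script transform passes data):

import json

# pretty-print the generated tree to confirm it matches the expected "items" shape
print(json.dumps(forest, indent=2))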

DataFrames with a strange structure, with variables in even columns

I'm a beginner with Python in combination with pandas, and I understand the basics.
But a couple of days ago I received 3 strange datasets in Excel.
They are reproduced in the DataFrame below (originally shown as an image):
import pandas as pd
dfinput = pd.DataFrame([
["uuid", "79876081-099b-474f-9e8f-ff917fd7394c", "uuid", "a96bc7cb-02b1-4d13-823a-908531cda095", "uuid",
"38bc7d20-10be-4774-973c-b3b00234a645", "uuid", "e7b12da6-a47f-4c24-8545-faa24e249a03", "uuid", "6b2c9426-bd6f-4bda-9c53-a86200e051f8"],
["variable 1", "value", "variable 1", "value", "variable 1",
"value", "variable 1", "value", "variable 1", "value"],
["variable 2", "value", "variable 2", "value", "variable 2",
"value", "variable 2", "value", "variable 2", "value"],
["variable 3", "value", "variable 3", "value", "variable 3",
"value", "variable 3", "value", "variable 3", "value"],
["variable 4", "value", "variable 4", "value", "variable 4",
"value", "variable 4", "value", "variable 4", "value"],
["variable 5", "value", "variable 5", "value", "variable 5",
"value", "variable 5", "value", "variable 5", "value"],
["variable 6", "value", "variable 6", "value", "variable 6",
"value", "variable 6", "value", "variable 6", "value"],
["variable 7", "value", "variable 7", "value", "variable 7",
"value", "variable 7", "value", "variable 7", "value"],
["variable 8", "value", "variable 8", "value", "variable 8",
"value", "variable 8", "value", "variable 8", "value"],
["variable 9", "value", "variable 9", "value", "variable 9",
"value", "variable 9", "value", "variable 9", "value"],
["variable 10", "value", "variable 10", "value", "variable 10",
"value", "variable 10", "value", "variable 10", "value"],
["variable A", "value", "variable B", "value", "variable A",
"value", "variable A", "value", "variable A", "value"],
["variable B", "value", "variable C", "value", "variable C",
"value", "variable B", "value", "variable B", "value"],
["variable C", "value", "variable D", "value", "variable D",
"value", "variable D", "value", "variable C", "value"],
["variable D", "value", "Variable E", "value", "Variable E",
"value", "Variable F", "value", "Variable E", "value"],
["Variable E", "value", "Variable F", "value", "Variable H",
"value", "Variable G", "value", "Variable F", "value"],
["Variable F", "value", "Variable H", "value", "",
"", "Variable H", "value", "Variable G", "value"],
["Variable G", "value", "", "", "", "", "", "", "Variable H", "value"]
])
I want the following result:
dfoutput = pd.DataFrame([["value", "value", "value", "value", "value", "value", "value", "value", "value", "value", "value", "value", "value", "value", "value", "value", "value", "null"],
["value", "value", "value", "value", "value", "value", "value", "value", "value",
"value", "null", "value", "value", "value", "value", "value", "null", "value"],
["value", "value", "value", "value", "value", "value", "value", "value", "value",
"value", "value", "null", "value", "value", "value", "null", "null", "value"],
["value", "value", "value", "value", "value", "value", "value", "value", "value",
"value", "value", "value", "null", "value", "null", "value", "value", "value"],
["value", "value", "value", "value", "value", "value", "value", "value", "value", "value", "value", "value", "value", "null", "value", "value", "value", "value"]],
index=['79876081-099b-474f-9e8f-ff917fd7394c', 'a96bc7cb-02b1-4d13-823a-908531cda095',
'38bc7d20-10be-4774-973c-b3b00234a645', 'e7b12da6-a47f-4c24-8545-faa24e249a03', '6b2c9426-bd6f-4bda-9c53-a86200e051f8'],
columns=['variable 1', 'variable 2', 'variable 3', 'variable 4', 'variable 5', 'variable 6', 'variable 7', 'variable 8', 'variable 9', 'variable 10', 'variable A', 'variable B', 'variable C', 'variable D', 'Variable E', 'Variable F', 'Variable G', 'Variable H'])
I did try to loop over the columns and create a new DataFrame, but I got stuck and think I'm making it unnecessarily complex.
I can't get my head around it. Has someone dealt with this before and can give me a useful direction to go in?
You can restructure your data into the desired outcome with a rather simple manipulation. Note that I am using the DataFrame (dfinput) you posted:
# Change first row to headers and Transpose
headers = dfinput.iloc[0]
one = (pd.DataFrame(dfinput.values[1:], columns=headers)).T
# Change first row to headers again
one.columns = one.iloc[0]
# Keep only odd indexed rows
res = one.iloc[1::2, :]
res
uuid variable 1 variable 2 variable 3 variable 4 variable 5 variable 6 variable 7 variable 8 variable 9 variable 10 variable A variable B variable C variable D Variable E Variable F Variable G
79876081-099b-474f-9e8f-ff917fd7394c value value value value value value value value value value value value value value value value value
a96bc7cb-02b1-4d13-823a-908531cda095 value value value value value value value value value value value value value value value value
38bc7d20-10be-4774-973c-b3b00234a645 value value value value value value value value value value value value value value value
e7b12da6-a47f-4c24-8545-faa24e249a03 value value value value value value value value value value value value value value value value
6b2c9426-bd6f-4bda-9c53-a86200e051f8 value value value value value value value value value value value value value value value value value
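One caveat with the approach above: it takes the variable names from the first uuid column only, so when the names differ between columns (as with Variable H in the sample) the values are not aligned by name. If that alignment matters, a sketch that pairs each name/value column explicitly and lets pandas align on the variable names may be closer to the desired dfoutput (this assumes the dfinput structure shown above; column order may need a final reindex):

import pandas as pd

rows = {}
# Columns come in pairs: even-numbered columns hold the uuid (first row)
# followed by the variable names, odd-numbered columns hold the values.
for name_col, value_col in zip(dfinput.columns[0::2], dfinput.columns[1::2]):
    uuid = dfinput.iloc[0][name_col]
    names = dfinput.iloc[1:][name_col]
    values = dfinput.iloc[1:][value_col]
    mask = names != ""  # drop the trailing empty cells of shorter columns
    rows[uuid] = pd.Series(values[mask].values, index=names[mask].values)

# uuids become the index; the union of all variable names becomes the columns,
# with NaN wherever a uuid has no entry for that variable
res = pd.DataFrame(rows).T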

normalize mixed list of dicts/lists in python

I have a big dataset of addresses in the following mixed formats:
1) Simple straight variant of houses and flats:
"Big District, Main Street, House 1, flat 1",
"Big District, Main Street, House 1, flat 2"
district
  street
    house
      flat
      flat
2) Complex variant of houses, flats and buildings:
"Big District, Main Street, House 1, flat 1"
"Big District, Main Street, House 1, flat 2"
"Big District, Main Street, House 1, Building 1, flat 1"
"Big District, Main Street, House 1, Building 1, flat 2"
(So there is House 1 with flats, and also House 1, Building 1 with its own flats.)
district
  street
    house
      flat
      flat
      building
        flat
        flat
3) Variant of a house that only has buildings:
"Big District, Main Street, House 1, Building 1, flat 1"
"Big District, Main Street, House 1, Building 1, flat 2"
"Big District, Main Street, House 1, Building 2, flat 1"
"Big District, Main Street, House 1, Building 2, flat 2"
(There is no House 1 without buildings in this case, only House 1, Building 1 and House 1, Building 2.)
district
  street
    house
      building
        flat
        flat
      building
        flat
        flat
Data is structured as follows:
[
    {"text": "street 1",
     "level": 7,
     "children": [
         {"text": "house 1",
          "level": 8,
          "children": [
              {"text": "flat 1", "level": 11},
              {"text": "flat 2", "level": 11}
          ]
         },
         {"text": "house 2",
          "level": 8,
          "children": [
              {"text": "building 1",
               "level": 9,
               "children": [
                   {"text": "flat 1", "level": 11}
               ]
              },
              {"text": "flat 1", "level": 11}
          ]
         },
         {"text": "house 3",
          "children": []
         }
     ]
    }
]
What I need is a list of dicts:
[
{"level 7": "Street 1", "level 8": "house 1", "level 9": NaN, "level 11":"flat 1"},
{"level 7": "Street 1", "level 8": "house 1", "level 9": NaN, "level 11":"flat 2"},
{"level 7": "Street 1", "level 8": "house 2", "level 9": "building 1", "level 11":"flat 1"},
{"level 7": "Street 1", "level 8": "house 2", "level 9": NaN, "level 11":"flat 1"},
{"level 7": "Street 1", "level 8": "house 3", "level 9": NaN, "level 11":NaN}
]
And I'm really stuck on how to write this algorithm.
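For what it's worth, a recursive depth-first walk can produce that list: carry the levels collected so far down the tree and emit one row per leaf, padding any levels that never appeared on the path. A minimal sketch, assuming the structure shown above (it fills missing levels with None rather than NaN, and I added the missing "level": 8 to "house 3" so its label shows up):

data = [
    {"text": "street 1", "level": 7, "children": [
        {"text": "house 1", "level": 8, "children": [
            {"text": "flat 1", "level": 11},
            {"text": "flat 2", "level": 11},
        ]},
        {"text": "house 2", "level": 8, "children": [
            {"text": "building 1", "level": 9, "children": [
                {"text": "flat 1", "level": 11},
            ]},
            {"text": "flat 1", "level": 11},
        ]},
        {"text": "house 3", "level": 8, "children": []},  # "level" added here for illustration
    ]},
]

LEVELS = [7, 8, 9, 11]  # the level numbers that should appear as keys in every row

def flatten(nodes, path=None):
    """Depth-first walk: emit one row per leaf, carrying the ancestors' levels along."""
    rows = []
    for node in nodes:
        current = dict(path or {})
        if "level" in node:
            current[f"level {node['level']}"] = node["text"]
        children = node.get("children") or []
        if children:
            rows.extend(flatten(children, current))
        else:
            # leaf: fill in the levels that never occurred on this path with None
            rows.append({f"level {lvl}": current.get(f"level {lvl}") for lvl in LEVELS})
    return rows

for row in flatten(data):
    print(row)
# e.g. {'level 7': 'street 1', 'level 8': 'house 2', 'level 9': None, 'level 11': 'flat 1'}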
