I am working on my first project and I am using Pygal to visualize some data from a database.
I am using the latest version of Python (3.6.5), Flask and Pygal; my IDE is PyCharm.
The project is a simple budget application in which one enters the planned budget for a monthly expenditure and then the actual amounts for that expense/item (e.g. monthly car expenses, like gas).
I use Pygal to show 2 bar charts. The first one (which works as intended) shows total planned amounts vs. total actual amounts:
The second chart shows the planned vs actual per expense/item (e.g. monthly car expenses, like gas)
The issue I am facing is that the chart mixes up the labels and amounts. They show up in the chart based on the order they are entered, not on the item type.
For example:
In the above image, there are 3 items: Masina (car), Salariu (salary) and Economii (savings).
The amount represented by the blue column (the actual amount) should show up under the label Economii, not under Masina, and it should not matter that I entered it as the first actual in the database.
Also, adding more actual amounts for the same expense item (Economii in our case) to the database simply adds more columns instead of summing them up into the same column:
This is the database query function I am using:
def GraphData():
    BASE_DIR = os.path.dirname(os.path.abspath(__file__))
    db_path = os.path.join(BASE_DIR, 'budget_db.db')
    with sqlite3.connect(db_path) as db:
        c = db.cursor()
        d = db.cursor()
        c.execute('SELECT title, category, name, planned_amount_month FROM Post')
        d.execute('SELECT title_actual, category_actual, actual_amount_name, actual_amount FROM ActualPost')
        data_planned = c.fetchall()
        data_actual = d.fetchall()
    return data_planned, data_actual
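Since per-item totals are ultimately what the second chart needs, one option is to let SQLite do the summing with GROUP BY. This is only a sketch reusing the table and column names from the queries above; the demo database at the bottom is invented to show the shape of the result:

```python
import os
import sqlite3
import tempfile

# Hedged sketch: same table/column names as GraphData() above, but the
# per-item totals are computed by SQLite itself via GROUP BY.
def graph_data_grouped(db_path):
    with sqlite3.connect(db_path) as db:
        c = db.cursor()
        c.execute('SELECT name, SUM(planned_amount_month) FROM Post GROUP BY name')
        planned_totals = dict(c.fetchall())
        c.execute('SELECT actual_amount_name, SUM(actual_amount) '
                  'FROM ActualPost GROUP BY actual_amount_name')
        actual_totals = dict(c.fetchall())
    return planned_totals, actual_totals

# Tiny demo database (invented rows) exercising the function:
demo_path = os.path.join(tempfile.mkdtemp(), 'budget_db.db')
with sqlite3.connect(demo_path) as db:
    db.execute('CREATE TABLE Post (title, category, name, planned_amount_month)')
    db.execute('CREATE TABLE ActualPost (title_actual, category_actual, '
               'actual_amount_name, actual_amount)')
    db.executemany('INSERT INTO Post VALUES (?, ?, ?, ?)',
                   [('t', 'c', 'Economii', 100.0), ('t', 'c', 'Economii', 50.0),
                    ('t', 'c', 'Masina', 200.0)])
    db.execute('INSERT INTO ActualPost VALUES (?, ?, ?, ?)',
               ('t', 'c', 'Economii', 30.0))

planned, actual = graph_data_grouped(demo_path)
# planned -> {'Economii': 150.0, 'Masina': 200.0}; actual -> {'Economii': 30.0}
```

The dicts this returns can be passed straight to `graph_all.add()`, since Pygal accepts dict values.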
Below is the route I have created for both of the charts. The labels are pulled from the title_planned list and I have a feeling that the issue I am facing is because I am creating lists and I am appending them.
I think I should create dictionaries, but I have no idea how to do that without messing everything else up:
@posts.route("/home")
def graphing():
    data_planned, data_actual = GraphData()
    title_planned = []
    value_planned = []
    title_actual = []
    value_actual = []
    for planned_row in data_planned:
        title_planned.append(planned_row[2])
        value_planned.append(planned_row[3])
    for actual_row in data_actual:
        title_actual.append(actual_row[2])
        value_actual.append(actual_row[3])
    graph = pygal.Bar(title=u'Total Planned Values vs Total Actual Values')
    graph.add('Planned', [{'value': sum(value_planned), 'label': 'Total for Planned Budget:'}])
    graph.add('Actual', [{'value': sum(value_actual), 'label': 'Total for Actual Amounts:'}])
    graph_data = graph.render_data_uri()
    graph_all = pygal.Bar(title=u'Planned Budget per item vs Actual Amounts per item')
    graph_all.x_labels = title_planned
    graph_all.add('Planned', value_planned)
    graph_all.add('Actual', value_actual)
    graph_all_data = graph_all.render_data_uri()
    return render_template('home.html', graph_data=graph_data, graph_all_data=graph_all_data)
Edit:
I have been trying to do it using 2 dictionaries in the route with the expense item as dict key (title_planned_sum / title_actual_sum) and the amount as dict value (value_planned_sum / value_actual_sum):
tv_planned = dict(zip(title_planned, value_planned))
tv_planned_sum = {title_planned_sum: sum(value_planned_sum) for title_planned_sum, value_planned_sum in tv_planned.items()}
tv_actual = dict(zip(title_actual, value_actual))
tv_actual_sum = {title_actual_sum: sum(value_actual_sum) for title_actual_sum, value_actual_sum in tv_actual.items()}
Here is the full route:
@posts.route("/home")
def graphing():
    data_planned, data_actual = GraphData()
    title_planned = []
    value_planned = []
    title_actual = []
    value_actual = []
    for planned_row in data_planned:
        title_planned.append(planned_row[2])
        value_planned.append(planned_row[3])
    for actual_row in data_actual:
        title_actual.append(actual_row[2])
        value_actual.append(actual_row[3])
    tv_planned = dict(zip(title_planned, value_planned))
    tv_planned_sum = {title_planned_sum: sum(value_planned_sum) for title_planned_sum, value_planned_sum in tv_planned.items()}
    tv_actual = dict(zip(title_actual, value_actual))
    tv_actual_sum = {title_actual_sum: sum(value_actual_sum) for title_actual_sum, value_actual_sum in tv_actual.items()}
    graph = pygal.Bar(title=u'Total Planned Values vs Total Actual Values')
    graph.add('Planned', [{'value': sum(value_planned), 'label': 'Total for Planned Budget:'}])
    graph.add('Actual', [{'value': sum(value_actual), 'label': 'Total for Actual Amounts:'}])
    graph_data = graph.render_data_uri()
    graph_all = pygal.Bar(title=u'Planned Budget per item vs Actual Amounts per item')
    graph_all.x_labels = title_planned
    graph_all.add('Planned', tv_planned_sum)
    graph_all.add('Actual', tv_actual_sum)
    graph_all_data = graph_all.render_data_uri()
    return render_template('home.html', graph_data=graph_data, graph_all_data=graph_all_data)
But of course, now I am getting this debug error:
TypeError: 'float' object is not iterable
This is because, in the two dictionaries I am working with, dict(zip(...)) keeps only a single float per key, so sum(value_planned_sum) tries to iterate over a float.
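One way around this (a sketch with invented item names, not the exact data from my database) is to accumulate into a collections.defaultdict instead of zipping names and values into a plain dict, which silently drops the duplicate keys:

```python
from collections import defaultdict

# (name, amount) pairs, as pulled from the database rows; names invented here
rows = [('Economii', 100.0), ('Masina', 200.0), ('Economii', 50.0)]

totals = defaultdict(float)        # missing keys start at 0.0
for name, amount in rows:
    totals[name] += amount         # accumulate instead of overwrite

# dict(totals) -> {'Economii': 150.0, 'Masina': 200.0}
```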
I got it working!
In the end I realized I was over-complicating my question, so I pulled out a pen and paper and did some pseudo-code to find the first step in my code where I could comfortably calculate the float sums that were causing me headaches.
Step 1: In my route, I could create a list containing nested tuples (with 2 elements each: str and float)
Step 2: Now, if some of the tuples had the same elements on index [0], how do I sum the float elements on index [1]?
So I asked this question on reddit.com/r/learnpython, and the user diesch gave me an idea that I could use successfully: import the itertools package and use its groupby().
Here is how my route code looks now:
@posts.route("/home")
def graphing():
    data_planned, data_actual = GraphData()
    title_planned = []
    value_planned = []
    title_actual = []
    value_actual = []
    planned = []
    actual = []
    for planned_row in data_planned:
        title_planned.append(planned_row[2])
        value_planned.append(planned_row[3])
    planned_list = zip(title_planned, value_planned)
    for key, group in itertools.groupby(sorted(planned_list), lambda x: x[0]):
        asum = 0
        for i in group:
            asum += i[1]
        planned.append((key, asum))
    planned_dict = dict(planned)
    for actual_row in data_actual:
        title_actual.append(actual_row[2])
        value_actual.append(actual_row[3])
    actual_list = zip(title_actual, value_actual)
    for key, group in itertools.groupby(sorted(actual_list), lambda x: x[0]):
        asum = 0
        for i in group:
            asum += i[1]
        actual.append((key, asum))
    actual_dict = dict(actual)
    graph = pygal.Bar(title=u'Total Planned Values vs Total Actual Values')
    graph.add('Planned', [{'value': sum(value_planned), 'label': 'Total for Planned Budget:'}])
    graph.add('Actual', [{'value': sum(value_actual), 'label': 'Total for Actual Amounts:'}])
    graph_data = graph.render_data_uri()
    graph_all = pygal.Bar(title=u'Planned Budget per item vs Actual Amounts per item')
    graph_all.x_labels = title_planned
    graph_all.add('Planned', planned_dict)
    graph_all.add('Actual', actual_dict)
    graph_all_data = graph_all.render_data_uri()
    return render_template('home.html', graph_data=graph_data, graph_all_data=graph_all_data)
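Since the planned and actual aggregations are identical, the two groupby loops could be factored into one helper. A sketch of that pattern in isolation (the sample pairs are invented):

```python
import itertools

def sum_by_key(pairs):
    """Group (name, amount) pairs by name and sum the amounts per name."""
    result = {}
    # groupby only groups adjacent equal keys, hence the sorted() first
    for key, group in itertools.groupby(sorted(pairs), lambda x: x[0]):
        result[key] = sum(amount for _, amount in group)
    return result

# sum_by_key([('Economii', 100.0), ('Masina', 200.0), ('Economii', 50.0)])
# -> {'Economii': 150.0, 'Masina': 200.0}
```

In the route this would reduce each aggregation to `planned_dict = sum_by_key(zip(title_planned, value_planned))`.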
I am looking to calibrate the Heston model daily using scipy.optimize.minimize() over a period of time.
Some basic background information: I have collected information on 250,000 option trades over almost 4 years (so approx. 150 trades a day) and am looking to calibrate the Heston model daily using the option information for that specific day. I am, however, quite new to nonlinear optimization, and even more so to scipy.optimize.minimize().
So far I have defined three functions:
the Heston model function itself, which returns a dictionary with each day as key and the Heston-model-estimated option prices for each trade of that specific day as values;
the actual/observed option prices function, which returns a dictionary in the same format as the Heston model function;
the cost function, which combines the two dictionaries from the previous functions and returns a list with the sum of all squared differences for each specific date.
Now comes the problem: I tried to use scipy.optimize.minimize() with my cost function, but I feel I have not correctly specified some parts of my three functions for the minimizer to run. Running scipy.optimize.minimize() therefore, as expected, resulted in an error (picture below). It would be very much appreciated if somebody could give me some pointers on possible misspecifications in my code.
picture of the dataframe: https://i.stack.imgur.com/2kkpl.png
picture of a small sample run: https://i.stack.imgur.com/LbSL5.png
picture of error when using scipy.optimize.minimize() https://i.stack.imgur.com/UuLIw.png
The code:
import pandas as pd
import datetime as dt
import time
import numpy as np
import QuantLib as ql
import scipy
from scipy.optimize import minimize
def hestonmodel(kap, the, sig, rho, init_vol):
    dict1 = {}
    for i in range(10):
        if df_master.option_type[i] == "C":
            payoff = ql.PlainVanillaPayoff(ql.Option.Call, df_master.strike_price[i])
        else:
            payoff = ql.PlainVanillaPayoff(ql.Option.Put, df_master.strike_price[i])
        day_count = ql.Actual365Fixed()
        calender = ql.NullCalendar()
        experation_dates = ql.Date(df_master["maturity_date"][i], '%Y-%m-%d %H:%M:%S.%f')
        calculation_dates = ql.Date(df_master["date"][i], '%Y-%m-%d %H:%M:%S.%f')
        ql.Settings.instance().evaluationDate = calculation_dates
        exercise = ql.EuropeanExercise(experation_dates)
        option = ql.VanillaOption(payoff, exercise)
        spot_price = df_master.index_price[i]
        strike_price = df_master.strike_price[i]
        riskfree_rate = df_master.risk_free_rate[i]
        dividend = 0
        variance = init_vol**2
        initial_value = ql.QuoteHandle(ql.SimpleQuote(spot_price))
        # Setting up flat risk-free curves
        discount_curve = ql.YieldTermStructureHandle(ql.FlatForward(calculation_dates, riskfree_rate, day_count))
        dividend_yield = ql.YieldTermStructureHandle(ql.FlatForward(calculation_dates, dividend, day_count))
        heston_process = ql.HestonProcess(discount_curve, dividend_yield, initial_value, variance, kap, the, sig, rho)
        # Inputs used for the engine are model, tolerance level, maximum evaluations
        engine = ql.AnalyticHestonEngine(ql.HestonModel(heston_process), 0.001, 1000)
        option.setPricingEngine(engine)
        if i != 0:
            if df_master.day_index[i] == df_master.day_index[i-1]:
                dict1[df_master.day_index[i]].append(option.NPV())
            else:
                dict1[df_master.day_index[i]] = []
                dict1[df_master.day_index[i]].append(option.NPV())
        else:
            dict1[df_master.day_index[i]] = []
            dict1[df_master.day_index[i]].append(option.NPV())
    return dict1
def actualpricefunction():
    dict2 = {}
    for i in range(10):
        if i != 0:
            if df_master.day_index[i] == df_master.day_index[i-1]:
                dict2[df_master.day_index[i]].append(df_master.price[i])
            else:
                dict2[df_master.day_index[i]] = []
                dict2[df_master.day_index[i]].append(df_master.price[i])
        else:
            dict2[df_master.day_index[i]] = []
            dict2[df_master.day_index[i]].append(df_master.price[i])
    return dict2
def costfunction(kap, the, sig, rho, init_vol):
    dict1 = hestonmodel(kap, the, sig, rho, init_vol)
    dict2 = actualpricefunction()
    list1 = []
    for i in dict1.keys():
        list_temp1 = []
        d1 = dict1[i]
        d2 = dict2[i]
        for k in range(len(d1)):
            result = pow((d1[k] - d2[k]), 2)
            list_temp1.append(result)
        list_temp2 = sum(list_temp1)
        list1.append(list_temp2)
    return list1
The way I tried to run the scipy.optimize.minimize():
init_guess = (0.03,1,0.05,-0.6,0.03)
opt = si.optimize.minimize(costfunction(kap = 0.03,the = 1,sig = 0.05,rho=-0.6,init_vol=0.03), init_guess,method='Nelder-Mead', tol=1e-6)
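One thing to note: scipy.optimize.minimize expects a *callable* that takes a single parameter vector and returns one scalar, whereas the call above passes the already-computed list returned by costfunction(...). A hedged sketch of the wrapper pattern (the stub cost below is invented purely to make the shape concrete; the real one would be the question's costfunction):

```python
# Stub standing in for the real costfunction, which returns per-day sums
# of squared differences (a list, not a scalar).
def costfunction(kap, the, sig, rho, init_vol):
    return [(kap - 0.1) ** 2 + the ** 2,
            sig ** 2 + rho ** 2 + init_vol ** 2]

def objective(params):
    kap, the, sig, rho, init_vol = params      # unpack the parameter vector
    # collapse the per-day list into the single scalar minimize() needs
    return sum(costfunction(kap, the, sig, rho, init_vol))

# Then pass the function itself (not its result):
#   from scipy.optimize import minimize
#   opt = minimize(objective, x0=(0.03, 1, 0.05, -0.6, 0.03),
#                  method='Nelder-Mead', tol=1e-6)
```

The key difference is that `minimize` receives `objective`, which it can call repeatedly with different parameter vectors, rather than a one-off list of floats.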
I have created a nested list from a larger nested dictionary, and now want to convert that list into a data frame. The list I have created has no keys or values.
I have tried to convert the list into a dictionary using dict(), but this does not work.
The list is in this format (names and data changed for anonymity):
['Bigclient', ['All Web Site Data', '129374116'],
 'Otherclient', ['All Web Site Data', '164548948'], ['Filtered website data', '142386573'], ['Test', '72551604']]
So I have a parent value 'Bigclient' that has child lists, each containing the name of the data and an ID number corresponding to that name. Each parent value has a different number of child pairs. I want to make a data frame that has three columns, like so:
Client_name dataname ID
BigClient All Web 129374116
Other Client All web 164548948
Other Client Filtered 142386573
Other Client Test 72551604
So the client's name (parent value) is used to group the datanames and IDs.
new = []
for item in data['items']:
    name = item.get('name')
    if name:
        new.append(name)
    webprop = item.get('webProperties')
    if webprop:
        for profile in webprop:
            profile = profile.get('profiles')
            if profile:
                for idname in profile:
                    idname = idname.get('name')
                for idname1 in profile:
                    idname1 = idname1.get('id')
                if idname:
                    result = [idname, idname1]
                    new.append(result)
                else:
                    continue
            else:
                continue
This is how I've built my list up; however, it has no dictionaries.
Here you go:
import pandas as pd

raw_data = ['Bigclient', ['All Web Site Data', '129374116'], 'Otherclient', ['All Web Site Data', '164548948'], ['Filtered website data', '142386573'], ['Test', '72551604']]

# collect data
keys_list = []
values_list = [[] for _ in range(2)]
count = -1
for item in raw_data:
    if isinstance(item, str):
        keys_list.append(item)
        count += 1
    else:
        values_list[count].append(item)

# create data dictionary
data_dict = dict(zip(keys_list, values_list))

# create data frame
raw_df = pd.DataFrame(columns=['Client_name', 'data'])
for key, values in data_dict.items():
    for value in values:
        raw_df = raw_df.append({'Client_name': key, 'data': value}, ignore_index=True)

# split list data into 2 columns
split_data = pd.DataFrame(raw_df['data'].values.tolist(), columns=['dataname', 'ID'])

# concat data
result = pd.concat([raw_df, split_data], axis=1, sort=False)

# drop the helper column
result = result.drop(['data'], axis=1)
Output:
Client_name dataname ID
0 Bigclient All Web Site Data 129374116
1 Otherclient All Web Site Data 164548948
2 Otherclient Filtered website data 142386573
3 Otherclient Test 72551604
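An alternative sketch that skips the intermediate DataFrame and the split/concat steps entirely: walk the raw list once, remembering the current client, and build flat rows that `pd.DataFrame(rows, columns=['Client_name', 'dataname', 'ID'])` can consume directly:

```python
raw_data = ['Bigclient', ['All Web Site Data', '129374116'],
            'Otherclient', ['All Web Site Data', '164548948'],
            ['Filtered website data', '142386573'], ['Test', '72551604']]

rows = []
current = None
for item in raw_data:
    if isinstance(item, str):
        current = item                 # a bare string starts a new client
    else:
        dataname, id_ = item           # each child list is [dataname, id]
        rows.append((current, dataname, id_))

# rows -> [('Bigclient', 'All Web Site Data', '129374116'),
#          ('Otherclient', 'All Web Site Data', '164548948'),
#          ('Otherclient', 'Filtered website data', '142386573'),
#          ('Otherclient', 'Test', '72551604')]
# then:  pd.DataFrame(rows, columns=['Client_name', 'dataname', 'ID'])
```

This also avoids repeated `DataFrame.append` calls, which rebuild the frame on every iteration.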
Hello you Pythonic lovers.
I have run into quite an interesting little issue that I have not been able to resolve due to my inexperience. I am constructing a dictionary in Python based on a set of answers in a graph database, and I have run into an interesting dilemma. (I am running Python 3.)
When all is said and done, I receive the following example output in my Excel file (this is from column 0; every entry is a row):
ACTUAL EXCEL FORMAT:
0/{'RecordNo': 0}
1/{'Dept': 'DeptName'}
2/{'Option 1': 'Option1Value'}
3/{'Option 2': 'Option2Value'}
4/{'Question1': 'Answer1'}
5/{'Question2': 'Answer2'}
6/{'Question3': 'Answer3'}
etc..
Expected EXCEL format:
0/Dept, Option 1, Option 2, Question 1, Question 2, Question 3
1/DeptName, Option1Value, Option2Value, Answer1, Answer2, Answer3
The keys of the dictionary are supposed to be the headers, and the values the contents of every row, but for some reason it writes out both the key and the value together when I use the following output code:
EXCEL WRITER CODE:
ReportDF = pd.DataFrame.from_dict(DomainDict)
WriteMe = pd.ExcelWriter('Filname.xlsx')
ReportDF.to_excel(WriteMe, 'Sheet1')
try:
    WriteMe.save()
    print('Save completed')
except:
    print('Error in saving file')
To build the dictionary, I use the following code:
EDIT (Removed sub-addition of dictionary entries, as it is the same and will be streamlined into a function call once the primary works).
DICTIONARY PREP CODE:
for Dept in Depts:
    ABBR = Dept['dept.ABBR']
    # print('Department: ' + ABBR)
    Forests = getForestDomains(Quarter, ABBR)
    for Forest in Forests:
        DictEntryList = []
        DictEntryList.append({'RecordNo': DomainCount})
        DictEntryList.append({'Dept': ABBR})
        ForestName = Forest['d.DomainName']
        DictEntryList.append({'Forest ': ForestName})
        DictEntryList.append({'Domain': ''})
        AnswerEntryList = []
        QList = getApplicableQuestions(str(SA))
        for Question in QList:
            FAnswer = ''
            QDesc = Question['Question']
            AnswerResult = getAnswerOfQuestionForDomainForQuarter(QDesc, ForestName, Quarter)
            if AnswerResult:
                for A in AnswerResult:
                    if str(A['Answer']) != 'None':
                        if isinstance(A, numbers.Number):
                            FAnswer = str(int(A['Answer']))
                        else:
                            FAnswer = str(A['Answer'])
                    else:
                        FAnswer = 'Unknown'
            else:
                print('GOBBLEGOBBLE')
                FAnswer = 'Not recorded'
            AnswerEntryList.append({QDesc: FAnswer})
        for Entry in AnswerEntryList:
            DictEntryList.append(Entry)
        DomainDict[DomainCount] = DictEntryList
        DomainCount += 1
print('Ready to export')
print('Ready to export')
If anyone could assist me in getting my data to export into the proper format within excel, it would be greatly appreciated.
EDIT:
Print of the final dictionary to be exported to excel:
{0: [{'RecordNo': 0}, {'Dept': 'Clothing'}, {'Forest ': 'my.forest'}, {'Domain': 'my.domain'}, {'Question1': 'Answer1'}, {'Question2': 'Answer2'}, {'Question3': 'Answer3'}], 1: [{...}]}
The problem in writing to Excel is due to the fact that the values in the final dictionary are lists of dictionaries themselves, so it may be that you want to take a closer look at how you're building the dictionary. In its current format, passing the final dictionary to pd.DataFrame.from_dict results in a DataFrame that looks like this:
# 0
# 0 {u'RecordNo': 0}
# 1 {u'Dept': u'Clothing'}
# 2 {u'Forest ': u'my.forest'}
# 3 {u'Domain': u'my.domain'}
# 4 {u'Question1': u'Answer1'}
# 5 {u'Question2': u'Answer2'}
# 6 {u'Question3': u'Answer3'}
So each value in the DataFrame row is itself a dict. To fix this, you can flatten/merge the inner dictionaries in your final dict before passing it into a DataFrame:
modified_dict = {k: {key: val for d in v for key, val in d.items()} for k, v in final_dict.items()}
# {0: {'Domain': 'my.domain', 'RecordNo': 0, 'Dept': 'Clothing', 'Question1': 'Answer1', 'Question3': 'Answer3', 'Question2': 'Answer2', 'Forest ': 'my.forest'}}
Then, you can pass this dict into a Pandas object with the additional argument orient='index' (so that the DataFrame uses the keys of the inner dicts as columns) to get a DataFrame that looks like this:
ReportDF = pd.DataFrame.from_dict(modified_dict, orient='index')
# Domain RecordNo Dept Question1 Question3 Question2 Forest
# 0 my.domain 0 Clothing Answer1 Answer3 Answer2 my.forest
From there, you can write to Excel as you had indicated.
Edit: I can't test this without sample data, but from the look of it you can simplify your Dictionary Prep by building a dict instead of a list of dicts.
for Dept in Depts:
    ABBR = Dept['dept.ABBR']
    Forests = getForestDomains(Quarter, ABBR)
    for Forest in Forests:
        DictEntry = {}
        DictEntry['RecordNo'] = DomainCount
        DictEntry['Dept'] = ABBR
        ForestName = Forest['d.DomainName']
        DictEntry['Forest '] = ForestName
        DictEntry['Domain'] = ''
        QList = getApplicableQuestions(str(SA))
        for Question in QList:
            # save yourself a line of code and make 'Not recorded' the default value
            FAnswer = 'Not recorded'
            QDesc = Question['Question']
            AnswerResult = getAnswerOfQuestionForDomainForQuarter(QDesc, ForestName, Quarter)
            if AnswerResult:
                for A in AnswerResult:
                    # don't convert None to string and then test for inequality to 'None';
                    # if statements already evaluate None as False
                    if A['Answer']:
                        if isinstance(A, numbers.Number):
                            FAnswer = str(int(A['Answer']))
                        else:
                            FAnswer = str(A['Answer'])
                    else:
                        FAnswer = 'Unknown'
            else:
                print('GOBBLEGOBBLE')
            DictEntry[QDesc] = FAnswer
        DomainDict[DomainCount] = DictEntry
        DomainCount += 1
print('Ready to export')
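To make the flattening step concrete in isolation, here is a minimal sketch with one invented record in the question's shape (a list of single-key dicts), merged into the flat row dict that `pd.DataFrame.from_dict(..., orient='index')` expects:

```python
# One record as built in the question: a list of single-key dicts.
record = [{'RecordNo': 0}, {'Dept': 'Clothing'}, {'Question1': 'Answer1'}]

flat = {}
for d in record:
    flat.update(d)    # merge each single-key dict into one flat row dict

# flat -> {'RecordNo': 0, 'Dept': 'Clothing', 'Question1': 'Answer1'}
# rows = {0: flat, 1: ...}
# pd.DataFrame.from_dict(rows, orient='index').to_excel('Filname.xlsx')
```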
Prerequisites
Dataset I'm working with is MovieLens 100k
Python packages I'm using are Surprise, io and Pandas
The agenda is to test the recommendation system using KNN (+ k-fold) with the cosine and Pearson similarity measures, for both user-based CF and item-based CF
Briefing
So far, I have coded both UBCF & IBCF as below
Q1. IBCF generates data as per the input given to it; I need it to export a CSV file, since I need to find out the predicted values
Q2. UBCF needs each data point entered separately, and doesn't work even with the code immediately below:
csvfile = 'pred_matrix.csv'
with open(csvfile, "w") as output:
    writer = csv.writer(output, lineterminator='\n')
    # algo.predict(user_id, item_id, estimated_ratings)
    for val in algo.predict(str(range(1, 943)), range(1, 1683), 1):
        writer.writerow([val])
Clearly it throws an error about the lists, since whole ranges cannot be passed as comma-separated values.
Q3. Getting precision & recall on evaluated and recommended values
CODE
STARTS WITH
if ip == 1:
    one = 'cosine'
else:
    one = 'pearson'
choice = raw_input("Filtering Method: \n1.User based \n2.Item based \n Choice:")
if choice == '1':
    user_based_cf(one)
elif choice == '2':
    item_based_cf(one)
else:
    sim_op = {}
    exit(0)
UBCF:
def user_based_cf(co_pe):
    # INITIALIZE REQUIRED PARAMETERS
    path = '/home/mister-t/Projects/PycharmProjects/RecommendationSys/ml-100k/u.user'
    prnt = "USER"
    sim_op = {'name': co_pe, 'user_based': True}
    algo = KNNBasic(sim_options=sim_op)

    # RESPONSIBLE TO EXECUTE DATA SPLITS Mentioned in STEP 4
    perf = evaluate(algo, df, measures=['RMSE', 'MAE'])
    print_perf(perf)
    print type(perf)

    # START TRAINING
    trainset = df.build_full_trainset()

    # APPLYING ALGORITHM KNN Basic
    res = algo.train(trainset)
    print "\t\t >>>TRAINED SET<<<<\n\n", res

    # PEEKING PREDICTED VALUES
    search_key = raw_input("Enter User ID:")
    item_id = raw_input("Enter Item ID:")
    actual_rating = input("Enter actual Rating:")
    print algo.predict(str(search_key), item_id, actual_rating)
IBCF
def item_based_cf(co_pe):
    # INITIALIZE REQUIRED PARAMETERS
    path = '/location/ml-100k/u.item'
    prnt = "ITEM"
    sim_op = {'name': co_pe, 'user_based': False}
    algo = KNNBasic(sim_options=sim_op)

    # RESPONSIBLE TO EXECUTE DATA SPLITS = 2
    perf = evaluate(algo, df, measures=['RMSE', 'MAE'])
    print_perf(perf)
    print type(perf)

    # START TRAINING
    trainset = df.build_full_trainset()

    # APPLYING ALGORITHM KNN Basic
    res = algo.train(trainset)
    print "\t\t >>>TRAINED SET<<<<\n\n", res

    # Read the mappings raw id <-> movie name
    rid_to_name, name_to_rid = read_item_names(path)

    search_key = raw_input("ID:")
    print "ALGORITHM USED : ", one
    toy_story_raw_id = name_to_rid[search_key]
    toy_story_inner_id = algo.trainset.to_inner_iid(toy_story_raw_id)

    # Retrieve inner ids of the nearest neighbors of Toy Story.
    k = 5
    toy_story_neighbors = algo.get_neighbors(toy_story_inner_id, k=k)

    # Convert inner ids of the neighbors into names.
    toy_story_neighbors = (algo.trainset.to_raw_iid(inner_id)
                           for inner_id in toy_story_neighbors)
    toy_story_neighbors = (rid_to_name[rid]
                           for rid in toy_story_neighbors)

    print 'The ', k, ' nearest neighbors of ', search_key, ' are:'
    for movie in toy_story_neighbors:
        print(movie)
Q1. IBCF generates data as per the input given to it; I need it to export a CSV file, since I need to find out the predicted values
The easiest way to dump anything to a CSV is to use the csv module!
import csv

res = [x, y, z, ....]
csvfile = "<path to output csv or txt>"

# Assuming res is a flat list
with open(csvfile, "w") as output:
    writer = csv.writer(output, lineterminator='\n')
    for val in res:
        writer.writerow([val])

# Assuming res is a list of lists
with open(csvfile, "w") as output:
    writer = csv.writer(output, lineterminator='\n')
    writer.writerows(res)
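For Q2, algo.predict() takes one (user, item) pair at a time, so the ranges have to be iterated explicitly rather than passed in whole. A sketch of that loop (the predict function below is a stand-in for algo.predict; in Surprise the returned Prediction object carries the estimate in its .est attribute, and the demo sizes are shrunk from the full 943 users / 1682 items):

```python
import csv
import os
import tempfile

def predict(uid, iid, r_ui):
    # Stand-in for algo.predict(uid, iid, r_ui); a fixed estimate for the demo.
    return 3.5

n_users, n_items = 3, 4   # demo sizes; MovieLens 100k would be 943 and 1682
csvfile = os.path.join(tempfile.mkdtemp(), 'pred_matrix.csv')

with open(csvfile, 'w', newline='') as output:
    writer = csv.writer(output)
    writer.writerow(['user_id', 'item_id', 'estimate'])
    for user_id in range(1, n_users + 1):        # one predict() call per pair
        for item_id in range(1, n_items + 1):
            writer.writerow([user_id, item_id,
                             predict(str(user_id), str(item_id), 1)])

with open(csvfile) as f:
    lines = f.read().splitlines()
# lines[0] -> 'user_id,item_id,estimate', followed by one row per pair
```

With the real algorithm, the inner call would be `algo.predict(str(user_id), str(item_id)).est`, and the full loops would produce one row per user/item pair.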
I have a python script to build inputs for a Google chart. It correctly creates column headers and the correct number of rows, but repeats the data for the last row in every row. I tried explicitly setting the row indices rather than using a loop (which wouldn't work in practice, but should have worked in testing). It still gives me the same values for each entry. I also had it working when I had this code on the same page as the HTML user form.
end1 = number of rows in the data table
end2 = number of columns in the data table represented by a list of column headers
viewData = data stored in database
c = connections['default'].cursor()
c.execute("SELECT * FROM {0}.\"{1}\"".format(analysis_schema, viewName))
viewData=c.fetchall()
curDesc = c.description
end1 = len(viewData)
end2 = len(curDesc)
Creates column headers:
colOrder = [curDesc[2][0]]
if activityOrCommodity == "activity":
    tableDescription = {curDesc[2][0]: ("string", "Activity")}
elif (activityOrCommodity == "commodity") or (activityOrCommodity == "aa_commodity"):
    tableDescription = {curDesc[2][0]: ("string", "Commodity")}
for i in range(3, end2):
    attValue = curDesc[i][0]
    tableDescription[curDesc[i][0]] = ("number", attValue)
    colOrder.append(curDesc[i][0])
Creates row data:
data = []
values = {}
for i in range(0, end1):
    for j in range(2, end2):
        if j == 2:
            values[curDesc[j][0]] = viewData[i][j].encode("utf-8")
        else:
            values[curDesc[j][0]] = viewData[i][j]
    data.append(values)

dataTable = gviz_api.DataTable(tableDescription)
dataTable.LoadData(data)
return dataTable.ToJSon(columns_order=colOrder)
An example javascript output:
var dt = new google.visualization.DataTable({cols:[{id:'activity',label:'Activity',type:'string'},{id:'size',label:'size',type:'number'},{id:'compositeutility',label:'compositeutility',type:'number'}],rows:[{c:[{v:'AA26FedGovAccounts'},{v:49118957568.0},{v:1.94956132673}]},{c:[{v:'AA26FedGovAccounts'},{v:49118957568.0},{v:1.94956132673}]},{c:[{v:'AA26FedGovAccounts'},{v:49118957568.0},{v:1.94956132673}]},{c:[{v:'AA26FedGovAccounts'},{v:49118957568.0},{v:1.94956132673}]},{c:[{v:'AA26FedGovAccounts'},{v:49118957568.0},{v:1.94956132673}]}]}, 0.6);
It seems you're appending values to data, but values is never reset between iterations, so every row ends up referencing the same dict (holding the last row's contents).
I assume this is not intended, right? If so, just move the values = {} line inside the outer for loop in your row-building code.
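Concretely, dicts are stored by reference, so appending one shared values dict end1 times yields end1 views of the same object. A minimal sketch of the fix with stand-in data (the viewData rows and curDesc tuples below are invented to mirror the cursor shapes in the question):

```python
curDesc = [None, None, ('activity',), ('size',)]   # stand-in cursor description
viewData = [(0, 0, 'A', 1.0), (0, 0, 'B', 2.0)]    # stand-in fetched rows

data = []
for i in range(len(viewData)):
    values = {}                        # fresh dict per row -- the actual fix
    for j in range(2, len(curDesc)):
        values[curDesc[j][0]] = viewData[i][j]
    data.append(values)

# data -> [{'activity': 'A', 'size': 1.0}, {'activity': 'B', 'size': 2.0}]
```

With values created inside the outer loop, each appended row is an independent dict, so the Google chart receives distinct row data.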