I am wondering what I am doing wrong when trying to print the name value from the data returned by the following Python code.
import urllib.request, json
with urllib.request.urlopen("<THIS IS A URL IN THE ORIGINAL SCRIPT>") as url:
    data = json.loads(url.read().decode())
    print(data['Departure']['Product']['name'])
    print(data['Departure']['Stops']['Stop'][0]['depTime'])
And this is the API response I am fetching the data from:
{
"Departure" : [ {
"Product" : {
"name" : "Länstrafik - Buss 201",
"num" : "201",
"catCode" : "7",
"catOutS" : "BLT",
"catOutL" : "Länstrafik - Buss",
"operatorCode" : "254",
"operator" : "JLT",
"operatorUrl" : "http://www.jlt.se"
},
"Stops" : {
"Stop" : [ {
"name" : "Gislaved Lundåkerskolan",
"id" : "740040260",
"extId" : "740040260",
"routeIdx" : 12,
"lon" : 13.530096,
"lat" : 57.298178,
"depTime" : "20:55:00",
"depDate" : "2019-03-05"
}
data["Departure"] is a list, and you are indexing into it like it's a dictionary.
You wrote the dictionary sample confusingly. Here's how I think it looks:
d = {
    "Departure" : [ {
        "Product" : {
            "name" : "Länstrafik - Buss 201",
            "num" : "201",
            "catCode" : "7",
            "catOutS" : "BLT",
            "catOutL" : "Länstrafik - Buss",
            "operatorCode" : "254",
            "operator" : "JLT",
            "operatorUrl" : "http://www.jlt.se"
        },
        "Stops" : {
            "Stop" : [ {
                "name" : "Gislaved Lundåkerskolan",
                "id" : "740040260",
                "extId" : "740040260",
                "routeIdx" : 12,
                "lon" : 13.530096,
                "lat" : 57.298178,
                "depTime" : "20:55:00",
                "depDate" : "2019-03-05"
            } ]
        }
    } ]
}
And here's how you can print depTime
print(d["Departure"][0]["Stops"]["Stop"][0]["depTime"])
The important part you missed is d["Departure"][0] because d["Departure"] is a list.
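If the response can contain more than one departure, you can also loop over the list instead of hard-coding index 0 (a small sketch reusing the structure above):

for departure in d["Departure"]:
    print(departure["Product"]["name"])
    print(departure["Stops"]["Stop"][0]["depTime"])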
As Kyle said in the previous answer, data["Departure"] is a list, but you're trying to use it as a dictionary. There are 2 possible solutions.
Change data["Departure"]["Stops"]["Stop"] etc. to data["Departure"][0]["Stops"]["Stop"] etc.
Change the JSON file to make departure into a dictionary, which would allow you to keep your original code. This would make the final JSON snippet look like this:
"Departure" : {
"Product" : {
"name" : "Länstrafik - Buss 201",
"num" : "201",
"catCode" : "7",
"catOutS" : "BLT",
"catOutL" : "Länstrafik - Buss",
"operatorCode" : "254",
"operator" : "JLT",
"operatorUrl" : "http://www.jlt.se"
},
"Stops" : {
"name" : "Gislaved Lundåkerskolan",
"id" : "740040260",
"extId" : "740040260",
"routeIdx" : 12,
"lon" : 13.530096,
"lat" : 57.298178,
"depTime" : "20:55:00",
"depDate" : "2019-03-05"
}
}
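With that shape, the two original print calls work unchanged:

print(data['Departure']['Product']['name'])
print(data['Departure']['Stops']['Stop'][0]['depTime'])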
I want to query which comments have been made by any user about the machine learning book between '2020-03-15' and '2020-04-25', with the comments ordered from most recent to least recent.
Here is my document.
lib_books = db.lib_books
document_book1 = ({
"bookid" : "99051fe9-6a9c-46c2-b949-38ef78858dd0",
"title" : "Machine learning",
"author" : "Tom Michael",
"date_of_first_publication" : "2000-10-02",
"number_of_pages" : 414,
"publisher" : "New York : McGraw-Hill",
"topics" : ["Machine learning", "Computer algorithms"],
"checkout_list" : [
{
"time_checked_out" : "2020-03-20 09:11:22",
"userid" : "ef1234",
"comments" : [
{
"comment1" : "I just finished it and it is worth learning!",
"time_commented" : "2020-04-01 10:35:13"
},
{
"comment2" : "Some cases are a little bit outdated.",
"time_commented" : "2020-03-25 13:19:13"
},
{
"comment3" : "Can't wait to learning it!!!",
"time_commented" : "2020-03-21 08:21:42"
}]
},
{
"time_checked_out" : "2020-03-04 16:18:02",
"userid" : "ab1234",
"comments" : [
{
"comment1" : "The book is a little bit difficult but worth reading.",
"time_commented" : "2020-03-20 12:18:02"
},
{
"comment2" : "It's hard and takes a lot of time to understand",
"time_commented" : "2020-03-15 11:22:42"
},
{
"comment3" : "I just start reading, the principle of model is well explained.",
"time_commented" : "2020-03-05 09:11:42"
}]
}]
})
I tried this code, but it returns nothing.
query_test = lib_books.find({"bookid": "99051fe9-6a9c-46c2-b949-38ef78858dd0", "checkout_list.comments.time_commented" : {"$gte" : "2020-03-20", "$lte" : "2020-04-20"}})
for x in query_test:
    print(x)
Can you try this:
pipeline = [
    {'$match': {'bookid': "99051fe9-6a9c-46c2-b949-38ef78858dd0"}},  # bookid filter
    {'$unwind': '$checkout_list'},
    {'$unwind': '$checkout_list.comments'},
    {'$match': {'checkout_list.comments.time_commented': {"$gte": "2020-03-20", "$lte": "2020-04-20"}}},
    {'$project': {'_id': 0, 'bookid': 1, 'title': 1, 'comment': '$checkout_list.comments'}},
    {'$sort': {'comment.time_commented': -1}}]
query_test = lib_books.aggregate(pipeline)
for x in query_test:
    print(x)
I would recommend storing the comment text under a single field name such as 'comment', rather than 'comment1', 'comment2', etc. With one consistent field name, the comment text can be projected to the root of each result document.
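For illustration, a one-off migration sketch in pymongo (this is an assumption on my part, not part of the original question; it renames whichever 'commentN' key each sub-document uses to a single 'comment' field):

for book in lib_books.find({}):
    for checkout in book.get('checkout_list', []):
        for c in checkout.get('comments', []):
            # find the 'comment1'/'comment2'/... key and rename it to 'comment'
            text_key = next(k for k in list(c) if k.startswith('comment') and k != 'comment')
            c['comment'] = c.pop(text_key)
    lib_books.replace_one({'_id': book['_id']}, book)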
The aggregation can then be modified as below:
pipeline = [
    {'$match': {'bookid': "99051fe9-6a9c-46c2-b949-38ef78858dd0"}},  # bookid filter
    {'$unwind': '$checkout_list'},
    {'$unwind': '$checkout_list.comments'},
    {'$match': {'checkout_list.comments.time_commented': {"$gte": "2020-03-20", "$lte": "2020-04-20"}}},
    {'$project': {'_id': 0, 'bookid': 1, 'title': 1, 'comment': '$checkout_list.comments.comment', 'time_commented': '$checkout_list.comments.time_commented'}},
    {'$sort': {'time_commented': -1}}]
The equivalent MongoDB shell query, in case it is required:
db.books.aggregate([
{$match:{'bookid':"99051fe9-6a9c-46c2-b949-38ef78858dd0"}},//bookid filter
{$unwind:'$checkout_list'},
{$unwind:'$checkout_list.comments'},
{$match:{'checkout_list.comments.time_commented':{"$gte" : "2020-03-20", "$lte" : "2020-04-20"}}},
{$project:{_id:0,bookid:1,title:1,comment:'$checkout_list.comments.comment',time_commented:'$checkout_list.comments.time_commented'}},
{$sort:{'time_commented':-1}}
])
If there are multiple book documents you need to search, you can use an $in condition in the first $match stage:
{$match:{'bookid':{$in:["99051fe9-6a9c-46c2-b949-38ef78858dd0","99051fe9-6a9c-46c2-b949-38ef78858dd1"]}}},//bookid filter
I am trying to add data to a pandas DataFrame but am having issues getting to the nested fields. Ideally I would like all of the ratio names such as "priceBookValueRatio", "priceToBookRatio", etc. in the first column, with the dates across the top row going from left to right, and the appropriate values under each date. Can anyone help?
Here is my code
from urllib.request import urlopen
import json
import pandas as pd

def get_jsonparsed_data(ticker):
    url = ("https://financialmodelingprep.com/api/v3/financial-ratios/" + ticker)
    response = urlopen(url)
    data = response.read().decode("utf-8")
    value = json.loads(data)
    data = value["ratios"]
    df = pd.DataFrame(data)
    print(df)
And here is what my data looks like
{
"symbol" : "AAPL",
"ratios" : [ {
"date" : "2019-09-28",
"investmentValuationRatios" : {
"priceBookValueRatio" : "11.1154",
"priceToBookRatio" : "11.1154",
"priceToSalesRatio" : "3.8903",
"priceEarningsRatio" : "18.7109",
"receivablesTurnover" : "5.489",
"priceToFreeCashFlowsRatio" : "17.5607",
"priceToOperatingCashFlowsRatio" : "14.5863",
"priceCashFlowRatio" : "0",
"priceEarningsToGrowthRatio" : "0",
"priceSalesRatio" : "0",
"dividendYield" : "",
"enterpriseValueMultiple" : "1.7966762045884",
"priceFairValue" : "0"
},
"profitabilityIndicatorRatios" : {
"niperEBT" : "0.84056163195765",
"ebtperEBIT" : "1",
"ebitperRevenue" : "0.25266552384174",
"grossProfitMargin" : "0.37817768109035",
"operatingProfitMargin" : "1",
"pretaxProfitMargin" : "0.24572017188497",
"netProfitMargin" : "0.21238094505984",
"effectiveTaxRate" : "0.15943836804235",
"returnOnAssets" : "0.5848",
"returnOnEquity" : "0.6106",
"returnOnCapitalEmployed" : "0.2691",
"nIperEBT" : "0.84056163195765",
"eBTperEBIT" : "1",
"eBITperRevenue" : "0.25266552384174"
},
"operatingPerformanceRatios" : {
"receivablesTurnover" : "5.489",
"payablesTurnover" : "1.2542",
"inventoryTurnover" : "64.5433",
"fixedAssetTurnover" : "6.9606185456686",
"assetTurnover" : "0.76857223883066"
},
"liquidityMeasurementRatios" : {
"currentRatio" : "1.54",
"quickRatio" : "1.3844473032029",
"cashRatio" : "0.46202160464632",
"daysOfSalesOutstanding" : "-9.2636",
"daysOfInventoryOutstanding" : "64.2588",
"operatingCycle" : "",
"daysOfPayablesOutstanding" : "64.8648",
"cashConversionCycle" : ""
},
"debtRatios" : {
"debtRatio" : "0.3192",
"debtEquityRatio" : "1.194",
"longtermDebtToCapitalization" : "0.50361776241806",
"totalDebtToCapitalization" : "0.54422142191553",
"interestCoverage" : "0.0",
"cashFlowToDebtRatio" : "0.64222977037771",
"companyEquityMultiplier" : "3.7410043320661"
},
"cashFlowIndicatorRatios" : {
"operatingCashFlowPerShare" : "15.0267",
"freeCashFlowPerShare" : "12.94",
"cashPerShare" : "10.5773",
"payoutRatio" : "0.251",
"receivablesTurnover" : "5.489",
"operatingCashFlowSalesRatio" : "0.26670997101939",
"freeCashFlowOperatingCashFlowRatio" : "0.84875560231154",
"cashFlowCoverageRatios" : "0.64222977037771",
"shortTermCoverageRatios" : "4.2728448275862",
"capitalExpenditureCoverageRatios" : "6.6118151500715",
"dividendpaidAndCapexCoverageRatios" : "2.8191679531974",
"dividendPayoutRatio" : "0.25551976255972"
}
},
{
"date" : "2018-09-29",
"investmentValuationRatios" : {
"priceBookValueRatio" : "10.1842",
"priceToBookRatio" : "10.1842",
"priceToSalesRatio" : "4.1328",
"priceEarningsRatio" : "18.9226",
"receivablesTurnover" : "6.2738",
"priceToFreeCashFlowsRatio" : "17.563",
"priceToOperatingCashFlowsRatio" : "14.1753",
"priceCashFlowRatio" : "14.375642493446",
"priceEarningsToGrowthRatio" : "18.698887988401",
"priceSalesRatio" : "4.1912065394209",
"dividendYield" : "0.012318046710734",
"enterpriseValueMultiple" : "14.710301181747",
"priceFairValue" : "10.389124295011"
},
"profitabilityIndicatorRatios" : {
"niperEBT" : "0.81657819294131",
"ebtperEBIT" : "1",
"ebitperRevenue" : "0.27448935409176",
"grossProfitMargin" : "0.38343718820008",
"operatingProfitMargin" : "1",
"pretaxProfitMargin" : "0.26694026619477",
"netProfitMargin" : "0.22414202074587",
"effectiveTaxRate" : "0.18342180705869",
"returnOnAssets" : "1.0497",
"returnOnEquity" : "0.5556",
"returnOnCapitalEmployed" : "0.217",
"nIperEBT" : "0.81657819294131",
"eBTperEBIT" : "1",
"eBITperRevenue" : "0.27448935409176"
},
"operatingPerformanceRatios" : {
"receivablesTurnover" : "6.2738",
"payablesTurnover" : "1.2564",
"inventoryTurnover" : "60.2871",
"fixedAssetTurnover" : "6.4302488863064",
"assetTurnover" : "0.72621505229339"
},
"liquidityMeasurementRatios" : {
"currentRatio" : "1.133",
"quickRatio" : "0.99453976140569",
"cashRatio" : "0.22352474359306",
"daysOfSalesOutstanding" : "-8.8176",
"daysOfInventoryOutstanding" : "67.3325",
"operatingCycle" : "",
"daysOfPayablesOutstanding" : "76.8054",
"cashConversionCycle" : ""
},
"debtRatios" : {
"debtRatio" : "0.313",
"debtEquityRatio" : "1.0685",
"longtermDebtToCapitalization" : "0.46661721806832",
"totalDebtToCapitalization" : "0.51655010603258",
"interestCoverage" : "0.0",
"cashFlowToDebtRatio" : "0.67637989919901",
"companyEquityMultiplier" : "3.4133013523477"
},
"cashFlowIndicatorRatios" : {
"operatingCashFlowPerShare" : "15.6263",
"freeCashFlowPerShare" : "9.924",
"cashPerShare" : "5.2293",
"payoutRatio" : "0.226",
"receivablesTurnover" : "6.2738",
"operatingCashFlowSalesRatio" : "0.29154916319961",
"freeCashFlowOperatingCashFlowRatio" : "0.8280729395356",
"cashFlowCoverageRatios" : "0.67637989919901",
"shortTermCoverageRatios" : "3.7321187584345",
"capitalExpenditureCoverageRatios" : "5.8164200405619",
"dividendpaidAndCapexCoverageRatios" : "2.8652728954672",
"dividendPayoutRatio" : "0.2303337756799"
}
},
{
"date" : "2017-09-30",
"investmentValuationRatios" : {
"priceBookValueRatio" : "5.9086",
"priceToBookRatio" : "5.9086",
"priceToSalesRatio" : "3.4657",
"priceEarningsRatio" : "16.5922",
"receivablesTurnover" : "7.0564",
"priceToFreeCashFlowsRatio" : "15.4994",
"priceToOperatingCashFlowsRatio" : "12.37",
"priceCashFlowRatio" : "12.166429629599",
"priceEarningsToGrowthRatio" : "16.160760748713",
"priceSalesRatio" : "3.4086956688842",
"dividendYield" : "0.016341413728755",
"enterpriseValueMultiple" : "12.106846738693",
"priceFairValue" : "5.8292161925369"
},
"profitabilityIndicatorRatios" : {
"niperEBT" : "0.75443523849647",
"ebtperEBIT" : "1",
"ebitperRevenue" : "0.27957894553164",
"grossProfitMargin" : "0.38469860491899",
"operatingProfitMargin" : "1",
"pretaxProfitMargin" : "0.2676042820873",
"netProfitMargin" : "0.21092420845075",
"effectiveTaxRate" : "0.24556476150353",
"returnOnAssets" : "0.7847",
"returnOnEquity" : "0.3607",
"returnOnCapitalEmployed" : "0.1752",
"nIperEBT" : "0.75443523849647",
"eBTperEBIT" : "1",
"eBITperRevenue" : "0.27957894553164"
},
"operatingPerformanceRatios" : {
"receivablesTurnover" : "7.0564",
"payablesTurnover" : "1.2897",
"inventoryTurnover" : "65.6173",
"fixedAssetTurnover" : "6.7854838232247",
"assetTurnover" : "0.61077110404749"
},
"liquidityMeasurementRatios" : {
"currentRatio" : "1.276",
"quickRatio" : "1.089670085504",
"cashRatio" : "0.20125181026445",
"daysOfSalesOutstanding" : "-12.5636",
"daysOfInventoryOutstanding" : "56.8007",
"operatingCycle" : "",
"daysOfPayablesOutstanding" : "70.4447",
"cashConversionCycle" : ""
},
"debtRatios" : {
"debtRatio" : "0.3082",
"debtEquityRatio" : "0.863",
"longtermDebtToCapitalization" : "0.42034732372197",
"totalDebtToCapitalization" : "0.46322584262014",
"interestCoverage" : "0.0",
"cashFlowToDebtRatio" : "0.55519536652835",
"companyEquityMultiplier" : "2.7999060031183"
},
"cashFlowIndicatorRatios" : {
"operatingCashFlowPerShare" : "12.3101",
"freeCashFlowPerShare" : "9.779",
"cashPerShare" : "3.8888",
"payoutRatio" : "0.259",
"receivablesTurnover" : "7.0564",
"operatingCashFlowSalesRatio" : "0.28017222576058",
"freeCashFlowOperatingCashFlowRatio" : "0.80613468275594",
"cashFlowCoverageRatios" : "0.55519536652835",
"shortTermCoverageRatios" : "3.476695718075",
"capitalExpenditureCoverageRatios" : "5.1582202232752",
"dividendpaidAndCapexCoverageRatios" : "2.5465900079302",
"dividendPayoutRatio" : "0.26408967756613"
}
},
pandas can't construct a DataFrame from an arbitrary nested dict; you need to pass it data in a flat format it can parse.
One way to do this is to create a list of single-level dicts and then construct the DataFrame from those:
def clean_data(d):
    ret = {}
    ret['date'] = d['date']
    for outer_key, rec in d.items():
        if outer_key != 'date':
            for k, v in rec.items():
                ret[k] = v
    return ret

cleaned_data = [clean_data(d) for d in data['ratios']]
df = pd.DataFrame.from_records(cleaned_data, index='date')
df = df.transpose()
This will give you a DataFrame with the dates as columns and the ratio names as the row index.
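As an alternative sketch (assuming a pandas version recent enough to expose json_normalize at the top level), you can let pandas flatten the nesting itself; the category then survives as a prefix in the column names:

import pandas as pd

# 'value' is the parsed API response from the question's code
flat = pd.json_normalize(value["ratios"], sep='.')   # columns like 'investmentValuationRatios.priceToBookRatio'
df = flat.set_index('date').transpose()              # ratio names as rows, dates as columns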
I am not able to extract the "Data" value "12639735;7490484;3469776;9164745;650;0" from this file using Python.
In PHP it's a piece of cake for me, but I cannot manage it in Python.
Other answers on Stack Exchange didn't give me the solution.
Here is the contents of the file test.json:
{
"ActTime" : 1494535483,
"ServerTime" : "2017-05-11 22:44:43",
"Sunrise" : "05:44",
"Sunset" : "21:14",
"result" : [
{
"AddjMulti" : 1.0,
"AddjMulti2" : 1.0,
"AddjValue" : 0.0,
"AddjValue2" : 0.0,
"BatteryLevel" : 255,
"Counter" : "20130.221",
"CounterDeliv" : "12634.521",
"CounterDelivToday" : "0.607 kWh",
"CounterToday" : "1.623 kWh",
"CustomImage" : 0,
"Data" : "12639735;7490484;3469776;9164745;650;0",
"Description" : "",
"Favorite" : 1,
"HardwareID" : 3,
"HardwareName" : "Slimme Meter",
"HardwareType" : "P1 Smart Meter USB",
"HardwareTypeVal" : 4,
"HaveTimeout" : false,
"ID" : "1",
"LastUpdate" : "2017-05-11 22:44:39",
"Name" : "Elektriciteitsmeter",
"Notifications" : "false",
"PlanID" : "0",
"PlanIDs" : [ 0 ],
"Protected" : false,
"ShowNotifications" : true,
"SignalLevel" : "-",
"SubType" : "Energy",
"SwitchTypeVal" : 0,
"Timers" : "false",
"Type" : "P1 Smart Meter",
"TypeImg" : "counter",
"Unit" : 1,
"Usage" : "650 Watt",
"UsageDeliv" : "0 Watt",
"Used" : 1,
"XOffset" : "0",
"YOffset" : "0",
"idx" : "1"
}
],
"status" : "OK",
"title" : "Devices"
}
This should work
import json
with open('test.json') as f:
    contents = json.load(f)

print(contents['result'][0]['Data'])
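If you then need the individual counters inside that field, you can split on the semicolons:

fields = contents['result'][0]['Data'].split(';')
# ['12639735', '7490484', '3469776', '9164745', '650', '0']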
Similar questions have been asked before: Parsing values from a JSON file using Python?
Got it.
url = "http://192.168.2.1:8080/json.htm?type=devices&rid=1"
response = urllib.urlopen(url)
str = json.loads(response.read())
for i in str["result"]:
datastring = i["Data"]
elementstring = i["Data"].split(';')
counter = 0
for j in elementstring:
if counter == 4:
usage = j
counter += 1
delivery = get_num(i["UsageDeliv"])
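The inner counter loop can also be collapsed into a single index into the split list:

usage = i["Data"].split(';')[4]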
I am using this approach to get the comments on page data. It's working fine, but I need to dump the data into MongoDB. With this approach the data is inserted, but as a single document. I want every comment to be stored as a separate document, with the information I am getting from the API.
from facepy import GraphAPI
import json
import pymongo
connection = pymongo.MongoClient("mongodb://localhost")
facebook = connection.facebook
commen = facebook.comments
access = ''
#message
graph = GraphAPI(access)
page_id= 'micromaxinfo'
datas= graph.get(page_id+'/posts?fields=comments,created_time', page=True, retry=5)
posts=[]
for data in datas:
    print data
    commen.insert(data)
    break
Output Stored in MongoDB:
{
"created_time" : "2015-11-04T08:04:14+0000",
"id" : "120735417936636_1090909150919253",
"comments" : {
"paging" : {
"cursors" : {
"after" : "WTI5dGJXVnVkRjlqZFhKemIzSTZNVEE1TVRReE5ESTVOelV6TlRRd05Ub3hORFEyTnpFNU5UTTU=",
"before" : "WTI5dGJXVnVkRjlqZFhKemIzSTZNVEE1TURrd09UVTRNRGt4T1RJeE1Eb3hORFEyTmpJME16Z3g="
}
},
"data" : [
{
"created_time" : "2015-11-04T08:06:21+0000",
"message" : "my favorite mobiles on canvas silver",
"from" : {
"name" : "Velchamy Alagar",
"id" : "828304797279948"
},
"id" : "1090909130919255_1090909580919210"
},
{
"created_time" : "2015-11-04T08:10:13+0000",
"message" : "Micromax mob. मैने कुछ दिन पहले Micromax Bolt D321 mob. खरिद लिया | Bt मेरा मोबा. बहुत गरम होता है Without internate. और internate MB कम समय मेँ ज्यादा खर्च होती है | कोई तो help करो.",
"from" : {
"name" : "Amit Gangurde",
"id" : "1637669796485258"
},
"id" : "1090909130919255_1090910364252465"
},
{
"created_time" : "2015-11-04T08:10:27+0000",
"message" : "Nice phones.",
"from" : {
"name" : "Nayan Chavda",
"id" : "1678393592373659"
},
"id" : "1090909130919255_1090910400919128"
},
{
"created_time" : "2015-11-04T08:10:54+0000",
"message" : "sir micromax bolt a089 mobile ki battery price kitna. #micromax mobile",
"from" : {
"name" : "Arit Singha Roy",
"id" : "848776351903695"
},
So technically I want to store only the information coming in the "data" field:
{
"created_time" : "2015-11-04T08:10:54+0000",
"message" : "sir micromax bolt a089 mobile ki battery price kitna. #micromax mobile",
"from" : {
"name" : "Arit Singha Roy",
"id" : "848776351903695"
}
How to get this into my database?
You can use the Pentaho Data Integration open-source ETL tool for this. I use it to store specific fields from the JSON output for tweets.
Select the fields you want to parse from the JSON, then choose an output such as CSV or a table output in Oracle, etc.
Hope this helps.
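If you would rather stay in Python, here is a minimal pymongo sketch along the lines of the code in the question (assumptions: each page returned by graph.get(...) has a 'data' list of posts, each post may carry post['comments']['data'], and the field names of the inserted document are only a suggestion):

for page in datas:
    for post in page.get('data', []):
        for comment in post.get('comments', {}).get('data', []):
            # one document per comment
            commen.insert_one({
                'post_id': post.get('id'),
                'post_created_time': post.get('created_time'),
                'comment_id': comment.get('id'),
                'created_time': comment.get('created_time'),
                'message': comment.get('message'),
                'from': comment.get('from'),
            })

insert_one is the pymongo 3.x call; with the older driver used in the question, commen.insert(...) works the same way here.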
I am building a Python script which will be executed through Apache Spark, in which I generate an RDD from a JSON file stored in an S3 bucket.
I need to filter that JSON RDD according to some data in the JSON documents and thereby generate a new JSON file consisting of the filtered documents. That JSON file then needs to be uploaded to the S3 bucket.
Please suggest an appropriate solution for implementing this through PySpark.
Input JSON:
{
"_id" : ObjectId("55a787ee9efccaeb288b457f"),
"data" : {
"N◦ CATEGORIA" : 102.0,
"NOMBRE CATEGORIA" : "GASEOSAS",
"VARIABLE" : "TOP OF HEART",
"VAR." : "TOH",
"MARCA" : "COCA COLA ZERO",
"MES" : "ENERO",
"MES_N" : 1.0,
"AÑO" : 2014.0,
"UNIVERSO_TOTAL" : 1.0433982E7,
"UNIVERSO_FEMENINO" : 5529024.0,
"UNIVERSO_MASCULINO" : 4904958.0,
"PORCENTAJE_TOTAL" : 0.0066,
"PORCENTAJE_FEMENINO" : 0.0125,
"PORCENTAJE_MASCULINO" : null
},
"app_id" : ObjectId("5376349e11bc073138c33163"),
"category" : "excel_RAC",
"subcategory" : "RAC",
"created_time" : NumberLong(1437042670),
"instance_id" : null,
"metric_date" : NumberLong(1437042670),
"campaign_id" : ObjectId("5386602ba102b6cd4528ed93"),
"datasource_id" : ObjectId("559f5c8f9efccacf0a3c9875"),
"duplicate_id" : "695a3f5f562d0a02f1820fe5d91642a5"
}
The input JSON data needs to be filtered on "VARIABLE" : "TOP OF HEART", thereby generating output JSON like the following.
Output JSON:
{
"_id" : ObjectId("55b5d19e9efcca86118b45a2"),
"widget_type" : "rac_toh_excel",
"campaign_id" : ObjectId("558554b29efccab00a3c987c"),
"datasource_id" : ObjectId("55b5d18f9efcca770b3c986a"),
"date_time" : NumberLong(1388530800),
"data" : {
"key" : "COCA COLA ZERO",
"values" : {
"x" : NumberLong(1388530800),
"y" : 1.0433982E7,
"data" : {
"id" : ObjectId("553a151e5c93ffe0408b46f9"),
"month" : 1.0,
"year" : 2014.0,
"total" : 1.0433982E7,
"variable" : "TOH",
"total_percentage" : 0.0066
}
}
},
"filter" : [
]
}
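A minimal PySpark sketch of the filtering step (assumptions: the documents are stored one JSON object per line, the Mongo-specific ObjectId/NumberLong wrappers shown above have been serialized to plain strings/numbers so json.loads can parse them, and the bucket names are placeholders; reshaping the surviving documents into the widget format above is not shown):

import json
from pyspark import SparkContext

sc = SparkContext(appName="filter-top-of-heart")

# one JSON document per line
raw = sc.textFile("s3a://my-input-bucket/input.json")
docs = raw.map(json.loads)

# keep only documents whose data.VARIABLE field is "TOP OF HEART"
filtered = docs.filter(lambda d: d.get("data", {}).get("VARIABLE") == "TOP OF HEART")

# write the filtered documents back to S3 as JSON lines
filtered.map(json.dumps).saveAsTextFile("s3a://my-output-bucket/filtered-json")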