How to convert nested Dictionary to JSON string - python

I am trying to convert a nested dictionary to a JSON string:
a = {'default': {'version': 1.0, 'db': 'mangodb', 'uuid': 'eaada7dc-ec30-4548-a080-c4f70293202a'}, 'temperatures': [{1: 50}, {2: 100}]}
a_json = json.dumps(a['temperature'])
print(a_json)
I was expecting to get
{1: 50}, {2: 100}, but when I executed this code I got [[null, {"1": 50}], {"2": 100}]].
How can I get a result without this null?

Something like the following should work:
import json
a = {'default': {'version': 1.0, 'db': 'mangodb', 'uuid': 'eaada7dc-ec30-4548-a080-c4f70293202a'}, 'temperatures': [{1: 50}, {2: 100}]}
with open('out.json', mode='w+') as f:
    json.dump(a['temperatures'], f)  # json.dump writes to the file and returns None, so there is nothing useful to assign
Also, in case you don't want to use an output file:
import json
a = {'default': {'version': 1.0, 'db': 'mangodb', 'uuid': 'eaada7dc-ec30-4548-a080-c4f70293202a'}, 'temperatures': [{1: 50}, {2: 100}]}
a_json = json.dumps(a['temperatures'])
print(a_json)
I have tested both samples and they appear to be working just fine.
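Note that JSON object keys must be strings, so json.dumps converts the integer keys 1 and 2 to "1" and "2". As a quick check, a trimmed-down version of the second sample should print something like this:
import json

a = {'temperatures': [{1: 50}, {2: 100}]}  # trimmed-down example data
print(json.dumps(a['temperatures']))
# [{"1": 50}, {"2": 100}]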

Related

Write a dictionary to a csv file in tables. It keeps showing "String indices must be integers"

I want to write a table to a csv file with the header ['Username', 'INFO', 'ERROR'].
I have looked the "String indices must be integers" error up online.
Following the instructions I found, Python still raises the same type error.
This is really upsetting.
Does anyone know how to solve this?
import csv
dict_data = {'ac': {'INFO': 2, 'ERROR': 2}, 'ahmed.miller': {'INFO': 2, 'ERROR': 4}, 'blossom': {'INFO': 2, 'ERROR': 6}, 'bpacheco': {'INFO': 0, 'ERROR': 2}, 'breee': {'INFO': 1, 'ERROR': 5}, 'britanni': {'INFO': 1, 'ERROR': 1}, 'enim.non': {'INFO': 2, 'ERROR': 3}, 'flavia': {'INFO': 0, 'ERROR': 5}, 'jackowens': {'INFO': 2, 'ERROR': 4}, 'kirknixon': {'INFO': 2, 'ERROR': 1}, 'mai.hendrix': {'INFO': 0, 'ERROR': 3}, 'mcintosh': {'INFO': 4, 'ERROR': 3}, 'mdouglas': {'INFO': 2, 'ERROR': 3}, 'montanap': {'INFO': 0, 'ERROR': 4}, 'noel': {'INFO': 6, 'ERROR': 3}, 'nonummy': {'INFO': 2, 'ERROR': 3}, 'oren': {'INFO': 2, 'ERROR': 7}, 'rr.robinson': {'INFO': 2, 'ERROR': 1}, 'sri': {'INFO': 2, 'ERROR': 2}, 'xlg': {'INFO': 0, 'ERROR': 4}}
with open("testinggg.csv", 'w') as output:
fields = ['Username', 'INFO', 'ERROR']
writer = csv.DictWriter(output, fieldnames=fields)
writer.writeheader()
for m in dict_data:
print(m)
writer.writerow({'Username': str(m[0]), 'INFO': str(m[1]['error']), 'ERROR': str(m[1]['info'])})
Python keeps showing this no matter how hard I try:
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
TypeError: string indices must be integers
for m in dict_data only iterates over the keys of the dict, which are strings, and you should see that from your print(m) statements:
ac
ahmed.miller
blossom
bpacheco
breee
...
The statement m[0] will give you just the first letter, like 'a' in 'ac'. The statement m[1]['error'] is the source of your error; you're treating the second letter, 'c' in 'ac', like it's a dict.
To get your username, the key, and INFO/ERROR, the values, use the items() iterator:
for k, v in dict_data.items():
    writer.writerow({'Username': k, 'INFO': v['INFO'], 'ERROR': v['ERROR']})
You've also got errors in how you're trying to access the sub-dicts:
the INFO/ERROR keys are capitalized
you swapped info/error in your example
The above code fixes both of those issues.
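Putting those fixes together, a minimal corrected sketch of the original script (same file name and header; dict_data trimmed here for brevity) could look like this:
import csv

dict_data = {'ac': {'INFO': 2, 'ERROR': 2}, 'ahmed.miller': {'INFO': 2, 'ERROR': 4}}  # trimmed sample

with open("testinggg.csv", 'w', newline='') as output:  # newline='' avoids blank rows on Windows
    fields = ['Username', 'INFO', 'ERROR']
    writer = csv.DictWriter(output, fieldnames=fields)
    writer.writeheader()
    # items() yields (username, sub-dict) pairs, so no string indexing is needed
    for username, counts in dict_data.items():
        writer.writerow({'Username': username, 'INFO': counts['INFO'], 'ERROR': counts['ERROR']})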

Python json show data structure

I am working with json files that store thousands of entries or even more.
First, I want to understand the data I am working with.
import json
with open("/home/xu/stock_data/stock_market_data/nasdaq/json/AAL.json", "r") as f:
data = json.load(f)
print(json.dumps(data, indent=4))
This gives me an easy-to-read format, but some of the "keys" (I am not familiar with the JSON terminology, so I use the word "key" as in dict objects) have thousands of values, which makes it hard to read as a whole.
I also tried:
import json
import pandas as pd

with open("/home/xu/stock_data/stock_market_data/nasdaq/json/AAL.json", "r") as f:
    data = json.load(f)
df = pd.DataFrame.from_dict(data, orient="index")
print(df.info)
but got
<bound method DataFrame.info of result error
chart [{'meta': {'currency': 'USD', 'symbol': 'AAL',... None>
This result kind of shows the structure, but it ends with ..., not showing the whole picture.
My questions:
Is there something that works like np.array.shape for json/dict/pandas objects, which can show the shape of the structure?
Is there a better library for interpreting the structure of a json file?
Edit:
Sorry, perhaps the wording of my problem was misleading. I tried pprint, and it gave me:
{ 'chart': { 'error': None,
'result': [ { 'events': { 'dividends': { '1406813400': { 'amount': 0.1,
'date': 1406813400},
'1414675800': { 'amount': 0.1,
'date': 1414675800},
'1423146600': { 'amount': 0.1,
'date': 1423146600},
'1430400600': { 'amount': 0.1,
'date': 1430400600},
'1438867800': { 'amount': 0.1,
'date': 1438867800},
'1446561000': { 'amount': 0.1,
'date': 1446561000},
'1454941800': { 'amount': 0.1,
'date': 1454941800},
'1462195800': { 'amount': 0.1,
'date': 1462195800},
'1470231000': { 'amount': 0.1,
'date': 1470231000},
'1478179800': { 'amount': 0.1,
'date': 1478179800},
'1486650600': { 'amount': 0.1,
'date': 1486650600},
'1494595800': { 'amount': 0.1,
'date': 1494595800},
'1502371800': { 'amount': 0.1,
'date': 1502371800},
'1510324200': { 'amount': 0.1,
'date': 1510324200},
'1517841000': { 'amount': 0.1,
'date': 1517841000},
'1525699800': { 'amount': 0.1,
'date': 1525699800},
'1533562200': { 'amount': 0.1,
'date': 1533562200},
'1541428200': { 'amount': 0.1,
'date': 1541428200},
'1549377000': { 'amount': 0.1,
'date': 1549377000},
'1557235800': { 'amount': 0.1,
'date': 1557235800},
'1565098200': { 'amount': 0.1,
'date': 1565098200},
'1572964200': { 'amount': 0.1,
'date': 1572964200},
'1580826600': { 'amount': 0.1,
'date': 1580826600}}},
'indicators': { 'adjclose': [ { 'adjclose': [ 18.19490623474121,
19.326200485229492,
19.05280113220215,
19.80699920654297,
20.268939971923828,
20.891149520874023,
20.928863525390625,
21.28710174560547,
20.88172149658203,
20.93828773498535,
20.721458435058594,
20.514055252075195,
20.466917037963867,
20.994853973388672,
20.81572914123535,
20.2595157623291,
20.155811309814453,
19.816425323486328,
20.702600479125977,
21.032560348510742,
20.740314483642578,
21.0419864654541,
21.26824951171875,
22.531522750854492,
23.266857147216797,
23.587390899658203,
25.9725284576416,
26.27420997619629,
27.150955200195312,
27.273509979248047,
27.7448787689209,
29.507808685302734,
30.92192840576172,
31.4404239654541,
31.817523956298828,
31.940074920654297,
31.676118850708008,
32.354888916015625,
31.157604217529297,
30.158300399780273,
30.63909339904785,
31.148174285888672,
30.969064712524414,
31.496990203857422,
31.01619529724121,
31.666685104370117,
32.31717300415039,
32.31717300415039,
30.497684478759766,
31.69496726989746,
32.006072998046875,
31.7326717376709,
31.940074920654297,
31.826950073242188,
31.346155166625977,
31.61954689025879,
...
...
...
# this goes on and on for the respective "keys" of the json file, which means I have to scroll down thousands of lines to find out what type of data I have
What I am hoping to find is a solution that outputs something like the following, where it doesn't show the data itself in full but only the "keys" and maybe some additional information, as some files may literally contain many GBs of data, making it impractical to scroll through.
# this is what I am hoping to achieve
{
    "Name": {
        "title": <datatype=str, len=20>,
        "time_stamp": <data_type=list, len=3000>,
        "closing_price": <data_type=list, len=3000>,
        "high_price_of_the_day": <data_type=list, len=3000>
        ...
        ...
        ...
    }
}
You have a few options for how to navigate this. If you want to render your data so you can make informed decisions quickly, there are built-in libraries for pretty-printing dictionaries (see pprint), but personally I recommend something that works out of the box without much configuration. I found pprintpp to be the ideal choice for any Python data structure: https://pypi.org/project/pprintpp/
Simply run in your terminal:
pip3 install pprintpp
The library should install under C:\Users\User\AppData\Local\Programs\Python\PythonXX\Lib\site-packages\pprintpp
After that, simply do this in your code:
import json
from pprintpp import pprint

with open("/home/xu/stock_data/stock_market_data/nasdaq/json/AAL.json", "r") as f:
    data = json.load(f)
pprint(data)
You can also do pprint(data, width=1) to guarantee that the next dictionary key goes on the next line, even if the key is short. For example:
some_dict = {'a': 'b', 'c': {'aa': 'bb'}}
pprint(some_dict, width=1)
Outputs:
{
    'a': 'b',
    'c': {
        'aa': 'bb',
    },
}
Hope this helped! Cheers :)
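If the goal is the key/type summary sketched in the question rather than pretty-printing everything, a small recursive helper can produce it. The sketch below is only illustrative (the summarize function and its output format are made up, not part of any library):
import json

def summarize(obj):
    # Keep dict keys, but replace lists and long strings with a short <type, len> description
    if isinstance(obj, dict):
        return {k: summarize(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return f"<data_type=list, len={len(obj)}>"
    if isinstance(obj, str) and len(obj) > 20:
        return f"<data_type=str, len={len(obj)}>"
    return f"<data_type={type(obj).__name__}>"

with open("/home/xu/stock_data/stock_market_data/nasdaq/json/AAL.json", "r") as f:
    data = json.load(f)
print(json.dumps(summarize(data), indent=4))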

I want to get many items from FilterStore in SimPy - Python

I want to get many items from a FilterStore.
factory.stock_part.items
is a list:
[{'order_id': 534066215, 'id': 0}, {'order_id': 534066215, 'id': 1}, {'order_id': 534066215, 'id': 2}, {'order_id': 534066215, 'id': 3}, {'order_id': 534066215, 'id': 4}, {'order_id': 534066215, 'id': 5}, {'order_id': 534066215, 'id': 6}, {'order_id': 534066215, 'id': 7}, {'order_id': 534066215, 'id': 8}]
and I want to use:
factory.stock_part.get()
to get five items
factory.stock_part.get(5)
does not work.
How do I resolve this? Here's the full code:
import simpy

class Factory():
    def __init__(self, env):
        self.stock_part = simpy.FilterStore(env, capacity=100000)

def stock_out(env, factory):
    while True:
        yield env.timeout(10)
        # here I hope the inventory is reduced by 3, but none of the following 3 lines works
        factory.stock_part.get(5)
        factory.stock_part.get()[0:3]
        factory.stock_part.get(factory.stock_part.items[0:3])

env = simpy.Environment()
factory = Factory(env)
# create the inventory list
factory.stock_part.put({'id': 1})
factory.stock_part.put({'id': 2})
factory.stock_part.put({'id': 3})
factory.stock_part.put({'id': 4})
factory.stock_part.put({'id': 5})
on_process = env.process(stock_out(env, factory))
print('start')
env.run(until=300)
print('end')
Where is your eval function?
If you are using a FilterStore, don't you need to pass in a function that evaluates each resource and returns True when it finds a match?
Something like:
part = yield factory.stock_part.get(lambda part: part['id'] == 5)
Filter stores only return one element at a time.
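Since each get() request resolves to a single item, one way to take several parts in one step is to yield several get() calls in a loop. A rough sketch (taking 3 parts and using no filter function, both of which are illustrative choices, not from the original code):
def stock_out(env, factory):
    while True:
        yield env.timeout(10)
        taken = []
        for _ in range(3):
            # each yield waits until one get() request is filled and returns that one item
            part = yield factory.stock_part.get()
            taken.append(part)
        print(env.now, 'took', taken)
If fewer than three parts are in stock, the process simply waits at the get() until more parts are put into the store.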

cerberus - how to validate arbitrary dict keys?

I have read issues here and here about using keysrules and valuesrules, but I've only seen them validate nested dicts, not the root. I'd like to validate the top-level root dict keys.
from cerberus import Validator

schema = {
    'any_arbitrary_str': {
        'type': 'dict',
        'keysrules': {'type': 'string'},
        'valuesrules': {'type': 'integer'},
    },
}
v = Validator(schema)
v.validate({'test': {'a': 1, 'b': 2}})
print(v.errors)
In this example, I'd like to just validate that the data is a dict of str: Dict[str, int] where the keys can be any arbitrary string.
I'm not sure I'm using it right (docs); this fails with cerberus.schema.SchemaError: {'any_arbitrary_str': [{'keysrules': ['unknown rule'], 'valuesrules': ['unknown rule']}]}, but it's also still looking for any_arbitrary_str instead of accepting any string.
You can just nest it. It's not pretty, but it works. I have not found a more elegant solution yet.
schema = {
    'document': {
        'type': 'dict',
        'keysrules': {'type': 'string'},
        'valuesrules': {
            'type': 'dict',
            'keysrules': {'type': 'string'},
            'valuesrules': {'type': 'integer'},
        },
    },
}
v = Validator(schema)
document_to_test = {'test': {'a': 1, 'b': 2}}
v.validate({'document': document_to_test})
print(v.errors)
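If you need this in more than one place, the wrapping and unwrapping can be hidden behind a small helper; validate_root_mapping below is just a hypothetical name for that idea:
from cerberus import Validator

ROOT_SCHEMA = {
    'document': {
        'type': 'dict',
        'keysrules': {'type': 'string'},
        'valuesrules': {
            'type': 'dict',
            'keysrules': {'type': 'string'},
            'valuesrules': {'type': 'integer'},
        },
    },
}

def validate_root_mapping(doc):
    # Wrap the document under a fixed key so that its root-level keys become
    # values that keysrules/valuesrules can reach.
    v = Validator(ROOT_SCHEMA)
    ok = v.validate({'document': doc})
    return ok, v.errors

print(validate_root_mapping({'test': {'a': 1, 'b': 2}}))  # expected: (True, {})
print(validate_root_mapping({'test': {'a': 'x'}}))        # expected: (False, {...})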

Cerberus coercion within nested list

I get unexpected behaviour for the following code:
import cerberus
v = cerberus.Validator()
schema = {
    'list_of_values': {
        'type': 'list',
        'schema': {'items': [{'type': 'string', 'coerce': str},
                             {'type': 'integer', 'coerce': int}]},
    },
}
document = {'list_of_values': [['hello', 100], [123, "122"]]}
v.validate(document, schema)
v.errors
I am expecting to have no errors, as the coercion should take care of the types. But I am getting
{'list_of_values': [{1: [{0: ['must be of string type'], 1: ['must be of integer type']}]}]}
Is this a bug? Am I misunderstanding how the coercion works?
@funky-future
Something's not right on your end; I can indeed reproduce the problem just by copy-pasting the example into the prompt:
>>> import cerberus
>>> v = cerberus.Validator()
>>> schema = {'list_of_values': {'type': 'list',
... 'schema': {'items': [{'type': 'string', 'coerce': str},
... {'type': 'integer', 'coerce': int}]}}
... }
>>> document = {'list_of_values': [['hello', 100], [123, "122"]]}
>>> v.validate(document, schema)
False
>>> v.errors
{'list_of_values': [{1: [{0: ['must be of string type'], 1: ['must be of integer type']}]}]}
Python 3.5.2, Cerberus 1.2
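For comparison, coercion on a flat, non-nested field does run before the type check, which suggests the behaviour above is specific to rules nested under items inside a list-of-lists schema. A small sketch of that comparison (only an observation, not a confirmed explanation of the reported error):
import cerberus

v = cerberus.Validator()
flat_schema = {'value': {'type': 'integer', 'coerce': int}}
print(v.validate({'value': '122'}, flat_schema))  # True: '122' is coerced to 122 before validation
print(v.document)  # {'value': 122}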
