Parse JSON file to CSV with null key values in Python

Example
{"data":"value1","version":"value2","version1":"value3"}
{"data":"value1","version1":"value3"}
{"data":"value1","version1":"value3","hi":{"a":"true,"b":"false"}}
I have a JSON file that I need to convert to CSV, but the rows do not all have the same columns, and some rows have nested attributes. How can I convert them in a Python script?
I tried converting JSON to CSV with Python code, but it gives me an error.

To convert a JSON file to a CSV file in Python, you can use the pandas library.
import pandas as pd

data = [
    {
        "data": "value1",
        "version": "value2",
        "version1": "value3"
    },
    {
        "data": "value1",
        "version1": "value3"
    },
    {
        "data": "value1",
        "version1": "value3",
        "hi": {
            "a": "true",
            "b": "false"
        }
    }
]

df = pd.DataFrame(data)
df.to_csv('data.csv', index=False)
I have correctly formatted your JSON since it was giving errors.
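Note that pd.DataFrame leaves the nested "hi" dict as a single object column. If it should become its own columns instead, pandas also provides json_normalize (top-level since pandas 1.0); a minimal sketch on the same data list:

import pandas as pd

# Minimal sketch, assuming the same 'data' list as above.
# json_normalize flattens nested dicts into dotted column names,
# filling NaN where a row has no value for a column.
df = pd.json_normalize(data)
df.to_csv('data.csv', index=False)
# columns: data, version, version1, hi.a, hi.b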

You could convert the JSON data to a flat list of lists with column names on the first line. Then process that to make the CSV output.
def flatDict(D, p=""):
    if not isinstance(D, dict):
        return {"": D}
    return {p + k + s: v for k, d in D.items() for s, v in flatDict(d, ".").items()}

def flatData(data):
    lines = [*map(flatDict, data)]
    names = dict.fromkeys(k for d in lines for k in d)
    return [[*names]] + [[*map(line.get, names)] for line in lines]
The flatDict function converts a nested dictionary to a single-level dictionary, with nested keys combined into dotted names and brought up to the top level. This is done recursively, so it works for any depth of nesting.
The flatData function flattens each record, producing a list of single-level dictionaries (lines). The union of all keys in that list forms the list of column names (a dict constructor keeps them in order of first appearance). The function returns the list of names followed by one row per line, converting each dictionary to a list by mapping the column names to that line's data where present (the dictionaries' .get() method yields None for missing columns).
Output:

E = [{"data":"value1","version":"value2","version1":"value3"},
     {"data":"value1","version1":"value3"},
     {"data":"value1","version1":"value3","hi":{"a":"true","b":"false"}}]

for line in flatData(E):
    print(line)

['data', 'version', 'version1', 'hi.a', 'hi.b']   # column names
['value1', 'value2', 'value3', None, None]        # data ...
['value1', None, 'value3', None, None]
['value1', None, 'value3', 'true', 'false']
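To produce the actual CSV from that, the list of lists can be passed straight to csv.writer; a minimal sketch (csv.writer renders None as an empty field):

import csv

# Minimal sketch: write flatData's rows (header row first) to a file.
with open('data.csv', 'w', newline='') as f:
    csv.writer(f).writerows(flatData(E))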

Related

List key values for Json data file

I have a very long JSON file that I need to make sense of in order to query the data I am interested in. To do this, I would like to extract all of the key values so I know what is available to query. Is there a quick way of doing this, or should I just write a parser that traverses the JSON file and extracts anything between either { and : or , and :?
Given the example:
[{"Name": "key1", "Value": "value1"}, {"Name": "key2", "Value": "value2"}]
I am looking for the values:
"Name"
"Value"
That will depend on whether there's any nesting. But the basic pattern is something like this:
import json

with open("foo.json", "r") as fh:
    data = json.load(fh)

all_keys = set()
for datum in data:
    keys = set(datum.keys())
    all_keys.update(keys)
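If the structure is nested, a recursive walk can collect key paths at every depth; a hedged sketch (the dotted-path naming is just one possible convention):

def collect_keys(obj, prefix="", found=None):
    # Recursively gather every key path in a nested JSON structure.
    if found is None:
        found = set()
    if isinstance(obj, dict):
        for key, value in obj.items():
            path = prefix + "." + key if prefix else key
            found.add(path)
            collect_keys(value, path, found)
    elif isinstance(obj, list):
        for item in obj:
            collect_keys(item, prefix, found)
    return found

# collect_keys(data) -> {'Name', 'Value'} for the example above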
This:
data = [{"Name": "key1", "Value": "value1"}, {"Name": "key2", "Value": "value2"}]
for val in data:
    print(val.keys())
gives you:
dict_keys(['Name', 'Value'])
dict_keys(['Name', 'Value'])

pandas change the order of columns

In my project I'm using Flask. I receive JSON (via a REST API) containing data that I should convert to a pandas DataFrame.
The JSON looks like:
{
    "entity_data": [
        {"id": 1, "store": "a", "marker": "a"}
    ]
}
I get the JSON and extract the data:
params = request.json
entity_data = params.pop('entity_data')
and then I convert the data into a pandas dataframe:
entity_ids = pd.DataFrame(entity_data)
the result looks like this:
   id marker store
0   1      a     a
This is not the original order of the columns. I'd like to change the order of the columns as in the dictionary.
help?
Use OrderedDict for an ordered dictionary
You should not assume dictionaries are ordered. While dictionaries are insertion ordered in Python 3.7, whether or not libraries maintain this order when reading json into a dictionary, or converting the dictionary to a Pandas dataframe, should not be assumed.
The most reliable solution is to use collections.OrderedDict from the standard library:
import json
from collections import OrderedDict
import pandas as pd

params = """{
    "entity_data": [
        {"id": 1, "store": "a", "marker": "a"}
    ]
}"""

# replace the params literal above with request.json in your Flask app
data = json.loads(params, object_pairs_hook=OrderedDict)
entity_data = data.pop('entity_data')
df = pd.DataFrame(entity_data)
print(df)
#    id store marker
# 0   1     a      a
Just add the columns parameter:
entity_ids = pd.DataFrame(entity_data, columns=["id", "store", "marker"])
Assuming you have access to the JSON sender, you can send the order in the JSON itself, like:
{
    "order": ["id", "store", "marker"],
    "entity_data": {"id": [1, 2], "store": ["a", "b"],
                    "marker": ["a", "b"]}
}
Then create the DataFrame with the columns specified, as said by Chiheb.K.:
import pandas as pd

params = request.json
entity_data = params.pop('entity_data')
order = params.pop('order')
entity_df = pd.DataFrame(entity_data, columns=order)
If you cannot explicitly specify the order in the JSON, see this answer on specifying object_pairs_hook in JSONDecoder to get an OrderedDict and then create the DataFrame.

Writing 3 python dictionaries to a csv

I have 3 dictionaries (2 of them are setdefault dicts with multiple values):
Score_dict:
{'Id_1': [('100001124156327', 0.0),
          ('100003643614411', 0.0)],
 'Id_2': [('100000435456546', 5.7),
          ('100000234354556', 3.5)]}
post_dict:
{'Id_1': [('+', 100004536)],
 'Id_2': [('-', 100035430)]}
comment_dict:
{'Id_1': [('+', 1023434234)],
 'Id_2': [('-', 10343534534),
          ('*', 1097963644)]}
My current approach is to write them into 3 different CSV files and then merge them; I want to merge them on the common first column (the ID column).
But I am unable to figure out how to merge 3 CSV files into a single CSV file. Also, is there any way to write all 3 dictionaries into a single CSV without writing them individually?
Required output:
Ids   Score_Ids            Post_Ids     Comment_Ids
Id_1  100001124156327,0.0  +,100004536  +,1023434234
      100003643614411,0.0
Id_2  100000435456546,5.7  -,100035430  -,10343534534
      100000234354556,3.5               *,1097963644
What is the best way to do this correctly?
You can merge them all first, then write them to a csv file:
import pprint

scores = {
    'Id_1': [
        ('100001124156327', 0.0),
        ('100003643614411', 0.0)
    ],
    'Id_2': [
        ('100000435456546', 5.7),
        ('100000234354556', 3.5)
    ]
}

post_dict = {
    'Id_1': [
        ('+', 100004536)
    ],
    'Id_2': [
        ('-', 100035430)
    ]
}

comment_dict = {
    'Id_1': [
        ('+', 1023434234)
    ],
    'Id_2': [
        ('-', 10343534534),
        ('*', 1097963644)
    ]
}

merged = {
    key: {
        "Score_Ids": value,
        "Post_Ids": post_dict[key],
        "Comment_Ids": comment_dict[key]
    }
    for key, value in scores.items()  # .iteritems() is Python 2 only
}

pp = pprint.PrettyPrinter(depth=6)
pp.pprint(merged)
For reference: https://repl.it/repls/SqueakySlateblueDictionaries
I suggest you transform your three dicts into one list of dicts before writing it to a CSV file.
Example:
rows = [
    {"Score_Id": "...", "Post_Id": "...", "Comment_Id": "..."},
    {"Score_Id": "...", "Post_Id": "...", "Comment_Id": "..."},
    {"Score_Id": "...", "Post_Id": "...", "Comment_Id": "..."},
    ...
]
And then use the csv.DictWriter class to write all the rows.
Since you have commas in your values (are you sure that's a good format? Splitting them into two different columns might be a better approach), be careful to use tabs or something else as the separator.
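A minimal sketch of that DictWriter approach (the field names and the tab separator are assumptions based on the required output above):

import csv

# Hypothetical rows built by merging the three dicts on their shared IDs.
fieldnames = ["Ids", "Score_Ids", "Post_Ids", "Comment_Ids"]
rows = [
    {"Ids": "Id_1", "Score_Ids": "100001124156327,0.0",
     "Post_Ids": "+,100004536", "Comment_Ids": "+,1023434234"},
]

with open("merged.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames, delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)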
I suggest writing all three to the same file
You could get common keys by doing something like:
common_keys = set(score_dict) | set(post_dict) | set(comment_dict)
for key_ in common_keys:
    val_score = score_dict.get(key_, some_default_value)
    post_score = post_dict.get(key_, some_default_value)
    comment_score = comment_dict.get(key_, some_default_value)
    # print key and vals to csv as before

Import nested MongoDB to Pandas

I have a collection with heavily nested docs in MongoDB that I want to flatten and import into pandas. There are some nested dicts, but also a list of dicts that I want to transform into columns (see examples below for details).
I already have a function that works for smaller batches of documents. But the solution (I found it in the answer to this question) uses json. The problem with the json.loads operation is that it fails with a MemoryError on bigger selections from the collection.
I tried many solutions suggesting other JSON parsers (e.g. ijson), but for different reasons none of them solved my problem. The only way left, if I want to keep the transformation via json, would be chunking bigger selections into smaller groups of documents and iterating the parsing.
At this point I thought (and that is my main question here): maybe there is a smarter way to do the unnesting, without the detour through json, directly in MongoDB or in pandas, or somehow combined?
This is a shortened example Doc:
{
'_id': ObjectId('5b40fcc4affb061b8871cbc5'),
'eventId': 2,
'sId' : 6833,
'stage': {
'value': 1,
'Name': 'FirstStage'
},
'quality': [
{
'type': {
'value': 2,
'Name': 'Color'
},
'value': '124'
},
{
'type': {
'value': 7,
'Name': 'Length'
},
'value': 'Short'
},
{
'type': {
'value': 15,
'Name': 'Printed'
}
}
}
This is what a successful DataFrame representation would look like (I skipped the columns '_id' and 'sId' for readability):
   eventId  stage.value  stage.name    q_color  q_length  q_printed
1        2            1  'FirstStage'      124   'Short'          1
My code so far (which runs into memory problems - see above):
import json
import pandas as pd
from bson import json_util
from pandas.io.json import json_normalize  # pandas < 1.0; use pd.json_normalize on newer versions

def load_events(filter='sId', id=6833, all=False):
    if all:
        print('Loading all events.')
        cursor = events.find()
    else:
        print('Loading events with %s equal to %s.' % (filter, id))
        print('Filtering...')
        cursor = events.find({filter: id})
    print('Loading...')
    l = list(cursor)
    print('Parsing json...')
    sanitized = json.loads(json_util.dumps(l))
    print('Parsing quality...')
    for ev in sanitized:
        for q in ev['quality']:
            name = 'q_' + str(q['type']['Name'])
            value = q.pop('value', 1)
            ev[name] = value
        ev.pop('quality', None)
    normalized = json_normalize(sanitized)
    df = pd.DataFrame(normalized)
    return df
You don't need to convert the nested structures using json parsers. Just create your dataframe from the record list:
df = pandas.DataFrame(list(cursor))
and afterwards use pandas in order to unpack your lists and dictionaries:
import pandas
from itertools import chain
import numpy

df = pandas.DataFrame(list(cursor))  # the record list from above
df['stage.value'] = df['stage'].apply(lambda cell: cell['value'])
df['stage.name'] = df['stage'].apply(lambda cell: cell['Name'])
df['q_'] = df['quality'].apply(lambda cell: [(m['type']['Name'], m['value'] if 'value' in m else 1) for m in cell])
df['q_'] = df['q_'].apply(lambda cell: dict((k, v) for k, v in cell))
keys = set(chain(*df['q_'].apply(lambda column: column.keys())))
for key in keys:
    column_name = 'q_{}'.format(key).lower()
    df[column_name] = df['q_'].apply(lambda cell: cell[key] if key in cell else numpy.nan)
df.drop(['stage', 'quality', 'q_'], axis=1, inplace=True)
I use three steps to unpack the nested data types. First, the names and values are used to create a flat list of pairs (tuples). In the second step, a dictionary based on those tuples takes its keys from the 1st and its values from the 2nd position of each tuple. Then all existing property names are extracted once using a set. Each property gets a new column in a loop; inside the loop the value of each pair is mapped to the respective column cells.
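If pandas 1.0+ is available, json_normalize offers a hedged shortcut for the same unpacking; a sketch on the sample document from the question (here 'records' stands in for list(cursor)):

import pandas as pd

records = [{
    'eventId': 2, 'sId': 6833,
    'stage': {'value': 1, 'Name': 'FirstStage'},
    'quality': [
        {'type': {'value': 2, 'Name': 'Color'}, 'value': '124'},
        {'type': {'value': 7, 'Name': 'Length'}, 'value': 'Short'},
        {'type': {'value': 15, 'Name': 'Printed'}},
    ],
}]

# Flatten the 'stage' dict into dotted columns; 'quality' stays a list.
df = pd.json_normalize(records, sep='.')

# Turn each quality list into a {q_<name>: value} dict (defaulting to 1
# when 'value' is missing, as in the question) and expand it to columns.
q_cols = df.pop('quality').apply(
    lambda qs: {'q_' + q['type']['Name'].lower(): q.get('value', 1) for q in qs})
df = df.join(pd.DataFrame(q_cols.tolist()))
print(df)  # eventId, sId, stage.value, stage.Name, q_color, q_length, q_printed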

Create a data frame from a complex nested dictionary?

I have a deeply nested JSON file saved in .txt format. I need to access some specific key pairs and create a data frame or another transformed JSON object for further use. Here is a small sample with 2 key pairs.
[
    {
        "ko_id": [819752],
        "concepts": [
            {
                "id": ["11A71731B880:http://ontology.intranet.com/Taxonomy/116#en"],
                "uri": ["http://ontology.intranet.com/Taxonomy/116"],
                "language": ["en"],
                "prefLabel": ["Client coverage & relationship management"]
            }
        ]
    },
    {
        "ko_id": [819753],
        "concepts": [
            {
                "id": ["11A71731B880:http://ontology.intranet.com/Taxonomy/116#en"],
                "uri": ["http://ontology.intranet.com/Taxonomy/116"],
                "language": ["en"],
                "prefLabel": ["Client coverage & relationship management"]
            }
        ]
    }
]
The following code loads the data as a list, but I need to access the data as a dictionary, and I need the "ko_id", "uri" and "prefLabel" from each key pair to put into a pandas data frame or a dictionary for further analysis.
with open('sample_data.txt') as data_file:
    json_sample = js.load(data_file)
The following code gives me the exact values of the first element, but I do not actually know how to put it all together and build the final algorithm to create the dataframe:
print(sample_dict["ko_id"][0])
print(sample_dict["concepts"][0]["prefLabel"][0])
print(sample_dict["concepts"][0]["uri"][0])

for record in sample_dict:
    df = pd.DataFrame(record['concepts'])
    df['ko_id'] = record['ko_id']
    final_df = final_df.append(df)
You can pass the data to pandas.DataFrame using a generator:
import pandas as pd
import json as js

with open('sample_data.txt') as data_file:
    json_sample = js.load(data_file)

df = pd.DataFrame(data=((key["ko_id"][0],
                         key["concepts"][0]["prefLabel"][0],
                         key["concepts"][0]["uri"][0]) for key in json_sample),
                  columns=("ko_id", "prefLabel", "uri"))
Output:
>>> df
ko_id prefLabel uri
0 819752 Client coverage & relationship management http://ontology.intranet.com/Taxonomy/116
1 819753 Client coverage & relationship management http://ontology.intranet.com/Taxonomy/116
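For deeper nesting, a hedged alternative is pandas.json_normalize: it can expand the "concepts" records and carry "ko_id" along as metadata. Every value in this data is a one-element list, so each column is unwrapped afterwards:

import pandas as pd

# Hedged sketch (pandas 1.0+): expand 'concepts', keep 'ko_id' as metadata.
df = pd.json_normalize(json_sample, record_path="concepts", meta="ko_id")

# Each cell is a one-element list in this data, so unwrap it.
df = df.apply(lambda col: col.str[0])
print(df[["ko_id", "prefLabel", "uri"]])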
