I am reading a JSON file with Python using the code below:
import json
Ums = json.load(open('commerceProduct.json'))
for um in Ums:
    des = um['description']
    if des == None:
        um['description'] = "Null"
        with open("sample.json", "w") as outfile:
            json.dump(um, outfile)
        break
It is giving me the following error:
Traceback (most recent call last):
File "test.py", line 2, in <module>
Ums = json.load(open('commerceProduct.json'))
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/json/__init__.py", line 293, in load
return loads(fp.read(),
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 5528 (char 5527)
When I check the JSON file, it looks fine.
The thing is, it has one object per line, with '\n' as the delimiter.
It is not corrupted, since I have imported the same file into MongoDB.
Can someone please suggest what might be wrong with it?
Thanks.
Your JSON data is not in a valid format; a single mistake will trip up the Python parser. Test your JSON data with an online JSON validator to make sure it is correctly formatted.
The return _default_decoder.decode(s) line shows up in the traceback when the Python parser finds something wrong with your JSON.
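If you want to see exactly where the parser gives up, here is a minimal sketch (assuming Python 3.5+ and the file name from the question) that catches the exception and prints its position:
import json

# A sketch: catch the decode error and report exactly where parsing stopped.
try:
    with open('commerceProduct.json') as f:
        Ums = json.load(f)
except json.JSONDecodeError as exc:
    print("Parsing failed at line %d, column %d: %s" % (exc.lineno, exc.colno, exc.msg))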
Your code is valid and will work with a valid JSON document.
You have one JSON object per line? That's not a valid JSON file; what you have is newline-delimited JSON, so consider using the ndjson package to read it. It has the same API as the json package you are already familiar with.
import ndjson
Ums = ndjson.load(open('commerceProduct.json'))
...
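If you would rather avoid the extra dependency, here is a sketch of the same idea using only the standard library, parsing each line separately:
import json

Ums = []
with open('commerceProduct.json') as f:
    for line in f:
        line = line.strip()
        if line:  # skip blank lines
            Ums.append(json.loads(line))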
Related
I have the following JSON file which contains some extra content (the first and last lines), because of which I am unable to load it as JSON. I want to edit this file using Python so that only the content inside the {} braces remains and "source.value(" and ");" are removed.
source.value(
{Meli:1,jack:3,rustin:4}
);
with open('check.json', 'rb') as g:
    b = json.load(g)
Traceback (most recent call last):
File "<input>", line 2, in <module>
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
I'm not sure how best to generalize for your specific problem, since many things could change based on how these files are generated.
First off, the JSON format requires the names to be in double quotes: {"Meli": 1, "jack": 3, "rustin": 4}
Assuming this is fixed, we open the file, read the lines, and only use json to parse the second line. Remember that Python indexes from 0.
import json

with open('test.json', 'r') as file:
    lines = file.readlines()

data = json.loads(lines[1])
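If you cannot count on the wrapper being exactly one line above and one line below, here is a sketch that keeps only the text between the first '{' and the last '}' (still assuming the names have been given proper double quotes):
import json

with open('test.json', 'r') as file:
    raw = file.read()

# Drop the surrounding source.value( ... ); wrapper and parse what is left.
start = raw.index('{')
end = raw.rindex('}') + 1
data = json.loads(raw[start:end])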
I have a question concerning an issue I ran into while using the json lib in Python.
I'm trying to read a JSON file using the json.load(file) function with the following code:
import json
filename= '../Data/exampleFile.json'
histFile= open(filename, 'w+')
print(json.load(histFile))
The JSON file I am trying to read is valid according to an online validator I found (I would include a screenshot of that validation, but I'm new and still lack the reputation).
The error message I'm getting is the following:
File ".\testLoad.py", line 5, in <module>
print(json.load(histFile))
File "C:\Users\...\Python\Python37\lib\json\__init__.py", line 296, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "C:\Users\...\Python\Python37\lib\json\__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "C:\Users\...\Python\Python37\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\...\Python\Python37\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Alright, so I believe it is not the file that is the issue; json.load(file) works for me in other cases.
Sadly, I was not able to figure this error message out on my own, so it would be amazing if someone with more experience with Python/JSON interaction could help me out.
You opened the file for writing:
histFile= open(filename, 'w+')
#                        ^^^^
The w mode truncates the file first, so the file is empty. (It doesn't matter here that the file can also be read from; the + sees to that, but the file is truncated nonetheless.) See the open() function documentation:
'w': open for writing, truncating the file first
There is no JSON data in it to parse. This is why the exception tells you that parsing failed at the very start of the file:
Expecting value: line 1 column 1 (char 0)
There is no data in line one, column one.
If you wanted to open a file for both reading and writing without truncating it first, use 'r+' as the file mode.
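For completeness, here is a minimal sketch of the read path, using the path from the question and plain 'r' since nothing is being written:
import json

filename = '../Data/exampleFile.json'

# 'r' (or 'r+' if you also need to write later) leaves the existing contents intact.
with open(filename, 'r') as histFile:
    data = json.load(histFile)

print(data)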
I am trying to write out a CSV file from data in JSON format. I can get the field names to write to the CSV file, but not the item values I need. This is my first time coding in Python, so any help would be appreciated. The JSON file can be found below for reference:
https://data.ny.gov/api/views/nqur-w4p7/rows.json?accessType=DOWNLOAD
Here is my error:
Traceback (most recent call last):
File "ChangeDataType.py", line 5, in <module>
data = json.dumps(inputFile)
File "/usr/lib64/python3.4/json/__init__.py", line 230, in dumps
return _default_encoder.encode(obj)
File "/usr/lib64/python3.4/json/encoder.py", line 192, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib64/python3.4/json/encoder.py", line 250, in iterencode
return _iterencode(o, 0)
File "/usr/lib64/python3.4/json/encoder.py", line 173, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: <_io.TextIOWrapper name='rows.json?accessType=DOWNLOAD' mode='r' encoding='UTF-8'> is not JSON serializable
Here is my code:
import json
import csv

inputFile = open("rows.json?accessType=DOWNLOAD", "r")
data = json.dumps(inputFile)

with open("Data.csv", "w") as csvfile:
    writer = csv.DictWriter(csvfile, extrasaction='ignore',
                            fieldnames=["date", "new_york_state_average_gal", "albany_average_gal",
                                        "binghamton_average_gal", "buffalo_average_gal", "nassau_average_gal",
                                        "new_york_city_average_gal", "rochester_average_gal",
                                        "syracuse_average_gal", "utica_average_gal"])
    writer.writeheader()
    for row in data:
        writer.writerow([row["date"], row["new_york_state_average_gal"], row["albany_average_gal"],
                         row["binghamton_average_gal"], row["buffalo_average_gal"], row["nassau_average_gal"],
                         row["new_york_city_average_gal"], row["rochester_average_gal"],
                         row["syracuse_average_gal"], row["utica_average_gal"]])
If you want to read a JSON file you should use json.load instead of json.dumps:
data = json.load(inputFile)
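Spelled out as a complete snippet (a sketch using the filename from the question):
import json

with open("rows.json?accessType=DOWNLOAD", "r") as inputFile:
    data = json.load(inputFile)

# data is now a regular Python object (most likely a dict for this file), not a string.
print(type(data))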
It seems you're still having problems even opening the file.
Python json to CSV
You were told to use json.load.
dumps converts an object to a string; you want to read JSON into a dictionary.
You therefore need to load the JSON file, and you can open two files at once:
with open("Data.csv","w") as csvfile, open("rows.json?accessType=DOWNLOAD") as inputfile:
data = json.load(inputfile)
writer = csv.DictWriter(csvfile,...
Also, for example, given that the JSON data only contains "fieldName" : "syracuse_average_gal" as the single occurrence of the Syracuse average field, row["syracuse_average_gal"] is not going to work.
Carefully inspect your JSON and figure out how to parse it from the very top bracket.
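For illustration, here is a sketch that assumes the usual layout of these exports: column definitions, each carrying a fieldName, sit under meta -> view -> columns, and each entry in data is a plain list of values in the same column order. Verify that against your own file before relying on it:
import csv
import json

with open("rows.json?accessType=DOWNLOAD") as inputfile:
    doc = json.load(inputfile)

# Assumed layout: field names under meta/view/columns, row values as lists under data.
columns = [col["fieldName"] for col in doc["meta"]["view"]["columns"]]

fieldnames = ["date", "new_york_state_average_gal", "albany_average_gal",
              "binghamton_average_gal", "buffalo_average_gal", "nassau_average_gal",
              "new_york_city_average_gal", "rochester_average_gal",
              "syracuse_average_gal", "utica_average_gal"]

with open("Data.csv", "w") as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames, extrasaction='ignore')
    writer.writeheader()
    for row in doc["data"]:
        writer.writerow(dict(zip(columns, row)))  # pair each value with its column name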
I have a simple WebSockets server in Python that receives messages from Android app clients. I tried to send the message payload from the client as JSON, but I failed; it only works when it is a String.
One solution I found is to keep the message a string, but formatted as JSON:
try {
    json.put("name", "Jack");
    json.put("age", "24");
    message = json.toString(2);
} catch (JSONException e) {
    e.printStackTrace();
}
webSocket.send(message);
Inspired by JavaScript's JSON.stringify(message).
I printed the message on the server and it appears to be correctly formatted.
My question is: how can I convert it back into JSON on the server when it is received?
I tried this way in Python:
def on_message(self, message):
    data = json.loads(message)
    self.write_message(data['name'])
but I got this error:
ERROR:tornado.application:Uncaught exception in /
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/tornado/websocket.py", line 494, in _run_callback
result = callback(*args, **kwargs)
File "index.py", line 24, in on_message
data = json.loads(message)
File "/usr/lib/python3.4/json/__init__.py", line 318, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.4/json/decoder.py", line 343, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.4/json/decoder.py", line 361, in raw_decode
raise ValueError(errmsg("Expecting value", s, err.value)) from None
ValueError: Expecting value: line 1 column 1 (char 0)
You should use Python's json package. json.dumps produces a JSON string from a Python object; to go the other way and parse the string you received, use json.loads(message).
Use the json package in Python:
import json
data = json.loads(your_var)
The data variable will then hold the parsed JSON data.
Hope this helps you.
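A slightly fuller sketch of that advice inside the handler from the question, with a guard so a non-JSON payload does not crash the connection (json.loads raises ValueError on Python 3.4):
import json

def on_message(self, message):
    # Parse the incoming text back into a Python dict; reply with an error
    # instead of raising if the payload is not valid JSON.
    try:
        data = json.loads(message)
    except ValueError:
        self.write_message("invalid JSON payload")
        return
    self.write_message(data['name'])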
Will something like this work for you?
import json
# assume this is the JSON you receive
text = json.dumps(dict(name='Jack', age='24'))
# show the text to be converted
print(text)
# outputs: {"name": "Jack", "age": "24"}
# load a string and convert to Python object
# see `json` module more for details
obj = json.loads(text)
I tried it this way and it worked for me: I converted the message into a dictionary using ast and then used the result as I wanted:
import ast

formattedMessage = ast.literal_eval(format(message))
self.write_message(formattedMessage["name"])
I'm trying to use the following code (within web2py) to read a csv file and convert it into a json object:
import csv
import json
from StringIO import StringIO

originalfilename, file_stream = db.tablename.file.retrieve(info.file)
file_contents = file_stream.read()
csv_reader = csv.DictReader(StringIO(file_contents))
json = json.dumps([x for x in csv_reader])
This produces the following error:
'utf8' codec can't decode byte 0xa0 in position 1: invalid start byte
Apparently, there is a problem handling the spaces in the .csv file. The problem appears to stem from the json.dumps() line. The traceback from that point on:
Traceback (most recent call last):
File ".../web2py/gluon/restricted.py", line 212, in restricted
exec ccode in environment
File ".../controllers/default.py", line 2345, in <module>
File ".../web2py/gluon/globals.py", line 194, in <lambda>
self._caller = lambda f: f()
File ".../web2py/gluon/tools.py", line 3021, in f
return action(*a, **b)
File ".../controllers/default.py", line 697, in generate_vis
request.vars.json = json.dumps(list(csv_reader))
File "/usr/local/lib/python2.7/json/__init__.py", line 243, in dumps
return _default_encoder.encode(obj)
File "/usr/local/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/local/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xa0 in position 1: invalid start byte
Any suggestions regarding how to resolve this, or another way to get a csv file (which contains a header; using StringIO) into a json object that won't produce similar complications? Thank you.
The csv module (under Python 2) is purely byte-based; all strings you get out of it are bytes. However JSON is Unicode character-based, so there is an implicit conversion when you try to write out the bytes you got from CSV into JSON. Python guessed UTF-8 for this, but your CSV file wasn't UTF-8 - it was probably Windows code page 1252 (Western European - like ISO-8859-1 only not quite).
A quick fix would be to transcode your input (file_contents= file_contents.decode('windows-1252').encode('utf-8')), but probably you don't really want to rely on json guessing a particular encoding.
Best would be to explicitly decode your strings at the point of reading them from CSV. Then JSON will be able to cope with them OK. Unfortunately, csv doesn't have built-in decoding (at least in this Python version), but you can do it manually:
class UnicodeDictReader(csv.DictReader):
    def __init__(self, f, encoding, *args, **kwargs):
        csv.DictReader.__init__(self, f, *args, **kwargs)
        self.encoding = encoding

    def next(self):
        return {
            k.decode(self.encoding): v.decode(self.encoding)
            for (k, v) in csv.DictReader.next(self).items()
        }

csv_reader = UnicodeDictReader(StringIO(file_contents), 'windows-1252')
json_output = json.dumps(list(csv_reader))
"it's not known in advance what sort of encoding will come up"
Well, that's more of a problem, since it's impossible to guess accurately what encoding a file is in. You would either have to specify a particular encoding, or give the user a way to signal what the encoding is, if you want to support non-ASCII characters properly.
Try replacing your final line with
json = json.dumps([x.encode('utf-8') for x in csv_reader])
Running unidecode over the file contents seems to do the trick:
from isounidecode import unidecode
...
file_contents = unidecode(file_stream.read())
...
Thanks, everyone!