I got an error: JSONDecodeError: Expecting value: line 1 column 1 (char 0), but I don't understand why.
Here is my code:
import json
import urllib.request
url = "apiurl"
data = json.loads(url)
# Open the URL as Browser, not as python urllib
page = urllib.request.Request(url,headers={'User-Agent': 'Mozilla/5.0'})
infile = urllib.request.urlopen(page).read()
data = infile.decode('ISO-8859-1') # Read the content as string decoded with ISO-8859-1
command_obj = {x['command']: x for x in data}
with open('new_command.json', 'w') as f:
    json.dump(command_obj, f, indent=2)
With this function, I'm just trying to fetch data from an API and modify its format. Thanks for your help.
You're trying to read the URL itself (and not its content) as JSON:
data = json.loads(url)
... instead you want to read the content returned from the API as JSON:
# Open the URL as Browser, not as python urllib
page = urllib.request.Request(url,headers={'User-Agent': 'Mozilla/5.0'})
infile = urllib.request.urlopen(page).read()
data = infile.decode('ISO-8859-1')
# avoid re-using `data` variable name
json_data = json.loads(data)
However, be aware that JSON should always be returned as UTF-8, never as ISO-8859-1 / latin-1.
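For completeness, here is a minimal corrected sketch of the whole flow from the question. It assumes the API returns a JSON array of objects that each carry a "command" key (as the question's dictionary comprehension implies) and that the endpoint actually serves UTF-8:
import json
import urllib.request

url = "apiurl"  # placeholder URL from the question

req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
raw = urllib.request.urlopen(req).read()
records = json.loads(raw.decode('utf-8'))  # parse the response body, not the URL string

# Index the records by their "command" field (assumes a list of dicts)
command_obj = {x['command']: x for x in records}

with open('new_command.json', 'w') as f:
    json.dump(command_obj, f, indent=2)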
Related
I am currently trying to read out the locations of a company. The information about the locations is inside a script tag (JSON), so I read out the content inside the corresponding script tag.
This is my code:
sauce = requests.get('https://www.ep.de/store-finder', verify=False, headers = {'User-Agent':'Mozilla/5.0'})
soup1 = BeautifulSoup(sauce.text, features="html.parser")
all_scripts = soup1.find_all('script')[6]
all_scripts.contents
The output is:
['\n\t\twindow.storeFinderComponent = {"center":{"lat":51.165691,"long":10.451526},"bounds":[[55.655085,5.160441],[46.439648,15.666775]],"stores":[{"code":"1238240","lat":51.411572,"long":10.425264,"name":"EP:Schulze","url":"/schulze-breitenworbis","showAsClosed":false,"isBusinessCard":false,"logoUrl":"https://cdn.prod.team-ec.com/logo/retailer/retailerlogo_epde_1238240.png","address":{"street":"Weststraße 6","zip":"37339","town":"Breitenworbis","phone":"+49 (36074) 31193"},"email":"info#ep-schulze-breitenworbis.de","openingHours":[{"day":"Mo.","openingTime":"09:00","closingTime":"18:00","startPauseTime":"13:00","endPauseTime":"14:30"},{"day":"Di.","openingTime":"09:00","closingTime":"18:00","startPauseTime":"13:00","endPauseTime":"14:30"},{"day":"Mi.","openingTime":"09:00","closingTime":"18:00","startPauseTime":"13:00","endPauseTime":"14:30"},...]
I have problems converting the content to a dictionary and reading all lat and long data.
When I try:
data = json.loads(all_scripts.get_text())
all_scripts.get_text() returns an empty list
So I tried:
data = json.loads(all_scripts.contents)
But then I get a TypeError: the JSON object must be str, bytes or bytearray, not list
I don't know how to convert the .contents output to JSON:
data = json.loads(str(all_scripts.contents))
JSONDecodeError: Expecting value: line 1 column 2 (char 1)
Can anyone help me?
You could use a regex to pull out the JSON and read that in.
import requests
import re
import json
html = requests.get('https://www.ep.de/store-finder', verify=False, headers = {'User-Agent':'Mozilla/5.0'}).text
pattern = re.compile(r'window\.storeFinderComponent = ({.*})')
result = pattern.search(html).group(1)
jsonData = json.loads(result)
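Since the goal is to read all the lat and long data, you can then walk the "stores" list that is visible in the dumped JSON (key names taken from the output shown above):
coordinates = [(store['lat'], store['long']) for store in jsonData['stores']]
print(coordinates[:3])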
You can remove the first part of the data, then the trailing characters, and then load the result as JSON:
import json
data = all_scripts.contents[0]
removed_data = data.replace("\n\t\twindow.storeFinderComponent = ", "")
clean_data = removed_data[:-3]
json_data = json.loads(clean_data)
Output:
{'center': {'lat': 51.165691, 'long': 10.451526},
'bounds': [[55.655085, 5.160441], [46.439648, 15.666775]],
'stores': [{'code': '1238240',
'lat': 51.411572,
....
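If you'd rather not count trailing characters, a slightly more defensive sketch (still assuming the script body is just "window.storeFinderComponent = {...};" plus whitespace) splits on the first = and strips the trailing semicolon:
import json

raw = all_scripts.contents[0]
# Drop the "window.storeFinderComponent" prefix and the trailing ";" / whitespace
payload = raw.split("=", 1)[1].strip().rstrip(";")
json_data = json.loads(payload)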
I'm trying to snip an embedded JSON object from a webpage and then pass it to json.loads(). The first URL is okay, but when loading the second URL it returns an error:
ValueError: Unterminated string starting at: line 1 column 2078 (char 2077)
Here is the code:
import requests,json
from bs4 import BeautifulSoup
urls = ['https://www.autotrader.co.uk/dealers/greater-manchester/manchester/williams-landrover-9994',
'https://www.autotrader.co.uk/dealers/warwickshire/stratford-upon-avon/guy-salmon-land-rover-stratford-upon-avon-9965'
]
for url in urls:
    r = requests.get(url)
    soup = BeautifulSoup(r.content, 'lxml')
    scripts = soup.find_all('script')[0]
    data = scripts.text.split("window['AT_APOLLO_STATE'] = ")[1].split(';')[0]
    jdata = json.loads(data)
    print(jdata)
If you print out scripts.text.split("window['AT_APOLLO_STATE'] = ")[1], you will see the text below, which contains a ; right after "and enthusiastic". So .split(';')[0] cuts the string off at that semicolon, and the truncated data is not a valid JSON string:
"strapline":"In our state-of-the-art dealerships across the U.K, Sytner Group
represents the world’s most prestigious car manufacturers.
All of our staff are knowledgeable and enthusiastic; making every interaction
special by going the extra mile.",
The reason has been given above. You could also regex out the appropriate string:
import requests, json, re
urls = ['https://www.autotrader.co.uk/dealers/greater-manchester/manchester/williams-landrover-9994',
'https://www.autotrader.co.uk/dealers/warwickshire/stratford-upon-avon/guy-salmon-land-rover-stratford-upon-avon-9965'
]
p = re.compile(r"window\['AT_APOLLO_STATE'\] =(.*?});", re.DOTALL)
for url in urls:
    r = requests.get(url)
    jdata = json.loads(p.findall(r.text)[0])
    print(jdata)
Missed a } in the original post.
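If one of the pages ever lacks that script, findall returns an empty list and the [0] lookup raises an IndexError; a small guarded sketch of the same approach:
import re
import json
import requests

p = re.compile(r"window\['AT_APOLLO_STATE'\] =(.*?});", re.DOTALL)

def extract_apollo_state(url):
    """Return the embedded AT_APOLLO_STATE object as a dict, or None if the page has none."""
    m = p.search(requests.get(url).text)
    return json.loads(m.group(1)) if m else None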
I want to convert an HTTP GET response (I am using the requests library) to a Python object. Here's my code:
# Full, pure, response
response = requests.get(url)
# Getting request data/content represented in byte array
content = response.content
# Byte array to string
data = content.decode('utf8')
# This line causes "ValueError: malformed node or string: <_ast.Name object at 0x7f35068be128>"
#data = ast.literal_eval(data)
# I tried this also but data is still string after those 2 lines
data = json.dumps(data)
data = json.loads(data, object_hook=lambda d: namedtuple('X', d.keys())(*d.values()))
You can get the response as a dictionary using content = response.json() (assuming the response body is JSON). For the namedtuple conversion, pass the raw text (response.text) to json.loads rather than the already-parsed dictionary:
# Full, pure, response
response = requests.get(url)
# Getting the response as a dictionary
content = response.json()
# Parsing the raw JSON text into namedtuples
data = json.loads(response.text, object_hook=lambda d: namedtuple('X', d.keys())(*d.values()))
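For illustration only, with a made-up payload (the real keys depend on your API), the resulting namedtuples expose the JSON keys as attributes:
import json
from collections import namedtuple

sample = json.loads('{"name": "Ada", "id": 1}',  # hypothetical payload
                    object_hook=lambda d: namedtuple('X', d.keys())(*d.values()))
print(sample.name, sample.id)  # Ada 1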
I am trying to get a certain value out of a JSON string, but I can't figure out exactly how to do it. I don't want to convert it into a string and strip/replace the unwanted pieces, because then I won't be able to get the other values. My current code is:
username = "Dextication"
url = f"https://minecraft-statistic.net/api/player/info/{username}/"
response = requests.get(url)
json_data = json.loads(response.text)
print(json_data)
Edit:
When I run this, json_data is: {"status":"ok","data":{"online":0,"total_time_play":46990,"last_play":1513960562,"license":1,"name":"Dextication","uuid":"74d57a754855410c90b3d51bc99b8beb"}}
I would like to only print the value: 46990
Try the code below:
import json, requests
username = "Dextication"
url = f"https://minecraft-statistic.net/api/player/info/{username}/"
response = requests.get(url)
json_data = json.loads(response.text)
result = json_data['data']['total_time_play']
print (result)
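As a side note, requests can decode the JSON body for you, so json.loads isn't strictly needed here:
import requests

username = "Dextication"
url = f"https://minecraft-statistic.net/api/player/info/{username}/"
# requests parses the JSON body directly
print(requests.get(url).json()['data']['total_time_play'])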
I'm a noob, and I need to use the sessionid to post other commands like search.do. I'm using Python 3.5, but I'm not sure of the best way to get and post it.
Here is how I posted the request:
import urllib.parse
import urllib.request
url = 'https://myapi.application.com/dmapi/login.do'
values = {'account' : 'MYACCOUNT', 'username': 'admin', 'password': 'pas1234', 'appid':'12346'}
data = urllib.parse.urlencode(values)
data = data.encode('utf-8') # data should be bytes
req = urllib.request.Request(url, data)
resp = urllib.request.urlopen(req)
respData = resp.read()
print(respData)
Printing gives this result:
b'errorcode=0\r\nsessionid=ef9a9cbd-e063-4be2-9301-9de59891304c\r\n'
I need to use the sessionid in subsequent requests. What's the best way to go about this?
In fact the response is composed of lines (in bytes), one of which contains the session id. You could simply read and parse what you get:
resp = urllib.request.urlopen(req)
errorcode = None
sessionid = None
for line in resp.read().splitlines():  # iterate over the byte lines, not over individual bytes
    line = line.strip()  # remove stray whitespace
    if line.startswith(b'errorcode'):
        errorcode = line.split(b'=')[1]
    if line.startswith(b'sessionid'):
        sessionid = line.split(b'=')[1]
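How the sessionid is then passed back depends on the DM API; as a rough sketch, assuming search.do accepts it as an ordinary form field (you would need to confirm this, and the 'query' parameter here is made up):
import urllib.parse
import urllib.request

search_url = 'https://myapi.application.com/dmapi/search.do'
# Assumption: search.do takes the sessionid as a form field; 'query' is a hypothetical parameter
params = urllib.parse.urlencode({'sessionid': sessionid.decode('utf-8'), 'query': 'example'}).encode('utf-8')
search_resp = urllib.request.urlopen(urllib.request.Request(search_url, params))
print(search_resp.read())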
One idea is to split by sessionid= and extract the last item:
>>> respData.decode().split("sessionid=")[-1].strip()
'ef9a9cbd-e063-4be2-9301-9de59891304c'
Another is to use a regular expression:
>>> import re
>>>
>>> re.search(r"sessionid=([A-Za-z0-9-]+)", respData.decode()).group(1)
'ef9a9cbd-e063-4be2-9301-9de59891304c'
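Or, since every line of the reply has the key=value shape, you can parse the whole thing into a dict in one go:
>>> fields = dict(line.split("=", 1) for line in respData.decode().splitlines() if "=" in line)
>>> fields['sessionid']
'ef9a9cbd-e063-4be2-9301-9de59891304c'
>>> fields['errorcode']
'0'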