Unable to parse json response with python

So I ran into an issue while importing API data into my code. Any help is much appreciated.
from urllib2 import Request, urlopen, URLError
import json, requests
data = requests.get('https://masari.superpools.net/api/live_stats?update=1522693430318').json()
data_parsed = json.loads(open(data,"r").read())
print data_parsed
I'm still quite new to python, and I ran into this error:
>C:\Users\bot2>python C:\Users\bot2\Desktop\Python_Code\helloworld.py
Traceback (most recent call last):
File "C:\Users\bot2\Desktop\Python_Code\helloworld.py", line 5, in <module>
data_parsed = json.loads(open(data,"r").read())
TypeError: coercing to Unicode: need string or buffer, dict found

data is already a parsed JSON object: requests' .json() method returns a Python dict here, so there is nothing left to open() or decode again. Just do the following:
data = requests.get('https://masari.superpools.net/api/live_stats?update=1522693430318').json()
print data
Use data['network'], for example, to access nested dictionaries.
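To make the nested access concrete, here is a minimal sketch (Python 3). The structure below is illustrative, not the real API schema; only the 'network' key is taken from the answer above:

```python
# requests' .json() returns plain Python objects, so nested fields
# are ordinary dict/list lookups. Sample data, illustrative only:
data = {
    "network": {"height": 123456, "difficulty": 9876},
    "pool": {"miners": 42},
}

height = data["network"]["height"]           # nested dict access
miners = data.get("pool", {}).get("miners")  # .get avoids a KeyError
print(height, miners)
```

There is no need for json.loads here at all: that function is for decoding a JSON string, and .json() has already done that step.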

Related

JSON Attribute Error str has no attribute JSON when collecting data off of website

Hello, I'm making a program to collect specific data from a website, and I'm using cloudscraper so I don't get blocked. I have used cloudscraper before and it works perfectly fine. The only thing now is I'm trying to get the gas guzzlers on Ethereum, but when I try connecting to opensea I get the error below. Would anyone know of a solution, or an API I can use to get gas guzzlers?
Traceback (most recent call last):
File "main.py", line 9, in <module>
data = response.json()
AttributeError: 'str' object has no attribute 'json'
Code
import cloudscraper
import json
import re
from json import loads
# Or: scraper = cloudscraper.CloudScraper() # CloudScraper inherits from requests.Session
scraper = cloudscraper.CloudScraper()
response = scraper.get("https://etherscan.io/gastracker").text
data = response.json()
b = json.loads(data)
print(b)
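A likely cause, sketched below without the live site: scraper.get(...) returns a Response object, but .text turns it into a plain string, and strings have no .json() method. The fix is to keep the Response and call .json() on it (a stand-in class is used here so the example is self-contained; note also that the gastracker URL serves HTML, so a real JSON endpoint would be needed for the actual data):

```python
import json

class FakeResponse:
    """Stand-in for a requests/cloudscraper Response, for illustration.
    cloudscraper's CloudScraper behaves like a requests.Session."""
    def __init__(self, body):
        self.text = body  # .text is always a plain str

    def json(self):
        return json.loads(self.text)

response = FakeResponse('{"gas": "12"}')

# Wrong: taking .text first leaves you with a str, which has no .json()
assert not hasattr(response.text, "json")

# Right: call .json() on the response object itself
data = response.json()
print(data["gas"])
```

Separately, json.loads(data) on the result of .json() would also fail, since the body has already been decoded into a dict at that point.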

grabbing the api dictionary json with python

I was following this tutorial on api grabbing with python:
https://www.youtube.com/watch?v=pxofwuWTs7c
The url gives:
{"date":"1468500743","ticker":{"buy":"27.96","high":"28.09","last":"27.97","low":"27.69","sell":"27.97","vol":"41224179.11399996"}}
I tried to follow the video and grab the 'last' data.
import urllib2
import json
url = 'https://www.okcoin.cn/api/v1/ticker.do?symbol=ltc_cny'
json_obj=urllib2.urlopen(url)
data= json.load(json_obj)
for item in data['ticker']:print item['last']
After typing the last line python returns:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: string indices must be integers
I think you just misread the payload returned by the server. In this case the ticker key is not a list in the dictionary produced by the json module; it is another dictionary, so iterating over it gives you its keys rather than sub-dictionaries.
So you should do the following:
import urllib2
import json
url = 'https://www.okcoin.cn/api/v1/ticker.do?symbol=ltc_cny'
json_obj = urllib2.urlopen(url)
data = json.load(json_obj)
print data['ticker']['last']
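To see why the original loop raised "string indices must be integers" (Python 3 shown, using the sample payload quoted in the question): iterating over a dict yields its keys, which are strings, so item['last'] tries to index a string with a string.

```python
import json

payload = ('{"date":"1468500743","ticker":{"buy":"27.96","high":"28.09",'
           '"last":"27.97","low":"27.69","sell":"27.97",'
           '"vol":"41224179.11399996"}}')
data = json.loads(payload)

# Iterating a dict yields its keys (strings such as 'buy', 'last'),
# which is why item['last'] failed inside the loop.
keys = [item for item in data['ticker']]
assert 'last' in keys

# Indexing straight into the nested dict is what was wanted:
print(data['ticker']['last'])
```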

Write data scraped to text file with python script

I am a newbie to data scraping. This is the first program I am writing in Python to scrape data and store it in a text file. I have written the following code to scrape the data.
from bs4 import BeautifulSoup
import urllib2
text_file = open("scrape.txt","w")
url = urllib2.urlopen("http://ga.healthinspections.us/georgia/search.cfm?1=1&f=s&r=name&s=&inspectionType=&sd=04/24/2016&ed=05/24/2016&useDate=NO&county=Appling&")
content = url.read()
soup = BeautifulSoup(content, "html.parser")
type = soup.find('span',attrs={"style":"display:inline-block; font-size:10pt;"}).findAll()
for found in type:
    text_file.write(found)
However, when I run this program from the command prompt it shows me the following error.
c:\PyProj\Scraping>python sample1.py
Traceback (most recent call last):
File "sample1.py", line 9, in <module>
text_file.write(found)
TypeError: expected a string or other character buffer object
What am I missing here? Is there anything I haven't added? Thanks.
You need to check whether type is None, i.e. whether soup.find actually found what you searched for.
Also, don't use the name type: it shadows a builtin.
find returns a single Tag object and find_all returns a list of them. When you print a Tag you see its string representation, but that conversion isn't invoked by file.write. You have to decide which attribute of found you want to write, for example str(found) or found.get_text().
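A minimal sketch of the fix (Python 3, with an inline HTML snippet standing in for the scraped page; the span contents are made up). The two key points are the None check and converting each Tag to text before writing:

```python
from bs4 import BeautifulSoup

# Illustrative HTML standing in for the inspection-results page:
html = '<span style="x"><b>Cafe A</b><i>Pass</i></span>'
soup = BeautifulSoup(html, "html.parser")

out_lines = []
span = soup.find('span')          # may be None if nothing matched
if span is not None:
    for tag in span.find_all():   # find_all returns Tag objects
        # file.write needs a string; convert each Tag first
        out_lines.append(tag.get_text())

print(out_lines)
```

In the original script you would then do text_file.write(tag.get_text() + "\n") inside the loop instead of writing the Tag itself.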

Parse HTML table in file to csv with BeautifulSoup

Hi, I'm a Python noob and an even bigger BeautifulSoup and HTML noob. I have a downloaded file that has an HTML table in it. In all the examples of BeautifulSoup parsing I have seen, they use urllib to open the table's URL, then read the response and pass it to BeautifulSoup to parse. My question is: for a locally stored file, do I have to load the entire file into memory? So instead of doing, say:
contenturl = "http://www.bank.gov.ua/control/en/curmetal/detail/currency?period=daily"
soup = BeautifulSoup(urllib2.urlopen(contenturl).read())
Do I instead do:
soup = BeautifulSoup(open('/home/dir/filename').read())
That doesn't really seem to work; I get the following error:
Traceback (most recent call last):
File "<string>", line 1, in <fragment>
TypeError: 'module' object is not callable
My apologies if it's something really silly I'm doing, but help is appreciated.
Update: the issue is resolved; I needed to import the BeautifulSoup class from the module, not the module itself. Thank you!
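For later readers, a sketch of both fixes (Python 3 and the modern bs4 package; the file contents here are made up). "'module' object is not callable" happens when you do `import BeautifulSoup` and then call BeautifulSoup(...), which calls the module rather than the class. And for local files, BeautifulSoup accepts an open file object directly, so you do not have to .read() the whole file yourself:

```python
from bs4 import BeautifulSoup   # import the class, not the module
import os
import tempfile

# Illustrative local file standing in for the downloaded table:
html = "<table><tr><td>UAH</td><td>25.5</td></tr></table>"
path = os.path.join(tempfile.mkdtemp(), "rates.html")
with open(path, "w") as f:
    f.write(html)

# Pass the file object straight to BeautifulSoup; it reads it for you.
with open(path) as f:
    soup = BeautifulSoup(f, "html.parser")

cells = [td.get_text() for td in soup.find_all("td")]
print(cells)
```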

How do I fix a "JSONDecodeError: No JSON object could be decoded: line 1 column 0 (char 0)"?

I'm trying to get Twitter API search results for a given hashtag using Python, but I'm having trouble with this "No JSON object could be decoded" error. I had to add the extra % towards the end of the URL to prevent a string formatting error. Could this JSON error be related to the extra %, or is it caused by something else? Any suggestions would be much appreciated.
A snippet:
import simplejson
import urllib2
def search_twitter(quoted_search_term):
    url = "http://search.twitter.com/search.json?callback=twitterSearch&q=%%23%s" % quoted_search_term
    f = urllib2.urlopen(url)
    json = simplejson.load(f)
    return json
There were a couple of problems with your initial code. First, you never read the content from Twitter; you just opened the URL. Second, in the URL you set a callback (twitterSearch). A callback wraps the returned JSON in a function call, so in this case the body would have been twitterSearch(...), which is not valid bare JSON. Callbacks are useful when you want a special JavaScript function to handle the returned results, but not here.
import simplejson
import urllib2
def search_twitter(quoted_search_term):
    url = "http://search.twitter.com/search.json?&q=%%23%s" % quoted_search_term
    f = urllib2.urlopen(url)
    content = f.read()
    json = simplejson.loads(content)
    return json
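If for some reason the callback parameter cannot be dropped, the JSONP wrapper can also be stripped before decoding. A sketch (Python 3, with a made-up response body; the regex assumes the standard name(...) JSONP shape):

```python
import json
import re

# A JSONP body: the JSON payload wrapped in a function call.
body = 'twitterSearch({"results": [{"text": "hello"}]})'

# Strip the "name( ... )" wrapper, if present, keeping the inside.
match = re.match(r'^\s*\w+\((.*)\)\s*$', body, re.DOTALL)
payload = match.group(1) if match else body

data = json.loads(payload)
print(data["results"][0]["text"])
```

Dropping the callback parameter, as in the corrected code above, remains the simpler option when you control the request.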
