How to properly scrape a JSON response from Reddit? - Python

I am attempting to save all "author" entries from the JSON linked below into a list, but I am very new to Python. Can someone kindly point me in the right direction?
the json: https://codebeautify.org/jsonviewer/cb0d0a91
Trying to scrape a reddit thread:
import requests
import json

url = "https://www.reddit.com/r/easternshoremd/comments/72u501/going_to_be_in_the_easton_area_for_work_next_week.json"
r = requests.get(url, headers={'User-agent': 'Chrome'})
d = r.json()

scrapedids = []
for child in d['data']['children']:
    scrapedids.append(child['data']['author'])
print(scrapedids)
If I switch the URL from a Reddit post to the subreddit, then it works. For example, if I set
url = ("https://www.reddit.com/r/easternshoremd.json")
I believe the issue is my lack of understanding of the structure (directory/tree, whatever it's called) of the JSON. I've been hung up for a few hours and appreciate any assistance.
The error:
Traceback (most recent call last):
  File "/home/usr/PycharmProjects/untitled/delete.py", line 14, in <module>
    for child in d['data']['children']:
TypeError: list indices must be integers or slices, not str

You included a link to the JSON, which is good. It shows that the root is an array.
Therefore your code should look more like:
import requests

url = "https://www.reddit.com/r/easternshoremd/comments/72u501/going_to_be_in_the_easton_area_for_work_next_week.json"
r = requests.get(url, headers={'User-agent': 'Chrome'})
listings = r.json()

scrapedids = []
for listing in listings:
    for child in listing['data']['children']:
        scrapedids.append(child['data']['author'])
print(scrapedids)
Note that I renamed d to listings, which matches the kind attribute ('listing') of each element.
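
If you only want the comment authors (rather than the post author as well), here is a minimal sketch assuming the usual two-element layout of a post's .json, where listings[0] holds the submission and listings[1] the comment tree:

import requests

url = "https://www.reddit.com/r/easternshoremd/comments/72u501/going_to_be_in_the_easton_area_for_work_next_week.json"
listings = requests.get(url, headers={'User-agent': 'Chrome'}).json()

# assumption: listings[0] is the submission, listings[1] is the comment tree
comment_authors = [child['data']['author']
                   for child in listings[1]['data']['children']]
print(comment_authors)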

Related

How to get a link on a website using Python that updates dynamically?

I am trying to download the most recent zip file from the ERCOT website (https://www.ercot.com/mp/data-products/compliance-and-disclosure/?id=NP3-965-ER). However, the link to the zip file has a doclookup id that changes every time, and the id is populated dynamically. I have tried using BeautifulSoup to get the link, but since the page is loaded dynamically it does not return any links. Any feedback or solutions will be appreciated.
Using the exposed API:
import json

import pandas as pd
import pendulum
import requests


def get_document_id(type_id: int) -> int:
    url = (
        "https://www.ercot.com/misapp/servlets/IceDocListJsonWS?"
        f"reportTypeId={type_id}&"
        f"_={pendulum.now().format('X')}"
    )
    with requests.Session() as session:
        response = session.get(url, timeout=10)
        response.raise_for_status()
        data = json.loads(response.text)
    return (
        pd.json_normalize(data=data["ListDocsByRptTypeRes"], record_path="DocumentList")
        .head(1)["Document.DocID"]
        .squeeze()
    )


id_number = get_document_id(13052)
print(id_number)
869234127
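
Once you have the id, downloading the zip itself could look like the sketch below. The misdownload URL pattern is an assumption based on the links the ERCOT page generates, so verify it against your browser's network tab:

import requests

# assumed download endpoint; confirm the exact URL pattern in the browser's network tab
download_url = f"https://www.ercot.com/misdownload/servlets/mirDownload?doclookupId={id_number}"

response = requests.get(download_url, timeout=30)
response.raise_for_status()
with open("latest_report.zip", "wb") as f:
    f.write(response.content)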

Parsing JSON in Python 3, get email from API

I'm trying to write a little program that gets the emails (and other things in the future) from an API. But I'm getting "TypeError: list indices must be integers or slices, not str" and I don't know what to do about it. I've been looking at other questions here but I still don't get it. I might be a bit slow when it comes to this.
I've also been watching some tutorials online and did the same as them, but I still get different errors. I run Python 3.5.
Here is my code:
from urllib.request import urlopen
import json

# Opens the url for the API
url = 'https://jsonplaceholder.typicode.com/posts/1/comments'
r = urlopen(url)

# This should put the response from the API in a dict
result = r.read().decode('utf-8')
data = json.loads(result)

# This should get all the names from the dict
for name in data['name']:  # TypeError here.
    print(name)
I know that I could regex the text and get the result that I want.
Code for that:
from urllib.request import urlopen
import re

url = 'https://jsonplaceholder.typicode.com/posts/1/comments'
r = urlopen(url)
result = r.read().decode('utf-8')
f = re.findall(r'"email": "(\w+\S\w+)', result)
print(f)
But that seems like the wrong way to do this.
Can someone please help me understand what I'm doing wrong here?
data is a list of dicts; that's why you are getting a TypeError when you index it with a string.
The way to go is something like this:
for item in data:  # item is {"name": "foo", "email": "foo@mail..."}
    print(item['name'])
    print(item['email'])
@PiAreSquared's comment is correct, just a bit more explanation here:
from urllib.request import urlopen
import json

# Opens the url for the API
url = 'https://jsonplaceholder.typicode.com/posts/1/comments'
r = urlopen(url)

# This puts the response from the API into a Python object
result = r.read().decode('utf-8')
data = json.loads(result)

# your data is a list of elements,
# and each element is a dict object, so you can loop over the data
# to get each dict element, and then access its keys and values as you wish
# see below for some examples
for element in data:
    name = element['name']
    email = element['email']

# if you want to get all names, you should do
names = [element['name'] for element in data]

# same to get all emails
emails = [element['email'] for element in data]

Unable to extract the table from an API using Python

I am trying to extract a table using an API but I am unable to do so. I am pretty sure that I am not using it correctly, and any help would be appreciated.
I am trying to extract a table from this API but cannot figure out the right way to do it. This is what is mentioned on the website. I want to extract the Latest_full_data table.
This is my code to get the table, but I am getting an error:
import json
import urllib.request

locu_api = 'api_Key'

def locu_search(query):
    api_key = locu_api
    url = 'https://www.quandl.com/api/v3/databases/WIKI/metadata?api_key=' + api_key
    response = urllib.request.urlopen(url).read()
    json_obj = str(response, 'utf-8')
    datanew = json.loads(json_obj)
    return datanew
When I do print(datanew) I get the error below. Update: even if I change it to return datanew, the error is still the same.
I am getting this below error:
name 'datanew' is not defined
I had the same issues with urllib before. If possible, try to use requests; it is a better designed library in my opinion. It is also capable of reading JSON with a single function, so there is no need to decode it over multiple lines. Sample code here:
import requests

locu_api = 'api_Key'

def locu_search():
    url = 'https://www.quandl.com/api/v3/databases/WIKI/metadata?api_key=' + locu_api
    return requests.get(url).json()

locu_search()
Edit:
The endpoint that you are calling might not be the correct one. I think you are looking for the following one:
import requests

api_key = 'your_api_key_here'

def locu_search(dataset_code):
    url = f'https://www.quandl.com/api/v3/datasets/WIKI/{dataset_code}/metadata.json?api_key={api_key}'
    req = requests.get(url)
    return req.json()

data = locu_search("FB")
This will return all the metadata for a company, in this case Facebook.
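
A brief usage sketch; the top-level "dataset" key and the field names below are assumptions about the usual shape of Quandl's v3 metadata response, so inspect the raw JSON first:

meta = locu_search("FB")

# assumption: the response is wrapped in a top-level "dataset" object
dataset = meta["dataset"]
print(dataset["name"])
print(dataset["column_names"])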
Maybe it doesn't apply to your specific problem, but what I normally do is the following:
import json

import requests

def get_values(url):
    response = requests.get(url).text
    values = json.loads(response)
    return values

Scraping data from JSON after using requests

I am trying to extract specific data from a requested JSON file.
After passing Authorization and using requests.get I got my response; I think it is called a dictionary by Python coders and JSON by JavaScript coders.
It contains much more information than I need, and I would like to extract only one or two fields,
for example {"bio": "hello world"},
and the JSON contains more than one "bio".
For example, I am scraping 100 accounts and would like to extract all the "bio" values with one piece of code.
So I tried this:
from bs4 import BeautifulSoup
import requests

headers = {"Authorization": "xxxx"}
req = requests.get('website', headers=headers)
data = req.text

soup = BeautifulSoup(data, 'html.parser')
titles = soup.find_all('span', {'class': 'bio'})
for title in titles:
    print(title.text)
That didn't work, and I tried multiple other ideas with no success.
If possible, please write code that I can understand, since I am trying to learn from my mistakes.
Thanks
The Aphid library I created is perfect for this.
From the command prompt:
py -m pip install Aphid
Then it's just as easy as loading your JSON data and searching it with Aphid.
import json

import requests
import Aphid

resp = requests.get(yoururl)
data = json.loads(resp.text)
results = Aphid.findall(data, 'bio')
results is now a list of (key, value) tuples, one for every occurrence of the 'bio' key.
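
Since results is, per the description above, a list of (key, value) tuples, collecting just the bio strings is a one-liner:

# keep only the values from the (key, value) tuples
bios = [value for key, value in results]
print(bios)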
After you get your response, either:
you get a simple JSON file (in which case you import it into Python using json), or
you get an HTML file from which you can extract the JSON code (using BeautifulSoup), which in turn you parse using the json library; a sketch of this case is below.
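
A minimal sketch of that second case, assuming the page embeds its JSON in a <script> tag; the tag id used here is hypothetical, so inspect the real page to find where the JSON actually lives:

import json

import requests
from bs4 import BeautifulSoup

headers = {"Authorization": "xxxx"}
resp = requests.get('website', headers=headers)
soup = BeautifulSoup(resp.text, 'html.parser')

# hypothetical: the page stores its data in <script id="initial-data">...</script>
script = soup.find('script', {'id': 'initial-data'})
data = json.loads(script.string)
print(data.get('bio'))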

How do I fix a "JSONDecodeError: No JSON object could be decoded: line 1 column 0 (char 0)"?

I'm trying to get Twitter API search results for a given hashtag using Python, but I'm having trouble with this "No JSON object could be decoded" error. I had to add the extra % towards the end of the URL to prevent a string formatting error. Could this JSON error be related to the extra %, or is it caused by something else? Any suggestions would be much appreciated.
A snippet:
import simplejson
import urllib2

def search_twitter(quoted_search_term):
    url = "http://search.twitter.com/search.json?callback=twitterSearch&q=%%23%s" % quoted_search_term
    f = urllib2.urlopen(url)
    json = simplejson.load(f)
    return json
There were a couple of problems with your initial code. First, you never read the content from Twitter; you just opened the URL. Second, in the URL you set a callback (twitterSearch). What a callback does is wrap the returned JSON in a function call, so in this case the response would have been twitterSearch(...). This is useful if you want a special function to handle the returned results.
import simplejson
import urllib2

def search_twitter(quoted_search_term):
    url = "http://search.twitter.com/search.json?&q=%%23%s" % quoted_search_term
    f = urllib2.urlopen(url)
    content = f.read()
    json = simplejson.loads(content)
    return json
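
If you did want to keep the callback parameter, you would have to strip the JSONP wrapper before decoding; a minimal sketch:

import re
import simplejson

def strip_jsonp(content):
    # turn 'twitterSearch({...});' back into bare '{...}'
    match = re.match(r'^[^(]*\((.*)\);?\s*$', content, re.DOTALL)
    return simplejson.loads(match.group(1) if match else content)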
