Scraping JSON from AJAX calls - python

Background
Consider this URL:
base_url = "https://www.olx.bg/ad/sobstvenik-tristaen-kamenitsa-1-CID368-ID81i3H.html"
I want to make the AJAX call that returns the telephone number:
ajax_url = "https://www.olx.bg/ajax/misc/contact/phone/7XarI/?pt=e3375d9a134f05bbef9e4ad4f2f6d2f3ad704a55f7955c8e3193a1acde6ca02197caf76ffb56977ce61976790a940332147d11808f5f8d9271015c318a9ae729"
Wanted results
If I click the button on the site in my Chrome browser, the console shows the wanted result:
{"value":"088 *****"}
Debugging
If I open a new tab and paste the ajax_url, I always get empty values:
{"value":"000 000 000"}
If I try something like:
Bash:
wget $ajax_url
Python:
import requests
json_response = requests.get(ajax_url)
I just receive the HTML of the site's error-handling page.
Ideas
The browser clearly sends something extra when it makes the request. What is that extra something? Maybe a cookie?
How do I get the wanted result with Bash/Python?
Edit
The status code of the HTML response is 200.
I have also tried with curl and get the same HTML problem.
Kind of a fix.
I have noticed that if I copy the browser's cookie and make a request with all of the headers, INCLUDING the cookie from the browser, I get the correct result:
# I think the most important header is the cookie
headers = DICT_WITH_HEADERS_FROM_BROWSER
json_response = requests.get(next_url,
                             headers=headers)
Final question
The only question left is how can I generate a cookie through a Python script?

First you should create a requests Session to store cookies.
Then send an HTTP GET request to the page that actually makes the AJAX call. Any cookie the website sets is returned in the GET response, and your session stores it.
Then you can use the same session to call the AJAX API.
Important Note 1:
The AJAX URL is called with an HTTP POST request on the original website! You should not send a GET request to that URL.
Important Note 2:
You must also extract phoneToken from the website's JS code, where it is stored in a variable like var phoneToken = 'here is the pt';
Sample code:
import re
import requests
my_session = requests.Session()
# call html website
base_url = "https://www.olx.bg/ad/sobstvenik-tristaen-kamenitsa-1-CID368-ID81i3H.html"
base_response = my_session.get(url=base_url)
assert base_response.status_code == 200
# extract phone token from base url response
phone_token = re.findall(r'phoneToken\s=\s\'(.+)\';', base_response.text)[0]
# call ajax api
ajax_path = "/ajax/misc/contact/phone/81i3H/?pt=" + phone_token
ajax_url = "https://www.olx.bg" + ajax_path
ajax_headers = {
    'accept': '*/*',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9,fa;q=0.8',
    'sec-fetch-mode': 'cors',
    'sec-fetch-site': 'same-origin',
    'Referer': 'https://www.olx.bg/ad/sobstvenik-tristaen-kamenitsa-1-CID368-ID81i3H.html',
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36'
}
ajax_response = my_session.post(url=ajax_url, headers=ajax_headers)
print(ajax_response.text)
When you run the code above, the result below is displayed:
{"value":"088 558 9937"}

from selenium import webdriver
from selenium.webdriver.firefox.options import Options
from bs4 import BeautifulSoup
import time

options = Options()
options.add_argument('--headless')
driver = webdriver.Firefox(options=options)
driver.get('https://www.olx.bg/ad/sobstvenik-tristaen-kamenitsa-1-CID368-ID81i3H.html')
# click the masked phone number so the page reveals the full value
driver.find_element_by_xpath(
    "/html/body/div[3]/section/div[3]/div/div[1]/div[2]/div/ul[1]/li[2]/div/strong").click()
time.sleep(2)
source = driver.page_source
soup = BeautifulSoup(source, 'html.parser')
phone = soup.find("strong", {'class': 'xx-large'}).text
print(phone)
Output:
088 558 9937
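If the hard-coded time.sleep(2) ever proves flaky, an explicit wait can replace it. A minimal sketch, assuming the revealed number shows up in the same strong element with class xx-large once the masking asterisks disappear:
from selenium.webdriver.support.ui import WebDriverWait

# wait up to 10 seconds for the click to replace the masked "088 *****" value
WebDriverWait(driver, 10).until(
    lambda d: "*" not in d.find_element_by_class_name("xx-large").text
)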

Related

How do I web scrape this link and iterate through the page numbers?

My goal is to web scrape this URL and iterate through the pages. I keep getting a strange error. My code and the error follow:
import requests
import json
import pandas as pd
url = 'https://www.acehardware.com/api/commerce/storefront/locationUsageTypes/SP/locations?page='
headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:98.0) Gecko/20100101 Firefox/98.0',
}
# create a url list to scrape data from all pages
url_list = []
for i in range(0, 4375):
    url_list.append(url + str(i))
response = requests.get(url, headers=headers)
data = response.json()
d = json.dumps(data)
df = pd.json_normalize(d)
Error:
{'items': [{'applicationName': 'ReverseProxy', 'errorCode': 'UNAUTHORIZED', 'message': 'You are Unauthorized to perform the attempted operation. Application access token required', 'additionalErrorData': [{'name': 'OperationName', 'value': 'http://www.acehardware.com/api/commerce/storefront/locationUsageTypes/SP/locations?page=0&page=1'}]}], 'exceptionDetail': {'type': 'Mozu.Core.Exceptions.VaeUnAuthorizedException'}}
This is strange to me because I should be able to access each page at this URL; I can follow the link in a browser and copy and paste the JSON data. Is there a way to scrape this site without an API key?
It works in your browser because you have the cookie token saved in your local storage.
Once you delete all cookies, it no longer works when you try to navigate to the API link directly.
The token cookie is sb-sf-at-prod-s. Add this cookie to your headers and it will work.
I do not know if the value of this cookie is linked to my IP address, but if it is and it does not work for you, just change the value of this cookie to one from your browser.
This cookie may be valid only for some requests or for a limited time.
I recommend putting some sleep between requests (see the sketch after the code below).
This website is protected by the Akamai anti-bot security.
import requests
import json
url = 'https://www.acehardware.com/api/commerce/storefront/locationUsageTypes/SP/locations?page='
headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:98.0) Gecko/20100101 Firefox/98.0',
    'cookie': 'sb-sf-at-prod=at=%2FVzynTSsuVJGJMAd8%2BjAO67EUtyn1fIEaqKmCi923rynHnztv6rQZH%2F5LMa7pmMBRiW00x2L%2B%2FLfmJhJKLpNMoK9OFJi069WHbzphl%2BZFM%2FpBV%2BdqmhCL%2FtylU11GQYQ8y7qavW4MWS4xJzWdmKV%2F01iJ0RkwynJLgcXmCzcde2oqgxa%2FAYWa0hN0xuYBMFlCoHJab1z3CU%2F01FJlsBDzXmJwb63zAJGVj4PIH5LvlcbnbOhbouQBKxCrMyrmpvxDf70U3nTl9qxF9qgOyTBZnvMBk1juoK8wL1K3rYp51nBC0O%2Bthd94wzQ9Vkolk%2B4y8qapFaaxRtfZiBqhAAtMg%3D%3D'
}
# create a url list to scrape data from all pages
url_list = []
for i in range(0, 4375):
    url_list.append(url + str(i))
response = requests.get(url, headers=headers)
data = response.json()
d = json.dumps(data)
print(d)
I hope I have been able to help you.
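Building on that, here is a minimal sketch of walking the pages with a pause between requests. The cookie value is a placeholder you would copy from your own browser, and the 'items' key is an assumption based on the error payload shown above; adjust it to the shape of a successful response:
import time
import requests

base_url = 'https://www.acehardware.com/api/commerce/storefront/locationUsageTypes/SP/locations?page='
headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:98.0) Gecko/20100101 Firefox/98.0',
    'cookie': 'sb-sf-at-prod=at=PASTE_CURRENT_VALUE_FROM_YOUR_BROWSER',  # placeholder
}

all_items = []
for i in range(0, 5):  # a few pages as a demonstration; widen the range as needed
    response = requests.get(base_url + str(i), headers=headers)
    response.raise_for_status()
    all_items.extend(response.json().get('items', []))  # 'items' is assumed; check the real response
    time.sleep(1)  # be polite between requests, as recommended above

print(len(all_items))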

Extract JSON response data only visible in Network>XHR > angular response

Thanks in advance for looking into this query.
I am trying to extract data from the Angular response, which is not visible in the HTML code when using the inspect function of the Chrome browser.
I researched solutions and was able to find the data under Network (tab) > Fetch/XHR > Response (screenshots), and wrote code based on what I learned while researching this topic.
Response
In order to read the response, I am passing the parameters and cookies grabbed from the main URL into the request via the code segment below, taken from the full code shared further down. The parameters were created based on information found under Network (tab) > Fetch/XHR > Headers.
http = urllib3.PoolManager()
r = http.request('GET',
                 'https://www.barchart.com/proxies/core-api/v1/quotes/get?' + urlencode(params),
                 headers=headers)
QUESTIONS
Please help me confirm what I am missing or doing wrong. I want to read and store the JSON response; what should I be doing?
JSON to be extracted
Also, is there a way to read the params using a function instead of assigning them by hand as I have done below? Similar to what I did for cookies (headers = x.cookies.get_dict()), is there a way to read and assign the parameters?
Below is the full code I am using.
import requests
import urllib3
from urllib.parse import urlencode
url = 'https://www.barchart.com/etfs-funds/performance/percent-change/advances?viewName=main&timeFrame=5d&orderBy=percentChange5d&orderDir=desc'
header = {'accept': 'application/json', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36'}
s = requests.Session()
x = s.get(url, headers=header)
headers = x.cookies.get_dict()
params = {
    'lists': 'etfs.us.percent.advances.unleveraged.5d',
    'orderDir': 'desc',
    'fields': 'symbol,symbolName,lastPrice,weightedAlpha,percentChangeYtd,percentChange1m,percentChange3m,percentChange1y,symbolCode,symbolType,hasOptions',
    'orderBy': 'percentChange',
    'meta': 'field.shortName,field.type,field.description,lists.lastUpdate',
    'hasOptions': 'true',
    'page': '1',
    'limit': '100',
    'in(leverage%2C(1x))': '',
    'raw': '1'
}
http = urllib3.PoolManager()
r = http.request('GET',
                 'https://www.barchart.com/proxies/core-api/v1/quotes/get?' + urlencode(params),
                 headers=headers)
r.data
The r.data response is below, returning an error:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">\n
<HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">\n
<TITLE>ERROR: The request could not be satisfied</TITLE>\n</HEAD><BODY>\n
<H1>403 ERROR</H1>\n<H2>The request could not be satisfied.</H2>\n<HR noshade size="1px">\n
Request blocked.\nWe can\'t connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.\n<BR clear="all">\n
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.\n<BR clear="all">\n<HR noshade size="1px">\n
<PRE>\nGenerated by cloudfront (CloudFront)\nRequest ID: vcjzkFEpvdtf6ihDpy4dVkYx1_lI8SUu3go8mLqJ8MQXR-KRpCvkng==\n</PRE>\n
<ADDRESS>\n</ADDRESS>\n</BODY></HTML>
You can get the response by name; in your screenshot, the name get?lists=etfs.us is the one you need. You also need to install Playwright.
There is a guide here: https://www.zenrows.com/blog/web-scraping-intercepting-xhr-requests#use-case-nseindiacom
from playwright.sync_api import sync_playwright
url = "https://www.barchart.com/etfs-funds/performance/percent-change/advances?viewName=main&timeFrame=5d&orderBy=percentChange5d&orderDir=desc"
with sync_playwright() as p:
    def handle_response(response):
        # the endpoint we are interested in
        if "get?lists=etfs.us" in response.url:
            print(response.json())

    browser = p.chromium.launch()
    page = browser.new_page()
    page.on("response", handle_response)
    page.goto(url, wait_until="networkidle")
    page.context.close()
    browser.close()
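As for the second part of the question (reading the params instead of typing them by hand), one option is the standard library's urllib.parse. A minimal sketch, assuming you copy the full request URL of the XHR call from the Network tab; the URL below is only an illustrative example:
from urllib.parse import urlsplit, parse_qs

# full XHR URL copied from DevTools > Network (shortened here for illustration)
xhr_url = ('https://www.barchart.com/proxies/core-api/v1/quotes/get?'
           'lists=etfs.us.percent.advances.unleveraged.5d&orderDir=desc&page=1&limit=100')

# parse_qs returns lists of values; flatten the single-item lists into a plain dict
params = {key: values[0] for key, values in parse_qs(urlsplit(xhr_url).query).items()}
print(params)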

Scraping site missing data

So I'm trying to scrape the open positions on this site, and when I use any kind of request (currently trying requests-html) the response doesn't show everything that's in the HTML.
# Import libraries
import time
from bs4 import BeautifulSoup
from requests_html import HTMLSession
# Set the URL you want to webscrape from
url = 'https://germanamerican.csod.com/ux/ats/careersite/5/home?c=germanamerican'
session = HTMLSession()
# Connect to the URL
response = session.get(url)
response.html.render()
# Parse HTML and save to BeautifulSoup object
soup = BeautifulSoup(response.text, "html5lib")
b = soup.findAll('a')
Not sure where to go from here. I originally thought the problem was due to JavaScript rendering, but rendering is not fixing it.
The issue is that the initial GET doesn't get the data (which I assume is the job listings); the JS that does fetch it uses a POST with an authorization token in the header. You need to get this token and then make the POST to get the data.
This token appears to be dynamic, so getting it is a little wonky, but doable.
import json

from bs4 import BeautifulSoup as bs
from requests_html import HTMLSession

url0 = r'https://germanamerican.csod.com/ux/ats/careersite/5/home?c=germanamerican'
url = r'https://germanamerican.csod.com/services/x/career-site/v1/search'

s = HTMLSession()
r = s.get(url0)
print(r.status_code)
r.html.render()

# the authorization token lives in a `csod.context=...` script tag on the page
soup = bs(r.text, 'html.parser')
scripts = soup.find_all('script')
for script in scripts:
    if 'csod.context=' in script.text:
        x = script
j = json.loads(x.text.replace('csod.context=', '').replace(';', ''))

payload = {
    'careerSiteId': 5,
    'cities': [],
    'countryCodes': [],
    'cultureId': 1,
    'cultureName': "en-US",
    'customFieldCheckboxKeys': [],
    'customFieldDropdowns': [],
    'customFieldRadios': [],
    'pageNumber': 1,
    'pageSize': 25,
    'placeID': "",
    'postingsWithinDays': None,
    'radius': None,
    'searchText': "",
    'states': []
}

headers = {
    'accept': 'application/json; q=1.0, text/*; q=0.8, */*; q=0.1',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9',
    'authorization': 'Bearer ' + j['token'],
    'cache-control': 'no-cache',
    'content-length': '272',
    'content-type': 'application/json',
    'csod-accept-language': 'en-US',
    'origin': 'https://germanamerican.csod.com',
    'referer': 'https://germanamerican.csod.com/ux/ats/careersite/5/home?c=germanamerican',
    'sec-fetch-mode': 'cors',
    'sec-fetch-site': 'same-origin',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36',
    'x-requested-with': 'XMLHttpRequest'
}

r = s.post(url, headers=headers, json=payload)
print(r.status_code)
print(r.json())
The r.json() that's printed out is a nicely formatted JSON version of the table of job listings.
I don't think it's possible to scrape that website with Requests.
I would suggest using Selenium or Scrapy.
Welcome to SO!
Unfortunately, you won't be able to scrape that page with requests (nor requests_html or similar libraries) because you need a tool to handle dynamic pages - i.e., javascript-based.
With Python, I would strongly suggest Selenium and its webdriver. Below is a piece of code that prints the desired output, i.e., all listed jobs (NB: it requires Selenium and the Firefox webdriver to be installed and on the correct PATH to run).
# Import libraries
from bs4 import BeautifulSoup
from selenium import webdriver
# Set the URL you want to webscrape from
url = 'https://germanamerican.csod.com/ux/ats/careersite/5/home?c=germanamerican'
browser = webdriver.Firefox() # initialize the webdriver. I use FF, might be Chromium or else
browser.get(url) # go to the desired page. You might want to wait a bit in case of slow connection
page = browser.page_source # this is the page source, now full with the listings that have been uploaded
soup = BeautifulSoup(page, "lxml")
jobs = soup.findAll('a', {'data-tag' : 'displayJobTitle'})
for j in jobs:
    print(j.text)
browser.quit()

Python: how to fill out a web form and get the resulting page source

I am trying to write a python script that will scrape http://www.fakenewsai.com/ and tell me whether or not a news article is fake news. I want the script to input a given news article into the website's url input field and hit the submit button. Then, I want to scrape the website to determine whether the article is "fake" or "real" news, as displayed on the website.
I was successful in accomplishing this using selenium and ChromeDriver, but the script was very slow (>2 minutes) and did not run on Heroku (using flask). For reference, here is the code I used:
from selenium import webdriver
import time

def fakeNews(url):
    if url.__contains__("https://"):
        url = url[8:-1]
    if url.__contains__("http://"):
        url = url[7:-1]
    browser = webdriver.Chrome("static/chromedriver.exe")
    browser.get("http://www.fakenewsai.com")
    element = browser.find_element_by_id("url")
    element.send_keys(url)
    button = browser.find_element_by_id("submit")
    button.click()
    time.sleep(1)
    site = "" + browser.page_source
    result = ""
    if site[site.index("opacity: 1") - 10] == "e":
        result = "Fake News"
    else:
        result = "Real News"
    browser.quit()
    return result

print(fakeNews('https://www.nytimes.com/2019/11/02/opinion/sunday/instagram-social-media.html'))
I have attempted to replicate this code using other Python libraries, such as MechanicalSoup, pyppeteer, and Scrapy. However, as a beginner at Python, I have not found much success. I was hoping someone could point me in the right direction with a solution.
For the stated purpose, in my opinion it would be much simpler to analyze the website, understand its functionality, and then automate the underlying requests instead of the user's browser behavior.
Hit F12 in your browser while on the website, open the Network tab, paste a URL into the input box and then hit submit; you will see that it sends an HTTP OPTIONS request and then a POST request to a URL. The server then returns a JSON response as a result.
So, you can use Python's requests module (docs) to automate that very POST request instead of writing very complex code that simulates clicks and scrapes the result.
A very simple example you can build on is:
import json
import requests

def fake_news():
    url = 'https://us-central1-fake-news-ai.cloudfunctions.net/detect/'
    payload = {'url': 'https://www.nytimes.com/'}
    headers = {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate, br', 'Accept-Language': 'en-US,en;q=0.5',
               'Connection': 'keep-alive', 'Content-Length': '103', 'Content-type': 'application/json; charset=utf-8',
               'DNT': '1', 'Host': 'us-central1-fake-news-ai.cloudfunctions.net', 'Origin': 'http://www.fakenewsai.com',
               'Referer': 'http://www.fakenewsai.com/', 'TE': 'Trailers',
               'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0'}
    response_json = requests.post(url, data=json.dumps(payload), headers=headers).text
    response = json.loads(response_json)
    is_fake = int(response['fake'])
    if is_fake == 0:
        print("Not fake")
    elif is_fake == 1:
        print("Fake")
    else:
        print("Invalid response from server")

if __name__ == "__main__":
    fake_news()
PS: It would be fair to contact the owner of the website to discuss using his or her infrastructure for your project.
The main slowdown occurs when starting a Chrome browser and loading the first URL.
Note that you are launching a browser for each request.
You can launch the browser once during initialization and only do the automation parts per request.
This will greatly improve performance.
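A minimal sketch of that idea, reusing the fakeNews() logic from the question (the chromedriver path and element ids are the ones the question already uses):
from selenium import webdriver

# start the browser once, at application startup, instead of once per request
browser = webdriver.Chrome("static/chromedriver.exe")

def fake_news(url):
    # only the per-request automation happens here; the browser is reused
    browser.get("http://www.fakenewsai.com")
    browser.find_element_by_id("url").send_keys(url)
    browser.find_element_by_id("submit").click()
    # ... read the result from browser.page_source as in the original fakeNews() ...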

python requests & beautifulsoup bot detection

I'm trying to scrape all the HTML elements of a page using requests & beautifulsoup. I'm using ASIN (Amazon Standard Identification Number) to get the product details of a page. My code is as follows:
from urllib.request import urlopen
import requests
from bs4 import BeautifulSoup
url = "http://www.amazon.com/dp/" + 'B004CNH98C'
response = urlopen(url)
soup = BeautifulSoup(response, "html.parser")
print(soup)
But the output doesn't show the entire HTML of the page, so I can't do my further work with product details.
Any help on this?
EDIT 1:
From the given answer, it shows the markup of the bot-detection page. I researched a bit & found two ways to get past it:
I might need to add a header in the request, but I couldn't understand what the value of the header should be.
Use Selenium.
Now my question is: do both of these approaches work equally well?
It is better to use fake_useragent here to make things easy; it picks a random User-Agent based on real-world browser usage statistics. If you don't need dynamic content, you're almost always better off just requesting the page content over HTTP and parsing it programmatically.
import requests
from fake_useragent import UserAgent

ua = UserAgent()
hdr = {'User-Agent': ua.random,
       'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
       'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
       'Accept-Encoding': 'none',
       'Accept-Language': 'en-US,en;q=0.8',
       'Connection': 'keep-alive'}
url = "http://www.amazon.com/dp/" + 'B004CNH98C'
response = requests.get(url, headers=hdr)
print(response.content)
Selenium is used for browser automation and higher-level web scraping of dynamic content.
As some of the comments already suggested, if you need to somehow interact with JavaScript on a page, it is better to use Selenium. However, regarding your first approach using a header:
import requests
from bs4 import BeautifulSoup
url = "http://www.amazon.com/dp/" + 'B004CNH98C'
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.text,"html.parser")
These headers are a bit old, but should still work. By using them you are pretending that your request is coming from a normal web browser. If you use requests without such a header, your code is basically telling the server that the request is coming from Python, which most servers reject right away.
Another alternative for you could be fake-useragent; maybe you can also give that a try (a small sketch follows).
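A minimal sketch of that suggestion (it needs pip install fake-useragent; the ASIN is the one from the question):
import requests
from fake_useragent import UserAgent

# pick a random, realistic User-Agent string
headers = {'User-Agent': UserAgent().random}
url = "http://www.amazon.com/dp/" + 'B004CNH98C'
response = requests.get(url, headers=headers)
print(response.status_code)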
try this:
import requests
from bs4 import BeautifulSoup
url = "http://www.amazon.com/dp/" + 'B004CNH98C'
r = requests.get(url)
r = r.text

# option 1: print the raw text
# print(r)

soup = BeautifulSoup(r.encode("utf-8"), "html.parser")

# option 2: print the parsed soup
print(soup)
