Python BeautifulSoup web scraping: viewing a Tripadvisor review

So I am new to web scraping and trying to view the list of reviews for a particular hotel.
I am initially trying to view a single review by selecting its class, but I am not getting any output; even when I try to check the status code of the request, nothing prints. I believe my code is taking really long to run.
Does web scraping normally take this long, or is there a problem with my code?
import requests
from bs4 import BeautifulSoup

headers = {
    # note: the Access-Control-* entries are CORS response headers and have no effect on a request
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Methods': 'GET',
    'Access-Control-Allow-Headers': 'Content-Type',
    'Access-Control-Max-Age': '3600',
    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0'
}
url = "https://www.tripadvisor.ca/Hotel_Review-g154913-d1587398-Reviews-Le_Germain_Hotel_Calgary-Calgary_Alberta.html"
req = requests.get(url, headers)  # bug: headers lands in the params argument; should be headers=headers
print(req.status_code)
soup = BeautifulSoup(req.content, 'html.parser')
review = soup.find_all(class_="XllAv H4 _a").get_text()  # bug: find_all() returns a ResultSet, which has no get_text()
print(review)

I changed a few header keys and some requests parameters.
I got an error on .get_text() (find_all() returns a list of tags, not a single tag), so I replaced it with a loop:
import requests
from bs4 import BeautifulSoup

headers = {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Methods': 'GET',
    'Access-Control-Allow-Headers': 'Content-Type',
    'accept': '*/*',
    'accept-encoding': 'gzip, deflate',
    'accept-language': 'en,mr;q=0.9',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36'
}
url = "https://www.tripadvisor.ca/Hotel_Review-g154913-d1587398-Reviews-Le_Germain_Hotel_Calgary-Calgary_Alberta.html"
req = requests.get(url, headers=headers, timeout=5, verify=False)
print(req.status_code)
soup = BeautifulSoup(req.content, 'html.parser')
# review = soup.find_all(class_="XllAv H4 _a").get_text()
# print(review)
for x in soup.body.find_all(class_="XllAv H4 _a"):
    print(x.text)
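
For reference, here is a minimal sketch that addresses the usual pitfalls in the snippets above: headers passed by keyword, raise_for_status() to surface HTTP errors instead of a silent hang, and get_text() called per element of the ResultSet. Tripadvisor's class names (such as "XllAv H4 _a") are build-generated and change over time, so re-check the selector in the browser dev tools before relying on it:

import requests
from bs4 import BeautifulSoup

url = "https://www.tripadvisor.ca/Hotel_Review-g154913-d1587398-Reviews-Le_Germain_Hotel_Calgary-Calgary_Alberta.html"
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}  # request headers only

req = requests.get(url, headers=headers, timeout=10)
req.raise_for_status()  # raises immediately on 403/5xx instead of failing later

soup = BeautifulSoup(req.content, 'html.parser')
# find_all() returns a ResultSet (list-like), so iterate and call get_text() per tag.
# The class below is taken from the question and may no longer match the live page.
for review in soup.find_all(class_="XllAv H4 _a"):
    print(review.get_text(strip=True))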

Related

Error 403 on public page w/ python get request

I'm a complete newbie to python and trying to get the content of a webpage with a get request. The page I'm trying to access is public without any authorization as far as I can see. It's a job listing from the career website of a popular company and everyone can view the page.
My code looks like this:
import requests
from bs4 import BeautifulSoup

url = 'https://www.tuvsud.com/de-de/karriere/stellen/jobs/projektmanagerin-auditservice-food-corporate-functions-business-support-all-regions-133776'
headers = {
    'Host': '',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'Accept-Encoding': 'gzip, deflate, br',
}
r = requests.get(url, headers=headers)
soup = BeautifulSoup(r.content, 'html.parser')
print(r.status_code)
However, I get status code 403. With the Google URL, for example, it works.
I would be happy about any help! Thanks in advance.
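
One likely culprit, offered as a guess rather than a confirmed fix: the empty 'Host': '' header is malformed, and some servers reject such requests with 403. Dropping it lets requests derive Host from the URL. A minimal sketch:

import requests

url = 'https://www.tuvsud.com/de-de/karriere/stellen/jobs/projektmanagerin-auditservice-food-corporate-functions-business-support-all-regions-133776'
# No explicit Host header: requests fills it in from the URL automatically.
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
}
r = requests.get(url, headers=headers, timeout=10)
print(r.status_code)  # if this still returns 403, the site may be bot-blocking on other signals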

Cannot identify Javascript XHR API loading data to this page

I am trying to parse the EPG data at the below link. When I inspect the HTML using the code below, all the program data is missing. I realise this is because it is loaded asynchronously by JavaScript, but I cannot figure out in Chrome Tools which API call it is, as there seems to be a lot loaded into this page at once:
import requests

url = 'https://mi.tv/ar/programacion/lunes'
headers = {
    'Accept': 'text/html, */*; q=0.01',
    'Referer': url,
    'sec-ch-ua': '" Not A;Brand";v="99", "Chromium";v="90", "Google Chrome";v="90"',
    'sec-ch-ua-mobile': '?0',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.212 Safari/537.36',
    'X-KL-Ajax-Request': 'Ajax_Request',
    'X-Requested-With': 'XMLHttpRequest'
}
r = requests.get(url=url, headers=headers)
rr = r.text
print(rr)
Can anyone identify for me what the correct API is? I can see there are API parameters given in the HTML, but I have not been able to assemble them into a working link, and I cannot see anything with that URL root in Chrome Tools.
The following shows the right URL to use and how to return listings in a dict keyed by channel:
import requests
from bs4 import BeautifulSoup as bs
from pprint import pprint

headers = {'User-Agent': 'Mozilla/5.0'}
r = requests.get('https://mi.tv/ar/async/guide/all/lunes/60', headers=headers)
soup = bs(r.content, 'lxml')
# Build {channel name: [(time, title), ...]} from each .channel block
listings = {
    c.select_one('h3').text: list(zip(
        [i.text for i in c.select('.time')],
        [i.text for i in c.select('.title')]
    ))
    for c in soup.select('.channel')
}
pprint(listings)
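
As a follow-up, the URL path appears to encode the weekday ('lunes') and a duration or offset (60); swapping in another weekday name presumably returns that day's guide, though this is only an inference from the URL structure:

import requests

headers = {'User-Agent': 'Mozilla/5.0'}
day = 'martes'  # hypothetical: other weekday names appear to follow the same pattern
r = requests.get(f'https://mi.tv/ar/async/guide/all/{day}/60', headers=headers)
print(r.status_code)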

Scraping Data from .ASPX Website URL with Python

I have a static .aspx url that I am trying to scrape. All of my attempts yield the raw html data of the regular website instead of the data I am querying.
My understanding is the headers I am using (which I found from another post) are correct and generalizable:
import urllib.request
import urllib.parse  # needed for urlencode below
from bs4 import BeautifulSoup

headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17',
    'Content-Type': 'application/x-www-form-urlencoded',
    'Accept-Encoding': 'gzip,deflate,sdch',
    'Accept-Language': 'en-US,en;q=0.8',
    'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3'
}

class MyOpener(urllib.request.FancyURLopener):
    version = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17'

myopener = MyOpener()
url = 'https://www.mytaxcollector.com/trSearch.aspx'
# first HTTP request without form data
f = myopener.open(url)
soup_dummy = BeautifulSoup(f, "html5lib")
# parse and retrieve two vital form values
viewstate = soup_dummy.select("#__VIEWSTATE")[0]['value']
viewstategen = soup_dummy.select("#__VIEWSTATEGENERATOR")[0]['value']
Trying to submit the form data, however, causes nothing to happen:
formData = (
    ('__VIEWSTATE', viewstate),
    ('__VIEWSTATEGENERATOR', viewstategen),
    ('ctl00_contentHolder_trSearchCharactersAPN', '631091430000'),
    ('__EVENTTARGET', 'ct100$MainContent$calculate')
)
encodedFields = urllib.parse.urlencode(formData)
# second HTTP request with form data
f = myopener.open(url, encodedFields)
soup = BeautifulSoup(f, "html5lib")
trans_emissions = soup.find("span", id="ctl00_MainContent_transEmissions")
print(trans_emissions.text)
This gives raw HTML almost exactly the same as the "soup_dummy" variable. What I want to see is the result of submitting the field ('ctl00_contentHolder_trSearchCharactersAPN', '631091430000') (this is the "parcel number" box).
I would really appreciate the help. If anything, linking me to a good post about HTML requests (one that not only explains but actually walks through scraping aspx) would be great.
To get the result using the parcel number, your parameters have to be somewhat different from what you have already tried. Moreover, you have to send the POST request to this URL: https://www.mytaxcollector.com/trSearchProcess.aspx.
Working code:
from urllib.request import Request, urlopen
from urllib.parse import urlencode
from bs4 import BeautifulSoup

url = 'https://www.mytaxcollector.com/trSearchProcess.aspx'
payload = {
    'hidRedirect': '',
    'hidGotoEstimate': '',
    'txtStreetNumber': '',
    'txtStreetName': '',
    'cboStreetTag': '(Any Street Tag)',
    'cboCommunity': '(Any City)',
    'txtParcelNumber': '0108301010000',  # your search term
    'txtPropertyID': '',
    'ctl00$contentHolder$cmdSearch': 'Search'
}
data = urlencode(payload)
data = data.encode('ascii')
req = Request(url, data)
req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; ) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36')
res = urlopen(req)
soup = BeautifulSoup(res.read(), 'html.parser')
for items in soup.select("table.propInfoTable tr"):
    data = [item.get_text(strip=True) for item in items.select("td")]
    print(data)
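
For comparison, a sketch of the same POST using requests, which handles the form encoding itself; the field names are copied from the answer above and may need updating if the form changes:

import requests
from bs4 import BeautifulSoup

url = 'https://www.mytaxcollector.com/trSearchProcess.aspx'
payload = {
    'hidRedirect': '',
    'hidGotoEstimate': '',
    'txtStreetNumber': '',
    'txtStreetName': '',
    'cboStreetTag': '(Any Street Tag)',
    'cboCommunity': '(Any City)',
    'txtParcelNumber': '0108301010000',  # your search term
    'txtPropertyID': '',
    'ctl00$contentHolder$cmdSearch': 'Search',
}
# requests url-encodes the dict and sets Content-Type automatically
r = requests.post(url, data=payload, headers={'User-Agent': 'Mozilla/5.0'}, timeout=15)
soup = BeautifulSoup(r.content, 'html.parser')
for row in soup.select("table.propInfoTable tr"):
    print([cell.get_text(strip=True) for cell in row.select("td")])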

Scraping data from google finance using BeautifulSoup in python

I'm trying to get data from google finance from this link like this:
url = "https://www.google.com/finance/historical?cid=4899364&startdate=Dec+1%2C+2016&enddate=Mar+23%2C+2017&num=200&ei=4wLUWImyJs-iuASgwIKYBg"
request = urllib.request.Request(url,None,headers)
response = urllib.request.urlopen(request).read()
soup = BeautifulSoup(response, 'html.parser')
prices = soup.find_all("tbody")
print(prices)
I'm getting an empty list. I have also tried alternatives like soup.find_all('tr'), but I still can't retrieve the data successfully.
edit:
headers = {
    'Host': 'www.google.com',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
    'Connection': 'keep-alive'
}
The problem was with html.parser. I used lxml instead and it worked. I also exchanged urllib for requests.
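
A sketch of that fix as described, requests plus the lxml parser (assumes lxml is installed via pip install lxml; note that this old Google Finance URL may no longer serve the same page today):

import requests
from bs4 import BeautifulSoup

url = "https://www.google.com/finance/historical?cid=4899364&startdate=Dec+1%2C+2016&enddate=Mar+23%2C+2017&num=200"
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0'}

r = requests.get(url, headers=headers, timeout=10)
soup = BeautifulSoup(r.content, 'lxml')  # lxml copes better with this page's markup than html.parser
for row in soup.find_all('tr'):
    print(row.get_text(strip=True))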

How to pass arguments for get method with urllib?

The response web page is as below when selecting 'title' and entering 'wordpress'.
Here is my Python 3 code to pass arguments for the GET method:
import urllib.request
import urllib.parse

url = 'http://www.it-ebooks.info/'
values = {'q': 'wordpress', 'type': 'title'}
data = urllib.parse.urlencode(values).encode(encoding='utf-8', errors='ignore')
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0'}
request = urllib.request.Request(url=url, data=data, headers=headers, method='GET')
response = urllib.request.urlopen(request)
buff = response.read()
html = buff.decode("utf8")
print(html)
I can't get the desired output web page.
How do I pass arguments for the GET method with urllib in my example?
The data kwarg of urllib.request.Request is only used for POST requests as it modifies the request's body.
GET requests simply use URL parameters, so you should append these to the url:
params = '?q=wordpress&type=title'
url = 'http://www.it-ebooks.info/search/{}'.format(params)
You can of course take the time and generalize this into a generic function.
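
For instance, a small sketch of such a generalization, using urllib.parse.urlencode so the parameters are escaped properly (the helper name get_with_params is illustrative, not from the original answer):

import urllib.parse
import urllib.request

def get_with_params(base_url, params, headers=None):
    # Encode the query string and issue a plain GET; no data kwarg, so no POST.
    full_url = base_url + '?' + urllib.parse.urlencode(params)
    request = urllib.request.Request(full_url, headers=headers or {})
    return urllib.request.urlopen(request).read().decode('utf8')

html = get_with_params('http://www.it-ebooks.info/search/',
                       {'q': 'wordpress', 'type': 'title'},
                       {'User-Agent': 'Mozilla/5.0'})
print(html)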
It is better to use the requests library:
import requests

headers = {
    'DNT': '1',
    'Accept-Encoding': 'gzip, deflate, sdch',
    'Accept-Language': 'es-ES,es;q=0.8,en;q=0.6',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Referer': 'http://www.it-ebooks.info/',
    'Connection': 'keep-alive',
}
r = requests.get('http://www.it-ebooks.info/search/?q=wordpress&type=title', headers=headers)
print(r.content)

Categories

Resources