Get around cookies with requests + Python

I'm very much a noob at Python and scraping. I understand the basics but just cannot get past this problem.
I'm trying to scrape content from www.tweakers.net using Python with the requests and BeautifulSoup libraries. However, whatever I do, I keep scraping the cookie consent statement instead of the actual site content. I hope someone can help me with the code. I've run into similar issues on other websites, so I would really like to understand how to tackle this kind of problem. This is what I have now:
import time

from bs4 import BeautifulSoup
import requests
from requests.cookies import cookiejar_from_dict

last_agreed_time = str(int(time.time() * 1000))  # not used yet
url = 'https://www.tweakers.net'

with requests.Session() as session:
    session.headers = {'User-Agent': 'Mozilla/5.0 (Linux; U; Android 4.0.3; ko-kr; LG-L160L Build/IML74K) AppleWebkit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30'}
    # Cookie values must be quoted strings, separated by commas
    session.cookies = cookiejar_from_dict({
        'wt3_sid': '%3B318816705845986',
        'wt_cdbeid': '68907f896d9f37509a2f4b0a9495f272',
        'wt_feid': '2f59b5d845403ada14b462a2c1d0b967',
        'wt_fweid': '473bb8c305b0b42f5202e14a',
    })
    response = session.get(url)

soup = BeautifulSoup(response.content, 'html.parser')
soup.prettify()
Don't mind the content of the User-Agent header; I ripped it from somewhere else.

Two of the most useful libraries for this kind of scraping are selenium and cookielib (http.cookiejar in Python 3). Here is a link to selenium, http://selenium-python.readthedocs.io/api.html, and cookielib, https://docs.python.org/2/library/cookielib.html.
## added selenium code
from bs4 import BeautifulSoup
from selenium import webdriver

url = 'https://www.tweakers.net'

driver = webdriver.Chrome()  # or webdriver.Firefox()
driver.set_window_size(1120, 550)

# Visit the domain first; add_cookie() only works for the current domain
driver.get(url)

# Add the needed cookies -- add_cookie() takes one cookie at a time,
# as a dict with 'name' and 'value' keys
driver.add_cookie({'name': 'wt3_sid', 'value': '%3B318816705845986'})
driver.add_cookie({'name': 'wt_cdbeid', 'value': '68907f896d9f37509a2f4b0a9495f272'})
driver.add_cookie({'name': 'wt_feid', 'value': '2f59b5d845403ada14b462a2c1d0b967'})
driver.add_cookie({'name': 'wt_fweid', 'value': '473bb8c305b0b42f5202e14a'})

# This is how you would retrieve a cookie by name
print(driver.get_cookie('wt3_sid'))

# Reload the page now that the cookies are set
driver.get(url)
soup = BeautifulSoup(driver.page_source, 'html.parser')
soup.prettify()
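A different way to tackle this, since every site's consent wall works differently: let Selenium click the "accept" button itself and then scrape the page behind it. The sketch below uses a hypothetical button.accept-cookies selector as a placeholder; inspect the actual consent dialog on tweakers.net and substitute the real one.
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get('https://www.tweakers.net')

# Wait for the consent dialog and click its accept button.
# NOTE: 'button.accept-cookies' is a made-up placeholder selector;
# find the real one via the browser's inspector.
accept = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, 'button.accept-cookies'))
)
accept.click()

soup = BeautifulSoup(driver.page_source, 'html.parser')
driver.quit()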

Related

How to web scrape a speed value?

I was wondering how to web scrape the speed value from the Fast.com website with Python.
I made an attempt; here is what I've done so far:
import requests
from bs4 import BeautifulSoup
response = requests.get('https://fast.com/', headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_4) AppleWebKit/600.7.12 (KHTML, like Gecko) Version/8.0.7 Safari/600.7.12"})
soup = BeautifulSoup(response.text, 'lxml')
speed = soup.find('span', {'id' : 'speed-value'}).text
print(speed)
The output is always "0", and sometimes it gives me an error.
My goal is to get the speed number in MB/s as shown on the website after the scan finishes.
What have I forgotten to do?
In my personal experience, BeautifulSoup on its own is more suited to static pages. I would recommend Selenium for more dynamic usage: it gives you access to the page after JavaScript and so on have loaded, which makes this kind of scraping much easier.
import time

from selenium import webdriver

driver_path = r"C:\chromedriver.exe"
driver = webdriver.Chrome(driver_path)

MBPS_CLASS = "speed-results-container"
driver.get("https://fast.com/")

# Poll the results container while the test runs
while True:
    print(driver.find_elements_by_class_name(MBPS_CLASS)[0].text)
    # driver.find_element_by_id("speed-value").text  # This works with the ID too
    time.sleep(1)  # don't hammer the driver in a tight loop
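If you'd rather have the script stop with a final number instead of printing forever, one option is to poll until the reading stops changing between samples. Just a sketch: treating a stable, non-zero reading as "test finished" is my own heuristic, not anything Fast.com documents.
import time

from selenium import webdriver

driver = webdriver.Chrome(r"C:\chromedriver.exe")
driver.get("https://fast.com/")

# Poll until the displayed speed stops changing between samples
last = None
while True:
    time.sleep(2)
    value = driver.find_element_by_id("speed-value").text
    if value == last and value != "0":
        break
    last = value

print("Final speed:", value)
driver.quit()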

Get Bing search results in Python

I am trying to make a chatbot that can get Bing search results using Python. I've tried many websites, but they all use old Python 2 code or Google. I am currently in China and cannot access YouTube, Google, or anything else related to Google (Can't use Azure and Microsoft Docs either). I want the results to be like this:
This is the title
https://this-is-the-link.com
This is the second title
https://this-is-the-second-link.com
Code
import urllib.request
from bs4 import BeautifulSoup

page = urllib.request.urlopen("https://www.bing.com/search?q=programming")
soup = BeautifulSoup(page.read(), "html.parser")
links = soup.findAll("a")
for link in links:
    print(link["href"])
And it gives me
/?FORM=Z9FD1
javascript:void(0);
javascript:void(0);
/rewards/dashboard
/rewards/dashboard
javascript:void(0);
/?scope=web&FORM=HDRSC1
/images/search?q=programming&FORM=HDRSC2
/videos/search?q=programming&FORM=HDRSC3
/maps?q=programming&FORM=HDRSC4
/news/search?q=programming&FORM=HDRSC6
/shop?q=programming&FORM=SHOPTB
http://go.microsoft.com/fwlink/?LinkId=521839
http://go.microsoft.com/fwlink/?LinkID=246338
https://go.microsoft.com/fwlink/?linkid=868922
http://go.microsoft.com/fwlink/?LinkID=286759
https://go.microsoft.com/fwlink/?LinkID=617297
Any help would be greatly appreciated (I'm using Python 3.6.9 on Ubuntu)
Actually, the code you've written works properly; the problem is in the HTTP request headers. By default urllib uses Python-urllib/{version} as the User-Agent header value, which makes it easy for the website to recognize your request as automatically generated. To avoid this, you should use a custom value, which you can achieve by passing a Request object as the first parameter of urlopen():
from urllib.parse import urlencode, urlunparse
from urllib.request import urlopen, Request
from bs4 import BeautifulSoup
query = "programming"
url = urlunparse(("https", "www.bing.com", "/search", "", urlencode({"q": query}), ""))
custom_user_agent = "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0"
req = Request(url, headers={"User-Agent": custom_user_agent})
page = urlopen(req)
# Further code I've left unmodified
soup = BeautifulSoup(page.read(), "html.parser")
links = soup.findAll("a")
for link in links:
    print(link["href"])
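If you want output in the title-plus-link shape you described, you can narrow the parse to Bing's organic result blocks instead of dumping every anchor. A sketch; the li.b_algo class is an assumption about Bing's markup at the time of writing and may change.
from urllib.parse import urlencode, urlunparse
from urllib.request import urlopen, Request
from bs4 import BeautifulSoup

url = urlunparse(("https", "www.bing.com", "/search", "", urlencode({"q": "programming"}), ""))
req = Request(url, headers={"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0"})
soup = BeautifulSoup(urlopen(req).read(), "html.parser")

# Each organic result sits in an <li class="b_algo"> with an <h2><a> title link.
# NOTE: "b_algo" is an assumption about Bing's markup -- verify in the page source.
for anchor in soup.select("li.b_algo h2 a"):
    print(anchor.get_text())
    print(anchor["href"])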
P.S. Take a look at the comment left by @edd under your question.

How to scrape text from this webpage?

I'm trying to scrape this HTML title
<h2 id="p89" data-pid="89"><span id="page77" class="pageNum" data-no="77" data-before-text="77"></span>Tuesday, July 30</h2>
from this website: https://wol.jw.org/en/wol/h/r1/lp-e
My code:
from bs4 import BeautifulSoup
import requests
url = requests.get('https://wol.jw.org/en/wol/h/r1/lp-e').text
soup = BeautifulSoup(url, 'lxml')
textodiario = soup.find('header')
dia = textodiario.h2.text
print(dia)
It should return today's date, but it returns a past date: Wednesday, July 24.
My idea would be to use Selenium to get the HTML and then parse it. At the moment I don't have a PC to test on, so please double-check for possible errors. You also need the chromedriver for your platform; put it in the same folder as the script.
import time
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
url = "https://wol.jw.org/en/wol/h/r1/lp-e"
options = Options()
options.add_argument('--headless')
options.add_argument('--disable-gpu')
driver = webdriver.Chrome(chrome_options=options)
driver.get(url)
time.sleep(3)
page = driver.page_source
driver.quit()
soup = BeautifulSoup(page, 'html.parser')
textodiario = soup.find('header')
dia = textodiario.h2.text
print(dia)
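If the fixed time.sleep(3) turns out to be flaky, an explicit wait on the element you parse below is a more robust variant of the same idea (a sketch, dropped in place of the sleep):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds for the <header> element instead of sleeping blindly
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.TAG_NAME, 'header'))
)
page = driver.page_source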
The data is getting loaded asynchronously and the contents of the div are being changed. What you need is a selenium web driver to act alongside bs4.
I actually tried your code, and there's definitely something wrong with how the website/the code is grabbing data: when I pipe the entire response text through grep for July, it gives:
Wednesday, July 24
<h2 id="p71" data-pid="71"><span id="page75" class="pageNum" data-no="75" data-before-text="75"></span>Wednesday, July 24</h2>
<h2 id="p74" data-pid="74">Thursday, July 25</h2>
<h2 id="p77" data-pid="77">Friday, July 26</h2>
If I had to take a guess, the fact that they're keeping multiple dates under h2 probably doesn't help, but I have almost zero experience in web scraping. And if you notice, July 30th isn't even in there, meaning that somewhere along the line your data is getting weird (as LazyCoder points out).
Hope that Selenium fixes your issue.
Go to the Network tab in your browser's dev tools and you will find this link:
https://wol.jw.org/wol/dt/r1/lp-e/2019/7/30
Here is the code.
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent':
           'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}
session = requests.Session()
response = session.get('https://wol.jw.org/wol/dt/r1/lp-e/2019/7/30', headers=headers)
result = response.json()
data = result['items'][0]['content']
soup = BeautifulSoup(data, 'html.parser')
print(soup.select_one('h2').text)
Output:
Tuesday, July 30
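Since the goal is today's text rather than a hardcoded date, you can build that URL from the current date. A sketch that assumes the endpoint keeps the same /year/month/day pattern as the link above:
import datetime

import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}

# Assumes the API keeps the /{year}/{month}/{day} pattern seen above
today = datetime.date.today()
url = f'https://wol.jw.org/wol/dt/r1/lp-e/{today.year}/{today.month}/{today.day}'

response = requests.get(url, headers=headers)
data = response.json()['items'][0]['content']
soup = BeautifulSoup(data, 'html.parser')
print(soup.select_one('h2').text)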

Scraping AJAX e-commerce site using python

I have a problem on scraping an e-commerce site using BeautifulSoup. I did some Googling but I still can't solve the problem.
(Screenshots omitted: 1. the Chrome F12 inspector view; 2. the result of my scrape.)
Here is the site that I tried to scrape: "https://shopee.com.my/search?keyword=h370m"
Problem:
When I open Inspect Element in Google Chrome (F12), I can see the HTML for the product's name, price, etc. But when I run my Python program, I don't get the same code and tags in the result. After some Googling, I found out that this website uses an AJAX query to get the data.
Can anyone help me with the best method to get this product data by scraping an AJAX site? I would like to display the data in table form.
My code:
import requests
from bs4 import BeautifulSoup
source = requests.get('https://shopee.com.my/search?keyword=h370m')
soup = BeautifulSoup(source.text, 'html.parser')
print(soup)
Welcome to StackOverflow! You can inspect where the AJAX request is being sent to and replicate that.
In this case the request goes to this API URL. You can then use requests to perform a similar request. Notice, however, that this API endpoint requires a correct User-Agent header. You can use a package like fake-useragent or just hardcode a string for the agent.
import requests
# fake useragent
from fake_useragent import UserAgent
user_agent = UserAgent().chrome
# or hardcode
user_agent = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1468.0 Safari/537.36'
url = 'https://shopee.com.my/api/v2/search_items/?by=relevancy&keyword=h370m&limit=50&newest=0&order=desc&page_type=search'
resp = requests.get(url, headers={
    'User-Agent': user_agent
})
data = resp.json()
products = data.get('items')
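From there you can flatten the JSON into rows for the table you mentioned. A sketch only: the 'name' and 'price' keys, and the divide-by-100000 price scaling, are assumptions about this endpoint's response shape, so verify them against the raw JSON first.
# Print one row per product.
# NOTE: key names and the price scaling below are assumptions -- check the raw JSON.
for item in products or []:
    name = str(item.get('name', ''))[:60]
    price = item.get('price', 0) / 100000  # assumed scaling; verify
    print(f'{name:<60} RM{price:.2f}')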
Welcome to StackOverflow! :)
As an alternative, you can check Selenium
See example usage from documentation:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("http://www.python.org")
assert "Python" in driver.title
elem = driver.find_element_by_name("q")
elem.clear()
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)
assert "No results found." not in driver.page_source
driver.close()
When you use requests (or libraries like Scrapy), JavaScript is usually not loaded. As @dmitrybelyakov mentioned, you can replay these AJAX calls or imitate normal user interaction using Selenium.

BeautifulSoup can't find class that exists on webpage?

So I am trying to scrape the following webpage: https://www.scoreboard.com/uk/football/england/premier-league/, specifically the scheduled and finished results. I am therefore looking for the elements with class "stage-finished" or "stage-scheduled". However, when I scrape the webpage and print out what page_soup contains, these elements aren't in it.
I found another SO question with an answer saying that this is because the content is loaded via AJAX, and that I need to look at the XHR requests under the Network tab in Chrome dev tools to find the file that's loading the necessary data. However, it doesn't seem to be there?
import bs4
import requests
from bs4 import BeautifulSoup as soup
import csv
import datetime
myurl = "https://www.scoreboard.com/uk/football/england/premier-league/"
headers = {'User-Agent':'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}
page = requests.get(myurl, headers=headers)
page_soup = soup(page.content, "html.parser")
scheduled = page_soup.select(".stage-scheduled")
finished = page_soup.select(".stage-finished")
live = page_soup.select(".stage-live")
print(page_soup)
print(scheduled[0])
The above code throws an error of course as there is no content in the scheduled array.
My question is, how do I go about getting the data I'm looking for?
I copied the contents of the XHR files to a notepad and searched for stage-finished and other tags and found nothing. Am I missing something easy here?
The page is JavaScript rendered. You need Selenium. Here is some code to start on:
from selenium import webdriver
url = 'https://www.scoreboard.com/uk/football/england/premier-league/'
driver = webdriver.Chrome()
driver.get(url)
stages = driver.find_elements_by_class_name('stage-scheduled')
driver.close()
Or you could pass driver.page_source into the BeautifulSoup constructor, like this:
soup = BeautifulSoup(driver.page_source, 'html.parser')
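Putting the two snippets together, here is a sketch that waits for the JavaScript-rendered stage elements before handing the page to BeautifulSoup:
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

url = 'https://www.scoreboard.com/uk/football/england/premier-league/'
driver = webdriver.Chrome()
driver.get(url)

# Wait up to 10 seconds for at least one JS-rendered stage element to appear
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CLASS_NAME, 'stage-finished'))
)

soup = BeautifulSoup(driver.page_source, 'html.parser')
finished = soup.select('.stage-finished')
scheduled = soup.select('.stage-scheduled')
driver.close()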
Note:
You need to install a webdriver first. I installed chromedriver.
Good luck!
