My main purpose is to go to this specific website, click each of the products, allow enough time to scrape the data from the clicked product, then go back and click the next product on the page, until every product has been clicked through and scraped (I have not included the scraping code).
My code opens Chrome, navigates to the desired website, and generates a list of links to click by class name. This is the part I am stuck on: I believe I need a for-loop to iterate through the list of links, clicking each one and going back to the original page, but I can't figure out why this won't work.
Here is my code:
import csv
import time
from selenium import webdriver
import selenium.webdriver.chrome.service as service
import requests
from bs4 import BeautifulSoup
url = "https://www.vatainc.com/infusion/adult-infusion.html?limit=all"
service = service.Service('path to chromedriver')
service.start()
capabilities = {'chrome.binary': 'path to chrome'}
driver = webdriver.Remote(service.service_url, capabilities)
driver.get(url)
time.sleep(2)
links = driver.find_elements_by_class_name('product-name')
for link in links:
    link.click()
    driver.back()
    link.click()
I have another solution to your problem.
When I tested your code it showed strange behaviour (clicking a link, going back, and clicking again tends to leave the previously found elements stale). I fixed all the problems I ran into by collecting the hrefs with XPath first.
url = "https://www.vatainc.com/infusion/adult-infusion.html?limit=all"
driver.get(url)
links = [x.get_attribute('href') for x in driver.find_elements_by_xpath("//*[contains(@class, 'product-name')]/a")]
htmls = []
for link in links:
    driver.get(link)
    htmls.append(driver.page_source)
Instead of going back and forth, I saved all the hrefs (in the list named links) and iterated over that list, loading each product page directly.
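If it helps, here is a rough sketch of how the saved pages could then be parsed. You said your scraping code is not included, so the BeautifulSoup call and the product-name selector below are only assumptions to illustrate the idea:
from bs4 import BeautifulSoup

for html in htmls:
    soup = BeautifulSoup(html, 'html.parser')
    # placeholder selector - swap in whatever your real scraping code targets
    name_tag = soup.find('div', class_='product-name')
    if name_tag:
        print(name_tag.get_text(strip=True))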
I am trying to web scrape all the jobs from a job portal by selecting a particular country. I am sorry to attach a picture, but the intent is to show how the page looks.
What I tried:
Below is what I tried, but I'm not getting anything; I have just started learning web scraping.
import requests
from bs4 import BeautifulSoup
job_url = 'https://wd3.myworkdayjobs.com/careers/'
out_req = requests.get(job_url)
soup = BeautifulSoup(out_req.text, 'html.parser')
print(soup)
urls = []
for link in soup.find_all('a'):
    print(link.get('href'))
Any help will be much appreciated.
Try the Selenium library: search based on element attributes, and after the search results load, scrape them using Beautiful Soup.
from selenium import webdriver
# The browser exposes an executable file; through Selenium we invoke that
# executable, which in turn launches the actual browser
driver = webdriver.Chrome(executable_path="C:\\chromedriver.exe")
# to maximize the browser window
driver.maximize_window()
#get method to launch the URL
driver.get("Website")
#to refresh the browser
driver.refresh()
# identify the checkboxes by their type attribute, returned as a list
chk = driver.find_elements_by_xpath("//input[@type='checkbox']")
# len gives the size of that list
print(len(chk))
# get_attribute reads the value attribute of each checkbox
for i in chk:
    if i.get_attribute("value") == "United states of America":
        i.click()
#to close the browser
driver.close()
#############################
#Beautiful soup code here
#############################
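For the Beautiful Soup part, a minimal sketch could look like the following. Note that driver.page_source has to be captured before driver.close() is called, and the 'job-listing' class name is only a placeholder, not the portal's real markup:
from bs4 import BeautifulSoup

# capture the rendered HTML while the browser is still open
page_html = driver.page_source
soup = BeautifulSoup(page_html, 'html.parser')

# assumed class name - inspect the page to find the real one
for job in soup.find_all('div', class_='job-listing'):
    print(job.get_text(strip=True))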
I watched the video at this link https://www.youtube.com/watch?v=EELySnTPeyw, and this is the code (I have changed the XPath, since the website seems to have changed).
import selenium.webdriver as webdriver

def get_results(search_term):
    url = 'https://www.startpage.com'
    browser = webdriver.Chrome(executable_path="D:\\webdrivers\\chromedriver.exe")
    browser.get(url)
    search_box = browser.find_element_by_id('q')
    search_box.send_keys(search_term)
    results = []
    try:
        links = browser.find_elements_by_xpath("//a[contains(@class, 'w-gl__result-title')]")
    except:
        links = browser.find_elements_by_xpath("//h3//a")
    print(links)
    for link in links:
        href = link.get_attribute('href')
        print(href)
        results.append(href)
    browser.close()

get_results('cat')
The code works well for opening the browser, navigating to the search box, and sending the keys, but the links come back as an empty list, even though manually searching for the same XPath in the developer tools returns 10 results.
You need to send Keys.ENTER with your search term. Without it the search is never submitted, so you are still on the start page rather than the results page.
search_box.send_keys(search_term+Keys.ENTER)
Import
from selenium.webdriver.common.keys import Keys
Outputs
https://en.wikipedia.org/wiki/Cat
https://www.cat.com/en_US.html
https://www.cat.com/
https://www.youtube.com/watch?v=cbP2N1BQdYc
https://icatcare.org/advice/thinking-of-getting-a-cat/
https://www.caterpillar.com/en/brands/cat.html
https://www.petfinder.com/cats/
https://www.catfootwear.com/US/en/home
https://www.aspca.org/pet-care/cat-care/general-cat-care
https://www.britannica.com/animal/cat
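For reference, a condensed sketch of the fixed function with the ENTER key and an explicit wait for the result links (the class name in the XPath is the one from your try block and may change if the site is updated again):
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def get_results(search_term):
    browser = webdriver.Chrome(executable_path="D:\\webdrivers\\chromedriver.exe")
    browser.get('https://www.startpage.com')
    search_box = browser.find_element_by_id('q')
    # ENTER submits the search so the browser actually moves to the results page
    search_box.send_keys(search_term + Keys.ENTER)
    # wait until at least one result link is present instead of reading the page immediately
    WebDriverWait(browser, 10).until(
        EC.presence_of_element_located((By.XPATH, "//a[contains(@class, 'w-gl__result-title')]")))
    links = browser.find_elements_by_xpath("//a[contains(@class, 'w-gl__result-title')]")
    results = [link.get_attribute('href') for link in links]
    browser.close()
    return results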
I'm new to Python, and I am trying to crawl a whole website recursively with Selenium.
I want to do this with Selenium because I want to collect all the cookies the website uses. I know other tools can crawl a website more easily and faster, but they can't give me all the cookies (first and third party).
Here is my code:
from selenium import webdriver
import os, shutil
url = "http://example.com/"
links = set()
def crawl(start_link):
    driver.get(start_link)
    elements = driver.find_elements_by_tag_name("a")
    urls_to_visit = set()
    for el in elements:
        urls_to_visit.add(el.get_attribute('href'))
    for el in urls_to_visit:
        if url in el:
            if el not in links:
                links.add(el)
                crawl(el)
        else:
            return

dir_name = "userdir"
if os.path.isdir(dir_name):
    shutil.rmtree(dir_name)

co = webdriver.ChromeOptions()
co.add_argument("--user-data-dir=userdir")
driver = webdriver.Chrome(options=co)

crawl(url)
print(links)
driver.close()
My problem is that the crawl function apparently does not open every page of the website. On some websites I can navigate by hand to pages that the function never reaches. Why?
One thing I have noticed while using webdriver is that it needs time to load the page; the elements are not instantly available, unlike in a regular browser.
You may want to add some delays, or a loop that checks for some kind of footer element to indicate that the page has loaded before you start crawling, as sketched below.
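As a rough sketch, the crawl function could use an explicit wait for an anchor tag (or better, a site-specific footer element) before collecting links. The 10-second timeout is an arbitrary choice, and this version also iterates over every collected href instead of returning early the way the original else branch does:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def crawl(start_link):
    driver.get(start_link)
    # block for up to 10 seconds until at least one anchor tag is present,
    # so links are not collected before the page has finished rendering
    WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.TAG_NAME, "a")))
    hrefs = {el.get_attribute('href') for el in driver.find_elements_by_tag_name("a")}
    for href in hrefs:
        if href and url in href and href not in links:
            links.add(href)
            crawl(href)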
I want to crawl reviews from IMDb using Python. The page only displays 25 reviews until I click the "load more" button. I use the Python package Selenium to click the "load more" button automatically, which works, but why can't I get the data after "load more"? I just get the first 25 reviews repeatedly.
import requests
from bs4 import BeautifulSoup
from selenium import webdriver
import time
seed = 'https://www.imdb.com/title/tt4209788/reviews'
movie_review = requests.get(seed)
PATIENCE_TIME = 60
LOAD_MORE_BUTTON_XPATH = '//*[@id="browse-itemsprimary"]/li[2]/button/span/span[2]'
driver = webdriver.Chrome('D:/chromedriver_win32/chromedriver.exe')
driver.get(seed)
while True:
    try:
        loadMoreButton = driver.find_element_by_xpath("//button[@class='ipl-load-more__button']")
        review_soup = BeautifulSoup(movie_review.text, 'html.parser')
        review_containers = review_soup.find_all('div', class_='imdb-user-review')
        print('length: ', len(review_containers))
        for review_container in review_containers:
            review_title = review_container.find('a', class_='title').text
            print(review_title)
        time.sleep(2)
        loadMoreButton.click()
        time.sleep(5)
    except Exception as e:
        print(e)
        break
print("Complete")
I want all the reviews, but now I can only get the first 25.
You have several issues in your script. Hardcoded waits are very inconsistent and certainly the worst option to rely on. The way you have written your scraping logic inside the while True: loop slows the parsing process by collecting the same items over and over again. Moreover, every title produces a huge line gap in the output, which needs to be stripped. I've slightly changed your script to reflect the suggestions above.
Try this to get the required output:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup
URL = "https://www.imdb.com/title/tt4209788/reviews"
driver = webdriver.Chrome()
wait = WebDriverWait(driver,10)
driver.get(URL)
soup = BeautifulSoup(driver.page_source, 'lxml')
while True:
    try:
        driver.find_element_by_css_selector("button#load-more-trigger").click()
        wait.until(EC.invisibility_of_element_located((By.CSS_SELECTOR, ".ipl-load-more__load-indicator")))
        soup = BeautifulSoup(driver.page_source, 'lxml')
    except Exception:
        break

for elem in soup.find_all(class_='imdb-user-review'):
    name = elem.find(class_='title').get_text(strip=True)
    print(name)

driver.quit()
Your code is fine. Great, even. But you never fetch the updated HTML for the web page after hitting the 'Load More' button. That's why you are getting the same 25 reviews listed all the time.
When you use Selenium to control the web browser, you are clicking the 'Load More' button. This creates an XHR request (more commonly called an AJAX request) that you can see in the 'Network' tab of your web browser's developer tools.
The bottom line is that JavaScript (which runs in the web browser) updates the page, but in your Python program you only fetch the HTML once, statically, using the Requests library.
seed = 'https://www.imdb.com/title/tt4209788/reviews'
movie_review = requests.get(seed)  # <-- SEE HERE? This is always the same HTML. You fetched it once at the beginning.
PATIENCE_TIME = 60
To fix this problem, you need to use Selenium to get the innerHTML of the div box containing the reviews. Then, have BeautifulSoup parse the HTML again. We want to avoid picking up the entire page's HTML again and again because it takes computation resources to have to parse that updated HTML over and over again.
So, find the div on the page that contains the reviews, and parse it again with BeautifulSoup. Something like this should work:
while True:
    try:
        allReviewsDiv = driver.find_element_by_xpath("//div[@class='lister-list']")
        allReviewsHTML = allReviewsDiv.get_attribute('innerHTML')
        loadMoreButton = driver.find_element_by_xpath("//button[@class='ipl-load-more__button']")
        review_soup = BeautifulSoup(allReviewsHTML, 'html.parser')
        review_containers = review_soup.find_all('div', class_='imdb-user-review')
        print('length: ', len(review_containers))
        for review_container in review_containers:
            review_title = review_container.find('a', class_='title').text
            print(review_title)
        time.sleep(2)
        loadMoreButton.click()
        time.sleep(5)
    except Exception as e:
        print(e)
        break
I am trying to scrape data from a data table on this website: http://www.oddsshark.com/ncaab/lsu-alabama-odds-february-18-2017-744793
The site has multiple tabs, which changes the html (I am working in the 'matchup' tab). Within that matchup tab, there is a drop-down menu that changes the data table that I am trying to access. The items in the table that I am trying to access are 'li' tags within an unordered list. I just want to scrape the data from the "Overall" category of the drop-down menu.
I have been unable to access the data that I want. The item that I'm trying to access is coming back as a 'noneType'. Is there a way to do this?
url = "http://www.oddsshark.com/ncaab/lsu-alabama-odds-february-18-2017-
744793"
html_page = requests.get(url)
soup = BeautifulSoup(html_page.content, 'html.parser')
dataList = []
for ultag in soup.find_all('ul', {'class': 'base-list team-stats'}):
print(ultag)
for iltag in ultag.find_all('li'):
dataList.append(iltag.get_text())
The problem is that the content of the tab you are trying to pull data from is loaded dynamically with React JS. So you have to use the Selenium module in Python to open a browser, click the "Matchup" list element programmatically, and then get the page source after clicking it.
On my Mac I installed Selenium and chromedriver using these instructions:
https://gist.github.com/guylaor/3eb9e7ff2ac91b7559625262b8a6dd5f
Then I signed the Python file, so that the OS X firewall doesn't complain when trying to run it, using these instructions:
Add Python to OS X Firewall Options?
Then I ran the following Python 3 code:
import os
import time
from selenium import webdriver
from bs4 import BeautifulSoup as soup
# Setup Selenium Chrome Web Driver
chromedriver = "/usr/local/bin/chromedriver"
os.environ["webdriver.chrome.driver"] = chromedriver
driver = webdriver.Chrome(chromedriver)
# Navigate in Chrome to specified page.
driver.get("http://www.oddsshark.com/ncaab/lsu-alabama-odds-february-18-2017-744793")
# Find the matchup list element using a css selector and click it.
link = driver.find_element_by_css_selector("li[id='react-tabs-0']").click()
# Wait for content to load
time.sleep(1)
# Get the current page source.
source = driver.page_source
# Parse into soup() the source of the page after the link is clicked and use "html.parser" as the Parser.
soupify = soup(source, 'html.parser')
dataList = []
for ultag in soupify.find_all('ul', {'class': 'base-list team-stats'}):
    print(ultag)
    for iltag in ultag.find_all('li'):
        dataList.append(iltag.get_text())
# We are done with the driver so quit.
driver.quit()
Hope this helps; I noticed this was a similar problem to the one I just solved here: Beautifulsoup doesn't reach a child element