scraping data from unknown number of pages using beautiful soup - python

I want to parse some info from a website that has data spread across several pages.
The problem is that I don't know how many pages there are. There might be 2, there might be 4, or there might be just one page.
How can I loop over the pages when I don't know how many there will be?
I do know the URL pattern, however, which looks something like the code below.
Also, the page names are not plain numbers: they look like 'pe2' for page 2, 'pe4' for page 3, and so on, so I can't just loop over range(number).
This is the dummy code for the loop I am trying to fix:
import requests
from bs4 import BeautifulSoup

pages = ['', 'pe2', 'pe4', 'pe6', 'pe8']

for i in pages:
    url = "http://www.website.com/somecode/dummy?page={}".format(i)
    r = requests.get(url)
    soup = BeautifulSoup(r.content)
    # rest of the scraping code

You can use a while loop that stops running when it encounters an exception.
Code:
from time import sleep

import requests
from bs4 import BeautifulSoup

i = 0
while True:
    try:
        if i == 0:
            url = "http://www.website.com/somecode/dummy?page=pe"
        else:
            url = "http://www.website.com/somecode/dummy?page=pe{}".format(i)
        r = requests.get(url)
        # raise an HTTPError for 4xx/5xx responses so a missing page ends the loop
        r.raise_for_status()
        soup = BeautifulSoup(r.content, 'html.parser')
        # print the page url
        print(url)
        # rest of the scraping code
        # don't overload the website
        sleep(2)
        # increase the page number
        i += 2
    except Exception:
        break
Output:
http://www.website.com/somecode/dummy?page=pe
http://www.website.com/somecode/dummy?page=pe2
http://www.website.com/somecode/dummy?page=pe4
http://www.website.com/somecode/dummy?page=pe6
http://www.website.com/somecode/dummy?page=pe8
...
... and so on, until an exception is raised.
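
A minimal alternative sketch, in case the site returns an empty 200 page instead of an error for non-existent pages: stop as soon as a page no longer contains the elements you are scraping. The .item selector and the upper bound of 100 pages are placeholders, not taken from the question.

import requests
from bs4 import BeautifulSoup
from time import sleep

pages = [''] + ['pe{}'.format(n) for n in range(2, 100, 2)]  # upper bound is a guess
for suffix in pages:
    url = "http://www.website.com/somecode/dummy?page={}".format(suffix)
    r = requests.get(url)
    soup = BeautifulSoup(r.content, 'html.parser')
    items = soup.select('.item')  # placeholder selector for whatever you actually extract
    if not items:
        break  # no results on this page, so the previous one was the last
    # rest of the scraping code
    sleep(2)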

Related

Scraping href links and scrape from these links

I'm doing Python scraping and I'm trying to get all the links between href tags and then access them one by one to scrape data from these links. I'm a newbie and can't figure out how to continue from this. The code is as follows:
import requests
import urllib.request
import re
from bs4 import BeautifulSoup
import csv

url = 'https://menupages.com/restaurants/ny-new-york'
url1 = 'https://menupages.com'
response = requests.get(url)
f = csv.writer(open('Restuarants_details.csv', 'w'))
soup = BeautifulSoup(response.text, "html.parser")
menu_sections = []
for url2 in soup.find_all('h3', class_='restaurant__title'):
    completeurl = url1 + url2.a.get('href')
    print(completeurl)
    # print(url)
If you want to scrape all the links found on the first page, then scrape all the links found on those pages, and so on, you need a recursive function.
Here is some initial code to get you started:
def scrape(url):
    print("now looking at " + url)
    # scrape the URL
    # do something with the data

    if STOP_CONDITION:  # update this!
        return

    # scrape new URLs:
    for new_url in soup.find_all(...):
        scrape(new_url)


if __name__ == "__main__":
    initial_url = "https://menupages.com/restaurants/ny-new-york"
    scrape(initial_url)
The problem with this recursive function is that it will not stop until there are no links on the pages, which probably won't happen anytime soon. You will need to add a stop condition.
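
As one possible way to fill in that stop condition, here is a sketch that only goes one level deep (the listing page and then each restaurant page), reusing the restaurant__title selector and base URL from the question; everything else is an assumption about what you want to do on each page.

import requests
from bs4 import BeautifulSoup

BASE_URL = 'https://menupages.com'

def scrape(url, depth=0, max_depth=1):
    print("now looking at " + url)
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')

    # do something with the data on this page here

    if depth >= max_depth:  # stop condition: don't follow links any deeper
        return

    # follow the restaurant links found on this page
    for heading in soup.find_all('h3', class_='restaurant__title'):
        scrape(BASE_URL + heading.a.get('href'), depth + 1, max_depth)

if __name__ == "__main__":
    scrape("https://menupages.com/restaurants/ny-new-york")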

Beautiful soup with find all only gives the last result

I'm trying to retrieve all the products from a page using Beautiful Soup. The page has pagination, and to handle it I have made a loop so the retrieval works for all the pages.
But when I move to the next step and try to find_all() the tags, it only gives me the data from the last page.
If I try with one isolated page it works fine, so I guess it is a problem with getting all the HTML from all the pages.
My code is the following:
import pandas as pd
import numpy as np
from bs4 import BeautifulSoup
import urllib3 as ur

http = ur.PoolManager()
base_url = 'https://www.kiwoko.com/tienda-de-perros-online.html'

for x in range(1, 33 + 1):
    dog_products_http = http.request('GET', base_url + '?p=' + str(x))
    soup = BeautifulSoup(dog_products_http.data, 'html.parser')
    print(soup.prettify())
and once it has finished:
soup.find_all('li', {'class': 'item product product-item col-xs-12 col-sm-6 col-md-4'})
As I said, if I do not use the for range and only retrieve one page (for example https://www.kiwoko.com/tienda-de-perros-online.html?p=10), it works fine and gives me the 36 products.
I have copied the soup into a Word file and searched for the class to see if there is a problem, and all 1,153 products I'm looking for are there.
So I think the soup is right, but since I am looking for "more than one HTML" I do not think find_all is working properly.
What could be the problem?
You do want your find_all inside the loop, but here is a way to copy the AJAX call the page makes. It lets you return more items per request, calculate the number of pages dynamically, and make requests for all the products.
I re-use the connection with Session for efficiency.
from bs4 import BeautifulSoup as bs
import requests, math

results = []

with requests.Session() as s:
    r = s.get('https://www.kiwoko.com/tienda-de-perros-online.html?p=1&product_list_limit=54&isAjax=1&_=1560702601779').json()
    soup = bs(r['categoryProducts'], 'lxml')
    results.append(soup.select('.product-item-details'))
    product_count = int(soup.select_one('.toolbar-number').text)
    pages = math.ceil(product_count / 54)

    if pages > 1:
        for page in range(2, pages + 1):
            r = s.get('https://www.kiwoko.com/tienda-de-perros-online.html?p={}&product_list_limit=54&isAjax=1&_=1560702601779'.format(page)).json()
            soup = bs(r['categoryProducts'], 'lxml')
            results.append(soup.select('.product-item-details'))

results = [result for item in results for result in item]
print(len(results))
# parse out what you want from results (a list of tags), or do it in the loop above
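
For reference, a minimal sketch of the simpler fix mentioned in the first sentence: keep the urllib3 loop from the question but collect the find_all() matches on every page instead of overwriting soup each time. The class string is copied from the question; everything else follows the original code.

from bs4 import BeautifulSoup
import urllib3 as ur

http = ur.PoolManager()
base_url = 'https://www.kiwoko.com/tienda-de-perros-online.html'

products = []
for x in range(1, 34):
    page = http.request('GET', base_url + '?p=' + str(x))
    soup = BeautifulSoup(page.data, 'html.parser')
    # accumulate the matches from every page instead of keeping only the last soup
    products.extend(soup.find_all(
        'li', {'class': 'item product product-item col-xs-12 col-sm-6 col-md-4'}))

print(len(products))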

Python - Beautiful Soup scraper returning some, but not all, text

I am trying to scrape the top 100 jobs in the United States from this list. When I run this code:
import urllib.request
from bs4 import BeautifulSoup
url = 'https://www.ranker.com/list/most-common-jobs-in-america/american-jobs'
page_opened = urllib.request.urlopen(url)
soup = BeautifulSoup(page_opened, 'html.parser')
jobs_soup = soup.find_all('span','listItem__title')
print(jobs_soup)
Beautiful Soup returns what I would expect, job titles surrounded by tags, except it only goes up to "Secondary School Teachers", which is only #25 out of 100 jobs. I've used Beautiful Soup this same way on other webpages without problems. Is there anything funky about the webpage or my code that is causing the output to be incomplete?
With the network tab open in my browser's developer tools, I saw that XHR requests were being made as I scrolled, and some of the responses contained list items. You were only able to get the first 24 items because these requests were not being triggered. The URL for one of the requests was:
https://cache-api.ranker.com/lists/354954/items?limit=20&offset=50&include=votes,wikiText,rankings,openListItemContributors&propertyFetchType=ALL&liCacheKey=null
By changing the limit to 100 and the offset to 0 I was able to get the top 100 jobs:
import json
from urllib.request import urlopen
# I removed the other query parameters and it still seems to work
url = 'https://cache-api.ranker.com/lists/354954/items?limit=100&offset=0'
resp = urlopen(url)
data = json.loads(resp.read())
job_titles = [item['name'] for item in data['listItems']]
print(len(job_titles))
print([job_titles[0], job_titles[-1]])
Output:
100
['Retail salespersons', 'Cleaners of vehicles and equipment']
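
If the API ever caps the limit parameter (an assumption; the request above with limit=100 works as shown), the same endpoint can be paged with the offset parameter instead, for example:

import json
from urllib.request import urlopen

base = 'https://cache-api.ranker.com/lists/354954/items?limit={}&offset={}'
page_size = 20  # matches the limit seen in the browser's XHR request
job_titles = []

offset = 0
while len(job_titles) < 100:
    data = json.loads(urlopen(base.format(page_size, offset)).read())
    items = data.get('listItems', [])
    if not items:
        break  # no more results
    job_titles.extend(item['name'] for item in items)
    offset += page_size

print(len(job_titles))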

how to scrape multipage website with python and export data into .csv file?

I would like to scrape the following website using python and need to export scraped data into a CSV file:
http://www.swisswine.ch/en/producer?search=&&
This website consists of 154 pages for the relevant search. I need to call every page and scrape its data, but my script doesn't move on to the next pages continuously; it only scrapes one page of data.
Here I assign the value i < 153, but this script runs only for the 154th page and gives me 10 entries. I need the data from the 1st page to the 154th page.
How can I scrape the data from every page in one run of the script, and how can I export the data to a CSV file?
My script is as follows:
import csv
import requests
from bs4 import BeautifulSoup

i = 0
while i < 153:
    url = ("http://www.swisswine.ch/en/producer?search=&&&page=" + str(i))
    r = requests.get(url)
    i=+1
    r.content
    soup = BeautifulSoup(r.content)
    print(soup.prettify())
    g_data = soup.find_all("ul", {"class": "contact-information"})
    for item in g_data:
        print(item.text)
You should put your HTML-parsing code inside the loop as well. Also, you are not incrementing the i variable correctly (thanks @MattDMo):
import csv
import requests
from bs4 import BeautifulSoup

i = 0
while i < 153:
    url = ("http://www.swisswine.ch/en/producer?search=&&&page=" + str(i))
    r = requests.get(url)
    i += 1

    soup = BeautifulSoup(r.content)
    print(soup.prettify())

    g_data = soup.find_all("ul", {"class": "contact-information"})
    for item in g_data:
        print(item.text)
I would also improve the following:
use requests.Session() to maintain a web-scraping session, which will also bring a performance boost:
if you're making several requests to the same host, the underlying TCP connection will be reused, which can result in a significant performance increase
be explicit about an underlying parser for BeautifulSoup:
soup = BeautifulSoup(r.content, "html.parser") # or "lxml", or "html5lib"
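
Putting those suggestions together with the CSV export the question asks about, a sketch might look like the following. It assumes the pages are numbered 0 through 153 and writes only the text of each contact-information block; the output filename producers.csv and the single-column layout are my own choices, not taken from the question.

import csv

import requests
from bs4 import BeautifulSoup

with requests.Session() as session, open('producers.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['contact_information'])

    for page in range(154):  # pages 0..153
        url = "http://www.swisswine.ch/en/producer?search=&&&page=" + str(page)
        r = session.get(url)
        soup = BeautifulSoup(r.content, "html.parser")
        for item in soup.find_all("ul", {"class": "contact-information"}):
            writer.writerow([item.text.strip()])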

BeautifulSoup looping through urls

I'm trying to harvest some chess games and got the basics done courtesy of some help here. The main function looks something like this:
import requests
import urllib2
from bs4 import BeautifulSoup

def get_game_ids(userurl):  # illustrative function name; the question shows only the body
    r = requests.get(userurl)
    soup = BeautifulSoup(r.content)
    gameids = []
    for link in soup.select('a[href^=/livechess/game?id=]'):
        gameid = link['href'].split("?id=")[1]
        gameids.append(int(gameid))
    return gameids
Basically what happens is that I go to the URL for a specific user, such as http://www.chess.com/home/game_archive?sortby=&show=live&member=Hikaru, grab the HTML and scrape the game ids. This works fine for one page.
However, some users have played lots of games, and since only 50 games are displayed per page, their games are listed on multiple pages, e.g.
http://www.chess.com/home/game_archive?sortby=&show=live&member=Hikaru&page=2 (or 3/4/5 etc.)
That's where I'm stuck. How can I loop through the pages and get the ids?
Follow the pagination by making an endless loop and following the "Next" link until it is no longer found.
Working code:
from urlparse import urljoin

import requests
from bs4 import BeautifulSoup

base_url = 'http://www.chess.com/'
game_ids = []

next_page = 'http://www.chess.com/home/game_archive?sortby=&show=live&member=Hikaru'
while True:
    soup = BeautifulSoup(requests.get(next_page).content)

    # collect the game ids
    for link in soup.select('a[href^=/livechess/game?id=]'):
        gameid = link['href'].split("?id=")[1]
        game_ids.append(int(gameid))

    try:
        next_page = urljoin(base_url, soup.select('ul.pagination li.next-on a')[0].get('href'))
    except IndexError:
        break  # exit the loop if the "Next" link is not found

print game_ids
For the URL you've provided (Hikaru GM), it would print you a list of 224 game ids from all pages.
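
The answer above is written for Python 2 (urlparse, the print statement). A sketch of the same "follow the Next link" pattern in Python 3 is below; the pagination selector is kept from the answer and may have changed on the site since, and newer versions of BeautifulSoup require the attribute value in the CSS selector to be quoted.

from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

base_url = 'http://www.chess.com/'
game_ids = []

next_page = 'http://www.chess.com/home/game_archive?sortby=&show=live&member=Hikaru'
while True:
    soup = BeautifulSoup(requests.get(next_page).content, 'html.parser')

    # collect the game ids on the current page
    for link in soup.select('a[href^="/livechess/game?id="]'):
        game_ids.append(int(link['href'].split("?id=")[1]))

    next_link = soup.select_one('ul.pagination li.next-on a')
    if next_link is None:
        break  # no "Next" link, so this was the last page
    next_page = urljoin(base_url, next_link.get('href'))

print(game_ids)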
