Parsing text block in HTML with BeautifulSoup (IndustryAbout) - python

I would like to parse entries for mines from IndustryAbout. In this example I'm working on the Kevitsa Copper Concentrator.
The interesting block in the HTML is:
<strong>Commodities: Copper, Nickel, Platinum, Palladium, Gold</strong><br /><strong>Area: Lappi</strong><br /><strong>Type: Copper Concentrator Plant</strong><br /><strong>Annual Production: 17,200 tonnes of Copper (2015), 8,800 tonnes of Nickel (2015), 31,900 tonnes of Platinum, 25,100 ounces of Palladium, 12,800 ounces of Gold (2015)</strong><br /><strong>Owner: Kevitsa Mining Oy</strong><br /><strong>Shareholders: Boliden AB (100%)</strong><br /><strong>Activity since: 2012</strong>
I've written a (basic) working parser, which gives me:
<strong>Commodities: Copper, Nickel, Platinum, Palladium, Gold</strong>
<strong>Area: Lappi</strong>
<strong>Type: Copper Concentrator Plant</strong>
....
But I would like to get $commodities, $type, $annual_production, $shareholders and $activity as separate variables. How can I do this? Regular expressions?
import requests
from bs4 import BeautifulSoup
import re

page = requests.get("https://www.industryabout.com/country-territories-3/2199-finland/copper-mining/34519-kevitsa-copper-concentrator-plant")
soup = BeautifulSoup(page.content, 'lxml')
rows = soup.select("strong")
for r in rows:
    print(r)
2nd version:
import requests
from bs4 import BeautifulSoup
import re
import csv

links = ["34519-kevitsa-copper-concentrator-plant", "34520-kevitsa-copper-mine", "34356-glogow-copper-refinery"]
for l in links:
    page = requests.get("https://www.industryabout.com/country-territories-3/2199-finland/copper-mining/" + l)
    soup = BeautifulSoup(page.content, 'lxml')
    rows = soup.select("strong")
    d = {}
    for r in rows:
        name, value, *rest = r.text.split(":")
        if not rest:
            d[name] = value
    print(d)

Does this do what you want?
import requests
from bs4 import BeautifulSoup

page = requests.get("https://www.industryabout.com/country-territories-3/2199-finland/copper-mining/34519-kevitsa-copper-concentrator-plant")
soup = BeautifulSoup(page.content, 'html.parser')
rows = soup.select("strong")
d = {}
for r in rows:
    name, value, *rest = r.text.split(":")
    if not rest:  # links or scripts contain more ":" and are probably not interesting for you
        d[name] = value
print(d)
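To get the fields you listed as separate variables, index the dict by the labels visible in the HTML block above; a minimal sketch (d.get avoids a KeyError when a label is missing on some page, and the values carry a leading space from the split):

# Keys assumed to match the "Label: value" pairs shown in the HTML above
commodities = d.get("Commodities", "").strip()
area = d.get("Area", "").strip()
annual_production = d.get("Annual Production", "").strip()
shareholders = d.get("Shareholders", "").strip()
activity = d.get("Activity since", "").strip()

No regular expressions needed; the "Label: value" split already does the work.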

Related

Failing to scrape a webpage

I'm having problems trying to scrape a website with multiple pages in Spyder: the site has pages 1 to 6 and also a next button. Each of the six pages has 30 results. I've tried two solutions without success.
This is the first one:
#SOLUTION 1#
from selenium import webdriver
from bs4 import BeautifulSoup
import pandas as pd
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
driver.get('https://store.unionlosangeles.com/collections/outerwear?sort_by=creation_date&page_num=1')

# Import the HTML of the webpage into python
soup = BeautifulSoup(driver.page_source, 'lxml')
postings = soup.find_all('div', class_='isp_grid_product')

# Create data frame
df = pd.DataFrame({'Link': [''], 'Vendor': [''], 'Title': [''], 'Price': ['']})

# Scrape the data
for i in range(1, 7):  # I've also tried range(1, 6), but it gives 5 pages instead of 6.
    url = "https://store.unionlosangeles.com/collections/outerwear?sort_by=creation_date&page_num=" + str(i)
    postings = soup.find_all('li', class_='isp_grid_product')
    for post in postings:
        link = post.find('a', class_='isp_product_image_href').get('href')
        link_full = 'https://store.unionlosangeles.com' + link
        vendor = post.find('div', class_='isp_product_vendor').text.strip()
        title = post.find('div', class_='isp_product_title').text.strip()
        price = post.find('div', class_='isp_product_price_wrapper').text.strip()
        df = df.append({'Link': link_full, 'Vendor': vendor, 'Title': title, 'Price': price}, ignore_index=True)
The output of this code is a data frame with 180 rows (30 x 6), but it repeats the results of the first page: rows 1-30 are the first page's 30 results, rows 31-60 are those same results again, and so on.
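Worth noting: the repetition happens because soup is parsed once, before the loop; url is rebuilt each iteration but never loaded. A minimal sketch of a fix on the Selenium side (assuming the page_num parameter really switches pages and that the JavaScript-rendered grid is present by the time page_source is read; the explicit wait guards against the latter):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

for i in range(1, 7):
    url = "https://store.unionlosangeles.com/collections/outerwear?sort_by=creation_date&page_num=" + str(i)
    driver.get(url)  # actually navigate to page i (the loop above never does this)
    # Wait until the JavaScript-injected grid exists before parsing
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CLASS_NAME, "isp_grid_product"))
    )
    soup = BeautifulSoup(driver.page_source, 'lxml')
    postings = soup.find_all('li', class_='isp_grid_product')
    # ... same per-post extraction as above ...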
Here is the second solution I tried:
### SOLUTION 2 ###
from selenium import webdriver
import requests
from bs4 import BeautifulSoup
import pandas as pd
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
driver.get('https://store.unionlosangeles.com/collections/outerwear?sort_by=creation_date&page_num=1')

# Import the HTML of the webpage into python
soup = BeautifulSoup(driver.page_source, 'lxml')

# Create data frame
df = pd.DataFrame({'Link': [''], 'Vendor': [''], 'Title': [''], 'Price': ['']})

# Scrape data
i = 0
while i < 6:
    postings = soup.find_all('li', class_='isp_grid_product')
    for post in postings:
        link = post.find('a', class_='isp_product_image_href').get('href')
        link_full = 'https://store.unionlosangeles.com' + link
        vendor = post.find('div', class_='isp_product_vendor').text.strip()
        title = post.find('div', class_='isp_product_title').text.strip()
        price = post.find('div', class_='isp_product_price_wrapper').text.strip()
        df = df.append({'Link': link_full, 'Vendor': vendor, 'Title': title, 'Price': price}, ignore_index=True)
    # Import the next page's HTML into python
    next_page = 'https://store.unionlosangeles.com' + soup.find('div', class_='page-item next').get('href')
    page = requests.get(next_page)
    soup = BeautifulSoup(page.text, 'lxml')
    i += 1
The problem with this second solution is that the .get('href') call used to build next_page fails, for reasons I cannot grasp (I haven't had this problem on other paginated sites). Thus, I get only the first page and not the others.
How can I fix the code to properly scrape all 180 elements?
The data you see is loaded from an external URL via JavaScript. You can simulate these calls with the requests module. For example:
import requests
import pandas as pd
from bs4 import BeautifulSoup
from urllib.parse import urlparse, parse_qs

url = "https://store.unionlosangeles.com/collections/outerwear?sort_by=creation_date&page_num=1"
api_url = "https://cdn-gae-ssl-premium.akamaized.net/categories_navigation"

soup = BeautifulSoup(requests.get(url).content, "html.parser")

params = {
    "page_num": 1,
    "store_id": "",
    "UUID": "",
    "sort_by": "creation_date",
    "facets_required": "0",
    "callback": "",
    "related_search": "1",
    "category_url": "/collections/outerwear",
}

q = parse_qs(
    urlparse(soup.select_one("#isp_search_result_page ~ script")["src"]).query
)
params["store_id"] = q["store_id"][0]
params["UUID"] = q["UUID"][0]

all_data = []
for params["page_num"] in range(1, 7):
    data = requests.get(api_url, params=params).json()
    for i in data["items"]:
        link = i["u"]
        vendor = i["v"]
        title = i["l"]
        price = i["p"]
        all_data.append([link, vendor, title, price])

df = pd.DataFrame(all_data, columns=["link", "vendor", "title", "price"])
print(df.head(10).to_markdown(index=False))
print("Total items =", len(df))
Prints:
| link | vendor | title | price |
|---|---|---|---|
| /products/barn-jacket | Essentials | BARN JACKET | 250 |
| /products/work-vest-2 | Essentials | WORK VEST | 120 |
| /products/tailored-track-jacket | Martine Rose | TAILORED TRACK JACKET | 1206 |
| /products/work-vest-1 | Essentials | WORK VEST | 120 |
| /products/60-40-cloth-bug-anorak-1tone | Kapital | 60/40 Cloth BUG Anorak (1Tone) | 747 |
| /products/smooth-jersey-stand-man-woman-track-jkt | Kapital | Smooth Jersey STAND MAN & WOMAN Track JKT | 423 |
| /products/supersized-sports-jacket | Martine Rose | SUPERSIZED SPORTS JACKET | 1695 |
| /products/pullover-vest | Nicholas Daley | PULLOVER VEST | 267 |
| /products/flannel-polkadot-x-bandana-reversible-1st-jkt-1 | Kapital | FLANNEL POLKADOT X BANDANA REVERSIBLE 1ST JKT | 645 |
| /products/60-40-cloth-bug-anorak-1tone-1 | Kapital | 60/40 Cloth BUG Anorak (1Tone) | 747 |
Total items = 175
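If you'd rather save the scraped table than print it, a one-line follow-up with the standard pandas API (the filename is just an example, not part of the original answer) would be:

df.to_csv("outerwear.csv", index=False)  # persist the DataFrame without the index column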

Web Scraping find not moving on to next item

from bs4 import BeautifulSoup
import requests

def kijiji():
    source = requests.get('https://www.kijiji.ca/b-mens-shoes/markham-york-region/c15117001l1700274').text
    soup = BeautifulSoup(source, 'lxml')
    b = soup.find('div', class_='price')
    for link in soup.find_all('a', class_='title'):
        a = link.get('href')
        fulllink = 'http://kijiji.ca' + a
        print(fulllink)
        b = soup.find('div', class_='price')
        print(b.prettify())

kijiji()
The goal is to list all the different kinds of items sold on Kijiji and pair each one up with its price. But I can't find any way to advance what Beautiful Soup matches for the price class, so I'm stuck with the first price. find_all doesn't work either, as it just prints out the whole blob instead of grouping each price with its item.
If you have Beautiful Soup 4.7.1 or above, you can use the CSS selector method select(), which is much faster.
code:
import requests
from bs4 import BeautifulSoup

res = requests.get("https://www.kijiji.ca/b-mens-shoes/markham-york-region/c15117001l1700274").text
soup = BeautifulSoup(res, 'html.parser')
for item in soup.select('.info-container'):
    fulllink = 'http://kijiji.ca' + item.find_next('a', class_='title')['href']
    print(fulllink)
    price = item.select_one('.price').text.strip()
    print(price)
Or, to use find_all(), use the code block below:
import requests
from bs4 import BeautifulSoup

res = requests.get("https://www.kijiji.ca/b-mens-shoes/markham-york-region/c15117001l1700274").text
soup = BeautifulSoup(res, 'html.parser')
for item in soup.find_all('div', class_='info-container'):
    fulllink = 'http://kijiji.ca' + item.find_next('a', class_='title')['href']
    print(fulllink)
    price = item.find_next(class_='price').text.strip()
    print(price)
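If the goal is to keep the item/price pairs rather than just print them, the same loop can write a CSV with the standard library (a sketch; the filename is illustrative):

import csv
import requests
from bs4 import BeautifulSoup

res = requests.get("https://www.kijiji.ca/b-mens-shoes/markham-york-region/c15117001l1700274").text
soup = BeautifulSoup(res, 'html.parser')

# One (link, price) row per listing container
with open("kijiji_shoes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["link", "price"])  # header row
    for item in soup.select('.info-container'):
        link = 'http://kijiji.ca' + item.find_next('a', class_='title')['href']
        price = item.select_one('.price').text.strip()
        writer.writerow([link, price])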
Congratulations on finding the answer. I'll give you another solution for reference only.
import requests
from simplified_scrapy.simplified_doc import SimplifiedDoc

def kijiji():
    url = 'https://www.kijiji.ca/b-mens-shoes/markham-york-region/c15117001l1700274'
    source = requests.get(url).text
    doc = SimplifiedDoc(source)
    infos = doc.getElements('div', attr='class', value='info-container')
    for info in infos:
        price = info.select('div.price>text()')
        a = info.select('a.title')
        link = doc.absoluteUrl(url, a.href)
        title = a.text
        print(price)
        print(link)
        print(title)

kijiji()
Result:
$310.00
https://www.kijiji.ca/v-mens-shoes/markham-york-region/jordan-4-oreo-2015/1485391828
Jordan 4 Oreo (2015)
$560.00
https://www.kijiji.ca/v-mens-shoes/markham-york-region/yeezy-boost-350-yecheil-reflectives/1486296645
Yeezy Boost 350 Yecheil Reflectives
...
Here are more examples: https://github.com/yiyedata/simplified-scrapy-demo/tree/master/doc_examples
from bs4 import BeautifulSoup
import requests

def kijiji():
    source = requests.get('https://www.kijiji.ca/b-mens-shoes/markham-york-region/c15117001l1700274').text
    soup = BeautifulSoup(source, 'lxml')
    b = soup.find('div', class_='price')
    for link in soup.find_all('a', class_='title'):
        a = link.get('href')
        fulllink = 'http://kijiji.ca' + a
        print(fulllink)
        print(b.prettify())
        b = b.find_next('div', class_='price')

kijiji()
I was stuck on this for an hour; as soon as I posted it on Stack Overflow, I immediately came up with an idea. Messy code, but it works!
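For reference, a slightly tidier sketch of the same idea, assuming every listing exposes both a title link and a price in matching order on the page:

import requests
from bs4 import BeautifulSoup

source = requests.get('https://www.kijiji.ca/b-mens-shoes/markham-york-region/c15117001l1700274').text
soup = BeautifulSoup(source, 'lxml')

# Pair the two result lists positionally instead of walking with find_next
for link, price in zip(soup.find_all('a', class_='title'),
                       soup.find_all('div', class_='price')):
    print('http://kijiji.ca' + link.get('href'), price.text.strip())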

How to extract data in columns from page using soup

I'm trying to capture data that is presented in bullet points.
Link: https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/
I need to extract the data using XPath here.
Data to be extracted:
4 Door Sedan
4 Cylinder, 1.8 Litre
Constantly Variable Transmission, Front Wheel Drive
Petrol - Unleaded ULP
6.4 L/100km
I tried this:
import requests
import pandas as pd
from lxml import html
from bs4 import BeautifulSoup

cars = []
urls = ['https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/']
for url in urls:
    car_data = {}
    headers = {'User-Agent': 'Mozilla/5.0'}
    page = requests.get(url, headers=headers)
    tree = html.fromstring(page.content)
    if tree.xpath('/html/body/div[1]/div[2]/div/div[1]/div[1]/div[4]/div/div'):
        car_data["namings"] = tree.xpath('/html/body/div[1]/div[2]/div/div[1]/div[1]/div[4]/div/div')[0]
You've imported BeautifulSoup, so why not use a CSS class selector?
import requests
from bs4 import BeautifulSoup as bs

r = requests.get('https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/', headers={'User-Agent': 'Mozilla/5.0'})
soup = bs(r.content, 'lxml')
info = [i.text.strip() for i in soup.select('.dgi-')]
You could also print it as:
for i in soup.select('.dgi-'):
    print(i.text.strip())
find_all() returns a collection of elements. strip() is a built-in Python string method that removes leading and trailing whitespace.
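For instance, a self-contained illustration using a fragment of the data above:

from bs4 import BeautifulSoup

snippet = BeautifulSoup("<dd>  4 Door Sedan  </dd><dd>6.4 L/100km</dd>", "html.parser")
for dd in snippet.find_all("dd"):  # find_all returns a list of matching tags
    print(dd.text.strip())         # strip removes the surrounding whitespace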
Ex.
import requests
from bs4 import BeautifulSoup

cars = []
urls = ['https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/']
for url in urls:
    car_data = []
    headers = {'User-Agent': 'Mozilla/5.0'}
    page = requests.get(url, headers=headers)
    soup = BeautifulSoup(page.content, 'lxml')
    car_obj = soup.find("div", {'class': 'r-center-pane'}).find("div", {'class': 'micro-spec'}).find("div", {'class': 'columns'}).find_all("dd")
    for x in car_obj:
        text = x.text.strip()
        if text != "":
            car_data.append(text)
    cars.append(car_data)
print(cars)
Output:
[['4 Door Sedan', '4 Cylinder, 1.8 Litre', 'Constantly Variable Transmission, Front Wheel Drive', 'Petrol - Unleaded ULP', '6.4 L/100km']]

How to get complete href links using beautifulsoup in python

I am trying to get the top movies' names by genre. I couldn't get the complete href links; I'm stuck with partial href links.
With the following code I got:
https://www.imdb.com/search/title?genres=action&sort=user_rating,desc&title_type=feature&num_votes=25000,
https://www.imdb.com/search/title?genres=adventure&sort=user_rating,desc&title_type=feature&num_votes=25000,
https://www.imdb.com/search/title?genres=animation&sort=user_rating,desc&title_type=feature&num_votes=25000,
https://www.imdb.com/search/title?genres=biography&sort=user_rating,desc&title_type=feature&num_votes=25000,
.........
Like that, but I want the names of all top 100 movies for each genre: Action, Adventure, Animation, Biography, ...
I tried the following code:
from bs4 import BeautifulSoup
import requests

url = 'https://www.imdb.com'
main_url = url + '/chart/top'
res = requests.get(main_url)
soup = BeautifulSoup(res.text, 'html.parser')
for href in soup.find_all(class_='subnav_item_main'):
    # print(href)
    all_links = url + href.find('a').get('href')
    print(all_links)
I want the complete link, as shown below:
/search/title?genres=action&sort=user_rating,desc&title_type=feature&num_votes=25000,&pf_rd_m=A2FGELUUNOQJNL&pf_rd_p=5aab685f-35eb-40f3-95f7-c53f09d542c3&pf_rd_r=FM1ZEBQ7E9KGQSDD441H&pf_rd_s=right-6&pf_rd_t=15506&pf_rd_i=top&ref_=chttp_gnr_1
You need another loop over those URLs and a limit to get only 100. I store the results in a dictionary with genres as keys and lists of films as values. Note that original titles may appear, e.g. The Mountain II (2016) is Dag II (original title).
links is a list of tuples where I keep the genre as the first item and the URL as the second.
import requests, pprint
from bs4 import BeautifulSoup as bs
from urllib.parse import urljoin

url = 'https://www.imdb.com/chart/top'
genres = {}
with requests.Session() as s:
    r = s.get(url)
    soup = bs(r.content, 'lxml')
    links = [(i.text, urljoin(url, i['href'])) for i in soup.select('.subnav_item_main a')]
    for link in links:
        r = s.get(link[1])
        soup = bs(r.content, 'lxml')
        genres[link[0].strip()] = [i['alt'] for i in soup.select('.loadlate', limit=100)]
pprint.pprint(genres)
Sample output:

Scraping through on Wiki using "tr" and "td" with BeautifulSoup and python

Total Python 3 beginner here. I can't seem to get just the names of the colleges to print out. The class is nowhere near the college names, and I can't seem to narrow the find_all down to what I need and then print to a new CSV file. Any ideas?
import requests
from bs4 import BeautifulSoup
import csv

res = requests.get("https://en.wikipedia.org/wiki/Ivy_League")
soup = BeautifulSoup(res.text, "html.parser")
colleges = soup.find_all("table", class_="wikitable sortable")
for college in colleges:
    first_level = college.find_all("tr")
    print(first_level)
You can use soup.select() to utilize CSS selectors and be more precise:
import requests
from bs4 import BeautifulSoup

res = requests.get("https://en.wikipedia.org/wiki/Ivy_League")
soup = BeautifulSoup(res.text, "html.parser")
l = soup.select(".mw-parser-output > table:nth-of-type(2) > tbody > tr > td:nth-of-type(1) a")
for each in l:
    print(each.text)
Printed result:
Brown University
Columbia University
Cornell University
Dartmouth College
Harvard University
University of Pennsylvania
Princeton University
Yale University
To put a single column into a CSV:
import pandas as pd
pd.DataFrame([e.text for e in l]).to_csv("your_csv.csv")  # This will include the index
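Since the question already imports csv, a pandas-free alternative (a sketch reusing the list l selected above; the filename is just an example) is:

import csv

# One college name per row, with a header
with open("colleges.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["college"])
    for e in l:
        writer.writerow([e.text])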
With:
colleges = soup.find_all("table", class_ = "wikitable sortable")
you are getting all the tables with this class (there are five), not all the colleges in one table. So you can do something like this:
import requests
from bs4 import BeautifulSoup

res = requests.get("https://en.wikipedia.org/wiki/Ivy_League")
soup = BeautifulSoup(res.text, "html.parser")
college_table = soup.find("table", class_="wikitable sortable")
colleges = college_table.find_all("tr")
for college in colleges:
    college_link = college.find('a')
    if college_link is not None:
        college_name = college_link.text
        print(college_name)
EDIT: I added an if to discard the first row, which contains the table header.
