Scraping a Wiki table using "tr" and "td" with BeautifulSoup and Python - python

Total Python 3 beginner here. I can't seem to get just the names of the colleges to print out.
The class is nowhere near the college names, and I can't seem to narrow the find_all down to what I need and print it to a new CSV file. Any ideas?
import requests
from bs4 import BeautifulSoup
import csv
res= requests.get("https://en.wikipedia.org/wiki/Ivy_League")
soup = BeautifulSoup(res.text, "html.parser")
colleges = soup.find_all("table", class_ = "wikitable sortable")
for college in colleges:
    first_level = college.find_all("tr")
    print(first_level)

You can use soup.select() to utilize css selectors and be more precise:
import requests
from bs4 import BeautifulSoup
res= requests.get("https://en.wikipedia.org/wiki/Ivy_League")
soup = BeautifulSoup(res.text, "html.parser")
l = soup.select(".mw-parser-output > table:nth-of-type(2) > tbody > tr > td:nth-of-type(1) a")
for each in l:
    print(each.text)
Printed result:
Brown University
Columbia University
Cornell University
Dartmouth College
Harvard University
University of Pennsylvania
Princeton University
Yale University
To put a single column into csv:
import pandas as pd
pd.DataFrame([e.text for e in l]).to_csv("your_csv.csv") # This will include index
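If you would rather avoid pandas, a minimal sketch with the csv module the question already imports (reusing the list l from the select above; one name per row, no index column):
import csv

with open("your_csv.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for e in l:
        writer.writerow([e.text])  # one college name per row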

With:
colleges = soup.find_all("table", class_ = "wikitable sortable")
you are getting all the tables with this class (there are five), not getting all the colleges in the table. So you can do something like this:
import requests
from bs4 import BeautifulSoup
res= requests.get("https://en.wikipedia.org/wiki/Ivy_League")
soup = BeautifulSoup(res.text, "html.parser")
college_table = soup.find("table", class_ = "wikitable sortable")
colleges = college_table.find_all("tr")
for college in colleges:
    college_row = college.find('td')
    college_link = college.find('a')
    if college_link is not None:
        college_name = college_link.text
        print(college_name)
EDIT: I added an if to discard the first row, which contains the table header.

Related

Parsing text block in HTML with BeautifulSoup (IndustryAbout)

I would like to parse entries for mines from IndustryAbout. In this example I'm working on the Kevitsa Copper Concentrator.
The interesting block in the HTML is:
<strong>Commodities: Copper, Nickel, Platinum, Palladium, Gold</strong><br /><strong>Area: Lappi</strong><br /><strong>Type: Copper Concentrator Plant</strong><br /><strong>Annual Production: 17,200 tonnes of Copper (2015), 8,800 tonnes of Nickel (2015), 31,900 tonnes of Platinum, 25,100 ounces of Palladium, 12,800 ounces of Gold (2015)</strong><br /><strong>Owner: Kevitsa Mining Oy</strong><br /><strong>Shareholders: Boliden AB (100%)</strong><br /><strong>Activity since: 2012</strong>
I've written a (basic) working parser, which gives me
<strong>Commodities: Copper, Nickel, Platinum, Palladium, Gold</strong>
<strong>Area: Lappi</strong>
<strong>Type: Copper Concentrator Plant</strong>
....
But I would like to get $commodities, $type, $annual_production, $shareholders and $activity as separate variables. How can I do this? Regular expressions??
import requests
from bs4 import BeautifulSoup
import re
page = requests.get("https://www.industryabout.com/country-territories-3/2199-finland/copper-mining/34519-kevitsa-copper-concentrator-plant")
soup = BeautifulSoup(page.content, 'lxml')
rows = soup.select("strong")
for r in rows:
    print(r)
2nd version:
import requests
from bs4 import BeautifulSoup
import re
import csv
links = ["34519-kevitsa-copper-concentrator-plant", "34520-kevitsa-copper-mine", "34356-glogow-copper-refinery"]
for l in links:
    page = requests.get("https://www.industryabout.com/country-territories-3/2199-finland/copper-mining/"+l)
    soup = BeautifulSoup(page.content, 'lxml')
    rows = soup.select("strong")
    d = {}
    for r in rows:
        name, value, *rest = r.text.split(":")
        if not rest:
            d[name] = value
    print(d)
Does this do what you want?
import requests
from bs4 import BeautifulSoup
page = requests.get("https://www.industryabout.com/country-territories-3/2199-finland/copper-mining/34519-kevitsa-copper-concentrator-plant")
soup = BeautifulSoup(page.content, 'html.parser')
rows = soup.select("strong")
d = {}
for r in rows:
    name, value, *rest = r.text.split(":")
    if not rest:  # links or scripts contain more ":", probably not interesting for you
        d[name] = value
print(d)
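If you then want the separate variables the question asked for, one way is to read them out of the dict built above (a sketch; the key names are assumptions based on the HTML block quoted in the question):
# hypothetical follow-up, reusing the dict d built above; the keys are
# assumed to match the labels in the HTML block shown in the question
commodities = d.get("Commodities", "").strip()
area = d.get("Area", "").strip()
plant_type = d.get("Type", "").strip()
production = d.get("Annual Production", "").strip()
shareholders = d.get("Shareholders", "").strip()
activity = d.get("Activity since", "").strip()
print(commodities, area, plant_type, production, shareholders, activity)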

Python web scraping - Loop through all categories and subcategories

I am trying to retrieve all categories and subcategories within a retail website. I am able to use BeautifulSoup to pull every single product in a category once I am in it. However, I am struggling with the loop for categories. I'm using this as a test website: https://www.uniqlo.com/us/en/women
How do I loop through each category as well as the subcategories on the left side of the website? The problem is that you have to click on a category before the website displays all the subcategories. I would like to extract all products within each category/subcategory into a csv file. This is what I have so far:
import bs4
import json
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup
myurl = 'https://www.uniqlo.com/us/en/women/'
uClient = uReq(myurl)
page_html = uClient.read()
uClient.close()
page_soup = soup(page_html,"html.parser")
filename = "products.csv"
file = open(filename,"w",newline='')
product_list = []
containers = page_soup.findAll("li", {"class": lambda L: L and L.startswith('grid-tile')})  # find all li whose class starts with 'grid-tile'
for container in containers:
    product_container = container.findAll("div", {"class": "product-swatches"})
    product_names = product_container[0].findAll("li")
    for i in range(len(product_names)):
        try:
            product_name = product_names[i].a.img.get("alt")
            product_mod_name = product_name.split(',')[0].lstrip()
            print(product_mod_name)
        except:
            product_name = ''
        product = [product_mod_name]
        print(product)
        product_list.append(product)
import csv
with open('products.csv', 'a', newline='') as file:
    writer = csv.writer(file)
    for row in product_list:
        writer.writerow(row)
You can try this script. It will go through the different categories and subcategories of products and parse their titles and prices. There are several products with the same name whose only difference is the color, so don't count those as duplicates. I've written the script in a very compact manner, so expand it as you see fit:
import requests
from bs4 import BeautifulSoup
res = requests.get('https://www.uniqlo.com/us/en/women')
soup = BeautifulSoup(res.text, "lxml")
for items in soup.select("#category-level-1 .refinement-link"):
    page = requests.get(items['href'])
    broth = BeautifulSoup(page.text, "lxml")
    for links in broth.select("#category-level-2 .refinement-link"):
        req = requests.get(links['href'])
        sauce = BeautifulSoup(req.text, "lxml")
        for data in sauce.select(".product-tile-info"):
            title = data.select(".name-link")[0].text
            price = ' '.join([item.text for item in data.select(".product-pricing span")])
            print(title.strip(), price.strip())
Results are like:
WOMEN CASHMERE CREW NECK SWEATER $79.90
Women Extra Fine Merino Crew Neck Sweater $29.90 $19.90
WOMEN KAWS X PEANUTS LONG-SLEEVE HOODED SWEATSHIRT $19.90
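The script above only prints its results; to also write them to a CSV file as the question asked, a minimal sketch that collects (title, price) rows instead of printing (assuming the same selectors still match the site):
import csv
import requests
from bs4 import BeautifulSoup

rows = []
res = requests.get('https://www.uniqlo.com/us/en/women')
soup = BeautifulSoup(res.text, "lxml")
for items in soup.select("#category-level-1 .refinement-link"):
    broth = BeautifulSoup(requests.get(items['href']).text, "lxml")
    for links in broth.select("#category-level-2 .refinement-link"):
        sauce = BeautifulSoup(requests.get(links['href']).text, "lxml")
        for data in sauce.select(".product-tile-info"):
            title = data.select(".name-link")[0].text.strip()
            price = ' '.join(item.text for item in data.select(".product-pricing span")).strip()
            rows.append([title, price])  # accumulate instead of printing

with open("products.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "price"])  # header row
    writer.writerows(rows)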

Review scraping from TripAdvisor

I am new to web scraping in Python 3. I want to scrape the reviews of all the hotels in Dubai, but the problem is I can only scrape the reviews of the hotel whose URL I supply. Can anyone show me how I can get all of the hotel reviews without explicitly giving the URL of each hotel?
import requests
from bs4 import BeautifulSoup
importurl = 'https://www.tripadvisor.com/Hotel_Review-g295424-d302778-Reviews-Roda_Al_Bustan_Dubai_Airport-Dubai_Emirate_of_Dubai.html'
r = requests.get(importurl)
soup = BeautifulSoup(r.content, "lxml")
resultsoup = soup.find_all("p", {"class" : "partial_entry"})
#save the reviews to a test text file locally
for review in resultsoup:
    review_list = review.get_text()
    print(review_list)
with open('testreview.txt', 'w') as fid:
    for review in resultsoup:
        review_list = review.get_text()
        fid.write(review_list)
You should find the index pages that list all the hotels, collect all the hotel links into a list, then loop over that list to get the reviews.
import bs4, requests
index_pages = ('http://www.tripadvisor.cn/Hotels-g295424-oa{}-Dubai_Emirate_of_Dubai-Hotels.html#ACCOM_OVERVIEW'.format(i) for i in range(0, 540, 30))
urls = []
with requests.session() as s:
    for index in index_pages:
        r = s.get(index)
        soup = bs4.BeautifulSoup(r.text, 'lxml')
        url_list = [i.get('href') for i in soup.select('.property_title')]
        urls.extend(url_list)  # extend, not append, so urls is a flat list of 540 links
out:
len(urls): 540
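With the flat urls list in hand, a sketch of the second step, continuing the snippet above (it assumes the collected hrefs are relative paths on the same domain, and reuses the question's partial_entry selector):
import bs4, requests

base = 'http://www.tripadvisor.cn'
with requests.session() as s:
    for url in urls:  # the flat list built by the snippet above
        r = s.get(base + url)
        soup = bs4.BeautifulSoup(r.text, 'lxml')
        for review in soup.find_all("p", {"class": "partial_entry"}):
            print(review.get_text())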

Scraping Indeed with Beautiful Soup

I'm unfamiliar with HTML and with web scraping using Beautiful Soup. I'm trying to retrieve job titles, salaries, locations and company names from various Indeed job postings. This is my code so far:
URL = "http://www.indeed.com/jobs?q=data+scientist+%2420%2C000&l=New+York&start=10"
import urllib2
import bs4
from bs4 import BeautifulSoup
soup = BeautifulSoup(urllib2.urlopen(URL).read())
resultcol = soup.find_all(id = 'resultsCol')
company = soup.findAll('span', attrs={"class":"company"})
jobs = (soup.find_all({'class': " row result"}))
Though I have the commands to find jobs and companies, I can't get the contents. I'm aware there's a contents attribute, but none of my variables so far have that attribute. Thanks!
First I search for the div that contains all the elements for one job, and then I search for the elements inside this div.
import urllib2
from bs4 import BeautifulSoup
URL = "http://www.indeed.com/jobs?q=data+scientist+%2420%2C000&l=New+York&start=10"
soup = BeautifulSoup(urllib2.urlopen(URL).read(), 'html.parser')
results = soup.find_all('div', attrs={'data-tn-component': 'organicJob'})
for x in results:
    company = x.find('span', attrs={"itemprop": "name"})
    print 'company:', company.text.strip()
    job = x.find('a', attrs={'data-tn-element': "jobTitle"})
    print 'job:', job.text.strip()
    salary = x.find('nobr')
    if salary:
        print 'salary:', salary.text.strip()
    print '----------'
Updated @furas's example for Python 3:
import urllib.request
from bs4 import BeautifulSoup
URL = "https://www.indeed.com/jobs?q=data+scientist+%2420%2C000&l=New+York&start=10"
soup = BeautifulSoup(urllib.request.urlopen(URL).read(), 'html.parser')
results = soup.find_all('div', attrs={'data-tn-component': 'organicJob'})
for x in results:
    company = x.find('span', attrs={"class": "company"})
    if company:
        print('company:', company.text.strip())
    job = x.find('a', attrs={'data-tn-element': "jobTitle"})
    if job:
        print('job:', job.text.strip())
    salary = x.find('nobr')
    if salary:
        print('salary:', salary.text.strip())
    print('----------')
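The question also asked for the location, which neither answer extracts. A sketch of how the same pattern would extend, under the assumption (not confirmed above) that each job card exposes a span with class "location":
import urllib.request
from bs4 import BeautifulSoup

URL = "https://www.indeed.com/jobs?q=data+scientist+%2420%2C000&l=New+York&start=10"
soup = BeautifulSoup(urllib.request.urlopen(URL).read(), 'html.parser')
for x in soup.find_all('div', attrs={'data-tn-component': 'organicJob'}):
    location = x.find('span', attrs={'class': 'location'})  # assumed class name
    if location:
        print('location:', location.text.strip())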

Extracting Most Read Titles with BS4

I want to extract the titles in the Most Read section of a news page. This is what I have so far, but I'm getting all the titles. I just want the ones in the Most Read section.
import requests
from bs4 import BeautifulSoup
base_url = 'https://www.michigandaily.com/section/opinion'
r = requests.get(base_url)
soup = BeautifulSoup(r.text, "html5lib")
for story_heading in soup.find_all(class_="views-field views-field-title"):
    if story_heading.a:
        print(story_heading.a.text.replace("\n", " ").strip())
    else:
        print(story_heading.contents[0].strip())
You need to limit your scope to only the div container for the most read articles.
import requests
from bs4 import BeautifulSoup
base_url = 'https://www.michigandaily.com/section/opinion'
r = requests.get(base_url)
soup = BeautifulSoup(r.text, "html5lib")
most_read_soup = soup.find_all('div', {'class': 'view-id-most_read'})[0]
for story_heading in most_read_soup.find_all(class_="views-field views-field-title"):
    if story_heading.a:
        print(story_heading.a.text.replace("\n", " ").strip())
    else:
        print(story_heading.contents[0].strip())
You can use a css selector to get the specific tags from the top most read div:
import requests
from bs4 import BeautifulSoup
base_url = 'https://www.michigandaily.com/section/opinion'
r = requests.get(base_url)
soup = BeautifulSoup(r.text, "html5lib")
css = "div.pane-most-read-panel-pane-1 a"
links = [a.text.strip() for a in soup.select(css)]
Which will give you:
[u'Michigan in Color: Anotha One', u'Migos trio ends 2016 SpringFest with Hill Auditorium concert', u'Migos dabs their way to a seminal moment for Ann Arbor hip hop', u'Best of Ann Arbor 2016', u'Best of Ann Arbor 2016: Full List']
