I am trying to get the names of the events on this page, using Beautiful Soup 4: https://www.orbitxch.com/customer/sport/1
I tried to filter the HTML for tags with class="biab_item-link biab_market-link js-event-link biab_has-time", as those seemed to be the ones containing each unique event name once.
Here is my code:
from bs4 import BeautifulSoup
import urllib3

http = urllib3.PoolManager()
url = 'https://www.orbitxch.com/customer/sport/1'
response = http.request('GET', url)
soup = BeautifulSoup(response.data, features="lxml")
for tag in soup.find_all("a", class_="biab_item-link biab_market-link js-event-link biab_has-time"):
    print(tag["title"])
But nothing happens.
That's because the HTML content is dynamically modified by JavaScript. The data comes from this URL: https://www.orbitxch.com/customer/api/event-updates?eventIds=29108154,29106937,29096310,29096315,29106936,29096313,29096309,29096306,29107821,29108318,29106488,29106934,29106830,29106490,29104420 but honestly I don't know where you can find these IDs. This URL returns a JSON response which you can easily parse with a Python library.
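For example, here is a minimal sketch of fetching and parsing that endpoint; the shape of the JSON payload is an assumption, so print it first and inspect it:
import json
import urllib3

http = urllib3.PoolManager()
api_url = ('https://www.orbitxch.com/customer/api/event-updates'
           '?eventIds=29108154,29106937,29096310,29096315,29106936,'
           '29096313,29096309,29096306,29107821,29108318,29106488,'
           '29106934,29106830,29106490,29104420')
response = http.request('GET', api_url)
data = json.loads(response.data)  # parse the JSON body
print(data)  # inspect the structure to find where the event names live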
I am trying to scrape a government site that uses a frameset.
Here is the URL: https://lakecounty.in.gov/departments/voters/election-results-c/2022GeneralElectionResults/index.htm
I've tried using Splinter/Selenium:
url = "https://lakecounty.in.gov/departments/voters/election-results-c/2022GeneralElectionResults/index.htm"
browser.visit(url)
time.sleep(10)
full_xpath_frame = '/html/frameset/frameset/frame[2]'
tree = browser.find_by_xpath(full_xpath_frame)
for i in tree:
print(i.text)
It just returns an empty string.
I've tried using the requests library.
import requests
from lxml import html  # note: the module name is lowercase 'html'

url = "https://lakecounty.in.gov/departments/voters/election-results-c/2022GeneralElectionResults/index.htm"
# get response object
response = requests.get(url)
# get byte string
data = response.content
print(data)
And it returns this
b"<html>\r\n<head>\r\n<meta http-equiv='Content-Type'\r\ncontent='text/html; charset=iso-
8859-1'>\r\n<title>Lake_ County Election Results</title>\r\n</head>\r\n<FRAMESET rows='20%,
*'>\r\n<FRAME src='titlebar.htm' scrolling='no'>\r\n<FRAMESET cols='20%, *'>\r\n<FRAME
src='menu.htm'>\r\n<FRAME src='Lake_ElecSumm_all.htm' name='reports'>\r\n</FRAMESET>
\r\n</FRAMESET>\r\n<body>\r\n</body>\r\n</html>\r\n"
I've also tried using Beautiful Soup and it gave me the same thing. Is there another Python library I can use to get the data that's inside the second table?
Thank you for any feedback.
As mentioned, you could go for the frames and their src attributes:
BeautifulSoup(r.text, 'html.parser').select('frame')[1].get('src')
or go directly to menu.htm:
import requests
from bs4 import BeautifulSoup

base = 'https://lakecounty.in.gov/departments/voters/election-results-c/2022GeneralElectionResults'
r = requests.get(base + '/menu.htm')
link_list = [base + a.get('href') for a in BeautifulSoup(r.text, 'html.parser').select('a')]

for link in link_list[:1]:
    r = requests.get(link)
    soup = BeautifulSoup(r.text, 'html.parser')
    ###...scrape what is needed
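If the table data is the target, here is a minimal sketch of reading the reports frame directly; this assumes Lake_ElecSumm_all.htm, the frame src shown in the frameset output above, holds ordinary <table> markup:
import requests
from bs4 import BeautifulSoup

base = 'https://lakecounty.in.gov/departments/voters/election-results-c/2022GeneralElectionResults'
r = requests.get(base + '/Lake_ElecSumm_all.htm')  # frame src taken from the frameset output
soup = BeautifulSoup(r.text, 'html.parser')

# walk every table row and print its cell texts
for table in soup.find_all('table'):
    for row in table.find_all('tr'):
        print([cell.get_text(strip=True) for cell in row.find_all(['td', 'th'])])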
I'm trying to write a web scraper to obtain information from LinkedIn job posts, including the job description, date, role, and link of each post. While I have made great progress obtaining information about the job posts, I'm currently stuck on how to get the 'href' link of each post. I have made many attempts, including driver.find_element_by_class_name and the select_one method; neither obtains the 'canonical' link, both returning a None value. Could you please shed some light on this?
This is the part of my code that tries to get the href link:
import requests
from bs4 import BeautifulSoup

url = "https://www.linkedin.com/jobs/view/manager-risk-management-at-american-express-2545560153?refId=tOl7rHbYeo8JTdcUjN3Jdg%3D%3D&trackingId=Jhu1wPbsTyRZg4cRRN%2BnYg%3D%3D&position=1&pageNum=0&trk=public_jobs_job-result-card_result-card_full-click"
reqs = requests.get(url)
soup = BeautifulSoup(reqs.text, 'html.parser')
urls = []
for link in soup.find_all('link'):
    print(link.get('href'))
The link I'm trying to get: https://www.linkedin.com/jobs/view/manager-risk-management-at-american-express-2545560153?refId=tOl7rHbYeo8JTdcUjN3Jdg%3D%3D&trackingId=Jhu1wPbsTyRZg4cRRN%2BnYg%3D%3D&position=1&pageNum=0&trk=public_jobs_job-result-card_result-card_full-click
[Picture of the page source where the href link is stored]
I think you were trying to access the href attribute incorrectly; to access it, use object["attribute_name"].
This works for me, searching only for link tags where rel="canonical":
import requests
from bs4 import BeautifulSoup

url = "https://www.linkedin.com/jobs/view/manager-risk-management-at-american-express-2545560153?refId=tOl7rHbYeo8JTdcUjN3Jdg%3D%3D&trackingId=Jhu1wPbsTyRZg4cRRN%2BnYg%3D%3D&position=1&pageNum=0&trk=public_jobs_job-result-card_result-card_full-click"
reqs = requests.get(url)
soup = BeautifulSoup(reqs.text, 'html.parser')
for link in soup.find_all('link', rel='canonical'):
    print(link['href'])
The <link> has an attribute of rel="canonical". You can use an [attribute=value] CSS selector, [rel="canonical"], to get the value.
To use a CSS selector, use the .select_one() method instead of find().
import requests
from bs4 import BeautifulSoup
url = "https://www.linkedin.com/jobs/view/manager-risk-management-at-american-express-2545560153?refId=tOl7rHbYeo8JTdcUjN3Jdg%3D%3D&trackingId=Jhu1wPbsTyRZg4cRRN%2BnYg%3D%3D&position=1&pageNum=0&trk=public_jobs_job-result-card_result-card_full-click"
reqs = requests.get(url)
soup = BeautifulSoup(reqs.text, 'html.parser')
print(soup.select_one('[rel="canonical"]')['href'])
Output:
https://www.linkedin.com/jobs/view/manager-risk-management-at-american-express-2545560153?refId=tOl7rHbYeo8JTdcUjN3Jdg%3D%3D&trackingId=Jhu1wPbsTyRZg4cRRN%2BnYg%3D%3D
I'm trying to use webscraping (python) to download a video from a website.
As you can see in the screenshot below, the div I want to select is present in the page.
But when I try to extract this content with BeautifulSoup, I get an empty result.
BeautifulSoup is not the problem; the problem is that response.text (see the following code) does not contain this section (the divs surrounding it, however, are present in the response).
import requests
from bs4 import BeautifulSoup

url = "https://www.jw.org/nsi/library/bible/nwt/books/genesis/1"
response = requests.get(url)
if response.status_code != 404:
    soup = BeautifulSoup(response.text, "html.parser")
    print(soup.select("div[class^='video-js']"))
I have a series of XML files at the HTTPS URL below. I need to get the latest XML file from the URL.
I tried to modify this piece of code, but it does not work. Please help.
from bs4 import BeautifulSoup
import urllib.request
import requests
url = 'https://www.oasis.oati.com/cgi-bin/webplus.dll?script=/woa/woa-planned-outages-report.html&Provider=MISO'
response = requests.get(url, verify=False)
#html = urllib.request.urlopen(url,verify=False)
soup = BeautifulSoup(response)
I suppose BeautifulSoup does not read the response object directly. And if I use the urlopen function, it throws an SSL error.
BeautifulSoup does not understand requests' Response instances directly; grab .content and pass it to the "soup" to parse:
soup = BeautifulSoup(response.content, "html.parser") # you can also use "lxml" or "html5lib" instead of "html.parser"
BeautifulSoup understands "file-like" objects as well, which means that, once you figure out your SSL error issue, you can do:
data = urllib.request.urlopen(url)
soup = BeautifulSoup(data, "html.parser")
I did not frame my question correctly in the first place. But after researching further, I found out that I was really trying to extract all the URLs from the page's anchor tags. With some more background on Beautiful Soup, I would use soup.find_all('a').
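A minimal sketch of that approach; filtering for hrefs ending in .xml and sorting to find the latest are assumptions, so adjust them to match the actual links on the page:
import requests
from bs4 import BeautifulSoup

url = 'https://www.oasis.oati.com/cgi-bin/webplus.dll?script=/woa/woa-planned-outages-report.html&Provider=MISO'
response = requests.get(url, verify=False)
soup = BeautifulSoup(response.content, "html.parser")

# collect the href of every anchor tag, keeping only XML files (assumed naming)
xml_links = [a.get('href') for a in soup.find_all('a')
             if a.get('href') and a.get('href').endswith('.xml')]
latest = sorted(xml_links)[-1]  # assumes the filenames sort chronologically
print(latest)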
There's something I still don't understand about using BeautifulSoup. I can use this to parse the raw HTML of a webpage, here "example_website.com":
from bs4 import BeautifulSoup  # load BeautifulSoup class
import requests

r = requests.get("http://example_website.com")
data = r.text
soup = BeautifulSoup(data, 'html.parser')  # an explicit parser avoids a warning
# soup.find_all('a') grabs all elements with <a> tag for hyperlinks
Then, to retrieve and print all elements with the 'href' attribute, we can use a for loop:
for link in soup.find_all('a'):
    print(link.get('href'))
What I don't understand: I have a website with several webpages, and each webpage lists several hyperlinks to a single webpage with tabular data.
I can use BeautifulSoup to parse the homepage, but how do I use the same Python script to scrape page 2, page 3, and so on? How do you "access" the contents found via the 'href' links?
Is there a way to write a python script to do this? Should I be using a spider?
You can do that with requests+BeautifulSoup for sure. It would be of a blocking nature, since you would process the extracted links one by one and would not proceed to the next link until you are done with the current one. Sample implementation:
from urllib.parse import urljoin  # this module was called "urlparse" in Python 2
from bs4 import BeautifulSoup
import requests

with requests.Session() as session:
    r = session.get("http://example_website.com")
    data = r.text
    soup = BeautifulSoup(data, 'html.parser')
    base_url = "http://example_website.com"
    for link in soup.find_all('a'):
        url = urljoin(base_url, link.get('href'))
        r = session.get(url)
        # parse the subpage
Though, it may quickly get complex and slow.
You may need to switch to the Scrapy web-scraping framework, which makes web scraping, crawling, and following links easy (check out CrawlSpider with link extractors), fast, and non-blocking (it is based on Twisted).
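For reference, here is a minimal CrawlSpider sketch; the spider name, start URL, and parsing logic are placeholders:
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class TablePagesSpider(CrawlSpider):
    name = "table_pages"  # placeholder name
    start_urls = ["http://example_website.com"]

    # follow every link found on the page and hand each response to parse_page
    rules = (Rule(LinkExtractor(), callback="parse_page", follow=True),)

    def parse_page(self, response):
        # extract the tabular data from each subpage here
        yield {"url": response.url}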