Web scraping an Ajax webpage [closed] - python

I am trying to scrape the list items (state names) from a page that lists states. The states are visible on the web page as a list, but seem to be loaded dynamically. Can I scrape them with BeautifulSoup? Is it doable? Some hints, please; I am new to web scraping and open to other tools.
<div class="md-nav-item cite hidden-xs" data-popover=".citation-popover" data-popover-url="/ajax/topic/2023068/cite?citeUrl=https://www.britannica.com/topic/list-of-cities-and-towns-in-the-United-States-2023068">
<em class="material-icons" data-icon="bookmark">
</em>
<div class="hidden-xs">
Cite
</div>
<div class="citation-popover md-popover text-left">
</div>
</div>
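In general, when content really is injected via AJAX, the endpoint named in an attribute like data-popover-url can often be requested directly. A sketch, assuming the endpoint returns an HTML fragment; the URL below is the cite popover from the snippet above, not the state list itself, so it only illustrates the pattern:
import requests
from bs4 import BeautifulSoup

base = "https://www.britannica.com"
endpoint = "/ajax/topic/2023068/cite?citeUrl=https://www.britannica.com/topic/list-of-cities-and-towns-in-the-United-States-2023068"
fragment = requests.get(base + endpoint).text  # fetch the AJAX fragment directly
print(BeautifulSoup(fragment, "html.parser").get_text(strip=True))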

The states aren't actually dynamic; try this:
#!/usr/bin/env python
from bs4 import BeautifulSoup
import requests

url = 'https: ...url here'
html = requests.get(url).text
soup = BeautifulSoup(html, "html.parser")
for i in soup.find_all('a', attrs={'class': 'md-crosslink'}):
    print(i.text)
The above code uses bs4's attrs argument to select the <a> tags with class="md-crosslink", and prints the text they contain.
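Equivalently, the same elements can be matched with a CSS selector via select(); a small sketch, reusing the soup object from the snippet above:
for a in soup.select('a.md-crosslink'):
    print(a.get_text(strip=True))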

Related

Getting Text value from element using beautifulsoup in Python [closed]

I am writing a Python script that gets text data from a website. It is a simple web scraping script, Python only; I don't use Selenium, just BeautifulSoup. I can scrape text from <p>, <div>, and even heading and <a> tags, but when I try to get text from a <td>, the code does not work. My code is below.
from threading import Thread
import requests
from bs4 import BeautifulSoup
from lxml import etree

detailPage = requests.get(SUBURL, headers=HEADERS)
detailsoup = BeautifulSoup(detailPage.content, "html.parser")
detaildom = etree.HTML(str(detailsoup))
name = detaildom.xpath('//*[@id="productTitle"]')[0].text
asin = detaildom.xpath('//*[@id="productDetails_detailBullets_sections1"]/tbody/tr[1]/td')[0].text
Here, getting name works, but asin returns an empty string.
A likely cause: browsers insert <tbody> when rendering, so an XPath copied from devtools may include /tbody/ even though the raw HTML does not, and the expression then matches nothing. You can instead find the table by its ID productDetails_detailBullets_sections1 and take the <td> that contains the ASIN.
Using a CSS selector:
soup = BeautifulSoup(requests.get(url, headers=headers).content, "html.parser")
print("ASIN:", soup.select_one("#productDetails_detailBullets_sections1 tr > td").get_text(strip=True))
Using .find():
soup = BeautifulSoup(requests.get(url, headers=headers).content, "html.parser")
table_info = soup.find(id="productDetails_detailBullets_sections1").find("tr")
print("ASIN:", table_info.find('td').get_text(strip=True))
Output (in both solutions):
ASIN: B079LWYC17
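If the ASIN row is not guaranteed to be the first one, a more defensive sketch is to locate the row by its header text; this assumes the label sits in a <th> next to the value <td>, as is typical for these detail tables:
soup = BeautifulSoup(requests.get(url, headers=headers).content, "html.parser")
# Find the <th> whose text mentions "ASIN", then read the <td> beside it.
th = soup.find("th", string=lambda s: s and "ASIN" in s)
if th:
    print("ASIN:", th.find_next_sibling("td").get_text(strip=True))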

Python Web-scraping - Nested Tags [closed]

I'm trying to get information from the page below
http://books.toscrape.com/
I want to get the rating (stars) of each book and I used the code below
import requests
from bs4 import BeautifulSoup
from urllib.request import urlopen
import re
response = requests.get('http://books.toscrape.com/')
if response.status_code == 200:
    print('Request successful!')
soup = BeautifulSoup(response.text, "html.parser")  # parse the response before searching it
linhas = soup.find_all(class_=re.compile("rating"))
but what comes back is the following:
<p class="star-rating Three">
<i class="icon-star"></i>
<i class="icon-star"></i>
<i class="icon-star"></i>
<i class="icon-star"></i>
<i class="icon-star"></i>
</p>,
What am I doing wrong?
The class name itself contains the star value, so we can extract it from the attrs['class'] list; d['class'][1] gives the rating word.
import requests
from bs4 import BeautifulSoup

response = requests.get('http://books.toscrape.com/')
soup = BeautifulSoup(response.text, "html.parser")
data = soup.find_all("p", class_="star-rating")
for d in data:
    print(d.attrs['class'][1])
Output:
Three
One
One
Four
..
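If a numeric rating is needed, the class word can be mapped to an integer. A small sketch, assuming the site only ever uses the words One through Five (which holds for books.toscrape.com):
WORD_TO_NUM = {"One": 1, "Two": 2, "Three": 3, "Four": 4, "Five": 5}
for d in data:
    # d["class"] is e.g. ["star-rating", "Three"]; map the second token
    print(WORD_TO_NUM.get(d["class"][1]))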

How to extract URLs from a web page [closed]

I need a way to extract the URLs from the list at this web page, https://iota-nodes.net/,
using Python. I tried BeautifulSoup, but without success.
My code is:
from bs4 import BeautifulSoup, SoupStrainer
import requests
url = "https://iota-nodes.net/"
page = requests.get(url)
data = page.text
soup = BeautifulSoup(data, "html.parser")
for link in soup.find_all('a'):
    print(link.get('href'))
No need for BeautifulSoup, as the data is coming from an AJAX request. Something like this should work:
import requests
response = requests.get('https://api.iota-nodes.net/')
data = response.json()
hostnames = [node['hostname'] for node in data]
Note that the data comes from the API endpoint https://api.iota-nodes.net/; you can spot requests like this in the browser's developer tools, on the Network tab.
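A slightly more defensive version of the same idea (a sketch; the hostname field name is taken from the answer above, and the timeout value is arbitrary):
import requests

response = requests.get('https://api.iota-nodes.net/', timeout=10)
response.raise_for_status()  # fail loudly on HTTP errors
hostnames = [node['hostname'] for node in response.json()]
print(hostnames[:5])  # first few entries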

Using HTML parser to get contents of a particular div [closed]

I want to use an HTML parser like BeautifulSoup (Python) to get the contents of a specific div and store all the data within it on my local server, using a Python script that cron runs regularly on my web server.
Also, I need to be able to show those contents exactly as they were shown in the webpage before on my web site.
If the contents of the div were text alone it would be easy enough, but it is a combination of text and images.
Although there are occasionally swf files, I do not wish to import them.
Let's say the div in question is called 'cont'.
What would be the best way to do this?
Luckily, I have a spider that does almost exactly what you need.
from bs4 import BeautifulSoup as bs
from scrapy.http import Request
from scrapy.spider import BaseSpider
from hn.items import HnItem

class HnSpider(BaseSpider):
    name = 'hn'
    allowed_domains = []
    start_urls = ['http://news.ycombinator.com']

    def parse(self, response):
        if 'news.ycombinator.com' in response.url:
            soup = bs(response.body)
            items = [(x[0].text, x[0].get('href')) for x in
                     filter(None, [
                         x.findChildren() for x in
                         soup.findAll('td', {'class': 'title'})
                     ])]
            for item in items:
                print(item)
                hn_item = HnItem()
                hn_item['title'] = item[0]
                hn_item['link'] = item[1]
                try:
                    yield Request(item[1], callback=self.parse)
                except ValueError:
                    yield Request('http://news.ycombinator.com/' + item[1], callback=self.parse)
                yield hn_item
Refer to the GitHub link to learn more.
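For the div described in the original question, a plain BeautifulSoup sketch may be closer to the mark. The URL is a placeholder, and whether 'cont' is an id or a class is an assumption:
import requests
from bs4 import BeautifulSoup

html = requests.get("http://example.com/page.html").text  # placeholder URL
soup = BeautifulSoup(html, "html.parser")
cont = soup.find("div", id="cont") or soup.find("div", class_="cont")  # 'cont' may be an id or a class
if cont:
    # decode() returns the div's markup (text and <img> tags included),
    # which can be saved and re-rendered on your own site
    with open("cont.html", "w", encoding="utf-8") as f:
        f.write(cont.decode())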

Get all links from HTML source (python) [closed]

I want to get all of the links on a web page. This function returns only one link, but I need to get all of them. I know I need a loop, of course, but I don't know how to write one.
def get_next_target(page):
    start_link = page.find('<a href=')
    start_quote = page.find('"', start_link)
    end_quote = page.find('"', start_quote + 1)
    url = page[start_quote + 1:end_quote]
    return url, end_quote
This is where an HTML parser comes in handy. I recommend BeautifulSoup:
from bs4 import BeautifulSoup as BS
def get_next_target(page):
    soup = BS(page, "html.parser")
    return soup.find_all('a', href=True)
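A usage sketch, assuming page already holds the HTML text; each result is a bs4 Tag, so the href comes from indexing:
for a in get_next_target(page):
    print(a['href'])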
You may use lxml for that:
import lxml.html

def get_all_links(page):
    # fromstring() parses an HTML string; parse() expects a file or URL
    document = lxml.html.fromstring(page)
    return document.xpath("//a")
import urllib.request
from bs4 import BeautifulSoup, SoupStrainer

site = urllib.request.urlopen('http://somehwere/over/the/rainbow.html')
site_data = site.read()
for link in BeautifulSoup(site_data, "html.parser", parse_only=SoupStrainer('a')):
    if link.has_attr('href'):
        print(link['href'])
