How to get info from this HTML using BeautifulSoup? - python

I want to get all the social links of a company from this page. When I run
summary_div.find("div", {'class': "cp-summary__social-links"})
I get this:
<div class="cp-summary__social-links">
<div data-integration-name="react-component" data-payload='{"props":
{"links":[{"url":"http://www.snapdeal.com?utm_source=craft.co","icon":"web","label":"Website"},
{"url":"http://www.linkedin.com/company/snapdeal?utm_source=craft.co","icon":"linkedin","label":"LinkedIn"},
{"url":"https://instagram.com/snapdeal/?utm_source=craft.co","icon":"instagram","label":"Instagram"},
{"url":"https://www.facebook.com/Snapdeal?utm_source=craft.co","icon":"facebook","label":"Facebook"},
{"url":"https://www.crunchbase.com/organization/snapdeal?utm_source=craft.co","icon":"cb","label":"CrunchBase"},
{"url":"https://www.youtube.com/user/snapdeal?utm_source=craft.co","icon":"youtube","label":"YouTube"},
{"url":"https://twitter.com/snapdeal?utm_source=craft.co","icon":"twitter","label":"Twitter"}],
"companyName":"Snapdeal"},"name":"CompanyLinks"}' data-rwr-element="true"></div></div>
I also tried getting the children of cp-summary__social-links (which is what I actually want) and then finding all a tags to get the links. That does not work either.
Any idea how to do this?
Update: As Sraw suggested, I managed to get all the URLs like this:
import json

urls = []
social_link = summary_div.find("div", {'class': "cp-summary__social-links"}).find("div", {"data-integration-name": "react-component"})
# The social links are stored as a JSON string in the data-payload attribute.
json_text = json.loads(social_link["data-payload"])
for link in json_text['props']['links']:
    urls.append(link['url'])
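For reference, a self-contained sketch of the whole flow, assuming the page is fetched with requests and that its markup matches the snippet above (the URL and the headers are placeholders, not taken from the question):
import json

import requests
from bs4 import BeautifulSoup

# Placeholder URL -- substitute the actual company page you are scraping.
page = requests.get("https://craft.co/snapdeal", headers={"User-Agent": "Mozilla/5.0"})
soup = BeautifulSoup(page.text, "html.parser")

social_div = soup.find("div", {"class": "cp-summary__social-links"})
react_div = social_div.find("div", {"data-integration-name": "react-component"})

# The links live in the JSON string stored in the data-payload attribute.
payload = json.loads(react_div["data-payload"])
urls = [link["url"] for link in payload["props"]["links"]]
print(urls)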
Thanks in advance.

Related

Extracting information from website with BeautifulSoup and Python

I'm attempting to extract information from this website. I can't get the text in the three fields marked in the image (in green, blue, and red rectangles) no matter how hard I try.
Using the following function, I thought I would get all of the text on the page, but it didn't work:
from bs4 import BeautifulSoup
import requests

def get_text_from_maagarim_page(url: str):
    html_text = requests.get(url).text
    soup = BeautifulSoup(html_text, "html.parser")
    res = soup.find_all(class_="tooltippedWord")
    text = [el.getText() for el in res]
    return text

url = "https://maagarim.hebrew-academy.org.il/Pages/PMain.aspx?koderekh=1484&page=1"
print(get_text_from_maagarim_page(url))  # >> empty list
I attempted to use the Chrome inspection tool and the exact reference provided here, but I couldn't figure out how to use that data hierarchy to extract the desired data.
I would love to hear if you have any suggestions on how to access this data.
Update and more details
As far as I can tell from the structure of the above-mentioned webpage, the element I'm looking for is at the following location in the structure:
<form name="aspnetForm" ...>
...
<div id="wrapper">
...
<div class="content">
...
<div class="mainContentArea">
...
<div id="mainSearchPannel" class="mainSearchContent">
...
<div class="searchPanes">
...
<div class="wordsSearchPane" style="display: block;">
...
<div id="searchResultsAreaWord"
class="searchResultsContainer">
...
<div id="srPanes">
...
<div id="srPane-2" class="resRefPane"
style>
...
<div style="height:600px;overflow:auto">
...
<ul class="esResultList">
...
# HERE ARE THE TARGET ITEMS
The relevant items look like this (screenshot omitted), and the relevant data is in <td id ... > elements.
The content you want is not present in the web page that Beautiful Soup loads. It is fetched by separate HTTP requests made when a web browser runs the JavaScript code in that page. Beautiful Soup does not run JavaScript.
You may try to figure out which HTTP request responded with the required data using the "Network" tab in your browser's developer tools. If that turns out to be a predictable HTTP request, you can recreate that request in Python directly and then use Beautiful Soup to pick out the useful parts. Martin Evans's answer (https://stackoverflow.com/a/72090358/1921546) uses this approach.
Or, you may use methods that involve remote-controlling a web browser with Python. That lets the browser load the page, and you can then access the DOM in Python to get what you want from the rendered page. Other answers like Scraping javascript-generated data using Python and scrape html generated by javascript with python can point you in that direction.
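If you go the browser-automation route, a minimal sketch with Selenium might look like the following (it assumes the tooltippedWord class from the question does appear once the page has rendered; the wait logic is simplified and may need tuning):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup

url = "https://maagarim.hebrew-academy.org.il/Pages/PMain.aspx?koderekh=1484&page=1"

driver = webdriver.Chrome()
driver.get(url)

# Wait until the JavaScript-rendered results are present before reading the DOM.
WebDriverWait(driver, 20).until(
    EC.presence_of_element_located((By.CLASS_NAME, "tooltippedWord"))
)

# Hand the rendered HTML to Beautiful Soup and reuse the original extraction logic.
soup = BeautifulSoup(driver.page_source, "html.parser")
print([el.get_text() for el in soup.find_all(class_="tooltippedWord")])

driver.quit()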
Exactly what tag/class are you trying to scrape from the webpage? When I copied and ran your code, I included this line to check for the class name in the page's HTML, but did not find any:
print("tooltippedWord" in requests.get(url).text)  # False
I can say that it's generally easier to use the attrs kwarg when using find_all or findAll.
res = soup.findAll(attrs={"class":"tooltippedWord"})
There is less confusion overall when typing it out. As for possible approaches, one is to look at the page in Chrome (or another browser) using the dev tools to search for non-random class or id attributes like esResultListItem.
From there, if you know which tag you are looking for, you can include it in the search like so:
res = soup.findAll("div",attrs={"class":"tooltippedWord"})
It's definitely easier if you know which tag you are looking for, and whether there are any class names or ids included in the tag:
<span id="somespecialname" class="verySpecialName"></span>
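For example, a tag like that can be located directly by its id or its class; a small self-contained sketch using the hypothetical names from the snippet above:
from bs4 import BeautifulSoup

html = '<span id="somespecialname" class="verySpecialName">example text</span>'
soup = BeautifulSoup(html, "html.parser")

# Either the id or the class is enough to locate the tag.
print(soup.find("span", id="somespecialname").text)                 # example text
print(soup.find("span", attrs={"class": "verySpecialName"}).text)   # example text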
If you're still looking for help, I can check back tomorrow; it is nearly 1:00 AM CST where I live and I still need to finish my CS assignments. It's just a lot easier to help you if you can provide more examples (pictures, tags, etc.) so we know how best to explain the process to you.
It is a bit difficult to understand what the text is, but what you are looking for is returned from a separate request made by the browser. The parameters used will hopefully make some sense to you.
This request returns JSON data which contains a d entry holding the HTML that you are looking for.
The following shows a possible approach to extracting the data you are looking for:
import requests
from bs4 import BeautifulSoup

post_json = {"tabNum":3,"type":"Muvaot","kod1":"","sug1":"","tnua":"","kod2":"","zurot":"","kod":"","erechzman":"","erechzura":"","arachim":"1484","erechzurazman":"","cMaxDist":"","aMaxDist":"","sql1expr":"","sql1sug":"","sql2expr":"","sql2sug":"","sql3expr":"","sql3sug":"","sql4expr":"","sql4sug":"","sql5expr":"","sql5sug":"","sql6expr":"","sql6sug":"","sederZeruf":"","distance":"","kotm":"הערך: <b>אֶלָּא</b>","mislifnay":"0","misacharay":"0","sOrder":"standart","pagenum":"1","lines":"0","takeMaxPage":"true","nMaxPage":-1,"year":"","hekKazar":False}

req = requests.post('https://maagarim.hebrew-academy.org.il/Pages/ws/Arachim.asmx/GetMuvaot', json=post_json)
d = req.json()['d']
soup = BeautifulSoup(d, "html.parser")

for num, table in enumerate(soup.find_all('table'), start=1):
    print(f"Entry {num}")
    tr_row_second = table.find('tr', class_='srRowSecond')
    td = tr_row_second.find_all('td')[1]
    print(" ", td.strong.text)
    tr_row_third = table.find('tr', class_='srRowThird')
    td = tr_row_third.find_all('td')[1]
    print(" ", td.text)
This would give you information starting:
Entry 1
תעודות בר כוכבא, ואדי מורבעאת 45
המסירה: Mur, 45
Entry 2
תעודות בר כוכבא, איגרת מיהונתן אל יוסה
מראה מקום: <שו' 4>  |  המסירה: Mur, 46
Entry 3
ברכת המזון
מראה מקום: רחם נא יי אלהינו על ישראל עמך, ברכה ג <שו' 6> (גרסה)  |  המסירה: New York, Jewish Theological Seminary (JTS), ENA, 2150, 47
Entry 4
ברכת המזון
מראה מקום: נחמנו יי אלהינו, ברכה ד, לשבת <שו' 6>  |  המסירה: Cambridge, University Library, T-S Collection, 8H 11, 4
I suggest you print(soup) to understand better what is returned.

BeautifulSoup Web Scrape Running but Not Printing

Mega new coder here, as I learned web scraping yesterday. I'm attempting to scrape a site with the following HTML code:
<div id="db_detail_colorways">
<a class="db_colorway_line" href="database_detail_colorway.php?
ID=11240&table_name=glasses">
<div class="db_colorway_line_image"><img
src="database/Sport/small/BallisticNewMFrameStrike_MatteBlack_Clear.jpg"/>.
</div>.
<div class="grid_4" style="overflow:hidden;">Matte Black</div><div
class="grid_3">Clear</div><div class="grid_1">$133</div><div
class="grid_2">OO9060-01</div><div class="clear"></div></a><a
There are 4 total items being scraped. The goal is to print the text stored in <div class="grid_4">. The code should loop over the 4 items being scraped, so for the HTML code provided, the first value displayed would be "Matte Black". Here is my code:
for frame_colors in soup.find_all('a', class_ = 'db_colorway_line'):
    all_frame_colors = frame_colors.find_all('div', class_ = 'grid_4').text
    print(all_frame_colors)
Basically the code runs, and everything else thus far has run correctly in this Jupyter notebook, but this runs and does not print anything. I'm thinking it's a syntax error, but I could be wrong. Hopefully this makes sense. Can anyone help? Thanks!
You are treating a list of elements as a single element:
frame_colors.find_all('div', class_ = 'grid_4').text
You can loop over all_frame_colors and get the text from each element, like this:
for frame_colors in soup.find_all('a', class_ = 'db_colorway_line'):
    all_frame_colors = frame_colors.find_all('div', class_ = 'grid_4')
    for af in all_frame_colors:
        print(af.text)
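Since each db_colorway_line anchor in the snippet appears to contain exactly one grid_4 div, find (which returns a single element or None) would also work; a short sketch under that assumption:
for frame_colors in soup.find_all('a', class_='db_colorway_line'):
    grid_4 = frame_colors.find('div', class_='grid_4')
    if grid_4 is not None:  # skip rows that have no grid_4 cell
        print(grid_4.get_text(strip=True))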
If it solves your problem, then don't forget to mark this as the answer!

python selenium scrape href (link) from website

I have this site https://jobs.ubs.com/TGnewUI/Search/home/HomeWithPreLoad?partnerid=25008&siteid=5012&PageType=searchResults&SearchType=linkquery&LinkID=6017#keyWordSearch=&locationSearch=
I want to scrape the link for each job role. The HTML source for one of the roles is:
<a id="Job_1" href="https://jobs.ubs.com/TGnewUI/Search/home/HomeWithPreLoad?partnerid=25008&siteid=5012&PageType=JobDetails&jobid=223876" ng-class="oQ.ClassName" class="jobProperty jobtitle" ng-click="handlers.jobClick($event, this)" ng-bind-html="$root.utils.htmlEncode(oQ.Value)">Technology Delivery Lead (IB Technology)</a>
I have tried this:
job_link = driver.find_elements_by_css_selector(".jobProperty.jobtitle ['href']")
for job_link in job_link:
    job_link = job_link.text
    print(job_link)
But it simply returns nothing. Can someone kindly help?
Why not just print out its href attribute using get_attribute?
job_link = driver.find_elements_by_css_selector(".jobProperty.jobtitle")
for job_link in job_link:
    print(job_link.get_attribute('href'))
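As a side note, find_elements_by_css_selector was removed in Selenium 4; a version using the current By API, plus an explicit wait because the job list on that page is rendered by JavaScript, might look roughly like this (a sketch, not tested against the live site):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

url = ("https://jobs.ubs.com/TGnewUI/Search/home/HomeWithPreLoad?partnerid=25008"
       "&siteid=5012&PageType=searchResults&SearchType=linkquery&LinkID=6017"
       "#keyWordSearch=&locationSearch=")

driver = webdriver.Chrome()
driver.get(url)

# Wait for the job links to be rendered before reading their href attributes.
WebDriverWait(driver, 20).until(
    EC.presence_of_all_elements_located((By.CSS_SELECTOR, ".jobProperty.jobtitle"))
)

for job_link in driver.find_elements(By.CSS_SELECTOR, ".jobProperty.jobtitle"):
    print(job_link.get_attribute("href"))

driver.quit()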

How to retrieve a title attribute from an image using Beautiful Soup

container = page_soup.findAll("img",{"class":"logo"})
container[0]
<img alt="GIGABYTE" class="logo" src="//c1.neweggimages.com/brandimage//Brand1314.gif" title="GIGABYTE">
</img>
How can I scrape the word "GIGABYTE" as text from the above?
I am a beginner at this.
It appears that web scraping goes against the terms of service of that website; however, I believe you're free to do whatever you'd like with the HTML if you obtain it manually.
soup.find_all('img', class_='logo')[0]['title']
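If the logo image might lack the attribute, .get() avoids a KeyError; a small self-contained sketch based on the markup in the question:
from bs4 import BeautifulSoup

html = ('<img alt="GIGABYTE" class="logo" '
        'src="//c1.neweggimages.com/brandimage//Brand1314.gif" title="GIGABYTE">')
soup = BeautifulSoup(html, "html.parser")

logo = soup.find("img", class_="logo")
print(logo.get("title", ""))  # GIGABYTE (empty string if the attribute is absent)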

Find nested divs scrapy

I am trying to get the text from a div that is nested. Here is the code that I currently have:
sites = hxs.select('/html/body/div[@class="content"]/div[@class="container listing-page"]/div[@class="listing"]/div[@class="listing-heading"]/div[@class="price-container"]/div[@class="price"]')
But it is not returning a value. Is my syntax wrong? Essentially I just want the text out of <div class="price">
Any ideas?
The URL is here.
The price is inside an iframe, so you should scrape https://www.rentler.com/ksl/listing/index/?sid=17403849&nid=651&ad=452978 instead.
Once you request this URL:
hxs.select('//div[@class="price"]/text()').extract()[0]
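For what it's worth, in current Scrapy versions the HtmlXPathSelector/hxs API has been replaced by the response selector; a rough equivalent as a minimal spider (the spider name is only illustrative):
import scrapy

class PriceSpider(scrapy.Spider):
    name = "price"  # hypothetical spider name
    start_urls = [
        "https://www.rentler.com/ksl/listing/index/?sid=17403849&nid=651&ad=452978"
    ]

    def parse(self, response):
        # .get() returns the first matching text node, or None if nothing matches.
        yield {"price": response.xpath('//div[@class="price"]/text()').get()}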
