I am trying to get the origin of this product with BeautifulSoup. I tried to select the div in which the product data is arranged, but I can't.
Later I tried to get another div, one of the first in the code, but I had the same problem. Then I ran prettify and the div I am searching for didn't even appear in the output. How can I get this data?
Here is the code I tried:
import urllib.request
from bs4 import BeautifulSoup
urlpage = 'https://www.esselungaacasa.it/ecommerce/nav/auth/supermercato/home.html?freevisit=true#!/negozio/prodotto/5397031?productCode=417932&productType=GROCERY&menuItemId=300000000002399'
page = urllib.request.urlopen(urlpage)
soup = BeautifulSoup(page, 'html.parser')
results = soup.find_all('div', attrs={'class': 'dettaglio'})
I'd like to get all the content of that div so that later I can scrape the paragraphs inside it (specifically, the 'Origine' paragraph). Thank you!
The page makes requests for that content to this url:
https://www.esselungaacasa.it/ecommerce/resources/auth/displayable/breadcrumbs/300000000002399
This requires authentication headers, which seem to comprise those shown below (tested multiple times). The values are only valid for a couple of minutes or less, so you need to see if you can obtain them from a prior request and update them dynamically; a possible approach is sketched after the snippet.
The json contains html which you can extract and parse with BeautifulSoup.
import requests
headers = {
'Cookie' : 'JSESSIONID=2GqxcW2JyxJ6JvSj7N6VsySBjG29fv4X4tqyVhkcQCJk012YZrJF!-137423361; rxVisitor=155754270561377S2OAJ7NF3RRVHAGOONVM6J8BTDM9E9; _ga=GA1.3.1899558727.1557542711; _gid=GA1.3.1887185695.1557542711; cc_advertising=yes; dtSa=-; BIGipServerPOOL-produzione20.esselungaacasa.it-HTTP=!t1YtlfoXiajamCWJ/a5rCzj/QGm88V4Qo0VUYPxsnhd0TBgWyp+Vfi6oydBlxU/hJ9i5S7kWGT9W/is=; BIGipServerPOOL-ecom30.webapp.esselungaacasa.it-AEM-HTTP=!EMW5HHM3WmpSfPyJ/a5rCzj/QGm88eK13IPf7jx3ZN2rGHroQLAAMcP+cqfG6pU/IQ0WkgGmjLJMCQ8=; dtCookie=1$343B9EA5CDF2E30CCE04D4415DA0CE8D|bdb705b7939fba60|1; XSRF-ECOM-TOKEN=16B8A78F9DC3F2AFFD0137EA22662C77A098944B2FD6F2F2C27693BD76BAF15C; dtLatC=127; BIGipServerPOOL-ecom30.webapp.esselungaacasa.it-8001=!S2wA3HtVHQfvqreJ/a5rCzj/QGm88ZamTbPAvAabBDwyKXTfVg7cipoMLFPFfqZEc5Cotrd56OEwVA==; _gat_UA-79392629-1=1; dtPC=1$544681956_471h17vCBGNMJLCLJIAOCFOMIEGLEBHHPIFOKNI; rxvt=1557546485823|1557542705627',
'x-dtpc' : '1$544433049_580h6vCBGNMJLCLJIAOCFOMIEGLEBHHPIFOKNI',
'X-XSRF-TOKEN' : '16B8A78F9DC3F2AFFD0137EA22662C77A098944B2FD6F2F2C27693BD76BAF15C'
}
r = requests.get('https://www.esselungaacasa.it/ecommerce/resources/auth/displayable/detail/5397031/300000000002399', headers = headers)
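One way to refresh the short-lived values is to let a requests.Session collect fresh cookies first and mirror the XSRF cookie into the header. This is an untested sketch; whether the landing page sets JSESSIONID and XSRF-ECOM-TOKEN cookies for an anonymous visit is an assumption:

import requests

# untested sketch: hit the landing page first so the Session picks up fresh
# cookies (JSESSIONID, XSRF-ECOM-TOKEN, ...), then mirror the token header
s = requests.Session()
s.get('https://www.esselungaacasa.it/ecommerce/nav/auth/supermercato/home.html?freevisit=true')
xsrf = s.cookies.get('XSRF-ECOM-TOKEN')  # may be None if not set for anonymous visits
r = s.get('https://www.esselungaacasa.it/ecommerce/resources/auth/displayable/detail/5397031/300000000002399',
          headers={'X-XSRF-TOKEN': xsrf} if xsrf else {})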
You can see an example of the json response here
The content html is inside the list called informations.
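As a rough sketch of that last step (the inner keys of informations are an assumption here; adjust them to the actual json you receive):

from bs4 import BeautifulSoup

data = r.json()
# hypothetical keys: each entry in 'informations' is assumed to carry an html fragment
for info in data.get('informations', []):
    fragment = BeautifulSoup(info.get('content', ''), 'html.parser')
    print(fragment.get_text(strip=True))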
I am trying to get a value from a webpage. In the source code of the webpage, the data is in CDATA format and also comes from jQuery. I have managed to write the code below, which gets a large amount of text, where index 21 contains the information I need. However, this output is large and not in a format I understand. Within the output I need to isolate and output "redshift":"0.06" but don't know how. What is the best way to solve this?
import requests
from bs4 import BeautifulSoup
link = "https://wis-tns.weizmann.ac.il/object/2020aclx"
html = requests.get(link).text
soup = BeautifulSoup(html, "html.parser")
res = soup.find_all('b')
print(soup.find_all('script')[21])
It can be done using the current approach you have. However, I'd advise against it. There's a neater way to do it by observing that the redshift value is present in a few convenient places on the page itself.
The following approach should work for you. It looks for tables on the page with the class "atreps-results-table" -- of which there are two. We take the second such table and look for the table cell with the class "cell-redshift". Then, we just print out its text content.
from bs4 import BeautifulSoup
import requests
link = 'https://wis-tns.weizmann.ac.il/object/2020aclx'
html = requests.get(link).text
soup = BeautifulSoup(html, 'html.parser')
tab = soup.find_all('table', {'class': 'atreps-results-table'})[1]
redshift = tab.find('td', {'class': 'cell-redshift'})
print(redshift.text)
Or try simply (reusing the soup object from the snippet above):
soup.select_one('div.field-redshift > div.value > b').text
If you view the page source of the URL, you will find that there are two script elements containing CDATA. But the script element you are interested in has jQuery in it, so you have to select the script element based on this knowledge. After that, you need to do some cleaning to get rid of the CDATA tags and the jQuery wrapper. Then, with the help of the json library, convert the JSON data to a Python dictionary.
import requests
from bs4 import BeautifulSoup
import json
page = requests.get('https://wis-tns.weizmann.ac.il/object/2020aclx')
htmlpage = BeautifulSoup(page.text, 'html.parser')
scriptelements = htmlpage.find_all('script')
for script in scriptelements:
    if 'CDATA' in script.text and 'jQuery' in script.text:
        # strip the CDATA markers and the jQuery.extend(...) wrapper so only JSON remains
        scriptcontent = (script.text
                         .replace('<!--//--><![CDATA[//>', '')
                         .replace('<!--', '')
                         .replace('//--><!]]>', '')
                         .replace('jQuery.extend(Drupal.settings,', '')
                         .replace(');', ''))
        break
jsondata = json.loads(scriptcontent)
print(jsondata['objectFlot']['plotMain1']['params']['redshift'])
Hi, I want to get the text (the number 18) from an em tag on the page.
When I ran my code, it did not work and gave me only an empty list. Can anyone help me? Thank you~
Here is my code:
from urllib.request import urlopen
from bs4 import BeautifulSoup
url = 'https://blog.naver.com/kwoohyun761/221945923725'
html = urlopen(url)
soup = BeautifulSoup(html, 'lxml')
likes = soup.find_all('em', class_='u_cnt _count')
print(likes)
When you disable javascript you'll see that the like count is loaded dynamically, so you have to use a service that renders the website and then you can parse the content.
You can use an API: https://www.scraperapi.com/
Or run your own for example: https://github.com/scrapinghub/splash
EDIT:
First of all, I missed that you were using urlopen incorrectly; the correct way is described here: https://docs.python.org/3/howto/urllib2.html (assuming you are using Python 3, which seems to be the case judging by the print statement).
Furthermore, looking at the issue again, it is a bit more complicated. When you look at the source code of the page, it actually loads an iframe, and in that iframe you have the actual content. Hit Ctrl+U to see the source code of the original url, since the site seems to block the browser context menu.
So in order to achieve your crawling objective you have to first grab the initial page and then grab the page you are interested in:
from urllib.request import urlopen
from bs4 import BeautifulSoup
# original url
url = "https://blog.naver.com/kwoohyun761/221945923725"
with urlopen(url) as response:
    html = response.read()

soup = BeautifulSoup(html, 'lxml')
iframe = soup.find('iframe')
# iframe grabbed, construct real url
print(iframe['src'])
real_url = "https://blog.naver.com" + iframe['src']

# do your crawling
with urlopen(real_url) as response:
    html = response.read()

soup = BeautifulSoup(html, 'lxml')
likes = soup.find_all('em', class_='u_cnt _count')
print(likes)
You might be able to avoid one round trip by analyzing the original url and the URL in the iframe. At first glance it looked like the iframe url can be constructed from the original url.
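For example (a hypothetical shortcut; that the iframe src follows the usual blogId/logNo query pattern is an assumption, so verify it against the printed iframe['src'] first):

# hypothetical: build the iframe url straight from the parts of the original url
blog_id, log_no = "kwoohyun761", "221945923725"
real_url = f"https://blog.naver.com/PostView.nhn?blogId={blog_id}&logNo={log_no}"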
You'll still need a rendered version of the iframe url to grab your desired value.
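For instance, with Splash (a sketch, assuming a local Splash instance is listening on port 8050, e.g. via its docker image):

import requests
from bs4 import BeautifulSoup

# sketch: let Splash render the iframe url, then parse the rendered html
resp = requests.get('http://localhost:8050/render.html',
                    params={'url': real_url, 'wait': 2})
soup = BeautifulSoup(resp.text, 'lxml')
print(soup.find_all('em', class_='u_cnt _count'))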
I don't know what this site is about, but it seems they do not want to be crawled, so maybe you should respect that.
I am trying to web-scrape using an h2 tag, but BeautifulSoup returns an empty list.
<h2 class="iCIMS_InfoMsg iCIMS_InfoField_Job">
from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("https://careersus-endologix.icims.com/jobs/2034/associate-supplier-quality-engineer/job")
bs0bj = BeautifulSoup(html, "lxml")
nameList = bs0bj.find_all("h2", {"class": "iCIMS_InfoMsg iCIMS_InfoField_Job"})
print(nameList)
The content is inside an iframe and updated via js, so it is not present in the initial request. You can use the same link the page uses to obtain the iframe content (the iframe src). Then extract the string from the script tag that holds the info and load it with json, extract the description (which is html) and pass it back to bs to then select the h2 tags. The rest of the info is now stored in the second soup object as well, if required.
import requests
from bs4 import BeautifulSoup as bs
import json
r = requests.get('https://careersus-endologix.icims.com/jobs/2034/associate-supplier-quality-engineer/job?mobile=false&width=1140&height=500&bga=true&needsRedirect=false&jan1offset=0&jun1offset=60&in_iframe=1')
soup = bs(r.content, 'lxml')
script = soup.select_one('[type="application/ld+json"]').text
data = json.loads(script)
soup = bs(data['description'], 'lxml')
headers = [item.text for item in soup.select('h2')]
print(headers)
The answer lies hidden in two things:
javascript-rendered contents, loaded after document.onload
In particular, the content managed by js comes after a comment and is, indeed, rendered by js. The line where the block starts is: <!--BEGIN ICIMS-->
As you can imagine, the h2 with class "iCIMS_InfoMsg iCIMS_InfoField_Job" DOESN'T exist yet WHEN you call the bs4 methods.
The solution?
IMHO the best way to achieve what you want is to use selenium to get a fully rendered web page.
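A minimal sketch (assuming selenium and a matching browser driver are installed; since the actual content sits in an iframe, you may additionally need driver.switch_to.frame(...) before reading the page source):

from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome()
driver.get("https://careersus-endologix.icims.com/jobs/2034/associate-supplier-quality-engineer/job")
# hand the rendered DOM to bs4 once js has populated the page
soup = BeautifulSoup(driver.page_source, "lxml")
print(soup.find_all("h2", {"class": "iCIMS_InfoMsg iCIMS_InfoField_Job"}))
driver.quit()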
Check this also:
Web-scraping JavaScript page with Python
I'm trying to scrape a webpage that contains a table of test results using Python and BeautifulSoup. At this point I don't mind if it's just raw html / unparsed data.
There is a table of results, all contained within a parent DIV tag with the class 'test-view-grid-area'.
I got the class name of the DIV tag by inspecting the webpage in Chrome, and when viewing the source of the webpage it is definitely correct, but when I run the code below, my results come back as:
[<div class="test-view-grid-area"></div>]
So it appears to be finding the tag but not returning its contents? I am not sure what I need to do to get the contents of the DIV class returned.
from bs4 import BeautifulSoup
import urllib3
http = urllib3.PoolManager()
url = '[url of server / webpage]'
headers = {}  # request headers, if any
response = http.request('GET', url, headers=headers)
soup = BeautifulSoup(response.data, 'html.parser')
grid_data = soup.find_all("div", class_="test-view-grid-area")
print(grid_data)
Edit: I've gotten a little further. I am now getting the following response directly from the script tag that returns a JSON string:
[<script class="__allSuitesOfSelectedPlan" defer="defer" type="application/json">
{"selectedOutcome":"","selectedTester":{"displayName" <etc>}</script>]
So now I am trying to figure out how to write a regex pattern that captures everything between the braces, run it against my initial scrape, and then load the JSON string into an object.
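Regex may not be necessary: if the script body is pure JSON (an assumption based on the sample above), you can hand it straight to the json module, reusing the soup object from your snippet:

import json

# the script tag's class comes from the response sample shown above
script = soup.find("script", class_="__allSuitesOfSelectedPlan")
data = json.loads(script.string)
print(data["selectedOutcome"])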
I am looking for a way to scrape data from the student accommodation website uniplaces: https://www.uniplaces.com/en/accommodation/berlin.
In the end, I would like to scrape particular information for each property, such as bedroom size, number of roommates, location. In order to do this, I will first have to scrape all property links and then scrape the individual links afterwards.
However, even after going through the console and using BeautifulSoup to extract urls, I was not able to extract the urls leading to the separate listings. They don't seem to be included as an href, and I wasn't able to identify the links in any other format within the html code.
This is the python code I used, but it didn't return anything:
from bs4 import BeautifulSoup
import urllib.request
resp = urllib.request.urlopen("https://www.uniplaces.com/accommodation/lisbon")
soup = BeautifulSoup(resp, from_encoding=resp.info().get_param('charset'))
for link in soup.find_all('a', href=True):
    print(link['href'])
So my question is: if the links are not included in http:// format or referenced as an href, is there any way to extract the listing urls?
I would really highly appreciate any support on this!
All the best,
Hannah
If you look at the network tab, you will find an API call to this url: https://www.uniplaces.com/api/search/offers?city=PT-lisbon&limit=24&locale=en_GB&ne=38.79507211908374%2C-9.046124472314432&page=1&sw=38.68769060641113%2C-9.327992453271463
which specifies the location PT-lisbon and the northeast (ne) and southwest (sw) bounds. From this response, you can get the id of each offer and append it to the current url; you can also get all the info you see on the webpage (price, description, etc.).
For instance:
import requests

params = {
    "city": 'PT-lisbon',
    "limit": '24',
    "locale": 'en_GB',
    # plain commas here: requests url-encodes them (a literal %2C would get double-encoded)
    "ne": '38.79507211908374,-9.046124472314432',
    "page": '1',
    "sw": '38.68769060641113,-9.327992453271463'
}
resp = requests.get(url='https://www.uniplaces.com/api/search/offers', params=params)
body = resp.json()

base_url = 'https://www.uniplaces.com/accommodation/lisbon'
data = [
    (
        t['id'],                         # offer id
        base_url + '/' + t['id'],        # this is the offer page
        t['attributes']['accommodation_offer']['title'],
        t['attributes']['accommodation_offer']['price']['amount'],
        t['attributes']['accommodation_offer']['available_from']
    )
    for t in body['data']
]
print(data)
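If you need more than the first 24 offers, one guess is that you can walk the result pages by bumping the page parameter in the same call (whether and where the API caps this is untested):

# hypothetical pagination: reuse the params dict from the snippet above
all_offers = []
for page in range(1, 5):
    params['page'] = str(page)
    resp = requests.get(url='https://www.uniplaces.com/api/search/offers', params=params)
    all_offers.extend(resp.json()['data'])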