I am looking for a way to scrape data from the student-accommodation website Uniplaces: https://www.uniplaces.com/en/accommodation/berlin.
In the end, I would like to scrape particular information for each property, such as bedroom size, number of roommates, and location. To do this, I will first have to scrape all the property links and then scrape each individual listing afterwards.
However, even after going through the console and using BeautifulSoup to extract URLs, I was not able to extract the URLs leading to the separate listings. They don't seem to be included as an href, and I wasn't able to identify the links in any other format within the HTML code.
This is the Python code I used, but it didn't return anything:
from bs4 import BeautifulSoup
import urllib.request

resp = urllib.request.urlopen("https://www.uniplaces.com/accommodation/lisbon")
soup = BeautifulSoup(resp, 'html.parser', from_encoding=resp.info().get_param('charset'))

for link in soup.find_all('a', href=True):
    print(link['href'])
So my question is: if the links are not included in http:// format or referenced via href, is there any way to extract the listing URLs?
I would really appreciate any support on this!
All the best,
Hannah
If you look at the network tab, you will find an API call to this URL: https://www.uniplaces.com/api/search/offers?city=PT-lisbon&limit=24&locale=en_GB&ne=38.79507211908374%2C-9.046124472314432&page=1&sw=38.68769060641113%2C-9.327992453271463
It specifies the location (PT-lisbon) and the northeast (ne) and southwest (sw) bounds of the map view. From this response, you can get the id of each offer and append it to the listing base URL; you can also get all the info shown on the webpage (price, description, etc.).
For instance:
import requests

resp = requests.get(
    url='https://www.uniplaces.com/api/search/offers',
    params={
        "city": 'PT-lisbon',
        "limit": '24',
        "locale": 'en_GB',
        "ne": '38.79507211908374,-9.046124472314432',  # plain commas; requests URL-encodes them
        "page": '1',
        "sw": '38.68769060641113,-9.327992453271463',
    })
body = resp.json()

base_url = 'https://www.uniplaces.com/accommodation/lisbon'
data = [
    (
        t['id'],                                                 # offer id
        base_url + '/' + t['id'],                                # the offer page
        t['attributes']['accommodation_offer']['title'],
        t['attributes']['accommodation_offer']['price']['amount'],
        t['attributes']['accommodation_offer']['available_from'],
    )
    for t in body['data']
]
print(data)
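If you need more than the first 24 results, pagination is a matter of incrementing the page parameter. A minimal sketch, assuming the API keeps accepting higher page values and that an empty data list marks the end of the results:

import requests

def fetch_all_offers(city='PT-lisbon',
                     ne='38.79507211908374,-9.046124472314432',
                     sw='38.68769060641113,-9.327992453271463'):
    offers = []
    page = 1
    while True:
        resp = requests.get('https://www.uniplaces.com/api/search/offers',
                            params={'city': city, 'limit': '24', 'locale': 'en_GB',
                                    'ne': ne, 'sw': sw, 'page': str(page)})
        page_data = resp.json().get('data', [])
        if not page_data:  # assumption: an empty page means we are past the last result
            break
        offers.extend(page_data)
        page += 1
    return offers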
I am trying to get the links to the individual search results on a website (National Gallery of Art), but requesting the search URL doesn't load the search results. Here is how I try to do it:
import requests
from bs4 import BeautifulSoup

url = 'https://www.nga.gov/collection-search-result.html?artist=C%C3%A9zanne%2C%20Paul'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
I can see that the links to the individual results should be found under soup.findAll('a'), but they do not appear; instead, the last output is a link to an empty search result:
https://www.nga.gov/content/ngaweb/collection-search-result.html
How could I get a list of links where the first is the first search result (https://www.nga.gov/collection/art-object-page.52389.html), the second is the second search result (https://www.nga.gov/collection/art-object-page.52085.html), and so on?
Actually, the data is generated from an API call's JSON response, which contains the desired list of links.
Code:
import requests

url = 'https://www.nga.gov/collection-search-result/jcr:content/parmain/facetcomponent/parList/collectionsearchresu.pageSize__30.pageNumber__1.json?artist=C%C3%A9zanne%2C%20Paul&_=1634762134895'
r = requests.get(url)

for item in r.json()['results']:
    abs_url = f"https://www.nga.gov{item['url']}"
    print(abs_url)
Output:
https://www.nga.gov/content/ngaweb/collection/art-object-page.52389.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.52085.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.46577.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.46580.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.46578.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.136014.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.46576.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.53120.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.54129.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.52165.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.46575.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.53122.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.93044.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.66405.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.53119.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.53121.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.46579.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.66406.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.45866.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.53123.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.45867.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.45986.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.45877.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.136025.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.74193.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.74192.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.66486.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.76288.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.76223.html
https://www.nga.gov/content/ngaweb/collection/art-object-page.76268.html
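The pageSize__30.pageNumber__1 segment in the URL suggests the endpoint is paginated, so a sketch like the following could walk all result pages (assuming the page number can simply be substituted into the URL and that an empty results list marks the end):

import requests

url_template = ('https://www.nga.gov/collection-search-result/jcr:content/parmain/'
                'facetcomponent/parList/collectionsearchresu.pageSize__30.pageNumber__{}.json')

page = 1
while True:
    r = requests.get(url_template.format(page), params={'artist': 'Cézanne, Paul'})
    results = r.json().get('results', [])
    if not results:  # assumption: an empty page ends the pagination
        break
    for item in results:
        print(f"https://www.nga.gov{item['url']}")
    page += 1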
This seems to work for me:
from bs4 import BeautifulSoup
import requests

url = 'https://www.nga.gov/collection-search-result.html?artist=C%C3%A9zanne%2C%20Paul'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

for a in soup.findAll('a', href=True):  # href=True skips anchors without an href
    print(a['href'])
It returns all of the a href links in the HTML.
For the links from the search results specifically, those are loaded via AJAX, so you would need something that renders the JavaScript, such as headless Chrome. You can read about one way to implement this, which fits your use case very closely, here: http://theautomatic.net/2019/01/19/scraping-data-from-javascript-webpage-python/
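As a minimal sketch of that kind of approach using the requests-html library (one way to render JavaScript from Python; render() downloads Chromium on first use, and the 'art-object-page' filter is an assumption based on the result URLs shown above):

from requests_html import HTMLSession

session = HTMLSession()
r = session.get('https://www.nga.gov/collection-search-result.html?artist=C%C3%A9zanne%2C%20Paul')
r.html.render()  # executes the page's JavaScript; downloads Chromium the first time

# keep only the links that look like individual object pages (assumed pattern)
for link in sorted(r.html.absolute_links):
    if 'art-object-page' in link:
        print(link)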
If you want to ask how to render JavaScript from Python and then parse the result, you would need to close this question and open a new one, as it is not scoped correctly as is.
Is there any way to crawl every page under a URL?
For example, https://gogo.mn/, to find every article page on the site?
The following is what I have so far. The problem is that the news article URL patterns are unpredictable, for example https://gogo.mn/r/qqm4m,
so code like the following will never find the articles.
base_url = 'https://gogo.mn/'

for i in range(number_pages):
    url = base_url + str(i)
    req = requests.get(url)
    soup = BeautifulSoup(req.content)
How do I crawl such websites?
The easiest way is to first get the page from the website. This can be accomplished like this:
import requests

url = 'https://gogo.mn/'
response = requests.get(url)
Then your page is contained in the response variable which you can examine by looking at response.text.
Now use BeautifulSoup to find all of the links that are contained on the page:
from bs4 import BeautifulSoup

html = BeautifulSoup(response.text, 'html.parser')
a_links = html.find_all('a', href=True)  # href=True skips anchors without an href
This returns a bs4.element.ResultSet that can be iterated through with a for loop. Looking at your particular site, I found that many of their links don't include the base URL, so some normalization of the URLs has to be performed.
for link in a_links:
    if ('https' in link['href']) or ('http' in link['href']):
        print(link['href'])
    else:
        xLink = link['href'][1:]
        print(f'{url}{xLink}')
Once you've done that, you have all of the links from a given page. You would then need to eliminate duplicates and, for each new page, run through its links as well; this means recursively stepping through every link you find.
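A minimal sketch of that crawl, using a visited set for duplicate elimination and an explicit queue instead of recursion (the same URL normalization idea as above, via urljoin):

from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl(start_url, max_pages=100):
    domain = urlparse(start_url).netloc
    visited = set()
    queue = [start_url]
    while queue and len(visited) < max_pages:
        page_url = queue.pop(0)
        if page_url in visited:
            continue
        visited.add(page_url)
        try:
            response = requests.get(page_url, timeout=10)
        except requests.RequestException:
            continue
        soup = BeautifulSoup(response.text, 'html.parser')
        for link in soup.find_all('a', href=True):
            absolute = urljoin(page_url, link['href'])  # normalizes relative links
            if urlparse(absolute).netloc == domain and absolute not in visited:
                queue.append(absolute)
    return visited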
Regards
I have not used Scrapy. But to get all the content using only requests and BeautifulSoup, you need to find the index page (sometimes archives or search results) of the website, save the URLs of all the pages, loop through those URLs, and save the content of each page.
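As a rough sketch of that loop (the https://gogo.mn/r/ prefix comes from the article URL pattern mentioned in the question; the actual index page and link pattern would need to be adapted to the site):

import requests
from bs4 import BeautifulSoup

index_url = 'https://gogo.mn/'  # ideally an archive or search-results page
soup = BeautifulSoup(requests.get(index_url).content, 'html.parser')

# collect article URLs from the index page (prefix taken from the question's example)
article_urls = {a['href'] for a in soup.find_all('a', href=True)
                if a['href'].startswith('https://gogo.mn/r/')}

for article_url in article_urls:
    article = BeautifulSoup(requests.get(article_url).content, 'html.parser')
    print(article_url, article.title.string if article.title else '')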
I am trying to get the origin of this product with BeautifulSoup. I am trying to select the div that contains the product data, but I can't.
Later I tried to select another div, one of the first in the code, but had the same problem. Then I ran prettify() and the div I am searching for didn't even appear. How can I get this data?
Here is the code I tried:
import urllib.request
from bs4 import BeautifulSoup
urlpage = 'https://www.esselungaacasa.it/ecommerce/nav/auth/supermercato/home.html?freevisit=true#!/negozio/prodotto/5397031?productCode=417932&productType=GROCERY&menuItemId=300000000002399'
page = urllib.request.urlopen(urlpage)
soup = BeautifulSoup(page, 'html.parser')
results = soup.findAll('div', attrs={'class': 'dettaglio'})
I wish I could get all the content of that div so later I can scrape the paragraphs inside it (the 'Origine' paragraph, specifically). Thank you!
The page makes requests for that content to this url:
https://www.esselungaacasa.it/ecommerce/resources/auth/displayable/breadcrumbs/300000000002399
This requires header authentication, which seems to comprise the headers shown below (tested multiple times). The values are only valid for a couple of minutes, so you need to see whether you can obtain them from a prior request and update them dynamically.
The JSON contains HTML, which you can extract and parse with BeautifulSoup.
import requests

headers = {
    'Cookie': 'JSESSIONID=2GqxcW2JyxJ6JvSj7N6VsySBjG29fv4X4tqyVhkcQCJk012YZrJF!-137423361; rxVisitor=155754270561377S2OAJ7NF3RRVHAGOONVM6J8BTDM9E9; _ga=GA1.3.1899558727.1557542711; _gid=GA1.3.1887185695.1557542711; cc_advertising=yes; dtSa=-; BIGipServerPOOL-produzione20.esselungaacasa.it-HTTP=!t1YtlfoXiajamCWJ/a5rCzj/QGm88V4Qo0VUYPxsnhd0TBgWyp+Vfi6oydBlxU/hJ9i5S7kWGT9W/is=; BIGipServerPOOL-ecom30.webapp.esselungaacasa.it-AEM-HTTP=!EMW5HHM3WmpSfPyJ/a5rCzj/QGm88eK13IPf7jx3ZN2rGHroQLAAMcP+cqfG6pU/IQ0WkgGmjLJMCQ8=; dtCookie=1$343B9EA5CDF2E30CCE04D4415DA0CE8D|bdb705b7939fba60|1; XSRF-ECOM-TOKEN=16B8A78F9DC3F2AFFD0137EA22662C77A098944B2FD6F2F2C27693BD76BAF15C; dtLatC=127; BIGipServerPOOL-ecom30.webapp.esselungaacasa.it-8001=!S2wA3HtVHQfvqreJ/a5rCzj/QGm88ZamTbPAvAabBDwyKXTfVg7cipoMLFPFfqZEc5Cotrd56OEwVA==; _gat_UA-79392629-1=1; dtPC=1$544681956_471h17vCBGNMJLCLJIAOCFOMIEGLEBHHPIFOKNI; rxvt=1557546485823|1557542705627',
    'x-dtpc': '1$544433049_580h6vCBGNMJLCLJIAOCFOMIEGLEBHHPIFOKNI',
    'X-XSRF-TOKEN': '16B8A78F9DC3F2AFFD0137EA22662C77A098944B2FD6F2F2C27693BD76BAF15C'
}

r = requests.get('https://www.esselungaacasa.it/ecommerce/resources/auth/displayable/detail/5397031/300000000002399', headers=headers)
You can see an example of the JSON response here. The content HTML is inside the list called informations.
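As a sketch of that last step (the content key is hypothetical; inspect the actual JSON to find the real field name holding the HTML fragment):

from bs4 import BeautifulSoup

body = r.json()
for info in body['informations']:
    # 'content' is a hypothetical key for the embedded HTML fragment
    fragment = BeautifulSoup(info.get('content', ''), 'html.parser')
    print(fragment.get_text(strip=True))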
My goal is to write a Python script that takes an artist's name as string input, appends it to the base URL of the Genius search query, and then retrieves all the lyrics from the links on the returned page (the required subset being the links that contain the artist's name). I am in the initial phase right now and have only been able to retrieve all links from the page, including the ones I don't want in my subset. I tried to find a simple solution but failed repeatedly.
import requests
from bs4 import BeautifulSoup

user_input = input("Enter Artist Name = ").replace(" ", "+")
base_url = "https://genius.com/search?q=" + user_input
header = {'User-Agent': ''}
response = requests.get(base_url, headers=header)
soup = BeautifulSoup(response.content, "lxml")

for link in soup.find_all('a', href=True):
    print(link['href'])
This returns the complete list below, while I only need the ones that end with -lyrics and contain the artist's name (here, for instance, Drake). These will be the links from which I should be able to retrieve the lyrics.
https://genius.com/
/signup
/login
https://www.facebook.com/geniusdotcom/
https://twitter.com/Genius
https://www.instagram.com/genius/
https://www.youtube.com/user/RapGeniusVideo
https://genius.com/new
https://genius.com/Drake-hotline-bling-lyrics
https://genius.com/Drake-one-dance-lyrics
https://genius.com/Drake-hold-on-were-going-home-lyrics
https://genius.com/Drake-know-yourself-lyrics
https://genius.com/Drake-back-to-back-lyrics
https://genius.com/Drake-all-me-lyrics
https://genius.com/Drake-0-to-100-the-catch-up-lyrics
https://genius.com/Drake-started-from-the-bottom-lyrics
https://genius.com/Drake-from-time-lyrics
https://genius.com/Drake-the-motto-lyrics
/search?page=2&q=drake
/search?page=3&q=drake
/search?page=4&q=drake
/search?page=5&q=drake
/search?page=6&q=drake
/search?page=7&q=drake
/search?page=8&q=drake
/search?page=9&q=drake
/search?page=672&q=drake
/search?page=673&q=drake
/search?page=2&q=drake
/embed_guide
/verified-artists
/contributor_guidelines
/about
/static/press
mailto:brands#genius.com
https://eventspace.genius.com/
/static/privacy_policy
/jobs
/developers
/static/terms
/static/copyright
/feedback/new
https://genius.com/Genius-how-genius-works-annotated
https://genius.com/Genius-how-genius-works-annotated
My next step would be to use selenium to emulate scrolling, which in the case of genius.com yields the entire set of search results. Any suggestions or resources would be appreciated. I would also like a few comments on the way I wish to proceed with this solution. Can we make it more generic?
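Something like this untested sketch is what I have in mind for the scrolling part (assuming selenium with a chromedriver install, and that the page keeps loading results as you scroll):

import time
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://genius.com/search?q=drake")

# keep scrolling to the bottom until the page height stops growing
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)  # give new results time to load
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height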
P.S. I may not have explained my problem lucidly, but I have tried my best; feel free to ask about any ambiguities. I am new to scraping, Python, and programming in general, so I just wanted to make sure that I am following the right path.
Use the re module to match only the links you want.
import re

import requests
from bs4 import BeautifulSoup

user_input = input("Enter Artist Name = ").replace(" ", "+")
base_url = "https://genius.com/search?q=" + user_input
header = {'User-Agent': ''}
response = requests.get(base_url, headers=header)
soup = BeautifulSoup(response.content, "lxml")

pattern = re.compile(r"\S+-lyrics$")

for link in soup.find_all('a', href=True):
    if pattern.match(link['href']):
        print(link['href'])
Output:
https://genius.com/Drake-hotline-bling-lyrics
https://genius.com/Drake-one-dance-lyrics
https://genius.com/Drake-hold-on-were-going-home-lyrics
https://genius.com/Drake-know-yourself-lyrics
https://genius.com/Drake-back-to-back-lyrics
https://genius.com/Drake-all-me-lyrics
https://genius.com/Drake-0-to-100-the-catch-up-lyrics
https://genius.com/Drake-started-from-the-bottom-lyrics
https://genius.com/Drake-from-time-lyrics
https://genius.com/Drake-the-motto-lyrics
This just checks whether each link matches a pattern ending in -lyrics. You may use similar logic to filter on the user_input variable as well.
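For example, to also require the artist's name in the link (assuming Genius capitalizes only the first word of the slug, as in the output above):

artist_slug = user_input.replace('+', '-').capitalize()  # e.g. 'drake' -> 'Drake'
artist_pattern = re.compile(rf"^https://genius\.com/{re.escape(artist_slug)}-\S+-lyrics$")

for link in soup.find_all('a', href=True):
    if artist_pattern.match(link['href']):
        print(link['href'])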
Hope this helps.
I am new to Python and working on a scraping project. I am using Firebug to copy the CSS path of the required links. I am trying to collect the links under the "UPCOMING EVENTS" tab of http://kiascenehai.pk/, but this is just for learning how to get specific links.
I am looking for a fix for this problem and also for suggestions on how to retrieve specific links using CSS selectors.
from bs4 import BeautifulSoup
import requests

url = "http://kiascenehai.pk/"
r = requests.get(url)
data = r.text
soup = BeautifulSoup(data, 'html.parser')

for link in soup.select("html body div.body-outer-wrapper div.body-wrapper.boxed-mode div.main-outer-wrapper.mt30 div.main-wrapper.container div.row.row-wrapper div.page-wrapper.twelve.columns.b0 div.row div.page-wrapper.twelve.columns div.row div.eight.columns.b0 div.content.clearfix section#main-content div.row div.six.columns div.small-post-wrapper div.small-post-content h2.small-post-title a"):
    print(link.get('href'))
First of all, that page requires a city selection to be made (in a cookie). Use a Session object to handle this:
import requests

s = requests.Session()
s.post('http://kiascenehai.pk/select_city/submit_city', data={'city': 'Lahore'})
response = s.get('http://kiascenehai.pk/')
Now the response contains the actual page content, instead of a redirect to the city selection page.
Next, keep your CSS selector no larger than needed. On this page there isn't much to go on, as it uses a grid layout, so we first need to zoom in on the right rows:
from bs4 import BeautifulSoup

soup = BeautifulSoup(response.content, 'html.parser')
upcoming_events_header = soup.find('div', class_='featured-event')
upcoming_events_row = upcoming_events_header.find_next(class_='row')

for link in upcoming_events_row.select('h2 a[href]'):
    print(link['href'])
This is the co-founder of KiaSceneHai.pk; please don't scrape the website, a lot of effort goes into collecting the data. We offer access through our API; you can use the contact form to request access. Thank you.