Say I want to scrape the following url:
https://soundcloud.com/search/sounds?q=edm&filter.created_at=last_week
I have the following python code:
import requests
from lxml import html
urlToSearch = 'https://soundcloud.com/search/sounds?q=edm&filter.created_at=last_week'
page = requests.get(urlToSearch)
tree = html.fromstring(page.content)
print(tree.xpath('//*[@id="content"]/div/div/div[3]/div/div/div/ul/div/div/text()'))
The trouble is when I print the text at the following xpath:
//*[@id="content"]/div/div/div[3]/div/div/div/ul/div/div
nothing appears but [], despite my confirming that "Found 500+ tracks" should be there. What am I doing wrong?
The problem is that requests does not render dynamic content; it only fetches the static HTML.
Right-click on the page and view the page source: you'll see that the static content does not include any of the content that appears after the JavaScript has run.
However, if you open dev tools (in Chrome), click on Network, and filter by XHR, it looks like you can get the data through an API, which is better than scraping anyway!
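As a rough sketch of that approach (the endpoint and parameters below are assumptions; copy the real request URL from the Network/XHR entry in dev tools):
import requests

# Hypothetical endpoint: replace with the URL copied from the Network/XHR tab.
# SoundCloud's internal API also expects a client_id, which you can see in the
# query string of the requests the page itself makes.
xhr_url = 'https://api-v2.soundcloud.com/search/tracks'
params = {'q': 'edm', 'filter.created_at': 'last_week', 'client_id': 'YOUR_CLIENT_ID'}

data = requests.get(xhr_url, params=params).json()
print(data.get('total_results'))  # e.g. the number behind "Found 500+ tracks"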
The problem is that with modern websites, almost every page changes significantly after it has loaded, as JavaScript, CSS, etc. are applied. You fetch the basic HTML before any DOM updates have been made, so it looks different from what you see when actually visiting the page in a browser.
Use the Selenium WebDriver framework (mostly used for test automation); it will emulate loading the page, executing the JavaScript, and so on.
Selenium Documentation for Python
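A minimal sketch of that approach (the CSS selector below is an assumption; inspect the rendered page to find the real one):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes a chromedriver is available on PATH
driver.get('https://soundcloud.com/search/sounds?q=edm&filter.created_at=last_week')
driver.implicitly_wait(10)  # give the JavaScript time to render the results

# Hypothetical selector: inspect the rendered DOM for the real class name.
for result in driver.find_elements(By.CSS_SELECTOR, '.searchItem'):
    print(result.text)

driver.quit()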
I have been trying to scrape some data using Beautiful Soup from https://www.eia.gov/coal/markets/. However, when I parse the contents, some of the data does not show up at all. Those data fields are visible in the Chrome inspector but not in the soup. They do not seem to be plain text elements; I think they are fed from an external database. I have attached the screenshots below. Is there any other way to scrape that data?
Thanks in advance.
Google inspector: (screenshot omitted)
Beautiful Soup parsed content: (screenshot omitted)
@DMart is correct. The data you are looking for is being populated by JavaScript; have a look at line 1629 in the page source. Beautiful Soup doesn't act as a client browser, so there is nowhere for the JS to execute. It looks like Selenium is your best bet.
See this thread for more information.
There is not enough detail in your question, but this information is probably dynamically loaded, and you're not fetching the entire rendered page source.
Without your code it's tough to tell whether you're using Selenium (you tagged the question as such), which may indicate you're relying on page_source; that does not guarantee you the entire completed source of the page you're looking at.
If you're using requests, it's even less likely you're capturing the page's completed source code.
The data is loaded via Ajax, so it is not available in the initial document. If you go to the Network tab in Chrome dev tools, you will see that the site reaches out to https://www.eia.gov/coal/markets/coal_markets_json.php. I searched for some of the numbers in the response, and it looks like the data you are looking for is there.
This is a direct JSON response from the backend. It's better than Selenium if you can get it to work.
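A minimal sketch of that approach (the endpoint is the one visible in dev tools; the shape of the JSON is an assumption, so print it first to explore):
import requests

# Endpoint spotted in the Network/XHR tab of Chrome dev tools.
url = 'https://www.eia.gov/coal/markets/coal_markets_json.php'
data = requests.get(url).json()

# The response structure is an assumption; dump it to see the actual keys.
print(data)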
Thank you all!
Opening the page with Selenium using a webdriver and then parsing the page source with Beautiful Soup worked:
from selenium import webdriver
from bs4 import BeautifulSoup as BS

driver = webdriver.Chrome()
driver.get('https://www.eia.gov/coal/markets/')
html = driver.page_source
soup = BS(html, 'html.parser')
table = soup.find("table", {'id': 'snl_dpst'})
rows = table.find_all("tr")
So I'm using Scrapy to scrape data from the Amazon books section. But somehow I found out that it has some dynamic data. I want to know how dynamic data can be extracted from the website. Here's something I've tried so far:
import scrapy
from ..items import AmazonsItem

class AmazonSpiderSpider(scrapy.Spider):
    name = 'amazon_spider'
    start_urls = ['https://www.amazon.in/s?k=agatha+christie+books&crid=3MWRDVZPSKVG0&sprefix=agatha%2Caps%2C269&ref=nb_sb_ss_i_1_6']

    def parse(self, response):
        items = AmazonsItem()
        products_name = response.css('.s-access-title::attr("data-attribute")').extract()
        for product_name in products_name:
            print(product_name)
        next_page = response.css('li.a-last a::attr(href)').get()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)
Now, I was using SelectorGadget to select the classes I have to scrape, but in the case of a dynamic website, it doesn't work.
So how do I scrape a website which has dynamic content?
What exactly is the difference between dynamic and static content?
How do I extract other information like price and image from the website? And how do I get particular classes, for example the price?
How would I know that data is dynamically created?
So how do I scrape a website which has dynamic content?
There are a few options:
Use Selenium, which allows you to simulate opening a browser, let the page render, and then pull the HTML source code
Sometimes you can look at the XHR requests and see if you can fetch the data directly (like from an API)
Sometimes the data is within the <script> tags of the HTML source. You could search through those and use json.loads() once you manipulate the text into JSON format (see the sketch after this list)
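A rough sketch of that third option (the URL and the window.__DATA__ pattern are assumptions; real pages embed JSON in many different shapes):
import json
import requests
from lxml import html

page = requests.get('https://example.com')  # hypothetical URL
tree = html.fromstring(page.content)

for script in tree.xpath('//script/text()'):
    script = script.strip()
    # Hypothetical pattern: many sites assign a JSON blob to a global variable.
    if script.startswith('window.__DATA__ ='):
        data = json.loads(script[len('window.__DATA__ ='):].rstrip(';'))
        print(data)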
What exactly is the difference between dynamic and static content?
Dynamic means the data is generated by a request after the initial page request. Static means all the data is there at the original call to the site.
How do I extract other information like price and image from the website? And how do I get particular classes, for example the price?
Refer to the answer to your first question; the same techniques apply.
How would I know that data is dynamically created?
You'll know it's dynamically created if you see it in the dev tools inspector but not in the HTML page source you first request. You can also check whether the data is generated by additional requests in dev tools by looking at Network -> XHR.
Lastly, Amazon does offer an API to access the data; try looking into that as well.
If you want to load dynamic content, you will need to simulate a web browser. When you make an HTTP request, you will only get the text returned by that request, and nothing more. To simulate a web browser and interact with the data in the browser, use the selenium package for Python:
https://selenium-python.readthedocs.io/
So how do I scrape a website which has dynamic content?
Websites that have dynamic content have their own APIs from which they pull data. That data is not fixed; it will be different if you check it again after some time. But that does not mean you can't scrape a dynamic website. You can use automated testing frameworks like Selenium or Puppeteer.
What exactly is the difference between dynamic and static content?
As I explained in your first question, static data is fixed and will stay the same, while dynamic data is periodically updated or changes asynchronously.
How do I extract other information like price and image from the website? And how do I get particular classes, for example the price?
For that, you can use libraries like BeautifulSoup in Python and cheerio in Node.js. Their docs are quite easy to understand, and I highly recommend reading them thoroughly. A short sketch follows below.
You can also follow this tutorial.
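Picking up that price-and-image sub-question, here is a rough BeautifulSoup sketch (the class names and sample markup are assumptions; inspect the actual page in dev tools, and in practice feed in the rendered source, e.g. Selenium's driver.page_source):
from bs4 import BeautifulSoup

# Hypothetical sample of rendered markup, standing in for driver.page_source.
html = '''
<div class="a-price"><span class="a-offscreen">299</span></div>
<img class="s-image" src="https://example.com/cover.jpg">
'''

soup = BeautifulSoup(html, 'html.parser')

# Hypothetical selectors: replace with the classes you find on the real page.
price = soup.select_one('.a-price .a-offscreen')
image = soup.select_one('img.s-image')

print(price.get_text(strip=True))  # 299
print(image['src'])                # https://example.com/cover.jpg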
How would I know that data is dynamically created?
While reloading the page, open the Network tab in Chrome dev tools. You will see a lot of API calls working behind the scenes to provide the relevant data for the page you are trying to access. If that is the case, the website is dynamic.
So how do I scrape a website which has dynamic content?
To scrape dynamic content from websites, we need to let the web page load completely, so that the data can be injected into the page.
What exactly is the difference between dynamic and static content?
Content on static websites is fixed and is not processed on the server; it is returned directly from prebuilt source files.
Dynamic websites load their contents by processing them on the server side at runtime. These sites can show different data every time you load the page, or whenever the data is updated.
How would I know that data is dynamically created?
You can open dev tools and go to the Network tab. Once you refresh the page, look for XHR requests or requests to APIs. If requests like those exist, the site is dynamic; otherwise it is static.
How do I extract other information like price and image from the website? And how do I get particular classes, for example the price?
To extract dynamic content from websites, we can use Selenium (in Python, one of the best options):
Selenium - an automated browser simulation framework
You can load the page and use CSS selectors to match the data on the page. Following is an example of how you can use them.
import time
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.amazon.in/s?k=agatha+christie+books&crid=3MWRDVZPSKVG0&sprefix=agatha%2Caps%2C269&ref=nb_sb_ss_i_1_6")
time.sleep(4)  # crude wait for the dynamic content to render
titles = driver.find_elements_by_css_selector(
    ".a-size-medium.a-color-base.a-text-normal")
print(titles[0].text)
In case you don't want to use Python, there are other open-source options like Puppeteer and Playwright, as well as complete scraping platforms such as Bright Data that have built-in capabilities to extract dynamic content automatically.
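For completeness, a minimal Playwright sketch (shown here in Python; Playwright also ships for Node.js, and the URL is the one from the question):
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://www.amazon.in/s?k=agatha+christie+books&crid=3MWRDVZPSKVG0&sprefix=agatha%2Caps%2C269&ref=nb_sb_ss_i_1_6")
    html = page.content()  # the fully rendered HTML, after JavaScript has run
    browser.close()

print(html[:500])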
I am just starting with web scraping and, unfortunately, I am facing a showstopper: I would like to pull some financial data, but it seems that the website is quite complex (dynamic content, etc.).
Data I would like to pull:
Website:
https://www.de.vanguard/web/cf/professionell/de/produktart/detailansicht/etf/9527/EQUITY/performance
So far, I've used Beautiful Soup to get this done. However, I cannot even find the table. Any ideas?
Look into using Selenium to launch an automated web browser. This loads the web page and its associated dynamic content, and also gives you the option to 'click' on certain web elements to load content that may be generated on click. You can use this in tandem with BeautifulSoup by passing driver.page_source to BeautifulSoup and parsing through it that way.
This SO answer provides a basic example that would serve as a good starting point: Python WebDriver how to print whole page source (html)
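A rough sketch of that tandem for the page above (the fixed sleep and the lack of a known table id are assumptions; inspect the rendered page for a stable selector):
import time
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome()
driver.get('https://www.de.vanguard/web/cf/professionell/de/produktart/detailansicht/etf/9527/EQUITY/performance')
time.sleep(5)  # crude wait; give the dynamic content time to load

soup = BeautifulSoup(driver.page_source, 'html.parser')
driver.quit()

# The page structure is an assumption: list every table first, then narrow down.
for table in soup.find_all('table'):
    print(table.get('id'), table.get('class'))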
I'm trying to scrape the content from the following website:
https://mobile.admiral.at/en/event/event/all#/event/15a822ab-84a1-e511-90a2-000c297013a7
I have previously scraped the content successfully using dryscrape and the following code:
import dryscrape
import webkit_server
from lxml import html
session = dryscrape.Session()
session.set_timeout(20)
session.set_attribute('auto_load_images', False)
session.visit('https://mobile.admiral.at/en/event/event/all#/event/15a822ab-84a1-e511-90a2-000c297013a7')
response = session.body()
tree = html.fromstring(response)
print(tree.xpath('(//td[@class="team-name"]/text())[1]'))
The above example would print the home team (which in this case would be 'France')
It seems that the structure of the source has been changed, so I'm unable to scrape the contents properly.
What confuses me is that I'm able to see the tags using the Firefox Inspector tool; however, they're not visible in the response when I pull the source.
I assume they must have hidden the content somehow to make it impossible (?) to scrape the data.
Could someone please point me in the right direction on how to scrape the content properly?
The content that you need is loaded using jQuery (Ajax). I don't know if dryscrape has been updated lately, but the last time I used it, it didn't support Ajax content loaded via jQuery...
Anyway, just take a look at the network inspector in Chrome and you will see that the main content is loaded from an API. You can call that API directly and you will get a nice JSON with all the data of the page:
import requests
data = requests.get('https://mobile.admiral.at/;apiVer=json;api=main;jsonType=object;apiRw=1/en/api/event/get-event?id=15a822ab-84a1-e511-90a2-000c297013a7').json()
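Since the structure of that JSON isn't shown here, a quick way to explore it (assuming the response parses to a dict) is:
print(data.keys())  # inspect the top-level keys, then drill down to the team names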
I am trying to write a Python script that will periodically check a website to see if an item is available. I have used requests.get, lxml.html, and xpath successfully in the past to automate website searches. In the case of this particular URL (http://www.anthropologie.com/anthro/product/4120200892474.jsp?cm_vc=SEARCH_RESULTS#/) and others on the same website, my code was not working.
import requests
from lxml import html
page = requests.get("http://www.anthropologie.com/anthro/product/4120200892474.jsp?cm_vc=SEARCH_RESULTS#/")
tree = html.fromstring(page.text)
html_element = tree.xpath(".//div[@class='product-soldout ng-scope']")
At this point, html_element should be a list of elements (I think in this case only one), but instead it is empty. I think this is because the website is not loading all at once, so when requests.get() goes out and grabs it, it's only grabbing the first part. So my questions are:
1: Am I correct in my assessment of the problem?
and
2: If so, is there a way to make requests.get() wait before returning the HTML, or perhaps another route entirely to get the whole page?
Thanks
Edit: Thanks for both responses. I used Selenium and got my script working.
You are not correct in your assessment of the problem.
You can check the results and see that there's a </html> right near the end. That means you've got the whole page.
And requests.text always grabs the whole page; if you want to stream it a bit at a time, you have to do so explicitly.
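You can verify that with a quick sanity check (not part of the original code):
import requests

page = requests.get("http://www.anthropologie.com/anthro/product/4120200892474.jsp?cm_vc=SEARCH_RESULTS#/")
# If the closing tag is present, the full document was downloaded.
print(page.text.rstrip().endswith('</html>'))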
Your problem is that the table doesn't actually exist in the HTML; it's built dynamically by client-side JavaScript. You can see that by actually reading the HTML that's returned. So, unless you run that JavaScript, you don't have the information.
There are a number of general solutions to that. For example:
Use selenium or similar to drive an actual browser to download the page.
Manually work out what the JavaScript code does and do equivalent work in Python.
Run a headless JavaScript interpreter against a DOM that you've built up.
The page uses JavaScript to load the table, and that JavaScript is not executed when requests fetches the HTML. So you are getting all the HTML, just not the parts generated by JavaScript. You could use Selenium combined with PhantomJS for headless browsing to get the rendered HTML:
from selenium import webdriver
browser = webdriver.PhantomJS()
browser.get("http://www.anthropologie.eu/anthro/index.jsp#/")
html = browser.page_source
print(html)