Convert an XPath obtained from the browser to a usable XPath for Scrapy - python

This is a problem I always have when getting a specific XPath with my browser.
Assume that I want to extract all the images from some website like Google Image Search or Pinterest. When I use Inspect Element and then Copy XPath to get the XPath for an image, it gives me something like the following:
//*[#id="rg_s"]/div[13]/a/img
I got this from an image on Google Search. To use it in my spider, I tried Selector and HtmlXPathSelector with the following XPaths, but none of them work:
//*[#id="rg_s"]/div/a/img
//div[#id="rg_s"]/div[13]/a/img
//[#class="rg_di rg_el"]/a/img #i change this based on the raw html of page
#hxs.select(xpath).extract()
#Selector(response).xpath('xpath')
I've read many questions, but I couldn't find a general answer to how I can use XPaths obtained from a web browser in Scrapy.

Usually it is neither safe nor reliable to blindly follow the browser's suggestion about how to locate an element.
First of all, the XPath expressions that developer tools generate are usually absolute, starting from the parent of all parents, the html tag, which makes them more dependent on the page structure (Firebug can also build expressions based on id attributes).
Also, the HTML you see in the browser can differ quite a bit from what Scrapy receives, due to the asynchronous nature of the page load and JavaScript being executed dynamically in the browser. Scrapy is not a browser and "sees" only the initial HTML code of a page, before the "dynamic" part.
Instead, inspect what Scrapy really has in the response: open up the Scrapy Shell, inspect the response and debug your XPath expressions and CSS selectors:
$ scrapy shell https://google.com
>>> response.xpath('//div[@id="myid"]')
...
Here is what I got for the Google image search:
$ scrapy shell "https://www.google.com/search?q=test&tbm=isch&qscrl=1"
In [1]: response.xpath('//*[@id="ires"]//img/@src').extract()
Out[1]:
[u'https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcRO9ZkSuDqt0-CRhLrWhHAyeyt41Z5I8WhOhTkGCvjiHmRiTSvDBfHKYjx_',
u'https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcQpwyzbW_qsRenDw3d4wwpwwm8n99ukMtLCVaPiTJxyviyQVBQeRCglVaY',
u'https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSrxtoY3-3QHwhjc5Ofx8090uDYI8VOUbi3gUrd9USxZ-Vb1D5pAbOzJLMS',
u'https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcTQO1A3dDJ07tIaFMHlXNOsOnpiY_srvHKJE1xOpsMZscjL3aKGxaGLOgru',
u'https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcQ71ukeTGCPLuClWd6MetTtQ0-0mwzo3rn1ug0MUnbpXmKnwNuuBnSWXHU',
u'https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcRZmWrYR9A4W97jpjhtIbyUM5Lj3vRL0vgCKG_xfylc5wKFAk6UB8jiiKA',
...
u'https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcRj08jK8sBjX90Tu1RO4BfZkKe5A59U0g1TpMWPFZlNnA70SQ5i5DMJkvV0']

The XPath generated from an insertion point in a browser is bound to be brittle because there are many different possible XPath expressions to reach any given node, JavaScript can modify the HTML, and the browser doesn't know your intentions.
For the example you gave,
//*[#id="rg_s"]/div[13]/a/img
the 13th div is particularly prone to breakage.
Try instead to find a uniquely identifying characteristic closer to your target. A unique @id attribute would be ideal, but a @class that uniquely identifies your target, or a close ancestor of it, can work well too.
For example, for Google Image Search, something like the following XPath
//div[#id='rg_s']//img[#class='rg_i']"
will select all images of class rg_i within the div containing the search results.
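In the Scrapy shell, extracting with that expression might look like this (a sketch; the rg_s and rg_i names come from the page at the time of writing and may well have changed since):
>>> response.xpath("//div[@id='rg_s']//img[@class='rg_i']/@src").extract()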
If you're willing to abandon the copy-and-paste approach and learn enough XPath to generalize your selections, you'll get much better results. Of course, the standard disclaimer applies: changes to the page's presentation will require updating your scraping code too. Using a direct API call would be much more robust (and proper as well).
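For instance, here is a minimal sketch of fetching image results through the Google Custom Search JSON API with requests; the API key and search engine id are hypothetical placeholders you would obtain from Google:
import requests

# Hypothetical credentials -- get a real API key and Custom Search Engine id from Google first.
API_KEY = "YOUR_API_KEY"
CSE_ID = "YOUR_CSE_ID"

resp = requests.get(
    "https://www.googleapis.com/customsearch/v1",
    params={"key": API_KEY, "cx": CSE_ID, "q": "test", "searchType": "image"},
)
resp.raise_for_status()
# Each result item carries the direct image URL in its "link" field.
image_urls = [item["link"] for item in resp.json().get("items", [])]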

Related

Best way to get XPATH and CSS SELECTORS for scraping with Selenium

What is the best way you guys know to get the XPATH AND CSS SELECTOR of a scraped website using selenium?
Someone suggested that I use these XPATH and CSS SELECTORS as parameters for an exercise I'm working on:
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input[placeholder='Search']"))).send_keys('Tech')
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[text()='Cancel']/.."))).click()
These parameters work very well for the exercise. However, I'm unsure how to get (or "build") those parameters...
If I use Chrome's Inspect > right click > Copy XPATH or Copy Selector, I get some very different parameters that don't seem to work as well, and don't seem to be found by selenium.
#search-bar
//*[#id="app-container"]/div/section/div/div/div[2]/button
Is there a tool or a technique to get better XPATH or CSS SELECTORS as in my first example?
I like the resources shared by @JD2775. They are good for getting started with constructing and understanding XPath and CSS selectors. When you are comfortable with that, you can work on your selector strategy. Hopefully you find at least some of the following helpful.
What makes a "good" xpath or css selector?
The selector should reliably and uniquely identify the targeted element.
For example, if an element's class occurs multiple times on the page, do not use only that class to identify the element. This is the most basic requirement for your selector.
The selector should not be prone to "flakiness" -- i.e., false failures that occur as a result of changes that are unrelated to the test.
Accomplish this by relying on as little of the DOM as possible to identify your element. For example, if both work to uniquely identify the element, //*[@id="app-container"]//button should be preferred over //*[@id="app-container"]/div/section/div/div/div[2]/button. Or, as you identified, "//button[text()='Cancel']/.." is the better choice (see the sketch after these guidelines).
Probably less important, but still worth considering: how easy is it to understand from the selector which element is being grabbed?
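Putting those guidelines together, here is a minimal sketch contrasting a brittle locator with a more resilient one for the Cancel button from your example (the page itself is hypothetical; only the id and button text come from your snippet):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)

# Brittle: five positional steps, breaks on any layout change:
#   //*[@id="app-container"]/div/section/div/div/div[2]/button
# Resilient: anchors on the stable id and the button's visible text only.
cancel = wait.until(
    EC.element_to_be_clickable(
        (By.XPATH, "//*[@id='app-container']//button[text()='Cancel']")
    )
)
cancel.click()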
Some best practices
If you are working with a development team and thus have access to the source code of the application you are testing, implement a custom HTML attribute that is used ONLY for automation, and which has a value to uniquely identify and describe the element. In your test code you can then identify each of the elements you need with a line like this:
my_field = driver.find_element_by_css_selector('input[data-e2e-attribute="aCertainField"]')
Organize your selection of elements into a Page Object Model, which abstracts the definition of webelements to one spot. So you can use these elements anywhere in your test without having to locate them, and it's easier to make changes to your selectors when necessary
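For illustration, a minimal Page Object Model sketch built around the two locators from your example (the class name and method names here are hypothetical, not from any particular framework):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

class SearchPage:
    # Every locator for this page lives in one place.
    SEARCH_BAR = (By.CSS_SELECTOR, "input[placeholder='Search']")
    CANCEL_BUTTON = (By.XPATH, "//button[text()='Cancel']/..")

    def __init__(self, driver, timeout=10):
        self.driver = driver
        self.wait = WebDriverWait(driver, timeout)

    def search_for(self, term):
        self.wait.until(EC.element_to_be_clickable(self.SEARCH_BAR)).send_keys(term)

    def cancel(self):
        self.wait.until(EC.element_to_be_clickable(self.CANCEL_BUTTON)).click()

# Usage: SearchPage(driver).search_for('Tech') -- tests never touch raw locators.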
You are correct that right-clicking and Copy Xpath is a bad way to get an Xpath. You are left with a long and brittle selector. It is much better to build your own. Once you get the hang of it, it is pretty simple to start building your own CSS and Xpath selectors. Some of them get complicated but if you keep practicing and searching for solutions you will get better and better.
The problem is it is very difficult to explain how to do it in a forum like this. Your best bet is to YouTube some videos on how to create Xpath and CSS selectors for Selenium. Here is a decent one I just found for Xpath:
https://www.youtube.com/watch?v=3uktjWgKrtI
It follows the approach I use in Chrome DevTools, using the built-in Find window (no plugins).
Here is a good cheatsheet I have used in the past for Xpath and CSS Selectors
https://www.automatetheplanet.com/selenium-webdriver-locators-cheat-sheet/
Good luck

Why does the XPath of some elements change sometimes?

I'm working on some automated actions for Instagram using Python and Selenium and sometimes my code crashes because of a NoSuchElementException.
For example, when I first wrote a function for unfollowing a user, I used something like:
following_xpath = "//*[#id='react-root']/section/main/div/header/section/div[1]/div[2]/div/span/span[1]/button"
After running a few times, my code crashed because it couldn't find the element, so upon inspecting the page I found out that the XPath is now:
following_xpath = "//*[#id="react-root"]/section/main/div/header/section/div[2]/div/div/div[2]/div/span/span[1]/button"
There's a small difference: div[1]/div[2]/div changed to div[2]/div/div/div[2]. So I have two questions:
Why does this happen?
Is there a bulletproof method that guarantees I will always be getting the right XPath (or element)?
It's high time we bust the myth that XPath changes.
Locator strategies, e.g. XPath and CSS selectors, are derived by the user, and the more canonically the locators are constructed, the more durable they are.
XML Path Language (XPath)
XPath 3.1 is an expression language that allows the processing of values conforming to the data model defined in XQuery and XPath Data Model (XDM) 3.1. The name of the language derives from its most distinctive feature, the path expression, which provides a means of hierarchic addressing of the nodes in an XML tree. As well as modeling the tree structure of XML, the data model also includes atomic values, function items, and sequences. This version of XPath supports JSON as well as XML, adding maps and arrays to the data model and supporting them with new expressions in the language and new functions in XQuery and XPath Functions and Operators 3.1.
Selectors
CSS (Cascading Style Sheets) is a language for describing the rendering of HTML and XML documents on screen, on paper, in speech, etc. CSS uses Selectors for binding style properties to elements in the document. These expressions can also be used, for instance, to select a set of elements, or a single element from a set of elements, by evaluating the expression across all the elements in a subtree.
This use case
As per your code trials:
following_xpath = "//*[#id='react-root']/section/main/div/header/section/div[1]/div[2]/div/span/span[1]/button"
and
following_xpath = "//*[#id="react-root"]/section/main/div/header/section/div[2]/div/div/div[2]/div/span/span[1]/button"
Here are a couple of takeaways:
The DOM Tree contains React elements. So it is quite clear that the app uses ReactJS. React is a declarative, efficient, and flexible JavaScript library for building user interfaces. It lets you compose complex UIs from small and isolated pieces of code called components.
The XPaths are absolute XPaths.
The XPaths contain indexes.
So the application is dynamic in nature, and elements are liable to be added and moved within the HTML DOM as DOM events fire.
Solution
In such cases when the application is based on either:
JavaScript
Angular
ReactJS
jQuery
AJAX
Vue.js
Ember.js
GWT
The canonical approach is to construct relative and/or dynamic locators inducing WebDriverWait. Some examples:
To interact with the username field on instagram login page:
WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input[name='username']"))).send_keys("anon")
You can find a detailed discussion in Filling in login forms in Instagram using selenium and webdriver (chrome) python OSX
To locate the first line of the address just below the text as FIND US on facebook:
WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//span[normalize-space()='FIND US']//following::span[2]")))
You can find a detailed discussion in Decoding Class names on facebook through Selenium
Interacting with GWT elements:
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//div[#title='Viewers']//preceding::span[1]//label"))).click()
You can find a detailed discussion in How to click on GWT enabled elements using Selenium and Python
The answer to (1) is simple: the page content has changed.
Firstly, the notion that there is "an XPath" for every element in a document is wrong: there are many (indeed, infinitely many) XPath expressions that will select a given element. You've probably generated these XPaths using a tool that tries to give you what it considers the most useful XPath expression, but it's not the only one possible.
The best XPath expression to use is one that isn't going to change when the content of the page changes: but it's very hard for any tool to give you that, because it has no idea what's likely to change in the page content.
Using an @id attribute value (which these paths do) is more likely to be stable than using numeric indexing (which these paths also do), but that's based on guesses about what's likely to change, and those guesses can always be wrong. The only way to write an XPath expression that continues to do "the right thing" when the page changes is to correctly guess which aspects of the page structure will vary and which will remain stable. So the only "bulletproof" answer to (2) is to understand not just the current page structure, but its invariants over time.
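To illustrate, here is a small sketch with a toy document (entirely hypothetical markup, loosely shaped like the DOM from the question) where both expressions match today, but only the second keeps matching if a wrapper div is inserted:
from lxml import html

doc = html.fromstring("""
<div id="react-root">
  <section><main><div><header><section>
    <div><div><span><span><button>Follow</button></span></span></div></div>
  </section></header></div></main></section>
</div>
""")

# Brittle: every positional step in the copied path must keep matching.
brittle = doc.xpath("//*[@id='react-root']/section/main/div/header/section/div/div/span/span/button")

# More stable: anchors on the id and the target itself, ignoring the layers between.
stable = doc.xpath("//*[@id='react-root']//button[text()='Follow']")

assert brittle == stable  # both find the same single button in this snapshot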

XPath returns no result for some elements with scrapy shell

I am using scrapy shell to extract data of the following web page:
https://www.apo-in.de/product/acc-akut-600-brausetabletten.24170.html
Most data works, but there is a table in the lower part whose content (the PZN, for example) I don't seem to be able to extract.
scrapy shell
fetch('https://www.apo-in.de/product/acc-akut-600-brausetabletten.24170.html')
>>> response.xpath('//*[@id="accordionContent5e95408f73b10"]/div/table/tbody/tr[1]/td/text()').extract()
It returns: []
I also downloaded the page to view as scrapy sees it:
scrapy fetch --nolog https://www.apo-in.de/product/acc-akut-600-brausetabletten.24170.html > test.html
Although it looks OK in the downloaded HTML, and although I can grab it in Chrome, it does not work in the scrapy shell.
How can I retrieve this data?
The problem you have encountered is that the id 'accordionContent5e95408f73b10' is dynamically generated, so the id in your browser and the one in Scrapy's response are different.
In common cases a good workaround is to write an XPath with a "substring search" (//*[contains(@id, 'accordionContent')]), but in this case there are a lot of such ids.
I'd advise writing a more specific XPath instead.
//div[@id='accordion']/div[contains(@class, 'panel')][1]/div[contains(@id, 'accordionContent')]/div[@class='panel-body']/table/tbody/tr[1]/td
What this XPath does:
Find all "subpanels" with descriptions: //div[@id='accordion']/div[contains(@class, 'panel')];
Take the first "subpanel" (where the PZN is located) and navigate into the table with the data: //div[@id='accordion']/div[contains(@class, 'panel')][1]/div[contains(@id, 'accordionContent')]/div[@class='panel-body']/table;
The last part retrieves the first tr's td.
By the way, the XPath can be simplified to //div[@id='accordion']/div[contains(@class, 'panel')][1]//table/tbody/tr[1]/td, but I've written the full XPath for a more accurate understanding of what we're navigating.
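In the Scrapy shell this would look something like the following (a sketch; the .strip() at the end is just my addition to clean whitespace around the PZN):
$ scrapy shell https://www.apo-in.de/product/acc-akut-600-brausetabletten.24170.html
>>> pzn = response.xpath(
...     "//div[@id='accordion']/div[contains(@class, 'panel')][1]"
...     "//table/tbody/tr[1]/td/text()").extract_first()
>>> pzn.strip() if pzn else None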

Scrapy SgmlLinkExtractor how to define XPath

I want to retrieve the city name and city code and store them in one string variable. (A screenshot in the original question shows the precise location.)
Google Chrome gave me the following XPath:
//*[#id="page"]/main/div[4]/div[2]/div[1]/div/div/div[1]/div[2]/div/div[1]/div/a[1]/span
So I defined the following statement in scrapy to get the desired information:
plz = response.xpath('//*[@id="page"]/main/div[4]/div[2]/div[1]/div/div/div[1]/div[2]/div/div[1]/div/a[1]/span/text()').extract()
However, I was not successful; the result remains empty. What XPath definition should I use instead?
Most of the time when this occurs, it is because browsers correct invalid HTML. How do you fix this? Inspect the (raw) HTML source and write your own XPath that navigates the DOM with the shortest/simplest query.
I scrape a lot of data off of the web and I've never used an XPath as specific as the one you got from the browser. This is for a few reasons:
It will fail quickly on invalid HTML or the most basic of hierarchy changes.
It contains no identifying data for debugging an issue when the website changes.
It's way longer than it should be.
Here's an example query for grabbing that element (there are a lot of different XPath queries you could write to find this data; I'd suggest learning XPath and re-writing this query so that there are common themes for XPath queries throughout your project):
//div[contains(@class, "detail-address")]//h2/following-sibling::span
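A quick sketch of using it from the Scrapy shell, joined into the single string the question asks for (the whitespace handling is my assumption about the markup):
>>> parts = response.xpath('//div[contains(@class, "detail-address")]//h2/following-sibling::span//text()').extract()
>>> plz = ' '.join(p.strip() for p in parts if p.strip())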
The other main source of this problem is sites that extensively rely on JS to modify what is shown on the screen. Conveniently, though, this can be debugged the same way as above. As soon as you glance at the HTML returned on page load, you will notice that the data you are querying doesn't exist until JS executes. At that point, you will need to do some sort of headless browsing.
Since my answer was essentially "write your own XPath" (rather than relying on the browser), I'll leave some sources:
basic XPath introduction
list of XPath functions
XPath Chrome extension
The DOM is manipulated by JavaScript, so what Chrome shows is the XPath after all of that has happened.
If all you want is to get the cities, you can get it this way (using scrapy):
city_text = response.css('.detail-address span::text').extract_first()
city_code, city_name = city_text.split(maxsplit=1)
Or you can manipulate the JSON in CDATA to get all the data you need:
import json

cdata_text = response.xpath('//*[@id="tdakv"]/text()').extract_first()
json_str = cdata_text.splitlines()[2]     # the JSON sits on the third line of the CDATA block
json_str = json_str[json_str.find('{'):]  # trim everything before the opening brace
data = json.loads(json_str)
city_code = data['kvzip']
city_name = data['kvplace']
