I'm trying to automate the download of a report using Selenium. To get to the page where the report is, I have to click on an image with this code:
<div class="leaflet-marker-icon single-icon-container running hover asset leaflet-zoom-hide leaflet-clickable" tabindex="0" style="margin-left: -22px; margin-top: -41px; width: 44px; height: 44px; opacity: 1; transform: translate3d(525px, 238px, 0px); z-index: 238;"><div class="icon-value" lid="219058"></div></div>
I tried with
wtg = driver.find_elements_by_class_name(
"leaflet-marker-icon single-icon-container running hover asset leaflet-zoom-hide leaflet-clickable")
wtg.click()
but nothing happens.
There are 7 elements with the same class and a unique attribute that looks like lid="219058", but I don't know how to select by that.
leaflet-marker-icon single-icon-container running hover asset leaflet-zoom-hide leaflet-clickable contains multiple class names, while the driver.find_element_by_class_name method expects a single class name.
I can't give you a correct locator for this element since you didn't share the page link. However, if you wish to locate that element based on this combination of class names, you can use a CSS selector or XPath as follows:
wtg = driver.find_element_by_css_selector(".leaflet-marker-icon.single-icon-container.running.hover.asset.leaflet-zoom-hide.leaflet-clickable")
wtg.click()
Or
wtg = driver.find_element_by_xpath("//*[@class='leaflet-marker-icon single-icon-container running hover asset leaflet-zoom-hide leaflet-clickable']")
wtg.click()
Also, you should use driver.find_element_by_class_name, not driver.find_elements_by_class_name: the plural form returns a list of web elements, not a single web element that can be clicked directly.
Alternatively, you can index into the list of returned web elements, as described by FLAK-ZOSO.
Generally speaking, a common practice when building web scrapers is to use XPath, since XPath can apply all the filters (id, class, etc.) in a more flexible way (though in some cases Selenium performance may suffer).
I recommend you check this article on how to write xpaths for various needs: https://www.softwaretestinghelp.com/xpath-writing-cheat-sheet-tutorial-examples/
For your particular use case, I would use:
driver.find_element_by_xpath('//div[@lid="219058"]')
This will actually click on the inner div (notice how the lid is actually inside the nested div). If you wish to click on the outer div you can use:
driver.find_element_by_xpath('//div[@lid="219058"]/parent::div')
Again, I recommend learning XPath syntax and using it consistently; it is easier to manipulate than the other Selenium selectors and is also fast if you ever choose to implement a C-compiled HTML parser such as lxml to parse the elements.
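A minimal sketch of that recommendation, wrapped in a helper (the function name is my own; the XPath is the one shown above, and the driver is any Selenium WebDriver):

```python
def click_marker_by_lid(driver, lid):
    # Locate the outer marker: the parent of the div carrying the unique
    # lid attribute, then click it.
    marker = driver.find_element_by_xpath(
        '//div[@lid="{}"]/parent::div'.format(lid)
    )
    marker.click()
    return marker
```

Because the lid value is unique among the seven markers, this avoids relying on the long class list entirely.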
Remember that driver.find_elements_by_class_name() returns a list.
You have to do something like this when using this find method:
driver.find_elements_by_class_name('class')[0] #If you want the first of the page
In your case you need to use the css_selector because you have multiple classes, as suggested by @Prophet.
You can also use only one of the classes and simply use the class_name selector.
In your case, if you need the first element of the page with that class, you have to add [0].
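Combining both ideas, the plural locator plus the unique lid attribute, one hedged sketch (the helper name is my own) is to scan the list and match on the inner div's attribute:

```python
def find_marker_with_lid(driver, target_lid):
    # Scan every marker sharing the common class and return the one whose
    # nested .icon-value div carries the wanted lid attribute.
    for marker in driver.find_elements_by_class_name("leaflet-marker-icon"):
        inner = marker.find_element_by_class_name("icon-value")
        if inner.get_attribute("lid") == target_lid:
            return marker
    return None  # no marker with that lid on the page
```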
I am unable to locate a web element on a website. The elements are dynamic, and the ones I am trying to locate have attributes very similar to those of others, with only small differences such as integers. These integers also change when I refresh the page, so I am unable to locate the element. Can someone help?
I tried the following, but there may be mistakes in the syntax:
With contains text: WebDriverWait(browser, 15).until(EC.presence_of_element_located((By.XPATH, "//div[contains(text(),'Type here...')]")))
Absolute XPath and relative XPath (these change)
Contains sibling, contains parent, but the parent and sibling elements also cannot be located
Here is the HTML of the element: <div contenteditable="true" class="cke_textarea_inline cke_editable cke_editable_inline cke_contents_ltr placeholder cke_show_borders" tabindex="0" spellcheck="true" role="textbox" aria-label="Rich Text Editor, editor15" title="Rich Text Editor, editor15" aria-describedby="cke_849" style="position: relative;" xpath="1">enter question here</div>
There are extra spaces being added, which can sometimes cause issues; try deleting that space. Also, if you can share the HTML, we may be able to help more.
WebDriverWait(browser, 15).until(EC.presence_of_element_located((By.XPATH, "//div[contains(text(),'Type here...')]")))
If removing the space doesn't work, you can try the XPath below:
//div[text()='enter question here'][contains(@aria-label,'Rich Text Editor')]
If there really are no attributes you consider reliable, you can access the element by the text inside it. I'd recommend two things: don't do this unless you have no other choice, and don't just use //div; add as much information to the XPath as you can.
driver.find_element_by_xpath('//div[text()="enter question here"]')
OR
driver.find_element_by_xpath('//div[contains(text(),"enter question")]')
I am working with an element like
<div class="slick-track" style="opacity: 1; width: 9140px; left: 0px;" role="listbox">
while trying to start using Robot Framework with Python to create autotests.
I need to check that this element exists, and also check its items. How can I do that?
You can simply use:
Page should contain element css=div.slick-track
Or even better, to avoid possible timing issues (like an element taking a few seconds to be displayed on the page):
Wait until page contains element css=div.slick-track
Comment: I didn't quite understand what you meant by "and, its items".
If I understood your question properly...
To check that the element with that class exists:
Page Should Contain Element    class:slick-track
To check its attributes:
Element Attribute Value Should Be    class:slick-track    role    listbox
Let me know if this is what you meant.
I have tried:
browser.find_element_by_class_name('("my_class")[1]')
browser.find_element_by_class_name('("my_class")[position()=1]')
browser.find_element_by_class_name("my_class")[1]
The "easy" answer
The simple way to get what you want is to use the plural form of the locator method you are already using: find_elements_by_class_name(). The plural forms return a list instead of just the first match, so in your case you would use
find_elements_by_class_name("my_class")[1]
The find_elements_* methods return a list, and the [1] at the end selects only the second item in the collection (the index starts at 0).
What I would use
I generally don't use *_by_class_name() because it's rare that I'm only looking for a class. I typically at least specify the tag name also, e.g. div.my_class. Another option, and the one I typically use, is a CSS selector. CSS selectors should be preferred over XPath because of better performance, better support, etc.*
An example
<div class="class1 class2 class3">123</div>
<div class="class2">2</div>
<div class="class3">3</div>
<div class="class1 class2">12</div>
<div class="class1 class3">13</div>
<div class="class1">1</div>
<div class="class2 class3">23</div>
If you had the above HTML and wanted the second instance of "class1", you would use
driver.find_elements_by_css_selector("div.class1")[1]
Another advantage of CSS selectors over XPath is that CSS selectors look for the class name amongst multiple class names on an element where XPath can only do a text search which can lead to false or missed matches. The CSS selector above would return 4 total elements: "123", "12", "13", "1". The index [1] returns only the second instance, "12".
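To illustrate this token-based class matching outside a browser, here is a stdlib-only sketch (not Selenium itself, just a demonstration of the semantics) that scans the HTML above for divs whose class *list* contains "class1":

```python
from html.parser import HTMLParser

# Collects the text of <div> elements whose class list contains the wanted
# token -- the same matching a CSS selector like div.class1 performs, as
# opposed to XPath's whole-string comparison of the class attribute.
class ClassCounter(HTMLParser):
    def __init__(self, wanted):
        super().__init__()
        self.wanted = wanted
        self.texts = []
        self._capture = False

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        self._capture = tag == "div" and self.wanted in classes

    def handle_data(self, data):
        if self._capture:
            self.texts.append(data.strip())
            self._capture = False

html = """
<div class="class1 class2 class3">123</div>
<div class="class2">2</div>
<div class="class3">3</div>
<div class="class1 class2">12</div>
<div class="class1 class3">13</div>
<div class="class1">1</div>
<div class="class2 class3">23</div>
"""
parser = ClassCounter("class1")
parser.feed(html)
print(parser.texts)     # ['123', '12', '13', '1']
print(parser.texts[1])  # '12' -- the [1] index from the snippet above
```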
If you used the XPath that DebanjanB suggested,
//*[@class='my_class'][position()=2]
it would return nothing. That's because there's only one element that has the exact string "my_class" as the class. It misses all the other elements that contain but are not only "my_class". You could improve it to find them all but it still has all the downfalls of XPath vs CSS selectors, it's much longer, and so on...
See the Selenium-python docs for more info on ways to find elements.
*If you need more details on the why, there's a number of articles already addressing this or look through the videos on the Selenium Conference YT channel and watch some of the keynote addresses by Simon Stewart or other Selenium contributors.
Don't forget
You may need to use a wait if the page is slow to load.
On some lazy loading pages you will need to scroll the page to get the additional elements to load.
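As a hedged sketch of that scrolling advice (the helper name and its parameters are my own; the driver is any Selenium WebDriver):

```python
import time

def collect_lazy_elements(driver, css, max_scrolls=10, pause=0.5):
    # Scroll to the bottom of the page repeatedly until no new elements
    # matching the CSS selector appear, then return all matches.
    seen = 0
    for _ in range(max_scrolls):
        found = driver.find_elements_by_css_selector(css)
        if len(found) == seen:
            break  # nothing new loaded since the last scroll
        seen = len(found)
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(pause)  # give the lazy loader time to fetch more
    return driver.find_elements_by_css_selector(css)
```

The max_scrolls cap keeps the loop from running forever on pages that load content indefinitely.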
Quick info: I'm using Mac OS, Python 3.
I have about 800 links that need to be clicked on a page (and many more pages to go, so I need automation).
They were hidden because you only see those links when you hover.
I fixed that by injecting a CSS rule (just saying, in case it's the reason this isn't working).
When I try to find the elements by XPath, it won't click the links afterwards, and it also doesn't find all of them, only ever 4 (even when more are displayed in view).
HTML:
Display
When I click on "Copy XPath" in the inspector, it gives me:
//*[#id="tiles"]/li[3]/div[2]/ul/li[2]/a
But it doesn't work when I use it like this:
driver.find_elements_by_xpath('//*[#id="tiles"]/li[3]/div[2]/ul/li[2]/a')
So two questions:
How do I get them all?
How do I get it to click on each of them?
The pattern in the XPath is the same, with /li[3] being the only number that changes; for this I created a for loop to generate them all, based on the count on the page, which I did successfully.
So if it can be done with the XPaths I generated myself, corresponding to the ones from copying XPath in the inspector, then I only need question 2 answered.
PS: this is the HTML of the parent of that first HTML:
<li onclick="openPopup(event, 'collect', {item_id: 165214})" class="collect" data-item-id="165214">Display</li>
This XPath,
//a[.="Display"]
will select all a links with anchor text equal to "Display".
As per your question, the HTML you have shared, and your code attempts, there is no need to get the <li> tags. Instead we will collect the <a> tags in a list. So to answer your first question (How do I get them all?), you can use the following line of code:
all_Display = driver.find_elements_by_xpath("//*[@id='tiles']//li/div[2]/ul/li[@class='collect']/a[@title='Display']")
Next, to click on each of them, you have to loop through all the <a> tags as follows:
all_Display = driver.find_elements_by_xpath("//*[@id='tiles']//li/div[2]/ul/li[@class='collect']/a[@title='Display']")
for each_Display in all_Display:
    each_Display.click()
Using an XPath with elements by position is not ideal. Instead use a CSS selector to match the attributes for the targeted elements.
Something like:
all_Display = driver.find_elements_by_css_selector("#tiles li[onclick][data-item-id] a[title]")
You can then click them in a loop, if none of them loads a new page:
for element in all_Display:
    element.click()
I have a page that has to be scraped. I use this Python code:
div = driver.find_element_by_class_name("parent")
data = div.find_elements_by_class_name("child1")
# I cannot access the web elements of data here, e.g. data.find_elements_by...
for tag in data:
    # I cannot print the information of each div here
The HTML:
<div class="Parent">
  <div class="child1">
    <div class="heading">
      data
    </div>
  </div>
  <div class="child1 child2">
    <div class="heading">
      <span>data</span>
    </div>
  </div>
</div>
Is there an easy way to access the data?
Well, you can access HTML tags or text in different ways: http://selenium-python.readthedocs.io/locating-elements.html
For multiple elements you can use :
find_elements_by_name
find_elements_by_xpath
find_elements_by_link_text
find_elements_by_partial_link_text
find_elements_by_tag_name
find_elements_by_class_name
find_elements_by_css_selector
There isn't a simple solution, as far as I'm aware, without specifics about the information you're looking for.
For instance, let's say you're using XPath (my personal preference):
Absolute XPath:
/html/body/div[2]/div/div/footer/section[3]/div/ul/li[3]/a
The above XPath will technically work, but each of those nested relationships must be present 100% of the time, or the locator will not function. An XPath chosen this way is known as an absolute XPath, and there is a good chance it will change in every release. It is always better to choose a relative XPath, as it reduces the chance of an element-not-found exception.
Relative XPath:
//*[@id='social-media']/ul/li[3]/a
We can take different approaches to the data, so by using the correct way to select it, we can extract only the information we need. Look into each of these methods to understand them better, because you're asking for one line of code and each of them has pros and cons (cases where it is useful or not).
It seems you want to access the text inside the heading div; if so, you can try the code below.
element=driver.find_element_by_class_name("heading")
data=element.text
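If there are several such headings (one per child div, as in the HTML above), a small sketch that walks the parent and collects each heading's text could look like this; the helper name is my own:

```python
def collect_heading_texts(driver):
    # Walk Parent -> each child1 -> its heading, gathering the text of each.
    parent = driver.find_element_by_class_name("Parent")
    texts = []
    for child in parent.find_elements_by_class_name("child1"):
        heading = child.find_element_by_class_name("heading")
        texts.append(heading.text)
    return texts
```

Calling find methods on an element (rather than the driver) restricts the search to that element's subtree, which is what makes the nesting work.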
Assuming you are asking for a way to loop through data where the information is located in different locators at different nesting levels:
There are multiple ways:
look for the various selectors that match your pattern and find an approach that fits your problem; see a CSS/XPath selector reference
if there are many selectors (but used consistently), you can use the ByChained/ByAll selectors; the implementation is in Java, but you can mimic it:
selector1 = .Heading .child1
selector2 = .Heading .child2
selector3 = .Heading .child3 span
ByAll(selector1, selector2, selector3)
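Selenium's Python bindings have no built-in ByAll, but the union behaviour can be mimicked in a few lines (the selector strings passed in would be the hypothetical ones listed above):

```python
def find_by_all(driver, selectors):
    # Union of matches over several CSS selectors, keeping first-seen order
    # and dropping duplicates, similar to Java's ByAll.
    results = []
    for css in selectors:
        for element in driver.find_elements_by_css_selector(css):
            if element not in results:
                results.append(element)
    return results
```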
if the parent is the only matching selector and there is no way to know the child selectors, another way is to read the innerText/textContent property from a common parent:
driver.find_element_by_css_selector('.child1').get_attribute('innerText')
If none of these solves your problem, and your application is dynamic enough to use different references and different nesting levels each time across the whole page, then it was not meant to be scraped, and you should look for other ways of obtaining the data.