Find a Button without XPath, ID, or Name in Python

I am trying to find a Submit button on a webpage, but for some strange reason XPath is not finding it, and the element has no class, name, or id. The relevant line from the webpage is below.
The XPath is
/html/body/div[3]/form/input[2]
but like I said, it can't find it this way for some reason. The selector is
body > div.pagebodydiv > form > input[type="submit"]:nth-child(5)
I just need to press this button and I can't figure out how. Please help.
It is the Submit Button

You could try using '//'; its meaning is explained here: https://www.w3schools.com/xml/xpath_syntax.asp
xpath_selector = "//input[@value = 'Submit']"
OR:
xpath_selector = "//input[@value = 'Submit' and @type = 'submit']"
OR:
xpath_selector = "//input[starts-with(@value, 'Submit') and starts-with(@type, 'submit')]"
The absolute path you are currently using can easily break if the page structure changes even a tiny bit; moreover, selecting all the way from the root takes a lot of effort.
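The effect of the relative '//' locator can be checked offline with the standard library. The HTML below is an assumption (the question only showed the page as an image); it just mirrors the CSS selector's structure:

```python
import xml.etree.ElementTree as ET

# A minimal stand-in for the page; the real HTML was only shown as an
# image in the question, so this structure is an assumption.
html = """
<body>
  <div class="pagebodydiv">
    <form>
      <input type="hidden" name="token" />
      <input type="submit" value="Submit" />
    </form>
  </div>
</body>
"""

root = ET.fromstring(html)
# './/' searches the whole tree instead of walking a fixed path from
# the root, so small layout changes don't break the locator.
button = root.find(".//input[@value='Submit']")
print(button.get("type"))  # submit
```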

Related

Unable to switch to iFrame using Selenium

I'm currently trying out Selenium for Python to do some web automation.
I got stuck at the checkout where you have to enter your credit card data.
I'm aware that I have to switch to the iframe to locate the input box.
But I was never able to switch to the frame.
I tried it by ID and, for example, like this:
time.sleep(4)
frame = self.driver.find_element_by_xpath("//iframe[@name='__privateHeidelpayFrame--heidelpay-holder-iframe-1616489954834']")
self.driver.switch_to.frame(frame)
name_input = self.driver.find_element_by_id("card-holder")
name_input.clear()
name_input.send_keys(credit_card.NAME)
I'm always getting "Unable to locate element: {"method":"xpath","selector":"//iframe[@name='__privateHeidelpayFrame--heidelpay-holder-iframe-1616489954834']"}".
I added a picture of the source code from the webpage.
I'm thankful for any advice!
Best, Jannic
Possibly the id and name attributes are dynamic.
Try starts-with(), or use another attribute such as class to identify the iframe.
frame = self.driver.find_element_by_xpath("//iframe[starts-with(@name, '__privateHeidelpayFrame--heidelpay-holder-iframe-')]")
self.driver.switch_to.frame(frame)
OR
frame = self.driver.find_element_by_xpath("//iframe[@class='heidelpayUIIframe']")
self.driver.switch_to.frame(frame)
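To see why starts-with() is the right tool here, note how the frame's name decomposes; the split below is a plain-Python illustration, no browser needed:

```python
# The numeric tail of the iframe's name is a millisecond timestamp, so
# it changes on every page load; only the prefix is stable.
name = "__privateHeidelpayFrame--heidelpay-holder-iframe-1616489954834"
prefix = "__privateHeidelpayFrame--heidelpay-holder-iframe-"

# An XPath built from only the stable prefix keeps matching across
# reloads, which is exactly what starts-with() gives the locator.
xpath = "//iframe[starts-with(@name, '%s')]" % prefix
print(name.startswith(prefix))  # True
```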
Have you tried using a simpler API to find the iframe?
Does
self.driver.find_element(By.NAME, "__privateHeidelpayFrame--heidelpay-holder-iframe-1616489954834")
return anything?

Unable to get all children (dynamic loading) selenium python

This question has already been answered, and one of the easiest ways is to search by tag name (if it is already known) within the element:
child_elements = element.find_elements_by_tag_name("<tag name>")
However, for the element pasted below, only 9 out of 25 instances of the tag name are returned. I am a novice in JavaScript and thus am not able to pin down the reason. In this example, I am trying to get the dt tags within the ol element. The code snippet I am using for that is:
par_element = browser.find_element_by_class_name('search-results__result-list')
child_elements = par_element.find_elements_by_tag_name("dt")
The element skeleton/structure from the page source is shown in the image below:
(The structure is the same for all the div tags; one is expanded as an example.)
I have also tried getting the class name result-lockup__name directly, and it still returns only 9 out of the 25 instances. What could be the reason?
EDIT
Initially, all the elements were not loaded, and thus I had to scroll through the page with
browser.execute_script('window.scrollTo(0,document.body.scrollHeight)')
When the problem occurred once again and I was not able to figure it out, I raised this question. Apparently, even the scroll is not helping, as certain elements remain hidden.
After manually scrolling through them again, keeping the code paused, I was able to "enable" them.
Is this a kind of mask to protect sites from being scraped? I feel now that I would probably have to scroll up in increments to reveal them all, but is there a smarter way?
The elements are loading dynamically, so you need to scroll the page slowly to get all the child elements. Try the code below; this is just a workaround, but hopefully it will work.
import time
from selenium.webdriver.common.keys import Keys

element_list = []
while True:
    browser.find_element_by_tag_name("body").send_keys(Keys.DOWN)
    time.sleep(2)
    listlen_before = len(element_list)
    par_element = browser.find_element_by_class_name('search-results__result-list')
    child_elements = par_element.find_elements_by_tag_name("dt")
    for ele in child_elements:
        if ele.text not in element_list:
            element_list.append(ele.text)
    listlen_after = len(element_list)
    if listlen_before == listlen_after:
        break
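The stop condition in that loop (scroll, re-collect, stop once the count no longer grows) can be sketched without a browser by stubbing out the loader; the batch sizes below are assumptions for illustration:

```python
import itertools

# Stub for the page: each "scroll" reveals ten more of the 25 results,
# loosely mirroring the 9-of-25 situation in the question.
all_items = ["result-%d" % i for i in range(25)]

def visible_after(scrolls):
    # Items the page would have rendered after this many scrolls.
    return all_items[: min(25, 10 * (scrolls + 1))]

seen = []
for scrolls in itertools.count():
    before = len(seen)
    for item in visible_after(scrolls):
        if item not in seen:
            seen.append(item)
    if len(seen) == before:  # no new items appeared: stop scrolling
        break

print(len(seen))  # 25
```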

Selecting a button, list object has no attribute click python selenium

After searching for a while for an answer to my question, I couldn't find one that helped me, so here I am asking for your help! :)
Right now, I am trying to select a plan on a website page which, once it has been selected (read: a certain button clicked), displays the rest of the page where I can send the keys/values that I want to send.
Here is the code I am using
select_plan = browser.find_elements_by_xpath(".//*[@id='PostAdMainForm']/div[1]/div/div/div/div/div[1]/div[3]/button")
select_plan.click()
I found the xpath with Firepath, but when I run my code it gives me a AttributeError: 'list' object has no attribute 'click'
Here is the page I am trying to click from
https://www.kijiji.ca/p-post-ad.html?categoryId=214&hpGalleryAddOn=false&postAs=ownr
(I am looking to click on the left button, the one in blue)
Thank you very much for your help :)
The method find_elements returns a list, not a single element. You are taking the result and trying to click on it. Like the error says, you can't click on a list.
Either use find_element (singular) or use find_elements (plural) and then click on one of the elements that was returned.
# using find_elements
select_plans = browser.find_elements_by_xpath(".//*[@id='PostAdMainForm']/div[1]/div/div/div/div/div[1]/div[3]/button")
if len(select_plans) > 0:
    select_plans[0].click()

# using find_element
select_plan = browser.find_element_by_xpath(".//*[@id='PostAdMainForm']/div[1]/div/div/div/div/div[1]/div[3]/button")
if select_plan:
    select_plan.click()
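The error itself has nothing to do with Selenium; any Python list behaves this way. A minimal reproduction with a stand-in element class (FakeElement is hypothetical, standing in for a WebElement):

```python
# Python is complaining about the list itself, not about the locator.
class FakeElement:
    def click(self):
        return "clicked"

elements = [FakeElement()]  # what a find_elements-style call returns

try:
    elements.click()  # a list has no click() method
except AttributeError as exc:
    print(exc)  # 'list' object has no attribute 'click'

print(elements[0].click())  # index into the list first, then click
```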
The link you shared did not show the blue button directly; however, I found it after navigating to the 'Post Your Ad' page. You can click the blue Select button using the text that appears before it. For example, using the text Basic, you can reach the Select button. The following code shows how to achieve this:
select_plan = browser.find_element_by_xpath("//h3[text()='Basic']/following::button[text()='Select'][1]")
select_plan.click()
Let me know, whether it works for you.

python selenium: how to get first hidden button's url?

The page that I am working on has an invisible, hidden option button.
* download sample video on the page (the button is hidden by the HTML)
[button1] (<- LINK_TEXT is 'button1')
[button2]
[button3]
So, I used 'EC.element_to_be_clickable'.
This code is working, but it cannot be used if I don't know the button's LINK_TEXT, which is different for each page.
I want to get only video's first link url(ex- button1).
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

_sDriver = webdriver.Firefox()
_sDriver.get('http://www.example.com/video')
wait = WebDriverWait(_sDriver, 10)
download_menu = _sDriver.find_element_by_id("download-button")
action = ActionChains(_sDriver)
action.move_to_element(download_menu).perform()
documents_url = wait.until(EC.element_to_be_clickable((By.LINK_TEXT, "button1"))).get_attribute('href')
My code gets the URL of 'button1', but if I don't know the text 'button1', how do I get the first hidden button's URL using Python?
Thanks for your help.
First of all, by "button" I assume you mean an a element in this case.
And, since the button is hidden, element_to_be_clickable would not work; use presence_of_element_located instead. To get the very first a element, use the "by tag name" locator:
documents_url = wait.until(EC.presence_of_element_located((By.TAG_NAME, "a"))).get_attribute('href')
There could be a better way to locate the element, without seeing the actual HTML of the mentioned "button" elements, it is difficult to tell.
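The "first element by tag name" idea can be checked offline with the standard library. The markup below is assumed (the question did not show the page's HTML), so the names and URLs are illustrative only:

```python
import xml.etree.ElementTree as ET

# Assumed shape of the hidden download menu; names and URLs here are
# placeholders, since the real HTML was not shown in the question.
html = """
<div id="download-button">
  <a href="/video/sample1.mp4">button1</a>
  <a href="/video/sample2.mp4">button2</a>
  <a href="/video/sample3.mp4">button3</a>
</div>
"""

menu = ET.fromstring(html)
# find() returns the first match in document order, so no link text is
# needed to pick out the first button.
first_link = menu.find(".//a")
print(first_link.get("href"))  # /video/sample1.mp4
```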

Python crawler not finding specific Xpath

I asked my previous question here:
Xpath pulling number in table but nothing after next span
This worked, and I managed to see the number I wanted in a Firefox plugin called XPath Checker; the results are shown below.
So I know I can find this number with this XPath, but when I try to run a Python script to find and save the number, it says it cannot find it.
try:
    views = browser.find_element_by_xpath("//div[@class='video-details-inside']/table//span[@class='added-time']/preceding-sibling::text()")
except NoSuchElementException:
    print "NO views"
    views = 'n/a'
    pass
I know that pass is not best practice, but I am just testing this at the moment, trying to find the number. I'm wondering if I need to change something at the end of the XPath, like .text, as XPath Checker normally shows results a little differently, like below:
I needed to use the XPath I gave rather than the one used in the picture above because I only want the number and not the date. You can see part of the source in my previous question.
Thanks in advance! Scratching my head here.
The XPath used in find_element_by_xpath() has to point to an element, not a text node and not an attribute. This is the critical thing here.
The easiest approach here would be to:
get the td's text (parent)
get the span's text (child)
remove child's text from parent's
Code:
span = browser.find_element_by_xpath("//div[@class='video-details-inside']/table//span[@class='added-time']")
td = span.find_element_by_xpath('..')
views = td.text.replace(span.text, '').strip()
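The parent-minus-child subtraction can be verified offline with the standard library; the markup below is assumed, based on the question's description of a text node followed by the 'added-time' span:

```python
import xml.etree.ElementTree as ET

# Assumed markup: the view count is a bare text node that precedes the
# 'added-time' span, mirroring the structure described in the question.
html = "<td>1,234 views <span class='added-time'>2 days ago</span></td>"

td = ET.fromstring(html)
span = td.find("span[@class='added-time']")

# Same trick as the Selenium answer: take the parent's full text,
# strip out the child's text, and only the leading text node remains.
td_text = "".join(td.itertext())
views = td_text.replace(span.text, "").strip()
print(views)  # 1,234 views
```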
