I'm trying to fill a small form with Selenium, but it has dynamic elements that I can't locate in any way. You can see the form at this link: https://www.asefasalud.es/
I have tried almost every locator strategy to find the element, but none of them work:
driver.find_element_by_xpath("//*[contains(#id, 'inputDiafnac')]").get(0).send_keys("2")
driver.find_element_by_xpath("//*[contains(#id, 'inputDiafnac')]").send_keys("2")
driver.find_element_by_xpath("//input[#id='inputDiafnac1']").send_keys("2")
driver.find_element_by_css_selector("#inputDiafnac1").send_keys("2")
driver.find_element_by_id("inputDiafnac1").send_keys("2")
I don't know if there is another way to capture these elements, thank you.
I found something I think might work.
driver.switch_to.frame(driver.find_element(By.CSS_SELECTOR, "[name='calcular-seguro-medico']"))
driver.find_element_by_id("inputDiafnac1").send_keys("2")
Once you are done with the elements inside this frame, you will need to switch back to the top-level document using
driver.switch_to.default_content()
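Putting it together, here is a minimal sketch (using the newer find_element(By...) style) that assumes the date field really sits inside the iframe named 'calcular-seguro-medico' and that the id 'inputDiafnac1' is correct for that page:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.asefasalud.es/")

# Switch into the iframe that contains the form (assumed name).
frame = driver.find_element(By.CSS_SELECTOR, "iframe[name='calcular-seguro-medico']")
driver.switch_to.frame(frame)

# Inside the frame, a plain id lookup works.
driver.find_element(By.ID, "inputDiafnac1").send_keys("2")

# Back to the top-level document when done.
driver.switch_to.default_content()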
I am using python / selenium to archive some posts. They are simple text + images. As the site requires a login, I'm using selenium to access it.
The problem is, the page shows all the posts, but they are only fully readable after clicking a link labeled "read more", which brings up a popup with the full text / images.
So I'm writing a script to scroll the page, click read more, scrape the post, close it, and move on to the next one.
The problem I'm running into is that each "read more" button is an identical element: an <a> tag whose href starts with "javascript" and whose visible text is just "read more".
If I try to loop through them using XPaths, I run into the problem of them being formatted differently as well, for example:
//*[#id="page"]/div[2]/article[10]/div[2]/ul/li/a
//*[#id="page"]/div[2]/article[14]/div[2]/p[3]/a
I tried writing my loop to just iterate over the article numbers, but of course the XPaths terminate differently. Is there a way I can add a wildcard to the back half of my XPaths, or search just by the article numbers?
/ selects only a direct child; use // instead to descend from <article> to the <a> at any depth:
//*[#id="page"]/div[2]/article//a[.="read more"]
This will give you a list of elements you can iterate. You might be able to remove the [.="read more"], but then it might catch unrelated <a> tags, depending on the rest of the HTML structure.
You can also try looking for the read more elements directly by text
//a[.="read more"]
I recommend using CSS selectors over XPaths. CSS selectors provide a faster, cleaner, and simpler way to handle these queries.
('a[href^="javascript"]')
This will selects every element whose href attribute value begins with "javascript" which is what you are looking for...
You can learn more about Locating Elements by CSS Selectors in selenium here.
readMore = driver.find_element(By.CSS_SELECTOR, 'a[href^="javascript"]')
And about locating hyperlinks by link text (note that the link text is the visible text "read more", not "javascript"):
readMore_link = driver.find_elements(By.LINK_TEXT, 'read more')
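As a rough sketch, either locator gives you a list to loop over; which one matches best depends on the page's actual markup:

from selenium.webdriver.common.by import By

# every anchor whose href starts with "javascript"
read_more = driver.find_elements(By.CSS_SELECTOR, 'a[href^="javascript"]')
# or match on the visible link text instead
read_more = driver.find_elements(By.LINK_TEXT, 'read more')

for link in read_more:
    link.click()
    # scrape and close the popup here, as in the previous sketch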
I am trying to find the textbox element using the find_element_by_xpath() method, but it keeps telling me it can't find said element. I've tried finding it by link text, partial link text, and CSS selector, and it just doesn't work. Here's the line of code that fails:
bar = nav.find_element_by_xpath('//*[@id="react-root"]/div/div/div[2]/main/div/div/div/div[2]/div/div/aside/div[2]/div[2]/div/div/div/div/div[1]/div/div/div/div[2]/div/div/div/div')
Thanks in advance!
So, I suggest creating your own XPath if you want to be precise, rather than copying one based on the HTML structure (which can change).
The element is an <input> with placeholder="Search people" and role="combobox", and you can locate it with this XPath:
//input[@placeholder='Search people' and @role='combobox']
To avoid this problem, I suggest going through a tutorial for a better understanding of how to create custom locators: Xpath tutorial
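Since the page is rendered by React, it is also worth waiting for the element explicitly. A minimal sketch, assuming the placeholder and role attributes above are accurate:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

bar = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located(
        (By.XPATH, "//input[@placeholder='Search people' and @role='combobox']")
    )
)
bar.send_keys("some search term")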
What was happening is: when I opened the tab to inspect the element, the DOM structure changed because of my screen size, so the XPath wasn't the same.
I'm trying to switch frames in Selenium using an Xpath instead of using the name of the frame. The frame doesn't have a name so I figured I could just use the Xpath, but I'm not sure Selenium supports using an Xpath instead of a name.
This is the normal way to switch frames:
driver.switch_to.frame("WhateverFrame")
This is what I have tried:
driver.switch_to.frame(By.XPath("//*[@id='ui-id-1']/iframe"))
driver.find_frame_by_xpath("//*[@id='ui-id-43']/iframe")
Any suggestions as to how I should alter my code to get this to work?
You can switch to a frame using several different options:
driver.switch_to.frame('frame_name')
driver.switch_to.frame(frame_index)
driver.switch_to.frame(element)
So in your case you can pass in the element, as shown below.
driver.switch_to.frame(driver.find_element_by_xpath("//*[@id='ui-id-43']/iframe"))
You can also do the following if you want to switch to the first iframe.
driver.switch_to.frame(driver.find_elements_by_tag_name("iframe")[0])
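Putting it together with your XPath, a minimal sketch (using the newer find_element(By.XPATH, ...) call):

from selenium.webdriver.common.by import By

frame = driver.find_element(By.XPATH, "//*[@id='ui-id-43']/iframe")
driver.switch_to.frame(frame)

# ... interact with elements inside the frame here ...

driver.switch_to.default_content()  # back to the top-level document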
I am using Selenium with Python to automatically extract some data from our power plants, and right now I need to click on an element.
The problem is that the element's XPaths and order change for each plant we are monitoring. The only static info is the value attribute, e.g. value="T_U0".
I tried many approaches and I couldn't find a solution. I can't use index or child because the order of the parameters is changing. I tried CSS selector with no success.
Here are some of my tries:
driver.find_element_by_xpath("//input[#value='T_U0']").click()
driver.find_element_by_css_selector("input[#data-id-sys-abbreviation='388']").click()
I tried many other things but I was just desperately trying anything.
What I really need is a find_by_value; if there is a way of doing that, please let me know, and if there isn't, please show me how else I can do it.
I need to click on some options whose order changes according to the plant.
The problem is with the first XPath: you are trying to locate an input, while you need to get an option.
Try this:
driver.find_element_by_xpath("//option[#value='T_U0']").click()
You can also try to click/select the element via its displayed text.
Pseudo code:
driver.find_element_by_xpath("//option[text()="Some text"]").click()
I essentially have a start_url that has my JavaScript search form and button, hence the need for Selenium. I use Selenium to select the appropriate items in my select box objects and click the search button. On the following page, I do some Scrapy magic. However, now I want to go BACK to the original start_url, fill out a different object, etc., and repeat until there are no more.
Essentially, I have tried making a for-loop and trying to get the browser to go back to the original response.url, but somehow it crashed. I may try having a duplicate list of start_urls at the top for Scrapy to parse through, but I'm not sure if that is the best approach. What can I do in my situation?
The advice here is to use driver.back(): https://selenium-python.readthedocs.io/navigating.html#navigation-history-and-location
The currently selected answer provides a link to an external site and that link is broken. The selenium docs talk about
driver.forward()
driver.back()
but those will sometimes fail, even if you explicitly use some wait functions.
I found a better solution. You can use the below command to navigate backwards.
driver.execute_script("window.history.go(-1)")
Hope this helps someone else in the future.
To move backwards and forwards in your browser’s history use
driver.forward()
driver.back()
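A rough sketch of the overall "search, scrape, go back, repeat" loop; the element ids and option values are placeholders, since the real form isn't shown in the question:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select

for value in ["option1", "option2"]:  # hypothetical option values
    box = Select(driver.find_element(By.ID, "search-box"))   # hypothetical id
    box.select_by_value(value)
    driver.find_element(By.ID, "search-button").click()      # hypothetical id

    # ... scrape the results page here ...

    driver.back()  # or: driver.execute_script("window.history.go(-1)")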