I am currently trying to automate a process using Selenium with Python, but I have hit a roadblock. The list I need is part of a list which is under a tree. I have identified the base of the tree with the following XPath:
item = driver.find_element_by_xpath("//*[@id='filter']/ul/li[1]//ul//li")
items = item.find_elements_by_tag_name("li")
I am trying to loop through the "items" list and click on anything with an "input" tag:
for k in items:
    WebDriverWait(driver, 10).until(EC.element_to_be_clickable((k.find_element(By.TAG_NAME, "input")))).click()
When I execute the above, I get the following error:
"TypeError: find_element() argument after * must be an iterable, not WebElement"
For some reason .click() will not work if I use something like the below.
k.find_element_by_tag_name("input").click()
It only works if I use the WebDriverWait. I have had to use the WebDriverWait approach any time I needed to click something on the page.
My question is:
What is the syntax to replicate items = item.find_elements_by_tag_name("li")
for WebDriverWait(driver, 10).until(EC.element_to_be_clickable((k.find_element(By.TAG_NAME, "input")))).click()
i.e. how do I start from a base element and search under it using find_elements(By.TAG_NAME)?
Thanks in advance
I have managed to find a workaround and get Selenium to do what I need.
I had to use JavaScript execution, so instead of trying to get
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((k.find_element(By.TAG_NAME, "input")))).click() to work, I just used
driver.execute_script("arguments[0].click();", k.find_element_by_tag_name("input"))
It's doing exactly what I needed it to do.
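For completeness, a minimal sketch of the full loop using that workaround, assuming the same #filter tree as above and an already-initialized driver:

from selenium.webdriver.common.by import By

item = driver.find_element_by_xpath("//*[@id='filter']/ul/li[1]//ul//li")
items = item.find_elements_by_tag_name("li")
for k in items:
    # Scope the search to each li, then click its input via JavaScript,
    # which sidesteps the clickability issues described above.
    checkbox = k.find_element(By.TAG_NAME, "input")
    driver.execute_script("arguments[0].click();", checkbox)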
This is the website I am dealing with: https://www.bseindia.com/corporates/ann.html?curpg=1&annflag=1&dt=20211021&dur=P&dtto=20211027&cat=Insider%20Trading%20/%20SAST&scrip=&anntype=A
Here I am able to send the code for the "security name", but in order to submit it I need to click the dropdown element that appears after entering the security name. How do I achieve this with Selenium? I used the code below and it's not working (StaleElementReferenceException):
security_name = driver.find_element_by_id("scripsearchtxtbx")
security_name.send_keys('INE350H01032')
sec_click = driver.find_element_by_xpath('//*[@id="ulSearchQuote2"]/li')
sec_click.click()
This can also be accomplished using the Keys class within Selenium. Keys not only sends ordinary input such as strings; it can also send commands such as Escape, Tab, or, in this case, Enter. Your updated code should look as follows:
from selenium.webdriver.common.keys import Keys

security_name = driver.find_element_by_id("scripsearchtxtbx")
security_name.send_keys('INE350H01032')
security_name.send_keys(Keys.ENTER)
sec_click = driver.find_element_by_xpath('//*[@id="ulSearchQuote2"]/li')
sec_click.click()
These are called special keys. For more examples and information, see the Selenium documentation on special keys.
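For illustration, other special keys follow the same send_keys pattern (the element id here is a hypothetical placeholder):

from selenium.webdriver.common.keys import Keys

field = driver.find_element_by_id("somefield")  # hypothetical element
field.send_keys(Keys.TAB)     # press Tab
field.send_keys(Keys.ESCAPE)  # press Escape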
You can play with the XPath:
security_name = driver.find_element_by_id("scripsearchtxtbx")
security_name.send_keys('INE350H01032')
sec_click = driver.find_element_by_xpath("//ul[@id='ulSearchQuote2']/li//strong[text()='INE350H01032']")
sec_click.click()
This XPath is equivalent: "//ul[@id='ulSearchQuote2']//strong[text()='INE350H01032']"
You can read more about XPath syntax in the XPath documentation.
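Since the question mentions a StaleElementReferenceException, it may also help to wait for the suggestion entry to become clickable and click it in one step; a minimal sketch, assuming the same page:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

security_name = driver.find_element_by_id("scripsearchtxtbx")
security_name.send_keys('INE350H01032')
# Wait for the autocomplete entry to be rendered and clickable,
# then click it immediately to reduce the chance of a stale reference.
WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, "//ul[@id='ulSearchQuote2']//strong[text()='INE350H01032']"))
).click()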
I am scraping the Xbox website with Selenium, but I encountered a problem when extracting someone's followers and friends: both elements have the same class, with no other property setting them apart. So I need to find all elements with that class, append them to a list, and get the first and second values. I just need to know how to find all elements with a class while using a wait, as seen below:
followers = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".item-value-data"))).text
#this currently only gets the first element
I am aware of how to do this without a wait (just using find_elements), but I couldn't find anything about doing it inside a wait.
WebDriverWait waits until at least one element matching the passed condition is found.
There is no expected condition in Selenium for Python that waits for a predefined number of matching elements, or anything like that.
What you can do is put a small sleep after the wait so the page is fully loaded, and then get the list of desired elements.
Like this:
import time

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".item-value-data")))
time.sleep(1)
followers = []
followers_els = driver.find_elements_by_css_selector(".item-value-data")
for el in followers_els:
    followers.append(el.text)
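Alternatively, newer Selenium releases ship an expected condition that returns the whole list of matches once they are all visible; a minimal sketch:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Waits until all matching elements are visible and returns them as a list.
followers_els = WebDriverWait(driver, 20).until(
    EC.visibility_of_all_elements_located((By.CSS_SELECTOR, ".item-value-data"))
)
followers = [el.text for el in followers_els]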
I am very new to web scraping, so I still have lots of trouble. Currently, I am trying to scrape https://www.enterprisetrucks.com/truckrental/en_US.html and set the pickup time by running this code:
from selenium.webdriver.support.ui import Select

pickupTime = d.find_element_by_id('fldPickuptime_msdd')
pickupTime.click()
select = Select(d.find_element_by_id('fldPickuptime'))
select.select_by_value('20:00')
But I get the error saying that the element is not currently visible and may not be manipulated.
The dropdown present here is not of the Select type, so you cannot use the Select class. You need to click on the time directly, using the XPath of that element.
You can use the xpath:
pickupTime = d.find_element_by_id("fldPickuptime_msdd")
pickupTime.click()
selectTime = d.find_element_by_xpath("//*[@id='fldPickuptime_msdd']//span[text()='8:00 PM']")
selectTime.click()
Code for JavaScriptExecutor Click:
element = driver.find_element_by_xpath("Enter the xpath here")
driver.execute_script("arguments[0].click();", element)
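Applied to the pickup-time dropdown above, that might look like this (a sketch, assuming the same XPath still matches):

# Open the custom dropdown, then JS-click the desired option;
# this bypasses the visibility check that made the Select approach fail.
d.find_element_by_id("fldPickuptime_msdd").click()
option = d.find_element_by_xpath("//*[@id='fldPickuptime_msdd']//span[text()='8:00 PM']")
d.execute_script("arguments[0].click();", option)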
I'm trying to create a program that extracts all the people I follow on Instagram. I'm using Python, Selenium, and Chromedriver.
To do so, I first get the number of followed people and click on the 'following' button:
from time import sleep
from random import randrange

nb_abonnements = int(webdriver.find_element_by_xpath('/html/body/span[1]/section[1]/main/div[1]/header/section[1]/ul/li[3]/a/span').text)
sleep(randrange(1, 3))
abonnements = webdriver.find_element_by_xpath('/html/body/span[1]/section[1]/main/div[1]/header/section[1]/ul/li[3]/a')
abonnements.click()
I then use the following code to get the followers and scroll the popup page in case I can't find one:
from selenium.common.exceptions import NoSuchElementException

followers_panel = webdriver.find_element_by_xpath('/html/body/div[3]/div/div/div[2]')
i = 0
followed_list = []
while i < nb_abonnements:
    try:
        print(i)
        # the followed accounts are in a ul-list
        followed = webdriver.find_element_by_xpath('/html/body/div[3]/div/div/div[2]/ul/div/li[{}]/div/div[2]/div/div/div/a'.format(i + 1)).text
        i += 1
        followed_list.append(followed)
    except NoSuchElementException:
        webdriver.execute_script("arguments[0].scrollBy(0,400)", followers_panel)
        sleep(7)
The problem is that once i reaches 12, the program raises the exception and scrolls. From there, it still can't find the next followed account and is stuck in a loop where it does nothing but scroll. I've checked the source code of the IG page, and it turns out the path is still good, but apparently I can't access the elements the way I do anymore, probably because the ul-list in which I am accessing them has become too long (the find_element_by_xpath call inside the loop).
I can't work out how to solve this. I hope you will be of some help.
UPDATE: the DOM looks like this:
html
  body
    span
      script
      ...
    div[3]
      div
        ...
        div
          div
            div[2]
              ul
                div
                  li
                  li
                  li
                  li
                  ...
                  li
The ul is the list of the followers.
The lis contain the info I'm trying to extract (the username). Even when I go to the webpage myself, open the popup window, scroll a little, and let everything load, I can't find the element I'm looking for by typing the XPath manually into the DOM search bar, although the path is correct; I can verify it by looking at the DOM.
I've tried various webdrivers for Selenium; currently I am using chromedriver 2.45.615291. I've also added an explicit wait for the element to show (WebDriverWait(webdriver, 10).until(EC.presence_of_element_located((By.XPATH, '/html/body/div[3]/div/div/div[2]/ul/div/li[{}]/div/div[2]/div/div/div/a'.format(i+1))))), but I just get a timeout exception: selenium.common.exceptions.TimeoutException: Message:.
It just seems like once the ul list is too long (that is, from the moment I've scrolled down enough to load new people), I can't access any element of the list by its XPath, even the elements that were already loaded before I began scrolling.
Instead of using an XPath for each child element, find the ul-list element, then find all the child elements using something like ul_element.find_elements_by_tag_name(). Then iterate through each element in the collection and get the required text, as in the sketch below.
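A minimal sketch of that approach, reusing the popup XPath from the question:

from selenium.common.exceptions import StaleElementReferenceException

# Locate the ul that holds the followed accounts, then grab all its li children at once.
ul_element = webdriver.find_element_by_xpath('/html/body/div[3]/div/div/div[2]/ul')
followed_list = []
for li in ul_element.find_elements_by_tag_name('li'):
    try:
        followed_list.append(li.text)
    except StaleElementReferenceException:
        # The list re-renders while Instagram loads more entries; skip stale ones.
        pass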
I've found a solution: I just access the element through an XPath based on its class, like this: find_element_by_xpath("(//*[@class='FPmhX notranslate _0imsa '])[{}]".format(i)). I don't know why it didn't work the other way, but like this it works just fine.
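For reference, a sketch of the loop rewritten around that class-based XPath, reusing nb_abonnements and followers_panel from the question (the class string comes from Instagram's markup and may change at any time):

followed_list = []
i = 1
while i <= nb_abonnements:
    try:
        # XPath positional indices are 1-based.
        followed = webdriver.find_element_by_xpath("(//*[@class='FPmhX notranslate _0imsa '])[{}]".format(i)).text
        followed_list.append(followed)
        i += 1
    except NoSuchElementException:
        # Not rendered yet: scroll the popup and let new entries load.
        webdriver.execute_script("arguments[0].scrollBy(0,400)", followers_panel)
        sleep(7)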
I'm having trouble selecting a button in my Splinter script using the find_by_css method. The documentation is sparse at best, and I haven't found a lot of good articles out there with examples.
br.find_by_css('div#edit-field-download-files-und-0 a.button.launcher').first.click()
...where br is my browser instance.
I've tried a few different ways of writing it. I'm really not sure how I'm supposed to do it because the documentation doesn't give any hard examples of the syntax.
(A screenshot of the element was attached here.)
Does anyone have any experience with this?
The CSS selector looks alright; I'm just not sure where you got find_by_css as a method.
How about this:
br.find_element_by_css_selector("div#edit-field-download-files-und-0 a.button.launcher").click()
Selenium provides the following methods to locate elements in a page:
find_element_by_id
find_element_by_name
find_element_by_xpath
find_element_by_link_text
find_element_by_partial_link_text
find_element_by_tag_name
find_element_by_class_name
find_element_by_css_selector
To find multiple elements (these methods will return a list; see the sketch after this list):
find_elements_by_name
find_elements_by_xpath
find_elements_by_link_text
find_elements_by_partial_link_text
find_elements_by_tag_name
find_elements_by_class_name
find_elements_by_css_selector
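For illustration, a quick sketch using a couple of these locators (the page and selectors here are just assumptions for the example):

from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://www.python.org")

# find_element_* returns the first match (or raises NoSuchElementException).
heading = driver.find_element_by_tag_name("h1")
# find_elements_* returns a (possibly empty) list of all matches.
links = driver.find_elements_by_css_selector("a")
print(heading.text, len(links))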
I'm working on something similar where I'm trying to click things on a webpage. The documentation for find_by_css() is very poor; you have to pass the CSS path to the element you want to click.
Say we want to go to the About tab on python.org:
from splinter import Browser
from time import sleep

with Browser() as browser:  # <-- Create browser instance (firefox default driver)
    browser.visit('http://www.python.org')  # <-- Visits url string
    browser.find_by_css('#about > a').click()  # <-- Put css path here in quotes
    sleep(5)
If your connection is good you might not get the chance to see the about tab getting clicked but you should end up on the about page.
The hard part is figuring out the CSS path to an element. However, once you have it, the find_by_css() method is pretty easy to use.
I like the W3Schools reference for CSS selection parameters: http://www.w3schools.com/cssref/css_selectors.asp
As for your code... I recommend breaking this down into a few steps, at least while debugging. The call to br.find_by_css('css_string') returns a list of elements, so you can grab that list and check its count.
elems = br.find_by_css('div#edit-field-download-files-und-0 a.button.launcher')
if len(elems) == 1:
    elems.first.click()
If you don't check the length of the returned list and call '.first' on an empty list, you'll get an exception. If len > 1, you're probably getting things you don't expect.
Each id on a page is unique, and you can daisy-chain searches, so you can use a few different statements to make this happen:
id_elems = br.find_by_id('edit-field-download-files-und-0')
if id_elems:
    id_elem = id_elems.first
    a_elems = id_elem.find_by_tag("a")
    for e in a_elems:
        if e.has_class("button launcher"):
            print('Found it!')
            e.click()
This is, of course, just one of many ways to do this.
Lastly, Splinter is a wrapper around Selenium and other webdrivers. It's possible that, even after you find the element to click, the actual click won't do anything. If this happens, you can also try clicking on the wrapped Selenium object, available as e._element. So you could try e._element.click() if necessary.