How to click the icon using Selenium - Python

I am trying to automate certain activities using Selenium. When I launch a webpage, a Windows Security popup appears asking for login credentials.
I have automated that part using AutoIt. After logging in I need to click on an option, and I have tried it with find_element_by_link_text and find_element_by_id.
I am getting the error "Unable to find the element with CSS selector". I have seen other posts on Stack Overflow with the same issue, but I could not fix it by myself.
Here is my code (the HTML of the page is shown in the screenshots added in the update below). Could you please guide me on this and also share any documentation for further reading? Thanks.
from selenium import webdriver
import autoit

driver = webdriver.Ie(r"C:\IEDriverServer\IEDriverServer.exe")
driver.get('URL of the page')
#driver.implicitly_wait(10) # seconds
# wait for the Windows Security credential prompt
autoit.win_wait("Windows Security")
# make the prompt the active window so we can send keystrokes to it
autoit.win_activate("Windows Security")
# type in the username
autoit.send('username')
# tab to the password field
autoit.send("{TAB}")
# type in the password
autoit.send('password')
# submit the credentials
autoit.send("{ENTER}")
driver.maximize_window()
driver.implicitly_wait(120) # seconds
#submit_button_incidents = driver.find_element_by_link_text("3-Normal Incidents")
submit_button_incidents = driver.find_element_by_id("nodeImgs13pm")
submit_button_incidents.click()
driver.implicitly_wait(10)
UPDATE:
I tried to copy the whole HTML, but the page is restricted, so I cannot view the full HTML beyond the basic template. I am adding some more screenshots from the developer tools.
This is how my webpage looks.

Try with this code:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

submit_button_incidents = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, "//span[contains(text(),'My Group')]")))
submit_button_incidents.click()
And do not use implicit waits too many times in the code: an implicit wait is set once for the lifetime of the WebDriver instance.
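As a rough illustration of that point, here is a minimal sketch (the Chrome driver, URL and locator are assumptions, not taken from the OP's page): set the implicit wait once right after creating the driver and use explicit waits for specific conditions.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()  # assumption: any driver behaves the same here
driver.implicitly_wait(10)   # set once; applies to every find_element call for this driver instance

driver.get("https://example.com")  # hypothetical URL

# for a specific condition, use an explicit wait rather than raising the implicit timeout
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "some-id")))  # hypothetical locator
button.click()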
References:
Selenium official documentation for waits
XPath tutorial
CSS selector tutorial
UPDATE :
As the OP has shared the HTML code, you can go ahead with the code below as per the requirement.
There are two elements with the text "My Group's queue".
For the first one you can write the XPath as:
//a[@class='scbdtext']/span[contains(text(),'My Group')]
Code:
submit_button_incidents = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, "//a[@class='scbdtext']/span[contains(text(),'My Group')]")))
submit_button_incidents.click()
For the second one you can write the XPath as:
//a[@class='scbdtextfoc']/span[contains(text(),'My Group')]
Code :
submit_button_incidents = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, "//a[@class='scbdtextfoc']/span[contains(text(),'My Group')]")))
submit_button_incidents.click()
Hope this will help.

Use ActionChains with double_click for this to work in Python.
from selenium.webdriver import ActionChains

# Get the element however you want
element = driver.find_element_by_xpath("//a[@class='scbdtextfoc']/span[contains(text(),'My Group')]")
ActionChains(driver).double_click(element).perform()

Related

Selenium Automation "Unable to locate element" error on the web UI of qbittorrent application

I am trying to automate downloading movies with magnet links using the BitTorrent web UI. I can click on the 'add torrent link' button and the popup does appear, but after that the code fails because it is unable to find the element where the torrent link needs to be added. The same problem occurs when I try to input the file location. I tried time.sleep but had no luck.
My code snippet:
def torrent(path, n):
    # link to the web UI
    driver.get("http://127.0.0.1:8080/")
    # default login credentials
    username = driver.find_element_by_xpath("//*[@id='username']")
    username.send_keys("admin")
    password = driver.find_element_by_xpath("//*[@id='password']")
    password.send_keys("adminadmin")
    time.sleep(2)
    driver.find_element_by_xpath("//*[@id='login']").click()
    time.sleep(2)
    # the elements I am trying to find
    driver.find_element_by_xpath("/html/body/div[1]/div[1]/div[2]/a[1]").click()
    time.sleep(5)
    location = driver.find_element_by_xpath("/html/body/form/div/fieldset/table/tbody/tr[2]/td[2]")
    location.send_keys(path)
    input = driver.find_element_by_xpath("/html/body/form/div/textarea")
    i = 0
    while i <= n-1:
        input.send_keys(list_torrent[i])
        i += 1  # advance to the next torrent link
If you need any other information, please let me know. I have already tried using the BitTorrent API but with no luck. The HTML page has a hidden overflow, but that shouldn't be a problem since I am clicking on the elements, which should make them visible.
Thanks in advance.
Looks like the element is inside an iframe.
Switch to the iframe like this:
WebDriverWait(driver, 10).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, "iframe xpath here")))
Then do the rest of your interaction there.
Once you are done with the iframe, switch back to the default content like this:
driver.switch_to.default_content()
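Putting it together, a minimal sketch of the iframe flow (the iframe locator, the textarea locator and the magnet link are placeholders, not taken from the BitTorrent page):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("http://127.0.0.1:8080/")

# wait for the popup's iframe and switch into it (placeholder locator)
WebDriverWait(driver, 10).until(
    EC.frame_to_be_available_and_switch_to_it((By.XPATH, "//iframe")))

# interact with elements that live inside the iframe (placeholder locator)
url_box = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.XPATH, "//textarea")))
url_box.send_keys("magnet:?xt=...")  # placeholder magnet link

# switch back to the main document when finished
driver.switch_to.default_content()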

How to perform data fetch on button click on a built-in HTML page using Selenium

I am new to Selenium and I am trying to mimic user actions on a site to fetch data from a built-in HTML page on button click. I am able to populate all the field details, but the button click is not working; it looks like the JS code is not running.
I tried many options like adding wait time, ActionChains, etc., but it didn't work. I am providing the site and the code I have written.
from time import sleep
from selenium import webdriver
from selenium.webdriver import ActionChains

driver = webdriver.Chrome()
driver.get("https://www1.nseindia.com/products/content/derivatives/equities/historical_fo.htm")
driver.implicitly_wait(10)
# assigned values to all the other fields
driver.find_element_by_id('rdDateToDate').click()
Dfrom = driver.find_element_by_id('fromDate')
Dfrom.send_keys("02-Oct-2020")
Dto = driver.find_element_by_id('toDate')
Dto.send_keys("08-Oct-2020")
innerHTML = driver.execute_script("document.ready")
sleep(5)
getdata_btn = driver.find_element_by_id('getButton')
ActionChains(driver).move_to_element(getdata_btn).click().click().perform()
I recommend using the full XPath.
import time
from selenium import webdriver

chrome = webdriver.Chrome()
chrome.get("https://www1.nseindia.com/products/content/derivatives/equities/historical_fo.htm")
time.sleep(2)
print("click")
fullxpath = "/html/body/div[2]/div[3]/div[2]/div[1]/div[3]/div/div[1]/div/form/div[19]/p[2]/input"
chrome.find_element_by_xpath(fullxpath).click()
I tried the button click and it worked with the XPath. I thought it was because the same ID was used twice on the website, but I cannot find a duplicate, so I have no idea what is going wrong there.
Good luck :)
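If the fixed sleep feels brittle, an explicit wait on the same full XPath is a reasonable variation (a sketch; the XPath is the one from the answer above and may break if the page layout changes):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www1.nseindia.com/products/content/derivatives/equities/historical_fo.htm")

fullxpath = "/html/body/div[2]/div[3]/div[2]/div[1]/div[3]/div/div[1]/div/form/div[19]/p[2]/input"

# wait until the button is clickable instead of sleeping a fixed time
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, fullxpath)))
button.click()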

XPath returns more than one result, how to handle it in Python

I have started using Selenium with Python. I am able to change the message text using find_element_by_id. I want to do the same with find_element_by_xpath, which is not successful because the XPath matches two instances. I want to try this out to learn about XPath.
I want to do web scraping of a page using Python, for which I need clarity on using XPath, mainly for going to the next page.
# This code works:
import time
import requests
from selenium import webdriver

driver = webdriver.Chrome()
url = "http://www.seleniumeasy.com/test/basic-first-form-demo.html"
driver.get(url)
eleUserMessage = driver.find_element_by_id("user-message")
eleUserMessage.clear()
eleUserMessage.send_keys("Testing Python")
time.sleep(2)
driver.close()
# This works fine. I wish to do the same with XPath.
# I inspect the input box in Chrome and copy the XPath '//*[@id="user-message"]', which seems to refer to the other box as well.
# I wish to use the XPath method to write text in this box as follows, which does not work.
driver = webdriver.Chrome()
url = "http://www.seleniumeasy.com/test/basic-first-form-demo.html"
driver.get(url)
eleUserMessage = driver.find_elements_by_xpath('//*[@id="user-message"]')
eleUserMessage.clear()
eleUserMessage.send_keys("Test Python")
time.sleep(2)
driver.close()
To elaborate on my comment, you would use a list like this:
eleUserMessage_list = driver.find_elements_by_xpath('//*[@id="user-message"]')
my_desired_element = eleUserMessage_list[0]  # or maybe [1]
my_desired_element.clear()
my_desired_element.send_keys("Test Python")
time.sleep(2)
The only real difference between find_elements_by_xpath and find_element_by_xpath is that the first returns a list that needs to be indexed. Once it's indexed, it works the same as if you had used the second option!
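For completeness, a minimal sketch contrasting the two calls on the same demo page (assuming the message box is an <input> element and that the first match is the one you want):
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("http://www.seleniumeasy.com/test/basic-first-form-demo.html")

# singular form: returns a single WebElement (raises NoSuchElementException if nothing matches)
single = driver.find_element_by_xpath('//input[@id="user-message"]')

# plural form: returns a list, possibly empty, that you index yourself
matches = driver.find_elements_by_xpath('//*[@id="user-message"]')
first = matches[0]  # assumption: the first match is the input box

first.clear()
first.send_keys("Test Python")
driver.quit()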

Not able to click a button in Selenium (Booking.com)

I am writing a Python scraper with the help of Selenium. The first few steps are:
go to booking.com, insert a city name, select the first date, and then try to open the check-out calendar.
Here is where my problem occurs: I am not able to click the check-out calendar button (the important area of the website).
I tried to click every element related to the check-out calendar (the elements of the check-out calendar) with element.click(). I also tried the method
element = self.browser.find_element_by_xpath('(//div[contains(@class,"checkout-field")]//button[@aria-label="Open calendar"])[1]')
self.browser.execute_script("arguments[0].click();", element)
It either does nothing (in the case of execute_script() and click() on the div elements) or it throws the following exception when clicking the button directly:
Element <button class="sb-date-field__icon sb-date-field__icon-btn bk-svg-wrapper"
type="button"> is not clickable at point (367.5,316.29998779296875)
because another element <div class="sb-date-field__display"> obscures it
Here is a short snippet to test it:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Firefox()
browser.get("https://www.booking.com/")
wait = WebDriverWait(browser, 5)
element = wait.until(EC.presence_of_element_located((
    By.XPATH, '(//div[contains(@class,"checkout-field")]//button[@aria-label="Open calendar"])[1]')))
element = wait.until(EC.element_to_be_clickable((
    By.XPATH, '(//div[contains(@class,"checkout-field")]//button[@aria-label="Open calendar"])[1]')))
element.click()
I have a temporary solution for my problem, but I am not satisfied with it:
from selenium.webdriver import ActionChains

element = browser.find_element_by_xpath('(//div[contains(@class,"checkout-field")]//button[@aria-label="Open calendar"])[1]')
hov = ActionChains(browser).move_to_element(element)
hov.click().perform()
This opens the calendar by hovering over the element and then clicking it. Strangely, this works while the methods mentioned above still don't.
Find the element with an XPath and call it clicka, then use execute_script to click it:
driver.execute_script("arguments[0].click();", clicka)
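Spelled out as a sketch (clicka is just a name for the element located with the same XPath used elsewhere in this question):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("https://www.booking.com/")

xpath = '(//div[contains(@class,"checkout-field")]//button[@aria-label="Open calendar"])[1]'
clicka = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.XPATH, xpath)))

# a JavaScript click bypasses the overlay that intercepts the native click
driver.execute_script("arguments[0].click();", clicka)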
I'm not 100% sure that I got everything you posted, because the layout is a bit messy.
However, I tried to test the issue with both Selenium Java and Firefox Scratchpad (a web developer tool that allows running JavaScript scripts), and it worked perfectly: the button was clickable in both of them.
If you're interested in further testing using these tools, this is the code I've used:
In JavaScript:
function getElementByXpath(path) {
    return document.evaluate(path, document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
}
var myElement = getElementByXpath('(//div[contains(@class,"checkout-field")]//button[@aria-label="Open calendar"])[1]');
myElement.click();
and in Java:
FirefoxDriver driver = new FirefoxDriver();
WebDriverWait wait = new WebDriverWait(driver, 10);
driver.navigate().to("https://www.booking.com");
wait.until(ExpectedConditions.elementToBeClickable(By.xpath("(//div[contains(#class,'checkout-field')]//button[#aria-label='Open calendar'])[1]")));
driver.findElement(By.xpath("(//div[contains(#class,'checkout-field')]//button[#aria-label='Open calendar'])[1]")).click();
System.out.println("success");
If you have to handle the check-out button across the whole website, managing it with explicit waits requires a lot of code; you can use an implicit wait instead. Below is the Java version:
System.setProperty("webdriver.chrome.driver",
"G:\\TopsAssignment\\SampleJavaExample\\lib\\chromedriver.exe");
WebDriver driver = new ChromeDriver();
driver.manage().window().maximize();
driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);
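A rough Python equivalent of that Java snippet (the chromedriver path is a placeholder):
from selenium import webdriver

driver = webdriver.Chrome(r"C:\path\to\chromedriver.exe")  # hypothetical driver path
driver.maximize_window()
driver.implicitly_wait(5)  # seconds; applies to every find_element call for the driver's lifetime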

Clicking on a Javascript Link on Firefox with Selenium

I am trying to get some comments off the car blog, Jalopnik. It doesn't come with the web page initially, instead the comments get retrieved with some Javascript. You only get the featured comments. I need all the comments so I would click "All" (between "Featured" and "Start a New Discussion") and get them.
To automate this, I tried learning Selenium. I modified their script from Pypi, guessing the code for clicking a link was link.click() and link = broswer.find_element_byxpath(...). It doesn't look liek the "All" button (displaying all comments) was pressed.
Ultimately I'd like to download the HTML of that version to parse.
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
import time
browser = webdriver.Firefox() # Get local session of firefox
browser.get("http://jalopnik.com/5912009/prius-driver-beat-up-after-taking-out-two-bikers/") # Load page
time.sleep(0.2)
link = browser.find_element_by_xpath("//a[#class='tc cn_showall']")
link.click()
browser.save_screenshot('screenie.png')
browser.close()
Using Firefox with the Firebug plugin, I browsed to http://jalopnik.com/5912009/prius-driver-beat-up-after-taking-out-two-bikers.
I then opened the Firebug console and clicked on ALL; it obligingly showed a single AJAX call to http://jalopnik.com/index.php?op=threadlist&post_id=5912009&mode=all&page=0&repliesmode=hide&nouser=true&selected_thread=null
Opening that url in a new window gets me the comment feed you are seeking.
More generally, if you substitute the appropriate article-ID into that url, you should be able to automate the process without Selenium.
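As an illustration of that idea, a sketch using requests (the URL pattern is the one captured above; the article ID is the one from the question):
import requests

post_id = "5912009"  # article ID taken from the question's URL
url = (
    "http://jalopnik.com/index.php"
    f"?op=threadlist&post_id={post_id}&mode=all&page=0"
    "&repliesmode=hide&nouser=true&selected_thread=null"
)

response = requests.get(url)
response.raise_for_status()
html = response.text  # the comment feed, ready to be parsed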
