send_keys(Keys.ENTER) is not working in Python

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
import time
browser=webdriver.Chrome('C:/Users/Dell/Downloads/chromedriver')
browser.get('https://www.screener.in/')
sbox = browser.switch_to.active_element
sbox.send_keys('Infosys Ltd')
sbox.send_keys(Keys.RETURN)
The Enter key is not working. I have tried using .submit() too, but it still isn't working. Please let me know if there is any other way to do it.

Try using Keys.ENTER instead of Keys.RETURN
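For example, with the rest of your code unchanged (a minimal sketch):
sbox = browser.switch_to.active_element
sbox.send_keys('Infosys Ltd')
sbox.send_keys(Keys.ENTER)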

url = "https://www.foodpanda.pk/restaurants/new?lat=24.9414896&lng=67.1676002&vertical=restaurants"
browser = webdriver.Chrome()
browser.get('https://www.screener.in/')
sbox = browser.switch_to.active_element
sbox.send_keys('Infosys Ltd')
WebDriverWait(browser, 10).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, '[class="dropdown-content visible"]')))
sbox.send_keys(Keys.ENTER)
Wait for the dropdown to be visible before sending the Enter key.
Imports required:
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By

switch_to.active_element may be unstable depending on the browser. Also, make sure sbox is visible before you interact with it. Note that a WebElement is not subscriptable in Python, so send the keys to the element directly:
sbox.send_keys('Infosys Ltd')
sbox.send_keys(Keys.RETURN)


Can't find element belonging to 'Accept All' button

I recently started learning Selenium and web scraping in Python. I'm trying to find and click the 'Accept All' button on the pop-up (image of the pop-up can be found below) when entering the following site: https://www.sherdog.com, using Chrome. It takes around 5 seconds for the pop-up to load. I have tried different things and have read what I could find on Stack Overflow describing similar problems, to no avail. I always get a NoSuchElementException (or NoAlertPresentException).
I have tried the following things:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
driver.get('https://www.sherdog.com')
driver.find_element(By.CLASS_NAME, 'Button__StyledButton-a1qza5-0 incZp')
driver.switch_to.alert
try:
    element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CLASS_NAME, 'Button__StyledButton-a1qza5-0 incZp')))
except:
    print("An exception occurred")
I also thought I might have to switch frames using driver.switch_to.frame(driver.find_element(By.ID, "rufous-sandbox")), but I'm honestly unsure which frame to select. When looking through the HTML code (which I just started learning) I see some references to JavaScript (of which I have zero knowledge). Maybe that is causing me trouble?
If anybody could provide some insight, or point me in the right direction, would be greatly appreciated.
This is how you click that element:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
chrome_options = Options()
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument('disable-notifications')
chrome_options.add_argument("window-size=1920,1080")
webdriver_service = Service("chromedriver/chromedriver") ## path to where you saved chromedriver binary
browser = webdriver.Chrome(service=webdriver_service, options=chrome_options)
actions = ActionChains(browser)
url = 'https://www.sherdog.com'
browser.get(url)
elem = WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, "//button[@title='Scroll to the bottom of the text below to enable this button']")))
elem.click()
print('clicked')
Bear in mind that, if the window is not sufficiently large, that text will need to be scrolled (and the button will not be clickable). The default headless window size is quite small, so make sure your window is sufficiently large.
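If you do run headless, here is a minimal sketch of forcing a larger window (reusing the chrome_options and webdriver_service from above; the exact size is just an example):
chrome_options = Options()
chrome_options.add_argument("--headless")  # run without a visible browser window
chrome_options.add_argument("window-size=1920,1080")  # large enough that the button becomes clickable
browser = webdriver.Chrome(service=webdriver_service, options=chrome_options)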
Selenium docs: https://www.selenium.dev/documentation/
To click on the Accept Cookies element you need to induce WebDriverWait for element_to_be_clickable(), and you can use either of the following locator strategies:
Using CSS_SELECTOR:
driver.get("https://www.sherdog.com")
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "div>a.cnaccept"))).click()
Using XPATH:
driver.get("https://www.sherdog.com")
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//a[@class='cnaccept']"))).click()
Note: You have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
There are 2 objects you need to close there and they are not alerts. So driver.switch_to.alert is not relevant here.
Always try to use stable, unique locators. Class names like incZp are often dynamic and not reliable.
Button__StyledButton-a1qza5-0 incZp are actually 2 class names, so you have to use CSS_SELECTOR or XPATH to work with them.
It is always preferable to wait for element visibility, not just presence, when you are going to click on that element.
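For example, a compound class name like that can be turned into a single CSS selector by joining the classes with dots (a sketch only; these auto-generated class names may change between builds of the site):
driver.find_element(By.CSS_SELECTOR, ".Button__StyledButton-a1qza5-0.incZp")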
This should work:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver.get('https://www.sherdog.com')
wait = WebDriverWait(driver, 20)
wait.until(EC.visibility_of_element_located((By.XPATH, "//button[contains(text(),'Continue')]"))).click()
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div#cookieNotice a.cnaccept"))).click()

Message: no such element: Unable to locate element: {"method":"xpath","selector":"/html/body/div[2]/div[2]/div/div[3]/div[2]/div/div/div[2]/a[1]"}

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
import time
print('\n')
print("PROGRAM STARTING")
print('~~~~~~')
print('\n')
# initiate driver
driver = webdriver.Chrome()
driver.get('http://arcselfservice.sbcounty.gov/web/user/disclaimer')
#begin
driver.find_element_by_xpath('//*[@id="submitDisclaimerAccept"]').click()
driver.find_element_by_xpath('/html/body/div[2]/div[2]/div/div[3]/div[2]/div/div/div[2]/a[1]').click()
I have been stuck on this error for a long time; for some reason it can't find the element even though I am specifying the XPath. There don't seem to be any iframes, and implicit or explicit waits don't work either. Please help.
The issue was that the element takes a moment to appear, so you need to wait for it to be clickable and then click it.
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "/html/body/div[2]/div[2]/div/div[3]/div[2]/div/div/div[2]/a[1]"))).click()
Another way, in case you want to switch to other entries by their heading text later:
path = "//a/div/h1[text()='{}']/../..".format("Fictitious Business Names Application")
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH ,path))).click()
Import
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
The error occurs because the element takes time to become available. Use an explicit wait before interacting with the element.
A small snippet can be:
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH ,'/html/body/div[2]/div[2]/div/div[3]/div[2]/div/div/div[2]/a[1]'))).click()
Just don't forget to import:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

Selenium clicks next button instead of back button

I'm trying to scrape this webpage for prices, and I need the prices to be in US dollars so that it is a currency I understand. However, when I initially load the URL, it gives the prices in multiple seemingly random currencies. I found that I could change this by clicking the next button and then the back button, but when I tried to automate this, it did not work. Instead, running this code clicks the next button twice, rather than clicking it once, waiting for five seconds, and then clicking the back button. Here is the code that I am currently using that replicates the problem.
from selenium import webdriver
driver = webdriver.Chrome(r'C:\Users\Hank\Desktop\chromedriver_win32\chromedriver.exe')
driver.get('https://steamcommunity.com/market/listings/440/Unusual%20Old%20Guadalajara')
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait as wait
from selenium.webdriver.support.expected_conditions import presence_of_element_located
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import StaleElementReferenceException
import time
time.sleep(5)
action = ActionChains(driver)
next_button=wait(driver, 10).until(EC.element_to_be_clickable((By.ID,'searchResults_btn_next')))
action.move_to_element(next_button).click().perform()
time.sleep(5)
back_button=wait(driver, 10).until(EC.element_to_be_clickable((By.ID,'searchResults_btn_prev')))
action.move_to_element(back_button).click().perform()
Thanks, your time and help is greatly appreciated. Please direct me to a relevant question if this one has already been answered somewhere else.
You don't need the ActionChains class here; a plain .click() works. (The double click happens because you reuse the same ActionChains object: it keeps its queued actions, so the second perform() replays the first move-and-click as well as the new one.)
Try the following code:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
driver = webdriver.Chrome(r'C:\Users\Hank\Desktop\chromedriver_win32\chromedriver.exe')
driver.get('https://steamcommunity.com/market/listings/440/Unusual%20Old%20Guadalajara')
wait = WebDriverWait(driver, 20)
next_button = wait.until(EC.element_to_be_clickable((By.ID,'searchResults_btn_next')))
next_button.click()
time.sleep(5)
back_button = wait.until(EC.element_to_be_clickable((By.ID,'searchResults_btn_prev')))
back_button.click()
But note that time.sleep(5) is a fragile approach; a better way is, for example, to wait until an element of the second page appears.
Or instead of time.sleep(...) in this case, you can use this code:
wait.until(EC.invisibility_of_element_located((By.CSS_SELECTOR,'.pagebtn.disabled')))
The selector above matches the disabled previous button, which is present while you are on the first page and goes away once you arrive at the second page. Waiting with .invisibility_of_element_located is more efficient than a fixed sleep.

Tab on Website is "Not Clickable" Using Selenium w/ Python

I'm trying to use Selenium to click the tab for quarterly financials on this page:
http://www.msn.com/en-us/money/stockdetails/financials/fi-126.1.AAPL.NAS
When I run my code, it works some of the time, and sometimes it tells me:
"Element is not clickable at point (897.7999877929688, 20.100006103515625). Other element would receive the click:
<span class="mectrlname mectrlsignin"></span>"
Here is the code I am running...
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import *
from selenium.webdriver.common.keys import Keys
import time
driver = webdriver.Firefox()
driver.get('http://www.msn.com/en-us/money/stockdetails/financials/fi-126.1.AAPL.NAS')
wait = WebDriverWait(driver, 3)
qtrtab = wait.until(EC.element_to_be_clickable((By.XPATH,'//*[@id="financials-period-list"]/li[2]')))
qtrtab.click()
Does anyone know why sometimes I get the error message and other times it works just fine? Should I be doing this differently? Thanks!
There is a "frozen" header that covers the element you want to click when the cursor is moved to it. Just maximize the browser window to avoid this problem:
driver = webdriver.Firefox()
driver.maximize_window()
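Putting it together with the wait from your question (a sketch; same XPath and imports as above):
driver = webdriver.Firefox()
driver.maximize_window()
driver.get('http://www.msn.com/en-us/money/stockdetails/financials/fi-126.1.AAPL.NAS')
wait = WebDriverWait(driver, 10)
qtrtab = wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id="financials-period-list"]/li[2]')))
qtrtab.click()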

What is the best way to check URL change with Selenium in Python?

So, what I want to do is run a function on a specific webpage (one that matches my regex).
Right now I'm checking the URL every second and it works, but I'm sure there is a better way (as it floods that website with GET requests).
while flag:
    time.sleep(1)
    print(driver.current_url)
    if driver.current_url == "mydesiredURL_by_Regex":
        time.sleep(1)
        myfunction()
I was thinking of doing that somehow with WebDriverWait, but I'm not really sure how.
This is how I implemented it eventually. Works well for me:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
wait = WebDriverWait(driver, 5)
desired_url = "https://yourpageaddress"

def wait_for_correct_current_url(desired_url):
    wait.until(
        lambda driver: driver.current_url == desired_url)
I was thinking of doing that somehow with WebDriverWait
Exactly. First of all, see if the built-in Expected Conditions may solve that:
title_is
title_contains
Sample usage:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 10)
wait.until(EC.title_is("title"))
wait.until(EC.title_contains("part of title"))
If not, you can always create a custom Expected Condition to wait for the URL to match a desired regular expression.
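A minimal sketch of such a custom condition (the pattern below is a placeholder; substitute your own regex):
import re

class url_matches_pattern(object):
    """Expected condition that passes once the current URL matches the given regex."""
    def __init__(self, pattern):
        self.pattern = pattern
    def __call__(self, driver):
        return re.search(self.pattern, driver.current_url) is not None

wait = WebDriverWait(driver, 10)
wait.until(url_matches_pattern(r"https://example\.com/desired/.*"))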
To really know that the URL has changed, you need to know the old one. Using WebDriverWait the implementation in Java would be something like:
WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.not(ExpectedConditions.urlToBe(oldUrl)));
I know the question is for Python, but it's probably easy to translate.
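A rough Python equivalent of the Java snippet above (a sketch; old_url is whatever URL you are navigating away from):
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

old_url = driver.current_url
# ... trigger the navigation here ...
WebDriverWait(driver, 10).until(EC.url_changes(old_url))  # passes once the current URL differs from old_url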
Here is an example using WebDriverWait with expected_conditions:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
url = 'https://example.com/before'
changed_url = 'https://example.com/after'
driver = webdriver.Chrome()
driver.get(url)
# wait up to 10 secs for the url to change away from the starting url or else `TimeoutException` is raised.
WebDriverWait(driver, 10).until(EC.url_changes(url))
# or wait for the exact target url:
WebDriverWait(driver, 10).until(EC.url_to_be(changed_url))
Use url_matches to match a regex pattern against the current URL. Internally it does re.search(pattern, url).
from selenium import webdriver
import re
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
pattern='https://www.example.com/'
driver = webdriver.Chrome()
wait = WebDriverWait(driver,10)
wait.until(EC.url_matches(pattern))
