Clicking a button with Selenium in Python works only in debug mode

I am trying to get data from the menu on this website. I've written a script that seems to work well in debug mode; in it, I close Chrome every time after collecting the information for a given city.
In debug mode, the detail information appears when 'element' is clicked by the script. However, when I run the script normally, nothing seems to happen after the city information is sent. The 'Visualize Results' button becomes enabled on the website once the city data is entered, but the detail information that is supposed to appear after the script clicks this button never shows up, as if the button were not clicked at all. Is there something I am missing?
Thank you in advance.
for city in Cities:
    chrome_options = webdriver.ChromeOptions()
    chrome_options.add_experimental_option('prefs', prefs)
    driver = webdriver.Chrome(chrome_options=chrome_options)
    driver.get(link)
    driver.find_element(By.ID, "inputText").send_keys(city)
    driver.find_element(By.ID, 'btninputText').click()
    element = wait(driver, 5).until(EC.presence_of_element_located((By.XPATH, '//div[@id="btviewPVGridGraph"]'))).click()
    driver.close()

You need to wait for the clickability of all three elements you are accessing here; presence alone is not enough.
You also need to add a short delay between clicking the search button and clicking the visualization button.
The following code works:
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service(r'C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 30)
url = "https://re.jrc.ec.europa.eu/pvg_tools/en/"
driver.get(url)
wait.until(EC.element_to_be_clickable((By.ID, "inputText"))).send_keys("Paris")
wait.until(EC.element_to_be_clickable((By.ID, "btninputText"))).click()
time.sleep(2)
wait.until(EC.element_to_be_clickable((By.ID, "btviewPVGridGraph"))).click()
The result is:

Try the below XPath:
wait(driver, 5).until(EC.presence_of_element_located((By.XPATH, '//div[@id="btviewPVGridGraph"]'))).click()

Related

How to click a button with Selenium

I tried with XPath, but Selenium can't click this image/button.
from undetected_chromedriver.v2 import Chrome
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def test():
    driver = Chrome()
    driver.get('https://bandit.camp')
    WebDriverWait(driver, 30).until(EC.element_to_be_clickable((By.XPATH, "/html/body/div[1]/div/main/div/div/div/div/div[5]/div/div[2]/div/div[3]/div"))).click()

if __name__ == "__main__":
    test()
Try the one below. I checked it, and it works fine; clicking the link opens a separate window for login.
free_case = driver.find_element(By.XPATH, ".//p[contains(text(),'Open your free')]")
driver.execute_script("arguments[0].scrollIntoView(true)", free_case)
time.sleep(1)
driver.execute_script("arguments[0].click();", free_case)
First, you need to wait for the presence of that element.
Then you need to scroll the page down to bring it into view, since it is initially outside the visible screen and therefore cannot be clicked.
Now you can click it, and it works.
The code below is working:
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
options.add_argument('--disable-notifications')
webdriver_service = Service(r'C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 10)
url = "https://bandit.camp/"
driver.get(url)
element = wait.until(EC.presence_of_element_located((By.XPATH, "//div[contains(.,'daily case')][contains(@class,'v-responsive__content')]")))
element.location_once_scrolled_into_view  # accessing this property scrolls the element into view as a side effect
time.sleep(0.3)
element.click()

Trying to locate an element in a webpage but getting NoSuchElementException

I am trying to get the webdriver to click a button on the site random.org. The button generates a random integer between 1 and 100. It looks like this:
After inspecting the webpage, I found that the corresponding element looks something like this:
It is inside an iframe, and someone suggested that I should first switch over to that iframe to locate the element, so I incorporated that into my code, but I keep getting a NoSuchElementException. I have attached my code and the error message below for reference. I can't understand why it cannot locate the button element despite referencing the ID, which is supposed to be unique in the entire document.
The code:
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Edge()
driver.get("https://www.random.org/")
driver.implicitly_wait(15)
driver.switch_to.frame(driver.find_element(By.TAG_NAME, "iframe"))
button = driver.find_element(By.CSS_SELECTOR, "input[id='hnbzsqjufzxezy-button']")
button.click()
The error message:
Make sure that there are no other iframes on the page. If there are several and not only one, do this:
iframes = driver.find_elements(By.CSS_SELECTOR, 'iframe')
# try switching to each iframe:
driver.switch_to.frame(iframes[0])
driver.switch_to.frame(iframes[1])
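If you are not sure which frame hosts the button, a minimal sketch along these lines (assuming the driver and By import from the question) probes each iframe in turn. Note that the implicit wait of 15 seconds set in the question will make every failed probe slow, so consider lowering it first:
from selenium.common.exceptions import NoSuchElementException

# visit each iframe and look for the button there
for frame in driver.find_elements(By.CSS_SELECTOR, 'iframe'):
    driver.switch_to.frame(frame)
    try:
        driver.find_element(By.CSS_SELECTOR, "input[type='button'][value='Generate']").click()
        break
    except NoSuchElementException:
        # not in this frame: go back to the top-level document and try the next one
        driver.switch_to.default_content()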
You can't find the button because its id contains random letters; every time you refresh the page, you will see that its value changes. So do this instead:
button = driver.find_element(By.CSS_SELECTOR, 'input[type="button"][value="Generate"]')
button.click()
There are several issues with your code:
First, you need to close the cookies banner.
The locator of the button is wrong; its id is dynamic.
You need to use WebDriverWait to wait for the elements to become clickable.
The following code works:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service(r'C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(service=webdriver_service, options=options)
url = 'https://www.random.org/'
driver.get(url)
wait = WebDriverWait(driver, 10)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[onclick*='all']"))).click()
wait.until(EC.frame_to_be_available_and_switch_to_it((By.TAG_NAME, "iframe")))
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input[id*='button']"))).click()

Why are HTML elements (drop-down, button, textbox) not reachable with Selenium in Python?

I want to change, or at least have some small effect on, the Korean customs website, but its elements do not seem to be accessible from my code. Maybe they are inside an internal iframe, I don't know. I want to change the dropdown, write something in the textbox, and click the search button, but I cannot.
Can anyone help me?
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.select import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

chrome_options = Options()
chrome_options.add_argument("--no-sandbox")
webdriver_service = Service(r'C:\Webdriver\chromedriver.exe')
browser = webdriver.Chrome(service=webdriver_service, options=chrome_options)
url = 'http://www.kita.org/kStat/byCom_AllCount.do'
browser.get(url)
time.sleep(5)
select = Select(WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, '/html/body/table/tbody/tr/td/table/tbody/tr/td/form/table/tbody/tr/td/table[3]/tbody/tr/td/table/tbody/tr/td/table/tbody/tr[2]/td[2]/select'))))  # select dropdown
select.select_by_index(1)
browser.find_element(By.XPATH, '/html/body/table/tbody/tr/td/table/tbody/tr/td/form/table/tbody/tr/td/table[2]/tbody/tr/td[2]/table/tbody/tr/td[2]/a/img').click()  # click search button
time.sleep(5)
There is an iframe there.
You need to switch to it first in order to access the elements inside it.
Also, you should improve your locators.
And insert some text into the item input.
The following code works:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.select import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service(r'C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(service=webdriver_service, options=options)
url = "http://www.kita.org/kStat/byCom_AllCount.do"
driver.get(url)
wait = WebDriverWait(driver, 20)
wait.until(EC.frame_to_be_available_and_switch_to_it((By.ID,"iframe_stat")))
select = Select(WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "[name='cond_choosefield']"))))
select.select_by_index(1)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "[name='cond_prdt_cd']"))).send_keys("kuku")
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a[href*='searchForm']"))).click()

How to get a full-page screenshot in Python using Selenium and Screenshot

I'm trying to get a full-length screenshot and haven't been able to make it work. Here's the code I'm using:
from Screenshot import Screenshot
from selenium import webdriver
import time
ob = Screenshot.Screenshot()
driver = webdriver.Chrome()
driver.maximize_window()
driver.implicitly_wait(10)
url = "https://stackoverflow.com/questions/73298355/how-to-remove-duplicate-values-in-one-column-but-keep-the-rows-pandas"
driver.get(url)
img_url = ob.full_Screenshot(driver, save_path=r'.', image_name='example.png')
print(img_url)
driver.quit()
But this gives me a clipped screenshot:
So as you can see, that's just what the driver window is showing, not a full-length screenshot. How can I tweak this code to get what I'm looking for?
Here is an example of how you can take a full <body> screenshot of a page:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time as t
chrome_options = Options()
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument('disable-notifications')
chrome_options.add_argument("window-size=1280,720")
webdriver_service = Service("chromedriver/chromedriver") ## path to where you saved chromedriver binary
browser = webdriver.Chrome(service=webdriver_service, options=chrome_options)
url = 'https://stackoverflow.com/questions/7263824/get-html-source-of-webelement-in-selenium-webdriver-using-python?rq=1'
browser.get(url)
required_width = browser.execute_script('return document.body.parentNode.scrollWidth')
required_height = browser.execute_script('return document.body.parentNode.scrollHeight')
browser.set_window_size(required_width, required_height)
t.sleep(5)
browser.execute_script("window.scrollTo(0,document.body.scrollHeight);")
required_width = browser.execute_script('return document.body.parentNode.scrollWidth')
required_height = browser.execute_script('return document.body.parentNode.scrollHeight')
browser.set_window_size(required_width, required_height)
t.sleep(1)
body_el = WebDriverWait(browser,10).until(EC.element_to_be_clickable((By.TAG_NAME, "body")))
body_el.screenshot('full_page_screenshot.png')
print('took full screenshot!')
t.sleep(1)
browser.quit()
The Selenium setup is for Linux, but just note the imports and the part after the browser is defined. The code above starts from a small window, then resizes it to fit the full page body, waits a bit, and computes the body size again to account for scripts that kick in on user input. Then it takes the screenshot - tested and working on a really long page.
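On Chromium-based browsers there is also a DevTools route that avoids resizing the window altogether. This is only a minimal sketch (not part of the code above), assuming your Chrome version supports the captureBeyondViewport flag of Page.captureScreenshot:
import base64
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://stackoverflow.com/questions/7263824/get-html-source-of-webelement-in-selenium-webdriver-using-python?rq=1')

# ask the DevTools protocol for a screenshot of the whole page, not just the viewport
result = driver.execute_cdp_cmd("Page.captureScreenshot", {"captureBeyondViewport": True})
with open("full_page_cdp.png", "wb") as f:
    f.write(base64.b64decode(result["data"]))
driver.quit()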
To get a full-page screenshot using the Selenium Python client, you can use the GeckoDriver/Firefox-based save_full_page_screenshot() method as follows:
Code:
driver = webdriver.Firefox(service=s, options=options)
driver.get('https://stackoverflow.com/questions/73298355/how-to-remove-duplicate-values-in-one-column-but-keep-the-rows-pandas')
driver.save_full_page_screenshot('fullpage_gecko_firefox.png')
driver.quit()
Screenshot:
tl;dr
[py] Adding full page screenshot feature for Firefox
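For completeness, a self-contained version of that Firefox snippet might look like the one below; the geckodriver path is an assumption (with Selenium 4.6+ you can also drop the Service and let Selenium Manager find the driver):
from selenium import webdriver
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.firefox.service import Service

options = Options()
s = Service(r'C:\webdrivers\geckodriver.exe')  # assumed path to geckodriver
driver = webdriver.Firefox(service=s, options=options)
driver.get('https://stackoverflow.com/questions/73298355/how-to-remove-duplicate-values-in-one-column-but-keep-the-rows-pandas')
driver.save_full_page_screenshot('fullpage_gecko_firefox.png')  # Firefox-only API
driver.quit()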

How to accept cookies popup within #shadow-root (open) using Selenium Python

I am trying to press the accept button of a cookies popup on the website https://www.immobilienscout24.de/
Snapshot:
I understand that this requires
driver.execute_script("""return document.querySelector('#usercentrics-root')""")
But I can't drill down the path to the accept button in order to click it. Can anyone provide some help?
This is one way (tested and working) you can click that button; please note the imports, as well as the code after the browser/driver is defined:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains
chrome_options = Options()
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument('disable-notifications')
import time as t
webdriver_service = Service("chromedriver/chromedriver") ## path to where you saved chromedriver binary
browser = webdriver.Chrome(service=webdriver_service, options=chrome_options)
actions = ActionChains(browser)
url = 'https://www.immobilienscout24.at/regional/wien/wien/wohnung-kaufen'
browser.get(url)
page_title = WebDriverWait(browser, 3).until(EC.presence_of_element_located((By.CSS_SELECTOR, "a[title='Zur Homepage']")))
actions.move_to_element(page_title).perform()
parent_div = WebDriverWait(browser, 20000).until(EC.presence_of_element_located((By.ID, "usercentrics-root")))
shadowRoot = browser.execute_script("return arguments[0].shadowRoot", parent_div)
try:
button = WebDriverWait(shadowRoot, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[data-testid='uc-accept-all-button']")))
button.click()
print('clicked')
except Exception as e:
print(e)
print('no click button')
That page reacts to the user's behavior and only fully loads once it detects mouse movement, hence the ActionChains() part of the code. After that, we drill down into the shadow root element, locate the button (using waits to make sure it's clickable), and then click it.
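As a side note, recent Selenium 4 releases on Chromium also expose a shadow_root property on WebElement, so the same drill-down can be done without execute_script. A minimal sketch, assuming the same browser, waits, and locators as above:
host = WebDriverWait(browser, 20).until(EC.presence_of_element_located((By.ID, "usercentrics-root")))
shadow = host.shadow_root  # ShadowRoot object; supports CSS selectors only
shadow.find_element(By.CSS_SELECTOR, "button[data-testid='uc-accept-all-button']").click()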
Selenium documentation can be found at https://www.selenium.dev/documentation/
The element Alle akzeptieren within the website is located within a #shadow-root (open).
Solution
To click on the element Alle akzeptieren, you have to use shadowRoot.querySelector(), and you can use the following locator strategy:
Code Block:
driver.execute("get", {'url': 'https://www.immobilienscout24.de/'})
time.sleep(10)
item = driver.execute_script('''return document.querySelector('div#usercentrics-root').shadowRoot.querySelector('button[data-testid="uc-accept-all-button"]')''')
item.click()
