Python Selenium Drag and Drop

I know that there are already other related posts, but none of them gives a complete answer. Below is the code for drag and drop that I'm using:
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
url = 'http://www.w3schools.com/html/html5_draganddrop.asp'
driver = webdriver.Firefox()
driver.get(url)
element = driver.find_element_by_id("drag1")
target = driver.find_element_by_id("div2")
ActionChains(driver).drag_and_drop(element, target).perform()
Can you tell me what is wrong with this code?
Later edit:
Found the following example which works:
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
firefox = webdriver.Firefox()
firefox.get('http://www.theautomatedtester.co.uk/demo2.html')
draggable = firefox.find_element_by_class_name("draggable")
droppable = firefox.find_element_by_name("droppable")
dragdrop = ActionChains(firefox)\
.drag_and_drop(draggable, droppable)
dragdrop.perform()
It must be related to the page source (JS code?), but I can't tell what.

Your drag and drop code is correct, but the actual URL is http://www.w3schools.com/html/tryit.asp?filename=tryhtml5_draganddrop, and the two ids are inside a frame, so you must *switch_to_frame* first, before perform().

I've tried to get this working as well, and switch_to_frame doesn't seem to help. Some additional research has me thinking that perhaps Selenium WebDriver doesn't fully support HTML5 drag and drop:
https://code.google.com/p/selenium/issues/detail?id=3604
I'm going to see if I can find a nice jQuery drag-and-drop test page that I can use to test the iframe behavior on.

Related

Unable to locate "Accept" Button - Selenium - Beginner Web Scraping

I am trying to use Selenium in order to learn different ways of web scraping.
When the code is executed, Firefox starts and the "accept cookies" dialog (or whatever it is) pops up. I am unable to locate the "accept" button when inspecting the page.
My code so far:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service
import pandas as pd
import time
PATH = "C:/Users/myuser/Desktop/Driver/geckodriver.exe"
driver = webdriver.Firefox(executable_path=PATH)
driver.maximize_window() # For maximizing window
driver.get("https://www.immonet.de/")
button_pos = driver.find_element(by=By.CLASS_NAME, value="sc-gsDKAQ fILFKg")
button_pos.click()
print(driver.title)
input = input()
I get the following error: Unable to locate element: .sc-gsDKAQ fILFKg
My thought was to locate the button via the inspect tool as follows:
What am I missing or doing wrong? How would I find the right element?
Thanks!
Pat
First of all, to display this URL, accepting the cookies is a must, but accepting and clicking the cookie button isn't an easy task, because the button is under an open shadow root; Selenium locators and WebDriverWait can do nothing inside a shadow root. To reach into the shadow root you need to apply a JavaScript querySelector:
#To execute shadow root and accept cookies
driver.execute_script('''return document.querySelector('div#usercentrics-root').shadowRoot.querySelector('button[data-testid="uc-accept-all-button"]')''').click()
The class attribute of an HTML element can contain multiple classes separated by spaces; i.e. "sc-gsDKAQ fILFKg" contains two classes, sc-gsDKAQ and fILFKg.
You can use either, but both look auto-generated and can change the next time the CSS is recompiled. I recommend an XPath built on the data-testid attribute instead.
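To make the difference concrete, here is a small sketch of the locator styles involved (the compound class and the data-testid value are taken from the question and answer above; By.CLASS_NAME accepts only a single class, which is why the original locator failed):

```python
# The class attribute holds two space-separated classes.
compound = "sc-gsDKAQ fILFKg"
classes = compound.split()            # two separate class names
# A CSS selector can chain both classes, but the names are generated
# and fragile; the data-testid attribute is the stable hook.
css_both = "." + ".".join(classes)    # '.sc-gsDKAQ.fILFKg'
xpath_testid = '//button[@data-testid="uc-accept-all-button"]'
# usage: driver.find_element(By.CSS_SELECTOR, css_both)
#        driver.find_element(By.XPATH, xpath_testid)
```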

How do I test every link on a webpage with Selenium using Python and pytest, or Selenium Firefox IDE?

So I'm trying to learn Selenium for automated testing. I have the Selenium IDE and the WebDrivers for Firefox and Chrome, both in my PATH, on Windows. I've been able to get basic testing working, but this part of the testing is eluding me. I've switched to using Python because the IDE doesn't have enough features; you can't even click the back button.
I'm pretty sure this has been answered elsewhere but none of the recommended links provided an answer that worked for me. I've searched Google and YouTube with no relevant results.
I'm trying to find every link on a page, which I've been able to accomplish; I would think this would just be a default test. I even got it to print the text of each link, but when I try to click the link it doesn't work. I've tried waits of various sorts, including visibility_of_any_elements_located and time.sleep(5), to wait before trying to click the link.
I've tried clicking the link after the wait with self.driver.find_element(By.LINK_TEXT, ("lnktxt")).click(), but none of that works. The code below does work for listing: it prints the link text, the URL, and the link text again from a variable.
I guess I'm not sure how to get a variable into the By.LINK_TEXT or ...by_link_text statement, assuming that would work. I figured if I got the text into a variable I could use it again; that worked for print but not for click().
I basically want to be able to load a page, list all links, click a link, go back and click the next link, etc.
The only post this site recommended that might be helpful was...
How can I test EVERY link on the WEBSITE with Selenium
But it's Java based and I've been trying to learn Python for the past month so I'm not ready to learn Java just to make this work. The IDE does not seem to have an easy option for this, or from all my searches it's not documented well.
Here is my current Selenium code in Python.
import pytest
import time
import json
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
wait_time_out = 15
class TestPazTestAll2():
    def setup_method(self, method):
        driver = webdriver.Firefox()
        self.driver = webdriver.Firefox()
        self.vars = {}

    def teardown_method(self, method):
        self.driver.quit()

    def test_pazTestAll(self):
        self.driver.get('https://poetaz.com/poems/')
        lnks = self.driver.find_elements_by_tag_name("a")
        print("Total Links", len(lnks))
        # traverse list
        for lnk in lnks:
            # get_attribute() to get all href
            print(lnk.get_attribute("text"))
            lnktxt = (lnk.get_attribute("text"))
            print(lnk.get_attribute("href"))
            print(lnktxt)
        driver.quit()
Again, I'm sure I missed something in my searches but after hours of searching I'm reaching out.
Any help is appreciated.
I basically want to be able to load a page, list all links, click a link, go back and click the next link, etc.
I don't recommend doing this. Selenium and manipulating the browser is slow and you're not really using the browser for anything where you'd really need a browser.
What I recommend is simply sending requests to those scraped links and asserting response status codes.
import requests

link_elements = self.driver.find_elements_by_tag_name("a")
urls = map(lambda l: l.get_attribute("href"), link_elements)
for url in urls:
    response = requests.get(url)
    assert response.status_code == 200
(You also might need to prepend some base url to those strings found in href attributes.)
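That prepending can be done with urllib's urljoin, which resolves relative hrefs against the page URL and passes absolute ones through unchanged. A sketch using the page from the question as the base (the example hrefs are hypothetical):

```python
from urllib.parse import urljoin

base = "https://poetaz.com/poems/"
hrefs = ["/about", "poem1.html", "https://example.com/x", None]
# Skip missing hrefs; urljoin resolves relative paths against the base
# and leaves absolute URLs untouched.
urls = [urljoin(base, h) for h in hrefs if h]
```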

Changing attribute values of a tag using selenium python

I want to click on the Select Year dropdown, select a year from it, go to that page, and fetch the HTML.
I've written this piece of code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome('C:/Users/SoumyaPandey/Desktop/Galytix/Scrapers/data_ingestion/chromedriver.exe')
driver.get('https://investors.aercap.com/results-and-events/financial-results')
driver.maximize_window()
driver.find_element_by_id('year_filter_chosen').click()
driver.find_element_by_class_name('active-result')
I'm just starting to work with Selenium and have no clue how to proceed further.
I tried to look for the next class after clicking on the dropdown. I want to set the attribute value 'data-option-array-index' to 1 first, open the page, and get the HTML, then keep changing the value of this attribute.
Any help would be much appreciated!!
driver.get('https://investors.aercap.com/results-and-events/financial-results')
elem=driver.find_element_by_css_selector('#year-filter')
driver.execute_script("arguments[0].style.display = 'block';", elem)
selectYear=Select(elem)
selectYear.select_by_index(1)
Simply find the element and use Select on it after you change its style to display: block to access its values.
Imports
from selenium.webdriver.support.select import Select
For the select tag in Selenium there's the great class Select; an example is provided by my colleague in the neighboring answer. But there's also a slightly easier way to do it:
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.get('https://investors.aercap.com/results-and-events/financial-results')
el = driver.find_element(By.CSS_SELECTOR, '#year-filter')
time.sleep(3)
driver.execute_script("arguments[0].style.display = 'block';", el)
el.send_keys('2018')
time.sleep(3)
driver.quit()

Dynamic element (table) on page is not updated when I use click() in Selenium, so I couldn't retrieve the new data

Page from which I need to scrape data: Digikey Search result
Issue
Only 100 rows are allowed to be shown in each table, so I have to move between multiple tables using the NextPageButton.
As illustrated in the code below, I do click it, but every time the results retrieved are those of the first table; the click action ActionChains(driver).click(element).perform() doesn't move on to the next table's results.
Keep in mind that NO new page is opened; the click is intercepted by some sort of JavaScript that does rich UI work on the same page to load a new table of data.
My Expectations
I am just trying to validate that I can move to the next table; then I will edit the code to loop through all of them.
This piece of code should return the data in the second table of results, BUT it actually returns the values from the first table, the one loaded initially with the URL. This means that either the click action didn't occur, or it occurred but the WebDriver content isn't updated by interacting with the page's dynamic JavaScript elements.
I will appreciate any help. Thanks.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.expected_conditions import presence_of_element_located
from selenium.webdriver import ActionChains
import time
import sys
url = "https://www.digikey.com/en/products/filter/coaxial-connectors-rf-terminators/382?s=N4IgrCBcoA5QjAGhDOl4AYMF9tA"
chrome_driver_path = "..PATH\\chromedriver"
chrome_options = Options()
chrome_options.add_argument("--headless")
webdriver = webdriver.Chrome(
    executable_path=chrome_driver_path,
    options=chrome_options
)
with webdriver as driver:
    wait = WebDriverWait(driver, 10)
    driver.get(url)
    wait.until(presence_of_element_located((By.CSS_SELECTOR, "tbody")))
    element = driver.find_element_by_css_selector("button[data-testid='btn-next-page']")
    ActionChains(driver).click(element).perform()
    time.sleep(10)  # too much time, I know, but it makes sure it is not a waiting issue; something needs to be updated
    results = driver.find_elements_by_css_selector("tbody")
    for count in results:
        countArr = count.text
        print(countArr)
        print()
    driver.close()
Finally found a SOLUTION!
Source of the solution.
As expected, the issue was in the clicking action itself: it is somehow not done right, or not done at all, as illustrated in the question linked as the source of the solution.
The solution is to click the button using JavaScript execution.
Change the line
ActionChains(driver).click(element).perform()
to the following:
driver.execute_script("arguments[0].click();", element)
That's it.

Selenium drop down option unable to web scrape

So I have to web scrape the info on car year, model, and make from https://auto-buy.geico.com/nb#/sale/vehicle/gskmsi/ (if the link doesn't work, kindly go to https://geico.com, fill in the zip code 75002, enter random details in the customer info, and you will land on the vehicle info page).
Having browsed through various answers, I have figured out that I can't use mechanize or anything similar, owing to the browser sending JavaScript requests every time I select an option in the menu. That leaves something like Selenium to help me.
Following is my code:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.ui import Select
from bs4 import BeautifulSoup
from selenium import webdriver
driver = webdriver.Ie("IEDriverServer.exe")
WebDriverWait(driver, 10)
driver.get('https://auto-buy.geico.com/nb#/sale/customerinformation/gskmsi')
html = driver.page_source
soup = BeautifulSoup(html)
select = Select(driver.find_element_by_id('vehicleYear'))
print(select)
The output is an empty [] because it's unable to locate the form.
Please let me know how to select the data from the forms of the page.
P.S.: Though I have used IE, any code correction using Mozilla or Chrome is also welcome.
You need to fill out all the info in "Customer" tab using Selenium and then wait for the appearance of this select element:
from selenium.webdriver.support import ui
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
select_element = ui.Select(ui.WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.ID, "vehicleYear"))))
Then select a needed option:
select_element.select_by_visible_text("2017")
Hope it helps you!
