I want to know how one can get the coordinates of an element relative to the screen resolution rather than the browser window size. I have already tried this (code block), but it returns coordinates relative to the browser window rather than the screen:
element = driver.find_element_by_xpath("//*[@id='search_form_input_homepage']")
print(element.location)
Any alternatives that I can use?
A (terrible) attempt at illustrating what I mean:
note: driver.execute_script is not allowed, as the website has a bot blocker :(
You can use .size and .location to get an element's size and position.
Try this:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from time import sleep, strftime

url = "some url"
driver = webdriver.Chrome()  # don't shadow the webdriver module name
driver.get(url)
driver.fullscreen_window()

cookies = driver.find_element_by_xpath("some xpath")
location = cookies.location
size = cookies.size
w, h = size['width'], size['height']

print(location)
print(size)
print(w, h)
print(cookies.location_once_scrolled_into_view)
Try this and see if it helps. More available methods (size, rect, etc.) can be found at:
https://www.selenium.dev/selenium/docs/api/py/webdriver_remote/selenium.webdriver.remote.webelement.html#module-selenium.webdriver.remote.webelement
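If execute_script is off the table, one workaround (not from the answer above) is driver.get_window_rect(), which reports the browser window's own position on the screen; adding it to element.location approximates screen coordinates. The viewport sits below the browser chrome (tab and address bars), so the y value is only exact for a fullscreen window. A sketch:

```python
def window_to_screen(window_rect, location, chrome_height=0):
    """Translate viewport coordinates into approximate screen coordinates.

    window_rect   -- dict from driver.get_window_rect() ('x', 'y', ...)
    location      -- dict from element.location ('x', 'y')
    chrome_height -- estimated height of the browser UI above the viewport;
                     0 assumes a fullscreen/kiosk window
    """
    return {
        "x": window_rect["x"] + location["x"],
        "y": window_rect["y"] + chrome_height + location["y"],
    }


def element_screen_position(driver, xpath):
    # Works with any WebDriver instance; no JavaScript injection needed.
    element = driver.find_element("xpath", xpath)
    return window_to_screen(driver.get_window_rect(), element.location)
```

With a fullscreen window (as in the answer's fullscreen_window() call) the chrome height is effectively zero, so the approximation should be close.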
The link contains a map showing the current location of the bus, and I want to scrape the map every few minutes with Python and save it as an image. I tried the following code, but the output shows only the route, not the map. Moreover, if I run it multiple times with Selenium, it opens a lot of browsers in the background. Is there any other way to do this? Thanks.
Code I tried:
from PIL import Image
from selenium import webdriver

driver = webdriver.Chrome('./chromedriver')
driver.maximize_window()  # maximize window
driver.get("https://mobi.mit.edu/default/transit/route?feed=nextbus&direction=loop&agency=mit&route=tech&_tab=map")

element = driver.find_element("xpath", "/html/body/div/div/main/div[2]/div/div[2]/div/div[3]/div/div/div/div/div/div")  # this is the map xpath
location = element.location
size = element.size
driver.save_screenshot("canvas.png")

x = location['x']
y = location['y']
width = location['x'] + size['width']
height = location['y'] + size['height']

im = Image.open('canvas.png')
im = im.crop((int(x), int(y), int(width), int(height)))
im.save('canvas_el.png')  # your file
Output: (screenshot showing only the route, not the map)
Expected: (screenshot of the map)
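Two things might help here, assuming a reasonably recent Selenium: WebElement.screenshot() can capture just the element without any PIL cropping, and headless mode keeps repeated runs from opening visible browser windows. The crop-box arithmetic is also worth factoring out in case manual cropping is still needed. A sketch; the function names are mine, and it would be called with the URL and map XPath from the question:

```python
def crop_box(location, size):
    """Build a PIL crop box (left, top, right, bottom) from element geometry."""
    left, top = location["x"], location["y"]
    return (left, top, left + size["width"], top + size["height"])


def capture_element(url, xpath, out_path="canvas_el.png"):
    # Selenium imported here so crop_box stays usable without it installed.
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    options.add_argument("--headless")  # no visible browser window
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        element = driver.find_element("xpath", xpath)
        element.screenshot(out_path)  # screenshots just this element
    finally:
        driver.quit()  # always close the browser, even on errors
```

The try/finally with driver.quit() also addresses the "lots of browsers in the background" problem, since each run tears its browser down even when scraping fails.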
I am trying to extract the title, duration, and link of all the videos that a YT channel has. I used Selenium and Python in the following way:
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
results = []
url = "https://www.youtube.com/channel/<channel name>/videos"
driver.get(url)

ht = driver.execute_script("return document.documentElement.scrollHeight;")
while True:
    prev_ht = driver.execute_script("return document.documentElement.scrollHeight;")
    driver.execute_script("window.scrollTo(0, document.documentElement.scrollHeight);")
    time.sleep(2)
    ht = driver.execute_script("return document.documentElement.scrollHeight;")
    if prev_ht == ht:
        break

links = driver.find_elements_by_xpath('//*[@class="style-scope ytd-grid-renderer"]')
for link in links:
    print()
    print(link.get_attribute("href"), link.get_attribute("text"))
When I try to get the duration of the video using the class="style-scope ytd-thumbnail-overlay-time-status-renderer" class, the driver reports that the element doesn't exist. I managed to get the other two features, though.
Your XPath locator is not correct, so please use the following:
links = driver.find_elements_by_xpath('//*[name() = "ytd-grid-video-renderer" and @class="style-scope ytd-grid-renderer"]')
Now, to get the video length for each link you found, you can do the following:
links = driver.find_elements_by_xpath('//*[name() = "ytd-grid-video-renderer" and @class="style-scope ytd-grid-renderer"]')
for link in links:
    duration = link.find_element_by_xpath('.//span[contains(@class,"time-status")]').text
    print(duration)
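Once the duration text (e.g. "12:34") is in hand, converting it to seconds makes comparing or summing durations easier. A small helper for that (the function is my addition, not part of the answer):

```python
def duration_to_seconds(text):
    """Convert a YouTube-style duration ('M:SS' or 'H:MM:SS') to seconds."""
    total = 0
    for part in text.strip().split(":"):
        total = total * 60 + int(part)  # each colon shifts by a factor of 60
    return total
```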
Good Morning!
Selenium can have trouble getting the video duration if the cursor is not in the perfect spot. Here's a GIF to show that: Gif. You can get around this by using some of Youtube's built-in Javascript functions. Here's an example that uses this:
video_dur = self.driver.execute_script(
    "return document.getElementById('movie_player').getCurrentTime()")
video_len = self.driver.execute_script(
    "return document.getElementById('movie_player').getDuration()")
video_len = int(video_len) / 60
Have a great day!
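Note that getDuration() returns seconds, so dividing by 60 as above leaves a float number of minutes; divmod gives a cleaner human-readable string. A sketch (the formatting choice is mine):

```python
def format_duration(seconds):
    """Render a duration in seconds as an 'M:SS' string."""
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes}:{secs:02d}"
```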
I have a website from which I'm using Selenium to extract information from a mouse-hover box that appears to display relevant information for each review. I was able to get the first one correctly, but no matter what I tried, I had trouble looping over the page to get the information from all of them. Does anyone have an idea what I am doing wrong? Here is my code:
import time
from lxml import html
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains

driver = webdriver.Chrome(executable_path="YOUR PATH")
driver.implicitly_wait(15)
driver.get("https://www.depositaccounts.com/banks/reviews/chase-manhattan-bank.html")

# find all
data = driver.find_element_by_xpath("//div[contains(@class, 'bankReviewContainer')]/div[1]/div[3]/div[1]")
for i in data:
    ActionChains(driver).move_to_element(i).perform()
    time.sleep(2)
    # print content of each box
    hover_data = driver.find_element_by_xpath("//*[@class='popover fade right in']").get_attribute("innerHTML")
    print(hover_data)

driver.quit()
You need to use find_elements_by_xpath instead of find_element.
Try this code:
# find all
data = driver.find_elements_by_xpath("//div[contains(@class, 'bankReviewContainer')]/div[1]/div[3]/div[1]")
for i in data:
    ActionChains(driver).move_to_element(i).perform()
    time.sleep(2)
    # print content of each box
    hover_data = driver.find_element_by_xpath("//*[@class='popover fade right in']").get_attribute("innerHTML")
    print(hover_data)
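Two optional refinements, assuming the same page structure: the fixed time.sleep(2) can become an explicit wait for the popover to be visible, and the raw innerHTML can be reduced to plain text. The helper names and the 10-second timeout are my choices:

```python
import re


def popover_text(inner_html):
    """Strip tags and collapse whitespace in a popover's innerHTML."""
    text = re.sub(r"<[^>]+>", " ", inner_html)
    return re.sub(r"\s+", " ", text).strip()


def read_popover(driver, timeout=10):
    # Selenium imported here so popover_text stays usable without it installed.
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    # Wait until the hover popover is actually visible instead of sleeping.
    popover = WebDriverWait(driver, timeout).until(
        EC.visibility_of_element_located(
            (By.XPATH, "//*[@class='popover fade right in']")))
    return popover_text(popover.get_attribute("innerHTML"))
```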
I am running a Django app with a page that has a navigation bar; when I click on the contact tab, the page automatically scrolls down to the contact section.
I am trying to test this behaviour with Selenium, but I can't figure out how to test the actual position of the page. In other words, I want to verify that the page actually scrolled down to the contact section.
Right now I am doing this:
def test_verify_position(self, browser, live_server):
    """"""
    browser.get(live_server.url)
    contact_section = browser.find_element_by_id("contact").click()
    assert ????
I think I somehow have to get the current scroll coordinates. I know I can get the location of an element using .location, but an element's reported position is the same no matter the scroll position. I tried this to debug:
def test_verify_position(self, browser, live_server):
    """"""
    browser.get(live_server.url)
    e = browser.find_element_by_xpath(
        "/html/body/section[4]/div/div[1]/h2/strong")
    location = e.location
    print(location)
    browser.find_element_by_css_selector(".nav-contact").click()
    e = browser.find_element_by_xpath("/html/body/section[4]/div/div[1]/h2/strong")
    location = e.location
    print(location)
This prints the same coordinates before and after the scroll.
I also searched the official doc https://www.selenium.dev/documentation/en/webdriver/web_element/ but couldn't find a better solution or any solution for that matter.
Anyone knows how to do this? Help is very much appreciated. Thanks in advance!
Did you want to click on it and check whether the page moved there? You can return the current scroll height and match it against the element's location. You could also get the x,y offset if you want.
height = browser.execute_script("return document.body.scrollHeight")
print(height)

nav = browser.find_element_by_css_selector(".nav-contact")
location = nav.location
print(location)

browser.execute_script("arguments[0].scrollIntoView();", nav)
nav.click()

assert height == location['y']

# what the answer was
browser.execute_script("return window.pageYOffset")
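Putting that together, the test can compare window.pageYOffset with the contact section's y coordinate. An exact match is fragile (smooth scrolling, fixed headers), so a tolerance check helps; the helper names and the tolerance value are my choices:

```python
def scrolled_to(page_y_offset, element_y, tolerance=5):
    """True if the page has scrolled to roughly the element's y position."""
    return abs(page_y_offset - element_y) <= tolerance


def check_contact_scroll(browser):
    # browser is any WebDriver; assumes the ids/classes from the question.
    browser.find_element_by_css_selector(".nav-contact").click()
    target_y = browser.find_element_by_id("contact").location["y"]
    offset = browser.execute_script("return window.pageYOffset")
    assert scrolled_to(offset, target_y)
```

If the page scrolls with an animation, an explicit wait may still be needed before reading pageYOffset.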
Edit: You can identify one of the elements that loads on screen after clicking the Contact nav and wait until it is visible.
try:
    WebDriverWait(driver, 60).until(
        EC.visibility_of_element_located((By.XPATH, '<xpath>')))
except TimeoutException:
    assert False
Need to Import:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException
I have been able to capture screenshots of some elements as PNGs, such as the one below, with the following code:
from selenium import webdriver
from PIL import Image
from io import BytesIO
from os.path import expanduser
from time import sleep
# Define url and driver
url = 'https://www.formula1.com/'
driver = webdriver.Chrome('chromedriver')
# Go to url, scroll down to right point on page and find correct element
driver.get(url)
driver.execute_script('window.scrollTo(0, 4100)')
sleep(4) # Wait a little for page to load
element = driver.find_element_by_class_name('race-list')
location = element.location
size = element.size
png = driver.get_screenshot_as_png()
driver.quit()
# Store image as bytes, crop it and save to desktop
im = Image.open(BytesIO(png))
im = im.crop((200, 150, 700, 725))
path = expanduser('~/Desktop/')
im.save(path + 'F1-info.png')
This outputs to:
Which is what I want, but not exactly how I want it. I had to manually scroll down, and since I couldn't get the element I wanted (class='race step-1 step-2 step-3'), I had to crop the image manually as well.
Any better solutions?
In case someone is wondering, this is how I managed it in the end. First I found and scrolled to the right part of the page like this:
element = browser.find_element_by_css_selector('.race.step-1.step-2.step-3')
browser.execute_script('arguments[0].scrollIntoView()', element)
browser.execute_script('window.scrollBy(0, -80)')
and then cropped the image
im = im.crop((200, 80, 700, 560))
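One caveat worth noting: on high-DPI displays the screenshot can be larger than the CSS-pixel values that .location and .size report, in which case the crop box needs scaling by window.devicePixelRatio. A sketch of that adjustment (my addition, not part of the original solution):

```python
def scale_box(box, ratio):
    """Scale a (left, top, right, bottom) crop box by the device pixel ratio."""
    return tuple(int(v * ratio) for v in box)


def crop_for_display(browser, im, box):
    # browser is the live WebDriver; im is the PIL screenshot image.
    ratio = browser.execute_script("return window.devicePixelRatio")
    return im.crop(scale_box(box, ratio))
```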