I'm looking to scrape a label from an SVG that only appears on mouse hover.
I'm working with this link for the data behind the [+] expand button to the right of each table row. When you press [+] expand, an SVG chart pops up whose <g> elements contain <path> elements (the bars). When you hover over each bar, a label called "Capacity Impact" appears with a value for that bar. These values are the values I want to scrape.
So far my code successfully opens each of the [+] expand buttons and identifies the polygons, but I can't get to the labels using either XPath or CSS selectors. See the code below.
from selenium.webdriver import ActionChains
from selenium.webdriver.common.by import By

# driver and url are set up earlier in the script
action = ActionChains(driver)

driver.get(url)

table_button_xpath = "//table[@class='data-view-table redispatching dataTable']//tr//td[@class='button-column']//a[@class='openIcon pre-table-button operation-detail-expand small-button ui-button-light ui-button ui-widget ui-corner-all ui-button-text-only']"

driver.find_element(By.ID, "close-button").click()
driver.find_element(By.ID, "cookieconsent-button").click()

# open up all the "+" buttons
table_buttons = driver.find_elements(By.XPATH, table_button_xpath)
for i in range(1, 10):
    driver.find_element(By.XPATH, table_button_xpath).click()

# find all the polygons
polygons = driver.find_elements(By.TAG_NAME, 'path')
label_xpath = "//*[name()='svg']//*[name()='g' and @id='ballons']//*[name()='g']//*[name()='tspan']"

for polygon in polygons:
    action.move_to_element(polygon)
    labels_by_xpath = driver.find_elements(By.XPATH, label_xpath)
    labels_by_css_selector = driver.find_elements(By.CSS_SELECTOR, "svg>#ballons>g>text>tspan")
Both labels_by_xpath and labels_by_css_selector return a list of 0 elements. I've tried many versions of both the XPath and CSS selector approaches, along with using WebDriverWait, but I can't get it to return the capacity impact values.
To be clear, the number I need to scrape is the "50" text in the <tspan> tag.
Any help is appreciated! Thank you,
Sophie
The problem is with your locator.
Here is an updated locator that selects the desired element.
CSS Selector :
svg>[id^='balloons']>g:nth-child(2)>text:nth-child(2)>tspan
Try this to get the Capacity Impact elements (e.g. the 50):
x = driver.find_elements(By.CSS_SELECTOR, "svg>[id^='balloons']>g>text>tspan")
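Note that the balloon label is only shown while the pointer is hovering over a bar, so you may still need to perform the hover before reading it. A minimal sketch of how that could look, assuming the rest of your script (opening the page, dismissing the cookie banner, expanding the rows) stays as it is:

from selenium.webdriver import ActionChains
from selenium.webdriver.common.by import By

actions = ActionChains(driver)
for bar in driver.find_elements(By.TAG_NAME, 'path'):
    # hovering is what makes the "Capacity Impact" balloon render for this bar
    actions.move_to_element(bar).perform()
    for label in driver.find_elements(By.CSS_SELECTOR, "svg>[id^='balloons']>g>text>tspan"):
        print(label.text)  # e.g. "50"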
Good day!
I'm facing a seemingly simple problem, but it has been holding me up for a while, so I'm asking for your help.
I work with Selenium in Python and need to click through about 20 items on a Google search results page for an arbitrary query.
I'll give an example of the elements below; the bottom line is that once the elements are expanded, Google generates new elements of the same kind.
Problem:
I cannot click on the element. I need to click on the existing elements and then on the new, generated elements in this block (the blocks that expand when clicked).
I tried to click via XPath, having collected all the elements:
xpath = '//*[@id="qmCCY_adG4Sj3QP025p4__16"]/div/div/div[1]/div[4]'
all_elements = driver.find_elements(By.XPATH, value=xpath)
for element in all_elements:
    element.click()
    sleep(2)
Important note!
The id in that XPath keeps changing and is regenerated on Google's side.
I also tried to click by class name:
class="r21Kzd"
and via this CSS selector:
#qmCCY_adG4Sj3QP025p4__16 > div > div > div > div.wWOJcd > div.r21Kzd
Errors
This is what I get when I try to click using the XPath:
Message: no such element: Unable to locate element: {"method":"xpath","selector"://*[@id="vU-CY7u3C8PIrgTuuJH4CQ_9"]/div/div[1]/div[4]}
In the other cases the story is much the same: the driver does not find the element and cannot click on it. Below is a screenshot of the tag I need to click.
(screenshot: the tags on the Google search results page)
Thanks for the help!
In case iDjcJe IX9Lgd wwB5gf are fixed class name values of that element, all you need is to use CSS_SELECTOR instead of CLASS_NAME, with the correct CSS selector syntax.
So, instead of driver.find_element(By.CLASS_NAME, "iDjcJe IX9Lgd wwB5gf") try using this:
driver.find_element(By.CSS_SELECTOR, ".iDjcJe.IX9Lgd.wwB5gf")
(dots before each class name, no spaces between them)
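As a rough sketch of how that could look in the click loop (the class names are the ones from your question; everything else is an assumption about the page):

from time import sleep
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.google.com/search?q=selenium")  # hypothetical query, use your own

# one CSS selector: a dot before each class name, no spaces between them
for block in driver.find_elements(By.CSS_SELECTOR, ".iDjcJe.IX9Lgd.wwB5gf"):
    block.click()
    sleep(2)  # give Google time to generate the new blocks

If Google regenerates the blocks after each click, you may need to re-run find_elements inside the loop to avoid stale element references.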
I am trying to scrape from a webpage that changes its class names and other element attributes dynamically (there is no pattern in the class names). I use code in the following format:
element_1 = driver.find_elements(By.XPATH, '//*[contains(@class, "DkEaL")]')
a snippet of the webpage element is:
<button aria-label="cats-over-fence" jsaction="pane.rating.moreCats" jstcache="98" class="DkEaL" jsan="7.DkEaL,0.aria-label,0.jsaction">35 reviews</button>
Is there any way to detect this element as it changes dynamically without manually inspecting it?
If your webelement always contains the word "reviews" and it is the first element with this property in the HTML, then you can target it with the following
button = driver.find_element(By.XPATH, '//button[contains(., "reviews")]')
If it is not the first button containing "reviews", then you can do the following
position_of_your_button = 5 # for example
buttons = driver.find_elements(By.XPATH, '//button[contains(., "reviews")]')
button = buttons[position_of_your_button]
If this is not the case, then you have to rely on the position of the web element in the HTML; to help with that you should give us the full HTML code or simply the URL of the webpage.
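For example, here is a hedged sketch that combines the text-based locator with an explicit wait; the page URL is not given in the question, so it is left as a placeholder:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("...")  # the page from the question (URL not provided)

# wait for the button whose visible text contains "reviews", whatever its class is today
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.XPATH, '//button[contains(., "reviews")]'))
)
print(button.text)  # e.g. "35 reviews"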
On the e-commerce webpage (https://www.jooraccess.com/r/products?token=feba69103f6c9789270a1412954cf250) the color name of a product is displayed when I hover over its swatch. I was able to determine the new line of HTML that appears on hover, but I don't know how to grab its text ('NAVY').
<div class="ui top left popup transition visible Tooltip_Tooltip__M0LJL Tooltip_black__heZoQ" style="position: absolute; inset: auto auto -7494px 378px;">NAVY</div>
driver.get("https://www.jooraccess.com/r/products?token=feba69103f6c9789270a1412954cf250")
elements = WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//div[#class='Swatch_swatch__2X1CY']")))
for el in elements:
ActionChains(driver).move_to_element(el).perform()
mouseover = WebDriverWait(driver, 30).until(EC.visibility_of_element_located((By.XPATH, "//div[#class='ui top left popup transition visible Tooltip_Tooltip__M0LJL Tooltip_black__heZoQ'")))
print(mouseover)
I have checked your code. Your XPath expression for the tooltip element seems wrong (it is missing the closing ]).
I have changed that; also, to get the text value you need to print the element's .text:
driver.get("https://www.jooraccess.com/r/products?token=feba69103f6c9789270a1412954cf250")
elements = WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//div[#class='Swatch_swatch__2X1CY']")))
for el in elements:
ActionChains(driver).move_to_element(el).perform()
mouseover = WebDriverWait(driver, 30).until(EC.visibility_of_element_located((By.XPATH, "//div[#class='ui top left popup transition visible Tooltip_Tooltip__M0LJL Tooltip_black__heZoQ']")))
print(mouseover.text)
This prints the expected output on my terminal.
For some reason I couldn't get your site to work with the platform I am using, but I managed to create an example. I would recommend using the ActionChains API and using it to move to the element in question.
ActionChains(self.driver).move_to_element(self.driver.find_element(By.XPATH, "//a[#class='dropdown-toggle']")).perform()
This is for the site I am using, but if you change the XPath to your own element it should work. Once the driver has done this, you can find the element that is revealed by hovering. You should be able to just put this in a for loop and you are good to go.
I made a runnable example here
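Along those lines, here is a hedged sketch of such a loop for the page in the question, reusing the swatch locator and the tooltip class from the question's own code:

from selenium import webdriver
from selenium.webdriver import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://www.jooraccess.com/r/products?token=feba69103f6c9789270a1412954cf250")

swatches = WebDriverWait(driver, 20).until(
    EC.visibility_of_all_elements_located((By.XPATH, "//div[@class='Swatch_swatch__2X1CY']"))
)
for swatch in swatches:
    # hovering makes the color tooltip appear; read it while it is visible
    ActionChains(driver).move_to_element(swatch).perform()
    tooltip = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, "div.Tooltip_black__heZoQ"))
    )
    print(tooltip.text)  # e.g. NAVY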
Here is the solution I came to before I saw the answers here:
elements = WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//div[@class='Swatch_swatch__2X1CY']")))
for el in elements:
    ActionChains(driver).move_to_element(el).perform()
    page = BeautifulSoup(driver.page_source, features='html.parser')
    print(page.find("div", class_="ui top left popup transition visible Tooltip_Tooltip__M0LJL Tooltip_black__heZoQ").text)
Hi there! I am automating AliExpress with Selenium and Python so that users can buy products by voice command and purchase any type of product. The problem now is color and size selection. I have tried XPath, but every element has a different XPath for the same color and size. I want a selector that works for at least four colors and four sizes. For clarity I have included the image, code, and link to the page. If anyone has a solution, please mention it. Thanks in advance.
Code:
# for selecting color 2 of the third item, but different for every element
elif '2' in query:
    try:
        color_picker2 = driver.find_element_by_xpath('//*[@id="root"]/div/div[2]/div/div[2]/div[7]/div/div[1]/ul/li[2]/div')
        color_picker2.click()
    except:
        color_picker2 = driver.find_element_by_xpath('//*[@id="root"]/div/div[2]/div/div[2]/div[6]/div/div/ul/li[2]/div')
        color_picker2.click()
Link to the page:
https://www.aliexpress.com/item/1005001621523593.html?spm=a2g0o.productlist.0.0.45157741uKKhLZ&algo_pvid=bd6c858e-759b-4c66-a59b-2b1724286123&algo_exp_id=bd6c858e-759b-4c66-a59b-2b1724286123-0
The image (marked) with the required details is attached.
To select the color you can use a CSS selector, changing the index based on which option you want to select. I'm selecting the image, but I think targeting only the div with class sku-property-image should be enough:
First Model CSS Selector:
ul[class='sku-property-list'] li:nth-child(1) div[class='sku-property-image'] img
If you want to select the second one, just change the 1 to a 2:
ul[class='sku-property-list'] li:nth-child(2) div[class='sku-property-image'] img
For the size, things are a bit more complex because size and country share the same selector, so in this case you have to start from the parent element and hardcode the child you are looking for. In the selectors below, div:nth-child(2) indicates the size section, while li:nth-child(1) selects which size: 1 = S, 2 = M, etc. Example:
First SIZE S CSS Selector:
div[class='sku-wrap'] div:nth-child(2) ul[class='sku-property-list'] li:nth-child(1) div[class='sku-property-text'] span
Second SIZE M CSS Selector:
div[class='sku-wrap'] div:nth-child(2) ul[class='sku-property-list'] li:nth-child(2) div[class='sku-property-text'] span
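A hedged usage sketch that puts both selectors into Selenium; the URL is the product page linked in the question, and the nth-child indices assume the layout described above:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.aliexpress.com/item/1005001621523593.html")

# second color: change li:nth-child(2) to pick a different swatch
driver.find_element(By.CSS_SELECTOR,
    "ul[class='sku-property-list'] li:nth-child(2) div[class='sku-property-image'] img").click()

# size M: div:nth-child(2) is the size block, li:nth-child(2) is the second size
driver.find_element(By.CSS_SELECTOR,
    "div[class='sku-wrap'] div:nth-child(2) ul[class='sku-property-list'] li:nth-child(2) div[class='sku-property-text'] span").click()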
The color buttons in the webpage seem to have a class named 'sku-property-image', and the sizes have 'sku-property-text'. Try find_elements_by_class_name (example: "Selenium finding elements by class name in Python"). Then read what's inside each element and click() conditionally, as sketched below.
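A minimal sketch of that approach, assuming the sku-property-text class from above and matching the option by its visible text (the wanted size is a hypothetical value coming from the voice command):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.aliexpress.com/item/1005001621523593.html")

wanted_size = "M"  # hypothetical: whatever size the voice command asked for

# read what's inside each size element and click the one that matches
for option in driver.find_elements(By.CLASS_NAME, "sku-property-text"):
    if option.text.strip() == wanted_size:
        option.click()
        break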
I'm trying to expand some collapsible content by taking all the elements that need to be expanded, and then clicking on them. Then once they're open, scrape the data shown. So far I'm grabbing a list of elements by their xpath, with this:
clicks = driver.find_elements_by_xpath("//div[contains(@class, 'collapsableContent') and contains(@class, 'empty')]")
and I've tried iterating with a simple for loop:
for item in clicks:
    item.click()
but that doesn't seem to work. Any suggestions on where to look?
The specific page I'm trying to get this from is: https://sports.betway.com/en/sports/sct/esports/cs-go
Here is the code that you should use to open all the divs which have the 'collapsableContent empty' class.
# click on the close button in the cookies policy banner (as this is the element which would otherwise intercept the clicks)
driver.find_element_by_css_selector(".messageBoxCloseButton.icon-cross").click()
# get the headers of all the collapsed divs
links = driver.find_elements_by_xpath("//div[@class='collapsableContent empty']/preceding-sibling::div[@class='collapsableHeader']")
# click on each of those headers
for link in links:
    link.location_once_scrolled_into_view  # accessing this property scrolls the element into view
    link.click()