So I am trying to find elements by CSS selector using the class name. One of my code snippets works while the other does not.
Screenshot of the html code that works:
The code for the above screenshot, that works:
driver.get(url)
print('a') # to check if code has run till here
disabled_next = driver.find_elements(By.CSS_SELECTOR, '.page-link.next.disabled')
print(disabled_next)
Screenshot of the html code that does NOT work:
The code for the above screenshot, that does not work:
driver.get(url)
print('a')
enabled_next = driver.find_elements(By.CSS_SELECTOR, '.page-link.next')
print(enabled_next)
Just trying to understand why the second one does not work. It does not even print 'a'. The error I am getting is along the lines of this:
selenium.common.exceptions.WebDriverException: Message: unknown error: cannot determine loading status
from unknown error: unexpected command response
(Session info: chrome=103.0.5060.66)
I am using chromedriver 103 with Chrome version 103. I know there have been some issues relating to it. Could that be why? Or have I used the wrong format for '.page-link.next'?
Also, a semi-related question: how do I grab the text after <a class="text here">? I would like to just grab "text here". So in my examples above I would love to grab "page-link next disabled" or "page-link next".
Thank you in advance!
This is not related to your code at all; your code should be fine. The error message pertains to a mismatch between the chromedriver version and the Chrome browser you are using. Double check the chromedriver and Chrome browser versions you are using for testing.
cannot determine loading status
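If it helps to double check from within the script, Selenium exposes the session capabilities, which usually include both versions. A minimal sketch; the exact capability keys are an assumption and can vary between driver and Selenium versions:
caps = driver.capabilities
print(caps.get('browserVersion'))  # Chrome version reported by the browser
print(caps.get('chrome', {}).get('chromedriverVersion'))  # chromedriver version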
I have the following code in Python/Selenium:
try:
main = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.ID, "main"))
)
print(main.text)
except:
driver.quit()
I am expecting a print statement, but the try block seems to fail. I have all the right packages imported and whatnot, but no dice. I am following along with a tutorial and everything has been working fine up to this point. I am happy to post the file so far if needed, but I am just wondering why it fails on every pass. Any input is appreciated! Thanks!
EDIT: So I'm just trying to fetch/print the main content of the page given by:
<main id="main" class="site-main" role="main">...</main>
I have basically zero knowledge of HTML and I am just trying to follow along with a tutorial, so I have no idea why it's throwing errors.
Have you tried this:
print(main.get_attribute("innerHTML"))
or
print(main.get_attribute("outerHTML"))
I need to open the Instagram followers page with Python. I have already tried several ways to do it, but with no results... Can someone help me, please?
First I tried to do it this way:
# Go to the Group
driver.get('https://www.instagram.com/biblio_com/')
time.sleep(5)
driver.find_element_by_xpath('//a[@href="/biblio_com/followers/"]').click()
But it gives an error:
selenium.common.exceptions.ElementNotInteractableException: Message: Element <a class=" _81NM2" href="/biblio_com/followers/"> could not be scrolled into view
Then I tried the following:
WebDriverWait(driver, 100).until(EC.element_to_be_clickable((By.XPATH, '//a[@href="/biblio_com/followers/"]'))).click()
But it gives the same error ... (((
Another way I was trying to solve this problem was:
followers_link = driver.find_element_by_xpath('//a[@href="/biblio_com/followers/"]')
ActionChains(driver).move_to_element(followers_link).click(followers_link).perform()
It gives no error. But it also gives no result ...
Can someone help me with this?
driver.find_element_by_xpath('//a[@href="/biblio_com/followers/"]').click()
I tried this with chromedriver and it works for me. What browser are you using?
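Not part of the answer above, but a common workaround for "could not be scrolled into view" is to scroll the element into view with JavaScript before clicking. A minimal sketch, reusing the XPath from the question:
followers_link = driver.find_element_by_xpath('//a[@href="/biblio_com/followers/"]')
driver.execute_script("arguments[0].scrollIntoView(true);", followers_link)
time.sleep(1)  # optional small pause so any lazy-loaded content settles
followers_link.click()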
I am trying to scrape the target website for product_links. The program should open the required URL in the browser and scrape all the links with a particular class name. But for some reason, I am getting a NoSuchElementException for this piece of code
links = driver.find_elements_by_class_name("styles__StyledTitleLink-mkgs8k-5")
for link in links:
self.driver.implicitly_wait(15)
product_links.append(link.find_element_by_css_selector('a').get_attribute('href'))
I tried printing out the text of each link with link.text in the for loop. The code is actually selecting the required elements, but for some reason it is not able to extract the href URL from each link. I am not sure what I am doing wrong.
This is the entire error message
NoSuchElementException: Message: no such element: Unable to locate
element: {"method":"css selector","selector":"a"} (Session info:
chrome=83.0.4103.106)
The error suggests there is no descendant element matching the CSS selector 'a' inside those elements, so you need to try other locators to identify them. Try an XPath such as //a[contains(text(),'text of that element')].
You are looking for a class name generated by a CSS-in-JS builder; note the random string at the end of the class name. These classes won't be found on every page.
If you want to scrape them, find a different, generic class, or find all elements whose class contains the substring "StyledTitleLink" (see the sketch below).
Here's how to do it with JQuery
You should try and find a different solution to your problem
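To illustrate the substring suggestion above, a minimal sketch using a CSS attribute selector. Whether the styled element is itself the <a> is an assumption about the page, so the href check below is a guess:
# Match any element whose class contains the stable part of the generated name
links = driver.find_elements_by_css_selector("[class*='StyledTitleLink']")
product_links = []
for link in links:
    href = link.get_attribute('href')  # works if the element is the <a> itself
    if href:
        product_links.append(href)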
Here is the screenshot of the HTML code I am struggling with. I want to click on "Smart Watches" in the left nav, and I am using the following code to click on it:
driver.implicitly_wait(30)
driver.find_element_by_link_text('Smart Watches').click()
But I am getting the following error, and I am clueless as to why it just can't find it on the page:
selenium.common.exceptions.NoSuchElementException: Message: no such
element: Unable to locate element: {"method":"link
text","selector":"Smart Watches"} (Session info:
chrome=60.0.3112.113) (Driver info: chromedriver=2.29.461591
(62ebf098771772160f391d75e589dc567915b233),platform=Windows NT
6.2.9200 x86_64)
I have also tried an explicit wait with expected conditions as follows:
wait = WebDriverWait(driver, 20)
link = wait.until(expected_conditions.presence_of_element_located((By.LINK_TEXT,'"Smart Watches"')))
link.click()
But even that gives me a TimeoutException.
Here is the link to the page where I have been stuck since this morning:
https://www.kogan.com/au/shop/phones/
I am very new to coding; any help would be appreciated! I just want to know why find_element_by_link_text is not working here; it looks weird to me!
Thanks in advance
The problem is that when you use find_element_by_link_text(), the text must be an exact match to the text contained in the link. In your HTML picture, you can see "Smart Watches", but what you aren't seeing is that the SPAN just below, still inside the A, is collapsed. Most likely, if you expand it, you will see additional text that you must include if you are going to use find_element_by_link_text().
Another option is find_element_by_partial_link_text(), which behaves more like a contains() instead of an equals(). Depending on the page, it may find too many matches. You would have to try it and see if it works.
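A minimal sketch of that partial-match option, assuming the visible link text still contains "Smart Watches":
driver.find_element_by_partial_link_text('Smart Watches').click()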
Yet another option is using an XPath. There are a lot of different ways to create an XPath for this depending on exactly what you want.
This is the most general and thus most likely to find unwanted links, but it may work. It's pretty much the same as find_element_by_partial_link_text():
//a[contains(.,'Smart Watches')]
Other options include
//a[starts-with(.,'Smart Watches')]
//li[@data-filter-facet='smart-watches']/a[contains(.,'Smart Watches')]
//li[@data-filter-facet='smart-watches']/a[starts-with(.,'Smart Watches')]
... and so on...
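For example, plugging one of the more specific options into Python. The data-filter-facet value comes from the options above and is assumed to match the actual page:
link = driver.find_element_by_xpath("//li[@data-filter-facet='smart-watches']/a[contains(.,'Smart Watches')]")
link.click()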
You can try this way...
Accessing the text: driver.find_element_by_xpath("//a[contains(text(), 'Smart Watches')]").click()
I don't know why it does not work, but partial link text does. Please see my Java code for the same:
WebDriver driver=new FirefoxDriver();
driver.get("https://www.kogan.com/au/shop/phones/");
WebElement watch=driver.findElement(By.partialLinkText("Smart Watch"));
WebDriverWait waitElement=new WebDriverWait(driver, 30);
waitElement.until(ExpectedConditions.elementToBeClickable(watch));
watch.click();
You need to add double quotation marks, as it is in the html code:
driver.find_element_by_link_text('"Smart Watches"').click()
Most of the time it might happen that the link takes some more time to load after your page has loaded. Rather than using an implicit wait, use an explicit wait.
wait = WebDriverWait(driver, 30)
link = wait.until(expected_conditions.presence_of_element_located((By.LINK_TEXT,"Smart Watches")))
link.click()
It could also be the case that the link is inside another frame; in that case, you will have to switch to that frame.
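If the link does turn out to be inside a frame, a minimal sketch; the frame locator below is a placeholder, not taken from the page:
frame = driver.find_element_by_tag_name('iframe')  # locate the actual frame here
driver.switch_to.frame(frame)
driver.find_element_by_link_text('Smart Watches').click()
driver.switch_to.default_content()  # switch back to the main document afterwards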
Here is the link I'm trying to click:
Add Keywords
I tried a few options (listed below), but they didn't work; any ideas?
self.br.find_element_by_xpath("//*[#id='btnAddKeywords']").click()
self.br.execute_script("OpenAddKeywords();return false;")
This is the error I've got for execute_script:
Message: u'Error Message => \'Can\'t find variable: OpenAddKeywords\'\n caused by Request =>
And this is the one that I've got for xpath:
Message: u'Error Message => \'Unable to find element with xpath \'//*[#id=\'btnAddKeywords\']\'\'\n caused by Request =>
As I mentioned in my own question here, the problem can be solved by means of the ActionChains class; a brief example is here:
el = driver.find_element_by_id("someid")
webdriver.ActionChains(driver).move_to_element(el).click(el).perform()
The main problem is that in some cases, especially when you have some JavaScript code in your page, the DOM changes and the element you found before becomes stale. ActionChains keeps it alive to perform actions on.
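Combining that with an explicit wait, so the element is located right before the action, might look like this. A sketch only; it assumes the id from the question actually exists once the page has finished loading:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

el = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "btnAddKeywords"))
)
webdriver.ActionChains(driver).move_to_element(el).click(el).perform()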
You can try to use an XPath like below. It worked for me; I used it in my last project.
driver.find_element_by_xpath("xpath").click()
Please try it...