I'm trying to scroll down a specific div on a webpage (the ticker box on Facebook) using Selenium (with Python).
I can find the element, but when I try to scroll it down using:
header = driver.find_element_by_class_name("tickerActivityStories")
driver.execute_script('arguments[0].scrollTop = arguments[0].scrollHeight', header)
Nothing happens.
I've found a lot of answers, but most are for Selenium in Java, so I would like to know if it's possible to do this from Python.
I've found a lot of answers, but most are for Selenium in Java
There is no difference: Selenium has the same API in Java and Python, you just need to find the equivalent function in the Python bindings.
First of all, check whether your JavaScript works in the browser (if not, try scrolling the parent element).
You can also try:
from selenium.webdriver.common.keys import Keys
header.send_keys(Keys.PAGE_DOWN)
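If that doesn't help either, here is a sketch of two other fallbacks, assuming the same locator from the question: scroll the element into view first, or scroll its parent if a wrapper div owns the scrollbar:
header = driver.find_element_by_class_name("tickerActivityStories")
# Make sure the element is on screen before trying to scroll it.
driver.execute_script("arguments[0].scrollIntoView(true);", header)
# If a wrapper div owns the scrollbar, scroll the parent instead.
driver.execute_script("arguments[0].parentNode.scrollTop = arguments[0].parentNode.scrollHeight;", header)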
Apologies if this question has been answered before. I want to click on an area of plain text in a browser using Selenium WebDriver in Python.
The code I'm using is:
element_plainText = driver.find_element(By.XPATH, '//*[contains(@class, "WgFkxc")]')
element_plainText.click()
However, this raises an ElementNotInteractableException. Can anyone help me out with this?
Selenium is trying to be helpful here by telling you why it won't click on the element; ElementNotInteractableException means it thinks that what you're trying to click on isn't clickable.
This usually happens because either:
The element isn't actually visible, or is disabled
Another element is "overlapping" the element, possibly invisibly
You're clicking something Selenium thinks won't do anything, like plain text
There are two things I'd try to get around this. First, Actions: Selenium has an Action API you can use to cause specific UI events to occur. I'd suggest finding the coordinates of the text, then making Selenium click those coordinates instead of telling it to click the element. Read more about that API here.
Second, try clicking it with JavaScript, using a JavaScript executor. That can often give you the same outcome as using Selenium directly, without it being so "helpful".
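A minimal sketch of both approaches, reusing the locator from the question (note find_element rather than find_elements, so a single element comes back):
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
element = driver.find_element(By.XPATH, '//*[contains(@class, "WgFkxc")]')
# Option 1: move the mouse to the element's location and click there.
ActionChains(driver).move_to_element(element).click().perform()
# Option 2: bypass Selenium's interactability checks with a JavaScript click.
driver.execute_script("arguments[0].click();", element)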
I am trying to get a value from a webpage using Selenium with Python, but I cannot work out how to do it. The value I want to capture is in the red-marked area of the picture below. Please help me figure this out. Thank you very much!
Please click here to check the picture
There are a few ways (by CSS, by XPath, ...); I recommend reading this page: http://selenium-python.readthedocs.io/locating-elements.html
You also need some understanding of HTML, CSS, and the DOM, but the simplest way to get the value you want is by XPath (though it's the slowest).
Open your web browser, say Chrome, right-click the element you want, and choose Inspect. It will open the developer tools with the HTML line already selected; right-click it again -> Copy -> Copy XPath.
from selenium import webdriver

# Point this at your local chromedriver binary.
driver = webdriver.Chrome('/path/to/chromedriver')
driver.get("http://www.web.com")
# Paste in the XPath you copied from the developer tools.
print(driver.find_element_by_xpath(the_xpath_you_copied_before_from_dev_tools).text)
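If the value is filled in by JavaScript after the page loads, reading it too early returns an empty string. A sketch with an explicit wait (the XPath is whatever you copied):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
value = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.XPATH, the_xpath_you_copied_before_from_dev_tools))
).text
print(value)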
I am using Selenium WebDriver to do automated testing of a website. I have been successful in clicking through numerous menus and links to a point.
At one point the website I am working with generates links that look like this:
<U onclick="HourglassSubmitItem(document.all('PageName').value, '00000001Responsibility Code')">Responsibility Code</U>
I am trying to use the .click functionality of the webdriver to click this link with no success.
Using this:
page.find_element_by_xpath("//u[contains(text(),'Responsibility Code')]")
successfully finds the U tag above, but when I add .click() to the end of this XPath, the click is not performed, and no error is generated either. So, my question is: can Selenium be used to simulate clicks on an HTML tag that is NOT an anchor (<a>) tag? If so, how?
I will also say that I do not have control over the page I am working with, so changing the <U> to an <a> is not possible.
I would appreciate any guidance the Community could provide.
Thank you for your help,
Chris
Sometimes using JavaScript could solve the "clicking" issue:
element = page.find_element_by_xpath("//u[contains(text(),'Responsibility Code')]")
page.execute_script('arguments[0].click();', element)
You can also do this with JavaScript in Java:
WebElement element = driver.findElement(By.xpath("//u[contains(text(),'Responsibility Code')]"));
JavascriptExecutor executor = (JavascriptExecutor) driver;
executor.executeScript("arguments[0].click();", element);
Thank you for answering my previous question, but as one problem is solved, another is found, apparently.
Interacting with the flash game itself is now the problem. I have researched how to do it in Selenium, but it can't be done. I've seen FlashSelenium, Sikuli, and AutoIt.
I can only find documentation for FlashSelenium in Java. It's easier for me to use AutoIt than Sikuli, as Sikuli would mean learning Jython to write the kind of script I want; I'm not against learning it, I'm just trying to finish this as fast as possible. As for AutoIt, the only problem is that I don't understand how to use it with Selenium.
from selenium import webdriver
import autoit
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("http://na58.evony.com/s.html?loginid=747970653D74727947616D65&adv=index")
driver.maximize_window()
assert "Evony - Free forever " in driver.title
So far I have this, and it's doing what it's supposed to do, which is create a new account using that driver.get. But when I reach the page it is all Flash, and I cannot interact with anything on it, so I have to use AutoIt. However, I don't know how to make AutoIt "pick up" from where Selenium left off. I want it to interact with a button on the webpage, and from a previous post on Stack Overflow I gather I can use an (x, y) pair to specify the location, but unfortunately that post didn't explain beyond that. Any and all information would be great, thanks.
Yes, you can use any number of scraping libraries (Scrapy and Beautiful Soup are both easy to use and very powerful). Personally, though, I like Selenium and its Python bindings because they're the most flexible. Your final script would look something like this:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("http://xx.yy.zz")
# Click the "New comer, click here to play!" button
elem = driver.find_element_by_partial_link_text("click here to play")
elem.send_keys(Keys.RETURN)
Can you post what the source of the page looks like (maybe using a Pastebin)?
Edit: updated to show you how to click the "Click here to play" button.
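As for handing off from Selenium to AutoIt at an (x, y) position, here is a rough sketch assuming the pyautoit bindings; the locator for the flash object and the window-chrome offset are guesses you would need to adjust for your setup:
import autoit
# Hypothetical locator for the embedded flash element.
flash = driver.find_element_by_tag_name("object")
loc = flash.location
# element.location is in page coordinates, while AutoIt clicks in screen
# coordinates, so add an offset for the browser's title bar and toolbars.
CHROME_OFFSET_Y = 80  # assumed; measure it for your setup
autoit.mouse_click("left", loc["x"] + 50, loc["y"] + CHROME_OFFSET_Y + 50, 1)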
I would like to web-scrape the HTML source code of JavaScript pages that I can't access without selecting one option in a drop-down list and then 'clicking' on links. Although it isn't really a JavaScript page, a simple example would be this:
Web-scrape the main Wikipedia pages in all languages available in the drop-down list at the bottom of this URL: http://www.wikipedia.org/
To do so, I need to select one language, English for example, and then 'click' the 'Main Page' link on the left of the new URL (http://en.wikipedia.org/wiki/Special:Search?search=&go=Go).
After this step, I would scrape the HTML source code of the Wikipedia main page in English.
Is there any way to do this using R? I have already tried the RCurl and XML packages, but they do not work well with JavaScript pages.
If it is not possible with R, could anyone tell me how to do this with python?
It's possible to do this using python with the selenium package. There are some useful examples here. I found it helpful to install Firebug so that I could identify elements on the page. There is also a Selenium Firefox plugin with an interactive window that can help too.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Firefox()
driver.get("http://website.aspx")

# Locate the input field by its id, type a value, and submit with Enter.
elem = driver.find_element_by_id("ctl00_ctl00")
elem.send_keys('15')
elem.send_keys(Keys.RETURN)
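Applied to the Wikipedia example from the question, a sketch might look like the following; the "searchLanguage" and "search" names for the drop-down and the search box are assumptions about wikipedia.org's markup, while the 'Main Page' link text comes from the question itself:
from selenium import webdriver
from selenium.webdriver.support.ui import Select

driver = webdriver.Firefox()
driver.get("http://www.wikipedia.org/")
# Choose English in the language drop-down (assumed id "searchLanguage")
# and submit the empty search form, which lands on the English site.
Select(driver.find_element_by_id("searchLanguage")).select_by_value("en")
driver.find_element_by_name("search").submit()
# Follow the 'Main Page' link and grab the rendered HTML.
driver.find_element_by_link_text("Main Page").click()
html = driver.page_source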
Take a look at the RCurl and XML packages for posting form information to the website and then processing the data afterwards. RCurl is pretty cool, but you might have an issue with the HTML parsing: if the page isn't standards-compliant, the XML package may not want to play nice.
If you are interested in learning Python, however, Celenius' example above coupled with BeautifulSoup would be what you need.
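A tiny sketch of that pairing (assuming the bs4 package): let Selenium render the page, then hand the HTML to BeautifulSoup for parsing:
from bs4 import BeautifulSoup
# driver is the Selenium webdriver from the example above.
soup = BeautifulSoup(driver.page_source, "html.parser")
print(soup.title.string)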