I'm new to IT and QA automation, and I'm having trouble using the Selenium library and SeleniumLibrary in Robot Framework. As I understand it, SeleniumLibrary exposes its methods as keywords in Robot Framework (RF).
Suppose in a *.py file I can use Selenium to open a browser:
driver = webdriver.Chrome()
driver.get("https://google.com")
...
and in a *.robot file I open a browser with SeleniumLibrary like:
open browser    https://google.com
While researching solutions, I found many Selenium tutorials that were easy to understand and apply, but in my project they have been extending SeleniumLibrary like this:
driver = SeleniumLibrary
driver.open_browser("https://google.com")
which is the equivalent of driver.get("https://google.com") in a plain *.py file (with from selenium import webdriver).
I find the source code of SeleniumLibrary hard to read. For example, in plain Selenium I would use find_element(By.ID, "abc") (or By.CSS_SELECTOR, etc.) in a *.py file, but in SeleniumLibrary all I could find was:
def find_element(
    self,
    locator: str,
    tag: Optional[str] = None,
    required: bool = True,
    parent: WebElement = None,
)
I don't know how to write code the way I did before with selenium.webdriver. Can anyone help me clear this up? How can I use driver.find_element (from SeleniumLibrary) in a *.py file the same way I would use driver.find_element_by_css_selector (from selenium.webdriver)?
There are various strategies for locating elements on a page; read about them in the documentation here.
The point is that you want to do something with the element you locate. In Python, you'd typically do it in two steps: first find the element, then click on it (for example); see the Python sketch after the list below. In Robot Framework, you do it in one step, like so:
Click Element id:foo
That's all. Two steps are taken care of here:
finding the element in the DOM
clicking on the element
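For comparison, the same two steps in plain Python Selenium might look like this (a minimal sketch; the URL and the id locator are just examples):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")

# Step 1: find the element in the DOM
element = driver.find_element(By.ID, "foo")

# Step 2: click on the element
element.click()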
Robot Framework is a bit more high-level than Python, so many tasks in it will require fewer lines of code. I think this is what you're asking about and are not really used to yet. I recommend using RF on a small project to learn it. You always have documentation ready on the Internet.
Related
When I go to the following website: https://www.bvl.com.pe/mercado/movimientos-diarios and use Selenium's page_source attribute or urllib.request.urlopen, what I get is a different string than if I open the page in Google Chrome, choose Inspect from the context menu, and copy the entire DOM.
From my research, I understand it has to do with Javascript running on the webpage and what I am getting is the base HTML.
What code can I use (Python) to get the same information?
That behavior is entirely browser-dependent. The browser takes the raw HTML, processes it, runs JavaScript (usually), styles it with CSS, and does many other things. So to get the same result in Python you would essentially have to build your own web browser.
After much digging around, I came upon a solution that works in most cases. Use Headless Chrome with the --dump-dom switch.
https://developers.google.com/web/updates/2017/04/headless-chrome
Programmatically in Python use the subprocess module to run Chrome in a shell and either assign the output to a variable or direct the output to a text file.
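For instance, a minimal sketch using subprocess (the Chrome binary name and the output file name are assumptions and may differ on your system):
import subprocess

# Run headless Chrome and capture the rendered DOM.
# The binary may be "chrome", "google-chrome", or a full path, depending on your system.
result = subprocess.run(
    ["google-chrome", "--headless", "--disable-gpu", "--dump-dom",
     "https://www.bvl.com.pe/mercado/movimientos-diarios"],
    capture_output=True,
    text=True,
)

rendered_html = result.stdout

# Either keep the DOM in the variable or write it out to a text file
with open("page.html", "w", encoding="utf-8") as f:
    f.write(rendered_html)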
I want to find the time at which the first thing (an object, image, text, link, DB call, or anything else) loads on a requested website, using Python and Selenium.
Check out performance.timing; it's JavaScript and comes built into your browser. It gives you a lot of properties to look at, like:
navigationStart
connectStart
connectEnd
domLoading
domInteractive
domComplete
Just go to the console in your browser's developer tools and type performance.timing. It might be of use to you.
If you find something useful, you can have Selenium execute the JavaScript inside the browser using execute_script:
driver.execute_script('return performance.timing.domComplete')
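Putting it together, a minimal sketch (the URL is just an example):
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")

# Read individual timing values (milliseconds since the epoch)
nav_start = driver.execute_script("return window.performance.timing.navigationStart")
dom_complete = driver.execute_script("return window.performance.timing.domComplete")

# Time from the start of navigation until the DOM was complete
print(dom_complete - nav_start, "ms")

driver.quit()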
I have a piece of Python code which needs to be converted to Robot Framework.
Here is the Python code:
# Configure Chrome to download files to G:/ by default
chromeOptions = webdriver.ChromeOptions()
prefs = {"download.default_directory" : "G:/"}
chromeOptions.add_experimental_option("prefs", prefs)

# Start Chrome with the options and an explicit chromedriver path
chromedriver = "C:/Python27/chromedriver.exe"
driver = webdriver.Chrome(executable_path=chromedriver, chrome_options=chromeOptions)
Is it possible to make this work in Robot Framework?
I don't have much knowledge of Robot Framework.
Using Selenium2Library, a direct translation of that code would look like this:
${chromeOptions}=    Evaluate    sys.modules['selenium.webdriver'].ChromeOptions()    sys, selenium.webdriver
${prefs}=    Create Dictionary    download.default_directory    G:/
Call Method    ${chromeOptions}    add_experimental_option    prefs    ${prefs}
${chromedriver}=    Set Variable    C:/Python27/chromedriver.exe
Create Webdriver    Chrome    chrome    executable_path=${chromedriver}    chrome_options=${chromeOptions}
Go To    http://someurl/
[Teardown]    Close All Browsers
This code depends on two library imports, Selenium2Library and Collections. It worked for me after adjusting your paths to my system.
Given that you already know Python, I'd direct you to any number of questions asking how to implement Python code in Robot Framework (the coding is mostly on the Python side). Much of this code could likely be replaced by the Open Browser keyword if all you want to do is open a new browser instance to a web page. To change settings in Chrome, either refer to the questions that explain how to implement your Python code in Robot Framework or use ombre42's suggestions.
I am trying to click this radio button using selenium and python.
<input type="radio" name="tweet_target_1" value="website" class="tweet-website-button radio-selection-validate serialize-me newline-before field-order-15">
I have
website = driver.find_element(name="tweet_target_1")
website.click()
but it's not letting me click it. How can I click using a combination of name and value, or class and value, etc.?
Is there a good source of information on how to use Selenium? Most of what I've found is for Java, and I'm using Python.
EDIT: using XPATH
I tried
website = driver.find_elements(By.XPATH, "//form[#id='dmca_form' and #class='twitter-form custom-form']/div[20][#class='list-container']/div[1][#class='list-item']/div[7][#class='clearfix inf-tweet init-hide']/div[#class='input']/ul[#class='options']/li[2]/label/input[#class='tweet-website-button radio-selection-validate serialize-me newline-before field-order-15']/")
website.click()
I keep getting
AttributeError: 'list' object has no attribute 'click'
I know this comes a little too late perhaps, but I joined just recently.
Tip: use Firebug and, with it, FirePath. Locate the radio button and find out the XPath for the element in question.
website = driver.find_element_by_xpath(".//**")
website.click()
OR
website = driver.find_element_by_xpath(".//**").click()
This should work every time you try it. Also, just using from selenium import webdriver
should make the click() function work correctly.
I'm not sure where you found the documentation that said you could call find_element like that, but you should either be doing driver.find_element_by_name("tweet_target_1") or driver.find_element(By.NAME, "tweet_target_1") (having first imported By of course). Also, Selenium Java code is pretty easily convertible to Python code; it follows a few pretty simple transformation rules, and if you still have questions, all the code for the library itself will also be on your machine to look at.
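For example, a minimal sketch (the URL is a placeholder; find_element_by_name only exists in older Selenium versions, while the By-based call works in newer ones):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/form")  # placeholder URL

# Locate the radio button by its name attribute
# Older API: website = driver.find_element_by_name("tweet_target_1")
website = driver.find_element(By.NAME, "tweet_target_1")
website.click()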
I would like to web-scrape the HTML source code of JavaScript pages that I can't access without selecting one option in a drop-down list and then 'clicking' on links. Even though it isn't JavaScript-based, a simple example would be this:
Web-scrape the main wikipedia pages in all languages available in the drop-down list in the bottom of this url: http://www.wikipedia.org/
To do so, I need to select one language, English for example, and then 'click' the 'Main Page' link on the left of the new URL (http://en.wikipedia.org/wiki/Special:Search?search=&go=Go).
After this step, I would scrape the html source code of the wikipedia main page in English.
Is there any way to do this using R? I have already tried RCurl and XML packages, but it does not work well with the javascript page.
If it is not possible with R, could anyone tell me how to do this with python?
It's possible to do this in Python with the selenium package. There are some useful examples here. I found it helpful to install Firebug so that I could identify elements on the page. There is also a Selenium Firefox plugin with an interactive window that can help too.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

# Open Firefox and load the page
driver = webdriver.Firefox()
driver.get("http://website.aspx")

# Find an input field by its id, type a value, and press Enter
elem = driver.find_element_by_id("ctl00_ctl00")
elem.send_keys('15')
elem.send_keys(Keys.RETURN)
Take a look at the RCurl and XML packages for posting form information to the website and then processing the data afterwards. RCurl is pretty cool, but you might have an issue with the HTML parsing because if it isn't standards compliant, the XML package may not want to play nice.
If you are interested in learning Python, however, Celenius' example above coupled with BeautifulSoup would be what you need.
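For the Wikipedia example in the question, a rough sketch of that pattern could look like the following; the "searchLanguage" id and the 'Main Page' link text are assumptions about the page and may need adjusting (inspect the page with Firebug to confirm them):
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from bs4 import BeautifulSoup

driver = webdriver.Firefox()
driver.get("http://www.wikipedia.org/")

# Assumed id of the language drop-down; verify it with Firebug before relying on it
Select(driver.find_element_by_id("searchLanguage")).select_by_value("en")

# Submit the (empty) search to land on the English site, as described in the question
driver.find_element_by_name("search").submit()

# Click the 'Main Page' link in the left-hand menu (assumed link text)
driver.find_element_by_link_text("Main Page").click()

# Parse the rendered HTML with BeautifulSoup
soup = BeautifulSoup(driver.page_source, "html.parser")
print(soup.title)

driver.quit()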