I'm new here and new to Selenium. I have an SPA written in AngularJS, and I'm trying to test a view with the Python WebDriver bindings, but Selenium can't find the elements in the routed page. My question is: how can I test the routed page?
view:
<div id ="form" align="center" ng-controller = "BP">
<input id = "topNumber" ng-model ="topNum" placeholder="Top Number" type ="number" class="form"/>
<br><br>
<input id = "botNumber" ng-model="bottomNum" placeholder="Bottom Number" type="number" class ="form"/>
<br><br>
<button class ="form" id ="bp_enter" ng-click="write(topNum, bottomNum)">Enter</button>
</div>
Current Python attempt:
from selenium import webdriver
path = r'C:\Users\Mason\Desktop\chromedriver.exe'
var = webdriver.Chrome(path)
var.get("http://localhost/PhpProject30/index.php#!/")
var.find_element_by_id('home').click()
var.find_element_by_id('bp_btn').click()
var.find_element_by_id('BP_enter_btn').click()
var.get("http://localhost/PhpProject30/index.php#!/BP_Enter")
var.find_element_by_id('bp_enter').click()
Turns out that Selenium was executing too fast: it was looking for an element in the view before the view had loaded. I put a pause in there to give the view some time, and it worked like a charm.
All I did was add the following:
import time
time.sleep(.5)
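If you'd rather not hard-code a pause, an explicit wait is usually more reliable; here is a minimal sketch, assuming the bp_enter button from the view above and the var driver from the snippet:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Wait (up to 10 seconds) for the routed view to render the button,
# instead of sleeping for a fixed amount of time.
WebDriverWait(var, 10).until(
    EC.element_to_be_clickable((By.ID, 'bp_enter'))
).click()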
Related
I wrote a bot that used Selenium to scrape all the data it needed and perform a few simple tasks. I don't know why I didn't use HTTP requests from the start, but I am now trying to switch to that. One of the Selenium functions used a simple driver.get(url) to trigger an action on the site. Using requests.get, however, does not work.
This Selenium code worked:
import time
from selenium import webdriver
AM4_URL = 'https://www.airline4.net/?gameType=app&uid=102692112805909972638&uid_token=8adee69e774d89fb6e9f903e7d2afc70&mail=bsgpricecheck#gmail.com&mail_token=286f8bd25bcc32f49a02036102ce072c&device=ios&version=6&FCM=daf5d0d8bf4d7962061eac3a8e4bffa770d6593f31fd5b070d690f244dfb40d1#'
def depart():
    # Load driver and get login url
    if pax_rep > 80:
        driver = webdriver.Firefox(executable_path='C:\webdrivers\geckodriver.exe')
        driver.get(AM4_URL)
        driver.minimize_window()
        driver.get("https://www.airline4.net/route_depart.php?mode=all&ids=x")
        time.sleep(100)

def randfunc():
    depart()
But now I'm trying to switch over to requests because all the other bot functions work with it. I tried this, and it doesn't perform the action.
import requests
# I was able to combine the URLs into one. It still performs the action when on a browser.
dep_url = 'https://www.airline4.net/route_depart.php?mode=all&ids=x?gameType=app&uid=102692112805909972638&uid_token=8adee69e774d89fb6e9f903e7d2afc70&mail=bsgpricecheck#gmail.com&mail_token=286f8bd25bcc32f49a02036102ce072c&device=ios&version=6&FCM=daf5d0d8bf4d7962061eac3a8e4bffa770d6593f31fd5b070d690f244dfb40d1#'
requests.get(dep_url)
I figured this code would work because the URL doesn't return any content, so I assumed the site simply treats the GET request as a command. I would also like to note that I got the route_depart.php URL from an Ajax button.
Here's the HTML from that
<div class="btn-group d-flex" role="group">
<button class="btn" style="display:none;" onclick="Ajax('def227_j22.php','runme');"></button>
<button class="btn w-100 btn-danger btn-xs" onclick="Ajax('route_depart.php?mode=all&ids=x','runme',this);">
<span class="glyphicons glyphicons-plane"></span> Depart <span id="listDepartAmount">5</span></button>
</div>
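For what it's worth, one likely reason the plain requests.get above does nothing is that in the Selenium version the browser first loads the login URL and keeps whatever session cookies it sets, whereas a standalone GET arrives without them. A sketch under that assumption, reusing the URLs from the question:
import requests
# Assumption: route_depart.php relies on cookies set when the login URL
# is first loaded, so reuse one Session for both requests instead of a
# bare requests.get().
AM4_URL = 'https://www.airline4.net/?gameType=app&uid=...'  # the full login URL from the question
session = requests.Session()
session.get(AM4_URL)  # establish the game session / cookies
resp = session.get('https://www.airline4.net/route_depart.php?mode=all&ids=x')
print(resp.status_code)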
I am not a professional programmer, so please excuse any dumb mistakes. I am doing some research, and I am trying to log into a database using Selenium so I can search it for about 1000 terms.
I have two issues:
1. How to log in using Selenium after a redirect to an organizational sign on page
2. How to search the database
Until I solve 1, I can't really get to 2, so I am really only asking about 1.
Here is the code I have so far:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
path_to_chromedriver = 'C:/Users/Kyle/Desktop/chromedriver' # change path as needed
browser = webdriver.Chrome(executable_path = path_to_chromedriver)
url = 'http://library.universityname.edu/corpaffils'
browser.get(url)
username = selenium.find_element_by_id("login_id")
password = selenium.find_element_by_id("login_password")
username.send_keys("my username")
password.send_keys("my password")
selenium.find_element_by_name("submit").click()
When I navigate to the URL above, it redirects me to my organizational sign-on page (say, login.universityname.edu), where I should be able to enter my username and password; it should then direct me to the database. But when I execute the code above, it does not log me in.
The HTML that I can find on the organizational sign-on page looks like this:
<li><label for="login_id">ID:</label><input id="login_id" placeholder="ID" type="text" NAME="user" SIZE="20" VALUE=""/></li>
...
...
<li><label for="login_password">Password:</label><input id="login_password" placeholder="Password" type="password" NAME="pass" SIZE="20" /></li>
...
...
<ul class="submit">
<li><input type="SUBMIT" name="submit" value="Sign in"></li>
</ul>
I think there might be a couple of issues, but I am not sure which applies:
1. Either my code is trying to enter the login information before the redirect, and thus it isn't entering into anything; or
2. My Selenium code is not properly identifying the fields for the organizational sign on, so it isn't logging me in; or
3. Both
Is there something I need to do to account for the redirect? Am I identifying the login fields correctly and handling them accurately?
Replace selenium with browser when you are finding an element, and this will work:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
path_to_chromedriver = 'C:/Users/Kyle/Desktop/chromedriver' # change path as needed
browser = webdriver.Chrome(executable_path = path_to_chromedriver)
url = 'http://library.universityname.edu/corpaffils'
browser.get(url)
username = browser.find_element_by_id("login_id")
password = browser.find_element_by_id("login_password")
username.send_keys("my username")
password.send_keys("my password")
browser.find_element_by_name("submit").click()
You don't need to do anything extra for the redirect; it happens automatically once you log in.
N.B.: Don't forget to close and quit the browser when you are done.
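For example, once the searches are finished (a minimal sketch reusing the browser variable above):
# close() only closes the current window; quit() closes every window
# and shuts the chromedriver process down, so call it when you are done.
browser.quit()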
I'm a selenium newbie and just trying to learn the basics. I have a simple CherryPy webapp that takes a first name and last name as input:
My Webapp:
<p>
<label></label>
<input name="first_name"></input>
<br></br>
</p>
<p>
<label></label>
<input name="last_name"></input>
<br></br>
</p>
In my python console I have:
from selenium import webdriver
driver = webdriver.Firefox()
driver.get('http://localhost:8080')
The page loads fine in Firefox, but I'm a little lost on how to get text into the 'first_name' and 'last_name' text boxes. I see examples where you do something like inputElement = driver.find_element_by_id("n") and then inputElement.send_keys('my_first_name'), but I don't have an id, just a name. Do I need to add something to my web page? Thanks!
You can use find_element_by_name:
driver.find_element_by_name('first_name').send_keys("my_first_name")
driver.find_element_by_name('last_name').send_keys("my_last_name")
Python: 3.4.1
Browser: Chrome
I'm trying to push a button which is located in a form using Selenium with Python. I'm fairly new to Selenium and HTML.
The HTML code is as follows:
<FORM id='QLf_437222' method='POST' action='xxxx'>
<script>document.write("<a href='javascript:void(0);' onclick='document.getElementById(\"QLf_437222\").submit();' title='xxx'>51530119</a>");</script>
<noscript><INPUT type='SUBMIT' value='51530119' title='xxx' name='xxxx'></noscript>
<INPUT type=hidden name="prodType" value="DDA"/>
<INPUT type=hidden name="BlitzToken" value="BlitzToken"/>
<INPUT type=hidden name="productInfo" value="40050951530119"/>
<INPUT type=hidden name="reDirectionURL" value="xxx"/>
</FORM>
I've been trying the following:
driver.execute("javascript:void(0)")
driver.find_element_by_xpath('//*[@id="QLf_437104"]/a').click()
driver.find_element_by_xpath('//*[@id="QLf_437104"]/a').submit()
driver.find_element_by_css_selector("#QLf_437104 > a").click()
driver.find_element_by_css_selector("#QLf_437104 > a").submit()
Python doesn't throw an exception, so it seems like I'm clicking something, but it doesn't do what I want.
In addition to this, the webpage acts funny when the Chrome driver is initialized from Selenium. When clicking the button in the initialized Chrome driver, the webpage throws an error (888).
I'm not sure where to go from here. Might it be something with the hidden elements?
If I can provide additional information please let me know.
EDIT:
It looks like the form id changes sometimes.
It sounds like what you are trying to do is submit the form, right?
The <a> that you are pointing out simply submits that form. Since it is injected via JavaScript, it's possible that it's not showing up when you try to click it. What I'd recommend is doing:
driver.find_element_by_css_selector("form[id^='QLf']").submit()
That will avoid the button and submit the appropriate form.
In the CSS selector above, I also used [id^=. This means: find a <form> with an id attribute that starts with QLf, because it looks like the numbers after it are automatically generated.
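Since the link is injected by JavaScript and the form id changes, you could also combine that selector with an explicit wait; a minimal sketch (assuming driver is your WebDriver instance):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Wait for any form whose id starts with "QLf" to be present, then submit it.
form = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "form[id^='QLf']"))
)
form.submit()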
I need to input text into the text box on this website:
http://www.link.cs.cmu.edu/link/submit-sentence-4.html
I then need the returned page's HTML. I have looked at other solutions, but I am aware that there is no one-size-fits-all answer. I have seen Selenium, but I do not understand its documentation or how I can apply it. Please help me out, thanks.
BTW, I have some experience with BeautifulSoup, if it helps. I had asked before, but requests was the only solution offered, and I don't know how to use it.
First, imho automation via BeautifulSoup is overkill if you're looking at a single page. You're better off looking at the page source and getting the form structure from it. Your form is really simple:
<FORM METHOD="POST"
ACTION="/cgi-bin/link/construct-page-4.cgi#submit">
<input type="text" name="Sentence" size="120" maxlength="120"></input><br>
<INPUT TYPE="checkbox" NAME="Constituents" CHECKED>Show constituent tree
<INPUT TYPE="checkbox" NAME="NullLinks" CHECKED>Allow null links
<INPUT TYPE="checkbox" NAME="AllLinkages" OFF>Show all linkages
<INPUT TYPE="HIDDEN" NAME="LinkDisplay" VALUE="on">
<INPUT TYPE="HIDDEN" NAME="ShortLength" VALUE="6">
<INPUT TYPE="HIDDEN" NAME="PageFile" VALUE="/docs/submit-sentence-4.html">
<INPUT TYPE="HIDDEN" NAME="InputFile" VALUE="/scripts/input-to-parser">
<INPUT TYPE="HIDDEN" NAME="Maintainer" VALUE="sleator#cs.cmu.edu">
<br>
<INPUT TYPE="submit" VALUE="Submit one sentence">
<br>
</FORM>
so you should be able to extract the fields and populate them.
I'd do it with curl and -X POST (like here -- see the answer too :)).
If you really want to do it in Python, then you need to do something like a POST using requests.
Pulled straight from the docs and changed to your example.
from selenium import webdriver
# Create a new instance of the Firefox driver
driver = webdriver.Firefox()
# go to the page
driver.get("http://www.link.cs.cmu.edu/link/submit-sentence-4.html")
# the page is ajaxy so the title is originally this:
print driver.title
# find the element that's name attribute is Sentence
inputElement = driver.find_element_by_name("Sentence")
# type in the search
inputElement.send_keys("You're welcome, now accept the answer!")
# submit the form
inputElement.submit()
This will at least help you input the text. Then, take a look at this example to retrieve the html.
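To then get the returned page's HTML with the same driver, a minimal sketch:
# After inputElement.submit(), the parse result page is loaded in the
# same browser session, so its markup can be read straight off the driver.
result_html = driver.page_source
print(result_html)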
Following the OP's requirement of doing the whole process in Python:
I wouldn't use Selenium, because it launches a browser on your desktop and is overkill for just filling in a form and getting its reply (you could justify it if your page had JS or Ajax stuff).
The form request code could be something like:
import requests
payload = {
    'Sentence': 'Once upon a time, there was a little red hat and a wolf.',
    'Constituents': 'on',
    'NullLinks': 'on',
    'AllLinkages': 'on',
    'LinkDisplay': 'on',
    'ShortLength': '6',
    'PageFile': '/docs/submit-sentence-4.html',
    'InputFile': "/scripts/input-to-parser",
    'Maintainer': "sleator#cs.cmu.edu"
}
r = requests.post("http://www.link.cs.cmu.edu/cgi-bin/link/construct-page-4.cgi#submit",
                  data=payload)
print r.text
The r.text is the HTML body, which you can parse via e.g. BeautifulSoup.
Looking at the HTML reply, I think your problem will be in processing the text within the <pre> tags, but that's an entirely different thing, outside the scope of this question.
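For instance, a short sketch of pulling the text out of those <pre> tags (assuming BeautifulSoup is installed):
from bs4 import BeautifulSoup
# Parse the reply and print the contents of each <pre> block,
# which is where the parser output lives.
soup = BeautifulSoup(r.text, "html.parser")
for pre in soup.find_all("pre"):
    print(pre.get_text())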
HTH,