Python and Internet Explorer (specific website) - python

I am trying to use Python to open an IE instance, navigate to a particular site, and enter login credentials. I am currently trying to make IEC work, but am open to other options that do the same thing.
I am having trouble with the last part (the login) because the "button" does not seem to be recognized as one. It appears to be an anchor that acts as a button (role="button"), which I am not very familiar with:
<a title="Click here to log in." class="focus-parent ajax-request need-focus-pageobject" role="button" href="../MyAccount/MyAccountUserLogin.asp?Referrer=&AjaxRequest=true">
Here is the code I have tried:
import IEC
ie = IEC.IEController()
ie.Navigate('https://efun.toronto.ca/torontofun/Start/Start.asp')
ie.PollWhileBusy()
# code after here does not work properly
ie.Navigate('https://efun.toronto.ca/torontofun/MyAccount/MyAccountUserLogin.asp?Referrer=&AjaxRequest=true')
ie.ClickButton(caption='toolbar-login')
ie.SetInputValue('ClientBarcode', '123')
ie.SetInputValue('AccountPIN', 'XYZ')
ie.ClickButton(name='Enter')
I would appreciate tips on how to open the login menu in this case.

Did you try Selenium?
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Ie()
driver.get("https://efun.toronto.ca/torontofun/Start/Start.asp")  # open the start page first
login_btn = driver.find_element_by_id("toolbar-login")
login_btn.send_keys(Keys.RETURN)
user_name = driver.find_element_by_id("ClientBarcode")
user_name.send_keys("user")
user_name.send_keys(Keys.RETURN)
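Since the login form is pulled in via AJAX (note the ajax-request class on the link in the question), the barcode and PIN fields may not exist until the dialog has loaded. A minimal sketch with an explicit wait, assuming the login link's id really is "toolbar-login" and the field IDs are the ones shown in the question:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Ie()
driver.get("https://efun.toronto.ca/torontofun/Start/Start.asp")
# open the AJAX login dialog (assumes the login link's id is "toolbar-login")
driver.find_element(By.ID, "toolbar-login").click()
# wait until the barcode field injected by the AJAX request is present
wait = WebDriverWait(driver, 10)
barcode = wait.until(EC.presence_of_element_located((By.ID, "ClientBarcode")))
barcode.send_keys("123")
driver.find_element(By.ID, "AccountPIN").send_keys("XYZ", Keys.RETURN)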

Related

Why is selenium not able to find element with ID, even when it is not in an iframe?

I am trying to make an automatic program for logging in to GitHub. I could only find the sign-in link; after that, I could not find the username field. I have confirmed that the element is definitely not inside an iframe, and I have also tried an alternative with a CSS selector.
Here is the code I tried:
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.options import Options
chrome_opt = Options()
chrome_opt.add_experimental_option("detach", True) # type: ignore[unknown]
auto = Chrome(options=chrome_opt)
auto.get("https://github.com")
signin_link = auto.find_element("link text", "Sign in")
signin_link.click()
username = auto.find_element("id", "login_field")
username.send_keys("ArnabRollin") # type: ignore[unknown]
# FIXME
The type-ignore comments are there because of 'strict mode' type checking in VS Code. Also, after 5 tries of running it, it finally worked, but when I ran it again it didn't.
Right now your code is looking for elements on the page https://github.com, the one loaded by get().
Instead of clicking the element, read its link and navigate to it with the webdriver. Replace
signin_link = auto.find_element("link text", "Sign in")
signin_link.click()
with
signin_link = auto.find_element("link text", "Sign in").get_attribute('href')
auto.get(signin_link)
Calling auto.get() stores the new page context in the driver. After sign-in completes, a new page context will be needed again.
Note: I'm not sure it's ethical to scrape this website, and besides, they have a CAPTCHA.
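Putting that together, a sketch of the whole flow with an explicit wait added, which should also remove the flakiness mentioned in the question (the locators are the ones already used above):
from selenium.webdriver import Chrome
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

auto = Chrome()
auto.get("https://github.com")
# read the Sign in link's target and navigate to it directly
signin_url = auto.find_element(By.LINK_TEXT, "Sign in").get_attribute("href")
auto.get(signin_url)
# wait for the username field instead of assuming the new page is ready
username = WebDriverWait(auto, 10).until(
    EC.presence_of_element_located((By.ID, "login_field"))
)
username.send_keys("ArnabRollin")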
You can use this CSS selector:
from selenium.webdriver.common.by import By
username = auto.find_element(By.CSS_SELECTOR, "input.js-login-field")
Additionally, when you go to github.com and click on login, the URL changes to /login: https://github.com/login
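So an even shorter variant is to skip the click entirely and open /login directly; a sketch using the CSS selector from this answer:
from selenium.webdriver import Chrome
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

auto = Chrome()
auto.get("https://github.com/login")  # go straight to the login page
wait = WebDriverWait(auto, 10)
username = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "input.js-login-field")))
username.send_keys("ArnabRollin")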

can't access a pop-up login form using Selenium in Python

I've gone through a number of similar topics here, but they all seem to vary in how the pop-up window is designed. I've tried a few different approaches; here is the most recent. Before I can enter the login info, I need to click the Client Login button to open the login form, but I can't even get it to open, let alone enter the login information.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome(r'C:\Login Automation\chromedriver.exe')
driver.get("https://www.datamyne.com/")
clientlogin = driver.find_element_by_xpath("//div[@id='holder']").click()
username = driver.find_element_by_xpath("//*[@id='user']").send_keys('myusername')
password = driver.find_element_by_xpath("//*[@id='pass']").send_keys('mypassword')
The error I'm getting here is "NoSuchWindowException: Message: no such window: target window already closed from unknown error: web view not found".
The element of the first button is this:
<a style="position: relative" href="javascript:showHide('dialog-login');" class="green-btn user-login top-right-5">Client Login</a>
and the actual login button is another JavaScript link:
Login
Any tips on how to approach this would be really appreciated!
I inspected the website you mentioned; its login form is driven by JavaScript. You can execute that JavaScript directly through Selenium. I have created a basic code snippet for you:
from selenium import webdriver
driver = webdriver.Chrome('chromedriver.exe')
driver.get("https://www.datamyne.com/")
# execute the site's own JavaScript through Selenium to open the login dialog
clientlogin = driver.execute_script("showHide('dialog-login');")
driver.implicitly_wait(5)
username = driver.find_element_by_xpath('//*[@id="User"]').send_keys('myusername')
password = driver.find_element_by_xpath('//*[@id="Pass"]').send_keys('mypassword')
save = driver.find_element_by_xpath('//*[@id="formLoginDM"]/div[1]/a').click()
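If the implicit wait is not enough (the dialog is injected by JavaScript and can be slow to render), an explicit wait on one of its fields is a safer variant; a sketch assuming the same User/Pass IDs used above:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome('chromedriver.exe')
driver.get("https://www.datamyne.com/")
driver.execute_script("showHide('dialog-login');")  # open the login dialog
# wait until the username field inside the dialog is actually visible
user_field = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "User"))
)
user_field.send_keys('myusername')
driver.find_element(By.ID, "Pass").send_keys('mypassword')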
clientlogin = driver.find_element_by_xpath("//div[@id='holder']").click()
The XPath is off; the holder div isn't what you need to click.
I suspect you want:
clientlogin = driver.find_element_by_xpath("//a[text()='Client Login']").click()
Can you try this XPath for the 'Client Login' pop-up modal:
clientlogin = driver.find_element_by_xpath("//a[@class='green-btn user-login top-right-5']").click()

Failed to use selenium to automatically click a link on a website

I want to use selenium to automatically log in to a website (https://www.cypress.com/) and download some materials.
I successfully open the website using selenium, but when I use selenium to click the "Log in" button, it shows this:
Access Denied
Here is my code:
from time import sleep
from selenium import webdriver

class Cypress():
    def extractData(self):
        browser = webdriver.Chrome(executable_path=r"C:chromedriver.exe")
        browser.get("https://www.cypress.com/")
        sleep(5)
        element = browser.find_element_by_link_text("Log in")
        sleep(1)
        element.click()

if __name__ == "__main__":
    a = Cypress()
    a.extractData()
Can anyone give me some idea?
The website is protected by Akamai (a CDN plus the bot-protection services loaded with it).
I took a quick glance: the Akamai service worker is up, but I don't see any sensor-data protection. Selenium is simply detected as a webdriver (among plenty of other signals) and flagged. Try logging in with requests instead, or ask the website owner to give you API access for your project.
The Akamai cookies are set, so the protection surely is too; the 301 you got is the bot protection stopping you from automating a protected endpoint.
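For completeness, a bare-bones sketch of the requests route. The login URL and field names below are placeholders, not the site's real endpoint (read the real ones from the browser's network tab during a manual login), and Akamai may still block a plain HTTP client:
import requests

session = requests.Session()
# hypothetical endpoint and form field names: substitute the real ones
# captured from the browser's network tab during a manual login
LOGIN_URL = "https://www.cypress.com/example-login-endpoint"
payload = {"username": "myusername", "password": "mypassword"}
headers = {"User-Agent": "Mozilla/5.0"}  # a browser-like User-Agent is usually needed

resp = session.post(LOGIN_URL, data=payload, headers=headers)
print(resp.status_code)
# the session keeps the login cookies, so later downloads can reuse it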

How to write a Python script to search for a keyword in a particular website's database using the website's search bar?

I want to search for a keyword on a particular website using its search bar. For example, I want to search for "birds" on Wikipedia. For that I would have to open Google Chrome, open Wikipedia, and then search for the word "birds" using Wikipedia's search engine.
I want to automate this process using Python. I'm using PyCharm.
If emulating browser user activity is OK for you, consider installing Selenium and the Chrome webdriver (instructions: https://pypi.org/project/selenium/).
"Example 1" there is close to a solution to your problem.
The search bar is the <input type="search" name="search" placeholder="Search Wikipedia" title="Search Wikipedia [alt-shift-f]" accesskey="f" id="searchInput" tabindex="1" autocomplete="off"> element. It has the id "searchInput", which you can use to select it with el = browser.find_element_by_id("searchInput").
Then use el.send_keys('birds' + Keys.RETURN) to fill the input with your query and run the search.
So the script may look like the following:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
browser = webdriver.Chrome()
browser.get('https://en.wikipedia.org/wiki/Main_Page')
print("Enter a keyword to search on wikipedia: ", end='')
keyword = input()
elem = browser.find_element_by_id('searchInput') # Find the search box
elem.send_keys(keyword + Keys.RETURN)
# do something with the opened page
browser.quit()
If you don't want to emulate browser activity, you can instead solve it with the requests and BeautifulSoup4 modules; that solution will be more complex, though probably more efficient.
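A rough sketch of that route, assuming Wikipedia's standard search endpoint (https://en.wikipedia.org/w/index.php?search=...) still behaves the way it does today:
import requests
from bs4 import BeautifulSoup

keyword = "birds"
# Wikipedia's search endpoint; it redirects straight to the article on an exact match
resp = requests.get(
    "https://en.wikipedia.org/w/index.php",
    params={"search": keyword},
    headers={"User-Agent": "keyword-search-demo/0.1"},
)
soup = BeautifulSoup(resp.text, "html.parser")
print(resp.url)  # the page we ended up on
print(soup.find("h1").get_text(strip=True))  # its title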

How to click dynamic buttons whose link is "#" using selenium or splinter?

I am trying to scrape something from a website (Facebook, for example; I am not using the Graph API, just doing this for learning). I successfully log in and land on the front page, where I want to scrape some data. The problem is that when I land on the front page, Facebook shows an overlay with a box that says "turn on notifications", and until I click either "Not Now" or "Turn On" I can't do anything with splinter. When I did try to click, splinter did nothing, because the href of those buttons is just "#".
When hovering over the button, the browser footer just shows "#" as the link target, and inspect element shows the button markup (screenshots omitted). I tried with another account, and it shows this layer as the first thing after login.
Now my question is how to click these two types of buttons via splinter or selenium:
the first type of button, which has "#" as its href
the second type, which Chrome shows for the block/allow notification prompt
My code is :
from selenium import webdriver
from splinter import Browser
web_driver=webdriver.Chrome('/Users/paul/Downloads/chromedriver/chromedriver')
url = "https://www.example.com"
browser = Browser("chrome")
visit_browser = browser.visit(url)
email_box = '//*[@id="email"]'
find_1 = browser.find_by_xpath(email_box)
find_1.fill("example@gmail.com")
password_box = '//*[@id="pass"]'
find_2 = browser.find_by_xpath(password_box)
find_2.fill("example12345")
button_sub = '//*[@id="u_0_5"]'
find_3 = browser.find_by_xpath(button_sub)
find_3.click()
For testing purposes you can try the "See More" button in the trending section on Facebook; it also has "#" as its href. How do you click that?
It's not letting me comment because I don't have enough rep, but have you tried selecting the element by class and then calling .click() on it? That might do the trick, as the href being "#" probably means the button has another purpose.
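For example, a sketch of that idea with splinter's CSS lookup, using the layerCancel class that shows up in the JavaScript solution below (Facebook's class names change often, so treat it as illustrative only):
from splinter import Browser

browser = Browser("chrome")
browser.visit("https://www.example.com")
# ... log in as in the question ...
# select the dialog's cancel button by class and click it directly
cancel = browser.find_by_css("a.layerCancel")
if cancel:
    cancel.first.click()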
I have solved my problem. Since that link was "#", clicking it via CSS or any other method just reloaded the page, and the layer appeared again after every reload. So I tried a slightly different approach and clicked it with JavaScript.
First I found the right element to click via the JS console in Chrome:
document.getElementsByClassName('layerCancel _4jy0 _4jy3 _517h _51sy _42ft')[0].click();
This works perfectly in the JS console, so I used splinter's browser.execute_script() method and passed that script as its argument:
browser.execute_script("document.getElementsByClassName('layerCancel _4jy0 _4jy3 _517h _51sy _42ft')[0].click()")
It now works exactly as I wanted. But I still haven't found a way to click the browser push-notification buttons ("Allow", "Block", etc.).
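For the browser-level "Allow"/"Block" prompt: that dialog belongs to Chrome itself, not the page, so no page script can click it, but you can usually suppress it with Chrome options when the driver is created. A sketch with plain Selenium (the preference key is the commonly used one; worth verifying on your Chrome version):
from selenium import webdriver

options = webdriver.ChromeOptions()
# 2 = block notifications, so the prompt never appears
options.add_experimental_option(
    "prefs", {"profile.default_content_setting_values.notifications": 2}
)
options.add_argument("--disable-notifications")  # the simpler flag form also works
driver = webdriver.Chrome(options=options)
driver.get("https://www.example.com")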
Thanks :)
