Why does Selenium's get() method not work the second time? - python

Help me figure out why the page does not load when the get() method is called a second time. The code only works if I add a delay with time.sleep().
Not working:
from selenium import webdriver

LOGIN = 'something@mail.com'
PASS = 'somepass'
LINK = 'https://stepik.org/'
browser = webdriver.Chrome()
browser.get(LINK)
browser.implicitly_wait(5)
browser.find_element_by_id('ember232').click()
browser.find_element_by_name('login').send_keys(LOGIN)
browser.find_element_by_name('password').send_keys(PASS)
browser.find_element_by_xpath('//button[@type="submit"]').click()
browser.get('https://stepik.org/lesson/237240/step/3?unit=209628')
Working:
from selenium import webdriver
import time

LOGIN = 'something@mail.com'
PASS = 'somepass'
LINK = 'https://stepik.org/'
browser = webdriver.Chrome()
browser.get(LINK)
browser.implicitly_wait(5)
browser.find_element_by_id('ember232').click()
browser.find_element_by_name('login').send_keys(LOGIN)
browser.find_element_by_name('password').send_keys(PASS)
browser.find_element_by_xpath('//button[@type="submit"]').click()
time.sleep(5)
browser.get('https://stepik.org/lesson/237240/step/3?unit=209628')

You are trying to log in to the web site and then navigate to an internal page.
By clicking the submit button
browser.find_element_by_xpath('//button[@type="submit"]').click()
you start the login process.
This process takes some time.
So if, immediately after clicking submit, while the login has not yet completed, you try to navigate to an internal page, it will not work, because you are not logged in yet.
However, you do not need a hardcoded sleep of 5 seconds.
You can use an explicit wait for an expected condition such as presence_of_element_located() on some element that only exists once you are inside the web site. Once this condition is fulfilled, you can navigate to the desired internal page.

An alternative sometimes suggested is:
driver.navigate().to("https://stepik.org/lesson/237240/step/3?unit=209628")
Note, however, that navigate().to() belongs to the Java bindings; in the Python bindings the equivalent is simply driver.get(), so this does not remove the need to wait for the login to complete.

Related

Python - Pop up window authentication without a source code

I'm trying to access a pop-up authentication box on an internal link, which I can't share as it is confidential. I tried the code below to access it; it works fine for a single link, but I have multiple links which I process in a loop.
import time
import win32com.client as win32

driver.get(links[i])
time.sleep(2)
window_before = driver.window_handles[0]
driver.switch_to.window(window_before)
shell = win32.Dispatch("WScript.Shell")
time.sleep(2)
shell.SendKeys('username')
shell.SendKeys('{TAB}')
shell.SendKeys('password')
shell.SendKeys('{ENTER}')
time.sleep(4)
# Second login page: enter the password again and log in
driver.find_element_by_name('p_t02').send_keys('password')
driver.find_element_by_xpath("//td[@class='t10C']").click()
Problem: when it runs the second loop iteration, for the second link, it actually skips the part below and throws an error:
driver.get(links[i])
time.sleep(2)
window_before = driver.window_handles[0]
driver.switch_to.window(window_before)
shell = win32.Dispatch("WScript.Shell")
time.sleep(2)
shell.SendKeys('username')
shell.SendKeys('{TAB}')
shell.SendKeys('password')
shell.SendKeys('{ENTER}')
time.sleep(4)
It skips the above and goes straight to the code below, which then throws an error:
# Second login page: enter the password again and log in
driver.find_element_by_name('p_t02').send_keys('password')
Can anyone help me get past the authentication another way, i.e. without using
shell = win32.Dispatch("WScript.Shell")
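If the pop-up is an HTTP basic-auth dialog, one alternative worth trying is to embed the credentials directly in the URL before calling driver.get(), so the browser answers the prompt itself and no keyboard automation is needed. This is only a sketch under that assumption; it will not help with NTLM or JavaScript dialogs, and some browsers restrict credentials embedded in URLs:

```python
from urllib.parse import quote, urlsplit, urlunsplit

def with_basic_auth(url, user, password):
    # Rebuild the URL as scheme://user:pass@host/path so the browser
    # answers the basic-auth prompt without any SendKeys automation.
    parts = urlsplit(url)
    netloc = "{}:{}@{}".format(quote(user, safe=""), quote(password, safe=""), parts.netloc)
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))

# Usage in the loop (hypothetical credentials):
# driver.get(with_basic_auth(links[i], 'username', 'password'))
```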

Stuck in hCaptcha loop

I am using Selenium and Python plus the 2Captcha API. I was able to retrieve the tokens successfully and even submit the form using JS.
The form is submitted, but the link keeps on reloading, so I cannot get past the hCaptcha loop.
Here is my code:
def Solver(self, browser):
    WebDriverWait(browser, 60).until(Ec.frame_to_be_available_and_switch_to_it((By.XPATH, '//*[@id="cf-hcaptcha-container"]/div[2]/iframe')))
    captcha = CaptchaRecaptcha()
    url = browser.current_url
    code = captcha.HCaptcha(url)
    script = "let submitToken = (token) => {document.querySelector('[name=h-captcha-response]').innerText = token document.querySelector('.challenge-form').submit() }submitToken('{}')".format(code)
    script1 = f"document.getElementsByName('h-captcha-response')[0].innerText='{code}'"
    print(script)
    browser.execute_script(script)
    time.sleep(5)
    browser.switch_to.parent_frame()
    time.sleep(10)
I am using proxies in the webdriver and also switching the user agent.
Someone, please explain what I am doing wrong, or what I should do to break the loop.
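One thing to check in the code above: building the JavaScript with str.format is fragile, because the { } of the arrow function collide with format's replacement fields. Selenium lets you pass the token in as arguments[0] instead, which avoids string interpolation entirely. A sketch, reusing the selectors from the question:

```python
# JavaScript to run via execute_script; the token is passed separately,
# so the string needs no interpolation and no braces.
SUBMIT_JS = (
    "document.querySelector('[name=h-captcha-response]').innerText = arguments[0];"
    "document.querySelector('.challenge-form').submit();"
)

# Usage (requires a live driver):
# browser.execute_script(SUBMIT_JS, code)  # token is injected as arguments[0]
```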

Selenium .click() is not working; instead it is calling an unwanted function

I am using Selenium to interact with a website. I'm using Twitter as an example.
Here is my code:
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys

r = 0

def loadPage():
    driver = webdriver.Firefox()
    driver.set_window_size(800, 800)
    #url = "about:blank"
    url = "http://www.twitter.com/login"
    driver.get(url)
    login(driver)

def login(driver):
    print("login was called")
    name = "session[username_or_email]"
    global r
    try:
        elem = driver.find_element_by_name(name)
        elem.clear()
        elem.send_keys("@someaccount")
        elem.send_keys(Keys.TAB)
        actions = ActionChains(driver)
        actions.send_keys('password')
        actions.send_keys(Keys.RETURN)
        actions.perform()
        r = 0
        retweet(driver)
    except:
        driver.implicitly_wait(3)
        r += 1
        if r <= 5:  # only try this 5 times
            print(r)
            login(driver)
        else:
            print("Could not find element " + name)
            #driver.close()

def retweet(driver):
    g = 'g'
    print(driver.current_url)
    icon = driver.find_elements_by_tag_name(g)
    icon.click()

loadPage()
When the function retweet() is called, icon.click() at line 43 calls the function login(). (The intended behavior is to perform a click, not to call login().)
Using icon.send_keys(Keys.RETURN) at line 43 exhibits the same behavior.
Program output:
login was called
1
login was called
https://twitter.com/login
1
login was called
https://twitter.com/login
1
login was called
2
login was called
3
login was called
4
login was called
5
login was called
Could not find element session[username_or_email]
The reason your login function is called again and again is that an exception is raised at the line icon = driver.find_elements_by_tag_name(g): find_elements (plural) returns a list, so icon.click() fails, and because retweet() is invoked inside login()'s try block, the bare except catches the error and does nothing but call login() again, as per the code above.
Now, why would even find_element_by_tag_name('g') raise NoSuchElementException when there is a plethora of <g> tags on the page? If you view the page in inspection mode, all <g> tags are inside an <svg> tag. SVG elements live in a separate namespace, so XPath has to match them via the name() function. If you use the code below, it will not throw the exception:
def retweet(driver):
    xpathLink = "//*[name()='svg']//*[name()='g']"
    print(driver.current_url)
    icon = driver.find_element_by_xpath(xpathLink)
    icon.click()
But you will still not be clicking the retweet link, as the XPath above will match any icon on the Twitter page. If you want to click the retweet link only, you need to use the XPath below:
xpathRetweet = "//div[@data-testid='retweet']//*[name()='svg']//*[name()='g']"
Note: the above will always click the first retweet link on the page. If you want to click all of them, use find_elements to get the list of all retweet links and click them one by one.

always "wrong password" message in selenium automated login

I'm trying to automate a duolingo login with Selenium with the code posted below.
While everything seems to work as expected at first, I always get a "Wrong password" message on the website after the login button is clicked.
I have checked the password time and time again and even changed it to one without special characters, but the login still fails.
I have seen in other examples that there is sometimes an additional password input field; however, I cannot find one while inspecting the HTML.
What could I be missing?
(Side note: I'm also open to a completely different solution without a webdriver, since I really only want to get to the duolingo.com/learn page to scrape some data, but as of yet I haven't found an alternative way to log in.)
The code used:
from selenium import webdriver
from time import sleep

url = "https://www.duolingo.com/"

def login():
    driver = webdriver.Chrome()
    driver.get(url)
    sleep(2)
    hve_acnt_btn = driver.find_element_by_xpath("/html/body/div/div/div/span[1]/div/div[1]/div[2]/div/div[2]/a")
    hve_acnt_btn.click()
    sleep(2)
    email_input = driver.find_element_by_xpath("/html/body/div[1]/div[3]/div[2]/form/div[1]/div/label[1]/div/input")
    email_input.send_keys("email@email.com")
    sleep(2)
    pwd_input = driver.find_element_by_css_selector("input[type=password]")
    pwd_input.clear()
    pwd_input.send_keys("password")
    sleep(2)
    login_btn = driver.find_element_by_xpath("/html/body/div[1]/div[3]/div[2]/form/div[1]/button")
    login_btn.click()
    sleep(5)

login()
I couldn't post the website's html because of the character limit, so here is the link to the duolingo page: Duolingo
Switch to Firefox or another browser that does not tell the page it is being automated. See my earlier answer to a very similar issue here: https://stackoverflow.com/a/57778034/8375783
Long story short: when you start Chrome through Selenium, it runs with navigator.webdriver=true (you can check it in the console). Pages can detect that flag and block logins or other actions, hence the invalid login. It is a read-only flag set by the browser during startup.
With Chrome I couldn't log in to Duolingo either. After I switched the driver to Firefox, the very same code just worked.
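You can check from Selenium itself what value the page sees; a tiny sketch:

```python
def is_flagged_as_bot(driver):
    # Returns the value of navigator.webdriver as the page sees it
    # (True under stock Chrome automation).
    return bool(driver.execute_script("return navigator.webdriver"))

# Usage with a live driver:
# print(is_flagged_as_bot(driver))
```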
Also, if I may recommend: try to use XPath with attributes rather than absolute paths.
Instead of this:
hve_acnt_btn = driver.find_element_by_xpath("/html/body/div/div/div/span[1]/div/div[1]/div[2]/div/div[2]/a")
You can use:
hve_acnt_btn = driver.find_element_by_xpath('//*[@data-test="have-account"]')
Same goes for:
email_input = driver.find_element_by_xpath("/html/body/div[1]/div[3]/div[2]/form/div[1]/div/label[1]/div/input")
vs:
email_input = driver.find_element_by_xpath('//input[@data-test="email-input"]')

Python splinter can't click on an element by CSS on the page

I am trying to automate a booking process on a travel site using splinter, and I am having trouble clicking a CSS element on the page.
This is my code
import splinter
import time

secret_deals_email = {
    'user[email]': 'adf@sad.com'
}
browser = splinter.Browser()
url = 'http://roomer-qa-1.herokuapp.com'
browser.visit(url)
browser.find_by_css('.blue-btn').first.click()
time.sleep(10)
# browser.find_by_css('.book-button-row.blue-btn').first.click()
browser.fill_form(secret_deals_email)
browser.find_by_name('button').first.click()
time.sleep(10)
browser.find_by_css('.book-button-row-link').first.click()
time.sleep(5)
browser.find_by_css('.entry-white-box.entry_box_no_refund').first.click()
The problem: whenever the code reaches the page where I need to choose the type of purchase, I can't click any of the options on the page.
I keep getting an error that the element does not exist, no matter what I do.
http://roomer-qa-1.herokuapp.com/hotels/atlanta-hotels/ramada-plaza-atlanta-downtown-capitol-park.h30129/44389932?rate_plan_id=1&rate_plan_token=6b5aad6e9b357a3d9ff4b31acb73c620&
This is the link to the page that is causing me trouble please help :).
You need to wait until the element is present on the website. You can poll for it with the browser's is_element_present_by_css method in a while loop:
while not browser.is_element_present_by_css('.entry-white-box.entry_box_no_refund'):
    time.sleep(1)
Splinter's is_element_present_by_css also accepts a wait_time argument (e.g. browser.is_element_present_by_css(selector, wait_time=30)), which can replace the loop entirely.
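To avoid polling forever when the element never appears, the loop can be given a deadline; a minimal sketch:

```python
import time

def wait_for_css(browser, selector, timeout=30, poll=0.5):
    # Poll splinter's presence check until the element shows up
    # or the timeout expires; returns whether it was found.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if browser.is_element_present_by_css(selector):
            return True
        time.sleep(poll)
    return False

# Usage:
# if wait_for_css(browser, '.entry-white-box.entry_box_no_refund'):
#     browser.find_by_css('.entry-white-box.entry_box_no_refund').first.click()
```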
