Inconsistent results when using Selenium vs. clicking manually - Python

I am trying to automate logging into my Salesforce account. When I run my code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

browser = webdriver.Firefox()
browser.get("https://xxxxx.my.salesforce.com/?SAMLRequest=&startURL=%2Fidp%2Flogin%3Fapp%3D0sp0g000000Gmhj&un=xxxxx.xxxxx%40xxxxx.com")
# fill in the credentials
elem = browser.find_element_by_id("username")
elem.send_keys("xxxxx.xxxx@xxxxx.com")
elem_pass = browser.find_element_by_id("password")
elem_pass.send_keys("xxxxxxx")
# tick "remember me" and submit
rem_me = browser.find_element_by_id("rememberUn")
rem_me.click()
elem.send_keys(Keys.ENTER)
As you can see, I load the URL, fill in the username and password, and tick "remember me".
When I run this with Selenium, it goes to an email 2FA authentication page.
But when I do it manually:
Copy the url mentioned above.
Paste it into the address bar of firefox browser.
The username and password fields show up already populated.
When I hit enter, it logs me in. (No 2FA).
Is Salesforce somehow detecting that the request is from Selenium?
And is there a way to get around it?
Could this be related to this question:
Different results when using Selenium + Python

Yup, I got it resolved. I had to import cookies from Firefox and use them with Selenium (note the raw strings for the Windows paths):
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import os

os.chdir(r"C:\Users\tsingh\Desktop\Cookies")
ffprofile = webdriver.FirefoxProfile(r"C:\Users\tsingh\Desktop\Cookies")
browser = webdriver.Firefox(firefox_profile=ffprofile)
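Note that in Selenium 4 the firefox_profile= keyword is deprecated. A minimal sketch of the same idea using Options (the profile path is the one from the answer above; adjust to your own):
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
# Selenium 4 accepts a profile path (or a FirefoxProfile) via options.profile.
options.profile = r"C:\Users\tsingh\Desktop\Cookies"
browser = webdriver.Firefox(options=options)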

Related

SELENIUM PYTHON: How to pass automatic security validation?

I am trying to get into this website: "https://core.cro.ie/".
I can get in from a normal browser, but I can't get in using Selenium.
My code looks like this:
from selenium import webdriver
from selenium.webdriver.edge.service import Service
from webdriver_manager.microsoft import EdgeChromiumDriverManager

site = "https://core.cro.ie/"
driver = webdriver.Edge(service=Service(EdgeChromiumDriverManager().install()))
driver.get(site)
driver.maximize_window()
Any ideas? Thank you very much
This code works fine for navigation (I don't have the Edge browser):
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
site = "https://core.cro.ie/"
driver = webdriver.Firefox()
driver.get(site)
driver.maximize_window()
I installed Selenium before running the test:
pip install selenium
It seems like the website has some sort of bot-prevention mechanism, but plain navigation works fine.
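If the site is flagging automation, one commonly tried mitigation (a sketch only; it is not guaranteed to beat any particular bot-detection vendor) is to start the Chromium-based browser with its automation hints disabled:
from selenium import webdriver

options = webdriver.ChromeOptions()
# Drop the "controlled by automated test software" infobar and switch.
options.add_experimental_option("excludeSwitches", ["enable-automation"])
# Keep the navigator.webdriver flag from being set by Blink.
options.add_argument("--disable-blink-features=AutomationControlled")

driver = webdriver.Chrome(options=options)
driver.get("https://core.cro.ie/")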

How do I test every link on a webpage with Selenium using Python and pytest, or Selenium Firefox IDE?

So I'm trying to learn Selenium for automated testing. I have the Selenium IDE and the WebDrivers for Firefox and Chrome, both in my PATH, on Windows. I've been able to get basic testing working, but this part of the testing is eluding me. I've switched to using Python because the IDE doesn't have enough features; you can't even click the back button.
I'm pretty sure this has been answered elsewhere but none of the recommended links provided an answer that worked for me. I've searched Google and YouTube with no relevant results.
I'm trying to find every link on a page, which I've been able to accomplish, even listing them (I would think this would just be a default test). I even got it to print the text of each link, but when I try to click the link it doesn't work. I've tried waits of various sorts, including visibility_of_any_elements_located and time.sleep(5), to wait before trying to click the link.
I've tried this to click the link after waiting: self.driver.find_element(By.LINK_TEXT, ("lnktxt")).click(). But none of these work. The code below does run, listing the URL text, the URL, and the URL text again via a variable.
I guess I'm not sure how to get a variable into the By.LINK_TEXT or ...by_link_text statement, assuming that would work. I figured if I got it into a variable I could use it again. That worked for print but not for click().
I basically want to be able to load a page, list all links, click a link, go back and click the next link, etc.
The only post this site recommended that might be helpful was...
How can I test EVERY link on the WEBSITE with Selenium
But it's Java-based, and I've been trying to learn Python for the past month, so I'm not ready to learn Java just to make this work. The IDE does not seem to have an easy option for this, or from all my searches it's not documented well.
Here is my current Selenium code in Python.
import pytest
import time
import json
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.common.keys import Keys

wait_time_out = 15

class TestPazTestAll2():
    def setup_method(self, method):
        self.driver = webdriver.Firefox()
        self.vars = {}

    def teardown_method(self, method):
        # teardown quits the driver, so the test itself doesn't need to
        self.driver.quit()

    def test_pazTestAll(self):
        self.driver.get('https://poetaz.com/poems/')
        lnks = self.driver.find_elements_by_tag_name("a")
        print("Total Links", len(lnks))
        # traverse the list and print each link's text and href
        for lnk in lnks:
            print(lnk.get_attribute("text"))
            lnktxt = lnk.get_attribute("text")
            print(lnk.get_attribute("href"))
            print(lnktxt)
Again, I'm sure I missed something in my searches but after hours of searching I'm reaching out.
Any help is appreciated.
I basically want to be able to load a page, list all links, click a link, go back and click the next link, etc.
I don't recommend doing this. Selenium and browser manipulation are slow, and you're not really using the browser for anything that actually needs a browser.
What I recommend is simply sending requests to those scraped links and asserting response status codes.
import requests

link_elements = self.driver.find_elements_by_tag_name("a")
urls = map(lambda l: l.get_attribute("href"), link_elements)
for url in urls:
    response = requests.get(url)
    assert response.status_code == 200
(You also might need to prepend some base url to those strings found in href attributes.)
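If you do want the click-and-go-back flow from the question anyway, keep in mind that navigating away invalidates the WebElements you already found (you get a StaleElementReferenceException after driver.back()). A minimal sketch that sidesteps this by collecting the href strings first (the URL is the one from the question; the rest is illustrative):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://poetaz.com/poems/")

# Collect plain href strings up front; strings don't go stale, WebElements do.
hrefs = [a.get_attribute("href")
         for a in driver.find_elements(By.TAG_NAME, "a")
         if a.get_attribute("href")]

for href in hrefs:
    driver.get(href)  # visit the link
    driver.back()     # return to the listing page

driver.quit()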

Selenium not storing cookies

I'm trying to automate a game I want to get ahead in called Pokemon Vortex. When I log in using Selenium it works just fine, but when I attempt to load a page that requires a user to be logged in, I am sent right back to the login page. (I have tried it outside of Selenium with the same browser, Chrome, and it works.)
This is what I have
import time
from selenium import webdriver
from random import randint

driver = webdriver.Chrome(r'C:\Program Files (x86)\SeleniumDrivers\chromedriver.exe')
driver.get('https://zeta.pokemon-vortex.com/dashboard/')
time.sleep(5)  # let the user actually see something!

usernameLoc = driver.find_element_by_id('myusername')
passwordLoc = driver.find_element_by_id('mypassword')
usernameLoc.send_keys('myusername')
passwordLoc.send_keys('12345')

submitButton = driver.find_element_by_id('submit')
submitButton.submit()
time.sleep(3)

driver.get('https://zeta.pokemon-vortex.com/map/10')
time.sleep(10)
I'm using Python 3.6+ and I literally just installed Selenium today, so it's up to date. How do I force Selenium to hold onto cookies?
Using a pre-defined user profile might solve your problem. That way your cookies and cache are saved to disk and will not be deleted between sessions.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--user-data-dir=C:/Users/user_name/AppData/Local/Google/Chrome/User Data")
driver = webdriver.Chrome(options=options)
driver.get("https://xyz.com")
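If pointing Selenium at your live profile is inconvenient (Chrome refuses to start if that profile is already open in another Chrome window), an alternative sketch is to persist the cookies yourself with pickle; get_cookies() and add_cookie() are standard Selenium calls, and the file name here is arbitrary:
import pickle
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://zeta.pokemon-vortex.com/dashboard/')
# ... log in as before, then save the session cookies:
with open('cookies.pkl', 'wb') as f:
    pickle.dump(driver.get_cookies(), f)

# On a later run: visit the domain first (add_cookie requires it), then restore.
driver.get('https://zeta.pokemon-vortex.com/dashboard/')
with open('cookies.pkl', 'rb') as f:
    for cookie in pickle.load(f):
        driver.add_cookie(cookie)
driver.get('https://zeta.pokemon-vortex.com/map/10')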

Difference between chromedriver and PhantomJS with Python

I'm working on making a web crawler with Python using Selenium. I successfully got contents using chromedriver, but a problem occurred when I tried to crawl headlessly through PhantomJS: find_element_by_id and find_element_by_name did not work. Is there any difference between these two drivers? I am making this headless because I want to run the code on an Ubuntu server as a batch job without GUI support.
My script is as below.
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup
import re

# driver = webdriver.PhantomJS('/Users/user/Downloads/phantomjs-2.1.1-macosx/bin/phantomjs')
# driver = webdriver.Chrome('/Users/user/Downloads/chromedriver')
driver = webdriver.PhantomJS()
driver.set_window_size(1120, 550)
driver.get(url)  # url is defined elsewhere in the script
driver.implicitly_wait(3)

# here I tried two different find_element calls but both didn't work
user = driver.find_element(by=By.NAME, value="user:email")
password = driver.find_element_by_id('user_password')
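For what it's worth, PhantomJS has been unmaintained since 2018 and newer Selenium releases dropped support for it. A common replacement for GUI-less crawling on a server is headless Chrome; a minimal sketch (assuming chromedriver is available on PATH):
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")  # run without a visible window
options.add_argument("--window-size=1120,550")

driver = webdriver.Chrome(options=options)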

Use and retain the information of the current login session with Selenium

I am automating certain tasks in the web browser with Selenium. Suppose I open a webpage, say Facebook or Quora, using the webdriver: the page that opens asks for the username and password again, even though I am still logged in in my normal browser.
from selenium import webdriver
b = webdriver.Chrome()
b.get("https://www.quora.com/")
I want the webdriver to use and retain the information of the current session so that I am able to land on my profile without having to enter my username and password again. How can I achieve this? Thanks.
Edit 1: I tried pointing it to the Chrome user data, but it isn't working.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
chrome_option = Options()
chrome_option.add_argument('user-data-dir=~/Library/Application Support/Google/Chrome/Default')
b = webdriver.Chrome(executable_path="/Users/mymac/Downloads/chromedriver", chrome_options=chrome_option)
b.get("https://www.quora.com/")
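One likely culprit in that attempt (an educated guess, not something confirmed in the thread): Chrome does not expand ~ in user-data-dir, and the value should be the User Data root, with the profile chosen separately. A sketch with the path expanded in Python first ("Default" is the usual profile name; adjust if yours differs):
import os
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Expand ~ ourselves; Chrome treats the argument value as a literal path.
profile_root = os.path.expanduser("~/Library/Application Support/Google/Chrome")

chrome_option = Options()
chrome_option.add_argument(f"user-data-dir={profile_root}")
chrome_option.add_argument("profile-directory=Default")  # profile folder inside the root

b = webdriver.Chrome(options=chrome_option)
b.get("https://www.quora.com/")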
