Accept Facebook cookies on Firefox with Robot Framework (Python)

From my page I can log in with social providers like Facebook and Google.
When I click on "Login with Facebook", the Facebook page shows the "Manage cookies" prompt.
How can I click on "Accept all" and store the consent for the next runs?
I'm using Firefox.
I tried this:
Click Login With Facebook
Switch Browser    1    # browser 0 is my homepage
Click Element    //*[@id="u_0_8_yL"]    # xpath of the "Accept all" button
or this:
${options}    Evaluate    sys.modules['selenium.webdriver.firefox.options'].Options()    sys
Call Method    ${options}    add_argument    --disable-notifications
${driver}    Create Webdriver    Firefox    options=${options}
Go To    https://www.facebook.com/login.php?
Click Element    //*[@id="u_0_8_yL"]
I always get the error: FAIL: Element with locator '//*[@id="u_0_8_yL"]' not found.
Any help?

Facebook's ids are probably not static; they are generated at random every time you open the page. Try to click the element based on its class or text instead:
Click Element    //*[text()="Accept all"]
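One caveat: XPath's `text()` comparison is case-sensitive, so a locator that hard-codes one casing will miss the button if Facebook renders it as "Accept all" rather than "accept all". A small helper (a sketch in plain Python; the function name is made up) can build a case-insensitive locator with XPath 1.0's `translate()`, and the resulting string can be passed straight to `Click Element`:

```python
def case_insensitive_text_xpath(text):
    """Build an XPath 1.0 locator matching any element whose normalized
    text equals `text`, ignoring ASCII case, via translate()."""
    upper = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    lower = upper.lower()
    return ('//*[translate(normalize-space(text()), "%s", "%s") = "%s"]'
            % (upper, lower, text.lower()))

locator = case_insensitive_text_xpath("Accept all")
# In Robot Framework this string could be generated with Evaluate
# and then used as:  Click Element    ${locator}
```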

Related

Why is selenium not able to find element with ID, even when it is not in an iframe?

I am trying to make an automatic program for logging in to GitHub. I could find the sign-in link, but after that I could not find the Username field. I have confirmed that the element is definitely not in an (i)frame. I also tried an alternative with a CSS selector.
Here is the code I tried:
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.options import Options

chrome_opt = Options()
chrome_opt.add_experimental_option("detach", True)  # type: ignore[unknown]
auto = Chrome(options=chrome_opt)
auto.get("https://github.com")

signin_link = auto.find_element("link text", "Sign in")
signin_link.click()
username = auto.find_element("id", "login_field")
username.send_keys("ArnabRollin")  # type: ignore[unknown]
# FIXME
The type-ignore comments are there because of 'strict mode' type checking in VS Code. Also, after 5 tries of running it, it finally worked, but when I ran it again it didn't.
Your code is looking for elements on the page https://github.com, the one used in the get() call.
Instead of clicking the link element:
signin_link = auto.find_element("link text", "Sign in")
signin_link.click()
get the link target with the WebDriver and navigate to it directly:
signin_link = auto.find_element("link text", "Sign in").get_attribute('href')
auto.get(signin_link)
auto.get(url) saves the new page context into the driver; after the sign-in completes, a new page context will be needed again.
Note: I'm not sure scraping this website is ethical, and besides, they have a CAPTCHA.
You can use this CSS selector:
username = auto.find_element(By.CSS_SELECTOR, "input.js-login-field")
Additionally, when you go to github.com and click on Sign in, the URL changes to https://github.com/login.

Selenium WebDriver Python 3 Refresh Page Until Text Found And Afterwards Click To Buy

I am a total noob trying to put together code that will do the things below:
1.) Log in with my credentials on the page https://www.alza.sk/EN/
2.) Refresh the page at a specific interval, for example every 60 seconds
3.) Check for the "In stock" text on any product displayed, or on a specific product opened in one browser tab
4.) Click the "Add to cart" button beside that product and continue to the shopping cart by clicking the "Proceed to Checkout" button, then click "Continue". If a popup window appears with the two buttons "Do not add anything" and "Add the selected items to your cart", click the first button, "Do not add anything". On the next page choose the "Bratislava - main shop" checkbox and then click the "Confirm your selection" button in the popup window. Afterwards click the "All payment options" section so all options are displayed and choose the "Cash / Card (when collected)" checkbox. After that click the "Continue" button. This is the step where I need to log in by entering my credentials, so I'll update that later today; but I suppose it would be best to log in in step 1.) as I described.
This is all I could get together so far; I am unable to add everything I described so that it works.
from selenium import webdriver
import time

driver = webdriver.Chrome()
driver.get("https://www.alza.sk/EN/")
while True:
    driver.refresh()
    time.sleep(60)
from selenium import webdriver
from selenium.webdriver.common.by import By
import time

driver = webdriver.Chrome()
driver.get("input url here")

expected_text = "Available"
found = False
while not found:
    try:
        actual_text = driver.find_element(By.XPATH, '//button[text()="Available"]').text
        found = (actual_text == expected_text)
    except Exception:
        found = False
    if not found:
        driver.refresh()
        time.sleep(60)  # time interval, e.g. 60 seconds

driver.find_element(By.XPATH, '//button[text()="Available"]').click()
# add other steps
This is just a suggestion on how to proceed with the script. If you can update the question with the URL and proper code, it will be easier to give a proper solution.
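The refresh-and-check loop can also be factored into a small reusable helper. This is only a sketch with hypothetical names (`refresh_until`, `check`, `refresh` are mine, not from the answer); `check` would wrap the `find_element` call and `refresh` would wrap `driver.refresh`:

```python
import time

def refresh_until(check, refresh, timeout=600, interval=60):
    """Call check() repeatedly until it returns True or `timeout` seconds
    expire, invoking refresh() and sleeping `interval` seconds between
    attempts. Returns True if check() ever succeeded, else False."""
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        refresh()
        time.sleep(interval)

# e.g. refresh_until(lambda: "In stock" in driver.page_source,
#                    driver.refresh, timeout=3600, interval=60)
```

Keeping the polling logic separate from the click-through steps makes the interval and timeout easy to tune without touching the rest of the script.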

How to click dynamic buttons whose link is "#" from selenium and splinter?

I am trying to scrape something from a website (Facebook, for example; not using the Graph API, just doing it for learning). I log in successfully and land on the front page, where I want to scrape some data. The problem is that when I land on the front page, Facebook shows an overlay with a box that says "turn on notifications". Without clicking one of the two buttons, "Not Now" or "Turn On", I can't do anything with splinter, and when I tried to click, splinter did nothing because the link of those buttons is "#".
When hovering over the button, the status bar shows "#" as the link target, and inspecting the element shows the same. I tried with another account, but that account shows this layer as the first thing after login.
Now my question is how to click these two types of buttons via splinter or selenium:
the first type of button, which shows "#" as its href
the second type, which Chrome shows for allowing/blocking notifications
My code is :
from selenium import webdriver
from splinter import Browser

web_driver = webdriver.Chrome('/Users/paul/Downloads/chromedriver/chromedriver')
url = "https://www.example.com"
browser = Browser("chrome")
visit_browser = browser.visit(url)

email_box = '//*[@id="email"]'
find_1 = browser.find_by_xpath(email_box)
find_1.fill("example@gmail.com")

password_box = '//*[@id="pass"]'
find_2 = browser.find_by_xpath(password_box)
find_2.fill("example12345")

button_sub = '//*[@id="u_0_5"]'
find_3 = browser.find_by_xpath(button_sub)
find_3.click()
For testing purposes you can try the "See more" button in the Trending section on Facebook; that also shows "#". How do I click that?
Not letting me comment because I don't have enough rep ... but have you tried selecting the element by class and then calling .click() on it? That might do the trick, as the href being "#" probably means the button has another purpose.
I have solved my problem. Since the link was "#", clicking it via CSS or other methods just reloaded the page, and the layer appeared again after every reload. So I tried a slightly different solution and clicked it via JavaScript:
First I found the right element for clicking via the JS console in Chrome:
document.getElementsByClassName('layerCancel _4jy0 _4jy3 _517h _51sy _42ft')[0].click();
This worked perfectly in the JS console, so I used splinter's browser.execute_script() method and passed that script as its argument:
browser.execute_script("document.getElementsByClassName('layerCancel _4jy0 _4jy3 _517h _51sy _42ft')[0].click()")
It now works exactly as I wanted. But I still haven't found a way to click the browser push-notification buttons "Allow", "Block", etc.
Thanks :)
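Splinter can also click such elements without dropping to raw JavaScript: `browser.find_by_css(...)` accepts a selector that requires all of an element's classes at once. A tiny helper (a sketch; the function name is made up) converts a `class` attribute value copied from the inspector into such a selector:

```python
def classes_to_css(class_attr):
    """Turn a space-separated class attribute value (as copied from the
    browser inspector) into a CSS selector requiring all those classes
    on a single element."""
    return "".join("." + cls for cls in class_attr.split())

selector = classes_to_css("layerCancel _4jy0 _4jy3 _517h _51sy _42ft")
# e.g. browser.find_by_css(selector).first.click()
```

This sidesteps the "#" href entirely, since the click is dispatched on the element rather than followed as a link.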

I am trying to click a button/element on a popup window using Selenium2Library in Robot Framework, but I get the following error

I am trying to click a button/element on a popup window using Selenium2Library in Robot Framework, but I get an error.
This is the code:
Test Click Like
    Wait Until Page Contains Element    //iframe[@title="Facebook Social Plugin"]
    Select Frame    //iframe[@title="Facebook Social Plugin"]
    # Frame Should Contain    //*[@id="u_0_0"]/div/div/div[2]/div[1]/a/img[@src="https://www.facebook.com/rsrc.php/v1/yi/r/odA9sNLrE86.jpg"]
    Click Element    xpath=//*[@id='u_0_0']/div/div/div[3]/div[1]/div[2]/div/div[2]/div/div[2]/a[1]/em/*[@data-intl-translation="Thích"]
    Sleep    5s
    Capture Page Screenshot
    Wait Until Page Contains Element    xpath=//*[@id='u_0_0']/div/div/div[3]/div[1]/div[2]/div/div[2]/div/div[2]/a[1]/em/*[@data-intl-translation="Bỏ thích"]
    Sleep    5s
    Capture Page Screenshot
This is the error:
ValueError: Element locator 'xpath=//*[@id='u_0_0']/div/div/div[3]/div[1]/div[2]/div/div[2]/div/div[2]/a[1]/em/*[@data-intl-translation="Thích"]' did not match any elements.
I have this HTML code:
<a href="#">
<em class="_4qba" data-intl-translation="Thích" data-intl-trid="">Thích</em>
</a>
You have to select that pop-up window first before performing any actions on it.
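Long positional chains like `/div/div/div[3]/div[1]/...` also break as soon as the markup shifts; keying on the stable `data-intl-translation` attribute is shorter and more robust. The sketch below uses Python's stdlib ElementTree only to demonstrate that such a locator matches the HTML fragment above; the same XPath would be used as a Selenium2Library locator:

```python
import xml.etree.ElementTree as ET

# the HTML fragment from the question
snippet = '''<a href="#">
  <em class="_4qba" data-intl-translation="Thích" data-intl-trid="">Thích</em>
</a>'''

root = ET.fromstring(snippet)
# the same predicate works as a Robot Framework locator:
#   Click Element    xpath=//em[@data-intl-translation="Thích"]
em = root.find('.//em[@data-intl-translation="Thích"]')
```

Because the attribute identifies the button directly, the locator no longer depends on the surrounding div structure.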

Selenium Web Driver Navigate to Page with POST Request

I'm trying to navigate to a page using Selenium in Python with a POST request.
I've got the request working with seleniumrequests:
response = driver.request('POST', 'http://example.com', data={"agree": "1"})
but it only returns a 200 response; I'm trying to actually navigate to the page.
It may be a bit slower, but couldn't you have your driver find the element (a checkbox or whatever it is you have to click to say you agree) and click it, then click the agree button to submit?
So use something like:
from selenium.webdriver.common.by import By

self.driver = webdriver.Firefox()
driver = self.driver
driver.get('http://example.com')
driver.find_element(By.ID, 'idOfCheckBox').click()
Then if there is another button to submit, use another driver.find_element(By.ID, 'idOfButton').click().
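If clicking through the form doesn't fit, another common workaround (a sketch; the helper name is made up, and values are not escaped, so it only suits simple tokens) is to have `execute_script` inject and submit a hidden form, so the browser itself navigates to the POST response:

```python
def build_post_form_script(url, data):
    """Return JavaScript that creates a form, fills it with hidden inputs
    taken from `data`, appends it to the document, and submits it as a
    POST, making the browser navigate to the response page."""
    inputs = "".join(
        ('var i = document.createElement("input");'
         'i.type = "hidden"; i.name = "%s"; i.value = "%s";'
         'f.appendChild(i);') % (name, value)
        for name, value in data.items()
    )
    return (
        ('var f = document.createElement("form");'
         'f.method = "POST"; f.action = "%s";') % url
        + inputs
        + 'document.body.appendChild(f); f.submit();'
    )

script = build_post_form_script("http://example.com", {"agree": "1"})
# driver.execute_script(script)  # browser then loads the POST response
```

Unlike seleniumrequests' driver.request(), this keeps the navigation inside the driver, so the resulting page context is available for further find_element calls.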
