I am trying to scrape something from a website (Facebook, for example; not using the Graph API, just doing this for learning). I can log in successfully and land on the front page, where I want to scrape some data. The problem is that when I land on the front page, Facebook shows an overlay with a box that says "turn on notifications". Without clicking either "Not Now" or "Turn On", I can't do anything with splinter, and when I try to click, splinter does nothing because the href of those buttons is "#".
When hovering over the button, the footer shows this: (screenshot)
and inspect element shows this: (screenshot)
I tried with another account, but that one shows this layer as the first thing after login: (screenshot)
Now my question is how to click these two types of buttons via splinter or selenium:
the first type of button, which shows "#" as its href
the second type, the Block/Allow prompt that Chrome itself shows for notifications
My code is:
from selenium import webdriver
from splinter import Browser

# Note: this selenium instance is never actually used; splinter starts its own driver
web_driver = webdriver.Chrome('/Users/paul/Downloads/chromedriver/chromedriver')

url = "https://www.example.com"
browser = Browser("chrome")
browser.visit(url)

email_box = '//*[@id="email"]'
find_1 = browser.find_by_xpath(email_box)
find_1.fill("example@gmail.com")

password_box = '//*[@id="pass"]'
find_2 = browser.find_by_xpath(password_box)
find_2.fill("example12345")

button_sub = '//*[@id="u_0_5"]'
find_3 = browser.find_by_xpath(button_sub)
find_3.click()
For testing purposes you can try the "See More" button in the Trending section on Facebook; that also shows "#" as its href. How do I click that?
It's not letting me comment because I don't have enough rep, but have you tried selecting the element by class and then performing .click() on it? That might do the trick, as the href being "#" probably means the button has another purpose.
I have solved my problem. Since that link was "#", clicking it via CSS or any other method just reloaded the page, and the layer appeared again after every reload. So I tried a slightly different solution and clicked it via JavaScript.
First I found the right element to click via the JS console in Chrome:
document.getElementsByClassName('layerCancel _4jy0 _4jy3 _517h _51sy _42ft')[0].click();
This works perfectly in the JS console, so I used splinter's browser.execute_script() method and passed that script to it as the argument:
browser.execute_script("document.getElementsByClassName('layerCancel _4jy0 _4jy3 _517h _51sy _42ft')[0].click()")
And it works perfectly now, just as I wanted. But I still have not found a way to click the browser push-notification buttons "Allow", "Block" etc.
Thanks :)
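(A side note on that remaining problem: the push-notification prompt is browser UI, not page HTML, so no page-level JavaScript or XPath can click it. A common workaround is to stop Chrome from showing the prompt at all by disabling notifications before the browser starts. A minimal sketch with plain selenium, assuming chromedriver is on your PATH; whether your splinter version forwards these options via Browser("chrome", options=...) is something to verify:)

from selenium import webdriver

# Disable the notification prompt entirely so "Allow"/"Block" never appears
options = webdriver.ChromeOptions()
options.add_argument("--disable-notifications")
# 2 = block notifications (Chrome profile preference)
options.add_experimental_option(
    "prefs", {"profile.default_content_setting_values.notifications": 2}
)

driver = webdriver.Chrome(options=options)
driver.get("https://www.facebook.com")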
Related
I am stuck on a problem and have been searching for a couple of days.
I am testing a website using selenium in Python, and I want to check that the "About Us" button works. On that website, clicking "About Us" smoothly scrolls the page and takes you to the "About Us" section.
Now I want to confirm in code whether that click took me to the right section or not.
The first idea that came to my mind was to check whether the main division of the "About Us" section is in the viewport after clicking the button, but I don't know how to check that.
I have gone through the documentation and found the is_displayed() method, but that only tells whether the item is visible or not (its opacity etc.), not whether it is currently in the viewport.
Kindly help me.
Regards
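(For the viewport check specifically, one direct approach is to ask the browser for the element's bounding box via JavaScript. A minimal sketch, assuming driver is your WebDriver and "#about-us" is a hypothetical placeholder for the section's real selector:)

# True if the section's box overlaps the visible viewport at all.
# "#about-us" is a made-up selector; substitute the real one.
in_viewport = driver.execute_script("""
    var el = document.querySelector("#about-us");
    var r = el.getBoundingClientRect();
    return r.top < window.innerHeight && r.bottom > 0;
""")
assert in_viewport, "About Us section did not scroll into view"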
There are many ways to assert that the correct page has loaded; the most common are asserting on the loaded URL and on the page title.
Assert for Correct URL Loaded:
String expectedUrl = "https://www.google.com";
WebDriver driver = new FirefoxDriver();
driver.get(expectedUrl);
try {
    Assert.assertEquals(expectedUrl, driver.getCurrentUrl());
    System.out.println("Navigated to correct webpage");
}
catch (Throwable pageNavigationError) {
    System.out.println("Didn't navigate to correct webpage");
}
Assert for page title:
String expectedTitle = "Google";
String expectedUrl = "https://www.google.com";
WebDriver driver = new FirefoxDriver();
driver.get(expectedUrl);
try {
    Assert.assertEquals(expectedTitle, driver.getTitle());
    System.out.println("Navigated to correct webpage");
}
catch (Throwable pageNavigationError) {
    System.out.println("Didn't navigate to correct webpage");
}
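(The question itself uses Python, where the same two checks are short; a sketch, assuming driver is an existing selenium WebDriver:)

# Same checks in Python selenium; note that the browser may normalize
# the URL, e.g. by appending a trailing slash
driver.get("https://www.google.com")
assert driver.current_url.rstrip("/") == "https://www.google.com", "Didn't navigate to correct webpage"
assert driver.title == "Google", "Unexpected page title"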
From my page I can log in with social accounts like Facebook and Google.
When I click on "login with facebook", the Facebook page shows me the "manage cookies" dialog.
How can I click on "accept all" and store that choice for the next runs?
I'm using Firefox.
I tried this:
Click Login With Facebook
Switch Browser    1    ## browser 0 is my homepage
Click Element    //*[@id="u_0_8_yL"]    ## xpath of the "accept all" button
OR
${options}    Evaluate    sys.modules['selenium.webdriver.firefox.options'].Options()    sys
Call Method    ${options}    add_argument    --disable-notifications
${driver}    Create Webdriver    Firefox    options=${options}
Go To    https://www.facebook.com/login.php?
Click Element    //*[@id="u_0_8_yL"]
I always get the error FAIL: Element with locator '//*[@id="u_0_8_yL"]' not found.
Any help?
Facebook IDs are probably not static; they are generated randomly every time you open the page. Try clicking the element based on its class or text instead:
Click Element //*[text()="accept all"]
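(One caveat: XPath text matching is exact and case-sensitive, so if the button is actually rendered as "Accept all" the locator above won't match. A lowercase-folding variant using XPath 1.0's translate(), as a sketch:)

Click Element    //*[contains(translate(text(), 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'accept all')]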
I am new to Selenium and I am trying to mimic user actions on a site to fetch data from a built-in HTML page on button click. I am able to populate all the field details, but the button click is not working; it looks like the JS code is not running.
I tried many options like adding wait time, ActionChains, etc., but it didn't work. I am providing the site and the code I have written.
from time import sleep

from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains

driver = webdriver.Chrome()
driver.get("https://www1.nseindia.com/products/content/derivatives/equities/historical_fo.htm")
driver.implicitly_wait(10)

# ... assigned values to all the other fields ...
driver.find_element_by_id('rdDateToDate').click()
Dfrom = driver.find_element_by_id('fromDate')
Dfrom.send_keys("02-Oct-2020")
Dto = driver.find_element_by_id('toDate')
Dto.send_keys("08-Oct-2020")
innerHTML = driver.execute_script("document.ready")
sleep(5)
getdata_btn = driver.find_element_by_id('getButton')
ActionChains(driver).move_to_element(getdata_btn).click().click().perform()
I recommend using a full XPath:
import time
from selenium import webdriver

chrome = webdriver.Chrome()
chrome.get("https://www1.nseindia.com/products/content/derivatives/equities/historical_fo.htm")
time.sleep(2)
print("click")
fullxpath = "/html/body/div[2]/div[3]/div[2]/div[1]/div[3]/div/div[1]/div/form/div[19]/p[2]/input"
chrome.find_element_by_xpath(fullxpath).click()
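(If the fixed sleep turns out to be flaky, the same click can wait until the element is actually clickable instead; a sketch reusing the fullxpath variable from above, with the selenium 3-era API the rest of this thread uses:)

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# Wait up to 10 seconds for the button to become clickable, then click it
WebDriverWait(chrome, 10).until(
    EC.element_to_be_clickable((By.XPATH, fullxpath))
).click()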
I have tried clicking the button and it worked with the full XPath ... I thought it was because someone used the same ID twice on the website, but I cannot find a duplicate ... so I have no idea what is going wrong there ...
Good luck :)
I am trying to get the Google Maps embed URL using selenium. I am able to click the share button, and the page shows a modal with a share URL and an embed URL. However, I am unable to switch to the dialog box.
Here is my code:
import time

# browser is an existing selenium WebDriver; code is the place identifier
browser.get('https://www.google.com/maps/place/%s?hl=en' % (code))
time.sleep(3)

share_class = "ripple-container"
buttons = browser.find_elements_by_class_name(share_class)
for but in buttons:
    x = but.text
    if x == 'SHARE':
        but.click()

modal = browser.switch_to.active_element
share = modal.find_element_by_id("modal-dialog")
print(share.text)
Here is the image: (screenshot)
You don't need to switch to the modal dialog; you can access it just like any other HTML on the page. You can simplify your code to:
browser.get('https://www.google.com/maps/place/%s?hl=en'%(code))
browser.find_element_by_xpath("//button/div[.='SHARE']").click()
url = browser.find_element_by_id("last-focusable-in-modal").text
print(url)
But... if you read the dialog, you will see that it states
You can also copy the link from your browser's address bar.
so the URL you navigate to in the first line is exactly what you would copy from the Share link, which means there's really no point in scraping it. You already have the URL.
I am trying to get some comments off the car blog Jalopnik. They don't come with the web page initially; instead, the comments get retrieved with some JavaScript, and you only get the featured ones. I need all the comments, so I would click "All" (between "Featured" and "Start a New Discussion") and get them.
To automate this, I tried learning Selenium. I modified their example script from PyPI, guessing that the code for clicking a link was link = browser.find_element_by_xpath(...) followed by link.click(). It doesn't look like the "All" button (which displays all the comments) was pressed.
Ultimately I'd like to download the HTML of that version to parse.
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
import time

browser = webdriver.Firefox()  # Get local session of firefox
browser.get("http://jalopnik.com/5912009/prius-driver-beat-up-after-taking-out-two-bikers/")  # Load page
time.sleep(0.2)
link = browser.find_element_by_xpath("//a[@class='tc cn_showall']")  # the "All" comments toggle
link.click()
browser.save_screenshot('screenie.png')
browser.close()
Using Firefox with the Firebug plugin, I browsed to http://jalopnik.com/5912009/prius-driver-beat-up-after-taking-out-two-bikers.
I then opened the Firebug console and clicked on ALL; it obligingly showed a single AJAX call to http://jalopnik.com/index.php?op=threadlist&post_id=5912009&mode=all&page=0&repliesmode=hide&nouser=true&selected_thread=null
Opening that url in a new window gets me the comment feed you are seeking.
More generally, if you substitute the appropriate article-ID into that url, you should be able to automate the process without Selenium.
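(A minimal sketch of that Selenium-free approach in Python, using requests; the query parameters are copied from the AJAX call above, and it assumes the endpoint still accepts them:)

import requests

post_id = "5912009"  # substitute the article ID you want
feed_url = (
    "http://jalopnik.com/index.php?op=threadlist"
    "&post_id=" + post_id + "&mode=all&page=0"
    "&repliesmode=hide&nouser=true&selected_thread=null"
)
html = requests.get(feed_url).text  # full comment feed HTML, ready to parse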