Python Selenium can't locate iframe - Gmail account generator - python

I hope you are all doing well. I tried coding a "Gmail creator".
This project is really important to me, so I would be very
thankful if someone can help me. If you want, whoever helps me
get this working will get some money on PayPal. Not much, but some.
Also, sorry for my English, I am from Germany.
So:
I have been struggling for 4.5 hours now.
Selenium just can't find the "firstname" element. I tried the normal way,
driver.find_element by XPath, and it doesn't find it; the same goes for ID and many other locators.
Then I noticed that there is an iframe on the page, so I thought I had to switch to it first.
But now it can't find that iframe either.
Infos:
-Python
-Selenium
-Chrome webdriver
THE SITE:
https://www.google.com/intl/de/gmail/about/#
CODE:
import time
from random import randint

from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException

def main():
    options = webdriver.ChromeOptions()
    options.add_argument("--window-size=950,1000")
    driver = webdriver.Chrome(options=options)
    driver.get("https://www.google.com/intl/de/gmail/about/#")
    time.sleep(randint(1, 3))
    # click "create account"
    while True:
        try:
            driver.find_element_by_xpath("/html/body/div[2]/div[1]/div[4]/ul[1]/li[3]/a").click()
            break
        except NoSuchElementException:
            pass
    # works all well till here
    # now I am struggling to find the firstname element
    #firstname_input = driver.find_element_by_xpath("/html/body/div[1]/div[1]/div[2]/div[1]/div[2]/div/div/div[2]/div/div[1]/div/form/span/section/div/div/div[1]/div[1]/div[1]/div/div[1]/div/div[1]/input")
    #iframe = driver.find_element_by_xpath("/html/body/iframe")
    #iframe = driver.find_element_by_tag_name("iframe")
    #driver.switch_to.frame(iframe)
    ### none of these worked

main()

Clicking the "Create account" button opens a new window,
so you have to switch to that new window.
To do so, do the following:

    new_window = driver.window_handles[1]
    driver.switch_to.window(new_window)

Now you can access the "firstname" element and all the others there.
If you finally want to get back to the first window, do it similarly to the above:

    initial_window = driver.window_handles[0]
    driver.switch_to.window(initial_window)

Also, please never use long absolute locators like "/html/body/div[2]/div[1]/div[4]/ul[1]/li[3]/a".
The "Create an account" link can be located with this XPath: //li/a[@data-action="create an account" and contains(@class, 'button')]
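Since a bare index like window_handles[1] assumes a specific ordering, a more robust pattern is to remember the original handle and switch to whichever handle is new after the click. A minimal sketch of that idea (the helper name and the duck-typed driver argument are my own; note that Selenium's By.XPATH constant is literally the string "xpath", which is used inline here):

```python
def switch_to_new_window(driver, click_xpath):
    """Click a link that opens a new window, then switch to that window.

    Returns the original window handle so the caller can switch back later.
    """
    original = driver.current_window_handle
    driver.find_element("xpath", click_xpath).click()  # "xpath" == By.XPATH
    # Whichever handle we have not seen yet is the freshly opened window.
    new = [h for h in driver.window_handles if h != original]
    if not new:
        raise RuntimeError("no new window appeared after the click")
    driver.switch_to.window(new[0])
    return original
```

Switching back afterwards is then just driver.switch_to.window(original).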

Related

Printing web search results won't work in Selenium script, but works when I type it into the shell

I'm very new to this and learning web scraping in Python by trying to get the search results from the website below after a user types in some information, and then printing the results. Everything works great up until the very last 2 lines of this script. When I include them in the script, nothing happens. However, when I remove them and then just type them into the shell after the script is done running, they work exactly as I'd intended. Can you think of a reason this is happening? As I'm a beginner, I'm also super open to any easier solution you see. All feedback is welcome. Thank you!
#Setup
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
import time
#Open Chrome
driver = webdriver.Chrome()
driver.get("https://myutilities.seattle.gov/eportal/#/accountlookup/calendar")
#Wait for site to load
time.sleep(10)
#Click on street address search box
elem = driver.find_element(By.ID, 'sa')
elem.click()
#Get input from the user
addr = input('Please enter part of your address and press enter to search.\n')
#Enter user input into search box
elem.send_keys(addr)
#Get search results
elem = driver.find_element(By.XPATH, ('/html/body/app-root/main/div/div/account-lookup/div/section/div[2]/div[2]/div/div/form/div/ul/li/div/div[1]'))
print(elem.text)
I haven't used Selenium in a while, so I can only point you in the right direction. It seems to me you need to iterate over the individual entries, and print those, as opposed to printing the entire div as one element.
You should remove the parentheses from the xpath expression
You can shorten the xpath expression as follows:
Code:

    elems = driver.find_elements(By.XPATH, '//*[@class="addressResults"]/div')
    for elem in elems:
        print(elem.text)
You are using an absolute XPath; what you should be looking into are relative XPaths.
Something like this should do it:

    elems = driver.find_elements(By.XPATH, "//*[@id='addressResults']/div")
    for elem in elems:
        ...
I ended up figuring out my problem: I just needed to add a bit that waits until the search results actually load before proceeding with the script. Tossing in a time.sleep(5) did the trick. Eventually I'll add a check that an element has loaded before proceeding, but this lets me continue for now. Thanks everyone for your answers!
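That "wait until it has loaded" check is, at heart, just a polling loop. The sketch below (names are my own) shows the idea; in real Selenium code you would use WebDriverWait(driver, 10).until(EC.presence_of_element_located(...)) from selenium.webdriver.support rather than hand-rolling it:

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    Same idea as Selenium's WebDriverWait: re-check cheaply instead of
    guessing one fixed sleep that is either too short or wastefully long.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll)
```

A fixed time.sleep(5) wastes 5 seconds even when the results appear in 1; a poll loop returns as soon as the condition holds.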

Unable to find element by any means

I'm kind of a newbie with Selenium; I started learning it for my job some time ago. Right now I'm working on a script that opens the browser, goes to the specified website, puts a product ID in the search box, searches, and then opens the product. Once it opens the product, it needs to extract its name and price and write them to a CSV file. I'm struggling with it a bit.
The main problem right now is that Selenium is unable to open the product after searching for it. I've tried by ID, name, and class, and it still didn't work.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import csv
driver = webdriver.Firefox()
driver.get("https://www.madeiramadeira.com.br/")
assert "Madeira" in driver.title
elem = driver.find_element_by_id("input-autocomplete")
elem.clear()
elem.send_keys("525119")
elem.send_keys(Keys.RETURN)
product_link = driver.find_element_by_id('variant-url').click()
The error I get is usually this:
NoSuchElementException: Message: Unable to locate element: [id="variant-url"]
There are multiple elements with id="variant-url", so you could use an index to click on the desired element. You also need to handle the cookie pop-up, I think. Check the code below; hope it will help.
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.keys import Keys
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    # Disable the notifications window which is displayed on the page
    option = Options()
    option.add_argument("--disable-notifications")
    driver = webdriver.Chrome(r"Driver_Path", chrome_options=option)
    driver.get("https://www.madeiramadeira.com.br/")
    assert "Madeira" in driver.title
    elem = driver.find_element_by_id("input-autocomplete")
    elem.clear()
    elem.send_keys("525119")
    elem.send_keys(Keys.RETURN)
    # Click the cookie pop-up button
    accept = driver.find_element_by_xpath("//button[@id='lgpd-button-agree']")
    accept.click()

OR with an explicit wait:

    accept = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//button[@id='lgpd-button-agree']")))
    accept.click()

Then use the XPath with an index like [1], [2] to click the specific element:

    product_link = driver.find_element_by_xpath("(//a[@id='variant-url'])[1]")
    product_link.click()

OR with an explicit wait:

    product_link = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "(//a[@id='variant-url'])[1]")))
    product_link.click()
I used to get this error all the time when I was starting out. The problem is that there are probably multiple elements with the same ID "variant-url". There are two ways you can fix this:
By using driver.find_elements_by_id:

    product_links = driver.find_elements_by_id('variant-url')
    product_links[2].click()

This collects all the elements with id 'variant-url' into a list and then clicks the one at index 2. This works, but it is annoying to find the correct index of the button you want to click, and it also takes a long time if there are many elements with the same ID.
By using XPaths or CSS selectors:
This way is a lot easier, as each element has a specific XPath or selector. It will look like this:

    product_link = driver.find_element_by_xpath("XPATH GOES HERE").click()

To get an XPath or selector: 1. go into developer mode in your browser by inspecting the element, 2. right-click the element in the F12 menu, 3. hover over Copy, 4. move to Copy XPath and click on it.
Hope this helps you :D
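For reference, current Selenium releases deprecate the find_element_by_* helpers in favour of find_element(By..., ...). A sketch of the same indexed click in the newer style (the [1]-indexed XPath comes from the answer above; By.XPATH is literally the string "xpath", used inline here so the function is easy to exercise without a real browser):

```python
def click_first_variant(driver):
    """Click the first link whose id is 'variant-url' (Selenium 4 style).

    In real code this would read driver.find_element(By.XPATH, ...);
    the By.XPATH constant is just the string "xpath".
    """
    link = driver.find_element("xpath", "(//a[@id='variant-url'])[1]")
    link.click()
    return link
```

The same locator works as a CSS selector too, e.g. find_element(By.CSS_SELECTOR, "a#variant-url") for the first match.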

Python Selenium unable to locate element when it's there?

I am trying to scrape products' delivery-date data from a number of lists of product URLs.
I run the Python file in the terminal with multi-processing, so it ultimately opens multiple Chrome browsers (10 to 15 of them), which slows down my computer quite a bit.
My code basically clicks a block that contains shipping options, which shows a pop-up box with the estimated delivery time. I have included an example of a product URL in the code below.
I noticed that some of my Chrome browsers freeze and do not locate the element and click it like I have in my code. I've incorporated refreshing the page into my code just in case that would do the trick, but it doesn't seem like the frozen browsers are even refreshing.
I don't know why they would do that, as I have set the webdriver to wait until the element is clickable. Do I just increase the time in time.sleep() or the seconds in WebDriverWait() to resolve this?
chromedriver = "path to chromedriver"
driver = webdriver.Chrome(chromedriver, options=options)
# Example url
url = "https://shopee.sg/Perfect-Diary-X-Sanrio-MagicStay-Loose-Powder-Weightless-Soft-velvet-Blurring-Face-Powder-With-Cosmetic-Puff-Oil-Control-Smooth-Face-Powder-Waterproof-Applicable-To-Mask-Face-Cinnamoroll-Purin-Gudetama-i.240344695.5946255000?ads_keyword=makeup&adsid=1095331&campaignid=563217&position=1"
driver.get(url)
time.sleep(2)
retries = 0
try:
    WebDriverWait(driver, 60).until(EC.element_to_be_clickable((By.XPATH, '//div[@class="flex flex-column"]//div[@class="shopee-drawer "]'))).click()
    while retries <= 5:
        try:
            shipping_block = WebDriverWait(driver, 60).until(EC.element_to_be_clickable((By.XPATH, '//div[@class="flex flex-column"]//div[@class="shopee-drawer "]'))).click()
            break
        except TimeoutException:
            driver.refresh()
            retries += 1
except (NoSuchElementException, StaleElementReferenceException):
    delivery_date = None
The element you want is only displayed when you hover the mouse over it. The element's type is svg, which you need to handle accordingly.
You can use this XPath to hover the mouse:

    (//*[name()='svg']//*[name()='g'])/*[name()='path'][starts-with(@d, 'm11 2.5c0 .1 0 .2-.1.3l')]

To get the text from the pop-up, check this XPath:

    //div[@class='shopee-drawer__contents']//descendant::div[4]

You can use get_attribute("innerText") to get all the values.
You can check the same answer here; I hope it will help.
First, don't use the headless option in webdriver.ChromeOptions(), so that the browser window pops up and you can observe whether the element is actually clicked.
Second, your code just clicks the element; it doesn't GET anything. After the click opens the pop-up, it should do something like this:

    items = WebDriverWait(driver, 60).until(
        EC.visibility_of_all_elements_located((By.CLASS_NAME, 'load-done'))
    )
    for item in items:
        deliver_date = item.get_attribute('text')
        print(deliver_date)

Selenium webdriver.get() method doesn't always work

link = "https://www.google.com"
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--proxy-server=%s' % str(xxx))
chrome = webdriver.Chrome(chrome_options=chrome_options)
time.sleep(3)
chrome.get(link)
print("po get")
time.sleep(1)
chrome.get(link)
time.sleep(15)
Hello, I have had a problem with Selenium for a long time and I would like to find a way to fix it.
The problem is that almost every time I run a script that opens Selenium, even when I use it
for a test such as:

    from selenium import webdriver
    chrome = webdriver.Chrome()
    chrome.get(https://www.google.com)

it still sometimes doesn't load the website. I thought it was because of how slowly Selenium opens, but even after it has opened nicely it doesn't load the page; sadly, it just gets stuck on an empty browser with the URL in the address bar... any idea what I should do to fix it?
Okay, after a few hours I decided to give it a little try and changed ( "" ) to ( ' ' ), and as far as I can see it works :D I don't know why it had a problem with the "" string.
Here is the edited line of my code:

    chrome.get('https://www.google.com')

(I've tried it with proxies in a loop that started the webdriver
100 times, and every time after I changed it, it passed.)
You can use driver.navigate().to(""); in the Java bindings; in Python the equivalent is simply driver.get("...").
Also, as I can see, you might be missing the string's double quotes " " here: chrome.get(https://www.google.com)
I think it's too late for an answer, but I want to answer anyway. You may have already solved the problem, but if not, check this out:
Different links can navigate you to the same site. This is not a code error, but it can cause selenium.common.exceptions.NoSuchElementException: Message: no such element.
If you have logged into a site and want to get back to the login screen, you have to log out of your account first; otherwise the site will automatically navigate to the home screen because you are already logged in.
That's all I wanted to say.

Navigate through all the members of Research Gate Python Selenium

I am a rookie in Python Selenium. I have to navigate through all the members on the members page of an institution on ResearchGate, which means I have to click the first member to go to their profile page and then go back to the members page to click the next member. I tried a for loop, but every time it clicks only on the first member. Could anyone please guide me? Here is what I have tried.
from selenium import webdriver
import urllib

driver = webdriver.Firefox("/usr/local/bin/")
university = "Lawrence_Technological_University"
members = driver.get('https://www.researchgate.net/institution/' + university + '/members')
membersList = driver.find_element_by_tag_name("ul")
list = membersList.find_elements_by_tag_name("li")
for member in list:
    driver.find_element_by_class_name('display-name').click()
    print(driver.current_url)
    driver.back()
You are not even using the list members in your for loop. Also, the state of the page changes after navigating to a different page and coming back, so you need to find the elements again. One approach to handle this is given below:

    for i in range(len(list)):
        membersList = driver.find_element_by_tag_name("ul")
        element = membersList.find_elements_by_tag_name("li")[i]
        element.find_element_by_class_name('display-name').click()
        driver.back()
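An alternative that sidesteps stale elements entirely is to collect all the profile URLs first and only then navigate, since plain strings cannot go stale. A sketch of that pattern (the helper names and duck-typed driver are my own; the class name 'display-name' comes from the question, and find_elements("class name", ...) is the newer Selenium spelling, where By.CLASS_NAME is literally the string "class name"):

```python
def collect_member_urls(driver):
    """Gather every member profile URL before navigating anywhere.

    Hrefs are plain strings, so they survive the page navigations that
    would make the WebElement references stale.
    """
    links = driver.find_elements("class name", "display-name")
    return [a.get_attribute("href") for a in links]

def visit_members(driver):
    # Navigate directly to each profile; no driver.back() needed.
    for url in collect_member_urls(driver):
        driver.get(url)
```

This also avoids re-finding the ul/li elements on every loop iteration.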
