I am making an automated Python script that opens chromedriver in a loop until it finds a specific element on the webpage the driver gets (using Selenium). This obviously eats up resources eventually, as it is constantly opening and closing the driver in the loop.
Is there a way to reuse an existing chromedriver window instead of just opening and closing one in a loop until a conditional is satisfied?
If that is not possible, is there an alternative approach you would recommend?
Thanks!
Script:
from selenium.webdriver.common.keys import Keys
from selenium import webdriver
import pyautogui
import time
import os
def snkrs():
    driver = webdriver.Chrome('/Users/me/Desktop/Random/chromedriver')
    driver.get('https://www.nike.com/launch/?s=in-stock')
    time.sleep(3)
    pyautogui.click(184, 451)
    pyautogui.click(184, 451)
    current = driver.current_url
    driver.get(current)
    time.sleep(3.5)
    elem = driver.find_element_by_xpath("//*[@id='j_s17368440']/div[2]/aside/div[1]/h1")
    ihtml = elem.get_attribute('innerHTML')
    if ihtml == 'MOON RACER':
        os.system("clear")
        print("SNKR has not dropped")
        time.sleep(1)
    else:
        print("SNKR has dropped")
        pyautogui.click(1303, 380)
        pyautogui.hotkey('command', 't')
        pyautogui.typewrite('python3 messages.py')  # Notifies me by text
        pyautogui.press('return')
        pyautogui.click(928, 248)
        pyautogui.hotkey('ctrl', 'z')  # Kills the bash loop

snkrs()
Bash loop file:
#!/bin/bash
while [ 1 ]
do
python snkrs.py
done
You are defining a method that contains the chromedriver launch and running through it once per call (the looping happens in your bash script), so each call generates a new browser instance. Instead of doing that, do something more like this...
url = 'https://www.nike.com/launch/?s=in-stock'
driver.get(url)

# toggle grid view
WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[aria-label='Show Products as List']"))).click()

# wait for shoes to drop
while not driver.find_elements(By.XPATH, "//div[@class='figcaption-content']//h3[contains(.,'MOON RACER')]"):
    print("SNKR has not dropped")
    time.sleep(300)  # 300s = 5 mins, don't spam their site
    driver.get(url)
print("SNKR has dropped")
I simplified your code, changed the locator, and added a loop. The script launches a browser (once), loads the site, clicks the grid-view toggle button, and then looks for the desired shoe in the list. If the shoe isn't there, it just sleeps for 5 minutes, reloads the page, and tries again. There's no need to refresh the page every second: you're going to draw attention to yourself, and the stock isn't going to be refreshed on the site that often anyway.
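The snippet above omits its imports, and the polling pattern it uses can be factored out. Here's a minimal sketch of the same single-browser polling idea with the Selenium lookup abstracted behind a callable; the names `poll_until` and `check` are illustrative, not from the original:

```python
import time

def poll_until(check, interval=300, max_tries=None):
    """Call check() repeatedly until it returns something truthy, then return it.

    With Selenium, check would wrap one reload-and-look cycle, e.g.:
        def check():
            driver.get(url)
            return driver.find_elements(By.XPATH, "//h3[contains(., 'MOON RACER')]")
    so the browser is launched once and reused across iterations.
    """
    tries = 0
    while True:
        result = check()
        if result:
            return result
        tries += 1
        if max_tries is not None and tries >= max_tries:
            return None
        time.sleep(interval)  # be polite; don't hammer the site
```

Because `check` owns all the browser interaction, the loop itself never opens or closes a driver.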
If you're just trying to wait until something changes on the page then this should do the trick:
snkr_has_not_dropped = True
while snkr_has_not_dropped:
    elem = driver.find_element_by_xpath("//*[@id='j_s17368440']/div[2]/aside/div[1]/h1")
    ihtml = elem.get_attribute('innerHTML')
    if ihtml == 'MOON RACER':
        print("SNKR has not dropped")
        driver.refresh()
    else:
        print("SNKR has dropped")
        snkr_has_not_dropped = False
You just need to refresh the page and try again.
I am trying to scrape a website that populates a list of providers. The site makes you go through a list of options, and then it finally populates the list of providers in a pop-up that has endless/continuous scroll.
I have tried:
from selenium.webdriver.common.action_chains import ActionChains
element = driver.find_element_by_id("my-id")
actions = ActionChains(driver)
actions.move_to_element(element).perform()
but this code didn't work.
I tried something similar to this:
driver.execute_script("arguments[0].scrollIntoView();", list )
but this didn't move anything; it just stayed on the first 20 providers.
I tried this alternative:
main = driver.find_element_by_id('mainDiv')
recentList = main.find_elements_by_class_name('nameBold')
for list in recentList:
    driver.execute_script("arguments[0].scrollIntoView(true);", list)
    time.sleep(20)
but ended up with this error message:
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
The code that worked the best was this one:
while True:
    # Scroll down to bottom
    element_inside_popup = driver.find_element_by_xpath('//*[@id="mainDiv"]')
    element_inside_popup.send_keys(Keys.END)
    # Wait to load page
    time.sleep(3)
but this is an endless scroll that I don't know how to stop, since "while True:" will always be true.
Any help with this would be great. Thanks in advance.
This is my code so far:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.ui import Select
import pandas as pd
PATH = '/Users/AnthemScraper/venv/chromedriver'
driver = webdriver.Chrome(PATH)
#location for the website
driver.get('https://shop.anthem.com/sales/eox/abc/ca/en/shop/plans/medical/snq?execution=e1s13')
print(driver.title)
#entering the zipcode
search = driver.find_element_by_id('demographics.zip5')
search.send_keys(90210)
#making the scraper sleep for 5 seconds while the page loads
time.sleep(5)
#entering first name and DOB then hitting next
search = driver.find_element_by_id('demographics.applicants0.firstName')
search.send_keys('juelz')
search = driver.find_element_by_id('demographics.applicants0.dob')
search.send_keys('01011990')
driver.find_element_by_xpath('//*[@id="button/shop/getaquote/next"]').click()
#hitting the next button
driver.find_element_by_xpath('//*[@id="hypertext/shop/estimatesavings/skipthisstep"]').click()
#making the scraper sleep for 2 seconds while the page loads
time.sleep(2)
#clicking the no option to view all the health plans
driver.find_element_by_xpath('//*[@id="radioNoID"]').click()
driver.find_element_by_xpath('/html/body/div[4]/div[11]/div/button[2]/span').click()
#making the scraper sleep for 2 seconds while the page loads
time.sleep(2)
driver.find_element_by_xpath('//*[@id="hypertext/shop/medical/showmemydoctorlink"]/span').click()
time.sleep(2)
#section to choose the specialist. here we are choosing all
find_specialist = driver.find_element_by_xpath('//*[@id="specializedin"]')
#this is the method for a dropdown
select_provider = Select(find_specialist)
select_provider.select_by_visible_text('All Specialties')
#choosing the distance. Here we click on 50 miles
choose_mile_radius = driver.find_element_by_xpath('//*[@id="distanceInMiles"]')
select_provider = Select(choose_mile_radius)
select_provider.select_by_visible_text('50 miles')
driver.find_element_by_xpath('/html/body/div[4]/div[11]/div/button[2]/span').click()
#handling the endless scroll
while True:
    time.sleep(20)
    # Scroll down to bottom
    element_inside_popup = driver.find_element_by_xpath('//*[@id="mainDiv"]')
    element_inside_popup.send_keys(Keys.END)
    # Wait to load page
    time.sleep(3)
#the block below grabs the majority of the data. we would have to split it up in pandas since this info
#is nested within classes
time.sleep(5)
main = driver.find_element_by_id('mainDiv')
sections = main.find_elements_by_class_name('firstRow')
pcp_info = []
#print(section.text)
for pcp in sections:
    #the site stores the information inside inner classes, which makes it difficult to scrape.
    #the solution is to pull the entire text in the block and clean it up afterwards
    #innerText allows us to pull just the text inside the blocks
    first_blox = pcp.find_element_by_class_name('table_content_colone').get_attribute('innerText')
    second_blox = pcp.find_element_by_class_name('table_content_coltwo').get_attribute('innerText')
    #creating columns and rows and assigning them
    pcp_items = {
        'first_block': [first_blox],
        'second_block': [second_blox]
    }
    pcp_info.append(pcp_items)
df = pd.DataFrame(pcp_info)
print(df)
df.to_csv('yerp.csv',index=False)
#driver.quit()
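One way to bound the unbounded `while True:` scroll in the script above is to stop once a scroll no longer loads new providers. Here's a sketch with the Selenium calls passed in as callables; all names here are hypothetical, not from the original script:

```python
import time

def scroll_until_stable(scroll_once, count_items, pause=3.0, patience=2):
    """Keep scrolling until the item count stops growing.

    scroll_once  -- performs one scroll, e.g. sending Keys.END to the popup
    count_items  -- returns how many providers are currently loaded, e.g.
                    len(main.find_elements_by_class_name('firstRow'))
    patience     -- consecutive no-growth scrolls required before stopping
    """
    last = count_items()
    stale = 0
    while stale < patience:
        scroll_once()
        time.sleep(pause)  # give the popup time to load the next batch
        now = count_items()
        stale = stale + 1 if now == last else 0
        last = now
    return last
```

Re-finding the elements inside `count_items` on every call also sidesteps the StaleElementReferenceException mentioned earlier, since no element reference outlives a single iteration.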
I am trying to make a bot that automatically fills out this form. I am using Selenium with Python, and I can get the script to populate the fields of the form correctly, but when I click on the search button, it doesn't bring me to the next page - it seems to just refresh the current page. I have tried filling out the form with the script and hitting the search button manually, but the same behavior is produced. I have also tried using .click() and .submit() - no dice. Apologies if some of my code is messy, as this is one of my first Python scripts. Thanks!
# Tee time booker
# This script books a tee time at ponemah greens
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
import time
site = webdriver.Chrome(ChromeDriverManager().install())
# user inputs
play_date="07/11/2021"
num_players=2
site.get("https://amherstpp.ezlinksgolf.com/index.html#/preSearch")
time.sleep(1)
title = site.title
assert "Amherst CC Prepaid - Online tee times made EZ" in title
dateField=site.find_element_by_xpath('//*[@id="dateInput"]')
dateField.clear()
dateField.send_keys(play_date)
time.sleep(1)
playerField=site.find_element_by_xpath('//*[@id="pc"]')
playerField.send_keys(num_players)
time.sleep(1)
courseSelector=site.find_element_by_xpath('/html/body/div[3]/div[2]/div[2]/ui-view/div/div/div/div[2]/div[2]/form/div/ul/li[4]/div/ul/li[2]/div/div[1]/input')
courseSelector.click()
time.sleep(1)
searchButton=site.find_element_by_xpath('/html/body/div[3]/div[2]/div[2]/ui-view/div/div/div/div[2]/div[2]/form/div/div/div/button')
searchButton.send_keys("\n")
print("Button Click")
I've been working on a fake "bet bot" in order to learn Selenium, but I'm having trouble closing a pop-up that sometimes shows up on the website I want to get the odds from.
My approach is the function submit_bets(), which takes a filtered games list in the form:
"League|team 1|team 2|Date|Probability in %|and prediction(1,X or 2)"
I get the data from here. Then for each filtered game I open the league's page on the betting website and go through all the games there to find the filtered game and get the real odds. For each filtered game in filtered_games I need to open the page of the betting website, and if the pop-up shows up, I can't get the data.
def submit_bets(filtered_games):
    driver = webdriver.Chrome(PATH)
    f = codecs.open("bets.txt", "r", encoding='utf-8')
    for line in filtered_games:
        l = line.split("|")
        print(l)
        driver.get(leagues_to_links.get(l[0]))
        scroll_down(driver)
        time.sleep(2)
        try:
            button = driver.find_element(By.XPATH, "/html/body/div[1]/div/section[2]/div[7]/div/div/div[1]/button")
            driver.execute_script("arguments[0].scrollIntoView(true)", button)
            button.click()
        except:
            print("no button")
        games = driver.find_elements_by_class_name("events-list__grid__event")
        for i in games:
            game = str(i.text).split("\n")
            try:
                if forebet_teams_to_betano.get(l[1]) in game[2] and forebet_teams_to_betano.get(l[2]) in game[3]:
                    print(game)
                    if str(l[5]) == "1":
                        print("1")
                        print(str(game[7]))
                    elif str(l[5]) == "X":
                        print("X")
                        print(str(game[9]))
                    else:
                        print("2")
                        print(str(game[11]))
            except:
                print("")
In this link you can find the HTML of the page when the pop-up shows up:
Github page with the html
In this link you can find the page files; you might have to refresh it a few times to get the pop-up.
Thank you for your time, and feel free to leave any tips to improve my code.
My solution:
#Closing popup for Portuguese betting site
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

URL = "https://www.betano.pt/sport/futebol/ligas/17083r/"

# Browser options
options = Options()
options.headless = True
firefox_profile = webdriver.FirefoxProfile()
firefox_profile.set_preference("browser.privatebrowsing.autostart", True)

browser = webdriver.Firefox(firefox_profile=firefox_profile, options=options)  # pass the options, or headless is ignored
browser.get(URL)

##### Copy this part into your own code #####
try:
    browser.find_element_by_xpath('//button[@class="sb-modal__close__btn uk-modal-close-default uk-icon uk-close"]').click()  # Click pop-up close button
    print("Pop-up closed.")
except:
    print("Pop-up button not found.")
#############################################
This closes the pop-up.
Keep in mind this relies on finding the button by its very specific class name. You'll need to adapt the try/except at the end into your own code.
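Instead of the bare try/except above, the presence check can be done with find_elements, which returns an empty list rather than raising when nothing matches. A sketch of that pattern; the helper name is mine, not from the answer:

```python
def click_if_present(find_elements, *locator):
    """Click the first match, if any; return True if a click happened.

    find_elements would be browser.find_elements, and locator e.g.
    (By.XPATH, '//button[contains(@class, "sb-modal__close__btn")]').
    Returns False instead of raising when the pop-up never appeared.
    """
    matches = find_elements(*locator)
    if matches:
        matches[0].click()
        return True
    return False
```

This avoids a bare `except:` swallowing unrelated errors (a misspelled variable inside the try block, for instance) along with the expected NoSuchElementException.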
I'm trying to fetch some information from specific web elements. The problem is that when I fetch the information without a for loop, the program works like a charm; but when I put the same code in a for loop, it does not detect the web elements. Here's the code I have been trying:
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
import time
from lxml import html
import requests
import xlwt
browser = webdriver.Firefox()  # Get local session of firefox
# 0 wait until the pages are loaded
browser.implicitly_wait(3)  # 3 secs should be enough. if not, increase it
browser.get("http://ae.bizdirlib.com/taxonomy/term/1493")  # Load page
links = browser.find_elements_by_css_selector("h2 > a")

def test():  # test function
    elems = browser.find_elements_by_css_selector("div.content.clearfix > div > fieldset > div > ul > li > span")
    print elems
    for elem in elems:
        print elem.text
    elem1 = browser.find_elements_by_css_selector("div.content.clearfix > div > fieldset > div > ul > li > a")
    for elems21 in elem1:
        print elems21.text
    return 0

for link in links:
    link.send_keys(Keys.CONTROL + Keys.RETURN)
    link.send_keys(Keys.CONTROL + Keys.PAGE_UP)
    time.sleep(5)
    test()  # Want to call test function
    link.send_keys(Keys.CONTROL + 'w')
The output I get when I print the object is an empty array ([]). Can somebody help me improve this? I'm a newbie to Selenium.
In a previous question I asked about printing; the problem here is that the element itself is not being detected, so this question is totally different.
I couldn't open the page, but as I understand it you want to open links sequentially and do something on each. Your links open in a new tab, and with link.send_keys(Keys.CONTROL + 'w') you are closing that newly opened tab. In this situation you must switch to the new window so that you can reach the elements in it. You can query the windows via driver.window_handles, switch to the last window with driver.switch_to_window(driver.window_handles[-1]), and after you close that window switch back to the first one with driver.switch_to_window(driver.window_handles[0]).
for link in links:
    link.send_keys(Keys.CONTROL + Keys.RETURN)
    # switch to new window
    driver.switch_to_window(driver.window_handles[-1])
    link.send_keys(Keys.CONTROL + Keys.PAGE_UP)  # dont know why
    time.sleep(5)
    test()  # Want to call test function
    link.send_keys(Keys.CONTROL + 'w')
    # switch back to the first window
    driver.switch_to_window(driver.window_handles[0])
Setup: Python bindings for Selenium 2.45.0, IE server driver 2.45.0 (x86), Python 2.7.9, Windows 7 64-bit
Issue: when I click on this redirect button (href="https:www.work.test.co.in:1XXX9/TEST/servlet/MainServlet/home" target="_blank"),
a new window opens, but I am unable to click anything in the new window because control (focus) remains on the previous window (confirmed by closing the previous window).
Tried:
1. There is no window name, so I cannot try
driver.switch_to_window("windowName")
2. Tried to print the handles (so that I could use a handle reference), but I can see only one window handle. Used the following code:
for handle in driver.window_handles:
    print "Handle arr = ", handle
    driver.switch_to_window(handle)
3. Question 1: why am I getting only one window handle when I can see two IE instances in Task Manager?
4. I tried using indexes 0, 1, etc.:
driver.switch_to_window(driver.window_handles[-1])
5. Not sure about this one (it is not Python syntax), though I tried:
driver.SwitchTo().Window(driver.WindowHandles.Last())
6. Tried this, though I am sure it's not an alert window:
alert = driver.switch_to_alert()
SCRIPT :
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from time import sleep
driver = webdriver.Ie()
driver.get("https://my intranet site .aspx")
driver.implicitly_wait(2)
elem = driver.find_element_by_xpath("my xpath ")
elem.click()
driver.implicitly_wait(2)
elem = driver.find_element_by_xpath("//*[@id='tab1_2']/div[16]")
elem.click()
handle = driver.current_window_handle
print "Handle main = ",handle
driver.implicitly_wait(5)
elem = driver.find_element_by_xpath("page link button")
elem.click()
sleep(5)
My tried scenarios are listed above.
Suggestions will be highly appreciated.
Update: when the new window is opened directly through the link URL, I am able to perform actions on it, like clicking.
So the only issue is when I open it in continuation of the first window through the script.
Update: the main concern is why I am not getting the second window handle even though Task Manager shows two instances of IE.
I don't know Python, but in Java I would do it in this way:
// get handles to all opened windows before the click
Set<String> handlesBeforeClick = driver.getWindowHandles();
// and now click on the link that opens a new window
findElement(linkThatOpensNewWindow).click();
// then wait until a new window has been opened
wait.until(d -> d.getWindowHandles().size() > handlesBeforeClick.size());
// then get a handle to the new window
Set<String> handlesAfterClick = driver.getWindowHandles();
handlesAfterClick.removeAll(handlesBeforeClick);
String handleToNewWindow = handlesAfterClick.iterator().next();
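The set-difference step above translates directly to Python. Here's a sketch under the same assumption that exactly one new window opens; the helper name is illustrative:

```python
def new_window_handle(handles_before, handles_after):
    """Return the one handle present after the click but not before it."""
    new = set(handles_after) - set(handles_before)
    if len(new) != 1:
        raise RuntimeError("expected exactly one new window, found %d" % len(new))
    return new.pop()

# Assumed usage with the question's driver:
#     before = driver.window_handles
#     elem.click()                      # the click that opens the new window
#     # wait/poll until len(driver.window_handles) > len(before)
#     driver.switch_to_window(new_window_handle(before, driver.window_handles))
```

Unlike indexing window_handles[-1], this does not depend on the driver keeping the handles in opening order, which is not guaranteed.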