selenium click() not working on closing pop-up - python

I've been working on a fake "bet bot" in order to learn Selenium, but I'm having trouble closing a pop-up that sometimes shows up on the website I want to get the odds from.
My approach is to pass the function submit_bets() a filtered games list in the form:
"League|team 1|team 2|Date|Probability in %|Prediction (1, X or 2)"
I get the data from here. Then for each of the filtered games I open the league's page on the betting website and go through all the games there to find the filtered game and get the real odds. For each filtered game in filtered_games I need to open the page of the betting website, and if the pop-up shows up, I can't get the data.
def submit_bets(filtered_games):
    driver = webdriver.Chrome(PATH)
    f = codecs.open("bets.txt", "r", encoding='utf-8')
    for line in filtered_games:
        l = line.split("|")
        print(l)
        driver.get(leagues_to_links.get(l[0]))
        scroll_down(driver)
        time.sleep(2)
        try:
            button = driver.find_element(By.XPATH, "/html/body/div[1]/div/section[2]/div[7]/div/div/div[1]/button")
            driver.execute_script("arguments[0].scrollIntoView(true)", button)
            button.click()
        except:
            print("no button")
        games = driver.find_elements_by_class_name("events-list__grid__event")
        for i in games:
            game = str(i.text).split("\n")
            try:
                if forebet_teams_to_betano.get(l[1]) in game[2] and forebet_teams_to_betano.get(l[2]) in game[3]:
                    print(game)
                    if str(l[5]) == "1":
                        print("1")
                        print(str(game[7]))
                    elif str(l[5]) == "X":
                        print("X")
                        print(str(game[9]))
                    else:
                        print("2")
                        print(str(game[11]))
            except:
                print("")
In this link you can find the HTML of the page when the pop-up shows up:
Github page with the html
In this link you can find the page files; you might have to refresh it a few times to get the pop-up.
Thank you for your time, and feel free to leave any tips to improve my code.
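One small tip on the data handling: since each filtered game is a pipe-separated line, parsing it once into named fields reads better than indexing l[0], l[5], etc. throughout. A minimal sketch (the field names are my own, not from the site):

```python
def parse_game(line):
    # Split "League|team 1|team 2|Date|Probability in %|Prediction" into
    # named fields. Names here are illustrative, not from forebet itself.
    league, home, away, date, prob, pred = line.strip().split("|")[:6]
    return {
        "league": league,
        "home": home,
        "away": away,
        "date": date,
        "prob": float(prob.rstrip("%")),  # "73%" -> 73.0
        "pred": pred,                      # "1", "X" or "2"
    }

game = parse_game("Premier League|Arsenal|Chelsea|2021-05-01|73%|1")
```

Then the loop body can use game["pred"] instead of str(l[5]), which makes the odds-picking branches easier to follow.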

My solution:
# Closing popup for Portuguese betting site
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

URL = "https://www.betano.pt/sport/futebol/ligas/17083r/"

# Browser options
options = Options()
options.headless = True
firefox_profile = webdriver.FirefoxProfile()
firefox_profile.set_preference("browser.privatebrowsing.autostart", True)
browser = webdriver.Firefox(firefox_profile=firefox_profile, options=options)
browser.get(URL)

##### Copy this part into your own code #####
try:
    # Click the pop-up close button
    browser.find_element_by_xpath('//button[@class="sb-modal__close__btn uk-modal-close-default uk-icon uk-close"]').click()
    print("Pop-up closed.")
except:
    print("Pop-up button not found.")
#########
Closes this popup:
Keep in mind this relies on finding the button by its very specific class name. You'll need to adapt the try-except at the end into your own code.
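If the full class string feels too brittle, a partial match with a CSS attribute selector tolerates the site adding or reordering classes. A sketch of that idea; note that "css selector" is the literal string behind selenium's By.CSS_SELECTOR, so this helper works with any real WebDriver instance, and the selector shown at the end is an assumption based on the class above:

```python
# "css selector" is the literal value of selenium's By.CSS_SELECTOR.
CSS_SELECTOR = "css selector"

def click_if_present(driver, selector):
    """Click the first element matching `selector`; return whether we clicked.

    Using find_elements (plural) avoids an exception when the popup
    simply never appeared.
    """
    matches = driver.find_elements(CSS_SELECTOR, selector)
    if not matches:
        return False
    matches[0].click()
    return True

# Hypothetical usage against the popup above:
# click_if_present(browser, "button[class*='sb-modal__close__btn']")
```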

Related

Can't click on an element

Code that I am using:
from selenium import webdriver
from selenium.webdriver.common.by import By
import time

wd = webdriver.Chrome()
url = 'https://www.dailyfx.com/economic-calendar#next-seven-days'
wd.get(url)
time.sleep(20)
try:
    wd.find_element(By.XPATH, "/html/body/div[7]/div/div/button/img").click()
except:
    print('No Calendar Advertisement')
try:
    wd.find_element(By.XPATH, "/html/body/div[1]/div[2]/div/div/div[2]/button").click()
except:
    print('No Cookies Button')
time.sleep(3)
try:
    wd.find_element(By.XPATH, "/html/body/div[1]/div[1]/div/div/div[1]/span").click()
except:
    print('No App Advertisement')
# Clear calendar filter
wd.find_element(By.XPATH, "/html/body/div[5]/div/div[4]/div[2]/div/div/div[1]/div/div/div[3]/div/div[1]/div[1]/div[2]").click()
wd.find_element(By.XPATH, "/html/body/div[5]/div/div[4]/div[2]/div/div/div[1]/div/div/div[3]/div/div[1]/div[2]/div[2]/div[2]/div[1]/div[1]/label").click()
wd.find_element(By.XPATH, "/html/body/div[5]/div/div[4]/div[2]/div/div/div[1]/div/div/div[3]/div/div[1]/div[2]/div[2]/div[2]/div[1]/div[2]/label").click()
wd.find_element(By.XPATH, "/html/body/div[5]/div/div[4]/div[2]/div/div/div[1]/div/div/div[3]/div/div[1]/div[2]/div[2]/div[2]/div[1]/div[3]/label").click()
wd.find_element(By.XPATH, "/html/body/div[5]/div/div[4]/div[2]/div/div/div[1]/div/div/div[3]/div/div[1]/div[2]/div[2]/div[2]/div[1]/div[4]/label").click()
wd.find_element(By.XPATH, "/html/body/div[5]/div/div[4]/div[2]/div/div/div[1]/div/div/div[3]/div/div[1]/div[2]/div[2]/div[2]/div[1]/div[5]/label").click()
# Selecting only United States
wd.find_element(By.XPATH, "/html/body/div[5]/div/div[4]/div[2]/div/div/div[1]/div/div/div[3]/div/div[1]/div[2]/div[2]/div[2]/div[1]/div[1]/div/span").click()
wd.find_element(By.XPATH, "/html/body/div[5]/div/div[4]/div[2]/div/div/div[1]/div/div/div[3]/div/div[1]/div[2]/div[2]/div[2]/div[2]/div[1]/div/div/div[1]/label").click()
# Closing Calendar Filter
wd.find_element(By.XPATH, "/html/body/div[5]/div/div[4]/div[2]/div/div/div[1]/div/div/div[3]/div/div[1]/div[1]/div[2]").click()
# Working part:
wd.find_element(By.XPATH, "/html/body/div[5]/div/div[4]/div[2]/div/div/div[1]/div/div/div[4]/div[5]/table/tbody/tr[13]/td[1]/div/div[1]").click()
https://www.dailyfx.com/economic-calendar#next-seven-days
So, I am accessing this website and trying to click on an element. As you can see, the website shows some economic news, and when you click on an item it shows a graph with information, which is my goal: opening the graph. For some reason, I can only open the graph when the table data cell is td[1] (that occurs just for the first economic news). When the table data cell changes to td[3] (economic news that are further away in time), I can't open the graph anymore. This code works:
wd.find_element(By.XPATH,"/html/body/div[5]/div/div[4]/div[2]/div/div/div[1]/div/div/div[4]/div[5]/table/tbody/tr[13]/td[1]/div/div[1]").click()
When changed to td[3], it doesn't work:
wd.find_element(By.XPATH,"/html/body/div[5]/div/div[4]/div[2]/div/div/div[1]/div/div/div[4]/div[5]/table/tbody/tr[93]/td[3]/div/div[1]").click()
I tried clicking on multiple different elements, but it still doesn't work for td[3] elements.
In short: I tried to open the graph of an economic news item, but it only works for td[1], not for td[3].
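One common reason a click works for early rows but not for later ones is that the later rows sit outside the viewport (or under a sticky header) until you scroll. A hedged sketch, not a confirmed diagnosis for this site: scroll the element into view first, then click it via JavaScript, which sidesteps Selenium's "element not interactable / click intercepted" checks.

```python
def force_click(driver, element):
    # Center the element in the viewport; off-screen rows often reject a
    # normal .click() until they have been scrolled into view.
    driver.execute_script("arguments[0].scrollIntoView({block: 'center'});", element)
    # Click through JavaScript, bypassing overlay/visibility checks.
    driver.execute_script("arguments[0].click();", element)

# Hypothetical usage with the td[3] row from the question:
# row = wd.find_element(By.XPATH, "...tr[93]/td[3]/div/div[1]")
# force_click(wd, row)
```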

Right click save link as then save

Hello guys, I want to right-click a link, choose "Save link as", and then confirm the save pop-up that Windows shows.
This is an example:
https://www.who.int/data/gho/data/indicators/indicator-details/GHO/proportion-of-population-below-the-international-poverty-line-of-us$1-90-per-day-(-)
Go to this page; in the Data tab you can see "EXPORT DATA in CSV format: Right-click here & Save link".
If you right-click it and choose "Save link as", it will let you save the data as CSV.
I want to automate that. Can it be done using Selenium with Python, and if so, how?
I tried using ActionChains, but I'm not sure that's going to work.
I think you would be better off using the Data API (json) offered on the same page, but I have managed to download the file using the code below and Google Chrome.
There is a lot going on which I didn't want to go into (hence the lazy usage of an occasional sleep), but the basic principle is that the Export link is inside a frame inside a frame (and there are many iframes on the page). Once the correct iframe has been found and the link located, a right click brings up the system menu.
This menu cannot be accessed via Selenium (because it is inside an iframe?), so pyautogui is used to move down to "Save link as..." and to click the "Save" button on the Save As dialog:
import time

import pyautogui
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import StaleElementReferenceException

def download_csv(driver):
    url = "https://www.who.int/data/gho/data/indicators/indicator-details/GHO/proportion-of-population-below-the-international-poverty-line-of-us$1-90-per-day-(-)"
    driver.get(url)
    driver.execute_script("window.scrollBy(0, 220);")
    button = driver.find_element(By.CSS_SELECTOR, "button#dataButton")
    button.click()
    WebDriverWait(driver, 5).until(EC.text_to_be_present_in_element_attribute((By.CSS_SELECTOR, "button#dataButton"), "class", "active"))
    time.sleep(10)
    iframes = driver.find_elements(By.CSS_SELECTOR, "iframe[src*='https://app.powerbi.com/reportEmbed']")
    for i in iframes:
        try:
            driver.switch_to.frame(i)
            iframes2 = driver.find_elements(By.CSS_SELECTOR, "iframe[src*='cvSandboxPack.html']")
            for i2 in iframes2:
                try:
                    driver.switch_to.frame(i2)
                    downloads = driver.find_elements(By.CSS_SELECTOR, "a[download='data.csv']")
                    if len(downloads) > 0:
                        ActionChains(driver).context_click(downloads[0]).perform()
                        # Selenium cannot communicate with system dialogs
                        time.sleep(1)
                        pyautogui.typewrite(['down', 'down', 'down', 'down', 'enter'])
                        time.sleep(1)
                        pyautogui.press('enter')
                        time.sleep(2)
                        return
                except StaleElementReferenceException:
                    continue
                finally:
                    driver.switch_to.frame(i)
                    iframes2 = WebDriverWait(driver, 5).until(EC.presence_of_all_elements_located((By.TAG_NAME, "iframe")))
        except StaleElementReferenceException:
            continue
        finally:
            driver.switch_to.default_content()
            iframes = WebDriverWait(driver, 5).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "iframe[src*='https://app.powerbi.com/reportEmbed']")))
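As an alternative to driving the Save As dialog with pyautogui, Chrome can be told to download files silently into a fixed directory; then a plain left click on the link is enough and no system dialog ever appears. A sketch of the preference mapping (untested against this particular site) that you would pass via options.add_experimental_option("prefs", ...):

```python
def build_download_prefs(download_dir):
    # Chrome preferences that save downloads into `download_dir` without
    # showing the native Save As dialog.
    return {
        "download.default_directory": download_dir,
        "download.prompt_for_download": False,
        "download.directory_upgrade": True,
    }

# Hypothetical wiring:
# options = webdriver.ChromeOptions()
# options.add_experimental_option("prefs", build_download_prefs("/tmp/who-data"))
# driver = webdriver.Chrome(options=options)
# ...then a normal downloads[0].click() instead of the context-click dance.
```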

Why can't I select elements from Instagram using Selenium in Python?

I have this piece of code I wrote a few months back, which was working as far as I can remember. I tried to run it today and it doesn't work any more. Does anyone have any idea why? The problem lies in the automate(browser) function, when trying to select the like button, Selenium is unable to find it. I tried using CSS selectors, full XPATH and XPATH but it just won't work. It seems weird because it works when selecting the log in and not now buttons, as well as the number of posts and the first post. It is when I open the first post that I seem to be unable to select anything. No like button, no comment area, no next button.
Any help will be appreciated!
from selenium import webdriver
from selenium.webdriver.common.by import By; from selenium.webdriver.common.keys import Keys

USERNAME = YOUR_USERNAME
PASSWORD = YOUR_PASSWORD

def login():
    # Log into Instagram
    options = webdriver.EdgeOptions()
    options.add_argument("--log-level=3")
    browser = webdriver.Edge(options=options)
    browser.get('https://www.instagram.com/'); browser.implicitly_wait(3)
    browser.find_element(By.NAME, 'username').send_keys(USERNAME); browser.implicitly_wait(3)
    browser.find_element(By.NAME, 'password').send_keys(PASSWORD); browser.implicitly_wait(3)
    browser.find_element(By.XPATH, "//*[@id='loginForm']/div/div[3]/button").click(); browser.implicitly_wait(3)
    # Check if logged in successfully
    try:
        if browser.find_element(By.XPATH, "//*[@id='slfErrorAlert']"):
            browser.close(); exit('Error: Login information is incorrect')
    except: print(f"Successfully logged into {USERNAME}")
    # Close pop-up and return
    try: browser.find_element(By.XPATH, '/html/body/div[5]/div/div/div/div[3]/button[2]').click()
    except: pass
    return browser

def automate(browser):
    likes = comments = errors = 0
    # Prompt for target username and comment text
    target = input("Target's username: ")
    comment = input("Comment: ")
    # Go to user and click first post
    browser.get('https://www.instagram.com/' + target); browser.implicitly_wait(3)
    # Get account's number of posts
    totalposts = int(browser.find_element(By.XPATH, '//*[@id="react-root"]/div/div/section/main/div/header/section/ul/li[1]/span/span').text)
    # Open first post
    browser.find_element(By.CLASS_NAME, "_9AhH0").click(); browser.implicitly_wait(3)
    # Like (and comment) every post
    for post in range(totalposts):
        try:
            # Check if post is already liked based on the button's color, like it if not
            like_button = browser.find_element(By.XPATH, '//*[@id="react-root"]/div/div/section/main/div/div[1]/article/div/div[2]/div/div[2]/section[1]/span[1]/button/span/svg')
            if like_button.get_attribute("color") == "#ed4956":
                print("Already liked (will not be commented either)")
                browser.find_element(By.LINK_TEXT, 'Next').click()
            else:
                browser.find_element(By.XPATH, '//*[@id="react-root"]/div/div/section/main/div/div[1]/article/div/div[2]/div/div[2]/section[1]/span[1]/button').click()
                print("Liked"); likes += 1
        except:
            print("Could not like"); errors += 1
        # If post was not already liked, comment it and go to next post
        try:
            browser.find_element(By.XPATH, "/html/body/div[6]/div[2]/div/article/div/div[2]/div/div[2]/section[3]/div/form").click(); browser.implicitly_wait(3)
            browser.find_element(By.XPATH, "/html/body/div[6]/div[2]/div/article/div/div[2]/div/div[2]/section[3]/div/form/textarea").send_keys(comment + Keys.ENTER); browser.implicitly_wait(3)
            print(f"Commented {comment}"); comments += 1
            browser.find_element(By.LINK_TEXT, 'Next').click()
        except:
            print("Could not comment"); errors += 1
            browser.find_element(By.LINK_TEXT, 'Next').click()
    # Close browser and return stats
    print('Closing browser...'); browser.close()
    stats = {'liked': likes, 'commented': comments, 'errors': errors}
    return stats

if __name__ == "__main__":
    browser = login()
    stats = automate(browser)
    print(f"Liked: {stats['liked']} | Commented: {stats['commented']} | Errors: {stats['errors']}")
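A side note on the repeated implicitly_wait(3) calls above: the implicit wait is a single global setting on the driver, so calling it after every action adds nothing, and it silently turns every lookup of a no-longer-existing element (such as Instagram's renamed classes) into a 3-second stall before the exception. Explicit waits fail faster and report more clearly; selenium's WebDriverWait does this for you, but the underlying idea is simple enough to sketch by hand:

```python
import time

def wait_for(find, timeout=10.0, interval=0.5):
    """Poll the zero-argument callable `find` until it returns something
    truthy, or raise TimeoutError after `timeout` seconds. This mirrors
    what selenium's WebDriverWait(driver, timeout).until(...) does."""
    deadline = time.monotonic() + timeout
    while True:
        result = find()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(interval)

# Hypothetical usage (selector is an assumption, Instagram's DOM changes often):
# buttons = wait_for(lambda: browser.find_elements(By.CSS_SELECTOR, "svg[aria-label='Like']"))
```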

Python Selenium how to use an existing chromedriver window?

I am making an automated Python script which opens chromedriver in a loop until it finds a specific element (using Selenium) on the webpage the driver gets. This obviously eats up resources eventually, as it is constantly opening and closing the driver in the loop.
Is there a way to use an existing chromedriver window instead of opening and closing one in a loop until a conditional is satisfied?
If that is not possible, is there an alternative way to go about this that you would recommend?
Thanks!
Script:
from selenium.webdriver.common.keys import Keys
from selenium import webdriver
import pyautogui
import time
import os

def snkrs():
    driver = webdriver.Chrome('/Users/me/Desktop/Random/chromedriver')
    driver.get('https://www.nike.com/launch/?s=in-stock')
    time.sleep(3)
    pyautogui.click(184, 451)
    pyautogui.click(184, 451)
    current = driver.current_url
    driver.get(current)
    time.sleep(3.5)
    elem = driver.find_element_by_xpath("//*[@id='j_s17368440']/div[2]/aside/div[1]/h1")
    ihtml = elem.get_attribute('innerHTML')
    if ihtml == 'MOON RACER':
        os.system("clear")
        print("SNKR has not dropped")
        time.sleep(1)
    else:
        print("SNKR has dropped")
        pyautogui.click(1303, 380)
        pyautogui.hotkey('command', 't')
        pyautogui.typewrite('python3 messages.py')  # Notifies me by text
        pyautogui.press('return')
        pyautogui.click(928, 248)
        pyautogui.hotkey('ctrl', 'z')  # Kills the bash loop

snkrs()
Bash loop file:
#!/bin/bash
while [ 1 ]
do
python snkrs.py
done
You are defining a method that contains the chromedriver launch and then running through the method once (not looping), so each method call generates a new browser instance. Instead of doing that, do something more like this...
url = 'https://www.nike.com/launch/?s=in-stock'
driver.get(url)
# toggle grid view
WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[aria-label='Show Products as List']"))).click()
# wait for shoes to drop
while not driver.find_elements(By.XPATH, "//div[@class='figcaption-content']//h3[contains(.,'MOON RACER')]"):
    print("SNKR has not dropped")
    time.sleep(300)  # 300s = 5 mins, don't spam their site
    driver.get(url)
print("SNKR has dropped")
I simplified your code, changed the locator, and added a loop. The script launches a browser (once), loads the site, clicks the grid view toggle button, and then looks for the desired shoe to be displayed in this list. If the shoes don't exist, it just sleeps for 5 mins, reloads the page, and tries again. There's no need to refresh the page every 1s. You're going to draw attention to yourself and the shoes aren't going to be refreshed on the site that often anyway.
If you're just trying to wait until something changes on the page then this should do the trick:
snkr_has_not_dropped = True
while snkr_has_not_dropped:
    elem = driver.find_element_by_xpath("//*[@id='j_s17368440']/div[2]/aside/div[1]/h1")
    ihtml = elem.get_attribute('innerHTML')
    if ihtml == 'MOON RACER':
        print("SNKR has not dropped")
        driver.refresh()
    else:
        print("SNKR has dropped")
        snkr_has_not_dropped = False
Just need to refresh the page and try again.
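On the literal question of reusing an existing window: chromedriver can also attach to a Chrome instance that was started with remote debugging enabled, so the browser survives across script runs entirely. A sketch under assumed values (the port and profile directory are arbitrary choices, not requirements):

```python
# Start Chrome manually first, e.g.:
#   chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-profile
DEBUGGER_ADDRESS = "127.0.0.1:9222"

def attach_options():
    # The experimental-option mapping that tells chromedriver to drive the
    # already-running instance instead of launching a fresh browser.
    return {"debuggerAddress": DEBUGGER_ADDRESS}

# Hypothetical wiring:
# options = webdriver.ChromeOptions()
# for key, value in attach_options().items():
#     options.add_experimental_option(key, value)
# driver = webdriver.Chrome(options=options)  # attaches, no new window
```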

Is it possible to forward Selenium/webdriver results to mechanize/BeautifulSoup

Ok, so I pretty much used webdriver to navigate to a specific page with a table of results contained in a unique div. I had to use webdriver to fill the forms and interact with the JavaScript buttons. Anyway, I need to scrape the table into a file, but I can't figure this out. Here's the code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time

# Open Firefox
driver = webdriver.Firefox()
driver.get("https://subscriber.hoovers.com/H/login/login.html")

# Login and submit
username = driver.find_element_by_id('j_username')
username.send_keys('THE_EMAIL_ADDRESS')
password = driver.find_element_by_id('j_password')
password.send_keys('THE_PASSWORD')
password.submit()

# go to "build a list" url (more like 'build-a-table', get it right guys!)
driver.get('http://subscriber.hoovers.com/H/search/buildAList.html?_target0=true')

# expand industry list to reveal SIC codes form
el = driver.find_elements_by_xpath("//h2[contains(string(), 'Industry')]")[0]
action = webdriver.common.action_chains.ActionChains(driver)
action.move_to_element_with_offset(el, 5, 5)
action.click()
action.perform()

# fill SIC codes form with all the SIC codes
siccodes = driver.find_element_by_id('advancedSearchCriteria.sicCodes')
siccodes.send_keys('316998,321114,321211,321212,321213,321214,321219,321911,'
                   '321912,321918,321992,322121,322130,326122,326191,326199,327110,327120,'
                   '327212,327215,327320,327331,327332,327390,327410,327420,327910,327991,'
                   '327993,327999,331313,331315,332216,332311,332312,332321,332322,332323,'
                   '333112,333414,333415,333991,334290,335110,335121,335122,335129,335210,'
                   '335221,335222,335224,335228,335311,335312,335912,335929,335931,335932,'
                   '335999,337920,339910,339993,339994,339999,423310,423320,423330,423610,'
                   '423620,423710,423720,423730,424950,444120')

# wait 5 seconds because this is a big list to load
time.sleep(5)

# Select "Add to List" button and clickity-clickidy-CLICK!
butn = driver.find_element_by_xpath('/html/body/div[2]/div[3]/div[1]/form/div/div[3]/div/div[2]/div[1]/div[2]/p[1]/button')
action = webdriver.common.action_chains.ActionChains(driver)
action.move_to_element_with_offset(butn, 5, 5)
action.click()
action.perform()

# wait 10 seconds to add them to list
time.sleep(10)

# Now select confirm list button and wait to be forwarded to results page
butn = driver.find_element_by_xpath('/html/body/div[3]/div/div[1]/input[2]')
action = webdriver.common.action_chains.ActionChains(driver)
action.send_keys("\n")
action.move_to_element_with_offset(butn, 5, 5)
action.click()
action.perform()

# wait 10 seconds, let it load and dig up them numbah tables
time.sleep(10)

# Check that we're on the right results landing page...
print(driver.current_url)

# Good, we have arrived! Now let's save this url for scrape-time!
url = driver.current_url

# Page source has everything... but we only need the table!!! HOW?!
sourcecode = driver.page_source.encode("utf-8")
# EVERYTHING AFTER THIS POINT DOESN'T WORK!!!!
All I need is to print the table out, as organized as possible, with a for loop, but it seems this works a lot better with mechanize or BeautifulSoup. So is this possible? Any suggestions? Also, sorry if my code is sloppy; I'm multitasking with deadlines and other scripts. Please help! I will provide my login credentials if you really need them and want to help me. It's nothing too serious, just a company SIC and D-U-N-S number database, but I don't think you need it to figure this out. I know there are a few jedis out there who can save me. :)
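To answer the title question: yes. Once Selenium has done the navigation, hand driver.page_source to any HTML parser; with BeautifulSoup that is simply soup = BeautifulSoup(driver.page_source, "html.parser") and then soup.find("table"). No mechanize is needed, since the session and JavaScript work already happened in the browser. To keep the sketch below dependency-free it uses the stdlib html.parser instead of bs4; the table layout it assumes (plain tr/td rows) is a guess, since I can't see the results page:

```python
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collect the rows of the tables in an HTML document as lists of cell
    strings. Nested tables are not handled; this is a sketch, not a full
    scraper."""
    def __init__(self):
        super().__init__()
        self.rows = []
        self._row = None
        self._cell = None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th") and self._row is not None:
            self._cell = []

    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data)

    def handle_endtag(self, tag):
        if tag in ("td", "th") and self._cell is not None:
            self._row.append("".join(self._cell).strip())
            self._cell = None
        elif tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = None

def extract_table_rows(html):
    parser = TableExtractor()
    parser.feed(html)
    return parser.rows

# Hypothetical usage after the navigation above:
# for row in extract_table_rows(driver.page_source):
#     print("|".join(row))
```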
