I'm trying to scrape betting websites for a student project, but I'm having trouble scrolling down and displaying the website's dynamic data. I'm using Selenium with Safari and Python, and the traditional approaches are not working (ActionChains, execute_script with a scroll down, ...).
I was wondering if someone has already run into such issues and found a solution.
from bs4 import BeautifulSoup
from selenium import webdriver
import time
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
driver = webdriver.Safari()
driver.get("https://www.unibet.fr/sport/football")
# Dismiss the cookie consent banner
cookie_button = driver.find_element(By.CLASS_NAME, "ui-button")
driver.execute_script("arguments[0].click();", cookie_button)
time.sleep(5)
And then I'm stuck. I tried different methods:
the execute_script method with a scroll down
sending space / down-arrow keys to the page (this works for other websites on the same display)
ActionChains with a scroll to the element... (rough sketches of these attempts are below)
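For reference, here is roughly what those attempts looked like (a sketch only; the scroll-target selector is a placeholder rather than the site's real one, and scroll_to_element needs Selenium 4.2+):
# 1) plain JavaScript scroll
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
time.sleep(2)
# 2) sending keys to the page body
body = driver.find_element(By.TAG_NAME, "body")
body.send_keys(Keys.SPACE)
body.send_keys(Keys.ARROW_DOWN)
# 3) ActionChains scroll to an element (placeholder selector)
target = driver.find_element(By.CSS_SELECTOR, "section")
ActionChains(driver).scroll_to_element(target).perform()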
Thanks for your help
Greg
I am trying to use Selenium to automatically input HTML code on http://ueditor.baidu.com/website/examples/completeDemo.html. My procedure is to click the HTML button first and then type the HTML in, but I always get an error saying the element can't be located. It's driving me crazy: after clicking the HTML button the element is right there, yet the error keeps coming. How can I write into the box after clicking the HTML button using Selenium? Thanks to any warm-hearted soul who can help.
import os,time
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
chromePath = r'E:/Python/WEB/web-infor-transfer/monidenglu/chromedriver.exe'
wd = webdriver.Chrome(chromePath)  # pass the driver path so it is actually used
loginUrl = 'http://ueditor.baidu.com/website/examples/completeDemo.html'
wd.get(loginUrl)
wd.find_element_by_xpath('//*[@id="edui4"]').click()
time.sleep(2)
wd.find_element_by_xpath('//*[@id="edui1_iframeholder"]/div/div[2]/div/div/div[2]/div/div[2]').send_keys('hello')
time.sleep(5)
wd.quit()
Try action chains. For example, sending the keys to the browser itself worked:
wd.find_element_by_xpath('/html/body/div[1]/div/div/div[2]/div/div[2]/div/div/div[2]/div/div[2]/pre[2]/span').click()
actions = ActionChains(wd)
actions.send_keys('hello')
actions.perform()
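Put together with the setup from the question, the whole flow would look roughly like this (a sketch; the XPaths are simply the ones from the question and the answer):
wd.find_element_by_xpath('//*[@id="edui4"]').click()  # switch the editor into HTML mode
time.sleep(2)
# click the code area first, then send the keys through the browser itself
wd.find_element_by_xpath('/html/body/div[1]/div/div/div[2]/div/div[2]/div/div/div[2]/div/div[2]/pre[2]/span').click()
ActionChains(wd).send_keys('hello').perform()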
As the title says, I'd like to perform a Google search using Selenium and then open all the results of the first page in separate tabs.
Please have a look at the code; I can't get any further (it's just my third day learning Python).
Thank you for your help!
Code:
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
import pyautogui
query = 'New Search Query'
browser = webdriver.Chrome('/Users/MYUSERNAME/Desktop/Desktop-Files/Chromedriver/chromedriver')
browser.get('http://www.google.com')
search = browser.find_element_by_name('q')
search.send_keys(query)
search.send_keys(Keys.RETURN)
element = browser.find_element_by_class_name('LC20lb')
element.click()
The reason I imported pyautogui is that I tried simulating a right click followed by "open in new tab" for each result, but it got a little confusing :)
Forget about pyautogui, as what you want to do can be done in Selenium alone. The same goes for most of your other imports; you just do not need them. See if this code meets your needs.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
query = 'sins of a solar empire' #my query about a video game
browser = webdriver.Chrome()
browser.get('http://www.google.com')
search = browser.find_element_by_name('q')
search.send_keys(query)
search.send_keys(Keys.RETURN)
links = browser.find_elements_by_class_name('r') # I went on Google Search and found the container class for the link
for link in links:
    url = link.find_element_by_tag_name('a').get_attribute("href") # this extracts the url of the HTML link
    browser.execute_script('''window.open("{}","_blank");'''.format(url)) # this uses JavaScript to open a new tab and load the given url in it
    print(link.find_element_by_tag_name('a').get_attribute("href"))
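If you then want to work inside those tabs rather than just open them, a possible follow-up (not part of the answer above, just a sketch) is to walk through the window handles:
# visit each newly opened tab in turn (the first handle is usually the original window)
for handle in browser.window_handles[1:]:
    browser.switch_to.window(handle)
    print(browser.title)  # or scrape whatever you need from that tab
browser.switch_to.window(browser.window_handles[0])  # return to the results page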
I'm working on making a web crawler with Python using Selenium.
I successfully got the contents using chromedriver, but a problem occurred when I tried to do headless crawling through PhantomJS: find_element_by_id and find_element_by_name did not work.
Is there any difference between these? I am trying to make this headless because I want to run the code on an Ubuntu server as a batch job without GUI support.
My script is as below.
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup
import re
#driver = webdriver.PhantomJS('/Users/user/Downloads/phantomjs-2.1.1-macosx/bin/phantomjs')
#driver = webdriver.Chrome('/Users/user/Downloads/chromedriver')
driver = webdriver.PhantomJS()
driver.set_window_size(1120, 550)
driver.get(url)  # url is the page being crawled (defined earlier, not shown here)
driver.implicitly_wait(3)
# here I tried two different find_element calls, but neither worked
user = driver.find_element(by=By.NAME,value="user:email")
password = driver.find_element_by_id('user_password')
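For what it's worth, a minimal sketch of the same lookups wrapped in an explicit wait, in case the page simply renders more slowly under PhantomJS than the 3-second implicit wait allows (the locators are the ones from the script above):
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 10)
user = wait.until(EC.presence_of_element_located((By.NAME, "user:email")))
password = wait.until(EC.presence_of_element_located((By.ID, "user_password")))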
I am trying to use Selenium to send key inputs to an HTML5 game I created using Phaser. However, I am quite puzzled as to why I can't get it to work. The same code works when I try it on 2048 or other websites like Google, Python, etc. (but not on other HTML5 games built with Phaser). Any tips or pointers would be super useful!
Below is the Python code:
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
driver = webdriver.Firefox()
driver.get("https://dry-anchorage-61733.herokuapp.com/") #this is game link
#driver.get("https://gabrielecirulli.github.io/2048/") #works for 2048
actions = ActionChains(driver)
for _ in range(6):
    actions.send_keys(Keys.ARROW_UP).perform()
    time.sleep(1)
    actions.send_keys(Keys.ARROW_LEFT).perform()
    time.sleep(1)
It looks like Selenium doesn't give focus to your app.
Try clicking on the canvas element and chaining that with sending keys:
element = driver.find_element_by_tag_name("canvas")
actions.click(element).key_down(Keys.ARROW_LEFT).perform()
This worked for me
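Folded back into the loop from the question, that might look roughly like this (a sketch; it assumes a single click on the canvas is enough to keep the game focused):
canvas = driver.find_element_by_tag_name("canvas")
actions.click(canvas).perform()  # give the game canvas focus before sending any keys
for _ in range(6):
    actions.send_keys(Keys.ARROW_UP).perform()
    time.sleep(1)
    actions.send_keys(Keys.ARROW_LEFT).perform()
    time.sleep(1)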
I'm using Selenium for Python to navigate a dynamic webpage (link).
I would like to automatically scroll down to a y-point on the page. I have tried various ways to code this, which seem to work on other pages, but not on this particular webpage.
Some examples of my attempts that don't seem to work with Firefox and Selenium:
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("http://en.boerse-frankfurt.de/etp/Deka-Oekom-Euro-Nachhaltigkeit-UCITS-ETF-DE000ETFL474")
By scrolling to a y-position:
driver.execute_script("window.scrollTo(0, 1400);")
Or by finding an element:
try:
    find_scrollpoint = driver.find_element_by_xpath("//*[@id='main-wrapper']/div[10]/div/div[1]/div[1]")
except:
    pass
else:
    scrollpoint = find_scrollpoint.location["y"]
    driver.execute_script("window.scrollTo(0, arguments[0]);", scrollpoint)
Questions:
What could be some unusual circumstances on webpages such as this, where this very typical scrollTo code may fail?
What can possibly be done to mitigate it? Perhaps keypresses to scroll down – but it would still have to know when to stop pressing down.
It is just quite a dynamic page. I would wait for this specific element to be visible and then scroll into its view. This works for me:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver = webdriver.Firefox()
driver.get("http://en.boerse-frankfurt.de/etp/Deka-Oekom-Euro-Nachhaltigkeit-UCITS-ETF-DE000ETFL474")
wait = WebDriverWait(driver, 10)
elm = wait.until(EC.visibility_of_element_located((By.XPATH, "//*[@id='main-wrapper']/div[10]/div/div[1]/div[1]")))
driver.execute_script("arguments[0].scrollIntoView();", elm)
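If you specifically need to land on a given y-coordinate rather than just bring the element into view, a small variant (a sketch reusing the same waited-for element) is to read its location after the wait:
scrollpoint = elm.location["y"]
driver.execute_script("window.scrollTo(0, arguments[0]);", scrollpoint)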