I am trying to use Selenium to send key inputs to an HTML5 game I created with Phaser. However, I am puzzled as to why I can't get it to work. The same code works when I try it on 2048 or other websites like Google and Python.org (but not on other HTML5 games built with Phaser). Any tips or pointers would be super useful!
Below is the Python code:
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
driver = webdriver.Firefox()
driver.get("https://dry-anchorage-61733.herokuapp.com/") #this is game link
#driver.get("https://gabrielecirulli.github.io/2048/") #works for 2048
actions = ActionChains(driver)
for _ in range(6):
    actions.send_keys(Keys.ARROW_UP).perform()
    time.sleep(1)
    actions.send_keys(Keys.ARROW_LEFT).perform()
    time.sleep(1)
It looks like Selenium doesn't have focus on your app. Try clicking on the canvas element and chaining that with sending keys:
from selenium.webdriver.common.by import By

element = driver.find_element(By.TAG_NAME, "canvas")
actions.click(element).key_down(Keys.ARROW_LEFT).perform()
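Putting both pieces together, here is a minimal sketch (the game URL is the one from the question; `repeated` is just a helper, since `send_keys` accepts a plain string of key codes):

```python
import time

def repeated(key, times):
    # WebDriver key codes are single characters, so a repeated
    # key press is just string concatenation
    return key * times

def run_demo():
    # Selenium imports live here so the helper above stays importable
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.keys import Keys
    from selenium.webdriver.common.action_chains import ActionChains

    driver = webdriver.Firefox()
    driver.get("https://dry-anchorage-61733.herokuapp.com/")
    canvas = driver.find_element(By.TAG_NAME, "canvas")
    # click first so the canvas has keyboard focus, then send the arrows
    ActionChains(driver).click(canvas).send_keys(repeated(Keys.ARROW_UP, 6)).perform()
    time.sleep(1)
    ActionChains(driver).click(canvas).send_keys(Keys.ARROW_LEFT).perform()
```

Call run_demo() with geckodriver on your PATH to try it against the live game.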
This worked for me
I'm trying to scrape betting websites for a student project, but I have trouble scrolling down to display the website's dynamic data. I'm using Selenium, Safari and Python, and the traditional approaches are not working (ActionChains, execute_script with scroll-down, ...).
I was wondering if someone has already run into such issues and found a solution.
from bs4 import BeautifulSoup
from selenium import webdriver
import time
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
driver = webdriver.Safari()
driver.get("https://www.unibet.fr/sport/football")
# Get around the cookies
cookie_button = driver.find_element(By.CLASS_NAME, "ui-button")
driver.execute_script("arguments[0].click();", cookie_button)
time.sleep(5)
And then I'm stuck. I tried different methods:
- execute_script with scroll-down
- sending Space / Down-Arrow keys to the website (this works for other websites on the same display)
- ActionChains with scroll to element...
Thanks for your help
Greg
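One thing worth checking (an assumption, since I can't inspect the site): many betting pages scroll inside an inner container rather than the window, so window-level scrolls do nothing. A sketch that scrolls a specific container instead (the '.content-scroll' selector is a placeholder, find the real one in the web inspector):

```python
import time

def scroll_container_js(css_selector):
    # returns JS that scrolls a given container (not the window) to its bottom
    return ("var el = document.querySelector('{0}');"
            "if (el) {{ el.scrollTop = el.scrollHeight; }}").format(css_selector)

def run_demo():
    from selenium import webdriver  # imported here so the helper stays browser-free
    driver = webdriver.Safari()
    driver.get("https://www.unibet.fr/sport/football")
    time.sleep(5)
    # '.content-scroll' is a guess -- substitute the site's real container
    for _ in range(10):
        driver.execute_script(scroll_container_js(".content-scroll"))
        time.sleep(2)
```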
I am trying to send an ALT+ESC command to my Selenium ChromeDriver to send its window to the back of all other windows.
This is the relevant code:
from selenium.webdriver import Keys
from selenium.webdriver.common.action_chains import ActionChains
actions = ActionChains(driver)
actions.send_keys(Keys.LEFT_ALT, Keys.ESCAPE)
actions.perform()
This is not working; please help.
To press a key combo:
from selenium.webdriver import Keys
from selenium.webdriver.common.action_chains import ActionChains
ActionChains(driver).key_down(Keys.LEFT_ALT).send_keys(Keys.ESCAPE).key_up(Keys.LEFT_ALT).perform()
But this seems to be an OS-level key combination for working with windows, and it will not work in the Selenium context.
Selenium actions are applied to web-page elements and fire events inside the browser.
Ultimately I couldn't find a way to execute an OS-level command through Selenium.
The following code has the same functionality, but it does not necessarily run on the browser:
from pyautogui import hotkey
hotkey('altleft', 'esc')
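If the goal is simply to get the browser window out of the way rather than to reproduce ALT+ESC exactly, Selenium's own window API can minimize it with no OS-level key events (note this minimizes rather than sends the window to the back):

```python
def run_demo():
    from selenium import webdriver
    driver = webdriver.Chrome()
    driver.get("https://example.com")
    driver.minimize_window()  # hides the window without any key combo
```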
This has been bugging me for a long time; if anyone can help me spot the mistake in my program, it will be appreciated. Thanks.
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains

class amz_bot():
    def __init__(self):
        self.driver = webdriver.Firefox()

    def login(self):
        self.driver.get("http://www.amazon.com/")
        time.sleep(5)
        while True:
            ActionChains(self.driver).send_keys(Keys.F5).perform()
            time.sleep(5)

bot = amz_bot()
bot.login()
It looks like the F5 isn't going anywhere.
Selenium spins up a browser, but the ActionChains object is a low-level key press. Even though it's created with the driver object, it doesn't have the context of the window.
If you were to send any other (normal) keys, where would they be sent?
One solution is to use send_keys_to_element instead of just a blind send_keys.
This will put the focus on the element and then send F5.
In my solution here, I'm assuming the page has only one html tag... It's a fair assumption, but my first attempt used body, and it turned out there were three of those.
So... try this:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
class amz_bot():
    def __init__(self):
        self.driver = webdriver.Chrome()

    def login(self):
        self.driver.get("http://www.amazon.com/")
        time.sleep(5)
        while True:
            e = WebDriverWait(self.driver, 10).until(
                EC.presence_of_element_located((By.TAG_NAME, 'html')))
            ActionChains(self.driver).send_keys_to_element(e, Keys.F5).perform()
            time.sleep(10)

bot = amz_bot()
bot.login()
This kind of worked for me - but it's weird. When Selenium presses F5 (so it is doing the key press!) the page isn't refreshing; it's going into a search result.
I'm going to have another look in a moment (and I'll update the answer), but this is a step in the right direction. At least something is happening with the key press.
Update - Another option is to use javascript. Instead of F5, this refreshes the page:
self.driver.execute_script("location.reload()")
It's worth a try for its simplicity, but it might behave similarly to the standard driver.refresh().
Another update:
Another approach is to navigate the browser to the current url - doing the same job as the refresh
self.driver.get(self.driver.current_url)
I'm working on a web crawler with Python using Selenium. I successfully got contents using ChromeDriver, but a problem occurred when I tried headless crawling through PhantomJS: find_element_by_id and find_element_by_name did not work. Is there any difference between these drivers? I am trying to run headless because I want to run this code on an Ubuntu server as a batch job without GUI support.
My script is as below.
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup
import re
#driver = webdriver.PhantomJS('/Users/user/Downloads/phantomjs-2.1.1-macosx/bin/phantomjs')
#driver = webdriver.Chrome('/Users/user/Downloads/chromedriver')
driver = webdriver.PhantomJS()
driver.set_window_size(1120, 550)
driver.get(url)
driver.implicitly_wait(3)
# here I tried two different find_element calls, but neither worked
user = driver.find_element(by=By.NAME,value="user:email")
password = driver.find_element_by_id('user_password')
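A side note worth making here: PhantomJS is no longer maintained, and recent Selenium releases removed support for it, so for a GUI-less Ubuntu server the usual route now is headless Chrome or Firefox. A sketch, where headless_chrome_args collects the flags ('--headless=new' assumes a recent Chrome; older versions use plain '--headless'):

```python
def headless_chrome_args():
    # flags for running Chrome without a display
    return ["--headless=new", "--no-sandbox", "--window-size=1120,550"]

def run_demo():
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    opts = Options()
    for arg in headless_chrome_args():
        opts.add_argument(arg)
    driver = webdriver.Chrome(options=opts)
    driver.get("https://example.com")
    print(driver.title)
    driver.quit()
```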
I have written many scrapers, but I am not really sure how to handle infinite scrollers. These days most websites, e.g. Facebook and Pinterest, have infinite scrollers.
You can use Selenium to scrape an infinite-scrolling website like Twitter or Facebook.
Step 1 : Install Selenium using pip
pip install selenium
Step 2 : use the code below to automate infinite scroll and extract the source code
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import NoAlertPresentException
import sys
import unittest, time, re
class Sel(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.implicitly_wait(30)
        self.base_url = "https://twitter.com"
        self.verificationErrors = []
        self.accept_next_alert = True

    def test_sel(self):
        driver = self.driver
        delay = 3
        driver.get(self.base_url + "/search?q=stckoverflow&src=typd")
        driver.find_element(By.LINK_TEXT, "All").click()
        for i in range(1, 100):
            self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
            time.sleep(4)
        html_source = driver.page_source
        data = html_source.encode('utf-8')

if __name__ == "__main__":
    unittest.main()
Step 3 : Print the data if required.
Most sites that have infinite scrolling do (as Lattyware notes) have a proper API as well, and you will likely be better served by using this rather than scraping.
But if you must scrape...
Such sites are using JavaScript to request additional content from the site when you reach the bottom of the page. All you need to do is figure out the URL of that additional content and you can retrieve it. Figuring out the required URL can be done by inspecting the script, by using the Firefox Web console, or by using a debug proxy.
For example, open the Firefox Web Console, turn off all the filter buttons except Net, and load the site you wish to scrape. You'll see all the files as they are loaded. Scroll the page while watching the Web Console and you'll see the URLs being used for the additional requests. Then you can request that URL yourself and see what format the data is in (probably JSON) and get it into your Python script.
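As a sketch of that workflow (the endpoint URL and the 'items'/'next_page' keys below are hypothetical placeholders, substitute whatever the network panel actually shows):

```python
import json
from urllib.request import Request, urlopen

def extract_items(payload):
    # pull records and the pagination cursor out of a JSON response;
    # 'items' and 'next_page' are made-up keys -- match the real endpoint
    return payload.get("items", []), payload.get("next_page")

def run_demo():
    # placeholder endpoint discovered via the browser's network panel
    url = "https://example.com/api/feed?page=1"
    req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    payload = json.load(urlopen(req))
    items, next_page = extract_items(payload)
    print(len(items), next_page)
```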
Finding the URL of the AJAX source will be the best option, but it can be cumbersome for certain sites. Alternatively you could use a headless browser like QtWebKit from PyQt and send keyboard events while reading the data from the DOM tree. QtWebKit has a nice and simple API.