Python Selenium time out explicitly - python

My script does not move on to the next line after clicking the link below.
monkey_click_by_id(driver, "fwupdatelink")
Is there any way to make it reliably continue after clicking it, without the script hanging?

The issue could be that the WebDriver moves on too quickly, before your script gets a chance to get the information it needs. My suggestion is to make it wait until the element is visible, which is the equivalent of an actual user seeing the link show up in their browser, just to be safe.
Try adding this code before the click in your script. It waits for the element with ID "fwupdatelink" to become visible, or for 10 seconds, whichever comes first.
from selenium.webdriver.common.by import By
from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
WebDriverWait(driver,10).until(EC.visibility_of_element_located((By.ID,"fwupdatelink")))
You could also use a try/except statement, just in case your driver gives you a timeout error.
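For example, a minimal sketch of that try/except around the wait, reusing the imports above and the monkey_click_by_id helper from the question (the 10-second timeout is just an example):
from selenium.common.exceptions import TimeoutException

try:
    WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "fwupdatelink"))
    )
    monkey_click_by_id(driver, "fwupdatelink")
except TimeoutException:
    # The link never became visible within 10 seconds; log it and move on
    print("fwupdatelink was not visible in time, skipping the click")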

Related

ChromeDriver sometimes does and sometimes doesn't close

I am using Selenium to scrape a list of pages. Sometimes the execution just stops: it seems it doesn't get past driver.close(), and it happens completely at random. Below is the code I use to scrape multiple pages.
I would appreciate it if anyone could suggest a way to ensure that the driver closes after scraping the data.
from selenium import webdriver
addresses = ['address1', 'address2',...]
results = []
for address in addresses:
    driver = get_chromedriver() # returns webdriver instance
    driver.get(f"https://www.example.com/{address}")
    values = scrape_some_data()
    driver.close()
    driver.quit()
    results.append(values)
# do something with the list of values
A few things I have noticed which might, or might not, be helpful in solving your issues:
Unless you really need to, it might be better to call driver = get_chromedriver() outside the loop and run driver.quit() after the loop is complete. That will speed up your execution significantly, as the browser will not need to re-open for every address. However, if you need multiple simultaneous instances of the same website, you may have to stick with your current approach.
driver.quit() should be sufficient for your use without the need for driver.close() here.
If you definitely want multiple instances, it might be better to use threading (see the sketch below). I've heard of a few cases where issues occur when a loop destroys and recreates the driver over and over.
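A rough sketch of that threading idea, reusing get_chromedriver() and scrape_some_data() as the question's placeholders (the worker function name and pool size are assumptions):
from concurrent.futures import ThreadPoolExecutor

def scrape_address(address):
    # One driver per thread, always cleaned up even if scraping fails
    driver = get_chromedriver()
    try:
        driver.get(f"https://www.example.com/{address}")
        return scrape_some_data()
    finally:
        driver.quit()

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(scrape_address, addresses))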
Try changing your code as below.
Declare the webdriver instance once and use driver.get to open each URL.
Also, I suggest appending all values before you quit the webdriver.
from selenium import webdriver
driver = get_chromedriver() # returns webdriver instance
addresses = ['address1', 'address2',...]
results = []
for address in addresses:
    driver.get(f"https://www.example.com/{address}")
    values = scrape_some_data()
    results.append(values)
driver.close()
# do something with the list of values
Difference between driver.close() and driver.quit():
close() method closes the current window.
quit() method quits the driver and closes every associated window.
So, if you want to close one window, use close(); to close all windows and end the session, use quit().
One more thing I suggest: add explicit waits for all your data to be loaded before webdriver is closed.
To use explicit waits import:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
And use it like this, where wait is a WebDriverWait instance:
wait = WebDriverWait(driver, 10)
wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "css_selector")))  # for a list of elements
Take this as an example: How to find and compare text with the style property using Selenium/Python?
If none of the above suggestions work, try closing the webdriver in a finally block.
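A minimal sketch of that finally approach, reusing the names from the question (get_chromedriver() and scrape_some_data() are the question's placeholders):
driver = get_chromedriver()  # returns webdriver instance
results = []
try:
    for address in addresses:
        driver.get(f"https://www.example.com/{address}")
        results.append(scrape_some_data())
finally:
    # Runs even if scraping raises, so the browser is never left open
    driver.quit()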

How to click a button as long as the XPath is not found in the web browser using Python? I don't want to use the sleep function

I am creating a program in Python that drives a web browser. When the internet is slow, the program gets an error (the XPath is not found) and stops. I am currently using the sleep function.
How can I create a while loop around the XPath lookup?
Or please explain any other method.
I have done this...
browser.find_element_by_xpath('//*[@id="react-root"]/section/nav/div[2]/div/div/div[2]/div[1]/div/span').click()
time.sleep(3)
What I want is this:
when the internet gets slow, the program should wait for the XPath and then click.
Try using Explicit Wait
See this link : https://www.selenium.dev/documentation/en/webdriver/waits/#explicit-wait
You can and should use webdriver wait.
It was introduced exactly for this purpose.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 20)
wait.until(EC.visibility_of_element_located((By.XPATH, '//*[@id="react-root"]/section/nav/div[2]/div/div/div[2]/div[1]/div/span'))).click()
You should also use better locators.
//*[@id="react-root"]/section/nav/div[2]/div/div/div[2]/div[1]/div/span
doesn't look good at all.
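If even a 20-second wait is occasionally not enough, one option is a retry loop around the wait, which is essentially the "while loop" the question asks about. A sketch reusing the wait and locator above (the retry count and the TimeoutException handling are assumptions):
from selenium.common.exceptions import TimeoutException

locator = (By.XPATH, '//*[@id="react-root"]/section/nav/div[2]/div/div/div[2]/div[1]/div/span')
for attempt in range(3):
    try:
        wait.until(EC.visibility_of_element_located(locator)).click()
        break  # clicked successfully, stop retrying
    except TimeoutException:
        continue  # element still not visible, wait again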

Selenium - How to adjust the mouse speed when moving a slider?

I've got this code to bypass captcha basically:
#!/usr/bin/python
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
import time
import sys
try:
    driver = webdriver.Chrome()
    driver.get(sys.argv[1])
    time.sleep(2)
    slider = driver.find_element_by_id('nc_2_n1z')
    move = ActionChains(driver)
    move.click_and_hold(slider).move_by_offset(400, 0).release().perform()
    time.sleep(5)
    driver.close()
except:
    pass
Everything works, but when I execute this code it moves the slider very fast (probably in less than 1 second), so I can't bypass the "Slide to verify" captcha. From start to finish, I want the slider movement to take 3-5 seconds so it acts more like a human. Is it possible to adjust the speed when moving the slider?
You can try splitting the line below:
move.click_and_hold(slider).move_by_offset(400, 0).release().perform()
You have to click and hold for the desired number of seconds, then release:
move.click_and_hold(slider).perform()
time.sleep(2)
move.move_by_offset(400, 0).release().perform()
However, I am not sure this will get past the captcha, as most modern captchas can figure out whether a script is driving the browser.
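Another option, as a rough sketch: split the 400-pixel drag into small steps with short pauses so the whole gesture takes a few seconds (the step size and pause length are arbitrary, and ActionChains.pause needs a reasonably recent Selenium):
actions = ActionChains(driver)
actions.click_and_hold(slider)
for _ in range(40):
    # 40 steps of 10 px each with ~0.1 s between steps: roughly 4 seconds in total
    actions.move_by_offset(10, 0).pause(0.1)
actions.release().perform()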

WhatsApp Web automation with Selenium not working

I found this python script on github that sends automatic WhatsApp Web messages through Selenium.
#https://www.github.com/iamnosa
#Let's import the Selenium package
from selenium import webdriver
#Let's use Firefox as our browser
web = webdriver.Firefox()
web.get('http://web.whatsapp.com')
input()
#Replace Mr Kelvin with the name of your friend to spam
elem = web.find_element_by_xpath('//span[contains(text(),"Mr Kelvin")]')
elem.click()
elem1 = web.find_elements_by_class_name('input')
while True:
    elem1[1].send_keys('hahahahahahaha')
    web.find_element_by_class_name('send-container').click()
Even though it was meant for spamming, I was trying to adapt it for a good purpose, but the script as it stands doesn't seem to work. Instead of sending a message through WhatsApp Web, it simply loads a QR authentication screen and then it does nothing after I authenticate with my cellphone.
Any clue as to why this is happening? I'm running the latest version of Selenium WebDriver on Firefox, and geckodriver has already been extracted to /usr/bin/.
I realise this post is older, but it still seems to be frequently looked at.
The keystroke explanation of @vhad01 makes sense but did not work for me.
A simple dirty workaround that worked for me:
Replace input() with
import time
time.sleep(25)
where 25 is the number of seconds to wait before the code continues executing (15 should also be enough time to scan the QR code).
The way I implement the scanning of the QR code is by detecting if the search bar is present or not on the page.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
chatlist_search = ".jN-F5.copyable-text.selectable-text"
web.get("https://web.whatsapp.com")
WebDriverWait(web, 60).until(EC.visibility_of_element_located((By.CSS_SELECTOR, chatlist_search)))
This waits until the chat search bar is rendered on the page, or times out after 60 seconds.
This line:
input()
is waiting for keyboard input to continue.
Simply press Enter after scanning the QR code.
I was writing a Selenium script to schedule my messages and I came across your question. Yes, the problem is that input() line.
Instead of using input():
You could use time.sleep(); no doubt it will work, but a better approach is to use an implicit wait, e.g. implicitly_wait(15).
time.sleep() makes you wait even after scanning; the script stops completely for the given number of seconds.
With an implicit wait, if the element appears before the specified time, the script continues executing immediately; otherwise it throws a NoSuchElementException.
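A minimal sketch of that implicit-wait approach, assuming the same Firefox setup and contact name as the script above (the 15-second value is just an example):
from selenium import webdriver

web = webdriver.Firefox()
web.implicitly_wait(15)  # every find_element call now waits up to 15 s for the element
web.get('https://web.whatsapp.com')
# As soon as the chat list appears after the QR scan, this succeeds without a fixed sleep
elem = web.find_element_by_xpath('//span[contains(text(),"Mr Kelvin")]')
elem.click()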
I used a different method for whatsapp_login() and QR scanning. You can see it in my repo: https://github.com/shauryauppal/PyWhatsapp
You might like this approach too.
A better way is to scan the QR code, hit return in the command line, and then proceed further in your code.
browser = webdriver.Firefox()
browser.get('https://web.whatsapp.com/')
print('Please Scan the QR Code and press enter')
input()
This is all you need, and it is a straightforward way to handle the problem.

Scrapy PhantomJS slow on Linux

I'm trying to scrape contents from this page on my Linux machine. I want to display the full list of wines by clicking the "show more" button (around 600 times) until no "show more" button appears. I'm using Selenium and PhantomJS for handling JavaScript, and time.sleep() so that once I click the "show more" button, it sleeps for a short time until another one appears. The problem I'm facing is that initially the program clicks the "show more" button quickly, but after around 100-150 clicks the time taken to detect the button increases at an alarming rate. Below is the code that detects the "show more" button and clicks it.
def parse(self, response):
    sel = Selector(self.driver.get(response.url))
    self.driver.get(response.url)
    click = self.driver.find_elements_by_xpath("//*[@id='btn-more-wines']")
    try:
        while click:
            click[0].click()
            time.sleep(2)
    except Exception:
        print 'no clicks'
An Explicit Wait (instead of time.sleep()) can make a positive impact here:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
wait = WebDriverWait(driver, 10)
click = wait.until(EC.element_to_be_clickable((By.XPATH, "//*[@id='btn-more-wines']")))
This would basically wait for the "Show More" button to become clickable.
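A sketch of how that wait could replace the time.sleep(2) loop inside parse (the use of self.driver, the 10-second timeout, and the TimeoutException handling are assumptions here):
from selenium.common.exceptions import TimeoutException

wait = WebDriverWait(self.driver, 10)
while True:
    try:
        more_button = wait.until(
            EC.element_to_be_clickable((By.XPATH, "//*[@id='btn-more-wines']")))
        more_button.click()
    except TimeoutException:
        # No clickable "show more" button appeared within 10 seconds; assume the list is complete
        break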
Another possible improvement could be achieved by switching to Chrome in a headless mode (with a virtual display):
How do I run Selenium in Xvfb?
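For the headless-Chrome suggestion, a minimal sketch (assuming a Selenium and Chrome/chromedriver version recent enough to support the options keyword and the headless flag):
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless")
options.add_argument("--window-size=1920,1080")
driver = webdriver.Chrome(options=options)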
