Recursive function with selenium - python

I'm trying to do some web automation using Selenium in Python, and if my script encounters any error I want the whole process to restart (an infinite loop).
So basically I tried to use a recursive function and call it again each time an error occurs. It looks like this:
PATH = "C:\Program Files (x86)\chromedriver.exe"
driver = webdriver.Chrome(PATH)
def my_func():
try :
driver.get("https://mywebsite.com")
.
.
.
except Exception as E :
print(str(E)) #printing the exception message
driver.quit() #quitting from the current tab
my_func() #recalling the function again
my_func()
The first time, everything works fine. When an error occurs (because the website switches to another page) it prints this exception: Message: element not interactable, which is expected. But on the second iteration I get this:
HTTPConnectionPool(host='127.0.0.1', port=59887): Max retries exceeded with url: /session/6d23ab3406dbef8f6581c4c7652d2633/url (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000019D2BA3CC10>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
So is there any way to fix this error, or a better solution for this script?

I think the message:
No connection could be made because the target machine actively refused it
may indicate a security issue on the side of the target URL. Try connecting to the internet via another connection, such as a mobile hotspot.

I figured it out: I have to create a new driver on each iteration. I also added a time.sleep before calling the function again, in case the requests were being sent too fast:
import time
from selenium import webdriver

def my_func():
    try:
        PATH = "C:\Program Files (x86)\chromedriver.exe"
        driver = webdriver.Chrome(PATH)   # create a fresh driver on every iteration
        driver.get("https://mywebsite.com")
        # ...
    except Exception as E:
        print(str(E))    # print the exception message
        driver.quit()    # quit the browser session
        time.sleep(5)    # small pause before retrying
        my_func()        # call the function again

my_func()
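For anyone who wants to avoid recursion entirely (every failed attempt otherwise adds another frame to the call stack), here is a minimal non-recursive sketch of the same idea, assuming the same chromedriver path and URL as above:

import time
from selenium import webdriver

PATH = "C:\Program Files (x86)\chromedriver.exe"

while True:
    driver = webdriver.Chrome(PATH)   # fresh driver on every restart
    try:
        driver.get("https://mywebsite.com")
        # ... the rest of the automation goes here ...
    except Exception as E:
        print(str(E))    # log the exception message
    finally:
        driver.quit()    # always shut the browser down before restarting
        time.sleep(5)    # brief pause so restarts are not too aggressive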

Related

metatrader error initialize() failed, error code = (-10005, 'IPC timeout')

I got this error:
initialize() failed, error code = (-10005, 'IPC timeout')
when executing this code:
import MetaTrader5 as mt5

# display data on the MetaTrader 5 package
print("MetaTrader5 package author: ", mt5.__author__)
print("MetaTrader5 package version: ", mt5.__version__)

# establish connection to the MetaTrader 5 terminal
if not mt5.initialize(login=999999, server="xyz-Demo", password="abcdef"):
    print("initialize() failed, error code =", mt5.last_error())
    mt5.shutdown()
Can anyone help me please? Thanks in advance
Here are some possible solutions to your problem:
Try initializing a connection to the MT5 terminal with mt5.initialize(), then log in to the trading account with mt5.login(account, server, password) (see the sketch below).
Try closing all previous connections to the MT5 terminal by calling mt5.shutdown() in all previous scripts.
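A minimal sketch of that two-step flow, reusing the placeholder credentials from the question (adjust them to your own account):

import MetaTrader5 as mt5

# step 1: start / attach to the MT5 terminal first
if not mt5.initialize():
    print("initialize() failed, error code =", mt5.last_error())
else:
    # step 2: log in to the trading account as a separate call
    if mt5.login(999999, password="abcdef", server="xyz-Demo"):
        print("logged in successfully")
    else:
        print("login failed, error code =", mt5.last_error())
    mt5.shutdown()   # release the connection when done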
This is how I solved it.
I divided the process into two steps:
I did the initialization, and then
I logged in.
Importantly, I'm using a Windows PC, and everything started working when I changed the path from
"C:\Program Files\MetaTrader 5\terminal64.exe"
to
"C:/Program Files/MetaTrader 5/terminal64.exe"
The code:
def account_login(login=name, password=key, server=serv):
    if mt5.login(login, password, server):
        print("logged in successfully")
    else:
        print("login failed, error code: {}".format(mt5.last_error()))

def initialize(login=name, server=serv, password=key, path=path):
    if not mt5.initialize(path):
        print("initialize() failed, error code {}".format(mt5.last_error()))
    else:
        account_login(login, password, server)
Also, you may need to launch the app first.
I actually solved the same issue by launching the MetaTrader 5 terminal before running the script.

Python 3 Read data from URL [duplicate]

I have this simple, minimal 'working' example below that opens a connection to Google every two seconds. When I run this script while I have a working internet connection, I get the Success message; when I then disconnect, I get the Fail message, and when I reconnect again I get Success again. So far, so good.
However, when I start the script while the internet is disconnected, I get the Fail messages, and when I connect later, I never get the Success message. I keep getting the error:
urlopen error [Errno -2] Name or service not known
What is going on?
import urllib2, time

while True:
    try:
        print('Trying')
        response = urllib2.urlopen('http://www.google.com')
        print('Success')
        time.sleep(2)
    except Exception, e:
        print('Fail ' + str(e))
        time.sleep(2)
This happens because the DNS name "www.google.com" cannot be resolved. If there is no internet connection the DNS server is probably not reachable to resolve this entry.
It seems I misread your question the first time. The behaviour you describe is, on Linux, a peculiarity of glibc. It only reads "/etc/resolv.conf" once, when loading. glibc can be forced to re-read "/etc/resolv.conf" via the res_init() function.
One solution would be to wrap the res_init() function and call it before calling getaddrinfo() (which is indirectly used by urllib2.urlopen()).
You might try the following (still assuming you're using Linux):
import ctypes

# load glibc and look up its (internal) __res_init function
libc = ctypes.cdll.LoadLibrary('libc.so.6')
res_init = libc.__res_init

# ...

res_init()   # force glibc to re-read /etc/resolv.conf
response = urllib2.urlopen('http://www.google.com')
This might of course be optimized by waiting until "/etc/resolv.conf" is modified before calling res_init().
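A rough sketch of that optimization, assuming the res_init wrapper loaded from libc above and watching the file's modification time:

import os

_resolv_mtime = None

def res_init_if_changed():
    # only force a re-read of the resolver configuration when /etc/resolv.conf changed
    global _resolv_mtime
    try:
        mtime = os.stat('/etc/resolv.conf').st_mtime
    except OSError:
        return
    if mtime != _resolv_mtime:
        _resolv_mtime = mtime
        res_init()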
Another solution would be to install e.g. nscd (name service cache daemon).
For me, it was a proxy problem.
Running the following before importing urllib.request helped:
import os
os.environ['http_proxy'] = ''   # clear any proxy picked up from the environment

import urllib.request
response = urllib.request.urlopen('http://www.google.com')

URLlib2 causing program to stop after 20 attempts

So I am trying to write a program which needs to check for a urllib2 111 error.
I do this by using:
import urllib2

def Refresher():
    req = urllib2.Request('http://example.com/myfile.txt')
    try:
        urlopen = urllib2.urlopen(req)
    except urllib2.HTTPError as e:
        if e.code == 404 or e.code == 111:
            error = True
At the end of Refresher I reschedule it (because Refresher also edits a Tk window) using:
root.after(75, Refresher)
My problem is that when I reboot the server (and therefore cause a 111 error) this works fine for the first 20 times. But after the 20th time through, my function appears to stop running, with no error being thrown in the console. Then when the server comes back up, my function starts running again.
How do I keep my program refreshing, given that the function does other things as well as checking whether the server is down?
Thanks in advance.
Use requests instead of urllib2; it's safer to use and easier to understand. If the error persists, then the problem lies in another part of the server configuration.
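A rough sketch of the same check with requests, reusing the URL and the Tk rescheduling from the question:

import requests

def Refresher():
    error = False
    try:
        resp = requests.get('http://example.com/myfile.txt', timeout=5)
        resp.raise_for_status()                    # raises HTTPError on 4xx/5xx responses
    except requests.exceptions.RequestException:
        # covers HTTP errors, timeouts and refused connections (the "111" case)
        error = True
    # ... update the Tk window here ...
    root.after(75, Refresher)                      # reschedule exactly as before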

Skip Connection Interruptions (Site & BeautifulSoup)

I'm currently doing this with my script:
Get the body (from the source code) and search for a string; it keeps doing this until the string is found (i.e. when the site updates).
However, if the connection is lost, the script stops.
My 'connection' code looks something like this (it keeps repeating in a while loop every 20 seconds):
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
url = ('url')
openUrl = opener.open(url).read()
soup = BeautifulSoup(openUrl)
I've used urllib2 & BeautifulSoup.
Can anyone tell me how I could make the script "freeze" if the connection is lost, check whether the internet connection is alive, and then continue based on the answer? (So, to check whether the script CAN connect, not whether the site is up. If it does the check the current way, the script will stop with a bunch of errors.)
Thank you!
Found the solution!
I need to check the connection on every loop, before actually doing anything.
So I created this function:
def check_internet():
    try:
        header = {"pragma": "no-cache"}
        req = urllib2.Request("http://www.google.ro", headers=header)
        response = urllib2.urlopen(req, timeout=2)
        return True
    except urllib2.URLError:
        return False
And it works, tested it with my connection down & up!
For the other newbies wondering:
while True:
    conn = check_internet()   # just checking for a connection (Google, or any site)
    try:
        if conn is True:
            pass   # your actual code goes here
        else:
            # no connection: wait and re-do the while loop
            time.sleep(30)
    except urllib2.URLError:
        # need to wait
        time.sleep(20)
Works perfectly, the script has been running for about 10 hours now and it handles errors perfectly! It also works with my connection off and shows proper messages.
Open to suggestions for optimization!
Rather than "freeze" the script, I would have the script continue to run only if the connection is alive. If it's alive, run your code. If it's not alive, either attempt to reconnect, or halt execution.
while keepRunning:
    if connectionIsAlive():
        run_your_code()
    else:
        reconnect_maybe()
One way to check whether the connection is alive is described here: Checking if a website is up via Python
If your program "stops with a bunch of errors" then that is likely because you're not properly handling the situation where you're unable to connect to the site (for various reasons such as you not having internet, their website is down, etc.).
You need to use a try/except block to make sure that you catch any errors that occur because you were unable to open a live connection.
try:
    openUrl = opener.open(url).read()
except urllib2.URLError:
    pass   # something went wrong, how to respond?
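One possible way to respond, as a sketch reusing the opener and url from the question: wait a bit and retry until the request finally succeeds.

import time
import urllib2

def fetch_when_available(opener, url, wait=20):
    # keep retrying until the connection (or the site) comes back
    while True:
        try:
            return opener.open(url).read()
        except urllib2.URLError:
            time.sleep(wait)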

mechanize keeps giving URLErrors

I'm automating some stuff with mechanize. I have a working program that logs into a site and goes to a page while logged in. However, sometimes I just get URLErrors stating the connection has timed out, whenever I do anything via mechanize:
URLError: <urlopen error [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>
If I restart the program or re-try the attempts, it will just work. If I visit the same site with Chrome, it will never time out, no matter how often I attempt to log in.
What might be the cause of this? It sounds like mechanize is doing something that isn't ideal. I've gotten a similar pattern with different sites as well - URLErrors when there are in fact no connection issues.
EDIT: I also notice that if I do this - retry immediately - it often works, but then will fail again on the next thing I do, etc.
last_response = ...
for attempt in (1, 2):
    try:
        self.mech.select_form(nr=0)
        self.mech[self.LOGIN_FORM_DATA[1]] = self.user
        self.mech[self.LOGIN_FORM_DATA[2]] = self.password
        resp = self.mech.submit()
        html = resp.read()
        resp.close()
    except mechanize.URLError:
        self.error("URLError submitting form, trying again...")
        self.mech.set_response(last_response)  # reset the response
        continue
    break
