I have a function that is on a schedule to run every minute. The first step is to pull data from an API. Sometimes this works and sometimes it times out. If it works, I want to do a bunch of stuff to the data and then save it. If it doesn't work, I just want to skip to the end of the function and not do anything. So here is how it works:
def job():
    try:
        a = requests.get("API address that sometimes works and sometimes doesn't")
    except:
        print('connection error')
    # a bunch of code that transforms the data and then saves it
and here is the scheduler:
schedule.every().minute.at(":01").do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
I can kind of achieve what I want by moving the #a bunch of code line into the try, but then other potential errors are going to be caught by the "connection error" handler as well.
Is there a way to make the exception skip to the end of the function? Then, because it's on a one-minute schedule, it would just try again later.
I know it's not reproducible code, but this is simple enough that it shouldn't be necessary.
You can simply use return.
def job():
    try:
        a = requests.get("API address that sometimes works and sometimes doesn't")
    except:
        print('connection error')
        return
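To also address the concern about unrelated errors being caught, a sketch (the URL is a placeholder) that catches only requests' network errors, so any bug in the transform code still surfaces instead of being reported as a connection error:

```python
import requests

def job():
    try:
        # placeholder URL standing in for the real API address
        resp = requests.get("https://api.example.com/data", timeout=5)
        resp.raise_for_status()
    except requests.exceptions.RequestException:
        print('connection error')
        return  # skip the rest; the scheduler tries again next minute
    data = resp.json()
    # ... transform and save `data`; errors here are NOT swallowed above ...
```

Since `RequestException` is the base class for requests' timeouts and connection failures, only those take the early-return path.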
I am trying to find a way to re-run my code if there is a timeout or an error.
This happens when the internet connection drops, so I'd need to delay for a few seconds while it comes back online, and then try again.
Would there be a way to run my code from another Python script and tell it to re-run if it times out or disconnects?
Thanks in advance.
If you are talking about an HTTP request connection error, and you are using the requests library, you can use its urllib3 retry support.
FYI: https://docs.python-requests.org/en/master/api/#requests.adapters.HTTPAdapter
Of course, other libraries will have their own retry functions.
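For reference, a sketch of the urllib3 Retry + HTTPAdapter combination the link above describes (the retry counts and status codes here are illustrative choices, not requirements):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# retry up to 3 times with exponential backoff on common transient statuses
retry = Retry(total=3, backoff_factor=1,
              status_forcelist=[429, 500, 502, 503, 504])
adapter = HTTPAdapter(max_retries=retry)

session = requests.Session()
session.mount("https://", adapter)
session.mount("http://", adapter)
# session.get(...) now retries those failures transparently
```

The advantage over a hand-rolled loop is that backoff and status-code handling are done for you at the transport layer.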
If you just want simple retry code, use something like the below:

retries_count = 3  # hard-coded value
delay = 3  # hard-coded value, in seconds

while True:
    try:
        ...  # run some code
        break  # or return, once it succeeds
    except YourCustomError:
        if retries_count <= 0:
            raise
        retries_count -= 1
        time.sleep(delay)
A Google search for "python retry" will give you a lot of references.
You can use a try/except in an infinite loop like this:

while True:
    try:
        # your code
        # call another python script
    except:
        time.sleep(10)
Also, you can check the error and run whatever you need to, depending on the error type. For example:

while True:
    try:
        # your code
        # call another python script
        # break
    except Exception as e:
        if isinstance(e, SomeSpecificError):
            time.sleep(10)
        else:
            pass  # handle other error types as needed
You can actually do it with a try/except block.
You can find all about it here: https://docs.python.org/3/tutorial/errors.html
You just make one script that has something like this:

while True:
    try:
        another_script_name.main()
    except:
        time.sleep(20)

Of course, you need to import both time and the other script you made.
These lines form an infinite loop that always tries to run the main function of the other script you made; if some sort of error occurs, the system sleeps for 20 seconds and then tries again, because it is in the infinite loop.
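A self-contained sketch of that wrapper with a retry cap added, so a permanent failure doesn't loop forever; flaky_main below is a stand-in for the imported script's main():

```python
import time

def flaky_main():
    # stand-in for another_script_name.main(); fails twice, then succeeds
    flaky_main.calls = getattr(flaky_main, "calls", 0) + 1
    if flaky_main.calls < 3:
        raise ConnectionError("simulated dropped connection")
    return "done"

attempts = 0
while True:
    try:
        result = flaky_main()
        break
    except ConnectionError:
        attempts += 1
        if attempts >= 5:
            raise  # give up after five failures
        time.sleep(0.1)  # 20 seconds in the original; shortened here
```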
I am somewhat new to Python and error handling. I am running a program that connects to a data source and continuously receives market data. Every so often I encounter an "Exception in thread Thread", which I think happens because an issue with the data connection occurs, so the computation the program should be doing can't be completed. I think I can resolve this by restarting the program from the beginning if the connection fails: if this type of error is encountered, go back to the beginning and try to reconnect, and if that doesn't fix the problem after a few attempts, stop trying. The thread is currently started in the App class when the class is initialized, so going back to the first line would solve the problem; I'm just not sure how I would do this. My main code is fairly complex, but is structured like so:
if __name__ == '__main__':
    my_app = App(args)
    while my_app.market_open():
        do some computation
        do some more computation, etc.
    my_app.disconnect()
Define a function def main(): and, within it, do the (re)connection stuff.
Then you write the famous:

if __name__ == '__main__':
    main()

To restart from anywhere within your program, call main() from wherever you are! In this case, you put everything under/after if __name__ == "__main__" into main()'s definition, and from inside main() you call main().
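That restart idea can also be written without recursion, as a loop with an attempt cap; connect() below is a stand-in for the App(args) setup, simulated to fail once:

```python
import time

def connect():
    # stand-in for App(args); simulated to fail on the first attempt only
    connect.calls = getattr(connect, "calls", 0) + 1
    if connect.calls == 1:
        raise ConnectionError("simulated connection failure")
    return "connected"

def main(max_attempts=3, delay=0.1):
    # try to (re)connect up to max_attempts times before giving up
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # stop trying after a few attempts
            time.sleep(delay)

result = main()
```

The loop form avoids growing the call stack on every restart, which calling main() from inside main() would do.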
You might also try putting your calculations into a try/except block to catch the error and redirect the code to a new function call. You will need to specify the error:

try:
    while my_app.market_open():
        do some computation
        do some more computation, etc.
except WhatEverYourErrorIsHere:
    while my_app.market_open():
        do some computation again?
Here is an example running different functions in two threads every minute. Again, you will need to specify the error to catch:
import threading
import time
from datetime import datetime

import schedule

def someCallableFunction():
    try:
        print('running someCallableFunction\n')
    except ConnectionError:
        print(datetime.utcnow(), 'External Connection Error')
        return

def someOtherCallableFunction():
    try:
        print('running someOtherCallableFunction\n')
    except ConnectionError:
        print(datetime.utcnow(), 'External Connection Error')
        return

def run_threaded(job_func):
    job_thread = threading.Thread(target=job_func)
    job_thread.start()

schedule.every(1).minutes.at(':00').do(run_threaded, someCallableFunction)
schedule.every(1).minutes.at(':00').do(run_threaded, someOtherCallableFunction)

while True:
    schedule.run_pending()
    time.sleep(1)
I have scraping code inside a for loop, but it would take several hours to complete, and the program stops when my Internet connection breaks. What I (think I) need is a condition at the beginning of the scraper that tells Python to keep trying at that point.
I tried to use the answer from here:
for w in wordlist:
    # some text processing, works fine, returns 'textresult'
    if textresult == '___':  # if there's nothing in the offline resources
        bufferlist = list()
        str1 = str()
        mlist = list()  # I use these in scraping
        br = mechanize.Browser()
        tried = 0
        while True:
            try:
                br.open("http://the_site_to_scrape/")
                # scraping, with several ifs. Each 'for w' iteration results with scrape_result string.
            except (mechanize.HTTPError, mechanize.URLError) as e:
                tried += 1
                if isinstance(e, mechanize.HTTPError):
                    print e.code
                else:
                    print e.reason.args
                if tried > 4:
                    exit()
                time.sleep(120)
                continue
            break
Works while I'm online. When the connection breaks, Python writes the 403 code and skips that word from wordlist, moves on to the next and does the same. How can I tell Python to wait for connection within the iteration?
EDIT: I would appreciate it if you could write at least some of the necessary commands and tell me where they should be placed in my code, because I've never dealt with exception loops.
EDIT - SOLUTION: I applied Abhishek Jebaraj's modified solution. I just added a very simple exception-handling command:

except:
    print "connection interrupted"
    time.sleep(30)

Also, Jebaraj's getcode command will raise an error. Before r.getcode, I used this:

import urllib
r = urllib.urlopen("http: the site ")
The top answer to this question helped me as well.
Write another while loop inside, which will keep trying to connect to the internet.
It will break only when it receives a status code of 200, and then you can continue with your program.
Kind of like:
retry = True
while retry:
    try:
        r = br.open(...)  # your site
        if r.getcode() // 10 == 20:
            retry = False
    except:
        pass  # code to handle any exception

# rest of your code
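A runnable sketch of that wait-for-200 loop, with the browser call factored out so it can be retried and capped; flaky_open below is a stand-in for br.open(...):

```python
import time

def fetch(open_func, max_retries=5, delay=0.1):
    # keep calling open_func until it returns a response with status 200-209
    for _ in range(max_retries):
        try:
            r = open_func()
            if r.getcode() // 10 == 20:
                return r
        except Exception:
            pass  # connection interrupted; wait and retry
        time.sleep(delay)
    raise RuntimeError("could not connect")

class FakeResponse:
    def getcode(self):
        return 200

calls = {"n": 0}
def flaky_open():
    # stand-in for br.open(...): fails twice, then returns a response
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("simulated outage")
    return FakeResponse()

r = fetch(flaky_open)
```

The cap on retries keeps a permanently dead site from hanging the scraper forever; the delay would be much longer (e.g. 30-120 seconds) in real use.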
I have a Python program that works with a webpage via Selenium, run from a cron job every 5 minutes. cron sends me an email for every exception raised (cron just captures all stderr). Sometimes the website has persistent internal errors, and I end up with hundreds of system emails to check. I started ignoring the exceptions, but it would be saner if the program emailed me only when the problem has persisted for one hour.
I have written the following code:
#!/usr/bin/env python3
import sys, os, time

name = os.path.basename(sys.argv[0])

def send_email(server, message):
    # just print for testing
    print('Sending email...')

def handle_exception(exception, function):
    # save a state file in /dev/shm with the name of the exception
    warn_state = '/dev/shm/{}.{}'.format(name, exception.__name__)
    try:
        function
    except exception as exception:
        if os.path.exists(warn_state):
            if time.time() - os.path.getmtime(warn_state) > 3600:
                send_email('localhost', exception.args[1])
        else:
            open(warn_state, 'w').close()
    else:
        if os.path.exists(warn_state):
            os.remove(warn_state)

def my_function():
    # try raising a NameError
    print(undefined_variable)

# run my_function() catching exceptions
handle_exception(NameError, my_function)
When I run the code above, I have checked that the else: part is being executed, indicating that function is not failing at all. I'm new to programming, so I don't really know if this is the proper way to do it, but I created this function because I have to handle at least five different types of exceptions in this script, raised randomly by Selenium due to server or network problems.
You are not actually calling function:

function

does nothing. You need to do:

function()
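A minimal illustration of the difference; referencing a function object is a no-op expression, while the parentheses actually invoke it:

```python
def my_function():
    raise NameError("boom")

referenced_ok = False
try:
    my_function        # just a reference to the function object; nothing runs
    referenced_ok = True
except NameError:
    pass               # never reached: referencing does not call

called_raised = False
try:
    my_function()      # the parentheses invoke it, so the NameError is raised
except NameError:
    called_raised = True
```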
I have a Python thread that runs every 20 seconds. The code is:

import threading

def work():
    try:
        # code here
        pass
    except (SystemExit, KeyboardInterrupt):
        raise
    except Exception as e:
        logger.error('error somewhere', exc_info=True)
    threading.Timer(20, work).start()
It usually runs completely fine. Once in a while, it'll return an error that doesn't make much sense. The errors are the same two errors. The first one might be legitimate, but the errors after that definitely aren't. Then it returns that same error every time the thread runs. If I kill the process and start over, it runs cleanly. I have absolutely no idea what's going on here. Help, please.
As currently defined in your question, you are most likely exceeding your maximum recursion depth. I can't be certain, because you have omitted any opportunities for flow control that may be evident in your try block. Furthermore, every time your code fails to execute, the general except clause will log the exception and then bump you into a new timer with a new logger (assuming you are declaring that in the try block). I think you probably meant to do the following:
import threading
import time

def work():
    try:
        # code here
        pass
    except (SystemExit, KeyboardInterrupt):
        raise
    except Exception as e:
        logger.error('error somewhere', exc_info=True)

t = threading.Timer(20, work)
t.start()

i = 0
while True:
    time.sleep(1)
    i += 1
    if i > 1000:
        break
t.cancel()
If this is in fact the case, the reason your code was not working is that when you call your work function the first time, it processes and then, right at the end, starts another work function in a new timer. This happens ad infinitum until the stack fills up, Python coughs, and gets angry that you have recursed (called a function from within itself) too many times.
My code fix pulls the timer outside of the function so we create a single timer, which calls the work function once every 20 seconds.
Because threading.Timers run in separate threads, we also need to wait around in the main thread. To do this, I added a simple while loop that will run for 1000 seconds and then close the timer and exit. If we didn't wait around in the main loop, it would call your timer and then close out immediately, causing Python to clean up the timer before it executed even once.
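As an alternative sketch (my own suggestion, not part of the original answer), a single background thread driven by threading.Event avoids creating a new Timer object per run and gives a clean way to stop:

```python
import threading

def make_periodic(interval, func):
    # run func every `interval` seconds until the returned event is set
    stop_event = threading.Event()
    def loop():
        # wait() returns False on timeout (run again), True once stop is set
        while not stop_event.wait(interval):
            try:
                func()
            except Exception:
                pass  # log here in real code; keep the loop alive
    threading.Thread(target=loop, daemon=True).start()
    return stop_event

counter = {"ticks": 0}
def tick():
    counter["ticks"] += 1

stop = make_periodic(0.05, tick)  # 20 seconds in the original; shortened here
```

Calling stop.set() ends the loop, and because the worker is a daemon thread it won't keep the process alive on exit.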