Continue with for loop after a certain amount of time - Python

How would you be able to move to the next iteration of a for loop if a given iteration takes more than a certain amount of time? The code should look something like this:

for i in range(0, max_iterations):
    timer function
    call to api

The timer function will serve the purpose of forcing the for loop to continue on to the next iteration if the API call has not finished. It should time out after 120 seconds for that iteration. How would the timer function be written? Thank you in advance!

This is only truly possible with a non-blocking API call or an API call with a timeout. For example, if you are using the socket library, you could use socket.setblocking(0) to make the socket API calls non-blocking.
In your case, you have said you are using the Yandex API. This appears to be JSON over HTTPS, so you may wish to try urllib2.urlopen(). This method accepts a timeout. This is even easier than using a non-blocking call, as urlopen() will simply give up and return an error after the timeout has expired.
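A minimal sketch of that approach (Python 2, matching urllib2); url and max_iterations are stand-ins for the question's real values:

import socket
import urllib2

for i in range(0, max_iterations):
    try:
        # give this iteration at most 120 seconds, then give up
        response = urllib2.urlopen(url, timeout=120)
        data = response.read()
    except (urllib2.URLError, socket.timeout):
        continue  # timed out (or failed); move on to the next iteration
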
Using threads as suggested in some of the comments will only give you a partial solution. Since there is no way to stop a thread started with the threading module, all of the API calls you initiate that do not complete will stay blocked for the life of the Python interpreter, and those threads will never exit.
If you do use the threading module to solve this problem, you should make all of the threads that run API calls daemon threads (thread.setDaemon(True)) so that the interpreter stops when your main thread exits. Otherwise the interpreter will not exit until all of the API calls have completed and returned.
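For illustration, a hedged sketch of that pattern; api_call here is a hypothetical stand-in for the real blocking call:

import threading

def api_call():
    pass  # hypothetical blocking API call

worker = threading.Thread(target=api_call)
worker.setDaemon(True)  # daemon threads won't keep the interpreter alive
worker.start()
worker.join(120)        # wait at most 120 seconds for this iteration
if worker.is_alive():
    pass  # still blocked: abandon the thread and continue the loop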

Related

How to make an API for a Python app that is in a continuous loop?

I have a Python app that calls a recursive method which runs forever. It is a loop that scrapes a webpage and looks for a number, and once it finds it, it prints out a message, increments the number, and calls the method again with the incremented number. This goes on forever because the webpage updates itself about once a week, and my method prints out the message when that update is caught.
I want to make a mobile app that notifies users when the method prints out a message (ideally within a minute or two of the change). What is the best way to create an API that would allow me to do this? If there is another way, how can I do it?
Using a recursive method for an infinite loop is a big mistake: every time you call the method again, the previous call stays on the stack, and if you do this indefinitely you will eventually get a stack overflow error. The best approach for an infinite job is a thread with a simple while True:
import threading

def sender():
    while True:
        # do your job here
        pass

SenderThread = threading.Thread(target=sender)
SenderThread.start()
edit:
according to this:
Python threads are used in cases where the execution of a task involves some waiting. One example would be interaction with a service hosted on another computer, such as a webserver. Threading allows python to execute other code while waiting; this is easily simulated with the sleep function.
The reason I used a thread is so that the main program can do its own job, respond to inputs, or anything else you need.

Background timer via Tornado IOLoop.spawn_callback

I want to run a timer in a Tornado-based web app such that it runs in the background and is non-blocking.
Once the timer finishes, a specific task has to be called, so it is very important that the timer completes exactly on time.
What should be the ideal way to do it ?
I read Tornado IOLoop.spawn_callback in the documentation but I am not very clear that it would behave correctly.
I don't quite understand the statement in the doc
Unlike all other callback-related methods on IOLoop, spawn_callback does not associate the callback with its caller’s stack_context
If you want to run a function after a specific delay, you can use IOLoop.call_later. Use it like this:

from tornado.ioloop import IOLoop

def my_func():
    # do something
    pass

IOLoop.current().call_later(5, my_func)  # will call my_func 5 seconds later
IOLoop.spawn_callback is used for running a callback/function on the next iteration of the IOLoop, that is, almost instantly. You can't add a timeout to spawn_callback. Since you want to schedule a callback after a timeout, IOLoop.call_later is what you need.
In your comment you asked
Why according to you IOLoop.spawn_callback is not to be used?
Well, I never said to not use it. You can use it if you need it. In this case, you don't.
So, when do you need it? When you need to run a callback almost instantly, without a timeout, that's when you can use spawn_callback. But even then, there's IOLoop.add_callback, which is used much more widely than spawn_callback.
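A short sketch contrasting the two (fire_task, run_now, and the 120-second delay are made up for illustration):

from tornado.ioloop import IOLoop

def fire_task():
    print("timer finished, running the task")

def run_now():
    print("runs on the next IOLoop iteration, almost instantly")

io_loop = IOLoop.current()
io_loop.call_later(120, fire_task)   # scheduled after a timeout
io_loop.spawn_callback(run_now)      # no timeout possible here
io_loop.start()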

How can I stop the execution of a Python function from outside of it?

So I have this library that I use, and within one of my functions I call a function from that library, which happens to take a really long time. Now, at the same time, I have another thread running where I check for different conditions. What I want is that if a condition is met, I want to cancel the execution of the library function.
Right now I'm checking the conditions at the start of the function, but if the conditions happen to change while the library function is running, I don't need its results and want to return from it.
Basically this is what I have now.
def my_function():
    if condition_checker.condition_met():
        return
    library.long_running_function()
Is there a way to run the condition check every second or so and return from my_function when the condition is met?
I've thought about decorators and coroutines. I'm using 2.7, but if this can only be done in 3.x I'd consider switching; it's just that I can't figure out how.
You cannot terminate a thread. Either the library supports cancellation by design, where it internally would have to check for a condition every once in a while to abort if requested, or you have to wait for it to finish.
What you can do is call the library in a subprocess rather than a thread, since processes can be terminated through signals. Python's multiprocessing module provides a threading-like API for spawning forks and handling IPC, including synchronization.
Or spawn a separate subprocess via subprocess.Popen if forking is too heavy on your resources (e.g. memory footprint through copying of the parent process).
I can't think of any other way, unfortunately.
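For illustration, a minimal sketch of the multiprocessing approach; library and condition_checker are the question's own names, and any return value would need to come back via a multiprocessing.Queue or Pipe:

import time
import multiprocessing

def run_cancellable():
    proc = multiprocessing.Process(target=library.long_running_function)
    proc.start()
    while proc.is_alive():
        if condition_checker.condition_met():
            proc.terminate()  # force-kills the child (SIGTERM on Unix)
            proc.join()
            break
        time.sleep(1)         # re-check the condition every second
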
Generally, I think you want to run your long_running_function in a separate thread, and have it occasionally report its information to the main thread.
This post gives a similar example within a wxPython program.
Presuming you are doing this outside of wxPython, you should be able to replace wx.CallAfter and wx.Publisher with threading.Thread and PubSub.
It would look something like this:
import threading
import time

def myfunction():
    # subscribe to the long_running_function
    while True:
        # get the data published by long_running_function
        if condition_met:
            # publish a stop command
            break
        time.sleep(1)

def long_running_function():
    for loop in loops:
        # check for a stop command from the main thread; if found, break
        # do an iteration
        # publish some data
        pass

# .start() launches long_running_function without blocking the main flow
threading.Thread(target=long_running_function).start()
myfunction()
I haven't used PubSub a ton, so I can't quickly whip up the code, but this should get you there.
As an alternative, do you know the stop criteria before you launch the long_running_function? If so, you can just pass it as an argument and check whether it is met internally.
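If the stop criteria can only be evaluated from outside, here is a sketch of the same idea with threading.Event standing in for PubSub (the loop bodies are placeholders):

import threading
import time

stop_event = threading.Event()

def long_running_function(stop_event):
    for i in range(1000):        # placeholder for the real iterations
        if stop_event.is_set():
            break                # the stop command arrived
        time.sleep(0.1)          # placeholder for one iteration of work

worker = threading.Thread(target=long_running_function, args=(stop_event,))
worker.start()

# main thread: check conditions, then signal the worker to stop
time.sleep(1)
stop_event.set()
worker.join()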

Is it possible to prevent python's http.client.HTTPResponse.read() from hanging when there is no data?

I'm using Python http.client.HTTPResponse.read() to read data from a stream. That is, the server keeps the connection open forever and sends data periodically as it becomes available. There is no expected length of response. In particular, I'm getting Tweets through the Twitter Streaming API.
To accomplish this, I repeatedly call http.client.HTTPResponse.read(1) to get the response, one byte at a time. The problem is that the program will hang on that line if there is no data to read, which there isn't for large periods of time (when no Tweets are coming in).
I'm looking for a method that will get a single byte of the HTTP response, if available, but that will fail instantly if there is no data to read.
I've read that you can set a timeout when the connection is created, but setting a timeout on the connection defeats the whole purpose of leaving it open for a long time waiting for data to come in. I don't want to set a timeout; I want to read data if there is data to be read, or fail instantly if there is not, without waiting at all.
I'd like to do this with what I have now (using http.client), but if it's absolutely necessary that I use a different library to do this, then so be it. I'm trying to write this entirely myself, so suggesting that I use someone else's already-written Twitter API for Python is not what I'm looking for.
This code gets the response; it runs in a separate thread from the main one:
while True:
    try:
        readByte = dc.request.read(1)
    except:
        readByte = []
    if len(readByte) != 0:
        dc.responseLock.acquire()
        dc.response = dc.response + chr(readByte[0])
        dc.responseLock.release()
Note that the request is stored in dc.request and the response in dc.response; these are created elsewhere. dc.responseLock is a Lock that prevents dc.response from being accessed by multiple threads at once.
With this running on a separate thread, the main thread can then get dc.response, which contains the entire response received so far. New data is added to dc.response as it comes in without blocking the main thread.
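For completeness, the main-thread side of that pattern is just a locked copy (a sketch reusing the question's names):

dc.responseLock.acquire()
snapshot = dc.response   # copy out everything received so far
dc.responseLock.release()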
This works perfectly when it's running, but I run into a problem when I want it to stop. I changed my while statement to while not dc.twitterAbort, so that when I want to abort this thread I just set dc.twitterAbort to True, and the thread will stop.
But it doesn't. This thread remains for a very long time afterward, stuck on the dc.request.read(1) part. There must be some sort of timeout, because it does eventually get back to the while statement and stop the thread, but it takes around 10 seconds for that to happen.
How can I get my thread to stop immediately when I want it to, if it's stuck on the call to read()?
Again, this method is working to get Tweets, the problem is only in getting it to stop. If I'm going about this entirely the wrong way, feel free to point me in the right direction. I'm new to Python, so I may be overlooking some easier way of going about this.
Your idea is not new; there are OS mechanisms(*) for making sure that an application only issues I/O-related system calls when they are guaranteed not to block. These mechanisms are usually used by async I/O frameworks, such as tornado or gevent. Use one of those, and you will find it very easy to run code "while" your application is waiting for an I/O event, such as waiting for incoming data on a socket.
If you use gevent's monkey-patching method, you can proceed using http.client, as requested. You just need to get used to the cooperative scheduling paradigm introduced by gevent/greenlets, in which your execution flow "jumps" between sub-routines.
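A hedged sketch of the monkey-patching approach (the host and path are hypothetical, and gevent must be installed):

from gevent import monkey
monkey.patch_all()  # makes the stdlib's sockets cooperative

import gevent
import http.client

def stream_reader():
    conn = http.client.HTTPSConnection("stream.example.com")  # hypothetical
    conn.request("GET", "/stream")
    resp = conn.getresponse()
    while True:
        byte = resp.read(1)  # yields to other greenlets while waiting
        if not byte:
            break
        # append byte to the shared response buffer here

def heartbeat():
    while True:
        print("main flow still responsive")
        gevent.sleep(1)

gevent.joinall([gevent.spawn(stream_reader), gevent.spawn(heartbeat)])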
Of course you can also perform blocking I/O in another thread (like you did), so that it does not affect the responsiveness of your main thread. Regarding your "How can I get my thread to stop immediately" problem:
Forcing a thread that's blocking in a system call to stop is usually not a clean or even valid procedure (also see Is there any way to kill a Thread in Python?). Either -- if your application has finished its jobs -- you take down the entire process, which also takes down all contained threads, or you just leave the thread be and give it as much time to terminate as it requires (those 10 seconds you were referring to are not a problem -- are they?)
If you do not want to have such long-blocking system calls anywhere in your application (be it in the main thread or not), then use above-mentioned techniques to prevent blocking system calls.
(*) see e.g. O_NONBLOCK option in http://man7.org/linux/man-pages/man2/open.2.html

Is calling QCoreApplication.processEvents() on a set interval safe?

I have a Qt application written in PySide (the Qt Python binding). This application has a GUI thread and many different QThreads that are in charge of performing some heavy lifting - some rather long tasks. As such a long task sometimes gets stuck (usually because it is waiting for a server response), the application sometimes freezes.
I was therefore wondering if it is safe to call QCoreApplication.processEvents() "manually" every second or so, so that the GUI event queue is cleared (processed)? Is that a good idea at all?
It's safe to call QCoreApplication.processEvents() whenever you like. The docs explicitly state your use case:
You can call this function occasionally when your program is busy performing a long operation (e.g. copying a file).
There is no good reason, though, why threads would block the event loop in the main thread. (Unless your system really can't keep up.) So that's worth looking into anyway.
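A minimal sketch of the documented use case (PySide/Qt4 imports assumed; the loop stands in for the long operation):

from PySide.QtGui import QApplication
import time

app = QApplication([])

def long_operation():
    for step in range(100):
        time.sleep(0.05)              # one chunk of the heavy work
        QApplication.processEvents()  # keep the GUI responsive meanwhile

long_operation()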
A couple of hints people might find useful:
A. You need to beware of the following:
Every so often the threads want to send stuff back to the main thread, so they post an event and call processEvents.
If the code run from that event also calls processEvents, then instead of returning to the next statement, Python can dispatch a worker thread again, which can then repeat this process.
The net result can be hundreds or thousands of nested processEvents calls, which can then result in a "recursion level exceeded" error message.
Moral - if you are running a multi-threaded application, do NOT call processEvents in any code initiated by a thread which runs in the main thread.
B. You need to be aware that CPython has a Global Interpreter Lock (GIL) that limits threads so that only one can run at any one time, and the way that Python decides which thread to run is counter-intuitive. Running processEvents from a worker thread does not seem to do what it says on the can, and CPU time is not allocated to the main thread or to Python's internal threads. I am still experimenting, but it seems that putting worker threads to sleep for a few milliseconds allows other threads to get a look in.
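In code, hint B amounts to something like this inside each worker thread's loop (the 5 ms figure is the author's "few milliseconds", not a tuned value):

import time

def worker_loop():
    while True:
        # ... one chunk of heavy work ...
        time.sleep(0.005)  # briefly release the GIL so other threads run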
