Django + multiprocessing.dummy.Pool + sleep = weird result - python

In my Django application, I want to do some work in the background when a certain view is requested. To that end, I created a multiprocessing.dummy.Pool of workers, and whenever that URL is called, I start a new task on it. The task executed in the background may have to retry a few times, with a certain timeout between retries.
Since this whole thing runs, so to speak, off the "UI thread", I thought I'd use sleep for the timeouts. When I unit-test this arrangement, everything works fine. But when it runs under Django, the thread reaches the sleep statement and then never wakes up; when I restart the Django app, the thread gets past the sleep statement and is then immediately killed by the restart. I know I could schedule retries using Timers, but I wanted a simpler solution.
Here's a simplified version of my code:
from multiprocessing.dummy import Pool
from time import sleep
import logging

from django.conf import settings

POOL = Pool(settings.POOL_WORKERS)

def background_task(arg):
    refresh = True
    try:
        for i in range(settings.GET_RETRY_LIMIT):
            # the actual call was lost when posting; fetch() is a placeholder
            status, result = fetch(arg, refresh=refresh)
            refresh = False
            if status is Statuses.OK:
                return result
            if i < settings.GET_RETRY_LIMIT - 1:
                sleep(settings.GET_SLEEP_TIME)
    except Exception as e:
        logging.error(e)
    return []

def do_background_work(arg):
    POOL.apply_async(
        background_task,
        (arg,)
    )

def my_view(request):
    arg = get_arg_from_request(request)
    do_background_work(arg)
    return Response("Ok")
UPD: By the way, it turns out that the workers are most probably being killed by harakiri (uWSGI's worker timeout).
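For reference, a rough sketch of the Timer-based retry approach I mentioned (untested; fetch() stands in for the real call from the snippet above, and the settings names are the same ones used there):
import logging
import threading

def background_task(arg, attempt=0, refresh=True):
    try:
        status, result = fetch(arg, refresh=refresh)  # placeholder call
        if status is Statuses.OK:
            return
    except Exception as e:
        logging.error(e)
    # schedule the next attempt on a Timer instead of blocking
    # a pool worker in sleep()
    if attempt + 1 < settings.GET_RETRY_LIMIT:
        threading.Timer(
            settings.GET_SLEEP_TIME,
            background_task,
            args=(arg, attempt + 1, False),
        ).start()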

Related

Immediately raise exceptions in concurrent.futures

I run several threads concurrently using concurrent.futures. All of them need to complete successfully for the next steps in the code to succeed.
While I can surface any exceptions at the end by calling .result() on each future, ideally an exception raised in a single thread would immediately stop all threads. This would help identify bugs in any task sooner, rather than waiting until all long-running tasks complete.
Is this possible?
It's possible to exit after the first exception and not submit any new jobs to the Executor. However, once a job has started running, it can't be cancelled; you have to wait for the running jobs to finish (or time out). See this question for details. Here's a short example that cancels any not-yet-started jobs once the first exception occurs. It still waits for the already running jobs to finish. This uses the FIRST_EXCEPTION constant listed in the concurrent.futures docs.
import time
import concurrent.futures

def example(i):
    print(i)
    assert i != 1
    time.sleep(i)
    return i

if __name__ == "__main__":
    futures = []
    with concurrent.futures.ThreadPoolExecutor() as executor:
        for number in range(5):
            futures.append(executor.submit(example, number))
        done, not_done = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_EXCEPTION
        )
        for future in done:
            try:
                future.result()
            except Exception as e:
                for f in futures:
                    print(f.cancel())  # cancel all unstarted futures
                raise e
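On Python 3.9+ there is also executor.shutdown(cancel_futures=True), which drops any not-yet-started jobs for you. A rough sketch of that variant (run_all_or_fail_fast is just an illustrative wrapper, not part of the code above):
import concurrent.futures

def run_all_or_fail_fast(func, args_list):
    with concurrent.futures.ThreadPoolExecutor() as executor:
        futures = [executor.submit(func, a) for a in args_list]
        try:
            for future in concurrent.futures.as_completed(futures):
                future.result()  # re-raises the first exception seen
        except Exception:
            # drop pending (not-yet-started) jobs; already running ones
            # still finish when the with-block waits on shutdown
            executor.shutdown(wait=False, cancel_futures=True)
            raise
    return [f.result() for f in futures]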

Python, Use threading and schedule to keep running a function constantly

I am making a bot that auto-posts to Instagram using instabot. The problem is that if I exceed a certain number of requests, the bot terminates the script after retrying for a few minutes.
The solution I came up with is to schedule the script to run every hour or so, and, to ensure it keeps running constantly, to use threading to restart the posting function when its thread is dead.
Here is the function responsible for posting; if the instabot instance retries sending requests for a few minutes and fails, it terminates the whole script:
def main():
    create_db()
    try:
        os.mkdir("images")
        print("[INFO] Images Directory Created")
    except:
        print("[INFO] Images Directory Found")
    # GET A SUBMISSION FROM HOT
    submissions = list(reddit.subreddit('memes').hot(limit=100))
    for sub in submissions:
        print("*" * 68)
        url = sub.url
        print(f'[INFO] URL : {url}')
        if "jpg" in url or "png" in url:
            if not sub.stickied:
                print("[INFO] Valid Post")
                if check_if_exist(sub.id) is None:
                    id_ = sub.id
                    name = sub.title
                    link = sub.url
                    status = "FALSE"
                    print(f"""
                    [INFO] ID = {id_}
                    [INFO] NAME = {name}
                    [INFO] LINK = {link}
                    [INFO] STATUS = {status}
                    """)
                    # SAVE THE SUBMISSION TO THE DATABASE
                    insert_db(id_, name, link, status)
                    post_instagram(id_)
                    print(f"[INFO] Picture Uploaded, Next Upload is Scheduled in 60 min")
                    break
    time.sleep(5 * 60)
The scheduling function:
def func_start():
    schedule.every(1).hour.do(main)
    while True:
        schedule.run_pending()
        time.sleep(10 * 60)
And the last piece of code:
if __name__ == '__main__':
    t = threading.Thread(target=func_start)
    while True:
        if not t.is_alive():
            t.start()
        else:
            pass
So basically I want to keep running the main function every hour or so, but I am not having any success.
Looks to me like schedule and threading are overkill for your use case, since your script only performs a single task, so you do not need concurrency and can run the whole thing in the main thread. You primarily just need to catch exceptions from the main function. I would go with something like this:
if __name__ == '__main__':
    while True:
        try:
            main()
        except Exception as e:
            # will handle exceptions from `main` so they do not
            # terminate the script
            # note that it would be better to only catch the exact
            # exception you know you want to ignore (rather than
            # the very broad `Exception`), and let other ones
            # terminate the script
            print("Exception:", e)
        finally:
            # will sleep 10 minutes regardless whether the last
            # `main` run succeeded or not, then continue running
            # the infinite loop
            time.sleep(10 * 60)
...unless you actually want each main run to start at precise 60-minute intervals, in which case you'll probably need either threading or schedule. Because if running main takes, say, 3 to 5 minutes, then simply sleeping 60 minutes after each execution means you'll be launching the function every 63 to 65 minutes.
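If you do want fixed 60-minute starts without schedule or threading, a rough single-threaded sketch is to measure how long main() took and sleep only for the remainder of the hour:
import time

INTERVAL = 60 * 60  # start main() once per hour

if __name__ == '__main__':
    while True:
        started = time.monotonic()
        try:
            main()
        except Exception as e:
            print("Exception:", e)
        # sleep only for what's left of the hour, so each run *starts*
        # roughly every INTERVAL seconds no matter how long main() took
        elapsed = time.monotonic() - started
        time.sleep(max(0, INTERVAL - elapsed))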

Python Tornado - Asynchronous Request is blocking

The request handlers are as follows:
class TestHandler(tornado.web.RequestHandler):  # localhost:8888/test
    @tornado.web.asynchronous
    def get(self):
        t = threading.Thread(target=self.newThread)
        t.start()

    def newThread(self):
        print "new thread called, sleeping"
        time.sleep(10)
        self.write("Awake after 10 seconds!")
        self.finish()

class IndexHandler(tornado.web.RequestHandler):  # localhost:8888/
    def get(self):
        self.write("It is not blocked!")
        self.finish()
When I GET localhost:8888/test, the page loads for 10 seconds and then shows "Awake after 10 seconds!"; while it is loading, if I open localhost:8888/index in a new browser tab, the index page is not blocked and loads instantly. This fits my expectations.
However, while /test is loading, if I open another /test in a new browser tab, it is blocked. The second /test only starts processing after the first has finished.
What mistake have I made here?
What you are seeing is actually a browser limitation, not an issue with your code. I added some extra logging to your TestHandler to make this clear:
class TestHandler(tornado.web.RequestHandler):  # localhost:8888/test
    @tornado.web.asynchronous
    def get(self):
        print "Thread starting %s" % time.time()
        t = threading.Thread(target=self.newThread)
        t.start()

    def newThread(self):
        print "new thread called, sleeping %s" % time.time()
        time.sleep(10)
        self.write("Awake after 10 seconds! %s" % time.time())
        self.finish()
If I open two curl sessions to localhost/test simultaneously, I get this on the server side:
Thread starting 1402236952.17
new thread called, sleeping 1402236952.17
Thread starting 1402236953.21
new thread called, sleeping 1402236953.21
And this on the client side:
Awake after 10 seconds! 1402236962.18
Awake after 10 seconds! 1402236963.22
Which is exactly what you expect. However in Chromium, I get the same behavior as you. I think that Chromium (perhaps all browsers) will only allow one connection at a time to be opened to the same URL. I confirmed this by making IndexHandler run the same code as TestHandler, except with slightly different log messages. Here's the output when opening two browser windows, one to /test, and one to /index:
index Thread starting 1402237590.03
index new thread called, sleeping 1402237590.03
Thread starting 1402237592.19
new thread called, sleeping 1402237592.19
As you can see both ran concurrently without issue.
I think you picked the "wrong" test for checking parallel GET requests, because you're using a blocking function in your test: time.sleep(). That blocking behavior doesn't occur when you simply render an HTML page.
What happens is that get() (which handles all GET requests) is blocked while time.sleep is running; it cannot process any new GET requests, so they are put into a kind of queue.
So if you really want to test with sleep(), use Tornado's non-blocking equivalent: tornado.gen.sleep()
Example:
from tornado import gen

@gen.coroutine
def get(self):
    yield self.time_wait()

@gen.coroutine
def time_wait(self):
    yield gen.sleep(15)
    self.write("done")
Open multiple tabs in your browser and you'll see that all requests are processed as they arrive, without queueing the new requests that come in.
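If the blocking call can't be replaced with a coroutine-friendly version, newer Tornado (5.0+) also lets you push it onto a thread pool from a native coroutine. A rough sketch (SleepHandler is just an illustrative name):
import time
from concurrent.futures import ThreadPoolExecutor

import tornado.ioloop
import tornado.web

EXECUTOR = ThreadPoolExecutor(max_workers=4)

class SleepHandler(tornado.web.RequestHandler):
    async def get(self):
        # run the blocking call on a worker thread; the IOLoop stays free
        # to serve other requests in the meantime
        await tornado.ioloop.IOLoop.current().run_in_executor(
            EXECUTOR, time.sleep, 10
        )
        self.write("Awake after 10 seconds!")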

How to abort a client request

I have this code in my application, which uses the Django framework:
import os, time

def get(self, request, id=None):
    pid = os.fork()
    if pid == 0:
        self.run()
        time.sleep(600)
    else:
        time.sleep(20)
    return write_response()
I create a child process that produces the data to be returned. I really need to do the work in a child process: the run function uses external software to calculate the data, and if I don't create a new process, only the first request will succeed (a constraint of that external software).
The child process takes about 10 seconds to do the work. The parent waits 20 seconds and then returns a response using the data calculated by the child. For the client everything works, but on the server I get an exception (Broken pipe).
When the child continues executing, the client has already closed the socket, so it raises an exception. What should I do to fix this?
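One common workaround (a rough sketch, assuming the Broken pipe comes from the child eventually falling back into Django's response path): end the child with os._exit() as soon as its work is done, so it never tries to write to the client socket.
import os
import time

def get(self, request, id=None):
    pid = os.fork()
    if pid == 0:
        # child: do the work, then exit immediately; os._exit() skips
        # Django's response handling, so the child never touches the
        # (by then closed) client socket
        try:
            self.run()
        finally:
            os._exit(0)
    # parent: give the child time to produce its data, then respond
    time.sleep(20)
    return write_response()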

Can't catch SIGINT in multithreaded program

I've seen many topics about this particular problem, but I still can't figure out why I'm not catching a SIGINT in my main thread.
Here is my code:
def connect(self, retry=100):
    tries = retry
    logging.info('connecting to %s' % self.path)
    while True:
        try:
            self.sp = serial.Serial(self.path, 115200)
            self.pileMessage = pilemessage.Pilemessage()
            self.pileData = pilemessage.Pilemessage()
            self.reception = reception.Reception(self.sp, self.pileMessage, self.pileData)
            self.reception.start()
            self.collisionlistener = collisionListener.CollisionListener(self)
            self.message = messageThread.Message(self.pileMessage, self.collisionlistener)
            self.datastreaminglistener = dataStreamingListener.DataStreamingListener(self)
            self.datastreaming = dataStreaming.Data(self.pileData, self.datastreaminglistener)
            return
        except serial.serialutil.SerialException:
            logging.info('retrying')
            if not retry:
                raise SpheroError('failed to connect after %d tries' % (tries - retry))
            retry -= 1

def disconnect(self):
    self.reception.stop()
    self.message.stop()
    self.datastreaming.stop()
    while not self.pileData.isEmpty():
        self.pileData.pop()
    self.datastreaminglistener.remove()
    while not self.pileMessage.isEmpty():
        self.pileMessage.pop()
    self.collisionlistener.remove()
    self.sp.close()
if __name__ == '__main__':
    import time
    try:
        logging.getLogger().setLevel(logging.DEBUG)
        s = Sphero("/dev/rfcomm0")
        s.connect()
        s.set_motion_timeout(65525)
        s.set_rgb(0, 255, 0)
        s.set_back_led_output(255)
        s.configure_locator(0, 0)
    except KeyboardInterrupt:
        s.disconnect()
In the main function I call connect(), which launches threads over which I don't have direct control.
When I launch this script, I would like to be able to stop it by hitting Control+C, which should call disconnect() and stop all the other threads.
In the code I provided it doesn't work because there is no thread in the main function. But I have already tried putting all the instructions from the main block in a thread with a while loop, without success.
Is there a simple way to solve my problem?
Thanks
Your indentation is messed up, but there's enough to go on.
Your main thread isn't catching SIGINT because it's not alive. There is nothing that stops your main thread from continuing past the try block, seeing no more code, and closing up shop.
I am not familiar with Sphero. I just attempted to google its docs and was linked to a bunch of 404 pages, so I'll tell you what you would normally do in a threaded environment - join your threads to the main thread so that the main thread can't finish execution before the worker threads.
for t in my_thread_list:
    t.join()  # main thread can't get past here until all the threads finish
If your Sphero object doesn't provide join-like functionality, you could hack something in that blocks, e.g.
raw_input('Press Enter to disconnect')
s.disconnect()
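Or, keeping your original structure, a rough sketch that just keeps the main thread alive in a sleep loop so it can receive the KeyboardInterrupt (assuming the Sphero class from your question):
import time

if __name__ == '__main__':
    s = Sphero("/dev/rfcomm0")
    s.connect()
    try:
        # time.sleep is interruptible, so Ctrl+C raises KeyboardInterrupt
        # here while the worker threads keep running in the background
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        s.disconnect()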
