Use of threading.Thread.join() - python

I am new to multithreading in Python and am trying to learn it using the threading module. I have written a very simple multithreading program and I am having trouble understanding the threading.Thread.join method.
Here is the source code of the program I have made
import threading

val = 0

def increment():
    global val
    print "Inside increment"
    for x in range(100):
        val += 1
        print "val is now {} ".format(val)

thread1 = threading.Thread(target=increment, args=())
thread2 = threading.Thread(target=increment, args=())
thread1.start()
#thread1.join()
thread2.start()
#thread2.join()
What difference does it make if I use
thread1.join()
thread2.join()
which I have commented out in the above code? I ran both versions (with the join calls commented out and with them uncommented), but the output was the same.

A call to thread1.join() blocks the thread in which you're making the call, until thread1 is finished. It's like wait_until_finished(thread1).
For example:
import time
from threading import Thread

def printer():
    for _ in range(3):
        time.sleep(1.0)
        print "hello"

thread = Thread(target=printer)
thread.start()
thread.join()
print "goodbye"
prints
hello
hello
hello
goodbye
Without the .join() call, goodbye would come first, followed by the three hellos.
Also, note that threads in Python do not provide any additional performance in terms of CPU processing power, because of a thing called the Global Interpreter Lock: only one thread executes Python bytecode at a time. So while threads are useful for spawning off potentially blocking (e.g. I/O, network) or time-consuming tasks to keep the main thread free for other work, they do not allow you to leverage multiple cores or CPUs; for that, look at multiprocessing, which uses subprocesses but exposes an API equivalent to that of threading.
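As a rough, hedged sketch of that equivalent API (the crunch function below is made up), you can swap threading.Thread for multiprocessing.Process with essentially no other changes:

from multiprocessing import Process

def crunch(n):
    # hypothetical CPU-bound work; it runs in a separate process,
    # so it is not limited by the parent interpreter's GIL
    total = 0
    for i in xrange(n):
        total += i * i
    print "crunch(%d) done" % n

if __name__ == '__main__':        # required on platforms that spawn, e.g. Windows
    procs = [Process(target=crunch, args=(10 ** 6,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()                  # same start()/join() semantics as threading.Thread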
PLUG: for the same reason, if you're interested in concurrency, you might also want to look into a fine library called Gevent. It essentially makes threading much easier to use, much faster (when you have many concurrent activities) and less prone to concurrency-related bugs, while letting you keep coding the same way as with "real" threads (a small sketch follows the reading list below). Twisted, Eventlet, Tornado and many others are either equivalent or comparable. In any case, I'd strongly suggest reading these classics:
Generator Tricks for Systems Programmers
A Curious Course on Coroutines and Concurrency
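And, as promised above, here is a minimal, hedged sketch of the gevent style (the task function and its names are made up); spawn() returns greenlets and joinall() waits for them, much like join() on a list of threads:

import gevent

def task(name, delay):
    # gevent.sleep() yields to the hub, so the other greenlets run meanwhile
    gevent.sleep(delay)
    print "%s finished after %.1fs" % (name, delay)

jobs = [gevent.spawn(task, "job-%d" % i, 0.5) for i in range(3)]
gevent.joinall(jobs)

For real I/O (sockets, urllib and so on) you would typically also call gevent.monkey.patch_all() near the start of the program so the blocking calls cooperate.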

I modified the code so that you can see exactly how join works.
Run this code with the join calls commented out, then uncommented, and observe the output in both cases.
import threading
import time

val = 0

def increment(msg, sleep_time):
    global val
    print "Inside increment"
    for x in range(10):
        val += 1
        print "%s : %d\n" % (msg, val)
        time.sleep(sleep_time)

thread1 = threading.Thread(target=increment, args=("thread_01", 0.5))
thread2 = threading.Thread(target=increment, args=("thread_02", 1))
thread1.start()
#thread1.join()
thread2.start()
#thread2.join()

As the relevant documentation states, join makes the caller wait until the thread terminates.
In your case, the output is the same because join doesn't change the program's behaviour here; it would typically be used to make the program exit cleanly, only once all the threads have terminated.
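As a small sketch of that idea (reusing the increment function and the global val from the question), joining both threads guarantees that the final print runs only after both have finished:

thread1 = threading.Thread(target=increment)
thread2 = threading.Thread(target=increment)
thread1.start()
thread2.start()
thread1.join()   # the main thread blocks here until thread1 finishes
thread2.join()   # ...and here until thread2 finishes
print "both threads done, val is now {}".format(val)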

Related

Multi Threading: Two Threads vs Nested Threads Python

I want to speed up my program as much as possible. Can someone tell me which approach will be better in terms of speed? As per my requirements I can go with either approach.
Approach 1 (spawned 2 threads from main process):
def a(something):
    # Does something at a fixed interval
    while 1:
        print("a")
        time.sleep(60)

def b(something):
    # Keeps running indefinitely without any delay
    while 1:
        print("b")

def main():
    something = {}
    t1 = threading.Thread(target=b, args=(something,))
    t1.start()
    t2 = threading.Thread(target=a, args=(something,))
    t2.start()
Approach 2 (spawned a nested thread):
def a(something):
    # Does something at a fixed interval
    while 1:
        print("a")
        time.sleep(60)

def b(something):
    t2 = threading.Thread(target=a, args=(something,))
    t2.start()
    # Keeps running indefinitely without any delay
    while 1:
        print("b")

def main():
    something = {}
    t1 = threading.Thread(target=b, args=(something,))
    t1.start()
P.S. a and b are just dummy functions, but the real ones do things in a similar way.
The coexistence of threads is flat, not hierarchical. A thread does not operate within another thread. (I am pretty sure that this is the case for CPython; it would be nice if someone could confirm it.)
In other words, there is no difference between a thread spawned from the main thread and a thread spawned from any other thread (what you refer to as a nested thread).
Regarding the other small differences between your two approaches (such as global vs local variables), they would hardly affect speed.
And finally, in this particular case multithreading should work as expected: Python's infamous GIL won't have much effect, because time.sleep() releases it, so the other thread keeps running while one thread sleeps.
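As a quick, hedged illustration of that flatness (the function and thread names below are made up), a thread started from inside another thread shows up as an ordinary sibling in threading.enumerate():

import threading
import time

def inner():
    time.sleep(0.2)

def outer():
    # A thread started from another thread is just another flat thread,
    # not a "child" that lives inside this one.
    threading.Thread(target=inner, name="inner").start()
    time.sleep(0.2)

threading.Thread(target=outer, name="outer").start()
time.sleep(0.1)
print [t.name for t in threading.enumerate()]   # e.g. ['MainThread', 'outer', 'inner']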
Instead of multiple threads I suggest you look into multiprocessing: although it is heavier, Python threads effectively run on only one CPU core no matter how many of them you use. This is due to the GIL, and multiprocessing is the usual way to get around it.
An alternative would be to use PyPy, a different Python implementation from the official one, with speedups in some cases; note, however, that standard PyPy still has a GIL, so by itself it does not make multithreading use multiple cores.
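As a minimal sketch of the multiprocessing suggestion (the cpu_bound function below is a made-up stand-in for real work), multiprocessing.Pool spreads CPU-bound calls across processes, so no single interpreter's GIL is the bottleneck:

from multiprocessing import Pool

def cpu_bound(n):
    # hypothetical CPU-heavy work; each call runs in its own process
    return sum(i * i for i in xrange(n))

if __name__ == '__main__':
    pool = Pool(processes=4)                    # roughly one worker per core
    print pool.map(cpu_bound, [10 ** 6] * 4)    # blocks until all four finish
    pool.close()
    pool.join()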

Python - Why doesn't multithreading increase the speed of my code?

I tried improving my code by running this with and without using two threads:
from threading import Lock
from threading import Thread
import time

start_time = time.clock()
arr_lock = Lock()
arr = range(5000)

def do_print():
    # Disable arr access to other threads; they will have to wait if they need to read
    a = 0
    while True:
        arr_lock.acquire()
        if len(arr) > 0:
            item = arr.pop(0)
            print item
            arr_lock.release()
            b = 0
            for a in range(30000):
                b = b + 1
        else:
            arr_lock.release()
            break

thread1 = Thread(target=do_print)
thread1.start()
thread1.join()

print time.clock() - start_time, "seconds"
When running 2 threads my code's run time increased. Does anyone know why this happened, or perhaps know a different way to increase the performance of my code?
The primary reason you aren't seeing any performance improvements with multiple threads is because your program only enables one thread to do anything useful at a time. The other thread is always blocked.
Two things:
Remove the print statement that's invoked inside the lock. print statements drastically impact performance and timing; also, the I/O channel to stdout is essentially single-threaded, so you've built another implicit lock into your code.
Use a proper sleep technique instead of "spin locking" and counting up from 0 to 30000. That's just going to burn a core needlessly.
Try this as your main loop
while True:
    arr_lock.acquire()
    if len(arr) > 0:
        item = arr.pop(0)
        arr_lock.release()
        time.sleep(0)
    else:
        arr_lock.release()
        break
This should run slightly better... I would even advocate getting the sleep statement out altogether so you can just let each thread have a full quantum.
However, because each thread is either doing "nothing" (sleeping or blocked on acquire) or just doing a single pop call on the array while in the lock, the majority of the time spent is going to be in the acquire/release calls instead of actually operating on the array. Hence, multiple threads aren't going to make your program run faster.
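If you want to avoid managing the lock by hand, one common alternative (a sketch, not the original poster's code) is to let Queue.Queue do the locking for you; the contention cost remains, so don't expect threads to speed this up either:

from threading import Thread
from Queue import Queue, Empty

work = Queue()
for i in range(5000):
    work.put(i)

def drain():
    while True:
        try:
            item = work.get_nowait()   # Queue handles its own locking internally
        except Empty:
            break

threads = [Thread(target=drain) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()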

Why doesn't eventlet GreenPool call func after spawn_n unless waitall()?

This code prints nothing:
import eventlet

def foo(i):
    print i

def main():
    pool = eventlet.GreenPool(size=100)
    for i in xrange(100):
        pool.spawn_n(foo, i)
    while True:
        pass
But this code prints numbers:
def foo(i):
    print i

def main():
    pool = eventlet.GreenPool(size=100)
    for i in xrange(100):
        pool.spawn_n(foo, i)
    pool.waitall()
    while True:
        pass
The only difference is pool.waitall(). In my mind, waitall() means wait until all the greenthreads in the pool have finished working, but the infinite loop should also give every greenthread time to run, so pool.waitall() shouldn't be necessary.
So why does this happen?
Reference: http://eventlet.net/doc/modules/greenpool.html#eventlet.greenpool.GreenPool.waitall
The threads created in an eventlet GreenPool are green threads. This means that they all exist within one thread at the operating-system level, and the Python interpreter handles switching between them. This switching can only happen when one thread either yields (deliberately provides an opportunity for other threads to run) or is waiting for I/O.
When your code runs:
while True:
    pass
… that thread of execution is blocked – stuck on that code – and no other green threads can get scheduled.
When you instead run:
pool.waitall()
… eventlet makes sure that it yields while waiting.
You could emulate this same behaviour by modifying your while loop slightly to call the eventlet.sleep function, which yields:
while True:
    eventlet.sleep()
This could be useful if you wanted to do something else in the while True: loop while waiting for the threads in your pool to complete. Otherwise, just use pool.waitall() – that’s what it’s for.
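To make the scheduling point concrete, here is a small hedged sketch (the worker function is made up): the spawned greenthread only gets to run once the main greenthread yields, for example via eventlet.sleep():

import eventlet

def worker():
    print "worker ran"

eventlet.spawn_n(worker)

# A busy loop such as `while True: pass` never yields, so `worker`
# would never be scheduled. Yielding, even with a zero-length sleep,
# lets the hub run the other greenthreads:
eventlet.sleep(0)
print "main done"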

Return whichever expression returns first

I have two different functions, f and g, that compute the same result with different algorithms. Sometimes one of them takes a long time while the other terminates quickly. I want to create a new function that runs both simultaneously and then returns the result from whichever finishes first.
I want to create that function with a higher order function
h = firstresult(f, g)
What is the best way to accomplish this in Python?
I suspect that the solution involves threading. I'd like to avoid discussion of the GIL.
I would simply use a Queue for this. Start the threads and the first one which has a result ready writes to the queue.
Code
from threading import Thread
from time import sleep
from Queue import Queue

def firstresult(*functions):
    queue = Queue()
    threads = []
    for f in functions:
        def thread_main(f=f):   # bind f now; a plain closure would see only the last f
            queue.put(f())
        thread = Thread(target=thread_main)
        threads.append(thread)
        thread.start()
    result = queue.get()
    return result

def slow():
    sleep(1)
    return 42

def fast():
    return 0

if __name__ == '__main__':
    print firstresult(slow, fast)
Live demo
http://ideone.com/jzzZX2
Notes
Stopping the threads is an entirely different topic. For this you would need to add some state variable to the threads that is checked at regular intervals. As I want to keep this example short, I simply skipped that part and assumed that all workers get time to finish their work even though their results are never read.
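As a hedged sketch of what that state variable could look like (the worker function below is made up), a threading.Event works as a cooperative stop flag that each worker checks between chunks of its work:

import threading
import time

stop = threading.Event()

def worker(queue):
    for step in range(100):
        if stop.is_set():          # check the flag at regular intervals
            return
        time.sleep(0.01)           # stand-in for one chunk of real work
    queue.put("worker result")

# After the first result arrives you would ask the others to stop:
#     result = queue.get()
#     stop.set()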
Skipping the discussion about the GIL as requested by the questioner. ;-)
Now, unlike my suggestion in the other answer, this piece of code does exactly what you are requesting:
from multiprocessing import Process, Queue
import random
import time

def firstresult(func1, func2):
    queue = Queue()
    proc1 = Process(target=func1, args=(queue,))
    proc2 = Process(target=func2, args=(queue,))
    proc1.start(); proc2.start()
    result = queue.get()
    proc1.terminate(); proc2.terminate()
    return result

def algo1(queue):
    time.sleep(random.uniform(0, 1))
    queue.put("algo 1")

def algo2(queue):
    time.sleep(random.uniform(0, 1))
    queue.put("algo 2")

print firstresult(algo1, algo2)
Run each function in a new worker thread; the two worker threads send the result back to the main thread via a one-item queue or something similar. When the main thread receives the result from the winner, it kills (do Python threads support kill yet? lol.) both worker threads to avoid wasting time (one function may take hours while the other takes only a second).
Replace the word thread with process if you want.
You will need to run each function in another process (with multiprocessing) or in a different thread.
If both are CPU bound, multithreading won't help much - exactly due to the GIL -
so multiprocessing is the way.
If the return value is a pickleable (serializable) object, I have this decorator I created that simply runs the function in the background, in another process:
https://bitbucket.org/jsbueno/lelo/src
It is not exactly what you want, as both functions are non-blocking and start executing right away. The trick with this decorator is that it blocks (and waits for the function to complete) only when you try to use the return value.
But on the other hand - it is just a decorator that does all the work.
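On Python 3 (or Python 2 with the futures backport installed), a hedged alternative sketch uses concurrent.futures and its FIRST_COMPLETED wait mode, which avoids hand-rolling the queue; note that the losing function keeps running in its thread, since threads cannot be killed:

from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def firstresult(*functions):
    pool = ThreadPoolExecutor(max_workers=len(functions))
    futures = [pool.submit(f) for f in functions]
    done, _ = wait(futures, return_when=FIRST_COMPLETED)
    pool.shutdown(wait=False)      # don't block on the slower functions
    return next(iter(done)).result()

Usage would mirror the earlier example: firstresult(slow, fast) returns 0.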

Multi-threaded web scraping in Python/PySide/PyQt

I'm building a web scraper of sorts. Basically, what the software would do is:
User (me) inputs some data (IDs) - IDs are complex, so not just numbers
Based on those IDs, the script visits http://localhost/ID
What is the best way to accomplish this? I'm looking at upwards of 20-30 concurrent connections to do it.
I was thinking, would a simple loop be the solution? This loop would start QThreads (it's a Qt app), so they would run concurrently.
The problem I see with the loop, however, is how to instruct it to use only those IDs that haven't been used before, i.e. in the iteration/thread executed just before it. Would I need some sort of "delegator" function to keep track of which IDs have been used and delegate the unused ones to the QThreads?
Now I've written some code but I am not sure if it is correct:
class GUI(QObject):
    def __init__(self):
        print "GUI CLASS INITIALIZED!!!"
        self.worker = Worker()
        for i in xrange(300):
            QThreadPool().globalInstance().start(self.worker)

class Worker(QRunnable):
    def run(self):
        print "Hello world from thread", QThread.currentThread()
Now I'm not sure if this really achieves what I want. Is this actually running in separate threads? I'm asking because currentThread() is the same every time this is executed, so it doesn't look that way.
Basically, my question comes down to how do I execute several same QThreads concurrently?
Thanks in advance for the answer!
As Dikei says, Qt is a red herring here. Focus on just using Python threads as it will keep your code much simpler.
In the code below we have a set, job_queue, containing the jobs to be executed. We also have a function, worker_thread, which takes a job from the passed-in queue and executes it; here it just sleeps for a random period of time. The key thing is that set.pop is thread safe.
We create an array of thread objects, workers, and call start on each as we create it. Per the Python documentation, threading.Thread.start runs the given callable in a separate thread of control. Lastly we go through each worker thread and block until it has exited.
import threading
import random
import time

pool_size = 5
job_queue = set(range(100))

def worker_thread(queue):
    while True:
        try:
            job = queue.pop()
        except KeyError:
            break
        print "Processing %i..." % (job, )
        time.sleep(random.random())
    print "Thread exiting."

workers = []
for thread in range(pool_size):
    workers.append(threading.Thread(target=worker_thread, args=(job_queue, )))
    workers[-1].start()

for worker in workers:
    worker.join()

print "All threads exited"
