I have a class that starts multiple threads upon initialization. Originally I was using threading, but I learned the hard way how painfully slow it can get. As I researched this, it seems that multiprocessing would be faster because it actually utilizes multiple cores. The only hard part is the fact that it doesn't automatically share values. How could I make the following code share self across all processes?
Ideally, it would also share across processes outside of the class as well.
Also, I would rather share the entire class than share each individual value, if possible.
import multiprocessing as mp
from time import sleep

class ThreadedClass:
    def __init__(self):
        self.var = 0
        # Here is where I would want to tell multiprocessing to share 'self'
        change_var = mp.Process(target=self.change_var, args=())
        print_var = mp.Process(target=self.print_var, args=())
        change_var.start()
        sleep(0.5)
        print_var.start()

    def change_var(self):
        while True:
            self.var += 1
            print("Changed var to ", self.var)
            sleep(1)

    def print_var(self):
        while True:
            print("Printing var: ", self.var)
            sleep(1)

ThreadedClass()
I've also included the output of the above code below:
Changed var to 1
Printing var: 0
Changed var to 2
Printing var: 0
Changed var to 3
Printing var: 0
Changed var to 4
Printing var: 0
Changed var to 5
Printing var: 0
Changed var to 6
Printing var: 0
Changed var to 7
Printing var: 0
Changed var to 8
Printing var: 0
Changed var to 9
Printing var: 0
Changed var to 10
Thanks in advance.
First of all, multiprocessing means that you are making sub-processes. In general, they each have their own space in memory and don't talk to each other. To be clear, when you start a new process, Python copies all your global variables into that process and then runs it separately from everything else. So when you spawned your two processes, change_var and print_var, each of them received a copy of self, and since there are two copies of self, neither of them is talking to the other. One process is updating its own copy of self and printing numbers that count up; the other never sees an update to its copy. You can easily test this yourself:
import multiprocessing as mp

LIST = []  # This list is in the parent process.

def update(item):
    LIST.append(item)

p = mp.Process(target=update, args=(5,))  # Copies LIST, update, and anything else that is global.
p.start()
p.join()

# The LIST in the sub-process is cleaned up in memory when the process ends.
print(LIST)  # The LIST in the parent process is not updated.
It would be very dangerous if different processes were updating each other's variables while they were working with them; so, naturally, to isolate them (and prevent segmentation faults), the entire namespace is copied. If you want sub-processes to talk to each other, you need to communicate through a manager or a Queue that is designed for exactly that.
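For the single counter in the original question, a shared multiprocessing.Value is another option worth knowing about. This is my own minimal sketch of that idea, not part of the original code:

import multiprocessing as mp
from time import sleep

def change_var(var):
    for _ in range(3):
        with var.get_lock():  # the synchronized Value carries its own lock
            var.value += 1
        print("Changed var to", var.value)
        sleep(1)

def print_var(var):
    for _ in range(3):
        print("Printing var:", var.value)
        sleep(1)

if __name__ == '__main__':
    var = mp.Value('i', 0)  # a single shared integer living in shared memory
    change = mp.Process(target=change_var, args=(var,))
    show = mp.Process(target=print_var, args=(var,))
    change.start()
    sleep(0.5)
    show.start()
    change.join()
    show.join()

Here print_var actually sees the updated value, because both processes are looking at the same block of shared memory instead of private copies.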
I personally recommend writing your code around something like a Pool() instead. Very clean: you put in a list, you get back a list, done (a minimal sketch of that is shown right below). But if you want to go down the rabbit hole, after that sketch is roughly what I read on the multiprocessing website.
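Here is that Pool() sketch; the work function is just a stand-in for whatever per-item job you have:

import multiprocessing as mp

def work(x):
    return x * x  # stand-in for whatever per-item work you need

if __name__ == '__main__':
    with mp.Pool(4) as pool:                 # four worker processes
        results = pool.map(work, range(10))  # put in a list, get back a list
    print(results)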
import multiprocessing as mp

def f(queue):
    queue.put(['stuff', 15])

def g(queue):
    queue.put(['other thing'])

queue = mp.Queue()
p = mp.Process(target=f, args=(queue,))
q = mp.Process(target=g, args=(queue,))
p.start()
q.start()
for _ in range(2):
    print(queue.get())
p.join()
q.join()
The main idea is that the queue itself does not get copied; instead it lets one process leave things in it for another process to pick up. When you run queue.get(), it waits until something has been left in the queue by some other process; queue.get() blocks and waits. This means you could have one process read what the other process left behind, like:
import multiprocessing as mp

def f(queue):
    obj = queue.get()  # Blocks this sub-process until something shows up.
    if obj:
        print('Something was in the queue from some other process.')
        print(obj)

def g(queue):
    queue.put(['leaving information here in queue'])

queue = mp.Queue()
p = mp.Process(target=f, args=(queue,))
q = mp.Process(target=g, args=(queue,))
p.start()
This is kind of cool, so I recommend pausing here for a second to think about how p is sitting there blocked, waiting for something to process. Next, start the q process:
q.start()
Notice that p didn't get to finish processing until q was started. This is because the Queue blocked and waited for something to show up.
# clean up
p.join()
q.join()
You can read more at: https://docs.python.org/3.4/library/multiprocessing.html?highlight=process#multiprocessing.Queue
Related
So I'm using a multiprocessing pool with 3 threads to run a function that does a certain job. I have a variable defined outside this function which equals 0, and every time the function does its job it should add 1 to that variable and print it, but every thread uses a separate variable.
here is the code:
from multiprocessing import Pool

number_of_doe_jobs = 0

def thefunction():
    global number_of_doe_jobs
    # JOB CODE GOES HERE
    number_of_doe_jobs += 1

if __name__ == "__main__":
    p = Pool(3)
    p.map(checker, datalist)
The desired behavior is that each job adds 1 to number_of_doe_jobs,
but every thread adds 1 to its own number_of_doe_jobs, so there are effectively 3 separate number_of_doe_jobs variables.
You are not spawning 3 threads. You are spawning 3 processes. Each process has its own memory space, with its own copy of the interpreter and its own independent object space. Global variables are not shared across processes. There are ways to create shared variables (which communicate over sockets), but you might be better served by using a multiprocessing.Queue. Create it in the mainline code, and pass it as a parameter to the subprocesses. Have the jobs push a "complete" flag on the queue, and have the mainline code read the results.
FOLLOWUP
The NUMBER of jobs will always be equal to len(datalist), so it's not clear why you would track that. Here, I create a multiprocessing queue and pass that to the function. Python implements that by creating a socket. The checker function sends a signal when it finishes, and the mainline code fetches each one and prints it. q.get will block until something is in the queue.
import multiprocessing
from functools import partial

def checker(q, item):
    # JOB CODE GOES HERE
    q.put("done")

if __name__ == "__main__":
    m = multiprocessing.Manager()
    q = m.Queue()  # a manager queue can be handed to pool workers
    p = multiprocessing.Pool(3)
    p.map(partial(checker, q), datalist)  # partial() is picklable, unlike a lambda
    for _ in datalist:
        print(q.get())
I need to run a parallelized process on a list of inputs, but the process uses all of the variables and functions defined earlier in the code. The work itself can be parallelized, because each job depends only on one element of the list.
So I have two possibilities, but I don't know how to implement either of them:
1) use a class, and parallelize a method that uses all the functions and attributes of that class. That is: run the method in a parallelized loop, while letting each job read the attributes of the object without creating a copy of it.
2) just have a big main and define global variables before running the parallelized process.
Ex:
from joblib import Parallel, delayed

def func(x, y, z):
    # do something
    a = func0(x, y)  # whatever function
    a = func1(a, z)  # whatever function
    return a

if __name__ == "__main__":
    # a lot of stuff in which you create y and z
    global y, z
    result = Parallel(n_jobs=2)(delayed(func)(i, y, z) for i in range(10))
So the problem is that by the time I get to the parallel call, y and z are already defined and are just lookup data. My question is: how can I pass those values to the parallelized function without Python creating a copy for each job?
If you just need to pass a list to some parallel workers, I would use the built-in threading module. From what I can tell from your question this is all that you need, and you are able to pass arguments to the threads.
Here is a basic threading setup:
import threading

def func(x, y):
    print(x, y)  # random example

x, y = "foo", "bar"

threads = []
for _ in range(10):  # create 10 threads
    t = threading.Thread(target=func, args=(x, y))
    threads.append(t)
    t.start()

for t in threads:
    t.join()  # wait for the thread to complete
However if you need to keep track of that list in a thread-safe way you will want to use a Queue:
import threading
import queue

# build a thread-safe list
my_q = queue.Queue()
for i in range(1000):
    my_q.put(i)

# here is your worker function
def worker(q):
    while not q.empty():
        task = q.get()  # get the next value from the queue
        print(task)
        q.task_done()   # when you are done, tell the queue that this task is complete

# spin up some threads
threads = []
for _ in range(10):
    t = threading.Thread(target=worker, args=(my_q,))
    threads.append(t)
    t.start()

my_q.join()  # joining the queue means your code will wait here until the queue is empty
Now, to answer your question about shared state: you can create an object to hold your variables. That way, instead of passing a copy of the variables to each thread, you pass the object itself (I believe this pattern is sometimes called a Borg, but I could be slightly wrong on that). When doing this, if you plan on making any changes to the shared variables, it is important to ensure they are thread-safe. For example, if two threads try to increment a number at the same time, you could lose an update as one thread overwrites the other. To prevent this we use a threading.Lock object. (If you do not care about this, just ignore all of the lock stuff below.)
There are other ways of doing this, but I find this method to be easy to understand and extremely flexible:
import threading

# worker function
def worker(vars, lock):
    with lock:
        vars.counter += 1
        print(f"{threading.current_thread().name}: counter = {vars.counter}")

# this holds your variables to be referenced by threads
class Vars(object):
    counter = 0

vars = Vars()
lock = threading.Lock()

# spin up some threads
threads = []
for _ in range(10):
    t = threading.Thread(target=worker, args=(vars, lock))
    threads.append(t)
    t.start()

for t in threads:
    t.join()
I'm trying to reduce the time it takes to read a database with roughly 100,000 entries that need to be formatted a specific way. To do this, I tried to use Python's multiprocessing Pool.map, which works perfectly except that I can't seem to get any form of queue reference to work across the worker processes.
I've been using information from Filling a queue and managing multiprocessing in python to guide me for using queues across multiple processes, and Using a global variable with a thread to guide me for using global variables across threads. I've gotten the software to work, but when I check the list/queue/dict/map length after running the process, it always returns zero.
I've written a simple example to show what I mean:
You have to run the script as a file; the Pool's initializer function does not work from the interactive interpreter.
from multiprocessing import Pool
from collections import deque

global_q = deque()

def my_init(q):
    global global_q
    global_q = q
    q.append("Hello world")

def map_fn(i):
    global global_q
    global_q.append(i)

if __name__ == "__main__":
    with Pool(3, my_init, (global_q,)) as pool:
        pool.map(map_fn, range(3))
    for p in range(len(global_q)):
        print(global_q.pop())
Theoretically, when I pass the queue object reference from the main thread to the worker threads using the pool's initializer, and then initialize each worker's global variable with that reference, then when I append elements to the queue from the map function later, that reference should still point to the original queue object (long story short, everything should end up in the same queue, because they all point to the same location in memory).
So, I expect:
Hello World
Hello World
Hello World
1
2
3
Of course the 1, 2, 3 may come out in an arbitrary order, but what you'll actually see as output is nothing at all ('').
How come when I pass object references to the pool function, nothing happens?
Here's an example of how to share something between processes by extending the multiprocessing.managers.BaseManager class to support deques.
There's a Customized managers section in the documentation about creating them.
import collections
from multiprocessing import Pool
from multiprocessing.managers import BaseManager

class DequeManager(BaseManager):
    pass

class DequeProxy(object):
    def __init__(self, *args):
        self.deque = collections.deque(*args)
    def __len__(self):
        return self.deque.__len__()
    def appendleft(self, x):
        self.deque.appendleft(x)
    def append(self, x):
        self.deque.append(x)
    def pop(self):
        return self.deque.pop()
    def popleft(self):
        return self.deque.popleft()

# Currently only exposes a subset of deque's methods.
DequeManager.register('DequeProxy', DequeProxy,
                      exposed=['__len__', 'append', 'appendleft',
                               'pop', 'popleft'])

process_shared_deque = None  # Global only within each process.

def my_init(q):
    """ Initialize module-level global. """
    global process_shared_deque
    process_shared_deque = q
    q.append("Hello world")

def map_fn(i):
    process_shared_deque.append(i)  # deques don't have a "put()" method.

if __name__ == "__main__":
    manager = DequeManager()
    manager.start()
    shared_deque = manager.DequeProxy()

    with Pool(3, my_init, (shared_deque,)) as pool:
        pool.map(map_fn, range(3))

    for p in range(len(shared_deque)):  # Show left-to-right contents.
        print(shared_deque.popleft())
Output:
Hello world
0
1
2
Hello world
Hello world
You can't use a global variable for multiprocessing.
Pass a multiprocessing queue to the function instead.
from multiprocessing import Queue

queue = Queue()

def worker(q):
    q.put(something)
You are probably also experiencing that the code itself is all right, but because the pool creates separate processes, even the errors are separated, and therefore you not only don't see that the code isn't working, you never see the errors it throws either.
The reason your output is '' is that nothing was ever appended to your q/global_q. And if something was appended, it went into some variable that may be called global_q, but it is a totally different one from the global_q in your main process.
Try putting print('Hello world') inside the function you want to run in the pool, and you will see for yourself that nothing is actually printed at all. That function runs in a separate process, outside your main process, and the only practical way to reach into it is through a multiprocessing Queue. You put data into the Queue with queue.put('something') and take it out with something = queue.get().
Try to understand this code and you will do well:
import multiprocessing as mp

# This queue will be shared among all processes, but you must pass it to each
# process as an argument; you cannot just use it as a global variable. The worker
# functions run in totally different processes and nothing can really reach into
# them... except a multiprocessing.Queue, which is designed to be shared across
# processes.
shared_queue = mp.Queue()

def channel(que, channel_num):
    que.put(channel_num)

if __name__ == '__main__':
    processes = [mp.Process(target=channel, args=(shared_queue, channel_num))
                 for channel_num in range(8)]
    for p in processes:
        p.start()
    for p in processes:  # wait for all the processes to finish
        p.join()
    for i in range(8):   # Get data from the queue (you can actually do this at any time).
        print(shared_queue.get())
I have a large dataset in a list that I need to do some work on.
I want to start x amount of threads to work on the list at any given time, until everything in that list has been popped.
I know how to start x amount of threads (let's say 20) at a given time (by using thread1....thread20.start()),
but how do I make it start a new thread when one of the first 20 threads finishes, so that at any given time there are 20 threads running until the list is empty?
what I have so far:
import threading

class queryData(threading.Thread):
    def __init__(self, threadID):
        threading.Thread.__init__(self)
        self.threadID = threadID

    def run(self):
        global lst
        # Get trade from list
        trade = lst.pop()
        tradeId = trade[0][1][:6]
        print tradeId

thread1 = queryData(1)
thread1.start()
Update
I have something going with the following code:
for i in range(20):
    threads.append(queryData(i))
for thread in threads:
    thread.start()

while len(lst) > 0:
    for iter, thread in enumerate(threads):
        thread.join()
        lock.acquire()
        threads[iter] = queryData(i)
        threads[iter].start()
        lock.release()
Now it starts 20 threads in the beginning... and then keeps starting a new thread when one finishes.
However, it is not efficient, as it waits for the first one in the list to finish, then the second, and so on.
Is there a better way of doing this?
Basically I need:
-Start 20 threads:
-While list is not empty:
-wait for 1 of the 20 threads to finish
-reuse or start a new thread
As I suggested in a comment, I think using a multiprocessing.pool.ThreadPool would be appropriate — because it would handle much of the thread management you're manually doing in your code automatically. Once all the threads are queued-up for processing via ThreadPool's apply_async() method calls, the only thing that needs to be done is wait until they've all finished execution (unless there's something else your code could be doing, of course).
I've adapted the code from my linked answer to another related question so it's closer to what you appear to be doing, which should make it easier to understand in the current context.
from multiprocessing.pool import ThreadPool
from random import randint
import threading
import time

MAX_THREADS = 5
print_lock = threading.Lock()  # Prevent overlapped printing from threads.

def query_data(trade):
    trade_id = trade[0][1][:6]
    time.sleep(randint(1, 3))  # Simulate variable working time for testing.
    with print_lock:
        print(trade_id)

def process_trades(trade_list):
    pool = ThreadPool(processes=MAX_THREADS)
    results = []
    while trade_list:
        trade = trade_list.pop()
        results.append(pool.apply_async(query_data, (trade,)))
    pool.close()  # Done adding tasks.
    pool.join()   # Wait for all tasks to complete.

def test():
    trade_list = [[['abc', ('%06d' % id) + 'defghi']] for id in range(1, 101)]
    process_trades(trade_list)

if __name__ == "__main__":
    test()
You can wait for a thread to complete with thread.join(). This call blocks until that thread completes, at which point you can create a new one.
However, instead of respawning a thread each time, why not recycle your existing threads?
This can be done by using tasks, for example (see the sketch below). You keep the tasks in a shared collection, and when one of your threads finishes a task, it retrieves another one from that collection.
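A minimal sketch of that task-queue idea, assuming 20 worker threads and a placeholder process() function standing in for the real per-trade work:

import threading
import queue

NUM_WORKERS = 20

def process(trade):
    print(trade)  # placeholder for the real per-trade work

tasks = queue.Queue()
for trade in range(100):  # stand-in for your real list of trades
    tasks.put(trade)

def worker():
    while True:
        try:
            trade = tasks.get_nowait()  # grab the next task, if any
        except queue.Empty:
            return                      # nothing left, let this thread exit
        process(trade)
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

Each thread keeps pulling work until the queue is drained, so exactly NUM_WORKERS threads are busy at any time without ever being respawned.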
I have two different functions f, and g that compute the same result with different algorithms. Sometimes one or the other takes a long time while the other terminates quickly. I want to create a new function that runs each simultaneously and then returns the result from the first that finishes.
I want to create that function with a higher order function
h = firstresult(f, g)
What is the best way to accomplish this in Python?
I suspect that the solution involves threading. I'd like to avoid discussion of the GIL.
I would simply use a Queue for this. Start the threads and the first one which has a result ready writes to the queue.
Code
from threading import Thread
from time import sleep
from Queue import Queue

def firstresult(*functions):
    queue = Queue()
    threads = []
    for f in functions:
        def thread_main(f=f):  # bind f now so each thread calls its own function
            queue.put(f())
        thread = Thread(target=thread_main)
        threads.append(thread)
        thread.start()
    result = queue.get()
    return result

def slow():
    sleep(1)
    return 42

def fast():
    return 0

if __name__ == '__main__':
    print firstresult(slow, fast)
Live demo
http://ideone.com/jzzZX2
Notes
Stopping the threads is an entirely different topic. For this you need to add some state variable to the threads which is checked at regular intervals. To keep this example short, I simply skipped that part and assumed that all workers get time to finish their work, even though the result is never read.
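As a rough illustration of that state-variable idea (my own sketch, using a threading.Event as the flag; it is not part of the original answer):

from threading import Thread, Event
from time import sleep

stop_requested = Event()  # shared flag that the workers check at regular intervals

def worker(name):
    while not stop_requested.is_set():  # poll the flag instead of looping forever
        sleep(0.1)                      # stand-in for a slice of real work
    print('%s noticed the stop flag and exited' % name)

threads = [Thread(target=worker, args=('worker-%d' % i,)) for i in range(2)]
for t in threads:
    t.start()

sleep(0.5)             # let the workers run for a bit
stop_requested.set()   # ask all workers to stop
for t in threads:
    t.join()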
Skipping the discussion about the GIL, as requested by the questioner. ;-)
Now, unlike my suggestion on the other answer, this piece of code does exactly what you are requesting:
from multiprocessing import Process, Queue
import random
import time

def firstresult(func1, func2):
    queue = Queue()
    proc1 = Process(target=func1, args=(queue,))
    proc2 = Process(target=func2, args=(queue,))
    proc1.start(); proc2.start()
    result = queue.get()
    proc1.terminate(); proc2.terminate()
    return result

def algo1(queue):
    time.sleep(random.uniform(0, 1))
    queue.put("algo 1")

def algo2(queue):
    time.sleep(random.uniform(0, 1))
    queue.put("algo 2")

print firstresult(algo1, algo2)
Run each function in a new worker thread; the two worker threads send the result back to the main thread through a one-item queue or something similar. When the main thread receives the result from the winner, it kills both worker threads (do Python threads support kill yet? lol) to avoid wasting time (one function may take hours while the other only takes a second).
Replace the word thread with process if you want.
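Python threads still cannot be forcibly killed; one common workaround (my own sketch, not part of the original answer) is to make the workers daemon threads so the losers cannot keep the program alive once the winner has reported back:

import threading
import queue
import time

def firstresult(*functions):
    results = queue.Queue()
    for func in functions:
        t = threading.Thread(target=lambda f=func: results.put(f()))
        t.daemon = True          # the losing workers will not block program exit
        t.start()
    return results.get()         # blocks until the first finisher reports back

def slow():
    time.sleep(2)
    return 'slow won'

def fast():
    time.sleep(0.1)
    return 'fast won'

if __name__ == '__main__':
    print(firstresult(slow, fast))

The slower worker still runs to completion in the background; making it a daemon only means it is abandoned when the program exits, not actually interrupted.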
You will need to run each function in another process (with multiprocessing) or in a different thread.
If both are CPU bound, multithreading won't help much (precisely because of the GIL), so multiprocessing is the way to go.
If the return value is a pickleable (serializable) object, I have this decorator I created that simply runs the function in the background, in another process:
https://bitbucket.org/jsbueno/lelo/src
It is not exactly what you want, as both functions are non-blocking and start executing right away. The trick with this decorator is that it only blocks (and waits for the function to complete) when you try to use the return value.
But on the other hand, it is just a decorator that does all the work.