Multiprocessing: spawning processes without pooling - Python

I am trying to use the multiprocessing library to spawn new processes without using a Pool and without creating zombies.
On Unix when a process finishes but has not been joined it becomes a
zombie. There should never be very many because each time a new
process starts (or active_children() is called) all completed
processes which have not yet been joined will be joined. Also calling
a finished process’s Process.is_alive will join the process. Even so
it is probably good practice to explicitly join all the processes that
you start.
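As a quick illustration of that reaping behaviour (a minimal sketch of my own, not part of the original question):
from multiprocessing import Process, active_children
import time

def work():
    time.sleep(1)

if __name__ == '__main__':
    p = Process(target=work)
    p.start()
    time.sleep(2)          # p has exited but has not been joined (a zombie on Unix)
    active_children()      # side effect: joins all finished, not-yet-joined children
    p.join()               # explicitly joining is still good practice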
This implementation is a short version of a bigger script that creates zombies after a few hours:
from multiprocessing import Process
import time

def target(task):
    print(f"Working for {task*2} seconds ...")
    time.sleep(task*2)

if __name__ == '__main__':
    processes = 4
    list_process = [None] * processes
    targets = [[2] for i in range(10)]
    list_process = [None] * processes
    while targets:
        for i in range(processes):
            p = list_process[i]
            if not (p and p.is_alive()):
                list_process[i] = Process(target=target, args=(targets.pop(0)))
                list_process[i].start()
                if p:
                    p.join()
    for process in list_process:
        if process:
            process.join()
In the bigger version, list_process ends up containing only zombies and no more tasks can be processed.
Update 1
Thanks to Booboo, I was able to get a better view of what is happening:
from multiprocessing import Process
import time

def target(task):
    print(f"Working for {task*2} seconds ...")
    time.sleep(task*2)

if __name__ == '__main__':
    started_count = 0
    joined_count = 0
    joined_list = []
    processes = 4
    list_process = [None] * processes
    targets = [[2] for i in range(10)]
    list_process = [None] * processes
    while targets:
        for i in range(processes):
            p = list_process[i]
            if not (p and p.is_alive()):
                list_process[i] = Process(target=target, args=(targets.pop(0)))
                list_process[i].start()
                print(list_process[i].pid)
                started_count += 1
                if p:
                    assert(not p.is_alive())
                    p.join()
                    joined_list.append(list_process[i].pid)
                    joined_count += 1
    for process in list_process:
        if process:
            process.join()
            joined_list.append(list_process[i].pid)
            joined_count += 1
    print(f'Final started count: {started_count}, final joined count: {joined_count}')
    print(joined_list)
Output:
20604
24108
1272
23616
Working for 4 seconds ...
Working for 4 seconds ...
Working for 4 seconds ...
Working for 4 seconds ...
18492
17348
19992
6216
Working for 4 seconds ...
Working for 4 seconds ...
Working for 4 seconds ...
Working for 4 seconds ...
18744
26240
Working for 4 seconds ...
Working for 4 seconds ...
Final started count: 10, final joined count: 10
[18492, 17348, 19992, 6216, 18744, 26240, 6216, 6216, 6216, 6216]
I have 10 processes that are joined, but some of them are not the right ones (pid 6216 was not invoked for a task, and the first ones are not joined), leading to unjoined processes. Why?

I've seen this code before and as far as it goes, it seems correct. I have modified it to keep track of the number of times processes are started and joined and added an assertion just as a "sanity check":
from multiprocessing import Process
import time

def target(task):
    print(f"Working for {task*2} seconds ...")
    time.sleep(task*2)

if __name__ == '__main__':
    started_count = 0
    joined_count = 0
    processes = 4
    list_process = [None] * processes
    targets = [[2] for i in range(10)]
    list_process = [None] * processes
    while targets:
        for i in range(processes):
            p = list_process[i]
            if not (p and p.is_alive()):
                list_process[i] = Process(target=target, args=(targets.pop(0)))
                list_process[i].start()
                started_count += 1
                print('started count:', started_count)
                if p:
                    assert(not p.is_alive())
                    p.join()
                    joined_count += 1
                    print('joined count:', joined_count)
    for process in list_process:
        if process:
            process.join()
            joined_count += 1
            print('joined count:', joined_count)
    print(f'Final started count: {started_count}, final joined count: {joined_count}')
Prints:
started count: 1
started count: 2
started count: 3
started count: 4
Working for 4 seconds ...
Working for 4 seconds ...
Working for 4 seconds ...
Working for 4 seconds ...
started count: 5
joined count: 1
started count: 6
joined count: 2
started count: 7
joined count: 3
started count: 8
joined count: 4
Working for 4 seconds ...
Working for 4 seconds ...
Working for 4 seconds ...
Working for 4 seconds ...
started count: 9
joined count: 5
started count: 10
joined count: 6
joined count: 7
Working for 4 seconds ...
Working for 4 seconds ...
joined count: 8
joined count: 9
joined count: 10
Final started count: 10, final joined count: 10
Could there be something else in your program you haven't posted that is causing the problem?
Implementing a Process Pool
If I might make a suggestion: your method of implementing a process pool is rather inefficient. If you had 100 tasks to submit, you would be creating 100 processes. That is not what a processing pool does. True, you are controlling the degree of parallelism, but you are failing to reuse processes, which is the central idea of a pool. The following demonstrates how to create a pool of 4 processes that can execute as many tasks as required. When all the tasks are completed, you only have to join the 4 processes. This could go a long way toward solving your zombie issue:
from multiprocessing import Process, Queue
import time

def target(queue):
    while True:
        task = queue.get()
        if task is None: # "end of file" indicator
            break
        print(f"Working for {task*2} seconds ...")
        time.sleep(task*2)

if __name__ == '__main__':
    N_PROCESSES = 4
    processes = []
    queue = Queue()
    for _ in range(N_PROCESSES):
        processes.append(Process(target=target, args=(queue,)))
    for process in processes:
        process.start()
    # Write tasks to the job queue:
    for _ in range(10):
        queue.put(2)
    # And write an "end of file" indicator for each process in the pool:
    for _ in range(N_PROCESSES):
        queue.put(None)
    # Wait for processes to complete:
    for process in processes:
        process.join()
Prints:
Working for 4 seconds ...
Working for 4 seconds ...
Working for 4 seconds ...
Working for 4 seconds ...
Working for 4 seconds ...
Working for 4 seconds ...
Working for 4 seconds ...
Working for 4 seconds ...
Working for 4 seconds ...
Working for 4 seconds ...
Note that you can additionally pass to each process a second queue for outputting results. Just be sure to get the results from this queue before joining the processes.
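For example, a rough sketch of that two-queue pattern (the names task_queue and result_queue and the doubled value handed back are illustrative assumptions, not code from the answer above):
from multiprocessing import Process, Queue
import time

def target(task_queue, result_queue):
    while True:
        task = task_queue.get()
        if task is None:               # "end of file" indicator
            break
        time.sleep(task * 2)           # simulate the work
        result_queue.put(task * 2)     # hand the result back to the parent

if __name__ == '__main__':
    N_PROCESSES = 4
    task_queue, result_queue = Queue(), Queue()
    processes = [Process(target=target, args=(task_queue, result_queue))
                 for _ in range(N_PROCESSES)]
    for p in processes:
        p.start()
    for _ in range(10):
        task_queue.put(2)
    for _ in range(N_PROCESSES):
        task_queue.put(None)
    # Drain the result queue *before* joining, as noted above:
    results = [result_queue.get() for _ in range(10)]
    for p in processes:
        p.join()
    print(results)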

Related

Why is my parallel code slower than my serial code?

I recently started learning parallel programming on my own and have next to no idea what I'm doing. I tried applying what I've learnt, but I think I'm doing something wrong because my parallel code takes longer to execute than my serial code. My PC runs an i7-9700. This is the original serial code in question:
def getMatrix(name):
    matrixCreated = []
    i = 0
    while True:
        i += 1
        row = input('\nEnter elements in row %s of Matrix %s (separated by commas)\nOr -1 to exit: ' %(i, name))
        if row == '-1':
            break
        else:
            strList = row.split(',')
            matrixCreated.append(list(map(int, strList)))
    return matrixCreated

def getColAsList(matrixToManipulate, col):
    myList = []
    numOfRows = len(matrixToManipulate)
    for i in range(numOfRows):
        myList.append(matrixToManipulate[i][col])
    return myList

def getCell(matrixA, matrixB, r, c):
    matrixBCol = getColAsList(matrixB, c)
    lenOfList = len(matrixBCol)
    productList = [matrixA[r][i]*matrixBCol[i] for i in range(lenOfList)]
    return sum(productList)

matrixA = getMatrix('A')
matrixB = getMatrix('B')

rowA = len(matrixA)
colA = len(matrixA[0])
rowB = len(matrixB)
colB = len(matrixB[0])

result = [[0 for p in range(colB)] for q in range(rowA)]

if (colA != rowB):
    print('The two matrices cannot be multiplied')
else:
    print('\nThe result is')
    for i in range(rowA):
        for j in range(colB):
            result[i][j] = getCell(matrixA, matrixB, i, j)
        print(result[i])
EDIT: This is the parallel code with the time library. I initially didn't include it because I thought it was wrong; I just wanted to see if anyone had ideas on how to parallelize it instead.
import multiprocessing as mp

pool = mp.Pool(mp.cpu_count())

def getMatrix(name):
    matrixCreated = []
    i = 0
    while True:
        i += 1
        row = input('\nEnter elements in row %s of Matrix %s (separated by commas)\nOr -1 to exit: ' %(i, name))
        if row == '-1':
            break
        else:
            strList = row.split(',')
            matrixCreated.append(list(map(int, strList)))
    return matrixCreated

def getColAsList(matrixToManipulate, col):
    myList = []
    numOfRows = len(matrixToManipulate)
    for i in range(numOfRows):
        myList.append(matrixToManipulate[i][col])
    return myList

def getCell(matrixA, matrixB, r, c):
    matrixBCol = getColAsList(matrixB, c)
    lenOfList = len(matrixBCol)
    productList = [matrixA[r][i]*matrixBCol[i] for i in range(lenOfList)]
    return sum(productList)

matrixA = getMatrix('A')
matrixB = getMatrix('B')

rowA = len(matrixA)
colA = len(matrixA[0])
rowB = len(matrixB)
colB = len(matrixB[0])

import time
start_time = time.time()

result = [[0 for p in range(colB)] for q in range(rowA)]

if (colA != rowB):
    print('The two matrices cannot be multiplied')
else:
    print('\nThe result is')
    for i in range(rowA):
        for j in range(colB):
            result[i][j] = getCell(matrixA, matrixB, i, j)
        print(result[i])

print(" %s seconds " % (time.time() - start_time))

results = [pool.apply(getMatrix, getColAsList, getCell)]
pool.close()
So I would agree that you are doing something wrong. I would say that your code is not parallelizable.
For code to be parallelizable it has to be divisible into smaller pieces, and it has to be either:
1. Independent, meaning that when it runs it doesn't rely on other processes to do its job.
For example, if I have a list with 1,000,000 objects that need to be processed and 4 workers to process them with, then I give each worker 1/4 of the objects to process, and when they finish, all objects have been processed. But worker 3 doesn't care whether worker 1, 2 or 4 completed before or after it did. Nor does worker 3 care about what worker 1, 2 or 4 returned or did. It actually shouldn't even know that there are any other workers out there.
2. Managed, meaning there are dependencies between workers, but that's OK because you have a main thread that coordinates them. Still, workers shouldn't know or care about each other. Think of them as mindless muscle; they only do what you tell them to do, not think for themselves.
For example, I have a list with 1,000,000 objects that need to be processed. First all objects need to go through func1, which returns something. Once ALL objects are done with func1, those results should then go into func2. So I create 4 workers, give each worker 1/4 of the objects, and have them process them with func1 and return the results. I wait for all workers to finish processing the objects. Then I give each worker 1/4 of the results returned by func1 and have them process those with func2. I can keep doing this as many times as I want. All I have to do is have the main thread coordinate the workers so they don't start when they aren't supposed to, and tell them what and when to process (see the sketch after this list).
Take this with a grain of salt as this is a simplified version of parallel processing.
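As a rough sketch of the managed, two-phase pattern described above (func1 and func2 are hypothetical stand-ins, and multiprocessing.Pool is used here instead of hand-rolled workers):
from multiprocessing import Pool

def func1(x):
    return x * 2        # hypothetical first processing step

def func2(x):
    return x + 1        # hypothetical second step that needs func1's output

if __name__ == '__main__':
    objects = range(1_000_000)
    with Pool(4) as pool:                    # 4 workers
        stage1 = pool.map(func1, objects)    # main thread waits until ALL of func1 is done
        stage2 = pool.map(func2, stage1)     # only then is func2 started
    print(stage2[:5])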
Tips for parallelism and concurrency
You shouldn't get user input in parallel. Only the main thread should handle that.
If your work load is light then you shouldn't use parallel processing.
If your task can't be divided up into smaller pieces then it's not parallelizable. But it can still be run on a background thread as a way of running something concurrently.
Concurrency Example:
If your task is long-running and not parallelizable, say it takes 10 minutes to complete, and it requires a user to give input, then when the user gives input, start the task on a worker. If the user gives input again 1 minute later, take that input and start the second task on worker 2. Input at 5 minutes starts task 3 on worker 3. At the 10-minute mark task 1 is complete. Because everything is running concurrently, by the 15-minute mark all tasks are complete. That's 2x faster than running the tasks in serial, which would take 30 minutes. However, this is concurrency, not parallelism (a sketch follows below).
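A minimal sketch of that concurrency idea, using concurrent.futures and a hypothetical long_task function (neither appears in the original posts):
from concurrent.futures import ThreadPoolExecutor
import time

def long_task(job):
    time.sleep(600)                 # stands in for a 10-minute, non-parallelizable task
    return 'done: {}'.format(job)

if __name__ == '__main__':
    futures = []
    with ThreadPoolExecutor(max_workers=3) as executor:
        while True:
            job = input('Enter a job (or "quit"): ')    # only the main thread reads input
            if job == 'quit':
                break
            futures.append(executor.submit(long_task, job))  # runs in the background
    for f in futures:
        print(f.result())           # collect results once all tasks have finished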

Why does multiprocessing yield different timings depending on order of execution?

I am running the following benchmark script on a Windows machine. I noticed that the order in which multiprocessed() gets executed affects its performance. If I execute multiprocessed() first, it runs faster than the simple() and multithreaded() methods; if I execute it last, its processing time is almost double that of multithreaded() and simple().
import random
from threading import Thread
from multiprocessing import Process
import time

size = 10000000 # Number of random numbers to add to list
threads = 8 # Number of threads to create

my_list = []
for i in range(0,threads):
    my_list.append([])

def func(count, mylist):
    for i in range(count):
        mylist.append(random.random())

processes = []
for i in range(0, threads):
    p = Process(target=func,args=(size,my_list[i]))
    processes.append(p)

def multithreaded():
    jobs = []
    for i in range(0, threads):
        thread = Thread(target=func,args=(size,my_list[i]))
        jobs.append(thread)
    # Start the threads
    for j in jobs:
        j.start()
    # Ensure all of the threads have finished
    for j in jobs:
        j.join()

def simple():
    for i in range(0, threads):
        func(size,my_list[i])

def multiprocessed():
    global processes
    # Start the processes
    for p in processes:
        p.start()
    # Ensure all processes have finished execution
    for p in processes:
        p.join()

if __name__ == "__main__":
    start = time.time()
    multiprocessed()
    print("elasped time:{}".format(time.time()-start))
    start = time.time()
    simple()
    print("elasped time:{}".format(time.time()-start))
    start = time.time()
    multithreaded()
    print("elasped time:{}".format(time.time()-start))
Results #1: multiprocessed (2.85s) -> simple (7.39s) -> multithread (7.84s)
Results #2: multithread (7.84s) -> simple (7.53s) -> multiprocessed (13.96s)
Why is that? How do I properly use multiprocessing on Windows in order to improve speed by utilizing the CPU cores?
Your timing code doesn't isolate each test from the effects of the others. If you execute multiprocessed first, the sublists of my_list are empty. If you execute it last, the sublists are full of elements added by the other runs, dramatically increasing the communication overhead involved in sending the data to the worker processes.
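One way to isolate the runs, sketched under the assumption that it is acceptable to rebuild the sublists and create the Process objects per test (this restructures the question's script rather than quoting it):
import random
import time
from threading import Thread
from multiprocessing import Process

size = 10_000_000
threads = 8

def func(count, mylist):
    for _ in range(count):
        mylist.append(random.random())

def simple(data):
    for sub in data:
        func(size, sub)

def multithreaded(data):
    jobs = [Thread(target=func, args=(size, sub)) for sub in data]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()

def multiprocessed(data):
    procs = [Process(target=func, args=(size, sub)) for sub in data]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

if __name__ == "__main__":
    benchmarks = [("multiprocessed", multiprocessed),
                  ("simple", simple),
                  ("multithreaded", multithreaded)]
    for name, bench in benchmarks:
        data = [[] for _ in range(threads)]     # fresh, empty sublists for every test
        start = time.time()
        bench(data)
        print("{}: {:.2f}s".format(name, time.time() - start))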

Multiprocessing and Queues

This code is an attempt to use a queue to feed tasks to a number of worker processes.
I wanted to time the difference in speed between different numbers of processes and different methods for handling data.
But the output is not doing what I thought it would.
from multiprocessing import Process, Queue
import time

result = []
base = 2
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 23, 45, 76, 4567, 65423, 45, 4, 3, 21]

# create queue for new tasks
new_tasks = Queue(maxsize=0)

# put tasks in queue
print('Putting tasks in Queue')
for i in data:
    new_tasks.put(i)

# worker function definition
def f(q, p_num):
    print('Starting process: {}'.format(p_num))
    while not q.empty():
        # mimic some process being done
        time.sleep(0.05)
        print(q.get(), p_num)
    print('Finished', p_num)

print('initiating processes')
processes = []
for i in range(0, 2):
    if __name__ == '__main__':
        print('Creating process {}'.format(i))
        p = Process(target=f, args=(new_tasks, i))
        processes.append(p)

# record start time
start = time.time()

# start process
for p in processes:
    p.start()

# wait for processes to finish processes
for p in processes:
    p.join()

# record end time
end = time.time()

# print time result
print('Time taken: {}'.format(end-start))
I expect this:
Putting tasks in Queue
initiating processes
Creating process 0
Creating process 1
Starting process: 1
Starting process: 0
1 1
2 0
3 1
4 0
5 1
6 0
7 1
8 0
9 1
10 0
11 1
23 0
45 1
76 0
4567 1
65423 0
45 1
4 0
3 1
21 0
Finished 1
Finished 0
Time taken: <some-time>
But instead I actually get this:
Putting tasks in Queue
initiating processes
Creating process 0
Creating process 1
Time taken: 0.01000523567199707
Putting tasks in Queue
Putting tasks in Queue
initiating processes
Time taken: 0.0
Starting process: 1
initiating processes
Time taken: 0.0
Starting process: 0
1 1
2 0
3 1
4 0
5 1
6 0
7 1
8 0
9 1
10 0
11 1
23 0
45 1
76 0
4567 1
65423 0
45 1
4 0
3 1
21 0
Finished 0
There seem to be two major problems; I am not sure how related they are:
The print statements such as:
Putting tasks in Queue
initiating processes
Time taken: 0.0
are repeated systematically throughout the run. I say systematically because they repeat exactly the same way every time.
The second process never finishes; it never recognizes that the queue is empty and therefore fails to exit.
1) I cannot reproduce this.
2) Look at the following code:
while not q.empty():
    time.sleep(0.05)
    print(q.get(), p_num)
Each line can be run in any order by any process. Now consider q having a single item and two processes A and B. Now consider the following order of execution:
# A runs
while not q.empty():
    time.sleep(0.05)
# B runs
while not q.empty():
    time.sleep(0.05)
# A runs
print(q.get(), p_num)  # Removes and prints the last element of q
# B runs
print(q.get(), p_num)  # q is now empty so q.get() blocks forever
Swapping the order of time.sleep and q.get removes the blocking in all of my runs, but it's still possible for more than one process to enter the loop with a single item left.
The way to fix this is to use a non-blocking get call and catch the queue.Empty exception:
import queue

while True:
    time.sleep(0.05)
    try:
        print(q.get(False), p_num)
    except queue.Empty:
        break
Your worker processes should look like this:
def f(q, p_num):
    print('Starting process: {}'.format(p_num))
    while True:
        value = q.get()
        if value is None:
            break
        # mimic some process being done
        time.sleep(0.05)
        print(value, p_num)
    print('Finished', p_num)
And the queue should be filled with markers after the real data:
for i in data:
    new_tasks.put(i)
for _ in range(num_of_threads):
    new_tasks.put(None)
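Putting those pieces together, a sketch of the complete flow might look like this (the two-worker count matches the question; names such as num_workers are mine):
from multiprocessing import Process, Queue
import time

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 23, 45, 76, 4567, 65423, 45, 4, 3, 21]

def f(q, p_num):
    print('Starting process: {}'.format(p_num))
    while True:
        value = q.get()
        if value is None:        # sentinel: no more work
            break
        time.sleep(0.05)         # mimic some processing being done
        print(value, p_num)
    print('Finished', p_num)

if __name__ == '__main__':
    num_workers = 2
    new_tasks = Queue()
    for item in data:
        new_tasks.put(item)
    for _ in range(num_workers):
        new_tasks.put(None)      # one sentinel per worker

    processes = [Process(target=f, args=(new_tasks, i)) for i in range(num_workers)]
    start = time.time()
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    print('Time taken: {}'.format(time.time() - start))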

Complete a multithreading parallelize process with k threads

The 3SUM problem is defined as follows:
Given: A positive integer k≤20, a positive integer n≤10^4, and k arrays of size n containing integers from −10^5 to 10^5.
Return: For each array A[1..n], output three different indices 1≤p<q<r≤n such that A[p]+A[q]+A[r]=0 if they exist, and "-1" otherwise.
Sample Dataset
4 5
2 -3 4 10 5
8 -6 4 -2 -8
-5 2 3 2 -4
2 4 -5 6 8
Sample Output
-1
1 2 4
1 2 3
-1
However, I want to speed up the code using threads. To do so I am using this Python code:
def TS(arr):
    original = arr[:]
    arr.sort()
    n = len(arr)
    for i in xrange(n-2):
        a = arr[i]
        j = i+1
        k = n-1
        while j < k:
            b = arr[j]
            c = arr[k]
            if a + b + c == 0:
                return sorted([original.index(a)+1, original.index(b)+1, original.index(c)+1])
            elif a + b + c > 0:
                k = k - 1
            else:
                j = j + 1
    return [-1]

with open("dataset.txt") as dataset:
    k = int(dataset.readline().split()[0])
    for i in xrange(k):
        aux = map(int, dataset.readline().split())
        results = TS(aux)
        print ' '.join(map(str, results))
I was thinking of creating k threads and a global output array, but I don't know how to continue developing the idea:
from threading import Thread

class thread_it(Thread):
    def __init__ (self, param):
        Thread.__init__(self)
        self.param = param
    def run(self):
        mutex.acquire()
        output.append(TS(aux))
        mutex.release()

threads = []  # k threads
output = []   # global answer
mutex = thread.allocate_lock()

with open("dataset.txt") as dataset:
    k = int(dataset.readline().split()[0])
    for i in xrange(k):
        aux = map(int, dataset.readline().split())
        current = thread_it(aux)
        threads.append(current)
        current.start()

for t in threads:
    t.join()
What would be the correct way to compute results = TS(aux) inside a thread, wait until all threads have finished, and then print ' '.join(map(str, results)) for all of them?
Update
Got this issue when running script from console
First, as @Cyphase said, because of the GIL you cannot speed things up with threading; only one thread executes Python bytecode at a time. Consider using multiprocessing to utilize multiple cores; multiprocessing has a very similar API to threading.
Second, even if we pretend the GIL doesn't exist: by putting everything in a critical section protected by the mutex, you are actually serializing all the threads. What you need to protect is only the access to output, so move the processing code out of the critical section to let the threads run concurrently:
def run(self):
    result = TS(self.param)
    mutex.acquire()
    output.append(result)
    mutex.release()
But don't reinvent the wheel: the Python standard library provides a thread-safe Queue, so use that:
try:
    import Queue as queue  # python2
except ImportError:
    import queue

output = queue.Queue()

def run(self):
    result = TS(self.param)
    output.put(result)
With multiprocessing, the final code looks something like this:
from multiprocessing import Process, Queue

output = Queue()

class TSProcess(Process):
    def __init__ (self, param):
        Process.__init__(self)
        self.param = param
    def run(self):
        result = TS(self.param)
        output.put(result)

processes = []
with open("dataset.txt") as dataset:
    k = int(dataset.readline().split()[0])
    for i in xrange(k):
        aux = map(int, dataset.readline().split())
        current = TSProcess(aux)
        processes.append(current)
        current.start()

for p in processes:
    p.join()

# process result with output.get()
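For completeness, a small sketch of that last step, continuing the code above (results arrive in completion order, which may differ from the input order):
results_out = [output.get() for _ in range(len(processes))]  # one result per process
for res in results_out:
    print ' '.join(map(str, res))  # same output format as the serial version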

Multiprocessing and Queue with Dataframe

I'm having some trouble exchanging an object (a DataFrame) between two processes through a Queue.
The first process gets data from the queue; the second puts data into it.
The put-process is faster, so the get-process should clear the queue by reading all objects.
I get strange behaviour: my code works perfectly and as expected, but only for 100 rows in the DataFrame; with 1000 rows the get-process always takes only one object per cycle.
import multiprocessing, time, sys
import pandas as pd

NR_ROWS = 1000
i = 0

def getDf():
    global i, NR_ROWS
    myheader = ["name", "test2", "test3"]
    myrow1 = [i, i+400, i+250]
    df = pd.DataFrame([myrow1]*NR_ROWS, columns=myheader)
    i = i+1
    return df

def f_put(q):
    print "f_put start"
    while(1):
        data = getDf()
        q.put(data)
        print "P:", data["name"].iloc[0]
        sys.stdout.flush()
        time.sleep(1.55)

def f_get(q):
    print "f_get start"
    while(1):
        data = pd.DataFrame()
        while not q.empty():
            data = q.get()
            print "get"
        if not data.empty:
            print "G:", data["name"].iloc[0]
        else:
            print "nothing new"
        time.sleep(5.9)

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=f_put, args=(q,))
    p.start()
    while(1):
        f_get(q)
    p.join()
Output for a 100-row DataFrame; the get-process takes all objects:
f_get start
nothing new
f_put start
P: 0 # put 1.object into the queue
P: 1 # put 2.object into the queue
P: 2 # put 3.object into the queue
P: 3 # put 4.object into the queue
get # get-process takes all 4 objects from the queue
get
get
get
G: 3
P: 4
P: 5
P: 6
get
get
get
G: 6
P: 7
P: 8
Output for a 1000-row DataFrame; the get-process takes only one object:
f_get start
nothing new
f_put start
P: 0 # put 1.object into the queue
P: 1 # put 2.object into the queue
P: 2 # put 3.object into the queue
P: 3 # put 4.object into the queue
get <-- #!!! get-process takes ONLY 1 object from the queue!!!
G: 1
P: 4
P: 5
P: 6
get
G: 2
P: 7
P: 8
P: 9
P: 10
get
G: 3
P: 11
Any idea what I am doing wrong, and how to pass the bigger DataFrame through as well?
At the risk of not being completely able to provide a fully functional example, here is what goes wrong.
First of all, it's a timing issue.
I tried your code again with larger DataFrames (10000 or even 100000 rows) and I start to see the same things as you do. This means you see this behaviour as soon as the size of the arrays crosses a certain threshold, which will be system (CPU?) dependent.
I modified your code a bit to make it easier to see what happens. First, 5 DataFrames are put into the queue without any custom time.sleep. In the f_get function I added a counter (and a time.sleep(0), see below) to the loop (while not q.empty()).
The new code:
import multiprocessing, time, sys
import pandas as pd

NR_ROWS = 10000
i = 0

def getDf():
    global i, NR_ROWS
    myheader = ["name", "test2", "test3"]
    myrow1 = [i, i+400, i+250]
    df = pd.DataFrame([myrow1]*NR_ROWS, columns=myheader)
    i = i+1
    return df

def f_put(q):
    print "f_put start"
    j = 0
    while(j < 5):
        data = getDf()
        q.put(data)
        print "P:", data["name"].iloc[0]
        sys.stdout.flush()
        j += 1

def f_get(q):
    print "f_get start"
    while(1):
        data = pd.DataFrame()
        loop = 0
        while not q.empty():
            data = q.get()
            print "get (loop: %s)" % loop
            time.sleep(0)
            loop += 1
        time.sleep(1.)

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=f_put, args=(q,))
    p.start()
    while(1):
        f_get(q)
    p.join()
Now, if you run this for different numbers of rows, you will see something like this:
N=100:
f_get start
f_put start
P: 0
P: 1
P: 2
P: 3
P: 4
get (loop: 0)
get (loop: 1)
get (loop: 2)
get (loop: 3)
get (loop: 4)
N=10000:
f_get start
f_put start
P: 0
P: 1
P: 2
P: 3
P: 4
get (loop: 0)
get (loop: 1)
get (loop: 0)
get (loop: 0)
get (loop: 0)
What does this tell us?
As long as the DataFrame is small, your assumption that the put-process is faster than the get-process seems to hold true: we can fetch all 5 items within one pass of the while not q.empty() loop.
But as the number of rows increases, something changes: the q.empty() check reports the queue as empty, the inner loop exits, and the outer while(1) cycles.
This could mean that put is now slower than get and we have to wait. But if we set the sleep time for the whole f_get to something like 15, we still get the same behaviour.
On the other hand, if we change the time.sleep(0) in the inner q.get() loop to 1,
while not q.empty():
    data = q.get()
    time.sleep(1)
    print "get (loop: %s)" % loop
    loop += 1
we get this:
f_get start
f_put start
P: 0
P: 1
P: 2
P: 3
P: 4
get (loop: 0)
get (loop: 1)
get (loop: 2)
get (loop: 3)
get (loop: 4)
This looks right! And it means that get actually does something strange: while it is still processing a get, the queue reports itself as empty, and only after the get is done is the next item available.
I'm sure there is a reason for that, but I'm not familiar enough with multiprocessing to see it.
Depending on your application, you could just add an appropriate time.sleep to your inner loop and see if that's enough.
Or, if you want to solve it properly (instead of using a workaround like the time.sleep method), you could look further into multiprocessing and read up on blocking, non-blocking and asynchronous communication; I think the solution will be found there.
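One such option, sketched as an untested suggestion rather than a verified fix: stop polling q.empty() and instead use a blocking get with a timeout, catching the Empty exception. This f_get would replace the one in the script above (and reuses its imports):
import Queue as queue  # Python 2; on Python 3 this is simply "import queue"

def f_get(q):
    print "f_get start"
    while True:
        try:
            data = q.get(block=True, timeout=1)  # wait up to 1s for the next DataFrame
            print "G:", data["name"].iloc[0]
        except queue.Empty:
            print "nothing new"
            time.sleep(5.9)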
