How to measure the time taken by multiple threads created in a loop? - python

I want to measure how much time it takes for code running on multiple threads to finish in Python.
If I put join() inside the loop, it stops the loop (the main thread) from creating new threads, so the sleep() calls run one by one.
If I put join() on the thread that I use to run thread_testing, the join doesn't work somehow; it prints the time immediately.
import threading
import time

def sleep(name):
    print("{} going to sleep".format(name))
    time.sleep(5)
    print("{} wakes up after 5 seconds".format(name))

def thread_testing():
    for i in range(3):
        t = threading.Thread(target=sleep, name='thread' + str(i), args=(i,))
        t.start()
        # t.join() #1

if __name__ == '__main__':
    start = time.time()
    t = threading.Thread(target=thread_testing, name='threadx')
    t.start()
    t.join() #2
    print(time.time() - start)
Desired output:
1 sleep
2 sleep
3 sleep
1 wake up after 5
2 wake up after 5
3 wake up after 5
5.xxx secs

join() waits for your thread to finish. That is why your threads were executed one by one.
What you have to do is:
Start all threads
Store them somewhere
Once everything is started, wait for every thread to finish.
Assuming you don't need the first thread started in main:
import time
import threading

def sleep(name):
    print("{} going to sleep".format(name))
    time.sleep(5)
    print("{} wakes up after 5 seconds".format(name))

def thread_testing():
    threads = []
    for i in range(3):
        t = threading.Thread(target=sleep, name='thread' + str(i), args=(i,))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

if __name__ == '__main__':
    start = time.time()
    thread_testing()
    print(time.time() - start)
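
A more compact alternative (not from the original answer) is concurrent.futures.ThreadPoolExecutor, whose context manager waits for every submitted task before the with-block exits:

import time
from concurrent.futures import ThreadPoolExecutor

def sleep(name):
    print("{} going to sleep".format(name))
    time.sleep(5)
    print("{} wakes up after 5 seconds".format(name))

if __name__ == '__main__':
    start = time.time()
    # Exiting the with-block implicitly joins all worker threads
    with ThreadPoolExecutor(max_workers=3) as pool:
        for i in range(3):
            pool.submit(sleep, i)
    print(time.time() - start)  # roughly 5.xxx seconds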


How to add a pause in python without disrupting other code

I'm attempting to make a script that keeps timers for each boss in a game, driven by text input.
An example would be:
if line == 'boss1 down':
    print('boss1 timer set for 10 seconds')
    time.sleep(10)
    print("boss1 due")
if line == 'boss2 down':
    print('boss2 timer set for 15 seconds')
    time.sleep(15)
    print("boss2 due")
However, the clear issue is that only one boss can be timed at a time. Is there a function I can use that won't block the rest of the code and will allow me to time multiple bosses at once?
You can use the Thread class from the built-in threading module:
from threading import Thread
import time

def timer(num, secs, line):
    if line == f'boss{num} down':
        print(f'boss{num} timer set for {secs} seconds')
        time.sleep(secs)
        print(f"boss{num} due")

boss1 = Thread(target=timer, args=(1, 10, "boss1 down"))
boss2 = Thread(target=timer, args=(2, 15, "boss2 down"))
boss1.start()
boss2.start()
Output:
boss1 timer set for 10 seconds
boss2 timer set for 15 seconds
boss1 due
boss2 due
You can use an asynchronous function:

import asyncio

async def boss1_down():
    print('boss1 timer set for 10 seconds')
    await asyncio.sleep(10)
    print("boss1 due")

asyncio.run(boss1_down())

Add arguments to the function for the timer duration and the boss number.
As @Mayank mentioned, you can also use threads, which are a bit more complex to set up but give you more control (you cannot easily stop or wait on an async function).
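
A minimal sketch of that parametrized version (the function name and signature are assumptions, not from the original answer), using asyncio.gather to run both timers concurrently:

import asyncio

# Hypothetical parametrized version of boss1_down
async def boss_down(num, secs):
    print(f'boss{num} timer set for {secs} seconds')
    await asyncio.sleep(secs)
    print(f'boss{num} due')

async def main():
    # Both timers run concurrently; total runtime is about 15 seconds
    await asyncio.gather(boss_down(1, 10), boss_down(2, 15))

asyncio.run(main())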
From this answer
You can use threading to do asynchronous tasks.
from threading import Thread
import time

def threaded_function(line):
    if line == 'boss1 down':
        print('boss1 timer set for 10 seconds')
        time.sleep(10)
        print("boss1 due")
    if line == 'boss2 down':
        print('boss2 timer set for 15 seconds')
        time.sleep(15)
        print("boss2 due")

if __name__ == "__main__":
    # args must be a tuple; a bare string would be unpacked character by character
    thread1 = Thread(target=threaded_function, args=('boss1 down',))
    thread2 = Thread(target=threaded_function, args=('boss2 down',))
    # Start both threads before joining so the two timers run concurrently
    thread1.start()
    thread2.start()
    thread1.join()
    thread2.join()
    print("thread finished...exiting")

Why are my threads not working simultaneously?

For starters, I'm new to Python.
I will be brief. I'm trying to fetch all the links from a website using threads.
The problem is that the threads wait for their turn, but I want them to work simultaneously with the other threads.
For example, I set the number of threads to 2 and then split the links into 2 chunks.
I want the first thread to iterate over the links in the first chunk, and the second thread to iterate over the links in the second chunk SIMULTANEOUSLY. But my program works in such a way that the threads wait for their turn. What am I doing wrong, guys? Much obliged for your help.
My code:
# target()
def url_target(text, e):
    global links
    global chunks
    number = int(sys.argv[1])
    for m in text:
        time.sleep(0.2)
        print(m, e)
    print('\n')
# main()
def main():
    global links
    global chunks
    url = sys.argv[2]
    links = fetch_links(url)
    number = int(sys.argv[1])
    url_chunk = len(links) // number
    start, stop = 0, url_chunk + len(links) % number
    chunks = []
    time.sleep(1)
    while start < len(links):
        for i in range(number):
            part_links = links[start:stop]
            p = Thread(name='myThread', target=url_target, args=(part_links, i+1))
            p.start()
            chunks.append(p)
            start, stop = stop, stop + url_chunk
            p.join()
    time.sleep(1)
    while chunks:
        d = chunks.pop()
        print(f'{d.ident} done')
Thanks! I'd appreciate any help you can give!
p.join() blocks until p completes. You want to start all the threads first, then wait on each in turn.
while start < len(links):
    for i in range(number):
        part_links = links[start:stop]
        p = Thread(name='myThread', target=url_target, args=(part_links, i+1))
        p.start()
        chunks.append(p)
        start, stop = stop, stop + url_chunk

time.sleep(1)
for p in chunks:
    p.join()
If you aren't planning on doing anything while waiting for all the threads to complete, this is fine. However, you might want to block until any thread completes, rather than an arbitrarily chosen one. A thread pool can help; a simple way to approximate one is to wait a short period of time for a given thread to complete, and if it doesn't, move on to another one and come back to the first later. For example:
from collections import deque

chunks = deque()
for i, start in enumerate(range(0, len(links), url_chunk), 1):
    part_links = links[start:start + url_chunk]
    p = Thread(name='myThread', target=url_target, args=(part_links, i))
    p.start()
    chunks.append(p)

while chunks:
    p = chunks.popleft()
    p.join(5)  # Wait 5 seconds, or some other small period of time
    if p.is_alive():
        chunks.append(p)  # not finished yet; put it back and check the others
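
For completeness, a hedged sketch (not part of the original answer) using concurrent.futures, whose as_completed() wakes up as soon as any worker finishes rather than an arbitrarily chosen one:

from concurrent.futures import ThreadPoolExecutor, as_completed

# Assumes links, number, url_chunk and url_target are defined as above
with ThreadPoolExecutor(max_workers=number) as pool:
    futures = [
        pool.submit(url_target, links[start:start + url_chunk], i)
        for i, start in enumerate(range(0, len(links), url_chunk), 1)
    ]
    for future in as_completed(futures):
        future.result()  # re-raises any exception from the worker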

Python: Sharing a timed Lock between spawned Processes so that there is a delay between them

I am trying to print the ids in this list with a delay between the start and end of each process, and with a delay between queue.get calls (which I implement using a threading.Timer holding a shared lock). My current setup with the Timer lets me lock the processes so that for 2 seconds after one process takes a record from the queue, no other process can start. The problem is that only 2 of the 4 processes close at the end of the run. How can I fix this so that all the processes close and the program can exit?
My output below shows this; I want there to be 2 more "worker closed" notifications:
Process started
Process started
Process started
Process started
begin 1 : 1560891818.0307562
begin 2 : 1560891820.0343137
begin 3 : 1560891822.0381632
end 2 : 3.0021514892578125
end 1 : 6.004615068435669
begin 4 : 1560891824.0439706
begin 5 : 1560891826.0481522
end 4 : 3.004107713699341
end 3 : 6.005637168884277
begin 6 : 1560891828.0511773
begin 7 : 1560891830.0557532
end 6 : 3.0032966136932373
end 5 : 6.006829261779785
begin 8 : 1560891832.056265
begin 9 : 1560891834.0593572
end 8 : 3.011284112930298
end 7 : 6.005618333816528
begin 10 : 1560891836.0627353
end 10 : 3.0014095306396484
worker closed
end 9 : 6.000675916671753
worker closed
import multiprocessing
from time import sleep, time
import threading

class TEMP:
    lock = multiprocessing.Lock()
    id_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    queue = multiprocessing.Queue(10)
    DELAY = 2

    def mp_worker(self, queue, lock):
        while queue.qsize() > 0:
            lock.acquire()
            # Release the lock after a delay
            threading.Timer(self.DELAY, lock.release).start()
            record = queue.get()
            start_time = time()
            print("begin {0} : {1}".format(record, start_time))
            if (record % 2 == 0):
                sleep(3)
            else:
                sleep(6)
            print("end {0} : {1}".format(record, time() - start_time))
        threading.Timer.join()
        print("worker closed")

    def mp_handler(self):
        # Spawn four processes, assigning the method to be executed
        # and the input arguments (the queue and the lock)
        processes = [multiprocessing.Process(target=self.mp_worker, args=(self.queue, self.lock))
                     for _ in range(4)]
        for process in processes:
            process.start()
            print('Process started')
        for process in processes:
            process.join()

    def start_mp(self):
        for id in self.id_list:
            self.queue.put(id)
        self.mp_handler()

if __name__ == '__main__':
    temp = TEMP()
    temp.start_mp()
I actually fixed this problem. The main reason my code was not joining was that it checked whether the queue was empty, waited for a delay, and then attempted to get something from the queue. Towards the end of the program, the queue had become empty and 2 of the 4 processes finished at the same time, but the remaining 2 processes were still in their delay. When the delay ended they attempted to get something from the queue, but since the queue was empty they just blocked, so the rest of each process's code never ran and they could never join.
I fixed this by also checking whether the queue is empty right before the process attempts to get something from it. My fixed worker function is below:
def mp_worker(self, queue, lock):
    while not queue.empty():
        print(multiprocessing.current_process().name)
        lock.acquire()
        # Release the lock after a delay
        timer = threading.Timer(self.DELAY, lock.release)
        timer.start()
        if not queue.empty():
            record = queue.get(False)
            start_time = time()
            print("begin {0} : {1}".format(record, start_time))
            if (record % 2 == 0):
                sleep(3)
            else:
                sleep(6)
            print("end {0} : {1}".format(record, time() - start_time))
    print("{0} closed".format(multiprocessing.current_process().name))

Daemon thread not exiting despite main program finishing

I've already referred to this thread, but it seems to be outdated, and there doesn't seem to be a clean explanation:
Python daemon thread does not exit when parent thread exits
I'm running Python 3.6 and trying to run the script from either IDLE or the Spyder IDE.
Here is my code:
import threading
import time

total = 4

def creates_items():
    global total
    for i in range(10):
        time.sleep(2)
        print('added item')
        total += 1
    print('creation is done')

def creates_items_2():
    global total
    for i in range(7):
        time.sleep(1)
        print('added item')
        total += 1
    print('creation is done')

def limits_items():
    #print('finished sleeping')
    global total
    while True:
        if total > 5:
            print('overload')
            total -= 3
            print('subtracted 3')
        else:
            time.sleep(1)
            print('waiting')

limitor = threading.Thread(target=limits_items, daemon=True)
creator1 = threading.Thread(target=creates_items)
creator2 = threading.Thread(target=creates_items_2)

print(limitor.isDaemon())

creator1.start()
creator2.start()
limitor.start()

creator1.join()
creator2.join()

print('our ending value of total is', total)
The limitor thread doesn't seem to be ending despite being a daemon thread.
Is there a way to get this working from IDLE or Spyder?
Thanks.
I had the same problem and solved it by using multiprocessing instead of threading:
from multiprocessing import Process
import multiprocessing
from time import sleep

def daemon_thread():
    for _ in range(10):
        sleep(1)
        print("Daemon")

if __name__ == '__main__':
    multiprocessing.freeze_support()
    sub_process = Process(target=daemon_thread, daemon=True)
    sub_process.start()
    print("Exiting Main")
I haven't yet really understood why I need the call to freeze_support() but it makes the code work.
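
For context, here is a minimal sketch (not from the original answers) showing that a daemon thread does die with the main thread when the script is run from a plain terminal with python script.py; IDEs like IDLE and Spyder keep the host process alive, which is why the daemon appeared to survive there:

import threading
import time

def background():
    while True:  # would keep the process alive if this were a non-daemon thread
        time.sleep(1)
        print("still running")

t = threading.Thread(target=background, daemon=True)
t.start()
time.sleep(3)
print("main done")  # the process exits here, killing the daemon thread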

Python - multithreading - Threads terminate in one case. In another they don't. Why?

Consider the following example, which I've been working on to learn multithreading. It's just an extended version of the Python 3.5 queue documentation example.
It prints some numbers over 4 threads, produces one error in the queue, retries this element, and should print the remaining queue if a KeyboardInterrupt exception occurs.
import threading
import queue
import time
import random
import traceback

def worker(q, active):
    while True:
        worker_item = q.get()
        #if worker_item == None:
        if not active.is_set():
            break
        time.sleep(random.random())
        with threading.Lock():
            if worker_item == 5 or worker_item == '5':
                try:
                    print(threading.current_thread().name + ': ' + worker_item + ' | remaining queue: ' + str(list(q.queue)))
                except TypeError:
                    print(threading.current_thread().name + ': ')
                    print(traceback.format_exc())
                    q.put(str(worker_item))
            else:
                print(threading.current_thread().name + ': ' + str(worker_item) + ' | remaining queue: ' + str(list(q.queue)))
        q.task_done()
def main():
    # INITIALIZE
    num_threads = 4
    stack1 = list(range(1, 21))
    stack2 = list(range(101, 121))
    q = queue.Queue()
    active = threading.Event()
    active.set()

    # START THREADS
    threads = []
    for _ in range(num_threads):
        t = threading.Thread(target=worker, args=(q, active))
        t.start()
        threads.append(t)

    try:
        # PUT STACK ITEMS ON QUEUE AND BLOCK UNTIL ALL TASKS ARE DONE
        for stack1_item in stack1:
            q.put(stack1_item)
        q.join()
        for stack2_item in stack2:
            q.put(stack2_item)
        q.join()
        # STOP WORKER LOOP IN EVERY THREAD
        #for _ in threads:
        #    q.put(None)
        active.clear()
        # WAIT UNTIL ALL THREADS TERMINATE
        for t in threads:
            t.join()
    except KeyboardInterrupt:
        print(traceback.format_exc())
        print('remaining queue: ' + str(list(q.queue)))
        #for _ in threads:
        #    q.put(None)
        active.clear()
        for t in threads:
            t.join()

if __name__ == '__main__':
    main()
If I run the script as it is (without a KeyboardInterrupt), it won't terminate; I have to kill the process. But if I comment/uncomment the following lines (not using the event and doing it the way the docs do):

in worker: comment out if not active.is_set(): and uncomment #if worker_item == None:
in main: comment out active.clear() and uncomment the #for _ in threads: q.put(None) loop
in main's except block: comment out active.clear() and uncomment the #for _ in threads: q.put(None) loop

it does exit with exit code 0. Why?
Why is putting Nones onto the queue necessary?
What would be the solution without putting Nones onto the queue?
There are two types of threads: daemon and non-daemon. By default, all threads are non-daemon. The process is kept alive as long as there is at least one non-daemon thread.
This means that to stop the process, you either have to:
stop all of its threads (this is what your commented out code does by using None to kick the worker out of the infinite wait in q.get()); or
make the workers daemon threads, in which case the process will stop as soon as the main thread stops (this will require extra care if you want to ensure the workers have finished their tasks).
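
A minimal sketch of the second option (the structure is an assumption, not code from this thread): daemon workers plus q.join(), which guarantees every task is finished before the main thread, and therefore the process, exits:

import queue
import threading

def worker(q):
    while True:
        item = q.get()  # blocking forever here is fine for a daemon thread
        print(threading.current_thread().name, 'processed', item)
        q.task_done()

q = queue.Queue()
for _ in range(4):
    threading.Thread(target=worker, args=(q,), daemon=True).start()

for item in range(1, 21):
    q.put(item)

q.join()  # blocks until every item has been marked task_done()
# Main thread ends here; the daemon workers die with the process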
