I am trying to run a simple multiprocessing task as shown below:
import multiprocessing
import time

def main():
    def do_something():
        print('sleeping 1 second')
        time.sleep(1)
        print('Done sleeping')

    p1 = multiprocessing.Process(target=do_something())
    p2 = multiprocessing.Process(target=do_something())
    p1.start()
    p2.start()

if __name__ == '__main__':
    main()
Here is the output:
sleeping 1 second
Done sleeping
sleeping 1 second
Done sleeping
Process finished with exit code 0
But I was expecting it to output:
sleeping 1 second
sleeping 1 second
Done sleeping
Done sleeping
Process finished with exit code 0
I am on a Windows machine using VS Code. It seems that multiprocessing isn't running the tasks in parallel. Do I have to enable multiprocessing support, or is it something else?
Please help. Thanks!
A process is not a thread; it is a different kind of task, and note that it is also different from a thread pool. But the real problem in your code is the parentheses: target=do_something() calls the function immediately in the main process and passes its return value (None) as the target, so each line blocks for a second before the process is even created. Pass the function itself, target=do_something. It is also good practice to wait until all threads are done with .join() and make sure that they exit correctly:
import threading as th
import time

def do_something():
    print('sleeping 1 second')
    time.sleep(1)
    print('Done sleeping')

def main():
    th_1 = th.Thread(target=do_something)
    th_2 = th.Thread(target=do_something)
    th_1.start()
    th_2.start()
    th_1.join()
    th_2.join()

if __name__ == '__main__':
    main()
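For completeness, the original multiprocessing version also runs in parallel once the function is passed without parentheses and the processes are joined; a sketch using the same do_something as in the question:

```python
import multiprocessing
import time

def do_something():
    print('sleeping 1 second')
    time.sleep(1)
    print('Done sleeping')

if __name__ == '__main__':
    # pass the function object itself, not the result of calling it
    p1 = multiprocessing.Process(target=do_something)
    p2 = multiprocessing.Process(target=do_something)
    start = time.perf_counter()
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    # both sleeps overlap, so this takes roughly 1 second rather than 2
    print(f'Finished in {time.perf_counter() - start:.1f} s')
```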
Related
I have a loop which makes a GET request to a web service to fetch data and do some stuff, and I want to 'manually' terminate the thread/event, which I attempted with the following example:
from threading import Event

exit = Event()

if external_condition():
    exit.set()

for _ in range(mins):
    fetch_data_and_do_stuff()
    exit.wait(10)  # wait 10 seconds
With that, the only thing the event terminates is the sleep between iterations. How can I also break out of the loop so it doesn't keep running until the last iteration?
nvm, I've solved it like this:
from threading import Event

exit = Event()

if external_condition():
    exit.set()

for _ in range(mins):
    fetch_data_and_do_stuff()
    if exit.wait(10):
        break
exit.wait(10) returns True when the event has been set, and otherwise just sleeps the 10 seconds, so it works.
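A tiny self-contained demo of that return value; the Timer here is just a stand-in for whatever external code sets the event:

```python
from threading import Event, Timer

exit_event = Event()
# simulate the external condition becoming true 0.25 s from now
Timer(0.25, exit_event.set).start()

iterations = 0
for _ in range(10):
    iterations += 1  # stand-in for fetch_data_and_do_stuff()
    if exit_event.wait(0.1):  # returns True as soon as the event is set
        break
print(iterations)  # the loop ends early instead of running all 10 iterations
```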
You have 2 options: kill the thread or process entirely, or make the loop's boolean condition false. Going the second way, you could use a global variable, like this (Python 3.7; run it to see):
from threading import Thread
from time import sleep

glob = True

def threaded_function():
    while glob:
        print("\n [Thread] this thread is running until main function halts this")
        sleep(0.8)

if __name__ == "__main__":
    thread = Thread(target=threaded_function, args=())
    thread.start()
    for i in range(4, 0, -1):
        print("\n [Main] thread will be terminated in " + str(i) + " seconds")
        sleep(1)
    glob = False
    while True:
        print("[Main] program is over")
        sleep(1)
I'm writing a program and made a "pseudo" program which imitates the same thing the main one does. The main idea is that the program starts and scans a game. The first part detects whether the game has started; then it opens 2 processes: one that scans the game all the time and sends info to the second process, which analyzes the data and plots it. In short, it's 2 infinite loops running simultaneously.
I'm trying to put it all into functions now so I can run it through tkinter and make a GUI for it.
The issue is that every time a process starts, execution loops back to the start of the parent function, runs it again, and then goes on to start the second process. What is the issue here? In this test model, one process sends the value of x to the second process, which prints it out.
import multiprocessing
import time
from multiprocessing import Pipe

def function_start():
    print("GAME DETECTED AND STARTED")
    parent_conn, child_conn = Pipe()
    p1 = multiprocessing.Process(target=function_first_process_loop, args=(child_conn,))
    p2 = multiprocessing.Process(target=function_second_process_loop, args=(parent_conn,))
    function_load(p1)
    function_load(p2)

def function_load(process):
    if __name__ == '__main__':
        print("slept 1")
        process.start()

def function_first_process_loop(conn):
    x = 0
    print("FIRST PROCESS STARTED")
    while True:
        time.sleep(1)
        x += 1
        conn.send(x)
        print(x)

def function_second_process_loop(conn):
    print("SECOND PROCESS STARTED")
    while True:
        data = conn.recv()
        print(data)

function_start()
I've also tried rearranging functions a bit on different ways. This is one of them:
import multiprocessing
import time
from multiprocessing import Pipe

def function_load():
    if __name__ == '__main__':
        parent_conn, child_conn = Pipe()
        p1 = multiprocessing.Process(target=function_first_process_loop, args=(child_conn,))
        p2 = multiprocessing.Process(target=function_second_process_loop, args=(parent_conn,))
        p1.start()
        p2.start()

#FIRST
def function_start():
    print("GAME LOADED AND STARTED")
    function_load()

def function_first_process_loop(conn):
    x = 0
    print("FIRST PROCESS STARTED")
    while True:
        time.sleep(1)
        x += 1
        conn.send(x)
        print(x)

def function_second_process_loop(conn):
    print("SECOND PROCESS STARTED")
    while True:
        data = conn.recv()
        print(data)

#
function_start()
You should always tag a multiprocessing question with the platform you are running under, but I will infer that it is probably Windows or some other platform that uses the spawn method to launch new processes. That means that when a new process is created, a new Python interpreter is launched, the program source is processed from the top, and any code at global scope that is not protected by the check if __name__ == '__main__': will be executed; each started process therefore executes the statement function_start() again.
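If you are unsure, you can check which start method your interpreter uses (purely diagnostic, not part of the fix):

```python
import multiprocessing

if __name__ == '__main__':
    # 'spawn' on Windows and recent macOS, usually 'fork' on Linux
    print(multiprocessing.get_start_method())
```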
So, as @Pranav Hosangadi rightly pointed out, you need the __name__ check in the correct place.
import multiprocessing
from multiprocessing import Pipe
import time

def function_start():
    print("GAME DETECTED AND STARTED")
    parent_conn, child_conn = Pipe()
    p1 = multiprocessing.Process(target=function_first_process_loop, args=(child_conn,))
    p2 = multiprocessing.Process(target=function_second_process_loop, args=(parent_conn,))
    function_load(p1)
    function_load(p2)

def function_load(process):
    print("slept 1")
    process.start()

def function_first_process_loop(conn):
    x = 0
    print("FIRST PROCESS STARTED")
    while True:
        time.sleep(1)
        x += 1
        conn.send(x)
        print(x)

def function_second_process_loop(conn):
    print("SECOND PROCESS STARTED")
    while True:
        data = conn.recv()
        print(data)

if __name__ == '__main__':
    function_start()
Let's do an experiment: Before function_start(), add this line:
print(__name__, "calling function_start()")
Now, you get the following output:
__main__ calling function_start()
GAME DETECTED AND STARTED
slept 1
slept 1
__mp_main__ calling function_start()
GAME DETECTED AND STARTED
__mp_main__ calling function_start()
GAME DETECTED AND STARTED
FIRST PROCESS STARTED
SECOND PROCESS STARTED
1
1
2
2
...
Clearly, function_start() is called by each child process when it starts. This is because Python loads the entire script in the new process and then calls the function you want from that script. The new processes get the name __mp_main__ to differentiate them from the main process, and you can make use of that to prevent these processes from calling function_start().
So instead of calling function_start() directly, call it this way:
if __name__ == "__main__":
    print(__name__, "calling function_start()")
    function_start()
and now you get what you wanted:
__main__ calling function_start()
GAME DETECTED AND STARTED
slept 1
slept 1
FIRST PROCESS STARTED
SECOND PROCESS STARTED
1
1
2
2
...
Hey everyone, I have a script that works in parallel. I was using APScheduler for scheduling the tasks, but it runs them synchronously (BlockingScheduler, BackgroundScheduler) and doesn't work with parallel processes. What would you advise? How can I run the parallel processes every second? I'm also using multiprocessing for the parallelism.
EDIT: I have just solved it. If anyone runs into this issue, here is the example:
from multiprocessing import Process
from apscheduler.schedulers.blocking import BlockingScheduler

sched = BlockingScheduler()

def work_log_cpu1():
    print("Process work_log_cpu1")
    list11 = []
    for i in range(10000000):
        list11.append(i * 2)
    print("Process work_log_cpu1 finished")

def work_log_cpu2():
    print("Process work_log_cpu2")
    list12 = []
    for i in range(10000000):
        list12.append(i * 2)
    print("Process work_log_cpu2 finished")

def work_log_cpu3():
    print("Process work_log_cpu3")
    list13 = []
    for i in range(10000000):
        list13.append(i * 2)
    print("Process work_log_cpu3 finished")

def main():
    # sleeps=[3,5,2,7]
    process = Process(target=work_log_cpu1)
    process2 = Process(target=work_log_cpu2)
    process3 = Process(target=work_log_cpu3)
    process.start()
    process2.start()
    process3.start()
    process.join()
    process2.join()
    process3.join()

if __name__ == '__main__':
    # main()
    sched.add_job(main, 'interval', seconds=1, id='first_job', max_instances=1)
    sched.start()
What's wrong with multiprocessing?
import multiprocessing

p1 = multiprocessing.Process(target=func1, args=("var1", "var2",))
p2 = multiprocessing.Process(target=func2, args=("var3", "var4",))
p1.start()
p2.start()
p1.join()
p2.join()
I want to run a function every few seconds in Python. The function execution takes some time, and I want to include that in the waiting time as well.
I don't want to do this, because it is not strictly executed every 2 seconds and breaks the periodicity (my_function itself takes time to execute):
while True:
    time.sleep(2)
    my_function()
I don't want to do this either, because it uses too much CPU in the while loop of Thread-2.
# Thread-1
while True:
    time.sleep(2)
    event.set()

# Thread-2
while True:
    if event.is_set():
        my_function()
    else:
        pass
Can anyone please help me?
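One stdlib-only sketch that keeps the period fixed: compute the next deadline from the previous one and wait only for the remainder, using an Event so the wait is interruptible rather than a busy loop. The 2-second interval is from the question; my_function here is a placeholder:

```python
import time
from threading import Event

INTERVAL = 2.0
stop = Event()

def my_function():
    time.sleep(0.5)  # placeholder for the real work

def run_periodically():
    next_run = time.monotonic()
    while not stop.is_set():
        my_function()
        next_run += INTERVAL
        remaining = next_run - time.monotonic()
        if remaining > 0:
            # waits at most `remaining`, wakes immediately if stop is set
            stop.wait(remaining)
```

Because the deadline advances by exactly INTERVAL each iteration, the execution time of my_function doesn't accumulate as drift.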
You can consider ischedule. It takes care of the function execution time right out of the box and doesn't waste CPU time on busy waiting. You can use:
from ischedule import schedule, run_loop
schedule(my_function, interval=2)
run_loop()
I believe the schedule module is your friend
I found this code works pretty well, if I understood your question correctly.
Code broken down:
runs func1
runs func2
waits 2s
does something else after that
waits 1s
does it all again
import threading
import time

def func1():
    print("function 1 has been called")

def func2():
    print("function 2 has been called")

def loop():
    print("loop print 1")
    thread = threading.Thread(target=func1, name="thread")
    thread.start()
    while thread.is_alive():
        continue
    if not thread.is_alive():
        thread2 = threading.Thread(target=func2, name="thread2")
        thread2.start()
        while thread2.is_alive():
            continue
        time.sleep(2)

while True:
    loop()
    print("Some other thing")
    time.sleep(1)
I'm looking to terminate some threads after a certain amount of time. These threads will be running an infinite while loop and during this time they can stall for a random, large amount of time. The thread cannot last longer than time set by the duration variable.
How can I make it so that after the length set by duration, the threads stop?
def main():
    t1 = threading.Thread(target=thread1, args=(1,))
    t2 = threading.Thread(target=thread2, args=(2,))
    time.sleep(duration)
    # the threads must be terminated after this sleep
This will work if you are not blocking.
If you are planning on doing sleeps, it's absolutely imperative that you use the event to do the sleep. If you leverage the event to sleep and someone tells you to stop while "sleeping", it will wake up immediately. If you use time.sleep(), your thread will only stop after it wakes up.
import threading
import time

duration = 2

def main():
    t1_stop = threading.Event()
    t1 = threading.Thread(target=thread1, args=(1, t1_stop))
    t2_stop = threading.Event()
    t2 = threading.Thread(target=thread2, args=(2, t2_stop))
    t1.start()
    t2.start()
    time.sleep(duration)
    # stops thread t2
    t2_stop.set()

def thread1(arg1, stop_event):
    while not stop_event.is_set():
        stop_event.wait(timeout=5)

def thread2(arg1, stop_event):
    while not stop_event.is_set():
        stop_event.wait(timeout=5)
If you want the threads to stop when your program exits (as implied by your example), then make them daemon threads.
If you want your threads to die on command, then you have to do it by hand. There are various methods, but all involve doing a check in your thread's loop to see if it's time to exit (see Nix's example).
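The daemon option mentioned above is a one-line change; a minimal sketch:

```python
import threading
import time

def background_loop():
    while True:
        time.sleep(0.1)  # stand-in for the stalling work

# daemon=True: the thread is killed automatically when the main thread exits
t = threading.Thread(target=background_loop, daemon=True)
t.start()

time.sleep(0.3)  # main program runs for its duration
# no join needed; interpreter exit takes the daemon thread down with it
```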
If you want to use a class:
from datetime import datetime, timedelta
import threading

class MyThread(threading.Thread):
    def __init__(self, name, timeLimit):
        super().__init__(name=name)
        self.timeLimit = timeLimit

    def run(self):
        # get the start time
        startTime = datetime.now()
        while True:
            # stop if the time limit is reached
            if (datetime.now() - startTime) > self.timeLimit:
                break
            print('A')

mt = MyThread('aThread', timedelta(microseconds=20000))
mt.start()
mt.join()
An alternative is to use signal.pthread_kill to send a stop signal. While it's not as robust as @Nix's answer (and I don't think it will work on Windows), it works in cases where Events don't (e.g., stopping a Flask server).
test.py
from signal import pthread_kill, SIGTSTP
from threading import Thread
import time

DURATION = 5

def thread1(arg):
    while True:
        print(f"processing {arg} from thread1...")
        time.sleep(1)

def thread2(arg):
    while True:
        print(f"processing {arg} from thread2...")
        time.sleep(1)

if __name__ == "__main__":
    t1 = Thread(target=thread1, args=(1,))
    t2 = Thread(target=thread2, args=(2,))
    t1.start()
    t2.start()
    time.sleep(DURATION)
    # stops all threads
    pthread_kill(t2.ident, SIGTSTP)
result
$ python test.py
processing 1 from thread1...
processing 2 from thread2...
processing 1 from thread1...
processing 2 from thread2...
processing 1 from thread1...
processing 2 from thread2...
processing 1 from thread1...
processing 2 from thread2...
processing 1 from thread1...
processing 2 from thread2...
[19]+ Stopped python test.py