If I understand correctly, Python doesn't allow a process to be started from within another process?! For example:
def function1():
    while True:
        # wait for condition, then....
        process2.start()

def function2():
    # does something
    process2.join()

process1 = multiprocessing.Process(target=function1)
process2 = multiprocessing.Process(target=function2)
process1.start()
In my test, Python refused to start a process from within another process.
Is there some other way to solve this?
If not, I'd have another option, but it would require modifying the electronics (connecting one output to one input and using that to let a process wait for an event and then start). I don't think that's a clean way, though; it's more of a workaround, and I'd risk causing a short circuit if the input and output are not set correctly.
Edit:
The Task:
Three processes run in parallel. Each waits for an input from one attached sensor.
When one of these processes detects an input change, it should reset a counter (LED_counter) and start another process (LED_process) if it is not already started. After that, the process waits for an input change again.
Besides that...
The LED_process activates one output and counts down LED_counter. When LED_counter reaches zero, the process terminates. If it is triggered again, it must be able to restart from the top of the code.
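For illustration, here is a minimal sketch of that logic with multiprocessing (a sketch only: the sensor waits are simulated with sleeps, LED_counter and LED_process come from the task description, and everything else is an assumption):

import multiprocessing
import random
import time

def led_process_func(led_counter):
    print("LED on")  # activate the output here
    while True:
        with led_counter.get_lock():
            led_counter.value -= 1
            if led_counter.value <= 0:
                break
        time.sleep(1)
    print("LED off")  # deactivate the output here; the process ends

def sensor_watcher(name, led_counter, trigger):
    while True:
        time.sleep(random.uniform(1, 5))  # stand-in for a sensor edge
        print("{}: input change".format(name))
        with led_counter.get_lock():
            led_counter.value = 10  # reset LED_counter
        trigger.set()  # ask the main process to (re)start LED_process

if __name__ == "__main__":
    led_counter = multiprocessing.Value('i', 0)
    trigger = multiprocessing.Event()
    for name in ("garden gate", "garage", "front door"):
        multiprocessing.Process(target=sensor_watcher,
                                args=(name, led_counter, trigger),
                                daemon=True).start()
    led_process = None
    while True:
        trigger.wait()
        trigger.clear()
        # start LED_process only if it is not already running; a fresh
        # Process object is needed for every restart
        if led_process is None or not led_process.is_alive():
            led_process = multiprocessing.Process(target=led_process_func,
                                                  args=(led_counter,))
            led_process.start()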
Edit 2:
Latest try with threading. If I run this code, the different threads mix together in some strange way, and so far I can't find the mistake. The same code with multiprocessing works fine:
import RPi.GPIO as GPIO
import time
import threading
import sys

LED_time = 10  # LEDs' active time

# Sensor inputs
SGT = 25
SGA = 23
SHT = 12

GPIO.setmode(GPIO.BCM)
GPIO.setup(SGT, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(SGA, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(SHT, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def Sens_check(Sensor, Name):
    print("Thread_{} active".format(Name))
    while True:
        GPIO.wait_for_edge(Sensor, GPIO.FALLING)
        #LcGT.value = LED_time
        print("{} open".format(Name))
        time.sleep(0.1)
        GPIO.wait_for_edge(SGT, GPIO.RISING)
        print("{} closed".format(Name))
        time.sleep(0.1)

SensGT_Thread = threading.Thread(
    target=Sens_check,
    args=(SGT, "garden gate",))
SensGA_Thread = threading.Thread(
    target=Sens_check,
    args=(SGA, "garage",))
SensHT_Thread = threading.Thread(
    target=Sens_check,
    args=(SHT, "front door",))

try:
    SensGT_Thread.start()
    time.sleep(0.1)
    SensGA_Thread.start()
    time.sleep(0.1)
    SensHT_Thread.start()
    SensGT_Thread.join()
    SensGA_Thread.join()
    SensHT_Thread.join()
except:
    print("FAILURE")
finally:
    sys.exit(1)
Processes can only be started from within the process that created them. In the code provided, process2 was created in the main process, yet you tried to start it from within another one (process1). Also, a Process object cannot be restarted, so a new one must be created each time you want to call .start().
Here's an example of starting processes within a process:
import multiprocessing
import time

def function1():
    print("Starting more processes")
    sub_procs = [multiprocessing.Process(target=function2) for _ in range(5)]
    for proc in sub_procs:
        proc.start()
    for proc in sub_procs:
        proc.join()
    print("Done with more processes")

def function2():
    print("Doing work")
    time.sleep(1)  # work
    print("Done with work")

print("Starting one subprocess")
process1 = multiprocessing.Process(target=function1)
process1.start()
print("Moving on without joining")
"""Output of this:
Starting one subprocess
Moving on without joining
Starting more processes
Doing work
Doing work
Doing work
Doing work
Doing work
Done with work
Done with work
Done with work
Done with work
Done with work
Done with more processes
"""
I'm opening multiple Python instances using the multiprocessing package and subprocess objects: basically 10 different Python instances, each containing two sockets that serve as a client socket and a server socket.
Here is an example of how I launch two Python instances with two different files:
from time import sleep
from multiprocessing import Process
import subprocess
def task1():
    print('This is task1')
    subprocess.Popen(['python', 'server_client_pair1.py'])
    sleep(1)

def task2():
    # block for a moment
    sleep(1)
    # display a message
    print('This is task2')
    p1 = subprocess.Popen(['python', 'server_client_pair2.py'])
    sleep(1)

if __name__ == '__main__':
    # create a process
    process1 = Process(target=task1)
    sleep(.5)
    process2 = Process(target=task2)
    sleep(.5)
    # run the process
    process1.start()
    sleep(.5)
    process2.start()
    sleep(.5)
    # wait for the process to finish
    print('Waiting for the process...')
    process1.join()
    process2.join()
I need to pass an argument that sets the PORT variable (the port number) in the file ('server_client_pair.py'), incrementing it (PORT + 1) for every instance launched in the loop.
Right now I have working code that uses 10 different server_client_pair.py files (server_client_pair1.py, server_client_pair2.py, server_client_pair3.py, etc)
I'm wondering how to do this with just one file. Any help would be welcome.
*edited the post for more info
First you need to add argument handling to your server_client_pair.py file; then it will work for you as well:
def task1():
    for i in range(10):
        subprocess.Popen(['python', 'server_client_pair.py', str(i)])
        sleep(1)
Check here to learn how to pass arguments to your Python files:
https://www.tutorialspoint.com/python/python_command_line_arguments.htm
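On the receiving side, a minimal sketch of server_client_pair.py could look like this (BASE_PORT and its value are assumptions; adapt them to your script):

# server_client_pair.py (hypothetical receiving side)
import sys

BASE_PORT = 5000  # assumption: replace with your real base port
index = int(sys.argv[1]) if len(sys.argv) > 1 else 0
PORT = BASE_PORT + index  # each launched instance gets its own port
print("instance {} using port {}".format(index, PORT))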
I'm trying to launch a function (my_function) and stop its execution after a certain time is reached.
So I tried the multiprocessing library, and everything works well. Here is the code, where my_function() has been changed to only create a dummy message.
from multiprocessing import Queue, Process
from multiprocessing.queues import Empty
import time

timeout = 1
# timeout = 3

def my_function(something):
    time.sleep(2)
    return f'my message: {something}'

def wrapper(something, queue):
    message = "too late..."
    try:
        message = my_function(something)
        return message
    finally:
        queue.put(message)

try:
    queue = Queue()
    params = ("hello", queue)
    child_process = Process(target=wrapper, args=params)
    child_process.start()
    output = queue.get(timeout=timeout)
    print(f"ok: {output}")
except Empty:
    timeout_message = f"Timeout {timeout}s reached"
    print(timeout_message)
finally:
    if 'child_process' in locals():
        child_process.kill()
You can test and verify that, depending on whether timeout is 1 or 3, the timeout error is triggered or not.
My main problem is that the real my_function() is a torch model inference for which I would like to limit the number of threads (to 4, say).
That is easy to do when my_function runs in the main process, but in my example I tried a lot of tricks to limit it in the child process without any success (using threadpoolctl.threadpool_limits(4), torch.set_num_threads(4), os.environ["OMP_NUM_THREADS"] = "4", os.environ["MKL_NUM_THREADS"] = "4").
I'm completely open to other solutions that can monitor the execution time of a function while limiting the number of threads it uses.
Thanks, regards.
You can limit the number of simultaneous processes with Pool (https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing.pool), and you can set the maximum number of tasks completed per child. Check it out.
Here is a sample from superfastpython by Jason Brownlee:
# SuperFastPython.com
# example of limiting the number of tasks per child in the process pool
from time import sleep
from multiprocessing.pool import Pool
from multiprocessing import current_process

# task executed in a worker process
def task(value):
    # get the current process
    process = current_process()
    # report a message
    print(f'Worker is {process.name} with {value}', flush=True)
    # block for a moment
    sleep(1)

# protect the entry point
if __name__ == '__main__':
    # create and configure the process pool
    with Pool(2, maxtasksperchild=3) as pool:
        # issue tasks to the process pool
        for i in range(10):
            pool.apply_async(task, args=(i,))
        # close the process pool
        pool.close()
        # wait for all tasks to complete
        pool.join()
I just start a new thread:
self.thread = ThreadedFunc()
self.thread.start()
After something happens, I want to exit my program, so I'm calling os._exit():
os._exit(1)
The program keeps running; everything is functional, and it looks as if the os._exit() call never executed.
Is there a different way to exit the whole program from a different thread? How can I fix this?
EDIT: Added more complete code sample.
self.thread = DownloadThread()
self.thread.data_downloaded.connect(self.on_data_ready)
self.thread.data_progress.connect(self.on_progress_ready)
self.progress_initialized = False
self.thread.start()

class DownloadThread(QtCore.QThread):
    def run(self):
        # downloading stuff etc.
        sleep(1)
        subprocess.call(os.getcwd() + "\\another_process.exe")
        sleep(2)
        os._exit(1)
EDIT 2: SOLVED! There are quit(), terminate() and exit() functions that simply stop the thread. It was that easy. Just look at the docs.
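For illustration, a minimal sketch of that approach from the GUI side (assuming self.thread is the DownloadThread above; the 5-second grace period is an arbitrary choice):

self.thread.quit()              # ask the thread's event loop to exit
if not self.thread.wait(5000):  # give it up to 5 s to finish cleanly
    self.thread.terminate()     # last resort: forceful stop
    self.thread.wait()

Note that quit() only has an effect if the thread runs an event loop; a run() override that does blocking work has to be stopped cooperatively or with terminate().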
Calling os._exit(1) works for me.
You should use the standard library threading module. I guess you are using multiprocessing, a process-based "threading" interface with an API similar to threading's, but one that creates a child process instead of a child thread, so os._exit(1) only exits the child process and does not affect the main process.
Also, make sure you call join() in the main thread. Otherwise, the operating system may schedule the main thread to run to completion before anything happens in the child thread.
sys.exit() does not work because it is equivalent to raising a SystemExit exception, and raising an exception in a thread only exits that thread, not the entire process.
Sample code, tested under Ubuntu with python3 thread.py; echo $?. The return code is 1, as expected:
import os
import sys
import time
import threading

# Python Threading Example for Beginners
# First Method
def greet_them(people):
    for person in people:
        print("Hello Dear " + person + ". How are you?")
        os._exit(1)
        time.sleep(0.5)

# Second Method
def assign_id(people):
    i = 1
    for person in people:
        print("Hey! {}, your id is {}.".format(person, i))
        i += 1
        time.sleep(0.5)

people = ['Richard', 'Dinesh', 'Elrich', 'Gilfoyle', 'Gevin']

t = time.time()

# Created the threads
t1 = threading.Thread(target=greet_them, args=(people,))
t2 = threading.Thread(target=assign_id, args=(people,))

# Started the threads
t1.start()
t2.start()

# Joined the threads
t1.join()  # Cannot remove this join() for this example
t2.join()

# Possible to reach here if join() removed
print("I took " + str(time.time() - t))
Credit: Sample code is copied and modified from https://www.simplifiedpython.net/python-threading-example/
I'm not too familiar with threading, and I'm probably not using it correctly, but I have a script that runs a speed test a few times and prints the average. I'm trying to use threading to call a function that displays something while the tests are running.
Everything works fine unless I put input() at the end of the script to keep the console window open; that causes the progress thread to run continuously.
I'm looking for some direction on terminating a thread correctly. I'm also open to any better ways to do this.
import speedtest, time, sys, datetime
from threading import Thread

s = speedtest.Speedtest()
best = s.get_best_server()

def downloadTest(tries):
    x = 0
    downloadList = []
    for x in range(tries):
        downSpeed = (s.download() / 1000000)
        downloadList.append(downSpeed)
        x += 1
    results_dict = s.results.dict()
    global download_avg, isp
    download_avg = (sum(downloadList) / len(downloadList))
    download_avg = round(download_avg, 1)
    isp = (results_dict['client']['isp'])
    print("")
    print(isp)
    print(download_avg)

def progress():
    while True:
        print('~ ', end='', flush=True)
        time.sleep(1)

def start():
    now = (datetime.datetime.today().replace(microsecond=0))
    print(now)
    d = Thread(target=downloadTest, args=(3,))
    d.start()
    d1 = Thread(target=progress)
    d1.daemon = True
    d1.start()
    d.join()

start()
input("Complete...")  # this causes progress thread to keep running
There is no reason for your thread to exit, which is why it does not terminate. A daemon thread normally ends when your program (all other threads) terminates, which does not happen here because the final input() keeps the main thread alive.
In general it is a good idea to make a thread stop by itself rather than forcefully killing it, so you would typically stop this kind of thread with a flag. Try changing the segment at the end to:
killflag = False
start()
killflag = True
input("Complete...")
and update the progress method to:
def progress():
    while not killflag:
        print('~ ', end='', flush=True)
        time.sleep(1)
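A slightly more idiomatic variant of the same flag idea (a sketch, not part of the original answer) uses threading.Event, whose wait() doubles as the sleep:

import threading

stop_event = threading.Event()

def progress():
    # wait(1) sleeps up to one second but returns immediately
    # (and truthy) once the event is set, ending the loop
    while not stop_event.wait(1):
        print('~ ', end='', flush=True)

# usage:
# start()           # launches the threads as before
# stop_event.set()  # signals progress() to exit
# input("Complete...")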
Below is code that demonstrates the problem. Please note that this is only an example; I am using the same logic in a more complicated application, where I can't use sleep because the time process1 takes to modify the variable depends on the speed of the internet connection.
from multiprocessing import Process

code = False

def func():
    global code
    code = True

pro = Process(target=func)
pro.start()
while code == False:
    pass
pro.terminate()
pro.join()
print('Done!')
On running this, nothing appears on the screen. When I terminate the program by pressing CTRL-C, the stack trace shows that the while loop was still executing.
Python has a few concurrency libraries: threading, multiprocessing and asyncio (and more).
multiprocessing is a library which uses subprocesses to bypass Python's inability to run CPU-intensive tasks concurrently. Each child process gets its own interpreter and memory, which is why the assignment to the global code in func() is never seen by the parent. To share state between different multiprocessing.Processes, create it via a multiprocessing.Manager() instance. For example:
import multiprocessing
import time

def func(event):
    print("> func()")
    time.sleep(1)
    print("setting event")
    event.set()
    time.sleep(1)
    print("< func()")

def main():
    print("In main()")
    manager = multiprocessing.Manager()
    event = manager.Event()
    p = multiprocessing.Process(target=func, args=(event,))
    p.start()
    while not event.is_set():
        print("waiting...")
        time.sleep(0.2)
    print("OK! joining func()...")
    p.join()
    print('Done!')

if __name__ == "__main__":
    main()
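For a single boolean flag like code in the question, a plain multiprocessing.Value also works; here is a minimal sketch of that alternative (not part of the original answer):

import multiprocessing
import time

def func(code):
    time.sleep(1)      # stand-in for the real work
    code.value = True  # written to shared memory, visible to the parent

if __name__ == "__main__":
    code = multiprocessing.Value('b', False)  # shared boolean flag
    pro = multiprocessing.Process(target=func, args=(code,))
    pro.start()
    while not code.value:
        time.sleep(0.1)  # poll gently instead of busy-waiting
    pro.join()
    print('Done!')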