Why does this code
import multiprocessing
import time

class Bot(multiprocessing.Process):
    def __init__(self):
        self.val = 0
        multiprocessing.Process.__init__(self)

    def setVal(self):
        self.val = 99

    def run(self):
        while True:
            print 'IN: ', self.val
            time.sleep(2)

if __name__ == '__main__':
    bot = Bot()
    bot.start()
    bot.setVal()
    while True:
        print 'OUT: ', bot.val
        time.sleep(2)
give the following output?
OUT: 99
IN: 0
OUT: 99
IN: 0
OUT: 99
IN: 0
OUT: 99
IN: 0
OUT: 99
IN: 0
OUT: 99
IN: 0
...
As you may guess, I expect to get all 99s, both IN and OUT. But I do not. Why? What am I missing?
Once you've called start() on your object, the stuff inside that object is running in a separate process, and using methods of that class to "communicate" with it is not really the best way. What you need to do is called inter-process communication (IPC for short), and there is special machinery for doing it correctly.
For Python's multiprocessing module there are two mechanisms for communicating between processes: Pipe and Queue. I would suggest looking into those (e.g. here).
To use the Pipe mechanism in your example, you might do it this way (just a quick illustration):
class Bot(multiprocessing.Process):
    def __init__(self, pipe):
        multiprocessing.Process.__init__(self)
        self.val = 0
        self.ipcPipe = pipe

    def run(self):
        while True:
            newData = self.ipcPipe.recv()
            self.val = newData[0]
            print 'IN: ', self.val
            self.ipcPipe.send([self.val])
            time.sleep(2)

if __name__ == '__main__':
    parent_conn, child_conn = multiprocessing.Pipe()
    bot = Bot(child_conn)
    bot.start()
    value = 0
    while True:
        value += 1
        parent_conn.send([value])
        outVal = parent_conn.recv()
        print 'OUT: ', outVal[0]
        time.sleep(2)
See what's been done here: we create the parent and child "ends" of the Pipe and give the child end to your object. Then from the parent process you use send() to communicate a new value to the object, and recv() to get an updated value back. Inside your object (a separate process, remember) you likewise use send() and recv() on the pipe's other end to communicate with the parent process.
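The Queue mechanism mentioned above works just as well for this kind of exchange. Here is a minimal sketch of the same idea using a single shared Queue (illustrative only; the names are chosen for this example):

import multiprocessing
import time

class Bot(multiprocessing.Process):
    def __init__(self, queue):
        multiprocessing.Process.__init__(self)
        self.val = 0
        self.queue = queue  # queue shared with the parent process

    def run(self):
        while True:
            self.val = self.queue.get()  # blocks until the parent sends a value
            print('IN: ', self.val)

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    bot = Bot(queue)
    bot.start()
    value = 0
    while True:
        value += 1
        queue.put(value)  # hand the new value to the child process
        print('OUT: ', value)
        time.sleep(2)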
Also, I would recommend calling Process.__init__(self) in your class __init__ method before doing any other initialization. Since you're inheriting from Process it's a good idea to make sure all the process-y stuff under the hood gets initialized correctly before you do anything in your own class.
Hope this helps.
The problem is that once you start the second process, you are printing from 2 different processes.
The parent process has the original instance of bot, with the value then set to 99. The parent process is printing OUT which is why you get the value 99 printed.
The (new) subprocess starts with state copied from the bot object as it was when you called the multiprocessing method start(). Because of this, it has a state of 0. You never call setVal in the subprocess, so its value remains 0, and the IN lines print 0.
If you want to share state information like this between the parent process and the subprocess, have a read of this:
http://docs.python.org/2/library/multiprocessing.html#sharing-state-between-processes
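As a rough illustration of what that page describes, a shared multiprocessing.Value makes the update visible in both processes (a sketch only; the typecode and names here are illustrative):

import multiprocessing
import time

def loop(shared_val):
    while True:
        print('IN: ', shared_val.value)  # sees the parent's update
        time.sleep(2)

if __name__ == '__main__':
    shared_val = multiprocessing.Value('i', 0)  # 'i' = C int, lives in shared memory
    p = multiprocessing.Process(target=loop, args=(shared_val,))
    p.start()
    shared_val.value = 99  # both processes now observe 99
    while True:
        print('OUT: ', shared_val.value)
        time.sleep(2)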
bot = Bot()    # creates a bot object in this process
bot.start()    # creates another process and creates a bot object there
These are two different objects.
If you want to have a look at how to share state between two processes: http://docs.python.org/2/library/multiprocessing.html#namespace-objects
Or at the pipes also proposed by DMH:
http://docs.python.org/2/library/multiprocessing.html#sharing-state-between-processes
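For completeness, the namespace-object approach from that link looks roughly like this (a sketch only; the function and attribute names are made up for illustration):

import multiprocessing
import time

def loop(ns):
    while True:
        print('IN: ', ns.val)  # reads the value set by the parent
        time.sleep(2)

if __name__ == '__main__':
    manager = multiprocessing.Manager()
    ns = manager.Namespace()
    ns.val = 0
    p = multiprocessing.Process(target=loop, args=(ns,))
    p.start()
    ns.val = 99  # visible in the child because the Namespace is managed
    while True:
        print('OUT: ', ns.val)
        time.sleep(2)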
Related
I have a program that makes a call to an API every minute and does some operations. When some condition is met, I want to create a new process that will make calls to another API every second and do some operations. The parent process doesn't care about the result this child process produces; the child will run on its own until everything is done. This way the parent process can continue making calls to the API every minute and doing operations without interruption.
I looked into multiprocessing. However, I can't get it to work outside of main. I tried passing a callback function, but that created an unexpected result (the parent process started running again in parallel at some point).
Another solution I can think of is to just create another project and then make a request, but then I will have a lot of repeated code.
What is the best approach to my problem?
example code:
class Main:
    [...]
    foo = Foo()
    child = Child()
    foo.Run(child.Process)

class Foo:
    [...]
    def Run(callbackfunction):
        while(True):
            x = self.dataServices.GetDataApi()
            if(x == 1020):
                callbackfunction()
            #start next loop after a minute

class Child:
    [...]
    def Compute(self):
        while(True):
            self.dataServics.GetDataApiTwo()
            #do stuff
            #start next loop after a second

    def Process(self):
        self.Compute() # i want this function to run from a new process, so it wont interfer
Edit 2: added my multiprocessing attempt
class Main:
    def CreateNewProcess(self, callBack):
        if __name__ == '__main__':
            p = Process(target=callBack)
            p.start()
            p.join()

    foo = Foo()
    child = Child(CreateNewProcess)
    foo.Run(child.Process)

class Foo:
    def Run(callbackfunction):
        while(True):
            x = dataServices.GetDataApi()
            if(x == 1020):
                callbackfunction()
            #start next loop after a minute

class Child:
    _CreateNewProcess = None

    def __init__(self, CreateNewProcess):
        self._CreateNewProcess = CreateNewProcess

    def Compute(self, CreateNewProcess):
        while(True):
            dataServics.GetDataApiTwo()
            #do stuff
            #start next loop after a second

    def Process(self):
        self.CreateNewProcess(self.Compute) # i want this function to run from a new process, so it wont interfer
I had to reorganize a few things. Among others:

- The guard if __name__ == '__main__': should include the creation of objects and especially the calls to functions and methods. Usually it is placed at the global level at the end of the code.
- Child objects shouldn't be created in the main process. In theory you can do this to use them as containers for the data the child process needs and then send them as a parameter, but I think a separate class should be used for that if it is seen as necessary. Here I used a simple data parameter which can be anything pickleable.
- It is cleaner to have a function at the global level as the process target (in my opinion).
Finally it looks like:
from multiprocessing import Process

class Main:
    @staticmethod
    def CreateNewProcess(data):
        p = Process(target=run_child, args=(data,))
        p.start()
        p.join()

class Foo:
    def Run(self, callbackfunction):
        while(True):
            x = dataServices.GetDataApi()
            if(x == 1020):
                callbackfunction(data)
            #start next loop after a minute

class Child:
    def __init__(self, data):
        self._data = data

    def Compute(self):
        while(True):
            dataServics.GetDataApiTwo()
            #do stuff
            #start next loop after a second

# Target for new process. It is cleaner to have a function outside of a
# class for this
def run_child(data):
    # "data" represents one or more parameters from parent to child
    # necessary to run a specific child. "data" must be pickleable.
    # Can be omitted if unnecessary.
    global child
    child = Child(data)
    child.Compute()

if __name__ == '__main__':
    foo = Foo()
    foo.Run(Main.CreateNewProcess)
I have 4 different Python custom objects and an events queue. Each object has a method that allows it to retrieve an event from the shared events queue, process it if the type is the desired one, and then put a new event on the same events queue, allowing other processes to process it.
Here's an example.
import multiprocessing as mp
class CustomObject:
    def __init__(self, events_queue: mp.Queue) -> None:
        self.events_queue = events_queue

    def process_events_queue(self) -> None:
        event = self.events_queue.get()
        if type(event) == SpecificEventDataTypeForThisClass:
            # do something and create a new_event
            self.events_queue.put(new_event)
        else:
            self.events_queue.put(event)

    # there are other methods specific to each object
These 4 objects have specific tasks to do, but they all share this same structure. Since I need to "simulate" the production conditions, I want them to run all at the same time, independently from each other.
Here's just an example of what I want to do, if possible.
import multiprocessing as mp
import CustomObject
if __name__ == '__main__':
    events_queue = mp.Queue()

    data_provider = mp.Process(target=CustomObject, args=(events_queue,))
    portfolio = mp.Process(target=CustomObject, args=(events_queue,))
    engine = mp.Process(target=CustomObject, args=(events_queue,))
    broker = mp.Process(target=CustomObject, args=(events_queue,))

    while True:
        data_provider.process_events_queue()
        portfolio.process_events_queue()
        engine.process_events_queue()
        broker.process_events_queue()
My idea is to run each object in a separate process, allowing them to communicate with events shared through the events_queue. So my question is, how can I do that?
The problem is that obj = mp.Process(target=CustomObject, args=(events_queue,)) returns a Process instance and I can't access the CustomObject methods from it. Also, is there a smarter way to achieve what I want?
Processes require a function to run, which defines what the process is actually doing. Once this function exits (and there are no non-daemon threads) the process is done. This is similar to how Python itself always executes a __main__ script.
If you do mp.Process(target=CustomObject, args=(events_queue,)) that just tells the process to call CustomObject - which instantiates it once and then is done. This is not what you want, unless the class actually performs work when instantiated - which is a bad idea for other reasons.
Instead, you must define a main function or method that handles what you need: "communicate with events shared through the events_queue". This function should listen to the queue and take action depending on the events received.
A simple implementation looks like this:
import os, time
from multiprocessing import Queue, Process

class Worker:
    # separate input and output for simplicity
    def __init__(self, commands: Queue, results: Queue):
        self.commands = commands
        self.results = results

    # our main function to be run by a process
    def main(self):
        # each process should handle more than one command
        while True:
            value = self.commands.get()
            # pick a well-defined signal to detect "no more work"
            if value is None:
                self.results.put(None)
                break
            # do whatever needs doing
            result = self.do_stuff(value)
            print(os.getpid(), ':', self, 'got', value, 'put', result)
            time.sleep(0.2)  # pretend we do something
            # pass on more work if required
            self.results.put(result)

    # placeholder for what needs doing
    def do_stuff(self, value):
        raise NotImplementedError
This is a template for a class that just keeps on processing events. The do_stuff method must be overridden to define what actually happens.
class AddTwo(Worker):
    def do_stuff(self, value):
        return value + 2

class TimesThree(Worker):
    def do_stuff(self, value):
        return value * 3

class Printer(Worker):
    def do_stuff(self, value):
        print(value)
This already defines fully working process payloads: Process(target=TimesThree(in_queue, out_queue).main) schedules the main method in a process, listening for and responding to commands.
Running this mainly requires connecting the individual components:
if __name__ == '__main__':
    # bookkeeping of resources we create
    processes = []
    start_queue = Queue()

    # connect our workers via queues
    queue = start_queue
    for element in (AddTwo, TimesThree, Printer):
        instance = element(queue, Queue())
        # we run the main method in processes
        processes.append(Process(target=instance.main))
        queue = instance.results

    # start all processes
    for process in processes:
        process.start()

    # send input, but do not wait for output
    start_queue.put(1)
    start_queue.put(248124)
    start_queue.put(-256)
    # send shutdown signal
    start_queue.put(None)

    # wait for processes to shutdown
    for process in processes:
        process.join()
Note that you do not need classes for this. You can also compose functions for a similar effect, as long as everything is pickle-able:
import os, time
from multiprocessing import Queue, Process

def main(commands, results, do_stuff):
    while True:
        value = commands.get()
        if value is None:
            results.put(None)
            break
        result = do_stuff(value)
        print(os.getpid(), ':', do_stuff, 'got', value, 'put', result)
        time.sleep(0.2)
        results.put(result)

def times_two(value):
    return value * 2

if __name__ == '__main__':
    in_queue, out_queue = Queue(), Queue()
    worker = Process(target=main, args=(in_queue, out_queue, times_two))
    worker.start()
    for message in (1, 3, 5, None):
        in_queue.put(message)
    while True:
        reply = out_queue.get()
        if reply is None:
            break
        print('result:', reply)
(New to Python and OO - I apologize in advance if I'm being stupid here)
I'm trying to define a Python 3 class such that when an instance is created two subprocesses are also created. These subprocesses do some work in the background (sending and listening for UDP packets). The subprocesses also need to communicate with each other and with the instance (updating instance attributes based on what is received from UDP, among other things).
I am creating my subprocesses with os.fork because I don't understand how to use the subprocess module to send multiple file descriptors to child processes - maybe this is part of my problem.
The problem I am running into is how to kill the child processes when the instance is destroyed. My understanding is I shouldn't use destructors in Python because stuff should get cleaned up and garbage collected automatically by Python. In any case, the following code leaves the children running after it exits.
What is the right approach here?
import os
from time import sleep

class A:
    def __init__(self):
        sfp, pts = os.pipe()  # senderFromParent, parentToSender
        pfs, stp = os.pipe()  # parentFromSender, senderToParent
        pfl, ltp = os.pipe()  # parentFromListener, listenerToParent
        sfl, lts = os.pipe()  # senderFromListener, listenerToSender
        pid = os.fork()
        if pid:
            # parent
            os.close(sfp)
            os.close(stp)
            os.close(lts)
            os.close(ltp)
            os.close(sfl)
            self.pts = os.fdopen(pts, 'w')  # allow creator of A inst to
            self.pfs = os.fdopen(pfs, 'r')  # send and receive messages
            self.pfl = os.fdopen(pfl, 'r')  # to/from sender and
        else:                               # listener processes
            # sender or listener
            os.close(pts)
            os.close(pfs)
            os.close(pfl)
            pid = os.fork()
            if pid:
                # sender
                os.close(ltp)
                os.close(lts)
                sender(self, sfp, stp, sfl)
            else:
                # listener
                os.close(stp)
                os.close(sfp)
                os.close(sfl)
                listener(self, ltp, lts)

def sender(a, sfp, stp, sfl):
    sfp = os.fdopen(sfp, 'r')  # receive messages from parent
    stp = os.fdopen(stp, 'w')  # send messages to parent
    sfl = os.fdopen(sfl, 'r')  # receive messages from listener
    while True:
        # send UDP packets based on messages from parent and process
        # responses from listener (some responses passed back to parent)
        print("Sender alive")
        sleep(1)

def listener(a, ltp, lts):
    ltp = os.fdopen(ltp, 'w')  # send messages to parent
    lts = os.fdopen(lts, 'w')  # send messages to sender
    while True:
        # listen for and process incoming UDP packets, sending some
        # to sender and some to parent
        print("Listener alive")
        sleep(1)

a = A()
Running the above produces:
Sender alive
Listener alive
Sender alive
Listener alive
...
Actually, you should use destructors. Python objects have a __del__ method, which is called just before the object is garbage-collected.
In your case, you should define
def __del__(self):
    ...
within your class A that sends the appropriate kill signals to your child processes. Don't forget to store the child PIDs in your parent process, of course.
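A rough sketch of that idea, assuming the parent remembers the PIDs returned by os.fork() in a hypothetical self.child_pids list (the attribute name is made up for illustration):

import os
import signal

class A:
    def __init__(self):
        self.child_pids = []  # hypothetical: append each PID returned by os.fork()
        # ... fork sender and listener as in the question ...

    def __del__(self):
        # called just before the object is garbage-collected
        for pid in self.child_pids:
            try:
                os.kill(pid, signal.SIGTERM)
            except OSError:
                pass  # child already exited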
As suggested here, you can create a child process using multiprocessing module with flag daemon=True.
Example:
from multiprocessing import Process
p = Process(target=f, args=('bob',))
p.daemon = True
p.start()
There's no point trying to reinvent the wheel. subprocess does all you want and more, though multiprocessing will simplify the process, so we'll use that.
You can use multiprocessing.Pipe to create connections and send messages back and forth between a pair of processes. You can make a pipe "duplex", so both ends can send and receive, if that's what you need. You can use multiprocessing.Manager to create a shared Namespace between processes (sharing state between listener, sender and parent). There is a caveat with using managed objects such as a manager's list, dict or Namespace: any mutable object assigned to them will not propagate changes made to that object until it is reassigned to the managed object.
For example:
namespace.attr = {}
# change below not cascaded to other processes
namespace.attr["key"] = "value"
# force change to other processes
namespace.attr = namespace.attr
If you need to have more than one process write to the same attribute then you will need to use synchronisation to prevent concurrent modification by one process wiping out changes made by another process.
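If that becomes necessary, a manager Lock can guard the read-modify-write; a rough sketch (the add_key function and the keys are made up for illustration):

from multiprocessing import Process, Manager

def add_key(namespace, lock, key, value):
    # the read-modify-write must be atomic, otherwise two processes
    # can overwrite each other's updates
    with lock:
        d = namespace.attr   # copy out of the managed object
        d[key] = value
        namespace.attr = d   # reassign so the change propagates

if __name__ == '__main__':
    manager = Manager()
    namespace = manager.Namespace()
    namespace.attr = {}
    lock = manager.Lock()
    procs = [Process(target=add_key, args=(namespace, lock, k, k.upper()))
             for k in ("a", "b", "c")]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(namespace.attr)  # all three keys survive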
Example code:
from multiprocessing import Process, Pipe, Manager

class Reader:
    def __init__(self, writer_conn, namespace):
        self.writer_conn = writer_conn
        self.namespace = namespace

    def read(self):
        self.namespace.msgs_recv = 0
        with self.writer_conn:
            try:
                while True:
                    obj = self.writer_conn.recv()
                    self.namespace.msgs_recv += 1
                    print("Reader got:", repr(obj))
            except EOFError:
                print("Reader has no more data to receive")

class Writer:
    def __init__(self, reader_conn, namespace):
        self.reader_conn = reader_conn
        self.namespace = namespace

    def write(self, msgs):
        self.namespace.msgs_sent = 0
        with self.reader_conn:
            for msg in msgs:
                self.reader_conn.send(msg)
                self.namespace.msgs_sent += 1

def create_child_processes(reader, writer, msgs):
    p_write = Process(target=Writer.write, args=(writer, msgs))
    p_write.start()

    # This is very important, otherwise reader will hang after writer has finished.
    # The order of this statement matters: it must come after p_write.start(), but
    # before p_read.start(). Look up file descriptors and how they are inherited by
    # child processes on Unix, and how any valid fd to the write side of a pipe
    # will keep all read ends open.
    writer.reader_conn.close()

    p_read = Process(target=Reader.read, args=(reader,))
    p_read.start()

    return p_read, p_write

def run_mp_pipe():
    manager = Manager()
    namespace = manager.Namespace()
    read_conn, write_conn = Pipe()

    reader = Reader(read_conn, namespace)
    writer = Writer(write_conn, namespace)

    p_read, p_write = create_child_processes(reader, writer,
        msgs=["hello", "world", {"key", "value"}])

    print("starting")
    p_write.join()
    p_read.join()
    print("done")

    print(namespace)
    assert namespace.msgs_sent == namespace.msgs_recv

if __name__ == "__main__":
    run_mp_pipe()
Output:
starting
Reader got: 'hello'
Reader got: 'world'
Reader got: {'key', 'value'}
Reader has no more data to receive
done
Namespace(msgs_recv=3, msgs_sent=3)
Here is the code sample:
class RunGui(QtGui.QMainWindow):
    def __init__(self, parent=None):
        ...
        QtCore.QObject.connect(self.ui.actionNew, QtCore.SIGNAL("triggered()"), self.new_select)
        ...

    def normal_output_written(self, qprocess):
        self.ui.text_edit.append("caught outputReady signal")  # works
        self.ui.text_edit.append(str(qprocess.readAllStandardOutput()))  # doesn't work

    def new_select(self):
        ...
        dialog_np = NewProjectDialog()
        dialog_np.exec_()
        if dialog_np.is_OK:
            section = dialog_np.get_section()
            project = dialog_np.get_project()
            ...
            np = NewProject()
            np.outputReady.connect(lambda: self.normal_output_written(np.qprocess))
            np.errorReady.connect(lambda: self.error_output_written(np.qprocess))
            np.inputNeeded.connect(lambda: self.input_from_line_edit(np.qprocess))
            np.params = partial(np.create_new_project, section, project, otherargs)
            np.start()
class NewProject(QtCore.QThread):
    outputReady = QtCore.pyqtSignal(object)
    errorReady = QtCore.pyqtSignal(object)
    inputNeeded = QtCore.pyqtSignal(object)
    params = None
    message = ""

    def __init__(self):
        super(NewProject, self).__init__()
        self.qprocess = QtCore.QProcess()
        self.qprocess.moveToThread(self)
        self._inputQueue = Queue()

    def run(self):
        self.params()

    def create_new_project(self, section, project, otherargs):
        ...
        # PyDev for some reason skips the breakpoints inside the thread
        self.qprocess.start(command)
        self.qprocess.waitForReadyRead()
        self.outputReady.emit(self.qprocess)  # works - I'm getting signal in RunGui.normal_output_written()
        print(str(self.qprocess.readAllStandardOutput()))  # prints empty line
        ....  # other actions inside the method requiring "command" to finish properly.
The idea is beaten to death - get the GUI to run scripts and communicate with the processes. The challenge in this particular example is that the script started in QProcess as command runs an app that requires user input (confirmation) along the way. Therefore I have to be able to start the script, get all output and parse it, wait for the question to appear in the output, then communicate back the answer, allow it to finish, and only then proceed further with other actions inside create_new_project().
I don't know if this will fix your overall issue, but there are a few design issues I see here.
- You are passing around the qprocess between threads instead of just emitting your custom signals with the results of the qprocess
- You are using class-level attributes that should probably be instance attributes
Technically you don't even need the QProcess, since you are running it in your thread and actively using blocking calls. It could easily be a subprocess.Popen...but anyways, I might suggest changes like this:
class RunGui(QtGui.QMainWindow):
    ...
    def normal_output_written(self, msg):
        self.ui.text_edit.append(msg)

    def new_select(self):
        ...
        np = NewProject()
        np.outputReady.connect(self.normal_output_written)
        np.params = partial(np.create_new_project, section, project, otherargs)
        np.start()

class NewProject(QtCore.QThread):
    outputReady = QtCore.pyqtSignal(object)
    errorReady = QtCore.pyqtSignal(object)
    inputNeeded = QtCore.pyqtSignal(object)

    def __init__(self):
        super(NewProject, self).__init__()
        self._inputQueue = Queue()
        self.params = None

    def run(self):
        self.params()

    def create_new_project(self, section, project, otherargs):
        ...
        qprocess = QtCore.QProcess()
        qprocess.start(command)
        if not qprocess.waitForStarted():
            # handle a failed command here
            return
        if not qprocess.waitForReadyRead():
            # handle a timeout or error here
            return
        msg = str(qprocess.readAllStandardOutput())
        self.outputReady.emit(msg)
Don't pass around the QProcess. Just emit the data. And create it from within the threads method so that it is automatically owned by that thread. Your outside classes should really not have any knowledge of that QProcess object. It doesn't even need to be a member attribute since its only needed during the operation.
Also make sure you are properly checking that your command both successfully started, and is running and outputting data.
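Since the thread already blocks while the command runs, the same method could be written with subprocess instead of QProcess. A rough sketch of just that method, reusing command and the signals from the class above (error handling omitted; shell=True is assumed here because command is a single string):

import subprocess

class NewProject(QtCore.QThread):
    ...
    def create_new_project(self, section, project, otherargs):
        ...
        # run the command, block until it exits, and capture its output
        proc = subprocess.Popen(command, shell=True,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        self.outputReady.emit(str(out))
        if err:
            self.errorReady.emit(str(err))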
Update
To clarify some problems you might be having (per the comments), I wanted to suggest that QProcess might not be the best option if you need to have interactive control with processes that expect periodic user input. It should work fine for running scripts that just produce output from start to finish, though really using subprocess would be much easier. For scripts that need user input over time, your best bet may be to use pexpect. It allows you to spawn a process, and then watch for various patterns that you know will indicate the need for input:
foo.py
import time
i = raw_input("Please enter something: ")
print "Output:", i
time.sleep(.1)
print "Another line"
time.sleep(.1)
print "Done"
test.py
import pexpect
import time

child = pexpect.spawn("python foo.py")
child.setecho(False)

ret = -1
while ret < 0:
    time.sleep(.05)
    ret = child.expect("Please enter something: ")

child.sendline('FOO')

while True:
    line = child.readline()
    if not line:
        break
    print line.strip()

# Output: FOO
# Another line
# Done
I have a threaded python application with a long-running mainloop in the background thread. This background mainloop is actually a call to pyglet.app.run(), which drives a GUI window and also can be configured to call other code periodically. I need a do_stuff(duration) function to be called at will from the main thread to trigger an animation in the GUI, wait for the animation to stop, and then return. The actual animation must be done in the background thread because the GUI library can't handle being driven by separate threads.
I believe I need to do something like this:
import threading

class StuffDoer(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.max_n_times = 0
        self.total_n_times = 0
        self.paused_ev = threading.Event()

    def run(self):
        # this part is outside of my control
        while True:
            self._do_stuff()
            # do other stuff

    def _do_stuff(self):
        # this part is under my control
        if self.paused_ev.is_set():
            if self.max_n_times > self.total_n_times:
                self.paused_ev.clear()
        else:
            if self.total_n_times >= self.max_n_times:
                self.paused_ev.set()
        if not self.paused_ev.is_set():
            # do stuff that must execute in the background thread
            self.total_n_times += 1

sd = StuffDoer()
sd.start()

def do_stuff(n_times):
    sd.max_n_times += n_times
    sd.paused_ev.wait_for_clear()  # wait_for_clear() does not exist
    sd.paused_ev.wait()
    assert (sd.total_n_times == sd.max_n_times)
EDIT: use max_n_times instead of stop_time to clarify why Thread.join(duration) won't do the trick.
From the documentation for threading.Event:
wait([timeout])
Block until the internal flag is true. If the internal flag is true on entry, return immediately. Otherwise, block until another thread calls set() to set the flag to true, or until the optional timeout occurs.
I've found I can get the behavior I'm looking for if I have a pair of events, paused_ev and not_paused_ev, and use not_paused_ev.wait(). I could almost just use Thread.join(duration), except it needs to only return precisely when the background thread actually registers that the time is up. Is there some other synchronization object or other strategy I should be using instead?
I'd also be open to arguments that I'm approaching this whole thing the wrong way, provided they're good arguments.
Hoping I get some revision or additional info from my comment, but I'm kind of wondering if you're not overworking things by subclassing Thread. You can do things like this:
from threading import Thread

class MyWorker(object):
    def __init__(self):
        t = Thread(target=self._do_work, name="Worker Owned Thread")
        t.daemon = True
        t.start()

    def _do_work(self):
        while True:
            # Something going on here, forever if necessary. This thread
            # will go away if the other non-daemon threads terminate, possibly
            # raising an exception depending on this function's body.
            pass
I find this makes more sense when the method you want to run is more appropriately a member function of some other class than it would be as the run method on a Thread subclass. Additionally, this saves you from having to encapsulate a bunch of business logic inside a Thread. All IMO, of course.
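Usage from the main thread is then just construction (a sketch, assuming the MyWorker class above with a filled-in _do_work body):

if __name__ == '__main__':
    worker = MyWorker()  # the owned daemon thread starts working right away
    # ... the main thread carries on with its own work ...
    # no explicit join: the daemon thread goes away when the main thread exits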
It appears that your GUI animation thread is using a spin-lock in its while True loop. This can be prevented using thread-safe queues. Based on my reading of your question, this approach would be functionally equivalent and efficient.
I'm omitting some details of your code above which would not change. I'm also assuming here that the run() method which you do not control uses the self.stop_time value to do its work; otherwise there is no need for a threadsafe queue.
from Queue import Queue
from threading import Event

class StuffDoer:
    def __init__(self, inq, ready):
        self.inq = inq
        self.ready = ready

    def _do_stuff(self):
        self.ready.set()
        self.stop_time = self.inq.get()

GUIqueue = Queue()
control = Event()
sd = StuffDoer(GUIqueue, control)

def do_stuff(duration):
    control.clear()
    GUIqueue.put(time.time() + duration)
    control.wait()
I ended up using a Queue similar to what @wberry suggested, and making use of Queue.task_done and Queue.join:
import Queue
import threading

class StuffDoer(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.setDaemon(True)
        self.max_n_times = 0
        self.total_n_times = 0
        self.do_queue = Queue.Queue()

    def run(self):
        # this part is outside of my control
        while True:
            self._do_stuff()
            # do other stuff

    def _do_stuff(self):
        # this part is under my control
        if self.total_n_times >= self.max_n_times:
            try:
                self.max_n_times += self.do_queue.get(block=False)
            except Queue.Empty, e:
                pass
        if self.max_n_times > self.total_n_times:
            # do stuff that must execute in the background thread
            self.total_n_times += 1
            if self.total_n_times >= self.max_n_times:
                self.do_queue.task_done()

sd = StuffDoer()
sd.start()

def do_stuff(n_times):
    sd.do_queue.put(n_times)
    sd.do_queue.join()
    assert (sd.total_n_times == sd.max_n_times)
I made a solution based on @g.d.d.c's advice for this question. Here is my code:
threads = []

# initializing aux thread(s) in the main thread ...
t = threading.Thread(target=ThreadF, args=(...))
#t.setDaemon(True)  # I'm not sure if it's really needed
t.start()
threads.append(t.ident)

# Block main thread
while filter(lambda thread: thread.ident in threads, threading.enumerate()):
    time.sleep(10)
Also, you can use Thread.join to block the main thread - it is a better way.
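A minimal sketch of that join-based version, keeping the Thread objects rather than their idents:

import threading

threads = []
t = threading.Thread(target=ThreadF, args=(...))
t.start()
threads.append(t)  # keep the Thread object rather than its ident

# Block the main thread until every auxiliary thread has finished
for t in threads:
    t.join()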