I am trying to run two loops at the same time using multiprocessing, but they only seem to run sequentially.
When the first loop starts Tkinter's mainloop(), the other loop doesn't start until the GUI window is closed; only then does the counting loop begin.
I have tried both multithreading and multiprocessing with the same result. I need them to run concurrently. Below is a simple example that demonstrates the problem. I'm using Python 2.7.10.
from multiprocessing import Process
from Tkinter import *
import time

count = 0

def counting():
    while True:
        global count
        count = count + 1
        print count
        time.sleep(1)

class App():
    def __init__(self):
        self.myGUI = Tk()
        self.myGUI.geometry('800x600')
        self.labelVar = StringVar()
        self.labelVar.set("test")
        self.label1 = Label(self.myGUI, textvariable=self.labelVar)
        self.label1.grid(row=0, column=0)

app = App()

t1 = Process(target = app.myGUI.mainloop())
t2 = Process(target = counting())

t1.start()
t2.start()
You are calling the functions and waiting for them to finish, then passing their return values as the Process target. Pass the functions themselves instead; that is, change this:
t1 = Process(target = app.myGUI.mainloop())
t2 = Process(target = counting())
to this:
t1 = Process(target=app.myGUI.mainloop)
t2 = Process(target=counting)
That way each Process can call its function in a subprocess.
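As a side note, Tk objects generally don't transfer cleanly into a child process, so one common arrangement (a sketch, not the only option) keeps the GUI in the main process and runs only the counter in a Process:

from multiprocessing import Process
from Tkinter import *
import time

def counting():
    # Runs in a child process; this count is local to that process.
    count = 0
    while True:
        count = count + 1
        print count
        time.sleep(1)

class App():
    def __init__(self):
        self.myGUI = Tk()
        self.myGUI.geometry('800x600')
        self.labelVar = StringVar()
        self.labelVar.set("test")
        self.label1 = Label(self.myGUI, textvariable=self.labelVar)
        self.label1.grid(row=0, column=0)

if __name__ == '__main__':
    t2 = Process(target=counting)   # pass the function, don't call it
    t2.start()
    app = App()
    app.myGUI.mainloop()            # the GUI stays in the main process
    t2.terminate()                  # stop the counter once the window is closed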
How can I share values from one process with another?
Apparently I can do that with multithreading but not with multiprocessing, and multithreading is too slow for my program. I cannot show my exact code, so I made this simple example.
from multiprocessing import Process
from threading import Thread
import time

class exp:
    def __init__(self):
        self.var1 = 0

    def func1(self):
        self.var1 = 5
        print(self.var1)

    def func2(self):
        print(self.var1)

if __name__ == "__main__":
    # multithreading
    obj1 = exp()
    t1 = Thread(target = obj1.func1)
    t2 = Thread(target = obj1.func2)
    print("multithreading")
    t1.start()
    time.sleep(1)
    t2.start()
    time.sleep(3)

    # multiprocessing
    obj = exp()
    p1 = Process(target = obj.func1)
    p2 = Process(target = obj.func2)
    print("multiprocessing")
    p1.start()
    time.sleep(2)
    p2.start()
Expected output:
multithreading
5
5
multiprocessing
5
5
Actual output:
multithreading
5
5
multiprocessing
5
0
I know there have been a couple of close votes against this question, but the supposed duplicate question's answer does not really explain why the OP's program does not work as is, and the offered solution is not what I would propose. Hence:
Let's analyze what is happening. The creation of obj = exp() is done by the main process. The execution of exp.func1 occurs in a different process/address space, and therefore the obj object must be serialized/deserialized into the address space of that process. In that new address space self.var1 comes across with the initial value of 0 and is then set to 5, but only the copy of the obj object in the address space of process p1 is being modified; the copy of that object that exists in the main process is unmodified. Then when you start process p2, another copy of obj is sent from the main process to the new process, but still with self.var1 having a value of 0.
The solution is for self.var1 to be an instance of multiprocessing.Value, which is a special variable that lives in shared memory accessible to all processes. See the docs.
from multiprocessing import Process, Value

class exp:
    def __init__(self):
        self.var1 = Value('i', 0, lock=False)

    def func1(self):
        self.var1.value = 5
        print(self.var1.value)

    def func2(self):
        print(self.var1.value)

if __name__ == "__main__":
    # multiprocessing
    obj = exp()
    p1 = Process(target = obj.func1)
    p2 = Process(target = obj.func2)
    print("multiprocessing")
    p1.start()
    # No need to sleep, just wait for p1 to complete
    # before starting p2:
    #time.sleep(2)
    p1.join()
    p2.start()
    p2.join()
Prints:
multiprocessing
5
5
Note
Using shared memory for this particular problem is much more efficient than using a managed class, which is the approach referenced by the "close" comment.
The assignment of 5 to self.var1.value is an atomic operation and does not need to be serialized. But if we were performing a non-atomic operation (one requiring multiple steps), such as self.var1.value += 1, and multiple processes were performing that operation in parallel, then we should create the value with a lock, self.var1 = Value('i', 0, lock=True), and update it under control of that lock: with self.var1.get_lock(): self.var1.value += 1. A sketch of this pattern follows.
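For example, here is a small sketch with a hypothetical worker function increment and four worker processes (the names and counts are illustrative, not part of the question):

from multiprocessing import Process, Value

def increment(counter, n):
    # += on a Value is not atomic, so guard every update with the Value's lock.
    for _ in range(n):
        with counter.get_lock():
            counter.value += 1

if __name__ == "__main__":
    counter = Value('i', 0, lock=True)   # lock=True is actually the default
    workers = [Process(target=increment, args=(counter, 1000)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)   # 4000 every time; no updates are lost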
There are several ways to do that: you can use shared memory, a FIFO, or message passing.
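For instance, a minimal message-passing sketch using a multiprocessing.Queue (the structure mirrors the example above; the names are illustrative):

from multiprocessing import Process, Queue

def func1(q):
    q.put(5)           # send the value to whoever is listening

def func2(q):
    print(q.get())     # blocks until func1's value arrives

if __name__ == "__main__":
    q = Queue()
    p1 = Process(target=func1, args=(q,))
    p2 = Process(target=func2, args=(q,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()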
Good evening all,
I have created a CPU-heavy function which I need to run 50 times with input from a list, and I am using a multiprocessing pool to compute the 50 calls. It works, but it creates a new window for each new process, and only after I close these windows do I get the result. Is there a way to not open a window each time? I have tried multiple multiprocessing methods, but they all do the same.
import sqlite3
import random
import time
import concurrent.futures
from multiprocessing import Pool
from tkinter import *
import time

window = Tk()
window.title("APP")

def bigF(list):
    n = list[0]    # Extract multiple vars from the list
    n2 = list[1]
    new = f(n)     # The CPU-heavy computation.
    if n2 == new:
        return True
    else:
        return False

def Screen1():
    window.geometry("350x250")
    # Labels and entry windows.
    btn = Button(window, text='S2', command=Screen2)
    btn.pack(pady=10)

def Screen2():
    window.geometry("350x220")
    # Labels and entry windows.

    def getP():
        user = user.get()
        cursor.execute(database)
        try:
            return cursor.fetchall()[0][0]
        except IndexError:
            raise ValueError('')

    def getS():
        user = user.get()
        cursor.execute(database)
        return cursor.fetchall()[0][0]

    def getS2():
        user = user.get()
        cursor.execute(database)
        return cursor.fetchall()[0][0]

    def function():
        try:
            var = getP()
        except ValueError:
            return False
        s = getS()
        s2 = getS2()
        t = t.get()

        if __name__ == '__main__':
            list1 = []
            listt = []
            list3 = []
            list1[:0] = somestring
            random.shuffle(list1)
            for i in range(len(list1)):
                listt.append(list1[i])
                listt.append(var)
                listt.append(s)
                listt.append(s2)
                listt.append(t)
                list3.append(listt)
                listt = []

            with Pool(50) as p:
                values = p.map(bigF, list3)

            if True in values:
                someF()
            else:
                # Labels
                pass

    btn = Button(window, text='function', command=function)
    btn.pack(pady=10)
    sgn = Button(window, text='S1', command=Screen1)
    sgn.pack(padx=10, side=RIGHT)

def someF():
    window.geometry("350x350")
    # Labels and entry windows.

Screen1()
window.mainloop()
I don't know if the problem is in bigF(list) or in the way I use multiprocessing. I am aiming to get the total processing time under 2 seconds, where one bigF(list) call takes around 0.5 seconds.
Thank you for any suggestions and comments.
You need to read the cautions in the multiprocessing docs. When your secondary processes start, each one starts a new copy of the interpreter and re-executes your main code. That includes creating a new main window. Anything that should run only in the main process needs to be inside if __name__ == '__main__':. So you need:
if __name__ == '__main__':
    window = Tk()
    window.title("APP")
    Screen1()
    window.mainloop()
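The general shape, then, is to keep the worker function at module level (so the child processes can import it) and put everything that creates windows or launches the pool under the guard. A rough sketch of that layout, where bigF is just a placeholder computation rather than the real function:

from multiprocessing import Pool
from tkinter import Tk

def bigF(item):
    # CPU-heavy work; safe to run in a worker process.
    return item * item          # placeholder computation

def run_pool(data):
    with Pool() as p:
        return p.map(bigF, data)

if __name__ == '__main__':
    # Only the main process reaches this point, so only one window is created.
    window = Tk()
    window.title("APP")
    print(run_pool(range(10)))
    window.mainloop()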
I am trying to make a Python program that runs multiple processes, each in an infinite loop, at the same time, but only one process executes: the first one in the code, and then the rest of the program never runs. What do I need to do to make both processes and the main one execute at the same time?
from multiprocessing import *
import time

def test1(q):
    while True:
        q.put("Banana")
        time.sleep(2)

def test2(q):
    while True:
        q.put("internet")
        time.sleep(3)

if __name__ == "__main__":
    q = Queue()
    t1 = Process(target=test1(q))
    t2 = Process(target=test2(q))
    t1.start()
    t2.start()
    q.put("rice")
    while True:
        print(q.get())
The reason for your problem is with the lines:
t1 = Process(target=test1(q))
t2 = Process(target=test2(q))
There you actually call test1 and test2, respectively (even though you will never reach the test2 call). After running the functions, it then uses the return value as the target. What you want is:
t1 = Process(target=test1, args=(q,))
t2 = Process(target=test2, args=(q,))
Thus, you do not want to actually run the test1 and test2 functions, but use their references (addresses) as the target, and then provide their input arguments in the separate args parameter.
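Putting it together, a corrected version of the whole example might look like this (explicit imports instead of import *; the behavior is otherwise unchanged):

from multiprocessing import Process, Queue
import time

def test1(q):
    while True:
        q.put("Banana")
        time.sleep(2)

def test2(q):
    while True:
        q.put("internet")
        time.sleep(3)

if __name__ == "__main__":
    q = Queue()
    t1 = Process(target=test1, args=(q,))
    t2 = Process(target=test2, args=(q,))
    t1.start()
    t2.start()
    q.put("rice")
    while True:
        print(q.get())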
Currently I have three Processes, A, B, and C, created under the main process. However, I would like to start B and C from inside Process A. Is that possible?
process.py
from multiprocessing import Process
procs = {}
import time

def test():
    print(procs)
    procs['B'].start()
    procs['C'].start()
    time.sleep(8)
    procs['B'].terminate()
    procs['C'].terminate()
    procs['B'].join()
    procs['C'].join()

def B():
    while True:
        print('+'*10)
        time.sleep(1)

def C():
    while True:
        print('-'*10)
        time.sleep(1)

procs['A'] = Process(target = test)
procs['B'] = Process(target = B)
procs['C'] = Process(target = C)
main.py
from process import *
print(procs)
procs['A'].start()
procs['A'].join()
And I get this error:
AssertionError: can only start a process object created by current process
Is there any alternative way to start processes B and C from inside A? Or can A send a signal asking the master process to start B and C?
I would recommend using Event objects to do the synchronization. They let you trigger actions across processes. For instance:
from multiprocessing import Process, Event
import time

procs = {}

def test():
    print(procs)
    # Will let the main process know that it needs
    # to start the subprocesses
    procs['B'][1].set()
    procs['C'][1].set()
    time.sleep(3)
    # This will trigger the shutdown of the subprocesses.
    # This is cleaner than using terminate, as it allows
    # you to clean up the processes if needed.
    procs['B'][1].set()
    procs['C'][1].set()

def B():
    # The event will be set once again when this process
    # needs to finish
    event = procs["B"][1]
    event.clear()
    while not event.is_set():
        print('+' * 10)
        time.sleep(1)

def C():
    # The event will be set once again when this process
    # needs to finish
    event = procs["C"][1]
    event.clear()
    while not event.is_set():
        print('-' * 10)
        time.sleep(1)

if __name__ == '__main__':
    procs['A'] = (Process(target=test), None)
    procs['B'] = (Process(target=B), Event())
    procs['C'] = (Process(target=C), Event())

    procs['A'][0].start()
    # Wait for the events to be set before starting each subprocess
    procs['B'][1].wait()
    procs['B'][0].start()
    procs['C'][1].wait()
    procs['C'][0].start()

    # Join all the subprocesses in the process that created them.
    procs['A'][0].join()
    procs['B'][0].join()
    procs['C'][0].join()
Note that this code is not really clean: only one event is needed in this case. But you should get the main idea.
Also, process A is not needed any more; you could consider using callbacks instead. See for instance the concurrent.futures module if you want to chain some async actions, as in the sketch below.
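For instance, here is a rough sketch of that idea: A runs first, and a completion callback submits B and C. The helper names and the finite three-iteration loops are purely illustrative, so the example terminates.

from concurrent.futures import ProcessPoolExecutor
import threading
import time

def A():
    print("A is doing its work")
    time.sleep(1)

def B():
    for _ in range(3):
        print('+' * 10)
        time.sleep(1)

def C():
    for _ in range(3):
        print('-' * 10)
        time.sleep(1)

if __name__ == '__main__':
    executor = ProcessPoolExecutor()
    chained = threading.Event()

    def launch_b_and_c(finished_future):
        # Runs in the main process once A's future completes,
        # so B and C are only submitted after A has finished.
        executor.submit(B)
        executor.submit(C)
        chained.set()

    executor.submit(A).add_done_callback(launch_b_and_c)
    chained.wait()                  # make sure B and C have been submitted
    executor.shutdown(wait=True)    # then wait for all the work to finish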
I have constructed a simple example script that defines three separate processes using multiprocessing in Python. My objective is to have one parent thread that spawns two smaller threads that will collect and process data.
Currently, my implementation looks like this:
from Queue import Queue, Empty
from multiprocessing import Process
import time
import hashlib

class FillQueue(Process):
    def __init__(self, q):
        Process.__init__(self)
        self.q = q

    def run(self):
        i = 0
        while i is not 5:
            print 'putting'
            self.q.put('foo')
            i += 1
        self.q.put('|STOP|')

class ConsumeQueue(Process):
    def __init__(self, q):
        Process.__init__(self)
        self.q = q

    def run(self):
        print 'Consume'
        while True:
            try:
                value = self.q.get(False)
                print value
                if value == '|STOP|':
                    print 'done'
                    break
            except Empty:
                print 'Nothing to process atm'

class Ripper(Process):
    q = Queue()

    def __init__(self):
        self.fq = FillQueue(self.q)
        self.cq = ConsumeQueue(self.q)
        self.fq.daemon = True
        self.cq.daemon = True

    def run(self):
        try:
            self.fq.start()
            self.cq.start()
        except KeyboardInterrupt:
            print 'exit'

if __name__ == '__main__':
    r = Ripper()
    r.start()
As it runs presently, the output from the script on CLI looks like this:
putting
putting
putting
putting
putting
Consume
foo
foo
foo
foo
foo
|STOP|
done
Obviously, the way I am starting my two threads is blocking, since the consumer doesn't even begin to process the items in the queue until the filler finishes adding items.
How should I rewrite this so that both begin immediately without blocking, so the consumer simply falls through to the Empty except block while there is no work to process, but exits completely when it receives the stop message?
EDIT: typo, had the start and run methods mixed up
You seem to be starting multiple processes using multiprocessing.Process.
However, you are using Queue.Queue, which is only thread-safe and not designed to be shared between multiple processes.
shevek's answer is valid as well, but as a start, you should replace Queue.Queue with multiprocessing.Queue.
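Concretely, the change is just where the Queue comes from; note that a non-blocking get() on a multiprocessing.Queue still raises the Queue.Empty exception, so that import stays:

from multiprocessing import Process, Queue   # process-safe queue
from Queue import Empty                       # get(False) still raises Queue.Empty

q = Queue()   # this queue can now be shared with child processes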
try this:
from Queue import Empty
from multiprocessing import Process, Queue
import time
import hashlib

class FillQueue(object):
    def __init__(self, q):
        self.q = q

    def run(self):
        i = 0
        while i < 5:
            print 'putting'
            self.q.put('foo %d' % i)
            i += 1
            time.sleep(.5)
        self.q.put('|STOP|')

class ConsumeQueue(object):
    def __init__(self, q):
        self.q = q

    def run(self):
        while True:
            try:
                value = self.q.get(False)
                print value
                if value == '|STOP|':
                    print 'done'
                    break
            except Empty:
                print 'Nothing to process atm'
                time.sleep(.2)

if __name__ == '__main__':
    q = Queue()
    f = FillQueue(q)
    c = ConsumeQueue(q)

    p1 = Process(target=f.run)
    p1.start()
    p2 = Process(target=c.run)
    p2.start()

    p1.join()
    p2.join()
I think your program works fine. The CPU processes only one thing at a time, for a short time slice, but the time required to put all your items in the queue is very short, so there is no reason the filler cannot finish within a single slice.
If you add some delays in the filler, I think you will see that it actually works as you expect; see the small change sketched below.
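For instance, keeping the rest of the question's code as it is and only making the filler sleep between puts (the half-second value is arbitrary):

import time
from multiprocessing import Process

class FillQueue(Process):
    def __init__(self, q):
        Process.__init__(self)
        self.q = q

    def run(self):
        i = 0
        while i < 5:
            print 'putting'
            self.q.put('foo')
            i += 1
            time.sleep(0.5)   # give the consumer a chance to run between puts
        self.q.put('|STOP|')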