Python: Plugging wx.py.shell.Shell into a separate process

I would like to create a shell which will control a separate process that I created with the multiprocessing module. Possible? How?
EDIT:
I have already worked out a way to send commands to the secondary process: I created a code.InteractiveConsole in that process and attached it to an input queue and an output queue, so I can drive the console from my main process. But I want it in a shell, probably a wx.py.shell.Shell, so a user of the program could use it.
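For reference, here is a minimal sketch of the setup described above (the names console_worker, in_q and out_q are mine, not from the original code): a code.InteractiveConsole running in the child process, driven by an input queue, with the result of each push() reported back on an output queue.
import code
from multiprocessing import Process, Queue

def console_worker(in_q, out_q):
    console = code.InteractiveConsole()
    while True:
        line = in_q.get()               # next command line from the main process
        if line is None:                # sentinel: shut the console down
            break
        more = console.push(line)       # True if the console needs more input
        out_q.put(more)

if __name__ == '__main__':
    in_q, out_q = Queue(), Queue()
    p = Process(target=console_worker, args=(in_q, out_q))
    p.start()
    in_q.put("x = 6 * 7")
    in_q.put("print(x)")                # the console prints 42 on the child's stdout
    print(out_q.get())                  # False: "x = 6 * 7" was a complete statement
    in_q.put(None)
    p.join()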

1. First, create the shell.
2. Decouple the shell from your app by making its locals empty.
3. Create your code string.
4. Compile the code string and get a code object.
5. Execute the code object in the shell.
import wx
from wx.py.shell import Shell

app = wx.App(False)   # a wx.App must exist before any frames are created
frm = wx.Frame(None)
sh = Shell(frm)
frm.Show()
sh.interp.locals = {}
codeStr = """
from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

q = Queue()
p = Process(target=f, args=(q,))
p.start()
print q.get() # prints "[42, None, 'hello']"
p.join()
"""
code = compile(codeStr, '', 'exec')
sh.interp.runcode(code)
app.MainLoop()        # keep the frame (and the shell) alive for the user
Note: the codeStr I stole from the first poster may not work here due to some pickling issues, but the point is that you can execute your own codeStr remotely in a shell.
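If the pickling problem does bite (on Windows, a function defined inside the exec'd string cannot be pickled for the child process), one possible workaround, sketched here as an untested assumption with a hypothetical workers.py module, is to keep the worker function importable and only drive it from the shell:
# workers.py (hypothetical module on the import path)
def f(q):
    q.put([42, None, 'hello'])

codeStr = """
from multiprocessing import Process, Queue
from workers import f

q = Queue()
p = Process(target=f, args=(q,))
p.start()
print q.get()
p.join()
"""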

You can create a Queue which you pass to the separate process. From the Python Docs:
from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print q.get()    # prints "[42, None, 'hello']"
    p.join()
EXAMPLE: In the wx.py.shell.Shell docs the constructor parameters are given as
__init__(self, parent, id, pos, size, style, introText, locals,
InterpClass, startupScript, execStartupScript, *args, **kwds)
I have not tried it, but locals might be a dictionary of local variables, which you can pass to the shell. So, I would try the following:
def f(cmd_queue):
    shell = wx.py.shell.Shell(parent, id, pos, size, style, introText, locals(),
                              ...)

q = Queue()
p = Process(target=f, args=(q,))
p.start()
Inside the shell, you should then be able to put your commands into cmd_queue; they then have to be read in the parent process and executed there.
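A rough sketch of that parent-side loop (cmd_queue and the None sentinel are my assumptions, not from the answer above) might look like this:
def parent_loop(cmd_queue):
    # cmd_queue is the same multiprocessing.Queue that was passed to the child
    while True:
        cmd = cmd_queue.get()    # blocks until the shell process sends a command
        if cmd is None:          # sentinel sent when the shell is closed
            break
        exec(cmd)                # run the command in the parent's namespace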


Multiprocessing.Process not starting without any error

Below is my demo:
Thread xxx:
import xxx
import xxxx

def test_process1(hostname, proxy_param):
    # just never runs
    try:                                   # breakpoint 0
        with open("/xxx", "w+") as f:      # breakpoint 1
            f.write("something")
    except Exception as e:
        pass                               # just never runs, breakpoint 3

def test():
    try:
        a = Process(target=test_process1, args=(hostname, proxy_param))
        a.start()
        a.join()   # you are blocking here; test_process1 not working and never quits
    except Exception as e:
        pass                               # breakpoint 4
The function test_process1 just never runs. No error, no breakpoint hit.
The test function code is part of a big project; this is just a demo.
Hope this piece of code helps.
The workers list will get divided based on the number of processes in use. Sample code with a Manager list:
from multiprocessing import Process, Pipe, Manager

def child_process(child_conn, output_list, messenger):
    input_recvd = messenger["input"]
    output_list.append(input_recvd)
    print(input_recvd)
    child_conn.close()

def parent_process(number_of_process=2):
    workers_inputs = [{"input": "hello"}, {"input": "world"}]
    with Manager() as manager:
        processes = []
        output_list = manager.list()  # <-- can be shared between processes
        parent_conn, child_conn = Pipe()
        for single_id_dict in workers_inputs:
            pro_obj = Process(target=child_process, args=(child_conn, output_list, single_id_dict))
            pro_obj.start()
            processes.append(pro_obj)
        for p in processes:
            p.join()
        output_list = [single_feature for single_feature in output_list]
        return output_list

print(parent_process())
OUTPUT:
hello
world
['hello', 'world']
A Manager list is useful for getting output from various parallel processes; it's like a built-in queue mechanism that is easy to use and safe from deadlocks.
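For comparison, here is a sketch of the same result collection done with a plain multiprocessing.Queue instead of a managed list (the names are illustrative, not from the answer above):
from multiprocessing import Process, Queue

def child_process(q, messenger):
    q.put(messenger["input"])                # send the result back to the parent

if __name__ == '__main__':
    q = Queue()
    workers_inputs = [{"input": "hello"}, {"input": "world"}]
    processes = [Process(target=child_process, args=(q, m)) for m in workers_inputs]
    for p in processes:
        p.start()
    results = [q.get() for _ in processes]   # drain the queue before joining
    for p in processes:
        p.join()
    print(results)                           # e.g. ['hello', 'world'] (order may vary)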

Python multiprocessing - problem with empty queue and pool freezing

I have problems with Python multiprocessing.
Python version 3.6.6, using the Spyder IDE on Windows 7.
1.
The queue is not being populated -> every time I try to read it, it's empty. Somewhere I read that I have to get() it before process join(), but that did not solve it.
from multiprocessing import Process, Queue

# define an example function
def fnc(i, output):
    output.put(i)

if __name__ == '__main__':
    # Define an output queue
    output = Queue()
    # Set up a list of processes that we want to run
    processes = [Process(target=fnc, args=(i, output)) for i in range(4)]
    print('created')
    # Run processes
    for p in processes:
        p.start()
    print('started')
    # Exit the completed processes
    for p in processes:
        p.join()
    print(output.empty())
    print('finished')
>>>created
>>>started
>>>True
>>>finished
I would expect output to not be empty.
If I change the .join() loop to
for p in processes:
    print(output.get())
    #p.join()
it freezes.
2.
The next problem I have is with pool.map() - it freezes, and it has no chance of exceeding the memory limit. I don't even know how to debug such a simple piece of code.
from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    pool = Pool(processes=4)
    print('Pool created')
    # print "[0, 1, 4,..., 81]"
    print(pool.map(f, range(10)))   # it freezes here
Hope it's not a big deal to have two questions in one topic.
Apparently the problem is Spyder's IPython console. When I run both from cmd, they execute properly.
Solution
For debugging in Spyder, add .dummy to the multiprocessing import:
from multiprocessing.dummy import Process, Queue
It will not be executed by multiple processes (multiprocessing.dummy uses threads), but you will get results and can actually see the output. When debugging is done, simply delete .dummy, put the code in another file, import it, and call it, for example as a function:
multiprocessing_my.py
from multiprocessing import Process, Queue

# define an example function
def fnc(i, output):
    output.put(i)
    print(i)

def test():
    # Define an output queue
    output = Queue()
    # Set up a list of processes that we want to run
    processes = [Process(target=fnc, args=(i, output)) for i in range(4)]
    print('created')
    # Run processes
    for p in processes:
        p.start()
    print('started')
    # Exit the completed processes
    for p in processes:
        p.join()
    print(output.empty())
    print('finished')
    # Get process results from the output queue
    results = [output.get() for p in processes]
    print('get results')
    print(results)
test_mp.py
executed by selecting the code and pressing Ctrl+Enter
import multiprocessing_my
multiprocessing_my.test()
...
In[9]: test()
created
0
1
2
3
started
False
finished
get results
[0, 1, 2, 3]
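One caveat that is not part of the original answer: with the real multiprocessing.Process, a child that puts large items on a queue can block until they are consumed, so the multiprocessing docs recommend draining the queue before join(). Reordering the tail of test() above accordingly would look like this:
    # get results first, then join, to avoid a potential deadlock
    results = [output.get() for p in processes]
    for p in processes:
        p.join()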

multiprocess messaging queue between functions or process python

I'm trying to understand how processes can message each other; my example is below.
I use the second function to do my main job, and the queue feeds the first function from time to time so it can do its own work, regardless of when it finishes. I have looked at many examples and tried different ways, but with no success. Can anyone explain how I can do this with my example?
from multiprocessing import Process, Queue, Manager
import time

def first(a, b):
    q.get()
    print a + b
    time.sleep(3)

def second():
    for i in xrange(10):
        print "seconf func"
        k += 1
        q.put=(i,k)

if __name__ == "__main__":
    processes = []
    q = Queue()
    manager = Manager()
    p = Process(target=first, args=(a, b))
    p.start()
    processes.append(p)
    p2 = Process(target=second)
    p2.start()
    processes.append(p2)
    try:
        for process in processes:
            process.join()
    except KeyboardInterrupt:
        print "Interupt"

How to give Class to child process in python?

from multiprocessing import Process, Pipe

def f(a1):
    print a1.name
    a1.conn2.send('why!?!?!?!?!!?!??')
    a1.conn2.close()

class test1:
    name = 'this is name in class'
    conn1, conn2 = Pipe()

if __name__ == '__main__':
    a1 = test1
    p = Process(target=f, args=(a1,))
    p.start()
    print a1.conn1.recv()
    p.join()
I have to give multiple arguments to the child process and have pipe communication between parent and child, so I tried giving the child process a class that includes the pipe. But this code's child process can't send anything, so the parent process hangs at recv(). How can I solve this? Please help.
PS: This is Python 2.7.
Assuming you're using Windows:
You are initializing the Pipes statically inside the class test1, so when the new process is created (when you call start()), the test1 class is recreated and with it the Pipe. That means your function running in the new process is using a different Pipe altogether.
This can be solved by creating the Pipes in an instance of test1 (or passing the connection directly):
from multiprocessing import Process, Pipe

def f(a1):
    print a1.name
    a1.conn2.send('why!?!?!?!?!!?!??')
    a1.conn2.close()

class Test1(object):
    def __init__(self):
        self.name = 'this is name in class'
        self.conn1, self.conn2 = Pipe()

if __name__ == '__main__':
    a1 = Test1()
    p = Process(target=f, args=(a1,))
    p.start()
    print a1.conn1.recv()
    p.join()
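The other option mentioned above, passing the connection directly instead of the whole object, would look roughly like this (a sketch under the same Python 2.7 assumption as the question):
from multiprocessing import Process, Pipe

def f(name, conn):
    print name
    conn.send('why!?!?!?!?!!?!??')
    conn.close()

if __name__ == '__main__':
    conn1, conn2 = Pipe()
    p = Process(target=f, args=('this is name in class', conn2))
    p.start()
    print conn1.recv()
    p.join()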

Python pipe.send() hangs on Mac OS

The following program always hangs on Mac OS (Python 2.7.5) if I return a big enough string. I can't say for sure what the limit is, but it works for smaller text.
It works fine on Ubuntu, but hangs on pipe_to_parent.send(result).
Does anybody know how to fix this? Is there anything wrong with the code below?
#!/usr/bin/python
import sys
from multiprocessing import Process, Pipe

def run(text, length):
    return (text * ((length / len(text)) + 1))[:length]

def proc_func(pipe_to_parent):
    result = {'status': 1, 'log': run('Hello World', 20000), 'details': {}, 'exception': ''}
    pipe_to_parent.send(result)
    sys.exit()

def call_run():
    to_child, to_self = Pipe()
    proc = Process(target=proc_func, args=(to_self,))
    proc.start()
    proc.join()
    print(to_child.recv())
    to_child.close()
    to_self.close()

call_run()
The documentation shows an example that has some differences, as follows:
from multiprocessing import Process, Pipe

def f(conn):
    conn.send([42, None, 'hello'])
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    # This is the important part
    # Note: conn.recv() is called _before_ process.join()
    print parent_conn.recv()   # prints "[42, None, 'hello']"
    p.join()
In your example, you call .recv() after you've already called process.join().
...
proc = Process(target=proc_func, args=(to_self,))
proc.start()
proc.join()
print(to_child.recv())
...
To see exactly what is happening, we would have to look at the multiprocessing module code, but I'm guessing that the hanging is occurring because the pipe is attempting to begin a read from a closed end and blocking to wait for a response.
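Concretely, a minimal rearrangement of the asker's call_run() (everything else unchanged) avoids the hang: the child blocks in send() once the pipe buffer fills up while the parent waits in join(), so receive before joining.
def call_run():
    to_child, to_self = Pipe()
    proc = Process(target=proc_func, args=(to_self,))
    proc.start()
    print(to_child.recv())   # drain the pipe first, so the child can finish send()
    proc.join()              # now the child can exit
    to_child.close()
    to_self.close()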
