How can multiple keywords run at the same time in Robot Framework? - python

I tried to use the multithreading and multiprocessing concepts, but it is not working. I want to capture my output in a file. Can someone please assist with how to resolve this issue?
I am observing that the two keywords' running times are not the same; they are executing one after the other.
import csv
from robot.libraries.BuiltIn import BuiltIn
import threading
from multiprocessing import Process

class importABR:
    def __init__(self):
        pass

    def abr1_keyword(self):
        BuiltIn().import_resource('${EXECDIR}/Resources/HealthCheck.robot')
        BuiltIn().run_keyword('keyword1')

    def aaa_radius(self):
        BuiltIn().import_resource('${EXECDIR}/Resources/HealthCheck.robot')
        BuiltIn().run_keyword('keyword2')

    def custom_keyword(self, file):
        abr = Process(target=importABR.abr1_keyword(self)).start()
        radius = Process(target=importABR.aaa_radius(self)).start()
        with open(str(file), 'w') as out_file:
            writer = csv.writer(out_file)
            writer.writerows(abr)
            writer.writerows(radius)
Here, the custom_keyword function is the one I call from Robot Framework.

I believe you can't do it that way. You should look for a library that implements parallel keyword execution, such as the Async library.
You can also run tests (actually suites) in parallel using Pabot.
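For reference, a typical Pabot invocation splits the run at suite level; the process count, output directory, and suite path below are illustrative, not taken from the question.
pabot --processes 2 --outputdir results Tests/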

Related

Python multiprocessing pipe and cloudpickle

I have observed that this code can't run on Windows; it seems that the PipeConnection handles can't be copied 1:1, so I assume the multiprocessing library does some kind of extra work when dealing with Process args of type PipeConnection. This is a toy example of the problem:
import multiprocessing, cloudpickle

def _thunk(pipe):
    def f():
        test = pipe[1].recv()
        pipe[1].send(test)
    return f

def _worker(pickled_f):
    f = cloudpickle.loads(pickled_f)
    f()

if __name__ == '__main__':
    pipe = multiprocessing.Pipe()
    pickled_f = cloudpickle.dumps(_thunk(pipe))
    multiprocessing.Process(target=_worker, args=(pickled_f,)).start()
    pipe[0].send('test')
    test = pipe[0].recv()
    print(test)
I want to get around this, but I can't modify the multiprocessing.Process call because it is in a library outside of my code. Other synchronization mechanisms are welcome as long as I can encapsulate the logic inside the thunk, but ideally I want to be able to reconstruct a working Pipe in my new process.
Thanks.
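One possible workaround, sketched below and untested against the real library, is to avoid pickling the Pipe entirely: pass a multiprocessing.connection address into the thunk (a plain string or tuple that cloudpickle handles fine) and let the child reconnect with Client. The names follow the toy example above.
import multiprocessing
from multiprocessing.connection import Client, Listener

import cloudpickle

def _thunk(address):
    # The address is plain data, so the closure pickles cleanly, even on Windows.
    def f():
        with Client(address) as conn:
            test = conn.recv()
            conn.send(test)
    return f

def _worker(pickled_f):
    f = cloudpickle.loads(pickled_f)
    f()

if __name__ == '__main__':
    listener = Listener()  # OS-chosen address (a named pipe on Windows)
    pickled_f = cloudpickle.dumps(_thunk(listener.address))
    multiprocessing.Process(target=_worker, args=(pickled_f,)).start()
    with listener.accept() as conn:  # blocks until the child connects
        conn.send('test')
        print(conn.recv())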

Is it possible to use Asyncio with win32com in Python?

My Python program receives data via Windows COM objects and records it into an HDF5 file. In order to use the COM objects, I have used win32com.client.DispatchWithEvents. The code below shows the simplified structure of my program.
import numpy as np
import tables as tb
import win32com.client

class Handler_realTime(object):
    def __init__(self):
        pass

    def OnReceiveRealData(self, eventTime, eventData01, eventData02, eventData03):
        m.target_table.append(np.array([(eventTime, eventData01, eventData02, eventData03)]))
        m.file.flush()  # <--- being delayed due to IO processing!

class MainClass(object):
    def __init__(self):
        self.file = tb.open_file('hdf5File.h5', 'a')
        self.target_table = self.file.root.realTime
        self.realReceving = win32com.client.DispatchWithEvents("Session.RealTime", Handler_realTime)
        # calls Handler_realTime whenever new data arrives

m = MainClass()
It works fine, but I have recently noticed the program's low performance, caused by its frequent flush() calls. I thought that asyncio might improve performance by flushing the file only occasionally while processing other tasks at the same time.
I have looked at some books on asyncio in Python, but I couldn't find any examples that use classes instead of functions (async def). In other words, I don't know whether DispatchWithEvents, which requires a handler class, can be combined with asyncio in this situation.
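For what it's worth, here is a hedged, non-asyncio sketch of one common workaround, assuming the per-event flush() is the bottleneck: the COM handler only enqueues the row, and a single background thread appends and flushes in batches. The queue, worker function, and batch size are illustrative, not part of the original program.
import queue
import threading

row_queue = queue.Queue()

def writer_loop(table, h5file, flush_every=100):
    # Only this thread touches the HDF5 file, so no locking is needed.
    pending = 0
    while True:
        table.append(row_queue.get())
        pending += 1
        if pending >= flush_every:
            h5file.flush()
            pending = 0

# In OnReceiveRealData, replace the direct append/flush with:
#     row_queue.put(np.array([(eventTime, eventData01, eventData02, eventData03)]))
# and start the worker once, after MainClass has opened the file:
#     threading.Thread(target=writer_loop, args=(m.target_table, m.file), daemon=True).start()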

How to use multiprocess in python on a class object

I am fairly new to Python, and my experience is specific to its use in Powerflow modelling through the API provided in Siemens PSS/e. I have a script that I have been using for several years that runs some simulations on a large data set.
In order to get it to finish quickly, I usually split the inputs up into multiple parts and then run multiple instances of the script in IDLE. I've recently added a GUI for the inputs and refined the code to be more object oriented, creating a class that the GUI passes the inputs to, which then works as the original script did.
My question is: how do I go about splitting the process from within the program itself rather than making multiple copies? I have read a bit about the multiprocessing module, but I am not sure how to apply it to my situation. Essentially I want to be able to instantiate N instances of the same object, each running in parallel.
The class I have now (called Bot) is passed a set of arguments and creates a csv output while it runs until it finishes. I have a separate block of code that puts the pieces together at the end, but for now I just need to understand the best approach to kicking off multiple Bot objects once I hit run in my GUI. There are inputs in the GUI to specify the number of segments, N, to be used.
I apologize ahead of time if my question is rather vague. Thanks for any information at all, as I'm sort of stuck and don't know where to look for better answers.
Create a list of configurations:
configurations = [...]
Create a function which takes the relevant configuration, and makes use of your Bot:
def function(configuration):
    bot = Bot(configuration)
    bot.create_csv()
Create a Pool of workers with however many CPUs you want to use:
from multiprocessing import Pool
pool = Pool(3)
Call the function multiple times, once with each configuration in your list of configurations:
pool.map(function, configurations)
For example:
from multiprocessing import Pool
import os
class Bot:
    def __init__(self, inputs):
        self.inputs = inputs

    def create_csv(self):
        pid = os.getpid()
        print('TODO: create csv in process {} using {}'
              .format(pid, self.inputs))

def use_bot(inputs):
    bot = Bot(inputs)
    bot.create_csv()

def main():
    configurations = [
        ['input1_1.txt', 'input1_2.txt'],
        ['input2_1.txt', 'input2_2.txt'],
        ['input3_1.txt', 'input3_2.txt']]
    pool = Pool(2)
    pool.map(use_bot, configurations)

if __name__ == '__main__':
    main()
Output:
TODO: create csv in process 10964 using ['input2_1.txt', 'input2_2.txt']
TODO: create csv in process 8616 using ['input1_1.txt', 'input1_2.txt']
TODO: create csv in process 8616 using ['input3_1.txt', 'input3_2.txt']
If you'd like to make life a little less complicated, you can use multiprocess instead of multiprocessing, as it has better support for classes and for working in the interpreter. You can see below that we can now work directly with a method on a class instance, which is not possible with multiprocessing.
>>> from multiprocess import Pool
>>> import os
>>>
>>> class Bot(object):
...     def __init__(self, x):
...         self.x = x
...     def doit(self, y):
...         pid = os.getpid()
...         return (pid, self.x + y)
...
>>> p = Pool()
>>> b = Bot(5)
>>> results = p.imap(b.doit, range(4))
>>> print(dict(results))
{46552: 7, 46553: 8, 46550: 5, 46551: 6}
>>> p.close()
>>> p.join()
Above, I'm using imap to get an iterator on the results, which I'll just dump into a dict. Note that you should close your pools after you are done, to clean up. On Windows, you may also want to look at freeze_support, for cases where the code otherwise fails to run.
>>> import multiprocess as mp
>>> mp.freeze_support
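For instance, a minimal sketch of that Windows-safe entry point, assuming multiprocess mirrors the standard multiprocessing API here (the work function is just a placeholder):
import multiprocess as mp

def work(x):
    return x * x

if __name__ == '__main__':
    mp.freeze_support()  # only has an effect in frozen Windows executables
    pool = mp.Pool(2)
    print(pool.map(work, range(4)))
    pool.close()
    pool.join()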

Are the threading and multiprocessing modules mutually exclusive in Python 3?

https://docs.python.org/3/library/threading.html
https://docs.python.org/3/library/multiprocessing.html
Python: what are the differences between the threading and multiprocessing modules?
Using Python 3.4 on Linux
I'm new to parallel programming, and I'm encountering problems when running threading.Thread() for one specific method and multiprocessing.Process() for another. Both methods work fine when the other one is commented out. Neither method has anything to do with the other (e.g. there is no attempt to share data). But when I have them both running, neither works and everything freezes. As far as I can tell, the multiprocessing side seems to lock up, and I assume the same applies to the threading.
So the first step is to ascertain whether or not this is even possible.
(I have a feeling some of you will ask the reason for this... The thread does a simple user key-capture check while the multiprocessing does some heavy lifting.)
I’m providing an example (more like pseudo code) to help illustrate how the methods are used.
file t.py
import threading

class T:
    def __init__(self):
        t = threading.Thread(target=self.ThreadMethod)
        t.daemon = True
        t.start()

    def ThreadMethod(self):
        # capture key
        pass

file m.py
import multiprocessing

class M:
    def __init__(self):
        mp = multiprocessing.Process(target=self.ProcessMethod)
        mp.start()

    def ProcessMethod(self):
        # heavy lifting
        pass

file main.py
from t import T
from m import M

class main:
    def __init__(self):
        T()

    def DoTheProcess(self):
        for i in range(5):
            M()
"no. threading and multiprocessing are not mutually exlusive. Though there are known issues (e.g., the reason for atfork existance) that constrain how they can be used together."
- J.F. Sebastian
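As a minimal sketch of the two coexisting (the watch_keys and heavy_lifting names are illustrative stand-ins, not from the question):
import multiprocessing
import threading
import time

def watch_keys():
    # stand-in for the key-capture loop
    while True:
        time.sleep(1)

def heavy_lifting(n):
    # stand-in for the CPU-bound work
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    # If forking deadlocks alongside threads, calling multiprocessing.set_start_method('spawn')
    # before creating any Process is one way around the atfork-style issues mentioned above.
    threading.Thread(target=watch_keys, daemon=True).start()
    procs = [multiprocessing.Process(target=heavy_lifting, args=(10**6,)) for _ in range(5)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()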

Running a Python Subprocess

All,
I have read several threads on how to run subprocesses in Python, and none of them seem to help me. It's probably because I don't know how to use them properly. I have several methods that I would like to run at the same time rather than in sequence, and I thought that the subprocess module would do this for me.
def services():
    services = [method1(),
                method2(),
                method3(),
                method4(),
                method5()]
    return services

def runAll():
    import subprocess
    for i in services():
        proc = subprocess.call(i, shell=True)
The problem with this approach is that method1() starts and method2() doesn't start until method1() finishes. I have tried several approaches, including using subprocess.Popen in my services method, with no luck. Can anyone lend me a hand on how to get methods 1-5 running at the same time?
Thanks,
Adam
According to the Python documentation, subprocess.call() waits for the command to complete. You should use subprocess.Popen objects directly, which will give you the flexibility you need.
In Python 3.2+, the concurrent.futures module makes this sort of thing very easy.
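As a rough sketch of the concurrent.futures approach, assuming method1 through method5 are ordinary Python callables rather than external commands (the run_all helper is illustrative):
from concurrent.futures import ThreadPoolExecutor

def run_all(methods):
    # Submit every callable first, then wait for all of them to finish.
    with ThreadPoolExecutor(max_workers=len(methods)) as executor:
        futures = [executor.submit(m) for m in methods]
        return [f.result() for f in futures]

# e.g. run_all([method1, method2, method3, method4, method5])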
Python threads are more appropriate for what you are looking for: http://docs.python.org/library/threading.html or even the multiprocessing module: http://docs.python.org/library/multiprocessing.html#module-multiprocessing.
By saying method1(), you're calling the function and waiting for it to return. (It's a function, not a method.)
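A short sketch of that idea with threads, assuming method1 through method5 are the functions from the question, passed without parentheses so the threads do the calling:
import threading

# method1..method5 are the question's functions; pass the objects, don't call them here.
threads = [threading.Thread(target=m)
           for m in (method1, method2, method3, method4, method5)]
for t in threads:
    t.start()
for t in threads:
    t.join()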
If you just want to run a bunch of heavy-duty functions in parallel and collect their results, you can use joblib:
from joblib import Parallel, delayed
functions = [fn1, fn2, fn3, fn4]
results = Parallel(n_jobs=4)(delayed(f)() for f in functions)
subprocess.call() blocks until the process completes.
multiprocessing sounds more appropriate for what you are doing.
for example:
from multiprocessing import Process

def f1():
    while True:
        print('foo')

def f2():
    while True:
        print('bar')

def f3():
    while True:
        print('baz')

if __name__ == '__main__':
    for func in (f1, f2, f3):
        Process(target=func).start()
You need to use & to execute them asynchronously. Here is an example:
subprocess.call("./foo1&", shell=True)
subprocess.call("./foo2&", shell=True)
This is just like the ordinary Unix shell.
EDIT: Though there are multiple, much better ways to do this. See the other answers for some examples.
subprocess does not make the processes asynchronous. What you are trying to achieve can be done using the threading or multiprocessing module.
I had a similar problem recently, and solved it like this:
from multiprocessing import Pool

def parallelfuncs(funcs, args, results, bad_value=None):
    p = Pool()
    waiters = []
    for f in funcs:
        waiters.append(p.apply_async(f, args, callback=results.append))
    p.close()
    for w in waiters:
        if w.get()[0] == bad_value:
            p.terminate()
    return p
The nice thing is that the functions in funcs are executed in parallel on args (kind of the reverse of map), and the results are returned. multiprocessing's Pool uses all processors and handles job execution.
w.get blocks, if that wasn't clear.
For your use case, you would call
results = []
parallelfuncs(services, args, results).join()
print(results)
