Python: how to get results from a function call in real time?

What I want to do is get results from function calls in real time.
For example, I want to get the value of i in class model in real time. However, if I use return, I only get the value of i once.
import threading

class model(object):
    """docstring for model"""
    def __init__(self):
        pass

    def func(self):
        for i in range(1000):
            print('i', i)
        return i

class WorkThread(threading.Thread):
    # trigger = pyqtSignal()
    def __init__(self):
        super(WorkThread, self).__init__()

    def run(self):
        model1 = model()
        result = model1.func()  # I want to get `i` from class model in real time; however, return only gives it once.
        print('result', result)

if __name__ == '__main__':
    worker = WorkThread()
    worker.start()
    for j in range(1000, 2000):
        print('j', j)
Does anyone have a good idea? Hoping for help.

You have several options; you could:
Use a generator function to produce the results as you iterate. This requires that the caller loop over the generator returned by the model1.func() call. Use this if you don't need access to the data from another thread.
Use a queue; push i results onto the queue as you produce them, and another thread can receive them from the queue.
Both approaches are sketched below.
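A minimal sketch of both approaches, assuming Python 3 (the generator version is consumed in the main thread; the queue version keeps the question's WorkThread and uses a None sentinel to signal completion):
import queue
import threading

class model(object):
    def func(self):
        for i in range(1000):
            yield i  # hand each value to the caller as soon as it is produced

# Option 1: generator, consumed in the same thread
for i in model().func():
    print('i', i)

# Option 2: queue, produced in a worker thread and consumed in the main thread
results = queue.Queue()

class WorkThread(threading.Thread):
    def run(self):
        for i in model().func():
            results.put(i)
        results.put(None)  # sentinel: the producer is done

worker = WorkThread()
worker.start()
while True:
    item = results.get()
    if item is None:
        break
    print('result', item)
worker.join()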

Related

Return List from ROS subscriber

I have a ROS subscriber that passes data into a callback function, which then prints out all the data passed to it. I would like to append every value that is printed to a list, and to use that list outside of ROS.
I first thought about appending the data to a list in the callback function and returning that list. But when I tried to call and print the subscriber, it did not give me the list I wanted.
ROS callbacks do not return anything, so you cannot rely on a return statement. The way this should be done is by storing the data on a class attribute. Take the following for example:
import rospy
from std_msgs.msg import String

class mock_class:
    def __init__(self):
        rospy.init_node('my_node')
        self.my_list = []
        rospy.Subscriber('/some_topic', String, self.my_cb)

    def my_cb(self, msg):
        self.my_list.append(msg.data)

    def get_data(self):
        return self.my_list
Here, every time the callback is called, it appends the current data to a list stored as a class attribute. This keeps the data persistent so it can be retrieved at a later time.
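A hypothetical usage sketch (it assumes a running ROS master and a publisher on /some_topic; the five-second wait is arbitrary):
if __name__ == '__main__':
    node = mock_class()
    rospy.sleep(5.0)        # let some messages arrive
    print(node.get_data())  # the list collected so far
    rospy.spin()            # keep the node alive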

Difference between holding a reference to a class in __init__ and calling it inside another class

I am currently working on a project in which security plays an important role, and I am not sure which of the following approaches I should apply in the project.
class First():
    def some_function(self):
        print('this is a function')

# Option A: create a new First instance on every call
class Second():
    def second_other_function(self):
        First().some_function()
        print('this is another function')

# Option B: hold a reference to a single First instance
class Second():
    def __init__(self):
        self.first = First()

    def some_other_function(self):
        self.first.some_function()
        print('this is another function')
What would be the better solution between the two Second() classes, given that I do not want the second class to be tightly associated with the first?
Thank you
The only difference between the two examples is that the first one creates a new instance of First on every call and then discards it (whenever the GC decides to), while the second keeps a reference to the same First object.
The first approach is quite wasteful (a new object is created and thrown away every single time Second.second_other_function is called), and I suspect that what you are really after is a static method:
class First():
    @staticmethod
    def some_function():
        print('this is a function')

class Second():
    def second_other_function(self):
        First.some_function()
        print('this is another function')
This then raises the question of whether you even need the First class at all; a plain module-level function would do (see the sketch below).
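A minimal sketch of that alternative (an addition, not the answerer's code):
def some_function():
    print('this is a function')

class Second():
    def second_other_function(self):
        some_function()
        print('this is another function')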

Python: can I access a class method from another class without passing it as a parameter?

Experimenting with ex43.py (make a game) from Learn Python The Hard Way, using time.sleep() to create pauses between events. In order to enable and disable the feature, or change the length of the pauses, I am just holding a variable in class Engine(), which is referenced by the calls to time.sleep() throughout the program.
So:
import time

class Engine(object):
    sleep1 = 1

    # some methods
    def say_hello(self, exit):
        self.exit = exit
        print("Hello")
        time.sleep(Engine.sleep1)
        self.exit.say_goodbye()

class Another(object):
    def say_goodbye(self):
        print("ok")
        time.sleep(Engine.sleep1)
        print("goodbye.")

me = Engine()
bye = Another()
me.say_hello(bye)
The question is, if I want time.sleep(1) to be available to multiple methods of various classes, does it need to be passed to each method that needs it as a parameter like this:
import time

class Engine(object):
    sleep1 = 1

    # some methods
    def say_hello(self, exit):
        self.exit = exit
        print("Hello.")
        time.sleep(Engine.sleep1)
        pause = time.sleep
        self.exit.say_goodbye(pause)

class Another(object):
    def say_goodbye(self, pause):
        self.pause = pause
        print("ok")
        self.pause(Engine.sleep1)
        print("goodbye.")

me = Engine()
bye = Another()
me.say_hello(bye)
And in order to have just one place in which to set the length of the pause, is there an improvement on simply calling time.sleep(Class.var) from whatever method needs it?
From what I've seen, I don't see the advantage of keeping sleep1 inside a class, except if you want to change it during the game.
Passing time.sleep(1) around looks quite tedious. I would say that calling time.sleep(Class.var) every time is OK and makes the code clearer.
I also tested your code, and passing the pause around doesn't add any advantage:
the code is harder to read
you have to call it as self.pause(Engine.sleep1), which is a strange alias for time.sleep(Engine.sleep1); the one-second duration is not remembered, so you have to pass it in again every time (EDIT BELOW)
you have more lines of code that seem obscure, making the program harder to maintain
Keep it simple. time.sleep(Engine.sleep1) is good.
One small addition: if you only wait in a few specific functions, calling time.sleep() directly is perfectly fine. But if you are pausing after every function, you could build some kind of "action queue" and call time.sleep() after each action has completed, so that if your logic changes, you only have to update a single line of code.
EDIT: To keep the function and its parameter for time.sleep() together, I would do the following:
import time

MY_SLEEP = 1

def my_pause():
    time.sleep(MY_SLEEP)

my_pause()  # waits 1 second
my_pause()  # waits 1 second again
but then again, you could simply:
time.sleep(MY_SLEEP)
each time...
EDIT 2
Here I combined the idea with the static method and folded the printing into a say() function.
import time

MY_SLEEP = 1  # inside a config file

class Engine(object):
    @staticmethod
    def say(text):
        print(text)
        time.sleep(MY_SLEEP)

    # some methods
    def say_hello(self, exit):
        Engine.say("Hello")
        exit.say_goodbye()

class Another(object):
    def say_goodbye(self):
        Engine.say("ok")
        Engine.say("goodbye")

me = Engine()
bye = Another()
me.say_hello(bye)
When I was talking about an action queue, I meant something more like a list holding which class/function/arguments to call at each round. A loop would then pop one element off the queue each turn, perform the action, wait one second, and start again; a minimal sketch follows.
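A minimal sketch of that idea (the queue entries here are stand-ins, not code from the question):
import time
from collections import deque

MY_SLEEP = 1

# each entry is (function, args); build the queue of actions up front
actions = deque([
    (print, ('Hello',)),
    (print, ('ok',)),
    (print, ('goodbye',)),
])

while actions:
    func, args = actions.popleft()
    func(*args)
    time.sleep(MY_SLEEP)  # one single place to change the pacing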
An approach that I use is to create a module with all the constants and simple helper methods that might be needed by other modules and classes, so it is not even a class. For example, a helper.py module can look like this:
var1 = 1
var2 = 2

def method1():
    ...

def method2():
    ...
I would put that helper.py into a library, say mylib, and then use it from other modules, classes, and executables like this:
from mylib import helper

print(helper.var1)
helper.method1()
helper.method2()
Needless to say that I don't need to instantiate anything with this approach.

Lazy load inside multiprocessing.Pool.imap() occurs multiple times instead of once

Let's assume I have the class
class Helper(object):
    def __init__(self):
        self.__model = None

    @property
    def lazy_prop(self):
        if not self.__model:
            self.__model = init()  # expensive initialization, defined elsewhere
        return self.__model
    ...
and I have a function
def action(data):
    # handle some actions using Helper
    ...
and I have some data that needs to be handled with the action function, like this:
from multiprocessing import Pool

data = ['bla-bla', 'foo-foo']
pool = Pool()
pool.imap(action, data)
The problem is that the lazy property gets initialized many times instead of just once. Why does that happen, and how can I fix it?
With multiprocessing, each worker runs in its own process with its own memory, so if you spawn multiple jobs, Helper.lazy_prop will be initialized once in every process; the value cached in one worker is never visible to the others.
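One common workaround (an addition, not part of the original answer) is to build the expensive object once per worker with a Pool initializer, so each process pays the cost a single time rather than once per task. A self-contained sketch using a simplified Helper:
import os
from multiprocessing import Pool

class Helper(object):
    def __init__(self):
        self.model = None  # built lazily, once per process

helper = None  # one Helper instance per worker process

def init_worker():
    # runs once in each worker process, not once per task
    global helper
    helper = Helper()
    helper.model = 'model built in pid %d' % os.getpid()

def action(data):
    # illustrative body: every task handled by this worker sees the same model
    return (data, helper.model)

if __name__ == '__main__':
    with Pool(2, initializer=init_worker) as pool:
        for result in pool.imap(action, ['bla-bla', 'foo-foo']):
            print(result)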

Python Multiprocessing - apply class method to a list of objects

Is there a simple way to use Multiprocessing to do the equivalent of this?
for sim in sim_list:
    sim.run()
where the elements of sim_list are "simulation" objects and run() is a method of the simulation class which modifies the attributes of the objects. E.g.:
import subprocess

class simulation:
    def __init__(self):
        self.state = {'done': False}
        self.cmd = "program"

    def run(self):
        subprocess.call(self.cmd)
        self.state['done'] = True
All the sims in sim_list are independent, so the strategy does not have to be thread-safe.
I tried the following, which is obviously flawed because the argument is passed by deep copy and is not modified in place.
from multiprocessing import Process

for sim in sim_list:
    b = Process(target=simulation.run, args=[sim])
    b.start()
    b.join()
One way to do what you want is to have your computing class (simulation in your case) be a subclass of Process. When initialized properly, instances of this class will run in separate processes and you can set off a group of them from a list just like you wanted.
Here's an example, building on what you wrote above:
import multiprocessing
import os
import random
import sys

class simulation(multiprocessing.Process):
    def __init__(self, name):
        # must call this before anything else
        multiprocessing.Process.__init__(self)
        # then any other initialization
        self.name = name
        self.number = 0.0
        sys.stdout.write('[%s] created: %f\n' % (self.name, self.number))

    def run(self):
        sys.stdout.write('[%s] running ... process id: %s\n'
                         % (self.name, os.getpid()))
        self.number = random.uniform(0.0, 10.0)
        sys.stdout.write('[%s] completed: %f\n' % (self.name, self.number))
Then just make a list of objects and start each one with a loop:
sim_list = []
sim_list.append(simulation('foo'))
sim_list.append(simulation('bar'))

for sim in sim_list:
    sim.start()
When you run this you should see each object run in its own process. Don't forget to call Process.__init__(self) as the very first thing in your class initialization, before anything else.
Obviously I've not included any interprocess communication in this example; you'll have to add that if your situation requires it (it wasn't clear from your question whether you needed it or not).
This approach works well for me, and I'm not aware of any drawbacks. If anyone knows of hidden dangers which I've overlooked, please let me know.
I hope this helps.
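If you do need results back in the parent, one possible sketch (an addition, not part of the original answer) is to pass a multiprocessing.Queue into each process and have run() put its result there:
import multiprocessing
import random

class simulation(multiprocessing.Process):
    def __init__(self, name, result_queue):
        multiprocessing.Process.__init__(self)
        self.name = name
        self.result_queue = result_queue

    def run(self):
        # runs in the child process; send the result back explicitly
        self.result_queue.put((self.name, random.uniform(0.0, 10.0)))

if __name__ == '__main__':
    q = multiprocessing.Queue()
    sims = [simulation('foo', q), simulation('bar', q)]
    for s in sims:
        s.start()
    for s in sims:
        print(q.get())  # one result per process, in completion order
    for s in sims:
        s.join()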
For those working with large data sets, an iterable would be your solution here:
import multiprocessing as mp

pool = mp.Pool(mp.cpu_count())
# imap returns a lazy iterator, so consume it to actually run the jobs
for _ in pool.imap(simulation.run, sim_list):
    pass
