How to call "process" function using ALAudioDevice "subscribe" method - python

I'm new at NAO programming and I'm having some trouble regarding the ALAudioDevice API.
My problem is the following: I wrote a Python module that should record raw data from the front microphone.
The documentation of the ALAudioDevice API says that the method "subscribe(...)" calls the function "process" automatically
and regularly with raw data from the microphones as input. I wrote code to do this (see below), and it runs without raising
any errors. However, subscribe never calls the "process" function and the module doesn't get any audio at all.
Has anyone had the same problem?
import qi

class AudioModule(object):
    def __init__(self):
        super(AudioModule, self).__init__()
        self.moduleName = "AudioModule"
        try:
            self.ALAudioDevice = ALProxy("ALAudioDevice")
        except Exception, e:
            self.logger.error("Error when creating proxy on ALAudioDevice:")
            self.logger.error(e)

    def begin_stream(self):
        self.ALAudioDevice.setClientPreferences(self.moduleName, 16000, 3, 0)
        self.ALAudioDevice.subscribe(self.moduleName)

    def end_stream(self):
        self.ALAudioDevice.unsubscribe(self.moduleName)

    def processRemote(self, nbOfChannels, samplesByChannel, altimestamp, buffer):
        nbOfChannels = nbOfChannels
        mylogger = qi.Logger("data")
        mylogger.info("It works !" + str(nbOfChannels))


class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self, False)
        self.audio = AudioModule()

    def onLoad(self):
        self.serviceId = self.session().registerService("AudioModule", self.audio)
        pass

    def onUnload(self):
        if self.serviceId != -1:
            self.session().unregisterService(self.serviceId)
            self.serviceId = -1
        pass

    def onInput_onStart(self):
        self.audio.begin_stream()
        self.onInput_onStop()
        pass

    def onInput_onStop(self):
        self.audio.end_stream()
        self.onUnload
        self.onStopped()
        pass

It appears you are subscribing to the audio from a Choregraphe box. I'm not sure this is supposed to work, but in this configuration the Python code is executed within the same process as the ALAudioDevice service, so you should probably name your callback "process" instead of "processRemote".
Otherwise, you can still do this from a separate Python script.
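For illustration, a minimal sketch of that rename inside the AudioModule above (the signature is the one already used in the question; untested):

class AudioModule(object):
    ...
    def process(self, nbOfChannels, samplesByChannel, altimestamp, buffer):
        # assumed rename: when the module runs in the same process as the
        # ALAudioDevice service, the local callback name "process" is used
        mylogger = qi.Logger("data")
        mylogger.info("It works !" + str(nbOfChannels))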

Related

How to integrate python scripting in my python code

I am writing Python code (based on PyQt5 signals and slots). I need to make my code "scriptable". By scriptable I mean that the user can use the internal objects themselves in a user-defined Python script to develop or automate some functions, etc. But I have no clear idea of how to implement this.
I have tried using (exec) function in python in the following way:
user-def.py
def script_entry(main_object_in_my_code):
    # Connecting signals from main_object_in_my_code (inherited from QObject)
    # to other functions in this file. Example:
    main_object_in_my_code.event_1.connect(function_1)

@QtCore.pyqtSlot(str)
def function_1(args):
    # do user-defined logic using those args
Then in my script, when the user wants to execute it, they enter (for example)
source user-def.py
The main script then reads the file and uses exec as follows:
with open(script_path) as f:
    script = f.read()
exec(script, globals())
The problem is that the events are triggered but function_1 is never executed.
I am sure this is not the right way to do this. So, how can I make my code scriptable using user-defined scripts?
I would recommend creating a class and extending from it, letting the 'user' call the functions when they need to.
If you are not familiar with class inheritance, check this tutorial.
source_def.py
from PyQt5 import QtCore

class Main:
    def __init__(self):
        super(Main, self).__init__()

    def script_entry(self, main_object_in_my_code):
        main_object_in_my_code.event_1.connect(self.function_1)

    @QtCore.pyqtSlot(str)
    def function_1(self, args):
        # this checks if the function is set on the subclass
        user_function = getattr(self, "user_function", None)
        if callable(user_function):
            user_function(args)
user_def.py
from source_def import Main

class UserClass(Main):
    def __init__(self):
        super(UserClass, self).__init__()

    def user_function(self, args):
        print(args)
Try this
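As a rough usage sketch of how the two files could fit together (main_object_in_my_code is the placeholder used above; how it reaches the user script is an assumption):

# hypothetical wiring inside the main application
from user_def import UserClass

handler = UserClass()
handler.script_entry(main_object_in_my_code)  # connects event_1 to function_1
# when event_1 fires, function_1 looks up user_function on the subclass and calls it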

modify a function of a class from another class

In the pymodbus library, in server.sync, socketserver.BaseRequestHandler is used, and it is defined as follows:
class ModbusBaseRequestHandler(socketserver.BaseRequestHandler):
    """ Implements the modbus server protocol

    This uses the socketserver.BaseRequestHandler to implement
    the client handler.
    """
    running = False
    framer = None

    def setup(self):
        """ Callback for when a client connects
        """
        _logger.debug("Client Connected [%s:%s]" % self.client_address)
        self.running = True
        self.framer = self.server.framer(self.server.decoder, client=None)
        self.server.threads.append(self)

    def finish(self):
        """ Callback for when a client disconnects
        """
        _logger.debug("Client Disconnected [%s:%s]" % self.client_address)
        self.server.threads.remove(self)

    def execute(self, request):
        """ The callback to call with the resulting message

        :param request: The decoded request message
        """
        try:
            context = self.server.context[request.unit_id]
            response = request.execute(context)
        except NoSuchSlaveException as ex:
            _logger.debug("requested slave does not exist: %s" % request.unit_id)
            if self.server.ignore_missing_slaves:
                return  # the client will simply timeout waiting for a response
            response = request.doException(merror.GatewayNoResponse)
        except Exception as ex:
            _logger.debug("Datastore unable to fulfill request: %s; %s", ex, traceback.format_exc())
            response = request.doException(merror.SlaveFailure)
        response.transaction_id = request.transaction_id
        response.unit_id = request.unit_id
        self.send(response)

    # ----------------------------------------------------------------------- #
    # Base class implementations
    # ----------------------------------------------------------------------- #

    def handle(self):
        """ Callback when we receive any data
        """
        raise NotImplementedException("Method not implemented by derived class")

    def send(self, message):
        """ Send a request (string) to the network

        :param message: The unencoded modbus response
        """
        raise NotImplementedException("Method not implemented by derived class")
setup() is called when a client connects to the server, and finish() is called when a client disconnects. I want to manipulate these methods (setup() and finish()) from another class in another file which uses the library (pymodbus), and add some code to the setup and finish functions. I do not intend to modify the library itself, since that may cause strange behavior in specific situations.
--- Edit ---
To clarify: I want the setup function in the ModbusBaseRequestHandler class to keep working as before and remain untouched, but I want to add something else to it, and this modification should be done in my code, not in the library.
The simplest, and usually best, thing to do is to not manipulate the methods of ModbusBaseRequestHandler, but instead inherit from it and override those methods in your subclass, then just use the subclass wherever you would have used the base class:
class SoupedUpModbusBaseRequestHandler(ModbusBaseRequestHandler):
    def setup(self):
        # do different stuff
        # call super().setup() if you want
        # or call socketserver.BaseRequestHandler.setup() to skip over it
        # or call neither
Notice that a class statement is just a normal statement, and can go anywhere any other statement can, even in the middle of a method. So, even if you need to dynamically create the subclass because you won't know what you want setup to do until runtime, that's not a problem.
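For example, a minimal sketch of building such a subclass at runtime (extra_setup is a hypothetical callable standing in for whatever you only know at that point):

def make_handler(extra_setup):
    # the class statement runs like any other statement, so the subclass
    # can be created inside a function or method at runtime
    class CustomHandler(ModbusBaseRequestHandler):
        def setup(self):
            ModbusBaseRequestHandler.setup(self)  # keep the library behaviour
            extra_setup(self)                     # then run your extra code
    return CustomHandler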
If you actually need to monkeypatch the class, that isn't very hard—although it is easy to screw things up if you aren't careful.
def setup(self):
    # do different stuff

ModbusBaseRequestHandler.setup = setup
If you want to be able to call the normal implementation, you have to stash it somewhere:
_setup = ModbusBaseRequestHandler.setup

def setup(self):
    # do different stuff
    # call _setup whenever you want

ModbusBaseRequestHandler.setup = setup
If you want to make sure you copy over the name, docstring, etc., you can use wraps:
@functools.wraps(ModbusBaseRequestHandler.setup)
def setup(self):
    # do different stuff

ModbusBaseRequestHandler.setup = setup
Again, you can do this anywhere in your code, even in the middle of a method.
If you need to monkeypatch one instance of ModbusBaseRequestHandler while leaving any other instances untouched, you can even do that. You just have to manually bind the method:
def setup(self):
    # do different stuff

myModbusBaseRequestHandler.setup = setup.__get__(myModbusBaseRequestHandler)
If you want to call the original method, or wrap it, or do this in the middle of some other method, etc., it's otherwise basically the same as the previous versions.
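Putting the pieces together, a per-instance patch that still calls the stashed original might look like this sketch (myModbusBaseRequestHandler is the instance name used above):

_setup = ModbusBaseRequestHandler.setup  # stash the original implementation

def setup(self):
    _setup(self)  # run the library's setup first
    # then do your different stuff for this one instance only

# bind onto this particular instance; other instances keep the original
myModbusBaseRequestHandler.setup = setup.__get__(myModbusBaseRequestHandler)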
It can be done with an interceptor (decorator):
from functools import wraps

def interceptor(func):
    print('this is executed at function definition time (def my_func)')

    @wraps(func)
    def wrapper(*args, **kwargs):
        print('this is executed before function call')
        result = func(*args, **kwargs)
        print('this is executed after function call')
        return result

    return wrapper

@interceptor
def my_func(n):
    print('this is my_func')
    print('n =', n)

my_func(4)
more explanation can be found here
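Tying this back to the question, the same kind of wrapper could be applied to the library method from your own code at import time; a sketch under the assumption that the import path matches the server.sync module named in the question, with the print standing in for whatever you want to add:

from functools import wraps
from pymodbus.server.sync import ModbusBaseRequestHandler

def with_extra_setup(func):
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        result = func(self, *args, **kwargs)  # the original setup runs unchanged
        print('extra setup code runs here')   # your additional code
        return result
    return wrapper

ModbusBaseRequestHandler.setup = with_extra_setup(ModbusBaseRequestHandler.setup)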

Apache Storm calling function from python bolt

I have a simple Storm bolt which only needs to call a function from another Python module. Everything works until I call a method that has print statements inside it.
So my bolt:
import storm
from pipeline import module as m

class ExampleBolt(storm.BasicBolt):

    def initialize(self, conf, context):
        self._conf = conf
        self._context = context
        storm.logInfo("ExampleBolt instance starting ...")

    def process(self, tuple):
        id, text = tuple.values
        result = m.dummy_funct(text)
        storm.emit([result])

ExampleBolt().run()
the method:
def dummy_funct(text):
    print "log info"
    return text
The bolt calls the method once and then hangs waiting on output. I am using Apache Storm 0.9.3.
Check whether, from the second time onward, only the process block is being executed rather than the whole program running again.

Using callbacks to run function using the current values in a class

I struggled to think of a good title, so I'll just explain it here. I'm using Python in Maya, which has some event callback options, so you can do something like "on save: run function". I have a user interface class which I'd like to update when certain events are triggered, which I can do, but I'm looking for a cleaner way of doing it.
Here is a basic example similar to what I have:
class test(object):
    def __init__(self, x=0):
        self.x = x

    def run_this(self):
        print self.x

    def display(self):
        print 'load user interface'
# Here's the main stuff that used to be just 'test().display()'
try:
    callbacks = [callback1, callback2, ...]
except NameError:
    pass
else:
    for i in callbacks:
        try:
            OpenMaya.MEventMessage.removeCallback(i)
        except RuntimeError:
            pass

ui = test(5)
callback1 = OpenMaya.MEventMessage.addEventCallback('SomeEvent', ui.run_this)
callback2 = OpenMaya.MEventMessage.addEventCallback('SomeOtherEvent', ui.run_this)
callback3 = ......
ui.display()
The callback persists until Maya is restarted, but you can remove it using removeCallback if you pass it the value that was returned from addEventCallback. The way I have it currently just checks whether the variable is set before setting it, which is a lot messier than the previous one line of test().display().
Would there be a way to do this neatly inside the function? Something that would delete the old callback if I ran the test class again, or something similar?
There are two ways you might want to try this.
You can have a persistent object which represents your callback manager, and allow it to hook and unhook itself.
import maya.api.OpenMaya as om
import maya.cmds as cmds

om.MEventMessage.getEventNames()

class CallbackHandler(object):

    def __init__(self, cb, fn):
        self.callback = cb
        self.function = fn
        self.id = None

    def install(self):
        if self.id:
            print "callback is currently installed"
            return False
        self.id = om.MEventMessage.addEventCallback(self.callback, self.function)
        return True

    def uninstall(self):
        if self.id:
            om.MEventMessage.removeCallback(self.id)
            self.id = None
            return True
        else:
            print "callback not currently installed"
            return False

    def __del__(self):
        self.uninstall()

def test_fn(arg):
    print "callback fired 2", arg

cb = CallbackHandler('NameChanged', test_fn)
cb.install()
# callback is active
cb.uninstall()
# callback not active
cb.install()
# callback on again
del(cb)  # or cb = None
# callback gone again
In this version you'd store the CallbackHandlers you create for as long as you want the callback to persist and then manually uninstall them or let them fall out of scope when you don't need them any more.
Another option would be to create your own object to represent the callbacks, and then add or remove any functions you want it to trigger in your own code. This keeps the management entirely on your side instead of relying on the API, which could be good or bad depending on your needs. You'd have an Event() class which was callable (using __call__()), and it would have a list of functions to fire when its __call__() was invoked by Maya. There's an example of the kind of event handler object you'd want here
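A minimal sketch of that kind of Event object (the class and its add/remove methods are illustrative, not an existing Maya API):

class Event(object):
    # callable object that fans one Maya callback out to many functions
    def __init__(self):
        self._handlers = []

    def add(self, fn):
        self._handlers.append(fn)

    def remove(self, fn):
        self._handlers.remove(fn)

    def __call__(self, *args):
        # Maya invokes this once; every registered function gets the call
        for fn in self._handlers:
            fn(*args)

# usage sketch: register the Event once, then swap handlers freely afterwards
on_name_changed = Event()
# callback_id = om.MEventMessage.addEventCallback('NameChanged', on_name_changed)
on_name_changed.add(test_fn)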

QProcess.readAllStandardOutput() doesn't seem to read anything - PyQt

Here is the code sample:
class RunGui(QtGui.QMainWindow):
    def __init__(self, parent=None):
        ...
        QtCore.QObject.connect(self.ui.actionNew, QtCore.SIGNAL("triggered()"), self.new_select)
        ...

    def normal_output_written(self, qprocess):
        self.ui.text_edit.append("caught outputReady signal")  # works
        self.ui.text_edit.append(str(qprocess.readAllStandardOutput()))  # doesn't work

    def new_select(self):
        ...
        dialog_np = NewProjectDialog()
        dialog_np.exec_()
        if dialog_np.is_OK:
            section = dialog_np.get_section()
            project = dialog_np.get_project()
        ...
        np = NewProject()
        np.outputReady.connect(lambda: self.normal_output_written(np.qprocess))
        np.errorReady.connect(lambda: self.error_output_written(np.qprocess))
        np.inputNeeded.connect(lambda: self.input_from_line_edit(np.qprocess))
        np.params = partial(np.create_new_project, section, project, otherargs)
        np.start()


class NewProject(QtCore.QThread):
    outputReady = QtCore.pyqtSignal(object)
    errorReady = QtCore.pyqtSignal(object)
    inputNeeded = QtCore.pyqtSignal(object)
    params = None
    message = ""

    def __init__(self):
        super(NewProject, self).__init__()
        self.qprocess = QtCore.QProcess()
        self.qprocess.moveToThread(self)
        self._inputQueue = Queue()

    def run(self):
        self.params()

    def create_new_project(self, section, project, otherargs):
        ...
        # PyDev for some reason skips the breakpoints inside the thread
        self.qprocess.start(command)
        self.qprocess.waitForReadyRead()
        self.outputReady.emit(self.qprocess)  # works - I'm getting the signal in RunGui.normal_output_written()
        print(str(self.qprocess.readAllStandardOutput()))  # prints an empty line
        ...  # other actions inside the method requiring "command" to finish properly
The idea is beaten to death - get the GUI to run scripts and communicate with the processes. The challenge in this particular example is that the script started in QProcess as command runs an app, that requires user input (confirmation) along the way. Therefore I have to be able to start the script, get all output and parse it, wait for the question to appear in the output and then communicate back the answer, allow it to finish and only then to proceed further with other actions inside create_new_project()
I don't know if this will fix your overall issue, but there are a few design issues I see here.
You are passing around the qprocess between threads instead of just emitting your custom signals with the results of the qprocess
You are using class-level attributes that should probably be instance attributes
Technically you don't even need the QProcess, since you are running it in your thread and actively using blocking calls. It could easily be a subprocess.Popen...but anyways, I might suggest changes like this:
class RunGui(QtGui.QMainWindow):
    ...
    def normal_output_written(self, msg):
        self.ui.text_edit.append(msg)

    def new_select(self):
        ...
        np = NewProject()
        np.outputReady.connect(self.normal_output_written)
        np.params = partial(np.create_new_project, section, project, otherargs)
        np.start()


class NewProject(QtCore.QThread):
    outputReady = QtCore.pyqtSignal(object)
    errorReady = QtCore.pyqtSignal(object)
    inputNeeded = QtCore.pyqtSignal(object)

    def __init__(self):
        super(NewProject, self).__init__()
        self._inputQueue = Queue()
        self.params = None

    def run(self):
        self.params()

    def create_new_project(self, section, project, otherargs):
        ...
        qprocess = QtCore.QProcess()
        qprocess.start(command)
        if not qprocess.waitForStarted():
            # handle a failed command here
            return
        if not qprocess.waitForReadyRead():
            # handle a timeout or error here
            return
        msg = str(qprocess.readAllStandardOutput())
        self.outputReady.emit(msg)
Don't pass around the QProcess. Just emit the data. And create it from within the threads method so that it is automatically owned by that thread. Your outside classes should really not have any knowledge of that QProcess object. It doesn't even need to be a member attribute since its only needed during the operation.
Also make sure you are properly checking that your command both successfully started, and is running and outputting data.
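Since the answer notes the blocking QProcess could just as easily be a subprocess.Popen, here is a rough sketch of that variant inside the worker thread (command and the signals are the same placeholders as in the code above):

import subprocess

class NewProject(QtCore.QThread):
    ...
    def create_new_project(self, section, project, otherargs):
        # run the command synchronously in the worker thread and emit its output
        proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()
        if proc.returncode != 0:
            self.errorReady.emit(err)
            return
        self.outputReady.emit(out)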
Update
To clarify some problems you might be having (per the comments), I wanted to suggest that QProcess might not be the best option if you need interactive control of processes that expect periodic user input. It should work fine for running scripts that just produce output from start to finish, though really using subprocess would be much easier. For scripts that need user input over time, your best bet may be to use pexpect. It allows you to spawn a process, and then watch for various patterns that you know will indicate the need for input:
foo.py
import time

i = raw_input("Please enter something: ")
print "Output:", i
time.sleep(.1)
print "Another line"
time.sleep(.1)
print "Done"
test.py
import pexpect
import time

child = pexpect.spawn("python foo.py")
child.setecho(False)

ret = -1
while ret < 0:
    time.sleep(.05)
    ret = child.expect("Please enter something: ")

child.sendline('FOO')

while True:
    line = child.readline()
    if not line:
        break
    print line.strip()

# Output: FOO
# Another line
# Done
