I am trying to build a UDP server to receive binary messages. The socket emits a processMsg signal when it receives a message, and the processMsg function tries to emit a different signal according to the message type. The QDefines object defines the message types and the signals to be generated. I use a dict to work around the missing switch/case in Python. The problem is that the setRfRsp function is not executed when a UCSI_SET_RF_RSP_E message is received.
Main.py file:
class mainWindow(QtGui.QMainWindow):
    def __init__(self, parent = None):
        super(mainWindow, self).__init__()
        self.ui = Ui_MainWindow()
        self.defines = QDefines()
        self.connect(self.defines, QtCore.SIGNAL("signalSetRfRsp(PyQt_PyObject)"), self.setRfRsp)
        self.socket = QUdp(self.localIp, self.localPort, self.remoteIp, self.remotePort)
        self.connect(self.socket, QtCore.SIGNAL("processMsg(int,PyQt_PyObject)"), self.processMsg)

    def setRfRsp(self, msg):
        if msg == 0x00000000:
            print "open"
        else:
            print "closed"

    def processMsg(self, msgType, msg):
        defines = QDefines()
        msg_dict = defines.msgDictGen()
        msg_dict[msgType](msg)
defines.py file:
class QDefines(QtCore.QObject):
    UCSI_SET_RF_RSP_E = 0x000d

    def __init__(self, parent = None):
        super(QDefines, self).__init__()

    def UCSI_SET_RF_RSP(self, msg):
        self.emit(QtCore.SIGNAL("signalSetRfRsp(PyQt_PyObject)"), msg)

    def msgDictGen(self):
        self.msgDict = {
            self.UCSI_SET_RF_RSP_E : self.UCSI_SET_RF_RSP
        }
        return self.msgDict
The problem is that processMsg creates its own local QDefines instance instead of using self.defines. That instance never has any of its signals connected to anything, and it just gets garbage-collected when processMsg returns.
Perhaps you meant to write:
def processMsg(self, msgType, msg):
    msg_dict = self.defines.msgDictGen()
    msg_dict[msgType](msg)
You should also consider getting rid of that nasty, old-style signal syntax, and use the nice, clean new-style instead:
class QDefines(QtCore.QObject):
    signalSetRfRsp = QtCore.pyqtSignal(object)
    ...
    def UCSI_SET_RF_RSP(self, msg):
        self.signalSetRfRsp.emit(msg)

class mainWindow(QtGui.QMainWindow):
    def __init__(self, parent = None):
        ...
        self.defines = QDefines()
        self.defines.signalSetRfRsp.connect(self.setRfRsp)
Also, I would advise you to forget about trying to replicate switch statements in Python, and just use if/elif instead. You'd need a very large number of branches before this started to become a significant performance issue.
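For example, processMsg could look something like this with if/elif (a sketch that reuses the existing self.defines instance; the commented-out second branch is only a hypothetical placeholder for other message types):

def processMsg(self, msgType, msg):
    # Dispatch on the message type with plain if/elif instead of a dict.
    if msgType == self.defines.UCSI_SET_RF_RSP_E:
        self.defines.UCSI_SET_RF_RSP(msg)
    # elif msgType == self.defines.SOME_OTHER_MSG_E:   # hypothetical other message
    #     self.defines.SOME_OTHER_MSG(msg)
    else:
        print "unhandled message type: 0x%04x" % msgType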
I have two classes, a consumer and a producer. To prevent a race condition between threads, I've tried using QWaitCondition and QMutex, but I keep running into timing issues. The producer object fills my home object's buffer and a buffer of its own. The consumer object copies this buffer and displays the data in it. While it is displaying, I would like the producer object to fill the buffer again and then wait until the displaying is complete. Immediately after, the consumer object would copy the buffer and continue displaying. Rinse and repeat. However, I cannot figure out the timing sequence with my code.
from PyQt5.QtCore import QThread, pyqtSignal
from PyQt5.QtWidgets import QWidget
from PyQt5.QtCore import QWaitCondition, QMutex

mutex = QMutex()
displayFramesDone = QWaitCondition()
copyFramesDone = QWaitCondition()
sequenceDone = QWaitCondition()

class Producer(QThread):
    finished = pyqtSignal()

    def __init__(self, home_obj):
        self.parent = self
        self.home = home_obj
        self.prod_buffer = []

    """
    functions
    """

    def run(self):
        self.runSequence()
        sequenceDone.wakeAll()
        while self.home.condition == True:
            self.runSequence()
            mutex.lock()
            displayFramesDone.wait(mutex)
            mutex.unlock()
            sequenceDone.wakeAll()
            mutex.lock()
            copyFramesDone.wait(mutex)
            mutex.unlock()
        self.finished.emit()

class Consumer(QThread):
    finished = pyqtSignal()

    def __init__(self, home_obj):
        self.parent = self
        self.home = home_obj
        self.con_buffer = []

    """
    functions
    """

    def run(self):
        while self.home.condition == True:
            mutex.lock()
            sequenceDone.wait(mutex)
            mutex.unlock()
            self.copyFrames()
            copyFramesDone.wakeAll()
            self.displayFrames()
            displayFramesDone.wakeAll()
        self.finished.emit()
class Home(QWidget):
    def __init__(self):
        super().__init__()
        self.condition = True
        self.buffer = []
I've never used QWaitCondition or QMutex before, so whether I'm using them appropriately is another matter. As it stands, this only works occasionally, when the timing happens to align with the required sequence.
Additionally, I'd like to know if anyone could answer the following:
Must a wake signal be inside of a mutex block?
Is QWaitCondition/QMutex a good tool for this type of timing problem?
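For reference, wakeAll()/wakeOne() do not strictly have to be called while the mutex is held, but the usual guarded-wait pattern holds the mutex while changing the shared state (and typically while waking) and re-checks a flag after every wait, which avoids both lost wakeups and timing-dependent behaviour like that described above. A minimal, generic sketch follows; it is not the code above, and the buffer and flag names are made up:

from PyQt5.QtCore import QMutex, QWaitCondition

mutex = QMutex()
dataReady = QWaitCondition()
frames = []               # shared buffer (illustrative)
frames_available = False  # predicate guarded by the mutex

def producer_put(frame):
    global frames_available
    mutex.lock()
    frames.append(frame)          # change shared state while holding the lock
    frames_available = True
    dataReady.wakeAll()           # waking while the mutex is held avoids lost wakeups
    mutex.unlock()

def consumer_get():
    global frames_available
    mutex.lock()
    while not frames_available:   # always re-check the predicate after waking
        dataReady.wait(mutex)     # atomically unlocks, sleeps, relocks on wake
    batch = list(frames)
    del frames[:]
    frames_available = False
    mutex.unlock()
    return batch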
import gnsq

class something():
    def __init__(self, pb=None, pk=None, address=None):
        self.pb = pb
        self.pk = pk
        self.address = address

    def connect(self):
        consumer = gnsq.Consumer(self.pb, 'ch', self.address)

        @consumer.on_message.connect
        def response_handler(consumer, msg):
            return msg.body

        consumer.start()
How would I get the return value of response_handler so that, in turn, I could pass it to the parent function connect()? Then, when I call connect(), it would return the value of msg.body from the inner function.
I would think something like the following:
import gnsq

class something():
    def __init__(self, pb=None, pk=None, address=None):
        self.pb = pb
        self.pk = pk
        self.address = address

    def connect(self):
        consumer = gnsq.Consumer(self.pb, 'ch', self.address)

        @consumer.on_message.connect
        def response_handler(consumer, msg):
            return msg.body

        consumer.start()
        return response_handler

nsq = something('pb', 'pk', 'address')
# should print whatever message.body is
print nsq.connect()
but it's not working. Note: consumer.start() is blocking.
What you're asking doesn't make sense in the context of what the Consumer() really is.
In your connect() method, you set up a consumer, set up a response handler and start the consumer with consumer.start(). From that point onward, whenever there is a message to consume, the consumer will call the handler with that message. Not just once, but again and again.
Your handler may be called many times, and unless the consumer is closed you never know when it will be done, so there's no way your connect() method could return the complete result.
What you could do is have the connect method return a reference to a collection that will, at any time, contain all the messages collected so far. It would be empty at first, but after some time it could contain all the received messages.
Something like:
import gnsq

class Collector():
    def __init__(self, topic, address):
        self.topic = topic
        self.address = address
        self.messages = []

    def connect(self):
        self.messages = []
        consumer = gnsq.Consumer(self.topic, 'ch', self.address)

        @consumer.on_message.connect
        def response_handler(consumer, msg):
            self.messages.append(msg)

        consumer.start()
        return self.messages
I don't think this is really how you want to be using this, though; it would only make sense with more context on why and how you want to use this output.
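Because consumer.start() blocks, a caller would normally have to run connect() on a background thread and then inspect the message list from the main thread. A rough sketch using only the standard library; the topic, address and timings are placeholders, not part of gnsq:

import threading
import time

collector = Collector('some_topic', 'nsqd-address:4150')  # hypothetical topic/address

# Run the blocking consumer loop off the main thread.
worker = threading.Thread(target=collector.connect)
worker.daemon = True
worker.start()

# Periodically inspect whatever has been collected so far.
for _ in range(10):
    time.sleep(1)
    print 'collected so far:', len(collector.messages)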
I have a callback chain with an errback at the end. If any of the callbacks fail, I need to pass an object to be used in the errback.
How can I pass an object from callback to the errback?
The following code exemplifies what I want to do:
from twisted.internet.defer import FAILURE
from twisted.internet import defer

class CodMsg(object):
    def __init__(self, code, msg):
        self.code = code
        self.msg = msg

class Resource(object):
    @classmethod
    def checkCondition(cls, result):
        if result == "error":
            cdm = CodMsg(1, 'Error 1')
            raise FAILURE, cdm
        else:
            return "ok"

    @classmethod
    def erBackTst(cls, result):
        ####### How to get the value of cdm here? ######## <<<===
        print 'Error:'
        print result
        return result

d = defer.Deferred()
d.addCallback(Resource.checkCondition)
d.addErrback(Resource.erBackTst)
d.callback("error")
print d.result
In this case you can just raise an exception containing all the info you need.
For example:
from twisted.internet import defer

class MyCustomException(Exception):
    def __init__(self, msg, code):
        self.code = code
        self.message = msg

def callback(result):
    print result
    raise MyCustomException('Message', 23)

def errback(failure):
    # failure.value is an exception instance that you raised in callback
    print failure.value.message
    print failure.value.code

d = defer.Deferred()
d.addCallback(callback)
d.addErrback(errback)
d.callback("error")
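If you want to keep your original CodMsg object rather than fold its fields into the exception, one possible variant (just a sketch, not the only approach, and it reuses the CodMsg class from the question) is to attach the object to a custom exception and read it back from failure.value in the errback:

from twisted.internet import defer

class CodMsgError(Exception):
    # Hypothetical exception type that simply carries a CodMsg instance.
    def __init__(self, cod_msg):
        Exception.__init__(self, cod_msg.msg)
        self.cod_msg = cod_msg

def checkCondition(result):
    if result == "error":
        raise CodMsgError(CodMsg(1, 'Error 1'))
    return "ok"

def erBackTst(failure):
    cdm = failure.value.cod_msg   # the CodMsg instance raised in the callback
    print 'Error:', cdm.code, cdm.msg

d = defer.Deferred()
d.addCallback(checkCondition)
d.addErrback(erBackTst)
d.callback("error")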
Also, for a better understanding of deferreds and async programming, you can read this nice Twisted tutorial: http://krondo.com/an-introduction-to-asynchronous-programming-and-twisted/.
It uses a slightly outdated Twisted version in its examples, but it is still an excellent place to start learning Twisted.
I'm stuck in a situation where my first few signals get through, but later on no signals are emitted at all.
I'll elaborate in detail:
I have a job that requires heavy processing and RAM and can take up to 2-4 hours to complete linearly without threading, so I decided to use QThreadPool. While testing, I created 355 runnables (QRunnable subclasses) which get started by a QThreadPool, and a few of them depend on other runnables finishing first. It's all working fine; I'm able to derive the dependencies by emitting signals and catching them. It all works absolutely fine when I run this without a GUI.
For example (the code below is not tested, I just typed it here):
from PyQt4 import QtCore, QtGui

class StreamPool(QtCore.QObject):
    def __init__(self, inputs, q_app):
        super(StreamPool, self).__init__()
        self.inputs = inputs
        self.q_app = q_app
        self.pool = QtCore.QThreadPool()

    def start(self):
        for each_input in self.inputs:
            runner = StreamRunner(each_input, self.q_app)
            runner.signal.operation_started.connect(self.mark_as_start)
            runner.signal.operation_finished.connect(self.mark_as_finish)
            self.pool.start(runner)

    def mark_as_start(self, name):
        print 'operation started..', name
        # Some operation...

    def mark_as_finish(self, name):
        print 'operation finished..', name
        # Some operation...

class StreamRunner(QtCore.QRunnable):
    def __init__(self, input, q_app):
        super(StreamRunner, self).__init__()
        self.input = input
        self.q_app = q_app
        self.signal = WorkSignals()

    def run(self):
        self.signal.operation_started.emit(self.input)
        # Doing some operations
        self.signal.operation_finished.emit(self.input)
        self.q_app.processEvents()

class WorkSignals(QtCore.QObject):
    operation_started = QtCore.pyqtSignal(str)
    operation_finished = QtCore.pyqtSignal(str)

if __name__ == '__main__':
    app = QtGui.QApplication([])
    st = StreamPool(['a', 'b', 'c'], app)
    st.start()
    app.exec_()
It works brilliantly in the above case.
I want to show the statuses in a UI, since hundreds of tasks can be executed. So I wrote a simple UI and ran StreamPool() from another QThread, let's say named StreamWorkThread, which is spawned by the UI. StreamWorkThread catches StreamPool.signal.* and forwards them back to the UI. But in this case StreamPool only manages to emit a few of them, the first 4 or 5. The tasks still get executed, but the dependent tasks are not initialized because of this behaviour, and no status updates are displayed in the UI.
I can't share the code with you guys as it's from my workplace, but I can write a similar approach here:
class StreamWorkThread(QtCore.QThread):
    def __init__(self, inputs, q_app):
        super(StreamWorkThread, self).__init__()
        self.signal = WorkSignals()
        self.stream_pool = StreamPool(inputs, q_app)
        self.stream_pool.signal.operation_started.connect(self.signal.operation_started.emit)
        self.stream_pool.signal.operation_finished.connect(self.signal.operation_finished.emit)

    def run(self):
        self.stream_pool.start()

def print_start(name):
    print 'Started --', name

def print_finished(name):
    print 'Finished --', name

if __name__ == '__main__':
    app = QtGui.QApplication([])
    th = StreamWorkThread(['a', 'b', 'c'], app)
    th.signal.operation_started.connect(print_start)
    th.signal.operation_finished.connect(print_finished)
    th.start()
    app.exec_()
Consolidated code:
from PyQt4 import QtCore, QtGui

class StreamPool(QtCore.QObject):
    def __init__(self, inputs, q_app):
        super(StreamPool, self).__init__()
        self.inputs = inputs
        self.q_app = q_app
        self.pool = QtCore.QThreadPool()

    def start(self):
        for each_input in self.inputs:
            runner = StreamRunner(each_input, self.q_app)
            runner.signal.operation_started.connect(self.mark_as_start)
            runner.signal.operation_finished.connect(self.mark_as_finish)
            self.pool.start(runner)

    def mark_as_start(self, name):
        print 'operation started..', name
        # Some operation...

    def mark_as_finish(self, name):
        print 'operation finished..', name
        # Some operation...

class StreamRunner(QtCore.QRunnable):
    def __init__(self, input, q_app):
        super(StreamRunner, self).__init__()
        self.input = input
        self.q_app = q_app
        self.signal = WorkSignals()

    def run(self):
        self.signal.operation_started.emit(self.input)
        # Doing some operations
        self.signal.operation_finished.emit(self.input)
        self.q_app.processEvents()

class WorkSignals(QtCore.QObject):
    operation_started = QtCore.pyqtSignal(str)
    operation_finished = QtCore.pyqtSignal(str)

class StreamWorkThread(QtCore.QThread):
    def __init__(self, inputs, q_app):
        super(StreamWorkThread, self).__init__()
        self.signal = WorkSignals()
        self.stream_pool = StreamPool(inputs, q_app)
        self.stream_pool.signal.operation_started.connect(self.signal.operation_started.emit)
        self.stream_pool.signal.operation_finished.connect(self.signal.operation_finished.emit)

    def run(self):
        self.stream_pool.start()

def print_start(name):
    print 'Started --', name

def print_finished(name):
    print 'Finished --', name

if __name__ == '__main__':
    app = QtGui.QApplication([])
    th = StreamWorkThread(['a', 'b', 'c'], app)
    th.signal.operation_started.connect(print_start)
    th.signal.operation_finished.connect(print_finished)
    th.start()
    app.exec_()
Please help me, guys; I'm not getting what exactly the problem is here..! :(
Okay, I got the solution for this behaviour.
The root cause of the blocked signals was inheriting QObject in StreamPool; when I replaced QObject with QThread it worked seamlessly.
Here are the changes I made, in only two places:
class StreamPool(QtCore.QThread):   # changed from QtCore.QObject
    def __init__(self, inputs, q_app):
        super(StreamPool, self).__init__()
        self.inputs = inputs
        self.q_app = q_app
        self.pool = QtCore.QThreadPool()

    def run(self):                   # renamed from start()
        for each_input in self.inputs:
            runner = StreamRunner(each_input, self.q_app)
            runner.signal.operation_started.connect(self.mark_as_start)
            runner.signal.operation_finished.connect(self.mark_as_finish)
            self.pool.start(runner)
And that's it, it worked! :D
It's very difficult to tell without the source code, but the problem is probably that when app.exec_() is run, the main loop of the GUI is started and interferes with your Stream* classes.
I am trying to find a good way to log a warning message while appending to it information that is only known by the caller of the function.
I think it will be clear with an example.
# log method as parameter
class Runner1(object):
    def __init__(self, log):
        self.log = log

    def run(self):
        self.log('First Warning')
        self.log('Second Warning')
        return 42

class Main1(object):
    def __init__(self):
        self._runner = Runner1(self.log)

    def log(self, message):
        print('Some object specific info: {}'.format(message))

    def run(self):
        print(self._runner.run())

e1 = Main1()
e1.run()
The Main object has a log function that adds its own information to any message before logging it. This log function is given as a parameter (in this case to a Runner object). Carrying this extra parameter around all the time is extremely annoying and I would like to avoid it. There are usually lots of objects and functions involved, and I have therefore discarded using the logging module, as I would need to create a different logger for each object. (Is this correct?)
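As far as I know, the standard logging module can attach per-object context without needing a separate logger per object, for example via logging.LoggerAdapter. A minimal sketch follows; it does not remove the need to hand the adapter around, but it shows that a single underlying logger can serve many owners (the "owner" field name is my own choice, not part of the question):

import logging

logging.basicConfig(level=logging.WARNING,
                    format='%(levelname)s [%(owner)s] %(message)s')
logger = logging.getLogger(__name__)

class Runner(object):
    def __init__(self, log):
        self.log = log

    def run(self):
        self.log.warning('First Warning')
        return 42

# One shared logger; each owner only wraps it in an adapter carrying its context.
main_log = logging.LoggerAdapter(logger, {'owner': 'Main1'})
print(Runner(main_log).run())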
I have tried to bubble the warning using the warning module:
# warning module
import warnings

class Runner2(object):
    def run(self):
        warnings.warn('First Warning')
        warnings.warn('Second Warning')
        return 42

class Main2(object):
    def __init__(self):
        self._runner = Runner2()

    def log(self, message):
        print('Some object specific info: {}'.format(message))

    def run(self):
        with warnings.catch_warnings(record=True) as ws:
            warnings.simplefilter("always")
            out = self._runner.run()
        for w in ws:
            self.log(w.message)
        print(out)

e2 = Main2()
e2.run()
But according to the docs, this is not thread safe.
Finally, I have also tried some generators:
# yield warning
class _Warning(object):
    def __init__(self, message):
        self.message = message

class Runner3(object):
    def run(self):
        yield _Warning('First Warning')
        yield _Warning('Second Warning')
        yield 42

class Main3(object):
    def __init__(self):
        self._runner = Runner3()

    def log(self, message):
        print('Some object specific info: {}'.format(message))

    def run(self):
        for out in self._runner.run():
            if not isinstance(out, _Warning):
                break
            self.log(out.message)
        print(out)

e3 = Main3()
e3.run()
But the fact that you have to modify Runner.run to yield (instead of return) the final result is inconvenient, as functions have to be specifically changed to be used in this way. (Maybe this will change in the future? See the last Q&A in PEP 255.) Additionally, I am not sure if there are any other problems with this type of implementation.
So what I am looking for is a thread-safe way of bubbling warnings that does not require passing parameters. I also would like that methods that do not have warnings remain unchanged. Adding a special construct such as yield or warning.warn to bubble the warnings would be fine.
Any ideas?
import Queue

log = Queue.Queue()

class Runner1(object):
    def run(self):
        log.put('First Warning')
        log.put('Second Warning')
        return 42

class Main1(object):
    def __init__(self):
        self._runner = Runner1()

    def log(self, message):
        print('Some object specific info: {0}'.format(message))

    def run(self):
        out = self._runner.run()
        while True:
            try:
                msg = log.get_nowait()
                self.log(msg)
            except Queue.Empty:
                break
        print(out)

e1 = Main1()
e1.run()
yields
Some object specific info: First Warning
Some object specific info: Second Warning
42
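Queue.Queue is itself thread-safe, so the put/get traffic is fine. However, since the queue here is a single module-level global, warnings from Main1 objects running concurrently in different threads would get mixed together. If that matters, one possible variation (just a sketch, keeping the no-extra-parameter property) is to keep the queue in thread-local storage, so each thread running a Main1 gets its own:

import Queue
import threading

_local = threading.local()

def warn(message):
    # Called by any runner; no parameters are threaded through.
    _local.queue.put(message)

class Runner1(object):
    def run(self):
        warn('First Warning')
        warn('Second Warning')
        return 42

class Main1(object):
    def log(self, message):
        print('Some object specific info: {0}'.format(message))

    def run(self):
        _local.queue = Queue.Queue()   # one queue per thread running a Main1
        out = Runner1().run()
        while True:
            try:
                self.log(_local.queue.get_nowait())
            except Queue.Empty:
                break
        print(out)

Main1().run()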