Twisted Tkinter Manual Data Input Not Working

I'm working on an application for a reception/office environment that sends notices to specific offices/computers (or, in my current testing setup, globally to all connected clients). The issue I'm facing is that the server claims to send the notification and write it to the transport, but the client never receives it. All of the initial connection data is processed and printed, but nothing further arrives except what goes through the main reconnection functions and the other built-in Twisted override methods.
Is something on the server end trying to write to the transport but not actually doing it? I've split the code across a few files, which are all over here: https://github.com/FlopsiBunny/ReceptionDemo, but the main culprit is:
# ReceptionDemo/Console/lib/offices.py
def submit(self, *args):
    # Submit Notification to Server
    title = self.titleEntry.get("1.0", END)
    message = self.messageEntry.get("1.0", END)
    # Notification Setup
    notif = Notification(title, message)
    # Send Notification
    if self.office is None:
        reactor.callFromThread(self.parent.factory.send_global_notice, notif)
    # Destroy Window
    self.destroy()
# ReceptionDemo/Console/lib/server.py (Function inside the Client Protocol class)
def sendNotice(self, notification):
    # Send Notification
    token = self.token
    notification.update_token(token)
    notifData = jsonpickle.encode(notification)
    response = {
        "token": token,
        "payload": notifData
    }
    responseData = jsonpickle.encode(response)
    print("Sending...")
    reactor.callFromThread(self.transport.write, responseData.encode("utf-8"))
    self.transport.write(str("Tada").encode("utf-8"))
    self.console.log("Sending Notice to " + str(self._peer.host + ":" + str(self._peer.port)))
    print(responseData)
    print("Sent")
If anyone could provide a solution, that would be great. I have a feeling it's a threading issue, but I am not the most adept at threading.

Found the problem: a bug with Twisted's tksupport and child windows of the root window. For example, dialogs will freeze the Twisted reactor, whereas a Toplevel window will deadlock it.

Related

How can I write a socket server in a different thread from my main program (using gevent)?

I'm developing a Flask/gevent WSGIServer web server that needs to communicate (in the background) with a hardware device over two sockets using XML.
One socket is initiated by the client (my application) and I can send XML commands to the device. The device answers on a different port and sends back information that my application has to confirm. So my application has to listen to this second port.
Up until now I have issued a command, opened the second port as a server, waited for a response from the device and closed the second port.
The problem is that it's possible that the device sends multiple responses that I have to confirm. So my solution was to keep the port open and keep responding to incoming requests. However, in the end the device is done sending requests, and my application is still listening (I don't know when the device is done), thereby blocking everything else.
This seemed like a perfect use case for a thread, so that my application launches a listening server in a separate thread. Because I'm already using gevent as a WSGI server for Flask, I can use the greenlets.
The problem is, I have looked for a good example of such a thing, but all I can find is examples of multi-threading handlers for a single socket server. I don't need to handle a lot of connections on the socket server, but I need it launched in a separate thread so it can listen for and handle incoming messages while my main program can keep sending messages.
The second problem I'm running into is that in the server, I need to use some methods from my "main" class. Being relatively new to Python, I'm unsure how to structure it in a way that makes that possible.
import socket

class Device(object):
    def __init__(self, ...):
        self.clientsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def _connect_to_device(self):
        print "OPEN CONNECTION TO DEVICE"
        try:
            self.clientsocket.connect((self.ip, 5100))
        except socket.error as e:
            pass

    def _disconnect_from_device(self):
        print "CLOSE CONNECTION TO DEVICE"
        self.clientsocket.close()

    def deviceaction1(self, ...):
        # the data that is sent is an XML document that depends on
        # the parameters of this method.
        self._connect_to_device()
        self._send_data(XMLdoc)
        self._wait_for_response()
        return True

    def _send_data(self, data):
        print "SEND:"
        print(data)
        self.clientsocket.send(data)

    def _wait_for_response(self):
        print "WAITING FOR REQUESTS FROM DEVICE (CHANNEL 1)"
        self.serversocket.bind(('10.0.0.16', 5102))
        self.serversocket.listen(5)  # listen for answer, maximum 5 connections
        connection, address = self.serversocket.accept()
        # the data is of a specific length I can calculate
        data = connection.recv(...)  # recv call restored; the length was elided by the asker
        if len(data) > 0:
            self._process_response(data)
        self.serversocket.close()

    def _process_response(self, data):
        print "RECEIVED:"
        print(data)
        # here is some code that processes the incoming data and
        # responds to the device
        # this may or may not result in more incoming data

if __name__ == '__main__':
    machine = Device(ip="10.0.0.240")
    machine.deviceaction1(...)
This is, globally, what I'm doing now (I left out sensitive information). As you can see, everything is sequential.
If anyone can provide an example of a listening server in a separate thread (preferably using greenlets) and a way to communicate from the listening server back to the spawning thread, it would be of great help.
Thanks.
EDIT:
After trying several methods, I decided to use Python's default select() method to solve this problem. This worked, so my question regarding the use of threads is no longer relevant. Thanks to the people who provided input for your time and effort.
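For reference, the select()-based approach the asker settled on might look something like the sketch below. The names (wait_for_responses) and the ACK reply are illustrative assumptions, not the asker's actual protocol; the idea is to keep accepting device requests until the device goes quiet for a timeout, instead of blocking forever on a single accept():

```python
import select
import socket

def wait_for_responses(serversocket, timeout=5.0):
    # Keep handling incoming device requests until none arrive within the
    # timeout, instead of blocking indefinitely on accept().
    while True:
        readable, _, _ = select.select([serversocket], [], [], timeout)
        if not readable:
            break  # the device has gone quiet; stop listening
        conn, _addr = serversocket.accept()
        data = conn.recv(4096)
        if data:
            conn.sendall(b"ACK")  # placeholder confirmation message
        conn.close()
```

The timeout is the only "end of conversation" signal here, which matches the asker's situation of not knowing when the device is done.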
Hope this can provide some help. In the example class below, calling the tenMessageSender function fires up an async thread without blocking the main loop, and _zmqBasedListener then listens on a separate port for as long as that thread is alive. Whatever messages tenMessageSender sends are received by the client, which responds back to the zmqBasedListener.
Server Side
import threading
import zmq
import sys

class Example:
    def __init__(self):
        self.context = zmq.Context()
        self.publisher = self.context.socket(zmq.PUB)
        self.publisher.bind('tcp://127.0.0.1:9997')
        self.subscriber = self.context.socket(zmq.SUB)
        self.thread = threading.Thread(target=self._zmqBasedListener)

    def _zmqBasedListener(self):
        self.subscriber.connect('tcp://127.0.0.1:9998')
        self.subscriber.setsockopt(zmq.SUBSCRIBE, "some_key")
        while True:
            message = self.subscriber.recv()
            print message
        sys.exit()

    def tenMessageSender(self):
        self._decideListener()
        for message in range(10):
            self.publisher.send("testid : %d: I am a task" % message)

    def _decideListener(self):
        if not self.thread.is_alive():
            print "STARTING THREAD"
            self.thread.start()
Client
import zmq

context = zmq.Context()
subscriber = context.socket(zmq.SUB)
subscriber.connect('tcp://127.0.0.1:9997')
publisher = context.socket(zmq.PUB)
publisher.bind('tcp://127.0.0.1:9998')
subscriber.setsockopt(zmq.SUBSCRIBE, "testid")
count = 0
print "Listener"
while True:
    message = subscriber.recv()
    print message
    publisher.send('some_key : Message received %d' % count)
    count += 1
Instead of a thread you can use a greenlet, etc.
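If you'd rather stay in the standard library, the same pattern (a background listener thread handing messages back to the spawning thread) can be sketched with plain sockets and a Queue. The start_listener function and the STOP sentinel below are illustrative assumptions, not part of any particular API:

```python
import socket
import threading
import queue

def start_listener(host='127.0.0.1', port=0):
    # Run a listening server in a background thread; received payloads are
    # handed back to the spawning thread through a Queue.
    inbox = queue.Queue()
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(5)

    def loop():
        while True:
            conn, _addr = srv.accept()
            data = conn.recv(4096)
            conn.close()
            if data == b"STOP":  # sentinel connection shuts the listener down
                break
            inbox.put(data)
        srv.close()

    thread = threading.Thread(target=loop)
    thread.daemon = True
    thread.start()
    return srv.getsockname()[1], inbox, thread
```

The main program keeps sending on its own socket and drains inbox whenever it likes; swapping the thread for a gevent greenlet keeps the same structure.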

Responding to client disconnects using bottle and gevent.wsgi?

I have a small asynchronous server implemented using bottle and gevent.wsgi. There is a routine used to implement long poll that looks pretty much like the "Event Callbacks" example in the bottle documentation:
def worker(body):
    msg = msgbus.recv()
    body.put(msg)
    body.put(StopIteration)

@route('/poll')
def poll():
    body = gevent.queue.Queue()
    worker = gevent.spawn(worker, body)
    return body
Here, msgbus is a ZMQ sub socket.
This all works fine, but if a client breaks the connection while worker is blocked on msgbus.recv(), that greenlet task will hang around "forever" (well, until a message is received), and will only find out about the disconnected client when it attempts to send a response.
I can use msgbus.poll(timeout=something) if I don't want to block forever waiting for IPC messages, but I still can't detect a client disconnect.
What I want to do is get something like a reference to the client socket so that I can use it in some kind of select or poll loop, or get some sort of asynchronous notification inside my greenlet, but I'm not sure how to accomplish either of these things with these frameworks (bottle and gevent).
Is there a way to get notified of client disconnects?
Aha! The wsgi.input variable, at least under gevent.wsgi, has an rfile member that is a file-like object. This doesn't appear to be required by the WSGI spec, so it might not work with other servers.
With this I was able to modify my code to look something like:
def worker(body, rfile):
    poll = zmq.Poller()
    poll.register(msgbus)
    poll.register(rfile, zmq.POLLIN)
    while True:
        events = dict(poll.poll())
        if rfile.fileno() in events:
            # client disconnect!
            break
        if msgbus in events:
            msg = msgbus.recv()
            body.put(msg)
            break
    body.put(StopIteration)

@route('/poll')
def poll():
    rfile = bottle.request.environ['wsgi.input'].rfile
    body = gevent.queue.Queue()
    worker = gevent.spawn(worker, body, rfile)
    return body
And this works great...
...except on OpenShift, where you will have to use the alternate frontend on port 8000 with websockets support.

Why is there no content when calling readAll in a QTcpSocket's readyRead callback?

I am building a minimal socket server that displays a window when it receives a message.
The server uses the newConnection signal to get the client connections and connects the proper signals (connected, disconnected and readyRead) for each socket.
When running the server program and sending some data to the proper address, for each datum sent, a QTcpSocket is connected, then readyRead, then immediately disconnected.
And within the readyRead callback, readAll returns nothing. (I suppose this is the cause of the disconnected signal.)
Still, the client gets no "broken pipe" error and can continue to send data.
How can that happen?
Here's the code:
class Client(QObject):
    def connects(self):
        self.connect(self.socket, SIGNAL("connected()"), SLOT(self.connected()))
        self.connect(self.socket, SIGNAL("disconnected()"), SLOT(self.disconnected()))
        self.connect(self.socket, SIGNAL("readyRead()"), SLOT(self.readyRead()))
        self.socket.error.connect(self.error)
        print "Client Connected from IP %s" % self.socket.peerAddress().toString()

    def error(self):
        print "error somewhere"

    def connected(self):
        print "Client Connected Event"

    def disconnected(self):
        self.readyRead()
        print "Client Disconnected"

    def readyRead(self):
        print "reading"
        msg = self.socket.readAll()
        print QString(msg), len(msg)
        n = Notification(QString(msg))
        n.show()

class Server(QObject):
    def __init__(self, parent=None):
        QObject.__init__(self)
        self.clients = []

    def setup_client_socket(self):
        print "incoming"
        client = Client(self)
        client.socket = self.server.nextPendingConnection()
        #self.client.socket.nextBlockSize = 0
        client.connects()
        self.clients.append(client)

    def StartServer(self):
        self.server = QTcpServer()
        self.server.listen(QHostAddress.LocalHost, 8888)
        print self.server.isListening(), self.server.serverAddress().toString(), self.server.serverPort()
        self.server.newConnection.connect(self.setup_client_socket)
Update:
I tested directly with the socket module from the Python standard library and it works, so the general setup of my machine and the network are not the guilty parties here. This is likely some Qt issue.
The SLOT function expects a string containing the name of a slot, and it should be preceded by the target of the slot:
self.connect(self.socket, SIGNAL("connected()"), self, SLOT("connected()"))
Without the quotes, you are making an immediate call to the function on the line where the connect appears.
As you didn't even declare these functions as slots using the QtCore.Slot/pyqtSlot decorator, they wouldn't be called anyway when the signals are emitted, if you use the SLOT function.
For undecorated python functions, you should use one of these syntaxes:
self.connect(self.socket, SIGNAL("connected()"), self.connected)
Or
self.socket.connected.connect(self.connected)
Notice the lack of () at the end of the last parameter.
Additionally, you shouldn't call readyRead in disconnected: if there was data to read before the disconnection, readyRead was most likely already called.
Probably a half-open socket - you can't trust clients to always open or terminate sessions correctly. Client might be disconnecting without doing a proper hangup on the server?
I haven't looked at the QTcpServer code, but it might be caused by someone port scanning your box?
Either way, the server should always be able to handle bad or no data gracefully. The internet is a crazy place with all sorts of weird packets floating around.

pyzmq create a process with its own socket

I have some code that monitors some other changing files. What I would like to do is start that zeromq-based code with a different socket, but the way I'm doing it now seems to cause assertions to fail somewhere in libzmq, since I may be reusing the same socket. How do I ensure that, when I create a new process from the Monitor class, the context will not be reused? That's what I think is going on; if you can tell there is some other stupidity on my part, please advise.
Here is some code:
import zmq
from zmq.eventloop import ioloop
from zmq.eventloop.zmqstream import ZMQStream

class Monitor(object):
    def __init__(self):
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.DEALER)
        self.socket.connect("tcp://127.0.0.1:5055")
        self.stream = ZMQStream(self.socket)
        self.stream.on_recv(self.somefunc)

    def initialize(self, id):
        self._id = id

    def somefunc(self, something):
        """work here and send back results if any"""
        import json
        jdecoded = json.loads(something)
        if self._id == jdecoded['_id']:
            # good, I'm the right monitor for you
            work = jdecoded['message']
            results = algorithm(work)
            self.socket.send(json.dumps(results))
        else:
            # let some other process deal with it, not mine
            pass

class Prefect(object):
    def __init__(self, id):
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.DEALER)
        self.socket.bind("tcp://127.0.0.1:5055")
        self.stream = ZMQStream(self.socket)
        self.stream.on_recv(self.check_if)
        self._id = id
        self.monitors = []

    def check_if(self, message):
        """find out from the message's id whether we have
        started a process for it previously"""
        import json
        jdecoded = json.loads(message)
        this_id = jdecoded['_id']
        if this_id in self.monitors:
            pass
        else:
            # start a new process for it; it should have its own socket
            new = Monitor()
            from multiprocessing import Process
            newp = Process(target=new.initialize, args=(this_id,))
            newp.start()
            self.monitors.append(this_id)  # ensure it's remembered
What is going on is that I want all the monitor processes and a single prefect process listening on the same port, so when the prefect sees a request it hasn't seen before, it starts a process for it; all the processes that already exist should probably listen too, but ignore messages not meant for them.
As it stands, if I do this I get a crash, possibly related to concurrent access of the same zmq socket by something (I tried threading.Thread; it still crashes). I read somewhere that concurrent access to a zmq socket from different threads is not possible. How would I ensure that new processes get their own zmq sockets?
EDIT:
The main deal in my app is that a request comes in via a zmq socket, and the process(es) listening react to the message as follows:
1. If it's directed at that process, judged by the _id field, do some reading on a file and reply, since one of the monitors matches the message's _id. If none match, then:
2. If the message's _id field is not recognized, all monitors ignore it, but the Prefect creates a process to handle that _id and all future messages to that id.
3. I want all the messages to be seen by the monitor processes as well as the prefect process; that seems easiest.
4. All the messages are very small, on average ~4096 bytes.
5. The monitor does some non-blocking reads, and on each ioloop iteration it sends what it has found out.
Further edit: the prefect process binds now, and it will receive messages and echo them so they can be seen by the monitors. This is what I have in mind as the architecture, but it's not final.
All the messages arrive from remote users over a browser that lets the server know what a client wants, and the server sends the message to the backend via zmq (I did not show this, but it is not hard), so in production they might not bind/connect to localhost.
I chose DEALER since it allows async/unlimited messages in either direction (see point 5), and DEALER can bind with DEALER, and the initial request/reply can arrive from either side. The other pairing that can do this is possibly DEALER/ROUTER.
You are correct that you cannot keep using the same socket in a subprocess (multiprocessing usually uses fork to create subprocesses). In general, what this means is that you don't want to create the socket that will be used in the subprocess until after the subprocess starts.
Since, in your case, the socket is an attribute on the Monitor object, you probably don't want to create the Monitor in the main process at all. That would look something like this:
def start_monitor(this_id):
    monitor = Monitor()
    monitor.initialize(this_id)
    # run the eventloop, or this will return immediately and destroy the monitor

... inside Prefect.check_if():

    proc = Process(target=start_monitor, args=(this_id,))
    proc.start()
    self.monitors.append(this_id)
rather than your example, where the only thing the subprocess does is assign an ID and then kill the process, ultimately having no effect.
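The "create resources after the fork" rule is not zmq-specific. A minimal multiprocessing sketch of the same shape is below; the launch helper and the Queue (standing in for the child's own zmq socket) are illustrative assumptions:

```python
from multiprocessing import Process, Queue

def start_monitor(this_id, results):
    # Per-process resources (e.g. a zmq.Context and DEALER socket) would be
    # created HERE, inside the child, never inherited from the parent.
    results.put(("monitor ready", this_id))
    # ...then run the event loop so the process keeps serving its _id.

def launch(this_id):
    results = Queue()
    proc = Process(target=start_monitor, args=(this_id, results))
    proc.start()
    proc.join()
    return results.get()
```

The key point matches the answer above: the parent only passes plain data (this_id) across the process boundary, never a socket or context.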

Simple continuously running XMPP client in python

I'm using python-xmpp to send Jabber messages. Everything works fine, except that every time I want to send messages (every 15 minutes) I need to reconnect to the Jabber server, and in the meantime the sending client is offline and cannot receive messages.
So I want to write a really simple, indefinitely running xmpp client, that is online the whole time and can send (and receive) messages when required.
My trivial (non-working) approach:
import time
import xmpp

class Jabber(object):
    def __init__(self):
        server = 'example.com'
        username = 'bot'
        passwd = 'password'
        self.client = xmpp.Client(server)
        self.client.connect(server=(server, 5222))
        self.client.auth(username, passwd, 'bot')
        self.client.sendInitPresence()
        self.sleep()

    def sleep(self):
        self.awake = False
        delay = 1
        while not self.awake:
            time.sleep(delay)

    def wake(self):
        self.awake = True

    def auth(self, jid):
        self.client.getRoster().Authorize(jid)
        self.sleep()

    def send(self, jid, msg):
        message = xmpp.Message(jid, msg)
        message.setAttr('type', 'chat')
        self.client.send(message)
        self.sleep()

if __name__ == '__main__':
    j = Jabber()
    time.sleep(3)
    j.wake()
    j.send('receiver@example.org', 'hello world')
    time.sleep(30)
The problem here seems to be that I cannot wake it up. My best guess is that I need some kind of concurrency. Is that true, and if so how would I best go about that?
EDIT: After looking into all the options concerning concurrency, I decided to go with twisted and wokkel. If I could, I would delete this post.
There is a good example on the homepage of xmpppy itself (which is another name for python-xmpp) that does almost what you want: xtalk.py.
It is basically a console Jabber client, but it shouldn't be hard to rewrite it into the bot you want.
It's always online and can send and receive messages. I don't see a need for the multiprocessing (or any other concurrency) module here, unless you need to receive and send messages at the exact same time.
A loop over the Process(timeout) method is a good way to wait and process any new incoming stanzas while keeping the connection up.
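As an aside on the wake-up problem in the question itself: the busy-wait in sleep() could be replaced with a threading.Event, provided sleep() and wake() run in different threads (in the original snippet both run in the main thread, which is why it never wakes). A minimal sketch, independent of xmpp; the Sleeper name is illustrative:

```python
import threading

class Sleeper(object):
    def __init__(self):
        self._awake = threading.Event()

    def sleep(self):
        # Block until some other thread calls wake(); no polling loop needed.
        self._awake.wait()
        self._awake.clear()

    def wake(self):
        self._awake.set()
```

This removes the one-second polling delay and makes the wake-up immediate, but the bot logic still has to live on a thread of its own.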
