Simple continuously running XMPP client in Python

I'm using python-xmpp to send jabber messages. Everything works fine except that every time I want to send messages (every 15 minutes) I need to reconnect to the jabber server, and in the meantime the sending client is offline and cannot receive messages.
So I want to write a really simple, indefinitely running xmpp client, that is online the whole time and can send (and receive) messages when required.
My trivial (non-working) approach:
import time
import xmpp

class Jabber(object):
    def __init__(self):
        server = 'example.com'
        username = 'bot'
        passwd = 'password'
        self.client = xmpp.Client(server)
        self.client.connect(server=(server, 5222))
        self.client.auth(username, passwd, 'bot')
        self.client.sendInitPresence()
        self.sleep()

    def sleep(self):
        self.awake = False
        delay = 1
        while not self.awake:
            time.sleep(delay)

    def wake(self):
        self.awake = True

    def auth(self, jid):
        self.client.getRoster().Authorize(jid)
        self.sleep()

    def send(self, jid, msg):
        message = xmpp.Message(jid, msg)
        message.setAttr('type', 'chat')
        self.client.send(message)
        self.sleep()

if __name__ == '__main__':
    j = Jabber()
    time.sleep(3)
    j.wake()
    j.send('receiver@example.org', 'hello world')
    time.sleep(30)
The problem here seems to be that I cannot wake it up. My best guess is that I need some kind of concurrency. Is that true, and if so how would I best go about that?
EDIT: After looking into all the options concerning concurrency, I decided to go with twisted and wokkel. If I could, I would delete this post.
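Since the EDIT mentions twisted and wokkel: a minimal sketch of what that route can look like, modeled on wokkel's standard echobot pattern (the JID and password are placeholders; save it as a .tac file and run it with twistd -noy):

# jabberbot.tac -- a sketch; wokkel/twisted of that era are Python 2
from twisted.application import service
from twisted.words.protocols.jabber.jid import JID
from wokkel.client import XMPPClient
from wokkel.xmppim import MessageProtocol, AvailablePresence

application = service.Application("jabberbot")

client = XMPPClient(JID("bot@example.com/bot"), "password")
client.setServiceParent(application)

class Bot(MessageProtocol):
    def connectionMade(self):
        self.send(AvailablePresence())  # go online as soon as we connect

    def onMessage(self, msg):
        print "received:", msg.body

bot = Bot()
bot.setHandlerParent(client)

The client service keeps the connection (and reconnection) handling for you, so the bot stays online and you can send messages from any of your own code that has a reference to it.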

There is a good example on the homepage of xmpppy itself (xmpppy being another name for python-xmpp) that does almost what you want: xtalk.py.
It is basically a console jabber client, but it shouldn't be hard to rewrite it into the bot you want.
It's always online and can send and receive messages. I don't see a need for the multiprocessing module (or any other concurrency mechanism) here, unless you need to receive and send messages at the exact same time.

A loop over the Process(timeout) method is a good way to wait for and process any new incoming stanzas while keeping the connection up.
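For example, a minimal sketch of that pattern with xmpppy (server, JID and credentials are placeholders):

import xmpp

def on_message(conn, msg):
    # echo any chat message back to its sender
    if msg.getBody():
        conn.send(xmpp.Message(msg.getFrom(), 'you said: ' + msg.getBody(), typ='chat'))

client = xmpp.Client('example.com', debug=[])
client.connect(server=('example.com', 5222))
client.auth('bot', 'password', 'bot')
client.RegisterHandler('message', on_message)
client.sendInitPresence()

while True:
    client.Process(1)  # handle pending stanzas, waiting at most 1 second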

Related

How to send websocket updates to any connected clients while running a while-True loop?

I have a Python3 program that runs a "while True"-loop until stopped, which occasionally saves data to a MySQL database. I am creating an administrative website, separate from the Python program, where I will be able to observe this data.
I now want to be notified, on the website, when changes have been made to the database. My thought was to set up a websocket connection, so that the Python program can send a message through the socket to all connected clients, i.e. all open browsers, if there have been any changes to the database table.
I have done something similar before, but in that case I had to wait for a websocket connection before the "while True"-loop would start. In the new scenario I want to be able to have multiple website clients at once, and let them connect at any time, as well as disconnect without interrupting the Python programs loop.
This is a simplified version of my previous code, which I now want to update to be able to run both with & without websocket clients.
import asyncio
import websockets

# console_log, do_stuff and data are helpers from the original program
async def run(ws):
    while True:
        db_has_updated = do_stuff()
        if db_has_updated:
            await ws.send(data)

socket_server = websockets.serve(run, "127.0.0.1", 5055)
asyncio.get_event_loop().run_until_complete(socket_server)
console_log("Waiting for socket connection...")
asyncio.get_event_loop().run_forever()
I just can't seem to be able to come up with the right search terms to find a solution, so I'm asking here instead.
I figured it out, finally! Here is my solution, with a websocket server running in a separate thread from the other logic. I'll probably change some things to make it neater, but this does everything I need. Feel free to ask any questions.
Be aware that this blocks when messaging all the connected clients. That is the way I needed it to work, but you could always thread/subprocess the logic/data-gen part of the program if you want it to run completely asynchronously.
#!/usr/bin/env python3
import asyncio
import websockets
import threading
import time
import random

def gen_data():
    print("Generating data...")
    time.sleep(3)
    data = random.randint(1, 10)
    return data

async def send(client, data):
    await client.send(data)

async def handler(client, path):
    # Register.
    print("Websocket Client Connected.", client)
    clients.append(client)
    while True:
        try:
            print("ping", client)
            pong_waiter = await client.ping()
            await pong_waiter
            print("pong", client)
            time.sleep(3)  # note: blocks the event loop thread between pings
        except Exception as e:
            clients.remove(client)
            print("Websocket Client Disconnected", client)
            break

clients = []
start_server = websockets.serve(handler, "localhost", 5555)

# run the websocket event loop on its own thread
asyncio.get_event_loop().run_until_complete(start_server)
threading.Thread(target=asyncio.get_event_loop().run_forever).start()

print("Socket Server Running. Starting main loop.")

while True:
    data = str(gen_data())
    message_clients = clients.copy()
    for client in message_clients:
        print("Sending", data, "to", client)
        try:
            asyncio.run(send(client, data))
        except:
            # Clients might have disconnected during the messaging process;
            # just ignore that, they will have been removed already.
            pass

Understanding async await in python socket io / aiohttp server

I am trying to set up a socket.io server using python-socketio.
Here is a minimal working example:
import asyncio
from aiohttp import web
import socketio
import random

sio = socketio.AsyncServer(async_mode='aiohttp')
app = web.Application()
sio.attach(app)

@sio.on('connect')
def connect(sid, environ):
    print("connected: ", sid)

@sio.on('sendText')
async def message(sid, data):
    print("message ", data)
    # await asyncio.sleep(1 * random.random())
    # print('waited', data)

@sio.on('disconnect')
def disconnect(sid):
    print('disconnect ', sid)

if __name__ == '__main__':
    web.run_app(app, host='0.0.0.0', port=8080)
This runs fine, and I can run, for instance, this Node.js client:
const io = require('socket.io-client');
const socket = io('ws://localhost:8080');
socket.emit('sendText', 'hey 1')
socket.emit('sendText', 'hey 2')
socket.emit('sendText', 'hey 3')
If I run the server and then run the node script above, I get this on the server side:
connected: c1e687f0e2724b339fcdbefdb5aaa8f8
message hey 1
message hey 2
message hey 3
However, if I uncomment the lines with await sleep in the code, I only receive the first message:
connected: 816fb6700f5143f7875b20a252c65f33
message hey 1
waited hey 1
I don't understand why the next messages are not appearing.
Can only one instance of async def message run at the same time? Or why?
I am sure that I am not understanding something very fundamental about how this works. I would be very grateful if someone could point out what I am not understanding.
I'm the author of the python-socketio package. There are two problems here, I think. I can answer your question:
Can only one instance of async def message run at the same time? Or why?
My Socket.IO server serializes the events that are received from a given client. So, for example, if client A sends an event that runs for one minute, any additional events sent by A during that minute will be queued, waiting for the first event to complete. If client B sends an event during that minute, it will be handled immediately. Events from a given client are artificially serialized to prevent race conditions or other side effects from occurring as a result of two or more handlers for the same client running in parallel. This serialization of events can be turned off with the async_handlers option:
sio = socketio.AsyncServer(async_mode='aiohttp', async_handlers=True)
With aiohttp 2.3.7 and async_handlers=True, your three events are received at more or less the same time, and then all the handlers wait in parallel during their sleep periods.
Unfortunately, this does not explain the 2nd and 3rd events never reaching the server. I have verified that these events are properly queued and executed in sequence with aiohttp 2.2.5, but this breaks with 2.3.0 all the way up to 2.3.7. My current theory is that a change introduced in 2.3.0 causes messages that arrive while the task is sleeping to be dropped, but I haven't found why that happens yet.

How to reconnect to RabbitMQ?

My python script constantly has to send messages to RabbitMQ once it receives one from another data source. The frequency in which the python script sends them can vary, say, 1 minute - 30 minutes.
Here's how I establish a connection to RabbitMQ:
import pika

rbt_conn = pika.BlockingConnection(pika.ConnectionParameters("some_host"))
channel = rbt_conn.channel()
I just got an exception
pika.exceptions.ConnectionClosed
How can I reconnect to it? What's the best way? Is there any "strategy"? Is there a way to send pings to keep the connection alive, or to set a timeout?
Any pointers will be appreciated.
RabbitMQ uses heartbeats to detect and close "dead" connections and to prevent network devices (firewalls etc.) from terminating "idle" connections. From version 3.5.5 on, the default timeout is set to 60 seconds (previously it was ~10 minutes). From the docs:
Heartbeat frames are sent about every timeout / 2 seconds. After two missed heartbeats, the peer is considered to be unreachable.
The problem with Pika's BlockingConnection is that it is unable to respond to heartbeats until some API call is made (for example, channel.basic_publish(), connection.sleep(), etc).
The approaches I found so far:
Increase or deactivate the timeout
RabbitMQ negotiates the timeout with the client when establishing the connection. In theory, it should be possible to override the server default value with a bigger one using the heartbeat_interval argument, but the current Pika version (0.10.0) uses the minimum of the values offered by the server and the client. This issue is fixed on current master.
On the other hand, it is possible to deactivate the heartbeat functionality completely by setting the heartbeat_interval argument to 0, which may well drive you into new issues (firewalls dropping idle connections, etc.).
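For illustration, a sketch of overriding the negotiated value when connecting (the parameter name is from Pika 0.10.x; later Pika versions renamed it to plain heartbeat):

import pika

# heartbeat_interval=0 disables heartbeats entirely; a large value such
# as 600 keeps them while making missed heartbeats far less likely
params = pika.ConnectionParameters(host='some_host', heartbeat_interval=600)
connection = pika.BlockingConnection(params)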
Reconnecting
Expanding on @itsafire's answer, you can write your own publisher class that lets you reconnect when required. An example naive implementation:
import logging
import json
import pika

class Publisher:
    EXCHANGE = 'my_exchange'
    TYPE = 'topic'
    ROUTING_KEY = 'some_routing_key'

    def __init__(self, host, virtual_host, username, password):
        self._params = pika.connection.ConnectionParameters(
            host=host,
            virtual_host=virtual_host,
            credentials=pika.credentials.PlainCredentials(username, password))
        self._conn = None
        self._channel = None

    def connect(self):
        if not self._conn or self._conn.is_closed:
            self._conn = pika.BlockingConnection(self._params)
            self._channel = self._conn.channel()
            self._channel.exchange_declare(exchange=self.EXCHANGE,
                                           type=self.TYPE)

    def _publish(self, msg):
        self._channel.basic_publish(exchange=self.EXCHANGE,
                                    routing_key=self.ROUTING_KEY,
                                    body=json.dumps(msg).encode())
        logging.debug('message sent: %s', msg)

    def publish(self, msg):
        """Publish msg, reconnecting if necessary."""
        try:
            self._publish(msg)
        except pika.exceptions.ConnectionClosed:
            logging.debug('reconnecting to queue')
            self.connect()
            self._publish(msg)

    def close(self):
        if self._conn and self._conn.is_open:
            logging.debug('closing queue connection')
            self._conn.close()
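A possible usage of the class above (connection details are placeholders):

publisher = Publisher('some_host', '/', 'guest', 'guest')
publisher.connect()
publisher.publish({'event': 'ping'})  # transparently reconnects if the connection was closed
publisher.close()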
Other possibilities
Other possibilities which I haven't explored yet:
Using an asynchronous adapter for publishing
Keeping your RabbitMQ connection and your "publish" code on a background thread which periodically calls connection.sleep() to respond to server heartbeats (sketched below).
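A sketch of that second idea, assuming all pika calls stay on the single background thread (BlockingConnection is not thread-safe) and with host and queue name as placeholders:

import queue
import threading
import pika

outbox = queue.Queue()

def publisher_thread():
    # this thread owns the connection; no other thread touches pika objects
    conn = pika.BlockingConnection(pika.ConnectionParameters('some_host'))
    channel = conn.channel()
    channel.queue_declare(queue='some_queue')
    while True:
        try:
            body = outbox.get(timeout=5)
            channel.basic_publish(exchange='', routing_key='some_queue',
                                  body=body)
        except queue.Empty:
            conn.sleep(1)  # process I/O so heartbeats keep being answered

threading.Thread(target=publisher_thread, daemon=True).start()
outbox.put('hello')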
Dead simple: some pattern like this.
import time
import pika

while True:
    try:
        communication_handles = connect_pika()
        do_your_stuff(communication_handles)
    except pika.exceptions.ConnectionClosed:
        print 'oops. lost connection. trying to reconnect.'
        # avoid rapid reconnection on longer RMQ server outage
        time.sleep(0.5)
You will probably have to refactor your code, but basically it is about catching the exception, mitigating the problem, and continuing with your stuff.
The communication_handles contain all the pika elements, like channels and queues, that your stuff needs to communicate with RabbitMQ via pika.
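A hypothetical fill-in for those helpers (host and queue name are placeholders):

import pika

def connect_pika():
    # bundle everything the loop needs into one handle object
    conn = pika.BlockingConnection(pika.ConnectionParameters('some_host'))
    channel = conn.channel()
    channel.queue_declare(queue='some_queue')
    return {'connection': conn, 'channel': channel}

def do_your_stuff(handles):
    handles['channel'].basic_publish(exchange='',
                                     routing_key='some_queue',
                                     body='hello')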

How can I write a socket server in a different thread from my main program (using gevent)?

I'm developing a Flask webserver, running under gevent's WSGIServer, that needs to communicate (in the background) with a hardware device over two sockets using XML.
One socket is initiated by the client (my application) and I can send XML commands to the device. The device answers on a different port and sends back information that my application has to confirm. So my application has to listen to this second port.
Up until now I have issued a command, opened the second port as a server, waited for a response from the device and closed the second port.
The problem is that it's possible that the device sends multiple responses that I have to confirm. So my solution was to keep the port open and keep responding to incoming requests. However, in the end the device is done sending requests, and my application is still listening (I don't know when the device is done), thereby blocking everything else.
This seemed like a perfect use case for a thread, so that my application launches a listening server in a separate thread. Because I'm already using gevent as a WSGI server for Flask, I can use the greenlets.
The problem is, I have looked for a good example of such a thing, but all I can find is examples of multi-threading handlers for a single socket server. I don't need to handle a lot of connections on the socket server, but I need it launched in a separate thread so it can listen for and handle incoming messages while my main program can keep sending messages.
The second problem I'm running into is that in the server, I need to use some methods from my "main" class. Being relatively new to Python I'm unsure how to structure it in a way to make that possible.
import socket

class Device(object):
    def __init__(self, ...):
        self.clientsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    def _connect_to_device(self):
        print "OPEN CONNECTION TO DEVICE"
        try:
            self.clientsocket.connect((self.ip, 5100))
        except socket.error as e:
            pass

    def _disconnect_from_device(self):
        print "CLOSE CONNECTION TO DEVICE"
        self.clientsocket.close()

    def deviceaction1(self, ...):
        # the data that is sent is an XML document that depends on the
        # parameters of this method.
        self._connect_to_device()
        self._send_data(XMLdoc)
        self._wait_for_response()
        return True

    def _send_data(self, data):
        print "SEND:"
        print(data)
        self.clientsocket.send(data)

    def _wait_for_response(self):
        print "WAITING FOR REQUESTS FROM DEVICE (CHANNEL 1)"
        self.serversocket.bind(('10.0.0.16', 5102))
        self.serversocket.listen(5)  # listen for answer, maximum 5 connections
        connection, address = self.serversocket.accept()
        data = connection.recv(1024)  # the data is of a specific length I can calculate
        if len(data) > 0:
            self._process_response(data)
        self.serversocket.close()

    def _process_response(self, data):
        print "RECEIVED:"
        print(data)
        # here is some code that processes the incoming data and
        # responds to the device
        # this may or may not result in more incoming data

if __name__ == '__main__':
    machine = Device(ip="10.0.0.240")
    machine.deviceaction1(...)
This is (globally, I left out sensitive information) what I'm doing now. As you can see everything is sequential.
If anyone can provide an example of a listening server in a separate thread (preferably using greenlets) and a way to communicate from the listening server back to the spawning thread, it would be of great help.
Thanks.
EDIT:
After trying several methods, I decided to use Python's default select() method to solve this problem. This worked, so my question regarding the use of threads is no longer relevant. Thanks to the people who provided input for their time and effort.
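For anyone who still wants the greenlet-based listener originally asked for, here is a minimal sketch using gevent's StreamServer (the address and the echo logic are placeholders): the server handles connections in its own greenlets while the main greenlet keeps running.

import gevent
from gevent.server import StreamServer

def handle(sock, address):
    # runs in a new greenlet for every incoming connection
    data = sock.recv(1024)
    if data:
        sock.sendall(data)  # confirm/respond to the device here
    sock.close()

server = StreamServer(('10.0.0.16', 5102), handle)
server.start()  # begins listening without blocking the caller

# the main greenlet is free to keep sending commands meanwhile
gevent.sleep(30)
server.stop()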
Hope this can provide some help. In the Example class below, calling the tenMessageSender function fires up a background thread without blocking the main loop, and _zmqBasedListener then listens on a separate port for as long as that thread is alive. Whatever messages tenMessageSender sends are received by the client, which responds back to _zmqBasedListener.
Server Side
import threading
import zmq
import sys

class Example:
    def __init__(self):
        self.context = zmq.Context()
        self.publisher = self.context.socket(zmq.PUB)
        self.publisher.bind('tcp://127.0.0.1:9997')
        self.subscriber = self.context.socket(zmq.SUB)
        self.thread = threading.Thread(target=self._zmqBasedListener)

    def _zmqBasedListener(self):
        self.subscriber.connect('tcp://127.0.0.1:9998')
        self.subscriber.setsockopt(zmq.SUBSCRIBE, "some_key")
        while True:
            message = self.subscriber.recv()
            print message
            sys.exit()

    def tenMessageSender(self):
        self._decideListener()
        for message in range(10):
            self.publisher.send("testid : %d: I am a task" % message)

    def _decideListener(self):
        if not self.thread.is_alive():
            print "STARTING THREAD"
            self.thread.start()
Client
import zmq

context = zmq.Context()
subscriber = context.socket(zmq.SUB)
subscriber.connect('tcp://127.0.0.1:9997')
publisher = context.socket(zmq.PUB)
publisher.bind('tcp://127.0.0.1:9998')
subscriber.setsockopt(zmq.SUBSCRIBE, "testid")
count = 0
print "Listener"
while True:
    message = subscriber.recv()
    print message
    publisher.send('some_key : Message received %d' % count)
    count += 1
Instead of an OS thread you can use a greenlet or similar.
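For example, a sketch of the listener using pyzmq's gevent-compatible binding (the port and subscription key mirror the example above); the green sockets yield to the gevent hub instead of blocking an OS thread:

import gevent
import zmq.green as zmq  # gevent-friendly drop-in for pyzmq

context = zmq.Context()
subscriber = context.socket(zmq.SUB)
subscriber.connect('tcp://127.0.0.1:9998')
subscriber.setsockopt(zmq.SUBSCRIBE, "some_key")

def listen():
    while True:
        print subscriber.recv()  # yields to other greenlets while waiting

listener = gevent.spawn(listen)
gevent.sleep(5)  # the main greenlet keeps running concurrently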

Responding to client disconnects using bottle and gevent.wsgi?

I have a small asynchronous server implemented using bottle and gevent.wsgi. There is a routine used to implement long poll that looks pretty much like the "Event Callbacks" example in the bottle documentation:
import gevent
import gevent.queue
from bottle import route

def worker(body):
    msg = msgbus.recv()  # msgbus is a ZMQ SUB socket (see below)
    body.put(msg)
    body.put(StopIteration)

@route('/poll')
def poll():
    body = gevent.queue.Queue()
    task = gevent.spawn(worker, body)  # renamed to avoid shadowing worker()
    return body
Here, msgbus is a ZMQ sub socket.
This all works fine, but if a client breaks the connection while worker is blocked on msgbus.recv(), that greenlet task will hang around "forever" (well, until a message is received), and will only find out about the disconnected client when it attempts to send a response.
I can use msgbus.poll(timeout=something) if I don't want to block forever waiting for IPC messages, but I still can't detect a client disconnect.
What I want to do is get something like a reference to the client socket so that I can use it in some kind of select or poll loop, or get some sort of asynchronous notification inside my greenlet, but I'm not sure how to accomplish either of these things with these frameworks (bottle and gevent).
Is there a way to get notified of client disconnects?
Aha! The wsgi.input variable, at least under gevent.wsgi, has an rfile member that is a file-like object. This doesn't appear to be required by the WSGI spec, so it might not work with other servers.
With this I was able to modify my code to look something like:
import zmq
import gevent
import gevent.queue
import bottle
from bottle import route

def worker(body, rfile):
    poll = zmq.Poller()
    poll.register(msgbus)
    poll.register(rfile, zmq.POLLIN)
    while True:
        events = dict(poll.poll())
        if rfile.fileno() in events:
            # client disconnect!
            break
        if msgbus in events:
            msg = msgbus.recv()
            body.put(msg)
            break
    body.put(StopIteration)

@route('/poll')
def poll():
    rfile = bottle.request.environ['wsgi.input'].rfile
    body = gevent.queue.Queue()
    task = gevent.spawn(worker, body, rfile)  # renamed to avoid shadowing worker()
    return body
And this works great... except on OpenShift, where you will have to use the alternate frontend on port 8000 with websockets support.

Categories

Resources