How to solve a WebSocket ping timeout? - Python

While running the following piece of code (in theory it should send a value every minute)
from __future__ import print_function

from os import environ
import time
from datetime import datetime, timedelta

from twisted.internet.ssl import CertificateOptions
from twisted.internet.defer import inlineCallbacks
from twisted.internet import reactor
from autobahn.twisted.wamp import ApplicationSession, ApplicationRunner
from autobahn import wamp
import xlwings as wb

options = CertificateOptions()


class Component(ApplicationSession):
    """
    An application component that publishes an event every second.
    """

    @inlineCallbacks
    def onJoin(self, details):
        print("session attached")
        now = datetime.now()  # reference time for detecting a new minute
        while True:
            try:
                wb.Book(r'C:\Users\Administrator\Desktop\Datasets\test_feed.xlsx')
                e = wb.Range('A2').value
                b = wb.Range('C2').value
                c = wb.Range('E2').value
            except Exception:
                print("----Waiting for RTD server response----")
                time.sleep(1)
            try:
                epoch = datetime(now.year, now.month, now.day)
                result = epoch + timedelta(days=c)
                result = result.replace(microsecond=0, second=0)
                if result > now:
                    now = result
                    print("Stock", e, "Time", now, "Price", b)
                    self.publish(u'com.myapp.ma', b)
            except Exception:
                print("-----Waiting1 for RTD server response----")
                time.sleep(1)

    def onDisconnect(self):
        print("disconnected")
        reactor.stop()


if __name__ == '__main__':
    runner = ApplicationRunner(
        environ.get("AUTOBAHN_DEMO_ROUTER", u"ws://127.0.0.1:8080/ws"),
        u"crossbardemo")
    runner.run(Component)
The following error is returned:
2017-12-28T18:43:52+0100 [Router 1604] dropping connection to peer tcp4:127.0.0.1:61531 with abort=True: WebSocket ping timeout (peer did not respond with pong in time)
2017-12-28T18:43:52+0100 [Router 1604] session "8526139172223346" left realm "crossbardemo"
What I've tried to solve this problem:
I)
from twisted.internet.ssl import CertificateOptions

options = CertificateOptions()

if __name__ == '__main__':
    runner = ApplicationRunner(
        environ.get("AUTOBAHN_DEMO_ROUTER", u"ws://127.0.0.1:8080/ws"),
        u"crossbardemo", ssl=options)
    runner.run(Component)
II)
if __name__ == '__main__':
    runner = ApplicationRunner(
        environ.get("AUTOBAHN_DEMO_ROUTER", u"ws://127.0.0.1:8080/ws"),
        u"crossbardemo",
    )
    runner.run(Component, auto_reconnect=True)
III)
Regedit:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
IV)
Install the certifi module (pip install certifi) and set SSL_CERT_FILE, like:
export SSL_CERT_FILE="$(python -m certifi)"
All of these still produce the same error. I am running on Windows 10, with the Crossbar demo router, Autobahn and Twisted.
Link to router configuration:
https://github.com/crossbario/autobahn-python/tree/master/examples/twisted/wamp/pubsub/basic/.crossbar
Also, the following example code is working properly:
counter = 100
while True:
    print("publish: com.myapp.ma", counter)
    self.publish(u'com.myapp.ma', counter)
    counter += 100
    yield sleep(30)

For Twisted to process further I/O events, you have to give control back to the reactor. Twisted implements a cooperative multitasking system. Various tasks run in the reactor thread. This is accomplished by each task only spending a brief time in control. Code like:
while True:
    ...
    sleep(1)
prevents any other tasks from gaining control to execute and also prevents the reactor from gaining control to service I/O events.
Since this code is within a function decorated with inlineCallbacks, there is a very small change that will make it at least not completely incompatible with Twisted's mode of operation.
Instead of time.sleep(1), try this expression:
yield deferLater(reactor, 1, lambda: None)
And import deferLater from twisted.internet.task. This will perform a "sleep" which gives control back to the reactor and lets other tasks execute during the sleep. This should allow Autobahn to send the necessary ping/pong messages as well as allow it to process your publish call.
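As a rough sketch of what the loop could look like with that change (read_price here is a hypothetical stand-in for the xlwings/RTD lookup from the question; only the Twisted-related parts matter):

from autobahn.twisted.wamp import ApplicationSession
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks
from twisted.internet.task import deferLater


def read_price():
    """Hypothetical stand-in for the xlwings/RTD read from the question."""
    return 100.0


class Component(ApplicationSession):

    @inlineCallbacks
    def onJoin(self, details):
        print("session attached")
        while True:
            price = read_price()
            if price is not None:
                self.publish(u'com.myapp.ma', price)
            # Non-blocking "sleep": control goes back to the reactor for one
            # second, so Autobahn can answer WebSocket pings while we wait.
            yield deferLater(reactor, 1, lambda: None)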

Related

Why is ZeroMQ multipart sending/receiving wrong messages?

In Python I'm creating an application that also uses ZeroMQ. I'm using the PUSH/PULL method to send the loading status of one script to another. The receiving side in the PULL script runs inside a Thread. The PULL script looks like this:
import time
from threading import Thread
import threading
import os
import zmq
import sys

context = zmq.Context()
zmqsocket = context.socket(zmq.PULL)
zmqsocket.bind("tcp://*:5555")


class TaskstatusUpdater(Thread):
    def __init__(self):
        Thread.__init__(self)

    def run(self):
        while True:
            # Wait for next request from client
            task_id = int(zmqsocket.recv_multipart()[0])
            taskcolorstat = int(zmqsocket.recv_multipart()[1])
            taskstatus = zmqsocket.recv_multipart()[2]
            time.sleep(0.1)
            print(task_id, taskstatus, taskcolorstat)


thread = TaskstatusUpdater()
thread.start()
The PUSH part constantly sends updates about the status of the other script. It looks something like this:
import time
import sys
import zmq

# zmq - client startup and connecting
try:
    context = zmq.Context()
    print("Connecting to server…")
    zmqsocket = context.socket(zmq.PUSH)
    zmqsocket.connect("tcp://localhost:5555")
    print("successful")
except:
    print('error could not connect to service')
# zmq - client startup and connecting

for i in range(10):
    zmqsocket.send_multipart([b_task_id, b"0", b"first message"])
    time.sleep(3)  # doing stuff
    zmqsocket.send_multipart([b_task_id, b"1", b"second message"])
b_task_id is generated earlier in the program and is a simple binary value created out of an integer. There are multiple of those PUSH scripts running at the same time, and through the b_task_id I can tell which script is responding to the PULL.
It now often happens that those multipart messages get mixed up with each other. Can somebody explain to me why that is and how I can fix this problem?
For example, sometimes the output is:
2 b'second message' 0
The output that I was expecting is:
2 b'second message' 1
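For context on why the parts get mixed up: every recv_multipart() call consumes one complete message (all of its frames), so calling it three times per loop iteration takes frames from three different messages. A minimal sketch of a PULL loop that unpacks a single call per iteration (socket setup copied from the question):

import zmq

context = zmq.Context()
zmqsocket = context.socket(zmq.PULL)
zmqsocket.bind("tcp://*:5555")

while True:
    # recv_multipart() returns all frames of ONE message; unpack a single
    # call instead of calling it three times per iteration.
    task_id_raw, taskcolorstat_raw, taskstatus = zmqsocket.recv_multipart()
    task_id = int(task_id_raw)
    taskcolorstat = int(taskcolorstat_raw)
    print(task_id, taskstatus, taskcolorstat)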

How do I exit a wsgiserver that was started on its own thread?

I have a project that I'm working on where I hope to be able to:
start a wsgiserver on its own thread
do stuff (some of which involves interacting with the wsgiserver)
close the thread
end the program
I can do the first two steps, but I'm having trouble with the last two. I've provided a simpler version of my project below that exhibits the issue: I can do the first two steps from above, just not the last two.
A couple questions:
How do I get the thread to stop the wsgi server?
Do I just need to pull out the wsgiserver code and start it on its own process?
Some details of my project that may head off some questions:
My project currently spins up other processes that are intended to talk to my wsgi server. I can spin everything up and get my processes to talk to my server, but I'm not able to get a graceful shutdown. This code sample is intended to provide a 'relatively simple' sample that can be more easily reviewed.
There are remnants of failed attempts at solving this in the code; hopefully they aren't too distracting.
# Simple echo program
# listens on port 3000 and returns anything posted by http to that port

# installing required libraries
# download/install Microsoft Visual C++ 9.0 for Python
# https://www.microsoft.com/en-us/download/details.aspx?id=44266
# pip install greenlet
# pip install gevent

import sys
import threading
import urllib
import urllib2
import time
import traceback

from gevent.pywsgi import WSGIServer, WSGIHandler
from gevent import socket

server = ""


def request_error(start_response):
    global server
    # Send error to atm - must provide start_response
    start_response('500', [])
    #server.stop()
    return ['']


def handle_transaction(env, start_response):
    global server
    try:
        result = env['wsgi.input'].read()
        print("Received: " + result)
        sys.stdout.flush()
        start_response('200 OK', [])
        if (result.lower() == "exit"):
            #server.stop()
            return result
        else:
            return result
    except:
        return request_error(start_response)


class ErrorCapturingWSGIHandler(WSGIHandler):
    def read_requestline(self):
        result = None
        try:
            result = WSGIHandler.read_requestline(self)
        except:
            protocol_error()
            raise  # re-raise error, to not change WSGIHandler functionality
        return result


class ErrorCapturingWSGIServer(WSGIServer):
    handler_class = ErrorCapturingWSGIHandler


def start_server():
    global server
    server = ErrorCapturingWSGIServer(
        ('', 3000), handle_transaction, log=None)
    server.serve_forever()


def main():
    global server
    # start server on its own thread
    print("Echoing...")
    commandServerThread = threading.Thread(target=start_server)
    commandServerThread.start()
    # now that the server is started, send data
    req = urllib2.Request("http://127.0.0.1:3000", data='ping')
    response = urllib2.urlopen(req)
    reply = response.read()
    print(reply)
    # take a look at the threading info
    print(threading.active_count())
    # try to exit
    req = urllib2.Request("http://127.0.0.1:3000", data='exit')
    response = urllib2.urlopen(req)
    reply = response.read()
    print(reply)
    # Now that I'm done, exit
    #sys.exit(0)
    return


if __name__ == '__main__':
    main()
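A sketch of one possible shutdown path, reusing server, start_server and the request code from the question (gevent's WSGIServer exposes stop(), and serve_forever() returns once it has run, so the server thread can then be joined); only the changed pieces are shown:

import threading

import gevent


def handle_transaction(env, start_response):
    result = env['wsgi.input'].read()
    start_response('200 OK', [])
    if result.lower() == "exit":
        # Schedule the shutdown on the server's own gevent loop, so the
        # current response is still delivered before the server stops.
        gevent.spawn_later(0.5, server.stop)
    return [result]


def main():
    commandServerThread = threading.Thread(target=start_server)
    commandServerThread.start()
    # ... send the 'ping' and 'exit' requests exactly as in the question ...
    # serve_forever() returns after server.stop() has run, so the server
    # thread finishes and join() lets the program exit cleanly.
    commandServerThread.join()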

Tornado main IOLoop takes 20s to process new request despite ProcessPoolExecutor

I have some heavy computation that needs to be done upon receiving a request without blocking the main IOLoop. To achieve that goal, I'm using ProcessPoolExecutor in a coroutine:
from concurrent.futures import ProcessPoolExecutor
from functools import partial
from random import uniform
import uuid
import time
from datetime import datetime

import tornado.gen
import tornado.ioloop
import tornado.web
import tornado.httpserver


def worker_function(msg):
    start = time.time()
    count = 0
    seed = 1
    while count < 99999999:
        seed = uniform(1.1, 1.2)
        count += 1
    end = time.time()
    msg['seed'] = seed
    msg['local_time'] = end - start
    return msg


class EventHandler(tornado.web.RequestHandler):
    def initialize(self):
        self.executor = ProcessPoolExecutor(2)

    @tornado.gen.coroutine
    def get(self):
        print "Received request at %s" % datetime.now()
        result = yield self.executor.submit(
            worker_function, {'id': str(uuid.uuid1())}
        )
        self.write(result)
        self.finish()
        print "Finished processing at %s" % datetime.now()


if __name__ == "__main__":
    counter = {'count': 0}
    application = tornado.web.Application([
        (r"/test", EventHandler),
    ])
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
To test the correct behavior, I'm loading the URL in two separate browser tabs with around a 1-second delay. Here is what the script outputs:
Received request at 2015-09-09 23:58:00.899278
Received request at 2015-09-09 23:58:23.329648
Finished processing at 2015-09-09 23:58:44.530322
Finished processing at 2015-09-09 23:59:05.120466
The two processes are indeed running in parallel and I can see two CPU cores being used at 100% in htop. The problem is the 20-second delay between the two "Received request" lines.
How can I make sure that the main IOLoop stays snappy?
Ps: The script is running on a Linux VM with 2 CPU cores.
The main issue is that you're testing with a browser, and browsers don't like to request the same url twice at the same time even if it's in two different tabs (they wait for the first request to finish before starting the second to see if they get a cacheable response). Add some unique query parameter to each url and you should see both tabs proceed in parallel (or test with two different browsers instead of two tabs in the same browser).
Also, your ProcessPoolExecutor should be a global (or a member of your Application) instead of a member of your RequestHandler. All requests should share the same executor.
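As a rough sketch of that suggestion (worker_function as defined in the question; making the executor a module-level global is one option, attaching it to the Application is another):

from concurrent.futures import ProcessPoolExecutor
import tornado.gen
import tornado.ioloop
import tornado.web

# One pool shared by every request; initialize() runs per request, so creating
# the pool there would spawn new worker processes on each GET.
executor = ProcessPoolExecutor(2)


class EventHandler(tornado.web.RequestHandler):

    @tornado.gen.coroutine
    def get(self):
        # Tornado resolves concurrent.futures.Future objects inside coroutines,
        # so the IOLoop stays responsive while the worker process runs.
        result = yield executor.submit(worker_function, {'id': 'example'})
        self.write(result)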

Get pymodbus to read registers from multiple clients asynchronously

I can connect to my modbus slaves using pymodbus and read those connections using the synchronous client. When I attempt to use the asynchronous client with Twisted, I can read multiple values and get output from one of the clients, but the subsequent one hangs if I don't issue a disconnect; if I do issue a disconnect, the client disconnects before the values are returned.
I'm a python novice and this is just me scraping together code from various sources. I'm sure there is a super simple solution. Forgive the code, I'm no programmer. Thanks!!
#!/usr/bin/env python
import logging
import time

from twisted.internet import reactor
from twisted.internet import defer, task
from twisted.internet.endpoints import TCP4ClientEndpoint

from pymodbus.constants import Defaults
from pymodbus.client.async import ModbusClientFactory

logging.basicConfig()
log = logging.getLogger()
log.setLevel(logging.DEBUG)

hosts = ['10.3.72.89', '10.3.72.57']
start = time.time()


def show_hreg(response):
    log.debug('1st hreg = {0}'.format(response.getRegister(0)))
    print response.registers


def foo(protocol):
    rlist = []
    r2 = protocol.read_holding_registers(1699, 4, unit=1)
    r2.addCallback(show_hreg)
    r3 = protocol.read_holding_registers(1099, 4, unit=1)
    r3.addCallback(show_hreg)
    r4 = protocol.read_holding_registers(1599, 4, unit=1)
    r4.addCallback(show_hreg)
    rlist.append(r2)
    rlist.append(r3)
    rlist.append(r4)
    results = defer.gatherResults(rlist)
    return results
    #protocol.transport.loseConnection()
    #reactor.callLater(1, protocol.transport.loseConnection)
    #reactor.callLater(1.5, reactor.stop)


def main(reactor, hosts):
    dlist = []
    for server in hosts:
        d = TCP4ClientEndpoint(reactor, server, Defaults.Port)
        protocol = d.connect(ModbusClientFactory())
        protocol.addCallback(foo)
        dlist.append(protocol)
    # finish the process when the "queue" is done
    results = defer.gatherResults(dlist).addCallback(printElapsedTime)
    return results


def printElapsedTime(ignore):
    print "Elapsed Time: %s" % (time.time() - start)


task.react(main, [hosts])
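One way to sequence the disconnect, sketched against the code above (show_hreg, the register addresses and ModbusClientFactory usage are taken from the question; close_connection is a hypothetical helper): only drop each connection once its gathered read Deferreds have fired, by chaining the disconnect onto gatherResults instead of calling it right away.

from twisted.internet import defer


def foo(protocol):
    reads = [
        protocol.read_holding_registers(1699, 4, unit=1),
        protocol.read_holding_registers(1099, 4, unit=1),
        protocol.read_holding_registers(1599, 4, unit=1),
    ]
    for d in reads:
        d.addCallback(show_hreg)

    results = defer.gatherResults(reads)

    def close_connection(value):
        # Runs only after all three reads have completed, so the replies
        # are not lost to an early loseConnection().
        protocol.transport.loseConnection()
        return value

    results.addCallback(close_connection)
    return results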

Implementing Twisted style local multiple deferred callbacks in Celery

I am quite new to using Celery and was wondering how Twisted-style multiple deferred callbacks can be implemented in Celery.
My Twisted code uses Perspective Broker and is as follows. I have a Handler (server) which handles some events and returns the result. The Dispatcher (client) prints the result returned, using a deferred callback.
Handler.py (Server)
from twisted.application import service, internet
from twisted.internet import reactor, task
from twisted.spread import pb

from Dispatcher import Event
from Dispatcher import CopyEvent


class ReceiverEvent(pb.RemoteCopy, Event):
    pass

pb.setUnjellyableForClass(CopyEvent, ReceiverEvent)


class Handler(pb.Root):
    def remote_eventEnqueue(self, pond):
        d = task.deferLater(reactor, 5, handle_event, sender=self)
        return d


def handle_event(sender):
    print "Do Something"
    return "did something"


if __name__ == '__main__':
    h = Handler()
    reactor.listenTCP(8739, pb.PBServerFactory(h))
    reactor.run()
Now the Dispatcher.py (Client)
from twisted.spread import pb, jelly
from twisted.python import log
from twisted.internet import reactor

from Event import Event


class CopyEvent(Event, pb.Copyable):
    pass


class Dispatcher:
    def __init__(self, event):
        self.event = event

    def dispatch_event(self, remote):
        d = remote.callRemote("eventEnqueue", self.event)
        d.addCallback(self.printMessage)

    def printMessage(self, text):
        print text


def main():
    from Handler import CopyEvent
    event = CopyEvent()
    d = Dispatcher(event)
    factory = pb.PBClientFactory()
    reactor.connectTCP("localhost", 8739, factory)
    deferred = factory.getRootObject()
    deferred.addCallback(d.dispatch_event)
    reactor.run()


if __name__ == '__main__':
    main()
I tried implementing this in Celery.
Handler.py (Server)
from celery import Celery

app = Celery('tasks', backend='amqp', broker='amqp://guest@localhost//')


@app.task
def handle_event():
    print "Do Something"
    return "did something"
Dispatcher.py (Client)
from Handler import handle_event
from datetime import datetime


def print_message(text):
    print text


t = handle_event.apply_async(countdown=10, link=print_message.s('Done'))  ##HOWTO?
My exact question is how one can implement Twisted-style deferred callbacks on local functions like print_message in Celery. When the handle_event task finishes, it returns a result on which I would like to have another callback method (print_message) that is LOCAL.
Is there any other possible design workflow to do this in Celery?
Thanks
JR
OK, so I finally figured it out. It is not quite possible to add callbacks directly in the Celery client, Twisted style. But Celery supports task-monitoring functionality that enables the client to monitor different kinds of worker events and attach callbacks to them.
A simple task monitor (Task_Monitor.py) would look something like this. (Details can be found in the Celery real-time processing documentation: http://docs.celeryproject.org/en/latest/userguide/monitoring.html#real-time-processing)
Task_Monitor.py
from celery import Celery


def task_monitor(app):
    state = app.events.State()

    def announce_completed_tasks(event):
        state.event(event)
        task = state.tasks.get(event['uuid'])
        print('TASK SUCCEEDED: %s[%s] %s' % (task.name, task.uuid, task.info(), ))

    with app.connection() as connection:
        recv = app.events.Receiver(connection, handlers={'task-succeeded': announce_completed_tasks})
        recv.capture(limit=None, timeout=None, wakeup=True)


if __name__ == '__main__':
    app = Celery(broker='amqp://guest@REMOTEHOST//')
    task_monitor(app)
Task_Monitor.py has to be run as a separate process (client side). In addition, the Celery application (server side) needs to be configured with
app.conf.CELERY_SEND_EVENTS = True
or started with the -E option, so that it sends the events the worker monitor needs.
I would recommend using chains or one of the similar mechanisms from the Celery Canvas docs.
Example taken from the docs:
>>> from celery import chain
>>> from proj.tasks import add, mul
# (4 + 4) * 8 * 10
>>> res = chain(add.s(4, 4), mul.s(8), mul.s(10))
proj.tasks.add(4, 4) | proj.tasks.mul(8) | proj.tasks.mul(10)
>>> res.apply_async()
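Alternatively, to stay close to the original link= attempt: the linked callback has to be a registered task itself rather than a plain local function, and it executes on a worker, not in the dispatching client. A minimal sketch under that assumption:

from celery import Celery

app = Celery('tasks', backend='amqp', broker='amqp://guest@localhost//')


@app.task
def handle_event():
    print("Do Something")
    return "did something"


@app.task
def print_message(result):
    # Executed as its own task after handle_event finishes; the parent
    # task's return value is passed in as the first argument.
    print(result)


# link= takes a task signature; the worker runs it once handle_event succeeds.
handle_event.apply_async(countdown=10, link=print_message.s())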
