So here's the deal: I'm writing a simple, lightweight IRC app, hosted locally, that basically does the same job as XChat but runs in your browser, much like Sabnzbd. I display search results in the browser as an HTML table, and an AJAX GET request fired by an onclick event launches the download. A second AJAX GET request, sent in a one-second loop, polls for download information (status, progress, speed, ETA, etc.). I hit a bump with these simultaneous AJAX requests: my CGI handler seems to handle only one request at a time, so while the main thread is processing the download, the status requests are sent but never answered.
Since I had a Django app lying around, I tried implementing this IRC app there, and everything works fine: simultaneous requests are handled properly.
So is there something I should know about the HTTP handler? Is the basic CGI handler simply unable to deal with simultaneous requests?
I use the following for my CGI IRC app:
from http.server import BaseHTTPRequestHandler, HTTPServer, CGIHTTPRequestHandler
If it's not a matter of theory but of my code, I'll gladly post the various Python scripts if that helps.
A little bit deeper into the documentation:
These four classes process requests synchronously; each request must be completed before the next request can be started.
TL;DR: Use a real web server.
So, after further research, here's my code, which works:
from http.server import BaseHTTPRequestHandler, HTTPServer, CGIHTTPRequestHandler
from socketserver import ThreadingMixIn
import threading
import cgitb; cgitb.enable()  ## This line enables CGI error reporting
import webbrowser


class HTTPRequestHandler(CGIHTTPRequestHandler):
    """Handle requests in a separate thread."""

    def do_GET(self):
        if "shutdown" in self.path:
            self.send_head()
            print("shutdown")
            server.stop()
        else:
            self.send_head()


class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    allow_reuse_address = True
    daemon_threads = True

    def shutdown(self):
        self.socket.close()
        HTTPServer.shutdown(self)


class SimpleHttpServer():
    def __init__(self, ip, port):
        self.server = ThreadedHTTPServer((ip, port), HTTPRequestHandler)
        self.status = 1

    def start(self):
        self.server_thread = threading.Thread(target=self.server.serve_forever)
        self.server_thread.daemon = True
        self.server_thread.start()

    def waitForThread(self):
        self.server_thread.join()

    def stop(self):
        self.server.shutdown()
        self.waitForThread()


if __name__ == '__main__':
    HTTPRequestHandler.cgi_directories = ["/", "/ircapp"]
    server = SimpleHttpServer('localhost', 8020)
    print('HTTP Server Running...........')
    webbrowser.open_new_tab('http://localhost:8020/ircapp/search.py')
    server.start()
    server.waitForThread()
I have a Python-based page which receives data by POST; the data is then forwarded to the Crossbar server using Autobahn (WAMP). It works well the first one or two times, but when it's called again after that it throws ReactorNotRestartable.
Now, I need this to work one way or another: either by reusing this "Reactor" based on a conditional check, or by stopping it properly after every run. (The first option would be preferable because it might reduce the execution time.)
Thanks for your help!
Edit:
This is in a web page (a Django view), so it needs to run as many times as the page is loaded / data is sent to it via POST.
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks
from twisted.internet.endpoints import TCP4ClientEndpoint
from twisted.application.internet import ClientService
from autobahn.wamp.types import ComponentConfig
from autobahn.twisted.wamp import ApplicationSession, WampWebSocketClientFactory


class MyAppSession(ApplicationSession):

    def __init__(self, config):
        ApplicationSession.__init__(self, config)

    def onConnect(self):
        self.join(self.config.realm)

    def onChallenge(self, challenge):
        pass

    @inlineCallbacks
    def onJoin(self, details):
        yield self.call('receive_data', data=message)
        yield self.leave()

    def onLeave(self, details):
        self.disconnect()

    def onDisconnect(self):
        reactor.stop()


message = "data from POST[]"
session = MyAppSession(ComponentConfig('realm_1', {}))
transport = WampWebSocketClientFactory(session, url='ws://127.0.0.1:8080')
endpoint = TCP4ClientEndpoint(reactor, '127.0.0.1', 8080)
service = ClientService(endpoint, transport)
service.startService()
reactor.run()
I figured out a probably hacky and not-so-good way by using multiprocessing and putting reactor.stop() inside onJoin(), right after the function call. This way I don't have to bother with the "Twisted must run in the main thread" issue, because the whole process gets killed as soon as my work is done.
Is there a better way?
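For reference, here is a minimal sketch of that multiprocessing workaround (the _run_session and forward_to_crossbar helpers are my own illustration, not code from the original view): each POST spawns a throwaway child process, so the reactor is started and stopped in a process that never has to be restarted.

from multiprocessing import Process

def _run_session(payload):
    # Assumes MyAppSession from the snippet above lives in this module and
    # reads the module-level name "message" in onJoin().
    global message
    message = payload

    session = MyAppSession(ComponentConfig('realm_1', {}))
    transport = WampWebSocketClientFactory(session, url='ws://127.0.0.1:8080')
    endpoint = TCP4ClientEndpoint(reactor, '127.0.0.1', 8080)
    service = ClientService(endpoint, transport)
    service.startService()
    reactor.run()  # blocks until reactor.stop() runs in onDisconnect()

def forward_to_crossbar(payload):
    # Called from the Django view: the reactor lives and dies inside the
    # disposable child process, so ReactorNotRestartable never hits the web worker.
    p = Process(target=_run_session, args=(payload,))
    p.start()
    p.join()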
Today I was working on creating some unit tests for my application, a WebSocket client.
In the real world, the WS server is an embedded PC on the home network.
Now, for my unit tests, I'd like to create a fake WS server and use it to test the client.
Can you suggest a plug-and-play WS server that I can start inside my unit test setup and use for testing?
I tried to use the Autobahn WS server, but it is not plug-and-play: it should work, but I'm not able to handle it correctly in a separate thread.
My goal is to test the client, not to develop a dummy server.
Can you help me with something easy and ready-to-use?
Thanks in advance,
Salvo
Here is the minimal code I wrote in order to avoid the blocking call (serve_forever).
I used ws4py as the WebSocket library.
from wsgiref.simple_server import make_server
from ws4py.websocket import WebSocket
from ws4py.server.wsgirefserver import WSGIServer, WebSocketWSGIRequestHandler
from ws4py.server.wsgiutils import WebSocketWSGIApplication
import threading


class TestWebSocket(WebSocket):
    def received_message(self, message):
        self.send("+OK", False)


class TestServer:
    def __init__(self, hostname='127.0.0.1', port=8080):
        self.server = make_server(hostname, port,
                                  server_class=WSGIServer,
                                  handler_class=WebSocketWSGIRequestHandler,
                                  app=WebSocketWSGIApplication(handler_cls=TestWebSocket))
        self.server.initialize_websockets_manager()
        self.thread = threading.Thread(target=self.server.serve_forever)
        self.thread.start()
        print("Server started for {}:{}".format(hostname, str(port)))

    def shutdown(self):
        self.server.shutdown()
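For completeness, here is a minimal sketch of how TestServer might be driven from a unittest (the test case name and port number are placeholders of my own, not part of the original answer):

import unittest

class WebSocketClientTest(unittest.TestCase):
    def setUp(self):
        # Start the fake server on a spare port before each test.
        self.fake_server = TestServer(port=8081)

    def tearDown(self):
        # Stop serve_forever() and wait for the background thread to exit.
        self.fake_server.shutdown()
        self.fake_server.thread.join()

    def test_client_gets_ok_reply(self):
        # Placeholder: connect the real client under test to
        # ws://127.0.0.1:8081/ and assert on the "+OK" reply.
        pass

if __name__ == '__main__':
    unittest.main()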
I am trying to understand the examples given here: https://github.com/tavendo/AutobahnPython/tree/master/examples/twisted/wamp/basic/pubsub/basic
I built this script, which is supposed to handle multiple pub/sub WebSocket connections and also open a TCP port (8123) for incoming control messages. When a message comes in on port 8123, the application should broadcast it to all the connected subscribers. How do I make NotificationProtocol or NotificationFactory talk to the WebSocket side and make the WebSocket server broadcast a message?
Another thing that I do not understand is the URL. The client JavaScript connects to the URL http://:8080/ws . Where does the "ws" come from?
Also, can someone explain the purpose of RouterFactory, RouterSessionFactory, and this bit:
from autobahn.wamp import types
session_factory.add(WsNotificationComponent(types.ComponentConfig(realm="realm1")))
My code is below:
import sys, time
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, Factory
from twisted.internet.defer import inlineCallbacks
from autobahn.twisted.wamp import ApplicationSession
from autobahn.twisted.util import sleep


class NotificationProtocol(Protocol):
    def __init__(self, factory):
        self.factory = factory

    def dataReceived(self, data):
        print("received new data")


class NotificationFactory(Factory):
    protocol = NotificationProtocol


class WsNotificationComponent(ApplicationSession):
    @inlineCallbacks
    def onJoin(self, details):
        counter = 0
        while True:
            self.publish("com.myapp.topic1", "test %d" % counter)
            counter += 1
            yield sleep(1)


## we use an Autobahn utility to install the "best" available Twisted reactor
from autobahn.twisted.choosereactor import install_reactor
reactor = install_reactor()

## create a WAMP router factory
from autobahn.wamp.router import RouterFactory
router_factory = RouterFactory()

## create a WAMP router session factory
from autobahn.twisted.wamp import RouterSessionFactory
session_factory = RouterSessionFactory(router_factory)

from autobahn.wamp import types
session_factory.add(WsNotificationComponent(types.ComponentConfig(realm="realm1")))

from autobahn.twisted.websocket import WampWebSocketServerFactory
transport_factory = WampWebSocketServerFactory(session_factory)
transport_factory.setProtocolOptions(failByDrop=False)

## start the server from an endpoint
from twisted.internet.endpoints import serverFromString
server = serverFromString(reactor, "tcp:8080")
server.listen(transport_factory)

notificationFactory = NotificationFactory()
reactor.listenTCP(8123, notificationFactory)

reactor.run()
"How do i make NotificationProtocol or NotificationFactory talk to the websocket and make the websocket server broadcast a message":
Check out one of my other answers on SO: Persistent connection in twisted. Jump down to the example code and model your websocket logic like the "IO" logic and you'll have a good fit (You might also want to see the follow-on answer about the newer endpoint calls from one of the twisted core-team too)
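To make the idea concrete, here is a rough sketch of one way to wire the two sides together (my own illustration, not code from the linked answer): give the factory a reference to the WAMP component and re-publish whatever arrives on the TCP port. The wamp_component attribute is an assumption of mine; the topic name comes from the question's code.

class NotificationProtocol(Protocol):
    def dataReceived(self, data):
        # Re-publish the incoming control message to all WAMP subscribers.
        component = self.factory.wamp_component
        if component is not None:
            component.publish("com.myapp.topic1", data.decode("utf-8", "replace"))


class NotificationFactory(Factory):
    protocol = NotificationProtocol
    wamp_component = None  # filled in by the WAMP component once it joins


class WsNotificationComponent(ApplicationSession):
    def onJoin(self, details):
        # notificationFactory is the module-level instance from the script above;
        # create it before session_factory.add(...) so the name exists here.
        notificationFactory.wamp_component = self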
"Where does the "ws" come from ?"
Websockets are implemented by retasking http connections, which by their nature have to have a specific path on the request. That "ws" path typically would map to a special http handler that autobahn is building for you to process websockets (or at least that's what your javascript is expecting...). Assuming thing are setup right you can actually point your web-browswer at that url and it should print back an error about the websocket handshake (Expected WebSocket Headers in my case, but I'm using cyclones websockets not autobahn).
P.S. one of the cool side-effects from "websockets must have a specific path" is that you can actually mix websockets and normal http content on the same handler/listen/port, this gets really handy when your trying to run them all on the same SSL port because your trying to avoid the requirement of a proxy front-ending your code.
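As a sketch of that P.S. (my own illustration using Autobahn's Twisted Web integration rather than Cyclone, assuming autobahn.twisted.resource.WebSocketResource is available in your version), plain HTTP content and a WebSocket endpoint can share one listening port like this:

from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.server import Site
from autobahn.twisted.websocket import WebSocketServerFactory, WebSocketServerProtocol
from autobahn.twisted.resource import WebSocketResource


class EchoProtocol(WebSocketServerProtocol):
    def onMessage(self, payload, isBinary):
        # Echo whatever the client sends, just so there is something observable.
        self.sendMessage(payload, isBinary)


class IndexPage(Resource):
    isLeaf = True

    def render_GET(self, request):
        return b"normal HTTP content"


ws_factory = WebSocketServerFactory(u"ws://127.0.0.1:8080")
ws_factory.protocol = EchoProtocol

root = Resource()
root.putChild(b"", IndexPage())                       # plain HTTP at /
root.putChild(b"ws", WebSocketResource(ws_factory))   # WebSocket handshake at /ws

reactor.listenTCP(8080, Site(root))
reactor.run()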
Recently I started a small personal project. It's a realtime web system based on asyncio and autobahn-python. However, I would also like to serve some static files via HTTP, from the same process. My HTTP server is Tornado sitting on top of the asyncio event loop, and everything works perfectly fine, except that I have to start the Tornado and Autobahn handlers on different ports. Here is a stripped-down version of what I currently have:
import six
import datetime
import asyncio
import tornado.web
import tornado.httpserver
import tornado.netutil
from tornado.platform.asyncio import AsyncIOMainLoop
from autobahn.wamp import router
from autobahn.asyncio import wamp, websocket


# WAMP server
class MyBackendComponent(wamp.ApplicationSession):
    def onConnect(self):
        self.join(u"realm1")

    @asyncio.coroutine
    def onJoin(self, details):
        def utcnow():
            now = datetime.datetime.utcnow()
            return six.u(now.strftime("%Y-%m-%dT%H:%M:%SZ"))

        reg = yield from self.register(utcnow, 'com.timeservice.now')


# HTTP server
class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world!")


tornado_app = tornado.web.Application(
    [
        (r"/", MainHandler),
    ],
)

if __name__ == '__main__':
    router_factory = router.RouterFactory()
    session_factory = wamp.RouterSessionFactory(router_factory)
    session_factory.add(MyBackendComponent())

    transport_factory = websocket.WampWebSocketServerFactory(session_factory,
                                                             debug=True,
                                                             debug_wamp=True)

    AsyncIOMainLoop().install()
    tornado_app.listen(80, "127.0.0.1")

    loop = asyncio.get_event_loop()
    coro = loop.create_server(transport_factory, "127.0.0.1", 8080)
    server = loop.run_until_complete(coro)

    try:
        loop.run_forever()
    except KeyboardInterrupt:
        pass
    finally:
        server.close()
        loop.close()
Question: is there a Right Way to make the Autobahn WAMP and Tornado handlers listen on the same port?
My initial idea was to implement some kind of socket.socket wrapper and dispatch incoming connections there, but it turned out to be awfully messy. I don't want to use any external proxies, because the backend should be as portable as possible.
Also, I'm not asking anybody to implement it for me (but of course you can if you want to!), only whether somebody has already done something similar, before I dive into the autobahn/tornado code.
Thanks in advance!
PS: Sorry for my poor English - it's not my mother tongue.
I'm writing a small web server in Python, using BaseHTTPServer and a custom subclass of BaseHTTPServer.BaseHTTPRequestHandler. Is it possible to make this listen on more than one port?
What I'm doing now:
class MyRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        [...]

class ThreadingHTTPServer(ThreadingMixIn, HTTPServer):
    pass

server = ThreadingHTTPServer(('localhost', 80), MyRequestHandler)
server.serve_forever()
Sure; just start two different servers on two different ports in two different threads that each use the same handler. Here's a complete, working example that I just wrote and tested. If you run this code then you'll be able to get a Hello World webpage at both http://localhost:1111/ and http://localhost:2222/
from threading import Thread
from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        self.wfile.write("Hello World!")


class ThreadingHTTPServer(ThreadingMixIn, HTTPServer):
    daemon_threads = True


def serve_on_port(port):
    server = ThreadingHTTPServer(("localhost", port), Handler)
    server.serve_forever()


Thread(target=serve_on_port, args=[1111]).start()
serve_on_port(2222)
Update:
This also works with Python 3, but three lines need to be changed slightly:
from socketserver import ThreadingMixIn
from http.server import HTTPServer, BaseHTTPRequestHandler
and
self.wfile.write(bytes("Hello World!", "utf-8"))
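Putting those three changes together, a consolidated Python 3 version of the example would look roughly like this (same behaviour, just the renamed modules and the bytes write):

from threading import Thread
from socketserver import ThreadingMixIn
from http.server import HTTPServer, BaseHTTPRequestHandler


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        # On Python 3, wfile expects bytes rather than str.
        self.wfile.write(bytes("Hello World!", "utf-8"))


class ThreadingHTTPServer(ThreadingMixIn, HTTPServer):
    daemon_threads = True


def serve_on_port(port):
    server = ThreadingHTTPServer(("localhost", port), Handler)
    server.serve_forever()


Thread(target=serve_on_port, args=[1111]).start()
serve_on_port(2222)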
Not easily. You could have two ThreadingHTTPServer instances and write your own serve_forever() function (don't worry, it's not a complicated function).
The existing function:
def serve_forever(self, poll_interval=0.5):
    """Handle one request at a time until shutdown.

    Polls for shutdown every poll_interval seconds. Ignores
    self.timeout. If you need to do periodic tasks, do them in
    another thread.
    """
    self.__serving = True
    self.__is_shut_down.clear()
    while self.__serving:
        # XXX: Consider using another file descriptor or
        # connecting to the socket to wake this up instead of
        # polling. Polling reduces our responsiveness to a
        # shutdown request and wastes cpu at all other times.
        r, w, e = select.select([self], [], [], poll_interval)
        if r:
            self._handle_request_noblock()
    self.__is_shut_down.set()
So our replacement would be something like:
def serve_forever(server1, server2):
    while True:
        r, w, e = select.select([server1, server2], [], [], 0)
        if server1 in r:
            server1.handle_request()
        if server2 in r:
            server2.handle_request()
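To tie it together, here is a minimal, self-contained sketch of how that replacement could be driven (the handler class and port numbers are placeholders of my own; I also give select a non-zero timeout so the loop doesn't busy-wait when neither socket is readable):

import select
from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler


class ThreadingHTTPServer(ThreadingMixIn, HTTPServer):
    pass


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        self.wfile.write("Hello from port %d" % self.server.server_port)


def serve_forever(server1, server2):
    # Same loop as the replacement above, with a 0.5s poll instead of 0.
    while True:
        r, w, e = select.select([server1, server2], [], [], 0.5)
        if server1 in r:
            server1.handle_request()
        if server2 in r:
            server2.handle_request()


server1 = ThreadingHTTPServer(('localhost', 1111), Handler)
server2 = ThreadingHTTPServer(('localhost', 2222), Handler)
serve_forever(server1, server2)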
I would say that threading for something this simple is overkill. You're better off using some form of asynchronous programming.
Here is an example using Twisted:
from twisted.internet import reactor
from twisted.web import resource, server


class MyResource(resource.Resource):
    isLeaf = True

    def render_GET(self, request):
        return 'gotten'


site = server.Site(MyResource())
reactor.listenTCP(8000, site)
reactor.listenTCP(8001, site)
reactor.run()
I also think it looks a lot cleaner to have each port handled in the same way, instead of having the main thread handle one port and an additional thread handle the other. Arguably that can be fixed in the thread example, but then you're using three threads.