How to listen for events that happen on stdin in Tornado loop?
In particular, in a Tornado system I want to read from stdin, react to it, and terminate when stdin closes. At the same time, the Tornado web service is running in the same process.
While looking for this, the closest thing I could find was handling the streams of an externally spawned process. However, that is not what I want: I want to handle the I/O stream of the current process, i.e. the one that runs the web server.
Structurally, my server is pretty much hello-world Tornado, so we can base the example off that. I just need to add a stdin handler.
You can use the add_handler method on the IOLoop instance to watch for events on stdin.
Here's a minimal working example:
from tornado.ioloop import IOLoop
from tornado.web import Application, RequestHandler
import sys

class MainHandler(RequestHandler):
    def get(self):
        self.finish("foo")

application = Application([
    (r"/", MainHandler),
])

def on_stdin(fd, events):
    content = fd.readline()
    print("received: %s" % content)

if __name__ == "__main__":
    application.listen(8888)
    IOLoop.instance().add_handler(sys.stdin, on_stdin, IOLoop.READ)
    IOLoop.instance().start()
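The question also asks about terminating when stdin closes, which the snippet above does not cover. A hedged sketch of how the handler could detect that (assuming line-oriented input on a Unix-like platform): readline() returns an empty string at EOF, so you can remove the handler and stop the loop at that point.

def on_stdin(fd, events):
    content = fd.readline()
    if content == "":
        # EOF: stdin was closed, so clean up the handler and shut the loop down
        IOLoop.instance().remove_handler(fd)
        IOLoop.instance().stop()
        return
    print("received: %s" % content)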
I'm trying to make a Raspberry Pi send plain text to my phone over my local network, from where I plan to pick it up.
I tried the following "hello world"-like program from the official Tornado website, but I cannot get it to proceed past a certain point.
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Ugh, the world? Well.. hello, I guess")

application = tornado.web.Application([
    (r"/", MainHandler),
])
application.listen(8881)
tornado.ioloop.IOLoop.instance().start()

# I cannot get this line to execute!!
print("Hi!!")
Experience: basics of Python, intermediate with Arduino C++, none in networking/web
You're trying to print to STDOUT after starting the event loop, so that print statement never sees the light of day. Basically, you're creating an HTTP server on port 8881 that is constantly listening for requests. Whatever logic you want the server to run needs to be in a callback, like MainHandler.
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Ugh, the world? Well.. hello, I guess")
        # All your SMS logic potentially goes here
        self.write("Sent SMS to xyz...")

application = tornado.web.Application(
    [
        (r"/", MainHandler),
    ]
)
application.listen(8881)
tornado.ioloop.IOLoop.instance().start()
Then trigger the endpoint by making an HTTP call
curl <IP ADDRESS OF YOUR PI>:8881
This is because Tornado's IOLoop.start() method is a "blocking" call, which means that it doesn't return until some condition is met. This is why your code "gets stuck" on that line. The documentation for this method states:
Starts the I/O loop.
The loop will run until one of the callbacks calls stop(), which will
make the loop stop after the current event iteration completes.
Typically, a call to IOLoop.start() will be the last thing in your program. The exception would be if you want to stop your Tornado application and then proceed to do something else.
Here are two possible solutions to your problem, depending on what you want to accomplish (see the combined sketch below):
call tornado.ioloop.IOLoop.current().stop() from the handler. This will stop the Tornado application, IOLoop.start() will return, and then your print will execute.
call print("Hi!!") from your handler.
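A hedged sketch combining both options, assuming the goal is simply to get past start() once a request arrives (the comments mark which line belongs to which option):

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Ugh, the world? Well.. hello, I guess")
        # Option 2: do the work (print, send the text, ...) right here
        print("Hi!!")
        # Option 1: stop the loop so execution continues past start()
        tornado.ioloop.IOLoop.current().stop()

application = tornado.web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    application.listen(8881)
    tornado.ioloop.IOLoop.current().start()
    # Only reached after a request has triggered IOLoop.current().stop()
    print("Hi!!")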
I have a Flask app with SocketIO. Because of this, I chose to run the application using the eventlet library. Under the hood, eventlet uses green threads to achieve concurrency, if I'm not mistaken.
In my app, I want to spawn a process and stream the output over web sockets. Below is a toy example
# eventlet==0.25.1
# flask==1.1.1
# flask-socketio==4.2.1
import eventlet
import subprocess

from flask import Flask, jsonify
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)
socketio.init_app(app, cors_allowed_origins='*')

@app.route('/ping')
def start_ping():
    eventlet.spawn_n(ping)
    return jsonify(success=True)

@app.route('/hello')
def hello():
    return jsonify(data='Hello World')

def ping():
    proc = subprocess.Popen(
        ('ping', '-t', 'google.com'),
        stdout=subprocess.PIPE,
    )
    for line in proc.stdout:
        eventlet.sleep(0.5)
        socketio.emit('log_receive', str(line))

if __name__ == '__main__':
    socketio.run(app)
User hits the /ping endpoint.
The ping() function is executed in a green thread, which runs the ping command in a child process.
Lines are read from the subprocess's stdout and emitted via web sockets.
eventlet.sleep(0.5) is used to give other parts of the application a chance to run.
Question:
for line in proc.stdout: is blocking. Until something comes through stdout, eventlet.sleep(0.5) will not execute, and therefore the rest of the application isn't given a chance to run, rendering the app unresponsive.
I came across this question on how to do non-blocking reads from subprocess.PIPE and the suggestion is to essentially use a separate thread to do the reading.
Unfortunately, I can not use a separate thread because of the concurrent/coroutine programming model I'm restricted to because of eventlet/greenlet.
I could use fcntl() to do non-blocking reads, but I'm on Windows, so that isn't an option.
What's an alternative that avoids having the application be at the mercy of the subprocess's stdout?
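One possibility worth noting (not from the original post): eventlet ships a thread-pool helper, eventlet.tpool, meant precisely for blocking calls that cannot be made green, while the calling side remains a green thread. A hedged sketch of ping() rewritten with it, assuming the socketio object from the snippet above; the readline/EOF handling is illustrative:

import subprocess

from eventlet import tpool

def ping():
    proc = subprocess.Popen(
        ('ping', '-t', 'google.com'),
        stdout=subprocess.PIPE,
    )
    while True:
        # tpool.execute runs the blocking readline() in a real OS thread and
        # suspends only this green thread until a line is available.
        line = tpool.execute(proc.stdout.readline)
        if not line:
            break  # EOF: the subprocess exited
        socketio.emit('log_receive', str(line))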
I have a multi-process Tornado web server and I want to create another process that will do some things in the background.
I have a server with the following code:
start_background_process()

app = Application([<someurls>])
server = HTTPServer(app)
server.bind(8888)
server.start(4)  # Forks multiple sub-processes
IOLoop.current().start()

def start_background_process():
    process = multiprocessing.Process(target=somefunc)
    process.start()
and everything is working great.
However, when I try to close the server (by Ctrl-C or by sending a signal)
I get AssertionError: can only join a child process.
I understood the cause of this problem:
when I create a process with multiprocessing, a call to the process's join method
is registered in atexit, and because Tornado does a simple fork, all its children also try to call the join method of the process I created, which they can't, since that process is their sibling and not their child.
So how can I start a process normally in Tornado?
"HTTPTserver start" uses os.fork to fork the 4 sub-processes as it can be seen in its source code.
If you want your method to be executed by all the 4 sub-processes, you have to call it after the processes have been forked.
Having that in mind your code can be changed to look as below:
import multiprocessing

import tornado.web
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop

# A simple external handler, as an example for completeness
from handlers.index import IndexHandler

def method_on_sub_process():
    print("Executing in sub-process")

def start_background_process():
    process = multiprocessing.Process(target=method_on_sub_process)
    process.start()

def main():
    app = tornado.web.Application([(r"/", IndexHandler)])
    server = HTTPServer(app)
    server.bind(8888)
    server.start(4)
    start_background_process()
    IOLoop.current().start()

if __name__ == "__main__":
    main()
Furthermore, to keep the behavior of your program clean during any keyboard interruption, surround the setup of the server with a try...except clause, as below:
def main():
    try:
        app = tornado.web.Application([(r"/", IndexHandler)])
        server = HTTPServer(app)
        server.bind(8888)
        server.start(4)
        start_background_process()
        IOLoop.current().start()
    except KeyboardInterrupt:
        IOLoop.instance().stop()
I'm using a Tornado web server to queue up items that need to be processed outside of the request/response cycle.
In my simplified example below, every time a request comes in, I add a new string to a list called queued_items. I want to create something that will watch that list and process the items as they show up in it.
(In my real code, the items are processed and sent over a TCP socket which may or may not be connected when the web request arrives. I want the web server to keep queuing up items regardless of the socket connection)
I'm trying to keep this code simple and not use external queues/programs like Redis or Beanstalk. It's not going to have very high volume.
What's a good way using Tornado idioms to watch the client.queued_items list for new items and process them as they arrive?
import time
import tornado.ioloop
import tornado.gen
import tornado.web

class Client():
    def __init__(self):
        self.queued_items = []

    @tornado.gen.coroutine
    def watch_queue(self):
        # I have no idea what I'm doing
        items = yield client.queued_items
        # go_do_some_thing_with_items(items)

class IndexHandler(tornado.web.RequestHandler):
    def get(self):
        client.queued_items.append("%f" % time.time())
        self.write("Queued a new item")

if __name__ == "__main__":
    client = Client()
    # Watch the queue for when new items show up
    client.watch_queue()

    # Create the web server
    application = tornado.web.Application([
        (r'/', IndexHandler),
    ], debug=True)
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
There is a library called toro, which provides synchronization primitives for tornado. [Update: As of tornado 4.2, toro has been merged into tornado.]
Sounds like you could just use a toro.Queue (or tornado.queues.Queue in tornado 4.2+) to handle this:
import time
import toro
import tornado.ioloop
import tornado.gen
import tornado.web

class Client():
    def __init__(self):
        self.queued_items = toro.Queue()

    @tornado.gen.coroutine
    def watch_queue(self):
        while True:
            items = yield self.queued_items.get()
            # go_do_something_with_items(items)

class IndexHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        yield client.queued_items.put("%f" % time.time())
        self.write("Queued a new item")

if __name__ == "__main__":
    client = Client()
    # Watch the queue for when new items show up
    tornado.ioloop.IOLoop.current().add_callback(client.watch_queue)

    # Create the web server
    application = tornado.web.Application([
        (r'/', IndexHandler),
    ], debug=True)
    application.listen(8888)
    tornado.ioloop.IOLoop.current().start()
There are a few tweaks required, aside from switching the data structure from a list to a toro.Queue:
We need to schedule watch_queue to run inside the IOLoop using add_callback, rather than trying to call it directly outside of an IOLoop context.
IndexHandler.get needs to be converted to a coroutine, because toro.Queue.put is a coroutine.
I also added a while True loop to watch_queue, so that it will run forever, rather than just processing one item and then exiting.
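For completeness, a hedged sketch of the same pattern written against tornado.queues.Queue with native coroutines (roughly Tornado 5+ on Python 3); this is not part of the original answer, just the modern equivalent of the toro version above:

import time
import tornado.ioloop
import tornado.queues
import tornado.web

class Client():
    def __init__(self):
        self.queued_items = tornado.queues.Queue()

    async def watch_queue(self):
        while True:
            item = await self.queued_items.get()
            # go_do_something_with_item(item)

class IndexHandler(tornado.web.RequestHandler):
    async def get(self):
        await client.queued_items.put("%f" % time.time())
        self.write("Queued a new item")

if __name__ == "__main__":
    client = Client()
    # Schedule the queue watcher on the IOLoop, as in the toro version
    tornado.ioloop.IOLoop.current().add_callback(client.watch_queue)

    application = tornado.web.Application([
        (r'/', IndexHandler),
    ], debug=True)
    application.listen(8888)
    tornado.ioloop.IOLoop.current().start()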
Let's take the hello world application in the Tornado home page:
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

application = tornado.web.Application([
    (r"/", MainHandler),
])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
Is there a way, after the IOLoop has been started and without stopping it, to essentially stop the application and start another one (on the same port or on another)?
I saw that I can add a new application (listening on a different port) at runtime, but I do not know how I could stop existing ones.
The Application.listen() method actually creates an HTTPServer and calls its listen() method. HTTPServer objects have a stop() method, which is probably what you need. But in order to use it you have to explicitly create the HTTPServer object in your script:
from tornado.httpserver import HTTPServer

server = HTTPServer(application)
server.listen(8888)
tornado.ioloop.IOLoop.instance().start()

# somewhere in your code
server.stop()
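A hedged, self-contained sketch of how the stop() method might be used to swap one application for another at runtime; the handler names and the timer trigger are purely illustrative and not part of the original answer:

from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
import tornado.web

class OldHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("old application")

class NewHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("new application")

def swap_applications():
    global server
    server.stop()  # stop accepting new connections on the old server
    new_app = tornado.web.Application([(r"/", NewHandler)])
    server = HTTPServer(new_app)
    server.listen(8888)  # reuse the same port, or pick another

if __name__ == "__main__":
    application = tornado.web.Application([(r"/", OldHandler)])
    server = HTTPServer(application)
    server.listen(8888)
    # Swap after 10 seconds, purely to demonstrate the mechanism
    IOLoop.current().call_later(10, swap_applications)
    IOLoop.current().start()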
Here is a gist about how to gracefully and safely shut down the Tornado IOLoop:
https://gist.github.com/nicky-zs/6304878
You can refer to this implementation to achieve your goal.
To add to @Alex Shkop's answer a few years later, as of Tornado 4.3 .listen() returns a reference to its HTTPServer!
https://www.tornadoweb.org/en/stable/web.html#tornado.web.Application.listen
server = app.listen(8888)  # pass whatever port you need
...  # later
server.stop()
Further, if you're working in a Jupyter notebook and for some reason need a Tornado server, you can try to close the HTTPServer before you recreate it, to avoid OSError: [Errno 98] Address already in use when re-running the cell:
# some Jupyter cell
#
import tornado.web

try:
    server.stop()  # NameError on first cell run
except Exception as ex:
    print(f"server not started to stop: {repr(ex)}")
else:  # did not raise NameError: server was running
    print(f"successfully stopped server: {server}")

app = tornado.web.Application(...)
server = app.listen(9006)  # arbitrary listening port