Below is a snippet from the tornado documentation.
from tornado.httpclient import AsyncHTTPClient

def handle_response(response):
    if response.error:
        print("Error: %s" % response.error)
    else:
        print(response.body)

http_client = AsyncHTTPClient()
http_client.fetch("http://www.google.com/", handle_response)
But this does not print anything to the console. I tried adding a time.sleep at the end but even then nothing prints.
Also, it does not send any request to my server when I change the url above to point to my server.
tornado.httpclient.HTTPClient works fine though.
I am on a MacBook with Python 3.6.1.
Tornado is an asynchronous framework where all tasks are scheduled by a single event loop called the IOLoop. At the end of your program, put:
import tornado.ioloop
tornado.ioloop.IOLoop.current().start()
That will start the loop running and allow the AsyncHTTPClient to fetch the URL.
The IOLoop runs forever, so you need to implement some logic that determines when to call IOLoop.stop(). In your example program, call IOLoop.stop() at the bottom of handle_response. In a real HTTP client program, the loop should run until all work is complete and the program is ready to exit.
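As an aside, Tornado 6 removed the callback argument to fetch(), but the requirement is the same: the loop has to run. On those versions the idiomatic equivalent is IOLoop.run_sync(), which starts the loop, runs a coroutine to completion, and stops the loop again. A minimal sketch (assuming Tornado is installed):

```python
from tornado.httpclient import AsyncHTTPClient
from tornado.ioloop import IOLoop

async def fetch_url(url):
    try:
        response = await AsyncHTTPClient().fetch(url)
        print(response.body)
    except Exception as e:
        print("Error: %s" % e)

# run_sync() starts the IOLoop, runs the coroutine to completion, and
# stops the loop again -- no manual start()/stop() bookkeeping needed.
# IOLoop.current().run_sync(lambda: fetch_url("http://www.google.com/"))
```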
I'm trying to make a Raspberry Pi send plain text to my phone over my local network, from where I plan to pick it up.
I tried the following "hello world"-like program from their official website, but I cannot get it to proceed after a point.
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Ugh, the world? Well.. hello, I guess")

application = tornado.web.Application([
    (r"/", MainHandler),
])
application.listen(8881)
tornado.ioloop.IOLoop.instance().start()

# I cannot get this line to execute!!
print("Hi!!")
Experience: basics of Python, intermediate with Arduino C++, none in networking/web
You're trying to print to STDOUT after starting the event loop, so that print statement never sees the light of day. Basically, you're creating an HTTP server on port 8881 that constantly listens for requests. Whatever logic you want the server to run needs to live in a callback, like MainHandler:
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Ugh, the world? Well.. hello, I guess")
        # All your SMS logic potentially goes here
        self.write("Sent SMS to xyz...")

application = tornado.web.Application([
    (r"/", MainHandler),
])
application.listen(8881)
tornado.ioloop.IOLoop.instance().start()
Then trigger the endpoint by making an HTTP call
curl <IP ADDRESS OF YOUR PI>:8881
This is because Tornado's IOLoop.start() method is a "blocking" call, which means that it doesn't return until some condition is met. This is why your code "gets stuck" on that line. The documentation for this method states:
Starts the I/O loop.
The loop will run until one of the callbacks calls stop(), which will
make the loop stop after the current event iteration completes.
Typically, a call to IOLoop.start() will be the last thing in your program. The exception would be if you want to stop your Tornado application and then proceed to do something else.
Here are two possible solutions to your problem, depending on what you want to accomplish:
call IOLoop.current().stop() from inside the handler (a RequestHandler has no self.stop() method). This will stop the Tornado application, IOLoop.start() will return, and then your print will execute.
move the print("Hi!!") into your handler.
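A sketch of the first option (assuming Tornado is installed): the handler asks the loop to stop once the request has been handled, so start() returns and the lines after it finally run. The run_until_first_request() helper is just an illustrative name:

```python
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Ugh, the world? Well.. hello, I guess")
        # Schedule a stop for after this request is finished, so that
        # IOLoop.start() returns instead of running forever.
        loop = tornado.ioloop.IOLoop.current()
        loop.add_callback(loop.stop)

def run_until_first_request(port=8881):
    application = tornado.web.Application([(r"/", MainHandler)])
    application.listen(port)
    tornado.ioloop.IOLoop.current().start()  # blocks until stop() runs
    print("Hi!!")  # reached only after the loop has stopped

# run_until_first_request()
```

Hitting the endpoint once (e.g. curl localhost:8881) unblocks start() and the final print executes.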
I am writing a script which involves opening an HTTP server and serving a single file. However, the request for this file is also triggered from further down the same script. Currently, I am doing it like this:
import SimpleHTTPServer
import SocketServer
from threading import Thread

Handler = SimpleHTTPServer.SimpleHTTPRequestHandler
httpd = SocketServer.TCPServer(("", 8000), Handler)
Thread(target=httpd.handle_request).start()
This works to handle a single request, but also creates some issues with keyboard input. What is the most efficient, non-blocking way to serve a single HTTP request? Ideally the server would close and release the port upon the completion of the request.
You can try many workarounds, but Flask is the way to go. It is not the simplest or fastest solution, but it is the most reliable one.
Example for serving a single file with flask:
from flask import Flask, render_template, send_file

app = Flask(__name__)

@app.route('/file-downloads/')
def file_downloads():
    try:
        return render_template('downloads.html')
    except Exception as e:
        return str(e)

app.run()
For a non-blocking solution you can do this instead of app.run():
Thread(target=app.run).start()
But I don't recommend running the Flask app in a thread because of the GIL.
You can use the handle_request method to handle a single request, and if you use the server inside a with statement then Python will close the server and release the port when the statement exits. (Alternatively, you can use the server_close method to close the server and release the port if you want, but the with statement provides better error handling.) If you do all of that in a separate thread, you should get the behaviour you are looking for.
Using Python 3:
from threading import Thread
from http.server import HTTPServer, SimpleHTTPRequestHandler

def serve_one_request():
    with HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler) as server:
        server.handle_request()

thread = Thread(target=serve_one_request)
thread.start()
# Do other work
thread.join()
I'm not sure if this will fix the issues with keyboard input you mentioned. If you elaborate on that some more I will take a look.
I want to start a local server and then open a link with a browser from the same python program.
This is what I tried (a very naive and foolish attempt):
from subprocess import call
import webbrowser

call(["python", "-m", "SimpleHTTPServer"])  # This creates a server at port 8000
webbrowser.open_new_tab("some/url")
However, the program doesn't go to the second statement because the server is still running in the background. To open the browser, I need to exit the server which defeats the purpose of running the server.
Can anyone help me by suggesting a working solution?
You could start your web server in a daemon thread (a Python program exits if only daemon threads are left) and then make your requests from the main thread.
The only problem then is to synchronize your main thread with the server thread, since the HTTP server needs some time to start up and won't handle any requests until that point. I am not aware of an easy and clean solution for that, but you could (somewhat hackishly) just pause your main thread for a number of seconds (possibly fewer) and start making requests only after that. Another option is to send requests to the webserver from the very beginning and expect them to fail for some amount of time.
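The request-and-retry option can be sketched with a small stdlib helper; the timings here are illustrative:

```python
import time
from urllib.request import urlopen
from urllib.error import URLError

def wait_until_up(url, timeout=5.0, interval=0.1):
    """Poll url until the server answers or the timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            return urlopen(url)  # server is up: hand back the response
        except URLError:
            if time.monotonic() >= deadline:
                raise  # still down after `timeout` seconds: give up
            time.sleep(interval)
```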
Here is a small sample script with a simple HTTP webserver that serves content from the local file system over TCP on localhost:8080 and a sample request, requesting a file foo.txt from the directory the webserver (and in this case also the script) was started in.
import sys
import requests
import threading
import time

from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler

# set up the HTTP server and start it in a separate daemon thread
httpd = HTTPServer(('localhost', 8080), SimpleHTTPRequestHandler)
thread = threading.Thread(target=httpd.serve_forever)
thread.daemon = True

# if startup time is too long we might want to be able to quit the program
try:
    thread.start()
except KeyboardInterrupt:
    httpd.shutdown()
    sys.exit(0)

# wait until the webserver finished starting up (maybe wait longer or shorter...)
time.sleep(5)

# start sending requests
r = requests.get('http://localhost:8080/foo.txt')
print r.status_code
# => 200 (hopefully...)
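Incidentally, with the Python 3 stdlib the fixed sleep can be avoided altogether: HTTPServer binds and listens inside its constructor, before serve_forever() even starts, so a request made right after starting the thread will not be refused. A sketch (binding to port 0 lets the OS pick a free port):

```python
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler
from urllib.request import urlopen

# The socket is bound and listening as soon as the constructor returns,
# so there is no startup window to sleep through.
httpd = HTTPServer(("localhost", 0), SimpleHTTPRequestHandler)
port = httpd.server_address[1]  # the port the OS actually picked

thread = threading.Thread(target=httpd.serve_forever, daemon=True)
thread.start()

response = urlopen("http://localhost:%d/" % port)  # safe immediately
print(response.status)
httpd.shutdown()
```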
Now I wrote a server following this tutorial:
https://twistedmatrix.com/documents/14.0.0/web/howto/web-in-60/asynchronous-deferred.html
But it seems to be good only for delaying processing, not actually processing 2 or more requests concurrently. My full code is:
from twisted.internet.task import deferLater
from twisted.web.resource import Resource
from twisted.web.server import Site, NOT_DONE_YET
from twisted.internet import reactor, threads
from time import sleep

class DelayedResource(Resource):
    def _delayedRender(self, request):
        print 'Sorry to keep you waiting.'
        request.write("<html><body>Sorry to keep you waiting.</body></html>")
        request.finish()

    def make_delay(self, request):
        print 'Sleeping'
        sleep(5)
        return request

    def render_GET(self, request):
        d = threads.deferToThread(self.make_delay, request)
        d.addCallback(self._delayedRender)
        return NOT_DONE_YET

def main():
    root = Resource()
    root.putChild("social", DelayedResource())
    factory = Site(root)
    reactor.listenTCP(8880, factory)
    print 'started httpserver...'
    reactor.run()

if __name__ == '__main__':
    main()
But when I send 2 requests, the console output is:
Sleeping
Sorry to keep you waiting.
Sleeping
Sorry to keep you waiting.
But if it were concurrent, it should be:
Sleeping
Sleeping
Sorry to keep you waiting.
Sorry to keep you waiting.
So the question is: how do I make Twisted process the next request without waiting for the previous response to finish?
Also, make_delay in real life is a large function with heavy logic. Basically I spawn a lot of threads, make requests to other URLs and collect the results into the response, so it can take some time and is not easily ported.
Twisted processes everything in one event loop. If something blocks the execution, it also blocks Twisted. So you have to avoid blocking calls.
In your case you have time.sleep(5), which is blocking. You have already found the better way to do it in Twisted: deferLater(). It returns a Deferred that continues execution after the given delay and releases the event loop, so other things can be done meanwhile. In general, anything that returns a Deferred is good.
If you have to do heavy work that for some reason can not be deferred, you should use deferToThread() to execute this work in a thread. See https://twistedmatrix.com/documents/15.5.0/core/howto/threading.html for details.
You can use greenlets in your code (like threads).
You need to install the geventreactor - https://gist.github.com/yann2192/3394661
And use reactor.deferToGreenlet()
Also, in your long-calculation code you need to call gevent.sleep() to switch context to another greenlet:

msecs = 5 * 1000
timeout = 100
for _ in xrange(0, msecs, timeout):
    sleep(timeout)
    gevent.sleep()
I have a flask REST endpoint that does some cpu-intensive image processing and takes a few seconds to return. Often, this endpoint gets called, then aborted by the client. In these situations I would like to cancel processing. How can I do this in flask?
In node.js, I would do something like:
req.on('close', function(){
//some handler
});
I was expecting flask to have something similar, or a synchronous method (request.isClosed()) that I could check at certain points during my processing and return if it's closed, but I can't find one.
I thought about sending something to test that the connection is still open, and catching the exception if it fails, but it seems Flask buffers all output, so the exception isn't thrown until the processing completes and tries to return the result:
An established connection was aborted by the software in your host machine
How can I cancel my processing half way through if the client aborts their request?
There is a potentially... hacky solution to your problem. Flask has the ability to stream content back to the user via a generator. The hacky part would be streaming blank data as a check to see if the connection is still open, and then when your content is finished the generator could produce the actual image. Your generator could check whether processing is done and yield "" (or whatever) while it's not finished.
from flask import Response

@app.route('/image')
def generate_large_image():
    def generate():
        while True:
            if not processing_finished():
                yield ""
            else:
                yield get_image()
                return
    return Response(generate(), mimetype='image/jpeg')
I don't know what exception you'll get if the client closes the connection, but I'm willing to bet it's error: [Errno 32] Broken pipe.
As far as I know, you can't tell whether the connection was closed by the client during execution, because the server does not test whether the connection is still open while handling a request. What you can do is create a custom request_handler in your Flask application to detect, after the request has been processed, that the connection was "dropped".
For example:
from flask import Flask
from time import sleep
from werkzeug.serving import WSGIRequestHandler

app = Flask(__name__)

class CustomRequestHandler(WSGIRequestHandler):
    def connection_dropped(self, error, environ=None):
        print 'dropped, but it is called at the end of the execution :('

@app.route("/")
def hello():
    for i in xrange(3):
        print i
        sleep(1)
    return "Hello World!"

if __name__ == "__main__":
    app.run(debug=True, request_handler=CustomRequestHandler)
Maybe you want to investigate a bit more: since your custom request_handler is created when a request comes in, you could create a thread in its __init__ that checks the status of the connection every second, and stop the image processing when it detects that the connection is closed (check this thread). But I think this is a bit complicated :(.
I was just attempting to do this same thing in a project, and I found that with my stack of uWSGI and nginx, when a streaming response was interrupted on the client's end, the following errors occurred:
SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request
uwsgi_response_write_body_do(): Broken pipe [core/writer.c line 404] during GET
IOError: write error
and I could just use a regular old try and except like below
try:
    for chunk in iter(process.stdout.readline, ''):
        yield chunk
    process.wait()
except:
    app.logger.debug('client disconnected, killing process')
    process.terminate()
    process.wait()
This gave me:
Instant streaming of data using Flask's generator functionality
No zombie processes on cancelled connection
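Putting those pieces together, a minimal self-contained route along these lines might look like the sketch below. The echo command stands in for the real long-running process, and the bare except is narrowed to GeneratorExit, which is what Werkzeug's development server raises into the generator when the client disconnects (uWSGI surfaces an IOError instead, as the errors above show):

```python
import subprocess
from flask import Flask, Response

app = Flask(__name__)

@app.route('/stream')
def stream():
    # Stand-in for the real long-running command being streamed.
    process = subprocess.Popen(['echo', 'hello'],
                               stdout=subprocess.PIPE,
                               universal_newlines=True)

    def generate():
        try:
            for chunk in iter(process.stdout.readline, ''):
                yield chunk
            process.wait()
        except GeneratorExit:
            # The generator is closed when the client goes away:
            # kill the child so no zombie process is left behind.
            app.logger.debug('client disconnected, killing process')
            process.terminate()
            process.wait()

    return Response(generate(), mimetype='text/plain')
```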