I am attempting to write some tests using webtest to test out my python GAE application. The problem I am running into is that the application is listening on port 8080 but I cannot configure webtest to hit that port.
For example, I want to use app.get('/getreport') to hit http://localhost:8080/getreport. Instead, it just hits http://localhost/getreport.
Is there a way to set up webtest to hit a particular port?
With paste.proxy.TransparentProxy you can test anything that responds to an http request...
from webtest import TestApp
from paste.proxy import TransparentProxy
testapp = TestApp(TransparentProxy())
res = testapp.get("http://google.com")
assert res.status == "200 OK", "failure....."
In config, and I quote,
port
Required? No, default is "80"
Defines the port number to use for
executing requests, e.g. "8080".
Edit: the user clarified that they mean this webtest (pythonpaste's), not the widely used Canoo application. I wouldn't have guessed, because pythonpaste's webtest is a very different kettle of fish, and I quote:
With this you can test your web
applications without starting an HTTP
server, and without poking into the
web framework shortcutting pieces of
your application that need to be
tested. The tests WebTest runs are
entirely equivalent to how a WSGI HTTP
server would call an application
Since no HTTP server is started, there is no concept of a "port": things run in-process, at the WSGI level, without actual TCP/IP or HTTP in play. So the "application" is not listening on port 8080 (or any other port); rather, its WSGI entry points are called directly, "just as if" an HTTP server were calling them.
If you want to test an actual running HTTP server, then you need Canoo's webtest (or an equivalent framework), not pythonpaste's. The latter makes for faster testing by avoiding socket-layer and HTTP-layer overhead, but it cannot test a separate, already-running server (such as the GAE SDK's) in this way.
I think you're misunderstanding what WebTest does. Something like app.get('/getreport') shouldn't make any kind of request to localhost on any port. The beauty of WebTest is that it doesn't require your app to actually be running on any server.
Here's a quote from the "What This Does" section of the WebTest docs:
With this you can test your web applications without starting an HTTP server, and without poking into the web framework shortcutting pieces of your application that need to be tested. The tests WebTest runs are entirely equivalent to how a WSGI HTTP server would call an application.
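To make that concrete, here is a minimal sketch (the app and environ values are illustrative, not from the question) of what "calling an application the way a WSGI server would" looks like: a plain Python function call, with no socket or port anywhere.

```python
import io

def simple_app(environ, start_response):
    # Stand-in for the real WSGI app under test.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"report data"]

# A hand-built, minimal WSGI environ: no socket involved.
environ = {
    "REQUEST_METHOD": "GET",
    "PATH_INFO": "/getreport",
    "SERVER_NAME": "localhost",
    "SERVER_PORT": "80",  # purely informational; nothing is listening here
    "wsgi.input": io.BytesIO(b""),
    "wsgi.url_scheme": "http",
}

captured = {}
def start_response(status, headers):
    captured["status"] = status

# "Entirely equivalent to how a WSGI HTTP server would call an application":
body = b"".join(simple_app(environ, start_response))
print(captured["status"])  # 200 OK
```

This is essentially what WebTest's TestApp does for you, plus convenient request building and response assertions.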
How can I set a maximum number of pending connections in a Flask application?
For example, after I run this code, I can send it two requests at the same time. While the first request is being processed, the other one waits; when the first one is done, the second one is processed.
from flask import Flask
application = Flask(__name__)
@application.route("/")
def hello():
for x in range(10000000):
x += 1
return "Hello World!"
if __name__ == '__main__':
application.run()
How can I make it so that when I send two requests at the same time, the first one is processed and the second one, instead of waiting, is refused (perhaps getting some kind of error instead)?
You can run Flask behind a proper server, such as Gunicorn, Nginx or Apache, to accept the HTTP requests it will then operate on. The reason people run Nginx and Gunicorn together is that, in addition to being a web server, Nginx can also proxy connections to Gunicorn, which brings certain performance benefits.
Gunicorn is pre-forking software. For low-latency communications, such as load balancer to app server or communications between services, pre-fork systems can be very successful. A Gunicorn server:
Runs any WSGI Python web application (and framework)
Can be used as a drop-in replacement for Paster (Pyramid), Django's Development Server, web2py etc.
Comes with various worker types and configurations
Manages worker processes automatically
Supports HTTP/1.0 and HTTP/1.1 (Keep-Alive) through synchronous and asynchronous workers
This blog post may help you set up a Flask application with Gunicorn.
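For the original question about limiting pending connections, Gunicorn exposes this directly through its `backlog` setting, which caps how many not-yet-accepted connections the OS will queue before refusing new ones. A sketch of a `gunicorn.conf.py` (the setting names are Gunicorn's; the values are illustrative only):

```python
# gunicorn.conf.py -- illustrative values, tune for your workload.
bind = "127.0.0.1:8000"   # address Gunicorn listens on
workers = 2               # pre-forked worker processes
backlog = 64              # max pending connections queued before refusal
timeout = 30              # workers silent longer than this are restarted
```

Run it with `gunicorn -c gunicorn.conf.py myapp:application`; once the backlog is full, further connection attempts are refused at the kernel level rather than queued indefinitely.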
What is Bottle doing in its wsgiref server implementation that the built in Python WSGIref simple server is not? When I look at Bottle, for example, it adheres to the WSGI standard and the documentation states:
1.5.1 Server Options The built-in default server is based on wsgiref WSGIServer. This non-threading HTTP server is perfectly fine for development and early production, but may become a performance bottleneck when server load increases.
There are three ways to eliminate this bottleneck:
• Use a different server that is either multi-threaded or asynchronous.
• Start multiple server processes and spread the load with a load-balancer.
• Do both
[emphasis mine]
Yet, everything I have read says not to use the Python wsgiref server for anything in production.
What does Bottle do with wsgiref that the built-in Python wsgiref does not? I'm not really questioning the wisdom of using async servers or "bigger", more "scalable" WSGI servers. But I'd like to know what Bottle is doing with the wsgiref server that makes it okay for "early production" when the plain library is not.
My application would serve less than 20 people hitting a PostgreSQL or MySQL database, CRUD operations. I guess you could ask a similar question with Flask.
For reference,
http://bottlepy.org/docs/dev/bottle-docs.pdf [pdf]
https://docs.python.org/2/library/wsgiref.html#module-wsgiref.simple_server
https://github.com/bottlepy/bottle/blob/master/bottle.py
This is Bottle's implementation, at least for opening the port:
class WSGIRefServer(ServerAdapter):
    def run(self, app): # pragma: no cover
        from wsgiref.simple_server import make_server
        from wsgiref.simple_server import WSGIRequestHandler, WSGIServer
        import socket

        class FixedHandler(WSGIRequestHandler):
            def address_string(self): # Prevent reverse DNS lookups please.
                return self.client_address[0]
            def log_request(*args, **kw):
                if not self.quiet:
                    return WSGIRequestHandler.log_request(*args, **kw)

        handler_cls = self.options.get('handler_class', FixedHandler)
        server_cls = self.options.get('server_class', WSGIServer)

        if ':' in self.host: # Fix wsgiref for IPv6 addresses.
            if getattr(server_cls, 'address_family') == socket.AF_INET:
                class server_cls(server_cls):
                    address_family = socket.AF_INET6

        self.srv = make_server(self.host, self.port, app, server_cls,
                               handler_cls)
        self.port = self.srv.server_port # update port actual port (0 means random)
        try:
            self.srv.serve_forever()
        except KeyboardInterrupt:
            self.srv.server_close() # Prevent ResourceWarning: unclosed socket
            raise
EDIT:
What is Bottle doing in its wsgiref server implementation that the built in Python WSGIref simple server is not?
What does Bottle do with wsgiref that the built-in Python wsgiref does not?
Nothing (of substance).
Not sure I understand your question, but I'll take a stab at helping.
The reason for my confusion is: the code snippet you posted precisely answers [what I think is] your question. Bottle's WSGIRefServer class does nothing substantial except wrap wsgiref.simple_server. (I'm calling the logging and the IPv6 tweaks insubstantial because they're not related to "production-readiness," which I gather is at the heart of your question.)
Is it possible that you misinterpreted the docs? I'm thinking perhaps yes, because you say:
I'd like to know what Bottle is doing with the wsgiref server that makes it okay for "early Production," the regular library does not.
but the Bottle docs are making the point that Bottle's WSGIRefServer should not be used to handle high throughput loads.
In other words, WSGIRefServer is the same as wsgiref, whereas I think you interpreted the docs as saying that the former is somehow improved over the latter. (It's not.)
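You can see the equivalence yourself: strip away Bottle's adapter and what runs underneath is exactly the stdlib server. A minimal sketch (the app is a placeholder, not anything from Bottle):

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Trivial WSGI app standing in for a Bottle application.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

# This is the same call Bottle's adapter makes; passing port 0 lets the
# OS pick a free port, mirroring Bottle's `self.port = self.srv.server_port`.
srv = make_server("127.0.0.1", 0, app)
print(srv.server_port > 0)  # True
srv.server_close()
```

Whether you call make_server yourself or let Bottle's WSGIRefServer do it, the serving behavior (single-threaded, blocking) is identical.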
Hope this helps!
I want to build a hybrid application (Web technologies embedded in a desktop app).
I will start with a Web version and then embed it using WebKit, but I don't want the embedded version to service requests through a TCP port.
With WebKit (Qt,Gtk) I can intercept all URL requests and act on them.
What I'm missing is a way to invoke the Flask URL-to-callable dispatcher without going through TCP (or WSGI).
Any ideas better than analyzing the call stack with a debugger?
Simon Sapin answered on the (quite active) Flask mailing list:
Why not WSGI ?
You have to get a Python interpreter somewhere. Then you need to call
your application somehow with data from WebKit like the URL being
requested, and get the response. WSGI is just that: a calling
convention for Python functions (or other callable objects.)
If WSGI is more complex than you’d like, you can use the test client:
http://flask.pocoo.org/docs/api/#flask.Flask.test_client
http://werkzeug.pocoo.org/docs/test/#werkzeug.test.Client
http://werkzeug.pocoo.org/docs/test/#werkzeug.test.EnvironBuilder
That’s how I do it in Frozen-Flask. It simulates HTTP requests to a
Flask app at the WSGI level and writes the responses to static files.
The test client is just an easier way to make WSGI calls:
https://github.com/SimonSapin/Frozen-Flask/blob/master/flaskext/frozen/__init__.py#L228
WSGI really is Flask’s "entry point".
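A minimal sketch of the test-client route (assumes Flask is installed; the route and return value are illustrative):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/page/<name>")
def page(name):
    return "rendered: " + name

# Invoke the URL-to-callable dispatcher entirely in-process:
# no TCP port is opened, no server is started.
with app.test_client() as client:
    resp = client.get("/page/home")   # dispatched by Flask's URL map
    print(resp.status_code)                # 200
    print(resp.get_data(as_text=True))     # rendered: home
```

In the hybrid app, the WebKit URL-intercept hook would translate each intercepted request into a `client.get(...)`/`client.post(...)` call and hand the response body back to WebKit.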
Other than that if you’re interested in Flask inner workings start
looking from here:
https://github.com/mitsuhiko/flask/blob/master/flask/app.py#L1477
The title may be a bit vague, but here's my goal: I have a frontend webserver which takes incoming HTTP requests, does some preprocessing on them, and then passes the requests off to my real webserver to get the HTTP response, which is then passed back to the client.
Currently, my frontend is built off of BaseHTTPServer.HTTPServer and the backend is CherryPy.
So the question is: Is there a way to take these HTTP requests / client connections and insert them into a CherryPy server to get the HTTP response? One obvious solution is to run an instance of the CherryPy backend on a local port or using UNIX domain sockets, and then the frontend webserver establishes a connection with the backend and relays any requests/responses. Obviously, this isn't ideal due to the overhead.
What I'd really like is for the CherryPy backend to not bind to any port, but just sit there waiting for the frontend to pass the client's socket (as well as the modified HTTP Request info), at which point it does its normal CherryPy magic and returns the request directly to the client.
I've been perusing the CherryPy source to find some way to accomplish this, and currently am attempting to modify wsgiserver.CherryPyWSGIServer, but it's getting pretty hairy and is probably not the best approach.
Is your main app a wsgi application? If so, you could write some middleware that wraps around it and does all the request wrangling before passing on to the main application.
If this is possible, it would avoid you having to run two webservers and all the problems you are encountering.
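A minimal sketch of that middleware approach (the names are hypothetical, and the backend function stands in for the real CherryPy WSGI application):

```python
class PreprocessingMiddleware(object):
    """Runs the frontend's request wrangling before the wrapped app."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # Whatever preprocessing the frontend currently does goes here;
        # mutating environ is how middleware passes modified request info.
        environ.setdefault("HTTP_X_FRONTEND", "preprocessed")
        return self.app(environ, start_response)

def backend_app(environ, start_response):
    # Stand-in for the real CherryPy WSGI application.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [environ["HTTP_X_FRONTEND"].encode()]

application = PreprocessingMiddleware(backend_app)
```

Mount `application` in a single server process; the request never crosses a second socket, which avoids the localhost-relay overhead described in the question.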
Answered the Upgrade question at Handling HTTP/1.1 Upgrade requests in CherryPy. Not sure if that addresses this one or not.
How should I implement reverse AJAX when building a chat application in Django? I've looked at Django-Orbited, and from my understanding, this puts a comet server in front of the HTTP server. This seems fine if I'm just running the Django development server, but how does this work when I start running the application from mod_wsgi? How does having the orbited server handling every request scale? Is this the correct approach?
I've looked at another approach (long polling) that seems like it would work, although I'm not sure what all would be involved. Would the client request a page that would live in its own thread, so as not to block the rest of the application? Would it even block? Wouldn't the script requested by the client have to continuously poll for information?
Which of the approaches is more proper? Which is more portable, scalable, sane, etc? Are there other good approaches to this (aside from the client polling for messages) that I have overlooked?
How about using the awesome nginx push module?
Have you taken a look at Tornado?
Using WSGI for comet/long-polling apps is not a good choice because WSGI doesn't support non-blocking requests.
The Nginx Push Stream Module provides a simple HTTP interface for both the server and the client.
The Nginx HTTP Push Module is similar, but seems to no longer be maintained.