I have the following routing url rule defined and would like to test it.
app.add_url_rule('/api/v1.0/worker/bbc-stage-0', 'stage0', view_func=BBCStage0TaskView.as_view('bbc_stage0_taskview'))
The following tests if the path is correct:
def test_url_to_view_stage0_exists(self):
    self.assertEqual(api.app.url_map._rules_by_endpoint['stage0'][0].rule, '/api/v1.0/worker/bbc-stage-0')
I haven't found a way to test if view_func is pointing to the right class. Is there a way to test that?
Werkzeug's Map maps paths to endpoints. The Flask app maps those endpoints to view functions in app.view_functions, which is what app.dispatch_request uses. So to check which view has been connected to an endpoint, look it up in that mapping. Since you're using a class-based view, the actual view function is a new wrapper created each time as_view() is called, so instead you test that its view_class is the right class.
self.assertEqual(api.app.view_functions['stage0'].view_class, BBCStage0TaskView)
This is a fairly meaningless test on its own, as you're essentially testing Flask internals that Flask already tests. Your own tests would be much more useful if they simply used the test client to check that a request to the URL returns what you expect.
with api.app.test_client() as client:
    rv = client.get('/api/v1.0/worker/bbc-stage-0')
    # assert something about the response, such as 'expected string' in rv.data
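Put together as a unittest method, that could look something like this (a sketch; the expected status code and response body are hypothetical):

def test_stage0_get(self):
    with api.app.test_client() as client:
        rv = client.get('/api/v1.0/worker/bbc-stage-0')
        self.assertEqual(rv.status_code, 200)
        self.assertIn(b'expected string', rv.data)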
Beginner's question: is it possible to pass GET request parameters to a route function in Flask using add_url_rule?
I am getting an error saying that the verify_username_route function I declare later (which takes one parameter) is being called without any arguments.
self.application_.add_url_rule(self.path_ + '/verify', 'verify', self.verify_username_route, methods=['GET'])
To fetch query string parameters, you use request.args.get('argname') inside your function. Nothing is passed in as an argument; it's all done through Flask's request global.
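For example (a minimal sketch; the route and parameter names here are hypothetical, not taken from your code):

from flask import Flask, request

app = Flask(__name__)

def verify_username_route():
    # /verify?username=alice -> checks "alice"; request.args.get returns None if absent
    username = request.args.get('username')
    if username is None:
        return 'Missing username parameter', 400
    return 'Checking %s' % username

app.add_url_rule('/verify', 'verify', verify_username_route, methods=['GET'])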
To pass parameters in your URLs, you can use Flask's built-in variable rules. These work both for @app.route decorators and for add_url_rule calls. Here is your code, with a parameter:
self.application_.add_url_rule(self.path_ + '/verify/<int:val>', 'verify', self.verify_username_route, methods=['GET'])
The important part here is the route itself: /verify/<int:val>. This tells Flask that you want the route to match /verify/something, where something is an integer. Whatever integer appears there when the request is made gets passed to self.verify_username_route as a keyword argument called val.
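So the view function itself needs to accept that argument, something like (a sketch; the body is hypothetical):

def verify_username_route(self, val):
    # val arrives already converted to an int by the <int:val> converter
    return 'Verifying user #%d' % val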
Read more about this in the Flask documentation on variable rules.
I'm writing a RESTful API in Flask. I can access URL parameters via the Request Object. What is the best way to validate the given URL parameters?
For example:
/places?zip=97239 # This is a valid filter
/places?foo=bar # This is not a valid filter, 404 response?
One solution is to search through request.args and compare each entry against a set of valid URL parameters. Is there a better way?
Thanks!
Put the GET parameters in a dictionary and validate it using voluptuous.
For example:
from voluptuous import Schema, Required, Coerce

parameters = Schema({
    Required('zip'): Coerce(int),
})
will accept any dictionary with a "zip" key that has a value that can be coerced to an integer (so either 1 or "1" depending on how you get the values). You can then validate it using:
parameters(my_get_params) # should not raise an exception
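Wired into a Flask view for the question's example, that might look something like this (a sketch; returning 404 for bad filters is just one choice, and the response body is hypothetical):

from flask import Flask, request, abort
from voluptuous import Schema, Required, Coerce, MultipleInvalid

app = Flask(__name__)

parameters = Schema({
    Required('zip'): Coerce(int),
})

@app.route('/places')
def places():
    try:
        args = parameters(request.args.to_dict())
    except MultipleInvalid:
        # Unknown or malformed filters (e.g. ?foo=bar) fail validation
        abort(404)
    return 'Places in zip %d' % args['zip']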
Instead of doing validation by hand, you can use WTForms, which, besides helping you create actual forms, validates URL / POST parameters automatically according to specified models.
Whether this is better will depend on your specific situation.
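For instance, a rough sketch of the zip filter with WTForms (the form and field names are hypothetical):

from flask import Flask, request, abort
from wtforms import Form, IntegerField
from wtforms.validators import InputRequired

app = Flask(__name__)

class PlacesFilterForm(Form):
    zip = IntegerField('zip', validators=[InputRequired()])

@app.route('/places')
def places():
    # request.args is a MultiDict, which WTForms accepts as formdata
    form = PlacesFilterForm(request.args)
    if not form.validate():
        abort(404)
    return 'Places in zip %d' % form.zip.data

Note that WTForms only validates the fields you declare; unknown parameters like foo=bar are simply ignored rather than rejected.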
You can use the flask-parameter-validation library to validate within Flask. It's useful because it uses Flask features under the hood, handles files and forms, and implicitly converts query parameters.
For your example, you would do the following:
from flask import Flask
from typing import Optional
from flask_parameter_validation import ValidateParameters, Query

app = Flask(__name__)

@app.route("/places", methods=["GET"])
@ValidateParameters()
def check_places(
    zip: Optional[int] = Query()
):
    return "Hello World!"

if __name__ == "__main__":
    app.run()
I've written a program in Python which works with two distinct APIs to get data from two different services (CKAN and MediaWiki).
In particular, there is a class Resource, which requests the data from the above-mentioned services and processes it.
At some point I came to the conclusion that my app needs tests.
The problem is that all the examples I've found on the web and in books don't deal with cases like this.
For example, inside Resource class I've got a method:
def load_from_ckan(self):
    """
    Get the resource specified by self.id
    from config.ckan_api_url.
    """
    data = json.dumps({'id': self.id})
    headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
    url = config.ckan_api_url + '/action/resource_show'
    r = requests.post(url, timeout=config.ckan_request_timeout, data=data, headers=headers)
    assert r.ok, r
    resource = json.loads(r.content)
    resource = resource["result"]
    for key in resource:
        setattr(self, key, resource[key])
The load_from_ckan method gets the data about a resource from the CKAN API and assigns it to the object's attributes. It is simple, but...
My question is: how to test the methods like this? OR What should I test here?
I thought about the possibility of pickling (saving) results to disk. Then I could load them in the test and compare with an object initialized via load_from_ckan(). But CKAN is a community-driven platform, and the behavior of such tests would be unpredictable.
If there exist any books on philosophy of automated tests (like what to test, what not to test, how to make tests meaningful etc.), please, give me a link to it.
With any testing, the key question really is - what could go wrong?
In your case, it looks like the three risks are:
The web API in question could stop working. But you check for this already, with assert r.ok.
You, or someone else, could make a mistaken change to the code in future (e.g. mistyping a variable) which breaks it.
The API could change, so that it no longer returns the fields or the format you need.
It feels like you could write a fairly simple test for the latter two, depending on what data from this API you actually rely on: for example, if you're expecting the JSON to have a field called "temperature" which is a floating-point Celsius number, you could write a test which calls your function, then checks that self.temperature is an instance of 'float' and is within a sensible range of values (-30 to 50?). That should leave you confident that both the API and your function are working as designed.
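As a sketch (sticking with the hypothetical "temperature" field; the import path and Resource constructor are assumptions, so adapt them to your code):

import unittest
from myapp import Resource  # hypothetical import path

class TestLoadFromCkan(unittest.TestCase):
    def test_resource_has_expected_fields(self):
        resource = Resource('some-known-resource-id')  # hypothetical constructor
        resource.load_from_ckan()
        # Exercises both the API contract and your own parsing in one go
        self.assertIsInstance(resource.temperature, float)
        self.assertTrue(-30 <= resource.temperature <= 50)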
Typically if you want to test against an external service like this you will need to use a mock/dummy object to fake the API of the external service. This must be configurable at run time, either via the method's arguments, the class's constructor, or another kind of indirection. Another, more invasive option would be to monkey-patch globals during testing, like "import requests; requests.post = fake_post", but that can create more problems.
So for example your method could take an argument like so:
def load_from_ckan(self, post=requests.post):
    # ...
    r = post(url, timeout=config.ckan_request_timeout, data=data,
             headers=headers)
    # ...
Then during testing you would write your own post function that returns the kind of JSON results you'd see coming back from CKAN. For example:
def mock_post(url, timeout=30, data='', headers=None):
    # ... probably check input arguments
    class DummyResponse:
        pass
    r = DummyResponse()
    r.ok = True
    r.content = json.dumps({'result': {'attr1': 1, 'attr2': 2}})
    return r
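A test would then inject it in place of the real requests.post (a sketch; the Resource constructor is hypothetical, and the attribute names follow the fake payload above):

def test_load_from_ckan_parses_result(self):
    resource = Resource('dummy-id')  # hypothetical constructor
    resource.load_from_ckan(post=mock_post)
    self.assertEqual(resource.attr1, 1)
    self.assertEqual(resource.attr2, 2)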
Constructing the result in your test gives you a lot more flexibility than pickling results and returning them because you can fabricate error conditions or focus in on specific formats your code might not expect but you know could exist.
Overall you can see how complicated this could become, so I would only start adding this sort of testing if you are experiencing repeated errors or other difficulties. Otherwise it's just more code you have to maintain.
At this point, you can test that the response from CKAN is properly parsed. So you can pull the JSON from CKAN and ensure that it's returning data with the attributes you're interested in.
I'm working on a client library for a popular API. Currently, all of my unit tests of said client are making actual API calls against a test account.
Here's an example:
def test_get_foo_settings(self):
    client = MyCustomClient(token, account)
    results = client.get_foo_settings()
    assert_is(type(results), list)
I'd like to stop making actual API calls against my test account.
How should I tackle this? Should I be using Mock to mock the calls to the client and response?
Also, I'm confused about the philosophy of what to test with this client library. I'm not interested in testing the actual API, but with the different factors involved (the method being invoked, the permutations of possible return results, etc.), I'm not sure what I should test and/or when it is safe to make assumptions (such as a mocked response).
Any direction and/or samples of how to use Mock in my type of scenario would be appreciated.
I would personally do it by first creating a single interface or function call which your library uses to actually contact the service, then write a custom mock for that during tests.
For example, if the service uses HTTP and you're using Requests to contact the service:
class MyClient(…):
    def do_stuff(self):
        result = requests.get(self.service_url + "/stuff")
        return result.json()
I would first write a small wrapper around requests:
class MyClient(…):
    def _do_get(self, suffix):
        return requests.get(self.service_url + "/" + suffix).json()

    def do_stuff(self):
        return self._do_get("stuff")
Then, for tests, I would mock out the relevant functions:
class MyClientWithMocks(MyClient):
    def __init__(self, send_result=None):
        # Skip MyClient's own setup; the mock never touches the real service
        self.send_result = send_result
        self.request_log = []
    def _do_get(self, suffix):
        self.request_log.append(suffix)
        return self.send_result
And use it in tests like this:
def test_stuff(self):
    client = MyClientWithMocks(send_result="bar")
    assert_equal(client.do_stuff(), "bar")
    assert_contains(client.request_log, "stuff")
Additionally, it would likely be advantageous to write your tests so that you can run them both against your mock and against the real service, so that if things start failing, you can quickly figure out whose fault it is.
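One way to arrange that (a sketch; the environment variable, factory function, and service URL are all hypothetical) is to build the client through a small factory that the test run can switch:

import os

def make_client():
    # Run the suite with USE_REAL_SERVICE=1 to hit the live API instead of the mock
    if os.environ.get('USE_REAL_SERVICE'):
        return MyClient(service_url='https://example.com/api')  # hypothetical constructor args
    return MyClientWithMocks(send_result={'name': 'bar'})

def test_do_stuff_returns_a_dict():
    # An assertion loose enough to hold for both the mock and the live service
    client = make_client()
    assert isinstance(client.do_stuff(), dict)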
I'm using httmock and I'm pretty happy with it: https://github.com/patrys/httmock
In the "Dispatching / Other Dispatchers" section of the CherryPy documentation, there is an example of Django-style regular-expression-to-view-function mapping definition, but there is no indication on how to attach this to cherrypy.tree.
How are you supposed to register this mapping?
Edit: Based on the "regex URL mapping" thread in the cherrypy-users Google group, I was able to figure out that to attach views using regular expressions, you need to use routes-style mapping with the cherrypy.dispatch.RoutesDispatcher class, like so:
def hello(name='stranger'):
    """Sample view."""
    return 'Hello, %s!' % name

dispatch = cherrypy.dispatch.RoutesDispatcher()
dispatch.connect('hello-1', '/hello', hello)
dispatch.connect('hello-2', '/hello/{name:([^/]+)}', hello)

cherrypy.tree.mount(None, config={
    '/': {
        'request.dispatch': dispatch,
    }
})
Note the {argument-name:regular-expression} syntax in the URL pattern.
Is there a way to specify the route patterns using the list-of-pairs syntax as shown in the CherryPy documentation?
There's not any extra step required. During a request, cherrypy.tree performs a first routing stage, where the incoming request is mapped to an Application using its path-to-app mapping. When you call tree.mount(root=None, script_name='/', config=conf) at startup, the Tree creates a cherrypy.Application for you behind the scenes and mounts it at '/'.
Once the Application is found, its config takes over, and the "request.dispatch" config for the example app in the docs says "use the RoutesDispatcher for all URI's in this app". That RoutesDispatcher instance will then pass control of the request to the appropriate Controller.
The regex example in the docs isn't even that well-developed. You'd need to write a Dispatcher which uses it. That process "only" needs to find the handler and collect request.config, but those two activities can be very complex depending on the dispatch style chosen. See the existing dispatchers for inspiration.
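If all you want is to express the mapping as a list of pairs, one pragmatic option (a sketch built on the RoutesDispatcher usage above, not a documented CherryPy feature) is to keep that list yourself and loop over it when wiring up the dispatcher:

import cherrypy

def hello(name='stranger'):
    return 'Hello, %s!' % name

# Hypothetical list-of-tuples mapping, in the spirit of the Django-style example
url_map = [
    ('hello-1', '/hello', hello),
    ('hello-2', '/hello/{name:([^/]+)}', hello),
]

dispatch = cherrypy.dispatch.RoutesDispatcher()
for route_name, pattern, handler in url_map:
    dispatch.connect(route_name, pattern, handler)

cherrypy.tree.mount(None, config={'/': {'request.dispatch': dispatch}})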