Flask-RESTful: How to send a non-JSON POST argument?

I have a problem making simple (non-JSON) arguments work with POST. Taking the simple example from the tutorial, I can't get a unit test to pass task as an argument: task is never passed in; it's None.
class TodoList(Resource):
    def __init__(self):
        self.reqparse = reqparse.RequestParser()
        self.reqparse.add_argument('task', type=str)
        super(TodoList, self).__init__()

    def post(self):
        args = self.reqparse.parse_args()
        # args['task'] is None, but why?
        return TODOS[args['task']], 201
Unit test:
def test_task(self):
    rv = self.app.post('todos', data='task=test')
    self.check_content_type(rv.headers)
    resp = json.loads(rv.data)
    eq_(rv.status_code, 201)
What am I missing, please?

When you pass data='task=test', the test client does not set the application/x-www-form-urlencoded content type, because you are putting a raw string on the input stream. Flask therefore cannot detect a form or read data from it, and reqparse returns None for every value in this case.
To fix it, either set the content type explicitly or pass the data as a dict ({'task': 'test'}) or a tuple.
Also, for request testing it is better to use client = self.app.test_client() instead of app = self.app.test_client(); if you use the Flask-Testing TestCase class, just call self.client.post.
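For example, here is a sketch of the fixed test, assuming self.app is the Flask application and reusing the eq_ helper from the question:
def test_task(self):
    client = self.app.test_client()
    # Passing a dict makes the test client send the data as
    # application/x-www-form-urlencoded, so reqparse can see 'task'
    rv = client.post('/todos', data={'task': 'test'})
    eq_(rv.status_code, 201)
    # Alternatively, keep the raw string but set the content type yourself:
    # rv = client.post('/todos', data='task=test',
    #                  content_type='application/x-www-form-urlencoded')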

Related

How can I properly test a 2-parameter function using pytest?

I'm trying to properly test this simple function:
def get_content_from_header(request, header_name):
    try:
        content = request.headers[header_name]
    except KeyError:
        logging.error(f"BAD REQUEST: '{header_name}' header is missing from the request.")
    except AttributeError:
        logging.error("BAD REQUEST: request has no attribute 'headers'.")
    else:
        return content
    return None
So this is my code so far, I'm using parametrize along with fixture to achieve my goal:
import main
import pytest

class ValidRequest:
    def __init__(self):
        self.headers = {
            'Authorization': 'test_auth'
        }

@pytest.fixture
def mocked_request():
    request = ValidRequest()
    return request

@pytest.mark.parametrize("possible_input, expected_output",
                         [('Authorization', 'test_auth'),
                          ('InvalidHeader', None)])
def test_get_content_from_header(mocked_request, possible_input, expected_output):
    # Run the function with the mocked request
    assert main.get_content_from_header(mocked_request, possible_input) == expected_output
Here's my problem: I only test the second parameter of get_content_from_header, not request, which is the first one. How can I properly do that?
Should I create a new class InvalidRequest and test my function with this new class in a new test function just below test_get_content_from_header?
Or should I add this new parameter through parametrize in the existing test function?
What is the cleanest (most pythonic) way to do it?
I would suggest a small change here; let's simplify that function a bit. Since we are getting a certain header from the request's headers dict, we can pass just the headers dict instead of the whole request.
def get_content_from_header(headers: dict, header_name: str):
    if header_name in headers.keys():
        return headers[header_name]
    return None
This works the same way as your function, and you do not have to test your request parameter. Now you can test that in a very simple manner:
def test_get_content_from_header_returning_header_value():
    headers = {"Authorization": "test_auth"}
    assert get_content_from_header(headers, "Authorization") == "test_auth"

def test_get_content_from_header_returning_none():
    headers = {}  # the header is missing entirely
    assert get_content_from_header(headers, "Authorization") is None
Now you don't need to test the request in that unit test. To cover the request parameter, refer to https://flask.palletsprojects.com/en/2.0.x/testing/ (specifically the client usage) and test your endpoints instead.
As for the loggers, I would usually place those where you actually call the get_content_from_header function.
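For example, a minimal sketch of logging at the call site; the Flask view and route here are illustrative, not part of the original code, and assume the simplified get_content_from_header is in scope:
import logging
from flask import Flask, abort, request

app = Flask(__name__)

@app.route('/protected')
def protected():
    token = get_content_from_header(request.headers, 'Authorization')
    if token is None:
        # The caller decides how a missing header should be reported
        logging.error("BAD REQUEST: 'Authorization' header is missing from the request.")
        abort(400)
    return {'token': token}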

How to set request args with Flask test_client?

I have to test out a certain view that gets certain information from request.args.
I can't mock this since a lot of stuff in the view uses the request object.
The only alternative I can think of is to manually set request.args.
I can do that with test_request_context(), e.g.:
with self.app.test_request_context() as req:
    req.request.args = {'code': 'mocked access token'}
    MyView()
Now the request inside this view will have the arguments that I've set.
However, I need to call my view, not just initialize it, so I use this:
with self.app.test_client() as c:
    resp = c.get('/myview')
But I don't know how to manipulate the request arguments in this manner.
I have tried this:
with self.app.test_client() as c:
    with self.app.test_request_context() as req:
        req.request.args = {'code': 'mocked access token'}
        resp = c.get('/myview')
but this does not set request.args.
Pass the query_string argument to c.get, which can either be a dict, a MultiDict, or an already encoded string.
with app.test_client() as c:
    r = c.get('/', query_string={'name': 'davidism'})
The test client request methods pass their arguments to Werkzeug's EnvironBuilder, which is where this is documented.
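For completeness, a sketch of the other two accepted forms, equivalent to the dict version above:
from werkzeug.datastructures import MultiDict

with app.test_client() as c:
    # A MultiDict allows repeated keys, e.g. ?tag=a&tag=b
    r = c.get('/', query_string=MultiDict([('name', 'davidism')]))
    # An already URL-encoded string is passed through as-is
    r = c.get('/', query_string='name=davidism')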

Defining request and response objects for webapp2 handlers on GAE python

I already have a REST API on GAE Python built using webapp2. I was looking at the protorpc messages used in protorpc and Cloud Endpoints, and I really like how I can define the requests and responses. Is there a way to incorporate that into my webapp2 handlers?
First, I use a decorator on the webapp2 method. I define the decorator as follows:
# Takes the webapp2 request (self.request on the base handler) and converts
# it to the defined protorpc object
def jsonMethod(requestType, responseType, http_method='GET'):
    """
    NB: if both URL and POST parameters are used, do not use 'required=True'
    values in the protorpc Message definition, as this will fail on one of
    the sets of params
    """
    def jsonMethodHandler(handler):
        def jsonMethodInner(self, **kwargs):
            requestObject = getJsonRequestObject(self, requestType, http_method, kwargs)
            logging.info(u'request={0}'.format(requestObject))
            response = handler(self, requestObject)  # Response object
            if response:
                # Convert the response to JSON
                responseJson = protojson.encode_message(response)
            else:
                responseJson = '{}'
            logging.info(u'response json={0}'.format(responseJson))
            if self.response.headers:
                self.response.headers['Content-Type'] = 'application/json'
            if responseJson:
                self.response.write(responseJson)
            self.response.write('')
        return jsonMethodInner
    return jsonMethodHandler
The jsonMethod decorator uses a protorpc message for 'requestType' and 'responseType'.
I have constrained the http_method to be either GET, POST or DELETE for a method; you may wish to change this.
Note that this decorator must be applied to instance methods on a webapp2.RequestHandler class (see the example below) as it needs to access the webapp2 request and response objects.
The protorpc message is populated in getJsonRequestObject():
def getJsonRequestObject(self, requestType, http_method, kwargs):
    """
    kwargs: URL keywords, e.g. /api/test/<key:\d+> => key
    request.GET: used for 'GET' URL query string arguments
    request.body: used for 'POST' or 'DELETE' form fields
    """
    logging.info(u'URL parameters: {0}'.format(kwargs))
    if http_method == 'POST' or http_method == 'DELETE':
        requestJson = self.request.body
        if requestJson is None:
            requestJson = ''  # Cater for no body (e.g. IE10)
        try:
            logging.info("Content type = {}".format(self.request.content_type))
            logRequest = requestJson if len(requestJson) < 1024 else requestJson[0:1024]
            try:
                logging.info(u'Request JSON: {0}'.format(logRequest))
            except:
                logging.info("Cannot log request JSON (invalid character?)")
            postRequestObject = protojson.decode_message(requestType, requestJson)
        except:
            logError()
            raise
        if self.request.query_string:
            # Combine POST and GET parameters [GET query string overrides POST field,
            # so the GET object is passed as request2 to combineRequestObjects]
            getRequestObject = protourlencode.decode_message(requestType, self.request.query_string)
            requestObject = combineRequestObjects(requestType, postRequestObject, getRequestObject)
        else:
            requestObject = postRequestObject
    elif http_method == 'GET':
        logging.info(u'Query strings: {0}'.format(self.request.query_string))
        requestObject = protourlencode.decode_message(requestType, self.request.query_string)
        logging.info(u'Request object: {0}'.format(requestObject))
    else:
        raise ValidationException(u'Invalid HTTP method: {0}'.format(http_method))
    if len(kwargs) > 0:
        # Combine URL keywords (kwargs) with requestObject
        queryString = urllib.urlencode(kwargs)
        keywordRequestObject = protourlencode.decode_message(requestType, queryString)
        requestObject = combineRequestObjects(requestType, requestObject, keywordRequestObject)
    return requestObject
getJsonRequestObject() handles GET, POST and webapp2 URL arguments (note: these are entered as kwargs).
combineRequestObjects() combines two objects of the requestType message:
def combineRequestObjects(requestType, request1, request2):
    """
    Combines two objects of requestType; note that request2 values will
    override request1 values if duplicated
    """
    members = inspect.getmembers(requestType, lambda a: not inspect.isroutine(a))
    members = [m for m in members if not m[0].startswith('_')]
    for key, value in members:
        val = getattr(request2, key)
        if val:
            setattr(request1, key, val)
    return request1
Finally, a decorated webapp2 method example:
from protorpc import messages, message_types

class FileSearchRequest(messages.Message):
    """
    A combination of file metadata and file information
    """
    filename = messages.StringField(1)
    startDateTime = message_types.DateTimeField(2)
    endDateTime = message_types.DateTimeField(3)

class ListResponse(messages.Message):
    """
    List of strings response
    """
    items = messages.StringField(1, repeated=True)

...

class FileHandler(webapp2.RequestHandler):
    @jsonMethod(FileSearchRequest, ListResponse, http_method='POST')
    def searchFiles(self, request):
        # Can now use request.filename etc.
        ...
        return ListResponse(items=items)
Hopefully this will give you some idea of how to go about implementing your own webapp2/protorpc framework.
You can also look at how Cloud Endpoints implements its protorpc message handling. You may also need to dive into the protorpc code itself.
Please note that I have attempted to simplify my existing implementation, so you may come across various issues that you will need to address in your implementation.
In addition, methods like logError() and classes like ValidationException are non-standard, so you will need to replace them as you see fit.
You may also wish to remove the logging at some point.
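As a final step, the decorated handler still needs to be routed like any other webapp2 handler. A minimal sketch, with an illustrative URL pattern:
app = webapp2.WSGIApplication([
    # Dispatch POST /api/files/search to FileHandler.searchFiles
    webapp2.Route('/api/files/search', handler=FileHandler,
                  handler_method='searchFiles', methods=['POST']),
], debug=True)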

What is the best way to force a keyword while using **kwargs?

I'm not sure if I have used the correct terminology in the question.
Currently, I am trying to make a wrapper/interface around Google's Blogger API (Blog service).
[I know it has been done already, but I am using this as a project to learn OOP/python.]
I have made a method that gets a set of 25 posts from a blog:
def get_posts(self, **kwargs):
    """ Makes an API request. Returns list of posts. """
    api_url = '/blogs/{id}/posts'.format(id=self.id)
    return self._send_request(api_url, kwargs)

def _send_request(self, url, parameters={}):
    """ Sends an HTTP GET request to the Blogger API.
    Returns JSON decoded response as a dict. """
    url = '{base}{url}?'.format(base=self.base, url=url)
    # Requests formats the parameters into the URL for me
    try:
        r = requests.get(url, params=parameters)
    except:
        print "** Could not reach url:\n", url
        return
    api_response = r.text
    return self._jload(api_response)
The problem is, I have to specify the API key every time I call the get_posts function:
someblog = BloggerClient(url='http://someblog.blogger.com', key='0123')
someblog.get_posts(key=self.key)
Every API call requires that the key be sent as a parameter on the URL.
Then, what is the best way to do that?
I'm thinking that a possible way (though probably not the best one) is to add the key to the parameters dictionary in _send_request():
def _send_request(self, url, parameters=None):
    """ Sends an HTTP GET request to the Blogger API.
    Returns JSON decoded response. """
    # Use None rather than a mutable {} default, since we add a key below
    if parameters is None:
        parameters = {}
    # Format the full API URL:
    url = '{base}{url}?'.format(base=self.base, url=url)
    # The api key will always be added:
    parameters['key'] = self.key
    try:
        r = requests.get(url, params=parameters)
    except:
        print "** Could not reach url:\n", url
        return
    api_response = r.text
    return self._jload(api_response)
I can't really get my head around what is the best way (or most pythonic way) to do it.
You could store it in a named constant.
If this code doesn't need to be secure, simply
API_KEY = '1ih3f2ihf2f'
If it's going to be hosted on a server somewhere or needs to be more secure, you could store the value in an environment variable
In your terminal:
export API_KEY='1ih3f2ihf2f'
then in your python script:
import os
API_KEY = os.environ.get('API_KEY')
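Either way, the call sites then reference the constant instead of repeating the literal; a sketch using the BloggerClient from the question:
someblog = BloggerClient(url='http://someblog.blogger.com', key=API_KEY)
someblog.get_posts(key=API_KEY)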
The problem is, I have to specify the API key every time I call the get_posts function:
If it really is just this one method, the obvious idea is to write a wrapper:
def get_posts(blog, *args, **kwargs):
    # assumes a module-level `key` holding the API key
    return blog.get_posts(*args, key=key, **kwargs)
Or, better, wrap up the class to do it for you:
class KeyRememberingBloggerClient(BloggerClient):
    def __init__(self, *args, **kwargs):
        self.key = kwargs.pop('key')
        super(KeyRememberingBloggerClient, self).__init__(*args, **kwargs)

    def get_posts(self, *args, **kwargs):
        return super(KeyRememberingBloggerClient, self).get_posts(
            *args, key=self.key, **kwargs)
So now:
someblog = KeyRememberingBloggerClient(url='http://someblog.blogger.com', key='0123')
someblog.get_posts()
Yes, you can override or monkeypatch the _send_request method that all of the other methods use; but if there are only one or two methods that need to be fixed, why delve into the undocumented internals of the class and fork the body of one of those methods just to change it in a way you clearly weren't expected to, instead of doing it cleanly?
Of course if there are 90 different methods scattered across 4 different classes, you might want to consider building these wrappers programmatically (and/or monkeypatching the classes)… or just patching the one private method, as you're doing. That seems reasonable.
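A sketch of what building those wrappers programmatically might look like, assuming every wrapped method accepts a key keyword the way get_posts does (the helper name is made up):
import functools

def with_default_key(cls, method_names):
    """Patch each named method so self.key is supplied unless overridden."""
    for name in method_names:
        def make_wrapper(original):
            @functools.wraps(original)
            def wrapper(self, *args, **kwargs):
                kwargs.setdefault('key', self.key)
                return original(self, *args, **kwargs)
            return wrapper
        setattr(cls, name, make_wrapper(getattr(cls, name)))
    return cls

# Usage: with_default_key(BloggerClient, ['get_posts'])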

Amazon SQS messages with arbitrary Python objects?

So, I'm trying to use SQS to pass a Python object between two EC2 instances. Here's my failed attempt:
import boto.sqs
from boto.sqs.message import Message

class UserInput(Message):
    def set_email(self, email):
        self.email = email

    def set_data(self, data):
        self.data = data

    def get_email(self):
        return self.email

    def get_data(self):
        return self.data

conn = boto.sqs.connect_to_region('us-west-2')
q = conn.create_queue('user_input_queue')
q.set_message_class(UserInput)
m = UserInput()
m.set_email('something@something.com')
m.set_data({'foo': 'bar'})
q.write(m)
It returns an error message saying that "The request must contain the parameter MessageBody". Indeed, the tutorial tells us to do m.set_body('something') before writing the message to the queue. But here I'm not passing a string; I want to pass an instance of my UserInput class. So, what should MessageBody be? I've read the docs, and they say that
The constructor for the Message class must accept a keyword parameter “body” which represents the content or body of the message. The format of this parameter will depend on the behavior of the particular Message subclass. For example, if the Message subclass provides dictionary-like behavior to the user the body passed to the constructor should be a dict-like object that can be used to populate the initial state of the message.
I guess the answer to my question might be in that paragraph, but I can't make sense of it. Could anyone provide a concrete code example to illustrate what they are talking about?
For an arbitrary Python object, the answer is to serialize the object into a string, use SQS to send that string to the other EC2 instance, and deserialize the string back into an instance of the same class.
For example, you could use JSON with base64 encoding to serialize an object into a string, and that string would be the body of your message.
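A concrete sketch of that approach using JSON, with the default boto Message class (whose body is a plain string, base64-encoded on the wire) and the queue from the question:
import json
import boto.sqs
from boto.sqs.message import Message

conn = boto.sqs.connect_to_region('us-west-2')
q = conn.create_queue('user_input_queue')

# Sender: serialize the object's state into a string body
m = Message()
m.set_body(json.dumps({'email': 'something@something.com',
                       'data': {'foo': 'bar'}}))
q.write(m)

# Receiver (on the other EC2 instance): deserialize and rebuild the object
messages = q.get_messages()
if messages:
    state = json.loads(messages[0].get_body())
    user_input = UserInput()
    user_input.set_email(state['email'])
    user_input.set_data(state['data'])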
