I'm trying to set up a base class that contains common REST methods, to be used by more specific methods in a later testing framework. I decided that instead of creating different requests.post methods that correspond to the user passing in data, json, or files parameters, I would make one method with fixed url and header parameters and let the user pass in whatever else they want through **kwargs. However, I'm not sure I can even use **kwargs in this context, as it seems the requests module expects a positional argument. This is what I have so far:
class Action:
    def __init__(self, url, requestHeaders, **kwargs):
        self.url = url
        self.requestHeaders = requestHeaders
        self.kwargs = kwargs

    def postAction(self):
        response = requests.post(self.url, headers=self.requestHeaders, self.kwargs)
        resultCode = response.status_code
        resultMessage = response.text
        print(resultCode)
        print(resultMessage)
        return resultCode, resultMessage
For example, kwargs might contain files={'csv': ('/path/to/csv.csv', open('csv.csv', 'rb'), 'text/csv')} and verify=False. In another request, files might be replaced with data. However, when I try to test, I end up with this:
Traceback (most recent call last):
File "/home/user1/test/action.py", line 24
response = requests.post(self.url, headers=self.requestHeaders, self.kwargs)
SyntaxError: positional argument follows keyword argument
Is what I'm trying to do possible? If not, are there any alternatives?
If you want the kwargs to be treated as keyword arguments given to post, you need to use ** to apply them:
response = requests.post(self.url, headers=self.requestHeaders, **self.kwargs)
This, similar to sequence unpacking (*seq), will cause the data to be "expanded" into the argument list of the call.
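For completeness, here is a condensed sketch of the class from the question with that one change applied; the URL, headers, and extra keyword arguments below are just placeholder values:

import requests

class Action:
    def __init__(self, url, requestHeaders, **kwargs):
        self.url = url
        self.requestHeaders = requestHeaders
        self.kwargs = kwargs

    def postAction(self):
        # ** expands self.kwargs into keyword arguments of the call,
        # e.g. files=..., data=..., json=..., verify=...
        response = requests.post(self.url, headers=self.requestHeaders, **self.kwargs)
        return response.status_code, response.text

# Placeholder usage: any extra keyword arguments given here are forwarded
# to requests.post unchanged.
action = Action('https://example.com/upload',
                {'Accept': 'application/json'},
                data={'name': 'test'},
                verify=False)
resultCode, resultMessage = action.postAction()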
I'm not sure if I used the right terms in the title. This may be a known way to program interface functions for a subsystem or module, but because I don't know the keywords, I'm not finding the results in my search queries.
I want to create a function whose intention can be clearly described in the function's name but whose parameters are flexible. I want to write the function to be generic enough that it can complete the intention with whatever parameters it receives from whichever caller.
Let's take a function do_foo.
do_foo can take in some_obj, whose attributes allow do_foo to do its work. Additionally, do_foo can take in just the individual attributes it cares about, like obj_attr0 or obj_attr1, and perform the same work. In both cases, the expected result is the same.
So this would look something like this:
class SomeObj:
    def __init__(self, obj_attr0, obj_attr1, obj_attrN):
        self.obj_attr0 = obj_attr0
        self.obj_attr1 = obj_attr1
        self.obj_attrN = obj_attrN  # denotes an N number of attributes

def do_foo(params):
    # unpack params. do_foo requires obj_attr0 and obj_attr1, so it searches
    # for them in the params iterable
    # determine which data is passed in
    # execute the work the same way regardless of what form the data is passed in
    pass

obj_attr0 = None
obj_attr1 = None
obj_attrN = None
some_obj = SomeObj(obj_attr0, obj_attr1, obj_attrN)

# One can call do_foo either with a subset of the attributes that make up SomeObj,
# or with SomeObj itself if all the information is there. E.g.:
params = (some_obj,)
do_foo(params)
# OR
params = (obj_attr0, obj_attr1)
do_foo(params)
I know Python offers *args and **kwargs facilities that provide the flexibility described above. I'm looking for examples where the implementation lends itself to reducing pitfalls. What is a good way to implement the above? And if there are resources out there, what examples/articles/terms describe this style of programming? Clearly, I'm trying to write my interface functions to be generic and usable in multiple logic paths, where the users have their data in different forms and sticking to a specific parameter list is limiting.
Short answer:
You can use function decorators to do this
Long answer:
I have a concrete example for you. It might not be the prettiest code but it does something similar to what you are asking for.
Mini HTTP Testing library
I made a mini HTTP testing library because I write my REST HTTP tests in Python, and I realized that I was writing the same code again and again. So I made a more general setup.
The core
The core is kind of ugly and this is the part I don't want to write again and again.
Just skim this part quickly and check how it is used in the interface section.
Then if you like it you can go back and try to understand how it is all tied together.
# base.py
import json, requests, inspect

# This function drops keyword arguments that requests.Session.request does not accept
def request(*args, **kwargs):
    allowed = inspect.signature(requests.Session.request).parameters
    return {k: v for (k, v) in kwargs.items() if k in allowed}

def response(r, code):
    if r.status_code != code:
        print(r.text)
        return
    data = r.json()
    if data:
        print(json.dumps(data, indent=2, ensure_ascii=False))
    return data

# This is the core function. It is not pretty, but it creates all the abstraction
# in multiple levels of decoration.
def HTTP(base_url):
    def outer(func_one):
        def over(*args_one, **kwargs_one):
            req, url, code = func_one(*args_one, **kwargs_one)
            url = base_url + url
            def inner(func_two):
                def under(*args_two, **kwargs_two):
                    allowed = inspect.signature(func_two).parameters
                    kwparams = {k: v for (k, v) in kwargs_two.items() if k in allowed}
                    from_inner = func_two(*args_two, **kwparams)
                    u = url.format(id=kwargs_two.pop('_id')) if '{id}' in url else url
                    r = req(u, **request(**kwargs_two, **from_inner))
                    return response(r, code)
                return under
            return inner
        return over
    return outer
The interface
The interface functions are each decorated with the HTTP function, which makes them HTTP caller functions. They are still abstract, since each returns a function.
Note: "interface" is just what I call it; these are really just functions that return functions, based on the HTTP decorator.
BASE_URL = "https://example.com"

@HTTP(BASE_URL)
def POST(url, code=200): return requests.post, url, code

@HTTP(BASE_URL)
def PUT(url, code=200): return requests.put, url, code

@HTTP(BASE_URL)
def DELETE(url, code=200): return requests.delete, url, code

@HTTP(BASE_URL)
def GET(url, code=200): return requests.get, url, code
A middleware function
When one of the interface functions is also decorated with this one, it needs a token.
def AUTH(func):
    def inner(token, *args, **kwargs):
        headers = {'Authorization': f'bearer {token}'}
        return func(*args, **kwargs, headers=headers)
    return inner
The implementation
The interface can be used for many implementations.
Here I use the interface of POST, PUT, GET and DELETE for the user model.
This is the final decoration, and the functions returned will actually return content instead of other functions.
# users.py
from httplib.base import (
    POST,
    GET,
    DELETE,
    PUT,
    AUTH,
    request
)

@POST('/users', 200)
def insert(user):
    return request(json=user)

@AUTH
@GET('/users')
def find(_filter={}):
    return request(params=_filter)

@AUTH
@GET('/users/{id}')
def find_one(_id):
    return request()

@AUTH
@DELETE('/users/{id}')
def delete(_id):
    return request()

@AUTH
@PUT('/users/{id}')
def update(_id, updates={}):
    return request(json=updates)
Operation
Here you can see how the users' delete, insert, and find functions work.
from httplib import users

def create_and_delete_users(token, n): return [
    users.delete(token, _id=x['user']['id'])
    for x in [
        users.insert(user={
            'username': f'useruser{str(i).zfill(2)}',
            'password': 'secretpassword',
            'email': f'useruser{str(i).zfill(2)}@mail.com',
            'gender': 'male',
        }) for i in range(n)]
]

def find_all_and_then_find_each(token): return [
    users.find_one(token, _id=x['id'])
    for x in users.find(token)['data']
]
I hope this was helpful.
I'm currently trying to mock a patch request to a server, making use of the ANY helper from the mock library. I have the following code:
@patch('path_to_patch.patch')
def test_job_restarted_succesfully(mock_patch):
    make_patch_call()
    mock_patch.assert_called_with(url=ANY, payload=ANY, callback=ANY, async_run=ANY, kwargs=ANY)
I'm getting the following error:
AssertionError: Expected call: patch(async_run=<ANY>, callback=<ANY>, kwargs=<ANY>, payload=<ANY>, url=<ANY>)
E Actual call: patch(async_run=True, callback=<function JobSvc.send_job_patch_request.<locals>.retry_on_404 at 0x000002752B873168>, payload={'analyzer': {'state': 'started'}, 'meta': {}}, svc_auth=UUID('40ed1a00-a51f-11eb-b1ed-b46bfc345269'), url='http://127.0.0.1:8080/rarecyte/1.0/jobs/slide1#20210422_203831_955885')
I found ANY in the docs given below and can't figure out why assert_called_with() is expecting the actual parameters of the call.
Here is the relevant section in the docs: https://docs.python.org/3/library/unittest.mock.html#any
EDIT:
The make_patch_call() ultimately calls this patch function after computing all the parameters needed for the patch function.
def patch(self, url, payload, callback=None, async_run=False, **kwargs):
    payload = self._serialize_payload(payload)
    func = self._do_async_request if async_run else self._do_request
    return func('patch', (url, payload), callback, kwargs)
For assert_called_with, the arguments and the used keywords have to exactly match. Substituting an argument for ANY will always match the argument value, but the keyword must still match the used keyword. The generic keywords args and kwargs are no exception: if you expect them, they have to be used in the call to match.
In this case, the kwargs keyword in the expected call:
mock_patch.assert_called_with(url=ANY, payload=ANY, callback=ANY, async_run=ANY, kwargs=ANY)
has to be changed to the really used keyword svc_auth:
mock_patch.assert_called_with(url=ANY, payload=ANY, callback=ANY, async_run=ANY, svc_auth=ANY)
Note that the same applies for keyword versus positional arguments, which is a common pitfall. If you have a function foo(bar), then you have to expect the call exactly as it is made, e.g.:
@mock.patch("my_module.foo")
def test_foo(patched):
    foo(42)
    patched.assert_called_with(ANY)      # passes
    patched.assert_called_with(bar=ANY)  # fails

    foo(bar=42)
    patched.assert_called_with(ANY)      # fails
    patched.assert_called_with(bar=ANY)  # passes
Good day everyone,
I am having a really strange issue with Flask's RequestParser.parse_args(). Below is a snippet of my __init__ method of the relevant class:
def __init__(self, arg1, arg2):
    self.arg1 = arg1
    self.arg2 = arg2
    self.arg_parser = reqparse.RequestParser()
    self.arg_parser.add_argument('optional_arg_from_url', type=bool, default=False)
Notice I only add 1 argument to self.arg_parser. Then, in the post method, I want to retrieve optional_arg_from_url with arg_parser.parse_args() as follows:
def post(self):
    # Authorization in headers
    token = request.headers.get('Authorization').split()[1]
    # Parse args
    args = self.arg_parser.parse_args()
    optional_arg_from_url = args['optional_arg_from_url']
Now, an example of the request url containing the optional arg would look as follows:
http://localhost:8080/path/to/endpoint/?optional_arg_from_url=True
The error that I am getting is ValueError: not enough values to unpack (expected 2), which is raised when args = self.arg_parser.parse_args() is called. I don't understand why 2 values are expected, since I only add one argument to the parser. Also, why would a ValueError be raised when I try to parse the args? Shouldn't it just parse all the args regardless?
Another interesting thing is that the corresponding unit tests pass, regardless of whether the optional arg is included in the url or not. The code ALSO works if I do not include the optional arg in the url (in which case the arg defaults to False and gets parsed correspondingly). It is only when I try to override the arg's value to True within the request url that a ValueError is raised.
I also made sure that optional_arg_from_url is spelled exactly the same everywhere, so that is not the issue.
Any assistance is appreciated.
I'm trying to inspect a request's argument before the get() is invoked. I have a route which is described as so:
user_route = r"/users/key=(?P<key>\w+)"
app = web.Application([
    web.URLSpec(user_route, user_manager.UserHandler), ..])
Next, (in the handler) prepare() is used to inspect the request before get().
def prepare(self):
    # inspect request arguments
    print(self.request.arguments)  # prints "{}"
The problem I'm having is that I cannot access the arguments from prepare(). The last statement prints an empty dict. My get() successfully uses the arguments as they are passed in the function like this:
def get(self, key):
    print(key)  # works
How do I access arguments in prepare()? I have also tried self.get_argument('key'), which gives the error "400 GET .... Missing argument key", but the requested URL does have a key argument in it.
In your code key is not a GET argument, it's a part of the path. tornado.web.URLSpec passes any capturing groups in the regex into the handler's get/post/etc methods as arguments.
tornado.web.RequestHandler has RequestHandler.path_args and RequestHandler.path_kwargs, which contain the positional and keyword arguments from URLSpec. Those are available in the prepare method:
def prepare(self):
    # inspect request arguments
    print(self.path_kwargs)  # prints {"key": "something"}
As Gennady Kandaurov mentioned, you passed the key as part of the web.URLSpec path, and you can access it using Tornado's self.path_kwargs. If you wanted to pass it as an argument, you could use RequestHandler.get_argument to get the argument in your get method and self.request.arguments in your prepare method to access it, as you initially intended.
Your code could be as follow:
class Application(tornado.web.Application):
    def __init__(self):
        user_route = r"/users"
        app = tornado.web.Application([
            tornado.web.url(user_route, user_manager.UserHandler), ..])

class UserHandler(tornado.web.RequestHandler):
    def get(self):
        key = self.get_argument('key')
        print(key)

    def prepare(self):
        # inspect request arguments
        print(self.request.arguments)
Please let me know if you have any further question.
It's generally bad to use a character like = in a URL path fragment, since that character is generally used for query arguments. Either don't use it:
`r"/users/(?P<key>\w+)"`
or turn it into a proper query argument
`r"/users/\?key=(?P<key>\w+)"`
Otherwise it's confusing for a maintainer to try to figure out which scheme you intended to use (did you really want to route a path fragment called /key%3D\w+? Or did you really mean you wanted a query arg and forgot the ??)
In any case, for URL path fragment matching ("slug-matching"), using argument unpacking can let you access them in the handler too, without having to invoke path_kwargs:
# `r"/users/(?P<key>\w+)"`
class Handler(RequestHandler):
    def get(self, **kwargs):
        key = kwargs.get('key')
        # I prefer dict.get() here, since if you change the `+` to a `*`,
        # it's possible that no key was supplied, and kwargs['key']
        # will throw a KeyError exception
If you intended to use a query argument for key, then @afxentios's answer is appropriate. (You can also use self.get_query_argument('key'), which will explicitly only look for query arguments in the URL, whereas get_argument also checks the request BODY for a www-url-encoded argument, such as if you POST.)
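For the query-argument route, a minimal sketch (the handler and route names here are hypothetical) of reading key both in prepare() and get():

import tornado.web

class UserHandler(tornado.web.RequestHandler):
    def prepare(self):
        # Query (and body) arguments are already available here, before get() runs.
        print(self.request.arguments)          # e.g. {'key': [b'abc']}
        print(self.get_query_argument('key'))  # 'abc'

    def get(self):
        # Explicitly read only from the query string.
        key = self.get_query_argument('key')
        self.write({'key': key})

# Hypothetical route, requested as /users?key=abc
app = tornado.web.Application([(r"/users", UserHandler)])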
I came across the __getattr__ special method and was wondering when it would be used. I had a hard time thinking of a practical use from the documentation
http://docs.python.org/reference/datamodel.html#. What would be an actual example of how it could be used and be useful in code?
One example is to use object notation with dictionaries. For example, consider a dictionary
myDict = {'value': 1}
Typically in Python one accesses the 'value' variable as
myDict['value']
which will print 1 at the Python interpreter. However, one may wish to use the myDict.value notation. This may be achieved by using the following class:
class DictAsMember(dict):
    def __getattr__(self, name):
        value = self[name]
        if isinstance(value, dict):
            value = DictAsMember(value)
        return value

my_dict = DictAsMember()
my_dict['property'] = {'sub_property': 1}
print(my_dict.property.sub_property)  # 1 will be printed
An example usage would be to create a simple wrapper around some object, for example to log the calls or modify its behavior without inheriting from it and without having to implement the whole interface of the object.
There are several well-documented examples out there, for example http://western-skies.blogspot.fr/2008/02/complete-example-of-getattr-in-python.html.
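As a minimal sketch of that idea (not taken from the linked article), here is a wrapper that logs every delegated attribute access and otherwise forwards it to the wrapped object:

class LoggingWrapper:
    def __init__(self, wrapped):
        # _wrapped is a normal instance attribute, so looking it up never
        # triggers __getattr__.
        self._wrapped = wrapped

    def __getattr__(self, name):
        # Called only when `name` is not found on the wrapper itself.
        attr = getattr(self._wrapped, name)
        print(f"accessed {name!r} on {self._wrapped!r}")
        return attr

wrapped = LoggingWrapper([1, 2, 3])
wrapped.append(4)        # logs the access, then appends to the list
print(wrapped._wrapped)  # [1, 2, 3, 4]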
Since __getattr__ is only called when an attribute is not found, it can be a useful way to define an alternate place to look up an attribute, or to give default values, similar to a defaultdict.
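For example, a hypothetical class that falls back to a default for any attribute that was never set:

class Defaults:
    def __init__(self, **known):
        self.__dict__.update(known)

    def __getattr__(self, name):
        # Only reached when normal attribute lookup fails.
        return 0

cfg = Defaults(retries=3)
print(cfg.retries)  # 3 (found normally, __getattr__ is not called)
print(cfg.timeout)  # 0 (missing, so __getattr__ supplies the default)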
You could also emulate a base class higher than all the others in an object's MRO, by delegating all the lookups here to another object (though doing this you could potentially have an infinite loop if the other object is delegating the attribute back).
There is also __getattribute__, which is related in that it is called anytime any attribute is looked up on the object.
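A small illustrative sketch (hypothetical class) of the difference: __getattribute__ runs on every lookup, while __getattr__ runs only when the normal lookup fails:

class Traced:
    def __getattribute__(self, name):
        print(f"__getattribute__({name!r})")  # every attribute lookup
        return object.__getattribute__(self, name)

    def __getattr__(self, name):
        print(f"__getattr__({name!r})")       # only after normal lookup fails
        return None

t = Traced()
t.x = 1
t.x        # __getattribute__('x')
t.missing  # __getattribute__('missing'), then __getattr__('missing')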
Edit: This is about the built-in function getattr, not the __getattr__ method.
I needed to do this for a REST client using bearer tokens. I wrapped Requests's Session object into my own interface so I could always send the auth header, and (more relevantly) make HTTP requests to the same site, just using the URL's path.
import requests

class RequestsWrapper:
    def __init__(self, base_url):
        self.client = requests.Session()
        self.client.headers.update({'Authorization': 'myauthtoken'})
        self.base_url = base_url

    def _make_path_request(self, http_method, path, **kwargs):
        """
        Use the http_method string to find the requests.Session instance's
        method.
        """
        method_to_call = getattr(self.client, http_method.lower())
        return method_to_call(self.base_url + path, **kwargs)

    def path_get(self, path, **kwargs):
        """
        Sends a GET request to base_url + path.
        """
        return self._make_path_request('get', path, **kwargs)

    def path_post(self, path, **kwargs):
        """
        Sends a POST request to base_url + path.
        """
        return self._make_path_request('post', path, **kwargs)

    def path_put(self, path, **kwargs):
        """
        Sends a PUT request to base_url + path.
        """
        return self._make_path_request('put', path, **kwargs)

    def path_delete(self, path, **kwargs):
        """
        Sends a DELETE request to base_url + path.
        """
        return self._make_path_request('delete', path, **kwargs)
Then, I could just make a request based on the path:
# Initialize
myclient = RequestsWrapper("http://www.example.com")

# Make a GET request to http://www.example.com/api/spam/eggs
response = myclient.path_get("/api/spam/eggs")

# Print the response JSON data
if response.ok:
    print(response.json())