mock object library ANY not working as expected - python

I'm currently trying to mock a PATCH request to a server, making use of the ANY helper in the mock object library. I have the following code:
@patch('path_to_patch.patch')
def test_job_restarted_successfully(mock_patch):
    make_patch_call()
    mock_patch.assert_called_with(url=ANY, payload=ANY, callback=ANY, async_run=ANY, kwargs=ANY)
I'm getting the following error:
AssertionError: Expected call: patch(async_run=<ANY>, callback=<ANY>, kwargs=<ANY>, payload=<ANY>, url=<ANY>)
E Actual call: patch(async_run=True, callback=<function JobSvc.send_job_patch_request.<locals>.retry_on_404 at 0x000002752B873168>, payload={'analyzer': {'state': 'started'}, 'meta': {}}, svc_auth=UUID('40ed1a00-a51f-11eb-b1ed-b46bfc345269'), url='http://127.0.0.1:8080/rarecyte/1.0/jobs/slide1#20210422_203831_955885')
I found ANY in the docs linked below and can't figure out why assert_called_with() is still failing on the actual parameters of the call.
Here is the relevant section in the docs: https://docs.python.org/3/library/unittest.mock.html#any
EDIT:
make_patch_call() ultimately calls this patch function after computing all the parameters it needs:
def patch(self, url, payload, callback=None, async_run=False, **kwargs):
    payload = self._serialize_payload(payload)
    func = self._do_async_request if async_run else self._do_request
    return func('patch', (url, payload), callback, kwargs)

For assert_called_with, the arguments and the keywords used have to match exactly. Substituting ANY for an argument will match any value, but the keyword itself must still match the keyword actually used. The generic keywords args and kwargs are no exception: if you expect them, they have to appear in the call for the assertion to match.
In this case, the kwargs keyword in the expected call:
mock_patch.assert_called_with(url=ANY, payload=ANY, callback=ANY, async_run=ANY, kwargs=ANY)
has to be changed to the really used keyword svc_auth:
mock_patch.assert_called_with(url=ANY, payload=ANY, callback=ANY, async_run=ANY, svc_auth=ANY)
Note that the same applies for keyword versus positional arguments, which is a common pitfall. If you have a function foo(bar), then you have to expect the call exactly as it is made, e.g.:
@mock.patch("my_module.foo")
def test_foo(patched):
    foo(42)
    patched.assert_called_with(ANY)      # passes
    patched.assert_called_with(bar=ANY)  # fails

    foo(bar=42)
    patched.assert_called_with(ANY)      # fails
    patched.assert_called_with(bar=ANY)  # passes
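A quick way to see which keywords the production code actually used is to inspect the mock's call_args. Below is a minimal, self-contained sketch; the call values are invented for illustration:
from unittest import mock
from unittest.mock import ANY

m = mock.Mock()
m(url='http://example.com', payload={}, callback=None, async_run=True, svc_auth='token')

# call_args shows exactly which keywords were used in the real call
print(m.call_args)

# Passes: every expected keyword matches the actual call, and ANY matches any value
m.assert_called_with(url=ANY, payload=ANY, callback=ANY, async_run=ANY, svc_auth=ANY)

# Would raise AssertionError: 'kwargs' was never used as a keyword in the call
# m.assert_called_with(url=ANY, payload=ANY, callback=ANY, async_run=ANY, kwargs=ANY)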

Related

Flask reqparse.RequestParser ValueError with parse_args()

Good day everyone,
I am having a really strange issue with Flask's RequestParser.parse_args(). Below is a snippet of my __init__ method of the relevant class:
def __init__(self, arg1, arg2):
    self.arg1 = arg1
    self.arg2 = arg2
    self.arg_parser = reqparse.RequestParser()
    self.arg_parser.add_argument('optional_arg_from_url', type=bool, default=False)
Notice I only add 1 argument to self.arg_parser. Then, in the post method, I want to retrieve optional_arg_from_url with arg_parser.parse_args() as follows:
def post(self):
    # Authorization in headers
    token = request.headers.get('Authorization').split()[1]
    # Parse args
    args = self.arg_parser.parse_args()
    optional_arg_from_url = args['optional_arg_from_url']
Now, an example of the request url containing the optional arg would look as follows:
http://localhost:8080/path/to/endpoint/?optional_arg_from_url=True
The error that I am getting is ValueError: not enough values to unpack: expected 2, which is raised when args = self.arg_parser.parse_args() is called. I don't understand why 2 values are expected, since I only add one argument to the parser. Also, why would a value error be raised when I try to parse the args? Shouldn't it just parse all the args regardless?
Another interesting thing is that the corresponding unit tests pass regardless of whether the optional arg is included in the url or not. The code ALSO works if I do not include the optional arg in the url (which means the arg defaults to False and gets parsed correspondingly). It is only when I try to override the arg's value to True within the request url that the ValueError is raised.
I also made sure that optional_arg_from_url is spelled exactly the same everywhere, so that is not the issue.
Any assistance is appreciated.
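For anyone trying to narrow this down, here is a minimal, self-contained reproduction sketch of the setup described above. The route, the Endpoint class, and the use of Flask's test client are placeholders and assumptions, not the asker's real code; it just sends the same kind of request so parse_args() can be observed in isolation:
from flask import Flask
from flask_restful import Api, Resource, reqparse

app = Flask(__name__)
api = Api(app)

class Endpoint(Resource):
    def __init__(self):
        # Same parser setup as in the question
        self.arg_parser = reqparse.RequestParser()
        self.arg_parser.add_argument('optional_arg_from_url', type=bool, default=False)

    def post(self):
        args = self.arg_parser.parse_args()
        return {'optional_arg_from_url': bool(args['optional_arg_from_url'])}

api.add_resource(Endpoint, '/path/to/endpoint/')

if __name__ == '__main__':
    with app.test_client() as client:
        # Query argument omitted -> should fall back to the default
        print(client.post('/path/to/endpoint/').get_json())
        # Query argument supplied, as in the failing request
        print(client.post('/path/to/endpoint/?optional_arg_from_url=True').get_json())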

Can I send **kwargs in place of set parameter in requests.post?

I'm trying to set up a base class that contains common REST methods, to be used by more specific methods in a later testing framework. I decided that instead of creating different requests.post methods corresponding to the user passing in data, json, or files parameters, I would make one method with the fixed parameters url and headers and let the user pass in whatever else they want via **kwargs. However, I'm not sure I can even use **kwargs in this context, as it seems the requests module expects a positional argument. This is what I have so far:
class Action:
    def __init__(self, url, requestHeaders, **kwargs):
        self.url = url
        self.requestHeaders = requestHeaders
        self.kwargs = kwargs

    def postAction(self):
        response = requests.post(self.url, headers=self.requestHeaders, self.kwargs)
        resultCode = response.status_code
        resultMessage = response.text
        print(resultCode)
        print(resultMessage)
        return resultCode, resultMessage
For example, kwargs might contain files={'csv': ('/path/to/csv.csv', open('csv.csv', 'rb'), 'text/csv')} and verify=False. In another request, files might be replaced with data. However, when I try to test, I end up with this:
Traceback (most recent call last):
File "/home/user1/test/action.py", line 24
response = requests.post(self.url, headers=self.requestHeaders, self.kwargs)
SyntaxError: positional argument follows keyword argument
Is what I'm trying to do possible? If not, are there any alternatives?
If you want the kwargs to be treated as keyword arguments given to post, you need to use ** to apply them:
response = requests.post(self.url, headers=self.requestHeaders, **self.kwargs)
This, similar to sequence unpacking (*seq), will cause the data to be "expanded" into the argument list of the call.
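For completeness, here is a minimal sketch of the corrected method. The class and attribute names mirror the question; only the failing line changes, and the example kwargs in the comment are made up:
import requests

class Action:
    def __init__(self, url, requestHeaders, **kwargs):
        self.url = url
        self.requestHeaders = requestHeaders
        self.kwargs = kwargs  # e.g. {'data': {...}, 'verify': False} or {'files': {...}}

    def postAction(self):
        # **self.kwargs expands the stored dict into keyword arguments,
        # so requests.post sees data=..., files=..., verify=... as usual
        response = requests.post(self.url, headers=self.requestHeaders, **self.kwargs)
        return response.status_code, response.text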

Python Tornado get URL arguments

I'm trying to inspect a request's arguments before get() is invoked. I have a route which is defined as follows:
user_route = r"/users/key=(?P<key>\w+)"
app = web.Application([
web.URLSpec(user_route, user_manager.UserHandler), ..])
Next, (in the handler) prepare() is used to inspect the request before get().
def prepare(self):
    # inspect request arguments
    print(self.request.arguments)  # prints "{}"
The problem I'm having is that I cannot access the arguments from prepare(). The last statement prints an empty dict. My get() successfully uses the arguments as they are passed in the function like this:
def get(self, key):
    print(key)  # works
How do I access the arguments in prepare()? I have also tried self.get_argument('key'), which gives the error "400 GET .... Missing argument key", but the requested URL does have a key argument in it.
In your code, key is not a GET argument, it's part of the path. tornado.web.URLSpec passes any capturing groups in the regex into the handler's get/post/etc. methods as arguments.
tornado.web.RequestHandler has RequestHandler.path_args and RequestHandler.path_kwargs, which contain the positional and keyword arguments from URLSpec. Those are available in the prepare method:
def prepare(self):
    # inspect request arguments
    print(self.path_kwargs)  # prints {"key": "something"}
As Gennady Kandaurov mentioned, you passed the key as part of the web.URLSpec path, and you can access it using Tornado's self.path_kwargs. If you wanted to pass it as an argument, you could use RequestHandler.get_argument to get the argument in your get method and use self.request.arguments in your prepare method to access it, as you initially intended.
Your code could be as follow:
class Application(tornado.web.Application):
    def __init__(self):
        user_route = r"/users"
        app = tornado.web.Application([
            tornado.web.url(user_route, user_manager.UserHandler), ..])


class UserHandler(tornado.web.RequestHandler):
    def get(self):
        key = self.get_argument('key')
        print(key)

    def prepare(self):
        # inspect request arguments
        print(self.request.arguments)
Please let me know if you have any further questions.
It's generally a bad idea to use a character like = in a URL path fragment, since it is normally reserved for query arguments. Either don't use it:
`r"/users/(?P<key>\w+)"`
or turn it into a proper query argument
`r"/users/\?key=(?P<key>\w+)"`
Otherwise it's confusing for a maintainer to try to figure out which scheme you intended to use (did you really want to route a path fragment called /key%3D\w+? Or did you really mean you wanted a query arg and forgot the ??)
In any case, for URL path fragment matching ("slug-matching"), using argument unpacking can let you access them in the handler too, without having to invoke path_kwargs:
# `r"/users/(?P<key>\w+)"`
class Handler(RequestHandler):
def get(self, **kwargs):
key = kwargs.get('key')
# I prefer dict.get() here, since if you change the `+` to a `*`,
# it's possible that no key was supplied, and kwargs['key']
# will throw a KeyError exception
If you intended to use a query argument for key, then @afxentios's answer is appropriate. (You can also use self.get_query_argument('key'), which explicitly only looks for query arguments in the URL, whereas get_argument also checks the request BODY for a www-url-encoded argument, such as when you POST.)
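Here is a minimal sketch of that query-argument variant; the handler name and route are placeholders, assuming the route is r"/users" and the request is GET /users?key=something:
import tornado.web

class UserHandler(tornado.web.RequestHandler):
    def prepare(self):
        # Query arguments are visible here, unlike path captures
        print(self.request.arguments)  # {'key': [b'something']}

    def get(self):
        # get_query_argument only looks at the query string, never the request body
        key = self.get_query_argument('key', default=None)
        self.write({'key': key})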

Python - Monkey Patching weird bug

My fake mock class looks like the following:
class FakeResponse:
    method = None  #
    url = None     # static class variables

    def __init__(self, method, url, data):  # , response):
        self.status_code = 200  # always return 200 OK
        FakeResponse.method = method  #
        FakeResponse.url = url        #

    @staticmethod
    def check(method, url, values):
        """ checks method and URL.
        """
        print "url fake: ", FakeResponse.url
        assert FakeResponse.method == method
        assert FakeResponse.url == url
I also have an autouse fixture which is applied to all the test cases:
@pytest.fixture(autouse=True)
def no_requests(monkeypatch):
    monkeypatch.setattr('haas.cli.do_put',
                        lambda url, data: FakeResponse('PUT', url, data))
    monkeypatch.setattr("haas.cli.do_post",
                        lambda url, data: FakeResponse('POST', url, data))
    monkeypatch.setattr("haas.cli.do_delete",
                        lambda url: FakeResponse('DELETE', url, None))
I am using Py.test to test the code.
Some example test cases are:
class Test:
    # test case passes
    def test_node_connect_network(self):
        cli.node_connect_network('node-99', 'eth0', 'hammernet')
        FakeResponse.check('POST', 'http://abc:5000/node/node-99/nic/eth0/connect_network',
                           {'network': 'hammernet'})

    # test case fails
    def test_port_register(self):
        cli.port_register('1')  # This makes an indirect REST call to the original API
        FakeResponse.check('PUT', 'http://abc:5000/port/1', None)

    # test case fails
    def test_port_delete(self):
        cli.port_delete('port', 1)
        FakeResponse.check('DELETE', 'http://abc:5000/port/1', None)
A sample error message which I get:
method = 'PUT', url = 'http://abc:5000/port/1', values = None

    @staticmethod
    def check(method, url, values):
        """ checks method and URL.
        'values': if None, verifies no data was sent.
                  if list of (name,value) pairs, verifies that each pair is in 'values'
        """
        print "url fake: ", FakeResponse.url
>       assert FakeResponse.method == method
E       assert 'POST' == 'PUT'
E         - POST
E         + PUT

haas/tests/unit/cli_v1.py:54: AssertionError
--------------------------------------------- Captured stdout call -------------------------------------
port_register <port>
Register a <port> on a switch
url fake: http://abc:5000/node/node-99/nic/eth0/connect_network
--------------------------------------------- Captured stderr call -------------------------------------
Wrong number of arguements. Usage:
Whereas, if I call the second test case in the following way, considering that the check function takes a "self" argument and @staticmethod is not used, then the test case works:
def test_port_register(self):
    cli.port_register('1')
    fp = FakeResponse('PUT', 'http://abc:5000/port/1', None)  # Create a FakeResponse class instance
    fp.check('PUT', 'http://abc:5000/port/1', None)           # Just call the check function with the same arguments
Questions:
Are there any side effects of using monkey patching and @staticmethod?
How is the url defined in a previous test function being used in the next function call?
Shouldn't the arguments be scoped to disallow the above unwanted behavior?
Is there a better way to monkey patch?
Sorry for the long post; I have been trying to resolve this for a week and wanted some perspective from other programmers.
The issue was not having the right signature for one of the functions. It was resolved by changing the argument passed to the monkeypatched function to an empty dictionary {} instead of a None value, which is specific to my code.
The reason I was initially hitting the issue was that the current call (cli.port_register) was failing when the parameters were passed to port_register itself, so the assertion against FakeResponse picked up the argument values left over from a previous test.
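To illustrate the state leakage: because FakeResponse stores method and url as class attributes, they survive between tests, so if a later test's CLI call fails before it reaches the patched do_put/do_post/do_delete, check() still sees the previous test's values. A stripped-down sketch (the calls and values are made up for illustration):
class FakeResponse:
    method = None
    url = None

    def __init__(self, method, url, data):
        FakeResponse.method = method
        FakeResponse.url = url

# test 1 makes a call, setting the class-level state
FakeResponse('POST', 'http://abc:5000/node/node-99/nic/eth0/connect_network', None)

# test 2's CLI call fails early and never constructs a FakeResponse,
# so the class attributes still hold test 1's values:
print(FakeResponse.method, FakeResponse.url)  # POST http://abc:5000/node/node-99/nic/eth0/connect_network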

How do modules work in Python?

I was wondering how a module really works in Python. For example, I have this code:
import requests
r = requests.get("http://www.google.com")
print r.status_code
Now, according to my understanding, the requests module should have a Python file containing a class called "get", and within the "get" class there must be a member variable called "status_code".
So when I create the object "r", I get the variable status_code for it.
However, when I looked at all the files that come in the package, I could not find any class named "get".
I could, however, find a function called "get" under a class called "response". But since we did not create the object as an instance of the "response" class, how can we access the "get" function inside it?
I think I am missing a key concept here, can someone point it out for me please?
Thanks
When you import requests, the file __init__.py is executed. If you examine that file, you will find this line:
from .api import request, get, head, post, patch, put, delete, options
which means that from api.py you are importing the get() function:
def get(url, **kwargs):
    kwargs.setdefault('allow_redirects', True)
    return request('get', url, **kwargs)
And as you can see, it calls the request function from api.py, which looks like:
def request(method, url, **kwargs):
    session = sessions.Session()
    return session.request(method=method, url=url, **kwargs)
That creates a Session object defined inside sessions.py and then calls its request method. That method eventually calls send(), which returns a Response object defined in the class Response inside models.py (I copy the first lines):
class Response(object):
    def __init__(self):
        super(Response, self).__init__()

        self._content = False
        self._content_consumed = False

        #: Integer Code of responded HTTP Status.
        self.status_code = None
        ...
Here is where status_code is defined, so when you invoke r = requests.get("http://www.google.com") you get back such a Response object, and then you can access its status_code.
Your understanding is not entirely correct.
The requests module is an object; .get is then an attribute lookup on that object; it has to be a callable object because you try to call it with the (...) syntax. That means it can be a class, a function, or any other object with a __call__ method.
That callable returns something; all callables do. For a class, generally an instance is returned, but for a function that can be any Python object. Whatever .get() does, it returns an object that has a .status_code attribute, or has a .__getattr__ method that returns something when called with the name status_code.
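For instance, any object with a __call__ method can be used this way. A small illustrative sketch, not related to how requests is actually implemented:
class Getter:
    def __call__(self, url):
        return "pretend response for " + url

get = Getter()        # an instance, not a function
print(callable(get))  # True
print(get("http://www.google.com"))  # instances with __call__ are called like functions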
In this specific case, get() is a function, which has been imported into the requests/__init__.py package initializer module. This function, indirectly, creates a Session() instance, and calls the .request() method on that instance. That method, eventually, returns a Response instance, which does have a .status_code attribute.
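In other words, requests.get("http://www.google.com") behaves roughly like the following simplified sketch (not the library's exact code):
import requests

with requests.Session() as session:
    r = session.request(method='get', url='http://www.google.com')

print(r.status_code)  # the attribute set on the Response instance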
