Testing and mocking a threaded function within a view - python

I'm trying to unit test a function that I run threaded within a view. Whenever I try to mock it, the call always goes to the original function, not the mocked one.
The code I'm testing, from the view module:
def restart_process(request):
    batch_name = request.POST.get("batch_name", "")
    if batch_name:
        try:
            batch = models.Batch.objects.get(num=batch_name)
        except models.Batch.DoesNotExist:
            logger.warning("Trying to restart a batch that does not exist: " + batch_name)
            return HttpResponse(404)
        else:
            logger.info(batch_name + " restarted")
            try:
                t = threading.Thread(target=restart_from_last_completed_state, args=(batch,))
                t.daemon = True
                t.start()
            except RuntimeError:
                return HttpResponse(500, "Threading error")
            return HttpResponse(200)
    else:
        return HttpResponse(400)
The test function:
class ThreadTestCases(TransactionTestCase):
    def test_restart_process(self):
        client = Client()
        mock_restart_from_last_completed_state = mock.Mock()
        with mock.patch("processapp.views.restart_from_last_completed_state", mock_restart_from_last_completed_state):
            response = client.post('/batch/restart/', {"batch_name": "BATCH555"})
        self.assertEqual(response.status_code, 200)
        mock_restart_from_last_completed_state.assert_called_once()
The URL:
url(r'^batch/restart/$', views.restart_from_last_completed_state, name="restart_batch"),
I always get this error:
ValueError: The view processapp.processing.process_runner.restart_from_last_completed_state didn't return an HttpResponse object. It returned None instead.
I put a print command in the original function (restart_from_last_completed_state) and it always runs, so the mocking is not taking place.
The error seems to treat the function as a view, although it is not one.
I'm not sure where the error is: the threading, the testing, something else?

The URL pattern was wrong: it was supposed to be views.restart_process, not views.restart_from_last_completed_state.
A copy/paste error, as so often...
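For reference, the corrected URL pattern simply points at the view instead of the worker function:
url(r'^batch/restart/$', views.restart_process, name="restart_batch"),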

Related

mocked service call is not being used

I have a model object that looks like this:
class CatalogModel(BaseModel):
    @property
    def custom_service(self):
        return CustomService()

    async def get_offers(self, catalog_name):
        try:
            svc_response = self.custom_service.get_offers(catalog_name=catalog_name)()
        except BaseException:
            raise SalesForceException()
        return CustomOfferResponse().dump(svc_response)
I am trying to write a test for that get_offers function (which uses custom_service, which connects to Salesforce).
My test looks like this. I am using pytest, pytest-vcr, etc.
class TestCatalogModel:
    catalog_name = 'CATALOG_1'

    @freeze_time("2021-07-12")
    async def test_get_offers(self, loop, offers, offers_response):
        with MockUser(ident="test_model_get_offers"):
            with patch(
                "com.services.client.CustomService.get_offers", new=offers
            ):
                eo_offers = await CatalogModel().get_offers(self.catalog_name)
                assert offers_response == eo_offers
However, when executing the test, it fails with this error:
E vcr.errors.CannotOverwriteExistingCassetteException: Can't overwrite existing cassette ('/test_api/recordings/2021-07-12/test_get_offers_model/salesforce/auth/client/services_oauth2_token.yaml') in your current record mode ('none').
E No match for the request (<Request (POST) https://server.salesforce.com/services/oauth2/token>) was found.
E No similar requests, that have not been played, found.
During handling of the above exception, another exception occurred:
...model.py:17: in test_get_offers
eo_offers = await CatalogModel().get_offers(self.catalog_name)
model.py:24: in get_offers
raise SalesForceException()
E ...SalesForceException: Error from Salesforce.
As far as I understand, it is trying to connect to the real Salesforce service rather than using the mock. What is the problem?
I would avoid trying to partially patch methods on classes. Instead I would use inversion of control to allow mock instances to be used during tests. This avoids having to do any patching at all.
class CatalogModel(BaseModel):
    def __init__(self, custom_service=None):
        if custom_service is None:
            custom_service = CustomService()
        self.custom_service = custom_service

    async def get_offers(self, catalog_name):
        try:
            svc_response = self.custom_service.get_offers(catalog_name=catalog_name)()
        except BaseException:
            raise SalesForceException()
        return CustomOfferResponse().dump(svc_response)
Now writing the test becomes much simpler.
class TestCatalogModel:
    catalog_name = 'CATALOG_1'

    @freeze_time("2021-07-12")
    async def test_get_offers(self, loop, offers, offers_response):
        with MockUser(ident="test_model_get_offers"):
            custom_service = mock.Mock()
            custom_service.get_offers.return_value = offers
            model = CatalogModel(custom_service)
            eo_offers = await model.get_offers(self.catalog_name)
            assert offers_response == eo_offers
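If several tests need the same stand-in service, the mock setup could also live in a pytest fixture. This is only a sketch; the fixture name is an assumption and not part of the original answer:
import pytest
from unittest import mock

@pytest.fixture
def catalog_model(offers):
    # Fake service whose get_offers returns the canned offers payload
    custom_service = mock.Mock()
    custom_service.get_offers.return_value = offers
    # Inject it through the constructor added above
    return CatalogModel(custom_service)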

Who/how gets control of the program after an exception has occurred

I have always wondered who takes control of the program after an exception has been thrown. I was looking for a clear answer but did not find one. I have the following functions, each of which executes an API call that involves a network request, so I need to handle any possible errors with a try/except and possibly an else block (JSON responses must be parsed/decoded as well):
# This function runs first; if this fails, none of the other functions will run. Should return a JSON.
def get_summary():
    pass

# Gets executed after get_summary. Should return a string.
def get_block_hash():
    pass

# Gets executed after get_block_hash. Should return a JSON.
def get_block():
    pass

# Gets executed after get_block. Should return a JSON.
def get_raw_transaction():
    pass
I wish to implement a kind of retry functionality in each function, so that if it fails due to a timeout error, connection error, JSON decode error, etc., it will keep retrying without compromising the flow of the program:
def get_summary():
    try:
        response = requests.get(API_URL_SUMMARY)
    except requests.exceptions.RequestException as error:
        logging.warning("...")
        #
    else:
        # Once the response has been received, should the JSON be
        # decoded here, wrapped in a try/except/else,
        # or outside of this block?
        return response.text

def get_block_hash():
    try:
        response = requests.get(API_URL + "...")
    except requests.exceptions.RequestException as error:
        logging.warning("...")
        #
    else:
        return response.text

def get_block():
    try:
        response = requests.get(API_URL + "...")
    except requests.exceptions.RequestException as error:
        logging.warning("...")
        #
    else:
        #
        #
        #
        return response.text

def get_raw_transaction():
    try:
        response = requests.get(API_URL + "...")
    except requests.exceptions.RequestException as error:
        logging.warning("...")
        #
    else:
        #
        #
        #
        return response.text

if __name__ == "__main__":
    # summary = get_summary()
    # block_hash = get_block_hash()
    # block = get_block()
    # raw_transaction = get_raw_transaction()
    # ...
I want to keep the outermost part of the code clean (the block after if __name__ == "__main__":); I mean, I don't want to fill it with confusing try/except blocks, logging, etc.
I tried calling a function from within itself when an exception was thrown in any of those functions, but then I read about the stack limit and thought it was a bad idea; there should be a better way to handle this.
requests already retries by itself N times when I call the get method, where N is a constant in the source code (it is 100). But when the number of retries reaches 0, it will throw an error I need to catch.
Where should I decode the JSON response? Inside each function, wrapped in another try/except/else block, or in the main block? How can I recover from an exception and keep retrying the function that failed?
Any advice would be appreciated.
You could keep those in an infinite loop (to avoid recursion) and return once you get the expected response:
def get_summary():
    while True:
        try:
            response = requests.get(API_URL_SUMMARY)
        except requests.exceptions.RequestException as error:
            logging.warning("...")
            #
        else:
            # As winklerrr points out, try to return the transformed data as soon
            # as possible, so you should be decoding the JSON response here.
            try:
                json_response = json.loads(response.text)
            except ValueError as error:  # ValueError will catch any error when decoding the response
                logging.warning(error)
            else:
                return json_response
This function keeps executing until it receives the expected result (it reaches return json_response); otherwise it keeps trying again and again.
You can do the following
def my_function(iteration_number=1):
    try:
        response = requests.get(API_URL_SUMMARY)
    except requests.exceptions.RequestException:
        if iteration_number < iteration_threshold:
            return my_function(iteration_number + 1)
        else:
            raise
    except Exception:  # for all other exceptions, raise
        raise
    return json.loads(response.text)

my_function()
Where should I decode the JSON response?
Inside each function, wrapped in another try/except/else block, or in the main block?
As a rule of thumb: try to transform data as soon as possible into the format you want it in. It makes the rest of your code easier if you don't have to extract everything from a response object again all the time. So just return the data you need, in the simplest format you need it in.
In your scenario: you call the API in every function with the same call to requests.get(). Normally all the responses from an API have the same format, which means you could write an extra function that makes the API call for you and loads the response directly into a proper JSON object.
Tip: for working with JSON, make use of the standard library: import json
Example:
import json

def call_api(api_sub_path):
    response = requests.get(API_BASE_URL + api_sub_path)
    json_response = json.loads(response.text)

    # you could verify your result here already, e.g.
    if json_response["result_status"] == "successful":
        return json_response["result"]

    # or maybe throw an exception here, depends on your use case
    return json_response["some_other_value"]
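The individual functions could then become thin wrappers around call_api. The sub paths below are made-up placeholders, since the real endpoints aren't shown in the question:
def get_summary():
    return call_api("/summary")        # placeholder sub path

def get_block_hash():
    return call_api("/block-hash")     # placeholder sub path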
How can I recover from an exception and keep retrying the function that failed?
You could use a while loop for that:
def main(retries=100):  # default value if no value is given
    result = functions_that_could_fail(retries)
    if result:
        logging.info("Finished successfully")
        functions_that_depend_on_result_from_before(result)
    else:
        logging.info("Finished without result")

def functions_that_could_fail(retry):
    while retry:  # stays True as long as retry is bigger than 0
        try:
            # call all functions here so you just have to write one try-except block
            summary = get_summary()
            block_hash = get_block_hash()
            block = get_block()
            raw_transaction = get_raw_transaction()
        except Exception:
            retry -= 1
            if retry:
                logging.warning("Failed, but trying again...")
        else:
            # else gets only executed when no exception was raised in the try block
            logging.info("Success")
            return summary, block_hash, block, raw_transaction
    logging.error("Failed - won't try again.")
    return None

def functions_that_depend_on_result_from_before(result):
    ...  # use result here
So with the code from above, you (and maybe also other people who use your code) could start your program with:
if __name__ == "__main__":
    main()

    # or when you want to change the number of retries
    main(retries=50)
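If the same retry pattern is needed in several places, one option (a sketch that is not part of the original answer) is to factor the loop into a small decorator:
import functools
import logging

def with_retries(retries=100):
    """Retry the wrapped function until it succeeds or the attempts run out."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    logging.warning("Attempt %d of %d failed", attempt + 1, retries)
            logging.error("Giving up after %d attempts", retries)
            return None
        return wrapper
    return decorator

@with_retries(retries=50)
def get_summary():
    return call_api("/summary")  # placeholder sub path, as above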

Why is this code reporting "'function' object is not subscriptable"?

@pytest.fixture
def settings():
    with open('../config.yaml') as yaml_stream:
        return yaml.load(stream=yaml_stream)

@pytest.fixture
def viewers(settings):
    try:
        data = requests.get(settings['endpoints']['viewers']).json()
        return data[0]['viewers']
    except Exception:
        print('ERROR retrieving viewers')
        raise(SystemExit)

@pytest.fixture
def viewers_buffer_health(viewers):
    print(viewers)
    viewers_with_buffer_health = {}
    for viewer in viewers:
        try:
            data = requests.get(settings['endpoints']['node_buffer_health']).replace('<NODE_ID>', viewer)
        except Exception as e:
            print('ERROR retrieving buffer_health for {}'.format(viewer))
            raise(SystemExit)
        viewers_with_buffer_health[viewer] = data[0]['avg_buffer_health']
    return viewers_with_buffer_health
The fixture viewers_buffer_health fails every time on the requests call with 'function' object is not subscriptable.
Other times I have seen this error it has been because I had a variable and a function with the same name, but that is not the case here (or I'm completely blind).
Although it shouldn't matter, the output of viewers is a list like ['a4a6b1c0-e98a-42c8-abe9-f4289360c220', '152bff1c-e82e-49e1-92b6-f652c58d3145', '55a06a01-9956-4d7c-bfd0-5a2e6a27b62b']
Since viewers_buffer_health() doesn't have a local definition for settings, it is using the function defined previously. If it is meant to work in the same manner as viewers(), then you need to add settings to its current set of arguments.
settings is a function.
data = requests.get(settings()['endpoints']['node_buffer_health']).replace('<NODE_ID>', viewer)
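A minimal sketch of the signature change the first answer suggests, so that pytest injects the loaded config instead of leaving settings bound to the fixture function itself:
@pytest.fixture
def viewers_buffer_health(settings, viewers):   # settings is now injected by pytest
    ...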

TypeError: get() takes exactly 3 arguments (2 given) error in Python

I have checked a few of the answers on SO, but nothing worked for me. I still get the same error:
TypeError: get() takes exactly 3 arguments (2 given)
Can anybody check the following code and let me know what I have done wrong?
My views.py is as follows:
def get(self, request, tag):
    print("Tag for tagging :")
    data_loader = SvnDataLoader()
    print("Two :")
    ss = SubsystemRevision.get_subsystem_for_tag(tag)
    print("Subsystem is %s", ss)
    try:
        print("inside try")
        pr = subprocess.Popen(['perl', './svntasktag.pl', 'ss'], stdout=subprocess.PIPE)
        data = pr.communicate()
        context = {'data': data}
    except TagHistoryMissing:
        data = 'Tag is missing.'
    except SvnException as e:
        data = "Problem while trying to fetch tag-history from svn. Try again later"
        #logger.error("SvnException %s while trying to fetch the tag %s" % (str(e), tag.name))
    return render_to_response('pai_app/create_tag.html', {'data': data}, context_instance=RequestContext(request))
First of all, you should know that self refers to the object itself. You can add another argument and your error will be solved, e.g.
def get(self, request, tag, foobar):
...
But it's better to know what arguments you want to have and what you are passing to the function.
The second point is that you are using get() in your views. Usually it is not good practice to use method names for functions (we have Model.objects.get()). So it is better to rename your function to something like get_tag() to prevent any kind of conflict during development. Don't forget to update your app modules! Good luck!
If your function is not in a class, you can use
def get(request, tag):
    pass
because the self parameter is not used in your function.
In Django function views, the first parameter must be a request object instance.
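For completeness, if get() is meant to be a method on a class-based view instead, a minimal sketch (not from the original answers; the class name and URL regex are assumptions) could look like this:
from django.conf.urls import url
from django.views.generic import View

class TagView(View):
    def get(self, request, tag):
        # self is bound automatically; Django passes request and the captured tag
        ...

# urls.py (hypothetical pattern)
url(r'^tag/(?P<tag>[\w.-]+)/$', TagView.as_view(), name='create_tag'),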

Mocking urllib.request.urlopen's read function returns MagicMock signature

I'm trying to mock out urllib.request.urlopen's read method on Python 3:
Function Code:
try:
    with request.urlopen(_webhook_url, json.dumps(_message).encode('utf-8')) as _response:
        _response_body = _response.read()
        return _response_body
Test Code:
with mock.patch('urllib.request.urlopen') as mock_urlopen:
    response_mock = MagicMock()
    response_mock.read.return_value = 'ok'
    mock_urlopen.return_value = response_mock
    with self.stubber:
        _response = NotifySlack.lambda_handler(_event)
        self.assertEqual('ok', _response)
If I call response_mock.read() I get the 'ok' value returned; however, when I assert on the return value I get a mock signature:
Expected :ok
Actual :<MagicMock name='urlopen().__enter__().read()' id='2148156925992'>
Any ideas on why the mock isn't returning the value assigned to read()?
Following @jonrsharpe's comment and the Python: Mocking a context manager thread, to properly mock the context manager in this case you would need this interesting-looking line:
mock_urlopen.return_value.__enter__.return_value.read.return_value = 'ok'
# urlopen() returns the context manager; __enter__() returns the response; read() returns 'ok'
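Applied to the test in the question, the patch block would then look roughly like this (the stubber and lambda call are unchanged from the original):
with mock.patch('urllib.request.urlopen') as mock_urlopen:
    mock_urlopen.return_value.__enter__.return_value.read.return_value = 'ok'
    with self.stubber:
        _response = NotifySlack.lambda_handler(_event)
        self.assertEqual('ok', _response)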
