Correctly test with pytest - Python

So I want to start writing tests with pytest for my Python programs.
EDIT: I'm only trying to test the response because it seemed like the easiest thing to test. I now understand that there are multiple ways to test the response, but I'm more looking to just get a general grip on building tests and using them.
I'm starting by testing if the correct response happens when I call a page using requests.
Like so:
**main.py**
import requests

def get_page(search_url):
    page = requests.get(search_url)
    return page

url = "https://www.google.com/search?q=weather+results&oq=weather+results&aqs=chrome..69i57.4626j0j1&sourceid=chrome&ie=UTF-8"
get_page(url)
Here is the test code I made to test the response. This is the first test I've ever written.
**test_main.py**
from main import get_page

def test_page_response():
    test_url = "https://www.google.com/search?q=weather+results&oq=weather+results&aqs=chrome..69i57.4626j0j1&sourceid=chrome&ie=UTF-8"
    assert str(get_page(test_url)) == "<Response [200]>"
Am I doing this right? When I take out the url to break it and trigger a test failure, it shows me a ton of text. Sure, it's the error in its full glory, but isn't testing supposed to make it simpler to read and understand what broke?
This leads me to believe I'm going about this the wrong way.
EDIT 2: Here's the output: http://pastebin.com/kTgc5bsR

### test_main.py ###
from main import get_page

def test_page_response():
    test_url = "https://www.google.com/search?q=weather+results&oq=weather+results&aqs=chrome..69i57.4626j0j1&sourceid=chrome&ie=UTF-8"
    response = get_page(test_url)  # get_page() returns a Response object
    assert response.status_code == 200
    # response.text will contain the HTML string.
    # You can parse it and add more assertions.
    # That will be your real test that you got the search results you expected.
Read more on how to use python-requests:
http://docs.python-requests.org/en/master/
Your URL is basically your test input; you can vary the URL to generate more tests.
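For example, here is a minimal sketch using pytest's built-in parametrize marker; the URLs are illustrative, and these tests hit the real network:

import pytest

from main import get_page

@pytest.mark.parametrize("url", [
    "https://www.google.com/search?q=weather+results",
    "https://www.google.com/search?q=python+testing",
])
def test_page_response(url):
    response = get_page(url)
    assert response.status_code == 200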
I suggest going through py.test basic examples:
http://pytest.org/latest/example/index.html
and also reading a primer on testing in general.

Is your goal to write a unit test?
If so, testing requests.get is already covered by the tests inside requests. It's considered unpythonic (and redundant) to re-check something that Python or your library already tests for you. Instead, you should focus on testing the unique part of your app.
For example, mock the usage of requests. One way to do that is with the library requests-mock, though of course there are more one-off approaches too.
Assuming you've mocked requests, the way I'd approach writing a unit test for get_page(...) is to assert that it returns the expected response body. You could also test for status code, but if you're mocking the request, this may not add a ton of value.
You may also consider testing retrieving the webpage itself in an integration test.
To make it clearer, here's a minimal sketch.
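It assumes the requests-mock plugin is installed, which provides a requests_mock pytest fixture; the URL and response body below are placeholders:

from main import get_page

def test_get_page_body(requests_mock):
    # The fixture intercepts HTTP calls made through requests,
    # so no real network traffic happens.
    requests_mock.get("https://example.com/search", text="fake results")
    response = get_page("https://example.com/search")
    assert response.status_code == 200
    assert response.text == "fake results"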

Related

Why do I only get a function as a return value by using a fixture (from pytest) in a test script?

I want to write test functions for my code and decided to use pytest. I had a look into this tutorial: https://semaphoreci.com/community/tutorials/testing-python-applications-with-pytest
My real code involves another script, written by me, so I made an example, which also creates the same problem, but does not rely on my other code.
import pytest

@pytest.fixture()
def example():
    value = 10
    return value

def test_value(example):
    print(example)
    assert(example == 10)

test_value(example)
When I run my script with this toy example, the print returns a function:
<function example at 0x0391E540>
and the assertion fails.
If I try to call example() with parentheses, I get this:
Failed: Fixture "example_chunks" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
I am sure I am missing something important here, but searching Google did not help me, which is why I hope somebody here can provide some assistance.
Remove this line from your script:
test_value(example)
Then run your script file with pytest file.py.
Fixtures will be resolved automatically by pytest.
In your example you run the code directly, so the fixtures are just plain functions.
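Put together, a working version of the toy example looks like this; save it as file.py and run pytest file.py:

import pytest

@pytest.fixture()
def example():
    return 10

def test_value(example):
    # pytest injects the fixture's return value (10), not the function object.
    print(example)
    assert example == 10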

Getting the results of a unittest programmatically

I wrote some 'unittest' code, and it's great. I can see it on the CLI when I run it manually.
Now I want to hook it up to run automatically as part of a merge to master hook on my repository. I have everything set up with dummy code except for the part of grabbing the results of the unittest programmatically.
When I call unittest.main() to run them all, it throws a SystemExit. I've tried catching it and rerouting the standard output, but I wasn't able to get it to work, and it also feels like I'm doing it wrong. Is there an easier way to get the results of the unittests, like in a Python list of line strings, or even a more complicated result object?
Really for my purposes, I'm only interested in 100% pass or fail, and then showing that visual in the repository on a pull request to master, with a link to the full unittest result details.
I'm also not married to 'unittest' if some other Python unit test framework can be called and pass off results easily.
You can pass exit=False to unittest.main, and capture the return value. To run it from another script, or the interactive interpreter, you can specify a target module as well.
That gives us the TestProgram instance that ran the tests. The internal unittest machinery makes a TestResult object available in the result attribute of that object.
That result object has a wasSuccessful() method that gives exactly the pass/fail answer you're looking for.
Assuming you have some file tests.py:
from unittest import main

# exit=False prevents unittest.main() from raising SystemExit,
# so we get the TestProgram instance back instead.
test = main(module='tests', exit=False)
print(test.result.wasSuccessful())
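Since you only care about an overall pass/fail for the merge hook, one possible sketch (still assuming the tests module above) is to convert that boolean into a process exit code, which most hook and CI systems check:

import sys
from unittest import main

test = main(module='tests', exit=False)
# Exit 0 if every test passed, 1 otherwise.
sys.exit(0 if test.result.wasSuccessful() else 1)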

Py.test - Applying variables to decorators from a csv?

Please bear with me while I try to explain my predicament; I'm still a Python novice, so my terminology may not be correct. Also, I'm sorry for the inevitable long-windedness of this post, but I'll try to explain in as much relevant detail as possible.
A quick rundown:
I'm currently developing a suite of Selenium tests for a set of websites that are essentially the same in functionality, using py.test
Test results are uploaded to TestRail, using the pytest plugin pytest-testrail.
Tests are tagged with the decorator @pytestrail.case(id) with a unique case ID
A typical test of mine looks like this:
@pytestrail.case('C100123')  # associates the function with the relevant TR case
@pytest.mark.usefixtures()
def test_login():
    # test code goes here
As I mentioned before, I'm aiming to create one set of code that handles a number of our websites with (virtually) identical functionality, so a hardcoded decorator in the example above won't work.
I tried a data driven approach with a csv and a list of the tests and their case IDs in TestRail.
Example:
website1.csv:
Case ID | Test name
C100123 | test_login
website2.csv:
Case ID | Test name
C222123 | test_login
The code I wrote would use the inspect module to find the name of the test running, find the relevant test ID and put that into a variable called test_id:
import csv
import inspect

class trp(object):
    def __init__(self):
        pass

    with open(testcsv) as f:  # testcsv could be website1.csv or website2.csv
        reader = csv.reader(f)
        next(reader)  # skip header
        tests = [r for r in reader]

    def gettestcase(self):
        self.current_test = inspect.stack()[3][3]
        for row in trp.tests:
            if self.current_test == row[1]:  # row[1] is the test-name column
                self.test_id = row[0]
                print(self.test_id)
                return self.test_id, self.current_test

    def gettestid(self):
        self.gettestcase()
The idea was that the decorator would change dynamically based on the csv that I was using at the time.
@pytestrail.case(test_id)  # now a variable
@pytest.mark.usefixtures()
def test_login():
    trp.gettestid()
    # test code goes here
So if I ran test_login for website1, the decorator would look like:
@pytestrail.case('C100123')
and if I ran test_login for website2, the decorator would be:
@pytestrail.case('C222123')
I felt mighty proud of coming up with this solution by myself and tried it out... it didn't work. While the code does work by itself, I would get an exception because test_id is undefined. (I understand why: gettestcase is executed after the decorator, so of course it crashes.)
The only other way I can handle this is to apply the csv and test IDs before any test code is executed. My question is: how would I associate the tests with their test IDs? What would an elegant, minimal solution to this be?
Sorry for the long winded question. I'll be watching closely to answer any questions if you need more explanation.
pytest is very good at doing all kinds of metaprogramming stuff for the tests. If I understand your question correctly, the code below will do the dynamic test marking with the pytestrail.case marker. In the project root dir, create a file named conftest.py and place this code in it:
import csv
from pytest_testrail.plugin import pytestrail

with open('website1.csv') as f:
    reader = csv.reader(f)
    next(reader)
    tests = [r for r in reader]

def pytest_collection_modifyitems(items):
    for item in items:
        for testid, testname in tests:
            if item.name == testname:
                item.add_marker(pytestrail.case(testid))
Now you don't need to mark the test with @pytestrail.case() at all - just write the rest of the code and pytest will take care of the marking:
def test_login():
    assert True
When pytest starts, the code above will read website1.csv and store the test IDs and names just as you did in your code. Before the test run starts, pytest_collection_modifyitems hook will execute, analyzing the collected tests - if a test has the same name as in csv file, pytest will add the pytestrail.case marker with the test ID to it.
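If you need to switch between website1.csv and website2.csv, one possible extension (a sketch; --site-csv is a made-up option name) is to pick the file from the command line in conftest.py:

import csv
from pytest_testrail.plugin import pytestrail

def pytest_addoption(parser):
    # Hypothetical option; defaults to the first site's csv.
    parser.addoption("--site-csv", action="store", default="website1.csv")

def pytest_collection_modifyitems(config, items):
    with open(config.getoption("--site-csv")) as f:
        reader = csv.reader(f)
        next(reader)  # skip header
        tests = [r for r in reader]
    for item in items:
        for testid, testname in tests:
            if item.name == testname:
                item.add_marker(pytestrail.case(testid))

Running pytest --site-csv=website2.csv would then mark the same tests with the other site's case IDs.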
I believe the reason this isn't working as you would expect has to do with how Python reads and executes files. When Python starts executing, it reads in the linked Python file(s) and executes each line one by one, in turn. For things at the 'root' indentation level (function/class definitions, decorators, variable assignments, etc.) this means they get run exactly once as they are loaded in, and never again. In your case, the Python interpreter reads in the pytest-testrail decorator, then the pytest decorator, and finally the function definition, executing each one once, ever.
(Side note, this is why you should never use mutable objects as function argument defaults: Common Gotchas)
Given that you want to first deduce the current test name, then associate that with a test case ID, and finally use that ID with the decorator, I'm not sure that is possible with pytest-testrail's current functionality. At least, not possible without some esoteric and difficult to debug/maintain hack using descriptors or the like.
I think you realistically have one option: use a different TestRail client and update your pytest structure to use the new client. Two Python clients I can recommend are testrail-python and TRAW (TestRail API Wrapper) (*)
It will take more work on your part to create the fixtures for starting a run, updating results, and closing the run, but I think in the end you will have a more portable suite of tests and better results reporting.
(*) full disclosure: I am the creator/maintainer of TRAW, and also made significant contributions to testrail-python

Python failure injection

Is there a neat way to inject failures in a Python script? I'd like to avoid sprinkling the source code with stuff like:
failure_ABC = True
failure_XYZ = True

def inject_failure_ABC():
    raise Exception('ha! a fake error')

def inject_failure_XYZ():
    # delete some critical file
    pass

# some real code
if failure_ABC:
    inject_failure_ABC()
# some more real code
if failure_XYZ:
    inject_failure_XYZ()
# even more real code
Edit:
I have the following idea: insert "failure points" as specially-crafted comments. Then write a simple parser that will be called before the Python interpreter and will produce the actual instrumented Python script with the real failure code. E.g.:
#!/usr/bin/parser_script_producing_actual_code_and_calls python
# some real code
# FAIL_123
if foo():
    # FAIL_ABC
    execute_some_real_code()
else:
    # FAIL_XYZ
    execute_some_other_real_code()
Anything starting with FAIL_ is considered a failure point by the script, and depending on a configuration file the failure is enabled or disabled. What do you think?
You could use mocking libraries, for example unittest.mock; many third-party ones exist as well. You can then mock some object used by your code so that it throws your exception or behaves in whatever way you want it to.
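For example, here's a sketch using unittest.mock.patch to inject a failure into a file read; read_config and the error message are illustrative names:

from unittest import mock

def read_config(path):
    with open(path) as f:  # the call we want to make fail
        return f.read()

def test_read_config_survives_missing_file():
    # Inject the failure: open() raises instead of touching the filesystem.
    with mock.patch("builtins.open", side_effect=OSError("ha! a fake error")):
        try:
            read_config("config.ini")
        except OSError:
            pass  # this is the failure path we wanted to exercise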
When testing error handling, the best approach is to isolate the code that can throw errors in a new method which you can override in a test:
class ToTest:
    def foo(...):
        try:
            self.bar() # We want to test the error handling in foo()
        except:
            ....

    def bar(self):
        ... production code ...
In your test case, you can extend ToTest and override bar() with code that throws the exceptions that you want to test.
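A minimal, self-contained sketch of that pattern; the concrete method bodies and the RuntimeError are illustrative stand-ins for the pseudocode above:

class ToTest:
    def foo(self):
        try:
            return self.bar()
        except RuntimeError:
            return "handled"  # illustrative error handling

    def bar(self):
        return "production result"

class FailingToTest(ToTest):
    def bar(self):
        # Inject the failure by overriding the isolated method.
        raise RuntimeError("injected failure")

def test_foo_handles_bar_failure():
    assert FailingToTest().foo() == "handled"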
EDIT: You should really consider splitting large methods into smaller ones. It will make the code easier to test, to understand, and to maintain. Have a look at Test Driven Development for some ideas on how to change your development process.
Regarding your idea to use "Failure Comments". This looks like a good solution. There is one small problem: You will have to write your own Python parser because Python doesn't keep comments when it produces bytecode.
So you can either spend a couple of weeks to write this or a couple of weeks to make your code easier to test.
There is one difference, though: If you don't go all the way, the parser will be useless. Also, the time spent won't have improved one bit of your code. Most of the effort will go into the parser and tools. So after all that time, you will still have to improve the code, add failure comments and write the tests.
With refactoring the code, you can stop whenever you want but the time spent so far will be meaningful and not wasted. Your code will start to get better with the first change you make and it will keep improving.
Writing a complex tool takes time, and it will have its own bugs which you need to fix or work around. None of this will improve your situation in the short term, and you have no guarantee that it will in the long term.
If you only want to stop your code at some point, and fall back to the interactive interpreter, you can use:
assert 1==0
But this only works if you do not run python with -O
Edit
Actually, my first answer was too quick, without really understanding what you want to do, sorry.
Maybe your code becomes more readable if you do the parameterization through parameters, not through variable/function suffixes. Something like:
failure = {"ABC": False, "XYZ": False}

# Do something, maybe set failure

def inject_failure(failure):
    if not any(failure.values()):
        return
    if failure["ABC"]:
        raise Exception('ha! a fake error')
    elif failure["XYZ"]:
        # delete some critical file
        pass

inject_failure(failure)

Using response.out.write from within an included file in Google App Engine

I think this is quite an easy question to answer, I just haven't been able to find anywhere detailing how to do it.
I'm developing a GAE app.
In my main file I have a few request handlers, for example:
class Query(webapp.RequestHandler):
    def post(self):
        queryDOI = cgi.escape(self.request.get('doiortitle'))
        import queryCosine
        self.response.out.write(queryCosine.cosine(queryDOI))
In that handler there I'm importing from a queryCosine.py script which is doing all of the work. If something in the queryCosine script fails, I'd like to be able to print a message or do a redirect.
Inside queryCosine.py there is just a normal Python function, so obviously doing things like
self.response.out.write("Done")
doesn't work. What should I use instead of self or what do I need to include within my included file? I've tried using Query.self.response.out.write instead but that doesn't work.
A much better, more modular approach is to have your queryCosine.cosine function raise an exception if something goes wrong. Then your handler method can output the appropriate response depending on the return value or exception. This avoids unduly coupling the code that calculates whatever it is you're calculating to the webapp that hosts it.
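A sketch of that shape, reusing the handler from the question (the error message is a placeholder):

class Query(webapp.RequestHandler):
    def post(self):
        queryDOI = cgi.escape(self.request.get('doiortitle'))
        import queryCosine
        try:
            self.response.out.write(queryCosine.cosine(queryDOI))
        except Exception:
            # Only the handler knows about the response object,
            # so it decides what to show (or where to redirect).
            self.response.out.write('Something went wrong')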
Pass it to the function.
main file:
import second
...
second.somefunction(self.response.out.write)
second.py:
def somefunction(output):
    output('Done')
