I have some pretty fragile code that I want to refactor. It's not very easy to unit test by itself because it interacts with database queries and Django form data.
That in itself is not a big deal. I already have extensive tests that, among other things, end up calling this function and check that results are as expected. But my full test suite takes about 5 minutes and I also don't want to have to fix other outstanding issues while working on this.
What I'd like to do is to run nosetests or nose2 on all my tests, track all test_xxx.py files that called the function of interest and then limit my testing during the refactoring to only that subset of test files.
I plan to use inspect.stack() to do this but was wondering if there is an existing plugin or if someone has done it before. If not, I intend to post whatever I come up with and maybe that will be of use later.
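Roughly the kind of instrumentation I have in mind (just a sketch, not an existing plugin; function_of_interest is a placeholder for the real function):

import atexit
import inspect
import os

_calling_test_files = set()

def _record_calling_tests():
    # Walk the call stack and remember any test_*.py file that reached us.
    for frame_info in inspect.stack():
        filename = frame_info[1]
        basename = os.path.basename(filename)
        if basename.startswith("test_") and basename.endswith(".py"):
            _calling_test_files.add(filename)

# Print the collected file names once the run finishes.
atexit.register(lambda: print(sorted(_calling_test_files)))

def function_of_interest(*args, **kwargs):
    _record_calling_tests()  # temporary instrumentation for the audit run
    ...  # original body goes here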
You can simply raise an exception in the function and do one run. Every test that then fails calls your function.
I have some resource creation and deletion code that needs to run before and after certain tests, which I've put into a fixture using yield in the usual way. However, before running the tests, I want to verify that the resource creation has happened correctly, and likewise after the deletion, I want to verify that it has happened. I can easily stick asserts into the fixtures themselves, but I'm not sure this is good pytest practice, and I'm concerned that it will make debugging and interpreting the logs harder. Is there a better or canonical way to do validation in pytest?
I encountered something like this recently, although I was using unittest instead of pytest.
What I ended up doing was something similar to a method level setup/teardown. That way, future test functions would never be affected by past test functions.
For my use-case, I loaded my test fixtures in this setup function, then ran a couple of basic checks against those fixtures to ensure their validity (as part of setup itself). This, I realized, added a bit of time to each test in the class, but ensured that all the fixture data was exactly what I expected it to be (we were loading stuff into a dockerized Elasticsearch container). I guess the time for running tests is something you can make a judgement call about.
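Roughly what that looked like, with dummy in-memory data standing in for the real Elasticsearch fixtures:

import unittest

class FixtureValidatingTests(unittest.TestCase):
    def setUp(self):
        # Load fixtures before every test method so no test sees another's leftovers.
        self.fixtures = {"deals": [{"id": 1}, {"id": 2}, {"id": 3}]}
        # Basic sanity checks on the fixtures, as part of setup itself.
        self.assertIn("deals", self.fixtures)
        self.assertEqual(len(self.fixtures["deals"]), 3)

    def test_first_deal_is_usable(self):
        self.assertEqual(self.fixtures["deals"][0]["id"], 1)

if __name__ == "__main__":
    unittest.main()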
I'm a newbie in automation testing.
I'm currently doing manual testing and trying to automate the process with Selenium WebDriver using Python.
I'm creating a test suite which will run different scripts. Each script will be running tests on different functionality.
And I got stuck.
I'm working on a financial web application. The initial script will create a financial deal, and all other scripts will test different functionality on this deal.
I'm not sure how to handle this situation. Should I just pass the URL from the first script (the newly created deal) into all other scripts in the suite, so that all the tests run on the same deal and don't create a new one for each test? How do I do this?
Or maybe there is a better way to do this?
Deeply appreciate any advice!!! Thank you!
Preferably you would have each test be able to run in isolation. If you have a way to create the deal through an API or Database rather than creating one through the UI, you could call that for each test. And, if possible, also clean up that data after your test runs.
If this is not possible, you could also record some data from a test in a database, XML, or JSON file. Then your following tests could read that data in to get what they need to run, in this case some reference to your financial deal.
The 2nd option is not ideal, but might be appropriate in some cases.
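A rough sketch of that second option; the file name and keys are made up for illustration:

import json
import os

STATE_FILE = "deal_under_test.json"

def save_deal_reference(deal_url):
    # Called once by the script that creates the deal.
    with open(STATE_FILE, "w") as fh:
        json.dump({"deal_url": deal_url}, fh)

def load_deal_reference():
    # Called by every later script that needs the same deal.
    if not os.path.exists(STATE_FILE):
        raise RuntimeError("Run the deal-creation script first")
    with open(STATE_FILE) as fh:
        return json.load(fh)["deal_url"]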
There are a couple of approaches here that might help, and some of it depends on whether you're using a framework or just building from scratch using the Selenium API.
Use setup and teardown methods at the suite or test level.
This is probably the easiest method, and close to what you asked in your post. Every framework I've worked in supports some sort of setup and teardown method out of the box, and even if it doesn't, they're not hard to write. In your case, you've got a script that calls each of the test cases, so just add a before() method at the beginning of the suite that creates the financial deal you're working on.
If you'd like a new deal made for each individual test, just put the before() method in the parent class of each test case so they inherit and run it with every case.
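A hedged sketch of that inheritance idea using unittest and Selenium; create_deal() and the URL it returns are placeholders for however your application really creates a deal:

import unittest

from selenium import webdriver

class DealTestBase(unittest.TestCase):
    def setUp(self):
        # Runs before every test in every subclass.
        self.driver = webdriver.Chrome()
        self.deal_url = self.create_deal()
        self.driver.get(self.deal_url)

    def tearDown(self):
        self.driver.quit()

    def create_deal(self):
        # Replace with your real API/DB/UI call; this URL is a placeholder.
        return "https://your-app.example.com/deals/123"

class DealPaymentTests(DealTestBase):
    def test_deal_page_opens(self):
        self.assertIn("/deals/", self.driver.current_url)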
Use Custom Test Data
This is probably the better way to do this, but assumes you have db access or a good relationship with your dbm. You generally don't want the success of one test case to rely on the success of another (what the first answer meant by isolation). If the creation of the document fails in some way, every single test downstream of that will fail as well, even though they're testing a different feature that might be working. This results in a lot of lost coverage.
So, instead of creating a new financial document every time, speak to your DBM and see if it's possible to create a set of test data that either sits in your test db or is inserted at the beginning of the test suite.
This way you have one test that tests document creation, and X tests that verify its functionality based on the test data, and those tests do not rely on each other.
So I'm trying to decide how to plan and organize a testing suite for my Python project, but I have a doubt about when a unit test is no longer a unit test, and I would love to have some feedback from the community.
If I understand correctly:
A unit test tests a minimal part of your code, be it a function/method that does one and only one simple thing, even if it has several use cases.
An integration test tests that two or more units of your code that are executed under the same context, environment, etc. (but trying to keep it to a minimum of units per integration test) work well together and not only by themselves.
My doubt is: say I have a simple function that performs an HTTP request and returns the content of that request, be it HTML, JSON, etc. It doesn't matter; the fact is that the function is very, very simple but requests information from an external source, like:
import requests
def my_function(arg):
    # do something very simple with `arg`, like removing spaces or the simplest thing you can imagine
    return requests.get('http://www.google.com/' + arg).content
Now this is a very stupid example, but my doubt is:
Given that this function is requesting information from an external source, when you write a test for it, can you still consider such a test a unit test?
UPDATE: The test for my_function() would stub out calls to the external source so that it doesn't depend on network/db/filesystem/etc and is isolated. But the fact remains that the function being tested depends on external sources when running, for example, in production.
Thanks in advance!! :)
P.S.: Of course, maybe I'm not understanding 100% the purposes of unit and integration testing, so if I'm mistaken, please point out where; I'll appreciate it.
Based on your update:
The test for my_function() would stub out calls to the external source so that it doesn't depend on network/db/filesystem/etc and is isolated. But the fact remains that the function being tested depends on external sources when running, for example, in production.
As long as the external dependencies are stubbed out during your test then yes you can call it a Unit Test. It's a Unit Test based on how the unit under test behaves in your test suite rather than how the unit behaves in production.
Based on your original question:
Given that this function is requesting information from an external source, when you write a test for it, can you still consider such a test a unit test?
No, any test for code that touches or depends on things external to the unit under test is an integration test. This includes the existence of any web, file system, and database requests.
If your code is not 100% isolated from its dependencies and not 100% reproducible without other components, then it is an integration test.
For your example code to be properly unit tested you would need to mock out the call to google.com.
With the code calling google.com, your test would fail if Google went down or you lost connection to the internet (i.e. the test is not 100% isolated). Your test would also fail if the behavior of Google changed (i.e. the test is not 100% reproducible).
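A minimal sketch of such a unit test, assuming my_function lives in a module called my_module (the module name and the stubbed content are made up for illustration):

import unittest
from unittest import mock

import my_module

class MyFunctionTest(unittest.TestCase):
    @mock.patch("my_module.requests.get")
    def test_returns_response_content(self, mock_get):
        # The network call is stubbed out, so nothing leaves the process.
        mock_get.return_value.content = b"stubbed page"

        result = my_module.my_function("foo")

        mock_get.assert_called_once_with("http://www.google.com/foo")
        self.assertEqual(result, b"stubbed page")

if __name__ == "__main__":
    unittest.main()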
I don't think this is an integration test since it doesn't make use of different parts of your application. The function really does one thing, and tests for it can be called unit tests.
On the other side, this particular function has an external dependency and you don't want to be dependent on the network in your tests. This is where mocking would really help a lot.
In other words, isolate the function and make unit tests for it.
Also, make integration tests that would use a more high-level approach and would test parts of your application which call my_function().
Whether or not your code is tested in isolation is not a distinguishing criterion for whether your test is a unit test or not. For example, you would (except in very rare cases) not stub standard library functions like sin(x). But if you don't stub sin(x), that does not make your test an integration test.
When is a test an integration test? A test is an integration test if your goal with the test is to find bugs on integration level. That means, with an integration test you want to find out whether the interaction between two (or more) components is based on the same assumptions on both (all) sides.
Mocking, however, is an orthogonal technique that can be combined with almost all kinds of tests. (In an integration test, however, you cannot mock the partners of the interaction you want to test, but you can mock additional components.)
Since mocking typically causes some effort, it must bring a benefit, like:
significantly speeding up your tests
testing although some software part is not ready yet or buggy
testing exceptional cases that are hard to set up in an integrated software
getting rid of nondeterministic behaviour like timing or randomness
...
If, however, mocking does not solve a real problem, you might be better off using the depended-on component directly.
I have a test suite for my app. As the test suite grew organically, the tests have a lot of repeated code which can be refactored.
However, I would like to ensure that the test suite doesn't change with the refactor. How can I test that my tests are invariant under the refactor?
(I am using Python + unittest, but I guess the answer to this can be language agnostic.)
The real test for the tests is the production code.
An effective way to check that a test code refactor hasn't broken your tests would be to do Mutation Testing, in which a copy of the code under test is mutated to introduce errors in order to verify that your tests catch the errors. This is a tactic used by some test coverage tools.
I haven't used it (and I'm not really a python coder), but this seems to be supported by the Python Mutant Tester, so that might be worth looking at.
Coverage.py is your friend.
Move over all the tests you want to refactor into "system tests" (or some such tag). Refactor the tests you want (you would be doing unit tests here right?) and monitor the coverage:
After running your new unit tests but before running the system tests
After running both the new unit tests and the system tests.
In an ideal case, the coverage would be the same or higher, but you can trash your old system tests.
FWIW, py.test provides a mechanism for easily tagging tests and running only specific tests, and it is compatible with unittest2 tests.
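For example, tagging with a marker might look roughly like this; "system" is an arbitrary marker name, and you would run the subsets with pytest -m system and pytest -m "not system" (registering the marker in your pytest configuration avoids warnings on newer versions):

import pytest

@pytest.mark.system
def test_old_end_to_end_behaviour():
    # Old-style test kept around while the refactor is in progress.
    assert sum([1, 2, 3]) == 6

def test_new_refactored_unit():
    # New, smaller unit test.
    assert 1 + 1 == 2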
Interesting question - I'm always keen to hear discussions of the type "how do I test the tests?!". And good points from #marksweb above too.
It's always a challenge to check your tests are actually doing what you want them to do and testing what you intend, but good to get this right and do it properly. I always try to consider the rule-of-thumb that testing should make up 1/3 of development effort in any project... regardless of project time constraints, pressures and problems that inevitably crop up.
If you intend to continue and grow your project have you considered refactoring like you say, but in a way that creates a proper test framework that allows test driven development (TDD) of any future additions of functionality or general expansion of the project?
Although you have mentioned Python, I would like to comment on how refactoring is applied in Smalltalk. Most modern Smalltalk implementations include a "Refactoring Browser" integrated in a System Browser to restructure source code. The RB includes a rewrite framework to perform dynamically the transformations you asked about, preserving the system behavior and stability. A way to use it is to open a scoped browser, apply refactorings and review/edit changes before committing through a diff tool. I don't know about the maturity of Python refactoring tools, but it took many iteration cycles (years) for the Smalltalk community to have such an amazing piece of software.
Don Roberts and John Brant wrote one of the first refactoring browser tools, which now serves as the standard for refactoring tools. There are videos demonstrating some of these features. For promoting a method into a superclass, in Pharo you just select the method, then the refactor and "pull up" menu item. The rule will detect and let you review the proposed duplicated sub-implementors for deletion before its execution. Refactorings are applied regardless of the testing code.
In theory you could write a test for the test, mocking the actual object under test. But I guess that is just way too much work and not worth it.
So what you are left with are some strategies that will help, but not make this fail-safe.
Work very carefully and slowly. Use the features of your IDE as much as possible in order to limit the chance of human error.
Work in pairs. A partner looking over your shoulder might just spot the glitch that you missed.
Copy the test, then refactor it. When done, introduce errors in the production code to ensure both tests find the problem in the same (or equivalent) ways. Only then remove the original test.
The last step can be done by tools, although I don't know the python flavors. The keyword to search for is 'mutation testing'.
Having said all that, I'm personally satisfied with steps 1+2.
I can't see an easy way to refactor a test suite, and depending on the extent of your refactor you're obviously going to have to change the test suite. How big is your test suite?
Refactoring properly takes time and attention to detail (and a lot of Ctrl+C Ctrl+V!). Whenever I've refactored my tests I don't try and find any quick ways of doing things, besides find & replace, because there is too much risk involved.
You're best off doing things properly and manually, albeit slowly, if you want to keep the quality of your tests.
Don't refactor the test suite.
The purpose of refactoring is to make it easier to maintain the code, not to satisfy some abstract criterion of "code niceness". Test code doesn't need to be nice, it doesn't need to avoid repetition, but it does need to be thorough. Once you have a test that is valid (i.e. it really does test necessary conditions on the code under test), you should never remove it or change it, so test code doesn't need to be easy to maintain en masse.
If you like, you can rewrite the existing tests to be nice, and run the new tests in addition to the old ones. This guarantees that the new combined test suite catches all the errors that the old one did (and maybe some more, as you expand the new code in future).
There are two ways that a test can be deemed invalid -- you realise that it's wrong (i.e. it sometimes fails falsely for correct code under test), or else the interface under test has changed (to remove the API tested, or to permit behaviour that previously was a test failure). In that case you can remove a test from the suite. If you realise that a whole bunch of tests are wrong (because they contain duplicated code that is wrong), then you can remove them all and replace them with a refactored and corrected version. You don't remove tests just because you don't like the style of their source.
To answer your specific question: to test that your new test code is equivalent to the old code, you would have to ensure (a) all the new tests pass on your currently-correct-as-far-as-you-known code base, which is easy, but also (b) the new tests detect all the errors that the old tests detect, which is usually not possible because you don't have on hand a suite of faulty implementations of the code under test.
Test code can be the best low-level documentation of your API, since it does not go out of date as long as the tests pass and are correct. But messy test code doesn't serve that purpose very well. So refactoring is essential.
Your tested code might also change over time, and so do the tests. If you want that to be smooth, code duplication must be minimized and readability is key.
Tests should be easy to read, always test one thing at a time, and make the following explicit (a small example follows the list):
what are the preconditions?
what is being executed?
what is the expected outcome?
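For instance, a layout that keeps those three questions explicit (the function under test here is just a stand-in):

def add_vat(net_price, rate=0.2):
    # Stand-in for whatever is actually under test.
    return round(net_price * (1 + rate), 2)

def test_add_vat_applies_default_rate():
    # Preconditions: a known net price.
    net_price = 100.0
    # What is being executed: the single behaviour under test.
    gross = add_vat(net_price)
    # Expected outcome: one explicit assertion.
    assert gross == 120.0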
If that is considered, it should be pretty safe to refactor the test code. One step at a time and, as #Don Ruby mentioned, let your production code be the test for the test.
For many refactorings you can often safely rely on advanced IDE tooling, provided you watch out for side effects in the extracted code.
Although I agree that refactoring without proper test coverage should be avoided, I think writing tests for your tests is almost absurd in usual contexts.
I'm wondering if there is a test framework that allows for tests to be declared as being dependent on other tests. This would imply that they should not be run, or that their results should not be prominently displayed, if the tests that they depend on do not pass.
The point of such a setup would be to allow the root cause to be more readily determined in a situation where there are many test failures.
As a bonus, it would be great if there were some way to use an object created in one test as a fixture for other tests.
Is this feature set provided by any of the Python testing frameworks? Or would such an approach be antithetical to unit testing's underlying philosophy?
Or would such an approach be antithetical to unit testing's underlying philosophy?
Yep... if it is a unit test, it should be able to run on its own. Anytime I have found someone wanting to create dependencies between tests, it was due to the code being structured in a poor manner. I am not saying this is the case here, but it can often be a sign of code smell.
Proboscis is a Python test framework that extends Python’s built-in unittest module and Nose with features from TestNG.
Sounds like what you're looking for. Note that it works a bit differently to unittest and Nose, but that page explains how it works pretty well.
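A rough sketch of how the dependency declarations look, adapted from memory of the Proboscis docs; treat the exact decorator arguments as an assumption and check the documentation:

from proboscis import test
from proboscis.asserts import assert_equal

@test(groups=["deal_setup"])
def create_deal():
    assert_equal(1 + 1, 2)

@test(groups=["deal_checks"], depends_on_groups=["deal_setup"])
def check_deal_details():
    # Skipped rather than reported as a failure if create_deal fails.
    assert_equal(2 + 2, 4)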
This seems to be a recurring question - e.g. #3396055
It most probably isn't a unit-test, because they should be fast (and independent). So running them all isn't a big drag. I can see where this might help in short-circuiting integration/regression runs to save time. If this is a major need for you, I'd tag the setup tests with [Core] or some such attribute.
I then proceed to write a build script which has two tasks
TaskN: run all tests in X, Y, Z dlls marked with tag [Core]
TaskN+1 (depends on TaskN): run all tests in X, Y, Z dlls excluding those marked with tag [Core]
(TaskN+1 shouldn't run if TaskN didn't succeed.) It isn't a perfect solution - e.g. it would just bail out if any one [Core] test failed. But I guess you should be fixing the Core ones instead of proceeding with non-Core tests.
It looks like what you need is not to prevent the execution of your dependent tests, but to report the results of your unit tests in a more structured way that allows you to identify when an error in one test cascades into other failed tests.
The test runners py.test, nosetests and unit2/unittest2 all support the notion of "exiting after the first failure". py.test more generally allows you to specify "--maxfail=NUM" to stop running and reporting after NUM failures. This may already help your case, especially since maintaining and updating dependencies for tests may not be that interesting a task.