Unit Test cases for Python Eve Webservices - python

We have developed our APIs using the Python Eve framework. Is there a way we can write unit test cases for the APIs we have developed in Eve? Is there a unit test component bundled into Python Eve? I need to bundle the tests into my Continuous Integration setup.
If yes, please help me with the steps on how to proceed.

You could start by looking at Eve's own test suite. There are 600+ examples in there. There are two base classes that provide a lot of utility methods: TestMinimal and TestBase. Almost all other test classes inherit from either of those. You probably want to use TestMinimal, as it takes care of setting up and dropping the MongoDB connection for you. It also provides helpers like assert200, assert404, etc.
In general, you use the test_client object, as you do with Flask itself. Have a look at Testing Flask Applications too and Eve's Running the Tests page.
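For illustration, a minimal test in that style might look like the sketch below. It is not taken from Eve's own suite; the settings file name, the people resource, and the MongoDB instance it implies are assumptions you would adapt to your own project.

import unittest

from eve import Eve

class TestPeopleEndpoint(unittest.TestCase):
    def setUp(self):
        # Eve subclasses Flask, so test_client() works exactly as it does
        # for a plain Flask app. Assumes settings.py sits next to the test
        # and points at a reachable MongoDB.
        self.app = Eve(settings="settings.py")
        self.client = self.app.test_client()

    def test_get_people_returns_200(self):
        response = self.client.get("/people")
        self.assertEqual(response.status_code, 200)

if __name__ == "__main__":
    unittest.main()

A module like this runs under any standard test runner (unittest, nose, pytest), which makes it easy to wire into a CI job.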

Related

How to allow switching to (nullable) infrastructure stubs in Django

Question - What approach or design pattern would make it easy in Django to use stubs for external integrations when running tests? By 'external integrations' I mean a couple of external REST APIs and a NAS file system. The external integrations are already separate modules/classes.
What I do now -
Currently, I disable external dependencies in tests mainly by sprinkling mock.patch() statements across my test code.
But this is getting impractical (it needs to be applied per test, and is easy to forget, especially in higher-level tests), and it ties the tests too closely to the internals of certain modules.
Some details of what I am looking for
I like the concept of 'nullable infrastructure' described at
https://www.jamesshore.com/v2/blog/2018/testing-without-mocks#nullable-infrastructure.
I am especially looking for an approach that integrates well with Django, i.e. considering the settings.py file approach, and running tests via python manage.py test.
I would like to be able to easily:
state that all tests should use the nullable counterpart of an infrastructure class or function
override that behaviour per test, or test class, when needed (e.g. when testing the actual external infrastructure).
I tried the approach outlined in https://medium.com/analytics-vidhya/mocking-external-apis-in-django-4a2b1c9e3025, which basically says to create an interface implementation, a real implementation and a stub implementation. The switching is done using a Django settings parameter and a class decorator on the interface class (which returns the chosen class, rather than the interface). But it isn't working out very well: the class decorator in my setup does not work with @override_settings (the decorator applies the settings when Django starts, not when the test runs), and there is really a lot of extra code (which also feels un-pythonic).
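To make the problem concrete, here is a hypothetical sketch (class and setting names such as PaymentGateway and USE_NULL_INFRASTRUCTURE are invented for illustration) of switching through a plain factory function that reads the setting at call time, which is what @override_settings would need in order to take effect per test:

from django.conf import settings

class RealPaymentGateway(object):
    def charge(self, amount):
        # call the real external REST API here
        raise NotImplementedError

class NullPaymentGateway(object):
    def __init__(self):
        self.charges = []  # record calls so tests can assert on them

    def charge(self, amount):
        self.charges.append(amount)

def get_payment_gateway():
    # Resolve the setting at call time rather than at import time, so a
    # per-test or per-class @override_settings is honoured.
    if getattr(settings, "USE_NULL_INFRASTRUCTURE", False):
        return NullPaymentGateway()
    return RealPaymentGateway()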

How to run all the tests on the same deal. Selenium Webdriver + Python

I'm a newbie in automation testing.
Currently I'm doing manual testing and trying to automate the process with Selenium WebDriver using Python.
I'm creating a test suite which will run different scripts. Each script will be running tests on different functionality.
And I got stuck.
I'm working on a financial web application. The initial script will create a financial deal, and all other scripts will test different functionality on this deal.
I'm not sure how to handle this situation. Should I just pass the URL from the first script (the newly created deal) into all other scripts in the suite, so all the tests run on the same deal and don't create a new one for each test? How do I do this?
Or may be there is a better way to do this?
Deeply appreciate any advice! Thank you!
Preferably you would have each test be able to run in isolation. If you have a way to create the deal through an API or Database rather than creating one through the UI, you could call that for each test. And, if possible, also clean up that data after your test runs.
If this is not possible, you could also record some data from a test in a database, XML, or JSON file. Then your following tests could read that data back in to get what they need, in this case some reference to your financial deal.
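As a rough illustration of that second option (file name and keys are made up), the script that creates the deal could dump a reference to JSON and later tests could read it back:

import json

def save_deal_reference(deal_url, path="deal_reference.json"):
    # written by the script that creates the deal
    with open(path, "w") as fh:
        json.dump({"deal_url": deal_url}, fh)

def load_deal_reference(path="deal_reference.json"):
    # read by every test that needs to work on the same deal
    with open(path) as fh:
        return json.load(fh)["deal_url"]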
The second option is not ideal, but might be appropriate in some cases.
There are a couple of approaches here that might help, and some of it depends on whether you're using a framework or just building from scratch with the Selenium API.
Use setup and teardown methods at the suite or test level.
This is probably the easiest method, and close to what you asked in your post. Every framework I've worked in supports some sort of setup and teardown method out of the box, and even if it doesn't, they're not hard to write. In your case, you've got a script that calls each of the test cases, so just add a before() method at the beginning of the suite that creates the financial deal you're working on.
If you'd like a new deal made for each individual test, just put the before() method in the parent class of each test case so they inherit and run it with every case.
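A minimal unittest-flavoured sketch of both variants (create_financial_deal() and the URL are placeholders, not code from your application):

import unittest

def create_financial_deal():
    # placeholder for the API/DB/UI call that creates the deal and
    # returns its URL or identifier
    return "https://example.test/deals/123"

class DealTestBase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # suite-level setup: runs once for all tests in the class;
        # move the call into a plain setUp(self) method instead if you
        # want a fresh deal for every individual test
        cls.deal_url = create_financial_deal()

class TestDealFunctionality(DealTestBase):
    def test_deal_page_loads(self):
        # every test reuses the deal created in setUpClass
        self.assertTrue(self.deal_url.startswith("https://"))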
Use Custom Test Data
This is probably the better way to do this, but it assumes you have DB access or a good relationship with your DBM. You generally don't want the success of one test case to rely on the success of another (which is what the first answer meant by isolation). If the creation of the document fails in some way, every single test downstream of it will fail as well, even though they're testing a different feature that might be working. This results in a lot of lost coverage.
So, instead of creating a new financial document every time, speak to your DBM and see if it's possible to create a set of test data that either sits in your test db or is inserted at the beginning of the test suite.
This way you have one test that tests document creation, and X tests that verify its functionality based on the test data, and those tests do not rely on each other.
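Sketched with unittest's module-level hook (seed_test_deals() stands in for whatever insert script or fixture your DBM agrees to):

import unittest

def seed_test_deals():
    # e.g. run an INSERT script or load a fixture into the test database
    pass

def setUpModule():
    # runs once, before any test in this module
    seed_test_deals()

class TestDealReporting(unittest.TestCase):
    def test_report_uses_seeded_deal(self):
        # tests read the seeded data; they do not depend on a separate
        # deal-creation test having passed first
        pass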

Difference between django-webtest and selenium

I have been reading about testing in Django. One thing that was recommended was the use of django-webtest for functional testing. I found a decent article here that teaches how to go about functional testing with Selenium using Python. But people have also recommended django-webtest, the Django extension of Ian Bicking's WebTest, for testing forms in Django. How is testing with WebTest different from testing with Selenium in the context of Django forms?
So, from a functional testing point of view:
How do django-webtest and Selenium compare?
Do we need both of them, or would either one do?
The key difference is that Selenium runs an actual browser, while WebTest hooks into the WSGI layer.
This results in the following differences:
You can't test JS code with WebTest, since there is nothing to run it.
WebTest is much faster since it hooks into the WSGI layer; this also means a smaller memory footprint.
WebTest does not require actually running the server on a port, so it's a bit easier to parallelize.
WebTest does not check different problems that occur with actual browsers, like specific browser version bugs (cough.. internet explorer.. cough..)
Bottom line:
PREFER to use WebTest, unless you MUST use Selenium for things that can't be tested with WebTest.
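To give a feel for the WSGI-level style, here is a small django-webtest sketch (the /login/ URL and the form field names are assumptions for illustration):

from django_webtest import WebTest

class LoginFormTest(WebTest):
    def test_login_form_submits(self):
        page = self.app.get("/login/")   # served through WSGI: no browser, no port
        form = page.form                 # the page's single HTML form
        form["username"] = "alice"
        form["password"] = "secret"
        response = form.submit()
        self.assertIn(response.status_code, (200, 302))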
The important thing to know about Selenium is that it's primarily built to be a server-agnostic testing framework. It doesn't matter what framework or server-side implementation is used to create the front-end as long as it behaves as expected. Also, while you can (and when possible you probably should) write tests manually in Selenium, many tests are recorded macros of someone going through the motions that are then turned into code automatically.
On the other hand, django-webtest is built to work specifically on Django websites. It's actually a Django-specific extension to WebTest, which is not Django-only, but WSGI-only (and therefore Python-only). Because of that, it can interact with the application with a higher level of awareness of how things work on the server. This can make running tests faster and also makes it easier to write more granular, detailed tests. Also, unlike Selenium, your tests can't be automatically written as recorded macros.
Otherwise, the two tools have generally the same purpose and are intended to test the same kinds of things. That said, I would suggest picking one rather than using both.

Does Django have BDD testing tools comparable to Rails' testing tools?

Ruby/Rails enjoy some really nice and powerful Behavior Driven Design/Development testing frameworks like Cucumber and RSpec.
Does Python/Django enjoy the same thing (I'm not talking about simple unit testing like PyUnit)?
There is a new tool called Lettuce that promises to be a Pythonic version of Cucumber. It is starting with Django integration. That, plus the existing testing tools in Django, makes it pretty good for unit testing.
There's also a tool called Windmill that provides solid browser-based GUI testing. Couple that with a tool like Lettuce for writing acceptance tests, plus straight unittest and nosetests, and I'd say you're set.
The thing to remember is that there's a slightly different culture between Ruby and Python. Ruby has a preference for tests above all else; in Python it's documentation. As such, there's not a million and one testing frameworks in Python, just a few really solid ones with the occasional outlier (like Lettuce).
Hope this helps.
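For a flavour of the Gherkin-plus-steps style Lettuce uses, here is a hedged sketch; the feature text and step patterns are invented for the example:

# features/add.feature would contain:
#   Feature: Adding numbers
#     Scenario: Add two numbers
#       Given I have the numbers 2 and 3
#       When I add them
#       Then the result should be 5

from lettuce import step, world

@step(r'I have the numbers (\d+) and (\d+)')
def have_numbers(step, a, b):
    world.numbers = (int(a), int(b))

@step(r'I add them')
def add_them(step):
    world.result = sum(world.numbers)

@step(r'the result should be (\d+)')
def check_result(step, expected):
    assert world.result == int(expected), "Got %s" % world.result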
In case someone is still visiting here: Aloe is a Lettuce upgrade and has integration with Django 1.8. I'm starting to use it now.
If you are referring to high level testing, and not unit testing, then you can still use Cucumber. Cucumber can be hooked up with Webrat and Mechanize, and you can use that setup to write stories to test your Django app. I have a setup like that. You can also hook up Cucumber with Watir, and do browser based testing.
There isn't something like FactoryGirl to auto-generate data for your models, but you can use Django fixtures to create initial data for your models, and only have the fixtures installed when you are in testing mode. Then you can start a test and have some data to test with.
With those things, you can setup a fairly comprehensive high level test suite. Then you can use Django unit test to do low level testing.
I'm working with the Robot Framework (http://code.google.com/p/robotframework/). It is a generic test automation framework for acceptance testing and acceptance test-driven development (ATDD).
It's flexible and powerful and also allows BDD style in a similar way to lettuce/cucumber.
I had a good experience with TestBrowser, which allows you to write functional tests at a high level. See http://pypi.python.org/pypi/homophony for integration of TestBrowser with Django.

Testing for external resource consistency / skipping django tests

I'm writing tests for a Django application that uses an external data source. Obviously, I'm using fake data to test all the inner workings of my class but I'd like to have a couple of tests for the actual fetcher as well. One of these will need to verify that the external source is still sending the data in the format my application expects, which will mean making the request to retrieve that information in the test.
Obviously, I don't want our CI to come down when there is a network problem or the data provider has a spot of downtime. In this case, I would like to throw a warning that skips the rest of that test method and doesn't contribute to an overall failure. This way, if the data arrives successfully I can check it for consistency (failing if there is a problem) but if the data cannot be fetched it logs a warning so I (or another developer) know to quickly check that the data source is ok.
Basically, I would like to test my external source without being dependent on it!
Django's test suite uses Python's unittest module (at least, that's how I'm using it), which looks useful, given that the documentation for it describes skipping tests and expected failures. This feature is apparently 'new in version 2.7', which explains why I can't get it to work - I've checked the version of unittest I have installed from the console and it appears to be 1.63!
I can't find a later version of unittest in pypi so I'm wondering where I can get hold of the unittest version described in that document and whether it will work with Django (1.2).
I'm obviously open to recommendations / discussion on whether or not this is the best approach to my problem :)
[EDIT - additional information / clarification]
As I said, I'm obviously mocking the dependency and doing my tests on that. However, I would also like to be able to check that the external resource (which typically is going to be an API) still matches my expected format, without bringing down CI if there is a network problem or their server is temporarily down. I basically just want to check the consistency of the resource.
Consider the following case...
If you have written a Twitter application, you will have tests for all your application's methods and behaviours - these will use fake Twitter data. This gives you a complete, self-contained set of tests for your application. The problem is that this doesn't actually check that the application works because your application inherently depends on the consistency of Twitter's API. If Twitter were to change an API call (perhaps change the URL, the parameters or the response) the application would stop working even though the unit tests would still pass. (Or perhaps if they were to completely switch off basic authentication!)
My use case is simpler - I have a single xml resource that is used to import information. I have faked the resource and tested my import code but I would like to have a test that checks the format of that xml resource has not changed.
My question is about skipping tests in Django's test runner so I can throw a warning if the resource is unavailable without the tests failing, specifically getting a version of Python's unittest module that supports this behaviour. I've given this much background information to allow anyone with experience in this area to offer alternative suggestions.
Apologies for the lengthy question, I'm aware most people won't read this now.
I've 'bolded' the important bits to make it easier to read.
I created a separate answer since your edit invalidated my last answer.
I assume you're running on Python version 2.6 - I believe the changes that you're looking for in unittest are available in Python version 2.7. Since unittest is in the standard library, updating to Python 2.7 should make those changes available to you. Is that an option that will work for you?
One other thing that I might suggest is to maybe separate the "external source format verification" test(s) into a separate test suite and run that separately from the rest of your unit tests. That way your core unit tests are still fast and you don't have to worry about the external dependencies breaking your main test suites. If you're using Hudson, it should be fairly easy to create a separate job that will handle those tests for you. Just a suggestion.
The new features in unittest in 2.7 have been backported to 2.6 as unittest2. You can just pip install it and substitute unittest2 for unittest, and your tests will work as they did, plus you get the new features without upgrading to 2.7.
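As a rough sketch of the skip pattern (the URL, timeout and expected root element are placeholders; on Python 2.7+ plain unittest works the same way):

import urllib2

import unittest2 as unittest   # plain "import unittest" on Python 2.7+

EXTERNAL_URL = "http://example.com/data.xml"

class ExternalFormatTest(unittest.TestCase):
    def test_xml_resource_format(self):
        try:
            payload = urllib2.urlopen(EXTERNAL_URL, timeout=10).read()
        except (urllib2.URLError, IOError) as exc:
            # network problem or provider downtime: skip instead of
            # failing the whole CI run
            raise unittest.SkipTest("External resource unavailable: %s" % exc)
        # we did get data, so fail loudly if the format has changed
        self.assertIn("<expected-root-element", payload)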
What are you trying to test? The code in your Django application(s) or the dependency? Can you just Mock whatever that external dependency is? If you're just wanting to test your Django application, then I would say Mock the external dependency, so your tests are not dependent upon that external resource's availability.
If you can post some code of your "actual fetcher" maybe you will get some tips on how you could use mocks.
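Without seeing the fetcher, a generic sketch of the mock approach, assuming it downloads XML with urllib2, would look something like this (the canned XML and URL are made up):

import unittest
import urllib2
from StringIO import StringIO

from mock import patch   # "from unittest import mock" on Python 3

class ImporterTest(unittest.TestCase):
    @patch("urllib2.urlopen")
    def test_import_parses_canned_xml(self, mock_urlopen):
        # the network call is replaced for the duration of this test
        mock_urlopen.return_value = StringIO("<people><person name='Ada'/></people>")
        # here you would call your real fetcher/import code and assert on
        # what it produced; the sketch just shows the canned payload
        payload = urllib2.urlopen("http://example.com/data.xml").read()
        self.assertIn("Ada", payload)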
