Unit testing aspect-oriented features - Python

I'd like to know what you would propose as the best way to unit test aspect-oriented application features (well, perhaps that's not the best name, but it's the best I was able to come up with :-) ) such as logging or security?
These things are sort of omni-present in the application, so how to test them properly?
E.g. say that I'm writing a CherryPy web server in Python. I can use a decorator to check whether the logged-in user has permission to access a given page. But then I'd need to write a test for every page to see whether it works OK (or rather, to see that I haven't forgotten to check security permissions for that page).
This could maybe (emphasis on maybe) be bearable if logging and/or security were implemented as part of the web server's normal business logic. However, security and logging usually tend to be added to the app as an afterthought (or maybe that's just my experience; I'm usually given a server and then asked to implement the security model :-) ).
Any thoughts on this are very welcome. I have currently 'solved' this issue by, well - not testing this at all.
Thanks.

IMHO, the way to test user permissions on pages depends on the design of your app and of the framework you're using.
Generally, it's probably enough to cover your permission-checker decorator with unit tests to make sure it always works as expected, and then write a test that cycles through your 'views' (or whatever term CherryPy uses; I haven't used it in a very long time) and just checks whether those functions are decorated with the appropriate decorator.
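For example, a minimal sketch of that second idea, assuming a hypothetical require_permission decorator that tags the handlers it wraps (require_permission, check_permission, myapp and Root are illustrative names, not CherryPy API):

import functools

def require_permission(perm):
    # Hypothetical decorator that tags the wrapper so tests can detect
    # which handlers are protected.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            check_permission(perm)  # hypothetical permission check
            return func(*args, **kwargs)
        wrapper._required_permission = perm
        return wrapper
    return decorator

# ... in the test suite ...
def test_all_exposed_handlers_are_protected():
    import myapp  # hypothetical module holding the CherryPy handler class
    for name, handler in vars(myapp.Root).items():
        # CherryPy marks web-visible handlers with an `exposed` attribute.
        if callable(handler) and getattr(handler, 'exposed', False):
            assert hasattr(handler, '_required_permission'), \
                '%s is missing a permission check' % name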
As for logging, it's not quite clear what you want to test specifically. In any case, why not stub the logging functionality and check what's going on there?
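For instance, the standard library's unittest can capture log records without any real log output; a minimal sketch, where do_work and the 'myapp' logger name are hypothetical:

import logging
import unittest

logger = logging.getLogger('myapp')  # hypothetical logger name

def do_work():
    # Hypothetical function whose logging we want to verify.
    logger.info('work started')

class TestLogging(unittest.TestCase):
    def test_do_work_logs_start_message(self):
        # assertLogs (Python 3.4+) captures records emitted under 'myapp'.
        with self.assertLogs('myapp', level='INFO') as captured:
            do_work()
        self.assertIn('work started', captured.output[0])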

Well... let's see. In my opinion you are testing three different things here (sorry for the "Java AOP jargon"):
the features implemented by the interceptors (i.e. the methods that implement the functions activated at the pointcuts)
the coverage of the filters (i.e. whether the intended pointcuts are activated correctly or not)
the interaction between the pointcuts and the interceptors (including the side effects)
You are unit testing (strictly speaking) if you can handle these three layers separately. You can actually unit test the first; you can use a coverage tool and a skeleton application with mock objects to test the second; but the third is not exactly unit testing, so you may have to set up a test environment, design an end-to-end test, and write some scripts to input data into your application and gather the results (if it were a web app you could use Selenium, for example).
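For the web app case, a minimal Selenium sketch of such an end-to-end script; the URL, form field names, and expected title are all hypothetical:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    # Input data into the application...
    driver.get('http://localhost:8080/login')
    driver.find_element(By.NAME, 'username').send_keys('alice')
    driver.find_element(By.NAME, 'password').send_keys('secret')
    driver.find_element(By.CSS_SELECTOR, 'button[type=submit]').click()
    # ...and gather the results.
    assert 'Dashboard' in driver.title
finally:
    driver.quit()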

My answer is for the specific example you give, not for the possible problems with bolted-on security. Decorators are just regular functions, and you can test them as such. For instance:
# ... inside included module ...
admin = False  # module-level flag; see the note below on why this is terrible

def require_admin(function):
    if admin:
        return function
    else:
        return None

@require_admin
def func1(arg1, arg2):
    pass

# ... inside unit test ...
from nose.tools import assert_equal  # or any assert helper you prefer
import included_module  # the module sketched above

def test_require_admin():
    f = lambda x: x
    included_module.admin = False  # flip the module-level flag, not a local
    g = included_module.require_admin(f)
    assert_equal(g, None)
    included_module.admin = True
    g = included_module.require_admin(f)
    assert_equal(g, f)
Note: this is a really terrible way to do a security check, but it gets the point across about testing decorators. That's one of the nice things about Python: it's REALLY consistent. The following expressions are equivalent:
@g
def f(x):
    return x

and

def f(x):
    return x
f = g(f)

Related

How to define a "limit test context" using TDD?

How can I define the "limit context" of my tests?
I ask because of mocks: my service integrates many other libraries (RabbitMQ, Redis, etc.) and instances of various classes, so the greater part of my test-writing time goes into building "complex" mocks just to make the service testable at all.
Is it possible to define this limit? Would it be acceptable not to test these "expensive" methods, and to test only the simple related methods, where the parameters are easy to supply, like sum(a, b)?
Obviously, the more test coverage the better, but I spend a lot of time writing tests whose usefulness is questionable.
TDD is very useful for well-defined methods like sum(a, b), but when a method receives instances, or uses instances of other integrated services, the mocks get the greater part of the attention, not the method's actual purpose.
Imagine this service/method to test:
class ClientService:
    def ok_here_is_the_discount(self, some_args):
        # OK, we received the discount for this signature type; now we can
        # calculate it and publish the result to the broker.
        self.calculate_user_discount_based_on_signature_type(some_args.discount_value)

    def calculate_user_discount_based_on_signature_type(self, some_args):
        # Here we send the result to the broker.
        some_message_broker_publisher(
            some_args.value - some_args.discount_signature
        )

    def request_the_signature_type_discount_in_another_service(self, some_args):
        # OK, we received the client that has the signature type; now we
        # need to ask the signature service what discount that signature
        # type is worth.
        some_message_broker_publisher(
            queue='signature.service.what_discount_for_this_signature',
            signature_type=some_args.client.signature_type
        )

    # This message goes to the broker, and the signature service receives it.
    def method_that_receive_messages(self, some_args):
        # Here we receive a request to calculate a discount, but we only get
        # the client (dict/class/instance) with its signature type. We still
        # don't know the discount value, because the signature service is the
        # one responsible for managing signatures.
        if some_args.message_type == 'please_calculate_discount':
            self.request_the_signature_type_discount_in_another_service(some_args.client)
        if some_args.message_type == 'message_from_signature_discount':
            self.ok_here_is_the_discount(some_args.value)
1 - It receives a 'please_calculate_discount' message and calls the method
self.request_the_signature_type_discount_in_another_service(some_args.client)
but it still doesn't have the discount value, because that is the signature service's job (a message goes to the broker).
2 - Suppose the signature service responds with 'message_from_signature_discount'; then it calls the method:
self.ok_here_is_the_discount(some_args.value)
3 - OK, the method ok_here_is_the_discount receives the discount and calls
calculate_user_discount_based_on_signature_type()
which now has the values to calculate the discount and sends the result to the broker.
Do you see the complexity of these tests (TDD)? Should I test method_that_receive_messages, mocking all the related nested actions, or just test the simpler methods, like calculate_user_discount_based_on_signature_type?
In this case, is it better to use a real broker so this can be tested?
Well, it is easy to mock things in Python, but it still comes with some overhead, of course. What I have tended towards over many years now is to set up integration tests with mocks that test the happy path through the system, then design the system with side-effect-free functions as much as possible and throw unit tests at them. That way you can start your TDDing by setting up the overall test for the happy path and then unit testing the particulars in easy-to-test functions.
It is still useless when the thing behind the mock changes, though, but it gives a great sense of security when refactoring.
The mock libraries in Python are fine, but sometimes it is easier to just write your own fake and swap it in for the real thing with patch. Pretty sweet, actually.
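To make that concrete for the code above, a minimal sketch that patches the broker publisher out of calculate_user_discount_based_on_signature_type; the module name client_service and the SomeArgs stand-in are assumptions:

import unittest
from unittest import mock

from client_service import ClientService  # hypothetical module name

class SomeArgs:
    # Minimal stand-in for the message object used above.
    def __init__(self, value, discount_signature):
        self.value = value
        self.discount_signature = discount_signature

class TestClientService(unittest.TestCase):
    def test_calculate_discount_publishes_discounted_value(self):
        service = ClientService()
        args = SomeArgs(value=100, discount_signature=15)
        # Replace the broker publisher with a mock; no real broker needed.
        with mock.patch('client_service.some_message_broker_publisher') as publisher:
            service.calculate_user_discount_based_on_signature_type(args)
        publisher.assert_called_once_with(85)

if __name__ == '__main__':
    unittest.main()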

Deciding whether a test is a Unit or Integration test

So I'm trying to decide how to plan and organize a testing suite for my Python project, but I have a doubt about when a unit test is no longer a unit test, and I would love some feedback from the community.
If I understand correctly:
A unit test tests a minimal part of your code, be it a function/method that does one and only one simple thing, even if it has several use cases.
An integration test tests that two or more units of your code that are executed under the same context, environment, etc. (but trying to keep it to a minimum of units per integration test) work well together, not only by themselves.
My doubt is: say I have a simple function that performs an HTTP request and returns the content of that request, be it HTML, JSON, etc. (it doesn't matter); the fact is that the function is very, very simple but requests information from an external source, like:
import requests

def my_function(arg):
    # do something very simple with `arg`, like removing spaces or the
    # simplest thing you can imagine
    return requests.get('http://www.google.com/' + arg).content
Now this is a very stupid example, but my doubt is:
Given that this function is requesting information from an external source, when you write a test for it, can you still consider such a test a unit test?
UPDATE: The test for my_function() would stub out calls to the external source so that it doesn't depend on network/db/filesystem/etc., so that it's isolated. But the fact remains that the function being tested depends on external sources when running, for example, in production.
Thanks in advance!! :)
P.S.: Of course, maybe I'm not understanding 100% the purposes of unit and integration testing, so if I'm mistaken, please point out where; I'll appreciate it.
Based on your update:
The test for my_function() would stub out calls to the external source
so that it doesn't depend on network/db/filesystem/etc., so that it's
isolated. But the fact remains that the function being tested depends
on external sources when running, for example, in production.
As long as the external dependencies are stubbed out during your test, then yes, you can call it a unit test. It's a unit test based on how the unit under test behaves in your test suite rather than how the unit behaves in production.
Based on your original question:
Given that this function is requesting information from an external
source, when you write a test for it, can you still consider such a
test a unit test?
No, any test for code that touches or depends on things external to the unit under test is an integration test. This includes any web, file system, or database requests.
If your code is not 100% isolated from its dependencies and not 100% reproducible without other components, then it is an integration test.
For your example code to be properly unit tested you would need to mock out the call to google.com.
With the code calling google.com, your test would fail if Google went down or you lost connection to the internet (i.e. the test is not 100% isolated). Your test would also fail if the behavior of Google changed (i.e. the test is not 100% reproducible).
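For instance, a minimal sketch of that isolation, assuming my_function lives in a hypothetical module named mymodule and passes arg through unchanged:

import unittest
from unittest import mock

from mymodule import my_function  # hypothetical module name

class TestMyFunction(unittest.TestCase):
    def test_returns_content_without_touching_the_network(self):
        fake_response = mock.Mock()
        fake_response.content = b'hello'
        # Stub out requests.get so the test is isolated and reproducible.
        with mock.patch('mymodule.requests.get',
                        return_value=fake_response) as fake_get:
            result = my_function('foo')
        fake_get.assert_called_once_with('http://www.google.com/foo')
        self.assertEqual(result, b'hello')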
I don't think this is an integration test, since it doesn't make use of different parts of your application. The function really does one thing, and the tests for it can be called unit tests.
On the other side, this particular function has an external dependency and you don't want to be dependent on the network in your tests. This is where mocking would really help a lot.
In other words, isolate the function and make unit tests for it.
Also, make integration tests that would use a more high-level approach and would test parts of your application which call my_function().
Whether or not your code is tested in isolation is not a distinguishing criterion whether your test is a unit test or not. For example, you would (except in very rare cases) not stub standard library functions like sin(x). But, if you don't stub sin(x) that does not make your test an integration test.
When is a test an integration test? A test is an integration test if your goal with the test is to find bugs on integration level. That means, with an integration test you want to find out whether the interaction between two (or more) components is based on the same assumptions on both (all) sides.
Mocking, however, is an orthogonal technique that can be combined with almost all kinds of tests. (In an integration test, however, you cannot mock the partners of the interaction you want to test - but you can mock additional components.)
Since mocking typically causes some effort, it must bring a benefit, like:
significantly speeding up your tests
testing although some software part is not ready yet or buggy
testing exceptional cases that are hard to set up in an integrated software
getting rid of nondeterministic behaviour like timing or randomness
...
If, however, mocking does not solve a real problem, you might be better off using the depended-on component directly.
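As a tiny illustration of the nondeterminism point above, stubbing random.randint makes a die-rolling function deterministic under test (roll_die is a made-up example):

import random
import unittest
from unittest import mock

def roll_die():
    # Nondeterministic in production, deterministic under test.
    return random.randint(1, 6)

class TestRollDie(unittest.TestCase):
    def test_roll_is_deterministic_when_randint_is_stubbed(self):
        with mock.patch('random.randint', return_value=4) as fake_randint:
            self.assertEqual(roll_die(), 4)
        fake_randint.assert_called_once_with(1, 6)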

Is it acceptable practice to unit-test a program in a different language?

I have a static library I created in C++, and I would like to test it using driver code.
I noticed one of my professors likes to do his tests using Python, but he simply executes the program (not a library in this case, but an executable) using random test arguments.
I would like to take this approach, but I realized that this is a library and doesn't have a main function; that would mean I should either create a Driver.cpp file, or wrap the library into Python using SWIG or Boost.Python.
I'm planning to do the latter because it seems more fun, but logically, I feel that there are going to be more bugs when trying to wrap a library in a different language just to test it, rather than testing it in its native language.
Is testing programs in a different language an accepted practice in the real world, or is this bad practice?
I'd say that it's best to test the API that your users will be exposed to. Other tests are good to have as well, but that's the most important aspect.
If your users are going to write C/C++ code linking to your library, then it would be good to have tests making use of your library the same way.
If you are going to ship a Python wrapper (why not?) then you should have Python tests.
Of course, there is a convenience aspect to this, as well. It may be easier to write tests in Python, and you might have time constraints that make it more appealing, etc.
I guess what I'm saying is: There's nothing inherently wrong with tests being in a different language from the code under test (that's totally normal for testing a REST API, for instance), but make sure you have tests for the public-facing API at a minimum.
Aside, on terminology:
I don't think the types of tests you are describing are "unit tests" in the usual sense of the term. Probably "functional test" would be more accurate.
A unit test typically tests a very small component - such as a function call - that might be one piece of larger functionality. Unit tests like these are often "white box" tests, so you can see the inner workings of your code.
Testing something from a user's point of view (such as your professor's command-line tests) is "black box" testing, and in these examples it is at a more functional level rather than the "unit" level.
I'm sure plenty of people may disagree with that, though - it's not a rigidly-defined set of terms.
A few things to keep in mind:
If you are writing tests as you code, then, by all means, use whatever language works best to give you rapid feedback. This enables fast test-code cycles (and is fun as well). BUT.
Always have well-written tests in the language of the consumer. How is your client/consumer going to call your functions? What language will they be using? Using the same language minimizes integration issues later on in the life-cycle.
It really depends on what it is you are trying to test. It almost always makes sense to write unit tests in the same language as the code you are testing so that you can construct the objects under test or invoke the functions under test, both of which can be most easily done in the same language, and verify that they work correctly. There are, however, cases in which it makes sense to use a different language, namely:
Integration tests that run a number of different components or applications together.
Tests that verify compilation or interpretation failures, which could not be tested in the language itself, since you are validating that an error occurs at the language level.
An example of #1 might be a program that starts up multiple different servers connected to each other, issues requests to the server, and verifies those responses. Or, as a simpler example, a program that simply forks an application under test as a subprocess and verifies that it produces the expected outputs for a given input.
An example of #2 might be a program that verifies that a certain piece of C++ code will produce a static assertion failure or that a particular template instantiation which is intentionally disallowed will result in a compilation failure if someone attempts to use it.
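As a concrete sketch of #1, assuming a hypothetical executable ./myapp that sums whitespace-separated integers read from stdin:

import subprocess
import unittest

class TestMyAppEndToEnd(unittest.TestCase):
    def test_known_input_produces_expected_output(self):
        # Fork the (hypothetical) application under test as a subprocess
        # and verify its output for a given input.
        result = subprocess.run(
            ['./myapp'],
            input='2 3\n',
            capture_output=True,
            text=True,
            check=True,
        )
        self.assertEqual(result.stdout.strip(), '5')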
To answer your larger question, it is not bad practice per se to write tests in a different language. Whatever makes the tests more convenient to write, easier to understand, more robust to changes in implementation, more sensitive to regressions, or better on any one of the properties that define good testing would be a good justification to write the tests one way vs. another. If that means writing the tests in another language, then go for it. That being said, small unit tests typically need to be able to invoke the item under test directly, which, in most cases, means writing the unit tests in the same language as the component under test.
I would say it depends on what you're actually trying to test. For true unit testing, it is, I think, best to test in the same language, or at least a binary-compatible language (i.e. testing Java with Groovy -- I use Spock in this case, which is Groovy based, to unit-test my Java code, since I can intermingle the Java with the Groovy), but if you are testing results, then I think it's fair to switch languages.
For example, I have tested the expected results when given a specific set of a data when running a Perl application via nose in Python. This works because I'm not unit testing the Perl code, per se, but the outcomes of that Perl code.
In that case, to unit test actual Perl functions that are part of the application, I would use a Perl-based test framework such as Test::More.
Why not? It's an awesome idea, because it really makes you test the unit as a black box.
Of course, there may be technical issues involved: what if you need to mock some parts of the unit under test? That may be difficult in a different language.
This is a common practice for integration tests, though. I've seen lots of programs driven from external tools, such as a website from Selenium, or an application from Cucumber. Both of those can be considered the same as a custom Python script.
If you consider that the difference between integration testing and unit testing is the number of things under test at any given time, then the only reason you shouldn't do this is tool support.

How to structure nose unit tests which build on each other?

Example
Let's say you have a hypothetical API like this:
import foo
account_name = foo.register()
session = foo.login(account_name)
session.do_something()
The key point being that in order to do_something(), you need to be registered and logged in.
Here's an over-simplified, first-pass, suite of unit tests one might write:
# test_foo.py
import foo

def test_registration_should_succeed():
    foo.register()

def test_login_should_succeed():
    account_name = foo.register()
    foo.login(account_name)

def test_do_something_should_succeed():
    account_name = foo.register()
    session = foo.login(account_name)
    session.do_something()
The Problem
When registration fails, all the tests fail, and that makes it non-obvious where
the real problem is. It looks like everything's broken, when really only one crucial thing is broken. It's hard to find that one crucial thing unless you are familiar with all the tests.
The Question
How do you structure your unit tests so that subsequent tests aren't executed when core functionality on which they depend fails?
Ideas
Here are possible solutions I've thought of.
Manually detect failures in each test and raise SkipTest (see the sketch after this list). - Works, but a lot of manual, error-prone work.
Leverage generators to not yield subsequent tests when the primary ones fail. - Not sure if this would actually work (because how do I "know" that the previously yielded test failed?).
Group tests into test classes. E.g., these are all the unit tests that require you to be logged in. - Not sure this actually addresses my issue. Wouldn't there be just as many failures?
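A minimal sketch of idea 1, assuming nose runs module-level test functions in definition order: a module-level flag records whether registration worked, and dependent tests raise SkipTest (which nose treats as a skip) when it did not.

import unittest

import foo

_registration_works = False

def test_registration_should_succeed():
    global _registration_works
    foo.register()
    _registration_works = True  # only reached if register() didn't raise

def test_login_should_succeed():
    if not _registration_works:
        raise unittest.SkipTest('registration is broken; skipping dependent test')
    account_name = foo.register()
    foo.login(account_name)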
Rather than answering the explicit question, I think a better answer is to use mock objects. In general, unit tests should not require accessing external databases (as is presumably required to log in). However, if you want to have some integration tests that do this (which is a good idea), then those tests should test the integration aspects, and your unit tests should test the unit aspects. I would go so far as to keep the integration tests and the unit tests in separate files, so that you can quickly run the unit tests on a very regular basis, and run the integration tests on a slightly less regular basis (although still at least once a day).
This problem indicates that Foo is doing too much. You need to separate concerns. Then testing will become easy. If you had a Registrator, a LoginHandler, and a DoSomething class, plus a central controller class that orchestrated the workflow, then everything could be tested separately.

CGI - Good Design Practice: setattr/eval/if-then?

I have a question about good practice. In developing a back-end for my company, wherein our sales staff can manage clients remotely, I want them to simply be able to click "Edit Contact Info." (for instance) and have the Python script manageClients.py call the editContactInfo function.
Obviously, the easiest way to do this would be to link to manageClients.py?function=editContactInfo&client=ClientName
and in the script, after getting a FieldStorage instance:
if function:
    eval(function)
However, everything I see says not to use eval if it can be avoided. I don't fully understand setattr(), and I don't really know how I would work it into such a simple script. I also think if function == "editClientInfo": (and so on) would be way too bulky.
Obviously the code/URL stuff listed above is simplistic so I could clearly ask the question. There are security measures in place, etc.
I just want a way to tackle this that is considered good practice, as this is my first big programming project, and I want to do it right.
If I understand correctly, you have a set of functions and you want to specify, by name, which one to use. I'd set up a dictionary mapping the name to the function, and then it's just a simple lookup to get the function to use:
# Map names to functions. Note there is no need for the names
# to match the actual function names, and the functions can appear
# multiple times.
function_map = {
    'editContactInfo': editContactInfo,
    'editClientInfo': editClientInfo,
    'editAlias': editContactInfo,
}

# Get the name of the function to run, etc. I'm assuming that the
# function name is stored in the variable function_name, and any
# parameters to call it with are in the dictionary parameters
# (i.e., the functions all take keyword arguments).

# Do a lookup to get the function.
try:
    function = function_map[function_name]
except KeyError:
    # Replace with your own error handling code.
    print('Function {0} does not exist.'.format(function_name))

# Call the function
result = function(**parameters)
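As an aside, here is a hedged sketch of how function_name and parameters might be pulled from the FieldStorage instance the question mentions; the 'function' field name follows the question's URL, everything else is an assumption:

import cgi

form = cgi.FieldStorage()
function_name = form.getfirst('function', '')
# Treat every other submitted field as a keyword argument for the
# dispatched function.
parameters = {key: form.getfirst(key)
              for key in form.keys() if key != 'function'}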
This dictionary lookup is the standard way to do dynamic dispatch, which is what you seem to be asking for. However, you also asked about best practice, and I agree with RestRisiko: using a framework designed for web applications is the best practice in this case.
The only Python framework I have any real experience with is Django. This is a 'heavier' (more complex) framework than something like Flask or Bottle, but comes with a lot more inbuilt features. It really depends what your needs are, and I'd recommend checking them all out to see which fits your situation best.
CGI? That is an ancient relic of the last century.
Please use a decent Python web framework to implement your functionality
in a decent, modern way.
Small frameworks like Flask or Bottle are much, much better than dealing with CGI
nowadays - no excuses, please. There is no reason to use CGI in any way anymore.
Even building something on top of Python's HTTPServer is a much better option.
