I need to cover only code that is called directly from the test function; every nested method call must be marked as missed. This should help me ensure that every unit/method has its own test.
Example: the test function calls method A, and method A calls method B internally. Afterwards I want method A marked as covered and method B marked as missed, because it was not called directly from the test function.
Does anybody know of a plugin, or have any idea how to do that?
I have tried googling and reading the coverage docs; the only thing slightly related is dynamic contexts, but they show which callers executed each line. This differs from what I want, because then I would have to inspect the calling method for every line. I just want these lines (the ones not called directly) to be marked red.
Coverage.py doesn't have a way to do this. There's discussion about marking which product functions are intended to be covered by each test as a possible future feature: https://github.com/nedbat/coveragepy/issues/696
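For reference, the dynamic-contexts feature mentioned in the question is enabled through the config file. A minimal sketch (check the coverage.py docs for the options your version supports):

# .coveragerc
[run]
dynamic_context = test_function

[html]
show_contexts = True

After coverage run -m pytest (or however you run your tests) and coverage html, each line in the HTML report lists the test context(s) that executed it, which is the per-line caller view described in the question, not the direct-call-only view being asked for.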
Related
I have a function that has the following code in it:
ses = boto3.Session()
as_role = aws_assume_role_lib.assume_role(session, "some_role")
I'm trying to unit test it, and I need to mock those two calls.
What's the best way to do so?
Without more information, there's no way to answer your question. What is it that you want to test? It certainly can't be the code you're showing us, because if you mock out both of these calls, there's nothing left to test.
Mocking involves injecting stand-ins for specific test cases such that, for those test cases, the mocks return predetermined values without calling the real routines. So what do you want those values to be? The mocks could also have side effects, or change their behavior based on external state, but you want to avoid both of those situations. Mocks are usually functional...returning a specific output for each set of specific explicit inputs, and neither modifying nor being affected by implicit (external) state.
I assume that ses should be session so that the result of the first call is passed to the second call. In this case, neither of these calls takes varying data, so you can mock the result of the two calls by just assigning a static value to as_role...whatever value your later code wants to see. Since there is only one set of inputs, there should only be one possible output.
It would be more complicated if you need the mocks to change their behavior based on external state. How you might mock in that case depends entirely on the external states that are possible, where they come from, and how they affect the value of as_role. Likewise, if you need your mocks to cause a change in external state, like modifying global variables, you'd need to specify that, and it might affect how you'd choose to mock.
In short, you should define a spec describing how you want your mocks to act under a restricted set of particular conditions. Having that spec will let you begin to decide how to implement them.
If you're just asking how to generally build mocks in Python, check out one or more of the existing Python mocking frameworks, like mock, flexmock, mox, Mocker, etc. A quick Google pointed me to this, which might be helpful: https://garybernhardt.github.io/python-mock-comparison/. Asking for library recommendations is against SO policy as it involves mostly personal opinions, so if you're asking for that, you should look elsewhere.
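If it's the basic mechanics you're after, here is a minimal sketch using unittest.mock.patch. The module name my_module and the function do_work are placeholders for your actual code; it assumes your module does import boto3 and import aws_assume_role_lib at the top, and (as above) that ses and session are meant to be the same variable:

from unittest import mock
import my_module  # hypothetical module containing the function under test

def test_creates_session_and_assumes_role():
    # Patch the names where my_module looks them up, not where they are defined.
    with mock.patch("my_module.boto3") as mock_boto3, \
         mock.patch("my_module.aws_assume_role_lib") as mock_lib:
        mock_lib.assume_role.return_value = mock.sentinel.assumed_session

        my_module.do_work()  # hypothetical function under test

        mock_boto3.Session.assert_called_once_with()
        mock_lib.assume_role.assert_called_once_with(
            mock_boto3.Session.return_value, "some_role"
        )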
I just want to check whether a particular function is called by another function or not. If it is, then I have to store that function in one category, and the functions that do not call the particular function will be stored in a different category.
I have 3 .py files with classes and functions in them. I need to check each and every function. For example, say there is a function trial(). If a function calls trial(), then that function goes in the "example" category; otherwise it goes in the "non-example" category.
I have no idea what you are asking, but even if it is technically possible, the one and only answer is: don't do that.
If your design is such that method A needs to know whether it was called from method B or C, then your design is most likely ... broken. Having such dependencies within your code will quickly turn the whole thing unmaintainable, simply because you will very soon be constantly asking yourself "that path seems fine, but what will happen over here?"
One way out of that: create different methods, so that B can call something else than C does; but of course, you should still extract the common parts into one method.
Long story short: my non-answer is: take your current design and have some other people review it. You should step back from whatever you are doing right now and find a way to do it differently! You know, most of the time, when you start thinking about strange/awkward ways to solve a problem within your current code, the real problem is your current code.
EDIT: given your comments ... The follow-up questions would be: what is the purpose of this classification? How often will it need to take place? You know, will it happen only once (then manual counting might be an option), or after each and every change? For the "manual" route, IDEs such as PyCharm are pretty good at analyzing Python source code; you can do simple things like "find usages", so the IDE lists all the methods that invoke some method A. But of course, that works only one level deep.
The other option I see: write some test code that imports all your modules, and then see how far the inspect module helps you. You could probably iterate over each function's source and simply search for matching method names, roughly as in the sketch below.
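Something along these lines (module names are placeholders; a textual search like this will also produce false positives, e.g. when the name only appears in a string or comment):

import inspect
import module_a, module_b, module_c  # hypothetical: your three .py files

def classify(modules, target_name="trial"):
    example, non_example = [], []
    for module in modules:
        for name, func in inspect.getmembers(module, inspect.isfunction):
            source = inspect.getsource(func)
            # crude check: does the target name appear followed by an opening paren?
            if f"{target_name}(" in source and name != target_name:
                example.append(f"{module.__name__}.{name}")
            else:
                non_example.append(f"{module.__name__}.{name}")
    return example, non_example

example, non_example = classify([module_a, module_b, module_c])
print(example)
print(non_example)

Methods defined inside classes would need an extra pass over inspect.getmembers(cls, inspect.isfunction) for each class, but the idea is the same.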
I have a file, full of unit tests. I want to add a test that will only run if it's manually selected, similar to NCover's Explicit attribute.
I suspect this involves the skipIf decorator, but I don't see what to check against. Through debugging, I can see that some of the unittest classes hold this info, but it's not passed to my code.
So far, I see two ways to sort of do this:
1) Have setUp() increment a counter per test. If the test is not the first one, skip. This is close, but means I need to be careful about the name to keep it from being run first.
2) Put a flag in the file, and check against it. The main problem is that it involves code changes whenever I want to run the test, with all that implies. (A sketch of this option is below.)
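For concreteness, option 2 would look something like this (RUN_MANUAL_TESTS is just a placeholder name for the flag):

import unittest

RUN_MANUAL_TESTS = False  # hypothetical flag; edit to True to run the manual test

class MyTests(unittest.TestCase):
    def test_normal(self):
        self.assertEqual(1 + 1, 2)

    @unittest.skipUnless(RUN_MANUAL_TESTS, "run only when manually selected")
    def test_manual_only(self):
        self.assertTrue(run_expensive_check())  # hypothetical manual-only check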
What is the easiest way to record function calls for debugging in Python? I'm usually interested in particular functions or all functions from a given class. Or sometimes even all functions called on a particular object attribute. Seeing the call arguments would be useful, too.
I can imagine writing decorators for all that, but then I'd still have to modify the source code in different places. And writing a class decorator which modifies all methods isn't that straightforward.
Is there a solution where I don't have to modify my source code? Ideally something which doesn't slow down Python too much.
You ought to be able to implement something that does what you want using either sys.setprofile() or perhaps sys.settrace(). They both let you define a function to be called when specific "events" occur, like function calls, and they pass that function additional information which can be used to determine the function/method being called and to examine its arguments.
If you look around, there's probably sample usage code to use as a good starting point.
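As a rough starting point, a sketch with sys.setprofile (the traced function name is a placeholder; filtering by module, class, or object is up to the callback):

import sys

def profiler(frame, event, arg):
    if event == "call":
        code = frame.f_code
        # frame.f_locals holds the arguments at the moment of the call
        print(f"call {code.co_filename}:{code.co_name} args={dict(frame.f_locals)}")

sys.setprofile(profiler)
try:
    function_under_inspection()  # hypothetical: the code whose calls you want to record
finally:
    sys.setprofile(None)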
Besides decorators, for Python >= 3.0 you could override the __getattribute__ method of a class, which is invoked on every attribute access (including method lookups) on the object.
You can look through Lutz, "Learning Python", chapters 31 and 37, for more on this.
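A small sketch of that approach; it intercepts attribute lookups, so every (non-dunder) method fetched from the instance gets wrapped and its arguments printed:

class Traced:
    def __getattribute__(self, name):
        attr = object.__getattribute__(self, name)
        if callable(attr) and not name.startswith("__"):
            def wrapper(*args, **kwargs):
                print(f"call {type(self).__name__}.{name} args={args} kwargs={kwargs}")
                return attr(*args, **kwargs)
            return wrapper
        return attr

class Service(Traced):  # hypothetical class whose calls you want to see
    def add(self, a, b):
        return a + b

Service().add(2, 3)  # prints: call Service.add args=(2, 3) kwargs={}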
I am using mock for testing in Python. I am trying to unit test a metaclass which overwrites the __new__ method and then calls type.__new__(cls) internally.
I don't want to actually call type.__new__, so I want to mock out type. Of course, I can't patch __builtin__.type because it breaks object construction within the test.
So, I really want to limit mocking type within the module under test. Is this possible?
Yes. You patch as close to where you're going to call your function as possible, for just this sort of reason. So, in your test case, patch type only around the function (or whatever callable) under test.
The documentation for patch has plenty of examples for doing this if you'd like to peruse them.
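For illustration, a rough sketch of what that can look like. Here my_module and MyMeta are placeholders for your module and metaclass, the stand-in for type is a plain class so its __new__ is easy to inspect, and it assumes MyMeta.__new__ returns whatever type.__new__ gives it. Recent versions of mock add missing builtins to the patched module automatically; older ones need create=True:

from unittest import mock
import my_module  # hypothetical module that defines the metaclass under test

class FakeType:
    """Stand-in for the builtin type, visible only inside my_module while patched."""
    __new__ = mock.Mock(return_value=mock.sentinel.instance)

def test_meta_new_does_not_call_real_type():
    with mock.patch("my_module.type", FakeType):
        result = my_module.MyMeta("Widget", (), {})  # hypothetical metaclass usage

    assert result is mock.sentinel.instance
    FakeType.__new__.assert_called_once()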
Cheers.