I'm using the flake8-pytest-style plugin and it flags a certain test as violating PT012. This is about having too much logic in the raises() statement.
The code in question is this:
def test_bad_python_version(capsys) -> None:
    import platform
    from quendor.__main__ import main

    with pytest.raises(SystemExit) as pytest_wrapped_e, mock.patch.object(
        platform,
        "python_version",
    ) as v_info:
        v_info.return_value = "3.5"
        main()
        terminal_text = capsys.readouterr()
        expect(terminal_text.err).to(contain("Quendor requires Python 3.7"))
    expect(pytest_wrapped_e.type).to(equal(SystemExit))
    expect(pytest_wrapped_e.value.code).to(equal(1))
Basically this is testing the following code:

def main() -> int:
    if platform.python_version() < "3.7":
        sys.stderr.write("\nQuendor requires Python 3.7 or later.\n")
        sys.stderr.write(f"Your current version is {platform.python_version()}\n\n")
        sys.exit(1)
What I do is just pass in a version of Python that is less than the required one and make sure the error appears as expected. The test itself works perfectly fine. (I realize it's questionable whether this should be a unit test at all, since it's really testing an aspect of Python more than my own code.)
Clearly the lint check is suggesting that my test is a little messy and I can certainly understand that. But it's not clear from the above referenced page what I'm supposed to do about it.
I do realize I could just disable the quality check for this particular test, but I'm trying to craft as good Python code as I can, particularly around tests. And I'm at a loss as to how to refactor this code to meet the criteria.
I know I can create some other test helper function and then have that function called from the raises block. But that strikes me as being less clear overall since now you have to look in two places in order to see what the test is doing.
the lint error is a very good one! in fact, because the lint error is not followed, your test has two lines of unreachable code (!) (the two capsys-related lines) because main() always raises
the lint is suggesting that you have only one simple statement in a raises() block -- the naive refactor from your existing code is:
with mock.patch.object(
    platform,
    "python_version",
    return_value="3.5",
):
    with pytest.raises(SystemExit) as pytest_wrapped_e:
        main()

terminal_text = capsys.readouterr()
expect(terminal_text.err).to(contain("Quendor requires Python 3.7"))
expect(pytest_wrapped_e.type).to(equal(SystemExit))
expect(pytest_wrapped_e.value.code).to(equal(1))
an aside: you should never use platform.python_version() for version comparisons, as it produces incorrect results for python 3.10 (string comparison is lexicographic, so "3.10" sorts before "3.7") -- more on that and a linter for it here
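a minimal sketch of the pitfall and the usual fix (comparing sys.version_info tuples instead of version strings):

import sys

# string comparison is lexicographic, so "3.10" sorts before "3.7":
assert "3.10" < "3.7"  # True -- a 3.10 interpreter would wrongly fail the check

# comparing tuples avoids the problem:
if sys.version_info < (3, 7):
    raise SystemExit("Python 3.7 or later is required")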
When talking about pytest we know two things:
When a test passes, no output is shown by default.
Sometimes assertion failures can have very cryptic messages.
I took a course that solved this by using print to clarify desired outputs and running pytest as pytest -v -s. I think it is a great solution.
Another developer in my company thinks that test code should be as free of "side effects" as possible (and considers prints a side effect). He suggests outputting to a file, which I think is not good practice. (To me, that is an undesirable side effect.)
So I would like to hear about this from other developers.
How do you address the two points given at the beginning, and do you use prints in your tests?
As someone already pointed out you can provide your own assert message:
def test_something():
    i = 2
    assert i == 1, "i should be equal to one"
There should really be no difference between using assert messages and prints, except that the assert message appears in the pytest failure report itself, while stdout output is shown separately:
In this case 0-9 would be printed in the captured-output section of the pytest report:

def test_something():
    i = 2
    for i in range(10):
        print(i)
    assert i == 1
Logging everything to a file would definitely make working with pytest harder, and would be a pain to debug if your tests fail in CI.
If you need descriptive messages I would prefer using assert messages and, maybe, prints for debug information.
Using print() in your tests is not a good solution; you need to see the data in the CLI or on a pipeline.
For assertions, you can provide a custom message that is shown when the assertion fails, or when raising an exception.
here is a basic tutorial for this
https://docs.pytest.org/en/7.1.x/how-to/assert.html
For general test steps, the best way to get information out is logging, at the appropriate level:

import logging

logger = logging.getLogger(__name__)

logger.info('what info you want to share')
logger.error('what info you want to share')
logger.debug('what info you want to share')
for more info you can check this
https://docs.python.org/3/howto/logging.html
My python scripts often contain "executable code" (functions, classes, &c) in the first part of the file and "test code" (interactive experiments) at the end.
I want python, py_compile, pylint &c to completely ignore the experimental stuff at the end.
I am looking for something like #if 0 for cpp.
How can this be done?
Here are some ideas and the reasons they are bad:
sys.exit(0): works for python but not py_compile and pylint
put all experimental code under a def test(): I can no longer copy/paste the code into a python REPL because it has non-trivial indent
put all experimental code between lines with """: emacs no longer indents and fontifies the code properly
comment and uncomment the code all the time: I am too lazy (yes, this is a single key press, but I have to remember to do that!)
put the test code into a separate file: I want to keep the related stuff together
PS. My IDE is Emacs and my python interpreter is pyspark.
Use ipython rather than python for your REPL. It has better code completion and introspection, and when you paste indented code it can automatically "de-indent" it.
Thus you can put your experimental code in a test function and then paste in parts without worrying about having to de-indent your code.
If you are pasting large blocks that can be considered individual blocks then you will need to use the %paste or %cpaste magics.
eg.
for i in range(3):
    i *= 2
# with the following blank line, this is a complete block

print(i)
With a normal paste:
In [1]: for i in range(3):
   ...:     i *= 2
   ...:

In [2]: print(i)
4
Using %paste
In [3]: %paste
for i in range(3):
    i *= 2
    print(i)
## -- End pasted text --
0
2
4

In [4]:
PySpark and IPython
It is also possible to launch PySpark in IPython, the enhanced Python interpreter. PySpark works with IPython 1.0.0 and later. To use IPython, set the IPYTHON variable to 1 when running bin/pyspark:
$ IPYTHON=1 ./bin/pyspark
Unfortunately, there is no widely (or any) standard describing what you are talking about, so getting a bunch of python specific things to work like this will be difficult.
However, you could wrap these commands in such a way that they only read until a signifier. For example (assuming you are on a unix system):
cat $file | sed '/exit(0)/q' | sed '/exit(0)/d'
The command will read until 'exit(0)' is found. You could pipe this into your checkers, or create a temp file that your checkers read. You could create wrapper executable files on your path that may work with your editors.
Windows may be able to use a similar technique.
I might advise a different approach. Separate files might be best. You might explore iPython notebooks as a possible solution, but I'm not sure exactly what your use case is.
Follow something like option 2.
I usually put experimental code in a main function:

def main():
    ...  # experimental code goes here

Then if you want to execute the experimental code, just call it:

main()
With python-mode.el mark arbitrary chunks as section - for example via py-sectionize-region.
Then call py-execute-section.
Updated after comment:
python-mode.el is delivered by MELPA:

M-x list-packages RET

Look for python-mode - the built-in python.el provides 'python, while python-mode.el provides 'python-mode.
Development just moved here: https://gitlab.com/python-mode-devs/python-mode
I think the standard ('Pythonic') way to deal with this is to do it like so:
class MyClass(object):
    ...

def my_function():
    ...

if __name__ == '__main__':
    # testing code here
    ...
Edit after your comment
I don't think what you want is possible using a plain Python interpreter. You could have a look at the IEP Python editor (website, bitbucket): it supports something like Matlab's cell mode, where a cell can be defined with a double comment character (##):
## main code
class MyClass(object):
    ...

def my_function():
    ...

## testing code
do_some_testing_please()
All code from a ##-beginning line until either the next such line or end-of-file constitutes a single cell.
Whenever the cursor is within a particular cell and you strike some hotkey (default Ctrl+Enter), the code within that cell is executed in the currently running interpreter. An additional feature of IEP is that selected code can be executed with F9; a pretty standard feature but the nice thing here is that IEP will smartly deal with whitespace, so just selecting and pasting stuff from inside a method will automatically work.
I suggest you use a proper version control system to keep the "real" and the "experimental" parts separated.
For example, using Git, you could only include the real code without the experimental parts in your commits (using add -p), and then temporarily stash the experimental parts for running your various tools.
You could also keep the experimental parts in their own branch which you then rebase on top of the non-experimental parts when you need them.
Another possibility is to put tests as doctests into the docstrings of your code, which admittedly is only practical for simpler cases.
This way, they are only treated as executable code by the doctest module, but as comments otherwise.
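For example, a minimal doctest sketch (the function itself is just for illustration):

def double(x):
    """Return x times two.

    >>> double(2)
    4
    >>> double(-3)
    -6
    """
    return x * 2

if __name__ == '__main__':
    import doctest
    doctest.testmod()  # runs the examples embedded in the docstrings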
Is there a neat way to inject failures in a Python script? I'd like to avoid sprinkling the source code with stuff like:
failure_ABC = True
failure_XYZ = True

def inject_failure_ABC():
    raise Exception('ha! a fake error')

def inject_failure_XYZ():
    # delete some critical file
    pass

# some real code
if failure_ABC:
    inject_failure_ABC()
# some more real code
if failure_XYZ:
    inject_failure_XYZ()
# even more real code
Edit:
I have the following idea: insert "failure points" as specially-crafted comments. Then write a simple parser that is called before the Python interpreter and produces the actual instrumented Python script with the failure code inserted. E.g.:
#!/usr/bin/parser_script_producing_actual_code_and_calls python
# some real code
# FAIL_123
if foo():
    # FAIL_ABC
    execute_some_real_code()
else:
    # FAIL_XYZ
    execute_some_other_real_code()
Anything starting with FAIL_ is considered as a failure point by the script, and depending on a configuration file the failure is enabled/disabled. What do you think?
You could use mocking libraries, for example unittest.mock; many third-party ones exist as well. You can then mock some object used by your code such that it throws your exception or behaves in whatever way you want it to.
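A minimal sketch with unittest.mock (the myapp module and its functions are hypothetical):

from unittest import mock

import myapp  # hypothetical module under test

def test_handles_disk_error():
    # make a dependency raise instead of sprinkling failure flags through the code
    with mock.patch.object(myapp, "write_file", side_effect=OSError("fake disk error")):
        result = myapp.save_report("report.txt")
    assert result is False  # the code under test is expected to handle the failure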
When testing error handling, the best approach is to isolate the code that can throw errors in a new method which you can override in a test:
class ToTest:
    def foo(self):
        try:
            self.bar()  # We want to test the error handling in foo()
        except Exception:
            ...  # error handling under test

    def bar(self):
        ...  # production code
In your test case, you can extend ToTest and override bar() with code that throws the exceptions that you want to test.
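For instance, a minimal sketch of such a test (the injected exception type is arbitrary):

class FailingToTest(ToTest):
    def bar(self):
        raise RuntimeError('injected failure')  # simulate the error bar() might raise

def test_foo_handles_bar_failure():
    FailingToTest().foo()  # should not propagate: foo() handles the error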
EDIT You should really consider splitting large methods into smaller ones. It will make the code easier to test, to understand and to maintain. Have a look at Test Driven Development for some ideas how to change your development process.
Regarding your idea to use "Failure Comments". This looks like a good solution. There is one small problem: You will have to write your own Python parser because Python doesn't keep comments when it produces bytecode.
So you can either spend a couple of weeks to write this or a couple of weeks to make your code easier to test.
There is one difference, though: If you don't go all the way, the parser will be useless. Also, the time spent won't have improved one bit of your code. Most of the effort will go into the parser and tools. So after all that time, you will still have to improve the code, add failure comments and write the tests.
With refactoring the code, you can stop whenever you want but the time spent so far will be meaningful and not wasted. Your code will start to get better with the first change you make and it will keep improving.
Writing a complex tool takes time, and it will have its own bugs which you need to fix or work around. None of this will improve your situation in the short term, and you have no guarantee that it will in the long term.
If you only want to stop your code at some point and fall back to the interactive interpreter, you can use:

assert 1 == 0

But this only works if you do not run python with -O.
Edit
Actually, my first answer was too quick and didn't really address what you want to do, sorry.
Maybe your code becomes more readable if you do the parameterization through parameters, not through variable/function suffixes. Something like:
failure = {"ABC": False, "XYZ": False}

# do something, maybe set failure

def inject_failure(failure):
    if not any(failure.values()):
        return
    if failure["ABC"]:
        raise Exception('ha! a fake error')
    elif failure["XYZ"]:
        # delete some critical file
        pass

inject_failure(failure)
Part of my assignment is to create tests for each function. This one's kind of long, and I am confused. I put a link below this function so you can see what the whole thing looks like.
The full code is extremely long, so here is just the function:
def load_profiles(profiles_file, person_to_friends, person_to_networks):
    '''(file, dict of {str : list of strs}, dict of {str : list of strs}) -> NoneType
    Update person to friends and person to networks dictionaries to include
    the data in the open file.'''
    # update the person_to_friends dict
    update_p_to_f(profiles_file, person_to_friends)
    # update the person_to_networks dict
    update_p_to_n(profiles_file, person_to_networks)
Here's the whole code: http://shrib.com/8EF4E8Z3. I tested it through the main block and it works.
This is the text file (profiles_file) we were provided that we are reading the data from:
http://shrib.com/zI61fmNP
How do I run test cases for this through nose, and what kinds of test outcomes are there? Or am I not being specific enough?
import nose
import a3_functions

def test_load_profiles_

if __name__ == '__main__':
    nose.runmodule()
I got that far, but then I didn't know what I could test for the function.
Let's assume the code you wrote so far is in a module called "mycode".
Write a new module called testmycode. (i.e. create a python file called testmycode.py)
In there, import the module you want to test (mycode)
Write a function called testupdate().
In that function, first write a text file (with file.write) that you expect to be valid. Then let update_p_to_f update it. Verify that it did what you expect, using assert. This is a test for reading a text file.
Then you can write a second function called testupdate_write(), where you let your code write to a file -- then verify that what it wrote is correct.
To run the tests, use (on the commandline)
nosetests -sx testmycode.py
Which will load testmycode and run all functions it finds there that start with test.
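A minimal sketch of what testmycode.py might look like (the profiles file format and the expected dictionary contents are assumptions; adjust them to the assignment's actual format):

# testmycode.py -- run with: nosetests -sx testmycode.py
import mycode  # the module under test

def testupdate():
    # write a small profiles file we expect to be valid (format assumed)
    with open('profiles.txt', 'w') as f:
        f.write('Smith, John\nDoe, Jane\n\n')

    person_to_friends = {}
    with open('profiles.txt') as f:
        mycode.update_p_to_f(f, person_to_friends)

    # the expected result is an assumption about the file format
    assert person_to_friends == {'Smith, John': ['Doe, Jane']}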
You probably want to test both the overall output of your program is correct, and that individual parts of your program are correct.
#j13r has already covered how to test the overall correctness of your program for a full run.
You mention that you have four helper functions. You can write tests for these separately.
Testing smaller pieces of your code is helpful because you can test each piece in more numerous and more specific ways than if you only test the whole thing.
The unittest module is a framework for performing tests.
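For instance, a bare-bones unittest skeleton for one of the helpers (the module and helper names come from the question; the fixture file and expected key are assumptions):

import unittest
import a3_functions

class TestUpdateHelpers(unittest.TestCase):
    def test_update_p_to_f_adds_person(self):
        person_to_friends = {}
        with open('profiles.txt') as profiles_file:  # fixture file, assumed to exist
            a3_functions.update_p_to_f(profiles_file, person_to_friends)
        self.assertIn('Smith, John', person_to_friends)  # expected key is an assumption

if __name__ == '__main__':
    unittest.main()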
While browsing some code, I came across this line:
if False: #shedskin
I understand that Shedskin is a kind of Python -> C++ compiler, but I can't understand that line.
Shouldn't if False: never execute? What's going on here?
For context:
This is the whole block:
if False:  # shedskin
    AStar(SQ_MapHandler([1], 1, 1)).findPath(SQ_Location(1, 1), SQ_Location(1, 1))
More context is on Google Code (scroll down all the way).
It won't execute, because it isn't supposed to. The if False: is there to intentionally prevent the next line from executing, because that code's only purpose is seemingly to help Shed Skin infer type information about the argument to the AStar() function.
You can see another example of this in httplib:
# Useless stuff to help type info
if False:
    conn._set_tunnel("example.com")
It will never get executed. It's one way to temporarily disable part of the code.
Theoretically, it could get executed (in Python 2, True and False were ordinary names that could be rebound):

True, False = False, True
if False: print 'foo'
But typically this will be used to temporarily disable a code path.
You are correct in assuming that this will never evaluate to true. This is sometimes done when the programmer has a lot of debugging code but does not want to remove the debugging code in a release, so they just put if False: above it all.
not enough reputation to comment yet apparently, but tim stone's answer is correct. suppose we have a function like this:
def blah(a, b):
    return a + b
now in order to perform type inference, there has to be at least one call to blah, or it becomes impossible to know the types of the arguments at compile-time.
for a stand-alone program, this is not a problem, since everything that has to be compiled for it to run is called indirectly from somewhere..
for an extension module, calls can come from the 'outside', so sometimes we have to add a 'fake' call to a function for type inference to become possible.. hence the 'if False'.
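a minimal sketch of the pattern as it might appear in such a module (blah is the example from above):

def blah(a, b):
    return a + b

if False:  # shedskin: fake call so argument types can be inferred
    blah(1, 2)  # never runs, but tells the compiler a and b are ints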
in the shedskin example set there are a few programs that are compiled as extension modules, in order to be combined with for example pygame or multiprocessing.