For a specific program I'm working in, we need to evaluate some code, then run a unittest, and then depending on whether or not the test failed, do A or B.
But the usual self.assertEqual(...) seems to display the results (fail, errors, success) instead of saving them somewhere, so I can't access that result.
I have been checking the modules of unittest for days, but I can't figure out where the magic happens, or whether there is a variable I can check to learn the result of the test without having to read the screen (making the program read the output and search for the words "error" or "failed" doesn't sound like a good solution).
After some days of research, I sent an email to help@python.org and got the perfect solution to my issue.
The answer I got was:
I suspect that the reason that you're having trouble getting unittest
to do that is that that's not the sort of thing that unittest was
written to do. A hint that that's the case seems to me to be that over
at the documentation:
https://docs.python.org/3/library/unittest.html
there's a section on the command-line interface but nothing much about
using the module as an imported module.
A bit of Googling yields this recipe:
http://code.activestate.com/recipes/578866-python-unittest-obtain-the-results-of-all-the-test/
Which looks as though it might be useful to you but I can't vouch for
it and it seems to involve replacing one of the library's files.
(Replacing one of the library's files is perfectly reasonable in my
opinion. The point of Python's being open-source is that you can hack
it for yourself.)
But if I were doing what you're describing, I'd probably write my own
testing code. You could steal what you found useful from unittest
(kind of the inverse of changing the library in place). Or you might
find that your needs are sufficiently simple that a simple file of
testing code was sufficient.
If none of that points to a solution, let us know what you get and
I'll try to think some more.
Regards, Matt
After modifying unittest's result.py module, I'm now able to access the result of the test (True, False, or Error).
Thank you very much, Matt.
P.S. I edited my question so it was more clear and didn't have unnecessary code.
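For reference, unittest can also hand you the result object directly if you run a suite programmatically, with no need to modify any library files. A minimal sketch (the test case here is just an illustration):

```python
import unittest

class MyTest(unittest.TestCase):
    def test_equal(self):
        self.assertEqual(1 + 1, 2)

# Run the suite programmatically and inspect the TestResult object
# instead of reading the console output.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(MyTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)

passed = result.wasSuccessful()  # True if there were no failures or errors
# result.failures and result.errors hold the details of anything that went wrong
```

With `passed` in hand, the program can branch to "do A or B" depending on the outcome.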
You can use pdb to debug this issue. In the test, simply add these two lines to halt execution and begin debugging:
import pdb
pdb.set_trace()
Now, for good testing practice, you want deterministic test results; a test that fails only sometimes is not a good test. I recommend mocking the random function and using data sets that capture the errors you find.
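Mocking the random function can be done with the standard library's unittest.mock. A sketch, where the roll() function is a made-up example of code that depends on randomness:

```python
import random
import unittest
from unittest.mock import patch

def roll():
    # Hypothetical function under test: depends on randomness.
    return random.randint(1, 6)

class RollTest(unittest.TestCase):
    # Patch random.randint so the test is fully deterministic.
    @patch("random.randint", return_value=6)
    def test_roll(self, mock_randint):
        self.assertEqual(roll(), 6)
        mock_randint.assert_called_once_with(1, 6)

# Run the test programmatically and inspect the result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RollTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The test always sees the same "random" value, so it either passes or fails consistently.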
Related
So you've got some legacy code lying around in a fairly hefty project. How can you find and delete dead functions?
I've seen these two references: Find unused code and Tool to find unused functions in php project, but they seem specific to C# and PHP, respectively.
Is there a Python tool that'll help you find functions that aren't referenced anywhere else in the source code (notwithstanding reflection/etc.)?
In Python you can find unused code by using dynamic or static code analyzers. Two examples for dynamic analyzers are coverage and figleaf. They have the drawback that you have to run all possible branches of your code in order to find unused parts, but they also have the advantage that you get very reliable results.
Alternatively, you can use static code analyzers that just look at your code, but don't actually run it. They run much faster, but due to Python's dynamic nature the results may contain false positives.
Two tools in this category are pyflakes and vulture. Pyflakes finds unused imports and unused local variables. Vulture finds all kinds of unused and unreachable code. (Full disclosure: I'm the maintainer of Vulture.)
Both tools are available from the Python Package Index: https://pypi.org/.
I'm not sure if this is helpful, but you might try using coverage, figleaf, or other similar modules, which record which parts of your source code are executed as you actually run your scripts/application.
Given the fairly strict way Python code is laid out, would it be that hard to build a list of functions based on a regex looking for def function_name(...)?
Then search for each name and tot up how many times it appears in the code. It wouldn't take comments into account, but as long as you only look closely at functions with fewer than two or three occurrences...
It's a bit Spartan but it sounds like a nice sleepy-weekend task =)
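The grep-and-count idea above can be sketched in a few lines of Python. This is a rough heuristic, not a real analyzer (it ignores comments, strings, and dynamic lookups), and the toy source string here is purely illustrative:

```python
import re

# Toy source to scan; in practice, read your .py files instead.
source = '''
def used():
    return 1

def unused():
    return 2

print(used())
'''

# Every name defined with a line-initial "def".
defined = re.findall(r"^def\s+(\w+)\s*\(", source, re.MULTILINE)

# Count occurrences of each name, excluding its own definition;
# zero remaining hits suggests the function may be dead code.
counts = {}
for name in defined:
    counts[name] = len(re.findall(rf"\b{name}\b", source)) - 1
```

Names with a count of zero are candidates for manual review before deletion.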
Unless you know that your code uses reflection, as you said, I would go for a trivial grep. Don't underestimate the power of the asterisk in vim either (it searches for the word under your cursor), although it's limited to the file you're currently editing.
Another solution you could implement is to have a very good test suite (which seldom happens, unfortunately) and then wrap the routine with a deprecation warning. If you see the deprecation output, it means the routine was called, so it's still used somewhere. This works even with reflection, but of course you can never be sure you've triggered every situation in which the routine might be called.
It's not only about searching for function names; you also need to find imported packages that aren't in use.
Search the code for all imported packages (including aliases) and the functions actually used, then build a list of the specific imports from each package (for example, instead of import os, replace it with from os import listdir, getcwd, ...).
How can I check if a part of my code functions as it should without having to run my program from the start every time, which takes 10+ seconds (gathering data from the web) before it gets to the part I'm testing?
Also, debugging with print statements seems kind of tricky: I can't see whether the program is behaving as expected, so I can only add print statements for what I think the problem is.
The solution to this would be to use a debugger, but I rarely hear anybody talk about them, which seems weird, because a debugger shows what the program is actually doing instead of leaving you with assumptions about what it is doing.
Why do I never hear about this and how are you supposed to do this?
I've only used PyCharm's debugger, but I prefer programming in online clients, which don't have easy-to-use debuggers.
You need to learn to write unit tests. Read this question to start.
Read up on Python's out-of-box unittest module.
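The key idea is to separate the slow step (the web fetch) from the logic you actually want to check, so the logic can be tested instantly with canned input. A sketch, where extract_titles() and the sample HTML are made-up examples:

```python
import re

def extract_titles(html):
    # Pure parsing step: no network access needed to test it.
    return re.findall(r"<title>(.*?)</title>", html)

# Instead of re-running the whole program (including the slow
# web fetch), exercise the function directly with canned input:
sample = "<html><head><title>Hello</title></head></html>"
titles = extract_titles(sample)
```

Once the logic lives in its own function, a unittest test case can feed it fixture data and run in milliseconds.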
This may be a subjective question so I understand if it gets shut down, but this is something I've been wondering about ever since I started to learn python in a more serious way.
Is there a generally accepted 'best practice' about whether importing an additional module to accomplish a task more cleanly is better than avoiding the call and 'working around it'?
For example, I had some feedback on a script I worked on recently, and the suggestion was that I could have replaced the code below with a glob.glob() call. I avoided this at the time because it meant adding another import that seemed unnecessary to me (and the actual flow of filtering the lines just meshed with my thought process for the task).
headers = []
with open(hhresult_file) as result_fasta:
    for line in result_fasta:
        if line.startswith(">"):
            line = line.split("_")[0]
            headers.append(line.replace(">", ""))
Similarly, I decided to use an os.rename() call later in the script for moving some files rather than import shutil.
Is there a right answer here? Are there any overheads associated with calling additional modules and creating more dependencies (let's say, for instance, that the module isn't a built-in Python module) vs. writing slightly 'messier' code using modules that are already in your script?
This is quite a broad question, but I'll try to answer it succinctly.
There is no real best practice, however, it is generally a good idea to recycle code that's already been written by others. If you find a bug in the imported code, it's more beneficial than finding one in your own code because you can submit a ticket to the author and have it fixed for potentially a large group of people.
There are certainly considerations to be made when making additional imports, mostly when they are not part of the Python standard library.
Sometimes adding in a package that is a little bit too 'magical' makes code harder to understand, because it's another library or file that somebody has to look up to understand what is going on, versus just a few lines that might not be as sophisticated as the third party library, but get the job done regardless.
If you can get away with not making additional imports, you probably should, but if it would save you substantial amounts of time and headache, it's probably worth importing something that has been pre-written to deal with the problem you're facing.
It's a continual consideration that has to be made.
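The os.rename() vs. shutil example from the question is a good concrete case of this trade-off: os.rename is a thin wrapper around the OS rename call and raises OSError when source and destination are on different filesystems, while shutil.move falls back to copy-and-delete in that situation. A minimal sketch using throwaway temporary directories:

```python
import os
import shutil
import tempfile

# Set up a file in a throwaway directory (paths are illustrative).
src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "data.txt")
with open(src, "w") as f:
    f.write("example")

# os.rename would work here too, but it fails with OSError if src
# and dst live on different filesystems; shutil.move handles that
# case by copying and deleting, so it is safer when the destination
# device is unknown.
dst = os.path.join(dst_dir, "data.txt")
shutil.move(src, dst)
moved = os.path.exists(dst) and not os.path.exists(src)
```

So the extra import buys real robustness here, which is exactly the kind of "saves you headache" case the answer above describes.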
Since Python is a dynamic, interpreted language you don't have to compile your code before running it. Hence, it's very easy to simply write your code, run it, see what problems occur, and fix them. Using hotkeys or macros can make this incredibly quick.
So, because it's so easy to immediately see the output of your program and any errors that may occur, I haven't used a debugger tool yet. What situations may call for using a real debugger rather than the method I currently use?
I'd like to know before I get into a situation and get frustrated because I don't know how to fix the problem.
In 30 years of programming I've used a debugger exactly 4 times. All four times were to read the core file produced from a C program crashing to locate the traceback information that's buried in there.
I don't think debuggers help much, even in compiled languages. Many people like debuggers; there are some reasons for using them, I'm sure, or people wouldn't lavish such love and care on them.
Here's the point -- software is knowledge capture.
Yes, it does have to run. More importantly, however, software has meaning.
This is not an indictment of your use of a debugger. However, I find that the folks who rely on debugging will sometimes produce really odd-looking code and won't have a good justification for what it means. They can only say "it may be a hack, but it works."
My suggestion on debuggers is "don't bother".
"But, what if I'm totally stumped?" you ask, "should I learn the debugger then?" Totally stumped by what? The language? Python's too simple for utter befuddlement. Some library? Perhaps.
Here's what you do -- with or without a debugger.
You have the source, read it.
You write small tests to exercise the library. Using the interactive shell, if possible. [All the really good libraries seem to show their features using the interactive Python mode -- I strive for this level of tight, clear simplicity.]
You have the source, add print functions.
I use pdb for basic Python debugging. Some of the situations in which I use it:
When you have a loop iterating over 100,000 entries and want to break at a specific point, conditional breaks become really helpful.
Tracing the control flow of someone else's code.
It's always better to use a debugger than to litter the code with prints.
Normally there is more than one possible point of failure behind a bug, and not all of them are obvious at first glance. You check the obvious places, and if nothing is wrong there, you move on and add more prints. A debugger saves you time here: you don't need to add a print and run again.
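The conditional break mentioned above can be sketched like this (the loop, the index 54321, and the DEBUG flag are all hypothetical; flip DEBUG to True to actually stop):

```python
import pdb

DEBUG = False  # flip to True to actually drop into the debugger

total = 0
hit = None
for i in range(100000):
    if i == 54321:           # conditional break: only this iteration
        hit = i
        if DEBUG:
            pdb.set_trace()  # pauses here with a full interactive REPL
    total += i
```

pdb also supports conditions directly on breakpoints (the `b <line>, <condition>` command inside the debugger), which avoids editing the source at all.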
Usually when the error is buried in some function, but I don't know exactly what or where. Either I insert dozens of log.debug() calls and then have to take them back out, or just put in:
import pdb
pdb.set_trace()
and then run the program. The debugger will launch when it reaches that point and give me a full REPL to poke around in.
Any time you want to inspect the contents of variables that may have caused the error. The only way you can do that is to stop execution and take a look at the stack.
PyDev in Eclipse is a pretty good IDE if you are looking for one.
I find it very useful to drop into a debugger in a failing test case.
I add import pdb; pdb.set_trace() just before the failure point of the test. The test runs, building up a potentially quite large context (e.g. importing a database fixture or constructing an HTTP request). When the test reaches the pdb.set_trace() line, it drops into the interactive debugger and I can inspect the context in which the failure occurs with the usual pdb commands looking for clues as to the cause.
You might want to take a look at this other SO post:
Why is debugging better in an IDE?
It's directly relevant to what you're asking about.