How to compare C++ and Python outputs? - python

I am trying to reproduce some functionality from Python in C++. It is an involved numerical method with a bunch of subfunctions.
Is there a nice way to compare the values of the Python functions with the C++ functions that I am writing and that should mirror them?
Could someone paste some code or give a mini tutorial?
Or some pointers or references?
Many thanks!
Artabalt

Testing.
Write tests for the functionality in Python (unless they already exist; Python code is usually tested).
Port the tests to C++, then make your C++ functions pass them. Ideally, make the tests a target in your makefile and run them whenever you can.
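For example, a minimal sketch of such a Python test (the module name mymethod, the function solve and the expected values are made-up placeholders, not taken from the question):

import unittest
from mymethod import solve  # hypothetical module and function

class SolveTests(unittest.TestCase):
    def test_known_value(self):
        # compare against a reference value with a tolerance, since
        # floating-point results rarely match digit for digit
        self.assertAlmostEqual(solve(1.0), 2.718281828, places=6)

    def test_border_case(self):
        self.assertAlmostEqual(solve(0.0), 1.0, places=6)

if __name__ == '__main__':
    unittest.main()

The same cases can then be ported to C++ (with plain asserts or a framework such as Google Test) so both implementations are checked against the same reference values.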
Edit:
You can test with random inputs; whether that's a good idea depends on your particular case.
The rule of thumb is to cover each border case with a couple of examples (you can use code coverage to see whether there's a case you're missing).
As an example, at my work I use an integration test that requires a lot of components. We simulate a betting process (the whole process excluding the user interface) at a given point in time, with a hardcoded server response, and we simulate the hardware printer by redirecting its output to a bmp.
Now, you can't test everything. I can't test every valid combination of numbers (6 numbers from 0 to 36?), in every possible order. Neither can I test every invalid combination (infinite?). You test a couple. The meaningful ones.
So we always give it the same numbers. The test is that it always has to produce the same bmp (it has to render the fonts the same way, contain the same texts, the same betting date, etc.).
Now, the level of automation you can get when generating these tests depends on your code and your application.
I don't know your particular case, but I would start by creating a little program (the simpler, the better) that uses your library: one implementation in Python, one implementation in C++. If you were testing a string library, for example, you would write a small program that deletes a letter inside a string, appends some text, erases every other letter, etc.
Then you automate for a couple of test cases:
cat largetext.txt | ./my_python_program > output.python_program.txt
cat largetext.txt | ./my_cpp_program > output.cpp_program.txt
diff output.python_program.txt output.cpp_program.txt || echo "ERROR"
The ideal case is that the tests get run every time you compile (this can be done if the code is simple; if not, you can run the tests at the end of the day, for example).
This doesn't guarantee the programs are the same (you can't test every possible input), but it gives you some confidence. If you see something is not right, you first add it to your tests, then make it fail, then fix the code until it passes.
Regards.

Related

Run Unittests in Python with highlighting of Keywords in Terminal (determined during the run)

I'm administering quite a large number of Python unit tests, and since the stack traces are sometimes weirdly long, I had the thought of optimizing the output so that I can spot the file I'm interested in faster.
Currently I am running tests with ack:
python3 unittests.py | ack --passthru 'GraphTests.py|Versioning/Testing/Tests.py'
This works as I intended, but as the number of tests keeps growing and I want to keep things dynamic, I'd like to read the classes from whatever I've set in the testing suite.
suite.addTest(loader.loadTestsFromModule(UC3))
What would be the best way to accomplish this?
My first thought was to split the unit tests up into two files, one loader and one caller. The loader adds all the unit tests as before and executes caller.py through ack, passing along the list of files.
Is that a reasonable approach? Are there better ways? I don't think it is possible to fill the ack patterns in after I've executed the line.
There's also the idea of just piping the results into a file that I read afterwards, but from my experience so far I'm not sure whether that will work as planned rather than just causing extra trouble. (I use an experimental unittest framework that adds colouring during the execution of unit tests using format characters like '\033[91m' - see: https://stackoverflow.com/questions/15580303/python-output-complex-line-with-floats-colored-by-value)
Is there an option I don't see?
Edit (1): I also forgot to add: dropping into the debugger is a different experience - with ack it doesn't seem to work properly anymore.
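A hypothetical sketch of the loader idea (UC3 and caller.py come from the question, everything else is made up): collect the test modules in one place, build the suite from them, and derive the ack pattern from the module file names.

import inspect
import os
import unittest

import UC3  # placeholder for one of your test modules

modules = [UC3]
loader = unittest.TestLoader()
suite = unittest.TestSuite()
for module in modules:
    suite.addTest(loader.loadTestsFromModule(module))

pattern = '|'.join(os.path.basename(inspect.getfile(m)) for m in modules)
print(pattern)  # feed this to: python3 caller.py | ack --passthru "<pattern>"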

Is there an easy way to have "checkpoints" in an extended python script?

To preface my question let me give a bit of context: I'm currently working on a data pipeline that has a number of different steps. Each step can go wrong and many take some time (not a huge amount, but on the order of minutes).
For this reason the pipeline is currently heavily supervised by humans. An analyst goes through each step, running the python code in a Jupyter notebook and upon experiencing problems will make minor code adjustments inline and repeat that section.
In the long run, the goal here is to have zero human intervention. However, in the shorter term we'd like to make this process more seamless. The simplest way to do that seems to be to split each section into its own script and have a parent script that runs each part and verifies the output. However, we also need the ability to rerun a file with an identical setup if it fails.
For example:
run a --> ✅
run b --> ✅ (b relies on some data produced by a)
run c --> ❌ (c relies on data produced by a and b)
// make some changes to c
run c --> ✅ (c should run in an identical state to its original run)
The most obvious way to do this would be to write the output of each script to a file and load all of those files in the next script. This would work, but seems a bit inelegant. A database seems another valid option, but a lot of the data doesn't fit cleanly into a db format.
Does anyone have any suggestions for some ways to achieve what I'm looking for? If anything is unclear I'm also more than happy to clarify any points!
You could create an object that basically maintains the state after each step and use pickle to serialize that object to a file.
Then it's up to your python script to unpickle that file and then decide which step it needs to start from based on the state.
https://wiki.python.org/moin/UsingPickle
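A rough sketch of that idea (the PipelineState class and the step functions are made up for illustration):

import os
import pickle

STATE_FILE = 'pipeline_state.pkl'

class PipelineState(object):
    def __init__(self):
        self.completed = []   # names of steps that finished
        self.data = {}        # intermediate results keyed by step name

def load_state():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE, 'rb') as f:
            return pickle.load(f)
    return PipelineState()

def save_state(state):
    with open(STATE_FILE, 'wb') as f:
        pickle.dump(state, f)

def run_a(data):  # stand-ins for the real pipeline steps
    return 'output of a'

def run_b(data):
    return 'output of b, built on %s' % data['a']

def run_c(data):
    return 'output of c'

state = load_state()
for name, step in [('a', run_a), ('b', run_b), ('c', run_c)]:
    if name in state.completed:
        continue              # already done in a previous run, skip it
    state.data[name] = step(state.data)
    state.completed.append(name)
    save_state(state)         # checkpoint after every successful step

If step c fails, the next run skips a and b and retries c with exactly the state it saw the first time.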
Queues and pipelines work very nicely with each other (architecturally speaking). One of the nicer bits is the fact that slower stages can get more workers than faster stages, allowing the pipeline to be optimized based on the workload.
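As a purely illustrative sketch of that point (the doubling "work" is a stand-in), a slow stage can simply be given more worker threads reading from the same queue:

import queue
import threading

def worker(in_q, out_q):
    while True:
        item = in_q.get()
        if item is None:          # sentinel: shut this worker down
            break
        out_q.put(item * 2)       # stand-in for the real (slow) work

stage_in, stage_out = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(stage_in, stage_out)) for _ in range(3)]
for w in workers:
    w.start()

for item in range(10):
    stage_in.put(item)
for _ in workers:                 # one sentinel per worker
    stage_in.put(None)
for w in workers:
    w.join()

results = [stage_out.get() for _ in range(10)]
print(sorted(results))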

Time trial version of a program

I want to create a trial version of a program for my customer. I want to give him/her some time to test the program (7 days in this case).
I have this code in the application (in a *.py file):
import os
from datetime import datetime

if os.path.isfile('some_chars.txt') or datetime.now() < datetime.strptime('30-8-2015', '%d-%m-%Y'):
    pass  # DO WHAT application HAS TO DO
else:
    print('TRIAL EXPIRED')
    quit()
I'm curious whether this approach is enough for a common customer or whether I have to change it. The thing is that the application has to find a file whose name is, let's say, 'some_chars.txt'. If the file is found, the application works as it should; if not, it returns the text 'Trial version expired'.
So the main question is - is this enough for a common customer? Can it be found out somehow, or is it compiled to machine code so he would have to disassemble it?
EDIT: I forgot to mention a very important thing: I'm using py2exe to make an executable file (main) along with the necessary files and folders.
Of course, it all depends on the target audience you're aiming at: there are cases where security is a serious concern (those involve lots of money, so it's not our case).
Let's take an example:
Have the program read plain data from a file (or the registry, ...), e.g. the date: the program parses the date, does a comparison and, depending on the trial period, closes or lets the user go on.
Have everything from the previous step, but the data is not stored in plain text; it is encrypted (e.g. 1 is added to every char in the data so it is not immediately readable - see the toy sketch after this list).
Use some well-known encryption algorithm (which would make the data unreadable to the user).
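As a toy sketch of the second option (this is obfuscation, not real encryption; the shift-by-one scheme is just the example from above):

def encode(text):
    return ''.join(chr(ord(c) + 1) for c in text)

def decode(text):
    return ''.join(chr(ord(c) - 1) for c in text)

stored = encode('30-8-2015')   # what you would write to the file
print(stored)                  # not immediately readable
print(decode(stored))          # back to '30-8-2015'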
But, no matter which method you choose, it's just a matter of time until it is broken.
A "hard to beat" way would be to have an existing server the client could connect to and "secretly talk" with (I'm talking about an SSL connection anyway), even for the trial period.
"Hiding the obvious info" (delivering a "compiled" .py script) is no longer the way (the most common Google search will point to a Python "decompiler").
Python is interpreted, so all they have to do is look at the source code to see the time-limiting section.
There are some options for turning a Python script into an executable. I would try that, and not use any external file to set the date; keep it in the script.

Unit testing a script that opens a file

I've written a script that opens up a file, reads the content and does some operations and calculations and stores them in sets and dictionaries.
How would I write a unit test for such a thing? My questions specifically are:
Would I test that the file opened?
The file is huge (it's the unix dictionary file). How would I unit test the calculations? Do I literally have to manually calculate everything and test that the result is right? I have a feeling that this defeats the whole purpose of unit testing. I'm not taking any input through stdin.
That's not what unit-testing is about!
Your file doesn't represent a UNIT, so no, you don't test the file or WITH the file!
Your unit tests should test every single one of your functions/methods that deals with (a) file processing and (b) calculations.
It's not uncommon for your unit tests to exceed the lines of code of the units under test.
Unit testing means (this is neither a complete nor a by-the-book definition):
minimalistic/atomic - you split your units down to the most basic/simple unit possible; a unit is normally a callable (method, function, callable object)
separation of concerns - you test ONE and only ONE thing in every single test; if you want to test different conditions of a single unit, you write different tests
determinism - you give the unit something to process, knowing beforehand what its result SHOULD be
if your unit under test needs a specific environment, you create a fixture/test setup/mock-up
unit tests are (as a rule of thumb) blazingly fast! if they're slow, check whether you violated another point from above
if you need to test something which violates something from above, you may have taken the next step in testing, towards integration tests
you may use unit-test frameworks for things that are not unit tests, but don't call it a unit test just because you used the unittest framework
This guy (Gary Bernhardt) has some interesting practical examples of what testing and unit testing mean.
Update for some clarifications:
"1. Would I test that the file opened?"
Well, you could do that, but what would be the "UNIT" for it? Keep in mind that a test has just two outcomes: pass and fail. If your test fails, it should (ideally must) have only one reason for that: your unit (= function) sucks! But in this case your test can fail because:
* the file doesn't exist
* is locked
* is corrupted
* no file-handles left
* out of memory (big file)
* moon phase
and so on.
So what would a failing (or passing) "unit" test say about your unit? You don't test your unit alone, but the whole surrounding environment with it. That's more of a system test!
If you would nonetheless like to test for successful file opening, you should at least mock the file.
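For example, a sketch using unittest.mock (assuming Python 3; load_words is a made-up stand-in for the code under test):

import unittest
from unittest import mock

def load_words(path):
    with open(path) as f:
        return f.read().split()

class LoadWordsTest(unittest.TestCase):
    def test_reads_words(self):
        fake_file = mock.mock_open(read_data='apple banana\ncherry\n')
        with mock.patch('builtins.open', fake_file):
            self.assertEqual(load_words('ignored.txt'),
                             ['apple', 'banana', 'cherry'])

if __name__ == '__main__':
    unittest.main()

The test never touches the real (huge) dictionary file, so it stays fast and fails only if load_words itself is wrong.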
"2 ... How would I unit test the calculations? Do I literally have to manually calculate everything and test that the result is right?"
No. You would write tests for the corner cases and the regular cases and check the expected outcome against the processed one. The number of tests needed depends on the complexity of your calculations and the exceptions to the rule.
e.g.:
def test_negative_factor(self):
    # hypothetical: calculate() and expected_result stand in for your own code
    self.assertEqual(calculate(-1), expected_result)

def test_discontinuity(self):
    # the calculation should raise if x equals the undefined value
    with self.assertRaises(ValueError):
        calculate(undefined_value)
I hope I made myself clearer!
You should refactor your code to be unit-testable. Off the top of my head, that would mean:
Extract the file-opening functionality into a separate unit. Make that new unit receive the file name and return a stream of the contents.
Make your main unit receive a stream and read it, instead of opening a file and reading it.
Write a unit test for your main (calculation) unit. You would need to mock a stream, e.g. built from a dictionary. Write several test cases, each time providing your unit with a different stream, and make sure your unit calculates the correct data for each input (see the sketch after this list).
Get your code coverage as close to 100% as you can. Use nosetests for coverage.
Finally, write a test for your 'stream provider'. Feed it with several files (store them in your test folder), and make sure your stream provider reads them correctly.
Get the second unit test coverage as close to 100% as you can.
Now, and only now, commit your code.
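A sketch of that separation (the function names and the letter-counting calculation are illustrative, not from the question):

import io

def read_stream(filename):            # the thin "stream provider" unit
    return open(filename)

def count_letters(stream):            # the calculation unit under test
    counts = {}
    for line in stream:
        for ch in line.strip():
            counts[ch] = counts.get(ch, 0) + 1
    return counts

# In a unit test, no real file is needed at all:
fake = io.StringIO('abc\nab\n')
assert count_letters(fake) == {'a': 2, 'b': 2, 'c': 1}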
You haven't explained what the calculations are, but I guess your program should be able to work with a subset of the big file as well. If that is the case, write a unit test which opens a small file, does the calculations and produces some result that you can verify is correct.

Testing full program by comparing output file to reference file: what's it called, and is there a package for it?

I have a completely non-interactive python program that takes some command-line options and input files and produces output files. It can be fairly easily tested by choosing simple cases and writing the input and expected output files by hand, then running the program on the input files and comparing output files to the expected ones.
1) What's the name for this type of testing?
2) Is there a python package to do this type of testing?
It's not difficult to set up by hand in the most basic form, and I did that already. But then I ran into cases like output files containing the date and other information that can legitimately change between the runs - I considered writing something that would let me specify which sections of the reference files should be allowed to be different and still have the test pass, and realized I might be getting into "reinventing the wheel" territory.
(I rewrote a good part of unittest functionality before I caught myself last time this happened...)
I guess you're referring to a form of system testing.
No package would know which parts can legitimately change. My suggestion is to mock out the sections of code that result in the changes so that you can be sure that the output is always the same - you can use tools like Mock for that. Comparing two files is pretty straightforward, just dump each to a string and compare strings.
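A rough sketch of that kind of comparison, with a made-up rule for masking a date line (adapt the regular expression to whatever legitimately changes in your output):

import re

def normalize(text):
    # replace volatile parts with a fixed token before comparing
    return re.sub(r'^Generated on .*$', 'Generated on <DATE>', text, flags=re.M)

def files_match(output_path, reference_path):
    with open(output_path) as out, open(reference_path) as ref:
        return normalize(out.read()) == normalize(ref.read())

# in a test:
# assert files_match('output.txt', 'expected_output.txt')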
Functional testing. Or regression testing, if that is its purpose. Or code coverage, if you structure your data to cover all code paths.
