Reproducible test framework - Python

I am looking for a test or integration framework that supports long, costly tests for correctness. The tests should only be rerun if the code affecting the test has changed.
Ideally the test framework would:
- find the code of the test,
- produce a hash of it,
- run the code and write to an output file with the hash as the name, or skip the run if that file already exists,
- provide a simple overview of which tests succeeded and which failed.
It would be OK if the test has to specify the modules and files it depends on.
Python would be ideal, but this problem may be high-level enough that other languages would work too.
Perhaps a test or build integration framework already exists that I can adapt to fit this behaviour?

Basically you need to track what the test is doing so you can check whether it has changed.
Python code can be traced with sys.settrace(tracefunc); the standard-library trace module can help with this.
But if it is not just Python code - if the tests execute other programs, read test input files etc. and you need to watch those for changes too - then you would need tracing at the operating-system level, e.g. strace, dtrace, or dtruss.
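For the pure-Python case, here is a minimal sketch of how calls could be recorded with sys.settrace; the names run_traced and called_functions are illustrative, not from any existing framework:

import sys

called_functions = set()

def tracefunc(frame, event, arg):
    # The global trace function fires on every new frame ("call" event).
    if event == "call":
        code = frame.f_code
        called_functions.add((code.co_filename, code.co_name))
    # Return None: no per-line tracing is needed, only call events.
    return None

def run_traced(test_callable):
    # Run a test while recording which functions it touches.
    called_functions.clear()
    sys.settrace(tracefunc)
    try:
        test_callable()
    finally:
        sys.settrace(None)
    return set(called_functions)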
I've created a small demo/prototype of a simple testing framework that runs only the tests whose code changed since the last run: https://gist.github.com/messa/3825eba3ad3975840400 It uses the trace module. It works this way:
- collect tests; each test is identified by name
- load test fingerprints from a JSON file (if present)
- for each test:
  - if the fingerprint matches the current bytecode of the functions listed in the fingerprint, the test is skipped
  - otherwise run the test, tracing it while it runs and recording all functions being called
  - create a test fingerprint from the function names and an MD5 hash of each recorded function's bytecode
- save the updated test fingerprints to a JSON file
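To illustrate the fingerprinting step, here is a simplified sketch of the idea (not the gist's actual code), hashing a function's compiled bytecode:

import hashlib

def function_fingerprint(func):
    # Hash the compiled bytecode plus constants, so edits to the body
    # (including changed literals) alter the fingerprint.
    code = func.__code__
    payload = code.co_code + repr(code.co_consts).encode("utf-8")
    return hashlib.md5(payload).hexdigest()

def fingerprints_match(recorded, current_functions):
    # `recorded` maps function names to hashes loaded from the JSON file;
    # the test may be skipped only if every hash still matches.
    return all(
        function_fingerprint(func) == recorded.get(name)
        for name, func in current_functions.items()
    )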
But there is one problem: it's slow. Running code while tracing it with trace.Trace is about 40x slower than without tracing. So maybe you will be better off just running all tests without tracing :) But if the tracer were implemented in C, as it is in the coverage module for example, it should be faster. (The Python trace module is not written in C.)
Maybe some other tricks could help with speed. Maybe you are interested only in whether some top-level functions changed, so you don't need to trace all function calls.
Have you considered other ways to speed up expensive tests? Like parallelization, a ramdisk (tmpfs)... For example, if you test against a database, don't use the "system" or development one, but run a special instance of the database with a lightweight configuration (no prealloc, no journal...) from tmpfs. If that is possible, of course - some tests need to run on a configuration similar to production.
Some test frameworks (or their plugins) can run only the tests that failed last time - that's different, but related, functionality.

This may not be the most efficient way to do this, but this can be done with Python's pickle module.
import pickle
At the end of your file, have it save itself as a pickle.
myfile = open('myfile.py', 'r')        # Your script
savefile = open('savefile.pkl', 'wb')  # File the script will be saved to (pickle needs binary mode)
# Any file extension can be used, but I like .pkl for "pickle"
mytext = myfile.readlines()
pickle.dump(mytext, savefile)          # Saves the list from readlines() as a pickle
myfile.close()
savefile.close()
And then at the beginning of your script (after you have pickled it once already), add the code bit that checks it against the pickle.
myfile = open('myfile.py', 'r')
savefile = open('savefile.pkl', 'rb')
mytext = myfile.readlines()
savetext = pickle.load(savefile)
myfile.close()
savefile.close()
if mytext == savetext:
    pass  # The script has not changed: skip the expensive run
else:
    pass  # The script has changed: rerun and re-save the pickle
That should work. It's a little long, but it's pure Python and should do what you're looking for.

Related

Prevent any file system usage in Python's pytest

I have a program that, for data security reasons, should never persist anything to local storage if deployed in the cloud. Instead, any input / output needs to be written to the connected (encrypted) storage instead.
To allow deployment locally as well as to multiple clouds, I am using the very useful fsspec. However, other developers are working on the project as well, and I need a way to make sure that they aren't accidentally using local File I/O methods - which may pass unit tests, but fail when deployed to the cloud.
For this, my idea is to basically mock/replace any I/O methods in pytest with ones that don't work and make the test fail. However, this is probably not straightforward to implement. I am wondering whether anyone else has had this problem as well, and maybe best practices / a library exists for this already?
During my research, I found pyfakefs, which looks like it is very close to what I am trying to do - except I don't want to simulate another file system, I want there to be no local file system at all.
Any input appreciated.
You cannot use pytest add-ons to make this secure. There will always be ways to get around it. Even if you patch everything in the standard Python library, the code can always use third-party C libraries which can't be patched from the Python side.
Even if you somehow restrict every way the Python process can write a file, it will still be able to call the OS or another process to write something.
The only ways are to run only the trusted code or to use some sandbox to run the process.
In Unix-like operating systems, the workable solution may be to create a chroot and run the program inside it.
If you're OK with just preventing files from being opened via the open function, you can patch that function in the builtins module.
import builtins
import pytest

_original_open = builtins.open

class FileSystemUsageError(Exception):
    pass

def patched_open(*args, **kwargs):
    raise FileSystemUsageError()

@pytest.fixture
def disable_fs():
    builtins.open = patched_open
    yield
    builtins.open = _original_open
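A test can then request the fixture and assert that local file access is rejected; a sketch, assuming the fixture and exception above are available (e.g. defined in a conftest.py):

import pytest

def test_code_does_not_touch_local_files(disable_fs):
    # FileSystemUsageError is the exception defined alongside the fixture above.
    with pytest.raises(FileSystemUsageError):
        open("anything.txt")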
I based this example on a pytest plugin, written by the company I currently work for, that prevents network use in pytest tests. You can see the full example here: https://github.com/best-doctor/pytest_network/blob/4e98d816fb93bcbdac4593710ff9b2d38d16134d/pytest_network.py

Python Unit Test for writing Excel file

I'm not very familiar with pytest but am trying to incorporate it into my project. I already have some tests and understand the main ideas.
But I got stuck on the test for Excel output. I have a function that builds a report and saves it to an Excel file (I use xlsxwriter to save in Excel format). It has some merged cells, different fonts and colors, but first of all I would like to be sure that the values in the cells are correct.
I would like to have a test that will automatically check content of this file to be sure that function logic isn't broken.
I'm not sure that binary comparison of generated excel file to the correct sample is a good idea (as excel format is rather complex and minor change of xlsxwriter library may make files completely different).
So I'm seeking advice on how to implement this kind of test. Has anyone had a similar experience and can offer advice?
IMHO a unit test should not touch external things (like file system, database, or network). If your test does this, it is an integration test. These usually run much slower and tend to be brittle because of the external resources.
That said, you have two options: unit test it, mocking the xlsx writing, or integration test it, reading the xlsx file back in after writing.
When you mock the xlsxwriter, you can have your mock check that it receives what should be written. This assumes that you don't want to test the actual xlsxwriter, which makes sense because it's not your code, and you usually only test your own code. This makes for a fast test.
In the other scenario you could open the Excel file with an xlsx reader and compare the written file to what is expected. This is probably best if you can avoid the file system and write the xlsx data to a memory buffer from which you can read again. If you can't do that, try using a tempdir for your test, but with that you're already getting into integration-test land. This makes for a slower, more complicated, but also more thorough test.
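For the read-back approach, a rough sketch using an in-memory buffer might look like the following; write_report stands in for your own report function, the cell values are invented for the example, and openpyxl is assumed as the reader:

import io
import openpyxl
import xlsxwriter

def write_report(stream):
    # Stand-in for the real report function: it writes to any file-like object.
    workbook = xlsxwriter.Workbook(stream, {"in_memory": True})
    sheet = workbook.add_worksheet()
    sheet.write(0, 0, "total")
    sheet.write(0, 1, 42)
    workbook.close()

def test_report_cell_values():
    buffer = io.BytesIO()
    write_report(buffer)
    buffer.seek(0)
    sheet = openpyxl.load_workbook(buffer).active
    assert sheet["A1"].value == "total"
    assert sheet["B1"].value == 42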
Personally, I'd write one integration test to see that it works in general, and then a lot of unit tests for the different things you want to write.

How can I test a procedural python script properly

I'm pretty new to Python. However, I am writing a script that loads some data from a file and generates another file. My script has several functions and it also needs two user inputs (paths) to work.
Now I am wondering if there is a way to test each function individually. Since there are no classes, I don't think I can do it with unit tests, can I?
What is the common way to test a script, if I don't want to run the whole script all the time? Someone else has to maintain the script later. Therefore, something similar to unit tests would be awesome.
Thanks for your inputs!
If you write your code in the form of functions that operate on file objects (streams) or, if the data is small enough, that accept and return strings, you can easily write tests that feed in the appropriate data and check the results. If the real data is large enough to need streams but the test data is not, use io.StringIO in the test code to adapt.
Then use the __name__=="__main__" trick to allow your unit test driver to import the file without running the user-facing script.
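A minimal sketch of that layout (the file and function names here are just examples):

# convert.py
import sys

def transform(in_stream, out_stream):
    # All real logic works on file-like objects, so tests can pass io.StringIO.
    for line in in_stream:
        out_stream.write(line.upper())

def main(in_path, out_path):
    with open(in_path) as src, open(out_path, 'w') as dst:
        transform(src, dst)

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])

# test_convert.py
import io
from convert import transform

def test_transform_uppercases_lines():
    src = io.StringIO("hello\n")
    dst = io.StringIO()
    transform(src, dst)
    assert dst.getvalue() == "HELLO\n"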

Keep persistent variables in memory between runs of Python script

Is there any way of keeping a result variable in memory so I don't have to recalculate it each time I run the beginning of my script?
I am doing a long (5-10 sec) series of the exact same operations on a data set (which I am reading from disk) every time I run my script.
This wouldn't be too much of a problem since I'm pretty good at using the interactive editor to debug my code in between runs; however sometimes the interactive capabilities just don't cut it.
I know I could write my results to a file on disk, but I'd like to avoid doing so if at all possible. This should be a solution which generates a variable the first time I run the script, and keeps it in memory until the shell itself is closed or until I explicitly tell it to fizzle out. Something like this:
# Check whether the variable has already been created this session
in_mem = var_in_memory()  # Returns pointer to var, or False if not in memory yet
if not in_mem:
    # Read data set from disk
    with open('mydata', 'r') as in_handle:
        mytext = in_handle.read()
    # Extract relevant results from data set
    mydata = parse_data(mytext)
    result = initial_operations(mydata)
    in_mem = store_persistent(result)
I have an inkling that the shelve module might be what I'm looking for here, but it looks like in order to open a shelf I would have to specify a file name for the persistent object, so I'm not sure it's quite what I'm looking for.
Any tips on getting shelve to do what I want it to do? Any alternative ideas?
You can achieve something like this by using importlib.reload (the reload builtin in Python 2) to re-execute your main script's code. You will need to write a wrapper script that imports your main script, asks it for the variable it wants to cache, caches a copy of that within the wrapper script's module scope, and then when you want (when you hit ENTER on stdin or whatever), calls importlib.reload(yourscriptmodule) and reuses the cached object so that your script can bypass the expensive computation. Here's a quick example.
wrapper.py
import importlib
import sys
import mainscript

part1Cache = None

if __name__ == "__main__":
    while True:
        if part1Cache is None:
            part1Cache = mainscript.part1()
        mainscript.part2(part1Cache)
        print("Press enter to re-run the script, CTRL-C to exit")
        sys.stdin.readline()
        importlib.reload(mainscript)
mainscript.py
def part1():
    print("part1 expensive computation running")
    return "This was expensive to compute"

def part2(value):
    print("part2 running with %s" % value)
While wrapper.py is running, you can edit mainscript.py, add new code to the part2 function and be able to run your new code against the pre-computed part1Cache.
To keep data in memory, the process must keep running. Memory belongs to the process running the script, NOT to the shell. The shell cannot hold memory for you.
So if you want to change your code and keep your process running, you'll have to reload the modules when they're changed. If any of the data in memory is an instance of a class that changes, you'll have to find a way to convert it to an instance of the new class. It's a bit of a mess. Not many languages were ever any good at this kind of hot patching (Common Lisp comes to mind), and there are a lot of chances for things to go wrong.
If you only want to persist one object (or object graph) for future sessions, the shelve module probably is overkill. Just pickle the object you care about. Do the work and save the pickle if you have no pickle-file, or load the pickle-file if you have one.
import os
import pickle

pickle_filepath = "/path/to/picklefile.pickle"

if not os.path.exists(pickle_filepath):
    # Read data set from disk
    with open('mydata', 'r') as in_handle:
        mytext = in_handle.read()
    # Extract relevant results from data set
    mydata = parse_data(mytext)
    result = initial_operations(mydata)
    # Pickle files must be opened in binary mode
    with open(pickle_filepath, 'wb') as pickle_handle:
        pickle.dump(result, pickle_handle)
else:
    with open(pickle_filepath, 'rb') as pickle_handle:
        result = pickle.load(pickle_handle)
Python's shelve is a persistence solution for pickled (serialized) objects and is file-based. The advantage is that it stores Python objects directly, meaning the API is pretty simple.
If you really want to avoid the disk, the technology you are looking for is an "in-memory database." Several alternatives exist; see this SO question: in-memory database in Python.
Weirdly, none of the earlier answers here mention simple text files. The OP says they don't like the idea, but as this is becoming a canonical for duplicates which might not have that constraint, this alternative deserves a mention. If all you need is for some text to survive between invocations of your script, save it in a regular text file.
def main():
    # Before starting, read data from the previous run
    try:
        with open('mydata.txt', encoding='utf-8') as statefile:
            data = statefile.read().rstrip('\n')
    except FileNotFoundError:
        data = "some default, or maybe nothing"
    updated_data = your_real_main(data)
    # When done, save the new data for the next run
    with open('mydata.txt', 'w', encoding='utf-8') as statefile:
        statefile.write(updated_data + '\n')
This easily extends to more complex data structures, though then you'll probably need to use a standard structured format like JSON or YAML (for serializing data with tree-like structures into text) or CSV (for a matrix of columns and rows containing text and/or numbers).
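For instance, a nested structure can be persisted with the standard json module in much the same way (a minimal sketch; the file name and default structure are arbitrary):

import json

def load_state(path='state.json'):
    # Fall back to a default structure on the first run.
    try:
        with open(path, encoding='utf-8') as statefile:
            return json.load(statefile)
    except FileNotFoundError:
        return {"runs": 0, "results": []}

def save_state(state, path='state.json'):
    with open(path, 'w', encoding='utf-8') as statefile:
        json.dump(state, statefile, indent=2)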
Ultimately, shelve and pickle are just glorified, generalized versions of the same idea; but if your needs are modest, a simple textual format has compelling benefits: you can inspect and update it in a regular text editor, read and manipulate it with ubiquitous standard tools, and easily copy and share it between different Python versions, other programming languages, version control systems, etc.
As an aside, character encoding issues are a complication which you need to plan for; but in this day and age, just use UTF-8 for all your text files.
Another caveat is that beginners are often confused about where to save the file. A common convention is to save it in the invoking user's home directory, though that obviously means multiple users cannot share this data. Another is to save it in a shared location, but this then requires an administrator to separately grant write access to this location (except I guess on Windows; but that then comes with its own tectonic plate of other problems).
The main drawback is that text is brittle if you need multiple processes to update the file in rapid succession, and slow to handle if you have lots of data and need to update parts of it frequently. For those use cases, maybe look at a database (probably start with SQLite, which is robust and nimble and included in the Python standard library; scale up to Postgres etc. if you have enterprise-grade needs).
And, of course, if you need to store native Python structures, shelve and pickle are still there.
This is an OS-dependent solution...
$ mkfifo inpipe

#!/usr/bin/python3
# first_process.py
complicated_calculation()
while True:
    with open('inpipe') as f:
        try:
            print(exec(f.read()))
        except Exception as e:
            print(e)

$ ./first_process.py &
$ cat second_process.py > inpipe
This will allow you to change and redefine variables in the first process without copying or recalculating anything. It should be the most efficient solution compared to multiprocessing, memcached, pickle, shelve modules or databases.
This is really nice if you want to edit and redefine second_process.py iteratively in your editor or IDE until you have it right without having to wait for the first process (e.g. initializing a large dict, etc.) to execute each time you make a change.
You can do this but you must use a Python shell. In other words, the shell that you use to start Python scripts must be a Python process. Then, any global variables or classes will live until you close the shell.
Look at the cmd module, which makes it easy to write a shell program. You can even arrange for any commands that are not implemented in your shell to get passed to the system shell for execution (without closing your shell). Then you would have to implement some kind of command, prun for instance, that runs a Python script by using the runpy module.
http://docs.python.org/library/runpy.html
You would need to use the init_globals parameter to pass your special data to the program's namespace, ideally a dict or a single class instance.
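A rough sketch of such a shell (the class name, the prun command, and expensive_load are illustrative stand-ins, not from any existing project):

import cmd
import runpy

def expensive_load():
    # Stand-in for the slow computation whose result should persist.
    return list(range(1_000_000))

class PersistentShell(cmd.Cmd):
    prompt = '(pshell) '

    def __init__(self):
        super().__init__()
        # Precomputed data lives here for as long as the shell runs.
        self.cached = {'dataset': expensive_load()}

    def do_prun(self, script_path):
        """Run a script with the cached data injected into its globals."""
        runpy.run_path(script_path, init_globals=self.cached)

    def do_quit(self, _):
        return True

if __name__ == '__main__':
    PersistentShell().cmdloop()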
You could run a persistent script on the server through the OS which loads/calculates, and even periodically reloads/recalculates, the SQL data into memory structures of some sort, and then access the in-memory data from your other script through a socket.

Success unit testing pyinotify?

I'm using pyinotify to mirror files from a source directory to a destination directory. My code seems to be working when I execute it manually, but I'm having trouble getting accurate unit test results. I think the problem boils down to this:
I have to use ThreadedNotifier in my tests, otherwise they will just hang, waiting for manual input.
Because I'm using another thread, my tests and the Notifier get out of sync: tests that pass when I run them manually and observe the result fail when run as unit tests.
Has anyone succeeded in unit testing pyinotify?
When unit testing, things like threads and the file system should normally be factored out. Do you have a reason to unit test with the actual file system, user input, etc.?
Python makes it very easy to monkey patch; you could, for example, replace the entire os/sys module with a mock object (such as Python Mock) so that you never need to deal with the file system. This will also make your tests run much more quickly.
If you want to do functional testing with the file system, I'd recommend setting up a virtual machine that will have a known state, and reverting to that state every time you run the tests. You could also simulate user input, file operations, etc. as needed.
Edit
Here's a simple example of how to fake, or mock, the open function.
Say you've got a module, my_module, with a get_text_upper function:
def get_text_upper(filename):
    return open(filename).read().upper()
You want to test this without actually touching the file system (eventually you'll start just passing file objects instead of file names to avoid this but for now...). You can mock the open function so that it returns a StringIO object instead:
from io import StringIO

import my_module

def fake_open(text):
    fp = StringIO()
    fp.write(text)
    fp.seek(0)
    return fp

def test_get_text():
    # Shadow the builtin open within my_module's namespace for this test.
    my_module.open = lambda *args, **kwargs: fake_open("foo")
    text = my_module.get_text_upper("foo.txt")
    assert text == "FOO", text
Using a mocking library just makes this process a lot easier and more flexible.
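With the standard library's unittest.mock, for example, the same test could be written roughly like this (a sketch, assuming the my_module and get_text_upper names above):

from unittest import mock

import my_module

def test_get_text_upper_with_mock():
    # mock_open builds a file-handle mock whose read() returns the given data.
    fake = mock.mock_open(read_data="foo")
    with mock.patch("my_module.open", fake, create=True):
        assert my_module.get_text_upper("foo.txt") == "FOO"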
Here's a Stack Overflow post on mocking libraries for Python.
