How to create a corrupt pkl file in Python

I'm creating a library that caches function return values to pkl files. However, sometimes when I terminate the program while writing to the pkl files, I wind up with corrupt pkl files (not always). I'm setting up the library to deal with these corrupt files (that lead mostly to an EOFError, but may also lead to an IOError). However, I need to create files that I know are corrupt to test this, and the method of terminating the program is not consistent. Is there some other way to write to a pkl file and be guaranteed an EOFError or IOError when I subsequently read from it?

Short answer: You don't need them.
Long answer: there's a better way to handle this; take a look below.
OK, let's start by understanding each of these exceptions separately:
An EOFError happens whenever the parser reaches the end of the file without a complete representation of an object and is therefore unable to rebuild the object.
An IOError represents a reading error; the file could be deleted or have its permissions revoked during the process.
Now, let's develop a strategy for testing it.
One common idiom is to wrap the offending call, pickle.dump for example, in a function that may randomly throw these exceptions. Here is an example:
import pickle
from random import random

def chaos_pickle(obj, file, io_error_chance=0, eof_error_chance=0):
    # Randomly simulate the failures a real pickling run might hit
    if random() < io_error_chance:
        raise IOError("Chaotic IOError")
    if random() < eof_error_chance:
        raise EOFError("Chaotic EOFError")
    return pickle.dump(obj, file)
Using this instead of the traditional pickle.dump ensures that your code randomly throws both of the exceptions (notice that there's a caveat, though: if you set io_error_chance to 1, it will never raise an EOFError).
This trick is quite useful when used along with the mock library (unittest.mock) to create faulty objects for testing purposes.
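For instance, a hedged test sketch using unittest.mock, where load_cached stands in for the questioner's own cache-reading function (a hypothetical name):
import pickle
from unittest import mock

def load_cached(key):
    # Hypothetical stand-in for the questioner's cache-reading function
    try:
        with open("%s.pkl" % key, "rb") as fh:
            return pickle.load(fh)
    except (EOFError, IOError):
        return None  # treat a corrupt cache file as a cache miss

def test_handles_corrupt_pickle():
    # Force the failure deterministically instead of relying on randomness
    with mock.patch("pickle.load", side_effect=EOFError("Chaotic EOFError")):
        with mock.patch("builtins.open", mock.mock_open(read_data=b"")):
            assert load_cached("some_key") is None

test_handles_corrupt_pickle()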
Enjoy!

Take a bunch of your old, corrupted pickles and use those. If you don't have any, take a bunch of working pickles, truncate them quasi-randomly, and see which ones give errors when you try to load them. Alternatively, if the "corrupt" files don't need to even resemble valid pickles, you could just unpickle random crap you wouldn't expect to work. For example, mash the keyboard and try to unpickle the result.
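A quick sketch of the truncation idea (the dict being pickled is just a placeholder): pickle something valid, chop it at every byte offset, and keep the prefixes that fail to load as test fixtures.
import pickle

good = pickle.dumps({"school": 2, "class": 3})
corrupt_fixtures = []
for cut in range(len(good)):
    try:
        pickle.loads(good[:cut])
    except Exception:  # EOFError, UnpicklingError, etc., depending on the cut
        corrupt_fixtures.append(good[:cut])
print("%d of %d truncations fail to load" % (len(corrupt_fixtures), len(good)))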
Note that the docs say
The pickle module is not intended to be secure against erroneous or
maliciously constructed data. Never unpickle data received from an
untrusted or unauthenticated source.

Related

Python: How to link a filename to a memory file

What I want to realize has the following feature:
A Python program (or process, thread, ...) creates a memory file that can be read or written.
As long as the program is alive, the file data exists only in memory (NOT on disk). Once the program is no longer alive, no data is left behind.
However, there is an interface on the disk: it has a filename, and this interface is linked to the memory file. Read and write operations on the interface are possible.
Why not use in-memory IO?
The memory file will be an input of another program (not Python). So a file name is needed.
Why not use tempfile?
The major reason is security. The finalization of a tempfile differs between operating systems (right?), and in some occasional cases, such as OS interruptions, data may remain on disk. So a program holding the data itself seems more secure (at least to an extent).
Anyway, I just want to try and see whether tempfile can be avoided.
You could consider using a named pipe (created with mkfifo). Another option is to create an actual file which the two programs open; once both have opened it, you can unlink it so that it's no longer accessible on disk.
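A minimal POSIX sketch of the named-pipe option (the path and payload are made up):
import os

fifo_path = "exchange.fifo"  # hypothetical path; the other program opens it like a file
os.mkfifo(fifo_path)         # creates the on-disk name; the data itself never hits the disk
try:
    # open() blocks until the consumer opens the other end for reading
    with open(fifo_path, "wb") as pipe:
        pipe.write(b"bytes that never touch the disk\n")
finally:
    os.unlink(fifo_path)     # remove the filesystem entry when done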

Overwrite a python file while using it?

I have a first file (data.py):
database = {
    'school': 2,
    'class': 3
}
and a second Python file (app.py):
import data
del data.database['school']
print(data.database)
>>>{'class': 3}
But nothing changed in data.py. Why?
And how can I change it from my app.py?
del data.database['school'] modifies the data in memory, but does not modify the source code.
Modifying source code to manage the persistence of your data is not good practice IMHO.
You could use a database, a csv file, a json file ...
To elaborate on Gelineau's answer: at runtime, your source code is turned into a machine-usable representation (known as "bytecode") which is loaded into the process memory, then executed. When the del data.database['school'] statement (in its bytecode form) is executed, it only modifies the in-memory data.database object, not (hopefully!) the source code itself. Actually, your source code is not "the program"; it's a blueprint for the runtime process.
What you're looking for is known as data persistence (data that "remembers" its last known state between executions of the program). There are many solutions to this problem, ranging from the simple "write it to a text or binary file somewhere and re-read it at startup" to full-blown multi-server database systems. Which solution is appropriate for you depends on your program's needs and constraints: whether you need to handle concurrent access (multiple users/processes editing the data at the same time), etc., so there's really no one-size-fits-all answer. For the simplest use cases (single user, small datasets, etc.), json or csv files written to disk or a simple binary key:value file format like anydbm or shelve (both in Python's stdlib) can be enough. As soon as things get a bit more complex, SQL databases are most often your best bet (no wonder they are still the industry standard and will remain so for long years).
In all cases, data persistence is not "automagic"; you will have to write quite some code to make sure your changes are saved in a timely manner.
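Applied to the question's example, a minimal sketch using a JSON file (the filename is made up) instead of editing data.py:
import json

DB_PATH = "database.json"  # hypothetical filename

def load_database():
    try:
        with open(DB_PATH, encoding="utf-8") as f:
            return json.load(f)
    except FileNotFoundError:
        return {'school': 2, 'class': 3}  # the defaults from data.py

def save_database(database):
    with open(DB_PATH, "w", encoding="utf-8") as f:
        json.dump(database, f)

database = load_database()
del database['school']
save_database(database)  # the deletion now survives the next run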
What you are trying to achieve is basically a file operation.
When you import data, Python loads an instance of your file into memory and creates a reference to it in your new file, i.e. app.py. So if you modify it in app.py, you are just modifying the instance in RAM, not the actual file stored on your hard drive.
If you want to change the source code of another file (although that's not good practice), you can use file operations.

How to detect end of file using scipy.io.FortranFile

I am reading an unformatted sequential file output from a Fortran program. I am using the scipy.io.FortranFile class to do this, and am successfully extracting the information I need.
My problem: I do not know how long the input file is, and have no way of knowing how many records to read in. Currently, I am simply iteratively reading the file until an exception is raised (a TypeError, but I don't know if this is how it would always fail). I would prefer to do this more elegantly.
Is there any way to detect EOF using the FortranFile class? Or, alternatively, is there a better way to read in unformatted sequential files?
Some cursory research (I am not a Fortran programmer) indicates to me that when reading a file with the Fortran READ statement, one can check the IOSTAT flag to determine whether you are at the end of the file. I would be surprised if a similar capability isn't provided in the FortranFile class, but I don't see any mention of it in the documentation.
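In the absence of a documented EOF check, a sketch that at least makes the exception handling explicit (the filename and record dtype are assumptions; in newer SciPy versions a clean end of file raises FortranEOFError, which subclasses TypeError, consistent with the TypeError observed here):
import numpy as np
from scipy.io import FortranFile

f = FortranFile("output.dat", "r")  # hypothetical filename
records = []
while True:
    try:
        records.append(f.read_ints(dtype=np.int32))
    except TypeError:  # clean EOF (FortranEOFError in newer SciPy)
        break
f.close()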

What does it mean to flush file contents in Python?

I am trying to teach myself Python by reading documentation. I am trying to understand what it means to flush a file buffer. According to documentation, "file.flush" does the following.
Flush the internal buffer, like stdio's fflush().
This may be a no-op on some file-like objects.
I don't know what "internal buffer" and "no-op" mean, but I think it says that flush writes data from some buffer to a file.
Hence, I ran this file, toggling the comment on the line in the middle.
with open("myFile.txt", "w+") as file:
file.write("foo")
file.write("bar")
# file.flush()
file.write("baz")
file.write("quux")
However, I seem to get the same myFile.txt with and without the call to file.flush(). What effect does file.flush() have?
Python buffers writes to files. That is, file.write returns before the data is actually written to your hard drive. The main motivation of this is that a few large writes are much faster than many tiny writes, so by saving up the output of file.write until a bit has accumulated, Python can maintain good writing speeds.
file.flush forces the data to be written out at that moment. This is handy when you know that it might be a while before you have more data to write out, but you want other processes to be able to view the data you've already written. Imagine a log file that grows slowly. You don't want to have to wait ages before enough entries have built up to cause the data to be written out in one big chunk.
In either case, file.close causes the remaining data to be flushed, so "quux" in your code will be written out as soon as the with block exits and file (which is a really bad name, as it shadows the built-in file constructor) gets closed.
Note: your OS does some buffering of its own, but I believe every OS where Python is implemented will honor file.flush's request to write data out to the drive. Someone please correct me if I'm wrong.
By the way, "no-op" means "no operation", as in it won't actually do anything. For example, StringIO objects manipulate strings in memory, not files on your hard drive. StringIO.flush probably just immediately returns because there's not really anything for it to do.
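To actually observe the difference, keep the file open for a while (a small sketch; watch myFile.txt from another terminal, e.g. with tail -f):
import time

with open("myFile.txt", "w") as f:
    f.write("foo")
    f.write("bar")
    f.flush()        # "foobar" is handed to the OS right now
    time.sleep(10)   # run `tail -f myFile.txt` elsewhere during this pause
    f.write("baz")
    f.write("quux")  # written out when the with block closes the file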
Buffer content might be cached to improve performance. Flush makes sure that the content is written to disk completely, avoiding data loss. It is also useful when, for example, you want the line asking for user input printed completely on-screen before the next file operation takes place.

Keep persistent variables in memory between runs of Python script

Is there any way of keeping a result variable in memory so I don't have to recalculate it each time I run the beginning of my script?
I am performing a long (5-10 second) series of the exact same operations on a data set (which I am reading from disk) every time I run my script.
This wouldn't be too much of a problem since I'm pretty good at using the interactive editor to debug my code in between runs; however sometimes the interactive capabilities just don't cut it.
I know I could write my results to a file on disk, but I'd like to avoid doing so if at all possible. This should be a solution which generates a variable the first time I run the script, and keeps it in memory until the shell itself is closed or until I explicitly tell it to fizzle out. Something like this:
# Check if variable already created this session
in_mem = var_in_memory()  # Returns pointer to var, or False if not in memory yet
if not in_mem:
    # Read data set from disk
    with open('mydata', 'r') as in_handle:
        mytext = in_handle.read()
    # Extract relevant results from data set
    mydata = parse_data(mytext)
    result = initial_operations(mydata)
    in_mem = store_persistent(result)
I've an inkling that the shelve module might be what I'm looking for here, but it looks like in order to open a shelve variable I would have to specify a file name for the persistent object, so I'm not sure if it's quite what I'm looking for.
Any tips on getting shelve to do what I want it to do? Any alternative ideas?
You can achieve something like this using the reload built-in function to re-execute your main script's code. You will need to write a wrapper script that imports your main script, asks it for the variable it wants to cache, caches a copy of that within the wrapper script's module scope, and then, when you want (when you hit ENTER on stdin or whatever), calls reload(yourscriptmodule) and hands the cached object back to your script's functions so that the expensive computation can be bypassed. Here's a quick example.
wrapper.py
import sys
import mainscript

part1Cache = None

if __name__ == "__main__":
    while True:
        if not part1Cache:
            part1Cache = mainscript.part1()
        mainscript.part2(part1Cache)
        print "Press enter to re-run the script, CTRL-C to exit"
        sys.stdin.readline()
        reload(mainscript)
mainscript.py
def part1():
    print "part1 expensive computation running"
    return "This was expensive to compute"

def part2(value):
    print "part2 running with %s" % value
While wrapper.py is running, you can edit mainscript.py, add new code to the part2 function and be able to run your new code against the pre-computed part1Cache.
To keep data in memory, the process must keep running. Memory belongs to the process running the script, NOT to the shell. The shell cannot hold memory for you.
So if you want to change your code and keep your process running, you'll have to reload the modules when they're changed. If any of the data in memory is an instance of a class that changes, you'll have to find a way to convert it to an instance of the new class. It's a bit of a mess. Not many languages were ever any good at this kind of hot patching (Common Lisp comes to mind), and there are a lot of chances for things to go wrong.
If you only want to persist one object (or object graph) for future sessions, the shelve module probably is overkill. Just pickle the object you care about. Do the work and save the pickle if you have no pickle-file, or load the pickle-file if you have one.
import os
import cPickle as pickle

pickle_filepath = "/path/to/picklefile.pickle"

if not os.path.exists(pickle_filepath):
    # Read data set from disk
    with open('mydata', 'r') as in_handle:
        mytext = in_handle.read()
    # Extract relevant results from data set
    mydata = parse_data(mytext)
    result = initial_operations(mydata)
    with open(pickle_filepath, 'wb') as pickle_handle:  # binary mode for pickles
        pickle.dump(result, pickle_handle)
else:
    with open(pickle_filepath, 'rb') as pickle_handle:
        result = pickle.load(pickle_handle)
Python's shelve is a persistence solution for pickled (serialized) objects and is file-based. The advantage is that it stores Python objects directly, meaning the API is pretty simple.
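A minimal sketch of that dict-like API, reusing the question's parse_data and initial_operations helpers (the shelf filename is made up):
import shelve

with shelve.open("session_cache") as shelf:  # hypothetical filename
    if "result" not in shelf:                # first run: compute and store
        with open("mydata", "r") as in_handle:
            mydata = parse_data(in_handle.read())
        shelf["result"] = initial_operations(mydata)
    result = shelf["result"]                 # later runs: instant load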
If you really want to avoid the disk, the technology you are looking for is an "in-memory database". Several alternatives exist; see this SO question: in-memory database in Python.
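For instance, the standard library's SQLite driver can keep the whole database in RAM (the table and values are placeholders):
import sqlite3

conn = sqlite3.connect(":memory:")  # lives and dies with this process
conn.execute("CREATE TABLE results (key TEXT PRIMARY KEY, value REAL)")
conn.execute("INSERT INTO results VALUES (?, ?)", ("answer", 42.0))
print(conn.execute("SELECT value FROM results WHERE key = ?", ("answer",)).fetchone())
conn.close()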
Weirdly, none of the earlier answers here mention simple text files. The OP says they don't like the idea, but as this is becoming a canonical for duplicates which might not have that constraint, this alternative deserves a mention. If all you need is for some text to survive between invocations of your script, save it in a regular text file.
def main():
    # Before start, read data from previous run
    try:
        with open('mydata.txt', encoding='utf-8') as statefile:
            data = statefile.read().rstrip('\n')
    except FileNotFoundError:
        data = "some default, or maybe nothing"
    updated_data = your_real_main(data)
    # When done, save new data for next run
    with open('mydata.txt', 'w', encoding='utf-8') as statefile:
        statefile.write(updated_data + '\n')
This easily extends to more complex data structures, though then you'll probably need to use a standard structured format like JSON or YAML (for serializing data with tree-like structures into text) or CSV (for a matrix of columns and rows containing text and/or numbers).
Ultimately, shelve and pickle are just glorified generalized versions of the same idea; but if your needs are modest, the benefits of a simple textual format which you can inspect and update in a regular text editor, and read and manipulate with ubiquitous standard tools, and easily copy and share between different Python versions and even other programming languages as well as version control systems etc, are quite compelling.
As an aside, character encoding issues are a complication which you need to plan for; but in this day and age, just use UTF-8 for all your text files.
Another caveat is that beginners are often confused about where to save the file. A common convention is to save it in the invoking user's home directory, though that obviously means multiple users cannot share this data. Another is to save it in a shared location, but this then requires an administrator to separately grant write access to this location (except I guess on Windows; but that then comes with its own tectonic plate of other problems).
The main drawback is that text is brittle if you need multiple processes to update the file in rapid succession, and slow to handle if you have lots of data and need to update parts of it frequently. For these use cases, maybe look at a database (probably start with SQLite, which is robust and nimble, and included in the Python standard library; scale up to Postgres etc. if you have enterprise-grade needs).
And, of course, if you need to store native Python structures, shelve and pickle are still there.
This is an OS-dependent solution...
$ mkfifo inpipe

#!/usr/bin/python3
# first_process.py
complicated_calculation()
while True:
    with open('inpipe') as f:
        try:
            exec(f.read())
        except Exception as e:
            print(e)

$ ./first_process.py &
$ cat second_process.py > inpipe
This will allow you to change and redefine variables in the first process without copying or recalculating anything. It is likely more efficient than solutions based on the multiprocessing, memcached, pickle, or shelve modules, or databases.
This is really nice if you want to edit and redefine second_process.py iteratively in your editor or IDE until you have it right without having to wait for the first process (e.g. initializing a large dict, etc.) to execute each time you make a change.
You can do this but you must use a Python shell. In other words, the shell that you use to start Python scripts must be a Python process. Then, any global variables or classes will live until you close the shell.
Look at the cmd module, which makes it easy to write a shell program. You can even arrange it so that any commands that are not implemented in your shell get passed to the system shell for execution (without closing your shell). Then you would have to implement some kind of command, prun for instance, that runs a Python script by using the runpy module.
http://docs.python.org/library/runpy.html
You would need to use the init_globals parameter to pass your special data to the program's namespace, ideally a dict or a single class instance.
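A hedged sketch of that idea (the prun command name and the cached variable are assumptions, not a fixed API):
import cmd
import runpy

class PersistentShell(cmd.Cmd):
    prompt = "(pshell) "

    def __init__(self):
        cmd.Cmd.__init__(self)
        self.cached = {}  # survives between prun invocations

    def do_prun(self, scriptpath):
        """prun <script.py>: run a script with access to the shared cache."""
        runpy.run_path(scriptpath, init_globals={"cached": self.cached})

    def do_exit(self, line):
        return True

if __name__ == "__main__":
    PersistentShell().cmdloop()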
You could run a persistent script on the server through the OS which loads/calculates, and even periodically reloads/recalculates, the SQL data into memory structures of some sort, and then access the in-memory data from your other script through a socket.
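A bare-bones sketch of the server side (the port, payload, and wire format are all made up; a real version needs framing, reloading, and error handling):
import json
import socket

DATA = {"expensive": "result"}  # computed once at startup

server = socket.create_server(("127.0.0.1", 8765))  # Python 3.8+
while True:
    conn, _ = server.accept()
    with conn:
        # each client connection gets the current in-memory data
        conn.sendall(json.dumps(DATA).encode("utf-8"))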
