I have a large dictionary whose structure looks like:
dcPaths = {'id_jola_001': CPath instance}
where CPath is a self-defined class:
class CPath(object):
def __init__(self):
# some attributes
self.m_dAvgSpeed = 0.0
...
# a list of CNode instance
self.m_lsNodes = []
where m_lsNodes is a list of CNode:
class CNode(object):
def __init__(self):
# some attributes
self.m_nLoc = 0
# a list of Apps
self.m_lsApps = []
Here, m_lsApps is a list of CApp, which is another self-defined class:
class CApp(object):
def __init__(self):
# some attributes
self.m_nCount = 0
self.m_nUpPackets = 0
I serialize this dictionary by using cPickle:
def serialize2File(strFileName, strOutDir, obj):
if len(obj) != 0:
strOutFilePath = "%s%s" % (strOutDir, strFileName)
with open(strOutFilePath, 'w') as hOutFile:
cPickle.dump(obj, hOutFile, protocol=0)
return strOutFilePath
else:
print("Nothing to serialize!")
It works fine, and the size of the serialized file is about 6.8 GB. However, when I try to deserialize this object:
def deserializeFromFile(strFilePath):
obj = 0
with open(strFilePath) as hFile:
obj = cPickle.load(hFile)
return obj
I find it consumes more than 90 GB of memory and takes a long time.
Why would this happen?
Is there any way I could optimize this?
BTW, I'm using python 2.7.6
You can try specifying the pickle protocol; the fastest is -1 (meaning the latest
protocol, which is fine as long as you pickle and unpickle with the same Python version).
cPickle.dump(obj, file, protocol=-1)
EDIT:
As said in the comments: load detects the protocol itself.
cPickle.load(file)
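Putting this together with the question's helpers, a minimal sketch (not tested against a file of this size) would open the files in binary mode and use the latest protocol:
import cPickle

def serialize2File(strFileName, strOutDir, obj):
    strOutFilePath = "%s%s" % (strOutDir, strFileName)
    # binary mode is required for protocols > 0
    with open(strOutFilePath, 'wb') as hOutFile:
        cPickle.dump(obj, hOutFile, protocol=-1)
    return strOutFilePath

def deserializeFromFile(strFilePath):
    with open(strFilePath, 'rb') as hFile:
        # the protocol is detected automatically on load
        return cPickle.load(hFile)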
When you pickle complex Python objects, Python stores a lot of bookkeeping data along with them (look at the object's __dict__ property).
To reduce the memory consumption of the deserialized data, you should pickle only Python natives. You can achieve this easily by implementing two methods on your classes: object.__getstate__() and object.__setstate__(state).
See Pickling and unpickling normal class instances on python documentation.
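As an illustration of the idea (a sketch only, applied to the CApp class from the question; CNode and CPath would follow the same pattern), __getstate__ can return a plain tuple of native values and __setstate__ can rebuild the attributes from it:
class CApp(object):
    def __init__(self):
        self.m_nCount = 0
        self.m_nUpPackets = 0

    def __getstate__(self):
        # pickle only a tuple of native values instead of the full __dict__
        return (self.m_nCount, self.m_nUpPackets)

    def __setstate__(self, state):
        # restore the attributes from the tuple on unpickling
        self.m_nCount, self.m_nUpPackets = state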
Related
I am trying to write python code that will be able to track the created instances of a class, and save it through sessions. I am trying to do this by creating a list inside the class declaration, which keeps track of instances. My code is as follows:
class test_object:
_tracking = []
def __init__(self, text):
self.name = text
test_object._tracking.insert(0, self)
with open("tst.pkl", mode="rb") as f:
try:
pickles = dill.load(f)
except:
pickles = test_object
logger.warning("Found dill to be empty")
f.close()
My issue is handling when the pickled data is empty. What I'd like to do is in this case simply use the base class. The issue I'm running into is that test_object._tracking ends up being equal to pickles._tracking. Is there a way to make a copy of test_object so that when test_object._tracking gets updated, pickles._tracking stays the same?
You can do the following:
import dill
class test_object:
_tracking = []
def __init__(self, text):
self.name = text
test_object._tracking.insert(0, self)
test_1 = test_object("abc")
print(test_object._tracking)
# result: [<__main__.test_object object at 0x11a8cda50>]
with open("./my_file.txt", mode="rb") as f:
try:
pickles = dill.load(f)
except:
pickles = type('test_object_copy', test_object.__bases__, dict(test_object.__dict__))
pickles._tracking = []
print("Found dill to be empty")
# The above results in "Found dill to be empty"
print(pickles._tracking)
# prints []
This sets pickles to a copy of the original class. Its _tracking attribute will then be empty and independent of the original class's _tracking.
I have a large object (1.5 GB on the disk) that I save via the python module dill. I perform lengthy operations on the object and want to save the new state of the object once in a while. However, large parts of the object remain unchanged in the operations, and I would like to overwrite the file only where things have changed.
Is there a relatively simple way (e.g. with some existing module) to achieve this task?
My intuitive solution would be to save the object attributes one by one and rebuild the object from there. Changes could be noted after reading an already saved attribute by comparing its value (e.g. via a hash function) with the respective attribute that is to be saved. Alternatively, I could track which attributes have been changed during an operation.
Is there a package for that? Is there an alternative way?
I am working with python 3.7.
I have implemented a module that does - to a large extent - what I was looking for.
Blocks of files are overwritten only if their content has changed.
Attributes can be saved separately, and this works recursively.
If attributes are not saved separately, it is necessary to pickle the object completely to compare it to an object already present on the file system. However, since writing to the disk is often what makes saving large objects so slow, significant speedups can be gained for large objects. The exact speedup depends on the hardware of the storage medium.
The code contains a method save_object that saves any object without overwriting existing identical sections. Furthermore, I have implemented a class SeparatelySaveable that can be used as the base class for all objects for which some attributes shall be saved in separate files. Attributes that are instances of SeparatelySaveable also will be saved separately automatically. Further attributes that shall be saved separately can be specified via SeparatelySaveable.set_save_separately.
Attributes that are saved separately are placed in a folder next to the file to which the original object is saved. These attributes will be saved again only if they have been accessed after they have been saved initially. When the object is loaded, the separate attributes will not be loaded until they are accessed.
The code can be found at the bottom of this answer. The usage is as follows: saving an object without overwriting similar parts works for all objects:
save_object(myObject, "fileName.ext")
Saving attributes separately:
# defining classes
class MyClass1(SeparatelySaveable):
def __init__(self, value):
super().__init__()
self.attribute1 = value
# specify that self.attribute1 shall be
# saved separately
self.set_save_separately('attribute1')
class MyClass2(SeparatelySaveable):
def __init__(self, value):
super().__init__()
# attributes that are instances of
# SeparatelySaveable will always be saved separately
self.attribute2 = MyClass1(value)
# creating objects
myObject1 = MyClass1(1)
myObject2 = MyClass2(2)
# Saves myObject1 to fileName1.ext and
# myObject1.attribute1 to fileName1.arx/attribute1.ext
myObject1.save_object("fileName1", ".ext", ".arx")
# Saves myObject2 to fileName2.ext and
# myObject2.attribute2 to fileName2.arx/attribute2.ext and
# myObject2.attribute2.attribute1 to fileName2.arx/attribute2.arx/attribute1.ext
myObject2.save_object("fileName2", ".ext", ".arx")
# load myObject2; myObject2.attribute2 will remain unloaded
loadedObject = load_object("fileName2.ext")
# loadedObject.attribute2 will be loaded; loadedObject.attribute2.attribute1
# will remain unloaded
loadedObject.attribute2
# Saves loadedObject to fileName2.ext and
# loadedObject.attribute2 to fileName2.arx/attribute2.ext
# loadedObject.attribute2.attribute1 will remain untouched
loadedObject.save_object("fileName2", ".ext", ".arx")
The code:
import dill
import os
import io
from itertools import count
DEFAULT_EXTENSION = ''
"""File name extension used if no extension is specified"""
DEFAULT_FOLDER_EXTENSION = '.arx'
"""Folder name extension used if no extension is specified"""
BLOCKSIZE = 2**20
"""Size of read/write blocks when files are saved"""
def load_object(filename):
"""Load an object.
Parameters
----------
filename : str
Path to the file
"""
with open(filename, 'rb') as file:
return dill.load(file)
def save_object(obj, filename, compare=True):
"""Save an object.
If the object has been saved to the same file earlier, only the parts
that have changed are overwritten. Note that an additional attribute
at the beginning of the file will 'shift' all subsequent data, making it
necessary to rewrite the entire file.
Parameters
----------
obj : object
Object to be saved
filename : str
Path of the file to which the object shall be saved
compare : bool
Whether only changed parts shall be overwritten. A value of `True` will
be beneficial for large files if no/few changes have been made. A
value of `False` will be faster for small and strongly changed files.
"""
if not compare or not os.path.isfile(filename):
with open(filename, 'wb') as file:
dill.dump(obj, file, byref=True)
return
stream = io.BytesIO()
dill.dump(obj, stream, byref=True)
stream.seek(0)
buf_obj = stream.read(BLOCKSIZE)
with open(filename, 'rb+') as file:
buf_file = file.read(BLOCKSIZE)
for position in count(0, BLOCKSIZE):
if not len(buf_obj) > 0:
file.truncate()
break
elif not buf_obj == buf_file:
file.seek(position)
file.write(buf_obj)
if not len(buf_file) > 0:
file.write(stream.read())
break
buf_file = file.read(BLOCKSIZE)
buf_obj = stream.read(BLOCKSIZE)
class SeparatelySaveable():
def __init__(self, extension=DEFAULT_EXTENSION,
folderExtension=DEFAULT_FOLDER_EXTENSION):
self.__dumped_attributes = {}
self.__archived_attributes = {}
self.extension = extension
self.folderExtension = folderExtension
self.__saveables = set()
def set_save_separately(self, *name):
self.__saveables.update(name)
def del_save_separately(self, *name):
self.__saveables.difference_update(name)
def __getattr__(self, name):
# prevent infinite recursion if object has not been correctly initialized
if (name == '_SeparatelySaveable__archived_attributes' or
name == '_SeparatelySaveable__dumped_attributes'):
raise AttributeError('SeparatelySaveable object has not been '
'initialized properly.')
if name in self.__archived_attributes:
value = self.__archived_attributes.pop(name)
elif name in self.__dumped_attributes:
value = load_object(self.__dumped_attributes.pop(name))
else:
raise AttributeError("'" + type(self).__name__ + "' object "
"has no attribute '" + name + "'")
setattr(self, name, value)
return value
def __delattr__(self, name):
try:
self.__dumped_attributes.pop(name)
try:
super().__delattr__(name)
except AttributeError:
pass
except KeyError:
super().__delattr__(name)
def hasattr(self, name):
if name in self.__dumped_attributes or name in self.__archived_attributes:
return True
else:
return hasattr(self, name)
def load_all(self):
for name in list(self.__archived_attributes):
getattr(self, name)
for name in list(self.__dumped_attributes):
getattr(self, name)
def save_object(self, fileName, extension=None, folderExtension=None,
overwriteChildExtension=False):
if extension is None:
extension = self.extension
if folderExtension is None:
folderExtension = self.folderExtension
# account for a possible name change - load all components
# if necessary; this could be done smarter
if not (self.__dict__.get('_SeparatelySaveable__fileName',
None) == fileName
and self.__dict__.get('_SeparatelySaveable__extension',
None) == extension
and self.__dict__.get('_SeparatelySaveable__folderExtension',
None) == folderExtension
and self.__dict__.get('_SeparatelySaveable__overwriteChildExtension',
None) == overwriteChildExtension
):
self.__fileName = fileName
self.__extension = extension
self.__folderExtension = folderExtension
self.__overwriteChildExtension = overwriteChildExtension
self.load_all()
# do not save the attributes that had been saved earlier and have not
# been accessed since
archived_attributes_tmp = self.__archived_attributes
self.__archived_attributes = {}
# save the object
dumped_attributes_tmp = {}
saveInFolder = False
for name, obj in self.__dict__.items():
if isinstance(obj, SeparatelySaveable) or name in self.__saveables:
if not saveInFolder:
folderName = fileName+folderExtension
if not os.access(folderName, os.F_OK):
os.makedirs(folderName)
saveInFolder = True
partFileName = os.path.join(folderName, name)
if isinstance(obj, SeparatelySaveable):
if overwriteChildExtension:
savedFileName = obj.save_object(partFileName, extension,
folderExtension,
overwriteChildExtension)
else:
savedFileName = obj.save_object(partFileName)
else:
savedFileName = partFileName+extension
save_object(obj, savedFileName)
dumped_attributes_tmp[name] = obj
self.__dumped_attributes[name] = savedFileName
for name in dumped_attributes_tmp:
self.__dict__.pop(name)
save_object(self, fileName+extension)
archived_attributes_tmp.update(dumped_attributes_tmp)
self.__archived_attributes = archived_attributes_tmp
return fileName+extension
I am trying to change my code to a more object oriented format. In doing so I am lost on how to 'visualize' what is happening with multiprocessing and how to solve it. On the one hand, the class should track changes to local variables across functions, but on the other I believe multiprocessing creates a copy of the code which the original instance would not have access to. I need to figure out a way to manipulate classes, within a class, using multiprocessing, and have the parent class retain all manipulated values in the nested classes.
A simple version (OLD CODE):
def runMultProc():
...
dictReports = {}
listReports = ['reportName1.txt', 'reportName2.txt']
tasks = []
pool = multiprocessing.Pool()
for report in listReports:
if report not in dictReports:
dictReports[today][report] = {}
tasks.append(pool.apply_async(worker, args=([report, dictReports[today][report]])))
else:
continue
for task in tasks:
report, currentReportDict = task.get()
dictReports[report] = currentReportDict
def worker(report, currentReportDict):
<Manipulate_reports_dict>
return report, currentReportDict
NEW CODE:
class Transfer():
def __init__(self):
self.masterReportDictionary[<todays_date>] = [reportObj1, reportObj2]
def processReports(self):
self.pool = multiprocessing.Pool()
self.pool.map(processWorker, self.masterReportDictionary[<todays_date>])
self.pool.close()
self.pool.join()
def processWorker(self, report):
# **process and manipulate report, currently no return**
report.name = 'foo'
report.path = '/path/to/report'
class Report():
def __init__(self):
self.name = ''
self.path = ''
self.errors = {}
self.runTime = ''
self.timeProcessed = ''
self.hashes = {}
self.attempts = 0
I don't think this code does what I need it to do, which is to have it process the list of reports in parallel AND, as processWorker manipulates each report class object, store those results. As I am fairly new to this I was hoping someone could help.
The big difference between the two is that the first one builds a dictionary and returns it. The second model shouldn't really return anything; I just need the classes to finish being processed, and they should then hold the relevant information within them.
Thanks!
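To illustrate the point about copies (a sketch with simplified, hypothetical names, not the question's full classes): each worker operates on its own copy of a report, so one common pattern is to return the modified copy from the worker and let the parent keep the results of pool.map:
import multiprocessing

class Report(object):
    def __init__(self, name=''):
        self.name = name
        self.path = ''

# must be a module-level function so the pool can pickle it
def processWorker(report):
    # this runs in a child process on a copy of the report
    report.path = '/path/to/' + report.name
    return report  # send the modified copy back to the parent

class Transfer(object):
    def __init__(self, reports):
        self.reports = reports

    def processReports(self):
        pool = multiprocessing.Pool()
        # the parent replaces its list with the returned, modified copies
        self.reports = pool.map(processWorker, self.reports)
        pool.close()
        pool.join()

if __name__ == '__main__':
    transfer = Transfer([Report('reportName1.txt'), Report('reportName2.txt')])
    transfer.processReports()
    print([r.path for r in transfer.reports])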
This question is related to type evolution with jsonpickle (python)
Current state description:
I need to store an object to a JSON file using jsonpickle in python.
The object's class CarState is generated by a script from another software component, so I can't change the class itself. This script automatically generates the __getstate__ and __setstate__ methods that jsonpickle uses for serializing the object. __getstate__ returns just a list of the values of the member variables, without the field names.
Therefore jsonpickle doesn't store the field names, only the values, within the JSON data (see the code example below).
The Problem:
Let's say my program needs to extend the class CarState for a new version (Version 2) by an additional field (CarStateNewVersion). Now, if it loads the JSON data from version 1, the data isn't assigned to the correct fields.
Here's an example code demonstrating the problem.
The class CarState is generated by the script and simplified here to show the problem. In Version 2 I update the class CarState with a new field (in the code snippet added as CarStateNewVersion to keep it simple).
#!/usr/bin/env python
import jsonpickle as jp
# Class using slots and implementing the __getstate__ method
# Let's say this is in program version 1
class CarState(object):
__slots__ = ['company','type']
_slot_types = ['string','string']
def __init__(self):
self.company = ""
self.type = ""
def __getstate__(self):
return [getattr(self, x) for x in self.__slots__]
def __setstate__(self, state):
for x, val in zip(self.__slots__, state):
setattr(self, x, val)
# Class using slots and implementing the __getstate__ method
# For program version 2 a new field 'year' is needed
class CarStateNewVersion(object):
__slots__ = ['company','year','type']
_slot_types = ['string','string','string']
def __init__(self):
self.company = ""
self.type = ""
self.year = "1900"
def __getstate__(self):
return [getattr(self, x) for x in self.__slots__]
def __setstate__(self, state):
for x, val in zip(self.__slots__, state):
setattr(self, x, val)
# Class using slots without the __getstate__ method
# Let's say this is in program version 1
class CarDict(object):
__slots__ = ['company','type']
_slot_types = ['string','string']
def __init__(self):
self.company = ""
self.type = ""
# Class using slots without the __getstate__ method
# For program version 2 a new field 'year' is needed
class CarDictNewVersion(object):
__slots__ = ['company','year','type']
_slot_types = ['string','string','string']
def __init__(self):
self.company = ""
self.type = ""
self.year = "1900"
if __name__ == "__main__":
# Version 1 stores the data
carDict = CarDict()
carDict.company = "Ford"
carDict.type = "Mustang"
print jp.encode(carDict)
# {"py/object": "__main__.CarDict", "company": "Ford", "type": "Mustang"}
# Now version 2 tries to load the data
carDictNewVersion = jp.decode('{"py/object": "__main__.CarDictNewVersion", "company": "Ford", "type": "Mustang"}')
# OK!
# carDictNewVersion.company = Ford
# carDictNewVersion.year = undefined
# carDictNewVersion.type = Mustang
# Version 1 stores the data
carState = CarState()
carState.company = "Ford"
carState.type = "Mustang"
print jp.encode(carState)
# {"py/object": "__main__.CarState", "py/state": ["Ford", "Mustang"]}
# Now version 2 tries to load the data
carStateNewVersion = jp.decode('{"py/object": "__main__.CarStateNewVersion", "py/state": ["Ford", "Mustang"]}')
# !!!! ERROR !!!!
# carStateNewVersion.company = Ford
# carStateNewVersion.year = Mustang
# carStateNewVersion.type = undefined
try:
carDictNewVersion.year
except:
carDictNewVersion.year = 1900
As you can see from the CarDict and CarDictNewVersion classes, if __getstate__ isn't implemented there's no problem with the newly added field, because the JSON text also contains the field names.
Question:
Is there a possibility to tell jsonpickle to not use __getstate__ and use the __dict__ instead to include the field names within the JSON data?
Or is there another possibility to somehow include the field names?
NOTE: I can't change the CarState class or its __getstate__ method, since it is generated by a script from another software component.
I can only change the code within the main method.
Or is there another serialization tool for python which creates human readable output and includes field names?
Additional Background info:
The class is generated using message definitions in ROS, namely by genpy, and the generated class inherits from the Message class, which implements __getstate__ (see https://github.com/ros/genpy/blob/indigo-devel/src/genpy/message.py#L308).
Subclass CarState to implement your own pickle protocol methods, or register a handler with jsonpickle.
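For example, a handler registered for CarState (a rough sketch using jsonpickle's handler API; CarStateHandler is an illustrative name, and the code relies on the CarState class defined above) could write the field names into the JSON instead of a positional list:
import jsonpickle
import jsonpickle.handlers

class CarStateHandler(jsonpickle.handlers.BaseHandler):
    def flatten(self, obj, data):
        # store each slot by name so newer versions can load old data by field name
        for name in obj.__slots__:
            data[name] = getattr(obj, name)
        return data

    def restore(self, data):
        obj = CarState()
        # fields missing from the JSON simply keep the defaults set in __init__
        for name in obj.__slots__:
            if name in data:
                setattr(obj, name, data[name])
        return obj

jsonpickle.handlers.register(CarState, CarStateHandler)
A Version 2 program would register the same kind of handler for its extended class, so a field added later (like year) keeps its default when data written by Version 1 is loaded.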
I have an object gui_project which has an attribute .namespace, which is a namespace dict. (i.e. a dict from strings to objects.)
(This is used in an IDE-like program to let the user define his own object in a Python shell.)
I want to pickle this gui_project, along with the namespace. Problem is, some objects in the namespace (i.e. values of the .namespace dict) are not picklable objects. For example, some of them refer to wxPython widgets.
I'd like to filter out the unpicklable objects, that is, exclude them from the pickled version.
How can I do this?
(One thing I tried is to go over the values one by one and try to pickle each, but some infinite recursion happened, and I need to be safe from that.)
(I do implement a GuiProject.__getstate__ method right now, to get rid of other unpicklable stuff besides namespace.)
I would use the pickler's documented support for persistent object references. Persistent object references are objects that are referenced by the pickle but not stored in the pickle.
http://docs.python.org/library/pickle.html#pickling-and-unpickling-external-objects
ZODB has used this API for years, so it's very stable. When unpickling, you can replace the object references with anything you like. In your case, you would want to replace the object references with markers indicating that the objects could not be pickled.
You could start with something like this (untested):
import cPickle
def persistent_id(obj):
if isinstance(obj, wxObject):
return "filtered:wxObject"
else:
return None
class FilteredObject:
def __init__(self, about):
self.about = about
def __repr__(self):
return 'FilteredObject(%s)' % repr(self.about)
def persistent_load(obj_id):
if obj_id.startswith('filtered:'):
return FilteredObject(obj_id[9:])
else:
raise cPickle.UnpicklingError('Invalid persistent id')
def dump_filtered(obj, file):
p = cPickle.Pickler(file)
p.persistent_id = persistent_id
p.dump(obj)
def load_filtered(file):
u = cPickle.Unpickler(file)
u.persistent_load = persistent_load
return u.load()
Then just call dump_filtered() and load_filtered() instead of pickle.dump() and pickle.load(). wxPython objects will be pickled as persistent IDs, to be replaced with FilteredObjects at unpickling time.
You could make the solution more generic by filtering out objects that are not of the built-in types and have no __getstate__ method.
Update (15 Nov 2010): Here is a way to achieve the same thing with wrapper classes. Using wrapper classes instead of subclasses, it's possible to stay within the documented API.
from cPickle import Pickler, Unpickler, UnpicklingError
class FilteredObject:
def __init__(self, about):
self.about = about
def __repr__(self):
return 'FilteredObject(%s)' % repr(self.about)
class MyPickler(object):
def __init__(self, file, protocol=0):
pickler = Pickler(file, protocol)
pickler.persistent_id = self.persistent_id
self.dump = pickler.dump
self.clear_memo = pickler.clear_memo
def persistent_id(self, obj):
if not hasattr(obj, '__getstate__') and not isinstance(obj,
(basestring, int, long, float, tuple, list, set, dict)):
return "filtered:%s" % type(obj)
else:
return None
class MyUnpickler(object):
def __init__(self, file):
unpickler = Unpickler(file)
unpickler.persistent_load = self.persistent_load
self.load = unpickler.load
self.noload = unpickler.noload
def persistent_load(self, obj_id):
if obj_id.startswith('filtered:'):
return FilteredObject(obj_id[9:])
else:
raise UnpicklingError('Invalid persistent id')
if __name__ == '__main__':
from cStringIO import StringIO
class UnpickleableThing(object):
pass
f = StringIO()
p = MyPickler(f)
p.dump({'a': 1, 'b': UnpickleableThing()})
f.seek(0)
u = MyUnpickler(f)
obj = u.load()
print obj
assert obj['a'] == 1
assert isinstance(obj['b'], FilteredObject)
assert obj['b'].about
This is how I would do this (I did something similar before and it worked):
Write a function that determines whether or not an object is pickleable
Make a list of all the pickleable variables, based on the above function
Make a new dictionary (called D) that stores all the non-pickleable variables
For each variable in D (this only works if you have very similar variables in D)
make a list of strings, where each string is legal python code, such that
when all these strings are executed in order, you get the desired variable
Now, when you unpickle, you get back all the variables that were originally pickleable. For all variables that were not pickleable, you now have a list of strings (legal Python code) that, when executed in order, give you the desired variable.
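A sketch of the first steps (function names are illustrative; note that it still tries to pickle every value, so on its own it does not guard against the recursion issue mentioned in the question):
import cPickle

def is_pickleable(obj):
    # an object counts as pickleable if cPickle can dump it without error
    try:
        cPickle.dumps(obj)
        return True
    except Exception:
        return False

def split_namespace(namespace):
    # separate pickleable values from the ones that need special handling
    pickleable, rest = {}, {}
    for name, value in namespace.items():
        (pickleable if is_pickleable(value) else rest)[name] = value
    return pickleable, rest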
Hope this helps
I ended up coding my own solution to this, using Shane Hathaway's approach.
Here's the code. (Look for CutePickler and CuteUnpickler.) Here are the tests. It's part of GarlicSim, so you can use it by installing garlicsim and doing from garlicsim.general_misc import pickle_tools.
If you want to use it on Python 3 code, use the Python 3 fork of garlicsim.
One approach would be to inherit from pickle.Pickler, and override the save_dict() method. Copy it from the base class, which reads like this:
def save_dict(self, obj):
write = self.write
if self.bin:
write(EMPTY_DICT)
else: # proto 0 -- can't use EMPTY_DICT
write(MARK + DICT)
self.memoize(obj)
self._batch_setitems(obj.iteritems())
However, in the _batch_setitems call, pass an iterator that filters out all items that you don't want to be dumped, e.g.:
def save_dict(self, obj):
write = self.write
if self.bin:
write(EMPTY_DICT)
else: # proto 0 -- can't use EMPTY_DICT
write(MARK + DICT)
self.memoize(obj)
self._batch_setitems(item for item in obj.iteritems()
if not isinstance(item[1], bad_type))
As save_dict isn't an official API, you need to check for each new Python version whether this override is still correct.
The filtering part is indeed tricky. Using simple tricks, you can easily get the pickle to work. However, you might end up filtering out too much and losing information that you could keep when the filter looks a little bit deeper. But the vast possibility of things that can end up in the .namespace makes building a good filter difficult.
However, we could leverage pieces that are already part of Python, such as deepcopy in the copy module.
I made a copy of the stock copy module, and did the following things:
create a new type named LostObject to represent objects that will be lost in pickling.
change _deepcopy_atomic to make sure x is picklable. If it's not, return an instance of LostObject.
objects can define __reduce__ and/or __reduce_ex__ methods to provide a hint about whether and how they can be pickled. We check whether calling these methods throws an exception, which we take as a hint that the object cannot be pickled.
to avoid making an unnecessary copy of a big object (as an actual deepcopy would), we recursively check whether an object is picklable and copy only the unpicklable parts. For instance, for a tuple of a picklable list and an unpicklable object, we make a copy of the tuple - just the container - but not of its member list.
The following is the diff:
[~/Development/scratch/] $ diff -uN /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/copy.py mcopy.py
--- /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/copy.py 2010-01-09 00:18:38.000000000 -0800
+++ mcopy.py 2010-11-10 08:50:26.000000000 -0800
@@ -157,6 +157,13 @@
cls = type(x)
+ # if x is picklable, there is no need to make a new copy, just ref it
+ try:
+ dumps(x)
+ return x
+ except TypeError:
+ pass
+
copier = _deepcopy_dispatch.get(cls)
if copier:
y = copier(x, memo)
@@ -179,10 +186,18 @@
reductor = getattr(x, "__reduce_ex__", None)
if reductor:
rv = reductor(2)
+ try:
+ x.__reduce_ex__()
+ except TypeError:
+ rv = LostObject, tuple()
else:
reductor = getattr(x, "__reduce__", None)
if reductor:
rv = reductor()
+ try:
+ x.__reduce__()
+ except TypeError:
+ rv = LostObject, tuple()
else:
raise Error(
"un(deep)copyable object of type %s" % cls)
@@ -194,7 +209,12 @@
_deepcopy_dispatch = d = {}
+from pickle import dumps
+class LostObject(object): pass
def _deepcopy_atomic(x, memo):
+ try:
+ dumps(x)
+ except TypeError: return LostObject()
return x
d[type(None)] = _deepcopy_atomic
d[type(Ellipsis)] = _deepcopy_atomic
Now back to the pickling part. You simply make a deepcopy using this new deepcopy function and then pickle the copy. The unpicklable parts have been removed during the copying process.
x = dict(a=1)
xx = dict(x=x)
x['xx'] = xx
x['f'] = file('/tmp/1', 'w')
class List():
def __init__(self, *args, **kwargs):
print 'making a copy of a list'
self.data = list(*args, **kwargs)
x['large'] = List(range(1000))
# now x contains a loop and an unpicklable file object
# the following line will throw
from pickle import dumps, loads
try:
dumps(x)
except TypeError:
print 'yes, it throws'
def check_picklable(x):
try:
dumps(x)
except TypeError:
return False
return True
class LostObject(object): pass
from mcopy import deepcopy
# though x has a big List object, this deepcopy will not make a new copy of it
c = deepcopy(x)
dumps(c)
cc = loads(dumps(c))
# check loop reference
if cc['xx']['x'] == cc:
print 'yes, loop reference is preserved'
# check unpicklable part
if isinstance(cc['f'], LostObject):
print 'unpicklable part is now an instance of LostObject'
# check large object
if loads(dumps(c))['large'].data[999] == x['large'].data[999]:
print 'large object is ok'
Here is the output:
making a copy of a list
yes, it throws
yes, loop reference is preserved
unpicklable part is now an instance of LostObject
large object is ok
You see that 1) mutual references (between x and xx) are preserved and we do not run into an infinite loop; 2) the unpicklable file object is converted to a LostObject instance; and 3) no new copy of the large object is created, since it is picklable.