unpickle instance after refactoring module name - python

Previously I defined an ElectrodePositionsModel class in the module gselu.py in the package gselu, and pickled the ElectrodePositionsModel objects into some files.
Some time later it was decided to refactor the project, and the package name gselu was changed to ielu.
When I attempt to unpickle the old pickle files with pickle.load(), the process fails with the error 'module' object has no attribute 'ElectrodePositionsModel'. What I understand of the Unpickler's behavior is that this happens because the pickle thinks it has stored an instance of gselu.gselu.ElectrodePositionsModel, and therefore tries to import this class from that module. When it doesn't exist, it gives up.
I think that I am supposed to add something to the package's __init__.py to tell it where gselu.gselu.ElectrodePositionsModel now lives, but I can't get the pickle.load() function to give me any error message other than 'module' has no attribute 'ElectrodePositionsModel', and I can't figure out where I am supposed to provide the correct path to find it. The code that does the unpickling is in the same module file (gselu.py) as the ElectrodePositionsModel class.
When I load the pickle file in an ipython session and manually import ElectrodePositionsModel, it loads correctly.
How do I tell the pickler where to load this module?
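(For what it's worth, here is a minimal sketch of one common workaround, not taken from this thread: remap the old module path while unpickling by overriding Unpickler.find_class. The target path ielu.gselu and the file name are assumptions; adjust them to wherever ElectrodePositionsModel actually lives after the rename.)
import pickle

class RenamedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Assumption: the class formerly in gselu.gselu now lives in ielu.gselu.
        if module.startswith('gselu'):
            module = module.replace('gselu', 'ielu', 1)
        return pickle.Unpickler.find_class(self, module, name)

with open('old_model.pkl', 'rb') as f:  # hypothetical file name
    model = RenamedUnpickler(f).load()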

I realise this question is old, but I just ran into a similar problem.
What I did to solve it was to take the old code and unpickle the data using that.
Then, instead of pickling the custom class instances directly, I pickled CustomClass.__dict__, which only contained raw Python.
This data could then easily be loaded in the new module by doing
a = NewNameOfCustomClass()
with open('olddata.p', 'rb') as f:
    a.__dict__ = pickle.load(f)
This method works if your custom class only has standard variables (such as builtins or numpy arrays, etc).
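The dump side of that round trip looks roughly like this (a sketch meant to be run against the old code base; the package and file names are placeholders):
import pickle
from oldpackage.oldmodule import CustomClass  # the old class must still be importable here

with open('old_instance.p', 'rb') as f:
    obj = pickle.load(f)              # succeeds because CustomClass exists in this environment

with open('olddata.p', 'wb') as f:
    pickle.dump(obj.__dict__, f)      # only plain Python data goes into the new file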

Related

How do I visualize and modify the content of a pickled function?

I need to use a pickled data-processing function that was not written by me, and I therefore do not know its content/structure. When I load it, a ModuleNotFoundError occurs:
ModuleNotFoundError: No module named 'sklearn.preprocessing.label'
I assume that the error occurs because the pickled object is trying to import a module named 'sklearn.preprocessing.label', which doesn't exist. I have tried to downgrade my sklearn version but that didn't work either.
If I knew what the pickled object was doing I could simply make my own function to replace the function within the pickled object. In order to do that I would have to visualize the function contained within the pickled object, or remove the import sklearn.preprocessing.label.
sklearn.preprocessing.label was available in scikit-learn version 0.21 and below: https://github.com/scikit-learn/scikit-learn/tree/0.21.X/sklearn/preprocessing
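(One way to peek inside a pickle without importing anything it references, offered here as a sketch rather than the OP's approach, is the standard-library pickletools module. It disassembles the opcode stream, and the GLOBAL / STACK_GLOBAL entries show exactly which module.class names the pickle will try to import.)
import pickletools

with open('processing_function.pkl', 'rb') as f:  # hypothetical file name
    data = f.read()

# Prints the opcode stream; the GLOBAL / STACK_GLOBAL entries reveal which
# module.class names must be importable when the pickle is loaded.
pickletools.dis(data)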

When would re-importing a module cause a different object to be added to sys.modules?

The importlib library provides the reload function, which can be used to re-import a module. I have often used this when importing a custom module in an interactive session (e.g. with an IPython or Jupyter notebook). I usually do the following to ensure the re-loaded object is actually the updated one.
import importlib
import mymodule
result = mymodule.fun()
...
mymodule = importlib.reload(mymodule)
I do this because the docs state
The return value is the module object (which can be different if re-importing causes a different object to be placed in sys.modules).
However, I'm wondering:
Under what scenarios would re-importing a module "cause a different object to be placed in sys.modules"?
Is it necessary to assign the result of importlib.reload to the original object?
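(For reference, a quick check, using the standard-library json module as a stand-in for mymodule: in the ordinary case reload() re-executes the module's code in the same module object, so the returned object is identical and the reassignment is a no-op.)
import importlib
import sys
import json  # stand-in for mymodule

reloaded = importlib.reload(json)
print(reloaded is json)                 # True in the ordinary case
print(sys.modules['json'] is reloaded)  # True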

pickling and unpickling user-defined class

I have a user-defined class 'myclass' that I store on file with the pickle module, but I am having problems unpickling it. I have about 20 distinct instances of the same structure, which I save in distinct files. When I read each file back, the code works on some files and not on others, failing with the error:
'module' object has no attribute 'myclass'
I have generated some files today and some other yesterday, and my code only works on the files generated today (I have NOT changed class definition between yesterday and today).
I was wondering if maybe my method is not robust, or if I am not doing things as I should; for example, maybe I cannot pickle a user-defined class, and this is introducing some randomness in the process.
Another issue could be that the files I generated yesterday were produced on a different machine: because I work on an academic cluster, I have some login nodes and some computing nodes that differ in architecture. So I generated yesterday's files on the computing nodes and today's files on the login nodes, and I am reading everything on the login nodes.
As suggested in some of the comments, I have installed dill and loaded it with import dill as pickle. Now I can read the files from computing nodes to login nodes of the same cluster. But if I try to read the files generated on the computing node of one cluster, on the login node of another cluster I cannot. I get KeyError: 'ClassType' in _load_type(name) in dill.py
Can it be because the python version is different? I have generated the files with python2.7 and I read them with python3.3.
EDIT:
I can read the pickled files, if I use everywhere python 2.7. Sadly, part of my code, written in python 3, is not automatically compatible with python 2.7 :(
Can you from mymodule import myclass? Pickling does not pickle the class, just a reference to it. To load a pickled object python must be able to find the class that was to be used to create the object.
eg.
import pickle

class A(object):
    pass

obj = A()
pickled = pickle.dumps(obj)

_A = A; del A  # hide class
try:
    pickle.loads(pickled)
except AttributeError as e:
    print(e)

A = _A  # unhide class
print(pickle.loads(pickled))

More on python ImportError No module named

Following the suggestion here, my package (or the directory containing my modules) is located at C:/Python34/Lib/site-packages. The directory contains an __init__.py and sys.path contains a path to the directory as shown.
Still I am getting the following error:
Traceback (most recent call last):
  File "C:/Python34/Lib/site-packages/toolkit/window.py", line 6, in <module>
    from catalogmaker import Catalog
  File "C:\Python34\Lib\site-packages\toolkit\catalogmaker.py", line 1, in <module>
    from patronmaker import Patron
  File "C:\Python34\Lib\site-packages\toolkit\patronmaker.py", line 4, in <module>
    class Patron:
  File "C:\Python34\Lib\site-packages\toolkit\patronmaker.py", line 11, in Patron
    patrons = pickle.load(f)
ImportError: No module named 'Patron'
I have a class in patronmaker.py named 'Patron' but no module named Patron so I am not sure what the last statement in the error message means. I very much appreciate your thoughts on what I am missing.
Python version 3.4.1 on a 32-bit Windows machine.
You are saving all patron instances (i.e. self) to the Patron class attribute Patron.patrons. Then you are trying to pickle a class attribute from within the class. This can choke pickle; however, I believe dill should be able to handle it. Is it really necessary to save all the class instances to a list in Patron.patrons? It's a bit of an odd thing to do…
pickle serializes classes by reference, and doesn't play well with __main__ for many objects. With dill, you don't have to serialize classes by reference, and it can handle issues with __main__ much better. Get dill here: https://github.com/uqfoundation
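For instance (a sketch, not the OP's code), dill can pickle an instance of a class defined in __main__ together with the class definition itself, so it can typically be loaded elsewhere without importing the original script:
import dill

class Patron:                      # defined in __main__, as in the OP's script
    def __init__(self, lname, fname):
        self.lname, self.fname = lname, fname

payload = dill.dumps(Patron('Mother', 'Teresa'))
restored = dill.loads(payload)
print(restored.lname, restored.fname)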
Edit:
I tried your code (with one minor change) and it worked.
dude@hilbert>$ python patronmaker.py
Then start python…
>>> import dill
>>> f = open('patrons.pkl', 'rb')
>>> p = dill.load(f)
>>> p
[Julius Caeser, Kunte Kinta, Norton Henrich, Mother Teresa]
The only change I made was to uncomment the lines at the end of patronmaker.py so that it saved some patrons…. and I also replaced import pickle with import dill as pickle everywhere.
So, even by downloading and running your code, I can't produce an error with dill. I'm using the latest dill from github.
Additional Edit:
Your traceback above is from an ImportError. Did you install your module? If you didn't use setup.py to install it, or if you don't have your module on your PYTHONPATH, then you won't find your module regardless of how you are serializing things.
Even more edits:
Looking at your code, you should be using the singleton pattern for patrons… it should not be inside the class Patron. The block of code at the class level that loads the patrons into Patron.patrons is sure to cause problems, and is probably the source of some form of error. I also see that you are pickling the attribute Patron.patrons (not even the class itself) from inside the Patron class -- this is madness -- don't do it. Also notice that when you are trying to obtain the patrons, you use Patron.patrons… this goes through the class object and not an instance. Move patrons outside of the class, and use the singleton directly as a list of patrons. You should typically work through a patron instance, so if you wanted each patron to know who all the other patrons are, you could do p = Patron('Joe', 'Blow') and then p.patrons to get all patrons… but you'd need to write a Patron.load method that reads the singleton list of patrons; you could also use a property to make that load look like an attribute access.
If you build a singleton of patrons (as a list), or a "registry" of patrons (as a dict) if you like, then just check whether a patrons pickle file exists, load it into the registry, and don't do it from inside the Patron class… things should go much better. Your code currently tries to load class instances at class-definition time, while the class object is still being built. That's bad...
Also, don't expect people to go downloading your code and debugging it for you, when you don't present a minimal test case or sufficient info for how the traceback was created.
You may have hit on a valid pickling error in dill for some dark corner case, but I can't tell b/c I can't reproduce your error. However, I can tell that you need some refactoring.
And just to be explicit:
Move your patrons-initializing mess out of Patron and into a new file, patrons.py:
import os
import dill as pickle

# Initialize patrons with saved pickle data
if os.path.isfile('patrons.pkl'):
    with open('patrons.pkl', 'rb') as f:
        patrons = pickle.load(f)
else:
    patrons = []
Then in patronmaker.py, and everywhere else you need the singleton…
import dill as pickle
import os.path
import patrons as the

class Patron:
    def __init__(self, lname, fname):
        self.lname = lname.title()
        self.fname = fname.title()
        self.terrCheckedOutHistory = {}
        # Add any created Patron to patrons list
        the.patrons.append(self)
        # Preserve this person via pickle
        with open('patrons.pkl', 'wb') as f:
            pickle.dump(the.patrons, f)
And you should be fine, unless your code is hitting one of the cases where attributes on modules can't be serialized because they were added dynamically (see https://github.com/uqfoundation/dill/pull/47), which should definitely make pickle fail, and in some cases dill too… probably with an AttributeError on the module. I just can't reproduce this… and I'm done.

Unpickling a function into a different context in Python

I have written a Python interface to a process-centric job distribution system we're developing/using internally at my workplace. The primary people using this interface are research scientists, reasonably skilled programmers but not software developers, so ease of use and keeping the interface out of the way to the greatest degree possible are paramount.
My library unrolls a sequence of inputs into a sequence of pickle files on a shared file server, then spawns jobs that load those inputs, perform the computation, pickle the results, and exit; the client script then picks back up and produces a generator that loads and yields the results (or rethrows any exception the calculation function did.)
This is only useful since the calculation function itself is one of the serialized inputs. cPickle is quite content to pickle function references, but requires the pickled function to be reimportable in the same context. This is problematic. I've already solved the problem of finding the module to reimport it, but the vast majority of the time, it is a top-level function that is pickled and, thus, does not have a module path. The only strategy I've found to be able to unpickle such a function on the computation nodes is this nauseating little approach towards simulating the original environment in which the function was pickled before unpickling it:
...
# At this point, we've identified the source of the target function.
# A string by its name lives in "modname".
# In the real code, there is significant try/except work here.
targetModule = __import__(modname)
globalRef = globals()
for thingie in dir(targetModule):
    if thingie not in globalRef:
        globalRef[thingie] = targetModule.__dict__[thingie]
# sys.argv[2]: the path to the pickle file common to all jobs, which contains
# any data in common to all invocations of the target function, then the
# target function itself
commonFile = open(sys.argv[2], "rb")
commonUnpickle = cPickle.Unpickler(commonFile)
commonData = commonUnpickle.load()
# the actual function unpack I'm having trouble with:
doIt = commonUnpickle.load()
The final line is the most important one here: it's where my module is picking up the function it should actually be running. This code, as written, works as desired, but directly manipulating the symbol tables like this is unsettling.
How can I do this, or something very much like it, without the unpleasant, unsafe, and just plain scary __dict__-and-globals() manipulation I'm using above, and without forcing the research scientists to separate their calculation scripts into a proper class structure (they use Python like the most excellent graphing calculator ever, and I would like to continue to let them do so) the way pickle desperately wants? I fervently believe there has to be a better way, but exec "from {0} import *".format("modname") didn't do it, several attempts to inject the pickle load into the targetModule reference didn't do it, and eval("commonUnpickle.load()", targetModule.__dict__, locals()) didn't do it. All of these fail with the unpickler's AttributeError over being unable to find the function in <module>.
What is a better way?
Pickling functions can be rather annoying if you are trying to move them into a different context. If the function does not reference anything from the module it is in, and references (if anything) only modules that are guaranteed to be imported, you might check some code from a Rudimentary Database Engine recipe found in the Python Cookbook.
In order to support views, the academic module grabs the code from the callable when pickling the query. When it comes time to unpickle the view, a LambdaType instance is created with the code object and a reference to a namespace containing all imported modules. The solution has limitations but worked well enough for the exercise.
Example for Views
class _View:
    def __init__(self, database, query, *name_changes):
        "Initializes _View instance with details of saved query."
        self.__database = database
        self.__query = query
        self.__name_changes = name_changes

    def __getstate__(self):
        "Returns everything needed to pickle _View instance."
        return self.__database, self.__query.__code__, self.__name_changes

    def __setstate__(self, state):
        "Sets the state of the _View instance when unpickled."
        database, query, name_changes = state
        self.__database = database
        self.__query = types.LambdaType(query, sys.modules)
        self.__name_changes = name_changes
Sometimes it appears necessary to make modifications to the registered modules available in the system. If, for example, you need to make a reference to the first module (__main__), you may need to create a new module object with your available namespace loaded into it. The same recipe used the following technique.
Example for Modules
def test_northwind():
    "Loads and runs some test on the sample Northwind database."
    import os, imp
    # Patch the module namespace to recognize this file.
    name = os.path.splitext(os.path.basename(sys.argv[0]))[0]
    module = imp.new_module(name)
    vars(module).update(globals())
    sys.modules[name] = module
Your question was long, and I was too caffeinated to make it all the way through… However, I think you are looking to do something that there's a pretty good existing solution for already. There's a fork of the parallel python (i.e. pp) library that takes functions and objects and serializes them, sends them to different servers, and then unpickles and executes them. The fork lives inside the pathos package, but you can download it independently here:
http://danse.cacr.caltech.edu/packages/dev_danse_us
The "other context" in that case is another server… and the objects are transported by converting the objects to source code and then back to objects.
If you are looking to use pickling, much in the way you are doing already, there's an extension to mpi4py that serializes arguments and functions, and returns pickled return values… The package is called pyina, and is commonly used to ship code and objects to cluster nodes in coordination with a cluster scheduler.
Both pathos and pyina provide map abstractions (and pipe), and try to hide all of the details of parallel computing behind the abstractions, so scientists don't need to learn anything except how to program normal serial python. They just use one of the map or pipe functions, and get parallel or distributed computing.
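For example (a sketch assuming pathos is installed; the calc function is made up), the map abstraction looks roughly like this:
from pathos.multiprocessing import ProcessingPool as Pool

def calc(x):
    return x ** 2

# The pool serializes calc and its inputs (via dill), runs them in worker
# processes, and hands back the unpickled results.
results = Pool().map(calc, range(10))
print(results)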
Oh, I almost forgot. The dill serializer includes dump_session and load_session functions that allow the user to easily serialize their entire interpreter session and send it to another computer (or just save it for later use). That's pretty handy for changing contexts, in a different sense.
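A minimal sketch of that session round trip (the file name is just an example):
import dill

x = 42
def calc(n):
    return n * x

# Save the whole interpreter session to a file...
dill.dump_session('session.pkl')

# ...then, in a fresh interpreter (possibly on another machine):
#   import dill
#   dill.load_session('session.pkl')
#   calc(2)   # -> 84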
Get dill, pathos, and pyina here: https://github.com/uqfoundation
For a module to be recognized as loaded, I think it must be in sys.modules, not just have its contents imported into your global/local namespace. Try to exec everything, then get the result out of an artificial environment.
env = {"fn": sys.argv[2]}
code = """\
import %s # maybe more
import cPickle
commonFile = open(fn, "rb")
commonUnpickle = cPickle.Unpickler(commonFile)
commonData = commonUnpickle.load()
doIt = commonUnpickle.load()
"""
exec code in env
return env["doIt"]
While functions are advertised as first-class objects in Python, this is one case where it can be seen that they are really second-class objects. It is the reference to the callable, not the object itself, that is pickled. (You cannot directly pickle a lambda expression.)
There is an alternate usage of __import__ that you might prefer:
import sys
from functools import partial

def importer(modulename, symbols=None):
    u"importer('foo') returns module foo; importer('foo', ['bar']) returns {'bar': object}"
    if modulename in sys.modules: module = sys.modules[modulename]
    else: module = __import__(modulename, fromlist=['*'])
    if symbols is None: return module
    else: return dict(zip(symbols, map(partial(getattr, module), symbols)))
So these would all be basically equivalent:
from mymodule.mysubmodule import myfunction
myfunction = importer('mymodule.mysubmodule').myfunction
globals()['myfunction'] = importer('mymodule.mysubmodule', ['myfunction'])['myfunction']
