I am trying to run the following code on Colab:
vocab_file = 'vocabs/vocab_train.pkl'
vocab = utils.load_variables(vocab_file)
It calls a function load_variables, and here is the file where that function is defined:
import os
import pickle as cPickle

def load_variables(pickle_file_name):
    if os.path.exists(pickle_file_name):
        with open(pickle_file_name, 'rb') as f:
            d = cPickle.load(f)
        return d
Here is the error it is throwing:
I tried changing it to "import pickle" / "import cPickle" as well, but it keeps showing the same error.
cPickle was replaced by pickle in Python 3. You're likely using a Python 2.7 interpreter instead of a Python 3 one.
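If the same notebook might be run under either Python version, a small compatibility shim for the import works on both (a minimal sketch, kept outside the utils module from the question):
try:
    import cPickle as pickle   # Python 2: the C-accelerated pickler
except ImportError:
    import pickle              # Python 3: pickle already uses the C implementation

with open('vocabs/vocab_train.pkl', 'rb') as f:
    vocab = pickle.load(f)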
I am trying to call one Python script from another Python script using import, but it is giving an error. Can you please help me with this?
Example:
case.py is one of my scripts; it has a function generate_case('rule_id') that returns some value.
final.py is my other script, in which I am trying to call the above script and store the return value in a variable.
This is what I am trying to do in Python:
import case as f_case
qry = ''
qry += f_case.generate_case('R162')
print(qry)
Error:
ModuleNotFoundError: No module named 'case'
Both scripts are in the same location.
Try this:
import os
import sys
scriptpath=''
sys.path.append(os.path.abspath(scriptpath))
# Do the import
import case as f_case
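Once the import succeeds, printing the module's __file__ attribute shows which file was actually picked up:
print(f_case.__file__)   # path of the case.py that was imported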
I renamed the script to new_cons.py and now it is working for me. Maybe I had added an integer to the script name, so it was not importing.
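As a side note, if a script's file name is not a valid Python identifier (for example, it starts with a digit), a plain import statement cannot reference it at all, but importlib can; the file name below is purely hypothetical:
import importlib

f_case = importlib.import_module('2case')   # hypothetical script named 2case.py
qry = f_case.generate_case('R162')
print(qry)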
I want to save a function using dill.
This function uses a function from a specific library (e.g. re).
To save the dill/pickle file, I use the following:
import dill
import re
def fct(a):
    # the function uses the re library
    return ...
filename = 'test.pickle'
dill.dump(fct, open(filename, 'wb'))
However, when I try to load this function in a different notebook (using dill.load), I get the following error:
name 're' is not defined
I am guessing that dill.dump did not save the import with the function. How can I solve that? I tried including the import inside the function, but that does not work.
Adding the following in the first script, before dumping the function to the dill file, seems to work:
dill.settings['recurse'] = True
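An end-to-end sketch with that setting (the body of fct below is made up purely for illustration):
import dill
import re

dill.settings['recurse'] = True   # also serialize the globals the function refers to, e.g. re

def fct(a):
    # placeholder body that actually uses re
    return re.findall(r'\d+', a)

filename = 'test.pickle'
with open(filename, 'wb') as f:
    dill.dump(fct, f)

# In the other notebook:
# import dill
# with open('test.pickle', 'rb') as f:
#     fct = dill.load(f)
# fct('abc 123')  # -> ['123']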
There is a module named tzwhere in Python. I load it into the tz variable as follows:
from tzwhere import tzwhere
tz = tzwhere.tzwhere(shapely=True)
The line above takes about 45 seconds to load, and its scope lasts only until that particular Python session is stopped. After this line I can get as many outputs of tz.tzNameAt(latitude, longitude) as I want, but these outputs are quickly calculated only in that Python shell.
I want to make the tz variable shareable, just like an API, so that if tz is used from any Python session, or even from a Java program using an exec command, it neither takes 45 seconds to load again nor gives me NameError: name 'tz' is not defined.
Please help. Thank you very much!
You could probably use the pickle module which can store class instances in files.
Try something like this:
from tzwhere import tzwhere
tz = tzwhere.tzwhere(shapely=True)
import pickle
# Dump the variable tz into file save.p
pickle.dump(tz, open("save.p", "wb"))
To load tz from another script just do:
import pickle
tz = pickle.load(open("save.p", "rb"))
If you are still unhappy with the speed when loading tz from another script, there is an accelerated version of pickle called cPickle.
Just do:
import cPickle as pickle
NOTE: cPickle is Python 2 only; on Python 3, pickle uses the faster C implementation automatically.
For more information, see this link: https://wiki.python.org/moin/UsingPickle
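On Python 3 a plain import pickle is all you need, and a newer pickle protocol is generally faster and more compact (this assumes tz was created as in the question):
import pickle

with open('save.p', 'wb') as f:
    pickle.dump(tz, f, protocol=pickle.HIGHEST_PROTOCOL)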
You cannot share a Python object in memory between sessions, and nothing survives in memory after Python has closed. You can, however, save the state of the object with pickle (if the library supports it; if it does not, this solution will not work). pickle is a library that ships with Python and can save the state of most objects to a file.
Here is an example for pickle:
To save the state:
import pickle
obj = [1, 2, 3, 4]
f = open("/path/to/the/file/where/the/data/should/be/stored.pickle", 'wb') # you do not have to use the suffix '.pickle' however.
pickle.dump(obj, f)
f.close()
To retrieve it:
import pickle
f = open("/path/to/the/file/where/the/data/should/be/stored.pickle", 'rb')
obj = pickle.load(f)
f.close()
Or for your example, run this thing once:
from tzwhere import tzwhere
import pickle
f = open("/path/to/the/file", 'wb')
pickle.dump(tzwhere.tzwhere(shapely=True), f)
f.close()
And use this to retrieve it:
import pickle
f = open("/path/to/the/file", 'rb')
tz = pickle.load(f)
f.close()
Or as a one-liner, so that it doesn't take up that much space:
import pickle;f=open("/path/to/the/file",'rb');tz=pickle.load(f);f.close()
I hope that helps,
CodenameLambda
PS: If you want to know how pickle works, just look at the documentation.
I'm using Python's sklearn RandomForestClassifier and trying to export the decision trees.
The basic code is as follows:
from sklearn import tree
with open(dot_file_name, 'w') as my_file:
    tree.export_graphviz(tree1, out_file=my_file, feature_names=feature_names)
After running the Python script, the following error shows up:
AttributeError: 'DecisionTreeClassifier' object has no attribute 'export_graphviz'
I'm using Python 2.7. Is it because of the Python version? Do I have to use Python 3.0?
That's because somewhere you used the name tree for a DecisionTreeClassifier instance, which shadows the sklearn.tree module. Use another name for it.
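A minimal sketch of the clash and the fix (the toy data and variable names below are only illustrative):
from sklearn import tree
from sklearn.ensemble import RandomForestClassifier

# toy data, just to make the sketch runnable
X = [[0, 0], [1, 1], [0, 1], [1, 0]]
y = [0, 1, 1, 0]
feature_names = ['f0', 'f1']

clf = RandomForestClassifier(n_estimators=5)
clf.fit(X, y)

# If this variable were named "tree", it would shadow the sklearn.tree module,
# and tree.export_graphviz(...) would raise the AttributeError from the question.
tree1 = clf.estimators_[0]

with open('tree1.dot', 'w') as my_file:
    tree.export_graphviz(tree1, out_file=my_file, feature_names=feature_names)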
My task is to export an imported (compiled) module loaded from a container.
I have a Python script importing a module. Using print(module1) I can see that it is a compiled Python (.pyc) file loaded from an archive. As I cannot access the archive, my idea was to import the module and have it decompiled with uncompyle2.
This is my minimal code:
import os, sys
import uncompyle2
import module1

with open("module1.py", "wb") as fileobj:
    uncompyle2.uncompyle_file(module1, fileobj)
However, this gives me an error. If I substitute module1 in the uncompyle argument with the actual path, it does not make a difference. The code snippet worked when the .pyc file was a single file in a directory rather than one loaded from a container.
Error:
Traceback (most recent call last):
File "C:\....\run.py", line 64, in <module>
uncompyle2.uncompyle_file(module1, fileobj)
File "C:\....\Python\python-2.7.6\lib\site-packages\uncompyle2\__init__.py", line 124, in uncompyle_file
version, co = _load_module(filename)
File "C:\.....\Python\python-2.7.6\lib\site-packages\uncompyle2\__init__.py", line 67, in _load_module
fp = open(filename, 'rb')
TypeError: coercing to Unicode: need string or buffer, module found
Does anyone know where I am going wrong?
You are going wrong with your initial assumption:
As I cannot access the archive, my idea was to import the module and
have it decompiled with uncompyle2.
Uncompiling an already loaded module is unfortunately not possible. A loaded Python module is not a mirror of the on-disk representation of a .pyc file. Instead, it is a collection of objects created as a side effect of executing the code in the .pyc. Once the code has been executed, its byte code is discarded and it (in the general case) cannot be reconstructed.
As an example, consider the following Python module:
import gtk
w = gtk.Window(gtk.WINDOW_TOPLEVEL)
w.add(gtk.Label("A quick brown fox jumped over the lazy dog"))
w.show_all()
Importing this module inside an application that happens to run a GTK main loop will pop up a window with some text as a side effect. The module will have a dict with two entries, gtk pointing to the gtk module, and w pointing to an already created GTK window. There is no hint there how to create another GTK window of the sort, nor how to create another such module. (Remember that the object created might have been arbitrarily complex and that its creation could be a very involved process.)
You might ask, then: if that is so, what is the content of the .pyc file? How did it get loaded the first time? The answer is that the .pyc file contains an on-disk rendition of the byte-compiled code in the module, ready for execution. Creating a .pyc file is roughly equivalent to doing something like:
import marshal

def make_pyc(source_code, filename):
    compiled = compile(source_code, filename, "exec")
    serialized = marshal.dumps(compiled)
    with open(filename, "wb") as out:
        out.write(serialized)

# for example:
make_pyc("import gtk\nw = gtk.Window(gtk.WINDOW_TOPLEVEL)...",
         "somefile.pyc")
On the other hand, loading a compiled module is approximately equivalent to:
import sys, marshal, imp

def load_pyc(modname):
    with open(modname + ".pyc", "rb") as in_:
        serialized = in_.read()
    compiled = marshal.loads(serialized)
    module = sys.modules[modname] = imp.new_module(modname)
    exec compiled in module.__dict__

load_pyc("somefile")
Note how, once the code has been executed with the exec statement, the string and the deserialized bytecode are no longer used and will be swept up by the garbage collector. The only remaining effect of the pyc having been loaded is the presence of a new module with living functions, classes, and other objects that are impossible to serialize, such as references to open files, network connections, OpenGL canvases, or GTK windows.
What modules like uncompyle2 do is the inverse of the compile function. You must have the actual code of the module (either serialized, as in a .pyc file, or as a deserialized code object, like the compiled variable in the snippets above), from which uncompyle2 will produce a fairly faithful representation of the original source.
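As a side note, a real, loadable .pyc for Python 2 also carries a small header (magic number plus timestamp) in front of the marshalled code object; a sketch of writing one, so that a tool like uncompyle2 could read the file back, might look like this:
import imp, marshal, struct, time

def write_real_pyc(source_code, py_name, pyc_name):
    # compile the source, then prepend the header a real .pyc file starts with
    code = compile(source_code, py_name, "exec")
    with open(pyc_name, "wb") as out:
        out.write(imp.get_magic())                      # 4-byte magic number
        out.write(struct.pack("<I", int(time.time())))  # 4-byte source timestamp
        out.write(marshal.dumps(code))                  # marshalled code object

write_real_pyc("def foo():\n    print 'hello world'\n", "test.py", "test.pyc")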
Pass the filename string first and then the file object to write to:
with open("out.txt","w") as f:
uncompyle2.uncompyle_file('path_to.pyc',f)
You can see the output:
with open("/home/padraic/test.pyc","rb") as f:
print(f.read())
with open("out.txt","r+") as f:
uncompyle2.uncompyle_file('/home/padraic/test.pyc',f)
f.seek(0)
print(f.read())
Output:
�
d�ZdS(cCs dGHdS(Nshello world((((stest.pytfoosN(R(((stest.pyt<module>s
#Embedded file name: test.py
def foo():
    print 'hello world'