I trained a classifier using Python 3.6 and sklearn 0.22 and pickled it a few years ago. I no longer have the original classifier, and now when I try to reuse this .pkl file in Python 3.8 with sklearn 1.2 I get the following error:
classifier_object = pickle.load(fi, encoding="latin1")
ModuleNotFoundError: No module named 'sklearn.ensemble.forest'
I understand that this happens because the file was generated on an older version of sklearn and pickle. Is it possible to unpickle it to a raw file using Python 3.6 and then pickle it back using Python 3.8? Or do I need the original classifier to create a Python 3.8 / sklearn 1.2 version of it?
Related
I have a question about scikit models and (retro-)compatibility.
I have a model (saved using joblib) created in Python 3.5 from scikit-learn 0.21.2, which I then analyze with the package shap version 0.30. Since I upgraded to Ubuntu 20.04 I have Python 3.8 (and newer versions of both scikit-learn and shap).
Because of the new package versions I cannot load the models with Python 3.8, so I made a virtual environment with Python 3.5 and the original package versions.
Now my question is: is there a way to re-dump with joblib the models so I can also open them with Python 3.8? I'd like to re-analyze the model with the newest version of the package shap (but of course it has a scikit version requirement that would break the joblib loading).
Alternatively, what other options do I have? (The only thing I do not want is to re-train the model).
There are no standard solutions within scikit-learn. If your model is supported, you can try sklearn-json.
Although this does not solve your current issue, you can in the future save your models in formats with fewer compatibility issues – see the Interoperable formats section in scikit-learn's Model persistence page.
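For renamed-module errors like the `ModuleNotFoundError: No module named 'sklearn.ensemble.forest'` above, one fragile last-resort workaround is to alias the old dotted path in sys.modules before unpickling (in newer scikit-learn releases that class lives in sklearn.ensemble._forest). A self-contained toy sketch of the mechanism, using hypothetical module names oldpkg/newpkg — it only papers over an import-path rename, not genuine changes to a class's internals:

```python
import pickle
import sys
import types

# Simulate the "old" library: a module oldpkg that owns class Model
class Model:
    def __init__(self, n):
        self.n = n

old_mod = types.ModuleType("oldpkg")
old_mod.Model = Model
Model.__module__ = "oldpkg"
sys.modules["oldpkg"] = old_mod
payload = pickle.dumps(Model(3))  # the pickle records "oldpkg.Model"

# Simulate the upgrade: the module was renamed to newpkg
new_mod = types.ModuleType("newpkg")
new_mod.Model = Model
Model.__module__ = "newpkg"
del sys.modules["oldpkg"]
sys.modules["newpkg"] = new_mod

# Unpickling now fails, just like the sklearn.ensemble.forest error...
try:
    pickle.loads(payload)
except ModuleNotFoundError as exc:
    print("missing module:", exc.name)

# ...until the old dotted path is aliased to the new module
sys.modules["oldpkg"] = new_mod
restored = pickle.loads(payload)
print("restored n =", restored.n)
```

With a real sklearn pickle this would look like `sys.modules['sklearn.ensemble.forest'] = sklearn.ensemble._forest` before pickle.load. It is brittle across versions, so re-dumping from the original environment or using an interoperable format remains the safer route.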
I have been getting multiple errors due to conflicts between the TensorFlow version installed on my system and the version the code was written against.
I am using python 3.6.7 and Tensorflow 2.0 to get started with the code https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/installation.md
But I am getting several errors:
flags = tf.app.flags
AttributeError: module 'tensorflow' has no attribute 'app'
As I am using 2.0, I replaced tf.app.flags with tf.compat.v1.flags.
from tensorflow.contrib import slim as contrib_slim
ModuleNotFoundError: No module named 'tensorflow.contrib'
I am not able to solve the second one.
Can I get help to know which python and tensorflow version should be used to run DeepLab v3+?
You should use TensorFlow 1.x to run the DeepLabV3+ model, because it runs with a session and uses the slim library, both of which are based on TensorFlow 1.x. Your two problems can then be solved as follows:
You do not need to replace tf.app.flags with tf.compat.v1.flags.
To run the DeepLabV3+ model, put the deeplab and slim folders inside a parent folder (deeplab_slim),
and export them by running the following export commands from that parent folder (deeplab_slim):
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/deeplab
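The layout and exports above can be sanity-checked with a throwaway directory (the /tmp/deeplab_slim path here is only an illustration, not part of the original instructions):

```shell
# Create a stand-in for the parent folder with the two subfolders
mkdir -p /tmp/deeplab_slim/slim /tmp/deeplab_slim/deeplab
cd /tmp/deeplab_slim

# Same export pattern as above: parent folder plus each subfolder
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/deeplab

# Both subfolders should now appear on the module search path
echo "$PYTHONPATH"
```

With the real repositories in those folders (and TensorFlow 1.x installed), Python can then resolve the deeplab and slim imports.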
I have been trying to get the classification report in the form of a dictionary.
So according to the scikit-learn 0.20 documentation, I do:
from sklearn import metrics
rep = metrics.classification_report(y_true, y_pred, output_dict=True)
But I get an error saying
TypeError: classification_report() got an unexpected keyword argument 'output_dict'
The scikit-learn version on my machine was initially 0.19.1, but even after updating it to 0.20 the same error message shows.
This error should not show up as long as you have scikit-learn 0.20.0 installed. If you are trying this in a Jupyter notebook, make sure the correct version is reflected in your notebook using:
import sklearn
print(sklearn.__version__)
If you've upgraded scikit-learn but jupyter shows the wrong version of the package, make sure jupyter is installed in your current environment (and restart jupyter in a new terminal).
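For reference, output_dict=True returns nested dicts keyed by label. A rough pure-Python sketch of the per-label part of that structure (precision/recall/support computed by hand here, so the snippet runs even without scikit-learn installed):

```python
# Hand-rolled version of the per-label entries that
# classification_report(..., output_dict=True) produces
y_true = ["cat", "dog", "cat", "dog", "dog"]
y_pred = ["cat", "cat", "cat", "dog", "dog"]

rep = {}
for label in sorted(set(y_true)):
    tp = sum(t == p == label for t, p in zip(y_true, y_pred))
    pred_pos = sum(p == label for p in y_pred)   # predicted as label
    true_pos = sum(t == label for t in y_true)   # actually label
    rep[label] = {
        "precision": tp / pred_pos if pred_pos else 0.0,
        "recall": tp / true_pos if true_pos else 0.0,
        "support": true_pos,
    }

print(rep["cat"])  # access one label's metrics by name
```

With scikit-learn >= 0.20 installed, metrics.classification_report(y_true, y_pred, output_dict=True) yields this shape, plus f1-score and the accuracy/macro avg/weighted avg rows.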
I have a fully trained and saved Tensorflow model that I would like to load and use as a plug-in to a third party application (UCSF Chimera). This third party application runs on Python 2.7 which does not support Tensorflow. If even possible, is there a way for me to use this model at all in python 2.7?
I was originally looking at this previous post but it was for Java/C++.
First, save your TensorFlow model using pickle, with protocol=2 so that Python 2 can read it (Python 3's default pickle protocol is not readable by Python 2):
with open("xxx.pkl", "wb") as outfile:
    pickle.dump(checkpointfile, outfile, protocol=2)
Second, install Anaconda and create a Python 2.7 environment
Third, install TensorFlow again in the Python 2.7 environment
conda install tensorflow
Fourth, read the model back with pickle in the Python 2.7 environment (Python 2's pickle.load does not take an encoding argument; that parameter only exists in Python 3):
pkl_file = open("xxx.pkl", "rb")
data = pickle.load(pkl_file)
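A minimal sketch of why protocol=2 matters, runnable in Python 3 (the dict here is just a stand-in for the checkpoint object):

```python
import pickle

# Protocol 2 is the newest pickle protocol Python 2 can read;
# protocols 3 and above are Python-3-only.
data = {"weights": [0.1, 0.2], "bias": 0.5}
payload = pickle.dumps(data, protocol=2)

print(payload[:2])  # protocol-2 pickles start with the b'\x80\x02' header
assert pickle.loads(payload) == data  # round-trips in Python 3 too
```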
I am trying to learn data analysis using the iris data set, so I am just copying code already written for this subject, and I get the following error regarding the libraries:
Traceback (most recent call last):
File "iris.py", line 6, in <module>
from sklearn import model_selection
ImportError: cannot import name model_selection
And here is how I import this module:
from sklearn import model_selection
I am using Python 2.7.
What could be the problem?
I suspect there might be a problem with the version. Right? Or not?
Please don't suggest Anaconda; I am not willing to use it.
Thanks a bunch
sklearn.model_selection is available in version 0.18 or later.
Please update your sklearn with pip or another tool.
Official changelog:
http://scikit-learn.org/stable/whats_new.html#version-0-18
Version 0.18
September 28, 2016
Last release with Python 2.6 support
Scikit-learn 0.18 will be the last version of scikit-learn to support Python 2.6. Later versions of scikit-learn will require Python 2.7 or above.
Model Selection Enhancements and API Changes
The model_selection module
The new module sklearn.model_selection, which groups together the functionalities of formerly sklearn.cross_validation, sklearn.grid_search and sklearn.learning_curve, introduces new possibilities such as nested cross-validation and better manipulation of parameter searches with Pandas.
Many things will stay the same but there are some key differences. Read below to know more about the changes.
Data-independent CV splitters enabling nested cross-validation
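Since model_selection only exists from 0.18 on, it is worth confirming which version is actually installed before anything else. A stdlib-only sketch of the comparison (plug sklearn.__version__ in for real use; this assumes plain numeric version strings):

```python
def version_tuple(v):
    # "0.19.1" -> (0, 19, 1), so tuples compare numerically,
    # avoiding string-comparison traps like "0.9" > "0.18"
    return tuple(int(part) for part in v.split("."))

# e.g. version_tuple(sklearn.__version__) >= version_tuple("0.18")
print(version_tuple("0.19.1") >= version_tuple("0.18"))  # new enough
print(version_tuple("0.17.1") >= version_tuple("0.18"))  # too old
```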