Whenever I try to run the following line of code:
from imblearn.under_sampling import NearMiss
for under-sampling (or over-sampling) imbalanced data in a Jupyter notebook, I get this error:
ImportError Traceback (most recent call last)
<ipython-input-21-c701341dce49> in <module>
----> 1 from imblearn.under_sampling import NearMiss
~\anaconda3\lib\site-packages\imblearn\under_sampling\__init__.py in <module>
4 """
5
----> 6 from ._prototype_generation import ClusterCentroids
7
8 from ._prototype_selection import RandomUnderSampler
~\anaconda3\lib\site-packages\imblearn\under_sampling\_prototype_generation\__init__.py in <module>
4 """
5
----> 6 from ._cluster_centroids import ClusterCentroids
7
8 __all__ = ['ClusterCentroids']
~\anaconda3\lib\site-packages\imblearn\under_sampling\_prototype_generation\_cluster_centroids.py in <module>
15 from sklearn.cluster import KMeans
16 from sklearn.neighbors import NearestNeighbors
---> 17 from sklearn.utils import safe_indexing
18
19 from ..base import BaseUnderSampler
ImportError: cannot import name 'safe_indexing' from 'sklearn.utils' (C:\Users\anaconda3\lib\site-packages\sklearn\utils\__init__.py)
For imblearn.under_sampling, did you try reinstalling the package?
With pip:
pip install imbalanced-learn
With conda:
conda install -c conda-forge imbalanced-learn
From inside a Jupyter notebook (replace <package_name> with imbalanced-learn):
import sys
!{sys.executable} -m pip install <package_name>
If you have scikit-learn >= 0.24 (imbalanced-learn now depends on scikit-learn >= 0.23, see https://imbalanced-learn.org/stable/install.html), you may want to try:
try:
    from sklearn.utils import safe_indexing
except ImportError:
    # safe_indexing was removed from the public API in scikit-learn 0.24;
    # fall back to the private helper under the old name
    from sklearn.utils import _safe_indexing as safe_indexing
Edit ..\Anaconda3\Lib\site-packages\sklearn\utils\__init__.py.
Copy the body of def _safe_indexing(...) up to the next def and paste it back in, renamed to def safe_indexing(...).
Keep the existing _safe_indexing as it is.
That solved the issue for me.
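A simpler, equivalent sketch (assuming the private _safe_indexing helper still exists in your scikit-learn) is to append a one-line alias at the end of that same __init__.py instead of duplicating the function body:
# sketch: re-expose the private helper under the old public name so imblearn can import it
safe_indexing = _safe_indexing
Either way this is a local patch of an installed package; upgrading imbalanced-learn so it matches your scikit-learn version is the cleaner long-term fix.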
In the ~\Anaconda3\Lib\site-packages\yellowbrick\classifier\threshold.py module, replace:
from sklearn.utils import indexable, safe_indexing
with
from sklearn.utils import indexable, _safe_indexing
After that, restart the kernel.
Related
I tried
pip install tensorflow
and it says it is incompatible with my NumPy version (1.20.0).
Then I uninstalled NumPy and installed the required version, numpy~=1.19.2.
Then
pip install fancyimpute
installed without any errors in the Anaconda Prompt, but it is still not working in Jupyter Notebook.
The error is:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-34-e68ac4972d28> in <module>
16 import tensorflow as tf
17 import numpy.core.multiarray
---> 18 from fancyimpute import KNN
~\anaconda3\lib\site-packages\fancyimpute\__init__.py in <module>
2
3 from .solver import Solver
----> 4 from .nuclear_norm_minimization import NuclearNormMinimization
5 from .matrix_factorization import MatrixFactorization
6 from .iterative_svd import IterativeSVD
~\anaconda3\lib\site-packages\fancyimpute\nuclear_norm_minimization.py in <module>
11 # limitations under the License.
12
---> 13 import cvxpy
14
15 from .solver import Solver
~\anaconda3\lib\site-packages\cvxpy\__init__.py in <module>
16
17 __version__ = "1.1.10"
---> 18 from cvxpy.atoms import *
19 from cvxpy.constraints import NonPos, Zero, SOC, PSD
20 from cvxpy.expressions.expression import Expression
~\anaconda3\lib\site-packages\cvxpy\cvxcore\python\__init__.py in <module>
1 # TODO(akshayka): This is a hack; the swig-auto-generated cvxcore.py
2 # tries to import cvxcore as `from . import _cvxcore`
----> 3 import _cvxcore
ImportError: numpy.core.multiarray failed to import
I had the same issue and upgraded NumPy by running pip install numpy --upgrade. That worked for me.
I found a solution right here:
Opencv/numpy issue: "module compiled against API version X but this version of numpy is Y"
That answer has the table mapping API versions to NumPy releases. Since my version was 1.19.5 (0xd),
I uninstalled it and installed a NumPy release with API version 0xe, which is 1.20.0.
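For reference, a minimal sketch of checking which NumPy the notebook's interpreter actually sees and moving it onto the 0xe line (the ~=1.20.0 pin below is just the version from my case):
python -c "import numpy; print(numpy.__version__)"
pip uninstall -y numpy
pip install "numpy~=1.20.0"
Restart the Jupyter kernel afterwards so the new NumPy gets loaded.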
import pandas
from sklearn import tree
import pydotplus
from sklearn.tree import DecisionTreeClassifier
import matplotlib.pyplot as plt
import matplotlib.image as pltimg
dtree = DecisionTreeClassifier()
dtree = dtree.fit(X, y)
y_pred = dtree.predict(X)
y_pred
The error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-33-745bbfa9769b> in <module>
1 import pandas
----> 2 from sklearn import tree
3 import pydotplus
4 from sklearn.tree import DecisionTreeClassifier
5 import matplotlib.pyplot as plt
~/opt/anaconda3/lib/python3.8/site-packages/sklearn/__init__.py in <module>
62 else:
63 from . import __check_build
---> 64 from .base import clone
65 from .utils._show_versions import show_versions
66
~/opt/anaconda3/lib/python3.8/site-packages/sklearn/base.py in <module>
12 from scipy import sparse
13 from .externals import six
---> 14 from .utils.fixes import signature
15 from .utils import _IS_32BIT
16 from . import __version__
~/opt/anaconda3/lib/python3.8/site-packages/sklearn/utils/__init__.py in <module>
12 from .murmurhash import murmurhash3_32
13 from .class_weight import compute_class_weight, compute_sample_weight
---> 14 from . import _joblib
15 from ..exceptions import DataConversionWarning
16 from .fixes import _Sequence as Sequence
~/opt/anaconda3/lib/python3.8/site-packages/sklearn/utils/_joblib.py in <module>
20 from joblib import parallel_backend, register_parallel_backend
21 else:
---> 22 from ..externals import joblib
23 from ..externals.joblib import logger
24 from ..externals.joblib import dump, load
~/opt/anaconda3/lib/python3.8/site-packages/sklearn/externals/joblib/__init__.py in <module>
117 from .numpy_pickle import load
118 from .compressor import register_compressor
--> 119 from .parallel import Parallel
120 from .parallel import delayed
121 from .parallel import cpu_count
~/opt/anaconda3/lib/python3.8/site-packages/sklearn/externals/joblib/parallel.py in <module>
26 from .my_exceptions import TransportableException
27 from .disk import memstr_to_bytes
---> 28 from ._parallel_backends import (FallbackToBackend, MultiprocessingBackend,
29 ThreadingBackend, SequentialBackend,
30 LokyBackend)
~/opt/anaconda3/lib/python3.8/site-packages/sklearn/externals/joblib/_parallel_backends.py in <module>
20 from .pool import MemmappingPool
21 from multiprocessing.pool import ThreadPool
---> 22 from .executor import get_memmapping_executor
23
24 # Compat between concurrent.futures and multiprocessing TimeoutError
~/opt/anaconda3/lib/python3.8/site-packages/sklearn/externals/joblib/executor.py in <module>
12 from .disk import delete_folder
13 from ._memmapping_reducer import get_memmapping_reducers
---> 14 from .externals.loky.reusable_executor import get_reusable_executor
15
16
~/opt/anaconda3/lib/python3.8/site-packages/sklearn/externals/joblib/externals/loky/__init__.py in <module>
10
11 from .backend.context import cpu_count
---> 12 from .backend.reduction import set_loky_pickler
13 from .reusable_executor import get_reusable_executor
14 from .cloudpickle_wrapper import wrap_non_picklable_objects
~/opt/anaconda3/lib/python3.8/site-packages/sklearn/externals/joblib/externals/loky/backend/reduction.py in <module>
123 # global variable to change the pickler behavior
124 try:
--> 125 from sklearn.externals.joblib.externals import cloudpickle # noqa: F401
126 DEFAULT_ENV = "cloudpickle"
127 except ImportError:
~/opt/anaconda3/lib/python3.8/site-packages/sklearn/externals/joblib/externals/cloudpickle/__init__.py in <module>
1 from __future__ import absolute_import
2
----> 3 from .cloudpickle import *
4
5 __version__ = '0.6.1'
~/opt/anaconda3/lib/python3.8/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py in <module>
165
166
--> 167 _cell_set_template_code = _make_cell_set_template_code()
168
169
~/opt/anaconda3/lib/python3.8/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py in _make_cell_set_template_code()
146 )
147 else:
--> 148 return types.CodeType(
149 co.co_argcount,
150 co.co_kwonlyargcount,
TypeError: an integer is required (got type bytes)
So I keep getting this TypeError and have no clue what to do. I'm pretty sure scikit-learn is installed, and both X and y are defined earlier when the training set is created. I've tried reinstalling scikit-learn, downgrading Python, and installing through the terminal, but none of that helped. I just can't figure out what the issue is and would really appreciate some help.
I was able to reproduce the problem in a conda environment by running conda install python==3.8 scikit-learn and then manually downgrading joblib with pip install "joblib<0.14", since joblib < 0.14 is incompatible with Python 3.8.
I'm not exactly sure how you got into this situation, but it should fix it to first uninstall any joblib package that might have been mis-installed:
$ pip uninstall joblib
Then force reinstall/upgrade it with conda:
$ conda update --force-reinstall joblib
Confirm the correct version was installed:
$ python -c 'import joblib; print(joblib.__version__)'
0.17.0
If all else fails, try creating a fresh Anaconda install. Make sure to install packages with conda install and not pip.
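A lighter-weight alternative to a full reinstall is a fresh conda environment; a sketch (the environment name sk-env is arbitrary):
conda create -n sk-env python=3.8 scikit-learn joblib
conda activate sk-env
python -c "from sklearn import tree; print('sklearn imports fine')"
Then launch Jupyter from inside that environment so the notebook uses the same interpreter.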
In my case, the joblib used by scikit-learn needed to be reinstalled:
pip uninstall joblib
pip install joblib
This happened to me because I pip installed sklearn instead of scikit-learn.
In my case, I did
pip uninstall sklearn --yes
followed by
pip install sklearn
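For what it's worth, the sklearn package on PyPI has historically been only a placeholder that pulls in scikit-learn, so a more explicit equivalent of the above would be (a sketch):
pip uninstall --yes sklearn
pip install scikit-learn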
ImportError: cannot import name 'mean_absolute_difference'
I tried uninstalling and installing it again.
import gensim
ImportError Traceback (most recent call last)
<ipython-input-28-e70e92d32c6e> in <module>()
----> 1 import gensim
/usr/local/lib/python3.6/dist-packages/gensim/models/hdpmodel.py in <module>()
61
62 from gensim import interfaces, utils, matutils
---> 63 from gensim.matutils import dirichlet_expectation, mean_absolute_difference
64 from gensim.models import basemodel, ldamodel
65
ImportError: cannot import name 'mean_absolute_difference'
I tried installing gensim on my Linux machine using !pip install gensim, and after that the import worked fine, as shown below.
>>> !pip install gensim
>>> import gensim
>>> from gensim import interfaces, utils, matutils
>>> from gensim.matutils import dirichlet_expectation, mean_absolute_difference
>>>
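If gensim is already installed but stale, a sketch of the same check after forcing an upgrade (no specific gensim version assumed):
!pip install --upgrade gensim
# restart the kernel so the upgraded package is picked up, then:
from gensim.matutils import dirichlet_expectation, mean_absolute_difference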
I installed scikit-learn library in python using the command
pip install -U scikit-learn
When I am trying to import the library or one of its modules, like
from sklearn.model_selection import train_test_split
or simply import sklearn
I am getting the error
ImportError Traceback (most recent call last)
<ipython-input-24-73edc048c06b> in <module>()
----> 1 from sklearn.model_selection import train_test_split
c:\users\ajain9\appdata\local\programs\python\python36-32\lib\site-packages\sklearn\__init__.py in <module>()
132 else:
133 from . import __check_build
--> 134 from .base import clone
135 __check_build # avoid flakes unused variable error
136
c:\users\ajain9\appdata\local\programs\python\python36-32\lib\site-packages\sklearn\base.py in <module>()
11 from scipy import sparse
12 from .externals import six
---> 13 from .utils.fixes import signature
14 from . import __version__
15
c:\users\ajain9\appdata\local\programs\python\python36-32\lib\site-packages\sklearn\utils\__init__.py in <module>()
7 import warnings
8
----> 9 from .murmurhash import murmurhash3_32
10 from .validation import (as_float_array,
11 assert_all_finite,
ImportError: cannot import name 'murmurhash3_32'
Any reason this error might be happening?
I am using Python 3.6.3, NumPy 1.13.3, and pandas 0.21.0 on Windows.
Try using virtualenv and installing all the required libraries there; it worked for me.
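A minimal sketch of that approach on Windows using the built-in venv module (the environment name sklearn-env is arbitrary):
python -m venv sklearn-env
sklearn-env\Scripts\activate
pip install -U scikit-learn numpy scipy pandas
python -c "from sklearn.model_selection import train_test_split; print('ok')"
If you run Jupyter, start it from inside this environment so the notebook picks up the same interpreter.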
I am trying to load tensorflow within jupyter notebook and receive the following error:
ImportError Traceback (most recent call last)
<ipython-input-1-28a19874e1dd> in <module>()
5 import tensorflow as tf
6 from tensorflow.python.framework import ops
----> 7 from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
8
9 get_ipython().magic(u'matplotlib inline')
/Pathway/tf_utils.py in <module>()
17 import sys
18 import tensorflow as tf
---> 19 import src.utils as utils
20 import logging
21 from tensorflow.contrib import slim
ImportError: No module named src.utils
I have the latest tensorflow installed and have also added models to the PYTHONPATH via:
export PYTHONPATH="$PYTHONPATH:/Pathway/tfmodels-1.9.0"
Do you know how I can resolve this import error?
To resolve this import error, I manually copied the missing functions from the following GitHub repo: https://github.com/andersy005/deep-learning-specialization-coursera/blob/master/02-Improving-Deep-Neural-Networks/week3/Programming-Assignments/tf_utils.py
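A sketch of how to use the downloaded file: save it as tf_utils.py next to the notebook (or add its folder to sys.path) and import the helpers from it; the path below is only an example:
import sys
sys.path.append("/path/to/folder/with/tf_utils")  # hypothetical location of the downloaded tf_utils.py
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict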