Error on "from sklearn.neural_network import MLPRegressor" - Python

I am building a neural network in Python. When I ran the code on my laptop, everything was smooth; when I sent it to my friend, it produced the error below. I tried uninstalling and reinstalling Python, pip, and sklearn. Nothing worked.
ImportError: DLL load failed.
Can anyone please help?
The complete error is the following:
RESTART: C:\D-drive\AUB 2019-2020\Fall\CHEN499\Mohamad Ibrahim\499 research\neural networks\example 6\NN1.py
Traceback (most recent call last):
File "C:\D-drive\AUB 2019-2020\Fall\CHEN499\Mohamad Ibrahim\499 research\neural networks\example 6\NN1.py", line 1, in <module>
from sklearn.neural_network import MLPRegressor
File "C:\Users\jz08\AppData\Local\Programs\Python\Python37-32\lib\site-packages\sklearn\__init__.py", line 75, in <module>
from .utils._show_versions import show_versions
File "C:\Users\jz08\AppData\Local\Programs\Python\Python37-32\lib\site-packages\sklearn\utils\_show_versions.py", line 12, in <module>
from ._openmp_helpers import _openmp_parallelism_enabled
ImportError: DLL load failed: The specified module could not be found.
Thank you.

I had the same problem and resolved it only by going back to one of the previous versions of scikit-learn (0.20.2): try downgrading with pip install scikit-learn==0.20.2.
I'm aware that this is a sub-optimal solution, but hopefully you'll be able to advance in whatever project you are currently working on. Hope this helps!

Related

Problems with NumPy version with Python 3.10

I need help fixing a NumPy problem. I have been trying for two days and didn't get far.
I have installed Python 3.10 and am using PyCharm.
When I debug my app I keep getting the following error:
Traceback (most recent call last):
  File "C:\Users\Administrator\PycharmProjects\PaySlip_New\main.py", line 2, in <module>
    from src.common.ParserFactory import ParserFactory
  File "C:\Users\Administrator\PycharmProjects\PaySlip_New\src\common\ParserFactory.py", line 1, in <module>
    from src.parsers.regular_parser import Parser
  File "C:\Users\Administrator\PycharmProjects\PaySlip_New\src\parsers\regular_parser.py", line 1, in <module>
    from src.common.ParserMain import ParserMain
  File "C:\Users\Administrator\PycharmProjects\PaySlip_New\src\common\ParserMain.py", line 1, in <module>
    import pandas as pd
  File "C:\Users\Administrator\PycharmProjects\PaySlip_New\venv\lib\site-packages\pandas\__init__.py", line 16, in <module>
    raise ImportError(
ImportError: Unable to import required dependencies:
numpy:

IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!

Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.

We have compiled some common reasons and troubleshooting tips at:

    https://numpy.org/devdocs/user/troubleshooting-importerror.html

Please note and check the following:

  * The Python version is: Python3.10 from "C:\Users\Administrator\PycharmProjects\PaySlip_New\venv\Scripts\python.exe"
  * The NumPy version is: "1.19.4"

and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.

Original error was: No module named 'numpy.core._multiarray_umath'
When I check the installed version from cmd I get this:
C:\Users\Administrator>pip3 show numpy
Name: numpy
Version: 1.21.4
pyvenv.cfg:
home = C:\Users\Administrator\AppData\Local\Programs\Python\Python310
implementation = CPython
version_info = 3.10.6.final.0
virtualenv = 20.13.0
Please help me. Thank you!
The error message says the NumPy version is "1.19.4", but you expect it to be 1.21.4, so the interpreter running your app is not seeing the numpy that pip3 reports. Your pandas version seems to be incompatible with numpy 1.19.4.
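One quick way to see which interpreter and package versions your app actually picks up (as opposed to what a global pip3 show reports) is to query them from inside that interpreter. A minimal stdlib-only sketch; "pip" is used as a stand-in package name so it runs anywhere, substitute "numpy" in your environment:

```python
import sys
from importlib import metadata

# The interpreter actually executing this script; compare it with the
# venv's Scripts\python.exe shown in the error message.
print(sys.executable)

# Version of a package as seen by *this* interpreter ("pip" is a
# stand-in here; substitute "numpy").
print(metadata.version("pip"))
```

If the interpreter path or the reported version differs from what cmd shows, the script is running in a different environment than the one you inspected.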

How to fix "ERROR: Failed building wheel for pytorch3d" error on Colab?

I'm trying to use a very cool machine learning Colab: https://colab.research.google.com/drive/1eQLZrNYRZMo9zdnGGccE0hFswGiinO-Z?usp=sharing Running their steps as-is, I keep getting ERROR: Failed building wheel for pytorch3d.
After much Googling, I've tried for instance replacing the install line with
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git#stable'
and also
!pip install pytorch3d
The former doesn't work. The latter makes another issue arise:
"ImportError: /usr/local/lib/python3.6/dist-packages/pytorch3d/_C.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZNK2at6Tensor7is_cudaEv"
I also tried doing !pip install torch==1.6.0 which causes
Traceback (most recent call last):
File "./meshrcnn/demo/demo.py", line 11, in <module>
from detectron2.data import MetadataCatalog
File "/usr/local/lib/python3.6/dist-packages/detectron2/data/__init__.py", line 4, in <module>
from .build import (
File "/usr/local/lib/python3.6/dist-packages/detectron2/data/build.py", line 14, in <module>
from detectron2.structures import BoxMode
File "/usr/local/lib/python3.6/dist-packages/detectron2/structures/__init__.py", line 6, in <module>
from .keypoints import Keypoints, heatmaps_to_keypoints
File "/usr/local/lib/python3.6/dist-packages/detectron2/structures/keypoints.py", line 6, in <module>
from detectron2.layers import interpolate
File "/usr/local/lib/python3.6/dist-packages/detectron2/layers/__init__.py", line 3, in <module>
from .deform_conv import DeformConv, ModulatedDeformConv
File "/usr/local/lib/python3.6/dist-packages/detectron2/layers/deform_conv.py", line 10, in <module>
from detectron2 import _C
ImportError: /usr/local/lib/python3.6/dist-packages/detectron2/_C.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe28TypeMeta21_typeMetaDataInstanceISt7complexIdEEEPKNS_6detail12TypeMetaDataEv
I have run !pip install mmcv-full===1.2.1 -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.7.0/index.html to try to fix that, but the error persists.
Does anyone have ideas for how to make the Colab environment work?
I got the same error and found your post. I tried to install the current master with
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git'
and after downloading and compiling pytorch3d==0.4.0 it worked correctly.
I think what solved the problem was downloading and compiling pytorch3d, so that it got properly linked with CUDA: if pip downloads a precompiled wheel instead, you'll probably get that undefined-symbol error.
The only other change I made: I had to remove a demo=True from the call to group_keypoints() in the bounding-rectangle calculation cell (it said that parameter was unknown).
I ran into the same problem on Google Colab and solved it by following
https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md
which lists the required install command for your current machine. In general, from inside IPython, Google Colab, or a Jupyter notebook, you can install with
import sys
import torch
pyt_version_str = torch.__version__.split("+")[0].replace(".", "")
version_str = "".join([
    f"py3{sys.version_info.minor}_cu",
    torch.version.cuda.replace(".", ""),
    f"_pyt{pyt_version_str}"
])
!pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
You have to restart your runtime after installing this. In my case it worked perfectly in Google Colab with a GPU instance.

Making torch data set error "cannot import name 'read_data_sets' from 'dataloader'"

I'm trying to create my own torch dataset class and ran into this problem:
Traceback (most recent call last):
File "us_name_train.py", line 10, in <module>
from dataloader.usname_dl import NameDataset
File "C:\ProgramData\Anaconda3\lib\site-packages\dataloader\__init__.py", line 1, in <module>
from dataloader import read_data_sets
ImportError: cannot import name 'read_data_sets' from 'dataloader' (C:\ProgramData\Anaconda3\lib\site-packages\dataloader\__init__.py)
I've seen people post about this problem, but I think mine is extra odd, because the usual solution is to change 'dataloader' to 'DataLoader' (a typo that was supposedly fixed in 2018). But my file really is called 'dataloader.py' in the torch library, and when I look through the file I do see the 'read_data_sets' function.
Also, when I change 'import dataloader' to 'import DataLoader' it says it can't find the module, but with 'import dataloader' it finds the module, just not the function 'read_data_sets'. Other people had this problem because they created their own module called dataloader, but I definitely don't have anything named 'dataloader' in my project dir. Has anyone else dealt with this issue?
I solved it by updating pytorch using
pip install --upgrade torch torchvision
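If upgrading doesn't resolve it, it can also help to check which file the 'dataloader' name actually resolves to, since a third-party package in site-packages can shadow the one you expect. A stdlib-only sketch, using "json" as a stand-in module name so it runs anywhere:

```python
import importlib.util

# Locate the file a module name actually resolves to. Substitute
# "dataloader" for "json" to see whether the package being imported
# is the one you think it is.
spec = importlib.util.find_spec("json")
print(spec.origin)
```

If the printed path points at an unexpected package under site-packages, uninstalling that package (or renaming your import) is usually the fix.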

Tensorflow, Python: ImportError: undefined symbol: _PyUnicode_AsWideCharString

I keep getting the following error, when using tensorflow in PyCharm:
/home/user/tensorflow/bin/python /home/user/PycharmProjects/TensorPlay/hello.py
Traceback (most recent call last):
File "/home/user/PycharmProjects/TensorPlay/hello.py", line 2, in <module>
import tensorflow as tf
File "/home/user/tensorflow/lib/python3.5/site-packages/tensorflow/__init__.py", line 24, in <module>
from tensorflow.python import *
File "/home/user/tensorflow/lib/python3.5/site-packages/tensorflow/python/__init__.py", line 27, in <module>
import ctypes
File "/usr/lib/python3.5/ctypes/__init__.py", line 7, in <module>
from _ctypes import Union, Structure, Array
ImportError: /home/user/tensorflow/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so: undefined symbol: _PyUnicode_AsWideCharString
Process finished with exit code 1
hello.py is this simple example code:
import tensorflow as tf
node1 = tf.constant(3.0, tf.float32)
node2 = tf.constant(4.0)
print(node1, node2)
PyCharm detects all the Tensorflow elements and autocompletes everything I want.
I also tried running it in the virtualenv from the console.
Anything Python-related leads to the same error. I tried to upgrade tensorflow with
source ~/tensorflow/bin/activate
pip3 install --upgrade tensorflow
and got the exact same error (just instead of hello.py the error was in the pip3 file).
Any suggestions?
EDIT:
I think I see the problem. Might it be that my virtualenv wants Python 3.5.3? I think with the last upgrade my Linux upgraded to Python 3.5.4. How can I fix it without creating a new virtualenv? And how can I make sure this doesn't happen on future updates?
I could only fix the issue by deleting the old virtualenv and setting up a new one.
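Before rebuilding, you can confirm this kind of mismatch by comparing the version of the interpreter actually running against the base installation the virtualenv points at. A minimal stdlib-only sketch:

```python
import sys
import platform

# The exact version of the interpreter executing this script. If the
# system Python was upgraded (e.g. 3.5.3 -> 3.5.4) after the venv was
# created, compiled extension modules such as _ctypes may no longer
# match and can fail with undefined-symbol errors.
print(platform.python_version())

# The base installation this (virtual) environment wraps.
print(sys.base_prefix)
```

If the base installation has moved or changed version since the venv was created, recreating the venv against the current system Python is the reliable fix.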

Installing python-cv from

I am following the installation process stated here. I am stuck on step 7; I get an ImportError stating:
RuntimeError: module compiled against API version 9 but this version of numpy is 7
Traceback (most recent call last):
  File "<pyshell#0>", line 1, in <module>
    import cv2
ImportError: numpy.core.multiarray failed to import
The instructions recommend installing the latest version of opencv, so by trial and error I tried an earlier version to see what would happen... and I get the same thing.
Could anyone point out what I'm doing wrong? If there is no feasible solution, is there a more stable way of doing this?
Thanks in advance.
Someone experienced a similar error here. Make sure your numpy is up to date (pip install --upgrade numpy) and don't forget to change the path to the new bin directory in your environment variable.
