Continuing issue installing GDAL for use with Python (Mac User)

For an upcoming research project I am going to need to use Python with GIS data (mostly rasters). I have experience using Matlab and R; however, Python is still a relative unknown to me. I've been able to get Anaconda on my machine and download the requisite packages I needed and import them successfully (e.g. Richdem); however, gdal has been a continuous pain.
import gdal
in Spyder results in...
ImportError: dlopen(/Users/matthew/anaconda3/lib/python3.6/site-packages/osgeo/_gdal.cpython-36m-darwin.so, 2): Library not loaded: @rpath/libfontconfig.1.dylib
Referenced from: /Users/matthew/anaconda3/lib/libpoppler.78.dylib
Reason: Incompatible library version: libpoppler.78.dylib requires version 14.0.0 or later, but libfontconfig.1.dylib provides version 13.0.0
I have tried:
1) updating conda
2) updating anaconda
3) updating python
4) installing GDAL through pip
5) reinstalling GDAL with conda-forge
conda list gdal
# Name                    Version          Build               Channel
gdal                      2.4.1            py36h7eb7563_3      conda-forge
libgdal                   2.4.1            h1405c63_3          conda-forge
conda info
active environment : None
user config file : /Users/matthew/.condarc
populated config files : /Users/matthew/.condarc
conda version : 4.6.14
conda-build version : 3.17.8
python version : 3.6.8.final.0
base environment : /Users/matthew/anaconda3 (writable)
channel URLs : https://conda.anaconda.org/conda-forge/osx-64
https://conda.anaconda.org/conda-forge/noarch
package cache : /Users/matthew/anaconda3/pkgs
/Users/matthew/.conda/pkgs
envs directories : /Users/matthew/anaconda3/envs
/Users/matthew/.conda/envs
platform : osx-64
user-agent : conda/4.6.14 requests/2.22.0 CPython/3.6.8 Darwin/17.6.0 OSX/10.13.5
UID:GID : 501:20
netrc file : None
offline mode : False
I've spent several hours googling and looking around the Stack Exchange before posting here. I would love some insights and any thoughts anyone may have on how to resolve this issue.

I see this is an old question but hopefully it helps others. The proper import would be
from osgeo import gdal
rather than
import gdal
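As a quick sanity check once the osgeo import works, something like the following should run (a minimal sketch; the raster path is hypothetical):
from osgeo import gdal

gdal.UseExceptions()  # raise Python exceptions instead of silently returning None
ds = gdal.Open("/path/to/some_raster.tif")  # hypothetical raster file
band = ds.GetRasterBand(1)
print(gdal.__version__, ds.RasterXSize, ds.RasterYSize, band.GetNoDataValue())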

Related

Using rpy2 in python, but can not find R package(arm64) in conda (miniforge3 arm64)

I have tried conda install -c conda-forge r-Cubist, but there is no arm64 package in the arm64 channel.
CRAN has the newest release as an arm64 binary, so I downloaded the release version (macOS binaries: r-release (arm64)), put the package into /Users/rui/miniforge3/lib/R/library, and ran importr('Cubist'),
but the error is:
rpy2.rinterface_lib.embedded.RRuntimeError: Error in library.dynam(lib, package, package.lib) : shared object ‘Cubist.dylib’ not found.
I checked the difference between the package downloaded from CRAN and the package downloaded with conda install -c conda-forge r-packagename: the lib folder of the former has an "so" file, and the latter has a "dylib" file.
How can I use the arm64 R package from the CRAN website in Python? Or how can I get the 'dylib' file into the R package?
Update:
Following the question Using conda to build and install local or custom R package, I tried
conda skeleton cran <pckg>
conda-build r-<pckg>
conda install --use-local r-<pcgk>
However, this requires r-base=3.5, while the arm64 architecture requires r-base==4.2.1:
Unsatisfiable dependencies for platform osx-arm64: {'r-base=3.5'}
Update:
The best way to solve this problem is to use the code that @onyambu provided and change the environment to Google Colab.
I cannot reproduce/work out your case since I'm running Windows, but you can try the following in Python:
from rpy2.robjects.packages import importr
from rpy2.robjects import StrVector

# one-time execution to build & install the Cubist R package
utils = importr('utils')
utils.chooseCRANmirror(ind=1)
utils.install_packages(StrVector(['devtools']))
devtools = importr('devtools')
devtools.install_github('topepo/Cubist')
# if it succeeds you can then import the package
Cubist = importr('Cubist')
The install_github step may fail if you don't have a compiler toolchain already set up (e.g., R on Windows needs the Rtools package).
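As a quick way to confirm the package is visible to rpy2 afterwards (a sketch; it assumes the install above completed):
from rpy2.robjects.packages import importr

utils = importr('utils')
print(utils.packageVersion('Cubist'))  # the installed Cubist version as reported by R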

basemap-data-hires not found despite being installed

I have the same problem as this post:
Declaring a var usable by another function using a import in a secondary script, but the answer does not work on my side.
For context: basemap and basemap-data-hires are installed, yet when using resolution = 'f' it triggers the following error:
OSError: Unable to open boundary dataset file. Only the 'crude' and 'low',
resolution datasets are installed by default.
If you are requesting an, 'intermediate', 'high' or 'full'
resolution dataset, you may need to download and install those
files separately with
conda install -c conda-forge basemap-data-hires.
Here is the conda list output:
C:\Users\AlxndrLhr>conda list
# packages in environment at C:\Users\AlxndrLhr\Anaconda3\envs\map:
#
# Name                    Version          Build               Channel
basemap                   1.2.2            py39h689385a_5      conda-forge
basemap-data              1.3.2            pyhd8ed1ab_0        conda-forge
basemap-data-hires        1.3.2            pyhd8ed1ab_0        conda-forge
As you can see, basemap-data-hires is present. I tried installing it in the base environment of conda, didn't work either.
Before basemap 1.3.0, the library was packaged in conda-forge by splitting the heavy data files into a separate basemap-data-hires conda package (whose files were installed in the share folder).
Since basemap 1.3.0, a complete reorganisation of the basemap package has been done upstream by splitting the library into basemap, basemap-data and basemap-data-hires. These three packages are Python packages and get installed in the corresponding Python site-packages folder. This new structuring is propagated to the conda-forge packages.
Your installation is mixing the old basemap conda package (pre-1.3.0) with the new basemap-data-hires conda package (post-1.3.0). You can solve the issue by pinning versions during installation, either the following to install the latest basemap:
conda install "basemap>=1.3.0" "basemap-data-hires>=1.3.0"
or the following to install the pre-1.3.0 version:
conda install "basemap==1.2.2" "basemap-data-hires==1.2.2"
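To verify the fix, a minimal full-resolution plot like the following should no longer raise the OSError (a sketch; the map extent is arbitrary):
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

# resolution='f' only works when basemap-data-hires matches the installed basemap version
m = Basemap(projection='cyl', resolution='f',
            llcrnrlon=-5, llcrnrlat=48, urcrnrlon=10, urcrnrlat=55)
m.drawcoastlines()
plt.show()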

conda install and conda build result in different dependency versions

I'm trying to build a package which includes h5py. When using conda build, it seems to install the wrong version of the dependency. It installs 3.2.1-py37h6c542dc_0, which includes hdf5: 1.10.6-nompi_h6a2412b_1114.
The problem is that this hdf5 lib seems to have this setting:
(Read-Only) S3 VFD: yes
This causes an error for me. When just running conda install h5py==3.2.1, it does install the right version (hdf5-1.10.6-nompi_h3c11f04_101).
Why is there a difference?
"Why is there a difference?
Using conda install h5py=3.2.1 additionally includes all the previous constraints in the current environment, whereas during a conda build run, a new environment is created only with requirements that the package specifies. That is, it is more like running conda create -n foo h5py=3.2.1.
So, that covers the mechanism, but we can also look at the particular package dependencies to see why the current environment constrains to the older hdf5-1.10.6-nompi_h3c11f04_101, which OP states is preferred. Here is the package info for the two:
hdf5-1.10.6-nompi_h6a2412b_1114
$ mamba search --info conda-forge/linux-64::hdf5[version='1.10.6',build='nompi_h6a2412b_1114']
hdf5 1.10.6 nompi_h6a2412b_1114
-------------------------------
file name : hdf5-1.10.6-nompi_h6a2412b_1114.tar.bz2
name : hdf5
version : 1.10.6
build : nompi_h6a2412b_1114
build number: 1114
size : 3.1 MB
license : LicenseRef-HDF5
subdir : linux-64
url : https://conda.anaconda.org/conda-forge/linux-64/hdf5-1.10.6-nompi_h6a2412b_1114.tar.bz2
md5 : 0a2984b78f51148d7ff6219abe73509e
timestamp : 2021-01-08 23:10:11 UTC
dependencies:
- libcurl >=7.71.1,<8.0a0
- libgcc-ng >=9.3.0
- libgfortran-ng
- libgfortran5 >=9.3.0
- libstdcxx-ng >=9.3.0
- openssl >=1.1.1i,<1.1.2a
- zlib >=1.2.11,<1.3.0a0
hdf5-1.10.6-nompi_h3c11f04_101
$ mamba search --info conda-forge/linux-64::hdf5[version='1.10.6',build='nompi_h3c11f04_101']
hdf5 1.10.6 nompi_h3c11f04_101
------------------------------
file name : hdf5-1.10.6-nompi_h3c11f04_101.tar.bz2
name : hdf5
version : 1.10.6
build : nompi_h3c11f04_101
build number: 101
size : 3.0 MB
license : HDF5
subdir : linux-64
url : https://conda.anaconda.org/conda-forge/linux-64/hdf5-1.10.6-nompi_h3c11f04_101.tar.bz2
md5 : 9f1ccc4d36edf8ea15ce19f52cf6d601
timestamp : 2020-07-31 12:26:29 UTC
dependencies:
- libgcc-ng >=7.5.0
- libgfortran-ng >=7,<8.0a0
- libstdcxx-ng >=7.5.0
- zlib >=1.2.11,<1.3.0a0
The difference here is that the latter works with older versions of libgcc-ng, libstdcxx-ng, and libgfortran-ng (below 9.3.0), as well as has no constraint on openssl or libcurl. So, we can guess that the current environment where the conda install h5py=3.2.1 was invoked has one of these restrictions.
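If you want to confirm at runtime which HDF5 build your h5py ended up linked against (a quick check; it reports the linked HDF5 version and build info, not the S3 VFD flag specifically):
import h5py

print(h5py.version.hdf5_version)  # the HDF5 version h5py was compiled against
print(h5py.version.info)          # fuller summary of the h5py build configuration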

Launching Spyder and Jupyter Notebook causes ImportError when image not found

I'm new to python and the virtual environment stuff. I'm facing issues opening jupyter notebook and Spyder after updating conda.
Here is some info about the versions I have:
$ conda info
active environment : None '''is it caused by this ? '''
user config file : /Users/-/.condarc
populated config files : /Users/-/.condarc
conda version : 4.5.4
conda-build version : 3.0.27
python version : 2.7.14.final.0
base environment : /Users/-/anaconda2 (writable)
channel URLs : https://repo.anaconda.com/pkgs/main/osx-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/free/osx-64
https://repo.anaconda.com/pkgs/free/noarch
https://repo.anaconda.com/pkgs/r/osx-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/pro/osx-64
https://repo.anaconda.com/pkgs/pro/noarch
package cache : /Users/-/anaconda2/pkgs
/Users/-/.conda/pkgs
envs directories : /Users/-/anaconda2/envs
/Users/-/.conda/envs
platform : osx-64
user-agent : conda/4.5.4 requests/2.18.4 CPython/2.7.14 Darwin/15.5.0 OSX/10.11.5
UID:GID : 501:20
netrc file : None
offline mode : False
Jupyter Notebook version : 5.5.0
Jupyter version : 4.4.0
Spyder version : 3.2.8
Spyder fails to launch, and when trying to launch Spyder from Anaconda Navigator I get this error:
/anaconda2/lib/python2.7/site-packages/zmq/backend/cython/__init__.py", line 6, in <module>
from . import (constants, error, message, context,
ImportError: dlopen(/Users/-/anaconda2/lib/python2.7/site-packages/zmq/backend/cython/error.so, 2): Library not loaded: @rpath/libsodium.23.dylib
Referenced from: /Users/-/anaconda2/lib/libzmq.5.dylib
Reason: image not found
Trying to launch jupyter notebook from anaconda throws the same image not found error:
/anaconda2/lib/python2.7/site-packages/zmq/backend/cython/__init__.py", line 6, in <module>
from . import (constants, error, message, context,
ImportError: dlopen(/Users/-/anaconda2/lib/python2.7/site-packages/zmq/backend/cython/error.so, 2): Library not loaded: @rpath/libsodium.23.dylib
Referenced from: /Users/-/anaconda2/lib/libzmq.5.dylib
Reason: image not found
I found out that after updating, a new Anaconda2 folder was initialized containing only the zmq file /anaconda2/lib/python2.7/site-packages/zmq/backend/cffi/__pycache__.
Everything was working smoothly but after following anaconda instructions and recommendations to update I started getting those errors.
How can I resolve this issue? Is it because the active environment is None?
I faced the same issue. From https://github.com/jupyter/notebook/issues/1632, it appears that some dependencies get messed up when updating conda, so try:
conda remove zeromq
conda install zeromq
Then try:
conda update conda-build
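After reinstalling, a quick check that the rebuilt libzmq actually loads (a minimal sketch):
import zmq

# if the dylib issue is fixed, this imports cleanly and prints the linked libzmq and pyzmq versions
print(zmq.zmq_version(), zmq.pyzmq_version())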
Good luck,

"import torch" giving error "from torch._C import *, DLL load failed: The specified module could not be found"

I am currently using Python 3.5.5 on Anaconda and I am unable to import torch. It is giving me the following error in Spyder:
Python 3.5.5 |Anaconda, Inc.| (default, Mar 12 2018, 17:44:09) [MSC v.1900 64 bit (AMD64)]
Type "copyright", "credits" or "license" for more information.
IPython 6.2.1 -- An enhanced Interactive Python.
import torch
Traceback (most recent call last):
File "<ipython-input-1-eb42ca6e4af3>", line 1, in <module>
import torch
File "C:\Users\trish\Anaconda3\envs\virtual_platform\lib\site-
packages\torch\__init__.py", line 76, in <module>
from torch._C import *
ImportError: DLL load failed: The specified module could not be found.
Many suggestions on the internet say that the working directory should not be the same directory that the torch package is in, however I've manually set my working directory to C:/Users/trish/Downloads, and I am getting the same error.
Also I've already tried the following: reinstalling Anaconda and all packages from scratch, and I've ensured there is no duplicate "torch" folder in my directory.
Pls help! Thank you!
I had a similar problem in Windows 10.
Solution:
Download win-64/intel-openmp-2018.0.0-8.tar.bz2 from https://anaconda.org/anaconda/intel-openmp/files
Extract it and put the dll files in Library\bin into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin
Make sure your cuda directory is added to your %PATH% environment variable
I had the same problem. In my case I didn't want the GPU version of pytorch.
I uninstalled it. The version was pytorch: 0.3.1-py36_cuda80_cudnn6he774522_2 peterjc123.
The problem was with cuda and cudnn. I then installed the CPU package with the following command and now it works!
conda install -c peterjc123 pytorch-cpu
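Once the CPU package is in place, a small smoke test like this should import without the DLL error (a sketch):
import torch

x = torch.zeros(2, 3)  # build a tensor on the CPU only
print(torch.__version__)
print(x + 1)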
I also encountered the same problem when I used a conda environment with python 3.6.8 and pytorch installed by conda from channel -c pytorch.
Here is what worked for me:
1:) conda create -n envName python=3.6 anaconda
2:) conda update -n envName conda
3:) conda activate envName
4:) conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
and then tested torch with the given code:
5:) python -c "import torch; print(torch.cuda.get_device_name(0))"
Note: the 5th step will return your GPU name if you have a CUDA-compatible GPU.
Summary: I just created a conda environment containing the whole Anaconda distribution, then, to tackle the issue of the unmatched conda version, updated conda in the new environment from the base environment, and then installed PyTorch in that environment and tested it.
For the CPU version, here is the link to my other answer: https://gist.github.com/peterjc123/6b804651288e76db7b5fabe5348e1f03#gistcomment-2842825
https://gist.github.com/peterjc123/6b804651288e76db7b5fabe5348e1f03#gistcomment-2842837
Had the same problem and fixed it by re-installing numpy with mkl (Intel's math kernel library)
https://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy
Download the right .whl for your machine. For me it was numpy‑1.14.5+mkl‑cp36‑cp36m‑win_amd64.whl (python 3.6, windows, 64-bit)
and then install using pip.
pip install numpy‑1.14.5+mkl‑cp36‑cp36m‑win_amd64.whl
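To confirm that the MKL-linked numpy is the one actually in use, you can check the build configuration (a quick sketch):
import numpy as np

print(np.__version__)
np.show_config()  # an MKL build lists blas_mkl_info / lapack_mkl_info sections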
I am using a Windows 10 computer with an NVIDIA GeForce graphics card. NVIDIA showed I had CUDA 10.1, but I was getting this error when running import torch in Jupyter Lab and suspected it had something to do with CUDA support.
I fixed this problem by downloading and installing the CUDA Toolkit directly from NVIDIA. It installed all required Visual Studio components. When I returned to Jupyter Lab, import torch ran without error.
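As a quick follow-up check that PyTorch now sees the CUDA toolkit (a sketch):
import torch

# True plus a CUDA version string indicates the GPU build loaded its CUDA DLLs correctly
print(torch.cuda.is_available())
print(torch.version.cuda)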
Make sure you installed the right version of PyTorch for your environment. I had the same problem: I was using PyTorch on Windows but had the default package installed, which was meant for CUDA 8. So I reinstalled the CPU build of PyTorch, which was what I needed.
I had the same issue with running torch installed with pure pip and solved it by switching to conda.
Following steps:
uninstall python 3.6 from python.org (if exists)
install miniconda
install torch in conda ("conda install pytorch -c pytorch")
Issue with pip installation:
import torch
File "C:\Program Files\Python35\lib\site-packages\torch\__init__.py", line 78, in <module>
from torch._C import *
ImportError: DLL load failed: The specified module could not be found.
After switching to conda it works fine. I believe the issue was resolved by conda through installing the vs_redist 2017
vs2017_runtime 15.4.27004.2010 peterjc123
But I had tried it without conda and it did not help. I could not find how to check (and tweak) Python's vs_redist.
Windows 10 solution (this worked for my system):
I was having the same issue on my system. Previously I was using Python 3.5, and I created a virtual environment named pytorch_test using the virtualenv module because I didn't want to mess up my tensorflow installation (which took me a lot of time). I followed every instruction but it didn't seem to work. I installed Python 3.6.7 and added it to the path. Then I created the virtual environment using:
virtualenv --python=3.6 pytorch_test
Then go to the destination folder
cd D:\pytorch_test
and activate the virtual environment by entering the command in cmd:
.\Scripts\activate
After you do this the command prompt will show:
(pytorch_test) D:\pytorch_test>
Update pip if you have not done it before using:
(pytorch_test) D:\pytorch_test>python -m pip install --upgrade pip
Then go for installing numpy+mkl from the site:
https://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy
Choose the correct version from the list if you have python 3.6.7 go with the wheel file:
numpy‑1.15.4+mkl‑cp36‑cp36m‑win_amd64.whl (For 64 bit)
(Note: if the whole thing does not work, just go with a simple numpy installation and an mkl installation separately)
Then go for installing openmp using:
(pytorch_test) D:\pytorch_test>pip install intel-openmp
Now you are done with the prerequisites. To install pytorch go to the previous versions site:
https://pytorch.org/get-started/previous-versions/
Here select the suitable version from the list of Windows Binaries. For example I am having CUDA 9.0 installed in my system with python 3.6.7 so I went with the gpu version:
cu90/torch-1.0.0-cp36-cp36m-win_amd64.whl
(There are two available versions 0.4.0 and 1.0.0 for pytorch, I went with 1.0.0)
After downloading the file, install it using pip (assuming the whl file is in D:). You have to do this from the virtual environment pytorch_test itself:
(pytorch_test) D:\pytorch_test>pip install D:\torch-1.0.0-cp36-cp36m-win_amd64.whl
Prerequisites like six and pillow will be installed automatically.
Then once everything is done, install the models using torchvision.
Simply type:
(pytorch_test) D:\pytorch_test>pip install torchvision
To check everything is working fine try the following script:
import torch
test = torch.rand(4, 7)
print(test)
If everything was good then it won't be an issue. Whenever there is an issue like this, it is related to a version mismatch of one or more dependencies. This also occurred during my tensorflow installation.
Deactivate the virtual environment using the deactivate command in cmd:
(pytorch_test) D:\pytorch_test>deactivate
This is the output of pip list in my system:
Package      Version
------------ -----------
intel-openmp 2019.0
mkl          2019.0
numpy        1.16.2
Pillow       6.0.0
pip          19.0.3
setuptools   41.0.0
six          1.12.0
torch        1.0.0
torchvision  0.2.2.post3
wheel        0.33.1
Hope this helps. This is my first answer in this community; I hope you all find it helpful. I set up PyTorch today in the afternoon after trying all sorts of combinations. The same import problem occurred to me while installing CNTK and tensorflow. Anyway, I kept them in separate virtual environments so that I can use them anytime.
