error running sphinx due to dyld: Library not loaded: @rpath/Python

I'm trying to use sphinx to build documentation of a package I'm developing. The commands I use used to work. It looks like a link to a library has disappeared on my machine. I'm using a Mac.
> sphinx-autobuild . _build/html
dyld: Library not loaded: @rpath/Python
Referenced from: /Users/XXX/Library/Enthought/Canopy_64bit/User/bin/python
Reason: image not found
where XXX is my user name
Most similar question I can find is pyside-rcc "dyld: Library not loaded:..."
but the answer provided seems to be to copy over a bunch of files from one directory to another, which seems to risk causing other configuration problems.
Other answers relate to issues with:
virtualenv (which I am not using): `dyld: Library not loaded` error preventing virtualenv from loading
brew + awscli (again, not being used by me): How to resolve "dyld: Library not loaded: @executable_path.." error
Based on the questions I've seen, it looks like I should fix this by changing the path. Currently
>echo $PATH
Applications/anaconda/bin:/Users/XXX/Library/Enthought/Canopy_64bit/User/bin:/Users/XXX/anaconda/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/Library/TeX/texbin:/opt/X11/bin
My .bash_profile is
# added by Anaconda 2.1.0 installer
export PATH="/Users/XXX/anaconda/bin:$PATH"
# Added by Canopy installer on 2016-08-08
# VIRTUAL_ENV_DISABLE_PROMPT can be set to '' to make the bash prompt show that Canopy is active, otherwise 1
alias activate_canopy="source '/Users/XXX/Library/Enthought/Canopy_64bit/User/bin/activate'"
VIRTUAL_ENV_DISABLE_PROMPT=1 source '/Users/XXX/Library/Enthought/Canopy_64bit/User/bin/activate'
# added by Anaconda3 4.3.1 installer
export PATH="/Applications/anaconda/bin:$PATH"
That activate command that the Canopy installer added looks to be part of the problem.

I fixed this by removing
alias activate_canopy="source '/Users/XXX/Library/Enthought/Canopy_64bit/User/bin/activate'"
VIRTUAL_ENV_DISABLE_PROMPT=1 source '/Users/XXX/Library/Enthought/Canopy_64bit/User/bin/activate'
from my .bash_profile. Still waiting to see if this breaks Canopy.
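For reference, here is a small Python sketch (not part of the original answer) of why the order of the PATH entries above matters: the shell scans PATH left to right, and because the Canopy bin directory precedes the Anaconda user install, a bare python resolves to Canopy's broken interpreter.

```python
import os

# The PATH from the question (user name redacted as XXX), used as sample data.
path = ("Applications/anaconda/bin:"
        "/Users/XXX/Library/Enthought/Canopy_64bit/User/bin:"
        "/Users/XXX/anaconda/bin:/usr/bin:/bin:/usr/sbin:/sbin")

entries = path.split(":")
# The first directory containing an executable named `python` wins.
canopy = entries.index("/Users/XXX/Library/Enthought/Canopy_64bit/User/bin")
anaconda = entries.index("/Users/XXX/anaconda/bin")
print(canopy < anaconda)  # True: Canopy's python shadows Anaconda's
```

Removing the Canopy activate lines from .bash_profile changes this ordering, which is why the fix above works.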

Related

No module named 'meshpy._triangle'

I installed meshpy (using python 2.7) following the instructions here on my Ubuntu 16.04 LTS and am trying to run examples from here after browsing into the meshpy directory. Part of the example that I'm trying to run is below:
from __future__ import division
from __future__ import absolute_import
import meshpy.triangle as triangle
but I keep getting error No module named meshpy._triangle
Does anyone have a hint of what I might be missing?
Likely you have created a file or folder named meshpy within your Python package, which leads to module shadowing; renaming your file should fix the problem.
See more at the following links:
The name shadowing trap
Python: Problem with local modules shadowing global modules
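The shadowing effect is easy to demonstrate in a self-contained way. This sketch uses a made-up module name, shadowdemo, rather than meshpy itself: a module file whose directory sits at the front of sys.path (the same position a script's own directory occupies) wins over anything installed in site-packages.

```python
import os
import sys
import tempfile

# Create a throwaway module file and put its directory at the front of
# sys.path -- the same position a script's own directory occupies.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "shadowdemo.py"), "w") as f:
    f.write("VALUE = 'local copy wins'\n")

sys.path.insert(0, tmp)
import shadowdemo  # resolves to the local file, shadowing any installed package

print(shadowdemo.VALUE)  # local copy wins
```

If a local file named meshpy.py (or a meshpy/ folder) exists next to your script, the same mechanism hides the installed meshpy package, and meshpy._triangle can no longer be found.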
After an entire day of labor I realized that the Python packages I had were not correct and were causing conflicts. To begin with, here is the link to the installation documentation of meshpy, which I followed. Here is a pointwise summary of what I realized caused problems:
Step 1 says download the file, unzip it using the command given in the doc, and browse to the directory 'MeshPy-XXXXX', where 'XXXXX' refers to the version.
The issue in this step is that a file called CMakeLists.txt is missing from this directory, and while configuring in step 2 the system complains about the missing file.
The solution is either to download the git version instead of the direct download, as mentioned in the second part of step 1, or to manually copy the file CMakeLists.txt into the MeshPy-XXXXX directory. I chose the latter.
Step 2 asks us to browse to the directory and issue the command ./configure in the terminal. This didn't work for me. The directory contains a script called configure.py, so instead I issued python3.5 configure.py
If you issue python configure.py and python resolves to python2.7, you should make sure python2.7 has matplotlib and numpy installed, as meshpy depends on these packages.
The last part of step 2, where you need to issue the command python setup.py install, is a tricky part where things went crazy for me. Firstly, I issued python setup.py install, but what I should have done is issue python3.5 setup.py install (or, better, create an alias to python3.5 in bash).
When I pinned down the mistake, I started getting another error with both python2.7 and python3.5, the last three lines of which look like this:
bpl-subset/bpl_subset/boost/python/detail/wrap_python.hpp:50:23: fatal error: pyconfig.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
When I looked on Stack Overflow for similar errors, I came across this article, used the second solution in the post, and installed python2.7-dev/python3.5-dev, which solved the problem.
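Before compiling C extensions, you can ask Python where pyconfig.h should live; this is a sketch (not part of the original answer), and if the file is absent, the matching python-dev / pythonX.Y-dev package is missing.

```python
import os
import sysconfig

# pyconfig.h ships with the python-dev / pythonX.Y-dev package and lives
# in the interpreter's include directory.
include_dir = sysconfig.get_paths()["include"]
header = os.path.join(include_dir, "pyconfig.h")
print(header, "exists:", os.path.exists(header))
```

If the check prints False, installing the dev package for the exact interpreter version you build with (2.7 vs 3.5 here) resolves the fatal error above.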
Go to the installation page and click the 'Download MeshPy' link. Click 'Download Files'. Download the tar file and unzip it. Then copy the 'meshpy' folder into your Python lib directory, where other packages are stored. Hope it will solve the problem.

"Unable to locate the SpatiaLite library." Django

I'm trying to make Django's SQLite3 accept spatial queries. This tutorial suggests that I add this to settings:
SPATIALITE_LIBRARY_PATH = 'mod_spatialite'
Which produces this error:
django.core.exceptions.ImproperlyConfigured: Unable to load the
SpatiaLite library extension "mod_spatialite" because: The specified
module could not be found.
I also tried doing this :
SPATIALITE_LIBRARY_PATH = r'C:\Program Files (x86)\Spatialite\mod_spatialite-4.3.0a-win-x86\mod_spatialite-4.3.0a-win-x86\mod_spatialite.dll'
If I don't add this variable I receive this error when I migrate:
django.core.exceptions.ImproperlyConfigured: Unable to locate the
SpatiaLite library. Make sure it is in your library path, or set
SPATIALITE_LIBRARY_PATH in your settings.
Thank you..
Amusingly enough 5 days later I'm having the same issue. After a little bit of poking around I got it working:
Set
SPATIALITE_LIBRARY_PATH = 'mod_spatialite'
and extract ALL the DLL files from the mod_spatialite-x.x.x-win-x86.7z to your Python installation directory. The DLLs apparently need to be in the same folder as python.exe. Also, the mod_spatialite package needs to be 32- or 64-bit to match your Python installation. If you're missing some DLLs, you get the same "specified module could not be found" error regardless of which DLL file is missing, so it's a bit misleading.
Downloaded from http://www.gaia-gis.it/gaia-sins/
I used mod_spatialite stable version 4.3.0a x86 with Python 3.5.2 32-bit.
Other threads on the same issue with all sorts of answers:
Use spatialite extension for SQLite on Windows
Getting a working SpatiaLite + SQLite system for x64 c#
https://gis.stackexchange.com/questions/85674/sqlite-python-2-7-and-spatialite
On Ubuntu18.04,
adding SPATIALITE_LIBRARY_PATH = 'mod_spatialite.so'
with libsqlite3-mod-spatialite installed worked for me.
Note: The answer has mod_spatialite, while for me mod_spatialite.so worked.
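Putting the answers above together, here is a hedged settings.py fragment that picks the module name per platform. The two names are exactly the ones reported in this thread; your system may need a full path instead.

```python
import sys

# Windows: the bare module name works once all mod_spatialite DLLs sit
# next to python.exe; Linux (e.g. Ubuntu 18.04 with libsqlite3-mod-spatialite
# installed) needs the .so suffix.
if sys.platform.startswith("win"):
    SPATIALITE_LIBRARY_PATH = "mod_spatialite"
else:
    SPATIALITE_LIBRARY_PATH = "mod_spatialite.so"

print(SPATIALITE_LIBRARY_PATH)
```

If neither name loads, falling back to an absolute path to the .dll/.so (as attempted in the question) is the next thing to try.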
This is how to install SpatiaLite (almost) inside virtualenv for Python 3:
Download cyqlite, a special SQLite build with R-Tree enabled; it is required by GeoDjango.
Download mod_spatialite (Windows binaries are in the pink box at the bottom of the page) i.e. mod_spatialite-[version]-win-x86.7z
Unzip the mod_spatialite files (flatten the tree and ignore the folder in the archive) into the virtualenv Scripts folder.
This part I don't like, but I could not find a workable solution without touching the main Python 3 installation.
Rename or otherwise back up c:\Python35\DLLs\sqlite3.dll
From cyqlite, unzip the sqlite3.dll file into c:\Python35\DLLs\
Kudos: https://gis.stackexchange.com/a/169979/84121
I ran into this problem when trying to deploy GeoDjango on AWS Elastic Beanstalk. Turns out I needed to change SPATIALITE_LIBRARY_PATH = 'mod_spatialite.so' to SPATIALITE_LIBRARY_PATH = 'libspatialite.so' (installed at /usr/lib64/libspatialite.so after running sudo yum install libspatialite and sudo yum install libspatialite-devel from my .ebextensions).
How to correctly activate SpatiaLite in a virtual Django environment on Windows (7, 8, 10)
I hope my answer helps fellow developers who use the SQLite delivered by default with Django to manage their geographic data through the SpatiaLite binding, an extension also delivered by default with Python (3 or later). I'm starting from the assumption that you have a preconfigured virtual environment. The first thing to do is to download two archives, sqlite-dll-win32-x86-[version].zip and mod_spatialite-[version]-win-x86.7z, and unzip both into the same directory (overwrite if there are any conflicts).
Copy all the previously unzipped files and paste them into the Scripts directory of your virtual environment.
Restart your PC if necessary, deactivate and reactivate your virtual environment, and code...

Tensorflow can't find libcuda.so (CUDA 7.5)

I've installed the CUDA 7.5 toolkit, and Tensorflow inside an anaconda env. The CUDA driver is also installed. The folder containing the .so libraries is in LD_LIBRARY_PATH. When I import tensorflow I get the following error:
Couldn't open CUDA library libcuda.so. LD_LIBRARY_PATH:
/usr/local/cuda-7.5/lib64
In this folder, there exists a file named libcudart.so (which is actually a symbolic link to libcudart.so.7.5). So (just as a guess) I created a symbolic link to libcudart.so named libcuda.so. Now the library is found by Tensorflow, but as soon as I call tensorflow.Session() I get the following error:
F tensorflow/stream_executor/cuda/cuda_driver.cc:107] Check failed: f
!= nullptr could not find cuInit in libcuda DSO; dlerror:
/usr/local/cuda-7.5/lib64/libcudart.so.7.5: undefined symbol: cuInit
Any ideas?
For future reference, here is what I found out and what I did to solve this problem.
The system is Ubuntu 14.04 64 bit. The NVIDIA driver version that I was trying to install was 367.35. The installation resulted in an error towards the end, with message:
ERROR: Unable to load the kernel module 'nvidia-drm'
However, the CUDA samples compiled and ran with no problem, so the driver was at least partially installed correctly. But when I checked the version using:
cat /proc/driver/nvidia/version
The version I got was different (I don't remember exactly, but it was some 352 sub-version).
So I figured out I better remove all traces of the driver and re-install. I followed the instructions in the accepted answer here: https://askubuntu.com/questions/206283/how-can-i-uninstall-a-nvidia-driver-completely, except for the command that makes sure nouveau driver will be loaded in boot.
I finally reinstalled the most up-to-date NVIDIA driver (367.35). The installation finished with no errors and Tensorflow was able to load all libraries.
I think the problem began when someone who worked on the installation before me used apt-get to install the driver rather than a .run script. Not sure, however.
PS during installation there is a warning:
The distribution-provided pre-install script failed! Are you sure
you want to continue?
Looking at the logs I could locate this pre-install script, and its content is simply:
# Trigger an error exit status to prevent the installer from overwriting
# Ubuntu's nvidia packages.
exit 1
so it seems ok to install despite this warning.
I had this error on a couple of Ubuntu 16.04 machines. I tried just updating the NVIDIA drivers and Cuda toolkit hoping that apt would take care of replacing the missing file, but that didn't happen.
Here's a hopefully clear explanation of how I fixed an error like:
...libcuda.so.1: cannot open shared object file: No such file or directory
You are missing this libcuda.so.1 file apparently.
If you look at other SO posts, you will discover that libcuda.so.1 is actually a symbolic link (fancy Unix term for a thing that looks like a file but actually is just a pointer to another file). Specifically, it is a symbolic link to a libcuda.so.# file that is part of the NVIDIA graphics drivers!!! (not part of the Cuda toolkit). So if you do find wherever the package manager has put the libcuda.so.1 file on your system, you'll see it's pointing to this driver-related file:
$ ls /usr/lib/x86_64-linux-gnu/libcuda.so.1 -la
lrwxrwxrwx 1 root root 17 Oct 25 14:29 /usr/lib/x86_64-linux-gnu/libcuda.so.1 -> libcuda.so.410.73
Okay, so you need to make a symbolic link like the one you found, but where?
I.e., where is Tensorflow looking for this libcuda.so.1? Obviously not where your package manager stuck it.
It turns out that Tensorflow looks in the "load library path".
You can see this path like so:
$ echo $LD_LIBRARY_PATH
and what you get back should include the installed Cuda toolkit:
/usr/local/cuda/lib64
(The exact path might vary on your system)
If not, you need to add the toolkit to $LD_LIBRARY_PATH using some shell command like this (from the NVIDIA Toolkit install manual):
export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
(If you don't find anything in /usr/local/cuda you might not have the toolkit installed.)
Now that you know where Tensorflow looks on the $LD_LIBRARY_PATH for Cuda toolkit, you can add a symbolic link to the toolkit directory.
sudo ln -s /usr/lib/x86_64-linux-gnu/libcuda.so.410.73 /usr/local/cuda/lib64/libcuda.so.1
Or you can just listen to other posts that don't explain what's going on but instead tell you to try installing more things in a bunch of different ways. Didn't work for me though :(
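The lookup described above can be mimicked in a few lines of Python. This is only a sketch: the dynamic loader's real search order also consults the ldconfig cache and default system directories, but scanning LD_LIBRARY_PATH is enough to see where Tensorflow expects the symlink to land.

```python
import os

def find_in_ld_path(name, ld_path=None):
    """Return the first LD_LIBRARY_PATH entry containing `name`, else None."""
    if ld_path is None:
        ld_path = os.environ.get("LD_LIBRARY_PATH", "")
    for d in ld_path.split(":"):
        candidate = os.path.join(d, name)
        if d and os.path.exists(candidate):
            return candidate
    return None

# With the toolkit dir on the path, this is where libcuda.so.1 must appear:
print(find_in_ld_path("libcuda.so.1", "/usr/local/cuda/lib64"))
```

If this prints None even after the driver is installed, that is exactly the situation the `ln -s` command above fixes.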

dyld: Library not loaded: /usr/local/libodep/lib/libintl.8.dylib

I want to use unoconv with LibreOffice 4.2, but it seems like I can not start the python of LibreOffice.
When I run
/Applications/LibreOffice.app/Contents/MacOS/LibreOfficePython.framework/Versions/3.3/Resources/Python.app/Contents/MacOS/LibreOfficePython -v
The error is:
dyld: Library not loaded: /usr/local/libodep/lib/libintl.8.dylib
Referenced from: /Applications/LibreOffice.app/Contents/MacOS/LibreOfficePython.framework/Versions/3.3/Resources/Python.app/Contents/MacOS/LibreOfficePython
Reason: image not found
Trace/BPT trap: 5
Not directly an answer to the original question, but I ended up on this page after searching for the same error message while running gpg. It turns out gettext was mangled during an upgrade of macOS. The following sorted it:
brew install gettext
brew link gettext --force
I just ran into the same issue. The fix is ugly, but essentially follows Michael's approach. The lib is provided by MacPorts and installed there:
> find /opt/ -name 'libintl*'
/opt/local/include/libintl.h
/opt/local/lib/libintl.8.dylib
...
A simple softlink into /usr/ does the trick, but gee do I dislike polluting trees like this!
> sudo bash
> mkdir -p /usr/local/libodep/lib
> ln -s /opt/local/lib/libintl.8.dylib /usr/local/libodep/lib/libintl.8.dylib
With this, the Python 3.3 from LibreOffice runs. I sure hope that somebody at LibreOffice is going to fix this. Anybody know if a bug has been filed?
The library it's attempting to load isn't a standard dylib on macOS. Whoever created LibreOffice should have either included the dylib somewhere in their app package or included instructions on how to set things up properly for LibreOffice.
From what I can tell, it looks like you need to install MacPorts in order to pick up libintl.8.dylib.
And MacPorts is likely to install that library into "/opt/local/lib/" instead of "/usr/local/libodep/". Not sure if LibreOffice is smart enough to know what to do in that case, but in a pinch you can create a symbolic link in the expected directory pointing at the file in the other directory.
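The symlink-in-a-pinch idea can be demonstrated safely inside a temp directory. The paths below are stand-ins for the real /opt/local/lib and /usr/local/libodep/lib, not the actual system locations:

```python
import os
import tempfile

root = tempfile.mkdtemp()
real = os.path.join(root, "opt", "local", "lib", "libintl.8.dylib")
wanted = os.path.join(root, "usr", "local", "libodep", "lib", "libintl.8.dylib")

# Create the "MacPorts" copy, then link the expected location to it --
# the same effect as `ln -s /opt/local/lib/... /usr/local/libodep/lib/...`.
os.makedirs(os.path.dirname(real))
open(real, "w").close()
os.makedirs(os.path.dirname(wanted))
os.symlink(real, wanted)

print(os.path.realpath(wanted) == os.path.realpath(real))  # True
```

Once the link exists at the path dyld reports in the error, the "image not found" failure goes away.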
I came across the same issue as:
dyld: Library not loaded: /usr/local/lib/libintl.8.dylib
Referenced from:
/opt/local/bin/yasm
Reason: image not found
Trace/BPT trap: 5
I had to install MacPorts and gettext, which installs the libintl.* libraries in /opt/local/lib
MacPorts defines the dependency of yasm on gettext
You'll find the same topic here for followup:
https://github.com/dagwieers/unoconv/issues/125
The provided answers didn't work for me: installing MacPorts didn't help, I couldn't find libintl.8.dylib under /opt/local/lib/, and I found others had compatibility problems as well. Installing gettext also didn't work for me.

Installing python on 1and1 shared hosting

I'm trying to install python to a 1and1.com shared linux hosting account.
There is a nice guide at this address:
http://www.jacksinner.com/wordpress/?p=3
However I get stuck at step 6 which is: "make install". The error I get is as follows:
(uiserver):u58399657:~/bin/python > make install
Creating directory /~/bin/python/bin
/usr/bin/install: cannot create directory `/~’: Permission denied
Creating directory /~/bin/python/lib
/usr/bin/install: cannot create directory `/~’: Permission denied
make: *** [altbininstall] Error 1
I look forward to some suggestions.
UPDATE:
Here is an alternative version of the configure step to fix the above error, however this time I'm getting a different error:
(uiserver):u58399657:~ > cd Python-2.6.3
(uiserver):u58399657:~/Python-2.6.3 > ./configure -prefix=~/bin/python
configure: error: expected an absolute directory name for --prefix: ~/bin/python
(uiserver):u58399657:~/Python-2.6.3 >
The short version is, it looks like you've set the prefix to /~/bin/python instead of simply ~/bin/python. This is typically done with a --prefix=path argument to configure or some other similar script. Try fixing this and it should then work. I'd suggest actual commands, but it's been a while (hence my request to see what you've been typing.)
Because of the above mistake, it is trying to install to a subdirectory called ~ of the root directory (/), instead of your home directory (~).
EDIT: Looking at the linked tutorial, this step is incorrect:
./configure --prefix=/~/bin/python
It should instead read:
./configure --prefix=~/bin/python
Note, this is addressed in the very first comment to that post.
EDIT 2: It seems that whatever shell you are using isn't expanding the path properly. Try this instead:
./configure --prefix=$HOME/bin/python
Failing even that, run echo $HOME and substitute that for $HOME above. It should look something like --prefix=/home/mscharley/bin/python
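The difference between /~/bin/python and ~/bin/python is easy to see from Python's own tilde-expansion rules, which mirror the shell's behavior closely enough for this purpose:

```python
import os.path

# Tilde expansion only applies when `~` is the first character of the path;
# a leading slash makes `/~` a literal directory named `~` under the root.
print(os.path.expanduser("~/bin/python"))   # e.g. /home/you/bin/python
print(os.path.expanduser("/~/bin/python"))  # /~/bin/python (NOT expanded)
```

This is why the installer above tried to create /~/bin/python under the filesystem root and hit "Permission denied", and why substituting $HOME (an absolute path) sidesteps the whole issue.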
You really should consider using the AS binary package from Activestate for this kind of thing. Download the .tar.gz file, unpack it, change to the python directory and run the install shell script. This installs a completely standalone version of python without touching any of the system stuff. You don't need root permissions and you don't need to mess around with make.
Of course, maybe you are a C/C++ developer, make is a familiar tool and you are experienced at building packages from source. But if any of those is not true then it is worth your while to try out the Activestate AS binary package.
I was facing the same issue with 1and1 shared hosting (the tutorial linked above is no longer available). I followed the Installing Python modules on Hostgator shared hosting using VirtualEnv tutorial with only one change for 1and1. That is:
Instead of:
> python virtualenv-1.11.6/virtualenv.py /home1/yourusername/public_html/yourdomain.com/env --no-site-package
I used:
> python virtualenv-1.11.6/virtualenv.py /kunden/homepages/29/yourusername/htdocs/env --no-site-package
Rest of the instructions worked and I successfully installed VirtualEnv.
Example: 1and1 does not provide the Requests module, and pip cannot be used on shared hosting. This screenshot demonstrates that after installing VirtualEnv, the pip command can be used, and at the end >>> import requests worked successfully.
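A quick way to confirm that an environment like this is actually active (a generic sketch, not specific to 1and1): inside a virtualenv, sys.prefix points at the env directory rather than the system interpreter's location.

```python
import sys

def in_virtualenv():
    # Old-style virtualenv sets sys.real_prefix; venv and newer virtualenv
    # set sys.base_prefix. Either differing from sys.prefix means an env
    # is active.
    base = getattr(sys, "real_prefix", None) or getattr(sys, "base_prefix", sys.prefix)
    return sys.prefix != base

print("virtualenv active:", in_virtualenv())
```

Running this before and after `source env/bin/activate` (plus re-invoking python) makes it obvious whether pip and imports will hit the env or the host's restricted system install.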
