I've tried to compile Python 2.7 on Ubuntu 10.04, but got the following error message after running make:
Python build finished, but the necessary bits to build these modules were not found:
_bsddb bsddb185 sunaudiodev
To find the necessary bits, look in setup.py in detect_modules() for the module's name.
What packages do I need? (setup.py was not helpful)
Assuming that you have all the dependencies installed (on Ubuntu that would be a bunch of things like sudo apt-get install libdb4.8-dev and various other -dev packages), then this is how I build Python:
tar zxvf Python-2.7.1.tgz
cd Python-2.7.1
# 64 bit self-contained build in /opt
export TARG=/opt/python272
export CC="gcc -m64"
export LDFLAGS='-Wl,-rpath,\$${ORIGIN}/../lib -Wl,-rpath-link,\$${ORIGIN}/../lib -Wl,--enable-new-dtags'
./configure --prefix=$TARG --with-dbmliborder=bdb:gdbm --enable-shared --enable-ipv6
make
make install
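As a quick sanity check (my suggestion, not part of the original notes), you can run the freshly installed interpreter and confirm it really is the one under /opt:
/opt/python272/bin/python -c "import sys; print sys.prefix; print sys.version"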
The only modules that don't build during make are:
_tkinter - I don't do GUI apps and would use wxWindows if I did
bsddb185 - horribly obsolete version of bdb
dl - deprecated in 2.6
imageop - deprecated in 2.6
sunaudiodev - obsolete interface to some SparcStation device I think
Next I collect any .so files that are not already in the Python install directories and copy them over:
# collect binary libraries ##REDO THIS IF YOU ADD ANY ADDITIONAL MODULES##
cd /opt/python272
find . -name '*.so' | sed 's/^/ldd -v /' >elffiles
echo "ldd -v bin/python" >>elffiles
chmod +x elffiles
./elffiles | sed 's/.*=> //;s/ .*//;/:$/d;s/^ *//' | sort -u | sed 's/.*/cp -L & lib/' >lddinfo
# mkdir lib
chmod +x lddinfo
./lddinfo
And then add setuptools for good measure
#set the path
export PATH=/opt/python272/bin:$PATH
#install setuptools
./setuptools-0.6c11-py2.7.egg
At this point I can make a tarball of /opt/python272 and run it on any 64-bit Linux distro, even a stripped down one that has none of the dependencies installed, or an older distro that has old obsolete versions of the dependencies.
I also get pip installed, but at this point there is a gap in my notes due to some failed struggles with virtualenv. Basically, virtualenv does not support this scenario. Presumably I did easy_install pip and then:
export LD_RUN_PATH=\$${ORIGIN}/../lib
pip install cython
pip install {a whole bunch of other libraries that I expect to use}
After I'm done installing modules, I go back and rerun the commands to collect .so files, and make a new tarball. There were a couple of packages where I had to muck around with LDFLAGS to get them to install correctly, and I haven't done enough thorough testing yet, but so far it works and I'm using this Python build to run production applications on machines that don't have all the support libraries preinstalled.
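For reference, the tarball step itself is nothing more than something like this (using the /opt/python272 prefix from above):
cd /opt
tar czvf python272.tar.gz python272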
Those are older (mostly deprecated) modules that you probably won't use. You should be able to safely ignore the warnings.
The one that you may want to worry about trying to fix is _bsddb, which should go away once you install Berkeley DB 4.8... I'm not sure if it's in the Ubuntu repos or not. (edit: apparently it's the db package)
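If you want to try that, a minimal sketch on Lucid would be to install the dev package mentioned in the other answer and rebuild so that setup.py can find the headers (package name assumed from above):
sudo apt-get install libdb4.8-dev
# then, from the Python source directory, rebuild
./configure
make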
bsddb185 is an older version of the Oracle Berkeley Database module. You can safely ignore it as far as I know.
sunaudiodev is deprecated and undocumented; I doubt you'd ever need to use it anyway. You should be able to safely ignore it.
Hope that helps a bit, anyway...
sudo apt-get build-dep python2.6 python-gdbm python-bsddb3 (use python2.7 on Maverick).
For more information, see this answer. Also look at this page, which applies equally for building on Lucid.
I accidentally downloaded python2.6.6 on my CentOS virtual machine from python.org's official download package and compiled it from source.
Now in /usr/local/bin I have a python2.6 shell available, and if I use which python it gives me the path /usr/local/bin instead of the original python2.7's path, which is /usr/bin.
Since I installed it from source, yum doesn't recognise python2.6.6 as a package and I want to get rid of it.
If I do rpm -q python it gives me a result of python-2.7.5-48.0.1.el7.x86_64
Is it possible to uninstall python2.6.6 so that I can just re-point my python system variable to /usr/bin again?
Sure, but you'll have to do it the hard way. Dig through /usr/local looking for anything Python-related and remove it. The python in /usr/bin should be revealed once the one in /usr/local/bin is removed.
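A sketch of that hunt (nothing here deletes anything, it only lists candidates; adjust the pattern if your build used a different prefix):
# list what the source install dropped under /usr/local
find /usr/local -maxdepth 3 -name '*python2.6*'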
Also, next time use make altinstall. It will install a versioned executable that won't get in the way of the native executable.
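A minimal sketch of that flow, assuming you are building from the unpacked source directory:
cd Python-2.6.6
./configure
make
sudo make altinstall   # installs /usr/local/bin/python2.6 and leaves 'python' alone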
On an Ubuntu system on which I don't have sudo privileges, I wish to install a package via pip (matplotlib to be precise), but some source packages are not installed on the system (however, the binaries are installed).
I have created a virtual environment in which to install, and have downloaded the required source code, but I can't place it in the default /usr/include/ etc. When pip runs matplotlib's setup.py script, the source files are reported as missing.
Is there a way to instruct pip or setup.py where to look for the source?
PS: setting CFLAGS or CPPFLAGS adds the locations of the downloaded source to the compile instructions, but setup.py didn't find the source, so it didn't attempt to compile some components (graphics backends).
PPS: this is similar to, but more specific than, this question.
I would suggest doing:
Rebuild whatever binaries you need in your own home directory (this also avoids an issue if the apps get upgraded on the system or are otherwise different versions from your source). Assuming the programs use the standard configure scripts, you can do:
mkdir ~/dev
cd app_src
./configure --prefix=$HOME/dev
make; make install
Then when you want to do your pip install, do
export PATH=~/dev/bin:$PATH
export LD_LIBRARY_PATH=~/dev/lib
(Note: what I should really be suggesting is pointing these at your virtualenv, but I haven't had the issue you're having.)
Do the pip install; if memory serves, pkg-config should pick up the info you want (this assumes matplotlib uses pkg-config to figure out where packages are stored)
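If it doesn't, one thing worth trying (an assumption on my part, not something I've needed) is pointing pkg-config at the .pc files under your new prefix before running pip:
export PKG_CONFIG_PATH=$HOME/dev/lib/pkgconfig:$PKG_CONFIG_PATH
pip install matplotlib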
I am using pip and trying to install a Python module called pyodbc which has some dependencies on non-Python libraries like unixodbc-dev, unixodbc-bin, unixodbc. I cannot install these dependencies system-wide at the moment, as I am only playing, so I have installed them in a non-standard location. How do I tell pip where to look for these dependencies? More exactly, how do I pass information through pip about include dirs (gcc -I) and library dirs (gcc -L -l) to be used when building the pyodbc extension?
pip has a --global-option flag
You can use it to pass additional flags to build_ext.
For instance, to add a --library-dirs (-L) flag:
pip install --global-option=build_ext --global-option="-L/path/to/local" pyodbc
gcc supports also environment variables:
http://gcc.gnu.org/onlinedocs/gcc/Environment-Variables.html
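So an alternative sketch (same assumed /path/to/local as above) is to export the gcc search-path variables before invoking pip:
export CPATH=/path/to/local/include        # extra header search path for gcc
export LIBRARY_PATH=/path/to/local/lib     # extra link-time search path for gcc
pip install pyodbc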
I couldn't find any build_ext documentation, so here is the command-line help:
Options for 'build_ext' command:
--build-lib (-b) directory for compiled extension modules
--build-temp (-t) directory for temporary files (build by-products)
--plat-name (-p) platform name to cross-compile for, if supported
(default: linux-x86_64)
--inplace (-i) ignore build-lib and put compiled extensions into the
source directory alongside your pure Python modules
--include-dirs (-I) list of directories to search for header files
(separated by ':')
--define (-D) C preprocessor macros to define
--undef (-U) C preprocessor macros to undefine
--libraries (-l) external C libraries to link with
--library-dirs (-L) directories to search for external C libraries
(separated by ':')
--rpath (-R) directories to search for shared C libraries at runtime
--link-objects (-O) extra explicit link objects to include in the link
--debug (-g) compile/link with debugging information
--force (-f) forcibly build everything (ignore file timestamps)
--compiler (-c) specify the compiler type
--swig-cpp make SWIG create C++ files (default is C)
--swig-opts list of SWIG command line options
--swig path to the SWIG executable
--user add user include, library and rpath
--help-compiler list available compilers
Building on Thorfin's answer and assuming that your desired include and library locations are in /usr/local, you can pass both in like so:
sudo pip install --global-option=build_ext --global-option="-I/usr/local/include/" --global-option="-L/usr/local/lib" <your package name>
Another way to indicate the location of include files and libraries is to set the relevant environment variables before running pip, e.g.
export LDFLAGS=-L/usr/local/opt/openssl/lib
export CPPFLAGS=-I/usr/local/opt/openssl/include
pip install cryptography
Just FYI... If you are having trouble installing a package with pip, then you can use the
--no-clean option to see what exactly is going on (that is, why the build did not work). For instance, if numpy is not installing properly, you could try
pip install --no-clean numpy
then look at the Temporary folder to see how far the build got. On a Windows machine, this should be located at something like:
C:\Users\Bob\AppData\Local\Temp\pip_build_Bob\numpy
Just to be clear, the --no-clean option tries to install the package, but does not clean up after itself, letting you see what pip was trying to do.
Otherwise, if you just want to download the source code, then I would use the -d flag. For instance, to download the Numpy source code .tar file to the current directory, use:
pip install -d %cd% numpy
I was also helped by Thorfin's answer; I was building GTK3+ on Windows and installing pygobject, and I was having difficulty figuring out how to include multiple folders with pip install.
I tried creating a pip config file as per the pip documentation, but failed.
The one that works is the command line:
pip install --global-option=build_ext --global-option="-IlistOfDirectories"
# and/or with: --global-option="-LlistofDirectories"
The separator that works with multiple folders on Windows is the semicolon ';', NOT the colon ':'. It might be different on other operating systems.
Sample working command line:
pip install --global-option=build_ext --global-option="-Ic:/gtk-build/gtk/x64/release/include;d:/gtk-build/gtk/x64/release/include/gobject-introspection-1.0" --global-option="-Lc:\gtk-build\gtk\x64\release\lib" pygobject==3.27.1
You can use '\' or '/' in the path, but make sure you do not type a backslash right next to a double quote.
The example below will fail because there is a backslash next to the closing double quote:
pip install --global-option=build_ext --global-option="-Ic:\willFail\" --global-option="-Lc:\willFail\" pygobject==3.27.1
Have you ever used virtualenv? It's a Python package that lets you create and maintain multiple isolated environments on one machine. Each can use different modules independent of one another without screwing up dependencies in your system library or another virtual environment.
If you don't have root privileges, you can download and use the virtualenv package from source:
$ curl -O https://pypi.python.org/packages/source/v/virtualenv/virtualenv-X.X.tar.gz
$ tar xvfz virtualenv-X.X.tar.gz
$ cd virtualenv-X.X
$ python virtualenv.py myVE
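Then activate the environment and install whatever you need into it; for example:
$ source myVE/bin/activate
(myVE)$ pip install matplotlib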
I followed the above steps this weekend on Ubuntu Server 12.04 and it worked perfectly. Each new virtual environment you create comes with pip by default, so installing packages into your new environment is easy.
Just in case it's of help to somebody: I still could not find a way to do it through pip, so I ended up simply downloading the package and installing it through its setup.py. I also switched to what seems to be an easier-to-install API called pymssql.
I did a simple pip install psycopg2 on a Mac system. It installed fine, but when I try to use psycopg2 I get the error:
Reason: Incompatible library version: _psycopg.so requires version 1.0.0 or later, but libssl.0.9.8.dylib provides version 0.9.8
pip freeze shows psycopg2==2.4.5 just right. I have installed psycopg2 in several virtualenvs, but this is the first time I am seeing such an error. I tried uninstalling and reinstalling, with the same results. Please help.
The accepted answer here is correct (except I think it must be ln -fs; in fact I think it might even risk destabilizing your OS if not (?)). After bumping into this and dealing with it, I just want to collect the full solution for this issue and the other lib problem (libcrypto.1.0.0.dylib) you will run into for Postgres 9.* on Mountain Lion and Snow Leopard, and perhaps other systems. This also blocked me from running psql, which complained about the same two libs.
Essentially there are two later-version libs needed in /usr/lib, libssl and libcrypto. You can find the needed versions of these libs in the Postgres lib directory.
If you're on OS X and installed the Enterprise DB version of Postgres, this will be in /Library/PostgreSQL/9.2/lib.
For other install types of Postgres, look for the lib directory inside the Postgres install directory, e.g., for Postgres.app, find the lib directory in /Applications/Postgres.app/Contents/MacOS/lib;
for brew, somewhere in /usr/local/Cellar;
on *nix, wherever your install is. But on *nix, first check whether your distro has later versions available just through the package manager.
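If you're not sure which of those locations applies on your machine, a quick search over the usual suspects (paths assumed from the list above) will show what's available:
find /Library/PostgreSQL /Applications/Postgres.app /usr/local/Cellar -name 'libssl*.dylib' 2>/dev/null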
First copy the latest of these two libs from the Postgres lib directory to /usr/lib:
sudo cp /Library/PostgreSQL/9.2/lib/libssl.1.0.0.dylib /usr/lib
sudo cp /Library/PostgreSQL/9.2/lib/libcrypto.1.0.0.dylib /usr/lib
Then update (or create) the /usr/lib symlinks for these libs. Either way the command is ln -fs:
sudo ln -fs /usr/lib/libssl.1.0.0.dylib /usr/lib/libssl.dylib
sudo ln -fs /usr/lib/libcrypto.1.0.0.dylib /usr/lib/libcrypto.dylib
That should fix it. I'm pretty sure ln -fs is better than deleting the symlink and remaking it, since there is less chance of libssl being unfindable by something that needs it while it is not present (it does the same thing; it first deletes the symlink if it's already there, just faster than you can type it). Always be wary of messing around in /usr/lib.
Worked for me:
env LDFLAGS='-L/usr/local/lib -L/usr/local/opt/openssl/lib
-L/usr/local/opt/readline/lib' pip install psycopg2
Source: Can't install psycopg2 with pip in virtualenv on Mac OS X 10.7
I ran into a similar problem after upgrading to Mountain Lion.
Instead of copying libssl.* files per Slack's suggestion, make sure that /usr/lib/libssl.dylib is actually a soft link to the most up-to-date version of the library.
E.g., on my machine, ls -l /usr/lib/libssl* gives:
lrwxr-xr-x 1 root wheel 46B Jun 27 15:24 /usr/lib/libssl.1.0.0.dylib -> /Library/PostgreSQL/9.1/lib/libssl.1.0.0.dylib
lrwxr-xr-x 1 root wheel 27B Jul 30 10:31 /usr/lib/libssl.dylib -> /usr/lib/libssl.1.0.0.dylib
If libssl.dylib doesn't link to the version that the error message mentions, make sure you have that version of the library, and then make sure /usr/lib/libssl.dylib points to it and not to an older version.
If the link doesn't exist, create it like so:
sudo ln -s library_to_link_to link_to_create
using, of course, the proper locations for your machine. For me, this turned out to be:
sudo ln -s /usr/lib/libssl.1.0.0.dylib /usr/lib/libssl.dylib
Edit:
It seems like some are having trouble with part of my solution. Namely, deleting these important libraries even temporarily causes problems with the operating system.
Per Purrell's answer, make sure you include the -fs flags when you use the ln command, which helps ensure that the libraries don't go missing for a short period of time. E.g.,
sudo ln -fs /usr/lib/libssl.1.0.0.dylib /usr/lib/libssl.dylib
sudo ln -fs /usr/lib/libcrypto.1.0.0.dylib /usr/lib/libcrypto.dylib
On OS X 10.11, El Capitan, the solution of replacing symlinks reported Operation not permitted (System Integrity Protection prevents modifying /usr/lib). The solution that worked for me was using brew and setting up DYLD_LIBRARY_PATH. So:
brew install openssl
Find where the brewed openssl libs are located (brew --prefix openssl can help); start searching from the directory /usr/local/Cellar/openssl. In my case it is /usr/local/Cellar/openssl/1.0.2d_1/lib.
Finally, set up DYLD_LIBRARY_PATH, i.e. add a line like this to .bash_profile:
# replace location of lib files with folder name you found in previous step
export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/usr/local/Cellar/openssl/1.0.2d_1/lib
UPDATE: More generic/better alternatives are (thanks to @dfrankow):
use brew to find the openssl location (note: brew can be slow): DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:$(brew --prefix openssl)/lib
for development purposes it may be better to use DYLD_FALLBACK_LIBRARY_PATH instead - check this
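For the fallback variant, the .bash_profile line would look something like this (same brew-provided lib directory assumed):
export DYLD_FALLBACK_LIBRARY_PATH=$(brew --prefix openssl)/lib:$DYLD_FALLBACK_LIBRARY_PATH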
Restart the shell, or just source ~/.bash_profile, then reinstall psycopg2:
pip uninstall psycopg2
pip install psycopg2
and test if it works:
$ python -c"import psycopg2 ; print('psycopg2 is now ok')"
When trying to do a syncdb with Postgres 9.1, /psycopg2/_psycopg.so added a further error:
Library not loaded: #loader_path/../lib/libcrypto.dylib
Referenced from: /usr/lib/libpq.5.dylib
Reason: Incompatible library version: libpq.5.dylib requires version 1.0.0 or later, but libcrypto.0.9.8.dylib provides version 0.9.8
Solved by copying these six (6) files from:
LOCAL:/Library/PostgreSQL/9.1/lib/
libssl.1.0.0.dylib
libssl.a
libssl.dylib
libcrypto.1.0.0.dylib
libcrypto.a
libcrypto.dylib
to: LOCAL:/usr/lib
This was on Mac OS X 10.8.1 with a web app in a virtualenv (1.8.2) and pgAdmin (1.14.3). Inside the virtualenv is:
Django==1.4
psycopg2==2.4.5
... etc... and now back to normal.
For me, the libcrypto and libssl version 1.0.0 libraries reside below:
/Library/PostgreSQL/9.1/lib/libcrypto.1.0.0.dylib
/Library/PostgreSQL/9.1/lib/libssl.1.0.0.dylib
so the commands that fixed my problem are:
sudo ln -fs /Library/PostgreSQL/9.1/lib/libssl.1.0.0.dylib /usr/lib/libssl.dylib
sudo ln -fs /Library/PostgreSQL/9.1/lib/libcrypto.1.0.0.dylib /usr/lib/libcrypto.dylib
My friend, just copy the libssl.* files from the PostgreSQL lib directory to /usr/lib and relaunch your application; in this case all things will be perfect ^_^
For me on Mavericks, it worked to just copy the two dylibs and relaunch Python:
cp /Library/PostgreSQL/9.3/lib/libssl.1.0.0.dylib /usr/lib/
cp /Library/PostgreSQL/9.3/lib/libcrypto.1.0.0.dylib /usr/lib/
If you are uncomfortable copying libraries into your system directory, you can use the DYLD_LIBRARY_PATH environment variable to force the OS to search Postgres's library directory for libssl. E.g.:
$ DYLD_LIBRARY_PATH=/Library/PostgreSQL/9.4/lib pip install psycopg2
(documented under the dyld man page).
I had a similar problem on macOS High Sierra.
ImportError: dlopen(/Users/chicha/Projects/CTMR/sample_registration/romans_env/lib/python3.7/site-packages/psycopg2/_psycopg.cpython-37m-darwin.so, 2): Library not loaded: /opt/local/lib/libssl.1.0.0.dylib
But after "pip install postgres" it's work fine.
According to pip show - "postgres is a high-value abstraction over psycopg2".
While installing it's also installed psycopg2-binary and psycopg2-pool.
So, all together they have repaired the situation somehow.
FEniCS that comes in the Ubuntu 12.04 repository does not work with Enthought EPD unless I do some crazy stuff with PYTHONPATH which can often result in EPD using Ubuntu repository python modules rather than EPD modules.
The alternative then is to compile and install all of the FEniCS modules manually. This is screwy because FEniCS needs sudo to install in the normal EPD directory, /usr/local/EPD. If you use sudo, the PATH environment variable is not being sourced from ~/.bashrc, so it thinks it's working with the native python, not EPD. I tried using the -i option on sudo, and that did some screwy things also.
I managed to solve my own problem. There were a bunch of issues with the technique that I am about to describe, and they are detailed here and here. For reasons that I don't understand, reinstalling Ubuntu fixed the problems described in the links, but that's beyond the scope of what I'm trying to cover here. Suffice it to say that it's good to install Ubuntu with / and /home as separate partitions because it makes a complete reinstall very easy.
Procedure for Installing FEniCS for use with EPD
Download all of the packages here. Create the directory ~/.local/src/fenics and save them there. Run tar -xvf on all the files in that directory. An easy way to do this is with the command for i in *.tar.gz; do tar -xvf $i; done.
First install the Python modules FFC, FIAT, Instant, Viper and UFL by going into each of their directories and running python setup.py install --user. The --user flag causes them to be installed somewhere under ~/.local/lib, which will be added to your sys.path in Python. You can read more about the --user flag here.
Then navigate to the directories for dolfin and ufc, and in each of them run the following commands: cmake -DCMAKE_INSTALL_PREFIX=~/.local ., make, make install.
Lastly, add source /home/chad/.local/share/dolfin/dolfin.conf to ~/.bashrc using gedit or emacs if you want to use a powerful text editor.
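If you prefer the command line to an editor, that last step is simply (path exactly as given above):
echo 'source /home/chad/.local/share/dolfin/dolfin.conf' >> ~/.bashrc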
EDIT
You must also install ScientificPython using python setup.py install --user, and this is relatively painless.
EDIT
This should get you up and running for the demos in ~/.local/share/dolfin/demo/pde/poisson/python. I hope this helps someone.