How to manually pass source of bzip2 install for Python install?

I've been through several StackOverflow questions about Python & bzip2. These have been very helpful in getting me to the state I'm clearly at now. Here's what I've done so far and the problem I'm having:
I do not have root access and cannot install libbz2-dev(el)
/usr/bin/bzip2 is version 1.0.3
/usr/bin/python is version 2.4.3
GNU Stow is being used to manage libraries similar to how homebrew works
I need Python 2.7.3 to install with the bzip2 module in order to properly compile node.js from source. And yes, I'm sorry, but I do actually have to do all of this as a regular user from source.
I have installed bzip2 from source as follows:
$ make -f Makefile-libbz2_so
$ make
$ make install PREFIX=${STOW}/bzip2-1.0.6
$ cp libbz2.so.1.0.6 ${STOW}/bzip2-1.0.6/lib/
$ cd ${STOW}/bzip2-1.0.6/lib
$ ln -s libbz2.so.1.0.6 libbz2.so.1.0
$ cd ${STOW}
$ stow bzip2-1.0.6
I have stow's root directory in my PATH before anything else, so this results in:
$ bzip2 -V
# [...] Version 1.0.6
Which indicates that the correct bzip2 is being utilized in my PATH.
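For reference, since stow's default target is the parent of the stow directory, the headers and shared library should now be visible here (paths reflect my layout, nothing bzip2 installs globally):
$ ls ${STOW}/../include/bzlib.h
$ ls ${STOW}/../lib/libbz2.so*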
Next I move on to compiling Python from source and run the following:
$ cd Python-2.7.3
$ ./configure --prefix=${STOW}/Python-2.7.3
$ make
# Complains about several missing modules, of which "bz2" is the one I care about
$ make install prefix=${STOW}/Python-2.7.3 # unimportant as bz2 module failed to install
What is the correct way to tell Python, during its source configuration, where the source-installed bzip2 1.0.6 library lives, so that it will detect the bzip2 devel headers and build the module properly?

Alright, it took me a few months to get to this, but I'm finally back and managed to tackle this problem.
Install bzip2 from source:
# Upload bzip2-1.0.6.tar.gz to ${SRC}
$ cd ${SRC}
$ tar -xzvf bzip2-1.0.6.tar.gz
$ cd bzip2-1.0.6
$ export CFLAGS="-fPIC"
$ make -f Makefile-libbz2_so
$ make
$ make install PREFIX=${STOW}/bzip2-1.0.6
$ cp libbz2.so.1.0.6 ${STOW}/bzip2-1.0.6/lib/
$ cd ${STOW}/bzip2-1.0.6/lib
$ ln -s libbz2.so.1.0.6 libbz2.so.1.0
$ cd ${STOW}
$ stow bzip2-1.0.6
$ source ${HOME}/.bash_profile
$ bzip2 --version
#=> bzip2, a block-sorting file compressor. Version 1.0.6...
Install Python from source:
# Upload Python-2.7.3.tar.gz to ${SRC}
$ cd ${SRC}
$ tar -xzvf Python-2.7.3.tar.gz
$ cd Python-2.7.3
$ export CFLAGS="-fPIC"
$ export C_INCLUDE_PATH=${STOW}/../include
$ export CPLUS_INCLUDE_PATH=${C_INCLUDE_PATH}
$ export LIBRARY_PATH=${STOW}/../lib
$ export LD_RUN_PATH=${LIBRARY_PATH}
$ ./configure --enable-shared --prefix=${STOW}/Python-2.7.3 --libdir=${STOW}/../lib
$ make
$ make install prefix=${STOW}/Python-2.7.3
$ cd ${STOW}
$ stow Python-2.7.3
$ source ${HOME}/.bash_profile
$ python -V
#=> Python 2.7.3
$ python -c "import bz2; print bz2.__doc__"
#=> The python bz2 module provides...
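As an aside, the same search paths can probably be handed straight to configure instead of exporting the *_PATH variables above; this is an untested sketch that assumes the same ${STOW} layout:
$ ./configure --enable-shared --prefix=${STOW}/Python-2.7.3 \
    CPPFLAGS="-I${STOW}/../include" \
    LDFLAGS="-L${STOW}/../lib -Wl,-rpath,${STOW}/../lib"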
Although node.js wasn't technically part of the question, it is what drove me to go through all of the above, so I may as well include the last few commands to get node.js installed from source using a source-installed Python 2.7.3 & bzip2 1.0.6:
Install node.js from source:
# Upload node-v0.10.0.tar.gz to ${SRC}
$ cd ${SRC}
$ tar -xzvf node-v0.10.0.tar.gz
$ cd node-v0.10.0
$ ./configure --prefix=${STOW}/node-v0.10.0
$ make
$ make install prefix=${STOW}/node-v0.10.0
$ cd ${STOW}
$ stow node-v0.10.0
$ source ${HOME}/.bash_profile
$ node -v
#=> v0.10.0
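And a last smoke test that the stowed binary is the one being picked up (the exact path depends on your stow target):
$ which node
#=> .../bin/node under the stow target
$ node -p "process.version"
#=> v0.10.0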

Related

Building Python 3.6.4 on Linux from scratch

I'm trying to build Python 3.6.4 on LFS 8.2-systemd, so I run the configure command:
./configure --prefix=/usr \
--enable-shared \
--with-system-expat \
--with-system-ffi \
--with-ensurepip=yes
followed by make -j.
However, at this point the module "pyexpat" is not found by Python, even though the file /usr/lib/libexpat.so exists.
After reading building Python from source with zlib support, I created a symlink:
ln -s /usr/lib /usr/lib/x86_64-gnu-linux
If I run make install, I get an error:
ModuleNotFoundError: No module named pyexpat
My expat lib version is 2.2.5.
I'm doing the compilation inside env -i chroot /mnt bash, and my environment contains just a valid PATH and LC_ALL=POSIX.
I ran into this same problem with Python 3.6.8, when I initially configured using:
./configure --prefix=/opt/python-3.6/ --enable-optimizations
However, when I retried using the command in the BLFS book:
./configure --prefix=/opt/python-3.6/ --enable-shared --with-system-expat --with-system-ffi --with-ensurepip=yes
My pyexpat started working.
That being said, I think it may be helpful to just retry, since my second command is functionally identical to yours.
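To confirm that pyexpat really links against the system expat, a quick check like this should work (the path assumes the --prefix used above; pyexpat.EXPAT_VERSION reports the linked expat version):
$ /opt/python-3.6/bin/python3 -c "import pyexpat; print(pyexpat.EXPAT_VERSION)"
#=> expat_2.2.5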
Alternatively, on Ubuntu you can install Python 3.6 from a PPA:
sudo add-apt-repository ppa:jonathonf/python-3.6
sudo apt-get update
sudo apt-get install python3.6
To make python3 use the newly installed Python 3.6 instead of the default 3.5 release, run the following two commands:
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.5 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 2
Finally, switch between the two Python versions for python3 via the command:
sudo update-alternatives --config python3
After selecting version 3.6:
python3 -V
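If you want to double-check which interpreters are registered before switching, update-alternatives can list them:
$ update-alternatives --list python3
#=> /usr/bin/python3.5
#=> /usr/bin/python3.6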

VIM with Python2&3 Support

Open VIM and run the command:
:python print "Hello World!"
but it failed with:
E448: Could not load library function _PyArg_Parse_SizeT
E263: Sorry, this command is disabled, the Python library could not be loaded.
My environment is below:
OS: CentOS 7.3
Python2.7.13 Install:
wget -c https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz
tar -zxvf Python-2.7.13.tgz
cd Python-2.7.13
./configure
make
make install
Python3.6.1 Install:
wget -c https://www.python.org/ftp/python/3.6.1/Python-3.6.1.tgz
tar -zxvf Python-3.6.1.tgz
cd Python-3.6.1/
./configure
make
make install
VIM8.0 Install:
rpm -e $(rpm -qa | grep vim) --nodeps
git clone https://github.com/vim/vim.git
cd vim/src
./configure --with-features=huge --enable-pythoninterp --with-python-config-dir=/usr/local/lib/python2.7/config/ --enable-python3interp --with-python3-config-dir=/usr/local/lib/python3.6/config-3.6m-x86_64-linux-gnu/ --enable-multibyte --enable-cscope --enable-gui=auto --enable-xim --with-x --enable-fontset --disable-selinux
make
make install
vim --version | grep python
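A quick way to confirm both interpreters were compiled in (exact feature flags vary by build; you want to see +python and +python3, not -python):
$ vim --version | grep -Eo '[+-]python3?'
#=> +python
#=> +python3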
Other:
echo "alias vi='vim'" >> ~/.bashrc
echo "export PATH=/usr/local/bin:\$PATH" >> ~/.bashrc
source ~/.bashrc

OpenNI2 with OpenCV 3

I'm on OSX 10.11.5
I have been trying to build OpenCV3 with OpenNI support. I have a Python script that is supposed to work with the XBox 360 Kinect, but OpenCV isn't detecting the Kinect. I have libfreenect installed and I can run freenect-glview which brings up the depth view.
I've done pretty extensive Google searches, but all guides seem to be for Windows or Linux. Can anyone help me? Guides for OpenNI 1 will work too.
Thanks in advance!
I use Kinect for Xbox 360 on macOS successfully. The steps are as follows:
1) Install OpenNI2
$ brew tap homebrew/science
$ brew install openni2
2) Install libfreenect
Note:
libfreenect for Kinect Xbox 360
libfreenect2 for Kinect Xbox One
Because of this issue (Using Microsoft Kinect with Opencv 3.0.0), we need to build it ourselves.
2.1) Download
$ curl -O -L https://github.com/OpenKinect/libfreenect/archive/v0.5.5.tar.gz
$ tar zxf v0.5.5.tar.gz
$ cd libfreenect-0.5.5/
2.2) Fix issue
# jump to line 173
$ vi OpenNI2-FreenectDriver/src/DepthStream.hpp
case XN_STREAM_PROPERTY_ZERO_PLANE_DISTANCE: // unsigned long long or unsigned int (for OpenNI2/OpenCV)
  if (*pDataSize != sizeof(unsigned long long) && *pDataSize != sizeof(unsigned int))
  {
    LogError("Unexpected size for XN_STREAM_PROPERTY_ZERO_PLANE_DISTANCE");
    return ONI_STATUS_ERROR;
  } else {
    if (*pDataSize == sizeof(unsigned long long)) {
      *(static_cast<unsigned long long*>(data)) = ZERO_PLANE_DISTANCE_VAL;
    } else {
      *(static_cast<unsigned int*>(data)) = (unsigned int) ZERO_PLANE_DISTANCE_VAL;
    }
  }
  return ONI_STATUS_OK;
2.3) Build
$ mkdir build
$ cd build
$ cmake .. -DBUILD_OPENNI2_DRIVER=ON
$ make
2.4) Install
$ export OPENNI2_DIR=$(python -c 'import subprocess, glob; \
prefix = subprocess.check_output("brew --prefix", shell=True); \
print(glob.glob("%s/Cellar/openni2/*" % prefix[:-1])[0])')
$ cd lib/OpenNI2-FreenectDriver/
$ export FREENECT_DRIVER_LIB=libFreenectDriver.dylib
$ export FREENECT_DRIVER_LIB=$(python - <<'EOF'
import os
lib = os.environ['FREENECT_DRIVER_LIB']
while os.path.islink(lib):
    lib = os.readlink(lib)
print(lib)
EOF
)
$ ln -sf `pwd`/$FREENECT_DRIVER_LIB $OPENNI2_DIR/lib/ni2/OpenNI2/Drivers/libFreenectDriver.dylib
$ ln -sf `pwd`/$FREENECT_DRIVER_LIB $OPENNI2_DIR/share/openni2/tools/OpenNI2/Drivers/
2.5) Preview
$ cd `brew --prefix`/share/openni2/tools
$ ./NiViewer
3) Build OpenCV
3.1) Python
$ brew install python
$ python --version
Python 2.7.13
$ pip install numpy
3.2) Source
$ git clone https://github.com/opencv/opencv.git
$ cd opencv/
$ git checkout 3.2.0
# if use extra modules
$ git clone https://github.com/opencv/opencv_contrib.git
$ cd opencv_contrib/
$ git checkout 3.2.0
3.3) Build
$ cd opencv/
$ mkdir build
$ cd build/
$ export PY_HOME=/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7
$ export PY_NUMPY_DIR=$(python -c 'import os.path, numpy.core; print(os.path.dirname(numpy.core.__file__))')
$ export OPENNI2_INCLUDE=/usr/local/include/ni2
$ export OPENNI2_REDIST=/usr/local/lib/ni2
$ cmake -DCMAKE_BUILD_TYPE=RELEASE \
-DCMAKE_INSTALL_PREFIX=/usr/local \
\
-DBUILD_opencv_python2=ON \
-DPYTHON2_EXECUTABLE=$PY_HOME/bin/python \
-DPYTHON2_INCLUDE_DIR=$PY_HOME/Headers \
-DPYTHON2_LIBRARY=$PY_HOME/lib \
-DPYTHON2_PACKAGES_PATH=/usr/local/lib/python2.7/site-packages \
-DPYTHON2_NUMPY_INCLUDE_DIRS=$PY_NUMPY_DIR/include/ \
\
-DWITH_OPENNI2=ON \
\
-DBUILD_DOCS=OFF \
-DBUILD_EXAMPLES=OFF \
-DBUILD_TESTS=OFF \
-DBUILD_PERF_TESTS=OFF \
\
-DOPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
..
$ make -j$(python -c 'import multiprocessing as mp; print(mp.cpu_count())')
$ make install
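Before trying the Kinect script, it may be worth confirming that this build of OpenCV actually picked up OpenNI2 (cv2.getBuildInformation() is part of the standard cv2 API; the version details in the output will differ):
$ python -c "import cv2; print(cv2.getBuildInformation())" | grep -i openni
#=> OpenNI2: YES ...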
4) Finally
I posted the details to this gist. Samples here: C++, Python.
NOTE: I'm currently unable to retrieve a correct image using the Python code. Please see this issue.

How to use virtualenv in makefile

I want to perform several operations while working on a specified virtualenv.
For example command
make install
would be equivalent to
source path/to/virtualenv/bin/activate
pip install -r requirements.txt
Is it possible?
I like using something that runs only when requirements.txt changes:
This assumes that source files are under project in your project's root directory and that tests are under project/test. (You should change project to match your actual project name.)
venv: venv/touchfile

venv/touchfile: requirements.txt
	test -d venv || virtualenv venv
	. venv/bin/activate; pip install -Ur requirements.txt
	touch venv/touchfile

test: venv
	. venv/bin/activate; nosetests project/test

clean:
	rm -rf venv
	find -iname "*.pyc" -delete
Run make to install the packages in requirements.txt.
Run make test to run your tests (update this command if your tests live somewhere else).
Run make clean to delete all artifacts.
In make you can run a shell as a command. In this shell you can do everything you can do in a shell started from the command line. Example:
install:
	( \
	source path/to/virtualenv/bin/activate; \
	pip install -r requirements.txt; \
	)
Attention must be paid to the ; and the \.
Everything between the opening and closing parentheses is executed in a single instance of a shell.
Normally make runs every command in a recipe in a different subshell. However, setting .ONESHELL: will run all the commands in a recipe in the same subshell, allowing you to activate a virtualenv and then run commands inside it.
Note that .ONESHELL: applies to the whole Makefile, not just a single recipe. It may change the behaviour of existing commands; see the full documentation for details of possible errors. This will not let you activate a virtualenv for use outside the Makefile, since the commands are still run inside a subshell.
Reference documentation: https://www.gnu.org/software/make/manual/html_node/One-Shell.html
Example:
.ONESHELL:
.PHONY: install
install:
	source path/to/virtualenv/bin/activate
	pip install -r requirements.txt
I have had luck with this.
install:
	source ./path/to/bin/activate; \
	pip install -r requirements.txt
This is an alternate way to run things that you want to run in virtualenv.
BIN=venv/bin/

install:
	$(BIN)pip install -r requirements.txt

run:
	$(BIN)python main.py
PS: This doesn't activate the virtualenv, but it gets things done. Hope you find it clean and useful.
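For completeness, a typical session with this Makefile might look like the following, assuming the venv directory was created beforehand (the virtualenv venv step is my assumption, not part of the Makefile):
$ virtualenv venv
$ make install
$ make run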
I like to set my Makefile up so that it uses a .venv directory if one exists, but defaults to using the PATH.
For local development, I like to use a virtual environment, so I run:
# Running this: # Actually runs this:
make venv # /usr/bin/python3 -m venv .venv
make deps # .venv/bin/python setup.py develop
make test # .venv/bin/python -m tox
If I'm installing into a container though, or into my machine, I might bypass the virtual environment by skipping make venv:
# Running this: # Actually runs this:
make deps # /usr/bin/python3 setup.py develop
make test # /usr/bin/python3 -m tox
Setup
At the top of your Makefile, define these variables:
VENV = .venv
VENV_PYTHON = $(VENV)/bin/python
SYSTEM_PYTHON = $(or $(shell which python3), $(shell which python))
# If virtualenv exists, use it. If not, find python using PATH
PYTHON = $(or $(wildcard $(VENV_PYTHON)), $(SYSTEM_PYTHON))
If .venv exists, you get:
VENV = .venv
VENV_PYTHON = .venv/bin/python
SYSTEM_PYTHON = /usr/bin/python3
PYTHON = .venv/bin/python
If not, you get:
VENV = .venv
VENV_PYTHON = .venv/bin/python
SYSTEM_PYTHON = /usr/bin/python3
PYTHON = /usr/bin/python3
Note: /usr/bin/python3 could be something else on your system, depending on your PATH.
Then, where needed, run python stuff like this:
$(PYTHON) -m tox
$(PYTHON) -m pip ...
You might want to create a target called "venv" that creates the .venv directory:
$(VENV_PYTHON):
	rm -rf $(VENV)
	$(SYSTEM_PYTHON) -m venv $(VENV)

venv: $(VENV_PYTHON)
And a deps target to install dependencies:
deps:
	$(PYTHON) setup.py develop
	# or whatever you need:
	#$(PYTHON) -m pip install -r requirements.txt
Example
Here's my Makefile:
# Variables
VENV = .venv
VENV_PYTHON = $(VENV)/bin/python
SYSTEM_PYTHON = $(or $(shell which python3), $(shell which python))
PYTHON = $(or $(wildcard $(VENV_PYTHON)), $(SYSTEM_PYTHON))

## Dev/build environment
$(VENV_PYTHON):
	rm -rf $(VENV)
	$(SYSTEM_PYTHON) -m venv $(VENV)

venv: $(VENV_PYTHON)

deps:
	$(PYTHON) -m pip install --upgrade pip
	# Dev dependencies
	$(PYTHON) -m pip install tox pytest
	# Dependencies
	$(PYTHON) setup.py develop

.PHONY: venv deps

## Lint, test
test:
	$(PYTHON) -m tox

lint:
	$(PYTHON) -m tox -e lint

lintfix:

.PHONY: test lint lintfix

## Build source distribution, install
sdist:
	$(PYTHON) setup.py sdist

install:
	$(SYSTEM_PYTHON) -m pip install .

.PHONY: build install

## Clean
clean:
	rm -rf .tox *.egg-info dist

.PHONY: clean
Based on the answers above (thanks #Saurabh and #oneself!) I've written a reusable Makefile that takes care of creating the virtual environment and keeping it updated: https://github.com/sio/Makefile.venv
It works by referencing correct executables within virtualenv and does not rely on the "activate" shell script. Here is an example:
test: venv
	$(VENV)/python -m unittest

include Makefile.venv
Differences between Windows and other operating systems are taken into account; Makefile.venv should work fine on any OS that provides Python and make.
You can also use the environment variable called "VIRTUALENVWRAPPER_SCRIPT", like this:
install:
	( \
	source $$VIRTUALENVWRAPPER_SCRIPT; \
	pip install -r requirements.txt; \
	)
A bit late to the party but here's my usual setup:
# system python interpreter. used only to create virtual environment
PY = python3
VENV = venv
BIN=$(VENV)/bin

# make it work on windows too
ifeq ($(OS), Windows_NT)
BIN=$(VENV)/Scripts
PY=python
endif

all: lint test

$(VENV): requirements.txt requirements-dev.txt setup.py
	$(PY) -m venv $(VENV)
	$(BIN)/pip install --upgrade -r requirements.txt
	$(BIN)/pip install --upgrade -r requirements-dev.txt
	$(BIN)/pip install -e .
	touch $(VENV)

.PHONY: test
test: $(VENV)
	$(BIN)/pytest

.PHONY: lint
lint: $(VENV)
	$(BIN)/flake8

.PHONY: release
release: $(VENV)
	$(BIN)/python setup.py sdist bdist_wheel upload

clean:
	rm -rf $(VENV)
	find . -type f -name '*.pyc' -delete
	find . -type d -name __pycache__ -delete
I did a more detailed writeup on that, but basically the idea is that you use the system's Python to create the virtual environment, and for the other targets you just prefix your command with the $(BIN) variable, which points to the bin or Scripts directory inside your venv. This is equivalent to activating the environment.
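To make that equivalence concrete, these two invocations run the same interpreter (main.py is just a placeholder script):
$ source venv/bin/activate && python main.py
$ venv/bin/python main.py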
I found prepending to $PATH and adding $VIRTUAL_ENV was the best route:
No need to clutter up recipes with activate and constrain oneself to ; chaining
Shown here and here
You can simply use python as you would normally, and it will fall back on the system Python
No need for third-party packages
Compatible with both Windows (if using bash) and POSIX
# SYSTEM_PYTHON defaults to Python on the local machine
SYSTEM_PYTHON = $(shell which python)
REPO_ROOT = $(shell pwd)
# Specify with REPO_ROOT so recipes can safely change directories
export VIRTUAL_ENV := ${REPO_ROOT}/venv
# bin = POSIX, Scripts = Windows
export PATH := ${VIRTUAL_ENV}/bin:${VIRTUAL_ENV}/Scripts:${PATH}
And for those interested in example usages:
# SEE: http://redsymbol.net/articles/unofficial-bash-strict-mode/
SHELL=/bin/bash -euo pipefail
.DEFAULT_GOAL := fresh-install
show-python: ## Show path to python and version.
	@echo -n "python location: "
	@python -c "import sys; print(sys.executable, end='')"
	@echo -n ", version: "
	@python -c "import platform; print(platform.python_version())"

show-venv: show-python
show-venv: ## Show output of python -m pip list.
	python -m pip list

install: show-python
install: ## Install all dev dependencies into a local virtual environment.
	python -m pip install -r requirements-dev.txt --progress-bar off

fresh-install: ## Run a fresh install into a local virtual environment.
	-rm -rf venv
	$(SYSTEM_PYTHON) -m venv venv
	@$(MAKE) install
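With those targets in place, a typical bootstrap would look something like this (fresh-install is the default goal, so a plain make works too):
$ make fresh-install
$ make show-venv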
You can use this; it's working for me at the moment.
report.ipynb : merged.ipynb
	( bash -c "source ${HOME}/anaconda3/bin/activate py27; which -a python; \
	jupyter nbconvert \
	--to notebook \
	--ExecutePreprocessor.kernel_name=python2 \
	--ExecutePreprocessor.timeout=3000 \
	--execute merged.ipynb \
	--output=$< $<" )

Compiling Python 2.6.6 and need for external packages wxPython, setuptools, etc... in Ubuntu

I compiled Python 2.6.6 with the google-perftools (tcmalloc) library to eliminate some of the memory issues I was having with the default 2.6.5. After getting 2.6.6 going, it seems not to work, because I think it's having issues with the default 2.6.5 install in Ubuntu. Will none of the binaries installed from the software channel, like wxPython and setuptools, work properly with 2.6.6? Do these need to be recompiled? Any other suggestions to get it working smoothly? Can I still set 2.6.5 as default without changing the PATH? The path looks in /usr/local/bin first.
A good general rule of thumb is to NEVER use the default system-installed Python for any software development beyond miscellaneous system admin scripts. This applies on all UNIXes, including Linux and OS X.
Instead, build a good Python distro that you control, with the libraries (Python and C) that you need, and install this tarball in a non-system directory such as /opt/devpy or /data/package/python or /home/python. And why mess with 2.6 when 2.7.2 is available?
And when you are building it, make sure that all of its dependencies are in its own directory tree (RPATH) and that any system dependencies (.so files) are copied into its directory tree. Here is my version. It might not work if you just run the whole shell script. I always copy and paste sections of this into a terminal window and verify that each step worked OK. Make sure your terminal properties are set to allow lots of lines of scrollback, or only paste a couple of lines at a time.
(Actually, after making a few tweaks I think this may be runnable as a script; however, I would recommend something like ./pybuild.sh >pylog 2>&1 so you can comb through the output and verify that everything built OK.)
This was built on Ubuntu 64 bit
#!/bin/bash
shopt -s compat40
export WGET=echo
#uncomment the following if you are running for the first time
export WGET=wget
sudo apt-get -y install build-essential
sudo apt-get -y install zlib1g-dev libxml2-dev libxslt1-dev libssl-dev libncurses5-dev
sudo apt-get -y install libreadline6-dev autotools-dev autoconf automake libtool
sudo apt-get -y install libsvn-dev mercurial subversion git-core
sudo apt-get -y install libbz2-dev libgdbm-dev sqlite3 libsqlite3-dev
sudo apt-get -y install curl libcurl4-gnutls-dev
sudo apt-get -y install libevent-dev libev-dev librrd4 rrdtool
sudo apt-get -y install uuid-dev libdb4.8-dev memcached libmemcached-dev
sudo apt-get -y install libmysqlclient-dev libexpat1-dev
cd ~
$WGET 'http://code.google.com/p/google-perftools/downloads/detail?name=google-perftools-1.7.tar.gz'
$WGET http://www.python.org/ftp/python/2.7.2/Python-2.7.2.tgz
tar zxvf Python-2.7.2.tgz
cd Python-2.7.2
#following is needed if you have an old version of Mercurial installed
#export HAS_HG=not-found
# To provide a uniform build environment
unset PYTHONPATH PYTHONSTARTUP PYTHONHOME PYTHONCASEOK PYTHONIOENCODING
unset LD_RUN_PATH LD_LIBRARY_PATH LD_DEBUG LD_TRACE_LOADED_OBJECTS
unset LD_PRELOAD SHLIB_PATH LD_BIND_NOW LD_VERBOSE
## figure out whether this is a 32 bit or 64 bit system
m=`uname -m`
if [[ $m =~ .*64 ]]; then
export CC="gcc -m64"
NBITS=64
elif [[ $m =~ .*86 ]]; then
export CC="gcc -m32"
NBITS=32
else # we are confused so bail out
echo $m
exit 1
fi
# some stuff related to distro independent build
# extra_link_args = ['-Wl,-R/data1/python27/lib']
#--enable-shared and a relative
# RPATH[0] (eg LD_RUN_PATH='${ORIGIN}/../lib')
export TARG=/data1/packages/python272
export TCMALLOC_SKIP_SBRK=true
#export CFLAGS='-ltcmalloc' # Google's fast malloc
export COMMONLDFLAGS='-Wl,-rpath,\$$ORIGIN/../lib -Wl,-rpath-link,\$$ORIGIN:\$$ORIGIN/../lib:\$$ORIGIN/../../lib -Wl,-z,origin -Wl,--enable-new-dtags'
# -Wl,-dynamic-linker,$TARG/lib/ld-linux-x86-64.so.2
export LDFLAGS=$COMMONLDFLAGS
./configure --prefix=$TARG --with-dbmliborder=bdb:gdbm --enable-shared --enable-ipv6
# if you have ia32-libs installed on a 64-bit system
#export COMMONLDFLAGS="-L/lib32 -L/usr/lib32 -L`pwd`/lib32 -Wl,-rpath,$TARG/lib32 -Wl,-rpath,$TARG/usr/lib32"
make
# ignore failure to build the following since they are obsolete or deprecated
# _tkinter bsddb185 dl imageop sunaudiodev
#install it and collect any dependency libraries - not needed with RPATH
sudo mkdir -p $TARG
sudo chown `whoami`.users $TARG
make install
# collect binary libraries ##REDO THIS IF YOU ADD ANY ADDITIONAL MODULES##
function collect_binary_libs {
cd $TARG
find . -name '*.so' | sed 's/^/ldd -v /' >elffiles
echo "ldd -v bin/python" >>elffiles
chmod +x elffiles
./elffiles | sed 's/.*=> //;s/ .*//;/:$/d;s/^ *//' | sort -u | sed 's/.*/cp -L & lib/' >lddinfo
# mkdir lib
chmod +x lddinfo
./lddinfo
cd ~
}
collect_binary_libs
#set the path
cd ~
export PATH=$TARG/bin:$PATH
#installed setuptools
$WGET http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg
chmod +x setuptools-0.6c11-py2.7.egg
./setuptools-0.6c11-py2.7.egg
#installed virtualenv
tar zxvf virtualenv-1.6.1.tar.gz
cd virtualenv-1.6.1
python setup.py install
cd ~
# created a base virtualenv that should work for almost all projects
# we make it relocatable in case its location in the filesystem changes.
cd ~
python virtualenv-1.6.1/virtualenv.py /data1/py27base # first make it
python virtualenv-1.6.1/virtualenv.py --relocatable /data1/py27base #then relocatabilize
# check it out
source ~/junk/bin/activate
python --version
# fill the virtualenv with useful modules
# watch out for binary builds that may have dependency problems
export LD_RUN_PATH='\$$ORIGIN:\$$ORIGIN/../lib:\$$ORIGIN/../../lib'
easy_install pip
pip install cython
pip install lxml
pip install httplib2
pip install python-memcached
pip install amqplib
pip install kombu
pip install carrot
pip install py_eventsocket
pip install haigha
# extra escaping of $ signs
export LDFLAGS='-Wl,-rpath,\$\$$ORIGIN/../lib:\$\$$ORIGIN/../../lib -Wl,-rpath-link,\$\$$ORIGIN/../lib -Wl,-z,origin -Wl,--enable-new-dtags'
# even more complex to build this one since we need some autotools and
# have to pull source from a repository
mkdir rabbitc
cd rabbitc
hg clone http://hg.rabbitmq.com/rabbitmq-codegen/
hg clone http://hg.rabbitmq.com/rabbitmq-c/
cd rabbitmq-c
autoreconf -i
make clean
./configure --prefix=/usr
make
sudo make install
cd ~
# for zeromq we get the latest source of the library
$WGET http://download.zeromq.org/zeromq-2.1.7.tar.gz
tar zxvf zeromq-2.1.7.tar.gz
cd zeromq-2.1.7
make clean
./configure --prefix=/usr
make
sudo make install
cd ~
# need less escaping of $ signs
export LDFLAGS='-Wl,-rpath,\$ORIGIN/../lib:\$ORIGIN/../../lib -Wl,-rpath-link,\$ORIGIN/../lib -Wl,-z,origin -Wl,--enable-new-dtags'
pip install pyzmq
pip install pylibrabbitmq # need to build C library and install first
pip install pylibmc
pip install pycurl
export LDFLAGS=$COMMONLDFLAGS
pip install cherrypy
pip install pyopenssl # might need some ldflags on this one?
pip install diesel
pip install eventlet
pip install fapws3
pip install gevent
pip install boto
pip install jinja2
pip install mako
pip install paste
pip install twisted
pip install flup
pip install pika
pip install pymysql
# pip install py-rrdtool # not on 64 bit???
pip install PyRRD
pip install tornado
pip install redis
# for tokyocabinet we need the latest source of the library
$WGET http://fallabs.com/tokyocabinet/tokyocabinet-1.4.47.tar.gz
tar zxvf tokyocabinet-1.4.47.tar.gz
cd tokyocabinet-1.4.47
make clean
./configure --prefix=/usr --enable-devel
make
sudo make install
cd ..
$WGET http://fallabs.com/tokyotyrant/tokyotyrant-1.1.41.tar.gz
tar zxvf tokyotyrant-1.1.41.tar.gz
cd tokyotyrant-1.1.41
make clean
./configure --prefix=/usr --enable-devel
make
sudo make install
cd ..
pip install tokyo-python
pip install solrpy
pip install pysolr
pip install sunburnt
pip install txamqp
pip install littlechef
pip install PyChef
pip install pyvb
pip install bottle
pip install werkzeug
pip install BeautifulSoup
pip install XSLTools
pip install numpy
pip install coverage
pip install pylint
# pip install PyChecker ???
pip install pycallgraph
pip install mkcode
pip install pydot
pip install sqlalchemy
pip install buzhug
pip install flask
pip install restez
pip install pytz
pip install mcdict
# need less escaping of $ signs
pip install py-interface
# pip install paramiko # pulled in by another module
pip install pexpect
# SVN interface
$WGET http://pysvn.barrys-emacs.org/source_kits/pysvn-1.7.5.tar.gz
tar zxvf pysvn-1.7.5.tar.gz
cd pysvn-1.7.5/Source
python setup.py backport
python setup.py configure
make
cd ../Tests
make
cd ../Sources
mkdir -p $TARG/lib/python2.7/site-packages/pysvn
cp pysvn/__init__.py $TARG/lib/python2.7/site-packages/pysvn
cp pysvn/_pysvn_2_7.so $TARG/lib/python2.7/site-packages/pysvn
cd ~
# pip install protobuf #we have to do this the hard way
$WGET http://protobuf.googlecode.com/files/protobuf-2.4.1.zip
unzip protobuf-2.4.1.zip
cd protobuf-2.4.1
make clean
./configure --prefix=/usr
make
sudo make install
cd python
python setup.py install
cd ~
pip install riak
pip install ptrace
pip install html5lib
pip install metrics
#redo the "install binary libraries" step
collect_binary_libs
# link binaries in the lib directory to avoid search path errors and also
# to reduce the number of false starts to find the library
for i in `ls $TARG/lib/python2.7/lib-dynload/*.so`
do
ln -f $i $TARG/lib/`basename $i`
done
# for the same reason link the whole lib directory to some other places in the tree
ln -s ../.. $TARG/lib/python2.7/site-packages/lib
# bundle it up and save it for packaging
cd /
tar cvf - .$TARG |gzip >~/py272-$NBITS.tar.gz
cd ~
# after untarring on another machine, we have a program call imports.py which imports
# every library as a quick check that it works. For a more positive check, run it like this
# strace -e trace=stat,fstat,open python imports.py >strace.txt 2>&1
# grep -v ' = -1' strace.txt |grep 'open(' >opens.txt
# sed <opens.txt 's/^open("//;s/".*//' |sort -u |grep -v 'dynload' |grep '\.so' >straced.txt
# ls -1d /data1/packages/python272/lib/* |sort -u >lib.txt
# then examine the strace output to see how many places it searches before finding it.
# a successful library load will be a call to open that doesn't end with ' = -1'
# If it takes too many tries to find a particular library, then another symbolic link may
# be a good idea
I'm pretty sure you have to compile wxPython against the version of Python that you want to use it with. That's always been the case for anyone else who has done something like this on the wxPython mailing list. I think that applies to most packages, especially if they have any C/C++ components, like wxPython does. Pure Python packages can sometimes be transferred from one version to the next intact, in my experience.
There are fairly extensive wxPython build instructions here: http://wxpython.org/BUILD-2.8.html
Robin Dunn and others on the wxPython mailing list are very helpful if you run into any problems.
If you compiled 2.6.6 and installed 2.6.5 from the repos, then Ubuntu is having a conflict finding which Python you're using.
