How can I install the FEniCS dolfin module?

So I'm trying to install FEniCS from the instructions here. I did the
pip3 install fenics-ffc --upgrade
inside my virtualenv and it worked, but when I try to import dolfin I get a ModuleNotFoundError. I'm not sure how to get dolfin installed. I did
pip install pybind11
to get pybind11 installed, then copied the dolfin installation commands into my cmd prompt:
FENICS_VERSION=$(python3 -c"import ffc; print(ffc.__version__)")
git clone --branch=$FENICS_VERSION https://bitbucket.org/fenics-project/dolfin
git clone --branch=$FENICS_VERSION https://bitbucket.org/fenics-project/mshr
mkdir dolfin/build && cd dolfin/build && cmake .. && make install && cd ../..
mkdir mshr/build && cd mshr/build && cmake .. && make install && cd ../..
cd dolfin/python && pip3 install . && cd ../..
cd mshr/python && pip3 install . && cd ../..
but it just spat out dozens of errors like:
FENICS_VERSION=$(python3 -c"import ffc; print(ffc.__version__)") 'FENICS_VERSION' is not recognized as an internal or external command, operable program or batch file.
git clone --branch=$FENICS_VERSION https://bitbucket.org/fenics-project/dolfin Cloning into 'dolfin'...
fatal: Remote branch $FENICS_VERSION not found in upstream origin
git clone --branch=$FENICS_VERSION https://bitbucket.org/fenics-project/mshr Cloning into 'mshr'...
fatal: Remote branch $FENICS_VERSION not found in upstream origin
There were lots more errors after that too. Am I not supposed to paste the dolfin commands into cmd? I don't know much about this stuff, so I'm unsure how to get the dolfin module. I've previously only used pip to get my packages, but that doesn't work for dolfin as it doesn't appear to be on PyPI.

Do you have CMake? The docs say you need it. They also say to do this to install pybind11, not pip install pybind11:
For building optional Python interface of DOLFIN and mshr, pybind11 is needed since version 2018.1.0. To install it:
wget -nc --quiet https://github.com/pybind/pybind11/archive/v${PYBIND11_VERSION}.tar.gz
tar -xf v${PYBIND11_VERSION}.tar.gz && cd pybind11-${PYBIND11_VERSION}
mkdir build && cd build && cmake -DPYBIND11_TEST=off .. && make install
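Note that ${PYBIND11_VERSION} is not set by those commands themselves, so something like this is needed first (2.2.3 is only an assumed example; use whatever version the DOLFIN docs pin):
export PYBIND11_VERSION=2.2.3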
Also, what is your OS? Errors like 'FENICS_VERSION' is not recognized as an internal or external command suggest you pasted these commands into the Windows cmd prompt, which doesn't understand bash syntax such as $(...) or $FENICS_VERSION; they need to run in a bash shell (WSL, for example).

So here is how you can install fenics 2019.1 using conda (miniconda):
Install Conda:
First go to https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html
and follow the installation instructions.
Create a conda environment for fenics:
Open a terminal and type:
conda create -n fenics
To activate the created environment "fenics", type:
conda activate fenics
If you want the fenics environment to be activated automatically every time you open a new terminal, then open your .bashrc file (should be under /home/username/.bashrc) and add the line "source activate fenics" below the ">>> conda initialize >>>" block.
Install fenics:
Type all these commands:
conda install -c conda-forge h5py=*=*mpich*
conda install -c conda-forge fenics
pip install meshio
pip install matplotlib
pip install --upgrade gmsh
conda install -c conda-forge paraview
pip install scipy
The second command will take a while. I added a few nice-to-have programs like gmsh and paraview, which will help you create meshes and view your solutions.
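To confirm the install worked, a quick import check from inside the environment (a minimal sketch):
conda activate fenics
python -c "import dolfin; print(dolfin.__version__)"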

Related

How do I install dateinfer inside my Docker image

Some background: I'm new to Docker images and containers and to writing a Dockerfile. I currently have a Dockerfile which installs all the dependencies I want through pip install commands, so it was very simple to build and deploy images.
But I now have a new requirement to use the dateinfer module, and that cannot be installed through a pip install command.
The repo has to be cloned first and then installed, and I'm having difficulty achieving this through a Dockerfile. The workaround I've been following for now is to run the container, install the module manually in the directory with all the other dependencies, and commit the changes with dateinfer installed. But this is a very tedious and time-consuming process, and I want to achieve the same thing by just declaring it in the Dockerfile along with all my other dependencies.
This is what my Dockerfile looks like:
FROM ubuntu:20.04
RUN apt update
RUN apt upgrade -y
RUN apt-get install -y python3
RUN apt-get install -y python3-pip
RUN DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt-get -y install tzdata
RUN apt-get install -y libenchant1c2a
RUN apt install git -y
RUN pip3 install argparse
RUN pip3 install boto3
RUN pip3 install numpy==1.19.1
RUN pip3 install scipy
RUN pip3 install pandas
RUN pip3 install scikit-learn
RUN pip3 install matplotlib
RUN pip3 install plotly
RUN pip3 install kaleido
RUN pip3 install fpdf
RUN pip3 install regex
RUN pip3 install pyenchant
RUN pip3 install openpyxl
ADD core.py /
ENTRYPOINT [ "/usr/bin/python3.8", "/core.py" ]
So when I try to install Dateinfer like this:
RUN git clone https://github.com/nedap/dateinfer.git
RUN cd dateinfer
RUN pip3 install .
It throws the following error :
ERROR: Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
The command '/bin/sh -c pip3 install .' returned a non-zero code: 1
How do I solve this?
Each RUN directive in a Dockerfile runs in its own subshell. If you write something like this:
RUN cd dateinfer
That is a no-op: it starts a new shell, changes directory, and then the shell exits. When the next RUN command executes, you're back in the / directory.
The easiest way of resolving this is to include your commands in a single RUN statement:
RUN git clone https://github.com/nedap/dateinfer.git && \
    cd dateinfer && \
    pip3 install .
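Alternatively, pip can install directly from a git URL, which avoids the separate clone step entirely (a sketch, assuming the repository's default branch contains the packaging files):
RUN pip3 install git+https://github.com/nedap/dateinfer.git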
In fact, you would benefit from doing this with your other pip install commands as well; rather than a bunch of individual RUN
commands, consider instead:
RUN pip3 install \
    argparse \
    boto3 \
    numpy==1.19.1 \
    scipy \
    pandas \
    scikit-learn \
    matplotlib \
    plotly \
    kaleido \
    fpdf \
    regex \
    pyenchant \
    openpyxl
That will generally be faster because pip only needs to resolve
dependencies once.
Rather than specifying all the packages individually on the command
line, you could also put them into a requirements.txt file, and then
use pip install -r requirements.txt.
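For example (a sketch; the file name is the usual convention):
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt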

colour methods not accessible after pip install of colour-science

Executed the following commands:
!pip install -q colour-science
!pip install -q matplotlib
# Uncomment the following lines for the latest develop branch content.
!pip uninstall -y colour-science
!if ! [ -d "colour" ]; then git clone https://github.com/colour-science/colour; fi
!if [ -d "colour" ]; then cd colour && git fetch && git checkout develop && git pull && cd ..; fi
import sys
sys.path.append('colour')
import colour
works. However, when I call methods, they can no longer be found. Examples of such errors:
ModuleNotFoundError: No module named 'colour.plotting'; 'colour' is not a package
ModuleNotFoundError: No module named 'colour.utilities'; 'colour' is not a package
This method of installing Colour is the one we use for our Google Colab environments and should not be used on personal machines. My guess is that you did a pip install colour instead of pip install colour-science.
You should instead create a virtual environment and install everything there. We use poetry, but assuming you only have virtualenv:
virtualenv oklabtests
source oklabtests/bin/activate
pip3 install git+https://github.com/colour-science/colour#develop
Also worth noting that the main repo is available at https://github.com/colour-science/colour.
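After installing, a quick check that the package being imported is the one in the virtualenv rather than a stray clone or the unrelated colour wheel (a sketch):
python -c "import colour; print(colour.__file__, colour.__version__)"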
Try installing this and then check:
pip3 install git+https://github.com/crowsonkb/colour#JMh
If you still get an error, please let me know.

docker build fails intermittently due to newly installed conda being unable to find libraries

I am building a docker image that uses conda (conda 4.5.10 and miniconda 4.5.4 are installed on the base image) to create an environment and install some packages. I am having a problem whereby, after the conda create, this subsequent docker command fails:
RUN source /opt/conda/bin/activate /opt/conda/envs/python2 && \
    pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir --upgrade -r /requirements_py2_pip.txt && \
    rm /requirements_py2_pip.txt
Error is:
Step 12/17 : RUN source /opt/conda/bin/activate /opt/conda/envs/python2 && pip install --no-cache-dir --upgrade pip && pip install --no-cache-dir --upgrade -r /requirements_py2_pip.txt && rm /requirements_py2_pip.txt
---> Running in e828112b4ae0
Could not find platform independent libraries
Could not find platform dependent libraries
Consider setting $PYTHONHOME to [:]
Fatal Python error: Py_Initialize: Unable to get the locale encoding
ModuleNotFoundError: No module named 'encodings'
Current thread 0x00007fe6d4851700 (most recent call first):
The command '/bin/bash -c source /opt/conda/bin/activate /opt/conda/envs/python2 && pip install --no-cache-dir --upgrade pip && pip install --no-cache-dir --upgrade -r /requirements_py2_pip.txt && rm /requirements_py2_pip.txt' returned a non-zero code: 134
The fact that it fails intermittently really bugged me; if it failed every time I wouldn't mind so much, but it fails approximately 50% of the time. I googled around and found this SO thread: ImportError: No module named 'encodings', which suggested it may be a problem with the conda environments. Hence, I added this docker command prior to the one that was failing intermittently:
RUN conda env list
Annoyingly that command fails intermittently too. When it succeeds it returns:
Step 12/18 : RUN conda env list
---> Running in 05eaa6f726a9
# conda environments:
#
base * /opt/conda
python3 /opt/conda/envs/python3
python3.6 /opt/conda/envs/python3.6
When it fails it returns:
Step 12/18 : RUN conda env list
---> Running in 20247acc3824
Could not find platform independent libraries
Could not find platform dependent libraries
Consider setting $PYTHONHOME to [:]
Fatal Python error: Py_Initialize: Unable to get the locale encoding
ModuleNotFoundError: No module named 'encodings'
Current thread 0x00007f7a5f5ef700 (most recent call first):
The command '/bin/bash -c conda env list' returned a non-zero code: 139
As you will see, this is very similar to the previous error. This suggested to me that there was a problem with the conda environment, which is created using this command:
# Create a Python 2.x environment using conda including at least the ipython kernel
# and the kernda utility. Add any additional packages you want available for use
# in a Python 2 notebook to the first line here (e.g., pandas, matplotlib, etc.)
RUN conda create --quiet --yes -p $CONDA_DIR/envs/python2 python=2.7 --file requirements_py2_conda.txt && \
    source /opt/conda/bin/activate /opt/conda/envs/python2 && \
    conda remove --quiet --yes --force qt pyqt && \
    conda clean -tipsy && \
    rm -rf $CONDA_DIR/envs/python2/share/jupyter/lab/staging && \
    rm -rf /usr/local/share/.cache /tmp/* /opt/conda/pkgs/* /opt/conda/envs/python2/pkgs/* \
        /requirements_py2_conda.txt /opt/conda/envs/python3/pkgs/*
I compared the output of that command in both a successful and a failed build. Both install the same packages; they just happen to install them in a different order.
By the way, requirements_py2_conda.txt contains:
beautifulsoup4=4.6.*
bokeh=0.13.*
bz2file=0.98
cloudpickle=0.5.*
colour=0.1.*
configparser=3.5.*
cython=0.28.*
dill=0.2.*
fastparquet=0.1.*
future=0.16.*
gensim=3.4.*
graphviz=2.40.*
h5py=2.8.*
hdf5=1.10.*
imageio=2.3.*
ipykernel=4.8.*
ipython=5.8.*
ipywidgets=7.4.*
keras=2.2.*
lxml=4.2.*
matplotlib=2.2.*
mysqlclient=1.3.*
mpld3=0.3
nltk=3.3.*
nose=1.3.*
numba=0.39.*
numexpr=2.6.*
numpy=1.15.*
pandas=0.23.*
pathlib2=2.3.*
patsy=0.5.*
pexpect=4.6.*
pivottablejs=0.9.*
protobuf=3.*
pyemd=0.5.*
pymc3=3.5
pyparsing=2.2.*
pystan=2.17.*
pytest=3.7.*
python=2.7.*
pylint
py-xgboost=0.72.*
pyyaml=3.13
requests=2.19.*
scandir=1.*
scikit-image=0.14.*
scikit-learn=0.19.*
scipy=1.1.*
seaborn=0.9.*
sh=1.12.*
simplegeneric=0.8.*
singledispatch=3.4.0.*
six=1.11.*
sortedcontainers=2.0.*
sqlalchemy=1.2.*
SQLAlchemy=1.2.*
statsmodels=0.9.*
subprocess32=3.5.*
sympy=1.2.*
tabulate=0.8.*
tensorflow=1.10.*
texlive-core=20180414
theano=1.0.*
widgetsnbextension=3.4.*
xlrd=1.1.*
The error message suggests
Consider setting $PYTHONHOME to [:]
To see if that was the issue, I added:
RUN echo $PYTHONHOME
This resulted in the same output (i.e. empty PYTHONHOME) both when the build succeeds and when it fails:
Step 12/20 : RUN echo $PYTHONHOME
---> Running in 59ce6f2776c7
Removing intermediate container 59ce6f2776c7
---> cee1ad9f695e
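A broader variant of that check, which I could also try, would dump every interpreter-related variable in one RUN (a sketch; the grep pattern is just a guess at what is relevant):
RUN env | sort | grep -iE 'python|conda|path' || true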
I'm now stumped. I don't know what to do next to investigate, and I'm not familiar with conda (I didn't write the Dockerfile, but these intermittent failures are blocking me, hence I'm investigating). Any suggestions of things I could do to try and uncover the problem would be welcome.

Create base docker centos image with python 2.7.8

I've found this guide, which walks you through creating a bare-metal CentOS base image. However, I want to install some additional yum packages, and to download and build Python 2.7.8.
I already had this working in a Dockerfile:
# Set the base image to CentOS
FROM centos:7
# File Author / Maintainer
MAINTAINER Sam Mohamed
# Update the sources list
RUN yum -y update
RUN yum install -y zlib-devel openssl-devel sqlite-devel bzip2-devel xz-libs gcc gcc-c++ make
# Install Python 2.7.8
RUN curl -o /root/Python-2.7.9.tar.xz https://www.python.org/ftp/python/2.7.9/Python-2.7.9.tar.xz
RUN tar -xf /root/Python-2.7.9.tar.xz -C /root
RUN cd /root/Python-2.7.9 && ./configure --prefix=/usr/local && make && make altinstall
# Copy the application folder inside the container
ADD `pwd` /opt/iws_project
# Download Setuptools and install pip and virtualenv
RUN wget https://bootstrap.pypa.io/ez_setup.py -O - | /usr/local/bin/python2.7
RUN /usr/local/bin/easy_install-2.7 pip
RUN /usr/local/bin/pip2.7 install virtualenv
# Create virtualenv and install requirements:
RUN /usr/local/bin/virtualenv /opt/iws_project/venv && source /opt/iws_project/venv/bin/activate && pip install -r /opt/iws_project/requirements.txt
How can I convert the above into a base image?
You are probably better off building the given Dockerfile and using the resulting image as the base for future images. This is much easier to maintain and doesn't really cost anything in terms of resource use.
But if you really want to create a single-layer "base image", the steps are as follows:
Install everything you want into some directory (docker-centos-65/ in the linked tutorial).
You can modify the febootstrap command from the tutorial you linked to install additional yum packages by specifying more -i flags.
You can perform any other custom installs (e.g. Python) manually, just make sure everything ends up in the same root directory
Create a tar archive of the directory where everything is installed, and pipe this to the docker import command:
tar c -C docker-centos-65/ . | docker import - my-base-image
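Once imported, the result can be used like any other image; for example (a sketch, using the python2.7 binary that make altinstall places under /usr/local):
docker run --rm my-base-image /usr/local/bin/python2.7 --version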

Compiling Python 2.6.6 and need for external packages wxPython, setuptools, etc... in Ubuntu

I compiled Python 2.6.6 with the google-perftools (tcmalloc) library to eliminate some of the memory issues I was having with the default 2.6.5. After getting 2.6.6 going, it doesn't seem to work, which I think is because of issues with the default 2.6.5 install in Ubuntu. Will any of the binaries installed from the software channel, like wxPython and setuptools, work properly with 2.6.6, or do they need to be recompiled? Any other suggestions to get it working smoothly? Can I still set 2.6.5 as the default without changing the PATH? The path looks in /usr/local/bin first.
A good general rule of thumb is to NEVER use the default system-installed Python for any software development beyond miscellaneous system admin scripts. This applies on all UNIXes, including Linux and OS X.
Instead, build a good Python distro that you control, with the libraries (Python and C) that you need, and install this tarball in a non-system directory such as /opt/devpy or /data/package/python or /home/python. And why mess with 2.6 when 2.7.2 is available?
And when you are building it, make sure that all of its dependencies are in its own directory tree (RPATH) and that any system dependencies (.so files) are copied into its directory tree. Here is my version. It might not work if you just run the whole shell script. I always copy and paste sections of this into a terminal window and verify that each step worked OK. Make sure your terminal properties are set to allow lots of lines of scrollback, or only paste a couple of lines at a time.
(Actually, after making a few tweaks I think this may be runnable as a script; however, I would recommend something like ./pybuild.sh >pylog 2>&1 so you can comb through the output and verify that everything built OK.)
This was built on Ubuntu 64 bit
#!/bin/bash
shopt -s compat40
export WGET=echo
#uncomment the following if you are running for the first time
export WGET=wget
sudo apt-get -y install build-essential
sudo apt-get -y install zlib1g-dev libxml2-dev libxslt1-dev libssl-dev libncurses5-dev
sudo apt-get -y install libreadline6-dev autotools-dev autoconf automake libtool
sudo apt-get -y install libsvn-dev mercurial subversion git-core
sudo apt-get -y install libbz2-dev libgdbm-dev sqlite3 libsqlite3-dev
sudo apt-get -y install curl libcurl4-gnutls-dev
sudo apt-get -y install libevent-dev libev-dev librrd4 rrdtool
sudo apt-get -y install uuid-dev libdb4.8-dev memcached libmemcached-dev
sudo apt-get -y install libmysqlclient-dev libexpat1-dev
cd ~
$WGET 'http://code.google.com/p/google-perftools/downloads/detail?name=google-perftools-1.7.tar.gz'
$WGET http://www.python.org/ftp/python/2.7.2/Python-2.7.2.tgz
tar zxvf Python-2.7.2.tgz
cd Python-2.7.2
#following is needed if you have an old version of Mercurial installed
#export HAS_HG=not-found
# To provide a uniform build environment
unset PYTHONPATH PYTHONSTARTUP PYTHONHOME PYTHONCASEOK PYTHONIOENCODING
unset LD_RUN_PATH LD_LIBRARY_PATH LD_DEBUG LD_TRACE_LOADED_OBJECTS
unset LD_PRELOAD SHLIB_PATH LD_BIND_NOW LD_VERBOSE
## figure out whether this is a 32 bit or 64 bit system
m=`uname -m`
if [[ $m =~ .*64 ]]; then
  export CC="gcc -m64"
  NBITS=64
elif [[ $m =~ .*86 ]]; then
  export CC="gcc -m32"
  NBITS=32
else # we are confused so bail out
  echo $m
  exit 1
fi
# some stuff related to distro independent build
# extra_link_args = ['-Wl,-R/data1/python27/lib']
#--enable-shared and a relative
# RPATH[0] (eg LD_RUN_PATH='${ORIGIN}/../lib')
export TARG=/data1/packages/python272
export TCMALLOC_SKIP_SBRK=true
#export CFLAGS='-ltcmalloc' # Google's fast malloc
export COMMONLDFLAGS='-Wl,-rpath,\$$ORIGIN/../lib -Wl,-rpath-link,\$$ORIGIN:\$$ORIGIN/../lib:\$$ORIGIN/../../lib -Wl,-z,origin -Wl,--enable-new-dtags'
# -Wl,-dynamic-linker,$TARG/lib/ld-linux-x86-64.so.2
export LDFLAGS=$COMMONLDFLAGS
./configure --prefix=$TARG --with-dbmliborder=bdb:gdbm --enable-shared --enable-ipv6
# if you have ia32-libs installed on a 64-bit system
#export COMMONLDFLAGS="-L/lib32 -L/usr/lib32 -L`pwd`/lib32 -Wl,-rpath,$TARG/lib32 -Wl,-rpath,$TARG/usr/lib32"
make
# ignore failure to build the following since they are obsolete or deprecated
# _tkinter bsddb185 dl imageop sunaudiodev
#install it and collect any dependency libraries - not needed with RPATH
sudo mkdir -p $TARG
sudo chown `whoami`.users $TARG
make install
# collect binary libraries ##REDO THIS IF YOU ADD ANY ADDITIONAL MODULES##
function collect_binary_libs {
  cd $TARG
  find . -name '*.so' | sed 's/^/ldd -v /' >elffiles
  echo "ldd -v bin/python" >>elffiles
  chmod +x elffiles
  ./elffiles | sed 's/.*=> //;s/ .*//;/:$/d;s/^ *//' | sort -u | sed 's/.*/cp -L & lib/' >lddinfo
  # mkdir lib
  chmod +x lddinfo
  ./lddinfo
  cd ~
}
collect_binary_libs
#set the path
cd ~
export PATH=$TARG/bin:$PATH
#installed setuptools
$WGET http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg
chmod +x setuptools-0.6c11-py2.7.egg
./setuptools-0.6c11-py2.7.egg
#installed virtualenv
$WGET http://pypi.python.org/packages/source/v/virtualenv/virtualenv-1.6.1.tar.gz # download URL assumed
tar zxvf virtualenv-1.6.1.tar.gz
cd virtualenv-1.6.1
python setup.py install
cd ~
# created a base virtualenv that should work for almost all projects
# we make it relocatable in case its location in the filesystem changes.
cd ~
python virtualenv-1.6.1/virtualenv.py /data1/py27base # first make it
python virtualenv-1.6.1/virtualenv.py --relocatable /data1/py27base #then relocatabilize
# check it out
source /data1/py27base/bin/activate
python --version
# fill the virtualenv with useful modules
# watch out for binary builds that may have dependency problems
export LD_RUN_PATH='\$$ORIGIN:\$$ORIGIN/../lib:\$$ORIGIN/../../lib'
easy_install pip
pip install cython
pip install lxml
pip install httplib2
pip install python-memcached
pip install amqplib
pip install kombu
pip install carrot
pip install py_eventsocket
pip install haigha
# extra escaping of $ signs
export LDFLAGS='-Wl,-rpath,\$\$$ORIGIN/../lib:\$\$$ORIGIN/../../lib -Wl,-rpath-link,\$\$$ORIGIN/../lib -Wl,-z,origin -Wl,--enable-new-dtags'
# even more complex to build this one since we need some autotools and
# have to pull source from a repository
mkdir rabbitc
cd rabbitc
hg clone http://hg.rabbitmq.com/rabbitmq-codegen/
hg clone http://hg.rabbitmq.com/rabbitmq-c/
cd rabbitmq-c
autoreconf -i
make clean
./configure --prefix=/usr
make
sudo make install
cd ~
# for zeromq we get the latest source of the library
$WGET http://download.zeromq.org/zeromq-2.1.7.tar.gz
tar zxvf zeromq-2.1.7.tar.gz
cd zeromq-2.1.7
make clean
./configure --prefix=/usr
make
sudo make install
cd ~
# need less escaping of $ signs
export LDFLAGS='-Wl,-rpath,\$ORIGIN/../lib:\$ORIGIN/../../lib -Wl,-rpath-link,\$ORIGIN/../lib -Wl,-z,origin -Wl,--enable-new-dtags'
pip install pyzmq
pip install pylibrabbitmq # need to build C library and install first
pip install pylibmc
pip install pycurl
export LDFLAGS=$COMMONLDFLAGS
pip install cherrypy
pip install pyopenssl # might need some ldflags on this one?
pip install diesel
pip install eventlet
pip install fapws3
pip install gevent
pip install boto
pip install jinja2
pip install mako
pip install paste
pip install twisted
pip install flup
pip install pika
pip install pymysql
# pip install py-rrdtool # not on 64 bit???
pip install PyRRD
pip install tornado
pip install redis
# for tokyocabinet we need the latest source of the library
$WGET http://fallabs.com/tokyocabinet/tokyocabinet-1.4.47.tar.gz
tar zxvf tokyocabinet-1.4.47.tar.gz
cd tokyocabinet-1.4.47
make clean
./configure --prefix=/usr --enable-devel
make
sudo make install
cd ..
$WGET http://fallabs.com/tokyotyrant/tokyotyrant-1.1.41.tar.gz
tar zxvf tokyotyrant-1.1.41.tar.gz
cd tokyotyrant-1.1.41
make clean
./configure --prefix=/usr --enable-devel
make
sudo make install
cd ..
pip install tokyo-python
pip install solrpy
pip install pysolr
pip install sunburnt
pip install txamqp
pip install littlechef
pip install PyChef
pip install pyvb
pip install bottle
pip install werkzeug
pip install BeautifulSoup
pip install XSLTools
pip install numpy
pip install coverage
pip install pylint
# pip install PyChecker ???
pip install pycallgraph
pip install mkcode
pip install pydot
pip install sqlalchemy
pip install buzhug
pip install flask
pip install restez
pip install pytz
pip install mcdict
# need less escaping of $ signs
pip install py-interface
# pip install paramiko # pulled in by another module
pip install pexpect
# SVN interface
$WGET http://pysvn.barrys-emacs.org/source_kits/pysvn-1.7.5.tar.gz
tar zxvf pysvn-1.7.5.tar.gz
cd pysvn-1.7.5/Source
python setup.py backport
python setup.py configure
make
cd ../Tests
make
cd ../Source
mkdir -p $TARG/lib/python2.7/site-packages/pysvn
cp pysvn/__init__.py $TARG/lib/python2.7/site-packages/pysvn
cp pysvn/_pysvn_2_7.so $TARG/lib/python2.7/site-packages/pysvn
cd ~
# pip install protobuf #we have to do this the hard way
$WGET http://protobuf.googlecode.com/files/protobuf-2.4.1.zip
unzip protobuf-2.4.1.zip
cd protobuf-2.4.1
make clean
./configure --prefix=/usr
make
sudo make install
cd python
python setup.py install
cd ~
pip install riak
pip install ptrace
pip install html5lib
pip install metrics
#redo the "install binary libraries" step
collect_binary_libs
# link binaries in the lib directory to avoid search path errors and also
# to reduce the number of false starts to find the library
for i in `ls $TARG/lib/python2.7/lib-dynload/*.so`
do
  ln -f $i $TARG/lib/`basename $i`
done
# for the same reason link the whole lib directory to some other places in the tree
ln -s ../.. $TARG/lib/python2.7/site-packages/lib
# bundle it up and save it for packaging
cd /
tar cvf - .$TARG |gzip >~/py272-$NBITS.tar.gz
cd ~
# after untarring on another machine, we have a program called imports.py which imports
# every library as a quick check that it works. For a more positive check, run it like this
# strace -e trace=stat,fstat,open python imports.py >strace.txt 2>&1
# grep -v ' = -1' strace.txt |grep 'open(' >opens.txt
# sed <opens.txt 's/^open("//;s/".*//' |sort -u |grep -v 'dynload' |grep '\.so' >straced.txt
# ls -1d /data1/packages/python272/lib/* |sort -u >lib.txt
# then examine the strace output to see how many places it searches before finding it.
# a successful library load will be a call to open that doesn't end with ' = -1'
# If it takes too many tries to find a particular library, then another symbolic link may
# be a good idea
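The imports.py mentioned in the comments above isn't shown; the same idea can be sketched in shell (module list abbreviated and assumed):
# try importing a few of the installed modules; report any that fail to load
for m in numpy lxml sqlalchemy tornado; do
  python -c "import $m" 2>/dev/null || echo "FAILED: $m"
done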
I'm pretty sure you have to compile wxPython against the version of Python that you want to use it with. That's always been the case for anyone else who has done something like this on the wxPython mailing list. I think that applies to most packages, and especially so if they have any C/C++ components, like wxPython does. Pure Python packages can sometimes be transferred from one version to the next intact, in my experience.
There are fairly extensive wxPython build instructions here: http://wxpython.org/BUILD-2.8.html
Robin Dunn and others on the wxPython mailing list are very helpful if you run into any problems.
If you compiled 2.6.6 and installed 2.6.5 from the repos, then Ubuntu is having a conflict in finding which python you're using.
I'm flagging this to move to Superuser.
