Create base docker centos image with python 2.7.8

I've found this which walks you through creating a base bare-metal CentOS image. However, I want to install some additional yum packages, download Python 2.7.8, and build it.
I already have this working in a Dockerfile:
# Set the base image to CentOS
FROM centos:7
# File Author / Maintainer
MAINTAINER Sam Mohamed
# Update the sources list
RUN yum -y update
RUN yum install -y zlib-devel openssl-devel sqlite-devel bzip2-devel xz xz-libs gcc gcc-c++ make
# Install Python 2.7.9
RUN curl -o /root/Python-2.7.9.tar.xz https://www.python.org/ftp/python/2.7.9/Python-2.7.9.tar.xz
RUN tar -xf /root/Python-2.7.9.tar.xz -C /root
RUN cd /root/Python-2.7.9 && ./configure --prefix=/usr/local && make && make altinstall
# Copy the application folder inside the container
ADD . /opt/iws_project
# Download Setuptools and install pip and virtualenv
RUN curl -sS https://bootstrap.pypa.io/ez_setup.py | /usr/local/bin/python2.7
RUN /usr/local/bin/easy_install-2.7 pip
RUN /usr/local/bin/pip2.7 install virtualenv
# Create virtualenv and install requirements:
RUN /usr/local/bin/virtualenv /opt/iws_project/venv && source /opt/iws_project/venv/bin/activate && pip install -r /opt/iws_project/requirements.txt
How can I convert the above into a base image?

You are probably better off building the given Dockerfile and using the resulting image as the base for future images. This is much easier to maintain and doesn't really cost anything in terms of resource use.
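For example, a minimal sketch (the image name here is illustrative): build the Dockerfile once and tag it, then reference that tag as the parent in later Dockerfiles.
docker build -t my-python-base .
And in each downstream Dockerfile:
FROM my-python-base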
But if you really want to create a single-layer "base image", the steps are as follows:
Install everything you want into some directory (docker-centos-65/ in the linked tutorial).
You can modify the febootstrap command from the tutorial you linked to install additional yum packages by specifying more -i flags.
You can perform any other custom installs (e.g. Python) manually; just make sure everything ends up under the same root directory.
Create a tar archive of the directory where everything is installed, and pipe this to the docker import command:
tar c -C docker-centos-65/ . | docker import - my-base-image
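You can then sanity-check the imported image, e.g. (assuming Python was installed under /usr/local as in your Dockerfile):
docker run --rm my-base-image /usr/local/bin/python2.7 --version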

Related

Installing python in Dockerfile without using python image as base

I have a python script that uses DigitalOcean tools (doctl and kubectl) that I want to containerize. This means my container will need python, doctl, and kubectl installed. The trouble is, I can't figure out how to install both python and the DigitalOcean tools in the same dockerfile.
I can install python using the base image "python:3" and I can also install the DigitalOcean tools using the base image "alpine/doctl". However, the rule is you can only use one base image in a dockerfile.
So I can include the python base image and install the DigitalOcean tools another way:
FROM python:3
RUN <somehow install doctl and kubectl>
RUN pip install firebase-admin
COPY script.py
CMD ["python", "script.py"]
Or I can include the alpine/doctl base image and install python3 another way.
FROM alpine/doctl
RUN <somehow install python>
RUN pip install firebase-admin
COPY script.py
CMD ["python", "script.py"]
Unfortunately, I'm not sure how I would do this. Any help in how I can get all these tools installed would be great!
Just add this, along with anything else you want to apt-get install:
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip
In alpine it should be something like:
RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python && \
    python3 -m ensurepip && \
    pip3 install --no-cache --upgrade pip setuptools
This Dockerfile worked for me:
FROM alpine/doctl
ENV PYTHONUNBUFFERED=1
RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python
RUN python3 -m ensurepip
RUN pip3 install --no-cache --upgrade pip setuptools
This answer comes from here: https://stackoverflow.com/a/62555259/7479816 (I don't have enough street cred to comment).
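From there you can layer the rest of your steps on top of that working base, e.g.:
RUN pip3 install firebase-admin
COPY script.py .
CMD ["python", "script.py"]
(If alpine/doctl sets an ENTRYPOINT for doctl, you may also need to reset it with ENTRYPOINT [] so the CMD runs as expected.)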
You can try a multi-stage build as shown below, using COPY --from to pull the doctl binary out of the first stage into the Python image.
Also check your COPY statement: you need to define where you want the script.py file to be copied as the second parameter; "." will copy it to the current working directory.
FROM alpine/doctl AS doctl
FROM python:3.6-slim-buster
ENV PYTHONUNBUFFERED 1
# The doctl path inside the alpine/doctl image is an assumption; verify it with:
#   docker run --rm --entrypoint sh alpine/doctl -c 'command -v doctl'
COPY --from=doctl /app/doctl /usr/local/bin/doctl
RUN pip install firebase-admin
COPY script.py .
CMD ["python", "script.py"]

Loading openvino python library on Raspberry Pi 4

After following the guides (and combing through the GitHub trackers), I was able to get OpenVINO installed on my Pi 4 and can run /opt/intel/openvino/bin/armv7l/Release/object_detection_sample_ssd successfully using my own trained model.
I used the following cmake command most recently, but I've tried pretty much every iteration I could find across the several instruction pages.
cmake -DCMAKE_BUILD_TYPE=Release \
-DENABLE_SSE42=OFF \
-DTHREADING=SEQ \
-DENABLE_GNA=OFF \
-DENABLE_PYTHON=ON \
-DPYTHON_EXECUTABLE=/usr/bin/python3 \
-DPYTHON_LIBRARY=/usr/lib/arm-linux-gnueabihf/libpython3.7m.so \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.7m \
-DNGRAPH_PYTHON_BUILD_ENABLE=ON \
-DCMAKE_CXX_FLAGS=-latomic \
-DOPENCV_EXTRA_EXE_LINKER_FLAGS=-latomic \
-DCMAKE_INSTALL_PREFIX=/usr/local \
.. && make
When I try one of the python examples I get
ModuleNotFoundError: No module named 'ngraph'
Looking into it more now, it appears the issue is the setupvars.sh script not pointing into the right directory. I was able to get the openvino module to load by adjusting the export path. I must say, the documentation is quite frankly all over the place and seems to have wrong directory structures left and right.
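(For reference, the kind of adjustment that gets the module found is along these lines; the exact path is an assumption and depends on your install prefix, here /home/pi/openvino_dist:
export PYTHONPATH=/home/pi/openvino_dist/python/python3.7:$PYTHONPATH)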
As of right now, the builds for the Raspberry Pi are not being made with NGRAPH compiled. The user has to compile it themselves from scratch per this thread. https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/No-ngraph-bindings-for-python-in-raspbian-distribution/m-p/1263401#M23093
The comment by the Intel "Support" is wrong and won't work on new builds.
Please refer to the steps below for building Open Source OpenVINO™ Toolkit for Raspbian OS:
Set up build environment and install build tools
sudo apt update && sudo apt upgrade -y
sudo apt install build-essential
Install CMake from source
cd ~/
wget https://github.com/Kitware/CMake/releases/download/v3.14.4/cmake-3.14.4.tar.gz
tar xvzf cmake-3.14.4.tar.gz
cd ~/cmake-3.14.4
./bootstrap
make -j4 && sudo make install
Install OpenCV from source
sudo apt install git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev python3-scipy libatlas-base-dev
cd ~/
git clone --depth 1 --branch 4.5.2 https://github.com/opencv/opencv.git
cd opencv && mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local ..
make -j4 && sudo make install
Download source code and install dependencies
cd ~/
git clone --depth 1 --branch 2021.3 https://github.com/openvinotoolkit/openvino.git
cd ~/openvino
git submodule update --init --recursive
sh ./install_build_dependencies.sh
cd ~/openvino/inference-engine/ie_bridges/python/
pip3 install -r requirements.txt
Start CMake build
export OpenCV_DIR=/usr/local/lib/cmake/opencv4
cd ~/openvino
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=/home/pi/openvino_dist \
-DENABLE_MKL_DNN=OFF \
-DENABLE_CLDNN=OFF \
-DENABLE_GNA=OFF \
-DENABLE_SSE42=OFF \
-DTHREADING=SEQ \
-DENABLE_OPENCV=OFF \
-DNGRAPH_PYTHON_BUILD_ENABLE=ON \
-DNGRAPH_ONNX_IMPORT_ENABLE=ON \
-DENABLE_PYTHON=ON \
-DPYTHON_EXECUTABLE=$(which python3.7) \
-DPYTHON_LIBRARY=/usr/lib/arm-linux-gnueabihf/libpython3.7m.so \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.7 \
-DCMAKE_CXX_FLAGS=-latomic ..
make -j4 && sudo make install
Configure the Intel® Neural Compute Stick 2 Linux USB Driver
sudo usermod -a -G users "$(whoami)"
source /home/pi/openvino_dist/bin/setupvars.sh
sh /home/pi/openvino_dist/install_dependencies/install_NCS_udev_rules.sh
Verify nGraph module binding to Python
cd /home/pi/openvino_dist/deployment_tools/inference_engine/samples/python/object_detection_sample_ssd
python3 object_detection_sample_ssd.py -h
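If the build and setupvars.sh sourcing worked, the import that originally failed should now succeed:
python3 -c "import ngraph"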

Can I copy a directory from some location outside the docker area to my dockerfile?

I have installed a library called fastai==1.0.59 via requirements.txt file inside my Dockerfile.
But the purpose of running the Django app is not achieved because of one error. To solve that error, I need to manually edit the files /site-packages/fastai/torch_core.py and site-packages/fastai/basic_train.py inside this library folder, which I'd rather not do.
Therefore I'm trying to copy the fastai folder itself from my host machine to the location inside docker image.
source location: /Users/AjayB/anaconda3/envs/MyDjangoEnv/lib/python3.6/site-packages/fastai/
destination location: ../venv/lib/python3.6/site-packages/ which is inside my docker image.
Being new to Docker, I tried this using the COPY command:
COPY /Users/AjayB/anaconda3/envs/MyDjangoEnv/lib/python3.6/site-packages/fastai/ ../venv/lib/python3.6/site-packages/
which gave me an error:
ERROR: Service 'app' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder583041406/Users/AjayB/anaconda3/envs/MyDjangoEnv/lib/python3.6/site-packages/fastai: no such file or directory.
I tried referring this: How to include files outside of Docker's build context?
but it seems like it bounced off my head a bit...
Please help me tackling this. Thanks.
Dockerfile:
FROM python:3.6-slim-buster AS build
MAINTAINER model1
ENV PYTHONUNBUFFERED 1
RUN python3 -m venv /venv
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y git && \
apt-get install -y build-essential && \
apt-get install -y awscli && \
apt-get install -y unzip && \
apt-get install -y nano && \
apt-get install -y libsm6 libxext6 libxrender-dev
RUN apt-cache search mysql-server
RUN apt-cache search libmysqlclient-dev
RUN apt-get install -y libpq-dev
RUN apt-get install -y postgresql
RUN apt-cache search postgresql-server-dev-9.5
RUN apt-get install -y libglib2.0-0
RUN mkdir -p /model/
COPY . /model/
WORKDIR /model/
RUN pip install --upgrade awscli==1.14.5 s3cmd==2.0.1 python-magic
RUN pip install -r ./requirements.txt
EXPOSE 8001
RUN chmod -R 777 /model/
COPY /Users/AjayB/anaconda3/envs/MyDjangoEnv/lib/python3.6/site-packages/fastai/ ../venv/lib/python3.6/site-packages/
CMD python3 -m /venv/activate
CMD /model/my_setup.sh development
CMD export API_ENV = development
CMD cd server && \
python manage.py migrate && \
python manage.py runserver 0.0.0.0:8001
Short Answer
No
Long Answer
When you run docker build the current directory and all of its contents (subdirectories and all) are copied into a staging area called the 'build context'. When you issue a COPY instruction in the Dockerfile, docker will copy from the staging area into a layer in the image's filesystem.
As you can see, this precludes copying files from directories outside the build context.
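For example, in:
docker build -t myapp .
the trailing . names the build context; only paths under that directory are available as COPY sources.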
Workaround
Either download the files you want from their golden-source directly into the image during the build process (this is why you often see a lot of curl statements in Dockerfiles), or you can copy the files (dirs) you need into the build-tree and check them into source control as part of your project. Which method you choose is entirely dependent on the nature of your project and the files you need.
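In your case, the copy-into-the-build-tree option would look something like this (paths taken from your question), run from the directory containing the Dockerfile:
cp -R /Users/AjayB/anaconda3/envs/MyDjangoEnv/lib/python3.6/site-packages/fastai ./fastai
and then in the Dockerfile, in place of the failing COPY line:
COPY fastai/ /venv/lib/python3.6/site-packages/fastai/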
Notes
There are other workarounds documented for this, but all of them, without exception, break the intent of 'portability' of your build. The only quality solutions are those documented here (though I'm happy to add to this list if I've missed any that preserve portability).

how to use multiple dockerfiles for single application

I am using a CentOS base image and installing python3 with the following dockerfile:
FROM centos:7
ENV container docker
ARG USER=dsadmin
ARG HOMEDIR=/home/${USER}
RUN yum clean all \
&& yum update -q -y -t \
&& yum install file -q -y
RUN useradd -s /bin/bash -d ${HOMEDIR} ${USER}
RUN export LC_ALL=en_US.UTF-8
# install Development Tools to get gcc
RUN yum groupinstall -y "Development Tools"
# install python development so that pip can compile packages
RUN yum -y install epel-release && yum clean all \
&& yum install -y python34-setuptools \
&& yum install -y python34-devel
# install pip
RUN easy_install-3.4 pip
# install virtualenv or virtualenvwrapper
RUN pip3 install virtualenv \
&& pip3 install virtualenvwrapper \
&& pip3 install pandas
# # install django
# RUN pip3 install django
USER ${USER}
WORKDIR ${HOMEDIR}
I build and tag the above as follows:
docker build . --label validation --tag validation
I then need to add a .tar.gz file to the home directory. This file contains all the python scripts I maintain and will change frequently. If I ADD it in the dockerfile above, every change to the .tar.gz invalidates the build cache, so python is reinstalled each time I change the file. This adds a lot of time to development. As a workaround, I tried creating a second dockerfile that uses the above image as the base and then just adds the .tar.gz file on top.
FROM validation:latest
ARG USER=dsadmin
ARG HOMEDIR=/home/${USER}
ADD code/validation_utility.tar.gz ${HOMEDIR}/.
USER ${USER}
WORKDIR ${HOMEDIR}
After that, if I run the image and do an ls, all the files in the folder have an owner of games.
-rw-r--r-- 1 501 games 35785 Nov 2 21:24 Validation_utility.py
To fix the above, I added the following lines to the second docker file:
ADD code/validation_utility.tar.gz ${HOMEDIR}/.
RUN chown -R ${USER}:${USER} ${HOMEDIR} \
&& chmod +x ${HOMEDIR}/Validation_utility.py
but I get the error:
chown: changing ownership of '/home/dsadmin/Validation_utility.py': Operation not permitted
The goal is to have two dockerfiles. The users will build the first dockerfile to install centos and the python dependencies. The second dockerfile will install the custom python scripts. If the scripts change, they will just build the second dockerfile again. Is that the right way to think about docker? Thank you.
Is that the right way to think about docker?
This is the easy part of your question. Yes. You're thinking about the proper way to structure your Dockerfiles, reuse them, and keep your image builds efficient. Good job.
As for the error you're receiving, I'm less confident in answering why the ADD command is un-tarballing your tar.gz as the games user; I'm not nearly as familiar with CentOS. (Most likely ADD preserves the UID/GID recorded in the archive, and the IDs from your Mac happen to map to games inside the container.) That's the start of the problem: dsadmin, as a regular non-privileged user, can't change ownership of files he doesn't own. Since this un-tarballed script is owned by games, the chown command fails.
I used your Dockerfiles and got the same issue on MacOS.
You can get around this by, well, not using ADD. Which is funny because local tarball extraction is the one use case where Docker thinks you should prefer ADD over COPY.
COPY code/validation_utility.tar.gz ${HOMEDIR}/.
RUN tar -xvf validation_utility.tar.gz
This properly extracts the tarball, and since dsadmin was the active user at the time, the contents come out owned by dsadmin.
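If image size matters, extract and remove the tarball in the same RUN so it doesn't persist in the layer:
RUN tar -xvf validation_utility.tar.gz && rm validation_utility.tar.gz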
(An uglier route might be to switch the USER to root to set permissions, then set it back to dsadmin. I think this is icky, but it's an option.)
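That route would look something like this, reusing the ARGs from your second dockerfile:
USER root
ADD code/validation_utility.tar.gz ${HOMEDIR}/.
RUN chown -R ${USER}:${USER} ${HOMEDIR}
USER ${USER}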

Compiling Python 2.6.6 and need for external packages wxPython, setuptools, etc... in Ubuntu

I compiled Python 2.6.6 with the google-perftools (tcmalloc) library to eliminate some of the memory issues I was having with the default 2.6.5. After getting 2.6.6 going, it seems not to work, I think because of issues with the default 2.6.5 install in Ubuntu. Will none of the binaries installed from the software channel, like wxPython and setuptools, work properly with 2.6.6? Do these need to be recompiled? Any other suggestions to get it working smoothly? Can I still set 2.6.5 as default without changing the PATH? The path looks in /usr/local/bin first.
A good general rule of thumb is to NEVER use the default system-installed Python for any software development beyond miscellaneous system admin scripts. This applies on all UNIXes, including Linux and OS X.
Instead, build a good Python distro that you control, with the libraries (Python and C) that you need, and install this tarball in a non-system directory such as /opt/devpy or /data/package/python or /home/python. And why mess with 2.6 when 2.7.2 is available?
And when you are building it, make sure that all of its dependencies are in its own directory tree (RPATH) and that any system dependencies (.so files) are copied into its directory tree. Here is my version. It might not work if you just run the whole shell script; I always copy and paste sections of it into a terminal window and verify that each step worked OK. Make sure your terminal properties are set to allow lots of lines of scrollback, or only paste a couple of lines at a time.
(Actually, after making a few tweaks, I think this may be runnable as a script; however, I would recommend something like ./pybuild.sh >pylog 2>&1 so you can comb through the output and verify that everything built OK.)
This was built on Ubuntu 64 bit
#!/bin/bash
shopt -s compat40
export WGET=echo
#uncomment the following if you are running for the first time
export WGET=wget
sudo apt-get -y install build-essential
sudo apt-get -y install zlib1g-dev libxml2-dev libxslt1-dev libssl-dev libncurses5-dev
sudo apt-get -y install libreadline6-dev autotools-dev autoconf automake libtool
sudo apt-get -y install libsvn-dev mercurial subversion git-core
sudo apt-get -y install libbz2-dev libgdbm-dev sqlite3 libsqlite3-dev
sudo apt-get -y install curl libcurl4-gnutls-dev
sudo apt-get -y install libevent-dev libev-dev librrd4 rrdtool
sudo apt-get -y install uuid-dev libdb4.8-dev memcached libmemcached-dev
sudo apt-get -y install libmysqlclient-dev libexpat1-dev
cd ~
$WGET 'http://code.google.com/p/google-perftools/downloads/detail?name=google-perftools-1.7.tar.gz'
$WGET http://www.python.org/ftp/python/2.7.2/Python-2.7.2.tgz
tar zxvf Python-2.7.2.tgz
cd Python-2.7.2
#following is needed if you have an old version of Mercurial installed
#export HAS_HG=not-found
# To provide a uniform build environment
unset PYTHONPATH PYTHONSTARTUP PYTHONHOME PYTHONCASEOK PYTHONIOENCODING
unset LD_RUN_PATH LD_LIBRARY_PATH LD_DEBUG LD_TRACE_LOADED_OBJECTS
unset LD_PRELOAD SHLIB_PATH LD_BIND_NOW LD_VERBOSE
## figure out whether this is a 32 bit or 64 bit system
m=`uname -m`
if [[ $m =~ .*64 ]]; then
export CC="gcc -m64"
NBITS=64
elif [[ $m =~ .*86 ]]; then
export CC="gcc -m32"
NBITS=32
else # we are confused so bail out
echo $m
exit 1
fi
# some stuff related to distro independent build
# extra_link_args = ['-Wl,-R/data1/python27/lib']
#--enable-shared and a relative
# RPATH[0] (eg LD_RUN_PATH='${ORIGIN}/../lib')
export TARG=/data1/packages/python272
export TCMALLOC_SKIP_SBRK=true
#export CFLAGS='-ltcmalloc' # Google's fast malloc
export COMMONLDFLAGS='-Wl,-rpath,\$$ORIGIN/../lib -Wl,-rpath-link,\$$ORIGIN:\$$ORIGIN/../lib:\$$ORIGIN/../../lib -Wl,-z,origin -Wl,--enable-new-dtags'
# -Wl,-dynamic-linker,$TARG/lib/ld-linux-x86-64.so.2
export LDFLAGS=$COMMONLDFLAGS
./configure --prefix=$TARG --with-dbmliborder=bdb:gdbm --enable-shared --enable-ipv6
# if you have ia32-libs installed on a 64-bit system
#export COMMONLDFLAGS="-L/lib32 -L/usr/lib32 -L`pwd`/lib32 -Wl,-rpath,$TARG/lib32 -Wl,-rpath,$TARG/usr/lib32"
make
# ignore failure to build the following since they are obsolete or deprecated
# _tkinter bsddb185 dl imageop sunaudiodev
#install it and collect any dependency libraries - not needed with RPATH
sudo mkdir -p $TARG
sudo chown `whoami`.users $TARG
make install
# collect binary libraries ##REDO THIS IF YOU ADD ANY ADDITIONAL MODULES##
function collect_binary_libs {
cd $TARG
find . -name '*.so' | sed 's/^/ldd -v /' >elffiles
echo "ldd -v bin/python" >>elffiles
chmod +x elffiles
./elffiles | sed 's/.*=> //;s/ .*//;/:$/d;s/^ *//' | sort -u | sed 's/.*/cp -L & lib/' >lddinfo
# mkdir lib
chmod +x lddinfo
./lddinfo
cd ~
}
collect_binary_libs
#set the path
cd ~
export PATH=$TARG/bin:$PATH
#installed setuptools
$WGET http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg
chmod +x setuptools-0.6c11-py2.7.egg
./setuptools-0.6c11-py2.7.egg
#installed virtualenv
$WGET http://pypi.python.org/packages/source/v/virtualenv/virtualenv-1.6.1.tar.gz  # URL assumed (historical PyPI source location)
tar zxvf virtualenv-1.6.1.tar.gz
cd virtualenv-1.6.1
python setup.py install
cd ~
# created a base virtualenv that should work for almost all projects
# we make it relocatable in case its location in the filesystem changes.
cd ~
python virtualenv-1.6.1/virtualenv.py /data1/py27base # first make it
python virtualenv-1.6.1/virtualenv.py --relocatable /data1/py27base #then relocatabilize
# check it out
source /data1/py27base/bin/activate
python --version
# fill the virtualenv with useful modules
# watch out for binary builds that may have dependency problems
export LD_RUN_PATH='\$$ORIGIN:\$$ORIGIN/../lib:\$$ORIGIN/../../lib'
easy_install pip
pip install cython
pip install lxml
pip install httplib2
pip install python-memcached
pip install amqplib
pip install kombu
pip install carrot
pip install py_eventsocket
pip install haigha
# extra escaping of $ signs
export LDFLAGS='-Wl,-rpath,\$\$$ORIGIN/../lib:\$\$$ORIGIN/../../lib -Wl,-rpath-link,\$\$$ORIGIN/../lib -Wl,-z,origin -Wl,--enable-new-dtags'
# even more complex to build this one since we need some autotools and
# have to pull source from a repository
mkdir rabbitc
cd rabbitc
hg clone http://hg.rabbitmq.com/rabbitmq-codegen/
hg clone http://hg.rabbitmq.com/rabbitmq-c/
cd rabbitmq-c
autoreconf -i
make clean
./configure --prefix=/usr
make
sudo make install
cd ~
# for zeromq we get the latest source of the library
$WGET http://download.zeromq.org/zeromq-2.1.7.tar.gz
tar zxvf zeromq-2.1.7.tar.gz
cd zeromq-2.1.7
make clean
./configure --prefix=/usr
make
sudo make install
cd ~
# need less escaping of $ signs
export LDFLAGS='-Wl,-rpath,\$ORIGIN/../lib:\$ORIGIN/../../lib -Wl,-rpath-link,\$ORIGIN/../lib -Wl,-z,origin -Wl,--enable-new-dtags'
pip install pyzmq
pip install pylibrabbitmq # need to build C library and install first
pip install pylibmc
pip install pycurl
export LDFLAGS=$COMMONLDFLAGS
pip install cherrypy
pip install pyopenssl # might need some ldflags on this one?
pip install diesel
pip install eventlet
pip install fapws3
pip install gevent
pip install boto
pip install jinja2
pip install mako
pip install paste
pip install twisted
pip install flup
pip install pika
pip install pymysql
# pip install py-rrdtool # not on 64 bit???
pip install PyRRD
pip install tornado
pip install redis
# for tokyocabinet we need the latest source of the library
$WGET http://fallabs.com/tokyocabinet/tokyocabinet-1.4.47.tar.gz
tar zxvf tokyocabinet-1.4.47.tar.gz
cd tokyocabinet-1.4.47
make clean
./configure --prefix=/usr --enable-devel
make
sudo make install
cd ..
$WGET http://fallabs.com/tokyotyrant/tokyotyrant-1.1.41.tar.gz
tar zxvf tokyotyrant-1.1.41.tar.gz
cd tokyotyrant-1.1.41
make clean
./configure --prefix=/usr --enable-devel
make
sudo make install
cd ..
pip install tokyo-python
pip install solrpy
pip install pysolr
pip install sunburnt
pip install txamqp
pip install littlechef
pip install PyChef
pip install pyvb
pip install bottle
pip install werkzeug
pip install BeautifulSoup
pip install XSLTools
pip install numpy
pip install coverage
pip install pylint
# pip install PyChecker ???
pip install pycallgraph
pip install mkcode
pip install pydot
pip install sqlalchemy
pip install buzhug
pip install flask
pip install restez
pip install pytz
pip install mcdict
# need less escaping of $ signs
pip install py-interface
# pip install paramiko # pulled in by another module
pip install pexpect
# SVN interface
$WGET http://pysvn.barrys-emacs.org/source_kits/pysvn-1.7.5.tar.gz
tar zxvf pysvn-1.7.5.tar.gz
cd pysvn-1.7.5/Source
python setup.py backport
python setup.py configure
make
cd ../Tests
make
cd ../Sources
mkdir -p $TARG/lib/python2.7/site-packages/pysvn
cp pysvn/__init__.py $TARG/lib/python2.7/site-packages/pysvn
cp pysvn/_pysvn_2_7.so $TARG/lib/python2.7/site-packages/pysvn
cd ~
# pip install protobuf #we have to do this the hard way
$WGET http://protobuf.googlecode.com/files/protobuf-2.4.1.zip
unzip protobuf-2.4.1.zip
cd protobuf-2.4.1
make clean
./configure --prefix=/usr
make
sudo make install
cd python
python setup.py install
cd ~
pip install riak
pip install ptrace
pip install html5lib
pip install metrics
#redo the "install binary libraries" step
collect_binary_libs
# link binaries in the lib directory to avoid search path errors and also
# to reduce the number of false starts to find the library
for i in `ls $TARG/lib/python2.7/lib-dynload/*.so`
do
ln -f $i $TARG/lib/`basename $i`
done
# for the same reason link the whole lib directory to some other places in the tree
ln -s ../.. $TARG/lib/python2.7/site-packages/lib
# bundle it up and save it for packaging
cd /
tar cvf - .$TARG |gzip >~/py272-$NBITS.tar.gz
cd ~
# after untarring on another machine, we have a program call imports.py which imports
# every library as a quick check that it works. For a more positive check, run it like this
# strace -e trace=stat,fstat,open python imports.py >strace.txt 2>&1
# grep -v ' = -1' strace.txt |grep 'open(' >opens.txt
# sed <opens.txt 's/^open("//;s/".*//' |sort -u |grep -v 'dynload' |grep '\.so' >straced.txt
# ls -1d /data1/packages/python272/lib/* |sort -u >lib.txt
# then examine the strace output to see how many places it searches before finding it.
# a successful library load will be a call to open that doesn't end with ' = -1'
# If it takes too many tries to find a particular library, then another symbolic link may
# be a good idea
I'm pretty sure you have to compile wxPython against the version of Python that you want to use it with. That's always been the case for anyone who has done something like this on the wxPython mailing list. I think that applies to most packages, especially if they have any C/C++ components, like wxPython does. Pure Python packages can sometimes be transferred from one version to the next intact, in my experience.
There are fairly extensive wxPython build instructions here: http://wxpython.org/BUILD-2.8.html
Robin Dunn and others on the wxPython mailing list are very helpful if you run into any problems.
If you compiled 2.6.6 and installed 2.6.5 from the repos, then Ubuntu is having a conflict finding which Python you're using.
I'm flagging this to move to Superuser.
