Creating a Portable Python (local install) for Linux

I'm looking to create the following:
A portable version of Python that can be run on any system (with any previous version of Python, or no Python, installed) and that comes pre-configured with various Python packages (e.g., django, lxml, pysqlite).
The closest I've found to the above is virtualenv, but this only goes so far.
If I package up a nice virtualenv for Python on one machine, it contains symlinks to a lot of the libraries it needs. I can replace those symlinks with the actual files, but if I try to move the entire directory to another machine, I get segfault after segfault.
To launch python on a different machine, I'm using:
LD_LIBRARY_PATH=lib/ ./bin/python
and in lib/ I have all of the shared libraries I copied from the original machine. The problem is that these shared libraries might rely on other shared libraries that I'm not including, so executing this on other Linux distros does not work, probably because it falls back on older shared libraries installed on the system that are incompatible with what I copied over.
Anyone have an idea on how to get this working? Is this even possible?
EDIT:
To clarify, the desired outcome is to create a tar.gz of a Python binary and associated packages (django, lxml, pysqlite, etc.) that can be extracted and run on any Linux-based system (e.g., Ubuntu 8.04, Red Hat 5, SUSE 11), all 32-bit distros, where the locally installed version of Python doesn't impact what's in the tar.gz.

I just tested this and it works great.
Get the copy of Python you want to install, untar it, and cd into the untarred folder first.
Also get a copy of setuptools and untar that.
/opt/portapy used below is of course just the name I came up with for this post; it could be any path. The full path should be tarred up, and the same path should be used on any system you put this on, because of absolute-path linking.
mkdir /opt/portapy
cd <python source dir>
./configure --prefix=/opt/portapy && make && make install
cd <setuptools source dir>
/opt/portapy/bin/python ./setup.py install
Install virtualenv using the easy_install that setuptools provides, then make the virtual env folder inside the portapy folder.
/opt/portapy/bin/easy_install virtualenv
mkdir /opt/portapy/virtenv
/opt/portapy/bin/virtualenv /opt/portapy/virtenv
cd /opt/portapy/virtenv
source bin/activate
Done. You are ready to install all of your libraries here and have the option of creating multiple virtual envs this way.
You can then tar up the whole /opt/portapy folder and transport it to any Linux system of the same arch, within reason I suspect.
I compiled 2.7.5 on CentOS 5.8 64-bit and moved the folder to a CentOS 6.9 system, and it runs perfectly.
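For reference, a minimal sketch of the transport step (the hostname and temp path are just placeholders):

# on the build machine: archive the whole tree, preserving the absolute path
tar czf portapy.tar.gz -C / opt/portapy
scp portapy.tar.gz user@target:/tmp/

# on the target machine: extract to the same absolute path and test
sudo tar xzf /tmp/portapy.tar.gz -C /
/opt/portapy/bin/python --version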

I don't know how this is even possible. If it were, they wouldn't need to distribute binary packages of Python for different platforms. You can't simply distribute a Python that will run on any platform. It has to be built from source for that arch. Virtualenv will expect you to tell it which system Python to use (using links).
This pretty much goes for almost any binary package that links against system libs. Again, if it were possible, we wouldn't need any platform specific binary distributions.
You can, however, achieve part of what you want: running Python on another machine that doesn't have Python installed, as long as it's the same arch. This is the same concept behind freezing, or py2exe/py2app/pyinstaller: an interpreter is bundled into a standalone environment, so the app can run on any similar platform.
Edit
I just realized that while your question speaks about "system" agnostically, your title contains the reference "linux". There are different flavors of Linux, so in order for this to work you would have to build it fat for multiple archs and also completely contain the standalone links. You might try building a package with pyinstaller and including that in your project.
You can try just building python from source, in your virtualenv:
$ ./configure --prefix=/path/to/virtualenv && make && make install
If you still have problems with the links to libs, you can also investigate building it statically.
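If you go that route, a minimal sketch (this assumes a CPython source tree; --disable-shared is a standard CPython configure option, though a fully static link usually needs more work):

$ ./configure --prefix=/path/to/virtualenv --disable-shared
$ make && make install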

I'm not sure that working solely in Python is the way to go here. You might have better luck with Puppet or Chef, which are configuration tools that can be used to create a local environment. There is plenty of code out there to install virtualenv and Python on just about any Linux, plus OS X (probably not Windows though).
Your workflow would be to install Chef or Puppet (your choice), run a script to install the Python you want, then enter a virtualenv and pip install any packages you might need.
Sorry this isn't as easy as virtualenv alone, but it is much more robust.
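Once the configuration tool has put Python and virtualenv in place, the per-project part of that workflow is just a few commands (the env name and package list here are placeholders):

virtualenv myenv
source myenv/bin/activate
pip install django lxml pysqlite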

Well, since I rarely accept "can't be done", there is a way to do it. Warning: it isn't pretty and you should probably look into a different scenario.
What you will need to do is determine a standard location for this top-level directory. Second, using that directory as your root, you will need to compile Python on each Linux distribution you want to run this on. For this you would use something like "/usr/local/myappname/platform/" to configure and compile Python to live in; in each case, substitute "platform" with the name of the platform, such as "/usr/local/myappname/rhel/". If memory serves, the configure option you are looking for here is --prefix.
Once you have each distribution compiled you will need a script to determine which one to use and either set environment variables or have it create symlinks to the appropriate "installation" of python. I would then use virtualenv and bootstrap in that tree to keep the "in-use" python libraries even more specific.
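A minimal sketch of such a dispatch script (the detection logic and install paths are hypothetical):

#!/bin/sh
# pick the per-distro build of Python compiled as described above
if [ -f /etc/redhat-release ]; then
    PLATFORM=rhel
elif [ -f /etc/debian_version ]; then
    PLATFORM=debian
else
    PLATFORM=generic
fi
exec /usr/local/myappname/$PLATFORM/bin/python "$@"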
I can't think of a common Linux distribution that doesn't have Python by default. As such, you could use setup.py and/or basic Python scripts to script this out, since you should be able to rely on Python being present, even if it's ye olde version as in RHEL installs. Personally I find the above method overly complicated, but it would meet your stated requirements, with the allowance for a final script. Of course, you could use shar (SHell ARchive) to tar all of this into a runnable shell script that does the installation and avoids the need for secondary scripts. If you gzip the resulting shell archive, then you can decompress it on target systems and execute it to set everything up.
All that said, I would not recommend this. I would recommend determining the minimum Python version you can run on, ensuring it is installed by the distribution whenever possible, and, if need be, pulling it down from a repo and installing it. Then use virtualenv and bootstrap with a requirements.txt to install the necessary Python libraries and apps into the virtualenv. For that, see this documentation.

I faced the same problem, so I created PortableVirtualenv. Your question is just the definition of it.
I use it as the base for a commercial multiplatform app I develop. (But PortableVirtualenv is public domain; use it freely.)
If needed, you can pip-install any package and zip the whole directory to distribute the packages you need as well.

One nice option is to make a "snap" portable Linux application. They have a Python mode which lets you specify exactly which modules you need. From https://snapcraft.io/first-snap#python :
Snaps let you distribute a dependency-isolated Python app in an app store experience for end users.
Another option is to containerize your application with something like Docker. Then, instead of executing your script directly, the user is actually running a small OS with just your application and its dependencies. https://www.infoq.com/articles/docker-executable-images/ has more about executable containers.
Container images can also be used for short lived processes: a containerized executable meant to be run on your computer. These containers execute a single task, are short lived and can generally be removed after use. We call these executable images. Examples are compilers (Golang) or build tools (Maven), presentation software (I love to hack a simple presentation in Markdown format and let a RevealJS Docker image serve that) and browsers (a fresh contained browser to follow that fishy link). A real evangelist for executable images is Docker's own Jessie Frazelle. To get some great inspiration be sure to read her blog about them or check out this presentation at DockerCon 2015.
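A minimal sketch of the executable-image idea (the image name is a placeholder, and this assumes a Dockerfile that bundles your app and its dependencies):

# build an image containing just your app and its deps
docker build -t myapp .
# run it like a one-shot executable; --rm removes the container afterwards
docker run --rm myapp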

Related

any known way to create a device-to-device portable self-contained python runtime?

I want to distribute a Python program on, say, Windows and/or Mac, but I don't want to give the user the headache of ensuring there is an appropriate Python runtime installed on their machine. And I don't want to interfere with their machine's configuration by, let's say, requesting root privileges and installing a system-wide Python runtime that suits my program specifically; that's too invasive and might cause compatibility collisions with other installed versions of the runtime.
I would much rather have a self-contained executable that could be, for example, stored on a USB flash drive and inserted into the system; then, maybe with a stepping-stone binary executable that just invokes the device-portable runtime on a Python script I provide, I could run the program as if it were a self-contained binary executable (with only standard-library dependencies).
A link to this binary executable could be published into main-menu program lists, docks, or desktops. And it could be invoked by shell scripts or other executed-by-proxy mechanisms. Such a no-install/self-contained python program could potentially be a first-class user-invokable application. This is what I want to achieve.
I googled around for projects that provide a device-portable/mobile Python installation, and so far I've only found portablepython.com. Unfortunately it says the project is discontinued, and no download link is provided. It listed some similar projects, but they all seemed defunct or had a very different focus.
Does anyone know of an active project that is or includes such an independent/portable/mobile/no-install distribution for python?
Or is there some way I could configure Python's build system to build a no-install-friendly product?
Any ideas welcome. Thanks for your input!
After more searching, I found that Python.org publishes its own standalone Python distribution called the embeddable zip file.
This is exactly what I was searching for. It's a basic python standalone runtime that requires relatively few megabytes of storage.
I started with this embeddable distro and then cajoled a standalone copy of pip to work with it. Problem solved.
Improving upon #oreus2020's answer, you can download the embeddable zip file from here. Then:
1. Unzip the compressed file to a folder of your choice.
2. Go to the root of your install, find the python._pth file, and open it in a text editor. Remove the "#" before import site. (This file is the one which manages the environment of the portable install. If you want anything to be recognized by the portable Python interpreter, just throw the path in here and that's it!)
3. If you want pip, go to this page, save get-pip.py in the root of your portable install, and run it using the portable Python interpreter: ./python get-pip.py from a command line opened at the root of your install. Pip installed!
4. To use pip, do ./python -m pip <commands> from a command line opened at the root of your install, then open the python._pth file and insert ./Lib/site-packages and ./Scripts below the ".".
Voila, you got yourself a portable Python install!
My python._pth file looks like:
python39.zip
.
# Uncomment to run site.main() automatically
./Repo
./Repo/Code
./Repo/Code/cogs
./Lib/site-packages
./Scripts
import site
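A consolidated sketch of the pip bootstrap from step 3 onward (this assumes a shell where curl is available; the requests install is just an example):

curl -O https://bootstrap.pypa.io/get-pip.py
./python get-pip.py
./python -m pip install requests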
If you are still wondering, here is the link to the one I made for myself.
P.S. Pardon my bad English

Debian build package: Add python virtualenv into dpkg-buildpackage to be uploaded to launchpad

I would like to pack a python program and ship it in a deb package.
For reasons (I know that in 99% of cases it is bad practice) I want to ship the program in a Python virtual environment within a Debian package.
I know I can do this using dh-virtualenv. This works great - generally no problem.
But the problem arises when I want to upload this to Launchpad. Uploading to Launchpad means uploading a source package, and in terms of dh-virtualenv a source package is just the package description, where the virtualenv has not been created yet.
What happens when I upload this to Launchpad is that the package will not build: the dh-virtualenv step executed during the build process on Launchpad tries to install Python modules into the virtualenv, which means installing them from PyPI, and that will not work, since Launchpad does not allow external network access.
So basically there are two possible solutions:
Approach A
Prepare the virtualenv, somehow incorporate it into the source package, and have the dh build process simply "move" this prepared virtualenv to its end location. This could work with virtualenv --relocatable. BUT the relocation strips the UTF-8 marker at the beginning of every Python script, rendering all the Python scripts in the virtualenv broken.
Approach B
Somehow cache all the necessary Python packages in the source package and have dh-virtualenv install from the cache instead of from PyPI.
This seems to be doable with pip2pi, but certain experiments show that it will not install packages even though they are located in the local package index.
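(For comparison, the caching idea can also be expressed with plain pip, independent of pip2pi; the cache directory name below is arbitrary:)

# while network access is available: cache the dependencies
pip download -r requirements.txt -d ./pip-cache
# during the offline build: install strictly from that cache
pip install --no-index --find-links=./pip-cache -r requirements.txt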
Both approaches seem a bit clumsy and prone to errors.
What do you think of this?
What are your experiences?
What would you recommend?

bundling/executing python script + modules to a remote machine

I have looked into other Python module distribution questions. My need is a bit different (I think! I am a Python newbie+).
I have a bunch of python scripts that I need to execute on remote machines. Here is what the target environment looks like;
The machines will have a base Python runtime installed
I will have an SSH account; I can log in or execute commands remotely using ssh
I can copy files (scp) into my home dir
I am NOT allowed to install anything on the machine; the machines may not even have access to the Internet
my scripts may use some 'exotic' Python modules -- most likely they won't be present on the target machine
after the audit, my home directory will be nuked from the machine (leave no trace)
So what I like to do is:
copy a directory structure of Python scripts + modules to the remote machine (say into /home/audituser/scripts, with modules copied into /home/audituser/scripts/python_lib)
then execute a script (say /home/audituser/scripts/myscript.py). This script will need to resolve all modules it uses from the 'python_lib' subdirectory.
Is this possible? Or is there a better way of doing this? I guess what I am looking for is a way to 'relocate' the third-party modules into the scripts dir.
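Something like this invocation is what I have in mind (assuming the standard PYTHONPATH mechanism; the script name is a placeholder matching the layout above):

PYTHONPATH=/home/audituser/scripts/python_lib python /home/audituser/scripts/myscript.py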
Thanks in advance!
Are the remote machines the same as each other? And, if so, can you set up a development machine that's effectively the same as the remote machines?
If so, virtualenv makes this almost trivial. Create a virtualenv on your dev machine, use the virtualenv copy of pip to install any third-party modules into it, build your script within it, then just copy that entire environment to each remote machine.
There are three things that make it potentially non-trivial:
1. If the remote machines don't (and can't) have virtualenv installed, you need to do one of the following:
   - In many cases, just copying a --relocatable environment over just works. See the documentation section on "Making Environments Relocatable".
   - You can always bundle virtualenv itself, and pip install --user virtualenv (and, if they don't even have pip, a few steps before that) on each machine. This will leave the user account in a permanently-changed state. (But fortunately, your user account is going to be nuked, so who cares?)
   - You can write your own manual bootstrapping. See the section on "Creating Your Own Bootstrap Scripts".
2. By default, you get a lot more than you need: the Python executable, the standard library, etc.
   - If the machines aren't identical, this may not work, or at least might not be as efficient.
   - Even if they are, you're still often making your bundle orders of magnitude bigger.
   - See the documentation sections on "Using Virtualenv without bin/python", --system-site-packages, and possibly bootstrapping.
3. If any of the Python modules you're installing also need C libraries (e.g., libxml2 for lxml), virtualenv doesn't help with that. In fact, you will need the C libraries to be almost exactly the same (same path, compatible version).
Three other alternatives:
If your needs are simple enough (or the least-simple parts involve things that virtualenv doesn't help with, like installing libxml2), it may be easier to just bundle the .egg/.tgz/whatever files for the third-party modules and write a script that does a pip install --user and so on for each one, and then you're done; see the sketch after this list.
Just because you don't need a full app-distribution system doesn't mean you can't use one. py2app, py2exe, cx_freeze, etc. aren't all that complicated, especially in simple cases, and having a click-and-go executable to copy around is even easier than having an explicit environment.
zc.buildout is an amazingly flexible and manageable tool that can do the equivalent of any of the three alternatives. The main downside is that there's a much, much steeper learning curve.
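A sketch of that first alternative (the host and file names are placeholders, and this assumes pip is present on the remote machine):

# copy the bundled third-party module archives along with the scripts
scp -r bundle/ audituser@remote:/home/audituser/
# on the remote machine, install each one into the user site-packages
ssh audituser@remote 'cd bundle && pip install --user *.tar.gz'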
You can use virtualenv to create a self-contained environment for your project. This can house your own script, as well as any dependency libraries. Then you can make the env relocatable (--relocatable), and sync it over to the target machine, activate it, and run your scripts.
If these machines do have network access (not internet, but just local network), you can also place the virtualenv on a shared location and activate from there.
It looks something like this:
virtualenv --no-site-packages portable_proj
cd portable_proj/
source bin/activate
# install some deps
pip install xyz
virtualenv --relocatable .
Now portable_proj can be distributed to other machines.

Finding Tools/scripts/ subdirectory

Does anyone know where I could find this file on Ubuntu?
On my machine it's in:
/usr/share/doc/python2.7/examples/Tools/scripts/diff.py
However, it'll vary a little depending on your distro and how up to date you are. But there's a really handy little tool called 'locate' that you can use to quickly find stuff on your machine.
locate diff.py | grep Tools
Gives me
/usr/share/doc/python2.7/examples/Tools/scripts/diff.py
/usr/share/doc/python2.7/examples/Tools/scripts/ndiff.py
/usr/share/doc/python3.1/examples/Tools/scripts/diff.py
/usr/share/doc/python3.1/examples/Tools/scripts/ndiff.py
That's because I have two versions of Python installed. I put it through grep, as locate can sometimes match quite a lot.
If you're sure that you have a file on your machine, but locate isn't finding it, you might need to update your database, which is done with the 'updatedb' command, as root. So, just run
sudo updatedb
and get a coffee (or two if you have a slow machine/very full drive) and then try again.
This command
$ locate "Tools/scripts/diff.py"
will find the location of the file if it's installed. This depends on the database generated regularly by the updatedb command (this usually runs as a cron job, but can also be invoked manually).
FWIW, I just checked my Ubuntu installation (10.04 LTS) and didn't find it. Perhaps only Python versions 2.7+ have this (the default version that came with this install is still v2.6.5).
I needed to know this for a different environment, namely Linux running an Amazon Machine Image (AMI). I installed Python 3.5 via "yum" but couldn't find the Tools/scripts directory using find/locate, etc., nor any "yum" package which included it.
So in the end I cloned the python source tree mirror:
git clone https://github.com/python-git/python python
This downloads the Tools/scripts folder which I then moved to some standard location. I needed the "2to3" program from "scripts" and this worked. There is also a "clone or download" link at that URL where the package can be downloaded in the usual way if git is not available.
I couldn't find an easy way to install the Tools/scripts via "yum" on AMI Linux, which would still be my preference.

How do I do Debian packaging of a Python package?

I need to write, or find, a script to create a Debian package, using package python-support, from a Python package. The Python package will be pure Python without C extensions.
The Python package for testing purposes will just be a directory with an empty __init__.py file and a single Python module, package_test.py.
The packaging script must use python-support to provide the correct bytecode for possible multiple installations of Python on a target platform, e.g. v2.5 and v2.6 on Ubuntu 9.04 (Jaunty Jackalope).
Most advice I find while googling are just examples of nasty hacks that don't even use python-support or python-central.
I have spent hours researching this, and the best I can come up with is to hack around the script from an existing open source project, but I don't know which bits are required for what I'm doing.
Has anyone here made a Debian package out of a Python package in a reasonably non-hacky way?
I'm starting to think that it will take me more than a week to go from no knowledge of Debian packaging and python-support to getting a working script. How long has it taken others?
The right way of building a .deb package is using dpkg-buildpackage, but sometimes it is a little bit complicated. Instead you can use dpkg -b <folder>, and it will create your Debian package.
These are the basics for creating a Debian package with dpkg -b <folder> with any binary or with any kind of script that runs automatically without needing manual compilation (Python, Bash, Perl, and Ruby):
Create the files and folders in order to recreate the following structure:
ProgramName-Version/
ProgramName-Version/DEBIAN
ProgramName-Version/DEBIAN/control
ProgramName-Version/usr/
ProgramName-Version/usr/bin/
ProgramName-Version/usr/bin/your_script
The scripts placed at /usr/bin/ are directly called from the terminal. Note that I didn't add an extension to the script. Also you can notice that the structure of the .deb package will be the structure of the program once it's installed. So if you follow this logic, if your program has a single file, you can directly place it under ProgramName-Version/usr/bin/your_script, but if you have multiple files, you should place them under ProgramName-Version/usr/share/ProgramName/all your files and place only one file under /usr/bin/ that will call your scripts from /usr/share/ProgramName/.
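For example, a multi-file program might be laid out like this (all names below are placeholders):

ProgramName-Version/usr/bin/programname
ProgramName-Version/usr/share/ProgramName/main.py
ProgramName-Version/usr/share/ProgramName/helpers.py

Here usr/bin/programname would be a small launcher script that calls the real code under /usr/share/ProgramName/.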
Change all the folder permissions to root:
chown root:root -R /path/to/ProgramName-Version
Change the script's permissions:
chmod 0755 /path/to/the/script
Finally, you can run: dpkg -b /path/to/the/ProgramName-Version and your .deb package will be created! (You can also add the post/pre install scripts and everything you want. It works like a normal Debian package.)
Here is an example of the control file. You only need to copy-paste it into an empty file called "control" and put that in the DEBIAN folder.
Package: ProgramName
Version: VERSION
Architecture: all
Maintainer: YOUR NAME <EMAIL>
Depends: python2.7, etc , etc,
Installed-Size: in_kb
Homepage: http://example.com
Description: Here you can put a one line description. This is the short Description.
Here you put the long description, indented by one space.
The full article about Debian packages can be read here.
I would take the sources of an existing Debian package, and replace the actual package in it with your package. To find a list of packages that depend on python-support, do
apt-cache rdepends python-support
Pick a package that is Architecture: all, so that it is a pure-Python package. Going through this list, I found that e.g. python-flup might be a good starting point.
To get the source of one such package, do
apt-get source <package>
To build it, do
cd <packagesrc>
dpkg-buildpackage -rfakeroot
When editing it, expect that you only need the files in the debian folder; replace all references to flup with your own package name.
Once you get started, it should take you a day to complete.
I think you want http://pypi.python.org/pypi/stdeb:
stdeb produces Debian source packages from Python packages via a new distutils command, sdist_dsc. Automatic defaults are provided for the Debian package, but many aspects of the resulting package can be customized (see the customizing section, below). An additional command, bdist_deb, creates a Debian binary package, a .deb file.
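A minimal usage sketch (assuming stdeb is installed and you run this from the package's source directory):

$ python setup.py --command-packages=stdeb.command bdist_deb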
Most of the answers posted here are outdated, but fortunately a great Debian wiki post has been made recently, which explains the current best practices and describes how to build Debian packages for Python modules and applications.
http://wiki.debian.org/Python/Packaging
First off, there are plenty of Python packages already in Debian; you can download the source (including all the packaging) for any of them either using apt-get source or by visiting http://packages.debian.org.
You may find the following resources of use:
Debian New Maintainer's Guide
Debian Policy Manual
Debian Python Policy
Debian Python Modules Team
