Buildout and Virtualenv - python

I am messing around with the combination of buildout and virtualenv to set up an isolated development environment in Python that allows reproducible builds.
There is a recipe for buildout that lets you integrate virtualenv into buildout:
tl.buildout_virtual_python
With this my buildout.cfg looks like this:
[buildout]
develop = .
parts = script
    virtualpython
[virtualpython]
recipe = tl.buildout_virtual_python
headers = true
executable-name = vp
site-packages = false
[script]
recipe = zc.recipe.egg:scripts
eggs = foo
python = virtualpython
This will deploy two executables into ./bin/:
vp
script
When I execute vp, I get an interactive, isolated python dialog, as expected (can't load any packages from the system).
What I would expect now, is that if I run
./bin/script
that the isolated python interpreter is used. But it isn't: the script is not isolated the way "vp" is (meaning I can still import libraries installed at system level). However I can run:
./bin/vp ./bin/script
Which will run the script in an isolated environment, as I wished. But there must be a way to specify this without chaining commands, otherwise buildout only solves half of the problems I hoped :)
Thanks for your help!
Patrick

You don't need virtualenv: buildout already provides an isolated environment, just like virtualenv.
As an example, look at files buildout generates in the bin directory. They'll have something like:
import sys
sys.path[0:0] = [
    '/some/thing1.egg',
    # and other things
    ]
So the sys.path gets completely replaced with what buildout wants to have on the path: the same isolation method as virtualenv.
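You can see the effect of that prepending with plain Python. This is a minimal sketch (the module name `thing1` and its contents are made up, standing in for an egg) of how putting a directory at the front of sys.path controls which code gets imported:

```python
import os
import sys
import tempfile

# Create a throwaway directory holding a module, standing in for an egg.
egg_dir = tempfile.mkdtemp()
with open(os.path.join(egg_dir, 'thing1.py'), 'w') as f:
    f.write('VALUE = 42\n')

# Prepend it, exactly like the generated bin/ scripts do.
sys.path[0:0] = [egg_dir]

import thing1
print(thing1.VALUE)  # -> 42, resolved from the prepended path
```

Because the prepended entries win the import search, the system site-packages never gets a chance to shadow them.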

zc.buildout 2.0 and later no longer provides the isolated environment.
But virtualenv 1.9 and later provides complete isolation (including not installing setuptools).
Thus the easiest way to get a buildout in a completely controlled environment is to run the following steps (here, for example, for Python 2.7):
cd /path/to/buildout
rm ./bin/python
/path/to/virtualenv-2.7 --no-setuptools --no-site-packages --clear .
./bin/python2.7 bootstrap.py
./bin/buildout
Preconditions:
bootstrap.py has to be a recent one matching the buildout version you are using. You'll find the latest at http://downloads.buildout.org/2/
if there are any version pins in your buildout, ensure they do not pin buildout itself or recipes/ extensions to versions not compatible with zc.buildout 2 or later.

I had issues running buildout using bootstrap on an Ubuntu server; since then I have used virtualenv and buildout together. Simply create a virtualenv and install buildout in it. This way only virtualenv has to be installed into the system (in theory¹).
$ virtualenv [options_you_might_need] virtual
$ source virtual/bin/activate
$ pip install zc.buildout
$ buildout -c <buildout.cfg>
Also tell buildout to put its scripts into the virtual/bin/ directory; that way the scripts appear on $PATH.
[buildout]
bin-directory = ${buildout:directory}/virtual/bin
...
¹ In practice you will probably need to install eggs that require compilation (such as mysql or memcache bindings) at the system level.

I've never used that recipe before, but the first thing I would try is this:
[buildout]
develop = .
parts = script
    virtualpython
[virtualpython]
recipe = tl.buildout_virtual_python
headers = true
executable-name = vp
site-packages = false
[script]
recipe = zc.recipe.egg:scripts
eggs = foo
python = virtualpython
interpreter = vp
If that doesn't work, you can usually open up the scripts (in this case vp and script) in a text editor and see the Python paths they're using. If you're on Windows there will usually be a file called <script_name>-script.py; in this case, that would be vp-script.py and script-script.py.

Related

Python module development workflow - setup and build [duplicate]

I'm developing my own module in python 2.7. It resides in ~/Development/.../myModule instead of /usr/lib/python2.7/dist-packages or /usr/lib/python2.7/site-packages. The internal structure is:
/project-root-dir
    /server
        __init__.py
        service.py
        http.py
    /client
        __init__.py
        client.py
client/client.py includes PyCachedClient class. I'm having import problems:
project-root-dir$ python
Python 2.7.2+ (default, Jul 20 2012, 22:12:53)
[GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from server import http
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "server/http.py", line 9, in <module>
    from client import PyCachedClient
ImportError: cannot import name PyCachedClient
I didn't set PYTHONPATH to include my project-root-dir, therefore when server.http tries to import client.PyCachedClient, it tries to load it from a relative path and fails. My question is - how should I set all paths/settings in a good, pythonic way? I know I can run export PYTHONPATH=... in shell each time I open a console and try to run my server, but I guess it's not the best way. If my module was installed via PyPI (or something similar), I'd have it installed in /usr/lib/python... path and it'd be loaded automatically.
I'd appreciate tips on best practices in python module development.
My Python development workflow
This is a basic process to develop Python packages that incorporates what I believe to be the best practices in the community. It's basic - if you're really serious about developing Python packages, there's still a bit more to it, and everyone has their own preferences, but it should serve as a template to get started and then learn more about the pieces involved. The basic steps are:
Use virtualenv for isolation
setuptools for creating an installable package and managing dependencies
python setup.py develop to install that package in development mode
virtualenv
First, I would recommend using virtualenv to get an isolated environment to develop your package(s) in. During development, you will need to install, upgrade, downgrade and uninstall dependencies of your package, and you don't want
your development dependencies to pollute your system-wide site-packages
your system-wide site-packages to influence your development environment
version conflicts
Polluting your system-wide site-packages is bad, because any package you install there will be available to all Python applications you installed that use the system Python, even though you just needed that dependency for your small project. And sooner or later something gets installed in a new version that overrides the one in the system-wide site-packages and is incompatible with ${important_app} that depends on it. You get the idea.
Having your system wide site-packages influence your development environment is bad, because maybe your project depends on a module you already got in the system Python's site-packages. So you forget to properly declare that your project depends on that module, but everything works because it's always there on your local development box. Until you release your package and people try to install it, or push it to production, etc... Developing in a clean environment forces you to properly declare your dependencies.
So, a virtualenv is an isolated environment with its own Python interpreter and module search path. It's based on a Python installation you previously installed, but isolated from it.
To create a virtualenv, install the virtualenv package by installing it to your system wide Python using easy_install or pip:
sudo pip install virtualenv
Notice this will be the only time you install something as root (using sudo), into your global site-packages. Everything after this will happen inside the virtualenv you're about to create.
Now create a virtualenv for developing your package:
cd ~/pyprojects
virtualenv --no-site-packages foobar-env
This will create a directory tree ~/pyprojects/foobar-env, which is your virtualenv.
To activate the virtualenv, cd into it and source the bin/activate script:
~/pyprojects $ cd foobar-env/
~/pyprojects/foobar-env $ . bin/activate
(foobar-env) ~/pyprojects/foobar-env $
Note the leading dot ., that's shorthand for the source shell command. Also note how the prompt changes: (foobar-env) means you're inside the activated virtualenv (and always will need to be for the isolation to work). So activate your env every time you open a new terminal tab or SSH session etc.
If you now run python in that activated env, it will actually use ~/pyprojects/foobar-env/bin/python as the interpreter, with its own site-packages and isolated module search path.
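A quick way to convince yourself the right interpreter is active is to inspect sys from inside Python. This is a rough check, not part of the workflow above; it relies on the sys.real_prefix attribute that virtualenv sets, plus the sys.base_prefix attribute the later built-in venv module uses:

```python
import sys

def in_virtual_env():
    # virtualenv sets sys.real_prefix; the stdlib venv module instead
    # makes sys.base_prefix differ from sys.prefix.
    return (hasattr(sys, 'real_prefix')
            or getattr(sys, 'base_prefix', sys.prefix) != sys.prefix)

print(sys.executable)    # the interpreter actually running
print(in_virtual_env())
```

If the printed executable is not the one under your env's bin/ directory, the env is not active in that shell.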
A setuptools package
Now for creating your package. Basically you'll want a setuptools package with a setup.py to properly declare your package's metadata and dependencies. You can do this on your own by following the setuptools documentation, or create a package skeleton using Paster templates. To use Paster templates, install PasteScript into your virtualenv:
pip install PasteScript
Let's create a source directory for our new package to keep things organized (maybe you'll want to split up your project into several packages, or later use dependencies from source):
mkdir src
cd src/
Now for creating your package, do
paster create -t basic_package foobar
and answer all the questions in the interactive interface. Most are optional and can simply be left at the default by pressing ENTER.
This will create a package (or more precisely, a setuptools distribution) called foobar. This is the name that
people will use to install your package using easy_install or pip install foobar
the name other packages will use to depend on yours in setup.py
what it will be called on PyPi
Inside, you almost always create a Python package (as in "a directory with an __init__.py") that's called the same. That's not required; the name of the top level Python package can be any valid package name, but it's a common convention to name it the same as the distribution. And that's why it's important, but not always easy, to keep the two apart. Because the top level python package name is what
people (or you) will use to import your package using import foobar or from foobar import baz
So if you used the paster template, it will already have created that directory for you:
cd foobar/foobar/
Now create your code:
vim models.py
models.py
class Page(object):
    """A dumb object wrapping a webpage."""

    def __init__(self, content, url):
        self.content = content
        self.original_url = url

    def __repr__(self):
        return "<Page retrieved from '%s' (%s bytes)>" % (self.original_url, len(self.content))
And a client.py in the same directory that uses models.py:
client.py
import requests
from foobar.models import Page
url = 'http://www.stackoverflow.com'
response = requests.get(url)
page = Page(response.content, url)
print page
Declare the dependency on the requests module in setup.py:
install_requires=[
    # -*- Extra requirements: -*-
    'setuptools',
    'requests',
],
Version control
src/foobar/ is the directory you'll now want to put under version control:
cd src/foobar/
git init
vim .gitignore
.gitignore
*.egg-info
*.py[co]
git add .
git commit -m 'Create initial package structure.'
Installing your package as a development egg
Now it's time to install your package in development mode:
python setup.py develop
This will install the requests dependency and your package as a development egg. So it's linked into your virtualenv's site-packages, but still lives at src/foobar where you can make changes and have them be immediately active in the virtualenv without re-installing your package.
Now for your original question, importing using relative paths: My advice is, don't do it. Now that you've got a proper setuptools package, that's installed and importable, your current working directory shouldn't matter any more. Just do from foobar.models import Page or similar, declaring the fully qualified name where that object lives. That makes your source code much more readable and discoverable, for yourself and other people that read your code.
You can now run your code by doing python client.py from anywhere inside your activated virtualenv. python src/foobar/foobar/client.py works just as fine, your package is properly installed and your working directory doesn't matter any more.
If you want to go one step further, you can even create a setuptools entry point for your CLI scripts. This will create a bin/something script in your virtualenv that you can run from the shell.
setuptools console_scripts entry point
setup.py
entry_points='''
# -*- Entry points: -*-
[console_scripts]
run-foobar = foobar.main:run_foobar
''',
client.py
def run_client():
    # ...
main.py
from foobar.client import run_client

def run_foobar():
    run_client()
Re-install your package to activate the entry point:
python setup.py develop
And there you go, bin/run-foobar.
Once you (or someone else) installs your package for real, outside the virtualenv, the entry point will be in /usr/local/bin/run-foobar or somewhere similar, where it will automatically be on $PATH.
Further steps
Creating a release of your package and uploading it to PyPI, for example using zest.releaser
Keeping a changelog and versioning your package
Learn about declaring dependencies
Learn about Differences between distribute, distutils, setuptools and distutils2
Suggested reading:
The Hitchhiker’s Guide to Packaging
The pip cookbook
So, you have two packages, the first with modules named:
server # server/__init__.py
server.service # server/service.py
server.http # server/http.py
The second with module names:
client # client/__init__.py
client.client # client/client.py
If you want to assume both packages are on your import path (sys.path), and the class you want is in client/client.py, then in your server you have to do:
from client.client import PyCachedClient
You asked for a symbol out of client, not client.client, and from your description, that isn't where that symbol is defined.
I personally would consider making this one package (i.e., putting an __init__.py in the folder one level up and giving it a suitable Python package name), with client and server as sub-packages of that package. Then (a) you could do relative imports if you wanted to (from ...client.client import something), and (b) your project would be more suitable for redistribution, not putting two very generic package names at the top level of the python module hierarchy.
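To make the suggestion concrete, here is a self-contained sketch that builds the proposed single-package layout in a temporary directory (the top-level name pycached is made up for illustration) and shows the fully qualified import working:

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, 'pycached')

# One top-level package with client and server as sub-packages.
for d in (pkg, os.path.join(pkg, 'client'), os.path.join(pkg, 'server')):
    os.mkdir(d)
    open(os.path.join(d, '__init__.py'), 'w').close()

with open(os.path.join(pkg, 'client', 'client.py'), 'w') as f:
    f.write('class PyCachedClient(object):\n    pass\n')

# Server code imports the class by its fully qualified name.
with open(os.path.join(pkg, 'server', 'http.py'), 'w') as f:
    f.write('from pycached.client.client import PyCachedClient\n')

sys.path.insert(0, root)
from pycached.server import http
print(http.PyCachedClient)
```

With everything under one top-level package, the import no longer depends on which directory you happen to run Python from.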

How to add new default packages to virtualenv?

When I create a virtualenv, it installs setuptools and pip. Is it possible to add new packages to this list?
Example use cases:
Following this solution to use ipython in virtualenv (from this question) requires installing ipython in every virtualenv (unless I allow system-site-packages).
Or if I'm doing a only flask/pygame/framework development, I'd want it in every virtualenv.
I took a different approach from what is chosen as the correct answer.
I chose a directory, like ~/.virtualenvs/deps, and installed packages into it by doing
pip install -U --target ~/.virtualenvs/deps ...
Next, in ~/.virtualenvs/postmkvirtualenv I put the following:
# find directory
SITEDIR=$(virtualenvwrapper_get_site_packages_dir)
PYVER=$(virtualenvwrapper_get_python_version)
# create a new .pth file with our path, depending on the python version
if [[ $PYVER == 3* ]]; then
    echo "$HOME/.virtualenvs/deps3/" > "$SITEDIR/extra.pth"
else
    echo "$HOME/.virtualenvs/deps/" > "$SITEDIR/extra.pth"
fi
Here is a post that basically says the same thing.
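The trick relies on how Python treats .pth files: each line of a .pth file found in a site directory is appended to sys.path. A small sketch of the mechanism (all paths here are temporary stand-ins for the deps directory and the env's site-packages):

```python
import os
import site
import sys
import tempfile

deps = tempfile.mkdtemp()      # stands in for ~/.virtualenvs/deps
with open(os.path.join(deps, 'shared_dep.py'), 'w') as f:
    f.write('NAME = "shared"\n')

site_dir = tempfile.mkdtemp()  # stands in for the venv's site-packages
with open(os.path.join(site_dir, 'extra.pth'), 'w') as f:
    f.write(deps + '\n')

# site.addsitedir processes .pth files the same way interpreter startup does.
site.addsitedir(site_dir)

import shared_dep
print(shared_dep.NAME)  # -> shared
```

So dropping one extra.pth into each new env's site-packages is enough to make the shared deps directory importable everywhere.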
You can write a python script, say personalize_venv.py that extends the EnvBuilder class and override its post_setup() method for installing any default packages that you need.
You can get the basic example from https://docs.python.org/3/library/venv.html#an-example-of-extending-envbuilder.
This doesn't need a hook. Directly run the script with the command line argument dirs pointing to your venv directory (or directories). The hook is the post_setup() method itself of the EnvBuilder class.
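A minimal sketch of that idea (the class name and package list are illustrative; installing real packages needs network access, so this only shows the hook structure):

```python
import subprocess
import venv

class DefaultPackagesEnvBuilder(venv.EnvBuilder):
    """Create a venv, then pip-install a fixed list of default packages."""

    def __init__(self, *args, default_packages=(), **kwargs):
        kwargs.setdefault('with_pip', True)  # pip is needed for the installs
        self.default_packages = list(default_packages)
        super().__init__(*args, **kwargs)

    def post_setup(self, context):
        # context.env_exe is the path to the new environment's interpreter.
        for package in self.default_packages:
            subprocess.check_call(
                [context.env_exe, '-m', 'pip', 'install', package])

# Usage (hypothetical path):
# DefaultPackagesEnvBuilder(default_packages=['ipython']).create('/path/to/venv')
```

Every venv created through this builder then comes up with the listed packages already installed.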

Implementing a simple dexterity content type for plone 4

I have a very frustrating start learning Plone development. I would like to develop a dexterity based content type for Plone 4. I'm an experienced python developer, having some knowledge of Zope and Grok, being rather new to buildout. That said, I read "Professional Plone 4 Development" by Martin Aspeli, but quite some version information in the book seems to be outdated.
Using buildout I was able to get a Plone instance up and running. ZopeSkel is installed but when I try to create a new package, I get an error like this:
**************************************************************************
** Your new package supports local commands. To access them, change
** directories into the 'src' directory inside your new package.
** From there, you will be able to run the command `paster add
** --list` to see the local commands available for this package.
**************************************************************************
ERROR: No egg-info directory found (looked in ./domma.voucher/./domma.voucher.egg-info, ./domma.voucher/bootstrap.py/domma.voucher.egg-info, ./domma.voucher/bootstrap.pyo/domma.voucher.egg-info, ./domma.voucher/buildout.cfg/domma.voucher.egg-info, ./domma.voucher/CHANGES.txt/domma.voucher.egg-info, ./domma.voucher/CONTRIBUTORS.txt/domma.voucher.egg-info, ./domma.voucher/docs/domma.voucher.egg-info, ./domma.voucher/domma/domma.voucher.egg-info, ./domma.voucher/README.txt/domma.voucher.egg-info, ./domma.voucher/setup.cfg/domma.voucher.egg-info, ./domma.voucher/setup.py/domma.voucher.egg-info, ./domma.voucher/src/domma.voucher.egg-info)
If I try to run paster from within the given directory, it tells me that the command "add" is not known. I tried different versions of ZopeSkel and tried the raw plone templates and also zopeskel.dexterity. The output changes slightly depending on version and template, but the result remains the same.
Obviously, Plone development seems to be very sensitive to version changes, which makes older documentation quite useless. http://plone.org/products/dexterity/documentation/manual/developer-manual tells me that it was last updated 1114 days ago.
Could somebody give me a starting point to develop a very simple dexterity content type for Plone 4 which really works?
For what it's worth, whilst there are a few newer versions of some packages, Professional Plone 4 Development is current with Plone 4.1. I would suggest you use it, and start from its sample code. Don't try to arbitrarily upgrade things until you know you have a working starting point, and you should be OK.
http://pigeonflight.blogspot.com/2012/01/dexterity-development-quickstart-using.html offers a nice quickstart. The most current Dexterity docs are at http://dexterity-developer-manual.readthedocs.org/en/latest/index.html. Yes, this is a little bit of a moving target, documentation-wise, not so much due to Dexterity, which is stable and in production, but mainly because Zopeskel is under heavy development/modernization right now. Sorry about that.
From [https://github.com/collective/templer.plone/blob/master/README.txt][1]
Templer cannot coexist with old ZopeSkel in the same buildout, or Python virtualenv.
Otherwise you will encounter the following error when trying to create packages::
IOError: No egg-info directory found (looked in ./mycompany.content/./mycompany.content.egg-info, ....
Templer is the latest incarnation of ZopeSkel(version 3). I am not sure what version of ZopeSkel you have or if you have mixed versions installed in buildout or virtualenv. But the conflicting installation of ZopeSkel is likely the culprit.
I would start from scratch, recreate the virtualenv, and just install the latest version of ZopeSkel 2 via buildout. ZopeSkel 3 or Templer is still in heavy development and not all templates have been migrated.
I was able to create a new Plone 4.1.4 site with a new Dexterity content-type using this buildout. This should not be an official answer but pasting the configuration to a volatile service like pastebin is not an option for permanent documentation.
# buildout.cfg file for Plone 4 development work
# - for production installations please use http://plone.org/download
# Each part has more information about its recipe on PyPi
# http://pypi.python.org/pypi
# ... just reach by the recipe name
[buildout]
parts =
    instance
    zopepy
    i18ndude
    zopeskel
    test
#    omelette
extends =
    http://dist.plone.org/release/4.1-latest/versions.cfg
    http://good-py.appspot.com/release/dexterity/1.2.1?plone=4.1.4
# Add additional egg download sources here. dist.plone.org contains archives
# of Plone packages.
find-links =
    http://dist.plone.org/release/4.1-latest
    http://dist.plone.org/thirdparty
extensions =
    mr.developer
    buildout.dumppickedversions
sources = sources
versions = versions
auto-checkout =
    nva.borrow
# Create bin/instance command to manage Zope start up and shutdown
[instance]
recipe = plone.recipe.zope2instance
user = admin:admin
http-address = 16080
debug-mode = off
verbose-security = on
blob-storage = var/blobstorage
zope-conf-additional = %import sauna.reload
eggs =
    Pillow
    Plone
    nva.borrow
    sauna.reload
    plone.app.dexterity
# Some pre-Plone 3.3 packages may need you to register the package name here in
# order for their configure.zcml to be run (http://plone.org/products/plone/roadmap/247)
# - this is never required for packages in the Products namespace (Products.*)
zcml =
#    nva.borrow
    sauna.reload
# the zopepy command allows you to execute Python scripts using a PYTHONPATH
# including all the configured eggs
[zopepy]
recipe = zc.recipe.egg
eggs = ${instance:eggs}
interpreter = zopepy
scripts = zopepy
# create bin/i18ndude command
[i18ndude]
unzip = true
recipe = zc.recipe.egg
eggs = i18ndude
# create bin/test command
[test]
recipe = zc.recipe.testrunner
defaults = ['--auto-color', '--auto-progress']
eggs =
    ${instance:eggs}
# create ZopeSkel and paster commands with dexterity support
[zopeskel]
recipe = zc.recipe.egg
eggs =
    ZopeSkel<=2.99
    PasteScript
    zopeskel.dexterity<=2.99
    ${instance:eggs}
# symlinks all Python source code to the parts/omelette folder when buildout is run
# windows users will need to install additional software for this part to build
# correctly. See http://pypi.python.org/pypi/collective.recipe.omelette for
# relevant details.
# [omelette]
# recipe = collective.recipe.omelette
# eggs = ${instance:eggs}
# Put your mr.developer managed source code repositories here, see
# http://pypi.python.org/pypi/mr.developer for details on the format of
# this part
[sources]
nva.borrow = svn https://novareto.googlecode.com/svn/nva.borrow/trunk
# Version pins for new-style products go here - this section extends the one
# provided in http://dist.plone.org/release/
[versions]

How to prepend a path to a buildout-generated script

Scripts generated by zc.buildout using zc.recipe.egg, on our <package>/bin/ directory look like this:
#! <python shebang> -S
import sys
sys.path[0:0] = [
... # some paths derived from the eggs
... # some other paths included with zc.recipe.egg `extra-path`
]
# some user initialization code from zc.recipe.egg `initialization`
# import function, call function
What I have not been able to find is a way to programmatically prepend a path to the sys.path construction introduced in every script. Is this possible?
Why: I have a version of my python project installed globally and another version of it installed locally (off-buildout tree). I want to be able to switch between these two versions.
Note: Clearly, one can use the zc.recipe.egg/initialization property to add something like:
initialization = sys.path[0:0] = ['/add/path/to/my/eggs']
But, is there any other way? Extra points for an example!
Finally, I got a working environment by creating my own buildout recipe that you can find here: https://github.com/idiap/local.bob.recipe. The file that contains the recipe is this one: https://github.com/idiap/local.bob.recipe/blob/master/config.py. There are lots of checks which are specific to our software at the class constructor and some extra improvements as well, but don't get bothered with that. The "real meat (TM)" is on the install() method of that class. It goes like this more or less:
egg_link = os.path.join(self.buildout['buildout']['eggs-directory'], 'external-package.egg-link')
f = open(egg_link, 'wt')
f.write(self.options['install-directory'] + '\n')
f.close()
self.options.created(egg_link)
return self.options.created()
This will do the trick. My external (CMake-based) package now only has to create the right .egg-info file in parallel with the python package(s) it builds. Then, using the above recipe, I can tie in the usage of a specific package installation like this:
[buildout]
parts = external_package python
develop = .
eggs = my_project
    external_package
    recipe.as.above
[external_package]
recipe = recipe.as.above:config
install-directory = ../path/to/my/local/package/build
[python]
recipe = zc.recipe.egg
interpreter = python
eggs = ${buildout:eggs}
If you wish to switch installations, just change the install-directory property above. If you wish to use the default installation available system wide, just remove altogether the recipe.as.above constructions from your buildout.cfg file. Buildout will just find the global installation w/o requiring any extra configuration. Uninstallation will work properly as well. So, switching between builds will just work.
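The mechanism the recipe exploits is simple: an .egg-link file is just a one-line text file naming the directory where the package actually lives, and setuptools follows that path when resolving the egg. A stand-alone sketch (the directories are placeholders for buildout's eggs-directory and the install-directory option):

```python
import os
import tempfile

eggs_directory = tempfile.mkdtemp()   # stands in for buildout's eggs-directory
install_directory = '/path/to/my/local/package/build'

# This is essentially what the recipe's install() method writes.
egg_link = os.path.join(eggs_directory, 'external-package.egg-link')
with open(egg_link, 'w') as f:
    f.write(install_directory + '\n')

# setuptools resolves the link by reading the path back out.
with open(egg_link) as f:
    target = f.read().splitlines()[0]
print(target)
```

Deleting the .egg-link (or pointing it somewhere else) is all it takes to switch installations, which is why uninstallation and switching work cleanly.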
Here is a fully working buildout .cfg file that we use here: https://github.com/idiap/bob.project.example/blob/master/localbob.cfg
The question is: Is there an easier way to achieve the same w/o having this external recipe?
Well, what you miss is probably the most useful buildout extension, mr.developer.
Typically the package, let's say foo.bar will be in some repo, let's say git.
Your buildout will look like
[buildout]
extensions = mr.developer

[sources]
foo.bar = git git@github.com:foo/foo.bar.git
If you don't have your package in a repo, you can use fs instead of git, have a look at the documentation for details.
Activating the "local" version is done by
./bin/develop a foo.bar
Deactivating by
./bin/develop d foo.bar
There are quite a few other things you can do with mr.developer, do check it out!

Activate a python virtual environment using activate_this.py in a fabfile on Windows

I have a Fabric task that needs to access the settings of my Django project.
On Windows, I'm unable to install Fabric into the project's virtualenv (issues with Paramiko + pycrypto deps). However, I am able to install Fabric in my system-wide site-packages, no problem.
I have installed Django into the project's virtualenv and I am able to use all the "> python manage.py" commands easily when I activate the virtualenv with the "VIRTUALENV\Scripts\activate.bat" script.
I have a fabric tasks file (fabfile.py) in my project that provides tasks for setup, test, deploy, etc. Some of the tasks in my fabfile need to access the settings of my django project through "from django.conf import settings".
Since the only usable Fabric install I have is in my system-wide site-packages, I need to activate the virtualenv within my fabfile so django becomes available. To do this, I use the "activate_this" module of the project's virtualenv in order to have access to the project settings and such. Using "print sys.path" before and after I execute activate_this.py, I can tell the python path changes to point to the virtualenv for the project. However, I still cannot import django.conf.settings.
I have been able to successfully do this on *nix (Ubuntu and CentOS) and in Cygwin. Do you use this setup/workflow on Windows? If so Can you help me figure out why this wont work on Windows or provide any tips and tricks to get around this issue?
Thanks and Cheers.
REF:
http://virtualenv.openplans.org/#id9 | Using Virtualenv without
bin/python
Local development environment:
Python 2.5.4
Virtualenv 1.4.6
Fabric 0.9.0
Pip 0.6.1
Django 1.1.1
Windows XP (SP3)
After some digging, I found out that this is an issue with the activate_this.py script. In its current state (virtualenv <= 1.4.6), this script assumes that the path to the site-packages directory is the same on all platforms. However, the path to the site-packages directory differs between *nix-like platforms and Windows.
In this case the activate_this.py script adds the *nix style path:
VIRTUALENV_BASE/lib/python2.5/site-packages/
to the python path instead of the Windows specific path:
VIRTUALENV_BASE\Lib\site-packages\
I have created an issue in the virtualenv issue tracker which outlines the problem and the solution. If you are interested, you may check on the issue here: http://bitbucket.org/ianb/virtualenv/issue/31/windows-activate_this-assumes-nix-path-to-site
Hopefully the fix will be made available in an upcoming release of virtualenv.
If you need a fix for this problem right now, and the virtualenv package has not yet been patched, you may "fix" your own activate_this.py as shown below.
Edit your VIRTUALENV\Scripts\activate_this.py file. Change the line (17 ?):
site_packages = os.path.join(base, 'lib', 'python%s' % sys.version[:3], 'site-packages')
to
if sys.platform == 'win32':
    site_packages = os.path.join(base, 'Lib', 'site-packages')
else:
    site_packages = os.path.join(base, 'lib', 'python%s' % sys.version[:3], 'site-packages')
With this in place, your activate_this.py script would first check which platform it is running on and then tailor the path to the site-packages directory to fit.
Enjoy!
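On current Python versions you can avoid hard-coding either layout and instead ask the interpreter itself, e.g. via the sysconfig module. This is a general technique, not what the old activate_this.py did:

```python
import sysconfig

# 'purelib' is the platform-appropriate site-packages directory, e.g.
# .../lib/pythonX.Y/site-packages on *nix and ...\Lib\site-packages on Windows.
site_packages = sysconfig.get_paths()['purelib']
print(site_packages)
```

Computing the path this way sidesteps the *nix-versus-Windows branch entirely.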
You will have to execute activate_this from within the fabfile. Although I have not tested it, I believe the following should work:
activate_this = '/path/to/env/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))
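Note that execfile() is Python 2 only; under Python 3 the same pattern is spelled with exec(). A self-contained sketch (the real VIRTUALENV/bin/activate_this.py is replaced by a tiny stand-in script, since only the exec pattern matters here):

```python
import os
import tempfile

# Stand-in for the real /path/to/env/bin/activate_this.py.
activate_this = os.path.join(tempfile.mkdtemp(), 'activate_this.py')
with open(activate_this, 'w') as f:
    f.write('activated_from = __file__\n')

namespace = {'__file__': activate_this}
with open(activate_this) as f:
    exec(f.read(), namespace)   # Python 3 replacement for execfile()

print(namespace['activated_from'])
```

Passing __file__ in the globals dict mirrors what execfile(activate_this, dict(__file__=activate_this)) did, which activate_this.py relies on to locate its own environment.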
