After looking at the reusable apps chapter of Practical Django Projects and listening to the DjangoCon (Pycon?) lecture, there seems to be an emphasis on making your apps pluggable by installing them into the Python path, namely site-packages.
What I don't understand is what happens when the version of one of those installed apps changes. If I update one of the apps that's installed to site-packages, won't that break all my current projects that use it? I never noticed anything in settings.py that lets you specify the version of the app you're importing.
I think in Ruby/Rails, they're able to freeze gems for this sort of situation. But what are we supposed to do in Python/Django?
Having multiple versions of the same package gets messy (setuptools can do it, though).
I've found it cleaner to put each project in its own virtualenv. We use virtualenvwrapper to manage the virtualenvs easily, and the --no-site-packages option to make every project really self-contained and portable across machines.
This is the recommended setup for mod_wsgi servers.
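As a sketch of that workflow (the project name here is hypothetical; mkvirtualenv and workon come from virtualenvwrapper, and --no-site-packages is passed through to virtualenv):
mkvirtualenv --no-site-packages myproject  # create an isolated env
workon myproject                           # activate it (and cd to it, if configured)
pip install Django                         # installs into the env, not site-packages
deactivate                                 # leave the env when done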
You definitely don't want to put your Django apps into site-packages if you have more than one Django site.
The best way, as Ken Arnold answered, is to use Ian Bicking's virtualenv (Virtual Python Environment Builder). This is especially true if you have to run multiple versions of Django.
However, if you can run a single version of Python and Django then it might be a little easier to just install the apps into your project directory. This way if an external app gets updated you can upgrade each of your projects one at a time as you see fit. This is the structure Pinax used for external Django apps at one time, but I think it's using virtualenv + pip (instead of setuptools/distutils) now.
What we do.
We put only "3rd-party" stuff in site-packages. Django, XLRD, PIL, etc.
We keep our overall project structured as a collection of packages and Django projects. Each project is a portion of the overall site. We have two separate behaviors for port 80 and port 443 (SSL).
OverallProject/
    aPackage/
    anotherPackage/
    djangoProject80/
        settings.py
        logging.ini
        app_a_1/
            models.py  # app a, version 1 schema
        app_a_2/
            models.py  # app a, version 2 schema
        app_b_2/
            models.py
        app_c_1/
            models.py
    djangoProject443/
    test/
    tool/
We use a version number as part of the app name. This is the major version number, and is tied to the schema, since "uses-the-same-schema" is one definition of major release compatibility.
You have to migrate the data and prove that things work in the new version. Then you can delete the old version and remove its schema from the database. Migrating the data is challenging because you can't run both apps side-by-side.
Most applications have just one current version installed.
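To illustrate, the settings for djangoProject80 might list the versioned apps like this (a sketch using the names from the tree above; the contrib entries are assumptions):
# settings.py (sketch) -- one current version of each app is installed
INSTALLED_APPS = (
    'django.contrib.contenttypes',
    'django.contrib.auth',
    'app_a_2',    # app a, version 2 schema is current
    'app_b_2',
    'app_c_1',    # app c is still on its version 1 schema
    # 'app_a_1',  # old version stays listed only until its data is migrated, then is removed
)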
Related
I am relatively new to Django, and I must admit I'm getting confused (not to mention frustrated) with its, frankly, bizarre folder structure - and the differentiation between websites, projects, apps and modules.
I am trying to install and use django-realestate in my virtualenv (using virtualenvwrapper).
After I created the new virtualenv, I installed the Django app into the new environment. The problem is that the entire codebase actually ends up under ~/.virtualenvs/myenv.
The problem is that I want to be able to modify and (quite extensively) extend the code to suit my own purposes. However, I can't do that if the code is under the "control" of virtualenv. My gut instinct is to do either of the following (see the sketch after this list):
Move the code from the src folder under ~/.virtualenvs/myenv to /path/to/proj
Create a brand new Django install and add(/merge?) the folders realestate, testproject and test to my project folder?
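For what it's worth, the first instinct maps onto pip's editable-install workflow; a sketch (assuming pip and that the checkout has a setup.py; the repo URL and paths are placeholders):
git clone <django-realestate repo URL> /path/to/proj/realestate
pip install -e /path/to/proj/realestate   # the virtualenv gets a link; the code stays editable in place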
It is a hideously complicated setup for what is a straightforward requirement: I want to have the latest Django version, and also have the code for the latest real-estate app (which I will modify extensively).
How do I solve this problem?
I have to find a solution for sharing code between two big Django projects. The main things to share are models, serializers, and template tags. I've come up with three different solutions, and I need help weighing the pros and cons to make a choice.
I'll list the solutions I found:
git submodules
Create a repository to store my *.py files and include them in each project as a Django app such as 'common_deps'
Even if this is exactly what git submodules are for, they are a bit hard to use and it's easy to fall into traps.
python package
Create a python package to store my *.py files.
It seems to be the best option to me, even if it means I'll need to change the requirements.txt file in my projects on each new release.
Simple git repository
Create a new repository to store my *.py files and include them as a Django app such as 'common_deps'. Then add it to my PYTHONPATH
I need some advice; I haven't chosen yet. I'm just telling myself that git submodules seem to be a bad idea.
Tell me, guys.
I would definitely go with the second option you listed - packaging your app. If you follow the steps in the "Packaging your app" part of the official Django tutorial, you'll get a tar.gz file which lets you include your app in any project you want, simply by installing it (e.g. with pip) into the virtualenv connected with the project, or globally.
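For reference, a minimal setup.py for the shared app might look something like this (a sketch; the package name comes from the question, the version and Django pin are assumptions):
# setup.py -- minimal packaging sketch for the shared 'common_deps' app
from setuptools import setup, find_packages

setup(
    name='common-deps',
    version='0.1.0',
    packages=find_packages(),          # picks up the common_deps package
    include_package_data=True,         # ships templates and other data files
    install_requires=['Django>=1.4'],  # assumed floor; pin to what you actually support
)
Then python setup.py sdist produces the tar.gz, and each project's requirements.txt just pins the release it wants.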
I would go with the Python package; after all, that is what it is for.
This may well stink of newbie but...
I'm working on my first Django project, and am reading a lot that using virtualenv is considered best practice. I understand that virtualenv sandboxes all my Python dependencies, but I just don't know if this is necessary if I'm working in sandboxed VMs anyway. I'm developing in Vagrant, won't be using these VMs for anything else, and I'll be deploying to a VM server that will only have this Django project on it. Is it possible that in the future further Django apps in this project will require different dependencies and so will need to be in different virtualenvs? (Not sure if it works like that, tbh.)
Am I just showing my inexperience and shortsightedness?
I would always recommend you use a virtualenv as a matter of course. There is almost no overhead in doing so, and it just makes things easier. In conjunction with virtualenvwrapper you can easily just type workon myproject to activate and cd to your virtualenv in one go. You avoid any issues with having to use sudo to install things, as well as any possible version incompatibilities with system-installed packages. There's just no reason not to, really.
I don't have any knowledge of Vagrant, but I use virtualenvs for my Django projects and would recommend them to anyone.
That said, if you're only going to run one Django project on a virtual machine, you don't need to use a virtualenv. I haven't come across a situation where apps in the same project have conflicting dependencies. This could be a problem if you have multiple projects on the same machine, however.
There are many benefits to working with a virtual environment on your development machine:
You can switch to any version of any supported module to check for issues
Your project runs in a separate environment, without conflicting with your system-wide modules and settings
Testing is easy
Multiple versions of the same project can co-exist.
If you develop multiple projects with different Django versions, virtualenv is simply a must; there is no other way (not that I know of). Once you have experienced dependency hell, virtualenv feels like heaven. Even if you develop just one project, I would still recommend coding inside a virtualenv; you never know what comes next. Back in the day, my old laptop was almost crashing because of so many dependency problems; after I discovered virtualenv, it became a brand-new laptop in my eyes.
No, in your case, you don't need to bother with virtualenv. Since you're using a dedicated virtual machine it's just a layer of complexity you, as a noob, don't really need.
Virtualenv is pretty simple, in concept and usage, so you'll layer it on simply enough when the need arises. But, imho, there is added value in learning how a python installation is truly laid out before adding indirection. When you hit a problem that it can solve, then go for it. But for now, keep it simple: don't bother.
I have a Django project that uses a lot of 3rd-party apps, so I wanted to decide between two approaches for managing my situation:
I can use [virtualenv + pip] along with a pip freeze requirements file to manage my project dependencies.
I don't have to worry about the apps, but can't have that committed with my code to svn.
I can have a lib folder in my svn structure and have my apps sit there and add that to sys.path
This way, my dependencies can be committed to svn, but I have to manage sys.path myself (see the sketch below).
Which way should I proceed ?
What are the pros and cons of each approach ?
Update:
Method 1 disadvantage: difficult to work with App Engine.
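For the second method, the sys.path management can be a few lines at the top of manage.py (and in the WSGI script); a sketch, assuming the lib folder sits next to manage.py:
# manage.py (top) -- make the svn-managed lib/ importable
import os
import sys

PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(PROJECT_ROOT, 'lib'))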
This has been an unanswered question (at least for me) so far. There has been some discussion on this recently:
https://plus.google.com/u/0/104537541227697934010/posts/a4kJ9e1UUqE
Ian Bicking said this in the comments:
I do think we can do something that incorporates both systems. I
posted a recipe for handling this earlier, for instance (I suppose I
should clean it up and repost it). You can handle libraries in a very
similar way in Python, while still using the tools we have to manage
those libraries. You can do that, but it's not at all obvious how to
do that, so people tend to rely on things like reinstalling packages
at deploy time.
http://tarekziade.wordpress.com/2012/02/10/defining-a-wsgi-app-deployment-standard/
The first approach seems the most common among Python devs. When I first started doing Django development, it felt a bit weird, since in PHP it's quite common to check third-party libs into the project repo. But as Ian Bicking said in the linked post, PHP-style deployment leaves out things such as non-portable libraries. You don't want to package things such as mysqldb or PIL into your project; those are better handled by tools like pip or distribute.
So this is what I'm using currently.
All projects have a virtualenv directory at the project root. We name it .env and ignore it in vcs. The first thing a dev does when starting development is to initialize this virtualenv and install all the requirements specified in the requirements.txt file. I prefer having the virtualenv inside the project dir so that it's obvious to developers, rather than having it somewhere else such as $HOME/.virtualenv and then doing source $HOME/.virtualenv/project_name/bin/activate to activate the environment. Instead, developers interact with the virtualenv by invoking the env executables directly from the project root, such as:
.env/bin/python
.env/bin/python manage.py runserver
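Bootstrapping the environment after a fresh checkout would then be something like (a sketch of the convention just described):
virtualenv .env                           # create the per-project env (ignored by vcs)
.env/bin/pip install -r requirements.txt  # install everything the project declares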
To deploy, we have a fabric script that first exports our project directory, together with the .env directory, into a tarball, then copies the tarball to the live server, untars it in the deployment dir, and does some other tasks like restarting the server. When we untar the tarball on the live server, the fabric script makes sure to run virtualenv once again so that all the shebang paths in .env/bin get fixed. This means we don't have to reinstall dependencies on the live server. The fabric workflow for deployment looks like:
fab create_release:1.1 # create release-1.1.tar.gz
fab deploy:1.1 # copy release-1.1.tar.gz to live server and do the deployment tasks
fab deploy:1.1,reset_env=1 # same as above but recreate virtualenv and re-install all dependencies
fab deploy:1.1,update_pkg=1 # only reinstall deps but do not destroy previous virtualenv like above
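A stripped-down fabfile implementing that flow might look like the sketch below (Fabric 1.x style; every path and filename here is an assumption, and the real script does more):
# fabfile.py -- minimal sketch of the create_release/deploy pair above
from fabric.api import cd, local, put, run

DEPLOY_DIR = '/srv/project'  # assumed deployment dir on the live server

def create_release(version):
    # package the project together with its .env directory
    local('tar czf release-%s.tar.gz .env src manage.py requirements.txt' % version)

def deploy(version, reset_env=0, update_pkg=0):
    tarball = 'release-%s.tar.gz' % version
    put(tarball, DEPLOY_DIR)
    with cd(DEPLOY_DIR):
        run('tar xzf %s' % tarball)
        if int(reset_env):
            run('rm -rf .env')  # throw the shipped env away entirely
        run('virtualenv .env')  # (re)create the env; this also fixes the shebang paths in .env/bin
        if int(reset_env) or int(update_pkg):
            run('.env/bin/pip install -r requirements.txt')
        run('touch app.wsgi')   # assumed restart trigger for the mod_wsgi daemon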
We also do not install the project src into the virtualenv using setup.py, but instead add its path to sys.path. So when deploying under mod_wsgi, we have to specify two paths in our vhost config for mod_wsgi, something like:
WSGIDaemonProcess project1 user=joe group=joe processes=1 threads=25 python-path=/path/to/project1/.env/lib/python2.6/site-packages:/path/to/project1/src
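The matching WSGI script then stays minimal (a sketch in the old django.core.handlers.wsgi style this setup implies; the settings module name is an assumption):
# app.wsgi -- the paths already come from the python-path option above
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()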
In short:
We still use pip+virtualenv to manage dependencies.
We don't have to reinstall requirements when deploying.
We have to maintain the paths in sys.path a bit.
Virtualenv and pip are fantastic for working on multiple django projects on one machine. However, if you only have one project that you are editing, it is not necessary to use virtualenv.
I'm building a Django app, which I comfortably run (test :)) on an Ubuntu Linux host. I would like to package the app without source code and distribute it to another production machine. Ideally the app could be run by a ./runapp command which starts a CherryPy server that runs the Python/Django code.
I've discovered several ways of doing this:
Distributing the .pyc files only, and building and installing all the requirements on the target machine (see the sketch after this list).
Using one of the many tools to package Python apps into a distributable package.
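For the first option, the byte-compiling itself is just the stdlib compileall module; a sketch for the Python 2 layout this question implies (Python 3 would need compileall's -b flag so the .pyc files land next to the sources, and shipping .pyc only ties you to the exact Python version; the find invocation assumes GNU find):
python -m compileall .        # write a .pyc next to every .py
find . -name "*.py" -delete   # ship only the byte-code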
I'm really gunning for option no. 2. I'd like to have my Django app contained, so it's possible to distribute it without needing to install or configure additional things. Searching the interwebs provided me with more questions than answers, and a very sour taste that Django packaging is an arcane art that everybody knows but nobody speaks about. :)
I've tried Freeze (fails), cx_Freeze (the easy_install version fails; the repository version works, but the app output fails) and read up on dbuilder.py (which is supposed to work but doesn't really, I guess). If I understand correctly, most problems originate from the way Django imports modules (example), but I have no idea how to solve it.
I'll be more than happy if anyone can provide pointers or good online resources on packaging/distributing standalone Django applications.
I suggest you base your distro on setuptools (a tool that enhances the standard Python distribution mechanism, distutils).
Using setuptools, you should be able to create a Python egg containing your application. The egg's metadata can contain a list of dependencies that will be automatically installed by easy_install (this can include Django plus any third-party modules/packages that you use).
setuptools/distutils distros can include scripts that will be installed to /usr/bin, so that's how you can include your runapp script.
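For the runapp part, the relevant piece of setup.py is the scripts argument; a sketch (the project name, version and dependency list are all assumptions):
# setup.py -- egg metadata with dependencies and an installable launcher script
from setuptools import setup, find_packages

setup(
    name='myapp',
    version='1.0',
    packages=find_packages(),
    install_requires=['Django', 'CherryPy'],  # pulled in automatically on easy_install
    scripts=['runapp'],                       # copied into /usr/bin (or the env's bin/)
)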
If you're not familiar with virtualenv, I suggest you take a look at that as well. It is a way to create isolated Python environments, it will be very useful for testing your distro.
Here's a blog post with some info on virtualenv, as well as a discussion about a couple of other nice to know tools: Tools of the Modern Python Hacker: Virtualenv, Fabric and Pip
The --noreload option will stop Django auto-detecting which modules have changed. I don't know if that will fix it, but it might.
Another option (and it's not ideal) is to obscure some of your core functionality by packaging it as a dll, which your plain text code will call.