Sorry for the trivial question:
I manage multiple projects; each one is deployed to, or interacts with, one or more servers.
I maintain my main fabric.py in my home directory. Now, since projects come and go every day, is there a way to define per-project settings with Fabric (or another system), but not in the fabric.py itself? I'm aware of Fabric's roledefs, but I really want to separate the Fabric task definitions from the server access data.
e.g.
Main fabfile
/home/fradeve/fabric.py
    def deploy():
        # some commands
        pass
Projects dir
/home/fradeve/projects
|
|--> project1/.fabricrc
|--> project2/.fabricrc
\--> project3/.fabricrc
When deploying a project, Fabric should read that project's own .fabricrc, pick up the server settings, and deploy accordingly.
The problem is that I usually work in the main project dir (project1), and starting Fabric there with the fabfile outside the project dir (in home) gives me an error.
Any hint on how to manage this?
I've ended up writing my own script to handle this.
Starting from the marvelous fabfile from Alexey Bezhan, I've added Vagrant support and a function to create all needed files for a new project (.vimrc, Vagrantfile, .gitignore, etc).
Each project's server(s) are defined in a per-project server_config.yaml, with easily customizable options. All functions are documented in the script; I hope this will be useful for someone. My enhanced version of the script is here.
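The idea, roughly, is this (a minimal sketch; the exact keys and layout in my script differ, and the YAML fields below are just examples):

    # fabric.py (Fabric 1.x) -- read per-project server settings
    import os
    import yaml  # PyYAML
    from fabric.api import env, run, task

    @task
    def config(path="server_config.yaml"):
        """Load server settings from the current project's directory."""
        with open(os.path.join(os.getcwd(), path)) as f:
            cfg = yaml.safe_load(f)
        env.hosts = cfg.get("hosts", [])       # e.g. ["deploy@example.com"]
        env.key_filename = cfg.get("ssh_key")  # optional

    @task
    def deploy():
        run("git pull")  # whatever your deploy actually does

From a project directory you can then run fab -f /home/fradeve/fabric.py config deploy: the fabfile stays in home, while server_config.yaml is read from the project you are standing in.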
This question is mostly about the technical details, plus some best practices, of how to efficiently deploy a Python web app built using platter.
Taking Django for instance, I have a project that's already built into a tarball distribution. This includes the wheels of all dependencies plus the package of the app itself.
My repo directory also contains some other files that need to be distributed with the deployed code, such as manage.py, a fabfile package with Fabric utils, and some configuration files (for supervisor, nginx, etc.).
So my questions are:
How can I wrap these extra files into the distribution that contains the project?
If I simply use git to clone/pull the project on the server, I get these files, but then I have a duplicate of the source code, sitting both in the project and zipped in the tarball. How can I avoid that? By committing the tarball into a separate repo?
Perhaps the duplication is not so bad, and I'll just end up with multiple tarballs in my dist/ directory, with only one symlinked as current, from which I deploy?
Same goes for a Tornado-based app.
My first rule of deployment is "whatever works". Every production environment has different requirements. But to give opinions on your questions:
Not everything should be in your Python project. Perhaps there is a way to do it, but I think it's using the wrong hammer.
You can create a separate Git repo that holds configuration and asset files for your production deployment (this does not even need to be managed by Git if you don't care about keeping old, irrelevant configuration files). This does not have to be a Python project, just the files for the production deployment. You may optionally put a Python script or two in here (or just a README.txt or fab files or a Buildout config) to automate tasks such as unpacking your platter or copying config files around.
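For example, such a repo might contain a small helper along these lines (the paths and file names here are made up for illustration, not anything prescribed by platter):

    # deploy.py -- hypothetical helper in the production-config repo
    import subprocess

    RELEASE_DIR = "/var/myapp/build-2015-10-29"   # example release path
    TARBALL = "myapp-1.0.tar.gz"                  # your platter-built distribution

    subprocess.check_call(["mkdir", "-p", RELEASE_DIR])
    subprocess.check_call(["tar", "xzf", TARBALL, "-C", RELEASE_DIR,
                           "--strip-components=1"])
    # copy the config files that live alongside this script
    subprocess.check_call(["cp", "supervisor.conf",
                           "/etc/supervisor/conf.d/myapp.conf"])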
It's tempting (and possible) to put production config things in your main Git repo. This is even suggested by apps that create boilerplate files for development and production configuration. This doesn't mean it's the best way to do things though.
My rule is that the main Git repo is "development only". It's cloned by developers who are setting up and working in development environments. It conflates things far too much for one repo to try to be a Python application and also a place from which to manage a production system, IMHO.
Production is managed separately. Sometimes by people other than the developers, or at least by a developer wearing a different hat when thinking about the production deployment. This way you can also have a small, clean repo that tracks just the changes to your production system.
Playing with symlinks within a single deployment that represents different builds is an extra layer of confusion. And the impetus to do so comes from trying to do everything from a single Python project.
Deploy your Python application to something like /var/myapp/build-2015-10-29/. Then create a symlink at /var/myapp/current/ that points to this location. This way you can create a full deployment at /var/myapp/build-2015-11-05/, tweak the config to start on a separate port, bring the app up and ensure everything works, then just switch the symlink from the old build to the new build with minimal downtime.
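As a sketch of that switch with Fabric (the restart command is an assumption about your setup):

    # Fabric 1.x task -- point "current" at a new build, then restart
    from fabric.api import run, sudo, task

    @task
    def switch(build="build-2015-11-05"):
        # ln -sfn replaces the old symlink in a single command
        run("ln -sfn /var/myapp/%s /var/myapp/current" % build)
        sudo("supervisorctl restart myapp")  # or however you restart the app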
I have a puzzle of a development and production Django setup that I can't figure out a good, simple way to deploy. Here's the setup:
/srv/www/projectprod contains my production code, served at www.domain.com
/srv/www/projectbeta contains my development code, served at www.dev.domain.com
Prod and Dev are also split into two different virtualenvs, to isolate their various Python packages, just in case.
What I want to do here is to make a bunch of changes in dev, then push to my Mercurial server, and then re-pull those changes in production when stable. But there are a few things making this complicated:
wsgi.py contains the activate_this.py call for the virtualenv, but the path is scoped to either prod or dev, so that needs to be edited before deployment.
manage.py has a shebang at the top to define the correct python path for the virtualenv. (This is currently #!/srv/ve/.virtualenvs/project-1.2/bin/python so I'm wondering if I can just remove this to simplify things)
settings.py contains paths to the templates, staticfiles, media root, etc. which are all stored under /srv/www/project[prod|dev]/*
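One simplification I've been considering is computing those settings.py paths relative to the settings file itself, something like this (untested sketch; the names are just examples):

    # settings.py -- derive paths from this file's location so the
    # prod and dev checkouts need no per-environment editing
    import os

    BASE_DIR = os.path.dirname(os.path.abspath(__file__))
    TEMPLATE_DIRS = (os.path.join(BASE_DIR, "templates"),)
    STATIC_ROOT = os.path.join(BASE_DIR, "static")
    MEDIA_ROOT = os.path.join(BASE_DIR, "media")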
I've looked into Fabric, but I don't see anything in it that would re-write these files for me prior to doing the mercurial push/pull.
Does anyone have any tips for simplifying this, or a way to automate this deployment?
Two branches for the two environments (with env-specific changes in each, and thus an additional merge before each deploy),
or
the MQ extension: keep "clean" code in changesets and maintain an MQ patch for each environment on top of a single branch (being careful with apply|unapply of the patches).
As a fledgling Django developer, I was wondering if it is customary, or indeed possible, to create a site with Django and then transfer the complete file structure to a different machine where it would "go live".
Thanks,
~Caitlin
You could use Git or Mercurial, or another version control system, to put the site structure on a central server. After that you could deploy the site, for example with Fabric, to multiple servers. For the deployment process you should consider using virtualenv to isolate the project from the global Python packages and requirements.
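A minimal sketch of what that could look like with Fabric (the repo URL and paths are placeholders):

    from fabric.api import run, task

    @task
    def setup():
        run("git clone https://example.com/mysite.git /srv/www/mysite")
        run("virtualenv /srv/www/mysite/env")
        run("/srv/www/mysite/env/bin/pip install -r /srv/www/mysite/requirements.txt")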
Of course that's possible; in fact, it's the only way to "go live". You don't want to develop on your live server, do you? And that's true for any platform, not just Django.
If I understood your question correctly, you need a system to push your development code to live.
Use a version control system: Git, SVN, Mercurial, etc.
Identify environment-specific code, like settings/config files, and keep a separate instance of it for each environment.
Create a testing/staging/pre-production environment with live or live-like data, and deploy your code there before pushing it to live.
To avoid downtime during the deployment process, a symbolic link is usually created that points to the existing code folder. When a new release is to be pushed, a new folder is created with the new code; once everything else is done (like settings and database changes), the symlink is pointed at the new folder.
Well, I want to use web2py because it's pretty nice.
I just need to change the working directory to the directory where all my modules/libraries/apps are, so I can use them. I want to be able to import my real program when I use the web2py interface/applications. I need to do this instead of putting all my apps and stuff inside the web2py folder... I'm trying to give my program a web frontend without putting the program in the web2py folder. Sorry if this is hard to understand.
In any multi-threaded Python program (and not only Python) you should not use os.chdir, and you should not change sys.path, when you have more than one thread running. It is not safe because it affects the other threads. Moreover, you should not call sys.path.append() in a loop, because sys.path may grow without bound.
All web frameworks are multi-threaded and requests are executed in a loop. Some web frameworks do not allow you to install/un-install applications without restarting the web server and therefore IF os.chdir/sys.path.append are only executed at startup then there is no problem.
In web2py we want to be able to install/uninstall applications without restarting the web server. We want apps to be very dynamic (for example, defining models based on information provided with the HTTP request). We want each app to have its own models folder, and we want complete separation between apps, so that if two apps need two different versions of the same module they do not conflict with each other. So we provide APIs for this (request.folder, local_import).
You can still use the normal os.chdir and sys.path.append but you should do it outside threads (and this is not a web2py specific issue). You can use import anywhere you like as you would in any other Python program.
I strongly suggest moving this discussion to the web2py mailing list.
os.chdir lets you change the OS's working directory, but for your purposes (enabling imports of a bunch of modules &c which are constrained to live in some strange place) it seems better to add the needed directories to sys.path instead.
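For example, done once at startup (per the threading caveats above; the path and module name are placeholders):

    import sys

    MODULES_DIR = "/some/strange/place/modules"  # wherever your code lives
    if MODULES_DIR not in sys.path:
        sys.path.insert(0, MODULES_DIR)

    import mymodule  # hypothetical module living in MODULES_DIR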
I had to do this very thing. I have a few modules that I wanted to use from my controllers. If you want to be able to use code that resides in the modules directory in a controller, you can use:
    # use this in your controller code; reload=True ensures the
    # module is reloaded whenever it changes
    impname = local_import('module_in_modules', reload=True)
Jay
I am trying to create my first site in Django and as I'm looking for example apps out there to draw inspiration from, I constantly stumble upon a term called "reusable apps".
I understand the concept of an app that is reusable easily enough, but the means of reusing an app in Django are quite lost on me. A few questions that are bugging me in this whole business are:
What is the preferred way to re-use an existing Django app? Where do I put it and how do I reference it?
From what I understand, the recommendation is to put it on your "PYTHONPATH", but that breaks as soon as I need to deploy my app to a remote location that I have limited access to (e.g. on a hosting service).
So, if I develop my site on my local computer and intend to deploy it on an ISP where I only have ftp access, how do I re-use 3rd party Django apps so that if I deploy my site, the site keeps working (e.g. the only thing I can count on is that the service provider has Python 2.5 and Django 1.x installed)?
How do I organize my Django project so that I could easily deploy it along with all of the reusable apps I want to use?
In general, the only thing required to use a reusable app is to make sure it's on sys.path, so that you can import it from Python code. In most cases (if the author follows best practice), the reusable app tarball or bundle will contain a top-level directory with docs, a README, a setup.py, and then a subdirectory containing the actual app (see django-voting for an example; the app itself is in the "voting" subdirectory). This subdirectory is what needs to be placed in your Python path. Possible methods for doing that include:
running pip install appname, if the app has been uploaded to PyPI (these days most are)
installing the app with setup.py install (this has the same result as pip install appname, but requires that you first download and unpack the code yourself; pip will do that for you)
manually symlinking the code directory to your Python site-packages directory
using software like virtualenv to create a "virtual Python environment" that has its own site-packages directory, and then running setup.py install or pip install appname with that virtualenv active, or placing or symlinking the app in the virtualenv's site-packages (highly recommended over all the "global installation" options, if you value your future sanity)
placing the application in some directory where you intend to place various apps, and then adding that directory to the PYTHONPATH environment variable
You'll know you've got it in the right place if you can fire up a Python interpreter and "import voting" (for example) without getting an ImportError.
On a server where you have FTP access only, your only option is really the last one, and they have to set it up for you. If they claim to support Django they must provide some place where you can upload packages and they will be available for importing in Python. Without knowing details of your webhost, it's impossible to say how they structure that for you.
An old question, but here's what I do:
If you're using a version control system (VCS), I suggest putting all of the reusable apps and libraries (including Django) that your software needs into the VCS. If you don't want to put them directly under your project root, you can modify settings.py to add their location to sys.path.
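For example, at the top of settings.py (the vendor/ directory name is just a convention, not anything Django mandates):

    # settings.py -- make vendored apps importable
    import os
    import sys

    VENDOR_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "vendor")
    if VENDOR_DIR not in sys.path:
        sys.path.insert(0, VENDOR_DIR)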
After that deployment is as simple as cloning or checking out the VCS repository to wherever you want to use it.
This has two added benefits:
No version mismatches: your software always uses the version that you tested it with, not the version that happened to be available at deployment time.
If multiple people work on the project, nobody else has to deal with installing the dependencies.
When it's time to update a component's version, update it in your VCS and then propagate the update to your deployments from there.