Creating and transferring a site with Django

As a fledgling Django developer, I was wondering if it was customary, or indeed possible, to create a site with Django then transfer the complete file structure to a different machine where it would "go live".
Thanks,
~Caitlin

You could use Git, Mercurial, or another version control system to put the site structure on a central server. After that you could deploy the site to multiple servers with a tool such as Fabric. As part of the deployment process you should also consider using virtualenv to isolate the project from global Python packages and requirements.
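To make that concrete, a Fabric deploy task can be as small as the following sketch (Fabric 1.x; the host names and paths are hypothetical, and it assumes a git checkout and a virtualenv already exist on each server):

    # fabfile.py
    from fabric.api import env, cd, run

    env.hosts = ["web1.example.com", "web2.example.com"]     # hypothetical servers

    def deploy():
        with cd("/srv/mysite"):                              # hypothetical project path
            run("git pull origin master")                    # fetch the latest code
            run("venv/bin/pip install -r requirements.txt")  # sync the virtualenv
            run("touch mysite/wsgi.py")                      # nudge the app server to reload

Running "fab deploy" then executes the task once per host in env.hosts.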

Of course that's possible, and in fact it's the only way to "go live". You don't want to develop on your live server, do you? And this is true for any platform, not just Django.
If I understood your question correctly, you need a system to push your development code to live.
Use a version control system: Git, SVN, Mercurial, etc.
Identify environment-specific code, such as settings/config files, and keep a separate instance of each for every environment.
Create a testing/staging/pre-production environment that has live (or live-like) data, and deploy your code there before pushing it to live.
To avoid downtime during the deployment process, a symbolic link is usually created that points to the existing code folder. When a new release is to be pushed, a new folder is created with the new code; once all other dependencies are in place (such as settings and database changes), the symlink is pointed at the new folder.
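The flip itself can be made atomic. A minimal Python sketch (the release paths are hypothetical):

    import os

    def activate_release(release_dir, current_link="/srv/app/current"):
        # Build the new link next to the old one, then rename it into place;
        # rename/replace is atomic on POSIX, so readers never see a missing link.
        tmp_link = current_link + ".tmp"
        if os.path.lexists(tmp_link):
            os.remove(tmp_link)
        os.symlink(release_dir, tmp_link)
        os.replace(tmp_link, current_link)

    activate_release("/srv/app/releases/2015-11-05")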

Related

How do I deploy a Python Dash app locally without the desired users and the server having all associated dependencies installed?

Forgive me, I'm new to all this. It might not even be possible?
I have a Dash app that does a number of calculations, and I need to deploy it locally somehow.
I need all users in our company to be able to view it, but without the dependencies of the packages. I cannot use any web-based (Heroku, Git, etc) method as the data is commercially sensitive and must remain site-only.
I can successfully run it through waitress-serve on my machine and it can be viewed on other computers, but I'd rather it run from the server and be accessible by anyone that wants to use it.
What's the solution? Is it possible to have a folder on the server that holds all the associated files and dependencies, plus a batch file (or similar; that's what I use now to launch mine) to launch the app on a WSGI server? Or would every machine on our network have to have the Python dependencies installed?
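For reference, the waitress approach mentioned above boils down to something like this minimal sketch (the app layout is a placeholder); only the machine running it needs the Python dependencies:

    # serve_app.py
    import dash
    from dash import html       # on older Dash versions: import dash_html_components as html
    from waitress import serve

    app = dash.Dash(__name__)
    app.layout = html.Div("Hello from the server")

    # Dash wraps a Flask app; waitress serves that underlying WSGI app.
    serve(app.server, host="0.0.0.0", port=8050)

Colleagues then just point a browser at http://<server-name>:8050; nothing has to be installed on their machines.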

Develop whole Zope&Plone app in git (decentralised and without ZMI)

We have a few Zope & Plone projects in our company, and until today I was the one single developer, making all changes through the ZMI or ZopeEdit. Our company is growing, so I need to start cooperating with other developers who can help me develop features and fix bugs. This means it is no longer possible to use the ZMI: every developer needs to make and test their own changes without affecting the others' work, and merge their changes into the production environment using a git merge in the git repo.
I need to move development to git, which means I need to start tracking all of the portal's files and settings in git.
I think I need to move the whole project out of the ZODB/ZMI (including templates, scripts, SQL methods, and properties such as portal_properties or portal_javascripts) onto the filesystem and run git on that filesystem. Then every developer can install their own clean Plone instance, pull the source code and settings from git, create a branch, make changes, test, commit, push, get a code review ...
My question is: is there any way to do this and start a well-known rapid development process using git? Does the ZODB support something like "live migration" of content/settings to/from the filesystem? Is there any way to tell Zope to load some folder of content/settings from the filesystem instead of only from the ZODB?
I know there is something called eggs, but is it possible to move all of the file types mentioned above into a separate egg?
Thank you for your help.
The way your company has been working until now is the old way of Plone development, but it is deprecated and discouraged.
Nowadays the ZMI can still be used for "quick and dirty" fixes, but such changes, stored in the database, should be removed (and moved into real code) as soon as possible. This was already possible back in Plone 2.0!
More importantly, every new Plone release tends to reduce the ZMI's powers (for example, up to Plone 2.1 you could do a lot of things from the ZMI, while starting with Plone 2.5 some UI elements became impossible to modify TTW).
So the answer to your question is "yes". Plone can (and must) read code from the filesystem, and this code can be stored in a VCS (it can be git, svn, ...).
All of this information can be found in the Plone Developer Manual.
Creating a new Plone package for modern Plone? Use mr.bob.
Automatically integrate VCS in your buildout? Use mr.developer.
If you are starting today from a project you have been developing through the ZMI, you will probably need to move the code from the ZMI to the filesystem.
This can be done manually; it's simpler if you are using Zope External Editor.
There is also a very old add-on (Plone Skin Dump) for flushing skin content to the filesystem, but I fear it won't work on recent Plone versions, and in any case it did not support some things such as SQL methods (if you are using them).
You can have a look at http://docs.plone.org/develop/
There you can find how to create a package (egg). Its source code can be added to git. You can checkout your git repository using mr.developer during buildout. https://pypi.python.org/pypi/mr.developer/
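A minimal buildout sketch wiring mr.developer in (the package name and repository URL are hypothetical):

    [buildout]
    extensions = mr.developer
    auto-checkout = my.addon

    [sources]
    my.addon = git https://git.example.com/my.addon.git

With that in place, running buildout checks the package out from git into src/ and installs it in development mode; add my.addon to your instance's eggs list to actually load it.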
You can use mr.bob to create a Plone add-on. For MySQL you can use MySQL-Python. Add these packages to your buildout in the eggs section.
Then you can write your own MySQL statements in the add-on. I know reimplementation is expensive, but you will have more control in the future.

How to deploy a Django/Tornado based web app that's built with platter?

This question is mostly about the technical details + some best practices of how to efficiently deploy a python web app that's built using platter.
Taking Django for instance, I have a project that's already built into a tarball distribution. This includes all wheels of all deps + the package of the app itself.
My repo directory also contains some other files that need to be distributed with the deployed code, such as: manage.py, a fabfile package with fabric utils, and some configuration files (for supervisor, nginx, etc).
So my questions are:
How can I wrap these extra files into the distribution that contains the project?
If I simply use git to clone/pull the project on the server I have these files, but then I have a duplicate of the source code, present both in the project and zipped inside the tarball. How can I avoid that? By committing the tarball to a separate repo?
Perhaps the duplication is not so bad, and I'll end up with multiple tarballs in my dist/ directory and only one symlinked to the current release, from which I deploy?
Same goes for a Tornado based app.
My first rule of deployment is "whatever works". Every production environment has different requirements. But to give opinions on your questions:
Not everything should be in your Python project. Perhaps there is a way to do it, but I think it's using the wrong hammer.
You can create a separate Git repo that handles configuration and asset files for your production deployment (this does not even need to be managed by Git, if you don't care about old, irrelevant configuration files). This does not have to be a Python project, just the files for the production deployment. You may optionally put a Python script or two in here (or just a README.txt or fab files or a Buildout config) to automate tasks such as unpacking your platter or copying config files around.
It's tempting (and possible) to put production config things in your main Git repo. This is even suggested by apps that create boilerplate files for development and production configuration. This doesn't mean it's the best way to do things though.
My rule is that the main Git repo is "development only". It's cloned by developers who are setting up and working in development environments. It conflates things far too much for one repo to try to be a Python application and also a place to manage a production system, IMHO.
Production is managed separately. Sometimes by people different from the developers or at least the developer is wearing a different hat when thinking about a production deployment. This way you can also have a small, clean repo that tracks just changes to your production system.
Playing with symlinks within a single deployment that represents different builds is an extra layer of confusion. And the impetus to do so comes from trying to do everything from a single Python project.
Deploy your Python application to something like /var/myapp/build-2015-10-29/. Then create a symlink at /var/myapp/current/ that points to this location. This way you can create a full deployment at /var/myapp/build-2015-11-05/, tweak the config to start on a separate port, bring the app up and ensure everything works, then just switch the symlink from the old build to the new one with minimal downtime.
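A sketch of that cutover, using the paths from the example (the port and health URL are hypothetical):

    import os
    import urllib.request

    NEW_BUILD = "/var/myapp/build-2015-11-05"
    CURRENT = "/var/myapp/current"

    # The new build is already running on its own port; poke it before switching.
    urllib.request.urlopen("http://127.0.0.1:8001/health/", timeout=5)  # raises if unhealthy

    # Flip the symlink atomically so there is never a moment with no "current".
    tmp = CURRENT + ".tmp"
    os.symlink(NEW_BUILD, tmp)
    os.replace(tmp, CURRENT)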

What is the best way to distribute code across servers?

I have a directory of python programs, classes and packages that I currently distribute to 5 servers. It seems I'm continually going to be adding more servers and right now I'm just doing a basic rsync over from my local box to the servers.
What would a better approach be for distributing code across n servers?
thanks
I use Mercurial with Fabric to deploy all the source code. Fabric is written in Python, so it'll be easy for you to get started. Updating the production service is as simple as fab production deploy, which ends up doing something like this:
Shut down all the services and put an "Upgrade in Progress" page.
Update the source code directory.
Run all migrations.
Start up all services.
It's pretty awesome seeing this all happen automatically.
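A fabfile implementing those four steps might look roughly like this sketch (Fabric 1.x; the service names and paths are hypothetical):

    # fabfile.py
    from fabric.api import env, cd, run, sudo

    env.hosts = ["prod.example.com"]

    def deploy():
        sudo("supervisorctl stop all")                 # 1. shut down the services
        run("cp ~/maintenance.html /srv/app/static/")  #    ...and show the upgrade page
        with cd("/srv/app"):
            run("hg pull -u")                          # 2. update the source code directory
            run("venv/bin/python manage.py migrate")   # 3. run all migrations
        sudo("supervisorctl start all")                # 4. start the services back up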
First, make sure to keep all code under revision control (if you're not already doing that), so that you can check out new versions of the code from a repository instead of having to copy it to the servers from your workstation.
With revision control in place you can use a tool such as Capistrano to automatically check out the code on each server without having to log in to each machine and do a manual checkout.
With such a setup, deploying a new version to all servers can be as simple as running
$ cap deploy
from your local machine.
While I also use version control to do this, another approach you might consider is to package up the source using whatever package management your host systems use (for example RPMs or dpkgs), and set up the systems to use a custom repository. Then an "apt-get upgrade" or "yum update" will update the software on the systems. You could then use something like "mussh" to run the stop/update/start commands on all the hosts.
Ideally, you'd push it to a "testing" repository first, have your staging systems install it, and once the testing of that was signed off on you could move it to the production repository.
It's very similar to the recommendations of using fabric or version control in general, just another alternative which may suit some people better.
The downside to using packages is that you're probably using version control anyway, and you do have to manage version numbers of these packages. I do this using revision tags within my version control, so I could just as easily do an "svn update" or similar on the destination systems.
In either case, you may need to consider the migration from one version to the next. If a user loads a page that references other elements, and you do the update so that those elements go away, what happens? You may wish to handle this either in your deployment scripting or in your code: first push out a version that includes the new page but keeps the old referenced elements, deploy that, and then remove the old elements in a later deployment.
In this way users won't see broken elements within the page.

How to re-use a reusable app in Django

I am trying to create my first site in Django and as I'm looking for example apps out there to draw inspiration from, I constantly stumble upon a term called "reusable apps".
I understand the concept of a reusable app easily enough, but the means of reusing an app in Django are quite lost on me. A few questions that are bugging me in this whole business are:
What is the preferred way to re-use an existing Django app? Where do I put it and how do I reference it?
From what I understand, the recommendation is to put it on your "PYTHONPATH", but that breaks as soon as I need to deploy my app to a remote location that I have limited access to (e.g. on a hosting service).
So, if I develop my site on my local computer and intend to deploy it on an ISP where I only have ftp access, how do I re-use 3rd party Django apps so that if I deploy my site, the site keeps working (e.g. the only thing I can count on is that the service provider has Python 2.5 and Django 1.x installed)?
How do I organize my Django project so that I could easily deploy it along with all of the reusable apps I want to use?
In general, the only thing required to use a reusable app is to make sure it's on sys.path, so that you can import it from Python code. In most cases (if the author follows best practice), the reusable app tarball or bundle will contain a top-level directory with docs, a README, a setup.py, and then a subdirectory containing the actual app (see django-voting for an example; the app itself is in the "voting" subdirectory). This subdirectory is what needs to be placed in your Python path. Possible methods for doing that include:
running pip install appname, if the app has been uploaded to PyPI (these days most are)
installing the app with setup.py install (this has the same result as pip install appname, but requires that you first download and unpack the code yourself; pip will do that for you)
manually symlinking the code directory to your Python site-packages directory
using software like virtualenv to create a "virtual Python environment" that has its own site-packages directory, and then running setup.py install or pip install appname with that virtualenv active, or placing or symlinking the app in the virtualenv's site-packages (highly recommended over all the "global installation" options, if you value your future sanity)
placing the application in some directory where you intend to place various apps, and then adding that directory to the PYTHONPATH environment variable
You'll know you've got it in the right place if you can fire up a Python interpreter and "import voting" (for example) without getting an ImportError.
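For example, using django-voting's "voting" package from above:

    >>> import voting        # succeeds only if the app is on sys.path
    >>> voting.__file__      # handy for seeing which copy was picked up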
On a server where you have FTP access only, your only option is really the last one, and they have to set it up for you. If they claim to support Django they must provide some place where you can upload packages and they will be available for importing in Python. Without knowing details of your webhost, it's impossible to say how they structure that for you.
An old question, but here's what I do:
If you're using a version control system (VCS), I suggest putting all of the reusable apps and libraries (including django) that your software needs in the VCS. If you don't want to put them directly under your project root, you can modify settings.py to add their location to sys.path.
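That settings.py tweak is only a couple of lines; a sketch (the "vendor" directory name is just an example):

    # settings.py
    import os
    import sys

    PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
    # Make reusable apps vendored inside the repository importable.
    sys.path.insert(0, os.path.join(PROJECT_ROOT, "vendor"))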
After that deployment is as simple as cloning or checking out the VCS repository to wherever you want to use it.
This has two added benefits:
No version mismatches: your software always uses the version that you tested it with, and not whatever version was available at the time of deployment.
If multiple people work on the project, nobody else has to deal with installing the dependencies.
When it's time to update a component's version, update it in your VCS and then propagate the update to your deployments via it.
