Upgrading Pyramid/SQLAlchemy web apps - python

I've got a standard, run-of-the-mill Pylons/Pyramid application that uses SQLAlchemy for its database persistence.
I have set up an SQLAlchemy-migrate repository and have it functioning, but I really want the ability to use paster to upgrade and downgrade the database, or at least some way for the user (after installing the egg) to upgrade/downgrade the database to the required version.
I've got it built into my app now, so upon app startup it does the version upgrade, but I would rather have something where the user explicitly has to upgrade the database, so that they know exactly what is going on and know to make backups beforehand.
How would I go about that? How do I add commands to paster?
The way users would set up the application is:
paste make-config appname production.ini
paste setup-app production.ini#appname
That sets it up the first time. To do the database upgrade (or an upgrade in general), I would want:
paste upgrade-app production.ini#appname
Or something along those lines.

You can create your own paster command, e.g. upgrade-app, and then call it from anywhere with paster --plugin=appname upgrade-app /path/to/production.ini appname. You can refer to how Pyramid implements its PShellCommand.
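A minimal sketch of such a command, assuming paste.script is available (the module path appname.commands and the upgrade call are placeholders):

from paste.script.command import Command

class UpgradeAppCommand(Command):
    # Invoked as: paster --plugin=appname upgrade-app production.ini
    summary = 'Upgrade the application database to the latest schema version'
    usage = 'CONFIG_FILE'
    group_name = 'appname'
    min_args = 1
    max_args = 1
    parser = Command.standard_parser(verbose=True)

    def command(self):
        config_file = self.args[0]
        # Read sqlalchemy.url from config_file here, then run the
        # sqlalchemy-migrate upgrade, e.g.:
        #   from migrate.versioning.api import upgrade
        #   upgrade(db_url, repository)
        print('Upgrading database defined in %s' % config_file)

The command is then wired up through an entry point in your setup.py:

entry_points = """
[paste.paster_command]
upgrade-app = appname.commands:UpgradeAppCommand
"""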

It's not quite what you're looking for, but one way I handle this is with Fabric commands. In my OSS app, there is a Fabric command you run that creates a .ini file for your app; then, after you adjust the sqlalchemy.url in it, you run a second Fabric command that initializes the SQLAlchemy migrations and runs the upgrade. From then on, to upgrade you run fab db_upgrade.
http://bmark.us/install.html is an example of the install docs I have set up.
https://github.com/mitechie/Bookie/blob/master/fabfile/database.py is the set of DB-specific commands available through the Fabric interface.
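A stripped-down sketch of such a fabfile (Fabric 1.x, assuming sqlalchemy-migrate's migrate CLI is installed; the connection string and repository path are placeholders):

from fabric.api import local, task

DB_URL = 'sqlite:///bookie.db'  # placeholder connection string
REPO = 'migrations'             # placeholder sqlalchemy-migrate repository path

@task
def db_init():
    # Put the database under sqlalchemy-migrate version control
    local('migrate version_control %s %s' % (DB_URL, REPO))

@task
def db_upgrade():
    # Upgrade the schema to the latest version in the repository
    local('migrate upgrade %s %s' % (DB_URL, REPO))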

Related

Flask & SQLAlchemy - Command Line & Shell use correct database connection, but API/Curl does not

I have a Flask + React application that is running on Debian 11 via Nginx and Gunicorn. In development, everything works great, as it uses SQLAlchemy + SQLite to manage data queries.
In production, my .env file includes the connection details for the PostgreSQL database. After that is when it gets weird (at least for me, though this may be something people commonly run into that my hours on Google just didn't turn up):
When I installed the app on production and set the .env file, I performed the flask db upgrade, and it wrote to the PostgreSQL database (confirmed tables exist).
When I ran the command-line command to create an admin user in the new environment, it created my user in PostgreSQL in the users table with my admin flag.
When I go into flask shell, I can import db from the app (which is just an instantiation of SQLAlchemy) and import User from the AUTH API. Once those are imported, I can run User.query.all() and it will return all users from the PostgreSQL table. I've even ensured there is a unique user in that table by manually creating it in the DB, to validate that it doesn't get created in two systems.
When I use curl to hit the API to log in, it says that the users table is not found and shows that it tried to query SQLite.
To summarize, I cannot figure out why the command line/shell interfaces correctly pull in the PostgreSQL connection but hitting the API falls back to SQLite. I'm not even sure where to start in debugging. Even in the os_env call in the main app that says "pull from the env or fall back to development," I made the fallback production.
All commands are executed in the venv. Gunicorn is running within the same venv, which I validated by tailing the logs that Supervisor compiles for Gunicorn.
I am happy to provide any code that might be needed, but I am unsure what is and is not relevant. If it helps, the original base was built off this boilerplate; we have just expanded the API calls and models, defined a connection string to PostgreSQL in production, and left the SQLite connection string in development. The operation of the app is otherwise exactly the same: https://github.com/a-luna/flask-api-tutorial/tree/part-6
I finally found the answer.
When you launch Gunicorn, it ignores your .env file and any environment variables you may have set in the shell. Even when your app specifically loads the .env file, Gunicorn still ignores it.
There are a variety of solutions, but since I was using Supervisor and also had a large number of variables to load, using the --env flag on Gunicorn was not an option.
Instead, add this to your Gunicorn file. Since I was using a virtualenv and had installed it via pip, my gunicorn command was running from ./project-root/venv/bin/gunicorn.
Modify that file like so:
At the top where your imports are, you will want to add:
import os
from dotenv import load_dotenv
Then, anywhere before you actually load the app (I put mine right after all of the imports), add this block of code, where I have two environment files called .env and .flaskenv:
for env_file in ('.env', '.flaskenv'):
    env = os.path.join(os.getcwd(), env_file)
    if os.path.exists(env):
        load_dotenv(env)
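Putting it all together, the modified venv/bin/gunicorn script ends up looking roughly like this (a sketch; the exact boilerplate pip generates varies slightly between versions, and the shebang path is a placeholder):

#!/path/to/project-root/venv/bin/python
import os
import re
import sys
from dotenv import load_dotenv
from gunicorn.app.wsgiapp import run

# Load the env files before Gunicorn imports the Flask app
for env_file in ('.env', '.flaskenv'):
    env = os.path.join(os.getcwd(), env_file)
    if os.path.exists(env):
        load_dotenv(env)

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    sys.exit(run())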

Moving Django 1.6 to new server

I'm wondering what steps are involved in moving a Django project to a new server.
Basically, I'm completely new to Django and have a few questions. The server it is on now is not stable, so I need to act fast. I did not build the app that is there, but I have pulled down the www folder from the root server. The server is running CentOS.
Questions:
Is Django backwards compatible, or will I need to ensure that the same version is installed?
Apart from moving the files, what other steps are involved in running the app?
Will I need to use CentOS, or will any Linux server do?
I have a database cluster of PostgreSQL ready to go also.
Start with the docs here - this will give you a good overview.
To your specific questions:
1/ Django is not backwards compatible. You should install 1.6.x. Likely there's a requirements.txt file in the root directory of your app. On your new server, install pip, and then pip install -r requirements.txt will install your dependencies. I would personally use virtualenvwrapper to manage your dependencies on the server.
2/ Check the docs, but the main steps are:
Choose a web server. I personally use nginx. You'll need to set up your nginx.conf.
Choose a Python WSGI HTTP Server - I use gunicorn. You'll also need to configure this. This tutorial is a great place to start.
If you use the DigitalOcean tutorial above, any Linux server will do. Last, you'll need to upload your Postgres database to the server, but it sounds like you're able to do that.
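For the Gunicorn side of step 2, the invocation can be as small as a single command that nginx then proxies to (a sketch; the project name and bind address are placeholders):
gunicorn --bind 127.0.0.1:8000 myproject.wsgi:application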
3/ You will need to edit your settings.py of the Django project and update certain variables.
If you're changing your database as well as the app deployment, you'll need to edit the database connection, then run ./manage.py syncdb and ./manage.py migrate (if you're using South) to set up the database schema.
It's also recommended to change the SECRET_KEY between deployments.
If you're deploying on a different host, you'll need to edit ALLOWED_HOSTS appropriately for your new deployment as well.
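For example, the relevant settings.py entries for a Django 1.6 project might look like this (a sketch; names, credentials, and hosts are placeholders):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'changeme',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}

SECRET_KEY = 'generate-a-fresh-random-string-for-this-deployment'

ALLOWED_HOSTS = ['www.example.com']  # your new server's hostname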
Good luck!

How should I debug an app running on server in Django 1.3/Postgres 8.4 when local is Django 1.7/Postgres 9.3?

As a Django / Python newbie, should I try to debug on a server running 4-year-old software versions, try to recreate the old software installations on my local machine, or just try to run the software under current versions of Django/Python/Postgres/PostGIS on my local Mac OS X 10.9.5?
Background:
On a project where I was supposed to just load data into Postgres/PostGIS, I need to debug why a Django/Postgres/PostGIS project from 2010 is getting an error. I'm a LAMP developer who's never used Django or done much in Python, but I've been able to get a staging site working on the server and make one or two changes. I thought it would make sense to debug locally on my Mac OS X 10.9.5, so I've used Homebrew to install Django 1.7 and Postgres 9.3. Looking at the version differences, I'm worried it will be more of a hassle now to try to migrate and upgrade the project than to attempt to debug it on the staging instance running on the server.
FWIW, I know the lines of code that I'd like to investigate (seems like maybe an object is not getting loaded properly from db, since it is in the db), but I'm not positive what sort of echo statements to use. I like using IDE's to inspect things. The project is a bit of an orphan, as the first professional project of a developer who is no longer available to help. And of course, the deadline is last week. :(
Differences between your production and development environments can cause a myriad of headaches.
I recommend you use a tool such as Vagrant to set up a development environment running inside of a virtual machine that mirrors your production server.
Use virtualenv to emulate the necessary Django version. PostgreSQL is trickier; in theory you can have a second instance with the required version running simultaneously, but that can also cause very subtle conflicts. It would be better to have it running on another machine (virtual or physical) and access it through your local network.
The simplest way, I think, is to use unittest and mock objects to set up some unit tests on the functions that you suspect are the cause of the problem. By using unittest and mock objects, you can control how the existing code interacts with Django and Postgres objects, and allow for version differences by setting the expected return values.
With mock objects you can mock all, or just part, of an existing Python object, which reduces the dependencies you require for your development environment. Depending on how the code is structured, you might not need to install Django or Postgres at all, or a web server for that matter. This blog explains mock objects in detail.
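To illustrate, here is a minimal sketch of that approach (using the standalone mock package, since this predates unittest.mock; the module path myapp.views, the model, and the load_object function are all hypothetical stand-ins for the code under suspicion):

import unittest
import mock

class LoadObjectTest(unittest.TestCase):

    @mock.patch('myapp.views.MyModel')  # hypothetical model import path
    def test_object_loaded_from_db(self, MyModel):
        # Control what the ORM "returns" so no database or Django setup is needed
        MyModel.objects.get.return_value = mock.Mock(pk=42)

        from myapp.views import load_object  # hypothetical function under test
        obj = load_object(42)

        MyModel.objects.get.assert_called_once_with(pk=42)
        self.assertEqual(obj.pk, 42)

if __name__ == '__main__':
    unittest.main()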
Even though you're pressed for time, you could do worse than setting up unit tests for the whole project; future developers will thank you.
In terms of debugging, I personally can't recommend pudb enough; it's an interactive command-line debugger which you can use with unittest to zero in on the part of the code that is causing the problem.
If you do need to install Django and Postgres, I would suggest looking at virtualenv, which allows you to set up a virtual environment for Python. That way you can install just the specific dependencies you need without interfering with your global, system-wide installation. You can also install earlier versions of packages, which would do the trick of emulating the existing system's state.
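A typical sequence looks like this (the version pins are illustrative; match them to what the server actually runs):
virtualenv legacy-env
source legacy-env/bin/activate
pip install Django==1.3.7 psycopg2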

How to require different packages on Heroku to local box?

I'm writing a Python Flask app to deploy on Heroku. It will use a database. For local development, I'd like to use Sqlite, but when deployed to Heroku I'd like to use Postgresql. How can I achieve this?
I'm stuck because I don't know how to require a different set of packages between my box and the Heroku server.
Were this a Ruby app I would write in my Gemfile
gem "pg", :group => :production
gem "sqlite3", :group => :development
Then Bundler would install the appropriate packages in development and in production. But I don't know of any analogous workflow for Python's pip.
Well, you have two things to solve.
First, the requirements.txt, which isn't that much of a problem. You can either throw all the requirements into the same requirements.txt file (having both database bindings installed doesn't harm anything) or, if you want to separate them, use requirements.txt for deploying and requirements-dev.txt for local development.
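For example, the split can be as simple as this (a sketch; the package lists are illustrative):

# requirements.txt -- used when deploying
Flask
Flask-SQLAlchemy
psycopg2

# requirements-dev.txt -- used locally
-r requirements.txt
pytest  # plus whatever dev-only tools you use

Locally you run pip install -r requirements-dev.txt, while the deployment (Heroku does this automatically) installs plain requirements.txt.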
More important is the DB setting itself, and for that you have a one-liner solution:
import os

app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get(
    'DATABASE_URL', 'sqlite:////tmp/test.db')
Since DATABASE_URL is set on Heroku but not locally (make sure this is the case), os.environ.get will not find it there and will fall back to the default, which is the SQLite connection string.

What is the best way to distribute code across servers?

I have a directory of Python programs, classes, and packages that I currently distribute to 5 servers. It seems I'm continually going to be adding more servers, and right now I'm just doing a basic rsync over from my local box to the servers.
What would a better approach be for distributing code across n servers?
Thanks.
I use Mercurial with Fabric to deploy all the source code. Fabric's written in Python, so it'll be easy for you to get started. Updating the production service is as simple as fab production deploy, which ends up doing something like this (a fabfile sketch follows the list):
Shut down all the services and put an "Upgrade in Progress" page.
Update the source code directory.
Run all migrations.
Start up all services.
It's pretty awesome seeing this all happen automatically.
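A fabfile sketch of those steps (Fabric 1.x; the host, paths, and service-control commands are placeholders for whatever your stack uses):

from fabric.api import cd, env, run, task

@task
def production():
    env.hosts = ['deploy@www.example.com']  # placeholder host

@task
def deploy():
    run('supervisorctl stop myapp')      # 1. shut down services / show upgrade page
    with cd('/srv/myapp'):
        run('hg pull -u')                # 2. update the source code directory
        run('python manage.py migrate')  # 3. run migrations
    run('supervisorctl start myapp')     # 4. start services back up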
First, make sure to keep all code under revision control (if you're not already doing that), so that you can check out new versions of the code from a repository instead of having to copy it to the servers from your workstation.
With revision control in place you can use a tool such as Capistrano to automatically check out the code on each server without having to log in to each machine and do a manual checkout.
With such a setup, deploying a new version to all servers can be as simple as running
$ cap deploy
from your local machine.
While I also use version control to do this, another approach you might consider is to package up the source using whatever package management your host systems use (for example, RPMs or dpkgs) and set up the systems to use a custom repository. Then an apt-get upgrade or yum update will update the software on the systems. You could then use something like mussh to run the stop/update/start commands on all the machines.
Ideally, you'd push it to a "testing" repository first, have your staging systems install it, and once that testing was signed off on, you could move it to the production repository.
It's very similar to the recommendations of using fabric or version control in general, just another alternative which may suit some people better.
The downside to using packages is that you're probably using version control anyway, and you additionally have to manage version numbers for these packages. I do this using revision tags within my version control, so I could just as easily do an svn update or similar on the destination systems.
In either case, you may need to consider the migration from one version to the next. If a user loads a page that contains references to other elements, and you do the update and those elements go away, what do you do? You may wish to handle this either within your deployment scripting or within your code: first push out a version with the new page that still keeps the old referenced elements, deploy that, and then remove the referenced elements in a later deploy.
In this way users won't see broken elements within the page.
