How to separate development and production in Python

I am developing a Python application to automate my tasks. I would like to have two separate environments, development and production, and in the future maybe a web app or CLI tool environment.
The development environment, for example, has modules for unit testing and API keys that I don't want shipped to production. Is there a package.json equivalent that can help me?
I also want to define the entry file that has to be executed first, which is main.py for my project. Can this be achieved?

For package.json equivalents: you can define separate requirements.txt files for development and production, for example requirements.prod.txt and requirements.dev.txt. Inside the dev requirements you can include the prod requirements by placing -r requirements.prod.txt inside requirements.dev.txt. This way, the development requirements will include all the production packages, plus whatever else you need, e.g. for testing.
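A minimal sketch of the two files (the pinned packages are just examples):

# requirements.prod.txt
requests==2.31.0

# requirements.dev.txt
-r requirements.prod.txt
pytest==8.0.0

Then run pip install -r requirements.dev.txt in development and pip install -r requirements.prod.txt in production.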
For API keys: I would create one .ini file for production and one for development, and take care of only shipping the production version to production. In the code, read the development .ini file first, and fall back to the production one only if the development version does not exist. This way prod.ini and dev.ini can coexist in the project folder during development, and when there is no dev.ini in production, prod.ini will be used.
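A minimal sketch of that fallback, assuming the files are named dev.ini and prod.ini and live in the working directory:

import configparser
from pathlib import Path

# Prefer the development config; fall back to production
# when dev.ini is absent (i.e. on the production machine).
config = configparser.ConfigParser()
config.read("dev.ini" if Path("dev.ini").exists() else "prod.ini")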
For the entry point: you run your script with python main.py; please elaborate on what you meant by this.
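In case you meant the usual Python idiom for an entry file, here is a minimal sketch of the __main__ guard; the file runs main() only when executed directly, not when imported:

# main.py
def main():
    print("starting the app")  # your application logic goes here

if __name__ == "__main__":
    main()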
If you need more info, please comment so I can modify my answer.

Related

How can I configure a test environment with Falcon

I started to write a small REST API using Python with Falcon and Gunicorn. I would like to write some integration tests and I am not sure how to set up a proper test environment (for example to switch to another database). Do you have some good advice or tutorials?
My current idea is to maybe introduce some middleware and to provide a header. If the header is set, I could switch to my test configuration.
Definitely don't add middleware for the sole purpose of integration testing. What you should do is set up some configuration files for your server to use. Dev, Test, and Prod is a decent setup. Each file can point to a different database and use a different port for your server. You should even be able to have the Dev and Test servers up and running at the same time on your personal computer without any issues. Python has a built-in configparser module you can use. You can set an environment variable in your shell so your server knows which configuration file to use, e.g. in bash: FALCON_ENV='DEV'. Then in Python you can read it with the os module: os.environ['FALCON_ENV']. A sketch is below. Hope that helps; feel free to ask any more questions.
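A minimal sketch of that selection logic, assuming one .ini file per environment (the config file names are illustrative):

import os
import configparser

# Pick the config file from the FALCON_ENV environment variable,
# defaulting to the development configuration.
env = os.environ.get('FALCON_ENV', 'DEV')
config = configparser.ConfigParser()
config.read('config_%s.ini' % env.lower())  # e.g. config_dev.ini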
You might want to try using the virtual testing environment and testing helpers provided by Falcon core:
http://falcon.readthedocs.io/en/stable/api/testing.html

How to deploy a Django/Tornado based web app that's built with platter?

This question is mostly about the technical details, plus some best practices, of how to efficiently deploy a Python web app built using platter.
Taking Django for instance, I have a project that's already built into a tarball distribution. This includes the wheels of all deps plus the package of the app itself.
My repo directory also contains some other files that need to be distributed with the deployed code, such as: manage.py, a fabfile package with fabric utils, and some configuration files (for supervisor, nginx, etc).
So my questions are:
How can I wrap these extra files into the distribution that contains the project?
If I simply use git to clone/pull the project on the server, I have these files, but then I have a duplicate of the source code, both in the project and zipped in the tarball. How can I avoid that? By committing the tarball into a separate repo?
Perhaps the duplication is not so bad, and I'll end up with multiple tarballs in my dist/ directory and only one symlinked as current, from which I deploy?
Same goes for a Tornado based app.
My first rule of deployment is "whatever works". Every production environment has different requirements. But to give opinions on your questions:
Not everything should be in your Python project. Perhaps there is a way to do it, but I think it's using the wrong hammer.
You can create a separate Git repo that handles configuration and asset files for your production deployment (this does not even need to be managed by Git if you don't care about old, irrelevant configuration files). This does not have to be a Python project, just the files for the production deployment. You may optionally put a Python script or two in here (or just a README.txt or fab files or a Buildout config) to automate tasks such as unpacking your platter or copying config files around.
It's tempting (and possible) to put production config things in your main Git repo. This is even suggested by apps that create boilerplate files for development and production configuration. This doesn't mean it's the best way to do things though.
My rule is that the main Git repo is "development only". It's cloned by developers who are setting up and working in development environments. It conflates things far too much for one repo to try to be both a Python application and a place to manage a production system, IMHO.
Production is managed separately, sometimes by people other than the developers, or at least by a developer wearing a different hat when thinking about a production deployment. This way you can also have a small, clean repo that tracks just the changes to your production system.
Playing with symlinks within a single deployment that represents different builds is an extra layer of confusion. And the impetus to do so comes from trying to do everything from a single Python project.
Deploy your Python application to something like /var/myapp/build-2015-10-29/. Then create a symlink at /var/myapp/current/ that points to this location. This way you can create a full deployment at /var/myapp/build-2015-11-05/, tweak the config to start on a separate port, bring the app up and ensure everything works, then just switch the symlink from the old build to the new build with minimal downtime.
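A sketch of that swap in shell (the paths and dates are from the example above):

# Repoint the symlink at the new build once it checks out.
ln -sfn /var/myapp/build-2015-11-05 /var/myapp/current

The -n flag stops ln from following the existing current symlink into the old build directory.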

Django Dev/Prod Deployment using Mercurial

I have a development and production Django setup that I can't figure out how to deploy in a simple, clean way. Here's the setup:
/srv/www/projectprod contains my production code, served at www.domain.com
/srv/www/projectbeta contains my development code, served at www.dev.domain.com
Prod and Dev are also split into two different virtualenvs, to isolate their various Python packages, just in case.
What I want to do here is to make a bunch of changes in dev, then push to my Mercurial server, and then re-pull those changes in production when stable. But there are a few things making this complicated:
wsgi.py contains the activate_this.py call for the virtualenv, but the path is scoped to either prod or dev, so that needs to be edited before deployment.
manage.py has a shebang at the top to define the correct python path for the virtualenv. (This is currently #!/srv/ve/.virtualenvs/project-1.2/bin/python so I'm wondering if I can just remove this to simplify things)
settings.py contains paths to the templates, staticfiles, media root, etc. which are all stored under /srv/www/project[prod|dev]/*
I've looked into Fabric, but I don't see anything in it that would rewrite these files for me prior to doing the Mercurial push/pull.
Does anyone have any tips for simplifying this, or a way to automate this deployment?
Two branches for the different environments (with the env-specific changes in each, and thus an additional merge before each deploy)
or
the MQ extension: "clean" code in changesets, with an MQ patch for every environment on top of a single branch (and some care when applying and unapplying patches).
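A sketch of the two-branch approach in Mercurial commands (the branch names are illustrative):

# one-time: open a prod branch carrying the env-specific edits
hg branch prod
hg commit -m "open production branch with prod-specific config"

# each deploy: merge the stable dev work into prod, then pull on the server
hg update prod
hg merge default
hg commit -m "merge default into prod"
hg push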

How to set up a staging environment on Google App Engine

Having properly configured a Development server and a Production server, I would like to set up a Staging environment on Google App Engine, useful to test newly developed versions live before deploying them to production.
I know two different approaches:
A. The first option is by modifying the app.yaml version parameter.
version: app-staging
What I don't like about this approach is that Production data is polluted with my staging tests, because (correct me if I'm wrong):
Staging version and Production version share the same Datastore
Staging version and Production version share the same logs
Regarding the first point, I don't know if it could be "fixed" using the new namespaces Python API.
B. The second option is by modifying the app.yaml application parameter
application: foonamestaging
with this approach, I would create a second application totally independent from the Production version.
The only drawback I see is that I'm forced to configure a second application (administrator setup).
With a backup/restore tool like Gaebar, this solution works well too.
What kind of approach are you using to set up a staging environment for your web application?
Also, do you have any automated script to change the yaml before deploying?
If a separate datastore is required, option B looks like the cleaner solution to me, because:
You can keep the versions feature for real versioning of production applications.
You can keep the versions feature for traffic splitting.
You can keep the namespaces feature for multi-tenancy.
You can easily copy entities from one app to another. It's not so easy between namespaces.
A few APIs still don't support namespaces.
For teams with multiple developers, you can grant upload-to-production permission to a single person.
I chose the second option in my setup, because it was the quickest solution, and I haven't made a script to change the application parameter on deployment yet.
But the way I see it now, option A is a cleaner solution. With a couple of lines of code you can switch the datastore namespace based on the version, which you can get dynamically from the environment variable CURRENT_VERSION_ID, as documented here: http://code.google.com/appengine/docs/python/runtime.html#The_Environment
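A minimal sketch of that switch, assuming the production version is named "production" (the version names are illustrative):

import os
from google.appengine.api import namespace_manager

# CURRENT_VERSION_ID looks like "versionname.minorversion";
# keep only the major version name.
version = os.environ['CURRENT_VERSION_ID'].split('.')[0]
if version != 'production':
    # Send non-production versions to their own datastore namespace.
    namespace_manager.set_namespace(version)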
We went with option B, and I think it is better in general, as it isolates the projects completely. So, for example, playing around with some of the configuration on the staging server won't compromise security or cause any other butterfly effect in your production environment.
As for the deployment script, you can have any application name you want in your app.yaml (some dummy/dev name), and when you deploy, just use the -A parameter:
appcfg.py -A your-app-name update .
That will simplify your deploy script quite a bit; there's no need for string replacement or anything similar in your app.yaml.
We use option B.
In addition to Zygmantas's suggestions about the benefits of separating dev from prod at the application level, we also use our dev application to test performance.
Normally the dev instance runs without much available in the way of resources; this helps show where the application "feels" slow. We can then independently tweak the performance settings to see what makes a difference (e.g. the front-end instance class).
Of course sometimes we need to bite the bullet and tweak & watch on live. But it's nice to have the other application to play with.
We still use namespaces and versions; it's just that dev is dirty and experimental.
Here is what the Google documentation says:

A general recommendation is to have one project per application per environment. For example, if you have two applications, "app1" and "app2", each with a development and production environment, you would have four projects: app1-dev, app1-prod, app2-dev, app2-prod. This isolates the environments from each other, so changes to the development project do not accidentally impact production, and gives you better access control, since you can (for example) grant all developers access to development projects but restrict production access to your CI/CD pipeline.
With this in mind, add a dispatch.yaml file at the root directory, and in each directory or repository that represents a single service, add an app.yaml file along with the associated source code, as explained here: Structuring web services in App Engine
Edit: check out the equivalent link in the Python section if you're using Python.
No need to create a separate project. You can use dispatch.yaml to route your staging URL to another service (staging) in the same project.
Create a custom domain staging.yourdomain.com
Create a separate app-staging.yaml that specifies the staging service:
...
service: staging
...
Create dispatch.yaml that contains something like:

dispatch:
  - url: "*staging.mydomain.com/"
    service: staging
  - url: "*mydomain.com/"
    service: default
gcloud app deploy app-staging.yaml dispatch.yaml
Use of the application parameter in app.yaml is no longer supported.
Instead, Google recommends
gcloud app deploy --project [YOUR_PROJECT_ID]
Please see https://cloud.google.com/appengine/docs/standard/python/config/appref

How to re-use a reusable app in Django

I am trying to create my first site in Django, and as I'm looking for example apps out there to draw inspiration from, I constantly stumble upon a term called "reusable apps".
I understand the concept of an app being reusable easily enough, but the means of reusing an app in Django are quite lost on me. A few questions that are bugging me in the whole business:
What is the preferred way to re-use an existing Django app? Where do I put it and how do I reference it?
From what I understand, the recommendation is to put it on your "PYTHONPATH", but that breaks as soon as I need to deploy my app to a remote location that I have limited access to (e.g. on a hosting service).
So, if I develop my site on my local computer and intend to deploy it on an ISP where I only have ftp access, how do I re-use 3rd party Django apps so that if I deploy my site, the site keeps working (e.g. the only thing I can count on is that the service provider has Python 2.5 and Django 1.x installed)?
How do I organize my Django project so that I could easily deploy it along with all of the reusable apps I want to use?
In general, the only thing required to use a reusable app is to make sure it's on sys.path, so that you can import it from Python code. In most cases (if the author follows best practice), the reusable app tarball or bundle will contain a top-level directory with docs, a README, a setup.py, and then a subdirectory containing the actual app (see django-voting for an example; the app itself is in the "voting" subdirectory). This subdirectory is what needs to be placed in your Python path. Possible methods for doing that include:
running pip install appname, if the app has been uploaded to PyPI (these days most are)
installing the app with setup.py install (this has the same result as pip install appname, but requires that you first download and unpack the code yourself; pip will do that for you)
manually symlinking the code directory to your Python site-packages directory
using software like virtualenv to create a "virtual Python environment" that has its own site-packages directory, and then running setup.py install or pip install appname with that virtualenv active, or placing or symlinking the app in the virtualenv's site-packages (highly recommended over all the "global installation" options, if you value your future sanity)
placing the application in some directory where you intend to place various apps, and then adding that directory to the PYTHONPATH environment variable
You'll know you've got it in the right place if you can fire up a Python interpreter and "import voting" (for example) without getting an ImportError.
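For instance, the virtualenv route from the list above, ending with that import check (the package and module names are illustrative):

virtualenv env
. env/bin/activate
pip install django-voting
python -c 'import voting; print(voting.__file__)'  # no ImportError: it's on the path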
On a server where you have FTP access only, your only option is really the last one, and they have to set it up for you. If they claim to support Django they must provide some place where you can upload packages and they will be available for importing in Python. Without knowing details of your webhost, it's impossible to say how they structure that for you.
An old question, but here's what I do:
If you're using a version control system (VCS), I suggest putting all of the reusable apps and libraries (including Django) that your software needs into the VCS. If you don't want to put them directly under your project root, you can modify settings.py to add their location to sys.path.
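A sketch of that sys.path tweak in settings.py, assuming the vendored apps live in a vendor/ directory next to it (the directory name is an assumption):

import os
import sys

# Make the vendored apps importable no matter where the repo is cloned.
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(PROJECT_ROOT, 'vendor'))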
After that deployment is as simple as cloning or checking out the VCS repository to wherever you want to use it.
This has two added benefits:
No version mismatches: your software always uses the version that you tested it with, not whatever version happened to be available at the time of deployment.
If multiple people work on the project, nobody else has to deal with installing the dependencies.
When it's time to update a component's version, update it in your VCS and then propagate the update to your deployments through it.
