How can I configure a test environment with Falcon - python

I started to write a small REST API using Python with Falcon and Gunicorn. I would like to write some integration tests and I am not sure how to set up a proper test environment (for example to switch to another database). Do you have some good advice or tutorials?
My current idea is to maybe introduce some middleware and to provide a header. If the header is set, I could switch to my test configuration.

Definitely don't add middleware for the sole purpose of integration testing. What you should do is set up some configuration files for your server to use. Dev, Test, and Prod is a decent setup. Each file can point to a different database and use a different port for your server. You should even be able to have the Dev and Test servers running at the same time on your personal machine without any issues. Python has a built-in config module (configparser) that you can use. You can set an environment variable in your shell so your server knows which configuration file to use, e.g. in bash FALCON_ENV='DEV', and then in Python use the os module to read it: os.environ['FALCON_ENV']. Hope that helps, feel free to ask any more questions.
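A minimal sketch of that pattern, assuming config files named config.dev.ini, config.test.ini and config.prod.ini (the file names, sections and keys are just illustrative):

import configparser
import os

# Pick the config file based on FALCON_ENV, defaulting to DEV.
env = os.environ.get('FALCON_ENV', 'DEV').lower()
config = configparser.ConfigParser()
config.read(f'config.{env}.ini')

# e.g. a [database] url and a [server] port defined in each file
DATABASE_URL = config['database']['url']
PORT = config.getint('server', 'port')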

You might want to try using the virtual testing environment and testing helpers provided by falcon core:
http://falcon.readthedocs.io/en/stable/api/testing.html
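For example, a small pytest-based sketch using falcon's testing helpers (the create_app factory and the /things route are assumptions about your project):

from falcon import testing
import pytest

from myapp.app import create_app  # hypothetical factory that builds the app with your test config

@pytest.fixture
def client():
    # create_app() would return the falcon.App (falcon.API in older releases)
    return testing.TestClient(create_app())

def test_list_things(client):
    result = client.simulate_get('/things')
    assert result.status_code == 200

Combined with a FALCON_ENV-style switch as in the other answer, the factory can point the app at your test database before the tests run.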

Related

How to separate development and production in python

I am developing a Python application to automate my tasks. I would like to have two separate environments, development and production, and in the future maybe a web app environment or a CLI tool environment.
The development environment, for example, has modules for unit testing and API keys that I don't want shipped to production. Is it possible to have a package.json equivalent that can help me?
I also want to define the entry file that has to be executed first, which is main.py for my project; can this be achieved?
For package.json equivalents: You can define different requirements.txt files for development and production. For example, requirements.prod.txt and requirements.dev.txt. Inside the dev requirements, you can actually define the prod requirements by placing -r requirements.prod.txt inside the requirements.dev.txt. This way, development requirements will include all the production packages, plus something else for e.g. testing purposes.
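For example (the package names are just placeholders):

# requirements.prod.txt
falcon
gunicorn

# requirements.dev.txt
-r requirements.prod.txt
pytest

Then you run pip install -r requirements.dev.txt on a development machine and pip install -r requirements.prod.txt in production.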
For API keys: I would create one .ini file for production and one for development, and take care of only shipping the production version to production. In the code, you can primarily read the development .ini file, and only use the production one if the development version does not exist. This way the prod.ini and dev.ini config files can coexist in the project folder during development; when there is no dev.ini in production, prod.ini will be used.
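A rough sketch of that fallback logic, with dev.ini and prod.ini as in the answer (the section and key names are assumptions):

import configparser
import os

config = configparser.ConfigParser()
# Prefer dev.ini when it exists (i.e. on a development machine);
# otherwise fall back to the prod.ini that is shipped to production.
config.read('dev.ini' if os.path.exists('dev.ini') else 'prod.ini')

API_KEY = config['keys']['api_key']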
As for the entry file: you can define the entry point simply by running python main.py; please elaborate on what you meant if you had something else in mind.
If you need more info, please comment so I can modify my answer.

Python-based deployment techniques/package libraries for different environments like development, production or staging

I am familiar with environment settings for Node.js using npm packages like "settings". This package allowed me to import different environment settings based on what the NODE_ENV variable is set to.
I've been searching for something similar for Python, but most of the environment-settings tutorials are geared towards Python development on the Django framework.
The only one that I found close to what I want is https://pypi.python.org/pypi/yconf
However, the config settings for different environments should not be limited to just development, production and staging. I was wondering if anyone can suggest similar alternatives, or maybe even argue whether using the Django framework is relevant in my case.
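For reference, the NODE_ENV-style pattern described above might look roughly like this in plain Python, without any framework (the APP_ENV variable and the settings package layout are assumptions):

import importlib
import os

# APP_ENV can be any environment name, not just development/production/staging.
env = os.environ.get('APP_ENV', 'development')
settings = importlib.import_module('settings.' + env)  # e.g. settings/development.py

print(settings.DATABASE_URL)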
I think virtualenv is what you need; see the virtualenv documentation for more info.

Continuous Deployment: Version Numbering and Jenkins for Deployment?

We want to use continuous deployment.
We have:
all sources (python) in a local RhodeCode (git) server.
Jenkins for automated testing
SSH connections to the production systems (linux).
a tool which can update servers in one command.
Now something like this should be implemented:
run tests with Jenkins
if there is a failure: stop and mail the developers
If all tests are OK:
deploy
We have been in the business long enough to write some scripts to do this.
My questions:
How do you update the version numbers? You could increment them, or you could use a timestamp ...
Since we already use Jenkins, I think we do it in a script called by Jenkins. Any reason to do it with a different (better) tool?
My fear: Jenkins becomes a central server for things which are not related to testing (deploy). I think other tools like SaltStack or Ansible should be used for this. Up to now we use Fabric (simple layer above ssh). Maybe we should switch to a central management system before starting with continuous deployment.
Since we already use Jenkins, I think we do it in a script called by Jenkins. Any reason to do it with a different (better) tool?
To answer your question: No, there aren't any big reasons to not go with Jenkins for deployment.
Pros:
You already know Jenkins (and you probably know some of the quirks)
You don't need to introduce yet another technology
You said that you want to write scripts called by Jenkins, so you can switch easily to a different system later.
Cons:
there might be better tools out there for deployment
Does not integrate particularly well with change control tools.
Additional Considerations:
Do not use the same server for prod deployment and continuous build/integration. These are two different tasks performed by two different roles. Therefore two different permission schemes might be employed.
Use permissions wisely. I use two different permissions for my deploy and CI servers. We have 3 Jenkins servers right now.
CI and deploy to uncontrolled environments (Developers can play with these environments)
Deploy to controlled environments (QA environments and upwards).
Deploy to prod (yes, that's the only purpose in life of this server) with the most restrictive permission scheme.
Sandbox; actually there is a fourth server for Jenkins admins to play with.
Store your deployable artifacts outside of Jenkins (and you do if I read your question correctly).
So depending on your existing infrastructure and procedure you decide on the tooling. Jenkins won't lock you in as long as you keep as much of the logic as possible in scripts that are only executed by Jenkins.
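As for the version-numbering question, a minimal sketch of the kind of script Jenkins could call, combining a timestamp with Jenkins' BUILD_NUMBER environment variable (the version format and the git tag step are assumptions):

import os
import subprocess
from datetime import datetime, timezone

# BUILD_NUMBER is set by Jenkins for every build.
build = os.environ.get('BUILD_NUMBER', '0')
stamp = datetime.now(timezone.utc).strftime('%Y%m%d.%H%M')
version = f'{stamp}.{build}'  # e.g. 20160301.1415.42

# Record the version as a git tag so every deployment is traceable to a commit.
subprocess.check_call(['git', 'tag', 'deploy-' + version])
subprocess.check_call(['git', 'push', 'origin', 'deploy-' + version])
print(version)

Whether the number is incremented or timestamp-based matters less than being able to map a running deployment back to a commit.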

How to set up a staging environment on Google App Engine

Having properly configured a Development server and a Production server, I would like to set up a Staging environment on Google App Engine useful to test new developed versions live before deploying them to production.
I know two different approaches:
A. The first option is by modifying the app.yaml version parameter.
version: app-staging
What I don't like of this approach is that Production data is polluted with my staging tests because (correct me if I'm wrong):
Staging version and Production version share the same Datastore
Staging version and Production version share the same logs
Regarding the first point, I don't know if it could be "fixed" using the new namespaces python API.
B. The second option is by modifying the app.yaml application parameter
application: foonamestaging
with this approach, I would create a second application totally independent from the Production version.
The only drawback I see is that I'm forced to configure a second application (administrator setup).
With a backup/restore tool like Gaebar this solution works well too.
What kind of approach are you using to set up a staging environment for your web application?
Also, do you have any automated script to change the yaml before deploying?
If a separate datastore is required, option B looks like the cleaner solution to me because:
You can keep versions feature for real versioning of production applications.
You can keep versions feature for traffic splitting.
You can keep namespaces feature for multi-tenancy.
You can easily copy entities from one app to another. It's not so easy between namespaces.
A few APIs still don't support namespaces.
For teams with multiple developers, you can grant upload to production permission for a single person.
I chose the second option in my set-up, because it was the quickest solution, and I haven't made any script to change the application parameter on deployment yet.
But the way I see it now, option A is a cleaner solution. With a couple of lines of code you can switch the datastore namespace based on the version, which you can get dynamically from the environment variable CURRENT_VERSION_ID, as documented here: http://code.google.com/appengine/docs/python/runtime.html#The_Environment
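A rough sketch of that switch, assuming the staging version is deployed as app-staging and should use a namespace called staging (legacy App Engine Python runtime; the namespace name is an assumption):

import os
from google.appengine.api import namespace_manager

# CURRENT_VERSION_ID looks like "app-staging.1"
version_name = os.environ.get('CURRENT_VERSION_ID', '').split('.')[0]
if version_name == 'app-staging':
    namespace_manager.set_namespace('staging')
# otherwise keep the default (empty) namespace for production data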
We went with option B, and I think it is better in general as it isolates the projects completely. So, for example, playing around with some of the configuration on the staging server won't compromise security or cause any other butterfly effect in your production environment.
As for the deployment script, you can have any application name you want in your app.yaml. Use some dummy/dev name there, and when you deploy, just pass the -A parameter:
appcfg.py -A your-app-name update .
That simplifies your deploy script quite a bit; there is no need for string replacement or anything similar in your app.yaml.
We use option B.
In addition to Zygmantas' suggestions about the benefits of separating dev from prod at the application level, we also use our dev application to test performance.
Normally the dev instance runs without much in the way of resources; this helps to see where the application "feels" slow. We can then also independently tweak the performance settings to see what makes a difference (e.g. the front-end instance class).
Of course sometimes we need to bite the bullet and tweak & watch on live. But it's nice to have the other application to play with.
Still use namespaces and versions, just dev is dirty and experimental.
Here is what the Google documentation says:
"A general recommendation is to have one project per application per environment. For example, if you have two applications, "app1" and "app2", each with a development and production environment, you would have four projects: app1-dev, app1-prod, app2-dev, app2-prod. This isolates the environments from each other, so changes to the development project do not accidentally impact production, and gives you better access control, since you can (for example) grant all developers access to development projects but restrict production access to your CI/CD pipeline."
With this in mind, add a dispatch.yaml file at the root directory, and in each directory or repository that represents a single service, add an app.yaml file along with the associated source code, as explained here: Structuring web services in App Engine.
Edit: check out the equivalent link in the Python section if you're using Python.
No need to create a separate project. You can use dispatch.yaml to route your staging URL to another service (staging) in the same project.
Create a custom domain staging.yourdomain.com
Create a separate app-staging.yaml that specifies the staging service.
...
service: staging
...
Create a dispatch.yaml that contains something like
...
- url: "*staging.mydomain.com/"
  service: staging
- url: "*mydomain.com/"
  service: default
...
gcloud app deploy app-staging.yaml dispatch.yaml
Use of the application parameter in app.yaml is no longer supported.
Instead, Google recommends
gcloud app deploy --project [YOUR_PROJECT_ID]
Please see https://cloud.google.com/appengine/docs/standard/python/config/appref

Methods of sending web-generated config files to servers and restarting services

We're writing a web-based tool to configure the services provided by multiple servers. This includes interface configuration, DHCP configs, etc.
Having the configs in a database and views that generate the proper output, how do we send it / make it available to the servers?
I'm thinking about sending it via scp and invoking a reload command for the services through ssh. I'm also thinking about using Func to do the whole job, as it is a Python tool and will seemingly integrate with our Python-based (Django) config tool.
Any other proposals?
I tried using Puppet for config management, mostly because of all the buzz around it. Unfortunately, I discovered (too late) that the puppetmaster scales horribly, and does not handle heterogeneous environments well. It works for tens of servers, but its inherent architecture prevents scaling.
So I switched to Cfengine 3, which you barely notice any performance impact of, and scales much better because of its distributed architecture. Also, I later discovered that Puppet is just an attempt to reimplement Cfengine 2 inefficiently in Ruby. See http://verticalsysadmin.com/blog/uncategorized/relative-origins-of-cfengine-chef-and-puppet
If your setup is going to be used for something useful, not just played around with, go with Cfengine 3!
You can take a look at Fabric.
As an example, this is an adapted excerpt from one of my backup scripts that starts a Mercurial server on a remote host and pushes local changesets there:
from fabric.api import *

env.hosts = ['login@my.host.com']

def mybckp():
    run('cd ~/somedir; hg serve -a 111.222.111.222 -d')  # start Mercurial server in daemon mode
    local('hg push')  # push local changesets
To execute it, I simply type:
fab mybckp
Basically, what Fabric offers is easy and convenient SSH access to the shell of one or more (remote) hosts, from inside a Python script.
I think you are looking for Puppet and Foreman to manage puppet (create groups of servers).
There are many ways to do this, including Chef, Bcfg2, Capistrano, etc. Puppet has the biggest "lead" now. There is definitely a learning curve, but the results are worth it.
You could keep your servers' config files on the puppet master (in version control). When you deploy the latest config files on the master, puppet clients can automatically pull them and restart services. Puppet "templates" can dynamically generate config files for each server.
Puppet has "providers" for things like packages (apt, yum), files and OS awareness.
It really depends what you're intending to do, as the question is a little vague. The other answers cover the tools available; choosing one over the other comes down to purpose.
Are you intending to manage servers, and services on those servers? If so, try Puppet, CFEngine, or some other tool for managing server configurations.
Or, more specifically, are you looking for a deployment/buildout tool that talks to servers? So that you can type in something along the lines of "mytool deploy myproject", and have your project propagate to all the servers? In which case, fabric would be the tool to use.
Generally a good configuration will consist of both anyway... but for what it's worth, from the sound of it (managing DHCP/network/etc.), Puppet's the way to go.
