I'd like to deploy a Django app to multiple hosts with a fabfile. The problem is that all hosts connect to the same database (on another server), and the migrate command runs once for each host.
I could select one host as the master and run the migrate command only from it, but I wonder if there is a more elegant and proper solution for this.
fabfile.py
def migrate():
    virtualenv('python manage.py makemigrations')
    virtualenv('python manage.py migrate')

def prod():
    env.user = 'myuser'
    env.hosts = ['X1', 'X2']
You have about three options.
There is a @runs_once decorator you can use, documented here. You'd just do something like:
@runs_once
def migrate():
    virtualenv('python manage.py makemigrations')
    virtualenv('python manage.py migrate')

def prod():
    env.user = 'myuser'
    env.hosts = ['X1', 'X2']
Called like:
$ fab prod migrate
You can also apply specific roles to the tasks in question, as shown here:
from fabric.api import run, roles

env.roledefs = {
    'db': ['db1'],
    'web': ['web1', 'web2', 'web3'],
}

@roles('db')
def migrate():
    # Database stuff here.
    pass

@roles('web')
def update():
    # Code updates here.
    pass
Called like:
$ fab migrate update
And if you'd like to get more fine-grained, those same functions can be coupled with the execute() function (as shown in that section's docs) to make a deploy function that calls the other tasks for you. It looks like this:
def deploy():
    execute(migrate)
    execute(update)
Called like:
$ fab deploy
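Putting these pieces together for the original migrate question, a rough sketch might look like the one below. The virtualenv() helper, the role membership, and the paths are assumptions made to keep it self-contained; adapt them to your project.

from fabric.api import cd, env, execute, prefix, roles, run, task

env.user = 'myuser'
env.roledefs = {
    'web': ['X1', 'X2'],  # every app server gets code updates
    'db':  ['X1'],        # one host runs migrations against the shared database
}

def virtualenv(cmd):
    # Hypothetical stand-in for the question's virtualenv() helper; adjust paths.
    with cd('/srv/myapp'), prefix('source /srv/myapp/venv/bin/activate'):
        run(cmd)

@task
@roles('db')
def migrate():
    virtualenv('python manage.py migrate')

@task
@roles('web')
def update():
    virtualenv('git pull')  # placeholder for your real update steps

@task
def deploy():
    execute(migrate)
    execute(update)

Called like:
$ fab deploy
migrate then touches only the single 'db' host, so the shared database is migrated exactly once, while update fans out to both web hosts.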
I need to SSH to a remote Ubuntu server to do a routine job, in the following steps:
ssh in as userA
sudo su - userB
run the daliy_python.py script, which uses psycopg2 to read some info from the database (via a local, non-TCP/IP connection)
scp readings to my local machine
The question is: how do I do that automatically?
I've tried to use Fabric, but I ran into a problem with psycopg2. After I run the Fabric script below, I get this error from my daliy_python.py:
psycopg2.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/xxx/.s.xxxx"?
My fabfile.py code is as below:
from fabric.api import *
import os
import socket
import pwd

# Target machine setting
srv = 'server.hostname.com'
env.hosts = [srv]
env.user = 'userA'
env.key_filename = '/location/to/my/key'
env.timeout = 2
# Force fabric abort at timeout
env.skip_bad_hosts = False

def run_remote():
    user = 'userB'
    with settings(warn_only=True):
        run('whoami')
        with cd('/home/%s/script/script_folder' % user):
            sudo('whoami')
            sudo('pwd', user=user)
            sudo('ls', user=user)
            sudo('python daliy_python.py', user=user)
Any suggestions? My database can only be accessed locally via userB, but only userA can SSH to the server. That might be a limitation. Both the local and remote machines are running Ubuntu 14.04.
This is what I do to read my root-accessible logfiles without an extra login:
ssh usera@12.34.56.78 "echo hunter2 | sudo -S tail -f /var/log/nginx/access.log"
That is: ssh usera@12.34.56.78 "..run this code on the remote.."
Then on the remote, you pipe the sudo password into sudo -S: echo hunter2 | sudo -S ...
Add -u userb to sudo to switch to a particular user; I am using root in my case. Then, as the sudo'ed user, run your script, which in my case is tail -f /var/log/nginx/access.log.
But, reading your post, I would probably simply set up a cronjob on the remote, so it runs automatically. I actually do that for all my databases. A cronjob dumps them once a day to a certain directory, with the date as filename. Then I download them to my local PC with rsync an hour later.
I finally found out where my problem was.
Thanks @chishake and @C14L, I looked at the problem in another way.
Inspired by these posts (link1, link2), I started to think the problem was related to environment variables.
So I added a with statement to alter $HOME, and it worked.
fabfile.py is as below:
from fabric.api import *
import os
import socket
import pwd

# Target machine setting
srv = 'server.hostname.com'
env.hosts = [srv]
env.user = 'userA'
env.key_filename = '/location/to/my/key'
env.timeout = 2
# Force fabric abort at timeout
env.skip_bad_hosts = False

def run_remote():
    user = 'userB'
    with settings(warn_only=True):
        run('whoami')
        with shell_env(HOME='/home/%s' % user):
            sudo('echo $HOME', user=user)
            with cd('/home/%s/script/script_folder' % user):
                sudo('whoami')
                sudo('pwd', user=user)
                sudo('ls', user=user)
                sudo('python daliy_python.py', user=user)
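For completeness, the task above is invoked like this, assuming the fabfile sits in the current directory:
$ fab run_remote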
Fabric has a hosts setting to specify which computers to SSH into.
Amazon Web Services has more of a dynamic inventory that can be queried in Python using tools like boto.
Is there a way to combine the two? Ideally, I want something as simple as Ansible's approach of an inventory file plus an external script like ec2.py.
More specifically, is there a prebaked solution for this use case? Ideally, I would like to run something straightforward like this:
from fabric.api import env, run, task
import ec2

env.roledefs = ec2.Inventory()

@task
def command():
    run("lsb_release -a")
And run it like so, assuming env.roledefs['nginx'] exists:
$ fab -R nginx command
You can use Fabric and boto together.
First you need to export AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and the default region (AWS_EC2_REGION here) as environment variables in your shell. The Fabric file should be named fabfile.py, not ec2.py or anything else.
import boto
from boto.ec2 import connect_to_region
from fabric.api import env, run, cd, settings, sudo
from fabric.api import parallel, task
import os

REGION = os.environ.get("AWS_EC2_REGION")
env.user = "ec2-user"
env.key_filename = ["/home/user/uswest.pem"]

@task
def command():
    run("lsb_release -a")

def _create_connection(region):
    print "Connecting to ", region
    conn = connect_to_region(
        region_name=region,
        aws_access_key_id=os.environ.get("AWS_ACCESS_KEY_ID"),
        aws_secret_access_key=os.environ.get("AWS_SECRET_ACCESS_KEY")
    )
    print "Connection with AWS established"
    return conn
Finally, this program can be executed using the command below:
$ fab command
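The snippet above opens a connection but never feeds it into Fabric's host list. One way to bridge that gap is sketched below; the fabric tag key and the role naming are assumptions, and it relies on boto2's get_only_instances with standard EC2 filters:

def _build_roledefs(region):
    # Group running instances into Fabric roles by a hypothetical "fabric" tag,
    # so an instance tagged fabric=nginx lands in env.roledefs['nginx'].
    conn = _create_connection(region)
    roledefs = {}
    for instance in conn.get_only_instances(filters={"instance-state-name": "running"}):
        role = instance.tags.get("fabric")
        if role and instance.public_dns_name:
            roledefs.setdefault(role, []).append(instance.public_dns_name)
    return roledefs

env.roledefs = _build_roledefs(REGION)

With that in place, the fab -R nginx command invocation from the question targets every running instance tagged fabric=nginx.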
From http://docs.python-guide.org/en/latest/scenarios/admin/
You can see that if you set
env.hosts = ['my_server1', 'my_server2']
You'll then be able to target those hosts.
With boto, if you just have a function that does
ec2_connection.get_only_instances(filters={'tag': <whatever>})
and returns a list of their DNS names, you'll then be able to set
env.hosts = [<list of dns names from ec2>]
Piece of cake!
I would like to use Fabric to deploy an application to a server that sits behind a proxy. Normally we SSH to the proxy server and then SSH to the production server, but Fabric doesn't seem to allow for this directly.
An example of the setup would be local --> server A (Proxy) --> Server B (App server)
The destination is server B.
I have tried using the fabfile below to test.
import os.path
from fabric.api import env, run, sudo, cd, local, put, settings
from fabric.contrib.files import sed, exists
from datetime import datetime

def proxy():
    env.user = "root"
    env.hosts = ['proxy']
    env.key_filename = "/home/root/monitorserver.pem"

def production():
    """Defines production environment."""
    env.is_debuggable = False
    env.user = "root"
    env.hosts = ['appserver']
    env.key_filename = "/home/root/appserver.pem"

def createfile():
    """Execute test commands."""
    sudo("touch /tmp/test_%s" % datetime.now().strftime('%H:%M:%S'))
but running the command
fab proxy createfile production createfile
only seems to work as if I had run
fab proxy createfile
fab production createfile
Is there a way I can run fabric locally and deploy to server B with the proxy in place?
I think this can be done by creating two fabfiles: one locally and one on the proxy server.
from fabric.api import env, run, sudo, cd
from datetime import datetime

def proxy():
    env.user = "root"
    env.hosts = ['proxy']
    env.key_filename = "/home/root/monitorserver.pem"
    with cd('/home/root/'):
        createfile()
        run("fab production")

def production():
    """Defines production environment."""
    env.is_debuggable = False
    env.user = "root"
    env.hosts = ['appserver']
    env.key_filename = "/home/root/appserver.pem"
    createfile()

def createfile():
    """Execute test commands."""
    sudo("touch /tmp/test_%s" % datetime.now().strftime('%H:%M:%S'))
Run fab proxy.
(Haven't tested the code, but something like this should work.)
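As an aside, if the Fabric version in use is 1.5 or newer, the env.gateway setting may let you skip the nested fabfile entirely by tunnelling the SSH connection to server B through server A. A minimal sketch, with the host names and key paths as placeholders:

from fabric.api import env, sudo
from datetime import datetime

env.user = "root"
env.gateway = "root@proxy"        # server A: the connection is tunnelled through here
env.hosts = ['appserver']         # server B: commands actually run here
env.key_filename = "/home/root/appserver.pem"

def createfile():
    """Execute test commands on server B via the proxy."""
    sudo("touch /tmp/test_%s" % datetime.now().strftime('%H:%M:%S'))

Called like:
$ fab createfile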
This question already has answers here:
Capistrano for Django
(3 answers)
Closed 8 years ago.
I want to deploy a Django app to a remote server. In Rails they have Capistrano, which handles installing dependencies, updating gems, updating from git, shell commands, etc.
Is there anything built for Django that is as complete and as easy to use as Capistrano?
Note: I think you can also use Capistrano with Django, but is there anything built specifically in Python for Django?
Close?: The solution many people have brought up was given to a 2010 question. Unless you have absolute confidence in that solution, please don't close this question. Software constantly changes and there's always innovation. Are there any new or additional solutions since 2010?
Use Fabric to deploy a Django app to a remote server. Fabric is a Python library and command-line tool for streamlining the use of SSH for application deployment and systems administration tasks. You can use Fabric much like the Capistrano gem. Take a look at this live code:
from fabric.api import *

def dev():
    env.user = "example"
    env.hosts = ["example.com", ]
    env.dev = True
    env.prod = False

def prod():
    env.user = "example"
    env.hosts = ["192.68.1.23", ]
    env.dev = False
    env.prod = True

def start_virtualenv():
    local("workon django_test")

# Local development
def start_dev_server():
    local("python manage.py runserver_plus --settings django_test.settings.dev")

def start_dev_server_z():
    local("python manage.py runserver_plus --settings django_test.settings.dev 0.0.0.0:9000")

def start_dev_shell():
    local("python manage.py shell --settings django_test.settings.dev")

def start_dev_dbshell():
    local("python manage.py dbshell --settings django_test.settings.dev")

def run_dev_command(command_name=""):
    """Run a command with the settings thing already setup"""
    local("python manage.py %s --settings django_test.settings.dev" % command_name)

# Remote serving
def run_prod_command(command_name=""):
    """Just run this command on the remote server"""
    with cd("/srv/www/django_test/app/"):
        with prefix("source /home/user/.virtualenvs/agn/bin/activate"):
            run("python manage.py %s --settings django_test.settings.prod" % command_name)

def restart_prod_server():
    """Start a gunicorn instance using the supervisor daemon from the server"""
    run("sudo supervisorctl restart django_test")

# Deploy and shit
def deploy(commit="true"):
    """
    TODO: there is sure a better way to set that prefix thing
    """
    if commit == "true":
        local("git add .")
        local("git commit -a")
        local("git push")
    with cd("/srv/www/agn/app"):
        run("git pull")
    if env.dev:
        account_name = 'exampledev'
    else:
        account_name = 'user'
    prefix_string = 'source /home/%s/.virtualenvs/django_test/bin/activate' % account_name
    with cd("/srv/www/django_test/app/requirements"):
        with prefix(prefix_string):
            run("pip install -r prod.txt")
    with cd("/srv/www/django_test/app"):
        with prefix(prefix_string):
            run("python manage.py migrate --settings django_test.settings.prod")
            run("python manage.py collectstatic --settings django_test.settings.prod --noinput")
    restart_prod_server()
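For reference, the environment task is chained in front of the action when invoking it, for example (using the tasks defined above):
$ fab dev start_dev_server
$ fab prod deploy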
This question already has answers here:
How can I properly set the `env.hosts` in a function in my Python Fabric `fabfile.py`?
(5 answers)
Closed 9 years ago.
I'm cutting my teeth on Python as I work with Fabric. It looks like I have a basic misunderstanding of how Python and/or Fabric works. Take a look at my two scripts:
AppDeploy.py
from fabric.api import *

class AppDeploy:

    # Environment configuration, all in a dictionary
    environments = {
        'dev' : {
            'hosts' : ['localhost'],
        },
    }

    # Fabric environment
    env = None

    # Take the fabric environment as a constructor argument
    def __init__(self, env):
        self.env = env

    # Configure the fabric environment
    def configure_env(self, environment):
        self.env.hosts.extend(self.environments[environment]['hosts'])
fabfile.py
from fabric.api import *
from AppDeploy import AppDeploy

# Instantiate the backend class with
# all the real configuration and logic
deployer = AppDeploy(env)

# Wrapper functions to select an environment
@task
def env_dev():
    deployer.configure_env('dev')

@task
def hello():
    run('echo hello')

@task
def dev_hello():
    deployer.configure_env('dev')
    run('echo hello')
Chaining the first 2 tasks works
$ fab env_dev hello
[localhost] Executing task 'hello'
[localhost] run: echo hello
[localhost] out: hello
Done.
Disconnecting from localhost... done.
However, when running the last task, which aims to configure the environment and do something in a single task, it appears Fabric does not have the environment configured:
$ fab dev_hello
No hosts found. Please specify (single) host string for connection:
I'm pretty lost though, because if I tweak that method like so
@task
def dev_hello():
    deployer.configure_env('dev')
    print(env.hosts)
    run('echo hello')
it looks like env.hosts is set, but still, fabric is acting like it isn't:
$ fab dev_hello
['localhost']
No hosts found. Please specify (single) host string for connection:
What's going on here?
I'm not sure what you're trying to do, but...
If you're losing info on the shell/environment: Fabric runs each command in a separate shell statement, so you need to either manually chain the commands or use the prefix context manager (see the sketch at the end of this answer).
See http://docs.fabfile.org/en/1.8/faq.html#my-cd-workon-export-etc-calls-don-t-seem-to-work
If you're losing info within Python, it might be tied to this bug/behavior that I ran into recently [ https://github.com/fabric/fabric/issues/1004 ], where the shell I entered Fabric with seems to be obliterated.
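To illustrate the first point, here is a rough sketch of both approaches; the project path and virtualenv location are placeholder assumptions:

from fabric.api import cd, prefix, run, task

@task
def chained():
    # Each run() gets its own shell, so carry all state in a single statement.
    run("cd /srv/myapp && source venv/bin/activate && python manage.py check")

@task
def with_prefix():
    # Or let Fabric prepend the activation command to every run() in the block.
    with cd("/srv/myapp"), prefix("source venv/bin/activate"):
        run("python manage.py check")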