Python/Fabric misunderstanding [duplicate] - python

This question already has answers here:
How can I properly set the `env.hosts` in a function in my Python Fabric `fabfile.py`?
(5 answers)
Closed 9 years ago.
I'm cutting my teeth on Python as I work with Fabric. It looks like I have a basic misunderstanding of how Python and/or Fabric works. Take a look at my two scripts:
AppDeploy.py
from fabric.api import *

class AppDeploy:
    # Environment configuration, all in a dictionary
    environments = {
        'dev' : {
            'hosts' : ['localhost'],
        },
    }

    # Fabric environment
    env = None

    # Take the fabric environment as a constructor argument
    def __init__(self, env):
        self.env = env

    # Configure the fabric environment
    def configure_env(self, environment):
        self.env.hosts.extend(self.environments[environment]['hosts'])
fabfile.py
from fabric.api import *
from AppDeploy import AppDeploy

# Instantiate the backend class with
# all the real configuration and logic
deployer = AppDeploy(env)

# Wrapper functions to select an environment
@task
def env_dev():
    deployer.configure_env('dev')

@task
def hello():
    run('echo hello')

@task
def dev_hello():
    deployer.configure_env('dev')
    run('echo hello')
Chaining the first two tasks works:
$ fab env_dev hello
[localhost] Executing task 'hello'
[localhost] run: echo hello
[localhost] out: hello
Done.
Disconnecting from localhost... done.
However, when running the last task, which aims to configure the environment and do something in a single step, it appears Fabric does not have the environment configured:
$ fab dev_hello
No hosts found. Please specify (single) host string for connection:
I'm pretty lost, though, because if I tweak that method like so:

@task
def dev_hello():
    deployer.configure_env('dev')
    print(env.hosts)
    run('echo hello')
it looks like env.hosts is set, but still, fabric is acting like it isn't:
$ fab dev_hello
['localhost']
No hosts found. Please specify (single) host string for connection:
What's going on here?

I'm not sure what you're trying to do, but...
If you're losing info on the shell/environment: Fabric runs each command in a separate shell statement, so you need to either manually chain the commands or use the prefix context manager.
See http://docs.fabfile.org/en/1.8/faq.html#my-cd-workon-export-etc-calls-don-t-seem-to-work
If you're losing info within Python, it might be tied to this bug/behavior that I ran into recently [ https://github.com/fabric/fabric/issues/1004 ], where the shell I entered Fabric with seems to be obliterated.
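The behavior in the question can also be sketched without Fabric at all. The following is a toy model (my own assumption about the mechanism, not Fabric's actual source): the host list for a task is snapshotted before its body runs, so hosts appended inside the body arrive too late, which is why the two-task chain works but the combined task does not.

```python
# Toy stand-in for Fabric's env and task runner (hypothetical names).
env = {"hosts": []}

def run_task(task):
    hosts = list(env["hosts"])  # snapshot taken BEFORE the body runs
    if not hosts:
        return "No hosts found"
    return [task(h) for h in hosts]

def env_dev():
    env["hosts"].append("localhost")

def hello(host):
    return "hello from %s" % host

# Two-task chain: env_dev mutates env before hello's snapshot is taken.
env_dev()
result_chained = run_task(hello)      # ['hello from localhost']

# Single combined task: the mutation happens after the snapshot.
env["hosts"] = []

def dev_hello(host):
    env["hosts"].append("localhost")  # too late: snapshot already empty

result_combined = run_task(dev_hello)  # "No hosts found"
```

In real Fabric 1.x the usual workaround is to set env.hosts at module level or in a separate setup task (as in env_dev above), or to use execute() with an explicit hosts argument.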

Related

How to wait for a shell script reboot in Fabric 2

I'm using Fabric 2 and I'm trying to run a shell script on a number of hosts sequentially. The script configures a few settings and reboots the host. When I run my task, however, it ends after the script has run on the first host (I'm guessing because the SSH connection terminates due to the reboot). I tried looking into setting 'warn_only' to True, but I don't see where to set this value in Fabric 2.
Adding:

    with settings(warn_only=True):

throws a "NameError: global name 'settings' is not defined" error.
Is there a correct format to warn_only? If not possible yet in Fabric 2, is there a way to continue running my task regardless of this reboot?
My script:
from fabric import *
import time
import csv

@task
def test(ctx):
    hosts = ['1.1.1.1', '2.2.2.2']
    for host in hosts:
        c = Connection(host=host, user="user", connect_kwargs={"password": "password"})
        c.run("./shell_script.sh")
        configured = False
        while not configured:
            result = c.run("cat /etc/hostname")
            # compare the command's stdout, not the Result object itself
            if result.stdout.strip() != "default":
                configured = True
            time.sleep(10)
So a workaround has been to run the task with the -w flag which will enable warn_only and give me the desired functionality. It would be more preferable to be able to set this property in the code though.
It looks like you can use the config argument to the Connection class instead of the with settings() construction; warn_only has been renamed to warn:
with Connection(host=host, user="user", connect_kwargs={"password": "password"}) as c:
    c.run("./shell_script.sh", warn=True)
More generally, you can get upgrade documentation at https://www.fabfile.org/upgrading.html#run
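For the "continue after the reboot" part, the polling loop in the question can be factored into a small helper with an explicit timeout, so the task doesn't spin forever if a host never comes back. This is a plain-Python sketch (wait_until is a made-up helper, not a Fabric API); the clock and sleep parameters exist only to make it testable.

```python
import time

def wait_until(predicate, timeout=300, interval=10,
               clock=time.monotonic, sleep=time.sleep):
    """Poll predicate() until it is truthy or `timeout` seconds pass.

    Returns True if the predicate succeeded, False on timeout.
    """
    deadline = clock() + timeout
    while True:
        if predicate():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)
```

In the task above you might then call something like wait_until(lambda: c.run("cat /etc/hostname", warn=True).stdout.strip() != "default"), using warn=True so the connection errors during the reboot don't abort the loop.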

Fabric - possible to have local and remote task execution in one method?

Is it possible to have local and remote tasks execute from within the same task method?
e.g., I want to do something like the following:

@fabric.api.task
def Deploy():
    PrepareDeploy()
    PushDeploy()
    execute(Extract())
    execute(Start())
Where PrepareDeploy and PushDeploy are local tasks (executing only locally, via the fabric.api.local() method):

@fabric.api.task
@fabric.decorators.runs_once
def PrepareDeploy():
    ...

@fabric.api.task
@fabric.decorators.runs_once
def PushDeploy():
    ...

And Extract/Start are methods that should be run on the remote hosts themselves:

@fabric.api.task
def Extract():
    ...

@fabric.api.task
def Start():
    ...
However, when I try to do fab Deploy, I get something like:
[remote1.serv.com] Executing task 'Deploy'
[localhost] local: find . -name "*.java" > sources.txt
...
The first line seems wrong to me (and in fact, causes errors).
You can spawn a new task and define which hosts it should run on - for example, how to set up RabbitMQ when all hosts are provisioned with Puppet using the same Erlang cookie.
See around line 114 of this gist - there is an execution of the tasks on specific hosts:
https://gist.github.com/nvtkaszpir/17d2e2180771abd93c46
I hope this helps.

Migrate django app to multiple servers

I'd like to migrate a Django app on multiple hosts with a fabfile. The problem is that all hosts connect to the same database (on another server), so the migrate command runs once per host.
I could select one host as the master and run the migrate command only from there, but I wonder if there is a more elegant and proper solution for this.
fabfile.py
def migrate():
    virtualenv('python manage.py makemigrations')
    virtualenv('python manage.py migrate')

def prod():
    env.user = 'myuser'
    env.hosts = ['X1', 'X2']
You have about three options.
There is a @runs_once decorator you can use, documented here. You'd just do something like:

@runs_once
def migrate():
    virtualenv('python manage.py makemigrations')
    virtualenv('python manage.py migrate')

def prod():
    env.user = 'myuser'
    env.hosts = ['X1', 'X2']

Called like:
$ fab prod migrate
You can also just apply specific roles to the tasks in question, as shown here:

from fabric.api import run, roles

env.roledefs = {
    'db': ['db1'],
    'web': ['web1', 'web2', 'web3'],
}

@roles('db')
def migrate():
    # Database stuff here.
    pass

@roles('web')
def update():
    # Code updates here.
    pass

Called like:
$ fab migrate update
And if you'd like to get more fine-grained, those same functions can be coupled with the execute() function (as shown in that section's docs) to make a deploy function that calls the other tasks for you. That looks like this:

def deploy():
    execute(migrate)
    execute(update)
Called like:
$ fab deploy
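To see why @runs_once solves the "migrate runs once per host" problem, here is a rough sketch of what the decorator does (an assumption about its behavior, not Fabric's actual source): it caches the first call's return value and skips the body on every later per-host invocation.

```python
import functools

def runs_once(func):
    """Sketch of Fabric's @runs_once: run the body at most once."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if not hasattr(wrapper, "return_value"):
            wrapper.return_value = func(*args, **kwargs)
        return wrapper.return_value
    return wrapper

calls = []

@runs_once
def migrate():
    calls.append("migrate")  # would run makemigrations/migrate here
    return "migrated"

# Fabric invokes the task once per host in env.hosts;
# the body only executes on the first invocation.
for host in ["X1", "X2"]:
    migrate()
```

So even with env.hosts = ['X1', 'X2'], the database migration fires only once, while undecorated tasks still run on every host.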

How to access AWS's dynamic EC2 inventory using Fabric?

Fabric has a hosts setting to specify which computers to SSH into.
Amazon Web Services has more of a dynamic inventory that can be queried in python using tools like boto.
Is there a way to combine these two services? Ideally, I wanted something as simple as ansible's approach with an inventory file and using an external file like ec2.py.
More specifically, is there a prebaked solution for this use case? Ideally, I would like to run something straightforward like this:
from fabric.api import env, task
import ec2

env.roledefs = ec2.Inventory()

@task
def command():
    run("lsb_release -a")
And run it like so, assuming env.roledefs['nginx'] exists:
$ fab -R nginx command
You can use Fabric and boto together.
First you need to export AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and the default region in your console. The Fabric file must be named fabfile.py, not ec2.py or anything else.
import os

import boto
from boto.ec2 import connect_to_region
from fabric.api import env, run, cd, settings, sudo, task
from fabric.api import parallel

REGION = os.environ.get("AWS_EC2_REGION")

env.user = "ec2-user"
env.key_filename = ["/home/user/uswest.pem"]

@task
def command():
    run("lsb_release -a")

def _create_connection(region):
    print("Connecting to %s" % region)
    conn = connect_to_region(
        region_name=region,
        aws_access_key_id=os.environ.get("AWS_ACCESS_KEY_ID"),
        aws_secret_access_key=os.environ.get("AWS_SECRET_ACCESS_KEY"),
    )
    print("Connection with AWS established")
    return conn
Finally, this program can be executed using the command below:
$ fab command
From http://docs.python-guide.org/en/latest/scenarios/admin/
You can see that if you set

env.hosts = ['my_server1', 'my_server2']

you'll then be able to target those hosts.
With boto, if you just have a function that does

ec2_connection.get_only_instances(filters={'tag': <whatever>})

and returns a list of their DNS names, you'll be able to then set

env.hosts = [<list of dns names from ec2>]
Piece of cake!
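The glue between a boto instance listing and env.roledefs can be sketched in a few lines. This is a hypothetical helper (the function name, the "role" tag, and the dict shape of the instance records are my assumptions; with boto you'd read inst.tags and inst.public_dns_name instead):

```python
def roledefs_from_instances(instances):
    """Group instance DNS names by a 'role' tag into a roledefs dict."""
    roledefs = {}
    for inst in instances:
        role = inst.get("tags", {}).get("role")
        dns = inst.get("public_dns_name")
        if role and dns:
            roledefs.setdefault(role, []).append(dns)
    return roledefs

# Example records shaped like what a boto query might return:
instances = [
    {"public_dns_name": "ec2-1.compute.amazonaws.com", "tags": {"role": "nginx"}},
    {"public_dns_name": "ec2-2.compute.amazonaws.com", "tags": {"role": "nginx"}},
    {"public_dns_name": "ec2-3.compute.amazonaws.com", "tags": {"role": "db"}},
]
env_roledefs = roledefs_from_instances(instances)
```

Assigning the result to env.roledefs then lets you run fab -R nginx command exactly as the question hoped.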

No hosts found: Fabric

When I run my Python code, it asks for a host:
No hosts found. Please specify (single) host string for connection:
I have the following code:
from fabric.api import *
from fabric.contrib.console import confirm

env.hosts = ['ipaddress']

def remoteRun():
    print "ENV %s" % (env.hosts)
    out = run('uname -r')
    print "Output %s" % (out)

remoteRun();
I even tried running fab with the -H option and I get the same message. I'm using Ubuntu 10.10; any help is appreciated. By the way, I am a newbie in Python.
In order to get hosts to work in a script outside of the fab command-line tool and fabfile.py, you'll have to use execute():
from fabric.api import run
from fabric.tasks import execute

def mytask():
    run('uname -a')

results = execute(mytask)
If it's only one host, you can use env.host_string = 'somehost or ipaddress'.
You also don’t need the ; at the end of your remoteRun.
from __future__ import with_statement

from fabric.api import *
from fabric.contrib.console import confirm
from fabric.api import env, run

env.host_string = 'ipaddress'

def remoteRun():
    print "ENV %s" % (env.hosts)
    out = run('uname -r')
    print "Output %s" % (out)

remoteRun()
I am not exactly sure what remoteRun(); is supposed to do in your example.
Is it part of your fabfile or is this your terminal command to invoke the script?
The correct way would be a command like this in your shell:
fab remoteRun
Generally it's better to specify the concrete hosts your command is supposed to run on like this:
def localhost():
    env.hosts = ['127.0.0.1']

def remoteRun():
    print "ENV %s" % (env.hosts)
    out = run('uname -r')
    print "Output %s" % (out)
You can run it like this from a terminal (assuming you are in the directory that contains your fabfile):
fab localhost remoteRun
As an alternative you could specify the host with the -H parameter:
fab -H 127.0.0.1 remoteRun
If you have a list of hosts you want to invoke the command for, do it like this:
http://readthedocs.org/docs/fabric/latest/usage/execution.html
Adjusted to your example:
env.hosts = ['localhost', '127.0.0.1']

def remoteRun():
    print "ENV %s" % (env.hosts)
    out = run('uname -r')
    print "Output %s" % (out)
And called via: fab remoteRun
This way the remoteRun is performed on all hosts in env.hosts.
@Nerdatastic is right: for simple cases, don't use env.hosts, use env.host_string instead. e.g.

def setup_db_server():
    env.host_string = 'db01.yoursite.com'  # or the ip address
    run("mysqladmin ...")

and running $ fab setup_db_server will execute the script on the target server.
Nerdatastic is right: you need to set the env.host_string variable for Fabric to know which host string to use. I came across this problem trying to use a subclass of Task and call its run() method. In version 1.3, it seemed to ignore env.hosts except when using execute from fabric.tasks.
I have the same issue.
I think this is a bug, because everything worked before today.
I store my env settings in .fabricrc.
Now I get the same message as yours. I don't know why.
