How to wait for a shell script reboot in Fabric 2 - python

I'm using Fabric 2 and I'm trying to run a shell script on a number of hosts sequentially. The script configures a few settings and reboots the host. When I run my task, however, it ends after the script has run on the first host (I'm guessing because the SSH connection terminates due to the reboot). I tried looking into setting 'warn_only' to True, but I don't see where to set this value in Fabric 2.
Adding:
with settings(warn_only=True):
throws a "NameError: global name 'settings' is not defined" error.
Is there a correct format to warn_only? If not possible yet in Fabric 2, is there a way to continue running my task regardless of this reboot?
My script:
from fabric import *
import time
import csv

@task
def test(ctx):
    hosts = ['1.1.1.1', '2.2.2.2']
    for host in hosts:
        c = Connection(host=host, user="user", connect_kwargs={"password": "password"})
        c.run("./shell_script.sh")
        configured = False
        while not configured:
            result = c.run("cat /etc/hostname")
            # run() returns a Result object, so compare its stdout rather than the object
            if result.stdout.strip() != "default":
                configured = True
            time.sleep(10)

So a workaround has been to run the task with the -w flag, which enables warn_only and gives me the desired functionality. It would be preferable to set this property in the code, though.
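For example, with the task above saved in fabfile.py:

fab -w test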

It looks like you can use the config argument to the Connection class instead of the with settings() construct, and warn_only has been renamed to warn:
with Connection(host=host, user="user", connect_kwargs={"password": "password"}) as c:
    c.run("./shell_script.sh", warn=True)
More generally, you can find the upgrade documentation at https://www.fabfile.org/upgrading.html#run
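If you would rather make warn the default for every run() call instead of passing it each time, a minimal sketch (assuming Fabric 2's Config overrides mechanism) looks like:

from fabric import Config, Connection

# Assumption: warn=True can be set as the default for all run() calls
# via Config overrides, so a dropped SSH session warns instead of aborting.
config = Config(overrides={"run": {"warn": True}})
c = Connection(host="1.1.1.1", user="user",
               connect_kwargs={"password": "password"},
               config=config)
c.run("./shell_script.sh")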


Why does a python-on-whales docker mysql command not work, but issuing the command in the shell does

I am using python-on-whales to run a mysql command in a mysql docker container.
I use docker.execute to call the command inside the container from a Python script, and it returns an error:
import mysql.connector
import os
from mysql.connector.errors import DatabaseError
from python_on_whales import docker

envs = {
    "MYSQL_ROOT_PASSWORD": "TempPassword",
    "MYSQL_USER": "root",
}

# Start the mysql container
docker.run("mysql/mysql-server", name="test_mysql",
           envs=envs,
           publish=[(3306, 3306)],
           detach=True)

# Copy scripts to the container
docker.copy("./permissions.mysql", "test_mysql:/etc/")
docker.copy("./permissions.sh", "test_mysql:/etc/")

# Try calling permissions.sh in the container
docker.execute("test_mysql", ["/etc/permissions.sh"])
This produced the following error:
result = run(full_cmd, tty=tty)
File "/usr/local/lib/python3.9/site-packages/python_on_whales/utils.py", line 150, in run
raise DockerException(
python_on_whales.exceptions.DockerException: The docker command executed was `/usr/local/bin/docker exec test_mysql /etc/permissions.sh`.
It returned with code 1
The content of stdout is ''
The content of stderr is 'mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
In the error python-on-whales reports the command that it called:
/usr/local/bin/docker exec test_mysql /etc/permissions.sh
But if I issue the exact command reported back by python-on-whales directly in a bash shell, rather than going through python-on-whales, it executes successfully.
Can anyone explain why?
Realised the issue: python_on_whales doesn't automatically wait for the MySQL server inside the container to be up before trying to run the script, which was the root of my problem. I no longer need to run the script at all, since what it did (allow me to access the server from outside the container) can be handled with the right environment variable in the container set-up:
# Set up the test server and database:
envs = {
    "MYSQL_ROOT_PASSWORD": "TempPassword",
    "MYSQL_ROOT_HOST": "%",
    "MYSQL_USER": "root",
}
But python_on_whales not waiting caused a few other issues in my code, so for anyone it may help: you can poll until the connection succeeds, like so:
import time
import mysql.connector

not_connected = True
while not_connected:
    try:
        connection = mysql.connector.connect(**config)
        if connection.is_connected():
            not_connected = False
    except mysql.connector.Error:
        # The server isn't accepting connections yet; wait and retry
        time.sleep(1)
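Note that config is not shown in the answer; a plausible (hypothetical) example, assuming the container published port 3306 on localhost with the environment variables from earlier, would be:

config = {
    "host": "127.0.0.1",
    "port": 3306,
    "user": "root",
    "password": "TempPassword",  # matches MYSQL_ROOT_PASSWORD above
}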

fabric 2 traffic generation with non-blocking commands

I need to run some tests with a traffic generator that has different client and server commands. I would like to roll this into a fabric2 script which executes the traffic generation commands while cd'd into /root.
I have public-key authentication on the iperf machines. How can I run this traffic generation test under fabric2?
This was a little interesting to get running because the fabric2 docs don't include much information about run() arguments; you need to look at the invoke Runner.run() documentation to see all the fabric run() keywords.
The key to making iperf work in this case was setting pty=True and asynchronous=True when running the iperf server command. If the iperf server is not run asynchronously, it blocks execution of the iperf client command.
# Save this script as run_iperf.py and run with "python run_iperf.py"
from getpass import getuser
import os
#from fabric import Config, SerialGroup, ThreadingGroup, exceptions, runners
#from fabric.exceptions import GroupException
from fabric import Connection

server_vm = "10.1.0.1"
client_vm = "10.2.0.1"

# This matters because my user .ssh/id_rsa.pub is authorized on the remote systems
assert getuser() == "mpenning"

hosts = list()
conn1 = Connection(host=client_vm, user="root",
                   connect_kwargs={"key_filename": os.path.expanduser("~/.ssh/id_rsa")})
conn2 = Connection(host=server_vm, user="root",
                   connect_kwargs={"key_filename": os.path.expanduser("~/.ssh/id_rsa")})
hosts.append(conn1)
hosts.append(conn2)

iperf_udp_client_cmd = "nice -19 iperf3 --plus-more-client-commands"
iperf_udp_server_cmd = "nice -19 iperf3 --plus-more-server-commands"

# ThreadingGroup is optional for this use case, but the iperf commands
# definitely require pty and asynchronous (server-side)...
# ThreadingGroup() is required for concurrent fabric commands.
#
# Uncomment below to use ThreadingGroup()...
# t_hosts = ThreadingGroup.from_connections(hosts)
#
# also ref invoke Runner.run() for more run() args:
# -> https://github.com/pyinvoke/invoke/blob/master/invoke/runners.py
with conn2.cd("/root"):
    conn2.run(iperf_udp_server_cmd, pty=True, asynchronous=True, disown=False, echo=True)
with conn1.cd("/root"):
    conn1.run("sleep 1;%s" % iperf_udp_client_cmd, pty=True, asynchronous=False, echo=True)
This script was loosely based on this answer:
https://stackoverflow.com/a/53763786/667301
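As a side note, with asynchronous=True the invoke runner returns a Promise rather than a finished Result; if you need the server-side output, a sketch based on the invoke runners source linked above (not verified against every version) is:

promise = conn2.run(iperf_udp_server_cmd, pty=True, asynchronous=True, echo=True)
# ... run the client command ...
server_result = promise.join()  # blocks until the server command exits
                                # (iperf3 may need a flag such as -1 to exit on its own)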

Fabric not using the correct key

In my fabfile, I have set env.use_ssh_config to True. Whenever I run the fabfile, it gets the correct hostname and user from the SSH config, but not the correct key. It goes through my keys (all stored in ~/.ssh/) at random, requiring me to enter the passphrase for all of them, until it gets to the correct key.
It's only Fabric that gives me this problem; running scp as a local command in the fabfile uses the correct key.
Entries in my ssh config look like this:
Host example
    HostName example.com
    User elssar
    IdentityFile ~/.ssh/id_example
    PreferredAuthentications publickey
I'm using Fabric 1.10.1, Paramiko 1.14.1, Python 2.7.3 and Ubuntu 12.04.
Edit - There is a related open issue in the fabric repository: https://github.com/fabric/fabric/issues/1282
Edit - The basic structure of my fabfile, and how I run it:
from fabric.api import env, run

def do_something():
    run("echo test")

def server(host):
    env.hosts = [host]

# command
fab server:hostname do_something
I tried to check on my setup; here is what I did to debug:
>>> from fabric.network import key_filenames
>>> key_filenames()
[]
>>> from fabric.state import env
>>> env.use_ssh_config = True
>>> env.host_string = 'salt-states'
>>> key_filenames()
['/Users/damien/.ssh/salt.rsa.priv']
Update: you could instrument your task in your fabfile like this:
from fabric.api import env, run
from fabric.network import key_filenames

def do_something_debug():
    env.use_ssh_config = True
    print key_filenames()
    run("echo test")

def server(host):
    env.hosts = [host]
Then run the command:
fab server:hostname do_something_debug
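If the SSH-config lookup keeps misbehaving, a workaround (assuming the key path from the question) is to point Fabric 1 at the key explicitly:

from fabric.api import env

env.use_ssh_config = True
env.key_filename = "~/.ssh/id_example"  # force this key instead of trying every key in ~/.ssh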

No hosts found: Fabric

When I run my Python code, it asks for a host:
No hosts found. Please specify (single) host string for connection:
I have the following code:
from fabric.api import *
from fabric.contrib.console import confirm

env.hosts = ['ipaddress']

def remoteRun():
    print "ENV %s" % (env.hosts)
    out = run('uname -r')
    print "Output %s" % (out)

remoteRun();
I even tried running fab with the -H option and I get the same message. I'm using Ubuntu 10.10; any help is appreciated. By the way, I am a newbie in Python.
In order to get hosts to work in a script outside of the fab command-line tool and fabfile.py, you'll have to use execute():
from fabric.api import run
from fabric.tasks import execute

def mytask():
    run('uname -a')

results = execute(mytask)
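If you would rather not touch env.hosts at all, execute() also accepts a hosts keyword (using the question's placeholder address):

results = execute(mytask, hosts=['ipaddress'])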
If it's only one host, you can use env.host_string = 'somehost or ipaddress'.
You also don’t need the ; at the end of your remoteRun.
from __future__ import with_statement
from fabric.api import *
from fabric.contrib.console import confirm
from fabric.api import env, run

env.host_string = 'ipaddress'

def remoteRun():
    print "ENV %s" % (env.hosts)
    out = run('uname -r')
    print "Output %s" % (out)

remoteRun()
I am not exactly sure what remoteRun(); is supposed to do in your example. Is it part of your fabfile, or is it your terminal command to invoke the script? The correct way would be a command like this in your shell:
fab remoteRun
Generally it's better to specify the concrete hosts your command is supposed to run on, like this:
def localhost():
    env.hosts = ['127.0.0.1']

def remoteRun():
    print "ENV %s" % (env.hosts)
    out = run('uname -r')
    print "Output %s" % (out)
You can run it like this from a terminal (assuming you are in the directory that contains your fabfile):
fab localhost remoteRun
As an alternative you could specify the host with the -H parameter:
fab -H 127.0.0.1 remoteRun
If you have a list of hosts you want to invoke the command for, do it like this:
http://readthedocs.org/docs/fabric/latest/usage/execution.html
Adjusted to your example:
env.hosts = ['localhost', '127.0.0.1']

def remoteRun():
    print "ENV %s" % (env.hosts)
    out = run('uname -r')
    print "Output %s" % (out)
And call it via: fab remoteRun
This way remoteRun is performed on all hosts in env.hosts.
@Nerdatastic is right: for simple cases, don't use env.hosts; use env.host_string instead, e.g.
def setup_db_server():
    env.host_string = 'db01.yoursite.com'  # or the ip address
    run("mysqladmin ...")
and running $ fab setup_db_server will execute the script on the target server.
Nerdatastic is right: you need to set the env.host_string variable for Fabric to know what host string to use. I came across this problem trying to use a subclass of Task and calling its run() method. It seemed to ignore env.hosts except when using execute from fabric.tasks in version 1.3.
I have the same issue. I think this is a bug, because everything worked before today. I store my env settings in .fabricrc. Now I get the same message as you, and I don't know why.

How to make Fabric ignore offline hosts in the env.hosts list?

This is related to my previous question, but a different one.
I have the following fabfile:
from fabric.api import *

host1 = '192.168.200.181'
offline_host2 = '192.168.200.199'
host3 = '192.168.200.183'

env.hosts = [host1, offline_host2, host3]
env.warn_only = True

def df_h():
    with settings(warn_only=True):
        run("df -h | grep sda3")
And the output is:
[192.168.200.199] run: df -h | grep sda3
Fatal error: Low level socket error connecting to host 192.168.200.199: No route to host
Aborting.
After the execution hits the offline server, it aborts immediately, regardless of the other servers in the env.hosts list.
I have used the env setting "warn_only=True", but maybe I'm using it improperly.
How can I modify this behavior so that it only prints the error and continues executing?
As of version 1.4, Fabric has a --skip-bad-hosts option that can be set from the command line, or by setting the variable in your fabfile:
env.skip_bad_hosts = True
Documentation for the option is here:
http://docs.fabfile.org/en/latest/usage/fab.html#cmdoption--skip-bad-hosts
Don't forget to explicitly set the timeout value also.
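Put together in a fabfile, that might look like this (the timeout value is illustrative):

from fabric.api import env

env.skip_bad_hosts = True  # skip unreachable hosts instead of aborting
env.timeout = 5            # seconds to wait for a connection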
According to the Fabric documentation on warn_only, env.warn_only "specifies whether or not to warn, instead of abort, when run/sudo/local encounter error conditions".
This will not help in the case of a server being down, since the failure occurs during the SSH attempt, before run/sudo/local ever executes.
One solution would be to create a function to check if each server is up prior to executing your tasks. Below is the code that I used.
from __future__ import print_function
from fabric.api import run, sudo, local, env
import paramiko
import socket

host1 = '192.168.200.181'
offline_host2 = '192.168.200.199'
host3 = '192.168.200.183'

env.hosts = [host1, offline_host2, host3]

def df_h():
    if _is_host_up(env.host, int(env.port)) is True:
        run("df -h | grep sda1")

def _is_host_up(host, port):
    # Temporarily shorten the socket timeout so dead hosts fail fast
    original_timeout = socket.getdefaulttimeout()
    new_timeout = 3
    socket.setdefaulttimeout(new_timeout)
    host_status = False
    try:
        transport = paramiko.Transport((host, port))
        host_status = True
    except:
        print('***Warning*** Host {host} on port {port} is down.'.format(
            host=host, port=port))
    socket.setdefaulttimeout(original_timeout)
    return host_status
You're not using it improperly. You can even just provide --warn-only=true on the command line. It's the documented method suggested by the development team.
Based on Matthew's answer, I came up with a decorator that accomplishes just that:
from __future__ import with_statement
from paramiko import Transport
from socket import getdefaulttimeout, setdefaulttimeout
from fabric.api import run, cd, env, roles

roledefs = {
    'greece': [
        'alpha',
        'beta'
    ],
    'arabia': [
        'kha',
        'saad'
    ]
}
env.roledefs = roledefs

def if_host_offline_ignore(fn):
    def wrapped():
        original_timeout = getdefaulttimeout()
        setdefaulttimeout(3)
        try:
            Transport((env.host, int(env.port)))
            return fn()
        except:
            print "The following host appears to be offline: " + env.host
        finally:
            # Restore the timeout whether or not the host was reachable
            setdefaulttimeout(original_timeout)
    return wrapped

@roles('greece')
@if_host_offline_ignore
def hello_greece():
    with cd("/tmp"):
        run("touch hello_greece")

@roles('arabia')
@if_host_offline_ignore
def hello_arabia():
    with cd("/tmp"):
        run("touch hello_arabia")
It is especially useful when you have multiple hosts and roles.
