I'm using fabric to run a command on one server against another server. Specifically, I'm running a SQL query through the psql command line.
The fabric run() function is throwing a SystemExit exception which I can catch.
If I go to the server and run the psql command directly I am told:
psql: could not connect to server: Connection timed out
Is the server running on host "xyz.example.com" (10.16.16.66) and accepting
TCP/IP connections on port 5432?
So I know that the command is not working, but what I want is to capture that text from psql so my code can be explicit about the problem.
I think the fabric code is fine, because if I change the psql command to run against a different database on the same server, I get no exception and the expected answer. So the problem is that the server I'm running psql on cannot communicate with one of the database servers.
Is it possible to get the output of the psql command through fabric after fabric throws the SystemExit exception?
For reference, here's the sample code:
from __future__ import with_statement
from fabric.api import local, settings, abort, run, cd, execute, env
from fabric.contrib.console import confirm
import sys
import os

def test():
    try:
        count = run('psql blah blah blah', timeout=60)
        print('count: {}'.format(count))
    except Exception, ex:
        print('====> Exception type: %s' % ex.__class__)
        print('====> Exception: %s' % ex)
    except SystemExit, ex:
        print('====> Exception type: %s' % ex.__class__)
        print('====> Exception: %s' % ex)

def go():
    print "Working"
    env.host_string = "jobs0.onshift.com"
    execute(test)
Take a look at the settings context manager in the fabric docs, and the succeeded and failed properties on the object returned by run.
from fabric.api import *
from fabric.context_managers import settings

def test():
    with settings(warn_only=True):
        res = run('psql blah blah blah', timeout=60)
        if res.succeeded:
            print('count: {}'.format(res))

def go():
    print "Working"
    execute(test, hosts=["jobs0.onshift.com"])
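The failed property is the complement of succeeded, and with warn_only the returned object still stringifies to the command's captured output, so you can surface the psql error text directly. A minimal sketch, assuming Fabric 1.x (where the result object also exposes return_code and stderr):

def test():
    with settings(warn_only=True):
        res = run('psql blah blah blah', timeout=60)
        if res.failed:
            # res itself stringifies to the command's captured output
            print('psql failed (exit %s): %s' % (res.return_code, res))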
Related
For monitoring purposes I'm trying to get the output of the following shell commands, but from a Python script:
mongo --port 27040
-> enters the mongodb shell
rs.status()
The result of the command is JSON that I want to access outside the mongo shell and write to a file. I can run other commands in Python using pymongo, like this:
import json, os

# load mongo library from a bundled wheel
current_dir = os.path.dirname(os.path.realpath(__file__))
os.sys.path.append(os.path.join(current_dir, 'pymongo-3.7.1-cp27-cp27m-manylinux1_x86_64.whl'))
from bson import json_util
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ConnectionFailure

# connection settings
port = 27040
hostname = "localhost"
# default database used by mongodb
database = "test"

try:
    # connect to the database
    client = MongoClient(hostname, int(port))
    db = client[database]  # select the database
    serverstats = db.command("serverStatus")
    serialized_serverstats = json.dumps(serverstats, default=json_util.default)
    print serialized_serverstats
except Exception as e:
    print("Unhandled Error is %s" % e)
This is equivalent to running db.serverStatus() in the mongo shell.
But how do I run rs.status() from inside a Python script?
You should do it like this:
db = client['admin']
db_stats = db.command({'replSetGetStatus': 1})
If you want to check what the underlying command of any shell helper is:
> rs.status
function () {
    return db._adminCommand("replSetGetStatus");
}
>
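To tie this back to writing the result to a file, here is a minimal sketch that reuses the client and json_util imports from the question (the output filename is my choice):

# run the shell's rs.status() equivalent and dump it to a file
admin_db = client['admin']
rs_status = admin_db.command({'replSetGetStatus': 1})
with open('rs_status.json', 'w') as f:
    f.write(json.dumps(rs_status, default=json_util.default))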
I'm using ParallelSSHClient to connect to multiple servers.
When I run the Python function directly, it works perfectly.
However, when I call the function from a test case in Robot Framework, I get the following error:
SSHException: Error reading SSH protocol banner('This operation would block forever', )
The Python function I have used is:
from pssh.pssh_client import ParallelSSHClient
from pssh.utils import load_private_key
from robot.libraries.BuiltIn import BuiltIn
def check101():
pkey = load_private_key('/root/test.pem')
hosts = ['2.2.2.2', '1.1.1.1']
client = ParallelSSHClient(hosts, pkey=pkey)
try:
output = client.run_command("<command>")
except (AuthenticationException):
print 'Error'
node=0
for host in output:
for line in output[host].stdout:
node=node+1
if (int(line)>0):
return node
break
return -1
Add the following at the very start of your script. parallel-ssh is built on gevent, and monkey-patching the standard library keeps its socket operations from blocking when the function is called from Robot Framework:
from gevent import monkey
monkey.patch_all()
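Order matters: the patch has to run before pssh (or anything else that creates sockets) is imported. A sketch of what the top of the module would look like:

# monkey-patch first, before any socket-using imports
from gevent import monkey
monkey.patch_all()

from pssh.pssh_client import ParallelSSHClient
from pssh.utils import load_private_key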
How can I check from Django whether the database connection is alive?
I thought I could read something from the database, but that looks like too much. Is there something like:
settings.DATABASES['default'].check_connection()
All you need to do is start an application; if the database is not connected, it will fail. Another way is to try the following in the shell:
from django.db import connections
from django.db.utils import OperationalError

db_conn = connections['default']
try:
    c = db_conn.cursor()
except OperationalError:
    connected = False
else:
    connected = True
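Wrapped up as a reusable helper (a sketch; the function name and alias parameter are my own):

from django.db import connections
from django.db.utils import OperationalError

def db_is_connected(alias='default'):
    """Return True if the given database alias accepts connections."""
    try:
        connections[alias].cursor()
    except OperationalError:
        return False
    return True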
Run the shell
python manage.py shell
Execute this script
from django.db import connection
print(connection.ensure_connection())
If it prints None, everything is okay; otherwise it will raise an error if something is wrong with your db connection
It's an old question, but it needs an updated answer:
python manage.py check --database default
If you're not using default, or if you want to test other databases listed in your settings, just name them.
It has been available since version 3.1.
Check the documentation.
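The --database flag can be repeated to cover several aliases at once; for instance (the alias other is just an illustration):
python manage.py check --database default --database other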
I use the following Django management command called wait_for_db:
import time
from django.db import connection
from django.db.utils import OperationalError
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    """Django command that waits for database to be available"""

    def handle(self, *args, **options):
        """Handle the command"""
        self.stdout.write('Waiting for database...')
        db_conn = None
        while not db_conn:
            try:
                connection.ensure_connection()
                db_conn = True
            except OperationalError:
                self.stdout.write('Database unavailable, waiting 1 second...')
                time.sleep(1)
        self.stdout.write(self.style.SUCCESS('Database available!'))
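Save this as <app>/management/commands/wait_for_db.py (with __init__.py files in both directories) so the name matches the invocation. A typical use is chaining it ahead of migrations, e.g. in a container entrypoint:
python manage.py wait_for_db && python manage.py migrate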
Assuming you needed this because of docker, BUT it is not limited to docker; at the end of the day this is Bash, and thus works everywhere *NIX.
You will first need to be using django-environ, since it will make this a whole lot easier.
The DATABASE_URL environment variable will be used inside your Django app, and here. Your settings would look like this:
import environ

env = environ.Env()
...
DATABASES = {
    'default': env.db('DATABASE_URL'),
    'other': env.db('DATABASE_OTHER_URL'),  # for illustration purposes
}
...
Your environment variables should look something like this:
# This works with ALL the databases django supports ie (mysql/mssql/sqlite/...)
DATABASE_URL=postgres://user:pass@name_of_box:5432/database_name
DATABASE_OTHER_URL=oracle://user:pass@/(description=(address=(host=name_of_box)(protocol=tcp)(port=1521))(connect_data=(SERVICE_NAME=EX)))
Inside your entrypoint.sh do something like this:
function database_ready() {
    # You need to pass a single argument called "environment_dsn"
    python << EOF
import sys
import environ
from django.db.utils import ConnectionHandler, OperationalError, DatabaseError

env = environ.Env()

try:
    ConnectionHandler(databases={'default': env.db('$1')})['default'].ensure_connection()
except (OperationalError, DatabaseError):
    sys.exit(-1)
sys.exit(0)
EOF
}
Then, let's say you want to wait for your main db (the postgres one in this case); you add this inside the same entrypoint.sh, under the database_ready function:
until database_ready DATABASE_URL; do
    >&2 echo "Main DB is unavailable - sleeping"
    sleep 1
done
This will only continue if postgres is up and running. What about Oracle? Same thing; under the code above, we add:
until database_ready DATABASE_OTHER_URL; do
    >&2 echo "Secondary DB is unavailable - sleeping"
    sleep 1
done
Doing it this way will give you a couple of advantages:
you don't need to worry about other dependencies such as binaries and the like.
you can switch databases and not have to worry about this breaking (the code is 100% database agnostic).
Create a file your_app_name/management/commands/waitdb.py and paste the code below.
import time

from django.core.management.base import BaseCommand
from django.db import connection
from django.db.utils import OperationalError
from django.utils.translation import ngettext

class Command(BaseCommand):
    help = 'Checks database connection'

    def add_arguments(self, parser):
        parser.add_argument(
            '--seconds',
            nargs='?',
            type=int,
            help='Number of seconds to wait before retrying',
            default=1,
        )
        parser.add_argument(
            '--retries',
            nargs='?',
            type=int,
            help='Number of retries before exiting',
            default=3,
        )

    def handle(self, *args, **options):
        wait, retries = options['seconds'], options['retries']
        current_retries = 0
        while current_retries < retries:
            current_retries += 1
            try:
                connection.ensure_connection()
                break
            except OperationalError:
                plural_time = ngettext('second', 'seconds', wait)
                self.stdout.write(
                    self.style.WARNING(
                        f'Database unavailable, retrying after {wait} {plural_time}!'
                    )
                )
                time.sleep(wait)
python manage.py waitdb --seconds 5 --retries 2
python manage.py waitdb  # defaults to 1 second & 3 retries
I had a more complicated case: I am using mongodb behind the djongo module, plus RDS mysql. So not only are there multiple databases, but djongo throws an SQLDecode error instead. I also had to execute and fetch to get this working:
from django.conf import settings

if settings.DEBUG:
    # Quick database check here
    from django.db import connections
    from django.db.utils import OperationalError

    dbs = settings.DATABASES.keys()
    for db in dbs:
        db_conn = connections[db]  # i.e. default
        try:
            c = db_conn.cursor()
            c.execute("""SELECT "non_existent_table"."id" FROM "non_existent_table" LIMIT 1""")
            c.fetchone()
            print("Database '{}' connection ok.".format(db))  # This case is for djongo decoding sql ok
        except OperationalError as e:
            if 'no such table' in str(e):
                print("Database '{}' connection ok.".format(db))  # This is ok, db is present
            else:
                raise  # Another type of op error
        except Exception:  # djongo sql decode error
            print("ERROR: Database {} looks to be down.".format(db))
            raise
I load this in my app __init__.py, as I want it to run on startup only once and only if DEBUG is enabled. Hope it helps!
It seems Javier's answer is no longer working. Here's one I put together to perform the task of checking database availability in a Docker entrypoint, assuming you have the psycopg2 library available (you're running a Django application with Postgres, for instance):
function database_ready() {
    python << EOF
import sys
import psycopg2

try:
    db = psycopg2.connect(host="$1", port="$2", dbname="$3", user="$4", password="$5")
except Exception:
    sys.exit(1)
sys.exit(0)
EOF
}

until database_ready $DATABASE_HOST $DATABASE_PORT $DATABASE_NAME $DATABASE_USER $DATABASE_PASSWORD; do
    >&2 echo "Database is unavailable at $DATABASE_HOST:$DATABASE_PORT/$DATABASE_NAME - sleeping..."
    sleep 1
done

echo "Database is ready - $DATABASE_HOST:$DATABASE_PORT/$DATABASE_NAME"
I am new to Flask and want to make sure the redis server is running and start it if it isn't. Here's what I have:
@app.before_first_request
def initialize():
    cmd = 'src/redis-cli ping'
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    out, err = p.communicate()
    # if out.startswith('Could not connect to Redis'):  # start redis here
    if err is not None:
        raise Exception(err)
However, I get an error "OSError: [Errno 2] No such file or directory"
Is there an easier way to check if the redis server is running?
Use the redis ping command:
import redis
from redis import ConnectionError
import logging

logging.basicConfig()
logger = logging.getLogger('redis')

rs = redis.Redis("localhost")
try:
    rs.ping()
except ConnectionError:
    logger.error("Redis isn't running. try `/etc/init.d/redis-server restart`")
    exit(0)
Sample Output:
ERROR:redis:Redis isn't running. try `/etc/init.d/redis-server restart`
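If you'd rather get a boolean you can branch on (for instance, to start the server when it isn't up, as the question asks), a small helper works. This is a sketch; the function name and the one-second connect timeout are my choices:

import redis
from redis import ConnectionError

def redis_is_running(host='localhost', port=6379):
    """Return True if a Redis server answers PING."""
    try:
        return redis.Redis(host=host, port=port, socket_connect_timeout=1).ping()
    except ConnectionError:
        return False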
I would suggest using some kind of supervision, like supervisord or monit; they are designed to check whether a process, host, file, and so on is doing its job, and if not, restart it.
For example, here is a monit config to check redis:
check host redis with address <your_redis_host>
    if failed icmp type echo count 3 with timeout 3 seconds then alert
    if failed port 6379 with timeout 15 seconds then alert
I currently have a working Python script that SSHs into a remote Linux machine and executes commands on that machine. I'm using paramiko to handle ssh connectivity. Here is the code in action, executing a hostname -s command:
import sys
import traceback

import paramiko

blade = '192.168.1.15'
username = 'root'
password = ''

# now, connect
try:
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.WarningPolicy())
    print '*** Connecting...'
    client.connect(blade, 22, username, password)

    # print hostname for verification
    stdin, stdout, stderr = client.exec_command('hostname --short')
    print stdout.readlines()
except Exception, e:
    print '*** Caught exception: %s: %s' % (e.__class__, e)
    traceback.print_exc()
    try:
        client.close()
    except:
        pass
    sys.exit(1)
This works fine, but what I'm actually trying to do is more complicated. What I would actually like to do is SSH into that same Linux machine, as I did above, but then create a temporary virtual machine on it, and execute a command on that virtual machine. Here is my (nonworking) attempt:
blade = '192.168.1.15'
username = 'root'
password = ''

# now, connect
try:
    # client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.WarningPolicy())
    print '*** Connecting...'
    client.connect(blade, 22, username, password)

    # create VM, log in, and print hostname for verification
    stdin, stdout, stderr = client.exec_command('sudo kvm -m 1024 -drive file=/var/lib/libvirt/images/oa4-vm$
    time.sleep(60)  # delay to allow VM to initialize
    stdin.write(username + '\n')  # log into VM
    stdin.write(password + '\n')  # log into VM
    stdin, stdout, stderr = client.exec_command('hostname --short')
    print stdout.readlines()
except Exception, e:
    print '*** Caught exception: %s: %s' % (e.__class__, e)
    traceback.print_exc()
    try:
        client.close()
    except:
        pass
    sys.exit(1)
When I run this, I get the following:
joe@computer:~$ python automata.py
*** Connecting...
/home/joe/.local/lib/python2.7/site-packages/paramiko/client.py:95: UserWarning: Unknown ssh-rsa host key for 192.168.1.15: 25f6a84613a635f6bcb5cceae2c2b435
  (key.get_name(), hostname, hexlify(key.get_fingerprint())))
*** Caught exception: <class 'socket.error'>: Socket is closed
Traceback (most recent call last):
  File "automata.py", line 32, in function1
    stdin.write(username + '\n')  # log into VM
  File "/home/joe/.local/lib/python2.7/site-packages/paramiko/file.py", line 314, in write
    self._write_all(data)
  File "/home/joe/.local/lib/python2.7/site-packages/paramiko/file.py", line 439, in _write_all
    count = self._write(data)
  File "/home/joe/.local/lib/python2.7/site-packages/paramiko/channel.py", line 1263, in _write
    self.channel.sendall(data)
  File "/home/joe/.local/lib/python2.7/site-packages/paramiko/channel.py", line 796, in sendall
    raise socket.error('Socket is closed')
error: Socket is closed
I'm not sure how to interpret this error -- "socket is closed" makes me think the SSH connection is terminating once I try to create the VM. Does anyone have any pointers?
Update
I'm attempting to use the pexpect wrapper and I'm having trouble getting it to interact with the username/password prompt. I'm testing the process by ssh'ing into a remote machine and running a test.py script which prompts me for a username, then saves the username in a text file. Here is my fab file:
env.hosts = ['hostname']
env.user = 'username'
env.password = 'password'

def vm_create():
    run("python test.py")
And the contents of test.py on the remote machine are:
#! /usr/bin/env python
uname = raw_input("Enter Username: ")
f = open('output.txt', 'w')
f.write(uname + "\n")
f.close()
So, I can execute "fab vm_create" on the local machine and it successfully establishes the SSH connection and prompts me for the username, as defined by test.py. However, if I execute a third python file on my local machine with the pexpect wrapper, like this:
import pexpect

child = pexpect.spawn('fab vm_create')
child.expect('Enter Username: ')
child.sendline('password')
Nothing seems to happen. I get no errors, and no output.txt is created on the remote machine. Am I using pexpect incorrectly?
As much as I love paramiko, this may be better suited to using Fabric.
Here's a sample fabfile.py:
from fabric.api import run
from fabric.api import sudo
from fabric.api import env

env.user = 'root'
env.password = ''
env.hosts = ['192.168.1.15']

def vm_up():
    sudo("kvm -m 1024 -drive file=/var/lib/libvirt/images/oa4-vm$...")
    run("hostname --short")
To then run this, use
$ fab vm_up
If you don't set the host and password in the fabfile itself (rightly so), then you can set these at the command line:
$ fab -H 192.168.1.15 -p PASSWORD vm_up
However, your kvm line is still expecting input. To send input (and wait for the expected prompts), write another script that uses pexpect to call fab:
import pexpect

child = pexpect.spawn('fab vm_up')
child.expect('username:')  # Put this in the format you're expecting
child.send('root')
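One gotcha, and a likely reason "nothing seems to happen" in the update above: the script exits right after sending input, which kills the spawned fab process before it finishes. Waiting for EOF avoids that; a minimal sketch:

child.expect(pexpect.EOF)  # wait for fab to finish instead of killing it on exit
print child.before         # everything fab printed before exiting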
Use fabric: http://docs.fabfile.org/en/1.8/
Fabric is a Python (2.5 or higher) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks
from fabric.api import run

def host_name():
    run('hostname -s')
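Then run it against the target host from the command line, using the same -H flag shown earlier:
fab -H 192.168.1.15 host_name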