Fabric Assign Hosts Using a List Variable - python

I know I can assign hosts with fabric by doing this:
env.hosts = ['host1', 'host2']
But can I do this?
myList = ['host1', 'host2']
env.hosts = myList
I am getting a list of 'public_dns_name's using Boto (from Amazon AWS) and then want to run commands on those servers. The server list can be dynamic so I need to be able to assign the hosts environment variable rather than statically. Can anyone suggest a solution?
myHosts = []
for i in myInstances:
    publicDnsAddress = i.public_dns_name
    myHosts.append(publicDnsAddress)
    print("public dns address: " + publicDnsAddress)
print("myHosts = " + str(myHosts))
env.hosts = myHosts
env.user = 'myUser'
run("/scripts/remote_script.py")
I get this error:
No hosts found. Please specify (single) host string for connection:
If the host names were bad, I would expect at least a connection error rather than a message saying it could find no hosts. Granted, I may be calling this thing wrong, but then again, that is why I am asking for help.

When dynamically setting hosts within the code, I've needed to use the with settings pattern:
user = 'root'
hosts = ['server1', 'server2']
for host in hosts:
    with settings(user=user, host_string=host):
        run(whatever_my_command_is)
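Putting the answer together with the Boto use case: build the host list first, then give each host its own settings context. Fabric itself may not be importable here, so the `settings` and `run` below are hypothetical stand-ins that only mimic the shape of fabric.api's, to make the control flow concrete:

```python
from contextlib import contextmanager

log = []  # records (host, command) pairs instead of opening SSH connections

@contextmanager
def settings(**overrides):
    # stand-in for fabric.api.settings: yields the per-host environment
    yield overrides

def run(command, env):
    # stand-in for fabric.api.run: would execute `command` on env["host_string"]
    log.append((env["host_string"], command))

# built dynamically in practice, e.g. [i.public_dns_name for i in myInstances]
my_hosts = ["ec2-a.example.com", "ec2-b.example.com"]

for host in my_hosts:
    with settings(user="myUser", host_string=host) as env:
        run("/scripts/remote_script.py", env)
```

With real Fabric 1.x the loop body is identical except that run() takes only the command; the active host comes from the settings context.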


Select query is working but insert query is not in Python with Couchbase library

I'm trying to do some remote operations with Couchbase Python library. I have written a small code piece to test if it is working as intended.
# imports added for completeness; exact import paths vary between SDK 3.x versions
import json
import pandas as pd
from couchbase.cluster import Cluster, ClusterOptions
from couchbase.auth import PasswordAuthenticator

server_name = 'xxxxxxxxxx'
bucket_name = 'vit_app_clob'
username = "Administrator"
password = "xxxxxx"
json_path = 'D:\\' + bucket_name + '_' + server_name + '.json'

cluster = Cluster('couchbase://' + server_name + ':8091', ClusterOptions(PasswordAuthenticator(username, password)))
bucket = cluster.bucket(bucket_name)

def insert(bucket):
    result = cluster.query("INSERT INTO `oracle_test` (KEY, VALUE) VALUES ('key1', { 'type' : 'hotel', 'name' : 'new hotel' })")

def select():
    result = cluster.query('SELECT * FROM ' + bucket_name)
    myDict = []
    for row in result:
        name = row[bucket_name]
        # print(name)
        myDict.append(name)
    df = pd.DataFrame(myDict)
    with open(json_path, "w") as f:
        json.dump(myDict, f, indent=1)

select()
# insert(bucket)
The select function works well: the query runs and there is no problem. But the other one, which I'm trying to use for inserting, doesn't work. It gives a timeout error.
couchbase.exceptions.TimeoutException: <Key='key3', RC=0xC9[LCB_ERR_TIMEOUT (201)]
What could be the reason? The query is working when I run it in the query section of couchbase interface.
Edit: There is this part in the error log: 'endpoint': 'xxxxxx:11210'. Do I need to have an open connection on this port? It seems I don't have any access right now, since telnet is not working. If this is the case, does that mean connecting via 8091 is not enough?
Edit 2: We opened a connection to the port, but the same problem persists.
You are correct in that port 8091 is generally not enough. Check out the documentation on Couchbase port numbers.
You may not need all those ports, depending on which services you're using, but for my day-to-day work, I usually open ports 8091-8097 and 11210. (There are counterpart ports for encrypted traffic).
I don't even know why, but adding .execute() to the end of the query solved the problem. The SELECT query was already working without it. Also, I needed to open a connection on port 11210.
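A likely explanation (my inference, not something quoted from the Couchbase docs): in SDK 3.x, cluster.query() returns a lazily evaluated result, and the statement only runs when the result is consumed. The for row in result loop consumed the SELECT, while the INSERT result was never consumed until .execute() was added. A pure-Python sketch of that lazy pattern, with a hypothetical FakeCluster standing in for the real SDK:

```python
class LazyResult:
    # stand-in for the SDK's streaming query result
    def __init__(self, statement, cluster):
        self._statement = statement
        self._cluster = cluster

    def __iter__(self):
        # consuming the result is what actually runs the statement
        return iter(self._cluster.run(self._statement))

    def execute(self):
        # explicitly run the statement and discard any rows
        list(self)
        return self

class FakeCluster:
    def __init__(self):
        self.executed = []

    def run(self, statement):
        self.executed.append(statement)
        return []  # an INSERT returns no rows

    def query(self, statement):
        return LazyResult(statement, self)  # nothing is sent yet

cluster = FakeCluster()
cluster.query("INSERT ...")            # never consumed: the server sees nothing
assert cluster.executed == []
cluster.query("INSERT ...").execute()  # consumed: the statement actually runs
assert cluster.executed == ["INSERT ..."]
```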

Check if a samba directory exists in Python 3

I have a samba directory smb://172.16.0.10/public_pictures/ and I would like to know if it is accessible.
I tried something like the following:
import urllib
if open("smb://172.16.0.10/public_pictures/"):
    print("accessible")
else:
    print("not accessible")
but obviously it does not work for me.
Using pysmb (docs):
from smb.SMBConnection import SMBConnection

remote_address = "172.16.0.10"
share_name = "public_pictures"

# username, password, my_name and remote_name must be filled in for your setup
conn = SMBConnection(username, password, my_name, remote_name)
conn.connect(remote_address)
accessible = share_name in [share.name for share in conn.listShares()]
One way of handling samba is to use pysmb. If so, then it goes something like the following:
# we need to provide the localhost name to samba
hostname = socket.gethostname()
local_host = (hostname.split('.')[0] if hostname
              else "SMB{:d}".format(os.getpid()))

# make a connection
cn = SMBConnection(
    <username>, <password>, local_host, <netbios_server_name>,
    domain=<domain>, use_ntlm_v2=<use_ntlm_v2>,
    is_direct_tcp=<is_direct_tcp>)

# connect
if not cn.connect(<remote_host>, <remote_port>):
    raise IOError

# working connection ... to check if a directory exists, ask for its attributes
attrs = cn.getAttributes(<shared_folder_name>, <path>, timeout=30)
Some notes:
in your example above, public_pictures is the shared folder, while path would be simply /
you'll need to know if you are using SMB on port 139 or 445 (or a custom port). If the latter you will usually want to pass is_direct_tcp=True (although some servers will still serve NetBIOS samba on 445)
if you expect not to need a username or password, then probably you are expecting to connect as username="guest" with an empty password.

How to specify pem file path when using gateway in Fabric

I have followed many questions related to this topic.
My scenario:
Local host -> Gateway -> Remote host
I am using the env.gateway variable to specify the gateway host.
Sample code:
env.user = "ec2-user"
env.key_filename = ["/home/ec2-user/.ssh/internal.pem", "/home/roshan.r/test.pem", "/home/ec2-user/.ssh/test2.pem"]
env.hosts = ['x.x.x.244', 'x.x.x.132']
env.gateway = 'x.x.x.189'

def getdate():
    content = run('date')
My problem is with the pem key paths.
/home/roshan.r/test.pem is located in the current directory and is used to log in to the gateway server.
The other two pem files are located on the gateway server.
When I run this program I get a file-not-found error.
Thanks for any help !!
I haven't had to do this yet, but what about having a function that fetches those pem files? Something like:
# 'x.x.x.189'
def get_pem():
    env.key_filename.append(get("/home/ec2-user/.ssh/internal.pem"))
    env.key_filename.append(get("/home/ec2-user/.ssh/test2.pem"))
Also, could you try something? I guess you got a file-not-found error because Fabric is looking for /home/ec2-user/.ssh/internal.pem on your computer; it has no way of knowing it's on a remote host. What if you try:
x.x.x.189:/home/ec2-user/.ssh/internal.pem
I just changed the path of the .pem files and it works. See my suggestion below:
Keep both the gateway and app-server .pem files on your local machine and try executing it. See my code below:
from fabric.api import *

env.user = "ubuntu"
env.key_filename = ["~/folder/sub_folder/gate_way_instance.pem", "~/folder/sub_folder/test_server_ssh-key.pem"]
env.hosts = ['XX.XX.XX.XXX']
env.gateway = 'XX.XX.XX.XXX'

def uptime():
    content = run('cat /proc/uptime')
    print content
    content = run('ls -la')
    print content

Get all IP addresses associated with a host

I'm trying to write some code that can get all the IP addresses associated with a given hostname.
This is what I have so far:
import socket

def getips(hostname):
    try:
        result = socket.getaddrinfo(hostname, None, socket.AF_INET,
                                    socket.SOCK_DGRAM, socket.IPPROTO_IP,
                                    socket.AI_CANONNAME)
        return [x[4][0] for x in result]
    except Exception, err:
        print "error: %s" % err
        return []

ips = getips('bbc.co.uk')
print ips
The problem is, sometimes it returns all 4 IPs associated with the specific host in this example, sometimes it returns just one. Is there any way to do this in Python so it consistently returns all the IPs associated with a host?
getaddrinfo() calls the resolver library on your host to look up IP addresses for any given host. There is no special magic in Python that can force it to get a different set of results than what the resolver returns.
For example, if you run strace on your Python script, you will notice that the resolver is invoked:
open("/lib/x86_64-linux-gnu/libresolv.so.2", O_RDONLY|O_CLOEXEC) = 3
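For reference, a Python 3 version of the same lookup that deduplicates whatever the resolver returns (the hostname is an example; with round-robin DNS the set can still differ between calls, for the reason given above):

```python
import socket

def getips(hostname):
    """Return all IPv4 addresses the local resolver reports for hostname."""
    try:
        # each entry is (family, type, proto, canonname, sockaddr)
        infos = socket.getaddrinfo(hostname, None, socket.AF_INET,
                                   socket.SOCK_DGRAM)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror:
        return []

print(getips("localhost"))
```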

Webdriver connected to selenium-server:4444 through a proxy

When I run (WinXP OS, Python 2.7)
wd = webdriver.Remote(command_executor='http://127.0.0.1:4444/hub', desired_capabilities=webdriver.DesiredCapabilities.INTERNETEXPLORER)
my system uses a proxy server by default, so the connection to selenium-server:4444 goes through that proxy. How can I make the connection go directly to selenium-server:4444?
It is a bit late, but I stumbled over the same problem today and solved it, so for the next one who searches, here is the solution:
The system proxy settings are fetched from the *_proxy Windows environment variables (http_proxy, https_proxy, ftp_proxy, ...), so if you have a company proxy defined there, it will be used.
Add a new environment variable in the Windows options or, if you use IntelliJ IDEA, in the Run configuration settings:
no_proxy=localhost,127.0.0.1
You will find the reason in python-2.7.6/Lib/urllib.py, around line 1387:
def proxy_bypass_environment(host):
    """Test if proxies should not be used for a particular host.

    Checks the environment for a variable named no_proxy, which should
    be a list of DNS suffixes separated by commas, or '*' for all hosts.
    """
    no_proxy = os.environ.get('no_proxy', '') or os.environ.get('NO_PROXY', '')
    # '*' is special case for always bypass
    if no_proxy == '*':
        return 1
    # strip port off host
    hostonly, port = splitport(host)
    # check if the host ends with any of the DNS suffixes
    no_proxy_list = [proxy.strip() for proxy in no_proxy.split(',')]
    for name in no_proxy_list:
        if name and (hostonly.endswith(name) or host.endswith(name)):
            return 1
    # otherwise, don't bypass
    return 0
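The Python 3 counterpart of this check lives in urllib.request as proxy_bypass_environment (an internal helper, so treat this as illustrative rather than a supported API). Setting no_proxy before the webdriver connects makes hosts on that list bypass the proxy:

```python
import os
import urllib.request

os.environ["no_proxy"] = "localhost,127.0.0.1"

# hosts listed in no_proxy bypass the proxy; everything else goes through it
bypass_local = urllib.request.proxy_bypass_environment("127.0.0.1")
bypass_other = urllib.request.proxy_bypass_environment("example.com")
print(bypass_local, bypass_other)
```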
