I'm trying to use Ansible's Python API to write a test API (in Python) that can run a playbook programmatically and add new nodes to a Hadoop cluster. As we know, at least one node in the cluster has to be the Namenode and JobTracker (MRv1). For simplicity, let's say the JobTracker and the Namenode are on the same node (namenode_ip).
Thus, in order to use Ansible to create a new node and have it register itself with the Namenode, I've created the following Python utility:
from ansible.playbook import PlayBook
from ansible.inventory import Inventory
from ansible.inventory import Group
from ansible.inventory import Host
from ansible import constants as C
from ansible import callbacks
from ansible import utils
import os
import logging
import config
def run_playbook(ipaddress, namenode_ip, playbook, keyfile):
    utils.VERBOSITY = 0
    playbook_cb = callbacks.PlaybookCallbacks(verbose=utils.VERBOSITY)
    stats = callbacks.AggregateStats()
    runner_cb = callbacks.PlaybookRunnerCallbacks(stats, verbose=utils.VERBOSITY)

    host = Host(name=ipaddress)
    group = Group(name="new-nodes")
    group.add_host(host)

    inventory = Inventory(host_list=[], vault_password="Hello123")
    inventory.add_group(group)

    key_file = keyfile
    playbook_path = os.path.join(config.ANSIBLE_PLAYBOOKS, playbook)

    pb = PlayBook(
        playbook=playbook_path,
        inventory=inventory,
        remote_user='deploy',
        callbacks=playbook_cb,
        runner_callbacks=runner_cb,
        stats=stats,
        private_key_file=key_file
    )

    results = pb.run()
    print results
However, the Ansible documentation for the Python API is very sparse (it gives no detail beyond a single simple example). What I need is the equivalent of:
ansible-playbook -i hadoop_config -e "namenode_ip=1.2.3.4"
That is, to pass the value of namenode_ip dynamically to the inventory using the Python API. How can I do that?
This should be as simple as adding one or more lines like the following to your script, after instantiating your group object and before running your playbook:
group.set_variable("foo", "BAR")
My code works, but I'm looking for a simpler solution. Maybe someone knows one?
I can't find in the GitLab API documentation how to import a .py file from a GitLab repository as a package.
In database_connect.py I have a function that connects to a database.
For example, I would like to do something like:
database_connect = project.files.get(file_path='FUNCTION/DATABASE/database_connect.py', ref='main')
import database_connect
My code ...
import gitlab
import base64

gl = gitlab.Gitlab('link', private_token='API')

projects = gl.projects.list()
for project in projects[3:4]:
    # print(project)  # prints all the metadata for the project
    print("Project: ", project.name)
    print("Gitlab URL: ", project.http_url_to_repo)

    database_connect = project.files.get(file_path='FUNCTION/DATABASE/database_connect.py', ref='main')
    database_connect_content = base64.b64decode(database_connect.content).decode("utf-8")
    database_connect_package = database_connect_content.replace('\\n', '\n')
    exec(database_connect_package)
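As an alternative to calling exec on the raw string, the decoded source can be wrapped in a proper module object using only the standard library, so its functions are namespaced like a normal import. A minimal sketch (the module name is taken from the question; the fetched content is assumed to be valid Python):

```python
import types

def module_from_source(name, source):
    """Build an importable module object from a source-code string,
    e.g. the decoded content fetched via project.files.get()."""
    module = types.ModuleType(name)
    # compile with a readable pseudo-filename so tracebacks are traceable
    exec(compile(source, "<{0}>".format(name), "exec"), module.__dict__)
    return module
```

With this, `database_connect = module_from_source("database_connect", database_connect_content)` lets you call `database_connect.your_function()` as usual; registering the result in `sys.modules` would additionally make later `import database_connect` statements find it.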
I want to be able to output a value of a group_var variable for a host in my ansible inventory from a Python script. I can do this with an Ansible ad hoc command:
$ ansible all -i inventory/myInventory.yml -l 3.xx.xx.175 -m debug -a 'var=vertype'
3.xx.xx.175 | SUCCESS => {
"vertype": "rt"
}
But if I try this in Python I can't even find the "vertype" variable.
from ansible.inventory.manager import InventoryManager
from ansible.parsing.dataloader import DataLoader
import glob
from ansible.vars.manager import VariableManager
...
dl = DataLoader()
inventory_files = glob.glob('inventory/*yml')
im = InventoryManager(loader=dl, sources=inventory_files)
vm = VariableManager(loader=dl, inventory=im)
...
my_host = im.get_host(public_ip_address)
print(vm.get_vars(host=my_host).get('vertype'))
But vm.get_vars(host=my_host) does not seem to contain the key "vertype", so I get "None" as output. How can I access the value "rt"?
I came across the Python class ResourcesMoveInfo for moving resources (Azure images) from one subscription to another with the Azure Python SDK.
But it's failing when I use it like below:
Pattern 1
reference from https://buildmedia.readthedocs.org/media/pdf/azure-sdk-for-python/v1.0.3/azure-sdk-for-python.pdf
Usage:
metadata = azure.mgmt.resource.resourcemanagement.ResourcesMoveInfo(resources=rid,target_resource_group='/subscriptions/{0}/resourceGroups/{1}'.format(self.prod_subscription_id,self.resource_group))
Error:
AttributeError: module 'azure.mgmt.resource' has no attribute 'resourcemanagement'
Pattern 2
reference from - https://learn.microsoft.com/en-us/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2019_07_01.models.resourcesmoveinfo?view=azure-python
Usage:
metadata = azure.mgmt.resource.resources.v2020_06_01.models.ResourcesMoveInfo(resources=rid,target_resource_group='/subscriptions/{0}/resourceGroups/{1}'.format(self.prod_subscription_id,self.resource_group))
Error:
AttributeError: module 'azure.mgmt.resource.resources' has no attribute 'v2020_06_01'
Any help on this requirement/issue would be helpful. Thanks!
Adding code snippet here:
import sys
import os
import time
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient
import azure.mgmt.resource
#from azure.mgmt.resource.resources.v2020_06_01.models import ResourcesMoveInfo
from azure.identity import ClientSecretCredential
from cred_wrapper import CredentialWrapper

class Move():
    def __init__(self):
        self.nonprod_subscription_id = "abc"
        self.prod_subscription_id = "def"
        self.credential = ClientSecretCredential(
            client_id=os.environ["ARM_CLIENT_ID"],
            client_secret=os.environ["ARM_CLIENT_SECRET"],
            tenant_id=os.environ["ARM_TENANT_ID"]
        )
        # resource client for nonprod
        self.sp = CredentialWrapper(self.credential)
        self.resource_client = ResourceManagementClient(self.sp, self.nonprod_subscription_id)
        self.resource_group = "imgs-rg"

    def getresourceids(self):
        resource_ids = list(resource.id for resource in self.resource_client.resources.list_by_resource_group("{0}".format(self.resource_group)) if resource.id.find("latest") >= 0)
        return resource_ids

    def getresourcenames(self):
        resource_names = list(resource.name for resource in self.resource_client.resources.list_by_resource_group("{0}".format(self.resource_group)) if resource.id.find("latest") >= 0)
        return resource_names

    def deleteoldimages(self, name):
        # resource client for prod
        rc = ResourceManagementClient(self.sp, self.prod_subscription_id)
        for resource in rc.resources.list_by_resource_group("{0}".format(self.resource_group)):
            if resource.name == name:
                # 2019-12-01 is the latest api_version supported for deleting the resource type "image"
                rc.resources.begin_delete_by_id(resource.id, "2020-06-01")
                print("deleted {0}".format(resource.name))

    def moveimages(self):
        rnames = self.getresourcenames()
        for rname in rnames:
            print(rname)
            #self.deleteoldimages(rname)
        time.sleep(10)
        rids = self.getresourceids()
        rid = list(rids[0:])
        print(rid)
        metadata = azure.mgmt.resource.resources.v2020_06_01.models.ResourcesMoveInfo(resources=rid, target_resource_group='/subscriptions/{0}/resourceGroups/{1}'.format(self.prod_subscription_id, self.resource_group))
        # moving resources in rid from the nonprod subscription to the prod subscription under the resource group avrc-imgs-rg
        if rid != []:
            print("moving {0}".format(rid))
            print(self.resource_client.resources.move_resources(source_resource_group_name="{0}".format(self.resource_group), parameters=metadata))
            #self.resource_client.resources.move_resources(source_resource_group_name="{0}".format(self.resource_group), resources=rid, target_resource_group='/subscriptions/{0}/resourceGroups/{1}'.format(self.prod_subscription_id, self.resource_group))
            #self.resource_client.resources.begin_move_resources(source_resource_group_name="{0}".format(self.resource_group), parameters=metadata)

if __name__ == "__main__":
    Move().moveimages()
From your inputs we can see that the code itself looks fine. From the error messages, the problem is with importing the modules.
When a package is installed, some submodules are included and some are not, depending on the package version. To understand which modules ship with a specific version, check the version releases in the official documentation.
In your case it looks like some resource modules are missing. If you look at the full error trace, there will be a path containing site-packages on your machine. Find that package and its submodule folders and compare them with the Resource module of the Azure SDK for Python, which you can find here.
In such situations we need to explicitly add the missing submodules to the installed package. In your case you can simply download the packaged code from the Git link above and merge it into your local folder.
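If tracking down the version-pinned model modules proves brittle, another option is to avoid importing ResourcesMoveInfo at all: the track-2 azure-mgmt-resource operations generally accept a plain dict in place of a model instance. A minimal sketch (an assumption to verify against your installed SDK version; the payload keys mirror the ResourcesMoveInfo model):

```python
def build_move_parameters(resource_ids, target_subscription_id, target_resource_group):
    """Build the ResourcesMoveInfo payload as a plain dict instead of
    importing a version-pinned model class from azure.mgmt.resource."""
    return {
        "resources": resource_ids,
        "target_resource_group": "/subscriptions/{0}/resourceGroups/{1}".format(
            target_subscription_id, target_resource_group),
    }
```

The dict can then be passed as `parameters` to `resources.begin_move_resources(...)` (or `move_resources(...)` on older clients).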
I have developed a few libraries for Robot Framework for my feature testing; for these libraries, all variables come from a variables.py file. Below is the code for variables.py:
#!/usr/bin/env python
import sys
import os
import optparse
import HostProperties
import xml.etree.ElementTree as ET
from robot.api import logger

testBed = 748
tree = ET.parse('/home/p6mishra/mybkp/testLibs/TestBedProperties.xml')

class raftGetTestBedProp(object):
    def GetTestBedNumber(self):
        _attributeDict = {}
        root = tree.getroot()
        for _tbProperties in root:
            for _tbNumber in _tbProperties:
                get_tb = _tbNumber.attrib
                if get_tb['name'] == str(testBed):
                    get_tb2 = _tbNumber.attrib
                    return root, get_tb2['name']

    def GetTestBedProperties(self, root, testBedNumber):
        propertyList = []
        for _tbProperties in root:
            get_tb = _tbProperties.attrib
            for _tbProperty in _tbProperties:
                get_tb1 = _tbProperty.attrib
                if get_tb1['name'] == str(testBedNumber):
                    for _tbPropertyVal in _tbProperty:
                        get_tb2 = _tbPropertyVal.attrib
                        if 'name' in get_tb2.keys():
                            propertyList.append(get_tb2['name'])
        return propertyList

    def GetIPNodeType(self, root, testBedNumber):
        for tbNumber1 in root.findall('tbproperties'):
            for tbNumber in tbNumber1:
                ipv4support = tbNumber.find('ipv4support').text
                ipv6support = tbNumber.find('ipv6support').text
                lbSetup = tbNumber.find('lbSetup').text
                name = tbNumber.get('name')
                if name == str(testBedNumber):
                    return ipv4support, ipv6support, lbSetup

obj1, obj2 = raftGetTestBedProp().GetTestBedNumber()
ipv4support, ipv6support, lbSetup = raftGetTestBedProp().GetIPNodeType(obj1, obj2)
AlltestBedProperties = raftGetTestBedProp().GetTestBedProperties(obj1, obj2)

HostPropertyDict = {}
for testBedProperty in AlltestBedProperties:
    try:
        val1 = getattr(HostProperties, testBedProperty)
        HostPropertyDict[testBedProperty] = val1
    except:
        logger.write("Error in the Configuration data. Please correct and then proceed with the testing", 'ERROR')

for indexVal in range(len(AlltestBedProperties)):
    temp = AlltestBedProperties[indexVal]
    globals()[temp] = HostPropertyDict[temp]
This variables.py file returns all the variables defined in the HostProperties.py file for a given testbed number.
If I import this library with from variables import * in my other libraries, it gives me the required variables.
The problem is that I have hardcoded 748, so it works for me, but I want to pass this testbed number from the pybot command line and make it available to my Robot test cases as well as to all the developed libraries.
Can you post the Robot Framework code you use to call these Python files? I think you could use pybot -v testBed:748 and pass it as a parameter to your class's __init__, but I can't be sure without seeing how you load your Python variables.
A slightly different way is to use environment variables:
#!/usr/bin/env python
import sys
import os
import optparse
import HostProperties
import xml.etree.ElementTree as ET
from robot.api import logger
testBed = os.environ['testbed']
tree = ET.parse('/home/p6mishra/mybkp/testLibs/TestBedProperties.xml')
Before starting pybot just define this environment parameter:
export testbed=748
pybot tests.txt
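A third option is Robot Framework's own mechanism for parameterised variable files: if variables.py defines a get_variables function, arguments given after the file name on the command line are passed to it, and the returned dict becomes the variable set. A minimal sketch (the XML parsing from the question would move inside get_variables; the default value is illustrative):

```python
# variables.py -- sketch of an argument-taking Robot Framework variable file
def get_variables(testbed="748"):
    """Robot Framework calls this with the arguments given after
    'variables.py:' on the command line and uses the returned dict
    as the variable set for the test run."""
    variables = {
        "testBed": int(testbed),
        # ...derive the remaining variables from TestBedProperties.xml here...
    }
    return variables
```

It would then be invoked as `pybot --variablefile variables.py:748 tests.txt`, making the number available both to the test cases and to any library importing the variable file's results.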
There are two versions of a storage system. Based on the version, I need to select the proper interface module to get the result.
The file structure looks like this:
lib/
__init__.py
provider.py
connection.py
device.py
storage/
__init__.py
interface_v1.py # interface for storage of version 1
interface_v2.py # interface for storage of version 2
main.py
main.py imports provider.py, which should import one of the interfaces listed in the storage subpackage, depending on the version of the storage.
main.py:
from lib.provider import Provider
from lib.connection import Connection
from lib.device import Device
connection = Connection.establish(Device)
storage_version = Device.get_storage_version()
message = Provider.get_data(connection)
provider.py should import an interface to the storage based on storage_version and implement some functions:
from storage import interface

class Provider(object):
    def __init__(self):
        self.storage = interface.Storage

    def get_data(self, connection):
        return self.storage.get_data()

    def clear_storage(self, connection):
        self.storage.clear_storage()
This example is not complete, but should be sufficient for the problem explanation.
Additional question: is it possible to use storage.__init__ to import just the subpackage?
How do I properly implement a factory in Python?
Assuming interface_v1 and interface_v2 both implement a class StorageClass, I guess something like this:
import storage.interface_v1
import storage.interface_v2

class Provider(object):
    def __init__(self, version):
        if version == 1:
            self.storage = storage.interface_v1.StorageClass
        else:
            self.storage = storage.interface_v2.StorageClass
would be the simplest solution, but https://docs.python.org/2/library/functions.html#import also provides a way to import a module based on its name.
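The version dispatch can also be done without importing both modules up front, resolving the module name at runtime with importlib. A sketch assuming the layout from the question (package lib.storage, modules interface_v1/interface_v2, each exposing a StorageClass):

```python
import importlib

def load_storage_class(version, package="lib.storage"):
    """Import interface_v<version> from the storage subpackage
    and return its StorageClass."""
    module = importlib.import_module("{0}.interface_v{1}".format(package, version))
    return module.StorageClass
```

Provider.__init__ could then simply do `self.storage = load_storage_class(storage_version)`, and a new interface_v3.py would be picked up without touching provider.py.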