I have written two scripts to get some details about EC2 instances. The reason I wrote two is that I am not able to get the 'ComputerName' from the EC2 describe_instances call, so I created a separate script that uses the boto3 SSM client to get the 'ComputerName'. Now I am trying to combine both scripts into a single one that writes the output to a single CSV with separate columns and rows. Could someone help me with the code below to get a single CSV output? Please also find the sample output.
import boto3
import csv

profiles = ['Dev_Databases','Dev_App','Prod_Database','Prod_App']

########################EC2-Details################################
csv_ob=open("EC2-Inventory.csv","w" ,newline='')
csv_w=csv.writer(csv_ob)
csv_w.writerow(["S_NO","profile","Instance_Id",'Instance_Type','Platform','State','LaunchTime','Privat_Ip'])
cnt=1
for ec2 in profiles:
    aws_mag_con=boto3.session.Session(profile_name=ec2)
    ec2_con_re=aws_mag_con.resource(service_name="ec2",region_name="ap-southeast-1")
    for each in ec2_con_re.instances.all():
        print(cnt,ec2,each.instance_id,each.instance_type,each.platform,each.state,each.launch_time.strftime("%Y-%m-%d"),each.private_ip_address)
        csv_w.writerow([cnt,ec2,each.instance_id,each.instance_type,each.platform,each.state,each.launch_time.strftime("%Y-%m-%d"),each.private_ip_address])
        cnt+=1
csv_ob.close()

#######################HostName-Details###########################
csv_ob1=open("Hostname-Inventory.csv","w" ,newline='')
csv_w1=csv.writer(csv_ob1)
csv_w1.writerow(["S_NO",'Profile','InstanceId','ComputerName','PlatformName'])
cnt1=1
for ssm in profiles:
    session = boto3.Session(profile_name=ssm)
    ssm_client=session.client('ssm', region_name='ap-southeast-1')
    paginator = ssm_client.get_paginator('describe_instance_information')
    response_iterator = paginator.paginate(Filters=[{'Key': 'PingStatus','Values': ['Online']}])
    for item in response_iterator:
        for instance in item['InstanceInformationList']:
            if instance.get('PingStatus') == 'Online':
                InstanceId = instance.get('InstanceId')
                ComputerName = instance.get('ComputerName')#.replace(".WORKGROUP", "")
                PlatformName = instance.get('PlatformName')
                print(InstanceId,ComputerName,PlatformName)
                csv_w1.writerow([cnt1,ssm,InstanceId,ComputerName,PlatformName])
                cnt1+=1
csv_ob1.close()
Sample Output Below:
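One way to merge the two loops into a single CSV, a sketch assuming the same profiles and the ap-southeast-1 region used above (the SSM details are collected first per profile, then joined onto each EC2 row by instance id; untested):

import boto3
import csv

profiles = ['Dev_Databases', 'Dev_App', 'Prod_Database', 'Prod_App']
region = "ap-southeast-1"

with open("EC2-Inventory.csv", "w", newline='') as csv_ob:
    csv_w = csv.writer(csv_ob)
    csv_w.writerow(["S_NO", "Profile", "Instance_Id", "Instance_Type", "Platform",
                    "State", "LaunchTime", "Private_Ip", "ComputerName", "PlatformName"])
    cnt = 1
    for profile in profiles:
        session = boto3.session.Session(profile_name=profile)
        # Build a lookup of SSM details keyed by instance id
        ssm_client = session.client('ssm', region_name=region)
        ssm_info = {}
        paginator = ssm_client.get_paginator('describe_instance_information')
        for page in paginator.paginate(Filters=[{'Key': 'PingStatus', 'Values': ['Online']}]):
            for inst in page['InstanceInformationList']:
                ssm_info[inst['InstanceId']] = (inst.get('ComputerName'), inst.get('PlatformName'))
        # Walk the EC2 instances and merge in the SSM columns
        ec2_con_re = session.resource(service_name="ec2", region_name=region)
        for each in ec2_con_re.instances.all():
            computer_name, platform_name = ssm_info.get(each.instance_id, (None, None))
            csv_w.writerow([cnt, profile, each.instance_id, each.instance_type,
                            each.platform, each.state['Name'],  # state is a dict; keep only the name
                            each.launch_time.strftime("%Y-%m-%d"),
                            each.private_ip_address, computer_name, platform_name])
            cnt += 1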
I am trying to get all resources and providers from an Azure subscription by using the Python SDK.
Here is my code, which does the following:
1. get all resources by "resource group"
2. extract the id of each resource within the "resource group"
3. then call details about a particular resource by its id
The problem is that each call from point 3 requires a correct "API version", and it differs from object to object. So obviously my code keeps failing when it tries to find some common API version that fits everything.
Is there a way to retrieve a suitable API version per resource in a resource group (similarly to retrieving its id, name, ...)?
# Import specific methods and models from other libraries
from azure.mgmt.resource import SubscriptionClient
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient

credential = AzureCliCredential()
client = ResourceManagementClient(credential, "<subscription_id>")

rg = [i for i in client.resource_groups.list()]

# Retrieve the list of resources in each resource group.
# The expand argument includes additional properties in the output.
rg_resources = {}
for i in range(0, len(rg)):
    rg_resources[rg[i].as_dict()["name"]] = client.resources.list_by_resource_group(
        rg[i].as_dict()["name"],
        expand="properties,created_time,changed_time")

data = {}
for i in rg_resources.keys():
    details = []
    for _data in iter(rg_resources[i]):
        details.append(client.resources.get_by_id(vars(_data)['id'], 'latest'))
    data[i] = details

print(data)
error:
azure.core.exceptions.HttpResponseError: (NoRegisteredProviderFound) No registered resource provider found for location 'westeurope' and API version 'latest' for type 'workspaces'. The supported api-versions are '2015-03-20, 2015-11-01-preview, 2017-01-01-preview, 2017-03-03-preview, 2017-03-15-preview, 2017-04-26-preview, 2020-03-01-preview, 2020-08-01, 2020-10-01, 2021-06-01, 2021-03-01-privatepreview'. The supported locations are 'eastus, westeurope, southeastasia, australiasoutheast, westcentralus, japaneast, uksouth, centralindia, canadacentral, westus2, australiacentral, australiaeast, francecentral, koreacentral, northeurope, centralus, eastasia, eastus2, southcentralus, northcentralus, westus, ukwest, southafricanorth, brazilsouth, switzerlandnorth, switzerlandwest, germanywestcentral, australiacentral2, uaecentral, uaenorth, japanwest, brazilsoutheast, norwayeast, norwaywest, francesouth, southindia, jioindiawest, canadaeast, westus3
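As for the API-version question itself, one common approach (a sketch, not taken from the answer below) is to look the version up from the resource provider with client.providers.get(); the hypothetical helper below assumes the ResourceManagementClient created above and handles top-level resource types only:

# Hypothetical helper: pick an API version for a resource id via the Providers API
def resolve_api_version(client, resource_id):
    # id looks like: /subscriptions/.../providers/<namespace>/<type>/<name>
    provider_part = resource_id.split('/providers/')[-1].split('/')
    namespace, res_type = provider_part[0], provider_part[1]
    provider = client.providers.get(namespace)
    for rt in provider.resource_types:
        if rt.resource_type.lower() == res_type.lower():
            # Prefer a stable (non-preview) version when one is available
            stable = [v for v in rt.api_versions if 'preview' not in v.lower()]
            return (stable or rt.api_versions)[0]
    return None

# e.g. replace the 'latest' placeholder in the loop above with:
# client.resources.get_by_id(vars(_data)['id'], resolve_api_version(client, vars(_data)['id']))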
What information exactly do you want to retrieve from the resources?
In most cases, I would recommend using the Graph API to query over all resources. This is very powerful, as you can query the whole platform using a simple query language: Kusto Query Language (KQL).
You can try the queries directly in the Azure Resource Graph Explorer service in the Portal.
A query that summarizes all types of resources would be:
resources
| project resourceGroup, type
| summarize count() by type, resourceGroup
| order by count_
A simple Python code block can be seen in the linked documentation above.
The sample below uses DefaultAzureCredential for authentication and lists, in detail, the first resource that sits in a resource group whose name starts with "rg".
# Import Azure Resource Graph library
import azure.mgmt.resourcegraph as arg

# Import specific methods and models from other libraries
from azure.mgmt.resource import SubscriptionClient
from azure.identity import DefaultAzureCredential

# Wrap all the work in a function
def getresources( strQuery ):
    # Get your credentials from the environment (CLI, MSI, ..)
    credential = DefaultAzureCredential()
    subsClient = SubscriptionClient(credential)
    subsRaw = []
    for sub in subsClient.subscriptions.list():
        subsRaw.append(sub.as_dict())
    subsList = []
    for sub in subsRaw:
        subsList.append(sub.get('subscription_id'))

    # Create Azure Resource Graph client and set options
    argClient = arg.ResourceGraphClient(credential)
    argQueryOptions = arg.models.QueryRequestOptions(result_format="objectArray")

    # Create query
    argQuery = arg.models.QueryRequest(subscriptions=subsList, query=strQuery, options=argQueryOptions)

    # Run query
    argResults = argClient.resources(argQuery)

    # Show Python object
    print(argResults)

getresources("Resources | where resourceGroup startswith 'rg' | limit 1")
Is there a way to get all the resources in an AWS account through Python code using boto3? I went through the documentation but didn't find any list function that might solve this.
Try the code below, but as a prerequisite, enable AWS Config for this region before running it.
import boto3

session = boto3.Session(profile_name='your-profilename')
client = session.client('config')
resources = ["AWS::EC2::CustomerGateway", "AWS::EC2::EIP", "AWS::EC2::Host", "AWS::EC2::Instance", "AWS::EC2::InternetGateway", "AWS::EC2::NetworkAcl", "AWS::EC2::NetworkInterface", "AWS::EC2::RouteTable", "AWS::EC2::SecurityGroup", "AWS::EC2::Subnet", "AWS::CloudTrail::Trail", "AWS::EC2::Volume", "AWS::EC2::VPC", "AWS::EC2::VPNConnection", "AWS::EC2::VPNGateway", "AWS::EC2::RegisteredHAInstance", "AWS::EC2::NatGateway", "AWS::EC2::EgressOnlyInternetGateway", "AWS::EC2::VPCEndpoint", "AWS::EC2::VPCEndpointService", "AWS::EC2::FlowLog", "AWS::EC2::VPCPeeringConnection", "AWS::IAM::Group", "AWS::IAM::Policy", "AWS::IAM::Role", "AWS::IAM::User", "AWS::ElasticLoadBalancingV2::LoadBalancer", "AWS::ACM::Certificate", "AWS::RDS::DBInstance", "AWS::RDS::DBParameterGroup", "AWS::RDS::DBOptionGroup", "AWS::RDS::DBSubnetGroup", "AWS::RDS::DBSecurityGroup", "AWS::RDS::DBSnapshot", "AWS::RDS::DBCluster", "AWS::RDS::DBClusterParameterGroup", "AWS::RDS::DBClusterSnapshot", "AWS::RDS::EventSubscription", "AWS::S3::Bucket", "AWS::S3::AccountPublicAccessBlock", "AWS::Redshift::Cluster", "AWS::Redshift::ClusterSnapshot", "AWS::Redshift::ClusterParameterGroup", "AWS::Redshift::ClusterSecurityGroup", "AWS::Redshift::ClusterSubnetGroup", "AWS::Redshift::EventSubscription", "AWS::SSM::ManagedInstanceInventory", "AWS::CloudWatch::Alarm", "AWS::CloudFormation::Stack", "AWS::ElasticLoadBalancing::LoadBalancer", "AWS::AutoScaling::AutoScalingGroup", "AWS::AutoScaling::LaunchConfiguration", "AWS::AutoScaling::ScalingPolicy", "AWS::AutoScaling::ScheduledAction", "AWS::DynamoDB::Table", "AWS::CodeBuild::Project", "AWS::WAF::RateBasedRule", "AWS::WAF::Rule", "AWS::WAF::RuleGroup", "AWS::WAF::WebACL", "AWS::WAFRegional::RateBasedRule", "AWS::WAFRegional::Rule", "AWS::WAFRegional::RuleGroup", "AWS::WAFRegional::WebACL", "AWS::CloudFront::Distribution", "AWS::CloudFront::StreamingDistribution", "AWS::Lambda::Alias", "AWS::Lambda::Function", "AWS::ElasticBeanstalk::Application", "AWS::ElasticBeanstalk::ApplicationVersion", "AWS::ElasticBeanstalk::Environment", "AWS::MobileHub::Project", "AWS::XRay::EncryptionConfig", "AWS::SSM::AssociationCompliance", "AWS::SSM::PatchCompliance", "AWS::Shield::Protection", "AWS::ShieldRegional::Protection", "AWS::Config::ResourceCompliance", "AWS::LicenseManager::LicenseConfiguration", "AWS::ApiGateway::DomainName", "AWS::ApiGateway::Method", "AWS::ApiGateway::Stage", "AWS::ApiGateway::RestApi", "AWS::ApiGatewayV2::DomainName", "AWS::ApiGatewayV2::Stage", "AWS::ApiGatewayV2::Api", "AWS::CodePipeline::Pipeline", "AWS::ServiceCatalog::CloudFormationProvisionedProduct", "AWS::ServiceCatalog::CloudFormationProduct", "AWS::ServiceCatalog::Portfolio"]
for resource in resources:
    response = client.list_discovered_resources(resourceType=resource)
    print('##################### {} #################'.format(resource))
    for i in range(len(response['resourceIdentifiers'])):
        print('{} , {}'.format(response['resourceIdentifiers'][i]['resourceType'], response['resourceIdentifiers'][i]['resourceId']))
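Note that list_discovered_resources only returns one page of results per call; a variant of the loop above that uses the paginator (same client and resources list as above, just a sketch) would be:

# Sketch: same listing, but following pagination so long resource lists are not truncated
paginator = client.get_paginator('list_discovered_resources')
for resource in resources:
    print('##################### {} #################'.format(resource))
    for page in paginator.paginate(resourceType=resource):
        for identifier in page['resourceIdentifiers']:
            print('{} , {}'.format(identifier['resourceType'], identifier['resourceId']))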
In boto3 you can use the ResourceGroupsTaggingAPI method get_resources(), which retrieves resources mainly based on tags, but you can leave the tag filter parameter empty and get all the supported resources.
Keep in mind that not all resources are included and that it is limited to a specific region, but I hope it can help you.
Examples:
Get all resources:
import boto3
client = boto3.client('resourcegroupstaggingapi')
client.get_resources()
Get resources of a specific service type:
import boto3
client = boto3.client('resourcegroupstaggingapi')
client.get_resources(
    ResourceTypeFilters=[
        'ec2:instance'
    ])
Official documentation:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/resourcegroupstaggingapi.html#ResourceGroupsTaggingAPI.Client.get_resources
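get_resources() also returns paginated results, so to really collect everything you can walk the pages; a minimal sketch assuming default credentials and region:

import boto3

# Sketch: gather every ARN the tagging API can see in the current region
client = boto3.client('resourcegroupstaggingapi')
paginator = client.get_paginator('get_resources')
arns = []
for page in paginator.paginate():
    arns.extend(item['ResourceARN'] for item in page['ResourceTagMappingList'])
print(len(arns), 'resources found')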
Following on from the answer Omar gave, I came up with the following:
from pprint import pprint
import boto3
from botocore.exceptions import ClientError

client = boto3.client('resourcegroupstaggingapi')
regions = boto3.session.Session().get_available_regions('ec2')

for region in regions:
    print(region)
    try:
        client = boto3.client('resourcegroupstaggingapi', region_name=region)
        pprint([x.get('ResourceARN') for x in client.get_resources().get('ResourceTagMappingList')])
    except ClientError as e:
        print(f'Could not connect to region with error: {e}')
    print()
which will loop over most regions and list any ARNs in each region, like so:
eu-north-1
[]
eu-west-1
['arn:aws:mq:eu-west-1:xxxxxxxxxxxx:broker:example:b-125099aa-8e22-462e-a8e9-bcc6b29c010a']
eu-west-2
[]
eu-west-3
[]
I'm a beginner with the Kafka client in Python, and I need some help describing topics using the client.
I was able to list all my Kafka topics using the following code:
consumer = kafka.KafkaConsumer(group_id='test', bootstrap_servers=['kafka1'])
topicList = consumer.topics()
After referring to multiple articles and code samples, I was able to do this through describe_configs using confluent_kafka.
Link 1 [Confluent-kafka-python]
Link 2 Git Sample
Below is my sample code:
from confluent_kafka.admin import AdminClient, NewTopic, NewPartitions, ConfigResource
import confluent_kafka
import concurrent.futures

# Creation of config
conf = {'bootstrap.servers': 'kafka1', 'session.timeout.ms': 6000}
adminClient = AdminClient(conf)
topic_configResource = adminClient.describe_configs([ConfigResource(confluent_kafka.admin.RESOURCE_TOPIC, "myTopic")])
for j in concurrent.futures.as_completed(iter(topic_configResource.values())):
    config_response = j.result(timeout=1)
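To actually read the result, each future resolves to a dict of ConfigEntry objects keyed by config name, so a small sketch that prints them (appended after the loop above) could be:

# Sketch: print the topic configuration returned above
# (config_response holds ConfigEntry objects keyed by config name)
for name, entry in config_response.items():
    print(name, '=', entry.value)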
I have found how to do it with kafka-python:
from kafka.admin import KafkaAdminClient, ConfigResource, ConfigResourceType
KAFKA_URL = "localhost:9092" # kafka broker
KAFKA_TOPIC = "test" # topic name
admin_client = KafkaAdminClient(bootstrap_servers=[KAFKA_URL])
configs = admin_client.describe_configs(config_resources=[ConfigResource(ConfigResourceType.TOPIC, KAFKA_TOPIC)])
config_list = configs.resources[0][4]
In config_list (list of tuples) you have all the configs for the topic.
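If you want the values by name, a small sketch could look like the following (assumption: each tuple's first two fields are the config name and value, which may differ between kafka-python versions):

# Sketch: build a name -> value dict from the tuples (assumption: fields 0 and 1 are name and value)
topic_configs = {entry[0]: entry[1] for entry in config_list}
print(topic_configs.get('retention.ms'))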
Refer: https://docs.confluent.io/current/clients/confluent-kafka-python/
list_topics provides confluent_kafka.admin.TopicMetadata (topic, partitions).
confluent_kafka.admin.TopicMetadata.partitions provides confluent_kafka.admin.PartitionMetadata (partition id, leader, replicas, isrs).
from confluent_kafka.admin import AdminClient

# bootstrap_servers and topics are assumed to be defined elsewhere
kafka_admin = AdminClient({"bootstrap.servers": bootstrap_servers})

for topic in topics:
    x = kafka_admin.list_topics(topic=topic)
    print(x.topics, '\n')
    for key, value in x.topics.items():
        for keyy, valuey in value.partitions.items():
            print(keyy, ' Partition id : ', valuey, 'leader : ', valuey.leader, ' replica: ', valuey.replicas)
Interestingly, for Java this functionality (describeTopics()) sits within KafkaAdminClient.java.
So I went looking for the Python equivalent and discovered the code repository of kafka-python.
The documentation (in-line comments) in the admin-client equivalent in the kafka-python package says the following:
describe topics functionality is in ClusterMetadata
Note: if implemented here, send the request to the controller
I then switched to the cluster.py file in the same repository. It contains the topics() function that you've used to retrieve the list of topics, plus the following two functions that could help you achieve the describe functionality:
partitions_for_topic() - Return set of all partitions for topic (whether available or not)
available_partitions_for_topic() - Return set of partitions with known leaders
Note: I haven't tried this myself, so I'm not entirely sure if the behaviour would be identical to what you would see in the result of the kafka-topics --describe ... command, but it is worth a try.
I hope this helps!
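For instance, both helpers are reachable through the consumer's cluster metadata, so a minimal sketch (assuming the broker from the question) would be:

from kafka import KafkaConsumer

# Sketch: partition metadata via kafka-python's cluster metadata helpers
consumer = KafkaConsumer(bootstrap_servers=['kafka1'])
print(consumer.partitions_for_topic('test'))  # set of partition ids for the topic (whether available or not)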
I'm trying to create a Redis Elasticache cluster using boto in the sa-east-1 region, and boto is giving me this error message:
{"Error":{"Code":"InvalidParameterValue","Message":"sa-east-1 is not a valid availability zone.","Type":"Sender"},"RequestId":"2q34hj192-6902-11e4-8b4a-afafaefasefsadfsadf"}
with this code:
from boto.elasticache.layer1 import ElastiCacheConnection

self.elasticache = ElastiCacheConnection()
boto.elasticache.connect_to_region(
    'sa-east-1a',
    aws_access_key_id=settings.AWS_ACCESS_KEY,
    aws_secret_access_key=settings.AWS_SECRET_KEY
)
elasticache.create_cache_cluster(
    cache_cluster_id='test1',
    engine='redis',
    cache_node_type='cache.m3.medium',
    num_cache_nodes=1,
    preferred_availability_zone='sa-east-1',
)
Thanks
It's asking you for an availability zone, but you are providing a region. Correct values would be sa-east-1a or sa-east-1b, or just leave it blank if you have no preference.
After searching in boto code, I found that
elasticache = boto.elasticache.connect_to_region(
    'sa-east-1',
    aws_access_key_id=settings.AWS_ACCESS_KEY,
    aws_secret_access_key=settings.AWS_SECRET_KEY
)
elasticache.create_cache_cluster(
    cache_cluster_id=cache_cluster_id,
    engine=engine,
    cache_node_type=cache_node_type,
    num_cache_nodes=num_cache_nodes,
    preferred_availability_zone='sa-east-1a',
    preferred_maintenance_window=preferred_maintenance_window,
)
works.