get the Alarm object of CloudWatch using boto 2 - python

I created an alarm and want to delete it afterwards.
The documentation for boto 2 doesn't show how to do that.
Any help?
Thanks

If you want to delete alarms, the API you need is DeleteAlarms. The documentation you linked in your question already mentions it (search for delete_alarms).
Also, boto 3 is the recommended version to use, and here is the API you need: https://boto3.readthedocs.io/en/latest/reference/services/cloudwatch.html#CloudWatch.Client.delete_alarms
Example of how to do it with Boto 3:
import boto3
client = boto3.client('cloudwatch')
client.delete_alarms(AlarmNames=['SomeAlarmName'])
Boto 2 example:
import boto
client = boto.connect_cloudwatch()
client.delete_alarms(['SomeAlarmName'])  # delete_alarms expects a list of alarm names
If you don't know the name, you can get a list of alarms with the following call (the same for boto 2 and 3):
client.describe_alarms()

You should use Boto3. But if you are tied to Boto2, then:
import boto
cw = boto.connect_cloudwatch()
alarms = cw.describe_alarms()
for alarm in alarms:
    print alarm.name
Check if the alarm you want to delete is listed. Then use that name:
cw.delete_alarms([<alarm_to_be_deleted>])
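For completeness, here is a minimal boto 3 sketch that ties the two steps together; the 'Some' name prefix is a hypothetical example:
import boto3
client = boto3.client('cloudwatch')
# Find alarms whose names start with a given prefix (hypothetical prefix).
response = client.describe_alarms(AlarmNamePrefix='Some')
names = [alarm['AlarmName'] for alarm in response['MetricAlarms']]
# delete_alarms accepts up to 100 alarm names per call.
if names:
    client.delete_alarms(AlarmNames=names)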

boto3 eks client how to generate presigned url

I'm trying to update a Docker image within a deployment in EKS. I'm running Python code from a Lambda function. However, I don't know how to use generate_presigned_url(). What should I pass as the ClientMethod parameter?
import boto3
client = boto3.client("eks")
url = client.generate_presigned_url()
These are the client methods you could use with the EKS client:
'associate_encryption_config'
'associate_identity_provider_config'
'can_paginate'
'create_addon'
'create_cluster'
'create_fargate_profile'
'create_nodegroup'
'delete_addon'
'delete_cluster'
'delete_fargate_profile'
'delete_nodegroup'
'describe_addon'
'describe_addon_versions'
'describe_cluster'
'describe_fargate_profile'
'describe_identity_provider_config'
'describe_nodegroup'
'describe_update'
'disassociate_identity_provider_config'
'generate_presigned_url'
'get_paginator'
'get_waiter'
'list_addons'
'list_clusters'
'list_fargate_profiles'
'list_identity_provider_configs'
'list_nodegroups'
'list_tags_for_resource'
'list_updates'
'tag_resource'
'untag_resource'
'update_addon'
'update_cluster_config'
'update_cluster_version'
'update_nodegroup_config'
'update_nodegroup_version'
You can get more information about these methods in the documentation here: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/eks.html#client
After over two weeks I suppose you've found your answer. Anyway, the ClientMethod mentioned (and not really well explained in the boto3 docs) is just one of the methods you can use with the EKS client itself. I honestly think this is what KnowledgeGainer was trying to say by listing all the methods: basically, you can just pick one, and that gives you the presigned URL.
For example, here I'm using one method that doesn't require any additional arguments, list_clusters:
>>> import boto3
>>> client = boto3.client("eks")
>>> client.generate_presigned_url("list_clusters")
'https://eks.eu-west-1.amazonaws.com/clusters?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAQKOXLHHBFT756PNG%2F20210528%2Feu-west-1%2Feks%2Faws4_request&X-Amz-Date=20210528T014603Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=d25dNCC17013ad9bc75c04b6e067105c23199c23cbadbbbeForExample'
If the method requires any additional arguments, you add those into Params as a dictionary:
>>> method_params = {'name': <your_cluster_name>}
>>> client.generate_presigned_url('describe_cluster', Params=method_params)
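Putting it together, a minimal sketch, assuming a cluster named my-cluster and a one-hour expiry (both hypothetical values):
import boto3
client = boto3.client("eks")
# Presign a describe_cluster call; the cluster name is a placeholder.
url = client.generate_presigned_url(
    ClientMethod="describe_cluster",
    Params={"name": "my-cluster"},
    ExpiresIn=3600,  # URL validity in seconds
)
print(url)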

Confluent-Kafka Python : How to list all topics programmatically

I have a Kafka broker running on my local system. To communicate with the broker from my Django-based web application I am using the confluent-kafka wrapper. However, browsing through the admin API, I could not find any API for listing Kafka topics. (The topics are created programmatically and are dynamic.)
Is there any way I could list them within my program?
The requirement is that, if my worker reboots, all assigned consumers listening to those topics will have to be re-initialized, so I want to loop over all the topics and assign a consumer to each.
Here is how to do it:
>>> from confluent_kafka.admin import AdminClient
>>> conf = {'bootstrap.servers': 'vps01:9092,vps02:9092,vps03:9092'}
>>> kadmin = AdminClient(conf)
>>> kadmin.list_topics().topics # Returns a dict(). See example below.
{'topic01': TopicMetadata(topic01, 3 partitions),}
I hope that helps.
Two ways you can achieve it:
from confluent_kafka import Consumer
consumer_config = {
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'group1'
}
consumer = Consumer(consumer_config)
consumer.list_topics().topics
This will return a dictionary something like this:
{'raw_request': TopicMetadata(raw_request, 4 partitions),
'raw_requests': TopicMetadata(raw_requests, 1 partitions),
'idw': TopicMetadata(idw, 1 partitions),
'__consumer_offsets': TopicMetadata(__consumer_offsets, 50 partitions)}
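If you only need the topic names, they are the keys of that dict; reusing the consumer from above:
topic_names = list(consumer.list_topics().topics.keys())
print(topic_names)  # e.g. ['raw_request', 'raw_requests', 'idw', '__consumer_offsets']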
The other way is already posted by @NYCeyes.
Try searching again for list_topics
https://docs.confluent.io/5.2.1/clients/confluent-kafka-python/index.html#confluent_kafka.Consumer.list_topics
https://docs.confluent.io/5.2.1/clients/confluent-kafka-python/index.html#confluent_kafka.Producer.list_topics
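Back to the original requirement of re-initializing consumers after a reboot, here is a minimal sketch, assuming the same broker and group settings as above; note that a single consumer can subscribe to the whole list at once:
from confluent_kafka import Consumer
config = {
    'bootstrap.servers': 'localhost:9092',  # assumed broker address
    'group.id': 'group1',
}
consumer = Consumer(config)
# Fetch all current topic names, skipping Kafka-internal topics.
topics = [t for t in consumer.list_topics().topics if not t.startswith('__')]
# Subscribe the consumer to every discovered topic.
consumer.subscribe(topics)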

Failed to filter and delete old indices of elasticsearch using curator python API

I'm trying to use the curator Python API to periodically delete old logs in Elasticsearch.
Judging from my code's output, it seems I can't filter the indices I want, which has had me stuck for several days.
Can anyone have a look and tell me if I have done anything wrong?
I used elasticsearch-curator version 5.4.1 and tested against Elasticsearch 5.5 on EC2, AWS Elasticsearch 5.5, and AWS Elasticsearch 6.0; the results are the same.
Following is my code:
from elasticsearch import Elasticsearch
import elasticsearch
import curator

def handler():
    client = elasticsearch.Elasticsearch(['http://XX.153.17.133:9200'])
    ilo = curator.IndexList(client)
    print ilo.all_indices
    print ilo.filter_by_regex(kind='prefix', value='mov')
    print ilo.filter_by_age(source='creation_date', direction='older', unit='seconds', unit_count=2)
    #delete_indices = curator.DeleteIndices(ilo)
    #delete_indices.do_action()
    return

handler()
And the following is the output:
/Users/junyu/PycharmProjects/es-curator/bin/python
/Users/junyu/PycharmProjects/es-curator/es-curator.py
[u'movie']
None
None
Process finished with exit code 0
Thank you in advance!
The filter_by_* methods modify the index list in place and return None, which is why your print ilo.filter* lines print None; you will only see output from the filtering itself if you have logging set up.
If you want to see which indices remain after the filters, try print ilo.indices, as that is the working list.
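A corrected sketch of the handler, applying the filters first and then acting on the surviving indices (the host address is the placeholder from the question):
import elasticsearch
import curator

client = elasticsearch.Elasticsearch(['http://XX.153.17.133:9200'])
ilo = curator.IndexList(client)
# The filter methods mutate ilo.indices in place and return None.
ilo.filter_by_regex(kind='prefix', value='mov')
ilo.filter_by_age(source='creation_date', direction='older',
                  unit='seconds', unit_count=2)
print(ilo.indices)  # the working list after filtering
if ilo.indices:
    curator.DeleteIndices(ilo).do_action()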

Who created an Amazon EC2 instance using Boto and Python?

I want to know who created a particular instance. I am using CloudTrail to look up the events, but I am not able to extract who created that instance. I am using Python and Boto3 to find out the details.
I am using this code - lookup_events() from CloudTrail in boto3 - to extract the information about an instance.
ct_conn = sess.client(service_name='cloudtrail',region_name='us-east-1')
events=ct_conn.lookup_events()
I found out the solution to the above problem using the lookup_events() function.
ct_conn = boto3.client(service_name='cloudtrail', region_name='us-east-1')
events_dict = ct_conn.lookup_events(LookupAttributes=[{'AttributeKey': 'ResourceName', 'AttributeValue': 'i-xxxxxx'}])
for data in events_dict['Events']:
    json_file = json.loads(data['CloudTrailEvent'])
    print json_file['userIdentity']['userName']
@Karthik - here is a sample of creating the session:
import boto3
import json
import os

session = boto3.Session(region_name='us-east-1', aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'], aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'])
ct_conn = session.client(service_name='cloudtrail', region_name='us-east-1')
events_dict = ct_conn.lookup_events(LookupAttributes=[{'AttributeKey': 'ResourceName', 'AttributeValue': 'i-xxx'}])
for data in events_dict['Events']:
    json_file = json.loads(data['CloudTrailEvent'])
    print(json_file['userIdentity']['userName'])
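Note that lookup_events() pages its results, and that role-based identities carry no userName, so a more defensive sketch (the instance ID is a placeholder) could walk all pages with a paginator:
import boto3
import json

ct_conn = boto3.client('cloudtrail', region_name='us-east-1')
paginator = ct_conn.get_paginator('lookup_events')
# Walk every page of events recorded for the instance.
for page in paginator.paginate(LookupAttributes=[{'AttributeKey': 'ResourceName', 'AttributeValue': 'i-xxxxxx'}]):
    for event in page['Events']:
        detail = json.loads(event['CloudTrailEvent'])
        if detail.get('eventName') == 'RunInstances':
            # userName is absent for assumed roles, so fall back to the ARN.
            identity = detail.get('userIdentity', {})
            print(identity.get('userName') or identity.get('arn'))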

Create and download an AWS ec2 keypair using python boto

I'm having difficulty figuring out a way (if possible) to create a new AWS keypair with the Python Boto library and then download that keypair.
The Key object returned by the create_key_pair method in boto has a save method. So, basically, you can do something like this:
>>> import boto
>>> ec2 = boto.connect_ec2()
>>> key = ec2.create_key_pair('mynewkey')
>>> key.save('/path/to/keypair/dir')
If you want a more detailed example, check out https://github.com/garnaat/paws/blob/master/ec2_launch_instance.py.
Does that help? If not, provide some specifics about the problems you are encountering.
Same for Boto3:
import boto3
ec2 = boto3.resource('ec2')
keypair_name = 'my_key'
new_keypair = ec2.create_key_pair(KeyName=keypair_name)
with open('./my_key.pem', 'w') as file:
    file.write(new_keypair.key_material)
print(new_keypair.key_fingerprint)
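One caveat worth adding: SSH clients refuse private keys with loose permissions, so it is worth locking the file down right after writing it (POSIX-only sketch):
import os
# ssh rejects key files readable by other users; restrict to owner read-only.
os.chmod('./my_key.pem', 0o400)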
