I'm trying to write a Python script using boto3 that returns the hourly price of an instance, given its instance ID. To be clear, I'm not talking about the costs you can get from Cost Explorer; I mean the nominal hourly price, for example for an EC2 On-Demand instance.
I've already found some examples using boto3.client('pricing', ...) with a bunch of parameters and filters, as in:
https://www.saisci.com/aws/how-to-get-the-on-demand-price-of-ec2-instances-using-boto3-and-python/
which also requires converting the region code to the region name.
I would prefer not to have to specify every instance detail and parameter for that query.
Can anybody help me find a way to get that info from just the EC2 instance ID?
Thanks in advance.
You have to pass all that information. If you want a script that takes an instance ID and returns the hourly price, you first need to use the instance ID to look up the instance details, and then pass those details to the pricing query.
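A sketch of that two-step lookup, with the filter fields and region-name mapping assumed from the linked article (the AWS calls need credentials, so only the parsing helper is exercised here, on a made-up PriceList fragment):

```python
import json

# The Pricing API wants the human-readable region name, not the region code;
# this mapping is an assumption, extend it for the regions you use.
REGION_NAMES = {'us-east-1': 'US East (N. Virginia)'}

def hourly_usd(price_list_item):
    """Extract the On-Demand hourly USD rate from one PriceList JSON document."""
    terms = json.loads(price_list_item)['terms']['OnDemand']
    offer = next(iter(terms.values()))            # single offer term
    dim = next(iter(offer['priceDimensions'].values()))
    return float(dim['pricePerUnit']['USD'])

def on_demand_price(instance_id, region='us-east-1'):
    import boto3  # imported here so the demo below runs even without boto3 installed
    ec2 = boto3.client('ec2', region_name=region)
    reservations = ec2.describe_instances(InstanceIds=[instance_id])['Reservations']
    instance = reservations[0]['Instances'][0]
    pricing = boto3.client('pricing', region_name='us-east-1')  # Pricing API endpoint
    resp = pricing.get_products(
        ServiceCode='AmazonEC2',
        Filters=[
            {'Type': 'TERM_MATCH', 'Field': 'instanceType', 'Value': instance['InstanceType']},
            {'Type': 'TERM_MATCH', 'Field': 'location', 'Value': REGION_NAMES[region]},
            {'Type': 'TERM_MATCH', 'Field': 'operatingSystem', 'Value': 'Linux'},
            {'Type': 'TERM_MATCH', 'Field': 'preInstalledSw', 'Value': 'NA'},
            {'Type': 'TERM_MATCH', 'Field': 'tenancy', 'Value': 'Shared'},
            {'Type': 'TERM_MATCH', 'Field': 'capacitystatus', 'Value': 'Used'},
        ],
    )
    return hourly_usd(resp['PriceList'][0])

# Exercise only the parsing helper, on a trimmed, invented PriceList document:
sample = json.dumps({'terms': {'OnDemand': {'X.Y': {'priceDimensions': {
    'X.Y.Z': {'pricePerUnit': {'USD': '0.0416000000'}}}}}}})
print(hourly_usd(sample))  # 0.0416
```

This is a sketch, not a complete implementation: real PriceList documents can match more than one product, so you may need to narrow the filters further for your use case.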
You have to specify most of the information, but not all of it.
For example, region_name is optional if you:
Have configured the AWS CLI on the machine where your Python script runs (i.e. ~/.aws/config is present and a region is configured).
OR
Are running the Python script on an AWS resource that has a role attached to it with a policy that allows you to retrieve the spot pricing information.
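For reference, a minimal ~/.aws/config covering the first option looks like this (the region value is just an example):

```ini
[default]
region = us-east-1
```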
For example, I am able to run the following script, which retrieves my current spot instances, gets their current hourly cost, and calculates a bid price for me based on the spot price history for that particular instance type, without specifying the region anywhere:
#!/usr/bin/env python3
import boto3
import json
from datetime import datetime
from datetime import timedelta
from collections import namedtuple


def get_current_pricing():
    pricing = []
    ec2_client = boto3.client('ec2')
    ec2_resource = boto3.resource('ec2')
    response = ec2_client.describe_spot_instance_requests()
    spot_instance_requests = response['SpotInstanceRequests']
    for instance_request in spot_instance_requests:
        if instance_request['State'] == 'active':
            instance = ec2_resource.Instance(instance_request['InstanceId'])
            application = None  # fall back to None if no Name tag is present
            for tag in instance.tags:
                if tag['Key'] == 'Name':
                    application = tag['Value']
                    break
            instance_type = instance_request['LaunchSpecification']['InstanceType']
            price = {
                'application': application,
                'instance_type': instance_type,
                'current_price': float(instance_request['SpotPrice']),
                'bid_price': get_bid_price(instance_type)
            }
            pricing.append(price)
    return pricing


def get_bid_price(instancetype):
    start = datetime.now() - timedelta(days=1)
    ec2 = boto3.client('ec2')
    price_dict = ec2.describe_spot_price_history(
        StartTime=start,
        InstanceTypes=[instancetype],
        ProductDescriptions=['Linux/UNIX']
    )
    if price_dict.get('SpotPriceHistory'):
        PriceHistory = namedtuple('PriceHistory', 'price timestamp')
        # Collect the full history, then sort so the most recent price comes first
        price_list = [PriceHistory(round(float(item.get('SpotPrice')), 5), item.get('Timestamp'))
                      for item in price_dict.get('SpotPriceHistory')]
        price_list.sort(key=lambda tup: tup.timestamp, reverse=True)
        # Bid 10% above the most recent spot price
        bid_price = round(float(price_list[0].price), 5)
        leeway = round(bid_price / 100 * 10, 5)
        return round(bid_price + leeway, 5)
    else:
        raise ValueError(f'Invalid instance type: {instancetype} provided. '
                         'Please provide correct instance type.')


if __name__ == '__main__':
    current_pricing = get_current_pricing()
    print(json.dumps(current_pricing, indent=4, default=str))
I am trying to extract specific details, like authored_date, from the attributes I am getting with the help of Python.
My end goal:
I want to find the branches named tobedeleted_branch1, tobedeleted_branch2, etc., and delete them with my script if their authored_date is more than 7 days old.
I am a beginner and currently learning this.
So, what I want to do is:
Extract the authored date from the output and check whether it is older than 7 days.
If it is older than 7 days, go ahead and perform whatever I want to perform in the if condition.
import gitlab, os
#from gitlab.v4.objects import *

# authenticate
TOKEN = "MYTOKEN"
GITLAB_HOST = 'MYINSTANCE'  # or your instance
gl = gitlab.Gitlab(GITLAB_HOST, private_token=TOKEN)

# set gitlab group id
group_id = 6
group = gl.groups.get(group_id, lazy=True)

# get all projects
projects = group.projects.list(include_subgroups=True, all=True)

# get all project ids
project_ids = []
for project in projects:
    # project_ids.append((project.path_with_namespace, project.id, project.name))
    project_ids.append(project.id)
print(project_ids)

for project in project_ids:
    project = gl.projects.get(project)
    branches = project.branches.list()
    for branch in branches:
        if "tobedeleted" in branch.attributes['name']:
            print(branch.attributes['name'])
            #print(branch)
            print(branch.attributes['commit'])
            #branch.delete()
The output I get from print(branch.attributes['commit']) looks like this:
{'id': '1245829930', 'short_id': '124582', 'created_at': '2021-11-15T09:10:26.000+00:00', 'parent_ids': None, 'title': "Merge branch 'branch name commit' into 'master'", 'message': 'branch name commit', 'author_name': 'Administrator', 'author_email': 'someemail@gmail.com', 'authored_date': '2021-11-15T09:10:26.000+00:00', 'committer_name': 'Administrator', 'committer_email': 'someemail@gmail.com', 'committed_date': '2021-11-15T09:10:26.000+00:00', 'trailers': None, 'web_url': 'someweburl'}
From the above output, I want to extract the 'authored_date' and check whether it is more than 7 days old; if so, I will go ahead and delete the merged branch.
Any help regarding this is highly appreciated.
from datetime import datetime

def get_day_diff(old_date):
    old_datetime = datetime.fromisoformat(old_date)
    # Check timezone of date
    tz_info = old_datetime.tzinfo
    current_datetime = datetime.now(tz_info)
    datetime_delta = current_datetime - old_datetime
    return datetime_delta.days

# test with authored_date
authored_date = '2021-11-15T09:10:26.000+00:00'
if get_day_diff(authored_date) > 7:
    # then delete branch
    branch.delete()
import datetime

created_at = '2021-11-15T09:10:26.000+00:00'
t = datetime.datetime.fromisoformat(created_at)
n = datetime.datetime.now(tz=t.tzinfo)
if n - t <= datetime.timedelta(days=7):
    ...  # do something
Using your branch.attributes about your commit inside your loop, you can do the following (make sure you import datetime). You'll want to append each branch name to be deleted to a list that you iterate over afterwards, as you do not want to modify an object you are currently iterating over (i.e. deleting items from the branches object while still iterating over it).
from datetime import datetime
...
branches_to_delete = []
for project in project_ids:
    project = gl.projects.get(project)
    branches = project.branches.list()
    for branch in branches:
        if "tobedeleted" in branch.attributes['name']:
            print(branch.attributes['name'])
            #print(branch)
            print(branch.attributes['commit'])
            branch_date_object = datetime.strptime(branch.attributes['commit']['authored_date'].split('.')[0], "%Y-%m-%dT%H:%M:%S")
            days_diff = datetime.now() - branch_date_object
            if days_diff.days > 7:
                branches_to_delete.append(branch.attributes['name'])

for branch in branches_to_delete:
    # perform your delete functionality
    pass
In the new Datastore Mode documentation, there is mention of an allocateIds() method. However, beyond a single paragraph, there isn't any example code that illustrates how this method is used.
I am trying to allocate an ID each time I create a new entity, so that I can save the ID as a property of the entity itself.
I assume that in pseudocode it works like this:
user_id = allocateIds(number_of_ids=1)
user_key = datastore_client.key('User', user_id)
user = datastore.Entity(key=user_key)
user.update({ 'user_id': user_id })  # Allows a get_user_by_id() query
datastore_client.put(user)
How exactly does allocateIds() work in practice?
When you call the allocateIds() function it creates a new instance of the Key class. When the Key constructor is called, it takes all of the arguments you provided to allocateIds and recombines them through a _combine_args method. That is what produces your key.
(If you want to see the code yourself)
Source: https://googleapis.dev/python/datastore/latest/_modules/google/cloud/datastore/key.html#Key
Yes, allocateIds() should work for the case where you want to get an ID from Datastore mode and use it as both an ID and property value:
from google.cloud import datastore

client = datastore.Client()

# Allocate a single ID in kind User
# Returns a list of keys
keys = client.allocate_ids(client.key('User'), 1)

# Get the key from the list
key = keys[0]
print(key.id)

# Create a User entity using our key
user = datastore.Entity(key)

# Add the ID as a field
user.update({
    'user_id': key.id
})

# Commit to the database
client.put(user)

# Query based on the full key
query = client.query(kind='User')
query.key_filter(user.key, '=')
results = list(query.fetch())
print(results)
For most other cases where you just want a single auto-ID, you can skip allocate_ids:
# Create a User entity
# Use an incomplete key so Datastore assigns an ID
user = datastore.Entity(client.key('User'))

# Add some data
user.update({
    'foo': 'bar'
})

# Datastore allocates an ID when you call client.put
client.put(user)

# user.key now contains an ID
user_id = user.key.id
print(user_id)

# Query with the ID and key
query = client.query(kind='User')
query.key_filter(user.key, '=')
results = list(query.fetch())
print(results)
How to filter AWS EC2 snapshots by the current day?
I'm filtering snapshots by tag:Disaster_Recovery with value Full, using the Python code below, and I also need to filter by the current day.
import boto3

region_source = 'us-east-1'
client_source = boto3.client('ec2', region_name=region_source)

# Getting all snapshots as per specified filter
def get_snapshots():
    response = client_source.describe_snapshots(
        Filters=[{'Name': 'tag:Disaster_Recovery', 'Values': ['Full']}]
    )
    return response["Snapshots"]

print(*get_snapshots(), sep="\n")
Solved it with the code below:
import boto3
from datetime import date

region_source = 'us-east-1'
client_source = boto3.client('ec2', region_name=region_source)
date_today = date.isoformat(date.today())

# Getting all snapshots as per specified filter
def get_snapshots():
    response = client_source.describe_snapshots(
        Filters=[{'Name': 'tag:Disaster_Recovery', 'Values': ['Full']}]
    )
    return response["Snapshots"]

# Keep only the snapshots that were created today
snapshots = [s for s in get_snapshots() if s["StartTime"].strftime('%Y-%m-%d') == date_today]
print(*snapshots, sep="\n")
This could do the trick:
import boto3
from datetime import date

region_source = 'us-east-1'
client_source = boto3.client('ec2', region_name=region_source)

# Getting all snapshots as per specified filter
def get_snapshots():
    response = client_source.describe_snapshots(
        Filters=[{'Name': 'tag:Disaster_Recovery', 'Values': ['Full']}]
    )
    snapshots_in_day = []
    for snapshot in response["Snapshots"]:
        if snapshot["StartTime"].strftime('%Y-%m-%d') == date.isoformat(date.today()):
            snapshots_in_day.append(snapshot)
    return snapshots_in_day

print(*get_snapshots(), sep="\n")
After reading the docs, the rest is a simple date comparison.
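That comparison can be checked in isolation with a made-up, timezone-aware StartTime like the one boto3 returns:

```python
from datetime import date, datetime, timezone

# Stand-in for snapshot["StartTime"]; boto3 returns an aware datetime like this
start_time = datetime(2021, 11, 15, 9, 10, 26, tzinfo=timezone.utc)

# True only when the snapshot's date string matches today's date string
created_today = start_time.strftime('%Y-%m-%d') == date.isoformat(date.today())
print(created_today)  # False, unless today happens to be 2021-11-15
```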
I am using Lambda to detect faces and would like to send the response to a DynamoDB table.
This is the code I am using:
rekognition = boto3.client('rekognition', region_name='us-east-1')
dynamodb = boto3.client('dynamodb', region_name='us-east-1')

# --------------- Helper Functions to call Rekognition APIs ------------------
def detect_faces(bucket, key):
    response = rekognition.detect_faces(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        Attributes=['ALL']
    )
    TableName = 'table_test'
    for face in response['FaceDetails']:
        table_response = dynamodb.put_item(TableName=TableName, Item='{0} - {1}%')
    return response
My problem is in these lines:
for face in response['FaceDetails']:
    table_response = dynamodb.put_item(TableName=TableName, Item={'key': {'S': 'value'}})
I am able to see the result in the console.
I don't want to add specific item(s) to the table; I need the whole response to be transferred to the table.
To do this:
1. What should I add as the key and partition key in the table?
2. How do I transfer the whole response to the table?
I have been stuck on this for three days now and can't figure out any result. Please help!
******************* EDIT *******************
I tried this code:
import os
import uuid

import boto3

rekognition = boto3.client('rekognition', region_name='us-east-1')

# --------------- Helper Functions to call Rekognition APIs ------------------
def detect_faces(bucket, key):
    response = rekognition.detect_faces(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        Attributes=['ALL']
    )
    TableName = 'table_test'
    for face in response['FaceDetails']:
        face_id = str(uuid.uuid4())
        Age = face["AgeRange"]
        Gender = face["Gender"]
        print('Generating new DynamoDB record, with ID: ' + face_id)
        print('Input Age: ' + Age)
        print('Input Gender: ' + Gender)
        dynamodb = boto3.resource('dynamodb')
        table = dynamodb.Table(os.environ['test_table'])
        table.put_item(
            Item={
                'id': face_id,
                'Age': Age,
                'Gender': Gender
            }
        )
    return response
It gave me two errors:
1. Error processing object xxx.jpg
2. cannot concatenate 'str' and 'dict' objects
Can you please help!
When you create a table in DynamoDB, you must specify at least a partition key. Go to your DynamoDB table and grab your partition key. Once you have it, you can create a new object that contains this partition key with some value on it, plus the object you want to store. The partition key is always a MUST when creating a new item in a DynamoDB table.
Your JSON object should look like this:
{
    "myPartitionKey": "myValue",
    "attr1": "val1",
    "attr2": "val2"
}
EDIT: After the OP updated his question, here's some new information:
For problem 1)
Are you sure the image you are trying to process is a valid one? If it is a corrupted file, Rekognition will fail and throw that error.
For problem 2)
You cannot concatenate a string with a dictionary in Python. Your Age and Gender variables are dictionaries, not strings, so you need to access an inner attribute within them. I am not a Python developer, but: the Gender object has a 'Value' attribute you need to access, while the Age object has 'Low' and 'High' attributes.
You can see the complete list of attributes in the docs
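As a minimal sketch of that fix (the face dict below is invented, but the field names match the FaceDetails shape that detect_faces returns):

```python
# An invented FaceDetail fragment with the same shape Rekognition returns
face = {
    'AgeRange': {'Low': 25, 'High': 35},
    'Gender': {'Value': 'Female', 'Confidence': 99.0},
}

# Access the inner attributes to get strings/numbers instead of dicts
gender = face['Gender']['Value']
age = "{}-{}".format(face['AgeRange']['Low'], face['AgeRange']['High'])

# Now string concatenation works
print('Input Age: ' + age)        # Input Age: 25-35
print('Input Gender: ' + gender)  # Input Gender: Female
```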
Hope this helps!
I need to obtain the tag values from the code below. It first fetches the cluster Id and then passes it to describe_cluster, which returns JSON. I'm trying to fetch a particular value from this "Cluster" JSON using get. However, it returns an error: "'str' object has no attribute 'get'". Please suggest.
Here is the boto3 reference I'm using:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/emr.html#EMR.Client.describe_cluster
import boto3
import json
from datetime import timedelta

REGION = 'us-east-1'
emrclient = boto3.client('emr', region_name=REGION)
snsclient = boto3.client('sns', region_name=REGION)

def lambda_handler(event, context):
    EMRS = emrclient.list_clusters(
        ClusterStates=['STARTING', 'RUNNING', 'WAITING']
    )
    clusters = EMRS["Clusters"]
    for cluster_details in clusters:
        id = cluster_details.get("Id")
        describe_cluster = emrclient.describe_cluster(
            ClusterId=id
        )
        cluster_values = describe_cluster["Cluster"]
        for details in cluster_values:
            tag_values = details.get("Tags")
            print(tag_values)
The error is in the last part of the code.
describe_cluster = emrclient.describe_cluster(
    ClusterId=id
)
cluster_values = describe_cluster["Cluster"]
for details in cluster_values:  # ERROR HERE
    tag_values = details.get("Tags")
    print(tag_values)
The value returned from describe_cluster is a dictionary, and Cluster is also a dictionary, so you don't need to iterate over it. You can directly access cluster_values.get("Tags").
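To illustrate with a stand-in response (the shape follows the describe_cluster docs; the concrete values here are invented):

```python
# Invented stand-in for the emrclient.describe_cluster(...) response
describe_cluster = {
    "Cluster": {
        "Id": "j-XXXXXXXX",
        "Name": "my-cluster",
        "Tags": [{"Key": "team", "Value": "data"}],
    }
}

cluster_values = describe_cluster["Cluster"]

# Iterating over a dict yields its keys (strings), which is why
# details.get("Tags") raised "'str' object has no attribute 'get'".
# Access the key directly instead:
tag_values = cluster_values.get("Tags")
print(tag_values)  # [{'Key': 'team', 'Value': 'data'}]
```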