How to Stop EC2 Instance from the list - python

I have the code below and it works very well locally for stopping EC2 instances, but when I move it to production it stops only the first instance, not the second/nth one.
import boto3
import json

access_key = "AKIAJSVXXXXXXXXXX"
secret_key = "mSvhX5q7uw8dTZ543qtC6OXXXXXXXXXXXX"
client = boto3.client('ec2', aws_access_key_id=access_key,
                      aws_secret_access_key=secret_key, region_name='us-east-1')
file1 = open("Myfile.txt", "r").read().split("\n")
print(file1)
ec2_result = client.describe_instances(
    Filters=[
        {
            'Name': 'tag:Name',
            'Values': file1
        }
    ]
)
ids = [ec2_result['Reservations'][0]['Instances'][i]['InstanceId']
       for i in range(len(ec2_result['Reservations'][0]['Instances']))]
print(ids)
response = client.stop_instances(
    InstanceIds=ids
)
print("stopped now")
Myfile.txt:
instanceonename
instance2name
It works very well on my local system, but in production it stops only the first EC2 instance, not the second one.

It stops only one instance because your list comprehension only looks at the first reservation (Reservations[0]). Instances can come back spread across several reservations, so you need a double for:
ids = [instance['InstanceId']
       for reservation in ec2_result['Reservations']
       for instance in reservation['Instances']]
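Two more things can bite in a bigger production account (assumptions, since the question doesn't show the account layout): read().split("\n") keeps an empty string from a trailing newline, and describe_instances paginates its results, so instances beyond the first page are silently dropped. A minimal sketch handling both:

import boto3

client = boto3.client('ec2', region_name='us-east-1')

# splitlines() avoids the trailing '' that split("\n") leaves behind
with open("Myfile.txt") as f:
    names = [line.strip() for line in f.read().splitlines() if line.strip()]

# describe_instances is paginated; a paginator walks every page
ids = []
paginator = client.get_paginator('describe_instances')
for page in paginator.paginate(Filters=[{'Name': 'tag:Name', 'Values': names}]):
    for reservation in page['Reservations']:
        for instance in reservation['Instances']:
            ids.append(instance['InstanceId'])

if ids:
    client.stop_instances(InstanceIds=ids)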

Related

How do I get the GCE vm instance id using python?

I am using Python. I have the correct project and VM instance names, so I can query Google Cloud metrics just fine. But now I need to query some Agent metrics, and that query needs the instance id of my VM instead of the name. What is the simplest way to get the instance id of my VM with a query?
Sorry, I should be more clear. Here is my sample code:
results = client.list_time_series(
    request={
        "name": project_name,
        "filter": filter,
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
I want to make a query similar to this. Is there any simple filter I can use, or something else, that will get me the instance_id for a particular instance name?
If you are inside the GCE VM, you can use the Metadata server:
import requests

metadata_server = "http://metadata/computeMetadata/v1/instance/"
metadata_flavor = {'Metadata-Flavor': 'Google'}
gce_id = requests.get(metadata_server + 'id', headers=metadata_flavor).text
gce_name = requests.get(metadata_server + 'hostname', headers=metadata_flavor).text
gce_machine_type = requests.get(metadata_server + 'machine-type', headers=metadata_flavor).text
If you are looking to list GCE VMs, check the example in the GCP documentation for using the client libraries: Listing Instances.
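If you are outside the VM, one option is to resolve the name to an id through the Compute API. A minimal sketch using the google-cloud-compute client; the project, zone, and instance name below are placeholder assumptions:

from google.cloud import compute_v1

def get_instance_id(project: str, zone: str, name: str) -> int:
    client = compute_v1.InstancesClient()
    # get() returns the full Instance resource, including its numeric id
    instance = client.get(project=project, zone=zone, instance=name)
    return instance.id

# hypothetical values for illustration
instance_id = get_instance_id("my-project", "us-central1-a", "my-vm-name")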

Python Boto3 to AWS SDK for blob storage

This code retrieves the buckets of an Amazon S3-compatible storage (not Amazon AWS, but the Zadara-compatible cloud storage), and IT WORKS:
import boto3
from botocore.client import Config

session = boto3.session.Session()
s3_client = session.client(
    service_name='s3',
    region_name='IT',
    aws_access_key_id='xyz',
    aws_secret_access_key='abcedf',
    endpoint_url='https://nothing.com:443',
    config=Config(signature_version='s3v4'),
)
print('Buckets')
boto3.set_stream_logger(name='botocore')
print(s3_client.list_buckets())
I am trying to use the same method to access S3 via C# and the AWS SDK, but I always get the error "The request signature we calculated does not match the signature you provided. Check your key and signing method.".
AmazonS3Config config = new AmazonS3Config();
config.AuthenticationServiceName = "s3";
config.ServiceURL = "https://nothing.com:443";
config.SignatureVersion = "s3v4";
config.AuthenticationRegion = "it";
AmazonS3Client client = new AmazonS3Client(
    "xyz",
    "abcdef",
    config);
ListBucketsResponse r = await client.ListBucketsAsync();
What can I do? Why is it not working? I can't find a solution.
I also tried to trace debug info:
Python
boto3.set_stream_logger(name='botocore')
C#
AWSConfigs.LoggingConfig.LogResponses = ResponseLoggingOption.Always;
AWSConfigs.LoggingConfig.LogMetrics = true;
AWSConfigs.LoggingConfig.LogTo = Amazon.LoggingOptions.SystemDiagnostics;
AWSConfigs.AddTraceListener("Amazon", new System.Diagnostics.ConsoleTraceListener());
but for C# it is not logging the whole request.
Any suggestions?
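One difference worth ruling out (an assumption, since the C# request log is missing): many S3-compatible endpoints only accept path-style addressing, and a signature computed for the wrong addressing style fails with exactly this error. You can pin boto3 to one style to see which one your endpoint accepts, then mirror it in C# with AmazonS3Config.ForcePathStyle. A sketch of the boto3 side:

import boto3
from botocore.client import Config

# Force path-style requests (https://nothing.com/bucket/key instead of
# https://bucket.nothing.com/key); try 'virtual' as well to compare.
s3_client = boto3.session.Session().client(
    service_name='s3',
    region_name='IT',
    aws_access_key_id='xyz',
    aws_secret_access_key='abcedf',
    endpoint_url='https://nothing.com:443',
    config=Config(signature_version='s3v4',
                  s3={'addressing_style': 'path'}),
)
print(s3_client.list_buckets())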

Can I get the region for a given instance using boto3 by passing just the private IP of the EC2 instance

Trying to get the AWS region for a particular instance. Is it possible to find an instance's region by passing only the EC2 instance's IP?
What I tried:
import boto3
client = boto3.client('s3') # example client, could be any
client.meta.region_name
but it shows the same region for all servers (that is just the region the client was configured with, not the instance's region).
Unfortunately, there is no native cross-region get_instance_by_private_ip API available. But you can do something like this:
import boto3

def find_region_by_private_ip_address(ip):
    ec2 = boto3.resource('ec2', 'us-east-1')
    regions = [r['RegionName'] for r in ec2.meta.client.describe_regions()['Regions']]
    for region in regions:
        ec2 = boto3.resource('ec2', region)
        instance_iterator = ec2.instances.filter(
            Filters=[
                {
                    'Name': 'private-ip-address',
                    'Values': [ip]
                },
            ]
        )
        instance_list = list(instance_iterator)
        if len(instance_list) > 0:
            return region
If performance is critical, you can do multi-threading or multi-processing to query regions in parallel.
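For example, a sketch of the same scan fanned out over a thread pool (same filter as above; each worker returns the region on a hit):

from concurrent.futures import ThreadPoolExecutor

import boto3

def instance_in_region(region, ip):
    # Returns the region if an instance with this private IP exists there
    ec2 = boto3.resource('ec2', region)
    instances = ec2.instances.filter(
        Filters=[{'Name': 'private-ip-address', 'Values': [ip]}]
    )
    return region if list(instances) else None

def find_region_parallel(ip):
    regions = [r['RegionName'] for r in
               boto3.client('ec2', 'us-east-1').describe_regions()['Regions']]
    with ThreadPoolExecutor(max_workers=len(regions)) as pool:
        for region in pool.map(instance_in_region, regions, [ip] * len(regions)):
            if region:
                return region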

AWS Lambda: CodePipeline job never ends, even after returning success

I am working on executing AWS Lambda code from CodePipeline. I have given the Lambda role full access to EC2 and CodeDeploy. The commands generally work when I am not triggering them from CodePipeline. When triggered from CodePipeline, they just keep on running, even though success is sent. What am I doing wrong?
Code:
import boto3
import json

def lambda_handler(event, context):
    reservations = boto3.client('ec2').describe_instances()['Reservations']
    instances_list = []
    process_instance_list = []
    command = 'COMMAND TO EXECUTE ON SERVER'
    ssm = boto3.client('ssm')
    for res in reservations:
        instances = res['Instances']
        for inst in res['Instances']:
            for tag in inst['Tags']:
                #print("Tag value is {}".format(tag['Value']))
                if tag['Value'] == 'Ubuntu_Magento':
                    print("{} {} {}".format(tag['Value'], inst['InstanceId'], inst['LaunchTime']))
                    instances_list.append(inst)
    instances_list.sort(key=lambda x: x['LaunchTime'])
    instance_id = instances_list[0]['InstanceId']
    ssmresponse = ssm.send_command(InstanceIds=[instance_id],
                                   DocumentName='AWS-RunShellScript',
                                   Parameters={'commands': [command]})
    code_pipeline = boto3.client('codepipeline')
    job_id = event['CodePipeline.job']['id']
    code_pipeline.put_job_success_result(jobId=job_Id)
A Lambda function can run for at most 15 minutes; after that it exits no matter what. I think it has something to do with the way you are triggering it.
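Separately, note that the snippet calls put_job_success_result(jobId=job_Id) while the variable is named job_id; that NameError means CodePipeline never receives a result, so the stage hangs until it times out. A minimal sketch of a handler that always reports back, with the actual work elided:

import boto3

def lambda_handler(event, context):
    code_pipeline = boto3.client('codepipeline')
    job_id = event['CodePipeline.job']['id']
    try:
        # ... do the actual work (send_command, etc.) ...
        code_pipeline.put_job_success_result(jobId=job_id)
    except Exception as e:
        # Report failure so the pipeline fails fast instead of hanging
        code_pipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={'type': 'JobFailed', 'message': str(e)},
        )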

Retrieving public DNS of EC2 instance with Boto3

I'm using ipython to get an understanding of Boto3 and interacting with EC2 instances. Here is the code I'm using to create an instance:
import boto3

ec2 = boto3.resource('ec2')
client = boto3.client('ec2')
new_instance = ec2.create_instances(
    ImageId='ami-d05e75b8',
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro',
    KeyName=<name_of_my_key>,
    SecurityGroups=['<security_group_name>'],
    DryRun=False
)
This starts an EC2 instance fine, and I can get the public DNS name, IP, and other info from the AWS console. But when I try to get the public DNS using Boto by doing this:
new_instance[0].public_dns_name
it returns blank quotes. Yet other instance details, such as:
new_instance[0].instance_type
return the correct information.
Any ideas? Thanks.
EDIT:
So if I do:
def get_name(inst):
    client = boto3.client('ec2')
    response = client.describe_instances(InstanceIds=[inst[0].instance_id])
    foo = response['Reservations'][0]['Instances'][0]['NetworkInterfaces'][0]['Association']['PublicDnsName']
    return foo

foo = get_name(new_instance)
print(foo)
Then it will return the public DNS. But it doesn't make sense to me why I need to do all of this.
The Instance object you get back is only hydrated with the response attributes from the create_instances call. Since the DNS name is not available until the instance has reached the running state [1], it will not be immediately present. I imagine the time between you creating the instance and calling describe instances is long enough for the micro instance to start.
import boto3

ec2 = boto3.resource('ec2')
instances = ec2.create_instances(
    ImageId='ami-f0091d91',
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro',
    KeyName='<KEY-NAME>',
    SecurityGroups=['<GROUP-NAME>'])
instance = instances[0]

# Wait for the instance to enter the running state
instance.wait_until_running()

# Reload the instance attributes
instance.load()
print(instance.public_dns_name)
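If you are working with the low-level client instead of the resource, the same wait can be done with a waiter. A minimal sketch; the instance id is a placeholder:

import boto3

client = boto3.client('ec2')
instance_id = 'i-0123456789abcdef0'  # hypothetical id for illustration

# Block until the instance reaches the running state
client.get_waiter('instance_running').wait(InstanceIds=[instance_id])

# The DNS name is populated once the instance is running
desc = client.describe_instances(InstanceIds=[instance_id])
print(desc['Reservations'][0]['Instances'][0]['PublicDnsName'])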
Here is my wrapper:
from boto3.session import Session

def credentials():
    """Build an EC2 resource from explicit credentials."""
    session = Session(aws_access_key_id='XXXXXXXXX',
                      aws_secret_access_key='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')
    # use the session so the explicit credentials are actually applied
    ec2 = session.resource('ec2', region_name='us-east-2')
    return ec2

def get_public_dns(instance_id):
    """Having the instance_id, gives you the public DNS."""
    ec2 = credentials()
    instance = ec2.Instance(instance_id)
    instancePublicDNS = instance.public_dns_name
    return instancePublicDNS
Then you just need your instance_id to get the public DNS of any of your active EC2 instances:
dns = get_public_dns(instance_id)
Remember to change "region_name" to your region and add your "aws_access_key_id" and "aws_secret_access_key".
import boto3
import pandas as pd

session = boto3.Session(profile_name='aws_dev')
dev_ec2_client = session.client('ec2')
response = dev_ec2_client.describe_instances()
df = pd.DataFrame(columns=['InstanceId', 'InstanceType', 'PrivateIpAddress', 'PublicDnsName'])
i = 0
for res in response['Reservations']:
    # assumes one instance per reservation
    df.loc[i, 'InstanceId'] = res['Instances'][0]['InstanceId']
    df.loc[i, 'InstanceType'] = res['Instances'][0]['InstanceType']
    df.loc[i, 'PrivateIpAddress'] = res['Instances'][0]['PrivateIpAddress']
    df.loc[i, 'PublicDnsName'] = res['Instances'][0]['PublicDnsName']
    i += 1
print(df)
Note:
Change profile_name='aws_dev' to your AWS profile name.
This code works with Python 3.
