Is it possible to create an EC2 instance using boto3 in Python?
The Boto3 documentation is not helping here, and I couldn't find any helpful documents online. Please provide some sample code/links.
The API has changed, but it's right there in the documentation:
# Boto 3
ec2.create_instances(ImageId='<ami-image-id>', MinCount=1, MaxCount=5)
Link to the documentation:
http://boto3.readthedocs.org/en/latest/guide/migrationec2.html#launching-new-instances
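For context, the one-liner above assumes an ec2 resource object already exists; a minimal sketch of the full setup (the AMI ID is a placeholder, and the region and instance type are assumptions for illustration):
import boto3

# Create an EC2 service resource; the region here is just an example
ec2 = boto3.resource('ec2', region_name='us-west-2')

# Launch between 1 and 5 instances of the given AMI
instances = ec2.create_instances(
    ImageId='<ami-image-id>',  # placeholder: use a real AMI ID for your region
    MinCount=1,
    MaxCount=5,
    InstanceType='t2.micro',   # assumed instance type for this sketch
)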
You can run the code below, taken from the boto3 docs. Add or remove parameters as per your requirements, but this is what you would normally need:
import boto3

client = boto3.client('ec2', region_name='us-west-2')

response = client.run_instances(
    BlockDeviceMappings=[
        {
            'DeviceName': '/dev/xvda',
            'Ebs': {
                'DeleteOnTermination': True,
                'VolumeSize': 8,
                'VolumeType': 'gp2'
            },
        },
    ],
    ImageId='ami-6cd6f714',
    InstanceType='t3.micro',
    MaxCount=1,
    MinCount=1,
    Monitoring={
        'Enabled': False
    },
    SecurityGroupIds=[
        'sg-1f39854x',
    ],
)
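The run_instances call returns a dictionary describing the launched instances, so you can pull out the new instance ID like this:
# 'Instances' holds one entry per launched instance
instance_id = response['Instances'][0]['InstanceId']
print(instance_id)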
The link you're really looking for in the documentation is the create_instances() method of the ServiceResource object. This is the type of object you are calling if you create an EC2 resource like this:
s = boto3.Session(region_name="us-west-1")
ec2 = s.resource('ec2')
...
instance = ec2.create_instances(**y_kwargs)
This contains a more detailed example and a longer list of available parameters.
You can also get parameter values for AWS instances that are already running using the AWS command line interface:
$ aws ec2 describe-instances
This prints JSON output from which relevant parameters can be extracted and passed to the create_instances() method. (Or, you can use a boto3 client and call its describe_instances() method.)
(Note: If you're wondering what the difference is between the Client and the Resource, they serve different purposes for the same end - the client is a lower-level interface while the Resource is a higher-level interface.)
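As a rough sketch of that difference, here is the same describe operation through both interfaces (the region is just an example):
import boto3

# Low-level client: returns nested dictionaries
client = boto3.client('ec2', region_name='us-west-1')
response = client.describe_instances()
for reservation in response['Reservations']:
    for inst in reservation['Instances']:
        print(inst['InstanceId'], inst['State']['Name'])

# Higher-level resource: returns Instance objects with attributes
ec2 = boto3.resource('ec2', region_name='us-west-1')
for instance in ec2.instances.all():
    print(instance.id, instance.state['Name'])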
Refer to the API docs, which list all available options for creating an instance:
http://boto3.readthedocs.org/en/latest/reference/services/ec2.html#EC2.Subnet.create_instances
If you're running this from your Windows computer, you need to configure the AWS CLI with proper EC2 permissions to launch an instance.
import boto3

ec2 = boto3.resource('ec2')

instance = ec2.create_instances(
    ImageId='ami-5eb63a32',
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro',
)
print(instance[0].id)
Below is my code to launch an instance in a custom VPC, but I am seeing the error below. I need your assistance in finding the problem:
import json
import boto3

def lambda_handler(event, context):
    session_client = boto3.client(service_name='ec2')
    response = session_client.run_instances(
        # SecurityGroupIds=['sg-0ffb80bb93205a004'],
        # SubnetId='subnet-0bb30254459fd4714',
        MaxCount=1,
        MinCount=1,
        NetworkInterfaces=[
            {
                'NetworkInterfaceId': 'eni-0846a5c0c47e98b3c',
                'Description': 'Network Interface Description',
                'DeviceIndex': 123,
                'SubnetId': 'subnet-0bb30254459fd4714'
            },
        ],
        LaunchTemplate={
            'LaunchTemplateId': 'lt-0d3c7dd295796ad6b',
            'Version': '1'
        })
Error:
"Network interfaces and an instance-level subnet ID may not be specified on the same request",
"errorType": "ClientError"
Thanks,
Bapu Reddy Battu
There are two SubnetId parameters that can be passed when creating an EC2 instance through boto3:
1) SubnetId at the root level of the request:
SubnetId (string) -- [EC2-VPC] The ID of the subnet to launch the
instance into. If you specify a network interface, you must specify
any subnets as part of the network interface.
Since you are passing the NetworkInterfaces parameter, this root-level SubnetId must not be passed.
2) SubnetId as part of the NetworkInterfaces parameter:
SubnetId (string) -- The ID of the subnet associated with the network
interface. Applies only if creating a network interface when launching
an instance.
Since you are using an already existing network interface, you should not pass the SubnetId parameter inside NetworkInterfaces either.
Reference documentation: EC2 run instances - Boto3 documentation
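In other words, dropping SubnetId from the NetworkInterfaces entry should resolve the error. A sketch of the corrected call, reusing the IDs from the question (note that the primary network interface must use DeviceIndex 0):
response = session_client.run_instances(
    MaxCount=1,
    MinCount=1,
    NetworkInterfaces=[
        {
            'NetworkInterfaceId': 'eni-0846a5c0c47e98b3c',
            'Description': 'Network Interface Description',
            'DeviceIndex': 0,  # the primary interface must sit at device index 0
        },
    ],
    LaunchTemplate={
        'LaunchTemplateId': 'lt-0d3c7dd295796ad6b',
        'Version': '1'
    })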
I want to write a Python 3.6 query in AWS Lambda to get details on an AWS SSM Parameter Store entry, but I get a null response. If I query via the AWS CLI, I get the details of the Parameter Store item, including the AMI ID, which is my ultimate goal. The Parameter Store path is:
/aws/service/ami-windows-latest/Windows_Server-2019-English-Core-Base-2019.07.12
My code is below; any insight into why this is not returning the expected results would be greatly appreciated.
import json
import boto3

def lambda_handler(event, context):
    client = boto3.client('ssm')
    response = client.get_parameters(Names=['/aws/service/ami-windows-latest/Windows_Server-2019-English-Full-Base']),
    #return "Success"
    print(response)
I'm expecting the same output that I get when I run the following AWS CLI command.
aws ssm get-parameters --names /aws/service/ami-windows-latest/Windows_Server-2019-English-Full-Base --region us-east-1
I figured this out with the help of a co-worker with more Python experience. The code is below.
import boto3

client = boto3.client('ssm')

def lambda_handler(event, context):
    parameter = client.get_parameter(Name='/aws/service/ami-windows-latest/Windows_Server-2019-English-Full-Base', WithDecryption=True)
    print(parameter)
    return parameter['Parameter']['Value']
It's worth checking that your Lambda has enough permissions to interact with AWS SSM. For the initial check, I would suggest giving it full access via the managed policy AmazonSSMFullAccess.
The Lambda IAM role should have the above policy attached.
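If you're scripting that setup, attaching the managed policy to the Lambda's execution role might look like this ('my-lambda-role' is a hypothetical role name):
import boto3

iam = boto3.client('iam')

# Attach the AWS-managed SSM full-access policy to the execution role;
# 'my-lambda-role' is a placeholder for your function's actual role name
iam.attach_role_policy(
    RoleName='my-lambda-role',
    PolicyArn='arn:aws:iam::aws:policy/AmazonSSMFullAccess'
)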
You can use this library, which has error handling out of the box:
AWStanding
import os
from awstanding.parameter_store import load_parameters

LOOKUP_DICT = {
    '/aws/service/ami-windows-latest/Windows_Server-2019-English-Full-Base': 'CONVENIENT_NAME'
}

load_parameters(LOOKUP_DICT)

os.environ.get('CONVENIENT_NAME')
The advantage here is that you can easily load a lot of parameters with minimum overhead.
I am trying to create basic cloud infrastructure that can run through any AWS account. I am using the Python boto3 module to create a cluster, task definition, and service, which are created successfully from my code. However, the task is not running from my service; when I checked the Events, I can see the error: 'service my_service was unable to place a task because no container instance met all of its requirements. Reason: No Container Instances were found in your cluster.'
import boto3

client = boto3.client('ecs')

cluster_response = client.create_cluster(
    clusterName='my_cluster',
)
print(cluster_response)

taskdef_response = client.register_task_definition(
    family='my_taskdef',
    containerDefinitions=[
        {
            'name': 'my_taskdef',
            'image': '****/sample:latest'
        }
    ],
    memory='256',
    cpu='1024',
    taskRoleArn='ecsTaskExecutionRole'
)
print(taskdef_response)

service_response = client.create_service(
    cluster='my_cluster',
    serviceName='my_service',
    taskDefinition='my_taskdef',
    desiredCount=1,
    launchType='EC2'
)
print(service_response)
I am expecting this to run my Docker image in the task spawned by the service. The AWS account does not have any existing EC2 instances, and I want to run a task that creates an EC2 instance from this code (without needing to change anything from inside the EC2 instance). How can I run the task?
I'm trying to set and get keys from ElastiCache (memcached) from a Python Lambda function using Boto3. I can figure out how to get the endpoints, but that's pretty much it. Is there some documentation out there that shows the entire process?
It sounds like you are trying to interact with Memcached via Boto3. This is not possible. Boto3 is for interacting with the AWS API. You can manage your ElastiCache servers via the AWS API, but you can't interact with the Memcached software running on those servers. You need to use a Memcached client library like python-memcached in your Python code to actually get and set keys in your Memcached cluster.
Also, your Lambda function will need to reside in the same VPC as the ElastiCache node(s).
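For the get/set part, a minimal sketch using the python-memcached library (the endpoint below is a placeholder for your cluster's configuration endpoint):
import memcache

# Connect to the ElastiCache endpoint; the hostname is a placeholder
mc = memcache.Client(['my-cluster.xxxxxx.cfg.use1.cache.amazonaws.com:11211'])

mc.set('some_key', 'some_value')
print(mc.get('some_key'))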
I had the exact timeout problem listed in the comment on the older post. The bug was in my security group for memcached. Here is the working version in Terraform:
resource "aws_security_group" "memcached" {
  vpc_id = "${aws_vpc.dev.id}"
  name   = "memcached SG"

  ingress {
    from_port   = "${var.memcached_port}"
    to_port     = "${var.memcached_port}"
    protocol    = "tcp"
    cidr_blocks = ["${var.public_subnet_cidr}"]
  }

  egress {
    from_port   = "${var.memcached_port}"
    to_port     = "${var.memcached_port}"
    protocol    = "tcp"
    cidr_blocks = ["${var.public_subnet_cidr}"]
  }

  tags = {
    Name = "memcached SG"
  }
}
I tested the connection by creating an EC2 instance in the public subnet and running "telnet (your cache node URL) 11211".
I need to write a Python script using boto3 which does the following:
set AWS access & secret keys for my session
then create an EC2 instance (using an AMI image)
execute a command in the newly created EC2 instance
It's not really difficult; what you are asking is mostly covered in the boto3 docs.
For creating a new t2.micro in us-east-1a running Ubuntu 14.04, you should be able to do it like this:
import boto3

# latest ubuntu ami
ami_id = 'ami-5189a661'
# define userdata to be run at instance launch
userdata = """#cloud-config
runcmd:
- touch /home/ubuntu/heythere.txt
"""
conn_args = {
    'aws_access_key_id': 'YOURKEY',
    'aws_secret_access_key': 'YOUSECACCESSKEY',
    'region_name': 'us-east-1'
}

ec2_res = boto3.resource('ec2', **conn_args)

new_instance = ec2_res.create_instances(
    ImageId=ami_id,
    MinCount=1,
    MaxCount=1,
    UserData=userdata,
    InstanceType='t2.micro'
)
print(new_instance[0].id)
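The userdata above covers running a command at first boot. If you instead need to execute a command on the instance after it is already running, one option is SSM Run Command; a sketch, assuming the instance has the SSM agent and an instance profile that permits SSM (neither is set up by the code above):
ssm = boto3.client('ssm', **conn_args)

# Wait until the instance is running before sending a command
new_instance[0].wait_until_running()

# Send a shell command via SSM Run Command; the command itself is
# just an example
result = ssm.send_command(
    InstanceIds=[new_instance[0].id],
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': ['touch /home/ubuntu/hello_from_ssm.txt']},
)
print(result['Command']['CommandId'])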