What I'm trying to do is avoid manually creating new EC2 instances as needed through the aws.amazon.com console, and instead programmatically start up new instances based off an AMI using Python's boto module:
import boto.ec2
conn = boto.ec2.connect_to_region(
    "us-west-2",
    aws_access_key_id='<aws access key>',
    aws_secret_access_key='<aws secret key>')
# How do I now spin up new instances based off an AMI (snapshot) already preconfigured on EC2?
As in my comment, I'm trying to start up new instances based off a specific, given AMI ID.
I can't seem to find a good way to do this. Can anyone help here?
Thank you
From the documentation:
Possibly, the most important and common task you’ll use EC2 for is to
launch, stop and terminate instances. In its most primitive form, you
can launch an instance as follows:
conn.run_instances('<ami-image-id>')
This will launch an instance in the specified region with the default parameters. You will not be
able to SSH into this machine, as it doesn’t have a security group
set. See EC2 Security Groups for details on creating one.
Now, let’s say that you already have a key pair, want a specific type
of instance, and you have your security group all setup. In this case
we can use the keyword arguments to accomplish that:
conn.run_instances(
    '<ami-image-id>',
    key_name='myKey',
    instance_type='c1.xlarge',
    security_groups=['your-security-group-here'])
The <ami-image-id> is where you fill in your AMI ID.
Related
1) Instantiate an AWS Linux micro instance using the AWS Python API (including authentication to AWS)
2) Update the instance with tags: customer=ACME, environment=PROD
3) Assign a security group to the instance
To program in Python on AWS, you should use the boto3 library.
You will need to do the following:
supply credentials to the library (link)
create an EC2 client (link)
use the EC2 client to launch EC2 instances using run_instances (link)
You can specify both tags and security groups in the run_instances call. Additionally, the boto3 documentation provides some Amazon EC2 examples that will help.
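For illustration, a minimal sketch covering all three steps; the region, AMI ID, key name, and security group ID below are placeholders to replace with your own:
import boto3

# Credentials can also come from ~/.aws/credentials or environment variables.
ec2 = boto3.client('ec2', region_name='us-east-1')

response = ec2.run_instances(
    ImageId='ami-xxxxxxxx',            # placeholder AMI ID
    InstanceType='t2.micro',
    KeyName='myKey',                   # placeholder key pair name
    MinCount=1,
    MaxCount=1,
    SecurityGroupIds=['sg-xxxxxxxx'],  # placeholder security group
    TagSpecifications=[{
        'ResourceType': 'instance',
        'Tags': [
            {'Key': 'customer', 'Value': 'ACME'},
            {'Key': 'environment', 'Value': 'PROD'},
        ],
    }],
)
print(response['Instances'][0]['InstanceId'])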
You may want to take a look at this project:
https://github.com/nchammas/flintrock
It is a Hadoop and Apache Spark clustering project, but it can inspire you. It actually has many of the features you want, such as security groups and filtering by tag name. Just look around the code.
I want to programmatically get all the actions a user is allowed to do across AWS services.
I've tried to fiddle with simulate_principal_policy but it seems this method expects a list of all actions, and I don't want to maintain a hard-coded list.
I also tried calling it with iam:*, for example, and got a generic 'implicitDeny' response, so I know the user is not permitted all actions, but I require a higher granularity of actions.
Any ideas as to how do I get the action list dynamically?
Thanks!
To start with, there is no programmatic way to retrieve a list of all possible actions (regardless of whether a user is permitted to use them).
You would need to construct a list of possible actions before checking the security. As an example, the boto3 SDK for Python contains an internal list of commands that it uses to validate commands before sending them to AWS.
Once you have a particular action, you could use the Policy Simulator API to validate whether a given user would be allowed to make a particular API call. This is much easier than attempting to parse the various Allow and Deny permissions associated with a given user.
However, a call might be denied based upon the specific parameters of the call. For example, a user might have permissions to terminate any Amazon EC2 instance that has a particular tag, but cannot terminate all instances. To correctly test this, an InstanceId would need to be provided to the simulation.
Also, permissions might be restricted by IP Address and even Time of Day. Thus, while a user would have permission to call an Action, where and when they do it will have an impact on whether the Action is permitted.
Bottom line: It ain't easy! AWS will validate permissions at the time of the call. Use the Policy Simulator to obtain similar validation results.
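For illustration, a minimal sketch of such a simulation call with boto3; the user and instance ARNs below are hypothetical placeholders:
import boto3

iam = boto3.client('iam')

# Hypothetical user and instance ARNs; passing a specific resource matters
# when permissions are conditioned on tags or other resource attributes.
response = iam.simulate_principal_policy(
    PolicySourceArn='arn:aws:iam::123456789012:user/some-user',
    ActionNames=['ec2:TerminateInstances'],
    ResourceArns=['arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0'],
)
for result in response['EvaluationResults']:
    print(result['EvalActionName'], result['EvalDecision'])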
I am surprised no one has answered this question correctly. Here is code that uses boto3 and addresses the OP's question directly:
import boto3

session = boto3.Session(region_name='us-east-1')
for service in session.get_available_services():
    service_client = session.client(service)
    print(service)
    print(service_client.meta.service_model.operation_names)
IAM, however, is a special case as it won't be listed in the get_available_services() call above:
iam = session.client('iam')
print('iam')
print(iam.meta.service_model.operation_names)
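To feed these operation names into the Policy Simulator from the earlier answer, they need to be IAM-style action strings. A rough sketch, assuming the service's endpoint prefix matches the IAM action namespace (usually true, e.g. 'ec2', but there are exceptions):
session = boto3.Session(region_name='us-east-1')
for service in session.get_available_services():
    model = session.client(service).meta.service_model
    for operation in model.operation_names:
        # e.g. 'ec2:DescribeInstances'; the prefix mapping is approximate.
        print(f'{model.endpoint_prefix}:{operation}')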
Problem: Given N instances launched as part of a VMSS, I would like my application code on each Azure instance to discover the IP addresses of the other peer instances. How do I do this?
The overall intent is to cluster the instances so as to provide active-passive HA or keep the configuration in sync.
It seems there is some support for REST API based querying: https://learn.microsoft.com/en-us/rest/api/virtualmachinescalesets/
I would like to know of any other way to do it, i.e. either the Python SDK or the instance metadata URL, etc.
The REST API you mentioned has a Python SDK, the "azure-mgmt-compute" client:
https://learn.microsoft.com/python/api/azure.mgmt.compute.compute.computemanagementclient
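As an untested sketch using the classic (pre-track-2) SDK, the NIC listing actually lives in the companion azure-mgmt-network client; the subscription ID, credentials, resource group, and scale set name below are placeholders:
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.network import NetworkManagementClient

# Placeholder credentials and names; fill in your own.
credentials = ServicePrincipalCredentials(
    client_id='<client-id>', secret='<secret>', tenant='<tenant-id>')
network_client = NetworkManagementClient(credentials, '<subscription-id>')

# List every NIC attached to the scale set and print each private IP.
nics = network_client.network_interfaces.list_virtual_machine_scale_set_network_interfaces(
    '<resource-group>', '<vmss-name>')
for nic in nics:
    for ip_config in nic.ip_configurations:
        print(ip_config.private_ip_address)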
One way to do this would be to use instance metadata. Right now instance metadata only shows information about the VM it's running on, e.g.
curl -H Metadata:true "http://169.254.169.254/metadata/instance/compute?api-version=2017-03-01"
{"compute":
{"location":"westcentralus","name":"imdsvmss_0","offer":"UbuntuServer","osType":"Linux","platformFaultDomain":"0","platformUpdateDomain":"0",
"publisher":"Canonical","sku":"16.04-LTS","version":"16.04.201703300","vmId":"e850e4fa-0fcf-423b-9aed-6095228c0bfc","vmSize":"Standard_D1_V2"},
"network":{"interface":[{"ipv4":{"ipaddress":[{"ipaddress":"10.0.0.4","publicip":"52.161.25.104"}],"subnet":[{"address":"10.0.0.0","dnsservers":[],"prefix":"24"}]},
"ipv6":{"ipaddress":[]},"mac":"000D3AF8BECE"}]}}
You could do something like have each VM send its info to a listener on VM#0 or to an external service, or you could combine this with Azure Files and have each VM write to a common share. There's an Azure template proof of concept here which outputs information from each VM to an Azure File share: https://github.com/Azure/azure-quickstart-templates/tree/master/201-vmss-azure-files-linux - every VM has a mountpoint which contains info written by every VM.
Is there a way to find the security groups of an EC2 instance using Python and possibly Boto?
I can only find docs about creating or removing security groups, but I want to trace which security groups have been added to my current EC2 instance.
To list the security groups of the current instance, you don't need Boto/Boto3. Make use of the AWS metadata server:
import os

# Query the instance metadata service for this instance's security groups.
sgs = os.popen("curl -s http://169.254.169.254/latest/meta-data/security-groups").read()
print(sgs)
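If you do want to use Boto, a sketch along these lines with boto3 should also work, assuming credentials are configured and the instance is allowed to call ec2:DescribeInstances:
import boto3
import urllib.request

# Get this instance's ID from the metadata service, then query EC2.
instance_id = urllib.request.urlopen(
    'http://169.254.169.254/latest/meta-data/instance-id').read().decode()

ec2 = boto3.client('ec2')
response = ec2.describe_instances(InstanceIds=[instance_id])
for sg in response['Reservations'][0]['Instances'][0]['SecurityGroups']:
    print(sg['GroupId'], sg['GroupName'])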
You can check it from that instance by executing the command below:
curl http://169.254.169.254/latest/meta-data/security-groups
or from the AWS CLI:
aws ec2 describe-security-groups
I am a new user of Memcached and ElastiCache, using a Python environment for development. I have successfully created an ElastiCache cluster in AWS and also created a node, and therefore got two DNS names, one for the cluster itself and another for the node. Now I am using the memcache module in Python from one of my instances, which belongs to the same security group as the ElastiCache cluster:
>>> import memcache
>>> mc = memcache.Client(['client-facing-pi.6qkr6p.0001.apse1.cache.amazonaws.com:11211'], debug=0)
>>> mc.set('hello','world')
0
So, I am getting 0 as the return value.
I even tried with the cluster DNS name, but that also returns 0 when setting a value. What is the problem?
Thank you.
ElastiCache uses what's known as a Cache Security Group. Note that it isn't the same as the regular security groups you've been dealing with so far. You will need to enable access to this and allow your EC2 instance to access it. You can read more on managing Cache Security Groups here.
I found the problem. As I was using a VPC, I had to go to my instances and, in the security group, add the cache cluster's port number, i.e. 11211. Now it's working fine.