I have a Lambda function that SSHes into an EC2 instance and runs some commands. The Lambda function is triggered by an SNS topic, and the SNS topic is integrated with a CloudWatch alarm. I am using Python 2.7 in the Lambda function, following https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/. Is it possible to get the public IP address of the EC2 instance that actually triggered the alarm?
It depends on the CloudWatch alarm you are using to trigger the SNS publish.
My suggestion is to print the entire event dictionary in your function and check whether it contains any mention of an EC2 instance ID.
In the case of a CloudWatch EC2 alarm (e.g. CPU usage), you'll find the instance ID in the metric dimensions.
# Python example
import json

message = json.loads(event['Records'][0]['Sns']['Message'])
# Each dimension is a dict like {'name': 'InstanceId', 'value': 'i-xxxxxxxx'}
instance_id = message['Trigger']['Dimensions'][0]['value']
If you have the instance ID, you can easily retrieve the instance's public IP using boto3 as follows:
# Python example
import boto3

instance_id = 'xxxx'  # This is the instance ID from the event
ec2 = boto3.client('ec2')
# describe_instances returns a dict keyed by 'Reservations'
response = ec2.describe_instances(InstanceIds=[instance_id])
public_ip = response['Reservations'][0]['Instances'][0]['PublicIpAddress']
Finally, since you are performing SSH from a Lambda function to your EC2 instance, keep in mind that Lambda functions outside a VPC get a dynamic public IP, so it is impossible to lock down your EC2 instance's security group for SSH. Leaving SSH open to the entire world is not a good practice from a security perspective.
I suggest running both the EC2 instance and the Lambda function in a VPC, restricting SSH access to your EC2 instances to the Lambda function's VPC security group only. In that case you'll need to retrieve the private IP address rather than the public one to be able to SSH into your instance (the Python logic is the same as above; the only difference is that you use 'PrivateIpAddress' instead of 'PublicIpAddress'). This is far more secure than going over the public internet.
I hope this helps.
G
I have an AWS Lambda function which is not able to connect to MongoDB through a VPC.
import pymongo

def handler(event, context):
    try:
        client = pymongo.MongoClient(host="xxxxxxx", port=27017,
                                     username=x1, password=x2,
                                     authSource="x3",
                                     authMechanism='SCRAM-SHA-1')
    except pymongo.errors.ServerSelectionTimeoutError as err:
        print(err)
It is not able to find the server.
I have created a security group and new roles, and given full access to both VPC and Lambda, but it is still not able to connect.
I have taken help from https://blog.shikisoft.com/access-mongodb-instance-from-aws-lambda-python/ as well as https://blog.shikisoft.com/running-aws-lambda-in-vpc-accessing-rds/.
I have been trying since yesterday but with no luck. Please help.
Let me try to help you figure out where the problem is.
1. Are your MongoDB EC2 instance and your Lambda function hosted in the same VPC?
If this is the cause of your problem, you should move your services into the same VPC.
2. Do the security groups attached to your MongoDB EC2 instance and to your Lambda function whitelist/include the default security group?
You have to include the default security group in your security group so that services/instances within that VPC can communicate.
3. Is your hostname publicly or privately accessible?
If Lambda needs to connect over the internet to reach your MongoDB instance, you don't need to attach your Lambda function to a VPC. Inside a VPC, Lambda requires a NAT Gateway to communicate with the outside world. Try to communicate privately if your MongoDB instance and Lambda function are in the same VPC.
https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
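As a quick connectivity check, here is a minimal sketch (the private DNS name is a placeholder) that uses a short server selection timeout, so a security group or routing problem surfaces in a few seconds instead of hanging until the Lambda timeout:
# Sketch: hostname is a placeholder for your instance's private DNS name
import pymongo

def handler(event, context):
    client = pymongo.MongoClient(
        host="ip-10-0-0-12.ec2.internal",    # placeholder private DNS name
        port=27017,
        serverSelectionTimeoutMS=5000)       # fail fast instead of waiting for the Lambda timeout
    try:
        print(client.admin.command('ping'))  # forces a round trip to the server
    except pymongo.errors.ServerSelectionTimeoutError as err:
        print("Cannot reach MongoDB:", err)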
Hope these answers are helpful to you.
I have a Python app running in a Docker container on an EC2 instance managed by ECS (well, that's what I would like...). However, to use services like SSM with boto3, I need to know the region where the instance is running. I don't need any credentials, as I use an instance role that grants access to the service, so a default session is fine.
I know that it is possible to fetch the region with a curl against the instance metadata, but is there a more elegant way to instantiate a client with a region name (or credentials) inside an EC2 instance?
I ran through the boto3 documentation and found
Note that if you've launched an EC2 instance with an IAM role configured, there's no explicit configuration you need to set in boto3 to use these credentials. Boto3 will automatically use IAM role credentials if it does not find credentials in any of the other places listed above.
So why do I need to pass the region name for the SSM client, for example? Is there a workaround?
Region is a required parameter for the SSM client so it knows which region it should interact with. The SDK does not try to infer it, even when you're running inside the AWS cloud.
If you want it to be picked up automatically in your container, the simplest way is to use the AWS environment variables.
In your container definition, specify the environment attribute with a variable named AWS_DEFAULT_REGION and the value of your current region.
By doing this you will not have to specify a region in the SDK within the container.
See the container definition example that uses the environment attribute for more information.
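As a rough sketch, with AWS_DEFAULT_REGION set in the task definition (the region value below is only an illustration), the client can then be created without an explicit region inside the container:
# Sketch: relies on AWS_DEFAULT_REGION being set in the container environment
import os
import boto3

print(os.environ.get('AWS_DEFAULT_REGION'))  # e.g. 'us-east-1', set in the task definition

ssm = boto3.client('ssm')        # no region_name needed; boto3 reads the environment variable
print(ssm.meta.region_name)      # confirms which region the client resolved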
Here is how to retrieve a parameter from the Parameter Store using the instance profile credentials:
#!/usr/bin/env python3
from ec2_metadata import ec2_metadata  # pip install ec2-metadata
import boto3

# Resolve the region from the instance metadata; the instance profile supplies the credentials.
session = boto3.Session(region_name=ec2_metadata.region)
ssm = session.client('ssm')
parameter = ssm.get_parameter(Name='/path/to/a/parameter', WithDecryption=True)
print(parameter['Parameter']['Value'])
Replace the client section with the service of your choice and you should be set.
I am stopping my EC2 instances at night and restarting them in the morning using CloudWatch rules; however, the public DNS/IP changes when we restart.
I want to use Elastic IPs and associate them with the instances.
I have read that we need to re-associate the Elastic IPs once the VM is restarted, and I want to automate this. I got this Lambda Python code (Stop and Start EC2 Instances) to start the VMs.
What should I do to also assign the Elastic IPs to the VMs while restarting?
Here is my code so far
import boto3

region = 'us-west-1'
instances = ['i-12345cb6de4f78g9h', 'i-08ce9b2d7eccf6d26']
ec2 = boto3.client('ec2', region_name=region)

def lambda_handler(event, context):
    ec2.start_instances(InstanceIds=instances)
    print('started your instances: ' + str(instances))
Please help, and also suggest whether any other method is available to achieve this.
Thanks in advance!
If you attach an Elastic IP address to your instance, the EIP will remain attached every time you stop and start the instance.
You do not need to automate adding an EIP on startup.
Take a read of this to familiarize yourself with Elastic IP address functionality.
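That said, if you ever did need to re-associate an address programmatically after starting the instances (for example with an allocation ID you manage yourself; the IDs below are placeholders), a minimal sketch would be:
# Optional sketch: explicitly re-associate an Elastic IP after starting an instance.
# Normally unnecessary, since an EIP stays attached across stop/start in a VPC.
import boto3

ec2 = boto3.client('ec2', region_name='us-west-1')
instance_id = 'i-12345cb6de4f78g9h'              # placeholder instance ID

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter('instance_running').wait(InstanceIds=[instance_id])

ec2.associate_address(InstanceId=instance_id,
                      AllocationId='eipalloc-xxxxxxxx')  # placeholder allocation ID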
When the Python Lambda function is executed, I get a "Task timed out after 3.00 seconds" error. I am trying the same example function.
When I run the same code from Eclipse it works fine and I can see the query result. In the same way, I can connect to the DB instance from MySQL Workbench on my local machine without any issues.
I tried creating a role with the full administrator access policy for this Lambda function, and even then it is not working. The DB instance has a VPC, and I just added my local IP address there using the edit CIDR option so I can access the instance through Workbench on my local machine. For the VPC, subnet and security group parameters in the Lambda function, I gave the same values as in the RDS DB instance.
I have also increased the timeout for the Lambda function and I still see the timeout error.
Any input would be appreciated.
For VPC, subnet and security group parameter in lambda function I gave the same values as I have in the RDS db instance.
Security groups don't automatically trust their own members to access other members.
Add a rule to this security group for "MySQL" (TCP port 3306), but instead of specifying an IP address, start typing sg into the box and select the ID of the security group that you are adding the rule to -- so that the group is self-referential.
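If you prefer to script that change, a rough boto3 equivalent of the self-referencing rule (the security group ID is a placeholder) could look like this:
# Sketch: add a self-referencing MySQL rule (sg-0123456789abcdef0 is a placeholder)
import boto3

ec2 = boto3.client('ec2')
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 3306,
        'ToPort': 3306,
        # Reference the group itself instead of a CIDR, so members trust each other
        'UserIdGroupPairs': [{'GroupId': 'sg-0123456789abcdef0'}],
    }])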
Note that this is probably not the correct long-term fix, because if your Lambda function needs to access the Internet or most AWS services, the Lambda function needs to be on a private subnet behind a NAT device. That does not describe the configuration of the subnet where your RDS instance is currently configured, because you mentioned adding your local IP to allow access to RDS. That suggests your RDS is on a public subnet.
See also Why Do We Need Private Subnets in VPC for a better understanding of public vs. private subnets.
I have an AWS Lambda function which is triggered by an S3 bucket. The Lambda function pings an EC2 instance inside a VPC. The VPC does not have a NAT. For some reason the Lambda function is not able to connect to the EC2 instance and it is timing out. I have tried keeping the Lambda both inside and outside the VPC. I have also tried using both the public and the private IP of the instance, but neither seems to work. Any suggestions on what I could do next?
The code for the Lambda function is as follows:
from __future__ import print_function
import json
import boto3
import urllib2

print('Loading function')
s3 = boto3.client('s3')

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    url = urllib2.urlopen("http://ip-address/API/")
First, with the Lambda function inside the VPC, make sure you use the private IP of the EC2 instance. Then open port 80 in the EC2 instance's security group to allow incoming connections from anything belonging to the security group you assigned to the Lambda function.
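As a rough sketch (the private IP below is a placeholder), the call from the handler can also pass an explicit timeout so a blocked security group shows up as a quick, logged error rather than a silent Lambda timeout:
# Sketch: reach the instance over its private IP with a short timeout (10.0.0.12 is a placeholder)
import urllib2

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    try:
        response = urllib2.urlopen("http://10.0.0.12/API/", timeout=5)
        print(bucket, response.getcode())
    except urllib2.URLError as err:
        print("Could not reach the instance:", err)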