I need a way to identify AWS VPC subnets that route through a virtual private gateway, using Python Boto3. In other words, how can I use Boto3 to identify the private subnets within a VPC?
The goal is to create a Lambda function that identifies the private subnets within a given VPC, and then to launch another Lambda function within those private subnets.
Below is the code I have so far. It lists all subnets within each VPC that has a virtual private gateway attached.
import boto3

def get_vpn_gateways():
    ec2_client = boto3.client('ec2')
    response = ec2_client.describe_vpn_gateways()
    return response

def get_vpc_subnets(VpcId):
    ec2 = boto3.resource('ec2')
    vpc = ec2.Vpc(VpcId)
    subnets = vpc.subnets.all()
    return subnets
# Get VPC Ids associated with the virtual private gateways
vpc_list = []
virtual_gateways = get_vpn_gateways()
for virtual_gateway in virtual_gateways["VpnGateways"]:
    vgwId = virtual_gateway["VpnGatewayId"]
    vpcAttach = virtual_gateway["VpcAttachments"]
    vpc_list.append(vpcAttach[0]["VpcId"])

for vpc in vpc_list:
    print(vpc)
    subnets = get_vpc_subnets(vpc)
    for subnet in subnets:
        print(subnet)
The code so far lists all the subnets within the VPC. I am thinking of using the route table as the key identifier for a private subnet: if a subnet's routes go through the VGW, I will deem it private. Does that make sense?
My reasoning is that if the route for 0.0.0.0/0 does not point to an internet gateway, the subnet is private. A private subnet may route through a NAT gateway or a virtual private gateway, but never directly through an internet gateway. So I wrote the code below.
import boto3

ec2 = boto3.resource('ec2')
route_tables = ec2.route_tables.all()
for route_table in route_tables:
    for ra in route_table.routes_attribute:
        if ra.get('DestinationCidrBlock') == '0.0.0.0/0' and ra.get('GatewayId') is None:
            for rs in route_table.associations_attribute:
                if rs.get('SubnetId') is not None:
                    print(rs.get('SubnetId'))
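One caveat: a default route through a virtual private gateway does populate GatewayId (with a vgw-* value), so the GatewayId is None test above actually skips VGW-routed subnets. A variant that treats any default route not pointing at an internet gateway as private might look like this (a sketch, not tested against your setup):

import boto3

ec2 = boto3.resource('ec2')
for route_table in ec2.route_tables.all():
    for ra in route_table.routes_attribute:
        if ra.get('DestinationCidrBlock') != '0.0.0.0/0':
            continue
        gateway = ra.get('GatewayId') or ''
        # Private: the default route is not an internet gateway
        # (covers NAT gateways, NAT instances, and vgw-* routes).
        if not gateway.startswith('igw-'):
            for rs in route_table.associations_attribute:
                if rs.get('SubnetId') is not None:
                    print(rs.get('SubnetId'))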
Here's the final working code. It looks up the private subnets within each VPC that has a virtual private gateway attached, checks whether each private subnet is in the VPC's subnet list, and saves it for use by another Lambda function later. This is probably not the most effective/efficient way to achieve my goal, so I'm keen to see better solutions.
import boto3

def get_vpn_gateways():
    ec2_client = boto3.client('ec2')
    response = ec2_client.describe_vpn_gateways()
    return response

def get_vpc_subnets(VpcId):
    ec2 = boto3.resource('ec2')
    vpc = ec2.Vpc(VpcId)
    subnets = vpc.subnets.all()
    return subnets

def get_private_subnets():
    priv_subnet_list = []
    ec2 = boto3.resource('ec2')
    route_tables = ec2.route_tables.all()
    for route_table in route_tables:
        for ra in route_table.routes_attribute:
            if ra.get('DestinationCidrBlock') == '0.0.0.0/0' and ra.get('GatewayId') is None:
                for rs in route_table.associations_attribute:
                    if rs.get('SubnetId') is not None:
                        priv_subnet_list.append(rs.get('SubnetId'))
    return priv_subnet_list

def lambda_handler(event, context):
    vpc_list = []
    vpc_subnet_list = []
    virtual_gateways = get_vpn_gateways()
    lambda_subnets = []
    # Get VPC Ids associated with the virtual private gateway
    for virtual_gateway in virtual_gateways["VpnGateways"]:
        vgwId = virtual_gateway["VpnGatewayId"]
        vpcAttach = virtual_gateway["VpcAttachments"]
        vpc_list.append(vpcAttach[0]["VpcId"])
    # Get subnets within the VPC
    for vpc in vpc_list:
        subnets = get_vpc_subnets(vpc)
        for subnet in subnets:
            vpc_subnet_list.append(subnet.id)
    # Get Private subnets from the subnet list
    for privsubnet in get_private_subnets():
        if privsubnet in vpc_subnet_list:
            lambda_subnets.append(privsubnet)
    # Hand the private subnet IDs back to the caller
    return lambda_subnets
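Since better solutions were invited, one possible refinement (a sketch, untested) is to filter route tables server-side by VPC ID instead of scanning every route table in the account, and to classify a subnet as private when no default route targets an igw-*:

import boto3

ec2_client = boto3.client('ec2')

def private_subnets_in_vpc(vpc_id):
    """Return the subnet IDs in vpc_id whose default route is not an internet gateway."""
    private = []
    # Server-side filter: only this VPC's route tables come back.
    response = ec2_client.describe_route_tables(
        Filters=[{'Name': 'vpc-id', 'Values': [vpc_id]}]
    )
    for rt in response['RouteTables']:
        default_routes = [r for r in rt['Routes']
                          if r.get('DestinationCidrBlock') == '0.0.0.0/0']
        # Private when no default route points at an igw-*.
        if not any(r.get('GatewayId', '').startswith('igw-') for r in default_routes):
            private += [a['SubnetId'] for a in rt['Associations'] if 'SubnetId' in a]
    return private

The resulting IDs could then feed the VpcConfig (SubnetIds and SecurityGroupIds) of the second Lambda function.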
Related
For security purposes, I have disabled public access under the Networking tab in Key Vault and have a private endpoint in place. Both the Key Vault and the private endpoint reside in the same resource group. I have an app registration for my application, which I have granted access under Access policies in Key Vault.
Using the Python SDK:

from azure.keyvault.secrets import SecretClient
from azure.identity import ClientSecretCredential as cs

keyVaultName = "<NAME>"
kvURI = "https://<NAME>.vault.azure.net"
AZ_TENANT_ID = '<AZ_TENANT_ID>'
AZ_CLIENT_ID = '<AZ_CLIENT_ID>'
AZ_CLIENT_SECRET = '<AZ_CLIENT_SECRET>'

credential = cs(
    tenant_id=AZ_TENANT_ID,
    client_id=AZ_CLIENT_ID,
    client_secret=AZ_CLIENT_SECRET)

def set_secret(secretname, secretvalue):
    print(credential)
    secret_client = SecretClient(vault_url=kvURI, credential=credential)
    secret = secret_client.set_secret(secretname, secretvalue, enabled=True)
    sec_dic = {}
    sec_dic['name'] = secret.name
    sec_dic['value'] = secret.value
    sec_dic['properties'] = secret.properties.version
    return sec_dic

xx = set_secret('g', 'ff')
print(xx)
When running this code, I get the following error:
azure.core.exceptions.HttpResponseError: (Forbidden) Public network access is disabled and request is not from a trusted service nor via an approved private link.
Code: Forbidden
Message: Public network access is disabled and request is not from a trusted service nor via an approved private link.
Inner error: {
"code": "ForbiddenByConnection"
}
What am I doing wrong? How do I connect to a Key Vault that has no public access, only a private endpoint?
I have reproduced this in my environment and, after following the same process you explained, got the same error you did.
This error appears because you created a private endpoint: once public network access is disabled, the vault only accepts requests that arrive through the private link from inside the Virtual Network.
When you create a private endpoint in a particular Virtual Network/subnet, you need to create a virtual machine integrated with that Virtual Network. Open the VM that is integrated with that Virtual Network/subnet and access the Key Vault from inside it; the request then returns the expected result.
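As a quick check of where requests are actually going (a sketch; the vault name placeholder matches the code above), resolve the vault hostname from the machine running the code:

import socket

# From inside the VNet, the private endpoint's DNS zone should resolve this to a
# private address (e.g. 10.x.x.x); from anywhere else it resolves to a public IP,
# and the vault rejects the request with the Forbidden error above.
print(socket.gethostbyname("<NAME>.vault.azure.net"))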
References:
azure - Unable to get storage account container details - Stack Overflow
Azure Key Vault not allow access via private endpoint connection
Azure functions and Azure KeyVault communicating through service endpoint
I have a Python Lambda function that uses requests to scrape metrics from Prometheus over HTTP. I began by zipping the requests package together with a Python function on my local machine, then uploaded and attached the zip file as a layer to the Lambda function. When I test the function, it times out on the GET request.
import requests
import copy
import json

print("Imported packages successfull")

# Grabbing metadata on metrics categories
print("Reading URL")
URL_meta = 'http://xxx.xxx.x.xxx:9091/api/v1/metadata'
print("URL read successfull")

print("Initiating requests.get")
r = requests.get(url=URL_meta)  # <--------- times out here
print("successful requests.get")
metadata = r.json()
I then made a VPC (10.0.0.0/16) with a public and a private subnet, each attached to its respective public and private route table, an internet gateway attached to the VPC, and a public NAT gateway connected to the public subnet. I made a role and attached the permission policies AWSLambdaExecute and AWSLambdaVPCAccessExecutionRole. Then, under Lambda Function > Configuration > VPC, I added the VPC I made and attached the private subnet and the default VPC security group. Under Lambda Function > Configuration > Permissions, I assigned the execution role.
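One thing worth double-checking in such a setup is that the private subnet's route table really sends 0.0.0.0/0 to the NAT gateway rather than the internet gateway. A quick boto3 check (a sketch; the subnet ID is a placeholder):

import boto3

ec2 = boto3.client('ec2')
# Placeholder ID for the Lambda's private subnet.
response = ec2.describe_route_tables(
    Filters=[{'Name': 'association.subnet-id', 'Values': ['subnet-0123456789abcdef0']}]
)
# An empty result means the subnet implicitly uses the VPC's main route table.
for rt in response['RouteTables']:
    for route in rt['Routes']:
        # The 0.0.0.0/0 entry should carry a NatGatewayId, not an igw-* GatewayId.
        print(route.get('DestinationCidrBlock'), route.get('NatGatewayId'), route.get('GatewayId'))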
When I run the function it times out at the same location. Here are the logs:
Function Logs
START RequestId: aa1e5ffa-ed04-4274-9a66-a5ac455b5325 Version: $LATEST
Imported packages successfull
Reading URL
URL read successfull
Initiating requests.get
END RequestId: xxxxxxxxx
I have successfully run the same requests function from a Jupyter Notebook on my local machine and have verified that the http address can be accessed from various devices.
I have increased the timeout period to 30 seconds. I have searched Google and checked these SO posts:
Cannot use Requests-Module on AWS Lambda
Issue with VPC attached to Lambda function
Why is my Lambda unable to access the internet?
How to make a HTTP rest call in AWS lambda using python?
Python request in AWS Lambda timing out
get request Task timed out in Lambda using Python
I've watched numerous YouTube videos, pored over the AWS docs, read Medium articles, and checked the "Similar Questions" that were suggested when I started writing this. I am stuck.
I'm trying to print the available buckets on AWS but keep failing. I tried multiple tutorials online and get "cannot locate credentials" and "'s3.ServiceResource' object has no attribute" errors.
s3 = boto3.resource('s3',aws_access_key_id = "Random",aws_secret_access_key = "Secret" )
client = s3.client('s3')
response = client.list_buckets()
print(response)
Can you try:
for bucket in s3.buckets.all():
    print(bucket.name)
The problem is probably because you are defining s3 as a resource:
s3 = boto3.resource('s3')
But then you are trying to use it as a client:
client = s3.client('s3')
That won't work. If you want a client, create one with:
s3_client = boto3.client('s3')
Or, you can extract a client from the resource:
s3_resource = boto3.resource('s3')
response = s3_resource.meta.client.list_buckets()
Or, sticking with the resource, you can use:
s3_resource = boto3.resource('s3')
for bucket in s3_resource.buckets.all():
    # Do something with bucket
    print(bucket.name)
Confused? Try to stick with one method. Client directly matches the underlying API calls made to S3 and is the same as in all other languages' SDKs. Resource is a more "Pythonic" way of accessing resources; the calls get translated into client API calls. Resources can be a little more challenging when figuring out required permissions, since there isn't a one-to-one mapping to the actual API calls.
I'm trying to set and get keys from ElastiCache (memcached) from a python lambda function using Boto3. I can figure out how to get the endpoints but that's pretty much it. Is there some documentation out there that shows the entire process?
It sounds like you are trying to interact with Memcached via Boto3. This is not possible. Boto3 is for interacting with the AWS API. You can manage your ElastiCache servers via the AWS API, but you can't interact with the Memcached software running on those servers. You need to use a Memcached client library like python-memcached in your Python code to actually get and set keys in your Memcached cluster.
Also, your Lambda function will need to reside in the same VPC as the ElastiCache node(s).
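To make that concrete, here is a minimal sketch using python-memcached; the endpoint is a placeholder, and it assumes the security groups allow traffic from the Lambda to the node on port 11211:

import memcache

# Placeholder endpoint; use your cluster's configuration or node endpoint.
client = memcache.Client(['mycluster.xxxxxx.cfg.use1.cache.amazonaws.com:11211'])

client.set('greeting', 'hello')  # store a key in the cluster
print(client.get('greeting'))    # fetch it back -> 'hello'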
I had the exact timeout problem mentioned in the comment on the older post. My bug was in the security group for Memcached. Here is the working version in Terraform:
resource "aws_security_group" "memcached" {
vpc_id = "${aws_vpc.dev.id}"
name = "memcached SG"
ingress {
from_port = "${var.memcached_port}"
to_port = "${var.memcached_port}"
protocol = "tcp"
cidr_blocks = ["${var.public_subnet_cidr}"]
}
egress {
from_port = "${var.memcached_port}"
to_port = "${var.memcached_port}"
protocol = "tcp"
cidr_blocks = ["${var.public_subnet_cidr}"]
}
tags = {
Name = "memcached SG"
}
}
I tested the connection by creating an EC2 instance in the public subnet and running "telnet (input your cache node URL) 11211".
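If you would rather test from Python than telnet, a raw socket check against memcached's "version" command works too (a sketch; the host is a placeholder):

import socket

# memcached answers the ASCII "version" command with its version string.
sock = socket.create_connection(('mycluster.xxxxxx.cache.amazonaws.com', 11211), timeout=5)
sock.sendall(b'version\r\n')
print(sock.recv(1024))  # e.g. b'VERSION 1.6.17\r\n'
sock.close()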
Using boto in Python, how can I find the boto.ec2 instance object given an IP address?
boto3
import boto3

ec2 = boto3.client('ec2')
# 'ip-address' filters on the public IP; use 'private-ip-address' for private addresses.
filters = [{
    'Name': 'ip-address',
    'Values': ['1.1.1.1'],
}]
result_list = ec2.describe_instances(Filters=filters)
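To get from that response to the instances themselves (a sketch based on the describe_instances response shape), walk the reservations:

for reservation in result_list['Reservations']:
    for instance in reservation['Instances']:
        print(instance['InstanceId'], instance.get('PublicIpAddress'))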
Digging through the boto documentation, I found the get_only_instances method, which you use to get all instances. You can pass a filter dictionary to it, to filter by IP Address (I found this in the EC2 API Reference under the Filter.N title).
So for example, to get the instance with IP 1.1.1.1, you would do:
filters = {"ip-address": "1.1.1.1"}
result_list = conn.get_only_instances(filters=filters)
Then result_list[0] should be the Instance object for the instance with that IP address.