Using CDK in Python, I've created a VPC with Public and Private subnets in two availability zones and deployed an RDS DB Instance into the Private subnets.
How can I retrieve the Subnet ID that the RDS Instance is in so that I don't have to hardcode it into this glue connection? I've tried several different iterations of CfnOutputs and CDK constructs, but am not able to get the one that I need. Thanks for taking a look!
glue_connection = aws_glue.CfnConnection(
    self,
    connection_id,
    catalog_id=self.account_name,
    connection_input=aws_glue.CfnConnection.ConnectionInputProperty(
        name=str(self.glue_connection_name),
        connection_type='JDBC',
        physical_connection_requirements=aws_glue.CfnConnection.PhysicalConnectionRequirementsProperty(
            subnet_id='PrivateSubnet2',
            security_group_id_list=[self.rds_SG.security_group_id, self.ec2_SG.security_group_id],
            availability_zone='us-east-1b',
        ),
        connection_properties={
            'JDBC_CONNECTION_URL': f'jdbc:mysql://{self.DBInstance.db_instance_endpoint_address}:{self.DBInstance.db_instance_endpoint_port}/test',
            'PASSWORD': self.DBInstance.secret.secret_value_from_json("password").to_string(),
            'USERNAME': self.db_username,
            'JDBC_ENFORCE_SSL': 'false',
        },
    ),
)
Use the L2 Connection construct - it's simpler - and pass it the first of the VPC's private subnets (note that in CDK v2 the L2 Glue constructs live in the experimental aws_cdk.aws_glue_alpha module):
glue_connection = aws_glue.Connection(
    self,
    "my_connection",
    type=aws_glue.ConnectionType.JDBC,
    properties={
        'JDBC_CONNECTION_URL': f'jdbc:mysql://{self.DBInstance.db_instance_endpoint_address}:{self.DBInstance.db_instance_endpoint_port}/test',
        'PASSWORD': self.DBInstance.secret.secret_value_from_json("password").to_string(),
        'USERNAME': self.db_username,
        'JDBC_ENFORCE_SSL': 'false',
    },
    # connections.security_groups is already a list, so don't wrap it in another list
    security_groups=my_rds_instance.connections.security_groups,
    subnet=my_vpc.private_subnets[0],
)
I am trying to compile information from a list of EC2s that I have on a .csv using Python + Boto3.
This .csv contains the Private IPs of those instances. The following command returns everything that I need:
aws ec2 describe-network-interfaces --filters Name=addresses.private-ip-address,Values="<PRIVATE IP>" --region <MY REGION>
So I've decided to use Boto3 to do something similar.
But my code isn't returning the information inside the dictionary, because I cannot specify the Region inside the code.
The documentation allows me to specify the Availability Zone, but it just won't work.
ec2 = boto3.client('ec2')

describe_network_interfaces = ec2.describe_network_interfaces(
    Filters=[
        {
            'Name': 'addresses.private-ip-address',
            'Values': [
                '<PRIVATE IP>'
            ],
            'Name': 'availability-zone',
            'Values': [
                '<REGION>'
            ]
        }
    ],
    MaxResults=123
)

print(describe_network_interfaces)
☝️ This returns the following 👇
{'NetworkInterfaces': [], 'ResponseMetadata': { <LOTS OF METADATA> }}
I believe it is not working because I can't specify the Region with Boto3's describe_network_interfaces, even though I can with the AWS CLI command.
Any suggestions?
Note: popen is not a good option for this project.
Thanks in advance.
You can set the region at the client level with something like:
my_region = "us-east-1"
ec2 = boto3.client('ec2', region_name=my_region)
This worked in my environment successfully to get information about systems running in another region.
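Separately, note that the filters in the question are merged into a single dict; in Python the duplicate 'Name' and 'Values' keys silently overwrite the first pair, so only the availability-zone filter is actually sent. Each filter needs its own dict, and availability-zone expects a zone like us-east-1a, not a region. A corrected sketch (placeholders kept from the question, region assumed):
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')  # region assumed

response = ec2.describe_network_interfaces(
    Filters=[
        # one dict per filter
        {'Name': 'addresses.private-ip-address', 'Values': ['<PRIVATE IP>']},
        {'Name': 'availability-zone', 'Values': ['us-east-1a']},
    ],
)
print(response['NetworkInterfaces'])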
I have a list of AWS accounts with account ids. I want to fetch the list of all the EC2 instances running in each of those accounts. Can I fetch that list using boto3?
First, you can access your AWS account programmatically by creating a boto3 client:
import boto3

client = boto3.client(
    'ec2',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    aws_session_token=SESSION_TOKEN,
)
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#method-parameters
After creating the boto3 client with your credentials, you can use boto3's EC2.Client.describe_instances():
# Request syntax skeleton from the boto3 docs;
# replace the 'string' placeholders with real values.
response = client.describe_instances(
    Filters=[
        {
            'Name': 'string',
            'Values': [
                'string',
            ]
        },
    ],
    InstanceIds=[
        'string',
    ],
    DryRun=True|False,
    MaxResults=123,
    NextToken='string'
)
Use 'instance-state-name' : 'running' in Filters, like:
{
    'Name': 'instance-state-name',
    'Values': [
        'running',
    ]
}
Then you can get information about your running EC2 instances.
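Since the question is about a list of accounts, one common pattern is to assume a role in each account and build an EC2 client from the temporary credentials. A minimal sketch, assuming each target account has an assumable role (the role name, account IDs, and region below are hypothetical):
import boto3

sts = boto3.client('sts')
account_ids = ['111111111111', '222222222222']  # your account IDs

for account_id in account_ids:
    # 'OrganizationAccountAccessRole' is a hypothetical role name;
    # use whatever cross-account role exists in your accounts
    creds = sts.assume_role(
        RoleArn=f'arn:aws:iam::{account_id}:role/OrganizationAccountAccessRole',
        RoleSessionName='list-running-ec2',
    )['Credentials']
    ec2 = boto3.client(
        'ec2',
        region_name='us-east-1',  # region assumed
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken'],
    )
    response = ec2.describe_instances(
        Filters=[{'Name': 'instance-state-name', 'Values': ['running']}]
    )
    for reservation in response['Reservations']:
        for instance in reservation['Instances']:
            print(account_id, instance['InstanceId'])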
To access an AWS account from the CLI or from programs, you need an access key ID and a secret access key. Once you have these details, you can use the following code to list EC2 instances from the AWS account whose keys you specified.
Create config.properties with the following code.
aws_access_key_id_value='YOUR-ACCESS-KEY-OF-THE-AWS-ACCOUNT'
aws_secret_access_key_value='YOUR-SECRET-KEY-OF-THE-AWS-ACCOUNT'
Create list-ec2-instances.py with the following code.
import boto3

def getVarFromFile(filename):
    # Note: the imp module is deprecated (removed in Python 3.12);
    # consider configparser or importlib for new code.
    import imp
    f = open(filename)
    global data
    data = imp.load_source('data', '', f)
    f.close()

getVarFromFile('config.properties')

# A resource (not a client), so we can iterate over instances directly
ec2 = boto3.resource(
    'ec2',
    aws_access_key_id=data.aws_access_key_id_value,
    aws_secret_access_key=data.aws_secret_access_key_value
)

for instance in ec2.instances.all():
    print(
        "Id: {0}\nPlatform: {1}\nType: {2}\nPublic IPv4: {3}\nAMI: {4}\nState: {5}\n".format(
            instance.id, instance.platform, instance.instance_type,
            instance.public_ip_address, instance.image.id, instance.state
        )
    )
Execute the following command.
python list-ec2-instances.py
I want to create ~267 CloudWatch alarms. The manual process is tedious; can someone guide me on using a Boto3 script so that I can set up all the alarms in one shot?
import boto3

# Create CloudWatch client
cloudwatch = boto3.client('cloudwatch')

# Create alarm
cloudwatch.put_metric_alarm(
    AlarmName='Web_Server_CPU_Utilization',
    ComparisonOperator='GreaterThanThreshold',
    EvaluationPeriods=1,
    MetricName='CPUUtilization',
    Namespace='AWS/EC2',
    Period=60,
    Statistic='Average',
    Threshold=70.0,
    ActionsEnabled=False,
    AlarmDescription='Alarm when server CPU exceeds 70%',
    Dimensions=[
        {
            'Name': 'InstanceId',
            'Value': 'i-xxxxxxxxxx'
        },
    ],
    Unit='Percent'  # CPUUtilization is reported in Percent, not Seconds
)
Assuming you want to add a CloudWatch alarm for different EC2 instances, you can simply put the instance IDs in a list and iterate over that list to create the alarms. That'd look like:
import boto3

cloudwatch = boto3.client('cloudwatch')

ec2_instances = [
    'i-xxxxxxxxx1',
    'i-xxxxxxxxx2',
    'i-xxxxxxxxx3'
]

for ec2_instance in ec2_instances:
    cloudwatch.put_metric_alarm(
        AlarmName='Web_Server_CPU_Utilization_%s' % ec2_instance,
        ComparisonOperator='GreaterThanThreshold',
        EvaluationPeriods=1,
        MetricName='CPUUtilization',
        Namespace='AWS/EC2',
        Period=60,
        Statistic='Average',
        Threshold=70.0,
        ActionsEnabled=False,
        AlarmDescription='Alarm when server CPU exceeds 70%',
        Dimensions=[
            {
                'Name': 'InstanceId',
                'Value': ec2_instance
            },
        ],
        Unit='Percent'  # CPUUtilization is reported in Percent, not Seconds
    )
Here is a simple script I use to set up CloudWatch alarms on my running EC2 instances. The aim is to reboot my EC2 instances if StatusCheckFailed_Instance is True.
In case you are getting the "Insufficient Data" message as well, it's worthwhile creating the same alarm in the EC2 console and then making sure your put_metric_alarm call matches the console's source/CloudFormation JSON.
AWS seems to be really fussy about the JSON. Once I matched the EC2 console's JSON exactly, it worked like a charm.
Hope this helps someone.
import boto3

# Specify your region here
region = "ap-northeast-1"
ec2_client = boto3.client("ec2", region_name=region)
# Use the same region for the alarms as for the instances
cloudwatch = boto3.client('cloudwatch', region_name=region)

# Get running EC2 instances
reservations = ec2_client.describe_instances(Filters=[
    {
        "Name": "instance-state-name",
        "Values": ["running"],
    }
]).get("Reservations")

# Set up an alarm for each instance
for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance['InstanceId']
        cloudwatch.put_metric_alarm(
            AlarmName=f'Status_Check_{instance_id}',
            AlarmDescription=f'Alarm when status check fails on {instance_id}',
            ActionsEnabled=True,
            OKActions=[],
            AlarmActions=[
                f"arn:aws:automate:{region}:ec2:reboot"
            ],
            InsufficientDataActions=[],
            MetricName='StatusCheckFailed_Instance',
            Namespace='AWS/EC2',
            Statistic='Maximum',
            Dimensions=[
                {
                    'Name': 'InstanceId',
                    'Value': instance_id
                },
            ],
            Period=60,
            EvaluationPeriods=2,
            DatapointsToAlarm=2,
            Threshold=0.99,
            ComparisonOperator='GreaterThanOrEqualToThreshold'
        )
I am able to get the load balancers using the code below:
import boto3
elb = boto3.client('elbv2')
lbs = elb.describe_load_balancers()
How can I get the instances behind these load balancers?
Also, how can I fetch load balancers whose state is not active, since describe_load_balancers only gives active load balancers?
Classic Load Balancer
Use: client = boto3.client('elb')
Then describe_load_balancers() results include a list of instances:
'Instances': [
    {
        'InstanceId': 'string'
    },
],
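For example, a minimal sketch that prints the instances behind each Classic Load Balancer (the region is an assumption):
import boto3

# Classic Load Balancers list their instances directly in the response
elb = boto3.client('elb', region_name='us-east-1')

for lb in elb.describe_load_balancers()['LoadBalancerDescriptions']:
    instance_ids = [i['InstanceId'] for i in lb['Instances']]
    print(lb['LoadBalancerName'], instance_ids)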
Application Load Balancer
Use: client = boto3.client('elbv2')
Call describe_target_groups() passing in the Load Balancer ARN to obtain a list of Target Groups associated with the Load Balancer
Then call describe_target_health() to obtain a list of targets (instances).
Here is a sample response:
{
    'TargetHealthDescriptions': [
        {
            'Target': {
                'Id': 'i-0f76fade',
                'Port': 80,
            },
            ...
For anyone looking for a quick snippet to check whether your instance is in the LB:
import boto3
from ec2_metadata import ec2_metadata

# ID of the instance this code is running on
instance_id: str = ec2_metadata.instance_id

client = boto3.client("elbv2", region_name="us-west-2")

response = client.describe_target_groups(
    LoadBalancerArn="your arn goes here"
)
target_group_arn = response["TargetGroups"][0]["TargetGroupArn"]

response = client.describe_target_health(TargetGroupArn=target_group_arn)
instances = map(lambda x: x["Target"]["Id"], response["TargetHealthDescriptions"])

print(f"target group instances {list(instances)}")
print(f"this instance {instance_id}")
Currently I'm working on a Lambda function to create, reboot, delete, and modify an ElastiCache Redis cluster using a Python 2.7 script.
For this I also need IAM roles and policies.
I'm done with EC2 and RDS (Stop & Start actions), but I have not seen any solutions for ElastiCache Redis. Can anyone provide scripts or solutions, at least to delete and create an ElastiCache Redis cluster?
You can use boto3's ElastiCache client to create, reboot, delete, and modify cache clusters:
Create: create_cache_cluster()
Reboot: reboot_cache_cluster()
Delete: delete_cache_cluster()
Modify: modify_cache_cluster()
example:
import boto3

client = boto3.client('elasticache')

# Request syntax skeleton from the boto3 docs; replace the
# 'string', 123, and True|False placeholders with real values.
response = client.create_cache_cluster(
    CacheClusterId='string',
    ReplicationGroupId='string',
    AZMode='single-az'|'cross-az',
    PreferredAvailabilityZone='string',
    PreferredAvailabilityZones=[
        'string',
    ],
    NumCacheNodes=123,
    CacheNodeType='string',
    Engine='string',
    EngineVersion='string',
    CacheParameterGroupName='string',
    CacheSubnetGroupName='string',
    CacheSecurityGroupNames=[
        'string',
    ],
    SecurityGroupIds=[
        'string',
    ],
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ],
    SnapshotArns=[
        'string',
    ],
    SnapshotName='string',
    PreferredMaintenanceWindow='string',
    Port=123,
    NotificationTopicArn='string',
    AutoMinorVersionUpgrade=True|False,
    SnapshotRetentionLimit=123,
    SnapshotWindow='string',
    AuthToken='string'
)
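For a concrete starting point, a minimal sketch that creates and later deletes a single-node Redis cluster (the cluster ID, node type, and region below are illustrative assumptions):
import boto3

client = boto3.client('elasticache', region_name='us-east-1')  # region assumed

# Create a single-node Redis cluster (values are illustrative)
client.create_cache_cluster(
    CacheClusterId='my-redis',  # hypothetical cluster ID
    Engine='redis',
    CacheNodeType='cache.t3.micro',
    NumCacheNodes=1,
)

# ...and delete it when no longer needed
client.delete_cache_cluster(CacheClusterId='my-redis')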
For more details on the parameters, refer to the boto3 create_cache_cluster documentation.