AWS CDK Python: Create resources conditionally

I would like to create resources depending on a parameter's value. How can I achieve that?
Example:
vpc_create = core.CfnParameter(stack, "createVPC")
condition = core.CfnCondition(stack,
    "testeCondition",
    expression=core.Fn.condition_equals(vpc_create, True)
)
vpc = ec2.Vpc(stack, "MyVpc", max_azs=3)
How do I add the condition to the VPC resource so that the VPC is only created when the parameter is true?
I think I need to get the underlying CloudFormation resource, something like this:
vpc.node.default_child # I think this returns an object of the ec2.CfnVPC class, but I'm stuck here.
Thanks

Conditional resource creation, and a lot of other flexibility, is possible using context data. AWS itself recommends context over parameters:
In general, we recommend against using AWS CloudFormation parameters with the AWS CDK. Unlike context values or environment variables, the usual way to pass values into your AWS CDK apps without hard-coding them, parameter values are not available at synthesis time, and thus cannot be easily used in other parts of your AWS CDK app, particularly for control flow.
Please read in full at: https://docs.aws.amazon.com/cdk/latest/guide/parameters.html
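As a rough sketch of how context data enables this (assuming a context key named createVPC, passed with cdk synth -c createVPC=true or set in cdk.json; the names are illustrative), the value is available at synthesis time, so ordinary Python control flow decides whether the construct gets created at all:

from aws_cdk import core
from aws_cdk import aws_ec2 as ec2

app = core.App()
stack = core.Stack(app, "MyStack")

# Context values are resolved at synthesis time, so plain Python
# control flow can decide whether the resource exists at all.
create_vpc = app.node.try_get_context("createVPC")
if create_vpc == "true":
    vpc = ec2.Vpc(stack, "MyVpc", max_azs=3)

app.synth()

If you do want to stay with the parameter approach, the route you started on works for individual resources: vpc.node.default_child returns the underlying ec2.CfnVPC, and you can attach the condition via cfn_options.condition = condition. Bear in mind, though, that ec2.Vpc synthesizes many CloudFormation resources (subnets, route tables, gateways), and the condition is only applied to the one resource you set it on.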

Related

How to load dependencies from an s3 bucket AND a separate event JSON?

The dependencies for my AWS Lambda function were larger than the allowable limits, so I uploaded them to an s3 bucket. I have seen how to use an s3 bucket as an event for a Lambda function, but I need to use these packages in conjunction with a separate event. The s3 bucket only contains python modules (numpy, nltk, etc.) not the event data used in the Lambda function.
How can I do this?
Event data will come in from whatever event source you configure. Refer to the docs here for the S3 event source.
As for the dependencies themselves, you will have to zip the whole codebase (code + dependencies) and use that as a deployment package. You can find detailed instructions in the docs. For reference, here are the ones for Node.js and Python.
Protip: A better way to manage dependencies is to use Lambda Layers. You can create a layer with all your dependencies and then add it to the functions that make use of them. Read more about it here.
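Since your packages are already in S3, here is a hedged sketch of wiring that up with boto3 (the bucket, key, layer name, function name, and runtime are placeholders, and the zip is assumed to keep the packages under a python/ directory so Lambda puts them on the import path):

import boto3

lambda_client = boto3.client("lambda")

# Publish the zipped dependencies (already uploaded to S3) as a layer version.
layer = lambda_client.publish_layer_version(
    LayerName="my-python-deps",
    Content={"S3Bucket": "my-deps-bucket", "S3Key": "deps/python-deps.zip"},
    CompatibleRuntimes=["python3.8"],
)

# Attach the new layer version to the function; the handler code stays in
# its own, much smaller deployment package.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    Layers=[layer["LayerVersionArn"]],
)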
If your dependencies are still above the 512MB hard limit of AWS Lambda, you may consider using AWS Elastic File System with Lambda.
With this, you can essentially attach network storage to your Lambda function. I have personally used it to load huge reference files which are over the limit of Lambda's file storage. For a walkthrough, you can refer to this article by AWS. To pick the conclusion from the article:
EFS for Lambda allows you to share data across function invocations, read large reference data files, and write function output to a persistent and shared store. After configuring EFS, you provide the Lambda function with an access point ARN, allowing you to read and write to this file system. Lambda securely connects the function instances to the EFS mount targets in the same Availability Zone and subnet.
You can read the announcement here.
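Attaching the file system then comes down to one configuration call (again just a sketch; the access point ARN and mount path are placeholders, and the function must be in a VPC with access to the EFS mount targets):

import boto3

lambda_client = boto3.client("lambda")

# Mount an EFS access point into the function's filesystem at /mnt/deps.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    FileSystemConfigs=[
        {
            "Arn": "arn:aws:elasticfilesystem:us-east-1:123456789012:access-point/fsap-0123456789abcdef0",
            "LocalMountPath": "/mnt/deps",
        }
    ],
)

# Inside the handler you would then do something like:
# import sys
# sys.path.append("/mnt/deps")
# before importing the modules stored on EFS.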
Edit 1: Added EFS for lambda info.

"StartAfter" parameter equivalent in Google Cloud Storage in listing objects

I'm quite new to GCP, and now struggling to list out files after a given key.
In AWS, we can provide an additional parameter StartAfter to the list_objects_v2() boto3 call for S3 client. Then, it will start providing files starting from that particular key.
kwargs["StartAfter"] = start_after_file
response = self._storage_client.list_objects_v2(
    Bucket=self._bucket_name,
    Prefix=prefix,
    **kwargs
)
I need to do the same in GCP using Google Cloud Storage (in Python). I'm going to use list_blobs() on the storage Client class, but I can't find any way to do this.
The prefix parameter won't help, since it only returns the files with that prefix.
Does anyone know how I can achieve this?
According to the documentation for this library, there is no method to achieve this directly; you would need to filter the response in your code.
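A minimal sketch of that client-side filtering (bucket_name, prefix, and start_after_file are illustrative placeholders; it relies on GCS returning object names in lexicographic order):

from google.cloud import storage

bucket_name = "my-bucket"            # placeholder
prefix = "data/"                     # placeholder
start_after_file = "data/file-0100"  # placeholder key to start after

client = storage.Client()

# List everything under the prefix, then skip keys up to and including the
# given one to emulate S3's exclusive StartAfter semantics.
for blob in client.list_blobs(bucket_name, prefix=prefix):
    if blob.name <= start_after_file:
        continue
    print(blob.name)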
Nevertheless, you can open a feature request for this to be revised, or you can contact the team responsible for the library on their GitHub.
Hope you find this useful.

Alternative way for terraform S3 backend to use like variables

Variables are not supported in the S3 backend, so I need an alternative way to do this. Can anyone suggest one? I have gone through some online resources: some say Terragrunt, some say Python, workspaces, or environments. We built a dev environment for clients; from the app they enter details such as the EC2 count, AMI, and instance type, and all of that works fine. The issue is with the backend state file, which does not support variables, so I have to change the bucket name and path every time. Can someone please explain the structure and share sample code to resolve this? Thanks in advance. #23208
You don't need a different backend for every client, you need a different tfstate. Then you can use terraform init -reconfigure -backend-config="key=<client>", where <client> is an identifier you set for your clients.

Python Boto Deleting Routes?

I've been trying to find a way to delete routes in AWS programmatically. I've built a Python application for managing AWS resources using boto and boto3. When dealing with the clean-up after deleting VPC peering connections, I have blackholed routes left over. I don't want to delete the route tables in question, just the blackholed routes.
The AWS CLI has a delete-route function, but I can't find the corresponding function in boto and I'd prefer not to run the AWS CLI directly from my python app if I can avoid it.
In boto3 (and boto) there are methods for creating routes, but I couldn't find any for deleting routes (just for deleting the whole route table). I've searched for this numerous times but haven't come close to finding an answer.
Any help?
I do see a method in boto 2.38:
class boto.vpc.VPCConnection

delete_route(route_table_id, destination_cidr_block, dry_run=False)
    Deletes a route from a route table within a VPC.

    Parameters:
        route_table_id (str) – The ID of the route table with the route.
        destination_cidr_block (str) – The CIDR address block used for destination match.
        dry_run (bool) – Set to True if the operation should not actually run.

    Return type: bool
    Returns: True if successful
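A minimal sketch of calling that method in boto 2.x (the region, route table ID, and CIDR block are placeholders; credentials are assumed to come from your environment or config):

import boto.vpc

conn = boto.vpc.connect_to_region("us-west-2")

# Remove just the blackholed route; the route table itself is kept.
conn.delete_route(
    route_table_id="rtb-0123456789abcdef0",
    destination_cidr_block="10.1.0.0/16",
)

The boto3 EC2 client exposes the same operation as delete_route, e.g. boto3.client("ec2").delete_route(RouteTableId=..., DestinationCidrBlock=...).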

Spin up new EC2 instances programmatically with Python Boto

What I'm trying to do is avoid manually creating new EC2 instances as needed through the aws.amazon.com site, and instead programmatically start up new instances based off an AMI using Python's boto module.
import boto.ec2
conn = boto.ec2.connect_to_region("us-west-2",
    aws_access_key_id='<aws access key>',
    aws_secret_access_key='<aws secret key>')
# How do I now spin up new instances based off a snapshot (already) preconfigured on EC2?
As in my comment, I'm trying to start up new instances based off a specific, given AMI ID.
I can't seem to find a good way to do this. Can anyone help here?
Thank you
From the documentation:
Possibly, the most important and common task you’ll use EC2 for is to launch, stop and terminate instances. In its most primitive form, you can launch an instance as follows:
conn.run_instances('<ami-image-id>')
This will launch an instance in the specified region with the default parameters. You will not be able to SSH into this machine, as it doesn’t have a security group set. See EC2 Security Groups for details on creating one.
Now, let’s say that you already have a key pair, want a specific type of instance, and you have your security group all setup. In this case we can use the keyword arguments to accomplish that:
conn.run_instances(
    '<ami-image-id>',
    key_name='myKey',
    instance_type='c1.xlarge',
    security_groups=['your-security-group-here'])
The <ami-image-id> placeholder is where you fill in your AMI ID.
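If you are using boto3 rather than the legacy boto 2 library, the equivalent call is run_instances on the EC2 client (a minimal sketch; the AMI ID, key pair, and security group are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Launch one instance from the given AMI with an existing key pair
# and security group.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    KeyName="myKey",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])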
