Naming CDK resources dynamically - python

I'm using the CDK to create some infrastructure from a yaml template file. Some resources require multiple instances. I thought writing a function would be the easiest way to create multiple instances of the resource.
Function
def create_vpn_connection_route(cidr_count, destination_cidr):
    vpn_connection_route = aws_ec2.CfnVPNConnectionRoute(
        self,
        f'vpn_connection_route{cidr_count}',
        vpn_connection_id=vpn_connection.ref,
        destination_cidr_block=destination_cidr
    )
    return vpn_connection_route
I then loop over it and generate the "Id" by enumerating over the destination_cidrs like so:
for cidr_count, destination_cidr in enumerate(tenant_config['vpn_config'][0]['destination_cidrs']):
    create_vpn_connection_route(cidr_count, destination_cidr)
This is what's in my yaml
vpn_config:
  - private_ip:
      - 10.1.195.201/32
      - 10.1.80.20/32
      - 10.1.101.8/32
Is there a better way to do this in the CDK? And can I dynamically generate IDs for resources?
Cheers

I don't know that it makes your code much better, but you can use a Construct instead of a function.
class VpnConnectionRoute(core.Construct):
    def __init__(self, scope, id_, vpn_connection, destination_cidr):
        super().__init__(scope, id_)
        self.vpn_connection_route = aws_ec2.CfnVPNConnectionRoute(
            self,
            'vpn_connection_route',
            vpn_connection_id=vpn_connection.vpn_id,
            destination_cidr_block=destination_cidr
        )
# ...
for cidr_count, destination_cidr in enumerate(tenant_config['vpn_config'][0]['destination_cidrs']):
    VpnConnectionRoute(self, f"route{cidr_count}", vpn_connection, destination_cidr)
CDK will automatically name your resources based on both the construct path and the ID you provide. So the end result will look like:
"route1vpnconnectionrouteAE1C11A9": {
"Type": "AWS::EC2::VPNConnectionRoute",
"Properties": {
"DestinationCidrBlock": "10.1.195.201/32",
"VpnConnectionId": {
"Ref": "Vpn6F669752"
}
},
"Metadata": {
"aws:cdk:path": "app/route1/vpn_connection_route"
}
},
You can also just put destination_cidr inside your route name. CDK will remove all unsupported characters for you automatically.
for destination_cidr in tenant_config['vpn_config'][0]['destination_cidrs']:
    aws_ec2.CfnVPNConnectionRoute(
        self,
        f'VPN Connection Route for {destination_cidr}',
        vpn_connection_id=vpn_connection.vpn_id,
        destination_cidr_block=destination_cidr
    )
The best solution here probably depends on what you want to happen when these addresses change. For this particular resource type, any change in the name or the values will require a replacement anyway. So keeping the names consistent while the values change might not matter that much.

Related

Create Flask endpoint using a Loop

I want to create multiple Flask endpoints that I read from my config. Is it possible to use a for or while loop to create them? The endpoint addresses would be variable, but there would be no limit on how many I would need.
My idea was:
for x in myList:
    #app.route(var, ...)
    def route():
        do smt ...
Thanks for your help
You can use
app.add_url_rule(rule, endpoint=None, view_func=None, provide_automatic_options=None, **options)
to achieve this.
For example:
for endpoint in endpoint_list:
    app.add_url_rule(endpoint['route'], view_func=endpoint['view_func'])
Check out the docs.
Note that the endpoint_list contains records of endpoints. It should be a list of dictionaries, for example:
endpoint_list = [
    {
        "route": "/",
        "view_func": index
    },
    .....
]
Avoid using "list" as the variable name. It would shadow Python's built-in list type.
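For a minimal, self-contained sketch of this pattern (the routes and view functions here are hypothetical, not from the question):
from flask import Flask

app = Flask(__name__)

def index():
    return "index page"

def health():
    return "ok"

# Hypothetical config: each record maps a URL rule to a view function.
endpoint_list = [
    {"route": "/", "view_func": index},
    {"route": "/health", "view_func": health},
]

# Register each endpoint in a loop instead of using the @app.route decorator.
for endpoint in endpoint_list:
    app.add_url_rule(endpoint["route"], view_func=endpoint["view_func"])

if __name__ == "__main__":
    app.run()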

Pulumi python resource naming convention

Is there a prebuilt way to include prefixes in resource names when you create them? I am looking for something similar to Terraform, but I'm not sure if we need to create it programmatically...
In terraform I had something like:
variable "org" {
type = string
validation {
condition = length(var.org) <= 3
error_message = "The org variable cannot be larger than 3 characters."
}
}
variable "tenant" {
type = string
validation {
condition = length(var.tenant) <= 4
error_message = "The tenant variable cannot be larger than 4 characters."
}
}
variable "environment" {
type = string
validation {
condition = length(var.environment) <= 4
error_message = "The environment variable cannot be larger than 4 characters."
}
}
And I use the above variables to name an Azure resource group like:
module "resource_group_name" {
source = "gsoft-inc/naming/azurerm//modules/general/resource_group"
name = "main"
prefixes = [var.org, var.tenant, var.environment]
}
Is it possible to do something similar in Pulumi? I saw a similar issue reported here, but it looks like this is more under programmatic control(?)
You could use Python's formatting functions directly, like
resource_group = azure_native.resources.ResourceGroup("main",
    location="eastus",
    resource_group_name="{0}-{1}-{2}-{3}".format(org, tenant, environment, rgname))
You could also define a helper function and use it in multiple places.
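For instance, a minimal sketch of such a helper (the prefix values and the resource group shown are illustrative, not from the original answer):
import pulumi_azure_native as azure_native

def prefixed_name(name: str, org: str, tenant: str, environment: str) -> str:
    """Build an '<org>-<tenant>-<environment>-<name>' style resource name."""
    return f"{org}-{tenant}-{environment}-{name}"

# Hypothetical values; in practice these could come from pulumi.Config().
org, tenant, environment = "abc", "tnt1", "dev"

resource_group = azure_native.resources.ResourceGroup(
    "main",
    location="eastus",
    resource_group_name=prefixed_name("main", org, tenant, environment),
)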
Following the answer from Mikhail Shilkov, I created a helper function to format the name of a storage account resource on Azure. But first I used the configuration of my dev stack in Pulumi.dev.yaml to read the values I want to include in the name of the storage account.
Taking as a reference the way of setting and getting configuration values, I set the following values in my dev stack:
pulumi config set org rhd
pulumi config set application wmlab
pulumi config set environment dev
As long as those values are set, I can see them in the Pulumi.dev.yaml stack file (Pulumi prefixes those keys with the project name, wmlab-infrastructure):
config:
  azure-native:location: westeurope # This one was set up when creating the Pulumi Python project
  wmlab-infrastructure:application: wmlab
  wmlab-infrastructure:environment: dev
  wmlab-infrastructure:org: rhd
Then from Python I use Config.require to get each value by its key, in this way:
def generate_storage_account_name(name: str, number: int, org: str, app: str, env: str):
    return f"{name}{number}{org}{app}{env}"

config = pulumi.Config()
organization = config.require('org')
application = config.require('application')
environment = config.require('environment')
Then when creating the storage account name, I called the generate_storage_account_name helper function:
(I am using the random.randint(a, b) function to add an integer value to the name of the storage account; it makes things easier when assigning the name.)
# Create an Azure Resource Group
resource_group = azure_native.resources.ResourceGroup(
    'resource_group',
    resource_group_name="{0}-{1}-{2}".format(organization, application, environment)
)

# Create an Azure resource (Storage Account)
account = storage.StorageAccount(
    'main',
    resource_group_name=resource_group.name,
    account_name=generate_storage_account_name('sa', random.randint(1, 100000), organization, application, environment),
    sku=storage.SkuArgs(
        name=storage.SkuName.STANDARD_LRS,
    ),
    kind=storage.Kind.STORAGE_V2)
And it works. When creating the resources, the name of the storage account is using the helper function:
> pulumi up
Previewing update (rhdhv/dev)

View Live: https://app.pulumi.com/myorg/wmlab-infrastructure/dev/previews/549c2c34-853f-4fe0-b9f2-d5504525b073

     Type                                      Name                      Plan
 +   pulumi:pulumi:Stack                       wmlab-infrastructure-dev  create
 +   ├─ azure-native:resources:ResourceGroup   resource_group            create
 +   └─ azure-native:storage:StorageAccount    main                      create

Resources:
    + 3 to create

Do you want to perform this update? details
+ pulumi:pulumi:Stack: (create)
    [urn=urn:pulumi:dev::wmlab-infrastructure::pulumi:pulumi:Stack::wmlab-infrastructure-dev]
    + azure-native:resources:ResourceGroup: (create)
        [urn=urn:pulumi:dev::wmlab-infrastructure::azure-native:resources:ResourceGroup::resource_group]
        [provider=urn:pulumi:dev::wmlab-infrastructure::pulumi:providers:azure-native::default_1_29_0::04da6b54-80e4-46f7-96ec-b56ff0331ba9]
        location         : "westeurope"
        resourceGroupName: "rhd-wmlab-dev"
    + azure-native:storage:StorageAccount: (create)
        [urn=urn:pulumi:dev::wmlab-infrastructure::azure-native:storage:StorageAccount::main]
        [provider=urn:pulumi:dev::wmlab-infrastructure::pulumi:providers:azure-native::default_1_29_0::04da6b54-80e4-46f7-96ec-b56ff0331ba9]
        accountName      : "sa99180rhdwmlabdev" # HERE THE NAME GENERATED
        kind             : "StorageV2"
        location         : "westeurope"
        resourceGroupName: output<string>
        sku              : {
            name: "Standard_LRS"
        }
To read more about accessing config values from code, read here.
Pulumi has a way of auto-naming resources, explained here, but altering this scheme looks like it is either not possible or at least not recommended; doing so can cause some issues and resources will be recreated.
Overriding auto-naming makes your project susceptible to naming collisions. As a result, for resources that may need to be replaced, you should specify deleteBeforeReplace: true in the resource’s options. This option ensures that old resources are deleted before new ones are created, which will prevent those collisions.
If I understood well, I can override auto-naming for those resources that allow the name attribute in their API specification, but doing so is when naming collisions might appear (?)
In my case I am using the StorageAccount resource in the Python Azure API, and it does not allow overriding the name property, so the helper function works well.
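If you do override auto-naming on a resource that might need to be replaced, the deleteBeforeReplace option quoted above can be set from Python roughly like this (a sketch; the explicit name is illustrative):
import pulumi
import pulumi_azure_native as azure_native

# Sketch: with the physical name overridden, ask Pulumi to delete the old
# resource before creating its replacement to avoid a naming collision.
resource_group = azure_native.resources.ResourceGroup(
    "main",
    resource_group_name="rhd-wmlab-dev",  # explicit, overridden name
    opts=pulumi.ResourceOptions(delete_before_replace=True),
)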

Sorting Terraform variables.tf by variable name using Python

I am working on creating a new VPC where I need to provide some variables as input.
All the variables are listed in variables.tf. The file is very long (I only copied a couple of them here) and the variables are defined in no particular order.
I need to find a Pythonic way to sort my variables.tf by variable name.
variable "region" {
description = "The region to use when creating resources"
type = string
default = "us-east-1"
}
variable "create_vpc" {
description = "Controls if VPC should be created"
type = bool
default = true
}
variable "name" {
description = "Name to be used, no default, required"
type = string
}
The sorted variables.tf should look like this:
variable "create_vpc" {
description = "Controls if VPC should be created"
type = bool
default = true
}
variable "name" {
description = "Name to be used, no default, required"
type = string
}
variable "region" {
description = "The region to use when creating resources"
type = string
default = "us-east-1"
}
"Pythonic" might be the wrong approach here - you are still comfortably sitting behind a python interpreter, but for better or worse (worse) you are playing by Terraform's rules. Check out the links below. Hashicorp "enables" python via their CDK, and there are several other projects out there on github.
Once you are up and running with something like that, and you have Terraform fully ported over to your Python setup, then you can start thinking pythonic. </IMO>
https://github.com/hashicorp/terraform-cdk
https://github.com/beelit94/python-terraform/blob/develop/python_terraform/terraform.py
Here's what I came across last year, https://github.com/hashicorp/terraform/issues/12959, and it is why I created https://gist.github.com/yermulnik/7e0cf991962680d406692e1db1b551e6 out of curiosity. Not Python but awk :shrugging:
Simple awk script to sort TF files. Not just variables, but any of 1st level resource definition blocks.
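If you do want to stay in plain Python without pulling in the CDK, a rough sketch along these lines could do the sorting, assuming every variable block starts at column 0 and ends with a closing brace at column 0 (the file names are illustrative):
import re

def sort_tf_variables(text: str) -> str:
    """Split a variables.tf-style file into top-level variable blocks and sort them by name."""
    # Match blocks like: variable "name" { ... } whose closing brace is at column 0.
    blocks = re.findall(r'^variable\s+"[^"]+"\s*\{.*?^\}', text, flags=re.M | re.S)
    names = [re.match(r'variable\s+"([^"]+)"', block).group(1) for block in blocks]
    ordered = [block for _, block in sorted(zip(names, blocks))]
    return "\n\n".join(ordered) + "\n"

with open("variables.tf") as f:
    sorted_text = sort_tf_variables(f.read())

with open("variables_sorted.tf", "w") as f:
    f.write(sorted_text)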

Firestore: REST API runQuery method expects wrong parent path pattern

I'm trying to perform a simple query with Firestore REST API.
Being on Google App Engine standard I cannot use the google-cloud-firestore client which is not yet compatible with GAE standard. Instead, I'm using google-api-python-client as for other Google APIs.
This is how I initialize my service:
service = build('firestore', 'v1beta1', credentials=_credentials)
Once this is done, I perform the query that way:
query = { "structuredQuery":
{
"from": [{ "collectionId": "mycollection" }],
"where": {
"fieldFilter":
{
"field": { "fieldPath": "myfield" },
"op": "EQUAL",
"value": { "stringValue": "myvalue" }
}
}
}
}
response = service.projects().databases().documents().runQuery(
    parent='projects/myprojectid/databases/(default)/documents',
    body=query).execute()
This returns a quite explicit error:
TypeError: Parameter "parent" value
"projects/myprojectid/databases/(default)/documents"
does not match the pattern
"^projects/[^/]+/databases/[^/]+/documents/[^/]+/.+$"
which obviously is true. My point is that the documentation clearly states that this should be an accepted value:
The parent resource name. In the format: projects/{project_id}/databases/{database_id}/documents or projects/{project_id}/databases/{database_id}/documents/{document_path}. For example: projects/my-project/databases/my-database/documents or projects/my-project/databases/my-database/documents/chatrooms/my-chatroom (string)
Performing the same query with the API Explorer (or curl) works fine and returns the expected results. (even though the API Explorer does state that the parent parameter does not match the expected pattern).
It seems that the discovery document (which is used by google-api-python-client) enforces this pattern check for the parent parameter but the associated regular expression does not actually allow the only parent path that seems to work (projects/myprojectid/databases/(default)/documents).
I tried to use a different parent path like projects/myprojectid/databases/(default)/documents/*/**, which makes the query run fine but does not return any results.
Is anyone having the same issue, or am I doing something wrong here?
The only workaround I can think of is making a request directly to the proper url without using google-api-python-client, but that means that I have to handle the auth process manually which is a pain.
Thanks for any help you could provide!
google-cloud-firestore is now compatible with standard App Engine.
https://cloud.google.com/firestore/docs/quickstart-servers
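For reference, a minimal sketch of the same query with that client (collection and field names are taken from the question; credentials are assumed to be picked up from the environment):
from google.cloud import firestore

db = firestore.Client()

# Equivalent of the structuredQuery above: mycollection where myfield == "myvalue".
docs = db.collection('mycollection').where('myfield', '==', 'myvalue').stream()

for doc in docs:
    print(doc.id, doc.to_dict())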
For your information, a trick works.
You can change the uri after generating the request:
request = service.projects().databases().documents().runQuery(
    parent='projects/myprojectid/databases/(default)/documents/*/**',
    body=query)
request.uri = request.uri.replace('/*/**', '')
response = request.execute()

ConfigObj 'un-nest' sections

I'm using ConfigObj 5.0.6 to hold many user-defined values, some of which are nested. I use a local.ini to supersede typical values. There is no front-end, so users edit the configs as needed. To make that easier and more intuitive, there are some values that belong at the 'root' level of the config object but are more easily understood below a nested section of the local.ini file.
I'm using a local.ini to supersede defaults. The flow of the app suggests a config layout that would have non-nested values below nested values.
# un-nested
title = my_title
# nested
[section_1]
val_s1 = val
[section_2]
val_s2 = val
# nested, but I want to be un-nested
val_2 = val
This layout, as expected, puts val_2 under section_2:
{
    'title': 'my_title',
    'section_1': {'val_s1': 'val'},
    'section_2': {'val_s2': 'val',
                  'val_2': 'val'}
}
Is it possible to define val_2 on a line below section_2, but access it under the 'main' section of the config object?
I would like to end up with a config object like this:
{
    'title': 'my_title',
    'section_1': {'val_s1': 'val'},
    'section_2': {'val_s2': 'val'},
    'val_2': 'val'
}
The order of the config dictionary isn't important, of course; what I'm interested in is being able to use nested sections, but from within the .ini, exit a section into its parent.
I haven't tested, but suspect nesting everything from the first line onward and then slicing the config object would work. I.e., write local.ini such that it creates:
{
    'main_level': {
        'title': 'my_title',
        'section_1': {'val_s1': 'val'},
        'section_2': {'val_s2': 'val'},
        'val_2': 'val'
    }
}
Then I could use config = config['main_level'] when I first instantiate the config object, but I'm wondering if I'm just missing some easy, correct way that isn't a hack.
According to the documentation, that is not possible:
In the outer section, single values can only appear before any sub-section.
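So val_2 has to be written above the sub-sections in the file. A short sketch of working within that restriction (file name and keys taken from the question; the comment layout is just a suggestion):
from configobj import ConfigObj

# local.ini must list root-level values before any [section], e.g.:
#   title = my_title
#   val_2 = val        # logically belongs with section_2, but must come first
#   [section_1]
#   val_s1 = val
#   [section_2]
#   val_s2 = val
config = ConfigObj('local.ini')

print(config['title'])                 # 'my_title'
print(config['val_2'])                 # 'val'
print(config['section_2']['val_s2'])   # 'val'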
