I'm using Troposphere to build CloudFormation stacks and would like to pass the Elastic Load Balancer ConnectionSettings attribute only if it's set in my configuration, otherwise I don't want to specify it.
If I set it to default None then I get an error about the value not being of the expected type of troposphere.elasticloadbalancing.ConnectionSettings.
I'd rather try to avoid setting an explicit default in the call because it might override other settings.
Ideally, I would like to be able to add attributes to an existing object, e.g.:
lb = template.add_resource(elb.LoadBalancer(
    ...
))
if condition:
    lb.add_attribute(ConnectionSettings=elb.ConnectionSettings(
        ...
    ))
Is there a way to achieve that?
UPDATE: I achieved it using a private Troposphere method, which works but which I'm not happy with:
if condition:
    lb.__setattr__('ConnectionSettings', elb.ConnectionSettings(
        ....
    ))
I'm still interested in a solution which doesn't involve using a private method from outside the module.
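For what it's worth, the builtin setattr does the same thing through the public interface, so you don't need to call the dunder method directly. A minimal illustration, using a plain object as a stand-in for the Troposphere resource:

```python
class FakeLoadBalancer:
    """Stand-in for a Troposphere resource; attribute assignment works the same way."""
    pass

lb = FakeLoadBalancer()
condition = True

if condition:
    # Public equivalent of lb.__setattr__('ConnectionSettings', ...)
    setattr(lb, 'ConnectionSettings', {'IdleTimeout': 30})

print(getattr(lb, 'ConnectionSettings', None))
```

This is exactly equivalent to plain attribute assignment (`lb.ConnectionSettings = ...`), which is what the README example below relies on.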
The main README alludes to just using the attribute names like this:
from troposphere import Template
import troposphere.elasticloadbalancing as elb

template = Template()
webelb = elb.LoadBalancer(
    'ElasticLoadBalancer',
    Listeners=[
        elb.Listener(
            LoadBalancerPort="80",
            InstancePort="80",
            Protocol="HTTP",
        ),
    ],
)
if True:
    webelb.ConnectionSettings = elb.ConnectionSettings(IdleTimeout=30)
elasticLB = template.add_resource(webelb)
print(template.to_json())
So the big question is - where does the configuration for ConnectionSettings come from? Inside CloudFormation itself (and Troposphere) there are Conditions, Parameters, and the AWS::NoValue Ref. I use those fairly heavily in the stacker RDS templates:
Here's the Parameter: https://github.com/remind101/stacker/blob/master/stacker/blueprints/rds/base.py#L126
Here's the condition: https://github.com/remind101/stacker/blob/master/stacker/blueprints/rds/base.py#L243
And here's how it's used in a resource later, optionally - if the StorageType Parameter is blank, we use AWS::NoValue, which is a pseudo Ref for not actually setting something: (Sorry, can't post more than 2 links - go to line 304 in the same file to see what I'm talking about)
If you're not using Parameters however, and instead doing all your conditions in python, you could do something similar. Something like:
connection_setting = <actual connection setting code> if condition else Ref("AWS::NoValue")
The other option is to do it entirely in Python, which is basically your example. Hopefully that helps - there are a lot of ways to deal with this, including creating two different ELB objects (one with connection settings, one without) and then picking one with either Python code (if condition) or CloudFormation conditions.
If the value is known in Python (i.e., it does not originate from a CloudFormation Parameter), you can use a dictionary to add optional attributes to resources in a Troposphere template:
from troposphere import Template
import troposphere.elasticloadbalancing as elb

template = Template()

my_idle_timeout = 30  # replace this with your method for determining the value

my_elb_params = {}
if my_idle_timeout is not None:
    my_elb_params['ConnectionSettings'] = elb.ConnectionSettings(
        IdleTimeout=my_idle_timeout,
    )

my_elb = template.add_resource(elb.LoadBalancer(
    'ElasticLoadBalancer',
    Listeners=[
        elb.Listener(
            LoadBalancerPort="80",
            InstancePort="80",
            Protocol="HTTP",
        ),
    ],
    **my_elb_params,
))

print(template.to_json())
If the value originates from a CloudFormation Parameter, you need to create a Condition to test the value of the parameter and use Ref("AWS::NoValue") if no value was provided for the parameter, e.g.:
from troposphere import Template, Parameter, Equals, Ref, If
import troposphere.elasticloadbalancing as elb

template = Template()

my_idle_timeout = template.add_parameter(Parameter(
    "ElbIdleTimeout",
    Description="Idle timeout for the Elastic Load Balancer",
    Type="Number",
))

no_idle_timeout = "NoIdleTimeout"
template.add_condition(
    no_idle_timeout,
    Equals(Ref(my_idle_timeout), ""),
)

my_elb = template.add_resource(elb.LoadBalancer(
    'ElasticLoadBalancer',
    Listeners=[
        elb.Listener(
            LoadBalancerPort="80",
            InstancePort="80",
            Protocol="HTTP",
        ),
    ],
    ConnectionSettings=If(
        no_idle_timeout,
        Ref("AWS::NoValue"),
        elb.ConnectionSettings(
            IdleTimeout=Ref(my_idle_timeout),
        ),
    ),
))

print(template.to_json())
Related
A similar issue, but not in Python CDK: How best to retrieve AWS SSM parameters from the AWS CDK?
Hi, I am saving an ARN in SSM and my variable name is like this:
test-value-for-my-job-executor-new-function-lambda-arn
When I try to get it using SSM like this:
from aws_cdk import aws_ssm as ssm
my_arn = ssm.StringParameter.value_from_lookup(self, "test-value-for-my-job-executor-new-function-lambda-arn")
self.scan_pre_process = _lambda.Function.from_function_arn(self, "my-job-executor", my_arn)
I get this error
jsii.errors.JSIIError: ARNs must start with "arn:" and have at least 6 components: dummy-value-for-test-value-for-my-job-executor-new-function-lambda-arn
I even tried changing the name, but I still get the same issue.
value_from_lookup is a Context Method. It looks up the SSM Parameter's cloud-side value once at synth-time and caches its value in cdk.context.json. Context methods return dummy variables until they resolve. That's what's happening in your case. CDK is trying to use my_arn before the lookup has occurred.
There are several fixes:
Quick-and-dirty: comment out the _lambda.Function.from_function_arn line, synth the app, then uncomment the line. This simply forces the caching to happen before the value is used.
Hardcode the Lambda ARN into _lambda.Function.from_function_arn. If the Function name and version/alias are stable, no real downside to hardcoding. Tip: use the Stack.format_arn method.
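If you go the hardcoding route, the shape of a Lambda function ARN is fixed, so it can be assembled from its parts. A plain-Python sketch (the region, account, and function name below are placeholders - in CDK, Stack.format_arn fills in the stack's own region and account for you):

```python
def lambda_function_arn(region: str, account: str, function_name: str) -> str:
    """Hand-rolled Lambda function ARN; CDK's Stack.format_arn builds
    the same string using the stack's environment."""
    return f"arn:aws:lambda:{region}:{account}:function:{function_name}"

print(lambda_function_arn("us-east-1", "123456789012", "my-job-executor"))
```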
There is an elegant third alternative using cdk.Lazy. However, I believe Lazy is still broken in Python. Anyway, here is how it would look in Typescript:
const my_arn = cdk.Lazy.string({
  produce: () =>
    ssm.StringParameter.valueFromLookup(
      this,
      'test-value-for-my-job-executor-new-function-lambda-arn'
    ),
});
I have below function which will create S3 bucket with cdk code:
def __create_s3_components_bucket(self, this_dir: str, props):
    """Create S3 components bucket"""
    s3_bucket = s3.Bucket(
        self,
        "BucketForImageBuilder",
        bucket_name="some_bucket_name_1234",
        block_public_access=s3.BlockPublicAccess(
            block_public_acls=True,
            block_public_policy=True,
            ignore_public_acls=True,
            restrict_public_buckets=True,
        ),
        public_read_access=False,
        encryption=s3.BucketEncryption.S3_MANAGED,
        removal_policy=cdk.RemovalPolicy.DESTROY,
        auto_delete_objects=True,
        lifecycle_rules=[
            s3.LifecycleRule(
                abort_incomplete_multipart_upload_after=cdk.Duration.days(amount=2),
                enabled=True,
                expiration=cdk.Duration.days(amount=180),
                transitions=[
                    s3.Transition(
                        transition_after=cdk.Duration.days(amount=30),
                        storage_class=s3.StorageClass.ONE_ZONE_INFREQUENT_ACCESS,
                    )
                ],
            )
        ],
    )
I would like to create helper / decorator function to implement this part of above code for all buckets defined in different stacks:
block_public_access=s3.BlockPublicAccess(
    block_public_acls=True,
    block_public_policy=True,
    ignore_public_acls=True,
    restrict_public_buckets=True,
),
I understand the theory (I watched a few YouTube videos and read some examples) that decorators add extra functionality, etc., but I can't get my head around doing that in this example. Is it doable? If yes, how? Can anyone share a code example please?
So you want to be able to do something like:
my_bucket = create_s3_bucket(self, "MyBucketName")
If so - and you stated you want this helper function to be usable from different stacks - then you will want to do something like the following in your utility/helper file (using Python):
from aws_cdk import aws_s3

def create_s3_bucket(scope, bucket_name: str) -> aws_s3.Bucket:
    return aws_s3.Bucket(
        scope, bucket_name,
        bucket_name=bucket_name,
        # property1=value1, etc. - whatever defaults you want every bucket to share
    )
If you want to do more things with that bucket you can do so; just do it before the return, or create the bucket as its own object and then manipulate it as needed before returning it.
Two things to note. First, if the function implementing this commonality is NOT in the same stack class, then you MUST pass it the self parameter as scope - the scope of a construct is how CDK knows which stack's template that construct's instructions go into. So, as you may have noticed, every construct in the init of a Stack starts with self, "LogicalID" - these are the two CloudFormation template variables needed for CloudFormation to actually work: the first defines the stack this instruction goes into, the second is the LogicalID of that CloudFormation resource (how CloudFormation identifies a resource that the stack controls, and decides when to update it or create a new one).
So, since you stated you intend to use this across multiple stacks, put it into its own helper/utility file and import it, then pass it the self variable when calling it so CDK knows which stack it's part of.
Second, you should return the object. Even if you don't plan on using it now, it's better to return it so you can make use of it later - to reference the Bucket.BucketName, or to add a role to it that's unique to that stack, or whatever.
You can also add additional parameters to the function for anything you want to specify, and set a default. For instance:
from aws_cdk import aws_s3
import aws_cdk as cdk

def create_s3_bucket(scope, bucket_name: str, removal: cdk.RemovalPolicy = cdk.RemovalPolicy.RETAIN) -> aws_s3.Bucket:
    return aws_s3.Bucket(
        scope, bucket_name,
        ...
        removal_policy=removal,
    )
I'm not 100% confident about decorator functions, but I am pretty sure that, given the way CDK construct initializers work, a decorator would not do what you are trying to do here, whereas this does. This is mostly because most properties cannot be set after the bucket is instantiated in your code. The few that can could be put in a decorator, I suppose - something like:
Pseudocode:
def custom_s3_decorator():
    yield
    bucket.add_removal_policy(blah)
    bucket.add_to_principle_policy(blah)
etc., with the correct code needed to make a decorator work - but again, I'm pretty sure most of what you want isn't possible to do after the instantiation, so you'd want to use a helper function.
I want to access a property that exists in self.context using a variable. I have a variable named prop; it contains a value that is already set in self.context. I am using the Flask-RESTPlus framework.
prop = 'binding'
If I try to access this property as shown below, then it gives me an error:
Object is not subscriptable
I want to know if there is any way to get the value? Like doing this:
print(self.context[prop])
I only found one solution - I don't know if it's correct or not - I tried this:
self.context.__getattribute__(prop)
There are two ways to do this, the simplest is using getattr:
getattr(self.context, prop)
This function internally calls __getattribute__ so it's the same as your code, just a little neater.
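A minimal, framework-free illustration (the Context class here is just a stand-in for whatever self.context actually is in Flask-RESTPlus):

```python
class Context:
    """Stand-in for self.context."""
    pass

ctx = Context()
ctx.binding = False

prop = 'binding'
print(getattr(ctx, prop))             # attribute looked up by name
print(getattr(ctx, 'missing', None))  # third argument avoids AttributeError
```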
However, you still have the problem that you probably only want to access some of the context, while not risking editing values set by different parts of your application. A much better way would be to store a dict in context:
self.context.attributes = {}
self.context.attributes["binding"] = False
print(self.context.attributes[prop])
This way, only certain context variables can be accessed and those that are meant to be dynamic don't mess with any used by your application code directly.
I would like to programmatically write a method that calls a boto3 method and change default parameters inside methods.
For example, I want to use my log bucket if log is set to True. Otherwise, don't log it. Something like this:
def my_run(log=False):
    log_string = "s3://mylogs" if log else None
    result = emr.run_job_flow(Name='EMRTest1',
                              LogUri=log_string,
                              ...
                              )
    return result
So for the default value I used None. However, boto3 expects a string. I tried an empty string as the default, and that isn't a valid value either.
I know that if I don't specify LogUri, it won't get stored. So I could do it with if statements, like this:
def my_run(log=False):
    if log:
        result = emr.run_job_flow(Name='EMRTest1',
                                  LogUri="s3://mylogs",
                                  ...
                                  )
    else:
        result = emr.run_job_flow(Name='EMRTest1',
                                  ...
                                  )
    return result
But that's a horrible way to do it. And LogUri was just an example - I want to be able to change other parameters too, and I just can't write all those nested ifs.
Is there a default value for various types like strings in boto3 that I can use?
Edit 1
From the first comment below,
http://boto3.readthedocs.org/en/latest/guide/events.html#provide-client-params
Interesting API. Not well documented, though. Their example for the s3 client works fine:
s3 = boto3.client('s3')

# Access the event system on the S3 client
event_system = s3.meta.events

# Create a function
def add_my_bucket(params, **kwargs):
    print("Hello")
    # Add the name of the bucket you want to default to.
    if 'Bucket' not in params:
        params['Bucket'] = 'mybucket'

# Register the function to an event
event_system.register('provide-client-params.s3.ListObjects', add_my_bucket)
response = s3.list_objects()
Then the response is good and I also see "Hello" printed.
But now I try to make an example for emr's run_job_flow:
def my_run(name):
    def setName(params, **kwargs):
        print("Hello")
        params['Name'] = name

    current_emr = boto3.client('emr')
    event_system = current_emr.meta.events
    event_system.register('provide-client-params.emr.RunJobFlow', setName)
    current_emr.run_job_flow(...)
When I run this, I get:
Missing required parameter in input: "Name"
Am I using the wrong syntax somehow?
I did use RunJobFlow which I got from current_emr.meta.method_to_api_mapping
Maybe it's not provide-client-params for emr?
I also don't see "Hello" printed
There are no default values except the service specified ones, in which case we just send nothing at all. If you want to change parameters at run-time you can hook into the events system. You can read up about that here. The examples there are pretty close to what I think you want to do.
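If the decision can be made in Python before the call, a kwargs dict avoids both the events system and the nested ifs: build a dict containing only the parameters you actually want to send, and unpack it into the call. A sketch (the bucket URI and parameter names are taken from the question; the helper name is illustrative):

```python
def build_run_job_flow_kwargs(name, log=False, log_uri="s3://mylogs"):
    """Collect only the parameters that should actually be sent;
    omitted keys are simply never passed to boto3."""
    kwargs = {"Name": name}
    if log:
        kwargs["LogUri"] = log_uri
    # Any other optional parameter can be added to the dict the same way.
    return kwargs

# Usage with boto3 would then be:
#     emr.run_job_flow(**build_run_job_flow_kwargs("EMRTest1", log=True))
print(build_run_job_flow_kwargs("EMRTest1"))
print(build_run_job_flow_kwargs("EMRTest1", log=True))
```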
I have a question about Django and its routing system. I believe it can be powerful, but right now I am struggling with an issue I don't experience when working in other frameworks, and I can't seem to get a grip on it. Worth mentioning that I don't have much experience with Django at this point.
The issue is simple - I have a view which takes two optional parameters, defined like this
def test_view(id=None, grid=None):
Both parameters are optional and frequently are not passed. id can only be an integer, and grid will never be an integer (it is a special string used to control a datagrid when I don't want to use sessions). I have a route defined like this:
url(a(r'^test_view (\/(?P<id>\d+))? (\/(?P<grid>[^\/]+))? \/?$'), views.test_view, name='test_view'),
This works great and I am not having trouble with using one-way routes. But when I try to use the reverse function or url template tag, following error occurs:
Reverse for 'test_view' with arguments '('20~id~desc~1',)' and keyword arguments '{}' not found.
In this example I tried to find reverse without the id, just with the grid parameter. I have tried various methods of passing parameters to the reverse function:
(grid, )
(None, grid)
('', grid)
{id=None, grid=grid}
All of them result in the same error or a similar one.
Is there a way to implement this in Django? Maybe just disable the pretty URL for the grid parameter. That is how I do it in, for example, the Nette framework for PHP: instead of having a URL like 'localhost/test_view/1/20~id~desc~1', I have a URL like 'localhost/test_view/1?grid=20~id~desc~1'. This would be completely sufficient, but I have no idea how to achieve this in Django.
As you note in your question, the best way to achieve this is to use standard GET query parameters, rather than doing it in the path itself. In Django you do that exclusively in the view; the URL itself is then just
url(r'^test_view$', views.test_view, name='test_view'),
and you request it via localhost/test_view?id=1&grid=20~id~desc~1. You get the params from request.GET, which is a dictionary-like object; you can use .get so that it does not raise a KeyError when the key is not provided.
def test_view(request):
    id = request.GET.get('id')
    grid = request.GET.get('grid')
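To build such a URL for a link, reverse('test_view') gives you the base path, and the query string can be appended by hand with urlencode. A small sketch (the helper name is illustrative, not a Django API; '/test_view' stands in for the reversed path):

```python
from urllib.parse import urlencode

def url_with_params(base, **params):
    """Append only the params that were actually provided (non-None) as a query string."""
    query = {k: v for k, v in params.items() if v is not None}
    return f"{base}?{urlencode(query)}" if query else base

print(url_with_params('/test_view', id=1, grid='20~id~desc~1'))
print(url_with_params('/test_view', grid='20~id~desc~1'))
print(url_with_params('/test_view'))
```

Because absent parameters are simply dropped, the same helper covers the "id only", "grid only", and "neither" cases that the path-based regex made so awkward to reverse.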