CDK WAF Python multiple statement values error - python

I have AWS WAF CDK that is working with rules, and now I'm trying to add a rule in WAF with multiple statements, but I'm getting this error:
Resource handler returned message: "Error reason: You have used none or multiple values for a field that requires exactly one value., field: STATEMENT, parameter: Statement (Service: Wafv2, Status Code: 400, Request ID: 6a36bfe2-543c-458a-9571-e929142f5df1, Extended Request ID: null)" (RequestToken: b751ae12-bb60-bb75-86c0-346926687ea4, HandlerErrorCode: InvalidRequest)
My Code:
{
    'name': 'ruleName',
    'priority': 3,
    'statement': {
        'orStatement': {
            'statements': [
                {
                    'iPSetReferenceStatement': {
                        'arn': 'arn:myARN'
                    }
                },
                {
                    'iPSetReferenceStatement': {
                        'arn': 'arn:myARN'
                    }
                }
            ]
        }
    },
    'action': {
        'allow': {}
    },
    'visibilityConfig': {
        'sampledRequestsEnabled': True,
        'cloudWatchMetricsEnabled': True,
        'metricName': 'ruleName'
    }
},

There are two things going on there:
Firstly, your capitalization is off. iPSetReferenceStatement cannot be parsed and creates an empty statement reference. The correct key is ipSetReferenceStatement.
However, as mentioned here, there is a jsii implementation bug causing some issues with the IPSetReferenceStatementProperty. This causes it not to be parsed properly resulting in a jsii error when synthesizing.
You can fix it by using the workaround mentioned in the post.
Add to your file containing the construct:
import jsii
from aws_cdk import aws_wafv2 as wafv2 # just for clarity, you might already have this imported
@jsii.implements(wafv2.CfnRuleGroup.IPSetReferenceStatementProperty)
class IPSetReferenceStatement:
    @property
    def arn(self):
        return self._arn

    @arn.setter
    def arn(self, value):
        self._arn = value
Then define your IP set reference statements as follows:
ip_set_ref_stmnt = IPSetReferenceStatement()
ip_set_ref_stmnt.arn = "arn:aws:..."
ip_set_ref_stmnt_2 = IPSetReferenceStatement()
ip_set_ref_stmnt_2.arn = "arn:aws:..."
Then in the rules section of the webacl, you can use it as follows:
...
rules=[
    {
        'name': 'ruleName',
        'priority': 3,
        'statement': {
            'orStatement': {
                'statements': [
                    wafv2.CfnWebACL.StatementProperty(
                        ip_set_reference_statement=ip_set_ref_stmnt
                    ),
                    wafv2.CfnWebACL.StatementProperty(
                        ip_set_reference_statement=ip_set_ref_stmnt_2
                    ),
                ]
            }
        },
        'action': {
            'allow': {}
        },
        'visibilityConfig': {
            'sampledRequestsEnabled': True,
            'cloudWatchMetricsEnabled': True,
            'metricName': 'ruleName'
        }
    }
]
...
This should synthesize your stack as expected.
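As a side note, on CDK versions where the jsii bug has been fixed, the plain-dict form with the corrected ipSetReferenceStatement key should work without the workaround class. A small helper to build such a rule from a list of ARNs (the helper name is mine, purely illustrative):

```python
def build_or_ip_rule(name, priority, arns, metric_name=None):
    """Build a WAF rule dict OR-ing several IP set references.

    Note the lower-case 'ip' in 'ipSetReferenceStatement'; the
    mis-capitalized 'iPSetReferenceStatement' is silently dropped.
    """
    return {
        'name': name,
        'priority': priority,
        'statement': {
            'orStatement': {
                'statements': [
                    {'ipSetReferenceStatement': {'arn': arn}} for arn in arns
                ]
            }
        },
        'action': {'allow': {}},
        'visibilityConfig': {
            'sampledRequestsEnabled': True,
            'cloudWatchMetricsEnabled': True,
            'metricName': metric_name or name,
        },
    }

rule = build_or_ip_rule('ruleName', 3, ['arn:myARN', 'arn:myOtherARN'])
```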


filter based media search with google photos API (python)

I'm trying to use the mediaItems().search() method, using the following body:
body = {
    "pageToken": page_token if page_token != "" else "",
    "pageSize": 100,
    "filters": {
        "contentFilter": {
            "includedContentCategories": {"LANDSCAPES","CITYSCAPES"}
        }
    },
    "includeArchiveMedia": include_archive
}
but the problem is that the set {"LANDSCAPES","CITYSCAPES"} should actually be a set of enums (as in Java enums), not strings as I've written. This is specified in the API: (https://developers.google.com/photos/library/reference/rest/v1/albums)
ContentFilter - This filter allows you to return media items based on the content type.
JSON representation
{
    "includedContentCategories": [
        enum (ContentCategory)
    ],
    "excludedContentCategories": [
        enum (ContentCategory)
    ]
}
Is there a proper way of solving this in Python?
Modification points:
When albumId and filters are used together, an error of The album ID cannot be set if filters are used. occurs. So when you want to use filters, please remove albumId.
The value of includedContentCategories is an array, as follows:
"includedContentCategories": ["LANDSCAPES","CITYSCAPES"]
includeArchiveMedia should be includeArchivedMedia.
Please include includeArchivedMedia in filters.
When the above points are reflected in your script, it becomes as follows.
Modified script:
body = {
    # "albumId": album_id, # <--- removed
    "pageToken": page_token if page_token != "" else "",
    "pageSize": 100,
    "filters": {
        "contentFilter": {
            "includedContentCategories": ["LANDSCAPES", "CITYSCAPES"]
        },
        "includeArchivedMedia": include_archive
    }
}
Reference:
Method: mediaItems.search
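For completeness, the corrected body can be assembled with a small helper (the function name is mine, not part of the Photos API); the category names must be valid ContentCategory enum values passed as plain strings in a JSON array, i.e. a Python list rather than a set:

```python
def build_search_body(categories, include_archived, page_token="", page_size=100):
    """Assemble a mediaItems.search request body.

    categories: ContentCategory enum names as plain strings.
    They are sent as a JSON array (a Python list), not a set.
    """
    return {
        "pageToken": page_token,
        "pageSize": page_size,
        "filters": {
            "contentFilter": {"includedContentCategories": list(categories)},
            "includeArchivedMedia": include_archived,  # note the 'd'
        },
    }

body = build_search_body(["LANDSCAPES", "CITYSCAPES"], include_archived=True)
```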

Python eve: using Sub Resource value in $match

I need to get a value inside a URL (/some/url/value as a Sub Resource) and use it as a parameter in an aggregation $match:
event/mac/11:22:33:44:55:66 --> {value:'11:22:33:44:55:66'}
and then:
{"$match":{"MAC":"$value"}},
Here is a non-working example:
event = {
    'url': 'event/mac/<regex("([\w:]+)"):value>',
    'datasource': {
        'source': "event",
        'aggregation': {
            'pipeline': [
                {"$match": {"MAC": "$value"}},
                {"$group": {"_id": "$MAC", "total": {"$sum": "$count"}}},
            ]
        }
    }
}
This example works correctly with:
event/mac/blablabla?aggregate={"$value":"aa:11:bb:22:cc:33"}
Any suggestions?
The real quick and easy way would be to strip the prefix from the path:
path = "event/mac/11:22:33:44:55:66"
value = path.replace("event/mac/", "")
# or
value = path.split("/")[-1]
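Equivalently, the capture that Eve's url regex performs can be reproduced standalone with Python's re module, which also validates the MAC-like format instead of blindly stripping the prefix:

```python
import re

# Reproduce the capture from the route 'event/mac/<regex("([\w:]+)"):value>'.
path = "event/mac/11:22:33:44:55:66"
m = re.fullmatch(r"event/mac/([\w:]+)", path)
value = m.group(1) if m else None
```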

Add a Protected Range to an existing NamedRange

I have an existing worksheet with an existing NamedRange for it and I would like to call the batch_update method of the API to protect that range from being edited by anyone other than the user that makes the batch_update call.
I have seen an example on how to add protected ranges via a new range definition, but not from an existing NamedRange.
I know I need to send an addProtectedRange request. Can I define the request body with a Sheetname!NamedRange notation?
this_range = worksheet_name + "!" + nrange
batch_update_spreadsheet_request_body = {
    'requests': [
        {
            "addProtectedRange": {
                "protectedRange": {
                    "range": {
                        "name": this_range,
                    },
                    "description": "Protecting xyz",
                    "warningOnly": False
                }
            }
        }
    ],
}
EDIT: Given @Tanaike's feedback, I adapted the call to something like:
body = {
    "requests": [
        {
            "addProtectedRange": {
                "protectedRange": {
                    "namedRangeId": namedRangeId,
                    "description": "Protecting via gsheets_manager",
                    "warningOnly": False,
                    "requestingUserCanEdit": False
                }
            }
        }
    ]
}
res2 = service.spreadsheets().batchUpdate(spreadsheetId=ssId, body=body).execute()
print(res2)
But although it lists the new protections, it still lists 5 different users (all of them) as editors. If I try to manually edit the protection added by my gsheets_manager script, it complains that the range is invalid.
Interestingly, it seems to ignore the requestingUserCanEdit flag, according to the returned message:
{u'spreadsheetId': u'NNNNNNNNNNNNNNNNNNNNNNNNNNNN', u'replies': [{u'addProtectedRange': {u'protectedRange': {u'requestingUserCanEdit': True, u'description': u'Protecting via gsheets_manager', u'namedRangeId': u'1793914032', u'editors': {}, u'protectedRangeId': 2012740267, u'range': {u'endColumnIndex': 1, u'sheetId': 1196959832, u'startColumnIndex': 0}}}}]}
Any ideas?
How about using namedRangeId for your situation? The flow of the sample script is as follows.
Retrieve namedRangeId using spreadsheets().get of Sheets API.
Set a protected range using namedRangeId using spreadsheets().batchUpdate of Sheets API.
Sample script:
nrange = "### name ###"
ssId = "### spreadsheetId ###"
res1 = service.spreadsheets().get(spreadsheetId=ssId, fields="namedRanges").execute()
namedRangeId = ""
for e in res1['namedRanges']:
    if e['name'] == nrange:
        namedRangeId = e['namedRangeId']
        break
body = {
    "requests": [
        {
            "addProtectedRange": {
                "protectedRange": {
                    "namedRangeId": namedRangeId,
                    "description": "Protecting xyz",
                    "warningOnly": False
                }
            }
        }
    ]
}
res2 = service.spreadsheets().batchUpdate(spreadsheetId=ssId, body=body).execute()
print(res2)
Note:
This script supposes that Sheets API can be used for your environment.
This is a simple sample script. So please modify it to your situation.
References:
ProtectedRange
Named and Protected Ranges
If this was not what you want, I'm sorry.
Edit:
In my above answer, I modified your script using your settings. If you want to protect the named range, please modify body as follows.
Modified body
body = {
    "requests": [
        {
            "addProtectedRange": {
                "protectedRange": {
                    "namedRangeId": namedRangeId,
                    "description": "Protecting xyz",
                    "warningOnly": False,
                    "editors": {"users": ["### your email address ###"]},  # Added
                }
            }
        }
    ]
}
With this, the named range can be modified only by you. I'm using such settings and I confirm that it works in my environment. But if in your situation this didn't work, I'm sorry.
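The whole lookup-and-protect flow can also be wrapped in one request-building helper (the function name is mine; it assumes the namedRanges list shape returned by spreadsheets().get(fields="namedRanges")):

```python
def protected_range_request(named_ranges, name, description, editors=None):
    """Build an addProtectedRange request body for a named range.

    named_ranges: the 'namedRanges' list from spreadsheets().get(...).
    Raises KeyError if the name is not found.
    """
    named_range_id = next(
        (e['namedRangeId'] for e in named_ranges if e['name'] == name), None)
    if named_range_id is None:
        raise KeyError(f"named range {name!r} not found")
    protected = {
        "namedRangeId": named_range_id,
        "description": description,
        "warningOnly": False,
    }
    if editors:
        protected["editors"] = {"users": list(editors)}
    return {"requests": [{"addProtectedRange": {"protectedRange": protected}}]}

# Example with a fake get() response (IDs are placeholders):
fake_named_ranges = [{"name": "myRange", "namedRangeId": "1793914032"}]
body = protected_range_request(fake_named_ranges, "myRange", "Protecting xyz",
                               editors=["me@example.com"])
```

The resulting body can then be passed to spreadsheets().batchUpdate() as in the sample script above.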

Is it possible to use shorthand option_settings with boto3?

When writing .ebextensions .config files, Amazon allows long- and short-form entries; for example, these two configurations are identical:
Long form:
"option_settings": [
{
'Namespace': 'aws:rds:dbinstance',
'OptionName': 'DBEngine',
'Value': 'postgres'
},
{
'Namespace': 'aws:rds:dbinstance',
'OptionName': 'DBInstanceClass',
'Value': 'db.t2.micro'
}
]
Shortform:
"option_settings": {
"aws:rds:dbinstance": {
"DBEngine": "postgres",
"DBInstanceClass": "db.t2.micro"
}
}
However, all of the configurations I've seen only specify using a long form with boto3:
response = eb_client.create_environment(
    ... trimmed ...
    OptionSettings=[
        {
            'Namespace': 'aws:rds:dbinstance',
            'OptionName': 'DBEngineVersion',
            'Value': '5.6'
        },
    ... trimmed ...
)
Is it possible to use a dictionary with shortform entries with boto3?
Bonus: If not, why not?
Trial and error suggests no, you cannot use the short-form config type.
However, if you are of that sort of persuasion you can do this:
def short_to_long(_in):
    out = []
    for namespace, key_vals in _in.items():
        for optname, value in key_vals.items():
            out.append(
                {
                    'Namespace': namespace,
                    'OptionName': optname,
                    'Value': value
                }
            )
    return out
Then elsewhere:
response = eb_client.create_environment(
    OptionSettings=short_to_long({
        "aws:rds:dbinstance": {
            "DBDeletionPolicy": "Delete",  # or snapshot
            "DBEngine": "postgres",
            "DBInstanceClass": "db.t2.micro"
        },
    })
)
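If you ever need to go the other way, for example to compare a deployed environment's settings against a short-form config, the inverse conversion is just as small (long_to_short is my name for it, not a boto3 feature):

```python
def long_to_short(settings):
    """Group long-form OptionSettings entries back into short form by namespace."""
    out = {}
    for s in settings:
        out.setdefault(s['Namespace'], {})[s['OptionName']] = s['Value']
    return out

long_form = [
    {'Namespace': 'aws:rds:dbinstance', 'OptionName': 'DBEngine', 'Value': 'postgres'},
    {'Namespace': 'aws:rds:dbinstance', 'OptionName': 'DBInstanceClass', 'Value': 'db.t2.micro'},
]
short_form = long_to_short(long_form)
```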

JSON Schema: Input malformed

I'm using Tornado_JSON, which is based on jsonschema, and there is a problem with my schema definition. I tried fixing it in an online schema validator, and the problem seems to lie in "additionalItems": True. True with a capital T works for Python but leads to an error in the online validator (Schema is invalid JSON.). With true the online validator is happy and the example JSON validates against the schema, but my Python script doesn't start anymore (NameError: name 'true' is not defined). Can this be resolved somehow?
@schema.validate(
    input_schema={
        'type': 'object',
        'properties': {
            'DB': {
                'type': 'number'
            },
            'values': {
                'type': 'array',
                'items': [
                    {
                        'type': 'array',
                        'items': [
                            {
                                'type': 'string'
                            },
                            {
                                'type': [
                                    'number',
                                    'string',
                                    'boolean',
                                    'null'
                                ]
                            }
                        ]
                    }
                ],
                'additionalItems': true
            }
        }
    },
    input_example={
        'DB': 22,
        'values': [['INT', 44], ['REAL', 33.33], ['CHAR', 'b']]
    }
)
I changed it according to your comments (external file with json.loads()). Perfect. Thank you.
Put the schema in a triple-quoted string or an external file, then parse it with json.loads(). Use the lower-case spelling.
The error stems from trying to put a builtin Python datatype into a JSON schema. The latter is a template syntax that is used to check type consistency and should not hold actual data. Instead, under input_schema you'll want to define "additionalItems" to be of { "type": "boolean" } and then add it to the test JSON in your input_example with a boolean after for testing purposes.
Also, I'm not too familiar with Tornado_JSON but it looks like you aren't complying with the schema definition language by placing "additionalItems" inside of the "values" property. Bring that up one level.
More specifically, I think what you're trying to do should look like:
"values": {
...value schema definition...
}
"additionalItems": {
"type": "boolean"
}
And the input examples would become:
input_example={
    "DB": 22,
    "values": [['INT', 44], ['REAL', 33.33], ['CHAR', 'b']],
    "additionalItems": true
}
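To make the accepted workaround concrete, here is a minimal sketch of the triple-quoted-string approach: keep the schema as valid JSON (lower-case true) and let json.loads() convert the JSON literals into Python's True/False/None:

```python
import json

# Valid JSON: lower-case literals; json.loads maps true -> True.
schema_text = """
{
    "type": "object",
    "properties": {
        "DB": {"type": "number"},
        "values": {
            "type": "array",
            "additionalItems": true
        }
    }
}
"""
input_schema = json.loads(schema_text)
```

The same call works unchanged if schema_text is read from an external .json file instead.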
