I'm trying to create a DynamoDB table using a CloudFormation stack, but I keep receiving a 'CREATE_FAILED' error in the AWS console and I'm not sure where I'm going wrong.
My method to create_stack:
import json

import boto3
from botocore.exceptions import ClientError

cf = boto3.client('cloudformation')
stack_name = 'teststack'

with open('dynamoDBTemplate.json') as json_file:
    template = json.load(json_file)

template = str(template)

try:
    response = cf.create_stack(
        StackName=stack_name,
        TemplateBody=template,
        TimeoutInMinutes=123,
        ResourceTypes=[
            'AWS::DynamoDB::Table',
        ],
        OnFailure='DO_NOTHING',
        EnableTerminationProtection=True
    )
    print(response)
except ClientError as e:
    print(e)
And here is my JSON file:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "myDynamoDBTable": {
            "Type": "AWS::DynamoDB::Table",
            "Properties": {
                "AttributeDefinitions": [
                    {
                        "AttributeName": "Filename",
                        "AttributeType": "S"
                    },
                    {
                        "AttributeName": "Positive Score",
                        "AttributeType": "S"
                    },
                    {
                        "AttributeName": "Negative Score",
                        "AttributeType": "S"
                    },
                    {
                        "AttributeName": "Mixed Score",
                        "AttributeType": "S"
                    }
                ],
                "KeySchema": [
                    {
                        "AttributeName": "Filename",
                        "KeyType": "HASH"
                    }
                ],
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": "5",
                    "WriteCapacityUnits": "5"
                },
                "TableName": "testtable"
            }
        }
    }
}
My console prints the created stack but there is no clear indication in the console as to why it keeps failing.
Take a look at the Events tab for your stack. It will show you the detailed actions and explain which step first failed. Specifically it will tell you:
One or more parameter values were invalid: Number of attributes in KeySchema does not exactly match number of attributes defined in AttributeDefinitions (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: 12345; Proxy: null)
The problem is that you have provided definitions for all of your table attributes. You should only provide the key attributes.
Per the AttributeDefinitions documentation:
[AttributeDefinitions is] A list of attributes that describe the key schema for the table and indexes.
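A minimal sketch of how the relevant part of the Properties block could look after dropping the non-key attributes (the score fields can still be written on each item at put time, since DynamoDB does not require non-key attributes to be declared in the table definition):
"AttributeDefinitions": [
    {
        "AttributeName": "Filename",
        "AttributeType": "S"
    }
],
"KeySchema": [
    {
        "AttributeName": "Filename",
        "KeyType": "HASH"
    }
],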
I already have a namespace created and defined tags applied to my resources. When I try adding new tags to the resources, the old tags get deleted.
I would like to keep the old data and return it along with the new tags. Please help me with how I can achieve this.
Get volume details from a specific compartment:
import oci

config = oci.config.from_file("~/.oci/config")
core_client = oci.core.BlockstorageClient(config)

get_volume_response = core_client.get_volume(
    volume_id="ocid1.test.oc1..<unique_ID>EXAMPLE-volumeId-Value")

# Get the data from the response
print(get_volume_response.data)
Output:
{
    "availability_domain": "eto:PHX-AD-1",
    "compartment_id": "ocid1.compartment.oc1..aaaaaaaapmj",
    "defined_tags": {
        "OMCS": {
            "CREATOR": "xyz#gmail.com"
        },
        "Oracle-Tags": {
            "CreatedBy": "xyz#gmail.com",
            "CreatedOn": "2022-07-5T08:29:24.865Z"
        }
    },
    "display_name": "test_VG",
    "freeform_tags": {},
    "id": "ocid1.volumegroup.oc1.phx.abced",
    "is_hydrated": null,
    "lifecycle_state": "AVAILABLE",
    "size_in_gbs": 100,
    "size_in_mbs": 102400,
    "source_details": {
        "type": "volumeIds",
        "volume_ids": [
            "ocid1.volume.oc1.phx.xyz"
        ]
    }
}
I want the API call below to update the tags while keeping the old data.
Old tags:
"defined_tags": {
    "OMCS": {
        "CREATOR": "xyz#gmail.com"
    },
    "Oracle-Tags": {
        "CreatedBy": "xyz#gmail.com",
        "CreatedOn": "2022-07-5T08:29:24.865Z"
    }
}
import oci

config = oci.config.from_file("~/.oci/config")
core_client = oci.core.BlockstorageClient(config)

update_volume_response = core_client.update_volume(
    volume_id="ocid1.test.oc1..<unique_ID>EXAMPLE-volumeId-Value",
    update_volume_details=oci.core.models.UpdateVolumeDetails(
        defined_tags={
            'OMCS': {
                'INSTANCE': 'TEST',
                'COMPONENT': 'temp1.mt.exy.vcn.com'
            }
        },
        display_name="TEMPMT01"))

print(update_volume_response.data)
I also tried the following, but got an attribute error:
for tag in get_volume_response.data:
    def_tag.appened(tag.defined_tags)
return (def_tag)
Please help me with how I can append to the defined_tags.
Tags are represented as dicts in OCI, so appending works the same way as adding entries to any Python dict.
Below is the code for updating the defined_tags for block volumes in OCI.
import oci
from oci.config import from_file

# The config file is read from the user's home location, i.e. ~/.oci/config
configAPI = from_file()
core_client = oci.core.BlockstorageClient(configAPI)

get_volume_response = core_client.get_volume(
    volume_id="ocid1.volume.oc1.ap-hyderabad-1.ameen")

# Get the data from the response
volume_details = get_volume_response.data
defined_tags = getattr(volume_details, "defined_tags")
freeform_tags = getattr(volume_details, "freeform_tags")

# Add new tags as required. As defined_tags is a dict, adding a new key/value pair works like below.
# In case there are multiple tags to be added, use the update() method of dict.
defined_tags["OMCS"]["INSTANCE"] = "TEST"
defined_tags["OMCS"]["COMPONENT"] = "temp1.mt.exy.vcn.com"

update_volume_response = core_client.update_volume(
    volume_id="ocid1.volume.oc1.ap-hyderabad-1.ameen",
    update_volume_details=oci.core.models.UpdateVolumeDetails(
        defined_tags=defined_tags,
        freeform_tags=freeform_tags))

print(update_volume_response.data)
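If several keys need to be added to a tag namespace at once, the update() method mentioned in the comment above can be used instead of assigning the keys one by one; a small sketch, reusing the same example namespace and keys:
# Merge multiple key/value pairs into the existing "OMCS" namespace in one call.
defined_tags["OMCS"].update({
    "INSTANCE": "TEST",
    "COMPONENT": "temp1.mt.exy.vcn.com",
})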
I'm trying to use the mediaItems().search() method, using the following body:
body = {
    "pageToken": page_token if page_token != "" else "",
    "pageSize": 100,
    "filters": {
        "contentFilter": {
            "includedContentCategories": {"LANDSCAPES", "CITYSCAPES"}
        }
    },
    "includeArchiveMedia": include_archive
}
But the problem is that the set {"LANDSCAPES","CITYSCAPES"} should actually be a set of enums (as in Java enums), and not strings as I've written. This is specified in the API: (https://developers.google.com/photos/library/reference/rest/v1/albums)
ContentFilter - This filter allows you to return media items based on the content type.
JSON representation
{
  "includedContentCategories": [
    enum (ContentCategory)
  ],
  "excludedContentCategories": [
    enum (ContentCategory)
  ]
}
Is there a proper way of solving this in Python?
Modification points:
When albumId and filters are used together, the error The album ID cannot be set if filters are used. occurs. So when you want to use filters, please remove albumId.
The value of includedContentCategories is an array, as follows.
"includedContentCategories": ["LANDSCAPES","CITYSCAPES"]
includeArchiveMedia should be includeArchivedMedia.
Please include includeArchivedMedia in filters.
When the above points are reflected in your script, it becomes as follows.
Modified script:
body = {
    # "albumId": album_id,  # <--- removed
    "pageToken": page_token if page_token != "" else "",
    "pageSize": 100,
    "filters": {
        "contentFilter": {
            "includedContentCategories": ["LANDSCAPES", "CITYSCAPES"]
        },
        "includeArchivedMedia": include_archive
    }
}
Reference:
Method: mediaItems.search
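For reference, a rough sketch of how the modified body might be sent with the google-api-python-client library. This assumes a service object already built for the Photos Library API, and that page_token and include_archive are defined as in your script; the pagination loop is only illustrative:
# Assumes `service` was built for the Photos Library API, e.g. with
# googleapiclient.discovery.build("photoslibrary", "v1", credentials=creds, static_discovery=False).
media_items = []
while True:
    response = service.mediaItems().search(body=body).execute()
    media_items.extend(response.get("mediaItems", []))
    next_token = response.get("nextPageToken")
    if not next_token:
        break
    body["pageToken"] = next_token

print("Fetched %d media items" % len(media_items))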
I am creating a DynamoDB table via CDK.
const table = new dynamodb.Table(this, "my-table", {
  tableName: StackConfiguration.tableName,
  partitionKey: { name: "file_id", type: dynamodb.AttributeType.STRING },
});

table.addGlobalSecondaryIndex({
  indexName: "processed",
  // ideally would like a boolean here but that doesn't seem to be an option
  partitionKey: { name: "processed", type: dynamodb.AttributeType.STRING },
});
Then, using boto3, I am inserting data into the table like so:
failedRecord = {
    "file_id": str(file_id),
    "processed": "false",
    "payload": str(payload),
    "headers": str(headers),
}
table.put_item(Item=failedRecord)
I then have another service that reads the items and processes them, and I want to update the processed field (which is a global secondary index key) to true.
I have this code at the minute:
table.update_item(
    Key={"file_id": file_id}, AttributeUpdates={"processed": "true"},
)
But this results in the following error
Parameter validation failed:
Invalid type for parameter AttributeUpdates.processed, value: true, type: <class 'str'>, valid types: <class 'dict'>
DynamoDB handles data types in a very specific way, about which you can find more info here and here.
In your case, the issue is around the value "true" in your update command. Working with types can be tricky; boto3 provides both a TypeSerializer and a TypeDeserializer which you can use to handle the conversion for you:
import boto3
from boto3.dynamodb.types import TypeSerializer

serializer = TypeSerializer()

my_single_value = "processed"
print(serializer.serialize(my_single_value))
# {'S': 'processed'}

my_dict_object = {
    "processed": "true"
}
print({k: serializer.serialize(v) for k, v in my_dict_object.items()})
# {'processed': {'S': 'true'}}
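As the validation message suggests, each entry in AttributeUpdates must itself be a dict rather than a bare string when using the resource-level Table object. A minimal sketch of that legacy form, reusing the attribute names from the question (UpdateExpression, as used in the resolution below, is generally the preferred style):
# Each AttributeUpdates entry is a dict with Value and Action keys, not a bare string.
table.update_item(
    Key={"file_id": file_id},
    AttributeUpdates={
        "processed": {"Value": "true", "Action": "PUT"},
    },
)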
Resolved this with the following code
table.update_item(
    Key={"file_id": file_id},
    UpdateExpression="SET processed_status = :processed",
    ExpressionAttributeValues={":processed": "true"},
)
I am starting to use AWS SageMaker for the development of my machine learning model, and I'm trying to build a Lambda function to process the responses of a SageMaker labeling job. I have already created my own Lambda function, but when I try to read the contents of the event I can see that the event dict is completely empty, so I'm not getting any data to read.
I have already given enough permissions to the role of the Lambda function, including:
- AmazonS3FullAccess.
- AmazonSagemakerFullAccess.
- AWSLambdaBasicExecutionRole
I've tried using this code for the post-annotation Lambda (adapted for Python 3.6):
https://docs.aws.amazon.com/sagemaker/latest/dg/sms-custom-templates-step2-demo1.html#sms-custom-templates-step2-demo1-post-annotation
As well as this one in this git repository:
https://github.com/aws-samples/aws-sagemaker-ground-truth-recipe/blob/master/aws_sagemaker_ground_truth_sample_lambda/annotation_consolidation_lambda.py
But none of them seemed to work.
For creating the labeling job I'm using boto3's functions for SageMaker:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_labeling_job
This is the code I have for creating the labeling job:
def create_labeling_job(client, bucket_name, labeling_job_name, manifest_uri, output_path):
    print("Creating labeling job with name: %s" % (labeling_job_name))
    response = client.create_labeling_job(
        LabelingJobName=labeling_job_name,
        LabelAttributeName='annotations',
        InputConfig={
            'DataSource': {
                'S3DataSource': {
                    'ManifestS3Uri': manifest_uri
                }
            },
            'DataAttributes': {
                'ContentClassifiers': [
                    'FreeOfAdultContent',
                ]
            }
        },
        OutputConfig={
            'S3OutputPath': output_path
        },
        RoleArn='arn:aws:myrolearn',
        LabelCategoryConfigS3Uri='s3://' + bucket_name + '/config.json',
        StoppingConditions={
            'MaxPercentageOfInputDatasetLabeled': 100,
        },
        LabelingJobAlgorithmsConfig={
            'LabelingJobAlgorithmSpecificationArn': 'arn:image-classification'
        },
        HumanTaskConfig={
            'WorkteamArn': 'arn:my-private-workforce-arn',
            'UiConfig': {
                'UiTemplateS3Uri': 's3://' + bucket_name + '/templatefile'
            },
            'PreHumanTaskLambdaArn': 'arn:aws:lambda:us-east-1:432418664414:function:PRE-BoundingBox',
            'TaskTitle': 'Title',
            'TaskDescription': 'Description',
            'NumberOfHumanWorkersPerDataObject': 1,
            'TaskTimeLimitInSeconds': 600,
            'AnnotationConsolidationConfig': {
                'AnnotationConsolidationLambdaArn': 'arn:aws:my-custom-post-annotation-lambda'
            }
        }
    )
    return response
And this is the one I have for the Lambda function:
print("Received event: " + json.dumps(event, indent=2))
print("event: %s"%(event))
print("context: %s"%(context))
print("event headers: %s"%(event["headers"]))
parsed_url = urlparse(event['payload']['s3Uri']);
print("parsed_url: ",parsed_url)
labeling_job_arn = event["labelingJobArn"]
label_attribute_name = event["labelAttributeName"]
label_categories = None
if "label_categories" in event:
label_categories = event["labelCategories"]
print(" Label Categories are : " + label_categories)
payload = event["payload"]
role_arn = event["roleArn"]
output_config = None # Output s3 location. You can choose to write your annotation to this location
if "outputConfig" in event:
output_config = event["outputConfig"]
# If you specified a KMS key in your labeling job, you can use the key to write
# consolidated_output to s3 location specified in outputConfig.
kms_key_id = None
if "kmsKeyId" in event:
kms_key_id = event["kmsKeyId"]
# Create s3 client object
s3_client = S3Client(role_arn, kms_key_id)
# Perform consolidation
return do_consolidation(labeling_job_arn, payload, label_attribute_name, s3_client)
I've tried debugging the event object with:
print("Received event: " + json.dumps(event, indent=2))
But it just prints an empty dictionary: Received event: {}
I expect the output to be something like:
#Content of an example event:
{
"version": "2018-10-16",
"labelingJobArn": <labelingJobArn>,
"labelCategories": [<string>], # If you created labeling job using aws console, labelCategories will be null
"labelAttributeName": <string>,
"roleArn" : "string",
"payload": {
"s3Uri": <string>
}
"outputConfig":"s3://<consolidated_output configured for labeling job>"
}
Lastly, when I try to get the labeling job ARN with:
labeling_job_arn = event["labelingJobArn"]
I just get a KeyError (which makes sense because the dictionary is empty).
I am doing the same, but in the Labeled objects section I am getting a failed result, and inside my output objects I am getting the following error from the post-annotation Lambda function:
"annotation-case0-test3-metadata": {
"retry-count": 1,
"failure-reason": "ClientError: The JSON output from the AnnotationConsolidationLambda function could not be read. Check the output of the Lambda function and try your request again.",
"human-annotated": "true"
}
}
I found the problem: I needed to add the ARN of the role used by my Lambda function as a trusted entity on the role used for the SageMaker labeling job.
I just went to Roles > MySagemakerExecutionRole > Trust Relationships and added:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::xxxxxxxxx:role/My-Lambda-Role",
          ...
        ],
        "Service": [
          "lambda.amazonaws.com",
          "sagemaker.amazonaws.com",
          ...
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
This made it work for me.
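If you'd rather apply the same trust-relationship change programmatically instead of through the console, a rough sketch with boto3's IAM client could look like the following. The role name and policy document are placeholders, and update_assume_role_policy replaces the whole trust policy, so keep any existing principals you still need:
import json

import boto3

iam = boto3.client("iam")

# Placeholder trust policy; replace the ARN with the role used by your Lambda function
# and include any principals that are already on the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": ["arn:aws:iam::xxxxxxxxx:role/My-Lambda-Role"],
                "Service": ["lambda.amazonaws.com", "sagemaker.amazonaws.com"],
            },
            "Action": "sts:AssumeRole",
        }
    ],
}

# Overwrites the trust policy on the SageMaker execution role.
iam.update_assume_role_policy(
    RoleName="MySagemakerExecutionRole",
    PolicyDocument=json.dumps(trust_policy),
)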
I have an existing worksheet with an existing NamedRange for it and I would like to call the batch_update method of the API to protect that range from being edited by anyone other than the user that makes the batch_update call.
I have seen an example on how to add protected ranges via a new range definition, but not from an existing NamedRange.
I know I need to send the addProtectedRange request. Can I define the request body with a Sheetname!NamedRange notation?
this_range = worksheet_name + "!" + nrange

batch_update_spreadsheet_request_body = {
    'requests': [
        {
            "addProtectedRange": {
                "protectedRange": {
                    "range": {
                        "name": this_range,
                    },
                    "description": "Protecting xyz",
                    "warningOnly": False
                }
            }
        }
    ],
}
EDIT: Given @Tanaike's feedback, I adapted the call to something like:
body = {
    "requests": [
        {
            "addProtectedRange": {
                "protectedRange": {
                    "namedRangeId": namedRangeId,
                    "description": "Protecting via gsheets_manager",
                    "warningOnly": False,
                    "requestingUserCanEdit": False
                }
            }
        }
    ]
}
res2 = service.spreadsheets().batchUpdate(spreadsheetId=ssId, body=body).execute()
print(res2)
But although it lists the new protections, it still lists 5 different users (all of them) as editors. If I try to manually edit the protection added by my gsheets_manager script, it complains that the range is invalid.
Interestingly, it seems to ignore the requestingUserCanEdit flag, according to the returned message:
{u'spreadsheetId': u'NNNNNNNNNNNNNNNNNNNNNNNNNNNN', u'replies': [{u'addProtectedRange': {u'protectedRange': {u'requestingUserCanEdit': True, u'description': u'Protecting via gsheets_manager', u'namedRangeId': u'1793914032', u'editors': {}, u'protectedRangeId': 2012740267, u'range': {u'endColumnIndex': 1, u'sheetId': 1196959832, u'startColumnIndex': 0}}}}]}
Any ideas?
How about using namedRangeId for your situation? The flow of the sample script is as follows.
Retrieve namedRangeId using spreadsheets().get of Sheets API.
Set a protected range using namedRangeId using spreadsheets().batchUpdate of Sheets API.
Sample script:
nrange = "### name ###"
ssId = "### spreadsheetId ###"
res1 = service.spreadsheets().get(spreadsheetId=ssId, fields="namedRanges").execute()
namedRangeId = ""
for e in res1['namedRanges']:
if e['name'] == nrange:
namedRangeId = e['namedRangeId']
break
body = {
    "requests": [
        {
            "addProtectedRange": {
                "protectedRange": {
                    "namedRangeId": namedRangeId,
                    "description": "Protecting xyz",
                    "warningOnly": False
                }
            }
        }
    ]
}
res2 = service.spreadsheets().batchUpdate(spreadsheetId=ssId, body=body).execute()
print(res2)
Note:
This script supposes that the Sheets API can be used in your environment.
This is a simple sample script, so please modify it for your situation.
References:
ProtectedRange
Named and Protected Ranges
If this was not what you want, I'm sorry.
Edit:
In my answer above, I modified your script using your settings. If you want to protect the named range, please modify the body as follows.
Modified body
body = {
    "requests": [
        {
            "addProtectedRange": {
                "protectedRange": {
                    "namedRangeId": namedRangeId,
                    "description": "Protecting xyz",
                    "warningOnly": False,
                    "editors": {"users": ["### your email address ###"]},  # Added
                }
            }
        }
    ]
}
With this, the named range can be modified only by you. I'm using such settings and I have confirmed that it works in my environment. But if this didn't work in your situation, I'm sorry.