I am using AWS Rekognition on an S3 bucket of data that is currently located in the us-west-1 region. Unfortunately, AWS Rekognition is not supported in that region. I attempted to copy my bucket over to the us-west-2 region, but ran into difficulties getting object metadata. So my question is: how do I route my API call to another endpoint, specifically 'https://rekognition.us-east-1.amazonaws.com', even though the bucket is based in another region? Any help or advice would be appreciated.
EDIT: I thought it may be relevant to mention that I am running this in Python.
Assuming you are using boto3 in your Python script, you should be able to select a region when you create your client. Try something similar to this:
re_client = boto3.client('rekognition', region_name='us-east-1')
If your question is whether you can use AWS Rekognition in one region to access a bucket in another region: as far as I know, you can't. However, you might be able to either migrate your bucket to the specific region, or use S3 cross-region replication to make the data available in both regions.
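For completeness, here is a minimal sketch of the full flow once the images live in a bucket reachable from the same region as the Rekognition endpoint (the bucket and object names below are placeholders):

import boto3

# Rekognition client pinned to us-east-1, which maps to
# https://rekognition.us-east-1.amazonaws.com
rekognition = boto3.client('rekognition', region_name='us-east-1')

# Placeholder bucket/object names; the bucket should live in us-east-1 as well
response = rekognition.detect_labels(
    Image={'S3Object': {'Bucket': 'my-us-east-1-bucket', 'Name': 'photo.jpg'}},
    MaxLabels=10
)

for label in response['Labels']:
    print(label['Name'], label['Confidence'])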
I have tested almost every example I can find on the Internet for Amazon Textract and I can't get it to work. I can upload and download a file to S3 from my Python client, so the credentials should be OK. A lot of the errors point to some region failure, but I have tried every possible combination.
Here is one of my last test calls:
import boto3

def test_parse_3():
    # Document location in S3
    s3BucketName = "xx-xxxx-xx"
    documentName = "xxxx.jpg"

    # Amazon Textract client
    textract = boto3.client('textract')

    # Call Amazon Textract
    response = textract.detect_document_text(
        Document={
            'S3Object': {
                'Bucket': s3BucketName,
                'Name': documentName
            }
        })

    print(response)
It seems pretty easy, but it generates this error:
botocore.errorfactory.InvalidS3ObjectException: An error occurred (InvalidS3ObjectException) when calling the DetectDocumentText operation: Unable to get object metadata from S3. Check object key, region and/or access permissions.
Any ideas what's wrong, and does someone have a working example? (I know the indentation is not correct in the example code.)
I have also tested a lot of permission settings in AWS. The credentials are in a hidden file created by the AWS SDK.
I am sure you already know, but the bucket name is case sensitive. If you have verified that both the bucket and object name are correct, just make sure to add the appropriate region to your configuration.
I tested just reading from S3 without including the region in the configuration, and I was able to list the objects in the bucket with no issues. I am thinking this worked because S3 is supposed to be region agnostic. However, since Textract is region specific, you must define the region in your configuration when using Textract to get the data from the S3 bucket.
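A minimal sketch of pinning the Textract client to a region, assuming the bucket lives in us-east-1 (swap in whatever region your bucket actually uses):

import boto3

# Pin Textract to the region the bucket lives in (assumption: us-east-1)
textract = boto3.client('textract', region_name='us-east-1')

response = textract.detect_document_text(
    Document={'S3Object': {'Bucket': 'xx-xxxx-xx', 'Name': 'xxxx.jpg'}}
)

# Print only the detected text lines
for block in response['Blocks']:
    if block['BlockType'] == 'LINE':
        print(block['Text'])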
I realize this was asked a few months ago, but I am hoping this sheds some light for others who face this issue in the future.
I am trying to build face recognition using AWS Rekognition. I want to only upload a file to an S3 bucket, and have it automatically added from S3 to a Rekognition collection using Python. Please help me, thank you!
You can create an Amazon S3 Event that triggers an AWS Lambda function when a new object is uploaded to an Amazon S3 bucket.
The AWS Lambda function can make an index_faces() call to AWS Rekognition to detect faces in the input image and add them to the specified collection.
You should first create the face collection where the faces will be added. It is also useful to provide an ExternalImageId with each image so that the face can be associated with an identifier within your application. This ID will then be returned with future face-search results.
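Here is a minimal sketch of such a Lambda handler, assuming a collection named 'my-face-collection' has already been created (for example with rekognition.create_collection(CollectionId='my-face-collection')) and the function is wired to the S3 upload event:

import boto3
import os

rekognition = boto3.client('rekognition')

COLLECTION_ID = 'my-face-collection'  # assumption: created beforehand

def lambda_handler(event, context):
    # Pull the bucket and object key out of the S3 event notification
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = record['object']['key']

    # Index the faces in the uploaded image into the collection, using the
    # file name (without extension) as the ExternalImageId
    response = rekognition.index_faces(
        CollectionId=COLLECTION_ID,
        Image={'S3Object': {'Bucket': bucket, 'Name': key}},
        ExternalImageId=os.path.splitext(os.path.basename(key))[0],
        DetectionAttributes=['DEFAULT']
    )

    return {'FaceRecords': len(response['FaceRecords'])}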
Can we access a bucket with a bucket endpoint like <bucket-name>.s3.amazonaws.com using the Python SDK? I don't want to access the bucket with the following: bucket = conn.get_bucket(bucket_name).
I don't know why you need to access it this way, because the S3 endpoint is a fixed part; the only thing that changes is the name of your bucket (because bucket names are global).
But, in the end, what you are looking for is unfortunately not possible. You need to provide the bucket name to access the bucket and run operations on it.
Verified against the boto3 documentation, which you can check here:
S3 Boto documentation
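For reference, a minimal sketch of the standard boto3 pattern (the bucket name below is a placeholder); even with a custom endpoint_url on the client, the operations themselves still take the bucket name:

import boto3

s3 = boto3.resource('s3')

# You still have to name the bucket; the endpoint is derived for you
bucket = s3.Bucket('my-example-bucket')
for obj in bucket.objects.all():
    print(obj.key)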
I made a publicly listable bucket on Google Cloud Storage. I can see all the keys if I list the bucket objects in the browser. I was trying to use the create_anonymous_client() function so that I can list the bucket keys in a Python script, but it is giving me an exception. I have looked everywhere and still can't find the proper way to use the function.
from google.cloud import storage
client = storage.Client.create_anonymous_client()
a = client.lookup_bucket('publically_listable_bucket')
a.list_blobs()
Exception I am getting:
ValueError: Anonymous credentials cannot be refreshed.
Additional query: can I list and download the contents of public Google Cloud Storage buckets using boto3? If yes, how do I do it anonymously?
I was also struggling with this and couldn't find an answer anywhere online. It turns out you can access the bucket with just the bucket() method.
I'm not sure why, but this method can take several seconds sometimes.
client = storage.Client.create_anonymous_client()
bucket = client.bucket('publically_listable_bucket')
blobs = list(bucket.list_blobs())
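To also download an object anonymously (per the additional query in the question), the same anonymous client works; a sketch with a placeholder object name:

from google.cloud import storage

client = storage.Client.create_anonymous_client()
bucket = client.bucket('publically_listable_bucket')

# Placeholder object name; download it to a local file
blob = bucket.blob('some_object.jpg')
blob.download_to_filename('some_object.jpg')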
This error means the bucket you are attempting to list does not grant the right permission. You must give the "Storage Object Viewer" or "Storage Legacy Bucket Reader" role to "allUsers".
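A minimal sketch of granting that role from Python with an authenticated client (assuming you own the bucket); this mirrors adding allUsers as a Storage Object Viewer in the console:

from google.cloud import storage

client = storage.Client()  # authenticated client, not the anonymous one
bucket = client.bucket('publically_listable_bucket')

# Grant allUsers the Storage Object Viewer role so anonymous listing and reads work
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    'role': 'roles/storage.objectViewer',
    'members': {'allUsers'},
})
bucket.set_iam_policy(policy)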
According to the Amazon WorkDocs SDK page, you can use Boto3 to migrate your content to Amazon WorkDocs. I found the entry for the WorkDocs client in the Boto3 documentation, but every call seems to require an "AuthenticationToken" parameter. The only information I can find on AuthenticationToken is that it is supposed to be an "Amazon WorkDocs authentication token".
Does anyone know what this token is? How do I get one? Are there any code examples of using the WorkDocs client in Boto3?
I am trying to create a simple Python script that will upload a single document into WorkDocs, but there seems to be little to no information on how to do this. I was easily able to write a script that can upload/download files from S3, but this seems like something else entirely.