How can I automatically add faces to a collection using Python (AWS Rekognition)?

I am trying to build face recognition using AWS Rekognition. I want to only upload a file to an S3 bucket and have it added from S3 to a collection automatically using Python. Please help me, thank you!

You can create an Amazon S3 Event that triggers an AWS Lambda function when a new object is uploaded to an Amazon S3 bucket.
The AWS Lambda function can make an index_faces() call to AWS Rekognition to detect faces in the input image and add them to the specified collection.
You should first create the Face Collection where the faces will be added. It is also useful to provide an ExternalImageId with each image so that the face can be associated with an identifier within your application. This ID is then returned with future face-search results.
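As a rough sketch of how this can fit together, assuming the Lambda function is subscribed to the bucket's ObjectCreated events and that a collection named 'my-face-collection' has already been created (both names are placeholders), the handler could look like this:

import urllib.parse
import boto3

rekognition = boto3.client('rekognition')

COLLECTION_ID = 'my-face-collection'  # placeholder collection name

def lambda_handler(event, context):
    # Each record describes one object that was uploaded to the bucket.
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])

        # Index any faces in the uploaded image into the collection.
        # The object key doubles as the ExternalImageId so that later
        # face-search results can be tied back to this upload.
        response = rekognition.index_faces(
            CollectionId=COLLECTION_ID,
            Image={'S3Object': {'Bucket': bucket, 'Name': key}},
            ExternalImageId=key.replace('/', ':'),  # must match [a-zA-Z0-9_.\-:]+
            DetectionAttributes=[]
        )
        print('Indexed %d face(s) from %s' % (len(response['FaceRecords']), key))

Note that the function's execution role would also need permission to call rekognition:IndexFaces and to read the object from the bucket.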

Related

Access blob in storage container from function triggered by Event Grid

Just for reference, I am coming from AWS, so any comparisons would be welcome.
I need to create a function which detects when a blob is placed into a storage container and then downloads the blob to perform some actions on the data in it.
I have created a storage account with a container in it, and a function app with a Python function in it. I have then set up an Event Grid topic and subscription so that blob creation events trigger the function, and I can verify that this is working. This gives me the URL of the blob, which looks something like https://<name>.blob.core.windows.net/<container>/<blob-name>. However, when I then try to download this blob using BlobClient I get various errors about not having the correct authentication or key. Is there a way in which I can just allow the function to access the container in the same way that in AWS I would give a Lambda an execution role with S3 permissions, or do I need to create some key to pass through somehow?
Edit: I need this to run as soon as possible after the blob is put in the container, so as far as I can tell I need to use Event Grid triggers, not the normal blob triggers.
I need to create a function which detects when a blob is placed into a storage container and then downloads the blob to perform some actions on the data in it.
This can be achieved by using an Azure Blob storage trigger for Azure Functions.
The Blob storage trigger starts a function when a new or updated blob is detected. The blob contents are provided as input to the function.
This last sentence, "The blob contents are provided as input to the function", means the blob can be an input parameter to the Function. This way, there's no (or less) need for you to download it manually.
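As a minimal illustration, assuming a blob trigger binding named myblob is defined in function.json against the container, the Python function could be as small as this (the delay caveat is discussed further down):

import logging
import azure.functions as func

def main(myblob: func.InputStream):
    # The binding hands the blob to the function directly, so there is
    # no separate download or authentication step in the code.
    logging.info('Processing blob: %s', myblob.name)
    data = myblob.read()
    # ... perform some actions on the data ...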
Is there a way in which I can just allow the function to access the container in the same way that in AWS I would give a lambda an execution role with S3 permissions
Have a look at Using Managed Identity between Azure Functions and Azure Storage.
EDIT
Have I understood correctly that the normal blob trigger can have up to 10 minutes of delay?
This is correct: a Blob trigger could have up to 10 minutes of delay before it actually triggers the Function. The second part of the answer still stands, though.
The answer lay somewhere between #rickvdbosch's answer and Abdul's comment. I first had to assign an identity to the function, giving it permission to access the storage account. Then I was able to use the azure.identity.DefaultAzureCredential class to automatically handle the credentials for the BlobClient.
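For anyone arriving here later, a minimal sketch of that final setup, assuming the function app's system-assigned identity has been granted a data-plane role such as Storage Blob Data Reader on the storage account and that the Event Grid payload carries the blob URL, could look like this:

import azure.functions as func
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

def main(event: func.EventGridEvent):
    # The BlobCreated event payload contains the URL of the new blob.
    blob_url = event.get_json()['url']

    # DefaultAzureCredential picks up the function app's managed identity
    # (or local developer credentials when running locally).
    credential = DefaultAzureCredential()
    blob_client = BlobClient.from_blob_url(blob_url, credential=credential)

    data = blob_client.download_blob().readall()
    # ... perform some actions on the data ...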

How can I automatically delete AWS S3 files using Python?

I want to delete some files from S3 after a certain time. I need to set a time limit for each object, not for the bucket. Is that possible?
I am using boto3 to upload the file into S3.
region = "us-east-2"
bucket = os.environ["S3_BUCKET_NAME"]
credentials = {
'aws_access_key_id': os.environ["AWS_ACCESS_KEY"],
'aws_secret_access_key': os.environ["AWS_ACCESS_SECRET_KEY"]
}
client = boto3.client('s3', **credentials)
transfer = S3Transfer(client)
transfer.upload_file(file_name, bucket, folder+file_name,
extra_args={'ACL': 'public-read'})
Above is the code i used to upload the object.
You have many options here. Some ideas:
You can automatically delete files after a given time period by using Amazon S3 Object Lifecycle Management (a minimal sketch is shown after this list). See: How Do I Create a Lifecycle Policy for an S3 Bucket?
If your requirements are more detailed (e.g. different files after different time periods), you could add a Tag to each object specifying when you'd like the object deleted, or after how many days it should be deleted. Then, you could define an Amazon CloudWatch Events rule to trigger an AWS Lambda function at regular intervals (e.g. once a day or once an hour). You could then code the Lambda function to look at the tags on objects, determine whether they should be deleted, and delete the desired objects. You will find examples of this on the Internet, often called a Stopinator.
If you have an Amazon EC2 instance that is running all the time for other work, then you could simply create a cron job or Scheduled Task to run a similar program (without using AWS Lambda).
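For the first option, a minimal boto3 sketch that expires everything under a hypothetical uploads/ prefix seven days after creation (the bucket name and prefix are placeholders) might look like this:

import boto3

s3 = boto3.client('s3')

# Expire objects under the uploads/ prefix 7 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket='my-bucket',
    LifecycleConfiguration={
        'Rules': [
            {
                'ID': 'expire-uploads-after-7-days',
                'Filter': {'Prefix': 'uploads/'},
                'Status': 'Enabled',
                'Expiration': {'Days': 7},
            }
        ]
    }
)

Keep in mind that lifecycle expiration works in whole days and is evaluated roughly once a day, so it is not suitable if you need minute-level precision.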

python boto3: AWS Rekognition is unable to access S3 bucket

I am trying to upload the image to S3 and then have AWS Rekognition fetch it from S3 for face detection, but Rekognition cannot do that.
Here is my code - uploading and then detecting:
import boto3

s3 = boto3.client('s3')
s3.put_object(
    ACL='public-read',
    Body=open('/Users/1111/Desktop/kitten800300/kitten.jpeg', 'rb'),
    Bucket='mobo2apps',
    Key='kitten_img.jpeg'
)

rekognition = boto3.client('rekognition')
response = rekognition.detect_faces(
    Image={
        'S3Object': {
            'Bucket': 'mobo2apps',
            'Name': 'kitten_img.jpeg',
        }
    }
)
This produces an error:
Unable to get object metadata from S3. Check object key, region and/or access permissions.
Why is that?
About the permissions: I am authorized with AWS root access keys, so I have full access to all resources.
Here are a few things that you can check (a small diagnostic sketch follows this list):
Make sure the region of the S3 bucket is the same region the Rekognition client uses. Otherwise, it won't work. The S3 service is global, but every bucket is created in a specific region, and your AWS clients should use that same region.
Make sure the access keys of the user or role have the right set of permissions for the resource.
Make sure the file is actually uploaded.
Make sure there is no bucket policy applied that revokes access.
You can enable logging on your S3 bucket to see errors.
Make sure the bucket is not versioned. If versioned, specify the object version.
Make sure the object has the correct set of ACLs defined.
If the object is encrypted, make sure you have permission to use that KMS key to decrypt the object.
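To quickly test the first few points with the same credentials your script uses, you can look up the bucket's region and try to read the object's metadata yourself; the HeadObject call below is essentially what Rekognition's error message is complaining about (bucket and key taken from the question):

import boto3

bucket = 'mobo2apps'
key = 'kitten_img.jpeg'

s3 = boto3.client('s3')

# Which region does the bucket live in? (An empty LocationConstraint means us-east-1.)
location = s3.get_bucket_location(Bucket=bucket)['LocationConstraint'] or 'us-east-1'
print('Bucket region:', location)

# Can these credentials read the object metadata at all?
metadata = s3.head_object(Bucket=bucket, Key=key)
print('Object found, size:', metadata['ContentLength'])

# Create the Rekognition client in the same region as the bucket.
rekognition = boto3.client('rekognition', region_name=location)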
You may have to wait a while until the image upload is done.
The code runs straight through, so your JPEG starts uploading and Rekognition tries to detect the face in the image before the upload is finished. Since the upload is not complete when detect_faces runs, Rekognition cannot find the object in your S3 bucket. Add a short wait.
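If you want to be sure the object is visible in S3 before calling Rekognition, one simple approach is boto3's built-in object_exists waiter, which polls HeadObject until the object is available; a sketch using the bucket and key from the question:

import boto3

s3 = boto3.client('s3')
rekognition = boto3.client('rekognition')

bucket = 'mobo2apps'
key = 'kitten_img.jpeg'

# Block until S3 reports that the object exists.
waiter = s3.get_waiter('object_exists')
waiter.wait(Bucket=bucket, Key=key)

response = rekognition.detect_faces(
    Image={'S3Object': {'Bucket': bucket, 'Name': key}}
)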

What is the best way to convert a file in Amazon S3 with Django/Python?

I am making a website to get to know AWS and Django better. The idea is to let a user upload an Excel file, convert it to CSV, and then let the user download the converted CSV file.
I am using Amazon S3 for file storage. My question is: what is the best way to do the conversion? Is there any way to access the Excel file once it is stored in the S3 bucket and convert it to CSV via Django? Sorry if my question is silly, but I haven't been able to find much information on this online. Thanks in advance.
AWS Lambda is the best way. It has many types of event triggers, and you can specify the bucket and the event (put, delete, copy, etc.).
So what you have to do is create a Lambda function that is triggered only when an object gets inserted into the S3 bucket. In that Lambda function you can do your coding, such as getting the file from the S3 bucket and performing the conversion.
Since you are familiar with Python already, I suggest using Boto3 to get files from the S3 bucket.
Check out my blog about AWS Lambda with S3 if you want to get a clearer idea, and to learn more about permissions when working with an S3 bucket:
My Blog
On every put event on the bucket you can trigger an AWS Lambda function which converts your file format and saves the result to the desired bucket location.
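A rough sketch of such a handler, assuming pandas and openpyxl are packaged with the function (for example in a Lambda layer) and that the converted file is written back to the same bucket under a csv/ prefix (both assumptions, not requirements), might look like this:

import io
import os
import urllib.parse

import boto3
import pandas as pd  # pandas + openpyxl must be bundled with the function

s3 = boto3.client('s3')

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])

        # Download the uploaded Excel file into memory.
        obj = s3.get_object(Bucket=bucket, Key=key)
        df = pd.read_excel(io.BytesIO(obj['Body'].read()))

        # Convert to CSV and write it back under a csv/ prefix.
        csv_key = 'csv/' + os.path.splitext(os.path.basename(key))[0] + '.csv'
        s3.put_object(
            Bucket=bucket,
            Key=csv_key,
            Body=df.to_csv(index=False).encode('utf-8'),
        )

To avoid the function re-triggering on its own output, filter the S3 event notification to the .xlsx suffix, or write the CSV to a different bucket or prefix that the trigger does not cover.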

How do I use a different AWS endpoint with another AWS region?

I am using AWS Rekognition on an S3 bucket of data that is currently located in the us-west-1 region. Unfortunately, AWS Rekognition is not supported in that region. I attempted to copy my bucket over to the us-west-2 region, but encountered difficulties in getting metadata. As such, my question is: how do I route my API call to another endpoint, specifically 'https://rekognition.us-east-1.amazonaws.com', even though the bucket is based in another region? Any help or advice would be appreciated.
EDIT: I thought it may be relevant to mention, I am running this on Python.
Assuming you are using boto3 in your Python script, you should be able to select a region when you create the client. Try something like this:
re_client = boto3.client('rekognition', region_name='us-east-1')
If your question is whether you can use AWS Rekognition in one region to access a bucket in another region: as far as I know, you can't. However, you might be able to either migrate your bucket to the specific region, or use S3 cross-region replication to make the data available in both regions.
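If you do end up moving the data into a region that Rekognition supports, a minimal sketch of copying a single object server-side and then calling Rekognition there (both bucket names are placeholders) could be:

import boto3

source_bucket = 'my-bucket-us-west-1'   # original bucket
dest_bucket = 'my-bucket-us-east-1'     # bucket in a Rekognition-supported region
key = 'photo.jpeg'

# Clients pinned to the destination region.
s3 = boto3.client('s3', region_name='us-east-1')
rekognition = boto3.client('rekognition', region_name='us-east-1')

# Server-side copy of the object into the us-east-1 bucket.
s3.copy_object(
    Bucket=dest_bucket,
    Key=key,
    CopySource={'Bucket': source_bucket, 'Key': key},
)

response = rekognition.detect_faces(
    Image={'S3Object': {'Bucket': dest_bucket, 'Name': key}}
)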
