Unable to get just subfolder objects from s3 aws - python

I am using this function to get data from S3:
s3 = boto3.resource('s3')
s3client = boto3.client('s3')
Bucket = s3.Bucket('ais-django')
obj = s3.Object('ais-django', 'Event/')
list = s3client.list_objects_v2(Bucket='ais-django', Prefix='Event/')
for s3_key in list:
    filename = s3_key['Key']
When I use a prefix for the Event folder (the path is like 'ais-django/Event/') it gives unexpected output like this:
{
    'IsTruncated': False,
    'Prefix': 'Event/',
    'ResponseMetadata': {
        'HTTPHeaders': {
            'date': 'Mon, 11 Jun 2018 12:42:35 GMT',
            'content-type': 'application/xml',
            'transfer-encoding': 'chunked',
            'x-amz-bucket-region': 'us-east-1',
            'x-amz-request-id': '94ADDB21361252F3',
            'server': 'AmazonS3',
            'x-amz-id-2': 'IVuVQuB2V7nClm5FaX4FRbt6brS3gAiuwpERnZxknIWoZLH65LerURwmoynKW5sv37VP6FdbYho='
        },
        'RequestId': '94ADDB21361252F3',
        'RetryAttempts': 0,
        'HostId': 'IVuVQuB2V7nClm5FaX4FRbt6brS3gAiuwpERnZxknIWoZLH65LerURwmoynKW5sv37VP6FdbYho=',
        'HTTPStatusCode': 200
    },
    'MaxKeys': 1000,
    'Name': 'ais-django',
    'KeyCount': 0
}
while without the prefix, when I call it like this:
list = s3client.list_objects_v2(Bucket='ais-django')['Contents']
it gives a list of all objects.
So how can I get all objects in a specific folder?

this is the way you should do it :)
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('ais-django')
for o in bucket.objects.filter(Prefix='Event/test-event'):
    print(o.key)
This is the result you will get.
The result contains Event/test-event/ because there is no folder system in AWS S3; everything is an object, so Event/test-event/ as well as Event/test-event/image.jpg are both considered objects.
If you want only the contents, i.e. only the images, you can do it like this:
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('ais-django')
for o in bucket.objects.filter(Prefix='Event/test-event'):
    filename = o.key
    if filename.endswith(".jpeg") or filename.endswith(".jpg") or filename.endswith(".png"):
        print(o.key)
In this case we get Event/test-event/18342087_1323920084341024_7613721308394107132_n.jpg as the result, because we are filtering the results and this is the only image object in my bucket right now.
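As a small refinement (a sketch along the same lines, using the same bucket and prefix as above): str.endswith also accepts a tuple of suffixes, so the extension check can be collapsed into a single call.
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('ais-django')

for o in bucket.objects.filter(Prefix='Event/test-event'):
    # endswith() accepts a tuple, so one check covers all image extensions
    if o.key.lower().endswith(('.jpeg', '.jpg', '.png')):
        print(o.key)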


Creating a dataframe of s3 object data with a paginator

I'm trying to create a pandas dataframe with bucket object data (list_objects_v2) using boto3.
Without pagination, I can easily create a dataframe by iterating over the response and appending rows to the dataframe.
import boto3
import pandas
s3 = boto3.client('s3')
response = s3.list_objects_v2(Bucket=bucket_name) #this creates a nested json
print(response)
{'ResponseMetadata': {'RequestId': 'PGMCTZNAPV677CWE', 'HostId': '/8qweqweEfpdegFSNU/hfqweqweqweSHtM=', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-id-2': '/8yacqweqwe/hfjuSwKXDv3qweqweqweHtM=', 'x-amz-request-id': 'PqweqweqweE', 'date': 'Fri, 09 Sep 2022 09:25:04 GMT', 'x-amz-bucket-region': 'eu-central-1', 'content-type': 'application/xml', 'transfer-encoding': 'chunked', 'server': 'AmazonS3'}, 'RetryAttempts': 0}, 'IsTruncated': False, 'Contents': [{'Key': 'qweqweIntraday.csv', 'LastModified': datetime.datetime(2022, 7, 12, 8, 32, 10, tzinfo=tzutc()), 'ETag': '"qweqweqwe4"', 'Size': 1165, 'StorageClass': 'STANDARD'}], 'Name': 'test-bucket', 'Prefix': '', 'MaxKeys': 1000, 'EncodingType': 'url', 'KeyCount': 1}
object_df = pandas.DataFrame()
for elem in response:
    if 'Contents' in elem:
        object_df = pandas.json_normalize(response['Contents'])
Because of the 1,000-object limit of list_objects_v2, I'm trying to get to the same result using pagination. I attempted to do this with the following code, but I don't get the desired output (infinite loops on larger buckets).
object_df = pandas.DataFrame()
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket_name):
    for elem in page:
        if 'Contents' in elem:
            object_df = pandas.json_normalize(page['Contents'])
I managed to find a solution by adding another dataframe and appending each page to it.
appended_object_df = pandas.DataFrame()
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket_name):
    object_df = pandas.DataFrame()
    object_df = pandas.json_normalize(page['Contents'])
    appended_object_df = appended_object_df.append(object_df, ignore_index=True)
I'm still curious if it's possible to skip the appending part and have the code directly generate the complete df.
Per the pandas documentation:
Iteratively appending rows to a DataFrame can be more computationally intensive than a single concatenate. A better solution is to append those rows to a list and then concatenate the list with the original DataFrame all at once.
So, you could do:
df_list = []
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket_name):
    page_df = pandas.json_normalize(page['Contents'])
    df_list.append(page_df)
object_df = pandas.concat(df_list, ignore_index=True)
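One caveat worth guarding against (an assumption added here, not part of the original answer, reusing the s3 client and bucket_name from above): a page for an empty bucket or a non-matching prefix has no 'Contents' key, so a small guard avoids a KeyError.
df_list = []
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket_name):
    # pages with no matching objects simply lack the 'Contents' key
    contents = page.get('Contents', [])
    if contents:
        df_list.append(pandas.json_normalize(contents))

# guard the concat too, in case nothing was found at all
object_df = pandas.concat(df_list, ignore_index=True) if df_list else pandas.DataFrame()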

aws python lambda: reading csv file (iterator should return strings)

I'm getting this message when I'm trying to test my Python 3.8 Lambda function:
Logs are:
soc-connect
contacts.csv
{'ResponseMetadata': {'RequestId': '9D7D7F0C5CB79984', 'HostId': 'wOd6HvIm+BpLOMKF2beRvqLiW0NQt5mK/kzjCjYxQ2kHQZY0MRCtGs3l/rqo4o0r4xAPuV1QpGM=', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amz-id-2': 'wOd6HvIm+BpLOMKF2beRvqLiW0NQt5mK/kzjCjYxQ2kHQZY0MRCtGs3l/rqo4o0r4xAPuV1QpGM=', 'x-amz-request-id': '9D7D7F0C5CB79984', 'date': 'Thu, 26 Mar 2020 11:21:35 GMT', 'last-modified': 'Tue, 24 Mar 2020 16:07:30 GMT', 'etag': '"8a3785e750475af3ca25fa7eab159dab"', 'accept-ranges': 'bytes', 'content-type': 'text/csv', 'content-length': '52522', 'server': 'AmazonS3'}, 'RetryAttempts': 0}, 'AcceptRanges': 'bytes', 'LastModified': datetime.datetime(2020, 3, 24, 16, 7, 30, tzinfo=tzutc()), 'ContentLength': 52522, 'ETag': '"8a3785e750475af3ca25fa7eab159dab"', 'ContentType': 'text/csv', 'Metadata': {}, 'Body': <botocore.response.StreamingBody object at 0x7f858dc1e6d0>}
1153
<_csv.reader object at 0x7f858ea76970>
[ERROR] Error: iterator should return strings, not bytes (did you open the file in text mode?)
The code snippet is:
import boto3
import csv
def digest_csv(bucket_name, key_name):
    # Let's use Amazon S3
    s3 = boto3.client('s3')
    print(bucket_name)
    print(key_name)
    s3_object = s3.get_object(Bucket=bucket_name, Key=key_name)
    print(s3_object)
    # read the contents of the file and split it into a list of lines
    lines = s3_object['Body'].read().splitlines(True)
    print(len(lines))
    contacts = csv.reader(lines, delimiter=';')
    print(contacts)
    # now iterate over those contacts
    for contact in contacts:
        # here you get a sequence of dicts
        # do whatever you want with each line here
        print('-*-'.join(contact))
I think the problem is in csv.reader.
I'm passing a list of lines as the first parameter... Should it be modified?
Any ideas?
Instead of using csv.reader, the following worked for me (adjusted for your delimiter and variables):
for line in lines:
    contact = ''.join(line.decode().split(';'))
    print(contact)
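If you would rather keep csv.reader, the underlying error can also be fixed by decoding the byte lines into strings before handing them to the reader (a sketch assuming UTF-8 content and the same lines variable and ';' delimiter as above):
import csv

# csv.reader needs an iterable of str, so decode each byte line first
decoded_lines = (line.decode('utf-8') for line in lines)
contacts = csv.reader(decoded_lines, delimiter=';')
for contact in contacts:
    print('-*-'.join(contact))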

Get volume ID of root volume in AWS EC2 using Python

I'm creating EC2 instances in AWS using python. I want to list the root volume ID of the instances I'm creating. I'm creating the instances with only the root volume. No additional EBS volumes are attached.
After I create the instance and have the instance ID, here is my attempt to get the volume ID for the root volume:
instance_list = []
volume_list = []

if instances:
    for instance in instances:
        instance_id = instance.instance_id
        instance_list.append(instance_id)
        root_volume = ec2_client.describe_instance_attribute(InstanceId=instance_id, Attribute='blockDeviceMapping')
        volume_list.append(root_volume)
        print(f"Volumes: {root_volume} Volumes Type: {type(root_volume)}")
The response that I get back from that print statement is:
Volumes: {'BlockDeviceMappings': [], 'InstanceId': 'i-0930a235f6b1c47b2', 'ResponseMetadata': {'RequestId': 'bb679ad1-5c81-461f-b5e6-427bc140df68', 'HTTPStatusCode': 200, 'HTTPHeaders': {'content-type': 'text/xml;charset=UTF-8', 'content-length': '299', 'date': 'Tue, 21 Jan 2020 21:22:16 GMT', 'server': 'AmazonEC2'}, 'RetryAttempts': 0}} Volumes Type: <class 'dict'>
The BlockDeviceMappings are empty.
However, when I look at the instance in the console I can see it has a root volume ID of vol-02ebd87bcf9bf62be. How can I arrive at the volume ID of the instance's root volume programmatically using Python?
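One way to approach this (a sketch, not a verified answer, reusing the instance_list built in the code above): block device mappings of freshly created instances are often still empty while the instance is launching, so waiting for the instances to be running and then reading block_device_mappings from the EC2 resource should surface the root volume ID.
import boto3

ec2 = boto3.resource('ec2')
ec2_client = boto3.client('ec2')

# wait until the instances have finished launching; brand-new instances
# often report empty block device mappings until then
ec2_client.get_waiter('instance_running').wait(InstanceIds=instance_list)

for instance_id in instance_list:
    instance = ec2.Instance(instance_id)
    for mapping in instance.block_device_mappings:
        # the root volume is the mapping whose device name matches root_device_name
        if mapping['DeviceName'] == instance.root_device_name:
            print(f"Root volume of {instance_id}: {mapping['Ebs']['VolumeId']}")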

How To Use boto3 To Retrieve S3 File Size

I'm new to Python and boto and I'm currently trying to write a DAG that will check an S3 file's size given the bucket location and file name. How can I take the file location (s3://bucket-info/folder/filename) and get the size of the file? If the file size is greater than 0 KB, I will need to fail the job.
Thank you for your time
You can use boto3 head_object for this
Here's something that will get you the size. Replace bucket and key with your own values:
import boto3
client = boto3.client(service_name='s3', use_ssl=True)
response = client.head_object(
    Bucket='bucketname',
    Key='full/path/to/file.jpg'
)
print(response['ContentLength'])
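Since the question starts from a full s3://bucket-info/folder/filename location, a small helper (a sketch; s3_file_size is a made-up name) can split that URL into bucket and key before calling head_object. This avoids downloading the object just to learn its size.
import boto3
from urllib.parse import urlparse

def s3_file_size(s3_url):
    # split "s3://bucket-info/folder/filename" into bucket and key
    parsed = urlparse(s3_url)
    bucket, key = parsed.netloc, parsed.path.lstrip('/')
    response = boto3.client('s3').head_object(Bucket=bucket, Key=key)
    return response['ContentLength']  # size in bytes

print(s3_file_size('s3://bucket-info/folder/filename'))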
You can also get a list of all objects if multiple files need to be checked. For a given bucket, run list_objects_v2 and then iterate through the response's 'Contents'. For example:
s3_client = boto3.client('s3')
response_contents = s3_client.list_objects_v2(
    Bucket='name_of_bucket'
).get('Contents')
you'll get a list of dictionaries like this:
[{'Key': 'path/to/object1', 'LastModified': datetime, 'ETag': '"some etag"', 'Size': 2600, 'StorageClass': 'STANDARD'}, {'Key': 'path/to/object2', 'LastModified': 'datetime', 'ETag': '"some etag"', 'Size': 454, 'StorageClass': 'STANDARD'}, ... ]
Notice that each dictionary in the list contains a 'Size' key, which is the size of that particular object. The list is iterable:
for rc in response_contents:
    if rc.get('Key') == 'path/to/file':
        print(f"Size: {rc.get('Size')}")
You get sizes for all files you might be interested in:
Size: 2600
Size: 454
Size: 2600
...
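Keep in mind that list_objects_v2 returns at most 1,000 keys per call, so for larger buckets a paginator (a brief sketch, using the same placeholder bucket name as above) is the safer option:
import boto3

s3_client = boto3.client('s3')
paginator = s3_client.get_paginator('list_objects_v2')

# each page holds up to 1,000 keys; iterate over all of them
for page in paginator.paginate(Bucket='name_of_bucket'):
    for rc in page.get('Contents', []):
        print(f"Size: {rc['Size']}")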

AWS S3 image saving loses metadata

I am working with an AWS Lambda function written in Python 2.7.x which downloads an image, saves it to /tmp, then uploads the image file back to the bucket.
My image metadata starts out in the original bucket with HTTP headers like Content-Type = image/jpeg, and others.
After saving my image with PIL, all headers are gone and I am left with Content-Type = binary/octet-stream.
From what I can tell, image.save is losing the headers due to the way PIL works. How do I either preserve the metadata or at least apply it to the newly saved image?
I have seen posts suggesting that this metadata is in EXIF, but I tried to get the EXIF info from the original file and apply it to the saved file with no luck. I am not clear whether it's in the EXIF data anyway.
Partial code to give an idea of what I am doing:
def resize_image(image_path):
    with Image.open(image_path) as image:
        image.save(upload_path, optimize=True)

def handler(event, context):
    global upload_path
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'].encode("utf8"))
        download_path = '/tmp/{}{}'.format(uuid.uuid4(), file_name)
        upload_path = '/tmp/resized-{}'.format(file_name)
        s3_client.download_file(bucket, key, download_path)
        resize_image(download_path)
        s3_client.upload_file(upload_path, '{}resized'.format(bucket), key)
Thanks to Sergey, I changed to using get_object, but the response is missing Metadata:
response = s3_client.get_object(Bucket=bucket,Key=key)
response= {u'Body': , u'AcceptRanges': 'bytes', u'ContentType': 'image/jpeg', 'ResponseMetadata': {'HTTPStatusCode': 200, 'RetryAttempts': 0, 'HostId': 'au30hBMN37/ti0WCfDqlb3t9ehainumc9onVYWgu+CsrHtvG0u/zmgcOIvCCBKZgQrGoooZoW9o=', 'RequestId': '1A94D7F01914A787', 'HTTPHeaders': {'content-length': '84053', 'x-amz-id-2': 'au30hBMN37/ti0WCfDqlb3t9ehainumc9onVYWgu+CsrHtvG0u/zmgcOIvCCBKZgQrGoooZoW9o=', 'accept-ranges': 'bytes', 'expires': 'Sun, 01 Jan 2034 00:00:00 GMT', 'server': 'AmazonS3', 'last-modified': 'Fri, 23 Dec 2016 15:21:56 GMT', 'x-amz-request-id': '1A94D7F01914A787', 'etag': '"9ba59e5457da0dc40357f2b53715619d"', 'cache-control': 'max-age=2592000,public', 'date': 'Fri, 23 Dec 2016 15:21:58 GMT', 'content-type': 'image/jpeg'}}, u'LastModified': datetime.datetime(2016, 12, 23, 15, 21, 56, tzinfo=tzutc()), u'ContentLength': 84053, u'Expires': datetime.datetime(2034, 1, 1, 0, 0, tzinfo=tzutc()), u'ETag': '"9ba59e5457da0dc40357f2b53715619d"', u'CacheControl': 'max-age=2592000,public', u'Metadata': {}}
If I use:
metadata = response['ResponseMetadata']['HTTPHeaders']
metadata = {'content-length': '84053', 'x-amz-id-2': 'f5UAhWzx7lulo3cMVF8hdVRbHnhdnjHWRDl+LDFkYm9pubjL0A01L5yWjgDjWRE4TjRnjqDeA0U=', 'accept-ranges': 'bytes', 'expires': 'Sun, 01 Jan 2034 00:00:00 GMT', 'server': 'AmazonS3', 'last-modified': 'Fri, 23 Dec 2016 15:47:09 GMT', 'x-amz-request-id': '4C69DF8A58EF3380', 'etag': '"9ba59e5457da0dc40357f2b53715619d"', 'cache-control': 'max-age=2592000,public', 'date': 'Fri, 23 Dec 2016 15:47:10 GMT', 'content-type': 'image/jpeg'}
Saving with put_object
s3_client.put_object(Bucket=bucket+'resized',Key=key, Metadata=metadata, Body=downloadfile)
creates a whole lot of extra metadata in S3, including the fact that it does not save the content type as image/jpeg but rather as binary/octet-stream, and it does create a metadata entry x-amz-meta-content-type = image/jpeg.
You are confusing S3 metadata, stored by AWS S3 along with an object, and EXIF metadata, stored inside the file itself.
download_file() doesn't get object attributes from S3. You should use get_object() instead: https://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.get_object
Then you can use put_object() with the same attributes to upload the new file: https://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.put_object
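A rough sketch of that combination (untested, reusing the bucket, key and upload_path variables from the question, and assuming the resized file has already been written to upload_path):
s3_client = boto3.client('s3')

# fetch the original object together with its S3 attributes
original = s3_client.get_object(Bucket=bucket, Key=key)

# ... download and resize the image as before ...

# re-upload, carrying over the content type and any user metadata
with open(upload_path, 'rb') as body:
    s3_client.put_object(
        Bucket='{}resized'.format(bucket),
        Key=key,
        Body=body,
        ContentType=original['ContentType'],
        Metadata=original.get('Metadata', {})
    )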
Content type information is not stored in the file you upload; it has to be guessed or extracted somehow. This is something you must do manually or with tools. With a fairly small dictionary you can guess most file types.
When you upload a file or object, you have the chance to specify its content type. Otherwise S3 defaults to binary/octet-stream.
Using the boto3 python package for instance:
s3client.upload_file(
    Filename=local_path,
    Bucket=bucket,
    Key=remote_path,
    ExtraArgs={
        "ContentType": "image/jpeg"
    }
)
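If you prefer not to hard-code the type, Python's standard mimetypes module can guess it from the file name (a small sketch; it falls back to binary/octet-stream when the extension is unknown):
import mimetypes

# guess_type() returns (type, encoding); fall back when it can't tell
content_type = mimetypes.guess_type(local_path)[0] or 'binary/octet-stream'

s3client.upload_file(
    Filename=local_path,
    Bucket=bucket,
    Key=remote_path,
    ExtraArgs={"ContentType": content_type}
)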
