How to list and read each of the files in a specific folder of an S3 bucket using Python Boto3 - python

I have some files in a specific folder of an S3 bucket. All of the file names follow the same pattern, like below:
s3://example_bucket/products/product1/rawmat-2343274absdfj7827d.csv
s3://example_bucket/products/product1/rawmat-7997werewr666ee.csv
s3://example_bucket/products/product1/rawmat-8qwer897hhw776w3.csv
s3://example_bucket/products/product1/rawmat-2364875349873uy68848732.csv
....
....
Here, I think we can say:
bucket_name = 'example_bucket'
prefix = 'products/product1/'
key = 'rawmat-*.csv'
I need to read each of them. I would highly prefer not to list all of the objects in the bucket.
What could be the most efficient way to do this?

Iterate over the objects in the folder using a prefix:
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('example_bucket')
prefix = 'products/product1/rawmat'
for my_object in bucket.objects.filter(Prefix=prefix):
    print(my_object)
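To actually read each object in the same loop, here is a minimal sketch, assuming the CSV files are small enough to load into memory:
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('example_bucket')

for my_object in bucket.objects.filter(Prefix='products/product1/rawmat'):
    # get() fetches the object; 'Body' is a streaming body of bytes
    body = my_object.get()['Body'].read()
    print(my_object.key, len(body))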

Related

Read content of a file located under subfolders of S3 in Python

I'm trying to read a file's contents (not download it) from an S3 bucket. The problem is that the file is located under a multi-level folder. For instance, the full path could be s3://s3-bucket/folder-1/folder-2/my_file.json. How can I get that specific file instead of using my iterative approach that lists all objects?
Here is the code that I want to change:
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('s3-bucket')
for obj in bucket.objects.all():
    key = obj.key
    if key == 'folder-1/folder-2/my_file.json':
        return obj.get()['Body'].read()
Can it be done in a simpler, more direct way?
Yes - there is no need to enumerate the bucket.
Read the file directly using s3.Object, providing the bucket name as the first parameter and the object key as the second parameter.
"Folders" don't really exist in S3 - Amazon S3 doesn't use hierarchy to organize its objects and files. For the sake of organizational simplicity, the Amazon S3 console shows "folders" as a means of grouping objects but they are ultimately baked into your object key.
This should work:
import boto3
s3 = boto3.resource('s3')
obj = s3.Object("s3-bucket", "folder-1/folder-2/my_file.json")
body = obj.get()['Body'].read()
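Since the file here is JSON and read() returns bytes, the body can then be parsed; a small follow-up sketch, assuming the object is UTF-8 encoded JSON:
import json
import boto3

s3 = boto3.resource('s3')
obj = s3.Object("s3-bucket", "folder-1/folder-2/my_file.json")

# read() returns bytes; decode and parse as JSON
data = json.loads(obj.get()['Body'].read().decode('utf-8'))
print(data)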

Google Cloud Storage List Blob objects with specific file name

With the help of google.cloud.storage and list_blobs I can get the list of files from a specific bucket. But I want to filter (name*.ext) for the exact files in the bucket, and I was not able to find an exact solution.
For example: bucket=data, prefix_folder_name=sales; within the prefix folder I have a list of invoices with metadata. I want to get the specific invoices and their metadata (name*.csv & name*.meta). Also, if I loop over the entire all_blobs of that folder to get the selected files, it will be a huge volume of data and may affect performance.
It would be good if someone could help me with this.
from google.cloud import storage

gcs_client = storage.Client()
bucket = gcs_client.get_bucket(bucket_name)
all_blobs = bucket.list_blobs(prefix=prefix_folder_name)
for blob in all_blobs:
    print(blob.name)
According to the google-cloud-storage documentation, Blobs are objects that have a name attribute, so you can filter them by this attribute.
from google.cloud import storage

# storage_client = gcs client
storage_client = storage.Client()

# bucket_name = "your-bucket-name"
# Note: Client.list_blobs requires at least package version 1.17.0.
blobs = storage_client.list_blobs(bucket_name)

# filter_dir = "filter-string"
[blob.name for blob in blobs if filter_dir in blob.name]
It doesn't allow you to filter, but you can use the fields parameter to return just the names of the objects, limiting the amount of data returned and making it easy to filter.
You can filter on a prefix, but to filter more specifically (e.g. for objects ending with a given name extension) you have to implement client-side filtering logic. That's what gsutil does when you run a command like:
gsutil ls gs://your-bucket/abc*.txt
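As a rough sketch of that client-side filtering in Python (the bucket name, prefix, and pattern below are placeholders), you can list with a prefix and then match blob names against a glob pattern with fnmatch:
import fnmatch
from google.cloud import storage

storage_client = storage.Client()

# List only objects under the "sales/" prefix, then filter names client-side
blobs = storage_client.list_blobs("your-bucket-name", prefix="sales/")
matches = [blob for blob in blobs if fnmatch.fnmatch(blob.name, "sales/name*.csv")]

for blob in matches:
    print(blob.name, blob.updated)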
You can use the following, treating name and .ext as the filters for the files:
all_blobs = bucket.list_blobs()
fileList = [file.name for file in all_blobs if '.ext' in file.name and 'name' in file.name]
for file in fileList:
    print(file)
Here, name is the filename filter and .ext is your extension filter.

How to get top-level folders in an S3 bucket using boto3?

I have an S3 bucket with a few top level folders, and hundreds of files in each of these folders. How do I get the names of these top level folders?
I have tried the following:
import boto3

s3 = boto3.resource('s3', region_name='us-west-2', endpoint_url='https://s3.us-west-2.amazonaws.com')
bucket = s3.Bucket('XXX')
for obj in bucket.objects.filter(Prefix='', Delimiter='/'):
    print(obj.key)
But this doesn't seem to work. I have thought about using regex to filter all the folder names, but this doesn't seem time efficient.
Thanks in advance!
Try this:
import boto3

client = boto3.client('s3')
paginator = client.get_paginator('list_objects')
result = paginator.paginate(Bucket='my-bucket', Delimiter='/')
for prefix in result.search('CommonPrefixes'):
    print(prefix.get('Prefix'))
The Amazon S3 data model is a flat structure: you create a bucket, and the bucket stores objects. There is no hierarchy of subbuckets or subfolders; however, you can infer logical hierarchy using key name prefixes and delimiters, as the Amazon S3 console does (source).
In other words, there's no way around iterating over all of the keys in the bucket and extracting whatever structure you want to see (depending on your needs, a dict-of-dicts may be a good approach for you, as sketched below).
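A minimal sketch of that dict-of-dicts idea (the bucket name is a placeholder), building a nested structure from the '/'-separated keys:
import boto3

def build_tree(bucket_name):
    """Build a nested dict mirroring the pseudo-folder structure of a bucket."""
    s3 = boto3.resource('s3')
    tree = {}
    for obj in s3.Bucket(bucket_name).objects.all():
        node = tree
        parts = obj.key.split('/')
        # Walk/create one dict level per "folder" component
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = obj.size  # leaf: the object itself
    return tree

tree = build_tree('my-bucket')
print(list(tree.keys()))  # top-level "folders" (and any top-level files)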
You could also use Amazon Athena in order to analyse/query S3 buckets.
https://aws.amazon.com/athena/

Python: recursive glob in s3

I am trying to get a list of Parquet file paths from S3 that are inside subdirectories, and subdirectories of subdirectories (and so on).
If it was my local file system I would do this:
import glob
glob.glob('C:/Users/user/info/**/*.parquet', recursive=True)
I have tried using the glob method of s3fs; however, it doesn't have a recursive kwarg.
Is there a function I can use, or do I need to implement it myself?
You can use s3fs with glob:
import s3fs
s3 = s3fs.S3FileSystem(anon=False)
s3.glob('your/s3/path/here/*.parquet')
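For the recursive case, newer s3fs/fsspec releases also accept ** in glob patterns (worth checking against your installed version); a sketch with a placeholder bucket path:
import s3fs

s3 = s3fs.S3FileSystem(anon=False)

# '**' matches across "subdirectory" levels, similar to glob(..., recursive=True);
# assumes your s3fs version supports '**' patterns
paths = s3.glob('your-bucket/info/**/*.parquet')
print(paths)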
I also wanted to download the latest file from an S3 bucket, but located in a specific folder. Initially, I tried using glob but couldn't find a solution to this problem. Finally, I built the following function to solve it. You can modify this function to work with subfolders.
This function returns a dictionary of all filenames and timestamps as key-value pairs
(key: file_name, value: timestamp).
Just pass the bucket name and prefix (which is the folder name).
import boto3

def get_file_names(bucket_name, prefix):
    """
    Return the file names and timestamps in an S3 bucket folder.
    :param bucket_name: Name of the S3 bucket.
    :param prefix: Only fetch keys that start with this prefix (folder name).
    """
    s3_client = boto3.client('s3')
    objs = s3_client.list_objects_v2(Bucket=bucket_name)['Contents']
    shortlisted_files = dict()
    for obj in objs:
        key = obj['Key']
        timestamp = obj['LastModified']
        # If the key starts with the folder name, keep that key
        if key.startswith(prefix):
            # Adding a new key-value pair
            shortlisted_files.update({key: timestamp})
    return shortlisted_files

shortlisted_files = get_file_names(bucket_name='use_your_bucket_name', prefix='folder_name/')
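To actually pick the latest file from that dictionary (the original goal), the entries can be compared by timestamp; a small follow-up sketch using the same placeholder names:
# Using the function above: pick the key with the most recent LastModified timestamp
files = get_file_names(bucket_name='use_your_bucket_name', prefix='folder_name/')
latest_filename = max(files, key=files.get)
print(latest_filename)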
S3 doesn't actually have subdirectories, per se.
boto3's S3.Client.list_objects() supports a Prefix argument, which should get you all of the objects under a given "directory" in a bucket no matter how "deep" they appear to be, as sketched below.
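A sketch of that approach, emulating a recursive *.parquet glob with a paginator and client-side fnmatch filtering (the bucket name and prefix are placeholders):
import fnmatch
import boto3

client = boto3.client('s3')
paginator = client.get_paginator('list_objects_v2')

parquet_keys = []
# Prefix restricts the listing; the pattern match handles arbitrary "depth"
for page in paginator.paginate(Bucket='my-bucket', Prefix='info/'):
    for obj in page.get('Contents', []):
        if fnmatch.fnmatch(obj['Key'], '*.parquet'):
            parquet_keys.append(obj['Key'])

print(parquet_keys)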

How to get objects from a folder in an S3 bucket

I am trying to traverse all objects inside a specific folder in my S3 bucket. The code I already have is as follows:
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('bucket-name')
for obj in bucket.objects.filter(Prefix='folder/'):
    do_stuff(obj)
I need to use boto3.resource and not client. This code is not getting any objects at all although I have a bunch of text files in the folder. Can someone advise?
Try adding the Delimiter attribute: Delimiter='/', since you are filtering objects. The rest of the code looks fine.
I had to make sure to skip the first entry. For some reason the folder name itself comes back as the first object, and that may not be what you want.
for video_item in source_bucket.objects.filter(Prefix="my-folder-name/", Delimiter='/'):
    if video_item.key == 'my-folder-name/':
        continue
    do_something(video_item.key)
