I have written code to move files from one bucket to another in GCS using Python. The bucket has multiple subfolders, and I am trying to move only the Day folder to a different bucket. Source path: /Bucketname/projectname/XXX/Day Target path: /Bucketname/Archive/Day
Is there a way to directly move/copy the Day folder without moving each file inside it one by one? I'm trying to optimize my code, which takes a long time when there are multiple Day folders. Sample code below.
from google.cloud import storage
from google.cloud import bigquery
import glob
import pandas as pd
def Archive_JSON(bucket_name, new_bucket_name, source_prefix_arch, staging_prefix_arch, **kwargs):
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)

    # ds_nodash is the Airflow execution date in YYYYMMDD form
    today_execution_date = kwargs['ds_nodash']
    source_prefix_new = source_prefix_arch + today_execution_date + '/'

    blobs = bucket.list_blobs(prefix=source_prefix_new)
    destination_bucket = storage_client.get_bucket(new_bucket_name)

    # Move each object one by one: copy into the archive bucket, then delete the original
    for blob in blobs:
        destination_bucket.rename_blob(blob, new_name=blob.name.replace(source_prefix_arch, staging_prefix_arch))
You can't move all the files in a folder to another bucket in a single operation, because folders don't exist in Cloud Storage. All objects live at the bucket level, and an object's name is its full path.
By convention, and for (poor) human readability, the slash / is treated as a folder separator, but it's an illusion!
So you have no option other than moving all the files that share the same prefix (the "folder path") and iterating over all of them.
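One way to cut the wall-clock time is to run the per-object copies concurrently instead of sequentially. Below is a minimal sketch under that assumption; the move_prefix helper and the thread-pool approach are my own suggestion, not part of the client library:

from concurrent.futures import ThreadPoolExecutor
from google.cloud import storage

def move_prefix(source_bucket_name, dest_bucket_name, source_prefix, dest_prefix, workers=16):
    """Copy every object under source_prefix into the destination bucket, then delete the original."""
    client = storage.Client()
    source_bucket = client.get_bucket(source_bucket_name)
    dest_bucket = client.get_bucket(dest_bucket_name)

    def move_one(blob):
        # Rewrite only the leading prefix so the rest of the object path is preserved
        new_name = blob.name.replace(source_prefix, dest_prefix, 1)
        source_bucket.copy_blob(blob, dest_bucket, new_name)
        blob.delete()

    blobs = list(source_bucket.list_blobs(prefix=source_prefix))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(move_one, blobs))

Each object is still copied and deleted individually (that part is unavoidable), but overlapping the API calls usually helps when a Day folder contains many files.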
Related
I have a large dataset of images in a zip file. I want to list all the files so I can work with them.
A while ago I started using Google Cloud Platform, and I uploaded my data to a bucket. When working locally, I used this piece of code to find the filenames of all the files in the dataset. I want to do something similar so I can load the images in my notebook.
import os
import fnmatch

# Collect the paths of all .nii files under the local dataset directory
matches = []
for root, dirnames, filenames in os.walk(r"D:\LH\..."):
    for filename in fnmatch.filter(filenames, '*.nii'):
        matches.append(os.path.join(root, filename))
print(matches[0])
Since my dataset is split up into 10 different zip files, I have this piece of code to list the objects in my bucket:
from google.cloud import storage

def list_blobs(bucket_name):
    """Lists all the blobs in the bucket and returns them as a list."""
    # bucket_name = "your-bucket-name"
    storage_client = storage.Client()

    # Note: Client.list_blobs requires at least package version 1.17.0.
    blobs = list(storage_client.list_blobs(bucket_name))
    for blob in blobs:
        print(blob.name)
    return blobs

bloblist = list_blobs('adni_data')
which gives me a list of the folders my files are in. But how can I load this data using the nilearn library? By the way, the bucket contains folders nested several levels deep, and each file sits inside its own folder.
By iterating over all the blobs with list_blobs, you're just getting references to where the data lives in Cloud Storage -- no actual image data has been transferred between GCS and your script.
If you want to load an image with something like nilearn.image.load_img, you need a path to a local .nii file, so you'd need to do something like:
import os
import nilearn.image

for blob in blobs:
    local_filename = '/tmp/' + blob.name
    os.makedirs(os.path.dirname(local_filename), exist_ok=True)  # blob.name may contain slashes
    blob.download_to_filename(local_filename)
    nilearn.image.load_img(local_filename)
I would like to move a list of files in Google Cloud Storage from one folder to another:
from google.cloud import storage
import fnmatch

storage_client = storage.Client()
count = 0

# Retrieve all blobs with a prefix matching the file.
bucket = storage_client.get_bucket(BUCKET_NAME)

# List blobs in the folder; the delimiter excludes objects inside nested "folders"
blobs = bucket.list_blobs(prefix=GS_FILES_PATH, delimiter='/')
for blob in blobs:
    if fnmatch.fnmatch(blob.name, FILE_PREF):
        # WHAT CAN GO HERE?
        count += 1
The only useful information I found in the Google documentation is:
https://cloud.google.com/storage/docs/renaming-copying-moving-objects
According to this documentation, the only method is to copy the object to the new location and then delete the original.
Is there any way to actually MOVE files?
What is the best way to move all the files matching a pattern like *BLABLA*.csv?
P.S. I do not want to use
"gsutil mv gs://[SOURCE_BUCKET_NAME]/[SOURCE_OBJECT_NAME]
gs://[DESTINATION_BUCKET_NAME]/[DESTINATION_OBJECT_NAME]"
Here is a possible solution, as there is no move_blob function in google.cloud.storage:
from google.cloud import storage
import fnmatch

storage_client = storage.Client()
dest_bucket = storage_client.create_bucket(bucket_to)
source_bucket = storage_client.get_bucket(bucket_from)

blobs = source_bucket.list_blobs(prefix=GS_FILES_PATH, delimiter='/')  # assuming this is tested
for blob in blobs:
    if fnmatch.fnmatch(blob.name, FILE_PREF):  # assuming this is tested
        # Copy into the destination bucket, then delete the original = "move"
        source_bucket.copy_blob(blob, dest_bucket, new_name=blob.name)
        source_bucket.delete_blob(blob.name)
You could use the rename_blob method of google.cloud.storage.Bucket; it copies the blob under the new name and then deletes the old one. Note that the old and new blob names must be different, otherwise the delete step is skipped.
Take a look at the code below:
from google.cloud import storage
import fnmatch

storage_client = storage.Client()
dest_bucket = storage_client.create_bucket(bucket_to)
source_bucket = storage_client.get_bucket(bucket_from)

blobs = source_bucket.list_blobs(prefix=GS_FILES_PATH, delimiter='/')  # assuming this is tested
for blob in blobs:
    if fnmatch.fnmatch(blob.name, FILE_PREF):  # assuming this is tested
        # Rename into the destination bucket under the target folder prefix
        # (GS_TARGET_PATH is a placeholder for the destination "folder")
        new_name = blob.name.replace(GS_FILES_PATH, GS_TARGET_PATH, 1)
        dest_bucket.rename_blob(blob, new_name=new_name)
I can upload a single file to Azure Blob Storage with Python. But for a folder with multiple subfolders containing data, is there a way to upload the whole folder to Azure while preserving the directory structure?
Say I have
FOLDERA
------SUBFOLDERa
----------filea.txt
----------fileb.txt
------SUBFOLDERb
------SUBFOLDERc
I want to upload FOLDERA to Azure with the structure above.
Any hints?
@Krumelur is almost right, but here I want to give a working code example, as well as explain why some folders cannot be uploaded to Azure Blob Storage.
1. Code example:
from azure.storage.blob import BlockBlobService, PublicAccess
import os

def run_sample():
    account_name = "your_account_name"
    account_key = "your_account_key"
    block_blob_service = BlockBlobService(account_name, account_key)
    container_name = 'test1'
    path_remove = "F:\\"
    local_path = "F:\\folderA"

    for r, d, f in os.walk(local_path):
        if f:
            for file in f:
                file_path_on_azure = os.path.join(r, file).replace(path_remove, "")
                file_path_on_local = os.path.join(r, file)
                block_blob_service.create_blob_from_path(container_name, file_path_on_azure, file_path_on_local)

# Main method.
if __name__ == '__main__':
    run_sample()
2. You should remember that an empty folder cannot be created or uploaded to Azure Blob Storage, since there is no real "folder" in Blob Storage: the folder or directory is just part of the blob name. Without a real blob such as test.txt inside a folder, there is no way to create or upload an empty folder. So in your folder structure, the empty folders SUBFOLDERb and SUBFOLDERc cannot be uploaded to Azure Blob Storage.
In my test, all the non-empty folders were uploaded to Blob Storage in Azure.
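If the empty folders really need to show up in the container, a common workaround is to upload a zero-length placeholder blob under each empty directory. This is only a sketch under that assumption; the .keep name is my own choice, and it reuses the block_blob_service, container_name, local_path and path_remove variables from the example above:

# Upload a zero-byte placeholder so an otherwise-empty directory appears in the listing
for r, d, f in os.walk(local_path):
    if not f and not d:  # a directory with no files and no subdirectories
        placeholder_name = os.path.join(r, ".keep").replace(path_remove, "")
        block_blob_service.create_blob_from_text(container_name, placeholder_name, "")

The folder then appears only because the placeholder blob exists; deleting the placeholder makes the folder disappear again.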
There is nothing built in, but you can easily write that functionality in your code (see os.walk).
Another option is to use the subprocess module to call into the azcopy command line tool.
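For the azcopy route, a minimal sketch using subprocess; the storage account, container, and SAS token below are placeholders, and the azcopy binary is assumed to be on PATH:

import subprocess

# Recursively copy the local folder into the container, preserving the directory structure
subprocess.run([
    "azcopy", "copy",
    r"F:\folderA",
    "https://<account>.blob.core.windows.net/<container>?<SAS-token>",
    "--recursive",
], check=True)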
I have written code to delete older files and keep only the latest one. The code works locally, but I want to apply the same logic to a folder in an AWS S3 bucket.
The code works fine when given a local path.
import os
import glob

path = r'C:\Desktop\MyFolder'
allfiles = [os.path.basename(file) for file in glob.glob(path + '\\*.*')]
diff_pattern = set()
deletefile = []

for file in allfiles:
    diff_pattern.add('_'.join(file.split('_', 2)[:2]))
print('Pattern Found - ', diff_pattern)

for pattern in diff_pattern:
    patternfiles = [os.path.basename(file) for file in glob.glob(path + '\\' + pattern + '_*.*')]
    patternfiles.sort()
    if len(patternfiles) > 1:
        deletefile = deletefile + patternfiles[:len(patternfiles) - 1]
print('Files Need to Delete - ', deletefile)

for file in deletefile:
    os.remove(path + '\\' + file)
    print('File Deleted')
I expect the same code to work for AWS S3 buckets. Below are the file format and example files with their status (keep/delete) that I'm working with.
File format: file_name_yyyyMMdd.txt
v_xyz_20190501.txt  Delete
v_xyz_20190502.txt  Keep
v_xyz_20190430.txt  Delete
v_abc_20190505.txt  Keep
v_abc_20190504.txt  Delete
I don't think you can access S3 files like a local path.
You may need to use the boto3 library in Python to access S3 folders.
Here is a sample showing how it works:
https://dluo.me/s3databoto3
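As a rough sketch of how the same keep-the-latest logic might look against S3 with boto3 (the bucket name and prefix are placeholders, this is untested, and list_objects_v2 returns at most 1000 keys per call):

import boto3

s3 = boto3.client('s3')
bucket = 'my-bucket'   # placeholder
prefix = 'MyFolder/'   # placeholder

# List the object keys under the prefix
resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
keys = [obj['Key'] for obj in resp.get('Contents', [])]

# Group by the "file_name" part and delete everything except the latest date per group
patterns = {'_'.join(key.rsplit('/', 1)[-1].split('_', 2)[:2]) for key in keys}
for pattern in patterns:
    matching = sorted(k for k in keys if k.rsplit('/', 1)[-1].startswith(pattern + '_'))
    for key in matching[:-1]:  # yyyyMMdd sorts lexically, so the last entry is the newest
        s3.delete_object(Bucket=bucket, Key=key)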
I have a working Python script for consolidating multiple xlsx files that I want to move to a Watson Studio project. My current code uses a path variable which is passed to glob...
path = '/Users/Me/My_Path/*.xlsx'
files = glob.glob(path)
Since credentials in Watson Studio are specific to individual files, how do I get a list of all files in my IBM COS storage bucket? I'm also wondering how to create folders to separate the files in my storage bucket?
Watson Studio cloud provides a helper library, named project-lib for working with objects in your Cloud Object Storage instance. Take a look at this documentation for using the package in Python: https://dataplatform.cloud.ibm.com/docs/content/analyze-data/project-lib-python.html
For your specific question, get_files() should do what you need. It returns a list of all the files in your bucket; you can then do pattern matching to keep only what you need. Based on this filtered list, you can iterate and call get_file(file_name) for each file_name in your list.
To create a "folder" in your bucket, you need to follow a naming convention for files to create a "pseudo folder". For example, if you want to create a "data" folder of assets, you should prefix file names for objects belonging to this folder with data/.
The credentials in IBM Cloud Object Storage (COS) are at the COS instance level, not at the individual file level. Each COS instance can have any number of buckets, with each bucket containing files.
You can get the credentials for the COS instance from Bluemix console.
https://console.bluemix.net/docs/services/cloud-object-storage/iam/service-credentials.html#service-credentials
You can use boto3 python package to access the files.
https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
import boto3

s3c = boto3.client('s3', endpoint_url='XXXXXXXXX', aws_access_key_id='XXXXXXXXXXX', aws_secret_access_key='XXXXXXXXXX')
s3c.list_objects(Bucket=bucket_name, Prefix=file_path)
s3c.download_file(Filename=filename, Bucket=bucket, Key=objectname)
s3c.upload_file(Filename=filename, Bucket=bucket, Key=objectname)
There's probably a more Pythonic way to write this, but here is the code I wrote using project-lib, per the answer provided by @Greg Filla:
files = []  # List to hold data file names

# Get list of all file names in the storage bucket
all_files = project.get_files()  # returns a list of dictionaries

# Create list of file names to load based on prefix
for f in all_files:
    if f['name'].startswith(DataFile_Prefix):
        files.append(f['name'])

print("There are " + str(len(files)) + " data files in the storage bucket.")