I am trying to upload files to Azure using only a SAS URI. I found ways to do this in C#, but I couldn't find a solution in Python. The only Python solution I found requires passing the account name and account key as parameters to BlockBlobService, as in this example: Upload image to azure blob storage using python. I'm trying to avoid that approach. Is there a specific way to upload CSV files to Azure using only the SAS URI? Thanks for your help :)
If you're using the latest Python blob SDK, azure-storage-blob 12.4.0, you can use code like the snippet below (feel free to adapt it to your needs):

from azure.storage.blob import BlobClient

# Local file to upload and the blob SAS URL (the SAS token must grant write/create permission)
upload_file_path = "d:\\a11.csv"
sas_url = "https://xxx.blob.core.windows.net/test5/a11.csv?sastoken"

# Build a client directly from the blob SAS URL; no account name or key needed
client = BlobClient.from_blob_url(sas_url)

with open(upload_file_path, 'rb') as data:
    client.upload_blob(data)

print("**file uploaded**")
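If your SAS URL is scoped to the whole container rather than to a single blob, a similar approach should work through ContainerClient. This is a minimal sketch; the container URL, SAS token, and blob name are placeholder values I'm assuming:

from azure.storage.blob import ContainerClient

# Hypothetical container-level SAS URL (placeholder values)
container_sas_url = "https://xxx.blob.core.windows.net/test5?sastoken"
container_client = ContainerClient.from_container_url(container_sas_url)

with open("d:\\a11.csv", 'rb') as data:
    # Name the blob explicitly, since the SAS covers the whole container
    container_client.upload_blob(name="a11.csv", data=data)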
This might help:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python#upload-blobs-to-a-container
The example there shows how to upload blobs using the Python SDK for Azure Storage.
To authenticate a pipeline in a Python project I'm using this:
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/to/the/json/key.json"
How can I do the same thing but with an already-loaded JSON key (without using the path to the JSON file)?
There are a few ways to authenticate against a project in Google Cloud.
Please take a look at https://googleapis.dev/python/google-api-core/latest/auth.html, as well as the best practices described here: https://cloud.google.com/docs/authentication/best-practices-applications
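For the specific case of building credentials from JSON that is already loaded in memory (rather than a file path), something along these lines should work. This is a minimal sketch: the SERVICE_ACCOUNT_KEY_JSON environment variable is a hypothetical stand-in for wherever the raw key JSON comes from, and it assumes the google-auth and google-cloud-storage packages are installed:

import json
import os

from google.cloud import storage
from google.oauth2 import service_account

# Hypothetical: the key file's *contents* (not its path) arrive via an env var,
# a secret manager, or similar.
key_dict = json.loads(os.environ["SERVICE_ACCOUNT_KEY_JSON"])

credentials = service_account.Credentials.from_service_account_info(key_dict)
client = storage.Client(credentials=credentials, project=key_dict["project_id"])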
The data is in MS Access, on one of the shared drives on the network. I need this data in Azure Blob Storage as CSV files. Can anyone suggest how this can be done?
You can move data to Azure Blob Storage in several ways. You could use either AzCopy (https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10) or Storage Explorer, a GUI tool (https://azure.microsoft.com/en-us/features/storage-explorer/).
Or, using the Python SDK (this call is from the legacy BlockBlobService client):
block_blob_service.create_blob_from_path(container_name, blob_name, file_path)
The Python SDK can be found here: https://github.com/Azure/azure-sdk-for-python
As for converting from Access to CSV, that's not something Azure Storage handles; you can use existing libraries for the conversion and then upload the resulting CSV files to Blob Storage.
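For reference, here is a rough sketch of uploading an exported CSV with the current azure-storage-blob (v12) SDK; the connection string, container name, and file names below are placeholders:

from azure.storage.blob import BlobServiceClient

# Placeholder values -- replace with your own connection string, container, and file
connection_string = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
service_client = BlobServiceClient.from_connection_string(connection_string)
blob_client = service_client.get_blob_client(container="exports", blob="access_table.csv")

# Upload the CSV exported from Access
with open("access_table.csv", "rb") as data:
    blob_client.upload_blob(data, overwrite=True)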
I have a URL (https://example.com/myfile.txt) of a file and I want to upload it to my bucket (gs://my-sample-bucket) on Google Cloud Storage.
What I am currently doing is:
Downloading the file to my system using the requests library.
Uploading that file to my bucket using a Python function.
Is there any way I can upload the file directly using the URL?
You can use the requests library (or urllib.request) to get the file over HTTP, then your existing Python code to upload to Cloud Storage. Something like this should work:

import requests
from google.cloud import storage

client = storage.Client()

# Download the file contents into memory
filedata = requests.get('http://example.com/myfile.txt')
datatoupload = filedata.content

# Upload the bytes to the target bucket
bucket = client.get_bucket('bucket-id-here')
blob = bucket.blob("myfile.txt")
blob.upload_from_string(datatoupload)
It still downloads the file into memory on your system, but I don't think there's a way to tell Cloud Storage to do that for you.
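If the file is large, you can at least avoid holding the whole download in memory by streaming the HTTP response into the upload. This is a rough sketch under the same assumptions as above; whether it behaves well for very large files may depend on the client library's chunked/resumable upload handling:

import requests
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket('bucket-id-here')
blob = bucket.blob("myfile.txt")
blob.chunk_size = 10 * 1024 * 1024  # force a chunked, resumable upload

# Stream the response body straight into the upload instead of reading it all first
with requests.get('http://example.com/myfile.txt', stream=True) as resp:
    resp.raise_for_status()
    resp.raw.decode_content = True  # transparently handle gzip/deflate
    blob.upload_from_file(resp.raw)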
There is a way to do this using a Cloud Storage Transfer Service job, but depending on your use case it may or may not be worth it. You would need to create a transfer job that reads from a URL list.
I marked this question as a duplicate of this one.
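If you go the transfer-job route, the job consumes a TSV URL list. Here is a rough sketch of generating one; the URL, size, and checksum below are made-up placeholders, and you should check the Storage Transfer Service docs for the exact format requirements:

# Write a URL list for a Storage Transfer Service job. To the best of my knowledge,
# the file starts with a "TsvHttpData-1.0" header, followed by one tab-separated
# line per object: URL, size in bytes, and base64-encoded MD5.
entries = [
    # (url, size_in_bytes, base64_md5) -- placeholder values
    ("https://example.com/myfile.txt", 1048576, "wHENa08V36iPYAsOa2JAdw=="),
]

with open("url_list.tsv", "w") as f:
    f.write("TsvHttpData-1.0\n")
    for url, size, md5 in entries:
        f.write(f"{url}\t{size}\t{md5}\n")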
import boto3
import io
import pandas as pd
# The entry point function can contain up to two input arguments:
# Param<dataframe1>: a pandas.DataFrame
# Param<dataframe2>: a pandas.DataFrame
def azureml_main(dataframe1 = None, dataframe2 = None):
    s3 = boto3.client('s3',
                      aws_access_key_id='REMOVED',
                      aws_secret_access_key='REMOVED')
    obj = s3.get_object(Bucket='bucket', Key='data.csv000')
    df = pd.read_csv(io.BytesIO(obj['Body'].read()))
    return df,
I'm trying to read data from S3 using the Execute Python Script module. I have downloaded the boto3 package and converted it to a zip, then uploaded and connected that .zip to the third input of the module. When I run this code, I receive an error stating botocore is not installed. Has anyone been able to read directly from S3 into Azure ML Studio? I've tried the R script module as well, which also fails, so now I'm trying Python.
Since the boto3 package has dependencies, including some that are cloned from git, I don't think Azure ML Studio can use it. According to the note in their documentation, it would be easier to switch to Azure ML Workbench, since it handles Python packages much more easily.
Another option, if you need to use Azure ML Studio, is to copy from S3 into Azure Blob Storage, which ML Studio has great support for.
Not much of an answer, but I'm afraid you've hit a limitation of Azure ML Studio.
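For what it's worth, if you go the copy route, here is a rough sketch of such a script run outside ML Studio, using boto3 together with azure-storage-blob; the bucket, key, container, and connection string are placeholders:

import boto3
from azure.storage.blob import BlobServiceClient

# Placeholder credentials and names -- replace with your own
s3 = boto3.client('s3',
                  aws_access_key_id='REMOVED',
                  aws_secret_access_key='REMOVED')

# Read the object from S3 into memory
obj = s3.get_object(Bucket='bucket', Key='data.csv000')
payload = obj['Body'].read()

# Write it to Azure Blob Storage, where ML Studio can read it directly
blob_service = BlobServiceClient.from_connection_string("<your-azure-storage-connection-string>")
blob_client = blob_service.get_blob_client(container="mldata", blob="data.csv")
blob_client.upload_blob(payload, overwrite=True)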
According to the Amazon WorkDocs SDK page, you can use Boto3 to migrate your content to Amazon WorkDocs. I found the entry for the WorkDocs client in the Boto3 documentation, but every call seems to require an "AuthenticationToken" parameter. The only information I can find on AuthenticationToken is that it is supposed to be an "Amazon WorkDocs authentication token".
Does anyone know what this token is? How do I get one? Are there any code examples of using the WorkDocs client in Boto3?
I am trying to create a simple Python script that will upload a single document into WorkDocs, but there seems to be little to no information on how to do this. I was easily able to write a script that can upload/download files from S3, but this seems like something else entirely.