import tinys3

conn = tinys3.Connection(S3_ACCESS_KEY, S3_SECRET_KEY)
f = open('sample.zip', 'rb')
conn.upload('sample.zip', f, bucketname)
I can upload the file to my bucket (test) with the code above, but I want to upload it directly to test/images/example. I am open to moving over to boto, but I can't seem to import boto.s3 in my environment.
I have looked through How to upload a file to directory in S3 bucket using boto, but none of the tinys3 examples show this.
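For what it's worth, S3 has no real directories; a "folder" is only a key prefix. Assuming tinys3's documented upload(key, local_file, bucket) signature, putting the prefix in the key should be all that's needed (a minimal sketch, untested):

import tinys3

conn = tinys3.Connection(S3_ACCESS_KEY, S3_SECRET_KEY)
# the images/example/ prefix in the key creates the "folders" in the test bucket
with open('sample.zip', 'rb') as f:
    conn.upload('images/example/sample.zip', f, 'test')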
import boto3

# the "directory" is simply part of the object key
client = boto3.client('s3', region_name='ap-southeast-2')
client.upload_file('/tmp/foo.txt', 'my-bucket', 'test/images/example/foo.txt')
The following worked for me:

from boto3.s3.transfer import S3Transfer
from boto3 import client

client_obj = client('s3',
                    aws_access_key_id='my_aws_access_key_id',
                    aws_secret_access_key='my_aws_secret_access_key')
transfer = S3Transfer(client_obj)
transfer.upload_file(src_file,
                     'my_s3_bucket_name',
                     dst_file,
                     extra_args={'ContentType': "application/zip"})
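For completeness: the S3Transfer manager is not required for this anymore; the client's own upload_file accepts the same extra arguments. A minimal sketch with the same placeholder names:

import boto3

client_obj = boto3.client('s3')
# ExtraArgs takes the same keys that S3Transfer's extra_args does
client_obj.upload_file(src_file,
                       'my_s3_bucket_name',
                       dst_file,
                       ExtraArgs={'ContentType': 'application/zip'})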
I am trying to download files from a public AWS S3 bucket linked on this website with Python scripts, for example the first object on the link. I tried boto3 and got a NoCredentialsError:
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('oedi-data-lake')
keys = []
for obj in bucket.objects.filter(Prefix='nrel-pds-building-stock/end-use-load-profiles-for-us-building-stock/2022/resstock_tmy3_release_1/building_energy_models/upgrade=10/'):
    if obj.key.endswith('bldg0000001-up10.zip'):
        keys.append(obj.key)
print(keys)
I also found the post Download file/folder from Public AWS S3 with Python, no credentials, and tried the following:
import requests

headers = {'Host': 'oedi-data-lake.s3.amazonaws.com'}
url = 'https://oedi-data-lake.s3.amazonaws.com/nrel-pds-building-stock/end-use-load-profiles-for-us-building-stock/2022/resstock_tmy3_release_1/building_energy_models/upgrade=10/bldg0000001-up10.zip'
r = requests.get(url, headers=headers)
but got an SSLCertVerificationError.
Please help. :)
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Thank you, jhashimoto!
But with the following, I still get the NoCredentialsError:
import boto3
from botocore import UNSIGNED
from botocore.config import Config
s3 = boto3.resource("s3", config=Config(signature_version=UNSIGNED))
s3_client = boto3.client('s3')
s3_client.download_file('oedi-data-lake', 'nrel-pds-building-stock/end-use-load-profiles-for-us-building-stock/2022/resstock_tmy3_release_1/building_energy_models/upgrade=10/bldg0000001-up10.zip', 'bldg1.zip')
I also read can-i-use-boto3-anonymously and changed the code as below:
import boto3
from botocore import UNSIGNED
from botocore.config import Config
client = boto3.client('s3', aws_access_key_id='', aws_secret_access_key='')
client._request_signer.sign = (lambda *args, **kwargs: None)
client.download_file('oedi-data-lake', 'nrel-pds-building-stock/end-use-load-profiles-for-us-building-stock/2022/resstock_tmy3_release_1/building_energy_models/upgrade=10/bldg0000001-up10.zip', 'bldg01.zip')
and got an SSLCertVerificationError.
Is this something caused by my company's security policy?
Sorry for the naive questions; I'm completely new to AWS.
Thank you so much!
To access a bucket that allows anonymous access, configure the client not to use credentials.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.resource("s3", config=Config(signature_version=UNSIGNED))
bucket = s3.Bucket('oedi-data-lake')
keys = [obj.key
        for obj in bucket.objects.filter(Prefix='nrel-pds-building-stock/end-use-load-profiles-for-us-building-stock/2022/resstock_tmy3_release_1/building_energy_models/upgrade=10/')
        if obj.key.endswith('bldg0000001-up10.zip')]
print(keys)
# output:
# ['nrel-pds-building-stock/end-use-load-profiles-for-us-building-stock/2022/resstock_tmy3_release_1/building_energy_models/upgrade=10/bldg0000001-up10.zip']
python - Can I use boto3 anonymously? - Stack Overflow
Yes. Your credentials are used to sign all the requests you send out, so what you have to do is configure the client to not perform the signing step at all.
Note:
Unrelated to the main topic: the AWS Python SDK team does not intend to add new features to the resource interface. You can use the client interface instead.
Resources — Boto3 Docs 1.26.54 documentation
The AWS Python SDK team does not intend to add new features to the resources interface in boto3. Existing interfaces will continue to operate during boto3's lifecycle. Customers can find access to newer service features through the client interface.
Added at 2023/01/21 12:00:
This is sample code using the client interface.
import boto3
from botocore import UNSIGNED
from botocore.config import Config
s3_client = boto3.client('s3', config=Config(signature_version=UNSIGNED))
s3_client.download_file('oedi-data-lake', 'nrel-pds-building-stock/end-use-load-profiles-for-us-building-stock/2022/resstock_tmy3_release_1/building_energy_models/upgrade=10/bldg0000001-up10.zip', 'bldg1.zip')
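On the SSLCertVerificationError part of the question: that error is separate from credentials and very often comes from a corporate proxy that re-signs TLS traffic, which would match the security-policy guess. If IT can hand you the company CA bundle, boto3 can be pointed at it; a hedged sketch, where the bundle path is a placeholder:

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# '/path/to/corporate-ca.pem' is a placeholder for your company's CA bundle
s3_client = boto3.client('s3',
                         config=Config(signature_version=UNSIGNED),
                         verify='/path/to/corporate-ca.pem')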
I have access to both Google Cloud and AWS. I want to upload a file from Vertex AI Workbench to AWS S3; is that possible, or is there an alternative way?
I have read some threads that might help, and tried some code, but I still can't solve the problem; it raises this error:
Could not connect to the endpoint URL:
"https://xyz.s3.auto.amazonaws.com/uploaded.csv?uploads"
Here is my code:
import boto3
import os
import io

s3 = boto3.resource('s3')
key_id = "my_key"
access_key = "my_access_key"
client = boto3.client("s3", region_name="auto", aws_access_key_id=key_id, aws_secret_access_key=access_key)
client.upload_file(
    Filename="path_file.csv",
    Bucket="bucket_name",
    Key="uploaded.csv",
)
I think the issue here is that you're using region="auto", which AWS does not support. The region needs to be a real region because (as you can see in the error) it's being used to pick the endpoint. Try it without that:
import boto3

key_id = "my_key"
access_key = "my_access_key"
client = boto3.client("s3", aws_access_key_id=key_id, aws_secret_access_key=access_key)
client.upload_file(
    Filename="path_file.csv",
    Bucket="bucket_name",
    Key="uploaded.csv",
)
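If the bucket does live in a particular region, naming it explicitly works too; "auto" is a Cloudflare R2 convention, not an AWS one. A sketch (us-east-1 is only an example):

import boto3

# replace us-east-1 with the bucket's actual region
client = boto3.client("s3", region_name="us-east-1",
                      aws_access_key_id=key_id,
                      aws_secret_access_key=access_key)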
I am using Python and a Jupyter notebook to read files from an AWS S3 bucket, and I am getting the error 'NoCredentialsError: Unable to locate credentials' when running the following code:
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')
for obj in bucket.objects.all():
    key = obj.key
    body = obj.get()['Body'].read()
I believe I need to put my access key somewhere, but I am not sure where. Thank you!
If you have the AWS CLI installed, just do a simple aws configure. Then you will be good to go.
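If installing the CLI is not an option, the same credentials can be passed when the resource is created, or exported as the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. A sketch with placeholder values:

import boto3

# placeholders; substitute your real key pair
s3 = boto3.resource('s3',
                    aws_access_key_id='YOUR_ACCESS_KEY_ID',
                    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY')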
I was given this S3 URL: s3://file.share.external.bdex.com/Offrs
This URL contains a battery of files I need to download.
I have this code:
import boto3

s3_client = boto3.client('s3',
                         aws_access_key_id='<<ACCESS KEY>>',
                         aws_secret_access_key='<<SECRET_ACCESS_KEY>>')
object_listing = s3_client.list_objects_v2(Bucket='file.share.external.bdex.com/Offrs',
                                           Prefix='')
print(object_listing)
I have tried:
Bucket='file.share.external.bdex.com', Prefix='Offrs'
Bucket='s3://file.share.external.bdex.com/Offrs/'
Bucket='file.share.external.bdx.com/Offrs', Prefix='Offrs'
and several other configurations, all of which fail with either a regex-validation error (because of the slash) or 'not found'.
What am I missing?
Thank you.
Bucket = 'file.share.external.bdex.com'
Prefix = 'Offrs/'
You can test your access permissions via the AWS CLI:
aws s3 ls s3://file.share.external.bdex.com/Offrs/
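One caveat: list_objects_v2 returns at most 1,000 keys per call, so if Offrs/ holds a large batch of files, a paginator saves you from juggling continuation tokens by hand. A minimal sketch:

import boto3

s3_client = boto3.client('s3')
paginator = s3_client.get_paginator('list_objects_v2')
# each page holds up to 1,000 keys under the prefix
for page in paginator.paginate(Bucket='file.share.external.bdex.com', Prefix='Offrs/'):
    for obj in page.get('Contents', []):
        print(obj['Key'])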
I am working on a process to dump files from a Redshift database, and would prefer not to have to locally download the files to process the data. I saw that Java has a StreamingObject class that does what I want, but I haven't seen anything similar in boto3.
If you have a mybucket S3 bucket, which contains a beer key, here is how to download and fetch the value without storing it in a local file:
import boto3

s3 = boto3.resource('s3')
print(s3.Object('mybucket', 'beer').get()['Body'].read())
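Note that .read() pulls the entire object into memory. For large dumps like the ones in the question, the returned StreamingBody can also be consumed incrementally; a sketch using iter_chunks, where process() is a placeholder for your own handling:

import boto3

s3 = boto3.resource('s3')
body = s3.Object('mybucket', 'beer').get()['Body']
# iterate in 1 MiB chunks instead of loading everything at once
for chunk in body.iter_chunks(chunk_size=1024 * 1024):
    process(chunk)  # placeholder for your own processing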
smart_open is a Python 3 library for efficient streaming of very large files from/to storages such as S3, GCS, Azure Blob Storage, HDFS, WebHDFS, HTTP, HTTPS, SFTP, or local filesystem.
https://pypi.org/project/smart-open/
import json

import boto3
import smart_open

client = boto3.client(service_name='s3',
                      aws_access_key_id=AWS_ACCESS_KEY_ID,
                      aws_secret_access_key=AWS_SECRET_KEY)
url = 's3://.............'
fin = smart_open.open(url, 'r', transport_params={'client': client})
for line in fin:
    data = json.loads(line)
    print(data)
fin.close()
This may or may not be relevant to what you want to do, but for my situation one thing that worked well was using tempfile:
import tempfile
import boto3
bucket_name = '[BUCKET_NAME]'
key_name = '[OBJECT_KEY_NAME]'
s3 = boto3.resource('s3')
temp = tempfile.NamedTemporaryFile()
s3.Bucket(bucket_name).download_file(key_name, temp.name)
# do what you will with your file...
temp.close()
I use this solution, actually:

import boto3

s3_client = boto3.client('s3')

def get_content_from_s3(bucket: str, key: str) -> str:
    """Return the decoded content of an S3 object.

    param: bucket, s3 bucket
    param: key, path to the file, f.i. folder/subfolder/file.txt
    """
    s3_file = s3_client.get_object(Bucket=bucket, Key=key)['Body'].read()
    return s3_file.decode('utf-8').strip()
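Called like this, for example (bucket and key are placeholders):

content = get_content_from_s3('my-bucket', 'folder/subfolder/file.txt')
print(content)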