I'm using Python and tinys3 to write files to S3, but it's not working. Here's my code:
import tinys3
conn = tinys3.Connection('xxxxxxx','xxxxxxxx',tls=True)
f = open('testing_s3.txt','rb')
print conn.upload('testing_data/testing_s3.txt',f,'testing-bucket')
print conn.get('testing_data/testing_s3.txt','testing-bucket')
That gives the output:
<Response [301]>
<Response [301]>
When I try specifying the endpoint, I get:
requests.exceptions.HTTPError: 403 Client Error: Forbidden
Any idea what I'm doing wrong?
Edit: When I try using boto, it works, so the problem isn't in the access key or secret key.
I finally figured this out. Here is the correct code:
import tinys3
conn = tinys3.Connection('xxxxxxx','xxxxxxxx',tls=True,endpoint='s3-us-west-1.amazonaws.com')
f = open('testing_s3.txt','rb')
print conn.upload('testing_data/testing_s3.txt',f,'testing-bucket')
print conn.get('testing_data/testing_s3.txt','testing-bucket')
You have to use the region-specific endpoint, not s3.amazonaws.com. You can look up the region endpoint here: http://docs.aws.amazon.com/general/latest/gr/rande.html, under the heading "Amazon Simple Storage Service (S3)."
I got the idea from this thread: https://github.com/smore-inc/tinys3/issues/5
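If you'd rather look the region up programmatically instead of from the docs page, here is a small boto3 sketch (the bucket name is the placeholder from above; LocationConstraint comes back as None for us-east-1, and the dash-style endpoint matches the one used in the code above):
import boto3

s3 = boto3.client('s3')
location = s3.get_bucket_location(Bucket='testing-bucket')['LocationConstraint']
endpoint = 's3-%s.amazonaws.com' % location if location else 's3.amazonaws.com'
print(endpoint)  # e.g. s3-us-west-1.amazonaws.com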
If you are using an IAM user, it is necessary to allow the "s3:PutObjectAcl" action.
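If you need to grant that to the IAM user, here is a rough sketch of an inline policy that allows it (the user name, policy name, and bucket are placeholders, and your existing policy may already cover s3:PutObject):
import json
import boto3

iam = boto3.client('iam')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:PutObjectAcl"],
        "Resource": "arn:aws:s3:::testing-bucket/*"
    }]
}

# Attach as an inline policy to the IAM user (names are placeholders)
iam.put_user_policy(UserName='my-upload-user',
                    PolicyName='allow-put-object-acl',
                    PolicyDocument=json.dumps(policy))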
Don't know why, but this code never worked for me.
I switched to boto, and it uploaded the file on the first try.
import boto
from boto.s3.key import Key

AWS_ACCESS_KEY_ID = 'XXXXXXXXXXXXXXXXXXXXX'
AWS_SECRET_ACCESS_KEY = 'XXXXXXXXXXXXXXXXXXXXX/XXXXXXXXXXXXXXXXXXXXXXXXXXX'
bucket_name = 'my-bucket'

conn = boto.connect_s3(AWS_ACCESS_KEY_ID,
                       AWS_SECRET_ACCESS_KEY)
bucket = conn.get_bucket(bucket_name)

# 'filename' is the local path of the file to upload;
# percent_cb is a progress callback defined elsewhere in the script
print 'Uploading %s to Amazon S3 bucket %s' % \
    (filename, bucket_name)

k = Key(bucket)
k.key = filename
k.set_contents_from_filename(filename,
                             cb=percent_cb, num_cb=10)
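If anyone prefers boto3, here is a short sketch of the equivalent upload, reusing the credentials, bucket_name, and filename variables from the snippet above (using the same filename as the object key is an assumption):
import boto3

s3 = boto3.client('s3',
                  aws_access_key_id=AWS_ACCESS_KEY_ID,
                  aws_secret_access_key=AWS_SECRET_ACCESS_KEY)

# upload_file streams the local file and handles multipart uploads for large files
s3.upload_file(filename, bucket_name, filename)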
I'm working on an internal S3 service (not the AWS one). When I provide hard-coded credentials, a region, and an endpoint_url, boto3 seems to ignore them.
I came to that conclusion because it appears to be going out to the internet (using a public AWS endpoint URL instead of the internal one I provided), and it fails with the proxy error below. But it should not be going out to the internet at all, since it is an internal S3 service:
botocore.exceptions.ProxyConnectionError: Failed to connect to proxy URL: "http://my_company_proxy"
Here is my code:
import io
import os
import boto3
import pandas as pd
# Method 1 : Client #########################################
s3_client = boto3.client(
    's3',
    region_name='EU-WEST-1',
    aws_access_key_id='xxx',
    aws_secret_access_key='zzz',
    endpoint_url='https://my_company_enpoint_url'
)
# ==> at this point no error, but I don't know the value of endpoint_url

# Read bucket
bucket = "bkt-udt-arch"
file_name = "banking.csv"

print("debug 1")  # printed OK

obj = s3_client.get_object(Bucket=bucket, Key=file_name)
# program stops here with:
# botocore.exceptions.ProxyConnectionError: Failed to connect to proxy URL: "http://my_company_proxy"

print("debug 2")  # not printed
initial_df = pd.read_csv(obj['Body'])  # 'Body' is a keyword
print("debug 3")

# Method 2 : Resource #########################################
# use third-party object storage
s3 = boto3.resource(
    's3',
    endpoint_url='https://my_company_enpoint_url',
    aws_access_key_id='xxx',
    aws_secret_access_key='zzz',
    region_name='EU-WEST-1'
)
print("debug 4")  # printed OK if method 1 is commented out

# Print out bucket names
for bucket in s3.buckets.all():
    print(bucket.name)
Thank you for the review
It was indeed a proxy problem: when the http_proxy environment variable is unset, it works fine.
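For anyone hitting the same thing, here is a minimal sketch of that workaround, reusing the placeholder endpoint and credentials from the question (the exact proxy variable names depend on your environment); you can also print s3_client.meta.endpoint_url to confirm which endpoint the client actually resolved:
import os
import boto3

# Option A: drop the corporate proxy variables for this process
for var in ('http_proxy', 'https_proxy', 'HTTP_PROXY', 'HTTPS_PROXY'):
    os.environ.pop(var, None)

# Option B: keep the proxy but bypass it for the internal S3 host
# (hostname is the placeholder from the question)
os.environ['NO_PROXY'] = 'my_company_enpoint_url'

s3_client = boto3.client(
    's3',
    region_name='EU-WEST-1',
    aws_access_key_id='xxx',
    aws_secret_access_key='zzz',
    endpoint_url='https://my_company_enpoint_url'
)

# confirm which endpoint the client is really using
print(s3_client.meta.endpoint_url)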
I am trying to upload a file to an S3 bucket, and I have used the following code to do it.
Code
import boto3

accessKey = ''
secretKey = ''
session = boto3.Session(aws_access_key_id=accessKey, aws_secret_access_key=secretKey)
s3 = session.resource('s3')
try:
    response = s3.Object(bucket_name, 'sample.docx').upload_file(Filename='C:/Users/Anushka/Desktop/sample.docx')
except Exception as e:
    return e
The code does not do anything, not even raise an error, and if I print "response", "None" gets printed in the shell. I am not able to understand what the problem with the code is.
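One thing worth checking: in boto3, upload_file returns None on success and raises an exception on failure, so a printed "None" does not by itself mean the upload failed. Here is a minimal sketch to confirm the object actually landed, reusing the session and bucket_name from the snippet above:
from botocore.exceptions import ClientError

s3_client = session.client('s3')
try:
    head = s3_client.head_object(Bucket=bucket_name, Key='sample.docx')
    print('Upload succeeded, size in bytes:', head['ContentLength'])
except ClientError as e:
    print('Object not found, upload likely failed:', e)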
I have a Flask Python REST API which is called by another Flask REST API.
The input to my API is a Parquet file (a FileStorage object) plus the ECS connection and bucket details.
I want to save the Parquet file to ECS in a specific folder, using boto or boto3.
The code I have tried:
import boto.s3.key
from boto.s3.connection import S3Connection, OrdinaryCallingFormat
from tempfile import NamedTemporaryFile

def uploadFileToGivenBucket(self, inputData, file):
    BucketName = inputData.ecsbucketname
    calling_format = OrdinaryCallingFormat()
    client = S3Connection(inputData.access_key_id, inputData.secret_key, port=inputData.ecsport,
                          host=inputData.ecsEndpoint, debug=2,
                          calling_format=calling_format)
    #client.upload_file(BucketName, inputData.filename, inputData.folderpath)
    bucket = client.get_bucket(BucketName, validate=False)
    key = boto.s3.key.Key(bucket, inputData.filename)
    fileName = NamedTemporaryFile(delete=False, suffix=".parquet")
    file.save(fileName)
    with open(fileName.name, 'rb') as f:
        key.send_file(f)
But it is not working, and it gives me an error like this:
signature_host = '%s:%d' % (self.host, port)
TypeError: %d format: a number is required, not str
I tried Google but had no luck. Can anyone help me with this, or share sample code for the same?
After a lot of trial and error and time, I finally got the solution. I am posting it for everyone else who is facing the same issue.
You need to use boto3, and here is the code:
import logging
import boto3
from botocore.exceptions import ClientError
from tempfile import NamedTemporaryFile

def uploadFileToGivenBucket(self, inputData, file):
    BucketName = inputData.ecsbucketname
    #bucket = client.get_bucket(BucketName,validate=False)
    f = NamedTemporaryFile(delete=False, suffix=".parquet")
    file.save(f)
    endpointurl = "<your endpoints>"
    s3_client = boto3.client('s3', endpoint_url=endpointurl,
                             aws_access_key_id=inputData.access_key_id,
                             aws_secret_access_key=inputData.secret_key)
    try:
        newkey = 'yourfolderpath/anotherfolder' + inputData.filename
        response = s3_client.upload_file(f.name, BucketName, newkey)
    except ClientError as e:
        logging.error(e)
        return False
    return True
The code posted below gets the list of all S3 buckets on AWS. I am trying to write Python code that checks whether the buckets are encrypted, but I am having trouble figuring out how to do that. Can anyone tell me how to modify my code to do this? I have tried online examples and looked at the documentation.
My code is:
from __future__ import print_function
import boto3
import os
os.environ['AWS_DEFAULT_REGION'] = "us-east-1"
# Create an S3 client
s3 = boto3.client('s3')
# Call S3 to list current buckets
response = s3.list_buckets()
# Get a list of all bucket names from the response
buckets = [bucket['Name'] for bucket in response['Buckets']]
# Print out the bucket list
print("Bucket List: %s" % buckets)
I tried the following code snippets, but they don't work:
s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket-name')
for obj in bucket.objects.all():
    key = s3.Object(bucket.name, obj.key)
    print(key.server_side_encryption)
and
#!/usr/bin/env python
import boto3

s3_client = boto3.client('s3')
head = s3_client.head_object(
    Bucket="<S3 bucket name>",
    Key="<S3 object key>"
)
if 'ServerSideEncryption' in head:
    print(head['ServerSideEncryption'])
It's first worth understanding a few things about S3 and encryption.
1. When you enable default encryption on an S3 bucket, you're actually configuring a server-side encryption configuration rule on the bucket that will cause S3 to encrypt every object uploaded to the bucket after the rule was configured.
2. Unrelated to #1, you can apply an S3 bucket policy to a bucket, denying any uploads of objects that are not encrypted (a sketch of such a policy follows this list). This will prevent you from adding unencrypted data, but it will not automatically encrypt anything.
3. You can encrypt uploads on an object-by-object basis; encryption does not have to be bucket-wide.
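As a side note on #2, this is roughly what such a deny-unencrypted-uploads bucket policy could look like when applied with boto3; the bucket name is a placeholder, and the exact condition may differ if you require aws:kms instead of AES256:
import json
import boto3

s3 = boto3.client('s3')
bucket_name = 'my-bucket-name'  # placeholder

# Deny any PutObject request that does not carry a server-side-encryption header
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::%s/*" % bucket_name,
        "Condition": {
            "Null": {"s3:x-amz-server-side-encryption": "true"}
        }
    }]
}

s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(policy))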
So, to find out which buckets fall into category #1 (those that will automatically encrypt anything uploaded to them), you can do this:
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
response = s3.list_buckets()

for bucket in response['Buckets']:
    try:
        enc = s3.get_bucket_encryption(Bucket=bucket['Name'])
        rules = enc['ServerSideEncryptionConfiguration']['Rules']
        print('Bucket: %s, Encryption: %s' % (bucket['Name'], rules))
    except ClientError as e:
        if e.response['Error']['Code'] == 'ServerSideEncryptionConfigurationNotFoundError':
            print('Bucket: %s, no server-side encryption' % (bucket['Name']))
        else:
            print("Bucket: %s, unexpected error: %s" % (bucket['Name'], e))
This will result in output like this:
Bucket: mycats, no server-side encryption
Bucket: mydogs, no server-side encryption
Bucket: mytaxreturn, Encryption: [{'ApplyServerSideEncryptionByDefault': {'SSEAlgorithm': 'AES256'}}]
I have a script that copies files from one S3 account to another S3 account. It was working before, that's for sure. Then I tried it today and it doesn't work any more; it gives me the error S3ResponseError: 403 Forbidden. I'm 100% sure the credentials are correct, and I can go and download keys from both accounts manually using the AWS console.
Code
def run(self):
    while True:
        # Remove and return an item from the queue
        key_name = self.q.get()

        k = Key(self.s_bucket, key_name)
        d_key = Key(self.d_bucket, k.key)

        if not d_key.exists() or k.etag != d_key.etag:
            print 'Moving {file_name} from {s_bucket} to {d_bucket}'.format(
                file_name=k.key,
                s_bucket=source_bucket,
                d_bucket=dest_bucket
            )

            # Create a new key in the bucket by copying another existing key
            acl = self.s_bucket.get_acl(k)
            self.d_bucket.copy_key(d_key.key, self.s_bucket.name, k.key, storage_class=k.storage_class)
            d_key.set_acl(acl)
        else:
            print 'File exist'

        self.q.task_done()
Error:
File "s3_to_s3.py", line 88, in run
self.d_bucket.copy_key( d_key.key, self.s_bucket.name, k.key, storage_class=k.storage_class)
File "/usr/lib/python2.7/dist-packages/boto/s3/bucket.py", line 689, in copy_key
response.reason, body)
S3ResponseError: S3ResponseError: 403 Forbidden
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>0729E8ADBD7A9E60</RequestId><HostId>PSbbWCLBtLAC9cjW+52X1fUSVErnZeN79/w7rliDgNbLIdCpc9V0bPi8xO9fp1od</HostId></Error>
Try this: copy the key from the source bucket to the destination bucket using boto's Key class.
source_key_name = 'image.jpg'  # for example

# get_key returns a Key object
source_key = source_bucket.get_key(source_key_name)

# use Key.copy
source_key.copy(destination_bucket, source_key_name)
Regarding the copy function: you can set preserve_acl to True and the ACL will be copied from the source key.
Boto's Key.copy signature:
def copy(self, dst_bucket, dst_key, metadata=None,
         reduced_redundancy=False, preserve_acl=False,
         encrypt_key=False, validate_dst_bucket=True):
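For example, combining the snippet above with that flag (a sketch, reusing the same variable names):
source_key = source_bucket.get_key(source_key_name)
source_key.copy(destination_bucket, source_key_name, preserve_acl=True)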