I am trying to upload a file to an S3 bucket, and I have used the following code to do it.
Code
import boto3

accessKey = ''
secretKey = ''
session = boto3.Session(aws_access_key_id=accessKey, aws_secret_access_key=secretKey)
s3 = session.resource('s3')
try:
    response = s3.Object(bucket_name, 'sample.docx').upload_file(Filename='C:/Users/Anushka/Desktop/sample.docx')
except Exception as e:
    return e
The code does not appear to do anything, not even raise an error, and if I print "response", "None" gets printed on the shell. I am not able to understand what the problem with the code is.
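Note that boto3's upload_file returns None on success, so a None response by itself does not indicate failure. A minimal sketch for confirming the object actually reached the bucket, assuming the same bucket_name and credentials as above:
import boto3
from botocore.exceptions import ClientError

session = boto3.Session(aws_access_key_id=accessKey, aws_secret_access_key=secretKey)
s3 = session.resource('s3')
try:
    # load() issues a HEAD request and raises ClientError if the key is missing
    s3.Object(bucket_name, 'sample.docx').load()
    print('Upload succeeded: sample.docx is in the bucket')
except ClientError as e:
    if e.response['Error']['Code'] == '404':
        print('sample.docx was not found in the bucket')
    else:
        raise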
I have a Lambda function that reads an Excel file, does some processing, and then stores the result in a different S3 bucket.
def lambda_handler(event, context):
    try:
        status = int(event['status'])
        if status:
            Reading_Bucket_Name = 'read-data-s3'
            Writing_Bucket_Name = 'write-excel-file-bucket'
            rowDataFile = 'Analyse.xlsx'
            HTMLfileName = 'index.html'
            url = loading_function(Reading_Bucket_Name=Reading_Bucket_Name, Writing_Bucket_Name=Writing_Bucket_Name, rowDataFile=rowDataFile, HTMLfileName=HTMLfileName)
            status = 0
            return {"statusCode": 200, "URL": url}
        else:
            return {"statusCode": 400, "Error": "The code could not be executed"}
    except Exception as e:
        print('#________ An error occurred while reading Status code int(event[status]) ________#')
        print(e)
        raise e
    return None
The code is only supposed to run once: it should return the URL and then exit the Lambda function.
But the problem is: I get the first output, and then the Lambda function calls itself again! It then falls into the exception handler and runs many more times, because there is no event['status']. That key is only present when I invoke the function with:
{
"status": "1"
}
How can I stop execution after getting the first output?
Update:
This is the part that causes the problem, by uploading a new file to an S3 bucket:
s3_client = boto3.client('s3')
fig.write_html('/tmp/' + HTMLfileName, auto_play=False)
response = s3_client.upload_file('/tmp/' + HTMLfileName, Writing_Bucket_Name, HTMLfileName, ExtraArgs={'ACL':'public-read', 'ContentType':'text/html'})
return True
Given that the Lambda function appears to be running when a new object is created in the Amazon S3 bucket, it would appear that the bucket has been configured with an Event Notification that is triggering the AWS Lambda function.
To check this, go to the bucket in the S3 management console, open the Properties tab, and scroll down to Event notifications. Look for any configured events that trigger a Lambda function.
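If it is easier to check programmatically, the same information can be read with boto3; a minimal sketch, assuming the write bucket from the question:
import boto3

s3_client = boto3.client('s3')

# List any Lambda triggers configured on the bucket the function writes to.
# If an entry here points back at this function, every upload re-invokes it.
config = s3_client.get_bucket_notification_configuration(Bucket='write-excel-file-bucket')
print(config.get('LambdaFunctionConfigurations', []))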
I am trying to retrieve a file from an S3 bucket (using boto3), but I keep getting "An error occurred (AccessDenied) when calling the GetObject operation: Access Denied". Below is how I created the S3 bucket:
# S3
bucket = s3.Bucket(
    self,
    "testS3Bucket",
    bucket_name=f"test_s3_bucket",
    versioned=True,
)
bucket.grant_read_write(service_lambda)
and the method I used to get the file from S3:
def download_file(self, file_name, s3_bucket):
    try:
        file = self.s3.Bucket(s3_bucket).Object(file_name)
        file_content = file.get()["Body"].read()
    except ClientError as e:
        if e.response["Error"]["Code"] == "404":
            log.error("File does not exist for partner")
            return {}
        else:
            raise e
    except Exception as e:
        raise e
    return file_content.decode()
I even went as far as adding the "s3:*" action to the IAM policy statement, but I still got the same error. I was able to use the command below to successfully retrieve the file's metadata, so I don't think it is a permissions error.
aws s3api head-object --bucket test_s3_bucket --key configuration.txt
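To narrow this down, it can help to run the same two calls (HeadObject and GetObject) from Python with the exact credentials the code uses; a minimal sketch, assuming the bucket and key names from above:
import boto3
from botocore.exceptions import ClientError

s3_client = boto3.client("s3")
bucket = "test_s3_bucket"   # bucket name from the question
key = "configuration.txt"

# Running both calls with the same credentials helps separate a credentials/role
# problem from an object-level permissions problem.
for op_name in ("head_object", "get_object"):
    try:
        getattr(s3_client, op_name)(Bucket=bucket, Key=key)
        print(op_name, "succeeded")
    except ClientError as e:
        print(op_name, "failed:", e.response["Error"]["Code"])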
I have a Flask Python REST API which is called by another Flask REST API.
The input for my API is a Parquet file (a FileStorage object) plus the ECS connection and bucket details.
I want to save the Parquet file to ECS in a specific folder using boto or boto3.
The code I have tried:
def uploadFileToGivenBucket(self, inputData, file):
    BucketName = inputData.ecsbucketname
    calling_format = OrdinaryCallingFormat()
    client = S3Connection(inputData.access_key_id, inputData.secret_key, port=inputData.ecsport,
                          host=inputData.ecsEndpoint, debug=2,
                          calling_format=calling_format)
    #client.upload_file(BucketName, inputData.filename, inputData.folderpath)
    bucket = client.get_bucket(BucketName, validate=False)
    key = boto.s3.key.Key(bucket, inputData.filename)
    fileName = NamedTemporaryFile(delete=False, suffix=".parquet")
    file.save(fileName)
    with open(fileName.name) as f:
        key.send_file(f)
But it is not working, and it gives me an error like this:
signature_host = '%s:%d' % (self.host, port)
TypeError: %d format: a number is required, not str
I tried Google but had no luck. Can anyone help me with this, or share any sample code for the same?
After a lot of trial and error and time, I finally got the solution. I am posting it for everyone else who is facing the same issue.
You need to use boto3, and here is the code:
def uploadFileToGivenBucket(self, inputData, file):
    BucketName = inputData.ecsbucketname
    #bucket = client.get_bucket(BucketName,validate=False)
    f = NamedTemporaryFile(delete=False, suffix=".parquet")
    file.save(f)
    endpointurl = "<your endpoints>"
    s3_client = boto3.client('s3', endpoint_url=endpointurl,
                             aws_access_key_id=inputData.access_key_id,
                             aws_secret_access_key=inputData.secret_key)
    try:
        newkey = 'yourfolderpath/anotherfolder'+inputData.filename
        response = s3_client.upload_file(f.name, BucketName, newkey)
    except ClientError as e:
        logging.error(e)
        return False
    return True
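As a side note, the original boto error ("%d format: a number is required, not str") suggests the port was being passed as a string, so the boto 2 version might also have worked with a cast; this is a guess based only on that traceback:
# ecsport probably arrives as a string from the request payload; boto builds the
# signature host with '%s:%d', so the port must be an integer.
client = S3Connection(inputData.access_key_id, inputData.secret_key,
                      port=int(inputData.ecsport),
                      host=inputData.ecsEndpoint, debug=2,
                      calling_format=calling_format)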
I'm using Python and tinys3 to write files to S3, but it's not working. Here's my code:
import tinys3
conn = tinys3.Connection('xxxxxxx','xxxxxxxx',tls=True)
f = open('testing_s3.txt','rb')
print conn.upload('testing_data/testing_s3.txt',f,'testing-bucket')
print conn.get('testing_data/testing_s3.txt','testing-bucket')
That gives the output:
<Response [301]>
<Response [301]>
When I try specifying the endpoint, I get:
requests.exceptions.HTTPError: 403 Client Error: Forbidden
Any idea what I'm doing wrong?
Edit: When I try using boto, it works, so the problem isn't in the access key or secret key.
I finally figured this out. Here is the correct code:
import tinys3
conn = tinys3.Connection('xxxxxxx','xxxxxxxx',tls=True,endpoint='s3-us-west-1.amazonaws.com')
f = open('testing_s3.txt','rb')
print conn.upload('testing_data/testing_s3.txt',f,'testing-bucket')
print conn.get('testing_data/testing_s3.txt','testing-bucket')
You have to use the region endpoint, not s3.amazonaws.com. You can look up the region endpoint from here: http://docs.aws.amazon.com/general/latest/gr/rande.html. Look under the heading "Amazon Simple Storage Service (S3)."
I got the idea from this thread: https://github.com/smore-inc/tinys3/issues/5
If using an IAM user, it is necessary to allow the "s3:PutObjectAcl" action.
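For illustration, a policy statement granting both actions on the bucket used above could be attached to the user roughly like this (the user name and policy name are placeholders):
import json
import boto3

iam = boto3.client('iam')

# Allows uploading objects and setting their ACLs in the testing bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:PutObjectAcl"],
        "Resource": "arn:aws:s3:::testing-bucket/*"
    }]
}

iam.put_user_policy(
    UserName="upload-user",             # placeholder IAM user
    PolicyName="AllowS3UploadWithAcl",  # placeholder policy name
    PolicyDocument=json.dumps(policy)
)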
I don't know why, but this code never worked for me.
I switched to boto, and it uploaded the file on the first try.
import boto
from boto.s3.key import Key

AWS_ACCESS_KEY_ID = 'XXXXXXXXXXXXXXXXXXXXX'
AWS_SECRET_ACCESS_KEY = 'XXXXXXXXXXXXXXXXXXXXX/XXXXXXXXXXXXXXXXXXXXXXXXXXX'
bucket_name = 'my-bucket'

conn = boto.connect_s3(AWS_ACCESS_KEY_ID,
                       AWS_SECRET_ACCESS_KEY)
bucket = conn.get_bucket('my-bucket')

print 'Uploading %s to Amazon S3 bucket %s' % \
    (filename, bucket_name)

k = Key(bucket)
k.key = filename
# percent_cb is a progress callback; num_cb controls how often it is called
k.set_contents_from_filename(filename,
                             cb=percent_cb, num_cb=10)
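Since boto 2 is no longer maintained, a roughly equivalent upload with boto3 would be (a sketch, reusing the bucket and variable names from the snippet above):
import boto3

s3 = boto3.client('s3',
                  aws_access_key_id=AWS_ACCESS_KEY_ID,
                  aws_secret_access_key=AWS_SECRET_ACCESS_KEY)

# upload_file reads the local file and writes it to s3://my-bucket/<filename>
s3.upload_file(filename, 'my-bucket', filename)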
I have a script that copies files from one S3 account to another S3 account. It was definitely working before, but when I tried it today it no longer works and gives me the error S3ResponseError: 403 Forbidden. I'm 100% sure the credentials are correct, and I can download keys from both accounts manually using the AWS console.
Code
def run(self):
    while True:
        # Remove and return an item from the queue
        key_name = self.q.get()
        k = Key(self.s_bucket, key_name)
        d_key = Key(self.d_bucket, k.key)
        if not d_key.exists() or k.etag != d_key.etag:
            print 'Moving {file_name} from {s_bucket} to {d_bucket}'.format(
                file_name = k.key,
                s_bucket = source_bucket,
                d_bucket = dest_bucket
            )
            # Create a new key in the bucket by copying another existing key
            acl = self.s_bucket.get_acl(k)
            self.d_bucket.copy_key(d_key.key, self.s_bucket.name, k.key, storage_class=k.storage_class)
            d_key.set_acl(acl)
        else:
            print 'File exist'
        self.q.task_done()
Error:
File "s3_to_s3.py", line 88, in run
self.d_bucket.copy_key( d_key.key, self.s_bucket.name, k.key, storage_class=k.storage_class)
File "/usr/lib/python2.7/dist-packages/boto/s3/bucket.py", line 689, in copy_key
response.reason, body)
S3ResponseError: S3ResponseError: 403 Forbidden
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>0729E8ADBD7A9E60</RequestId><HostId>PSbbWCLBtLAC9cjW+52X1fUSVErnZeN79/w7rliDgNbLIdCpc9V0bPi8xO9fp1od</HostId></Error>
Try this: copy the key from the source bucket to the destination bucket using boto's Key class.
source_key_name = 'image.jpg'  # for example

# return a Key object
source_key = source_bucket.get_key(source_key_name)

# use Key.copy
source_key.copy(destination_bucket, source_key_name)
Regarding the copy function: you can set preserve_acl to True and the ACL will be copied from the source key.
Boto's Key.copy signature:
def copy(self, dst_bucket, dst_key, metadata=None,
         reduced_redundancy=False, preserve_acl=False,
         encrypt_key=False, validate_dst_bucket=True):
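So, applied to the snippet above, copying while keeping the source object's ACL would look roughly like this:
# Copy the object and carry its ACL over in one call, instead of reading the
# ACL from the source and setting it on the destination manually.
source_key = source_bucket.get_key(source_key_name)
source_key.copy(destination_bucket, source_key_name, preserve_acl=True)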