I am using Flask and Boto with AWS S3 to upload files. I would like to know the progress so I can show it to the user.
I am using the boto3 S3 client and calling, for example:
s3.put_object(Body=body, Bucket=BUCKET_NAME, Key=key, ContentType='image/png')
How can I know the progress of the upload?
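A minimal sketch of one common approach (an assumption on my part, not necessarily the asker's final solution): put_object does not expose a progress hook, but the managed transfer methods upload_file / upload_fileobj accept a Callback that boto3 invokes with the number of bytes sent in each chunk. The bucket name, key, and file path below are placeholders.

import os
import threading
import boto3

class ProgressTracker:
    # Callback object: boto3 calls it with the bytes transferred per chunk.
    def __init__(self, filename):
        self._size = os.path.getsize(filename)
        self._seen = 0
        self._lock = threading.Lock()

    def __call__(self, bytes_amount):
        with self._lock:
            self._seen += bytes_amount
            percent = (self._seen / self._size) * 100
            print(f"{self._seen} / {self._size} bytes ({percent:.1f}%)")

BUCKET_NAME = 'my-bucket'  # placeholder
key = 'image.png'          # placeholder
s3 = boto3.client('s3')
s3.upload_file('image.png', BUCKET_NAME, key,
               ExtraArgs={'ContentType': 'image/png'},
               Callback=ProgressTracker('image.png'))

Wiring the percentage into a Flask response (for example, storing it somewhere the browser can poll) is left out of this sketch.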
Related
How can I pass a Transfer Acceleration endpoint-url to the boto upload_file function while trying to upload a file to S3?
My current code is:
s3.Bucket(BUCKET).upload_file(filetoupload, pathnfilename, ExtraArgs={'ACL':'public-read'})
You would provide it when you create your resource object:
s3 = boto3.resource('s3', endpoint_url="the_endpoint")
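A sketch of how that might look for Transfer Acceleration specifically, assuming acceleration is already enabled on the bucket; the bucket name, local path, and key are placeholders. Either the explicit accelerate endpoint or botocore's use_accelerate_endpoint option should work:

import boto3
from botocore.config import Config

BUCKET = 'my-bucket'               # placeholder bucket (acceleration must be enabled)
filetoupload = 'local/file.png'    # placeholder local path
pathnfilename = 'remote/file.png'  # placeholder S3 key

# Option 1: point the resource at the Transfer Acceleration endpoint directly
s3 = boto3.resource('s3', endpoint_url='https://s3-accelerate.amazonaws.com')

# Option 2: let botocore switch to the accelerate endpoint for you
# s3 = boto3.resource('s3', config=Config(s3={'use_accelerate_endpoint': True}))

s3.Bucket(BUCKET).upload_file(filetoupload, pathnfilename, ExtraArgs={'ACL': 'public-read'})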
I'm using boto3 with AWS S3 to upload files to my bucket called uploadtesting. Here is an example implementation:
import boto3
...
s3 = boto3.resource('s3')
s3.meta.client.upload_file('files/report.pdf', 'uploadtesting', 'report.pdf')
Accessing the object from the AWS S3 console allows you to see the object URL; however, it is not a downloadable link. What I want to know is: how can I use Python to print out a downloadable link to the file I just uploaded?
It appears you are asking how to generate a URL that allows a private object to be downloaded.
This can be done by generating an Amazon S3 pre-signed URL, which provides access to a private S3 object for a limited time.
Basically, using credentials that have access to the object, you can create a URL that is 'signed'. When Amazon S3 receives this URL, it verifies the signature and provides access to the object if the expiry period has not ended.
From Presigned URLs — Boto3 documentation:
response = s3_client.generate_presigned_url(
    'get_object',
    Params={'Bucket': bucket_name, 'Key': object_name},
    ExpiresIn=expiration
)
The ExpiresIn parameter is expressed in seconds.
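Put together for the bucket and key from the question, a sketch might look like this (the one-hour expiry is just an assumed value):

import boto3

s3_client = boto3.client('s3')

# Generate a time-limited download link for the object uploaded above
url = s3_client.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'uploadtesting', 'Key': 'report.pdf'},
    ExpiresIn=3600,  # link stays valid for one hour (assumed value)
)
print(url)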
If the object is publicly accessible, you can use a plain (unsigned) URL instead. The format is:
https://BUCKET-NAME.s3.amazonaws.com/OBJECT-KEY
So your object would be:
https://uploadtesting.s3.amazonaws.com/report.pdf
There is no supplied function to generate this string, so use:
url = f'https://{bucket_name}.s3.amazonaws.com/{key}'
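One caveat worth noting (my addition, not part of the original answer): if the key can contain spaces or other special characters, it should be URL-encoded first. The key below is hypothetical:

import urllib.parse

bucket_name = 'uploadtesting'      # bucket from the question
key = 'reports/2021 summary.pdf'   # hypothetical key containing a space
url = f'https://{bucket_name}.s3.amazonaws.com/{urllib.parse.quote(key)}'
print(url)  # https://uploadtesting.s3.amazonaws.com/reports/2021%20summary.pdf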
I am using Python and a Jupyter notebook to read files from an AWS S3 bucket, and I am getting the error 'NoCredentialsError: Unable to locate credentials' when running the following code:
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')
for obj in bucket.objects.all():
    key = obj.key
    body = obj.get()['Body'].read()
I believe I need to put my access key somewhere, but I am not sure where. Thank you!
If you have the AWS CLI installed, just run a simple aws configure and enter your access key, secret key, and default region; boto3 reads the resulting shared credentials file automatically. Then you will be good to go.
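If you'd rather not configure the CLI, a sketch of passing credentials explicitly in code instead (the key values here are placeholders; environment variables or the shared credentials file are usually the safer choice for a notebook):

import boto3

# Placeholder credentials: substitute your own, or prefer environment
# variables / ~/.aws/credentials so keys never live in the notebook.
session = boto3.Session(
    aws_access_key_id='YOUR_ACCESS_KEY_ID',
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',
    region_name='us-east-1',  # assumed region
)
s3 = session.resource('s3')
bucket = s3.Bucket('my-bucket')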
I am trying to schedule a job in AWS Lambda where I get data from a JSON API. I want to transfer the JSON file to Amazon S3 every time. I have set up the S3 bucket and the AWS Lambda function with proper IAM roles. I am writing the AWS Lambda function in Python. The code works fine on an EC2 instance, but it does not transfer the file to S3 when I put it in AWS Lambda.
import os

def lambda_handler(event, context):
    # change the working directory to /tmp (the only writable path in Lambda)
    os.chdir("/tmp")
    print("loading function")
    # download the JSON file to /tmp
    os.system("wget https://jsonplaceholder.typicode.com/posts/1 -P /tmp")
    # use the AWS CLI to transfer the file to Amazon S3
    os.system("aws s3 sync . s3://targetbucket")
I am new to AWS Lambda. I am not getting any errors, but it's not giving me the expected output.
AWS Lambda does not include the AWS CLI by default.
You can either create a deployment package that bundles awscli, or use the Python boto3 library.
import os
import boto3

s3client = boto3.client('s3')
for filename in os.listdir('/tmp'):  # assuming there will not be any sub-directories
    fpath = os.path.join('/tmp', filename)
    if os.path.isfile(fpath):
        s3client.upload_file(fpath, 'targetbucket', filename)
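A sketch of how the whole handler might look with the CLI dependency removed entirely; urllib.request from the standard library stands in for wget, the bucket name targetbucket is taken from the question, and the local file name is an assumption:

import os
import urllib.request
import boto3

s3client = boto3.client('s3')

def lambda_handler(event, context):
    # download the JSON payload into /tmp, the only writable path in Lambda
    local_path = '/tmp/posts_1.json'
    urllib.request.urlretrieve('https://jsonplaceholder.typicode.com/posts/1', local_path)

    # upload the downloaded file to S3 with boto3 instead of the AWS CLI
    s3client.upload_file(local_path, 'targetbucket', os.path.basename(local_path))
    return {'uploaded': os.path.basename(local_path)}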
When my Airflow DAG fails, I get the errors in the path
/home/ec2-user/software/airflow/logs/dagtest_dag/trigger_load/2019-10-10T06:01:33.342433+00:00/1.log
How can I get these logs into an S3 bucket?
Configure this as a cron job.
import boto3

s3 = boto3.client('s3')
# Make sure your client is authenticated
with open('path/to/your/logs.log', 'rb') as data:
    s3.upload_fileobj(data, 'bucketname', 'path/to/your/logs/in/s3.log')
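Since Airflow writes one log file per task attempt under a nested DAG/task/run directory, a sketch that uploads the whole log tree while preserving the relative paths might look like this. The local root comes from the question, 'bucketname' from the answer above, and the logs/ prefix is an assumption:

import os
import boto3

s3 = boto3.client('s3')
log_root = '/home/ec2-user/software/airflow/logs'  # local Airflow log directory

# Walk the log tree and mirror it into the bucket under a logs/ prefix
for dirpath, _, filenames in os.walk(log_root):
    for name in filenames:
        local_file = os.path.join(dirpath, name)
        s3_key = 'logs/' + os.path.relpath(local_file, log_root)
        s3.upload_file(local_file, 'bucketname', s3_key)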