I'm using the following code to generate a signed URL:
client.generate_presigned_url(
    'get_object',
    Params={'Bucket': client.bucket, 'Key': s3_filename},
    ExpiresIn=expiry_seconds,
    HttpMethod='GET')
This generates a URL that looks something like this:
https://[bucket_name].s3.amazonaws.com/[path]?AWSAccessKeyId=[access-key-id]&Signature=[signature]&Expires=1478646479&x-amz-security-token=[long_token_string]
The 'Expires' timestamp is 4 days in the future (at the time of writing) and yet when clicking on the link, I get the 'Token has expired' XML response. The link does work for a few hours after generation.
Is there anything else that I should be looking at?
Thanks.
The expiry time of 1478646479 equates to 2016-11-08T23:07:59+00:00, which is in the future, so the expiry time being generated appears correct. (If the clock was wrong on the computer generating the link, this might have been wrong.)
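You can sanity-check the `Expires` value yourself by converting it from a Unix timestamp (seconds since the epoch, UTC):

```python
from datetime import datetime, timezone

# The Expires query parameter is a Unix timestamp in seconds (UTC).
expires = 1478646479
expiry_dt = datetime.fromtimestamp(expires, tz=timezone.utc)
print(expiry_dt.isoformat())  # 2016-11-08T23:07:59+00:00
```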
The next thing to check are the permissions associated with the entity (identified by the AWSAccessKeyId) that created the pre-signed URL. In effect, the URL is using the permissions of that entity (eg IAM User, IAM Role, STS credentials) to grant time-limited access to an object. If that entity no longer has permission to access the object, then the pre-signed URL will no longer function.
You can perform tests on pre-signed URLs by using the aws s3 presign command in the AWS Command-Line Interface (CLI).
If you created a presigned URL using a temporary token, then the URL expires when the token expires, even if the URL's own expiry time is later. To have a URL valid for up to seven days, you need to sign it with long-term IAM user credentials.
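A quick way to tell which case you are in is to look at the query string of the generated URL: a security-token parameter means the URL was signed with temporary (STS) credentials, so it will stop working when that token expires. A minimal sketch (the sample URL below is hypothetical):

```python
from urllib.parse import urlparse, parse_qs

def signed_with_temporary_credentials(url):
    """Return True if the presigned URL carries an STS security token."""
    params = parse_qs(urlparse(url).query)
    # Both casings occur in the wild: SigV2 appends x-amz-security-token,
    # SigV4 uses X-Amz-Security-Token.
    return any(k.lower() == 'x-amz-security-token' for k in params)

url = ("https://my-bucket.s3.amazonaws.com/path/to/key"
       "?AWSAccessKeyId=AKIAEXAMPLE&Signature=abc123&Expires=1478646479"
       "&x-amz-security-token=LONGTOKENSTRING")
print(signed_with_temporary_credentials(url))  # True
```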
Signature Version 4 is a requirement for this to work.
Here is what this looks like in boto3.
import boto3
from botocore.client import Config

s3 = boto3.client(
    's3',
    config=Config(signature_version='s3v4'),
    region_name='us-east-1',
    aws_access_key_id='THE_ID',
    aws_secret_access_key='THE_KEY'
)

response = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': bucket_name, 'Key': 'key_name'},
    ExpiresIn=expiration
)
Related
I have uploaded an object with the following code:
client = boto3.client('s3', aws_access_key_id=AWS_ACCESS_KEY_ID,
                      aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
response = client.put_object(
    Bucket=BUCKET_NAME, Body=in_mem_file.getvalue(), Key=str(img_name))
and I'm generating the URL by
url = client.generate_presigned_url('get_object', Params={
'Bucket': BUCKET_NAME, 'Key': str(img_name)}, ExpiresIn=518400)
I need to generate the URL without expiring in some cases. Is it possible to generate a URL that never expires for an object in an S3 bucket?
I need to generate the URL without expiring in some cases
It's not possible. If you don't want to regenerate links every 7 days, you have to look at other solutions for sharing S3 files. CloudFront in front of S3 is often used. Otherwise, you need a fully custom solution tailored to your specific needs.
If the object is public, it has a permanent url of the form:
def get_object_url(region, bucket, object_key):
    return f"https://{bucket}.s3.{region}.amazonaws.com/{object_key}"
Otherwise, you cannot have a permanent link.
I am facing an issue which I have been unable to solve. I wish to generate a pre-signed URL and redirect the user to it on the front end so that they can download the file. For that reason I added one more trigger to my Lambda function, an API Gateway. Here is my code:
import os
import boto3

def lambda_handler(event, context):
    url = ""
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        file_key = record['s3']['object']['key']
        uri_duration = 90
        s3Client = boto3.client('s3')
        url = s3Client.generate_presigned_url(
            'get_object',
            Params={'Bucket': bucket, 'Key': file_key},
            ExpiresIn=uri_duration)
        print(url)
    return {"location": url}
The code is supposed to save the pre-signed URL in "location", which is then mapped to the API. The problem is that, after debugging, I realized this function runs once every time it is called, whether by the API Gateway or by the S3 bucket trigger. Therefore I would like to ask if anyone knows a way to generate a pre-signed URL and do a 301 redirect there some other way. Sorry if my question is obvious; I am new to AWS.
I am sorry if this question is straightforward; I expected to find a lot of info on it, but unfortunately I have been looking for a while and can't find anything. How could I let a user download a file stored in an Amazon S3 bucket when clicking a button? I thought of implementing this with a Lambda function with the aid of the boto3 Python SDK, but I am not sure if this is possible. Thanks in advance for any help you can provide.
Yes, it should be possible. You can generate a pre-signed link in a manner similar to the following.
s3 = boto3.client('s3')
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': get_bucket_name(),
        'Key': s3_key
    }
)
The case you mentioned is possible. We can handle it in a couple of ways.
We could make the S3 bucket publicly readable, which effectively makes the objects available to all users; I don't think this is a good approach if you consider the security implications.
We can go for a pre-signed URL, for which we can set an expiration time so that the link expires after a specific period.
import boto3

bucket = 'bucket_name'            # name of the s3 bucket
file_key = 'file_path/file_name'  # key, including any folder paths
uri_duration = 10                 # expiry duration in seconds; default is 3600

s3Client = boto3.client('s3')
_uri = s3Client.generate_presigned_url(
    'get_object',
    Params={'Bucket': bucket, 'Key': file_key},
    ExpiresIn=uri_duration)
I'm using boto3 to copy encrypted EBS snapshots from one region to another, but I've been getting Invalid presigned URL messages when I try to copy. I'm generating the presigned URL using the boto3 client method generate_presigned_url in the source region and calling the copy function in the destination region like so:
uw2_client = non_prod.client(
'ec2',
region_name="us-west-2",
config=Config(signature_version='s3v4')
)
presigned_url = uw2_client.generate_presigned_url(
ClientMethod='copy_snapshot',
Params={
'SourceSnapshotId': og_snapshot_id, # Original snapshot ID
'SourceRegion': 'us-west-2',
'DestinationRegion': 'us-east-1'
# I also tried include all parameters from copy_snapshot.
# It didn't make a difference.
# 'Description': desc,
# 'KmsKeyId': 'alias/xva-nonprod-all-amicopykey',
# 'Encrypted': True,
}
)
Here's my code to create the copy.
ue1_client = non_prod.client(
'ec2',
region_name="us-east-1",
config=Config(signature_version='s3v4')
)
response = ue1_client.copy_snapshot(
    Description=desc,
    KmsKeyId='alias/xva-nonprod-all-amicopykey',  # Exists in us-east-1
    Encrypted=True,
    SourceSnapshotId=og_snapshot_id,
    SourceRegion='us-west-2',
    DestinationRegion='us-east-1',
    PresignedUrl=presigned_url
)
It successfully returns the presigned URL, but if I attempt to use that presigned URL to copy a snapshot, I get the Invalid presigned URL error. If I try to validate the URL:
r = requests.post(presigned_url)
print(r.status_code)
print(r.text)
I get:
<Response>
<Errors>
<Error>
<Code>AuthFailure</Code>
<Message>AWS was not able to validate the provided access credentials</Message>
</Error>
</Errors>
<RequestID>3189bb5b-54c9-4d11-ab4c-762cbea32d9a</RequestID>
</Response>
You'd think that it would be an issue with my credentials, but I'm not sure how; they're the same credentials I'm using to create the pre-signed URL. And my IAM user has unfettered access to EC2.
I'm obviously doing something wrong here, but I cannot figure out what it is. Any insight would be appreciated.
EDIT
Just to confirm that it's not a permissions issue, I tried this with my personal account which has access to everything. Still getting the same error message.
As it turns out, the documentation is wrong... A signed URL is NOT required when copying encrypted snapshots within the same account (according to AWS Support).
From AWS Support:
... it's not actually necessary to create the pre-signed URL in order to copy encrypted snapshot from one region to another (within the same AWS account).
However, according to their documentation, it's not possible to copy encrypted snapshots to another account either... ¯\_(ツ)_/¯
The current boto3.EC2.Client.copy_snapshot function documentation says:
PresignedUrl (string) --
When you copy an encrypted source snapshot using the Amazon EC2 Query API, you must supply a pre-signed URL. This parameter is optional for unencrypted snapshots.
Instead, it can simply be accomplished by creating the client object in the destination region and calling the copy_snapshot() method like so:
try:
    ec2 = boto3.client(
        service_name='ec2',
        region_name='us-east-1'
    )
    ec2.copy_snapshot(
        SourceSnapshotId='snap-xxxxxxxxxxxx',
        SourceRegion='us-west-2',
        Encrypted=True,
        KmsKeyId='DestinationRegionKeyId'
    )
except Exception as e:
    print(e)
What I want to accomplish is to generate a link to view a file (e.g. an image or PDF). The item is not accessible by URL (https://[bucket].s3.amazonaws.com/img_name.jpg); I think this is because it's private and not public. (I'm not the owner of the bucket, but he gave me the access key and secret key.)
For now, all I can do is to download a file with this code.
s3.Bucket('mybucket').download_file('upload/nocturnes.png', 'dropzone/static/pdf/download_nocturnes.png')
I want to access an image on S3 so I can put it in an HTML page. Can I view it using the access key and secret key? Thank you to those who can help!
You can accomplish this using a pre-signed URL via the generate_presigned_url function. The caveat is that pre-signed URLs must have an expiration time; with Signature Version 4 the maximum is seven days. Here is an example:
url = s3.generate_presigned_url(
    'get_object',
    Params={
        'Bucket': 'mybucket',
        'Key': 'upload/nocturnes.png',
    },
    ExpiresIn=3600)
print(url)
For people who want to use generate_presigned_url for a public object, and therefore don't want the signing part that appends credentials, the best solution I found is still to use generate_presigned_url, but with the client's Config.signature_version set to botocore.UNSIGNED.
The following returns the public link without the signing stuff.
import botocore
import boto3
from botocore.client import Config

config = Config(signature_version=botocore.UNSIGNED)
url = boto3.client('s3', config=config).generate_presigned_url(
    'get_object', ExpiresIn=0, Params={'Bucket': bucket, 'Key': key})
The relevant discussions on the boto3 repository are:
https://github.com/boto/boto3/issues/110
https://github.com/boto/boto3/issues/169
https://github.com/boto/boto3/issues/1415