This is the line of code I use to connect to my boto3 client.
s3_client = boto3.client('s3', aws_access_key_id = '<access-key>', aws_secret_access_key = '<secret-key>')
How can I modify it to use the credentials stored in ~/.aws/config instead? For example, if I have
[default]
aws_access_key_id = FOO
aws_secret_access_key = BAR
[recordings]
aws_access_key_id = ABC
aws_secret_access_key = DEF
How can I set my Python code to use the recordings profile?
This is the easiest way to achieve what you want:
session = boto3.Session(profile_name='recordings')
s3_client = session.client('s3')  # note: service names are lowercase
Best, Stefan
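An alternative, if you would rather not hard-code the profile name, is the AWS_PROFILE environment variable, which boto3 consults when no profile is passed explicitly. A minimal sketch:

```python
import os

# Select the 'recordings' profile for every boto3 client created
# afterwards; boto3 reads AWS_PROFILE while building its default session.
os.environ["AWS_PROFILE"] = "recordings"

# s3_client = boto3.client("s3")  # would now use the 'recordings' credentials
```

This keeps the profile choice out of the code entirely, so the same script can run against different profiles without modification.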
I have my credentials stored in an S3 bucket and can access the file using the boto3 library, but how can I point os.environ['GOOGLE_APPLICATION_CREDENTIALS'] to the file stored in S3?
client = boto3.client(
"s3",
aws_access_key_id=access_key,
aws_secret_access_key=secret_key
)
credentials = client.get_object(Bucket=bucket_name, Key='name-of-file.json')
GOOGLE_CREDENTIALS = json.loads(credentials["Body"].read().decode('utf-8'))
# THIS DOES NOT WORK
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = GOOGLE_CREDENTIALS
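The assignment above fails because GOOGLE_APPLICATION_CREDENTIALS must hold a file path, not a Python dict. One fix is to write the JSON fetched from S3 to a temporary file and point the variable at that. A minimal sketch, where the credentials dict is a placeholder standing in for the object loaded from S3:

```python
import json
import os
import tempfile

# Placeholder for the dict parsed from the S3 object's body.
GOOGLE_CREDENTIALS = {"type": "service_account", "project_id": "example"}

# Write the credentials to a temp file so the env var can reference a path.
with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
    json.dump(GOOGLE_CREDENTIALS, f)
    credentials_path = f.name

# Google client libraries expect a *path* here, not the JSON itself.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = credentials_path
```

Note that the file persists for the life of the process (delete=False), which is what the Google client libraries need when they lazily read the path.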
I am using a Lambda to create a pre-signed URL to download files that land in an S3 bucket. The code works and I get a URL, but when trying to access it I get:
af-south-1 location constraint is incompatible for the region-specific endpoint this request was sent to.
Both the bucket and the Lambda are in the same region. I'm at a loss as to what is actually happening; any ideas or solutions would be greatly appreciated. My code is below:
import json
import boto3
import boto3.session

def lambda_handler(event, context):
    session = boto3.session.Session(region_name='af-south-1')
    s3 = session.client('s3')
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        url = s3.generate_presigned_url(ClientMethod='get_object',
                                        Params={'Bucket': bucket, 'Key': key},
                                        ExpiresIn=400)
        print(url)
Set endpoint_url='https://s3.af-south-1.amazonaws.com' when creating the s3_client:
s3_client = session.client('s3',
region_name='af-south-1',
endpoint_url='https://s3.af-south-1.amazonaws.com')
Could you please try using the boto3 client directly rather than going through a session, and then generate the pre-signed URL:
import boto3
import requests
# Get the service client.
s3 = boto3.client('s3',region_name='af-south-1')
# Generate the URL to get 'key-name' from 'bucket-name'
url = s3.generate_presigned_url(
ClientMethod='get_object',
Params={
'Bucket': 'bucket-name',
'Key': 'key-name'
}
)
You could also have a look at these 1 & 2, which resemble the same issue.
I am trying to use boto3 for Amazon Mechanical Turk. I was trying to get the client using the following code:
import boto3
endpoint_url = 'https://mturk-requester.us-east-1.amazonaws.com'
aws_access_key_id = <aws_access_key_id>
aws_secret_access_key = <aws_secret_access_key>
region_name = 'us-east-1'
client = boto3.client('mturk',
aws_access_key_id = aws_access_key_id,
aws_secret_access_key = aws_secret_access_key,
region_name=region_name,
endpoint_url = endpoint_url
)
But I am getting the following error about an unknown service name:
botocore.exceptions.UnknownServiceError: Unknown service: 'mturk'. Valid service
names are: acm,..., xray
Why is 'mturk' not in this list? The code I am using is taken from mturk developer website.
Any suggestion is welcome! Thanks in advance!
The following, taken from the boto3 docs, does not work:
http://boto3.readthedocs.io/en/latest/guide/s3.html#generating-presigned-urls
This is my script, with placeholder bucket and key values:
import boto3
import requests
from botocore.client import Config
# Get the service client.
s3 = boto3.client('s3', config=Config(signature_version='s3v4'))
# Generate the URL to get 'key-name' from 'bucket-name'
url = s3.generate_presigned_url(
ClientMethod='get_object',
Params={
'Bucket': 'mybucketname',
'Key': 'myObject.txt'
}
)
print(url)
response = requests.get(url)
print(response)
S3 responds with a 403:
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>B5681E888657E2A1</RequestId>
<HostId>
FMS7oPPOXt4I0KXPPQwdBx2fyxze+ussMmy/BOWLVFusWMoU2zAErE08ez34O6VhSYRvIYFm7Bs=
</HostId>
</Error>
You need to provide AWS credentials to your boto3 client; see the docs here.
If you need help getting access to your credentials on AWS, you can look here.
import boto3
client = boto3.client(
's3',
aws_access_key_id=ACCESS_KEY,
aws_secret_access_key=SECRET_KEY,
aws_session_token=SESSION_TOKEN,
)
# Or via the Session
session = boto3.Session(
aws_access_key_id=ACCESS_KEY,
aws_secret_access_key=SECRET_KEY,
aws_session_token=SESSION_TOKEN,
)
I want to make a folder called img, which already exists within my private bucket, public. I'm using Boto3, and I want a script that makes only this folder public, nothing else.
This is how I'm currently connecting to the bucket, and how far I have got:
ACCESS_KEY_ID = 'xxxxx'
ACCESS_KEY_SECRET = 'xxxx'
bucket_name = 'mybucket'
sourceDir = "../../docs/buildHTML/html/"
destDir = ''
r = boto3.setup_default_session(region_name='eu-west-1')
s3 = boto3.resource('s3', aws_access_key_id=ACCESS_KEY_ID, aws_secret_access_key=ACCESS_KEY_SECRET)
bucket = s3.Bucket(bucket_name)
So I have the bucket and this works. How do I now make the folder img that already exists public?
You need to add a policy to the bucket, something like this:
{
"Version":"2012-10-17",
"Statement":[
{
"Sid":"PublicReadImages",
"Effect":"Allow",
"Principal": "*",
"Action":["s3:GetObject"],
"Resource":["arn:aws:s3:::mybucket/abc/img/*"]
}
]
}
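If you would rather build the policy in Python, the same document can be assembled as a dict and serialized with json.dumps before attaching it. The bucket name and key prefix below are placeholders matching the example above:

```python
import json

# Public-read policy scoped to a single key prefix; swap in your own
# bucket name and folder path before attaching it to the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadImages",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::mybucket/abc/img/*"],
        }
    ],
}
policy_json = json.dumps(policy)
```

Building the policy as a dict avoids hand-editing JSON strings and makes the Resource prefix easy to parameterize.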
You can do this through the AWS console, or any of the SDKs. In boto3, you attach it through the bucket's Policy sub-resource:
bucket = s3.Bucket(bucket_name)
bucket.Policy().put(Policy='<policy string here>')