I am running a Python script that uploads files to AWS S3 via a cron job.
The output of the cron job is mailed to me.
Everything works fine except for an extra message that I'm getting in the mail.
The error is sh: aws: command not found.
I tried setting the PATH environment variable in the script:
os.environ["PATH"] = "/usr/bin:/usr/local/bin"
That got rid of the error, but a new one showed up: The config profile (247-datapusher-s3) could not be found.
I just want to get rid of both of these messages.
Note: The script works perfectly in both cases and the files are successfully uploaded to the AWS S3 bucket.
Thanks in advance!! :)
Check your config file by editing ~/.aws/config.
You will see something like:
[default]
aws_access_key_id = KEY_ID
aws_secret_access_key = KEY_PASS
region = us-east-1 (or another region)
[247-datapusher-s3]
aws_access_key_id = KEY_ID
aws_secret_access_key = KEY_PASS
region = us-east-1 (or another region)
You will need to change it to:
[default]
aws_access_key_id = KEY_ID
aws_secret_access_key = KEY_PASS
region = us-east-1 (or another region)
[profile 247-datapusher-s3]
aws_access_key_id = KEY_ID
aws_secret_access_key = KEY_PASS
region = us-east-1 (or another region)
If you do not even see the 247-datapusher-s3 section and you're using it, you will need to add it to your config file.
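If your script shells out to the aws CLI (which the sh: aws: command not found message suggests), you can also sidestep both issues by calling the binary with an absolute path and an explicit profile. A minimal sketch, assuming the script uses subprocess and that aws lives in /usr/local/bin; the file and bucket names are placeholders:
import subprocess

# Absolute path avoids relying on PATH under cron; --profile avoids
# depending on which profile cron's environment resolves to.
subprocess.run(
    ["/usr/local/bin/aws", "s3", "cp", "/path/to/local/file",
     "s3://your-bucket/", "--profile", "247-datapusher-s3"],
    check=True,
)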
I'm looking for some guidance on uploading files to an AWS S3 bucket via a Python script and an IAM role. I am able to upload files using boto3 and an aws_access_key_id & aws_secret_access_key in other scripts.
However, I have now been given an IAM role to log in to a certain account. I have no issue using the AWS CLI to authenticate and query the S3 data, so I believe my .aws/credentials and .aws/config files are correct. However, I am not sure how to use the ARN value within my Python code.
This is what I have put together so far, but I get a variety of errors which all lead to access being denied:
session = boto3.Session(profile_name='randomName')
session.client('sts').get_caller_identity()
assumed_role_session = boto3.Session(profile_name='randomNameAccount')
print(assumed_role_session.client('sts').get_caller_identity())
credentials = session.get_credentials()
aws_access_key_id = credentials.access_key
aws_secret_access_key = credentials.secret_key
s3 = boto3.client('s3',
aws_access_key_id=aws_access_key_id,
aws_secret_access_key=aws_secret_access_key)
bucket_name = 'bucketName'
This is a sample of what my credentials and config files look like, for reference.
.aws/config file:
[profile randomNameAccount]
role_arn = arn:aws:iam::12345678910:role/roleName
source_profile = randomName
.aws/credentials file:
[randomName]
aws_access_key_id = 12345678910
aws_secret_access_key = 1234567-abcdefghijk
My question is about the Python code needed to authenticate against AWS and navigate an S3 bucket using an IAM role, and then upload files when I call an upload function.
Thank you in advance.
You should create an entry for the IAM Role in ~/.aws/credentials that refers to a set of IAM User credentials that have permission to assume the role:
[my-user]
aws_access_key_id = AKIAxxx
aws_secret_access_key = xxx
[my-role]
source_profile = my-user
role_arn = arn:aws:iam::123456789012:role/the-role
Add an entry to ~/.aws/config to provide a default region:
[profile my-role]
region = ap-southeast-2
Then you can assume the IAM Role with this code:
import boto3
# Create a session by assuming the role in the named profile
session = boto3.Session(profile_name='my-role')
# Use the session to access resources via the role
s3_client = session.client('s3')
response = s3_client.list_objects(Bucket=...)
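With that session in place, uploading is just another call on the same client; a minimal sketch, where the file names and bucket are placeholders:
import boto3

session = boto3.Session(profile_name='my-role')
s3_client = session.client('s3')

# Upload a local file to the bucket under the assumed role
s3_client.upload_file('local_file.txt', 'my-bucket', 'remote_key.txt')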
I have two AWS profiles in my config file, like below:
[profile projet]
region = us-east-1
output = json
[profile accPersonal]
region = us-east-1
output = json
and their respective entries in the credentials file, like below:
[accPersonal]
aws_access_key_id = key_id
aws_secret_access_key = access_key
[projet]
aws_access_key_id = key_id
aws_secret_access_key = access_key
When I try to launch a sceptre create from the command line:
sceptre create dev/api-gateway/admin-key.yaml
I get the following error
ERROR : "Session credentials were not found. Profile: None. Region: us-east-1."
I have used this command multiple times before to create resources in AWS using sceptre and never had this issue.
I checked the AWS docs for configuring profiles and the configuration looks correct, so I don't understand why I am getting that error.
I figured out why I was getting that error.
In my config.yaml file for sceptre I had:
project_code: projet_api
region: us-east-1
AvailabilityZoneA: us-west-2a
AvailabilityZoneB: us-west-2b
I searched the documentation (https://sceptre.cloudreach.com/1.4.2/environment_config.html):
we need to add a profile key to the .yaml file so that sceptre knows which profile to use when creating the resources.
I added profile to my config.yaml and it worked fine:
project_code: projet_api
region: us-east-1
AvailabilityZoneA: us-west-2a
AvailabilityZoneB: us-west-2b
profile: projet
The code below fails at the line s3 = boto3.client('s3'), returning the error botocore.exceptions.InvalidConfigError: The source profile "default" must have credentials.
import os
import boto3

def connect_s3_boto3():
    try:
        # Use the profile that assumes the role defined in ~/.aws/config
        os.environ["AWS_PROFILE"] = "a"
        s3 = boto3.client('s3')
        return s3
    except:
        raise
I have set up the key and secret using aws configure
My ~/.aws/credentials file looks like:
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXX
aws_secret_access_key = YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
My ~/.aws/config file looks like:
[default]
region = eu-west-1
output = json
[profile b]
region=eu-west-1
role_arn=arn:aws:iam::XX
source_profile=default
[profile a]
region=eu-west-1
role_arn=arn:aws:iam::YY
source_profile=default
[profile d]
region=eu-west-1
role_arn=arn:aws:iam::EE
source_profile=default
If I run aws-vault exec --no-session --debug a
it returns:
aws-vault: error: exec: Failed to get credentials for a9e: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: 7087ea72-32c5-4b0a-a20e-fd2da9c3c747
I noticed you tagged this question with "docker". Is it possible that you're running your code from a Docker container that does not have your AWS credentials in it?
Use a docker volume to pass your credential files into the container:
https://docs.docker.com/storage/volumes/
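For example, something along these lines mounts your local credentials read-only into the container (the image name is a placeholder, and it assumes the process in the container runs as root):
docker run -v ~/.aws:/root/.aws:ro your-image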
It is not a good idea to bake credentials into a container image, because anybody who uses the image will also have your credentials; this is considered bad practice.
For more information on how to properly deal with secrets, see https://docs.docker.com/engine/swarm/secrets/
I ran into this problem while trying to assume a role in an ECS container. It turned out that in such cases, instead of source_profile, credential_source should be used. It takes the value EcsContainer for a container, Ec2InstanceMetadata for an EC2 instance, or Environment for other cases.
Since the solution is not very intuitive, I thought it might save someone the trouble despite the age of this question.
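For reference, in that situation the role entry in ~/.aws/config would look something like this (the profile name, ARN, and region are placeholders):
[profile my-ecs-role]
role_arn = arn:aws:iam::123456789012:role/the-role
credential_source = EcsContainer
region = us-east-1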
In the end the issue was that the Docker container didn't have the credentials, and even connecting through bash and adding them there didn't work.
So, in the Dockerfile I added:
ADD myfolder/aws/credentials /root/.aws/credentials
This copies the local credentials file (created on the host with aws configure via the AWS CLI) into the container. Then I rebuilt the image and it works.
I'm using a container that simulates an S3 server running on http://127.0.0.1:4569 (with no authorization or credentials needed),
and I'm trying to simply connect and print a list of all the bucket names using Python and boto3.
here's my docker-compose:
s3:
  image: andrewgaul/s3proxy
  environment:
    S3PROXY_AUTHORIZATION: none
  hostname: s3
  ports:
    - 4569:80
  volumes:
    - ./data/s3:/data
here's my code:
import boto3

s3 = boto3.resource('s3', endpoint_url='http://127.0.0.1:4569')
for bucket in s3.buckets.all():
    print(bucket.name)
here's the error message that I received:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I tried this solution => How do you use an HTTP/HTTPS proxy with boto3?
but it's still not working, and I don't understand what I'm doing wrong.
First, boto3 always tries to handshake with the S3 server using an AWS API key. Even though your simulation server doesn't need credentials, you still need to specify them, either in your .aws/credentials file or inside your program, e.g.
[default]
aws_access_key_id = x
aws_secret_access_key = x
Hardcoded dummy access key example:
import boto3

session = boto3.Session(
    aws_access_key_id='x',
    aws_secret_access_key='x')
s3 = session.resource('s3', endpoint_url='http://127.0.0.1:4569')
Second, I don't know how reliable your "S3 simulation container" is or what kind of protocol it implements. To make life easier, I always suggest that anyone who wants to simulate S3 for load testing or anything else use fake-s3.
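If you prefer not to hardcode even dummy keys, boto3 will also pick them up from environment variables; a small sketch along those lines:
import os
import boto3

# Dummy values; the local simulation endpoint does not validate them.
os.environ['AWS_ACCESS_KEY_ID'] = 'x'
os.environ['AWS_SECRET_ACCESS_KEY'] = 'x'

s3 = boto3.resource('s3', endpoint_url='http://127.0.0.1:4569')
for bucket in s3.buckets.all():
    print(bucket.name)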
I want boto3 to get the access and secret key from a config file instead of hard-coding them. On my Linux server I set the environment variable AWS_SHARED_CREDENTIALS_FILE to the value /app/.aws/credentials, and in /app/.aws/ I put a file named credentials with the following content:
[default]
aws_access_key_id = abcd
aws_secret_access_key = abcd
Of course I used the actual keys instead of abcd.
Python:
import boto3
conn = boto3.client('s3',
region_name="eu-west-1",
endpoint_url="endpoint",
aws_access_key_id=aws_access_key_id,
aws_secret_access_key=aws_secret_access_key,
config=Config(signature_version="s3", s3={'addressing_style': 'path'}))
However, it fails with NameError: name 'aws_access_key_id' is not defined. How can I fix this? Thanks.
Edit:
>>> os.environ['AWS_SHARED_CREDENTIALS_FILE']
'/app/.aws/credentials'
If you already have a credentials file with your AWS credentials in it, you don't need to specify them when instantiating your client. The following should work:
import boto3
from botocore.client import Config

conn = boto3.client('s3',
                    region_name="eu-west-1",
                    endpoint_url="endpoint",
                    config=Config(signature_version="s3", s3={'addressing_style': 'path'}))
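If you want to check that boto3 is actually reading the file pointed to by AWS_SHARED_CREDENTIALS_FILE, you can inspect the credentials it resolves; a small sketch, assuming the environment variable is already set:
import boto3

# Prints the access key id that boto3 resolved from the shared credentials file.
credentials = boto3.Session().get_credentials()
print(credentials.access_key)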
If you are running your application on an EC2 instance, you can also assign an S3 role to the instance and have boto3 assume that role automatically. This prevents you from having to store keys on your instance.
Look at the section "Assume Role Provider" in the docs:
http://boto3.readthedocs.io/en/latest/guide/configuration.html
Link to IAM roles as well:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html