Background
I have an AWS Lambda function (of Image type) configured through a Dockerfile. The function should execute arbitrary Python code (the code is submitted by a user, so it can be malicious and must be restricted).
There is important information stored on EFS (mounted at /mnt/efs) which should be accessible from the Lambda function, but not from the user's code. The EFS access point is configured as:
AccessPointResource:
  Type: 'AWS::EFS::AccessPoint'
  Properties:
    FileSystemId: !Ref FileSystemResource
    PosixUser:
      Uid: '1000'
      Gid: '1000'
    RootDirectory:
      CreationInfo:
        OwnerGid: '1000'
        OwnerUid: '1000'
        Permissions: '0777'
      Path: '/mnt/efs'
Initial idea that did not work
Restrict AccessPointResource to allow reads for only a specific group
Include the main Lambda user in the group
Create a Linux user that is not in the group
When running the submitted code, run under the newly created user's credentials
The reasons why it didn't work:
When creating a user in the Dockerfile, the user disappears once the image is deployed
Tried creating the user both in the Dockerfile, with RUN /usr/sbin/useradd -ms /bin/bash coderunner, and in entrypoint.sh
Tried creating the user inside the Lambda function (in the Python code) - permission denied (the main Lambda user does not have permission to run /usr/sbin/useradd)
When specifying the user for Popen following the guide, every command fails with permission denied - for any user, even the current one (see the sketch below)
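The attempt looked roughly like this (a sketch; the path to the submitted code and the uid values are illustrative, and the user=/group= arguments require Python 3.9+):
import subprocess

# Sketch of the attempt: run the submitted code under a different uid.
# "/tmp/user_code.py" is an illustrative path; 1001 stands for the uid we
# wanted "coderunner" to have, and 993 is the uid the sandbox actually gives us.
result = subprocess.run(
    ["python3", "/tmp/user_code.py"],
    user=1001,               # also tried user=993 (the current uid)
    capture_output=True,
    text=True,
)
# Every variant failed inside the Lambda sandbox with a permission error.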
Additional information
AWS Lambda seems to reset all users and their permissions when the Docker image is deployed, to match the Lambda restrictions
It creates ~150 other users to manage access within the Lambda image
When printing os.getuid(), getpass.getuser(), os.getgroups() we get 993 sbx_user1051 []
When running cat /etc/passwd we see ~150 users, and none of them is the user we created (coderunner)
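For completeness, the introspection above was done with something like this inside the handler (a minimal sketch):
import getpass
import os
import subprocess

def lambda_handler(event, context):
    # prints: 993 sbx_user1051 []
    print(os.getuid(), getpass.getuser(), os.getgroups())
    # lists ~150 sandbox users; "coderunner" is not among them
    print(subprocess.run(["cat", "/etc/passwd"], capture_output=True, text=True).stdout)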
The main question
Is there a way to allow the main AWS Lambda code to access the EFS mount at /mnt/efs while restricting that access for code launched through a Python subprocess?
Related
I would like to require AWS IAM users to set their SourceIdentity after AssumeRoleWithSAML.
How can I set the SourceIdentity on the user's side, inside the credentials file or from an environment variable?
Does anyone know how I can configure a function to run on all AWS accounts instead of manually entering an AWS account ID as a parameter?
I have a Python script that I want to run and have it text me a notification if a certain condition is met. I'm using Twilio, so I have a Twilio API token and I want to keep it secret. I have it running successfully locally, and now I'm working on getting it running on an EC2 instance.
Regarding AWS steps, I've created an IAM user with permissions, launched the EC2 instance (and saved the ssh keys), and created some parameters in the AWS SSM Parameter store. Then I ssh'd into the instance and installed boto3. When I try to use boto3 to grab a parameter, I'm unable to locate the credentials:
# test.py
import boto3
ssm = boto3.client('ssm', region_name='us-west-1')
secret = ssm.get_parameter(Name='/test/cli-parameter')
print(secret)
# running the file in the console
>> python test.py
...
raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I'm pretty sure this means it can't find the credentials that aws configure created in the .aws/credentials file. I believe that's because I ran aws configure on my local machine rather than while ssh'd into the instance. I did this to keep my AWS ID and secret key off of my EC2 instance, because I thought I'm supposed to keep those private and not put tokens/keys on the instance. I think I can solve the error by running aws configure while ssh'd into my instance, but I want to understand what happens if there's a .aws/credentials file on the EC2 instance itself, and whether or not that's dangerous. I'm just not sure how this is all supposed to be structured, or what a safe/correct way of running my script and accessing secret variables looks like.
Any insight at all is helpful!
I suspect the answer you're looking for looks something like:
Create an IAM policy that allows access to the SSM parameter (why not use Secrets Manager?)
Attach that IAM policy to a role.
Attach the role to your EC2 instance (instance profile).
boto3 will now automatically collect an AWS secret key, etc. from the instance metadata service when it needs to talk to the Parameter Store.
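If it helps, here is a rough boto3 sketch of those steps (the account ID, role, policy, and profile names are placeholders; in practice the console or CloudFormation/Terraform is more common for this):
import json
import boto3

iam = boto3.client("iam")

# 1. A policy that allows reading the parameter (account ID and names are placeholders).
policy = iam.create_policy(
    PolicyName="read-test-cli-parameter",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["ssm:GetParameter"],
            "Resource": "arn:aws:ssm:us-west-1:123456789012:parameter/test/cli-parameter",
        }],
    }),
)

# 2. A role that EC2 is allowed to assume, with the policy attached.
assume_role = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="twilio-script-role",
                AssumeRolePolicyDocument=json.dumps(assume_role))
iam.attach_role_policy(RoleName="twilio-script-role",
                       PolicyArn=policy["Policy"]["Arn"])

# 3. An instance profile wrapping the role; attach it to the instance in the
#    EC2 console (or with ec2.associate_iam_instance_profile).
iam.create_instance_profile(InstanceProfileName="twilio-script-profile")
iam.add_role_to_instance_profile(InstanceProfileName="twilio-script-profile",
                                 RoleName="twilio-script-role")

# 4. With the profile attached, the original test.py works unchanged:
#    boto3 fetches temporary credentials from the instance metadata service.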
I am building out a product that will use the serverless architecture on Amazon (using this example project).
Right now the product is usable by anyone. However, I don't want just anyone to be able to add/update/delete from the database; I do want anyone to be able to read from it, though. So I'd like to use two different sets of credentials. The first would be distributed with the application and would allow read-only access. The second set remains internal and would be embedded in environment variables that the application would use.
It looks like these permissions are set up in the serverless.yml file, but this is only for one set of credentials.
iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:Query
      - dynamodb:Scan
      - dynamodb:GetItem
      - dynamodb:PutItem
      - dynamodb:UpdateItem
      - dynamodb:DeleteItem
    Resource: "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/${self:provider.environment.DYNAMODB_TABLE}"
How can I set up two different roles?
IAM offers a number of pre-defined, managed IAM policies for DynamoDB, including:
AmazonDynamoDBReadOnlyAccess
AmazonDynamoDBFullAccess
Create two IAM roles with these managed policies: one for your read-only application and the other for your internal system. If either or both are running on EC2 then, rather than relying on credentials in environment variables, you can launch those EC2 instances with the relevant IAM role.
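A rough boto3 sketch of creating those two roles (the role names and trust principal are illustrative; swap lambda.amazonaws.com for ec2.amazonaws.com if the code runs on EC2, and the managed policy ARNs are the real ones):
import json
import boto3

iam = boto3.client("iam")

# Trust policy: adjust the principal to whatever actually runs the code.
trust = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
})

# Role 1: distributed, read-only application.
iam.create_role(RoleName="app-dynamodb-readonly", AssumeRolePolicyDocument=trust)
iam.attach_role_policy(
    RoleName="app-dynamodb-readonly",
    PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess",
)

# Role 2: internal system with full read/write access.
iam.create_role(RoleName="internal-dynamodb-full", AssumeRolePolicyDocument=trust)
iam.attach_role_policy(
    RoleName="internal-dynamodb-full",
    PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess",
)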
I'm using the very good Click framework to build a Python CLI that acts as a "wrapper" around a set of complex REST APIs. I've used the "complex" example in order to have good boilerplate code to build the rest of the CLI.
However, since the CLI itself communicates with REST APIs, I need a bit of configuration for each command. Example: user authentication (ID, password, etc.) and, if different from the default, the URL of the API server.
I could force the user to pass this configuration as parameters to each command, but this would be really annoying when executing many commands (the user would have to enter their auth details for every command).
Is there a way to have the user enter their credentials with the first command so that the username/password persist for the entire session (like the mysql CLI, for example), and then "log out" from the CLI after executing the commands they need?
The way this is normally done is to have a configure command that stores these credentials in a file (normally in the user's $HOME folder, if you are on Linux) and changes its permissions so it is only readable by the user.
You can use configparser (or JSON or YAML or whatever you want) to load different sets of credentials based on a profile:
# $HOME/.your-config-name
[default]
auth-mode=password
username=bsmith
password=abc123
[system1]
auth-mode=oauth
auth-token=abc-123
auth-url=http://system.1/authenticate
[system2]
auth-mode=anonymous
auth-url=http://this-is.system2/start
Then you can use a global argument (say --profile) to pick which credentials should be used for a given request:
$ your-cli --profile system1 command --for first-system
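As a minimal sketch of how the configure command, the config file, and the --profile option could fit together with Click and configparser (the command and option names here are illustrative, not part of your project):
import configparser
import os
import stat

import click

CONFIG_PATH = os.path.expanduser("~/.your-config-name")

def load_profile(profile):
    """Return one [section] of the config file as a dict of credentials."""
    parser = configparser.ConfigParser()
    parser.read(CONFIG_PATH)
    if not parser.has_section(profile):
        raise click.ClickException(f"no such profile '{profile}'; run configure first")
    return dict(parser[profile])

@click.group()
@click.option("--profile", default="default", help="Which credentials section to use.")
@click.pass_context
def cli(ctx, profile):
    # Remember the chosen profile name for the subcommands.
    ctx.obj = profile

@cli.command()
@click.pass_obj
def configure(profile):
    """Prompt for credentials and store them, readable only by the current user."""
    parser = configparser.ConfigParser()
    parser.read(CONFIG_PATH)
    parser[profile] = {
        "auth-mode": "password",
        "username": click.prompt("Username"),
        "password": click.prompt("Password", hide_input=True),
    }
    with open(CONFIG_PATH, "w") as f:
        parser.write(f)
    os.chmod(CONFIG_PATH, stat.S_IRUSR | stat.S_IWUSR)  # 0600

@cli.command()
@click.pass_obj
def command(profile):
    """Placeholder for a real command; it just shows which credentials were picked."""
    creds = load_profile(profile)
    click.echo(f"Calling the API as {creds.get('username', 'anonymous')}")

if __name__ == "__main__":
    cli()
With that in place, your-cli --profile system1 configure stores the credentials once, and later invocations such as your-cli --profile system1 command reuse them without prompting again.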