I have a containerized Python app deployed on an Azure Kubernetes Service cluster (I also tried an Azure Container App). I'm trying to connect this app to Azure Key Vault to fetch some secrets. I created a managed identity and assigned it to both, but the Python app always fails to find the managed identity and never even attempts to connect to the key vault.
The managed identity's role assignments:

- Key Vault Contributor -> on the key vault
- Managed Identity Operator -> on the managed identity
- Azure Kubernetes Service Contributor Role, Azure Kubernetes Service Cluster User Role, Managed Identity Operator -> on the resource group that contains the cluster

Also, on the key vault's access policies I added the managed identity and gave it all key, secret, and certificate permissions (for now).
Python code:
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

credential = ManagedIdentityCredential()
vault_client = SecretClient(vault_url=key_vault_uri, credential=credential)
retrieved_secret = vault_client.get_secret(secret_name)
I keep getting the error:
azure.core.exceptions.ClientAuthenticationError: Unexpected content type "text/plain; charset=utf-8"
Content: no azure identity found for request clientID
So at some point I attempted to store the managed identity's clientId in a cluster secret and load it from there, but I still got the same error:
Python code:
import base64
import json

from kubernetes import client as kube_client, config as kube_config

def get_kube_secret(self, secret_name):
    kube_config.load_incluster_config()
    v1_secrets = kube_client.CoreV1Api()
    string_secret = str(v1_secrets.read_namespaced_secret(secret_name, "redacted_namespace_name").data).replace("'", "\"")
    json_secret = json.loads(string_secret)
    return json_secret

def decode_base64_string(self, encoded_string):
    decoded_secret = base64.b64decode(encoded_string.strip())
    decoded_secret = decoded_secret.decode('UTF-8')
    return decoded_secret

managed_identity_client_id_secret = self.get_kube_secret('managed-identity-credential')['clientId']
managed_identity_client_id = self.decode_base64_string(managed_identity_client_id_secret)
Update:
I also attempted to use the Secrets Store CSI driver, but I have a feeling I'm missing a step there. Should the Python code be updated to be able to use the Secrets Store CSI driver?
# This is a SecretProviderClass using user-assigned identity to access the key vault
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname-user-msi
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"          # Set to true for using managed identity
    userAssignedIdentityID: "$CLIENT_ID"  # Set the clientID of the user-assigned managed identity to use
    vmmanagedidentityclientid: "$CLIENT_ID"
    keyvaultName: "$KEYVAULT_NAME"        # Set to the name of your key vault
    cloudName: ""                         # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud
    objects: ""
    tenantId: "$AZURE_TENANT_ID"
Deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: redacted_namespace
  labels:
    app: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: redacted_image
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          imagePullPolicy: Always
          resources:
            # You must specify requests for CPU to autoscale
            # based on CPU utilization
            requests:
              cpu: "250m"
          env:
            - name: test-secrets
              valueFrom:
                secretKeyRef:
                  name: test-secrets
                  key: test-secrets
          volumeMounts:
            - name: test-secrets
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: test-secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "azure-kvname-user-msi"
      dnsPolicy: ClusterFirst
Update 16/01/2023
I followed the steps in the answers and the linked docs to the letter, even contacted Azure support and followed it step by step with them on the phone, and the result is still the following error:
"failed to process mount request" err="failed to get objectType:secret, objectName:MongoUsername, objectVersion:: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://<RedactedVaultName>.vault.azure.net/secrets/<RedactedSecretName>/?api-version=2016-10-01: StatusCode=400 -- Original Error: adal: Refresh request failed. Status Code = '400'. Response body: {\"error\":\"invalid_request\",\"error_description\":\"Identity not found\"} Endpoint http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&client_id=<RedactedClientId>&resource=https%3A%2F%2Fvault.azure.net"
Using the Secrets Store CSI Driver, you can configure the SecretProviderClass to use a workload identity by setting the clientID in the SecretProviderClass. You'll need to use the client ID of your user-assigned managed identity and change the usePodIdentity and useVMManagedIdentity settings to false.
With this approach, you don't need to add any additional code in your app to retrieve the secrets. Instead, you can mount a secrets store (using the CSI driver) as a volume in your pod and have secrets loaded as environment variables, which is documented here.
This doc will walk you through setting it up on Azure, but at a high level here is what you need to do:

1. Register the EnableWorkloadIdentityPreview feature using Azure CLI.
2. Create an AKS cluster using Azure CLI with the azure-keyvault-secrets-provider add-on enabled and the --enable-oidc-issuer and --enable-workload-identity flags set.
3. Create an Azure Key Vault and set your secrets.
4. Create an Azure user-assigned managed identity and set an access policy on the key vault for the managed identity's client ID.
5. Connect to the AKS cluster and create a Kubernetes ServiceAccount with the annotations and labels that enable Azure workload identity.
6. Create an Azure identity federated credential for the managed identity using the AKS cluster's OIDC issuer URL and the Kubernetes ServiceAccount as the subject.
7. Create a Kubernetes SecretProviderClass using clientID to use workload identity, adding a secretObjects block to enable syncing objects as environment variables via a Kubernetes secret (see the sketch after this list).
8. Create a Kubernetes Deployment with a label to use workload identity, serviceAccountName set to the service account you created above, a CSI volume referencing the secret provider class you created above, a volumeMount, and finally environment variables in your container using valueFrom and secretKeyRef syntax to mount from your secret object store.
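To make steps 5 and 7 concrete, here is a minimal sketch of the ServiceAccount and SecretProviderClass. All names and IDs are placeholders, not values from the question; check the exact annotation and label keys against the current workload identity docs:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: workload-identity-sa             # placeholder name
  namespace: redacted_namespace
  annotations:
    azure.workload.identity/client-id: "<managed-identity-client-id>"
  labels:
    azure.workload.identity/use: "true"
---
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname-workload-identity
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    clientID: "<managed-identity-client-id>"   # workload identity
    keyvaultName: "<keyvault-name>"
    objects: |
      array:
        - |
          objectName: MongoUsername
          objectType: secret
    tenantId: "<tenant-id>"
  secretObjects:                               # sync to a k8s Secret for env vars
    - secretName: test-secrets
      type: Opaque
      data:
        - objectName: MongoUsername
          key: MongoUsername

The Deployment then references workload-identity-sa via serviceAccountName and mounts the CSI volume exactly as in the question's manifest.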
Hope that helps.
What you are referring to is called pod identity (recently deprecated in favor of workload identity).
If the cluster is configured with managed identity, you can use workload identity.
However, for AKS I suggest configuring the Secrets Store CSI driver to fetch secrets from Key Vault and expose them as Kubernetes secrets. To use managed identity for the secret provider, refer to this doc.
Then you can configure your pods to read those secrets.
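For example, the add-on and its managed identity can be set up roughly like this (resource and cluster names are placeholders; double-check the flags against the current docs):

az aks enable-addons --addons azure-keyvault-secrets-provider \
    --resource-group <resource-group> --name <cluster-name>

# The add-on creates a user-assigned managed identity; grant its client ID
# access to the vault and reference it in the SecretProviderClass:
az aks show --resource-group <resource-group> --name <cluster-name> \
    --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv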
I finally figured it out. I contacted Microsoft support, and it seems AKS Preview is a bit buggy, "go figure". They recommended reverting to a stable version of the CLI and using a user-assigned identity.
I did just that, but this time, instead of creating my own identity and assigning it to both the vault and the cluster (this seems to confuse it), I used the identity the cluster automatically generates for its nodes.
Maybe not the neatest solution, but it's the only one that worked for me without any issues.
Finally, some notes missing from the Azure docs:
Since the CSI driver mounts the secrets as files in the target folder, you still need to read those files yourself to load them as env variables.
For example in python:
import os

def load_secrets():
    directory = '/path/to/mounted/secrets/folder'
    if not os.path.isdir(directory):
        return
    for filename in os.listdir(directory):
        file_path = os.path.join(directory, filename)
        # checking if it is a file
        if os.path.isfile(file_path):
            with open(file_path, 'r') as file:
                file_value = file.read()
                os.environ.setdefault(filename, file_value)
I have Pulumi Python code that creates some Azure resources. Instead of using my user account to create resources, I created an Azure service principal with PowerShell and authenticate with the method below:
pulumi config set azure-native:clientId <clientID>
pulumi config set azure-native:clientSecret <clientSecret> --secret
pulumi config set azure-native:tenantId <tenantID>
pulumi config set azure-native:subscriptionId <subscriptionID>
Now, I want to know: is it possible to create an Azure service principal with Pulumi and then authenticate with the service principal that was created?
Another question: is this the right way to do it?
Edit:
Following these documents on Service Principal, my code is:
import pulumi
from pulumi.output import Output
import pulumi_azure_native as azure_native
import pulumi_azuread as azuread

current = azuread.get_client_config()

example_application = azuread.Application("exampleApplication",
    display_name="example",
    owners=[current.object_id])

example_service_principal = azuread.ServicePrincipal("exampleServicePrincipal",
    application_id=example_application.application_id,
    app_role_assignment_required=False,
    owners=[current.object_id])
I received this error, even though I installed the package with pip install pulumi-azuread:
ModuleNotFoundError: No module named 'pulumi_azuread'
You certainly can, using the ServicePrincipal resource from the Azure Active Directory provider.
You'll need to define an application first, but in TypeScript it would look something like this:
import * as azuread from "@pulumi/azuread";

const application = new azuread.Application(`application`, {
    displayName: `application`,
});

const servicePrincipal = new azuread.ServicePrincipal(`servicePrincipal`, {
    applicationId: application.applicationId,
});
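Since the question uses Python, here is a rough equivalent as an untested sketch; the ServicePrincipalPassword resource and the exports are my additions, so check them against the pulumi_azuread docs:

import pulumi
import pulumi_azuread as azuread

application = azuread.Application("application", display_name="application")

service_principal = azuread.ServicePrincipal(
    "servicePrincipal",
    application_id=application.application_id,
)

# A client secret for the new service principal
sp_password = azuread.ServicePrincipalPassword(
    "servicePrincipalPassword",
    service_principal_id=service_principal.id,
)

pulumi.export("clientId", application.application_id)
pulumi.export("clientSecret", pulumi.Output.secret(sp_password.value))

Note that a stack cannot authenticate with a service principal it is itself creating, so you would typically create the service principal in one stack and then feed its credentials into the provider config of another.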
The code below fails on the line s3 = boto3.client('s3'), returning the error botocore.exceptions.InvalidConfigError: The source profile "default" must have credentials.
import os
import boto3

def connect_s3_boto3():
    try:
        os.environ["AWS_PROFILE"] = "a"
        s3 = boto3.client('s3')
        return s3
    except:
        raise
I have set up the key and secret using aws configure
My ~/.aws/credentials file looks like:
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXX
aws_secret_access_key = YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
My ~/.aws/config file looks like:
[default]
region = eu-west-1
output = json
[profile b]
region=eu-west-1
role_arn=arn:aws:iam::XX
source_profile=default
[profile a]
region=eu-west-1
role_arn=arn:aws:iam::YY
source_profile=default
[profile d]
region=eu-west-1
role_arn=arn:aws:iam::EE
source_profile=default
If I run aws-vault exec --no-session --debug a, it returns:
aws-vault: error: exec: Failed to get credentials for a9e: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: 7087ea72-32c5-4b0a-a20e-fd2da9c3c747
I noticed you tagged this question with "docker". Is it possible that you're running your code from a Docker container that does not have your AWS credentials in it?
Use a docker volume to pass your credential files into the container:
https://docs.docker.com/storage/volumes/
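For example, you can mount your local credentials read-only when starting the container (the image name and the in-container home directory are assumptions):

docker run -v $HOME/.aws:/root/.aws:ro my-image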
It is not a good idea to add credentials to a container image, because anybody who uses the image will have and can use your credentials.
This is considered bad practice.
For more information how to properly deal with secrets see https://docs.docker.com/engine/swarm/secrets/
I ran into this problem while trying to assume a role on an ECS container. It turned out that in such cases, instead of source_profile, credential_source should be used. It takes the value EcsContainer for a container, Ec2InstanceMetadata for an EC2 machine, or Environment for other cases.
Since the solution is not very intuitive, I thought it might save someone the trouble, despite the age of this question.
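For example, profile a from the question would become something like this when the code runs on ECS:

[profile a]
region = eu-west-1
role_arn = arn:aws:iam::YY
credential_source = EcsContainer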
In the end, the issue was that Docker didn't have the credentials; despite connecting through bash and adding them, it didn't work.
So, in the Dockerfile I added:
ADD myfolder/aws/credentials /root/.aws/credentials
to copy the localhost credentials file (created through the AWS CLI with aws configure) into the container. Then I built the Docker image again and it worked.
I have a Python script I want to run in Azure Resource Manager context within an Azure DevOps pipeline task to be able to access Azure resources (like the Azure CLI or Azure PowerShell tasks).
How can I get Azure RM Service Endpoint credentials stored in Azure DevOps passed - as ServicePrincipal/Secret or OAuth Token - into the script?
If I understand the issue correctly, you want to use the Python Azure CLI wrapper classes to manage or access Azure resources, rather than using shell or PowerShell commands. I ran across the same issue and used the following steps to solve it.
import sys
from azure.identity import ClientSecretCredential
tenant_id = sys.argv[1]
client_id = sys.argv[2]
client_secret = sys.argv[3]
credentials = ClientSecretCredential(tenant_id, client_id, client_secret)
Add a "User Python version" step to add the correct version of python to your agent's PATH
Add a "Azure CLI" step. The goal here is to install your requirements and execute the script.
Within the Azure CLI step, be sure to check the "Access service principal details in script" box in the Advanced section. This will allow you to pass in the service principal details into your script as arguments.
Pass in the $tenantId $servicePrincipalId $servicePrincipalKey variables as arguments. These variables are pipeline defined so long as the box in step 3 is checked. No action is required on your part to define them.
Setup your Python script to accept the values and setup your
credentials. See the script above
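In a YAML pipeline the same setup looks roughly like this; the service connection name and script path are placeholders, and addSpnToEnvironment is the YAML equivalent of the checkbox in step 3:

- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-arm-service-connection'  # your ARM service connection
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    addSpnToEnvironment: true   # exposes $tenantId, $servicePrincipalId, $servicePrincipalKey
    inlineScript: |
      pip install -r requirements.txt
      python my_script.py $tenantId $servicePrincipalId $servicePrincipalKey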
It depends on what you call a Python script, but either way Azure DevOps has no native support for authenticating the Python SDK (or your custom Python script). However, you can pass credentials from build/release variables into your script, or try to pull them from the Azure CLI (I think it stores data somewhere under /home/.azure/).
Based on the hint given by 4c74356b41 above, and with some dissecting of the Azure CLI, I created this function that pulls an OAuth token over ADAL for the service principal logged in inside an Azure DevOps "Azure CLI" task:
import os
import json
import adal

_SERVICE_PRINCIPAL_ID = 'servicePrincipalId'
_SERVICE_PRINCIPAL_TENANT = 'servicePrincipalTenant'
_TOKEN_ENTRY_TOKEN_TYPE = 'tokenType'
_ACCESS_TOKEN = 'accessToken'

def get_config_dir():
    return os.getenv('AZURE_CONFIG_DIR', None) or os.path.expanduser(os.path.join('~', '.azure'))

def getOAuthTokenFromCLI():
    token_file = (os.environ.get('AZURE_ACCESS_TOKEN_FILE', None)
                  or os.path.join(get_config_dir(), 'accessTokens.json'))
    with open(token_file) as f:
        tokenEntry = json.load(f)[0]  # just assume first entry
        tenantID = tokenEntry[_SERVICE_PRINCIPAL_TENANT]
        appId = tokenEntry[_SERVICE_PRINCIPAL_ID]
        appPassword = tokenEntry[_ACCESS_TOKEN]
    authURL = "https://login.windows.net/" + tenantID
    resource = "https://management.azure.com/"
    context = adal.AuthenticationContext(authURL, validate_authority=tenantID, api_version=None)
    token = context.acquire_token_with_client_credentials(resource, appId, appPassword)
    return token[_TOKEN_ENTRY_TOKEN_TYPE] + " " + token[_ACCESS_TOKEN]
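A possible use of the returned value is calling the ARM REST API directly; the requests call and the subscription listing below are my example, not part of the original answer:

import requests

auth_header = getOAuthTokenFromCLI()  # "Bearer eyJ..."
resp = requests.get(
    "https://management.azure.com/subscriptions?api-version=2020-01-01",
    headers={"Authorization": auth_header},
)
resp.raise_for_status()
print(resp.json())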
I need to store API keys and other sensitive information in app.yaml as environment variables for deployment on GAE. The issue with this is that if I push app.yaml to GitHub, this information becomes public (not good). I don't want to store the info in a datastore as it does not suit the project. Rather, I'd like to swap out the values from a file that is listed in .gitignore on each deployment of the app.
Here is my app.yaml file:
application: myapp
version: 3
runtime: python27
api_version: 1
threadsafe: true

libraries:
- name: webapp2
  version: latest
- name: jinja2
  version: latest

handlers:
- url: /static
  static_dir: static
- url: /.*
  script: main.application
  login: required
  secure: always
  # auth_fail_action: unauthorized

env_variables:
  CLIENT_ID: ${CLIENT_ID}
  CLIENT_SECRET: ${CLIENT_SECRET}
  ORG: ${ORG}
  ACCESS_TOKEN: ${ACCESS_TOKEN}
  SESSION_SECRET: ${SESSION_SECRET}
Any ideas?
This solution is simple but may not suit all different teams.
First, put the environment variables in an env_variables.yaml, e.g.,
env_variables:
  SECRET: 'my_secret'
Then, include this env_variables.yaml in the app.yaml
includes:
- env_variables.yaml
Finally, add the env_variables.yaml to .gitignore, so that the secret variables won't exist in the repository.
In this case, the env_variables.yaml needs to be shared among the deployment managers.
If it's sensitive data, you should not store it in source code as it will be checked into source control. The wrong people (inside or outside your organization) may find it there. Also, your development environment probably uses different config values from your production environment. If these values are stored in code, you will have to run different code in development and production, which is messy and bad practice.
In my projects, I put config data in the datastore using this class:
from google.appengine.ext import ndb

class Settings(ndb.Model):
    name = ndb.StringProperty()
    value = ndb.StringProperty()

    @staticmethod
    def get(name):
        NOT_SET_VALUE = "NOT SET"
        retval = Settings.query(Settings.name == name).get()
        if not retval:
            retval = Settings()
            retval.name = name
            retval.value = NOT_SET_VALUE
            retval.put()
        if retval.value == NOT_SET_VALUE:
            raise Exception(('Setting %s not found in the database. A placeholder ' +
                'record has been created. Go to the Developers Console for your app ' +
                'in App Engine, look up the Settings record with name=%s and enter ' +
                'its value in that record\'s value field.') % (name, name))
        return retval.value
Your application would do this to get a value:
API_KEY = Settings.get('API_KEY')
If there is a value for that key in the datastore, you will get it. If there isn't, a placeholder record will be created and an exception will be thrown. The exception will remind you to go to the Developers Console and update the placeholder record.
I find this takes the guessing out of setting config values. If you are unsure of what config values to set, just run the code and it will tell you!
The code above uses the ndb library which uses memcache and the datastore under the hood, so it's fast.
Update:
jelder asked for how to find the Datastore values in the App Engine console and set them. Here is how:
Go to https://console.cloud.google.com/datastore/
Select your project at the top of the page if it's not already selected.
In the Kind dropdown box, select Settings.
If you ran the code above, your keys will show up. They will all have the value NOT SET. Click each one and set its value.
Hope this helps!
This didn't exist when you posted, but for anyone else who stumbles in here, Google now offers a service called Secret Manager.
It's a simple REST service (with SDKs wrapping it, of course) that stores your secrets in a secure location on Google Cloud Platform. This is a better approach than Datastore: viewing the stored secrets takes extra deliberate steps, and the permission model is finer-grained -- you can secure individual secrets differently for different aspects of your project if you need to.
It offers versioning, so you can handle password changes with relative ease, as well as a robust query and management layer enabling you to discover and create secrets at runtime, if necessary.
Python SDK
Example usage:
from google.cloud import secretmanager_v1beta1 as secretmanager
secret_id = 'my_secret_key'
project_id = 'my_project'
version = 1 # use the management tools to determine version at runtime
client = secretmanager.SecretManagerServiceClient()
secret_path = client.secret_version_path(project_id, secret_id, version)
response = client.access_secret_version(secret_path)
password_string = response.payload.data.decode('UTF-8')
# use password_string -- set up database connection, call third party service, whatever
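For completeness, the secret read above could be created with the gcloud CLI like this (names and values are placeholders):

echo -n "my-password" | gcloud secrets create my_secret_key --data-file=-

# add a new version later:
echo -n "my-new-password" | gcloud secrets versions add my_secret_key --data-file=-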
My approach is to store client secrets only within the App Engine app itself. The client secrets are neither in source control nor on any local computers. This has the benefit that any App Engine collaborator can deploy code changes without having to worry about the client secrets.
I store client secrets directly in Datastore and use Memcache for improved latency when accessing the secrets. The Datastore entities only need to be created once and will persist across future deploys. Of course, the App Engine console can be used to update these entities at any time.
There are two options to perform the one-time entity creation:
Use the App Engine Remote API interactive shell to create the entities.
Create an Admin only handler that will initialize the entities with dummy values. Manually invoke this admin handler, then use the App Engine console to update the entities with the production client secrets.
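A minimal sketch of option 2, assuming webapp2 and reusing the Settings model from the earlier answer (the handler and setting names are placeholders; restrict the route with login: admin in app.yaml):

import webapp2

class InitSettingsHandler(webapp2.RequestHandler):
    def get(self):
        # Create placeholder entities once; fill in real values later
        # via the App Engine console.
        for name in ('CLIENT_ID', 'CLIENT_SECRET', 'ACCESS_TOKEN'):
            if not Settings.query(Settings.name == name).get():
                Settings(name=name, value='dummy').put()
        self.response.write('Settings initialized with dummy values.')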
The best way to do it is to store the keys in a client_secrets.json file and exclude it from being uploaded to git by listing it in your .gitignore file. If you have different keys for different environments, you can use the app_identity API to determine what the app ID is and load the appropriate file.
There is a fairly comprehensive example here -> https://developers.google.com/api-client-library/python/guide/aaa_client_secrets.
Here's some example code:
import json

from google.appengine.api import app_identity
from oauth2client.client import flow_from_clientsecrets

# declare your app ids as globals ...
APPID_LIVE = 'awesomeapp'
APPID_DEV = 'awesomeapp-dev'
APPID_PILOT = 'awesomeapp-pilot'

# create a dictionary mapping the app_ids to the filepaths ...
client_secrets_map = {APPID_LIVE: 'client_secrets_live.json',
                      APPID_DEV: 'client_secrets_dev.json',
                      APPID_PILOT: 'client_secrets_pilot.json'}

# get the filename based on the current app_id ...
client_secrets_filename = client_secrets_map.get(
    app_identity.get_application_id(),
    client_secrets_map[APPID_DEV]  # fall back to dev
)

# use the filename to construct the flow (scope and redirect_uri
# are defined elsewhere in your app) ...
flow = flow_from_clientsecrets(filename=client_secrets_filename,
                               scope=scope,
                               redirect_uri=redirect_uri)

# or, you could load up the json file manually if you need more control ...
with open(client_secrets_filename, 'r') as f:
    client_secrets = json.loads(f.read())
This solution relies on the deprecated appcfg.py
You can use the -E command-line option of appcfg.py to set environment variables when you deploy your app to GAE (appcfg.py update):
$ appcfg.py
...
-E NAME:VALUE, --env_variable=NAME:VALUE
Set an environment variable, potentially overriding an
env_variable value from app.yaml file (flag may be
repeated to set multiple variables).
...
Most answers are outdated. Using Google Cloud Datastore is actually a bit different right now: https://cloud.google.com/python/getting-started/using-cloud-datastore
Here's an example:
from google.cloud import datastore
client = datastore.Client()
datastore_entity = client.get(client.key('settings', 'TWITTER_APP_KEY'))
connection_string_prod = datastore_entity.get('value')
This assumes the entity name is 'TWITTER_APP_KEY', the kind is 'settings', and 'value' is a property of the TWITTER_APP_KEY entity.
With GitHub Actions instead of Google Cloud triggers (Cloud triggers aren't able to find their own app.yaml and manage the environment variables by themselves), here is how to do it.

My environment:

- App Engine, standard (not flex)
- a Node.js Express application
- a PostgreSQL Cloud SQL instance
First, the setup:

1. Create a new Google Cloud project (or select an existing project).
2. Initialize your App Engine app with your project.
3. Create a Google Cloud service account or select an existing one.
4. Add the following Cloud IAM roles to your service account:
   - App Engine Admin - allows for the creation of new App Engine apps
   - Service Account User - required to deploy to App Engine as the service account
   - Storage Admin - allows upload of source code
   - Cloud Build Editor - allows building of source code
5. Download a JSON service account key for the service account.
6. Add the following secrets to your repository's secrets:
   - GCP_PROJECT: the Google Cloud project ID
   - GCP_SA_KEY: the downloaded service account key
The app.yaml:

runtime: nodejs14
env: standard

env_variables:
  SESSION_SECRET: $SESSION_SECRET

beta_settings:
  cloud_sql_instances: SQL_INSTANCE
Then the GitHub Action:

name: Build and Deploy to GKE
on: push

env:
  PROJECT_ID: ${{ secrets.GKE_PROJECT }}
  DATABASE_URL: ${{ secrets.DATABASE_URL }}

jobs:
  setup-build-publish-deploy:
    name: Setup, Build, Publish, and Deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '12'
      - run: npm install
      - uses: actions/checkout@v1
      - uses: ikuanyshbekov/app-yaml-env-compiler@v1.0
        env:
          SESSION_SECRET: ${{ secrets.SESSION_SECRET }}
      - shell: bash
        run: |
          sed -i 's/SQL_INSTANCE/'${{ secrets.DATABASE_URL }}'/g' app.yaml
      - uses: actions-hub/gcloud@master
        env:
          PROJECT_ID: ${{ secrets.GKE_PROJECT }}
          APPLICATION_CREDENTIALS: ${{ secrets.GCLOUD_AUTH }}
          CLOUDSDK_CORE_DISABLE_PROMPTS: 1
        with:
          args: app deploy app.yaml
To add secrets to GitHub Actions, go to your repository's Settings/Secrets.
Note that I could have handled all the substitution with the bash script, so I wouldn't depend on the GitHub project ikuanyshbekov/app-yaml-env-compiler@v1.0 (see the sketch below).
It's a shame that GAE doesn't offer an easier way to handle environment variables for app.yaml. I didn't want to use KMS, since I also need to update the beta_settings/Cloud SQL instance; I really needed to substitute everything in app.yaml.
This way I can make a specific action for the right environment and manage the secrets.
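For instance, a plain bash step could run envsubst (from gettext, usually available on the ubuntu-latest runners) over a template whose values use ${VAR}-style placeholders; the template filename here is an assumption:

      - shell: bash
        env:
          SESSION_SECRET: ${{ secrets.SESSION_SECRET }}
        run: |
          envsubst < app.template.yaml > app.yaml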
It sounds like you can take a few approaches. We have a similar issue and do the following (adapted to your use case):

1. Create a file that stores any dynamic app.yaml values and place it on a secure server in your build environment. If you are really paranoid, you can asymmetrically encrypt the values. You can even keep it in a private repo if you need version control/dynamic pulling, or just use a shell script to copy it/pull it from the appropriate place.
2. Pull from git during the deployment script.
3. After the git pull, modify app.yaml by reading and writing it in pure Python using a YAML library (see the sketch after this list).
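A minimal sketch of step 3, assuming PyYAML and that the secret values are already exported as environment variables (the variable names are placeholders):

import os
import yaml

with open('app.yaml') as f:
    config = yaml.safe_load(f)

config.setdefault('env_variables', {})
for key in ('CLIENT_ID', 'CLIENT_SECRET', 'SESSION_SECRET'):
    config['env_variables'][key] = os.environ[key]

with open('app.yaml', 'w') as f:
    yaml.safe_dump(config, f)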
The easiest way to do this is to use a continuous integration server such as Hudson, Bamboo, or Jenkins. Simply add a plug-in, script step, or workflow that does all the above items. You can pass in environment variables that are configured in Bamboo itself, for example.
In summary, just push in the values during your build process in an environment only you have access to. If you aren't already automating your builds, you should be.
Another option is what you said: put it in the database. If your reason for not doing that is that things are too slow, simply push the values into memcache as a second-layer cache and pin the values to the instances as a first-layer cache. If the values can change and you need to update the instances without rebooting them, just keep a hash you can check to know when they change, or trigger it somehow when something you do changes the values. That should be it.
Just wanted to note how I solved this problem in JavaScript/Node.js. For local development I used the dotenv npm package, which loads environment variables from a .env file into process.env. When I started using GAE I learned that environment variables need to be set in an app.yaml file. Well, I didn't want to use dotenv for local development and app.yaml for GAE (and duplicate my environment variables between the two files), so I wrote a little script that loads app.yaml environment variables into process.env for local development. Hope this helps someone:
yaml_env.js:
(function () {
    const yaml = require('js-yaml');
    const fs = require('fs');
    const isObject = require('lodash.isobject');

    var doc = yaml.safeLoad(
        fs.readFileSync('app.yaml', 'utf8'),
        { json: true }
    );

    // The .env file will take precedence over the settings in the app.yaml file,
    // which allows me to override stuff in app.yaml (the database connection
    // string (DATABASE_URL), for example).
    // This is optional of course. If you don't use dotenv then remove this line:
    require('dotenv/config');

    if (isObject(doc) && isObject(doc.env_variables)) {
        Object.keys(doc.env_variables).forEach(function (key) {
            // Don't set the environment variable from the yaml file if it's already set
            process.env[key] = process.env[key] || doc.env_variables[key];
        });
    }
})();
Now include this file as early as possible in your code, and you're done:
require('../yaml_env')
You should encrypt the variables with Google KMS and embed the encrypted values in your source code (https://cloud.google.com/kms/):
echo -n the-twitter-app-key | gcloud kms encrypt \
    --project my-project \
    --location us-central1 \
    --keyring THEKEYRING \
    --key THECRYPTOKEY \
    --plaintext-file - \
    --ciphertext-file - \
    | base64
Put the scrambled (encrypted and base64-encoded) value into your environment variable (in the yaml file).
Here is some Python-ish code to get you started on decrypting:
# assumes the older google-cloud-kms client this snippet was written for
import base64
import os

from google.cloud import kms_v1

kms_client = kms_v1.KeyManagementServiceClient()
name = kms_client.crypto_key_path_path("project", "global", "THEKEYRING", "THECRYPTOKEY")
twitter_app_key = kms_client.decrypt(
    name, base64.b64decode(os.environ.get("TWITTER_APP_KEY"))).plaintext
@Jason F's answer, based on using Google Datastore, is close, but the code is a bit outdated based on the sample usage in the library docs. Here's the snippet that worked for me:
from google.cloud import datastore

client = datastore.Client('<your project id>')
key = client.key('<kind e.g. settings>', '<entity name>')  # note: entity name, not property

# get by key for this entity
result = client.get(key)
print(result)  # prints all the properties (a dict); index a specific value like result['MY_SECRET_KEY']
Partly inspired by this Medium post
Extending Martin's answer
from google.appengine.ext import ndb

class Settings(ndb.Model):
    """
    Get sensitive data settings from Datastore.

    key:String -> value:String
    key:String -> Exception

    Thanks to: Martin Omander @ Stackoverflow
    https://stackoverflow.com/a/35261091/1463812
    """
    name = ndb.StringProperty()
    value = ndb.StringProperty()

    @staticmethod
    def get(name):
        retval = Settings.query(Settings.name == name).get()
        if not retval:
            raise Exception(('Setting %s not found in the database. ' +
                'Go to the Developers Console for your app in App Engine, ' +
                'create a Settings record with name=%s and enter its value ' +
                'in that record\'s value field.') % (name, name))
        return retval.value

    @staticmethod
    def set(name, value):
        exists = Settings.query(Settings.name == name).get()
        if not exists:
            s = Settings(name=name, value=value)
            s.put()
        else:
            exists.value = value
            exists.put()
        return True
There is a PyPI package called gae_env that allows you to save App Engine environment variables in Cloud Datastore. Under the hood, it also uses Memcache, so it's fast.
Usage:
import gae_env
API_KEY = gae_env.get('API_KEY')
If there is a value for that key in the datastore, it will be returned.
If there isn't, a placeholder record __NOT_SET__ will be created and a ValueNotSetError will be thrown. The exception will remind you to go to the Developers Console and update the placeholder record.
Similar to Martin's answer, here is how to update the value for the key in Datastore:
Go to Datastore Section in the developers console
Select your project at the top of the page if it's not already selected.
In the Kind dropdown box, select GaeEnvSettings.
Keys for which an exception was raised will have value __NOT_SET__.
Go to the package's GitHub page for more info on usage/configuration
My solution is to replace the secrets in the app.yaml file via github action and github secrets.
app.yaml (App Engine)
env_variables:
  SECRET_ONE: $SECRET_ONE
  ANOTHER_SECRET: $ANOTHER_SECRET
workflow.yaml (GitHub)

steps:
  - uses: actions/checkout@v2
  - uses: 73h/gae-app-yaml-replace-env-variables@v0.1
    env:
      SECRET_ONE: ${{ secrets.SECRET_ONE }}
      ANOTHER_SECRET: ${{ secrets.ANOTHER_SECRET }}
Here you can find the Github action.
https://github.com/73h/gae-app-yaml-replace-env-variables
When developing locally, I write the secrets to an .env file.
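If your app is Python, loading that .env locally could look like this (assuming the python-dotenv package; for Node.js, the dotenv npm package mentioned in an earlier answer does the same):

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into os.environ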