Create an Istio Virtual Service with the K8s Python API?

I'm building a Kubernetes application that Dockerizes code and runs it on the cluster. In order for users to be able to invoke their Dockerized code, I need to modify the Istio configuration to expose the service they've created.
I'm trying to create Istio virtual services using the Python API. I'm able to list existing Istio resources:
from kubernetes import client, config

group = 'networking.istio.io'
version = 'v1alpha3'
plural = 'destinationrules'

config.load_kube_config()
myclient = client.CustomObjectsApi()
api_response = myclient.list_cluster_custom_object(group, version, plural)
but when I use the same parameters to create, I get a 404 not found error.
import yaml

with open('destination-rule.yaml', 'r') as file_reader:
    file_content = file_reader.read()
deployment_template = yaml.safe_load(file_content)
api_response = myclient.create_cluster_custom_object(
    group=group, version=version, plural=plural, body=deployment_template)
The destination-rule.yaml file looks like:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: test
spec:
  host: test
  subsets:
  - name: v1
    labels:
      run: test
What am I doing wrong here?

My problem was that I was calling create_cluster_custom_object instead of create_namespaced_custom_object. Once I switched over, it started working.
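For reference, a minimal sketch of the namespaced call, assuming the DestinationRule should land in the default namespace and reusing the destination-rule.yaml from the question:
import yaml
from kubernetes import client, config

config.load_kube_config()
custom_api = client.CustomObjectsApi()

with open('destination-rule.yaml', 'r') as f:
    body = yaml.safe_load(f)

# DestinationRule (and VirtualService) are namespaced Istio resources,
# so the namespaced variant of the custom objects API is needed.
api_response = custom_api.create_namespaced_custom_object(
    group='networking.istio.io',
    version='v1alpha3',
    plural='destinationrules',
    namespace='default',
    body=body,
)
The same call works for a VirtualService with plural='virtualservices' and a matching manifest.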

Related

Azure AKS/Container App can't access Key vault using managed identity

I have a Dockerized Python app deployed on a Kubernetes cluster on Azure (I also tried an Azure Container App). I'm trying to connect this app to Azure Key Vault to fetch some secrets. I created a managed identity and assigned it to both, but the Python app always fails to find the managed identity and never even attempts to connect to the Key Vault.
The managed identity role assignments:
- Key Vault Contributor -> on the key vault
- Managed Identity Operator -> on the managed identity
- Azure Kubernetes Service Contributor Role, Azure Kubernetes Service Cluster User Role, Managed Identity Operator -> on the resource group that includes the cluster
Also, in the key vault's access policies I added the managed identity and gave it all key, secret, and certificate permissions (for now).
Python code:
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

credential = ManagedIdentityCredential()
vault_client = SecretClient(vault_url=key_vault_uri, credential=credential)
retrieved_secret = vault_client.get_secret(secret_name)
I keep getting the error:
azure.core.exceptions.ClientAuthenticationError: Unexpected content type "text/plain; charset=utf-8"
Content: no azure identity found for request clientID
So at some point I attempted to add the managed identity clientID in the cluster secrets and load it from there and still got the same error:
Python code:
import base64
import json

from kubernetes import client as kube_client, config as kube_config

def get_kube_secret(self, secret_name):
    kube_config.load_incluster_config()
    v1_secrets = kube_client.CoreV1Api()
    string_secret = str(v1_secrets.read_namespaced_secret(secret_name, "redacted_namespace_name").data).replace("'", "\"")
    json_secret = json.loads(string_secret)
    return json_secret

def decode_base64_string(self, encoded_string):
    decoded_secret = base64.b64decode(encoded_string.strip())
    decoded_secret = decoded_secret.decode('UTF-8')
    return decoded_secret

managed_identity_client_id_secret = self.get_kube_secret('managed-identity-credential')['clientId']
managed_identity_client_id = self.decode_base64_string(managed_identity_client_id_secret)
Update:
I also attempted to use the secret store CSI driver, but I have a feeling I'm missing a step there. Should the python code be updated to be able to use the secret store CSI driver?
# This is a SecretProviderClass using user-assigned identity to access the key vault
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kvname-user-msi
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"          # Set to true for using managed identity
    userAssignedIdentityID: "$CLIENT_ID"  # Set the clientID of the user-assigned managed identity to use
    vmmanagedidentityclientid: "$CLIENT_ID"
    keyvaultName: "$KEYVAULT_NAME"        # Set to the name of your key vault
    cloudName: ""                         # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud
    objects: ""
    tenantId: "$AZURE_TENANT_ID"
Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: redacted_namespace
  labels:
    app: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: redacted_image
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        imagePullPolicy: Always
        resources:
          # You must specify requests for CPU to autoscale
          # based on CPU utilization
          requests:
            cpu: "250m"
        env:
        - name: test-secrets
          valueFrom:
            secretKeyRef:
              name: test-secrets
              key: test-secrets
        volumeMounts:
        - name: test-secrets
          mountPath: "/mnt/secrets-store"
          readOnly: true
      volumes:
      - name: test-secrets
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "azure-kvname-user-msi"
      dnsPolicy: ClusterFirst
Update 16/01/2023
I followed the steps in the answers and the linked docs to the letter, and even contacted Azure support and walked through it step by step with them on the phone, and the result is still the following error:
"failed to process mount request" err="failed to get objectType:secret, objectName:MongoUsername, objectVersion:: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://<RedactedVaultName>.vault.azure.net/secrets/<RedactedSecretName>/?api-version=2016-10-01: StatusCode=400 -- Original Error: adal: Refresh request failed. Status Code = '400'. Response body: {\"error\":\"invalid_request\",\"error_description\":\"Identity not found\"} Endpoint http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&client_id=<RedactedClientId>&resource=https%3A%2F%2Fvault.azure.net"
Using the Secrets Store CSI Driver, you can configure the SecretProviderClass to use a workload identity by setting the clientID in the SecretProviderClass. You'll need to use the client ID of your user-assigned managed identity and change the usePodIdentity and useVMManagedIdentity settings to false.
With this approach, you don't need to add any additional code in your app to retrieve the secrets. Instead, you can mount a secrets store (using the CSI driver) as a volume mount in your pod and have the secrets loaded as environment variables, which is documented here.
This doc will walk you through setting it up on Azure, but at a high level here is what you need to do:
1. Register the EnableWorkloadIdentityPreview feature using the Azure CLI.
2. Create an AKS cluster using the Azure CLI with the azure-keyvault-secrets-provider add-on enabled and the --enable-oidc-issuer and --enable-workload-identity flags set.
3. Create an Azure Key Vault and set your secrets.
4. Create an Azure user-assigned managed identity and set an access policy on the key vault for the managed identity's client ID.
5. Connect to the AKS cluster and create a Kubernetes ServiceAccount with the annotations and labels that enable it for Azure workload identity.
6. Create an Azure identity federated credential for the managed identity using the AKS cluster's OIDC issuer URL and the Kubernetes ServiceAccount as the subject.
7. Create a Kubernetes SecretProviderClass using clientID to use workload identity, adding a secretObjects block to enable syncing objects as environment variables using the Kubernetes secret store.
8. Create a Kubernetes Deployment with a label to use workload identity, serviceAccountName set to the service account you created above, a volume using CSI and the secret provider class you created above, a volumeMount, and finally environment variables in your container using the valueFrom and secretKeyRef syntax to mount from your secret object store (a small sketch of consuming such a variable in Python follows below).
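To illustrate that last step, here's a minimal sketch of how the container code would read such a synced secret; MONGO_USERNAME is a hypothetical variable name, not something from the question:
import os

# The CSI driver (with a secretObjects block) syncs the Key Vault secret into a
# Kubernetes Secret; the Deployment exposes it via valueFrom/secretKeyRef, so the
# app just reads an ordinary environment variable.
mongo_username = os.environ["MONGO_USERNAME"]  # hypothetical name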
Hope that helps.
What you are referring to is called pod identity (recently deprecated for workload identity).
If the cluster is configured with managed identity, you can use workload identity.
However, for AKS I suggest configuring the Secrets Store CSI driver to fetch secrets from Key Vault and expose them as Kubernetes secrets. To use managed identity for the secret provider, refer to this doc.
Then you can configure your pods to read those secrets.
I finally figured it out. I contacted Microsoft support and it seems AKS Preview is a bit buggy, "go figure". They recommended reverting back to a stable version of the CLI and using a user-assigned identity.
I did just that, but this time, instead of creating my own identity and assigning it to both the vault and the cluster (which seems to confuse it), I used the identity the cluster automatically generates for the nodes.
Maybe not the neatest solution, but it's the only one that worked for me without any issues.
Finally, some notes missing from the Azure docs:
Since the CSI driver mounts the secrets as files in the target folder, you still need to read those files yourself to load them as env variables.
For example in python:
import os

def load_secrets():
    directory = '/path/to/mounted/secrets/folder'
    if not os.path.isdir(directory):
        return
    for filename in os.listdir(directory):
        file_path = os.path.join(directory, filename)
        # Each mounted secret is a regular file named after the secret
        if os.path.isfile(file_path):
            with open(file_path, 'r') as file:
                file_value = file.read()
                os.environ.setdefault(filename, file_value)

ConfigException while sending a job signal to Kubernetes inside Django to activate a pod

I have written a C++ program and set it up with docker/kubernetes hosted on Google Cloud using Github actions.
I have 3 active pods within my cluster and my c++ program basically takes a json as the input from a Django application and produces an output.
My current goal is to trigger a pod from django.
Right now I have written some code using the official Kubernetes Python client package, but I'm getting an error.
Here is what I have developed up until now:
import logging
import sys

from kubernetes import client, config, utils
import kubernetes.client
from kubernetes.client.rest import ApiException

# Set logging
logging.basicConfig(stream=sys.stdout, level=logging.INFO)

# Setup K8 configs
# config.load_incluster_config("./kube-config.yaml")
config.load_kube_config("./kube-config.yaml")
configuration = kubernetes.client.Configuration()
api_instance = kubernetes.client.BatchV1Api(kubernetes.client.ApiClient(configuration))
I don't know much about the kube-config.yaml file, so I have borrowed one from the internet:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: test
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: test
            image: test:v5
            env:
            imagePullPolicy: IfNotPresent
            command: ['./CppProgram']
            args: ['project.json']
          restartPolicy: OnFailure
The yaml file and the python file are in the same directory.
But when I call this via a view I get this error on the console:
kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found.
Is my load_kube_config call approach wrong, or is my yaml file wrong? If so, is there an example I can look into?
I saw this question asked before; according to that, I think I should have used load_kube_config (I've already deployed to Google Kubernetes Engine and the pods should be ready), but I'm not sure.
You need to use the load_incluster_config() function instead of load_kube_config().
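Since the Django app runs inside the cluster, a minimal sketch of triggering a Job from a view might look like this; job.yaml is a hypothetical Job manifest (apiVersion: batch/v1, kind: Job), not the CronJob file from the question:
import yaml
from kubernetes import client, config

# Inside the cluster, the client picks up the pod's service account token and
# the API server address automatically -- no kubeconfig file is needed.
config.load_incluster_config()

batch_api = client.BatchV1Api()

with open('job.yaml') as f:
    job_body = yaml.safe_load(f)

batch_api.create_namespaced_job(namespace='default', body=job_body)
The pod's service account also needs RBAC permission to create Jobs in that namespace.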

How to deploy a Knative service with Kubernetes python client library

We are trying to deploy a service using Knative with the Python client library of Kubernetes. We are using the following yaml file:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: test-{{ test_id }}
  namespace: default
spec:
  template:
    spec:
      containers:
      - image: test-deployment:latest
        resources:
          limits:
            cpu: 50m
            memory: 128Mi
          requests:
            cpu: 50m
            memory: 128Mi
      containerConcurrency: 1
If we deploy using the command line tool of kubernetes, it works fine.
kubectl create -f test.yaml
With the python client library, we are doing:
import kubernetes
import yaml
import uuid
from jinja2 import Template
from urllib3 import exceptions as urllib_exceptions
api = kubernetes.client.CoreV1Api(api_client=kubernetes.config.load_kube_config(context=cluster))

with open(deployment_yaml_path, 'r') as file_reader:
    file_content = file_reader.read()

deployment_template = Template(file_content)
deployment_template = yaml.safe_load(deployment_template.render({
    'test_id': str(uuid.uuid4())
}))

deployment = kubernetes.client.V1Service(
    api_version=deployment_template['apiVersion'],
    kind="Service",
    metadata=deployment_template['metadata'],
    spec=deployment_template['spec']
)

try:
    response = api.create_namespaced_service(body=deployment, namespace='default')
except (kubernetes.client.rest.ApiException, urllib_exceptions.HTTPError):
    raise TestError
However, we are getting this error:
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'a1968276-e16b-44f4-a40d-5eb5eaee9d47', 'Content-Type': 'application/json', 'Date': 'Thu, 23 Apr 2020 08:29:36 GMT', 'Content-Length': '347'})
HTTP response body: {
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "Service in version \"v1\" cannot be handled as a Service: no kind \"Service\" is registered for version \"serving.knative.dev/v1\" in scheme \"k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30\"",
"reason": "BadRequest",
"code": 400
}
Is there a way to deploy a service with Knative? As far as I understand, a Knative service is different from a normal Kubernetes service. I don't know whether the problem is that I'm deploying the service in the wrong way, or whether the Kubernetes Python client library doesn't support this kind of deployment yet.
Edit:
Python Client Library: kubernetes==11.0.0
Kubernetes:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.4", GitCommit:"67d2fcf276fcd9cf743ad4be9a9ef5828adc082f", GitTreeState:"clean", BuildDate:"2019-09-18T14:51:13Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.11-gke.5", GitCommit:"a5bf731ea129336a3cf32c3375317b3a626919d7", GitTreeState:"clean", BuildDate:"2020-03-31T02:49:49Z", GoVersion:"go1.12.17b4", Compiler:"gc", Platform:"linux/amd64"}
kubernetes.client.V1Service is a reference to the Kubernetes "Service" concept, which is a selector across pods that appears as a network endpoint, rather than the Knative "Service" concept, which is the entire application which provides functionality over the network.
Based on this example from the kubernetes-client/python repo, you need to do something like this to get and use a client for Knative services:
api = kubernetes.client.CustomObjectsApi()

try:
    resource = api.create_namespaced_custom_object(
        group="serving.knative.dev",
        version="v1",
        plural="services",
        namespace="default",
        body=deployment_template)
except (kubernetes.client.rest.ApiException, urllib_exceptions.HTTPError):
    raise TestError
If you're going to be doing this a lot, you might want to make a helper that takes arguments similar to create_namespaced_service, and possibly also a wrapper object similar to kubernetes.client.V1Service to simplify creating Knative Services.
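For example, a small wrapper along these lines (a sketch, not an established API) keeps call sites close to the create_namespaced_service style:
def create_knative_service(api, body, namespace="default"):
    # Thin helper mirroring create_namespaced_service for Knative Services.
    return api.create_namespaced_custom_object(
        group="serving.knative.dev",
        version="v1",
        plural="services",
        namespace=namespace,
        body=body,
    )

# Usage:
# api = kubernetes.client.CustomObjectsApi()
# create_knative_service(api, deployment_template)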
Try using create_namespaced_custom_object
Refer: https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CustomObjectsApi.md#create_namespaced_custom_object
Here service is a custom resource specific to Knative.

Python or Node access to GKE kubectl

I manage a couple of clusters at GKE (presently; the number will increase) and up till now have been fine launching things manually as needed. I've started working on my own API that can take in requests to spin up new resources on demand for a specific cluster, but in order to make it scalable I need to do something more dynamic than switching between clusters with each request. I have found a link for a Google API Python client that supposedly can access GKE:
https://developers.google.com/api-client-library/python/apis/container/v1#system-requirements
I've also found several other clients (specifically one I was looking closely at was the nodejs client from godaddy) that can access Kubernetes:
https://github.com/godaddy/kubernetes-client
The Google API Client doesn't appear to be documented for use with GKE/kubectl commands, and the godaddy kubernetes-client has to access a single cluster master but can't reach one at GKE (without a kubectl proxy enabled first). So my question is: how does one manage Kubernetes on GKE programmatically, without the command-line utilities, in either Node.js or Python?
I know this question is a couple of years old, but hopefully this helps someone. Newer GKE APIs are available for Node.js here: https://cloud.google.com/nodejs/docs/reference/container/0.3.x/
See list of container APIs here: https://developers.google.com/apis-explorer/#p/container/v1/
Once connected via the API, you can access cluster details, which includes the connectivity information for connecting to the master with standard API calls.
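As a rough illustration of that flow, here's a sketch in Python, assuming the google-cloud-container, google-auth, and kubernetes packages and application default credentials with access to the cluster; the project, location, and cluster names are placeholders:
import base64
from tempfile import NamedTemporaryFile

import google.auth
import google.auth.transport.requests
from google.cloud import container_v1
from kubernetes import client

PROJECT_ID, LOCATION, CLUSTER = "my-project", "us-central1-a", "my-cluster"  # placeholders

# Fetch the cluster's endpoint and CA certificate from the GKE API.
gke = container_v1.ClusterManagerClient()
cluster = gke.get_cluster(
    name=f"projects/{PROJECT_ID}/locations/{LOCATION}/clusters/{CLUSTER}")

# Use the Google credentials' bearer token to talk to the Kubernetes API server.
creds, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"])
creds.refresh(google.auth.transport.requests.Request())

with NamedTemporaryFile(delete=False, suffix=".crt") as ca_file:
    ca_file.write(base64.b64decode(cluster.master_auth.cluster_ca_certificate))

configuration = client.Configuration()
configuration.host = f"https://{cluster.endpoint}"
configuration.ssl_ca_cert = ca_file.name
configuration.api_key = {"authorization": "Bearer " + creds.token}
client.Configuration.set_default(configuration)

# From here, the standard Kubernetes client calls work against the GKE master.
for pod in client.CoreV1Api().list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name)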
I just posted an article on Medium with an example of how to do this
The first part of the article outlines how to setup the service account, roles and credentials and load them as Environmental variables. Once done, you could then run the following python:
from kubernetes import client
import base64
from tempfile import NamedTemporaryFile
import os
import yaml
from os import path

def main():
    try:
        host_url = os.environ["HOST_URL"]
        cacert = os.environ["CACERT"]
        token = os.environ["TOKEN"]

        # Set the configuration
        configuration = client.Configuration()
        with NamedTemporaryFile(delete=False) as cert:
            cert.write(base64.b64decode(cacert))
            configuration.ssl_ca_cert = cert.name
        configuration.host = host_url
        configuration.verify_ssl = True
        configuration.debug = False
        configuration.api_key = {"authorization": "Bearer " + token}
        client.Configuration.set_default(configuration)

        # Prepare all the required properties in order to run the create_namespaced_job method
        # https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/BatchV1Api.md#create_namespaced_job
        v1 = client.BatchV1Api()
        with open(path.join(path.dirname(__file__), "job.yaml")) as f:
            body = yaml.safe_load(f)
            v1.create_namespaced_job(namespace="default", body=body, pretty=True)
        return f'Job created successfully', 200
    except Exception as e:
        return str(e), 500

if __name__ == '__main__':
    main()

Securely storing environment variables in GAE with app.yaml

I need to store API keys and other sensitive information in app.yaml as environment variables for deployment on GAE. The issue with this is that if I push app.yaml to GitHub, this information becomes public (not good). I don't want to store the info in a datastore as it does not suit the project. Rather, I'd like to swap out the values from a file that is listed in .gitignore on each deployment of the app.
Here is my app.yaml file:
application: myapp
version: 3
runtime: python27
api_version: 1
threadsafe: true

libraries:
- name: webapp2
  version: latest
- name: jinja2
  version: latest

handlers:
- url: /static
  static_dir: static
- url: /.*
  script: main.application
  login: required
  secure: always
  # auth_fail_action: unauthorized

env_variables:
  CLIENT_ID: ${CLIENT_ID}
  CLIENT_SECRET: ${CLIENT_SECRET}
  ORG: ${ORG}
  ACCESS_TOKEN: ${ACCESS_TOKEN}
  SESSION_SECRET: ${SESSION_SECRET}
Any ideas?
This solution is simple but may not suit all different teams.
First, put the environment variables in an env_variables.yaml, e.g.,
env_variables:
  SECRET: 'my_secret'
Then, include this env_variables.yaml in the app.yaml
includes:
- env_variables.yaml
Finally, add the env_variables.yaml to .gitignore, so that the secret variables won't exist in the repository.
In this case, the env_variables.yaml needs to be shared among the deployment managers.
If it's sensitive data, you should not store it in source code as it will be checked into source control. The wrong people (inside or outside your organization) may find it there. Also, your development environment probably uses different config values from your production environment. If these values are stored in code, you will have to run different code in development and production, which is messy and bad practice.
In my projects, I put config data in the datastore using this class:
from google.appengine.ext import ndb

class Settings(ndb.Model):
    name = ndb.StringProperty()
    value = ndb.StringProperty()

    @staticmethod
    def get(name):
        NOT_SET_VALUE = "NOT SET"
        retval = Settings.query(Settings.name == name).get()
        if not retval:
            retval = Settings()
            retval.name = name
            retval.value = NOT_SET_VALUE
            retval.put()
        if retval.value == NOT_SET_VALUE:
            raise Exception(('Setting %s not found in the database. A placeholder ' +
                'record has been created. Go to the Developers Console for your app ' +
                'in App Engine, look up the Settings record with name=%s and enter ' +
                'its value in that record\'s value field.') % (name, name))
        return retval.value
Your application would do this to get a value:
API_KEY = Settings.get('API_KEY')
If there is a value for that key in the datastore, you will get it. If there isn't, a placeholder record will be created and an exception will be thrown. The exception will remind you to go to the Developers Console and update the placeholder record.
I find this takes the guessing out of setting config values. If you are unsure of what config values to set, just run the code and it will tell you!
The code above uses the ndb library which uses memcache and the datastore under the hood, so it's fast.
Update:
jelder asked how to find these Datastore values in the App Engine console and set them. Here is how:
Go to https://console.cloud.google.com/datastore/
Select your project at the top of the page if it's not already selected.
In the Kind dropdown box, select Settings.
If you ran the code above, your keys will show up. They will all have the value NOT SET. Click each one and set its value.
Hope this helps!
This didn't exist when you posted, but for anyone else who stumbles in here, Google now offers a service called Secret Manager.
It's a simple REST service (with SDKs wrapping it, of course) to store your secrets in a secure location on Google Cloud Platform. This is a better approach than Datastore: it takes extra steps to see the stored secrets, and it has a finer-grained permission model, so you can secure individual secrets differently for different aspects of your project if you need to.
It offers versioning, so you can handle password changes with relative ease, as well as a robust query and management layer enabling you to discover and create secrets at runtime, if necessary.
Python SDK
Example usage:
from google.cloud import secretmanager_v1beta1 as secretmanager
secret_id = 'my_secret_key'
project_id = 'my_project'
version = 1 # use the management tools to determine version at runtime
client = secretmanager.SecretManagerServiceClient()
secret_path = client.secret_version_path(project_id, secret_id, version)
response = client.access_secret_version(secret_path)
password_string = response.payload.data.decode('UTF-8')
# use password_string -- set up database connection, call third party service, whatever
My approach is to store client secrets only within the App Engine app itself. The client secrets are neither in source control nor on any local computers. This has the benefit that any App Engine collaborator can deploy code changes without having to worry about the client secrets.
I store client secrets directly in Datastore and use Memcache for improved latency when accessing the secrets. The Datastore entities only need to be created once and will persist across future deploys. Of course, the App Engine console can be used to update these entities at any time.
There are two options to perform the one-time entity creation:
1. Use the App Engine Remote API interactive shell to create the entities.
2. Create an admin-only handler that will initialize the entities with dummy values. Manually invoke this admin handler, then use the App Engine console to update the entities with the production client secrets (a rough sketch of such a handler follows below).
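A rough, hypothetical sketch of what that admin-only handler could look like, assuming a webapp2 app like the one in the question and an ndb Settings model like the one shown earlier in this thread:
from google.appengine.ext import ndb
import webapp2

class Settings(ndb.Model):
    # Same shape as the Settings model shown earlier in this thread.
    name = ndb.StringProperty()
    value = ndb.StringProperty()

class InitSecretsHandler(webapp2.RequestHandler):
    # Admin-only handler: guard the route with login: admin in app.yaml.
    def get(self):
        for name in ('CLIENT_ID', 'CLIENT_SECRET', 'ACCESS_TOKEN'):
            if not Settings.query(Settings.name == name).get():
                Settings(name=name, value='TODO: set me in the console').put()
        self.response.write('Placeholder settings created.')

app = webapp2.WSGIApplication([('/admin/init-secrets', InitSecretsHandler)])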
The best way to do it is to store the keys in a client_secrets.json file and exclude it from being uploaded to git by listing it in your .gitignore file. If you have different keys for different environments, you can use the app_identity API to determine the app ID and load the appropriate file.
There is a fairly comprehensive example here -> https://developers.google.com/api-client-library/python/guide/aaa_client_secrets.
Here's some example code:
import json

from google.appengine.api import app_identity
from oauth2client.client import flow_from_clientsecrets

# declare your app ids as globals ...
APPID_LIVE = 'awesomeapp'
APPID_DEV = 'awesomeapp-dev'
APPID_PILOT = 'awesomeapp-pilot'

# create a dictionary mapping the app_ids to the filepaths ...
client_secrets_map = {APPID_LIVE: 'client_secrets_live.json',
                      APPID_DEV: 'client_secrets_dev.json',
                      APPID_PILOT: 'client_secrets_pilot.json'}

# get the filename based on the current app_id ...
client_secrets_filename = client_secrets_map.get(
    app_identity.get_application_id(),
    APPID_DEV  # fall back to dev
)

# use the filename to construct the flow (scope and redirect_uri defined elsewhere) ...
flow = flow_from_clientsecrets(filename=client_secrets_filename,
                               scope=scope,
                               redirect_uri=redirect_uri)

# or, you could load up the json file manually if you need more control ...
f = open(client_secrets_filename, 'r')
client_secrets = json.loads(f.read())
f.close()
This solution relies on the deprecated appcfg.py
You can use the -E command line option of appcfg.py to setup the environment variables when you deploy your app to GAE (appcfg.py update)
$ appcfg.py
...
-E NAME:VALUE, --env_variable=NAME:VALUE
Set an environment variable, potentially overriding an
env_variable value from app.yaml file (flag may be
repeated to set multiple variables).
...
Most answers are outdated. Using Google Cloud Datastore is actually a bit different now. https://cloud.google.com/python/getting-started/using-cloud-datastore
Here's an example:
from google.cloud import datastore
client = datastore.Client()
datastore_entity = client.get(client.key('settings', 'TWITTER_APP_KEY'))
connection_string_prod = datastore_entity.get('value')
This assumes the entity name is 'TWITTER_APP_KEY', the kind is 'settings', and 'value' is a property of the TWITTER_APP_KEY entity.
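If the entity doesn't exist yet, a one-off sketch like this (assuming the same kind and key names as above) would create it with the google-cloud-datastore client:
from google.cloud import datastore

client = datastore.Client()

# Create (or overwrite) the 'TWITTER_APP_KEY' entity of kind 'settings'
# with a 'value' property holding the secret.
entity = datastore.Entity(key=client.key('settings', 'TWITTER_APP_KEY'))
entity['value'] = 'the-twitter-app-key'  # placeholder value
client.put(entity)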
With GitHub Actions instead of Google Cloud triggers (Google Cloud triggers aren't able to find their own app.yaml and manage the freaking environment variables by themselves).
Here is how to do it:
My environment:
- App Engine, standard (not flex)
- Node.js Express application
- a PostgreSQL Cloud SQL instance
First, the setup:
1. Create a new Google Cloud project (or select an existing project).
2. Initialize your App Engine app with your project.
3. Create a Google Cloud service account or select an existing one.
4. Add the following Cloud IAM roles to your service account:
   - App Engine Admin - allows for the creation of new App Engine apps
   - Service Account User - required to deploy to App Engine as the service account
   - Storage Admin - allows upload of source code
   - Cloud Build Editor - allows building of source code
5. Download a JSON service account key for the service account.
6. Add the following secrets to your repository's secrets:
   - GCP_PROJECT: the Google Cloud project ID
   - GCP_SA_KEY: the downloaded service account key
The app.yaml
runtime: nodejs14
env: standard
env_variables:
  SESSION_SECRET: $SESSION_SECRET
beta_settings:
  cloud_sql_instances: SQL_INSTANCE
Then the github action
name: Build and Deploy to GKE
on: push

env:
  PROJECT_ID: ${{ secrets.GKE_PROJECT }}
  DATABASE_URL: ${{ secrets.DATABASE_URL }}

jobs:
  setup-build-publish-deploy:
    name: Setup, Build, Publish, and Deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '12'
      - run: npm install
      - uses: actions/checkout@v1
      - uses: ikuanyshbekov/app-yaml-env-compiler@v1.0
        env:
          SESSION_SECRET: ${{ secrets.SESSION_SECRET }}
      - shell: bash
        run: |
          sed -i 's/SQL_INSTANCE/'${{ secrets.DATABASE_URL }}'/g' app.yaml
      - uses: actions-hub/gcloud@master
        env:
          PROJECT_ID: ${{ secrets.GKE_PROJECT }}
          APPLICATION_CREDENTIALS: ${{ secrets.GCLOUD_AUTH }}
          CLOUDSDK_CORE_DISABLE_PROMPTS: 1
        with:
          args: app deploy app.yaml
To add secrets for the GitHub action, go to Settings/Secrets in your repository.
Take note that I could handle all the substitution with the bash script, so I would not have to depend on the GitHub project "ikuanyshbekov/app-yaml-env-compiler@v1.0".
It's a shame that GAE doesn't offer an easier way to handle environment variables for app.yaml. I don't want to use KMS since I need to update the beta_settings/Cloud SQL instance; I really needed to substitute everything in app.yaml.
This way I can make a specific action for the right environment and manage the secrets.
It sounds like there are a few approaches you can take. We have a similar issue and do the following (adapted to your use-case):
1. Create a file that stores any dynamic app.yaml values and place it on a secure server in your build environment. If you are really paranoid, you can asymmetrically encrypt the values. You can even keep this in a private repo if you need version control/dynamic pulling, or just use a shell script to copy it/pull it from the appropriate place.
2. Pull from git during the deployment script.
3. After the git pull, modify the app.yaml by reading and writing it in pure Python using a yaml library.
The easiest way to do this is to use a continuous integration server such as Hudson, Bamboo, or Jenkins. Simply add some plug-in, script step, or workflow that does all the above items I mentioned. You can pass in environment variables that are configured in Bamboo itself for example.
In summary, just push in the values during your build process in an environment you only have access to. If you aren't already automating your builds, you should be.
Another option is what you said: put it in the database. If your reason for not doing that is that things are too slow, simply push the values into memcache as a second-layer cache, and pin the values to the instances as a first-layer cache (a sketch of this follows below). If the values can change and you need to update the instances without rebooting them, just keep a hash you can check to know when they change, or trigger an update somehow when something you do changes the values. That should be it.
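A minimal sketch of that two-layer lookup: an instance-level dict in front of memcache, falling back to a hypothetical Datastore Settings model like the one shown earlier in this thread:
from google.appengine.api import memcache
from google.appengine.ext import ndb

class Settings(ndb.Model):
    # Hypothetical config model, same shape as earlier answers in this thread.
    name = ndb.StringProperty()
    value = ndb.StringProperty()

_instance_cache = {}  # first-layer cache, pinned to this instance

def get_config(name):
    # 1) instance memory
    if name in _instance_cache:
        return _instance_cache[name]
    # 2) memcache, shared across instances
    value = memcache.get('config:' + name)
    if value is None:
        # 3) datastore, the source of truth
        entity = Settings.query(Settings.name == name).get()
        value = entity.value if entity else None
        memcache.set('config:' + name, value)
    _instance_cache[name] = value
    return value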
Just wanted to note how I solved this problem in javascript/nodejs. For local development I used the 'dotenv' npm package, which loads environment variables from a .env file into process.env. When I started using GAE I learned that environment variables need to be set in an 'app.yaml' file. Well, I didn't want to use 'dotenv' for local development and 'app.yaml' for GAE (and duplicate my environment variables between the two files), so I wrote a little script that loads the app.yaml environment variables into process.env for local development. Hope this helps someone:
yaml_env.js:
(function () {
  const yaml = require('js-yaml');
  const fs = require('fs');
  const isObject = require('lodash.isobject');

  var doc = yaml.safeLoad(
    fs.readFileSync('app.yaml', 'utf8'),
    { json: true }
  );

  // The .env file will take precedence over the settings in the app.yaml file,
  // which allows me to override stuff in app.yaml (the database connection string (DATABASE_URL), for example).
  // This is optional of course. If you don't use dotenv then remove this line:
  require('dotenv/config');

  if (isObject(doc) && isObject(doc.env_variables)) {
    Object.keys(doc.env_variables).forEach(function (key) {
      // Don't set the environment variable with the yaml file value if it's already set
      process.env[key] = process.env[key] || doc.env_variables[key];
    });
  }
})();
Now include this file as early as possible in your code, and you're done:
require('../yaml_env')
You should encrypt the variables with Google KMS and embed the encrypted values in your source code. (https://cloud.google.com/kms/)
echo -n the-twitter-app-key | gcloud kms encrypt \
> --project my-project \
> --location us-central1 \
> --keyring THEKEYRING \
> --key THECRYPTOKEY \
> --plaintext-file - \
> --ciphertext-file - \
> | base64
Put the scrambled (encrypted and base64-encoded) value into your environment variable (in the yaml file).
Some pythonish code to get you started on decrypting.
import base64
import os

from google.cloud import kms_v1

kms_client = kms_v1.KeyManagementServiceClient()
name = kms_client.crypto_key_path_path("project", "global", "THEKEYRING", "THECRYPTOKEY")
twitter_app_key = kms_client.decrypt(name, base64.b64decode(os.environ.get("TWITTER_APP_KEY"))).plaintext
@Jason F's answer based on using Google Datastore is close, but the code is a bit outdated based on the sample usage in the library docs. Here's the snippet that worked for me:
from google.cloud import datastore
client = datastore.Client('<your project id>')
key = client.key('<kind e.g settings>', '<entity name>') # note: entity name not property
# get by key for this entity
result = client.get(key)
print(result) # prints all the properties ( a dict). index a specific value like result['MY_SECRET_KEY'])
Partly inspired by this Medium post
Extending Martin's answer
from google.appengine.ext import ndb

class Settings(ndb.Model):
    """
    Get sensitive data settings from Datastore.
    key:String -> value:String
    key:String -> Exception

    Thanks to: Martin Omander @ Stackoverflow
    https://stackoverflow.com/a/35261091/1463812
    """
    name = ndb.StringProperty()
    value = ndb.StringProperty()

    @staticmethod
    def get(name):
        retval = Settings.query(Settings.name == name).get()
        if not retval:
            raise Exception(('Setting %s not found in the database. A placeholder ' +
                'record has been created. Go to the Developers Console for your app ' +
                'in App Engine, look up the Settings record with name=%s and enter ' +
                'its value in that record\'s value field.') % (name, name))
        return retval.value

    @staticmethod
    def set(name, value):
        exists = Settings.query(Settings.name == name).get()
        if not exists:
            s = Settings(name=name, value=value)
            s.put()
        else:
            exists.value = value
            exists.put()
        return True
There is a PyPI package called gae_env that allows you to save App Engine environment variables in Cloud Datastore. Under the hood, it also uses Memcache, so it's fast.
Usage:
import gae_env
API_KEY = gae_env.get('API_KEY')
If there is a value for that key in the datastore, it will be returned.
If there isn't, a placeholder record __NOT_SET__ will be created and a ValueNotSetError will be thrown. The exception will remind you to go to the Developers Console and update the placeholder record.
Similar to Martin's answer, here is how to update the value for the key in Datastore:
Go to Datastore Section in the developers console
Select your project at the top of the page if it's not already selected.
In the Kind dropdown box, select GaeEnvSettings.
Keys for which an exception was raised will have value __NOT_SET__.
Go to the package's GitHub page for more info on usage/configuration
My solution is to replace the secrets in the app.yaml file via github action and github secrets.
app.yaml (App Engine)
env_variables:
  SECRET_ONE: $SECRET_ONE
  ANOTHER_SECRET: $ANOTHER_SECRET
workflow.yaml (Github)
steps:
  - uses: actions/checkout@v2
  - uses: 73h/gae-app-yaml-replace-env-variables@v0.1
    env:
      SECRET_ONE: ${{ secrets.SECRET_ONE }}
      ANOTHER_SECRET: ${{ secrets.ANOTHER_SECRET }}
Here you can find the Github action.
https://github.com/73h/gae-app-yaml-replace-env-variables
When developing locally, I write the secrets to an .env file.
