Python or Node access to GKE kubectl

I manage a couple of clusters at GKE (more to come), and up till now I have been OK launching things manually as needed. I've started working on my own API that can take in requests to spin up new resources on-demand for a specific cluster, but to make it scalable I need to do something more dynamic than switching between clusters with each request. I have found a link for a Google API Python client that supposedly can access GKE:
https://developers.google.com/api-client-library/python/apis/container/v1#system-requirements
I've also found several other clients that can access Kubernetes (the one I was looking at most closely is the Node.js client from GoDaddy):
https://github.com/godaddy/kubernetes-client
The Google API client doesn't appear to be documented for use with GKE/kubectl commands, and the GoDaddy kubernetes-client has to talk to a single cluster master and can't reach one at GKE (without a kubectl proxy running first). So my question is: how does one manage Kubernetes on GKE programmatically, without having to use the command-line utilities, in either Node.js or Python?

I know this question is a couple of years old, but hopefully this helps someone. Newer GKE APIs are available for Node.js here: https://cloud.google.com/nodejs/docs/reference/container/0.3.x/
See list of container APIs here: https://developers.google.com/apis-explorer/#p/container/v1/
Once connected via the API, you can access cluster details, which includes the connectivity information for connecting to the master with standard API calls.
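On the Python side, for example, a minimal sketch along these lines (assuming the google-cloud-container package and application-default credentials; the project, location, and cluster names below are placeholders) pulls the master endpoint and CA certificate for a cluster:

from google.cloud import container_v1

# Assumes application-default credentials (e.g. GOOGLE_APPLICATION_CREDENTIALS) are set.
client = container_v1.ClusterManagerClient()

# Placeholder identifiers -- substitute your own project, location, and cluster.
name = "projects/my-project/locations/us-central1-a/clusters/my-cluster"
cluster = client.get_cluster(request={"name": name})

# The response carries the connectivity information for the master:
print(cluster.endpoint)                             # master endpoint (IP)
print(cluster.master_auth.cluster_ca_certificate)   # base64-encoded CA cert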

I just posted an article on Medium with an example of how to do this.
The first part of the article outlines how to set up the service account, roles, and credentials and load them as environment variables. Once done, you can then run the following Python:
from kubernetes import client
import base64
from tempfile import NamedTemporaryFile
import os
import yaml
from os import path

def main():
    try:
        host_url = os.environ["HOST_URL"]
        cacert = os.environ["CACERT"]
        token = os.environ["TOKEN"]

        # Set the configuration
        configuration = client.Configuration()
        with NamedTemporaryFile(delete=False) as cert:
            cert.write(base64.b64decode(cacert))
            configuration.ssl_ca_cert = cert.name
        configuration.host = host_url
        configuration.verify_ssl = True
        configuration.debug = False
        configuration.api_key = {"authorization": "Bearer " + token}
        client.Configuration.set_default(configuration)

        # Prepare all the required properties in order to run the create_namespaced_job method
        # https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/BatchV1Api.md#create_namespaced_job
        v1 = client.BatchV1Api()
        with open(path.join(path.dirname(__file__), "job.yaml")) as f:
            body = yaml.safe_load(f)
        v1.create_namespaced_job(namespace="default", body=body, pretty=True)
        return 'Job created successfully', 200
    except Exception as e:
        return str(e), 500

if __name__ == '__main__':
    main()
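For reference, job.yaml here is just a standard batch/v1 Job manifest. If you prefer, you can skip the YAML file and pass a plain dict as the body instead; a minimal sketch (the busybox job below is hypothetical, not from the article):

# Hypothetical inline equivalent of job.yaml -- a minimal batch/v1 Job.
body = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "hello-job"},
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": "hello", "image": "busybox", "command": ["echo", "hello"]}
                ],
                "restartPolicy": "Never",
            }
        }
    },
}
v1.create_namespaced_job(namespace="default", body=body)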

Related

How to use the AWS Python SDK while connecting via SSO credentials

I am attempting to create a Python script to connect to and interact with my AWS account. I was reading up on it here: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html
and I see that it reads your credentials from ~/.aws/credentials (on a Linux machine). However, I am not connecting with an IAM user but with an SSO user, so the profile connection data I use is located in the ~/.aws/sso/cache directory.
Inside that directory, I see two json files. One has the following keys:
startUrl
region
accessToken
expiresAt
the second has the following keys:
clientId
clientSecret
expiresAt
I don't see anywhere in the docs about how to tell it to use my SSO user.
Thus, when I try to run my script, I get errors such as
botocore.exceptions.ClientError: An error occurred (AuthFailure) when calling the DescribeSecurityGroups operation: AWS was not able to validate the provided access credentials
even though I can run the same command fine from the command prompt.
This was fixed in boto3 1.14.
So given you have a profile like this in your ~/.aws/config:
[profile sso_profile]
sso_start_url = <sso-url>
sso_region = <sso-region>
sso_account_id = <account-id>
sso_role_name = <role>
region = <default region>
output = <default output (json or text)>
And then log in with
$ aws sso login --profile sso_profile
You will be able to create a session:
import boto3
boto3.setup_default_session(profile_name='sso_profile')
client = boto3.client('<whatever service you want>')
So here's the long and hairy answer tested on boto3==1.21.39:
It's an eight-step process:
1. register the client using sso-oidc.register_client
2. start the device authorization flow using sso-oidc.start_device_authorization
3. redirect the user to the SSO login page using webbrowser.open
4. poll sso-oidc.create_token until the user completes the sign-in
5. list and present the account roles to the user using sso.list_account_roles
6. get role credentials using sso.get_role_credentials
7. create a new boto3 session with the role credentials from (6)
8. eat a cookie
Step 8 is really key and should not be overlooked as part of any successful authorization flow.
In the sample below, the account_id should be the account ID of the account you are trying to get credentials for, and the start_url should be the URL that AWS generates for you to start the SSO flow (in the AWS SSO management console, under Settings).
from time import sleep
import webbrowser

from boto3.session import Session

session = Session()

account_id = '1234567890'
start_url = 'https://d-0987654321.awsapps.com/start'
region = 'us-east-1'

sso_oidc = session.client('sso-oidc')
client_creds = sso_oidc.register_client(
    clientName='myapp',
    clientType='public',
)
device_authorization = sso_oidc.start_device_authorization(
    clientId=client_creds['clientId'],
    clientSecret=client_creds['clientSecret'],
    startUrl=start_url,
)
url = device_authorization['verificationUriComplete']
device_code = device_authorization['deviceCode']
expires_in = device_authorization['expiresIn']
interval = device_authorization['interval']

webbrowser.open(url, autoraise=True)
for n in range(1, expires_in // interval + 1):
    sleep(interval)
    try:
        token = sso_oidc.create_token(
            grantType='urn:ietf:params:oauth:grant-type:device_code',
            deviceCode=device_code,
            clientId=client_creds['clientId'],
            clientSecret=client_creds['clientSecret'],
        )
        break
    except sso_oidc.exceptions.AuthorizationPendingException:
        pass

access_token = token['accessToken']
sso = session.client('sso')
account_roles = sso.list_account_roles(
    accessToken=access_token,
    accountId=account_id,
)
roles = account_roles['roleList']
# simplifying here for illustrative purposes
role = roles[0]

# get_role_credentials nests the keys under 'roleCredentials'
role_creds = sso.get_role_credentials(
    roleName=role['roleName'],
    accountId=account_id,
    accessToken=access_token,
)['roleCredentials']

session = Session(
    region_name=region,
    aws_access_key_id=role_creds['accessKeyId'],
    aws_secret_access_key=role_creds['secretAccessKey'],
    aws_session_token=role_creds['sessionToken'],
)
Your current .aws/sso/cache folder structure looks like this:
$ ls
botocore-client-XXXXXXXX.json cXXXXXXXXXXXXXXXXXXX.json
The two JSON files contain three parameters that are useful:
botocore-client-XXXXXXXX.json -> clientId and clientSecret
cXXXXXXXXXXXXXXXXXXX.json -> accessToken
Using the access token in cXXXXXXXXXXXXXXXXXXX.json you can call get-role-credentials. The output from this command can be used to create a new session.
Your Python file should look something like this:
import json
import os

import boto3

dir = os.path.expanduser('~/.aws/sso/cache')
json_files = [pos_json for pos_json in os.listdir(dir) if pos_json.endswith('.json')]

for json_file in json_files:
    path = dir + '/' + json_file
    with open(path) as file:
        data = json.load(file)
        if 'accessToken' in data:
            accessToken = data['accessToken']

client = boto3.client('sso', region_name='us-east-1')
response = client.get_role_credentials(
    roleName='string',
    accountId='string',
    accessToken=accessToken
)
session = boto3.Session(
    aws_access_key_id=response['roleCredentials']['accessKeyId'],
    aws_secret_access_key=response['roleCredentials']['secretAccessKey'],
    aws_session_token=response['roleCredentials']['sessionToken'],
    region_name='us-east-1'
)
A well-formed boto3-based script should transparently authenticate based on the profile name. It is not recommended to handle the cached files or keys or tokens yourself, since the official code paths might change in the future. To see the state of your profile(s), run aws configure list. Examples:
$ aws configure list --profile=sso

      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                      sso           manual    --profile

The SSO session associated with this profile has expired or is otherwise invalid.
To refresh this SSO session run aws sso login with the corresponding profile.

$ aws configure list --profile=old

      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                      old           manual    --profile
access_key     ****************3DSx      shared-credentials-file
secret_key     ****************sX64      shared-credentials-file
    region                us-west-1      env    ['AWS_REGION', 'AWS_DEFAULT_REGION']
What works for me is the following:
import boto3

session = boto3.Session(profile_name="sso_profile_name")
session.resource("whatever")
using boto3==1.20.18.
This works if you have previously configured SSO for AWS, i.e. aws configure sso.
Interestingly enough, I don't have to go through this if I use IPython: I just run aws sso login beforehand and then call boto3.Session().
I am trying to figure out whether there is something wrong with my approach. I fully agree with what was said above with respect to transparency, and although this is a working solution, I am not in love with it.
EDIT: there was something wrong and here is how I fixed it:
run aws configure sso (as above);
install aws-vault - it basically replaces aws sso login --profile <profile-name>;
run aws-vault exec <profile-name> to create a sub-shell with AWS credentials exported to environment variables.
Doing so, it is possible to run any boto3 command both interactively (e.g. IPython) and from a script, as in my case. Therefore, the snippet above simply becomes:
import boto3

session = boto3.Session()
session.resource("whatever")
See the aws-vault repository for further details.

How do I explicitly pass my authentication key into Google's text to speech engine?

I'm trying to use Google Cloud's text to speech engine for my robot, and I cannot understand the reference page for passing the key explicitly in Python as mentioned here.
I spent several hours yesterday exploring different options for setting the environment variable GOOGLE_APPLICATION_CREDENTIALS needed for implicit authorization, including an export command in the shell script I use to start the robot, os.environ commands in Python, and os.system calls to run an export command.
client = texttospeech.TextToSpeechClient()
voice = robot_config.get('google_cloud', 'voice')
keyFile = robot_config.get('google_cloud', 'key_file')
hwNum = robot_config.getint('tts', 'hw_num')
languageCode = robot_config.get('google_cloud', 'language_code')

voice = texttospeech.types.VoiceSelectionParams(
    name=voice,
    language_code=languageCode
)
audio_config = texttospeech.types.AudioConfig(
    audio_encoding=texttospeech.enums.AudioEncoding.LINEAR16
)
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = keyFile
Logging in via SSH shows that I have successfully set the environment variable, since it shows up in env; however, a DefaultCredentialsError is thrown with the following message:
Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://cloud.google.com/docs/authentication/getting-started
Logging in and setting the environment variable manually will allow the script to run and work, but this is not a long term solution.
This works for me:
import os
from google.cloud import texttospeech

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/pi/projectx-17f8348743.json"
client = texttospeech.TextToSpeechClient()
The correct answer lies in the google.oauth2 library. The client is not looking for a JSON key file; it is instead looking for a service account credentials object:
from google.oauth2 import service_account
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient(
    credentials=service_account.Credentials.from_service_account_file(keyFile)
)
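From there, a quick test might look like the following. This is a sketch assuming the same pre-2.0 google-cloud-texttospeech API used above (texttospeech.types / texttospeech.enums), reusing the voice and audio_config objects built in the question:

synthesis_input = texttospeech.types.SynthesisInput(text="Hello from the robot")
response = client.synthesize_speech(synthesis_input, voice, audio_config)

# LINEAR16 responses include a WAV header, so the bytes can be written out directly.
with open('output.wav', 'wb') as out:
    out.write(response.audio_content)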

How can I run a Python script in Azure DevOps with Azure Resource Manager credentials?

I have a Python script I want to run in Azure Resource Manager context within an Azure DevOps pipeline task to be able to access Azure resources (like the Azure CLI or Azure PowerShell tasks).
How can I get Azure RM Service Endpoint credentials stored in Azure DevOps passed - as ServicePrincipal/Secret or OAuth Token - into the script?
If I understand the issue correctly, you want to use the Python Azure SDK wrapper classes to manage or access Azure resources, rather than using shell or PowerShell commands. I ran across the same issue and used the following steps to solve it.
import sys
from azure.identity import ClientSecretCredential
tenant_id = sys.argv[1]
client_id = sys.argv[2]
client_secret = sys.argv[3]
credentials = ClientSecretCredential(tenant_id, client_id, client_secret)
1. Add a "Use Python version" step to add the correct version of Python to your agent's PATH.
2. Add an "Azure CLI" step. The goal here is to install your requirements and execute the script.
3. Within the Azure CLI step, be sure to check the "Access service principal details in script" box in the Advanced section. This will allow you to pass the service principal details into your script as arguments.
4. Pass in the $tenantId, $servicePrincipalId, and $servicePrincipalKey variables as arguments. These variables are pipeline-defined so long as the box in step 3 is checked. No action is required on your part to define them.
5. Set up your Python script to accept the values and set up your credentials (see the script above).
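To illustrate the last step, here is a minimal sketch of putting the credential to work. The azure-mgmt-resource package and the subscription ID passed as a fourth argument are assumptions for the example, not part of the original pipeline:

import sys

from azure.identity import ClientSecretCredential
from azure.mgmt.resource import ResourceManagementClient

# $tenantId $servicePrincipalId $servicePrincipalKey plus a subscription ID
# (the subscription ID is a hypothetical fourth argument for this sketch).
tenant_id, client_id, client_secret, subscription_id = sys.argv[1:5]

credential = ClientSecretCredential(tenant_id, client_id, client_secret)

# List resource groups to confirm the service principal can reach the subscription.
client = ResourceManagementClient(credential, subscription_id)
for group in client.resource_groups.list():
    print(group.name)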
Depends on what you call a Python script, but either way Azure DevOps hasn't got native support to authenticate the Python SDK (or your custom Python script). You can, however, pass credentials from build/release variables to your script, or try to pull them from the Azure CLI (I think it stores data somewhere under /home/.azure/).
Based on the hint given by 4c74356b41 above, and with some dissecting of the Azure CLI, I created this function that allows pulling an OAuth token over ADAL for the Service Principal logged in inside an Azure DevOps "Azure CLI" task:
import os
import json

import adal

_SERVICE_PRINCIPAL_ID = 'servicePrincipalId'
_SERVICE_PRINCIPAL_TENANT = 'servicePrincipalTenant'
_TOKEN_ENTRY_TOKEN_TYPE = 'tokenType'
_ACCESS_TOKEN = 'accessToken'

def get_config_dir():
    return os.getenv('AZURE_CONFIG_DIR', None) or os.path.expanduser(os.path.join('~', '.azure'))

def getOAuthTokenFromCLI():
    token_file = (os.environ.get('AZURE_ACCESS_TOKEN_FILE', None)
                  or os.path.join(get_config_dir(), 'accessTokens.json'))
    with open(token_file) as f:
        tokenEntry = json.load(f)[0]  # just assume first entry
    tenantID = tokenEntry[_SERVICE_PRINCIPAL_TENANT]
    appId = tokenEntry[_SERVICE_PRINCIPAL_ID]
    appPassword = tokenEntry[_ACCESS_TOKEN]
    authURL = "https://login.windows.net/" + tenantID
    resource = "https://management.azure.com/"
    context = adal.AuthenticationContext(authURL, validate_authority=tenantID, api_version=None)
    token = context.acquire_token_with_client_credentials(resource, appId, appPassword)
    return token[_TOKEN_ENTRY_TOKEN_TYPE] + " " + token[_ACCESS_TOKEN]
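As a usage sketch (the management endpoint and API version below are illustrative, not part of the original answer), the returned "Bearer <token>" string can go straight into an Authorization header:

import requests

# Hypothetical call: list the subscriptions visible to the service principal.
headers = {"Authorization": getOAuthTokenFromCLI()}
response = requests.get(
    "https://management.azure.com/subscriptions?api-version=2020-01-01",
    headers=headers,
)
print(response.status_code, response.json())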

Apache LibCloud and Rackspace Cloudfiles

I've started using the Apache Libcloud library with Python to let me talk to Rackspace Cloud Files in Python 3 (pyrax is Python 2 only).
I've got this running successfully and am happily uploading files, creating containers, etc.
Sadly, I appear to be able to get only the HTTP URL for the uploaded items:
driver.get_object_cdn_url(obj)
This returns the HTTP URL for the object I've just uploaded.
Is there a way to get the OTHER URL(s) (HTTPS / streaming, etc.) via this library? I can't fathom it from the documentation!
The driver allows you to first enable CDN functionality on the container.
driver.enable_container_cdn(container)
There isn't a method to directly get the streaming URL; get_container_cdn_url only returns the static CDN URL. This code snippet gets the information directly from the API:
from libcloud.utils.py3 import urlquote

container_name = '<your container name>'
response = driver.connection.request('/%s' % (urlquote(container_name)),
                                     method='HEAD',
                                     cdn_request=True)
uri = response.headers['x-cdn-uri']
ssl_uri = response.headers['x-cdn-ssl-uri']
stream_uri = response.headers['x-cdn-streaming-uri']
See these reference docs for details.

401 Unauthorized making REST Call to Azure API App using Bearer token

I created two applications in my Azure directory, one for my API server and one for my API client. I am using the Python ADAL library and can successfully obtain a token using the following code:
tenant_id = "abc123-abc123-abc123"
context = adal.AuthenticationContext('https://login.microsoftonline.com/' + tenant_id)
token = context.acquire_token_with_username_password(
    'https://myapiserver.azurewebsites.net/',
    'myuser',
    'mypassword',
    'my_apiclient_client_id'
)
I then try to send a request to my API app using the following method but keep getting 'unauthorized':
at = token['accessToken']
id_token = "Bearer {0}".format(at)
response = requests.get('https://myapiserver.azurewebsites.net/', headers={"Authorization": id_token})
I am able to successfully log in using myuser/mypass from the login URL. I have also given the client app access to the server app in Azure AD.
Although the question was posted a long time ago, I'll try to provide an answer. I stumbled across the question because we had the exact same problem here. We could successfully obtain a token with the adal library, but then we were not able to access the resource we had obtained the token for.
To make things worse, we set up a simple console app in .NET, used the exact same parameters, and it was working. We could also copy the token obtained through the .NET app and use it in our Python request, and it worked (this one is kind of obvious, but it made us confident that the problem was not related to how we assemble the request).
The source of the problem was, in the end, in the oauth2_client of the adal Python package. When I compared the actual HTTP requests sent by the .NET and the Python apps, a subtle difference was that the Python app sent a POST request explicitly asking for api-version=1.0:
POST https://login.microsoftonline.com/common//oauth2/token?api-version=1.0
Once I changed the following line in oauth2_client.py in the adal library, I could access my resource. In the method _create_token_url, I changed
return urlparse('{}?{}'.format(self._token_endpoint, urlencode(parameters)))
to
return urlparse(self._token_endpoint)
We are working on a pull request to patch the library on GitHub.
The current release of the Azure Python SDK supports authentication with a service principal. It does not support authentication using an ADAL library yet; maybe it will in future releases.
See https://azure-sdk-for-python.readthedocs.io/en/latest/resourcemanagement.html#authentication for details.
See also Azure Active Directory Authentication Libraries for the platforms ADAL is available on.
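For reference, a minimal sketch of the service-principal route with the SDK generation covered by the linked docs (the azure.common.credentials import path comes from that era; the IDs below are placeholders):

from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient

# Placeholder identifiers -- substitute your own app registration and subscription.
credentials = ServicePrincipalCredentials(
    client_id='my-client-id',
    secret='my-client-secret',
    tenant='my-tenant-id',
)

# Any management client accepts these credentials; listing resource groups is a simple smoke test.
client = ResourceManagementClient(credentials, 'my-subscription-id')
for group in client.resource_groups.list():
    print(group.name)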
@Derek,
Could you check the Issuer URL you set on the Azure portal? If I set the wrong Issuer URL, I get the same error as you. It seems that your code is right.
Based on my experience, you need to add your application into Azure AD and get a client ID. (I am sure you have done this.) Then you can get the tenant ID and input it into the Issuer URL textbox on the Azure portal.
NOTE:
On the old portal (manage.windowsazure.com), in the bottom command bar, click View Endpoints, then copy the Federation Metadata Document URL and download that document or navigate to it in a browser.
Within the root EntityDescriptor element, there should be an entityID attribute of the form https://sts.windows.net/ followed by a GUID specific to your tenant (called a "tenant ID"). Copy this value; it will serve as your Issuer URL. You will configure your application to use this later.
My demo is as follows:
import adal
import requests

TenantURL = 'https://login.microsoftonline.com/*******'
context = adal.AuthenticationContext(TenantURL)

RESOURCE = 'http://wi****.azurewebsites.net'
ClientID = '****'
ClientSect = '7****'

token_response = context.acquire_token_with_client_credentials(
    RESOURCE,
    ClientID,
    ClientSect
)
access_token = token_response.get('accessToken')
print(access_token)

id_token = "Bearer {0}".format(access_token)
response = requests.get(RESOURCE, headers={"Authorization": id_token})
print(response)
Please try to modify it. If there are any updates, please let me know.
