How to use the AWS Python SDK while connecting via SSO credentials - python

I am attempting to create a python script to connect to and interact with my AWS account. I was reading up on it here https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html
and I see that it reads your credentials from ~/.aws/credentials (on a Linux machine). However, I am not connecting with an IAM user but an SSO user. Thus, the profile connection data I use is located in the ~/.aws/sso/cache directory.
Inside that directory, I see two json files. One has the following keys:
startUrl
region
accessToken
expiresAt
the second has the following keys:
clientId
clientSecret
expiresAt
I don't see anywhere in the docs about how to tell it to use my SSO user.
Thus, when I try to run my script, I get an error such as
botocore.exceptions.ClientError: An error occurred (AuthFailure) when calling the DescribeSecurityGroups operation: AWS was not able to validate the provided access credentials
even though I can run the same command fine from the command prompt.

This was fixed in boto3 1.14.
So given you have a profile like this in your ~/.aws/config:
[profile sso_profile]
sso_start_url = <sso-url>
sso_region = <sso-region>
sso_account_id = <account-id>
sso_role_name = <role>
region = <default region>
output = <default output (json or text)>
And then login with
$ aws sso login --profile sso_profile
You will be able to create a session:
import boto3
boto3.setup_default_session(profile_name='sso_profile')
client = boto3.client('<whatever service you want>')
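If you prefer not to touch the global default session, a named session works the same way. A minimal sketch, assuming the same sso_profile from the config above (sts.get_caller_identity is only used here to verify that the SSO credentials were picked up):
import boto3
session = boto3.Session(profile_name='sso_profile')
sts = session.client('sts')
print(sts.get_caller_identity()['Account'])  # prints the account id if the SSO login is still valid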

So here's the long and hairy answer tested on boto3==1.21.39:
It's an eight-step process where:
register the client using sso-oidc.register_client
start the device authorization flow using sso-oidc.start_device_authorization
redirect the user to the sso login page using webbrowser.open
poll sso-oidc.create_token until the user completes the signin
list and present the account roles to the user using sso.list_account_roles
get role credentials using sso.get_role_credentials
create a new boto3 session with the session credentials from (6)
eat a cookie
Step 8 is really key and should not be overlooked as part of any successful authorization flow.
In the sample below the account_id should be the account id of the account you are trying to get credentials for. And the start_url should be the url that aws generates for you to start the sso flow (in the AWS SSO management console, under Settings).
from time import time, sleep
import webbrowser
from boto3.session import Session
session = Session()
account_id = '1234567890'
start_url = 'https://d-0987654321.awsapps.com/start'
region = 'us-east-1'
sso_oidc = session.client('sso-oidc')
client_creds = sso_oidc.register_client(
    clientName='myapp',
    clientType='public',
)
device_authorization = sso_oidc.start_device_authorization(
    clientId=client_creds['clientId'],
    clientSecret=client_creds['clientSecret'],
    startUrl=start_url,
)
url = device_authorization['verificationUriComplete']
device_code = device_authorization['deviceCode']
expires_in = device_authorization['expiresIn']
interval = device_authorization['interval']
webbrowser.open(url, autoraise=True)
# Poll until the user finishes the browser sign-in (or the device code expires)
for n in range(1, expires_in // interval + 1):
    sleep(interval)
    try:
        token = sso_oidc.create_token(
            grantType='urn:ietf:params:oauth:grant-type:device_code',
            deviceCode=device_code,
            clientId=client_creds['clientId'],
            clientSecret=client_creds['clientSecret'],
        )
        break
    except sso_oidc.exceptions.AuthorizationPendingException:
        pass
access_token = token['accessToken']
sso = session.client('sso')
account_roles = sso.list_account_roles(
    accessToken=access_token,
    accountId=account_id,
)
roles = account_roles['roleList']
# simplifying here for illustrative purposes
role = roles[0]
# get_role_credentials wraps the keys in a 'roleCredentials' dict
role_creds = sso.get_role_credentials(
    roleName=role['roleName'],
    accountId=account_id,
    accessToken=access_token,
)['roleCredentials']
session = Session(
    region_name=region,
    aws_access_key_id=role_creds['accessKeyId'],
    aws_secret_access_key=role_creds['secretAccessKey'],
    aws_session_token=role_creds['sessionToken'],
)
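As a quick sanity check (not part of the original flow), the freshly built session can be verified before the rest of the script uses it; get_caller_identity should report the account you requested role credentials for:
sts = session.client('sts')
identity = sts.get_caller_identity()
print(identity['Account'], identity['Arn'])  # should match account_id and the chosen role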

Your current .aws/sso/cache folder structure looks like this:
$ ls
botocore-client-XXXXXXXX.json cXXXXXXXXXXXXXXXXXXX.json
The two JSON files contain three parameters that are useful:
botocore-client-XXXXXXXX.json -> clientId and clientSecret
cXXXXXXXXXXXXXXXXXXX.json -> accessToken
Using the access token in cXXXXXXXXXXXXXXXXXXX.json you can call get-role-credentials. The output from this command can be used to create a new session.
Your Python file should look something like this:
import json
import os
import boto3
dir = os.path.expanduser('~/.aws/sso/cache')
json_files = [pos_json for pos_json in os.listdir(dir) if pos_json.endswith('.json')]
for json_file in json_files:
    path = dir + '/' + json_file
    with open(path) as file:
        data = json.load(file)
        if 'accessToken' in data:
            accessToken = data['accessToken']
client = boto3.client('sso', region_name='us-east-1')
response = client.get_role_credentials(
    roleName='string',
    accountId='string',
    accessToken=accessToken
)
session = boto3.Session(
    aws_access_key_id=response['roleCredentials']['accessKeyId'],
    aws_secret_access_key=response['roleCredentials']['secretAccessKey'],
    aws_session_token=response['roleCredentials']['sessionToken'],
    region_name='us-east-1'
)
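One caveat when reading the cache directly: the accessToken in that file expires (the expiresAt key shown at the top of this question). A small, hedged helper you could add before calling get_role_credentials; the expiresAt format has varied between CLI versions, so the parsing below is an assumption and should be adjusted to what your cache actually contains:
from datetime import datetime, timezone
def token_is_fresh(data):
    # Hypothetical helper: returns True only if the cached entry has a non-expired accessToken
    expires_at = data.get('expiresAt', '')
    try:
        expiry = datetime.fromisoformat(expires_at.replace('UTC', '+00:00').replace('Z', '+00:00'))
    except ValueError:
        return False
    return expiry > datetime.now(timezone.utc)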

A well-formed boto3-based script should transparently authenticate based on profile name. It is not recommended to handle the cached files or keys or tokens yourself, since the official code methods might change in the future. To see the state of your profile(s), run aws configure list. Examples:
$ aws configure list --profile=sso
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                      sso           manual    --profile
The SSO session associated with this profile has expired or is otherwise invalid.
To refresh this SSO session run aws sso login with the corresponding profile.
$ aws configure list --profile=old
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                      old           manual    --profile
access_key     ****************3DSx     shared-credentials-file
secret_key     ****************sX64     shared-credentials-file
    region                us-west-1              env    ['AWS_REGION', 'AWS_DEFAULT_REGION']

What works for me is the following:
import boto3
session = boto3.Session(profile_name="sso_profile_name")
session.resource("whatever")
using boto3==1.20.18.
This would work if you had previously configured SSO for the AWS CLI, i.e. by running aws configure sso.
Interestingly enough, I don't have to go through this if I use IPython: I just run aws sso login beforehand and then call boto3.Session().
I am trying to figure out whether there is something wrong with my approach. I fully agree with what was said above with respect to transparency; although this is a working solution, I am not in love with it.
EDIT: there was something wrong and here is how I fixed it:
run aws configure sso (as above);
install aws-vault - it basically replaces aws sso login --profile <profile-name>;
run aws-vault exec <profile-name> to create a sub-shell with AWS credentials exported to environment variables.
Doing so, it is possible to run any boto3 command both interactively (eg. iPython) and from a script, as in my case. Therefore, the snippet above simply becomes:
import boto3
session = boto3.Session()
session.resource("whatever")
See the aws-vault documentation for further details.

Related

do you have to pass SSO profile credentials in order to assume the IAM role using boto3

I have my config file set up with multiple profiles and I am trying to assume an IAM role, but all the articles I see about assuming roles start with making an STS client using
import boto3
client = boto3.client('sts')
which makes sense, but the only problem is that it gives me an error when I try to do it like this. But when I do it like this, passing a profile that exists in my config file, it works. Here is the code below:
import boto3
session = boto3.Session(profile_name="test_profile")
sts = session.client("sts")
response = sts.assume_role(
    RoleArn="arn:aws:iam::xxx:role/role-name",
    RoleSessionName="test-session"
)
new_session = boto3.Session(
    aws_access_key_id=response['Credentials']['AccessKeyId'],
    aws_secret_access_key=response['Credentials']['SecretAccessKey'],
    aws_session_token=response['Credentials']['SessionToken']
)
When other people assume roles in their code without passing a profile in, how does that even work? Does boto3 automatically grab the default profile from the config file, or something like that, in their case?
Yes. This line:
client = boto3.client('sts')
tells boto3 to create a session using the default credentials.
The credentials can be provided in the ~/.aws/credentials file. If the code is running on an Amazon EC2 instance, boto3 will automatically use the credentials associated with the IAM role attached to the instance.
Credentials can also be passed via Environment Variables.
See: Credentials — Boto3 documentation
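In other words, omitting the profile just falls back to the default credential chain (environment variables, the [default] profile, then instance metadata). A minimal sketch contrasting the two forms; get_caller_identity is only there to show which identity each one resolved to:
import boto3
# Default chain: env vars, ~/.aws/credentials [default], then the instance/role metadata
print(boto3.client('sts').get_caller_identity()['Arn'])
# Explicit profile from ~/.aws/config / ~/.aws/credentials
print(boto3.Session(profile_name='test_profile').client('sts').get_caller_identity()['Arn'])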

Unable to trigger composer/airflow dag from Cloud function that triggers when there are changes in cloud storage

I have created and run dags on a google-cloud-composer environment (dlkpipelinesv1 : composer-1.13.0-airflow-1.10.12). I am able to trigger these dags manually and using the scheduler, but I am stuck when it comes to triggering them via cloud functions that detect changes in a google-cloud-storage bucket.
Note that I had another GC-Composer environment (pipelines:composer-1.7.5-airflow-1.10.2) that used those same google cloud functions to trigger the relevant dags, and it was working.
I followed this guide to create the functions that trigger the dags. So I retrieved the following variables:
PROJECT_ID = <project_id>
CLIENT_ID = <client_id_retrieved_by_running_the_code_in_the_guide_within_my_gcp_console>
WEBSERVER_ID = <airflow_webserver_id>
DAG_NAME = <dag_to_trigger>
WEBSERVER_URL = f"https://{WEBSERVER_ID}.appspot.com/api/experimental/dags/{DAG_NAME}/dag_runs"
import logging
import requests
from google.oauth2 import id_token
from google.auth.transport.requests import Request

def file_listener(event, context):
    """Entry point of the cloud function: Triggered by a change to a Cloud Storage bucket.
    Args:
        event (dict): Event payload.
        context (google.cloud.functions.Context): Metadata for the event.
    """
    logging.info("Running the file listener process")
    logging.info(f"event : {event}")
    logging.info(f"context : {context}")
    file = event
    if file["size"] == "0" or "DTM_DATALAKE_AUDIT_COMPTAGE" not in file["name"] or ".filepart" in file["name"].lower():
        logging.info("no matching file")
        exit(0)
    logging.info(f"File listener detected the presence of : {file['name']}.")
    # id_token = authorize_iap()
    # make_iap_request({"file_name": file["name"]}, id_token)
    make_iap_request(url=WEBSERVER_URL, client_id=CLIENT_ID, method="POST")

def make_iap_request(url, client_id, method="GET", **kwargs):
    """Makes a request to an application protected by Identity-Aware Proxy.
    Args:
        url: The Identity-Aware Proxy-protected URL to fetch.
        client_id: The client ID used by Identity-Aware Proxy.
        method: The request method to use
            ('GET', 'OPTIONS', 'HEAD', 'POST', 'PUT', 'PATCH', 'DELETE')
        **kwargs: Any of the parameters defined for the request function:
            https://github.com/requests/requests/blob/master/requests/api.py
            If no timeout is provided, it is set to 90 by default.
    Returns:
        The page body, or raises an exception if the page couldn't be retrieved.
    """
    # Set the default timeout, if missing
    if "timeout" not in kwargs:
        kwargs["timeout"] = 90
    # Obtain an OpenID Connect (OIDC) token from metadata server or using service account.
    open_id_connect_token = id_token.fetch_id_token(Request(), client_id)
    logging.info(f"Retrieved open id connect (bearer) token {open_id_connect_token}")
    # Fetch the Identity-Aware Proxy-protected URL, including an authorization header containing
    # "Bearer " followed by a Google-issued OpenID Connect token for the service account.
    resp = requests.request(method, url, headers={"Authorization": f"Bearer {open_id_connect_token}"}, **kwargs)
    if resp.status_code == 403:
        raise Exception("Service account does not have permission to access the IAP-protected application.")
    elif resp.status_code != 200:
        raise Exception(f"Bad response from application: {resp.status_code} / {resp.headers} / {resp.text}")
    else:
        logging.info(f"Response status - {resp.status_code}")
        return resp.json()
This is the code that runs in the Cloud Function.
I have checked the environment details in dlkpipelinesv1 and piplines respectively, using this code :
import google.auth
import google.auth.transport.requests

credentials, _ = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
authed_session = google.auth.transport.requests.AuthorizedSession(
    credentials)
# project_id = 'YOUR_PROJECT_ID'
# location = 'us-central1'
# composer_environment = 'YOUR_COMPOSER_ENVIRONMENT_NAME'
environment_url = (
    'https://composer.googleapis.com/v1beta1/projects/{}/locations/{}'
    '/environments/{}').format(project_id, location, composer_environment)
composer_response = authed_session.request('GET', environment_url)
environment_data = composer_response.json()
and the two use the same service accounts to run, i.e. the same IAM roles. However, I have noticed the following differing details:
In the old environment :
"airflowUri": "https://p5<hidden_value>-tp.appspot.com",
"privateEnvironmentConfig": { "privateClusterConfig": {} },
in the new environment:
"airflowUri": "https://da<hidden_value>-tp.appspot.com",
"privateEnvironmentConfig": {
"privateClusterConfig": {},
"webServerIpv4CidrBlock": "<hidden_value>",
"cloudSqlIpv4CidrBlock": "<hidden_value>"
}
The service account that I use to make the post request has the following roles:
Cloud Functions Service Agent
Composer Administrator
Composer User
Service Account Token Creator
Service Account User
The service account that runs my composer environment has the following roles:
BigQuery Admin
Composer Worker
Service Account Token Creator
Storage Object Admin
But I am still receiving a 403 - Forbidden in the Log Explorer when the post request is made to the airflow API.
EDIT 2020-11-16 :
I've updated to the latest make_iap_request code.
I tinkered with the IAP within the security service, but I cannot find the web server that will accept HTTP POST requests from my cloud functions. I added the service account to the default and CRM IAP resources just in case, but I still get this error:
Exception: Service account does not have permission to access the IAP-protected application.
The main question is: what IAP is at stake here, and how do I add my service account as a user of this IAP?
What am I missing?
There is a configuration parameter that causes ALL requests to the API to be denied...
In the documentation, it is mentioned that we need to override the following Airflow configuration:
[api]
auth_backend = airflow.api.auth.backend.deny_all
into
[api]
auth_backend = airflow.api.auth.backend.default
This detail is really important to know, and it is not mentioned in Google's documentation...
Useful links :
Triggering DAGS (workflows) with GCS
make_iap_request.py repository
The code throwing the 403 is the way it used to work. There was a breaking change in the middle of 2020. Instead of using requests to make an HTTP call for the token, you should use Google's OAuth2 library:
from google.oauth2 import id_token
from google.auth.transport.requests import Request
open_id_connect_token = id_token.fetch_id_token(Request(), client_id)
see this example
I followed the steps in Triggering DAGs and it worked in my environment; please see my recommendations below.
It is a good start that the Composer Environment is up and running. Through the process you will only need to upload the new DAG (trigger_response_dag.py) and get the client ID (it ends with .apps.googleusercontent.com) with either a Python script or from the login page the first time you open the Airflow UI.
On the Cloud Functions side, I noticed you have a combination of instructions for Node.js and for Python; for example, USER_AGENT is only for Node.js, and the routine make_iap_request is only for Python. I hope the following points help to resolve your problem:
Service Account (SA). The Node.js code uses the default service account ${projectId}@appspot.gserviceaccount.com, whose default role is Editor, meaning that it has wide access to the GCP services, including Cloud Composer. In Python I think the authentication is managed somehow by client_id, since a token is retrieved with it. Please ensure that the SA has this Editor role and don't forget to assign serviceAccountTokenCreator as specified in the guide.
I used the Node.js 8 runtime, and the user agent you are concerned about should be 'gcf-event-trigger', as it is hard-coded: USER_AGENT = 'gcf-event-trigger'. In Python, it seems not necessary.
By default, in the GCS trigger, the GCS Event Type is set to Archive, you need to change it to Finalize/Create. If set to Archive, the trigger won't work when you upload objects, and the DAG won't be started.
If you think your cloud function is correctly configured and an error persists, you can find its cause in your cloud function's LOGS tab in the Console. It can give you more details.
Basically, from the guide, I only had to change the following values in Node.js:
// The project that holds your function. Replace <YOUR-PROJECT-ID>
const PROJECT_ID = '<YOUR-PROJECT-ID>';
// Navigate to your webserver's login page and get this from the URL
const CLIENT_ID = '<ALPHANUMERIC>.apps.googleusercontent.com';
// This should be part of your webserver's URL in the Env's detail page: {tenant-project-id}.appspot.com.
const WEBSERVER_ID = 'v90eaaaa11113fp-tp';
// The name of the DAG you wish to trigger. It's DAG's name in the script trigger_response_dag.py you uploaded to your Env.
const DAG_NAME = 'composer_sample_trigger_response_dag';
For Python I only changed these parameters:
client_id = '<ALPHANUMERIC>.apps.googleusercontent.com'
# This should be part of your webserver's URL:
# {tenant-project-id}.appspot.com
webserver_id = 'v90eaaaa11113fp-tp'
# Change dag_name only if you are not using the example
dag_name = 'composer_sample_trigger_response_dag'
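For completeness, a hedged sketch of how those Python values end up being used: the guide's make_iap_request helper (shown earlier in this thread) POSTs to the experimental dag_runs endpoint, optionally with a conf payload. The payload keys below are illustrative, not taken from the guide:
import json
webserver_url = f'https://{webserver_id}.appspot.com/api/experimental/dags/{dag_name}/dag_runs'
make_iap_request(webserver_url, client_id, method='POST',
                 data=json.dumps({'conf': {'file_name': 'gs://my-bucket/my-object'}}))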

How to generate a Blob signed url in Google Cloud Run?

Under Google Cloud Run, you can select which service account your container is running. Using the default compute service account fails to generate a signed url.
The workaround listed here works on Google Compute Engine, if you allow all the scopes for the service account. There does not seem to be a way to do that in Cloud Run (not that I can find).
https://github.com/googleapis/google-auth-library-python/issues/50
Things I have tried:
Assigned the service account the role: roles/iam.serviceAccountTokenCreator
Verified the workaround in the same GCP project in a Virtual Machine (vs Cloud Run)
Verified the code works locally in the container with the service account loaded from private key (via json file).
from google.cloud import storage
from datetime import datetime, timedelta
client = storage.Client()
bucket = client.get_bucket('EXAMPLE_BUCKET')
blob = bucket.get_blob('libraries/image_1.png')
expires = datetime.now() + timedelta(seconds=86400)
blob.generate_signed_url(expiration=expires)
Fails with:
you need a private key to sign credentials.the credentials you are currently using <class 'google.auth.compute_engine.credentials.Credentials'> just contains a token. see https://googleapis.dev/python/google-api-core/latest/auth.html#setting-up-a-service-account for more details.
/usr/local/lib/python3.8/site-packages/google/cloud/storage/_signing.py, line 51, in ensure_signed_credentials
Trying to add the workaround,
Error calling the IAM signBytes API:
{ "error": { "code": 400,
"message": "Request contains an invalid argument.",
"status": "INVALID_ARGUMENT" }
}
Exception Location: /usr/local/lib/python3.8/site-packages/google/auth/iam.py, line 81, in _make_signing_request
Workaround code as mentioned in the GitHub issue:
from google.cloud import storage
from google.auth.transport import requests
from google.auth import compute_engine
from datetime import datetime, timedelta

def get_signing_creds(credentials):
    auth_request = requests.Request()
    print(credentials.service_account_email)
    signing_credentials = compute_engine.IDTokenCredentials(
        auth_request, "", service_account_email=credentials.service_account_email)
    return signing_credentials

client = storage.Client()
bucket = client.get_bucket('EXAMPLE_BUCKET')
blob = bucket.get_blob('libraries/image_1.png')
expires = datetime.now() + timedelta(seconds=86400)
signing_creds = get_signing_creds(client._credentials)
url = blob.generate_signed_url(expiration=expires, credentials=signing_creds)
print(url)
How do I generate a signed url under Google Cloud Run?
At this point, it seems like I may have to mount the service account key which I wanted to avoid.
EDIT:
To try and clarify, the service account has the correct permissions - it works in GCE and locally with the JSON private key.
Yes you can, but I had to dig deep to find out how (jump to the end if you don't care about the details).
If you go into the _signing.py file, at line 623, you can see this:
if access_token and service_account_email:
signature = _sign_message(string_to_sign, access_token, service_account_email)
...
If you provide the access_token and the service_account_email, you can use the _sign_message method. This method uses the IAM service SignBlob API at this line
This is important because you can now sign blobs without having the private key locally! So, that solves the problem, and the following code works on Cloud Run (and I'm sure on Cloud Functions):
def sign_url():
    from google.cloud import storage
    from datetime import datetime, timedelta
    import google.auth
    credentials, project_id = google.auth.default()
    # Perform a refresh request to get the access token of the current credentials (Else, it's None)
    from google.auth.transport import requests
    r = requests.Request()
    credentials.refresh(r)
    client = storage.Client()
    bucket = client.get_bucket('EXAMPLE_BUCKET')
    blob = bucket.get_blob('libraries/image_1.png')
    expires = datetime.now() + timedelta(seconds=86400)
    # In case of user credential use, define manually the service account to use (for development purpose only)
    service_account_email = "YOUR DEV SERVICE ACCOUNT"
    # If you use a service account credential, you can use the embedded email
    if hasattr(credentials, "service_account_email"):
        service_account_email = credentials.service_account_email
    url = blob.generate_signed_url(
        expiration=expires,
        service_account_email=service_account_email,
        access_token=credentials.token)
    return url, 200
Let me know if it's not clear
The answer @guillaume-blaquiere posted here does work, but it requires an additional step not mentioned, which is to add the Service Account Token Creator role in IAM to your default service account, which will allow said default service account to "Impersonate service accounts (create OAuth2 access tokens, sign blobs or JWTs, etc)."
This allows the default service account to sign blobs, as per the signBlob documentation.
I tried it on AppEngine and it worked perfectly once that permission was given.
import datetime as dt
from google import auth
import google.auth.transport.requests
from google.cloud import storage
# SCOPES = [
#     "https://www.googleapis.com/auth/devstorage.read_only",
#     "https://www.googleapis.com/auth/iam"
# ]
credentials, project = auth.default(
    # scopes=SCOPES
)
credentials.refresh(auth.transport.requests.Request())
expiration_timedelta = dt.timedelta(days=1)
storage_client = storage.Client(credentials=credentials)
bucket = storage_client.get_bucket("bucket_name")
blob = bucket.get_blob("blob_name")
signed_url = blob.generate_signed_url(
    expiration=expiration_timedelta,
    service_account_email=credentials.service_account_email,
    access_token=credentials.token,
)
I downloaded a key for the AppEngine default service account to test locally, and in order to make it work properly outside of the AppEngine environment, I had to add the proper scopes to the credentials, as per the commented lines setting the SCOPES. You can ignore them if running only in AppEngine itself.
You can't sign URLs with the default service account.
Try your service code again with a dedicated service account that has the required permissions, and see if that resolves your error.
References and further reading:
https://stackoverflow.com/a/54272263
https://cloud.google.com/storage/docs/access-control/signed-urls
https://github.com/googleapis/google-auth-library-python/issues/238
An updated approach has been added to GCP's documentation for serverless instances such as Cloud Run and App Engine.
The following snippet shows how to create a signed URL from the storage library.
import datetime
from google.cloud import storage

def generate_upload_signed_url_v4(bucket_name, blob_name):
    """Generates a v4 signed URL for uploading a blob using HTTP PUT.
    Note that this method requires a service account key file. You can not use
    this if you are using Application Default Credentials from Google Compute
    Engine or from the Google Cloud SDK.
    """
    # bucket_name = 'your-bucket-name'
    # blob_name = 'your-object-name'
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(blob_name)
    url = blob.generate_signed_url(
        version="v4",
        # This URL is valid for 15 minutes
        expiration=datetime.timedelta(minutes=15),
        # Allow PUT requests using this URL.
        method="PUT",
        content_type="application/octet-stream",
    )
    return url
Once your backend returns the signed URL, you can execute a curl PUT request from your frontend as follows:
curl -X PUT -H 'Content-Type: application/octet-stream' --upload-file my-file 'my-signed-url'
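If you would rather stay in Python, the same upload can be done with requests; this assumes url is the value returned by generate_upload_signed_url_v4 above, and my-file is a placeholder:
import requests
with open('my-file', 'rb') as f:
    resp = requests.put(url, data=f, headers={'Content-Type': 'application/octet-stream'})
resp.raise_for_status()  # fails if the content type does not match what the URL was signed for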
I had to add both Service Account Token Creator and Storage Object Creator to the default compute engine service account (which is what my Cloud Run services use) before it worked. You could also create a custom role that has just iam.serviceAccounts.signBlob instead of Service Account Token Creator, which is what I did.
I store the credentials.json contents in Secret Manager then load it in my Django app like this:
import json
import os
from google.cloud import secretmanager
from google.oauth2 import service_account

project_id = os.environ.get("GOOGLE_CLOUD_PROJECT")
client = secretmanager.SecretManagerServiceClient()
secret_name = "service_account_credentials"
secret_path = f"projects/{project_id}/secrets/{secret_name}/versions/latest"
credentials_json = client.access_secret_version(name=secret_path).payload.data.decode("UTF-8")
service_account_info = json.loads(credentials_json)
google_service_credentials = service_account.Credentials.from_service_account_info(
    service_account_info)
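Those credentials can then be handed to the storage client, at which point generate_signed_url works without the IAM signBlob workaround. A short sketch; the bucket and object names are placeholders:
from datetime import timedelta
from google.cloud import storage
storage_client = storage.Client(credentials=google_service_credentials)
blob = storage_client.bucket('EXAMPLE_BUCKET').blob('libraries/image_1.png')
print(blob.generate_signed_url(version='v4', expiration=timedelta(hours=1)))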
I tried the answer from @guillaume-blaquiere and I added the permission recommended by @guilherme-coppini, but when using Google Cloud Run I always saw the same "You need a private key to sign credentials.the credentials you are currently using..." error.

Are function/host keys directly available inside an Azure Function written in Python?

When writing an Azure Function in Python, I would expect to be able to access the host and function keys from the environment. Is this possible? All the examples I've seen do it by calling a get request, which seems like a lot of code to access something that I've set through the website.
This question is very similar, but not language specific.
It sounds like you want to get the response of the Host API admin/host/keys of Azure Functions, as below; please refer to the Azure Functions wiki page Key management API.
Here is my sample code.
# App Credentials, to get it see the figures below
username = "<your username like `$xxxxx`>"
password = "<your password>"
functionapp_name = "<your function app name>"
api_url = f"https://{functionapp_name}.scm.azurewebsites.net/api"
site_url = f"https://{functionapp_name}.azurewebsites.net"
import base64
import requests
auth_info = f"{username}:{password}"
base64_auth = base64.b64encode(str.encode(auth_info)).decode()
print(base64_auth)
jwt_resp = requests.get(f"{api_url}/functions/admin/token", headers={"Authorization": f"Basic {base64_auth}"})
jwt = jwt_resp.text.replace("\"", "", -1)
print(jwt)
keys_resp = requests.get(f"{site_url}/admin/host/keys", headers={"Authorization": f"Bearer {jwt}"})
print(keys_resp.text)
It works; the host keys come back as JSON in the response body.
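To actually use one of the returned keys, you can parse that JSON and append the key as the code query parameter when calling an HTTP-triggered function. A hedged sketch; the function name MyHttpTrigger is a placeholder, and the keys/name/value shape is the one documented for the key management API:
host_keys = keys_resp.json()
default_key = next(k['value'] for k in host_keys['keys'] if k['name'] == 'default')
func_resp = requests.get(f"{site_url}/api/MyHttpTrigger", params={"code": default_key})
print(func_resp.status_code, func_resp.text)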
For getting the username and password of App Credentials, please see the figures below.
Fig 1. On Azure portal, open the Platform features tab of your Function App and click the Deployment Center link
Fig 2. Select the FTP option in the first step of SOURCE CONTROL and click the Dashboard button to copy the values of Username and Password, but just use the part of Username with the $ prefix as the username variable in my script. Of course, you can also use the credentials in the User Credentials tab.
Also, you can refer to my answer for the similar SO thread Unable to access admin URL of Azure Functions using PowerShell, and my figures below come from that.
Update: For using Azure Function for Python in container, please refer to the figure below to get the deployment credentials.

How can I run a Python script in Azure DevOps with Azure Resource Manager credentials?

I have a Python script I want to run in Azure Resource Manager context within an Azure DevOps pipeline task to be able to access Azure resources (like the Azure CLI or Azure PowerShell tasks).
How can I get Azure RM Service Endpoint credentials stored in Azure DevOps passed - as ServicePrincipal/Secret or OAuth Token - into the script?
If I understand the issue correctly, you want to use the Python Azure CLI wrapper classes to manage or access Azure resources, rather than using shell or PowerShell commands. I ran across the same issue and used the following steps to solve it.
import sys
from azure.identity import ClientSecretCredential
tenant_id = sys.argv[1]
client_id = sys.argv[2]
client_secret = sys.argv[3]
credentials = ClientSecretCredential(tenant_id, client_id, client_secret)
Add a "User Python version" step to add the correct version of python to your agent's PATH
Add a "Azure CLI" step. The goal here is to install your requirements and execute the script.
Within the Azure CLI step, be sure to check the "Access service principal details in script" box in the Advanced section. This will allow you to pass in the service principal details into your script as arguments.
Pass in the $tenantId $servicePrincipalId $servicePrincipalKey variables as arguments. These variables are pipeline defined so long as the box in step 3 is checked. No action is required on your part to define them.
Setup your Python script to accept the values and setup your
credentials. See the script above
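Once the credential object exists, any of the Azure SDK management clients will accept it. A minimal sketch assuming the azure-mgmt-resource package is installed and a subscription id is passed as a hypothetical fourth pipeline argument (neither is part of the steps above):
from azure.mgmt.resource import ResourceManagementClient
subscription_id = sys.argv[4]  # hypothetical extra argument supplied by the pipeline
resource_client = ResourceManagementClient(credentials, subscription_id)
for rg in resource_client.resource_groups.list():
    print(rg.name)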
It depends on what you call a Python script, but either way Azure DevOps has no native support for authenticating the Python SDK (or your custom Python script). You can pass credentials from build/release variables to your script, or try to pull them from the Azure CLI (I think it stores data somewhere under /home/.azure/).
Based on the hint given by 4c74356b41 above, and with some dissecting of the Azure CLI, I created this function that allows pulling an OAuth token over ADAL from the Service Principal logged in inside an Azure DevOps Azure CLI task:
import os
import json
import adal
_SERVICE_PRINCIPAL_ID = 'servicePrincipalId'
_SERVICE_PRINCIPAL_TENANT = 'servicePrincipalTenant'
_TOKEN_ENTRY_TOKEN_TYPE = 'tokenType'
_ACCESS_TOKEN = 'accessToken'
def get_config_dir():
    return os.getenv('AZURE_CONFIG_DIR', None) or os.path.expanduser(os.path.join('~', '.azure'))

def getOAuthTokenFromCLI():
    token_file = (os.environ.get('AZURE_ACCESS_TOKEN_FILE', None)
                  or os.path.join(get_config_dir(), 'accessTokens.json'))
    with open(token_file) as f:
        tokenEntry = json.load(f)[0]  # just assume first entry
    tenantID = tokenEntry[_SERVICE_PRINCIPAL_TENANT]
    appId = tokenEntry[_SERVICE_PRINCIPAL_ID]
    appPassword = tokenEntry[_ACCESS_TOKEN]
    authURL = "https://login.windows.net/" + tenantID
    resource = "https://management.azure.com/"
    context = adal.AuthenticationContext(authURL, validate_authority=tenantID, api_version=None)
    token = context.acquire_token_with_client_credentials(resource, appId, appPassword)
    return token[_TOKEN_ENTRY_TOKEN_TYPE] + " " + token[_ACCESS_TOKEN]
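The returned string is already in Authorization-header form ("Bearer <token>"), so it can be used directly against the Azure Resource Manager REST API. A small hedged example; the subscription id is a placeholder and the api-version may need adjusting:
import requests
auth_header = getOAuthTokenFromCLI()
resp = requests.get(
    "https://management.azure.com/subscriptions/<subscription-id>/resourcegroups?api-version=2021-04-01",
    headers={"Authorization": auth_header})
print(resp.status_code)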
