I want to automate Azure resources (e.g. start/stop a VM). Currently I am using an Automation Account runbook and it is working fine, but I need to implement a framework something like this:
1) Trigger the runbook whenever a new object (Excel sheet) is put in an Azure bucket.
2) Read the Excel sheet for the input variables.
Below is the runbook code.
Could somebody please tell me the best way to trigger the runbook that suits the above framework?
"""
Azure Automation documentation : https://aka.ms/azure-automation-python-documentation
Azure Python SDK documentation : https://aka.ms/azure-python-sdk
"""
import os
import sys
from azure.mgmt.compute import ComputeManagementClient
import azure.mgmt.resource
import automationassets
def get_automation_runas_credential(runas_connection):
from OpenSSL import crypto
import binascii
from msrestazure import azure_active_directory
import adal
# Get the Azure Automation RunAs service principal certificate
cert = automationassets.get_automation_certificate("AzureRunAsCertificate")
pks12_cert = crypto.load_pkcs12(cert)
pem_pkey = crypto.dump_privatekey(crypto.FILETYPE_PEM,pks12_cert.get_privatekey())
# Get run as connection information for the Azure Automation service principal
application_id = runas_connection["ApplicationId"]
thumbprint = runas_connection["CertificateThumbprint"]
tenant_id = runas_connection["TenantId"]
# Authenticate with service principal certificate
resource ="https://management.core.windows.net/"
authority_url = ("https://login.microsoftonline.com/"+tenant_id)
context = adal.AuthenticationContext(authority_url)
return azure_active_directory.AdalAuthentication(
lambda: context.acquire_token_with_client_certificate(
resource,
application_id,
pem_pkey,
thumbprint)
)
Authenticate to Azure using the Azure Automation RunAs service principal
runas_connection = automationassets.get_automation_connection("AzureRunAsConnection")
azure_credential = get_automation_runas_credential(runas_connection)
Initialize the compute management client with the RunAs credential and specify the subscription to work against.
compute_client = ComputeManagementClient(
azure_credential,
str(runas_connection["SubscriptionId"])
)
print('\nStart VM')
async_vm_start = compute_client.virtual_machines.start(
'resource1', 'vm1')
async_vm_start.wait()
'''
print('\nStop VM')
async_vm_stop=compute_client.virtual_machines.power_off(resource_group_name, vm_name)
async_vm_stop.wait()'''
I believe one way to accomplish your requirement of triggering a runbook whenever a new blob (or in your words, 'object') is added to an Azure Storage container (or in your words, 'bucket') is by leveraging an Event Subscription (Event Grid). For related information, refer to this document.
To illustrate it in a better way: go to Azure Portal -> your Storage account (of the StorageV2 kind) -> Events tile -> More options -> Logic Apps -> add 2 steps, one that validates that a new storage blob was added and one that runs the required runbook.
You may also add further steps, like sending a mail after the runbook execution has completed, etc.
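If you later want to drop the Logic App, note that a runbook can also be started directly over HTTP by creating a webhook on it, and the Event Grid subscription can target that webhook endpoint. A minimal sketch of such a call, assuming a webhook has already been created on the runbook (the URL below is a placeholder; the POST body is made available to the runbook as webhook data):

import requests

# Placeholder: the URL shown once when the webhook is created on the runbook.
WEBHOOK_URL = "https://<region>.webhook.azure-automation.net/webhooks?token=<token>"

# Any JSON body is handed to the runbook job as webhook data.
response = requests.post(WEBHOOK_URL, json={"blobName": "inputs.xlsx"})
print(response.status_code)  # 202 means the job was queued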
Hope this helps!
Related
My SQL Server DB password is saved in Azure Key Vault, which has a DATAREF ID as an identifier. I need that password to create a Spark dataframe from a table in SQL Server. I am running this .py file on a Google Dataproc cluster. How can I get that password using Python?
Since you are accessing an Azure service from a non-Azure service, you will need a service principal. You can use a certificate or a secret. See THIS link for the different methods. You will need to give the service principal proper access, and how depends on whether you are using RBAC or access policies for your key vault.
So the steps you need to follow are:
Create a key vault and create a secret.
Create a service principal or application registration. Store the client id, client secret and tenant id.
Give the service principal proper access to the key vault (if you are using access policies) or to the specific secret (if you are using the RBAC model).
The Python link for the code is HERE.
The code that will work for you is below:
from azure.identity import ClientSecretCredential
from azure.keyvault.secrets import SecretClient

tenantid = "<your_tenant_id>"
clientsecret = "<your_client_secret>"
clientid = "<your_client_id>"

my_credentials = ClientSecretCredential(tenant_id=tenantid, client_id=clientid, client_secret=clientsecret)

secret_client = SecretClient(vault_url="https://<your_keyvault_name>.vault.azure.net/", credential=my_credentials)
secret = secret_client.get_secret("<your_secret_name>")

print(secret.name)
print(secret.value)
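Back on the Dataproc side, here is a minimal sketch of using that secret to build the Spark dataframe; the server, database, table and user below are hypothetical placeholders, and the SQL Server JDBC driver jar must be available on the cluster:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mssql-read").getOrCreate()

# Hypothetical connection details; only the password comes from Key Vault.
df = (spark.read.format("jdbc")
      .option("url", "jdbc:sqlserver://<your_server>:1433;databaseName=<your_db>")
      .option("dbtable", "<your_table>")
      .option("user", "<your_user>")
      .option("password", secret.value)
      .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
      .load())
df.show()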
I am struggling to programmatically access a kubernetes cluster running on Google Cloud. I have set up a service account and pointed GOOGLE_APPLICATION_CREDENTIALS to a corresponding credentials file. I managed to get the cluster and credentials as follows:
import google.auth
import google.auth.transport.requests
from google.cloud.container_v1 import ClusterManagerClient
from kubernetes import client

credentials, project = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
credentials.refresh(google.auth.transport.requests.Request())
cluster_manager = ClusterManagerClient(credentials=credentials)
cluster = cluster_manager.get_cluster(project, 'us-west1-b', 'clic-cluster')
So far so good. But then I want to start using the kubernetes client:
config = client.Configuration()
config.host = f'https://{cluster.endpoint}:443'
config.verify_ssl = False
config.api_key = {"authorization": "Bearer " + credentials.token}
config.username = credentials._service_account_email
client.Configuration.set_default(config)
kub = client.CoreV1Api()
print(kub.list_pod_for_all_namespaces(watch=False))
And I get an error message like this:
pods is forbidden: User "12341234123451234567" cannot list resource "pods" in API group "" at the cluster scope: Required "container.pods.list" permission.
I found this website describing the container.pods.list, but I don't know where I should add it, or how it relates to the API scopes described here.
As per the error:
pods is forbidden: User "12341234123451234567" cannot list resource
"pods" in API group "" at the cluster scope: Required
"container.pods.list" permission.
it seems evident that the user credentials you are trying to use do not have permission to list pods.
The full list of permissions is documented in https://cloud.google.com/kubernetes-engine/docs/how-to/iam, and there are different roles that can come into play here:
If you are able to get the cluster, then that action is covered by multiple roles: Kubernetes Engine Cluster Admin, Kubernetes Engine Cluster Viewer, Kubernetes Engine Developer and Kubernetes Engine Viewer.
Whereas, if you want to list pods (kub.list_pod_for_all_namespaces(watch=False)), you need at least Kubernetes Engine Viewer access.
You should be able to add multiple roles.
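For example, granting the Kubernetes Engine Viewer role (roles/container.viewer, which includes container.pods.list) to the service account could look like the following sketch, where the project id and service account name are placeholders:

gcloud projects add-iam-policy-binding <your-project-id> \
    --member="serviceAccount:<your-sa>@<your-project-id>.iam.gserviceaccount.com" \
    --role="roles/container.viewer"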
I've set up a VM in Azure that has a managed identity.
I followed the guide here: https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-arm
So now I have an access token. But what I fail to understand is: how do I use this token to access my key vault? I'm using the Python SDK. Looking at the docs for the SDK here: https://learn.microsoft.com/en-us/python/api/azure-keyvault/azure.keyvault?view=azure-python
there exists an access token class, AccessToken(scheme, token, key).
I assume I can use the token I generated earlier here. But what are scheme and key? The docs do not explain it. Or am I looking at the wrong class to use with the token?
If you're using a VM with a managed identity, then you can create a credential for a Key Vault client using azure-identity's ManagedIdentityCredential class. The credential will fetch and use access tokens for you as you use the Key Vault client:
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient
credential = ManagedIdentityCredential()
client = SecretClient("https://{vault-name}.vault.azure.net", credential)
secret = client.get_secret("secret-name")
Note that I'm using a SecretClient to fetch secrets from Key Vault; there are new packages for working with Key Vault in Python that replace azure-keyvault:
azure-keyvault-certificates (Migration guide)
azure-keyvault-keys (Migration guide)
azure-keyvault-secrets (Migration guide)
Clients in each of these packages can use any credential from azure-identity for authentication.
(I work on the Azure SDK in Python)
I wouldn't recommend using the managed identity of a VM to access Key Vault. You should create a service principal if you intend to run scripts / code.
The best way of doing this is with the Azure CLI. See here for instructions on installing the CLI, and refer to this, or this for creating your service principal.
The best way to manage resources in Python is by using ADAL, which is documented:
https://github.com/AzureAD/azure-activedirectory-library-for-python
In your case, however, managing KeyVault is made a little easier since the KeyVault library for Python also provides the means for you to authenticate without directly using ADAL to obtain your access token. See here:
https://learn.microsoft.com/en-us/python/api/overview/azure/key-vault?view=azure-python
from azure.keyvault import KeyVaultClient
from azure.common.credentials import ServicePrincipalCredentials

credentials = ServicePrincipalCredentials(
    client_id='...',
    secret='...',
    tenant='...'
)

client = KeyVaultClient(credentials)

# VAULT_URL must be in the format 'https://<vaultname>.vault.azure.net'
# KEY_VERSION is required, and can be obtained with the
# KeyVaultClient.get_key_versions(self, vault_url, key_name) API
key_bundle = client.get_key(VAULT_URL, KEY_NAME, KEY_VERSION)
key = key_bundle.key
In the above, client_id, secret, and tenant (id) are all outputs of the az ad sp create-for-rbac --name {APP-NAME} CLI command.
Remember to review and adjust the role assignments for the sp you created. And your KeyVault is only as secure as the devices which have access to your sp's credentials.
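If you need a secret rather than a key with this older client, the call is analogous; a minimal sketch, where SECRET_NAME is a placeholder and (as far as I recall) an empty version string fetches the latest version:

# SECRET_NAME is a placeholder; '' requests the latest version of the secret.
secret_bundle = client.get_secret(VAULT_URL, SECRET_NAME, '')
print(secret_bundle.value)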
I have a Python script I want to run in Azure Resource Manager context within an Azure DevOps pipeline task to be able to access Azure resources (like the Azure CLI or Azure PowerShell tasks).
How can I get Azure RM Service Endpoint credentials stored in Azure DevOps passed - as ServicePrincipal/Secret or OAuth Token - into the script?
If I understand the issue correctly, you want to use the Azure SDK for Python to manage or access Azure resources, rather than using shell or PowerShell commands. I ran across the same issue and used the following steps to solve it.
import sys
from azure.identity import ClientSecretCredential
tenant_id = sys.argv[1]
client_id = sys.argv[2]
client_secret = sys.argv[3]
credentials = ClientSecretCredential(tenant_id, client_id, client_secret)
1) Add a "Use Python version" step to add the correct version of Python to your agent's PATH.
2) Add an "Azure CLI" step. The goal here is to install your requirements and execute the script.
3) Within the Azure CLI step, be sure to check the "Access service principal details in script" box in the Advanced section. This will allow you to pass the service principal details into your script as arguments.
4) Pass in the $tenantId, $servicePrincipalId and $servicePrincipalKey variables as arguments. These variables are defined by the pipeline as long as the box in step 3 is checked; no action is required on your part to define them.
5) Set up your Python script to accept the values and set up your credentials. See the script above; a YAML sketch of the whole setup follows this list.
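For reference, a hedged sketch of the same setup as pipeline YAML; the service connection name and script path are placeholders:

- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.8'

- task: AzureCLI@2
  inputs:
    azureSubscription: '<your-service-connection>'  # placeholder
    scriptType: bash
    scriptLocation: inlineScript
    addSpnToEnvironment: true  # exposes $tenantId, $servicePrincipalId, $servicePrincipalKey
    inlineScript: |
      pip install azure-identity
      python my_script.py $tenantId $servicePrincipalId $servicePrincipalKey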
It depends on what you call a Python script, but either way Azure DevOps has no native support for authenticating the Python SDK (or your custom Python script). You can pass credentials from build/release variables into your script, or try to pull them from the Azure CLI (I think it stores data somewhere under /home/.azure/).
Based on the hint given by 4c74356b41 above, and with some dissecting of the Azure CLI, I created this function that allows pulling an OAuth token over ADAL for the service principal logged in inside an Azure DevOps "Azure CLI" task:
import os
import json
import adal

_SERVICE_PRINCIPAL_ID = 'servicePrincipalId'
_SERVICE_PRINCIPAL_TENANT = 'servicePrincipalTenant'
_TOKEN_ENTRY_TOKEN_TYPE = 'tokenType'
_ACCESS_TOKEN = 'accessToken'


def get_config_dir():
    return os.getenv('AZURE_CONFIG_DIR', None) or os.path.expanduser(os.path.join('~', '.azure'))


def getOAuthTokenFromCLI():
    token_file = (os.environ.get('AZURE_ACCESS_TOKEN_FILE', None)
                  or os.path.join(get_config_dir(), 'accessTokens.json'))
    with open(token_file) as f:
        tokenEntry = json.load(f)[0]  # just assume first entry

    tenantID = tokenEntry[_SERVICE_PRINCIPAL_TENANT]
    appId = tokenEntry[_SERVICE_PRINCIPAL_ID]
    appPassword = tokenEntry[_ACCESS_TOKEN]

    authURL = "https://login.windows.net/" + tenantID
    resource = "https://management.azure.com/"

    context = adal.AuthenticationContext(authURL, validate_authority=tenantID, api_version=None)
    token = context.acquire_token_with_client_credentials(resource, appId, appPassword)

    return token[_TOKEN_ENTRY_TOKEN_TYPE] + " " + token[_ACCESS_TOKEN]
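The returned string is already in "Bearer <token>" form, so it can be dropped straight into an Authorization header; for example (the subscriptions URL is just an illustration):

import requests

headers = {"Authorization": getOAuthTokenFromCLI()}
resp = requests.get(
    "https://management.azure.com/subscriptions?api-version=2020-01-01",
    headers=headers)
print(resp.json())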
I'm using the Python google.cloud API. For example, using the metrics module:
from google.cloud import monitoring
client = monitoring.Client()
client.query('my/gcp/metric', minutes=10)
For my GOOGLE_APPLICATION_CREDENTIALS I'm using a service account that has specific access to a GCP project.
Does google.cloud have any module that can let me derive the project from the service account (i.e. get what project the service account is in)?
This would be convenient because each service account only has access to a single project, so I could set my service account and be able to reference that project in code.
Not sure if this will work, you may need to tweak it:

from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
service = discovery.build('yourservicename', 'v1', credentials=credentials)

# list() only builds the request; execute() actually calls the API
request = service.projects().list()
response = request.execute()
first_project = response['projects'][0]
The Google Cloud Identity and Access Management (IAM) API has a 'serviceAccounts.get' method, which shows the project associated with a service account, as shown here. You need to have proper permissions on the project for the API to work.
The method google.auth.default returns a tuple (credentials, project_id) if that information is available in the environment.
Also, the client object knows which project it is linked to (either client.project or client.project_id, I'm not sure which one for the Monitoring API).
If you set the service account manually with the GOOGLE_APPLICATION_CREDENTIALS env var, you can open the file and load its json. One of the parameters in a service account key file is the project id.
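A minimal sketch of that approach (every service account key file contains a project_id field):

import json
import os

# Read the project id straight out of the key file pointed to by the env var.
with open(os.environ["GOOGLE_APPLICATION_CREDENTIALS"]) as f:
    key_info = json.load(f)

print(key_info["project_id"])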