I am trying to get all resources and providers from an Azure subscription by using the Python SDK.
Here is what my code does:
1. get all resources by "resource group"
2. extract the id of each resource within the "resource group"
3. call details about a particular resource by its id

The problem is that each call from point 3 requires a correct "API version", which differs from object to object. So obviously my code keeps failing when trying to find some common API version that fits everything.
Is there a way to retrieve a suitable API version per resource in a resource group (similar to retrieving id, name, ...)?
# Import specific methods and models from other libraries
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient

credential = AzureCliCredential()
client = ResourceManagementClient(credential, "<subscription_id>")

rg = list(client.resource_groups.list())

# Retrieve the list of resources per resource group.
# The expand argument includes additional properties in the output.
rg_resources = {}
for group in rg:
    rg_resources[group.name] = client.resources.list_by_resource_group(
        group.name,
        expand="createdTime,changedTime")

data = {}
for name, resources in rg_resources.items():
    details = []
    for resource in resources:
        # 'latest' is not a valid API version; this is the call that fails
        details.append(client.resources.get_by_id(resource.id, 'latest'))
    data[name] = details
print(data)
error:
azure.core.exceptions.HttpResponseError: (NoRegisteredProviderFound) No registered resource provider found for location 'westeurope' and API version 'latest' for type 'workspaces'. The supported api-versions are '2015-03-20, 2015-11-01-preview, 2017-01-01-preview, 2017-03-03-preview, 2017-03-15-preview, 2017-04-26-preview, 2020-03-01-preview, 2020-08-01, 2020-10-01, 2021-06-01, 2021-03-01-privatepreview'. The supported locations are 'eastus, westeurope, southeastasia, australiasoutheast, westcentralus, japaneast, uksouth, centralindia, canadacentral, westus2, australiacentral, australiaeast, francecentral, koreacentral, northeurope, centralus, eastasia, eastus2, southcentralus, northcentralus, westus, ukwest, southafricanorth, brazilsouth, switzerlandnorth, switzerlandwest, germanywestcentral, australiacentral2, uaecentral, uaenorth, japanwest, brazilsoutheast, norwayeast, norwaywest, francesouth, southindia, jioindiawest, canadaeast, westus3
What information exactly do you want to retrieve from the resources?
In most cases, I would recommend using Azure Resource Graph to query over all resources. This is very powerful, as you can query the whole platform using a simple query language - Kusto Query Language (KQL).
You can try the queries directly in the Azure Resource Graph Explorer in the Portal.
A query that summarizes all types of resources would be:
resources
| project resourceGroup, type
| summarize count() by type, resourceGroup
| order by count_
A simple Python code block can be seen in the linked documentation.
The sample below uses DefaultAzureCredential for authentication and lists, in detail, the first resource that sits in a resource group whose name starts with "rg".
# Import Azure Resource Graph library
import azure.mgmt.resourcegraph as arg

# Import specific methods and models from other libraries
from azure.mgmt.resource import SubscriptionClient
from azure.identity import DefaultAzureCredential

# Wrap all the work in a function
def getresources(strQuery):
    # Get your credentials from the environment (CLI, MSI, ...)
    credential = DefaultAzureCredential()
    subsClient = SubscriptionClient(credential)
    subsRaw = []
    for sub in subsClient.subscriptions.list():
        subsRaw.append(sub.as_dict())
    subsList = []
    for sub in subsRaw:
        subsList.append(sub.get('subscription_id'))

    # Create Azure Resource Graph client and set options
    argClient = arg.ResourceGraphClient(credential)
    argQueryOptions = arg.models.QueryRequestOptions(result_format="objectArray")

    # Create query
    argQuery = arg.models.QueryRequest(subscriptions=subsList, query=strQuery, options=argQueryOptions)

    # Run query
    argResults = argClient.resources(argQuery)

    # Show Python object
    print(argResults)

getresources("Resources | where resourceGroup startswith 'rg' | limit 1")
How do I assign a compute instance to a user using the Python SDK?
Right now I'm connecting to my workspace via service principal authentication, using the following code snippet with the Python SDK:
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication

svc_pr = ServicePrincipalAuthentication(
    tenant_id="tenant_id_from_my_account",
    service_principal_id="service_principal_id_of_my_app",
    service_principal_password='password_of_my_service_principal')

ws = Workspace(
    subscription_id="my_subscription_id",
    resource_group="my_resource_group",
    workspace_name="my_workspacename",
    auth=svc_pr)
To create a compute instance I'm using this code snippet:
from azureml.core.compute import ComputeTarget, ComputeInstance
from azureml.core.compute_target import ComputeTargetException

compute_name = "light-medium-gabriel-2nd"

# Verify that the instance does not already exist
try:
    instance = ComputeInstance(workspace=ws, name=compute_name)
    print('Found existing instance, use it.')
except ComputeTargetException:
    compute_config = ComputeInstance.provisioning_configuration(
        vm_size='STANDARD_E4S_V3',
        ssh_public_access=False,
        tags={'projeto': 'Data Science', 'Ambiente': 'Homologação'},
    )
    instance = ComputeInstance.create(ws, compute_name, compute_config)
    instance.wait_for_completion(show_output=True)
But I can't access the compute instance. Since I'm using service principal authentication, is the compute instance created assigned to the service principal rather than to my user?
I tried this in my environment; the results are below.
You can create a compute instance in the Azure ML workspace assigned to your user by using the DefaultAzureCredential method.
You can follow the Microsoft docs to create a compute instance.
Code:
from azure.identity import DefaultAzureCredential
from azure.ai.ml.entities import ComputeInstance
from azure.ai.ml import MLClient
import datetime

subscription_id = "<sub id>"
resource_group = "<resource_grp>"
workspace_name = "wrkspace_name"

credential = DefaultAzureCredential()
# MLClient takes the credential as its first argument
ml_client = MLClient(credential, subscription_id, resource_group, workspace_name)

ci_basic_name = "v-vsettu1" + datetime.datetime.now().strftime("%Y%m%d%H%M")
ci_basic = ComputeInstance(name=ci_basic_name, size="STANDARD_DS3_v2")
ml_client.begin_create_or_update(ci_basic).result()
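If you have to keep authenticating with the service principal but want the instance assigned to a specific user, the v2 SDK also exposes a create-on-behalf-of option. A hedged sketch, assuming the ml_client from above; the tenant and object ids are placeholders for the target user's Azure AD values:

from azure.ai.ml.entities import AssignedUserConfiguration, ComputeInstance

# Assign the instance to a specific user rather than to the creating identity
on_behalf_of = AssignedUserConfiguration(
    user_tenant_id="<user_tenant_id>",
    user_object_id="<user_object_id>")

ci = ComputeInstance(
    name="ci-assigned-to-user",
    size="STANDARD_DS3_v2",
    create_on_behalf_of=on_behalf_of)
ml_client.begin_create_or_update(ci).result()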
Also, you can use CLI (v2) commands to create a compute instance.
Command:
az ml compute create -f instance1.yml
instance1.yml:
$schema: https://azuremlschemas.azureedge.net/latest/computeInstance.schema.json
name: v-vsettu1
type: computeinstance
size: STANDARD_DS3_v2
I'm using Firebase Firestore in my Python project (with their official Python SDK) and having trouble performing a count() aggregation. This function is supported according to their docs. However, they do not provide a Python example (they do in other parts of the documentation). I tried to play with it in a Python console, with something like this:
query = db.collection('videos').where('status', '==', 'pending')
query.count()
without any luck. So I'm wondering how this can be implemented. Does the Python SDK support this functionality?
The Firebase Admin Python SDK doesn't support that query yet. In the meantime, you can still use the runAggregationQuery REST API. The Google Cloud Firestore Python SDK has aggregation result types available from v2.7.0+, so it should land in the Admin SDK soon.
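A hedged sketch of calling that REST endpoint with the requests library, using Application Default Credentials for the bearer token; the request and response shapes follow the public runAggregationQuery reference, and project_id is resolved from the environment:

import google.auth
import google.auth.transport.requests
import requests

# Obtain an access token via Application Default Credentials
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/datastore"])
credentials.refresh(google.auth.transport.requests.Request())

url = (f"https://firestore.googleapis.com/v1/projects/{project_id}"
       "/databases/(default)/documents:runAggregationQuery")
body = {
    "structuredAggregationQuery": {
        "structuredQuery": {"from": [{"collectionId": "videos"}]},
        "aggregations": [{"alias": "total", "count": {}}],
    }
}
resp = requests.post(
    url, json=body,
    headers={"Authorization": f"Bearer {credentials.token}"})

# The endpoint streams back a JSON array of result objects
print(resp.json()[0]["result"]["aggregateFields"]["total"]["integerValue"])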
The API for this purpose is not fully ready yet, but it's coming along. If you don't want to use the REST API as suggested by @Dharmaraj, you can do something like this for now:
from google.cloud.firestore_v1.services.firestore import FirestoreClient
from google.cloud.firestore_v1.types.document import Value
from google.cloud.firestore_v1.types.firestore import RunAggregationQueryRequest
from google.cloud.firestore_v1.types.query import (
    StructuredAggregationQuery,
    StructuredQuery,
)

Aggregation = StructuredAggregationQuery.Aggregation
CollectionSelector = StructuredQuery.CollectionSelector
Count = Aggregation.Count
FieldFilter = StructuredQuery.FieldFilter
FieldReference = StructuredQuery.FieldReference
Filter = StructuredQuery.Filter
Operator = StructuredQuery.FieldFilter.Operator

client = FirestoreClient()
project_id = ""

request = RunAggregationQueryRequest(
    parent=f"projects/{project_id}/databases/(default)/documents",
    structured_aggregation_query=StructuredAggregationQuery(
        structured_query=StructuredQuery(
            from_=[CollectionSelector(collection_id="videos")],
            where=Filter(
                field_filter=FieldFilter(
                    field=FieldReference(
                        field_path="status",
                    ),
                    op=Operator.EQUAL,
                    value=Value(string_value="pending"),
                )
            ),
        ),
        aggregations=[Aggregation(count=Count())],
    ),
)

stream = client.run_aggregation_query(request=request)
print(next(stream).result.aggregate_fields["field_1"].integer_value)
Output:
1
Generally the following would work to count the total number of documents in a collection:
def count_documents(collection_id: str) -> int:
    client = FirestoreClient()
    project_id = ""
    request = RunAggregationQueryRequest(
        parent=f"projects/{project_id}/databases/(default)/documents",
        structured_aggregation_query=StructuredAggregationQuery(
            structured_query=StructuredQuery(
                from_=[CollectionSelector(collection_id=collection_id)]
            ),
            aggregations=[Aggregation(count=Count())],
        ),
    )
    stream = client.run_aggregation_query(request=request)
    return next(stream).result.aggregate_fields["field_1"].integer_value

print(count_documents(collection_id="videos"))
Output:
10
Make sure that you have google-cloud-firestore>=2.7.3, and remember to set the value of the project_id variable in the count_documents function accordingly.
I'm using the Python interface to access Scopus resources (link), using the following code:
from elsapy.elsclient import ElsClient
from elsapy.elssearch import ElsSearch
import json
## Load configuration
con_file = open("config.json")
config = json.load(con_file)
con_file.close()
## Initialize client
client = ElsClient(api_key = config['apikey'], inst_token = config['Insttoken'])
## Initialize doc search object using Scopus and execute search,
## retrieving all results
doc_srch = ElsSearch("KEY(urban AND clustering) AND PUBYEAR > 2021",'scopus')
doc_srch.execute(client, get_all = True)
I understand that ElsSearch searches for keywords in Scopus abstracts, not in full documents. However, I'm getting results that don't contain my keywords in the abstract.
How do I limit ElsSearch to look for keywords only in abstracts?
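One thing worth checking: in Scopus query syntax, KEY() matches the keyword fields rather than the abstract, while the ABS() field code restricts matching to the abstract. An untested tweak of the query above:

## Search in abstracts only instead of the keyword fields
doc_srch = ElsSearch("ABS(urban AND clustering) AND PUBYEAR > 2021", 'scopus')
doc_srch.execute(client, get_all=True)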
I'm trying to get metrics from Azure Storage, like transaction count, ingress, egress, server success latency, etc.
I am trying with the following code:
from azure.storage.blob import BlobServiceClient, BlobAnalyticsLogging, Metrics, CorsRule, RetentionPolicy

# blob_service_client is assumed to be an authenticated client, e.g.:
# blob_service_client = BlobServiceClient.from_connection_string("<connection_string>")
# Create logging settings
logging = BlobAnalyticsLogging(read=True, write=True, delete=True, retention_policy=RetentionPolicy(enabled=True, days=5))
# Create metrics for requests statistics
hour_metrics = Metrics(enabled=True, include_apis=True, retention_policy=RetentionPolicy(enabled=True, days=5))
minute_metrics = Metrics(enabled=True, include_apis=True,retention_policy=RetentionPolicy(enabled=True, days=5))
# Create CORS rules
cors_rule = CorsRule(['www.xyz.com'], ['GET'])
cors = [cors_rule]
# Set the service properties
blob_service_client.set_service_properties(logging, hour_metrics, minute_metrics, cors)
# [END set_blob_service_properties]
# [START get_blob_service_properties]
properties = blob_service_client.get_service_properties()
# [END get_blob_service_properties]
print (properties)
This one does not give an error, but returns the following output:
{'analytics_logging': <azure.storage.blob._models.BlobAnalyticsLogging object at 0x7ffa629d8880>, 'hour_metrics': <azure.storage.blob._models.Metrics object at 0x7ffa629d84c0>, 'minute_metrics': <azure.storage.blob._models.Metrics object at 0x7ffa629d8940>, 'cors': [<azure.storage.blob._models.CorsRule object at 0x7ffa629d8a60>], 'target_version': None, 'delete_retention_policy': <azure.storage.blob._models.RetentionPolicy object at 0x7ffa629d85e0>, 'static_website': <azure.storage.blob._models.StaticWebsite object at 0x7ffa629d8a00>}
I understand that I am probably missing something; the documentation is quite dense and I don't understand it very well.
Thanks in advance for any answers.
You would probably want to read the service properties individually, e.g.:
analytics_logging = blob_service_client.get_service_properties().get('analytics_logging')
This way you should be able to process (or read) the output further in your code.
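Note that the numbers the question asks for (transactions, ingress, egress, server success latency) are not in the blob service properties at all; they are Azure Monitor metrics on the storage account. A sketch using the separate azure-monitor-query package, assuming DefaultAzureCredential works in your environment and with the subscription, resource group, and account placeholders filled in:

from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

credential = DefaultAzureCredential()
metrics_client = MetricsQueryClient(credential)

# Full ARM resource ID of the storage account
resource_uri = (
    "/subscriptions/<subscription>/resourceGroups/<rg>"
    "/providers/Microsoft.Storage/storageAccounts/<account>")

response = metrics_client.query_resource(
    resource_uri,
    metric_names=["Transactions", "Ingress", "Egress", "SuccessServerLatency"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.TOTAL, MetricAggregationType.AVERAGE])

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.total, point.average)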
My Bash script using kubectl create/apply -f ... to deploy lots of Kubernetes resources has grown too large for Bash. I'm converting it to Python using the PyPI kubernetes package.
Is there a generic way to create resources given the YAML manifest? Otherwise, the only way I can see to do it would be to create and maintain a mapping from Kind to API method create_namespaced_<kind>. That seems tedious and error prone to me.
Update: I'm deploying many (10-20) resources to many (10+) GKE clusters.
Update in the year 2020, for anyone still interested in this (since the docs for the Python library are mostly empty): at the end of 2018 this pull request was merged, so it's now possible to do:
from kubernetes import client, config
from kubernetes import utils
config.load_kube_config()
api = client.ApiClient()
file_path = ... # A path to a deployment file
namespace = 'default'
utils.create_from_yaml(api, file_path, namespace=namespace)
EDIT: following a request in a comment, here is a snippet that skips the Python error if the deployment already exists:
from kubernetes import client, config
from kubernetes import utils

config.load_kube_config()
api = client.ApiClient()

def skip_if_already_exists(e):
    import json
    # found in https://github.com/kubernetes-client/python/blob/master/kubernetes/utils/create_from_yaml.py#L165
    info = json.loads(e.api_exceptions[0].body)
    if info.get('reason').lower() == 'alreadyexists':
        pass
    else:
        raise e

file_path = ...  # A path to a deployment file
namespace = 'default'

try:
    utils.create_from_yaml(api, file_path, namespace=namespace)
except utils.FailToCreateError as e:
    skip_if_already_exists(e)
I have written the following piece of code to achieve the functionality of creating k8s resources from their JSON/YAML files:
import re
import yaml
from kubernetes import client

def create_from_yaml(yaml_file):
    """
    Create a Kubernetes object from its YAML/JSON manifest file.
    :param yaml_file: path to the manifest file
    :return: the API client that was used to create the object
    """
    with open(yaml_file) as f:
        yaml_object = yaml.safe_load(f)
    # Derive the API group and version, e.g. "apps/v1" -> AppsV1Api
    group, _, version = yaml_object["apiVersion"].partition("/")
    if version == "":
        version = group
        group = "core"
    group = "".join(group.rsplit(".k8s.io", 1))
    func_to_call = "{0}{1}Api".format(group.capitalize(), version.capitalize())
    k8s_api = getattr(client, func_to_call)()
    # Convert the kind from CamelCase to snake_case to build the method name
    kind = yaml_object["kind"]
    kind = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', kind)
    kind = re.sub('([a-z0-9])([A-Z])', r'\1_\2', kind).lower()
    if "namespace" in yaml_object["metadata"]:
        namespace = yaml_object["metadata"]["namespace"]
    else:
        namespace = "default"
    if hasattr(k8s_api, "create_namespaced_{0}".format(kind)):
        resp = getattr(k8s_api, "create_namespaced_{0}".format(kind))(
            body=yaml_object, namespace=namespace)
    else:
        resp = getattr(k8s_api, "create_{0}".format(kind))(
            body=yaml_object)
    print("{0} created. status='{1}'".format(kind, str(resp.status)))
    return k8s_api
In the above function, if you provide any object's YAML/JSON file, it will automatically pick up the API type and object type and create the object, like a StatefulSet, Deployment, Service, etc.
PS: The above code doesn't handle multiple Kubernetes resources in one file, so you should have only one object per YAML file.
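A possible extension, untested: split a multi-document manifest with yaml.safe_load_all and push each parsed document through the same logic. The create_object helper below is hypothetical, standing for a variant of the function above that accepts an already-parsed dict instead of a file path:

import yaml

def create_all_from_yaml(yaml_file):
    # Iterate over the documents separated by '---' in one manifest file
    with open(yaml_file) as f:
        for doc in yaml.safe_load_all(f):
            if doc:  # skip empty documents
                create_object(doc)  # hypothetical: the logic above, taking a parsed dict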
I see what you are looking for. This is possible with the k8s clients available in other languages; here is an example in Java. Unfortunately, the Python client library does not support that functionality yet. I opened a new feature request for it, and you can either track it or contribute yourself :). Here is the link for the issue on GitHub.
The other way to do what you are trying to do is to use the Java/Golang client and put your code in a Docker container.