Get Azure VM hardware profile via Azure Python SDK

I'm trying to develop a system-parameter optimization algorithm on Azure, but I'm stuck on an API question.
I can use an Azure CLI command to get a VM's hardware profile, but I can't figure out which Azure SDK API returns the equivalent result.
The Azure CLI command and partial output are:
az vm get-instance-view -g GROUP_NAME -n VM_NAME
The output will include:
"hardwareProfile": {
"vmSize": "Standard_D4s_v3"
},
The value of vmSize is what I need. I tried:
compute_client.virtual_machines.get(GROUP_NAME, VM_NAME, expand='instanceView').instance_view
but I couldn't get the expected result from that API. I have searched the Azure docs but failed to find an answer there either.

To make it clear: this is not weird.
In your first call you passed expand='instanceView', which returns only the instance view of a virtual machine. The instance view describes the run-time state of a virtual machine; it does not contain the VM hardware profile.
You can call the instance-view REST API (the get() method calls the same API under the hood, as you can see in the source code) to check what the instance view actually returns.
If you do not specify instanceView in the get() method, it returns the model view of the VM, which does contain the hardware profile.
You can also verify this via the model-view REST API for a VM.
Hope it helps.

I found this out right after I posted the question:
# compute_client is an authenticated ComputeManagementClient
virtual_machine = compute_client.virtual_machines.get(
    GROUP_NAME,
    VM_NAME
)
hardware = virtual_machine.hardware_profile
print("\nHardware:", hardware)
The output is:
Hardware: {'additional_properties': {}, 'vm_size': 'Standard_D2s_v3'}

Related

Which API should I use to access the V2beta2PodsMetricStatus model of kubernetes-client in Python?

I am developing an application in Python where I need to consume the current values of a metric for some pods (e.g. CPU, memory). I can get this info through an API (/apis/metrics.k8s.io/v1beta1/pods), but I am trying to use the Kubernetes client so as to access these metrics within my Python app.
I see that the V2beta2PodsMetricStatus model includes the information I need, but I cannot find the API endpoint I should use to reach this model.
Any help or alternative option would be very helpful, since I have been stuck on this issue for several days.
Thank you very much for your time.
I finally got the metrics by executing the relevant kubectl command directly. I do not like this solution, but it works. I hope to be able to use kubernetes-client instead in the near future.
import subprocess
import json

p = subprocess.getoutput('kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods')
ret_metrics = json.loads(p)
items = ret_metrics['items']
for item in items:
    print(item['metadata']['name'])
You could use the call_api method of api_client, as below, to call an API that is not part of the core Kubernetes API:
ret_metrics = api_client.call_api('/apis/metrics.k8s.io/v1beta1/pods', 'GET', auth_settings=['BearerToken'], response_type='json', _preload_content=False)
response = ret_metrics[0].data.decode('utf-8')
There is an open issue to support this via the V2beta2PodsMetricStatus model.
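Once you have the decoded JSON string from call_api, parsing is up to you. Below is a minimal sketch of extracting per-container usage; the sample payload is a hand-trimmed imitation of the PodMetricsList shape returned by metrics.k8s.io, not real cluster output:

```python
import json

def container_usage(metrics_json):
    """Map pod name -> {container name: usage dict} from a PodMetricsList payload."""
    usage = {}
    for item in json.loads(metrics_json)['items']:
        pod = item['metadata']['name']
        usage[pod] = {c['name']: c['usage'] for c in item['containers']}
    return usage

# Hand-trimmed sample in the PodMetricsList shape (not real cluster output)
sample = json.dumps({
    "items": [{
        "metadata": {"name": "web-0"},
        "containers": [{"name": "app", "usage": {"cpu": "12m", "memory": "34Mi"}}],
    }]
})
print(container_usage(sample))
# {'web-0': {'app': {'cpu': '12m', 'memory': '34Mi'}}}
```

You would feed the `response` string from call_api into `container_usage` in place of `sample`.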

Connect to Google Compute Engine instance to run Python script

I'm very new to cloud computing and I don't come from a software engineering background, so excuse me if some things I say are incorrect.
I'm used to working in an IDE like Spyder and I'd like to keep it that way. Lately my organization has been experimenting with Google Cloud, and what I'm trying to do is run a simple script on the cloud instead of on my computer, using Google Cloud's APIs.
Say I want to run this on the cloud through Spyder:
x=3
y=2
print(f'your result is {x+y}')
I'm guessing I could do something like:
from googleapiclient import discovery
compute = discovery.build('compute', 'v1')
request = compute.instances().start(project=project, zone=zone, instance=instance)
request.execute()
#Do something to connect to instance
x=3
y=2
print(f'your result is {x+y}')
Is there any way to do this? Or to tell Python to run script.py? Thanks, and please tell me if I'm not being clear.
You needn't apologize; everyone is new to cloud computing at some point.
I encourage you to read around on cloud computing to get more of a feel for what it is and how it compares with your current experience.
The code you included won't work as-is.
There are 2 modes of interaction with Compute Engine which is one of several compute services in Google Cloud Platform.
Fundamentally, interacting with Compute Engine instances is similar to how you'd interact with your laptop. To run the python program, you'd either start Python's REPL or create a script and then run the script through the python interpreter. This is also how this would work on a Compute Engine instance.
You can do this on Linux in a single line:
python -c "x=2; y=3; print(x+y)"
But first you have to tell Compute Engine to create an instance for you. You may do this using the Google Cloud Console (http://console.cloud.google.com), the Google Cloud SDK (aka "gcloud"), or e.g. Google's Python library for Compute Engine (which is what your code does). Regardless of which of these approaches you use, all of them ultimately make REST calls against Google Cloud to e.g. provision an instance:
from googleapiclient import discovery
compute = discovery.build('compute', 'v1')
request = compute.instances().start(project=PROJECT, zone=ZONE, instance=INSTANCE)
request.execute()
#Do something to connect to instance
Your example ends with "connect to instance", and this marks the transition between provisioning an instance and interacting with it. An alternative to your code above would be to use Google's command-line tool, often called "gcloud", e.g.:
gcloud compute instances create ${INSTANCE} \
    --project=${PROJECT} \
    --zone=${ZONE}
gcloud provides a convenience command that wraps ssh and takes care of authentication for you:
gcloud compute ssh ${INSTANCE} \
    --project=${PROJECT} \
    --zone=${ZONE} \
    --command='python -c "x=2; y=3; print(x+y)"'
NB This command ssh's into the Compute Engine instance and then runs your Python program.
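If you want to drive this from inside a Python script (e.g. launched from Spyder), one hedged option is a thin subprocess wrapper around the same gcloud invocation; the instance, project, and zone names below are placeholders, and gcloud must be installed and authenticated for the commented-out call to work:

```python
import subprocess

def gcloud_ssh_command(instance, project, zone, remote_command):
    """Build the argv list for `gcloud compute ssh ... --command=...`."""
    return ['gcloud', 'compute', 'ssh', instance,
            '--project={}'.format(project),
            '--zone={}'.format(zone),
            '--command={}'.format(remote_command)]

cmd = gcloud_ssh_command('my-instance', 'my-project', 'us-central1-a',
                         'python -c "x=2; y=3; print(x+y)"')
print(cmd[:3])  # ['gcloud', 'compute', 'ssh']
# subprocess.run(cmd, check=True)  # requires an installed, authenticated gcloud
```

Building the argv list separately keeps the command inspectable before anything is executed.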
This is not the best way to achieve this but I hope it shows you one way that you could achieve it.
As you learn about Google Cloud Platform, you'll learn that there are other compute services. These other compute services provide a higher level of abstraction. Instead of provisioning a virtual machine, you can deploy code directly to e.g. a Python runtime. Google App Engine and Google Cloud Functions both provide a way to deploy your program directly to a compute service without provisioning instances. Because these services operate at a higher level, you can write, test and even deploy code from within an IDE too.
Google Cloud Platform provides a myriad of compute services depending on your requirements. These are accompanied by storage services, machine-learning, analytics, internet-of-things, developer tools etc. etc. It can be overwhelming but you should start with the basics (follow some "hello world" tutorials) and take it from there.
HTH!

Discovering peer instances in Azure Virtual Machine Scale Set

Problem: given N instances launched as part of a VMSS, I would like my application code on each Azure instance to discover the IP addresses of the other peer instances. How do I do this?
The overall intent is to cluster the instances so as to provide active-passive HA or to keep their configuration in sync.
It seems there is some support for REST-API-based querying: https://learn.microsoft.com/en-us/rest/api/virtualmachinescalesets/
I would like to know any other way to do it, i.e. either the Python SDK or the instance metadata URL, etc.
The REST API you mentioned has a Python SDK, the azure-mgmt-compute client:
https://learn.microsoft.com/python/api/azure.mgmt.compute.compute.computemanagementclient
One way to do this would be to use instance metadata. Right now instance metadata only shows information about the VM it's running on, e.g.:
curl -H Metadata:true "http://169.254.169.254/metadata/instance/compute?api-version=2017-03-01"
{"compute":
{"location":"westcentralus","name":"imdsvmss_0","offer":"UbuntuServer","osType":"Linux","platformFaultDomain":"0","platformUpdateDomain":"0",
"publisher":"Canonical","sku":"16.04-LTS","version":"16.04.201703300","vmId":"e850e4fa-0fcf-423b-9aed-6095228c0bfc","vmSize":"Standard_D1_V2"},
"network":{"interface":[{"ipv4":{"ipaddress":[{"ipaddress":"10.0.0.4","publicip":"52.161.25.104"}],"subnet":[{"address":"10.0.0.0","dnsservers":[],"prefix":"24"}]},
"ipv6":{"ipaddress":[]},"mac":"000D3AF8BECE"}]}}
You could do something like have each VM send its info to a listener on VM #0 or to an external service, or you could combine this with Azure Files and have each VM write to a common share. There's an Azure template proof of concept here which writes information from each VM to an Azure File share: https://github.com/Azure/azure-quickstart-templates/tree/master/201-vmss-azure-files-linux - every VM has a mountpoint which contains info written by every VM.
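The curl call above translates straightforwardly to Python. The fetch below is a sketch (it only works from inside an Azure VM and assumes the 2017-03-01 metadata API version shown above); the parsing helper is demonstrated against the sample payload from this answer:

```python
import json
import urllib.request

def private_ips(metadata):
    """Extract the private IPv4 addresses from an instance-metadata payload."""
    ips = []
    for nic in metadata['network']['interface']:
        for addr in nic['ipv4']['ipaddress']:
            ips.append(addr['ipaddress'])
    return ips

def fetch_metadata():
    # Only reachable from inside an Azure VM; the Metadata header is mandatory.
    req = urllib.request.Request(
        'http://169.254.169.254/metadata/instance?api-version=2017-03-01',
        headers={'Metadata': 'true'})
    return json.load(urllib.request.urlopen(req))

# Sample payload trimmed from the curl output above (fetch_metadata() would
# return the full document when run on a VM)
sample = {"network": {"interface": [{"ipv4": {"ipaddress": [
    {"ipaddress": "10.0.0.4", "publicip": "52.161.25.104"}]}}]}}
print(private_ips(sample))  # ['10.0.0.4']
```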

Azure Python SDK ComputeManagementClient error

I get an error when trying to deallocate a virtual machine with the Python SDK for Azure.
Basically I try something like:
from pprint import pprint
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient

credentials = ServicePrincipalCredentials(client_id, secret, tenant)
compute_client = ComputeManagementClient(credentials, subscription_id, '2015-05-01-preview')
result = compute_client.virtual_machines.deallocate(resource_group_name, vm_name)
pprint(result.result())
-> exception:
msrestazure.azure_exceptions.CloudError: Azure Error: AuthorizationFailed
Message: The client '<some client UUID>' with object id '<same client UUID>' does not have authorization to perform action 'Microsoft.Compute/virtualMachines/deallocate/action' over scope '/subscriptions/<our subscription UUID>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<our-machine>'.
What I don't understand is that the error message contains an unknown client UUID that I have not used in the credentials.
Python is version 2.7.13 and the SDK version was from yesterday.
What I guess I need is an Application registration, which I did in order to get the information for the credentials. I am not quite sure which exact permission(s) I need to assign to the application in IAM. When adding an access entry I can only pick existing users, not an application.
So is there any programmatic way to find out which permissions are required for an action and which permissions our client application has?
Thanks!
As @GauravMantri & @LaurentMazuel said, the issue was caused by not assigning a role/permission to the service principal. I had answered another SO thread, Cannot list image publishers from Azure java SDK, which is similar to yours.
There are two ways to resolve the issue: using the Azure CLI, or doing these operations in the Azure portal. Please see the details of my answer there for the first; the second, older way is described below.
And if you want to find these permissions programmatically, you can refer to the REST API Role Definitions - List to get all role definitions that are applicable at a scope and above, or refer to Azure Python SDK authorization management to do it in code via authorization_client.role_definitions.list(scope).
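As a concrete sketch of that last call (the AuthorizationManagementClient part is kept commented out since it needs live credentials, and the role attribute layout varies across SDK versions):

```python
# Assumes the azure-mgmt-authorization package; client usage is illustrative.
# from azure.mgmt.authorization import AuthorizationManagementClient

def resource_group_scope(subscription_id, resource_group):
    """Build the ARM scope string at which role definitions are listed."""
    return '/subscriptions/{}/resourceGroups/{}'.format(subscription_id, resource_group)

scope = resource_group_scope('<subscription-uuid>', '<resource-group>')
print(scope)

# authorization_client = AuthorizationManagementClient(credentials, subscription_id)
# for role in authorization_client.role_definitions.list(scope):
#     print(role.role_name)  # attribute layout varies across SDK versions
```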
Hope it helps.
Thank you all for your answers! The best recipe for creating an application and registering it with the right role (Virtual Machine Contributor) is indeed presented at https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal
The main issue I had was a quirk when adding a role within IAM: I use Add and select "Virtual Machine Contributor"; with "Select" I am presented a list of users, but not the application that I created for this purpose. Entering the first few letters of my application's name, however, gives a filtered list that does include my application. Registration then finishes and things can proceed.

How to access the EMR master private IP address using pure Python / boto

I've searched on this site and Google but have not been able to find an answer for this.
I have code running on an EC2 instance which creates and manages EMR clusters using boto.
I can use this framework to get the flow_id (or cluster_id, I'm not sure which is the right name for it); it starts with "j-" and has a fixed number of characters identifying the cluster.
Using the framework I can establish an EMR or EC2 connection, but for the life of me I cannot do the following using boto:
aws emr --list-clusters --cluster-id=j-ASDFGHJKL | json '["instances"].[0].["privateipaddress"]
**The above is a little fudged; I cannot remember the exact json format, command, and arguments, but it is the CLI equivalent nonetheless.
I've pprint.pprint()'ed the connections and inspected them with inspect.getmembers(), getting the connection for the specific cluster_id, but I have yet to see this field/attribute, with or without method calls.
I've been up and down Amazon and boto docs; how do they do it, like here? In the test:
def test_list_instances(self):  # line 317
    ...
    self.assertEqual(response.instances[0].privateipaddress, '10.0.0.60')
    ...
P.S. I've tried this, but Python complains that the "instances" property is not iterable or indexable, and a few other things I tried failed too, including inspecting.
BTW, I can access the publicDNSaddress from here, and many other things, just not the private IP...
Please tell me if I messed up somewhere and where I can find the answer; right now I'm using an ugly fix with subprocess!
If you are asking how to get the master IP of an EMR cluster, the commands below will work:
list_instance_resp = boto3.client('emr', region_name='us-east-1').list_instances(ClusterId='j-XXXXXXX')
print(list_instance_resp['Instances'][-1]['PrivateIpAddress'])
Check your version of boto using:
pip show boto
My guess is you're using version 2.24 or earlier, as that version did not parse instance information; see https://github.com/boto/boto/blob/2.24.0/tests/unit/emr/test_connection.py#L117
compared to
https://github.com/boto/boto/blob/2.25.0/tests/unit/emr/test_connection.py#L313
If you upgrade your version of boto to 2.25 or newer, you'll be able to do the following:
from boto.emr.connection import EmrConnection

conn = EmrConnection(<your aws key>, <your aws secret key>)
jobid = 'j-XXXXXXXXXXXXXX'  # your job id
response = conn.list_instances(jobid)
for instance in response.instances:
    print(instance.privateipaddress)
You just need to query the master instances from the master instance group using the EMR cluster ID. If there is more than one master, you can parse the boto3 output and take the IP of the first listed master.
Your boto3 execution environment needs permission to describe an EMR cluster and list its instances. Here is the relevant snippet (boto_client_emr is a boto3 EMR client, and the return is from inside a function):
emr_list_instance_rep = boto_client_emr.list_instances(
    ClusterId=cluster_id,
    InstanceGroupTypes=[
        'MASTER',
    ],
    InstanceStates=[
        'RUNNING',
    ]
)
return emr_list_instance_rep["Instances"][0]["PrivateIpAddress"]
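Wrapped into a function, and with the field extraction split out so it can be exercised without AWS credentials, the same approach might look like this (the cluster ID and region are placeholders):

```python
def first_private_ip(list_instances_response):
    """Return the private IP of the first instance in a list_instances response."""
    return list_instances_response['Instances'][0]['PrivateIpAddress']

def get_master_private_ip(cluster_id, region='us-east-1'):
    import boto3  # deferred import so the helper above works without boto3 installed
    emr = boto3.client('emr', region_name=region)
    resp = emr.list_instances(
        ClusterId=cluster_id,
        InstanceGroupTypes=['MASTER'],
        InstanceStates=['RUNNING'],
    )
    return first_private_ip(resp)

# The extraction helper against a canned response shape:
canned = {'Instances': [{'PrivateIpAddress': '10.0.0.60'}]}
print(first_private_ip(canned))  # 10.0.0.60
```

Filtering by instance group type and state in the API call keeps the client-side logic to a single dictionary lookup.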
You can find the full boto3 script and its explanation here https://scriptcrunch.com/script-retrieve-aws-emr-master-ip/
