I'm trying to write a Python 3.8 script for Cloud Functions that stops all instances (VMs) regardless of region, instance name, etc. I'd also like to filter by a specific tag. However, I haven't found an answer anywhere; everywhere it says I need to supply the project ID, region, and instance name. Is there any way around that?
Use the aggregatedList() and aggregatedList_next() methods to list all instances in all zones. Use the stop() method to terminate an instance. To understand the data returned by aggregatedList(), study the REST API response body.
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
service = discovery.build('compute', 'v1', credentials=credentials)

# Project ID for this request.
project = "REPLACE_ME"

request = service.instances().aggregatedList(project=project)
while request is not None:
    response = request.execute()
    # Keys are scopes such as "zones/us-central1-a"; the values hold that zone's instances.
    for scope, scoped_list in response.get('items', {}).items():
        zone = scope.rsplit('/', 1)[-1]
        for instance in scoped_list.get('instances', []):
            # Decide here whether this instance should be stopped, e.g. by
            # checking its network tags: instance.get('tags', {}).get('items', []).
            # Keep the stop operation in its own variable so that `response`
            # still holds the list page for aggregatedList_next().
            operation = service.instances().stop(
                project=project, zone=zone, instance=instance['name']).execute()
            # Add code to check the operation, see below.
    request = service.instances().aggregatedList_next(
        previous_request=request, previous_response=response)
Example code to check the status of the operation returned by stop(). You might want to stop all instances first, saving each operation in a list, and then process the list until every instance has stopped; a sketch of that follows below.
import time

while True:
    result = service.zoneOperations().get(
        project=project,
        zone=zone,
        operation=operation['name']).execute()
    print('status:', result['status'])
    if result['status'] == 'DONE':
        print("done.")
        if 'error' in result:
            raise Exception(result['error'])
        break
    time.sleep(1)
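And a minimal sketch of the save-then-poll pattern mentioned above. The operations list is an assumption: you would append each (zone, operation) pair inside the stop loop.

import time

operations = []  # hypothetical: filled with (zone, operation) pairs in the stop loop

pending = list(operations)
while pending:
    still_pending = []
    for zone, op in pending:
        result = service.zoneOperations().get(
            project=project, zone=zone, operation=op['name']).execute()
        if result['status'] != 'DONE':
            still_pending.append((zone, op))
        elif 'error' in result:
            raise Exception(result['error'])
    pending = still_pending
    if pending:
        time.sleep(1)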
I need a bit of help using Google's API mocks. I am new to using mocks and to Google's API.
Here is the API mock.
Here is my code I want to test:
# add_entry_to_calendar.py
# ...
try:
    service = build("calendar", "v3", credentials=delegated_credentials)
    event = service.events().insert(calendarId=calendarID, body=entry).execute()
# handle exceptions (see the except block below)

# test_add_entry_to_calendar.py
@patch("add_entry_to_calendar.build")
def test_add_entry_to_calendar_400(self, mock_build):
    http = HttpMock('tests/config-test.json', {'status': '400'})
    service = mock_build("calendar", "v3", http=http)
    self.assertEqual(add_entry_to_calendar({"A": "B"}), None)
add_entry_to_calendar is receiving the mock object when I run my test.
My question: how do I get add_entry_to_calendar to use the HttpMock object that I created in test_add_entry_to_calendar? I need the service object it builds to call .execute() against the HttpMock I created in the test.
# from googleapiclient.errors import HttpError
except HttpError as google_e:
    response = json.loads(google_e.content)
    response_header_code = response.get("error").get("code")
    response_header_message = response.get("error").get("message")
    response_string = f"{response_header_code} {response_header_message} {response.get('error').get('errors')[0]}"
    if response_header_code == 400:
        logger.warning(f"Failed to add entry to calendar. Missing or invalid field parameter in the request. {response_string}")
Have the mocked call throw an HttpError and then look for the error code in your except block.
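One way to wire that up, as a sketch: instead of HttpMock, have the patched build return a plain Mock whose execute() raises the HttpError your except block handles. The module and function names are taken from your snippets; the error body is a placeholder shaped like a real Calendar 400 response.

# test_add_entry_to_calendar.py (sketch)
import unittest
from unittest.mock import patch

import httplib2
from googleapiclient.errors import HttpError

from add_entry_to_calendar import add_entry_to_calendar


class TestAddEntryToCalendar(unittest.TestCase):

    @patch("add_entry_to_calendar.build")
    def test_add_entry_to_calendar_400(self, mock_build):
        # Make the service that add_entry_to_calendar builds raise a 400
        # HttpError when .execute() is called.
        resp = httplib2.Response({"status": "400"})
        content = b'{"error": {"code": 400, "message": "Bad Request", "errors": [{"reason": "badRequest"}]}}'
        mock_build.return_value.events.return_value.insert.return_value.execute.side_effect = \
            HttpError(resp, content)
        self.assertEqual(add_entry_to_calendar({"A": "B"}), None)

Because build() is patched, the code under test receives mock_build.return_value, and the chained events().insert().execute() hits the side_effect, raising the HttpError your handler parses.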
I'm trying to "wrap" Google Python Client for AI Platform (Unified) into a Cloud Function.
import json

from google.cloud import aiplatform
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value


def infer(request):
    """Responds to any HTTP request.
    Args:
        request (flask.Request): HTTP request object.
    Returns:
        The response text or any set of values that can be turned into a
        Response object using
        `make_response <http://flask.pocoo.org/docs/1.0/api/#flask.Flask.make_response>`.
    """
    request_json = request.get_json()

    project = "simple-1234"
    endpoint_id = "7106293183897665536"
    location = "europe-west4"
    api_endpoint = "europe-west4-aiplatform.googleapis.com"

    # The AI Platform services require regional API endpoints.
    client_options = {"api_endpoint": api_endpoint}
    # Initialize the client that will be used to create and send requests.
    # This client only needs to be created once, and can be reused for multiple requests.
    client = aiplatform.gapic.PredictionServiceClient(client_options=client_options)
    # For more info on the instance schema, please use get_model_sample.py
    # and look at the yaml found in instance_schema_uri.
    endpoint = client.endpoint_path(
        project=project, location=location, endpoint=endpoint_id
    )

    instance = request.json["instances"]
    instances = [instance]
    parameters_dict = {}
    parameters = json_format.ParseDict(parameters_dict, Value())

    try:
        response = client.predict(endpoint=endpoint, instances=instances, parameters=parameters)
        if 'error' in response:
            return (json.dumps({"msg": 'Error during prediction'}), 500)
    except Exception as e:
        print("Exception when calling predict: ", e)
        return (json.dumps({"msg": 'Exception when calling predict'}), 500)

    print(" deployed_model_id:", response.deployed_model_id)
    # See gs://google-cloud-aiplatform/schema/predict/prediction/tables_classification.yaml
    # for the format of the predictions.
    predictions = response.predictions
    for prediction in predictions:
        print(" prediction:", dict(prediction))
    return (json.dumps({"prediction": response['predictions']}), 200)
When calling client.predict() I'm getting a 400 exception:
{"error": "Required property Values is not found"}
What am I doing wrong?
I believe your parameters variable is not correct. In the documentation example, that variable is set like this:
# from google.cloud.aiplatform.gapic.schema import predict
parameters = predict.params.ImageClassificationPredictionParams(
    confidence_threshold=0.5, max_predictions=5,
).to_value()
This is probably why the error says the properties are not found. You will have to set your own parameters and then call the predict method.
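For a model that takes free-form JSON instances, a minimal sketch (reusing the client, endpoint, and request_json from the question; the empty parameters dict is a placeholder, and the typed *PredictionParams class shown above is what you'd use for an image model):

from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value

# Each instance must be a protobuf Value, not a plain dict.
instances = [json_format.ParseDict(inst, Value()) for inst in request_json["instances"]]
# Either an empty Value, or a typed ...PredictionParams(...).to_value().
parameters = json_format.ParseDict({}, Value())
response = client.predict(endpoint=endpoint, instances=instances, parameters=parameters)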
I'm having an issue retrieving an Azure Managed Identity access token from my Function App. The function gets a token, then accesses a MySQL database using that token as the password.
I am getting this response from the function:
9103 (HY000): An error occurred while validating the access token. Please acquire a new token and retry.
Code:
import logging

import mysql.connector
import requests

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:

    def get_access_token():
        URL = "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=<client_id>"
        headers = {"Metadata": "true"}
        try:
            req = requests.get(URL, headers=headers)
        except Exception as e:
            print(str(e))
            return str(e)
        else:
            password = req.json()["access_token"]
            return password

    def get_mysql_connection(password):
        """
        Get a Mysql Connection.
        """
        try:
            con = mysql.connector.connect(
                host='<host>.mysql.database.azure.com',
                user='<user>#<db>',
                password=password,
                database='materials_db',
                auth_plugin='mysql_clear_password'
            )
        except Exception as e:
            print(str(e))
            return str(e)
        else:
            return "Connected to DB!"

    password = get_access_token()
    return func.HttpResponse(get_mysql_connection(password))
Running a modified version of this code on a VM with my managed identity works. It seems that the Function App is not allowed to get an access token. Any help would be appreciated.
Note: I have previously logged in as AzureAD Manager to the DB and created this user with all privileges to this DB.
Edit: I'm no longer calling the VM-only endpoint.
import os
import requests


def get_access_token():
    identity_endpoint = os.environ["IDENTITY_ENDPOINT"]  # Env var provided by Azure, local to the requesting service.
    identity_header = os.environ["IDENTITY_HEADER"]      # Env var provided by Azure, local to the requesting service.
    api_version = "2019-08-01"  # also tried "2018-02-01" and "2019-03-01"
    CLIENT_ID = "<client_id>"
    resource_requested = "https%3A%2F%2Fossrdbms-aad.database.windows.net"
    # resource_requested = "https://ossrdbms-aad.database.windows.net"
    URL = f"{identity_endpoint}?api-version={api_version}&resource={resource_requested}&client_id={CLIENT_ID}"
    headers = {"X-IDENTITY-HEADER": identity_header}
    try:
        req = requests.get(URL, headers=headers)
    except Exception as e:
        print(str(e))
        return str(e)
    else:
        try:
            password = req.json()["access_token"]
        except (ValueError, KeyError):
            password = str(req.text)
        return password
But now I am getting this error:
{"error":{"code":"UnsupportedApiVersion","message":"The HTTP resource that matches the request URI 'http://localhost:8081/msi/token?api-version=2019-08-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=<client_idxxxxx>' does not support the API version '2019-08-01'.","innerError":null}}
Upon inspection, this seems to be a generic error: the message is propagated even when it's not the underlying issue, as noted several times on GitHub.
Is my endpoint correct now?
This problem is caused by requesting the access token from the wrong endpoint. The http://169.254.169.254/metadata/identity/... endpoint can only be used inside an Azure VM; in an Azure Function we cannot use it.
In an Azure Function, we need to get the IDENTITY_ENDPOINT from the environment:
identity_endpoint = os.environ["IDENTITY_ENDPOINT"]
The endpoint is like:
http://127.0.0.1:xxxxx/MSI/token/
You can refer to this tutorial about it; it also includes a Python code sample.
In my function code I also add the client id of the managed identity I created to the token_auth_uri, but I'm not sure the client_id is necessary here (in my case I use a user-assigned identity, not a system-assigned identity).
token_auth_uri = f"{identity_endpoint}?resource={resource_uri}&api-version=2019-08-01&client_id={client_id}"
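Put together, a minimal Python sketch of that request (the resource URI is the Azure Database for MySQL one from the question; the client_id placeholder only matters for a user-assigned identity):

import os
import requests

identity_endpoint = os.environ["IDENTITY_ENDPOINT"]
identity_header = os.environ["IDENTITY_HEADER"]
resource_uri = "https://ossrdbms-aad.database.windows.net"
client_id = "<client_id>"

token_auth_uri = f"{identity_endpoint}?resource={resource_uri}&api-version=2019-08-01&client_id={client_id}"
resp = requests.get(token_auth_uri, headers={"X-IDENTITY-HEADER": identity_header})
access_token = resp.json()["access_token"]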
Update:
#r "Newtonsoft.Json"
using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;
public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
string resource="https://ossrdbms-aad.database.windows.net";
string clientId="xxxxxxxx";
log.LogInformation("C# HTTP trigger function processed a request.");
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(String.Format("{0}/?resource={1}&api-version=2019-08-01&client_id={2}", Environment.GetEnvironmentVariable("IDENTITY_ENDPOINT"), resource,clientId));
request.Headers["X-IDENTITY-HEADER"] = Environment.GetEnvironmentVariable("IDENTITY_HEADER");
request.Method = "GET";
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
StreamReader streamResponse = new StreamReader(response.GetResponseStream());
string stringResponse = streamResponse.ReadToEnd();
log.LogInformation("test:"+stringResponse);
string name = req.Query["name"];
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
dynamic data = JsonConvert.DeserializeObject(requestBody);
name = name ?? data?.name;
return name != null
? (ActionResult)new OkObjectResult($"Hello, {name}")
: new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}
For your latest issue, where you are seeing UnsupportedApiVersion, it is probably this issue: https://github.com/MicrosoftDocs/azure-docs/issues/53726
Here are a couple of options that worked for me:
I am assuming you are hosting the Function App on Linux. I noticed that API version 2017-09-01 works, but you need to make additional changes: use a "secret" header instead of "X-IDENTITY-HEADER", and use a system-assigned managed identity for your Function App, not a user-assigned identity. See the sketch after this answer.
When I hosted the Function App on Windows, I didn't have the same issues. So if you want to use a user-assigned managed identity, you can try this option instead (with api-version=2019-08-01 and X-IDENTITY-HEADER).
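A minimal sketch of the 2017-09-01 variant from option 1, assuming the same IDENTITY_ENDPOINT/IDENTITY_HEADER environment variables as above; no client_id, since this relies on the system-assigned identity:

import os
import requests

def get_access_token():
    endpoint = os.environ["IDENTITY_ENDPOINT"]
    secret = os.environ["IDENTITY_HEADER"]
    # Older API version; note the "secret" header instead of "X-IDENTITY-HEADER".
    url = f"{endpoint}?api-version=2017-09-01&resource=https://ossrdbms-aad.database.windows.net"
    resp = requests.get(url, headers={"secret": secret})
    resp.raise_for_status()
    return resp.json()["access_token"]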
I am messing with the example code Google provides, and I am looking to determine when an instance is ready to do work. They have an operation 'DONE' status and an instance 'RUNNING' status, but there is still a delay until I can actually use the instance. What is the best way to wait for this without waiting for a set time period (which wastes time if the instance is ready sooner)?
I modified their wait_for_operation function so it uses isUp:
import time

# [START wait_for_operation]
def wait_for_operation(compute, project, zone, operation):
    print('Waiting for operation to finish...')
    while True:
        result = compute.zoneOperations().get(
            project=project,
            zone=zone,
            operation=operation).execute()
        if result['status'] == 'DONE':
            print("done.")
            print("result:")
            print(result)
            if 'error' in result:
                raise Exception(result['error'])
            print("before ex")
            instStatus = compute.instances().get(
                project=project,
                zone=zone,
                instance='inst-test1').execute()
            print("after ex")
            if instStatus['status'] == 'RUNNING':
                if isUp("10.xxx.xx.xx"):
                    print("instStatus = ")
                    print(instStatus)
                    return result
                else:
                    print("wasn't replying to ping")
        time.sleep(1)
# [END wait_for_operation]
import os
import platform


def isUp(hostname):
    giveFeedback = False
    if platform.system() == "Windows":
        response = os.system("ping " + hostname + " -n 1")
    else:
        response = os.system("ping -c 1 " + hostname)
    isUpBool = False
    if response == 0:
        if giveFeedback:
            print(hostname + ' is up!')
        isUpBool = True
    else:
        if giveFeedback:
            print(hostname + ' is down!')
    return isUpBool
See Matthew's answer for original isUp code: Pinging servers in Python
Most of the other code originated here:
https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/compute/api/create_instance.py
GCP status link:
https://cloud.google.com/compute/docs/instances/checking-instance-status
My code works, but is there a better way using instance status or something and avoiding the entire isUp/ping stuff? Seems like my method is a needless workaround.
Obviously I am using Python and this is just messing around code with needless prints etc.
I have a Windows 7 workstation and I don't want to have to require admin rights and a Linux instance.
Edit 1: "by ready to do work", I mean I can send commands to it and it will respond.
Hi, I would suggest using global operations:
https://cloud.google.com/compute/docs/reference/rest/beta/globalOperations/get
from pprint import pprint

from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
service = discovery.build('compute', 'beta', credentials=credentials)

# Project ID for this request.
project = 'my-project'  # TODO: Update placeholder value.

# Name of the Operations resource to return.
operation = 'my-operation'  # TODO: Update placeholder value.

request = service.globalOperations().get(project=project, operation=operation)
response = request.execute()

# TODO: Change code below to process the `response` dict:
pprint(response)
One approach I have used is having a startup script add a field to the instance metadata. You can then check the instance status using your code above and see whether the new metadata has been added, which avoids pinging the server entirely (a sketch follows after the snippet below). An added benefit is that this method still works when the instance has no external IP.
instStatus = compute.instances().get(
    project=project,
    zone=zone,
    instance='inst-test1').execute()
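A minimal sketch of that check, using a hypothetical startup-done key that your startup script would set, e.g. with
gcloud compute instances add-metadata inst-test1 --metadata startup-done=true:

inst = compute.instances().get(
    project=project, zone=zone, instance='inst-test1').execute()
# Instance metadata lives under metadata.items as a list of key/value dicts.
items = inst.get('metadata', {}).get('items', [])
ready = any(item['key'] == 'startup-done' and item['value'] == 'true'
            for item in items)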
I'm sending Apple push notifications via AWS SNS from Lambda, with Boto3 and Python.
from __future__ import print_function

import boto3


def lambda_handler(event, context):
    client = boto3.client('sns')
    for record in event['Records']:
        if record['eventName'] == 'INSERT':
            rec = record['dynamodb']['NewImage']
            competitors = rec['competitors']['L']
            for competitor in competitors:
                if competitor['M']['confirmed']['BOOL'] == False:
                    endpoints = competitor['M']['endpoints']['L']
                    for endpoint in endpoints:
                        print(endpoint['S'])
                        response = client.publish(
                            # TopicArn='string',
                            TargetArn=endpoint['S'],
                            Message='test message'
                            # Subject='string',
                            # MessageStructure='string',
                        )
Everything works fine! But when an endpoint is invalid for some reason (at the moment this happens every time I run a development build on my device, since I then get a different endpoint; the old one is either not found or deactivated), the Lambda function fails and gets called all over again. In this particular case, if for example the second endpoint fails, it sends the push over and over again to endpoint 1, to infinity.
Is it possible to ignore invalid endpoints and just keep going with the function?
Thank you
Edit:
Thanks to your help I was able to solve it with:
try:
    response = client.publish(
        # TopicArn='string',
        TargetArn=endpoint['S'],
        Message='test message'
        # Subject='string',
        # MessageStructure='string',
    )
except Exception as e:
    print(e)
    continue
On failure, AWS Lambda retries the function until the event expires from the stream.
In your case, since the exception on the second endpoint is not handled, the retry mechanism causes the post to the first endpoint to be re-executed.
If you handle the exception and ensure the function ends successfully even when there is a failure, the retries will not happen.
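If you want to be stricter than catching every Exception, a sketch of a more targeted variant for the inner endpoint loop (the error codes are the ones SNS typically returns for deactivated or malformed endpoints; verify against your own logs):

from botocore.exceptions import ClientError

for endpoint in endpoints:
    try:
        client.publish(TargetArn=endpoint['S'], Message='test message')
    except ClientError as e:
        code = e.response['Error']['Code']
        if code in ('EndpointDisabled', 'InvalidParameter'):
            print('Skipping bad endpoint {}: {}'.format(endpoint['S'], code))
            continue  # ignore the bad endpoint and keep going
        raise  # anything unexpected should still fail the invocation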