HttpError 503 while creating subnet using GCP Python API

Hello everyone,
I need your thoughts on an issue I am hitting with a Python script that creates a VPC and a subnet.
The script works fine when creating the VPC, but the next step, subnet creation, fails with this error:

```
googleapiclient.errors.HttpError: <HttpError 503 when requesting https://www.googleapis.com/compute/v1/projects/<projectname>/regions/us-east1/subnetworks?alt=json returned "Internal error. Please try again or contact Google Support.
```

I am able to create the subnet from the UI and from the REST API page.
Here is the script code I am using for subnet creation:
```
def create_subnet(compute, project, region, classname):
    subnetname = classname
    networkpath = 'projects/<projectname>/global/networks/%s' % (classname)
    ipCidrRange = "10.0.0.0/16"
    config = {
        "name": subnetname,
        "network": networkpath,
        "ipCidrRange": ipCidrRange
    }
    print('##### Print Config ##### %s' % (config))
    return compute.subnetworks().insert(
        project=project,
        region=region,
        body=config).execute()
```

```
def main(project, classname, zone, region):
    compute = googleapiclient.discovery.build('compute', 'v1')
    print('Creating vpc')
    operation = create_vpc(compute, project, classname)
    print('Creating subnet')
    operation = create_subnet(compute, project, region, classname)
```
Thanks in advance for comments and suggestions.

I found the root cause of this issue: I was making the subnet call without waiting for the VPC create operation to complete.
Creating a new function that waits, and calling it after the VPC creation step, resolved the issue.
```
import time

def wait_for_global_operation(compute, project, operation):
    print('Waiting for operation to finish...')
    while True:
        result = compute.globalOperations().get(
            project=project,
            operation=operation).execute()
        if result['status'] == 'DONE':
            print("done.")
            if 'error' in result:
                raise Exception(result['error'])
            return result
        time.sleep(1)
```
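For completeness, the same polling pattern can be factored into a small generic helper; this is just a sketch (the stubbed `fetch` below stands in for the `globalOperations().get(...).execute()` call, and the helper name is mine):

```python
import time

def wait_for_done(fetch, interval=0.01):
    """Poll fetch() until the returned operation dict reports DONE.

    fetch is a zero-argument callable returning an operation dict with a
    'status' key ('PENDING', 'RUNNING', or 'DONE') and optionally 'error'.
    """
    while True:
        result = fetch()
        if result['status'] == 'DONE':
            if 'error' in result:
                raise Exception(result['error'])
            return result
        time.sleep(interval)

# Stubbed example: the operation finishes on the third poll.
states = iter([{'status': 'PENDING'}, {'status': 'RUNNING'}, {'status': 'DONE'}])
print(wait_for_done(lambda: next(states)))  # {'status': 'DONE'}
```

Factoring the loop out this way makes it reusable for both global and zone operations, since only the fetch callable changes.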
Thanks, Lozano, for your comments and feedback.

This seems to be related to a wrong label syntax. Try the following fully qualified syntax for network and region:

```
"network": "https://www.googleapis.com/compute/v1/projects/XXXXX/global/networks/XXXXX",
"region": "https://www.googleapis.com/compute/v1/projects/XXXXX/regions/XXXXX"
```

The online API Explorer can be pretty helpful.
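A small sketch of building those fully qualified self-links in Python (the helper names are mine, not from the API client; project and resource names are placeholders):

```python
COMPUTE_BASE = 'https://www.googleapis.com/compute/v1'

def network_self_link(project, network):
    """Fully qualified self-link for a VPC network."""
    return '%s/projects/%s/global/networks/%s' % (COMPUTE_BASE, project, network)

def region_self_link(project, region):
    """Fully qualified self-link for a region."""
    return '%s/projects/%s/regions/%s' % (COMPUTE_BASE, project, region)

print(network_self_link('my-project', 'my-vpc'))
# https://www.googleapis.com/compute/v1/projects/my-project/global/networks/my-vpc
```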

Related

How to use the Azure Databricks API to submit a job?

I am a beginner with Azure Databricks and I want to use its APIs to create a cluster and submit a job in Python. I am stuck, as I am unable to do so. Also, if I have an existing cluster, what would the code look like? I got a job id after running this code but am unable to see any output.
```
import requests

DOMAIN = ''
TOKEN = ''

response = requests.post(
    'https://%s/api/2.0/jobs/create' % (DOMAIN),
    headers={'Authorization': 'Bearer %s' % TOKEN},
    json={
        "name": "SparkPi spark-submit job",
        "new_cluster": {
            "spark_version": "7.3.x-scala2.12",
            "node_type_id": "Standard_DS3_v2",
            "num_workers": 2
        },
        "spark_submit_task": {
            "parameters": [
                "--class",
                "org.apache.spark.examples.SparkPi",
                "dbfs:/FileStore/sparkpi_assembly_0_1.jar",
                "10"
            ]
        }
    }
)

if response.status_code == 200:
    print(response.json())
else:
    print("Error launching cluster: %s: %s" % (
        response.json()["error_code"], response.json()["message"]))
```
Jobs on Databricks can be executed two ways (see docs):
- on a new cluster - that's how you are doing it right now
- on an existing cluster - remove the new_cluster block and add the existing_cluster_id field with the ID of the existing cluster. If you don't have a cluster yet, you can create it via the Cluster API
When you create a job, you get back a job ID that can be used to edit or delete the job. You can also launch the job using the Run Now API. But if you just want to execute the job without creating it in the UI, look into the Runs Submit API. Either of these APIs returns the ID of a specific job run, and you can then use the Runs Get API to get the status of the run, or the Runs Get Output API to get the results of execution.
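To make the switch between the two modes concrete, here is a sketch of a payload builder (the helper itself is hypothetical; the field names existing_cluster_id, new_cluster, and spark_submit_task are from the Jobs API 2.0):

```python
def job_payload(name, spark_submit_task, existing_cluster_id=None, new_cluster=None):
    """Build a Jobs API 2.0 payload for either an existing or a new cluster."""
    payload = {'name': name, 'spark_submit_task': spark_submit_task}
    if existing_cluster_id is not None:
        payload['existing_cluster_id'] = existing_cluster_id
    elif new_cluster is not None:
        payload['new_cluster'] = new_cluster
    else:
        raise ValueError('provide existing_cluster_id or new_cluster')
    return payload

task = {'parameters': ['--class', 'org.apache.spark.examples.SparkPi',
                       'dbfs:/FileStore/sparkpi_assembly_0_1.jar', '10']}
# Existing-cluster variant (cluster ID is a placeholder):
print(job_payload('SparkPi', task, existing_cluster_id='1234-567890-abcde123'))
```

The same dict can be posted to jobs/create or runs/submit, since both accept the cluster specification in this shape.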

Error Getting Managed Identity Access Token from Azure Function

I'm having an issue retrieving an Azure Managed Identity access token from my Function App. The function gets a token and then accesses a MySQL database using that token as the password.
I am getting this response from the function:

```
9103 (HY000): An error occurred while validating the access token. Please acquire a new token and retry.
```
Code:

```
import logging
import mysql.connector
import requests
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    def get_access_token():
        URL = "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=<client_id>"
        headers = {"Metadata": "true"}
        try:
            req = requests.get(URL, headers=headers)
        except Exception as e:
            print(str(e))
            return str(e)
        else:
            password = req.json()["access_token"]
            return password

    def get_mysql_connection(password):
        """
        Get a Mysql Connection.
        """
        try:
            con = mysql.connector.connect(
                host='<host>.mysql.database.azure.com',
                user='<user>#<db>',
                password=password,
                database='materials_db',
                auth_plugin='mysql_clear_password'
            )
        except Exception as e:
            print(str(e))
            return str(e)
        else:
            return "Connected to DB!"

    password = get_access_token()
    return func.HttpResponse(get_mysql_connection(password))
```
Running a modified version of this code on a VM with my managed identity works. It seems that the Function App is not allowed to get an access token. Any help would be appreciated.
Note: I have previously logged in as AzureAD Manager to the DB and created this user with all privileges to this DB.
Edit: I am no longer calling the endpoint meant for VMs.
```
import os
import requests

def get_access_token():
    identity_endpoint = os.environ["IDENTITY_ENDPOINT"]  # Env var provided by Azure. Local to the service doing the requesting.
    identity_header = os.environ["IDENTITY_HEADER"]      # Env var provided by Azure. Local to the service doing the requesting.
    api_version = "2019-08-01"  # also tried "2018-02-01", "2019-03-01"
    CLIENT_ID = "<client_id>"
    resource_requested = "https%3A%2F%2Fossrdbms-aad.database.windows.net"
    # resource_requested = "https://ossrdbms-aad.database.windows.net"
    URL = f"{identity_endpoint}?api-version={api_version}&resource={resource_requested}&client_id={CLIENT_ID}"
    headers = {"X-IDENTITY-HEADER": identity_header}
    try:
        req = requests.get(URL, headers=headers)
    except Exception as e:
        print(str(e))
        return str(e)
    else:
        try:
            password = req.json()["access_token"]
        except Exception:
            password = str(req.text)
        return password
```
But now I am getting this error:

```
{"error":{"code":"UnsupportedApiVersion","message":"The HTTP resource that matches the request URI 'http://localhost:8081/msi/token?api-version=2019-08-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=<client_idxxxxx>' does not support the API version '2019-08-01'.","innerError":null}}
```
Upon inspection, this seems to be a general error: the message is propagated even when it's not the underlying issue, as noted several times on GitHub.
Is my endpoint correct now?
This problem was caused by requesting the access token from the wrong endpoint. The endpoint http://169.254.169.254/metadata/identity/... works only inside an Azure VM; we cannot use it in an Azure Function.
In an Azure Function, we need to get the IDENTITY_ENDPOINT from the environment:

```
identity_endpoint = os.environ["IDENTITY_ENDPOINT"]
```

The endpooint looks like:

```
http://127.0.0.1:xxxxx/MSI/token/
```
You can refer to this tutorial about it; you can also find a Python code sample there.
In my function code, I also add the client id of the managed identity I created to the token_auth_uri, but I'm not sure whether client_id is necessary here (in my case I use a user-assigned identity, not a system-assigned identity).

```
token_auth_uri = f"{identity_endpoint}?resource={resource_uri}&api-version=2019-08-01&client_id={client_id}"
```
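One thing worth double-checking is the query-string encoding; rather than hand-encoding the resource URI, urllib.parse.urlencode can assemble the query safely. A sketch, with placeholder values and a helper name of my own:

```python
from urllib.parse import urlencode

def build_token_uri(identity_endpoint, resource, api_version='2019-08-01', client_id=None):
    """Assemble the managed-identity token URL with proper percent-encoding."""
    params = {'resource': resource, 'api-version': api_version}
    if client_id:
        params['client_id'] = client_id
    return '%s?%s' % (identity_endpoint, urlencode(params))

print(build_token_uri('http://127.0.0.1:41234/MSI/token/',
                      'https://ossrdbms-aad.database.windows.net'))
```

This avoids accidentally double-encoding a resource string that was already percent-encoded by hand.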
Update:

```
#r "Newtonsoft.Json"

using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;

public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
    string resource = "https://ossrdbms-aad.database.windows.net";
    string clientId = "xxxxxxxx";
    log.LogInformation("C# HTTP trigger function processed a request.");

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(String.Format(
        "{0}/?resource={1}&api-version=2019-08-01&client_id={2}",
        Environment.GetEnvironmentVariable("IDENTITY_ENDPOINT"), resource, clientId));
    request.Headers["X-IDENTITY-HEADER"] = Environment.GetEnvironmentVariable("IDENTITY_HEADER");
    request.Method = "GET";
    HttpWebResponse response = (HttpWebResponse)request.GetResponse();
    StreamReader streamResponse = new StreamReader(response.GetResponseStream());
    string stringResponse = streamResponse.ReadToEnd();
    log.LogInformation("test:" + stringResponse);

    string name = req.Query["name"];
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    return name != null
        ? (ActionResult)new OkObjectResult($"Hello, {name}")
        : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}
```
For your latest issue, where you are seeing UnsupportedApiVersion, it is probably this issue: https://github.com/MicrosoftDocs/azure-docs/issues/53726
Here are a couple of options that worked for me:
I am assuming you are hosting the Function app on Linux. I noticed that API version 2017-09-01 works, but you need to make additional changes: instead of the "X-IDENTITY-HEADER" header, use the "secret" header. Also use a system-assigned managed identity for your function app, not a user-assigned identity.
When I hosted the function app on Windows, I didn't have the same issues. So if you want to use a user-assigned managed identity, you can try this option instead (with api-version=2019-08-01 and X-IDENTITY-HEADER).
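The version/header pairing can be captured in one place; a sketch (the helper name is mine; the env var names IDENTITY_ENDPOINT/IDENTITY_HEADER, and the older MSI_ENDPOINT/MSI_SECRET, are the ones App Service injects):

```python
import os

def msi_token_request(resource, api_version='2019-08-01'):
    """Build the URL and headers for an App Service managed-identity token call.

    api-version 2017-09-01 expects a 'secret' header; 2019-08-01 expects
    'X-IDENTITY-HEADER'. Both carry the platform-injected header value.
    """
    endpoint = os.environ.get('IDENTITY_ENDPOINT') or os.environ.get('MSI_ENDPOINT')
    secret = os.environ.get('IDENTITY_HEADER') or os.environ.get('MSI_SECRET')
    url = '%s?resource=%s&api-version=%s' % (endpoint, resource, api_version)
    header = 'secret' if api_version == '2017-09-01' else 'X-IDENTITY-HEADER'
    return url, {header: secret}
```

Keeping the header choice next to the version string makes it harder to mix an old version with the new header name, which is what produces the confusing errors above.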

Satellite image analysis using Sentinel in Python

I am trying to download the map of a certain region using Sentinel. I am using "BANDS-S2-L1C" to authenticate. When I try running an algorithm to fetch the data for the region, it throws: Server response: "Layer BANDS-S2-L1C not found".
Can someone tell me how to link my Sentinel Hub instance id with the program, and how to link the layer to BANDS-S2-L1C?
```
layer = 'BANDS-S2-L1C'

input_task = S2L1CWCSInput(layer=layer,
                           resx='20m', resy='20m',
                           maxcc=.3,
                           time_difference=datetime.timedelta(hours=2))
# add_ndvi = S2L1CWCSInput(layer='NDVI')
add_dem = DEMWCSInput(layer='DEM')
add_l2a = S2L2AWCSInput(layer='BANDS-S2-L2A')
add_sen2cor = AddSen2CorClassificationFeature('SCL', layer='BANDS-S2-L2A')
save = SaveToDisk('io_example', overwrite_permission=2, compress_level=1)

workflow = LinearWorkflow(input_task, add_l2a, add_sen2cor, add_dem, save)

result = workflow.execute({input_task: {'bbox': roi_bbox,
                                        'time_interval': time_interval},
                           save: {'eopatch_folder': 'eopatch'}})
```
The error I am getting is:

```
DownloadFailedException: During execution of task S2L1CWCSInput: Failed to download from:
https://services.sentinel-hub.com/ogc/wcs/ecc7a293-1882-4cff-8bf8-918a2e74baff?SERVICE=wcs&MAXCC=30.0&ShowLogo=False&Transparent=True&BBOX=44.97%2C27.67%2C45.26%2C28.03&FORMAT=image%2Ftiff%3Bdepth%3D32f&CRS=EPSG%3A4326&TIME=2019-04-01T07%3A18%3A16%2F2019-04-01T11%3A18%3A16&RESX=20m&RESY=20m&COVERAGE=BANDS-S2-L1C&REQUEST=GetCoverage&VERSION=1.1.2
with HTTPError:
400 Client Error: Bad Request for url: https://services.sentinel-hub.com/ogc/wcs/ecc7a293-1882-4cff-8bf8-918a2e74baff?SERVICE=wcs&MAXCC=30.0&ShowLogo=False&Transparent=True&BBOX=44.97%2C27.67%2C45.26%2C28.03&FORMAT=image%2Ftiff%3Bdepth%3D32f&CRS=EPSG%3A4326&TIME=2019-04-01T07%3A18%3A16%2F2019-04-01T11%3A18%3A16&RESX=20m&RESY=20m&COVERAGE=BANDS-S2-L1C&REQUEST=GetCoverage&VERSION=1.1.2
Server response: "Layer BANDS-S2-L1C not found"
```
You need to first set up and configure your Sentinel Hub account. Follow these instructions.
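In practice the linking step amounts to saving your Sentinel Hub instance ID into the package configuration; a sketch assuming the sentinelhub package (the instance ID comes from your Sentinel Hub dashboard, where a layer named BANDS-S2-L1C must exist in that instance's configuration):

```python
from sentinelhub import SHConfig

config = SHConfig()
config.instance_id = '<your-instance-id>'  # from the Sentinel Hub dashboard
config.save()  # persisted so eo-learn input tasks (e.g. S2L1CWCSInput) pick it up
```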

Azure Create Container

After creating an Azure Storage account using the Python SDK, proceeding to create a storage blob container throws the following error:

```
ResourceNotFound
The specified resource does not exist.
RequestId:508e8df1-301e-004b-224e-c5feb1000000
Time:2018-03-26T22:03:46.0941011Z
```
Code snippet:

```
def create_blob_container(self, storage_account_name, storage_account_key, version):
    try:
        account = BlockBlobService(account_name=storage_account_name,
                                   account_key=storage_account_key.value)
        container_name = "dummy container"
        container = account.create_container(container_name)
        if container:
            print 'Container %s created' % (container_name)
        else:
            print 'Container %s exists' % (container_name)
    except Exception as e:
        print e
```
Does anybody have any idea how to go about this?
P.S.: I am waiting for the provisioning state of the storage account to be Succeeded before proceeding to create a container.
Any help is greatly appreciated.
Firstly, as @Thomas and @trailmax mentioned in the comments, container and blob names need to follow naming rules. The "dummy container" in your sample code certainly breaks those rules, and will throw this exception:

```
azure.common.AzureHttpError: The specifed resource name contains invalid characters. ErrorCode: InvalidResourceName
```
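Those rules (3-63 characters; lowercase letters, digits, and single hyphens; must start and end with a letter or digit) are easy to check up front. A quick sketch, with a regex of my own condensed from the documented rules:

```python
import re

# 3-63 chars, lowercase alphanumerics and hyphens, no leading/trailing
# or consecutive hyphens.
CONTAINER_NAME_RE = re.compile(r'^(?!.*--)[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$')

def is_valid_container_name(name):
    """Rough client-side check against Azure blob container naming rules."""
    return bool(CONTAINER_NAME_RE.match(name))

print(is_valid_container_name('dummy container'))  # False: spaces not allowed
print(is_valid_container_name('dummy-container'))  # True
```

Validating before the create_container call turns a server-side AzureHttpError into an immediate, local error message.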
Then I tested your code and it works well for me:

```
from azure.storage.blob import (
    BlockBlobService,
    ContainerPermissions,
)

accountName = "***"
accountKey = "***"
containerName = "test"

account = BlockBlobService(account_name=accountName, account_key=accountKey)
container = account.create_container(containerName)
if container:
    print 'Container %s created' % (containerName)
else:
    print 'Container %s exists' % (containerName)
```
I suggest you check whether your request client has permission to create resources in the storage account. Without that permission, the exception above will be thrown.
Hope it helps you.
I just spoke with the Microsoft Azure team; apparently they are facing network-related issues with storage accounts. For me, it fails every time I try to create a container, but a colleague of mine is getting mixed results: sometimes it goes through in 15 seconds, sometimes it takes up to 10 minutes, and most of the time it bails out.
That might be what's going on. They'll get back to us tomorrow with an update.

GCP/Py: determine when compute engine instance is actually ready to be used (not "RUNNING")

I am messing with the example code Google provides, and I am looking to determine when an instance is ready to do work. There is an operation 'DONE' status and an instance 'RUNNING' status, but there is still a delay until I can actually use the instance. What is the best way to wait for this without waiting a fixed time period (because that wastes time if the instance is ready sooner)?
I modified their wait_for_operation function so it uses isUp:

```
# [START wait_for_operation]
def wait_for_operation(compute, project, zone, operation):
    print('Waiting for operation to finish...')
    while True:
        result = compute.zoneOperations().get(
            project=project,
            zone=zone,
            operation=operation).execute()
        if result['status'] == 'DONE':
            print("done.")
            print("result:")
            print(result)
            if 'error' in result:
                raise Exception(result['error'])
            print("before ex")
            instStatus = compute.instances().get(
                project=project,
                zone=zone,
                instance='inst-test1').execute()
            print("after ex")
            if instStatus['status'] == 'RUNNING':
                if isUp("10.xxx.xx.xx"):
                    print("instStatus = ")
                    print(instStatus)
                    return result
                else:
                    print("wasn't replying to ping")
        time.sleep(1)
# [END wait_for_operation]

def isUp(hostname):
    giveFeedback = False
    if platform.system() == "Windows":
        response = os.system("ping " + hostname + " -n 1")
    else:
        response = os.system("ping -c 1 " + hostname)
    isUpBool = False
    if response == 0:
        if giveFeedback:
            print(hostname + ' is up!')
        isUpBool = True
    else:
        if giveFeedback:
            print(hostname + ' is down!')
    return isUpBool
```
See Matthew's answer for the original isUp code: Pinging servers in Python.
Most of the other code originated here:
https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/compute/api/create_instance.py
GCP status link:
https://cloud.google.com/compute/docs/instances/checking-instance-status
My code works, but is there a better way, using the instance status or something similar, that avoids the entire isUp/ping step? My method seems like a needless workaround.
Obviously I am using Python, and this is just messing-around code with needless prints etc.
I have a Windows 7 workstation, I don't want to have to require admin rights, and the instance is Linux.
Edit 1: by "ready to do work", I mean I can send commands to it and it will respond.
Hi, I would suggest you use global operations:
https://cloud.google.com/compute/docs/reference/rest/beta/globalOperations/get

```
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
service = discovery.build('compute', 'beta', credentials=credentials)

# Project ID for this request.
project = 'my-project'  # TODO: Update placeholder value.
# Name of the Operations resource to return.
operation = 'my-operation'  # TODO: Update placeholder value.

request = service.globalOperations().get(project=project, operation=operation)
response = request.execute()

# TODO: Change code below to process the `response` dict:
pprint(response)
```
One approach I have used is having a startup script add a field to the instance metadata. You can then check the instance status using your code above and see whether the new metadata has been added, which avoids pinging the server. An added benefit: if the instance has no external IP, this method still works.

```
instStatus = compute.instances().get(
    project=project,
    zone=zone,
    instance='inst-test1').execute()
```
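The check itself can be a small helper over the instance resource returned by instances().get(); the sketch below uses a plain dict standing in for that response, and a hypothetical startup-done key that your startup script would add (e.g. via gcloud compute instances add-metadata):

```python
def has_metadata_flag(instance, key='startup-done'):
    """Return True if the instance resource's metadata contains the given key.

    instance is the dict returned by compute.instances().get(...).execute();
    custom metadata lives under metadata.items as {'key': ..., 'value': ...}.
    """
    items = instance.get('metadata', {}).get('items', [])
    return any(item.get('key') == key for item in items)

# Stub of an instances().get() response after the startup script has run:
inst = {'status': 'RUNNING',
        'metadata': {'items': [{'key': 'startup-done', 'value': '1'}]}}
print(has_metadata_flag(inst))  # True
```

You would poll this alongside the RUNNING status check in place of the isUp/ping call.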
