Error calling a Google Vertex AI endpoint from a Python backend - python

I am trying to send an HTTP POST request to my Google Vertex AI endpoint for prediction. Though I do set the bearer token in the request header, the request still fails with the error below:
{
  "error": {
    "code": 401,
    "message": "Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.",
    "status": "UNAUTHENTICATED",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.ErrorInfo",
        "reason": "ACCESS_TOKEN_TYPE_UNSUPPORTED",
        "metadata": {
          "service": "aiplatform.googleapis.com",
          "method": "google.cloud.aiplatform.v1.PredictionService.Predict"
        }
      }
    ]
  }
}
Since I am making this call from a Python backend, I'm not sure whether OAuth 2, as suggested in the message, would be a wise and applicable choice.
The model is already deployed and the endpoint was tested on Vertex AI, where it worked fine. What I am trying to do is send the same prediction request via an HTTP POST using Postman, and this is what fails.
The request url looks like this:
https://[LOCATION]-aiplatform.googleapis.com/v1/projects/[PROJECT ID]/locations/[LOCATION]/endpoints/[ENDPOINT ID]:predict
The bearer token is set in the Postman Authorization tab and the instance is set in the request body.
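For reference, the same request expressed with Python's requests library looks roughly like this (a sketch; the placeholders match the URL above, and ACCESS_TOKEN is assumed to be an OAuth 2 access token such as the output of gcloud auth print-access-token, not an API key):
import requests

url = "https://[LOCATION]-aiplatform.googleapis.com/v1/projects/[PROJECT ID]/locations/[LOCATION]/endpoints/[ENDPOINT ID]:predict"
headers = {
    "Authorization": "Bearer " + ACCESS_TOKEN,  # must be an OAuth 2 access token
    "Content-Type": "application/json",
}
body = {"instances": [{"input_1": "example input"}]}  # instance shape depends on the model signature
response = requests.post(url, headers=headers, json=body)
print(response.json())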

I usually call the Vertex AI endpoint this way:
from google.cloud import aiplatform
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value
import base64
import os

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/user/1a2b3c4d.json"

aip_endpoint_name = (
    "projects/your-project/locations/us-west1/endpoints/1234567"
)
endpoint = aiplatform.Endpoint(aip_endpoint_name)
Here, you encode the input according to your needs.
def encode_64(input):
    message = input
    message_bytes = message.encode('ascii')
    base64_bytes = base64.b64encode(message_bytes)
    base64_message = base64_bytes.decode('ascii')
    return base64_message
Check the model signature for the input type:
saved_model_cli show --dir /home/jupyter/model --all
Then call the endpoint:
instances_list = [{"input_1": {"b64": encode_64("doctor")}}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
results = endpoint.predict(instances=instances)
print(results.predictions)
Depending on the type of the input you are submitting to the Vertex AI endpoint (integer, array, string, image), you may have to change the encoding.
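For instance, a plain numeric input does not need the base64 wrapper at all; a minimal sketch, assuming a model whose signature takes a feature named input_1 (the name is carried over from the example above):
instances_list = [{"input_1": [1.0, 2.0, 3.0]}]  # e.g. a numeric array input
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
results = endpoint.predict(instances=instances)
print(results.predictions)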

If you don't want to use OAuth 2.0 for authentication when making the HTTP POST request, you may want to use Application Default Credentials. In this documentation you can follow the steps from creating a service account key to passing the credentials via an environment variable.
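A minimal sketch of that approach, assuming the google-auth package is installed and GOOGLE_APPLICATION_CREDENTIALS points at a service account key file:
import google.auth
import google.auth.transport.requests

# Application Default Credentials pick up the key file referenced by
# the GOOGLE_APPLICATION_CREDENTIALS environment variable.
credentials, project = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())
headers = {"Authorization": "Bearer " + credentials.token}
# Use these headers on the :predict POST request shown in the question.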

Can you try the following:
import subprocess
import base64
import requests
import json
def get_headers():
    gcloud_access_token = subprocess.check_output("gcloud auth print-access-token".split(' ')).decode().rstrip('\n')
    return {"authorization": "Bearer " + gcloud_access_token}

def send_get_request(uri):
    return requests.get(uri, headers=get_headers(), verify=False)
It should work if you have gcloud configured (you can do this by running gcloud init) on the machine you are sending the request from and have permissions to access Vertex Prediction.
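For the prediction call itself a POST is needed rather than a GET; a small extension of the snippet above (the URL placeholders are the same ones used in the question, and the instance shape depends on your model):
def send_post_request(uri, body):
    return requests.post(uri, headers=get_headers(), json=body)

uri = "https://[LOCATION]-aiplatform.googleapis.com/v1/projects/[PROJECT ID]/locations/[LOCATION]/endpoints/[ENDPOINT ID]:predict"
response = send_post_request(uri, {"instances": [{"input_1": "example input"}]})
print(response.json())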

Related

How to use AWS Cognito with FastAPI authentication

I'm trying to add authentication to a FastAPI application using AWS Cognito. I can get valid JSON responses from Cognito, including AccessToken and RefreshToken. Following the FastAPI OAuth2 examples I've seen, I ended up with code like this:
#router.post("/token")
async def get_token(form_data: OAuth2PasswordRequestForm = Depends()):
# get token from cognito
response = await concurrency.run_in_threadpool(
client.initiate_auth,
ClientId=settings.aws_cognito_app_client_id,
AuthFlow="USER_PASSWORD_AUTH",
AuthParameters={
"USERNAME": form_data.username,
"PASSWORD": form_data.password,
},
)
return response["AuthenticationResult"]["AccessToken"]
#router.post("/things")
async def things(token: str = Depends(oauth2_scheme)):
return {"token": token}
This seems to work, as the "/things" endpoint is only accessible if authorized through the OpenAPI authentication popup. However, two things:
The token value is "undefined" in the things() handler, why is that?
How do I get the RefreshToken to the user?
Any suggestions or ideas are welcome.
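One detail that may be relevant here: FastAPI's interactive docs (and OAuth2PasswordBearer) expect the token endpoint to return a JSON object with access_token and token_type fields. A sketch of the token route returning that shape, everything else unchanged from the code above:
@router.post("/token")
async def get_token(form_data: OAuth2PasswordRequestForm = Depends()):
    response = await concurrency.run_in_threadpool(
        client.initiate_auth,
        ClientId=settings.aws_cognito_app_client_id,
        AuthFlow="USER_PASSWORD_AUTH",
        AuthParameters={
            "USERNAME": form_data.username,
            "PASSWORD": form_data.password,
        },
    )
    # Swagger UI reads the token from the "access_token" field of this JSON body.
    return {
        "access_token": response["AuthenticationResult"]["AccessToken"],
        "token_type": "bearer",
    }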

Creating VMs from Instance Template on Cloud Function via API call

The code I've written seems to be what I need, however it doesn't work and I get a 401 (authentication) error. I've tried everything:
1. Service account permissions
2. Creating a secret ID and key (not sure how to use those to get an access token, though)
3. Basically, everything for the past 2 days.
import requests
from google.oauth2 import service_account

METADATA_URL = 'http://metadata.google.internal/computeMetadata/v1/'
METADATA_HEADERS = {'Metadata-Flavor': 'Google'}
SERVICE_ACCOUNT = [NAME-OF-SERVICE-ACCOUNT-USED-WITH-CLOUD-FUNCTION-WHICH-HAS-COMPUTE-ADMIN-PRIVILEGES]

def get_access_token():
    url = '{}instance/service-accounts/{}/token'.format(
        METADATA_URL, SERVICE_ACCOUNT)
    # Request an access token from the metadata server.
    r = requests.get(url, headers=METADATA_HEADERS)
    r.raise_for_status()
    # Extract the access token from the response.
    access_token = r.json()['access_token']
    return access_token

def start_vms(request):
    request_json = request.get_json(silent=True)
    request_args = request.args
    if request_json and 'number_of_instances_to_create' in request_json:
        number_of_instances_to_create = request_json['number_of_instances_to_create']
    elif request_args and 'number_of_instances_to_create' in request_args:
        number_of_instances_to_create = request_args['number_of_instances_to_create']
    else:
        number_of_instances_to_create = 0
    access_token = get_access_token()
    address = "https://www.googleapis.com/compute/v1/projects/[MY-PROJECT]/zones/europe-west2-b/instances?sourceInstanceTemplate=https://www.googleapis.com/compute/v1/projects/[MY-PROJECT]/global/instanceTemplates/[MY-INSTANCE-TEMPLATE]"
    headers = {'token': '{}'.format(access_token)}
    for i in range(1, number_of_instances_to_create):
        data = {'name': 'my-instance-{}'.format(i)}
        r = requests.post(address, data=data, headers=headers)
        r.raise_for_status()
        print("my-instance-{} created".format(i))
Any advice/guidance? If someone could tell me how to get an access token using a secret ID and key, that would be great. Also, I'm not too sure OAuth 2.0 will work, because I essentially want to turn these machines on, have them do some processing, and then self-destruct, so there is no user involvement to allow access. If OAuth 2.0 is the wrong way to go about it, what else can I use?
I tried using gcloud, but subprocessing gcloud commands isn't recommended.
I did something similar to this, though I used the Node 10 Firebase Functions runtime; it should be very similar nevertheless.
I agree that OAuth is not the correct solution since there is no user involved.
What you need to use is 'Application Default Credentials', which is based on the permissions available to your Cloud Function's default service account, i.e. the one labelled as "App Engine default service account" here:
https://console.cloud.google.com/iam-admin/serviceaccounts?folder=&organizationId=&project=[YOUR_PROJECT_ID]
(For my project that service account already had the permissions necessary for starting and stopping GCE instances, but for other APIs I had to grant it permissions manually.)
ADC is for server-to-server API calls. To use it I called google.auth.getClient (from the Google APIs auth library) with just the scope, i.e. "https://www.googleapis.com/auth/cloud-platform".
This API is very versatile in that it returns whatever credentials you need, so when I am running on cloud functions it returns a 'Compute' object and when I'm running in the emulator it gives me a "UserRefreshClient" object.
I then include that auth object in my call to compute.instances.insert() and compute.instances.stop().
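A rough Python equivalent of that flow, using google.auth and an AuthorizedSession (a sketch; the project, zone, and template names are placeholders, and the google-auth library must be available in the function's runtime):
import google.auth
from google.auth.transport.requests import AuthorizedSession

# Application Default Credentials: on Cloud Functions this resolves to the
# function's default service account; locally, to whatever ADC is configured.
credentials, project = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)  # adds the Authorization header for you

address = ("https://www.googleapis.com/compute/v1/projects/[MY-PROJECT]/zones/europe-west2-b/instances"
           "?sourceInstanceTemplate=https://www.googleapis.com/compute/v1/projects/[MY-PROJECT]"
           "/global/instanceTemplates/[MY-INSTANCE-TEMPLATE]")
r = session.post(address, json={"name": "my-instance-1"})
r.raise_for_status()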
Here's the template I used for testing my code:
{
  name: 'base',
  description: 'Temporary instance used for testing.',
  tags: { items: [ 'test' ] },
  machineType: `zones/${zone}/machineTypes/n1-standard-1`,
  disks: [
    {
      autoDelete: true, // you will want this!
      boot: true,
      type: 'PERSISTENT',
      initializeParams: {
        diskSizeGb: '10',
        sourceImage: "projects/ubuntu-os-cloud/global/images/ubuntu-minimal-1804-bionic-v20190628",
      }
    }
  ],
  networkInterfaces: [
    {
      network: `https://www.googleapis.com/compute/v1/projects/${projectId}/global/networks/default`,
      accessConfigs: [
        {
          name: 'External NAT',
          type: 'ONE_TO_ONE_NAT'
        }
      ]
    }
  ],
}
Hope that helps.
If you're getting a 401 error, it means that the access token you're using is either expired or invalid.
This guide shows how to request OAuth 2.0 access tokens and make API calls using a service account: https://developers.google.com/identity/protocols/OAuth2ServiceAccount
The .json file mentioned is the private key you create in IAM & Admin under your service account.
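A minimal sketch of that flow with the google-auth library (the key file path and scope here are assumptions):
from google.oauth2 import service_account
from google.auth.transport.requests import Request

credentials = service_account.Credentials.from_service_account_file(
    "/path/to/service-account-key.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
credentials.refresh(Request())      # exchanges the key for an OAuth 2.0 access token
access_token = credentials.token    # short-lived; refresh again when it expires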

What is the format for adding a compliance standard to an existing policy with the Prisma Cloud API?

I'm having trouble adding a Compliance Standard to an existing Policy via the Palo Alto Prisma Cloud API.
Every time I send the request, I'm returned a 500 Server Error (and, unfortunately, the API documentation is super unhelpful here). I'm not sure if I'm sending the right information to add a compliance standard, as the API documentation doesn't show what info needs to be sent. If I leave out required fields (name, policyType, and severity), I'm returned a 400 error (bad request, which makes sense). But I can't figure out why I keep getting the 500 Server Error.
In essence, my code looks like:
import requests

url = 'https://api2.redlock.io/policy/{policy_id}'
headers = {'Content-Type': 'application/json', 'x-redlock-auth': 'token'}
payload = {
    'name': 'policy_name',
    'policyType': 'policy_type',
    'severity': 'policy_severity',
    'complianceMetadata': [
        {
            'standardName': 'standard_name',
            'requirementId': 'requirement_ID',
            'sectionId': 'section_id'
        }
    ]
}
response = requests.request('PUT', url, json=payload, headers=headers)
The response should be a 200 with the policy's metadata, including the new compliance standard, returned in JSON format.
For those using the RedLock API, I managed to figure it out.
Though non-descriptive, 500 errors generally mean the JSON being sent to the server is incorrect. In this case, the payload was incorrect.
The correct way to update a policy's compliance standards is:
req_header = {'Content-Type': 'application/json', 'x-redlock-auth': jwt_token}

# This is a small function to get a policy by ID
policy = get_redlock_policy_by_ID(req_header, 'policy_ID')

new_standard = {
    "standardName": "std-name",
    "requirementId": "1.1",
    "sectionId": "1.1.1",
    "customAssigned": True,
    "complianceId": "comp-id",
    "requirementName": "req-name"
}
policy['complianceMetadata'].append(new_standard)

requests.put('{}/policy/{}'.format(REDLOCK_API_URL, policy['policyId']), json=policy, headers=req_header)

Microsoft Graph API REQUESTS with Python: "Insufficient privileges to complete the operation" error when making simple call

Receiving the following error response when making a basic Graph API request using requests in Python:
{
  "error": {
    "code": "Authorization_RequestDenied",
    "message": "Insufficient privileges to complete the operation.",
    "innerError": {
      "request-id": "36c01b2f-5c5c-438a-bd10-b3ebbc1a17c9",
      "date": "2019-04-05T22:39:37"
    }
  }
}
Here is my token request and Graph request using requests in Python:
redirect_uri = "https://smartusys.sharepoint.com"
client_id = 'd259015e-****-4e99-****-aaad67057124'
client_secret = '********'
tennant_id = '15792366-ddf0-****-97cb-****'
scope = 'https://graph.microsoft.com/.default'
####GET A TOKEN
payload = "client_id="+client_id+"&scope="+scope+"&client_secret="+client_secret+"&grant_type=client_credentials"
headers = {'content-type':'application/x-www-form-urlencoded'}
tokenResponse = requests.post('https://login.microsoftonline.com/'+tennant_id+'/oauth2/v2.0/token',headers=headers, data=payload)
json_tokenObject = json.loads(tokenResponse.text)
authToken = json_tokenObject['access_token']
#### Make a call to the graph API
graphResponse = requests.get('https://graph.microsoft.com/v1.0/me/',headers={'Authorization':'Bearer '+authToken})
if tokenResponse.status_code != 200:
print('Error code: ' +graphResponse.status_code)
print(graphResponse.text)
exit()
print('Request successfull: Response: ')
print(graphResponse.text)
print('Press any key to continue...')
x=input()
According to the documentation ( https://learn.microsoft.com/en-us/graph/api/resources/users?view=graph-rest-1.0 ) for this /me call, I need just one of the following permissions:
User.ReadBasic.All
User.Read
User.ReadWrite
User.Read.All
User.ReadWrite.All
Directory.Read.All
Directory.ReadWrite.All
Directory.AccessAsUser.All
and I have all of these on both application and delegated permissions in the Azure application manager.
What am I doing wrong here? I feel like it's something small but I just can't figure this out.
I decoded my token using http://calebb.net/ and I do not see a spot for "aud", "role", or "scope", so maybe that is where I am going wrong?
I looked everywhere and couldn't find a resolution; any help would be very much appreciated.
Thank you.
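For reference, a quick way to inspect the claims inside the access token locally (a sketch; the token is a JWT, so the payload segment is just base64url-encoded JSON):
import base64
import json

def jwt_claims(token):
    payload = token.split('.')[1]
    payload += '=' * (-len(payload) % 4)   # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

claims = jwt_claims(authToken)
print(claims.get('aud'), claims.get('roles'), claims.get('scp'))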
This sounds like you forgot to "Grant Permissions" to your application.
See this answer.
I finally figured this out. It had to do with admin rights that needed to be granted by the admin for our Office 365 tenant.
It was as simple as giving my Office admin the following link and having him approve it:
https://login.microsoftonline.com/{TENNANT ID HERE}/adminconsent?client_id={CLIENT ID HERE}
Instantly worked.

Getting auth token from keystone in horizon

I want to get the auth token from Keystone using Horizon and then pass that auth token to my backend code.
I don't know how to do this; please help me out.
I have read many articles and blogs but I am not able to find the answer. Please just point me in the right direction.
Easiest is to use a REST client to log in and just take the token from the response. I like the Firefox RESTClient add-on, but you can use any client you want.
Post a request to the OpenStack Identity URL:
POST keystone_ip:port/version/tokens
(e.g. 127.0.0.1:5000/v2.0/tokens)
with header:
Content-Type: application/json
and body:
{
  "auth": {
    "tenantName": "enter_your_tenantname",
    "passwordCredentials": {
      "username": "enter_your_username",
      "password": "enter_your_password"
    }
  }
}
Note: If you're not sure what the correct Identity (Keystone) URL is, you can log in to Horizon manually and look for the list of API endpoints.
The response body will include something like this:
{
  "token": {
    "issued_at": "2014-02-25T08:34:56.068363",
    "expires": "2014-02-26T08:34:55Z",
    "id": "529e3a0e1c375j498315c71d08134837"
  }
}
Use the returned token id as a header in new REST calls. For example, to get a list of servers, use the request:
GET compute_endpoint_ip:port/v2/tenant_id/servers
with Headers:
X-Auth-Token: 529e3a0e1c375j498315c71d08134837
Content-Type: application/json
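The same flow scripted with Python's requests library (a sketch; the endpoint addresses and credentials are placeholders, and the v2.0 response nests the token under "access"):
import requests

body = {
    "auth": {
        "tenantName": "enter_your_tenantname",
        "passwordCredentials": {
            "username": "enter_your_username",
            "password": "enter_your_password"
        }
    }
}
resp = requests.post("http://127.0.0.1:5000/v2.0/tokens", json=body)
token_id = resp.json()["access"]["token"]["id"]

# Reuse the token on subsequent calls, e.g. listing servers:
servers = requests.get(
    "http://compute_endpoint_ip:port/v2/tenant_id/servers",
    headers={"X-Auth-Token": token_id, "Content-Type": "application/json"},
)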
As an example of how to get at it:
import keystoneclient.v2_0.client as ksclient

# authenticate with keystone to get a token
keystone = ksclient.Client(auth_url="http://192.168.10.5:35357/v2.0",
                           username="admin",
                           password="admin",
                           tenant_name="admin")
token = keystone.auth_ref['token']['id']
# use this token for whatever other services you are accessing
print(token)
You can use python-keystoneclient. To authenticate the user, use for example:
from keystoneclient.v2_0 import client

username = 'admin'
password = '1234'
tenant_name = 'admin'
auth_url = 'http://127.0.0.1:5000/v2.0'
keystone = client.Client(username=username, password=password, tenant_name=tenant_name, auth_url=auth_url)
Once the user is authenticated, a token is generated. The auth_ref property on the client (the keystone variable in this example) gives you a dictionary-like structure with all the information you need about the token, which lets you re-use the token or, in your case, pass it to the back-end.
token_dict = keystone.auth_ref
Now token_dict is the variable that you can pass to your back-end.
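For example, to forward it on a request (a sketch; for the v2.0 API the auth_ref stores the token id under ['token']['id'], as the earlier snippet shows):
token_id = token_dict['token']['id']
headers = {'X-Auth-Token': token_id, 'Content-Type': 'application/json'}
# attach these headers to whatever request your back-end makes to other OpenStack services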
Go to the node where you have installed the Keystone services and open /etc/keystone/keystone.conf (e.g. with vi).
Look for the line starting with admin_token; it should be a long random string:
admin_token = 05131394ad6b49c56f217
That is your Keystone admin token. Using Python:
>>> import keystoneclient.v2_0.client as ksclient
>>> keystone = ksclient.Client(auth_url="http://service-stack.linxsol.com:35357/v2.0", username="admin", password="123456", tenant_name="admin")
Of course, you will change auth_url, username, password and tenant_name to your own values. Now you can use keystone to execute all the API tasks:
keystone.tenants.list()
keystone.users.list()
keystone.roles.list()
Or use dir(keystone) to list all the available options.
You can reuse the token as follows:
auth_ref = keystone.auth_ref
or
token = ksclient.get_raw_token_from_identity_service(auth_url="http://service-stack.linxsol.com:35357/v2.0", username="admin", password="123456", tenant_name="admin")
But remember that this returns a dictionary (the raw token data), not just a token string like the one shown above.
For further information please check the python-keystoneclient.
I hope that helps.
Use the python-keystoneclient package.
Look into the Client.get_raw_token_from_identity_service method.
First you have to install python-keystoneclient.
To generate the token you can use the following code. Note that you can change the authentication URL to your own server's URL, but the port number stays the same:
from keystoneclient.v2_0 import client
username='admin'
password='1234'
tenant_name='demo'
auth_url='http://10.0.2.15:5000/v2.0' # Or auth_url='http://192.168.206.133:5000/v2.0'
If your username, password, or tenant_name is wrong, you will get keystoneclient.openstack.common.apiclient.exceptions.Unauthorized: Invalid user / password.
keystone = client.Client(username=username, password=password, tenant_name=tenant_name, auth_url=auth_url)
token_dict = keystone.auth_ref
token_dict
