I am using Dialogflow with the Python API and I can't get detect intent text to work; here is the code.
If I use the second of the marked lines, I get the following error:
google.api_core.exceptions.PermissionDenied: 403 IAM permission 'dialogflow.sessions.detectIntent' on 'projects/newagent/agent' denied.
If I use the first one:
google.api_core.exceptions.InvalidArgument: 400 Resource name 'projects/newagent/agent/environments/draft/users/<User Number>/agent/sessions/5276b6d4-a0b6-4e91-84d3-16512d1f3299' does not match 'projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>'.
I have enabled billing on Google Cloud and the user has Owner privileges. What is going wrong?
def detect_intent_texts(project_id, session_id, texts, language_code):
    session_client = dialogflow_v2.SessionsClient()
    # ---------------- Lines that I talk about in the question ----------------
    # session = session_client.session_path(project_id, session_id)
    session = "projects/newagent/agent/environments/draft/users/<user id>/sessions/6344a857-9de5-406c-ba0f-c71b7b3ffdba"
    # ---------------- Lines that I talk about in the question ----------------
    for text in texts:
        text_input = dialogflow_v2.types.TextInput(text=text, language_code=language_code)
        query_input = dialogflow_v2.types.QueryInput(text=text_input)
        response = session_client.detect_intent(session=session, query_input=query_input)

detect_intent_texts("newagent/agent/environments/draft/users/<User Number>", str(uuid.uuid4()), "Que tal?", "es-ES")
The Session ID should have the format projects/<Project ID>/agent/sessions/<Session ID>, where <Project ID> is the ID of the GCP project where your agent is located and <Session ID> is the ID you use for your ongoing session, as can be seen in this documentation page.
In your code I see that you are calling the detect_intent_texts() function with:
project_id = "newagent/agent/environments/draft/users/<User Number>"
session_id = str(uuid.uuid4())
texts = "Que tal?"
language_code = "es-ES"
I see two main errors here:
The Project ID has the wrong format: it should be the ID of your GCP project, which usually looks like my-first-project or similar. Slashes (/) are not supported, so you are using a wrong Project ID.
The text should be a Python list of strings, like ["hello"] and not just "hello".
Just as an example, the following minimal code provides the result below:
import dialogflow

def detect_intent_texts(project_id, session_id, texts, language_code):
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)
    print('Session path: {}\n'.format(session))
    for text in texts:
        text_input = dialogflow.types.TextInput(text=text, language_code=language_code)
        query_input = dialogflow.types.QueryInput(text=text_input)
        response = session_client.detect_intent(session=session, query_input=query_input)
        print('Fulfillment text: {}\n'.format(response.query_result.fulfillment_text))

detect_intent_texts("my-project", "abcd", ["hello"], "en-US")
Result:
user@my-project:~/dialogflow$ python detect_intent_minimal.py
Session path: projects/my-project/agent/sessions/abcd
Fulfillment text: Hi!
Therefore I suspect that changing the project_id to its correct value and the texts to a list should solve your issues.
EDIT:
I have been able to reproduce the issue that you are seeing with a 403 PermissionDenied message by using a Service Account without the required permissions.
In order to run intents in Dialogflow, you need to use a Service Account with one of the following roles: Dialogflow API Admin or Dialogflow API Client. Both can query intents, and therefore one of those is required in order to make the type of requests you are trying to do with your script.
I see you said that your user has Owner privileges over the project. However, the issue may be that you are using the wrong service account. In order to set up authentication correctly, follow the steps detailed in the docs. In summary, you will have to create a Service Account with the right permissions, download its JSON key, and point the GOOGLE_APPLICATION_CREDENTIALS environment variable at it by running export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/key.json" wherever you run the script.
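If you prefer not to rely on the environment variable, the key can also be loaded explicitly. A minimal sketch, assuming a hypothetical key path and the google-auth library:

import dialogflow
from google.oauth2 import service_account

# Hypothetical path to the JSON key of the service account that holds the Dialogflow role
credentials = service_account.Credentials.from_service_account_file("/path/to/your/key.json")
# Pass the credentials explicitly instead of relying on GOOGLE_APPLICATION_CREDENTIALS
session_client = dialogflow.SessionsClient(credentials=credentials)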
When an identity calls a Google Cloud Platform API, Google Cloud Identity and Access Management (IAM) requires that the identity has the appropriate permissions on the resource. For that you can create a custom role with the Dialogflow permissions, assign it to a service account, and then use that service account to call the Google Cloud Platform API. Here you can search for Dialogflow and see that it is supported in custom roles. That is why you get google.api_core.exceptions.PermissionDenied: 403 IAM permission 'dialogflow.sessions.detectIntent'. Do the following steps:
Go to your project in Google Cloud Platform and open the Roles page under IAM & Admin.
Then click on Create Role, fill in the role name and related fields, then click on Add Permissions and, in the filter, search for 'Service: Dialogflow'. Select the permissions you want and click on Create.
Then open the Service Accounts page. Click on Create Service Account and, in the Select Role option, type and search for the role you created in step 2, then save the account.
Next, go to the Credentials page; a list of service accounts will show. Click on the 'Create credentials' button. Select the service account created in the steps above and choose JSON, then select Create. A JSON key file will be downloaded.
Add that file in your code as: os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'yourfilename.json'
Create an agent in the Dialogflow console with the Google project for which you did all of the above steps. Enjoy :D
I want to be able to get an access token while acting as a user (meaning I only have a username and password).
In all the relevant topics I only see attempts at getting the token as the administrator of the application (for example, in order to know the clientId), but can I do the same while acting as a user of the application?
As suggested by @Thomas, you can make use of the ROPC flow.
In order to get an access token as a user, you still need to know the values of client_id and tenant_id, along with your UPN and password.
Client_Id - Your Application ID
Tenant_Id - Your Directory ID
You can get these values from the person who registered the application as follows:
Go to Azure Portal -> Azure Active Directory -> Your Application -> Overview
After getting those values, make use of Postman to generate the access token.
For that, send an HTTP POST request like the one below, which needs your tenant_id in the URL:
https://login.microsoftonline.com/your_tenant_id/oauth2/v2.0/token
In Postman, go to the Authorization tab and select the type OAuth 2.0.
Visit the Headers tab and include a Content-Type key with the value application/x-www-form-urlencoded.
In the Body tab, include the parameters client_id, grant_type (set to password), username, password and scope.
Make sure to grant admin consent to the required API permissions defined in scope before sending the request.
Now, send the request and you will receive the access token in the response.
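If you want to reproduce the same request outside Postman, here is a sketch in Python, assuming the requests library and placeholder values for the tenant, client and user:

import requests

# Placeholder values; replace with your tenant_id, client_id and user credentials
tenant_id = "your_tenant_id"
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
body = {
    "client_id": "your_client_id",
    "grant_type": "password",  # the ROPC grant
    "username": "user@yourtenant.onmicrosoft.com",
    "password": "your_password",
    "scope": "https://graph.microsoft.com/.default",
}
# requests sets Content-Type: application/x-www-form-urlencoded for dict bodies
resp = requests.post(token_url, data=body)
print(resp.json().get("access_token"))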
To know more in detail, please refer to the links below:
Sign in with resource owner password credentials grant - Microsoft identity platform | Microsoft Docs
Azure registered app error: The user or administrator has not consented to use the application with ID - Stack Overflow
Today I'm trying to read a calendar from Outlook. I created a new app through Microsoft Azure, then set a secret key and added API permissions. When I was trying to authenticate via a simple script, I caught an error:
The request is not valid for the application's 'userAudience' configuration.
In order to use /common/ endpoint%2c the application must not be configured with 'Consumer' as the user audience.
The userAudience should be configured with 'All' to use /common/ endpoint
This is my script:
from O365 import Account, MSGraphProtocol

CLIENT_ID = 'MY CLIENT ID'
SECRET_ID = 'MY SECRET ID'
credentials = (CLIENT_ID, SECRET_ID)

protocol = MSGraphProtocol()
scopes = ['Calendars.Read.Shared']
account = Account(credentials, protocol=protocol)
if account.authenticate(scopes=scopes):
    print('Authenticated!')
Could you tell me the reason for this error and how I should fix it?
It looks like you are trying to use the client_credentials flow, but you have assigned a Delegated permission, which isn't correct. So in your application registration you need to make sure you have assigned the Application permission for Calendars (e.g. Calendars.Read).
To use the client_credentials flow, you first need to find your tenant ID (if you don't already know it). You can do this in Python, e.g. using requests (you need to replace yourdomain.com with the domain you are using):
requests.get('https://login.windows.net/yourdomain.com/v2.0/.well-known/openid-configuration').json()["token_endpoint"]
Then take the GUID part of the response, e.g.
'https://login.windows.net/1c3a18bf-da31-4f6c-xxxx-2c06c9cf5ae4/oauth2/v2.0/token'
Then your code should look like this:
from O365 import Account

credentials = ('my_client_id', 'my_client_secret')

# the default protocol will be Microsoft Graph
account = Account(credentials, auth_flow_type='credentials', tenant_id='1c3a18bf-da31-4f6c-xxxx-2c06c9cf5ae4')
if account.authenticate():
    print('Authenticated!')
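Once authenticated with the credentials flow there is no signed-in user, so reading a calendar needs an explicit resource to act on. A sketch, assuming a hypothetical mailbox address and the O365 library's schedule API:

# Hypothetical mailbox; with application permissions you must name whose calendar to read
schedule = account.schedule(resource='user@yourdomain.com')
calendar = schedule.get_default_calendar()
for event in calendar.get_events(limit=10):
    print(event.subject, event.start)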
I have created and run DAGs on a google-cloud-composer environment (dlkpipelinesv1: composer-1.13.0-airflow-1.10.12). I am able to trigger these DAGs manually and with the scheduler, but I am stuck when it comes to triggering them via cloud functions that detect changes in a google-cloud-storage bucket.
Note that I had another GC-Composer environment (pipelines:composer-1.7.5-airflow-1.10.2) that used those same google cloud functions to trigger the relevant dags, and it was working.
I followed this guide to create the functions that trigger the dags. So I retrieved the following variables:
# Imports required by the functions below
import logging
import requests
from google.auth.transport.requests import Request
from google.oauth2 import id_token

PROJECT_ID = <project_id>
CLIENT_ID = <client_id_retrieved_by_running_the_code_in_the_guide_within_my_gcp_console>
WEBSERVER_ID = <airflow_webserver_id>
DAG_NAME = <dag_to_trigger>
WEBSERVER_URL = f"https://{WEBSERVER_ID}.appspot.com/api/experimental/dags/{DAG_NAME}/dag_runs"

def file_listener(event, context):
    """Entry point of the cloud function: Triggered by a change to a Cloud Storage bucket.
    Args:
        event (dict): Event payload.
        context (google.cloud.functions.Context): Metadata for the event.
    """
    logging.info("Running the file listener process")
    logging.info(f"event : {event}")
    logging.info(f"context : {context}")
    file = event
    if file["size"] == "0" or "DTM_DATALAKE_AUDIT_COMPTAGE" not in file["name"] or ".filepart" in file["name"].lower():
        logging.info("no matching file")
        exit(0)
    logging.info(f"File listener detected the presence of : {file['name']}.")
    # id_token = authorize_iap()
    # make_iap_request({"file_name": file["name"]}, id_token)
    make_iap_request(url=WEBSERVER_URL, client_id=CLIENT_ID, method="POST")

def make_iap_request(url, client_id, method="GET", **kwargs):
    """Makes a request to an application protected by Identity-Aware Proxy.
    Args:
        url: The Identity-Aware Proxy-protected URL to fetch.
        client_id: The client ID used by Identity-Aware Proxy.
        method: The request method to use
            ('GET', 'OPTIONS', 'HEAD', 'POST', 'PUT', 'PATCH', 'DELETE')
        **kwargs: Any of the parameters defined for the request function:
            https://github.com/requests/requests/blob/master/requests/api.py
            If no timeout is provided, it is set to 90 by default.
    Returns:
        The page body, or raises an exception if the page couldn't be retrieved.
    """
    # Set the default timeout, if missing
    if "timeout" not in kwargs:
        kwargs["timeout"] = 90
    # Obtain an OpenID Connect (OIDC) token from metadata server or using service account.
    open_id_connect_token = id_token.fetch_id_token(Request(), client_id)
    logging.info(f"Retrieved open id connect (bearer) token {open_id_connect_token}")
    # Fetch the Identity-Aware Proxy-protected URL, including an authorization header
    # containing "Bearer " followed by a Google-issued OpenID Connect token for the service account.
    resp = requests.request(method, url, headers={"Authorization": f"Bearer {open_id_connect_token}"}, **kwargs)
    if resp.status_code == 403:
        raise Exception("Service account does not have permission to access the IAP-protected application.")
    elif resp.status_code != 200:
        raise Exception(f"Bad response from application: {resp.status_code} / {resp.headers} / {resp.text}")
    else:
        logging.info(f"Response status - {resp.status_code}")
        return resp.json()
This is the code that runs in the Cloud Function.
I have checked the environment details in dlkpipelinesv1 and pipelines respectively, using this code:
credentials, _ = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
authed_session = google.auth.transport.requests.AuthorizedSession(
    credentials)
# project_id = 'YOUR_PROJECT_ID'
# location = 'us-central1'
# composer_environment = 'YOUR_COMPOSER_ENVIRONMENT_NAME'
environment_url = (
    'https://composer.googleapis.com/v1beta1/projects/{}/locations/{}'
    '/environments/{}').format(project_id, location, composer_environment)
composer_response = authed_session.request('GET', environment_url)
environment_data = composer_response.json()
and the two use the same service accounts to run, i.e. the same IAM roles, although I have noticed the following differing details:
In the old environment:
"airflowUri": "https://p5<hidden_value>-tp.appspot.com",
"privateEnvironmentConfig": { "privateClusterConfig": {} },
In the new environment:
"airflowUri": "https://da<hidden_value>-tp.appspot.com",
"privateEnvironmentConfig": {
"privateClusterConfig": {},
"webServerIpv4CidrBlock": "<hidden_value>",
"cloudSqlIpv4CidrBlock": "<hidden_value>"
}
The service account that I use to make the post request has the following roles :
Cloud Functions Service Agent
Composer Administrator
Composer User
Service Account Token Creator
Service Account User
The service account that runs my composer environment has the following roles :
BigQuery Admin
Composer Worker
Service Account Token Creator
Storage Object Admin
But I am still receiving a 403 - Forbidden in the Logs Explorer when the POST request is made to the Airflow API.
EDIT 2020-11-16 :
I've updated to the latest make_iap_request code.
I tinkered with the IAP within the Security service, but I cannot find the webserver that will accept HTTP POST requests from my cloud functions. Anyway, I added the service account to the default and CRM IAP resources just in case, but I still get this error:
Exception: Service account does not have permission to access the IAP-protected application.
The main question is: which IAP is at stake here, and how do I add my service account as a user of this IAP?
What am I missing?
There is a configuration parameter that causes ALL requests to the API to be denied...
In the documentation, it is mentioned that we need to override the following Airflow configuration:
[api]
auth_backend = airflow.api.auth.backend.deny_all
into
[api]
auth_backend = airflow.api.auth.backend.default
This detail is really important to know, and it is not mentioned in Google's documentation...
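For reference, the override can also be applied from the command line with the gcloud CLI; a sketch, assuming placeholder environment name and location:

gcloud composer environments update <your-environment> \
    --location <your-location> \
    --update-airflow-configs=api-auth_backend=airflow.api.auth.backend.default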
Useful links:
Triggering DAGS (workflows) with GCS
make_iap_request.py repository
The code throwing the 403 is the way it used to work. There was a breaking change in the middle of 2020. Instead of using requests to make an HTTP call for the token, you should use Google's OAuth2 library:
from google.oauth2 import id_token
from google.auth.transport.requests import Request
open_id_connect_token = id_token.fetch_id_token(Request(), client_id)
see this example
I followed the steps in Triggering DAGs and it worked in my environment; please see my recommendations below.
It is a good start that the Composer environment is up and running. Through the process you will only need to upload the new DAG (trigger_response_dag.py) and get the client ID (it ends with .apps.googleusercontent.com), either with a Python script or from the login page the first time you open the Airflow UI.
On the Cloud Functions side, I noticed you have a combination of instructions for Node.js and for Python; for example, USER_AGENT is only for Node.js, and the make_iap_request routine is only for Python. I hope the following points help to resolve your problem:
Service Account (SA). The Node.js code uses the default service account ${projectId}@appspot.gserviceaccount.com, whose default role is Editor, meaning that it has wide access to GCP services, including Cloud Composer. In Python I think the authentication is managed somehow by client_id, since a token is retrieved with it. Please ensure that the SA has this Editor role, and don't forget to assign serviceAccountTokenCreator as specified in the guide.
I used the Node.js 8 runtime and I noticed the user agent you are concerned about should be 'gcf-event-trigger', as it is hard-coded: USER_AGENT = 'gcf-event-trigger'. In Python, it seems not necessary.
By default, in the GCS trigger, the GCS Event Type is set to Archive; you need to change it to Finalize/Create (see the deploy sketch after this list). If set to Archive, the trigger won't fire when you upload objects, and the DAG won't be started.
If you think your cloud function is correctly configured and an error persists, you can find its cause in your cloud function's LOGS tab in the Console. It can give you more details.
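Regarding the event type in the third point: if you deploy the function with the gcloud CLI instead of the Console, the Finalize/Create setting corresponds to the google.storage.object.finalize trigger event. A sketch, with placeholder function and bucket names:

gcloud functions deploy file_listener \
    --runtime python37 \
    --trigger-resource <your-bucket> \
    --trigger-event google.storage.object.finalize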
Basically, from the guide, I only had to change the following values in Node.js:
// The project that holds your function. Replace <YOUR-PROJECT-ID>
const PROJECT_ID = '<YOUR-PROJECT-ID>';
// Navigate to your webserver's login page and get this from the URL
const CLIENT_ID = '<ALPHANUMERIC>.apps.googleusercontent.com';
// This should be part of your webserver's URL in the Env's detail page: {tenant-project-id}.appspot.com.
const WEBSERVER_ID = 'v90eaaaa11113fp-tp';
// The name of the DAG you wish to trigger. It's DAG's name in the script trigger_response_dag.py you uploaded to your Env.
const DAG_NAME = 'composer_sample_trigger_response_dag';
For Python I only changed these parameters:
client_id = '<ALPHANUMERIC>.apps.googleusercontent.com'
# This should be part of your webserver's URL:
# {tenant-project-id}.appspot.com
webserver_id = 'v90eaaaa11113fp-tp'
# Change dag_name only if you are not using the example
dag_name = 'composer_sample_trigger_response_dag'
When writing an Azure Function in Python, I would expect to be able to access the host and function keys from the environment. Is this possible? All the examples I've seen do it by making a GET request, which seems like a lot of code to access something that I've set through the website.
This question is very similar, but not language specific.
It sounds like you want to get the response of the Azure Functions host API admin/host/keys as below, so please refer to the Azure Functions wiki page Key management API.
Here is my sample code.
import base64
import requests

# App Credentials; to get them, see the figures below
username = "<your username like `$xxxxx`>"
password = "<your password>"
functionapp_name = "<your function app name>"
api_url = f"https://{functionapp_name}.scm.azurewebsites.net/api"
site_url = f"https://{functionapp_name}.azurewebsites.net"

# Exchange the deployment credentials (Basic auth) for a short-lived admin JWT
auth_info = f"{username}:{password}"
base64_auth = base64.b64encode(str.encode(auth_info)).decode()
print(base64_auth)

jwt_resp = requests.get(f"{api_url}/functions/admin/token", headers={"Authorization": f"Basic {base64_auth}"})
jwt = jwt_resp.text.replace("\"", "", -1)
print(jwt)

# Call the key management API with the JWT as Bearer token
keys_resp = requests.get(f"{site_url}/admin/host/keys", headers={"Authorization": f"Bearer {jwt}"})
print(keys_resp.text)
It works, and the result is the JSON document listing the host keys.
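If you need the individual key values rather than the raw text, the body can be parsed as JSON; a small follow-up sketch, assuming the key management API's usual shape with a "keys" array of name/value objects:

# Each entry should carry the key name (e.g. "default") and its value
for key in keys_resp.json().get("keys", []):
    print(key["name"], key["value"])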
To get the username and password of the App Credentials, please see the figures below.
Fig 1. On Azure portal, open the Platform features tab of your Function App and click the Deployment Center link
Fig 2. Select the FTP option in the first step of SOURCE CONTROL and click the Dashboard button to copy the values of Username and Password, but use only the part of Username with the $ prefix as the username variable in my script. Of course, you can also use the values in the User Credentials tab.
Also, you can refer to my answer to the similar SO thread Unable to access admin URL of Azure Functions using PowerShell, which is where my figures come from.
Update: when using Azure Functions for Python in a container, please refer to the figure below to get the deployment credentials.
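If the portal route is inconvenient, the same deployment credentials can usually be retrieved with the Azure CLI; a sketch, assuming the az functionapp command group and placeholder names:

az functionapp deployment list-publishing-profiles --name <your function app name> --resource-group <your resource group>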
I'm trying to create a webhook for a folder on Box such that when a file is uploaded I get a notification.
from boxsdk import OAuth2, Client

auth = OAuth2(
    client_id='xxxxxxxxxxxxo',
    client_secret='xxxxxxxxxxxxxxxxh',
    access_token='xxxxxxxxxMj2',
)
client = Client(auth)

folder = client.folder(folder_id='1')
webhook = client.create_webhook(folder, ['FILE.UPLOADED'], <HTTPS_URL>)
print('Webhook ID is {0} and the address is {1}'.format(webhook.id, webhook.address))
The Error:
Status: 403 Code: access_denied_insufficient_permissions
I also tried using the JWTAuth method, and generated a public/private key pair:
from boxsdk import JWTAuth, Client
config = JWTAuth.from_settings_file('./config_box_demo.json')
client = Client(config)
folder = client.folder(folder_id='1')
webhook = client.create_webhook(folder, ['FILE.UPLOADED'], <HTTPS_URL>)
print('Webhook ID is {0} and the address is {1}'.format(webhook.id, webhook.address))
But it displays the same error.
Things I have already done:
Enabled all application scopes (include 'Manage Webhooks')
Activated 'Perform Actions As Users' and 'Generate User Access Token'
Authorised the App from Admin Console
Any help/tips would be appreciated.
Also, would it show the same error if there were an issue with the HTTPS URL?
Two things might be causing an issue here. Firstly, make sure your application is configured with the scope enabled to create webhooks.
Webhook configuration screen
Secondly, it is important that the user the access token belongs to actually has access to the folder you are trying to add a webhook to. In the case of a JWT-authenticated app, the user is a service account that does not have access to your (a regular user's) files and folders. You can read more on our user model here:
https://developer.box.com/en/guides/authentication/user-types/
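If the folder actually belongs to a regular managed user rather than the service account, one option (a sketch, assuming the 'Perform Actions As Users' scope you already enabled and a hypothetical user ID) is to make the calls as that user via the SDK's as_user helper:

# Hypothetical ID of the managed user who owns / can access folder 1
user = client.user(user_id='12345678')
user_client = client.as_user(user)

folder = user_client.folder(folder_id='1')
webhook = user_client.create_webhook(folder, ['FILE.UPLOADED'], '<HTTPS_URL>')
print('Webhook ID is {0} and the address is {1}'.format(webhook.id, webhook.address))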