There's a built-in way to set up OAuth on the App Engine side, and it works great for requests coming from my local machine with a token generated by GoogleCredentials.get_application_default(), but it does not work for requests from Compute Engine, which fail with a NotAllowedError exception on the App Engine side.
I made multiple attempts to configure the request scopes to include https://www.googleapis.com/auth/userinfo.email, since it is the required one, but no luck.
It turned out that when you create your instance with "Allow API access to all Google Cloud services in the same project" checked, it does not include the required User Info scope.
To include the User Info scope, you have to uncheck "Allow API access to all Google Cloud services in the same project", go to the Access & Security tab, and explicitly enable the User Info scope.
UPDATE 2018-11-15
The correct way to set the email scope now is with the gcloud command:
gcloud compute instances set-service-account INSTANCE-ID --zone=us-central1-f --service-account=PROJECT-ID-compute@developer.gserviceaccount.com --scopes https://www.googleapis.com/auth/userinfo.email,cloud-platform
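To double-check from inside the instance that the new token really carries the email scope, here is a quick sketch (it assumes the requests library and uses the metadata-server token endpoint plus Google's tokeninfo endpoint):

import requests

# Access token for the instance's default service account, from the metadata server.
token = requests.get(
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token",
    headers={"Metadata-Flavor": "Google"},
).json()["access_token"]

# Ask the tokeninfo endpoint which scopes the token was actually granted.
info = requests.get(
    "https://www.googleapis.com/oauth2/v3/tokeninfo",
    params={"access_token": token},
).json()
print(info.get("scope"))  # should now include https://www.googleapis.com/auth/userinfo.email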
I am trying to use the 'compute_v1' client library to START/STOP VMs in GCP. I need to do this across projects, so I am planning to use the IAM/RBAC approach, with service accounts created and the "Service Account Token Creator" role assigned.
How do I go about setting up the credentials in my client?
Right now I just use my 'login'-related JSON key file as follows:
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path/xxxxxx.json"
The scope of this application is to STOP/START a list of VMs based on a project and do this on demand.
All is well; now I want to use this in production across multiple projects!
Any direction on how to authenticate and impersonate each service account through IAM/RBAC is much appreciated.
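For reference, this is roughly the flow I have in mind, just a sketch based on google.auth.impersonated_credentials; the service account email, project, zone, and instance names are placeholders:

import google.auth
from google.auth import impersonated_credentials
from google.cloud import compute_v1

# Source credentials: whatever ADC resolves to (currently my login JSON key file).
source_credentials, _ = google.auth.default()

# Impersonate the per-project service account; this requires the caller to hold
# the Service Account Token Creator role on that account.
target_credentials = impersonated_credentials.Credentials(
    source_credentials=source_credentials,
    target_principal="vm-manager@target-project.iam.gserviceaccount.com",  # placeholder
    target_scopes=["https://www.googleapis.com/auth/cloud-platform"],
)

instances = compute_v1.InstancesClient(credentials=target_credentials)
instances.stop(project="target-project", zone="us-central1-a", instance="my-vm")  # placeholders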
We have created a Flutter Web app that fetches BigQuery data through the BigQuery API from a Cloud Function. We were using a service account for authentication, but as we want to make our application public, we need to use end-user OAuth credentials.
I have tried to deploy the code from this link for testing on a Cloud Function, but the function keeps running and then shuts down because of a timeout. I checked the logs and found the reason: the Cloud Function cannot open a browser for authentication the way the code does when run locally.
Logs:
Function execution started
Please visit this URL to authorize this application: https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=XXXXXXXXXXXXXXXX&redirect_uri=http%3A%2F%2Flocalhost%3A8080%2F&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fbigquery&state=IXYIkeZUTaisTMGUGkVbgnohlor7Jx&access_type=offline.
Function execution took 180003 ms, finished with status: 'timeout'
I am confused as to how I can now authenticate and authorize a user once and reuse those credentials for every other BigQuery API call in our web app.
I think you are missing the point of the use of Cloud Functions. The documentation you shared clearly states:
This guide explains how to authenticate by using user accounts for access to the BigQuery API when your app is installed onto users' machines.
This is never the case for a Cloud Function, since it is hosted on a Google Cloud server and is available for you to use via an HTTP request or as a background process.
Because of that, a Cloud Function will interact with other GCP products by using service accounts, and if you want to set up authentication you will have to do it at the Cloud Function layer. For that, I recommend you take a look at this documentation, which explains the principles of authentication with Cloud Functions.
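As a minimal sketch (the function name and query are placeholders), this is roughly what a Cloud Function that reads BigQuery with its own runtime service account looks like; there is no end-user OAuth flow involved:

import json
from google.cloud import bigquery

def query_bigquery(request):  # HTTP-triggered Cloud Function entry point
    # The client picks up the function's runtime service account through
    # Application Default Credentials; no key file or browser flow is needed.
    client = bigquery.Client()
    rows = client.query("SELECT 1 AS answer").result()  # placeholder query
    return json.dumps([dict(row) for row in rows])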
I have a Google Cloud Platform Datastore instance and would like to run queries against it from a Python client. All of the examples show how to do this given a service account keyfile:
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/[FILE_NAME].json"
However, I plan to run this on a serverless platform and don't want to include a keyfile with my deployment. Instead, I've set up an API key and given it access to Datastore.
How can I then make datastore.Client() aware of this API Key?
from google.cloud import datastore
datastore_client = datastore.Client()
It takes credentials as an optional keyword arg, but I can't figure out how to create the proper credentials object out of an API key.
As an aside
I am amenable to other forms of auth if recommended. Basically, I want to deploy a web application on Google Cloud Platform. What is the standard way to manage service accounts so that key files don't have to be passed around ad hoc?
We want to use Application Default Credentials (Python code running in GCP) to perform domain-wide delegation to access the Gmail/Drive APIs. The main reason for this is that using default credentials relieves us from needing to create and manage a GCP service account key (which is very sensitive), whereas for code running in GCP (App Engine/Cloud Functions) key management is handled securely for us.
We know that Google's professional services have published how to do this for accessing Admin SDK APIs here; however, we're not able to make this work with the Gmail/Drive APIs.
Does anyone know if this is technically possible, and if so how?
From what I understood of your question, you don't want to use a service account key, but instead Application Default Credentials (ADC).
Basically, you will always need to use a service account, but if you are running your app on Compute Engine, Kubernetes Engine, the App Engine flexible environment, or Cloud Functions, it will not be necessary for you to create it on your own, as stated HERE.
You will only need to get the credentials for your project, and then you will be able to call the Gmail API as you would normally do:
from google.auth import compute_engine
credentials = compute_engine.Credentials()
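As a minimal sketch of how those credentials are then passed to an API client (this assumes google-api-python-client is installed and uses the Drive API as the example; to act on behalf of Workspace users via domain-wide delegation, the service account still has to be authorized for the relevant scopes in the admin console):

from google.auth import compute_engine
from googleapiclient.discovery import build

# Default service account credentials from the metadata server; no key file involved.
credentials = compute_engine.Credentials()

drive = build("drive", "v3", credentials=credentials)
result = drive.files().list(pageSize=10, fields="files(id, name)").execute()
for f in result.get("files", []):
    print(f["name"], f["id"])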
I am creating a web app for my company. I don't want to add a new sign-up process and store credentials for our employees. We already use OpenShift, and everyone who has OpenShift credentials can log in to our OpenShift cluster. I want to reuse those credentials to log in to my web app.
I came to know that OpenShift supports OAuth 2.0, but most of the guides available on the internet use other identity providers, such as Google, as auth for OpenShift. No one covers using OpenShift as the identity provider for a web app. Any leads will be appreciated.
Based on what I'm seeing in OpenShift 4.1's documentation on Configuring the internal OAuth server, it looks like it may be possible to use the /oauth/authorize endpoint of the control-plane API.
The OpenShift Container Platform master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API.
When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request.
It then determines what user that identity maps to, creates an access token for that user, and returns the token for use.
The intention of this endpoint is to grant OAuth tokens specifically for use with the OpenShift cluster, not for third party applications.
Even if it ends up being possible, you'll probably still want to use the OAuth/OIDC mechanisms of the upstream authentication provider that OpenShift is using directly, if possible, as that will provide better support and be more intuitive from an application architecture standpoint.
You can use the OpenShift user API to access the identity of the user who requested an access token.
The API to call is <api_root>/apis/user.openshift.io/v1/users/~ with an Authorization: Bearer <token> header.
This will give you the k8s user object containing the username and groups of the user.
You can also do this from within an OpenShift pod, using https://kubernetes.default.svc as the api_root; this requires you to use the cluster CA in the pod to set up a secure connection.
The CA is mounted in any pod at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.
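For example, a minimal sketch of that call from inside a pod in Python (it assumes the requests library; the token value is a placeholder obtained from the OAuth flow):

import requests

API_ROOT = "https://kubernetes.default.svc"
CA_CERT = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
token = "<token>"  # access token obtained via the OAuth code grant

resp = requests.get(
    f"{API_ROOT}/apis/user.openshift.io/v1/users/~",
    headers={"Authorization": f"Bearer {token}"},
    verify=CA_CERT,  # use the mounted cluster CA for a secure connection
)
user = resp.json()
print(user["metadata"]["name"], user.get("groups"))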
You can use the OAuth mechanism provided by OpenShift to retrieve an access token using the authorization code grant.
The documentation for the OpenShift OAuth internals is sketchy at best; I found it helpful to look up the correct URLs and parameters in the dex OpenShift connector source code: here and here.
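If it helps, the OAuth server's endpoints can also be discovered from the cluster's OAuth metadata document instead of being dug out of the dex source; a rough sketch (the cluster API URL is a placeholder, and you may need to point verify= at the cluster CA):

import requests

API_ROOT = "https://api.my-cluster.example.com:6443"  # placeholder cluster API URL

# OpenShift publishes its OAuth server metadata at this well-known path.
meta = requests.get(f"{API_ROOT}/.well-known/oauth-authorization-server").json()
print(meta["authorization_endpoint"])  # .../oauth/authorize
print(meta["token_endpoint"])          # .../oauth/token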