How to handle keys and credentials when deploying to Google Cloud Functions? - python

I have several Cloud Functions (in Python) that require a modular package auth, which contains a subfolder of credentials (mostly JSON files for Google service account keys or Firebase configurations).
From a security perspective, I have obviously not committed these files to git, by adding the folder (auth/credentials) to the .gitignore file.
However, I am now stuck on what to do when deploying the Google Cloud Function (.gcloudignore). If I deploy it with the credentials, then I imagine these keys are exposed on the server? How can I overcome this?
I have heard some people mention environment variables, but I am not sure whether that is any more secure than just deploying the files.
What is the Google Way of doing it?

You have two primary solutions available to you. The first is that the Cloud Function can run with the identity of a custom Service Account. This service account can then be associated with all the roles necessary for your logic to achieve its task. The value of this is that no credentials need be explicitly known to your logic. The environment in which your calls are being made "implicitly" has all that it needs.
See: Per-function identity
The second mechanism, which is more in line with what you are currently doing, uses the concept of the Compute Metadata Server. The metadata server can be configured with the tokens necessary to make onward calls. It is configured separately from your Cloud Function logic, which merely retrieves the tokens as needed.
See: Fetching identity and access tokens.
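The metadata server flow above can be sketched in Python. The endpoint and required `Metadata-Flavor: Google` header are standard, but `fetch_token` is a hypothetical helper that only works when actually running on GCP (Cloud Functions, GCE, etc.):

```python
import urllib.request

METADATA_URL = "http://metadata.google.internal/computeMetadata/v1"

def build_token_request(audience=None):
    # Build the metadata-server request for an access token, or an
    # identity token when an audience (e.g. a function URL) is given.
    if audience:
        path = "/instance/service-accounts/default/identity?audience=" + audience
    else:
        path = "/instance/service-accounts/default/token"
    return urllib.request.Request(
        METADATA_URL + path,
        headers={"Metadata-Flavor": "Google"},  # required, or the server refuses
    )

def fetch_token(audience=None):
    # Only works inside GCP, where metadata.google.internal resolves.
    with urllib.request.urlopen(build_token_request(audience)) as resp:
        return resp.read().decode()
```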

Related

GCP IAM: Granting a role to a service account while/after creating it via python API

Goal:
Using python, I want to create a service account in a project on the Google Cloud Platform and grant that service account one role.
Problem:
The docs explain here how to grant a single role to the service account. However, it seems to be only possible by using the Console or the gcloud tool, not with python. The alternative for python is to update the whole IAM policy of the project to grant the role for the single service account and overwrite it (described here). However, overwriting the whole policy seems quite risky because in case of an error the policy of the whole project could be lost. Therefore I want to avoid that.
Question:
I'm creating a service account using the python code provided here in the docs. Is it possible to grant the role already while creating the service account with this code or in any other way?
Creating a service account, creating a service account key, downloading a service account JSON key file, and granting a role are separate steps. There is no single API to create a service account and grant a role at the same time.
Any time you update a project's IAM bindings there is a risk. Google prevents multiple applications from updating IAM at the same time, but it is still possible to lock everyone (users and services) out of a project by overwriting the policy with no members.
I recommend that you create a test project and develop and debug your code against that project. Use credentials that have no permissions to your other projects. Otherwise use the CLI or Terraform to minimize your risks.
The API is very easy to use provided that you understand the API, IAM bindings, and JSON data structures.
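The read-modify-write cycle behind this can be sketched as follows. `add_binding` and `grant_role` are hypothetical helper names; the `getIamPolicy`/`setIamPolicy` calls follow the documented Resource Manager v1 flow, and the etag returned by `getIamPolicy` lets `setIamPolicy` reject concurrent modifications. As suggested above, try this against a throwaway project first:

```python
def add_binding(policy, role, member):
    # Pure helper: append `member` to `role`, preserving every
    # existing binding in the policy (the part that makes this safe).
    bindings = policy.setdefault("bindings", [])
    for binding in bindings:
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            return policy
    bindings.append({"role": role, "members": [member]})
    return policy

def grant_role(project_id, role, member):
    # Sketch of the full get/modify/set cycle against the
    # Cloud Resource Manager API.
    from googleapiclient import discovery  # pip install google-api-python-client

    crm = discovery.build("cloudresourcemanager", "v1")
    policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()
    add_binding(policy, role, member)
    return crm.projects().setIamPolicy(
        resource=project_id, body={"policy": policy}
    ).execute()
```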
As mentioned in John’s answer, you should be very careful when manipulating IAM policies; if something goes wrong it could leave services completely inoperable.
Here is a Google document that shows how to manipulate IAM resources using the REST API.
The owner role can be granted to a user, serviceAccount, or a group that is part of an organization. For example, group@myownpersonaldomain.com could be added as an owner to a project in the myownpersonaldomain.com organization, but not the examplepetstore.com organization.

How to use default credentials with DWD to access Drive/Gmail APIs

We want to use Application Default Credentials (Python code running in GCP) to perform domain-wide delegation to access the Gmail/Drive APIs. The main reason is that using default credentials relieves us of needing to create/manage a GCP service account key (which is very sensitive), whereas code running in GCP (App Engine / Cloud Functions) handles key management for us securely.
We know that Google's professional services have published how to do this for accessing Admin SDK APIs here, however, we're not able to make this work with Gmail/Drive APIs.
Does anyone know if this is technically possible, and if so how?
From what I understood of your question, you don't want to use a service account key file, but instead Application Default Credentials (ADC).
Basically, you will always need a service account, but if you are running your app on Compute Engine, Kubernetes Engine, the App Engine flexible environment, or Cloud Functions, it is not necessary to create one on your own, as stated HERE.
You will only need to get the credentials for your project, and then you will be able to call the Gmail API as you would normally do:
from google.auth import compute_engine
credentials = compute_engine.Credentials()
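That snippet covers plain API access, but for domain-wide delegation specifically, plain compute_engine.Credentials are not enough: a subject must be impersonated. A sketch of the keyless pattern (signing JWTs through the IAM Credentials API instead of a downloaded key), assuming the approach from Google's professional-services Admin SDK example generalizes to Gmail/Drive scopes; `sa_email` and `subject` are placeholders, and the service account must have DWD enabled in the Workspace admin console:

```python
TOKEN_URI = "https://oauth2.googleapis.com/token"

def delegated_credentials(sa_email, subject, scopes):
    # Sketch: reuse the ambient default credentials to sign JWTs via
    # the IAM Credentials API (no key file), then impersonate `subject`.
    from google.auth import default, iam
    from google.auth.transport.requests import Request
    from google.oauth2 import service_account

    source_credentials, _ = default()
    signer = iam.Signer(Request(), source_credentials, sa_email)
    return service_account.Credentials(
        signer, sa_email, TOKEN_URI, scopes=scopes, subject=subject
    )
```

These credentials could then be passed to `googleapiclient.discovery.build("gmail", "v1", credentials=...)` as usual.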

Secure Google Cloud Functions Calls from Server-Side, Authentication strategy?

I have developed a Google Cloud Function (GCF) in Python, which I want to access from a web service deployed on AWS (also written in Python). While the GCF was in development, it had the Cloud Function Invoker permission set to allUsers; I assume that is why it didn't ask for an authorization token when called.
I want to revoke this public access so that I can only call this function from the web service code and it is not publicly accessible.
Possible approach: In my research I have found that this can be done with the following steps:
Remove all unnecessary members who have permissions on the GCF.
Create a new service account that has restricted access to only use the GCF.
Download the service account key (JSON) and use it in the AWS web application.
Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of that service account key (JSON) file.
Questions
How do I generate the access token using the service account, so it can then be passed as an Authorization: Bearer header in the HTTP call made to the GCF? Without this token the GCF should throw an error.
The docs say not to put the service account key in the source code. What is the best way to go about that? They suggest using KMS, which seems like overkill.
Do not embed secrets related to authentication in source code, such as API keys, OAuth tokens, and service account credentials. You can use an environment variable pointing to credentials outside of the application's source code, such as Cloud Key Management Service.
What are the bare minimum permissions I will require for the service account?
Please feel free to correct me if you think my understanding is wrong and there is a better and preferable way to do it.
UPDATE: The web service on AWS will call the GCF in a server-to-server fashion. There is no need to propagate the client-end (end-user) credentials.
In your description, you don't mention who/what will call your GCF. A user? A Compute Engine instance? Another GCF? In any case, this page can help you find code examples.
Yes, a secret in plain text pushed to git is no longer a secret! Here again, I don't know what is performing the call. If it's a Compute Engine instance, a function, Cloud Run, or any other GCP service, don't use a JSON key file; use the component's identity. I would say: create a service account and assign it to that component. Tell me more about where you are deploying if you want more help!
Regarding the minimal role for the service account: roles/cloudfunctions.invoker is the minimal role needed to invoke a function:
gcloud beta functions add-iam-policy-binding RECEIVING_FUNCTION \
--member='serviceAccount:CALLING_FUNCTION_IDENTITY' \
--role='roles/cloudfunctions.invoker'
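For the token question, a minimal Python sketch on the AWS side, assuming google-auth and requests are installed and GOOGLE_APPLICATION_CREDENTIALS points at the downloaded key file. The function's URL serves as the token's audience; `call_function` and `auth_header` are hypothetical helper names:

```python
import os

def auth_header(token):
    # Pure helper: the Authorization header the private GCF expects.
    return {"Authorization": f"Bearer {token}"}

def call_function(url):
    # Sketch: mint an identity token for the function URL (the
    # "audience") from the key file, then call the GCF with it.
    import requests  # pip install requests google-auth
    from google.auth.transport.requests import Request
    from google.oauth2 import service_account

    creds = service_account.IDTokenCredentials.from_service_account_file(
        os.environ["GOOGLE_APPLICATION_CREDENTIALS"], target_audience=url
    )
    creds.refresh(Request())  # populates creds.token with a signed ID token
    return requests.get(url, headers=auth_header(creds.token))
```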

Google Cloud from_service_account_json env variables

I'm working on a Python Cloud Function that will push data into BigQuery on a Google Cloud Storage bucket trigger.
I would like to avoid pushing the JSON key file to GCS and instead save the values in the Cloud Function's environment variables. However, I am not sure how to use them to authenticate, since the documentation says to use a file path string.
Is there any way to do this? Does it even matter?
Currently using the file path and it is working, but thought it would be more secure to use environment variables.
If the storage bucket, Cloud Function, and BigQuery dataset are all within the same project, you shouldn't need to use service account credentials at all; the function will implicitly use the default service account for the project.
If you do need to use service accounts to work across different products, explicitly specifying a service account with the --service-account flag would be ideal. See Understanding service accounts for more details.
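If you still want the credentials in an environment variable rather than a file path, the client libraries can also build credentials from a parsed dict via `from_service_account_info`. A sketch, assuming the full key JSON is stored in a hypothetical SERVICE_ACCOUNT_JSON variable:

```python
import json
import os

def load_service_account_info(var="SERVICE_ACCOUNT_JSON"):
    # Pure helper: parse the key JSON stored in an environment variable.
    return json.loads(os.environ[var])

def bigquery_client_from_env(var="SERVICE_ACCOUNT_JSON"):
    # Sketch: build credentials from the dict instead of a file path.
    from google.cloud import bigquery  # pip install google-cloud-bigquery
    from google.oauth2 import service_account

    info = load_service_account_info(var)
    creds = service_account.Credentials.from_service_account_info(info)
    return bigquery.Client(credentials=creds, project=info["project_id"])
```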
What I ended up doing is storing it in a cloud object store and pulling it down on API start.
An example being Amazon S3: download the file, then pass it to the Google Cloud method/function.
Hope that helps.

How to allow a user to download a Google Cloud Storage file from Compute Engine without public access

I'm going to try and keep this as short as possible.
I have a compute engine instance, and it is running Python/Flask.
What I am trying to do, is allow a user to download a file from google cloud storage, however I do not want the file to be publicly accessible. Is there a way I can have my Compute instance stream the file from cloud storage for the user to download, and then have the file deleted from the compute instance after the user has finished downloading the file? I'd like the download to start immediately after they click the download button.
I am using the default app credentials.
subprocess is not an option.
SideNote:
Another way I was thinking about doing this was to allow each user who is logged into the website access to a specific folder in a bucket. However, I am unsure whether this would even be possible without having them log in with a Google account. It also seems like it would be a pain to implement.
@jterrace's answer is what you want.
Signed URLs can have a time limit associated with them. In your application you would create a signed url for the file and do a HTTP redirect to said file.
https://cloud.google.com/storage/docs/access-control/create-signed-urls-program
If you are using the default compute engine service account (the default associated with your GCE instance) you should be able to sign just fine. Just follow the instructions on how to create the keys in the url above.
You can do all kinds of awesome stuff this way, including allowing users to upload DIRECTLY to google cloud storage! :)
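A sketch of the approach with the google-cloud-storage library; `signed_download_url` is a hypothetical helper name, and this assumes the VM's service account can sign (as discussed above):

```python
import datetime

def attachment_disposition(filename):
    # Pure helper: a Content-Disposition value that forces a download
    # prompt instead of rendering the file in the browser.
    return f'attachment; filename="{filename}"'

def signed_download_url(bucket_name, blob_name, minutes=15):
    # Sketch: a time-limited V4 signed URL lets the browser download
    # straight from Cloud Storage, so nothing is staged on the VM.
    from google.cloud import storage  # pip install google-cloud-storage

    blob = storage.Client().bucket(bucket_name).blob(blob_name)
    return blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(minutes=minutes),
        response_disposition=attachment_disposition(blob_name),
    )
```

In the Flask view, the download button's handler can simply `return redirect(signed_download_url(bucket, name))`.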
It sounds like you're looking for Signed URLs.
The service account associated with your Compute Engine instance will solve the problem.
Service accounts authenticate applications running on your virtual machine instances to other Google Cloud Platform services. For example, if you write an application that reads and writes files on Google Cloud Storage, it must first authenticate to the Google Cloud Storage API. You can create a service account and grant the service account access to the Cloud Storage API.
For historical reasons, all projects come with the Compute Engine default service account, identifiable using this email:
[PROJECT_NUMBER]-compute@developer.gserviceaccount.com
By default, the service account of compute engine has read-only access to google cloud storage service. So, compute engine can access your storage using GCP client libraries.
gsutil is the command-line tool for Cloud Storage, and is very handy for trying out the various options the service offers.
Start by typing gsutil ls from your Compute Engine instance; it lists all the buckets in your Cloud Storage project.
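The same listing can be done from Python with the google-cloud-storage client, which picks up the VM's default service account automatically. A sketch; `gs_uri` and `list_blob_names` are hypothetical helper names:

```python
def gs_uri(bucket_name, blob_name):
    # Pure helper: the gs:// URI gsutil would use for the same object.
    return f"gs://{bucket_name}/{blob_name}"

def list_blob_names(bucket_name):
    # Sketch: on a Compute Engine VM, storage.Client() needs no key
    # file; it authenticates as the VM's default service account.
    from google.cloud import storage  # pip install google-cloud-storage

    return [blob.name for blob in storage.Client().list_blobs(bucket_name)]
```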
