I have my config file set up with multiple profiles and I am trying to assume an IAM role, but all the articles I see about assuming roles start by creating an STS client with
import boto3
client = boto3.client('sts')
which makes sense, but the problem is that it gives me an error when I try it that way. When I pass a profile that exists in my config file, however, it works. Here is the code:
import boto3
session = boto3.Session(profile_name="test_profile")
sts = session.client("sts")
response = sts.assume_role(
    RoleArn="arn:aws:iam::xxx:role/role-name",
    RoleSessionName="test-session"
)
new_session = boto3.Session(
    aws_access_key_id=response['Credentials']['AccessKeyId'],
    aws_secret_access_key=response['Credentials']['SecretAccessKey'],
    aws_session_token=response['Credentials']['SessionToken']
)
When other people assume roles in their code without passing in a profile, how does that even work? Does boto3 automatically grab the default profile from the config file, or something like that, in their case?
Yes. This line:
client = boto3.client('sts')
tells boto3 to create a session using the default credentials.
The credentials can be provided in the ~/.aws/credentials file. If the code is running on an Amazon EC2 instance, boto3 will automatically use credentials associated with the IAM Role associated with the instance.
Credentials can also be passed via Environment Variables.
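For example, here is a minimal sketch that relies on the default credential chain rather than a named profile (the role ARN and session name are just the placeholders from the question):
import boto3

# No profile passed: boto3 falls back to the default credential chain
# (environment variables, the [default] profile in ~/.aws/credentials,
# or the EC2 instance role).
sts = boto3.client("sts")

response = sts.assume_role(
    RoleArn="arn:aws:iam::xxx:role/role-name",
    RoleSessionName="test-session"
)

# Build a new session from the temporary credentials returned by STS.
assumed_session = boto3.Session(
    aws_access_key_id=response["Credentials"]["AccessKeyId"],
    aws_secret_access_key=response["Credentials"]["SecretAccessKey"],
    aws_session_token=response["Credentials"]["SessionToken"]
)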
See: Credentials — Boto3 documentation
I need to have a public URL for a file that I am creating inside a google function.
I therefore want to create an access token.
I am able to upload the file from a Python Cloud Function with blob.upload_from_string(blob_text), but I do not know how I can create a public URL (or an access token) for it.
Could you help me with it?
EDIT WITH THE ANSWER (almost a copy-paste from Marc Anthony B's answer):
blob = bucket.blob(storage_path)
token = str(uuid4())
metadata = {"firebaseStorageDownloadTokens": token}
blob.metadata = metadata
download_url = 'https://firebasestorage.googleapis.com/v0/b/{}/o/{}?alt=media&token={}' \
    .format(bucket.name, storage_path.replace("/", "%2F"), token)
with open(video_file_path, 'rb') as f:
    blob.upload_from_file(f)
Firebase Storage for Python still doesn't have its own SDK but you can use firebase-admin instead. Firebase Admin SDKs depend on the Google Cloud Storage client libraries to provide Cloud Storage access. The bucket references returned by the Admin SDK are objects defined in these libraries.
When uploading an object to Firebase Storage, you must incorporate a custom access token. You may use UUID4 for this case. See code below:
import firebase_admin
from firebase_admin import credentials
from firebase_admin import storage
from uuid import uuid4
projectId = '<PROJECT-ID>'
storageBucket = '<BUCKET-NAME>'
cred = credentials.ApplicationDefault()
firebase_admin.initialize_app(cred, {
'projectId': projectId,
'storageBucket': storageBucket
})
bucket = storage.bucket()
# E.g: "upload/file.txt"
bucket_path = "<BUCKET-PATH>"
blob = bucket.blob(bucket_path)
# Create a token from a UUID, stored as a string so it can be
# serialized into the object metadata.
# Technically, you can use any string as your token;
# you can assign whatever you want.
token = str(uuid4())
metadata = {"firebaseStorageDownloadTokens": token}
# Assign the token as metadata
blob.metadata = metadata
blob.upload_from_filename(filename="<FILEPATH>")
# Make the file public (OPTIONAL). To be used for Cloud Storage URL.
blob.make_public()
# Fetches a public URL from GCS.
gcs_storageURL = blob.public_url
# Generates a URL with Access Token from Firebase.
firebase_storageURL = 'https://firebasestorage.googleapis.com/v0/b/{}/o/{}?alt=media&token={}'.format(storageBucket, bucket_path.replace("/", "%2F"), token)
print({
"gcs_storageURL": gcs_storageURL,
"firebase_storageURL": firebase_storageURL
})
As you can see from the code above, I've mentioned both GCS and Firebase URLs. If you want a public URL from GCS, then you should make the object public using the make_public() method. If you want to use the generated access token instead, just concatenate the default Firebase URL with the required variables.
If the objects are already in Firebase Storage and already have access tokens incorporated in them, then you can get a token by reading the object's metadata. See code below:
# E.g: "upload/file.txt"
bucket_path = "<BUCKET-PATH>"
blob = bucket.get_blob(bucket_path)
# Fetches object metadata
metadata = blob.metadata
# Firebase Access Token
token = metadata['firebaseStorageDownloadTokens']
firebase_storageURL = 'https://firebasestorage.googleapis.com/v0/b/{}/o/{}?alt=media&token={}'.format(storageBucket, bucket_path.replace("/", "%2F"), token)
print(firebase_storageURL)
For more information, you may check out this documentation:
Google Cloud Storage Library for Python
Introduction to the Admin Cloud Storage API
Hi there, first and foremost this is my first time using Google's services. I'm trying to develop an app with the Google AutoML Vision API (custom model). I have already built a custom model and generated the API keys (I hope I did that correctly, though).
After many attempts at developing via Ionic & Android, I failed to connect to the API.
I have now taken the prediction-modelling code given in Python (on Google Colab), and even with that I still get an error message saying "Could not automatically determine credentials". I'm not sure where I have gone wrong. Please help. Dying.
#installing & importing libraries
!pip3 install google-cloud-automl
import sys
from google.cloud import automl_v1beta1
from google.cloud.automl_v1beta1.proto import service_pb2
#import key.json file generated by GOOGLE_APPLICATION_CREDENTIALS
from google.colab import files
credentials = files.upload()
# explicit function given by Google docs:
# https://cloud.google.com/docs/authentication/production#auth-cloud-implicit-python
def explicit():
    from google.cloud import storage
    # Explicitly use service account credentials by specifying the private key
    # file.
    storage_client = storage.Client.from_service_account_json(credentials)
    # Make an authenticated API request
    buckets = list(storage_client.list_buckets())
    print(buckets)
#import image for prediction
from google.colab import files
YOUR_LOCAL_IMAGE_FILE = files.upload()
#prediction code from modelling
def get_prediction(content, project_id, model_id):
    prediction_client = automl_v1beta1.PredictionServiceClient()
    name = 'projects/{}/locations/uscentral1/models/{}'.format(project_id,
                                                               model_id)
    payload = {'image': {'image_bytes': content}}
    params = {}
    request = prediction_client.predict(name, payload, params)
    return request  # waits till request is returned
#print function substitute with values
content = YOUR_LOCAL_IMAGE_FILE
project_id = "REDACTED_PROJECT_ID"
model_id = "REDACTED_MODEL_ID"
print(get_prediction(content, project_id, model_id))
The error message appears when I run the last line of code.
credentials = files.upload()
storage_client = storage.Client.from_service_account_json(credentials)
These two lines are the issue, I think.
The first one loads the contents of the file, but the second one expects a path to a file rather than the contents.
Let's tackle the first line first:
Just passing the credentials you get after calling credentials = files.upload() will not work, as explained in the docs for it. Done that way, credentials doesn't actually contain the value of the file directly, but rather a dictionary mapping filenames to contents.
Assuming you're only uploading the one credentials file, you can get its contents like this (stolen from this SO answer):
from google.colab import files
uploaded = files.upload()
credentials_as_string = uploaded[list(uploaded.keys())[0]]
So now we actually have the contents of the uploaded file; the next step is to create an actual credentials object out of it.
This answer on Github shows how to create a credentials object from a string converted to json.
import json
from google.oauth2 import service_account
credentials_as_dict = json.loads(credentials_as_string)
credentials = service_account.Credentials.from_service_account_info(credentials_as_dict)
Finally we can create the storage client object using this credentials object:
storage_client = storage.Client(credentials=credentials)
Please note I've not tested this though, so please give it a go and see if it actually works.
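For the AutoML part of your question specifically, the same credentials object can, as far as I know, be passed straight to the prediction client instead of relying on GOOGLE_APPLICATION_CREDENTIALS being set. A rough, untested sketch:
import json
from google.cloud import automl_v1beta1
from google.oauth2 import service_account

# Build a credentials object from the uploaded key file contents, as above.
credentials_as_dict = json.loads(credentials_as_string)
credentials = service_account.Credentials.from_service_account_info(credentials_as_dict)

# Pass the explicit credentials to the prediction client.
prediction_client = automl_v1beta1.PredictionServiceClient(credentials=credentials)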
gcloud auth print-access-token gives me a Bearer token that I can use later on; however, this is a shell command. How would I obtain one programmatically via the Google Cloud Python API?
I see a prior example using oauth2client, but oauth2client is now deprecated. How would I do this with google.auth and oauthlib?
While the above answer is quite informative, it misses one important point: the credentials object obtained from google.auth.default() or compute_engine.Credentials() will not have a token in it. So, back to the original question of what the programmatic alternative to gcloud auth print-access-token is, my answer would be:
import google.auth
import google.auth.transport.requests
creds, project = google.auth.default()
# creds.valid is False, and creds.token is None
# Need to refresh credentials to populate those
auth_req = google.auth.transport.requests.Request()
creds.refresh(auth_req)
# Now you can use creds.token
I'm using the official google-auth package and default credentials, which will get you going both in local dev and on remote GCE/GKE app.
Too bad this is not properly documented and I had to read the google-auth code to figure out how to obtain the token.
The answer depends on your environment and how you want to create / obtain credentials.
What are Google Cloud Credentials?
Google Cloud credentials are an OAuth 2.0 token. This token has at a minimum an Access Token and optionally a Refresh Token, Client ID Token, and supporting parameters such as expiration, Service Account Email or Client Email, etc.
The important item in Google Cloud APIs is the Access Token. This token is what authorizes access to the cloud. It can be used in programs such as curl, in software such as Python, etc., and does not require an SDK. The Access Token is used in the HTTP Authorization header.
What is an Access Token?
An access token is an opaque value generated by Google. For a service account it is obtained by exchanging a Signed JWT, more correctly called a JWS. A JWT consists of a header and claims (the payload), two JSON structures. These two JSON structures are signed with the Service Account's Private Key, and the pieces are base64-encoded and concatenated to form the Signed JWT.
The format of the Signed JWT is: base64(header) + '.' + base64(payload) + '.' + base64(signature).
Here is an example JWT:
Header:
{
"alg": "RS256",
"typ": "JWT",
"kid": "42ba1e234ac91ffca687a5b5b3d0ca2d7ce0fc0a"
}
Payload:
{
"iss": "myservice#myproject.iam.gserviceaccount.com",
"iat": 1493833746,
"aud": "myservice.appspot.com",
"exp": 1493837346,
"sub": "myservice#myproject.iam.gserviceaccount.com"
}
Using an Access Token:
Example that will start a VM instance. Replace PROJECT_ID, ZONE and INSTANCE_NAME. This example is for Windows.
curl -v -X POST -H "Authorization: Bearer <access_token_here>" ^
https://www.googleapis.com/compute/v1/projects/%PROJECT_ID%/zones/%ZONE%/instances/%INSTANCE_NAME%/start
Compute Engine Service Account:
Dustin's answer is correct for this case, but I will include for completeness with some additional information.
These credentials are automatically created for you by GCP and are obtained from the VM Instance metadata. Permissions are controlled by Cloud API access scopes in the Google Console.
However, these credentials have some limitations. To modify the credentials you must stop the VM Instance first. Additionally, not all permissions (roles) are supported.
from google.auth import compute_engine
cred = compute_engine.Credentials()
Service Account Credentials:
Until you understand all of the types of credentials and their use cases, these are the credentials that you will use for everything except for gcloud and gsutil. Understanding these credentials will make working with Google Cloud much simpler when writing programs. Obtaining credentials from a Google Service Account Json file is easy. The only item to make note of is that credentials expire (typically 60 minutes) and either need to be refreshed or recreated.
gcloud auth print-access-token is NOT recommended. Service Account Credentials are the recommended method by Google.
These credentials are created by the Console, gcloud, or via programs / APIs. Permissions are assigned to the credentials by IAM, and they work inside Compute Engine, App Engine, Firestore, Kubernetes, etc., as well as in environments outside of Google Cloud. These credentials are downloaded from Google Cloud and stored in a JSON file. Notice the scopes parameter: it defines the permissions that are granted to the resulting credentials object.
from google.oauth2 import service_account

SCOPES = ['https://www.googleapis.com/auth/sqlservice.admin']
SERVICE_ACCOUNT_FILE = 'service-account-credentials.json'

cred = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES)
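To actually obtain the bearer token from these credentials, refresh them first (same pattern as the google.auth example earlier in this thread):
import google.auth.transport.requests

auth_req = google.auth.transport.requests.Request()
cred.refresh(auth_req)
print(cred.token)  # access token, typically valid for about an hour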
Google OAuth 2.0 Credentials:
These credentials are derived from a full OAuth 2.0 flow. These credentials are generated when your browser is launched to access Google Accounts for authorizing access. This process is much more complicated and requires a fair amount of code to implement and requires a built-in web server for the callback for authorization.
This method provides additional features, such as being able to run everything in a browser; for example, you can create a Cloud Storage file browser, but be careful that you understand the security implications. This method is the technique used to support Google Sign-In, etc. I like to use this method to authenticate users before allowing posting on websites, etc. The possibilities are endless with correctly authorized OAuth 2.0 identities and scopes.
Example code using google_auth_oauthlib:
from google_auth_oauthlib.flow import InstalledAppFlow
flow = InstalledAppFlow.from_client_secrets_file(
'client_secrets.json',
scopes=scope)
cred = flow.run_local_server(
host='localhost',
port=8088,
authorization_prompt_message='Please visit this URL: {url}',
success_message='The auth flow is complete; you may close this window.',
open_browser=True)
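The cred object returned by run_local_server() behaves like any other credentials object, so it can be handed to a client library. For instance (a hypothetical sketch, with a placeholder project name):
from google.cloud import storage

# Use the OAuth 2.0 user credentials obtained above.
client = storage.Client(project='my-project', credentials=cred)
print(list(client.list_buckets()))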
Example code using the requests_oauthlib library (inside a Flask app):
from requests_oauthlib import OAuth2Session
gcp = OAuth2Session(
app.config['gcp_client_id'],
scope=scope,
redirect_uri=redirect_uri)
# print('Requesting authorization url:', authorization_base_url)
authorization_url, state = gcp.authorization_url(
authorization_base_url,
access_type="offline",
prompt="consent",
include_granted_scopes='true')
session['oauth_state'] = state
return redirect(authorization_url)
# Next section of code after the browser approves the request
token = gcp.fetch_token(
token_url,
client_secret=app.config['gcp_client_secret'],
authorization_response=request.url)
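The token returned by fetch_token() is a dict; its access_token entry is what goes into the Authorization header, e.g.:
headers = {'Authorization': 'Bearer {}'.format(token['access_token'])}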
In some cases it's not possible to set environment variables on the server or container, yet you still need a Bearer access token to call Google Cloud APIs. I present the following to solve that problem:
# pip3 install google-auth
# pip3 install requests
import google.auth
import google.auth.transport.requests
from google.oauth2 import service_account
credentials = service_account.Credentials.from_service_account_file(
    '/home/user/secrets/hil-test.json',
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
auth_req = google.auth.transport.requests.Request()
credentials.refresh(auth_req)
print(credentials.token)
The last line prints the access token for calling Google Cloud APIs. As a test, replace ya29<REDACTED> in the following curl command with the token printed from Python:
curl https://example.googleapis.com/v1alpha1/projects/PROJECT_ID/locations -H "Authorization: Bearer ya29<REDACTED>"
It may not make sense to execute Python to get the token and then curl in Bash to call an API. The purpose is to demonstrate getting the token to call a Google Cloud alpha API that may not have a Python client library yet, only a REST API. Developers can then use the Python requests HTTP library to call the APIs.
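For example, the same call can be made from Python with the requests library (using the placeholder endpoint from the curl example above):
import requests

headers = {"Authorization": "Bearer {}".format(credentials.token)}
resp = requests.get(
    "https://example.googleapis.com/v1alpha1/projects/PROJECT_ID/locations",
    headers=headers
)
print(resp.status_code, resp.json())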
import google.auth
import google.auth.transport.requests
# getting the credentials and project details for gcp project
credentials, your_project_id = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
# getting request object
auth_req = google.auth.transport.requests.Request()
print(credentials.valid)  # prints False
credentials.refresh(auth_req)  # refresh token
# check for valid credentials
print(credentials.valid)  # prints True
print(credentials.token) # prints token
This may not be the recommended way, but for the REST API in my application it was an easy way to get the token.
from subprocess import PIPE, Popen

def cmdline(command):
    process = Popen(
        args=command,
        stdout=PIPE,
        shell=True
    )
    # communicate() returns bytes; decode to get a plain string
    return process.communicate()[0].decode().strip()

token = cmdline("gcloud auth application-default print-access-token")
print("Token: " + token)
I found myself here when looking for a way to use the python SDK without creating a service account. I wanted a way to locally develop a script that would run in the cloud. I was able to achieve this by using an artifact of the gcloud command:
export GOOGLE_APPLICATION_CREDENTIALS=~/.config/gcloud/legacy_credentials/<me>/adc.json
Merging suggestions from this post and the Google Cloud documentation, I wrote an auxiliary function that returns a token. It generates a token if possible; if not, it takes one from the environment, then checks that it is valid.
import os

import google.auth
import google.auth.transport.requests
import requests

GOOGLE_APPLICATION_CREDENTIALS = "GOOGLE_APPLICATION_CREDENTIALS"
GCS_OAUTH_TOKEN = "GCS_OAUTH_TOKEN"
SCOPE = "https://www.googleapis.com/auth/cloud-platform"
URL = "https://www.googleapis.com/oauth2/v1/tokeninfo"
PAYLOAD = "access_token={}"
HEADERS = {"content-type": "application/x-www-form-urlencoded"}
OK = "OK"


def get_gcs_token():
    """
    Returns a GCS access token.
    Ideally, this function generates a new token, which requires that GOOGLE_APPLICATION_CREDENTIALS
    be set in the environment (os.environ).
    Alternatively, the environment variable GCS_OAUTH_TOKEN can be set if a token already exists.
    """
    if GOOGLE_APPLICATION_CREDENTIALS in os.environ:
        # getting the credentials and project details for the GCP project
        credentials, your_project_id = google.auth.default(scopes=[SCOPE])
        # getting request object
        auth_req = google.auth.transport.requests.Request()
        credentials.refresh(auth_req)  # refresh token
        token = credentials.token
    elif GCS_OAUTH_TOKEN in os.environ:
        token = os.environ[GCS_OAUTH_TOKEN]
    else:
        raise ValueError(
            f"""Could not generate a GCS token because {GOOGLE_APPLICATION_CREDENTIALS} is not set in the environment.
Alternatively, the environment variable {GCS_OAUTH_TOKEN} could be set if a token already exists, but it was not."""
        )

    r = requests.post(URL, data=PAYLOAD.format(token), headers=HEADERS)
    if not r.reason == OK:
        raise ValueError(
            f"Could not verify token {token}\n\nResponse from server:\n{r.text}"
        )
    if not r.json()["expires_in"] > 0:
        raise ValueError(f"token {token} expired")
    return token
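Usage, assuming GOOGLE_APPLICATION_CREDENTIALS points at a service-account JSON file (the bucket-listing endpoint and project ID here are just placeholders):
token = get_gcs_token()
r = requests.get(
    "https://storage.googleapis.com/storage/v1/b?project=YOUR_PROJECT_ID",
    headers={"Authorization": f"Bearer {token}"}
)
print(r.json())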
Official documentation code example
I followed this official documentation for Cloud Functions, which works for any GCP API:
import google.auth.transport.requests
import google.oauth2.id_token

auth_req = google.auth.transport.requests.Request()
id_token = google.oauth2.id_token.fetch_id_token(
    auth_req,
    # The second argument is the audience for the ID token (here an OAuth
    # authorisation scope), which you must pass depending on the API.
    # You can see an example of the need for this scope here: https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/insert#authorization-scopes
    "https://www.googleapis.com/auth/bigquery"
)
Now, you can use id_token in the Authorisation header:
headers = {'Authorization': f'Bearer {id_token}'}
Here is some simple Python code to list objects in a certain folder in Google Cloud Storage:
from apiclient import discovery
import apiclient
import json

client = discovery.build('storage', 'v1beta2')

request = client.objects().list(
    bucket='mybucket',
    prefix='myfolder1/myfolder',
    key='A0rsER3odwksawsesse3Dw_d3Ks')  # my API key

try:
    response = request.execute()
    print json.dumps(response, indent=2)
except apiclient.errors.HttpError, e:
    print e
Then I got the failure message below:
https://www.googleapis.com/storage/v1beta2/b/mybucket/o?prefix=myfolder1%2Fmyfolder&alt=json&key=A0rsER3odwksawsesse3Dw_d3Ks returned "Access Not Configured. Please use Google Developers Console to activate the API for your project.">
I have enabled the API for my project, so that should not be the problem the return message describes; maybe it's hitting the wrong project? As I understand it, the flow should be: project in GCS -> bucket -> root folder -> folder-1.
So my question is: how does this code know which project I'm using? Am I missing any code here?
Thanks for all the kind help!
An API key alone is only good for accessing data in public buckets. GCS is able to map API usage back to your project by looking at the API key, which is unique. For anything other than public data access you need proper authentication, via OAuth2 for example. More info here:
https://developers.google.com/storage/docs/json_api/v1/how-tos/authorizing
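For reference, a minimal sketch of the authenticated route using the newer google-cloud-storage client library with Application Default Credentials (bucket and prefix are just the placeholders from your question):
from google.cloud import storage

# Picks up credentials from GOOGLE_APPLICATION_CREDENTIALS / gcloud auth,
# which also tells the client which project the request belongs to.
client = storage.Client()
for blob in client.list_blobs('mybucket', prefix='myfolder1/myfolder'):
    print(blob.name)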
Your code looks OK and it works with public buckets, which I've verified. Ensure you have both Google Cloud Storage and the Google Cloud Storage JSON API enabled in the Developers Console.