I have code below that was given to me to list Google Cloud Service Accounts for a specific Project.
import os
from googleapiclient import discovery
from gcp import get_key
"""gets all Service Accounts from the Service Account page"""
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = get_key()
service = discovery.build('iam', 'v1')
project_id = 'projects/<google cloud project>'
request = service.projects().serviceAccounts().list(name=project_id)
response = request.execute()
accounts = response['accounts']
for account in accounts:
    print(account['email'])
This code works perfectly and prints the accounts as I need them. What I'm trying to figure out is:
Where can I go to see how to construct code like this? I found a site that has references to the Python API Client, but I can't seem to figure out how to arrive at the code above from it. I can see the method to list the Service Accounts, but it's still not giving me enough information.
Is there somewhere else I should be going to educate myself? Any information you have is appreciated so I don't pull out the rest of my hair.
Thanks, Eric
Let me share with you this documentation page, where there is a detailed explanation of how to build a script such as the one you shared, and what each line of code means. It is extracted from the documentation of ML Engine, not IAM, but it uses the same Python Google API Client Library, so just ignore the references to ML and the rest will be useful for you.
In any case, here is a commented version of your code, so that you understand it better:
# Imports for the Client API Libraries and the key management
import os
from googleapiclient import discovery
from gcp import get_key
# Set the environment variable pointing to the credentials for Google Cloud Platform
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = get_key()
# Build a Python representation of the REST API
service = discovery.build('iam', 'v1')
# Define the Project ID of your project
project_id = 'projects/<google cloud project>'
"""Until this point, the code is general to any API
From this point on, it is specific to the IAM API"""
# Create the request using the appropriate 'serviceAccounts' API
# You can substitute serviceAccounts with any other available API
request = service.projects().serviceAccounts().list(name=project_id)
# Execute the request that was built in the previous step
response = request.execute()
# Process the data from the response obtained with the request execution
accounts = response['accounts']
for account in accounts:
    print(account['email'])
Once you understand the first part of the code, the last lines are specific to the API you are using, which in this case is the Google IAM API. In this link, you can find detailed information on the methods available and what they do.
Then, you can follow the Python API Client Library documentation that you shared in order to see how to call the methods. For instance, in the code you shared, the method used depends on service, which is the Python representation of the API, and then goes down the tree of methods in the last link, as in projects(), then serviceAccounts(), and finally the specific list() method, which ends up in request = service.projects().serviceAccounts().list(name=project_id).
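One more thing worth knowing (this is my addition, not something in your original code): the generated client also exposes a list_next() helper for list methods that support paging, so a sketch like the following, using the same placeholder project name, would walk through every page of service accounts rather than just the first one:

from googleapiclient import discovery

service = discovery.build('iam', 'v1')
request = service.projects().serviceAccounts().list(name='projects/<google cloud project>')

while request is not None:
    response = request.execute()
    for account in response.get('accounts', []):
        print(account['email'])
    # list_next() returns None once there are no more pages
    request = service.projects().serviceAccounts().list_next(request, response)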
Finally, just in case you are interested in the other available APIs, please refer to this page for more information.
I hope the comments I made on your code were of help, and that the documentation shared makes it easier for you to understand how a script like that one can be put together.
You can use IPython with googleapiclient installed; install it with something like:
sudo pip install --upgrade google-api-python-client
You can go to the interactive Python console and do:
from googleapiclient import discovery
dir(discovery)
help(discovery)
dir() gives all the entries that an object has, so:
a = ''
dir(a)
will tell you what you can do with a string object. Doing help(a) will give help for the string object. You can dig deeper:
dir(discovery)
# and then for instance
help(discovery.re)
You can run your script in steps, print and inspect each result, and do some research as you go. Once you have something working, run %history to print out your session and turn it into a solution that can be triggered as a script.
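For instance, a rough interactive session for the IAM API from the first question might look like the following (this is my own sketch; it assumes your credentials are already set up as in that question, and the exact attributes you see will depend on the library version):

from googleapiclient import discovery

service = discovery.build('iam', 'v1')

dir(service)                                   # shows projects(), new_batch_http_request(), ...
dir(service.projects().serviceAccounts())      # shows list(), get(), create(), ...
help(service.projects().serviceAccounts().list)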
Related
I need to get the events for the current day from a personal Outlook calendar. I have found next to no feasible resources online besides maybe Microsoft's tutorial (https://learn.microsoft.com/en-us/graph/tutorials/python), but I do not want to build a Django app. Can anyone provide some other resources?
Also: I have seen a lot of people calling APIs by using a GET <url> command. I cannot for the life of me understand how or where you can use this. Am I missing something crucial when it comes to using APIs?
First, you should know that if you want to call the Microsoft Graph API, you need to get an access token and add it to the request header, as in the screenshot below. What I showed in the screenshot is creating calendar events, but the calls are similar. Therefore, you can't avoid generating the token.
Then there are two paths in front of you: if you are composing a web app, you can follow this section to find a suitable sample, and if you are composing a daemon application, that means you need to use the client credentials flow here, and you may refer to this section.
Anyway, whether you use the SDK or send HTTP requests to call the API, you need to choose a suitable flow to obtain the access token.
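To make that concrete, here is a rough sketch (my addition, not part of the original answer) of the client credentials flow with the msal library, followed by a plain requests.get against the Graph calendar endpoint. The tenant ID, client ID, client secret and user ID are placeholders you would fill in from your own Azure app registration, and the app needs the Calendars.Read application permission granted. This is also what people mean by GET <url>: it is just an HTTP GET request to the API endpoint with the token in the Authorization header.

import msal
import requests

TENANT_ID = "your-tenant-id"
CLIENT_ID = "your-client-id"
CLIENT_SECRET = "your-client-secret"
USER_ID = "user@example.com"

# Acquire an app-only token with the client credentials flow
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

# Call Microsoft Graph with the token in the Authorization header
response = requests.get(
    f"https://graph.microsoft.com/v1.0/users/{USER_ID}/events",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
print(response.json())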
For this purpose, without calling the Microsoft Graph API via raw requests in Python, there is a PyPI package named O365.
By following this procedure you can easily read a Microsoft calendar:
install the package: pip install O365
register an application in the Microsoft Azure console and keep the application (client) ID as well as the client secret; this article can help you.
check the signInAudience: it should be AzureADandPersonalMicrosoftAccount, not PersonalMicrosoftAccount, within the Microsoft Azure Manifest; otherwise, you can edit that.
next, set the delegated permissions to the scopes you want; in your case that's Calendars.Read. Here's a snapshot of my configuration in Azure:
Now it's time to dive into the code:
from O365 import Account
CLIENT_ID = "xxx"
CLIENT_SECRET = "xxx"
credentials = (CLIENT_ID, CLIENT_SECRET)
scopes = ['Calendars.Read']
account = Account(credentials)
if not account.is_authenticated:
    account.authenticate(scopes=scopes)
    print('Authenticated!')
schedule = account.schedule()
calendar = schedule.get_default_calendar()
events = calendar.get_events(include_recurring=False)
for event in events:
    print(event)
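Since the question asks specifically for the current day's events, here is a rough, untested sketch of how that filter could look with the same library; the query helpers shown (new_query, greater_equal, on_attribute, less_equal) are my reading of the O365 API, so check its documentation for the exact names:

import datetime as dt

today = dt.datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
tomorrow = today + dt.timedelta(days=1)

# Restrict events to today's window before fetching them
query = calendar.new_query('start').greater_equal(today)
query.chain('and').on_attribute('end').less_equal(tomorrow)

for event in calendar.get_events(query=query, include_recurring=True):
    print(event)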
What is the python programmatic alternative to the gcloud command line gcloud auth print-identity-token?
I am trying to invoke a Google Cloud Function by HTTP trigger (only for authenticated users) and I need to pass the identity token in the Authorization header. I have a method which works great when I run the code on GCP App Engine. However, I struggle to find a way to get this identity token when I run the program on my own machine (where I can create the token with the gcloud command line gcloud auth print-identity-token).
I found how to create an access token according to this answer, but I didn't manage to understand how I can create an identity token.
Thank you in advance!
Great topic! It's been a long, long road, with months of tests and discussions with Google.
TL;DR: you can't generate an identity token with your user credentials; you need a service account (or to impersonate a service account) to generate an identity token.
If you have a service account key file, I can share a piece of code to generate an identity token, but generating and keeping a service account key file is generally a bad practice.
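For reference, and keeping in mind the caveat above about key files, a minimal sketch of such code with the google-auth library could look like the following (this is my own sketch, not the author's code; the key file path and audience URL are placeholders):

import google.auth.transport.requests
from google.oauth2 import service_account

KEY_FILE = "/path/to/key.json"
FUNCTION_URL = "https://<region>-<project>.cloudfunctions.net/<function>"

credentials = service_account.IDTokenCredentials.from_service_account_file(
    KEY_FILE, target_audience=FUNCTION_URL)

# Refreshing populates credentials.token with a signed identity token for that audience
credentials.refresh(google.auth.transport.requests.Request())
print(credentials.token)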
I released an article on this and 2 merge requests to implement an evolution in the Java Google auth library (I'm more of a Java developer than a Python developer, even if I also contribute to Python OSS projects), here and here. You can read them if you want to understand what is missing and how the gcloud command works today.
On the latest merge request, I understood that something is coming from Google internally, but up to now, I haven't seen anything...
If you have a service account you can impersonate, this is one way to get an ID token in Python from a local/dev machine.
import google.auth
from google.auth.transport.requests import AuthorizedSession
def impersonated_id_token():
    credentials, project = google.auth.default(scopes=['https://www.googleapis.com/auth/cloud-platform'])
    authed_session = AuthorizedSession(credentials)
    sa_to_impersonate = "<SA_NAME>@<GCP_PROJECT>.iam.gserviceaccount.com"
    request_body = {"audience": "<SOME_URL>"}
    # Send the body as JSON; the response JSON contains a "token" field with the ID token
    response = authed_session.post(
        f'https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/{sa_to_impersonate}:generateIdToken',
        json=request_body)
    return response

if __name__ == "__main__":
    print(impersonated_id_token().json())
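As a usage note (my addition, not part of the original answer): the JSON response carries the ID token in a "token" field, which you would then pass in the Authorization header when invoking the function, for example:

import requests

id_token = impersonated_id_token().json()["token"]
resp = requests.get("<SOME_URL>",  # the same audience URL used in the request body above
                    headers={"Authorization": f"Bearer {id_token}"})
print(resp.status_code, resp.text)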
This is more of a design question on what to use within Google Cloud's infrastructure to obtain the results from a Python script.
Take the following scenario: we have over 60 projects and one central project for Stackdriver logging and such.
It is from this central project I want to run a Python script (using Cloud Scheduler which then triggers the Cloud Function) to obtain a list of disks that haven't had their snapshot taken in the past 24 hours, those that aren't assigned to a snapshot schedule, and the snapshot schedules that have names that do not match our naming convention. I have the script already prepared, and it works very well from my workstation (producing a list of dictionaries of the desired results per project).
However, my question is: where should I send the results to? And how could I then have an email sent out to the appropriate people to action it?
I've played about with sending the object attributes to Pub/Sub within the central project, but this requires me to manually pull the messages, and I can't see any way of scheduling the pull request. I also don't see an option of sending out an email from Pub/Sub to an email address, and so the only option seems to be to create an email Cloud Function which is then triggered whenever one of the Subscriptions receives a new message from the first Cloud Function containing the original script.
I suppose I could simply set this up on one of our Windows VM instances and convert the script to PowerShell, but I was rather hoping to keep it out of a VM if at all possible.
Has anyone done this before? And if so, what did you use to get the desired results?
I think you can use the SendGrid API to send emails from your Cloud Function. It's very easy to set up, it has a free plan that includes 12,000 emails per month, and it has an API for Python :D.
You can sign up using the Google Cloud Marketplace, selecting the free plan.
Then create an API key for your code here. If you only need to send mails, I suggest selecting the Restricted Access option and, for Mail Send, giving Full Access or whatever level you think will work for you.
Here's a code snippet for you:
import logging

from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail, Email
from python_http_client.exceptions import HTTPError

def send_mail(request):
    log = logging.getLogger(__name__)
    SENDGRID_API_KEY = 'SG.blahblahblah'
    sg = SendGridAPIClient(SENDGRID_API_KEY)
    """
    Maybe here goes the code you use to check what you need
    """
    APP_NAME = "Testing"
    html_content = f"""
    Here goes your mail body in HTML format
    """
    message = Mail(
        to_emails="dest@a.domain.com",
        from_email=Email('sender@your.domain.com', "Your name or your app name"),
        subject="Warning!!!!",
        html_content=html_content
    )
    try:
        response = sg.send(message)
        log.info(f"email.status_code={response.status_code}")
        return 'Your mail was sent!'
    except HTTPError as e:
        log.error(e)
And don't forget to add the sendgrid lib to your requirements.txt file:
# Function dependencies, for example:
# package>=version
sendgrid
Hope this can help you.
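One extra note, since the question mentions triggering the email function from Pub/Sub rather than over HTTP: a background (Pub/Sub-triggered) Cloud Function in Python uses a different signature, receiving the message instead of a request object. A minimal sketch, assuming the first function publishes its results as a JSON payload:

import base64
import json

def send_mail(event, context):
    # Pub/Sub delivers the message payload base64-encoded in event['data']
    payload = json.loads(base64.b64decode(event['data']).decode('utf-8'))
    # ... build the SendGrid Mail from `payload` and send it as shown above ...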
I have problems with the authentication in the Python Library of Google Cloud API.
At first it worked for some days without problems, but suddenly the API calls stopped showing up in the API overview of the Google Cloud Platform.
I created a service account and stored the json file locally. Then I set the environment variable GCLOUD_PROJECT to the project ID and GOOGLE_APPLICATION_CREDENTIALS to the path of the json file.
from google.cloud import speech
client = speech.Client()
print(client._credentials.service_account_email)
prints the correct service account email.
The following code transcribes the audio_file successfully, but the Dashboard for my Google Cloud project doesn't show anything for the activated Speech API Graph.
import io
with io.open(audio_file, 'rb') as f:
    audio = client.sample(f.read(), source_uri=None, sample_rate=48000, encoding=speech.encoding.Encoding.FLAC)
    alternatives = audio.sync_recognize(language_code='de-DE')
At some point the code also ran into some errors regarding the usage limit. I guess due to the unsuccessful authentication, the free/limited option is used somehow.
I also tried the alternative option for authentication by installing the Google Cloud SDK and gcloud auth application-default login, but without success.
I have no idea where to start troubleshooting the problem.
Any help is appreciated!
(My system is running Windows 7 with Anaconda)
EDIT:
The error count ("Fehler" in the German-language console) is increasing with calls to the API. How can I get detailed information about the errors?
Make sure you are using an absolute path when setting the GOOGLE_APPLICATION_CREDENTIALS environment variable. Also, you might want to try inspecting the access token using OAuth2 tokeninfo and make sure it has "scope": "https://www.googleapis.com/auth/cloud-platform" in its response.
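Here is a hedged sketch (my addition, not from the original answer) of that tokeninfo check, minting an access token from the same application default credentials and inspecting its scope:

import google.auth
import google.auth.transport.requests
import requests

# Load the same credentials the client library picks up via GOOGLE_APPLICATION_CREDENTIALS
credentials, project = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
credentials.refresh(google.auth.transport.requests.Request())

# Ask the tokeninfo endpoint what the token is actually scoped to
info = requests.get('https://www.googleapis.com/oauth2/v3/tokeninfo',
                    params={'access_token': credentials.token})
print(info.json())  # look for "scope": "https://www.googleapis.com/auth/cloud-platform"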
Sometimes you will get different error information if you initialize the client with GRPC enabled:
0.24.0:
speech_client = speech.Client(_use_grpc=True)
0.23.0:
speech_client = speech.Client(use_gax=True)
Usually it's an encoding issue; can you try with the sample audio, or try generating LINEAR16 samples using something like the Unix rec tool:
rec --channels=1 --bits=16 --rate=44100 audio.wav trim 0 5
...
with io.open(speech_file, 'rb') as audio_file:
    content = audio_file.read()
    audio_sample = speech_client.sample(
        content,
        source_uri=None,
        encoding='LINEAR16',
        sample_rate=44100)
Other notes:
Sync Recognize is limited to 60 seconds of audio; you must use async recognition for longer audio
If you haven't already, set up billing for your account
With regard to the usage problem, the issue is in fact that when you use the new google-cloud library to access ML APIs, it seems everyone authenticates against a project shared by everyone (hence it says you've used up your limit even though you've not used anything). To check and confirm this, you can call an ML API that you have not enabled using the Python client library, and it will give you a result even though it shouldn't. This problem persists across the other language client libraries and operating systems, so I suspect it's an issue with their gRPC layer.
Because of this, to ensure consistency I always use the older googleapiclient that uses my API key. Here is an example to use the translate API:
from googleapiclient import discovery
service = discovery.build('translate', 'v2', developerKey='')
service_request = service.translations().list(q='hello world', target='zh')
result = service_request.execute()
print(result)
For the speech API, it's something along the lines of:
from googleapiclient import discovery
service = discovery.build('speech', 'v1beta1', developerKey='')
# syncrecognize() expects a request body with 'config' and 'audio' fields
# (see the v1beta1 reference linked below); the values here are placeholders
service_request = service.speech().syncrecognize(body={
    'config': {'encoding': 'LINEAR16', 'sampleRate': 16000},
    'audio': {'uri': 'gs://your-bucket/audio.raw'}})
result = service_request.execute()
print(result)
You can get the list of the discovery APIs at https://developers.google.com/api-client-library/python/apis/ with the speech one located in https://developers.google.com/resources/api-libraries/documentation/speech/v1beta1/python/latest/.
One of the other benefits of using the discovery library is that you get a lot more options compared to the current library, although often times it's a bit more of a pain to implement.
I'd like to get a list of deployed versions from appengine, either from the remote API or via appcfg.py. I can't seem to find any way to do it, certainly not a documented way. Does anyone know of any way to do this (even undocumented)?
You can list deployed versions in the admin console under "Admin Logs". Short of screen-scraping this page, there's no way to access this data programmatically.
You can submit this as an enhancement request to the issue tracker.
I was able to do this by copying some of the RPC code from appcfg.py into my application. I posted up this gist that goes into detail on how to do this, but I will repeat them here for posterity.
Install the python API client. This will give you the OAuth2 and httplib2 libraries you need to interact with Google's RPC servers from within your application.
Copy this file from the GAE SDK installed on your development machine: google/appengine/tools/appengine_rpc_httplib2.py into your GAE webapp.
Obtain a refresh token by executing appcfg.py list_versions . --oauth2 from your local machine. This will open a browser so you can log in to your Google Account. Then, you can find the refresh_token in ~/.appcfg_oauth2_tokens.
Modify and run the following code inside of a web handler, and enjoy.
import yaml

from third_party.google_api_python_client import appengine_rpc_httplib2
# Not-so-secret IDs cribbed from appcfg.py
# https://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/tools/appcfg.py#144
APPCFG_CLIENT_ID = '550516889912.apps.googleusercontent.com'
APPCFG_CLIENT_NOTSOSECRET = 'ykPq-0UYfKNprLRjVx1hBBar'
APPCFG_SCOPES = ['https://www.googleapis.com/auth/appengine.admin']
source = (APPCFG_CLIENT_ID,
          APPCFG_CLIENT_NOTSOSECRET,
          APPCFG_SCOPES,
          None)
rpc_server = appengine_rpc_httplib2.HttpRpcServerOauth2(
    'appengine.google.com',
    # NOTE: Here's where the refresh token is used
    "your OAuth2 refresh token goes here",
    "appcfg_py/1.8.3 Darwin/12.5.0 Python/2.7.2.final.0",
    source,
    host_override=None,
    save_cookies=False,
    auth_tries=1,
    account_type='HOSTED_OR_GOOGLE',
    secure=True,
    ignore_certs=False)
# NOTE: You must insert the correct app_id here, too
response = rpc_server.Send('/api/versions/list', app_id="khan-academy")
# The response is in YAML format
parsed_response = yaml.safe_load(response)
if not parsed_response:
    return None
else:
    return parsed_response
Looks like Google has recently released a get_versions() function in the google.appengine.api.modules package. I recommend using that over the hack I implemented in my other answer.
Read more at:
https://developers.google.com/appengine/docs/python/modules/functions
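For reference, the call is about as simple as this; the module name is optional and defaults to the current module (the 'default' shown here is just an example):

from google.appengine.api import modules

# Returns the list of versions deployed for the given module (service)
versions = modules.get_versions(module='default')
print(versions)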