To use scheduler_v1.CloudSchedulerClient().location_path() in Python I need a parent built from a projectId and a locationId. I know how to get the projectId in code and the locationId from the terminal, but how do I get the locationId in my code?
I've checked this page (https://cloud.google.com/functions/docs/reference/rpc/google.cloud.location), but there are no examples, so I don't know what to do with it.
import googleapiclient.discovery
from google.cloud import scheduler_v1
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file('/home/myname/folder/service_account.json')
service = googleapiclient.discovery.build('cloudresourcemanager', 'v1', credentials=credentials)
request = service.projects().list()
response = request.execute()

client = scheduler_v1.CloudSchedulerClient()
for project in response['projects']:
    # 'LOCATION_ID' is the placeholder I want to fill in programmatically
    parent = client.location_path(project['projectId'], 'LOCATION_ID')
    for element in client.list_jobs(parent):
        print(element)
Thank you!
I've recently begun using the http://metadata.google.internal/computeMetadata/v1/instance/region endpoint. While it works in App Engine, it's not explicitly documented right now (but it is documented for Cloud Run: https://cloud.google.com/run/docs/reference/container-contract).
The result from that endpoint will be something that looks like this:
projects/12345678901234/regions/us-central1
Obviously, the region is the last part (us-central1).
Here's a sample that I have in Go (remember to set the Metadata-Flavor header to Google):
var region string
{
    domain := "http://metadata.google.internal"
    path := "computeMetadata/v1/instance/region"

    request, err := http.NewRequest(http.MethodGet, domain+"/"+path, nil)
    if err != nil {
        logrus.Errorf("Could not create request: %v", err)
        panic(err)
    }
    request.Header.Set("Metadata-Flavor", "Google")

    response, err := http.DefaultClient.Do(request)
    if err != nil {
        logrus.Errorf("Could not perform request: %v", err)
        panic(err)
    }
    defer response.Body.Close()

    contents, err := ioutil.ReadAll(response.Body)
    if err != nil {
        logrus.Errorf("Could not read contents: %v", err)
        panic(err)
    }

    originalRegion := string(contents)
    logrus.Infof("Contents of %q: %s", path, originalRegion)

    parts := strings.Split(originalRegion, "/")
    if len(parts) > 0 {
        region = parts[len(parts)-1]
    }
}
logrus.Infof("Region: %s", region)
It is not necessary to use cloudresourcemanager to get the project ID; instead, you can use the App Engine environment variable GOOGLE_CLOUD_PROJECT.
You can use the App Engine Admin API to get the location ID; please check this code snippet.
import logging
import os

from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
# Start the discovery service and use the App Engine Admin API
# (cache disabled for App Engine standard to avoid noise in the logs)
service = discovery.build('appengine', 'v1', credentials=credentials, cache_discovery=False)
logging.getLogger('googleapiclient.discovery_cache').setLevel(logging.ERROR)

# Take the project ID from the environment variables
project = os.environ['GOOGLE_CLOUD_PROJECT']

# Get the App Engine application details
req = service.apps().get(appsId=project)
response = req.execute()

# This is the application location ID
location_id = response["locationId"]
In Google App Engine, there are a number of environment variables set by the runtime environment.
You can read them with os.environ.get(variable_name).
https://cloud.google.com/appengine/docs/standard/python3/runtime#environment_variables
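For example, a minimal sketch reading two of the documented variables:

import os

# Both are set automatically in the App Engine standard runtime.
project_id = os.environ.get('GOOGLE_CLOUD_PROJECT')
service_name = os.environ.get('GAE_SERVICE')
print(project_id, service_name)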
A FastAPI-based API written in Python has been deployed as an Azure App Service. The API needs to read and write data in Cosmos DB, and I attempted to use Managed Identity for this purpose, but encountered an error stating "Unrecognized credential type".
These are the key steps I took towards that goal:
Step One: I used Terraform to configure the managed identity for the Azure App Service and assigned the Contributor role to the identity so that it can read and write data in Cosmos DB. The role assignment is carried out in the same file where the App Service is provisioned.
resource "azurerm_linux_web_app" "this" {
name = var.appname
location = var.location
resource_group_name = var.rg_name
service_plan_id = azurerm_service_plan.this.id
app_settings = {
"PROD" = false
"DOCKER_ENABLE_CI" = true
"DOCKER_REGISTRY_SERVER_URL" = data.azurerm_container_registry.this.login_server
"WEBSITE_HTTPLOGGING_RETENTION_DAYS" = "30"
"WEBSITE_ENABLE_APP_SERVICE_STORAGE" = false
}
lifecycle {
ignore_changes = [
app_settings["WEBSITE_HTTPLOGGING_RETENTION_DAYS"]
]
}
https_only = true
identity {
type = "SystemAssigned"
}
data "azurerm_cosmosdb_account" "this" {
name = var.cosmosdb_account_name
resource_group_name = var.cosmosdb_resource_group_name
}
// built-in role that allow the app-service to read and write to an Azure Cosmos DB
resource "azurerm_role_assignment" "cosmosdbContributor" {
scope = data.azurerm_cosmosdb_account.this.id
principal_id = azurerm_linux_web_app.this.identity.0.principal_id
role_definition_name = "Contributor"
}
Step Two: I used the managed identity library to fetch the necessary credentials in the Python code.
from azure.identity import ManagedIdentityCredential
from azure.cosmos.cosmos_client import CosmosClient

# get_endpoint(), DB_NAME, CONTAINER_NAME and query are defined elsewhere in my code
client = CosmosClient(get_endpoint(), credential=ManagedIdentityCredential())
database = client.get_database_client(DB_NAME)
container = database.get_container_client(CONTAINER_NAME)
container.query_items(query)
I received the following error when running the code locally and from Azure (the error can be viewed from the Log stream of the Azure App Service):
raise TypeError(
TypeError: Unrecognized credential type. Please supply the master key as str, or a dictionary or resource tokens, or a list of permissions.
Any help or discussion is welcome.
If you are using the Python SDK, you can pass an Azure AD credential directly; check this sample:
from azure.identity import ClientSecretCredential
from azure.cosmos import CosmosClient

aad_credentials = ClientSecretCredential(
    tenant_id="<azure-ad-tenant-id>",
    client_id="<client-application-id>",
    client_secret="<client-application-secret>")

client = CosmosClient("<account-endpoint>", aad_credentials)
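For the managed-identity setup from the question, the same pattern should work with ManagedIdentityCredential instead: a minimal sketch, assuming an azure-cosmos release recent enough to accept azure-identity token credentials (support was added around 4.3.0; older versions raise the "Unrecognized credential type" error shown above):

from azure.identity import ManagedIdentityCredential
from azure.cosmos import CosmosClient

# "<account-endpoint>" is a placeholder, e.g. https://<account>.documents.azure.com/
client = CosmosClient("<account-endpoint>", credential=ManagedIdentityCredential())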
I am trying to get the URL of a file I'm uploading to Firebase Storage. I want the URL that includes the token at the end, the one that looks like this:
https://firebasestorage.googleapis.com/v0/b/myapp.appspot.com/o/folder%myfile?alt=media&token=mytoken
So far this is my code:
from firebase_admin import credentials, initialize_app, storage

cred = credentials.Certificate("serviceAccountKey.json")
initialize_app(cred, {'storageBucket': 'myapp.appspot.com'})

bucket = storage.bucket()
path = "path/to/myfile"
blob = bucket.blob(path)
blob.upload_from_filename("temp.mp3")

# I only know how to get this URL, but it's not the one that I want
blob.make_public()
url = blob.public_url
I also don't want a signed URL that expires.
I've seen people mention the function getDownloadURL, but I don't know how I can use it with firebase-admin in Python.
I've checked https://googleapis.dev/python/storage/latest/blobs.html, but all I could find about URLs was either signed URLs or public URLs.
Change the security rules for that specific folder.
Make sure you upload publicly visible images to that specific folder.
No access token is required to access such images.
Capturing the media token can only be done with the client SDKs; it is not available in the Admin SDK (firebase-admin for Python).
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    // Explicitly define rules for the 'path/to/myfile/' pattern
    match /path/to/myfile/{allPaths} {
      allow write: if request.auth != null; // Only an authenticated user can write
      allow read: if true;                  // Anyone can read
    }
    // This applies to everything else
    match /{allPaths=**} {
      allow write: if request.auth != null; // Only an authenticated user can write
      allow read: if request.auth != null;  // Only an authenticated user can read
    }
  }
}
Sample Python code:

from urllib.parse import quote

storageBucket = 'myapp.appspot.com'
bucket_path = "path/to/myfile/temp.mp3"
# The object path must be URL-encoded ("/" becomes "%2F") for this endpoint
firebase_storageURL = 'https://firebasestorage.googleapis.com/v0/b/{}/o/{}?alt=media'.format(
    storageBucket, quote(bucket_path, safe=''))
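A quick way to verify the constructed URL (a minimal sketch; it assumes the read rule above allows unauthenticated access):

import urllib.request

with urllib.request.urlopen(firebase_storageURL) as resp:
    data = resp.read()
print(len(data), "bytes downloaded")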
A Cloud SQL instance is not being stopped by Cloud Scheduler after these steps:
Create a Pub/Sub topic that is supposed to trigger the Cloud Function.
Deploy a Cloud Function using the topic created in step 1, with the Python (3.8) code file and requirements below. (Entry point: start_stop)
Create a Cloud Scheduler job to trigger the Cloud Function on a regular basis with the topic created in step 1.
The payload is set to start [CloudSQL instance name] or stop [CloudSQL instance name] to start or stop the specified instance; a test publish is sketched below.
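To exercise the same path without waiting for the scheduler, the payload can be published manually; a minimal sketch (topic and instance names are placeholders):

from google.cloud import pubsub_v1

# Publish the same payload Cloud Scheduler would send.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path('projectID', 'my-topic')
publisher.publish(topic_path, b'stop my-instance-name').result()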
Main.py:
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import base64
from pprint import pprint

credentials = GoogleCredentials.get_application_default()
service = discovery.build('sqladmin', 'v1beta4', credentials=credentials, cache_discovery=False)
project = 'projectID'

def start_stop(event, context):
    print(event)
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    print(pubsub_message)
    command, instance_name = pubsub_message.split(' ', 1)
    if command == 'start':
        start(instance_name)
    elif command == 'stop':
        stop(instance_name)
    else:
        print("unknown command " + command)

def start(instance_name):
    print("starting " + instance_name)
    patch(instance_name, "ALWAYS")

def stop(instance_name):
    print("stopping " + instance_name)
    patch(instance_name, "NEVER")

def patch(instance, activation_policy):
    request = service.instances().get(project=project, instance=instance)
    response = request.execute()
    settings_version = int(response["settings"]["settingsVersion"])
    dbinstancebody = {
        "settings": {
            "settingsVersion": settings_version,
            "activationPolicy": activation_policy
        }
    }
    request = service.instances().update(
        project=project,
        instance=instance,
        body=dbinstancebody)
    response = request.execute()
    pprint(response)
requirements.txt:
google-api-python-client==1.10.0
google-auth-httplib2==0.0.4
google-auth==1.21.1
oauth2client==4.1.3
When I click the RUN NOW button on the stop scheduler job, it executes successfully, but when I navigate to the SQL instance, it is not stopped.
Can someone spot what I am missing? If you need more details, just let me know please; I have just started with GCP. :)
The tier configuration was missing in the body sent to the GCP API:
dbinstancebody = {
    "settings": {
        "settingsVersion": settingsVersion,
        "tier": "db-custom-2-13312",
        "activationPolicy": activation_policy
    }
}
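Rather than hard-coding the tier, it can also be copied from the instance's current settings, which the question's patch() function already fetches (a small variation on the same body):

dbinstancebody = {
    "settings": {
        "settingsVersion": response["settings"]["settingsVersion"],
        "tier": response["settings"]["tier"],
        "activationPolicy": activation_policy
    }
}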
If you click on the deployed function you will see all the details (along with the graphs), and at the very end the errors are displayed too. (My screen didn't fit the whole page, which is why I only noticed this section later on 😅)
The Python code for requesting data from Google Analytics via the API (https://developers.google.com/analytics/devguides/reporting/core/v4/quickstart/service-py) uses oauth2client. The code was last updated in July 2018, and oauth2client has since been deprecated. My question is: can I get the same code with google-auth or oauthlib used instead of oauth2client?
I googled for a solution on how to replace the parts of the code where oauth2client is used, but since I am not a developer I didn't succeed. Below is how I tried to adapt the code from that link to google-auth. Any idea how to fix this?
from googleapiclient.discovery import build
from google.oauth2 import service_account

SCOPES = ['...']
DISCOVERY_URI = ('...')
CLIENT_SECRETS_PATH = 'client_secrets.json'  # Path to client_secrets.json file.
VIEW_ID = '...'

def initialize_analyticsreporting():
    """Initializes the analyticsreporting service object.
    Returns:
      analytics, an authorized analyticsreporting service object.
    """
    # Load service account credentials with the required scopes.
    credentials = service_account.Credentials.from_service_account_file(
        CLIENT_SECRETS_PATH, scopes=SCOPES)
    # Build the service object; google-auth credentials can be passed directly.
    analytics = build('analyticsreporting', 'v4', credentials=credentials,
                      discoveryServiceUrl=DISCOVERY_URI)
    return analytics
def get_report(analytics):
    # Use the Analytics Service Object to query the Analytics Reporting API V4.
    return analytics.reports().batchGet(
        body={
            "reportRequests": [
                {
                    "viewId": VIEW_ID,
                    "dateRanges": [
                        {
                            "startDate": "2019-01-01",
                            "endDate": "yesterday"
                        }],
                    "dimensions": [
                        {"name": "ga:transactionId"},
                        {"name": "ga:sourceMedium"},
                        {"name": "ga:date"}],
                    "metrics": [
                        {"expression": "ga:transactionRevenue"}]
                }]
        }
    ).execute()
def printResults(response):
    for report in response.get("reports", []):
        columnHeader = report.get("columnHeader", {})
        dimensionHeaders = columnHeader.get("dimensions", [])
        metricHeaders = columnHeader.get("metricHeader", {}).get("metricHeaderEntries", [])
        rows = report.get("data", {}).get("rows", [])
        for row in rows:
            dimensions = row.get("dimensions", [])
            dateRangeValues = row.get("metrics", [])
            for header, dimension in zip(dimensionHeaders, dimensions):
                print(header + ": " + dimension)
            for i, values in enumerate(dateRangeValues):
                for metric, value in zip(metricHeaders, values.get("values")):
                    print(metric.get("name") + ": " + value)
def main():
    analytics = initialize_analyticsreporting()
    response = get_report(analytics)
    printResults(response)

if __name__ == '__main__':
    main()
I need to obtain the response as JSON with the given dimensions and metrics from Google Analytics.
For those running into this problem and wishing to port to the newer auth libraries: do a diff between the two versions of the short/simple Google Drive API sample in the code repo for the G Suite APIs intro codelab to see what needs to be updated (and what can stay as-is). The bottom line is that the API client library code can remain the same; all you do is swap out the auth libraries underneath.
Note that sample is only for user acct auth... for svc acct auth, the update is similar, but I don't have an example of that yet (working on one though... will update this once it's published).
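As a rough illustration of that swap for user account auth: a minimal sketch, assuming an installed-app OAuth flow and an illustrative Drive scope, with google-auth-oauthlib taking over from oauth2client's flow and Storage:

from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/drive.readonly']  # example scope

# oauth2client's flow_from_clientsecrets + Storage become this:
flow = InstalledAppFlow.from_client_secrets_file('client_secrets.json', SCOPES)
creds = flow.run_local_server(port=0)

# The API client code itself is unchanged.
drive = build('drive', 'v3', credentials=creds)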
I have a problem using exchangelib in Python. I tried this example code:
from exchangelib import DELEGATE, Account, Credentials

creds = Credentials(
    username='xxxx\\username',
    password="mypassword"
)
account = Account(
    primary_smtp_address='surname.name#xxxx.fr',
    credentials=creds,
    autodiscover=True,
    access_type=DELEGATE
)

# Print the first 10 inbox messages in reverse order
for item in account.inbox.all().order_by('-datetime_received')[:10]:
    print(item.subject, item.body, item.attachments)
I tried different usernames, but nothing works and I always get the same error message:
AutoDiscoverFailed: All steps in the autodiscover protocol failed
By the way, just in case it could help, I tried the Exchange Web Services code in C# and it works perfectly fine with these credentials; I can send a mail:
static void Main(string[] args)
{
    ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2010_SP2);
    // The last parameter is the domain name
    service.Credentials = new WebCredentials("username", "password", "xxxx.lan");
    service.AutodiscoverUrl("surname.name#xxxx.fr", RedirectionUrlValidationCallback);

    EmailMessage email = new EmailMessage(service);
    email.ToRecipients.Add("surname.name#xxxx.fr");
    email.Subject = "salut ";
    email.Body = new MessageBody("corps du message");
    email.Send();
}

private static bool RedirectionUrlValidationCallback(string redirectionUrl)
{
    // The default for the validation callback is to reject the URL.
    bool result = false;
    Uri redirectionUri = new Uri(redirectionUrl);

    /* Validate the contents of the redirection URL. In this simple validation
       callback, the redirection URL is considered valid if it is using HTTPS
       to encrypt the authentication credentials. */
    if (redirectionUri.Scheme == "https")
    {
        result = true;
    }
    return result;
}
Thanks in advance!
I finally succeeded with this configuration:
from exchangelib import DELEGATE, Account, Credentials, Configuration

creds = Credentials(
    username="domain_name\\username",
    password="password"
)
config = Configuration(server='mail.solutec.fr', credentials=creds)
account = Account(
    primary_smtp_address="my email address",
    autodiscover=False,
    config=config,
    access_type=DELEGATE
)
For those who run into the same problem: you can find your domain_name by right-clicking on "Computer" and choosing Properties.
The username and password are the ones you use to connect to your company mailbox, for example. For the server in Configuration, this one works for me: "mail.solutec.fr", where solutec is the name of my company and fr is for France.
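With autodiscover disabled, the inbox loop from the question then works against this account object:

# Print the first 10 inbox messages in reverse order, as in the question.
for item in account.inbox.all().order_by('-datetime_received')[:10]:
    print(item.subject)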
Looks like this autodiscover guy really doesn't like me ^^
Thanks for your help anyway and have a good day !