I've got a Python Google Cloud Function called "artificiellt_osterbotten" (not a Firebase function), and I want to trigger it through Firebase Hosting. This is my firebase.json file:
{
  "hosting": {
    "public": "public",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"],
    "rewrites": [
      {
        "source": "/artificiellt_osterbotten",
        "function": "artificiellt_osterbotten"
      }
    ]
  }
}
The rewrite seems to be picked up, but all I'm getting is a 404. I'm assuming this has to do with a disconnect between Firebase and GCP; the function does show up in the Firebase console, however.
Anyone got any idea as to what's the issue here? Is it even possible to trigger GCP Cloud Functions from Firebase Hosting?
I have upgraded my Firebase plan to Blaze.
Turns out I just had to have the function located in us-central1 for it to work. I wish the CLI had warned me; it would have saved me a few hours!
For future readers: there is an open issue that Firebase is working on about this, and it appears that either a CLI warning will be added or multi-region support will land.
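If multi-region support has landed by the time you read this, the Hosting rewrite may accept a region key alongside the function name. A hedged sketch (the region key is an assumption about newer CLI versions, so verify against the current Hosting docs before relying on it):

{
  "hosting": {
    "rewrites": [
      {
        "source": "/artificiellt_osterbotten",
        "function": "artificiellt_osterbotten",
        "region": "europe-west1"
      }
    ]
  }
}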
I am trying to get results from the Google Cloud Text-to-Speech API, as per the demo on the official page under "Show JSON" (https://cloud.google.com/text-to-speech), by running this code:
import requests

r = requests.post('https://texttospeech.googleapis.com/v1beta1/text:synthesize', json={
    "audioConfig": {
        "audioEncoding": "LINEAR16",
        "pitch": 0,
        "speakingRate": 1
    },
    "input": {
        "text": "Google Cloud Text-to-Speech enables developers to synthesize natural-sounding speech with 100+ voices, available in multiple languages and variants. It applies DeepMind’s groundbreaking research in WaveNet and Google’s powerful neural networks to deliver the highest fidelity possible. As an easy-to-use API, you can create lifelike interactions with your users, across many applications and devices."
    },
    "voice": {
        "languageCode": "en-US",
        "name": "en-US-Wavenet-D"
    }
})
print(r.json())
However I am receiving this error message:
{'error': {'code': 403, 'message': 'The request is missing a valid API key.', 'status': 'PERMISSION_DENIED'}}
This is despite having done the account, API key, and environment variable setup exactly as detailed here: https://cloud.google.com/text-to-speech/docs/before-you-begin
So I am at a bit of a loss as to what I'm doing wrong. Any help would be much appreciated!
As the API response says, your request is missing the API key.
To fix this, you can either inject an Authorization: Bearer token into your request, where the value is an access token (the output of gcloud auth application-default print-access-token), as sketched below:
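A minimal sketch of this option, reusing the request from the question (the shell-out to gcloud is an assumption; it requires an installed gcloud CLI that has run gcloud auth application-default login):

import subprocess

import requests

# Access token from the locally authenticated gcloud CLI
token = subprocess.check_output(
    ["gcloud", "auth", "application-default", "print-access-token"]
).decode().strip()

payload = {
    "audioConfig": {"audioEncoding": "LINEAR16", "pitch": 0, "speakingRate": 1},
    "input": {"text": "Hello from Cloud Text-to-Speech."},
    "voice": {"languageCode": "en-US", "name": "en-US-Wavenet-D"},
}

r = requests.post(
    "https://texttospeech.googleapis.com/v1beta1/text:synthesize",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
print(r.json())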
-OR-
Since I can see you're using Python, I suggest you use the client library, which picks up your credentials automatically (given that you've set the GOOGLE_APPLICATION_CREDENTIALS environment variable).
Installing via pip: pip install --upgrade google-cloud-texttospeech
Read more about using the client library here
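For reference, a minimal sketch with the client library, mirroring the official quickstart (the output filename is arbitrary):

from google.cloud import texttospeech

# Credentials are read from GOOGLE_APPLICATION_CREDENTIALS automatically
client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Hello from Cloud Text-to-Speech."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US", name="en-US-Wavenet-D"
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.LINEAR16
    ),
)

# LINEAR16 responses include a WAV header, so the bytes are playable directly
with open("output.wav", "wb") as out:
    out.write(response.audio_content)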
I have a Google Cloud Function that I would like to call from my Google App Script on a Google Form submission.
The process will be: 1) the user submits the Google Form, 2) an onFormSubmit trigger runs the Apps Script function, 3) the Apps Script function triggers the Cloud Function.
So far:
The script trigger works; the logs show it's listening correctly.
The Cloud Function works: I tested it in the Cloud Functions testing interface, and when I run it from there it does what I need it to do, which is to update a Google Sheet and upload data to BigQuery.
The problem comes from calling that function from the Apps Script associated with my Google Form submission trigger. There seems to be no communication there, as the Cloud Function logs show nothing happening when the trigger fires.
This is my app script code:
function onSubmit() {
  var url = "myurl";
  const token = ScriptApp.getIdentityToken();
  var options = {
    'method': 'get',
    'headers': { 'Authorization': 'Bearer ' + token }
  };
  var data = UrlFetchApp.getRequest(url, options);
  return data;
}
And my Cloud Function is an HTTP one in Python that starts with:
def numbers(request):
Some troubleshooting:
When I test it, the execution log shows no errors
If I try to change UrlFetchApp to .fetch or change getIdentityToken to getOAuthToken, I get a 401 error for both
I added the following to my oauthScopes:
"openid",
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/script.container.ui",
"https://www.googleapis.com/auth/script.external_request",
"https://www.googleapis.com/auth/documents"```
I'm running both from the same Google Cloud account
I added myself to permissions in Cloud Function settings too
Any ideas of why the two aren't communicating would be appreciated!
In case anyone has a similar issue: I was able to resolve this. Since my email address was associated with an organizational account, my Apps Script project and GCP project didn't allow the correct permissions.
In the Apps Script settings, I couldn't associate the script with the GCP project holding that function because the project was outside my organization. Once I set up the Cloud Function in my organization's GCP project, I was able to change the project manually in the settings, and my function worked properly on the trigger.
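As a side note, if you ever need the function itself to check the caller rather than relying on Cloud Functions IAM, here is a hedged sketch of what the receiving Python function might do with the identity token Apps Script sends (the numbers name comes from the question; the allowed email is a hypothetical placeholder):

from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

def numbers(request):
    # Apps Script sends the identity token in the Authorization header
    auth_header = request.headers.get("Authorization", "")
    if not auth_header.startswith("Bearer "):
        return ("Missing bearer token", 401)

    token = auth_header[len("Bearer "):]
    try:
        # Verifies the token's signature and expiry and returns its claims
        claims = id_token.verify_oauth2_token(token, google_requests.Request())
    except ValueError:
        return ("Invalid token", 401)

    # Hypothetical allow-list check on the caller's identity
    if claims.get("email") != "you@your-org.example":
        return ("Forbidden", 403)

    return ("OK", 200)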
From the documentation at https://developers.google.com/vault/guides/exports, I've been able to create, list, and retrieve exports, but I haven't found any way to download the exported data associated with a specific export. Is there a way to download the exported files via the API, or is this only available through the Vault UI?
There is a cloudStorageSink key in the export metadata, but trying to use the values it provides with the Cloud Storage API results in a generic permission error (403).
Example export metadata response:
{
  "status": "COMPLETED",
  "cloudStorageSink": {
    "files": [
      {
        "md5Hash": "da5e3979864d71d1e3ac776b618dcf48",
        "bucketName": "408d9135-6155-4a43-9d3c-424f124b9474",
        "objectName": "a740999b-e11b-4af5-b8b1-6c6def35d677/exportly-41dd7886-fe02-432f-83c-a4b6fd4520a5/Test_Export-1.zip",
        "size": "37720"
      },
      {
        "md5Hash": "d345a812e15cdae3b6277a0806668808",
        "bucketName": "408d9135-6155-4a43-9d3c-424f124b9474",
        "objectName": "a507999b-e11b-4af5-b8b1-6c6def35d677/exportly-41dd6886-fb02-4c2f-813c-a4b6fd4520a5/Test_Export-metadata.xml",
        "size": "8943"
      },
      {
        "md5Hash": "21e91e1c60e6c07490faaae30f8154fd",
        "bucketName": "408d9135-6155-4a43-9d3c-424f124b9474",
        "objectName": "a503959b-e11b-4af5-b8b1-6c6def35d677/exportly-41dd6786-fb02-42f-813c-a4b6fd4520a5/Test_Export-results-count.csv",
        "size": "26"
      }
    ]
  },
  "stats": {
    "sizeInBytes": "46689",
    "exportedArtifactCount": "7",
    "totalArtifactCount": "7"
  },
  "name": "Test Export",
  ...
}
There are two approaches that can achieve what you need:
The first:
Using OAuth 2.0 refresh and access tokens. However, this requires the intervention of the user to acknowledge your app's access.
You can find a nice playground supplied by Google, and more info, here: https://developers.google.com/oauthplayground/.
You will first need to choose your desired API scope (in your case: https://www.googleapis.com/auth/devstorage.full_control, under the Cloud Storage JSON API v1 section).
Then, you will need to log in with an admin account and click "Exchange authorization code for tokens" (the "Refresh token" and "Access token" fields will be filled automatically).
Lastly, you will need to choose the right URL to perform your request. I suggest using "List possible operations" to choose the right URL. You will need to choose "Get Object - Retrieve the object" under Cloud Storage API v1 (notice that there are several options named "Get Object"; be sure to choose the one under Cloud Storage API v1, not the one under Cloud Storage JSON API v1). Now just enter your bucket and object name in the appropriate placeholders and click "Send the request".
The second:
Programmatically download it using the Google client libraries. This is the approach suggested by #darkfolcer; however, I believe the documentation provided by Google is insufficient and thus does not really help. If a Python example would help, you can find one in the answer to the following question: How to download files from Google Vault export immediately after creating it with Python API?
Once all the exports are created, you'll need to wait for them to be completed. You can use https://developers.google.com/vault/reference/rest/v1/matters.exports/list to check the status of every export in a matter. In the response, refer to the "exports" array and check each entry's "status"; any that say "COMPLETED" can be downloaded, as sketched below.
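A hedged sketch of that status check with the google-api-python-client (the service-account key file, scope choice, and matter ID are placeholders/assumptions):

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/ediscovery.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES  # hypothetical key file
)

vault = build("vault", "v1", credentials=creds)
exports = vault.matters().exports().list(matterId="your-matter-id").execute()

completed = [e for e in exports.get("exports", [])
             if e.get("status") == "COMPLETED"]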
To download a completed export, go to the "cloudStorageSink" object of each export and take the "bucketName" and "objectName" values of the first entry in the "files" array. You'll need the Cloud Storage API and these two values to download the files; a sketch follows. This page has code examples for all the popular languages: https://cloud.google.com/storage/docs/downloading-objects#storage-download-object-cpp.
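A minimal sketch of that download with the google-cloud-storage client, plugging in the first file from the question's metadata (the local filename is arbitrary; the credentials must have read access to the export's objects):

from google.cloud import storage

bucket_name = "408d9135-6155-4a43-9d3c-424f124b9474"
object_name = ("a740999b-e11b-4af5-b8b1-6c6def35d677/"
               "exportly-41dd7886-fe02-432f-83c-a4b6fd4520a5/Test_Export-1.zip")

client = storage.Client()
blob = client.bucket(bucket_name).blob(object_name)
blob.download_to_filename("Test_Export-1.zip")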
Hope it helps.
The issue you are seeing is because the API works with the principle of least privilege.
The implication for you is that, since your objective is to download the files from the export, you get permission to download only those files, not the whole bucket (even if it contains only those files).
This is why, when you request information about the storage bucket itself, you get the 403 (permission) error. You do, however, have permission to download the files inside the bucket. So what you should do is get each object directly, with requests like this (using the information from the question):
GET https://storage.googleapis.com/storage/v1/b/408d9135-6155-4a43-9d3c-424f124b9474/o/a740999b-e11b-4af5-b8b1-6c6def35d677/exportly-41dd7886-fe02-432f-83c-a4b6fd4520a5/Test_Export-1.zip
So, in short, instead of getting the full bucket, get each individual file generated by the export.
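A hedged Python sketch of that per-object request (the access token is a placeholder, e.g. obtained via the OAuth flow above; note that the object name must be URL-encoded when placed in the path):

import urllib.parse

import requests

access_token = "ya29...."  # hypothetical token from your OAuth flow

bucket = "408d9135-6155-4a43-9d3c-424f124b9474"
obj = ("a740999b-e11b-4af5-b8b1-6c6def35d677/"
       "exportly-41dd7886-fe02-432f-83c-a4b6fd4520a5/Test_Export-1.zip")

# alt=media returns the object's bytes rather than its metadata
url = (f"https://storage.googleapis.com/storage/v1/b/{bucket}/o/"
       f"{urllib.parse.quote(obj, safe='')}?alt=media")

resp = requests.get(url, headers={"Authorization": f"Bearer {access_token}"})
resp.raise_for_status()
with open("Test_Export-1.zip", "wb") as f:
    f.write(resp.content)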
Hope this helps.
I've been testing reading from the Graph API with an app I'm working on for a while, reading events directly from their /{id} endpoints using the Python package. When I attempted this today, however, it didn't work. The response, when attempted via the Graph API Explorer, was as follows:
{
  "error": {
    "message": "Unsupported get request. Object with ID 'XXXXXXXXXXX' does not exist, cannot be loaded due to missing permissions, or does not support this operation. Please read the Graph API documentation at https://developers.facebook.com/docs/graph-api",
    "type": "GraphMethodException",
    "code": 100,
    "error_subcode": 33,
    "fbtrace_id": "HAli25GZ3N4"
  }
}
The Explorer itself seems to know somehow that the object in question is an event, as the field options it gives in the left sidebar are all specific to Event objects. I'm aware you need to go through App Review to be able to read public Events, but I haven't needed to thus far. What's the issue?
I've also checked the changelogs, which state that nothing breaking has occurred today in this instance. One thing to note: I was briefly demoted to Moderator of the Page I'm trying to read from. I've tried using my personal access token and that of the Page too.
I've deployed an endpoint in SageMaker and was trying to invoke it through my Python program. I had tested it using Postman and it worked perfectly OK. Then I wrote the invocation code as follows:
import boto3
import pandas as pd
import io
import numpy as np

def np2csv(arr):
    csv = io.BytesIO()
    np.savetxt(csv, arr, delimiter=',', fmt='%g')
    return csv.getvalue().decode().rstrip()

runtime = boto3.client('runtime.sagemaker')

payload = np2csv(test_X)
runtime.invoke_endpoint(
    EndpointName='<my-endpoint-name>',
    Body=payload,
    ContentType='text/csv',
    Accept='Accept'
)
Now when I run this, I get a validation error:
ValidationError: An error occurred (ValidationError) when calling the InvokeEndpoint operation: Endpoint <my-endpoint-name> of account <some-unknown-account-number> not found.
While using Postman I had given my access key and secret key, but I'm not sure how to pass them when using the SageMaker APIs, and I'm not able to find it in the documentation either.
So my question is, how can I use sagemaker api from my local machine to invoke my endpoint?
I also had this issue, and it turned out my region was wrong.
Silly, but worth a check!
When you are using any of the AWS SDK (including the one for Amazon SageMaker), you need to configure the credentials of your AWS account on the machine that you are using to run your code. If you are using your local machine, you can use the AWS CLI flow. You can find detailed instructions on the Python SDK page: https://aws.amazon.com/developers/getting-started/python/
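A hedged sketch of a local invocation once credentials are configured (the region and endpoint name are placeholders; the client must be created in the region where the endpoint was deployed, which is exactly what the "endpoint not found" error points to):

import boto3

# Use the region the endpoint was actually deployed in
session = boto3.Session(region_name="us-east-1")
runtime = session.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="<my-endpoint-name>",
    Body="1.5,2.5,3.5",  # CSV payload, as produced by np2csv in the question
    ContentType="text/csv",
)
print(response["Body"].read().decode())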
Please note that when you deploy the code to a different machine, you will have to make sure you give the EC2 instance, ECS task, Lambda function, or other target a role that allows calling this specific endpoint. While on your local machine it can be OK to give yourself admin rights or other permissive permissions, when you deploy to a remote instance you should restrict the permissions as much as possible, for example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "sagemaker:InvokeEndpoint",
      "Resource": "arn:aws:sagemaker:*:1234567890:endpoint/<my-endpoint-name>"
    }
  ]
}
Based on #Jack's answer, I ran aws configure and changed the default region name and it worked.