I am trying to pass a parameter "env" while triggering/invoking a Cloud Function using Cloud Scheduler over HTTP.
I am using a service account that has sufficient permissions for invoking functions and admin rights on Cloud Scheduler.
Passing the parameter works when the function allows unauthenticated invocation, but if the function is deployed with authentication it gives an error: { "status": "UNAUTHENTICATED" ....
It is worth noting that when I changed the function code so that it does not require a parameter, it worked successfully with the same service account.
So it must be an issue with passing parameters.
The scheduler job setup looks like this:
The way I retrieve the parameter "env" in the function is:
def fetchtest(request):
    env = request.args.get('env')
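For reference, this is roughly how the Scheduler job could be created programmatically with the google-cloud-scheduler client. The project, location, URL and service account names are placeholders, and setting the OIDC audience explicitly to the function URL without the query string is an assumption worth checking, since an audience that includes ?env=... may not match what the authenticated function expects:

from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = client.common_location_path('my-project', 'us-central1')  # placeholders

function_url = 'https://us-central1-my-project.cloudfunctions.net/fetchtest'

job = scheduler_v1.Job(
    name=f'{parent}/jobs/fetchtest-job',
    schedule='0 * * * *',
    time_zone='Etc/UTC',
    http_target=scheduler_v1.HttpTarget(
        uri=f'{function_url}?env=dev',            # parameter passed as a query string
        http_method=scheduler_v1.HttpMethod.GET,
        oidc_token=scheduler_v1.OidcToken(
            service_account_email='scheduler-invoker@my-project.iam.gserviceaccount.com',
            audience=function_url,                # assumption: audience without the ?env=... part
        ),
    ),
)
response = client.create_job(parent=parent, job=job)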
I am trying to deploy a REST API on Cloud Run where one endpoint launches an async job. The job is defined inside a function in the code.
It seems one way to do it is to use Cloud Tasks, but this would mean making a self-call to another endpoint of the deployed API: specifically, creating an auxiliary endpoint that contains the job code (e.g. /run-my-function) and another one that enqueues a Cloud Task which calls /run-my-function.
Is this the right way to do it, or have I misunderstood something? If it is the right way, how do I specify the URL of the /run-my-function endpoint without explicitly hard-coding the deployed Cloud Run URL?
The code for the endpoint that enqueues the task calling /run-my-function would be:
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()

project = 'myproject'
queue = 'myqueue'
location = 'mylocation'
url = 'https://cloudrunservice-abcdefg-ca.b.run.app/run-my-function'
service_account_email = '12345@cloudbuild.gserviceaccount.com'

parent = client.queue_path(project, location, queue)

# Task that calls the auxiliary endpoint, authenticated with an OIDC token
task = {
    "http_request": {
        "http_method": tasks_v2.HttpMethod.POST,
        "url": url,
        "oidc_token": {"service_account_email": service_account_email},
    }
}
response = client.create_task(parent=parent, task=task)
However, this requires hard-coding the service name https://cloudrunservice-abcdefg-ca.b.run.app and defining an auxiliary endpoint /run-my-function that can be called via HTTP.
In your code you can get the Cloud Run URL without hardcoding it or setting it in an environment variable.
You can have a look at a previous article that I wrote, in the graceful termination part. I provide working code in Go; it's not difficult to re-implement in Python.
Here is the principle:
Get the region and the project number from the metadata server. Keep in mind that Cloud Run has specific metadata, like the region.
Get the K_SERVICE env var (it's a standard Cloud Run env var).
Perform a call to the Cloud Run REST API to get the service detail, customizing the request with the data obtained previously.
Extract the status.url JSON entry from the response.
Now you have it!
Let me know if you have difficulties achieving that. I'm not good at Python, but I would be able to write that piece of code!
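For what it's worth, a rough Python translation of those steps might look like this. It assumes the requests library, the standard Cloud Run metadata endpoints, and the v1 Cloud Run Admin API; treat it as a sketch rather than tested code:

import os
import requests

METADATA = 'http://metadata.google.internal/computeMetadata/v1'
HEADERS = {'Metadata-Flavor': 'Google'}

def get_own_service_url():
    # 1. Region and project number from the metadata server
    #    (on Cloud Run this is returned as projects/PROJECT_NUMBER/regions/REGION)
    region_path = requests.get(f'{METADATA}/instance/region', headers=HEADERS).text
    project_number = region_path.split('/')[1]
    region = region_path.split('/')[-1]

    # 2. Service name from the standard Cloud Run env var
    service = os.environ['K_SERVICE']

    # 3. Access token for the runtime service account, then call the Cloud Run Admin API
    token = requests.get(f'{METADATA}/instance/service-accounts/default/token',
                         headers=HEADERS).json()['access_token']
    api = (f'https://{region}-run.googleapis.com/apis/serving.knative.dev/v1/'
           f'namespaces/{project_number}/services/{service}')
    detail = requests.get(api, headers={'Authorization': f'Bearer {token}'}).json()

    # 4. Extract status.url
    return detail['status']['url']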
I have a Google Cloud Function that I would like to call from my Google Apps Script on a Google Form submission.
The process will be: 1) a user submits the Google Form, 2) a trigger (onFormSubmit) runs the Apps Script function, 3) the Apps Script function triggers the Cloud Function.
So far:
The script trigger works; in the logs it's listening correctly.
The Cloud Function works; I tested it in the Cloud Functions testing interface, and when I run it from there it does what I need it to do, which is to update a Google Sheet and upload data to BigQuery.
The problem comes from calling that function from the Apps Script associated with my Google Form submission trigger. There seems to be no communication there, as the Cloud Function logs don't show anything happening at trigger submission.
This is my Apps Script code:
function onSubmit() {
  var url = "myurl"
  const token = ScriptApp.getIdentityToken()
  var options = {
    'method' : 'get',
    'headers': {"Authorization": "Bearer " + token}
  };
  var data = UrlFetchApp.getRequest(url, options);
  return data
}
And my Cloud Function is an HTTP one in Python and starts with:
def numbers(request):
Some troubleshooting:
When I test it, the execution log shows no errors
If I try to change UrlFetchApp to .fetch, or change getIdentityToken to getOAuthToken, I get a 401 error for both.
I added the following to my oauthScopes:
"openid",
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/script.container.ui",
"https://www.googleapis.com/auth/script.external_request",
"https://www.googleapis.com/auth/documents"```
I'm running both from the same Google Cloud account.
I added myself to the permissions in the Cloud Function settings too.
Any ideas of why the two aren't communicating would be appreciated!
I was able to resolve this, in case anyone has a similar issue. Since my email address was associated with an organizational account, my Apps Script and GCP didn't allow the correct permissions.
In the Apps Script settings, I couldn't change the GCP account associated with that function because the GCP was outside of my organization. Once I set up the Cloud Function in my organization's GCP, I was able to change the account manually in the settings and my function worked properly on the trigger.
I want to get the environment name (dev/qa/prod) where my AWS Lambda function is running or being executed, programmatically. I don't want to supply it as part of my environment variables.
How do we do that?
In AWS, everything is in "Production". That is, there is no concept of "AWS for Dev" or "AWS for QA".
It is up to you to create resources that you declare to be Dev, QA or Production. Some people do this in different AWS Accounts, or at least use different VPCs for each environment.
Fortunately, you mention that "each environment in AWS has a different role".
This means that the AWS Lambda function can call get_caller_identity() to obtain "details about the IAM user or role whose credentials are used to call the operation":
import boto3

def lambda_handler(event, context):
    sts_client = boto3.client('sts')
    print(sts_client.get_caller_identity())
It returns:
{
    "UserId": "AROAJK7HIAAAAAJYPQN7E:My-Function",
    "Account": "111111111111",
    "Arn": "arn:aws:sts::111111111111:assumed-role/my-role/My-Function",
    ...
}
Thus, you could extract the name of the Role being used from the Arn.
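For example, something along these lines (the environment suffix in the role name is an assumption about your naming convention):

import boto3

def lambda_handler(event, context):
    arn = boto3.client('sts').get_caller_identity()['Arn']
    # e.g. arn:aws:sts::111111111111:assumed-role/my-role-dev/My-Function
    role_name = arn.split('/')[1]
    environment = role_name.split('-')[-1]   # assumption: role name ends with dev/qa/prod
    print(environment)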
Your Lambda function can do one of the following:
def handler_name(event, context): read the data from the event dict. The caller will have to add this argument (since I don't know what the Lambda trigger is, I can't tell whether this is a good solution).
Read the data from S3 (or other storage, like a DB).
Read the data from AWS Systems Manager Parameter Store (a sketch follows below).
I don't want to give as part of my environment variables
Why?
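If the Parameter Store option fits, a minimal sketch could look like this (the parameter name /myapp/environment is just an assumption; it would have to exist in each account or region):

import boto3

ssm = boto3.client('ssm')

def lambda_handler(event, context):
    # Assumed parameter holding 'dev', 'qa' or 'prod' for this account/region
    env = ssm.get_parameter(Name='/myapp/environment')['Parameter']['Value']
    print(f'Running in {env}')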
I have an app that is meant to integrate with third-party apps. These apps should be able to trigger a function when data changes.
The way I was envisioning this, I would use a Node function to safely prepare data for the third parties, and get the URL to call from the app's configuration in Firestore. I would call that URL from the Node function and wait for it to return, updating results as necessary (actually, triggering a push notification). Since these third-party functions would tend to be Python functions, my demo should be in Python.
I have the initial Node function and Firestore set up such that I am currently triggering an ECONNREFUSED, because I don't know how to set up the third-party function.
Let's say this is the function I need to trigger:
def hello_world(request):
    request_json = request.get_json()
    if request_json and 'name' in request_json:
        name = request_json['name']
    else:
        name = 'World'
    return 'Hello, {}!\n'.format(name)
Do I need to set up a separate gcloud account to host this function, or can I include it in my Firestore functions? If so, how do I deploy this to Firestore? Typically with my Node functions, I run firebase deploy and it automagically finds my functions from my index.js file.
If you're asking whether Cloud Functions that are triggered by Cloud Firestore can co-exist in a project with Cloud Functions that are triggered by HTTP(S) requests, then the answer is "yes they can". There is no need to set up a separate (Firebase or Cloud) project for each function type.
However: when you deploy your Cloud Functions through the Firebase CLI with firebase deploy, it will remove any functions it finds in the project that are not in the code being deployed. If you have functions both in Python and in Node.js, there is never a single codebase that contains both, so a blanket deploy would always delete some of your functions. In that case, you should use the granular deploy option of the Firebase CLI (e.g. firebase deploy --only functions:myFunction).
I'm trying to develop a system parameter optimization algorithm on Azure, but I'm stuck on an API question.
I can use an Azure CLI command to get a VM hardware profile, but I can't figure out which Azure SDK API gives the equivalent result.
The Azure CLI command and partial output are:
az vm get-instance-view -g GROUP_NAME -n VM_NAME
The output will include:
"hardwareProfile": {
"vmSize": "Standard_D4s_v3"
},
The value of vmSize is what I need. I tried with:
compute_client.virtual_machines.get(GROUP_NAME, VM_NAME, expand='instanceView').instance_view
but I couldn't get the expected result from the above API. I searched but failed to find the answer in the Azure docs either.
Just to make it clear: it's not weird.
In your first method, you specify expand='instanceView', which can only return the instance view of a virtual machine. The instance view means information about the run-time state of a virtual machine; it does not contain the VM hardware profile information.
You can use this API (the get() method also calls this API when you go through the source code) to check the returned result for the instance view.
And if you do not specify the instance view in the get() method, it will return the model view of the VM, which contains the VM hardware profile information.
You can also test it via this API for the model view of a VM.
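To illustrate the difference, a small sketch with the Python SDK might look like this (credential setup and the SUBSCRIPTION_ID, GROUP_NAME and VM_NAME names are placeholders, assuming azure-identity and azure-mgmt-compute are installed):

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# SUBSCRIPTION_ID, GROUP_NAME and VM_NAME are placeholders for your own values
compute_client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Model view (no expand): includes the hardware profile
vm = compute_client.virtual_machines.get(GROUP_NAME, VM_NAME)
print(vm.hardware_profile.vm_size)          # e.g. 'Standard_D4s_v3'

# Instance view: run-time state only, no hardware profile
instance_view = compute_client.virtual_machines.get(
    GROUP_NAME, VM_NAME, expand='instanceView').instance_view
print([s.display_status for s in instance_view.statuses])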
Hope it helps.
I found this out after I posted the question; weird:
virtual_machine = compute_client.virtual_machines.get(
    GROUP_NAME,
    VM_NAME
)
hardware = virtual_machine.hardware_profile
print("\nHardware:", hardware)
output is:
hardware: {'additional_properties': {}, 'vm_size': 'Standard_D2s_v3'}