I have created a Google Cloud Function using Python 3.7. Testing the function itself, it behaves as expected. I have set the trigger of the function to be a topic called "Scheduled".
The code itself runs a series of API calls and, when tested manually from the UI, works exactly as expected.
Output when running a manual test.
The original source code requires no arguments for the main function inside the script; however, I realized the Cloud Function passes 2 arguments to it anyway, so I have added them without actually using them:
from datetime import datetime

def main(event, data):
    print(datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
    getNotifs = getNotifications()
    if getNotifs["hasData"] == True:
        print("Found notifications, posting to Telegram.")
        notifsParsed = parseNotifications(getNotifs["data"])
        telegramCall = telegramPost(
            {"text": str(notifsParsed), "chat_id": ChatId, "parse_mode": "HTML"})
        print(telegramCall)
    elif getNotifs["hasData"] == False:
        print(getNotifs["error"])
    print("================================")
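For reference, a Pub/Sub-triggered background function also receives the Scheduler job's payload base64-encoded under the event's "data" key. A minimal sketch of decoding it (the function body here is an illustration, not the original tool's logic):

```python
import base64

def main(event, context):
    # The Cloud Scheduler payload arrives base64-encoded in event["data"]
    payload = ""
    if isinstance(event, dict) and "data" in event:
        payload = base64.b64decode(event["data"]).decode("utf-8")
    print("Received payload: {}".format(payload))
    return payload
```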
Now, I have created a new Cloud Scheduler job where the target is "Pub/Sub" and the topic is also "Scheduled". Nowhere could I find what the required 'payload' field is used for, and Google's Scheduler guide only fills in a placeholder value like hello (without quotes), so I filled in 'hi'.
Running the job I am repeatedly met with a failure, and this log:
status: "INVALID_ARGUMENT"
targetType: "PUB_SUB"
I have tried changing the payload to "hi" (with quotes) and editing the main Python function to accept one more argument; both seem entirely unrelated. What am I missing?
The only issue was a typo in the topic name defined in the Scheduler job; the field is free text rather than a selection of existing topics.
I have a script with 2 Cloud Functions, let's say Fn1 and Fn2. Fn1 is event driven and gets the URI of a storage object; Fn2 is required to take the URI from Fn1 and use it as input.
I have tested Fn1 and Fn2 independently and they work.
What I need is a way to run both functions together in one go, so that Fn1 gets the information needed and passes it to Fn2, which executes and provides the final output.
def fn1(a, b):
    value = '{}'.format(a['x'])
    return value

def fn2(value):
    # do work here, then return the final output
    ...
1.) I want to be able to deploy both functions in a single Cloud Function (or maybe 2).
2.) I want Fn1 to be triggered by an event (solved, working OK), but to call Fn2 on completion and have Fn2 output the final result.
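One way to meet both points is simply to have Fn1 call Fn2 directly in the same deployment. A sketch, assuming Fn1 is a storage-triggered background function (the "bucket"/"name" fields are what a GCS event carries; the body of fn2 is a placeholder for the real work):

```python
def fn2(uri):
    # placeholder for the real work Fn2 does with the uri
    return "processed: {}".format(uri)

def fn1(event, context):
    # build the object uri from the storage event, then hand it straight to fn2
    uri = "gs://{}/{}".format(event["bucket"], event["name"])
    return fn2(uri)
```

Only fn1 would be named as the entry point; fn2 is just an ordinary helper in the same file.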
As you may have noticed, I am early on in the Python adoption, so not so great with ability currently, but learning fast.
You didn't specify how you trigger these functions (http/pubsub)...
I have an HTTP function that I trigger over HTTP, but in the payload I define what the script should do.
Here is a little snippet of it:
# to run locally:
# FLASK_ENV=development FLASK_APP=main flask run
# and uncomment the following:
from flask import Flask, request
app = Flask(__name__)

@app.route('/', methods=['POST'])
def wrapper():
    return main(request)
and main function is like this:
def main(args=None):
    action = args.form['action']  # simplified because I use it together with argparse
    match action:
        case 'task1':
            ...
This can be done with Pub/Sub too; you can use a Pub/Sub attribute to define what action the function should perform.
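A sketch of that Pub/Sub-attribute dispatch (the attribute name "action" and the task functions are assumptions for illustration, not a fixed API):

```python
def run_task1():
    return "task1 done"

def run_task2():
    return "task2 done"

ACTIONS = {"task1": run_task1, "task2": run_task2}

def main(event, context):
    # Pub/Sub message attributes arrive as a plain dict on the event
    action = (event.get("attributes") or {}).get("action", "")
    handler = ACTIONS.get(action)
    if handler is None:
        return "unknown action: {}".format(action)
    return handler()
```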
You cannot have the same function triggered from both HTTP and Pub/Sub.
As for the second question, there are GCP Workflows or GCP Composer, but these products might be overkill for what you are trying to do.
Cloud Functions means that you expose only one function during your deployment. However, in your code you can have several functions, classes, files, objects, (...); there is no issue with that. The only condition is: only one entry point in your code.
Therefore, you can do something like this:
def fn3(c):
    do.other.things(c)

def fn2(e):
    do.something(e)

def fn1(event, context):
    fn2(event)
    fn3(context)
    print("end")
Deploy with the --entry-point=fn1 parameter to specify which function to call first, and then build your workload around it.
So, basically the same issue as in this SO post except I'm using Python and the accepted answer didn't help.
Using the provided template in the Console UI:
def hello_firestore(event, context):
    """Triggered by a change to a Firestore document.
    Args:
        event (dict): Event payload.
        context (google.cloud.functions.Context): Metadata for the event.
    """
    resource_string = context.resource
    # print out the resource string that triggered the function
    print(f"Function triggered by change to: {resource_string}.")
    # now print out the entire event object
    print(str(event))
with a wildcard in the trigger path:
'emails/{wildcard}'
I'm getting the below error:
Deployment failure: Failed to configure trigger
providers/cloud.firestore/eventTypes/document.create#firestore.googleapis.com
(gcf.us-central1.presignups-counter)
Similarly as in the referenced question, the error clears when removing the wildcard from the trigger resource:
'emails/wildcard'
EDIT: here is a screenshot of the Function Details:
I was able to deploy the Cloud Function using emails/{wildcard} and not 'emails/{wildcard}'.
The reason is that when the document path is added in the UI, it should be without the single quotes; when it's in the code, it should be in single quotes. More information here
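Once deployed, the value the wildcard matched can be recovered from context.resource inside the function. A small sketch (the resource-string layout is assumed from the standard Firestore trigger format):

```python
def hello_firestore(event, context):
    # context.resource looks like
    # "projects/<proj>/databases/(default)/documents/emails/<doc-id>"
    doc_path = context.resource.split("/documents/")[-1]  # "emails/<doc-id>"
    doc_id = doc_path.split("/")[-1]                      # the wildcard value
    print("Document {} changed".format(doc_id))
    return doc_id
```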
I am trying to tweet something by talking to Alexa. I want to put my code on AWS Lambda, and trigger the function by Alexa.
I already have Python code that can tweet a given string successfully. I also managed to create a zip file and deploy it on Lambda (the code depends on the "tweepy" package). However, I could not get the function triggered by Alexa. I understand that I need to use handlers and the ASK SDK (Alexa Skills Kit SDK), but I am kind of lost at this stage. Could anyone please give me an idea about how the handlers work and help me see the big picture?
Alexa ASK_SDK Pseudo Code:
This is pseudo code for the new ASK_SDK, which is the successor to the ALEXA_SDK.
Also note I work in NodeJS but the structure is likely the same
Outer Function with Call Back - Lambda Function Handler
CanHandle Function
Contains logic to determine if this handler is the right handler. The HandlerInput variable contains the request data, so you can check whether the intent == "A specific intent" and return true; else return false. Or you can go way more specific. (Firing handlers by intent is pretty basic; you can take it a step further and fire handlers based on intent and state.)
Handle Function
Whichever "canHandle" function returns true, this is the code that will be run. The handler has a few functions it can perform: it can read the session attributes, change the session attributes based on the intent that was called, formulate a string response, read and write to a more persistent attribute store like DynamoDB, and create and fire an Alexa response.
The handlerInput contains everything you'll need. I'd highly recommend running your test code in PyCharm with the debugger and then examining the handlerInput variable.
The response builder is also very important and is what allows you to add speech, follow up prompts, cards, elicit slot values etc.
handler_input.response_builder
Example to Examine
https://github.com/alexa/skill-sample-python-helloworld-classes/blob/master/lambda/py/hello_world.py
class HelloWorldIntentHandler(AbstractRequestHandler):
    """Handler for Hello World Intent."""
    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_intent_name("HelloWorldIntent")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        speak_output = "Hello Python World from Classes!"
        return (
            handler_input.response_builder
            .speak(speak_output)
            # .ask("add a reprompt if you want to keep the session open for the user to respond")
            .response
        )
For your question about capturing the user's input for tweeting, use the AMAZON.SearchQuery slot type. You may run into limitations around how much text can be collected and the quality of the capture, but the SearchQuery slot is the place to start.
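To read what the user said from that slot, you can dig the value out of the request envelope. A sketch over the raw request JSON (the slot name "tweet_text" is an assumption; your interaction model defines the real name):

```python
def get_slot_value(request_json, slot_name):
    # the captured text sits under request.intent.slots.<name>.value
    slots = request_json.get("request", {}).get("intent", {}).get("slots", {})
    return slots.get(slot_name, {}).get("value")
```

Inside a handler, the same data is reachable via handler_input.request_envelope; inspecting it in the debugger, as suggested above, shows the exact structure.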
I have some Python metapy code being executed inside a Flask route which runs perfectly fine when the route is called the first time (i.e. a user submits a form after startup of the application), but it doesn't terminate when it runs a second time (i.e. the form is submitted a second time after application startup).
Precisely:
@app.route('/search', methods=['POST'])
def searchPageResults():
    form = SearchForm(request.form)
    import metapy
    idx = metapy.index.make_inverted_index(os.path.abspath("search/config.toml"))
    ranker = metapy.index.OkapiBM25()
    query = metapy.index.Document()
    query.content("auto")
    for result in ranker.score(idx, query):
        print(result)
    return render_template('SearchPage.html', form=form)
The code snippet inside the method runs fine if I run it outside Flask (no matter how many times I call it). Only inside the method decorated with @app.route(...) does it seem to run only once. To be specific: the ranker.score(...) call is the one running forever.
Since the code runs fine outside flask, I think there is something Flask specific happening in the background I don't understand.
What I tried so far (but didn't help):
- Putting the "import metapy" statement at the top of the file; then even the first call to ranker.score(...) runs forever.
- Ensuring that "import metapy" and the initialization of "idx" and "ranker" only run once, by putting the search functionality inside its own class which is instantiated at Flask server startup. However, even then the code won't run at the first call of the route.
Is there something Flask specific explaining this behaviour?
----Update: additional info-----
config.toml
index = "idx"
corpus = "line.toml"
dataset = "data"
prefix = "."
stop-words = "search/german-stopwords.txt"
start-exceptions = "search/sentence-start-exceptions.txt"
end-exceptions = "search/sentence-end-exceptions.txt"
function-words = "search/function-words.txt"
punctuation = "search/sentence-punctuation.txt"
[[analyzers]]
method = "ngram-word"
ngram = 1
filter = [{type = "icu-tokenizer"}, {type = "lowercase"}]
As said, the behaviour only occurs on the second call of this Flask route. Locally everything works fine (with the exact same dataset and config.toml).
Update: same behaviour in MetaPy Flask demo app
I get the same behaviour in the MetaPy demo app: https://github.com/meta-toolkit/metapy-demos. (The only difference is that I needed to use newer versions of some packages than those specified in the requirements.txt, due to availability.)
Solved. There was a problem with Flask's integrated web server. Once deployed to another web server, the problem was solved.
I'm trying to delay a part of my pipeline tool (which runs during the startup of Maya) to run after VRay has been registered.
I'm currently delaying the initialization of the tool in a userSetup.py like so:
def run_my_tool():
    import my_tool
    reload(my_tool)

mc.evalDeferred("run_my_tool()")
I've tried using evalDeferred within the tool to delay the execution of the render_settings script, but it keeps running before VRay has been registered. Any thoughts on how to create a listener for the VRay register event, or what that event is? Thanks!
EDIT:
Made a new topic to figure out how to correctly use theodox's condition/scriptJob commands suggestion here.
Uiron over at tech-artists.com showed me how to do this properly. Here's a link to the thread
Here's the post by uiron:
"Don't pass the python code as a string unless you have to. Wherever a python callback is accepted (that's not everywhere in Maya's API, but mostly everywhere), try one of these:
# notice that we're passing a function, not a function call
mc.scriptJob(runOnce=True, e=["idle", myObject.myMethod], permanent=True)
mc.scriptJob(runOnce=True, e=["idle", myGlobalFunction], permanent=True)

# when in doubt, wrap into a temporary function; remember that in Python you can
# declare functions anywhere in the code, even inside other functions
open_file_path = '...'

def idle_handler(*args):
    # here's where you solve the 'how to pass the argument into the handler' problem -
    # use a variable from the outer scope
    file_manip_open_fn(open_file_path)

mc.scriptJob(runOnce=True, e=["idle", idle_handler], permanent=True)
"
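The same "argument via outer scope" trick can also be written with functools.partial, which bakes the argument into a plain callable you can hand to scriptJob. A plain-Python sketch (the handler name and path are hypothetical):

```python
import functools

def open_handler(path, *args):
    # *args swallows whatever extra arguments the callback may receive
    return "opening {}".format(path)

# bake the path in; the result is a zero-argument callable
callback = functools.partial(open_handler, "/path/to/scene.ma")
# callback can now be passed wherever a callable is expected,
# e.g. mc.scriptJob(runOnce=True, e=["idle", callback])
```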