I have a script with two Cloud Functions, let's say Fn1 and Fn2. Fn1 is event-driven and gets the URI of a storage object; Fn2 needs to take the URI from Fn1 and use it as input.
I have tested Fn1 and Fn2 independently and they work.
What I need is a way to run both functions together in one go, so that Fn1 gets the information needed and passes it to Fn2, which executes and produces the final output.
def fn1(a, b):
    value = '{}'.format(a['x'])
    return value

def fn2(value):
    # do work
    output = final
1.) I want to be able to deploy both functions in a single Cloud Function (or maybe two).
2.) I want Fn1 to be triggered by an event (solved, working OK), but to call Fn2 on completion and have Fn2 output the final result.
As you may have noticed, I am early in my Python adoption, so not so great with it yet, but learning fast.
You didn't specify how you trigger these functions (HTTP/Pub/Sub)...
I have an HTTP function that I trigger over HTTP, but in the payload I define what the script should do.
Here is a little snippet of it:
# to run locally:
# FLASK_ENV=development FLASK_APP=main flask run
# and uncomment the following:
from flask import Flask, request

app = Flask(__name__)

@app.route('/', methods=['POST'])
def wrapper():
    return main(request)
and main function is like this:
def main(args=None):
    action = args.form['action']  # simplified, because I use it together with argparse
    match action:
        case 'task1':
            ...
This can be done with Pub/Sub too: you can use a Pub/Sub message attribute to define which action the function should perform. Note that you cannot have the same function triggered by both HTTP and Pub/Sub.
As for the second question, there are GCP Workflows and GCP Composer, but those products might be overkill for what you are trying to do.
Cloud Functions means that you expose only one function per deployment. However, in your code you can have several functions, classes, files, objects, (...); there is no issue with that. The only condition is: only one entry point in your code.
Therefore, you can do something like this:
def fn3(c):
    do.other.things(c)

def fn2(e):
    do.something(e)

def fn1(event, context):
    fn2(event)
    fn3(context)
    print("end")
Deploy with the --entry-point=fn1 parameter to specify which function is called first, and then build your workload around it.
Related
I have a function called when_login()
I need to always run it whenever login_user() function from Flask-Login module is called
Sequence:
1. User provides credentials
2. Credentials go through auth steps (auth happens via a number of methods)
3. Once the user is authenticated, login_user() is called
4. [step needed]: whenever login_user() is called, call a function named when_login() on the logged-in user before redirection
I'm not able to edit / rewrite login_user() to call when_login() without affecting the program's functionality, and rewriting functions could also slow down the app.
Generally: is there a way of binding functions in Python without editing the first one that will be called?
This is for a Flask application, but I'd be pleased with a Python-wide answer.
I am not sure if this is exactly what you want, and there may be a Flask-specific way to do it, but here is what I have tried:
from flask_login import login_user as login_user__

def run_first(fun):
    def inner(*args):
        result = fun(*args)
        when_login()  # add the arguments you want for this call
        return result
    return inner

def when_login(*args):
    print('processing something with args in when_login')

@run_first
def login_user(*args):
    print('login_user is running first')
    return login_user__(*args)

login_user()  # call this however you normally would
Output:
login_user is running first
processing something with args in when_login
Calling login_user makes it run first, followed by when_login; you can see that in the print statements.
This does not require you to change your code; add this at the top level before you start your Flask process.
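The wrapper above only affects calls that go through your own login_user name. A more Python-wide variant of the same idea is to rebind the function attribute on the module itself, so every caller that looks it up through the module picks up the wrapped version. Below is a minimal sketch; bind_after and the stand-in module are my own illustrative names, and for Flask-Login you would patch flask_login itself, early, since any caller that already did "from flask_login import login_user" keeps the old reference:

```python
import types

def bind_after(module, func_name, hook):
    # Rebind module.<func_name> so that hook runs after every call,
    # without editing the original function's source.
    original = getattr(module, func_name)

    def wrapper(*args, **kwargs):
        result = original(*args, **kwargs)
        hook(*args, **kwargs)  # runs after the original, with the same arguments
        return result

    setattr(module, func_name, wrapper)
    return original  # keep a handle in case you want to undo the patch

# Demo on a stand-in module; for Flask-Login the call would be
# bind_after(flask_login, 'login_user', when_login), done at startup.
fake = types.ModuleType('fake_login')
fake.login_user = lambda user: 'logged in ' + user

calls = []
bind_after(fake, 'login_user', lambda user: calls.append(user))

print(fake.login_user('alice'))  # → logged in alice
print(calls)                     # → ['alice']
```

The hook receives the same arguments as the patched function, which addresses the "on the user logged" part of the question.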
I am trying to tweet something by talking to Alexa. I want to put my code on AWS Lambda, and trigger the function by Alexa.
I already have a Python code that can tweet certain string successfully. And I also managed to create a zip file and deploy it on Lambda (code depends on the "tweepy" package). However, I could not get to trigger the function by Alexa, I understand that I need to use handlers and ASK-SDK (Alexa Service Kit), but I am kind of lost at this stage. Could anyone please give me an idea about how the handlers work and help me see the big picture?
Alexa ASK_SDK Pseudo Code:
This is pseudo code for the new ASK_SDK, which is the successor to the ALEXA_SDK.
Also note that I work in NodeJS, but the structure is likely the same.
Outer Function with Call Back - Lambda Function Handler
CanHandle Function
Contains the logic to determine whether this handler is the right one. The HandlerInput variable contains the request data, so you can check whether the intent == "A specific intent" and return true, else return false. Or you can get far more specific. (Firing handlers by intent is pretty basic; you can take it a step further and fire handlers based on intent and state.)
Handle Function
Whichever handler's "canHandle" function returns true is the one whose code will run. The handler can perform a few operations: it can read the session attributes, change the session attributes based on the intent that was called, formulate a string response, read and write to a more persistent attribute store like DynamoDB, and create and fire an Alexa response.
The handlerInput contains everything you'll need. I'd highly recommend running your test code in PyCharm with the debugger and then examining the handlerInput variable.
The response builder is also very important and is what allows you to add speech, follow up prompts, cards, elicit slot values etc.
handler_input.response_builder
Example to Examine
https://github.com/alexa/skill-sample-python-helloworld-classes/blob/master/lambda/py/hello_world.py
class HelloWorldIntentHandler(AbstractRequestHandler):
    """Handler for Hello World Intent."""
    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_intent_name("HelloWorldIntent")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        speak_output = "Hello Python World from Classes!"
        return (
            handler_input.response_builder
            .speak(speak_output)
            # .ask("add a reprompt if you want to keep the session open for the user to respond")
            .response
        )
For your question about capturing the user's input for tweeting, use the AMAZON.SearchQuery slot type. You may run into limitations on how much text can be collected and on the quality of the capture, but the SearchQuery slot is the place to start.
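As a sketch, the intent in the skill's interaction model could look like the fragment below (TweetIntent and tweetText are made-up names for illustration). Note that AMAZON.SearchQuery slots require a carrier phrase such as "tweet" in every sample utterance:

```json
{
  "name": "TweetIntent",
  "slots": [
    { "name": "tweetText", "type": "AMAZON.SearchQuery" }
  ],
  "samples": [
    "tweet {tweetText}",
    "post {tweetText}"
  ]
}
```

In the matching handler you would then read the captured text, roughly handler_input.request_envelope.request.intent.slots["tweetText"].value, and hand it to your tweepy call.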
I have some Python metapy code executed inside a Flask route. It runs perfectly fine when the route is called the first time (i.e. a user submits a form after startup of the application), but it doesn't terminate when it runs a second time (i.e. the form is submitted a second time after application startup).
Precisely:
@app.route('/search', methods=['POST'])
def searchPageResults():
    form = SearchForm(request.form)
    import metapy
    idx = metapy.index.make_inverted_index(os.path.abspath("search/config.toml"))
    ranker = metapy.index.OkapiBM25()
    query = metapy.index.Document()
    query.content("auto")
    for result in ranker.score(idx, query):
        print(result)
    return render_template('SearchPage.html', form=form)
The code snippet inside the method runs fine if I run it outside Flask (no matter how many times I call it). Only inside the method decorated with @app.route(...) does it seem to run just once. To be specific: the ranker.score(...) call is the one that runs forever.
Since the code runs fine outside Flask, I think there is something Flask-specific happening in the background that I don't understand.
What I tried so far (but it didn't help):
- When I have the "import metapy" statement at the top of the file, then even the first call to ranker.score(...) runs forever.
- I ensured that "import metapy" and the initialization of "idx" and "ranker" only run once, by putting the search functionality inside its own class which is instantiated at Flask server startup. However, the code then won't run even on the first call of the route.
Is there something Flask-specific that explains this behaviour?
----Update: additional info-----
config.toml
index = "idx"
corpus = "line.toml"
dataset = "data"
prefix = "."
stop-words = "search/german-stopwords.txt"
start-exceptions = "search/sentence-start-exceptions.txt"
end-exceptions = "search/sentence-end-exceptions.txt"
function-words = "search/function-words.txt"
punctuation = "search/sentence-punctuation.txt"
[[analyzers]]
method = "ngram-word"
ngram = 1
filter = [{type = "icu-tokenizer"}, {type = "lowercase"}]
As said, the behaviour only occurs after the second call of this Flask route. Locally everything works fine (with the exact same dataset and config.toml).
Update: same behaviour in MetaPy Flask demo app
I see the same behaviour in the MetaPy demo app: https://github.com/meta-toolkit/metapy-demos. (The only difference is that I needed to use newer versions of some packages than those specified in the requirements.txt, due to availability.)
Solved. There was a problem with the integrated Flask web server. Once deployed to another web server, the problem went away.
I'm trying to delay a part of my pipeline tool (which runs during the startup of Maya) to run after VRay has been registered.
I'm currently delaying the initialization of the tool in a userSetup.py like so:
def run_my_tool():
    import my_tool
    reload(my_tool)

mc.evalDeferred("run_my_tool()")
I've tried using evalDeferred within the tool to delay the execution of the render_settings script, but it keeps running before VRay has been registered. Any thoughts on how to create a listener for the VRay register event, or on what that event is? Thanks!
EDIT:
Made a new topic to figure out how to correctly use theodox's condition/scriptJob commands suggestion here.
Uiron over at tech-artists.com showed me how to do this properly. Here's a link to the thread
Here's the post by uiron:
"don't pass the python code as string unless you have to. Wherever a python callback is accepted (that's not everywhere in Maya's api, but mostly everywhere), try one of these:
# notice that we're passing a function, not function call
mc.scriptJob(runOnce=True, e=["idle", myObject.myMethod], permanent=True)
mc.scriptJob(runOnce=True, e=["idle", myGlobalFunction], permanent=True)
# when in doubt, wrap into temporary function; remember that in Python you can
# declare functions anywhere in the code, even inside other functions
open_file_path = '...'

def idle_handler(*args):
    # here's where you solve the 'how to pass the argument into the handler' problem -
    # use a variable from the outer scope
    file_manip_open_fn(open_file_path)

mc.scriptJob(runOnce=True, e=["idle", idle_handler], permanent=True)
"
I think this is quite an easy question to answer, I just haven't been able to find anywhere detailing how to do it.
I'm developing a GAE app.
In my main file I have a few request handlers, for example:
class Query(webapp.RequestHandler):
    def post(self):
        queryDOI = cgi.escape(self.request.get('doiortitle'))
        import queryCosine
        self.response.out.write(queryCosine.cosine(queryDOI))
In that handler I'm importing from a queryCosine.py script which does all of the work. If something in the queryCosine script fails, I'd like to be able to print a message or do a redirect.
Inside queryCosine.py there is just a normal Python function, so obviously doing things like
self.response.out.write("Done")
doesn't work. What should I use instead of self, or what do I need to include within my imported file? I've tried using Query.self.response.out.write instead, but that doesn't work.
A much better, more modular approach is to have your queryCosine.cosine function raise an exception if something goes wrong. Then your handler method can output the appropriate response depending on the return value or exception. This avoids unduly coupling the code that calculates whatever you're calculating to the webapp that hosts it.
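A minimal sketch of that separation; QueryError, the helper names, and the placeholder computation are illustrative, not from the original code:

```python
# queryCosine.py side: raise instead of writing to a response you don't own
class QueryError(Exception):
    """Raised when the cosine computation cannot be completed."""

def cosine(doi_or_title):
    if not doi_or_title:
        raise QueryError('empty query')
    # ... the real similarity computation would go here ...
    return 'result for ' + doi_or_title

# Handler side (sketch of what the webapp.RequestHandler method would do):
def post_body(query):
    try:
        return cosine(query)
    except QueryError as err:
        return 'Query failed: {}'.format(err)  # or self.redirect(...) in the real handler

print(post_body('some title'))  # → result for some title
print(post_body(''))            # → Query failed: empty query
```

All response writing stays in the handler, and queryCosine stays a plain importable module you can also test outside the webapp.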
Pass it to the function.
main file:
import second
...
second.somefunction(self.response.out.write)
second.py:
def somefunction(output):
    output('Done')