How to integrate Lambda, Alexa, and my code (Python - Tweepy)?

I am trying to tweet something by talking to Alexa. I want to put my code on AWS Lambda and trigger the function from Alexa.
I already have Python code that can tweet a given string successfully, and I also managed to create a zip file and deploy it on Lambda (the code depends on the "tweepy" package). However, I could not get Alexa to trigger the function. I understand that I need to use handlers and the ASK SDK (Alexa Skills Kit SDK), but I am kind of lost at this stage. Could anyone please give me an idea of how the handlers work and help me see the big picture?

Alexa ASK SDK Pseudo Code:
This is pseudo code for the new ASK SDK, which is the successor to the older Alexa SDK.
Also note that I work in Node.js, but the structure is essentially the same.
Outer Function with Callback - the Lambda Function Handler
CanHandle Function
Contains the logic to determine whether this handler is the right one. The HandlerInput variable contains the request data, so you can check whether the intent equals a specific intent and return true, else return false. Or you can go much more specific (firing handlers by intent is pretty basic; you can take it a step further and fire handlers based on intent and state).
Handle Function
Whichever handler's can_handle function returns true is the one whose code will be run. The handler can perform a few operations: it can read the session attributes, change the session attributes based on the intent that was called, formulate a string response, read and write to a more persistent attribute store like DynamoDB, and create and fire an Alexa response (a small session-attributes sketch follows the response-builder note below).
The handlerInput contains everything you'll need. I'd highly recommend running your test code in PyCharm with the debugger and examining the handlerInput variable.
The response builder is also very important; it is what allows you to add speech, follow-up prompts, cards, elicit slot values, etc.
handler_input.response_builder
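For instance, here is a minimal sketch of working with session attributes from inside a handle method. The "invocation_count" attribute is a made-up example; attributes_manager and response_builder come from ask-sdk-core.

# Sketch only: reading and writing session attributes inside handle().
# The "invocation_count" attribute name is a made-up example.
def handle(self, handler_input):
    # type: (HandlerInput) -> Response
    session_attr = handler_input.attributes_manager.session_attributes
    session_attr["invocation_count"] = session_attr.get("invocation_count", 0) + 1

    speak_output = "You have asked {} times this session.".format(
        session_attr["invocation_count"])
    return (
        handler_input.response_builder
            .speak(speak_output)
            .ask("Anything else?")  # reprompt keeps the session open
            .response
    )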
Example to Examine
https://github.com/alexa/skill-sample-python-helloworld-classes/blob/master/lambda/py/hello_world.py
class HelloWorldIntentHandler(AbstractRequestHandler):
    """Handler for Hello World Intent."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return ask_utils.is_intent_name("HelloWorldIntent")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        speak_output = "Hello Python World from Classes!"

        return (
            handler_input.response_builder
                .speak(speak_output)
                # .ask("add a reprompt if you want to keep the session open for the user to respond")
                .response
        )
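Zooming out to the Lambda wiring you asked about: the same sample registers its handlers on a SkillBuilder and exposes the result as the Lambda handler. A rough sketch, assuming ask-sdk-core is bundled in your deployment zip alongside tweepy (the handler classes besides HelloWorldIntentHandler come from the linked sample):

# Rough sketch: registering handlers and exposing the Lambda entry point.
# Assumes ask-sdk-core is packaged with the deployment zip.
from ask_sdk_core.skill_builder import SkillBuilder

sb = SkillBuilder()
# Register each handler; the SDK runs whichever one's can_handle returns True.
sb.add_request_handler(LaunchRequestHandler())
sb.add_request_handler(HelloWorldIntentHandler())
sb.add_request_handler(SessionEndedRequestHandler())

# Point the Lambda console's "Handler" setting at <this_file_name>.lambda_handler
lambda_handler = sb.lambda_handler()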

For your question about capturing the user's input for tweeting, use the AMAZON.SearchQuery slot type. You may run into limitations on how much text can be collected and on the quality of the capture, but the SearchQuery slot is the place to start.
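As a rough sketch of how that could tie back to your existing tweepy code: a handler could read the slot value and post it. The intent name "TweetIntent", the "tweet_text" slot, and the credential placeholders below are all assumptions; wire them to your own interaction model and keys.

# Hypothetical handler that tweets whatever the user said.
import tweepy
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response

CONSUMER_KEY = "..."          # fill in your own Twitter credentials
CONSUMER_SECRET = "..."
ACCESS_TOKEN = "..."
ACCESS_TOKEN_SECRET = "..."


class TweetIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return is_intent_name("TweetIntent")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        slots = handler_input.request_envelope.request.intent.slots
        text = slots["tweet_text"].value or "Hello from Alexa"

        # Reuse your already-working tweepy setup here.
        auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
        auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
        tweepy.API(auth).update_status(text)

        return (
            handler_input.response_builder
                .speak("I tweeted: {}".format(text))
                .response
        )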

Related

run 2 Python Cloud Functions in same script

I have a script with 2 Cloud Functions, let's say Fn1 and Fn2. Fn1 is event-driven and gets the URI of a storage object; Fn2 is required to take the URI from Fn1 and use it as input.
I have tested Fn1 and Fn2 independently and they work.
What I need is a way to run both functions together in one go, so that Fn1 gets the information it needs and passes it to Fn2, which executes and produces the final output.
def fn1(a, b):
    value = '{}'.format(a['x'])
    return value

def fn2(value):
    # do work
    output = final
1.) I want to be able to deploy both functions in a single Cloud Function (or maybe two).
2.) I want Fn1 to be triggered by an event (solved, working OK), but to call Fn2 on completion and have Fn2 output the final result.
As you may have noticed, I am early in my Python adoption, so my ability is not great yet, but I am learning fast.
You didn't specify how you trigger these functions (HTTP/Pub/Sub)...
I have an HTTP function that I trigger over HTTP, but in the payload I define what the script should do.
Here is a little snippet of it:
# to run locally:
#   FLASK_ENV=development FLASK_APP=main flask run
# and uncomment the following:
from flask import Flask, request

app = Flask(__name__)

@app.route('/', methods=['POST'])
def wrapper():
    return main(request)
and the main function looks like this:
def main(args=None):
    action = args.form['action']  # simplified because I use it together with argparse
    match action:
        case 'task1':
            ...
This can be done with Pub/Sub too; you can use a message attribute to define what action the function should take.
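As a sketch of that Pub/Sub variant (the 'action' attribute name is my own choice, not anything required):

import base64

# Sketch of a Pub/Sub-triggered entry point that dispatches on a message
# attribute; the 'action' attribute name is arbitrary.
def main(event, context):
    action = (event.get('attributes') or {}).get('action')
    payload = base64.b64decode(event.get('data', b'')).decode('utf-8')
    match action:
        case 'task1':
            ...  # do the first kind of work with payload
        case 'task2':
            ...  # do the second kind of work with payload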
You cannot have the same function triggered from both HTTP and Pub/Sub.
As for the second question, there are GCP Workflows or GCP Composer, but those products might be overkill for what you are trying to do.
Cloud Functions means that you expose only one function during your deployment. However, in your code you can have several functions, classes, files, objects, (...); there is no issue with that. The only condition is: only one entry point in your code.
Therefore, you can do something like this:
def fn3(c):
    do.other.things(c)

def fn2(e):
    do.something(e)

def fn1(event, context):
    fn2(event)
    fn3(context)
    print("end")
Deploy with the --entry-point=fn1 parameter to specify which function is called first, and then build your workload around it.

Failing to invoke Google Cloud Function with Google Cloud Scheduler

I have created a Google Cloud Function using Python 3.7. Testing the function itself, it behaves as expected. I have set the trigger of the function to be a topic called "Scheduled".
The code itself runs a series of API calls, when tested manually from the UI works exactly as expected.
Output when running a manual test.
The original source code requires no arguments for the main function inside the script; however, I realized the Cloud Function passes two to it anyway, so I have added them without actually using them:
def main(event, data):
    print(datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
    getNotifs = getNotifications()
    if getNotifs["hasData"] == True:
        print("Found notifications, posting to Telegram.")
        notifsParsed = parseNotifications(getNotifs["data"])
        telegramCall = telegramPost(
            {"text": str(notifsParsed), "chat_id": ChatId, "parse_mode": "HTML"})
        print(telegramCall)
    elif getNotifs["hasData"] == False:
        print(getNotifs["error"])
    print("================================")
Now, I have created a new Cloud Scheduler job where the target is "pub/sub" and the topic is also "Scheduled". Nowhere could I find what the required 'payload' field is used for, and the Scheduler guide by Google just fills in an arbitrary value such as hello (without quotes), so I filled in 'hi'.
Running the job, I am repeatedly met with a failure and this log:
status: "INVALID_ARGUMENT"
targetType: "PUB_SUB".
I have tried changing the payload to "hi" (with quotes) and editing the main Python function to accept one more argument; both seem entirely unrelated. What am I missing?
The only issue was a typo in the topic name defined in the Scheduler job; that field is free text rather than a selection of existing topics or anything.
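For reference, with a Pub/Sub target the payload you type into the Scheduler job arrives base64-encoded in the first argument of the function, so a sketch like this (variable names are mine) shows where the 'hi' ends up:

import base64

def main(event, context):
    # 'data' carries the Scheduler payload, base64-encoded by Pub/Sub.
    payload = base64.b64decode(event.get('data', '')).decode('utf-8')
    print("Scheduler payload:", payload)  # would print "hi" for the job above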

Maya: Defer a script until after VRay is registered?

I'm trying to delay a part of my pipeline tool (which runs during the startup of Maya) to run after VRay has been registered.
I'm currently delaying the initialization of the tool in a userSetup.py like so:
import maya.cmds as mc

def run_my_tool():
    import my_tool
    reload(my_tool)

mc.evalDeferred("run_my_tool()")
I've tried using evalDeferred within the tool to delay the execution of the render_settings script, but it keeps running before VRay has been registered. Any thoughts on how to create a listener for the VRay register event, or what event that is? Thanks!
EDIT:
Made a new topic to figure out how to correctly use theodox's condition/scriptJob commands suggestion here.
Uiron over at tech-artists.com showed me how to do this properly. Here's a link to the thread
Here's the post by uiron:
"don't pass the python code as string unless you have to. Wherever a python callback is accepted (that's not everywhere in Maya's api, but mostly everywhere), try one of these:
# notice that we're passing a function, not a function call
mc.scriptJob(runOnce=True, e=["idle", myObject.myMethod], permanent=True)
mc.scriptJob(runOnce=True, e=["idle", myGlobalFunction], permanent=True)

# when in doubt, wrap into a temporary function; remember that in Python you can
# declare functions anywhere in the code, even inside other functions
open_file_path = '...'

def idle_handler(*args):
    # here's where you solve the 'how to pass the argument into the handler' problem -
    # use a variable from the outer scope
    file_manip_open_fn(open_file_path)

mc.scriptJob(runOnce=True, e=["idle", idle_handler], permanent=True)
"

loadhook function in web.py not working

I was trying to experiment with the loadhook function in web.py; however, I am not quite able to make it work. Here is my code:
import web

render = web.template.render('templates/')

urls = (
    '/(.*)', 'index'
)

class index:
    def GET(self, name):
        return render.base(name)

def test():
    print "damn"
    render.base("test")

if __name__ == "__main__":
    app = web.application(urls, globals())
    app.run()
    app.add_processor(web.loadhook(test))
The base.html template is pretty simple; it just echoes back the "name" parameter.
What I understood from the documentation was that the loadhook function will be called before every request. But it doesn't seem to work. I have tried going to the homepage, another page, etc. Neither do I see a print statement in my CMD, nor does the base template with the name test get executed.
I tried running the same code with just the add_processor as well, but no luck.
Can anyone help me figure out how to run a function before a request happens on a page?
Also, I am assuming a request only encompasses browser-level requests. Is there any way to capture more via web.py (such as calling a function on keypress, mouse click, etc.)?
Any help is much appreciated!
loadhooks are called early in the processing and are used either to set configuration or to intercept requests. For example, I implement a blacklist similar to the following:
def my_hook():
    # If requester's IP is in my blacklist, redirect his browser.
    if blacklist.in_blacklist(web.ctx.ip) and web.ctx.path != '/blacklist':
        raise web.seeother('/blacklist')

....

app.add_processor(web.loadhook(my_hook))
In your example, your test hook calls render (I'm guessing you're trying to render the test page?). The problem is that loadhooks don't return data to the browser, so calling render there doesn't do what you want.
A couple of other issues: you need to call app.add_processor(web.loadhook(my_hook)) prior to calling app.run(), because the latter starts your polling loop and never returns.
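In other words, the end of your script would need to be reordered to something like this (your own code, just with add_processor before run):

if __name__ == "__main__":
    app = web.application(urls, globals())
    app.add_processor(web.loadhook(test))  # register the hook before running
    app.run()                              # run() blocks and never returns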
As for your final question: to capture keypresses, etc., you need your JavaScript to send something to the server. Every time there's a keypress, make an Ajax call to the server to log the action.
Python's powerful, but still can't read minds.
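If you do go that route, a minimal web.py endpoint for such Ajax calls might look like the sketch below; the 'log' class, its URL mapping, and the 'key' field are made up:

# Hypothetical logging endpoint: map it in urls, e.g. ('/log', 'log'),
# and have the page POST a 'key' field to it on every keypress.
class log:
    def POST(self):
        data = web.input(key='')
        print "key pressed: %s" % data.key
        return 'ok'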

Using response.out.write from within an included file in Google App Engine

I think this is quite an easy question to answer; I just haven't been able to find anything detailing how to do it.
I'm developing a GAE app.
In my main file I have a few request handlers, for example:
class Query(webapp.RequestHandler):
    def post(self):
        queryDOI = cgi.escape(self.request.get('doiortitle'))
        import queryCosine
        self.response.out.write(queryCosine.cosine(queryDOI))
In that handler I'm importing from a queryCosine.py script, which does all of the work. If something in the queryCosine script fails, I'd like to be able to print a message or do a redirect.
Inside queryCosine.py there is just a normal Python function, so obviously doing things like
self.response.out.write("Done")
doesn't work. What should I use instead of self or what do I need to include within my included file? I've tried using Query.self.response.out.write instead but that doesn't work.
A much better, more modular approach is to have your queryCosine.cosine function raise an exception if something goes wrong. Then your handler method can output the appropriate response depending on the return value or the exception. This avoids unduly coupling the code that calculates whatever it is you're calculating to the webapp that hosts it.
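For instance, a rough sketch of that approach, reusing the handler from the question (the CosineError name and the failure check are made up):

# queryCosine.py -- raise instead of trying to write to the response
class CosineError(Exception):
    """Raised when the similarity computation fails (made-up name)."""

def cosine(doi):
    if not doi:                                      # stand-in for a real failure check
        raise CosineError("no DOI or title supplied")
    return "cosine similarity result for %s" % doi   # placeholder for the real work

# in the main file
class Query(webapp.RequestHandler):
    def post(self):
        queryDOI = cgi.escape(self.request.get('doiortitle'))
        import queryCosine
        try:
            self.response.out.write(queryCosine.cosine(queryDOI))
        except queryCosine.CosineError:
            self.response.out.write("Sorry, something went wrong.")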
Pass it to the function.
main file:
import second
...
second.somefunction(self.response.out.write)
second.py:
def somefunction(output):
    output('Done')
