DialogFlow Python3 Webhook - Increase Timeout?

I have a DialogFlow intent that parses the user's query about an item price. For example, when the user asks "How much for a can of sardines?", DialogFlow extracts "can of sardines" as the user input.
Once it gets that, it proceeds to fulfillment, where it sends a POST request to a webhook I have. I linked the fulfillment to my local Python 3 Flask app through ngrok.com.
Right now, my Python app takes the user input ("can of sardines") and uses pdfgrep to search for it in the PDF of the price list on the server. The price list has 3 columns: product code, product name, product price. For each instance where the user input appears, the entire line is emitted. This means that if "can of sardines" appears 3 separate times, all 3 rows are shown.
An output to the console would be something like this:
10000 Can of Sardines - 6 Cans $5.00
10001 Can of Sardines - 12 Cans $9.00
10002 Can of Sardines - 18 Cans $13.00
This works in the console just fine.
However, the file is rather large, with about 348 pages of items, so the pdfgrep command takes some time to produce output. DialogFlow, from what I understand, expects a response to its POST request within a short window.
Is there a way to adjust the webhook timeout for the DialogFlow API?

There is no way to increase this timeout, because doing so would spoil the conversation experience: the user would get frustrated having to wait a long time for a response.
What you can do is send a response telling the user that you are checking the prices, then, once you have the data, send another reply to the client via a POST request.
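A minimal sketch of that acknowledge-then-push pattern, assuming you have some channel of your own for delivering the follow-up message (the CLIENT_CALLBACK_URL endpoint below is hypothetical, and the pdfgrep call mirrors the setup described in the question):
import subprocess
import threading

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical endpoint on your side that can push a late message to the user.
CLIENT_CALLBACK_URL = "https://example.com/push-to-user"

def lookup_and_push(item, session):
    # The slow pdfgrep search runs outside the webhook request/response cycle.
    result = subprocess.run(
        ["pdfgrep", "-i", item, "pricelist.pdf"],
        capture_output=True, text=True,
    )
    requests.post(CLIENT_CALLBACK_URL, json={
        "session": session,
        "message": result.stdout or "No matching items found.",
    })

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    item = body["queryResult"]["queryText"]  # or a parsed parameter
    threading.Thread(target=lookup_and_push,
                     args=(item, body.get("session"))).start()
    # Reply immediately so Dialogflow does not hit its webhook timeout.
    return jsonify({"fulfillmentText": "Checking the price list, one moment..."})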

Dialogflow webhooks have a timeout of 5 seconds. You can effectively extend it by chaining intents, that is, using one intent as a trigger for another (which gives you 5 + 5 seconds to send a response).
In the code below, when actual_intent is hit, it redirects to demo_intent,
which has an event called demo_event.
You could also run the time-consuming task in the background with the threading or multiprocessing module; adjust the sleep times accordingly.
import time

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    request_ = request.get_json()
    if 'action' in request_['queryResult']:
        if request_['queryResult']['action'] == 'actual_intent':
            time.sleep(3)  # stand-in for the slow lookup
            reply = {
                "followupEventInput": {
                    "name": "demo_event"
                }
            }
            return jsonify(reply)
        if request_['queryResult']['action'] == 'demo_intent':
            time.sleep(3)
            reply = {
                "fulfillmentMessages": [
                    {
                        "text": {
                            "text": [
                                "Some message you want to show"
                            ]
                        }
                    }
                ]
            }
            return jsonify(reply)  # send the final reply to the user

Related

How to get new refresh token after 2 weeks are passed without redirecting user using Upwork API?

I'm developing an app that will run on a headless server. To launch it I need access and refresh tokens, which are obtained by following the request described at https://developers.upwork.com/?lang=python#authentication_access-token-request. I'm using Python, so my request looks like:
import upwork

config = upwork.Config(
    {
        "client_id": <my_client_id>,
        "client_secret": <my_client_secret>,
        "redirect_uri": <my_redirect_uri>
    }
)
client = upwork.Client(config)
try:
    config.token
except AttributeError:
    authorization_url, state = client.get_authorization_url()
    # cover "state" flow if needed
    authz_code = input(
        "Please enter the full callback URL you get "
        "following this link:\n{0}\n\n> ".format(authorization_url)
    )
    print("Retrieving access and refresh tokens.... ")
    token = client.get_access_token(authz_code)
The resulting token object looks like:
{
    "access_token": <access_token>,
    "refresh_token": <refresh_token>,
    "token_type": "Bearer",
    "expires_in": 86400
}
Given access_token and refresh_token, I put them into my program and it launches successfully. To keep continuous access to the Upwork API I need a valid access_token, which expires every 24 hours, so I renew it with refresh_token. The problem is that the refresh token's lifespan is 2 weeks, and once it has expired I can't use it to refresh the access token, so I need to get a new one. The documentation doesn't explain how to do this, and it seems the only way is to go through the whole token-obtaining process again, as described above. That's not an option for me because, as I said, I want to deploy the application on a headless server with no ability to redirect a user. I need a way to get a new token pair every 2 weeks without manual intervention.
Expecting:
A way to refresh refresh_token without redirecting the user or any manual intervention at all.
You can set a timer that calls the refresh endpoint a moment before the token expires. This is one way to do it, but maybe someone will come up with a better idea. I've seen people do this with the access token, which wasn't good practice in that case, but you have a different situation.
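A minimal sketch of that timer approach, assuming a standard OAuth2 refresh_token grant (the TOKEN_URL below is an assumption; check Upwork's documentation for the exact endpoint):
import threading

import requests

# Assumed standard OAuth2 token endpoint; verify against Upwork's docs.
TOKEN_URL = "https://www.upwork.com/api/v3/oauth2/token"

def refresh_tokens(client_id, client_secret, refresh_token):
    # Standard OAuth2 refresh_token grant; returns a fresh token pair.
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    })
    resp.raise_for_status()
    return resp.json()

def schedule_refresh(client_id, client_secret, token, margin=3600):
    # Re-run shortly before expires_in so the refresh token never goes stale.
    def _run():
        new_token = refresh_tokens(client_id, client_secret,
                                   token["refresh_token"])
        schedule_refresh(client_id, client_secret, new_token, margin)

    delay = max(token["expires_in"] - margin, 60)
    threading.Timer(delay, _run).start()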
@sviddo, if there is no activity for 2 weeks, authentication is required again, involving a manual user login. It's a security requirement.
The other thing is that a refresh token is valid for 14 days, and its TTL is automatically extended when a refresh is performed. If that's not the case, please contact the Support Team at Upwork.

Python to post and format API response data into Slack channel

I'm working on a script to present data in a Slack channel. The script requests and returns data that I need to post in the channel, and it gets invoked with a slash command, but I'm having an issue presenting the returned data in the Slack channel where I executed the slash command. I've been attempting to work with the Block Kit Builder, but I see no way of presenting the data coming back from my script using the template I created.
In the Block Kit Builder I can roughly see the format I want and can send it to Slack from the builder itself, but if I want the response from my Python script to be formatted by the template and posted in the channel, it doesn't seem to work. I'm probably doing something wrong, but I'm looking for suggestions.
I've been searching for how to do this in Flask, which is what I'm using to handle the slash command, invoke my Python script, query the API, get the response, and send it to Slack.
Thanks.
P.S. Stack Overflow won't allow me to post a snippet of the JSON and/or images yet; something about "10 reputation".
I was able to figure this out. My code now looks like this:
for i in pdjson:
    t = {}
    try:
        t["text"] = {
            "type": "plain_text",
            "text": f'{i["escalation_policy"]["summary"]} - Layer: {i["escalation_level"]}'
        }
    except KeyError:
        # Fall back to whatever is available when one of the keys is missing.
        t["text"] = {
            "type": "plain_text",
            "text": i.get("summary", "unknown entry")
        }
    t["value"] = i["user"]["summary"]
Now I can present the data in the Slack workspace with the Block Kit template. I just have to figure out how to make the 'value' show up once I select an item in the list.
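For reference, here is a minimal sketch of sending such options back to the channel from a Flask slash-command handler, using the response_url that Slack includes in every slash-command payload (the block layout and values below are illustrative):
import requests
from flask import Flask, request

app = Flask(__name__)

@app.route("/slash", methods=["POST"])
def slash():
    # Slack sends slash-command payloads as form data, including a
    # response_url you can POST a JSON message (with blocks) back to.
    response_url = request.form["response_url"]
    options = [
        {"text": {"type": "plain_text", "text": "Policy A - Layer: 1"},
         "value": "alice"},
        {"text": {"type": "plain_text", "text": "Policy B - Layer: 2"},
         "value": "bob"},
    ]
    message = {
        "response_type": "in_channel",
        "blocks": [{
            "type": "section",
            "text": {"type": "mrkdwn", "text": "Pick an escalation policy:"},
            "accessory": {
                "type": "static_select",
                "action_id": "policy_select",
                "placeholder": {"type": "plain_text", "text": "Select a policy"},
                "options": options,
            },
        }],
    }
    requests.post(response_url, json=message)
    return ""  # an empty 200 acknowledges the slash command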

Django - Is there a better way to send bulk Twilio text messages?

I'm currently using the following views.py function sendsmss to let a user send a bulk SMS to their list of subscribers, after the user has completed an HTML form with the SMS they want to send:
from django.shortcuts import redirect
from twilio.rest import Client

def sendsmss(request):
    if request.method == "POST":
        subscribers = Subscriber.objects.all()
        sms = request.POST['sms']
        mytwilionum = "+13421234567"
        ACCOUNT_SID = TWILIO_ACCOUNT_SID
        AUTH_TOKEN = TWILIO_AUTH_TOKEN
        client = Client(ACCOUNT_SID, AUTH_TOKEN)
        for subscriber in subscribers:
            subscriber_num = subscriber.phone_number
            client.messages.create(
                to=subscriber_num,
                from_=mytwilionum,
                body=sms
            )
    return redirect('homepage')
This function works, but I have only tested the bulk send with 3 subscribers. If there were hundreds or thousands of subscribers, how long would this take? If it takes long, would the user be waiting for the task to complete before the redirect to the homepage happens? Is there a better way to do this in Django?
The questions are quite subjective, so I'll answer them one by one:
If there were 100s or 1000s of subscribers how long would this take
This depends entirely on Twilio's performance. The API client uses the requests library and creates the messages one by one for each subscriber, so the time taken is roughly proportional to the number of subscribers.
if it takes long then would user be waiting for task to complete before redirect to homepage happens?
Based on your current implementation, yes. The return redirect('homepage') executes only after the message has been sent to all subscribers. If an error occurs, it will be raised and the page won't redirect to the homepage.
Is there a better way to do this in Django?
Yes. You can use an asynchronous job queue such as Celery and hook it up with Django. That way you start an async task in Celery and return a response to the user immediately. You can also display the progress of the running Celery task to the user if required; a sketch follows below.
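A minimal sketch of that Celery approach, assuming Celery is already wired into the Django project (the module layout and task name are illustrative):
# tasks.py
from celery import shared_task
from twilio.rest import Client

from .models import Subscriber

@shared_task
def send_bulk_sms(sms, from_number, account_sid, auth_token):
    # Runs in a Celery worker, so the web request can return immediately.
    client = Client(account_sid, auth_token)
    for subscriber in Subscriber.objects.all():
        client.messages.create(
            to=subscriber.phone_number,
            from_=from_number,
            body=sms,
        )

# views.py
from django.shortcuts import redirect

from .tasks import send_bulk_sms

def sendsmss(request):
    if request.method == "POST":
        # Enqueue the job and redirect right away instead of blocking on Twilio.
        send_bulk_sms.delay(request.POST['sms'], "+13421234567",
                            TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN)
    return redirect('homepage')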

Sending Notifications in to multiple users FCM

I am configuring my mobile applications with Firebase Cloud Messaging.
I've finally figured out how to send these annoying-to-configure notifications.
My Python code looks like this:
import json

import requests

url = 'https://fcm.googleapis.com/fcm/send'
body = {
    "data": {
        "title": "mytitle",
        "body": "mybody",
        "url": "myurl"
    },
    "notification": {
        "title": "My web app name",
        "body": "message",
        "content_available": "true"
    },
    "to": "device_id_here"
}
headers = {
    "Content-Type": "application/json",
    "Authorization": "key=api_key_here"
}
requests.post(url, data=json.dumps(body), headers=headers)
I would think that putting this in a for loop and swapping device IDs to send thousands of notifications would put an immense strain on the server and be bad programming practice (correct me if I'm wrong).
The documentation tells me to create "device groups" (https://firebase.google.com/docs/cloud-messaging/notifications), which store device IDs so you can send in bulk. This is annoying and inefficient, as the groups for my web application are constantly changing.
Plain and simple:
How do I send the notification above to an array of device IDs that I specify in my Python code, so that I make only one POST to FCM instead of thousands?
To send an FCM message to multiple devices, use the key "registration_ids" instead of "to":
"registration_ids": ["fcm_token1", "fcm_token2"]
Have a look at this package and see how they implemented it.
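With the legacy HTTP API from the question, that would look like this (the legacy endpoint accepts up to 1,000 tokens per request):
body = {
    "notification": {
        "title": "My web app name",
        "body": "message"
    },
    # Up to 1,000 device tokens per request on the legacy HTTP API.
    "registration_ids": ["fcm_token1", "fcm_token2", "fcm_token3"]
}
requests.post(url, data=json.dumps(body), headers=headers)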
Instead of "to":"device_id" you should use "to":"topic" ,
topic is use from group messaging in FCM or GCM
https://developers.google.com/cloud-messaging/topic-messaging
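Using the same request as in the question, that would look like the following (the topic name is illustrative; devices subscribe to it client-side):
body = {
    "notification": {
        "title": "My web app name",
        "body": "message"
    },
    # Every device subscribed to this topic receives the message.
    "to": "/topics/price_updates"
}
requests.post(url, data=json.dumps(body), headers=headers)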

Facebook API - Insights: Status: 500, error code 1, "An unknown error occurred" at random times

Lately, when trying to fetch data from FB's Marketing API, I sometimes get the following error:
Status: 500
Response:
{
    "error": {
        "code": 1,
        "message": "An unknown error occurred"
    }
}
If I immediately retry the same request via Postman, it sometimes returns data and sometimes throws the 500 error.
Below is the data being sent to FB
facebookads.exceptions.FacebookRequestError:
    Message: Call was not successful
    Method: GET
    Path: https://graph.facebook.com/v2.3/act_XYZ/insights
    Params: {
        'time_increment': 1,
        'level': 'adgroup',
        'fields': '["account_name", "deeplink_clicks", "campaign_name",
                    "social_impressions", "campaign_group_name", "campaign_id",
                    "adgroup_name", "unique_impressions", "social_reach",
                    "unique_social_impressions", "placement", "total_actions",
                    "cpm", "impressions", "ctr", "reach", "clicks",
                    "social_clicks", "spend", "website_clicks", "adgroup_id",
                    "actions", "cpc", "cpp", "unique_clicks",
                    "app_store_clicks", "unique_social_clicks", "account_id",
                    "campaign_group_id"]',
        'breakdowns': '["placement"]',
        'time_range': '{"since":"2015-09-01","until":"2015-09-09"}',
        'summary': None
    }
    Status: 500
    Response:
    {
        "error": {
            "code": 1,
            "message": "An unknown error occurred"
        }
    }
I am using Facebook's Python SDK from
-e git+https://github.com/pythonforfacebook/facebook-sdk.git#449f56f0db086a41bedd23df714e7f77c1051f5b#egg=facebook_sdk-dev
Can someone please let me know what I might be missing in this case?
Thanks.
I'm running into the same issue, and I've noticed a pattern where it errors out when the request takes more than ~30s in Postman. I'm not sure what you can do to fix this, but I have had some success by:
- pulling back the level of granularity / breakdowns
- limiting the number of records per page
Unfortunately I haven't seen a consistent pattern in the levels of granularity that cause this delay. Sometimes I can report at ad level with 5000 results per page; other times I need to throttle back significantly.
You may want to learn about the HTTP protocol, in this case response status codes. 500 means "internal server error"; in other words, it's the Facebook API server that failed, not your code. Your only options at this point are to either log the error and call it a day, or set up a wait/retry loop.
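A minimal sketch of such a wait/retry loop, using plain requests (the parameters are illustrative):
import time

import requests

def get_insights_with_retry(url, params, max_retries=5):
    # Back off exponentially on Facebook's transient 500 errors.
    for attempt in range(max_retries):
        resp = requests.get(url, params=params)
        if resp.status_code != 500:
            resp.raise_for_status()
            return resp.json()
        time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
    raise RuntimeError("Insights call kept failing with HTTP 500")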
I'm seeing this problem as well and noticed that it seems particular to requesting campaign_name and campaign_id. I've had consistent success/failure based on excluding or including those fields.
I'm using Ruby and accessing the v2.5 Insights API.
Update
I just tested setting vs. not setting level to campaign when requesting the campaign_name field, and with level set to campaign the call now succeeds. Maybe try setting your level differently to test?
I've been experiencing the same problem. The insights run at the ad-account level is definitely the most performant from "our" end. I've put conditional exception handling around the insights call to the ad account; if that call returns an exception from Facebook after the API calls have been initiated, I grab all campaigns and run insights against each campaign instead.
This also helps avoid the execution limit, by making only one insights API call per account when possible and dropping to campaign-level granularity only if the ad-account call fails. A sketch of the pattern follows below.
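A rough sketch of that fallback pattern, using the newer facebook_business SDK (the original post used the older facebookads package; the fields, params, and ids below are illustrative placeholders):
from facebook_business.adobjects.adaccount import AdAccount
from facebook_business.api import FacebookAdsApi
from facebook_business.exceptions import FacebookRequestError

# Placeholder credentials and account id; substitute your own.
FacebookAdsApi.init(access_token="your_access_token")

fields = ["campaign_id", "campaign_name", "impressions", "spend"]
params = {"level": "campaign", "time_increment": 1}

account = AdAccount("act_XYZ")
try:
    # One call for the whole account when Facebook cooperates.
    rows = list(account.get_insights(fields=fields, params=params))
except FacebookRequestError:
    # Fall back to one insights call per campaign, trading more
    # requests for smaller (and more reliable) result sets.
    rows = []
    for campaign in account.get_campaigns(fields=["id"]):
        rows.extend(campaign.get_insights(fields=fields, params=params))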
