I have built an app that handles incoming SMS for a short number. We might have several campaigns running at the same time, each with different mechanics, keywords, etc.
At the moment I am using Django to handle this.
My biggest concern at the moment is that the consumers using our service come from a very low LSM market in South Africa. The SMSes come in weird ways and I get a lot of wrong entries.
I'm trying to rethink the way I interpret the SMS and would like some ideas.
At the moment it basically works like this:
I get the SMS and split it by ' or *, then look first for the KEYWORD. All campaigns have keywords, and I run through a list of live campaign keywords to see if there is a match in the message. Then I go on through the split message and compare or find more words if needed. I need to reply based on the message, e.g. the KEYWORD is there, but the 2nd or 3rd or whatever parameter is not.
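Roughly, the current flow looks like the sketch below (the campaign keywords, parameter counts, and reply texts here are made up for illustration):

import re

LIVE_CAMPAIGNS = {"WIN": 2, "PLAY": 1}  # keyword -> number of extra parameters expected

def interpret_sms(body):
    # split on ' or * and drop empty pieces
    parts = [p.strip().upper() for p in re.split(r"['*]", body) if p.strip()]
    keyword = next((p for p in parts if p in LIVE_CAMPAIGNS), None)
    if keyword is None:
        return "Sorry, we could not find a campaign keyword in your SMS."
    params = parts[parts.index(keyword) + 1:]
    if len(params) < LIVE_CAMPAIGNS[keyword]:
        return "Keyword %s received, but some details are missing." % keyword
    return "Entry for %s accepted." % keyword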
Megan from Twilio here.
You may or may not be using a Twilio shortcode, but I think your question relates to a little guide we have on How to Build an SMS Keyword Response Application.
The example there is in PHP, but you can get started in Python here, and your code might look like:
from flask import request
from twilio import twiml

def sms():
    r = twiml.Response()
    body = request.form['Body']
    if KEYWORD in body:
        r.message("Keyword received!")  # do some action here
    return str(r)
I just built my first Shopify store and wanted to use Python and the API to bulk-update product tags on all our products.
However, I've run into a snag on my PUT call. I keep getting a 400 Response with the message '{"errors":{"product":"Required parameter missing or invalid"}}'.
I've seen other people with similar issues on here but none of their solutions seem to be working for me.
Has anyone else run into this problem that can help me figure it out?
Here are some screenshots of my code, a printout of the payload, and the error in the response.
[screenshots: Code, Payload and Response]
I can successfully get the product info using the API; I was originally sending back the entire JSON payload that was returned, just updated with my product tags.
To narrow down potential pain points, I'm keeping it simple now and just including "id" and "tags" in the payload, but I still get the same error.
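For context, the simplified call is roughly the sketch below (the store URL, product id, tags, and credentials are placeholders, not my real values, and the auth style may differ depending on how the private app is set up):

import requests

store_url = "https://example-store.myshopify.com"  # placeholder store
product_id = 1234567890                            # placeholder product id
url = store_url + "/admin/products/%d.json" % product_id

payload = {"product": {"id": product_id, "tags": "new-tag, another-tag"}}

# basic auth with the private app's API key and password (placeholders)
resp = requests.put(url, json=payload, auth=("API_KEY", "PASSWORD"))
print(resp.status_code, resp.text)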
We figured it out! It turns out that when we initially created the store, we used a store domain name that was misspelled, so we created a new store domain name with the correct spelling. However, the original store domain name was not deleted (I'm not even sure it can be). Even though the new store domain forwards to the original one and allows GET requests, for PUT requests we had to use the original misspelled store domain name, and then the PUTs work fine.
I figured this out by using Fiddler to capture a manual product update via the Shopify website to see what the payload looked like, which is how I noticed the store URL was different from the one we were using.
Hope this helps someone with a similar issue!
This is a small project I'd like to get started on in the near future. It's still in the planning stage, so this post is more about being steered in the right direction.
Essentially, I'd like to obtain tweets from a user and parse the tweets into a table/database, with the aim to be able to run this program in real-time.
My initial plan to tackle this was to use Beautiful Soup, a Python-specific library; however, I believe the Twitter API is the better approach (advice on this subject would be appreciated).
There are still 3 unknowns:
1. Where do I store the tweets once obtained?
2. How do I parse the tweets?
3. Where do I store the parsed data?
To answer (3), I suppose it depends on what I want to do with the data. I still haven't decided how I'll use the parsed data, but I know that I'd like it put into categories, so my thinking is probably a database/table/Excel?
There are a few questions still to answer, and I'd like you guys to steer me in the right direction. My programming language knowledge is limited to just C for now, but as this project means a great deal to me, I'm willing to put in the effort and learn the necessary languages/APIs.
What languages/APIs will I need to gain an understanding of to accomplish this project? From where I stand, it seems to be Twitter API and Python.
EDIT: So I have a basic script going which obtains a user's tweets. It works better than expected. However, I'd like to take it another step: I'd like to only keep a user's tweets if they contain a hashtag; all other tweets should be ignored. How best to do this?
Here is a snippet of the basic code I have going:
import tweepy
import twitter_credentials
auth = tweepy.OAuthHandler(twitter_credentials.CONSUMER_KEY, twitter_credentials.CONSUMER_SECRET)
auth.set_access_token(twitter_credentials.ACCESS_TOKEN, twitter_credentials.ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)
stuff = api.user_timeline(screen_name='XXXXXXXXXX', count=10, include_rts=False)

for status in stuff:
    print(status.text)
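Regarding the EDIT: one way to keep only tweets that contain a hashtag is to check the entities Twitter returns with each tweet. A small sketch, building on the stuff variable from the code above:

for status in stuff:
    # status.entities is a dict; its 'hashtags' list is empty when the tweet has no hashtag
    if status.entities.get('hashtags'):
        print(status.text)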
Scraping Twitter (or any other social network) with, for example, Beautiful Soup, as you said, is not a good idea for 2 reasons:
if the source pages change (name attributes, div ids...), you have to keep your code up to date
your script can get banned, because scraping is not allowed
To answer your questions:
1) You can store the tweets wherever you want: CSV, MySQL, SQLite, Redis, Neo4j...
2) With the official API, you get JSON. Here is a Tweet Object: https://developer.twitter.com/en/docs/tweets/data-dictionary/overview/tweet-object.html . With tweepy, for example, status.text will give you the text of the tweet.
3) Same as 1). If you don't yet know what you will do with the data, store the full JSONs; you will be able to parse them later (see the sketch below).
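For example, a minimal sketch of storing the full JSON of each tweet for later parsing (the screen name is a placeholder, the api object is the one built in the question's snippet, and status._json is the raw tweet dict tweepy keeps on each Status object):

import json

with open("tweets.jsonl", "a") as f:
    for status in api.user_timeline(screen_name="XXXXXXXXXX", count=10):
        # one JSON document per line, easy to re-parse later
        f.write(json.dumps(status._json) + "\n")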
I suggest tweepy/Python (http://www.tweepy.org/) or twit/Node.js (https://www.npmjs.com/package/twit), and read the official docs: https://developer.twitter.com/en/docs/api-reference-index
I am working on an AIML project for leisure and came across Pandorabots. I was wondering if there is a way to pass a variable from the user input into another language (in this case Python) or framework, so that I can do further manipulation through other third-party APIs, by means of some templating?
For instance, I want to obtain a date from the user and then feed it into the Google Calendar API. Is there a way to extract the 'date' variable and pass it to the Google Calendar API in Python (or any other language)?
<category><pattern># 1 MAY 2016 #</pattern>
<think>{{ date }}</think> <!-- is there a way to parse "1 May 2016" as a variable date in python? -->
<template>...
</template>
</category>
Ultimately, the goal I am trying to achieve would have a conversation something like this:
User: Hi bot, could you check if I am available on 1 May 2016?
Bot: Nope, you have a gathering at Mike's! (<-- response rendered after checking the user's schedule on 1 May via Google Calendar)
I explored templating engines like Mustache, but apparently they do not talk to AIML (or rather XML). Can anyone point me to a good example/tutorial that can help me get started?
PS: I'm using the Pandorabots API and Python 2.7.
In the pyAIML API, look for the keyword "predicates":
kernel.getPredicate('date')
kernel.getBotPredicate('date')
it returns the predicate that was set using
<set name="date"><star/></set>
Then you can easily parse it with python.
But that brings me to the question: what do I need AIML for? What's AIML's added value here?
I was also looking for information on a similar question. With the help of the answer provided by @Berry Tsakala, I was able to find a solution to my problem. Here is a detailed and improved version of the above example case that might be useful for others having the same question...
<category><pattern>Hi bot, could you check if I am available on *</pattern>
<template>Let me check your schedules on <set name="date"><star/></set>
</template>
</category>
Then, in your Python script, you can store it in a variable like this:
import aiml

kernel = aiml.Kernel()
kernel.learn("std-startup.xml")
kernel.respond("load aiml b")

while True:
    try:
        kernel.respond(raw_input("Enter your message >> "))
        appointment_date = kernel.getPredicate('date')
        print appointment_date
    except (KeyboardInterrupt, EOFError):
        break
Feel free to make any corrections to the answer if you find any errors, or if it needs any improvements. Hope you find it useful :)
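To then turn the extracted predicate into an actual date for the Google Calendar step, one option is to parse it with python-dateutil. A sketch, assuming that package is installed and reusing the kernel object from above:

from dateutil import parser

appointment_date = kernel.getPredicate('date')   # e.g. "1 May 2016"
when = parser.parse(appointment_date).date()     # datetime.date(2016, 5, 1)
# 'when' can now be used when querying the Google Calendar API for that day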
I am very new to the Graph API and am trying to write a simple Python script that first identifies all pages that a user has liked and all groups that he/she is a part of. To do this, I used the following:
To get the groups he has joined:
API: /{user-id}/groups
Permissions req: user_groups
To get the pages he has liked:
API: /{user-id}/likes
Permissions req: user_likes
and
url = 'https://graph.facebook.com/' + userId + '/likes?access_token=' + accessToken + '&limit=' + str(limit)
Now that I can see the IDs of the groups in the JSON output, I want to hit them one by one and fetch all content (posts, comments, photos, etc.) posted within each group. Is this possible, and if yes, how can I do it? What API calls do I have to make?
That's quite a broad question; before asking here you should have tried searching on SO.
Anyway, I'll tell you broadly how you can do it.
First of all go through the official documentation of Graph API: Graph API Reference.
You'll find every API that can be used to fetch the data, for example /group and /page, and you'll get to know what kind of access token with what permissions is required for each API call.
Here are some API calls useful to you:
to fetch the group/page's posts: /{group-id/page-id}/posts
to fetch the comments of a post: /{post-id}/comments
to fetch the group/page's photos: /{group-id/page-id}/photos
and so on. Once you go through the documentation and test some API calls, things will become much clearer. It's quite easy!
Hope it helps. Good luck!
Here's an example using facepy:
from facepy import GraphAPI
import csv
import json

graph = GraphAPI(APP_TOKEN)

groupIDs = ("[id here]", "[etc]")

outfile_name = "teacher-groups-summary-export-data.csv"
f = csv.writer(open(outfile_name, "wb+"))

for gID in groupIDs:
    groupData = graph.get(gID + "/feed", page=True, retry=3, limit=500)
    for data in groupData:
        # DecimalEncoder is a custom json.JSONEncoder defined elsewhere in the project
        json_data = json.dumps(data, indent=4, cls=DecimalEncoder)
        decoded_response = json_data.decode("UTF-8")
        data = json.loads(decoded_response)
        print "Paging group data..."
        for item in data["data"]:
            pass  # ...etc, dealing with items...
Check the API reference. You should use feed.
You can use /{group-id}/feed to get an array of Post objects of the group. Remember to include a user access token for a member of the group.
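As a rough sketch of that call over plain HTTP (the group id and token below are placeholders; pagination is followed via the paging.next link the API returns):

import requests

groupId = 'GROUP_ID_HERE'
userAccessToken = 'USER_ACCESS_TOKEN_HERE'

url = 'https://graph.facebook.com/' + groupId + '/feed'
params = {'access_token': userAccessToken, 'limit': 100}
while url:
    resp = requests.get(url, params=params).json()
    for post in resp.get('data', []):
        print(post.get('id'), post.get('message', ''))
    url = resp.get('paging', {}).get('next')  # next page URL already carries the token
    params = {}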
I wonder if it's possible to create and delete events with the Google API in a secondary calendar. I know well how to do it in the main calendar, so I'm only asking how to change calendar_service to read from and write to another calendar.
I've tried logging in with the secondary calendar's email, but that's not possible and fails with a BadAuthentication error. The URL was surely correct, because it was read by the API.
Waiting for your help.
I'm answering my own question so I can finally accept this one. The problem was solved some time ago.
The most important answer is in this documentation.
Each query can be run with a uri as an argument, for example InsertEvent(event, uri). The uri can be set manually (from the Google Calendar settings) or automatically, as written in the post below. Note that CalendarEventQuery takes only the username, not the whole URL.
The construction of both goes this way:
user = "abcd1234#group.calendar.google.com"
uri = "http://www.google.com/calendar/feeds/{{ user }}/private/full-noattendees"
What's useful is that you can run queries with a different uri and add/delete events to many different calendars in one script.
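Putting it together, a rough sketch of inserting an event into the secondary calendar with the old gdata library (credentials, calendar id, and event details below are made up, and the exact calls may differ slightly between gdata versions):

import atom
import gdata.calendar
import gdata.calendar.service

cal_client = gdata.calendar.service.CalendarService()
cal_client.email = 'me@example.com'        # placeholder main account
cal_client.password = 'app-password-here'  # placeholder
cal_client.ProgrammaticLogin()

user = "abcd1234@group.calendar.google.com"  # placeholder secondary calendar id
uri = "http://www.google.com/calendar/feeds/" + user + "/private/full"

event = gdata.calendar.CalendarEventEntry()
event.title = atom.Title(text='Test event')
event.when.append(gdata.calendar.When(start_time='2016-05-01T18:00:00.000Z',
                                      end_time='2016-05-01T20:00:00.000Z'))
new_event = cal_client.InsertEvent(event, uri)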
Hope someone finds it helpful.
I got the same issue, but I found this solution (I do not remember where).
The solution is to extract the secondary calendar user from the src url provided by Google.
This is probably not the best one, but it's a working one.
Note: the code is extracted from a real project (some parts have been removed), must be adapted to your particular case, and is provided just as a sample to support the explanation (it will not work as is).
import gdata.calendar.service

# 1 - Connect using the main user email address in a classical way
cal_client = gdata.calendar.service.CalendarService()
# insert the usual connection stuff here

# 2 - For each existing calendar
feed = cal_client.GetAllCalendarsFeed()
# a loop to find the calendar by its title (cal_title)
for a_calendar in feed.entry:
    if cal_title in a_calendar.title.text:
        cal_user = a_calendar.content.src.split('/')[5].replace('%40', '@')
        # If you print a_calendar.content.src you will see that the URL
        # contains an email-like identifier. This is the one to use to work
        # with the calendar.
Then you just have to replace the default user with cal_user in the API calls to work on the secondary calendar.
The replace call is required because the Google API functions do internal conversions on special characters like '%'.
I hope this will help you.