I am working on an AIML project for leisure and came across Pandorabots. I was wondering if there is a way to pass a variable from the user's input to another language (in this case Python) or framework, so that we can do further manipulation through third-party APIs, by means of some kind of templating?
For instance, I want to obtain a date from the user and then feed it into the Google Calendar API. Is there a way to extract the 'date' variable and pass it to the Google Calendar API in Python (or any other language)?
<category>
  <pattern># 1 MAY 2016 #</pattern>
  <think>{{ date }}</think>  <!-- is there a way to parse "1 May 2016" as a variable date in Python? -->
  <template>...
  </template>
</category>
Ultimately, the goal I am trying to achieve is a conversation something like this:
User: Hi bot, could you check if I am available on 1 May 2016?
Bot: Nope, you have a gathering at Mike's! (<-- response rendered after checking the user's schedule on 1 May via Google Calendar)
I explored templating engines like Mustache, but apparently they do not talk to AIML (or rather XML). Can anyone point me to a good example/tutorial that would help me get started?
PS: I'm using the Pandorabots API and Python 2.7.
In the PyAIML API, look for the keyword "predicates":
kernel.getPredicate('date')
kernel.getBotPredicate('date')
it returns the predicate that was set using
<set name="date"><star/></set>
Then you can easily parse it with Python.
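For example, the stored predicate can be turned into a real date object with the standard library (a small sketch; the format string assumes dates like "1 May 2016"):

from datetime import datetime

date_predicate = kernel.getPredicate('date')                 # e.g. "1 May 2016"
appointment = datetime.strptime(date_predicate, "%d %B %Y")
print appointment.date()                                     # Python 2.7, as in the question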
But that brings up a further question: what do I need AIML for in the first place? What is AIML's added value here?
I was also looking for information on a similar question. With the help of the answer provided by @Berry Tsakala, I was able to find a solution to my problem. Here is a detailed and improved version of the above example case that might be useful for others with the same question.
<category><pattern>Hi bot, could you check if I am available on *</pattern>
<template>Let me check your schedules on <set name="date"><star/></set>
</template>
</category>
Then, in your Python script, you can store it in a variable:
import aiml

kernel = aiml.Kernel()
kernel.learn("std-startup.xml")
kernel.respond("load aiml b")

while True:
    try:
        kernel.respond(raw_input("Enter your message >> "))
        appointment_date = kernel.getPredicate('date')
        print appointment_date
    except (KeyboardInterrupt, EOFError):
        break  # exit cleanly on Ctrl-C / Ctrl-D
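From there, the extracted date could be handed to the Google Calendar API. A rough sketch, assuming you already have an authorized service object built with google-api-python-client (credential setup omitted; is_free is an illustrative helper, not a library function):

from datetime import datetime, timedelta

def is_free(service, date_string):
    day = datetime.strptime(date_string, "%d %B %Y")
    events = service.events().list(
        calendarId="primary",
        timeMin=day.isoformat() + "Z",
        timeMax=(day + timedelta(days=1)).isoformat() + "Z",
        singleEvents=True,
    ).execute()
    # Free if no events fall on that day
    return len(events.get("items", [])) == 0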
Feel free to make any corrections to the answer if you find any errors, or if it needs any improvements. Hope you find it useful :)
I recently started working with knowledge graphs and RDF, and I was fortunate that someone provided me with an introductory exercise, which helped me implement some basic functionality on a local Apache server using PHP and EasyRdf. My long-term goal is to make use of NLP and machine-learning libraries, which is why I switched to Python and the rdflib library.
As far as I know, after reading the documentation of rdflib and several plugins, there is no equivalent of the load() and dump() functions that are available in EasyRdf.
Background
I used load() in PHP for loading URLs that conform to the Graph Store protocol, which allows the retrieval of content from a specific graph.
I then used dump() for a readable output of a graph (without checking the page's source text):
<?php
require 'vendor/autoload.php';
$graph=new EasyRdf\Graph();
print "<br/><b>The content of the graph:</b><br/>";
$graph->load("http://localhost:7200/repositories/myrepo/rdf-graph/service?graph=".urlencode("http://example.com#graph"));
print $graph->dump();
?>
My Question
My question now is: what would be the most straightforward way to implement something similar to the given example in Python using rdflib? Did I miss something, or are there just no equivalent functions available in rdflib?
I have already used SPARQLWrapper and the SPARQLUpdateStore for a different purpose, but they do not help in this case. So I would be interested in approaches similar to using EasyRdf\Http\Client() from EasyRdf. A basic example of mine in PHP would be this:
<?php
require 'vendor/autoload.php';
$address = "http://localhost:7200/repositories/myrepo?query=";
$query = urlencode("some_query");
$clienthttp = new EasyRdf\Http\Client($address.$query);
$clienthttp->setHeaders("Accept","application/sparql-results+json");
$resultJSON=$clienthttp->request()->getBody();
print "<br/><br/><b>The received JSON response: </b>".$resultJSON;
?>
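For what it's worth, a close Python counterpart to this client can be written with the requests library (a sketch; the endpoint and query are the same placeholders as above):

import requests

address = "http://localhost:7200/repositories/myrepo"
response = requests.get(
    address,
    params={"query": "some_query"},  # requests handles the URL-encoding
    headers={"Accept": "application/sparql-results+json"},
)
print("The received JSON response: %s" % response.text)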
Much thanks in advance for any kind of help.
UPDATE
There actually seems to be no equivalent in rdflib to the dump() function from EasyRdf, since Python is not a language for web pages (as PHP is) and has no built-in awareness of HTML and front-end output. The only thing I managed was to parse and serialize the graph as needed and then just return it. The output is not lovely, but the graph content will be correctly displayed in the page's source.
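A minimal sketch of that parse-and-serialize workaround (turtle_text is assumed to hold the RDF content retrieved from the store):

from rdflib import Graph

g = Graph()
g.parse(data=turtle_text, format="turtle")
output = g.serialize(format="turtle")  # str in rdflib >= 6, bytes in older versions
print(output)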
Regarding the load() function, the same thing can be achieved by combining the URL used to access your repository with a simple request. In the case of GraphDB, the following code worked for me:
import requests

# userinput is provided in the form of a Graph URI from the repository
def graphparser_helper(userinput):
    repoURL = "http://localhost:7200/repositories/SomeRepo/rdf-graphs/service?"
    graphURI = {"graph": userinput}
    desiredFormat = {"Accept": "text/turtle"}
    myRequest = requests.get(repoURL, headers=desiredFormat, params=graphURI)
    data = myRequest.text
    return data
I hope that helps someone.
So, I have a problem statement in which I want to extract the list of users who are following a particular #hashtag, like #obama, #corona, etc.
The challenge here is that I want to extract this data anonymously, i.e. without providing any account keys.
I tried a library named twint that is capable of doing this, but it's very slow. Can anyone recommend a better alternative for my use case?
There's no such library available that satisfies your use case. Yes, there's the twint library, but as you mentioned it's too slow for your use case, so try libraries in other languages and see if something is available there.
You can try writing a script in Python using Selenium; I think you could get the names of the users quite fast.
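A very rough sketch of that idea, assuming Selenium and a browser driver are installed; the CSS selector is a placeholder and would need to match Twitter's current markup, which changes frequently:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://twitter.com/search?q=%23obama&f=live")
# Collect the user names currently rendered on the results page
usernames = {el.text for el in driver.find_elements(By.CSS_SELECTOR, "[data-testid='User-Name']")}
driver.quit()
print(usernames)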
This GitHub repo which I found might be useful. It does not require authentication to get the Twitter data. Have a look at it: https://github.com/bisguzar/twitter-scraper
I tried this approach last year, but found out that my date range fell well outside of the available info provided by Twitter, and had to use the Premium API. If this is not a constraint for you, and since you do not want to code your own scraper, take a look at this option:
TweetScraper: updated in September last year; it also provides MongoDB integration. I haven't tried it, but it seems to work OK. I don't know about its time performance.
I have built an app that handles incoming SMS for a short number. We might have several campaigns running at the same time, each with different mechanics, keywords, etc.
At the moment I am using Django to handle this.
My biggest concern at the moment is that the consumers using our service are in a very low-LSM market in South Africa. The SMSs arrive in weird formats and I get a lot of wrong entries.
I'm trying to rethink the way the SMS is interpreted and would like some ideas.
At the moment it basically works like this:
Get the SMS and split it by ' or *, then look first for the KEYWORD. All campaigns have keywords, and I run through a list of live campaign keywords to see if there is a match in the message. Then I go on through the split message and compare or look for more words as needed. I need to reply based on the message, e.g. the KEYWORD is there but the 2nd or 3rd (or whatever) parameter is not.
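To make that concrete, here is a sketch of the flow described above (keywords and expected parameters are illustrative, not from the question):

import re

# keyword -> expected extra parameters, per live campaign
LIVE_KEYWORDS = {"WIN": ["NAME", "SURNAME"], "PROMO": ["CODE"]}

def interpret_sms(body):
    parts = [p.strip().upper() for p in re.split(r"['*]", body) if p.strip()]
    keyword = next((p for p in parts if p in LIVE_KEYWORDS), None)
    if keyword is None:
        return "Sorry, we don't recognise that campaign keyword."
    missing = len(LIVE_KEYWORDS[keyword]) - (len(parts) - 1)
    if missing > 0:
        return "We got your %s entry, but %d detail(s) are missing." % (keyword, missing)
    return "Thanks, your %s entry has been received!" % keyword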
Megan from Twilio here.
You may or may not be using a Twilio short code. But I think your question relates to a little guide we have on How to Build an SMS Keyword Response Application.
The example there is in PHP, but you can get started in Python here, and your code might look like:
from flask import request
from twilio import twiml

def sms():
    r = twiml.Response()
    body = request.form['Body']
    if KEYWORD in body:  # KEYWORD: your campaign keyword
        # do some action
        pass
    return str(r)
I need to get the number of unique visitors (say, for the last 5 minutes) that are currently looking at an article, so I can display that number and sort the articles by most popular.
e.g. similar to how most forums display 'There are n people viewing this thread'.
How can I achieve this on Google App Engine? I am using Python 2.7.
Please try to explain in a simple way because I recently started learning programming and I am working on my first project. I don't have lots of experience. Thank you!
Create a counter (a property within an entity) and increase it transactionally for every page view. If you have more than a few page views a second, you need to look into sharded counters.
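A minimal sketch with the App Engine ndb API (a single counter entity; switch to sharded counters if the write rate grows):

from google.appengine.ext import ndb

class ArticleCounter(ndb.Model):
    views = ndb.IntegerProperty(default=0)

@ndb.transactional
def increment(article_id):
    counter = ArticleCounter.get_or_insert(article_id)
    counter.views += 1
    counter.put()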
There is no way to tell when someone stops viewing a page unless you use Javascript to inform the server when that happens. Forums etc typically assume that someone has stopped viewing a page after n minutes of inactivity, and base their figures on that.
For minimal resource use, I would suggest using memcache exclusively here. If the value gets evicted, the count will be incorrect, but the consequences of that are minimal, and other solutions will use a lot more resources.
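A sketch of that memcache-only approach, where visitor_id might be a session cookie value (the five-minute window comes from the question):

from google.appengine.api import memcache

WINDOW = 300  # seconds

def record_view(article_id, visitor_id):
    # add() fails if the key already exists, so each visitor counts once per window
    if memcache.add("seen:%s:%s" % (article_id, visitor_id), 1, time=WINDOW):
        if memcache.incr("views:%s" % article_id) is None:
            memcache.add("views:%s" % article_id, 1, time=WINDOW)

def current_viewers(article_id):
    return memcache.get("views:%s" % article_id) or 0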
Did you consider the Google Analytics service for getting statistics? Read this article about real-time monitoring using that service. Please note: a special script must be embedded on every page you want to monitor.
Could someone explain how Internet Explorer cookies are stored and how they are retrieved using index.dat? Also, could someone explain how this binary file is built, and whether there is a parser/module/library for it written in Perl or Python?
Have a look at HTTP::Cookies::Microsoft, part of libwww-perl.
Here is a series of blog posts I wrote about index.dat:
Index.dat: Part I - What is index.dat?
Index.dat: Part II - What are they used for?
Basically you'll need to be able to call the WinInet UrlCacheEntry APIs from Perl. I think there is a Win32 module that lets you do this.