LUIS Python SDK utterance addition - Python

We are trying to create a chatbot with the LUIS framework and the Python SDK, using the Azure documentation as a reference. We have been able to add intents, entities and prebuilt entities this way, and those changes show up on the portal, confirming the additions.
However, the utterances added by the code below are neither showing up on the portal nor being listed in the terminal.
def create_utterance(intent, utterance, *labels):
    """
    Add an example LUIS utterance from utterance text and a list of
    labels. Each label is a 2-tuple containing a label name and the
    text within the utterance that represents that label.

    Utterances apply to a specific intent, which must be specified.
    """
    text = utterance.lower()

    def label(name, value):
        value = value.lower()
        start = text.index(value)
        return dict(entity_name=name, start_char_index=start,
                    end_char_index=start + len(value), role=None)

    return dict(text=text, intent_name=intent,
                entity_labels=[label(n, v) for (n, v) in labels])
utterances = [create_utterance("FindFlights", "find flights in economy to Madrid",
                               ("Flight", "economy to Madrid"),
                               ("Location", "Madrid"),
                               ("Class", "economy")),
              create_utterance("FindFlights", "find flights to London in first class",
                               ("Flight", "London in first class"),
                               ("Location", "London"),
                               ("Class", "first")),
              create_utterance("FindFlights", "find flights from seattle to London in first class",
                               ("Flight", "flights from seattle to London in first class"),
                               ("Location", "London"),
                               ("Location", "Seattle"),
                               ("Class", "first"))]

client.examples.batch(appId, appVersion, utterances, raw=True)
client.examples.list(appId, appVersion)
This code does not return any error, but the utterances are not listed either.
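One thing worth checking (a hedged sketch, not a confirmed fix): both calls return values that are discarded here. batch returns one status object per utterance, and list returns the stored examples rather than printing them, so inspecting the return values may surface a per-utterance error that never reaches the terminal. The attribute names below (has_error, error, text) follow the azure-cognitiveservices-language-luis models, but verify them against the SDK version you have installed.

batch_result = client.examples.batch(appId, appVersion, utterances)
for item in batch_result:
    # each item reports whether that particular utterance failed to upload
    if getattr(item, "has_error", False):
        print("Failed to add utterance:", item.error)

stored = client.examples.list(appId, appVersion)
for example in stored:
    print(example.text)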

Related

How to access the entity value in Luis

I want to print the value of an entity that is already defined in the LUIS portal. I have already printed the top intent, but I also need to access the entity and print its value.
As per the Microsoft documentation:
You can get an entity and its subentities, for example:
modelObject = client.model.get_entity(app_id, versionId, modelId)
toppingQuantityId = get_grandchild_id(modelObject, "Toppings", "Quantity")
pizzaQuantityId = get_grandchild_id(modelObject, "Pizza", "Quantity")
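get_grandchild_id is a helper from the quickstart rather than part of the SDK; it just walks the children of the model object returned by get_entity. A rough sketch of what it can look like (the .name, .children and .id attributes are assumptions based on the authoring SDK's model objects):

def get_grandchild_id(model, child_name, grandchild_name):
    # find the named child entity, then the named grandchild beneath it,
    # and return the grandchild's model id
    child = next(c for c in model.children if c.name == child_name)
    grandchild = next(g for g in child.children if g.name == grandchild_name)
    return grandchild.id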
If you want to get the prediction from the runtime, for example:
# Production == slot name
predictionRequest = { "query" : "I want two small pepperoni pizzas with more salsa" }

predictionResponse = clientRuntime.prediction.get_slot_prediction(app_id, "Production", predictionRequest)
print("Top intent: {}".format(predictionResponse.prediction.top_intent))
print("Sentiment: {}".format(predictionResponse.prediction.sentiment))
print("Intents: ")
for intent in predictionResponse.prediction.intents:
    print("\t{}".format(json.dumps(intent)))
print("Entities: {}".format(predictionResponse.prediction.entities))
You can refer to How to extract entity from Microsoft LUIS using python?

How to do search by option to search from files? [closed]

I'm a beginner trying to build a simple library management system in Python. Users can search for a book in a list of many books stored in a text file. Here is an example of what is in the text file:
Author: J.K Rowling
Title: Harry Potter and the Deathly Hollow
Keywords: xxxx
Published by: xxxx
Published year: xxxx
Author: Stephen King
Title: xxxx
Keywords: xxxx
Published by: xxxx
Published year: xxxx
Author: J.K Rowling
Title: Harry Potter and the Half Blood Prince
Keywords: xxxx
Published by: xxxx
Published year: xxxx
This is where it gets difficult for me. There is a "Search by Author" option for the user. When the user searches for an author (e.g. J.K Rowling), I want to output all of that author's books with their related fields (Author, Title, Keywords, Published by, Published year); in this case there are two J.K Rowling books. This is the last piece of the program, and it is the part I'm having the most difficulty with. Please help me, and thank you all in advance.
Is it possible for you to implement the text file in the form of a JSON file instead? It could be a better alternative since you could easily access all the values depending on the key you have chosen and search through those as well.
{
    "Harry Potter and the Deathly Hollow": {
        "Author": "J.K Rowling",
        "Keywords": "xxxx",
        "Published by": "xxxx",
        "Published year": "xxxx"
    },
    "Example 2": {
        "Author": "Stephen King",
        "Keywords": "xxxx",
        "Published by": "xxxx",
        "Published year": "xxxx"
    }
}
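Once the data is in a JSON file, searching by author becomes a simple dictionary scan, for example (a sketch; "books.json" and the field names are just the ones assumed above):

import json

with open("books.json", "r") as f:
    books = json.load(f)

search_query = "J.K Rowling"
for title, details in books.items():
    if details.get("Author", "").lower() == search_query.lower():
        print(title)
        for key, value in details.items():
            print("  {}: {}".format(key, value))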
You can iterate through the lines of the text file like this:
with open(r"path\to\text_file.txt", "r") as books:
lines = books.readlines()
for index in range(len(lines)):
line = lines[index]
Now, get the author of each book by splitting the line on the ":" character and testing whether the first part == "Author". Then take the second part of the split string and strip it of "\n" (newline) and " " characters, so there are no extra spaces or anything else that would mess up the search on either side. I would also recommend lowercasing both the author name and the search query so that capitalisation doesn't matter. Test whether this equals the search query:
if line.split(":")[0] == "Author" and\
   line.split(":")[1].strip("\n ").lower() == search_query.lower():
Then, inside this if block, print out all the required information about this book.
Completed code:
search_query = "J.K Rowling"

with open(r"books.txt", "r") as books:
    lines = books.readlines()

for index in range(len(lines)):
    line = lines[index]
    if line.split(":")[0] == "Author" and line.split(":")[1].strip("\n ").lower() == search_query.lower():
        print(*lines[index + 1: index + 5])
Generally, a lot of problems to be programmed can be resolved into a three-step process:
Read the input into an internal data structure
Do processing as required
Write the output
This problem seems like quite a good fit for that pattern:
In the first part, read the text file into an in-memory list of either dictionaries or objects (depending on what's expected by your course)
In the second part, search the in-memory list according to the search criteria; this will result in a shorter list containing the results
In the third part, print out the results neatly
It would be reasonable to put these into three separate functions, and to attack each of them separately
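As a sketch of that three-function structure (the field names follow the text file shown in the question; treat the details as one possible layout rather than the only one):

def read_books(path):
    """Read the text file into a list of dicts, one dict per book."""
    books, current = [], {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            key, _, value = line.partition(":")
            if key == "Author" and current:
                # a new "Author:" line starts the next book
                books.append(current)
                current = {}
            current[key] = value.strip()
    if current:
        books.append(current)
    return books

def search_by_author(books, author):
    """Return the books whose Author field matches, ignoring case."""
    return [b for b in books if b.get("Author", "").lower() == author.lower()]

def print_books(books):
    for book in books:
        for key, value in book.items():
            print("{}: {}".format(key, value))
        print()

print_books(search_by_author(read_books("books.txt"), "J.K Rowling"))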
# Read the book details from the file, e.g. books.txt
with open("books.txt", "r") as fd:
    lines = fd.read()

# Split the contents on "Author". The word "Author" itself is dropped by the
# split, so add it back to each chunk; bookdetails ends up with one entry per book.
bookdetails = ["Author" + line for line in lines.split("Author")[1:]]

# Author name to search for
authorName = "J.K Rowling"

# Search the bookdetails list for the given author name. Splitting each match
# on newlines gives a list of detail lines per book.
result = [book.splitlines() for book in bookdetails if "Author: " + authorName in book]
print(result)
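Since result is a list of line lists, each matching book can then be printed as its own block:

for book in result:
    print("\n".join(book))
    print()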
If you will always receive this format of the file and you want to transform it into a dictionary:
def read_author(file):
    data = dict()
    with open(file, "r") as f:
        li = f.read().split("\n")
        for e in li:
            if ":" in e:
                data[e.split(":")[0]] = e.split(":")[1]
    return data['Author']
Note: The text file sometimes has empty lines so I check if the line contains the colon (:) before transforming it into a dict.
Then if you want a more generic method you can pass the KEY of the element you want:
def read_info(file, key):
    data = dict()
    with open(file, "r") as f:
        li = f.read().split("\n")
        for e in li:
            if ":" in e:
                data[e.split(":")[0]] = e.split(":")[1]
    return data[key]
By separating the reading as follows, you can be more modular:
class BookInfo:
    def __init__(self, file) -> None:
        self.file = file
        self.data = None

    def __read_file(self):
        if self.data is None:
            with open(self.file, "r") as f:
                li = f.read().split("\n")
                self.data = dict()
                for e in li:
                    if ":" in e:
                        self.data[e.split(":")[0]] = e.split(":")[1]

    def read_author(self):
        self.__read_file()
        return self.data['Author']
Then create objects for each book:
info = BookInfo("book.txt")
print(info.read_author())

Google Cloud NL entity recognizer grouping words together

When attempting to find the entities in a long text input, Google Cloud's Natural Language API is grouping words together and then assigning them the wrong entity type. Here is my program:
def entity_recognizer(nouns):
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/Users/superaitor/Downloads/link"
    text = ""
    for words in nouns:
        text += words + " "

    client = language.LanguageServiceClient()
    if isinstance(text, six.binary_type):
        text = text.decode('utf-8')

    document = types.Document(
        content=text.encode('utf-8'),
        type=enums.Document.Type.PLAIN_TEXT)

    encoding = enums.EncodingType.UTF32
    if sys.maxunicode == 65535:
        encoding = enums.EncodingType.UTF16

    entity = client.analyze_entities(document, encoding).entities
    entity_type = ('UNKNOWN', 'PERSON', 'LOCATION', 'ORGANIZATION',
                   'EVENT', 'WORK_OF_ART', 'CONSUMER_GOOD', 'OTHER')

    for entity in entity:
        #if entity_type[entity.type] is "PERSON":
        print(entity_type[entity.type])
        print(entity.name)
Here nouns is a list of words, which I then turn into a string (I've tried multiple ways of doing so, all giving the same result), yet the program spits out output like:
PERSON
liberty secularism etching domain professor lecturer tutor royalty
government adviser commissioner
OTHER
business view society economy
OTHER
business
OTHER
verge industrialization market system custom shift rationality
OTHER
family kingdom life drunkenness college student appearance income family
brink poverty life writer variety attitude capitalism age process
production factory system
Any input on how to fix this?
To analyze entities in a text you can use a sample from the documentation which looks something like this:
import argparse
import sys

from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types
import six


def entities_text(text):
    """Detects entities in the text."""
    client = language.LanguageServiceClient()

    if isinstance(text, six.binary_type):
        text = text.decode('utf-8')

    # Instantiates a plain text document.
    document = types.Document(
        content=text,
        type=enums.Document.Type.PLAIN_TEXT)

    # Detects entities in the document. You can also analyze HTML with:
    #   document.type == enums.Document.Type.HTML
    entities = client.analyze_entities(document).entities

    # entity types from enums.Entity.Type
    entity_type = ('UNKNOWN', 'PERSON', 'LOCATION', 'ORGANIZATION',
                   'EVENT', 'WORK_OF_ART', 'CONSUMER_GOOD', 'OTHER')

    for entity in entities:
        print('=' * 20)
        print(u'{:<16}: {}'.format('name', entity.name))
        print(u'{:<16}: {}'.format('type', entity_type[entity.type]))
        print(u'{:<16}: {}'.format('metadata', entity.metadata))
        print(u'{:<16}: {}'.format('salience', entity.salience))
        print(u'{:<16}: {}'.format('wikipedia_url',
              entity.metadata.get('wikipedia_url', '-')))


entities_text("Donald Trump is president of United States of America")
The output of this sample is:
====================
name : Donald Trump
type : PERSON
metadata : <google.protobuf.pyext._message.ScalarMapContainer object at 0x7fd9d0125170>
salience : 0.9564903974533081
wikipedia_url : https://en.wikipedia.org/wiki/Donald_Trump
====================
name : United States of America
type : LOCATION
metadata : <google.protobuf.pyext._message.ScalarMapContainer object at 0x7fd9d01252b0>
salience : 0.04350961744785309
wikipedia_url : https://en.wikipedia.org/wiki/United_States
As you can see in this example, Entity Analysis inspects the given text for known entities (proper nouns such as public figures, landmarks, etc.). It is not going to provide an entity for each word in the text.
Instead of classifying according to entities, I would use Google default categories directly, changing
entity = client.analyze_entities(document, encoding).entities
to
categories = client.classify_text(document).categories
and updating the code accordingly. I wrote the following sample code based on this tutorial, further developed on GitHub.
def run_quickstart():
    # [START language_quickstart]
    # Imports the Google Cloud client library
    # [START migration_import]
    from google.cloud import language
    from google.cloud.language import enums
    from google.cloud.language import types
    # [END migration_import]

    # Instantiates a client
    # [START migration_client]
    client = language.LanguageServiceClient()
    # [END migration_client]

    # The text to analyze
    text = u'For its part, India has said it will raise taxes on 29 products imported from the US - including some agricultural goods, steel and iron products - in retaliation for the wide-ranging US tariffs.'
    document = types.Document(
        content=text,
        type=enums.Document.Type.PLAIN_TEXT)

    # Detects the sentiment of the text
    sentiment = client.analyze_sentiment(document=document).document_sentiment

    # Classify content categories
    categories = client.classify_text(document).categories

    # Category feedback for the user
    for category in categories:
        print(u'=' * 20)
        print(u'{:<16}: {}'.format('name', category.name))
        print(u'{:<16}: {}'.format('confidence', category.confidence))

    # Sentiment feedback for the user
    print('Text: {}'.format(text))
    print('Sentiment: {}, {}'.format(sentiment.score, sentiment.magnitude))
    # [END language_quickstart]


if __name__ == '__main__':
    run_quickstart()
Does this solution work for you? If not, why not?

Entity Recognition in Stanford NLP using Python

I am using Stanford CoreNLP from Python. I have taken the code from here.
The following is the code:
from stanfordcorenlp import StanfordCoreNLP
from collections import defaultdict
import logging
import json


class StanfordNLP:
    def __init__(self, host='http://localhost', port=9000):
        self.nlp = StanfordCoreNLP(host, port=port,
                                   timeout=30000, quiet=True, logging_level=logging.DEBUG)
        self.props = {
            'annotators': 'tokenize,ssplit,pos,lemma,ner,parse,depparse,dcoref,relation,sentiment',
            'pipelineLanguage': 'en',
            'outputFormat': 'json'
        }

    def word_tokenize(self, sentence):
        return self.nlp.word_tokenize(sentence)

    def pos(self, sentence):
        return self.nlp.pos_tag(sentence)

    def ner(self, sentence):
        return self.nlp.ner(sentence)

    def parse(self, sentence):
        return self.nlp.parse(sentence)

    def dependency_parse(self, sentence):
        return self.nlp.dependency_parse(sentence)

    def annotate(self, sentence):
        return json.loads(self.nlp.annotate(sentence, properties=self.props))

    @staticmethod
    def tokens_to_dict(_tokens):
        tokens = defaultdict(dict)
        for token in _tokens:
            tokens[int(token['index'])] = {
                'word': token['word'],
                'lemma': token['lemma'],
                'pos': token['pos'],
                'ner': token['ner']
            }
        return tokens


if __name__ == '__main__':
    sNLP = StanfordNLP()
    text = r'China on Wednesday issued a $50-billion list of U.S. goods including soybeans and small aircraft for possible tariff hikes in an escalating technology dispute with Washington that companies worry could set back the global economic recovery.The country\'s tax agency gave no date for the 25 percent increase...'
    ANNOTATE = sNLP.annotate(text)
    POS = sNLP.pos(text)
    TOKENS = sNLP.word_tokenize(text)
    NER = sNLP.ner(text)
    PARSE = sNLP.parse(text)
    DEP_PARSE = sNLP.dependency_parse(text)
I am only interested in the entity recognition, which is saved in the variable NER. Printing NER gives the following result:
If I run the same text on the Stanford website, the output for NER is:
There are 2 problems with my Python Code:
1. '$' and '50-billion' should be combined and named a single entity.
Similarly, I want '25' and 'percent' as a single entity as it is showing in the online stanford output.
2. In my output, 'Washington' is shown as a State and 'China' is shown as a Country. I want both of them to be shown as 'Loc', as in the Stanford website output. The possible solution to this problem lies in the documentation, but I don't know which model I am using or how to change it.
Here is a way you can solve this
Make sure to download Stanford CoreNLP 3.9.1 and the necessary model jars
Set up the server properties in this file "ner-server.properties"
annotators = tokenize,ssplit,pos,lemma,ner
ner.applyFineGrained = false
Start the server with this command:
java -Xmx12g edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 15000 -serverProperties ner-server.properties
Make sure you've installed this Python package:
https://github.com/stanfordnlp/python-stanford-corenlp
Run this Python code:
import corenlp

client = corenlp.CoreNLPClient(start_server=False, annotators=["tokenize", "ssplit", "pos", "lemma", "ner"])

sample_text = "Joe Smith was born in Hawaii."
ann = client.annotate(sample_text)

for mention in ann.sentence[0].mentions:
    print([x.word for x in ann.sentence[0].token[mention.tokenStartInSentenceInclusive:mention.tokenEndInSentenceExclusive]])
Here are all the fields available in the EntityMention for each entity:
sentenceIndex: 0
tokenStartInSentenceInclusive: 5
tokenEndInSentenceExclusive: 7
ner: "MONEY"
normalizedNER: "$5.0E10"
entityType: "MONEY"
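Those fields can be printed next to the token text by extending the loop above, for example:

for mention in ann.sentence[0].mentions:
    tokens = ann.sentence[0].token[mention.tokenStartInSentenceInclusive:
                                   mention.tokenEndInSentenceExclusive]
    # mention.ner and mention.normalizedNER are the protobuf fields listed above
    print(" ".join(t.word for t in tokens), "->", mention.ner, mention.normalizedNER)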

Getting facts from an RDF graph in a usable way using RDFlib

I am trying to learn to use RDF and am trying to pull a set of facts out of DBpedia as a learning exercise. The following code sample is sort of working, but for predicates such as spouse it always pulls out the person themselves as well.
QUESTIONS:
get_name_from_uri() pulls out the last part of the URI and removes the underscores - there has got to be a better way to get a person's name
results for spouse pull back the spouse but also pull back the data subject - not sure what's going on there
Some results pull back data both in URI format and as a text item
This is the output from the code block and shows some of the odd results I am getting (see the mixed output in the properties, the fact that he is married to himself, and the mangled name of Joséphine):
Accessing facts for Napoleon held at http://dbpedia.org/resource/Napoleon
There are 800 facts about Napoleon stored at the URI
http://dbpedia.org/resource/Napoleon
Here are a few:-
Ontology:deathdate
Napoleon died on 1821-05-05
Ontology:birthdate
Napoleon was born on 1769-08-15
Property:spouse returns the person themselves twice!
Napoleon was married to Marie Louise, Duchess of Parma
Napoleon was married to Napoleon
Napoleon was married to Jos%C3%A9phine de Beauharnais
Napoleon was married to Napoleon
Property:title returns text and URIs
Napoleon Held the title: "The Death of Napoleon"
Napoleon Held the title: http://dbpedia.org/resource/Emperor_of_the_French
Napoleon Held the title: http://dbpedia.org/resource/King_of_Italy
Napoleon Held the title: First Consul of France
Napoleon Held the title: Provisional Consul of France
Napoleon Held the title: http://dbpedia.org/resource/Napoleon
Napoleon Held the title: Emperor of the French
Napoleon Held the title: http://dbpedia.org/resource/Co-Princes_of_Andorra
Napoleon Held the title: from the Memoirs of Bourrienne, 1831
Napoleon Held the title: Protector of the Confederation of the Rhine
Ontology birth place returns three records
Napoleon was born in Ajaccio
Napoleon was born in Corsica
Napoleon was born in Early modern France
This is the Python that produces the output above; it requires rdflib and is very much a work in progress.
import rdflib
from rdflib import Graph, URIRef, RDF

######################################
# A quick test of the python library rdflib to get data from an rdf graph
# D Moore 15/3/2014
# needs rdflib > version 3.0
# CHANGE THE URI BELOW TO A DIFFERENT PERSON AND SEE WHAT HAPPENS
# COULD DO WITH A WEB FORM
# NOTES:
#
#URI_ref = 'http://dbpedia.org/resource/Richard_Nixon'
#URI_ref = 'http://dbpedia.org/resource/Margaret_Thatcher'
#URI_ref = 'http://dbpedia.org/resource/Isaac_Newton'
#URI_ref = 'http://dbpedia.org/resource/Richard_Nixon'
URI_ref = 'http://dbpedia.org/resource/Napoleon'
#URI_ref = 'http://dbpedia.org/resource/apple'

##########################################################
def get_name_from_uri(dbpedia_uri):
    # pulls the last part of a uri out and removes underscores
    # got to be an easier way but it works
    output_string = ""
    s = dbpedia_uri
    # chop the url into bits divided by the /
    tokens = s.split("/")
    # because the name of our person is in the last section, iterate through each token
    # and replace the underscore with a space
    for i in tokens:
        str = ''.join([i])
        output_string = str.replace('_', ' ')
    # returns the name of the person without underscores
    return(output_string)

def is_person(uri):
    ##### SPARQL way to do this
    uri = URIRef(uri)
    person = URIRef('http://dbpedia.org/ontology/Person')
    g = Graph()
    g.parse(uri)
    resp = g.query(
        "ASK {?uri a ?person}",
        initBindings={'uri': uri, 'person': person}
    )
    print uri, "is a person?", resp.askAnswer
    return resp.askAnswer

URI_NAME = get_name_from_uri(URI_ref)
NAME_LABEL = ''

if is_person(URI_ref):
    print "Accessing facts for", URI_NAME, " held at ", URI_ref
    g = Graph()
    g.parse(URI_ref)
    print "Person Extract for", URI_NAME
    print "There are ", len(g), " facts about", URI_NAME, "stored at the URI ", URI_ref
    print "Here are a few:-"
    # Ok so lets get some facts for our person
    for stmt in g.subject_objects(URIRef("http://dbpedia.org/ontology/birthName")):
        print URI_NAME, "was born " + str(stmt[1])
    for stmt in g.subject_objects(URIRef("http://dbpedia.org/ontology/deathDate")):
        print URI_NAME, "died on", str(stmt[1])
    for stmt in g.subject_objects(URIRef("http://dbpedia.org/ontology/birthDate")):
        print URI_NAME, "was born on", str(stmt[1])
    for stmt in g.subject_objects(URIRef("http://dbpedia.org/ontology/eyeColor")):
        print URI_NAME, "had eyes coloured", str(stmt[1])
    for stmt in g.subject_objects(URIRef("http://dbpedia.org/property/spouse")):
        print URI_NAME, "was married to ", get_name_from_uri(str(stmt[1]))
    for stmt in g.subject_objects(URIRef("http://dbpedia.org/ontology/reigned")):
        print URI_NAME, "reigned ", get_name_from_uri(str(stmt[1]))
    for stmt in g.subject_objects(URIRef("http://dbpedia.org/ontology/children")):
        print URI_NAME, "had a child called ", get_name_from_uri(str(stmt[1]))
    for stmt in g.subject_objects(URIRef("http://dbpedia.org/property/profession")):
        print URI_NAME, "(PROPERTY profession) was trained as a ", get_name_from_uri(str(stmt[1]))
    for stmt in g.subject_objects(URIRef("http://dbpedia.org/property/child")):
        print URI_NAME, "PROPERTY child ", get_name_from_uri(str(stmt[1]))
    for stmt in g.subject_objects(URIRef("http://dbpedia.org/property/deathplace")):
        print URI_NAME, "(PROPERTY death place) died at: ", str(stmt[1])
    for stmt in g.subject_objects(URIRef("http://dbpedia.org/property/title")):
        print URI_NAME, "(PROPERTY title) Held the title: ", str(stmt[1])
    for stmt in g.subject_objects(URIRef("http://dbpedia.org/ontology/sex")):
        print URI_NAME, "was a ", str(stmt[1])
    for stmt in g.subject_objects(URIRef("http://dbpedia.org/ontology/knownfor")):
        print URI_NAME, "was known for ", str(stmt[1])
    for stmt in g.subject_objects(URIRef("http://dbpedia.org/ontology/birthPlace")):
        print URI_NAME, "was born in ", get_name_from_uri(str(stmt[1]))
else:
    print "ERROR - "
    print "Resource", URI_ref, 'does not look to be a person or there is no record in dbpedia'
Getting names
*get_name_from_uri* is doing something with the URI. Since DBpedia data has rdfs:label on almost everything, it's probably a better idea to ask for the rdfs:label and to use that as a value. E.g., look at the results of this SPARQL query run against the DBpedia SPARQL endpoint:
select ?spouse ?spouseName where {
  dbpedia:Napoleon dbpedia-owl:spouse ?spouse .
  ?spouse rdfs:label ?spouseName .
  filter( langMatches(lang(?spouseName),"en") )
}
spouse                                                        spouseName
http://dbpedia.org/resource/Jos%C3%A9phine_de_Beauharnais    "Joséphine de Beauharnais"@en
http://dbpedia.org/resource/Marie_Louise,_Duchess_of_Parma   "Marie Louise, Duchess of Parma"@en
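In rdflib terms, once the resource has been parsed into a graph, the label can be read directly instead of being reconstructed from the URI. A sketch:

from rdflib import Graph, URIRef
from rdflib.namespace import RDFS

uri = URIRef('http://dbpedia.org/resource/Napoleon')
g = Graph()
g.parse(uri)
# take any English rdfs:label attached to the resource
for label in g.objects(uri, RDFS.label):
    if getattr(label, 'language', None) == 'en':
        print(label)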
Unexpected Spouses
The documentation for subject_objects says that
subject_objects(self, predicate=None)
A generator of (subject, object) tuples for the given predicate
You're seeing, correctly, that there are four triples in DBpedia that have the predicate dbpprop:spouse (by the way, is there a reason you're not using dbpedia-owl:spouse?) and have Napoleon as a subject or object:
Napoleon spouse Marie Louise, Duchess of Parma
Marie Louise, Duchess of Parma spouse Napoleon
Napoleon spouse Jos%C3%A9phine de Beauharnais
Jos%C3%A9phine de Beauharnais spouse Napoleon
For each one of those, you're printing out
"Napoleon was married to X"
where X is the object of the triple. Perhaps you should use objects instead:
objects(self, subject=None, predicate=None)
A generator of objects with the given subject and predicate
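With rdflib that looks something like the following sketch, reusing g, URI_NAME and get_name_from_uri from the question's script:

spouse_pred = URIRef("http://dbpedia.org/property/spouse")
napoleon = URIRef("http://dbpedia.org/resource/Napoleon")
# objects() only yields triples whose subject is Napoleon, so the reverse
# "X spouse Napoleon" triples no longer show up as extra spouses
for spouse in g.objects(napoleon, spouse_pred):
    print(URI_NAME + " was married to " + get_name_from_uri(str(spouse)))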
URI vs. text (literal) results
The data described by DBpedia ontology properties (those whose URIs begin with http://dbpedia.org/ontology/, typically abbreviated dbpedia-owl:) is much “cleaner” than the data described by the DBpedia raw data properties (those whose URIs begin with http://dbpedia.org/property/, typically abbreviated dbpprop:). E.g., when you're looking at the titles, you're using the property dbpprop:title, and there are both URIs and literals as values. It doesn't look like there's a dbpedia-owl:title, though, so in this case you'll just have to deal with it. It's easy enough to filter out one or the other though:
select ?title where {
  dbpedia:Napoleon dbpprop:title ?title
  filter isLiteral(?title)
}
title
================================================
"Emperor of the French"@en
"Protector of the Confederation of the Rhine"@en
"First Consul of France"@en
"Provisional Consul of France"@en
"\"The Death of Napoleon\""@en
"from the Memoirs of Bourrienne, 1831"@en
select ?title where {
  dbpedia:Napoleon dbpprop:title ?title
  filter isURI(?title)
}
title
=================================================
http://dbpedia.org/resource/Co-Princes_of_Andorra
http://dbpedia.org/resource/Emperor_of_the_French
http://dbpedia.org/resource/King_of_Italy
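The same split can be made client-side in rdflib by checking the Python type of each value (a sketch, reusing the graph g from the question's script):

from rdflib import Literal

title_pred = URIRef("http://dbpedia.org/property/title")
napoleon = URIRef("http://dbpedia.org/resource/Napoleon")
for title in g.objects(napoleon, title_pred):
    # literal titles correspond to the isLiteral() results, URIRefs to the isURI() ones
    if isinstance(title, Literal):
        print("literal title: " + str(title))
    else:
        print("resource title: " + str(title))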
