pronoun resolution backwards - python

The usual coreference resolution works in the following way:
Provided
The man likes math. He really does.
it figures out that
he
refers to
the man.
There are plenty of tools to do this.
However, is there a way to do it backwards?
For example,
given
The man likes math. The man really does.
I want to do the pronoun resolution "backwards,"
so that I get an output like
The man likes math. He really does.
My input text will mostly be 3~10 sentences, and I'm working with python.

This is perhaps not really an answer to be happy with, but I think the answer is that there's no such functionality built in anywhere, though you can code it yourself without too much difficulty. Giving an outline of how I'd do it with CoreNLP:
Still run coref. This'll tell you that "the man" and "the man" are coreferent, and so you can replace the second one with a pronoun.
Run the gender annotator from CoreNLP. This is a poorly-documented and even more poorly advertised annotator that tries to attach gender to tokens in a sentence.
Somehow figure out plurals. Most of the time you could use the part-of-speech tag: plural nouns get the tags NNS or NNPS, but there are some complications so you might also want to consider (1) the existence of conjunctions in the antecedent; (2) the lemma of a word being different from its text; (3) especially in conjunction with 2, the word ending in 's' or 'es' -- this can distinguish between lemmatizations which strip out plurals versus lemmatizations which strip out tenses, etc.
This is enough to figure out the right pronoun. Now it's just a matter of chopping up the sentence and putting it back together. This is a bit of a pain if you do it in CoreNLP -- the code is just not set up to change the text of a sentence -- but in the worst case you can always just re-annotate a new surface form.
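As a very rough sketch of those steps, here is what the final pronoun-substitution could look like in Python, assuming you have already run coref and gender annotation and flattened the output into simple mention tuples yourself (the (sent_idx, start, end, gender, number) format below is made up for illustration, not CoreNLP's actual data structure):
PRONOUNS = {
    ('MALE', 'SINGULAR'): 'he',
    ('FEMALE', 'SINGULAR'): 'she',
    ('NEUTRAL', 'SINGULAR'): 'it',
    ('UNKNOWN', 'PLURAL'): 'they',
}

def pronominalize(tokenized_sentences, chains):
    # Replace every mention after the first in each coref chain with a pronoun.
    replacements = []
    for chain in chains:
        for sent_idx, start, end, gender, number in chain[1:]:
            pronoun = PRONOUNS.get((gender, number), 'it')
            replacements.append((sent_idx, start, end, pronoun))
    # Apply replacements right-to-left within each sentence so token indices stay valid.
    for sent_idx, start, end, pronoun in sorted(replacements, key=lambda r: (r[0], -r[1])):
        if start == 0:
            pronoun = pronoun.capitalize()
        tokenized_sentences[sent_idx][start:end] = [pronoun]
    return [' '.join(toks) for toks in tokenized_sentences]

sents = [['The', 'man', 'likes', 'math', '.'], ['The', 'man', 'really', 'does', '.']]
chains = [[(0, 0, 2, 'MALE', 'SINGULAR'), (1, 0, 2, 'MALE', 'SINGULAR')]]
print(pronominalize(sents, chains))   # ['The man likes math .', 'He really does .']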
Hope this helps somewhat!

improve gensim most_similar() return values by using wordnet hypernyms

import gensim.downloader as api
glove = api.load('glove-wiki-gigaword-200')
I first ran this code to download the pre-trained model.
glove.most_similar(positive=['sushi', 'uae'], negative=['japan'])
would then result in:
[('nahyan', 0.5181387066841125),
('caviar', 0.4778318405151367),
('paella', 0.4497394263744354),
('nahayan', 0.44313961267471313),
('zayed', 0.4321245849132538),
('omani', 0.4285220503807068),
('seafood', 0.4279175102710724),
('saif', 0.426000714302063),
('dirham', 0.4214130640029907),
('sashimi', 0.4165934920310974)]
In this example, the method failed to capture the 'type' or 'category' of the query: 'zayed' and 'nahyan' are not of 'type' food; rather, they are person names.
The approach suggested by my professor is to use wordnet hypernyms to find the 'type'.
With much research, the closest solution I found is to somehow incorporate
lowest_common_hypernyms() that will give the lowest common hypernym between two synsets and use it to filter the results of most_similar().
I am not sure if my idea makes sense and would like the community's feedback on this.
My idea is to compute the hypernyms of, e.g., 'sushi' and the hypernyms of all the similar words returned by most_similar(), and only keep the words with the 'longest' lowest common hypernym path. I expect this should return the words that best match the 'type'.
Not sure if it makes sense...
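Here is a rough sketch of the idea with NLTK's WordNet interface, reusing the glove vectors loaded above (the depth threshold of 6 is just a guess):
from nltk.corpus import wordnet as wn

def deepest_common_hypernym_depth(word_a, word_b):
    # Depth of the deepest lowest-common-hypernym over all noun senses of the two words.
    best = 0
    for syn_a in wn.synsets(word_a, pos=wn.NOUN):
        for syn_b in wn.synsets(word_b, pos=wn.NOUN):
            for lch in syn_a.lowest_common_hypernyms(syn_b):
                best = max(best, lch.max_depth())
    return best

candidates = glove.most_similar(positive=['sushi', 'uae'], negative=['japan'], topn=50)
filtered = [(word, score) for word, score in candidates
            if deepest_common_hypernym_depth('sushi', word) >= 6]  # threshold is a guess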
Does your proposed approach give adequate results when you try it?
That's the only test of whether the idea makes sense.
Word2vec is generally oblivious to all the variations of category that a lexicon like WordNet can provide – all the words that are similar to another word, in any aspect, will be neighbors. Even words that people consider opposites – like 'hot' and 'cold' – will often be fairly close to each other, in some direction in the coordinate space, as they are similar in what they describe and what contexts they're used in. (They can be drop-in replacements for each other.)
Word2vec is also fairly oblivious to polysemy in its standard formulation.
Some other things worth trying might be:
if you need only answers of a certain type, mix-in some measurement ranking candidate answers by their closeness to a word either describing that type ('food') or representing multiple examples (say an average vector for many food-names you'd know to be good answers); a minimal sketch of this follows below
choose another vector-set, or train your own. There's no universal "goodness" for word-vectors: their quality for certain tasks will vary based on their training data & parameters. Vectors trained on something broader than Wikipedia (your named vector file), or some text corpus more focused on your domain-of-interest – say, food criticism – might do better on some tasks. Changing training parameters can also change which kinds of similarity are most emphasized in the resulting vectors. For example, some observers have noticed small context-windows tend to put words that are direct drop-in replacements for each other closer-together, while larger context-windows bring words from the same domains-of-use, even if not drop-in replacements of the same 'type', closer. (It sounds like your current need might be best served with a model trained with smaller windows.)
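A minimal sketch of that first mix-in idea, reusing the glove vectors from the question (the 50/50 weighting is arbitrary and worth tuning):
candidates = glove.most_similar(positive=['sushi', 'uae'], negative=['japan'], topn=50)
# Re-rank the raw candidates by also measuring their closeness to a word naming the type we want.
reranked = sorted(candidates,
                  key=lambda pair: 0.5 * pair[1] + 0.5 * glove.similarity(pair[0], 'food'),
                  reverse=True)
print(reranked[:10])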
Nahyan is from the UAE - it seems to be part of the name of all three presidents. So you seem to be getting what you ask for. If you want more foods, add "food" to your positive query, and maybe "people" to your negative query?
Another approach is to post-filter your results to remove anything that isn't a food. Or is a person. (WordNet won't be much help, as it is nowhere near comprehensive on foods, and even less so on people; Wikidata is likely to be more useful.)
By the way, if you find the common hypernym of sushi and UAE it will probably be the top-level entity in wordnet. So that will give you no filtering.

Keeping Numbers in Doc2Vec Tokenization

I’m in the process of trying to get document similarity values for a corpus of approximately 5,000 legal briefs with Doc2Vec (I recognize that the corpus may be a little bit small, but this is a proof-of-concept project for a larger corpus of approximately 15,000 briefs I’ll have to compile later).
Basically, every other component in the creation of the model is going relatively well so far – each brief I have is in a text file within a larger folder, so I compiled them in my script using glob.glob – but I’m running into a tokenization problem. The difficulty is, as these documents are legal briefs, they contain numbers that I’d like to keep, and many of the guides I’ve been using to help me write the code use Gensim’s simple preprocessing, which I believe eliminates digits from the corpus, in tandem with the TaggedDocument feature. However, I want to do as little preprocessing on the texts as possible.
Below is the code I’ve used, and I’ve tried swapping simple_preprocess for gensim.utils.tokenize, but when I do that, I get generator objects that don’t appear workable in my final Doc2Vec model, and I can’t actually see how the corpus looks. When I’ve tried to use other tokenizers, like nltk, I don’t know how to fit that into the TaggedDocument component.
brief_corpus = []
for brief_filename in brief_filenames:
    with codecs.open(brief_filename, "r", "utf-8") as brief_file:
        brief_corpus.append(
            gensim.models.doc2vec.TaggedDocument(
                gensim.utils.simple_preprocess(
                    brief_file.read()),
                ["{}".format(brief_filename)]))  # tagging each brief with its filename
I’d appreciate any advice that anyone can give that would help me combine a tokenizer that just separated on whitespace and didn’t eliminate any numbers with the TaggedDocument feature. Thank you!
Update: I was able to create a rudimentary code for some basic tokenization (I do plan on refining it further) without having to resort to Gensim's simple_preprocessing function. However, I'm having difficulty (again!) when using the TaggedDocument feature - but this time, the tags (which I want to be the file names of each brief) don't match the tokenized document. Basically, each document has a tag, but it's not the right one.
Can anyone possibly advise where I might have gone wrong with the new code below? Thanks!
briefs = []
BriefList = [p for p in os.listdir(FILEPATH) if p.endswith('.txt')]
for brief in BriefList:
    str = open(FILEPATH + brief,'r').read()
    tokens = re.findall(r"[\w']+|[.,!?;]", str)
    tagged_data = [TaggedDocument(tokens, [brief]) for brief in BriefList]
    briefs.append(tagged_data)
You're likely going to want to write your own preprocessing/tokenization functions. But don't worry, it's not hard to outdo Gensim's simple_preprocess, even with very crude code.
The only thing Doc2Vec needs as the words of a TaggedDocument is a list of string tokens (typically words).
So first, you might be surprised how well it works to just do a default Python string .split() on your raw strings - which just breaks text on whitespace.
Sure, a bunch of the resulting tokens will then be mixes of words & adjoining punctuation, which may be nearly nonsense.
For example, the word 'lawsuit' at the end of the sentence might appear as 'lawsuit.', which then won't be recognized as the same token as 'lawsuit', and might not appear enough min_count times to even be considered, or otherwise barely rise above serving as noise.
But especially for both longer documents, and larger datasets, no one token, or even 1% of all tokens, has that much influence. This isn't exact-keyword-search, where failing to return a document with 'lawsuit.' for a query on 'lawsuit' would be a fatal failure. A bunch of words 'lost' to such cruft may have hardly any effect on the overall document, or model, performance.
As your datasets seem manageable enough to run lots of experiments, I'd suggest trying this dumbest-possible tokenization – only .split() – just as a baseline to become confident that the algorithm still mostly works as well as some more intrusive operation (like simple_preprocess()).
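For instance, a baseline version of your loop might look like this (a sketch, reusing the brief_filenames list from your original code):
import codecs
import gensim

brief_corpus = []
for brief_filename in brief_filenames:
    with codecs.open(brief_filename, "r", "utf-8") as brief_file:
        tokens = brief_file.read().split()   # whitespace-only tokenization; numbers are kept
        brief_corpus.append(
            gensim.models.doc2vec.TaggedDocument(tokens, [brief_filename]))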
Then, as you notice, or suspect, or ideally measure with some repeatable evaluation, that some things you'd want to be meaningful tokens aren't treated right, gradually add extra steps of stripping/splitting/canonicalizing characters or tokens. But as much as possible: checking that the extra complexity of code, and runtime, is actually delivering benefits.
For example, further refinements could be some mix of:
For each token created by the simple split(), strip off any non-alphanumeric leading/trailing chars. (Advantages: eliminates that punctuation-fouling-words cruft. Disadvantages: might lose useful symbols, like the leading $ of monetary amounts.)
Before splitting, replace certain single-character punctuation-marks (like say ['.', '"', ',', '(', ')', '!', '?', ';', ':']) with the same character with spaces on both sides - so that they're never connected with nearby words, and instead survive a simple .split() as standalone tokens. (Advantages: also prevents words-plus-punctuation cruft. Disadvantages: breaks up numbers like 2,345.77 or some useful abbreviations.)
At some appropriate stage in tokenization, canonicalize many varied tokens into a smaller set of tokens that may be more meaningful than each of them as rare standalone tokens. For example, $0.01 through $0.99 might all be turned into $0_XX - which then has a better chance of influencing the model, & being associated with 'tiny amount' concepts, than the original standalone tokens. Or replacing all digits with #, so that numbers of similar magnitudes share influence, without diluting the model with a token for every single number.
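As a sketch of what a couple of those refinements might look like in code (the punctuation list and the all-digits-to-# rule are just examples to adapt):
import re

PUNCT = ['.', '"', ',', '(', ')', '!', '?', ';', ':']

def tokenize(text):
    # Pad selected punctuation with spaces so it survives a plain .split() as its own token.
    for mark in PUNCT:
        text = text.replace(mark, ' ' + mark + ' ')
    tokens = text.split()
    # Canonicalize numbers: collapse every digit to '#' so similar magnitudes share tokens.
    return [re.sub(r'\d', '#', tok) for tok in tokens]

print(tokenize('The plaintiff filed 3 briefs in 2021, not 2020.'))
# ['The', 'plaintiff', 'filed', '#', 'briefs', 'in', '####', ',', 'not', '####', '.']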
The exact mix of heuristics, and order of operations, will depend on your goals. But with a corpus only in the thousands of docs (rather than hundreds-of-thousands or millions), even if you do these replacements in a fairly inefficient way (lots of individual string- or regex- replacements in serial), it'll likely be a manageable preprocessing cost.
But you can start simple & only add complexity that your domain-specific knowledge, and evaluations, justifies.

How to extract text between two headings with regex, requires complicated non-capture groups

I want to pull abstracts out of a large corpus of scientific papers using a python script. The papers are all saved as strings in a large csv. I want to do something like this: extracting text between two headers. I can write a regex to find the 'Abstract' heading. However, finding the next section heading is proving difficult. Headers vary wildly from paper to paper. They can be ALL CAPS or Just Capitalized. They can be one word or a long phrase and span two lines. They are usually followed by one or two newlines. This is what I came up with:
abst = re.findall(r'(?:ABSTRACT\s*\n+|Abstract\s*\n+)(.*?)((?:[A-Z]+|(?:\n(?:[A-Z]+|(?:[A-Z][a-z]+\s*)+)\n+)',row[0],re.DOTALL)
Here is an example of an abstract:
'...\nAbstract\nFactorial Hidden Markov Models (FHMMs) are powerful models for
sequential\ndata but they do not scale well with long sequences. We
propose a scalable inference and learning algorithm for FHMMs that
draws on ideas from the stochastic\nvariational inference, neural
network and copula literatures. Unlike existing approaches, the
proposed algorithm requires no message passing procedure among\nlatent
variables and can be distributed to a network of computers to speed up
learning. Our experiments corroborate that the proposed algorithm does
not introduce\nfurther approximation bias compared to the proven
structured mean-field algorithm,\nand achieves better performance with
long sequences and large FHMMs.\n\n1\n\nIntroduction\n\n...'
So I'm trying to find 'Abstract' and 'Introduction' and pull out the text that is between them. However it could be 'ABSTRACT' and 'INTRODUCTION', or ABSTRACT and 'A SINGLE LAYER NETWORK AND THE MEAN FIELD\nAPPROXIMATION\n'
Help?
Recognizing the next section is a bit vague - perhaps we can rely on Abstract-section ending with two newlines?
ABSTRACT\n(.*)\n\n
Or maybe we'll just assume that the next section-title will start with an uppercase letter and be followed by any number of word-characters. (That's rather vague, too, and assumes there'll be no \n\n within the Abstract.)
ABSTRACT\n(.*)\n\n\U[\w\s]*\n\n
Maybe that stimulates further fiddling on your end... Feel free to post examples where this did not match - maybe we can stepwise refine it.
N.B.: as Wiktor pointed out, I could not use the case-insensitive modifiers. So the whole regex should be used with switches for case-insensitive matching.
Update 1: the challenge here is really how to identify that a new section has begun, and not to confuse that with paragraph-breaks within the Abstract. Perhaps that can also be dealt with by changing the rather tolerant [\w\s]* to [\w\s]{1,100}, which would only recognize text in a new paragraph as the title of the abstract's successor section if it had between 2 and 101 characters (the quantifier's lower limit is 1, but the leading \U uppercase character adds one more).
ABSTRACT\n(.*)\n\n\U[\w\s]{1,100}\n\n
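In Python, the whole idea might be wired up roughly like this; note I'm substituting [A-Z] for \U (which Python's re does not support) and using a scoped (?i:...) group (Python 3.6+) so only the heading is matched case-insensitively. This is a sketch to adapt, not a finished solution:
import re

abstract_re = re.compile(
    r'(?i:abstract)\s*\n+'              # the Abstract heading, any casing
    r'(.*?)'                            # the abstract body, non-greedy
    r'\n\n(?=[A-Z][\w\s]{1,100}\n)',    # lookahead: next short, capitalized heading
    re.DOTALL)

match = abstract_re.search(row[0])      # row[0] holds one paper as a string, as in the question
if match:
    abstract = match.group(1)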

Negation handling in NLP

I'm currently working on a project, where I want to extract emotion from text. As I'm using conceptnet5 (a semantic network), I can't however simply prefix words in a sentence that contains a negation-word, as those words would simply not show up in conceptnet5's API.
Here's an example:
The movie wasn't that good.
Hence, I figured that I could use wordnet's lemma functionality to replace adjectives in sentences that contain negation-words like (not, ...).
In the previous example, the algorithm would detect wasn't and would replace it with was not.
Further, it would detect a negation-word not, and replace good with its antonym bad.
The sentence would read:
The movie was that bad.
While I see that this isn't the most elegant way, and it does probably in many cases produce the wrong result, I'd still like to handle negation that way as I frankly don't know any better approach.
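For the antonym part, the closest I got is something like this with NLTK's WordNet (just a rough sketch that takes the first adjective sense with an antonym):
from nltk.corpus import wordnet as wn

def antonym(word):
    # Return an antonym of the first adjective sense that has one, if any.
    for synset in wn.synsets(word, pos=wn.ADJ):
        for lemma in synset.lemmas():
            if lemma.antonyms():
                return lemma.antonyms()[0].name()
    return None

print(antonym('good'))   # 'bad'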
Considering my problem:
Unfortunately, I did not find any library that would allow me to replace all occurrences of appended negation-words (wasn't => was not).
I mean I could do it manually, by replacing the occurrences with a regex, but then I would be stuck with the English language.
Therefore I'd like to ask if some of you know a library, function or better method that could help me here.
Currently I'm using python nltk, still it doesn't seem that it contains such functionality, but I may be wrong.
Thanks in advance :)
Cases like wasn't can be simply parsed by tokenization (tokens = nltk.word_tokenize(sentence)): wasn't will turn into was and n't.
But negative meaning can also be formed by 'Quasi negative words, like hardly, barely, seldom' and 'Implied negatives, such as fail, prevent, reluctant, deny, absent', look into this paper. Even more detailed analysis can be found in Christopher Potts' On the negativity of negation.
Considering your initial problem, sentiment analysis, most modern approaches, as far as I know, don't process negations explicitly; instead, they use supervised approaches with high-order n-grams. Those actually processing negation usually append special prefix NOT_ to all words between negation and punctuation marks.
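A quick sketch of that last approach (the negation-word list and punctuation set here are purely illustrative; NLTK also ships a similar helper, nltk.sentiment.util.mark_negation):
import nltk

NEGATIONS = {"not", "n't", "no", "never", "hardly", "barely", "seldom"}

def prefix_negated(sentence):
    # Prefix NOT_ to every token between a negation word and the next punctuation mark.
    tokens = nltk.word_tokenize(sentence)   # "wasn't" becomes "was", "n't"
    out, negated = [], False
    for tok in tokens:
        if tok in NEGATIONS:
            negated = True
            out.append(tok)
        elif tok in '.,!?;:':
            negated = False
            out.append(tok)
        else:
            out.append('NOT_' + tok if negated else tok)
    return out

print(prefix_negated("The movie wasn't that good."))
# ['The', 'movie', 'was', "n't", 'NOT_that', 'NOT_good', '.']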

Parsing Meaning from Text

I realize this is a broad topic, but I'm looking for a good primer on parsing meaning from text, ideally in Python. As an example of what I'm looking to do, if a user makes a blog post like:
"Manny Ramirez makes his return for the Dodgers today against the Houston Astros",
what's a light-weight/ easy way of getting the nouns out of a sentence? To start, I think I'd limit it to proper nouns, but I wouldn't want to be limited to just that (and I don't want to rely on a simple regex that assumes anything Title Capped is a proper noun).
To make this question even worse, what are the things I'm not asking that I should be? Do I need a corpus of existing words to get started? What lexical analysis stuff do I need to know to make this work? I did come across one other question on the topic and I'm digging through those resources now.
You need to look at the Natural Language Toolkit, which is for exactly this sort of thing.
This section of the manual looks very relevant: Categorizing and Tagging Words - here's an extract:
>>> text = nltk.word_tokenize("And now for something completely different")
>>> nltk.pos_tag(text)
[('And', 'CC'), ('now', 'RB'), ('for', 'IN'), ('something', 'NN'),
('completely', 'RB'), ('different', 'JJ')]
Here we see that and is CC, a coordinating conjunction; now and completely are RB, or adverbs; for is IN, a preposition; something is NN, a noun; and different is JJ, an adjective.
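To pull out just the nouns, as the question asks, you can filter the tagged output on the NN* tags; a quick sketch:
import nltk

text = nltk.word_tokenize("Manny Ramirez makes his return for the Dodgers "
                          "today against the Houston Astros")
nouns = [word for word, tag in nltk.pos_tag(text) if tag.startswith('NN')]
print(nouns)   # something like ['Manny', 'Ramirez', 'return', 'Dodgers', 'today', 'Houston', 'Astros']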
Use the NLTK, in particular chapter 7 on Information Extraction.
You say you want to extract meaning, and there are modules for semantic analysis, but I think IE is all you need--and honestly one of the only areas of NLP computers can handle right now.
See sections 7.5 and 7.6 on the subtopics of Named Entity Recognition (to chunk and categorize Manny Ramirez as a person, Dodgers as a sports organization, and Houston Astros as another sports organization, or whatever suits your domain) and Relationship Extraction. There is a NER chunker that you can plug in once you have the NLTK installed. From their examples, extracting a geo-political entity (GPE) and a person:
>>> sent = nltk.corpus.treebank.tagged_sents()[22]
>>> print(nltk.ne_chunk(sent))
(S
The/DT
(GPE U.S./NNP)
is/VBZ
one/CD
...
according/VBG
to/TO
(PERSON Brooke/NNP T./NNP Mossman/NNP)
...)
Note you'll still need to know tokenization and tagging, as discussed in earlier chapters, to get your text in the right format for these IE tasks.
Natural Language Processing (NLP) is the name for parsing, well, natural language. Many algorithms and heuristics exist, and it's an active field of research. Whatever algorithm you will code, it will need to be trained on a corpus. Just like a human: we learn a language by reading text written by other people (and/or by listening to sentences uttered by other people).
In practical terms, have a look at the Natural Language Toolkit. For a theoretical underpinning of whatever you are going to code, you may want to check out Foundations of Statistical Natural Language Processing by Chris Manning and Hinrich Schütze.
Here is the book I stumbled upon recently: Natural Language Processing with Python
What you want is called NP (noun phrase) chunking, or extraction.
Some links here
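If you want to try NP chunking quickly, NLTK's RegexpParser lets you sketch it with a small tag-pattern grammar (the grammar below is deliberately crude):
import nltk

sentence = "Manny Ramirez makes his return for the Dodgers today against the Houston Astros"
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))

# NP = optional determiner, any number of adjectives, then one or more nouns.
chunker = nltk.RegexpParser("NP: {<DT>?<JJ>*<NN.*>+}")
tree = chunker.parse(tagged)

for subtree in tree.subtrees(filter=lambda t: t.label() == 'NP'):
    print(' '.join(word for word, tag in subtree.leaves()))
# e.g. 'Manny Ramirez', 'return', 'the Dodgers', 'the Houston Astros', ...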
As pointed out, this is very problem domain specific stuff. The more you can narrow it down, the more effective it will be. And you're going to have to train your program on your specific domain.
This is a really really complicated topic. Generally, this sort of stuff falls under the rubric of Natural Language Processing, and tends to be tricky at best. The difficulty of this sort of stuff is precisely why there still is no completely automated system for handling customer service and the like.
Generally, the approach to this stuff REALLY depends on precisely what your problem domain is. If you're able to winnow down the problem domain, you can gain some very serious benefits; to use your example, if you're able to determine that your problem domain is baseball, then that gives you a really strong head start. Even then, it's a LOT of work to get anything particularly useful going.
For what it's worth, yes, an existing corpus of words is going to be useful. More importantly, determining the functional complexity expected of the system is going to be critical; do you need to parse simple sentences, or is there a need for parsing complex behavior? Can you constrain the inputs to a relatively simple set?
Regular expressions can help in some scenario. Here is a detailed example: What’s the Most Mentioned Scanner on CNET Forum, which used a regular expression to find all mentioned scanners in CNET forum posts.
In the post, a regular expression as such was used:
(?i)((?:\w+\s\w+\s(?:(?:(?:[0-9]+[a-z\-]|[a-z]+[0-9\-]|[0-9])[a-z0-9\-]*)|all-in-one|all in one)\s(\w+\s){0,1}(?:scanner|photo scanner|flatbed scanner|adf scanner|scanning|document scanner|printer scanner|portable scanner|handheld scanner|printer\/scanner))|(?:(?:scanner|photo scanner|flatbed scanner|adf scanner|scanning|document scanner|printer scanner|portable scanner|handheld scanner|printer\/scanner)\s(\w+\s){1,2}(?:(?:(?:[0-9]+[a-z\-]|[a-z]+[0-9\-]|[0-9])[a-z0-9\-]*)|all-in-one|all in one)))
in order to match either of the following:
two words, then model number (including all-in-one), then “scanner”
“scanner”, then one or two words, then model number (including all-in-one)
As a result, the text extracted from the post was like,
discontinued HP C9900A photo scanner
scanning his old x-rays
new Epson V700 scanner
HP ScanJet 4850 scanner
Epson Perfection 3170 scanner
This regular expression solution worked in a way.
