Detecting mistakes in words and fixing them when classifying text (NLP) - python

Hi there ✌🏼 I'm making a neural network that classifies text. First I need to prepare the text, and I ran into the problem of "mistakes in words" (misspellings). How can they be found and corrected? What ideas do you have? Thanks in advance!

You can correct spelling errors by maintaining a vocabulary and finding the closest valid word using a string metric like the Levenshtein distance. There are also some more advanced Python tools, like spaCy Hunspell. That being said, if you plan to use pre-trained word embeddings I wouldn't worry too much about text normalisation, as the embeddings will likely contain most common spelling variants. You can check how many out-of-vocabulary words you have in your data to see whether it's worth investing time in extra cleaning beyond basic tokenisation (and converting everything to lowercase).
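For example, a minimal vocabulary-lookup sketch (the vocabulary below is purely illustrative; difflib's similarity ratio is used here as a stand-in for a true Levenshtein metric such as python-Levenshtein or rapidfuzz):
import difflib

# Illustrative vocabulary; in practice, build this from your training data or a word list.
vocabulary = ["neural", "network", "classify", "text", "training"]

def correct(word, cutoff=0.8):
    # Return the closest known word, or the original word if nothing is close enough.
    match = difflib.get_close_matches(word, vocabulary, n=1, cutoff=cutoff)
    return match[0] if match else word

print(correct("clasify"))  # -> "classify"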

Related

Word Tokenization When There is No Space

I am wondering what the term is in Machine Learning, Deep Learning, or Natural Language Processing for splitting the words in a paragraph when there is no space between them.
example:
"iwanttocook"
become:
"i want to cook"
It wouldn't be easy, since there is no delimiter character to tokenize the words on.
I appreciate any help
You could achieve this using the polyglot package. There is an option for morphological analysis.
This kind of analysis is based on Morfessor models trained on the most frequent words, in order to detect morphemes ("primitive units of syntax, the smallest individually meaningful elements in the utterances of a language").
From the documentation:
from polyglot.text import Text
blob = "Wewillmeettoday."
text = Text(blob)
text.language = "en"
print(text.morphemes)
The output would be:
WordList([u'We', u'will', u'meet', u'to', u'day', u'.'])
Note that if you want to start working with polyglot, you should first read the documentation carefully, as there are a few things to consider, for example downloading the language-specific models.
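If your text is English, another lightweight option is the third-party wordninja package, which splits concatenated text using English word-frequency statistics (just a sketch, assuming it suits your data):
import wordninja  # pip install wordninja

# Splits a no-space string into likely words using word-frequency statistics.
print(wordninja.split("iwanttocook"))
# likely output: ['i', 'want', 'to', 'cook']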
I created a simple application using Scala to do this.
https://github.com/shredder47/Nonspaced-Sentence-Tokenizer

How to Grab meaning of sentence using NLP?

I am new to NLP. My requirement is to parse meaning from sentences.
Example
"Perpetually Drifting is haunting in all the best ways."
"When The Fog Rolls In is a fantastic song
From the above sentences, I need to extract the following phrases:
"haunting in all the best ways."
"fantastic song"
Is it possible to achieve this in spacy?
It is not possible to extract summarized sentences directly with spaCy, but the following methods might work for you:
The simplest one is to extract the noun phrases or verb phrases. Most of the time that should give you the text you want (phrase structure grammar); see the sketch after this list.
You can use dependency parsing and extract the dependencies of the head word.
Dependency grammar
You can train a sequence model where the input is the full sentence and the output is your summarized sentence.
Sequence models for text summarization
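For the first suggestion, here is a minimal noun-phrase sketch with spaCy (assuming the small English model en_core_web_sm is installed; the exact chunks depend on the model):
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Perpetually Drifting is haunting in all the best ways.")

# Noun chunks often carry the descriptive content you are after.
print([chunk.text for chunk in doc.noun_chunks])
# e.g. ['Perpetually Drifting', 'all the best ways']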
Extracting the meaning of a sentence is quite an arbitrary task. What do you mean by "meaning"? Using spaCy you can extract the dependencies between the words (which encode much of the sentence's meaning), find the POS tags to check how words are used in the sentence, and also find places, organizations and people using the NER tagger. However, the "meaning" of a sentence is too general a notion, even for humans.
Maybe you are searching for a specific meaning? If that's the case, you have to train your own classifier. This will get you started.
If your task is summarization of a couple of sentences, consider also using gensim. You can have a look here.
Hope it helps :)

word2vec gensim multiple languages

This problem is going completely over my head. I am training a Word2Vec model using gensim. I have provided data in multiple languages, i.e. English and Hindi. When I try to find the words closest to 'man', this is what I get:
model.wv.most_similar(positive = ['man'])
Out[14]:
[('woman', 0.7380284070968628),
('lady', 0.6933152675628662),
('monk', 0.6662989258766174),
('guy', 0.6513140201568604),
('soldier', 0.6491742134094238),
('priest', 0.6440571546554565),
('farmer', 0.6366012692451477),
('sailor', 0.6297377943992615),
('knight', 0.6290514469146729),
('person', 0.6288090944290161)]
--------------------------------------------
Problem is, these are all English words. Then I tried to find the similarity between Hindi and English words with the same meaning:
model.similarity('man', 'आदमी')
__main__:1: DeprecationWarning: Call to deprecated `similarity` (Method will
be removed in 4.0.0, use self.wv.similarity() instead).
Out[13]: 0.078265618974427215
This similarity should have been higher than all of the ones above. The Hindi corpus I have was made by translating the English one, hence the words appear in similar contexts and should therefore be close.
This is what I am doing here:
import multiprocessing
from gensim.models import Word2Vec

# Combining all the words together.
all_reviews = HindiWordsList + EnglishWordsList
# Training the Word2Vec model.
cpu_count = multiprocessing.cpu_count()
model = Word2Vec(size=300, window=5, min_count=1, alpha=0.025, workers=cpu_count, max_vocab_size=None, negative=10)
model.build_vocab(all_reviews)
model.train(all_reviews, total_examples=model.corpus_count, epochs=model.iter)
model.save("word2vec_combined_50.bin")
I have been dealing with a very similar problem and came across a reasonably robust solution. This paper shows that a linear relationship can be defined between two Word2Vec models that have been trained on different languages. This means you can derive a translation matrix to convert word embeddings from one language model into the vector space of another language model. What does all of that mean? It means I can take a word from one language, and find words in the other language that have a similar meaning.
I've written a small Python package that implements this for you: transvec. Here's an example where I use pre-trained models to search for Russian words and find English words with a similar meaning:
import gensim.downloader
from transvec.transformers import TranslationWordVectorizer
# Pretrained models in two different languages.
ru_model = gensim.downloader.load("word2vec-ruscorpora-300")
en_model = gensim.downloader.load("glove-wiki-gigaword-300")
# Training data: pairs of English words with their Russian translations.
# The more you can provide, the better.
train = [
    ("king", "царь_NOUN"), ("tsar", "царь_NOUN"),
    ("man", "мужчина_NOUN"), ("woman", "женщина_NOUN")
]
bilingual_model = TranslationWordVectorizer(en_model, ru_model).fit(train)
# Find words with similar meanings across both languages.
bilingual_model.similar_by_word("царица_NOUN", 1) # "queen"
# [('king', 0.7763221263885498)]
Don't worry about the weird POS tags on the Russian words - this is just a quirk of the particular pre-trained model I used.
So basically, if you can provide a list of words with their translations, then you can train a TranslationWordVectorizer to translate any word that exists in your source language corpus into the target language. When I used this for real, I produced some training data by extracting all the individual Russian words from my data, running them through Google Translate and then keeping everything that translated to a single word in English. The results were pretty good (sorry I don't have any more detail for the benchmark yet; it's still a work in progress!).
First of all, you should really use self.wv.similarity().
I'm assuming there are almost no words that appear in both your Hindi corpus and your English corpus, since the Hindi corpus is in Devanagari and the English one is in, well, English. Simply concatenating the two corpora to train one model does not make sense. Corresponding words in the two languages co-occur across the two versions of a document, but never within the same context window, so Word2Vec has nothing from which to figure out that they are similar.
E.g. until your model knows, from the word embeddings, that
Man : Aadmi :: Woman : Aurat,
it can never work out the
Raja : King :: Rani : Queen
relation. And for that, you need some anchor between the two corpora.
Here are a few suggestions that you can try out:
Make an independent Hindi corpus/model
Maintain a lookup table of a few English -> Hindi word pairs that you will have to create manually.
Randomly replace input document words with their counterparts from the corresponding document while training (see the sketch below).
These might be enough to give you an idea. You can also look into seq2seq if you only want to do translations. You can also read the Word2Vec theory in detail to understand what it does.
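As a rough illustration of the third suggestion, here is a sketch that randomly swaps words for their counterparts using a small, manually built anchor dictionary (the word pairs below are illustrative only, and all_reviews is the combined token-list corpus from the question):
import random

# Hypothetical anchor dictionary of English <-> Hindi pairs; made bidirectional for convenience.
anchor = {"man": "आदमी", "woman": "औरत", "king": "राजा", "queen": "रानी"}
anchor.update({hindi: english for english, hindi in list(anchor.items())})

def mix_tokens(sentence, p=0.3):
    # Randomly replace anchored words so both languages share some contexts.
    return [anchor[w] if w in anchor and random.random() < p else w
            for w in sentence]

mixed_corpus = [mix_tokens(sentence) for sentence in all_reviews]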
After reading the comments, I think the problem lies in the very different grammatical construction of English and Hindi sentences. I have worked with Hindi NLP models and it is much more difficult to get results as good as for English (as you mention).
In Hindi there is essentially no fixed word order; grammatical roles are expressed through declension instead. Moreover, translating a sentence between languages that are not even descendants of the same root language is somewhat arbitrary, and you cannot assume that the contexts of both sentences are similar.

Text anonymization using supervised machine learning

I have a lot of text documents containing company and personal names. I have aligned text documents where the above have been manually anonymized (names replaced with a single unique character).
I want to use this corpus to train a system to perform automatic anonymization of unseen documents, that is, simply replacing words with a character. The primary problem is recognizing which words should be anonymized; the secondary problem is replacing those words with a unique character. I can do the secondary problem.
Python is preferred and I'm thinking sklearn must contain the necessary tools.
How would I go about this? There are many articles on stackoverflow on supervised learning, but I'm not sure they match my situation. I suspect this is a fairly simple problem to solve, and I'm not necessarily looking for a complete solution, but some starting pointers would be nice. Also any insight on which algorithms would work better is much appreciated.
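Not a complete solution, but one common way to frame the primary problem is token-level classification, which sklearn can handle. A minimal sketch (the features, tokens and labels below are purely illustrative; a sequence model such as a CRF or spaCy's NER would likely do better):
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def token_features(tokens, i):
    # Simple contextual features for the i-th token.
    return {
        "word.lower": tokens[i].lower(),
        "word.istitle": tokens[i].istitle(),
        "prev.lower": tokens[i - 1].lower() if i > 0 else "<s>",
        "next.lower": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

# Derive labels from your aligned documents: 1 = token was anonymized, 0 = kept.
tokens = ["John", "works", "at", "Acme", "Corp"]
labels = [1, 0, 0, 1, 1]
X = [token_features(tokens, i) for i in range(len(tokens))]

model = make_pipeline(DictVectorizer(), LogisticRegression())
model.fit(X, labels)
print(model.predict([token_features(["Alice", "visited", "London"], 0)]))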

How to tokenize a Malayalam word?

ഇതുഒരുസ്ടലംമാണ്
itu oru stalam anu
This is a Unicode string meaning "this is a place".
import nltk
nltk.wordpunct_tokenize('ഇതുഒരുസ്ഥാലമാണ് '.decode('utf8'))
is not working for me.
nltk.word_tokenize('ഇതുഒരുസ്ഥാലമാണ് '.decode('utf8'))
is also not working
other examples
"കണ്ടില്ല " = കണ്ടു +ഇല്ല,
"വലിയൊരു" = വലിയ + ഒരു
Correct split:
ഇത് ഒരു സ്ഥാലം ആണ്
output:
[u'\u0d07\u0d24\u0d4d\u0d12\u0d30\u0d41\u0d38\u0d4d\u0d25\u0d32\u0d02\u0d06\u0d23\u0d4d']
I just need to split the words as shown in the other examples (the other-examples section is for testing). The problem is not with Unicode, it is with the morphology of the language. For this purpose you need to use a morphological analyzer.
Have a look at this paper.
http://link.springer.com/chapter/10.1007%2F978-3-642-27872-3_38
After a crash course on the language from Wikipedia (http://en.wikipedia.org/wiki/Malayalam), there are some issues with your question and with the tools you've requested for your desired output.
Conflated Task
Firstly, the OP conflates the tasks of morphological analysis, segmentation and tokenization. There is often only a fine distinction between them, especially for agglutinative languages such as Turkish/Malayalam (see http://en.wikipedia.org/wiki/Agglutinative_language).
Agglutinative NLP and best practices
Next, I don't think a tokenizer is appropriate for Malayalam, an agglutinative language. For Turkish, one of the most studied agglutinative languages in NLP, researchers have adopted a different strategy when it comes to "tokenization": they found that a full-blown morphological analyzer is necessary (see http://www.denizyuret.com/2006/11/turkish-resources.html, www.andrew.cmu.edu/user/ko/downloads/lrec.pdf).
Word Boundaries
Tokenization is defined as the identification of linguistically meaningful units (LMU) from the surface text (see Why do I need a tokenizer for each language?), and different languages require different tokenizers to identify their word boundaries. People have approached the problem of finding word boundaries in different ways, but in summary the NLP community has subscribed to the following:
Agglutinative languages require a full-blown morphological analyzer trained with some sort of language model. There is often only a single tier when identifying what counts as a token, and that is at the morphemic level, hence the NLP community has developed different language models for their respective morphological analysis tools.
Polysynthetic languages with specified word boundaries have the choice of two-tier tokenization, where the system can first identify isolated words and then, if necessary, perform morphological analysis to obtain finer-grained tokens. A coarse-grained tokenizer can split a string using certain delimiters (e.g. NLTK's word_tokenize or wordpunct_tokenize, which use whitespace/punctuation for English). Then, for finer-grained analysis at the morphemic level, people usually use finite state machines to split words up into morphemes (e.g. in German http://canoo.net/services/WordformationRules/Derivation/To-N/N-To-N/Pre+Suffig.html).
Polysynthetic languages without specified word boundaries often require a segmenter first, to add whitespace between tokens, because the orthography does not differentiate word boundaries (e.g. in Chinese https://code.google.com/p/mini-segmenter/). Then, from the delimited tokens, morphemic analysis can be done if necessary to produce finer-grained tokens (e.g. http://mecab.googlecode.com/svn/trunk/mecab/doc/index.html). Often these finer-grained tokens are tied to POS tags.
In brief, the answer to the OP's request/question is that the OP has used the wrong tools for the task:
To output tokens for Malayalam, a morphological analyzer is necessary; a simple coarse-grained tokenizer in NLTK will not work.
NLTK's tokenizers are meant to tokenize polysynthetic languages with specified word boundaries (e.g. English/European languages), so it is not that the tokenizer is not working for Malayalam, it just wasn't meant to tokenize agglutinative languages.
To achieve the output, a full-blown morphological analyzer needs to be built for the language, and someone has built one (aclweb.org/anthology//O/O12/O12-1028.pdf); the OP should contact the author of the paper if he/she is interested in the tool.
Short of building a morphological analyzer with a language model, I encourage the OP to first spot common delimiters that split words into morphemes in the language and then perform a simple re.split() to achieve a baseline tokenizer, as sketched below.
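For example, a baseline along those lines (the delimiter list below is purely illustrative, borrowing the ു character suggested in another answer, and is not real Malayalam morphology):
import re

# Illustrative only; collect real morpheme delimiters from your corpus.
delimiters = [u'\u0d41']
pattern = u"|".join(re.escape(d) for d in delimiters)

text = u'ഇതുഒരുസ്ഥാലമാണ്'
tokens = [t for t in re.split(pattern, text) if t]
print(tokens)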
A tokenizer is indeed the right tool; certainly this is what the NLTK calls them. A morphological analyzer (as in the article you link to) is for breaking words into smaller parts (morphemes). But in your example code, you tried to use a tokenizer that is appropriate for English: It recognizes space-delimited words and punctuation tokens. Since Malayalam evidently doesn't indicate word boundaries with spaces, or with anything else, you need a different approach.
So the NLTK doesn't provide anything that detects word boundaries for Malayalam. It might provide the tools to build a decent one fairly easily, though.
The obvious approach would be to try dictionary lookup: Try to break up your input into strings that are in the dictionary. But it would be harder than it sounds: You'd need a very large dictionary, you'd still have to deal with unknown words somehow, and since Malayalam has non-trivial morphology, you may need a morphological analyzer to match inflected words to the dictionary. Assuming you can store or generate every word form with your dictionary, you can use an algorithm like the one described here (and already mentioned by @amp) to divide your input into a sequence of words.
A better alternative would be to use a statistical algorithm that can guess where the word boundaries are. I don't know of such a module in the NLTK, but there has been quite a bit of work on this for Chinese. If it's worth your trouble, you can find a suitable algorithm and train it to work on Malayalam.
In short: The NLTK tokenizers only work for the typographical style of English. You can train a suitable tool to work on Malayalam, but the NLTK does not include such a tool as far as I know.
PS. The NLTK does come with several statistical tokenization tools; the PunktSentenceTokenizer can be trained to recognize sentence boundaries using an unsupervised learning algorithm (meaning you don't need to mark the boundaries in the training data). Unfortunately, the algorithm specifically targets the issue of abbreviations, and so it cannot be adapted to word boundary detection.
Maybe the Viterbi algorithm could help?
This answer to another SO question (and the other high-vote answer) could help: https://stackoverflow.com/a/481773/583834
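To make that concrete, here is a toy Viterbi-style segmenter sketch (the unigram counts below are illustrative English words; in practice you would estimate counts from a large Malayalam corpus, likely at the morpheme level):
import math

# Illustrative unigram counts; replace with counts from a real corpus.
counts = {"we": 50, "will": 40, "meet": 10, "today": 20, "to": 80, "day": 15}
total = float(sum(counts.values()))

def segment(text, max_len=10):
    # best[i] = (log-probability, segmentation) of the best split of text[:i]
    best = [(0.0, [])] + [(float("-inf"), []) for _ in text]
    for i in range(1, len(text) + 1):
        for j in range(max(0, i - max_len), i):
            word = text[j:i]
            if word in counts:
                score = best[j][0] + math.log(counts[word] / total)
                if score > best[i][0]:
                    best[i] = (score, best[j][1] + [word])
    return best[-1][1]

print(segment("wewillmeettoday"))  # ['we', 'will', 'meet', 'today']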
It seems like your "space" is the Unicode character u'\u0d41', so you can split on it with str.split().
import sys
reload(sys)
sys.setdefaultencoding("utf-8")
x = 'ഇതുഒരുസ്ഥാലമാണ്'.decode('utf8')
y = x.split(u'\u0d41')
print " ".join(y)
[out]:
ഇത ഒര സ്ഥാലമാണ്
I tried the following:
# encoding=utf-8
import nltk
cheese = nltk.wordpunct_tokenize('ഇതുഒരുസ്ഥാലമാണ്'.decode('utf8'))
for var in cheese:
    print var.encode('utf8'),
And as output, I got the following:
ഇത ു ഒര ു സ ് ഥ ാ ലമ ാ ണ ്
Is this anywhere close to the output that you want? I'm a little in the dark here, since it's difficult to get this right without understanding the language.
Morphological analysis example
from mlmorph import Analyser
analyser = Analyser()
analyser.analyse("കേരളത്തിന്റെ")
Gives
[('കേരളം<np><genitive>', 179)]
url: mlmorph
If you are using Anaconda:
Install git in the Anaconda prompt:
conda install -c anaconda git
Then clone the repository using the following command:
git clone https://gitlab.com/smc/mlmorph.git
