ഇതുഒരുസ്ടലംമാണ്
itu oru stalam anu
This is a Unicode string meaning "this is a place".
import nltk
nltk.wordpunct_tokenize('ഇതുഒരുസ്ഥാലമാണ് '.decode('utf8'))
is not working for me.
nltk.word_tokenize('ഇതുഒരുസ്ഥാലമാണ് '.decode('utf8'))
is also not working.
Other examples:
"കണ്ടില്ല " = കണ്ടു +ഇല്ല,
"വലിയൊരു" = വലിയ + ഒരു
Correct split:
ഇത് ഒരു സ്ഥാലം ആണ്
Output I get:
[u'\u0d07\u0d24\u0d4d\u0d12\u0d30\u0d41\u0d38\u0d4d\u0d25\u0d32\u0d02\u0d06\u0d23\u0d4d']
I just need to split the words as shown in the other examples; the "other examples" section is only for testing.

The problem is not with Unicode. It is with the morphology of the language. For this purpose you need to use a morphological analyzer.
Have a look at this paper.
http://link.springer.com/chapter/10.1007%2F978-3-642-27872-3_38
After a crash course on the language from Wikipedia (http://en.wikipedia.org/wiki/Malayalam), I see some issues in your question and in the tools you've requested for your desired output.
Conflated Task
Firstly, the OP conflated the tasks of morphological analysis, segmentation and tokenization. There is often only a fine distinction between them, especially for agglutinative languages such as Turkish or Malayalam (see http://en.wikipedia.org/wiki/Agglutinative_language).
Agglutinative NLP and best practices
Next, I don't think a tokenizer is appropriate for Malayalam, an agglutinative language. For Turkish, one of the most studied agglutinative languages in NLP, researchers adopted a different strategy when it comes to "tokenization": they found that a full-blown morphological analyzer is necessary (see http://www.denizyuret.com/2006/11/turkish-resources.html, www.andrew.cmu.edu/user/ko/downloads/lrec.pdf).
Word Boundaries
Tokenization is defined as the identification of linguistically meaningful units (LMU) from the surface text (see Why do I need a tokenizer for each language?), and different languages require different tokenizers to identify their word boundaries. Different people have approached the problem of finding word boundaries differently, but in summary the NLP community has subscribed to the following:
Agglutinative languages require a full-blown morphological analyzer trained with some sort of language model. There is often only a single tier when identifying what a token is, and that is at the morphemic level, hence the NLP community has developed different language models for their respective morphological analysis tools.
Polysynthetic languages with specified word boundaries have the choice of a two-tier tokenization, where the system can first identify isolated words and then, if necessary, perform morphological analysis to obtain finer-grained tokens. A coarse-grained tokenizer can split a string using certain delimiters (e.g. NLTK's word_tokenize or wordpunct_tokenize, which use whitespace/punctuation for English). Then, for finer-grained analysis at the morphemic level, people usually use finite-state machines to split words up into morphemes (e.g. for German, http://canoo.net/services/WordformationRules/Derivation/To-N/N-To-N/Pre+Suffig.html).
Polysynthetic languages without specified word boundaries often require a segmenter first, to add whitespace between the tokens, because the orthography does not differentiate word boundaries (e.g. for Chinese, https://code.google.com/p/mini-segmenter/). Then, from the delimited tokens, morphemic analysis can be done if necessary to produce finer-grained tokens (e.g. http://mecab.googlecode.com/svn/trunk/mecab/doc/index.html). Often these finer-grained tokens are tied to POS tags.
The answer, in brief, to the OP's request/question is that the OP has used the wrong tools for the task:
To output tokens for Malayalam, a morphological analyzer is necessary; a simple coarse-grained tokenizer in NLTK will not work.
NLTK's tokenizers are meant to tokenize polysynthetic languages with specified word boundaries (e.g. English and other European languages), so it is not that the tokenizer is not working for Malayalam; it just wasn't meant to tokenize agglutinative languages.
To achieve the output, a full-blown morphological analyzer needs to be built for the language, and someone has built one (aclweb.org/anthology//O/O12/O12-1028.pdf); the OP should contact the author of the paper if he/she is interested in the tool.
Short of building a morphological analyzer with a language model, I encourage the OP to first spot common delimiters that split words into morphemes in the language, and then perform a simple re.split() to achieve a baseline tokenizer.
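For what it's worth, a minimal sketch of that baseline idea. The delimiter characters below are purely hypothetical placeholders, not verified Malayalam morpheme boundaries, and should be replaced with whatever the OP actually spots in the data:

# -*- coding: utf-8 -*-
import re

# HYPOTHETICAL delimiters: replace with the characters you identify in your corpus.
CANDIDATE_DELIMITERS = u'[\u0d41\u0d4d]'

def baseline_tokenize(text):
    # Split on the candidate delimiter characters and drop empty pieces.
    return [piece for piece in re.split(CANDIDATE_DELIMITERS, text) if piece]

print(baseline_tokenize(u'\u0d07\u0d24\u0d41\u0d12\u0d30\u0d41\u0d38\u0d4d\u0d25\u0d32\u0d02\u0d06\u0d23\u0d4d'))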
A tokenizer is indeed the right tool; certainly this is what the NLTK calls them. A morphological analyzer (as in the article you link to) is for breaking words into smaller parts (morphemes). But in your example code, you tried to use a tokenizer that is appropriate for English: It recognizes space-delimited words and punctuation tokens. Since Malayalam evidently doesn't indicate word boundaries with spaces, or with anything else, you need a different approach.
So the NLTK doesn't provide anything that detects word boundaries for Malayalam. It might provide the tools to build a decent one fairly easily, though.
The obvious approach would be to try dictionary lookup: Try to break up your input into strings that are in the dictionary. But it would be harder than it sounds: You'd need a very large dictionary, you'd still have to deal with unknown words somehow, and since Malayalam has non-trivial morphology, you may need a morphological analyzer to match inflected words to the dictionary. Assuming you can store or generate every word form with your dictionary, you can use an algorithm like the one described here (and already mentioned by #amp) to divide your input into a sequence of words.
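To make that concrete, here is a minimal sketch of such a word-division algorithm, with a toy word list standing in for the very large dictionary you would actually need (it prefers the segmentation with the fewest words):

# Toy stand-in for a real dictionary; a practical system needs far more entries,
# ideally with frequencies, and coverage of inflected forms.
WORDS = {u'this', u'is', u'a', u'place'}

def segment(text, words=WORDS):
    # best[i] holds the best segmentation (fewest words) of text[:i], or None.
    best = [None] * (len(text) + 1)
    best[0] = []
    for i in range(1, len(text) + 1):
        for j in range(i):
            if best[j] is not None and text[j:i] in words:
                candidate = best[j] + [text[j:i]]
                if best[i] is None or len(candidate) < len(best[i]):
                    best[i] = candidate
    return best[-1]

print(segment(u'thisisaplace'))  # ['this', 'is', 'a', 'place']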
A better alternative would be to use a statistical algorithm that can guess where the word boundaries are. I don't know of such a module in the NLTK, but there has been quite a bit of work on this for Chinese. If it's worth your trouble, you can find a suitable algorithm and train it to work on Malayalam.
In short: The NLTK tokenizers only work for the typographical style of English. You can train a suitable tool to work on Malayalam, but the NLTK does not include such a tool as far as I know.
PS. The NLTK does come with several statistical tokenization tools; the PunktSentenceTokenizer can be trained to recognize sentence boundaries using an unsupervised learning algorithm (meaning you don't need to mark the boundaries in the training data). Unfortunately, the algorithm specifically targets the issue of abbreviations, so it cannot be adapted to word boundary detection.
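For reference, training it on raw text looks roughly like this (my_corpus.txt is just a placeholder for your own unannotated text):

from nltk.tokenize.punkt import PunktSentenceTokenizer

raw_text = open('my_corpus.txt').read()      # placeholder: any large plain-text corpus
sent_tok = PunktSentenceTokenizer(raw_text)  # unsupervised training happens in the constructor
print(sent_tok.tokenize('Dr. Smith arrived. He was late.'))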
Maybe the Viterbi algorithm could help?
This answer to another SO question (and the other high-vote answer) could help: https://stackoverflow.com/a/481773/583834
It seems like your "space" is the Unicode character u'\u0d41', so you can simply split on it with split().
# Python 2
import sys
reload(sys)
sys.setdefaultencoding("utf-8")

x = 'ഇതുഒരുസ്ഥാലമാണ്'.decode('utf8')
y = x.split(u'\u0d41')  # split on the character U+0D41
print " ".join(y)
[out]:
ഇത ഒര സ്ഥാലമാണ്
I tried the following:
# encoding=utf-8
import nltk
cheese = nltk.wordpunct_tokenize('ഇതുഒരുസ്ഥാലമാണ്'.decode('utf8'))
for var in cheese:
    print var.encode('utf8'),
And as output, I got the following:
ഇത ു ഒര ു സ ് ഥ ാ ലമ ാ ണ ്
Is this anywhere close to the output you want? I'm a little in the dark here, since it's difficult to get this right without understanding the language.
Morphological analysis example
from mlmorph import Analyser
analyser = Analyser()
analyser.analyse("കേരളത്തിന്റെ")
Gives
[('കേരളം<np><genitive>', 179)]
URL: https://gitlab.com/smc/mlmorph
If you are using Anaconda:
Install git in the Anaconda prompt:
conda install -c anaconda git
Then clone the repository using the following command:
git clone https://gitlab.com/smc/mlmorph.git
Related
I’m in the process of trying to get document similarity values for a corpus of approximately 5,000 legal briefs with Doc2Vec (I recognize that the corpus may be a little bit small, but this is a proof-of-concept project for a larger corpus of approximately 15,000 briefs I’ll have to compile later).
Basically, every other component in the creation of the model is going relatively well so far – each brief I have is in a text file within a larger folder, so I compiled them in my script using glob.glob – but I’m running into a tokenization problem. The difficulty is, as these documents are legal briefs, they contain numbers that I’d like to keep, and many of the guides I’ve been using to help me write the code use Gensim’s simple preprocessing, which I believe eliminates digits from the corpus, in tandem with the TaggedDocument feature. However, I want to do as little preprocessing on the texts as possible.
Below is the code I’ve used. I’ve tried swapping simple_preprocess for gensim.utils.tokenize, but when I do that, I get generator objects that don’t appear workable in my final Doc2Vec model, and I can’t actually see how the corpus looks. When I’ve tried to use other tokenizers, like nltk’s, I don’t know how to fit them into the TaggedDocument component.
brief_corpus = []
for brief_filename in brief_filenames:
    with codecs.open(brief_filename, "r", "utf-8") as brief_file:
        brief_corpus.append(
            gensim.models.doc2vec.TaggedDocument(
                gensim.utils.simple_preprocess(
                    brief_file.read()),
                ["{}".format(brief_filename)]))  # tagging each brief with its filename
I’d appreciate any advice that would help me combine a tokenizer that just separates on whitespace and doesn’t eliminate any numbers with the TaggedDocument feature. Thank you!
Update: I was able to write some rudimentary code for basic tokenization (I do plan on refining it further) without having to resort to Gensim's simple_preprocess function. However, I'm having difficulty (again!) when using the TaggedDocument feature – but this time, the tags (which I want to be the file names of each brief) don't match the tokenized documents. Basically, each document has a tag, but it's not the right one.
Can anyone possibly advise where I might have gone wrong with the new code below? Thanks!
briefs = []
BriefList = [p for p in os.listdir(FILEPATH) if p.endswith('.txt')]
for brief in BriefList:
    str = open(FILEPATH + brief, 'r').read()
    tokens = re.findall(r"[\w']+|[.,!?;]", str)
    tagged_data = [TaggedDocument(tokens, [brief]) for brief in BriefList]
    briefs.append(tagged_data)
You're likely going to want to write your own preprocessing/tokenization functions. But don't worry, it's not hard to outdo Gensim's simple_preprocess, even with very crude code.
The only thing Doc2Vec needs as the words of a TaggedDocument is a list of string tokens (typically words).
So first, you might be surprised how well it works to just do a default Python string .split() on your raw strings - which just breaks text on whitespace.
Sure, a bunch of the resulting tokens will then be mixes of words & adjoining punctuation, which may be nearly nonsense.
For example, the word 'lawsuit' at the end of the sentence might appear as 'lawsuit.', which then won't be recognized as the same token as 'lawsuit', and might not appear enough min_count times to even be considered, or otherwise barely rise above serving as noise.
But especially with longer documents and larger datasets, no one token, or even 1% of all tokens, has that much influence. This isn't exact-keyword-search, where failing to return a document with 'lawsuit.' for a query on 'lawsuit' would be a fatal failure. A bunch of words 'lost' to such cruft may have hardly any effect on overall document, or model, performance.
As your datasets seem manageable enough to run lots of experiments, I'd suggest trying this dumbest-possible tokenization – only .split() – just as a baseline to become confident that the algorithm still mostly works as well as some more intrusive operation (like simple_preprocess()).
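For instance, a minimal variant of the loop from your question, with simple_preprocess swapped for a plain whitespace split (brief_filenames is assumed to be the same list of paths as in your question):

import codecs
import gensim

brief_corpus = []
for brief_filename in brief_filenames:
    with codecs.open(brief_filename, "r", "utf-8") as brief_file:
        # Plain whitespace tokenization: numbers, '$', case and punctuation all survive.
        tokens = brief_file.read().split()
        brief_corpus.append(
            gensim.models.doc2vec.TaggedDocument(tokens, [brief_filename]))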
Then, as you notice, suspect, or ideally measure with some repeatable evaluation, that some things you'd want to be meaningful tokens aren't treated right, gradually add extra steps of stripping/splitting/canonicalizing characters or tokens. But as much as possible, check that the extra complexity of code, and runtime, is actually delivering benefits.
For example, further refinements could be some mix of the following (a rough sketch combining the first two appears after this list):
For each token created by the simple split(), strip off any non-alphanumeric leading/trailing chars. (Advantages: eliminates that punctuation-fouling-words cruft. Disadvantages: might lose useful symbols, like the leading $ of monetary amounts.)
Before splitting, replace certain single-character punctuation-marks (like say ['.', '"', ',', '(', ')', '!', '?', ';', ':']) with the same character with spaces on both sides - so that they're never connected with nearby words, and instead survive a simple .split() as standalone tokens. (Advantages: also prevents words-plus-punctuation cruft. Disadvantages: breaks up numbers like 2,345.77 or some useful abbreviations.)
At some appropriate stage in tokenization, canonicalize many varied tokens into a smaller set of tokens that may be more meaningful than each of them as rare standalone tokens. For example, $0.01 through $0.99 might all be turned into $0_XX, which then has a better chance of influencing the model, and of being associated with 'tiny amount' concepts, than the original standalone tokens. Or replace all digits with #, so that numbers of similar magnitudes share influence, without diluting the model with a token for every single number.
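A rough sketch combining the first two ideas (the padded punctuation set and the stripping rule are illustrative choices, not recommendations):

import re

PAD_MARKS = ['(', ')', '!', '?', ';', ':', '"']   # marks to isolate as standalone tokens

def tokenize(text):
    # Pad selected punctuation with spaces so each mark survives .split() on its own.
    for mark in PAD_MARKS:
        text = text.replace(mark, ' ' + mark + ' ')
    tokens = text.split()
    cleaned = []
    for tok in tokens:
        # Strip non-alphanumeric leading/trailing characters, but keep a leading '$'
        # for monetary amounts; pure-punctuation tokens are kept unchanged.
        core = re.sub(r"^[^\w$]+|[^\w]+$", "", tok)
        cleaned.append(core if core else tok)
    return cleaned

print(tokenize('The court awarded $2,345.77 (see Exhibit "A").'))
# ['The', 'court', 'awarded', '$2,345.77', '(', 'see', 'Exhibit', '"', 'A', '"', ')', '.']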
The exact mix of heuristics, and order of operations, will depend on your goals. But with a corpus only in the thousands of docs (rather than hundreds-of-thousands or millions), even if you do these replacements in a fairly inefficient way (lots of individual string- or regex- replacements in serial), it'll likely be a manageable preprocessing cost.
But you can start simple & only add complexity that your domain-specific knowledge, and evaluations, justifies.
My goal is very simple: I have a set of strings or a sentence and I want to find the most similar one within a text corpus.
For example I have the following text corpus: "The front of the library is adorned with the Word of Life mural designed by artist Millard Sheets."
And I'd like to find the substring of the original corpus which is most similar to: "the library facade is painted"
So what I should get as output is: "The front of the library is adorned"
The only thing I came up with is to split the original sentence into substrings of variable lengths (e.g. substrings of 3, 4, or 5 words) and then use something like string.similarity(substring) from the spaCy Python module to assess the similarity of my target text with all the substrings, keeping the one with the highest value.
It seems a pretty inefficient method. Is there anything better I can do?
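For concreteness, this is roughly what I mean (a minimal sketch; it assumes a spaCy model with word vectors, such as en_core_web_md, is installed):

import spacy

nlp = spacy.load("en_core_web_md")   # a model that ships with word vectors

corpus = nlp("The front of the library is adorned with the Word of Life mural "
             "designed by artist Millard Sheets.")
query = nlp("the library facade is painted")

best_span, best_score = None, -1.0
for size in range(3, 8):                          # windows of 3-7 tokens
    for start in range(len(corpus) - size + 1):
        span = corpus[start:start + size]
        score = query.similarity(span)            # average-vector cosine similarity
        if score > best_score:
            best_span, best_score = span, score

print(best_span.text, best_score)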
It probably works to some degree, but I wouldn't expect the spacy similarity method (averaging word vectors) to work particularly well.
The task you're working on is related to paraphrase detection/identification and semantic textual similarity and there is a lot of existing work. It is frequently used for things like plagiarism detection and the evaluation of machine translation systems, so you might find more approaches by looking in those areas, too.
If you want something that works fairly quickly out of the box for English, one suggestion is TERp, which was developed for MT evaluation but has been shown to work well for paraphrase detection:
https://github.com/snover/terp
Most methods are set up to compare two sentences, so this doesn't address your potential partial sentence matches. Maybe it would make sense to find the most similar sentence and then look for substrings within that sentence that match better than the sentence as a whole?
I am wondering about the term, in Machine Learning, Deep Learning, or Natural Language Processing, for splitting the words in a paragraph when there is no space between them.
example:
"iwanttocook"
become:
"i want to cook"
It doesn't seem easy, since there is no delimiter character on which to tokenize the words.
I appreciate any help
You could achieve this using the polyglot package. There is an option for morphological analysis.
This kind of analysis is based on Morfessor models trained on the most frequent words to discover morphemes ("primitive units of syntax, the smallest individually meaningful elements in the utterances of a language").
From the documentation:
from polyglot.text import Text
blob = "Wewillmeettoday."
text = Text(blob)
text.language = "en"
print(text.morphemes)
The output would be:
WordList([u'We', u'will', u'meet', u'to', u'day', u'.'])
Note that if you want to start working with polyglot, you should first read the documentation carefully, as there are a few things to consider, for example the downloading of language specific models.
I created a simple application using Scala to do this.
https://github.com/shredder47/Nonspaced-Sentence-Tokenizer
This problem is going completely over my head. I am training a Word2Vec model using gensim. I have provided data in multiple languages i.e. English and Hindi. When I am trying to find the words closest to 'man', this is what I am getting:
model.wv.most_similar(positive = ['man'])
Out[14]:
[('woman', 0.7380284070968628),
('lady', 0.6933152675628662),
('monk', 0.6662989258766174),
('guy', 0.6513140201568604),
('soldier', 0.6491742134094238),
('priest', 0.6440571546554565),
('farmer', 0.6366012692451477),
('sailor', 0.6297377943992615),
('knight', 0.6290514469146729),
('person', 0.6288090944290161)]
The problem is, these are all English words. Then I tried to find the similarity between Hindi and English words with the same meaning:
model.similarity('man', 'आदमी')
__main__:1: DeprecationWarning: Call to deprecated `similarity` (Method will
be removed in 4.0.0, use self.wv.similarity() instead).
Out[13]: 0.078265618974427215
This similarity should have been higher than all the others. The Hindi corpus I have was made by translating the English one, so the words appear in similar contexts and should therefore be close.
This is what I am doing here:
# Combining all the words together
all_reviews = HindiWordsList + EnglishWordsList

# Training the Word2Vec model
cpu_count = multiprocessing.cpu_count()
model = Word2Vec(size=300, window=5, min_count=1, alpha=0.025, workers=cpu_count, max_vocab_size=None, negative=10)
model.build_vocab(all_reviews)
model.train(all_reviews, total_examples=model.corpus_count, epochs=model.iter)
model.save("word2vec_combined_50.bin")
I have been dealing with a very similar problem and came across a reasonably robust solution. This paper shows that a linear relationship can be defined between two Word2Vec models that have been trained on different languages. This means you can derive a translation matrix to convert word embeddings from one language model into the vector space of another language model. What does all of that mean? It means I can take a word from one language, and find words in the other language that have a similar meaning.
I've written a small Python package that implements this for you: transvec. Here's an example where I use pre-trained models to search for Russian words and find English words with a similar meaning:
import gensim.downloader
from transvec.transformers import TranslationWordVectorizer
# Pretrained models in two different languages.
ru_model = gensim.downloader.load("word2vec-ruscorpora-300")
en_model = gensim.downloader.load("glove-wiki-gigaword-300")
# Training data: pairs of English words with their Russian translations.
# The more you can provide, the better.
train = [
    ("king", "царь_NOUN"), ("tsar", "царь_NOUN"),
    ("man", "мужчина_NOUN"), ("woman", "женщина_NOUN")
]
bilingual_model = TranslationWordVectorizer(en_model, ru_model).fit(train)
# Find words with similar meanings across both languages.
bilingual_model.similar_by_word("царица_NOUN", 1) # "queen"
# [('king', 0.7763221263885498)]
Don't worry about the weird POS tags on the Russian words - this is just a quirk of the particular pre-trained model I used.
So basically, if you can provide a list of words with their translations, then you can train a TranslationWordVectorizer to translate any word that exists in your source language corpus into the target language. When I used this for real, I produced some training data by extracting all the individual Russian words from my data, running them through Google Translate and then keeping everything that translated to a single word in English. The results were pretty good (sorry I don't have any more detail for the benchmark yet; it's still a work in progress!).
First of all, you should really use self.wv.similarity().
I'm assuming there are close to no words that exist in both your Hindi corpus and your English corpus, since the Hindi corpus is in Devanagari and the English one is in, well, English. Simply adding the two corpora together to make one model does not make sense. Corresponding words in the two languages co-occur across the two versions of a document, but never within the same training context, so there is nothing for Word2Vec to use to figure out that they are most similar.
E.g., until your model knows that
Man : Aadmi :: Woman : Aurat
from the word embeddings, it can never make out the
Raja : King :: Rani : Queen
relation. And for that, you need some anchor between the two corpora.
Here are a few suggestions that you can try out:
Make an independent Hindi corpus/model.
Maintain a lookup table of a few English -> Hindi word pairs that you will have to create manually.
Randomly replace input document words with their counterparts from the corresponding document while training (a rough sketch of this idea follows below).
These might be enough to give you an idea. You can also look into seq2seq if you only want to do translations. You can also read the Word2Vec theory in detail to understand what it does.
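A rough sketch of the third suggestion; the seed dictionary is purely illustrative, and the variable names follow your code:

import random

# Hypothetical, manually created anchor pairs (English -> Hindi).
seed_pairs = {"man": "आदमी", "woman": "औरत", "king": "राजा", "queen": "रानी"}

def mix_in_translations(tokens, pairs, prob=0.3):
    # Randomly swap known English words for their Hindi counterparts so that
    # the two vocabularies share training contexts (anchors between corpora).
    return [pairs[t] if t in pairs and random.random() < prob else t
            for t in tokens]

mixed_english = [mix_in_translations(doc, seed_pairs) for doc in EnglishWordsList]
all_reviews = mixed_english + HindiWordsList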
After reading the comments, I think the problem lies in the very different grammatical construction of English and Hindi sentences. I have worked with Hindi NLP models, and it is much more difficult to get results as good as for English (since you mention it).
In Hindi there is no fixed order between words at all; only their declension matters. Moreover, the translation of a sentence between languages that are not even descendants of the same root language is somewhat arbitrary, and you cannot assume that the contexts of both sentences are similar.
I'm currently working on a project where I want to extract emotion from text. As I'm using conceptnet5 (a semantic network), I can't simply prefix words in a sentence that contains a negation word, as those prefixed words would simply not show up in conceptnet5's API.
Here's an example:
The movie wasn't that good.
Hence, I figured that I could use WordNet's lemma functionality to replace adjectives in sentences that contain negation words (not, ...).
In the previous example, the algorithm would detect wasn't and would replace it with was not.
Further, it would detect the negation word not, and replace good with its antonym, bad.
The sentence would read:
The movie was that bad.
While I see that this isn't the most elegant way, and it probably produces the wrong result in many cases, I'd still like to handle negation this way, as I frankly don't know any better approach.
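For the antonym step, I imagine querying NLTK's WordNet interface roughly like this (sense selection is naive, and it only covers words WordNet knows):

from nltk.corpus import wordnet as wn   # requires the 'wordnet' corpus to be downloaded

def naive_antonym(word, pos=wn.ADJ):
    # Return the first antonym found across the word's synsets, if any.
    for synset in wn.synsets(word, pos=pos):
        for lemma in synset.lemmas():
            if lemma.antonyms():
                return lemma.antonyms()[0].name()
    return None

print(naive_antonym('good'))  # e.g. 'bad'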
Considering my problem:
Unfortunately, I did not find any library that would allow me to replace all occurrences of appended negation-words (wasn't => was not).
I mean, I could do it manually by replacing the occurrences with a regex, but then I would be stuck with the English language.
Therefore I'd like to ask if some of you know a library, function or better method that could help me here.
Currently I'm using python nltk, still it doesn't seem that it contains such functionality, but I may be wrong.
Thanks in advance :)
Cases like wasn't can be simply parsed by tokenization (tokens = nltk.word_tokenize(sentence)): wasn't will turn into was and n't.
But negative meaning can also be formed by "quasi-negative words, like hardly, barely, seldom" and "implied negatives, such as fail, prevent, reluctant, deny, absent"; look into this paper. An even more detailed analysis can be found in Christopher Potts' "On the negativity of negation".
Considering your initial problem, sentiment analysis: most modern approaches, as far as I know, don't process negation explicitly; instead, they use supervised approaches with higher-order n-grams. Those that do process negation usually append a special prefix NOT_ to all words between a negation and the next punctuation mark.
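A minimal sketch of that NOT_ convention (the negation and punctuation sets below are simplifications, and NLTK's punkt tokenizer data must be available for word_tokenize):

import nltk

NEGATIONS = {"not", "n't", "no", "never"}
PUNCTUATION = {".", ",", ";", "!", "?", ":"}

def mark_negation(sentence):
    marked, negating = [], False
    for tok in nltk.word_tokenize(sentence):
        if tok.lower() in NEGATIONS:
            negating = True
            marked.append(tok)
        elif tok in PUNCTUATION:
            negating = False          # negation scope ends at punctuation
            marked.append(tok)
        else:
            marked.append("NOT_" + tok if negating else tok)
    return marked

print(mark_negation("The movie wasn't that good, but the cast was fine."))
# ['The', 'movie', 'was', "n't", 'NOT_that', 'NOT_good', ',', 'but', 'the', 'cast', 'was', 'fine', '.']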