Python NLTK: how to lemmatize text, including verbs, in English?

I want to lemmatize this text, but it only lemmatizes the nouns; I need to lemmatize the verbs as well.
>>> import nltk, re, string
>>> from nltk.stem import WordNetLemmatizer
>>> from urllib import urlopen
>>> url="https://raw.githubusercontent.com/evandrix/nltk_data/master/corpora/europarl_raw/english/ep-00-01-17.en"
>>> raw = urlopen(url).read()
>>> raw ="".join(l for l in raw if l not in string.punctuation)
>>> tokens=nltk.word_tokenize(raw)
>>> from nltk.stem import WordNetLemmatizer
>>> lemmatizer = WordNetLemmatizer()
>>> lem = [lemmatizer.lemmatize(t) for t in tokens]
>>> lem[:20]
['Resumption', 'of', 'the', 'session', 'I', 'declare', 'resumed', 'the', 'session', 'of', 'the', 'European', 'Parliament', 'adjourned', 'on', 'Friday', '17', 'December', '1999', 'and']
Here a verb like "resumed" should become "resume". Can you tell me what I should do to lemmatize the whole text?

Use the pos parameter of WordNetLemmatizer:
>>> from nltk.stem import WordNetLemmatizer
>>> from nltk import pos_tag
>>> wnl = WordNetLemmatizer()
>>> wnl.lemmatize('resumed')
'resumed'
>>> wnl.lemmatize('resumed', pos='v')
u'resume'
Here's complete code using the pos_tag function:
>>> from nltk import word_tokenize, pos_tag
>>> from nltk.stem import WordNetLemmatizer
>>> wnl = WordNetLemmatizer()
>>> txt = """Resumption of the session I declare resumed the session of the European Parliament adjourned on Friday 17 December 1999 , and I would like once again to wish you a happy new year in the hope that you enjoyed a pleasant festive period ."""
>>> [wnl.lemmatize(i,j[0].lower()) if j[0].lower() in ['a','n','v'] else wnl.lemmatize(i) for i,j in pos_tag(word_tokenize(txt))]
['Resumption', 'of', 'the', 'session', 'I', 'declare', u'resume', 'the', 'session', 'of', 'the', 'European', 'Parliament', u'adjourn', 'on', 'Friday', '17', 'December', '1999', ',', 'and', 'I', 'would', 'like', 'once', 'again', 'to', 'wish', 'you', 'a', 'happy', 'new', 'year', 'in', 'the', 'hope', 'that', 'you', u'enjoy', 'a', 'pleasant', 'festive', 'period', '.']
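Note that pos_tag returns Penn Treebank tags ('JJ' for adjectives, 'RB' for adverbs), so the j[0].lower() in ['a','n','v'] check above never passes pos='a' or pos='r' to the lemmatizer; adjectives and adverbs fall back to the noun default. A small mapping helper closes that gap (a sketch; penn_to_wordnet and lemmatize_text are my own names, not NLTK functions):
from nltk import pos_tag, word_tokenize
from nltk.corpus import wordnet
from nltk.stem import WordNetLemmatizer

wnl = WordNetLemmatizer()

def penn_to_wordnet(tag):
    # Map a Penn Treebank tag to the WordNet POS the lemmatizer expects.
    if tag.startswith('J'):
        return wordnet.ADJ   # 'a'
    if tag.startswith('V'):
        return wordnet.VERB  # 'v'
    if tag.startswith('R'):
        return wordnet.ADV   # 'r'
    return wordnet.NOUN      # 'n' is the lemmatizer's default anyway

def lemmatize_text(text):
    return [wnl.lemmatize(word, penn_to_wordnet(tag))
            for word, tag in pos_tag(word_tokenize(text))]

print(lemmatize_text("I declare resumed the session adjourned on Friday"))
# expected: ['I', 'declare', 'resume', 'the', 'session', 'adjourn', 'on', 'Friday']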

Related

Does the lemmatization mechanism reduce the size of the corpus?

Dear Community Members,
During the pre-processing of data, after splitting the raw_data into tokens, I have used the popular WordNet Lemmatizer to generate the stems. I am performing experiments on a dataset that has 18953 tokens.
My question is: does the lemmatization process reduce the size of the corpus?
I am confused, kindly help in this regard. Any help is appreciated!
Lemmatization converts each token (aka form) in the sentence into its lemma form (aka type):
>>> from nltk import word_tokenize
>>> from pywsd.utils import lemmatize_sentence
>>> text = ['This is a corpus with multiple sentences.', 'This was the second sentence running.', 'For some reasons, there is a need to second foo bar ran.']
>>> lemmatize_sentence(text[0]) # Lemmatized sentence example.
['this', 'be', 'a', 'corpus', 'with', 'multiple', 'sentence', '.']
>>> word_tokenize(text[0]) # Tokenized sentence example.
['This', 'is', 'a', 'corpus', 'with', 'multiple', 'sentences', '.']
>>> word_tokenize(text[0].lower()) # Lowercased and tokenized sentence example.
['this', 'is', 'a', 'corpus', 'with', 'multiple', 'sentences', '.']
If we lemmatize the sentence, each token should receive the corresponding lemma form, so the number of "words" remains the same whether you count forms or types:
>>> num_tokens = sum([len(word_tokenize(sent.lower())) for sent in text])
>>> num_lemmas = sum([len(lemmatize_sentence(sent)) for sent in text])
>>> num_tokens, num_lemmas
(29, 29)
>>> [lemmatize_sentence(sent) for sent in text] # lemmatized sentences
[['this', 'be', 'a', 'corpus', 'with', 'multiple', 'sentence', '.'], ['this', 'be', 'the', 'second', 'sentence', 'running', '.'], ['for', 'some', 'reason', ',', 'there', 'be', 'a', 'need', 'to', 'second', 'foo', 'bar', 'ran', '.']]
>>> [word_tokenize(sent.lower()) for sent in text] # tokenized sentences
[['this', 'is', 'a', 'corpus', 'with', 'multiple', 'sentences', '.'], ['this', 'was', 'the', 'second', 'sentence', 'running', '.'], ['for', 'some', 'reasons', ',', 'there', 'is', 'a', 'need', 'to', 'second', 'foo', 'bar', 'ran', '.']]
The "compression" per-se would refer to the number of unique tokens represented in the whole corpus after you've lemmatized the sentences, e.g.
>>> from itertools import chain
>>> lemma_vocab = set(chain(*[lemmatize_sentence(sent) for sent in text]))
>>> token_vocab = set(chain(*[word_tokenize(sent.lower()) for sent in text]))
>>> len(lemma_vocab), len(token_vocab)
(21, 23)
>>> lemma_vocab
{'the', 'this', 'to', 'reason', 'for', 'second', 'a', 'running', 'some', 'sentence', 'be', 'foo', 'ran', 'with', '.', 'need', 'multiple', 'bar', 'corpus', 'there', ','}
>>> token_vocab
{'the', 'this', 'to', 'for', 'sentences', 'a', 'second', 'running', 'some', 'is', 'sentence', 'foo', 'reasons', 'with', 'ran', '.', 'need', 'multiple', 'bar', 'corpus', 'there', 'was', ','}
Note: Lemmatization is a pre-processing step, but it should not overwrite your original corpus with the lemmatized forms.

How to lemmatize a list of sentences

How can I lemmatize a list of sentences in Python?
from nltk.stem.wordnet import WordNetLemmatizer
a = ['i like cars', 'cats are the best']
lmtzr = WordNetLemmatizer()
lemmatized = [lmtzr.lemmatize(word) for word in a]
print(lemmatized)
This is what I've tried, but it gives me back the same sentences. Do I need to tokenize the words first for this to work properly?
TL;DR:
pip3 install -U pywsd
Then:
>>> from pywsd.utils import lemmatize_sentence
>>> text = 'i like cars'
>>> lemmatize_sentence(text)
['i', 'like', 'car']
>>> lemmatize_sentence(text, keepWordPOS=True)
(['i', 'like', 'cars'], ['i', 'like', 'car'], ['n', 'v', 'n'])
>>> text = 'The cat likes cars'
>>> lemmatize_sentence(text, keepWordPOS=True)
(['The', 'cat', 'likes', 'cars'], ['the', 'cat', 'like', 'car'], [None, 'n', 'v', 'n'])
>>> text = 'The lazy brown fox jumps, and the cat likes cars.'
>>> lemmatize_sentence(text)
['the', 'lazy', 'brown', 'fox', 'jump', ',', 'and', 'the', 'cat', 'like', 'car', '.']
Otherwise, take a look at what the lemmatize_sentence function in pywsd does:
Tokenizes the string
Runs the POS tagger and maps the tags to the WordNet POS tagset
Attempts to stem
Finally calls the lemmatizer with the POS and/or stems
See https://github.com/alvations/pywsd/blob/master/pywsd/utils.py#L129
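Applied to the list of sentences from the question, a list comprehension is enough (a sketch, assuming pywsd is installed):
from pywsd.utils import lemmatize_sentence

a = ['i like cars', 'cats are the best']
lemmatized = [lemmatize_sentence(sent) for sent in a]
print(lemmatized)
# expected: [['i', 'like', 'car'], ['cat', 'be', 'the', 'best']]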
You have to lemmatize each word separately; as written, you are lemmatizing whole sentences. Here is a corrected code fragment:
from nltk.stem.wordnet import WordNetLemmatizer
from nltk import word_tokenize
sents = ['i like cars', 'cats are the best']
lmtzr = WordNetLemmatizer()
lemmatized = [[lmtzr.lemmatize(word) for word in word_tokenize(s)]
              for s in sents]
print(lemmatized)
#[['i', 'like', 'car'], ['cat', 'are', 'the', 'best']]
You can also get better results if you first do POS tagging and then provide the POS information to the lemmatizer.
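For example, a minimal sketch of that idea with plain NLTK (the one-character tag mapping in wn_pos below is my own shorthand and only covers adjectives, verbs, adverbs and the noun default):
from nltk import pos_tag, word_tokenize
from nltk.stem.wordnet import WordNetLemmatizer

lmtzr = WordNetLemmatizer()
sents = ['i like cars', 'cats are the best']

def wn_pos(tag):
    # Reduce a Penn Treebank tag to the WordNet POS letter the lemmatizer expects.
    return {'J': 'a', 'V': 'v', 'R': 'r'}.get(tag[0], 'n')

lemmatized = [[lmtzr.lemmatize(word, wn_pos(tag))
               for word, tag in pos_tag(word_tokenize(s))]
              for s in sents]
print(lemmatized)
# expected: [['i', 'like', 'car'], ['cat', 'be', 'the', 'best']]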

Replacing only pronouns, nouns, verbs and adjectives in a sentence with their corresponding tags: how can I do it efficiently in Python?

I have sentences and I want to replace only their pronouns, nouns, verbs and adjectives with the corresponding POS tags.
For example, my sentence is:
"I am going to the most beautiful city, Islamabad"
and want the result
"PRP am VBG to the most JJ NN, NNP".
TL;DR
>>> from nltk import pos_tag, word_tokenize
>>> sent = "I am going to the most beautiful city, Islamabad"
>>> wanted_tags = ['PRP', 'NN', 'VB', 'JJ', 'RB']  # tag prefixes to replace (assumed definition)
>>> [pos if any(p for p in wanted_tags if pos.startswith(p)) else word for word, pos in pos_tag(word_tokenize(sent))]
['PRP', 'VBP', 'VBG', 'to', 'the', 'RBS', 'JJ', 'NN', ',', 'NNP']
>>> from nltk.corpus import stopwords
>>> stoplist = stopwords.words('english')
>>> [pos if any(p for p in wanted_tags if pos.startswith(p)) and word not in stoplist else word for word, pos in pos_tag(word_tokenize(sent))]
['PRP', 'am', 'VBG', 'to', 'the', 'most', 'JJ', 'NN', ',', 'NNP']
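To get a single string back, like the one asked for in the question, join the tokens (a sketch; a plain join leaves a space before the comma, so the result reads 'NN , NNP' rather than 'NN, NNP'):
>>> replaced = [pos if any(pos.startswith(p) for p in wanted_tags) and word not in stoplist else word
...             for word, pos in pos_tag(word_tokenize(sent))]
>>> ' '.join(replaced)
'PRP am VBG to the most JJ NN , NNP'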

How to tokenize all currency symbols using regex in Python?

I want to tokenize all the currency symbols using NLTK's regexp tokenizer.
For example, these are my sentences:
The price of it is $5.00.
The price of it is RM5.00.
The price of it is €5.00.
I used this pattern of regex:
pattern = r'''(['()""\w]+|\.+|\?+|\,+|\!+|\$?\d+(\.\d+)?%?)'''
tokenize_list = nltk.regexp_tokenize(sentence, pattern)
But as we can see, it only handles $.
I tried to use \p{Sc} as explained in What is regex for currency symbol? but it is still not working for me.
Try padding the numbers (so they are separated from the currency symbol by spaces) and then tokenizing:
>>> import re
>>> from nltk import word_tokenize
>>> sents = """The price of it is $5.00.
... The price of it is RM5.00.
... The price of it is €5.00.""".split('\n')
>>>
>>> for sent in sents:
...     numbers_in_sent = re.findall("[-+]?\d+[\.]?\d*", sent)
...     for num in numbers_in_sent:
...         sent = sent.replace(num, ' '+num+' ')
...     print word_tokenize(sent)
...
['The', 'price', 'of', 'it', 'is', '$', '5.00', '.']
['The', 'price', 'of', 'it', 'is', 'RM', '5.00', '.']
['The', 'price', 'of', 'it', 'is', '\xe2\x82\xac', '5.00', '.']
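The same padding idea works in Python 3, where word_tokenize operates on Unicode strings directly (a sketch; the substitution just surrounds every number with spaces):
import re
from nltk import word_tokenize

sents = ["The price of it is $5.00.",
         "The price of it is RM5.00.",
         "The price of it is €5.00."]

for sent in sents:
    # Surround each number with spaces so the currency symbol or prefix is split off.
    padded = re.sub(r"([-+]?\d+\.?\d*)", r" \1 ", sent)
    print(word_tokenize(padded))
# expected:
# ['The', 'price', 'of', 'it', 'is', '$', '5.00', '.']
# ['The', 'price', 'of', 'it', 'is', 'RM', '5.00', '.']
# ['The', 'price', 'of', 'it', 'is', '€', '5.00', '.']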

text.replace(punctuation,'') does not remove all punctuation contained in list(punctuation)?

import urllib2,sys
from bs4 import BeautifulSoup,NavigableString
from string import punctuation as p
# URL for Obama's presidential acceptance speech in 2008
obama_4427_url = 'http://www.millercenter.org/president/obama/speeches/speech-4427'
# read in URL
obama_4427_html = urllib2.urlopen(obama_4427_url).read()
# BS magic
obama_4427_soup = BeautifulSoup(obama_4427_html)
# find the speech itself within the HTML
obama_4427_div = obama_4427_soup.find('div',{'id': 'transcript'},{'class': 'displaytext'})
# obama_4427_div.text.lower() removes extraneous characters (e.g. '<br/>')
# and places all letters in lowercase
obama_4427_str = obama_4427_div.text.lower()
# for further text analysis, remove punctuation
for punct in list(p):
    obama_4427_str_processed = obama_4427_str.replace(p,'')
    obama_4427_str_processed_2 = obama_4427_str_processed.replace(p,'')
print(obama_4427_str_processed_2)
# store individual words
words = obama_4427_str_processed.split(' ')
print(words)
Long story short, I have a speech from President Obama and am looking to remove all punctuation so that I'm left with only the words. I've imported string.punctuation and run a for loop, but it didn't remove all of the punctuation. What am I doing wrong here?
str.replace() searches for the whole value of the first argument; it is not a pattern. You passed in the entire string.punctuation value (p), so the call only replaces occurrences of that whole sequence, which never appears in the text.
Use a regular expression instead:
import re
from string import punctuation as p
punctuation = re.compile('[{}]+'.format(re.escape(p)))
obama_4427_str_processed = punctuation.sub('', obama_4427_str)
words = obama_4427_str_processed.split()
Note that you can just use str.split() without an argument to split on any arbitrary-width whitespace, including newlines.
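For example, on a short made-up sample string:
import re
from string import punctuation as p

punctuation = re.compile('[{}]+'.format(re.escape(p)))

sample = "with profound gratitude, and great humility, i accept your nomination!"
print(punctuation.sub('', sample).split())
# ['with', 'profound', 'gratitude', 'and', 'great', 'humility', 'i', 'accept', 'your', 'nomination']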
If you want to remove the punctuation, you can rstrip it off each word:
obama_4427_str = obama_4427_div.text.lower()
# for further text analysis, remove punctuation
from string import punctuation
print([w.rstrip(punctuation) for w in obama_4427_str.split()])
Output:
['transcript', 'to', 'chairman', 'dean', 'and', 'my', 'great',
'friend', 'dick', 'durbin', 'and', 'to', 'all', 'my', 'fellow',
'citizens', 'of', 'this', 'great', 'nation', 'with', 'profound',
'gratitude', 'and', 'great', 'humility', 'i', 'accept', 'your',
'nomination', 'for', 'the', 'presidency', 'of', 'the', 'united',
................................................................
Using Python 3, to remove punctuation from anywhere in the string, use str.translate:
from string import punctuation
tbl = str.maketrans({ord(ch):"" for ch in punctuation})
obama_4427_str = obama_4427_div.text.lower().translate(tbl)
print(obama_4427_str.split())
For Python 2:
from string import punctuation
obama_4427_str = obama_4427_div.text.lower().encode("utf-8").translate(None,punctuation)
print( obama_4427_str.split())
Output:
['transcript', 'to', 'chairman', 'dean', 'and', 'my', 'great',
'friend', 'dick', 'durbin', 'and', 'to', 'all', 'my', 'fellow',
'citizens', 'of', 'this', 'great', 'nation', 'with', 'profound',
'gratitude', 'and', 'great', 'humility', 'i', 'accept', 'your',
'nomination', 'for', 'the', 'presidency', 'of', 'the', 'united',
............................................................
On another note, you can iterate over a string directly, so list(p) is redundant in your own code.
