I would like to use spacy for tokenizing Wikipedia scrapes. Ideally it would work like this:
text = 'procedure that arbitrates competing models or hypotheses.[2][3] Researchers also use experimentation to test existing theories or new hypotheses to support or disprove them.[3][4]'
# run spacy
spacy_en = spacy.load("en")
doc = spacy_en(text, disable=['tagger', 'ner'])
tokens = [tok.text.lower() for tok in doc]
# desired output
# tokens = [..., 'models', 'or', 'hypotheses', '.', '[2][3]', 'researchers', ...]
# actual output
# tokens = [..., 'models', 'or', 'hypotheses.[2][3', ']', 'researchers', ...]
The problem is that the 'hypotheses.[2][3]' is glued together into one token.
How can I prevent spacy from connecting this '[2][3]' to the previous token?
As long as it is split off from the word 'hypotheses' and from the period at the end of the sentence, I don't care how it is handled. But the individual words and punctuation should stay separate from this syntactic noise.
So for example, any of the following would be a desirable output:
'hypotheses', '.', '[2][', '3]'
'hypotheses', '.', '[2', '][3]'
I think you could try playing around with the infix patterns:
import re
import spacy
from spacy.tokenizer import Tokenizer
infix_re = re.compile(r'''[.]''')
def custom_tokenizer(nlp):
    return Tokenizer(nlp.vocab, infix_finditer=infix_re.finditer)
nlp = spacy.load('en')
nlp.tokenizer = custom_tokenizer(nlp)
doc = nlp(u"hello-world! I am hypothesis.[2][3]")
print([t.text for t in doc])
More on this https://spacy.io/usage/linguistic-features#native-tokenizers
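A less invasive variant, if you would rather keep spaCy's default rules, is to extend only the infix patterns so that the bracketed citation markers get cut away from the word and from the period. This is just a sketch (it assumes a reasonably recent spaCy and the en_core_web_sm model; the exact splits may differ):
import spacy
from spacy.util import compile_infix_regex
nlp = spacy.load("en_core_web_sm")
# add '[' and ']' as infixes, plus a split for a period that directly precedes '['
infixes = list(nlp.Defaults.infixes) + [r"\[", r"\]", r"(?<=[a-zA-Z])\.(?=\[)"]
nlp.tokenizer.infix_finditer = compile_infix_regex(infixes).finditer
doc = nlp("competing models or hypotheses.[2][3] Researchers also use experimentation")
print([t.text.lower() for t in doc])
# roughly: [..., 'hypotheses', '.', '[', '2', ']', '[', '3', ']', 'researchers', ...]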
Related
I would like to keep the tokenizer that spaCy normally uses, but add a condition.
spaCy usually separates a dot (".") from a word and makes it its own token. I want to keep that, except for the abbreviation "et al.": in that case I would like to get the tokens ['et', 'al.'], without the dot becoming a separate token, just in this case.
I have been reviewing the documentation, and it seems to me that the solution is related to the script below; however, I do not know where I could place this condition.
import spacy
from spacy.lang.char_classes import ALPHA_LOWER, ALPHA_UPPER, PUNCT
from spacy.lang.char_classes import LIST_PUNCT, LIST_ELLIPSES, LIST_QUOTES, LIST_ICONS
from spacy.lang.char_classes import CURRENCY, UNITS, ALPHA_LOWER, CONCAT_QUOTES, PUNCT, ALPHA_UPPER
from spacy.util import compile_suffix_regex
# Default tokenizer
nlp = spacy.load("pt_core_news_sm")
doc = nlp("Esse é um exemplo. Ramon et al., kcal.")
print([t.text for t in doc]) # ['Esse', 'é', 'um', 'exemplo', '.', 'Ramon', 'et', 'al', '.', ',', 'kcal', '.']
# Modify tokenizer suffix patterns
suffixes = (
    LIST_PUNCT
    + LIST_ELLIPSES
    + LIST_QUOTES
    + LIST_ICONS
    + ["'s", "'S", "’s", "’S", "—", "–"]
    + [
        r"(?<=[0-9])\+",
        r"(?<=°[FfCcKk])\.",
        r"(?<=[0-9])(?:{c})".format(c=CURRENCY),
        r"(?<=[0-9])(?:{u})".format(u=UNITS),
        r"(?<=[0-9{al}{e}{p}(?:{q})])\.".format(
            al=ALPHA_LOWER, e=r"%²\-\+", q=CONCAT_QUOTES, p=PUNCT
        ),
        r"(?<=[{au}][{au}])\.".format(au=ALPHA_UPPER),
    ]
)
suffix_regex = compile_suffix_regex(suffixes)
nlp.tokenizer.suffix_search = suffix_regex.search
doc = nlp("Esse é um exemplo. Ramon et al., kcal.")
print([t.text for t in doc]) # Expected -> ['Esse', 'é', 'um', 'exemplo', '.', 'Ramon', 'et', 'al.', ',', 'kcal', '.']
For this instance, I think the easiest thing to do is to add a special case to the tokenizer. The benefit is that you don't have to recreate and recompile all of those tokenizer regexes; you can just add this one case, as follows:
import spacy
from spacy.symbols import ORTH
nlp = spacy.load("en_core_web_sm")
nlp.tokenizer.add_special_case("al.", [{ORTH: "al."}])
# Check new tokenization
print([w.text for w in nlp("et al.")]) # ['et', 'al.']
It would be good to review how the tokenizer works to understand what this is going to do. The tokenizer handles special cases first, so whenever it sees this substring it will tokenize it that way before any of the other rules apply. That means this solution can produce some false positive tokenizations: cases where something other than 'et' precedes 'al.' and you don't want the period attached. For a more precise solution, you could write a small pipeline component that merges the 'al' and '.' tokens after the text has been tokenized; a good example of this is spaCy's merge_noun_chunks (source).
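A rough sketch of such a component with the retokenizer and the spaCy v3 component API (the pt_core_news_sm model is taken from the question; treat the rest as an untested sketch):
import spacy
from spacy.language import Language
@Language.component("merge_et_al")
def merge_et_al(doc):
    # merge 'al' + '.' into 'al.' only when the previous token is 'et',
    # so ordinary sentence-final periods are left alone
    with doc.retokenize() as retokenizer:
        for token in doc[1:-1]:
            if (token.lower_ == "al" and doc[token.i - 1].lower_ == "et"
                    and doc[token.i + 1].text == "."):
                retokenizer.merge(doc[token.i:token.i + 2])
    return doc
nlp = spacy.load("pt_core_news_sm")
nlp.add_pipe("merge_et_al", first=True)
print([t.text for t in nlp("Esse é um exemplo. Ramon et al., kcal.")])
# expected: ['Esse', 'é', 'um', 'exemplo', '.', 'Ramon', 'et', 'al.', ',', 'kcal', '.']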
I am working on my bachelor thesis and have to prepare a corpus to train word embeddings.
What I am wondering is whether it is possible to check a tokenized sentence or text for n-grams and then replace those single tokens with the n-gram.
To make it a bit clearer what I mean:
Input
var = ['Hello', 'Sherlock', 'Holmes', 'my', 'name', 'is', 'Mr', '.', 'Watson','.']
Desired Output
var = ['Hello', 'Sherlock_Holmes', 'my', 'name', 'is', 'Mr_Watson','.']
I know 'Mr. Watson' is not the perfect example right now, but I am wondering whether this is possible, because training my word2vec algorithm without looking for n-grams does not do the job well enough.
import os
import nltk
from nltk import bigrams, trigrams

class MySentence():
    def __init__(self, dirname):
        self.dirname = dirname
        print('Hello init')

    def __iter__(self):
        for fname in os.listdir(self.dirname):
            for line in open(os.path.join(self.dirname, fname)):
                tokens = nltk.regexp_tokenize(line, pattern=r'\w+|\$[\d\.]+|\S+')
                tokens = [token for token in tokens if len(token) > 1]  # same as unigrams
                bi_tokens = bigrams(tokens)
                yield trigrams(tokens)
sentences = MySentence(path)
N-grams are just sequences of adjacent words, but they don't have to make sense linguistically. For example, "Hello Sherlock" and "Holmes my" could be 2-grams. Rather, it sounds like you are looking for a more sophisticated tokenization with language-specific context, or entity recognition ("Sherlock Holmes"), which itself requires a trained model. Check out NLTK's documentation regarding nltk.ne_chunk() or rule-based chunking, or, for an out-of-the-box solution to get started with, spaCy's named entity recognition and tokenization capabilities.
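For the spaCy route, a minimal sketch (assuming the en_core_web_sm model is installed) is to run NER, merge each recognised entity into a single token with the built-in merge_entities component, and then join multi-word entities with underscores:
import spacy
nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("merge_entities")  # built-in component: merges each entity span into one token
doc = nlp("Hello Sherlock Holmes my name is Mr. Watson.")
print([t.text.replace(" ", "_") for t in doc])
# e.g. ['Hello', 'Sherlock_Holmes', 'my', 'name', 'is', 'Mr._Watson', '.']
# (the exact grouping depends on what the model recognises as entities)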
Is there a way to force spaCy not to parse punctuation as separate tokens?
nlp = spacy.load('en')
doc = nlp(u'the $O is in $R')
[w for w in doc]
# [the, $, O, is, in, $, R]
I want:
# [the, $O, is, in, $R]
Customize the prefix_search function for spaCy's Tokenizer class. Refer to the documentation. Something like:
import spacy
import re
from spacy.tokenizer import Tokenizer
# use any currency regex match as per your requirement
prefix_re = re.compile(r'''^\$[a-zA-Z0-9]''')
def custom_tokenizer(nlp):
    return Tokenizer(nlp.vocab, prefix_search=prefix_re.search)
nlp = spacy.load("en_core_web_sm")
nlp.tokenizer = custom_tokenizer(nlp)
doc = nlp(u'the $O is in $R')
print([t.text for t in doc])
# ['the', '$O', 'is', 'in', '$R']
Yes, there is. For example,
import spacy
import regex as re
from spacy.tokenizer import Tokenizer
prefix_re = re.compile(r'''^[\[\+\("']''')
suffix_re = re.compile(r'''[\]\)"']$''')
infix_re = re.compile(r'''[\(\-\)\#\.\:\$]''') #you need to change the infix tokenization rules
simple_url_re = re.compile(r'''^https?://''')
def custom_tokenizer(nlp):
    return Tokenizer(nlp.vocab, prefix_search=prefix_re.search,
                     suffix_search=suffix_re.search,
                     infix_finditer=infix_re.finditer,
                     token_match=simple_url_re.match)
nlp = spacy.load('en_core_web_sm')
nlp.tokenizer = custom_tokenizer(nlp)
doc = nlp(u'the $O is in $R')
print([w for w in doc])  # prints:
[the, $O, is, in, $R]
You just need to add the '$' character to the infix regex (with an escape character '\', obviously).
Aside: I have included the prefix and suffix regexes to showcase the flexibility of spaCy's tokenizer. In your case just the infix regex will suffice.
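A lighter-touch sketch, not taken from the answers above (it assumes the en_core_web_sm model and spaCy's default prefix rules): keep the default tokenizer and simply drop '$' from its prefix rules, so '$O' and '$R' are never split off in the first place:
import spacy
from spacy.util import compile_prefix_regex
nlp = spacy.load("en_core_web_sm")
# remove every prefix rule that involves '$' (it is part of the default currency prefixes)
prefixes = [p for p in nlp.Defaults.prefixes if "$" not in p]
nlp.tokenizer.prefix_search = compile_prefix_regex(prefixes).search
print([t.text for t in nlp("the $O is in $R")])
# expected: ['the', '$O', 'is', 'in', '$R']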
I have a question about whether there is a way to keep single whitespace characters as independent tokens in spaCy tokenization.
For example if I ran:
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("This is easy.")
toks = [w.text for w in doc]
toks
The result is
['This', 'is', 'easy', '.']
Instead, I would like to have something like
['This', ' ', 'is', ' ', 'easy', '.']
Is there a simple way to do that?
spaCy exposes the token's whitespace as the whitespace_ attribute. So if you only need a list of strings, you could do:
token_texts = []
for token in doc:
    token_texts.append(token.text)
    if token.whitespace_:  # filter out empty strings
        token_texts.append(token.whitespace_)
If you want to create an actual Doc object out of those tokens, that's possible, too. Doc objects can be constructed with a words keyword argument (a list of strings to add as tokens). However, I'm not sure how useful that would be.
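As a sketch of that construction, reusing the token_texts list built above (every entry gets spaces=False, since the whitespace is already present as its own token):
from spacy.tokens import Doc
spaced_doc = Doc(nlp.vocab, words=token_texts, spaces=[False] * len(token_texts))
print([t.text for t in spaced_doc])
# e.g. ['This', ' ', 'is', ' ', 'easy', '.']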
If you want the whitespace tokens in the Doc object itself (note that this purely whitespace-based tokenizer no longer splits off punctuation, so 'easy.' stays a single token):
import spacy
from spacy.tokens import Doc
class WhitespaceTokenizer(object):
    def __init__(self, vocab):
        self.vocab = vocab

    def __call__(self, text):
        words = text.split(' ')
        res = [' '] * (2 * len(words) - 1)
        res[::2] = words
        return Doc(self.vocab, words=res)
nlp = spacy.load('en_core_web_sm')
nlp.tokenizer = WhitespaceTokenizer(nlp.vocab)
doc = nlp("This is easy.")
print([t.text for t in doc])
In a sentence containing hashtags, such as a tweet, spacy's tokenizer splits hashtags into two tokens:
import spacy
nlp = spacy.load('en')
doc = nlp(u'This is a #sentence.')
[t for t in doc]
output:
[This, is, a, #, sentence, .]
I'd like to have hashtags tokenized as follows, is that possible?
[This, is, a, #sentence, .]
I also tried several ways to prevent spaCy from splitting hashtags or words with hyphens like "cutting-edge". My experience is that merging tokens afterwards can be problematic, because the POS tagger and dependency parser have already used the wrong tokens for their decisions. Touching the infix, prefix, and suffix regexes is kind of error-prone/complex, because you don't want your changes to produce side effects.
The simplest way is indeed, as pointed out before, to modify the token_match function of the tokenizer. This is a re.match that identifies regular expressions which will not be split. Instead of importing the specific URL pattern, I'd rather extend whatever spaCy's default is.
import re
import spacy
from spacy.tokenizer import _get_regex_pattern

nlp = spacy.load('en')
# get default pattern for tokens that don't get split
re_token_match = _get_regex_pattern(nlp.Defaults.token_match)
# add your patterns (here: hashtags and in-word hyphens)
re_token_match = rf"({re_token_match}|#\w+|\w+-\w+)"
# overwrite token_match function of the tokenizer
nlp.tokenizer.token_match = re.compile(re_token_match).match
text = "#Pete: choose low-carb #food #eatsmart ;-) 😋👍"
doc = nlp(text)
This yields:
['#Pete', ':', 'choose', 'low-carb', '#food', '#eatsmart', ';-)', '😋', '👍']
This is more of an add-on to the great answer by @DhruvPathak, and a shameless copy from the GitHub thread linked below (and the even better answer there by @csvance). spaCy features the add_pipe method (since v2.0), meaning you can wrap @DhruvPathak's great answer in a function and conveniently add that step to your nlp processing pipeline, as below.
Citation starts here:
def hashtag_pipe(doc):
    merged_hashtag = False
    while True:
        for token_index, token in enumerate(doc):
            if token.text == '#':
                if token.head is not None:
                    start_index = token.idx
                    end_index = start_index + len(token.head.text) + 1
                    if doc.merge(start_index, end_index) is not None:
                        merged_hashtag = True
                        break
        if not merged_hashtag:
            break
        merged_hashtag = False
    return doc
nlp = spacy.load('en')
nlp.add_pipe(hashtag_pipe)
doc = nlp("twitter #hashtag")
assert len(doc) == 2
assert doc[0].text == 'twitter'
assert doc[1].text == '#hashtag'
Citation ends here; check out "How to add hashtags to the part of speech tagger #503" for the full thread.
PS: It's clear when reading the code, but for the copy & pasters: don't disable the parser :)
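Note that Doc.merge was removed in spaCy v3, so the cited snippet only runs on older versions. A rough, untested sketch of the same idea with the retokenizer and the v3 component API (not part of the linked thread) could look like this:
import spacy
from spacy.language import Language
@Language.component("hashtag_merger")
def hashtag_merger(doc):
    # collect non-overlapping ('#', following-token) spans, then merge them
    spans = []
    i = 0
    while i < len(doc) - 1:
        if doc[i].text == "#" and not doc[i + 1].is_space:
            spans.append(doc[i:i + 2])
            i += 2  # skip the token we just claimed, to avoid overlapping spans
        else:
            i += 1
    with doc.retokenize() as retokenizer:
        for span in spans:
            retokenizer.merge(span)
    return doc
nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("hashtag_merger")
doc = nlp("twitter #hashtag")
assert [t.text for t in doc] == ["twitter", "#hashtag"]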
You can do some pre- and post-processing string manipulation, which lets you bypass the '#'-based tokenization and is easy to implement. E.g.:
>>> import re
>>> import spacy
>>> nlp = spacy.load('en')
>>> sentence = u'This is my twitter update #MyTopic'
>>> parsed = nlp(sentence)
>>> [token.text for token in parsed]
[u'This', u'is', u'my', u'twitter', u'update', u'#', u'MyTopic']
>>> new_sentence = re.sub(r'#(\w+)',r'ZZZPLACEHOLDERZZZ\1',sentence)
>>> new_sentence
u'This is my twitter update ZZZPLACEHOLDERZZZMyTopic'
>>> parsed = nlp(new_sentence)
>>> [token.text for token in parsed]
[u'This', u'is', u'my', u'twitter', u'update', u'ZZZPLACEHOLDERZZZMyTopic']
>>> [x.replace(u'ZZZPLACEHOLDERZZZ','#') for x in [token.text for token in parsed]]
[u'This', u'is', u'my', u'twitter', u'update', u'#MyTopic']
You can try setting custom separators in spaCy's tokenizer.
I am not aware of methods to do exactly that.
UPDATE: You can use a regex to find the span of the token you want to keep as a single token, and retokenize using the span.merge method as mentioned here: https://spacy.io/docs/api/span#merge
Merge example:
>>> import spacy
>>> import re
>>> nlp = spacy.load('en')
>>> my_str = u'Tweet hashtags #MyHashOne #MyHashTwo'
>>> parsed = nlp(my_str)
>>> [(x.text,x.pos_) for x in parsed]
[(u'Tweet', u'PROPN'), (u'hashtags', u'NOUN'), (u'#', u'NOUN'), (u'MyHashOne', u'NOUN'), (u'#', u'NOUN'), (u'MyHashTwo', u'PROPN')]
>>> indexes = [m.span() for m in re.finditer('#\w+',my_str,flags=re.IGNORECASE)]
>>> indexes
[(15, 25), (26, 36)]
>>> for start,end in indexes:
... parsed.merge(start_idx=start,end_idx=end)
...
#MyHashOne
#MyHashTwo
>>> [(x.text,x.pos_) for x in parsed]
[(u'Tweet', u'PROPN'), (u'hashtags', u'NOUN'), (u'#MyHashOne', u'NOUN'), (u'#MyHashTwo', u'PROPN')]
>>>
I found this on github, which uses spaCy's Matcher:
from spacy.matcher import Matcher
matcher = Matcher(nlp.vocab)
matcher.add('HASHTAG', None, [{'ORTH': '#'}, {'IS_ASCII': True}])
doc = nlp('This is a #sentence. Here is another #hashtag. #The #End.')
matches = matcher(doc)
hashtags = []
for match_id, start, end in matches:
    hashtags.append(doc[start:end])
for span in hashtags:
    span.merge()
print([t.text for t in doc])
outputs:
['This', 'is', 'a', '#sentence', '.', 'Here', 'is', 'another', '#hashtag', '.', '#The', '#End', '.']
A list of hashtags is also available in the hashtags list:
print(hashtags)
output:
[#sentence, #hashtag, #The, #End]
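Span.merge has since been deprecated and removed (spaCy v3), so the merging step above can be rewritten with the retokenizer. A sketch, assuming the doc and hashtags variables from the snippet above and using filter_spans to drop any overlapping matches:
from spacy.util import filter_spans
with doc.retokenize() as retokenizer:
    for span in filter_spans(hashtags):
        retokenizer.merge(span)
print([t.text for t in doc])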
I spent quite a bit of time on this and figured I'd share what I came up with:
Subclassing the Tokenizer and adding the regex for hashtags to the default URL_PATTERN was the easiest solution for me; additionally, I added a custom extension to match on hashtags and identify them:
import re
import spacy
from spacy.language import Language
from spacy.tokenizer import Tokenizer
from spacy.tokens import Token
nlp = spacy.load('en_core_web_sm')
def create_tokenizer(nlp):
    # contains the regex to match all sorts of urls:
    from spacy.lang.tokenizer_exceptions import URL_PATTERN

    # spacy defaults: when the standard behaviour is required, they
    # need to be included when subclassing the tokenizer
    prefix_re = spacy.util.compile_prefix_regex(Language.Defaults.prefixes)
    infix_re = spacy.util.compile_infix_regex(Language.Defaults.infixes)
    suffix_re = spacy.util.compile_suffix_regex(Language.Defaults.suffixes)

    # extending the default url regex with regex for hashtags with "or" = |
    hashtag_pattern = r'''|^(#[\w_-]+)$'''
    url_and_hashtag = URL_PATTERN + hashtag_pattern
    url_and_hashtag_re = re.compile(url_and_hashtag)

    # set a custom extension to match if token is a hashtag
    hashtag_getter = lambda token: token.text.startswith('#')
    Token.set_extension('is_hashtag', getter=hashtag_getter)

    return Tokenizer(nlp.vocab, prefix_search=prefix_re.search,
                     suffix_search=suffix_re.search,
                     infix_finditer=infix_re.finditer,
                     token_match=url_and_hashtag_re.match
                     )
nlp.tokenizer = create_tokenizer(nlp)
doc = nlp("#spreadhappiness #smilemore so_great#good.com https://www.somedomain.com/foo")
for token in doc:
    print(token.text)
    if token._.is_hashtag:
        print("-> matches hashtag")
# returns: "#spreadhappiness -> matches hashtag #smilemore -> matches hashtag so_great#good.com https://www.somedomain.com/foo"