How to separate the prefix 'di' from words? - python

I want to separate the prefix "di" from words where it is written together with the letters that follow it.
sentence1 = "dipermudah diperlancar"
sentence2 = "di permudah di perlancar"
I expect output like this:
output1 = "di permudah di perlancar"
output2 = "di permudah di perlancar"

This expression might work to some extent:
(di)(\S+)
provided our data looks as simple as it does in the question. Otherwise, we would add more boundaries to the expression, as sketched below.
Test:
import re
regex = r"(di)(\S+)"
test_str = "dipermudah diperlancar"
subst = "\\1 \\2"
print(re.sub(regex, subst, test_str))
The expression is explained on the top-right panel of regex101.com, if you wish to explore, simplify, or modify it, and there you can also watch how it matches against some sample inputs.
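For instance, a minimal sketch of one such extra boundary, assuming you only want to split tokens that actually start with di (the leading \b is my addition, not part of the original answer):
import re

# Anchor the prefix to a word boundary so "di" only matches at the start of a token.
regex = r"\b(di)(\S+)"
test_str = "dipermudah diperlancar"
print(re.sub(regex, r"\1 \2", test_str))  # di permudah di perlancar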

Here is one way to do this using re.sub:
import re

sentence1 = "adi dipermudah diperlancar"
output = re.sub(r'(?<=\bdi)(?=\w)', ' ', sentence1)
print(output)
Output:
adi di permudah di perlancar
The idea here is to insert a space at every position that is immediately preceded by the prefix di (starting at a word boundary) and immediately followed by another word character.
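As a quick check (a small sketch, on the assumption that text which is already separated should pass through untouched, matching output2 in the question), the same substitution leaves sentence2 unchanged:
import re

sentence2 = "di permudah di perlancar"
# The lookahead (?=\w) fails after the standalone "di" because a space follows,
# so no extra space is inserted.
print(re.sub(r'(?<=\bdi)(?=\w)', ' ', sentence2))  # di permudah di perlancar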

Related

How can I look for specific bigrams in text example - python?

I am interested in finding how often (as a percentage) a set of words, given in n_grams, appears in a sentence.
import re

example_txt = ["order intake is strong for Q4"]

def find_ngrams(text):
    text = re.findall('[A-z]+', text)
    content = [w for w in text if w.lower() in n_grams]  # you can calculate %stopwords using "in"
    return round(float(len(content)) / float(len(text)), 5)

# The goal is for the above procedure to work on a pandas DataFrame, but for now let's use 'text' as an example.
# full_MD['n_grams'] = [find_ngrams(x) for x in list(full_MD.loc[:, 'text_no_stopwords'])]
Below are two examples. The first one works, the second doesn't.
n_grams= ['order']
res = [find_ngrams(x) for x in list(example_txt)]
print(res)
Output:
[0.16667]
n_grams= ['order intake']
res = [find_ngrams(x) for x in list(example_txt)]
print(res)
Output:
[0.0]
How can I make the find_ngrams() function process bigrams, so the last example from above works?
Edit: Any other ideas?
You can use spaCy's Matcher:
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)

# Add match ID "orderintake" with one pattern
# (spaCy 3 signature; in spaCy 2 this was matcher.add("orderintake", None, pattern))
pattern = [{"LOWER": "order"}, {"LOWER": "intake"}]
matcher.add("orderintake", [pattern])

doc = nlp("order intake is strong for Q4")
matches = matcher(doc)
print(len(matches))  # number of times the bi-gram appears in the text
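Continuing that snippet, if you want a ratio comparable to find_ngrams rather than a raw count, one option (normalising by the number of tokens in the doc is an assumption on my part, not something from the original answer) would be:
# Fraction of tokens that fall inside a matched bi-gram (hedged normalisation choice).
matched_tokens = sum(end - start for _, start, end in matches)
ratio = round(matched_tokens / len(doc), 5) if len(doc) else 0.0
print(ratio)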
Maybe you have already explored this option, but why not use a simple .count combined with len:
(example_txt[0].count(n_grams[0]) * len(n_grams[0])) / len(example_txt[0])
or, if you are not interested in counting the spaces, you can use the following:
(example_txt[0].count(n_grams[0]) * len(n_grams[0])) / len(example_txt[0].replace(' ', ''))
Of course you can use these in a list comprehension; this was just for demonstration purposes.
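Spelled out, that list comprehension might look like this (same character-ratio idea, applied to every sentence in example_txt):
example_txt = ["order intake is strong for Q4"]
n_grams = ['order intake']
# Fraction of characters in each sentence covered by occurrences of the n-gram.
res = [(txt.count(n_grams[0]) * len(n_grams[0])) / len(txt) for txt in example_txt]
print(res)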
The line
re.findall('[A-z]+', text)
returns
['order', 'intake', 'is', 'strong', 'for', 'Q']
so the two-word string 'order intake' can never pass the membership test in your list comprehension:
content = [w for w in text if w.lower() in n_grams]
If you want it to match, you'll need to join each bigram into one single string, as sketched below; the same joining idea generalizes to longer n-grams.
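A sketch of that suggestion (find_bigrams is a name I made up here, and dividing by the number of bigrams is my choice of denominator, not something from the original code):
import re

example_txt = ["order intake is strong for Q4"]
n_grams = ['order intake']

def find_bigrams(text):
    tokens = re.findall('[A-z]+', text)
    # Join each pair of adjacent tokens so 'order intake' can be compared as one string.
    bigrams = [' '.join(pair).lower() for pair in zip(tokens, tokens[1:])]
    content = [b for b in bigrams if b in n_grams]
    return round(float(len(content)) / float(len(bigrams)), 5) if bigrams else 0.0

print([find_bigrams(x) for x in example_txt])  # [0.2]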

Replacing the dots in a list of abbreviations?

I'm trying to remove the dots from a list of abbreviations so that they will not confuse the sentence tokenizer. This should be very straightforward, but I don't know why my code is not working.
Below is my code:
import re

abbrevs = [
    "No.", "U.S.", "Mses.", "B.S.", "B.A.", "D.C.", "B.Tech.", "Pte.", "Mr.", "O.E.M.",
    "I.R.S", "sq.", "Reg.", "S-K."
]

def replace_abbrev(abbrs, text):
    re_abbrs = [r"\b" + re.escape(a) + r"\b" for a in abbrs]
    abbr_no_dot = [a.replace(".", "") for a in abbrs]
    pattern_zip = zip(re_abbrs, abbr_no_dot)
    for p in pattern_zip:
        text = re.sub(p[0], p[1], text)
    return text

text = "Test No. U.S. Mses. B.S. Test"
text = replace_abbrev(abbrevs, text)
print(text)
Here is the result; nothing happened. What went wrong? Thanks.
Test No. U.S. Mses. B.S. Test
You need to use this:
re_abbrs = [r"\b" + re.escape(a) for a in abbrs]
There is no word boundary after the trailing "." of an abbreviation: a dot is not a word character and the next character is a space, so the trailing \b in your pattern never matches. Dropping it gives the correct output:
Test No US Mses BS Test
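Putting that fix back into the question's function, a sketch of the corrected version (reusing the abbrevs list defined in the question):
import re

def replace_abbrev(abbrs, text):
    # Keep only the leading \b; the trailing dot is not a word character,
    # so a trailing \b would never match before a following space.
    re_abbrs = [r"\b" + re.escape(a) for a in abbrs]
    abbr_no_dot = [a.replace(".", "") for a in abbrs]
    for pattern, replacement in zip(re_abbrs, abbr_no_dot):
        text = re.sub(pattern, replacement, text)
    return text

print(replace_abbrev(abbrevs, "Test No. U.S. Mses. B.S. Test"))  # Test No US Mses BS Test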
You could use map and operator.methodcaller; there is no need for re here, even though it's a great library.
from operator import methodcaller
' '.join(map(methodcaller('replace', '.', ''), abbrevs))
# 'No US Mses BS BA DC BTech Pte Mr OEM IRS sq Reg S-K'
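If the goal is to rewrite the text itself rather than the list, a hedged sketch pairing each abbreviation with its dot-free form (plain str.replace, so note there is no word-boundary handling here):
text = "Test No. U.S. Mses. B.S. Test"
for abbr in abbrevs:
    # Replace each dotted abbreviation with its dot-free form.
    text = text.replace(abbr, abbr.replace(".", ""))
print(text)  # Test No US Mses BS Test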

regex the text string in python and split into arrays

I need to split a text like:
//string
s = CS -135IntrotoComputingCS -154IntroToWonderLand...
into an array like:
inputarray[0]= CS -135 Intro to computing
inputarray[1]= CS -154 Intro to WonderLand
.
.
.
and so on;
I am trying something like this:
re.compile("[CS]+\s").split(s)
But it just doesn't split at all, even if I try something like
re.compile("[CS]").split(s)
Can anyone throw some light on this?
You may use findall with a lookahead regex like this:
>>> import re
>>> s = 'CS -135IntrotoComputingCS -154IntroToWonderLand'
>>> print(re.findall(r'.+?(?=CS|$)', s))
['CS -135IntrotoComputing', 'CS -154IntroToWonderLand']
Regex: .+?(?=CS|$) lazily matches one or more characters up to the position where the next CS starts, or to the end of the string.
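Since the question asks for a split, an alternative sketch (assuming Python 3.7+, where re.split can split on a zero-width lookahead, and assuming every course entry starts with the literal "CS "):
import re

s = 'CS -135IntrotoComputingCS -154IntroToWonderLand'
# Splitting on a lookahead keeps the "CS" delimiter attached to each chunk;
# the empty string produced by the split at position 0 is filtered out.
parts = [p for p in re.split(r'(?=CS\s)', s) if p]
print(parts)  # ['CS -135IntrotoComputing', 'CS -154IntroToWonderLand']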
Although findall is more straightforward, finditer can also be used here:
import re

s = 'CS -135IntrotoComputingCS -154IntroToWonderLand'
x = [i.start() for i in re.finditer('CS ', s)]  # starting positions of 'CS '
count = 0
l = []
while count + 1 < len(x):
    l.append(s[x[count]:x[count + 1]])
    count += 1
l.append(s[x[count]:])
print(l)  # ['CS -135IntrotoComputing', 'CS -154IntroToWonderLand']

Python regex: Match ALL consecutive capitalized words

Short question:
I have a string:
title="Announcing Elasticsearch.js For Node.js And The Browser"
I want to find all pairs of words where each word is properly capitalized.
So, expected output should be:
['Announcing Elasticsearch.js', 'Elasticsearch.js For', 'For Node.js', 'Node.js And', 'And The', 'The Browser']
What I have right now is this:
'[A-Z][a-z]+[\s-][A-Z][a-z.]*'
This gives me the output:
['Announcing Elasticsearch.js', 'For Node.js', 'And The']
How can I change my regex to give desired output?
You can use this:
#!/usr/bin/python
import re
title="Announcing Elasticsearch.js For Node.js And The Browser TEst"
pattern = r'(?=((?<![A-Za-z.])[A-Z][a-z.]*[\s-][A-Z][a-z.]*))'
print(re.findall(pattern, title))
A "normal" pattern can't match overlapping substrings, because each character is consumed at most once. However, a lookahead (?=..) (i.e. "followed by") is only a zero-width check and consumes nothing, so the same part of the string can be inspected several times. Thus, if you put a capturing group inside the lookahead, you can capture overlapping substrings.
There's probably a more efficient way to do this, but you could use a regex like this:
(\b[A-Z][a-z.-]+\b)
Then iterate through the matches, testing each adjacent pair against the regex (^[A-Z][a-z.-]+$) to ensure that both the current match and the next one are properly capitalized words.
Working example:
import re

title = "Announcing Elasticsearch.js For Node.js And The Browser"
matchlist = []
m = re.findall(r"(\b[A-Z][a-z.-]+\b)", title)
if m:
    for i in range(len(m)):
        if re.match(r"(^[A-Z][a-z.-]+$)", m[i - 1]) and re.match(r"(^[A-Z][a-z.-]+$)", m[i]):
            matchlist.append([m[i - 1], m[i]])
print(matchlist)
Output:
[
['Browser', 'Announcing'],
['Announcing', 'Elasticsearch.js'],
['Elasticsearch.js', 'For'],
['For', 'Node.js'],
['Node.js', 'And'],
['And', 'The'],
['The', 'Browser']
]
If your Python code at the moment is this:
title = "Announcing Elasticsearch.js For Node.js And The Browser"
results = re.findall(r"[A-Z][a-z]+[\s-][A-Z][a-z.]*", title)
then your program is skipping every other pair, because each match consumes both of its words. An easy solution would be to re-run the search after skipping the first word, like this:
m = re.match(r"[A-Z][a-z]+[\s-]", title)
title_without_first_word = title[m.end():]
results2 = re.findall(r"[A-Z][a-z]+[\s-][A-Z][a-z.]*", title_without_first_word)
Now just combine results and results2.

Python regular expressions for simple questions

I wish to let the user ask a simple question, so I can extract a few standard elements from the string entered.
Examples of strings to be entered:
Who is the director of The Dark Knight?
What is the capital of China?
Who is the president of USA?
As you can see, sometimes it is "Who" and sometimes it is "What", so I'm most likely looking for the "|" operator. I need to extract two things from these strings: the word after "the" and before "of", as well as the words after "of".
For example:
1st sentence: I wish to extract "director" and place it in a variable called Relation, and extract "The Dark Knight" and place it in a variable called Concept.
Desired output:
RelationVar = "director"
ConceptVar = "The Dark Knight"
2nd sentence: I wish to extract "capital" and assign it to the variable Relation, and extract "China" and place it in the variable Concept.
RelationVar = "capital"
ConceptVar = "China"
Any ideas on how to use the re.match function? or any other method?
You're correct that you want to use | for who/what. The rest of the regex is very simple; the group names are there for clarity, but you could use r"(?:Who|What) is the (.+) of (.+)[?]" instead.
>>> r = r"(?:Who|What) is the (?P<RelationVar>.+) of (?P<ConceptVar>.+)[?]"
>>> l = ['Who is the director of The Dark Knight?', 'What is the capital of China?', 'Who is the president of USA?']
>>> [re.match(r, i).groupdict() for i in l]
[{'RelationVar': 'director', 'ConceptVar': 'The Dark Knight'}, {'RelationVar': 'capital', 'ConceptVar': 'China'}, {'RelationVar': 'president', 'ConceptVar': 'USA'}]
Change (?:Who|What) to (Who|What) if you also want to capture whether the question uses who or what.
Actually extracting the data and assigning it to variables is very simple:
>>> m = re.match(r, "What is the capital of China?")
>>> d = m.groupdict()
>>> relation_var = d["RelationVar"]
>>> concept_var = d["ConceptVar"]
>>> relation_var
'capital'
>>> concept_var
'China'
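One small caveat worth adding: re.match returns None when a sentence doesn't fit the Who/What template, so it's worth guarding before calling groupdict (the non-matching sentence below is made up for illustration):
>>> m = re.match(r, "Name the director of The Dark Knight")
>>> m is None
True
>>> m.groupdict() if m else {}
{}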
Here is the script; you can simply use | to match either alternative inside the parentheses.
This worked fine for me:
import re

# Renamed from "list" to avoid shadowing the built-in.
sentences = ['Who is the director of The Dark Knight?', 'What is the capital of China?', 'Who is the president of USA?']
for string in sentences:
    a = re.compile(r'(What|Who) is the (.+) of (.+)')
    nodes = a.findall(string)
    Relation = nodes[0][1]  # index 0 of the tuple is Who/What, so the relation is index 1
    Concept = nodes[0][2]
    print(Relation)
    print(Concept)
    print('----')
