Create a Reg Exp to search for __word__? - python

In a program I'm making in Python, I want all words formatted like __word__ to stand out. How could I search for words like these using a regex?

Perhaps something like
\b__(\S+)__\b
>>> import re
>>> re.findall(r"\b__(\S+)__\b","Here __is__ a __test__ sentence")
['is', 'test']
>>> re.findall(r"\b__(\S+)__\b","__Here__ is a test __sentence__")
['Here', 'sentence']
>>> re.findall(r"\b__(\S+)__\b","__Here's__ a test __sentence__")
["Here's", 'sentence']
Or you can put tags around the word like this:
>>> print re.sub(r"\b(__)(\S+)(__)\b", r"<b>\2</b>", "__Here__ is a test __sentence__")
<b>Here</b> is a test <b>sentence</b>
If you need more fine-grained control over the legal word characters, it's best to be explicit:
\b__([a-zA-Z0-9_':]+)__\b ### count "'" and ":" as part of words
>>> re.findall(r"\b__([a-zA-Z0-9_']+)__\b","__Here's__ a test __sentence:__")
["Here's"]
>>> re.findall(r"\b__([a-zA-Z0-9_':]+)__\b","__Here's__ a test __sentence:__")
["Here's", 'sentence:']

Take a squizz here: http://docs.python.org/library/re.html
That should show you syntax and examples from which you can build a check for word(s) pre- and post-pended with 2 underscores.
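For instance, a minimal sketch along those lines (the test sentence is made up):
>>> import re
>>> double_underscored = re.compile(r"__(\w+)__")
>>> double_underscored.findall("some __marked__ words __here__")
['marked', 'here']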

The simplest regex for this would be
__.+__
If you want access to the word itself from your code, you should use
__(.+)__
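One caveat with these: .+ is greedy, so when a line contains more than one marked word it will run from the first __ to the last. The non-greedy __(.+?)__ keeps the matches separate:
>>> import re
>>> re.findall(r"__(.+)__", "a __x__ b __y__")
['x__ b __y']
>>> re.findall(r"__(.+?)__", "a __x__ b __y__")
['x', 'y']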

This will give you a list with all such words
>>> import re
>>> m = re.findall("(__\w+__)", "What __word__ you search __for__")
>>> print m
['__word__', '__for__']

\b(__\w+__)\b
\b word boundary
\w+ one or more word characters - [a-zA-Z0-9_]
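A quick check of what the \b anchors buy you (the test string is just for illustration):
>>> import re
>>> re.findall(r"\b(__\w+__)\b", "a real __word__ but not abc__glued__def")
['__word__']
>>> re.findall(r"(__\w+__)", "a real __word__ but not abc__glued__def")
['__word__', '__glued__']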

simple string functions. no regex
>>> mystring="blah __word__ blah __word2__"
>>> for item in mystring.split():
...     if item.startswith("__") and item.endswith("__"):
...         print item
...
__word__
__word2__
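One caveat with the split() approach: punctuation glued to the marker ("__word__," or "__word2__.") makes endswith("__") fail. Stripping trailing punctuation first is a rough workaround:
>>> mystring = "blah __word__, blah __word2__."
>>> for item in mystring.split():
...     item = item.strip('.,;:!?')
...     if item.startswith("__") and item.endswith("__"):
...         print item
...
__word__
__word2__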

Related

Python re.findall() Capitalized Words including Apostrophes

I'm working through a regex tutorial (the part that said \w+ matches words), and I'm stuck on this exercise: "Find all capitalized words in my_string and print the result", where some of the words have apostrophes.
Original String:
In [1]: my_string
Out[1]: "Let's write RegEx! Won't that be fun? I sure think so. Can you
find 4 sentences? Or perhaps, all 19 words?"
Current Attempt:
# Import the regex module
import re
# Find all capitalized words in my_string and print the result
capitalized_words = r"((?:[A-Z][a-z]+ ?)+)"
print(re.findall(capitalized_words, my_string))
Current Result:
['Let', 'RegEx', 'Won', 'Can ', 'Or ']
What I think the desired outcome is:
['Let's', 'RegEx', 'Won't', 'Can't', 'Or']
How do you go from r"((?:[A-Z][a-z]+ ?)+)" to also selecting the 's and 't at the end of Let's, Won't and Can't, when not everything we're trying to catch is expected to have an apostrophe?
Just add an apostrophe to the second bracket group:
capitalized_words = r"((?:[A-Z][a-z']+)+)"
I suppose you can add a little apostrophe in the group [a-z'].
So it will be like ((?:[A-Z][a-z']+ ?)+)
Hope that works
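For what it's worth, a quick check of the apostrophe-aware group against the question's my_string (using the variant without the optional space):
>>> import re
>>> re.findall(r"((?:[A-Z][a-z']+)+)", my_string)
["Let's", 'RegEx', "Won't", 'Can', 'Or']
(There is no "Can't" to find because the text actually reads "Can you".)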
While you do have your answer, I'd like to provide a more "real-world" solution using nltk:
from nltk import sent_tokenize, regexp_tokenize
my_string = """Let's write RegEx! Won't that be fun? I sure think so. Can you
find 4 sentences? Or perhaps, all 19 words?"""
sent = sent_tokenize(my_string)
print(len(sent))
# 5
pattern = r"\b(?i)[a-z][\w']*"
print(len(regexp_tokenize(my_string, pattern)))
# 19
And imo, these are 5 sentences, not 4 unless there's some special requirement for a sentence in place.

Difference between re.search() and re.findall()

The following code is very strange:
>>> words = "4324324 blahblah"
>>> print re.findall(r'(\s)\w+', words)
[' ']
>>> print re.search(r'(\s)\w+', words).group()
blahblah
The () operator seems to behave poorly with findall. Why is this? I need it for a csv file.
Edit for clarity: I want to display blahblah using findall.
I discovered that re.findall(r'\s(\w+)', words) does what I want, but have no idea why findall treats groups in this way.
One character off:
>>> print re.search(r'(\s)\w+', words).groups()
(' ',)
>>> re.search(r'(\s)\w+', words).group(1)
' '
When the pattern contains capturing groups, findall returns a list of the captured groups instead of the whole match. You're getting a space back because that's what you capture. Stop capturing, and it works fine:
>>> print re.findall(r'\s\w+', words)
[' blahblah']
Use the csv module
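If the data really is a comma-separated file, the csv module does the field splitting for you; a minimal sketch (the file name and layout here are made up):
import csv

with open('data.csv', 'rb') as f:   # Python 2: open csv files in binary mode
    for row in csv.reader(f):
        print row                   # each row comes back as a list of fields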
If you prefer to keep the capturing groups in your regex, but you still want to find the entire contents of each match instead of the groups, you can use the following:
[m.group() for m in re.finditer(r'(\s)\w+', words)]
For example:
>>> [m.group() for m in re.finditer(r'(\s)\w+', '4324324 blahblah')]
[' blahblah']

python regular expression to find something in between two strings or phrases

How can I use regex in python to capture something between two strings or phrases, and removing everything else on the line?
For example, the following is a protein sequence preceded by a one-line header. How can I sift off "CG33289-PC" from the header below, based on the stipulation that it occurs after the phrase "FlyBase_Annotation_IDs:" and before the next comma ","?
I need to substitute the header with this simplified result "CG33289-PC" and not destroy the protein sequence (found below the header-line in all caps).
This is what each protein sequence entry looks like - a header followed by a sequence:
>FBpp0293870 type=protein;loc=3L:join(21527760..21527913,21527977..21528076,21528130..21528390,21528443..21528653,21528712..21529192,21529254..21529264); ID=FBpp0293870; name=CG33289-PC; parent=FBgn0053289,FBtr0305327; dbxref=FlyBase:FBpp0293870,FlyBase_Annotation_IDs:CG33289-PC; MD5=478485a27487608aa2b6c35d39a3295c; length=405; release=r5.45; species=Dmel;
MEMLKYVISDNNYSWWIKLYFAIIFALVLFVAVNLAVGIYNKWDSTPVII
GISSKMTPIDQIPFPTITVCNMNQAKKSKVEHLMPGSIRYAMLQKTCYKE
SNFSQYMDTQHRNETFSNFILDVSEKCADLIVSCIFHQQRIPCTDIFRET
FVDEGLCCIFNVLHPYYLYKFKSPYIRDFTSSDRFADIAVDWDPISGYPQ
RLPSSYYPRPGVGVGTSMGLQIVLNGHVDDYFCSSTNGQGFKILLYNPID
QPRMKESGLPVMIGHQTSFRIIARNVEATPSIRNIHRTKRQCIFSDEQEL
LFYRYYTRRNCEAECDSMFFLRLCSCIPYYLPLIYPNASVCDVFHFECLN
RAESQIFDLQSSQCKEFCLTSCHDLIFFPDAFSTPFSQKDVKAQTNYLTN
FSRAV
This is the desired output:
CG33289-PC
MEMLKYVISDNNYSWWIKLYFAIIFALVLFVAVNLAVGIYNKWDSTPVII
GISSKMTPIDQIPFPTITVCNMNQAKKSKVEHLMPGSIRYAMLQKTCYKE
SNFSQYMDTQHRNETFSNFILDVSEKCADLIVSCIFHQQRIPCTDIFRET
FVDEGLCCIFNVLHPYYLYKFKSPYIRDFTSSDRFADIAVDWDPISGYPQ
RLPSSYYPRPGVGVGTSMGLQIVLNGHVDDYFCSSTNGQGFKILLYNPID
QPRMKESGLPVMIGHQTSFRIIARNVEATPSIRNIHRTKRQCIFSDEQEL
LFYRYYTRRNCEAECDSMFFLRLCSCIPYYLPLIYPNASVCDVFHFECLN
RAESQIFDLQSSQCKEFCLTSCHDLIFFPDAFSTPFSQKDVKAQTNYLTN
FSRAV
Using regexps:
>>> s = """>FBpp0293870 type=protein;loc=3L:join(21527760..21527913,21527977..21528076,21528130..21528390,21528443..21528653,21528712..21529192,21529254..21529264); ID=FBpp0293870; name=CG33289-PC; parent=FBgn0053289,FBtr0305327; dbxref=FlyBase:FBpp0293870,FlyBase_Annotation_IDs:CG33289-PC; MD5=478485a27487608aa2b6c35d39a3295c; length=405; release=r5.45; species=Dmel; MEMLKYVISDNNYSWWIKLYFAIIFALVLFVAVNLAVGIYNKWDSTPVII
GISSKMTPIDQIPFPTITVCNMNQAKKSKVEHLMPGSIRYAMLQKTCYKE
SNFSQYMDTQHRNETFSNFILDVSEKCADLIVSCIFHQQRIPCTDIFRET
FVDEGLCCIFNVLHPYYLYKFKSPYIRDFTSSDRFADIAVDWDPISGYPQ
RLPSSYYPRPGVGVGTSMGLQIVLNGHVDDYFCSSTNGQGFKILLYNPID
QPRMKESGLPVMIGHQTSFRIIARNVEATPSIRNIHRTKRQCIFSDEQEL
LFYRYYTRRNCEAECDSMFFLRLCSCIPYYLPLIYPNASVCDVFHFECLN
RAESQIFDLQSSQCKEFCLTSCHDLIFFPDAFSTPFSQKDVKAQTNYLTN
FSRAV"""
>>> import re
>>> print re.sub(r'.*FlyBase_Annotation_IDs:([\w-]+).*;', r'\1\n', s)
CG33289-PC
MEMLKYVISDNNYSWWIKLYFAIIFALVLFVAVNLAVGIYNKWDSTPVII
GISSKMTPIDQIPFPTITVCNMNQAKKSKVEHLMPGSIRYAMLQKTCYKE
SNFSQYMDTQHRNETFSNFILDVSEKCADLIVSCIFHQQRIPCTDIFRET
FVDEGLCCIFNVLHPYYLYKFKSPYIRDFTSSDRFADIAVDWDPISGYPQ
RLPSSYYPRPGVGVGTSMGLQIVLNGHVDDYFCSSTNGQGFKILLYNPID
QPRMKESGLPVMIGHQTSFRIIARNVEATPSIRNIHRTKRQCIFSDEQEL
LFYRYYTRRNCEAECDSMFFLRLCSCIPYYLPLIYPNASVCDVFHFECLN
RAESQIFDLQSSQCKEFCLTSCHDLIFFPDAFSTPFSQKDVKAQTNYLTN
FSRAV
>>>
Not an elegant solution, but this should work for you:
>>> fly = 'FlyBase_Annotation_IDs'
>>> repl = 'CG33289-PC'
>>> part1, part2 = protein.split(fly)
>>> part2 = part2.replace(repl, "FooBar")
>>> protein = fly.join([part1, part2])
assuming FlyBase_Annotation_IDs can only appear once in the data.
I'm not sure about the format of the file, but this regex will capture the data in your example:
"FlyBase_Annotation_IDs:([A-Z0-9a-z-]*);"
Use the findall function to get the match.
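For example, a minimal check against a fragment of the header above:
>>> import re
>>> header = "dbxref=FlyBase:FBpp0293870,FlyBase_Annotation_IDs:CG33289-PC; MD5=478485a27487608aa2b6c35d39a3295c;"
>>> re.findall(r"FlyBase_Annotation_IDs:([A-Z0-9a-z-]*);", header)
['CG33289-PC']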
Assuming there is a newline after the header:
>>> import re
>>> protein = "..."
>>> r = re.compile(r"^.*FlyBase_Annotation_IDs:([A-Z0-9a-z-]*);.*$", re.MULTILINE)
>>> r.sub(r"\1", protein)
The group ([A-Z0-9a-z-]*) in the regular expression captures any run of alphanumeric characters and dashes. If the IDs can contain other characters, just add them.

find a list of keywords from a text string and find inexact matches

I have a list of keywords I'm trying to find in a text string. The exact matches work fine, but is anyone aware of a library that could help with approximate matches? So, for instance, if the list of words I provide is
["hello", "bye"]
I'd like it to match if the text string has hlelo, to a certain degree of "closeness".
Any recommendations?
Here's what I would do. First, define a string to search in and remove extraneous characters:
>>> tosearch = "This is a text string where I typed hlelo but I meant to type hello."
>>> import string
>>> exclude = set(string.punctuation)
>>> tosearch = ''.join(ch for ch in tosearch if ch not in exclude)
>>> tosearch
'This is a text string where I typed hlelo but I meant to type hello'
>>> words = set(tosearch.split(" "))
Next, you can use the difflib library to find close matches to a given word:
>>> import difflib
>>> difflib.get_close_matches('hello', words)
['hello', 'hlelo']
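get_close_matches also takes n (how many matches to return, default 3) and cutoff (a 0-1 similarity threshold, default 0.6), so you can tune how strict the "closeness" is:
>>> difflib.get_close_matches('hello', words, n=3, cutoff=0.6)
['hello', 'hlelo']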

splitting merged words in python

I am working with a text where all "\n"s have been deleted (which merges two words into one, like "I like bananasAnd this is a new line.And another one.") What I would like to do now is tell Python to look for combinations of a small letter followed by capital letter/punctuation followed by capital letter and insert a whitespace.
I thought this would be easy with regular expressions, but it is not - I couldn't find an "insert" function or anything, and the string methods don't seem to help either. How do I do this?
Any help would be greatly appreciated, I am despairing over here...
Thanks, patrick
Try the following:
re.sub(r"([a-z\.!?])([A-Z])", r"\1 \2", your_string)
For example:
import re
lines = "I like bananasAnd this is a new line.And another one."
print re.sub(r"([a-z\.!?])([A-Z])", r"\1 \2", lines)
# I like bananas And this is a new line. And another one.
If you want to insert a newline instead of a space, change the replacement to r"\1\n\2".
Using re.sub you should be able to make a pattern that grabs a lowercase and an uppercase letter and substitutes them for the same two letters with a newline in between:
import re
re.sub(r'([a-z][.?]?)([A-Z])', '\\1\n\\2', mystring)
You're looking for the sub function. See http://docs.python.org/library/re.html for documentation.
Hmm, interesting. You can use regular expressions to replace text with the sub() function:
>>> import re
>>> string = 'fooBar'
>>> re.sub(r'([a-z][.!?]*)([A-Z])', r'\1 \2', string)
'foo Bar'
If you really don't have any caps except at the beginning of a sentence, it will probably be easiest to just loop through the string.
>>> import string
>>> s = "a word endsA new sentence"
>>> lastend = 0
>>> sentences = list()
>>> for i in range(0, len(s)):
...     if s[i] in string.uppercase:
...         sentences.append(s[lastend:i])
...         lastend = i
...
>>> sentences.append(s[lastend:])
>>> print sentences
['a word ends', 'A new sentence']
Here's another approach, which avoids regular expressions and does not use any imported libraries, just built-ins...
s = "I like bananasAnd this is a new line.And another one."
with_whitespace = ''
last_was_upper = True
for c in s:
    if c.isupper():
        if not last_was_upper:
            with_whitespace += ' '
        last_was_upper = True
    else:
        last_was_upper = False
    with_whitespace += c
print with_whitespace
Yields:
I like bananas And this is a new line. And another one.
