Struggling with Regex for adjacent letters differing by case - python

I am looking to be able to recursively remove adjacent letters in a string that differ only in their case, e.g. if s = 'AaBbccDd' I would want to remove 'Aa', 'Bb' and 'Dd' but leave 'cc'.
I can do this recursively using lists:
I think it ought to be possible with a regex, but I am struggling:
with test string 'fffAaaABbe' the answer should be 'fffe' but the regex I am using gives 'fe'
import re
def test(line):
    res = re.compile(r'(.)\1{1}', re.IGNORECASE)
    #print(res.search(line))
    while res.search(line):
        line = res.sub('', line, 1)
    print(line)
The way that works is:
def test(line):
    result = ''
    chr = list(line)
    cnt = 0
    i = len(chr) - 1
    while i > 0:
        if ord(chr[i]) == ord(chr[i - 1]) + 32 or ord(chr[i]) == ord(chr[i - 1]) - 32:
            cnt += 1
            chr.pop(i)
            chr.pop(i - 1)
            i -= 2
        else:
            i -= 1
    if cnt > 0:  # repeat until we can't find any duplicates
        return test(''.join(chr))
    result = ''.join(chr)
    print(result)
Is it possible to do this using a regex?

re.IGNORECASE is not the way to solve this problem, as it will treat aa, Aa, aA and AA the same way. Technically it is possible using re.sub, in the following way:
import re
txt = 'fffAaaABbe'
after_sub = re.sub(r'Aa|aA|Bb|bB|Cc|cC|Dd|dD|Ee|eE|Ff|fF|Gg|gG|Hh|hH|Ii|iI|Jj|jJ|Kk|kK|Ll|lL|Mm|mM|Nn|nN|Oo|oO|Pp|pP|Qq|qQ|Rr|rR|Ss|sS|Tt|tT|Uu|uU|Vv|vV|Ww|wW|Xx|xX|Yy|yY|Zz|zZ', '', txt)
print(after_sub) # fffe
Note that I explicitly listed all possible letter pairs, because as far as I know there is no way to express "letter with inverted case" in an re pattern. Maybe another user will be able to provide a more concise re-based solution.
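One possible middle ground (a sketch, not from the original answers): keep the question's simple (.)\1 pattern with re.IGNORECASE, but verify in Python that the two matched characters actually differ in case before removing them, stepping past same-case pairs such as 'cc'.
import re
pair = re.compile(r'(.)\1', re.IGNORECASE)
def reduce_line(line):
    pos = 0
    while True:
        m = pair.search(line, pos)
        if m is None:
            return line
        a, b = m.group()
        if a != b:  # same letter, different case -> remove the pair
            line = line[:m.start()] + line[m.end():]
            pos = max(m.start() - 1, 0)  # re-check the newly adjacent characters
        else:
            pos = m.start() + 1  # same-case pair such as 'cc' -> keep it
print(reduce_line('fffAaaABbe'))  # fffe
print(reduce_line('AaBbccDd'))    # cc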

I suggest a different approach which uses groupby to group adjacent similar letters:
from itertools import groupby
def test(line):
    res = []
    for k, g in groupby(line, key=lambda x: x.lower()):
        g = list(g)
        if all(x == x.lower() for x in g):
            res.append(''.join(g))
    print(''.join(res))
Sample run:
>>> test('AaBbccDd')
cc
>>> test('fffAaaABbe')
fffe

r'(.)\1{1}' is wrong because it will match any character that is repeated twice, including non-letter characters. If you want to stick to letters, you can't use this.
However, even if we restrict it to letters with r'([A-Za-z])\1{1}', this would still be bad: with re.IGNORECASE it matches any letter repeated twice, so it also catches xx and XX -- and you don't want to match consecutive characters of the same case, as you said in the original question.
It just so happens that there is no short-hand to do this conveniently, but it is still possible. You could also just write a small function to turn it into a short-hand.
Building on @Daweo's answer, you can generate the regex pattern needed to match pairs of the same letter with non-matching case, to get the final pattern of aA|Aa|bB|Bb|cC|Cc|dD|Dd|eE|Ee|fF|Ff|gG|Gg|hH|Hh|iI|Ii|jJ|Jj|kK|Kk|lL|Ll|mM|Mm|nN|Nn|oO|Oo|pP|Pp|qQ|Qq|rR|Rr|sS|Ss|tT|Tt|uU|Uu|vV|Vv|wW|Ww|xX|Xx|yY|Yy|zZ|Zz:
import re
import string
def consecutiveLettersNonMatchingCase():
    # Get all 'xX|Xx' alternatives with a list comprehension
    # and join them with '|'
    return '|'.join(['{0}{1}|{1}{0}'.format(s, t)
                     # Iterate through the lowercase/uppercase characters
                     # in lock-step
                     for s, t in zip(string.ascii_lowercase,
                                     string.ascii_uppercase)])
def test(line):
    res = re.compile(consecutiveLettersNonMatchingCase())
    print(res.search(line))
    while res.search(line):
        line = res.sub('', line, 1)
    print(line)
print(consecutiveLettersNonMatchingCase())

Related

Python detect if string contains specific length substring

I am given a string and need to find the first substring in it that consists of a single repeated character, given the substring's length.
For example, given the string 'abaadddefggg':
for length = 3 I should get the output 'ddd'
for length = 2 I should get 'aa', and so on.
Any ideas?
You could iterate over the string's indices and produce all the substrings of the given length. If any of these substrings is made up of a single character, that's the substring you're looking for:
def sequence(s, length):
    for i in range(len(s) - length + 1):
        candidate = s[i:i+length]
        if len(set(candidate)) == 1:
            return candidate
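A quick usage check (not part of the original answer):
print(sequence('abaadddefggg', 3))  # ddd
print(sequence('abaadddefggg', 2))  # aa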
One approach in Python 3.8+ using itertools.groupby combined with the walrus operator:
from itertools import groupby
string = 'abaadddefggg'
k = 3
res = next(s for _, group in groupby(string) if len(s := "".join(group)) == k)
print(res)
Output
ddd
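Note (an aside, not from the original answer): if no group of length k exists, the bare next(...) raises StopIteration; supplying a default avoids that:
from itertools import groupby
string = 'abaadddefggg'
k = 4  # no run of length 4 exists in this string
res = next((s for _, group in groupby(string) if len(s := "".join(group)) == k), None)
print(res)  # None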
An alternative general approach:
from itertools import groupby
def find_substring(string, k):
    for _, group in groupby(string):
        s = "".join(group)
        if len(s) == k:
            return s
res = find_substring('abaadddefggg', 3)
print(res)

Python: compare a string to a reference string to see what percentage of the letters are the same

I'm very new to python and I'm trying to work with strings.
I have some data with peptides, for example the test string KGSLADEE. I want to write a function which compares the test string to the reference string AGSTQKP to see what percentage of the letters in the test string are also present in the reference string. How can I do this? When looking online I can only find code for exact string matches.
For this example:
(1*K) + (1*G) + (1*S) + (1*A) = 4 (letters which are the same)
Divide by 8 (total number of letters in the test string)
(4/8) * 100 = 50%
This yields the same results as the answer from Hoxha Alban, but I find this one a bit easier to read. It uses collections.Counter (see here: https://pymotw.com/2/collections/counter.html)
from collections import Counter
def f(test, ref):
    intersection = Counter(test) & Counter(ref)
    return len(list(intersection.elements())) / len(test) * 100
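A quick check against the question's example (not part of the original answer):
print(f('KGSLADEE', 'AGSTQKP'))  # 50.0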
Are you searching for something like this?
def f(test, ref):
    d = dict()
    for c in ref:
        if c not in d:
            d[c] = min(ref.count(c), test.count(c))
    return sum(d.values()) / len(test) * 100
f('KGSLADEE', 'AGSTQKP') # 50%
f('hello', 'h') # 20%
f('abc', 'cba') # 100%
f('a', 'aaa') # 100%
f('aaa', 'a') # 33.333%
You will need to loop through every letter in your test string, check whether that letter appears in your reference string, and count the matches.
You can then use this count to calculate your percentage.
word1 = 'KGSLADEE'
word2 = 'AGSTQKP'
same = 0
for letter1 in word1:
    for letter2 in word2:
        if letter1 == letter2:
            same += 1
            break
print(same/len(word1)*100)
This is not a solution for all situations, but you can expand on it.
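For instance (a hypothetical check, not from the original answer), every repeated letter in the test string is counted as long as it appears anywhere in the reference, so this loop and the Counter-based f above can disagree:
word1 = 'aaa'
word2 = 'a'
same = 0
for letter1 in word1:
    for letter2 in word2:
        if letter1 == letter2:
            same += 1
            break
print(same/len(word1)*100)  # 100.0, whereas f('aaa', 'a') above gives ~33.3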

Python : Convert Integers into a Count (i.e. 3 --> 1,2,3)

This might be more information than necessary to explain my question, but I am trying to combine 2 scripts (I wrote for other uses) together to do the following.
TargetString (input_file) 4FOO 2BAR
Result (output_file) 1FOO 2FOO 3FOO 4FOO 1BAR 2BAR
My first script finds the pattern and copies to file_2
pattern = r"\d[A-Za-z]{3}"
matches = re.findall(pattern, input_file.read())
f1.write('\n'.join(matches))
My second script opens the output_file and, using re.sub, replaces and alters the target string(s) using capturing groups and back-references. But I am stuck here on how to turn e.g. 3 into 1 2 3.
Any ideas?
This simple example doesn't need regular expressions, but if you want to use re anyway, here's an example (note: there is a minor error in your original pattern, it should be A-Z, not A-A):
text_input = '4FOO 2BAR'
import re
matches = re.findall(r"(\d)([A-Za-z]{3})", text_input)
for (count, what) in matches:
    for i in range(1, int(count)+1):
        print(f'{i}{what}', end=' ')
print()
Prints:
1FOO 2FOO 3FOO 4FOO 1BAR 2BAR
Note: If you want to support multiple digits, you can use (\d+) - note the + sign.
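A sketch of that multi-digit variant (assuming an input such as '4FOO 10BAR'):
import re
matches = re.findall(r"(\d+)([A-Za-z]{3})", '4FOO 10BAR')
for count, what in matches:
    print(' '.join(f'{i}{what}' for i in range(1, int(count) + 1)))
# prints '1FOO 2FOO 3FOO 4FOO' and then '1BAR 2BAR ... 10BAR'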
Assuming your numbers are between 1 and 9, without regex, you can use a list comprehension with f-strings (Python 3.6+):
L = ['4FOO', '2BAR']
res = [f'{j}{i[1:]}' for i in L for j in range(1, int(i[0])+1)]
['1FOO', '2FOO', '3FOO', '4FOO', '1BAR', '2BAR']
Reading and writing CSV files is covered in other questions.
More generalised, to account for numbers greater than 9, you can use itertools.groupby:
from itertools import groupby
L = ['4FOO', '10BAR']
def make_var(x, int_flag):
    return int(''.join(x)) if int_flag else ''.join(x)
vals = ((make_var(b, a) for a, b in groupby(i, str.isdigit)) for i in L)
res = [f'{j}{k}' for num, k in vals for j in range(1, num+1)]
print(res)
['1FOO', '2FOO', '3FOO', '4FOO', '1BAR', '2BAR', '3BAR', '4BAR',
'5BAR', '6BAR', '7BAR', '8BAR', '9BAR', '10BAR']

Splitting a string before the nth occurrence of a character [duplicate]

Is there a Python-way to split a string after the nth occurrence of a given delimiter?
Given a string:
'20_231_myString_234'
It should be split into (with the delimiter being '_', after its second occurrence):
['20_231', 'myString_234']
Or is the only way to accomplish this to count, split and join?
>>> text = '20_231_myString_234'
>>> n = 2
>>> groups = text.split('_')
>>> '_'.join(groups[:n]), '_'.join(groups[n:])
('20_231', 'myString_234')
Seems like this is the most readable way; the alternative is regex.
Using re to get a regex of the form ^((?:[^_]*_){n-1}[^_]*)_(.*) where n is a variable:
import re
n = 2
s = '20_231_myString_234'
m = re.match(r'^((?:[^_]*_){%d}[^_]*)_(.*)' % (n-1), s)
if m: print(m.groups())
or have a nice function:
import re
def nthofchar(s, c, n):
    regex = r'^((?:[^%c]*%c){%d}[^%c]*)%c(.*)' % (c, c, n-1, c, c)
    l = ()
    m = re.match(regex, s)
    if m: l = m.groups()
    return l
s = '20_231_myString_234'
print(nthofchar(s, '_', 2))
Or without regexes, using iterative find:
def nth_split(s, delim, n):
    p, c = -1, 0
    while c < n:
        p = s.index(delim, p + 1)
        c += 1
    return s[:p], s[p + 1:]
s1, s2 = nth_split('20_231_myString_234', '_', 2)
print(s1, ":", s2)
I like this solution because it only searches for the literal delimiter (no complex regex pattern) and can easily be adapted to another "nth" or delimiter.
import re
string = "20_231_myString_234"
occur = 2  # at which occurrence you want to split
indices = [x.start() for x in re.finditer("_", string)]
part1 = string[0:indices[occur-1]]
part2 = string[indices[occur-1]+1:]
print (part1, ' ', part2)
I thought I would contribute my two cents. The second parameter to split() allows you to limit the number of splits:
def split_at(s, delim, n):
    r = s.split(delim, n)[n]
    return s[:-len(r)-len(delim)], r
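A quick usage check (not part of the original answer):
print(split_at('20_231_myString_234', '_', 2))  # ('20_231', 'myString_234')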
On my machine, the two good answers by @perreal, iterative find and regular expressions, actually measure 1.4 and 1.6 times slower (respectively) than this method.
It's worth noting that it can become even quicker if you don't need the initial bit. Then the code becomes:
def remove_head_parts(s, delim, n):
    return s.split(delim, n)[n]
Not so sure about the naming, I admit, but it does the job. Somewhat surprisingly, it is 2 times faster than iterative find and 3 times faster than regular expressions.
I put up my testing script online. You are welcome to review and comment.
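For example (not part of the original answer):
print(remove_head_parts('20_231_myString_234', '_', 2))  # myString_234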
>>> import re
>>> s = '20_231_myString_234'
>>> occurrences = [m.start() for m in re.finditer('_', s)]  # positions of every '_'
>>> occurrences
[2, 6, 15]
>>> result = [s[:occurrences[1]], s[occurrences[1]+1:]]  # [s[:6], s[7:]]
>>> result
['20_231', 'myString_234']
It depends on the pattern behind your split. If, for example, the first two elements are always numbers, you can build a regular expression and use the re module, which can split your string as well.
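A sketch of that idea (assuming the first two fields are always numeric):
import re
m = re.match(r'(\d+_\d+)_(.*)', '20_231_myString_234')
print(m.groups())  # ('20_231', 'myString_234')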
I had a larger string to split at every nth occurrence of a character, and ended up with the following code:
# Split every 6 spaces
n = 6
sep = ' '
n_split_groups = []
groups = err_str.split(sep)
while len(groups):
    n_split_groups.append(sep.join(groups[:n]))
    groups = groups[n:]
print(n_split_groups)
Thanks #perreal!
A function form of @AllBlackt's solution:
def split_nth(s, sep, n):
    n_split_groups = []
    groups = s.split(sep)
    while len(groups):
        n_split_groups.append(sep.join(groups[:n]))
        groups = groups[n:]
    return n_split_groups
s = "aaaaa bbbbb ccccc ddddd eeeeeee ffffffff"
print(split_nth(s, " ", 2))
['aaaaa bbbbb', 'ccccc ddddd', 'eeeeeee ffffffff']
As @Yuval has noted in his answer, and @jamylak commented in his answer, the split and rsplit methods accept a second (optional) parameter maxsplit to avoid making splits beyond what is necessary. Thus, I find the better solution (both for readability and performance) is this:
s = '20_231_myString_234'
first_part = s.rsplit('_', 2)[0]   # Gives '20_231'
second_part = s.split('_', 2)[2]   # Gives 'myString_234'
This is not only simple, but also avoids performance hits of regex solutions and other solutions using join to undo unnecessary splits.

Concatenate strings if they have an overlapping region

I am trying to write a script that will find strings that share an overlapping region of 5 letters at the beginning or end of each string (shown in example below).
facgakfjeakfjekfzpgghi
pgghiaewkfjaekfjkjakjfkj
kjfkjaejfaefkajewf
I am trying to create a new string which concatenates all three, so the output would be:
facgakfjeakfjekfzpgghiaewkfjaekfjkjakjfkjaejfaefkajewf
Edit:
This is the input:
x = ('facgakfjeakfjekfzpgghi', 'kjfkjaejfaefkajewf', 'pgghiaewkfjaekfjkjakjfkj')
(note: the strings are not in order)
What I've written so far (but it is not correct):
def findOverlap(seq):
    i = 0
    while i < len(seq):
        for x[i]:
            # check if x[0:5] == [:5] elsewhere
x = ('facgakfjeakfjekfzpgghi', 'kjfkjaejfaefkajewf', 'pgghiaewkfjaekfjkjakjfkj')
findOverlap(x)
Create a dictionary mapping the first 5 characters of each string to its tail
strings = {s[:5]: s[5:] for s in x}
and a set of all the suffixes:
suffixes = set(s[-5:] for s in x)
Now find the string whose prefix does not match any suffix:
prefix = next(p for p in strings if p not in suffixes)
Now we can follow the chain of strings:
result = [prefix]
while prefix in strings:
    result.append(strings[prefix])
    prefix = strings[prefix][-5:]
print("".join(result))
A brute-force approach - do all combinations and return the first that matches linking terms:
def solution(x):
    from itertools import permutations
    for perm in permutations(x):
        linked = [perm[i][:-5] for i in range(len(perm)-1)
                  if perm[i][-5:] == perm[i+1][:5]]
        if len(perm)-1 == len(linked):
            return "".join(linked) + perm[-1]
    return None
x = ('facgakfjeakfjekfzpgghi', 'kjfkjaejfaefkajewf', 'pgghiaewkfjaekfjkjakjfkj')
print(solution(x))
Loop over each pair of candidates, reverse the second string and use the answer from here
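A sketch of that pairwise check (assuming a fixed 5-character overlap, as in the question; the linked answer is not reproduced here):
def overlaps(a, b, k=5):
    # True if the last k characters of a equal the first k characters of b
    return a[-k:] == b[:k]
x = ('facgakfjeakfjekfzpgghi', 'kjfkjaejfaefkajewf', 'pgghiaewkfjaekfjkjakjfkj')
print(overlaps(x[0], x[2]))  # True
print(overlaps(x[0], x[1]))  # False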
