How to do CamelCase split in Python
What I was trying to achieve, was something like this:
>>> camel_case_split("CamelCaseXYZ")
['Camel', 'Case', 'XYZ']
>>> camel_case_split("XYZCamelCase")
['XYZ', 'Camel', 'Case']
So I searched and found this perfect regular expression:
(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])
As the next logical step I tried:
>>> re.split("(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])", "CamelCaseXYZ")
['CamelCaseXYZ']
Why does this not work, and how do I achieve the result from the linked question in Python?
Edit: Solution summary
I tested all provided solutions with a few test cases:
string: ''
AplusKminus: ['']
casimir_et_hippolyte: []
two_hundred_success: []
kalefranz: string index out of range # with modification: either [] or ['']
string: ' '
AplusKminus: [' ']
casimir_et_hippolyte: []
two_hundred_success: [' ']
kalefranz: [' ']
string: 'lower'
all algorithms: ['lower']
string: 'UPPER'
all algorithms: ['UPPER']
string: 'Initial'
all algorithms: ['Initial']
string: 'dromedaryCase'
AplusKminus: ['dromedary', 'Case']
casimir_et_hippolyte: ['dromedary', 'Case']
two_hundred_success: ['dromedary', 'Case']
kalefranz: ['Dromedary', 'Case'] # with modification: ['dromedary', 'Case']
string: 'CamelCase'
all algorithms: ['Camel', 'Case']
string: 'ABCWordDEF'
AplusKminus: ['ABC', 'Word', 'DEF']
casimir_et_hippolyte: ['ABC', 'Word', 'DEF']
two_hundred_success: ['ABC', 'Word', 'DEF']
kalefranz: ['ABCWord', 'DEF']
In summary you could say the solution by @kalefranz does not match the question (see the last case) and the solution by @casimir_et_hippolyte eats a single space, and thereby violates the idea that a split should not change the individual parts. The only difference between the remaining two alternatives is that my solution returns a list with the empty string for an empty string input, while the solution by @200_success returns an empty list.
I don't know how the Python community stands on that issue, so I say: I am fine with either one. And since 200_success's solution is simpler, I accepted it as the correct answer.
As @AplusKminus has explained, re.split() never splits on an empty pattern match. Therefore, instead of splitting, you should try finding the components you are interested in.
Here is a solution using re.finditer() that emulates splitting:
import re

def camel_case_split(identifier):
    matches = re.finditer('.+?(?:(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])|$)', identifier)
    return [m.group(0) for m in matches]
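A quick check against the examples from the question (my addition, not part of the original answer):

>>> camel_case_split("CamelCaseXYZ")
['Camel', 'Case', 'XYZ']
>>> camel_case_split("XYZCamelCase")
['XYZ', 'Camel', 'Case']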
Use re.sub() and split()
import re
name = 'CamelCaseTest123'
splitted = re.sub('([A-Z][a-z]+)', r' \1', re.sub('([A-Z]+)', r' \1', name)).split()
Result
'CamelCaseTest123' -> ['Camel', 'Case', 'Test123']
'CamelCaseXYZ' -> ['Camel', 'Case', 'XYZ']
'XYZCamelCase' -> ['XYZ', 'Camel', 'Case']
'XYZ' -> ['XYZ']
'IPAddress' -> ['IP', 'Address']
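To see why the two passes work, here is the intermediate state for one input (my trace, not part of the original answer): the inner sub puts a space before each run of capitals, the outer sub then puts a space before each capitalized word, and split() discards the surplus spaces.

>>> import re
>>> re.sub('([A-Z]+)', r' \1', 'XYZCamelCase')
' XYZCamel Case'
>>> re.sub('([A-Z][a-z]+)', r' \1', ' XYZCamel Case')
' XYZ Camel  Case'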
Most of the time, when you don't need to check the format of a string, a global search is simpler than a split (for the same result):
re.findall(r'[A-Z](?:[a-z]+|[A-Z]*(?=[A-Z]|$))', 'CamelCaseXYZ')
returns
['Camel', 'Case', 'XYZ']
To deal with dromedaryCase too, you can use:
re.findall(r'[A-Z]?[a-z]+|[A-Z]+(?=[A-Z]|$)', 'camelCaseXYZ')
Note: (?=[A-Z]|$) can be shortened using a double negation (a negative lookahead with a negated character class): (?![^A-Z])
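Applying that shortening gives an equivalent pattern (my rewrite of the line above, not from the original answer):

>>> re.findall(r'[A-Z]?[a-z]+|[A-Z]+(?![^A-Z])', 'camelCaseXYZ')
['camel', 'Case', 'XYZ']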
Working solution, without regexp
I am not that good at regexp. I like to use them for search/replace in my IDE but I try to avoid them in programs.
Here is a quite straightforward solution in pure python:
def camel_case_split(s):
    idx = list(map(str.isupper, s))
    # mark change of case
    l = [0]
    for (i, (x, y)) in enumerate(zip(idx, idx[1:])):
        if x and not y:  # "Ul"
            l.append(i)
        elif not x and y:  # "lU"
            l.append(i + 1)
    l.append(len(s))
    # for "lUl", index of "U" will pop twice, have to filter that
    return [s[x:y] for x, y in zip(l, l[1:]) if x < y]
And some tests
TESTS = [
    ("XYZCamelCase", ['XYZ', 'Camel', 'Case']),
    ("CamelCaseXYZ", ['Camel', 'Case', 'XYZ']),
    ("CamelCaseXYZa", ['Camel', 'Case', 'XY', 'Za']),
    ("XYZCamelCaseXYZ", ['XYZ', 'Camel', 'Case', 'XYZ']),
    ("aCamelCaseWordT", ['a', 'Camel', 'Case', 'Word', 'T']),
    ("CamelCaseWordT", ['Camel', 'Case', 'Word', 'T']),
    ("CamelCaseWordTa", ['Camel', 'Case', 'Word', 'Ta']),
    ("aCamelCaseWordTa", ['a', 'Camel', 'Case', 'Word', 'Ta']),
    ("Ta", ['Ta']),
    ("aT", ['a', 'T']),
    ("a", ['a']),
    ("T", ['T']),
    ("", []),
]

def test():
    for (q, a) in TESTS:
        assert camel_case_split(q) == a

if __name__ == "__main__":
    test()
Edit: a solution which streams data in one pass
This solution leverages the fact that the decision to split a word or not can be made locally, considering just the current character and the previous one.
def camel_case_split(s):
    u = True    # case of previous char
    w = b = ''  # current word, buffer for last uppercase letter
    for c in s:
        o = c.isupper()
        if u and o:
            w += b
            b = c
        elif u and not o:
            if len(w) > 0:
                yield w
            w = b + c
            b = ''
        elif not u and o:
            yield w
            w = ''
            b = c
        else:  # not u and not o
            w += c
        u = o
    if len(w) > 0 or len(b) > 0:  # flush
        yield w + b
It is theoretically faster and uses less memory. The same test suite applies, but the list must be built by the caller:
def test():
    for (q, a) in TESTS:
        r = list(camel_case_split(q))
        print(q, a, r)
        assert r == a
I just stumbled upon this case and wrote a regular expression to solve it. It should work for any group of words, actually.
import re

RE_WORDS = re.compile(r'''
    # Find words in a string. Order matters!
    [A-Z]+(?=[A-Z][a-z]) |  # All upper case before a capitalized word
    [A-Z]?[a-z]+ |          # Capitalized words / all lower case
    [A-Z]+ |                # All upper case
    \d+                     # Numbers
''', re.VERBOSE)
The key here is the lookahead on the first possible case. It will match (and preserve) uppercase words before capitalized ones:
assert RE_WORDS.findall('FOOBar') == ['FOO', 'Bar']
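It also picks up digit runs as separate words; a quick check (my addition, not from the original answer):

>>> RE_WORDS.findall('CamelCaseXYZ123')
['Camel', 'Case', 'XYZ', '123']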
import re

re.split('(?<=[a-z])(?=[A-Z])', 'camelCamelCAMEL')
# ['camel', 'Camel', 'CAMEL'] <-- result

# '(?<=[a-z])' --> lookbehind: the preceding char is a lowercase letter (group A)
# '(?=[A-Z])'  --> lookahead: the following char is an UPPERCASE letter (group B)
# '(group A)(group B)' --> the pattern matches the empty position between them, e.g. between 'l' and 'C'
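Note that a single lowercase-to-uppercase boundary cannot split a leading acronym (my observation, not part of the original answer):

>>> re.split('(?<=[a-z])(?=[A-Z])', 'XYZCamelCase')
['XYZCamel', 'Case']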
The documentation for Python's re.split says:
Note that split will never split a string on an empty pattern match.
When seeing this:
>>> re.findall("(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])", "CamelCaseXYZ")
['', '']
it becomes clear why the split does not work as expected: the re module finds empty matches, just as intended by the regular expression.
Since the documentation states that this is not a bug, but rather intended behavior, you have to work around that when trying to create a camel case split:
import re

def camel_case_split(identifier):
    matches = re.finditer('(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])', identifier)
    split_string = []
    # index of beginning of slice
    previous = 0
    for match in matches:
        # get slice
        split_string.append(identifier[previous:match.start()])
        # advance index
        previous = match.start()
    # get remaining string
    split_string.append(identifier[previous:])
    return split_string
This solution also supports numbers and spaces, and automatically removes underscores:
import re

def camel_terms(value):
    return re.findall('[A-Z][a-z]+|[0-9A-Z]+(?=[A-Z][a-z])|[0-9A-Z]{2,}|[a-z0-9]{2,}|[a-zA-Z0-9]', value)
Some tests:
tests = [
    "XYZCamelCase",
    "CamelCaseXYZ",
    "Camel_CaseXYZ",
    "3DCamelCase",
    "Camel5Case",
    "Camel5Case5D",
    "Camel Case XYZ"
]

for test in tests:
    print(test, "=>", camel_terms(test))
results:
XYZCamelCase => ['XYZ', 'Camel', 'Case']
CamelCaseXYZ => ['Camel', 'Case', 'XYZ']
Camel_CaseXYZ => ['Camel', 'Case', 'XYZ']
3DCamelCase => ['3D', 'Camel', 'Case']
Camel5Case => ['Camel', '5', 'Case']
Camel5Case5D => ['Camel', '5', 'Case', '5D']
Camel Case XYZ => ['Camel', 'Case', 'XYZ']
Simple solution:
re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", str(text))
Here's another solution that requires less code and no complicated regular expressions:
def camel_case_split(string):
    bldrs = [[string[0].upper()]]
    for c in string[1:]:
        if bldrs[-1][-1].islower() and c.isupper():
            bldrs.append([c])
        else:
            bldrs[-1].append(c)
    return [''.join(bldr) for bldr in bldrs]
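As the question's test summary notes, this variant only breaks on lower-to-upper transitions, so an acronym stays glued to the word that follows it:

>>> camel_case_split('CamelCase')
['Camel', 'Case']
>>> camel_case_split('ABCWordDEF')
['ABCWord', 'DEF']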
Edit
The above code contains an optimization that avoids rebuilding the entire string with every appended character. Leaving out that optimization, a simpler version (with comments) might look like this:
def camel_case_split2(string):
    # set the logic for creating a "break"
    def is_transition(c1, c2):
        return c1.islower() and c2.isupper()

    # start the builder list with the first character
    # enforce upper case
    bldr = [string[0].upper()]
    for c in string[1:]:
        # get the last character in the last element in the builder
        # note that strings can be addressed just like lists
        previous_character = bldr[-1][-1]
        if is_transition(previous_character, c):
            # start a new element in the list
            bldr.append(c)
        else:
            # append the character to the last string
            bldr[-1] += c
    return bldr
I know the question added the regex tag, but I always try to stay as far away from regex as possible. So, here is my solution without regex:
from functools import reduce  # on Python 2, reduce is a builtin

def split_camel(text, char):
    if len(text) <= 1:  # To avoid adding a wrong space in the beginning
        return text + char
    if char.isupper() and text[-1].islower():  # Regular camel case
        return text + " " + char
    elif text[-1].isupper() and char.islower() and text[-2] != " ":  # Detect camel case in case of abbreviations
        return text[:-1] + " " + text[-1] + char
    else:  # Do nothing part
        return text + char

text = "PathURLFinder"
text = reduce(split_camel, text, "")
print(text)
# prints "Path URL Finder"
print(text.split(" "))
# prints "['Path', 'URL', 'Finder']"
EDIT:
As suggested, here is the code to put the functionality in a single function.
from functools import reduce  # on Python 2, reduce is a builtin

def split_camel(text):
    def splitter(text, char):
        if len(text) <= 1:  # To avoid adding a wrong space in the beginning
            return text + char
        if char.isupper() and text[-1].islower():  # Regular camel case
            return text + " " + char
        elif text[-1].isupper() and char.islower() and text[-2] != " ":  # Detect camel case in case of abbreviations
            return text[:-1] + " " + text[-1] + char
        else:  # Do nothing part
            return text + char
    converted_text = reduce(splitter, text, "")
    return converted_text.split(" ")

split_camel("PathURLFinder")
# returns ['Path', 'URL', 'Finder']
Putting a more comprehensive approach out there. It takes care of several issues like numbers, strings starting with lower case, single letter words, etc.
import re

def camel_case_split(identifier, remove_single_letter_words=False):
    """Parses CamelCase and Snake naming"""
    concat_words = re.split('[^a-zA-Z]+', identifier)

    def camel_case_split(string):
        bldrs = [[string[0].upper()]]
        string = string[1:]
        for idx, c in enumerate(string):
            if bldrs[-1][-1].islower() and c.isupper():
                bldrs.append([c])
            elif c.isupper() and (idx + 1) < len(string) and string[idx + 1].islower():
                bldrs.append([c])
            else:
                bldrs[-1].append(c)
        words = [''.join(bldr) for bldr in bldrs]
        words = [word.lower() for word in words]
        return words

    words = []
    for word in concat_words:
        if len(word) > 0:
            words.extend(camel_case_split(word))
    if remove_single_letter_words:
        subset_words = []
        for word in words:
            if len(word) > 1:
                subset_words.append(word)
        if len(subset_words) > 0:
            words = subset_words
    return words
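For example (my check of the code above, not from the original answer), note that this variant lowercases its output:

>>> camel_case_split('XYZCamelCase')
['xyz', 'camel', 'case']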
My requirement was a bit more specific than the OP's. In particular, in addition to handling all of the OP's cases, I needed the following, which the other solutions do not provide:
- treat all non-alphanumeric input (e.g. !##$%^&*() etc) as a word separator
- handle digits as follows:
- cannot be in the middle of a word
- cannot be at the beginning of the word unless the phrase starts with a digit
import re

def splitWords(s):
    new_s = re.sub(r'[^a-zA-Z0-9]', ' ',               # not alphanumeric
            re.sub(r'([0-9]+)([^0-9])', '\\1 \\2',     # digit followed by non-digit
            re.sub(r'([a-z])([A-Z])', '\\1 \\2',       # lower case followed by upper case
            re.sub(r'([A-Z])([A-Z][a-z])', '\\1 \\2',  # upper case followed by upper case then lower case
            s))))
    return [x for x in new_s.split(' ') if x]
Output:
for test in ['', ' ', 'lower', 'UPPER', 'Initial', 'dromedaryCase', 'CamelCase', 'ABCWordDEF', 'CamelCaseXYZand123.how23^ar23e you doing AndABC123XYZdf']:
    print(test + ':' + str(splitWords(test)))
:[]
:[]
lower:['lower']
UPPER:['UPPER']
Initial:['Initial']
dromedaryCase:['dromedary', 'Case']
CamelCase:['Camel', 'Case']
ABCWordDEF:['ABC', 'Word', 'DEF']
CamelCaseXYZand123.how23^ar23e you doing AndABC123XYZdf:['Camel', 'Case', 'XY', 'Zand123', 'how23', 'ar23', 'e', 'you', 'doing', 'And', 'ABC123', 'XY', 'Zdf']
Maybe this will be enough for some people:
a = "SomeCamelTextUpper"
def camelText(val):
return ''.join([' ' + i if i.isupper() else i for i in val]).strip()
print(camelText(a))
It doesn't work with strings like "CamelXYZ", but in the 'typical' CamelCase scenario it should work just fine.
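To illustrate that caveat (my check, not part of the original answer), consecutive capitals each get their own space:

>>> camelText("CamelXYZ")
'Camel X Y Z'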
I think the version below is the optimum:

import re

def count_word():
    return re.findall('[A-Z]?[a-z]+', input('please enter your string'))

print(count_word())
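Note that this pattern silently drops all-uppercase runs (my observation, not part of the original answer):

>>> re.findall('[A-Z]?[a-z]+', 'CamelCaseXYZ')
['Camel', 'Case']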
Related
Python Split String at First Non-Alpha Character
Say I have strings such as 'ABC)D.' or 'AB:CD/'. How can I split them at the first non-alphabetic character to end up with ['ABC', 'D.'] and ['AB', 'CD/']? Is there a way to do this without regex?
You can use a loop:

a = 'AB$FDWRE'
i = 0
while i < len(a) and a[i].isalpha():
    i += 1

>>> a[:i]
'AB'
>>> a[i:]
'$FDWRE'
One option would be to find the location of the first non-alphabetic character:

def split_at_non_alpha(s):
    try:
        split_at = next(i for i, x in enumerate(s) if not x.isalpha())
        return s[:split_at], s[split_at+1:]
    except StopIteration:  # if not found
        return (s,)

print(split_at_non_alpha('ABC)D.'))  # ('ABC', 'D.')
print(split_at_non_alpha('AB:CD/'))  # ('AB', 'CD/')
print(split_at_non_alpha('.ABCD'))   # ('', 'ABCD')
print(split_at_non_alpha('ABCD.'))   # ('ABCD', '')
print(split_at_non_alpha('ABCD'))    # ('ABCD',)
With a for loop, enumerate, and string indexing:

def first_non_alpha_splitter(word):
    for index, char in enumerate(word):
        if not char.isalpha():
            break
    return [word[:index], word[index+1:]]

The result:

first_non_alpha_splitter('ABC)D.')
# Output: ['ABC', 'D.']
first_non_alpha_splitter('AB:CD/')
# Output: ['AB', 'CD/']
Barmar's suggestion worked best for me. The other answers had near the same execution time, but I chose this one for readability.

from itertools import takewhile

str = 'ABC)D.'  # note: shadows the built-in str
alphStr = ''.join(takewhile(lambda x: x.isalpha(), str))

print(alphStr)  # Outputs 'ABC'
How do I split a string with several delimiters, but only once on each delimiter? Python
I am trying to split a string such as the one below, with all of the delimiters below, but only once.

string = 'it; seems; like\ta good\tday to watch\va\vmovie.'
delimiters = '\t \v ;'

The output, in this case, would be:

['it', ' seems; like', 'a good\tday to watch', 'a\vmovie.']

Obviously the example above is a nonsense example, but I am trying to learn whether or not this is possible. Would a fairly involved regex be in order? Apologies if this question has been asked before. I did a fair bit of searching and could not find something quite like my example. Thanks for your time!
This should do the trick:

import re

def split_once_by(s, delims):
    delims = set(delims)
    parts = []
    while delims:
        delim_re = '({})'.format('|'.join(re.escape(d) for d in delims))
        result = re.split(delim_re, s, maxsplit=1)
        if len(result) == 3:
            first, delim, s = result
            parts.append(first)
            delims.remove(delim)
        else:
            break
    parts.append(s)
    return parts

Example:

>>> split_once_by('it; seems; like\ta good\tday to watch\va\vmovie.', '\t\v;')
['it', ' seems; like', 'a good\tday to watch', 'a\x0bmovie.']

Burning Alcohol's answer inspired me to write this (IMO) better function:

def split_once_by(s, delims):
    split_points = sorted((s.find(d), -len(d), d) for d in delims)
    start = 0
    for stop, _longest_first, d in split_points:
        if stop < start:
            continue
        yield s[start:stop]
        start = stop + len(d)
    yield s[start:]

with usage:

>>> list(split_once_by('it; seems; like\ta good\tday to watch\va\vmovie.', '\t\v;'))
['it', ' seems; like', 'a good\tday to watch', 'a\x0bmovie.']
A simple algorithm would do:

test_string = r'it; seems; like\ta good\tday to watch\va\vmovie.'
delimiters = [r'\t', r'\v', ';']

# find the index of each first occurrence and sort by it
delimiters = sorted(delimiters, key=lambda delimiter: test_string.find(delimiter))

splitted_string = [test_string]
# perform split with option maxsplit
for index, delimiter in enumerate(delimiters):
    if delimiter in splitted_string[-1]:
        splitted_string += splitted_string[-1].split(delimiter, maxsplit=1)
        splitted_string.pop(index)

print(splitted_string)
# ['it', ' seems; like', 'a good\\tday to watch', 'a\\vmovie.']
Just create a list of patterns and apply them once:

string = 'it; seems; like\ta good\tday to watch\va\vmovie.'
patterns = ['\t', '\v', ';']

for pattern in patterns:
    string = '*****'.join(string.split(pattern, maxsplit=1))

print(string.split('*****'))

Output:

['it', ' seems; like', 'a good\tday to watch', 'a\x0bmovie.']

So, what is "*****"? On each iteration, when you apply the split method, you get a list. So, in the next iteration, you can't apply the .split() method (because you have a list), so you have to join each value of that list with some weird character like "****" or "###" or "^^^^^^^" or whatever you want, in order to re-apply split() in the next iteration. Finally, for each "*****" in your string, you will have one pattern of the list, so you can use this to make a final split.
Splitting string using different scenarios using regex
I have 2 scenarios to split a string: "##$hello?? getting good.<li>hii"

I want it to be split as:

'hello', 'getting', 'good.<li>hii'  (Scenario 1)
'hello', 'getting', 'good', 'li', 'hii'  (Scenario 2)

Any ideas please??
Something like this should work:

>>> re.split(r"[^\w<>.]+", s)  # or re.split(r"[##$? ]+", s)
['', 'hello', 'getting', 'good.<li>hii']
>>> re.split(r"[^\w]+", s)
['', 'hello', 'getting', 'good', 'li', 'hii']
This might be what you're looking for: \w+ matches any digit or letter, from 1 to n times, as many times as possible. Here is a working JavaScript example:

var value = "##$hello?? getting good.<li>hii";
var matches = value.match( new RegExp("\\w+", "gi") );
console.log(matches)

It works by using \w+, which matches word characters as many times as possible. You could also use [A-Za-z] to match only letters, not numbers, as shown here:

var value = "##$hello?? getting good.<li>hii777bloop";
var matches = value.match( new RegExp("[A-Za-z]+", "gi") );
console.log(matches)

It matches what is in the brackets 1 to n times, as many as possible: in this case the range a-z of lower case characters and the range A-Z of upper case characters. Hope this is what you want.
For the first scenario, just use a regex to find all words that contain word characters and <>.:

In [60]: re.findall(r'[\w<>.]+', s)
Out[60]: ['hello', 'getting', 'good.<li>hii']

For the second one you need to replace the repeated characters only if they are not valid English words. You can do this using the nltk corpus and re.sub:

In [61]: import nltk
In [62]: english_vocab = set(w.lower() for w in nltk.corpus.words.words())
In [63]: repeat_regexp = re.compile(r'(\w*)(\w)\2(\w*)')
In [64]: [repeat_regexp.sub(r'\1\2\3', word) if word not in english_vocab else word
          for word in re.findall(r'[^\W]+', s)]
Out[64]: ['hello', 'getting', 'good', 'li', 'hi']
In case you are looking for a solution without regex: string.punctuation will give you a list of all special characters. Use this list with a list comprehension to achieve your desired result:

>>> import string
>>> my_string = '##$hello?? getting good.<li>hii'
>>> ''.join([(' ' if s in string.punctuation else s) for s in my_string]).split()
['hello', 'getting', 'good', 'li', 'hii']  # desired output

Explanation: below is a step-by-step breakdown of how it works:

# Importing the 'string' module
import string

# Value of 'special_char_string': '!"#$%&\'()*+,-./:;<=>?#[\\]^_`{|}~'
special_char_string = string.punctuation

my_string = '##$hello?? getting good.<li>hii'

# Generating list of characters in the sample string, with
# special characters replaced by whitespace
my_list = [(' ' if item in special_char_string else item) for item in my_string]

# Join the list to form a string
my_string = ''.join(my_list)

# Split it based on space
my_desired_list = my_string.strip().split()

The value of my_desired_list will be:

['hello', 'getting', 'good', 'li', 'hii']
Python - Splitting numbers and letters into sub-strings with regular expression
I am creating a metric measurement converter. The user is expected to enter an expression such as 125km (a number followed by a unit abbreviation). For conversion, the numerical value must be split from the abbreviation, producing a result such as [125, 'km']. I have done this with a regular expression, re.split, however it produces an unwanted item in the resulting list:

import re

s = '125km'
print(re.split('(\d+)', s))

Output: ['', '125', 'km']

I do not need nor want the beginning ''. How can I simply separate the numerical part of the string from the alphabetical part to produce a list using a regular expression?
What's wrong with re.findall?

>>> s = '125km'
>>> re.findall(r'[A-Za-z]+|\d+', s)
['125', 'km']

[A-Za-z]+ matches one or more alphabetic characters; | means "or"; \d+ matches one or more digits.

Or use a list comprehension:

>>> [i for i in re.split(r'([A-Za-z]+)', s) if i]
['125', 'km']
>>> [i for i in re.split(r'(\d+)', s) if i]
['125', 'km']
Split a string into a list of sub-strings (numbers and others) using a program:

s = "125km1234string"
sub = []
char = ""
num = ""
for letter in s:
    if letter.isdigit():
        if char:
            sub.append(char)
            char = ""
        num += letter
    else:
        if num:
            sub.append(num)
            num = ""
        char += letter
sub.append(char) if char else sub.append(num)
print(sub)

Output:

['125', 'km', '1234', 'string']
How to replace multiple substrings of a string?
I would like to use the .replace function to replace multiple strings. I currently have

string.replace("condition1", "")

but would like to have something like

string.replace("condition1", "").replace("condition2", "text")

although that does not feel like good syntax. What is the proper way to do this? Kind of like how in grep/regex you can do \1 and \2 to replace fields to certain search strings.
Here is a short example that should do the trick with regular expressions:

import re

rep = {"condition1": "", "condition2": "text"}  # define desired replacements here

# use these three lines to do the replacement
rep = dict((re.escape(k), v) for k, v in rep.iteritems())
# Python 3 renamed dict.iteritems to dict.items, so use rep.items() for recent versions
pattern = re.compile("|".join(rep.keys()))
text = pattern.sub(lambda m: rep[re.escape(m.group(0))], text)

For example:

>>> pattern.sub(lambda m: rep[re.escape(m.group(0))], "(condition1) and --condition2--")
'() and --text--'
You could just make a nice little looping function.

def replace_all(text, dic):
    for i, j in dic.iteritems():
        text = text.replace(i, j)
    return text

where text is the complete string and dic is a dictionary; each key is a string that will be replaced by its value.

Note: in Python 3, iteritems() has been replaced with items()

Careful: Python dictionaries don't have a reliable order for iteration. This solution only solves your problem if:

- order of replacements is irrelevant
- it's ok for a replacement to change the results of previous replacements

Update: The above statement about ordering does not apply to Python versions greater than or equal to 3.6, as standard dicts were changed to use insertion ordering for iteration.

For instance:

d = {"cat": "dog", "dog": "pig"}
my_sentence = "This is my cat and this is my dog."
my_sentence = replace_all(my_sentence, d)
print(my_sentence)

Possible output #1: "This is my pig and this is my pig."
Possible output #2: "This is my dog and this is my pig."

One possible fix is to use an OrderedDict.

from collections import OrderedDict

def replace_all(text, dic):
    for i, j in dic.items():
        text = text.replace(i, j)
    return text

od = OrderedDict([("cat", "dog"), ("dog", "pig")])
my_sentence = "This is my cat and this is my dog."
my_sentence = replace_all(my_sentence, od)
print(my_sentence)

Output: "This is my pig and this is my pig."

Careful #2: Inefficient if your text string is too big or there are many pairs in the dictionary.
Why not one solution like this?

s = "The quick brown fox jumps over the lazy dog"
for r in (("brown", "red"), ("lazy", "quick")):
    s = s.replace(*r)

# output will be: The quick red fox jumps over the quick dog
Here is a variant of the first solution using reduce, in case you like being functional. :)

repls = {'hello': 'goodbye', 'world': 'earth'}
s = 'hello, world'
reduce(lambda a, kv: a.replace(*kv), repls.iteritems(), s)

martineau's even better version:

repls = ('hello', 'goodbye'), ('world', 'earth')
s = 'hello, world'
reduce(lambda a, kv: a.replace(*kv), repls, s)
This is just a more concise recap of F.J's and MiniQuark's great answers, plus bgusach's last but decisive improvement. All you need to achieve multiple simultaneous string replacements is the following function:

import re

def multiple_replace(string, rep_dict):
    pattern = re.compile("|".join([re.escape(k) for k in sorted(rep_dict, key=len, reverse=True)]), flags=re.DOTALL)
    return pattern.sub(lambda x: rep_dict[x.group(0)], string)

Usage:

>>> multiple_replace("Do you like cafe? No, I prefer tea.", {'cafe': 'tea', 'tea': 'cafe', 'like': 'prefer'})
'Do you prefer tea? No, I prefer cafe.'

If you wish, you can make your own dedicated replacement functions starting from this simpler one.
Starting Python 3.8, and the introduction of assignment expressions (PEP 572) (the := operator), we can apply the replacements within a list comprehension:

# text = "The quick brown fox jumps over the lazy dog"
# replacements = [("brown", "red"), ("lazy", "quick")]
[text := text.replace(a, b) for a, b in replacements]
# text = 'The quick red fox jumps over the quick dog'
I built this upon F.J.'s excellent answer:

import re

def multiple_replacer(*key_values):
    replace_dict = dict(key_values)
    replacement_function = lambda match: replace_dict[match.group(0)]
    pattern = re.compile("|".join([re.escape(k) for k, v in key_values]), re.M)
    return lambda string: pattern.sub(replacement_function, string)

def multiple_replace(string, *key_values):
    return multiple_replacer(*key_values)(string)

One-shot usage:

>>> replacements = (u"café", u"tea"), (u"tea", u"café"), (u"like", u"love")
>>> print multiple_replace(u"Do you like café? No, I prefer tea.", *replacements)
Do you love tea? No, I prefer café.

Note that since replacement is done in just one pass, "café" changes to "tea", but it does not change back to "café".

If you need to do the same replacement many times, you can create a replacement function easily:

>>> my_escaper = multiple_replacer(('"', '\\"'), ('\t', '\\t'))
>>> many_many_strings = (u'This text will be escaped by "my_escaper"', u'Does this work?\tYes it does', u'And can we span\nmultiple lines?\t"Yes\twe\tcan!"')
>>> for line in many_many_strings:
...     print my_escaper(line)
...
This text will be escaped by \"my_escaper\"
Does this work?\tYes it does
And can we span
multiple lines?\t\"Yes\twe\tcan!\"

Improvements:

- turned code into a function
- added multiline support
- fixed a bug in escaping
- easy to create a function for a specific multiple replacement

Enjoy! :-)
I would like to propose the usage of string templates. Just place the string to be replaced in a dictionary and all is set! Example from docs.python.org:

>>> from string import Template
>>> s = Template('$who likes $what')
>>> s.substitute(who='tim', what='kung pao')
'tim likes kung pao'
>>> d = dict(who='tim')
>>> Template('Give $who $100').substitute(d)
Traceback (most recent call last):
[...]
ValueError: Invalid placeholder in string: line 1, col 10
>>> Template('$who likes $what').substitute(d)
Traceback (most recent call last):
[...]
KeyError: 'what'
>>> Template('$who likes $what').safe_substitute(d)
'tim likes $what'
In my case, I needed a simple replacing of unique keys with names, so I thought this up:

a = 'This is a test string.'
b = {'i': 'I', 's': 'S'}
for x, y in b.items():
    a = a.replace(x, y)

>>> a
'ThIS IS a teSt StrIng.'
Here my $0.02. It is based on Andrew Clark's answer, just a little bit clearer, and it also covers the case when a string to replace is a substring of another string to replace (longer string wins).

import re

def multireplace(string, replacements):
    """
    Given a string and a replacement map, it returns the replaced string.

    :param str string: string to execute replacements on
    :param dict replacements: replacement dictionary {value to find: value to replace}
    :rtype: str
    """
    # Place longer ones first to keep shorter substrings from matching
    # where the longer ones should take place
    # For instance given the replacements {'ab': 'AB', 'abc': 'ABC'} against
    # the string 'hey abc', it should produce 'hey ABC' and not 'hey ABc'
    substrs = sorted(replacements, key=len, reverse=True)

    # Create a big OR regex that matches any of the substrings to replace
    regexp = re.compile('|'.join(map(re.escape, substrs)))

    # For each match, look up the new string in the replacements
    return regexp.sub(lambda match: replacements[match.group(0)], string)

It is in this gist, feel free to modify it if you have any proposal.
I needed a solution where the strings to be replaced can be regular expressions, for example to help in normalizing a long text by replacing multiple whitespace characters with a single one. Building on a chain of answers from others, including MiniQuark and mmj, this is what I came up with:

import re

def multiple_replace(string, reps, re_flags=0):
    """ Transforms string, replacing keys from re_str_dict with values.
    reps: dictionary, or list of key-value pairs (to enforce ordering;
          earlier items have higher priority).
          Keys are used as regular expressions.
    re_flags: interpretation of regular expressions, such as re.DOTALL
    """
    if isinstance(reps, dict):
        reps = list(reps.items())
    pattern = re.compile("|".join("(?P<_%d>%s)" % (i, re_str[0])
                                  for i, re_str in enumerate(reps)),
                         re_flags)
    return pattern.sub(lambda x: reps[int(x.lastgroup[1:])][1], string)

It works for the examples given in other answers, for example:

>>> multiple_replace("(condition1) and --condition2--",
...                  {"condition1": "", "condition2": "text"})
'() and --text--'
>>> multiple_replace('hello, world', {'hello': 'goodbye', 'world': 'earth'})
'goodbye, earth'
>>> multiple_replace("Do you like cafe? No, I prefer tea.",
...                  {'cafe': 'tea', 'tea': 'cafe', 'like': 'prefer'})
'Do you prefer tea? No, I prefer cafe.'

The main thing for me is that you can use regular expressions as well, for example to replace whole words only, or to normalize white space:

>>> s = "I don't want to change this name:\n Philip II of Spain"
>>> re_str_dict = {r'\bI\b': 'You', r'[\n\t ]+': ' '}
>>> multiple_replace(s, re_str_dict)
"You don't want to change this name: Philip II of Spain"

If you want to use the dictionary keys as normal strings, you can escape those before calling multiple_replace using e.g. this function:

def escape_keys(d):
    """ transform dictionary d by applying re.escape to the keys """
    return dict((re.escape(k), v) for k, v in d.items())

>>> multiple_replace(s, escape_keys(re_str_dict))
"I don't want to change this name:\n Philip II of Spain"

The following function can help in finding erroneous regular expressions among your dictionary keys (since the error message from multiple_replace isn't very telling):

def check_re_list(re_list):
    """ Checks if each regular expression in list is well-formed. """
    for i, e in enumerate(re_list):
        try:
            re.compile(e)
        except (TypeError, re.error):
            print("Invalid regular expression string "
                  "at position {}: '{}'".format(i, e))

>>> check_re_list(re_str_dict.keys())

Note that it does not chain the replacements, instead performing them simultaneously. This makes it more efficient without constraining what it can do. To mimic the effect of chaining, you may just need to add more string-replacement pairs and ensure the expected ordering of the pairs:

>>> multiple_replace("button", {"but": "mut", "mutton": "lamb"})
'mutton'
>>> multiple_replace("button", [("button", "lamb"),
...                             ("but", "mut"), ("mutton", "lamb")])
'lamb'
Note: Test your case; see comments.

Here's a sample which is more efficient on long strings with many small replacements.

import re

source = "Here is foo, it does moo!"

replacements = {
    'is': 'was',  # replace 'is' with 'was'
    'does': 'did',
    '!': '?',
}

def replace(source, replacements):
    finder = re.compile("|".join(re.escape(k) for k in replacements.keys()))  # matches every string we want replaced
    result = []
    pos = 0
    while True:
        match = finder.search(source, pos)
        if match:
            # cut off the part up until match
            result.append(source[pos:match.start()])
            # cut off the matched part and replace it in place
            result.append(replacements[source[match.start():match.end()]])
            pos = match.end()
        else:
            # the rest after the last match
            result.append(source[pos:])
            break
    return "".join(result)

print(replace(source, replacements))

The point is in avoiding many concatenations of long strings. We chop the source string into fragments, replacing some of the fragments as we form the list, and then join the whole thing back into a string.
I was doing a similar exercise in one of my school homeworks. This was my solution:

dictionary = {1: ['hate', 'love'],
              2: ['salad', 'burger'],
              3: ['vegetables', 'pizza']}

def normalize(text):
    for i in dictionary:
        text = text.replace(dictionary[i][0], dictionary[i][1])
    return text

See the result yourself on a test string:

string_to_change = 'I hate salad and vegetables'
print(normalize(string_to_change))
You can use the pandas library and its replace function, which supports both exact matches and regex replacements. For example:

import pandas as pd

df = pd.DataFrame({'text': ['Billy is going to visit Rome in November',
                            'I was born in 10/10/2010',
                            'I will be there at 20:00']})

to_replace = ['Billy', 'Rome',
              'January|February|March|April|May|June|July|August|September|October|November|December',
              r'\d{2}:\d{2}', r'\d{2}/\d{2}/\d{4}']
replace_with = ['name', 'city', 'month', 'time', 'date']

print(df.text.replace(to_replace, replace_with, regex=True))

And the modified text is:

0    name is going to visit city in month
1    I was born in date
2    I will be there at time

You can find an example here. Notice that the replacements on the text are done in the order they appear in the lists.
I was struggling with this problem as well. With many substitutions, regular expressions struggle, and are about four times slower than looping string.replace (in my experiment conditions).

You should absolutely try using the Flashtext library (blog post here, Github here). In my case it was a bit over two orders of magnitude faster, from 1.8 s to 0.015 s (regular expressions took 7.7 s) for each document.

It is easy to find use examples in the links above, but this is a working example:

from flashtext import KeywordProcessor

self.processor = KeywordProcessor(case_sensitive=False)
for k, v in self.my_dict.items():
    self.processor.add_keyword(k, v)
new_string = self.processor.replace_keywords(string)

Note that Flashtext makes substitutions in a single pass (to avoid a --> b and b --> c translating 'a' into 'c'). Flashtext also looks for whole words (so 'is' will not match 'this'). It works fine if your target is several words (replacing 'This is' with 'Hello').
I faced a similar problem today, where I had to use the .replace() method multiple times, but it didn't feel good to me. So I did something like this:

REPLACEMENTS = {'<': '&lt;', '>': '&gt;', '&': '&amp;'}

event_title = ''.join([REPLACEMENTS.get(c, c) for c in event['summary']])
I feel this question needs a single-line recursive lambda function answer for completeness, just because. So there:

>>> mrep = lambda s, d: s if not d else mrep(s.replace(*d.popitem()), d)

Usage:

>>> mrep('abcabc', {'a': '1', 'c': '2'})
'1b21b2'

Notes:

- This consumes the input dictionary.
- Python dicts preserve key order as of 3.6; corresponding caveats in other answers are not relevant anymore. For backward compatibility one could resort to a tuple-based version:

>>> mrep = lambda s, d: s if not d else mrep(s.replace(*d.pop()), d)
>>> mrep('abcabc', [('a', '1'), ('c', '2')])

Note: As with all recursive functions in Python, too large a recursion depth (i.e. too large replacement dictionaries) will result in an error. See e.g. here.
You should really not do it this way, but I just find it way too cool:

>>> replacements = {'cond1': 'text1', 'cond2': 'text2'}
>>> cmd = 'answer = s'
>>> for k, v in replacements.iteritems():
...     cmd += ".replace(%r, %r)" % (k, v)
>>> exec(cmd)

Now, answer is the result of all the replacements in turn. Again, this is very hacky and is not something that you should be using regularly. But it's just nice to know that you can do something like this if you ever need to.
For replacing single characters, translate with str.maketrans is my favorite method.

tl;dr:

result_string = your_string.translate(str.maketrans(dict_mapping))

demo:

my_string = 'This is a test string.'
dict_mapping = {'i': 's', 's': 'S'}

result_good = my_string.translate(str.maketrans(dict_mapping))

result_bad = my_string
for x, y in dict_mapping.items():
    result_bad = result_bad.replace(x, y)

print(result_good)  # ThsS sS a teSt Strsng.
print(result_bad)   # ThSS SS a teSt StrSng.
I don't know about speed, but this is my workaday quick fix:

reduce(lambda a, b: a.replace(*b),
       [('o', 'W'), ('t', 'X')],  # iterable of pairs: (oldval, newval)
       'tomato')                  # the string from which to replace values

... but I like the #1 regex answer above. Note: if one new value is a substring of another one, then the operation is not commutative.
Here is a version with support for basic regex replacement. The main restriction is that expressions must not contain subgroups, and there may be some edge cases. Code based on @bgusach and others:

import re

class StringReplacer:

    def __init__(self, replacements, ignore_case=False):
        patterns = sorted(replacements, key=len, reverse=True)
        self.replacements = [replacements[k] for k in patterns]
        re_mode = re.IGNORECASE if ignore_case else 0
        self.pattern = re.compile('|'.join(("({})".format(p) for p in patterns)), re_mode)
        def tr(matcher):
            index = next((index for index, value in enumerate(matcher.groups()) if value), None)
            return self.replacements[index]
        self.tr = tr

    def __call__(self, string):
        return self.pattern.sub(self.tr, string)

Tests:

table = {
    "aaa"    : "[This is three a]",
    "b+"     : "[This is one or more b]",
    r"<\w+>" : "[This is a tag]"
}

replacer = StringReplacer(table, True)
sample1 = "whatever bb, aaa, <star> BBB <end>"
print(replacer(sample1))

# output:
# whatever [This is one or more b], [This is three a], [This is a tag] [This is one or more b] [This is a tag]

The trick is to identify the matched group by its position. It is not super efficient (O(n)), but it works:

index = next((index for index, value in enumerate(matcher.groups()) if value), None)

Replacement is done in one pass.
Starting from the precious answer of Andrew, I developed a script that loads the dictionary from a file and elaborates all the files in the opened folder to do the replacements. The script loads the mappings from an external file in which you can set the separator. I'm a beginner, but I found this script very useful when doing multiple substitutions in multiple files. It loaded a dictionary with more than 1000 entries in seconds. It is not elegant, but it worked for me:

import glob
import re

mapfile = input("Enter map file name with extension eg. codifica.txt: ")
sep = input("Enter map file column separator eg. |: ")
mask = input("Enter search mask with extension eg. 2010*txt for all files to be processed: ")
suff = input("Enter suffix with extension eg. _NEW.txt for newly generated files: ")

rep = {}  # creation of empty dictionary

with open(mapfile) as temprep:
    # loading of definitions in the dictionary using input file, separator is prompted
    for line in temprep:
        (key, val) = line.strip('\n').split(sep)
        rep[key] = val

for filename in glob.iglob(mask):  # recursion on all the files with the mask prompted
    with open(filename, "r") as textfile:  # load each file in the variable text
        text = textfile.read()

    # start replacement
    # rep = dict((re.escape(k), v) for k, v in rep.items())
    # commented to enable the use of re reserved characters in the mapping
    pattern = re.compile("|".join(rep.keys()))
    text = pattern.sub(lambda m: rep[m.group(0)], text)

    # write the output files with the prompted suffix
    target = open(filename[:-4] + suff, "w")
    target.write(text)
    target.close()
This is my solution to the problem. I used it in a chatbot to replace the different words at once.

def mass_replace(text, dct):
    new_string = ""
    old_string = text
    while len(old_string) > 0:
        s = ""
        sk = ""
        for k in dct.keys():
            if old_string.startswith(k):
                s = dct[k]
                sk = k
        if s:
            new_string += s
            old_string = old_string[len(sk):]
        else:
            new_string += old_string[0]
            old_string = old_string[1:]
    return new_string

print(mass_replace("The dog hunts the cat", {"dog": "cat", "cat": "dog"}))

this will become: The cat hunts the dog
Another example. Input list:

error_list = ['[br]', '[ex]', 'Something']
words = ['how', 'much[ex]', 'is[br]', 'the', 'fish[br]', 'noSomething', 'really']

The desired output would be:

words = ['how', 'much', 'is', 'the', 'fish', 'no', 'really']

Code:

[n[0][0] if len(n[0]) else n[1] for n in [[[w.replace(e, "") for e in error_list if e in w], w] for w in words]]
My approach would be to first tokenize the string, then decide for each token whether to include it or not. Potentially, this might be more performant, if we can assume O(1) lookup for a hashmap/set:

remove_words = {"we", "this"}
target_sent = "we should modify this string"
target_sent_words = target_sent.split()
filtered_sent = " ".join(list(filter(lambda word: word not in remove_words, target_sent_words)))

filtered_sent is now 'should modify string'
Or just for a fast hack:

for line in to_read:
    read_buffer = line
    stripped_buffer1 = read_buffer.replace("term1", " ")
    stripped_buffer2 = stripped_buffer1.replace("term2", " ")
    write_to_file = to_write.write(stripped_buffer2)
Here is another way of doing it with a dictionary:

listA = "The cat jumped over the house".split()
modify = {word: word for number, word in enumerate(listA)}
modify["cat"], modify["jumped"] = "dog", "walked"
print(" ".join(modify[x] for x in listA))
sentence = 'its some sentence with a something text'

def replaceAll(f, Array1, Array2):
    if len(Array1) == len(Array2):
        for x in range(len(Array1)):
            f = f.replace(Array1[x], Array2[x])
    return f

newSentence = replaceAll(sentence, ['a', 'sentence', 'something'], ['another', 'sentence', 'something something'])
print(newSentence)