Replacement of characters in a file (python) [duplicate]

I would like to use the .replace function to replace multiple strings.
I currently have
string.replace("condition1", "")
but would like to have something like
string.replace("condition1", "").replace("condition2", "text")
although that does not feel like good syntax.
What is the proper way to do this? Something like how in grep/regex you can use \1 and \2 to refer to matched fields in the replacement.
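For reference, Python's re module supports the same kind of backreferences in the replacement string; a minimal sketch (the date string is just an illustration):
import re

# \1, \2, \3 in the replacement refer to the pattern's capture groups
s = "2023-12-31"
print(re.sub(r"(\d{4})-(\d{2})-(\d{2})", r"\3/\2/\1", s))  # 31/12/2023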

Here is a short example that should do the trick with regular expressions:
import re

rep = {"condition1": "", "condition2": "text"}  # define desired replacements here

# use these three lines to do the replacement
rep = dict((re.escape(k), v) for k, v in rep.items())  # on Python 2, use rep.iteritems()
pattern = re.compile("|".join(rep.keys()))
text = pattern.sub(lambda m: rep[re.escape(m.group(0))], text)
For example:
>>> pattern.sub(lambda m: rep[re.escape(m.group(0))], "(condition1) and --condition2--")
'() and --text--'

You could just make a nice little looping function.
def replace_all(text, dic):
    for i, j in dic.items():
        text = text.replace(i, j)
    return text
where text is the complete string and dic is a dictionary — each key is the substring to find and its value is the string that replaces it.
Note: in Python 2, use dic.iteritems() instead of dic.items().
Careful: Python dictionaries don't have a reliable order for iteration. This solution only solves your problem if:
order of replacements is irrelevant
it's ok for a replacement to change the results of previous replacements
Update: the statement above about iteration order does not apply to Python 3.6 and later, where standard dicts iterate in insertion order.
For instance:
d = { "cat": "dog", "dog": "pig"}
my_sentence = "This is my cat and this is my dog."
replace_all(my_sentence, d)
print(my_sentence)
Possible output #1:
"This is my pig and this is my pig."
Possible output #2
"This is my dog and this is my pig."
One possible fix is to use an OrderedDict.
from collections import OrderedDict

def replace_all(text, dic):
    for i, j in dic.items():
        text = text.replace(i, j)
    return text

od = OrderedDict([("cat", "dog"), ("dog", "pig")])
my_sentence = "This is my cat and this is my dog."
my_sentence = replace_all(my_sentence, od)
print(my_sentence)
Output:
"This is my pig and this is my pig."
Careful #2: Inefficient if your text string is too big or there are many pairs in the dictionary.
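If you want to check that claim on your own data, here is a minimal timing sketch (it assumes the replace_all() loop above and a compiled alternation regex as in the first answer; the numbers depend entirely on your text and dictionary):
import re
import timeit

rep = {"cat": "dog", "dog": "pig"}
text = "This is my cat and this is my dog. " * 1000

def loop_version():
    out = text
    for k, v in rep.items():
        out = out.replace(k, v)
    return out

pattern = re.compile("|".join(map(re.escape, rep)))

def regex_version():
    return pattern.sub(lambda m: rep[m.group(0)], text)

print(timeit.timeit(loop_version, number=100))
print(timeit.timeit(regex_version, number=100))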

Why not one solution like this?
s = "The quick brown fox jumps over the lazy dog"
for r in (("brown", "red"), ("lazy", "quick")):
s = s.replace(*r)
#output will be: The quick red fox jumps over the quick dog

Here is a variant of the first solution using reduce, in case you like being functional. :)
from functools import reduce  # needed on Python 3

repls = {'hello': 'goodbye', 'world': 'earth'}
s = 'hello, world'
reduce(lambda a, kv: a.replace(*kv), repls.items(), s)  # Python 2: repls.iteritems()
martineau's even better version:
repls = ('hello', 'goodbye'), ('world', 'earth')
s = 'hello, world'
reduce(lambda a, kv: a.replace(*kv), repls, s)

This is just a more concise recap of F.J's and MiniQuark's great answers, plus a last but decisive improvement by bgusach. All you need to achieve multiple simultaneous string replacements is the following function:
import re

def multiple_replace(string, rep_dict):
    pattern = re.compile("|".join([re.escape(k) for k in sorted(rep_dict, key=len, reverse=True)]), flags=re.DOTALL)
    return pattern.sub(lambda x: rep_dict[x.group(0)], string)
Usage:
>>> multiple_replace("Do you like cafe? No, I prefer tea.", {'cafe': 'tea', 'tea': 'cafe', 'like': 'prefer'})
'Do you prefer tea? No, I prefer cafe.'
If you wish, you can make your own dedicated replacement functions starting from this simpler one.

Starting with Python 3.8 and the introduction of assignment expressions (PEP 572, the := operator), we can apply the replacements within a list comprehension:
# text = "The quick brown fox jumps over the lazy dog"
# replacements = [("brown", "red"), ("lazy", "quick")]
[text := text.replace(a, b) for a, b in replacements]
# text = 'The quick red fox jumps over the quick dog'

I built this upon F.J.'s excellent answer:
import re

def multiple_replacer(*key_values):
    replace_dict = dict(key_values)
    replacement_function = lambda match: replace_dict[match.group(0)]
    pattern = re.compile("|".join([re.escape(k) for k, v in key_values]), re.M)
    return lambda string: pattern.sub(replacement_function, string)

def multiple_replace(string, *key_values):
    return multiple_replacer(*key_values)(string)
One-shot usage:
>>> replacements = (u"café", u"tea"), (u"tea", u"café"), (u"like", u"love")
>>> print(multiple_replace(u"Do you like café? No, I prefer tea.", *replacements))
Do you love tea? No, I prefer café.
Note that since replacement is done in just one pass, "café" changes to "tea", but it does not change back to "café".
If you need to do the same replacement many times, you can create a replacement function easily:
>>> my_escaper = multiple_replacer(('"','\\"'), ('\t', '\\t'))
>>> many_many_strings = (u'This text will be escaped by "my_escaper"',
...                      u'Does this work?\tYes it does',
...                      u'And can we span\nmultiple lines?\t"Yes\twe\tcan!"')
>>> for line in many_many_strings:
...     print(my_escaper(line))
...
This text will be escaped by \"my_escaper\"
Does this work?\tYes it does
And can we span
multiple lines?\t\"Yes\twe\tcan!\"
Improvements:
turned code into a function
added multiline support
fixed a bug in escaping
easy to create a function for a specific multiple replacement
Enjoy! :-)

I would like to propose the use of string templates. Just place the strings to be substituted in a dictionary and all is set! Example from docs.python.org:
>>> from string import Template
>>> s = Template('$who likes $what')
>>> s.substitute(who='tim', what='kung pao')
'tim likes kung pao'
>>> d = dict(who='tim')
>>> Template('Give $who $100').substitute(d)
Traceback (most recent call last):
[...]
ValueError: Invalid placeholder in string: line 1, col 10
>>> Template('$who likes $what').substitute(d)
Traceback (most recent call last):
[...]
KeyError: 'what'
>>> Template('$who likes $what').safe_substitute(d)
'tim likes $what'
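Applied to the original question, here is a minimal sketch (it assumes you can rewrite your text to use $-placeholders; safe_substitute() leaves unknown placeholders untouched instead of raising):
from string import Template

t = Template("($cond1) and --$cond2--")
print(t.safe_substitute({"cond1": "", "cond2": "text"}))  # () and --text--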

In my case, I needed a simple replacing of unique keys with names, so I thought this up:
a = 'This is a test string.'
b = {'i': 'I', 's': 'S'}
for x, y in b.items():
    a = a.replace(x, y)
>>> a
'ThIS IS a teSt StrIng.'

Here's my $0.02. It is based on Andrew Clark's answer, just a little bit clearer, and it also covers the case where one string to replace is a substring of another string to replace (the longer string wins).
import re

def multireplace(string, replacements):
    """
    Given a string and a replacement map, it returns the replaced string.
    :param str string: string to execute replacements on
    :param dict replacements: replacement dictionary {value to find: value to replace}
    :rtype: str
    """
    # Place longer ones first to keep shorter substrings from matching
    # where the longer ones should take place
    # For instance given the replacements {'ab': 'AB', 'abc': 'ABC'} against
    # the string 'hey abc', it should produce 'hey ABC' and not 'hey ABc'
    substrs = sorted(replacements, key=len, reverse=True)
    # Create a big OR regex that matches any of the substrings to replace
    regexp = re.compile('|'.join(map(re.escape, substrs)))
    # For each match, look up the new string in the replacements
    return regexp.sub(lambda match: replacements[match.group(0)], string)
It is in this gist; feel free to modify it if you have any proposal.
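For instance, with the overlapping keys described in the comment inside the function:
>>> multireplace('hey abc', {'ab': 'AB', 'abc': 'ABC'})
'hey ABC'
>>> multireplace('hey abc', {'ab': 'AB'})
'hey ABc'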

I needed a solution where the strings to be replaced can be regular expressions,
for example to help normalize a long text by replacing multiple whitespace characters with a single one. Building on a chain of answers from others, including MiniQuark and mmj, this is what I came up with:
import re

def multiple_replace(string, reps, re_flags=0):
    """ Transforms string, replacing keys from reps with values.
    reps: dictionary, or list of key-value pairs (to enforce ordering;
          earlier items have higher priority).
          Keys are used as regular expressions.
    re_flags: interpretation of regular expressions, such as re.DOTALL
    """
    if isinstance(reps, dict):
        reps = list(reps.items())  # list() so the lambda below can index it
    pattern = re.compile("|".join("(?P<_%d>%s)" % (i, re_str[0])
                                  for i, re_str in enumerate(reps)),
                         re_flags)
    return pattern.sub(lambda x: reps[int(x.lastgroup[1:])][1], string)
It works for the examples given in other answers, for example:
>>> multiple_replace("(condition1) and --condition2--",
... {"condition1": "", "condition2": "text"})
'() and --text--'
>>> multiple_replace('hello, world', {'hello' : 'goodbye', 'world' : 'earth'})
'goodbye, earth'
>>> multiple_replace("Do you like cafe? No, I prefer tea.",
... {'cafe': 'tea', 'tea': 'cafe', 'like': 'prefer'})
'Do you prefer tea? No, I prefer cafe.'
The main thing for me is that you can use regular expressions as well, for example to replace whole words only, or to normalize white space:
>>> s = "I don't want to change this name:\n Philip II of Spain"
>>> re_str_dict = {r'\bI\b': 'You', r'[\n\t ]+': ' '}
>>> multiple_replace(s, re_str_dict)
"You don't want to change this name: Philip II of Spain"
If you want to use the dictionary keys as normal strings,
you can escape those before calling multiple_replace using e.g. this function:
def escape_keys(d):
    """ transform dictionary d by applying re.escape to the keys """
    return dict((re.escape(k), v) for k, v in d.items())
>>> multiple_replace(s, escape_keys(re_str_dict))
"I don't want to change this name:\n Philip II of Spain"
The following function can help in finding erroneous regular expressions among your dictionary keys (since the error message from multiple_replace isn't very telling):
def check_re_list(re_list):
    """ Checks if each regular expression in list is well-formed. """
    for i, e in enumerate(re_list):
        try:
            re.compile(e)
        except (TypeError, re.error):
            print("Invalid regular expression string "
                  "at position {}: '{}'".format(i, e))
>>> check_re_list(re_str_dict.keys())
Note that it does not chain the replacements; it performs them simultaneously. This makes it more efficient without constraining what it can do. To mimic the effect of chaining, you may just need to add more string-replacement pairs and ensure the expected ordering of the pairs:
>>> multiple_replace("button", {"but": "mut", "mutton": "lamb"})
'mutton'
>>> multiple_replace("button", [("button", "lamb"),
... ("but", "mut"), ("mutton", "lamb")])
'lamb'

Note: Test your case, see comments.
Here's a sample which is more efficient on long strings with many small replacements.
source = "Here is foo, it does moo!"
replacements = {
'is': 'was', # replace 'is' with 'was'
'does': 'did',
'!': '?'
}
def replace(source, replacements):
finder = re.compile("|".join(re.escape(k) for k in replacements.keys())) # matches every string we want replaced
result = []
pos = 0
while True:
match = finder.search(source, pos)
if match:
# cut off the part up until match
result.append(source[pos : match.start()])
# cut off the matched part and replace it in place
result.append(replacements[source[match.start() : match.end()]])
pos = match.end()
else:
# the rest after the last match
result.append(source[pos:])
break
return "".join(result)
print replace(source, replacements)
The point is to avoid many concatenations of long strings. We chop the source string into fragments, replace some of the fragments as we build the list, and then join the whole thing back into a string.

I was doing a similar exercise in one of my school homework assignments. This was my solution:
dictionary = {1: ['hate', 'love'],
              2: ['salad', 'burger'],
              3: ['vegetables', 'pizza']}

def normalize(text):
    for i in dictionary:
        text = text.replace(dictionary[i][0], dictionary[i][1])
    return text
See the result yourself on a test string:
string_to_change = 'I hate salad and vegetables'
print(normalize(string_to_change))

You can use the pandas library and its replace function, which supports both exact matches and regex replacements. For example:
import pandas as pd

df = pd.DataFrame({'text': ['Billy is going to visit Rome in November', 'I was born in 10/10/2010', 'I will be there at 20:00']})
to_replace = ['Billy', 'Rome', 'January|February|March|April|May|June|July|August|September|October|November|December', r'\d{2}:\d{2}', r'\d{2}/\d{2}/\d{4}']
replace_with = ['name', 'city', 'month', 'time', 'date']
print(df.text.replace(to_replace, replace_with, regex=True))
And the modified text is:
0 name is going to visit city in month
1 I was born in date
2 I will be there at time
You can find an example here. Notice that the replacements on the text are done in the order they appear in the lists.

I was struggling with this problem as well. With many substitutions regular expressions struggle, and are about four times slower than looping string.replace (in my experiment conditions).
You should absolutely try using the Flashtext library (blog post here, Github here). In my case it was a bit over two orders of magnitude faster, from 1.8 s to 0.015 s (regular expressions took 7.7 s) for each document.
It is easy to find use examples in the links above, but this is a working example:
from flashtext import KeywordProcessor

processor = KeywordProcessor(case_sensitive=False)
for k, v in my_dict.items():          # my_dict: your {old: new} replacement mapping
    processor.add_keyword(k, v)
new_string = processor.replace_keywords(string)  # string: the text to process
Note that Flashtext makes substitutions in a single pass (to avoid a --> b and b --> c translating 'a' into 'c'). Flashtext also looks for whole words (so 'is' will not match 'this'). It works fine if your target is several words (replacing 'This is' by 'Hello').
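A small self-contained sketch of the same idea (the keyword pairs here are made up for illustration):
from flashtext import KeywordProcessor

processor = KeywordProcessor(case_sensitive=False)
processor.add_keyword('cat', 'dog')  # replace 'cat' with 'dog'
processor.add_keyword('is', 'was')   # replace 'is' with 'was'

print(processor.replace_keywords('This is my cat'))  # This was my dog
# 'is' inside 'This' is untouched, because Flashtext only matches whole words.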

I faced a similar problem today, where I had to use the .replace() method multiple times, but it didn't feel good to me. So I did something like this:
REPLACEMENTS = {'<': '&lt;', '>': '&gt;', '&': '&amp;'}
event_title = ''.join([REPLACEMENTS.get(c, c) for c in event['summary']])
Note that this maps character by character, so it only works when every key is a single character.

I feel this question needs a single-line recursive lambda function answer for completeness, just because. So there:
>>> mrep = lambda s, d: s if not d else mrep(s.replace(*d.popitem()), d)
Usage:
>>> mrep('abcabc', {'a': '1', 'c': '2'})
'1b21b2'
Notes:
This consumes the input dictionary.
Python dicts preserve key order as of 3.6; corresponding caveats in other answers are not relevant anymore. For backward compatibility one could resort to a tuple-based version:
>>> mrep = lambda s, d: s if not d else mrep(s.replace(*d.pop()), d)
>>> mrep('abcabc', [('a', '1'), ('c', '2')])
Note: as with all recursive functions in Python, too large a recursion depth (i.e. too large a replacement dictionary) will result in an error. See e.g. here.

You should really not do it this way, but I just find it way too cool:
>>> replacements = {'cond1': 'text1', 'cond2': 'text2'}
>>> cmd = 'answer = s'
>>> for k, v in replacements.items():
...     cmd += ".replace(%r, %r)" % (k, v)
>>> exec(cmd)
Now, answer is the result of all the replacements applied in turn.
Again, this is very hacky and is not something that you should be using regularly. But it's just nice to know that you can do something like this if you ever need to.

For replacing single characters only, str.translate together with str.maketrans is my favorite method.
tl;dr > result_string = your_string.translate(str.maketrans(dict_mapping))
demo
my_string = 'This is a test string.'
dict_mapping = {'i': 's', 's': 'S'}
result_good = my_string.translate(str.maketrans(dict_mapping))
result_bad = my_string
for x, y in dict_mapping.items():
    result_bad = result_bad.replace(x, y)
print(result_good) # ThsS sS a teSt Strsng.
print(result_bad) # ThSS SS a teSt StrSng.
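As a side note, str.maketrans also accepts multi-character replacement values and None for deletion, as long as the keys are single characters; a small sketch:
table = str.maketrans({'i': 'ee', 's': None})
print('This is a test string.'.translate(table))  # Thee ee a tet treeng.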

I don't know about speed but this is my workaday quick fix:
from functools import reduce  # needed on Python 3

reduce(lambda a, b: a.replace(*b),
       [('o', 'W'), ('t', 'X')],  # iterable of pairs: (oldval, newval)
       'tomato')                  # the string in which to replace values
... but I like the #1 regex answer above. Note: if one search value is a substring of another, or a replacement produces text that a later pair matches, the result depends on the order of the pairs.
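A tiny illustration of that ordering caveat (using functools.reduce as above; the pairs are made up):
from functools import reduce

pairs = [('ab', 'x'), ('b', 'y')]
print(reduce(lambda a, kv: a.replace(*kv), pairs, 'abc'))            # xc
print(reduce(lambda a, kv: a.replace(*kv), reversed(pairs), 'abc'))  # ayc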

Here is a version with support for basic regex replacement. The main restriction is that expressions must not contain subgroups, and there may be some edge cases:
Code based on bgusach's answer and others:
import re

class StringReplacer:
    def __init__(self, replacements, ignore_case=False):
        patterns = sorted(replacements, key=len, reverse=True)
        self.replacements = [replacements[k] for k in patterns]
        re_mode = re.IGNORECASE if ignore_case else 0
        self.pattern = re.compile('|'.join("({})".format(p) for p in patterns), re_mode)

        def tr(matcher):
            index = next((index for index, value in enumerate(matcher.groups()) if value), None)
            return self.replacements[index]

        self.tr = tr

    def __call__(self, string):
        return self.pattern.sub(self.tr, string)
Tests
table = {
    "aaa": "[This is three a]",
    "b+": "[This is one or more b]",
    r"<\w+>": "[This is a tag]"
}
replacer = StringReplacer(table, True)
sample1 = "whatever bb, aaa, <star> BBB <end>"
print(replacer(sample1))
# output:
# whatever [This is one or more b], [This is three a], [This is a tag] [This is one or more b] [This is a tag]
The trick is to identify the matched group by its position. It is not super efficient (the lookup is O(n) in the number of patterns), but it works.
index = next((index for index,value in enumerate(matcher.groups()) if value), None)
Replacement is done in one pass.

Starting from the precious answer of Andrew, I developed a script that loads the dictionary from a file and processes all the files in the current folder to do the replacements. The script loads the mappings from an external file in which you can set the separator. I'm a beginner, but I found this script very useful when doing multiple substitutions in multiple files. It loaded a dictionary with more than 1000 entries in seconds. It is not elegant, but it worked for me:
import glob
import re

mapfile = input("Enter map file name with extension eg. codifica.txt: ")
sep = input("Enter map file column separator eg. |: ")
mask = input("Enter search mask with extension eg. 2010*txt for all files to be processed: ")
suff = input("Enter suffix with extension eg. _NEW.txt for newly generated files: ")

rep = {}  # creation of an empty dictionary
with open(mapfile) as temprep:  # load the definitions into the dictionary from the input file; the separator is prompted
    for line in temprep:
        (key, val) = line.strip('\n').split(sep)
        rep[key] = val

for filename in glob.iglob(mask):  # iterate over all the files matching the prompted mask
    with open(filename, "r") as textfile:  # load each file into the variable text
        text = textfile.read()
    # start replacement
    # rep = dict((re.escape(k), v) for k, v in rep.items())  # commented out to allow re reserved characters in the mapping
    pattern = re.compile("|".join(rep.keys()))
    text = pattern.sub(lambda m: rep[m.group(0)], text)
    # write the output file with the prompted suffix
    with open(filename[:-4] + suff, "w") as target:
        target.write(text)

This is my solution to the problem. I used it in a chatbot to replace different words at once.
def mass_replace(text, dct):
    new_string = ""
    old_string = text
    while len(old_string) > 0:
        s = ""
        sk = ""
        for k in dct.keys():
            if old_string.startswith(k):
                s = dct[k]
                sk = k
        if s:
            new_string += s
            old_string = old_string[len(sk):]
        else:
            new_string += old_string[0]
            old_string = old_string[1:]
    return new_string

print(mass_replace("The dog hunts the cat", {"dog": "cat", "cat": "dog"}))
This will become "The cat hunts the dog".

Another example:
Input list
error_list = ['[br]', '[ex]', 'Something']
words = ['how', 'much[ex]', 'is[br]', 'the', 'fish[br]', 'noSomething', 'really']
The desired output would be
words = ['how', 'much', 'is', 'the', 'fish', 'no', 'really']
Code:
[n[0][0] if len(n[0]) else n[1] for n in [[[w.replace(e,"") for e in error_list if e in w],w] for w in words]]
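For readability, the same logic can be unrolled into a plain loop (variable names here are mine):
cleaned = []
for w in words:
    hits = [w.replace(e, "") for e in error_list if e in w]
    cleaned.append(hits[0] if hits else w)
print(cleaned)  # ['how', 'much', 'is', 'the', 'fish', 'no', 'really']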

My approach would be to first tokenize the string, then decide for each token whether to include it or not.
Potentially, this might be more performant, if we can assume O(1) lookup for a hashmap/set:
remove_words = {"we", "this"}
target_sent = "we should modify this string"
target_sent_words = target_sent.split()
filtered_sent = " ".join(list(filter(lambda word: word not in remove_words, target_sent_words)))
filtered_sent is now 'should modify string'

Or just for a fast hack:
for line in to_read:
    read_buffer = line
    stripped_buffer1 = read_buffer.replace("term1", " ")
    stripped_buffer2 = stripped_buffer1.replace("term2", " ")
    to_write.write(stripped_buffer2)

Here is another way of doing it with a dictionary:
listA="The cat jumped over the house".split()
modify = {word:word for number,word in enumerate(listA)}
modify["cat"],modify["jumped"]="dog","walked"
print " ".join(modify[x] for x in listA)

sentence = 'its some sentence with a something text'

def replaceAll(f, Array1, Array2):
    if len(Array1) == len(Array2):
        for x in range(len(Array1)):
            f = f.replace(Array1[x], Array2[x])
    return f

newSentence = replaceAll(sentence, ['a', 'sentence', 'something'], ['another', 'sentence', 'something something'])
print(newSentence)

Related

How to use multiple patterns for multiple replacements with Python module re? [duplicate]


Dynamically replace all starting and ending letters of words in a sentence by using regex plus a dictionary or hash map in Python

I'm looking for a way to create a function that dynamically replaces all the initial or beginning letters of words in a sentence. I created a function that replaces the initial letters with no problem.
import re

def replace_all_initial_letters(original, new, sentence):
    new_string = re.sub(r'\b' + original, new, sentence)
    return new_string
test_sentence = 'This was something that had to happen again'
print(replace_all_initial_letters('h', 'b', test_sentence))
Output: 'This was something that bad to bappen again'
I would, however, like to be able to enter multiple options into this function using a dictionary or hash map, for example:
initialLetterConversion = {
    'r': 'v',
    'h': 'b'
}
Or I think there might be a way to do this using regex grouping perhaps.
I'm also having trouble implementing this for ending letters. I tried the following function, but it does not work:
def replace_all_final_letters(original, new, sentence):
    new_string = re.sub(original + r'/s', new, sentence)
    return new_string
print(replace_all_final_letters('n', 'm', test_sentence))
Expected Output: 'This was something that had to happem agaim'
Any help would be greatly appreciated.
By "simple" grouping you can access to the match with the lastindex attribute. Notice that such indexes starts from 1. re.sub accept as second argument a callback to add more flexibility for custom substitutions. Here an example of usage:
import re

mapper = [
    {'regex': r'\b(w)', 'replace_with': 'W'},
    {'regex': r'\b(h)', 'replace_with': 'H'}]

regex = '|'.join(d['regex'] for d in mapper)

def replacer(match):
    return mapper[match.lastindex - 1]['replace_with']  # mapper is globally defined

text = 'This was something that had to happen again'
out = re.sub(regex, replacer, text)
print(out)
# This Was something that Had to Happen again
Ignore this if, for some reason, re is required for this. This is plain Python with no need for any imports.
The conversion map is a list of 2-tuples. Each tuple has a from and a to value, and these are not limited to strings of length 1.
This single function handles both the beginning and end of words, although the same mapping is applied to both "ends" of a word and may therefore need some adaptation.
sentence = 'This was something that had to happen again'

def func(sentence, conv_map):
    words = sentence.split()
    for i, word in enumerate(words):
        for f, t in conv_map:
            if word.startswith(f):
                words[i] = t + word[len(f):]
                word = words[i]
            if word.endswith(f):
                words[i] = word[:-len(f)] + t
    return ' '.join(words)

print(func(sentence, [('h', 'b'), ('a', 'x'), ('s', 'y')]))
Output:
Thiy way yomething that bad to bappen xgain

Can I pass a list of strings as "old" in str.replace(old, new[, count])? [duplicate]

I would like to use the .replace function to replace multiple strings.
I currently have
string.replace("condition1", "")
but would like to have something like
string.replace("condition1", "").replace("condition2", "text")
although that does not feel like good syntax
what is the proper way to do this? kind of like how in grep/regex you can do \1 and \2 to replace fields to certain search strings
Here is a short example that should do the trick with regular expressions:
import re
rep = {"condition1": "", "condition2": "text"} # define desired replacements here
# use these three lines to do the replacement
rep = dict((re.escape(k), v) for k, v in rep.iteritems())
#Python 3 renamed dict.iteritems to dict.items so use rep.items() for latest versions
pattern = re.compile("|".join(rep.keys()))
text = pattern.sub(lambda m: rep[re.escape(m.group(0))], text)
For example:
>>> pattern.sub(lambda m: rep[re.escape(m.group(0))], "(condition1) and --condition2--")
'() and --text--'
You could just make a nice little looping function.
def replace_all(text, dic):
for i, j in dic.iteritems():
text = text.replace(i, j)
return text
where text is the complete string and dic is a dictionary — each definition is a string that will replace a match to the term.
Note: in Python 3, iteritems() has been replaced with items()
Careful: Python dictionaries don't have a reliable order for iteration. This solution only solves your problem if:
order of replacements is irrelevant
it's ok for a replacement to change the results of previous replacements
Update: The above statement related to ordering of insertion does not apply to Python versions greater than or equal to 3.6, as standard dicts were changed to use insertion ordering for iteration.
For instance:
d = { "cat": "dog", "dog": "pig"}
my_sentence = "This is my cat and this is my dog."
replace_all(my_sentence, d)
print(my_sentence)
Possible output #1:
"This is my pig and this is my pig."
Possible output #2
"This is my dog and this is my pig."
One possible fix is to use an OrderedDict.
from collections import OrderedDict
def replace_all(text, dic):
for i, j in dic.items():
text = text.replace(i, j)
return text
od = OrderedDict([("cat", "dog"), ("dog", "pig")])
my_sentence = "This is my cat and this is my dog."
replace_all(my_sentence, od)
print(my_sentence)
Output:
"This is my pig and this is my pig."
Careful #2: Inefficient if your text string is too big or there are many pairs in the dictionary.
Why not one solution like this?
s = "The quick brown fox jumps over the lazy dog"
for r in (("brown", "red"), ("lazy", "quick")):
s = s.replace(*r)
#output will be: The quick red fox jumps over the quick dog
Here is a variant of the first solution using reduce, in case you like being functional. :)
repls = {'hello' : 'goodbye', 'world' : 'earth'}
s = 'hello, world'
reduce(lambda a, kv: a.replace(*kv), repls.iteritems(), s)
martineau's even better version:
repls = ('hello', 'goodbye'), ('world', 'earth')
s = 'hello, world'
reduce(lambda a, kv: a.replace(*kv), repls, s)
This is just a more concise recap of F.J and MiniQuark great answers and last but decisive improvement by bgusach. All you need to achieve multiple simultaneous string replacements is the following function:
def multiple_replace(string, rep_dict):
pattern = re.compile("|".join([re.escape(k) for k in sorted(rep_dict,key=len,reverse=True)]), flags=re.DOTALL)
return pattern.sub(lambda x: rep_dict[x.group(0)], string)
Usage:
>>>multiple_replace("Do you like cafe? No, I prefer tea.", {'cafe':'tea', 'tea':'cafe', 'like':'prefer'})
'Do you prefer tea? No, I prefer cafe.'
If you wish, you can make your own dedicated replacement functions starting from this simpler one.
Starting Python 3.8, and the introduction of assignment expressions (PEP 572) (:= operator), we can apply the replacements within a list comprehension:
# text = "The quick brown fox jumps over the lazy dog"
# replacements = [("brown", "red"), ("lazy", "quick")]
[text := text.replace(a, b) for a, b in replacements]
# text = 'The quick red fox jumps over the quick dog'
I built this upon F.J.s excellent answer:
import re
def multiple_replacer(*key_values):
replace_dict = dict(key_values)
replacement_function = lambda match: replace_dict[match.group(0)]
pattern = re.compile("|".join([re.escape(k) for k, v in key_values]), re.M)
return lambda string: pattern.sub(replacement_function, string)
def multiple_replace(string, *key_values):
return multiple_replacer(*key_values)(string)
One shot usage:
>>> replacements = (u"café", u"tea"), (u"tea", u"café"), (u"like", u"love")
>>> print multiple_replace(u"Do you like café? No, I prefer tea.", *replacements)
Do you love tea? No, I prefer café.
Note that since replacement is done in just one pass, "café" changes to "tea", but it does not change back to "café".
If you need to do the same replacement many times, you can create a replacement function easily:
>>> my_escaper = multiple_replacer(('"','\\"'), ('\t', '\\t'))
>>> many_many_strings = (u'This text will be escaped by "my_escaper"',
u'Does this work?\tYes it does',
u'And can we span\nmultiple lines?\t"Yes\twe\tcan!"')
>>> for line in many_many_strings:
... print my_escaper(line)
...
This text will be escaped by \"my_escaper\"
Does this work?\tYes it does
And can we span
multiple lines?\t\"Yes\twe\tcan!\"
Improvements:
turned code into a function
added multiline support
fixed a bug in escaping
easy to create a function for a specific multiple replacement
Enjoy! :-)
I would like to propose the usage of string templates. Just place the string to be replaced in a dictionary and all is set! Example from docs.python.org
>>> from string import Template
>>> s = Template('$who likes $what')
>>> s.substitute(who='tim', what='kung pao')
'tim likes kung pao'
>>> d = dict(who='tim')
>>> Template('Give $who $100').substitute(d)
Traceback (most recent call last):
[...]
ValueError: Invalid placeholder in string: line 1, col 10
>>> Template('$who likes $what').substitute(d)
Traceback (most recent call last):
[...]
KeyError: 'what'
>>> Template('$who likes $what').safe_substitute(d)
'tim likes $what'
In my case, I needed a simple replacing of unique keys with names, so I thought this up:
a = 'This is a test string.'
b = {'i': 'I', 's': 'S'}
for x,y in b.items():
a = a.replace(x, y)
>>> a
'ThIS IS a teSt StrIng.'
Here my $0.02. It is based on Andrew Clark's answer, just a little bit clearer, and it also covers the case when a string to replace is a substring of another string to replace (longer string wins)
def multireplace(string, replacements):
"""
Given a string and a replacement map, it returns the replaced string.
:param str string: string to execute replacements on
:param dict replacements: replacement dictionary {value to find: value to replace}
:rtype: str
"""
# Place longer ones first to keep shorter substrings from matching
# where the longer ones should take place
# For instance given the replacements {'ab': 'AB', 'abc': 'ABC'} against
# the string 'hey abc', it should produce 'hey ABC' and not 'hey ABc'
substrs = sorted(replacements, key=len, reverse=True)
# Create a big OR regex that matches any of the substrings to replace
regexp = re.compile('|'.join(map(re.escape, substrs)))
# For each match, look up the new string in the replacements
return regexp.sub(lambda match: replacements[match.group(0)], string)
It is in this this gist, feel free to modify it if you have any proposal.
I needed a solution where the strings to be replaced can be a regular expressions,
for example to help in normalizing a long text by replacing multiple whitespace characters with a single one. Building on a chain of answers from others, including MiniQuark and mmj, this is what I came up with:
def multiple_replace(string, reps, re_flags = 0):
""" Transforms string, replacing keys from re_str_dict with values.
reps: dictionary, or list of key-value pairs (to enforce ordering;
earlier items have higher priority).
Keys are used as regular expressions.
re_flags: interpretation of regular expressions, such as re.DOTALL
"""
if isinstance(reps, dict):
reps = reps.items()
pattern = re.compile("|".join("(?P<_%d>%s)" % (i, re_str[0])
for i, re_str in enumerate(reps)),
re_flags)
return pattern.sub(lambda x: reps[int(x.lastgroup[1:])][1], string)
It works for the examples given in other answers, for example:
>>> multiple_replace("(condition1) and --condition2--",
... {"condition1": "", "condition2": "text"})
'() and --text--'
>>> multiple_replace('hello, world', {'hello' : 'goodbye', 'world' : 'earth'})
'goodbye, earth'
>>> multiple_replace("Do you like cafe? No, I prefer tea.",
... {'cafe': 'tea', 'tea': 'cafe', 'like': 'prefer'})
'Do you prefer tea? No, I prefer cafe.'
The main thing for me is that you can use regular expressions as well, for example to replace whole words only, or to normalize white space:
>>> s = "I don't want to change this name:\n Philip II of Spain"
>>> re_str_dict = {r'\bI\b': 'You', r'[\n\t ]+': ' '}
>>> multiple_replace(s, re_str_dict)
"You don't want to change this name: Philip II of Spain"
If you want to use the dictionary keys as normal strings,
you can escape those before calling multiple_replace using e.g. this function:
def escape_keys(d):
""" transform dictionary d by applying re.escape to the keys """
return dict((re.escape(k), v) for k, v in d.items())
>>> multiple_replace(s, escape_keys(re_str_dict))
"I don't want to change this name:\n Philip II of Spain"
The following function can help in finding erroneous regular expressions among your dictionary keys (since the error message from multiple_replace isn't very telling):
def check_re_list(re_list):
""" Checks if each regular expression in list is well-formed. """
for i, e in enumerate(re_list):
try:
re.compile(e)
except (TypeError, re.error):
print("Invalid regular expression string "
"at position {}: '{}'".format(i, e))
>>> check_re_list(re_str_dict.keys())
Note that it does not chain the replacements, instead performs them simultaneously. This makes it more efficient without constraining what it can do. To mimic the effect of chaining, you may just need to add more string-replacement pairs and ensure the expected ordering of the pairs:
>>> multiple_replace("button", {"but": "mut", "mutton": "lamb"})
'mutton'
>>> multiple_replace("button", [("button", "lamb"),
... ("but", "mut"), ("mutton", "lamb")])
'lamb'
Note: Test your case, see comments.
Here's a sample which is more efficient on long strings with many small replacements.
source = "Here is foo, it does moo!"
replacements = {
'is': 'was', # replace 'is' with 'was'
'does': 'did',
'!': '?'
}
def replace(source, replacements):
finder = re.compile("|".join(re.escape(k) for k in replacements.keys())) # matches every string we want replaced
result = []
pos = 0
while True:
match = finder.search(source, pos)
if match:
# cut off the part up until match
result.append(source[pos : match.start()])
# cut off the matched part and replace it in place
result.append(replacements[source[match.start() : match.end()]])
pos = match.end()
else:
# the rest after the last match
result.append(source[pos:])
break
return "".join(result)
print replace(source, replacements)
The point is in avoiding many concatenations of long strings. We chop the source string to fragments, replacing some of the fragments as we form the list, and then join the whole thing back into a string.
I was doing a similar exercise in one of my school homework. This was my solution
dictionary = {1: ['hate', 'love'],
2: ['salad', 'burger'],
3: ['vegetables', 'pizza']}
def normalize(text):
for i in dictionary:
text = text.replace(dictionary[i][0], dictionary[i][1])
return text
See result yourself on test string
string_to_change = 'I hate salad and vegetables'
print(normalize(string_to_change))
You can use the pandas library and the replace function which supports both exact matches as well as regex replacements. For example:
import pandas as pd

df = pd.DataFrame({'text': ['Billy is going to visit Rome in November', 'I was born in 10/10/2010', 'I will be there at 20:00']})
to_replace = ['Billy', 'Rome', 'January|February|March|April|May|June|July|August|September|October|November|December', r'\d{2}:\d{2}', r'\d{2}/\d{2}/\d{4}']
replace_with = ['name', 'city', 'month', 'time', 'date']
print(df.text.replace(to_replace, replace_with, regex=True))
And the modified text is:
0 name is going to visit city in month
1 I was born in date
2 I will be there at time
You can find an example here. Notice that the replacements are applied to the text in the order they appear in the lists.
I was struggling with this problem as well. With many substitutions, regular expressions struggle and were about four times slower than looping str.replace (under my experimental conditions).
You should absolutely try the Flashtext library (blog post here, GitHub here). In my case it was a bit over two orders of magnitude faster, going from 1.8 s to 0.015 s per document (regular expressions took 7.7 s).
It is easy to find use examples in the links above, but this is a working example:
from flashtext import KeywordProcessor

processor = KeywordProcessor(case_sensitive=False)
for k, v in my_dict.items():   # my_dict maps old substrings to their replacements
    processor.add_keyword(k, v)
new_string = processor.replace_keywords(string)
Note that Flashtext makes substitutions in a single pass (to avoid a --> b and b --> c translating 'a' into 'c'). Flashtext also looks for whole words (so 'is' will not match 'this'). It works fine if your target is several words (replacing 'This is' by 'Hello').
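A minimal self-contained sketch of that behaviour (the keywords and the sample sentence here are made up for illustration, not taken from the original answer):
from flashtext import KeywordProcessor

processor = KeywordProcessor(case_sensitive=False)
processor.add_keyword('is', 'was')        # single word
processor.add_keyword('my cat', 'my dog') # multi-word target also works

# whole-word matching: the 'is' inside 'This' is left alone
print(processor.replace_keywords('This is my cat.'))   # expected: This was my dog.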
I faced a similar problem today, where I had to use the .replace() method multiple times, but it didn't feel good to me. So I did something like this:
REPLACEMENTS = {'<': '&lt;', '>': '&gt;', '&': '&amp;'}
event_title = ''.join([REPLACEMENTS.get(c, c) for c in event['summary']])
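Note that this character-by-character approach only works because every key in REPLACEMENTS is a single character; a small sketch of the limitation (the sample strings are made up for illustration):
REPLACEMENTS = {'<': '&lt;', '>': '&gt;', '&': '&amp;'}

# Fine: every key is one character, so scanning one character at a time is enough.
print(''.join(REPLACEMENTS.get(c, c) for c in 'a < b & c'))   # a &lt; b &amp; c

# A multi-character key such as '<=' could never match, because the
# comprehension only ever sees one character at a time.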
I feel this question needs a single-line recursive lambda function answer for completeness, just because. So there:
>>> mrep = lambda s, d: s if not d else mrep(s.replace(*d.popitem()), d)
Usage:
>>> mrep('abcabc', {'a': '1', 'c': '2'})
'1b21b2'
Notes:
This consumes the input dictionary.
Python dicts preserve key order as of 3.6; corresponding caveats in other answers are not relevant anymore. For backward compatibility one could resort to a tuple-based version:
>>> mrep = lambda s, d: s if not d else mrep(s.replace(*d.pop()), d)
>>> mrep('abcabc', [('a', '1'), ('c', '2')])
Note: as with all recursive functions in Python, too large a recursion depth (i.e. too large a replacement dictionary) will result in an error. See e.g. here.
You should really not do it this way, but I just find it way too cool:
>>> replacements = {'cond1': 'text1', 'cond2': 'text2'}
>>> cmd = 'answer = s'
>>> for k, v in replacements.items():
...     cmd += '.replace("%s", "%s")' % (k, v)
...
>>> exec(cmd)
Now, answer holds the result of all the replacements applied in turn.
Again, this is very hacky and not something you should use regularly, but it's nice to know you can do something like this if you ever need to.
For replacing single characters only, using str.translate with str.maketrans is my favorite method.
tl;dr > result_string = your_string.translate(str.maketrans(dict_mapping))
Demo:
my_string = 'This is a test string.'
dict_mapping = {'i': 's', 's': 'S'}

result_good = my_string.translate(str.maketrans(dict_mapping))

result_bad = my_string
for x, y in dict_mapping.items():
    result_bad = result_bad.replace(x, y)

print(result_good)  # ThsS sS a teSt Strsng.
print(result_bad)   # ThSS SS a teSt StrSng.
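As a side note (not part of the original answer), the dictionary passed to str.maketrans may also map single characters to longer strings or to None, which deletes them; a small sketch:
table = str.maketrans({'i': 'ee', 's': None})   # expand 'i', delete every 's'
print('This is a test string.'.translate(table))   # Thee ee a tet treeng.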
I don't know about speed but this is my workaday quick fix:
from functools import reduce  # needed on Python 3

reduce(lambda a, b: a.replace(*b),
       [('o', 'W'), ('t', 'X')],  # iterable of pairs: (oldval, newval)
       'tomato')                  # the string in which to replace values
... but I like the #1 regex answer above. Note: if one new value is a substring of another one, the operation is not commutative, so order matters.
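A small sketch of that order dependence, reusing the cat/dog/pig example from earlier in this thread (with the functools import that Python 3 needs):
from functools import reduce

pairs = [("cat", "dog"), ("dog", "pig")]
print(reduce(lambda a, kv: a.replace(*kv), pairs, "my cat and my dog"))
# my pig and my pig  -- 'cat' became 'dog' first, then every 'dog' became 'pig'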
Here is a version with support for basic regex replacement. The main restriction is that expressions must not contain subgroups, and there may be some edge cases:
Code based on #bgusach and others
import re

class StringReplacer:
    def __init__(self, replacements, ignore_case=False):
        patterns = sorted(replacements, key=len, reverse=True)
        self.replacements = [replacements[k] for k in patterns]
        re_mode = re.IGNORECASE if ignore_case else 0
        self.pattern = re.compile('|'.join("({})".format(p) for p in patterns), re_mode)

        def tr(matcher):
            index = next((index for index, value in enumerate(matcher.groups()) if value), None)
            return self.replacements[index]

        self.tr = tr

    def __call__(self, string):
        return self.pattern.sub(self.tr, string)
Tests
table = {
"aaa" : "[This is three a]",
"b+" : "[This is one or more b]",
r"<\w+>" : "[This is a tag]"
}
replacer = StringReplacer(table, True)
sample1 = "whatever bb, aaa, <star> BBB <end>"
print(replacer(sample1))
# output:
# whatever [This is one or more b], [This is three a], [This is a tag] [This is one or more b] [This is a tag]
The trick is to identify the matched group by its position. It is not super efficient (O(n)), but it works.
index = next((index for index,value in enumerate(matcher.groups()) if value), None)
Replacement is done in one pass.
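If the linear scan over matcher.groups() bothers you, re match objects expose a lastindex attribute (the index of the last, and here the only, matched group), which gives the same information in constant time. A possible variant of the tr closure, as a sketch; it relies on the answer's existing restriction that the patterns contain no subgroups:
def tr(matcher):
    # groups are numbered from 1, the replacements list from 0
    return self.replacements[matcher.lastindex - 1]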
Starting from Andrew's answer above, I developed a script that loads the replacement dictionary from a file and processes all the files in the current folder to do the replacements. The script loads the mappings from an external file in which you can set the separator. I'm a beginner, but I found this script very useful when doing multiple substitutions in multiple files. It loaded a dictionary with more than 1000 entries in seconds. It is not elegant, but it worked for me.
import glob
import re

mapfile = input("Enter map file name with extension eg. codifica.txt: ")
sep = input("Enter map file column separator eg. |: ")
mask = input("Enter search mask with extension eg. 2010*txt for all files to be processed: ")
suff = input("Enter suffix with extension eg. _NEW.txt for newly generated files: ")

rep = {}  # create an empty dictionary
with open(mapfile) as temprep:  # load the definitions into the dictionary from the map file, using the prompted separator
    for line in temprep:
        (key, val) = line.strip('\n').split(sep)
        rep[key] = val

for filename in glob.iglob(mask):  # iterate over all the files matching the prompted mask
    with open(filename, "r") as textfile:  # load each file into the variable text
        text = textfile.read()
    # start replacement
    # rep = dict((re.escape(k), v) for k, v in rep.items())  # commented out to allow re reserved characters in the mappings
    pattern = re.compile("|".join(rep.keys()))
    text = pattern.sub(lambda m: rep[m.group(0)], text)
    # write the output file with the prompted suffix
    target = open(filename[:-4] + suff, "w")
    target.write(text)
    target.close()
This is my solution to the problem. I used it in a chatbot to replace different words at once.
def mass_replace(text, dct):
    new_string = ""
    old_string = text
    while len(old_string) > 0:
        s = ""
        sk = ""
        for k in dct.keys():
            if old_string.startswith(k):
                s = dct[k]
                sk = k
        if s:
            new_string += s
            old_string = old_string[len(sk):]
        else:
            new_string += old_string[0]
            old_string = old_string[1:]
    return new_string

print(mass_replace("The dog hunts the cat", {"dog": "cat", "cat": "dog"}))
This will print: The cat hunts the dog
Another example:
Input lists:
error_list = ['[br]', '[ex]', 'Something']
words = ['how', 'much[ex]', 'is[br]', 'the', 'fish[br]', 'noSomething', 'really']
The desired output would be
words = ['how', 'much', 'is', 'the', 'fish', 'no', 'really']
Code :
[n[0][0] if len(n[0]) else n[1] for n in [[[w.replace(e,"") for e in error_list if e in w],w] for w in words]]
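For readability, an equivalent loop version (a sketch; it strips every listed error from each word, which gives the same result for the sample data):
cleaned = []
for w in words:
    for e in error_list:
        w = w.replace(e, "")
    cleaned.append(w)
print(cleaned)   # ['how', 'much', 'is', 'the', 'fish', 'no', 'really']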
My approach would be to first tokenize the string, then decide for each token whether to include it or not.
This can potentially be more performant, since we can assume O(1) lookup for a hash map/set:
remove_words = {"we", "this"}
target_sent = "we should modify this string"
target_sent_words = target_sent.split()
filtered_sent = " ".join(list(filter(lambda word: word not in remove_words, target_sent_words)))
filtered_sent is now 'should modify string'
Or just for a fast hack:
for line in to_read:                     # to_read / to_write: already-open file objects
    read_buffer = line
    stripped_buffer1 = read_buffer.replace("term1", " ")
    stripped_buffer2 = stripped_buffer1.replace("term2", " ")
    to_write.write(stripped_buffer2)
Here is another way of doing it with a dictionary:
listA="The cat jumped over the house".split()
modify = {word:word for number,word in enumerate(listA)}
modify["cat"],modify["jumped"]="dog","walked"
print " ".join(modify[x] for x in listA)
sentence = 'its some sentence with a something text'

def replaceAll(f, Array1, Array2):
    if len(Array1) == len(Array2):
        for x in range(len(Array1)):
            f = f.replace(Array1[x], Array2[x])
    return f

newSentence = replaceAll(sentence, ['a', 'sentence', 'something'], ['another', 'sentence', 'something something'])
print(newSentence)

How to use regular expressions to compare an input with a set?

I would like Python regular-expression code to:
1) Take an input of characters
2) Output the characters in all lower-case letters
3) Compare this output against a Python set.
I am no good at all with regular expressions.
Why bother?
>>> 'FOO'.lower() in set(('foo', 'bar', 'baz'))
True
>>> 'Quux'.lower() in set(('foo', 'bar', 'baz'))
False
After much Google searching and some trial and error, I created a solution that separates the multiple words from the input characters.
import re

keywords = ('cars', 'jewelry', 'gas')
pattern = re.compile('[a-z]+', re.IGNORECASE)
txt = 'GAS, CaRs, Jewelrys'
keywords_found = pattern.findall(txt.lower())
n = 0
for i in keywords_found:
    if i in keywords:
        print keywords_found[n]
        n = n + 1
Your self-answer would be better using a set rather than that loop.
Using i for a text variable and n for an index is very counter-intuitive. And keywords_found is a misnomer.
Try this:
>>> import re
>>> keywords = set(('cars', 'jewelry', 'gas'))
>>> pattern = re.compile('[a-z]+', re.IGNORECASE)
>>> txt = 'GAS, CaRs, Jewelrys'
>>> text_words = set(pattern.findall(txt.lower()))
>>> print "keywords:", keywords
keywords: set(['cars', 'gas', 'jewelry'])
>>> print "text_words:", text_words
text_words: set(['cars', 'gas', 'jewelrys'])
>>> print "text words in keywords:", text_words & keywords
text words in keywords: set(['cars', 'gas'])
>>> print "text words NOT in keywords:", text_words - (text_words & keywords)
text words NOT in keywords: set(['jewelrys'])
>>> print "keywords NOT in text words:", keywords - (text_words & keywords)
keywords NOT in text words: set(['jewelry'])

How to replace multiple substrings of a string?

Starting with Python 3.8 and the introduction of assignment expressions (PEP 572, the := operator), we can apply the replacements within a list comprehension:
# text = "The quick brown fox jumps over the lazy dog"
# replacements = [("brown", "red"), ("lazy", "quick")]
[text := text.replace(a, b) for a, b in replacements]
# text = 'The quick red fox jumps over the quick dog'
I built this upon F.J.'s excellent answer:
import re

def multiple_replacer(*key_values):
    replace_dict = dict(key_values)
    replacement_function = lambda match: replace_dict[match.group(0)]
    pattern = re.compile("|".join([re.escape(k) for k, v in key_values]), re.M)
    return lambda string: pattern.sub(replacement_function, string)

def multiple_replace(string, *key_values):
    return multiple_replacer(*key_values)(string)
One shot usage:
>>> replacements = (u"café", u"tea"), (u"tea", u"café"), (u"like", u"love")
>>> print multiple_replace(u"Do you like café? No, I prefer tea.", *replacements)
Do you love tea? No, I prefer café.
Note that since replacement is done in just one pass, "café" changes to "tea", but it does not change back to "café".
If you need to do the same replacement many times, you can create a replacement function easily:
>>> my_escaper = multiple_replacer(('"','\\"'), ('\t', '\\t'))
>>> many_many_strings = (u'This text will be escaped by "my_escaper"',
u'Does this work?\tYes it does',
u'And can we span\nmultiple lines?\t"Yes\twe\tcan!"')
>>> for line in many_many_strings:
... print my_escaper(line)
...
This text will be escaped by \"my_escaper\"
Does this work?\tYes it does
And can we span
multiple lines?\t\"Yes\twe\tcan!\"
Improvements:
turned code into a function
added multiline support
fixed a bug in escaping
easy to create a function for a specific multiple replacement
Enjoy! :-)
I would like to propose using string templates. Just place the strings to be replaced in a dictionary and all is set! Example from docs.python.org:
>>> from string import Template
>>> s = Template('$who likes $what')
>>> s.substitute(who='tim', what='kung pao')
'tim likes kung pao'
>>> d = dict(who='tim')
>>> Template('Give $who $100').substitute(d)
Traceback (most recent call last):
[...]
ValueError: Invalid placeholder in string: line 1, col 10
>>> Template('$who likes $what').substitute(d)
Traceback (most recent call last):
[...]
KeyError: 'what'
>>> Template('$who likes $what').safe_substitute(d)
'tim likes $what'
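Adapted to this question's replacement-map use case (a sketch; the placeholder names and the sample text are made up here), the keys of the mapping become $placeholders in the template:
from string import Template

rep = {'condition1': '', 'condition2': 'text'}
template = Template('($condition1) and --$condition2--')
print(template.safe_substitute(rep))   # () and --text--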
In my case, I needed a simple replacement of unique keys with names, so I thought this up:
a = 'This is a test string.'
b = {'i': 'I', 's': 'S'}
for x, y in b.items():
    a = a.replace(x, y)
>>> a
'ThIS IS a teSt StrIng.'
Here's my $0.02. It is based on Andrew Clark's answer, just a little bit clearer, and it also covers the case where one string to replace is a substring of another string to replace (the longer string wins):
import re

def multireplace(string, replacements):
    """
    Given a string and a replacement map, it returns the replaced string.

    :param str string: string to execute replacements on
    :param dict replacements: replacement dictionary {value to find: value to replace}
    :rtype: str
    """
    # Place longer ones first to keep shorter substrings from matching where the
    # longer ones should take place.
    # For instance, given the replacements {'ab': 'AB', 'abc': 'ABC'} against
    # the string 'hey abc', it should produce 'hey ABC' and not 'hey ABc'.
    substrs = sorted(replacements, key=len, reverse=True)

    # Create a big OR regex that matches any of the substrings to replace
    regexp = re.compile('|'.join(map(re.escape, substrs)))

    # For each match, look up the new string in the replacements
    return regexp.sub(lambda match: replacements[match.group(0)], string)
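Usage, taken from the example in the docstring:
>>> multireplace('hey abc', {'ab': 'AB', 'abc': 'ABC'})
'hey ABC'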
It is in this gist; feel free to modify it if you have any proposals.
I needed a solution where the strings to be replaced could be regular expressions,
for example to help in normalizing a long text by replacing multiple whitespace characters with a single one. Building on a chain of answers from others, including MiniQuark and mmj, this is what I came up with:
import re

def multiple_replace(string, reps, re_flags=0):
    """ Transforms string, replacing keys from reps with the corresponding values.
    reps: dictionary, or list of key-value pairs (to enforce ordering;
          earlier items have higher priority).
          Keys are used as regular expressions.
    re_flags: interpretation of regular expressions, such as re.DOTALL
    """
    if isinstance(reps, dict):
        reps = list(reps.items())
    pattern = re.compile("|".join("(?P<_%d>%s)" % (i, re_str[0])
                                  for i, re_str in enumerate(reps)),
                         re_flags)
    return pattern.sub(lambda x: reps[int(x.lastgroup[1:])][1], string)
It works for the examples given in other answers, for example:
>>> multiple_replace("(condition1) and --condition2--",
... {"condition1": "", "condition2": "text"})
'() and --text--'
>>> multiple_replace('hello, world', {'hello' : 'goodbye', 'world' : 'earth'})
'goodbye, earth'
>>> multiple_replace("Do you like cafe? No, I prefer tea.",
... {'cafe': 'tea', 'tea': 'cafe', 'like': 'prefer'})
'Do you prefer tea? No, I prefer cafe.'