Python dictionary replacement with space in key

I have a string and a dictionary, and I have to replace every occurrence of each dict key in that text.
text = 'I have a smartphone and a Smart TV'
dict = {
'smartphone': 'toy',
'smart tv': 'junk'
}
If there were no spaces in the keys, I would break the text into words and compare them one by one with the dict, which looks like O(n). But the keys can contain spaces, so things are more complicated. Please suggest a good way to do this, and note that a key may not match the case used in the text.
Update
I have thought of this solution, but it is not efficient: O(m*n) or more...
for k, v in dict.iteritems():
    text = text.replace(k, v)  # or regex...

If the keywords in the text are not right next to each other (keyword other keyword), we may do this. Looks like O(n) to me >"<
def dict_replace(dictionary, text, strip_chars=None, replace_func=None):
    """
    Replace each word or word phrase in text with its value in dictionary.

    Arguments:
        dictionary: dict of key:value pairs; keys should be lower case
        text: string to process
        strip_chars: string of characters to strip from each word
        replace_func: optional function that transforms the final replacement.
                      Must take 2 params: key and value.

    Return:
        string

    Example:
        my_dict = {
            "hello": "hallo",
            "hallo": "hello",  # Only one pass, don't worry
            "smart tv": "http://google.com?q=smart+tv"
        }
        dict_replace(my_dict, "hello google smart tv",
                     replace_func=lambda k, v: '[%s](%s)' % (k, v))
    """
    # First break each word phrase in the dictionary into single words
    dictionary = dictionary.copy()
    for key in dictionary.keys():
        if ' ' in key:
            key_parts = key.split()
            for part in key_parts:
                # Mark single words with False
                if part not in dictionary:
                    dictionary[part] = False

    # Break text into words and compare one by one
    result = []
    words = text.split()
    words.append('')
    last_match = ''   # Last keyword (lower case) matched
    original = ''     # Last match as it appeared in the original text
    for word in words:
        key_word = word.lower().strip(strip_chars) if \
            strip_chars is not None else word.lower()
        if key_word in dictionary:
            last_match = last_match + ' ' + key_word if \
                last_match != '' else key_word
            original = original + ' ' + word if \
                original != '' else word
        else:
            if last_match != '':
                # The whole phrase matched
                if last_match in dictionary and dictionary[last_match] != False:
                    if replace_func is not None:
                        result.append(replace_func(original, dictionary[last_match]))
                    else:
                        result.append(dictionary[last_match])
                else:
                    # Only part of a keyword matched
                    match_parts = last_match.split(' ')
                    match_original = original.split(' ')
                    for i in xrange(0, len(match_parts)):
                        if match_parts[i] in dictionary and \
                                dictionary[match_parts[i]] != False:
                            if replace_func is not None:
                                result.append(replace_func(match_original[i],
                                                           dictionary[match_parts[i]]))
                            else:
                                result.append(dictionary[match_parts[i]])
            result.append(word)
            last_match = ''
            original = ''

    return ' '.join(result)

If your keys have no spaces:
output = [dct[i] if i in dct else i for i in text.split()]
' '.join(output)
You should use dct instead of dict so it doesn't shadow the built-in dict.
This makes use of a list comprehension and a conditional (ternary) expression
to filter the data.
If your keys do have spaces, you are correct:
for k, v in dct.iteritems():
    text = text.replace(k, v)
And yes, the time complexity here is O(m*n), as you have to scan the whole text once for each key in dct.
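Since the question also says the keys may not match the text's case, here is a rough sketch of the same loop made case-insensitive with re (word boundaries added so 'smart' can't match inside 'smartphone'; dct and text are the question's data, and it is still O(m*n)):
import re

dct = {'smartphone': 'toy', 'smart tv': 'junk'}
text = 'I have a smartphone and a Smart TV'

for k, v in dct.items():
    # re.escape guards any regex metacharacters in the key
    text = re.sub(r'\b%s\b' % re.escape(k), v, text, flags=re.IGNORECASE)

print(text)  # I have a toy and a junk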

Drop all dictionary keys and the input text to lower case, so the comparisons are easy. Now ...
for entry in my_dict:
    if entry in text:
        # process the match
This assumes that the dictionary is small enough that checking every entry against the text is worthwhile. If, instead, the dictionary is large and the text is small, you'll need to take each word, then each 2-word phrase, and see whether they're in the dictionary.
Is that enough to get you going?
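For that second case, here is a minimal sketch of the word-and-phrase lookup, assuming the keys are already lower case and at most two words long (my_dict and text are the question's data):
my_dict = {'smartphone': 'toy', 'smart tv': 'junk'}
text = 'I have a smartphone and a Smart TV'

words = text.split()
result = []
i = 0
while i < len(words):
    two = ' '.join(w.lower() for w in words[i:i + 2])   # 2-word phrase
    one = words[i].lower()                               # single word
    if two in my_dict:
        result.append(my_dict[two])
        i += 2
    elif one in my_dict:
        result.append(my_dict[one])
        i += 1
    else:
        result.append(words[i])
        i += 1
print(' '.join(result))   # I have a toy and a junk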

You need to test all the neighbor permutations, from length 1 (each individual word) up to the full number of words in the text (the entire string). You can generate the neighbor permutations this way:
text = 'I have a smartphone and a Smart TV'
array = text.lower().split()
key_permutations = [" ".join(array[j:j + i]) for i in range(1, len(array) + 1) for j in range(0, len(array) - (i - 1))]
>>> key_permutations
['i', 'have', 'a', 'smartphone', 'and', 'a', 'smart', 'tv', 'i have', 'have a', 'a smartphone', 'smartphone and', 'and a', 'a smart', 'smart tv', 'i have a', 'have a smartphone', 'a smartphone and', 'smartphone and a', 'and a smart', 'a smart tv', 'i have a smartphone', 'have a smartphone and', 'a smartphone and a', 'smartphone and a smart', 'and a smart tv', 'i have a smartphone and', 'have a smartphone and a', 'a smartphone and a smart', 'smartphone and a smart tv', 'i have a smartphone and a', 'have a smartphone and a smart', 'a smartphone and a smart tv', 'i have a smartphone and a smart', 'have a smartphone and a smart tv', 'i have a smartphone and a smart tv']
Now we substitute through the dictionary:
import re
for permutation in key_permutations:
    if permutation in dict:
        text = re.sub(re.escape(permutation), dict[permutation], text, flags=re.IGNORECASE)
>>> text
'I have a toy and a junk'
Though you'll likely want to try the permutations in the reverse order, longest first, so more specific phrases have precedence over individual words.
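One way to do that, reusing key_permutations and the loop above, is simply to sort the candidates by length before substituting (a sketch; it keeps the question's variable names, including dict):
for permutation in sorted(key_permutations, key=len, reverse=True):
    if permutation in dict:
        text = re.sub(re.escape(permutation), dict[permutation], text, flags=re.IGNORECASE)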

You can do this pretty easily with regular expressions.
import re
text = 'I have a smartphone and a Smart TV'
dict = {
'smartphone': 'toy',
'smart tv': 'junk'
}
for k, v in dict.iteritems():
    regex = re.compile(re.escape(k), flags=re.I)
    text = regex.sub(v, text)
It still suffers from the problem of depending on processing order of the dict keys, if the replacement value for one item is part of the search term for another item.
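One possible way around that ordering issue is a single pass with one alternation pattern, longest keys first, so replacement values are never scanned again. This is only a sketch using the question's data, renamed to dct to avoid shadowing the built-in, and it assumes the dict keys are lower case:
import re

dct = {'smartphone': 'toy', 'smart tv': 'junk'}
text = 'I have a smartphone and a Smart TV'

keys = sorted(dct, key=len, reverse=True)                    # longest keys first
pattern = re.compile('|'.join(re.escape(k) for k in keys), flags=re.I)
text = pattern.sub(lambda m: dct[m.group(0).lower()], text)  # one pass over the text

print(text)   # I have a toy and a junk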

Related

Python - Trying to replace words in a list of strings but having problems with single letter words

I have a list of strings such as
words = ['Twinkle Twinkle', 'How I wonder']
I am trying to create a function that will find and replace words in the original list. I was able to do that, except for when the user inputs single-letter words such as 'I' or 'a'.
Current function:
def sub(old: string, new: string, words: list):
    words[:] = [w.replace(old, new) for w in words]
If the input is old = 'I' and new = 'ASD':
current output = ['TwASDnkle TwASDnkle', 'How ASD wonder']
intended output = ['Twinkle Twinkle', 'How ASD wonder']
This is my first post here and I have only been learning python for a few months now so I would appreciate any help, thank you
Don't use str.replace in a loop. This often doesn't do what is expected, as it doesn't operate on whole words but on every substring match.
Instead, split the words, replace on match and join:
l = ['Twinkle Twinkle', 'How I wonder']
def sub(old: str, new: str, words: list):
    words[:] = [' '.join(new if w == old else w for w in x.split()) for x in words]
sub('I', 'ASD', l)
Output: ['Twinkle Twinkle', 'How ASD wonder']
Or use a regex with word boundaries:
import re
def sub(old, new, words):
    words[:] = [re.sub(fr'\b{re.escape(old)}\b', new, w) for w in words]
l = ['Twinkle Twinkle', 'How I wonder']
sub('I', 'ASD', l)
# ['Twinkle Twinkle', 'How ASD wonder']
NB. As @re-za pointed out, it might be better practice to return a new list rather than mutating the input; just be aware of it.
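A minimal sketch of that non-mutating variant (same word-boundary regex, but returning a fresh list):
import re

def sub(old: str, new: str, words: list) -> list:
    # returns a new list instead of mutating the argument
    return [re.sub(fr'\b{re.escape(old)}\b', new, w) for w in words]

l = sub('I', 'ASD', ['Twinkle Twinkle', 'How I wonder'])
# ['Twinkle Twinkle', 'How ASD wonder']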
It seems like you are replacing letters and not words. I recommend splitting each sentence (string) into words by splitting on the ' ' (space) character.
output = []
I would first get each string from the list like this:
for string in words:
I would then split the strings into a list of words like this:
temp_string = '' # a temp string we will use later to reconstruct the words
for word in string.split(' '):
Then I would check to see if the word is the one we are looking for by comparing it to old, and replacing (if it matches) with new:
if word == old:
    temp_string += new + ' '
else:
    temp_string += word + ' '
Now that we have each word reconstructed or replaced (if needed) back into a temp_string we can put all the temp_strings back into the array like this:
output.append(temp_string[:-1]) # [:-1] means we omit the space at the end
It should finally look like this:
def sub(old: string, new: string, words: list):
    output = []
    for string in words:
        temp_string = ''  # a temp string we will use later to reconstruct the words
        for word in string.split(' '):
            if word == old:
                temp_string += new + ' '
            else:
                temp_string += word + ' '
        output.append(temp_string[:-1])  # [:-1] means we omit the space at the end
    return output

How do I split a string with several delimiters, but only once on each delimiter? Python

I am trying to split a string such as the one below, with all of the delimiters below, but splitting only once on each delimiter.
string = 'it; seems; like\ta good\tday to watch\va\vmovie.'
delimiters = '\t \v ;'
The output, in this case, would be:
['it', ' seems; like', 'a good\tday to watch', 'a\vmovie.']
Obviously the example above is a nonsense example, but I am trying to learn whether or not this is possible. Would a fairly involved regex be in order?
Apologies if this question had been asked before. I did a fair bit of searching and could not find something quite like my example. Thanks for your time!
This should do the trick:
import re
def split_once_by(s, delims):
    delims = set(delims)
    parts = []
    while delims:
        delim_re = '({})'.format('|'.join(re.escape(d) for d in delims))
        result = re.split(delim_re, s, maxsplit=1)
        if len(result) == 3:
            first, delim, s = result
            parts.append(first)
            delims.remove(delim)
        else:
            break
    parts.append(s)
    return parts
Example:
>>> split_once_by('it; seems; like\ta good\tday to watch\va\vmovie.', '\t\v;')
['it', ' seems; like', 'a good\tday to watch', 'a\x0bmovie.']
Burning Alcohol's answer inspired me to write this (IMO) better function:
def split_once_by(s, delims):
    split_points = sorted((s.find(d), -len(d), d) for d in delims)
    start = 0
    for stop, _longest_first, d in split_points:
        if stop < start: continue
        yield s[start:stop]
        start = stop + len(d)
    yield s[start:]
with usage:
>>> list(split_once_by('it; seems; like\ta good\tday to watch\va\vmovie.', '\t\v;'))
['it', ' seems; like', 'a good\tday to watch', 'a\x0bmovie.']
A simple algorithm would do,
test_string = r'it; seems; like\ta good\tday to watch\va\vmovie.'
delimiters = [r'\t', r'\v', ';']
# find the index of each first occurrence and sort it
delimiters = sorted(delimiters, key=lambda delimiter: test_string.find(delimiter))
splitted_string = [test_string]
# perform split with option maxsplit
for index, delimiter in enumerate(delimiters):
    if delimiter in splitted_string[-1]:
        splitted_string += splitted_string[-1].split(delimiter, maxsplit=1)
        splitted_string.pop(index)
print(splitted_string)
# ['it', ' seems; like', 'a good\\tday to watch', 'a\\vmovie.']
Just create a list of patterns and apply them once:
string = 'it; seems; like\ta good\tday to watch\va\vmovie.'
patterns = ['\t', '\v', ';']
for pattern in patterns:
    string = '*****'.join(string.split(pattern, maxsplit=1))
print(string.split('*****'))
Output:
['it', ' seems; like', 'a good\tday to watch', 'a\x0bmovie.']
So, what is "*****"?
On each iteration, applying the split method gives you a list, so on the next iteration you can't call .split() again (you now have a list). Instead, you join the pieces back together with some unlikely marker like "*****" or "###" or "^^^^^^^" so that split() can be applied again on the next iteration.
Finally, each "*****" left in the string marks one of the split points, so you can use it for the final split.

Remove a list of phrase from string

I have a list of phrases (n-grams) that need to be removed from a given sentence.
removed = ['range', 'drinks', 'food and drinks', 'summer drinks']
sentence = 'Oranges are the main ingredient for a wide range of food and drinks'
I want to get:
new_sentence = 'Oranges are the main ingredient for a wide of'
I tried Remove list of phrases from string but it doesn't work ('Oranges' turns into 'Os', and 'drinks' is removed on its own instead of as part of the phrase 'food and drinks').
Does anyone know how to solve it? Thank you!
Since you want to match on whole words only, I think the first step is to turn everything into lists of words, and then iterate from longest to shortest phrase in order to find things to remove:
>>> removed = ['range', 'drinks', 'food and drinks', 'summer drinks']
>>> sentence = 'Oranges are the main ingredient for a wide range of food and drinks'
>>> words = sentence.split()
>>> for ngram in sorted([r.split() for r in removed], key=len, reverse=True):
...     for i in range(len(words) - len(ngram) + 1):
...         if words[i:i+len(ngram)] == ngram:
...             words = words[:i] + words[i+len(ngram):]
...             break
...
>>> " ".join(words)
'Oranges are the main ingredient for a wide of'
Note that there are some flaws with this simple approach: multiple copies of the same n-gram won't be removed, but you can't simply continue the loop after modifying words either (the length has changed), so if you want to handle duplicates you'll need to batch the updates, as sketched below.
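A rough sketch of that batched version (same removed and sentence as above): mark every word covered by some phrase, longest phrases first, then rebuild the sentence from the unmarked words. This also handles repeated n-grams while still matching whole words only:
removed = ['range', 'drinks', 'food and drinks', 'summer drinks']
sentence = 'Oranges are the main ingredient for a wide range of food and drinks'

words = sentence.split()
drop = [False] * len(words)
for ngram in sorted([r.split() for r in removed], key=len, reverse=True):
    for i in range(len(words) - len(ngram) + 1):
        # only mark spans that aren't already covered by a longer phrase
        if words[i:i + len(ngram)] == ngram and not any(drop[i:i + len(ngram)]):
            for j in range(i, i + len(ngram)):
                drop[j] = True

print(' '.join(w for w, d in zip(words, drop) if not d))
# Oranges are the main ingredient for a wide of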
Regular expression time!
In [116]: removed = ['range', 'drinks', 'food and drinks', 'summer drinks']
...: removed = sorted(removed, key=len, reverse=True)
...: sentence = 'Oranges are the main ingredient for a wide range of food and drinks'
...: new_sentence = sentence
...: import re
...: removals = [r'\b' + phrase + r'\b' for phrase in removed]
...: for removal in removals:
...:     new_sentence = re.sub(removal, '', new_sentence)
...: new_sentence = ' '.join(new_sentence.split())
...: print(sentence)
...: print(new_sentence)
Oranges are the main ingredient for a wide range of food and drinks
Oranges are the main ingredient for a wide of
import re
removed = ['range', 'drinks', 'food and drinks', 'summer drinks']
sentence = 'Oranges are the main ingredient for a wide range of food and drinks'
# sort the removed tokens according to their length,
removed = sorted(removed, key=len, reverse=True)
# using word boundaries
for r in removed:
    sentence = re.sub(r"\b{}\b".format(r), " ", sentence)
# replace multiple whitspaces with a single one
sentence = re.sub(' +',' ',sentence)
I hope this helps:
first, you need to sort the removed strings by length in descending order; this way 'food and drinks' will be replaced before 'drinks'.
Here you go
removed = ['range', 'drinks', 'food and drinks', 'summer drinks','are']
sentence = 'Oranges are the main ingredient for a wide range of food and drinks'
words = sentence.split()
resultwords = [word for word in words if word.lower() not in removed]
result = ' '.join(resultwords)
print(result)
Results:
Oranges the main ingredient for a wide of food and

Removing orphan letters in a list with Python

I have a python list:
list = ['clothing items s','shoes s','handbag d','fashion k']
I have used a for loop that removed words from the above list using another list.
The challenge I have been facing is the issue around plurals/singulars. This has left me with random orphan letters.
Do you know how to loop through the list items and identify single letters such as 's','d','k' (in the above example) and remove them? While in the example the orphan is at the end of the string, it is not always the case.
Here is my current loop:
new_new_keywords = []
#first we start looping over every keyword
for keyword in new_keywords2:
    # loop over every stop
    for stop in new_stops:
        # check if this stop is inside the current new_key
        if stop in keyword:
            # if it is, update the new key to remove the current stop
            keyword = keyword.replace(stop, '')
            # regex removes numbers at the end of the string in the list
            keyword = re.sub(" \d+", " ", keyword)
    # loop over the keyword over and over again until
    # remove every stop word
    # append the new stop-less keyword to the end of the array
    # even if there are no changes
    new_new_keywords.append(keyword)
The following is a rather old fashioned (and inefficient) approach which should work. This will preserve your original strings, apart from removing the unwanted characters:
test_list = ['clothing items s','shoes s','handbag d','fashion k', 'keep a', 'keep i', 'leave a alone remove k', 'keep , spacing b']
remove_list = "sdk" # letters that need to be removed
newlist = []
for item in test_list:
    item += "_"  # append unused symbol to end of string
    for letter in remove_list:
        item = item.replace(" %s " % letter, "")
        item = item.replace(" %s_" % letter, "")
    newlist.append(item.rstrip("_"))
print newlist
It gives the following output:
['clothing items', 'shoes', 'handbag', 'fashion', 'keep a', 'keep i', 'leave a alone remove', 'keep , spacing b']
If at some point you choose to give regular expressions a go, then similar logic can be achieved using the following:
import re
test_list = ['clothing items s','shoes s','handbag d','fashion k', 'keep a', 'keep i', 'leave a alone remove k', 'keep , spacing b']
remove_list = "sdk"
newlist = [re.sub(" ([%s])( |$)" % remove_list, "", item) for item in test_list]
print newlist
Take each string s, split it in words w, then reassemble s filtering out words with only 1 letter:
map(lambda s: ' '.join(w for w in s.split() if len(w) > 1), list)
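The map call above is Python 2 style (it returns a list directly); on Python 3, wrap it in list(). With the example list from the question, renamed to lst here so it doesn't shadow the built-in:
>>> lst = ['clothing items s', 'shoes s', 'handbag d', 'fashion k']
>>> list(map(lambda s: ' '.join(w for w in s.split() if len(w) > 1), lst))
['clothing items', 'shoes', 'handbag', 'fashion']
Note that this also drops legitimate one-letter words such as 'a' or 'I', which the answers below discuss.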
You can use a set to decide which single letters are invalid at the end of the string when preceded by a space. If the string length is > 1, the second-to-last character is a space, and the last character is in the rm set, then slice the string to remove those characters; otherwise keep the string as is:
lst = ['clothing items s','clothing s','shoes s','handbag d','fashion k']
rm = set((" bcdefghjklnpqrstuvwzy"))
print([ch[:-2] if all((len(ch) > 1, ch[-2].isspace(), ch[-1] in rm)) else ch
       for ch in lst])
['clothing items', 'clothing', 'shoes', 'handbag', 'fashion']
You can also reverse the logic and specify which letters are valid:
lst = ['clothing items s','clothing s','shoes s','handbag d','fashion k']
st = set("ioa")
print([ch[:-2] if all((len(ch) > 1, ch[-2].isspace(), ch[-1] not in st)) else ch
       for ch in lst])
You might also want to call str.lower on the strings as I and O should be capitalised when used by themselves.
You can also use rsplit and a loop; you just have to decide whether you want to keep only the valid single-letter words I, O, a, though that would not mean your sentence is grammatically correct either:
lst = ['clothing items s', 'clothing s', 'shoes s', 'handbag d', 'fashion k']
rm = set("bcdefghjklnpqrstuvwzy")
out = []
for s in lst:
    spl = s.rsplit(None, 1)
    if spl[-1] not in rm:
        out.append(s)
    else:
        out.append(s[:-2])
print(out)
Or using a regex:
lst = ['clothing items s', 'clothing s', 'shoes s', 'handbag d', 'fashion k']
import re
r = re.compile(r"\s[bcdefghjklnpqrstuvwzy]$")
print([r.sub("", ele) for ele in lst])
['clothing items', 'clothing', 'shoes', 'handbag', 'fashion']
Even considering which one-letter words are possible, you would still need to check whether the sentence is grammatically correct; for that you would need something like nltk. You could add a lowercase i and o to the regex or to the set of letters to further filter your data, but only you can decide what is relevant. If you want a robust solution where the sentence stays grammatically correct, there is a lot more work involved than simply removing all (or certain) single trailing letters at the end of the string.
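If you do want to go a bit further, here is a hedged sketch that drops single-letter words anywhere in the string (not only at the end, which the question says can happen) while keeping the letters that can stand alone in English; the third item is a made-up example with a mid-string orphan:
keep = {'a', 'i', 'o'}   # single letters allowed to survive
lst = ['clothing items s', 'shoes s', 'a handbag d here', 'fashion k']

cleaned = [' '.join(w for w in s.split() if len(w) > 1 or w.lower() in keep)
           for s in lst]
print(cleaned)   # ['clothing items', 'shoes', 'a handbag here', 'fashion']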
Straightforward solution - it deletes single letter words starting from last element:
def trim(s):
    parts = s.split()
    while parts:
        if len(parts[-1]) == 1:
            del parts[-1]
        else:
            break
    return ' '.join(parts)
assert trim('clothing items s') == 'clothing items'
assert trim('fashion a b c') == 'fashion'
assert trim('stack overflow') == 'stack overflow'
assert trim('have a nice day') == 'have a nice day'
assert trim('a b c') == ''

Python tokenize sentence with optional key/val pairs

I'm trying to parse a sentence (or line of text) that consists of a sentence optionally followed by some key/val pairs on the same line. Not only are the key/value pairs optional, they are dynamic. I'm looking for a result like this:
Input:
"There was a cow at home. home=mary cowname=betsy date=10-jan-2013"
Output:
Values = {'theSentence' : "There was a cow at home.",
'home' : "mary",
'cowname' : "betsy",
'date'= "10-jan-2013"
}
Input:
"Mike ordered a large hamburger. lastname=Smith store=burgerville"
Output:
Values = {'theSentence' : "Mike ordered a large hamburger.",
'lastname' : "Smith",
'store' : "burgerville"
}
Input:
"Sam is nice."
Output:
Values = {'theSentence' : "Sam is nice."}
Thanks for any input/direction. I know the sentences make this look like a homework problem, but I'm just a Python newbie. I know it's probably a regex solution, but I'm not the best with regex.
I'd use re.sub:
import re
s = "There was a cow at home. home=mary cowname=betsy date=10-jan-2013"
d = {}
def add(m):
    d[m.group(1)] = m.group(2)
    return ''  # the replacement function must return a string

s = re.sub(r'(\w+)=(\S+)', add, s)
d['theSentence'] = s.strip()
print d
Here's a more compact version if you prefer:
d = {}
d['theSentence'] = re.sub(r'(\w+)=(\S+)',
                          lambda m: d.setdefault(m.group(1), m.group(2)) and '',
                          s).strip()
Or, maybe, findall is a better option:
rx = '(\w+)=(\S+)|(\S.+?)(?=\w+=|$)'
d = {
    a or 'theSentence': (b or c).strip()
    for a, b, c in re.findall(rx, s)
}
print d
If your sentence is guaranteed to end with a ., then you could follow this approach.
>>> testList = inputString.split('.')
>>> Values['theSentence'] = testList[0]+'.'
For the rest of the values, just do:
>>> for elem in testList[1].split():
...     key, val = elem.split('=')
...     Values[key] = val
Giving you a Values like so
>>> Values
{'date': '10-jan-2013', 'home': 'mary', 'cowname': 'betsy', 'theSentence': 'There was a cow at home.'}
>>> Values2
{'lastname': 'Smith', 'theSentence': 'Mike ordered a large hamburger.', 'store': 'burgerville'}
>>> Values3
{'theSentence': 'Sam is nice.'}
The first step is to do:
inputStr = "There was a cow at home. home=mary cowname=betsy date=10-jan-2013"
theSentence, others = inputStr.split('.')
You're going to then want to break up "others". Play around with split() (the argument you pass in tells Python what to split the string on), and see what you can do. :)
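For instance, a small illustration (a sketch, using the variables above) of what split() gives you on the leftover part, so you can see how to peel the key/value pairs apart:
>>> others.split()
['home=mary', 'cowname=betsy', 'date=10-jan-2013']
>>> others.split()[0].split('=')
['home', 'mary']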
Assuming there is only one dot, which divides the sentence from the assignment pairs:
input = "There was a cow at home. home=mary cowname=betsy date=10-jan-2013"
sentence, assignments = input.split(". ")
result = {'theSentence': sentence + "."}
for item in assignments.split():
    key, value = item.split("=")
    result[key] = value
print result
prints:
{'date': '10-jan-2013',
'home': 'mary',
'cowname': 'betsy',
'theSentence': 'There was a cow at home.'}
Assuming = doesn't appear in the sentence itself. This seems more robust than assuming the sentence ends with a dot.
s = "There was a cow at home. home=mary cowname=betsy date=10-jan-2013"
eq_loc = s.find('=')
if eq_loc > -1:
meta_loc = s[:eq_loc].rfind(' ')
s = s[:meta_loc]
metastr = s[meta_loc + 1:]
metadict = dict(m.split('=') for m in metastr.split())
else:
metadict = {}
metadict["theSentence"] = s
So as usual, there's a bunch of ways to do this. Here's a regexp based approach that looks for key=value pairs:
import re
sentence = "..."
values = {}
for match in re.finditer("(\w+)=(\S+)", sentence):
if not values:
# everything left to the first key/value pair is the sentence
values["theSentence"] = sentence[:match.start()].strip()
else:
key, value = match.groups()
values[key] = value
if not values:
# no key/value pairs, keep the entire sentence
values["theSentence"] = sentence
This assumes that the keys are Python-style identifiers and that each value consists of one or more non-whitespace characters.
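With the first example input, the loop above should end up with something like:
>>> values
{'theSentence': 'There was a cow at home.', 'home': 'mary', 'cowname': 'betsy', 'date': '10-jan-2013'}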
Supposing that the first period separates the sentence from the values, you can use something like this:
#! /usr/bin/python3
a = "There was a cow at home. home=mary cowname=betsy date=10-jan-2013"
values = (lambda s, tail: (lambda d, kv: (d, d.update (kv) ) ) ( {'theSentence': s}, {k: v for k, v in (x.split ('=') for x in tail.strip ().split (' ') ) } ) ) (*a.split ('.', 1) ) [0]
print (values)
Nobody posted a comprehensible one-liner. The question is answered, but gotta do it in one line, it's the Python way!
{"theSentence": sentence.split(".")[0]}.update({item.split("=")[0]: item.split("=")[1] for item in sentence.split(".")[1].split()})
Eh, not super elegant, but it's totally in one line. No imports even.
Use the regular expression findall. The first capture group is the sentence; | is the or-condition for the second capture group: one or more spaces, one or more word characters, the equals sign, and one or more non-space characters.
s = "There was a cow at home. home=mary cowname=betsy date=10-jan-2013"
all_matches = re.findall(r'([\w+\s]+\.{1})|((\s+\w+)=(\S+))',s)
d={}
for i in np.arange(len(all_matches)):
#print(all_matches[i])
if all_matches[i][0] != "":
d["theSentence"]=all_matches[i][0]
else:
d[all_matches[i][2]]=all_matches[i][3]
print(d)
output:
{'theSentence': 'There was a cow at home.', ' home': 'mary', ' cowname': 'betsy', ' date': '10-jan-2013'}
