I am pulling hotel names through the Expedia API and cross-referencing the results with another travel service provider.
The problem I am encountering is that many of the hotel names appear differently in the Expedia API than they do with the other provider, and I cannot figure out a good way to match them.
I am storing both sets of results in separate dicts keyed by name, with room rates as values. So, for example, the results from Expedia on a search for Vilnius in Lithuania might look like this:
expediadict = {'Ramada Hotel & Suites Vilnius': 120, 'Hotel Rinno': 100,
'Vilnius Comfort Hotel': 110}
But the results from the other provider might look like this:
altproviderdict = {'Ramada Vilnius': 120, 'Rinno Hotel': 100,
'Comfort Hotel LT': 110}
The only thing I can think of doing is stripping out all instances of 'Hotel', 'Vilnius', 'LT' and 'Lithuania' and then testing whether part of an expediadict key matches part of an altproviderdict key. This seems messy and not very Pythonic, so I wondered if anyone had a cleaner idea?
>>> def simple_clean(word):
... return word.lower().replace(" ","").replace("hotel","")
...
>>> a = "Ramada Hotel & Suites Vilnius"
>>> b = "Hotel Ramada Suites Vilnous"
>>> a = simple_clean(a)
>>> b = simple_clean(b)
>>> a
'ramada&suitesvilnius'
>>> b
'ramadasuitesvilnous'
>>> import difflib
>>> difflib.SequenceMatcher(None,a,b).ratio()
0.9230769230769231
Do cleaning and normalization of the words: e.g. remove filler words like 'Hotel', 'The', 'Resort', and convert to lower case without spaces.
Then use a fuzzy string matching algorithm such as Levenshtein distance, e.g. via the difflib module as shown above.
This method is pretty raw and just an example; you can enhance it to suit your needs for optimal results.
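Applied to the two dicts from the question, a rough sketch could pick, for each Expedia name, the alternative-provider name with the highest ratio (the filler-word list here is just a guess; tune it for your data):

```python
import difflib

def simple_clean(word):
    # normalize: lowercase, drop spaces and some assumed filler words
    word = word.lower().replace(" ", "")
    for filler in ("hotel", "vilnius", "lt", "lithuania"):
        word = word.replace(filler, "")
    return word

expediadict = {'Ramada Hotel & Suites Vilnius': 120, 'Hotel Rinno': 100,
               'Vilnius Comfort Hotel': 110}
altproviderdict = {'Ramada Vilnius': 120, 'Rinno Hotel': 100,
                   'Comfort Hotel LT': 110}

# for each Expedia name, keep the alt-provider name with the best similarity
matches = {
    name: max(altproviderdict,
              key=lambda alt: difflib.SequenceMatcher(
                  None, simple_clean(name), simple_clean(alt)).ratio())
    for name in expediadict
}
print(matches)
```

In practice you would also want to record the ratio itself and reject pairs below some threshold, so a hotel with no real counterpart doesn't get force-matched to the least-bad candidate.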
If you only want to match names when the words appear in the same order, you might want to use a longest common subsequence algorithm, as used in diff tools, but with words instead of characters or lines.
If order is not important, it's simpler: put all the words of the name into a set like this:
set(name.split())
and in order to match two names, test the size of the intersection of these two sets. Or test if the symmetric_difference only contains unimportant words.
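A minimal sketch of that set idea, assuming a hand-picked set of unimportant words:

```python
# assumed "unimportant" words; extend with whatever your data needs
STOPWORDS = {'hotel', 'the', 'resort', 'lt', 'vilnius'}

def word_set(name):
    # words of the name, lowercased, with unimportant words dropped
    return {w for w in name.lower().split() if w not in STOPWORDS}

a = word_set('Ramada Hotel & Suites Vilnius')  # {'ramada', '&', 'suites'}
b = word_set('Ramada Vilnius')                 # {'ramada'}

# treat as a match if every remaining word of the shorter name
# appears in the longer one
is_match = (a & b) == min(a, b, key=len)
```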
I have two lists, one that contains the user input and the other one that contains the mapping.
The user input looks like this :
The mapping looks like this :
I am trying to split the strings in the user input list. Sometimes they enter one record as CO109CO45, but in reality these are two codes that don't belong together; they need to be separated with a comma or space, as in CO109,CO45.
There are many examples with the same behavior, and I was thinking of using the mapping list to match and split. Is this something that can be done? What do you suggest? Thanks in advance for your help!
Use a combination of look-ahead and look-behind regexes in the split.
import pandas as pd
df = pd.DataFrame({'RCode': ['CO109', 'CO109CO109']})
print(df)
RCode
0 CO109
1 CO109CO109
df.RCode.str.split(r'(?<=\d)(?=\D)')
0 [CO109]
1 [CO109, CO109]
Name: RCode, dtype: object
You can try with regex:
import pandas as pd
l = ['CO2740CO96', 'CO12', 'CO973', 'CO870CO397', 'CO584', 'CO134CO42CO685']
df = pd.DataFrame({'code': l})
df.code = df.code.str.findall(r'[A-Za-z]+\d+')
print(df)
print(df)
Output:
code
0 [CO2740, CO96]
1 [CO12]
2 [CO973]
3 [CO870, CO397]
4 [CO584]
5 [CO134, CO42, CO685]
I usually use something like this, for an input original_list:
output_list = [
    [('CO' + target).strip(' ,')
     for target in item.split('CO')
     if target]  # skip the empty leading chunk that split produces
    for item in original_list
]
There are probably more efficient ways of doing it, but you don't need the overhead of dataframes/pandas, or the hard-to-read aspects of regexes.
If you have a manageable number of prefixes ("CO", "PR", etc.), you can set up a recursive function splitting on each of them, or you can use .find() with the full codes.
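If the codes always look like a known prefix followed by digits, a single alternation regex over the prefix list (the prefixes here are assumed) keeps the splitting readable without recursion:

```python
import re

# assumed set of known prefixes
prefixes = ['CO', 'PR']
pattern = re.compile(r'(?:%s)\d+' % '|'.join(prefixes))

def split_codes(text):
    # every run of prefix+digits becomes its own code
    return pattern.findall(text)

print(split_codes('CO109CO45'))  # ['CO109', 'CO45']
print(split_codes('PR7CO973'))   # ['PR7', 'CO973']
```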
Let's say I have this Person class that consist of last_name string field.
I would like to display links of all first letters of names existing in db.
So for example:
A B D E... when there's Adams, Brown, Douglas, Evans and no one that last_name starts with C.
Of course the view is not a problem here, as I want to prepare all of this on the backend. So the question is how to write a good model or view function that will provide this.
I would like it to be DB-independent; however, tricks for any particular DB would be a bonus.
Also, I would like this to be reasonably well optimized for speed, because there could be a lot of names. It should be less naive than this simplest algorithm:
Get all people
Create set (because of the uniqueness of elements) of the first letters
Sort and return
So for example (in views.py):
names = Person.objects.values_list('last_name', flat=True)
letters = {name[0] for name in names}
letters_sorted = sorted(letters)
Of course I could order_by first, or assign a new attribute (containing the first letter) to each object, but I don't think it would speed up the process.
I also think that assuming all letters are in use is a bad assumption, but checking each letter individually for whether at least one name starts with it doesn't seem great either ;-)
Which approach would be best effective for databases and django here?
You could also use the ORM methods to generate the list of initials (including counts):
from django.db.models import Count
from django.db.models.functions import Left
initials = (Person
.objects
.annotate(initial=Left('last_name', 1))
.values('initial')
.annotate(count=Count('initial'))
.order_by('initial'))
This will result in something like
<QuerySet [{'initial': 'A', 'count': 1},
{'initial': 'B', 'count': 2},
...
{'initial': 'Y', 'count': 1}]>
I would sort the names first, then unpack the names as arguments for zip, then consume just the first tuple that zip yields:
names = sorted(Person.objects.values_list('last_name', flat=True))
first_letters = next(zip(*names))
This doesn't use sets or remove duplicates or anything like that. Is that critical? If it is, you could do this:
names = Person.objects.values_list('last_name', flat=True)
first_letters = sorted(set(next(zip(*names))))
Though this would hardly be more performant than what you've already written.
import string
alphabet_list = list(string.ascii_lowercase) + list(string.ascii_uppercase)
result = {letter: Person.objects.filter(last_name__startswith=letter).exists()
          for letter in alphabet_list}
Note that this issues one query per letter.
Is there a way to search for a value in a dataframe column using FuzzyWuzzy or a similar library?
I'm trying to find a value in one column that corresponds to the value in another, while taking fuzzy matching into account.
So, for example, if I have state names in one column and state codes in another, how would I find the state code for Florida, which is FL, while catering for abbreviations like "Flor"? In other words, I want to find a match for a state name corresponding to "Flor" and get the corresponding state code "FL".
Any help is greatly appreciated.
If the abbreviations are all prefixes, you can use the .startswith() string method against either the short or long version of the state.
>>> test_value = "Flor"
>>> test_value.upper().startswith("FL")
True
>>> "Florida".lower().startswith(test_value.lower())
True
However, if you have more complex abbreviations, difflib.get_close_matches will probably do what you want!
>>> import pandas as pd
>>> import difflib
>>> df = pd.DataFrame({"states": ("Florida", "Texas"), "st": ("FL", "TX")})
>>> df
states st
0 Florida FL
1 Texas TX
>>> difflib.get_close_matches("Flor", df["states"].to_list())
['Florida']
>>> difflib.get_close_matches("x", df["states"].to_list(), cutoff=0.2)
['Texas']
>>> df["st"][df.index[df["states"]=="Texas"]].iloc[0]
'TX'
You will probably want to try/except IndexError around reading the first member of the list returned by difflib, and possibly tweak the cutoff to get fewer false matches with close states (perhaps offer all the matches as possibilities to the user, or require more letters for close states).
You may also see the best results combining the two; testing prefixes first before trying the fuzzy match.
Putting it all together
def state_from_partial(test_text, df, col_fullnames, col_shortnames):
    if len(test_text) < 2:
        raise ValueError("must have at least 2 characters")
    # if there are exactly two characters, try to directly match a short name
    # (note .values: `in` on a bare Series would check the index, not the values)
    if len(test_text) == 2 and test_text.upper() in df[col_shortnames].values:
        return test_text.upper()
    states = df[col_fullnames].to_list()
    match = None
    # this will definitely fail at least for states starting with M or New
    #for state in states:
    #    if state.lower().startswith(test_text.lower()):
    #        match = state
    #        break  # leave loop and prepare to find the prefix
    if not match:
        try:  # see if there's a fuzzy match
            match = difflib.get_close_matches(test_text, states)[0]  # cutoff=0.6
        except IndexError:
            pass  # consider matching against a list of problematic states with a different cutoff
    if match:
        return df[col_shortnames][df.index[df[col_fullnames] == match]].iloc[0]
    raise ValueError("couldn't find a state matching partial: {}".format(test_text))
Beware of states which start with 'New' or 'M' (and probably others), which are all pretty close and will probably want special handling. Testing will do wonders here.
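To see why those states need special handling, compare difflib's behavior on a few 'New' states: a short partial like "New" falls below the default 0.6 cutoff for all of them, while a looser cutoff returns several candidates at once, so you would have to disambiguate:

```python
import difflib

states = ["New York", "New Jersey", "New Mexico"]

# default cutoff (0.6): "New" is too small a fraction of any full name
print(difflib.get_close_matches("New", states))              # []

# a looser cutoff matches all three -- ambiguous, so you'd ask the user
print(difflib.get_close_matches("New", states, cutoff=0.4))
```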
I have three lists: (1) treatments, (2) medicine names and (3) medicine code symbols. I am trying to identify the respective medicine code symbol for each of 14,700 treatments. My current approach is to identify whether any name in (2) is "in" (1), and then return the corresponding (3). However, I am returned an arbitrary list (of the correct length) of medicine code symbols for the 14,700 treatments. Code for the method I've written is below:
codes = pandas.read_csv('Codes.csv', dtype=str)
codes_list = _codes.values.tolist()
names = pandas.read_csv('Names.csv', dtype=str)
names_list = names.values.tolist()
treatments = pandas.read_csv('Treatments.csv', dtype=str)
treatments_list = treatments.values.tolist()
matched_codes_list = range(len(treatments_list))
for i in range(len(treatments_list)):
for j in range(len(names_list)):
if names_list[j] in treatments_list[i]:
matched_codes_list[i]=codes_list_text[j]
print matched_codes_list
Any suggestions for where I am going wrong would be much appreciated!
I can't tell what you are expecting. You should replace the xxx_list code with concrete examples instead, since you don't seem to have any problems with the CSV reading.
Let's suppose you did that, and your result looks like this.
codes_list = ['shark', 'panda', 'horse']
names_list = ['fin', 'paw', 'hoof']
assert len(codes_list) == len(names_list)
treatments_list = ['tape up fin', 'reverse paw', 'stand on one hoof', 'pawn affinity maneuver', 'alert wing patrol']
it sounds like you are trying to determine the 'code' for each 'treatment', assuming that the number of codes and names are the same (and indicate some mapping). You plan to use the presence of the name to determine the code.
we can zip together the name and codes list to avoid using indexes there, and we can use iteration over the treatment list instead of indexes for pythonic readability
matched_codes_list = []
for treatment in treatments_list:
    matched_codes = []
    for name, code in zip(names_list, codes_list):
        if name in treatment:
            matched_codes.append(code)
    matched_codes_list.append(matched_codes)
this would give something like
assert matched_codes_list == [
    ['shark'],           # 'tape up fin'
    ['panda'],           # 'reverse paw'
    ['horse'],           # 'stand on one hoof'
    ['shark', 'panda'],  # 'pawn affinity maneuver' -- 'fin' in 'affinity', 'paw' in 'pawn'
    [],                  # 'alert wing patrol'
]
note that this method is quite slow (and gives false positives, see the 4th entry): you traverse the text of every treatment description once for each name/code pair.
You can use a dictionary, like lookup = {name: code for name, code in zip(names_list, codes_list)}, or itertools.izip for minor gains. Otherwise something more clever might be needed, perhaps splitting each treatment into a set of words, or mapping words to multiple codes.
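The word-splitting idea can be sketched like this; as a bonus it eliminates the substring false positives above, because 'fin' is no longer found inside 'affinity' (whole words only):

```python
codes_list = ['shark', 'panda', 'horse']
names_list = ['fin', 'paw', 'hoof']
treatments_list = ['tape up fin', 'reverse paw', 'stand on one hoof',
                   'pawn affinity maneuver', 'alert wing patrol']

# one dict lookup per word instead of one scan per name/code pair
lookup = dict(zip(names_list, codes_list))
matched_codes_list = [
    [lookup[word] for word in treatment.split() if word in lookup]
    for treatment in treatments_list
]
print(matched_codes_list)
# [['shark'], ['panda'], ['horse'], [], []]
```

This assumes names appear as whole space-separated words in the treatment text; if they can be embedded in longer tokens, you'd need to fall back to substring or fuzzy matching.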
I am parsing a relatively simple text, where each line describes a game unit. I have little knowledge of parsing techniques, so I used the following ad hoc solution:
import collections
import re

class Unit:
    # rules is an ordered dictionary of tagged regexes applied in the given order;
    # the group named V corresponds to the value (if any) for that particular tag
    rules = (
        ('Level', r'Lv\. (?P<V>\d+)'),
        ('DPS', r'DPS: (?P<V>\d+)'),
        ('Type', r'(?P<V>Tank|Infantry|Artillery)'),
        # the XXX will be expanded into a list of valid traits
        # note: (XXX| )* wouldn't work; it would match the first space it finds,
        # and stop at that if it's in front of something other than a trait
        ('Traits', r'(?P<V>(XXX)(XXX| )*)'),
        # flavor text, if any, ends with a dot
        ('FlavorText', r'(?P<V>.*\."?$)'),
    )
    rules = collections.OrderedDict(rules)
    traits = '|'.join(['All-Terrain', 'Armored', 'Anti-Aircraft', 'Motorized'])
    rules['Traits'] = re.sub('XXX', traits, rules['Traits'])
    for x in rules:
        rules[x] = re.sub('<V>', '<' + x + '>', rules[x])
        rules[x] = re.compile(rules[x])

    def __init__(self, data):
        # data looks like this:
        # Lv. 5 Tank DPS: 55 Motorized Armored
        for field, regex in Unit.rules.items():
            data = regex.sub(self.parse, data, 1)
        if data:
            raise ParserError('Could not parse part of the input: ' + data)

    def parse(self, m):
        if len(m.groupdict()) != 1:
            raise Exception('Expected a single named group')
        field, value = m.groupdict().popitem()
        setattr(self, field, value)
        return ''
It works fine, but I feel I have reached the limits of regex power. Specifically, in the case of Traits, the value ends up being a string that I need to split and convert into a list at a later point: e.g., obj.Traits would be set to 'Motorized Armored' by this code, and only changed to ('Motorized', 'Armored') in a later function.
I'm thinking of converting this code to use either EBNF or pyparsing grammar or something like that. My goals are:
make this code neater and less error-prone
avoid the ugly treatment of the case with a list of values (where I need to do a replacement inside the regex first, and later post-process the result to convert a string into a list)
What would be your suggestions about what to use, and how to rewrite the code?
P.S. I skipped some parts of the code to avoid clutter; if I introduced any errors in the process, sorry - the original code does work :)
I started to write up a coaching guide for pyparsing, but looking at your rules, they translate pretty easily into pyparsing elements themselves, without dealing with EBNF, so I just cooked up a quick sample:
from pyparsing import Word, nums, oneOf, Group, OneOrMore, Regex, Optional
integer = Word(nums)
level = "Lv." + integer("Level")
dps = "DPS:" + integer("DPS")
type_ = oneOf("Tank Infantry Artillery")("Type")
traits = Group(OneOrMore(oneOf("All-Terrain Armored Anti-Aircraft Motorized")))("Traits")
flavortext = Regex(r".*\.$")("FlavorText")
rule = (Optional(level) & Optional(dps) & Optional(type_) &
Optional(traits) & Optional(flavortext))
I included the Regex example so you could see how a regular expression could be dropped in to an existing pyparsing grammar. The composition of rule using '&' operators means that the individual items could be found in any order (so the grammar takes care of the iterating over all the rules, instead of you doing it in your own code). Pyparsing uses operator overloading to build up complex parsers from simple ones: '+' for sequence, '|' and '^' for alternatives (first-match or longest-match), and so on.
Here is how the parsed results would look - note that I added results names, just as you used named groups in your regexen:
data = "Lv. 5 Tank DPS: 55 Motorized Armored"
parsed_data = rule.parseString(data)
print parsed_data.dump()
print parsed_data.DPS
print parsed_data.Type
print ' '.join(parsed_data.Traits)
prints:
['Lv.', '5', 'Tank', 'DPS:', '55', ['Motorized', 'Armored']]
- DPS: 55
- Level: 5
- Traits: ['Motorized', 'Armored']
- Type: Tank
55
Tank
Motorized Armored
Please stop by the wiki and see the other examples. You can use easy_install to install pyparsing, but if you download the source distribution from SourceForge, there is a lot of additional documentation.