Sorting a list by conditional criteria - python

I know how to sort a list in Python using the sort() method and an appropriate lambda key. However, I don't know how to deal with the following situation:
I have a list of strings that either contain only letters or contain a specific keyword and a number. I want to sort the list so that the elements with the keyword go at the end, and those elements are in turn sorted by the number they contain.
e.g. my list could be: mylist = ['abc','xyz','keyword 2','def','keyword 1'] and I want it sorted to ['abc','def','xyz','keyword 1','keyword 2'].
I already have something like
mylist.sort(key=lambda x: x.split("keyword")[0],reverse=True)
which produces only
['xyz', 'def', 'abc', 'keyword 2', 'keyword 1']

One-liner solution:
mylist.sort(key=lambda x: (len(x.split())>1, x if len(x.split())==1 else int(x.split()[-1]) ) )
Explanation:
The first condition, len(x.split())>1, ensures that multi-word strings (the ones containing the keyword and a number) go behind single-word strings. Thanks to that first tuple element, ties can only occur between two single-word strings or between two multi-word strings, never across the two groups. So for a multi-word string the key returns an integer, and otherwise it returns the string itself.
Example:
['xyz', 'keyword 1000', 'def', 'abc', 'keyword 2', 'keyword 1']
Results:
>>> mylist=['xyz', 'keyword 1000', 'def', 'abc', 'keyword 2', 'keyword 1']
>>> mylist.sort(key=lambda x: (len(x.split())>1, x if len(x.split())==1 else int(x.split()[-1]) ) )
>>> mylist
['abc', 'def', 'xyz', 'keyword 1', 'keyword 2', 'keyword 1000']

You can use the "last" (lexicographically greatest) element that doesn't contain your keyword as a barrier, so the words without the keyword sort first and the words with the keyword sort after them:
barrier = max(filter(lambda x: 'keyword' not in x, mylist))
# 'xyz'
mylist_barriered = [barrier + x if 'keyword' in x else x for x in mylist]
# ['abc', 'xyz', 'xyzkeyword 2', 'def', 'xyzkeyword 1']
res = sorted(mylist_barriered)
# ['abc', 'def', 'xyz', 'xyzkeyword 1', 'xyzkeyword 2']
# Be sure not to replace the barrier itself, `x != barrier`
res = [x.replace(barrier, '') if barrier in x and x != barrier else x for x in res]
res is now:
['abc', 'def', 'xyz', 'keyword 1', 'keyword 2']
The benefit of this non-hard-coded approach (apart from 'keyword' itself, obviously) is that your keyword can occur anywhere in the string and the method will still work. Try the above code with ['abc', 'def', '1 keyword 2', 'xyz', '1 keyword 4'] to see what I mean.
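For instance, running the same code with the keyword in the middle of the strings should give:
mylist = ['abc', 'def', '1 keyword 2', 'xyz', '1 keyword 4']
barrier = max(filter(lambda x: 'keyword' not in x, mylist))
# 'xyz'
mylist_barriered = [barrier + x if 'keyword' in x else x for x in mylist]
res = sorted(mylist_barriered)
res = [x.replace(barrier, '') if barrier in x and x != barrier else x for x in res]
# ['abc', 'def', 'xyz', '1 keyword 2', '1 keyword 4']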
Another easy way to do this, with a divide-and-conquer approach:
precedes = [x for x in mylist if 'keyword' not in x]
sort_precedes = sorted(precedes)
follows = [x for x in mylist if 'keyword' in x]
sort_follows = sorted(follows)
together = sort_precedes + sort_follows
together
['abc', 'def', 'xyz', 'keyword 1', 'keyword 2']
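Note that sorted(follows) orders the keyword entries lexicographically, so 'keyword 10' would land before 'keyword 2'. If multi-digit numbers can occur, the follows list could be sorted with a numeric key instead; a minimal sketch, assuming the number is always the last whitespace-separated token:
sort_follows = sorted(follows, key=lambda x: int(x.split()[-1]))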

Sort with a tuple by first checking whether the item starts with the keyword. If it does, set the first item in the tuple to 1 and the second to the number following the keyword. For non-keyword items, set the first tuple item to 0 (so they always come before keyword items) and use the string itself as the second item for a lexicographical sort:
def func(x):
    if x.startswith('keyword'):
        return 1, int(x.split()[-1])
    return 0, x
mylist.sort(key=func)
print(mylist)
# ['abc', 'def', 'xyz', 'keyword 1', 'keyword 2']

I am prefixing the strings containing "keyword" with chr(127), the highest value in the ASCII table, so they go at the end when evaluated by the built-in sort function. https://repl.it/H66r/1
mylist.sort(key=lambda x: '\x7f' + x if (x.find("keyword", 0) != -1) else x)
EDIT:
This wasn't sorting the keyword strings according to their numbers.
Using the tuple solution we can come up with this : https://repl.it/H66r/8
The first element of the tuple is a very low value when the string does not contain "keyword", and the keyword's number otherwise; the second element then breaks ties between keys that share the same first value.
import sys
mylist.sort(key=lambda x: (-sys.maxsize, x) if (x.find("keyword", 0) == -1) else (int(x.split(" ")[1]), x))
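As a quick check with the larger example list from above, this should give:
>>> mylist = ['xyz', 'keyword 1000', 'def', 'abc', 'keyword 2', 'keyword 1']
>>> mylist.sort(key=lambda x: (-sys.maxsize, x) if (x.find("keyword", 0) == -1) else (int(x.split(" ")[1]), x))
>>> mylist
['abc', 'def', 'xyz', 'keyword 1', 'keyword 2', 'keyword 1000']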

Related

Using filter(lambda, list) in python to clean data

I'm web-scraping a website as a project and am currently cleaning the data.
I have a list containing some information/sentences, but some entries are empty and I want to delete them.
My thought was to create a lambda function that identifies empty and non-empty values and returns False or True accordingly. Then I would pass this function and my list to filter(), so filter() would apply the function and drop the empty strings from the list.
Check for both None and the empty string:
f = lambda x: x is not None and x != ""
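Used with filter, that might look like this (data is just a hypothetical list of scraped values):
data = ['', 'abc', None, 'def', '']
print(list(filter(f, data)))
# ['abc', 'def']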
You don't need a lambda here. Use this:
lst = ['', 'abc', '', 'def', '', 1, 2, '']
list(filter(None, lst))
Output:
['abc', 'def', 1, 2]
You can use the fact that:
bool(None) is False
bool("") (empty string) is False
bool("something") (non-empty string) is True
>>> info = ['', 'abc', '', 'def', '', None]
>>> list(filter(bool, info))
['abc', 'def']

Trailing empty string after re.split()

I have two strings where I want to isolate sequences of digits from everything else.
For example:
import re
s = 'abc123abc'
print(re.split(r'(\d+)', s))
s = 'abc123abc123'
print(re.split(r'(\d+)', s))
The output looks like this:
['abc', '123', 'abc']
['abc', '123', 'abc', '123', '']
Note that in the second case, there's a trailing empty string.
Obviously I can test for that and remove it if necessary but it seems cumbersome and I wondered if the RE can be improved to account for this scenario.
You can use filter to drop the empty string, like below:
>>> s = 'abc123abc123'
>>> re.split(r'(\d+)', s)
['abc', '123', 'abc', '123', '']
>>> list(filter(None, re.split(r'(\d+)', s)))
['abc', '123', 'abc', '123']
Thanks to @chepner, you can use a list comprehension instead:
>>> [x for x in re.split(r'(\d+)', s) if x]
['abc', '123', 'abc', '123']
This also works if the string contains symbols or other characters you need to split on:
>>> s = '&^%123abc123$##123'
>>> list(filter(None, re.split(r'(\d+)', s)))
['&^%', '123', 'abc', '123', '$##', '123']
This has to do with the implementation of re.split() itself: you can't change it. When the function splits, it doesn't check anything that comes after the capture group, so it can't choose for you to either keep or discard the empty string that is left after splitting. It just splits there and leaves the rest of the string (which can be empty) to the next cycle.
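For instance, the same thing happens at the start when the string begins with digits; the empty string is not specific to the end:
>>> re.split(r'(\d+)', '123abc')
['', '123', 'abc']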
If you don't want that empty string, you can get rid of it in various ways before collecting the results into a list. user1740577's is one example, but personally I prefer a list comprehension, since it's more idiomatic for simple filter/map operations:
parts = [part for part in re.split(r'(\d+)', s) if part]
I recommend against checking and getting rid of the element after the list has already been created, because it involves more operations and allocations.
A simple way to use regular expressions for this would be re.findall:
def bits(s):
    return re.findall(r"(\D+|\d+)", s)

bits("abc123abc123")
# ['abc', '123', 'abc', '123']
But it seems easier and more natural with itertools.groupby. After all, you are chunking an iterable based on a single condition:
from itertools import groupby

def bits(s):
    return ["".join(g) for _, g in groupby(s, key=str.isdigit)]

bits("abc123abc123")
# ['abc', '123', 'abc', '123']
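To see what groupby is doing here, the raw groups (keyed by the str.isdigit flag) look like this:
[(k, "".join(g)) for k, g in groupby("abc123abc123", key=str.isdigit)]
# [(False, 'abc'), (True, '123'), (False, 'abc'), (True, '123')]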

delete the second item that starts with the same substring

I have a list l = ['abcdef', 'abcd', 'ghijklm', 'ghi', 'xyz', 'pqrs']
I want to delete any element that is the starting sub-string (prefix) of another element, if such elements exist (in this case 'abcd' and 'ghi').
N.B: in my situation, I know that the 'repeated' elements, if they exist, can be only 'abcd' or 'ghi'.
To delete them, I used this:
>>> l.remove('abcd') if ('abcdef' in l and 'abcd' in l) else l
>>> l.remove('ghi') if ('ghijklm' in l and 'ghi' in l) else l
>>> l
>>> ['abcdef', 'ghijklm', 'xyz', 'pqrs']
Is there a more efficient (or more automated) way to do this?
You can do it in time linear in the number of elements (with an extra O(m²) of prefix-building work per word) and O(n*m²) memory, where m is the length of your elements:
prefixes = {}
for word in l:
    # record every proper prefix of the word
    for x in range(1, len(word)):
        prefixes[word[:x]] = True
result = [word for word in l if word not in prefixes]
Iterate over each word and build a dictionary containing its first character, then its first two characters, then three, and so on up to everything but the last character. Then iterate over the list again; if a word appears in that dictionary, it is a proper prefix of some other word in the list and gets dropped.
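With the example list from the question, this gives:
print(result)
# ['abcdef', 'ghijklm', 'xyz', 'pqrs']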
l = ['abcdef', 'abcd', 'ghijklm', 'ghi', 'xyz', 'pqrs']
for a in l[:]:
    for b in l[:]:
        if a.startswith(b) and a != b:
            l.remove(b)
print(l)
Output
['abcdef', 'ghijklm', 'xyz', 'pqrs']
The following code does what you described:
your_list = ['abcdef', 'abcd', 'ghijklm', 'ghi', 'xyz', 'pqrs']
print("Original list: %s" % your_list)
for element in your_list[:]:
    for element2 in your_list[:]:
        if element.startswith(element2) and element != element2:
            print("%s starts with %s" % (element, element2))
            print("Remove: %s" % element2)
            your_list.remove(element2)
print("Removed list: %s" % your_list)
Output:
Original list: ['abcdef', 'abcd', 'ghijklm', 'ghi', 'xyz', 'pqrs']
abcdef starts with abcd
Remove: abcd
ghijklm starts with ghi
Remove: ghi
Removed list: ['abcdef', 'ghijklm', 'xyz', 'pqrs']
On the other hand, I think there is a simpler solution: you can do it with a list comprehension if you want, as sketched below.
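A minimal sketch of that list-comprehension idea (keeping only the words that are not a proper prefix of another word in the list):
l = ['abcdef', 'abcd', 'ghijklm', 'ghi', 'xyz', 'pqrs']
result = [w for w in l if not any(other != w and other.startswith(w) for other in l)]
print(result)
# ['abcdef', 'ghijklm', 'xyz', 'pqrs']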
#Andrew Allen's way
l = ['abcdef', 'abcd', 'ghijklm', 'ghi', 'xyz', 'pqrs']
i = 0
l = sorted(l)
while True:
    try:
        if l[i] in l[i+1]:
            l.remove(l[i])
            continue
        i += 1
    except IndexError:
        break
print(l)
# ['abcdef', 'ghijklm', 'pqrs', 'xyz']
Try this, it will work (iterating over copies so the list is not mutated while being traversed):
l = ['abcdef', 'abcd', 'ghijklm', 'ghi', 'xyz', 'pqrs']
for i in l[:]:
    for j in l[:]:
        if len(i) > len(j) and j in i:
            l.remove(j)
You can use
l = ['abcdef', 'abcd', 'ghijklm', 'ghi', 'xyz', 'pqrs']
if "abcdef" in l:  # only 1 check for containment instead of 2
    l = [x for x in l if x != "abcd"]  # to remove _all_ occurrences of "abcd"
    # or
    l.remove("abcd")  # if you know there is only one "abcd" in it
This might be slightly faster (if you have far more elements than you show) because you only need to check once for "abcdef" - and then scan once, up to the first match (or through the whole list), for the removal.
>>> l.remove('abcd') if ('abcdef' in l and 'abcd' in l) else l
checks l twice over its full length for containment (in the unlucky case) and then still needs to remove something from it.
DISCLAIMER:
If this is NOT a proven, measured bottleneck or security critical etc., I would not bother with it unless I had measurements showing it is the biggest timesaver/optimization in the overall code... with lists of up to some dozens/hundreds of elements (gut feeling - your data does not support any analysis) the estimated gain is negligible.

Need to merge a nested list into a single list in Python

I have a list list1 (example) as shown below. It's the result of a function in my code.
Example:
list1 = ['2 String 2'] ['3 string 3']
Expected output from the above list is as below:
list1 = ['2 String 2', '3 string 3']
I'm stuck at this point.
Assuming your list looks like this (and not as in your example):
list1 = [['2 String 2'], ['3 string 3']]
Then simply:
list1 = [i[0] for i in list1]
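If the inner lists could ever contain more than one element, a more general flatten (a sketch, assuming the same nested structure) would be:
from itertools import chain
list1 = [['2 String 2'], ['3 string 3']]
list1 = list(chain.from_iterable(list1))
# ['2 String 2', '3 string 3']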

Search a list of strings with a list of substrings

I have a list of strings and currently I can search for one substring at the time:
str = ['abc', 'efg', 'xyz']
[s for s in str if "a" in s]
which correctly returns
['abc']
Now let's say I have a list of substrings instead:
subs = ['a', 'ef']
I want a command like
[s for s in str if anyof(subs) in s]
which should return
['abc', 'efg']
>>> s = ['abc', 'efg', 'xyz']
>>> subs = ['a', 'ef']
>>> [x for x in s if any(sub in x for sub in subs)]
['abc', 'efg']
Don't use str as a variable name, it's a builtin.
Gets a little convoluted but you could do
[s for s in str if any([sub for sub in subs if sub in s])]
Simply use them one after the other:
[s for s in str for r in subs if r in s]
>>> r = ['abc', 'efg', 'xyz']
>>> s = ['a', 'ef']
>>> [t for t in r for x in s if x in t]
['abc', 'efg']
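Note that this nested comprehension can return the same string more than once if several substrings match it, for example:
>>> [t for t in ['abc', 'efg', 'xyz'] for x in ['a', 'ab'] if x in t]
['abc', 'abc']
The any()-based version above avoids that.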
I still like map and filter, despite what is said against them and how a comprehension can always replace a map and a filter. Hence, here is a map + filter + lambda version (wrapped in list() for Python 3, where filter returns an iterator):
print(list(filter(lambda x: any(map(x.__contains__, subs)), s)))
which reads:
filter elements of s that contain any element from subs
I like how this uses words that carry a strong semantic meaning, rather than only if, for and in.
