I am trying to understand this code:
edit_two_set = set()
edit_two_set = set.union(*[edit_two_set.union(edit_one_letter(w, allow_switches)) for w in one])
Here one is a set of strings. allow_switches is True.
edit_one_letter takes in one word and returns the words produced by a single edit: a one-character insertion, a deletion, or one switch of corresponding characters.
I understand:
[edit_two_set.union(edit_one_letter(w, allow_switches)) for w in one]
is performing a list comprehension in which for every word in one we make one character edit and then take the union of the resulting set with the previous set.
I am mainly stuck at trying to understand what:
set.union(*[])
is doing?
Thanks!
You can refer to this:
https://docs.python.org/3/library/stdtypes.html#frozenset.union
The list comprehension returns a list of sets.
set.union(*the_list) unpacks that list into separate arguments, so it performs a union of all the sets within the list and returns a new set.
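For example, with a small made-up list of sets:
>>> sets = [{1, 2}, {2, 3}, {4}]
>>> set.union(*sets)   # equivalent to {1, 2}.union({2, 3}, {4})
{1, 2, 3, 4}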
I have a list of words (equivalent to about two full sentences) and I want to split it into two parts: one part containing 90% of the words and another part containing 10% of them. After that, I want to print a list of the unique words within the 10% list, lexicographically sorted. This is what I have so far:
pos_90 = (90*len(words)) // 100 #list with 90% of the words
pos_90 = pos_90 + 1 #I incremented the number by 1 in order to use it as an index
pos_10 = (10*len(words)) // 100 #list with 10% of the words
list_90 = words[:pos_90] #Creation of the 90% list
list_10 = words[pos_10:] #Creation of the 10% list
uniq_10 = set(list_10) #List of unique words out of the 10% list
split_10 = uniq_10.split()
sorted_10 = split_10.sort()
print(sorted_10)
I get an error saying that split cannot be applied to set, so I assume my mistake must be in the last lines of code. Any idea about what I'm missing here?
split only makes sense when converting from one long str to a list of the components of said str. If the input was in the form 'word1 word2 word3', yes, split would convert that str to ['word1', 'word2', 'word3'], but your input is a set, and there is no sane way to "split" a set like you seem to want; it's already a bag of separated items.
All you really need to do is convert your set back to a sorted list. Replace:
split_10 = uniq_10.split()
sorted_10 = split_10.sort()
with either:
sorted_10 = list(uniq_10)
sorted_10.sort() # NEVER assign the result of .sort(); it's always going to be None
or the simpler one-liner that encompasses both listifying and sorting:
sorted_10 = sorted(uniq_10) # sorted, unlike list.sort, returns a new list
The final option is generally the most Pythonic approach to converting an arbitrary iterable to list and sorting that new list, returning the result. It doesn't mutate the input, doesn't rely on the input being a specific type (set, tuple, list, it doesn't matter), and it's simpler to boot. You only use list.sort() when you already have a known list, and don't mind mutating it.
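For example, with a small made-up set of words:
>>> uniq_10 = {'pear', 'apple', 'banana'}
>>> sorted(uniq_10)
['apple', 'banana', 'pear']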
I have a list of several thousand unordered tuples that are of the format
(mainValue, (value, value, value, value))
Given a main value (which may or may not be present), is there a 'nice' way to produce a list of the indexes of the matching tuples, other than iterating through every item and incrementing a counter like this:
destMatches = []   # collects the indexes of matching tuples
index = 0
for destEntry in destList:
    if destEntry[0] == sourceMatch:
        destMatches.append(index)
    index = index + 1
So I can compare the sub values against another set, and remove the best match from the list if necessary.
This works fine, but just seems like python would have a better way!
Edit:
While writing the original question I realised that I could use a dictionary keyed on the first value (in fact this list is within another dictionary), but even after removing that from the question, I still wanted to know how to do it with tuples.
With a list comprehension, your for loop can be reduced to this expression:
destMatches = [i for i,destEntry in enumerate(destList) if destEntry[0] == sourceMatch]
You can also use the built-in filter() function [1] to filter your data:
destMatches = filter(lambda destEntry:destEntry[0] == sourceMatch, destList)
[1]: In Python 3, filter is a class and returns a lazy filter object.
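In Python 3 you would typically wrap that in list() to materialise the result; note this gives you the matching entries themselves rather than their indexes:
destMatches = list(filter(lambda destEntry: destEntry[0] == sourceMatch, destList))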
I have an ordered dictionary like the following:
source = [('a', [1,2,3,4,5,6,7,11,13,17]), ('b', [1,2,3,12])]
I want to calculate the length of each key's value first, then take its square root; call it L.
Append L to the numbers whose positions are exactly divisible by L, and append "1" to every other number.
For example, source['a'] = [1,2,3,4,5,6,7,11,13,17]; the length of it is 9,
so the square root of len(source['a']) is 3.
Append the number 3 to the items at positions exactly divisible by 3 (e.g. position 3, position 6, position 9); if an item's position is not exactly divisible by 3, append 1 to it instead.
The result should look like the following:
result = [('a', ["1,1","2,1","3,3","4,1","5,1","6,3","7,1","11,1","13,3","10,1"]), ('b', ["1,1","2,2","3,1","12,2"])]
I don't know how to change the items in the list to string pairs. BTW, this is not a homework assignment; I was trying to build a boolean retrieval engine, and the source data is too big, so I just created a simple sample here to explain what I want to achieve :)
As this seems to be homework, I will try to help you with the part you are having trouble with:
I don't know how to change the items in the list to string pairs.
Since the entire list needs to be updated, it's better to recreate it rather than update it in place, though in-place updates are possible because lists are mutable.
Consider a list
lst = [1,2,3,4,5]
To convert it to a list of strings, you can use a list comprehension:
lst = [str(e) for e in lst]
You may also use the built-in map as map(str, lst), but remember that in Python 3, map returns a lazy map object, so it needs to be handled accordingly.
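For example, in Python 3 you would materialise it explicitly:
lst = list(map(str, lst))   # map() is lazy in Python 3, so wrap it in list()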
A condition in a comprehension is best expressed as a conditional expression:
<TRUE-STATEMENT> if <condition> else <FALSE-STATEMENT>
To get the index of any item in a list, your best bet is to use the built-in enumerate
If you need to create a formatted string from a sequence of items, it's suggested to use the format string specifier:
"{},{}".format(a,b)
The length of any sequence, including a list, can be calculated through the built-in len.
You can use the ** operator with a fractional power, or use the math module and invoke the sqrt function, to calculate the square root.
Now you just have to combine each of the above suggestions to solve your problem.
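Putting those hints together, a minimal sketch might look like this (it assumes source is the list of (key, values) pairs shown above and that the lengths behave like your examples; the helper name annotate is made up):
import math

def annotate(values):
    L = int(math.sqrt(len(values)))          # length of the list, then its square root
    return ["{},{}".format(v, L if pos % L == 0 else 1)
            for pos, v in enumerate(values, start=1)]   # positions counted from 1

result = [(key, annotate(values)) for key, values in source]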
A question from homework:
Define a function censor(words,nasty) that takes a list of words, and
replaces all the words appearing in nasty with the word CENSORED, and
returns the censored list of words.
>>> censor(['it', 'is', 'raining'], ['raining'])
['it', 'is', 'CENSORED']
I see a solution like this:
find the index of each nasty word,
replace the words matching that index
with "CENSORED",
but I get stuck on finding the index.
You can find the index of any element of a list by using the .index method.
>>> l=['a','b','c']
>>> l.index('b')
1
Actually, you don't have to operate with indexes here. Just iterate over the words list and check whether each word is listed in nasty. If it is, append 'CENSORED' to the result list; otherwise append the word itself.
Or you can use a list comprehension with a conditional expression to get a more elegant version.
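For example, a one-liner along these lines:
def censor(words, nasty):
    return ['CENSORED' if word in nasty else word for word in words]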
Your approach might work, but it’s unnecessarily complicated.
Python allows a very simple syntax to check whether something is contained in a list:
censor = ['bugger', 'nickle']
word = 'bugger'
if word in censor: print('CENSORED')
With that approach, simply walk over your list of words and test for each word whether it's in the censor list.
To walk over your list of words, you can use a for loop. Since you might need to modify the current word, use an index, like so:
for index in range(len(words)):
    print(index, words[index])
Now all you need to do is put the two code fragments together.
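Put together, a rough sketch of that approach could look like this:
def censor(words, nasty):
    for index in range(len(words)):
        if words[index] in nasty:         # the membership test from the first fragment
            words[index] = 'CENSORED'     # modify the current word via its index
    return words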
You could use the handy built-in enumerate() function to step through the items in the list. For example:
def censor(words, nasty):
    for i, word in enumerate(words):
        if word...
I never actually thought I'd run into speed issues with Python, but I have. I'm trying to compare really big lists of dictionaries to each other based on the dictionary values. I compare two lists, the first of which looks like this:
biglist1 = [{'transaction':'somevalue', 'id':'somevalue', 'date':'somevalue' ...}, {'transaction':'somevalue', 'id':'somevalue', 'date':'somevalue' ...}, ...]
With 'somevalue' standing for a user-generated string, int or decimal. Now, the second list is pretty similar, except the id-values are always empty, as they have not been assigned yet.
biglist2 = [{'transaction':'somevalue', 'id':'', 'date':'somevalue' ...}, {'transaction':'somevalue', 'id':'', 'date':'somevalue' ...}, ...]
So I want to get a list of the dictionaries in biglist2 that match the dictionaries in biglist1 for all other keys except id.
I've been doing
for item in biglist2:
    for transaction in biglist1:
        if item['transaction'] == transaction['transaction']:
            list_transactionnamematches.append(transaction)

for item in biglist2:
    for transaction in list_transactionnamematches:
        if item['date'] == transaction['date']:
            list_transactionnamematches.append(transaction)
... and so on, not comparing id values, until I get a final list of matches. Since the lists can be really big (around 3000+ items each), this takes quite some time for Python to loop through.
I'm guessing this isn't really how this kind of comparison should be done. Any ideas?
Index on the fields you want to use for lookup; that makes the whole thing O(n+m).
matches = []
biglist1_indexed = {}

for item in biglist1:
    biglist1_indexed[(item["transaction"], item["date"])] = item

for item in biglist2:
    if (item["transaction"], item["date"]) in biglist1_indexed:
        matches.append(item)
This is probably thousands of times faster than what you're doing now.
What you want to do is use the correct data structures (a rough sketch follows below):
1. Create a dictionary mapping tuples of the other values in the first list to their id.
2. Create two sets of those tuples, one per list. Then use set operations to get the tuple set you want.
3. Use the dictionary from point 1 to assign ids to those tuples.
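A rough sketch of that idea, assuming every dictionary has the same keys, that id is the only field to exclude, and that all values are hashable (the helper name key_of is made up):
def key_of(d):
    # tuple of all values except 'id', in a fixed key order
    return tuple(d[k] for k in sorted(d) if k != 'id')

# 1. map each tuple of "other" values in biglist1 to its id
ids_by_key = {key_of(d): d['id'] for d in biglist1}

# 2. build a set of tuples per list and intersect them
keys1 = {key_of(d) for d in biglist1}
keys2 = {key_of(d) for d in biglist2}
common = keys1 & keys2

# 3. use the mapping from step 1 to assign ids to the matching entries of biglist2
for d in biglist2:
    k = key_of(d)
    if k in common:
        d['id'] = ids_by_key[k]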
Forgive my rusty Python syntax, it's been a while, so consider this partially pseudocode:
import operator

biglist1.sort(key=operator.itemgetter('date', 'transaction'))
biglist2.sort(key=operator.itemgetter('date', 'transaction'))

biglist3 = []
i1 = 0
i2 = 0
while i1 < len(biglist1) and i2 < len(biglist2):
    key1 = (biglist1[i1]['date'], biglist1[i1]['transaction'])
    key2 = (biglist2[i2]['date'], biglist2[i2]['transaction'])
    if key1 == key2:
        biglist3.append(biglist1[i1])
        i1 += 1
        i2 += 1
    elif key1 < key2:
        i1 += 1
    elif key1 > key2:
        i2 += 1
    else:
        print("this won't happen if I did the tuple comparison correctly")
This sorts both lists into the same order, by (date,transaction). Then it walks through them side by side, stepping through each looking for relatively adjacent matches. It assumes that (date,transaction) is unique, and that I am not completely off my rocker with regards to tuple sorting and comparison.
In O(m*n)...
list_transactionnamematches = []
for item in biglist2:
    for transaction in biglist1:
        if (item['transaction'] == transaction['transaction'] and
                item['date'] == transaction['date'] and
                item['foo'] == transaction['foo']):   # ... and so on for the other keys except id
            list_transactionnamematches.append(transaction)
The approach I would probably take to this is to make a very, very lightweight class with one instance variable and one method. The instance variable is a pointer to a dictionary; the method overrides the built-in special method __hash__(self), returning a value calculated from all the values in the dictionary except id.
From there the solution seems fairly obvious: Create two initially empty dictionaries: N and M (for no-matches and matches.) Loop over each list exactly once, and for each of these dictionaries representing a transaction (let's call it a Tx_dict), create an instance of the new class (a Tx_ptr). Then test for an item matching this Tx_ptr in N and M: if there is no matching item in N, insert the current Tx_ptr into N; if there is a matching item in N but no matching item in M, insert the current Tx_ptr into M with the Tx_ptr itself as a key and a list containing the Tx_ptr as the value; if there is a matching item in N and in M, append the current Tx_ptr to the value associated with that key in M.
After you've gone through every item once, your dictionary M will contain pointers to all the transactions which match other transactions, all neatly grouped together into lists for you.
Edit: Oops! Obviously, the correct action if there is a matching Tx_ptr in N but not in M is to insert a key-value pair into M with the current Tx_ptr as the key and as the value, a list of the current Tx_ptr and the Tx_ptr that was already in N.
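A rough sketch of what such a wrapper and the N/M bookkeeping could look like (class and variable names are made up; note that dictionary lookups also need __eq__ alongside __hash__, which the description above leaves implicit):
class TxPtr:
    """Lightweight wrapper around one transaction dict; hashes on every field except 'id'."""
    def __init__(self, tx):
        self.tx = tx                      # the "pointer" to the underlying dictionary

    def _key(self):
        return tuple(sorted((k, v) for k, v in self.tx.items() if k != 'id'))

    def __hash__(self):
        return hash(self._key())

    def __eq__(self, other):
        return self._key() == other._key()

N = {}   # no-matches: first occurrence of each distinct transaction
M = {}   # matches: groups of transactions that match each other

for tx in biglist1 + biglist2:            # loop over each list exactly once
    ptr = TxPtr(tx)
    if ptr not in N:
        N[ptr] = ptr
    elif ptr not in M:
        M[ptr] = [N[ptr], ptr]            # per the edit: include the one already in N
    else:
        M[ptr].append(ptr)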
Have a look at Psyco. It's a Python compiler that can create very fast, optimized machine code from your source.
http://sourceforge.net/projects/psyco/
While this isn't a direct solution to your code's efficiency issues, it could still help speed things up without needing to write any new code. That said, I'd still highly recommend optimizing your code as much as possible AND using Psyco to squeeze as much speed out of it as possible.
Part of their guide specifically talks about using it to speed up functions that are heavy on list, string, and numeric computation.
http://psyco.sourceforge.net/psycoguide/node8.html
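If I recall its API correctly, enabling it is just a couple of lines at the top of your script:
import psyco
psyco.full()   # compile every function Psyco can handle; psyco.bind(func) targets a single function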
I'm also a newbie. My code is structured in much the same way as his.
for A in biglist:
    for B in biglist:
        if (A.get('somekey') != B.get('somekey') and   # don't match to itself
                len(set(A.get('list')) - set(B.get('list'))) > 10):
            [do stuff...]
This takes hours to run through a list of 10000 dictionaries. Each dictionary contains lots of stuff but I could potentially pull out just the ids ('somekey') and lists ('list') and rewrite as a single dictionary of 10000 key:value pairs.
Question: how much faster would that be? And I assume this is faster than using a list of lists, right?
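For what it's worth, a rough sketch of the precomputation you describe, so each set is built once up front instead of being rebuilt inside the double loop (key names follow your snippet):
# build {id: set_of_list_items} once, up front
sets_by_id = {d['somekey']: set(d['list']) for d in biglist}

for id_a, set_a in sets_by_id.items():
    for id_b, set_b in sets_by_id.items():
        if id_a != id_b and len(set_a - set_b) > 10:
            pass  # [do stuff...]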