Python Radix Sort

I'm trying to implement Radix sort in python.
My current program is not working correctly in that a list like [41,51,2,3,123] will be sorted correctly to [2,3,41,51,123], but something like [52,41,51,42,23] will become [23,41,42,52,51] (52 and 51 are in the wrong place).
I think I know why this is happening: when I compare the digits in the tens place, I don't take the units into account (and the same for higher powers of 10).
How do I fix this issue so that my program runs in the fastest way possible? Thanks!
def radixsort(aList):
    BASEMOD = 10
    terminateLoop = False
    temp = 0
    power = 0
    newList = []
    while not terminateLoop:
        terminateLoop = True
        tempnums = [[] for x in range(BASEMOD)]
        for x in aList:
            temp = int(x / (BASEMOD ** power))
            tempnums[temp % BASEMOD].append(x)
            if terminateLoop:
                terminateLoop = False
        for y in tempnums:
            for x in range(len(y)):
                if int(y[x] / (BASEMOD ** (power+1))) == 0:
                    newList.append(y[x])
                    aList.remove(y[x])
        power += 1
    return newList
print(radixsort([1,4,1,5,5,6,12,52,1,5,51,2,21,415,12,51,2,51,2]))

Currently, your sort does nothing to reorder values based on anything but their highest digit. You get 41 and 42 right only by chance (since they are in the correct relative order in the initial list).
You should always rebuild the list on each cycle of the sort.
def radix_sort(nums, base=10):
    result_list = []
    power = 0
    while nums:
        bins = [[] for _ in range(base)]
        for x in nums:
            bins[x // base**power % base].append(x)
        nums = []
        for bin in bins:
            for x in bin:
                if x < base**(power+1):
                    result_list.append(x)
                else:
                    nums.append(x)
        power += 1
    return result_list
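A quick sanity check on the input that originally came out wrong (just a usage example of the function above):
print(radix_sort([52, 41, 51, 42, 23]))  # [23, 41, 42, 51, 52]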
Note that radix sort is not necessarily faster than a comparison-based sort. It only has a lower complexity if the number of items to be sorted is larger than the range of the items' values. Its complexity is O(len(nums) * log(max(nums))) rather than O(len(nums) * log(len(nums))).
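If you want to check that claim empirically, something along these lines will do; it is only a rough sketch, and the absolute timings depend entirely on your machine and data:
import random
import timeit

data = [random.randrange(10**6) for _ in range(10**5)]
print(timeit.timeit(lambda: radix_sort(data), number=3))  # radix_sort from above
print(timeit.timeit(lambda: sorted(data), number=3))      # built-in comparison sort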

Radix sort works by grouping the elements on one digit position at a time, starting from the least significant digit. Taking [2,3,41,51,123], we first group the numbers by their ones digit:
[[],[41,51],[2],[3,123],[],[],[],[],[],[]]
Reading the buckets back in order gives the new array
[41,51,2,3,123]
Then we sort on the tens digit. In this case [2,3] = [02,03]:
[[2,3],[],[123],[],[41],[51],[],[],[],[]]
The new array is now
[2,3,123,41,51]
Lastly we sort on the hundreds digit. This time [2,3,41,51] = [002,003,041,051]:
[[2,3,41,51],[123],[],[],[],[],[],[],[],[]]
and we finally end up with [2,3,41,51,123].
def radixsort(A):
    if not isinstance(A, list):
        raise TypeError('')
    n = len(A)
    maxelement = max(A)
    digits = len(str(maxelement))  # how many digits in the max element
    l = []
    bins = [l] * 10  # [[], [], ........., []] 10 bins
    for i in range(digits):
        for j in range(n):  # within this we traverse the unsorted array
            e = int((A[j] / pow(10, i)) % 10)
            if len(bins[e]) > 0:
                bins[e].append(A[j])  # add the item to the end
            else:
                bins[e] = [A[j]]
        k = 0  # used for the index of the re-sorted array A
        for x in range(10):  # traverse the bins and rebuild the array
            if len(bins[x]) > 0:
                for y in range(len(bins[x])):
                    A[k] = bins[x].pop(0)  # remove element from the beginning
                    k = k + 1
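A quick check of this version (note that it sorts A in place and returns None):
A = [41, 51, 2, 3, 123]
radixsort(A)
print(A)  # [2, 3, 41, 51, 123]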

Iterate through a tuple of numbers alternating between positive and negative values of the number [duplicate]

I want to iterate through numbers to give the output:
(0,0)
(1,0)
(0,1)
(0,-1)
(-1,0)
(1,-1)
(-1,1)
(1,1)
(2,0)
(-2,0)
(2,1)
(-2,1)
(2,-1)
(2,2)
(0,2) # the order in which they come isn't important, as long as it doesn't start the next absolute value before all smaller ones are done, i.e. don't start 2 until every combination of 0, 1 and -1 has been found
...
# up to n, unless the condition is met then it will break the loop
So effectively every combination of positive and negative numbers up to +/- n.
I'm currently using for a, b in itertools.permutations(range(-n, n), 2):. However, I'm then appending all the values to an array (valid_answers) and finding the pair with the smallest sum of absolute values (vals = sorted(valid_answers, key=lambda t: sum([abs(t[0]), abs(t[1])]))).
I just want to iterate from 0 rather than from -n to n, breaking the first time the condition is met. I hope this code is sufficient to explain what I want to do. If not, the full code (enough to replicate what I am doing) is available here (lines 51 onwards).
Edit
I am thinking maybe multiplying by powers of -1 is a possible approach to take, but I am not too sure how to approach it.
This may be a little verbose but it should work.
def get_values(n):
    out = []
    if n < 0:
        return out
    out.append((0, 0))
    if n == 0:
        return out
    for i in range(1, n + 1):
        # pairs whose largest absolute coordinate is exactly i:
        # first the ones whose first coordinate is +/- i ...
        out += [(i, j) for j in range(-i, i + 1)]
        out += [(-i, j) for j in range(-i, i + 1)]
        # ... then the ones whose second coordinate is +/- i (skipping duplicates)
        out += [(j, i) for j in range(-i + 1, i)]
        out += [(j, -i) for j in range(-i + 1, i)]
    return out
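For example, for n = 1 this produces the whole ring around the origin (the exact ordering within a ring is not important here):
print(get_values(1))
# [(0, 0), (1, -1), (1, 0), (1, 1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (0, -1)]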
This is the solution I went for.
def get_values(n: int) -> list:
    if n < 0:
        return []
    if n == 0:
        return [(0, 0)]
    out = set()
    for i in range(0, n + 1):
        for j in range(0, i + 1):
            # every sign combination of (i, j) and of the swapped (j, i)
            out.add((i, j))
            out.add((-i, j))
            out.add((i, -j))
            out.add((-i, -j))
            out.add((j, i))
            out.add((-j, i))
            out.add((j, -i))
            out.add((-j, -i))
    return sorted(out, key=lambda t: max(abs(t[0]), abs(t[1])))
...
for item in get_values(n=1000):
    a, b = item
    ...
It creates a set (so there are no duplicates), adds every combination of the numbers to it, and then returns the set as a sorted list. I don't think it's the most efficient way of doing it, so I would appreciate faster, cleaner methods.
This answer was largely taken from another answer so thanks!

Count pairs of elements in an array whose sum equals a given sum (but) do it in a single iteration(!)

Given an array of integers, and a number ‘sum’, find the number of pairs of integers in the array whose sum is equal to the given ‘sum’, in a SINGLE iteration. (O(n) time complexity is not enough!).
Usually, I would iterate through the array twice: once to create a hashmap of frequencies and once more to find the number of pairs, as shown below:
from collections import defaultdict

def getPairsCount(arr, n, sum):
    m = defaultdict(int)
    for i in range(0, n):  # iteration no. 1
        m[arr[i]] += 1
    twice_count = 0
    for i in range(0, n):  # iteration no. 2
        twice_count += m[sum - arr[i]]
        if (sum - arr[i] == arr[i]):
            twice_count -= 1
    return int(twice_count / 2)
I was asked by an interviewer to do the same in a single iteration instead of two. I am at a loss how to do it without breaking edge cases like {2,2,1,1} where the required sum is 3.
One way is to build the hash map at the same time as you consume it, thereby only looping over the list once. For each value in the array, check whether you have already seen its complement (the value needed to reach the sum). If so, you have found a new pair, and you remove one occurrence of the complement from the seen values. Otherwise, you record the value you have just seen.
In code this looks as follows:
from collections import defaultdict

def get_pairs_count(array, sum):
    pairs_count = 0
    seen_values = defaultdict(int)
    for value in array:
        complement = sum - value
        if seen_values[complement] > 0:
            pairs_count += 1
            seen_values[complement] -= 1
        else:
            seen_values[value] += 1
    return pairs_count
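For the edge case mentioned in the question, this counts each 2 + 1 pairing once per available occurrence:
print(get_pairs_count([2, 2, 1, 1], 3))  # 2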
Another way:
def pair_sum2(arr, k):
    if len(arr) < 2:
        return
    seen = set()
    output = set()
    for num in arr:
        target = k - num
        print("target", target)
        if target not in seen:
            print("seen before add", seen)
            seen.add(num)
            print("seen", seen)
        else:
            output.add((min(num, target), max(num, target)))
    print("op:", output)
    print('\n'.join(map(str, list(output))))

Guidance on removing a nested for loop from function

I'm trying to write the fastest algorithm possible to return the number of "magic triples" (i.e. x, y, z where z is a multiple of y and y is a multiple of x) in a list of 3-2000 integers.
(Note: I believe the list was expected to be sorted and unique but one of the test examples given was [1,1,1] with the expected result of 1 - that is a mistake in the challenge itself though because the definition of a magic triple was explicitly noted as x < y < z, which [1,1,1] isn't. In any case, I was trying to optimise an algorithm for sorted lists of unique integers.)
I haven't been able to work out a solution that doesn't involve three nested loops and is therefore O(n^3). I've seen one online that is O(n^2), but I can't get my head around what it's doing, so it doesn't feel right to submit it.
My code is:
def solution(l):
    if len(l) < 3:
        return 0
    elif l == [1,1,1]:
        return 1
    else:
        halfway = int(l[-1]/2)
        quarterway = int(halfway/2)
        quarterIndex = 0
        halfIndex = 0
        for i in range(len(l)):
            if l[i] >= quarterway:
                quarterIndex = i
                break
        for i in range(len(l)):
            if l[i] >= halfway:
                halfIndex = i
                break
        triples = 0
        for i in l[:quarterIndex+1]:
            for j in l[:halfIndex+1]:
                if j != i and j % i == 0:
                    multiple = 2
                    while (j * multiple) <= l[-1]:
                        if j * multiple in l:
                            triples += 1
                        multiple += 1
        return triples
I've spent quite a lot of time going through examples manually and removing loops through unnecessary sections of the lists, but this still completes a list of 2,000 integers in about a second, whereas the O(n^2) solution I found completes the same list in 0.6 seconds. It seems like a small difference, but it means mine takes roughly 60% longer.
Am I missing a really obvious way of removing one of the loops?
Also, I saw mention of making a directed graph and I see the promise in that. I can make the list of first nodes from the original list with a built-in function, so in principle I presume that means I can make the overall graph with two for loops and then return the length of the third node list, but I hit a wall with that too. I just can't seem to make progress without that third loop!!
from array import array

def num_triples(l):
    n = len(l)
    pairs = set()
    lower_counts = array("I", (0 for _ in range(n)))
    upper_counts = lower_counts[:]
    for i in range(n - 1):
        lower = l[i]
        for j in range(i + 1, n):
            upper = l[j]
            if upper % lower == 0:
                lower_counts[i] += 1
                upper_counts[j] += 1
    return sum(nx * nz for nz, nx in zip(lower_counts, upper_counts))
Here, lower_counts[i] is the number of pairs of which the ith number is the y, and z is the other number in the pair (i.e. the number of different z values for this y).
Similarly, upper_counts[i] is the number of pairs of which the ith number is the y, and x is the other number in the pair (i.e. the number of different x values for this y).
So the number of triples in which the ith number is the y value is just the product of those two numbers.
The use of an array here for storing the counts is for scalability of access time. Tests show that up to n=2000 it makes negligible difference in practice, and even up to n=20000 it only made about a 1% difference to the run time (compared to using a list), but it could in principle be the fastest growing term for very large n.
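As a quick check against the example list used elsewhere in this thread (the function assumes the input is sorted):
print(num_triples([1, 2, 3, 4, 5, 6]))  # 3, i.e. [1,2,4], [1,2,6] and [1,3,6]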
How about using itertools.combinations instead of nested for loops? Combined with list comprehension, it's cleaner and much faster. Let's say l = [your list of integers] and let's assume it's already sorted.
from itertools import combinations

def div(i, j, k):  # this function has the logic
    return l[k] % l[j] == l[j] % l[i] == 0

r = sum([div(i, j, k) for i, j, k in combinations(range(len(l)), 3) if i < j < k])
@alaniwi provided a very smart iterative solution.
Here is a recursive solution.
def find_magicals(lst, nplet):
    """Find the number of magical n-plets in a given lst"""
    res = 0
    for i, base in enumerate(lst):
        # find all the multiples of current base
        multiples = [num for num in lst[i + 1:] if not num % base]
        res += len(multiples) if nplet <= 2 else find_magicals(multiples, nplet - 1)
    return res

def solution(lst):
    return find_magicals(lst, 3)
The problem can be divided into picking any number in the original list as the base (i.e. x) and then finding how many duplets we can form among the numbers bigger than the base. Since the method for finding all duplets is the same as for finding triplets, we can solve the problem recursively.
From my testing, this recursive solution is comparable to, if not more performant than, the iterative solution.
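A quick check on the same example list (like the iterative version, this assumes a sorted input):
print(solution([1, 2, 3, 4, 5, 6]))  # 3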
This answer was the first suggestion by @alaniwi and is the one I've found to be the fastest (at 0.59 seconds for a 2,000 integer list).
def solution(l):
    n = len(l)
    lower_counts = dict((val, 0) for val in l)
    upper_counts = lower_counts.copy()
    for i in range(n - 1):
        lower = l[i]
        for j in range(i + 1, n):
            upper = l[j]
            if upper % lower == 0:
                lower_counts[lower] += 1
                upper_counts[upper] += 1
    return sum((lower_counts[y] * upper_counts[y] for y in l))
I think I've managed to get my head around it. What it is essentially doing is comparing each number in the list with every other number to see if the larger is divisible by the smaller, and building two dictionaries:
One with the number of times a number divides a larger number,
One with the number of times a number has a smaller number that divides it.
You then multiply the two values for each key, because that product is the number of triples in which the key is the middle number (a 0 in either dictionary means it cannot be the middle of any triple).
Example:
l = [1,2,3,4,5,6]
lower_counts = {1:5, 2:2, 3:1, 4:0, 5:0, 6:0}
upper_counts = {1:0, 2:1, 3:1, 4:2, 5:1, 6:3}
triple_tuple = ([1,2,4], [1,2,6], [1,3,6])

Code challenge: finding the divisible in a list

I am playing a code challenge. Simply speaking, the problem is:
Given a list L (max length is of the order of 1000) containing positive integers.
Find the number of "Lucky Triples": indices i < j < k such that L[i] divides L[j] and L[j] divides L[k].
For example, [1,2,3,4,5,6] should give the answer 3, because of [1,2,4], [1,2,6] and [1,3,6].
My attempt:
Sort the list (let's say there are n elements).
Three for loops: i, j, k (i from 1 to n-2, j from i+1 to n-1, k from j+1 to n).
The k loop is only executed if L[j] % L[i] == 0.
The algorithm seems to give the correct answer, but the challenge said that my code exceeded the time limit. I tried it on my computer for the list [1,2,3,...,2000]: count = 40888 (I guess it is correct), and the time is around 5 seconds.
Is there any faster way to do that?
This is the code I have written in python.
def answer(l):
    l.sort()
    cnt = 0
    if len(l) == 2:
        return cnt
    for i in range(len(l)-2):
        for j in range(1,len(l)-1-i):
            if (l[i+j]%l[i] == 0):
                for k in range(1,len(l)-j-i):
                    if (l[i+j+k]%l[i+j] == 0):
                        cnt += 1
    return cnt
You can use additional space to help yourself. After you sort the input list, build a map/dict where each key is an element of the list and its value is the list of elements it divides. Assuming the sorted list is list = [1,2,3,4,5,6], your map would be
1 -> [2, 3, 4, 5, 6]
2 -> [4, 6]
3 -> [6]
4 -> []
5 -> []
6 -> []
Now, for every key in the map you find what it can divide, and then what that divides. For example, you know that 1 divides 2, and 2 divides 4 and 6; similarly, 1 divides 3, and 3 divides 6.
The complexity of sorting should be O(n log n), and constructing the map should be better than O(n^2) (though I am not sure about this part). I am also not sure about the complexity of the final step where you check for multiples, but it should be much faster than a brute-force O(n^3).
If someone could help me figure out the time complexity of this, I would really appreciate it.
EDIT:
You can make the map-creation part faster by stepping by X (and not 1), where X is the number in the list you are currently on, since the list is sorted.
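A possible sketch of that map-building step (the helper name and the distinct-values assumption are mine, not from the answer above):
def build_multiples_map(sorted_list):
    # For each value, collect the larger values in the list that it divides,
    # stepping through its multiples rather than through every list element.
    present = set(sorted_list)
    limit = sorted_list[-1]
    mapping = {}
    for x in sorted_list:
        mapping[x] = [m for m in range(2 * x, limit + 1, x) if m in present]
    return mapping

print(build_multiples_map([1, 2, 3, 4, 5, 6]))
# {1: [2, 3, 4, 5, 6], 2: [4, 6], 3: [6], 4: [], 5: [], 6: []}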
Thank you guys for all your suggestions. They are brilliant. But it seems that I still can't pass the speed test, or I cannot handle duplicated elements.
After discussing with my friend, I have just come up with another solution. It should be O(n^2) and I passed the speed test. Thanks all!!
def answer(lst):
    lst.sort()
    count = 0
    if len(lst) == 2:
        return count
    # for each middle element, count the divisors at the front and the
    # multiples at the back, then multiply them
    for i, middle in enumerate(lst[1:len(lst)-1], start = 1):
        countfirst = 0
        countthird = 0
        for first in (lst[0:i]):
            if middle % first == 0:
                countfirst += 1
        for third in (lst[i+1:]):
            if third % middle == 0:
                countthird += 1
        count += countfirst*countthird
    return count
I guess sorting the list is pretty inefficient. I would rather try to iteratively reduce the number of candidates. You could do that in two steps.
First, filter out all numbers that do not have a divisor.
from itertools import combinations
candidates = [max(pair) for pair in combinations(l, 2) if max(pair)%min(pair) == 0]
After that, count the number of remaining candidates that do have a divisor.
result = sum(max(pair)%min(pair) == 0 for pair in combinations(candidates, 2))
Your original code, for reference.
def answer(l):
    l.sort()
    cnt = 0
    if len(l) == 2:
        return cnt
    for i in range(len(l)-2):
        for j in range(1,len(l)-1-i):
            if (l[i+j]%l[i] == 0):
                for k in range(1,len(l)-j-i):
                    if (l[i+j+k]%l[i+j] == 0):
                        cnt += 1
    return cnt
There are a number of inefficiencies here, and with just a few tweaks we can probably get this running much faster. Let's start:
def answer(lst):  # I prefer not to use `l` because it looks like `1`
    lst.sort()
    count = 0  # use whole words here. No reason not to.
    if len(lst) == 2:
        return count
    for i, first in enumerate(lst):
        # using `enumerate` here means you can avoid ugly ranges and
        # saves you from a look up on the list afterwards. Not really a
        # performance hit, but definitely looks and feels nicer.
        for j, second in enumerate(lst[i+1:], start=i+1):
            # this is the big savings. You know since you sorted the list that
            # lst[1] can't divide lst[n] if n>1, but your code still starts
            # searching from lst[1] every time! Enumerating over `l[i+1:]`
            # cuts out a lot of unnecessary burden.
            if second % first == 0:
                # see how using enumerate makes that look nicer?
                for third in lst[j+1:]:
                    if third % second == 0:
                        count += 1
    return count
I bet that on its own will pass your speed test, but if not, you can check for membership instead. In fact, using a set here is probably a great idea!
def answer2(lst):
    s = set(lst)
    limit = max(s)  # we'll never have a valid product higher than this
    multiples = {}  # accumulator for our mapping
    for n in sorted(s):
        max_prod = limit // n  # n * (max_prod+1) > limit
        multiples[n] = [n*k for k in range(2, max_prod+1) if n*k in s]
    # in [1,2,3,4,5,6]:
    # multiples = {1: [2, 3, 4, 5, 6],
    #              2: [4, 6],
    #              3: [6],
    #              4: [],
    #              5: [],
    #              6: []}
    # multiples is now a mapping you can use a Depth- or Breadth-first-search on
    triples = sum(1 for j in multiples
                    for k in multiples.get(j, [])
                    for l in multiples.get(k, []))
    # This basically just looks up each starting value as j, then grabs
    # each valid multiple and assigns it to k, then grabs each valid
    # multiple of k and assigns it to l. For every possible combination there,
    # it adds 1 more to the result of `triples`
    return triples
I'll give you just an idea; the implementation is up to you (one possible reading is sketched below):
1. Initialize the global counter to zero.
2. Sort the list, starting with the smallest number.
3. Create a list of integers (one entry per number, with the same index).
4. Iterate through each number (index i), and do the following:
   - Check for dividers at positions 0 to i-1.
   - Store the number of dividers in the list at position i.
   - For each divider found, fetch its own divider count from the list and add it to the global counter.
5. Unless you are finished, continue with the next number (repeat step 4).
Your result is in the global counter.
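A minimal sketch of that idea (the function and variable names here are mine; counts[i] holds the number of dividers of lst[i] found at earlier positions):
def count_triples(lst):
    lst = sorted(lst)
    counts = [0] * len(lst)
    total = 0
    for i in range(len(lst)):
        for j in range(i):
            if lst[i] % lst[j] == 0:
                counts[i] += 1      # lst[j] is a divider of lst[i]
                total += counts[j]  # every divider of lst[j] completes a triple
    return total

print(count_triples([1, 2, 3, 4, 5, 6]))  # 3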

Python 2 lists of positive integers finding prime number

Given 2 lists of positive integers, find how many ways you can select a number from each of the lists such that their sum is a prime number.
My code is too slow, as both list1 and list2 contain 50,000 numbers each. Is there any way to make it faster, so it solves the problem in minutes instead of days? :)
def isprime(n):
    # 2 is the only even prime number
    if n == 2: return True
    # all other even numbers are not primes
    if not n & 1: return False
    # range starts with 3 and only needs to go
    # up to the square root of n, for all odd numbers
    for x in range(3, int(n**0.5)+1, 2):
        if n % x == 0: return False
    return True

n = 0
for i2 in l2:
    for i1 in l1:
        if isprime(i1 + i2):
            n = n + 1  # increasing number of ways
            s = "{0:02d}: {1:d}".format(n, i1 + i2)
            print(s)  # printing out
Sketch:
Following @Steve's advice, first figure out all the primes <= max(l1) + max(l2). Let's call that list primes. Note: primes doesn't really need to be a list; you could instead generate the primes up to the max one at a time.
Swap your lists (if necessary) so that l2 is the longer list. Then turn it into a set: l2 = set(l2).
Sort l1 (l1.sort()).
Then:
for p in primes:
    for i in l1:
        diff = p - i
        if diff < 0:
            # assuming there are no negative numbers in l2;
            # since l1 is sorted, all diffs at and beyond this
            # point will be negative
            break
        if diff in l2:
            # print whatever you like
            # at this point, p is a prime, and is the
            # sum of diff (from l2) and i (from l1)
            pass
Alas, if l2 is, for example:
l2 = [2, 3, 100000000000000000000000000000000000000000000000000]
this is impractical. It relies on the fact that, as in your example, max(max(l1), max(l2)) is "reasonably small".
Fleshed out
Hmm! You said in a comment that the numbers in the lists are up to 5 digits long. So they're less than 100,000. And you said at the start that the lists have 50,000 elements each. So they each contain about half of all possible integers under 100,000, and you're going to have a very large number of sums that are primes. That's all important if you want to micro-optimize ;-)
Anyway, since the maximum possible sum is less than 200,000, any way of sieving will be fast enough - it will be a trivial part of the runtime. Here's the rest of the code:
def primesum(xs, ys):
    if len(xs) > len(ys):
        xs, ys = ys, xs
    # Now xs is the shorter list.
    xs = sorted(xs)  # don't mutate the input list
    sum_limit = xs[-1] + max(ys)  # largest possible sum
    ys = set(ys)  # make lookups fast
    count = 0
    for p in gen_primes_through(sum_limit):
        for x in xs:
            diff = p - x
            if diff < 0:
                # Since xs is sorted, all diffs at and
                # beyond this point are negative too.
                # Since ys contains no negative integers,
                # no point continuing with this p.
                break
            if diff in ys:
                #print("%s + %s = prime %s" % (x, diff, p))
                count += 1
    return count
I'm not going to supply my gen_primes_through(), because it's irrelevant. Pick one from the other answers, or write your own.
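For completeness, one possible stand-in for the omitted helper (a plain sieve; any correct prime generator will do):
def gen_primes_through(limit):
    # Simple sieve of Eratosthenes; yields every prime <= limit.
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i : limit + 1 : i] = bytearray(len(range(i * i, limit + 1, i)))
    return (i for i in range(2, limit + 1) if sieve[i])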
Here's a convenient way to supply test cases:
from random import sample
xs = sample(range(100000), 50000)
ys = sample(range(100000), 50000)
print(primesum(xs, ys))
Note: I'm using Python 3. If you're using Python 2, use xrange() instead of range().
Across two runs, they each took about 3.5 minutes. That's what you asked for at the start ("minutes instead of days"). Python 2 would probably be faster. The counts returned were:
219,334,097
and
219,457,533
The total number of possible sums is, of course, 50000**2 == 2,500,000,000.
About timing
All the methods discussed here, including your original one, take time proportional to the product of two lists' lengths. All the fiddling is to reduce the constant factor. Here's a huge improvement over your original:
def primesum2(xs, ys):
    sum_limit = max(xs) + max(ys)  # largest possible sum
    count = 0
    primes = set(gen_primes_through(sum_limit))
    for i in xs:
        for j in ys:
            if i+j in primes:
                # print("%s + %s = prime %s" % (i, j, i+j))
                count += 1
    return count
Perhaps you'll understand that one better. Why is it a huge improvement? Because it replaces your expensive isprime(n) function with a blazing fast set lookup. It still takes time proportional to len(xs) * len(ys), but the "constant of proportionality" is slashed by replacing a very expensive inner-loop operation with a very cheap operation.
And, in fact, primesum2() is faster than my primesum() in many cases too. What makes primesum() faster in your specific case is that there are only around 18,000 primes less than 200,000. So iterating over the primes (as primesum() does) goes a lot faster than iterating over a list with 50,000 elements.
A "fast" general-purpose function for this problem would need to pick different methods depending on the inputs.
You should use the Sieve of Eratosthenes to calculate prime numbers.
You are also testing primality separately for each possible sum. Instead, consider finding the maximum value you can achieve with a sum from the lists, and generate a list of all the prime numbers up to that maximum value.
Whilst you are adding up the numbers, you can see if the number appears in your prime number list or not.
I would find the highest number in each range. The range of primes is the sum of the highest numbers.
Here is code to sieve out primes:
def eras(n):
    last = n + 1
    sieve = [0, 0] + list(range(2, last))
    sqn = int(round(n ** 0.5))
    it = (i for i in xrange(2, sqn + 1) if sieve[i])
    for i in it:
        sieve[i * i:last:i] = [0] * (n // i - i + 1)
    return filter(None, sieve)
It takes around 3 seconds to find the primes up to 10,000,000. Then I would use the same O(n^2) algorithm you are using for generating the sums. I think there is an O(n log n) algorithm, but I can't come up with it.
It would look something like this:
from collections import defaultdict

possible = defaultdict(int)
for x in range1:
    for y in range2:
        possible[x + y] += 1

def eras(n):
    last = n + 1
    sieve = [0, 0] + list(range(2, last))
    sqn = int(round(n ** 0.5))
    it = (i for i in xrange(2, sqn + 1) if sieve[i])
    for i in it:
        sieve[i * i:last:i] = [0] * (n // i - i + 1)
    return filter(None, sieve)

n = max(possible.keys())
primes = eras(n)
possible_primes = set(possible.keys()).intersection(set(primes))

for p in possible_primes:
    print "{0}: {1} possible ways".format(p, possible[p])
