Problem:
On the standard dial pad shown below, how many unique numbers can be generated by jumping N times, with the constraint that each jump must move like a knight in chess? You also cannot land on any invalid positions (the X's), but you may pass over them.
Dialer:
1 2 3
4 5 6
7 8 9
X 0 X
Very similar to this
Generate 10-digit number using a phone keypad
Here is what I have so far, but it is crazy slow (Python 2.7):
jumpMap = {
    1: [6, 8],
    2: [7, 9],
    3: [4, 8],
    4: [0, 3, 9],
    5: [],
    6: [0, 1, 7],
    7: [2, 6],
    8: [1, 3],
    9: [2, 4],
    0: [4, 6]
}
def findUnique(start, jumps):
    if jumps == 1:
        # Base case 1 jump
        return len(jumpMap[start])
    if start == 5:
        return 0
    sum = 0
    for value in jumpMap[start]:
        sum = sum + findUnique(value, jumps - 1)
    return sum
I'm guessing the easiest way to optimize would be to have some kind of memoization, but I can't figure out how to use one given the problem constraints.
Let K(k, n) be the number of unique numbers of length n, starting with key k. Then, K(k, n+1) = sum(K(i, n)) where i ranges over the keys that it's possible to jump to from key k.
This can be calculated efficiently using dynamic programming; here's one way that takes O(n) time and O(1) space:
jumpMap = [map(int, x) for x in '46,68,79,48,039,,017,26,13,24'.split(',')]

def jumps(n):
    K = [1] * 10
    for _ in xrange(n):
        K = [sum(K[j] for j in jumpMap[i]) for i in xrange(10)]
    return sum(K)

for i in xrange(10):
    print i, jumps(i)
Faster: it's possible to compute the answer in log(n) time and O(1) space. Let M be the 10 by 10 matrix with M[i,j] = 1 if it's possible to jump from i to j, and 0 otherwise. Then sum(M^n * ones(10, 1)) is the answer. The matrix power can be computed using exponentiation by squaring in log(n) time. Here's some code using numpy:
import numpy

jumpMap = [map(int, x) for x in '46,68,79,48,039,,017,26,13,24'.split(',')]
M = numpy.matrix([[1 if j in jumpMap[i] else 0 for j in xrange(10)] for i in xrange(10)])

def jumps_mat(n):
    return sum(M ** n * numpy.ones((10, 1)))[0, 0]

for i in xrange(10):
    print i, jumps_mat(i)
You can use functools.lru_cache (Python 3.2+); it will memoize the calls:
from functools import lru_cache

jumpMap = {
    1: [6, 8],
    2: [7, 9],
    3: [4, 8],
    4: [0, 3, 9],
    5: [],
    6: [0, 1, 7],
    7: [2, 6],
    8: [1, 3],
    9: [2, 4],
    0: [4, 6]
}

@lru_cache(maxsize=1000)
def findUnique(start, jumps):
    if jumps == 1:
        return len(jumpMap[start])
    if start == 5:
        return 0
    sum = 0
    for value in jumpMap[start]:
        sum = sum + findUnique(value, jumps - 1)
    return sum
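Note that functools.lru_cache only exists in Python 3, while the question targets Python 2.7. A hand-rolled memo dict does the same job; here is a minimal sketch of my own (one way, not the only way), keyed on the (start, jumps) pair:

memo = {}

def findUniqueMemo(start, jumps):
    key = (start, jumps)
    if key not in memo:
        if jumps == 1:
            memo[key] = len(jumpMap[start])
        else:
            # This also handles 5: jumpMap[5] is empty, so the sum is 0.
            memo[key] = sum(findUniqueMemo(value, jumps - 1)
                            for value in jumpMap[start])
    return memo[key]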
Given a number n, I want to generate a sorted list of all the unique divisors of n (no duplicates).
Solving this problem is really straightforward, but what I'm interested in is the efficiency.
What is the fastest way to do it?
Here is one way, with pure python:
def get_divisors(n):
    """
    :param n: positive integer.
    :return: list of all different divisors of n.
    """
    if n <= 0:
        return []
    divisors = [1, n]
    for div in range(1, int(n ** 0.5 + 1)):
        if n % div == 0:
            divisors.extend([n // div, div])
    return sorted(list(set(divisors)))
Any suggestions on how to optimize this?
Numpy and other optimized libraries are welcome.
You already have the square root optimization. Next would be to leverage numpy's parallelism:
import numpy as np

def npDivs(N):
    divs = np.arange(1, int(N**0.5)+1)   # potential divisors up to √N
    divs = divs[N % divs == 0]           # divisors
    comp = N//divs[::-1]                 # complement quotients
    return np.concatenate((divs, comp[divs[-1]==comp[0]:]))  # combined
print(npDivs(1001**2))
[ 1 7 11 13 49 77 91 121 143
169 539 637 847 1001 1183 1573 1859 5929
7007 8281 11011 13013 20449 77077 91091 143143 1002001]
comp[divs[-1]==comp[0]:] avoids repeating the square root if it is an integer.
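In other words, the comparison produces a boolean that is used as the slice start (True behaves like 1). A tiny toy example of my own, for a perfect square like N = 16:

import numpy as np

N = 16
divs = np.arange(1, int(N**0.5) + 1)   # [1 2 3 4]
divs = divs[N % divs == 0]             # [1 2 4]
comp = N // divs[::-1]                 # [ 4  8 16]
# divs[-1] == comp[0] is True here, so the slice starts at 1 and the
# square root 4 is not duplicated in the result:
print(np.concatenate((divs, comp[divs[-1] == comp[0]:])))  # [ 1  2  4  8 16]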
An even faster approach would be to get the prime factors and combine them into a resulting set:
def getDivs(N):
    factors = {1}
    maxP = int(N**0.5)
    p, inc = 2, 1
    while p <= maxP:
        while N % p == 0:
            factors.update([f*p for f in factors])
            N //= p
            maxP = int(N**0.5)
        p, inc = p+inc, 2
    if N > 1:
        factors.update([f*N for f in factors])
    return sorted(factors)
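A quick sanity check (my own example):

print(getDivs(36))   # [1, 2, 3, 4, 6, 9, 12, 18, 36]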
Benchmarks:
from timeit import timeit
N = 1010101**2
print(timeit(lambda:getDivs(N),number=100)) # 0.0015
print(timeit(lambda:npDivs(N),number=100)) # 0.9753
print(timeit(lambda:get_divisors(N),number=100)) # 8.5605
Since the input is assumed not to be bigger than 1 billion, you can compute the prime factors using wheel factorization (with the basis {2, 3}), which is an improvement over basic trial division. This is fast because the number of prime factors is always small (no more than 30 values). You can then transform the prime factors into the list of divisors (possibly thousands of items). The factorization can be computed efficiently using the Numba just-in-time compiler (JIT). Here is the resulting code:
import numba as nb

@nb.njit('List(int_)(int_)')
def get_prime_divisors(n):
    divisors = []
    while n % 2 == 0:
        divisors.append(2)
        n //= 2
    while n % 3 == 0:
        divisors.append(3)
        n //= 3
    i = 5
    while i*i <= n:
        for k in (i, i+2):
            while n % k == 0:
                divisors.append(k)
                n //= k
        i += 6
    if n > 1:
        divisors.append(n)
    return divisors
@nb.njit('List(int_)(int_)')
def get_divisors(n):
    divisors = []
    if n == 1:
        divisors.append(1)
    elif n > 1:
        prime_factors = get_prime_divisors(n)
        divisors = [1]
        last_prime = 0
        factor = 0
        slice_len = 0
        # Find all the products that are divisors of n
        for prime in prime_factors:
            if last_prime != prime:
                slice_len = len(divisors)
                factor = prime
            else:
                factor *= prime
            for i in range(slice_len):
                divisors.append(divisors[i] * factor)
            last_prime = prime
    divisors.sort()
    return divisors
Here are timings on my machine for 5000 random integers between 1 and 1 million:
Initial get_divisors: 125 ms
Alain's getDivs: 40 ms
Tim Peters' get_divisors: 87 ms
This solution: 7 ms
Here are timings on my machine for 2000 random integers between 1 and 1 billion:
Initial get_divisors: 1403 ms
Alain's getDivs: 231 ms
Tim Peters' get_divisors: 178 ms
This solution: 8 ms
Thus, this solution is up to 6~22 times faster than the fastest alternative solution and up to 18~175 times faster than the initial implementation.
I expect the fastest way is to build the divisors "ground up" from a full prime factorization of the input. But you're not going to get an answer here for "the fastest way" to get a prime factorization to begin with: that's a huge topic all on its own, and still a very active area of research for large integers.
The way below simply uses trial division, up through the square root of what remains to be factored, but skipping multiples of 2 and 3 (except for 2 and 3 themselves). This is reasonably zippy through 32-bit ints.
Given a prime factorization
n == p0**e0 * p1**e1 * p2**e2 ...
Then the divisors of n are all and only the integers of that form with exponents less than or equal to e0, e1, e2, .... itertools.product() can straightforwardly generate all such tuples of exponents, and then it's just a matter of doing the powers and multiplying the results.
def get_divisors(n):
    from math import isqrt, prod
    from itertools import accumulate, chain, cycle, product
    if n <= 1:
        return [n]
    ps = []
    exps = []
    limit = isqrt(n)
    for cand in chain([2, 3], accumulate(cycle([2, 4]),
                                         initial=5)):
        if cand > limit:
            break
        if n % cand == 0:
            count = 1
            n //= cand
            while n % cand == 0:
                count += 1
                n //= cand
            ps.append(cand)
            exps.append(count)
            limit = isqrt(n)
    if n > 1:
        ps.append(n)
        exps.append(1)
    result = []
    for pows in product(*(range(exp + 1) for exp in exps)):
        result.append(prod(p**e for p, e in zip(ps, pows)))
    return sorted(result)
>>> for i in range(22):
... print(i, get_divisors(i))
0 [0]
1 [1]
2 [1, 2]
3 [1, 3]
4 [1, 2, 4]
5 [1, 5]
6 [1, 2, 3, 6]
7 [1, 7]
8 [1, 2, 4, 8]
9 [1, 3, 9]
10 [1, 2, 5, 10]
11 [1, 11]
12 [1, 2, 3, 4, 6, 12]
13 [1, 13]
14 [1, 2, 7, 14]
15 [1, 3, 5, 15]
16 [1, 2, 4, 8, 16]
17 [1, 17]
18 [1, 2, 3, 6, 9, 18]
19 [1, 19]
20 [1, 2, 4, 5, 10, 20]
21 [1, 3, 7, 21]
I'm working on a credit card validator. This area is new to me, so please don't laugh. :D
I'm trying to finish it without any libraries.
def creditCardValidation(creditcard):
    creditcard = creditcard.replace(' ', '')
    creditcard = [int(i) for i in creditcard]
    evens = creditcard[::2]
    odds = creditcard[1::2]
    evens = [element * 2 for element in evens]
    for j in evens:
        if j >= 10:
            j = [int(d) for d in str(j)]
            for x in j:
                evens.append(x)
    for j in evens:
        if j >= 10:
            evens.remove(j)
    return ((sum(evens) + sum(odds)) % 10 == 0)
creditCardValidation('1234 5678 9101 1213')
creditCardValidation('4561 2612 1234 5464')
creditCardValidation('4561 2612 1234 5467')
So the problem is in the array evens.
It returns
[2, 6, 14, 0, 2, 2, 1, 0, 1, 4, 1, 8]
[8, 4, 2, 2, 6, 12, 1, 2, 1, 0, 1, 2]
[8, 4, 2, 2, 6, 12, 1, 2, 1, 0, 1, 2]
It should return the same results except for the values of 10 or more. Everything else works fine. Take a look at the first array: 18 was deleted, as was 10, but 14 was not.
Removing elements while iterating over a list is not a good idea and will usually result in some elements being skipped during the iteration, so a safer way to do this
for j in evens:
    if j >= 10:
        evens.remove(j)
is to collect all the elements you want to remove in another list, then subtract it from your original if you are using NumPy arrays, or remove them one by one, as Python lists have no subtraction operation defined for subtracting one list from another:
to_remove = []
for j in evens:
    if j >= 10:
        to_remove.append(j)
for j in to_remove:
    evens.remove(j)
or you could whitelist instead of blacklisting
small_evens = []
for j in evens:
    if j < 10:
        small_evens.append(j)
# use small_evens and discard the evens array
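The whitelist loop can equivalently be written as a one-line list comprehension, which sidesteps the mutate-while-iterating problem entirely:

small_evens = [j for j in evens if j < 10]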
A couple of issues:
Python has zero-indexed sequences, so creditcard[::2] actually returns the first, third, etc. digits. Luhn's algorithm requires the even digits (assuming 1-indexing) to be doubled.
You shouldn't modify a list you are iterating over.
You can simplify, removing a lot of the list creations:
def creditCardValidation(creditcard):
    *creditcard, checkdigit = creditcard.replace(' ', '')
    total = 0
    for i, digit in enumerate(map(int, creditcard), 1):
        if i % 2 == 0:
            digit *= 2
        total += digit // 10   # Will be zero for single digits
        total += digit % 10
    return 9*total % 10 == int(checkdigit)
In []:
creditCardValidation('1234 5678 9101 1213')
Out[]:
True
In []:
creditCardValidation('4561 2612 1234 5464')
Out[]:
False
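A side note on why the final check works (assuming total is the Luhn sum of the payload digits): the valid check digit is (10 - total % 10) % 10, and since 9*total = 10*total - total, we have 9*total % 10 == (10 - total % 10) % 10, which is exactly what the return statement compares the check digit against.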
Suppose we need to transform an array of integers and then compute the sum.
The transformation is the following:
For each integer in the array, subtract the first subsequent integer that is less than or equal to its value.
For example, the array:
[6, 1, 3, 4, 6, 2]
becomes
[5, 1, 1, 2, 4, 2]
because
6 > 1 so 6 - 1 = 5
nothing <= to 1 so 1 remains 1
3 > 2 so 3 - 2 = 1
4 > 2 so 4 - 2 = 2
6 > 2 so 6 - 2 = 4
nothing <= to 2 so 2 remains 2
so we sum [5, 1, 1, 2, 4, 2] = 15
I already have the answer below but apparently there is a more optimal method. My answer runs in quadratic time complexity (nested for loop) and I can't figure out how to optimize it.
prices = [6, 1, 3, 4, 6, 2]
results = []
counter = 0
num_prices = len(prices)

for each_item in prices:
    flag = True
    counter += 1
    for each_num in range(counter, num_prices):
        if each_item >= prices[each_num] and flag == True:
            cost = each_item - prices[each_num]
            results.append(cost)
            flag = False
    if flag == True:
        results.append(each_item)

print(sum(results))
Can someone figure out how to answer this question faster than quadratic time complexity? I'm pretty sure this can be done only using 1 for loop but I don't know the data structure to use.
EDIT:
I might be mistaken... I just realized I could have added a break statement after flag = False, which would have saved a few unnecessary iterations. I took this question on a quiz, and half the test cases said there was a more optimal method. They could have been referring to the break statement, so maybe there isn't a faster method than the nested for loop.
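For reference, here is a sketch of the same nested-loop approach with that early break, rewritten with enumerate and a for/else in place of the flag (still O(n^2) in the worst case, just without the wasted iterations):

prices = [6, 1, 3, 4, 6, 2]
results = []
for i, item in enumerate(prices):
    for later in prices[i + 1:]:
        if later <= item:
            # First subsequent value <= item found: subtract it and stop scanning.
            results.append(item - later)
            break
    else:
        # No smaller-or-equal value follows, so the item stays unchanged.
        results.append(item)
print(sum(results))  # 15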
You can use a stack (implemented with a Python list). The algorithm is linear, since each element is compared at most twice (once with the next element, and once with the next number smaller than or equal to it).
def adjusted_total(prices):
    stack = []
    total_substract = i = 0
    n = len(prices)
    while i < n:
        if not stack or stack[-1] < prices[i]:
            stack.append(prices[i])
            i += 1
        else:
            stack.pop()
            total_substract += prices[i]
    return sum(prices) - total_substract
print(adjusted_total([6, 1, 3, 4, 6, 2]))
Output:
15
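For the example input, the stack evolves like this (a hand trace of the code above): 6 is pushed; 1 arrives, so 6 is popped and 1 is added to total_substract; 1, 3, 4 and 6 are pushed; 2 arrives and pops 6, 4 and 3, adding 2 to total_substract each time; finally 2 is pushed. The total subtracted is 1 + 2 + 2 + 2 = 7, and sum(prices) - 7 = 22 - 7 = 15.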
A simple way to do it with plain lists, albeit still quadratic:
p = [6, 1, 3, 4, 6, 2]
out = []
for i, val in zip(range(len(p)), p):
    try:
        out.append(val - p[[x <= val for x in p[i+1:]].index(True) + (i+1)])
    except:
        out.append(val)
sum(out)  # equals 15
A NumPy approach: I honestly don't have a lot of programming background, so I'm not sure whether it's linear or not (it depends on how the conditional masking works under the hood), but it's still interesting:
import numpy as np

p = np.array([6, 1, 3, 4, 6, 2])
out = np.array([])
for i, val in zip(range(len(p)), p):
    pp = p[i+1:]
    try:
        new = val - pp[pp <= val][0]
        out = np.append(out, new)
    except:
        out = np.append(out, p[i])
out.sum()  # equals 15
Design an algorithm that outputs the number of entries in A that are smaller than or equal to x. Your algorithm should run in O(n) time.
For example, in the array below, if my target was 5, then I would return 2 because 1 and 3 are smaller.
[1, 3, 5]
[2, 6, 9]
[3, 6, 10]
I gave it a shot with the code below, which is close to working, and I think it's O(n). The problem I see: if the target number isn't in the array, I'm not sure whether I'm returning the right value.
def findLessX(m, n, x):
    i = 0
    j = n-1
    while (i < n and j >= 0):
        if i == n or j == n:
            print("n not found")
            return (i+1)*(j+1)-1
        if (m[i][j] == x):
            print(" n Found at ", i, " ", j)
            return (i+1)*(j+1)-1
        elif (m[i][j] > x):
            print(" Moving left one column")
            j = j - 1
        elif (m[i][j] < x):
            print(" Moving down one row")
            i = i + 1
    print(" n Element not found so return max")
    return (i)*(j+1)
# Driver code
x = 5
n = 3
m = [[1, 3, 5],
     [2, 6, 9],
     [3, 6, 9]]
print("Count=", findLessX(m, n, x))
Inspect the count and the simple matrix above to see whether the solution works.
If both the columns and the rows are sorted in ascending order, then for any given border value a "stairs" line exists. It divides the matrix into two parts: entries higher than (or equal to) the border value, and entries lower than it. That line always goes left and down (if the traversal starts from the top-right corner).
[1, 3, |5]
   ____|
[2,| 6, 9]
[3,| 6, 10]
So scan from top right corner, find starting cell for that line on the right or top edge, then follow the line, counting elements being left to it.
Complexity is linear, because line never turns back.
P.S. I had hoped that you could write the code yourself from the clues given.
def countLessX(m, n, x):
    col = n-1
    count = 0
    for row in range(n):
        while (col >= 0) and (m[row][col] >= x):
            col = col - 1
        count = count + col + 1
        if col < 0:   # early stop for the loop
            break
    return count
# Driver code
n = 3
m = [[1, 3, 5],
     [2, 6, 9],
     [3, 6, 9]]
for x in range(11):
    print("x=", x, "Count=", countLessX(m, n, x))
x= 0 Count= 0
x= 1 Count= 0
x= 2 Count= 1
x= 3 Count= 2
x= 4 Count= 4
x= 5 Count= 4
x= 6 Count= 5
x= 7 Count= 7
x= 8 Count= 7
x= 9 Count= 7
x= 10 Count= 9
As mentioned in my comment, your problem is not solvable in O(n) for most matrices. Some other thoughts:
Why count j downwards?
i and j can never become n
Here is a solution in O(n) that perhaps fulfills your needs.
Here is the adapted code:
def findLessX(m, n, x):
    i = 0
    j = 0
    while True:
        if i+1 < n and m[i+1][j] < x:
            i = i+1
        elif j+1 < n and m[i][j+1] < x:
            j = j+1
        else:
            print("n found at ", i+1, " ", j+1, "or element not found so return max")
            return (i+1)*(j+1)
Both answers suggested above will result in O(n^2).
In the worst case, the algorithms will inspect all n^2 elements in the matrix.
This question already has answers here:
What is the most efficient way of finding all the factors of a number in Python?
(29 answers)
Closed 9 years ago.
This is the code I have right now. I can't get it to return the right results for the question.
def problem(n):
    myList = [1, n]
    for i in range(1, n):
        result = int(n ** .5)
        new = n/result
        i = i + 1
        myList.append(new)
    return myList
Factors of n are all numbers that divide into n evenly. So i is a factor of n if n % i == 0.
All you need to do is perform this test for each number from 1 to n, and if the condition is true, append that number to your list.
If you have issues as you start to write this code, update your question with what you tried.
Note that the above approach is not the most efficient way to find factors, but it seems to me like this is just an exercise for a beginning programmer so a naive approach is expected.
There are a few problems with your code. First of all, you do not need to increment i, as your for loop already does that. Secondly, using some basic math principles, you only need to go through the range of numbers up to the square root of your passed-in number. I will leave the second part for you to play and experiment with.
def problem(n):
    myList = []
    for i in range(1, n+1):
        if n % i == 0:
            myList.append(i)
    return myList
For a more advanced approach, you can try list comprehensions, which are very powerful but are usually better suited to smaller data sets.
def problem(n):
    return [x for x in range(1, n+1) if n % x == 0]
You only need to iterate from 1 to n ** 0.5 + 1, and your factors will be all i's, and n/i's you pick up along the way.
For example: factors of 10:
We only need to iterate from 1 to 3 (the range stops before int(10 ** 0.5 + 1) == 4):
i = 1 => 10 % 1 == 0, so factors: i = 1, 10 / i = 10
i = 2 => 10 % 2 == 0, so factors: i = 2, 10 / i = 5
i = 3 => 10 % 3 != 0, no factors
We don't need to go any further, the answer is 1, 2, 5, 10.
def problem(n):
    myList = []
    for i in xrange(1, int(n ** 0.5 + 1)):
        if n % i == 0:
            if (i != n/i):
                myList.append(i)
                myList.append(n / i)
            else:
                myList.append(i)
    return myList
Result:
>>> problem(10)
[1, 10, 2, 5]
>>> problem(12)
[1, 12, 2, 6, 3, 4]
>>> problem(77)
[1, 77, 7, 11]
>>> problem(4)
[1, 4, 2]
>>> problem(64)
[1, 64, 2, 32, 4, 16, 8]
>>> len(problem(10 ** 12))
169
use a list comprehension:
In [4]: num=120
In [5]: [x for x in range(2,int(num/2)+1) if num%x==0]
Out[5]: [2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60]
In [6]: num=121
In [7]: [x for x in range(2,int(num/2)+1) if num%x==0]
Out[7]: [11]