Is a list (potentially) divisible by another? - python

Problem
Say you have two lists A = [a_1, a_2, ..., a_n] and B = [b_1, b_2, ..., b_n] of integers. We say A is potentially-divisible by B if there is a permutation of B that makes a_i divisible by b_i for all i. The problem is then: is it possible to reorder (i.e. permute) B so that a_i is divisible by b_i for all i?
For example, if you have
A = [6, 12, 8]
B = [3, 4, 6]
Then the answer would be True, as B can be reordered to be B = [3, 6, 4] and then we would have that a_1 / b_1 = 2, a_2 / b_2 = 2, and a_3 / b_3 = 2, all of which are integers, so A is potentially-divisible by B.
As an example which should output False, we could have:
A = [10, 12, 6, 5, 21, 25]
B = [2, 7, 5, 3, 12, 3]
The reason this is False is that 25 and 5 are both in A, but the only divisor of either in B is the single 5, so one of them would be left without a divisor.
Approach
Obviously the straightforward approach would be to get all the permutations of B and see if one would satisfy potential-divisibility, something along the lines of:
import itertools

def is_potentially_divisible(A, B):
    perms = itertools.permutations(B)
    divisible = lambda ls: all(x % y == 0 for x, y in zip(A, ls))
    return any(divisible(perm) for perm in perms)
Question
What is the fastest way to know if a list is potentially-divisible by another list? Any thoughts? I was wondering if there is a clever way to do this with primes, but I couldn't come up with a solution.
Much appreciated!
Edit: It's probably irrelevant to most of you, but for the sake of completeness, I'll explain my motivation. In Group Theory there is a conjecture on finite simple groups: whether there is a bijection between the irreducible characters and the conjugacy classes of the group such that every character degree divides the corresponding class size. For example, for U6(4), here is what A and B would look like. Pretty big lists, mind you!

Build a bipartite graph structure - connect a[i] with all of its divisors from b[].
Then find a maximum matching and check whether it is a perfect matching (the number of edges in the matching equals the number of pairs, or twice that number if the matching dictionary maps both directions).
An arbitrarily chosen implementation of Kuhn's algorithm can be found here.
Update:
@Eric Duminil made a great concise Python implementation here.
This approach has polynomial complexity, from O(n^2) to O(n^3) depending on the chosen matching algorithm and the number of edges (division pairs), against factorial complexity for the brute-force algorithm.
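The linked code isn't reproduced here, so below is a minimal sketch of Kuhn's augmenting-path algorithm applied to this problem (my own illustration, not the linked implementation; helper names like try_kuhn are arbitrary):

def is_potentially_divisible(A, B):
    n = len(A)
    if n != len(B):
        return False
    # adj[i] lists the positions j such that B[j] divides A[i]
    adj = [[j for j in range(n) if A[i] % B[j] == 0] for i in range(n)]
    match = [-1] * n  # match[j] = index of A currently matched to B[j]

    def try_kuhn(i, visited):
        # try to match A[i], re-routing earlier matches along augmenting paths
        for j in adj[i]:
            if j not in visited:
                visited.add(j)
                if match[j] == -1 or try_kuhn(match[j], visited):
                    match[j] = i
                    return True
        return False

    return all(try_kuhn(i, set()) for i in range(n))

print(is_potentially_divisible([6, 12, 8], [3, 4, 6]))
# True
print(is_potentially_divisible([10, 12, 6, 5, 21, 25], [2, 7, 5, 3, 12, 3]))
# False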

Code
Building on @MBo's excellent answer, here's an implementation of bipartite graph matching using networkx.
import networkx as nx

def is_potentially_divisible(multiples, divisors):
    if len(multiples) != len(divisors):
        return False

    g = nx.Graph()
    g.add_nodes_from([('A', a, i) for i, a in enumerate(multiples)], bipartite=0)
    g.add_nodes_from([('B', b, j) for j, b in enumerate(divisors)], bipartite=1)

    edges = [(('A', a, i), ('B', b, j)) for i, a in enumerate(multiples)
             for j, b in enumerate(divisors) if a % b == 0]
    g.add_edges_from(edges)
    m = nx.bipartite.maximum_matching(g)
    return len(m) // 2 == len(multiples)
print(is_potentially_divisible([6, 12, 8], [3, 4, 6]))
# True
print(is_potentially_divisible([6, 12, 8], [3, 4, 3]))
# True
print(is_potentially_divisible([10, 12, 6, 5, 21, 25], [2, 7, 5, 3, 12, 3]))
# False
Notes
According to the documentation:
The dictionary returned by maximum_matching() includes a mapping for
vertices in both the left and right vertex sets.
It means that the returned dict should be twice as large as A and B.
The nodes are converted from
[10, 12, 6, 5, 21, 25]
to:
[('A', 10, 0), ('A', 12, 1), ('A', 6, 2), ('A', 5, 3), ('A', 21, 4), ('A', 25, 5)]
in order to avoid collisions between nodes from A and B. The id is also added in order to keep nodes distinct in case of duplicates.
Efficiency
The maximum_matching method uses the Hopcroft-Karp algorithm, which runs in O(n**2.5) in the worst case. The graph generation is O(n**2), so the whole method runs in O(n**2.5). It should work fine with large arrays. The permutation solution is O(n!) and won't be able to process arrays with 20 elements.
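If the networkx dependency is unwanted, scipy ships the same Hopcroft-Karp algorithm; here is a minimal sketch, assuming scipy >= 1.4 (the function name is_potentially_divisible_scipy and the matrix construction are mine, not from the answer above):

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def is_potentially_divisible_scipy(multiples, divisors):
    if len(multiples) != len(divisors):
        return False
    # biadjacency[i, j] is True when divisors[j] divides multiples[i]
    a = np.array(multiples).reshape(-1, 1)
    b = np.array(divisors).reshape(1, -1)
    biadjacency = csr_matrix(a % b == 0)
    # -1 marks an unmatched vertex; a perfect matching has none
    matching = maximum_bipartite_matching(biadjacency, perm_type='column')
    return bool((matching != -1).all())

print(is_potentially_divisible_scipy([6, 12, 8], [3, 4, 6]))
# True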
With diagrams
If you're interested in a diagram showing the best matching, you can mix matplotlib and networkx:
import networkx as nx
import matplotlib.pyplot as plt

def is_potentially_divisible(multiples, divisors):
    if len(multiples) != len(divisors):
        return False

    g = nx.Graph()
    l = [('l', a, i) for i, a in enumerate(multiples)]
    r = [('r', b, j) for j, b in enumerate(divisors)]
    g.add_nodes_from(l, bipartite=0)
    g.add_nodes_from(r, bipartite=1)
    edges = [(a, b) for a in l for b in r if a[1] % b[1] == 0]
    g.add_edges_from(edges)

    pos = {}
    pos.update((node, (1, index)) for index, node in enumerate(l))
    pos.update((node, (2, index)) for index, node in enumerate(r))

    m = nx.bipartite.maximum_matching(g)
    colors = ['blue' if m.get(a) == b else 'gray' for a, b in edges]
    nx.draw_networkx(g, pos=pos, arrows=False,
                     labels={n: n[1] for n in g.nodes()}, edge_color=colors)
    plt.axis('off')
    plt.show()
    return len(m) // 2 == len(multiples)
print(is_potentially_divisible([6, 12, 8], [3, 4, 6]))
# True
print(is_potentially_divisible([6, 12, 8], [3, 4, 3]))
# True
print(is_potentially_divisible([10, 12, 6, 5, 21, 25], [2, 7, 5, 3, 12, 3]))
# False
Here are the corresponding diagrams:

Since you're comfortable with math, I just want to add a gloss to the other answers. Terms to search for are shown in bold.
The problem is an instance of permutations with restricted positions, and there's a whole lot that can be said about those. In general, a zero-one NxN matrix M can be constructed where M[i][j] is 1 if and only if position j is allowed for the element originally at position i. The number of distinct permutations meeting all the restrictions is then the permanent of M (defined the same way as the determinant, except that all terms in the expansion are added rather than alternating in sign).
Alas, unlike the determinant, there is no known general way to compute the permanent faster than exponential in N. However, there are polynomial-time algorithms for determining whether or not the permanent is 0.
And that's where the answers you got start ;-) Here's a good account of how the "is the permanent 0?" question is answered efficiently by considering perfect matchings in bipartite graphs:
https://cstheory.stackexchange.com/questions/32885/matrix-permanent-is-0
So, in practice, it's unlikely you'll find any general approach faster than the one @Eric Duminil gave in their answer.
Note, added later: I should make that last part clearer. Given any "restricted permutation" matrix M, it's easy to construct integer "divisibility lists" corresponding to it. Therefore your specific problem is no easier than the general problem - unless perhaps there's something special about which integers may appear in your lists.
For example, suppose M is
0 1 1 1
1 0 1 1
1 1 0 1
1 1 1 0
View the rows as representing the first 4 primes, which are also the values in B:
B = [2, 3, 5, 7]
The first row then "says" that B[0] (= 2) can't divide A[0], but can divide A[1], A[2], and A[3]. And so on. By construction,
A = [3*5*7, 2*5*7, 2*3*7, 2*3*5]
B = [2, 3, 5, 7]
corresponds to M. And there are permanent(M) = 9 ways to permute B such that each element of A is divisible by the corresponding element of the permuted B.
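For small N this claim is easy to sanity-check by brute force; counting the divisibility-respecting permutations of B directly gives the permanent:

from itertools import permutations

A = [3*5*7, 2*5*7, 2*3*7, 2*3*5]
B = [2, 3, 5, 7]

# count permutations of B dividing A elementwise; equals permanent(M)
count = sum(1 for p in permutations(B)
            if all(a % b == 0 for a, b in zip(A, p)))
print(count)  # 9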

This is not the ultimate answer, but I think it might be something worthwhile. You can first list the factors (1 and the number itself included) of all the elements in the list: [(1,2,5,10),(1,2,3,6,12),(1,2,3,6),(1,5),(1,3,7,21),(1,5,25)]. The list we are looking for must have one of these factors in each position (to divide evenly).
Since some of those factors are absent from the list we are checking against ([2,7,5,3,12,3]), the candidate lists can be filtered further to:
[(2,5),(2,3,12),(2,3),(5),(3,7),(5)]
Here, 5 is needed in two places (where we don't have any other option at all), but we only have one 5, so we can pretty much stop here and say that the case is False.
Let's say we had [2,7,5,3,5,3] instead:
Then we would have options as such:
[(2,5),(2,3),(2,3),(5),(3,7),(5)]
Since 5 is needed in two places:
[(2),(2,3),(2,3),{5},(3,7),{5}], where {} signifies an ensured position.
Also, 2 is now ensured:
[{2},(2,3),(2,3),{5},(3,7),{5}]. Now, since 2 is taken, the two places of 3 are ensured:
[{2},{3},{3},{5},(3,7),{5}]. Now, of course, the 3s are taken and 7 is ensured:
[{2},{3},{3},{5},{7},{5}], which is still consistent with our list, so the case is True. Remember, we check for consistency with our list in every iteration, and we can readily break out when it fails.
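This elimination idea can be sketched as follows (my own illustration of the procedure above, hedged: singleton propagation alone is a heuristic and can remain inconclusive on harder instances, unlike the matching-based answers):

from collections import Counter

def propagate(A, B):
    available = Counter(B)
    # candidate divisors for each position of A
    candidates = [{b for b in available if a % b == 0} for a in A]
    ensured = [None] * len(A)
    changed = True
    while changed:
        changed = False
        for i, cand in enumerate(candidates):
            if ensured[i] is None and len(cand) == 1:
                (v,) = cand
                available[v] -= 1
                ensured[i] = v
                changed = True
                if available[v] == 0:
                    # the value is used up: drop it everywhere else
                    for j, c in enumerate(candidates):
                        if ensured[j] is None:
                            c.discard(v)
        # an unassigned position with no candidates left means False
        if any(ensured[j] is None and not c for j, c in enumerate(candidates)):
            return False
    return True  # consistent so far (inconclusive in general)

print(propagate([10, 12, 6, 5, 21, 25], [2, 7, 5, 3, 12, 3]))  # False
print(propagate([10, 12, 6, 5, 21, 25], [2, 7, 5, 3, 5, 3]))   # True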

You can try this:
import itertools
def potentially_divisible(A, B):
    A = itertools.permutations(A, len(A))
    return len([i for i in A if all(c % d == 0 for c, d in zip(i, B))]) > 0
l1 = [6, 12, 8]
l2 = [3, 4, 6]
print(potentially_divisible(l1, l2))
Output:
True
Another example:
l1 = [10, 12, 6, 5, 21, 25]
l2 = [2, 7, 5, 3, 12, 3]
print(potentially_divisible(l1, l2))
Output:
False

Related

Find total number of intersections between a given set of ranges in python

What is the best way to count the number of intersections in a given set of ranges?
For ex:
consider a list of range pairs[start,stop]
[[1,5], [3,7], [9,11], [6,8]]
Here there are 2 intersections in total:
[1,5] intersects with [3,7]
and [3,7] intersects with [6,8]
This problem can be done in O(n log n) time; of course you can do it in O(n^2), but it sounds like you want it time-optimal.
I'd call these interval overlaps, and it's a classic; you'll find variants of it in many interview books. Here's how to do it:
Sort the items by their starts.
Now step through the items, maintaining a min-heap of end points. Building the heap costs O(n log n), but that doesn't matter; we already paid that for the sort. At each new start you know how many of the previously seen intervals are still open, and each of those overlaps the new one. Remove the intervals whose ends you've already run past.
It ends up O(n log n) due to the sort and heap operations; it doesn't matter that you're then stepping through in O(n) time.
Example:
Sort to [1, 5], [3, 7], [6, 8], [9, 11]
Store a heap item that says it ends at 5.
Get to the second item. Heap is size 1, so add 1 to the overlap count. Add a 7 to the heap.
Get to the third item. Drop the 5 from the heap. Leave the 7, so add 1 again. Add the 8 to the heap.
Get to the 4th item, Drop the 7 and the 8, leaving an empty heap. Result is 2.
import heapq
import operator

def mysol(v):
    overlaps = 0
    minheap = []
    vsorted = sorted(v, key=operator.itemgetter(0))
    for i in range(len(vsorted)):
        # drop intervals that ended before this one starts
        while len(minheap) > 0 and minheap[0] < vsorted[i][0]:
            heapq.heappop(minheap)
        # every still-open interval overlaps the current one
        overlaps += len(minheap)
        heapq.heappush(minheap, vsorted[i][1])
    return overlaps
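A quick check with the example from the question:

print(mysol([[1, 5], [3, 7], [9, 11], [6, 8]]))  # 2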
Try this simple one-liner, which uses a list comprehension and itertools.combinations:
Iterate over combinations of lists
Turn them into ranges and then sets of those ranges
Take an intersection
Only return those with intersection = True
import itertools
r = [[1,5], [3,7], [9,11], [6,8]]
overlapping_ranges = [i for i in itertools.combinations(r, 2)
                      if set(range(*i[0])).intersection(set(range(*i[1])))]
print('Count of overlapping ranges:',len(overlapping_ranges))
print(overlapping_ranges)
Count of overlapping ranges: 2
[([1, 5], [3, 7]), ([3, 7], [6, 8])]
A modification of the algorithm for finding the maximum number of overlaps can compute the number of overlaps instead.
Approach
The idea is to store the endpoints in a new list of pairs, tagged with the characters 'x' and 'y' to distinguish starts from ends.
Sort the list.
Traverse the list; if an 'x' coordinate is encountered, it means a new range has opened, so increment the open-range count.
If the count is greater than 1, the new range overlaps every range already open,
so add (count - 1) to the number of overlaps.
If a 'y' coordinate is encountered, it means a range has ended, so decrement the open-range count.
The result is the number of overlaps.
The algorithm complexity is O(n*log(n)) (from the sort). Note that since 'x' sorts before 'y', ranges that merely touch at an endpoint are counted as overlapping.
Code
def overlap(v):
    # ans stores the number of overlapping pairs
    ans = 0
    count = 0
    data = []

    # storing the x and y coordinates in the data vector
    for i in range(len(v)):
        # pushing the x coordinate
        data.append([v[i][0], 'x'])
        # pushing the y coordinate
        data.append([v[i][1], 'y'])

    # sorting of ranges
    data = sorted(data)

    # traverse the data vector to count the number of overlaps
    for i in range(len(data)):
        # if x occurs, a new range has opened, so we increase count
        if data[i][1] == 'x':
            count += 1
            if count > 1:
                # the new range intersects count - 1 existing ranges
                ans += (count - 1)
        # if y occurs, a range has ended, so we decrease count
        if data[i][1] == 'y':
            count -= 1

    # return the number of overlaps
    return ans
Tests
v = [ [ 1, 2 ], [ 2, 4 ], [ 3, 6 ] ]
print(overlap(v)) # Output 2
v = [[1,5], [3,7], [9,11], [6,8]]
print(overlap(v)) # Output 2
v = [[1,5], [3,7], [9,11], [6,8], [1, 11]]
print(overlap(v)) # Output 6
v = [ [ 1, 3 ], [ 2, 7 ], [3, 5], [4, 6] ]
print(overlap(v)) # Output 5
Using the intspan module, a solution could be:
>>> from itertools import combinations
>>> from intspan import intspan
>>> L = [[1,5], [3,7], [9,11], [6,8]]
>>> for s1, s2 in combinations(L, 2):
...     if intspan.from_range(*s1) & intspan.from_range(*s2):
...         print(s1, 'intersects', s2)
Prints:
[1, 5] intersects [3, 7]
[3, 7] intersects [6, 8]
The lazy man's way would be to just generate the ranges and check for set intersection. This is terrible for memory usage and not extendable to non-integers, but it is short:
import itertools

def range_intersection(a, b):
    return len(set(range(*a)) & set(range(*b))) > 0

data = [[1,5], [3,7], [9,11], [6,8]]
for a, b in itertools.combinations(data, 2):
    if range_intersection(a, b):
        print(a, b)
Here set(range(*a)) unpacks the start and end arguments into range, then makes a set so it can be intersected with another range; we then check whether the intersection is non-empty (there are common elements). itertools.combinations simplifies checking all pairs of data together.
The "portion" module might be helpful when you work with segments.
import portion as P
from itertools import combinations

li = [[1,5], [3,7], [9,11], [6,8]]
pairs = [el for el in combinations(li, 2)
         if P.open(*el[0]) & P.open(*el[1]) != P.empty()]
print(pairs)
The output is a list of pairs that intersect:
[([1, 5], [3, 7]), ([3, 7], [6, 8])]

Replace climbing sequence with its average

I have a random list like this
X = [0, 1, 5, 6, 7, 10, 15]
and need to find and replace every climbing sequence with its average.
In the end it should look like this:
X = [0, 6, 10, 15] #the 0 and 1 to 0; and the 5,6,7 to 6
I tried to find a sequence by checking that each value and the next differ by 1, like this:
y = 0
z = []
while X[y + 1] - X[y] == 1:
    z.append(X[y])
    y = y + 1
And now I don't know how to delete, for example, 5, 6 and 7, and replace them with the average 6.
You can use itertools.groupby on the list with a key function that returns each item's difference with an incremental counter:
from itertools import groupby, count
from statistics import mean
X = [0, 1, 5, 6, 7, 10, 15]
c = count()
X = [int(mean(g)) for _, g in groupby(X, key=lambda i: i - next(c))]
X becomes:
[0, 6, 10, 15]
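To see why this key works: inside a climbing run both the item and the counter advance by 1, so their difference stays constant, and the difference jumps exactly when a new run starts. For example:

from itertools import count

c = count()
print([i - next(c) for i in [0, 1, 5, 6, 7, 10, 15]])
# [0, 0, 3, 3, 3, 5, 9] -> four distinct keys mark the four groups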
You can iterate over the list, grouping each climbing sequence in place, and then take the mean of each group:
>>> res = [[X[0]]]
>>> for i in range(1, len(X)):
...     if X[i] == X[i-1] + 1:
...         res[-1].append(X[i])
...     else:
...         res.append([X[i]])
>>> res
[[0, 1], [5, 6, 7], [10], [15]]
>>> [int(sum(l)/len(l)) for l in res]
[0, 6, 10, 15]
Here's a starting technique: make a new list that's the difference of adjacent elements in the list:
diff = [X[i] - X[i-1] for i in range(1, len(X)) ]
There are more "Pythonic" ways to do this, but I want to make sure this is accessible to newer programmers.
You now have diff as
[1, 4, 1, 1, 3, 5]
Where you have a 1 in diff, you have a climbing pair in X. Iterate through diff to find runs of consecutive 1 values. Where you find a run, take the slice of X that corresponds to those 1 values; the mean of that slice is the average of its first and last elements (its middle element, when the run has odd length).
If the value is not 1, then you simply take the corresponding element of X, as you've been doing.
append the identified values to z, and there's your desired result.
Can you take it from there?
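If you want to check your result afterwards, here is one possible assembly of the hints above (a sketch only; it truncates each run's mean toward zero, matching the expected output):

X = [0, 1, 5, 6, 7, 10, 15]
diff = [X[i] - X[i-1] for i in range(1, len(X))]

z = []
start = 0
for i in range(len(X)):
    # a run ends at the last element or where the next difference isn't 1
    if i == len(X) - 1 or diff[i] != 1:
        run = X[start:i + 1]
        z.append(sum(run) // len(run))
        start = i + 1
print(z)  # [0, 6, 10, 15]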
Not really an answer to the question, which is a fairly basic CS 101 problem that people should try to figure out themselves, but what I noticed about the nice answer of @blhsing was that it appeared fairly slow. I found that mean() is incredibly slow!
from itertools import groupby, count
from statistics import mean
from timeit import timeit
def generate_1step_seq1(xs):
    result = []
    n = 0
    while n < len(xs):
        # sequences with step of 1 only
        if not result or xs[n] == result[-1] + 1:
            result += [xs[n]]
        else:
            # int result, rounding down
            yield sum(result) // len(result)
            result = [xs[n]]
        n += 1
    if result:
        yield sum(result) // len(result)

def generate_1step_seq2(xs):
    c = count()
    return [int(sum(xs) // len(xs)) for xs in [list(g) for _, g in groupby(xs, key=lambda i: i - next(c))]]

def generate_1step_seq3(xs):
    c = count()
    return [int(mean(g)) for _, g in groupby(xs, key=lambda i: i - next(c))]
values = [0, 1, 5, 6, 7, 10, 15]
print(list(generate_1step_seq1(values)))
print(generate_1step_seq2(values))
print(generate_1step_seq3(values))
print(timeit(lambda: list(generate_1step_seq1(values)), number=10000))
print(timeit(lambda: list(generate_1step_seq2(values)), number=10000))
print(timeit(lambda: list(generate_1step_seq3(values)), number=10000))
Initially I figured that was probably due to the tiny list size, but even for large lists, mean() is horribly slow. Does anyone happen to know why? It appears to be due to the very safe nature of statistics' _sum, which tries to avoid float rounding errors.
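As far as I can tell, mean() funnels everything through statistics._sum, which normalizes each value to an exact ratio to avoid rounding errors. On Python 3.8+ there is statistics.fmean, the fast float path, which is worth benchmarking as a fourth variant:

from statistics import fmean

# fmean computes a plain float average without the exact-rational machinery
print(fmean([5, 6, 7]))  # 6.0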

Cyclic rotation of array explanation

Premise: my question is not a duplicate of Cyclic rotation in Python. I am not asking how to solve the problem or why my solution does not work; I have already solved it and it works. My question is about another particular solution to the same problem that I found, because I would like to understand the logic behind that solution.
I came across the following cyclic array rotation problem (below the sources):
Cyclic rotation in Python
https://app.codility.com/programmers/lessons/2-arrays/cyclic_rotation/
An array A consisting of N integers is given. Rotation of the array means that each element is shifted right by one index, and the last element of the array is moved to the first place. For example, the rotation of array A = [3, 8, 9, 7, 6] is [6, 3, 8, 9, 7] (elements are shifted right by one index and 6 is moved to the first place).
The goal is to rotate array A K times; that is, each element of A will be shifted to the right K times.
which I managed to solve with the following Python code:
def solution(A, K):
    N = len(A)
    if N < 1 or N == K:
        return A
    K = K % N
    for x in range(K):
        tmp = A[N - 1]
        for i in range(N - 1, 0, -1):
            A[i] = A[i - 1]
        A[0] = tmp
    return A
Then, on the following website https://www.martinkysel.com/codility-cyclicrotation-solution/, I have found the following fancy solution to the same problem:
def reverse(arr, i, j):
    for idx in xrange((j - i + 1) / 2):
        arr[i+idx], arr[j-idx] = arr[j-idx], arr[i+idx]

def solution(A, K):
    l = len(A)
    if l == 0:
        return []
    K = K % l
    reverse(A, l - K, l - 1)
    reverse(A, 0, l - K - 1)
    reverse(A, 0, l - 1)
    return A
Could someone explain to me how this particular solution works? (The author does not explain it on his website.)
My solution does not perform quite well for large A and K, where K < N, e.g.:
A = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] * 1000
K = 1000
expectedResult = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] * 1000
res = solution(A, K) # 1455.05908203125 ms = almost 1.4 seconds
Because for K < N, my code has a time complexity of O(N * K), where N is the length of the array.
For large K and small N (K > N), my solution performs well thanks to the modulo operation K = K % N:
A = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
K = 999999999999999999999999
expectedRes = [2, 3, 4, 5, 6, 7, 8, 9, 10, 1]
res = solution(A, K) # 0.0048828125 ms, because K is torn down to 9 thanks to K = K % N
The other solution, on the other hand, performs well in all cases, even when N > K, and has a complexity of O(N).
What is the logic behind that solution?
Thank you for the attention.
Let me first discuss the base case, with K < N. The idea is to split the array into two parts, A and B, where A is the first N-K elements and B is the last K elements. The algorithm reverses A and B separately and finally reverses the full array (with the two parts each already reversed). To manage the case with K > N, note that rotating the array N times yields the original array again, so we can just use the modulo operator to find where to split the array (performing only the genuinely useful rotation and avoiding useless shifting).
Graphical Example
A graphical step-by-step example can help in understanding the concept better. Note that:
The bold line indicates the splitting point of the array (K = 3 in this example);
The red arrays indicate the input and the expected output.
Starting from the input array:
note that what we want at the front of the final output is the last 3 letters reversed; for now, let's reverse them in place (first reverse of the algorithm):
now reverse the first N-K elements (second reverse of the algorithm):
we already have the solution, but in the opposite direction; we can fix that by reversing the whole array (third and last reverse of the algorithm):
Here is the final output: the original array cyclically rotated with K = 3.
Code Example
Let's also give another step-by-step example, with Python code. Starting from:
A = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
K = 22
N = len(A)
we find the splitting index:
K = K%N
#2
because, in this case, the first 20 shifts would be useless. Now we reverse the last K (2) elements of the original array:
reverse(A, N-K, N-1)
# [1, 2, 3, 4, 5, 6, 7, 8, 10, 9]
as you can see, 9 and 10 have been swapped. Now we reverse the first N-K elements:
reverse(A, 0, N-K-1)
# [8, 7, 6, 5, 4, 3, 2, 1, 10, 9]
And, finally, we reverse the full array:
reverse(A, 0, N-1)
# [9, 10, 1, 2, 3, 4, 5, 6, 7, 8]
Note that reversing an array has time complexity O(N).
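Since the quoted implementation is Python 2 (xrange and integer /), here is a straightforward Python 3 port of the three-reversal solution:

def reverse(arr, i, j):
    for idx in range((j - i + 1) // 2):
        arr[i + idx], arr[j - idx] = arr[j - idx], arr[i + idx]

def solution(A, K):
    l = len(A)
    if l == 0:
        return []
    K = K % l
    reverse(A, l - K, l - 1)
    reverse(A, 0, l - K - 1)
    reverse(A, 0, l - 1)
    return A

print(solution([3, 8, 9, 7, 6], 1))  # [6, 3, 8, 9, 7]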
Here is a very simple solution in Ruby. (scored 100% in codility)
Remove the last element in the array, and insert it in the beginning.
def solution(a, k)
  if a.empty?
    return []
  end
  modified = a
  1.upto(k) do
    last_element = modified.pop
    modified = modified.unshift(last_element)
  end
  return modified
end

python, select random #k numbers from (1, n) excluding numbers in list

For a given exclude_list = [3, 5, 8], n = 30, k = 5,
I'd like to pick 5 (k) random numbers between 1 and 30,
but I should not pick numbers in the exclude_list.
Suppose exclude_list and n could both potentially be large.
When there's no need for exclusion, it is easy to get k random samples
rand_numbers = sample(range(1, n), k)
So to get the answer, I could do
sample(set(range(1, n)) - set(exclude_numbers), k)
I read that range keeps one number in memory at a time.
I'm not quite sure how it affects the two lines above.
The first question is: does the following code put all n numbers in memory, or does it produce one number at a time?
rand_numbers = sample(range(1, n), k)
The second question is: if the above code indeed produces one number at a time, can I do something similar with the additional constraint of the exclusion list?
random.sample notes in its docstring:
To choose a sample in a range of integers, use range as an argument.
This is especially fast and space efficient for sampling from a
large population: sample(range(10000000), 60)
I can test this on my machine:
In [11]: sample(range(100000000), 3)
Out[11]: [70147105, 27647494, 41615897]
In [12]: list(range(100000000)) # crash/takes a long time
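The underlying reason is that a range object only stores start, stop, and step, so its footprint is constant no matter the length; one 64-bit CPython build reports (exact sizes vary by build):

In [13]: import sys

In [14]: sys.getsizeof(range(100000000))
Out[14]: 48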
One way to sample with an exclude list efficiently is to use the same range trick but "hop over" the exclusions; we can do this in O(k * log(len(exclude_list))) with the bisect module:
import bisect
import random

def sample_excluding(n, k, excluding):
    # if we assume excluding is unique and sorted we can avoid the set usage
    skips = [j - i for i, j in enumerate(sorted(set(excluding)))]
    s = random.sample(range(n - len(skips)), k)
    return [i + bisect.bisect_right(skips, i) for i in s]
and we can see it working:
In [21]: sample_excluding(10, 3, [2, 4, 7])
Out[21]: [6, 3, 9]
In [22]: sample_excluding(10, 3, [1, 2, 8])
Out[22]: [0, 4, 3]
In [23]: sample_excluding(10, 6, [1, 2, 8])
Out[23]: [0, 7, 9, 6, 3, 5]
Specifically we've done this without using O(n) memory:
In [24]: sample_excluding(10000000, 6, [1, 2, 8])
Out[24]: [1495143, 270716, 9490477, 2570599, 8450517, 8283229]

Transforming nested Python loops into list comprehensions

I've started working on some Project Euler problems, and have solved number 4 with a simple brute force solution:
def mprods(a, b):
    c = range(a, b)
    f = []
    for d in c:
        for e in c:
            f.append(d*e)
    return f
max([z for z in mprods(100,1000) if str(z)==(''.join([str(z)[-i] for i in range(1,len(str(z))+1)]))])
After solving, I tried to make it as compact as possible, and came up with that horrible bottom line!
Not to leave something half-done, I am trying to condense the mprods function into a list comprehension. So far, I've come up with these attempts:
[d*e for d,e in (range(a,b), range(a,b))]
Obviously completely on the wrong track. :-)
[d*e for x in [e for e in range(1,5)] for d in range(1,5)]
This gives me [4, 8, 12, 16, 4, 8, 12, 16, 4, 8, 12, 16, 4, 8, 12, 16], where I expect
[1, 2, 3, 4, 2, 4, 6, 8, 3, 6, 9, 12, 4, 8, 12, 16] or similar.
Any Pythonistas out there that can help? :)
c = range(a, b)
print([d * e for d in c for e in c])
from itertools import product

def palindrome(i):
    return str(i) == str(i)[::-1]

x = xrange(900, 1000)
max(a*b for (a, b) in product(x, x) if palindrome(a*b))
xrange(900,1000) is like range(900,1000) but instead of returning a list it returns an object that generates the numbers in the range on demand. For looping, this is slightly faster than range() and more memory efficient.
product(xrange(900,1000),xrange(900,1000)) gives the Cartesian product of the input iterables. It is equivalent to nested for-loops. For example, product(A, B) returns the same as: ((x,y) for x in A for y in B). The leftmost iterators are in the outermost for-loop, so the output tuples cycle in a manner similar to an odometer (with the rightmost element changing on every iteration).
product('ab', range(3)) --> ('a',0) ('a',1) ('a',2) ('b',0) ('b',1) ('b',2)
product((0,1), (0,1), (0,1)) --> (0,0,0) (0,0,1) (0,1,0) (0,1,1) (1,0,0) ...
str(i)[::-1] is slice shorthand to reverse a string.
Note how everything is wrapped in a generator expression, a high performance, memory efficient generalization of list comprehensions and generators.
Also note that the largest palindrome made from the product of two 2-digit numbers is made from 91 and 99, two numbers in range(90, 100). Extrapolating to 3-digit numbers, you can use range(900, 1000).
I think you'll like this one-liner (formatted for readability):
max(z for z in (d*e
                for d in xrange(100, 1000)
                for e in xrange(100, 1000))
    if str(z) == str(z)[::-1])
Or slightly changed:
c = range(100, 1000)
max(z for z in (d*e for d in c for e in c) if str(z) == str(z)[::-1])
Wonder how many parens that would be in Lisp...
