I tried to write code to solve the standard Integer Partition problem (Wikipedia). The code I wrote was a mess. I need an elegant solution to solve the problem, because I want to improve my coding style. This is not a homework question.
A smaller and faster function than Nolen's:
def partitions(n, I=1):
    yield (n,)
    for i in range(I, n//2 + 1):
        for p in partitions(n-i, i):
            yield (i,) + p
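For example, a quick check of what the generator yields (in the order produced) for n = 4:

>>> list(partitions(4))
[(4,), (1, 3), (1, 1, 2), (1, 1, 1, 1), (2, 2)]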
Let's compare them:
In [10]: %timeit -n 10 r0 = nolen(20)
1.37 s ± 28.7 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [11]: %timeit -n 10 r1 = list(partitions(20))
979 µs ± 82.9 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [13]: sorted(map(sorted, r0)) == sorted(map(sorted, r1))
Out[13]: True
Looks like it's roughly 1400 times faster for n = 20.
Anyway, it's still far from accel_asc:
def accel_asc(n):
    a = [0 for i in range(n + 1)]
    k = 1
    y = n - 1
    while k != 0:
        x = a[k - 1] + 1
        k -= 1
        while 2 * x <= y:
            a[k] = x
            y -= x
            k += 1
        l = k + 1
        while x <= y:
            a[k] = x
            a[l] = y
            yield a[:k + 2]
            x += 1
            y -= 1
        a[k] = x + y
        y = x + y - 1
        yield a[:k + 1]
My partitions function is not only slower than accel_asc, but it also requires much more memory (though it is apparently much easier to remember):
In [18]: %timeit -n 5 r2 = list(accel_asc(50))
114 ms ± 1.04 ms per loop (mean ± std. dev. of 7 runs, 5 loops each)
In [19]: %timeit -n 5 r3 = list(partitions(50))
527 ms ± 8.86 ms per loop (mean ± std. dev. of 7 runs, 5 loops each)
In [24]: sorted(map(sorted, r2)) == sorted(map(sorted, r3))
Out[24]: True
You can find other versions on ActiveState: Generator For Integer Partitions (Python Recipe).
I use Python 3.6.1 and IPython 6.0.0.
While this answer is fine, I'd recommend skovorodkin's answer.
>>> def partition(number):
...     answer = set()
...     answer.add((number, ))
...     for x in range(1, number):
...         for y in partition(number - x):
...             answer.add(tuple(sorted((x, ) + y)))
...     return answer
...
>>> partition(4)
set([(1, 3), (2, 2), (1, 1, 2), (1, 1, 1, 1), (4,)])
If you want all permutations (i.e. both (1, 3) and (3, 1)), change answer.add(tuple(sorted((x, ) + y))) to answer.add((x, ) + y).
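A minimal sketch of that variant (renamed composition here, since it then yields all 2**(n - 1) ordered compositions rather than partitions):

>>> def composition(number):
...     answer = set()
...     answer.add((number, ))
...     for x in range(1, number):
...         for y in composition(number - x):
...             answer.add((x, ) + y)
...     return answer
...
>>> sorted(composition(4))
[(1, 1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 3), (2, 1, 1), (2, 2), (3, 1), (4,)]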
I've compared the solutions with perfplot (a little project of mine for such purposes) and found that Nolen's top-voted answer is also the slowest.
Both functions from skovorodkin's answer are much faster. (Note the log scale of the plot.)
To generate the plot:
import perfplot
import collections

def nolen(number):
    answer = set()
    answer.add((number,))
    for x in range(1, number):
        for y in nolen(number - x):
            answer.add(tuple(sorted((x,) + y)))
    return answer

def skovorodkin(n):
    return set(skovorodkin_yield(n))

def skovorodkin_yield(n, I=1):
    yield (n,)
    for i in range(I, n // 2 + 1):
        for p in skovorodkin_yield(n - i, i):
            yield (i,) + p

def accel_asc(n):
    return set(accel_asc_yield(n))

def accel_asc_yield(n):
    a = [0 for i in range(n + 1)]
    k = 1
    y = n - 1
    while k != 0:
        x = a[k - 1] + 1
        k -= 1
        while 2 * x <= y:
            a[k] = x
            y -= x
            k += 1
        l = k + 1
        while x <= y:
            a[k] = x
            a[l] = y
            yield tuple(a[: k + 2])
            x += 1
            y -= 1
        a[k] = x + y
        y = x + y - 1
        yield tuple(a[: k + 1])

def mct(n):
    partitions_of = []
    partitions_of.append([()])
    partitions_of.append([(1,)])
    for num in range(2, n + 1):
        ptitions = set()
        for i in range(num):
            for partition in partitions_of[i]:
                ptitions.add(tuple(sorted((num - i,) + partition)))
        partitions_of.append(list(ptitions))
    return partitions_of[n]

perfplot.show(
    setup=lambda n: n,
    kernels=[nolen, mct, skovorodkin, accel_asc],
    n_range=range(1, 17),
    logy=True,
    # https://stackoverflow.com/a/7829388/353337
    equality_check=lambda a, b: collections.Counter(set(a))
    == collections.Counter(set(b)),
    xlabel="n",
)
I needed to solve a similar problem, namely the partition of an integer n into d nonnegative parts, with permutations. For this, there's a simple recursive solution (see here):
def partition(n, d, depth=0):
    if d == depth:
        return [[]]
    return [
        item + [i]
        for i in range(n+1)
        for item in partition(n-i, d, depth=depth+1)
    ]

# extend with n-sum(entries)
n = 5
d = 3
lst = [[n-sum(p)] + p for p in partition(n, d-1)]
print(lst)
Output:
[
[5, 0, 0], [4, 1, 0], [3, 2, 0], [2, 3, 0], [1, 4, 0],
[0, 5, 0], [4, 0, 1], [3, 1, 1], [2, 2, 1], [1, 3, 1],
[0, 4, 1], [3, 0, 2], [2, 1, 2], [1, 2, 2], [0, 3, 2],
[2, 0, 3], [1, 1, 3], [0, 2, 3], [1, 0, 4], [0, 1, 4],
[0, 0, 5]
]
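As a sanity check (using math.comb, which requires Python 3.8+), the number of such d-part lists is the binomial coefficient C(n + d - 1, d - 1), which matches the 21 lists above:

from math import comb

assert len(lst) == comb(n + d - 1, d - 1) == 21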
I'm a bit late to the game, but I can offer a contribution which might qualify as more elegant in a few senses:
def partitions(n, m = None):
    """Partition n with a maximum part size of m. Yield non-increasing
    lists in decreasing lexicographic order. The default for m is
    effectively n, so the second argument is not needed to create the
    generator unless you do want to limit part sizes.
    """
    if m is None or m >= n: yield [n]
    for f in range(n-1 if (m is None or m >= n) else m, 0, -1):
        for p in partitions(n-f, f): yield [f] + p
Only 3 lines of code (not counting the docstring). It yields the partitions in decreasing lexicographic order and optionally allows imposing a maximum part size.
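For example, limiting the part size to 2:

>>> list(partitions(6, 2))
[[2, 2, 2], [2, 2, 1, 1], [2, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1]]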
I also have a variation on the above for partitions with a given number of parts:
def sized_partitions(n, k, m = None):
    """Partition n into k parts with a max part of m.
    Yield non-increasing lists. m not needed to create generator.
    """
    if k == 1:
        yield [n]
        return
    for f in range(n-k+1 if (m is None or m > n-k+1) else m, (n-1)//k, -1):
        for p in sized_partitions(n-f, k-1, f): yield [f] + p
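For instance, the partitions of 6 into exactly 3 parts:

>>> list(sized_partitions(6, 3))
[[4, 1, 1], [3, 2, 1], [2, 2, 2]]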
After composing the above, I ran across a solution I had created almost 5 years ago, but which I had forgotten about. Besides a maximum part size, this one offers the additional feature that you can impose a maximum length (as opposed to a specific length). FWIW:
def partitions(sum, max_val=100000, max_len=100000):
    """ generator of partitions of sum with limits on values and length """
    # Yields lists in decreasing lexicographical order.
    # To get any length, omit 3rd arg.
    # To get all partitions, omit 2nd and 3rd args.
    if sum <= max_val:  # Can start with a singleton.
        yield [sum]
    # Must have first*max_len >= sum; i.e. first >= sum/max_len.
    for first in range(min(sum-1, max_val), max(0, (sum-1)//max_len), -1):
        for p in partitions(sum-first, first, max_len-1):
            yield [first]+p
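A small check of the length limit (the keyword argument is used here just for readability):

>>> list(partitions(5, max_len=2))
[[5], [4, 1], [3, 2]]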
Much quicker than the accepted response, and not bad looking either. The accepted response does a lot of the same work repeatedly because it calculates the partitions of lower integers multiple times. For example, when n = 22 the difference is 12.7 seconds against 0.0467 seconds.
def partitions_dp(n):
    partitions_of = []
    partitions_of.append([()])
    partitions_of.append([(1,)])
    for num in range(2, n+1):
        ptitions = set()
        for i in range(num):
            for partition in partitions_of[i]:
                ptitions.add(tuple(sorted((num - i, ) + partition)))
        partitions_of.append(list(ptitions))
    return partitions_of[n]
The code is essentially the same except we save the partitions of smaller integers so we don't have to calculate them again and again.
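A quick check of the output format (the returned list is in arbitrary order, so it is sorted here):

>>> sorted(partitions_dp(5))
[(1, 1, 1, 1, 1), (1, 1, 1, 2), (1, 1, 3), (1, 2, 2), (1, 4), (2, 3), (5,)]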
Here is a recursive function that uses a stack in which we store the numbers of the partition in increasing order.
It is fast enough and very intuitive.
# get the partitions of an integer
Stack = []
def Partitions(remainder, start_number = 1):
    if remainder == 0:
        print(" + ".join(Stack))
    else:
        for nb_to_add in range(start_number, remainder+1):
            Stack.append(str(nb_to_add))
            Partitions(remainder - nb_to_add, nb_to_add)
            Stack.pop()
When the stack is full (i.e. the sum of its elements equals the number we want to partition), we print it,
remove its last value, and test the next possible value to be stored in the stack. When all the next values have been tested, we pop the last value of the stack again and go back to the last calling function.
Here is an example of the output (with 8):
Partitions(8)
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1
1 + 1 + 1 + 1 + 1 + 1 + 2
1 + 1 + 1 + 1 + 1 + 3
1 + 1 + 1 + 1 + 2 + 2
1 + 1 + 1 + 1 + 4
1 + 1 + 1 + 2 + 3
1 + 1 + 1 + 5
1 + 1 + 2 + 2 + 2
1 + 1 + 2 + 4
1 + 1 + 3 + 3
1 + 1 + 6
1 + 2 + 2 + 3
1 + 2 + 5
1 + 3 + 4
1 + 7
2 + 2 + 2 + 2
2 + 2 + 4
2 + 3 + 3
2 + 6
3 + 5
4 + 4
8
The structure of the recursive function is easy to understand:
remainder corresponds to the value of the remaining number we want to partition.
start_number corresponds to the smallest number allowed as the next part of the partition; its default value is 1.
If we wanted to return the result in a list and get the number of partitions, we could do this:
def Partitions2_main(nb):
    global counter, PartitionList, Stack
    counter, PartitionList, Stack = 0, [], []
    Partitions2(nb)
    return PartitionList, counter

def Partitions2(remainder, start_number = 1):
    global counter, PartitionList, Stack
    if remainder == 0:
        PartitionList.append(list(Stack))
        counter += 1
    else:
        for nb_to_add in range(start_number, remainder+1):
            Stack.append(nb_to_add)
            Partitions2(remainder - nb_to_add, nb_to_add)
            Stack.pop()
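For example, a quick look at the pair of values it returns:

>>> plist, count = Partitions2_main(4)
>>> plist
[[1, 1, 1, 1], [1, 1, 2], [1, 3], [2, 2], [4]]
>>> count
5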
Lastly, a big advantage of the Partitions function shown above is that it adapts very easily to find all the compositions of a natural number (two compositions can consist of the same numbers, but the order differs):
we just have to drop the start_number parameter and set it to 1 in the for loop.
# get the compositions of an integer
Stack = []
def Compositions(remainder):
    if remainder == 0:
        print(" + ".join(Stack))
    else:
        for nb_to_add in range(1, remainder+1):
            Stack.append(str(nb_to_add))
            Compositions(remainder - nb_to_add)
            Stack.pop()
Example of output:
Compositions(4)
1 + 1 + 1 + 1
1 + 1 + 2
1 + 2 + 1
1 + 3
2 + 1 + 1
2 + 2
3 + 1
4
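As a side check (count_compositions is a small counting sketch of the same recursion, not part of the answer above), a natural number n has exactly 2**(n - 1) compositions, which matches the 8 lines printed for n = 4:

def count_compositions(remainder):
    # same recursion as Compositions, but counting instead of printing
    if remainder == 0:
        return 1
    return sum(count_compositions(remainder - nb) for nb in range(1, remainder + 1))

assert count_compositions(4) == 2 ** (4 - 1) == 8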
I think the recipe here may qualify as elegant. It's lean (about 20 lines), fast, and based upon Kelleher and O'Sullivan's work, which is referenced therein:
def aP(n):
    """Generate partitions of n as ordered lists in ascending
    lexicographical order.

    This highly efficient routine is based on the delightful
    work of Kelleher and O'Sullivan.

    Examples
    ========

    >>> for i in aP(6): i
    ...
    [1, 1, 1, 1, 1, 1]
    [1, 1, 1, 1, 2]
    [1, 1, 1, 3]
    [1, 1, 2, 2]
    [1, 1, 4]
    [1, 2, 3]
    [1, 5]
    [2, 2, 2]
    [2, 4]
    [3, 3]
    [6]

    >>> for i in aP(0): i
    ...
    []

    References
    ==========

    .. [1] Generating Integer Partitions, [online],
        Available: http://jeromekelleher.net/generating-integer-partitions.html
    .. [2] Jerome Kelleher and Barry O'Sullivan, "Generating All
        Partitions: A Comparison Of Two Encodings", [online],
        Available: http://arxiv.org/pdf/0909.2331v2.pdf
    """
    if n < 1:
        # only the empty partition; without this guard the aP(0)
        # example above would yield nothing
        yield []
        return
    # The list `a`'s leading elements contain the partition in which
    # y is the biggest element and x is either the same as y or the
    # 2nd largest element; v and w are adjacent element indices
    # to which x and y are being assigned, respectively.
    a = [1]*n
    y = -1
    v = n
    while v > 0:
        v -= 1
        x = a[v] + 1
        while y >= 2 * x:
            a[v] = x
            y -= x
            v += 1
        w = v + 1
        while x <= y:
            a[v] = x
            a[w] = y
            yield a[:w + 1]
            x += 1
            y -= 1
        a[v] = x + y
        y = a[v] - 1
        yield a[:w]
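As a quick sanity check, the number of partitions generated agrees with the partition function p(n); for example p(10) = 42:

assert sum(1 for _ in aP(10)) == 42  # p(10) = 42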
# -*- coding: utf-8 -*-
import timeit

ncache = 0
cache = {}

def partition(number):
    global cache, ncache
    answer = {(number,), }
    if number in cache:
        ncache += 1
        return cache[number]
    if number == 1:
        cache[number] = answer
        return answer
    for x in range(1, number):
        for y in partition(number - x):
            answer.add(tuple(sorted((x, ) + y)))
    cache[number] = answer
    return answer

print('To 5:')
for r in sorted(partition(5))[::-1]:
    print('\t' + ' + '.join(str(i) for i in r))

print(
    'Time: {}\nCache used:{}'.format(
        timeit.timeit(
            "print('To 30: {} possibilities'.format(len(partition(30))))",
            setup="from __main__ import partition",
            number=1
        ), ncache
    )
)
or https://gist.github.com/sxslex/dd15b13b28c40e695f1e227a200d1646
I don't know if my code is the most elegant, but I've had to solve this many times for research purposes. If you modify the sub_nums variable, you can restrict what numbers are used in the partition (a variant is sketched after the function below).
def make_partitions(number):
    out = []
    tmp = []
    sub_nums = range(1, number+1)
    for num in sub_nums:
        if num <= number:
            tmp.append([num])
    for elm in tmp:
        sum_elm = sum(elm)
        if sum_elm == number:
            out.append(elm)
        else:
            for num in sub_nums:
                if sum_elm + num <= number:
                    L = [i for i in elm]
                    L.append(num)
                    tmp.append(L)
    return out
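For illustration, a sketch of that restriction as a variant (make_partitions_from is a hypothetical helper that takes the allowed parts as a parameter instead of hard-coding sub_nums; like the original, it also produces reorderings of the same parts):

def make_partitions_from(number, sub_nums):
    # same algorithm, but the allowed parts are passed in explicitly
    out = []
    tmp = [[num] for num in sub_nums if num <= number]
    for elm in tmp:
        sum_elm = sum(elm)
        if sum_elm == number:
            out.append(elm)
        else:
            for num in sub_nums:
                if sum_elm + num <= number:
                    tmp.append(elm + [num])
    return out

print(make_partitions_from(6, [2, 4, 6]))  # -> [[6], [2, 4], [4, 2], [2, 2, 2]]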
F(x, n) = \bigcup_{i \ge n} \{ \{i\} \cup g \mid g \in F(x - i, i) \}
Just implement this recursion. F(x, n) is the set of all partitions that sum to x and whose parts are all greater than or equal to n; the base case is F(0, n) = {∅}.
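A minimal sketch of that recursion (using tuples rather than sets, since parts may repeat):

def F(x, n=1):
    """Partitions of x with all parts >= n, as non-decreasing tuples."""
    result = {(x,)} if x >= n else set()
    for i in range(n, x // 2 + 1):
        for g in F(x - i, i):
            result.add((i,) + g)
    return result

# e.g. sorted(F(4)) -> [(1, 1, 1, 1), (1, 1, 2), (1, 3), (2, 2), (4,)]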
I have trouble solving this problem:
First line of input: N, where N+1 is the number of train stations.
Second line of input: N integers c(i), where c(i) is the price of a ticket between stations i-1 and i.
Third line of input: k, the number of passengers.
Next k lines: integers a and b (the first and last station for each passenger).
Desired output: the ticket price for each passenger. For example:
Input:
4
12 23 34 45
3
0 4
1 3
3 2
Output:
114
57
34
My code:
n = int(input())
prices = list(map(int, input().split()))
x = int(input())
for i in range(x):
    a, b = sorted(map(int, input().split()))
    print(sum(prices[a:b]))
I guess my solution is far from optimal, as I get a Time Limit Exceeded error.
Solution using an accumulated (prefix-sum) array:
def accum(a):
    " creates the accumulation of array a as input "
    b = [0] * (len(a) + 1)
    for i, v in enumerate(a):
        b[i+1] = b[i] + v
    return b

def price(acc, t):
    " Price using accumulated array "
    # t provides the start, stop points (e.g. [0, 4])
    mini, maxi = min(t), max(t)
    return acc[maxi] - acc[mini]
Usage of the above functions:
prices = [12, 23, 34, 45]

# create accumulation of prices
acc = accum(prices)

# Using your test cases
tests = [[0, 4], [1, 3], [3, 2]]
for t in tests:
    print(t, price(acc, t))
Output
[0, 4] 114
[1, 3] 57
[3, 2] 34
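Equivalently, a short sketch using itertools.accumulate from the standard library to build the same prefix-sum array (ticket_price is just an illustrative name):

from itertools import accumulate

prices = [12, 23, 34, 45]
acc = [0, *accumulate(prices)]  # acc[i] = total price from station 0 to station i

def ticket_price(a, b):
    a, b = sorted((a, b))
    return acc[b] - acc[a]

print(ticket_price(0, 4))  # 114
print(ticket_price(1, 3))  # 57
print(ticket_price(3, 2))  # 34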
I'm trying to solve the following math problem:
A knight in standard international chess is sitting on a board as follows
0 1 2 3
4 5 6 7
8 9 10 11
12 13 14 15
The knight starts on square "0" and makes jumps to other squares according to the allowable moves in chess (so that at each square it has between two and four valid moves). The knight chooses among the allowable moves at each jump uniformly at random and keeps track of the running sum S of the keys on which it lands.
a. After T = 16 moves, what is the mean of the quantity S modulo 13?
b. What is the standard deviation?
c. After T = 512 moves, what is the mean of the quantity S modulo 311?
d. What is the standard deviation?
e. After T = 16 moves, what is the probability that the sum is divisible by 5, given that it is divisible by 13?
f. After T = 512 moves, what is the probability that the sum is divisible by 7, given that it is divisible by 43?
So far, I've written a program which calculates the probability mass function (pmf) of S:
from itertools import chain, product
import numpy as np
import pytest

def index_to_grid(index):
    return index // 4, index % 4

def grid_to_index(i, j):
    return 4*i + j

def in_board(i, j):
    return (0 <= i < 4) and (0 <= j < 4)

def available_moves(index):
    pos = np.array(index_to_grid(index))
    knight_hops = [np.array(hop) for hop in chain(product([-2, 2], [-1, 1]), product([-1, 1], [-2, 2]))]
    return set(grid_to_index(*newpos) for newpos in pos + knight_hops if in_board(*newpos))

def transition_matrix():
    T = np.zeros((16, 16))
    for i in range(16):
        js = available_moves(i)
        for j in js:
            T[i, j] = 1/len(js)
    return T

def calculate_S(N):
    '''Calculate the matrix S(i, n) of the expected value of S given initial state i after n transitions'''
    T = transition_matrix()
    S = np.zeros((16, N+1))
    for i in range(16):
        S[i, 0] = i
    # Use a bottom-up dynamic programming approach
    for n in range(1, N+1):
        for i in range(16):
            S[i, n] = sum(T[i, j] * (i + S[j, n-1]) for j in range(16))
    return S
Here are a few unit tests I've used to check my results so far:
def test_available_moves():
    assert available_moves(0) == {6, 9}
    assert available_moves(1) == {8, 10, 7}
    assert available_moves(10) == {4, 1, 12, 3}

def test_transition_matrix():
    T = transition_matrix()
    assert T[0, 6] == T[0, 9] == 1/2
    assert all(T[0, j] == 0 for j in set(range(16)) - {6, 9})
    assert T[1, 8] == T[1, 10] == T[1, 7] == 1/3
    assert all(T[1, j] == 0 for j in set(range(16)) - {8, 10, 7})
    assert T[10, 4] == T[10, 1] == T[10, 12] == T[10, 3] == 1/4
    assert all(T[10, j] == 0 for j in set(range(16)) - {4, 1, 12, 3})

def test_calculate_S():
    S = calculate_S(2)
    assert S[15, 1] == 15 + 1/2 * 6 + 1/2 * 9
    assert S[4, 1] == 4 + 1/3 * 2 + 1/3 * 10 + 1/3 * 13
    assert S[15, 2] == 15 + 1/2 * 9 + 1/2 * (1/4 * 0 + 1/4 * 2 + 1/4 * 7 + 1/4 * 15) \
        + 1/2 * 6 + 1/2 * (1/4 * 0 + 1/4 * 8 + 1/4 * 13 + 1/4 * 15)

if __name__ == "__main__":
    pytest.main([__file__, "-s"])
So, for example, to calculate the expected value of S itself after T = 16 moves, I would evaluate calculate_S(16)[0, 16].
The problem is that I am having trouble generalizing this to the expected value of S % 13 (S modulo 13). Given that 13 (and all its 'equivalents' in the subsequent questions) are prime numbers, I suspect there is a key observation to be made using this primality, but so far I haven't figured out what it is. Any ideas?
The trick is to use dynamic programming and to do all calculations mod the relevant number. For each step you need the probability of the knight being on each square with each possible value of the running sum mod that number.
For example, for problem f you need to do your sum calculations mod 7*43 = 301. So for each step you need the probabilities of being in each of the 16*301 = 4816 possible combinations of position and running sum mod 301.
This makes your needed transition matrix much bigger.
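A minimal sketch of that enlarged state space (it reuses available_moves from the question's code; sum_mod_distribution is just an illustrative name, and it assumes S counts the squares landed on after each jump):

import numpy as np

def sum_mod_distribution(T_steps, m):
    """Distribution of S mod m after T_steps moves, starting on square 0."""
    # prob[pos, r] = probability that the knight is on `pos` with S % m == r
    prob = np.zeros((16, m))
    prob[0, 0] = 1.0
    for _ in range(T_steps):
        nxt = np.zeros((16, m))
        for pos in range(16):
            moves = available_moves(pos)  # defined in the question
            for r in range(m):
                p = prob[pos, r]
                if p == 0.0:
                    continue
                for new_pos in moves:
                    nxt[new_pos, (r + new_pos) % m] += p / len(moves)
        prob = nxt
    return prob.sum(axis=0)  # marginal distribution of S % m

# e.g. the mean of S mod 13 after 16 moves:
# dist = sum_mod_distribution(16, 13)
# mean = sum(r * p for r, p in enumerate(dist))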
I have an array A of the form:
1.005 1.405 1.501 1.635
2.020 2.100 2.804 2.067
3.045 3.080 3.209 3.627
4.080 4.005 4.816 4.002
5.125 5.020 5.025 5.307
6.180 6.045 6.036 6.015
7.245 7.320 7.049 7.807
8.320 8.125 8.064 8.042
9.405 9.180 9.581 9.060
10.500 10.245 10.100 10.082
and B of the form:
10
9
8
7
6
5
4
3
2
1
I would like to add to or subtract from each entry a number smaller than a particular bound, in this case 0.5, so that a certain condition is met, e.g. the sum of (Bi - Ai)^2 is minimized, much like an optimization problem. As an example, take A23, which has the value 2.804: I need to vary it in the range 2.304 < A23 < 2.804 so that, for some value in this range, the sum of (Bi - Ai)^2 is minimized. Then for A24 I vary it in the range 1.567 < A24 < 2.567 so that D is minimized.
Reproducible code
import numpy as np

A = np.array([[1.005, 1.405, 1.501, 1.635],
              [2.020, 2.100, 2.804, 2.067],
              [3.045, 3.080, 3.209, 3.627],
              [4.080, 4.005, 4.816, 4.002],
              [5.125, 5.020, 5.025, 5.307],
              [6.180, 6.045, 6.036, 6.015],
              [7.245, 7.320, 7.049, 7.807],
              [8.320, 8.125, 8.064, 8.042],
              [9.405, 9.180, 9.581, 9.060],
              [10.500, 10.245, 10.100, 10.082]])

B = np.array([10, 9, 8, 7, 6, 5, 4, 3, 2, 1])

C = np.empty(shape=(A.shape[0], A.shape[1]))
D = np.empty(shape=(A.shape[0], ))

m, n = A.shape
for i in range(m):
    for j in range(n):
        C[i, j] = np.sum((B[i] - A[i, j]) ** 2)

D = np.sum(C, axis=0)
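Since each entry A[i, j] appears in exactly one squared term of its column sum, a hedged sketch of that minimization (assuming every entry may move by up to 0.5 in either direction; A_opt and D_min are illustrative names) is simply to clip B towards each column of A:

# Each term (B[i] - A[i, j]) ** 2 involves a single entry, so each column sum
# is minimized by moving A[i, j] as close to B[i] as the +/- 0.5 bound allows.
A_opt = np.clip(B[:, None], A - 0.5, A + 0.5)  # optimal adjusted entries
D_min = np.sum((B[:, None] - A_opt) ** 2, axis=0)  # minimized column sums
print(A_opt)
print(D_min)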