Create list of adjacent elements of another list in Python

I am looking to take as input a list and then create another list which contains tuples (or sub-lists) of adjacent elements from the original list, wrapping around for the beginning and ending elements. The input/output would look like this:
l_in = [0, 1, 2, 3]
l_out = [(3, 0, 1), (0, 1, 2), (1, 2, 3), (2, 3, 0)]
My question is closely related to another titled getting successive adjacent elements of a list, but this other question does not take into account wrapping around for the end elements and only handles pairs of elements rather than triplets.
I have a somewhat longer approach to do this involving rotating deques and zipping them together:
from collections import deque
l_in = [0, 1, 2, 3]
deq = deque(l_in)
deq.rotate(1)
deq_prev = deque(deq)
deq.rotate(-2)
deq_next = deque(deq)
deq.rotate(1)
l_out = list(zip(deq_prev, deq, deq_next))
# l_out is [(3, 0, 1), (0, 1, 2), (1, 2, 3), (2, 3, 0)]
However, I feel like there is probably a more elegant (and/or efficient) way to do this using other built-in Python functionality. If, for instance, the rotate() function of deque returned the rotated list instead of modifying it in place, this could be a one- or two-liner (though this approach of zipping together rotated lists is perhaps not the most efficient). How can I accomplish this more elegantly and/or efficiently?

One approach may be to use itertools combined with more_itertools.windowed:
import itertools as it
import more_itertools as mit
l_in = [0, 1, 2, 3]
n = len(l_in)
list(it.islice(mit.windowed(it.cycle(l_in), 3), n-1, 2*n-1))
# [(3, 0, 1), (0, 1, 2), (1, 2, 3), (2, 3, 0)]
Here we generate sliding windows over an infinite cycle of the input and slice out the desired subset.
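For illustration, here are the first few windows over the cycled input, which shows where the slice bounds come from; the run we want starts at index n-1 = 3:
list(it.islice(mit.windowed(it.cycle(l_in), 3), 6))
# [(0, 1, 2), (1, 2, 3), (2, 3, 0), (3, 0, 1), (0, 1, 2), (1, 2, 3)]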
FWIW, here is an abstraction of the latter code for a general, flexible solution given any iterable input e.g. range(5), "abcde", iter([0, 1, 2, 3]), etc.:
def get_windows(iterable, size=3, offset=-1):
    """Return an iterable of windows including an optional offset."""
    it1, it2 = it.tee(iterable)
    n = mit.ilen(it1)
    return it.islice(mit.windowed(it.cycle(it2), size), n + offset, 2*n + offset)
list(get_windows(l_in))
# [(3, 0, 1), (0, 1, 2), (1, 2, 3), (2, 3, 0)]
list(get_windows("abc", size=2))
# [('c', 'a'), ('a', 'b'), ('b', 'c')]
list(get_windows(range(5), size=2, offset=-2))
# [(3, 4), (4, 0), (0, 1), (1, 2), (2, 3)]
Note: more-itertools is a separate library, easily installed via:
> pip install more_itertools

This can be done with slices:
l_in = [0, 1, 2, 3]
l_in = [l_in[-1]] + l_in + [l_in[0]]
l_out = [l_in[i:i+3] for i in range(len(l_in)-2)]
Well, or something more perverse like this:
div = len(l_in)
n = 3
l_out = [l_in[i % div: i % div + 3]
         if len(l_in[i % div: i % div + 3]) == 3
         else l_in[i % div: i % div + 3] + l_in[:3 - len(l_in[i % div: i % div + 3])]
         for i in range(3, len(l_in) + 3 * n + 2)]
You can specify the number of iterations.

Well I figured out a better solution as I was writing the question, but I already went through the work of writing it, so here goes. This solution is at least much more concise:
l_out = list(zip(l_in[-1:] + l_in[:-1], l_in, l_in[1:] + l_in[:1]))
See this post for different answers on how to rotate lists in Python.
The one-line solution above should be at least as efficient as the approach in the question (based on my understanding), since the slicing should be no more expensive than rotating and copying the deques (see https://wiki.python.org/moin/TimeComplexity).
Other answers with more efficient (or elegant) solutions are still welcome though.

As you found, there is a list-rotation idiom based on slicing: lst[i:] + lst[:i].
Using it inside a comprehension, with a variable n for the number of adjacent elements wanted, is more general: [lst[i:] + lst[:i] for i in range(n)].
So everything can be parameterized: the number of adjacent elements n in the cyclic rotation, and the 'phase' p, the starting point if not the 'natural' 0 base index. The default p=-1 is chosen to fit the apparent desired output.
tst = list(range(4))

def rot(lst, n, p=-1):
    return list(zip(*[lst[i+p:] + lst[:i+p] for i in range(n)]))

rot(tst, 3)
# [(3, 0, 1), (0, 1, 2), (1, 2, 3), (2, 3, 0)]
This is the shortened code, as suggested in the comments.
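For illustration, passing p=0 starts the windows at the natural index:
rot(tst, 3, p=0)
# [(0, 1, 2), (1, 2, 3), (2, 3, 0), (3, 0, 1)]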

Related

Quickest way to remove mirror opposites from a list

Say I have a list of tuples [(0, 1, 2, 3), (4, 5, 6, 7), (3, 2, 1, 0)], I would like to remove all instances where a tuple is reversed e.g. removing (3, 2, 1, 0) from the above list.
My current (rudimentary) method is:
L = list(itertools.permutations(np.arange(x), 4))
for ll in L:
    if ll[::-1] in L:
        L.remove(ll[::-1])
The time taken grows rapidly with increasing x, so if x is large this takes ages! How can I speed this up?
Using set comes to mind:
L = set()
for ll in itertools.permutations(np.arange(x), 4):
    if ll[::-1] not in L:
        L.add(ll)
or even, for slightly better performance:
L = set()
for ll in itertools.permutations(np.arange(x), 4):
    if ll not in L:
        L.add(ll[::-1])
The need to keep the first occurrence looks like it forces you to iterate with a conditional.
a = [(0, 1, 2, 3), (4, 5, 6, 7), (3, 2, 1, 0)]
s = set()
a1 = []
for t in a:
    if t not in s:
        a1.append(t)
        s.add(t[::-1])
Edit: The accepted answer addresses the example code (i.e. the itertools permutations sample). This answers the generalized question for any list (or iterable).
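For reuse, the same idea can be wrapped in a small helper; this is only a sketch, and drop_mirrored is just an illustrative name:
def drop_mirrored(tuples):
    # keep the first of each mirrored pair, preserving the original order
    seen = set()
    kept = []
    for t in tuples:
        if t not in seen:
            kept.append(t)
            seen.add(t[::-1])
    return kept

drop_mirrored([(0, 1, 2, 3), (4, 5, 6, 7), (3, 2, 1, 0)])
# [(0, 1, 2, 3), (4, 5, 6, 7)]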

Generate itertools.product in different order

I have some sorted/scored lists of parameters. I'd like to generate possible combinations of parameters (cartesian product). However, if the number of parameters is large, this quickly (very quickly!!) becomes a very large number. Basically, I'd like to do a cartesian product, but stop early.
import itertools

parameter_options = ['1234',
                     '123',
                     '1234']
for parameter_set in itertools.product(*parameter_options):
    print ''.join(parameter_set)
generates:
111
112
113
114
121
122
123
124
131
132
133
134
...
I'd like to generate (or something similar):
111
112
121
211
122
212
221
222
...
So that if I stop early, I'd at least get a couple of "good" sets of parameters, where a good set of parameters comes mostly early from the lists. This particular order would be fine, but I am interested in any technique that changes the "next permutation" choice order. I'd like the early results generated to have most items from the front of the list, but don't really care whether a solution generates 113 or 122 first, or whether 211 or 112 comes first.
My plan is to stop after some number of permutations are generated (maybe 10K or so? Depends on results). So if there are fewer than the cutoff, all should be generated, ultimately. And preferably each generated only once.
I think you can get your results in the order you want if you think of the output in terms of a graph traversal of the output space. You want a nearest-first traversal, while the itertools.product function is a depth-first traversal.
Try something like this:
import heapq

def nearest_first_product(*sequences):
    start = (0,) * len(sequences)
    queue = [(0, start)]
    seen = set([start])
    while queue:
        priority, indexes = heapq.heappop(queue)
        yield tuple(seq[index] for seq, index in zip(sequences, indexes))
        for i in range(len(sequences)):
            if indexes[i] < len(sequences[i]) - 1:
                lst = list(indexes)
                lst[i] += 1
                new_indexes = tuple(lst)
                if new_indexes not in seen:
                    new_priority = sum(index * index for index in new_indexes)
                    heapq.heappush(queue, (new_priority, new_indexes))
                    seen.add(new_indexes)
Example output:
for tup in nearest_first_product(range(1, 5), range(1, 4), range(1, 5)):
    print(tup)
(1, 1, 1)
(1, 1, 2)
(1, 2, 1)
(2, 1, 1)
(1, 2, 2)
(2, 1, 2)
(2, 2, 1)
(2, 2, 2)
(1, 1, 3)
(1, 3, 1)
(3, 1, 1)
(1, 2, 3)
(1, 3, 2)
(2, 1, 3)
(2, 3, 1)
(3, 1, 2)
(3, 2, 1)
(2, 2, 3)
(2, 3, 2)
(3, 2, 2)
(1, 3, 3)
(3, 1, 3)
(3, 3, 1)
(1, 1, 4)
(2, 3, 3)
(3, 2, 3)
(3, 3, 2)
(4, 1, 1)
(1, 2, 4)
(2, 1, 4)
(4, 1, 2)
(4, 2, 1)
(2, 2, 4)
(4, 2, 2)
(3, 3, 3)
(1, 3, 4)
(3, 1, 4)
(4, 1, 3)
(4, 3, 1)
(2, 3, 4)
(3, 2, 4)
(4, 2, 3)
(4, 3, 2)
(3, 3, 4)
(4, 3, 3)
(4, 1, 4)
(4, 2, 4)
(4, 3, 4)
You can get a bunch of slightly different orders by changing up the calculation of new_priority in the code. The current version uses squared Cartesian distance as the priorities, but you could use some other value if you wanted to (for instance, one that incorporates the values from the sequences, not only the indexes).
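For instance, one hypothetical alternative (assuming the sequence elements are numeric) would be to replace the new_priority line with:
new_priority = sum(seq[index] for seq, index in zip(sequences, new_indexes))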
If you don't care too much about whether (1, 1, 3) comes before (1, 2, 2) (so long as they both come after (1, 1, 2), (1, 2, 1) and (2, 1, 1)), you could probably do a breadth-first traversal instead of nearest-first. This would be a bit simpler, as you could use a regular queue (like a collections.deque) rather than a priority queue.
The queues used by this sort of graph traversal mean that this code uses some amount of memory. However, the amount of memory is a lot less than if you had to produce the results all up front before putting them in order. The maximum memory used is proportional to the surface area of the result space, rather than its volume.
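For reference, here is a minimal sketch of that breadth-first variant, assuming a plain FIFO queue is acceptable (breadth_first_product is just an illustrative name):
from collections import deque

def breadth_first_product(*sequences):
    # visit index tuples level by level, i.e. by total number of increments from the start
    start = (0,) * len(sequences)
    queue = deque([start])
    seen = {start}
    while queue:
        indexes = queue.popleft()
        yield tuple(seq[i] for seq, i in zip(sequences, indexes))
        for i in range(len(sequences)):
            if indexes[i] < len(sequences[i]) - 1:
                new_indexes = indexes[:i] + (indexes[i] + 1,) + indexes[i + 1:]
                if new_indexes not in seen:
                    queue.append(new_indexes)
                    seen.add(new_indexes)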
Your question is a bit ambiguous, but reading your comments and the other answers, it seems you want a cartesian product implementation that does a breadth-first search instead of a depth-first search.
Recently I had the same need, but also with the requirement that it doesn't store intermediate results in memory. This is very important to me because I am working with a large number of parameters (and thus an extremely big cartesian product), and any implementation that stores values or makes recursive calls is non-viable. As you state in your question, this seems to be your case as well.
As I didn't find an answer that fulfils this requirement, I came to this solution:
from itertools import combinations

def product(*sequences):
    '''Breadth First Search Cartesian Product'''
    # sequences = tuple(tuple(seq) for seq in sequences)
    def partitions(n, k):
        for c in combinations(range(n + k - 1), k - 1):
            yield (b - a - 1 for a, b in zip((-1,) + c, c + (n + k - 1,)))
    max_position = [len(i) - 1 for i in sequences]
    for i in range(sum(max_position)):
        for positions in partitions(i, len(sequences)):
            try:
                yield tuple(map(lambda seq, pos: seq[pos], sequences, positions))
            except IndexError:
                continue
    yield tuple(map(lambda seq, pos: seq[pos], sequences, max_position))
In terms of speed, this generator works fine in the beginning but gets slower in the later results. So, although this implementation is a bit slower, it works as a generator that uses no extra memory and doesn't yield repeated values.
As I mentioned in @Blckknght's answer, parameters here must also be sequences (subscriptable, length-defined iterables). You can bypass this limitation (sacrificing a bit of memory) by uncommenting the first line, which may be useful if you are working with generators/iterators as parameters.
I hope this helps; let me know if it solves your problem.
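As a quick check against the question's parameter_options (using the combinations import shown above), the first few results should come out in the requested breadth-first order:
import itertools

parameter_options = ['1234', '123', '1234']
for parameter_set in itertools.islice(product(*parameter_options), 10):
    print(''.join(parameter_set))
# expected: 111, 112, 121, 211, 113, 122, 131, 212, 221, 311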
This solution possibly isn't the best as it forces every combination into memory briefly, but it does work. It just might take a little while for large data sets.
import itertools
import random

count = 100  # the (maximum) amount of results
results = random.sample(list(itertools.product(*parameter_options)), count)
for parameter_set in results:
    print "".join(parameter_set)
This will give you a list of products in a random order.

Pairwise circular Python 'for' loop

Is there a nice Pythonic way to loop over a list, returning a pair of elements? The last element should be paired with the first.
So for instance, if I have the list [1, 2, 3], I would like to get the following pairs:
1 - 2
2 - 3
3 - 1
A Pythonic way to access a list pairwise is: zip(L, L[1:]). To connect the last item to the first one:
>>> L = [1, 2, 3]
>>> zip(L, L[1:] + L[:1])
[(1, 2), (2, 3), (3, 1)]
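On Python 3, zip returns a lazy iterator rather than a list, so wrap the call in list() to materialize the pairs:
>>> list(zip(L, L[1:] + L[:1]))
[(1, 2), (2, 3), (3, 1)]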
I would use a deque with zip to achieve this.
>>> from collections import deque
>>>
>>> l = [1,2,3]
>>> d = deque(l)
>>> d.rotate(-1)
>>> zip(l, d)
[(1, 2), (2, 3), (3, 1)]
I'd use a slight modification to the pairwise recipe from the itertools documentation:
def pairwise_circle(iterable):
    "s -> (s0,s1), (s1,s2), (s2, s3), ... (s<last>,s0)"
    a, b = itertools.tee(iterable)
    first_value = next(b, None)
    return itertools.zip_longest(a, b, fillvalue=first_value)
This will simply keep a reference to the first value and when the second iterator is exhausted, zip_longest will fill the last place with the first value.
(Also note that it works with iterators like generators as well as iterables like lists/tuples.)
Note that @Barry's solution is very similar to this but a bit easier to understand in my opinion and easier to extend beyond one element.
I would pair itertools.cycle with zip:
import itertools

def circular_pairwise(l):
    second = itertools.cycle(l)
    next(second)
    return zip(l, second)
cycle returns an iterable that yields the values of its argument in order, looping from the last value to the first.
We skip the first value, so it starts at position 1 (rather than 0).
Next, we zip it with the original, unmutated list. zip is good, because it stops when any of its argument iterables are exhausted.
Doing it this way avoids the creation of any intermediate lists: cycle holds a reference to the original, but doesn't copy it. zip operates in the same way.
It's important to note that this will break if the input is an iterator, such as a file, (or a map or zip in python-3), as advancing in one place (through next(second)) will automatically advance the iterator in all the others. This is easily solved using itertools.tee, which produces two independently operating iterators over the original iterable:
def circular_pairwise(it):
    first, snd = itertools.tee(it)
    second = itertools.cycle(snd)
    next(second)
    return zip(first, second)
tee can use large amounts of additional storage, for example, if one of the returned iterators is used up before the other is touched, but as we only ever have one step difference, the additional storage is minimal.
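As a quick check (on Python 3), the tee-based version should also handle a plain iterator input:
>>> list(circular_pairwise(iter([1, 2, 3])))
[(1, 2), (2, 3), (3, 1)]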
There are more efficient ways (that don't build temporary lists), but I think this is the most concise:
> l = [1,2,3]
> zip(l, (l+l)[1:])
[(1, 2), (2, 3), (3, 1)]
Pairwise circular Python 'for' loop
If you like the accepted answer,
zip(L, L[1:] + L[:1])
you can go much more memory light with semantically the same code using itertools:
from itertools import islice, chain #, izip as zip # uncomment if Python 2
And this barely materializes anything in memory beyond the original list (assuming the list is relatively large):
zip(l, chain(islice(l, 1, None), islice(l, None, 1)))
To use, just consume (for example, with a list):
>>> list(zip(l, chain(islice(l, 1, None), islice(l, None, 1))))
[(1, 2), (2, 3), (3, 1)]
This can be made extensible to any width:
def cyclical_window(l, width=2):
    return zip(*[chain(islice(l, i, None), islice(l, None, i)) for i in range(width)])
and usage:
>>> l = [1, 2, 3, 4, 5]
>>> cyclical_window(l)
<itertools.izip object at 0x112E7D28>
>>> list(cyclical_window(l))
[(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
>>> list(cyclical_window(l, 4))
[(1, 2, 3, 4), (2, 3, 4, 5), (3, 4, 5, 1), (4, 5, 1, 2), (5, 1, 2, 3)]
Unlimited generation with itertools.tee and cycle
You can also use tee to avoid making a redundant cycle object:
from itertools import cycle, tee
ic1, ic2 = tee(cycle(l))
next(ic2) # must still queue up the next item
and now:
>>> [(next(ic1), next(ic2)) for _ in range(10)]
[(1, 2), (2, 3), (3, 1), (1, 2), (2, 3), (3, 1), (1, 2), (2, 3), (3, 1), (1, 2)]
This is incredibly efficient, an expected usage of iter with next, and elegant usage of cycle, tee, and zip.
Don't pass cycle directly to list unless you have saved your work and have time for your computer to creep to a halt as you max out its memory - if you're lucky, after a while your OS will kill the process before it crashes your computer.
Pure Python Builtin Functions
Finally, no standard lib imports, but this only works for up to the length of original list (IndexError otherwise.)
>>> [(l[i], l[i - len(l) + 1]) for i in range(len(l))]
[(1, 2), (2, 3), (3, 1)]
You can continue this with modulo:
>>> len_l = len(l)
>>> [(l[i % len_l], l[(i + 1) % len_l]) for i in range(10)]
[(1, 2), (2, 3), (3, 1), (1, 2), (2, 3), (3, 1), (1, 2), (2, 3), (3, 1), (1, 2)]
I would use a list comprehension, and take advantage of the fact that l[-1] is the last element.
>>> l = [1,2,3]
>>> [(l[i-1],l[i]) for i in range(len(l))]
[(3, 1), (1, 2), (2, 3)]
You don't need a temporary list that way.
Amazing how many different ways there are to solve this problem.
Here's one more. You can use the pairwise recipe but instead of zipping with b, chain it with the first element that you already popped off. Don't need to cycle when we just need a single extra value:
from itertools import chain, izip, tee

def pairwise_circle(iterable):
    a, b = tee(iterable)
    first = next(b, None)
    return izip(a, chain(b, (first,)))
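A quick usage sketch (use zip in place of izip on Python 3):
>>> list(pairwise_circle([1, 2, 3]))
[(1, 2), (2, 3), (3, 1)]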
I like a solution that does not modify the original list and does not copy the list to temporary storage:
def circular(a_list):
    for index in range(len(a_list) - 1):
        yield a_list[index], a_list[index + 1]
    yield a_list[-1], a_list[0]

for x in circular([1, 2, 3]):
    print x
Output:
(1, 2)
(2, 3)
(3, 1)
I can imagine this being used on some very large in-memory data.
This one will work even if the list l has consumed most of the system's memory. (If something guarantees this case to be impossible, then zip as posted by chepner is fine)
l.append(l[0])
for i in range(len(l) - 1):
    pair = l[i], l[i+1]
    # stuff involving pair
del l[-1]
Or, more generally (this works for any offset n, i.e. l[(i+n) % len(l)]):
for i in range(len(l)):
    pair = l[i], l[(i+1) % len(l)]
    # stuff
provided you are on a system with decently fast modulo division (i.e. not some pea-brained embedded system).
There seems to be an often-held belief that indexing a list with an integer subscript is un-Pythonic and best avoided. Why?
This is my solution, and it looks Pythonic enough to me:
l = [1, 2, 3]
for n, v in enumerate(l):
    try:
        print(v, l[n+1])
    except IndexError:
        print(v, l[0])
prints:
1 2
2 3
3 1
The generator function version:
def f(iterable):
    for n, v in enumerate(iterable):
        try:
            yield (v, iterable[n+1])
        except IndexError:
            yield (v, iterable[0])
>>> list(f([1,2,3]))
[(1, 2), (2, 3), (3, 1)]
How about this?
li = li+[li[0]]
pairwise = [(li[i],li[i+1]) for i in range(len(li)-1)]
from itertools import izip, chain, islice
itr = izip(l, chain(islice(l, 1, None), islice(l, 1)))
(As above with @j-f-sebastian's "zip" answer, but using itertools.)
NB: edited after a helpful nudge from @200_success; previously this was:
itr = izip(l, chain(l[1:], l[:1]))
If you don't want to consume too much memory, you can try my solution:
[(l[i], l[(i+1) % len(l)]) for i, v in enumerate(l)]
It's a little slower, but it consumes less memory.
Starting in Python 3.10, the new pairwise function provides a way to create sliding pairs of consecutive elements:
from itertools import pairwise
# l = [1, 2, 3]
list(pairwise(l + l[:1]))
# [(1, 2), (2, 3), (3, 1)]
or simply pairwise(l + l[:1]) if you don't need the result as a list.
Note that we apply pairwise to the list with its head appended (l + l[:1]) so that the rolling pairs are circular (i.e. so that we also include the (3, 1) pair):
list(pairwise(l)) # [(1, 2), (2, 3)]
l + l[:1] # [1, 2, 3, 1]
Just another try
>>> L = [1,2,3]
>>> zip(L,L[1:]) + [(L[-1],L[0])]
[(1, 2), (2, 3), (3, 1)]
L = [1, 2, 3]
a = zip(L, L[1:] + L[:1])
for i in a:
    b = list(i)
    print b
It seems like combinations would do the job.
from itertools import combinations

x = combinations([1, 2, 3], 2)
This yields a generator, which can then be iterated over like so:
for i in x:
    print i
The results would look something like:
(1, 2)
(1, 3)
(2, 3)

Readable way to form pairs while available [duplicate]

This question already has answers here:
Iterating over every two elements in a list [duplicate]
(22 answers)
Closed 7 years ago.
I'm trying to turn a list into pairs, but only for as long as possible (i.e. my list can be odd, in that case I want to ignore the last element).
E.g. my input is x = [0, 1, 2, 3, 4], which I would want to turn into [(0, 1), (2, 3)]. Similarly, x = [0, 1, 2, 3, 4, 5] should become [(0, 1), (2, 3), (4, 5)].
What I'm currently doing is [(x[i], x[i+1]) for i in range(0, len(x), 2)]. This breaks, as range(0, len(x), 2) still includes x[-1] if len(x) is odd. Note that something of the form [(l, r) for l, r in ...] would also be preferable, rather than having to fiddle with indices.
Bonus points: Here's some more context. I'm not completely ignoring the last element of an odd sequence, of course. I'm applying a function to each pair, but I do not want to apply this function H to the singleton element. Currently, I'm doing the following:
next_layer = [H(layer[i], layer[i+1]) for i in range(0, len(layer), 2)]
if len(layer) & 1:  # if there is a lone node left on this layer
    next_layer.append(layer[-1])
An extra elegant solution would incorporate this into the above as well.
Use a zip
This function returns a list of tuples, where the i-th tuple contains the i-th element from each of the argument sequences or iterables. The returned list is truncated in length to the length of the shortest argument sequence.
>>> a = [1, 2, 3, 4, 5]
>>> b = [0, 1, 2, 3, 4, 5]
>>> zip(a[::2], a[1::2])
[(1, 2), (3, 4)]
>>> zip(b[::2], b[1::2])
[(0, 1), (2, 3), (4, 5)]
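For the bonus part of the question, the same slicing trick can be combined with H directly; this is only a sketch reusing the question's H and layer names:
next_layer = [H(l, r) for l, r in zip(layer[::2], layer[1::2])]
if len(layer) % 2:  # a lone element is left over on an odd-length layer
    next_layer.append(layer[-1])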

Generating all possible combinations of a list, "itertools.combinations" misses some results

Given a list of items in Python, how can I get all the possible combinations of the items?
There are several similar questions on this site, that suggest using itertools.combinations, but that returns only a subset of what I need:
import itertools

stuff = [1, 2, 3]
for L in range(0, len(stuff)+1):
    for subset in itertools.combinations(stuff, L):
        print(subset)
()
(1,)
(2,)
(3,)
(1, 2)
(1, 3)
(2, 3)
(1, 2, 3)
As you see, it returns only items in a strict order, not returning (2, 1), (3, 2), (3, 1), (2, 1, 3), (3, 1, 2), (2, 3, 1), and (3, 2, 1). Is there some workaround for that? I can't seem to come up with anything.
Use itertools.permutations:
>>> import itertools
>>> stuff = [1, 2, 3]
>>> for L in range(0, len(stuff)+1):
...     for subset in itertools.permutations(stuff, L):
...         print(subset)
...
()
(1,)
(2,)
(3,)
(1, 2)
(1, 3)
(2, 1)
(2, 3)
(3, 1)
....
Help on itertools.permutations:
permutations(iterable[, r]) --> permutations object
Return successive r-length permutations of elements in the iterable.
permutations(range(3), 2) --> (0,1), (0,2), (1,0), (1,2), (2,0), (2,1)
You can generate all the combinations of a list in Python using this simple code:
import itertools

a = [1, 2, 3, 4]
for i in xrange(1, len(a)+1):
    print list(itertools.combinations(a, i))
Result:
[(1,), (2,), (3,), (4,)]
[(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
[(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
[(1, 2, 3, 4)]
Are you looking for itertools.permutations instead?
From help(itertools.permutations),
Help on class permutations in module itertools:
class permutations(__builtin__.object)
| permutations(iterable[, r]) --> permutations object
|
| Return successive r-length permutations of elements in the iterable.
|
| permutations(range(3), 2) --> (0,1), (0,2), (1,0), (1,2), (2,0), (2,1)
Sample code:
>>> from itertools import permutations
>>> stuff = [1, 2, 3]
>>> for i in range(0, len(stuff)+1):
...     for subset in permutations(stuff, i):
...         print(subset)
...
()
(1,)
(2,)
(3,)
(1, 2)
(1, 3)
(2, 1)
(2, 3)
(3, 1)
(3, 2)
(1, 2, 3)
(1, 3, 2)
(2, 1, 3)
(2, 3, 1)
(3, 1, 2)
(3, 2, 1)
From Wikipedia, the difference between permutations and combinations:
Permutation :
Informally, a permutation of a set of objects is an arrangement of those objects into a particular order. For example, there are six permutations of the set {1,2,3}, namely (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), and (3,2,1).
Combination :
In mathematics a combination is a way of selecting several things out of a larger group, where (unlike permutations) order does not matter.
itertools.permutations is going to be what you want. By mathematical definition, order does not matter for combinations, meaning (1,2) is considered identical to (2,1). Whereas with permutations, each distinct ordering counts as a unique permutation, so (1,2) and (2,1) are completely different.
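If you want all lengths in one expression, one possible sketch uses chain.from_iterable:
from itertools import chain, permutations

stuff = [1, 2, 3]
all_arrangements = list(chain.from_iterable(
    permutations(stuff, r) for r in range(len(stuff) + 1)))
# [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 1), (2, 3), (3, 1), (3, 2), (1, 2, 3), ...]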
Here is a solution without itertools.
First, let's define a translation between an indicator vector of 0s and 1s and a sub-list (1 if the item is in the sublist):
def indicators2sublist(indicators, arr):
    return [item for item, indicator in zip(arr, indicators) if int(indicator) == 1]
Next, we'll define a mapping from a number between 0 and 2^n - 1 to its binary vector representation (using the string format function):
def bin(n, sz):
    # note: this shadows the built-in bin()
    return ('{d:0' + str(sz) + 'b}').format(d=n)
All that is left to do is to iterate over all the possible numbers and call indicators2sublist:
def all_sublists(arr):
    sz = len(arr)
    for n in xrange(0, 2**sz):
        b = bin(n, sz)
        yield indicators2sublist(b, arr)
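A quick usage sketch (with xrange replaced by range on Python 3):
list(all_sublists([1, 2, 3]))
# [[], [3], [2], [2, 3], [1], [1, 3], [1, 2], [1, 2, 3]]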
I assume you want all possible combinations as 'sets' of values. Here is a piece of code that I wrote that might help give you an idea:
def getAllCombinations(object_list):
    uniq_objs = set(object_list)
    combinations = []
    for obj in uniq_objs:
        for i in range(0, len(combinations)):
            combinations.append(combinations[i].union([obj]))
        combinations.append(set([obj]))
    return combinations
return combinations
Here is a sample:
combinations = getAllCombinations([20,10,30])
combinations.sort(key = lambda s: len(s))
print combinations
... [set([10]), set([20]), set([30]), set([10, 20]), set([10, 30]), set([20, 30]), set([10, 20, 30])]
I think this has O(2^n) time complexity (it builds one entry per non-empty subset), so be careful. This works, but it may not be the most efficient.
Just thought I'd put this out there since I couldn't find EVERY possible outcome elsewhere. Keep in mind I only have the most basic knowledge of Python, so there's probably a much more elegant solution... (also, excuse the poor variable names).
testing = [1, 2, 3]
testing2 = [0]
n = -1

def testingSomethingElse(number):
    try:
        testing2[0:len(testing2)] == testing[0]
        n = -1
        testing2[number] += 1
    except IndexError:
        testing2.append(testing[0])

while True:
    n += 1
    testing2[0] = testing[n]
    print(testing2)
    if testing2[0] == testing[-1]:
        try:
            n = -1
            testing2[1] += 1
        except IndexError:
            testing2.append(testing[0])
        for i in range(len(testing2)):
            if testing2[i] == 4:
                testingSomethingElse(i+1)
                testing2[i] = testing[0]
I got away with == 4 because I'm working with integers, but you may have to modify that accordingly...
