Transforming nested Python loops into list comprehensions

I've started working on some Project Euler problems, and have solved number 4 with a simple brute force solution:
def mprods(a, b):
    c = range(a, b)
    f = []
    for d in c:
        for e in c:
            f.append(d*e)
    return f
max([z for z in mprods(100,1000) if str(z)==(''.join([str(z)[-i] for i in range(1,len(str(z))+1)]))])
After solving, I tried to make it as compact as possible, and came up with that horrible bottom line!
Not to leave something half-done, I am trying to condense the mprods function into a list comprehension. So far, I've come up with these attempts:
[d*e for d,e in (range(a,b), range(a,b))]
Obviously completely on the wrong track. :-)
[d*e for x in [e for e in range(1,5)] for d in range(1,5)]
This gives me [4, 8, 12, 16, 4, 8, 12, 16, 4, 8, 12, 16, 4, 8, 12, 16], where I expect
[1, 2, 3, 4, 2, 4, 6, 8, 3, 6, 9, 12, 4, 8, 12, 16] or similar.
Any Pythonistas out there that can help? :)

c = range(a, b)
print [d * e for d in c for e in c]
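For reference, the for clauses in a comprehension nest from left to right, mirroring the original loops:
# [d * e for d in c for e in c] unrolls to:
f = []
for d in c:        # leftmost clause = outer loop
    for e in c:    # rightmost clause = inner loop
        f.append(d * e)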

from itertools import product

def palindrome(i):
    return str(i) == str(i)[::-1]

x = xrange(900, 1000)
max(a*b for (a, b) in product(x, x) if palindrome(a*b))
xrange(900,1000) is like range(900,1000) but instead of returning a list it returns an object that generates the numbers in the range on demand. For looping, this is slightly faster than range() and more memory efficient.
product(xrange(900,1000),xrange(900,1000)) gives the Cartesian product of the input iterables. It is equivalent to nested for-loops. For example, product(A, B) returns the same as: ((x,y) for x in A for y in B). The leftmost iterators are in the outermost for-loop, so the output tuples cycle in a manner similar to an odometer (with the rightmost element changing on every iteration).
product('ab', range(3)) --> ('a',0) ('a',1) ('a',2) ('b',0) ('b',1) ('b',2)
product((0,1), (0,1), (0,1)) --> (0,0,0) (0,0,1) (0,1,0) (0,1,1) (1,0,0) ...
str(i)[::-1] is slice-notation shorthand to reverse a string.
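For example:
>>> str(9009)[::-1]
'9009'
>>> str(123)[::-1]
'321'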
Note how everything is wrapped in a generator expression, a high-performance, memory-efficient, lazy counterpart of list comprehensions.
Also note that the largest palindrome made from the product of two 2-digit numbers is made from the numbers 91 and 99, two numbers in range(90, 100). Extrapolating to 3-digit numbers, you can use range(900, 1000).

I think you'll like this one-liner (formatted for readability):
max(z for z in (d*e
                for d in xrange(100, 1000)
                for e in xrange(100, 1000))
    if str(z) == str(z)[::-1])
Or slightly changed:
c = range(100, 1000)
max(z for z in (d*e for d in c for e in c) if str(z) == str(z)[::-1])
Wonder how many parens that would be in Lisp...


insert an array into main array [duplicate]

I have a function insarrintomain which takes 2 arguments. The first is the main list, the second is an insert list. I need to create a new list containing all numbers from both lists in increasing order. For example: main is [1, 2, 3, 4, 8, 9, 12], ins is [5, 6, 7, 10]. I should get [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12]
Here is my code:
def insarrintomain(main, ins):
    arr = []
    c = 0
    for i, el in enumerate(main):
        if c < len(ins):
            if el > ins[c]:
                for j, ins_el in enumerate(ins):
                    if ins_el < el:
                        c += 1
                        arr.append(ins_el)
                    else:
                        break
            else:
                arr.append(el)
        else:
            arr.append(el)
    return arr
What did I miss?
Why not
new_array = main + insert
new_array.sort()
The pythonic way of solving this problem is something like this:
def insarrintomain(main, ins):
    new_list = main + ins
    new_list.sort()
    return new_list
In Python readability counts.
This code is pythonic because it’s easy to read: the function takes two lists, concatenates them into one new list, sorts the result and returns it.
Another reason why this code is pythonic is that it uses built-in functions. There is no need to reinvent the wheel: someone already needed to concatenate two lists, or to sort one. Built-in functions such as sort have been optimised for decades and are mostly written in C. We have little chance of beating them in Python.
Let’s analyse the implementation from @RiccardoBucco.
That is perfect C code. You can barely understand what is happening without comments. The algorithm is the best possible for our case (it exploits the existing ordering of the lists), and if you can find an implementation of that algorithm in the standard libraries, you should substitute sort with it.
But this is Python, not C. Solving your problem from scratch and not by using built-ins results in an uglier and slower solution.
You can get proof of that by running the following script and watching how much time each implementation needs:
import time

long_list = [x for x in range(100000)]

def insarrintomain(main, ins):
    # insert here the code you want to test
    return new_list

start = time.perf_counter()
_ = insarrintomain(long_list, long_list)
stop = time.perf_counter()
print(stop - start)
On my computer my implementation took nearly 0.003 seconds, while the C-style implementation from @RiccardoBucco needed 0.055 seconds.
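For what it's worth, the standard library does ship the merge algorithm mentioned above: heapq.merge lazily merges already-sorted iterables in linear time. A minimal sketch:
from heapq import merge

def insarrintomain(main, ins):
    # merge() assumes both inputs are already sorted and yields lazily
    return list(merge(main, ins))

print(insarrintomain([1, 2, 3, 4, 8, 9, 12], [5, 6, 7, 10]))
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12]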
A simple solution would be:
def insarrintomain(main, ins):
    return sorted(main + ins)
But this solution is clearly not optimal (the complexity is high, as we are not using the fact that the input arrays are already sorted). Specifically, the complexity here is O(k * log(k)), where k is the sum of n and m (n is the length of main and m is the length of ins).
A better solution:
def insarrintomain(main, ins):
    i = j = 0
    arr = []
    while i < len(main) and j < len(ins):
        if main[i] < ins[j]:
            arr.append(main[i])
            i += 1
        else:
            arr.append(ins[j])
            j += 1
    while i < len(main):
        arr.append(main[i])
        i += 1
    while j < len(ins):
        arr.append(ins[j])
        j += 1
    return arr
Example:
>>> insarrintomain([1, 2, 3, 4, 8, 9, 12], [5, 6, 7, 10])
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12]
This solution is much faster for big arrays (O(k), where k is the sum of n and m, n is the length of main and m is the length of ins).

Replace climbing sequence with its average

I have a random list like this
X = [0, 1, 5, 6, 7, 10, 15]
and need to find and replace every climbing sequence with its average.
In the end it should look like this:
X = [0, 6, 10, 15] #the 0 and 1 to 0; and the 5,6,7 to 6
I tried to find the sequence by subtracting the second value from the first like this:
y = 0
z = []
while X[y + 1] - X[y] == 1:
    z.append(X[y])
    y = y + 1
And now I don't know how to delete, for example, 5, 6 and 7 and replace them with the average 6.
You can use itertools.groupby on the list with a key function that returns each item's difference with an incremental counter:
from itertools import groupby, count
from statistics import mean
X = [0, 1, 5, 6, 7, 10, 15]
c = count()
X = [int(mean(g)) for _, g in groupby(X, key=lambda i: i - next(c))]
X becomes:
[0, 6, 10, 15]
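To see why the key function groups climbing runs: within a run, the item and the counter both advance by 1, so their difference stays constant. A quick illustration:
from itertools import count
c = count()
print([i - next(c) for i in [0, 1, 5, 6, 7, 10, 15]])
# [0, 0, 3, 3, 3, 5, 9] -> one distinct key per climbing sequence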
You can iterate over the list, grouping each climbing sequence, and then take the mean.
>>> x = [0, 1, 5, 6, 7, 10, 15]
>>> res = [[x[0]]]
>>> for i in range(1, len(x)):
...     if x[i] == x[i-1] + 1:
...         res[-1].append(x[i])
...     else:
...         res.append([x[i]])
>>> res
[[0, 1], [5, 6, 7], [10], [15]]
>>> [int(sum(l)/len(l)) for l in res]
[0, 6, 10, 15]
Here's a starting technique: make a new list that's the difference of adjacent elements in the list:
diff = [X[i] - X[i-1] for i in range(1, len(X)) ]
There are more "Pythonic" ways to do this, but I want to make sure this is accessible to newer programmers.
You now have diff as
[1, 4, 1, 1, 3, 5]
Where you have a 1 in diff, you have a climbing pair in X. Iterate through diff to find a sequence of 1 values. Where you find this, take the slice of X that corresponds to the 1 values. The middle element of that slice is your mean.
If the value is not 1, then you simply take the corresponding element of X, as you've been doing.
append the identified values to z, and there's your desired result.
Can you take it from there?
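For reference, here is one way the steps above might be assembled (a sketch, not the only solution):
X = [0, 1, 5, 6, 7, 10, 15]
diff = [X[i] - X[i-1] for i in range(1, len(X))]

z = []
start = 0                        # index in X where the current climbing run begins
for i, d in enumerate(diff):
    if d != 1:                   # the run ends at X[i]
        run = X[start:i+1]
        z.append(int(sum(run) / len(run)))
        start = i + 1
run = X[start:]                  # don't forget the final run
z.append(int(sum(run) / len(run)))
print(z)                         # [0, 6, 10, 15]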
Not really to answer the question, which is a fairly basic CS 101 exercise that people should try to figure out themselves, but what I noticed about the nice answer from @blhsing was that it appeared fairly slow. I found that mean() is incredibly slow!
from itertools import groupby, count
from statistics import mean
from timeit import timeit

def generate_1step_seq1(xs):
    result = []
    n = 0
    while n < len(xs):
        # sequences with step of 1 only
        if not result or xs[n] == result[-1] + 1:
            result += [xs[n]]
        else:
            # int result, rounding down
            yield sum(result) // len(result)
            result = [xs[n]]
        n += 1
    if result:
        yield sum(result) // len(result)

def generate_1step_seq2(xs):
    c = count()
    return [int(sum(xs) // len(xs)) for xs in [list(g) for _, g in groupby(xs, key=lambda i: i - next(c))]]

def generate_1step_seq3(xs):
    c = count()
    return [int(mean(g)) for _, g in groupby(xs, key=lambda i: i - next(c))]

values = [0, 1, 5, 6, 7, 10, 15]

print(list(generate_1step_seq1(values)))
print(generate_1step_seq2(values))
print(generate_1step_seq3(values))

print(timeit(lambda: list(generate_1step_seq1(values)), number=10000))
print(timeit(lambda: list(generate_1step_seq2(values)), number=10000))
print(timeit(lambda: list(generate_1step_seq3(values)), number=10000))
Initially I figured that was probably due to the tiny list size, but even for large lists, mean() is horribly slow. Does anyone happen to know why? It appears to be due to the very safe nature of statistics._sum, which tries to avoid float rounding errors?

Is a list (potentially) divisible by another?

Problem
Say you have two lists A = [a_1, a_2, ..., a_n] and B = [b_1, b_2, ..., b_n] of integers. We say A is potentially-divisible by B if there is a permutation of B that makes a_i divisible by b_i for all i. The problem is then: is it possible to reorder (i.e. permute) B so that a_i is divisible by b_i for all i?
For example, if you have
A = [6, 12, 8]
B = [3, 4, 6]
Then the answer would be True, as B can be reordered to be B = [3, 6, 4] and then we would have that a_1 / b_1 = 2, a_2 / b_2 = 2, and a_3 / b_3 = 2, all of which are integers, so A is potentially-divisible by B.
As an example which should output False, we could have:
A = [10, 12, 6, 5, 21, 25]
B = [2, 7, 5, 3, 12, 3]
The reason this is False is that we can't reorder B as 25 and 5 are in A, but the only divisor in B would be 5, so one would be left out.
Approach
Obviously the straightforward approach would be to get all the permutations of B and see if one would satisfy potential-divisibility, something along the lines of:
import itertools

def is_potentially_divisible(A, B):
    perms = itertools.permutations(B)
    divisible = lambda ls: all(x % y == 0 for x, y in zip(A, ls))
    return any(divisible(perm) for perm in perms)
Question
What is the fastest way to know if a list is potentially-divisible by another list? Any thoughts? I was thinking there might be a clever way to do this with primes, but I couldn't come up with a solution.
Much appreciated!
Edit: It's probably irrelevant to most of you, but for the sake of completeness, I'll explain my motivation. In Group Theory there is a conjecture on finite simple groups about whether there is a bijection between irreducible characters and conjugacy classes of the group such that every character degree divides the corresponding class size. For example, for U6(4), here is what A and B would look like. Pretty big lists, mind you!
Build a bipartite graph structure: connect a[i] with all of its divisors from b[].
Then find a maximum matching and check whether it is a perfect matching (the number of edges in the matching equals the number of pairs if the graph is directed, or double that number otherwise).
An arbitrarily chosen implementation of Kuhn's algorithm is here.
Upd:
@Eric Duminil made a great concise Python implementation here.
This approach has polynomial complexity, from O(n^2) to O(n^3) depending on the chosen matching algorithm and the number of edges (division pairs), versus the factorial complexity of the brute-force algorithm.
Code
Building on @MBo's excellent answer, here's an implementation of bipartite graph matching using networkx.
import networkx as nx

def is_potentially_divisible(multiples, divisors):
    if len(multiples) != len(divisors):
        return False
    g = nx.Graph()
    g.add_nodes_from([('A', a, i) for i, a in enumerate(multiples)], bipartite=0)
    g.add_nodes_from([('B', b, j) for j, b in enumerate(divisors)], bipartite=1)
    edges = [(('A', a, i), ('B', b, j)) for i, a in enumerate(multiples)
             for j, b in enumerate(divisors) if a % b == 0]
    g.add_edges_from(edges)
    m = nx.bipartite.maximum_matching(g)
    return len(m) // 2 == len(multiples)
print(is_potentially_divisible([6, 12, 8], [3, 4, 6]))
# True
print(is_potentially_divisible([6, 12, 8], [3, 4, 3]))
# True
print(is_potentially_divisible([10, 12, 6, 5, 21, 25], [2, 7, 5, 3, 12, 3]))
# False
Notes
According to the documentation:
The dictionary returned by maximum_matching() includes a mapping for vertices in both the left and right vertex sets.
It means that the returned dict should be twice as large as A (each matched pair appears once in each direction), hence the len(m) // 2 check.
The nodes are converted from
[10, 12, 6, 5, 21, 25]
to:
[('A', 10, 0), ('A', 12, 1), ('A', 6, 2), ('A', 5, 3), ('A', 21, 4), ('A', 25, 5)]
in order to avoid collisions between nodes from A and B. The id is also added in order to keep nodes distinct in case of duplicates.
Efficiency
The maximum_matching method uses the Hopcroft-Karp algorithm, which runs in O(n**2.5) in the worst case. The graph generation is O(n**2), so the whole method runs in O(n**2.5). It should work fine with large arrays. The permutation solution is O(n!) and won't be able to process arrays with even 20 elements.
With diagrams
If you're interested in a diagram showing the best matching, you can mix matplotlib and networkx:
import networkx as nx
import matplotlib.pyplot as plt

def is_potentially_divisible(multiples, divisors):
    if len(multiples) != len(divisors):
        return False
    g = nx.Graph()
    l = [('l', a, i) for i, a in enumerate(multiples)]
    r = [('r', b, j) for j, b in enumerate(divisors)]
    g.add_nodes_from(l, bipartite=0)
    g.add_nodes_from(r, bipartite=1)
    edges = [(a, b) for a in l for b in r if a[1] % b[1] == 0]
    g.add_edges_from(edges)
    pos = {}
    pos.update((node, (1, index)) for index, node in enumerate(l))
    pos.update((node, (2, index)) for index, node in enumerate(r))
    m = nx.bipartite.maximum_matching(g)
    colors = ['blue' if m.get(a) == b else 'gray' for a, b in edges]
    nx.draw_networkx(g, pos=pos, arrows=False, labels={n: n[1] for n in g.nodes()}, edge_color=colors)
    plt.axis('off')
    plt.show()
    return len(m) // 2 == len(multiples)
print(is_potentially_divisible([6, 12, 8], [3, 4, 6]))
# True
print(is_potentially_divisible([6, 12, 8], [3, 4, 3]))
# True
print(is_potentially_divisible([10, 12, 6, 5, 21, 25], [2, 7, 5, 3, 12, 3]))
# False
Here are the corresponding diagrams:
Since you're comfortable with math, I just want to add a gloss to the other answers. The terms worth searching for are named below.
The problem is an instance of permutations with restricted positions, and there's a whole lot that can be said about those. In general, a zero-one NxN matrix M can be constructed where M[i][j] is 1 if and only if position j is allowed for the element originally at position i. The number of distinct permutations meeting all the restrictions is then the permanent of M (defined the same way as the determinant, except that all terms are added without alternating signs).
Alas, unlike for the determinant, there are no known general ways to compute the permanent faster than exponential in N. However, there are polynomial-time algorithms for determining whether or not the permanent is 0.
And that's where the answers you got start ;-) Here's a good account of how the "is the permanent 0?" question is answered efficiently by considering perfect matchings in bipartite graphs:
https://cstheory.stackexchange.com/questions/32885/matrix-permanent-is-0
So, in practice, it's unlikely you'll find any general approach faster than the one @Eric Duminil gave in their answer.
Note, added later: I should make that last part clearer. Given any "restricted permutation" matrix M, it's easy to construct integer "divisibility lists" corresponding to it. Therefore your specific problem is no easier than the general problem, unless perhaps there's something special about which integers may appear in your lists.
For example, suppose M is
0 1 1 1
1 0 1 1
1 1 0 1
1 1 1 0
View the rows as representing the first 4 primes, which are also the values in B:
B = [2, 3, 5, 7]
The first row then "says" that B[0] (= 2) can't divide A[0], but must divide A[1], A[2], and A[3]. And so on. By construction,
A = [3*5*7, 2*5*7, 2*3*7, 2*3*5]
B = [2, 3, 5, 7]
corresponds to M. And there are permanent(M) = 9 ways to permute B such that each element of A is divisible by the corresponding element of the permuted B.
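As a sanity check, that count can be verified by brute force; a minimal sketch:
from itertools import permutations

A = [3*5*7, 2*5*7, 2*3*7, 2*3*5]
B = [2, 3, 5, 7]

# count the permutations of B under which every element of A is divisible
# by its partner; this count equals permanent(M) for the matrix above
count = sum(all(a % b == 0 for a, b in zip(A, p)) for p in permutations(B))
print(count)  # 9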
This is not the ultimate answer but I think this might be worthwhile. You can first list the factors (1 and the number itself included) of all the elements in the list: [(1,2,5,10), (1,2,3,6,12), (1,2,3,6), (1,5), (1,3,7,21), (1,5,25)]. The list we are looking for must supply one of these factors in each position (to divide evenly).
Since some of those factors don't appear in the list we are checking against ([2,7,5,3,12,3]), the candidates can be filtered further to:
[(2,5),(2,3,12),(2,3),(5),(3,7),(5)]
Here, 5 is needed in two places (where we have no other options at all), but we only have one 5, so we can pretty much stop here and say that the case is False.
Let's say we had [2,7,5,3,5,3] instead.
Then we would have options as such:
[(2,5),(2,3),(2,3),(5),(3,7),(5)]
Since 5 is needed in two places:
[(2),(2,3),(2,3),{5},(3,7),{5}], where {} signifies an ensured position.
Also, 2 is now ensured:
[{2},(2,3),(2,3),{5},(3,7),{5}]
Since 2 is taken, the two places of 3 are ensured:
[{2},{3},{3},{5},(3,7),{5}]
Now of course the 3s are taken and 7 is ensured:
[{2},{3},{3},{5},{7},{5}]
This is still consistent with our list, so the case is True. Remember to check consistency with our list at every iteration, where we can readily break out.
You can try this:
import itertools

def potentially_divisible(A, B):
    A = itertools.permutations(A, len(A))
    return len([i for i in A if all(c % d == 0 for c, d in zip(i, B))]) > 0
l1 = [6, 12, 8]
l2 = [3, 4, 6]
print(potentially_divisible(l1, l2))
Output:
True
Another example:
l1 = [10, 12, 6, 5, 21, 25]
l2 = [2, 7, 5, 3, 12, 3]
print(potentially_divisible(l1, l2))
Output:
False

Flattening nested generator expressions

I'm trying to flatten a nested generator of generators but I'm getting an unexpected result:
>>> import itertools
>>> g = ((3*i + j for j in range(3)) for i in range(3))
>>> list(itertools.chain(*g))
[6, 7, 8, 6, 7, 8, 6, 7, 8]
I expected the result to look like this:
[0, 1, 2, 3, 4, 5, 6, 7, 8]
I think I'm getting the unexpected result because the inner generators are not being evaluated until the outer generator has already been iterated over, setting i to 2. I can hack together a solution by forcing evaluation of the inner generators by using a list comprehension instead of a generator expression:
>>> g = ([3*i + j for j in range(3)] for i in range(3))
>>> list(itertools.chain(*g))
[0, 1, 2, 3, 4, 5, 6, 7, 8]
Ideally, I would like a solution that's completely lazy and doesn't force evaluation of the inner nested elements until they're used.
Is there a way to flatten nested generator expressions of arbitrary depth (maybe using something other than itertools.chain)?
Edit:
No, my question is not a duplicate of Variable Scope In Generators In Classes. I honestly can't tell how these two questions are related at all. Maybe the moderator could explain why he thinks this is a duplicate.
Also, both answers to my question are correct in that they can be used to write a function that flattens nested generators correctly.
def flattened1(iterable):
    iter1, iter2 = itertools.tee(iterable)
    if isinstance(next(iter1), collections.Iterable):
        return flattened1(x for y in iter2 for x in y)
    else:
        return iter2

def flattened2(iterable):
    iter1, iter2 = itertools.tee(iterable)
    if isinstance(next(iter1), collections.Iterable):
        return flattened2(itertools.chain.from_iterable(iter2))
    else:
        return iter2
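A quick check that both helpers behave as intended (note: collections.Iterable moved to collections.abc in Python 3.3):
>>> g = ((3*i + j for j in range(3)) for i in range(3))
>>> list(flattened1(g))
[0, 1, 2, 3, 4, 5, 6, 7, 8]
>>> g = ((3*i + j for j in range(3)) for i in range(3))
>>> list(flattened2(g))
[0, 1, 2, 3, 4, 5, 6, 7, 8]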
As far as I can tell with timeit, they both perform identically.
>>> timeit(test1, setup1, number=1000000)
18.173431718023494
>>> timeit(test2, setup2, number=1000000)
17.854709611972794
I'm not sure which one is better from a style standpoint either, since x for y in iter2 for x in y is a bit of a brain twister, but arguably more elegant than itertools.chain.from_iterable(iter2). Input is appreciated.
Regrettably, I was only able to mark one of the two equally good answers correct.
Instead of using chain(*g), you can use chain.from_iterable, which consumes the outer generator lazily rather than unpacking (and thereby exhausting) it up front:
>>> g = ((3*i + j for j in range(3)) for i in range(3))
>>> list(itertools.chain(*g))
[6, 7, 8, 6, 7, 8, 6, 7, 8]
>>> g = ((3*i + j for j in range(3)) for i in range(3))
>>> list(itertools.chain.from_iterable(g))
[0, 1, 2, 3, 4, 5, 6, 7, 8]
How about this:
[x for y in g for x in y]
Which yields:
[0, 1, 2, 3, 4, 5, 6, 7, 8]
Guess you already have your answer, but here's another perspective.
The problem is that when each inner generator is created, the value-generating expression is closed over the outer variable i so even when the first inner generator starts generating values, it's using the "current" value of i. This will have value i=2 if the outer generator has been fully consumed (and that's exactly the case right after the argument in the chain(*g) call is evaluated, before chain is actually called).
The following devious trick will work around the problem:
g = ((3*i1 + j for i1 in [i] for j in range(3)) for i in range(3))
Note that these inner generators aren't closed over i, because the outermost for clause's iterable is evaluated at generator creation time, so the singleton list [i] is evaluated immediately and its value "frozen" in the face of further changes to the value of i.
This approach has the advantage over the from_iterable answer that it's a little more general if you want to use it outside a chain.from_iterable call -- it will always produce the "correct" inner generators, whether the outer generator is partially or fully consumed before the inner generators are used. For example, in the following code:
g = ((3*i1 + j for i1 in [i] for j in range(3)) for i in range(3))
g1 = next(g)
g2 = next(g)
g3 = next(g)
you can insert the lines:
list(g1)
list(g2)
list(g3)
in any order at any point after the respective inner generator has been defined, and you'll get the correct results.

Remove all values within one list from another list? [duplicate]

I am looking for a way to remove all values within a list from another list.
Something like this:
a = range(1,10)
a.remove([2,3,7])
print a
a = [1,4,5,6,8,9]
>>> a = range(1, 10)
>>> [x for x in a if x not in [2, 3, 7]]
[1, 4, 5, 6, 8, 9]
I was looking for a fast way to do this, so I ran some experiments with the suggested approaches, and I was surprised by the results, so I want to share them with you.
The experiments were done using the pythonbenchmark tool with
a = range(1,50000) # Source list
b = range(1,15000) # Items to remove
Results:
def comprehension(a, b):
    return [x for x in a if x not in b]

5 tries, average time 12.8 sec

def filter_function(a, b):
    return filter(lambda x: x not in b, a)

5 tries, average time 12.6 sec

def modification(a, b):
    for x in b:
        try:
            a.remove(x)
        except ValueError:
            pass
    return a

5 tries, average time 0.27 sec

def set_approach(a, b):
    return list(set(a) - set(b))

5 tries, average time 0.0057 sec
Also I made another measurement with bigger inputs size for the last two functions
a = range(1,500000)
b = range(1,100000)
And the results:
For modification (remove method) - average time is 252 seconds
For set approach - average time is 0.75 seconds
So you can see that the approach with sets is significantly faster than the others. Yes, it doesn't keep duplicates or the original order, but if you don't need those, it's for you.
And there is almost no difference between the list comprehension and the filter function. Using 'remove' is ~50 times faster, but it modifies the source list.
And the best choice is using sets: it's more than 1000 times faster than the list comprehension!
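The gap comes down to membership testing: x not in b scans the whole list b for every element of a, while a set membership test is an average O(1) hash lookup. If you need to keep order and duplicates, you can still get set-speed lookups (a sketch, with hypothetical names):
b_set = set(b)                             # build once, O(len(b))
result = [x for x in a if x not in b_set]  # one O(1) lookup per element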
If you don't have repeated values, you could use set difference.
x = set(range(10))
y = x - set([2, 3, 7])
# y = set([0, 1, 4, 5, 6, 8, 9])
and then convert back to list, if needed.
a = range(1,10)
itemsToRemove = set([2, 3, 7])
b = filter(lambda x: x not in itemsToRemove, a)
or
b = [x for x in a if x not in itemsToRemove]
Don't create the set inside the lambda or inside the comprehension. If you do, it'll be recreated on every iteration, defeating the point of using a set at all.
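For instance, this variant defeats the purpose, because the condition expression is re-evaluated for every element of a:
# slow: set([2, 3, 7]) is rebuilt once per element of a
b = [x for x in a if x not in set([2, 3, 7])]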
The simplest way is
>>> a = range(1, 10)
>>> for x in [2, 3, 7]:
... a.remove(x)
...
>>> a
[1, 4, 5, 6, 8, 9]
One possible problem here is that each time you call remove(), all the items are shuffled down the list to fill the hole. So if a grows very large this will end up being quite slow.
This way builds a brand new list. The advantage is that we avoid all the shuffling of the first approach
>>> removeset = set([2, 3, 7])
>>> a = [x for x in a if x not in removeset]
If you want to modify a in place, just one small change is required
>>> removeset = set([2, 3, 7])
>>> a[:] = [x for x in a if x not in removeset]
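The distinction matters when other names reference the same list: slice assignment mutates the existing list object, so every reference sees the filtered contents, whereas a = ... would rebind only the name a. For example:
>>> b = a
>>> a[:] = [x for x in a if x not in removeset]
>>> b is a    # still the same object, so b sees the change
True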
Others have suggested ways to make a new list after filtering, e.g.
newl = [x for x in l if x not in [2,3,7]]
or
newl = filter(lambda x: x not in [2,3,7], l)
but from your question it looks like you want in-place modification. For that you can do this, which will also be much, much faster if the original list is long and the items to be removed are few:
l = range(1, 10)
for o in set([2, 3, 7, 11]):
    try:
        l.remove(o)
    except ValueError:
        pass
print l
output:
[1, 4, 5, 6, 8, 9]
I am checking for the ValueError exception so it works even if the items are not in the original list.
Also, if you do not need in-place modification, the solution by S.Mark is simpler.
>>> a=range(1,10)
>>> for i in [2,3,7]: a.remove(i)
...
>>> a
[1, 4, 5, 6, 8, 9]
>>> a=range(1,10)
>>> b=map(a.remove,[2,3,7])
>>> a
[1, 4, 5, 6, 8, 9]
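A caveat on that last trick: it relies on Python 2's eager map(). On Python 3, map() is lazy, so the removals never run unless the map object is consumed:
# Python 3: force the lazy map to evaluate
a = list(range(1, 10))
list(map(a.remove, [2, 3, 7]))
print(a)  # [1, 4, 5, 6, 8, 9]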
