Python has a nice feature:
print([j**2 for j in [2, 3, 4, 5]]) # => [4, 9, 16, 25]
In Ruby it's even simpler:
puts [2, 3, 4, 5].map{|j| j**2}
but when it comes to nested loops, Python looks more convenient.
In Python we can do this:
digits = [1, 2, 3]
chars = ['a', 'b', 'c']
print([str(d)+ch for d in digits for ch in chars if d >= 2 if ch == 'a'])
# => ['2a', '3a']
The equivalent in Ruby is:
digits = [1, 2, 3]
chars = ['a', 'b', 'c']
list = []
digits.each do |d|
  chars.each do |ch|
    list.push d.to_s << ch if d >= 2 && ch == 'a'
  end
end
puts list
Does Ruby have something similar?
The common way in Ruby is to combine Enumerable and Array methods to achieve the same result:
digits.product(chars).select{ |d, ch| d >= 2 && ch == 'a' }.map(&:join)
This is only four or so characters longer than the list comprehension and just as expressive (IMHO, of course, but since list comprehensions are just a special application of the list monad, one could argue that it is possible to adequately rebuild them using Ruby's collection methods), while not needing any special syntax.
As you know, Ruby has no syntactic sugar for list comprehensions, so the closest you can get is using blocks in imaginative ways. People have proposed different ideas; take a look at the lazylist and verstehen approaches, both of which support nested comprehensions with conditions:
require 'lazylist'
list { [x, y] }.where(:x => [1, 2], :y => [3, 4]) { x+y>4 }.to_a
#=> [[1, 4], [2, 3], [2, 4]]
require 'verstehen'
list { [x, y] }.for(:x).in { [1, 2] }.for(:y).in { [3, 4] }.if { x+y>4 }.comprehend
#=> [[1, 4], [2, 3], [2, 4]]
Of course that's not what you'd call idiomatic Ruby, so it's usually safer to use the typical product + select + map approach.
As suggested by RBK above, the "List comprehension in Ruby" question provides a whole slew of different ways to do things that are sort of like list comprehensions in Ruby.
None of them explicitly describe nested loops, but at least some of them can be nested quite easily.
For example, the accepted answer by Robert Gamble suggests adding an Array#comprehend method.
class Array
  def comprehend(&block)
    return self if block.nil?
    self.collect(&block).compact
  end
end
Having done that, you can write your code as:
digits.comprehend{|d| chars.comprehend{|ch| d.to_s + ch if ch == 'a'} if d >= 2}
Compare to the Python code:
[str(d)+ch for d in digits for ch in chars if d >= 2 if ch == 'a']
The differences are pretty minor:
The Ruby code is a bit longer. But that's mostly just the fact that "comprehend" is spelled out; you can always call it something shorter if you want.
The Ruby code puts things in a different order—the arrays come at the beginning instead of in the middle. But if you think about it, that's exactly what you'd expect, and want, because of the "everything is a method" philosophy.
The Ruby code requires nested braces for nested comprehensions. I can't think of an obvious way around this that doesn't make things worse (you don't want to call "[str,digits].comprehend2" or anything…).
Of course the real strength of Python here is that if you decide you want to evaluate the list lazily, you can convert your comprehension into a generator expression just by removing the brackets (or turning them into parentheses, depending on the context). But even there, you could create an Array#lazycomprehend or something.
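For comparison, the lazy Python version of the earlier comprehension really is just the same expression in parentheses; a small sketch:
digits = [1, 2, 3]
chars = ['a', 'b', 'c']

# Generator expression: identical syntax, but evaluated lazily on demand.
lazy = (str(d) + ch for d in digits for ch in chars if d >= 2 if ch == 'a')
print(next(lazy))   # prints 2a (computed only when requested)
print(list(lazy))   # prints ['3a'] (the remaining items)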
I'm taking a data structures course in Python, and a suggestion for a solution includes this code which I don't understand.
This is a sample of a dictionary:
vc_metro = {
'Richmond-Brighouse': set(['Lansdowne']),
'Lansdowne': set(['Richmond-Brighouse', 'Aberdeen'])
}
It is suggested that to remove some of the elements in the value, we use this code:
vc_metro['Lansdowne'] -= set(['Richmond-Brighouse'])
I have never seen such a structure, and using it in a basic situation such as:
my_list = [1, 2, 3, 4, 5, 6]
other_list = [1, 2]
my_list -= other_list
doesn't work. Where can I learn more about this recommended strategy?
You can't subtract lists, but you can subtract set objects meaningfully. Sets are hash-based collections, somewhat similar to dict.keys(), which allow only one instance of each object.
The -= operator on sets is equivalent to the difference_update method, i.e. an in-place difference: it removes from the left operand every element that is also present in the right operand.
Your simple example with sets would look like this:
>>> my_set = {1, 2, 3, 4, 5, 6}
>>> other_set = {1, 2}
>>> my_set -= other_set
>>> my_set
{3, 4, 5, 6}
Curly braces with commas but no colons are interpreted as a set object. So the direct constructor call
set(['Richmond-Brighouse'])
is equivalent to
{'Richmond-Brighouse'}
Notice that you can't do set('Richmond-Brighouse'): that would add all the individual characters of the string to the set, since strings are iterable.
The reason to use -=/difference instead of remove is that differencing only removes existing elements, and silently ignores others. The discard method does this for a single element. Differencing allows removing multiple elements at once.
The original line vc_metro['Lansdowne'] -= set(['Richmond-Brighouse']) could be rewritten as
vc_metro['Lansdowne'].discard('Richmond-Brighouse')
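To see why the silent behaviour matters, here is a quick sketch (the station names follow the example above):
stations = {'Lansdowne', 'Aberdeen'}
stations.discard('Richmond-Brighouse')   # element absent: silently ignored
stations -= {'Richmond-Brighouse'}       # also fine, removes nothing
stations.remove('Richmond-Brighouse')    # raises KeyError: 'Richmond-Brighouse'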
In my program I have many places where I need to both iterate over something and modify it inside that same for loop.
However, I know that modifying the thing you're iterating over is bad because it may - probably will - lead to an undesired result.
So I've been doing something like this:
for el_idx, el in enumerate(theList):
    if theList[el_idx].IsSomething() is True:
        theList[el_idx].SetIt(False)
Is this the best way to do this?
This is a conceptual misunderstanding.
It is dangerous to modify the list itself from within the iteration, because of the way Python translates the loop to lower-level code. This can cause unexpected side effects during the iteration; there's a good example here:
https://unspecified.wordpress.com/2009/02/12/thou-shalt-not-modify-a-list-during-iteration/
But modifying mutable objects stored in the list is acceptable, and common practice.
I suspect you're thinking that because the list is made up of those objects, modifying those objects modifies the list. This is understandable - it's just not how it's normally thought of. If it helps, consider that the list only really contains references to those objects. When you modify the objects within the loop, you are merely using the list to modify the objects, not modifying the list itself.
What you should not do is add or remove items from the list during the iteration.
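A small sketch of that distinction, using a hypothetical Flag class so the effect is visible:
class Flag:
    def __init__(self):
        self.on = True

flags = [Flag(), Flag()]
for f in flags:
    f.on = False              # mutating the objects the list refers to: fine
print([f.on for f in flags])  # => [False, False]

# By contrast, calling flags.append(...) or flags.remove(f) inside the loop
# would change the list itself, which is exactly what to avoid.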
Your problem seems a bit unclear to me, but if we are talking about the harm of modifying a list during a for loop iteration in Python, I can think of two scenarios.
First, you modify elements of the list that later rounds of the computation are supposed to read at their original values.
e.g. you want to write a program with the following input and expected output:
Input:
[1, 2, 3, 4]
Expected output:
[1, 3, 6, 10] #[1, 1 + 2, 1 + 2 + 3, 1 + 2 + 3 + 4]
But you write the code this way:
#!/usr/bin/env python
mylist = [1, 2, 3, 4]
for idx, n in enumerate(mylist):
    mylist[idx] = sum(mylist[:idx + 1])
print(mylist)
Result is:
[1, 3, 7, 15] # undesired result
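As an aside, one way to get the expected [1, 3, 6, 10] is to read from data that is not being modified while you write, for example with itertools.accumulate (Python 3):
from itertools import accumulate

mylist = [1, 2, 3, 4]
print(list(accumulate(mylist)))  # => [1, 3, 6, 10]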
Second, you change the size of the list during a for loop iteration.
e.g. from python-delete-all-entries-of-a-value-in-list:
>>> s=[1,4,1,4,1,4,1,1,0,1]
>>> for i in s:
... if i ==1: s.remove(i)
...
>>> s
[4, 4, 4, 0, 1]
The example shows the undesired result that arises from the side effect of changing the list's size. It demonstrates that a for loop in Python cannot properly handle a list whose size changes during the iteration. Below is a simple way to work around the problem:
#!/usr/bin/env python
s = [1, 4, 1, 4, 1, 4, 1, 1, 0, 1]
list_size = len(s)
i = 0
while i != list_size:
    if s[i] == 1:
        del s[i]
        list_size = len(s)
    else:
        i = i + 1
print(s)
Result:
[4, 4, 4, 0]
Conclusion: it is not harmful to modify elements of a list during a loop iteration, as long as you don't 1) change the size of the list or 2) overwrite values that later iterations still need in their original form.
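A common alternative is to avoid in-place deletion entirely and build a new list with a comprehension:
s = [1, 4, 1, 4, 1, 4, 1, 1, 0, 1]
s = [x for x in s if x != 1]   # rebind s to a new list without the 1s
print(s)                       # => [4, 4, 4, 0]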
You could get the indices first:
idx = [ el_idx for el_idx, el in enumerate(theList) if el.IsSomething() ]
[ theList[i].SetIt(False) for i in idx ]
Is there any pre-made optimized tool/library in Python to cut/slice lists for values "less than" something?
Here's the issue: Let's say I have a list like:
a=[1,3,5,7,9]
and I want to delete all the numbers which are <= 6, so the resulting list would be
[7,9]
6 is not in the list, so I can't use the built-in index(6) method of the list. I can do things like:
#!/usr/bin/env python
a = [1, 3, 5, 7, 9]
cut = 6
for i in range(len(a) - 1, -2, -1):
    if a[i] <= cut:
        break
b = a[i + 1:]
print("Cut list: %s" % b)
which would be a fairly quick method if the index to cut from is close to the end of the list, but which will be inefficient if it is close to the beginning (say I want to delete all the items which are > 2; then there will be a lot of iterations).
I can also implement my own find method using binary search or such, but I was wondering if there's a more... wide-scope built in library to handle this type of things that I could reuse in other cases (for instance, if I need to delete all the number which are >=6).
Thank you in advance.
You can use the bisect module to perform a sorted search:
>>> import bisect
>>> a[bisect.bisect_left(a, 6):]
[7, 9]
bisect.bisect_left is what you are looking for, I guess.
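For the >= 6 variant mentioned at the end of the question, the difference between bisect_left and bisect_right only shows up when the cut value is actually present in the (sorted) list; a quick sketch with a hypothetical list that contains 6:
import bisect

a = [1, 3, 5, 6, 7, 9]                 # sorted, and 6 is present this time
print(a[bisect.bisect_right(a, 6):])   # keep elements >  6  -> [7, 9]
print(a[bisect.bisect_left(a, 6):])    # keep elements >= 6  -> [6, 7, 9]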
If you just want to filter the list for all elements that fulfil a certain criterion, then the most straightforward way is to use the built-in filter function.
Here is an example:
a_list = [10,2,3,8,1,9]
# filter all elements smaller than 6:
filtered_list = filter(lambda x: x<6, a_list)
the filtered_list will contain:
[2, 3, 1]
Note: This method does not rely on the ordering of the list, so for very large lists it might be that a method optimised for ordered searching (as bisect) performs better in terms of speed.
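Note also that in Python 3, filter returns a lazy iterator rather than a list, so you would wrap the call in list() or use the equivalent comprehension:
a_list = [10, 2, 3, 8, 1, 9]
filtered_list = list(filter(lambda x: x < 6, a_list))  # => [2, 3, 1]
# equivalently, with a list comprehension:
filtered_list = [x for x in a_list if x < 6]           # => [2, 3, 1]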
Bisect left and right helper function
#!/usr/bin/env python3
import bisect
def get_slice(list_, left, right):
    return list_[
        bisect.bisect_left(list_, left):
        bisect.bisect_left(list_, right)
    ]
assert get_slice([0, 1, 1, 3, 4, 4, 5, 6], 1, 5) == [1, 1, 3, 4, 4]
Tested in Ubuntu 16.04, Python 3.5.2.
Adding to Jon's answer: if you need to actually delete the elements less than or equal to 6, and want to keep the same list object rather than getting a new one, you can delete the slice in place:
del a[:bisect.bisect_right(a,6)]
You should note as well that bisect will only work on a sorted list.
Suppose I have a list containing (among other things) sublists of different types:
[1, 2, [3, 4], {5, 6}]
that I'd like to flatten in a selective way, depending on the type of its elements (i.e. I'd like to only flatten sets, and leave the rest unflattened):
[1, 2, [3, 4], 5, 6]
My current solution is a function, but just for my intellectual curiosity, I wonder if it's possible to do it with a single list comprehension?
List comprehensions aren't designed for flattening (since they don't have a way to combine the values corresponding to multiple input items).
While you can get around this with nested list comprehensions, this requires each element in your top level list to be iterable.
Honestly, just use a function for this. It's the cleanest way.
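For what it's worth, such a function could look like this (a sketch; the name selective_flatten is made up):
def selective_flatten(items):
    # Flatten only the elements that are sets; pass everything else through.
    result = []
    for x in items:
        if isinstance(x, (set, frozenset)):
            result.extend(x)   # note: set iteration order is arbitrary
        else:
            result.append(x)
    return result

print(selective_flatten([1, 2, [3, 4], {5, 6}]))  # => [1, 2, [3, 4], 5, 6]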
Amber is probably right that a function is preferable for something like this. On the other hand, there's always room for a little variation. I'm assuming the nesting is never more than one level deep; if it ever is, you should definitely prefer a function. But if not, this is a potentially viable approach.
>>> from itertools import chain
>>> from collections.abc import Set  # "from collections import Set" on Python 2
>>> list(chain.from_iterable(x if isinstance(x, Set) else (x,) for x in l))
[1, 2, [3, 4], 5, 6]
The non-itertools way to do this would involve nested list comprehensions. Better to break that into two lines:
>>> packaged = (x if isinstance(x, Set) else (x,) for x in l)
>>> [x for y in packaged for x in y]
[1, 2, [3, 4], 5, 6]
I don't have a strong intuition about whether either of these would be faster or slower than a straightforward function. These create lots of singleton tuples -- that's kind of a waste -- but they also happen at LC speed, which is usually pretty good.
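And for the single-comprehension curiosity in the question, the two lines can be fused, at some cost to readability (note that set iteration order is not guaranteed):
>>> [z for x in l for z in (x if isinstance(x, Set) else (x,))]
[1, 2, [3, 4], 5, 6]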
You can use flatten function from funcy library:
from funcy import flatten, isa
flat_list = flatten(your_list, follow=isa(set))
You can also peek at its implementation.
Python does not support adding a tuple to a list:
>>> [1,2,3] + (4,5,6)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can only concatenate list (not "tuple") to list
What are the disadvantages of providing such support in the language? Note that I would expect this to be symmetric: [1, 2] + (3, 4) and (1, 2) + [3, 4] would both evaluate to a brand-new list [1, 2, 3, 4]. My rationale is that once someone has applied operator + to a mix of tuples and lists, they are likely to do it again (very possibly in the same expression), so we might as well provide the list to avoid extra conversions.
Here's my motivation for this question.
It happens quite often that I have small collections that I prefer to store as tuples to avoid accidental modification and to help performance. I then need to combine such tuples with lists, and having to convert each of them to a list makes for very ugly code.
Note that += or extend may work in simple cases. But in general, when I have an expression
columns = default_columns + columns_from_user + calculated_columns
I don't know which of these are tuples and which are lists. So I either have to convert everything to lists:
columns = list(default_columns) + list(columns_from_user) + list(calculated_columns)
Or use itertools:
columns = list(itertools.chain(default_columns, columns_from_user, calculated_columns))
Both of these solutions are uglier than a simple sum; and the chain may also be slower (since it must iterate through the inputs an element at a time).
This is not supported because the + operator is supposed to be symmetric with respect to its operand types: what return type would you expect? The Zen of Python includes the rule
In the face of ambiguity, refuse the temptation to guess.
The following works, though:
a = [1, 2, 3]
a += (4, 5, 6)
There is no ambiguity what type to use here.
Why doesn't Python support adding different types? The simple answer is that they are different types: what if you add some arbitrary iterable and expect a list out? I myself would like another iterable back. Also consider ['a','b'] + 'cd': what should the output be? Since explicit is better than implicit, all such implicit conversions are disallowed.
To overcome this limitation, use the extend method of list, which accepts any iterable, e.g.
l = [1,2,3]
l.extend((4,5,6))
If you have to add many lists/tuples, write a function:
def adder(*iterables):
    l = []
    for i in iterables:
        l.extend(i)
    return l

print(adder([1, 2, 3], (3, 4, 5), range(6, 10)))
output:
[1, 2, 3, 3, 4, 5, 6, 7, 8, 9]
You can use the += operator, if that helps:
>>> x = [1,2,3]
>>> x += (1,2,3)
>>> x
[1, 2, 3, 1, 2, 3]
You can also use the list constructor explicitly, but like you mentioned, readability might suffer:
>>> list((1,2,3)) + list((1,2,3))
[1, 2, 3, 1, 2, 3]
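Another option, assuming Python 3.5 or later, is iterable unpacking inside a list display, which accepts any mix of lists and tuples; a sketch using the question's column names (the values are made up):
default_columns = ('id', 'name')        # a tuple
columns_from_user = ['email']           # a list
calculated_columns = ('age',)           # a tuple
columns = [*default_columns, *columns_from_user, *calculated_columns]
print(columns)  # => ['id', 'name', 'email', 'age']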