Pythonic pattern for building up parallel lists - python

I am new-ish to Python and I am finding that I am writing the same pattern of code over and over again:
def foo(list):
    results = []
    for n in list:
        # do some or a lot of processing on n and possibly other variables
        nprime = operation(n)
        results.append(nprime)
    return results
I am thinking in particular about the creation of the empty list followed by the append call. Is there a more Pythonic way to express this pattern? append might not have the best performance characteristics, but I am not sure how else I would approach it in Python.
I often know exactly the length of my output, so calling append each time seems like it might be causing memory fragmentation, or performance problems, but I am also wondering if that is just my old C ways tripping me up. I am writing a lot of text parsing code that isn't super performance sensitive on any particular loop or piece because all of the performance is really contained in gensim or NLTK code and is in much more capable hands than mine.
Is there a better/more Pythonic pattern for doing this type of operation?

First, a list comprehension may be all you need (if all the processing mentioned in your code comment occurs in operation).
def foo(list):
    return [operation(n) for n in list]
If a list comprehension will not work in your situation, consider whether foo really needs to build the list and could be a generator instead.
def foo(list):
    for n in list:
        # Processing...
        yield operation(n)
In this case, you can iterate over the sequence, and each value is calculated on demand:
for x in foo(myList):
    ...
or you can let the caller decide if a full list is needed:
results = list(foo(myList))
If neither of the above is suitable, then building up the return list in the body of the loop as you are now is perfectly reasonable.

[..] so calling append each time seems like it might be causing memory fragmentation, or performance problems, but I am also wondering if that is just my old C ways tripping me up.
If you are worried about this, don't be. Python over-allocates whenever a list has to grow (lists are resized dynamically based on their current size) precisely so that appends run in amortized O(1) time. Whether you call list.append yourself or build the list with a comprehension (which internally also appends), the effect, memory-wise, is similar.
The list comprehension just performs a bit better speed-wise; it is optimized for creating lists with specialized bytecode instructions (mainly LIST_APPEND, which calls the list's append directly in C).
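If you're curious, you can see that instruction for yourself with the dis module. A minimal sketch (the exact bytecode varies between CPython versions, but LIST_APPEND shows up in the comprehension's loop body):

import dis

# Disassemble a list comprehension compiled from a source string; the
# loop body uses the specialized LIST_APPEND instruction instead of
# looking up and calling .append() on every iteration.
dis.dis("[operation(n) for n in data]")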
Of course, if memory usage is a concern, you can always opt for the generator approach highlighted in chepner's answer to produce your results lazily.
In the end, for loops are still great. They might seem clunky compared to comprehensions and maps, but they still offer a recognizable and readable way to achieve a goal. for loops deserve our love too.

Related

Python Equivalent of Scheme/Lisp CAR and CDR Functions

I am currently trying to implement fold/reduce in Python, since I don't like the version from functools. This naturally involved implementing something like the Lisp CDR function, since Python doesn't seem to have anything like it. Here is what I am thinking of trying:
def tail(lat):
    # all elements of the list except the first
    acc = []
    for i in range(1, len(lat)):
        acc = acc + [lat[i]]
    return acc
Would this be an efficient way of implementing this function? Am I missing some kind of built-in function? Thanks in advance!
"Something like the Lisp CDR function" is trivial:
lat[1:]
This will be significantly faster than your attempt, but only by a constant factor.
However, it doesn't make much sense to do this in the first place. The whole point of CDR is that, when your lists are linked lists stored in CONS cells, going from one cell to its tail is a single machine-language operation. But with arrays (which is what Python lists are), lat[1:] (or the more complicated thing you tried to write, or in fact any possible implementation) allocates a whole new array of size N-1 and copies over N-1 values.
The efficiency cost of doing that over and over again (in an algorithm that expects it to be nearly free) is going to be so huge that the constant-factor speedup of using lat[1:] is unlikely to be nearly enough of an improvement to make it acceptable.
Most algorithms that are fast with CDR are going to be slow with this kind of slicing, and most algorithms that are fast with this kind of slicing would be slow with CDR. That's why we have multiple data structures in the first place: because they're good for different things.
If you want to know the most efficient way to fold/reduce over an array, it's the way functools.reduce (and the variations of it that libraries like toolz offer) does it: just iterate.
And just iterating has another huge advantage. Python doesn't just have lists; it has an abstraction called iterables, which includes iterators and other types that can generate their contents lazily. If you're folding forward, you can take advantage of that laziness. (Folding backward does, of course, take linear space, either explicitly or on the stack, but that's still better than quadratic copying.) Ignoring that fact defeats the purpose.
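To make that concrete, here is a minimal sketch of a forward fold that just iterates; foldl is my own name for it, not a standard library function, and it works on any iterable, lazy or not, without any slicing or copying:

def foldl(func, acc, iterable):
    # Left fold: combine the accumulator with each item, front to back.
    # O(n) time, O(1) extra space, and it accepts lazy iterables too.
    for item in iterable:
        acc = func(acc, item)
    return acc

# Example: sum of squares without materializing an intermediate list.
print(foldl(lambda acc, x: acc + x * x, 0, range(5)))  # 30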

Suggestions for practicing generators (Python)

I understand the concept behind generators and why one would choose them over lists, but I'm struggling to get quality practice actually implementing them in my code. Any suggestions on the types of problems I should play around with? I already did the Fibonacci exercise, but I would like to practice on other problems that would put generators to good use. Thanks!
How about this one: implement a generator that reads chunks from a large file or a big database (so big that it wouldn't fit into memory). Alternatively, consider a stream of infinitely many values as input.
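For example, a rough sketch of the file-chunking exercise might look like this (the path, chunk size, and the commented-out process function are placeholders of my own, not anything standard):

def read_in_chunks(path, chunk_size=64 * 1024):
    # Yield the file one block at a time, so only a single chunk is
    # ever held in memory, no matter how large the file is.
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:  # empty bytes object means end of file
                break
            yield chunk

# The caller just iterates, pulling chunks on demand:
# for chunk in read_in_chunks("huge_dump.bin"):
#     process(chunk)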
As you might already have learned, this is a common use case in real world applications:
https://docs.python.org/3/howto/functional.html
With a list comprehension, you get back a Python list; [...] Generator expressions return an iterator that computes the values as necessary, not needing to materialize all the values at once. This means that list comprehensions aren’t useful if you’re working with iterators that return an infinite stream or a very large amount of data. Generator expressions are preferable in these situations.
http://naiquevin.github.io/python-generators-and-being-lazy.html
Now you may ask how does this differ from an ordinary list and what is the use of all this anyway? The key difference is that the generator gives out new values on the fly and doesn't keep the elements in memory.
https://wiki.python.org/moin/Generators
The performance improvement from the use of generators is the result of the lazy (on demand) generation of values, which translates to lower memory usage. Furthermore, we do not need to wait until all the elements have been generated before we start to use them. This is similar to the benefits provided by iterators, but the generator makes building iterators easy.

Efficiency of for loops in Python 3

I am currently learning Python (3), having mostly used R as my main programming language. While for loops in R have mostly the same functionality as in Python, I was taught to avoid using them for big operations and to use apply instead, which is more efficient.
My question is: how efficient are for-loops in Python, are there alternatives and is it worth exploring those possibilities as a Python newbie?
For example:
p = some_candidate_parameter_generator(data)
for i in p:
    fit_model_with_parameter(data, i)
Bear with me, it is tricky to give an example without going too much into specific code, but this is something that in R I would have written with apply, especially if p is large.
The comments correctly point out that for loops are "only as efficient as your logic"; however, range and xrange in Python do have performance implications, and this may be what you had in mind when asking this question. These functions have nothing to do with the intrinsic performance of for loops, though.
In Python 3, range is effectively the old xrange (and xrange itself is gone); in Python versions before 3.0 there was a distinction: range loaded your entire sequence into memory as a list and then iterated over each item, while xrange was more akin to a generator, where each item was produced only when needed and discarded once it had been iterated over.
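You can see the difference directly; a quick sketch in Python 3 (the exact byte counts vary by platform):

import sys

# A Python 3 range object stores only start/stop/step, so its size is
# constant regardless of how many values it represents.
print(sys.getsizeof(range(10)))             # small and constant
print(sys.getsizeof(range(10_000_000)))     # same small size

# Materializing it as a list allocates storage for every element.
print(sys.getsizeof(list(range(10_000_000))))  # tens of megabytes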
After your updated question:
In other words, if you have a giant list of items that you need to iterate over via a for loop, it is often more memory efficient to use a generator, not a list or a tuple, etc. Again though, this has nothing to do with how the Python for-loop operates, but more to do with what you're iterating over. If in doubt, use a generator, and your memory-efficiency will be as good as it will get with Python.
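Tying that back to your example: as long as the candidate parameters come from a generator, the loop only ever holds one candidate in memory. A hedged sketch, where candidate_parameters is a name I made up and fit_model_with_parameter is a dummy stand-in for the question's function:

def candidate_parameters(start, stop, step):
    # A generator: candidates are produced one at a time instead of
    # being collected into a list up front.
    value = start
    while value < stop:
        yield value
        value += step

def fit_model_with_parameter(data, p):
    # Dummy stand-in for the real model-fitting code.
    return sum(x * p for x in data)

data = [1.0, 2.0, 3.0]
# The for loop itself is unchanged; only what it iterates over differs.
for p in candidate_parameters(0.1, 0.5, 0.1):
    fit_model_with_parameter(data, p)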

In Python, is there a way to call a method on every item of an iterable? [duplicate]

Possible Duplicate:
Is there a map without result in python?
I often come to a situation in my programs where I want to quickly/efficiently call an in-place method on each of the items contained in an iterable (quickly meaning that the overhead of a for loop is unacceptable). A good example would be a list of sprites when I want to call draw() on each of the Sprite objects.
I know I can do something like this:
[sprite.draw() for sprite in sprite_list]
But I feel like the list comprehension is misused since I'm not using the returned list. The same goes for the map function. Stone me for premature optimization, but I also don't want the overhead of the return value.
What I want to know is if there's a method in Python that lets me do what I just explained, perhaps like the hypothetical function I suggest below:
do_all(sprite_list, draw)
You can always write your own do_all function:
def do_all(iterable, func):
    for i in iter(iterable):
        func(i)
Then call it whenever you want.
There is really no performance problem with using an explicit for loop.
There is a performance problem with using a list comprehension or map, but only in building the list of results. Obviously, iterating over 500M items will be a lot slower if you have to build up a 500M list along the way.
It's worth pointing out here that this is almost certainly not going to arise for things like drawing a list of sprites. You don't have 500M sprites to draw. And if you do, it'll probably take a lot longer than creating a list of 500M copies of None. And in most plausible cases where you do need to do the same very simple thing to 500M objects, there are better solutions, like switching to numpy. But there are some conceivable cases where this could arise.
The easy way around that is to use a generator expression or itertools.imap (or, in Python 3, just map) and then dispose of the values by writing a dispose function. One possibility:
def dispose(iterator):
    for i in iterator:
        pass
Then:
dispose(itertools.imap(Sprite.draw, sprite_list))
You could even define do_all as:
def do_all(iterable, func):
    dispose(itertools.imap(func, iterable))
If you're doing this for clarity or simplicity, I think it's misguided. The for loop version is perfectly easy to read, and this version looks like you're trying to write Haskell with the wrong function names and syntax.
If you're doing it for performance… well, if there were ever a real-life performance situation where this mattered (which doesn't seem very likely), you'd probably want to play with a bunch of different potential implementations of dispose, and possibly move the dispose back into the do_all to avoid the extra function call, and maybe even implement the whole thing in C (borrowing the fast-iteration code from the stdlib's itertools.c).
Or, better, pip install more-itertools, then use more_itertools.consume. For what it's worth, the current version just does collections.deque(iterator, maxlen=0), and in a test against a custom C implementation, it's less than 1% slower (except for very tiny iterators—the cutoff is 19 on my system), so it's probably not worth implementing in C. But if someone does, or if some future Python (or PyPy) provides a faster way to implement it, chances are it'll be added into more-itertools before you find out about it and change your code.
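A rough sketch of that option (assuming Python 3, where map is already lazy, and using a trivial stand-in Sprite class of my own):

from collections import deque

try:
    from more_itertools import consume  # pip install more-itertools
except ImportError:
    def consume(iterator):
        # Same trick consume currently uses: a zero-length deque drains
        # the iterator at C speed without storing any results.
        deque(iterator, maxlen=0)

class Sprite:
    def draw(self):
        pass  # stand-in for real drawing code

sprite_list = [Sprite() for _ in range(10)]

# map is lazy in Python 3, so consume() is what actually drives it and
# calls draw() on every sprite, discarding the (None) return values.
consume(map(Sprite.draw, sprite_list))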
Assuming sprite_list is a list of Sprite objects, you can do:
map(Sprite.draw, sprite_list)
This will call Sprite.draw() on each item in sprite_list (in Python 2, where map is eager; in Python 3 you would have to consume the map object for the calls to happen), which is essentially the same as the list comprehension you posted. If you don't want to create a list, you can just use a for loop:
for sprite in sprite_list:
    sprite.draw()

Python sort parallel arrays in place?

Is there an easy (meaning without rolling one's own sorting function) way to sort parallel lists without unnecessary copying in Python? For example:
foo = range(5)
bar = range(5, 0, -1)
parallelSort(bar, foo)
print foo # [4,3,2,1,0]
print bar # [1,2,3,4,5]
I've seen the examples using zip but it seems silly to copy all your data from parallel lists to a list of tuples and back again if this can be easily avoided.
Here's an easy way:
perm = sorted(xrange(len(foo)), key=lambda x:foo[x])
This generates a permutation as a list of indices: the value in perm[i] is the index of the ith smallest value in foo. Then, you can access both lists in order:
for p in perm:
    print "%s: %s" % (foo[p], bar[p])
You'd need to benchmark it to find out if it's any more efficient, though - I doubt it makes much of a difference.
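If you do want the reordered lists themselves, you can apply the same permutation to both afterwards. A small sketch in Python 3 syntax, keyed on bar to reproduce the output in the question (it still builds temporary lists, but only of references):

foo = list(range(5))         # [0, 1, 2, 3, 4]
bar = list(range(5, 0, -1))  # [5, 4, 3, 2, 1]

# Index permutation that sorts by the key list (bar).
perm = sorted(range(len(bar)), key=lambda i: bar[i])

# Reorder both lists in place with the same permutation.
foo[:] = [foo[i] for i in perm]
bar[:] = [bar[i] for i in perm]

print(foo)  # [4, 3, 2, 1, 0]
print(bar)  # [1, 2, 3, 4, 5]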
Is there an easy way? Yes. Use zip.
Is there an "easy way that doesn't use a zip variant"? No.
If you wanted to elaborate on why you object to using zip, that would be helpful. Either you're copying objects, in which case Python only copies references, or you're copying something so lightweight into an equally lightweight tuple that it is not worth optimizing.
If you really don't care about execution speed but are especially concerned for some reason about memory pressure, you could roll your own bubble sort (or your sort algorithm of choice) on your key list, swapping the elements of both the key list and the target lists whenever it performs a swap. I would call this the opposite of easy, but it would certainly limit your working set.
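For what it's worth, the zip version being referred to looks something like this (shown in Python 3 syntax; the pairs list is the only extra copy, and it holds references rather than duplicated objects):

foo = list(range(5))         # [0, 1, 2, 3, 4]
bar = list(range(5, 0, -1))  # [5, 4, 3, 2, 1]

# Pair the lists, sort the pairs by the key list (bar), then unzip.
pairs = sorted(zip(bar, foo))
bar[:] = [b for b, _ in pairs]
foo[:] = [f for _, f in pairs]

print(foo)  # [4, 3, 2, 1, 0]
print(bar)  # [1, 2, 3, 4, 5]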
To achieve this, you would have to implement your own sort.
However: Does the unnecessary copying really hurt your application? Often parts of Python strike me as inefficient, too, but they are efficient enough for what I need.
Any solution I can imagine, short of implementing a sort from scratch, uses indices, or a dict, or something else that is not really going to save you memory. In any event, using zip only increases memory usage by a constant factor, so it is worth making sure this is really a problem before looking for a solution.
If it does turn out to be a problem, there may be more effective solutions. Since the elements of foo and bar are so closely related, are you sure the right representation isn't a list of tuples? And if you are running out of memory, are you sure they shouldn't be in a more compact data structure, such as a numpy array or a database (the latter of which is really good at this kind of manipulation)?
(Also, incidentally, itertools.izip can save you a little bit of memory over zip, though you still end up with the full zipped list in list form as the result of sorted.)
