I'm using Python to work with largish (approx. 2000 x 2000) matrices, where each (I, J) point in the matrix represents a single pixel.
The matrices themselves are sparse (i.e. a substantial portion of them will have zero values), but when they are updated the updates tend to be increment operations applied to a large number of adjacent pixels in a rectangular 'block', rather than to random pixels here and there (a property I do not currently use to my advantage...).
I'm afraid I'm a bit new to matrix arithmetic, but I've looked into a number of possible solutions, including the various flavours of scipy sparse matrices. So far coordinate (COO) matrices seem the most promising.
So, for instance, where I want to increment one block shape, I'd have to do something along the lines of:
>>> from scipy import sparse
>>> from numpy import array
>>> I = array([0,0,0,0])
>>> J = array([0,1,2,3])
>>> V = array([1,1,1,1])
>>> incr_matrix = sparse.coo_matrix((V,(I,J)),shape=(100,100))
>>> main_matrix += incr_matrix #where main_matrix was previously defined
In the future, I'd like to have a richer pixel value representation in any case (tuples to represent RGB, etc.), something that a numpy array doesn't support out of the box (or perhaps I need to use this).
Ultimately I'll have a number of these matrices that I would need to do simple arithmetic on, and I'd need the code to be as efficient as possible -- and distributable, so I'd need to be able to persist and exchange these objects in a smallish representation without substantial penalties. I'm wondering if this is the right way to go, or should I be looking at rolling my own structures using dicts, etc.?
The general rule is, get the code working first, then optimize if needed...
In this case, use a normal numpy 2000x2000 array, or 2000x2000x3 for RGB. This will be much easier and faster to work with, has only a small memory requirement, and has many other advantages; for example, you can use the standard image processing tools, etc.
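For example, incrementing a rectangular block of a plain array is just a slice operation (a minimal sketch; the block coordinates and sizes below are made up):
import numpy as np

main_matrix = np.zeros((2000, 2000, 3), dtype=np.uint16)   # dense RGB image

# Increment a 40x60 block of pixels in one step.
row0, col0 = 100, 250
main_matrix[row0:row0 + 40, col0:col0 + 60, :] += 1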
Then, if needed, "to persist and exchange these objects", you can just compress them using gzip, pytables, jpeg, or whatever, but there's no need to limit your data manipulation based on storage requirements.
This way you get both faster processing and better compression.
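For instance, numpy's own compressed .npz container is often enough for persisting and exchanging these arrays (a sketch; the file name is arbitrary):
import numpy as np

main_matrix = np.zeros((2000, 2000, 3), dtype=np.uint16)

# Write a compressed copy to disk; mostly-zero data compresses very well.
np.savez_compressed("main_matrix.npz", main_matrix=main_matrix)

# Load it back elsewhere.
loaded = np.load("main_matrix.npz")["main_matrix"]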
I would say yes, this is the way to go -- definitely over building something out of dictionaries! When building a "vector" array, use a structured array, i.e. define your own dtype:
rgbtype = [('r','uint8'),('g','uint8'),('b','uint8')]
When incrementing your blocks, it will look something like this:
main_matrix['r'][blk_slice] += incr_matrix['r']
main_matrix['g'][blk_slice] += incr_matrix['g']
main_matrix['b'][blk_slice] += incr_matrix['b']
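A fuller, self-contained sketch of that idea (the array sizes, block slice, and increment values below are invented for illustration):
import numpy as np

rgbtype = [('r', 'uint8'), ('g', 'uint8'), ('b', 'uint8')]

main_matrix = np.zeros((2000, 2000), dtype=rgbtype)   # one RGB record per pixel
incr_matrix = np.zeros((40, 60), dtype=rgbtype)       # block-sized increment
incr_matrix['r'] = 1
incr_matrix['g'] = 1
incr_matrix['b'] = 1

blk_slice = (slice(100, 140), slice(250, 310))        # rows 100:140, cols 250:310

main_matrix['r'][blk_slice] += incr_matrix['r']
main_matrix['g'][blk_slice] += incr_matrix['g']
main_matrix['b'][blk_slice] += incr_matrix['b']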
Update:
It looks like you can't do arithmetic operations directly with a coo_matrix; it exists mainly as a convenient way to populate a sparse matrix. You have to convert it to another (sparse) matrix type before doing the updates (see the scipy.sparse documentation).
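So, reusing the I, J, V arrays from the question, something along these lines should work (a sketch; CSR is just one reasonable target format):
main_matrix = sparse.coo_matrix((100, 100)).tocsr()   # previously defined elsewhere, now in CSR form
incr_matrix = sparse.coo_matrix((V, (I, J)), shape=(100, 100))

main_matrix = main_matrix + incr_matrix.tocsr()       # returns a new CSR matrix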
You might want to consider looking into a quadtree as an implementation. The quadtree structure is pretty efficient at storing sparse data, and has the added advantage that if you're working with structures composed of lots of blocks of similar data the representation can be very compact. I'm not sure if this will be particularly applicable to what you're doing, since I don't know what you mean by "working in blocks," but it's certainly worth checking out as an alternative sparse matrix implementation.
I have a program which creates an array:
from numpy import zeros, complex_
List1 = zeros((x, y), dtype=complex_)
Currently I am using x = 500 and y = 1000000.
I will initialize the first column of the list by some formula. Then the subsequent columns will calculate their own values based on the preceding column.
After the list is completely filled, I will then display this multidimensional array using imshow().
The size of each value (item) in the list is 24 bytes.
A sample value from the code is: 4.63829355451e-32
When I run the code with y = 10000000, it takes up too much RAM and the system stops the run. How do I solve this problem? Is there a way to save my RAM while still being able to process the list using imshow() easily? Also, how large a list can imshow() display?
There's no way to solve this problem (in any general way).
Computers (as commonly understood) have a limited amount of RAM, and they require elements to be in RAM in order to operate on them.
A complex128 array of size 10000000x500 would require around 74 GiB to store. You'll need to somehow reduce the amount of data you're processing if you hope to use a regular computer to do it (as opposed to a supercomputer).
A common technique is partitioning your data and processing each partition individually (possibly on multiple computers). Depending on the problem you're trying to solve, there may be special data structures that you can use to reduce the amount of memory needed to represent the data - a good example is a sparse matrix.
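A hedged sketch of that idea using numpy's memmap, so the full array lives on disk and only one chunk of columns is in RAM at a time (the file name and chunk size are arbitrary, and note this still writes a very large file to disk):
import numpy as np

x_len, y_len, chunk = 500, 10_000_000, 10_000

# Disk-backed array; only the slices you touch are pulled into memory.
data = np.memmap("columns.dat", dtype=np.complex128, mode="w+", shape=(x_len, y_len))

for start in range(0, y_len, chunk):
    block = data[:, start:start + chunk]
    # ... fill/process `block` column by column here ...
    data.flush()   # push the finished chunk back to disk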
It's very unusual to need this much memory - make sure to carefully consider whether it's actually needed before you delve into extremely complex workarounds.
Just a short question that I can't find the answer to before I head off for the day.
When I do something like this:
v1 = float_list_python = ... # <some list of floats>
v2 = float_array_NumPy = ... # <some numpy.ndarray of floats>
# I guess they don't have to be floats -
# but some object that also has a native
# object in C, so that numpy can just use
# that
If I want to multiply these vectors by a scalar, my understanding has always been that the Python list is a list of object references, so looping through the list to do the multiplication must fetch the locations of all the floats, and then must get the floats themselves in order to do it -- which is one of the reasons it's slow.
If I do the same thing in NumPy, then, well, I'm not sure what happens. There are a number of things I imagine could happen:
1. It splits the multiplication up across the cores.
2. It vectorises the multiplications (as well?).
The documentation I've found suggests that many of the primitives in numpy take advantage of the first option there whenever they can (I don't have a computer on hand at the moment that I can test it on). And my intuition tells me that number 2 should happen whenever it's possible.
So my question is: if I create a NumPy array of Python objects, will it still at least perform operations on the list in parallel? I know that if you create an array of objects that have native C types, then it will actually create a contiguous array in memory of the actual objects, and that if you create a numpy array of Python objects it will create an array of references, but I don't see why this would rule out parallel operations on said list, and I cannot find anywhere that explicitly states that.
EDIT: I feel there's a bit of confusion over what I'm asking. I understand what vectorisation is, and I understand that it is a compiler optimisation, not something you necessarily program in (though aligning the data so that it's contiguous in memory is important). On the question of vectorisation, all I wanted to know was whether or not numpy uses it. If I do something like np_array1 * np_array2, does the underlying library call use vectorisation (presuming that dtype is a compatible type)?
For the splitting up over the cores, all I mean there is: if I again do something like np_array1 * np_array2, but this time with dtype=object, would it divide that work up among the cores?
numpy is fast because it performs numeric operations like this in fast compiled C code. In contrast the list operation operates at the interpreted Python level (streamlined as much as possible with Python bytecodes etc).
A numpy array of numeric type stores those numbers in a data buffer. At least in the simple cases this is just a block of bytes that C code can step through efficiently. The array also has shape and strides information that allows multidimensional access.
When you multiply the array by a scalar, it, in effect, calls a C function titled something like 'multiply_array_by_scalar', which does the multiplication in fast compiled code. So this kind of numpy operation is fast (compared to Python list code) regardless of the number of cores or other multi-processing/threading enhancements.
Arrays of objects do not have any special speed advantage (compared to lists), at least not at this time.
Look at my answer to a question about creating an array of arrays, https://stackoverflow.com/a/28284526/901925
I had to use iteration to initialize the values.
Have you done any time experiments? For example, construct an array, say (1000,2). Use tolist() to create an equivalent list of lists. And make a similar array of objects, with each object being a (2,) array or list (how much work did that take?). Now do something simple like len(x) for each of those sub lists.
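A quick hedged version of such an experiment (sizes and repeat counts are arbitrary); typically the float64 array is far faster, while the object-dtype array lands in the same ballpark as the plain list:
import timeit
import numpy as np

x_num = np.arange(1_000_000, dtype=np.float64)   # contiguous buffer of C doubles
x_obj = x_num.astype(object)                     # array of references to Python floats
x_list = x_num.tolist()                          # plain Python list

print(timeit.timeit(lambda: x_num * 3.0, number=100))               # fast compiled loop
print(timeit.timeit(lambda: x_obj * 3.0, number=100))               # Python-level work per element
print(timeit.timeit(lambda: [v * 3.0 for v in x_list], number=100)) # list comprehension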
@hpaulj provided a good answer to your question. In general, from reading your question it occurred to me that you do not actually understand what "vectorization" does under the hood. This writeup is a pretty decent explanation of vectorization and how it enables faster computations: http://quantess.net/2013/09/30/vectorization-magic-for-your-computations/
With regards to point 1 - Distributing computations across multiple cores, this is not always the case with Numpy. However, there are libraries like numexpr that enable multithreaded, highly efficient Numpy array computations with support for several basic logical and arithmetic operators. Numexpr can be used to turbo charge critical computations when used in conjunction with Numpy as it avoids replicating large arrays in memory for vectorization routines (as is the case for Numpy) and can use all cores on your system to perform computations.
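A minimal numexpr sketch, assuming the library is installed (the array sizes and the expression are arbitrary):
import numpy as np
import numexpr as ne

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Evaluated in cache-sized chunks across multiple threads,
# without materialising large temporaries for each sub-expression.
c = ne.evaluate("2.0 * a + b ** 2")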
What are the advantages of NumPy over regular Python lists?
I have approximately 100 financial markets series, and I am going to create a cube array of 100x100x100 = 1 million cells. I will be regressing (3-variable) each x with each y and z, to fill the array with standard errors.
I have heard that for "large matrices" I should use NumPy as opposed to Python lists, for performance and scalability reasons. Thing is, I know Python lists and they seem to work for me.
What will the benefits be if I move to NumPy?
What if I had 1000 series (that is, 1 billion floating point cells in the cube)?
NumPy's arrays are more compact than Python lists -- a list of lists as you describe, in Python, would take at least 20 MB or so, while a NumPy 3D array with single-precision floats in the cells would fit in 4 MB. Access in reading and writing items is also faster with NumPy.
Maybe you don't care that much for just a million cells, but you definitely would for a billion cells -- neither approach would fit in a 32-bit architecture, but with 64-bit builds NumPy would get away with 4 GB or so, Python alone would need at least about 12 GB (lots of pointers which double in size) -- a much costlier piece of hardware!
The difference is mostly due to "indirectness" -- a Python list is an array of pointers to Python objects, at least 4 bytes per pointer plus 16 bytes for even the smallest Python object (4 for the type pointer, 4 for the reference count, 4 for the value -- and the memory allocator rounds up to 16). A NumPy array is an array of uniform values -- single-precision numbers take 4 bytes each, double-precision ones 8 bytes. Less flexible, but you pay substantially for the flexibility of standard Python lists!
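You can see the difference directly with a small hedged sketch (the exact byte counts depend on the Python build and platform):
import sys
import numpy as np

values = [float(i) for i in range(1000)]
arr = np.array(values, dtype=np.float32)

# Pointer array plus a full Python float object (roughly 24 bytes or more) per element.
list_bytes = sys.getsizeof(values) + sum(sys.getsizeof(v) for v in values)
print(list_bytes)
print(arr.nbytes)   # 4000 bytes: 4 bytes per single-precision value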
NumPy is not just more efficient; it is also more convenient. You get a lot of vector and matrix operations for free, which sometimes allow one to avoid unnecessary work. And they are also efficiently implemented.
For example, you could read your cube directly from a file into an array:
x = numpy.fromfile(file=open("data"), dtype=float).reshape((100, 100, 100))
Sum along the second dimension:
s = x.sum(axis=1)
Find which cells are above a threshold:
(x > 0.5).nonzero()
Keep only every even-indexed slice along the third dimension (i.e. drop the odd-indexed ones):
x[:, :, ::2]
Also, many useful libraries work with NumPy arrays. For example, statistical analysis and visualization libraries.
Even if you don't have performance problems, learning NumPy is worth the effort.
Alex mentioned memory efficiency, and Roberto mentions convenience, and these are both good points. For a few more ideas, I'll mention speed and functionality.
Functionality: You get a lot built in with NumPy, FFTs, convolutions, fast searching, basic statistics, linear algebra, histograms, etc. And really, who can live without FFTs?
Speed: Here's a test on doing a sum over a list and a NumPy array, showing that the sum on the NumPy array is 10x faster (in this test -- mileage may vary).
from numpy import arange
from timeit import Timer
Nelements = 10000
Ntimeits = 10000
x = arange(Nelements)
y = range(Nelements)
t_numpy = Timer("x.sum()", "from __main__ import x")
t_list = Timer("sum(y)", "from __main__ import y")
print("numpy: %.3e" % (t_numpy.timeit(Ntimeits)/Ntimeits,))
print("list: %.3e" % (t_list.timeit(Ntimeits)/Ntimeits,))
which on my systems (while I'm running a backup) gives:
numpy: 3.004e-05
list: 5.363e-04
Here's a nice answer from the FAQ on the scipy.org website:
What advantages do NumPy arrays offer over (nested) Python lists?
Python's lists are efficient general-purpose containers. They support (fairly) efficient insertion, deletion, appending, and concatenation, and Python's list comprehensions make them easy to construct and manipulate. However, they have certain limitations: they don't support "vectorized" operations like elementwise addition and multiplication, and the fact that they can contain objects of differing types means that Python must store type information for every element, and must execute type dispatching code when operating on each element. This also means that very few list operations can be carried out by efficient C loops -- each iteration would require type checks and other Python API bookkeeping.
Others have highlighted almost all the major differences between numpy arrays and Python lists; I will just summarize them here:
Numpy arrays have a fixed size at creation, unlike Python lists (which can grow dynamically). Changing the size of an ndarray will create a new array and delete the original.
The elements in a numpy array are all required to be of the same data type (a heterogeneous object dtype is possible too, but it will not permit mathematical operations), and thus will be the same size in memory.
Numpy arrays facilitate advanced mathematical and other types of operations on large amounts of data. Typically such operations are executed more efficiently and with less code than is possible using Python's built-in sequences.
The standard mutable multielement container in Python is the list. Because of Python's dynamic typing, we can even create a heterogeneous list. To allow these flexible types, each item in the list must contain its own type info, reference count, and other information. That is, each item is a complete Python object.
In the special case that all variables are of the same type, much of this information is redundant; it can be much more efficient to store data in a fixed-type array (NumPy-style).
Fixed-type NumPy-style arrays lack this flexibility, but are much more efficient for storing and manipulating data.
NumPy is not another programming language but a Python extension module. It provides fast and efficient operations on arrays of homogeneous data.
Numpy arrays have a fixed size at creation.
In Python, lists are written with square brackets.
These lists can be homogeneous or heterogeneous.
The main advantages of using numpy arrays over Python lists:
They consume less memory.
They are faster than Python lists.
They are convenient to use.
I'm porting a C++ scientific application to Python, and as I'm new to Python, some questions come to mind:
1) I'm defining a class that will contain the coordinates (x,y). These values will be accessed several times, but they will only be read after class instantiation. Is it better to use a tuple or a numpy array, both memory- and access-time-wise?
2) In some cases, these coordinates will be used to build a complex number, evaluated on a complex function, and the real part of this function will be used. Assuming that there is no way to separate the real and imaginary parts of this function, and that the real part will have to be used at the end, maybe it is better to use complex numbers directly to store (x,y)? How bad is the overhead of the transformation from complex to real in Python? The code in C++ does a lot of these transformations, and this is a big slowdown in that code.
3) Also, some coordinate transformations will have to be performed; the x and y values will be accessed separately, the transformation done, and the result returned. The coordinate transformations are defined in the complex plane, so is it still faster to use the components x and y directly than to rely on the complex variables?
Thank you
In terms of memory consumption, numpy arrays are more compact than Python tuples.
A numpy array uses a single contiguous block of memory. All elements of the numpy array must be of a declared type (e.g. 32-bit or 64-bit float.) A Python tuple does not necessarily use a contiguous block of memory, and the elements of the tuple can be arbitrary Python objects, which generally consume more memory than numpy numeric types.
So this issue is a hands-down win for numpy (assuming the elements of the array can be stored as a numpy numeric type).
On the issue of speed, I think the choice boils down to the question, "Can you vectorize your code?"
That is, can you express your calculations as operations done on entire arrays element-wise.
If the code can be vectorized, then numpy will most likely be faster than Python tuples. (The only case I could imagine where it might not be, is if you had many very small tuples. In this case the overhead of forming the numpy arrays and one-time cost of importing numpy might drown-out the benefit of vectorization.)
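As a rough illustration of what "vectorized" means here (a sketch with made-up data):
import numpy as np

points = [(float(i), 0.5 * float(i)) for i in range(100_000)]   # list of (x, y) tuples
xy = np.array(points)                                           # shape (100000, 2) float array

# Vectorized: one expression over whole columns, evaluated in compiled loops.
r_np = np.sqrt(xy[:, 0] ** 2 + xy[:, 1] ** 2)

# Element-by-element equivalent over the tuples, at Python speed.
r_py = [(x * x + y * y) ** 0.5 for (x, y) in points]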
An example of code that could not be vectorized would be if your calculation involved looking at, say, the first complex number in an array z, doing a calculation which produces an integer index idx, then retrieving z[idx], doing a calculation on that number, which produces the next index idx2, then retrieving z[idx2], etc. This type of calculation might not be vectorizable. In this case, you might as well use Python tuples, since you won't be able to leverage numpy's strength.
I wouldn't worry about the speed of accessing the real/imaginary parts of a complex number. My guess is the issue of vectorization will most likely determine which method is faster. (Though, by the way, numpy can transform an array of complex numbers to their real parts simply by striding over the complex array, skipping every other float, and viewing the result as floats. Moreover, the syntax is dead simple: If z is a complex numpy array, then z.real is the real parts as a float numpy array. This should be far faster than the pure Python approach of using a list comprehension of attribute lookups: [z.real for z in zlist].)
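For concreteness, that looks like this (a small sketch with toy values):
import numpy as np

zlist = [complex(1.0, 2.0), complex(3.0, 4.0), complex(5.0, 6.0)]
z = np.array(zlist)                  # complex128 array

reals_np = z.real                    # float64 view onto the real parts; no Python-level loop
reals_py = [w.real for w in zlist]   # attribute lookup per element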
Just out of curiosity, what is your reason for porting the C++ code to Python?
A numpy array with an extra dimension is tighter in memory use than, and at least as fast as, a numpy array of tuples; complex numbers are at least as good or even better, including for your third question. BTW, you may have noticed that -- while questions asked later than yours were getting answers aplenty -- yours was lying fallow: part of the reason is no doubt that asking three questions within a question turns responders off. Why not just ask one question per question? It's not as if you get charged for questions or anything, you know...!-)