I recently ran into the following issue with broadcasting with numpy.
import numpy as np

y = np.random.randn(100)
x = np.random.randn(100, 1)
(y + x).shape
> (100, 100)
While I realize that this follows the rules at https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html, it seems counterintuitive to what one would expect, namely that the result be a (100, 1) vector.
I was just wondering: is there a good reason for this behavior (i.e. is this behavior desirable), or is it just a by-product of the way the broadcasting rules are defined?
The basic idea is that when one array or the other requires iteration for the result shapes to make sense, you iteratively perform the operation for each entry of the major (leading) axis. (Separately, NumPy offers ways to make the iteration happen over different axes if desired, such as with einsum.)
In this case, x has 100 different things along its major axis, each of which is individually added to y. Let's take just the first value x[0] and add it to y. Now we're talking about y having 100 things that are iteratively added to x[0], so the result is a shape-of-y thing. Repeat this for x[1] and so forth.
If you take x.T, then along its major axis there is just one thing, namely a length-100 "row". It can then be added elementwise to y without modification, so no more broadcasting is needed and you get the "naive" vector-math operation you might have had in mind.
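A quick illustration of the shapes involved (a small sketch; the random values are just placeholders):

import numpy as np

y = np.random.randn(100)
x = np.random.randn(100, 1)

print((y + x).shape)        # (100, 100): each x[i] is broadcast against all of y
print((y + x.T).shape)      # (1, 100):   shapes (100,) and (1, 100) already line up
print((y + x[:, 0]).shape)  # (100,):     flattening x gives the plain elementwise sum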
NumPy's broadcasting rules are trying to be effective for programming and iteration across a wide swath of possible calculations and operations, many having absolutely nothing to do with linear algebra or common vector/matrix operations. So broadcasting doesn't always (and shouldn't always) assume things in order to privilege the linear algebra sort of expectation.
I want to carry out np.convolve for two 2d arrays in a vectorized manner. Here is the thing:
The function np.convolve takes two 1d arrays, a and v, and computes the convolution. The result reads:
output[n] = \sum_m a[m] v[n - m] .
What I want to do is, for 2d arrays a and v, to repeat "convolution along axis=0" over axis=1. That is,
output[n][k] = \sum_m a[m][k] v[n - m][k] .
Though this could be attained with a for loop over axis=1, I'm keen on finding a way to "vectorize" it to improve performance.
I searched for a while but found no clue. I'd be glad if someone could find a way out. Thank you.
Depending on the size of your arrays, it may be quicker to do a multiplication of the FFT arrays, which is mathematically equivalent to the convolution operation. Based on the answer here, it is quicker; however, it appears limited when you get to exceptionally long sequences, as shown here. An example of what this could look like:
from scipy.signal import fftconvolve
output = fftconvolve(a, v, mode='same')  # mode='full' works too
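If you specifically need the per-column 1D convolution from the question (rather than a full 2D convolution over both axes), here is a minimal sketch using NumPy's FFT directly, assuming full-mode output; the shapes and names are just illustrative:

import numpy as np

# Illustrative shapes: a is (m, k), v is (n, k); convolve along axis 0, per column.
a = np.random.randn(50, 8)
v = np.random.randn(30, 8)

L = a.shape[0] + v.shape[0] - 1           # length of the full convolution
A = np.fft.rfft(a, n=L, axis=0)           # zero-padded FFT along axis 0
V = np.fft.rfft(v, n=L, axis=0)
out = np.fft.irfft(A * V, n=L, axis=0)    # pointwise product == convolution

# Sanity check against a per-column loop with np.convolve
ref = np.stack([np.convolve(a[:, k], v[:, k]) for k in range(a.shape[1])], axis=1)
assert np.allclose(out, ref)

Recent SciPy versions also accept an axes argument to fftconvolve, which should achieve the same per-axis behavior without the manual FFT calls.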
If they help you don't forget to give them an upvote, too :).
I have a matrix A = np.array([[1,1,1],[1,2,3],[4,4,4]]) and I want only the linearly independent rows in my new matrix. The answer might be A_new = np.array([[1,1,1],[1,2,3]]) or A_new = np.array([[1,2,3],[4,4,4]]).
Since I have a very large matrix, I need to decompose it into a smaller, full-rank matrix of linearly independent rows. Can someone please help?
There are many ways to do this, and which way is best will depend on your needs. And, as you noted in your statement, there isn't even a unique output.
One way to do this would be to use Gram-Schmidt to find an orthogonal basis, where the first $k$ vectors in this basis have the same span as the first $k$ independent rows. If at any step you find a linear dependence, drop that row from your matrix and continue the procedure.
A simple way to do this with numpy would be
Q, R = np.linalg.qr(A.T)
and then drop any columns of A.T (i.e. rows of A) where R_{i,i} is zero.
For instance, you could do
A[np.abs(np.diag(R)) >= 1e-10]
While this will work perfectly in exact arithmetic, it may not work as well in finite precision. Almost any matrix will be numerically full rank, so you will need some kind of thresholding to determine whether there is a linear dependence. If you use the built-in QR method, you will have to make sure that there is no dependence on columns which you previously dropped.
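Putting the QR pieces together on the example matrix, a minimal sketch (the 1e-10 tolerance is an arbitrary choice, and signs on the diagonal of R don't matter because of the abs):

import numpy as np

A = np.array([[1, 1, 1],
              [1, 2, 3],
              [4, 4, 4]], dtype=float)

Q, R = np.linalg.qr(A.T)                 # columns of A.T are the rows of A
tol = 1e-10                              # arbitrary "numerically zero" threshold
A_new = A[np.abs(np.diag(R)) >= tol]
print(A_new)                             # [[1. 1. 1.] [1. 2. 3.]]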
If you need even more stability, you could iteratively solve the least squares problem
A.T[:,dependent_cols] x = A.T[:,col_to_check]
using a stable direct method. If you can solve this exactly, then A.T[:, col_to_check] is dependent on the previously kept vectors, with the combination given by x.
Which solver to use may also be dictated by your data type.
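A sketch of that iterative least-squares idea with np.linalg.lstsq (keeping the first row unconditionally, and again using an arbitrary 1e-10 residual tolerance):

import numpy as np

A = np.array([[1, 1, 1],
              [1, 2, 3],
              [4, 4, 4]], dtype=float)

kept = [0]                                  # keep the first row unconditionally
for i in range(1, A.shape[0]):
    # best approximation of row i as a combination of the rows kept so far
    x = np.linalg.lstsq(A.T[:, kept], A.T[:, i], rcond=None)[0]
    resid = np.linalg.norm(A.T[:, kept] @ x - A.T[:, i])
    if resid > 1e-10:                       # cannot be represented: row i is independent
        kept.append(i)

A_new = A[kept]
print(A_new)                                # [[1. 1. 1.] [1. 2. 3.]]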
I am currently using version 1.9.2 of numpy, which I believe is the latest publicly available one. I wish to quickly apply the central difference method to an n-dimensional array and specify the axis over which to perform the calculation.
My first thought was to use numpy.diff, which allows axis specification; however, this returns a one-sided (forward) difference rather than a central difference and has limited functionality.
I understand the following method works using numpy.gradient
import numpy

num_vectors = 10                        # number of 3-vectors in the 2D array
vectorarray = numpy.empty((num_vectors, 3))
vectorarray[0] = [4, 5, 6]
vectorarray[1] = [1, 4, 4]
vectorarray[2] = [8, 8, 1]              # add some arbitrary data for illustrative purposes
c1, c2 = numpy.gradient(vectorarray)
So c1 stores the useful information that I require. The problem is that I also have to generate c2, and I want to do this sort of calculation with higher-dimensional arrays, so I will incur a time cost by generating all this useless data.
Is there any other method by which I can achieve this same result without the redundancy, preferably one that also avoids nested for loops?
You can read Numpy's gradient code at https://github.com/numpy/numpy/blob/master/numpy/lib/function_base.py#L1119 and use the algorithm therein.
You can also copy it and change for i, axis in enumerate(axes): to for i, axis in [[0,0]]: so that it only runs once, but keep in mind that your modified code may fall under Numpy's license.
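Alternatively, you can hand-roll the central difference for just the axis you care about using slicing, which avoids computing the other outputs entirely. A minimal sketch, assuming unit spacing; central_diff is just an illustrative helper, and it should match the first array returned by numpy.gradient (with the default first-order one-sided edges):

import numpy as np

def central_diff(f, axis=0):
    """Central difference along a single axis: (f[i+1] - f[i-1]) / 2 in the
    interior, one-sided differences at the two edges, without computing
    anything for the other axes."""
    f = np.asarray(f, dtype=float)
    out = np.empty_like(f)

    def sl(idx):                      # full-slice index with `idx` on `axis`
        s = [slice(None)] * f.ndim
        s[axis] = idx
        return tuple(s)

    out[sl(slice(1, -1))] = (f[sl(slice(2, None))] - f[sl(slice(None, -2))]) / 2.0
    out[sl(0)] = f[sl(1)] - f[sl(0)]          # forward difference at the start
    out[sl(-1)] = f[sl(-1)] - f[sl(-2)]       # backward difference at the end
    return out

# central_diff(vectorarray, axis=0) should match the c1 returned by numpy.gradient

For what it's worth, newer NumPy releases also added an axis argument to np.gradient itself, which would make this unnecessary, but that was not available in the 1.9.x series.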
In the case of a 2D array, array.cumsum(0).cumsum(1) gives the integral image of the array.
What happens if I compute array.cumsum(0).cumsum(1).cumsum(2) over a 3D array?
Do I get a 3D extension of the integral image, i.e., an integral volume over the array?
It's hard to visualize what happens in the 3D case.
I have gone through this discussion.
3D variant for summed area table (SAT)
This gives a recursive way to compute the integral volume. If I use cumsum along the 3 axes instead, will it give me the same thing?
Will it be more efficient than the recursive method?
Yes, the formula you give, array.cumsum(0).cumsum(1).cumsum(2), will work.
What the formula does is compute a few partial sums so that the sum of these sums is the volume sum. That is, every element needs to be summed exactly once, or, in other words, no element can be skipped and no element counted twice. I think going through each of these questions (is any element skipped or counted twice?) is a good way to convince yourself that this will work. And also run a small test:
import numpy as np

x = np.ones((20, 20, 20)).cumsum(0).cumsum(1).cumsum(2)
print(x[2, 6, 10])  # 231.0
print(3 * 7 * 11)   # 231
Of course, with all ones there could be two errors that cancel each other out, but this wouldn't happen everywhere, so it's a reasonable test.
As for efficiency, I would guess that the single-pass approach is probably faster, but not by a lot. Also, the above could be sped up using an output array, e.g., cumsum(n, out=temp), as otherwise three arrays will be created for this calculation. The best way to know is to test (but only if you need to).
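A sketch of that out= idea (buffer names are just illustrative; this caps the temporaries at two preallocated arrays rather than one fresh allocation per pass):

import numpy as np

x = np.ones((20, 20, 20))

# Chaining .cumsum() allocates a fresh array for every pass.  With the out=
# argument you can ping-pong between two preallocated buffers instead.
buf_a = np.empty_like(x)
buf_b = np.empty_like(x)
np.cumsum(x,     axis=0, out=buf_a)
np.cumsum(buf_a, axis=1, out=buf_b)
np.cumsum(buf_b, axis=2, out=buf_a)
sat = buf_a                               # the 3D summed-area table

assert sat[2, 6, 10] == 3 * 7 * 11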
I'm porting a C++ scientific application to Python, and as I'm new to Python, some questions come to mind:
1) I'm defining a class that will contain the coordinates (x,y). These values will be accessed many times, but they will only be read after the class is instantiated. Is it better to use a tuple or a numpy array, both memory- and access-time-wise?
2) In some cases, these coordinates will be used to build a complex number, evaluated in a complex function, and the real part of this function will be used. Assuming that there is no way to separate the real and complex parts of this function, and that the real part will have to be used at the end, maybe it is better to use complex numbers directly to store (x,y)? How bad is the overhead of the transformation from complex to real in Python? The code in C++ does a lot of these transformations, and this is a big slowdown in that code.
3) Also, some coordinate transformations will have to be performed; the x and y values will be accessed separately, the transformation done, and the result returned. The coordinate transformations are defined in the complex plane, so is it still faster to use the components x and y directly than to rely on complex variables?
Thank you
In terms of memory consumption, numpy arrays are more compact than Python tuples.
A numpy array uses a single contiguous block of memory. All elements of the numpy array must be of a declared type (e.g. 32-bit or 64-bit float.) A Python tuple does not necessarily use a contiguous block of memory, and the elements of the tuple can be arbitrary Python objects, which generally consume more memory than numpy numeric types.
So this issue is a hands-down win for numpy (assuming the elements of the array can be stored as a numpy numeric type).
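A rough sketch of the difference (CPython object sizes vary by version, so treat the numbers as indicative only):

import sys
import numpy as np

pairs = [(float(i), float(i + 1)) for i in range(10_000)]   # (x, y) tuples
arr = np.array(pairs)                                       # shape (10000, 2), float64

# Rough comparison, ignoring the outer list that holds the tuples
tuple_bytes = sum(sys.getsizeof(t) + sum(sys.getsizeof(v) for v in t) for t in pairs)
print(arr.nbytes)       # 160000 bytes: 10000 * 2 * 8
print(tuple_bytes)      # several times larger on CPython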
On the issue of speed, I think the choice boils down to the question, "Can you vectorize your code?"
That is, can you express your calculations as operations done on entire arrays element-wise.
If the code can be vectorized, then numpy will most likely be faster than Python tuples. (The only case I could imagine where it might not be is if you had many very small tuples. In this case, the overhead of forming the numpy arrays and the one-time cost of importing numpy might drown out the benefit of vectorization.)
An example of code that could not be vectorized would be if your calculation involved looking at, say, the first complex number in an array z, doing a calculation which produces an integer index idx, then retrieving z[idx], doing a calculation on that number, which produces the next index idx2, then retrieving z[idx2], etc. This type of calculation might not be vectorizable. In this case, you might as well use Python tuples, since you won't be able to leverage numpy's strength.
I wouldn't worry about the speed of accessing the real/imaginary parts of a complex number. My guess is the issue of vectorization will most likely determine which method is faster. (Though, by the way, numpy can transform an array of complex numbers to their real parts simply by striding over the complex array, skipping every other float, and viewing the result as floats. Moreover, the syntax is dead simple: If z is a complex numpy array, then z.real is the real parts as a float numpy array. This should be far faster than the pure Python approach of using a list comprehension of attribute lookups: [z.real for z in zlist].)
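For instance (a small sketch; the array size is arbitrary):

import numpy as np

z = np.random.randn(1_000_000) + 1j * np.random.randn(1_000_000)

real_parts = z.real                     # a strided float64 view, no copy, no Python loop

zlist = list(z)                         # the pure-Python equivalent for comparison
real_list = [c.real for c in zlist]     # attribute lookup per element: much slower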
Just out of curiosity, what is your reason for porting the C++ code to Python?
A numpy array with an extra dimension is tighter in memory use, and at least as fast, as a numpy array of tuples; complex numbers are at least as good or even better, including for your third question. BTW, you may have noticed that, while questions asked later than yours were getting answers aplenty, yours was lying fallow: part of the reason is no doubt that asking three questions within one question turns responders off. Why not just ask one question per question? It's not as if you get charged for questions or anything, you know...!-)