This is a question about reshape and how it treats each axis of a multidimensional array.
Suppose I have an array whose first index selects a matrix. What I want instead is for the first index to run over the columns of each matrix. To illustrate, consider the following example, where z is the numpy array whose first index selects a matrix.
x = np.arange(9).reshape((3, 3))
y = np.arange(9, 18).reshape((3, 3))
z = np.dstack((x, y)).T
Where z looks like:
array([[[ 0, 3, 6],
[ 1, 4, 7],
[ 2, 5, 8]],
[[ 9, 12, 15],
[10, 13, 16],
[11, 14, 17]]])
Its shape is (2, 3, 3): the first index runs over the two images, and each image is a 3 x 3 matrix.
The question more specifically phrased then, is how to use reshape to obtain the following desired output:
array([[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8],
[ 9, 10, 11],
[12, 13, 14],
[15, 16, 17]])
Its shape is (6, 3), so the first axis now indexes the columns of the matrices shown in z above. My natural inclination was to use reshape directly on z:
out = z.reshape(2 * 3, 3)
But its output stacks the rows of the matrices, not the columns:
array([[ 0, 3, 6],
[ 1, 4, 7],
[ 2, 5, 8],
[ 9, 12, 15],
[10, 13, 16],
[11, 14, 17]])
Could reshape be used to obtain the desired output above? Or, more generally, can you control how each axis is used when you call reshape?
Two things:
I know how to solve the problem: I can loop over the elements of the big array (z) transposed and then apply reshape as above. This increases computation time a little, which is not really a problem, but it does not generalize and it does not feel Pythonic. So I was wondering if there is a standard, enlightened way of doing this.
I was not sure how to phrase this question. If anyone has a suggestion on how to phrase this problem better, I am all ears.
Every array has a natural (1D flattened) order to its elements. When you reshape an array, it is as though it were flattened first (thus obtaining the natural order), and then reshaped:
In [54]: z.ravel()
Out[54]:
array([ 0, 3, 6, 1, 4, 7, 2, 5, 8, 9, 12, 15, 10, 13, 16, 11, 14,
17])
In [55]: z.ravel().reshape(2*3, 3)
Out[55]:
array([[ 0, 3, 6],
[ 1, 4, 7],
[ 2, 5, 8],
[ 9, 12, 15],
[10, 13, 16],
[11, 14, 17]])
Notice that in the "natural order", 0 and 1 are far apart. However you reshape it, 0 and 1 will not be next to each other along the last axis, which is what you want in the desired array:
desired = np.array([[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8],
[ 9, 10, 11],
[12, 13, 14],
[15, 16, 17]])
This requires some reordering, which in this case can be done by swapaxes:
In [53]: z.swapaxes(1,2).reshape(2*3, 3)
Out[53]:
array([[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8],
[ 9, 10, 11],
[12, 13, 14],
[15, 16, 17]])
because swapaxes(1,2) places the values in the desired order:
In [56]: z.swapaxes(1,2).ravel()
Out[56]:
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17])
In [57]: desired.ravel()
Out[57]:
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17])
Note that the reshape method also has an order parameter, which controls the (C- or F-) order in which the elements are read from the array and placed in the reshaped array. However, I don't think it helps in your case.
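To make that concrete, here is what an F-order reshape of z gives; it interleaves the rows of the two matrices instead of stacking their columns:
z.reshape(2*3, 3, order='F')
# array([[ 0,  3,  6],
#        [ 9, 12, 15],
#        [ 1,  4,  7],
#        [10, 13, 16],
#        [ 2,  5,  8],
#        [11, 14, 17]])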
Another way to think about the limits of reshape is to note that any reshape followed by ravel gives the same result:
In [71]: z.reshape(3,3,2).ravel()
Out[71]:
array([ 0, 3, 6, 1, 4, 7, 2, 5, 8, 9, 12, 15, 10, 13, 16, 11, 14,
17])
In [72]: z.reshape(3,2,3).ravel()
Out[72]:
array([ 0, 3, 6, 1, 4, 7, 2, 5, 8, 9, 12, 15, 10, 13, 16, 11, 14,
17])
In [73]: z.reshape(3*2,3).ravel()
Out[73]:
array([ 0, 3, 6, 1, 4, 7, 2, 5, 8, 9, 12, 15, 10, 13, 16, 11, 14,
17])
In [74]: z.reshape(3*3,2).ravel()
Out[74]:
array([ 0, 3, 6, 1, 4, 7, 2, 5, 8, 9, 12, 15, 10, 13, 16, 11, 14,
17])
So if the ravel of the desired array is different, there is no way to obtain it by reshaping alone.
The same goes for reshaping with order='F', provided you also ravel with order='F':
In [109]: z.reshape(2,3,3, order='F').ravel(order='F')
Out[109]:
array([ 0, 9, 1, 10, 2, 11, 3, 12, 4, 13, 5, 14, 6, 15, 7, 16, 8,
17])
In [110]: z.reshape(2*3*3, order='F').ravel(order='F')
Out[110]:
array([ 0, 9, 1, 10, 2, 11, 3, 12, 4, 13, 5, 14, 6, 15, 7, 16, 8,
17])
In [111]: z.reshape(2*3,3, order='F').ravel(order='F')
Out[111]:
array([ 0, 9, 1, 10, 2, 11, 3, 12, 4, 13, 5, 14, 6, 15, 7, 16, 8,
17])
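Putting the C- and F-order observations together gives a quick test, using z and desired from above: if neither ravel matches, no single reshape (in either order) can produce the target.
np.array_equal(z.ravel(), desired.ravel())                    # False: no C-order reshape can work
np.array_equal(z.ravel(order='F'), desired.ravel(order='F'))  # False: F-order reshaping does not help either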
It is possible to obtain the desired array using two reshapes:
In [83]: z.reshape(2, 3*3, order='F').reshape(2*3, 3)
Out[83]:
array([[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8],
[ 9, 10, 11],
[12, 13, 14],
[15, 16, 17]])
but I stumbled upon this serendipitously.
If I've totally misunderstood your question and x and y are the givens (not z), then you could obtain the desired array using row_stack instead of dstack:
In [88]: z = np.row_stack([x, y])
In [89]: z
Out[89]:
array([[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8],
[ 9, 10, 11],
[12, 13, 14],
[15, 16, 17]])
If you look at the dstack code you'll discover that
np.dstack((x, y)).T
is effectively:
np.concatenate([i[:,:,None] for i in (x,y)],axis=2).transpose([2,1,0])
It reshapes each component array to add a new third axis, joins them along that axis, and finally the .T transposes the axes.
Your target is the same as a row stack:
np.concatenate((x,y),axis=0)
So with a bit of reverse engineering we can create it from z with
np.concatenate([i[...,0] for i in np.split(z.T,2,axis=2)],axis=0)
np.concatenate([i.T[:,:,0] for i in np.split(z,2,axis=0)],axis=0)
or
np.concatenate(np.split(z.T,2,axis=2),axis=0)[...,0]
or with a partial transpose we can keep the split-and-rejoin axis first, and just use concatenate:
np.concatenate(z.transpose(0,2,1),axis=0)
or its reshape equivalent
(z.transpose(0,2,1).reshape(-1,3))
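If you want this to generalize beyond the 3x3 example, the same transpose-then-reshape pattern can be written in terms of z's own shape. A small sketch, assuming z stacks transposed matrices along its first axis as in the question:
# z has shape (n_matrices, n_cols, n_rows); undo the per-matrix transpose,
# then collapse the first two axes into one
out = z.transpose(0, 2, 1).reshape(-1, z.shape[1])
np.array_equal(out, np.concatenate((x, y), axis=0))   # True for the example above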
Related
I am looking for an efficient way of indexing the columns of a numpy array with several ranges, when only the indexes of the desired ranges are given.
For example, given the following array, and a range size r_size=3:
import numpy as np
arr = np.arange(18).reshape((2,9))
array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8],
[ 9, 10, 11, 12, 13, 14, 15, 16, 17]])
This would mean that there are a total of 3 sets of ranges [r0, r1, r2] whose elements in the array are distributed as:
[[r0_00, r0_01, r0_02, r1_00, r1_01, r1_02, r2_00, r2_01, r2_02]
[r0_10, r0_11, r0_12, r1_10, r1_11, r1_12, r2_10, r2_11, r2_12]]
So if I want to access the ranges r0 and r2, then I would obtain:
arr = np.arange(18).reshape((2,9))
r_size = 3
ranges = [0, 2]
# --------------------------------------------------------
# Line that indexes arr with the variable ranges... Output:
# --------------------------------------------------------
array([[ 0, 1, 2, 6, 7, 8],
[ 9, 10, 11, 15, 16, 17]])
The fastest way that I've found is the following:
import numpy as np
from itertools import chain
arr = np.arange(18).reshape((2,9))
r_size = 3
ranges = [0,2]
arr[:, list(chain(*[range(r_size*x,r_size*x+r_size) for x in ranges]))]
array([[ 0, 1, 2, 6, 7, 8],
[ 9, 10, 11, 15, 16, 17]])
But I am not sure if it can be improved in terms of speed.
Thanks in advance!
You could start by splitting the array into r_size chunks:
>>> splits = np.split(arr, r_size, axis=1)
>>> splits
[array([[ 0, 1, 2],
[ 9, 10, 11]]),
array([[ 3, 4, 5],
[12, 13, 14]]),
array([[ 6, 7, 8],
[15, 16, 17]])]
Stack with np.stack and select the correct ranges:
>>> stack = np.stack(splits)[ranges]
>>> stack
array([[[ 0, 1, 2],
[ 9, 10, 11]],
[[ 6, 7, 8],
[15, 16, 17]]])
And concatenate horizontally with np.hstack or np.concatenate on axis=1:
>>> np.hstack(stack)
array([[ 0, 1, 2, 6, 7, 8],
[ 9, 10, 11, 15, 16, 17]])
Overall this looks like:
>>> np.hstack(np.stack(np.split(arr, r_size, axis=1))[ranges])
array([[ 0, 1, 2, 6, 7, 8],
[ 9, 10, 11, 15, 16, 17]])
Alternatively, you can work with np.reshape exclusively, which will be faster:
Initial reshape:
>>> arr.reshape(len(arr), -1, r_size)
array([[[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8]],
[[ 9, 10, 11],
[12, 13, 14],
[15, 16, 17]]])
Indexing with ranges:
>>> arr.reshape(len(arr), -1, r_size)[:, ranges]
array([[[ 0, 1, 2],
[ 6, 7, 8]],
[[ 9, 10, 11],
[15, 16, 17]]])
Then, reshaping back to the final form:
>>> arr.reshape(len(arr), -1, r_size)[:, ranges].reshape(len(arr), -1)
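which, for the example above, gives the same result:
array([[ 0,  1,  2,  6,  7,  8],
       [ 9, 10, 11, 15, 16, 17]])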
You will inevitably need to copy the data to get the desired result in a contiguous array, but to make it efficient I would suggest minimizing the number of times you copy the data. Any kind of reshaping operation can be expressed with np.lib.stride_tricks.as_strided.
Assume the original array contains 64-bit integers; then each element is 8 bytes, and they are arranged in some shape:
import numpy as np
arr = np.arange(18).reshape((2,9))
arr.shape, arr.strides
output:
((2, 9), (72, 8))
so stepping to the next column skips 8 bytes and stepping to the next row skips 72 bytes. arr.reshape(len(arr), -1, r_size) can be expressed as:
np.lib.stride_tricks.as_strided(arr, (2,3,3), (72,24,8))
output:
array([[[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8]],
[[ 9, 10, 11],
[12, 13, 14],
[15, 16, 17]]])
And arr.reshape(len(arr), -1, r_size)[:, ranges] can be expressed as:
np.lib.stride_tricks.as_strided(arr, (2,2,3), (72,24*2,8))
Output:
array([[[ 0, 1, 2],
[ 6, 7, 8]],
[[ 9, 10, 11],
[15, 16, 17]]])
So far, we have only changed the metadata of the array, which means that no data has been copied. This operation has a near-zero performance cost. But to get the final array you will need to copy the data somehow:
np.lib.stride_tricks.as_strided(arr, (2,2,3), (72,24*2,8)).reshape(len(arr), -1)
Output:
array([[ 0, 1, 2, 6, 7, 8],
[ 9, 10, 11, 15, 16, 17]])
This is not a generalized solution, but it might nonetheless give you some ideas on how to optimize.
Unfortunately, my timings do not back up these claims, but the approach is still intuitive and worth testing on larger arrays.
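If you want to push this further, the strided view can be built from arr's own strides instead of hard-coded numbers. The helper below is only a sketch (take_ranges_strided is a made-up name) and assumes the requested ranges are sorted and evenly spaced; for arbitrary ranges, fancy indexing as in the question is the safer fallback:
import numpy as np

def take_ranges_strided(arr, ranges, r_size):
    # Sketch: assumes `ranges` is sorted and evenly spaced (e.g. [0, 2]).
    step = ranges[1] - ranges[0] if len(ranges) > 1 else 1
    row_stride, col_stride = arr.strides
    view = np.lib.stride_tricks.as_strided(
        arr[:, ranges[0] * r_size:],                 # start at the first requested range
        shape=(arr.shape[0], len(ranges), r_size),
        strides=(row_stride, col_stride * r_size * step, col_stride),
    )
    return view.reshape(arr.shape[0], -1)            # the single copy happens here

arr = np.arange(18).reshape(2, 9)
take_ranges_strided(arr, [0, 2], 3)
# array([[ 0,  1,  2,  6,  7,  8],
#        [ 9, 10, 11, 15, 16, 17]])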
Imagine that you have created an array of shape (100,), calculated something, and filled it. For whatever reason you did not create a 2D array in the first place. What is wrong with wanting to assign another dimension to this data, with the justification that, for example, 250 samples should share this calculated data?
I have searched for this but could not find any solution. Maybe I am not searching with the correct keywords!
Actually I want to reshape a numpy array of (100,) to (250,100).
I have read this link and a couple of other links, but they did not help me.
I have also tried this way:
numpyarray = np.zeros((100,))  # an array of shape (100,)
transformed_numpyarray = np.reshape(numpyarray, (100, -1)).T
whose shape is:
(1, 100)
but I really do not want 1 as the first dimension of the 2D array.
What I'm trying to do is convert it to a 2D shape with 100 as the second dimension, for example (250, 100). 250 is a constant I already know, so I want to express, say, 250 samples each of dimension 100.
Thanks.
I'm still confused about what you are trying to do. So far I can picture two alternatives - reshape and repeat. To illustrate:
In [148]: x = np.arange(16)
In [149]: x
Out[149]: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15])
In [150]: x.reshape(4,4)
Out[150]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
In [151]: np.repeat(x[None,:], 4, axis=0)
Out[151]:
array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]])
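If what you actually want is the (250, 100) case from the question, that is, the same (100,) vector repeated for 250 samples, a broadcast-based variant of the repeat idea might look like this (v is just a stand-in for your data):
import numpy as np

v = np.arange(100)                        # stand-in for the (100,) array
stacked = np.broadcast_to(v, (250, 100))  # read-only view, no data is copied
stacked.shape                             # (250, 100)
# np.tile(v, (250, 1)) gives the same layout as a writable copy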
numpy arrays are statically sized; you can't have an array with a variable shape. If you don't know beforehand how many samples you will have, you can gradually add them with vstack:
In [4]: numpyarray.shape
Out[4]: (3, 4)
In [5]: new_sample.shape
Out[5]: (4,)
In [6]: numpyarray = np.vstack([numpyarray, new_sample])
In [7]: numpyarray.shape
Out[7]: (4, 4)
You can also define the size up front by creating an array full of zeros and then progressively filling it with samples:
numpyarray = np.zeros((250,100))
...
numpyarray[i] = new_sample
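For completeness, a minimal runnable sketch of that fill-as-you-go pattern; compute_sample here is a made-up stand-in for whatever produces each (100,) result:
import numpy as np

n_samples, dim = 250, 100
numpyarray = np.zeros((n_samples, dim))

def compute_sample(i):
    # placeholder for the real per-sample computation
    return np.full(dim, float(i))

for i in range(n_samples):
    numpyarray[i] = compute_sample(i)   # each row holds one (100,) sample

numpyarray.shape                        # (250, 100)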
Say we have
a = numpy.arange(25).reshape(5,5)
> array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]])
By going
numpy.where(a[1])
> array([0, 1, 2, 3, 4])
and then something like
a[1][numpy.where(a[1])]
> array([5, 6, 7, 8, 9])
I can select the horizontal rows of an array and their respective values. However, how can I use a similar where condition to select only specific vertical columns,
i.e.
numpy.where(condition)
> array([1, 6, 11, 16, 21])
I'm not sure exactly if this is what you mean, but you can index columns using [:,column_number], where : stands for "all rows":
a[:,1][numpy.where(a[1])]
# array([ 1, 6, 11, 16, 21])
The above, however, is equivalent to simply a[:,1]:
>>> a[:,1]
array([ 1, 6, 11, 16, 21])
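If you really do want a where-style condition that picks out columns, you can combine a boolean test with column indexing. A small illustration (the condition here is just an arbitrary example):
import numpy as np

a = np.arange(25).reshape(5, 5)
cols = np.where(a[0] % 2 == 1)[0]   # columns whose first-row value is odd: array([1, 3])
a[:, cols]
# array([[ 1,  3],
#        [ 6,  8],
#        [11, 13],
#        [16, 18],
#        [21, 23]])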
Have a look at this tutorial to learn how to apply slicing on numpy arrays (https://docs.scipy.org/doc/numpy-1.15.1/reference/arrays.indexing.html). As for your question, the answer is:
a[:,1]
Given this 2D numpy array:
a=numpy.array([[31,22,43],[44,55,6],[17,68,19],[12,11,18],...,[99,98,97]])
given the need to flatten it using numpy.ravel:
b=numpy.ravel(a)
and given the need to later dump it into a pandas dataframe, how can I make sure the sequential order of the values in a is preserved when applying numpy.ravel? That is, how can I check/ensure that numpy.ravel does not mess up the original sequential order?
Of course the intended result should be that the numbers coming before and after 17 in b, for instance, are the same as in a.
First of all, you need to formulate what "sequential" order means to you, as numpy.ravel() does preserve order. Here is a tip on how to formulate what you need: try the simplest possible toy example:
import numpy as np
X = np.arange(20).reshape(-1,4)
X
#array([[ 0, 1, 2, 3],
# [ 4, 5, 6, 7],
# [ 8, 9, 10, 11],
# [12, 13, 14, 15],
# [16, 17, 18, 19]])
X.ravel()
# array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
# 13, 14, 15, 16, 17, 18, 19])
Does it meet your expectation? Or you want to see this order:
Z = X.T
Z
# array([[ 0, 4, 8, 12, 16],
# [ 1, 5, 9, 13, 17],
# [ 2, 6, 10, 14, 18],
# [ 3, 7, 11, 15, 19]])
Z.ravel()
# array([ 0, 4, 8, 12, 16, 1, 5, 9, 13, 17, 2, 6, 10,
# 14, 18, 3, 7, 11, 15, 19])
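To connect this back to the array in the question: with the default C order, ravel lays the rows out one after another, so the neighbours of any value (17, say) are preserved row-wise, and reshaping with the original shape recovers the array exactly. A quick check on the first rows of your a:
import numpy as np

a = np.array([[31, 22, 43], [44, 55, 6], [17, 68, 19]])   # first rows of the array in the question
b = np.ravel(a)                    # C order (the default): rows laid out one after another
b
# array([31, 22, 43, 44, 55,  6, 17, 68, 19])
np.array_equal(b.reshape(a.shape), a)   # True: the round trip recovers the original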
I'm trying to flatten a 3d array in numpy over an axis (that is, reducing over an axis and flattening over another)
for instance, if I have
X = array(
[[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9,10,11]],
[[12,13,14,15],
[16,17,18,19],
[20,21,22,23]]])
I want to find the operation that turns X in this:
array([
[ 0, 1, 2, 3,12,13,14,15],
[ 4, 5, 6, 7,16,17,18,19],
[ 8, 9,10,11,20,21,22,23]])
I found that in this case np.concatenate((X[0], X[1]), axis=1) gives the solution; however, I want a more generic and efficient way to perform this operation for an N-dimensional numpy array.
Use numpy.transpose:
>>> X.transpose(1, 0, 2).ravel()
array([ 0, 1, 2, 3, 12, 13, 14, 15, 4, 5, 6, 7, 16, 17, 18, 19, 8,
9, 10, 11, 20, 21, 22, 23])
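The ravel above shows the ordering is right; to get the 2D shape asked for (and to generalize to keeping one axis while flattening the rest), follow the transpose with a reshape. A sketch using a small X matching the question:
import numpy as np

X = np.arange(24).reshape(2, 3, 4)

# move the axis to keep (axis 1) to the front, then flatten everything else into the columns
X.transpose(1, 0, 2).reshape(X.shape[1], -1)
# array([[ 0,  1,  2,  3, 12, 13, 14, 15],
#        [ 4,  5,  6,  7, 16, 17, 18, 19],
#        [ 8,  9, 10, 11, 20, 21, 22, 23]])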