What is the difference between contiguous and non-contiguous arrays?

In the numpy manual about the reshape() function, it says
>>> a = np.zeros((10, 2))
# A transpose makes the array non-contiguous
>>> b = a.T
# Taking a view makes it possible to modify the shape without modifying the
# initial object.
>>> c = b.view()
>>> c.shape = (20)
AttributeError: incompatible shape for a non-contiguous array
My questions are:
What are contiguous and non-contiguous arrays? Is this similar to the idea of a contiguous memory block in C, as in What is a contiguous memory block?
Is there any performance difference between these two? When should we use one or the other?
Why does transpose make the array non-contiguous?
Why does c.shape = (20) throw the error incompatible shape for a non-contiguous array?
Thanks for your answer!

A contiguous array is just an array stored in an unbroken block of memory: to access the next value in the array, we just move to the next memory address.
Consider the 2D array arr = np.arange(12).reshape(3,4). It looks like this:
0,  1,  2,  3
4,  5,  6,  7
8,  9, 10, 11
In the computer's memory, the values of arr are stored one after another like this:
0  1  2  3  4  5  6  7  8  9  10  11
This means arr is a C contiguous array because the rows are stored as contiguous blocks of memory. The next memory address holds the next value in that row. If we want to move down a column, we just need to jump over three blocks (e.g. to jump from 0 to 4 means we skip over 1, 2 and 3).
Transposing the array with arr.T means that C contiguity is lost because adjacent row entries are no longer in adjacent memory addresses. However, arr.T is Fortran contiguous since the columns are in contiguous blocks of memory (for instance, column 0 of arr.T is 0, 1, 2, 3, which occupies one unbroken run of the buffer).
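You can check this directly from the flags attribute (a quick sketch; the flag names are the ones NumPy itself reports):
import numpy as np
arr = np.arange(12).reshape(3, 4)
# the original array is C contiguous, its transpose is Fortran contiguous
print(arr.flags['C_CONTIGUOUS'], arr.flags['F_CONTIGUOUS'])      # True False
print(arr.T.flags['C_CONTIGUOUS'], arr.T.flags['F_CONTIGUOUS'])  # False True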
Performance-wise, accessing memory addresses which are next to each other is very often faster than accessing addresses which are more "spread out" (fetching a value from RAM could entail a number of neighbouring addresses being fetched and cached for the CPU.) This means that operations over contiguous arrays will often be quicker.
As a consequence of C contiguous memory layout, row-wise operations are usually faster than column-wise operations. For example, you'll typically find that
np.sum(arr, axis=1) # sum the rows
is slightly faster than:
np.sum(arr, axis=0) # sum the columns
Similarly, operations on columns will be slightly faster for Fortran contiguous arrays.
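If you want to measure this yourself, here is a minimal sketch with timeit (the exact numbers depend on array size, dtype and hardware, and for small arrays the difference can disappear):
import numpy as np
from timeit import timeit
big = np.zeros((1000, 1000))
# row sums walk the C-ordered buffer contiguously; column sums stride across it
print(timeit(lambda: big.sum(axis=1), number=100))
print(timeit(lambda: big.sum(axis=0), number=100))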
Finally, why can't we flatten the Fortran contiguous array by assigning a new shape?
>>> arr2 = arr.T
>>> arr2.shape = 12
AttributeError: incompatible shape for a non-contiguous array
In order for this to be possible NumPy would have to put the rows of arr.T together like this:
0  4  8  1  5  9  2  6  10  3  7  11
(Setting the shape attribute directly assumes C order - i.e. NumPy tries to perform the operation row-wise.)
This is impossible to do. For any axis, NumPy needs to have a constant stride length (the number of bytes to move) to get to the next element of the array. Flattening arr.T in this way would require skipping forwards and backwards in memory to retrieve consecutive values of the array.
If we wrote arr2.reshape(12) instead, NumPy would copy the values of arr2 into a new block of memory (since it can't return a view on to the original data for this shape).
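np.shares_memory makes the view-versus-copy distinction easy to see (a small sketch):
import numpy as np
arr = np.arange(12).reshape(3, 4)
flat_view = arr.reshape(12)     # C contiguous, so a view is possible
flat_copy = arr.T.reshape(12)   # non-contiguous, so NumPy has to copy
print(np.shares_memory(arr, flat_view))   # True
print(np.shares_memory(arr, flat_copy))   # False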

Maybe this example with 12 different array values will help:
In [207]: x=np.arange(12).reshape(3,4).copy()
In [208]: x.flags
Out[208]:
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
...
In [209]: x.T.flags
Out[209]:
C_CONTIGUOUS : False
F_CONTIGUOUS : True
OWNDATA : False
...
The C-order values are in the order they were generated in; the transposed ones are not:
In [212]: x.reshape(12,) # same as x.ravel()
Out[212]: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
In [213]: x.T.reshape(12,)
Out[213]: array([ 0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11])
Now make a transposed view, x1, and try changing the shapes in place:
In [214]: x1=x.T
In [217]: x.shape=(12,)
The shape of x itself can be changed in place, since its data buffer is contiguous.
In [220]: x1.shape=(12,)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-220-cf2b1a308253> in <module>()
----> 1 x1.shape=(12,)
AttributeError: incompatible shape for a non-contiguous array
But the shape of the transpose cannot be changed in place. The data is still in the 0,1,2,3,4... order, which can't be accessed as 0,4,8,... by a 1d view.
But a copy of x1 can be changed:
In [227]: x2=x1.copy()
In [228]: x2.flags
Out[228]:
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
...
In [229]: x2.shape=(12,)
Looking at strides might also help. A stride is how far (in bytes) NumPy has to step to get to the next value. For a 2d array, there will be 2 stride values:
In [233]: x=np.arange(12).reshape(3,4).copy()
In [234]: x.strides
Out[234]: (16, 4)
To get to the next row, step 16 bytes; to the next column, only 4.
In [235]: x1.strides
Out[235]: (4, 16)
Transposing just switches the order of the strides. For x1 the next row is only 4 bytes away, i.e. the next number in the buffer.
In [236]: x.shape=(12,)
In [237]: x.strides
Out[237]: (4,)
Changing the shape also changes the strides - just step through the buffer 4 bytes at a time.
In [238]: x2=x1.copy()
In [239]: x2.strides
Out[239]: (12, 4)
Even though x2 looks just like x1, it has its own data buffer, with the values in a different order. The next column is now 4 bytes over, while the next row is 12 (3*4).
In [240]: x2.shape=(12,)
In [241]: x2.strides
Out[241]: (4,)
And as with x, changing the shape to 1d reduces the strides to (4,).
For x1, with data in the 0,1,2,... order, there isn't a 1d stride that would give 0,4,8....
__array_interface__ is another useful way of displaying array information:
In [242]: x1.__array_interface__
Out[242]:
{'strides': (4, 16),
'typestr': '<i4',
'shape': (4, 3),
'version': 3,
'data': (163336056, False),
'descr': [('', '<i4')]}
The x1 data buffer address will be the same as for x, with which it shares the data; x2 has a different buffer address.
You could also experiment with adding an order='F' parameter to the copy and reshape commands.
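For example, a minimal sketch of that experiment (dtype pinned to int32 so the byte counts match the 4-byte values above; the flag and stride values shown in comments are what I'd expect, not copied from the original session):
xf = np.arange(12, dtype=np.int32).reshape(3, 4).copy(order='F')
xf.flags              # C_CONTIGUOUS : False, F_CONTIGUOUS : True
xf.strides            # (4, 12) - the next row is 4 bytes away, the next column 12
xf.ravel(order='F')   # array([ 0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11], dtype=int32)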

Related

Apply matrix dot between a list of matrices and a list of vectors in Numpy

Let's suppose I have these two variables
matrices = np.random.rand(4,3,3)
vectors = np.random.rand(4,3,1)
What I would like to perform is the following:
dot_products = [matrix @ vector for (matrix, vector) in zip(matrices, vectors)]
Therefore, I've tried using the np.tensordot method, which at first seemed to make sense, but this happened when testing
>>> np.tensordot(matrices,vectors,axes=([-2,-1],[-2,-1]))
...
ValueError: shape-mismatch for sum
>>> np.tensordot(matrices,vectors,axes=([-2,-1]))
...
ValueError: shape-mismatch for sum
Is it possible to achieve these multiple dot products with the mentioned Numpy method? If not, is there another way that I can accomplish this using Numpy?
The documentation for @ is found at np.matmul. It is specifically designed for this kind of 'batch' processing:
In [76]: matrices = np.random.rand(4,3,3)
...: vectors = np.random.rand(4,3,1)
In [77]: dot_products = [matrix @ vector for (matrix,vector) in zip(matrices,vectors)]
In [79]: np.array(dot_products).shape
Out[79]: (4, 3, 1)
In [80]: (matrices @ vectors).shape
Out[80]: (4, 3, 1)
In [81]: np.allclose(np.array(dot_products), matrices @ vectors)
Out[81]: True
A couple of problems with tensordot. The axes parameter specifies which dimensions are summed ("dotted"). In your case it would be the last axis of matrices and the 2nd-to-last of vectors. That's the standard dot pairing.
In [82]: np.dot(matrices, vectors).shape
Out[82]: (4, 3, 4, 1)
In [84]: np.tensordot(matrices, vectors, (-1,-2)).shape
Out[84]: (4, 3, 4, 1)
You tried to specify 2 pairs of axes for summing. Also, dot/tensordot does a kind of outer product over the remaining dimensions, so you'd have to take the "diagonal" on the 4's (sketched below). tensordot is not what you want for this operation.
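To make that concrete, a small sketch: the batched result is hiding on the diagonal of the first and third axes of the dot output, but you pay for the full (4, 3, 4, 1) array first.
full = np.dot(matrices, vectors)            # shape (4, 3, 4, 1)
diag = full.diagonal(axis1=0, axis2=2)      # shape (3, 1, 4), batch axis moved to the end
batched = np.moveaxis(diag, -1, 0)          # back to (4, 3, 1)
np.allclose(batched, matrices @ vectors)    # True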
We can be more explicit about the dimensions with einsum:
In [83]: np.einsum('ijk,ikl->ijl',matrices, vectors).shape
Out[83]: (4, 3, 1)

What does layout = torch.strided mean?

As I was going through the pytorch documentation I came across the term layout = torch.strided in many of the functions. Can anyone help me understand where it is used and how? The description says it's the desired layout of the returned Tensor. What does layout mean and how many types of layout are there?
torch.rand(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)
Stride is the number of steps (or jumps) needed to go from one element to the next element in a given dimension. In computer memory, the data is stored linearly in a contiguous block of memory. What we view is just a (re)presentation.
Let's take an example tensor for understanding this:
# a 2D tensor
In [62]: tensor = torch.arange(1, 16).reshape(3, 5)
In [63]: tensor
Out[63]:
tensor([[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10],
[11, 12, 13, 14, 15]])
With this tensor in place, the strides are:
# get the strides
In [64]: tensor.stride()
Out[64]: (5, 1)
What this resultant tuple (5, 1) says is:
to traverse along the 0th dimension/axis (Y-axis), let's say we want to jump from 1 to 6, we should take 5 steps (or jumps)
to traverse along the 1st dimension/axis (X-axis), let's say we want to jump from 7 to 8, we should take 1 step (or jump)
The order (or index) of 5 & 1 in the tuple represents the dimension/axis. You can also pass the dimension, for which you want the stride, as an argument:
# get stride for axis 0
In [65]: tensor.stride(0)
Out[65]: 5
# get stride for axis 1
In [66]: tensor.stride(1)
Out[66]: 1
With that understanding, we might ask why this extra parameter is needed when we create the tensors. The answer is efficiency: how can we store/read/access the elements of a (possibly sparse) tensor most efficiently?
With sparse tensors (tensors where most of the elements are just zeroes), we don't want to store all those zeroes; we only store the non-zero values and their indices. Given the desired shape, the rest of the values can then be filled in as zeroes, yielding the desired sparse tensor.
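As a rough illustration (a minimal sketch; torch.sparse_coo_tensor builds a tensor in the sparse COO layout from explicit indices and values, while ordinary constructors default to torch.strided):
import torch
# two non-zero entries, at positions (0, 2) and (1, 0) of a 2x3 tensor
indices = torch.tensor([[0, 1],
                        [2, 0]])
values = torch.tensor([3.0, 4.0])
sparse = torch.sparse_coo_tensor(indices, values, (2, 3))
print(sparse.layout)            # torch.sparse_coo
print(sparse.to_dense())        # tensor([[0., 0., 3.], [4., 0., 0.]])
print(torch.rand(2, 3).layout)  # torch.strided (the default, dense layout)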
For further reading on this, the following articles might be of help:
numpy.ndarray.strides
torch.layout
torch.sparse
P.S: I guess there's a typo in the torch.layout documentation which says
Strides are a list of integers ...
The composite data type returned by tensor.stride() is a tuple, not a list.
For quick understanding, layout=torch.strided corresponds to dense tensors while layout=torch.sparse_coo corresponds to sparse tensors.
From another perspective, we can understand it together with torch.tensor.view.
That a tensor can be viewed indicates it is contiguous. If we change the view of a tensor, the strides change accordingly, but the data stays the same. More specifically, view returns a new tensor with the same data but a different shape, and the strides are made compatible with that view to indicate how to access the data in memory.
For example
In [1]: import torch
In [2]: a = torch.arange(15)
In [3]: a.data_ptr()
Out[3]: 94270437164688
In [4]: a.stride()
Out[4]: (1,)
In [5]: a = a.view(3, 5)
In [6]: a.data_ptr() # share the same data pointer
Out[6]: 94270437164688
In [7]: a.stride() # the stride changes as the view changes
Out[7]: (5, 1)
In addition, the idea of torch.strided is basically the same as strides in numpy.
View this question for more detailed understanding.
How to understand numpy strides for layman?
As per the official pytorch documentation here,
A torch.layout is an object that represents the memory layout of a torch.Tensor. Currently, we support torch.strided (dense Tensors) and have experimental support for torch.sparse_coo (sparse COO Tensors).
torch.strided represents dense Tensors and is the memory layout that is most commonly used. Each strided tensor has an associated torch.Storage, which holds its data. These tensors provide a multi-dimensional, strided view of a storage. Strides are a list of integers: the k-th stride represents the jump in the memory necessary to go from one element to the next one in the k-th dimension of the Tensor. This concept makes it possible to perform many tensor operations efficiently.
Example:
>>> x = torch.Tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
>>> x.stride()
(5, 1)
>>> x.t().stride()
(1, 5)
layout means the way memory organizes the elements of that tensor. I think there are currently 2 types of layout for storing tensors:
one is torch.strided and the other is torch.sparse_coo.
strided means the elements are arranged one by one in a very dense way; think of troops standing in a square formation, so each soldier actually has neighbours.
sparse_coo, I think, is meant for sparse matrices; I'm not sure of the exact storage structure, but I guess it just stores the non-zero elements' indices and values.
The two types need to be kept separate because for a sparse matrix there is no need to arrange the elements one by one in a dense form; it might take maybe a hundred steps to get from one non-zero element to the next.

Multiply arrays along a given axis [duplicate]

It seems I am getting lost in something potentially silly.
I have an n-dimensional numpy array, and I want to multiply it with a vector (1d array) along some dimension (which can change!).
As an example, say I want to multiply a 2d array by a 1d array along axis 0 of the first array, I can do something like this:
a=np.arange(20).reshape((5,4))
b=np.ones(5)
c=a*b[:,np.newaxis]
Easy, but I would like to extend this idea to n-dimensions (for a, while b is always 1d) and to any axis. In other words, I would like to know how to generate a slice with the np.newaxis at the right place. Say that a is 3d and I want to multiply along axis=1, I would like to generate the slice which would correctly give:
c=a*b[np.newaxis,:,np.newaxis]
I.e. given the number of dimensions of a (say 3), and the axis along which I want to multiply (say axis=1), how do I generate and pass the slice:
np.newaxis,:,np.newaxis
Thanks.
Solution Code -
import numpy as np
# Given axis along which elementwise multiplication with broadcasting
# is to be performed
given_axis = 1
# Create an array which would be used to reshape 1D array, b to have
# singleton dimensions except for the given axis where we would put -1
# signifying to use the entire length of elements along that axis
dim_array = np.ones((1,a.ndim),int).ravel()
dim_array[given_axis] = -1
# Reshape b with dim_array and perform elementwise multiplication with
# broadcasting along the singleton dimensions for the final output
b_reshaped = b.reshape(dim_array)
mult_out = a*b_reshaped
Sample run for a demo of the steps -
In [149]: import numpy as np
In [150]: a = np.random.randint(0,9,(4,2,3))
In [151]: b = np.random.randint(0,9,(2,1)).ravel()
In [152]: whos
Variable Type Data/Info
-------------------------------
a ndarray 4x2x3: 24 elems, type `int32`, 96 bytes
b ndarray 2: 2 elems, type `int32`, 8 bytes
In [153]: given_axis = 1
Now, we would like to perform elementwise multiplications along given axis = 1. Let's create dim_array:
In [154]: dim_array = np.ones((1,a.ndim),int).ravel()
...: dim_array[given_axis] = -1
...:
In [155]: dim_array
Out[155]: array([ 1, -1, 1])
Finally, reshape b & perform the elementwise multiplication:
In [156]: b_reshaped = b.reshape(dim_array)
...: mult_out = a*b_reshaped
...:
Check out the whos info again and pay special attention to b_reshaped & mult_out:
In [157]: whos
Variable Type Data/Info
---------------------------------
a ndarray 4x2x3: 24 elems, type `int32`, 96 bytes
b ndarray 2: 2 elems, type `int32`, 8 bytes
b_reshaped ndarray 1x2x1: 2 elems, type `int32`, 8 bytes
dim_array ndarray 3: 3 elems, type `int32`, 12 bytes
given_axis int 1
mult_out ndarray 4x2x3: 24 elems, type `int32`, 96 bytes
Avoid copying data and wasting resources!
Utilizing broadcasting and views, instead of actually copying the data N times into a new array with the appropriate shape (as existing answers do), is way more memory efficient. Here is such a method (based on @ShuxuanXU's code):
import numpy as np

def mult_along_axis(A, B, axis):
    # ensure we're working with Numpy arrays
    A = np.array(A)
    B = np.array(B)

    # shape check
    if axis >= A.ndim:
        raise np.AxisError(axis, A.ndim)
    if A.shape[axis] != B.size:
        raise ValueError(
            "Length of 'A' along the given axis must be the same as B.size"
        )

    # np.broadcast_to puts the new axis as the last axis, so
    # we swap the given axis with the last one, to determine the
    # corresponding array shape. np.swapaxes only returns a view
    # of the supplied array, so no data is copied unnecessarily.
    shape = np.swapaxes(A, A.ndim - 1, axis).shape

    # Broadcast to an array with the shape as above. Again,
    # no data is copied, we only get a new look at the existing data.
    B_brc = np.broadcast_to(B, shape)

    # Swap back the axes. As before, this only changes our "point of view".
    B_brc = np.swapaxes(B_brc, A.ndim - 1, axis)

    return A * B_brc
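Hypothetical usage, checked against the newaxis approach from the question (the variable names are mine; this assumes the function above plus import numpy as np):
a = np.arange(24).reshape(4, 2, 3)
b = np.array([10., 20.])
out = mult_along_axis(a, b, axis=1)
ref = a * b[np.newaxis, :, np.newaxis]
print(np.allclose(out, ref))   # True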
You could build a slice object, and select the desired dimension in that:
import numpy as np
a = np.arange(18).reshape((3,2,3))
b = np.array([1,3])
ss = [None] * a.ndim
ss[1] = slice(None)  # set the dimension along which to broadcast
print(ss)  # [None, slice(None, None, None), None]
c = a * b[tuple(ss)]
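Wrapped up as a small hypothetical helper (my naming), this is the same idea for an arbitrary axis:
def expand_for_axis(b, ndim, axis):
    # build an index like (None, ..., slice(None), ..., None)
    ss = [None] * ndim
    ss[axis] = slice(None)
    return b[tuple(ss)]

c = a * expand_for_axis(b, a.ndim, 1)   # same result as above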
I had a similar need when I was working on some numerical calculations.
Let's assume we have two arrays (A and B) and a user-specified 'axis'.
A is a multi-dimensional array.
B is a 1-d array.
The basic idea is to expand B so that A and B have the same shape. Here is the solution code
import numpy as np
from numpy.core._internal import AxisError

def multiply_along_axis(A, B, axis):
    A = np.array(A)
    B = np.array(B)
    # shape check
    if axis >= A.ndim:
        raise AxisError(axis, A.ndim)
    if A.shape[axis] != B.size:
        raise ValueError("'A' and 'B' must have the same length along the given axis")
    # Expand 'B' according to 'axis':
    # 1. Swap the given axis with axis=0 (just need the swapped 'shape' tuple here)
    swapped_shape = A.swapaxes(0, axis).shape
    # 2. Repeat:
    #    loop through the number of A's dimensions, at each step:
    #    a) repeat 'B':
    #       the number of repetitions = the length of 'A' along the
    #       current looping step;
    #       the axis along which the values are repeated is always axis=0,
    #       because 'B' initially has just 1 dimension
    #    b) reshape 'B':
    #       'B' is then reshaped as the shape of 'A', but this 'shape' only
    #       contains the dimensions that have been counted by the loop
    for dim_step in range(A.ndim - 1):
        B = B.repeat(swapped_shape[dim_step + 1], axis=0)\
             .reshape(swapped_shape[:dim_step + 2])
    # 3. Swap the axis back to ensure the returned 'B' has exactly the
    #    same shape as 'A'
    B = B.swapaxes(0, axis)
    return A * B
And here is an example
In [33]: A = np.random.rand(3,5)*10; A = A.astype(int); A
Out[33]:
array([[7, 1, 4, 3, 1],
[1, 8, 8, 2, 4],
[7, 4, 8, 0, 2]])
In [34]: B = np.linspace(3,7,5); B
Out[34]: array([3., 4., 5., 6., 7.])
In [35]: multiply_along_axis(A, B, axis=1)
Out[35]:
array([[21., 4., 20., 18., 7.],
[ 3., 32., 40., 12., 28.],
[21., 16., 40., 0., 14.]])
Simplifying @Neinstein's solution, I arrived at
def multiply_along_axis(A, B, axis):
    return np.swapaxes(np.swapaxes(A, axis, -1) * B, -1, axis)
This example also avoids copying and wasting memory. The explicit broadcasting is avoided by swapping the desired axis in A to the last position, performing the multiplication, and then swapping the axis back to its original position. The additional advantage is that numpy takes care of the error handling and type conversion.
You could also use a simple matrix trick
c = np.matmul(a, np.diag(b))
basically just doing matrix multiplication between a and a matrix whose diagonal is the elements of b. Maybe not as efficient, but it's a nice single-line solution.
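For example (a small sketch; note this scales along the last axis of a, i.e. the axis=1 case for a 2d array, and it builds an n x n diagonal matrix, so it is wasteful for long b):
a = np.arange(20).reshape(5, 4)
b = np.arange(1, 5)                          # length must match a.shape[1]
print(np.allclose(a @ np.diag(b), a * b))    # True - the columns of a are scaled by b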

PyArray_New or PyArray_SimpleNewFromData specify dimensions for a 3D array

I have a 1-dimensional float array (from C space) which I want to read in Python space with zero copy. So far what I have done (reading SO mostly) is:
// wrap c++ array as numpy array
//From Max http://stackoverflow.com/questions/10701514/how-to-return-numpy-array-from-boostpython
boost::python::object exposeNDarray(float * result, long size) {
npy_intp shape[1] = { size }; // array size
PyObject* obj = PyArray_SimpleNewFromData(1, shape, NPY_FLOAT, result);
/*PyObject* obj = PyArray_New(&PyArray_Type, 1, shape, NPY_FLOAT, // data type
NULL, result, // data pointer
0, NPY_ARRAY_CARRAY_RO, // NPY_ARRAY_CARRAY_RO for readonly
NULL);*/
handle<> array( obj );
return object(array);
}
The PyArray_New commented part is equivalent in functionality to the PyArray_SimpleNewFromData one.
My problem is that this 1-dimensional array should actually be a 3-dimensional ndarray. I can control how my result float array is constructed, and I want, if possible, that contiguous block of memory to be interpreted as a 3-dimensional array.
I think this can be done by specifying the shape variable, but I can't find any reference to how the memory is going to be interpreted.
Say I need my array to look like np.empty((x,y,z)). When I specify that in the shape variable, what section of my result array would make up the first dimension, what section the second, and so on?
There's documentation that describes the layout of a numpy array, e.g. https://docs.scipy.org/doc/numpy/reference/arrays.html
but maybe a simple example will help.
Let's make a 1d array of 24 integers, and reshape it to a 3d shape. If 'reshape' doesn't make sense, you'll need to review some array basics, including the notion of a view versus copy.
In [226]: arr = np.arange(24).reshape(2,3,4)
In [227]: arr
Out[227]:
array([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]],
[[12, 13, 14, 15],
[16, 17, 18, 19],
[20, 21, 22, 23]]])
A handy way of seeing the basic attributes of this array is this dictionary:
In [228]: arr.__array_interface__
Out[228]:
{'data': (159342384, False),
'descr': [('', '<i4')],
'shape': (2, 3, 4),
'strides': None,
'typestr': '<i4',
'version': 3}
data identifies the location of the data buffer that actually stores the values. In your construction this will be your C array (or a copy).
In this case it is a buffer of 96 bytes - 4 bytes per element. This buffer was created by the arange function, and 'reused' by the reshape.
In [229]: arr.tostring()
Out[229]: b'\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\x06\x00\x00\x00\x07\x00\x00\x00\x08\x00\x00\x00\t\x00\x00\x00\n\x00\x00\x00\x0b\x00\x00\x00\x0c\x00\x00\x00\r\x00\x00\x00\x0e\x00\x00\x00\x0f\x00\x00\x00\x10\x00\x00\x00\x11\x00\x00\x00\x12\x00\x00\x00\x13\x00\x00\x00\x14\x00\x00\x00\x15\x00\x00\x00\x16\x00\x00\x00\x17\x00\x00\x00'
In [230]: len(_)
Out[230]: 96
In [231]: 24*4
Out[231]: 96
The descr or arr.dtype identifies how the bytes are interpreted - here as a 4-byte little-endian integer, '<i4'.
shape and strides determine how the 1d array is viewed - in this case as a 3d array.
In [232]: arr.strides
Out[232]: (48, 16, 4)
In [233]: arr.shape
Out[233]: (2, 3, 4)
This says that the first dimension (plane) is 48 bytes long, and there are 2 of them. The 2nd (each row) is 16 bytes long, and the step between column elements is 4 bytes.
By simply changing the strides and shape, a 1d array can be viewed as 2d, 3d. Even the array transpose is implemented by changing shape and strides (and another attribute, order) .
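To answer the 'which section goes where' part concretely, here is a small NumPy sketch (float32 as a stand-in for your C float buffer): with the default C order, the last dimension varies fastest, so each consecutive run of z values is one innermost row and each run of y*z values is one plane.
import numpy as np
x, y, z = 2, 3, 4
flat = np.arange(x * y * z, dtype=np.float32)   # stand-in for the 1d C array
arr3d = flat.reshape(x, y, z)                   # zero-copy view, C order by default
# element [i, j, k] comes from flat[i*y*z + j*z + k]
print(arr3d[1, 2, 3], flat[1*y*z + 2*z + 3])    # 23.0 23.0
print(np.shares_memory(flat, arr3d))            # True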
You can use pybind11 for this. You can actually base yourself off a unit test that takes a C array and reads from it as a numpy view.

Append numpy array into an element

I have a Numpy array of shape (5,5,3,2). I want to take the element (1,4) of that matrix, which is also a matrix of shape (3,2), and add an element to it - so it becomes a (4,2) array.
The code I'm using is the following:
import numpy as np
a = np.random.rand(5,5,3,2)
a = np.array(a, dtype = object) #So I can have different size sub-matrices
a[2][3] = np.append(a[2][3],[[1.0,1.0]],axis=0) #a[2][3] shape = (3,2)
I'm always obtaining the error:
ValueError: could not broadcast input array from shape (4,2) into shape (3,2)
I understand that the shape returned by the np.append function is not the same as the a[2][3] sub-array, but I thought that the dtype=object would solve my problem. However, I need to do this. Is there any way to get around this limitation?
I also tried to use the insert function but I don't know how could I add the element in the place I want.
Make sure you understand what you have produced. That requires checking the shape and dtype, and possibly looking at the values:
In [29]: a = np.random.rand(5,5,3,2)
In [30]: b=np.array(a, dtype=object)
In [31]: a.shape
Out[31]: (5, 5, 3, 2) # a is a 4d array
In [32]: a.dtype
Out[32]: dtype('float64')
In [33]: b.shape
Out[33]: (5, 5, 3, 2) # so is b
In [34]: b.dtype
Out[34]: dtype('O')
In [35]: b[2,3].shape
Out[35]: (3, 2)
In [36]: c=np.append(b[2,3],[[1,1]],axis=0)
In [37]: c.shape
Out[37]: (4, 2)
In [38]: c.dtype
Out[38]: dtype('O')
b[2][3] is also an array. b[2,3] is the proper numpy way of indexing 2 dimensions.
I suspect you wanted b to be a (5,5) array containing arrays (as objects), and you think that you can simply replace one of those with a (4,2) array. But the b constructor simply changes the floats of a to objects, without changing the shape (or 4d nature) of b.
I could construct a (5,5) object array, and fill it with values from a. And then replace one of those values with a (4,2) array:
In [39]: B=np.empty((5,5),dtype=object)
In [40]: for i in range(5):
...: for j in range(5):
...: B[i,j]=a[i,j,:,:]
...:
In [41]: B.shape
Out[41]: (5, 5)
In [42]: B.dtype
Out[42]: dtype('O')
In [43]: B[2,3]
Out[43]:
array([[ 0.03827568, 0.63411023],
[ 0.28938383, 0.7951006 ],
[ 0.12217603, 0.304537 ]])
In [44]: B[2,3]=c
In [46]: B[2,3].shape
Out[46]: (4, 2)
This constructor for B is a bit crude. I've answered other questions about creating/filling object arrays, but I'm not going to take the time here to streamline this case. It's for illustration purposes only.
In an object array, any element can indeed be an array (or any other kind of object).
import numpy as np
a = np.random.rand(5,5,3,2)
a = np.array(a, dtype=object)
# Assign an 1D array to the array element ``a[2][3][0][0]``:
a[2][3][0][0] = np.arange(10)
a[2][3][0][0][9] # 9
However a[2][3] is not an array element, it is a whole array.
a[2][3].ndim # 2
Therefore when you do a[2][3] = (something) you are using broadcasting instead of assigning an element: numpy tries to replace the content of the subarray a[2][3] and fails because of the shape mismatch. The memory layout of numpy arrays does not allow changing the shape of subarrays.
Edit: Instead of using numpy arrays you could use nested lists. These nested lists can have arbitrary sizes. Note that memory use and access time are higher compared to numpy arrays.
import numpy as np
a = np.random.rand(5,5,3,2)
a = np.array(a, dtype=object)
b = np.append(a[2][3], [[1.0,1.0]],axis=0)
a_list = a.tolist()
a_list[2][3] = b.tolist()
The problem here is that you try to assign to a[2][3].
Make a new array instead.
new_array = np.append(a[2][3],np.array([[1.0,1.0]]),axis=0)
