Couldn't transpose numpy 3D array - python

I can't transpose this np.array:
import numpy as np
arr = np.arange(16).reshape((2, 2, 4))
print(arr)
arr.transpose(1, 0, 2)
print('------------')
print(arr)
output:
[[[ 0  1  2  3]
  [ 4  5  6  7]]

 [[ 8  9 10 11]
  [12 13 14 15]]]
------------
[[[ 0  1  2  3]
  [ 4  5  6  7]]

 [[ 8  9 10 11]
  [12 13 14 15]]]
I think that's weird; here is the same example elsewhere, and there it works. I'm using numpy==1.17.2. What could be wrong?

transpose does not modify the array in place; it returns a transposed view, which this code discards. Try 'arr = arr.transpose(1, 0, 2)' in place of 'arr.transpose(1, 0, 2)'. You can also print the result directly with 'print(arr.transpose(1, 0, 2))' in place of 'print(arr)'.
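A minimal sketch of that fix, using the same example array (the key point being that the transposed result has to be kept):

import numpy as np

arr = np.arange(16).reshape((2, 2, 4))
arr = arr.transpose(1, 0, 2)   # keep the returned view; nothing happens otherwise
print(arr)
# [[[ 0  1  2  3]
#   [ 8  9 10 11]]
#
#  [[ 4  5  6  7]
#   [12 13 14 15]]]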

Related

Transformation of the 3d numpy array

I have a 3d array and I need to set its right part to zero. For each 2d slice (n, :, :) of the array, a column index should be taken from the vector b. This index defines the separating point between the left and right parts, as shown in the example below.
a_before = [[[ 1  2  3  4]
             [ 5  6  7  8]
             [ 9 10 11 12]
             [13 14 15 16]]
            [[17 18 19 20]
             [21 22 23 24]
             [25 26 27 28]
             [29 30 31 32]]
            [[33 34 35 36]
             [37 38 39 40]
             [41 42 43 44]
             [45 46 47 48]]]
a_before.shape = (3, 4, 4)
b = (2, 3, 1)
a_after_1 = [[[ 1  2  0  0]
              [ 5  6  0  0]
              [ 9 10  0  0]
              [13 14  0  0]]
             [[17 18 19  0]
              [21 22 23  0]
              [25 26 27  0]
              [29 30 31  0]]
             [[33  0  0  0]
              [37  0  0  0]
              [41  0  0  0]
              [45  0  0  0]]]
After this, for each 2d slice (n, :, :) I have to take a column index from the vector c and multiply that column by the corresponding value taken from the vector d.
c = (1, 2, 0)
d = (50, 100, 150)
a_after_2 = [[[   1  100    0    0]
              [   5  300    0    0]
              [   9  500    0    0]
              [  13  700    0    0]]
             [[  17   18 1900    0]
              [  21   22 2300    0]
              [  25   26 2700    0]
              [  29   30 3100    0]]
             [[4950    0    0    0]
              [5550    0    0    0]
              [6150    0    0    0]
              [6750    0    0    0]]]
I did it but my version looks ugly. Maybe someone can help me.
P.S. I would like to avoid for loops and use only numpy methods.
Thank You.
Here's a version without loops.
In [232]: A = np.arange(1,49).reshape(3,4,4)
In [233]: b = np.array([2,3,1])
In [234]: d = np.array([50,100,150])
In [235]: I,J = np.nonzero(b[:,None]<=np.arange(4))
In [236]: A[I,:,J]=0
In [237]: A[np.arange(3),:,b-1] *= d[:,None]
In [238]: A
Out[238]:
array([[[   1,  100,    0,    0],
        [   5,  300,    0,    0],
        [   9,  500,    0,    0],
        [  13,  700,    0,    0]],

       [[  17,   18, 1900,    0],
        [  21,   22, 2300,    0],
        [  25,   26, 2700,    0],
        [  29,   30, 3100,    0]],

       [[4950,    0,    0,    0],
        [5550,    0,    0,    0],
        [6150,    0,    0,    0],
        [6750,    0,    0,    0]]])
Before I developed this, I wrote an iterative version. It helped me visualize the problem.
In [240]: Ac = np.arange(1,49).reshape(3,4,4)

In [241]: for i,v in enumerate(b):
     ...:     Ac[i,:,v:] = 0
     ...:

In [242]: for i,(bi,di) in enumerate(zip(b,d)):
     ...:     Ac[i,:,bi-1] *= di
It may be easier to understand, and in that sense, less ugly!
The fact that your A has a middle dimension that is just going along for the ride complicates vectorizing the problem.
With a (3,4) 2d array, the solution is just:
In [251]: Ab = Ac[:,0,:]
In [252]: Ab[b[:,None]<=np.arange(4)]=0
In [253]: Ab[np.arange(3),b-1]*=d
Here it is:
import numpy as np

a = np.arange(1,49).reshape(3,4,4)
b = np.array([2,3,1])
c = np.array([1,2,0])
d = np.array([50,100,150])

# zero out columns b[i] and beyond in each 2d slice
for i in range(len(b)):
    a[i,:,b[i]:] = 0
# multiply column c[i] of each 2d slice by d[i]
for i,j in enumerate(c):
    a[i,:,j] = a[i,:,j] * d[i]

print(a)
# output:
[[[   1  100    0    0]
  [   5  300    0    0]
  [   9  500    0    0]
  [  13  700    0    0]]

 [[  17   18 1900    0]
  [  21   22 2300    0]
  [  25   26 2700    0]
  [  29   30 3100    0]]

 [[4950    0    0    0]
  [5550    0    0    0]
  [6150    0    0    0]
  [6750    0    0    0]]]

Vectorized method to compute Hankel matrix for multi-input, multi-output data sequence

I'm constructing a Hankel matrix and wondered if there's a way to further vectorize the following computation (i.e. without for loops or list comprehensions).
import numpy as np

# Imagine this is some time-series data
q = 2   # Number of inputs
p = 2   # Number of outputs
nt = 6  # Number of timesteps
y = np.array(range(p*q*nt)).reshape([nt, p, q]).transpose()
assert y.shape == (q, p, nt)
print(y.shape)
(2, 2, 6)
print(y[:,:,0])
[[0 2]
 [1 3]]
print(y[:,:,1])
[[4 6]
 [5 7]]
print(y[:,:,2])
[[ 8 10]
 [ 9 11]]
Desired results
# Desired Hankel matrix
m = n = 3  # dimensions
assert nt >= m + n
h = np.zeros((q*m, p*n), dtype=int)
for i in range(m):
    for j in range(n):
        h[q*i:q*(i+1), p*j:p*(j+1)] = y[:, :, i+j]
print(h)
[[ 0  2  4  6  8 10]
 [ 1  3  5  7  9 11]
 [ 4  6  8 10 12 14]
 [ 5  7  9 11 13 15]
 [ 8 10 12 14 16 18]
 [ 9 11 13 15 17 19]]
(Note how the 2x2 blocks are stacked)
# Alternative method using stacking
b = np.hstack([y[:,:,i] for i in range(y.shape[2])])
assert b.shape == (q, p*nt)
print(b)
[[ 0  2  4  6  8 10 12 14 16 18 20 22]
 [ 1  3  5  7  9 11 13 15 17 19 21 23]]
h = np.vstack([b[:, i*p:i*p+n*q] for i in range(m)])
print(h)
[[ 0  2  4  6  8 10]
 [ 1  3  5  7  9 11]
 [ 4  6  8 10 12 14]
 [ 5  7  9 11 13 15]
 [ 8 10 12 14 16 18]
 [ 9 11 13 15 17 19]]
You can use stride_tricks:
>>> from numpy.lib.stride_tricks import as_strided
>>>
>>> a = np.arange(20).reshape(5,2,2)
>>> s0,s1,s2 = a.strides
>>> as_strided(a,(3,2,3,2),(s0,s2,s0,s1)).reshape(6,6)
array([[ 0,  2,  4,  6,  8, 10],
       [ 1,  3,  5,  7,  9, 11],
       [ 4,  6,  8, 10, 12, 14],
       [ 5,  7,  9, 11, 13, 15],
       [ 8, 10, 12, 14, 16, 18],
       [ 9, 11, 13, 15, 17, 19]])
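The (5, 2, 2) array above stores the transpose of each y[:, :, k] block, which is why s1 and s2 are swapped in the stride tuple. As a rough sketch (not from the original answer), the same trick can be applied to the y from the question by moving the time axis to the front and using that view's own strides:

import numpy as np
from numpy.lib.stride_tricks import as_strided

q, p, nt = 2, 2, 6
m = n = 3
y = np.array(range(p*q*nt)).reshape([nt, p, q]).transpose()  # (q, p, nt), as in the question

a = y.transpose(2, 0, 1)            # (nt, q, p): time axis first
t0, t1, t2 = a.strides
# view[i, r, j, c] reads a[i+j, r, c] == y[r, c, i+j], i.e. block (i, j) of the Hankel matrix
h = as_strided(a, (m, q, n, p), (t0, t1, t0, t2)).reshape(q*m, p*n)
print(h)
# [[ 0  2  4  6  8 10]
#  [ 1  3  5  7  9 11]
#  [ 4  6  8 10 12 14]
#  [ 5  7  9 11 13 15]
#  [ 8 10 12 14 16 18]
#  [ 9 11 13 15 17 19]]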

method for cells adjacent/connected to vertex in fipy?

Is there such a function or easy method?
The only functions I have found so far are mesh.vertexCoords and mesh.faceVertexIDs, but I couldn't quite figure out whether they might help me.
As the comments suggest, the vertex to cell data shouldn't usually be required in a finite volume scheme. However, the following is a solution for finding the vertex to cell IDs given the cell to vertex IDs. The cell to vertex data is available in FiPy with the mesh._cellVertexIDs array.
The following uses sparse matrices to represent the cell to vertex link and then a transpose to find the vertex to cell links.
from fipy import Grid2D
import numpy as np
from scipy.sparse import coo_matrix
import itertools
def lists_to_numpy(x):
    """List of lists of different length to Numpy array. See
    https://stackoverflow.com/questions/38619143/convert-python-sequence-to-numpy-array-filling-missing-values

    >>> lists_to_numpy([[0], [0, 1], [0, 1, 2]])
    array([[ 0, -1, -1],
           [ 0,  1, -1],
           [ 0,  1,  2]])

    """
    return np.array(list(itertools.zip_longest(*x, fillvalue=-1))).T

def invert_sparse_bool(x, mshape):
    """Invert a sparse bool matrix represented by a 2D array and return as
    an inverted 2D array.

    >>> a = np.array([[0, 2], [1, 3], [0, 3], [3, 4]])
    >>> print(invert_sparse_bool(a, (4, 5)))
    [[ 0  2 -1]
     [ 1 -1 -1]
     [ 0 -1 -1]
     [ 1  2  3]
     [ 3 -1 -1]]

    """
    arr1 = np.indices(x.shape)[0]
    arr2 = np.stack((arr1, x), axis=-1)
    arr3 = np.reshape(arr2, (-1, 2))
    lists = coo_matrix(
        (np.ones(len(arr3), dtype=int),
         (arr3[:, 0], arr3[:, 1])),
        shape=mshape
    ).tolil().T.rows
    return lists_to_numpy(lists)

m = Grid2D(nx=3, ny=3)
cellVertexIDs = m._cellVertexIDs.swapaxes(0, 1)
vertexCellIDs = invert_sparse_bool(
    cellVertexIDs,
    (m.numberOfCells, m.numberOfVertices)
)
print('cellVertexIDs:', m._cellVertexIDs)
print('vertexCellIDs:', vertexCellIDs)
Note that m._cellVertexIDs has shape (maxNumberOfVerticesPerCell, numberOfCells), but it's a little easier to implement when it is reshaped. The new vertexCellIDs array is shaped as (numberOfVertices, maxNumberOfCellsPerVertex). The vertexCellIDs do need a fill value, as each vertex won't be connected to the same number of cells.
The output from this is
cellVertexIDs: [[ 1  5  4  0]
 [ 2  6  5  1]
 [ 3  7  6  2]
 [ 5  9  8  4]
 [ 6 10  9  5]
 [ 7 11 10  6]
 [ 9 13 12  8]
 [10 14 13  9]
 [11 15 14 10]]
vertexCellIDs: [[ 0 -1 -1 -1]
 [ 0  1 -1 -1]
 [ 1  2 -1 -1]
 [ 2 -1 -1 -1]
 [ 0  3 -1 -1]
 [ 0  1  3  4]
 [ 1  2  4  5]
 [ 2  5 -1 -1]
 [ 3  6 -1 -1]
 [ 3  4  6  7]
 [ 4  5  7  8]
 [ 5  8 -1 -1]
 [ 6 -1 -1 -1]
 [ 6  7 -1 -1]
 [ 7  8 -1 -1]
 [ 8 -1 -1 -1]]
which makes sense to me for a 3x3 mesh with 9 cells and 16 vertices and an ordered numbering system for both cells and vertices (left to right, bottom to top).

Convert matrix to tuples

Say I generate a sequence of values, tile it over a given number of rows, increment each value in each row by that row's ID, and then mask the values outside of a desired range, like below:
>>> import numpy as np
>>> import numpy.ma as ma
>>> range = 5
>>> matrix = np.arange(-5, 10, 1)
>>> matrix = np.tile(matrix, (range, 1))
>>> matrix = np.add(matrix, np.arange(0, range)[:, None])
>>> matrix = ma.masked_outside(matrix, 0, 10)
>>> print(matrix)
[[-- -- -- -- -- 0 1 2 3 4 5 6 7 8 9]
[-- -- -- -- 0 1 2 3 4 5 6 7 8 9 10]
[-- -- -- 0 1 2 3 4 5 6 7 8 9 10 --]
[-- -- 0 1 2 3 4 5 6 7 8 9 10 -- --]
[-- 0 1 2 3 4 5 6 7 8 9 10 -- -- --]]
How would you best convert the above output to a matrix of the format [non-masked value, row-id], i.e.:
[0,0], [1, 0], [2,0] ... [10, 4]
Also, is the original code too wasteful to achieve the final desired step?
Playing around with your matrix I produced this:
In [50]: np.stack((matrix.compressed(), np.where(~matrix.mask)[0]),1)
Out[50]:
array([[ 0,  0],
       [ 1,  0],
       [ 2,  0],
       [ 3,  0],
       [ 4,  0],
       [ 5,  0],
       [ 6,  0],
       [ 7,  0],
       [ 8,  0],
       [ 9,  0],
       ....
We could probably skip the masked array step and create the mask directly. The compressed values, for example, are just matrix.data[~matrix.mask].
In [52]: mask = ~matrix.mask
In [53]: data = matrix.data
In [54]: np.stack((data[mask], np.where(mask)[0]), 1)
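A minimal sketch of that idea with no masked array at all, assuming the same -5..9 base values, 5 rows, and 0..10 bounds as in the question:

import numpy as np

vals = np.arange(-5, 10) + np.arange(5)[:, None]   # each row shifted by its row ID
keep = (vals >= 0) & (vals <= 10)                  # build the mask directly
pairs = np.stack((vals[keep], np.where(keep)[0]), 1)
print(pairs[0], pairs[-1])   # [0 0] ... [10  4]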

More numpy way of iterating through the 'orthogonal' diagonals of a 2D array

I have the following code that iterates along the diagonals that are orthogonal to the diagonals normally returned by np.diagonal. It starts at position (0, 0) and works its way towards the lower right coordinate.
The code works as intended, but it is not very numpy-like with all its loops, and it is inefficient in that it has to create many arrays to do the trick.
So I wonder if there is a nicer way to do this, because I don't see how I would stride my array or use the diagonal-methods of numpy to do it in a nicer way (though I expect there are some tricks I fail to see).
import numpy as np

A = np.zeros((4,5))
grid_shape = A.shape

#Construct a distance array of the same size that uses (0, 0) as origo
#and evaluates distances along the first and second dimensions slightly
#differently so that no two values in the array are the same
D = np.zeros(A.shape)
for i in range(D.shape[0]):
    for j in range(D.shape[1]):
        D[i, j] = i * (1 + 1.0 / (grid_shape[0] + 1)) + j

print(D)
#[[ 0.   1.   2.   3.   4. ]
# [ 1.2  2.2  3.2  4.2  5.2]
# [ 2.4  3.4  4.4  5.4  6.4]
# [ 3.6  4.6  5.6  6.6  7.6]]

#Make a flat sorted copy
rD = D.ravel().copy()
rD.sort()

#Just to show how it works, assigning incrementing values while
#iterating along the 'orthogonal' diagonals starting at the (0, 0) position
for i, v in enumerate(rD):
    A[D == v] = i

print(A)
#[[  0.   1.   3.   6.  10.]
# [  2.   4.   7.  11.  14.]
# [  5.   8.  12.  15.  17.]
# [  9.  13.  16.  18.  19.]]
Edit
To clarify, I want to iterate element-wise through the entire A, but do so in the order the code above produces (which is displayed in the final print).
It is not important which direction the iteration goes along the diagonals (if 1 and 2 switched places, and 3 and 5, etc. in A), only that the diagonals are orthogonal to the main diagonal of A (the one produced by np.diag(A)).
The application/reason for this question is in my previous question (in the solution part at the bottom of that question): Constructing a 2D grid from potentially incomplete list of candidates
Here is a way that avoids Python for-loops.
First, let's look at our addition tables:
import numpy as np
grid_shape = (4,5)
N = np.prod(grid_shape)
y = np.add.outer(np.arange(grid_shape[0]),np.arange(grid_shape[1]))
print(y)
# [[0 1 2 3 4]
#  [1 2 3 4 5]
#  [2 3 4 5 6]
#  [3 4 5 6 7]]
The key idea is that if we visit the sums in the addition table in order, we would be iterating through the array in the desired order.
We can find out the indices associated with that order using np.argsort:
idx = np.argsort(y.ravel())
print(idx)
# [ 0 1 5 2 6 10 3 7 11 15 4 8 12 16 9 13 17 14 18 19]
idx is golden. It is essentially everything you need to iterate through any 2D array of shape (4,5), since a 2D array is just a 1D array reshaped.
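As a small illustration (not part of the original answer), indexing the flattened array with idx visits the elements anti-diagonal by anti-diagonal; with D = np.arange(20).reshape(4, 5) the result is just idx again, because flat element j of that array equals j:

D = np.arange(20).reshape(4, 5)   # any (4, 5) array would do
print(D.ravel()[idx])             # its elements in the desired order
# [ 0  1  5  2  6 10  3  7 11 15  4  8 12 16  9 13 17 14 18 19]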
If your ultimate goal is to generate the array A that you show above at the end of your post, then you could use argsort again:
print(np.argsort(idx).reshape(grid_shape[0],-1))
# [[ 0  1  3  6 10]
#  [ 2  4  7 11 14]
#  [ 5  8 12 15 17]
#  [ 9 13 16 18 19]]
Or, alternatively, if you need to assign other values to A, perhaps this would be more useful:
A = np.zeros(grid_shape)
A1d = A.ravel()
A1d[idx] = np.arange(N) # you can change np.arange(N) to any 1D array of shape (N,)
print(A)
# [[ 0.  1.  3.  6. 10.]
#  [ 2.  4.  7. 11. 14.]
#  [ 5.  8. 12. 15. 17.]
#  [ 9. 13. 16. 18. 19.]]
I know you asked for a way to iterate through your array, but I wanted to show the above because generating arrays through whole-array assignment or numpy function calls (like np.argsort), as done above, will probably be faster than using a Python loop. But if you need to use a Python loop, then:
for i, j in enumerate(idx):
    A1d[j] = i
print(A)
# [[ 0.  1.  3.  6. 10.]
#  [ 2.  4.  7. 11. 14.]
#  [ 5.  8. 12. 15. 17.]
#  [ 9. 13. 16. 18. 19.]]
>>> D
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19]])
>>> D[::-1].diagonal(offset=1)
array([16, 12,  8,  4])
>>> D[::-1].diagonal(offset=-3)
array([0])
>>> np.hstack([D[::-1].diagonal(offset=-x) for x in np.arange(-4,4)])[::-1]
array([ 0,  1,  5,  2,  6, 10,  3,  7, 11, 15,  4,  8, 12, 16,  9, 13, 17,
       14, 18, 19])
Simpler as long as it is not a large matrix.
I'm not sure if this is what you really want, but maybe:
>>> import numpy as np
>>> ar = np.random.random((4,4))
>>> ar
array([[ 0.04844116,  0.10543146,  0.30506354,  0.4813217 ],
       [ 0.59962641,  0.44428831,  0.16629692,  0.65330539],
       [ 0.61854927,  0.6385717 ,  0.71615447,  0.13172049],
       [ 0.05001291,  0.41577457,  0.5579213 ,  0.7791656 ]])
>>> ar.diagonal()
array([ 0.04844116, 0.44428831, 0.71615447, 0.7791656 ])
>>> ar[::-1].diagonal()
array([ 0.05001291, 0.6385717 , 0.16629692, 0.4813217 ])
Edit
As a general solution, for arbitrarily shaped arrays, you can use
import numpy as np

shape = tuple([np.random.randint(3,10) for i in range(2)])
ar = np.arange(np.prod(shape)).reshape(shape)
out = np.hstack([ar[::-1].diagonal(offset=x)
                 for x in np.arange(-ar.shape[0]+1, ar.shape[1])])
print(ar)
print(out)
giving, for example
[[ 0  1  2  3  4]
 [ 5  6  7  8  9]
 [10 11 12 13 14]
 [15 16 17 18 19]
 [20 21 22 23 24]]
[ 0  5  1 10  6  2 15 11  7  3 20 16 12  8  4 21 17 13  9 22 18 14 23 19 24]
