I'm trying to apply a world transformation to a NumPy array of vertices. However, I can't seem to find a NumPy way to multiply a 4x4 matrix by an Nx4 array of vertices, where N is the number of vertices.
I have tried both Nx4x4 @ Nx4 and 4x4 @ Nx4 multiplications. Sure, I could do this element-wise, but I'm hoping there's a smarter way to do it.
vertices = np.ones([VERTEX_COUNT, 4])
vertices[:, 0:3] = vertex_map[element.path_vertices]
matrix = np.full([VERTEX_COUNT, 4, 4], np.reshape(element.matrix, [4, 4]))
transformed = matrix @ vertices  # dimension mismatch
# i would rather not do this
# matrix = np.reshape(element.matrix, [4, 4])
# transformed = np.array([matrix @ vertex for vertex in vertices])
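For what it's worth, the batched product the first attempt was reaching for does work once the vertices get an explicit vector axis; a sketch using the question's matrix of shape (N, 4, 4) and vertices of shape (N, 4):
transformed = np.einsum('nij,nj->ni', matrix, vertices)   # one 4x4 per vertex
# or, with matmul: (matrix @ vertices[:, :, None])[:, :, 0]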
Related
I would like to apply the same 3x3 matrix to a large list of points stored in an array of shape (40000, 3). The code below does the job, but it is too slow. Are there any numpy tricks I can use to eliminate the for loop and the append calls?
def apply_matrix_to_shape(Matrix, Points):
    """Input a desired transformation and an array of points that are in
    the format np.array([[x1, y1, z1], [x2, y2, z2], ...]). Will output
    a new array of transformed points with the same format."""
    New_shape = np.array([])
    M = Matrix
    for p in Points:
        New_shape = np.append(New_shape, [p[0]*M[0][0] + p[1]*M[0][1] + p[2]*M[0][2],
                                          p[0]*M[1][0] + p[1]*M[1][1] + p[2]*M[1][2],
                                          p[0]*M[2][0] + p[1]*M[2][1] + p[2]*M[2][2]])
    Rows = int(len(New_shape) / 3)
    return np.reshape(New_shape, (Rows, 3))
You basically want the matrix multiplication of both arrays (not an element-wise one). You just need to transpose so the shapes are aligned, and transpose the result back:
m.dot(p.T).T
Or equivalently:
(m @ p.T).T
m = np.random.random((3,3))
p = np.random.random((15,3))
np.allclose((m @ p.T).T, apply_matrix_to_shape(m, p))
# True
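The same transpose trick answers the original 4x4 question without building an Nx4x4 stack; a minimal sketch, assuming element.matrix holds the 16 entries of the transform:
matrix = np.reshape(element.matrix, (4, 4))   # single 4x4 world transform
transformed = vertices @ matrix.T             # shape (N, 4); same as (matrix @ vertices.T).T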
Indeed, I think what you want is one of the main reasons NumPy came to exist. You can use the dot product function and the transpose function (simply .T or .transpose()):
import numpy as np
points = np.array([[1, 2, 3],
                   [4, 5, 6]])
T_matrix = np.array([[1, 2, 3],
                     [4, 5, 6],
                     [7, 8, 9]])
result = points.dot(T_matrix.T)
print(result)
>>> [[ 14  32  50]
     [ 32  77 122]]
Getting the inverse of a diagonal matrix is very simple and does not require complex methods. Does scipy.linalg.inv check whether the matrix is diagonal before it applies more complex methods or do I need to check this myself?
As you can see in the GitHub code of scipy.linalg.inv, the function inv first calls
getrf, getri, getri_lwork = get_lapack_funcs(('getrf', 'getri','getri_lwork'),
Then the getrf function does its job and computes the LU decomposition, and so on. Now we have to investigate how getrf produces the LU decomposition, because if it checked whether the matrix is diagonal before processing it, there would be no need to check yourself.
The getrf function is obtained by calling _get_funcs, but I can't go further than that (_get_funcs is called with the following arguments: _get_funcs(names, arrays, dtype, "LAPACK", _flapack, _clapack, "flapack", "clapack", _lapack_alias)).
I suggest you run an experiment with a large diagonal matrix and compare the time linalg takes to produce the output against an inversion by hand.
Update (by question author):
import numpy as np
from scipy.linalg import inv
a = np.diag(np.random.random(19999))   # large diagonal matrix
b = a.copy()
np.fill_diagonal(a, 1/a.diagonal())    # inversion by hand: invert the diagonal in place
c = inv(b)                             # scipy's general-purpose inverse
does not even require a time-measuring tool: it is very obvious that inv is much slower (which is surprisingly disappointing).
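For a rough number, here is a sketch of that comparison with time.perf_counter (the size and timings are illustrative only):
import time
import numpy as np
from scipy.linalg import inv

a = np.diag(np.random.random(5000))

t0 = time.perf_counter()
b = a.copy()
np.fill_diagonal(b, 1 / b.diagonal())   # inversion by hand: invert the diagonal
t1 = time.perf_counter()
c = inv(a)                              # scipy's general LU-based inverse
t2 = time.perf_counter()

print(f"fill_diagonal: {t1 - t0:.4f} s, scipy inv: {t2 - t1:.4f} s")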
Please check: scipy.linalg.inv
You can wrap scipy.linalg.inv in a try/except: it raises LinAlgError when the matrix is singular, and the determinant of a singular matrix is zero.
try:
    inverse = scipy.linalg.inv(your_matrix)   # the call that may throw
except np.linalg.LinAlgError as err:
    # The matrix is singular:
    # its determinant is equal to zero,
    # and it does not have an inverse.
    # (Note: singularity alone tells you the matrix is not invertible,
    # not whether it is diagonal.)
    pass
If the determinant of a matrix is equal to zero:
The matrix is less than full rank. The matrix is singular. The matrix
does not have an inverse.
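A small sketch of that determinant check (in floating point, compare with a tolerance rather than testing == 0):
import numpy as np
m = np.array([[1., 2.],
              [2., 4.]])                    # rank-deficient example
print(np.isclose(np.linalg.det(m), 0.0))   # True: singular, no inverse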
Or, to check manually:
def is_diagonal(matrix):
    # Create a dummy matrix
    dummy_matrix = np.ones(matrix.shape, dtype=np.uint8)
    # Fill the diagonal of the dummy matrix with 0
    np.fill_diagonal(dummy_matrix, 0)
    return np.count_nonzero(np.multiply(dummy_matrix, matrix)) == 0
diagonal_matrix = np.array([[3, 0, 0],
                            [0, 7, 0],
                            [0, 0, 4]])
print(is_diagonal(diagonal_matrix))
>>> True
random_matrix = np.array([[3, 8, 0],
                          [1, 7, 8],
                          [5, 0, 4]])
print(is_diagonal(random_matrix))
>>> False
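The same test can also be written without the dummy matrix; a sketch that subtracts off the diagonal and checks what is left:
def is_diagonal_alt(matrix):
    # np.diag extracts the diagonal, then rebuilds a purely diagonal matrix,
    # so any off-diagonal entry survives the subtraction
    return np.count_nonzero(matrix - np.diag(np.diag(matrix))) == 0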
scipy.sparse.dia_matrix.diagonal (a method available on every sparse format) returns the k-th diagonal of the matrix.
from scipy.sparse import csr_matrix
A = csr_matrix([[1, 2, 0], [0, 0, 3], [4, 0, 5]])
A.diagonal()
array([1, 0, 5])
A.diagonal(k=1)
array([2, 3])
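For a sparse matrix, the diagonality check itself can be done by counting; a sketch comparing the stored nonzeros against the nonzeros on the main diagonal:
import numpy as np
is_diag = A.count_nonzero() == np.count_nonzero(A.diagonal())
print(is_diag)   # False for the A above, which has off-diagonal entries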
Also, scipy.linalg.block_diag places its input arrays along the diagonal of a block-diagonal matrix; if the input arrays are not square, the result cannot be a diagonal matrix.
Also note that in Jupyter you can measure a function's run time with %timeit your_function_name().
I would like to create a lower triangular matrix with unit diagonal elements from a vector.
From a vector
[a_21, a_31, a_32, ..., a_N1, ... , a_N(N-1)]
how can I convert it into a lower triangular matrix with unit diagonal elements, of the form
[[1, 0, ..., 0], [a_21, 1, ..., 0], [a_31, a_32, 1, ..., 0], ..., [a_N1, a_N2, ... , a_N(N-1), 1]]
So far, with NumPy:
import numpy as np

N = 4                              # target matrix size
X = np.arange(1, N*(N-1)//2 + 1)   # example input vector [a_21, a_31, a_32, ...]
A = np.eye(N)
idx = np.tril_indices(N, k=-1)     # strict lower-triangle indices, row-major
A[idx] = X
TensorFlow, however, doesn't support item assignment. I think fill_triangular or tf.reshape could help solve the problem, but I'm not sure how to do it.
I found the similar question and answer:
Packing array into lower triangular of a tensor
Based on the page above, I made a function which transforms a vector into a lower triangular matrix with unit diagonal elements:
def flat_to_mat_TF(vector, n):
    idx = list(zip(*np.tril_indices(n, k=-1)))
    idx = tf.constant([list(i) for i in idx], dtype=tf.int64)
    values = tf.constant(vector, dtype=tf.float32)
    dense = tf.sparse_to_dense(sparse_indices=idx, output_shape=[n, n],
                               sparse_values=values, default_value=0,
                               validate_indices=True)
    mat = tf.matrix_set_diag(dense, tf.cast(tf.tile([1], [n]), dtype=tf.float32))
    return mat
If the input vector is already a Tensor, the values = tf.constant(...) line can be eliminated.
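Note that tf.sparse_to_dense and tf.matrix_set_diag are TF1-era names; here is a sketch of the same construction with the current API (tf.scatter_nd and tf.linalg.set_diag), where the helper name flat_to_unit_lower is purely illustrative:
import numpy as np
import tensorflow as tf

def flat_to_unit_lower(vector, n):
    idx = np.stack(np.tril_indices(n, k=-1), axis=-1)   # strict lower-triangle indices
    dense = tf.scatter_nd(idx, vector, shape=[n, n])    # scatter values below the diagonal
    return tf.linalg.set_diag(dense, tf.ones([n], dtype=vector.dtype))

mat = flat_to_unit_lower(tf.constant([2., 3., 4.]), 3)  # [[1,0,0],[2,1,0],[3,4,1]]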
You could use fill_triangular_inverse on an ascending array (e.g. one made with np.arange).
That gives you the positions where the entries end up in the lower triangle, so you can apply them to your array to re-sort it and pass the result to fill_triangular, as in the sketch below.
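A minimal sketch of that permutation trick, assuming TensorFlow Probability (tfp.math.fill_triangular / fill_triangular_inverse) and TF2 eager execution; since fill_triangular expects n*(n+1)/2 values (diagonal included), unit diagonal entries are spliced into the row-major vector first:
import numpy as np
import tensorflow_probability as tfp

n = 4
m = n * (n + 1) // 2

# Matrix whose lower triangle holds the row-major positions 0..m-1:
template = np.zeros((n, n))
template[np.tril_indices(n)] = np.arange(m)

# For each slot of fill_triangular's input, the row-major position it fills:
perm = tfp.math.fill_triangular_inverse(template).numpy().astype(int)

# Row-major lower-triangular values, with ones kept on the diagonal:
a = np.arange(1., n * (n - 1) / 2 + 1.)   # example [a_21, a_31, a_32, ...]
rows, cols = np.tril_indices(n)
v = np.ones(m)
v[rows != cols] = a

mat = tfp.math.fill_triangular(v[perm])   # unit-diagonal lower triangular matrix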
I have a 4x4 identity matrix in numpy and I want to scale the first 3 dimensions by a factor. Currently, the way I am doing it as follows:
# Some scaling factors passed as a parameter by the user
scale = (2, 3, 4)
scale += (1,) # extend the tuple
my_mat = scale * np.eye(4)
Out of curiosity, I was wondering if there is some way to do this without extending the tuple.
This is quickly done with NumPy's broadcasting rules and indexing:
A = np.eye(4)
scale = [2, 3, 4]
A[:3, :3] *= scale
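Equivalently, the matrix can be built directly from its diagonal with np.diag, with no tuple extension at all (a small sketch):
scale = (2, 3, 4)
my_mat = np.diag(list(scale) + [1])   # 4x4 with diagonal (2, 3, 4, 1)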
I've got an array which contains a bunch of points (3D vectors, specifically):
pts = np.array([
[1, 1, 1],
[2, 2, 2],
[3, 3, 3],
[4, 4, 4],
[5, 5, 5],
])
And I would like to multiply each one of those points by a transformation matrix:
pts[0] = np.dot(transform_matrix, pts[0])
pts[1] = np.dot(transform_matrix, pts[1])
…
pts[n] = np.dot(transform_matrix, pts[n])
How can I do this efficiently?
I find it helps to write the einsum version first; after you see the indices, you can often recognize that there's a simpler version. For example, starting from
>>> pts = np.random.random((5,3))
>>> transform_matrix = np.random.random((3,3))
>>>
>>> pts_brute = pts.copy()
>>> for i in range(len(pts_brute)):
... pts_brute[i] = transform_matrix.dot(pts_brute[i])
...
>>> pts_einsum = np.einsum("ij,kj->ik", pts, transform_matrix)
>>> np.allclose(pts_brute, pts_einsum)
True
you can see this is simply
>>> pts_dot = pts.dot(transform_matrix.T)
>>> np.allclose(pts_brute, pts_dot)
True
Matrix-matrix multiplication can be thought of as "batch-mode" matrix-vector multiplication, where each column in the second matrix is one of the vectors being multiplied by the first, with the result vectors being the columns of the resulting matrix.
Also note that since (AB)^T = B^T A^T, and therefore (by transposing both sides) ((AB)^T)^T = AB = (B^T A^T)^T, you can make a similar statement about the rows of the first matrix being batch-(left-)multiplied by the transpose of the second matrix, with the result vectors being the rows of the matrix product.
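Continuing with the arrays above, the row-wise version of that statement checks out:
>>> np.allclose((transform_matrix @ pts.T).T, pts @ transform_matrix.T)
True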