Calculating a vector with its transposed vector - python

I'm working on a within-scatter matrix calculation where I have a 50x20 array, and something that occurred to me is that multiplying the transposed array by the original array gives me a dimension error saying the following:
operands could not be broadcast together with shapes (50,20) (20,50)
What I tried is array = my_array * my_array_transposed, which gave the aforementioned error.
The alternative was then to do:
new_array = np.dot(my_array, np.transpose(my_array))
In Octave, for instance, this would have been a lot easier, but due to the size of the array it's hard for me to check against ground truth whether this is the right way to do the calculation, because as far as I know it matters whether the multiplication is element-wise or a matrix product.
My question is: am I applying the formula the right way? If not, what's the right way to multiply a transposed vector by the non-transposed vector?

Yes, the np.dot version is the correct one. If you write array = my_array * my_array_transposed you are asking Python to perform element-wise multiplication, which requires matching shapes. What you need is a row-by-column (matrix) multiplication, which is achieved in numpy with np.dot.
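A minimal sketch of the difference, assuming a (50, 20) array like the one described above (my_array is a stand-in name):

import numpy as np

my_array = np.random.rand(50, 20)

# Element-wise product fails: shapes (50, 20) and (20, 50) cannot be broadcast
# my_array * my_array.T

# Matrix product: (50, 20) times (20, 50) gives a (50, 50) result
new_array = np.dot(my_array, my_array.T)
print(new_array.shape)   # (50, 50)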

Related

Householder Reflection for QR Decomposition

I am trying to implement the QR decomposition via Householder reflectors. While attempting this on a very simple array, I am getting weird numbers. Anyone who can also tell me why using the @ vs the * operator between vec and vec.T on the last line of the function definition gets major bonus points.
This has stumped two math/comp-sci PhDs as of this morning.
import numpy as np

def householder(vec):
    vec[0] += np.sign(vec[0])*np.linalg.norm(vec)
    vec = vec/vec[0]
    gamma = 2/(np.linalg.norm(vec)**2)
    return np.identity(len(vec)) - gamma*(vec*vec.T)

array = np.array([1, 3, 4])
Q = householder(array)
print(Q@array)
Output:
array([-4.06557377, -7.06557377, -6.06557377])
Where it should be:
array([5.09, 0, 0])
* is element-wise multiplication, @ is matrix multiplication. Both have their uses, but for matrix calculations you most likely want the matrix product.
vec.T on a 1-D array returns the same array: a simple array has only one dimension, so there is nothing to transpose, and vec*vec.T just returns the element-wise squared array.
You might want to use vec = vec.reshape(-1, 1) to get a proper column vector, i.e. a one-column matrix. Then vec*vec.T does "by accident" the correct thing via broadcasting. You might want to put the matrix multiplication operator there anyway.
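A short sketch of that point (not part of the original answer), showing how the reshape turns the broadcasted product into a genuine outer product:

import numpy as np

vec = np.array([1., 3., 4.])

# 1-D array: .T is a no-op, so this is just the element-wise square
print(vec * vec.T)        # [ 1.  9. 16.]

# Column vector (3x1 matrix): both forms now yield the 3x3 outer product
col = vec.reshape(-1, 1)
print(col * col.T)        # broadcasting happens to give the outer product
print(col @ col.T)        # explicit matrix product, same result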

How to solve numpy matrix multiplication error

w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
w1 = w.T
print(np.matmul(X*w1))
This code gives the following error:
ValueError: operands could not be broadcast together with shapes (2,3) (1,2)
How can I solve it?
Matrix multiplication is not your problem here; it is the element-wise multiplication you are trying to do first: X*w1. That is not possible: to multiply two arrays element-wise they must have the same shape, or broadcasting must apply, and for broadcasting to work each axis has to match or have length one. That is not the case for shapes (2,3) and (1,2).
It seems what you are actually trying to do is matrix multiplication. That takes the two matrices directly, so you should not multiply them element-wise first. Also, for two matrices to be multiplied this way, the number of columns of the first matrix needs to equal the number of rows of the second. So the following would work and is probably what you are trying to do:
np.matmul(w1, X)
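For reference, with the arrays from the question the shapes line up as (1, 2) times (2, 3), giving a (1, 3) result:

import numpy as np

w, X = np.array([[1.], [2.]]), np.array([[1., 2., -1.], [3., 4., -3.2]])
w1 = w.T                      # shape (1, 2)

result = np.matmul(w1, X)     # (1, 2) @ (2, 3) -> (1, 3)
print(result)                 # values 7.0, 10.0, -7.4 in shape (1, 3)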

Normalize 2D matrix using scalar multiplication in numpy

I have a matrix thing that looks like this:
thing.shape
(8070829, 2)
and I want to scale all elements by scalingfactor = np.iinfo(np.int16).max / thing.max() to normalize the values. Right now I am iterating over all elements, which works but is really slow:
for j, sample in enumerate(thing):
    thing[j] = [int(sample[0] * scalingfactor), int(sample[1] * scalingfactor)]
I thought I could do the following, but the results are not the same:
np.multiply(thing, scalingfactor)
Is there are more efficient way to normalize a matrix?
Use vectorized element-wise multiplication and then change the dtype (the integer cast drops the fractional part, which floors non-negative values) -
(thing*scalingfactor).astype(int) # for thing as array type
Or use np.floor on the scaled version -
np.floor(thing*scalingfactor)
Using the code posted in the question: np.multiply(thing, scalingfactor) would work too; it just needs the additional flooring/casting step, as suggested earlier.
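A small sketch of the vectorized version against a toy array (the shape and values are stand-ins for the question's data):

import numpy as np

thing = np.random.rand(1000, 2) * 500.0           # stand-in for the (8070829, 2) array
scalingfactor = np.iinfo(np.int16).max / thing.max()

scaled_int = (thing * scalingfactor).astype(int)  # vectorized scale, then cast to int
scaled_flr = np.floor(thing * scalingfactor)      # or keep floats and floor explicitly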

python vector * vector --> matrix

In the Python Computer Graphics Kit, there is a vec3 type for representing three-component vectors, but how can I do the following multiplication:
Multiplying a three-component vector by its transpose should give a 3x3 matrix, as in the following example:
a = vec3(1,1,1)
matrix_m = a * a.transpose()
Does anyone know of a library that can multiply a 3x1 column vector by a 1x3 row vector and produce a 3x3 matrix?
Sorry, I have to clarify a bit more: I am talking about matrix math.
It is like:
[a0, a1, a2] * [a0, a1, a2]^T = [a0*a0, a0*a1, a0*a2;
                                 a1*a0, a1*a1, a1*a2;
                                 a2*a0, a2*a1, a2*a2]
Maybe I can just write a function myself; it is straightforward enough.
Some vector-math software, such as MATLAB, happily keeps track of column vectors and row vectors as separate kinds of things. Python's numpy doesn't, but it does offer numpy.outer(A, B). Unfortunately, the Graphics Kit (I assume you refer to http://cgkit.sourceforge.net/) doesn't track rows vs. columns, doesn't use numpy (which would be huge overkill here), and doesn't provide a vector x vector --> matrix outer product. It looks like you'll have to write your own function to do that.
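A possible sketch, assuming you either drop to numpy or hand-roll the outer product with plain lists (outer3 is a made-up helper name, not part of cgkit):

import numpy as np

# With numpy: the outer product directly
print(np.outer([1, 1, 1], [1, 1, 1]))     # 3x3 matrix, every entry 1

# Hand-rolled version over any three-component sequences (e.g. a cgkit vec3)
def outer3(u, v):
    return [[u[i] * v[j] for j in range(3)] for i in range(3)]

print(outer3([1, 2, 3], [1, 2, 3]))       # nested lists forming the 3x3 result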

Numpy linalg on multidimensional arrays

Is there a way to use numpy.linalg.det or numpy.linalg.inv on an nx3x3 array (a line in a multiband image), for example? Right now I am doing something like:
det = numpy.array([numpy.linalg.det(i) for i in X])
but surely there is a more efficient way. Of course, I could use map:
det = numpy.array(map(numpy.linalg.det, X))
Any other more direct way?
I'm pretty sure there is no substantially more efficient way than what you have. You can save some memory by first creating an empty array for the results and writing all results directly to that array:
res = numpy.empty_like(X)
for i, A in enumerate(X):
    res[i] = numpy.linalg.inv(A)
This won't be any faster, though -- it will only use less memory.
a "normal" determinant is only defined for a matrix (dimension=2), so if that's what you want i don't see another way.
if you really want to compute the determinant of a cube then you could try to implement one of the ways described here:
http://en.wikipedia.org/wiki/Hyperdeterminant
notice that it is not necessarily the same value as the one you're currently computing.
New answer to an old question: since version 1.8.0, numpy supports evaluating a batch of 2D matrices. For a batch of MxM matrices, the input and output now look like this (from the numpy documentation):
linalg.det(a)
    Compute the determinant of an array.

    Parameters:
        a : (…, M, M) array_like
            Input array to compute determinants for.

    Returns:
        det : (…) array_like
            Determinant of a.
Note the ellipsis: there can be multiple "batch dimensions", so you can, for example, evaluate determinants over a meshgrid.
https://numpy.org/doc/stable/reference/generated/numpy.linalg.det.html
https://numpy.org/doc/stable/reference/generated/numpy.linalg.inv.html
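A brief sketch of the batched call on an n x 3 x 3 stack like the one in the question (requires numpy >= 1.8.0):

import numpy as np

X = np.random.rand(10, 3, 3)   # a stack of ten 3x3 matrices

dets = np.linalg.det(X)        # shape (10,): one determinant per matrix
invs = np.linalg.inv(X)        # shape (10, 3, 3): one inverse per matrix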
