How can I multiply unaligned numpy matrices in Python?

I've got two numpy matrices: the first, indata has a shape of (2, 0). The second (self.Ws[0] in my code) has a shape of (100, 0).
Is it possible to multiply these matrices by each other?
def Evaluate(self, indata):
    sum = np.dot(self.Ws[0], indata) + self.bs[0]
    self.As[0] = self.sigmoid(sum)
    for i in range(1, len(self.numLayers)):
        sum = np.dot(self.Ws[i], self.As[i-1] + self.bs[i])
        self.As[i] = self.softmax(sum)
    return self.As[len(self.numLayers)-1]
The error I'm getting when running this code is the following:
File "C:/Users/1/PycharmProjects/Assignment4PartC/Program.py", line 28, in main
NN.Train(10000, 0.1)
File "C:\Users\1\PycharmProjects\Assignment4PartC\Network.py", line 53, in Train
self.Evaluate(self.X[i])
File "C:\Users\1\PycharmProjects\Assignment4PartC\Network.py", line 38, in Evaluate
sum = np.dot(self.Ws[0], indata) + self.bs[0]
ValueError: shapes (100,) and (2,) not aligned: 100 (dim 0) != 2 (dim 0)
Hopefully somebody can help me out with this -- any help is appreciated! If anyone needs more granular information about what I'm running, just let me know and I'll update my post.

There is no such thing as shape (N, 0) for an array unless the array is empty. What you have is probably of shape (2,) and (100,). One way of multiplying these objects is:
np.dot(self.Ws[0].reshape((-1, 1)), indata.reshape((1, -1)))
This is going to give you a (100, 2) array.
Whether this is what you want from a mathematical perspective is really hard to say.
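To make that concrete, a small sketch with stand-in vectors of the shapes from the question:

```python
import numpy as np

# Hypothetical stand-ins for indata and self.Ws[0] from the question:
indata = np.arange(2.0)   # shape (2,)
W = np.arange(100.0)      # shape (100,)

# Reshape each 1-D vector to an explicit 2-D column/row before np.dot:
outer = np.dot(W.reshape(-1, 1), indata.reshape(1, -1))
print(outer.shape)  # (100, 2)

# np.outer gives the same result without the manual reshaping:
assert np.array_equal(outer, np.outer(W, indata))
```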

Related

Numpy product of a vector and its transpose

I have an N x N complex NumPy array U_0 and I have to do some manipulations with it:
First, how can I efficiently grow the array by padding it with zeros? I could simply copy it into an np.zeros((2N-1, 2N-1)) array, but maybe you guys know a better method. Thanks to Alexander Riedel for answering this question with the numpy.pad solution.
Second, I tried with
b = np.array([1,2,3])
I saw on previous post to transpose a 1D vector you can do
b_T = b[..., None]
# or
b_T = np.atleast_2d(b).T
but when I try b_T.dot(b) I get shapes (3,1) and (3,) not aligned: 1 (dim 1) != 3 (dim 0). I don't know how to get b into a shape of (1,3) instead of (3,).
Thanks
You can use the expand_dims function to do what you want. The problem here is that numpy does not consider a shape (3, 1) array and a shape (3,) array equivalent. Alternatively, look into the matrix type.
Filling the array with zeros, as the commenters pointed out, is also the answer to your first question. If that is not efficient enough, look into using sparse matrices from scipy; maybe they have the features you're looking for.
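A minimal sketch pulling these pieces together (expand_dims for the transpose question, np.pad for the zero-padding question; the shapes follow the examples above):

```python
import numpy as np

b = np.array([1, 2, 3])

# Row vector (1, 3) via np.expand_dims, column vector (3, 1) via b[..., None]:
b_row = np.expand_dims(b, axis=0)   # shape (1, 3)
b_col = b[..., None]                # shape (3, 1)

# (3, 1) @ (1, 3) -> (3, 3) outer product; the shapes now align for dot:
outer = b_col.dot(b_row)
print(outer.shape)  # (3, 3)

# Zero-padding an N x N array to (2N-1) x (2N-1) with np.pad (N = 4 here):
U = np.ones((4, 4))
U_padded = np.pad(U, ((0, 3), (0, 3)))  # append 3 rows and 3 columns of zeros
print(U_padded.shape)  # (7, 7)
```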

Matrix multiplication python shapes not aligned

I am currently working on an assignment where we have to multiply four matrices (a.T, b, M, a.T) and I keep getting the error "ValueError: shapes (2,1) and (2,1) not aligned: 1 (dim 1) != 2 (dim 0)".
I have tried implementing it without the transpose, which works fine, but as soon as I add the transpose the error shows up, so I am assuming it has something to do with that. I have never programmed in Python before, so I would really appreciate some help.
def matrix_function(M, a, b):
    part1 = a.T.dot(b)
    part2 = M.dot(a.T)
    out = part1.dot(part2)
    return out
with:
M = np.array(range(4)).reshape((2,2))
a = np.array([[1,1]])
b = np.array([[10, 10]]).T
The assignment says the expected result is
(20 100) of shape (2,1) but I get neither the result nor the shape right. Can somebody help me? Thanks in advance
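This question appears here without an answer, but the stated expected result can be reproduced from the inputs given. A minimal sketch (my own reading, not from an included answer), assuming the intended grouping multiplies M.dot(a.T) by the scalar-valued a.dot(b):

```python
import numpy as np

M = np.array(range(4)).reshape((2, 2))
a = np.array([[1, 1]])          # shape (1, 2)
b = np.array([[10, 10]]).T      # shape (2, 1)

# a.dot(b) is a (1, 1) "scalar"; M.dot(a.T) is (2, 1).
# Multiplying in this order keeps every pair of shapes aligned:
out = M.dot(a.T).dot(a.dot(b))
print(out.shape)  # (2, 1), holding the values 20 and 100
```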

Is there a fast way to multiply one axis of a 4D array by elements in a vector of the same length as that axis?

I have two lists of shape (130, 64, 2048); call the dimensions (s, f, b). I also have one vector of length 64, call this v. I need to append these two lists together to make an array of shape (130, 2, 64, 2048) and multiply all 2048 values along the f axis at index i by the i-th value of v.
The output array also needs to have shape (130, 2, 64, 2048).
Obviously these two steps can be done in either order. I want to know the most Pythonic way of doing something like this.
My main issue is that my code takes forever in turning the list into a numpy array which is necessary for some of my calculations. I have:
new_prof = np.asarray( new_prof )
but this seems to take too long for the size and shape of my list. Any thoughts as to how I could initialise this better?
The problem outlined above is shown by my attempt:
# Converted data should have shape (130, 2, 64, 2048)
converted_data = IQUV_to_AABB( data, basis = "cartesian" )
new_converted = np.array((130, 2, 64, 2048))
# I think s.shape is (2, 64, 2048) and cal_fa has length 64
for i, s in enumerate(converted_data):
    aa = np.dot(s[0], cal_fa)
    bb = np.dot(s[1], cal_fb)
    new_converted[i].append((aa, bb))
However, this code doesn't work and I think it's got something to do with the dot product. Maybe??
I would also love to know why the process of changing my list to a numpy array is taking so long.
Try to start small and look at the results in the console:
import numpy as np
x = np.arange(36)
print(x)
y = np.reshape(x, (3, 4, 3))
print(y)
# this is a vector of the same size as dimension 1
a = np.arange(4)
print(a)
# expand and let numpy's broadcasting do the rest
# https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html
# https://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc
b = a[np.newaxis, :, np.newaxis]
print(b)
c = y * b
print(c)
You can read more about np.newaxis in the numpy indexing and broadcasting documentation.
Using numpy.append is rather slow because it has to allocate new memory and copy the whole array each time; a numpy array is a contiguous block of memory.
You might have to use it if you run out of computer memory. But in that case, try to iterate over appropriately sized chunks, as big as your computer can still handle. Rearranging the dimensions is sometimes a way to speed up calculations.
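Applied to the shapes in the question, the same broadcasting idea can combine the two arrays and scale along the f axis. A minimal sketch with random stand-ins for the arrays in the post (shapes scaled down here; the question uses s, f, b = 130, 64, 2048):

```python
import numpy as np

s, f, b = 13, 6, 20
aa_data = np.random.rand(s, f, b)   # stand-in for the first (s, f, b) list
bb_data = np.random.rand(s, f, b)   # stand-in for the second
v = np.random.rand(f)               # the length-f vector

# Stack the two arrays along a new axis 1 -> (s, 2, f, b),
# then broadcast v across the f axis by inserting size-1 axes around it:
combined = np.stack([aa_data, bb_data], axis=1)
result = combined * v[np.newaxis, np.newaxis, :, np.newaxis]
print(result.shape)  # (13, 2, 6, 20)
```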

python numpy ValueError: shapes (171,) and (784,500) not aligned: 171

I have a data vector (1 column) and I'm using
def propup(self, vis):
    pre_sigmoid_activation = numpy.dot(vis, self.W) + self.hbias
    return sigmoid(pre_sigmoid_activation)
but I get the error
ValueError: shapes (171,) and (784,500) not aligned: 171 (dim 0) !=
784 (dim 0)
A dot product between a vector and a matrix requires the inner dimensions to be aligned.
For numpy.dot(vis, self.W), the length of the vector vis must equal the height of the matrix self.W.
In your case vis.shape = (171,), but self.W.shape = (784, 500), meaning the height of the matrix self.W is 784.
In order for numpy.dot to work properly you need to make sure vis.shape = (x, 784), where x is some integer (or simply (784,)).
Without more code I can only guess that you are trying to train a neural net to solve the MNIST problem (the 784 dimension is pretty specific).
So make sure that you are sending the right vector to propup().
In any case, here is a great source on matrix multiplication:
Matrix multiplication: https://www.mathsisfun.com/algebra/matrix-multiplying.html
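To make the alignment rule concrete, a small sketch using the shapes from the question (W and hbias are random stand-ins for self.W and self.hbias):

```python
import numpy as np

W = np.random.rand(784, 500)   # same shape as self.W in the question
hbias = np.zeros(500)

bad = np.random.rand(171)      # shapes (171,) and (784, 500): not aligned
good = np.random.rand(784)     # inner dimensions match: 784 == 784

try:
    np.dot(bad, W)
except ValueError as e:
    print(e)                   # the same "not aligned" error as the question

out = np.dot(good, W) + hbias
print(out.shape)               # (500,)
```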

Multiplying tensors containing images in numpy

I have the following 3rd order tensors. Both hold stacks of matrices: the first contains 100 10x9 matrices and the second contains 100 3x10 matrices (which I have just filled with ones for this example).
My aim is to multiply the matrices as they line up in one-to-one correspondence, which would result in a tensor with shape (100, 3, 9). This can be done with a for loop that just zips up both tensors and takes the dot of each pair, but I am looking to do this with numpy operators alone. So far here are some failed attempts.
Attempt 1:
import numpy as np
T1 = np.ones((100, 10, 9))
T2 = np.ones((100, 3, 10))
print(T2.dot(T1).shape)
Output of attempt 1:
(100, 3, 100, 9)
Which means it tried all possible combinations ... which is not what I am after.
Actually, none of the other attempts even run. I tried using np.tensordot and np.einsum (I read here https://jameshensman.wordpress.com/2010/06/14/multiple-matrix-multiplication-in-numpy that it is supposed to do the job, but I did not get Einstein's indices correct); the same link also shows some crazy tensor-cube reshaping method that I did not manage to visualize. Any suggestions / ideas-explanations on how to tackle this?
Did you try?
In [96]: np.einsum('ijk,ilj->ilk',T1,T2).shape
Out[96]: (100, 3, 9)
The way I figure this out is look at the shapes:
(100, 10, 9)  (i, j, k)
(100, 3, 10) (i, l, j)
-------------
(100, 3, 9) (i, l, k)
the two j indices are summed over and drop out. The others carry through to the output.
For 4d arrays, with dimensions like (100,3,2,24 ) there are several options:
Reshape to 3d, T1.reshape(300, 2, 24), and after the multiplication reshape back with R.reshape(100, 3, ...). Reshape is virtually costless and a good numpy tool.
Add an index to einsum: np.einsum('hijk,hilj->hilk',T1,T2), just a parallel usage to that of i.
Or use an ellipsis: np.einsum('...jk,...lj->...lk', T1, T2). This expression works with 3d, 4d, and higher-dimensional arrays.
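As an aside (not part of the original answer), np.matmul, i.e. the @ operator, also handles this case directly, since it broadcasts over the leading batch dimension:

```python
import numpy as np

T1 = np.ones((100, 10, 9))
T2 = np.ones((100, 3, 10))

# np.matmul treats the leading axis as a batch dimension, so each
# (3, 10) matrix multiplies its matching (10, 9) matrix:
result = T2 @ T1
print(result.shape)  # (100, 3, 9)

# Same result as the einsum expression from the answer:
assert np.allclose(result, np.einsum('ijk,ilj->ilk', T1, T2))
```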
