I have a data vector (1 column) and I'm using
def propup(self, vis):
    pre_sigmoid_activation = numpy.dot(vis, self.W) + self.hbias
    return sigmoid(pre_sigmoid_activation)
but I am getting this error:
ValueError: shapes (171,) and (784,500) not aligned: 171 (dim 0) != 784 (dim 0)
A dot product between a vector and a matrix is only defined when their shapes line up: for np.dot(vis, self.W), the length of the vector has to equal the number of rows of the matrix.
In your case vis.shape = (171,), but self.W.shape = (784, 500), meaning self.W expects vectors of length 784.
In order for numpy.dot to work properly you need to make sure vis has 784 entries, or, for a batch of samples, vis.shape = (x, 784) where x is some integer.
Without more code I can only guess that you are trying to train a neural net on the MNIST problem (the 784 dimension is pretty specific).
So make sure that you are sending the right vector to propup().
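To make the shape requirement concrete, here is a minimal sketch with an MNIST-style 784x500 weight matrix like the one in your error (the data is random, just to show the shapes):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W = np.random.randn(784, 500)      # weight matrix: 784 visible units -> 500 hidden units
hbias = np.zeros(500)              # hidden bias, one per hidden unit

vis = np.random.rand(171)
# np.dot(vis, W)                   # ValueError: shapes (171,) and (784,500) not aligned

vis = np.random.rand(784)          # a single 784-dimensional sample
print(np.dot(vis, W).shape)        # (500,)

batch = np.random.rand(10, 784)    # or a batch of shape (x, 784)
print(sigmoid(np.dot(batch, W) + hbias).shape)   # (10, 500)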
In any case, here is a good source on matrix multiplication: https://www.mathsisfun.com/algebra/matrix-multiplying.html
Related
I'm constructing a 3D model from 2D images. I have tried a lot but cannot fix this error.
def reprojection_loss_function(opt_variables, points_2d, num_pts):
    P = opt_variables[0:12].reshape(3, 4)
    point_3d = opt_variables[12:].reshape((len(points_2d[0]), 4))
In the neural network code that I've written, I could not get an answer because of an alignment problem.
I wrote a neural network code (based on another one) and tried to build the input and output in the right way. Although I think I defined the class and operations correctly, I could not get an answer because of the alignment problem. Error: shapes (127,3) and (1,4) not aligned: 3 (dim 1) != 1 (dim 0)
Datafile = pd.read_excel(r"C:\\Users\Hasan\Desktop\ANN\x.xlsx")   # 127x3
Target = pd.read_excel(r"C:\\Users\Hasan\Desktop\ANN\y.xlsx")     # 127x1
class Neural_Network(object):
    def __init__(self):
        self.inputlayer = 1
        self.w1 = np.random.randn(self.inputlayer, self.hiddenlayer)
        self.z = np.dot(Datafile, self.w1)
I think it's because of the dimensions of the two matrices, but even when I changed the dimensions it did not work.
Any help will be appreciated.
For matrix multiplication (the dot product), the number of columns of the first matrix must equal the number of rows of the second matrix.
In your case, Datafile has 3 columns while w1 has 1 row, which is why you get the error: the dimensions don't match.
To give you an example, I am assuming random matrices:
Datafile = np.random.rand(127, 3)
w1 = np.random.rand(3, 127)
z = np.dot(Datafile, w1)
print(z.shape)
Output: (127, 127)
In this example, Datafile has 3 columns and w1 has 3 rows, so the dot product succeeds.
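Applied to the class in the question, the shape of w1 should be driven by the number of feature columns in Datafile. A rough sketch of that fix (self.hiddenlayer is not shown in the question, so 4 is only a placeholder, and a random array stands in for the Excel data):

import numpy as np

Datafile = np.random.rand(127, 3)              # stand-in for the 127x3 Excel data

class Neural_Network(object):
    def __init__(self):
        self.inputlayer = Datafile.shape[1]    # 3, the number of feature columns (not 1)
        self.hiddenlayer = 4                   # placeholder; the real value is not shown
        # w1 is now (3, 4), so (127, 3) . (3, 4) -> (127, 4)
        self.w1 = np.random.randn(self.inputlayer, self.hiddenlayer)
        self.z = np.dot(Datafile, self.w1)

nn = Neural_Network()
print(nn.z.shape)                              # (127, 4)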
I've got two numpy matrices: the first, indata, has a shape of (2, 0); the second (self.Ws[0] in my code) has a shape of (100, 0).
Is it possible to multiply these matrices by each other?
def Evaluate(self, indata):
    sum = np.dot(self.Ws[0], indata) + self.bs[0]
    self.As[0] = self.sigmoid(sum)
    for i in range(1, len(self.numLayers)):
        sum = np.dot(self.Ws[i], self.As[i-1] + self.bs[i])
        self.As[i] = self.softmax(sum)
    return self.As[len(self.numLayers)-1]
The error I'm getting when running this code is the following:
File "C:/Users/1/PycharmProjects/Assignment4PartC/Program.py", line 28, in main
NN.Train(10000, 0.1)
File "C:\Users\1\PycharmProjects\Assignment4PartC\Network.py", line 53, in Train
self.Evaluate(self.X[i])
File "C:\Users\1\PycharmProjects\Assignment4PartC\Network.py", line 38, in Evaluate
sum = np.dot(self.Ws[0], indata) + self.bs[0]
ValueError: shapes (100,) and (2,) not aligned: 100 (dim 0) != 2 (dim 0)
Hopefully somebody can help me out with this -- any help is appreciated! If anyone needs more granular information about what I'm running, just let me know and I'll update my post.
There is no such thing as shape (N, 0) for an array unless the array is empty. What you have are probably arrays of shape (2,) and (100,). One way of multiplying these objects is:
np.dot(self.Ws[0].reshape((-1, 1)), indata.reshape((1, -1)))
This is going to give you a (100, 2) array.
Whether this is what you want from a mathematical perspective is really hard to say.
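A short reproduction of the reshape trick, with random stand-ins for self.Ws[0] and indata (plus, hedged, what the shapes would look like if a conventional layer product was intended instead):

import numpy as np

Ws0 = np.random.rand(100)       # shape (100,), not (100, 0)
indata = np.random.rand(2)      # shape (2,)

# np.dot(Ws0, indata)           # ValueError: shapes (100,) and (2,) not aligned

outer = np.dot(Ws0.reshape((-1, 1)), indata.reshape((1, -1)))
print(outer.shape)              # (100, 2)

# If a conventional layer was intended, the weights would instead be a
# (100, 2) matrix, giving one activation per neuron:
W = np.random.rand(100, 2)
print(np.dot(W, indata).shape)  # (100,)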
I am struggling once again with Python, NumPy and arrays while doing some calculations between matrices.
The code part that is likely not working properly is as follows:
train, test, cv = np.array_split(data, 3, axis=0)
train_inputs = train[:, :-1]
test_inputs = test[:, :-1]
cv_inputs = cv[:, :-1]
train_outputs = train[:, -1]
test_outputs = test[:, -1]
cv_outputs = cv[:, -1]
When printing each of those arrays' np.ndim, shape and dtype, this is what you get:

train_inputs:   ndim 2, shape (94936, 30), dtype float64
train_outputs:  ndim 1, shape (94936,),    dtype float64
test_inputs:    ndim 2, shape (94936, 30), dtype float64
test_outputs:   ndim 1, shape (94936,),    dtype float64
cv_inputs:      ndim 2, shape (94935, 30), dtype float64
cv_outputs:     ndim 1, shape (94935,),    dtype float64
I believe one dimension is missing in all the *_outputs arrays.
The other matrix I need is created by this command:
newMatrix = neuronLayer(30, 94936)
In which neuronLayer is a class defined as:
class neuronLayer():
    def __init__(self, neurons, neuron_inputs):
        self.weights = 2 * np.random.random((neuron_inputs, neurons)) - 1
Here's the failing line and the error it produces:
outputLayer1 = self.__sigmoid(np.dot(inputs, self.layer1.weights))
ValueError: shapes (94936,30) and (94936,30) not aligned: 30 (dim 1) != 94936 (dim 0)
Python is clearly telling me the matrix shapes are not lining up, but I don't understand where the problem is.
Any tips?
PS: The full code is pasted here.
layer1 = neuronLayer(30, 94936) # 29 neurons with 227908 inputs
layer2 = neuronLayer(1, 30) # 1 Neuron with the previous 29 inputs
where neuronLayer creates
self.weights = 2 * np.random.random((neuron_inputs, neurons)) - 1
the 2 weights are (94936,30) and (30,1) in size.
This line does not make any sense; I'm surprised it doesn't give an error:
layer1error = layer2delta.dot(self.layer2.weights.np.transpose)
I suspect you want np.transpose(self.layer2.weights) or self.layer2.weights.T.
But maybe it doesn't get there. train first calls think with a (94936,30) inputs array:
outputLayer1 = self.__sigmoid(np.dot(inputs, self.layer1.weights))
outputLayer2 = self.__sigmoid(np.dot(outputLayer1, self.layer2.weights))
So it tries to do an np.dot with two (94936,30) arrays. They aren't compatible for a dot. You could transpose one or the other, producing either a (94936,94936) array or a (30,30) one. The first looks too big. The (30,30) result is compatible with the weights for the 2nd layer.
np.dot(inputs.T, self.layer1.weights)
has a chance of working right.
np.dot(outputLayer1, self.layer2.weights)
(30,30) with (30,1) => (30,1)
But then you do
train_outputs - outputLayer2
That will have problems regardless of whether train_outputs is (94936,) or (94936,1)
You need to make sure that array shapes flow correctly through the calculation. Don't just check them at the start; check them internally as well, and make sure you understand what shapes they should have at each step.
It would be a whole lot easier to develop and test this code with much smaller inputs and layers, something like 10 samples and 3 features. That way you can look at the values as well as the shapes.
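A minimal sketch of that approach, with 10 samples, 3 features, 4 hidden units and 1 output (the weight shapes here are sized from the feature and layer counts, which is one way to make the products chain correctly; the numbers are only illustrative):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_samples, n_features, n_hidden = 10, 3, 4

inputs = np.random.rand(n_samples, n_features)            # (10, 3)
w1 = 2 * np.random.random((n_features, n_hidden)) - 1     # (3, 4)
w2 = 2 * np.random.random((n_hidden, 1)) - 1              # (4, 1)
train_outputs = np.random.rand(n_samples, 1)              # (10, 1), kept as a column

out1 = sigmoid(np.dot(inputs, w1))
print('layer 1:', out1.shape)                             # (10, 4)

out2 = sigmoid(np.dot(out1, w2))
print('layer 2:', out2.shape)                             # (10, 1)

error = train_outputs - out2
print('error:  ', error.shape)                            # (10, 1); a (10,) target here
                                                          # would silently broadcast to (10, 10)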
np.dot performs matrix multiplication when its arguments are matrices. It looks like your code is trying to multiply two non-square matrices of the same shape together, which doesn't work. Perhaps you meant to transpose one of them? NumPy arrays have a T property that returns the transpose, so you could try:
self.__sigmoid(np.dot(inputs.T, self.layer1.weights))
I'm using Python 2.7 and am attempting a forecast on some random data ranging from 1.00000000 to 3.0000000008. There are approximately 196 items in my array, and I get the error
ValueError: operands could not be broadcast together with shape (2) (50)
I do not seem to be able to resolve this issue on my own. Any help or links to relevant documentation would be greatly appreciated.
Here is the code I am using that generates this error:
nsample = 50
sig = 0.25
x1 = np.linspace(0,20, nsample)
X = np.c_[x1, np.sin(x1), (x1-5)**2, np.ones(nsample)]
beta = masterAverageList
y_true = ((X, beta))
y = y_true + sig * np.random.normal(size=nsample)
y_true = ((X, beta)) builds a two-element tuple, not a matrix product. When you then add sig * np.random.normal(size=nsample) to it, NumPy has to turn that tuple into a length-2 array before it can add, and a length-2 array cannot be broadcast against the length-50 noise array, which is where the shapes (2) and (50) in the error come from. To add arrays together, they must all have compatible shapes.
I would recommend looking at the numpy broadcasting rules.
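To make the broadcasting point concrete, here is a hedged sketch. It assumes the intent was the usual linear-model setup, i.e. a dot product of X with a coefficient vector; beta below is a hypothetical length-4 vector (one coefficient per column of X), since masterAverageList is not shown and its ~196 items would not line up either:

import numpy as np

nsample = 50
sig = 0.25
x1 = np.linspace(0, 20, nsample)
X = np.c_[x1, np.sin(x1), (x1 - 5)**2, np.ones(nsample)]   # shape (50, 4)

beta = np.array([0.5, 0.5, -0.02, 5.0])    # hypothetical coefficients, one per column of X

# ((X, beta)) is just a 2-element tuple, which is where the shape (2) comes from.
# A dot product instead gives one value per sample:
y_true = np.dot(X, beta)                                 # shape (50,)
y = y_true + sig * np.random.normal(size=nsample)        # (50,) + (50,) broadcasts fine
print(y.shape)                                           # (50,)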