Getting around a numpy shape mismatch error in Python - python

I'm having a problem with multiplying two big matrices in python using numpy.
I have a (15,7) matrix A and I want to multiply it by its transpose, i.e. A^T (7,15) * A (15,7). Mathematically this should work, but I get an error:
ValueError: shape mismatch: objects cannot be broadcast to a single shape
I'm using numpy in Python. How can I get around this? Any help appreciated!

You've probably represented the matrices as arrays. You can either convert them to matrices with np.asmatrix, or use np.dot to do the matrix multiplication:
>>> X = np.random.rand(15 * 7).reshape((15, 7))
>>> X.T * X
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: operands could not be broadcast together with shapes (7,15) (15,7)
>>> np.dot(X.T, X).shape
(7, 7)
>>> X = np.asmatrix(X)
>>> (X.T * X).shape
(7, 7)
One difference between arrays and matrices is that * on matrices is the matrix product, while on arrays it's an element-wise product.
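On Python 3.5+ with a recent numpy, the @ operator also gives the matrix product on plain arrays, so no conversion to np.matrix is needed. A minimal sketch:
>>> import numpy as np
>>> X = np.random.rand(15, 7)
>>> (X.T @ X).shape    # @ is the matrix product, even on plain arrays
(7, 7)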

Related

Question about tensor product between four-dimensional arrays

I'm trying to multiply together some 4-dimensional arrays (block matrices) in the following way: H = C^T Q C + R,
where C has shape (50,50,12,6), Q has shape (50,50,12,12), and R has shape (50,50,6,6).
I wonder how I should choose the correct axes to carry out the tensor products. I tried doing the matrix product in the following way:
H = np.tensordot(C_block.T, Q_block) @ C_block
But a value error is returned:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_2976/3668270968.py in <module>
----> 1 H = np.tensordot(C_block.T,Q_block) @ C_block
ValueError: operands could not be broadcast together with remapped shapes [original->remapped]: (6,12,12,12)->(6,12,newaxis,newaxis) (50,50,12,6)->(50,50,newaxis,newaxis) and requested shape (12,6)
Create some arrays of the right shapes, using 10 instead of 50 for the batch dimensions. That is, treat the first 2 dimensions as a batch that is repeated across all arrays, including the result.
The sum-of-products dimension is size 12, on the right and left of Q.
This is most easily expressed with einsum.
In [71]: N=10; C=np.ones((N,N,12,6)); Q=np.ones((N,N,12,12)); R=np.ones((N,N,6,6))
In [73]: res = np.einsum('ijkl,ijkm,ijmn->ijln',C,Q,C)+R
In [74]: res.shape
Out[74]: (10, 10, 6, 6)
np.dot does not handle 'batches' right, hence the memory error in the other answer. np.matmul does, though.
In [75]: res1 = C.transpose(0,1,3,2) @ Q @ C + R
In [76]: res1.shape
Out[76]: (10, 10, 6, 6)
With all ones, the value test isn't very diagnostic; still:
In [77]: np.allclose(res,res1)
Out[77]: True
matmul/@ treats the first 2 dimensions as 'batch' dimensions and does a dot on the last 2. C.T in the equation should just swap the last 2 dimensions, not all of them.
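Since the all-ones check isn't very diagnostic, here is a minimal sketch with random data (same shapes as above, N=10) confirming the einsum and matmul forms agree:
import numpy as np

rng = np.random.default_rng()
N = 10
C = rng.random((N, N, 12, 6))
Q = rng.random((N, N, 12, 12))
R = rng.random((N, N, 6, 6))

res = np.einsum('ijkl,ijkm,ijmn->ijln', C, Q, C) + R   # batched C^T Q C + R
res1 = C.transpose(0, 1, 3, 2) @ Q @ C + R             # same thing via matmul
print(np.allclose(res, res1))                          # True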
Based on the sizes of your arrays, it looks like you are trying to do a regular matrix multiply on 50x50 arrays of matrices.
CT = np.swapaxes(C, 2, 3)
H = CT @ Q @ C + R
The documentation for np.matmul (which can be written using the operator @) specifically mentions this case.
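Filled out with the shapes from the question, a minimal runnable sketch of that answer:
import numpy as np

C = np.ones((50, 50, 12, 6))
Q = np.ones((50, 50, 12, 12))
R = np.ones((50, 50, 6, 6))

CT = np.swapaxes(C, 2, 3)   # (50, 50, 6, 12): swap only the last two axes
H = CT @ Q @ C + R          # matmul batches over the leading (50, 50) dimensions
print(H.shape)              # (50, 50, 6, 6)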

Numpy einsum fails for matrix product with unspecified leading and trailing axes

I want to use dot product between the leading and trailing axes of arrays with unspecified dimensions. einsum doesn't seem to accommodate this use case. User error?
With prnd=np.random.RandomState(),
np.einsum('...i,i...', prnd.rand(5,2), prnd.rand(2,3))
Traceback (most recent call last):
File "/tmp/ipykernel_1106807/647899105.py", line 1, in <module>
np.einsum('...i,i...', prnd.rand(5,2), prnd.rand(2,3))
File "<__array_function__ internals>", line 5, in einsum
File "/home/elliot/anaconda3/envs/current/lib/python3.9/site-packages/numpy/core/einsumfunc.py", line 1359, in einsum
return c_einsum(*operands, **kwargs)
ValueError: operands could not be broadcast together with remapped shapes [original->remapped]: (5,2)->(5,2) (2,3)->(3,2)
should be the same as...
np.einsum('ai,ib', prnd.rand(5,2), prnd.rand(2,3)).shape
Out[69]: (5, 3)
np.__version__
Out[70]: '1.21.4'
First of all, this has nothing to do with np.random.RandomState(); it works just the same with e.g. np.ones. Note that if you use an ellipsis ... as part of the input subscripts, the ellipsis stands for the same shape (up to the rules of broadcasting) throughout the whole expression. In your case you are trying (slightly rewritten):
a = np.ones((5, 2))
b = np.ones((2, 3))
np.einsum('...i,i...->...', a, b)
But in a the ellipsis stands for 5, while in b it stands for 3; these cannot be broadcast together, i.e. there is no possible output shape satisfying the broadcasting rules.
What would work is
a = np.ones((5, 2))
b = np.ones((2, 5))
or
a = np.ones((5, 2))
b = np.ones((2, 1))
or
a = np.ones((5, 2))
b = np.ones((2,))
But you can't use an ellipsis to "glue together" the trailing or leading dimensions into a new shape.
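A quick check of the compatible case (a minimal sketch; the shapes are chosen so the two ellipses broadcast):
import numpy as np

a = np.ones((5, 2))
b = np.ones((2, 5))                 # trailing axis now matches a's leading axis
out = np.einsum('...i,i...', a, b)
print(out.shape)                    # (5,) -- ellipses broadcast to (5,), i is summed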

matrix to the power of a column of a dense matrix using numpy in Python

I'm trying to raise all the values in a matrix beta (VxK) to the power of the values in a column (Vx1) that is part of a dense matrix (VxN). So each value in beta should be raised to the power of the corresponding entry in the column, and this should be done for all K columns in beta. When I use np.power in Python on a practice numpy array for beta:
np.power(head_beta.T, head_matrix[:,0])
I am able to obtain the results I want. The dimensions are (3, 10) for beta and (10,) for head_matrix[:,0] where in this case 3=K and 10=V.
However, if I do this on my actual matrix, which was obtained by using
matrix=csc_matrix((data,(row,col)), shape=(30784,72407) ).todense()
where data, row, and col are arrays, I am unable to do the same operation:
np.power(beta.T, matrix[:,0])
where the dimensions are (10, 30784) for beta and (30784, 1) for matrix where in this case 10=K and 30784=V. I get the following error
ValueError Traceback (most recent call last)
<ipython-input-29-9f55d4cb9c63> in <module>()
----> 1 np.power(beta.T, matrix[:,0])
ValueError: operands could not be broadcast together with shapes (10,30784) (30784,1)
It seems that the difference is that matrix is a matrix (length,1) and head_matrix is actually a numpy array (length,) that I created. How can I do this same operation with the column of a dense matrix?
In the problem case it can't broadcast (10,30784) and (30784,1). As you note it works when (10,N) is used with (N,). That's because it can expand the (N,) to (1,N) and on to (10,N).
M = sparse.csr_matrix(...).todense()
is an np.matrix, which is always 2d, so M[:,0] is (N,1). There are several solutions:
np.power(beta.T, M[:,0].T) # change to a (1,N)
np.power(beta, M[:,0]) # line up the expandable dimensions
convert the sparse matrix to an array:
A = sparse.....toarray()
np.power(beta.T, A[:,0])
M[:,0].squeeze() and M[:,0].ravel() both produce a (1,N) matrix, and so does M[:,0].reshape(-1). That 2d quality is persistent as long as the result is a matrix.
M[:,0].A1 produces an (N,) array.
From a while back: Numpy matrix to array
You can use the squeeze method on arrays to get rid of this extra dimension.
So
np.power(beta.T, matrix[:,0].squeeze()) should do the trick.
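A minimal sketch of the shape behaviour described in both answers, using small stand-in shapes rather than the (10, 30784) from the question:
import numpy as np
from scipy import sparse

M = sparse.csc_matrix(np.arange(12).reshape(4, 3)).todense()  # np.matrix, shape (4, 3)
print(M[:, 0].shape)             # (4, 1) -- a matrix column stays 2d
print(M[:, 0].squeeze().shape)   # (1, 4) -- squeeze on a matrix is still 2d
print(M[:, 0].A1.shape)          # (4,)  -- .A1 flattens to a plain ndarray

beta = np.random.rand(2, 4)      # small stand-in for beta
print(np.power(beta, M[:, 0].A1).shape)   # (2, 4): (2,4) broadcasts with (4,)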

np.sum for row axis not working in Numpy

I wrote a softmax regression function def softmax_1(x) that essentially takes in an m x n matrix, exponentiates the matrix, then sums the exponentials of each column.
x = np.arange(-2.0, 6.0, 0.1)
scores = np.vstack([x, np.ones_like(x), 0.2 * np.ones_like(x)])
#scores shape is (3, 80)
def softmax_1(x):
    """Compute softmax values for each set of scores in x."""
    return np.exp(x) / np.sum(np.exp(x), axis=0)
To convert it into a DataFrame I have to transpose:
DF_activation_1 = pd.DataFrame(softmax_1(scores).T,index=x,columns=["x","1.0","0.2"])
So I wanted to try to make a version of the softmax function that takes in the transposed array and computes the softmax:
scores_T = scores.T
#scores_T shape is (80,3)
def softmax_2(y):
    return(np.exp(y/np.sum(np.exp(y),axis=1)))
DF_activation_2 = pd.DataFrame(softmax_2(scores_T),index=x,columns=["x","1.0","0.2"])
Then I get this error:
Traceback (most recent call last):
File "softmax.py", line 22, in <module>
DF_activation_2 = pd.DataFrame(softmax_2(scores_T),index=x,columns=["x","1.0","0.2"])
File "softmax.py", line 18, in softmax_2
return(np.exp(y/np.sum(np.exp(y),axis=1)))
ValueError: operands could not be broadcast together with shapes (80,3) (80,)
Why doesn't this work when I transpose and switch the axis in the np.sum method?
Change
np.exp(y/np.sum(np.exp(y),axis=1))
to
np.exp(y)/np.sum(np.exp(y),axis=1, keepdims=True)
This will mean that np.sum will return an array of shape (80, 1) rather than (80,), which will broadcast correctly for the division. Also note the correction to the bracket closing.
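A minimal sketch of the corrected function on the data from the question, checking that each row now sums to 1:
import numpy as np

x = np.arange(-2.0, 6.0, 0.1)
scores_T = np.vstack([x, np.ones_like(x), 0.2 * np.ones_like(x)]).T   # (80, 3)

def softmax_2(y):
    e = np.exp(y)
    return e / np.sum(e, axis=1, keepdims=True)   # (80, 3) / (80, 1) broadcasts

print(softmax_2(scores_T).sum(axis=1))            # every row sums to 1.0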

Python and numpy : subtracting line by line a 2-dim array from a 1-dim array

In Python, I wish to subtract a 2-dim array from a 1-dim array, line by line.
I know how to do it with a 'for' loop and indexes, but I suppose it may be quicker to use numpy functions. However, I did not find a way to do it. Here is an example with a 'for' loop:
from numpy import *
x=array([[1,2,3,4,5],[6,7,8,9,10]])
y=array([20,10])
j=array([0, 1])
a=zeros([2,5])
for i in j:
    a[i] = y[i] - x[i]
And here is an example of something that does not work, replacing the 'for' loop by this:
a=y[j]-x[j,i]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: shape mismatch: objects cannot be broadcast to a single shape
Do you have any suggestions?
The problem is that y and x have the respective shapes (2,) and (2,5). To broadcast properly, you need shapes (2,1) and (2,5). We can do this with .reshape, as long as the number of elements is preserved:
y.reshape(2,1) - x
Gives:
array([[19, 18, 17, 16, 15],
       [ 4,  3,  2,  1,  0]])
y[:,newaxis] - x
should work too. The small comparative benefit is that you then pay attention to the dimensions themselves rather than to their sizes.
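Putting it together as a runnable sketch:
import numpy as np

x = np.array([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
y = np.array([20, 10])
a = y[:, np.newaxis] - x    # (2, 1) broadcasts against (2, 5)
print(a)
# [[19 18 17 16 15]
#  [ 4  3  2  1  0]]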
