I wonder if there's a function in numpy/scipy for 1D array circular convolution. The scipy.signal.convolve() function only provides "mode" but not "boundary", while signal.convolve2d() needs a 2D array as input.
I need to do this to compare open vs circular convolution as part of a time series homework.
By the convolution theorem, you can use the Fourier transform to get the circular convolution: transform both inputs, multiply pointwise, and transform back.
import numpy as np

def conv_circ(signal, ker):
    '''
    signal: real 1D array
    ker: real 1D array
    signal and ker must have the same shape
    '''
    # Pointwise product in the frequency domain == circular convolution in time
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(ker)))
Since this is for homework, I'm leaving out a few details.
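For example, a quick sanity check against the direct O(n²) definition (the helper below is just for illustration, not part of the original answer):

import numpy as np

def conv_circ_direct(signal, ker):
    # Direct definition: (s * k)[n] = sum_m s[m] * k[(n - m) % N]
    N = len(signal)
    return np.array([sum(signal[m] * ker[(n - m) % N] for m in range(N))
                     for n in range(N)])

s = np.array([1., 2., 3., 4.])
k = np.array([0., 1., 0., 0.])
print(np.allclose(conv_circ(s, k), conv_circ_direct(s, k)))  # True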
By the definition of convolution, if you append a signal a to itself, then the linear convolution of aa with b contains the circular convolution of a and b inside it.
E.g., consider the following:
import numpy as np
from scipy import signal
%pylab inline
a = np.array([1] * 10)
b = np.array([1] * 10)
plot(signal.convolve(a, b));
That is the standard (open) convolution. Now compare it with this:
plot(signal.convolve(a, np.concatenate((b, b))));
In this last figure, try to see where the result of the circular convolution sits, and how to generalize this.
Code for this that you can copy-paste, in the spirit of StackOverflow:
n = a.shape[0]
np.convolve(np.tile(a, 2), b)[n:2 * n]
This assumes that a, b have the same shape.
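A quick check (with made-up inputs) that the tile trick agrees with the FFT-based answer above:

import numpy as np

a = np.array([1., 2., 3., 4.])
b = np.array([5., 6., 7., 8.])
n = a.shape[0]

tiled = np.convolve(np.tile(a, 2), b)[n:2 * n]
fft_based = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
print(np.allclose(tiled, fft_based))  # True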
If I have two matrices a and b, is there any function to find the matrix x such that the dot product of a and x gives b? I'm looking for Python solutions, with matrices in the form of numpy arrays.
This problem of finding X such that A*X = B is equivalent to finding the inverse of A, i.e. a matrix such that X = Ainverse * B.
For information, in math Ainverse is written A^(-1) ("A to the power -1", usually read "A inverse").
In numpy, there is a built-in function to find the inverse of a matrix a:
import numpy as np
ainv = np.linalg.inv(a)
See for instance this tutorial for explanations.
You need to be aware that some matrices are not "invertible"; the most obvious examples are (roughly):
matrices that are not square
matrices that represent a projection
numpy can still approximate a solution in certain cases.
If A is a full-rank, square matrix:
import numpy as np
from numpy.linalg import inv
X = inv(A) @ B
If not, then such a matrix does not exist, but we can approximate X in the least-squares sense:
import numpy as np
from numpy.linalg import inv
X = inv(A.T @ A) @ A.T @ B
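As a side note (not part of the original answer): for numerical work it is usually better not to form the inverse explicitly. A minimal sketch with np.linalg.solve and np.linalg.lstsq, assuming A and B are already defined:

import numpy as np
from numpy.linalg import solve, lstsq

# Square, full-rank case: solves A @ X = B directly, more stable than inv(A) @ B
X = solve(A, B)

# General case: least-squares solution via SVD; matches inv(A.T @ A) @ A.T @ B
# when A has full column rank, but also works when it does not
X, residuals, rank, sing_vals = lstsq(A, B, rcond=None)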
I am trying to implement the QR decomposition via Householder reflectors. While attempting this on a very simple array, I am getting weird numbers. Anyone who can also tell me why using the @ vs * operator between vec and vec.T on the last line of the function definition gets major bonus points.
This has stumped two math/comp-sci PhDs as of this morning.
import numpy as np

def householder(vec):
    vec[0] += np.sign(vec[0]) * np.linalg.norm(vec)
    vec = vec / vec[0]
    gamma = 2 / (np.linalg.norm(vec)**2)
    return np.identity(len(vec)) - gamma * (vec * vec.T)

array = np.array([1, 3, 4])
Q = householder(array)
print(Q @ array)
Output:
array([-4.06557377, -7.06557377, -6.06557377])
Where it should be:
array([5.09, 0, 0])
* is elementwise multiplication, @ is matrix multiplication. Both have their uses, but for matrix calculations you most likely want the matrix product.
vec.T for an array returns the same array. A simple array only has one dimension, there is nothing to transpose. vec*vec.T just returns the elementwise squared array.
You might want to use vec = vec.reshape(-1, 1) to get a proper column vector, a one-column matrix. Then vec*vec.T happens to do the correct thing, since broadcasting an (n, 1) array against a (1, n) array produces the (n, n) outer product. You might want to use the matrix multiplication operator there anyway.
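Putting those fixes together, a minimal corrected sketch (assuming the same sign convention as the question; the leading entry comes out negative with this choice):

import numpy as np

def householder(vec):
    # Work on a float copy reshaped into a column vector (n, 1)
    vec = np.asarray(vec, dtype=float).reshape(-1, 1)
    vec[0] += np.sign(vec[0]) * np.linalg.norm(vec)
    vec = vec / vec[0]
    gamma = 2 / np.linalg.norm(vec)**2
    # vec @ vec.T is the (n, n) outer product; for a column vector,
    # vec * vec.T broadcasts to the same result
    return np.identity(len(vec)) - gamma * (vec @ vec.T)

a = np.array([1, 3, 4])
Q = householder(a)
print(Q @ a)  # approx. [-5.099, 0, 0]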
I have a matrix X and I need to write a function that calculates the trace of np.dot(X, X.T). I wrote the following script:
import numpy as np

def test(matrix):
    return np.dot(matrix, matrix.T).trace()

np.random.seed(42)
matrix = np.random.uniform(size=(1000, 1))
print(test(matrix))
It works fine on small matrices, but when I try it on a large matrix (for example with shape (50000, 1)), it gives me a memory error.
I tried to find a solution to the problem in other questions on the site, but nothing helped me. I would be grateful for any advice!
The number you're trying to compute is just the sum of the squares of all entries of X, since trace(X @ X.T) = sum_i (X @ X.T)[i,i] = sum_i sum_j X[i,j]**2. Sum the squares instead of computing a giant matrix product full of entries you don't want:
return (X**2).sum()
Or ravel the matrix and use dot, which is probably faster for contiguous X:
raveled = X.ravel()
return raveled.dot(raveled)
Actually, ravel is probably faster for non-contiguous X, too - even when ravel needs to copy, it's not doing more allocation than (X**2).sum().
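A quick sanity check (on a size where the full product still fits in memory) that all three approaches agree:

import numpy as np

np.random.seed(42)
X = np.random.uniform(size=(1000, 1))

full = np.dot(X, X.T).trace()  # builds the full (1000, 1000) product
squares = (X**2).sum()         # sum of squares, no giant intermediate
r = X.ravel()
dotted = r.dot(r)              # dot of the raveled vector with itself

print(np.allclose(full, squares), np.allclose(full, dotted))  # True True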
I have a linear system in which all the matrices are block diagonal. They have N blocks identical in shape.
Matrices are stored in compressed format as numpy arrays with shape (N, n, m), while the shape of the vectors is (N, m).
I currently implemented the matrix-vector product as
import numpy as np

def mvdot(m, v):
    return (m * np.expand_dims(v, -2)).sum(-1)
Thanks to broadcasting rules, if all the blocks of a matrix are the same I only have to store it once, in an array with shape (1, n, m): the product with an (N, m) vector still gives the correct (N, n) vector.
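For instance, a quick check of that broadcasting claim with made-up shapes:

import numpy as np

def mvdot(m, v):
    return (m * np.expand_dims(v, -2)).sum(-1)

N, n, m = 7, 3, 4
blocks = np.random.rand(1, n, m)  # a single shared block
v = np.random.rand(N, m)
print(mvdot(blocks, v).shape)     # (7, 3), i.e. (N, n)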
My questions are:
How can I implement an efficient matrix-matrix product that yields the matrix with shape (N, n, m) from two matrices with shapes (N, n, p) and (N, p, m)?
Is there a way to perform these operations with a (possibly faster) numpy built-in function? Functions like np.linalg.inv make me think that numpy was designed to support this compressed format for block diagonal matrices.
If I understand your question correctly, you have two arrays of shape (N,n,p) and (N,p,m), respectively, and their product should be of shape (N,n,m) where element [i,:,:] is the matrix product of M1[i,:,:] and M2[i,:,:]. This can be achieved using numpy.einsum:
import numpy as np
N = 7
n,p,m = 3,4,5
M1 = np.random.rand(N,n,p)
M2 = np.random.rand(N,p,m)
Mprod = np.einsum('ijk,ikl->ijl',M1,M2)
# check if all the submatrices are what we expect
all([np.allclose(np.dot(M1[k,...],M2[k,...]),Mprod[k,...]) for k in range(N)])
# True
Numpy's einsum is an incredibly versatile construction for complicated linear operations, and it's usually pretty efficient with two operands. The idea is to rewrite your operation in an indexed way: what you need is to multiply M1[i,j,k] with M2[i,k,l] for each i,j,l, and sum over k. This is exactly what the above call to einsum does: it collapses the index k, and performs the necessary products and assignments along the remaining dimensions in the given order.
The matrix-vector product can be done similarly:
M = np.random.rand(N,n,m)
v = np.random.rand(N,m)
Mvprod = np.einsum('ijk,ik->ij',M,v)
It's possible that numpy.dot can be coerced with the proper transposes and dimension tricks to directly do what you want, but I couldn't make that work.
Both of the above operations can be done in the same function call by allowing an implicit number of dimensions within einsum:
def mvdot(M1, M2):
    return np.einsum('ijk,ik...->ij...', M1, M2)
Mprod = mvdot(M1,M2)
Mvprod = mvdot(M,v)
In case the input argument M2 is a block matrix, a trailing dimension is appended to the result, giving a block matrix. In case M2 is a "block vector", the result will be a block vector.
Since Python 3.5, the previous example can be simplified using the matrix multiplication operator @ (numpy.matmul), which treats this case as a stack of matrices residing in the last two indexes and broadcasts accordingly:
import numpy as np
N = 7
n,p,m = 3,4,5
M1 = np.random.rand(N,n,p)
M2 = np.random.rand(N,p,m)
Mprod = M1 @ M2  # equivalent to np.matmul(M1, M2)
all([np.allclose(np.dot(M1[k,...],M2[k,...]),Mprod[k,...]) for k in range(N)])
# True
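For the matrix-vector case, one way to reuse @ (a sketch, not part of the original answer) is to add a trailing axis to the vector and drop it afterwards:

M = np.random.rand(N, n, m)
v = np.random.rand(N, m)

# Treat each length-m vector as an (m, 1) column, multiply, then drop the axis
Mvprod = (M @ v[:, :, None])[:, :, 0]  # shape (N, n)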
I may be misunderstanding how broadcasting works in Python, but I am still running into errors.
scipy offers a number of "special functions" which take in two arguments, in particular the eval_XX(n, x[,out]) functions.
See http://docs.scipy.org/doc/scipy/reference/special.html
My program uses many orthogonal polynomials, so I must evaluate these polynomials at distinct points. Let's take the concrete example scipy.special.eval_hermite(n, x, out=None).
I would like the x argument to be a matrix of shape (50, 50). Then, I would like to evaluate each entry of this matrix at a number of points. Let's define n to be a numpy array, narr = np.arange(10) (assuming import numpy as np).
So, calling
scipy.special.eval_hermite(narr, matrix)
should return the Hermite polynomials H_0(matrix), H_1(matrix), H_2(matrix), etc. Each H_n(matrix) has shape (50, 50), the shape of the original input matrix.
Then, I would like to sum these values. So, I call
matrix1 = np.sum([scipy.special.eval_hermite(narr, matrix)], axis=0)
but I get a broadcasting error!
ValueError: operands could not be broadcast together with shapes (10,) (50,50)
I can solve this with a for loop, i.e.
matrix2 = np.sum([scipy.special.eval_hermite(i, matrix) for i in narr], axis=0)
This gives me the correct answer, and the output matrix2.shape = (50,50). But using this for loop slows down my code, big time. Remember, we are working with entries of matrices.
Is there a way to do this without a for loop?
eval_hermite broadcasts n with x, then evaluates H_n(x) at each point. Thus, the output shape will be the result of broadcasting n with x. So, if you want to make this work, you'll have to make n and x have compatible shapes:
import scipy.special as ss
import numpy as np
matrix = np.ones([100,100]) # example
narr = np.arange(10) # example
ss.eval_hermite(narr[:,None,None], matrix).shape # => (10, 100, 100)
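Summing over the leading polynomial axis then gives the result the question asks for (a one-line sketch using the shapes above):

matrix1 = ss.eval_hermite(narr[:, None, None], matrix).sum(axis=0)  # shape (100, 100)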
But note that this might actually be faster:
out = np.zeros_like(matrix)
for n in narr:
    out += ss.eval_hermite(n, matrix)
In testing, it appears to be 5-10% faster than the np.sum(...) approach above.
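To reproduce that comparison yourself, a timing sketch (absolute numbers will vary by machine):

import timeit

setup = '''
import numpy as np
import scipy.special as ss
matrix = np.ones([100, 100])
narr = np.arange(10)
'''

broadcast = "ss.eval_hermite(narr[:, None, None], matrix).sum(axis=0)"
loop = '''
out = np.zeros_like(matrix)
for n in narr:
    out += ss.eval_hermite(n, matrix)
'''

print(timeit.timeit(broadcast, setup=setup, number=100))
print(timeit.timeit(loop, setup=setup, number=100))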
The documentation for these functions is skimpy, and a lot of the code is compiled, so this is just based on experimentation:
special.eval_hermite(n, x, out=None)
n apparently is a scalar or array of integers. x can be an array of floats.
special.eval_hermite(np.ones(5,int)[:,None],np.ones(6)) gives me a (5,6) result. This is the same shape as what I'd get from np.ones(5,int)[:,None] * np.ones(6).
The np.ones(5,int)[:,None] is a (5,1) array, np.ones(6) a (6,), which for this purpose is equivalent of (1,6). Both can be expanded to (5,6).
So as best I can tell, broadcasting rules in these special functions is the same as for operators like *.
Since special.eval_hermite(narr[:,None,None], x) produces a (10,50,50) array, you just apply sum to axis 0 of that to produce the (50,50) result:
special.eval_hermite(narr[:,None,None], x).sum(axis=0)
Like I wrote before, the same broadcasting (and summing) rules apply for this hermite as they do for a basic operation like *.
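Putting it together, a runnable check (with the shapes from the question) that the loop and the broadcast-and-sum versions agree:

import numpy as np
from scipy import special

matrix = np.random.rand(50, 50)
narr = np.arange(10)

looped = sum(special.eval_hermite(int(n), matrix) for n in narr)
broadcast = special.eval_hermite(narr[:, None, None], matrix).sum(axis=0)
print(np.allclose(looped, broadcast))  # True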