I'm computing a square matrix V, each element of which is an integral that I evaluate with sympy. I compute only one definite integral, V_nm, whose result is a numerical expression in the symbolic indices m and n. Say V_nm looks like this:
>>> V_nm
sin(3*n)*cos(m)
Now I wish to make a 2-D numerical (not symbolic!) matrix out of V_nm using m and n as indices of the array. Say for a 2 x 2 matrix, the result for the given V_nm would be:
[[sin(3)*cos(1) sin(3)*cos(2)]
 [sin(6)*cos(1) sin(6)*cos(2)]]
i.e., n specifies the row and m specifies the column. (Note: I start m and n at 1 and not 0, but that's no concern.)
How do I achieve this?
I know I can use V_nm.subs([(n, ...), (m, ...)]) in a list comprehension followed by evalf() but that's the long route. I wish to achieve this using lambdify. I know how to use lambdify for 1-D arrays. Can you please tell me how to implement it for 2-D arrays?
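For reference, the long route I mean looks roughly like this (a sketch; N and M are the desired dimensions, with rows following n as in the example above):
import numpy as np
from sympy import symbols, sin, cos
m, n = symbols('m n')
V_nm = sin(3*n)*cos(m)
N, M = 2, 2  # desired size
V = np.array([[float(V_nm.subs([(n, i), (m, j)]).evalf())
               for j in range(1, M + 1)]
              for i in range(1, N + 1)])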
There is sympy's FunctionMatrix which is intended for this kind of case. Note that it uses zero-based indexing:
In [1]: m, n, i, j = symbols('m, n, i, j')
In [2]: V_nm = FunctionMatrix(m, n, Lambda((i, j), 100*(i+1) + (j+1)))
In [3]: V_nm
Out[3]: [100⋅i + j + 101]
In [4]: V_nm.subs({m:2, n:3}).as_explicit()
Out[4]:
⎡101 102 103⎤
⎢ ⎥
⎣201 202 203⎦
In [5]: lambdify((m, n), V_nm)(2, 3)
Out[5]:
array([[101., 102., 103.],
       [201., 202., 203.]])
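Applied to the expression from the question, a minimal sketch might look like the following (the index shift and the orientation are my assumptions: the zero-based i, j are shifted by one so the numbering starts at 1, with n along the rows and m along the columns as in the question's example):
from sympy import FunctionMatrix, Lambda, symbols, sin, cos, lambdify
m, n, i, j = symbols('m, n, i, j')
# i indexes rows (plays the role of n - 1), j indexes columns (m - 1)
V = FunctionMatrix(n, m, Lambda((i, j), sin(3*(i + 1))*cos(j + 1)))
V_2x2 = V.subs({n: 2, m: 2}).as_explicit().evalf()  # explicit 2x2 numeric Matrix
lambdify((n, m), V) should then give a plain NumPy array in the same way as In [5] above.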
What you're asking for doesn't look like standard functionality, but it's possible in two steps: first lambdify the expression, then wrap it in a function that builds the intended 2D array via numpy's broadcasting:
from sympy import sin, cos, lambdify
from sympy.abc import m, n
import numpy as np
V_mn = sin(3 * n) * cos(m)
V_mn_np = lambdify((m, n), V_mn)
# using list comprehension:
# V_mn_np2D = lambda m, n: np.array([[V_mn_np(i, j) for j in range(n)] for i in range(m)])
# using numpy's broadcasting (faster for large arrays):
V_mn_np2D = lambda m, n: V_mn_np(np.arange(m)[:, None], np.arange(n))
V_mn_np2D(2, 2)
To have the numbering start at 1 instead of 0, use np.arange(1, m+1) and np.arange(1, n+1).
As a test, a function such as 100 * m + n makes it easy to verify that the approach works as intended.
W_mn = 100 * m + n
W_mn_np = lambdify((m, n), W_mn)
W_mn_np2D = lambda m, n: W_mn_np(np.arange(1, m+1)[:, None], np.arange(1, n+1))
W_mn_np2D(2, 3)
Output:
array([[101, 102, 103],
       [201, 202, 203]])
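Applying the same 1-based recipe back to V_mn from the question (a sketch; V_mn_np2D_1based is just an illustrative name):
V_mn_np2D_1based = lambda m, n: V_mn_np(np.arange(1, m+1)[:, None], np.arange(1, n+1))
V_mn_np2D_1based(2, 2)  # 2 x 2 numpy array of floats
Note that this puts m along the rows and n along the columns; transpose the result (or swap which arange gets the [:, None]) if you want n along the rows as in the question's example.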
Related
I'm trying to implement a differential in python via numpy that can accept a scalar, a vector, or a matrix.
import numpy as np

def foo_scalar(x):
    f = x * x
    df = 2 * x
    return f, df

def foo_vector(x):
    f = x * x
    n = x.size
    df = np.zeros((n, n))
    for mu in range(n):
        for i in range(n):
            if mu == i:
                df[mu, i] = 2 * x[i]
    return f, df

def foo_matrix(x):
    f = x * x
    m, n = x.shape
    df = np.zeros((m, n, m, n))
    for mu in range(m):
        for nu in range(n):
            for i in range(m):
                for j in range(n):
                    if (mu == i) and (nu == j):
                        df[mu, nu, i, j] = 2 * x[i, j]
    return f, df
This works fine, but it seems like there should be a way to do this in a single function, and let numpy "figure out" the correct dimensions. I could force everything into a 2-D array form with something like
x = np.array(x)
if len(x.shape) == 0:
    x = x.reshape(1, 1)
elif len(x.shape) == 1:
    x = x.reshape(-1, 1)
if len(f.shape) == 0:
    f = f.reshape(1, 1)
elif len(f.shape) == 1:
    f = f.reshape(-1, 1)
and always have 4 nested for loops, but this doesn't scale if I need to generalize to higher-order tensors.
Is what I'm trying to do possible, and if so, how?
I doubt NumPy has a built-in function that generates the second value returned by your functions (the derivative array). That said, you can use NumPy's and Python's features to vectorize this and make the function faster: first generate the indices, then allocate the target array and assign into it. Note that operating on generic N-dimensional arrays tends to be slow and tricky in non-trivial cases. The * unpacking operator is used to pass N arguments at once.
def foo_generic(x):
    f = x ** 2
    # Index grids for every axis of x, stacked into shape (ndim, *x.shape).
    idx = np.stack(np.meshgrid(*[np.arange(e) for e in x.shape], indexing='ij'))
    # Duplicate and flatten the grids: the resulting tuple of 2*ndim index
    # arrays selects the "diagonal" entries df[mu, ..., mu, ...].
    idx = tuple(np.concatenate((idx, idx)).reshape(2 * x.ndim, -1))
    df = np.zeros([*x.shape, *x.shape])
    df[idx] = 2 * x.ravel()
    return f, df
Note that foo_generic does not support scalars (and it would be very inefficient to use it for them anyway), but you can add a condition to handle that special case separately.
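For example, such a condition could look like this (a sketch; foo_any is just an illustrative name):
def foo_any(x):
    x = np.asarray(x)
    if x.ndim == 0:
        # Scalar case: f = x**2, df = 2*x, no index bookkeeping needed.
        return x ** 2, 2 * x
    return foo_generic(x)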
The df array will very quickly become huge for higher-order inputs, and it is already mostly zeros in the matrix case: for a 5x5 input, more than 95% of its entries are zero. I strongly advise against dense storage here; filling a huge array with zeros is not efficient, and sparse matrices fix this.
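For instance, the flattened Jacobian can be stored as a sparse diagonal matrix with SciPy (a sketch; this keeps only the x.size nonzero values instead of x.size**2):
from scipy.sparse import diags

def foo_sparse(x):
    x = np.asarray(x)
    f = x ** 2
    # Diagonal sparse matrix of shape (x.size, x.size); its row/column
    # index is the flat (ravel-order) index into x.
    df = diags(2 * x.ravel())
    return f, df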
I have two 3D arrays A and B with shapes (k, n, n) and (k, m, m) respectively. I would like to create a matrix C of shape (k, n+m, n+m) such that for each 0 <= i < k, the 2D matrix C[i,:,:] is the block diagonal matrix obtained by putting A[i, :, :] at the upper left n x n part and B[i, :, :] at the lower right m x m part.
Currently I am using the following to achieve this in NumPy:
C = np.empty((k, n+m, n+m))
for i in range(k):
    C[i, ...] = np.block([[A[i, ...], np.zeros((n, m))],
                          [np.zeros((m, n)), B[i, ...]]])
I was wondering if there is a way to do this without the for loop. I think if k is large my solution is not very efficient.
IIUC, you can simply slice and assign:
C = np.zeros((k, n+m, n+m), dtype=np.result_type(A, B))
C[:,:n,:n] = A
C[:,n:,n:] = B
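A quick self-contained check against the loop from the question (a sketch with small arbitrary sizes):
import numpy as np

k, n, m = 4, 3, 2
A = np.random.rand(k, n, n)
B = np.random.rand(k, m, m)

# Slice-and-assign version
C = np.zeros((k, n+m, n+m), dtype=np.result_type(A, B))
C[:, :n, :n] = A
C[:, n:, n:] = B

# Loop version from the question
C_loop = np.empty((k, n+m, n+m))
for i in range(k):
    C_loop[i, ...] = np.block([[A[i, ...], np.zeros((n, m))],
                               [np.zeros((m, n)), B[i, ...]]])

assert np.allclose(C, C_loop)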
Given a numpy array items of shape (D, N, Q) and another array of indices ids of shape (N, P), how can I make a new array my_items of shape (D, N, P), by using the indices ids, like the following:
# How can these loops be avoided?
my_items = np.zeros((D, N, P))
for n in range(N):
    for p in range(P):
        my_items[:, n, p] = items[:, n, ids[n, p]]
with numpy magic instead of using any explicit loops? Here is a minimal example:
import numpy as np
D, N, Q, P = 2, 5, 4, 3 # Reduced problem dimensions.
items = 1.0 * np.arange(D * N * Q).reshape((D, N, Q)) # Example data
ids = np.arange(0, N * P).reshape(N, P) % Q # Example ids
# How can these loops be avoided?
my_items = np.zeros((D, N, P))
for n in range(N):
    for p in range(P):
        my_items[:, n, p] = items[:, n, ids[n, p]]
# print('items', items)
# print('ids', ids)
# print('my_items', my_items)
I would also like to preserve the element order if possible.
This should work, returning the exact same ndarray as your loop:
np.stack([np.take(items[:, i, :], ids[i, :], axis=1)
          for i in range(ids.shape[0])], axis=2).transpose((0, 2, 1))
However, @hpaulj's method is faster (5 µs vs 23.5 µs for this one), so use that.
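For completeness, a fully vectorized alternative (a sketch, not necessarily the faster method referred to above) uses advanced indexing with the names from the minimal example:
my_items = items[:, np.arange(N)[:, None], ids]   # shape (D, N, P)
Here np.arange(N)[:, None] (shape (N, 1)) broadcasts against ids (shape (N, P)), so entry [d, n, p] is items[d, n, ids[n, p]], matching the loop result and preserving the element order.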
Suppose I have two 2D NumPy arrays A and B, I would like to compute the matrix C whose entries are C[i, j] = f(A[i, :], B[:, j]), where f is some function that takes two 1D arrays and returns a number.
For instance, if def f(x, y): return np.sum(x * y) then I would simply have C = np.dot(A, B). However, for a general function f, are there NumPy/SciPy utilities I could exploit that are more efficient than doing a double for-loop?
For example, take def f(x, y): return np.sum(x != y) / len(x), where x and y are not simply 0/1-bit vectors.
Here is a reasonably general approach using broadcasting.
First, insert singleton axes so that the shared axis (the columns of A and the rows of B) lines up and the remaining axes broadcast against each other:
A3 = A[:, :, None]  # shape (I, K, 1)
B3 = B[None, :, :]  # shape (1, K, J)
Second, apply your function element by element without performing any reduction:
C = (A3 != B3)  # elementwise part of f; shape (I, K, J)
The inserted axes let numpy broadcast: entry [i, k, j] compares A[i, k] with B[k, j].
Third, apply the desired reduction by aggregating over the shared axis:
C = C.sum(axis=1) / A.shape[1]
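A self-contained check against the double loop, using the example f(x, y) = np.sum(x != y) / len(x) (a sketch; the shapes and values are arbitrary):
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 3, size=(4, 5))   # shape (I, K)
B = rng.integers(0, 3, size=(5, 6))   # shape (K, J)

C_broadcast = (A[:, :, None] != B[None, :, :]).sum(axis=1) / A.shape[1]
C_loop = np.array([[np.sum(A[i, :] != B[:, j]) / A.shape[1]
                    for j in range(B.shape[1])]
                   for i in range(A.shape[0])])

assert np.allclose(C_broadcast, C_loop)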
I have a bunch of 3x2 matrices, let's say 777 of them, and just as many right-hand sides of size 3. For each of them, I would like to know the least squared solution, so I'm doing
import numpy
A = numpy.random.rand(3, 2, 777)
b = numpy.random.rand(3, 777)
for k in range(777):
    numpy.linalg.lstsq(A[..., k], b[..., k])
That works, but is slow. I'd much rather compute all the solutions in one go, but upon
numpy.linalg.lstsq(A, b)
I'm getting
numpy.linalg.linalg.LinAlgError: 3-dimensional array given. Array must be two-dimensional
Any hints on how to broadcast numpy.linalg.lstsq?
One can make use of the fact that if A = UΣVᵀ is the singular value decomposition of A, then x = VΣ⁺Uᵀb is the least-squares solution to Ax = b. SVD is broadcast in numpy, so it only takes a bit of fiddling with einsums to get everything right:
A = numpy.random.rand(7, 3, 2)
b = numpy.random.rand(7, 3)

# Reference: solve each system separately with lstsq.
for k in range(7):
    x, res, rank, sigma = numpy.linalg.lstsq(A[k], b[k], rcond=None)
    print(x)
print()

# Broadcast the SVD over the whole stack and assemble x = V Σ⁺ Uᵀ b.
u, s, v = numpy.linalg.svd(A, full_matrices=False)  # v holds Vᵀ for each system
uTb = numpy.einsum('ijk,ij->ik', u, b)              # Uᵀ b, batched
xx = numpy.einsum('ijk, ij->ik', v, uTb / s)        # applies V to Σ⁻¹ Uᵀ b
print(xx)
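Alternatively, if your NumPy version supports stacked inputs to numpy.linalg.pinv, the same batched solve can be written more compactly (a sketch):
# Pseudoinverse of every matrix in the stack, then a batched matrix product.
x_all = numpy.matmul(numpy.linalg.pinv(A), b[..., None])[..., 0]
print(x_all)
This should match the einsum result above for full-rank matrices, since the pseudoinverse is exactly VΣ⁺Uᵀ.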