Eigenvalues of sample covariance far from eigenvalues of covariance - python

I wrote the following code to create a random matrix Sigma with specified eigenvalues, and then to sample vectors from a multivariate normal distribution with zero mean and covariance Sigma.
import numpy

def generate_data(N, d):
    eigenvalues = [0] * (d + 1)
    for k in range(1, d + 2):
        eigenvalues[k - 1] = k
    random_matrix = numpy.random.randn(d + 1, d + 1)
    random_orthogonal = numpy.linalg.qr(random_matrix)[0]
    sqrt_cov = random_orthogonal @ numpy.diag(numpy.sqrt(eigenvalues))
    X = numpy.zeros((N, d + 1))
    for i in range(N):
        vec = numpy.random.standard_normal(d + 1)
        X[i] = sqrt_cov @ vec
    return X
After this code, X should be an N by (d+1) matrix whose rows are sampled from the desired distribution.
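A quick way to sanity-check the construction (a small sketch of my own, not from the post): since sqrt_cov = Q * diag(sqrt(eigenvalues)) with Q orthogonal, sqrt_cov @ sqrt_cov.T has eigenvalues exactly 1, 2, ..., d + 1:
import numpy

d = 5
eigenvalues = numpy.arange(1, d + 2)
Q = numpy.linalg.qr(numpy.random.randn(d + 1, d + 1))[0]
sqrt_cov = Q @ numpy.diag(numpy.sqrt(eigenvalues))
Sigma = sqrt_cov @ sqrt_cov.T                     # = Q diag(eigenvalues) Q.T
print(numpy.sort(numpy.linalg.eigvalsh(Sigma)))   # ~ [1. 2. 3. 4. 5. 6.]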
Now I want to know the eigenvalues of the sample covariance matrix of X. If I am not mistaken, they should be similar to the eigenvalues of Sigma.
def get_sample_covariance(data):
    data = data - data.mean(axis=0)
    sample_cov = data.T @ data / (data.shape[0] - 1)
    return sample_cov
I then plotted the eigenvalues of sample_cov
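The plotting step itself isn't shown in the post; it was presumably something along these lines (a sketch of mine, with N chosen as a placeholder):
import matplotlib.pyplot as plt

N, d = 1000, 500            # d = 500 as stated below; N is a placeholder
X = generate_data(N, d)
sample_cov = get_sample_covariance(X)
eigs = numpy.sort(numpy.linalg.eigvalsh(sample_cov))[::-1]   # largest first
plt.plot(eigs)
plt.xlabel('index')
plt.ylabel('eigenvalue of sample covariance')
plt.show()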
I expected a roughly linear function, going from d + 1 (d was 500) down to 1.
I got this
What gives? Where's the mistake?

The generation of samples appears to be correct. But the estimate of the covariance from N random samples is only accurate if N >> d. In particular, the highest eigenvalue tends to be systematically off. However, if you take the mean of all the eigenvalues, it is quite accurate.
import numpy as np
import matplotlib.pyplot as plt

def get_max_mean_eig_run(N, d):
    """Return highest and mean eigenvalue of the sample covariance for random samples"""
    X = generate_data(N, d)
    cov = get_sample_covariance(X)
    eig = np.linalg.eigvals(cov)
    return eig.max(), eig.mean()
plt.close('all')
np.random.seed(1)
ds = np.arange(1, 501, 25)
# shape (len(ds), 2): max and mean eigenvalues
eigs = np.array([get_max_mean_eig_run(1000, d) for d in ds])
plt.close('all')
plt.xlabel('d')
plt.ylabel('Eigenvalue')
plt.plot(ds, eigs[:, 0], 'r-', label='Max')
plt.plot(ds, eigs[:, 1], 'r--', label='Mean')
plt.scatter(ds, ds+1, color='k', marker='o', label='max expected')
plt.scatter(ds, (ds+2)/2, color='k', marker='*', label='mean expected')
plt.legend()
For this particular input spectrum of eigenvalues, you need N > 100*d to get the maximum eigenvalue close (within about 5%) to the maximum input eigenvalue, but this will likely be different for more realistic cases.
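A quick way to see that convergence (a rough sketch of my own, reusing the functions above; the runtime grows with N) is to fix d and increase N:
d = 50
for N in [2 * d, 10 * d, 100 * d]:
    X = generate_data(N, d)
    top = np.linalg.eigvals(get_sample_covariance(X)).real.max()
    print(N, top)   # the largest eigenvalue should creep down towards d + 1 = 51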
Here is a histogram of the eigenvalues (for N=1000, d=500)
plt.close('all')
X = generate_data(1000, 500)
eigs = np.linalg.eigvals(get_sample_covariance(X))
hist, bin_edges = np.histogram(eigs, bins=20)
bin_centers = (bin_edges[1:] + bin_edges[:-1])/2
plt.step(bin_centers, hist, where='mid')
plt.xlabel('Eigenvalue')
plt.ylabel('Count (per bin)')

Related

Is there a way of batch sampling from Numpy's multivariate normal distribution in a vectorised fashion?

I'm presently trying to run a vectorised batch multivariate sampling operation via Numpy. I have k mean vectors of shape [N,] corresponding to k covariance matrices of dimensions [N, N], and I'm trying to return k draws of shape [N,] from the multivariate normal distributions.
I presently have a loop that does the above,
for batch in range(batch_size):
    c[batch, :] = np.random.multivariate_normal(mean=a[batch, :], cov=b[batch, :, :])
but would like to consolidate the above into a vectorised operation. The issue is that np.random.multivariate_normal can only take a 1-D array as the mean and a 2-D array as the covariance.
I can do batch-sampling via PyTorch's multivariate normal class, but I'm trying to integrate with some pre-existing Numpy code, and I'd prefer to limit the number of conversions happening.
Googling pulled up this question, which could be resolved by melting the mean, but in my case, I'm not using the same covariance matrix and can't go about things exactly the same way.
Thank you very much for your help. I figure there's a good chance that I won't be able to handle batches using the Numpy distribution because of the argument constraints, but wanted to make sure I wasn't missing anything.
I couldn't find a built-in function in numpy, but it can be self-implemented by performing a Cholesky decomposition of the covariance matrix, Σ = LLᵀ, and then making use of the fact that, given a vector X of i.i.d. standard normal variables, the transformation LX + µ has covariance Σ and mean µ.
This can be implemented using e.g. np.linalg.cholesky() (note that this function supports batch mode!), and np.random.normal():
# cov: (*B, D, D)
# mean: (*B, D)
# result: (*S, *B, D)
L = np.linalg.cholesky(cov)
X = np.random.standard_normal((*S, *B, D, 1))
Y = (L @ X).reshape(*S, *B, D) + mean
Here, packed in a function for easier use:
import numpy as np

def sample_batch_mvn(
    mean: np.ndarray,
    cov: np.ndarray,
    size: "tuple | int" = (),
) -> np.ndarray:
    """
    Batch sample multivariate normal distribution.

    Arguments:
        mean: expected values of shape (…M, D)
        cov: covariance matrices of shape (…M, D, D)
        size: additional batch shape (…B)

    Returns: samples from the multivariate normal distributions
             shape: (…B, …M, D)

    It is not required that ``mean`` and ``cov`` have the same shape
    prefix, only that they are broadcastable against each other.
    """
    mean = np.asarray(mean)
    cov = np.asarray(cov)
    size = (size, ) if isinstance(size, int) else tuple(size)
    shape = size + np.broadcast_shapes(mean.shape, cov.shape[:-1])
    X = np.random.standard_normal((*shape, 1))
    L = np.linalg.cholesky(cov)
    return (L @ X).reshape(shape) + mean
Now in order to test this function, we first need a good batch of covariance matrices. We'll generate a couple to test the sampling performance a bit:
# Generate N batch of D-dimensional covariance matrices C:
N = 5000
D = 2
L = np.zeros((N, D, D))
L[(..., *np.tril_indices(D))] = \
    np.random.normal(size=(N, D * (D + 1) // 2))
cov = L @ np.swapaxes(L, -1, -2)
The method used to generate the covariance matrices here in fact works by sampling the Cholesky factors L. With prior knowledge of these factors, we of course wouldn't need to compute the Cholesky decomposition in the sampling function. However, to test the general applicability of the function, we will forget about them and just pass the covariance matrices C:
mean = np.zeros(2)
samples = sample_batch_mvn(mean, cov, 1000)
print(samples.shape) # (1000, 5000, 2)
Sampling these 5 million 2D vectors takes about 0.4s on my PC.
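As a quick sanity check (my own addition, not part of the answer above), the empirical covariance of the 1000 draws for any one matrix should be close to the corresponding cov[i]:
s0 = samples[:, 0, :]                  # the 1000 draws for covariance matrix 0
emp_cov = np.cov(s0, rowvar=False)     # 2x2 sample covariance of those draws
print(np.round(emp_cov, 2))
print(np.round(cov[0], 2))             # should agree up to sampling noise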
And, as almost always, a considerable amount of effort will go into plotting (here showing some samples for the first 9 of the 5000 covariance matrices):
import scipy.stats as stats
import matplotlib.pyplot as plt

fig, axs = plt.subplots(3, 3, figsize=(9, 9))
for ax, i in zip(axs.ravel(), range(5000)):
    cc = cov[i]
    xsamples = samples[:100, i, 0]
    ysamples = samples[:100, i, 1]
    xmin = xsamples.min()
    xmax = xsamples.max()
    ymin = ysamples.min()
    ymax = ysamples.max()
    xpad = (xmax - xmin) * 0.05
    ypad = (ymax - ymin) * 0.05
    xlim = (xmin - xpad, xmax + xpad)
    ylim = (ymin - ypad, ymax + ypad)
    xs = np.linspace(*xlim, num=51)
    ys = np.linspace(*ylim, num=51)
    xy = np.dstack(np.meshgrid(xs, ys))
    pdf = stats.multivariate_normal.pdf(xy, mean, cc)
    ax.contourf(xs, ys, pdf, 33, cmap='YlGnBu')
    ax.plot(xsamples, ysamples, 'r.', alpha=.6,
            markeredgecolor='k', markeredgewidth=0.5)
    ax.set_xlim(*xlim)
    ax.set_ylim(*ylim)
plt.show()
Some inspiration for this:
Some notes on sampling from a multivariate normal
Pinheiro and Bates, 1996, Unconstrained Parameterizations for Variance-Covariance Matrices

plot least-square regression line in the log-log scale plot

I want to plot the least-squares regression line for X and Y in a log-log scale plot and find the coefficients. The line function is log(Y) = a*log(X) + b, or equivalently Y = 10^b * X^a. What are
the coefficients a and b? How can I use polyfit in NumPy?
I use the code below, but I get this runtime error:
divide by zero encountered in log10 X_log = np.log10(X)
import numpy as np
import matplotlib.pyplot as plt

X_log = np.log10(X)
Y_log = np.log10(Y)
X_mean = np.mean(X_log)
Y_mean = np.mean(Y_log)
num = 0
den = 0
for i in range(len(X)):
    num += (X_log[i] - X_mean) * (Y_log[i] - Y_mean)
    den += (X_log[i] - X_mean) ** 2
m = num / den
c = Y_mean - m * X_mean
print(m, c)
Y_pred = m * X_log + c
plt.plot([min(X_log), max(X_log)], [min(Y_pred), max(Y_pred)], color='red')  # predicted
plt.show()
It seems like you have X values that are too close to zero (or equal to zero). Can you show the values you pass to X_log = np.log10(X)?
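One way to sidestep that divide-by-zero warning (a small sketch, assuming X and Y are the arrays from the question) is to drop non-positive values before taking logs:
import numpy as np

X = np.asarray(X, dtype=float)
Y = np.asarray(Y, dtype=float)
mask = (X > 0) & (Y > 0)       # log10 is only defined for positive values
X_log = np.log10(X[mask])
Y_log = np.log10(Y[mask])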
To use np.polyfit just write
coeff = np.polyfit(np.log10(x), np.log10(y), deg=1)
coeff will now be an array [a, b] with your coefficients for a first-degree fit (hence deg=1) to the data points (log(x), log(y)). If you want the variance of the coefficients, use
coeff, cov = np.polyfit(np.log10(x), np.log10(y), deg=1, cov=True)
cov is now your covariance matrix.
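To actually draw the fitted line on the log-log plot (a minimal sketch of mine, assuming x and y are positive arrays), you could follow up with:
import numpy as np
import matplotlib.pyplot as plt

a, b = np.polyfit(np.log10(x), np.log10(y), deg=1)     # slope a, intercept b
xs = np.linspace(x.min(), x.max(), 200)
plt.loglog(x, y, 'o', label='data')
plt.loglog(xs, 10**b * xs**a, 'r-', label=f'fit: y = 10^{b:.2f} * x^{a:.2f}')
plt.legend()
plt.show()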

Finding Fourier coefficients algorithm

Ok, so I have been trying to code a "naive" method to calculate the coefficients for a standard Fourier Series in complex form. I am getting very close, I think, but there are some odd behaviors. This may be more of a math question than a programming one, but I already asked on math.stackexchange and got zero answers. Here is my working code:
import matplotlib.pyplot as plt
import numpy as np

def coefficients(fn, dx, m, L):
    """
    Calculate the complex form fourier series coefficients for the first M
    waves.

    :param fn: function to sample
    :param dx: sampling frequency
    :param m: number of waves to compute
    :param L: We are solving on the interval [-L, L]
    :return: an array containing M Fourier coefficients c_m
    """
    N = 2*L / dx
    coeffs = np.zeros(m, dtype=np.complex_)
    xk = np.arange(-L, L + dx, dx)

    # Calculate the coefficients for each wave
    for mi in range(m):
        coeffs[mi] = 1/N * sum(fn(xk)*np.exp(-1j * mi * np.pi * xk / L))

    return coeffs
def fourier_graph(range, L, c_coef, function=None, plot=True, err_plot=False):
    """
    Given a range to plot and an array of complex fourier series coefficients,
    this function plots the representation.

    :param range: the x-axis values to plot
    :param c_coef: the complex fourier coefficients, calculated by coefficients()
    :param plot: Default True. Plot the fourier representation
    :param function: For calculating relative error, provide function definition
    :param err_plot: relative error plotted. requires a function to compare solution to
    :return: the fourier series values for the given range
    """
    # Number of coefficients to sum over
    w = len(c_coef)

    # Initialize solution array
    s = np.zeros(len(range))
    for i, ix in enumerate(range):
        for iw in np.arange(w):
            s[i] += c_coef[iw] * np.exp(1j * iw * np.pi * ix / L)

    # If a plot is desired:
    if plot:
        plt.suptitle("Fourier Series Plot")
        plt.xlabel(r"$t$")
        plt.ylabel(r"$f(x)$")
        plt.plot(range, s, label="Fourier Series")

        if err_plot:
            plt.plot(range, function(range), label="Actual Solution")
            plt.legend()

        plt.show()

    # If error plot is desired:
    if err_plot:
        err = abs(function(range) - s) / function(range)
        plt.suptitle("Plot of Relative Error")
        plt.xlabel("Steps")
        plt.ylabel("Relative Error")
        plt.plot(range, err)
        plt.show()

    return s

if __name__ == '__main__':
    # Assuming the interval [-l, l] apply discrete fourier transform:

    # number of waves to sum
    wvs = 50

    # step size for calculating c_m coefficients (trap rule)
    deltax = .025 * np.pi

    # length of interval for Fourier Series is 2*l
    l = 2 * np.pi

    c_m = coefficients(np.exp, deltax, wvs, l)

    # The x range we would like to interpolate function values
    x = np.arange(-l, l, .01)
    sol = fourier_graph(x, l, c_m, np.exp, err_plot=True)
Now, there is a factor of 2/N multiplying each coefficient. However, I have a derivation of this sum in my professor's typed notes that does not include this factor of 2/N. When I derived the form myself, I arrived at a formula with a factor of 1/N that did not cancel no matter what tricks I tried. I asked over at math.stackexchange what was going on, but got no answers.
What I did notice is that adding the 1/N decreased the difference between the actual solution and the Fourier series by a massive amount, but it's still not right. So I tried 2/N and got even better results. I am really trying to figure this out so I can write a nice, clean algorithm for basic Fourier series before I try to learn about Fast Fourier Transforms.
So what am I doing wrong here?
Assuming c_n is given by A_n as in MathWorld, i.e.
c_n = 1/T \int_{-T/2}^{T/2} f(x) e^{-2 i \pi n x / T} dx,
we can compute the coefficients c_n analytically (which is a good way to compare against your trapezoidal integral). For f(x) = e^x on [-2\pi, 2\pi] (so T = 4\pi), setting
k = 1 - i n / 2
gives
c_n = 1/(4 \pi k) * (e^{2 \pi k} - e^{-2 \pi k})
So your coefficients are likely to be properly computed (both wrong curves look alike).
Now notice that when you reconstruct f, you sum the coefficients c_0 up to c_m.
But the reconstruction should use c_{-m} through c_m.
So you are missing half of the coefficients.
Below is a fix with your adapted coefficients function and the theoretical coefficients:
import matplotlib.pyplot as plt
import numpy as np

def coefficients(fn, dx, m, L):
    """
    Calculate the complex form fourier series coefficients for the first M
    waves.

    :param fn: function to sample
    :param dx: sampling frequency
    :param m: number of waves to compute
    :param L: We are solving on the interval [-L, L]
    :return: an array containing M Fourier coefficients c_m
    """
    N = 2*L / dx
    coeffs = np.zeros(m, dtype=np.complex_)
    xk = np.arange(-L, L + dx, dx)

    # Calculate the coefficients for each wave
    for mi in range(m):
        n = mi - m/2
        coeffs[mi] = 1/N * sum(fn(xk)*np.exp(-1j * n * np.pi * xk / L))

    return coeffs

def fourier_graph(range, L, c_coef, ref, function=None, plot=True, err_plot=False):
    """
    Given a range to plot and an array of complex fourier series coefficients,
    this function plots the representation.

    :param range: the x-axis values to plot
    :param c_coef: the complex fourier coefficients, calculated by coefficients()
    :param plot: Default True. Plot the fourier representation
    :param function: For calculating relative error, provide function definition
    :param err_plot: relative error plotted. requires a function to compare solution to
    :return: the fourier series values for the given range
    """
    # Number of coefficients to sum over
    w = len(c_coef)

    # Initialize solution arrays
    s = np.zeros(len(range), dtype=complex)
    t = np.zeros(len(range), dtype=complex)

    for i, ix in enumerate(range):
        for iw in np.arange(w):
            n = iw - w/2
            s[i] += c_coef[iw] * (np.exp(1j * n * ix * 2 * np.pi / L))
            t[i] += ref[iw] * (np.exp(1j * n * ix * 2 * np.pi / L))

    # If a plot is desired:
    if plot:
        plt.suptitle("Fourier Series Plot")
        plt.xlabel(r"$t$")
        plt.ylabel(r"$f(x)$")
        plt.plot(range, s, label="Fourier Series")
        plt.plot(range, t, label="expected Solution")
        plt.legend()

        if err_plot:
            plt.plot(range, function(range), label="Actual Solution")
            plt.legend()

        plt.show()

    return s

def ref_coefficients(m):
    """
    Calculate the analytic complex form fourier series coefficients for the
    first M waves of fn = exp on the interval [-2*pi, 2*pi].

    :param m: number of waves to compute
    :return: an array containing M Fourier coefficients c_m
    """
    coeffs = np.zeros(m, dtype=np.complex_)

    # Calculate the coefficients for each wave
    for iw in range(m):
        n = iw - m/2
        k = (1 - (1j*n)/2)
        coeffs[iw] = 1/(4*np.pi*k) * (np.exp(2*np.pi*k) - np.exp(-2*np.pi*k))

    return coeffs

if __name__ == '__main__':
    # Assuming the interval [-l, l] apply discrete fourier transform:

    # number of waves to sum
    wvs = 50

    # step size for calculating c_m coefficients (trap rule)
    deltax = .025 * np.pi

    # length of interval for Fourier Series is 2*l
    l = 2 * np.pi

    c_m = coefficients(np.exp, deltax, wvs, l)

    # The x range we would like to interpolate function values
    x = np.arange(-l, l, .01)
    ref = ref_coefficients(wvs)
    sol = fourier_graph(x, 2*l, c_m, ref, np.exp, err_plot=True)

Measuring the similarity between two irregular plots

I have two irregular lines as lists of [x, y] coordinates, with peaks and troughs. The lengths of the lists may differ slightly (they are unequal). I want to measure their similarity, checking whether peaks and troughs of similar depth or height occur at proper intervals, and return a similarity measure. I want to do this in Python. Is there any built-in function to do this?
I don't know of any built-in functions in Python to do this.
I can give you a list of possible functions in the Python ecosystem you can use. This is in no way a complete list of functions, and there are probably quite a few methods out there that I am not aware of.
If the data is ordered, but you don't know which data point is the first and which data point is last:
Use the directed Hausdorff distance
If the data is ordered, and you know the first and last points are correct:
Discrete Fréchet distance *
Dynamic Time Warping (DTW) *
Partial Curve Mapping (PCM) **
A Curve-Length distance metric (uses arc length distance from beginning to end) **
Area between two curves **
* Generally mathematical method used in a variety of machine learning tasks
** Methods I've used to identify unique material hysteresis responses
First let's assume we have two sets of the exact same random X, Y data. Note that all of these methods will return zero in that case. You can install similaritymeasures from pip if you do not have it.
import numpy as np
from scipy.spatial.distance import directed_hausdorff
import similaritymeasures
import matplotlib.pyplot as plt
# Generate random experimental data
np.random.seed(121)
x = np.random.random(100)
y = np.random.random(100)
P = np.array([x, y]).T
# Generate an exact copy of P, Q, which we will use to compare
Q = P.copy()
dh, ind1, ind2 = directed_hausdorff(P, Q)
df = similaritymeasures.frechet_dist(P, Q)
dtw, d = similaritymeasures.dtw(P, Q)
pcm = similaritymeasures.pcm(P, Q)
area = similaritymeasures.area_between_two_curves(P, Q)
cl = similaritymeasures.curve_length_measure(P, Q)
# all methods will return 0.0 when P and Q are the same
print(dh, df, dtw, pcm, cl, area)
The printed output is
0.0, 0.0, 0.0, 0.0, 0.0, 0.0
This is because the curves P and Q are exactly the same!
Now let's assume P and Q are different.
# Generate random experimental data
np.random.seed(121)
x = np.random.random(100)
y = np.random.random(100)
P = np.array([x, y]).T
# Generate random Q
x = np.random.random(100)
y = np.random.random(100)
Q = np.array([x, y]).T
dh, ind1, ind2 = directed_hausdorff(P, Q)
df = similaritymeasures.frechet_dist(P, Q)
dtw, d = similaritymeasures.dtw(P, Q)
pcm = similaritymeasures.pcm(P, Q)
area = similaritymeasures.area_between_two_curves(P, Q)
cl = similaritymeasures.curve_length_measure(P, Q)
# the values are now nonzero, since P and Q differ
print(dh, df, dtw, pcm, cl, area)
The printed output is
0.107, 0.743, 37.69, 21.5, 6.86, 11.8
which quantify how different P is from Q according to each method.
You now have many methods to compare the two curves. I would start with DTW, since this has been used in many time series applications which look like the data you have uploaded.
We can visualize what P and Q look like with the following code.
plt.figure()
plt.plot(P[:, 0], P[:, 1])
plt.plot(Q[:, 0], Q[:, 1])
plt.show()
Since your arrays are not the same size (and I am assuming they cover the same real time span), you need to interpolate them to compare across a related set of points.
The following code does that, and calculates correlation measures:
#!/usr/bin/python
import numpy as np
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
import scipy.spatial.distance as ssd
import scipy.stats as ss

x = np.linspace(0, 10, num=11)
x2 = np.linspace(1, 11, num=13)
y = 2*np.cos(x) + 4 + np.random.random(len(x))
y2 = 2*np.cos(x2) + 5 + np.random.random(len(x2))

# Interpolating now, using linear, but you can do better based on your data
f = interp1d(x, y)
f2 = interp1d(x2, y2)
points = 15
xnew = np.linspace(min(x), max(x), num=points)
xnew2 = np.linspace(min(x2), max(x2), num=points)
ynew = f(xnew)
ynew2 = f2(xnew2)

plt.plot(x, y, 'r', x2, y2, 'g', xnew, ynew, 'r--', xnew2, ynew2, 'g--')
plt.show()

# Now compute correlations
print(ssd.correlation(ynew, ynew2))              # Distance measure based on correlation between the two vectors
print(np.correlate(ynew, ynew2, mode='valid'))   # Cross-correlation of same-sized arrays
print(np.corrcoef(ynew, ynew2))                  # Correlation matrix for the two arrays
print(ss.spearmanr(ynew, ynew2))                 # Spearman correlation for the two arrays
Output:
0.499028272458
[ 363.48984942]
[[ 1. 0.50097173]
[ 0.50097173 1. ]]
SpearmanrResult(correlation=0.45357142857142857, pvalue=0.089485900143027278)
Remember that the correlations here are parametric (Pearson-type) and assume a linear relationship. If that is not the case, and you think your arrays are just moving up and down together, you can use Spearman's rank correlation, as in the last example.
I'm not aware of a built-in function, but it sounds like you can modify the Levenshtein distance. The following code is adapted from the code at Wikibooks.
def point_distance(p1, p2):
    # Define a distance between points; if they are the same, the distance should be 0.
    # (The original left this undefined; Euclidean distance is one reasonable choice.)
    return ((p1[0] - p2[0])**2 + (p1[1] - p2[1])**2) ** 0.5

def levenshtein_point(l1, l2):
    if len(l1) < len(l2):
        return levenshtein_point(l2, l1)

    # len(l1) >= len(l2)
    if len(l2) == 0:
        return len(l1)

    previous_row = range(len(l2) + 1)
    for i, p1 in enumerate(l1):
        current_row = [i + 1]
        for j, p2 in enumerate(l2):
            # print('{},{}'.format(p1, p2))  # debug output from the original
            insertions = previous_row[j + 1] + 1  # j+1 instead of j since previous_row and current_row are one character longer
            deletions = current_row[j] + 1        # than l2
            substitutions = previous_row[j] + point_distance(p1, p2)
            current_row.append(min(insertions, deletions, substitutions))
        previous_row = current_row

    return previous_row[-1]
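A minimal usage sketch (my own, relying on the Euclidean point_distance filled in above):
curve_a = [(0, 0), (1, 2), (2, 1), (3, 3)]
curve_b = [(0, 0), (1, 2.1), (2, 0.9), (3, 3), (4, 2)]
print(levenshtein_point(curve_a, curve_b))   # a small value indicates similar curves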

Compute divergence of vector field using python

Is there a function that can be used to calculate the divergence of a vector field? (Matlab has one.) I would expect it to exist in numpy/scipy, but I cannot find it using Google.
I need to calculate div[A * grad(F)], where
F = np.array([[1,2,3,4],[5,6,7,8]]) # (2D numpy ndarray)
A = np.array([[1,2,3,4],[1,2,3,4]]) # (2D numpy ndarray)
so grad(F) is a list of 2D ndarrays
I know I can calculate divergence like this but do not want to reinvent the wheel. (I would also expect something more optimized) Does anyone have suggestions?
Just a hint for everybody reading this:
the functions above do not compute the divergence of a vector field. They sum the derivatives of a scalar field A:
result = dA/dx + dA/dy
in contrast to the divergence of a vector field (three-dimensional example):
result = sum_i dAi/dxi = dAx/dx + dAy/dy + dAz/dz
Vote down for all! It is mathematically simply wrong.
Cheers!
import numpy as np

def divergence(field):
    "return the divergence of a n-D field"
    return np.sum(np.gradient(field), axis=0)
Based on Juh_'s answer, but modified for the correct divergence of a vector field formula
def divergence(f):
"""
Computes the divergence of the vector field f, corresponding to dFx/dx + dFy/dy + ...
:param f: List of ndarrays, where every item of the list is one dimension of the vector field
:return: Single ndarray of the same shape as each of the items in f, which corresponds to a scalar field
"""
num_dims = len(f)
return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])
Matlab's documentation uses this exact formula (scroll down to Divergence of a Vector Field)
The answer of @user2818943 is good, but it can be optimized a little:
from functools import reduce

def divergence(F):
    """ compute the divergence of n-D scalar field `F` """
    return reduce(np.add, np.gradient(F))
Timeit:
F = np.random.rand(100,100)
timeit reduce(np.add,np.gradient(F))
# 1000 loops, best of 3: 318 us per loop
timeit np.sum(np.gradient(F),axis=0)
# 100 loops, best of 3: 2.27 ms per loop
About 7 times faster:
sum implicitly constructs a 3d array from the list of gradient fields returned by np.gradient. This is avoided by using reduce.
Now, in your question, what do you mean by div[A * grad(F)]?
About A * grad(F): A is a 2d array, and grad(F) is a list of 2d arrays. So I took it to mean multiplying each gradient field by A.
Applying divergence to the (scaled by A) gradient field is unclear, because by definition div(F) = d(F)/dx + d(F)/dy + .... I guess this is just an error of formulation.
For the first point, multiplying summed elements Bi by the same factor A can be factorized:
Sum(A*Bi) = A*Sum(Bi)
Thus, you can get this weighted gradient simply with: A*divergence(F)
If `A` is instead a list of factors, one for each dimension, then the solution would be:
def weighted_divergence(W, F):
    """
    Return the divergence of n-D array `F` with gradient weighted by `W`

    `W` is a list of factors for each dimension of F: the gradient of `F` over
    the `i`th dimension is multiplied by `W[i]`. Each `W[i]` can be a scalar
    or an array with same (or broadcastable) shape as `F`.
    """
    wGrad = list(map(np.multiply, W, np.gradient(F)))
    return reduce(np.add, wGrad)
result = weighted_divergence(A,F)
What Daniel modified is the right answer; let me explain the self-defined divergence function in more detail:
np.gradient() is defined as: np.gradient(f) = df/dx, df/dy, df/dz, ...
but we need to define divergence as: divergence(f) = dfx/dx + dfy/dy + dfz/dz + ..., i.e. the derivative of each component taken along its own axis, then summed.
Let's test it and compare with the divergence example in Matlab:
import numpy as np
import matplotlib.pyplot as plt

NY = 50
ymin = -2.
ymax = 2.
dy = (ymax - ymin) / (NY - 1.)

NX = NY
xmin = -2.
xmax = 2.
dx = (xmax - xmin) / (NX - 1.)

def divergence(f):
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])

y = np.array([ymin + float(i)*dy for i in range(NY)])
x = np.array([xmin + float(i)*dx for i in range(NX)])
x, y = np.meshgrid(x, y, indexing='ij', sparse=False)

Fx = np.cos(x + 2*y)
Fy = np.sin(x - 2*y)
F = [Fx, Fy]
g = divergence(F)

plt.pcolormesh(x, y, g)
plt.colorbar()
plt.savefig('Div' + str(NY) + '.png', format='png')
plt.show()
---------- UPDATED VERSION: include the differential steps ----------
Thanks to the comment from @henry: np.gradient uses a default step of 1, so the results may show some mismatch. We can provide our own differential steps.
# https://stackoverflow.com/a/47905007/5845212
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable

NY = 50
ymin = -2.
ymax = 2.
dy = (ymax - ymin) / (NY - 1.)

NX = NY
xmin = -2.
xmax = 2.
dx = (xmax - xmin) / (NX - 1.)

def divergence(f, h):
    """
    div(F) = dFx/dx + dFy/dy + ...
    g = np.gradient(Fx, dx, axis=1) + np.gradient(Fy, dy, axis=0)                            # 2D
    g = np.gradient(Fx, dx, axis=2) + np.gradient(Fy, dy, axis=1) + np.gradient(Fz, dz, axis=0)  # 3D
    """
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], h[i], axis=i) for i in range(num_dims)])

y = np.array([ymin + float(i)*dy for i in range(NY)])
x = np.array([xmin + float(i)*dx for i in range(NX)])
x, y = np.meshgrid(x, y, indexing='ij', sparse=False)

Fx = np.cos(x + 2*y)
Fy = np.sin(x - 2*y)
F = [Fx, Fy]
h = [dx, dy]

print('plotting')
rows = 1
cols = 2
# plt.clf()
plt.figure(figsize=(cols*3.5, rows*3.5))
plt.minorticks_on()

# g = np.gradient(Fx, dx, axis=1) + np.gradient(Fy, dy, axis=0)  # equivalent to our func
g = divergence(F, h)
ax = plt.subplot(rows, cols, 1, aspect='equal', title='div numerical')
# im = plt.pcolormesh(x, y, g)
im = plt.pcolormesh(x, y, g, shading='nearest', cmap=plt.cm.get_cmap('coolwarm'))
plt.quiver(x, y, Fx, Fy)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plt.colorbar(im, cax=cax, format='%.1f')

g = -np.sin(x + 2*y) - 2*np.cos(x - 2*y)
ax = plt.subplot(rows, cols, 2, aspect='equal', title='div analytical')
# im = plt.pcolormesh(x, y, g)
im = plt.pcolormesh(x, y, g, shading='nearest', cmap=plt.cm.get_cmap('coolwarm'))
plt.quiver(x, y, Fx, Fy)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plt.colorbar(im, cax=cax, format='%.1f')

plt.tight_layout()
plt.savefig('divergence.png', format='png')
plt.show()
Based on @paul_chen's answer, with some additions for Matplotlib 3.3.0 (a shading parameter needs to be passed, and the default colormap I guess has changed):
import numpy as np
import matplotlib.pyplot as plt

NY = 20; ymin = -2.; ymax = 2.
dy = (ymax - ymin) / (NY - 1.)
NX = NY
xmin = -2.; xmax = 2.
dx = (xmax - xmin) / (NX - 1.)

def divergence(f):
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])

y = np.array([ymin + float(i)*dy for i in range(NY)])
x = np.array([xmin + float(i)*dx for i in range(NX)])
x, y = np.meshgrid(x, y, indexing='ij', sparse=False)

Fx = np.cos(x + 2*y)
Fy = np.sin(x - 2*y)
F = [Fx, Fy]
g = divergence(F)

plt.pcolormesh(x, y, g, shading='nearest', cmap=plt.cm.get_cmap('coolwarm'))
plt.colorbar()
plt.quiver(x, y, Fx, Fy)
plt.savefig('Div.png', format='png')
The divergence as a built-in function is included in matlab, but not numpy. This is the sort of thing that it may perhaps be worthwhile to contribute to pylab, an effort to create a viable open-source alternative to matlab.
http://wiki.scipy.org/PyLab
Edit: Now called http://www.scipy.org/stackspec.html
As far as I can tell, the answer is that there is no native divergence function in NumPy. Therefore, the best approach is to sum the appropriate partial derivative of each component of the field yourself, i.e. compute the divergence by hand.
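For completeness, a minimal sketch of that sum (the same idea as the answers above; it assumes the i-th component of the field varies along axis i, and the optional spacings argument is my own addition):
import numpy as np

def divergence(f, spacings=None):
    """Sum dF_i/dx_i over the components of the vector field f (a list of ndarrays)."""
    spacings = spacings or [1.0] * len(f)
    return sum(np.gradient(fi, hi, axis=i) for i, (fi, hi) in enumerate(zip(f, spacings)))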
I don't think the answer by @Daniel is correct, especially when the input is in the order [Fx, Fy, Fz, ...].
A simple test case
See the MATLAB code:
a = [1 2 3;1 2 3; 1 2 3];
b = [[7 8 9] ;[1 5 8] ;[2 4 7]];
divergence(a,b)
which gives the result:
ans =
-5.0000 -2.0000 0
-1.5000 -1.0000 0
2.0000 0 0
and Daniel's solution:
import numpy as np

def divergence(f):
    """
    Daniel's solution
    Computes the divergence of the vector field f, corresponding to dFx/dx + dFy/dy + ...

    :param f: List of ndarrays, where every item of the list is one dimension of the vector field
    :return: Single ndarray of the same shape as each of the items in f, which corresponds to a scalar field
    """
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])

if __name__ == '__main__':
    a = np.array([[1, 2, 3]] * 3)
    b = np.array([[7, 8, 9], [1, 5, 8], [2, 4, 7]])
    div = divergence([a, b])
    print(div)
which gives:
[[1. 1. 1. ]
[4. 3.5 3. ]
[2. 2.5 3. ]]
Explanation
The mistake in Daniel's solution is that, in NumPy, the x axis is the last axis rather than the first. When using np.gradient(x, axis=0) on a 2d array x, NumPy actually returns the gradient along the y direction.
My solution
Here is my solution, based on Daniel's answer.
def divergence(f):
    """
    Computes the divergence of the vector field f, corresponding to dFx/dx + dFy/dy + ...

    :param f: List of ndarrays, where every item of the list is one dimension of the vector field
    :return: Single ndarray of the same shape as each of the items in f, which corresponds to a scalar field
    """
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[num_dims - i - 1], axis=i) for i in range(num_dims)])
which gives the same result as MATLAB divergence in my test case.
Somehow the previous attempts to compute the divergence are wrong! Let me show you:
We have the following vector field F:
Fx = cos(x + 2y)
Fy = sin(x - 2y)
If we compute the divergence (using Mathematica):
Div[{Cos[x + 2*y], Sin[x - 2*y]}, {x, y}]
we get:
-2 Cos[x - 2 y] - Sin[x + 2 y]
which has a maximum value in the range of y [-1,2] and x [-2,2]:
N[Max[Table[-2 Cos[x - 2 y] - Sin[x + 2 y], {x, -2, 2 }, {y, -2, 2}]]] = 2.938
Using the divergence equation given here:
def divergence(f):
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])
we get a maximum value of about 0.625
Correct divergence function: Compute divergence with python
