I am using the least squares method below to calculate coefficients for:
# Estimate coefficients of the linear equation y = a + b*x
import numpy as np

def calc_coefficients(_x, _y):
    x, y = np.mean(_x), np.mean(_y)
    xy = np.mean(_x*_y)
    x2, y2 = np.mean(_x**2), np.mean(_y**2)
    n = len(_x)
    b = (xy - x*y) / (x2 - x**2)
    a = y - b*x
    sig_b = np.sqrt((y2 - y**2)/(x2 - x**2) - b**2) / np.sqrt(n)
    sig_a = sig_b * np.sqrt(x2 - x**2)
    return a, b, sig_a, sig_b
Example data:
_x = np.array([0.009412743, 0.014965211, 0.013263312, 0.013529132, 0.009989368, 0.013932615, 0.020849682, 0.010953529, 0.003608903, 0.007220992, 0.012750529, 0.021608436, 0.031742052, 0.022482958, 0.021137599, 0.018703295, 0.021633681, 0.019866029, 0.020260629, 0.034433715, 0.009241074, 0.012027059])
_y = np.array([0.294158677, 0.359935335, 0.313484808, 0.301917271, 0.169190763, 0.486254864, 0.305846328, 0.347077387, 0.188928817, 0.422194367, 0.41157232, 0.39281496, 0.497935681, 0.34763333, 0.281712023, 0.352045535, 0.339958296, 0.395932086, 0.359905526, 0.450004349, 0.395200865, 0.365162443])
However, I need a (y-intercept) to be zero. (y = bx).
I have tried using:
np.linalg.lstsq(_x, _y)
but I get this error:
LinAlgError: 1-dimensional array given. Array must be two-dimensional
What is the best method to fit the data for y = bx?
The error occurs because you pass a one-dimensional array; lstsq expects a two-dimensional array of shape (n, 1), i.e. a matrix with one column. You could simply use x.reshape(-1, 1), but here is a way to do a least-squares fit with an arbitrary set of degrees of x:
import numpy as np
x = np.array([0, 1, 2, 3, 4, 5])
y = np.array([3, 6, 5, 7, 9, 1])
degrees = [1] # list of degrees of x to use
matrix = np.stack([x**d for d in degrees], axis=-1) # stack them like columns
coeff = np.linalg.lstsq(matrix, y)[0] # lstsq returns some additional info we ignore
print("Coefficients", coeff)
fit = np.dot(matrix, coeff)
print("Fitted curve/line", fit)
The matrix you pass to lstsq should have columns of the form f(x) where f runs through the terms you allow in the model. So if it's a general linear model, you'll have x**0 and x**1. With zero intercept forced, it's just x**1. In general these don't have to be powers of x, either.
Output for degrees = [1], model y = bx
Coefficients [ 1.41818182]
Fitted curve/line [ 0. 1.41818182 2.83636364 4.25454545 5.67272727 7.09090909]
Output for degrees = [0, 1], model y = a + bx
Coefficients [ 5.0952381 0.02857143]
Fitted curve/line [ 5.0952381 5.12380952 5.15238095 5.18095238 5.20952381 5.23809524]
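Applied to the data in the question, the zero-intercept fit then needs only a single column containing x itself (a small sketch, assuming _x and _y are the NumPy arrays given above):
import numpy as np

X = _x.reshape(-1, 1)                      # design matrix with the x**1 column only
b = np.linalg.lstsq(X, _y, rcond=None)[0]  # slope of y = b*x
print("slope b =", b[0])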
Suppose I am given an affine space as some conjunction of equalities say:
x + y + z = 2 && x - 3z = 4
which I represent in python as:
[ [1,1,1,2] , [1,0,-3,4] ]
I would like to find the basis of this affine set. Does python have any such library?
A little bit of mathematics:
Let the affine space be given by the matrix equation Ax = b. Let the k vectors {x_1, x_2, .. x_k } be the basis of the nullspace of A i.e. the space represented by Ax = 0. Let y be any particular solution of Ax = b. Then the basis of the affine space represented by Ax = b is given by the (k+1) vectors {y, y + x_1, y + x_2, .. y + x_k }. If there is no particular solution for Ax = b, then return some error message, as the set represented is empty.
For the above equation the matrix equation is:
Ax = b where
A = [[1, 1 , 1] , [1, 0 , -3]]
x = [x , y , z]^T
b = [2, 4]^T
If you are looking for a numerical (i.e. approximate) solution then you can try this:
import numpy as np
from scipy.linalg import null_space

A = np.array([[1, 1, 1], [1, 0, -3]])
b = np.array([2, 4])
null_sp = null_space(A)                               # basis of the nullspace of A
x0 = np.linalg.lstsq(A, b, rcond=None)[0][..., None]  # a particular solution of Ax = b
aff_basis = np.c_[np.zeros(A.shape[1])[..., None], null_sp] + x0
print(aff_basis)
It gives:
[[ 1.69230769 1.10395929]
[ 1.07692308 1.86138762]
[-0.76923077 -0.9653469 ]]
You find two points on the line by adding two constraints and solving for the remaining unknowns:
x = 1 -> (1, 2, -1)
z = 0 -> (4, -2, 0)
This only requires solving two easy 2x2 systems; no need for a library.
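Spelled out with numpy, the two 2x2 solves might look like this (a small sketch; they could just as easily be done by hand):
import numpy as np

# Fix x = 1: the system becomes y + z = 1 and -3z = 3
p1 = np.concatenate(([1.0], np.linalg.solve([[1., 1.], [0., -3.]], [1., 3.])))
# Fix z = 0: the system becomes x + y = 2 and x = 4
p2 = np.concatenate((np.linalg.solve([[1., 1.], [1., 0.]], [2., 4.]), [0.0]))
print(p1, p2)   # (1, 2, -1) and (4, -2, 0)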
I am fitting a 2d polynomial with the numpy function linalg.lstsq:
coeffs = np.array([y*0+1, y, x, x**2, y**2]).T
coeff_r, r, rank, s = np.linalg.lstsq(coeffs, values)
Some points that I am trying to fit are more reliable than others.
Is there a way to weigh the points differently?
Thanks
lstsq is enough for this; the weights can be applied to the equations. That is, if in an overdetermined system
3*a + 2*b = 9
2*a + 3*b = 4
5*a - 4*b = 2
you care about the first equation more than about the others, multiply it by some number greater than 1. For example, by 5:
15*a + 10*b = 45
2*a + 3*b = 4
5*a - 4*b = 2
Mathematically, the system is the same, but the least squares solution will be different because it minimizes the sum of squares of the residuals, and the residual of the 1st equation got multiplied by 5.
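As a quick illustration of that idea on the toy system above (a sketch; the variable names are mine):
import numpy as np

A = np.array([[3., 2.], [2., 3.], [5., -4.]])
rhs = np.array([9., 4., 2.])
w = np.array([5., 1., 1.])   # weight the first equation by 5

unweighted = np.linalg.lstsq(A, rhs, rcond=None)[0]
weighted = np.linalg.lstsq(A * w[:, None], rhs * w, rcond=None)[0]
print(unweighted, weighted)  # the weighted fit honours the first equation more closely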
Here is an example based on your code (with small adjustments to make it more NumPythonic). First, unweighted fit:
import numpy as np
x, y = np.meshgrid(np.arange(0, 3), np.arange(0, 3))
x = x.ravel()
y = y.ravel()
values = np.sqrt(x+y+2) # some values to fit
functions = np.stack([np.ones_like(y), y, x, x**2, y**2], axis=1)
coeff_r = np.linalg.lstsq(functions, values, rcond=None)[0]
values_r = functions.dot(coeff_r)
print(values_r - values)
This displays the residuals as
[ 0.03885814 -0.00502763 -0.03383051 -0.00502763 0.00097465 0.00405298
-0.03383051 0.00405298 0.02977753]
Now I give the 1st data point greater weight.
weights = np.ones_like(x)
weights[0] = 5
coeff_r = np.linalg.lstsq(functions*weights[:, None], values*weights, rcond=None)[0]
values_r = functions.dot(coeff_r)
print(values_r - values)
Residuals:
[ 0.00271103 -0.01948647 -0.04828936 -0.01948647 0.00820407 0.0112824
-0.04828936 0.0112824 0.03700695]
The first residual is now an order of magnitude smaller, of course at the expense of the other residuals.
I've written some beginner code to calculate the coefficients of a simple linear model using the normal equation.
# Modules
import numpy as np

# Loading data set
X, y = np.loadtxt('ex1data3.txt', delimiter=',', unpack=True)
data = np.genfromtxt('ex1data3.txt', delimiter=',')

def normalEquation(X, y):
    m = int(np.size(data[:, 1]))

    # This is the parameter (2x1) vector that will
    # contain my minimized values
    theta = []

    # I create a bias_vector to add to my newly created X vector
    bias_vector = np.ones((m, 1))

    # I need to reshape my original X(m,) vector so that I can
    # manipulate it with my bias_vector; they need to share the same
    # dimensions.
    X = np.reshape(X, (m, 1))

    # I combine these two vectors together to get a (m, 2) matrix
    X = np.append(bias_vector, X, axis=1)

    # Normal Equation:
    # theta = inv(X^T * X) * X^T * y

    # For convenience I create a new, transposed X matrix
    X_transpose = np.transpose(X)

    # Calculating theta
    theta = np.linalg.inv(X_transpose.dot(X))
    theta = theta.dot(X_transpose)
    theta = theta.dot(y)

    return theta

p = normalEquation(X, y)
print(p)
Using the small data set found here:
http://www.lauradhamilton.com/tutorial-linear-regression-with-octave
I get the coefficients [-0.34390603; 0.2124426] using the above code instead of [24.9660; 3.3058]. Could anyone help clarify where I am going wrong?
You can implement the normal equation as below:
import numpy as np
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance
theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
X_new = np.array([[0], [2]])
X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance
y_predict = X_new_b.dot(theta_best)
y_predict
This assumes X is an m by n+1 dimensional matrix where x_0 always = 1 and y is a m-dimensional vector.
import numpy as np
step1 = np.dot(X.T, X)
step2 = np.linalg.pinv(step1)
step3 = np.dot(step2, X.T)
theta = np.dot(step3, y) # if y is m x 1. If 1xm, then use y.T
Your implementation is correct. You've only swapped X and y (look closely at how the tutorial defines x and y); that's why you get a different result.
The call normalEquation(y, X) gives [ 24.96601443 3.30576144] as it should.
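Concretely, reusing the variables loaded in the question (a quick check):
# X and y as loaded by np.loadtxt in the question; just swap the arguments.
p = normalEquation(y, X)
print(p)   # roughly [24.966, 3.306]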
Here is the normal equation in one line:
theta = np.dot(np.linalg.inv(np.dot(X.T, X)), np.dot(X.T, y))
Is there a function that can be used to calculate the divergence of a vector field (as there is in Matlab)? I would expect it to exist in numpy/scipy but I cannot find it using Google.
I need to calculate div[A * grad(F)], where
F = np.array([[1,2,3,4],[5,6,7,8]]) # (2D numpy ndarray)
A = np.array([[1,2,3,4],[1,2,3,4]]) # (2D numpy ndarray)
so grad(F) is a list of 2D ndarrays
I know I can calculate the divergence like this, but I do not want to reinvent the wheel (I would also expect something more optimized). Does anyone have suggestions?
Just a hint for everybody reading this:
the functions above do not compute the divergence of a vector field; they sum the derivatives of a scalar field A:
result = dA/dx + dA/dy
in contrast to the divergence of a vector field (three-dimensional example):
result = sum dAi/dxi = dAx/dx + dAy/dy + dAz/dz
Vote down for all! It is mathematically simply wrong.
Cheers!
import numpy as np

def divergence(field):
    "return the divergence of a n-D field"
    return np.sum(np.gradient(field), axis=0)
Based on Juh_'s answer, but modified for the correct divergence of a vector field formula
def divergence(f):
    """
    Computes the divergence of the vector field f, corresponding to dFx/dx + dFy/dy + ...
    :param f: List of ndarrays, where every item of the list is one dimension of the vector field
    :return: Single ndarray of the same shape as each of the items in f, which corresponds to a scalar field
    """
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])
Matlab's documentation uses this exact formula (scroll down to Divergence of a Vector Field)
The answer of @user2818943 is good, but it can be optimized a little:
from functools import reduce  # needed on Python 3
import numpy as np

def divergence(F):
    """Compute the divergence of the n-D scalar field `F`."""
    return reduce(np.add, np.gradient(F))
Timeit:
F = np.random.rand(100,100)
timeit reduce(np.add,np.gradient(F))
# 1000 loops, best of 3: 318 us per loop
timeit np.sum(np.gradient(F),axis=0)
# 100 loops, best of 3: 2.27 ms per loop
About 7 times faster:
sum implicitly constructs a 3d array from the list of gradient fields returned by np.gradient; this is avoided by using reduce.
Now, in your question, what do you mean by div[A * grad(F)]?
1. About A * grad(F): A is a 2d array, and grad(F) is a list of 2d arrays. So I took it to mean multiplying each gradient field by A.
2. Applying divergence to the (scaled by A) gradient field is unclear. By definition, div(F) = d(F)/dx + d(F)/dy + ... I guess this is just an error of formulation.
For 1, multiplying the summed elements Bi by the same factor A can be factorized:
Sum(A*Bi) = A*Sum(Bi)
Thus, you can get this weighted gradient simply with: A*divergence(F)
If `A` is instead a list of factors, one for each dimension, then the solution would be:
def weighted_divergence(W, F):
    """
    Return the divergence of n-D array `F` with gradient weighted by `W`.

    `W` is a list of factors for each dimension of F: the gradient of `F` over
    the `i`th dimension is multiplied by `W[i]`. Each `W[i]` can be a scalar
    or an array with the same (or broadcastable) shape as `F`.
    """
    wGrad = map(np.multiply, W, np.gradient(F))
    return reduce(np.add, wGrad)

result = weighted_divergence(A, F)
What Daniel has modified is the right answer; let me explain the self-defined divergence function in more detail:
np.gradient() is defined as: np.gradient(f) = df/dx, df/dy, df/dz, ...
but we need to define divergence as: divergence(f) = dfx/dx + dfy/dy + dfz/dz + ..., i.e. sum np.gradient(f[i], axis=i) over the components of the field.
Let's test and compare with the divergence example in Matlab:
import numpy as np
import matplotlib.pyplot as plt
NY = 50
ymin = -2.
ymax = 2.
dy = (ymax -ymin )/(NY-1.)
NX = NY
xmin = -2.
xmax = 2.
dx = (xmax -xmin)/(NX-1.)
def divergence(f):
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])
y = np.array([ ymin + float(i)*dy for i in range(NY)])
x = np.array([ xmin + float(i)*dx for i in range(NX)])
x, y = np.meshgrid( x, y, indexing = 'ij', sparse = False)
Fx = np.cos(x + 2*y)
Fy = np.sin(x - 2*y)
F = [Fx, Fy]
g = divergence(F)
plt.pcolormesh(x, y, g)
plt.colorbar()
plt.savefig( 'Div' + str(NY) +'.png', format = 'png')
plt.show()
---------- UPDATED VERSION: include the differential steps ----------
Thanks to the comment from @henry: np.gradient takes a default step of 1, so the results may show some mismatch. We can provide our own differential steps.
#https://stackoverflow.com/a/47905007/5845212
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
NY = 50
ymin = -2.
ymax = 2.
dy = (ymax -ymin )/(NY-1.)
NX = NY
xmin = -2.
xmax = 2.
dx = (xmax -xmin)/(NX-1.)
def divergence(f, h):
    """
    div(F) = dFx/dx + dFy/dy + ...
    g = np.gradient(Fx,dx, axis=1)+ np.gradient(Fy,dy, axis=0) #2D
    g = np.gradient(Fx,dx, axis=2)+ np.gradient(Fy,dy, axis=1) +np.gradient(Fz,dz,axis=0) #3D
    """
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], h[i], axis=i) for i in range(num_dims)])
y = np.array([ ymin + float(i)*dy for i in range(NY)])
x = np.array([ xmin + float(i)*dx for i in range(NX)])
x, y = np.meshgrid( x, y, indexing = 'ij', sparse = False)
Fx = np.cos(x + 2*y)
Fy = np.sin(x - 2*y)
F = [Fx, Fy]
h = [dx, dy]
print('plotting')
rows = 1
cols = 2
#plt.clf()
plt.figure(figsize=(cols*3.5,rows*3.5))
plt.minorticks_on()
#g = np.gradient(Fx,dx, axis=1)+np.gradient(Fy,dy, axis=0) # equivalent to our func
g = divergence(F,h)
ax = plt.subplot(rows,cols,1,aspect='equal',title='div numerical')
#im=plt.pcolormesh(x, y, g)
im = plt.pcolormesh(x, y, g, shading='nearest', cmap=plt.cm.get_cmap('coolwarm'))
plt.quiver(x,y,Fx,Fy)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plt.colorbar(im, cax = cax,format='%.1f')
g = -np.sin(x+2*y) -2*np.cos(x-2*y)
ax = plt.subplot(rows,cols,2,aspect='equal',title='div analytical')
#im=plt.pcolormesh(x, y, g)
im = plt.pcolormesh(x, y, g, shading='nearest', cmap=plt.cm.get_cmap('coolwarm'))
plt.quiver(x,y,Fx,Fy)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plt.colorbar(im, cax = cax,format='%.1f')
plt.tight_layout()
plt.savefig( 'divergence.png', format = 'png')
plt.show()
Based on @paul_chen's answer, with some additions for Matplotlib 3.3.0 (a shading parameter needs to be passed, and the default colormap I guess has changed).
import numpy as np
import matplotlib.pyplot as plt
NY = 20; ymin = -2.; ymax = 2.
dy = (ymax -ymin )/(NY-1.)
NX = NY
xmin = -2.; xmax = 2.
dx = (xmax -xmin)/(NX-1.)
def divergence(f):
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])
y = np.array([ ymin + float(i)*dy for i in range(NY)])
x = np.array([ xmin + float(i)*dx for i in range(NX)])
x, y = np.meshgrid( x, y, indexing = 'ij', sparse = False)
Fx = np.cos(x + 2*y)
Fy = np.sin(x - 2*y)
F = [Fx, Fy]
g = divergence(F)
plt.pcolormesh(x, y, g, shading='nearest', cmap=plt.cm.get_cmap('coolwarm'))
plt.colorbar()
plt.quiver(x,y,Fx,Fy)
plt.savefig( 'Div.png', format = 'png')
Divergence is included as a built-in function in Matlab, but not in numpy. This is the sort of thing that it may perhaps be worthwhile to contribute to pylab, an effort to create a viable open-source alternative to Matlab.
http://wiki.scipy.org/PyLab
Edit: Now called http://www.scipy.org/stackspec.html
As far as I can tell, the answer is that there is no native divergence function in numpy. Therefore, the best approach is to sum the appropriate partial derivatives obtained from np.gradient yourself, i.e. compute the divergence by hand.
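For a 2-D field with components Fx and Fy this might look like the following (a minimal sketch; note which array axis corresponds to which coordinate in your data):
import numpy as np

def divergence_2d(Fx, Fy, dx=1.0, dy=1.0):
    """Sum the relevant components of np.gradient: dFx/dx + dFy/dy."""
    # Here axis=1 is taken as x and axis=0 as y, as with meshgrid's default 'xy' indexing.
    return np.gradient(Fx, dx, axis=1) + np.gradient(Fy, dy, axis=0)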
I don't think the answer by @Daniel is correct, especially when the input is in the order [Fx, Fy, Fz, ...].
A simple test case
See the MATLAB code:
a = [1 2 3;1 2 3; 1 2 3];
b = [[7 8 9] ;[1 5 8] ;[2 4 7]];
divergence(a,b)
which gives the result:
ans =
-5.0000 -2.0000 0
-1.5000 -1.0000 0
2.0000 0 0
and Daniel's solution:
def divergence(f):
    """
    Daniel's solution
    Computes the divergence of the vector field f, corresponding to dFx/dx + dFy/dy + ...
    :param f: List of ndarrays, where every item of the list is one dimension of the vector field
    :return: Single ndarray of the same shape as each of the items in f, which corresponds to a scalar field
    """
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])


if __name__ == '__main__':
    a = np.array([[1, 2, 3]] * 3)
    b = np.array([[7, 8, 9], [1, 5, 8], [2, 4, 7]])
    div = divergence([a, b])
    print(div)
which gives:
[[1. 1. 1. ]
[4. 3.5 3. ]
[2. 2.5 3. ]]
Explanation
The mistake in Daniel's solution is that in Numpy the x axis is the last axis, not the first. When using np.gradient(x, axis=0) on a 2d array, Numpy actually returns the gradient in the y direction.
My solution
Here is my solution based on Daniel's answer.
def divergence(f):
    """
    Computes the divergence of the vector field f, corresponding to dFx/dx + dFy/dy + ...
    :param f: List of ndarrays, where every item of the list is one dimension of the vector field
    :return: Single ndarray of the same shape as each of the items in f, which corresponds to a scalar field
    """
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[num_dims - i - 1], axis=i) for i in range(num_dims)])
which gives the same result as MATLAB divergence in my test case.
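For completeness, re-running the earlier test case with this corrected function (a quick sketch, redefining the arrays a and b from above):
import numpy as np

a = np.array([[1, 2, 3]] * 3)
b = np.array([[7, 8, 9], [1, 5, 8], [2, 4, 7]])
print(divergence([a, b]))
# [[-5.  -2.   0. ]
#  [-1.5 -1.   0. ]
#  [ 2.   0.   0. ]]   -- matches MATLAB's divergence(a, b)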
Somehow the previous attempts to compute the divergence are wrong! Let me show you:
We have the following vector field F:
Fx = cos(x + 2y)
Fy = sin(x - 2y)
If we compute the divergence (using Mathematica):
Div[{Cos[x + 2*y], Sin[x - 2*y]}, {x, y}]
we get:
-2 Cos[x - 2 y] - Sin[x + 2 y]
which has the following maximum value on the sample grid x, y in [-2, 2]:
N[Max[Table[-2 Cos[x - 2 y] - Sin[x + 2 y], {x, -2, 2 }, {y, -2, 2}]]] = 2.938
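The same check can be done in numpy (a small sketch, evaluating the analytical divergence on the same integer grid as the Mathematica Table above):
import numpy as np

x, y = np.meshgrid(np.arange(-2, 3), np.arange(-2, 3))
div_analytic = -2*np.cos(x - 2*y) - np.sin(x + 2*y)
print(div_analytic.max())   # about 2.94 on this coarse grid, matching the Mathematica value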
Using the divergence equation given here:
def divergence(f):
    num_dims = len(f)
    return np.ufunc.reduce(np.add, [np.gradient(f[i], axis=i) for i in range(num_dims)])
we get a maximum value of about 0.625
Correct divergence function: Compute divergence with python
I can generate Gaussian data with the random.gauss(mu, sigma) function, but how can I generate a 2D Gaussian? Is there any function like that?
If you can use numpy, there is numpy.random.multivariate_normal(mean, cov[, size]).
For example, to get 10,000 2D samples:
np.random.multivariate_normal(mean, cov, 10000)
where mean.shape==(2,) and cov.shape==(2,2).
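For instance, a quick sketch drawing correlated 2D samples (the mean and covariance values here are just illustrative):
import numpy as np

mean = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.5],
                [0.5, 2.0]])          # must be symmetric positive semi-definite
samples = np.random.multivariate_normal(mean, cov, 10000)
print(samples.shape)                  # (10000, 2)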
I'd like to add an approximation using exponential functions. This directly generates a 2d matrix which contains a movable, symmetric 2d gaussian.
I should note that I found this code on the scipy mailing list archives and modified it a little.
import numpy as np
def makeGaussian(size, fwhm=3, center=None):
    """ Make a square gaussian kernel.

    size is the length of a side of the square
    fwhm is full-width-half-maximum, which
    can be thought of as an effective radius.
    """

    x = np.arange(0, size, 1, float)
    y = x[:, np.newaxis]

    if center is None:
        x0 = y0 = size // 2
    else:
        x0 = center[0]
        y0 = center[1]

    return np.exp(-4*np.log(2) * ((x-x0)**2 + (y-y0)**2) / fwhm**2)
For reference and enhancements, it is hosted as a gist here. Pull requests welcome!
Since the standard 2D Gaussian distribution is just the product of two 1D Gaussian distributions, if there is no correlation between the two axes (i.e. the covariance matrix is diagonal), just call random.gauss twice.
import random

def gauss_2d(mu, sigma):
    x = random.gauss(mu, sigma)
    y = random.gauss(mu, sigma)
    return (x, y)
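A quick usage example (the mu and sigma values are just illustrative):
points = [gauss_2d(0.0, 1.0) for _ in range(1000)]   # 1000 independent 2D samples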
import numpy as np
# define normalized 2D gaussian
def gaus2d(x=0, y=0, mx=0, my=0, sx=1, sy=1):
    return 1. / (2. * np.pi * sx * sy) * np.exp(-((x - mx)**2. / (2. * sx**2.) + (y - my)**2. / (2. * sy**2.)))
x = np.linspace(-5, 5)
y = np.linspace(-5, 5)
x, y = np.meshgrid(x, y) # get 2D variables instead of 1D
z = gaus2d(x, y)
Straightforward implementation and example of the 2D Gaussian function. Here sx and sy are the spreads in x and y direction, mx and my are the center coordinates.
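One possible way to visualize the resulting z array computed above (a sketch; matplotlib is an extra assumption here):
import matplotlib.pyplot as plt

plt.pcolormesh(x, y, z, shading='auto')   # x, y, z as computed above
plt.colorbar()
plt.show()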
Numpy has a function to do this. It is documented here. In addition to the method proposed above, it allows drawing samples with arbitrary covariance.
Here is a small example, assuming ipython -pylab is started:
samples = multivariate_normal([-0.5, -0.5], [[1, 0],[0, 1]], 1000)
plot(samples[:, 0], samples[:, 1], '.')
samples = multivariate_normal([0.5, 0.5], [[0.1, 0.5],[0.5, 0.6]], 1000)
plot(samples[:, 0], samples[:, 1], '.')
In case someone finds this thread and is looking for something a little more versatile (like I did), I have modified the code from @giessel. The code below allows for asymmetry and rotation.
import numpy as np
def makeGaussian2(x_center=0, y_center=0, theta=0, sigma_x=10, sigma_y=10, x_size=640, y_size=480):
    # x_center and y_center will be the center of the gaussian, theta will be the rotation angle
    # sigma_x and sigma_y will be the stdevs in the x and y axis before rotation
    # x_size and y_size give the size of the frame

    theta = 2*np.pi*theta/360
    x = np.arange(0, x_size, 1, float)
    y = np.arange(0, y_size, 1, float)
    y = y[:, np.newaxis]
    sx = sigma_x
    sy = sigma_y
    x0 = x_center
    y0 = y_center

    # rotation
    a = np.cos(theta)*x - np.sin(theta)*y
    b = np.sin(theta)*x + np.cos(theta)*y
    a0 = np.cos(theta)*x0 - np.sin(theta)*y0
    b0 = np.sin(theta)*x0 + np.cos(theta)*y0

    return np.exp(-(((a-a0)**2)/(2*(sx**2)) + ((b-b0)**2)/(2*(sy**2))))
We can simply use the numpy method np.random.normal to generate 2D Gaussian samples (independent in the two axes, with the same sigma for both).
The sample code is np.random.normal(mean, sigma, (num_samples, 2)).
A sample run with mean = 0 and sigma = 20 is shown below:
np.random.normal(0, 20, (10,2))
>>array([[ 11.62158316, 3.30702215],
[-18.49936277, -11.23592946],
[ -7.54555371, 14.42238838],
[-14.61531423, -9.2881661 ],
[-30.36890026, -6.2562164 ],
[-27.77763286, -23.56723819],
[-18.18876597, 41.83504042],
[-23.62068377, 21.10615509],
[ 15.48830184, -15.42140269],
[ 19.91510876, 26.88563983]])
Hence we got 10 samples in a 2d array with mean = 0 and sigma = 20.