I have tried to use a toy linear regression problem to implement optimisation of the MSE function using the gradient descent algorithm.
import numpy as np
# Data points
x = np.array([1, 2, 3, 4])
y = np.array([1, 1, 2, 2])
# MSE function
f = lambda a, b: 1 / len(x) * np.sum(np.power(y - (a * x + b), 2))
# Gradient
def grad_f(v_coefficients):
    a = v_coefficients[0, 0]
    b = v_coefficients[1, 0]
    return np.array([1 / len(x) * np.sum(2 * (y - (a * x + b)) * x),
                     1 / len(x) * np.sum(2 * (y - (a * x + b)))]).reshape(2, 1)
# Gradient Descent with epsilon as tol vector and alpha as the step/learning rate
def gradient_decent(v_prev):
    tol = 10 ** -3
    epsilon = np.array([tol * np.ones([2, 1], int)])
    alpha = 0.2
    v_next = v_prev - alpha * grad_f(v_prev)
    if (np.abs(v_next - v_prev) <= epsilon).all():
        return v_next
    else:
        gradient_decent(v_next)
# v_0 is the initial guess
v_0 = np.array([[1], [1]])
gradient_decent(v_0)
I have tried different alpha values, but the code never converges (infinite recursion). It seems the issue is with the stopping condition of the recursion, but after a few runs v_next and v_prev bounce between -infinity and infinity.
It's great that you are learning machine learning (^_^) by implementing some base algorithms yourself. Regarding your question, there are two problems in your code. The first one is mathematical: the sign in
def grad_f(v_coefficients):
    a = v_coefficients[0, 0]
    b = v_coefficients[1, 0]
    return np.array([1 / len(x) * np.sum(2 * (y - (a * x + b)) * x),
                     1 / len(x) * np.sum(2 * (y - (a * x + b)))]).reshape(2, 1)
should be
return -np.array(...)
since for MSE(a, b) = (1/N) * sum((y - (a*x + b))^2), the chain rule brings out a factor of -1 from the inner term, giving dMSE/da = -(2/N) * sum((y - (a*x + b)) * x) and dMSE/db = -(2/N) * sum(y - (a*x + b)).
The second one is a programming issue: this kind of code will not return a result in Python:
def add(x):
    new_x = x + 1
    if new_x > 10:
        return new_x
    else:
        add(new_x)
You must use return in both branches of the if statement, so it should be:
def add(x):
    new_x = x + 1
    if new_x > 10:
        return new_x
    else:
        return add(new_x)
There is also a minor issue with the alpha coefficient: for these particular data points, alpha = 0.2 is too big for the algorithm to converge, so you need a smaller alpha. I also slightly refactored your initial code using the numpy broadcasting convention (https://numpy.org/doc/stable/user/basics.broadcasting.html) to get the following result:
import numpy as np
# Data points
x = np.array([1, 2, 3, 4])
y = np.array([1, 1, 2, 2])
# MSE function
f = lambda a, b: np.mean(np.power(y - (a * x + b), 2))
# Gradient
def grad_f(v_coefficients):
    a = v_coefficients[0, 0]
    b = v_coefficients[1, 0]
    return -np.array([np.mean(2 * (y - (a * x + b)) * x),
                      np.mean(2 * (y - (a * x + b)))]).reshape(2, 1)
# Gradient Descent with epsilon as tol vector and alpha as the step/learning rate
def gradient_decent(v_prev):
    tol = 1e-3
    # epsilon = np.array([tol * np.ones([2, 1], int)]) is not needed, due to numpy broadcasting rules
    alpha = 0.1
    v_next = v_prev - alpha * grad_f(v_prev)
    if (np.abs(v_next - v_prev) <= tol).all():
        return v_next
    else:
        return gradient_decent(v_next)
# v_0 is the initial guess
v_0 = np.array([[1], [1]])
gradient_decent(v_0)
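As a sanity check (my addition, not part of the original answer), you can compare the result with the closed-form least-squares fit, which for these data points is a = 0.4, b = 0.5:

import numpy as np
x = np.array([1, 2, 3, 4])
y = np.array([1, 1, 2, 2])
# closed-form least-squares fit for comparison
a_fit, b_fit = np.polyfit(x, y, 1)
print(a_fit, b_fit)  # 0.4 0.5 -- gradient descent should converge close to these values

Also worth knowing: each recursive call adds a stack frame, so a very small alpha or tol can hit Python's default recursion limit (around 1000 frames); an iterative while loop avoids that.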
I am trying to numerically compute in python integrals of the form f(x, t) = integral of g(x - y, t - s) * h(y, s) dy ds, i.e. a 2D convolution of g and h.
To that aim, I first define two discrete sets of x and t values, let's say
x_samples = np.linspace(-10, 10, 100)
t_samples = np.linspace(0, 1, 100)
dx = x_samples[1]-x_samples[0]
dt = t_samples[1]-t_samples[0]
I declare symbolically that the function g(x, t) is equal to 0 if t < 0, and discretise the two functions to integrate as
discretG = g(x_samples[None, :], t_samples[:, None])
discretH = h(x_samples[None, :], t_samples[:, None])
I have then tried to run
discretF = signal.fftconvolve(discretG, discretH, mode='full') * dx * dt
Yet, on basic test functions such as
g = lambda x, t: np.exp(-np.abs(x)) + t
h = lambda x, t: np.exp(-np.abs(x)) - t
I don't find agreement between the numerical integration and the convolution using scipy. I would like a fairly fast way of computing these integrals, especially when I only have access to discretised representations of the functions rather than their symbolic form.
According to your code, I assume you want to conduct a convolution of two functions g and h that are non-zero only on [a, b] x [m, n].
Of course you can use signal.fftconvolve to compute the convolution. The key is to not forget the transformation between the indices inside discretF and the real coordinates. Here I use interpolation to compute the result for an arbitrary (x, t).
import numpy as np
from scipy import signal, interpolate
a = -1
b = 2
m = -10
n = 15
samples_num = 1000
x_eval_index = 200
t_eval_index = 300
x_samples = np.linspace(a, b, samples_num)
t_samples = np.linspace(m, n, samples_num)
dx = x_samples[1]-x_samples[0]
dt = t_samples[1]-t_samples[0]
g = lambda x,t: np.exp(-np.abs(x))+t
h = lambda x,t: np.exp(-np.abs(x))-t
discretG = g(x_samples[None, :], t_samples[:, None])
discretH = h(x_samples[None, :], t_samples[:, None])
discretF = signal.fftconvolve(discretG, discretH, mode='full')
def compute_f(x, t):
    if x < 2*a or x > 2*b or t < 2*m or t > 2*n:
        return 0
    # use interpolation to get data at the new point
    x_samples_for_conv = np.linspace(2*a, 2*b, 2*samples_num-1)
    t_samples_for_conv = np.linspace(2*m, 2*n, 2*samples_num-1)
    f = interpolate.RectBivariateSpline(x_samples_for_conv, t_samples_for_conv, discretF.T)
    return f(x, t)[0, 0] * dx * dt
Note: you can extend my code to compute the convolution on a meshgrid defined by x and t, where x and t are 1D arrays (in my code, x and t are single floats).
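For instance (my sketch, reusing discretF, dx and dt from above), RectBivariateSpline can evaluate a whole grid of query points at once:

# my sketch: evaluate the interpolated convolution on a grid of points at once
x_samples_for_conv = np.linspace(2*a, 2*b, 2*samples_num - 1)
t_samples_for_conv = np.linspace(2*m, 2*n, 2*samples_num - 1)
spline = interpolate.RectBivariateSpline(x_samples_for_conv, t_samples_for_conv, discretF.T)
xs = np.linspace(2*a, 2*b, 50)  # query points along x
ts = np.linspace(2*m, 2*n, 50)  # query points along t
F_grid = spline(xs, ts) * dx * dt  # shape (50, 50), one value per (x, t) pair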
You can use the following code to explore the "agreement" between "the numerical integration" and "the convolution using scipy" (and also the correctness of the compute_f function above):
# how the convolution works
# for 1D: f[i] = sum_j g[j] * h[i-j]
sum = 0
for y_idx, y in enumerate(x_samples):
    for s_idx, s in enumerate(t_samples):
        if x_eval_index - y_idx < 0 or t_eval_index - s_idx < 0:
            continue
        if x_eval_index - y_idx >= len(x_samples) or t_eval_index - s_idx >= len(t_samples):
            continue
        sum += discretG[t_eval_index - s_idx, x_eval_index - y_idx] * discretH[s_idx, y_idx] * dx * dt
print("Do discrete convolution manually, I get: %f" % sum)
print("Do discrete convolution using scipy, I get: %f" % (discretF[t_eval_index, x_eval_index] * dx * dt))
# numerical integral
# the x_val and t_val
# take 1D convolution as example, function defined on [a, b], and index of your samples range from [0, samples_num-1]
# after convolution, function defined on [2a, 2b], index of your samples range from [0, 2*samples_num-2]
dx_prime = (b-a) / (samples_num-1)
dt_prime = (n-m) / (samples_num-1)
x_eval = 2*a + x_eval_index * dx_prime
t_eval = 2*m + t_eval_index * dt_prime
sum = 0
for y in x_samples:
    for s in t_samples:
        if x_eval - y < a or x_eval - y > b:
            continue
        if t_eval - s < m or t_eval - s > n:
            continue
        if y < a or y >= b:
            continue
        if s < m or s >= n:
            continue
        sum += g(x_eval - y, t_eval - s) * h(y, s) * dx * dt
print("Do numerical integration, I get: %f" % sum)
print("The convolution result of 'compute_f' is: %f" % compute_f(x_eval, t_eval))
Which gives:
Do discrete convolution manually, I get: -154.771369
Do discrete convolution using scipy, I get: -154.771369
Do numerical integration, I get: -154.771369
The convolution result of 'compute_f' is: -154.771369
I want to use the Riemann method to numerically evaluate a partial integral in Python. I would like to integrate with respect to x and obtain a function of t, but I don't know how to do this.
My function: f(x) = cos(2*pi*x*t); its integral over [-1/2, 1/2] should give f(t) = sin(pi*t)/(pi*t).
def riemann(a, b, dx):
    if a > b:
        a, b = b, a
    n = int((b - a) / dx)
    s = 0.0
    x = a
    for i in range(n):
        f_i[k] = np.cos(2*np.pi*x)
        s += f_i[k]
        x += dx
    f_i = s * dx
    return f_i, t
There's nothing too horrible about your approach. The result does come out close to the true value:
import numpy as np
def riemann(a, b, dx):
    if a > b:
        a, b = b, a
    n = int((b - a) / dx)
    s = 0.0
    x = a
    for i in range(n):
        s += np.cos(2 * np.pi * x)
        x += dx
    return s * dx
print(riemann(0.0, 0.25, 1.0e-3))
print(1 / (2 * np.pi))
0.15965441949277526
0.15915494309189535
Some remarks:
You wouldn't really call this the Riemann method; it's the rectangle rule of numerical integration (here a left-endpoint Riemann sum, since x starts at a).
Pay a little more attention to the boundaries of your domain. With n = int((b - a) / dx), the samples cover [a, a + n*dx), which can fall short of b when dx does not divide b - a exactly.
If you're looking for speed, it's best to collect all your x values (perhaps with linspace), evaluate the function once on all the points, and then np.sum the values, as in the sketch below. (Loops in Python are slow.)
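A minimal vectorised sketch of that suggestion (my addition, keeping the left-endpoint sampling of the original code):

import numpy as np

def riemann_vectorized(a, b, dx):
    n = int((b - a) / dx)
    x = a + dx * np.arange(n)  # the same left endpoints the loop visits
    return np.sum(np.cos(2 * np.pi * x)) * dx

print(riemann_vectorized(0.0, 0.25, 1.0e-3))  # ~0.159654, matches the loop version
print(1 / (2 * np.pi))                        # 0.159155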
General:
I am using maximum entropy to find a distribution over positive integer vectors. I can estimate the mean and variance, and I have three equations; I am trying to find a and b.
The equations:
integral(exp(a*x^2 + b*x + c), x from 0 to infinity) - 1
integral(x * exp(a*x^2 + b*x + c), x from 0 to infinity) - mean
integral(x^2 * exp(a*x^2 + b*x + c), x from 0 to infinity) - mean^2 - var
(all integrals over [0, ∞))
The problem:
I am trying to use a numerical solver: I used fsolve from scipy.optimize, with sympy to evaluate the integrals, but I guess I am missing some knowledge.
My code:
import numpy as np
import sympy as sym
from scipy.optimize import fsolve

def myFunction(x, *data):
    y = sym.symbols('y')
    m, v = data
    F = [0]*3
    x[0] = -abs(x[0])
    print(x)
    F[0] = (sym.integrate(sym.exp(x[0] * y ** 2 + x[1] * y + x[2]), (y, 0, sym.oo)) - 1).evalf()
    F[1] = (sym.integrate(y * sym.exp(x[0] * y ** 2 + x[1] * y + x[2]), (y, 0, sym.oo)) - m).evalf()
    F[2] = (sym.integrate((y**2) * sym.exp(x[0] * y ** 2 + x[1] * y + x[2]), (y, 0, sym.oo)) - v - m).evalf()
    print(F)
    return F
data = (10,3.5) # mean and var for example
xGuess = [1, 1, 1]
z = fsolve(myFunction,xGuess,args = data)
print(z)
My results are not that accurate. Is there a better way to solve it? The residuals I get are:
integral(exp(a*x^2 + b*x + c)) - 1 = 5.67659292676884
integral(x * exp(a*x^2 + b*x + c)) - mean = -1.32123173796713
integral(x^2 * exp(a*x^2 + b*x + c)) - mean^2 - var = -2.20825624606312
Thanks
I have rewritten the problem replacing sympy with numpy and lambdas (inline functions).
Also note that in your problem statement the third equation subtracts mean^2, but in your code you only subtract mean.
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import quad
def myFunction(x, data):
    m, v = data
    F = np.zeros(3)  # use a numpy array
    # use scipy.integrate.quad for integration of lambda functions
    # quad output is (result, error), so we just select the result value at the end
    F[0] = quad(lambda y: np.exp(x[0] * y ** 2 + x[1] * y + x[2]), 0, np.inf)[0] - 1
    F[1] = quad(lambda y: y * np.exp(x[0] * y ** 2 + x[1] * y + x[2]), 0, np.inf)[0] - m
    F[2] = quad(lambda y: (y**2) * np.exp(x[0] * y ** 2 + x[1] * y + x[2]), 0, np.inf)[0] - v - m**2
    # minimize the squared error
    return np.sum(F**2)

data = (10, 3.5)  # mean and var for example
xGuess = [-1, 1, 1]
z = minimize(lambda x: myFunction(x, data), x0=xGuess,
             bounds=((None, 0), (None, None), (None, None)))  # bound the first coefficient to be negative
print(z)
# x: array([-0.99899311, 2.18819689, 1.85313181])
Does this seem more reasonable?
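If you want a quick measure of the fit quality (my addition), the OptimizeResult returned by minimize already carries the summed squared residual:

# z.fun is myFunction evaluated at the solution, i.e. the summed squared residual
print(z.fun)      # close to 0 means all three equations are nearly satisfied
print(z.success)  # whether the optimizer reports convergence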
Starting with:
a,b=np.ogrid[0:n+1:1,0:n+1:1]
B=np.exp(1j*(np.pi/3)*np.abs(a-b))
B[z,b] = np.exp(1j * (np.pi/3) * np.abs(z - b +x))
B[a,z] = np.exp(1j * (np.pi/3) * np.abs(a - z +x))
B[diag,diag]=1-1j/np.sqrt(3)
this produces an (n+1)*(n+1) grid that acts as a matrix.
n is just a number chosen to set the indices, i.e. an a*b matrix where a and b both go up to n.
z is a constant I choose; the row and column at index z are replaced using the B[z,b] and B[a,z] formulas (essentially the same formula, but with a small number added inside np.abs(a-b)).
The diagonal of the matrix is given by the bottom line:
B[diag,diag]=1-1j/np.sqrt(3)
where,
diag=np.arange(n+1)
I would like to repeat this code 50 times, where the only thing that changes is x, so I end up with 50 versions of the B array. x is a randomly generated number between -0.8 and 0.8 each time:
x=np.random.uniform(-0.8,0.8)
I want to generate 50 versions of B with random values of x each time and take a geometric average of the 50 versions of B using the definition:
def geo_mean(y):
    y = np.asarray(y)
    return np.prod(y ** (1.0 / y.shape[0]), axis=-1)
I have tried to set B as a function of some index and then use a for _ in range(): loop, but this doesn't work. Aside from copying and pasting the block 50 times and denoting each one as B1, B2, B3, etc., I can't think of another way of working this out.
EDIT:
I'm now using part of a given solution in order to show clearly what I am looking for:
#A matrix with 50 random values between -0.8 and 0.8 to be used in the loop
X=np.random.uniform(-0.8,0.8, (50,1))
#constructing the base array before modification by random x values in position z
a,b = np.ogrid[0:n+1:1,0:n+1:1]
B = np.exp(1j * ( np.pi / 3) * np.abs( a - b ))
B[diag,diag] = 1 - 1j / np.sqrt(3)
#list to store all modified arrays
randomarrays = []
for i in range(0, 50):
    #copy array and modify it
    Bnew = np.copy(B)
    Bnew[z, b] = np.exp(1j * (np.pi / 3) * np.abs(z - b + X[i]))
    Bnew[a, z] = np.exp(1j * (np.pi / 3) * np.abs(a - z + X[i]))
    randomarrays.append(Bnew)
Bstack = np.dstack(randomarrays)
#calculate the geometric mean value along the axis that was the row in 2D arrays
B0 = geo_mean(Bstack)
From this example, every iteration of i uses the same value of X; I can't seem to get each new loop iteration to use the next value in the matrix X. I know the ++ operator does not exist in Python, but I don't know the Python equivalent. I want each loop iteration to use the next value of X, so that I can dstack all the matrices at the end and find a geo_mean for each element in the stacked matrices.
One pedestrian way would be to use a list comprehension or generator expression:
>>> def f(n, z, x):
...     diag = np.arange(n+1)
...     a, b = np.ogrid[0:n+1:1, 0:n+1:1]
...     B = np.exp(1j*(np.pi/3)*np.abs(a-b))
...     B[z, b] = np.exp(1j * (np.pi/3) * np.abs(z - b + x))
...     B[a, z] = np.exp(1j * (np.pi/3) * np.abs(a - z + x))
...     B[diag, diag] = 1 - 1j/np.sqrt(3)
...     return B
...
>>> X = np.random.uniform(-0.8, 0.8, (10,))
>>> np.prod((*map(np.power, map(f, 10*(4,), 10*(2,), X), 10 * (1/10,)),), axis=0)
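Spelled out (my rewording of the same computation), that one-liner is just the elementwise product of the ten matrices f(4, 2, x), each raised to the power 1/10:

# equivalent, more readable version using a list comprehension
Bs = [f(4, 2, x) ** (1 / 10) for x in X]
B_geo = np.prod(Bs, axis=0)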
But in your concrete example we can do much better than that:
using the identity exp(a) * exp(b) = exp(a + b), we can convert the geometric mean after exponentiation into an arithmetic mean before exponentiation. A bit of care is required because of the multivaluedness of the complex n-th root which occurs in the geometric mean. In the code below we normalize the occurring angles to the range [-pi, pi], so as to always hit the same branch as the n-th root.
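To see the branch issue in isolation (my toy example, not from the original post): the principal square root of a product need not equal the product of the principal square roots once the summed angle wraps past pi:

import numpy as np
z1 = z2 = np.exp(1j * np.pi * 0.9)
# angles 0.9*pi + 0.9*pi = 1.8*pi wrap around to -0.2*pi, so the branches differ
print(np.isclose((z1 * z2) ** 0.5, z1 ** 0.5 * z2 ** 0.5))  # False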
Please also note that the geo_mean function you provide is definitely wrong. It fails the basic sanity check that taking the average of several copies of the same thing should return that same thing. I've provided a better version below. It is still not perfect, but I think there actually is no perfect solution, because of the nonuniqueness of the complex root.
Because of this I recommend taking the average before exponentiating. As long as your random spread is less than pi, this allows a well-defined averaging procedure with an average that is actually close to the samples.
import numpy as np
def f(n, z, X, do_it_pps_way=True):
    X = np.asanyarray(X)
    diag = np.arange(n+1)
    a, b = np.ogrid[0:n+1:1, 0:n+1:1]
    B = np.exp(1j*(np.pi/3)*np.abs(a-b))
    X = X.reshape(-1, 1, 1)
    if do_it_pps_way:
        zbx = np.mean(np.abs(z-b+X), axis=0)
        azx = np.mean(np.abs(a-z+X), axis=0)
    else:
        zbx = np.mean((np.abs(z-b+X)+3) % 6 - 3, axis=0)
        azx = np.mean((np.abs(a-z+X)+3) % 6 - 3, axis=0)
    B[z, b] = np.exp(1j * (np.pi/3) * zbx)
    B[a, z] = np.exp(1j * (np.pi/3) * azx)
    B[diag, diag] = 1 - 1j/np.sqrt(3)
    return B

def geo_mean(y):
    y = np.asarray(y)
    dim = len(y.shape)
    y = np.atleast_2d(y)
    v = np.prod(y, axis=0) ** (1.0 / y.shape[0])
    return v[0] if dim == 1 else v

def geo_mean_correct(y):
    y = np.asarray(y)
    return np.prod(y ** (1.0 / y.shape[0]), axis=0)
# demo that orig geo_mean is wrong
B = np.exp(1j * np.random.random((5, 5)))
# the mean of four times the same thing should be the same thing:
if not np.allclose(B, geo_mean([B, B, B, B])):
    print('geo_mean failed')
if np.allclose(B, geo_mean_correct([B, B, B, B])):
    print('but geo_mean_correct works')
n, z, m = 10, 3, 50
X = np.random.uniform(-0.8, 0.8, (m,))
B0 = f(n, z, X, do_it_pps_way=False)
B1 = np.prod((*map(np.power, map(f, m*(n,), m*(z,), X), m * (1/m,)),), axis=0)
B2 = geo_mean_correct([f(n, z, x) for x in X])
# This is the recommended way:
B_recommended = f(n, z, X, do_it_pps_way=True)
print()
print(np.allclose(B1, B0))
print(np.allclose(B2, B1))
I think you should rely more on numpy functionality when approaching your problem. I'm not a numpy expert myself, so there is surely room for improvement:
from scipy.stats import gmean
n = 2
z = 1
a = np.arange(n + 1).reshape(1, n + 1)
#constructing the base array before modification by random x values in position z
B = np.exp(1j * (np.pi / 3) * np.abs(a - a.T))
B[a, a] = 1 - 1j / np.sqrt(3)
#list to store all modified arrays
random_arrays = []
for _ in range(50):
    #generate a random x value
    x = np.random.uniform(-0.8, 0.8)
    #copy the array and modify it
    B_new = np.copy(B)
    B_new[z, a] = np.exp(1j * (np.pi / 3) * np.abs(z - a + x))
    B_new[a, z] = np.exp(1j * (np.pi / 3) * np.abs(a - z + x))
    random_arrays.append(B_new)
#store all B arrays as a 3D array, samples along axis 0
B_stack = np.stack(random_arrays)
#elementwise geometric mean across the 50 stacked arrays
geom_mean_B = gmean(B_stack, axis=0)
This uses the geometric mean function from the scipy.stats module for a vectorised approach to the calculation.
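A quick shape check (my addition, assuming n = 2 and the 50 samples as above) confirms that the averaging runs across the stacked copies rather than within a single matrix:

print(B_stack.shape)      # (50, 3, 3): 50 modified copies of the (n+1) x (n+1) array
print(geom_mean_B.shape)  # (3, 3): one geometric mean per matrix element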
I try to implement the Fourier series function according to the following formulas:

S_f(x) = a_0 / 2 + sum_{n=1}^{N} [ a_n * cos(n * pi * x / L) + b_n * sin(n * pi * x / L) ]

...where...

a_n = (1 / L) * integral from -L to L of f(x) * cos(n * pi * x / L) dx

...and...

b_n = (1 / L) * integral from -L to L of f(x) * sin(n * pi * x / L) dx
Here is my approach to the problem:
import numpy as np
import pylab as py
# Define "x" range.
x = np.linspace(0, 10, 1000)
# Define "T", i.e functions' period.
T = 2
L = T / 2
# "f(x)" function definition.
def f(x):
    return np.sin(np.pi * 1000 * x)

# "a" coefficient calculation.
def a(n, L, accuracy = 1000):
    a, b = -L, L
    dx = (b - a) / accuracy
    integration = 0
    for i in np.linspace(a, b, accuracy):
        x = a + i * dx
        integration += f(x) * np.cos((n * np.pi * x) / L)
        integration *= dx
    return (1 / L) * integration

# "b" coefficient calculation.
def b(n, L, accuracy = 1000):
    a, b = -L, L
    dx = (b - a) / accuracy
    integration = 0
    for i in np.linspace(a, b, accuracy):
        x = a + i * dx
        integration += f(x) * np.sin((n * np.pi * x) / L)
        integration *= dx
    return (1 / L) * integration

# Fourier series.
def Sf(x, L, n = 10):
    a0 = a(0, L)
    sum = 0
    for i in np.arange(1, n + 1):
        sum += ((a(i, L) * np.cos(n * np.pi * x)) + (b(i, L) * np.sin(n * np.pi * x)))
    return (a0 / 2) + sum
# x axis.
py.plot(x, np.zeros(np.size(x)), color = 'black')
# y axis.
py.plot(np.zeros(np.size(x)), x, color = 'black')
# Original signal.
py.plot(x, f(x), linewidth = 1.5, label = 'Signal')
# Approximation signal (Fourier series coefficients).
py.plot(x, Sf(x, L), color = 'red', linewidth = 1.5, label = 'Fourier series')
# Specify x and y axes limits.
py.xlim([0, 10])
py.ylim([-2, 2])
py.legend(loc = 'upper right', fontsize = '10')
py.show()
...and here is what I get after plotting the result:
I've read How to calculate a Fourier series in Numpy? and I've implemented that approach already. It works great, but it uses the exponential form, whereas I want to focus on the trigonometric functions and the rectangular method for calculating the integrations of the a_n and b_n coefficients.
Thank you in advance.
UPDATE (SOLVED)
Finally, here is a working example of the code. However, I'll spend more time on it, so if there is anything that can be improved, it will be done.
from __future__ import division
import numpy as np
import pylab as py
# Define "x" range.
x = np.linspace(0, 10, 1000)
# Define "T", i.e functions' period.
T = 2
L = T / 2
# "f(x)" function definition.
def f(x):
    return np.sin((np.pi) * x) + np.sin((2 * np.pi) * x) + np.sin((5 * np.pi) * x)

# "a" coefficient calculation.
def a(n, L, accuracy = 1000):
    a, b = -L, L
    dx = (b - a) / accuracy
    integration = 0
    for x in np.linspace(a, b, accuracy):
        integration += f(x) * np.cos((n * np.pi * x) / L)
    integration *= dx
    return (1 / L) * integration

# "b" coefficient calculation.
def b(n, L, accuracy = 1000):
    a, b = -L, L
    dx = (b - a) / accuracy
    integration = 0
    for x in np.linspace(a, b, accuracy):
        integration += f(x) * np.sin((n * np.pi * x) / L)
    integration *= dx
    return (1 / L) * integration

# Fourier series.
def Sf(x, L, n = 10):
    a0 = a(0, L)
    sum = np.zeros(np.size(x))
    for i in np.arange(1, n + 1):
        sum += ((a(i, L) * np.cos((i * np.pi * x) / L)) + (b(i, L) * np.sin((i * np.pi * x) / L)))
    return (a0 / 2) + sum
# x axis.
py.plot(x, np.zeros(np.size(x)), color = 'black')
# y axis.
py.plot(np.zeros(np.size(x)), x, color = 'black')
# Original signal.
py.plot(x, f(x), linewidth = 1.5, label = 'Signal')
# Approximation signal (Fourier series coefficients).
py.plot(x, Sf(x, L), '.', color = 'red', linewidth = 1.5, label = 'Fourier series')
# Specify x and y axes limits.
py.xlim([0, 5])
py.ylim([-2.2, 2.2])
py.legend(loc = 'upper right', fontsize = '10')
py.show()
Consider developing your code in a different way, block by block. You should be surprised if code like this works on the first try. Debugging is one option, as @tom10 said. The other option is rapid prototyping of the code, step by step, in the interpreter, even better with ipython.
Above, you are expecting that b_1000 is non-zero, since the input f(x) is a sinusoid with a 1000 in it. You're also expecting that all other coefficients are zero, right?
Then you should focus only on the function b(n, L, accuracy = 1000). Looking at it, three things are going wrong. Here are some hints.
the multiplication by dx is inside the loop. Are you sure about that?
in the loop, i is supposed to be an integer, right? Is it really an integer? By prototyping or debugging you would discover this.
be careful whenever you write (1/L) or a similar expression. If you're using python2.7, you're likely doing it wrong. If not, at least use a from __future__ import division at the top of your source. Read this PEP if you don't know what I am talking about.
If you address these 3 points, b() will work; a minimal sketch with all three fixes applied is below. Then think of a() in a similar fashion.
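As an illustration, here is a minimal sketch (my addition, assuming Python 3 semantics and the f defined in the question) of b() with the three points above addressed:

import numpy as np

def b(n, L, accuracy=1000):
    # sample the x values directly instead of deriving them from a float loop index (point 2)
    x = np.linspace(-L, L, accuracy)
    dx = (2 * L) / accuracy
    # multiply by dx once, after the summation (point 1); 1 / L is true division in Python 3 (point 3)
    return (1 / L) * np.sum(f(x) * np.sin((n * np.pi * x) / L)) * dx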