Fitting arbitrary Gaussian functions, massive memory consumption in Python

I'm trying to (in python) fit a series of an arbitrary number of gaussian functions (determined by a simple algorithm still being improved) to a data set. For my current sample data set, I have 174 gaussian functions. I have a procedure for doing the fit, but it's basically complicated guess-and-check, and consumes all 4GB of memory available.
Is there any way to accomplish this using something in scipy or numpy?
Here is what I'm trying to use, where wavelength[] is the list of x-coordinates and cflux[] is the list of y-coordinates:
#Pick a gaussian
for repeat in range(0,2):
    for f in range(0,len(centroid)):
        #Iterate over every other gaussian
        for i in range(0,len(centroid)):
            if i != f:
                #For every wavelength,
                for w in wavelength:
                    #Append the value of each to a list, called others
                    others.append(height[i]*math.exp(-(w-centroid[i])**2/(2*width[i]**2)))
        #Optimize the centroid of the current gaussian
        prev = centroid[f]
        best = centroid[f]
        #Pick an order of magnitude
        for p in range(int(round(math.log10(centroid[i]))-3-repeat), int(round(math.log10(centroid[i])))-6-repeat, -1):
            #Pick a value of that order of magnitude
            for m in range(-5,9):
                #Change the value of the current item
                centroid[f] = prev + m * 10**(p)
                #Increment over all wavelengths, make a list of the new values
                variancy = 0
                residual = 0
                test = []
                #Increment across every wavelength and evaluate if this change gets R^2 any larger
                for k in range(0,len(wavelength)):
                    test.append(height[i]*math.exp(-(wavelength[k]-centroid[f])**2/(2*width[i]**2)))
                    residual += (test[k]+others[k]-cflux[k])**2
                    variancy += (test[k]+others[k]-avgcflux)**2
                rsquare = 1-(residual/variancy)
                #Check the R^2 value for this new fit
                if rsquare > bestr:
                    bestr = rsquare
                    best = centroid[f]
        centroid[f] = best
        #Optimize the height of the current gaussian
        prev = height[f]
        best = height[f]
        #Pick an order of magnitude
        for p in range(int(round(math.log10(height[i]))-repeat), int(round(math.log10(height[i])))-3-repeat, -1):
            #Pick a value of that order of magnitude
            for m in range(-5,9):
                #Change the value of the current item
                height[f] = prev + m * 10**(p)
                #Increment over all wavelengths, make a list of the new values
                variancy = 0
                residual = 0
                test = []
                #Increment across every wavelength and evaluate if this change gets R^2 any larger
                for k in range(0,len(wavelength)):
                    test.append(height[f]*math.exp(-(wavelength[k]-centroid[i])**2/(2*width[i]**2)))
                    residual += (test[k]+others[k]-cflux[k])**2
                    variancy += (test[k]+others[k]-avgcflux)**2
                rsquare = 1-(residual/variancy)
                #Check the R^2 value for this new fit
                if rsquare > bestr:
                    bestr = rsquare
                    best = height[f]
        height[f] = best
        #Optimize the width of the current gaussian
        prev = width[f]
        best = width[f]
        #Pick an order of magnitude
        for p in range(int(round(math.log10(width[i]))-repeat), int(round(math.log10(width[i])))-3-repeat, -1):
            #Pick a value of that order of magnitude
            for m in range(-5,9):
                if prev + m * 10**(p) == 0:
                    m += 1
                #Change the value of the current item
                width[f] = prev + m * 10**(p)
                #Increment over all wavelengths, make a list of the new values
                variancy = 0
                residual = 0
                test = []
                #Increment across every wavelength and evaluate if this change gets R^2 any larger
                for k in range(0,len(wavelength)):
                    test.append(height[i]*math.exp(-(wavelength[k]-centroid[i])**2/(2*width[f]**2)))
                    residual += (test[k]+others[k]-cflux[k])**2
                    variancy += (test[k]+others[k]-avgcflux)**2
                rsquare = 1-(residual/variancy)
                #Check the R^2 value for this new fit
                if rsquare > bestr:
                    bestr = rsquare
                    best = width[f]
        width[f] = best
        count += 1
        #print '{} of {} peaks optimized, iteration {} of {}'.format(f+1,len(centroid),repeat+1,2)
        complete = round(100*(count/(float(len(centroid))*2)),2)
        print '{}% completed'.format(complete)
        print 'New R^2 = {}'.format(bestr)

Yes, it can likely be done better (easier) using scipy. But first, refactor your code into smaller functions; it just makes it a lot easier to read and understand what's going on.
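As a rough illustration of the scipy route (a sketch, not a drop-in replacement for your procedure): you could model the whole spectrum as one sum of Gaussians and let scipy.optimize.curve_fit adjust all heights, centroids and widths at once, starting from your current guesses. The names wavelength, cflux, height, centroid and width below are assumed to be the arrays/lists from the question.

import numpy as np
from scipy.optimize import curve_fit

def multi_gaussian(w, *params):
    # params is a flat sequence: height_0, centroid_0, width_0, height_1, ...
    w = np.asarray(w, dtype=float)
    model = np.zeros_like(w)
    for h, c, s in zip(params[0::3], params[1::3], params[2::3]):
        model += h * np.exp(-(w - c)**2 / (2 * s**2))
    return model

# initial guesses taken from the existing lists, flattened to [h0, c0, w0, h1, ...]
p0 = np.ravel(np.column_stack((height, centroid, width)))
popt, pcov = curve_fit(multi_gaussian, wavelength, cflux, p0=p0)
fitted = multi_gaussian(wavelength, *popt)

With 174 Gaussians this is still a 522-parameter least-squares problem, so it may be slow or ill-conditioned; it works much better on isolated groups of lines, as suggested below.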
As for the memory consumption: you're probably overextending a list far too much somewhere (others is a candidate: I never see it cleared (or initialized!), while it gets filled in a quadruple loop). That, or your data is simply that large (in which case you really should be using numpy arrays, just to speed up things). I can't tell, because you're introducing various variables without giving any idea of the size (how big is wavelengths? How large does others get? What and where are all the initializations of your data arrays?)
Also, fitting 174 Gaussians is just a bit crazy; either look into another way of determining whatever you want to get out of your data, or split things up. From the wavelengths variable, it appears you're trying to fit lines in a high resolution spectrum; perhaps isolating most of the lines and fitting those isolated groups separately is better. If they all overlap, I doubt any normal fitting technique is going to help you.
Lastly, perhaps a package like pandas can help (e.g., the computation subpackage).
Perhaps very lastly: since I see a lot that can be improved in the code, at some point codereview may also be useful. Though for now I guess your memory usage is the most problematic part.

Related

How to generate a Rank 5 matrix with entries Uniform?

I want to generate a rank 5 100x600 matrix in numpy with all the entries sampled from np.random.uniform(0, 20), so that all the entries will be uniformly distributed between [0, 20). What will be the best way to do so in python?
I see there is an SVD-inspired way to do so here (https://math.stackexchange.com/questions/3567510/how-to-generate-a-rank-r-matrix-with-entries-uniform), but I am not sure how to code it up. I am looking for a working example of this SVD-inspired way to get uniformly distributed entries.
I have actually managed to code up a rank 5 100x100 matrix by vertically stacking five 20x100 rank 1 matrices, then shuffling the vertical indices. However, the resulting 100x100 matrix does not have uniformly distributed entries [0, 20).
Here is my code (my best attempt):
import numpy as np

def randomMatrix(m, n, p, q):
    # creates an m x n matrix with lower bound p and upper bound q, randomly.
    count = np.random.uniform(p, q, size=(m, n))
    return count

Qs = []
my_rank = 5
for i in range(my_rank):
    L = randomMatrix(20, 1, 0, np.sqrt(20))   # L is tall
    R = randomMatrix(1, 100, 0, np.sqrt(20))  # R is long
    Q = np.outer(L, R)
    Qs.append(Q)

Q = np.vstack(Qs)
# shuffle (preserves rank 5 [confirmed])
np.random.shuffle(Q)
Not a perfect solution, I must admit. But it's simple and comes pretty close.
I create 5 vectors that are going to span the space of the matrix and create random linear combinations to fill the rest of the matrix.
My initial thought was that a trivial solution would be to copy those vectors 20 times.
To improve that, I created linear combinations of them with weights drawn from a uniform distribution, but then the distribution of the entries in the matrix becomes normal, because the weighted mean basically causes the central limit theorem to take effect.
A middle point between the trivial approach and the second approach that doesn't work is to use sets of weights that favor one of the vectors over the others. And you can generate these sorts of weight vectors by passing any vector through the softmax function with an appropriately high temperature parameter.
The distribution is almost uniform, but the vectors are still very close to the base vectors. You can play with the temperature parameter to find a sweet spot that suits your purpose.
from scipy.stats import ortho_group
from scipy.special import softmax
import numpy as np
from matplotlib import pyplot as plt
N = 100
R = 5
low = 0
high = 20
sm_temperature = 100
p = np.random.uniform(low, high, (1, R, N))
weights = np.random.uniform(0, 1, (N-R, R, 1))
weights = softmax(weights*sm_temperature, axis = 1)
p_lc = (weights*p).sum(1)
rand_mat = np.concatenate([p[0], p_lc])
plt.hist(rand_mat.flatten())
I just couldn't take the fact that my previous solution (the "selection" method) did not really produce strictly uniformly distributed entries, but only close enough to fool a statistical test sometimes. The asymptotic case, however, will almost surely not be distributed uniformly. But I did dream up another crazy idea that's just as bad, but in another manner - it's not really random.
In this solution, I do something similar to OP's method of forming R matrices with rank 1 and then concatenating them, but a little differently. I create each matrix by stacking a base vector on top of itself multiplied by 0.5, and then I stack those on the same base vector shifted by half the dynamic range of the uniform distribution. This process continues with multiplication by a third, two thirds and 1 and then shifting, and so on until I have the number of required vectors in that part of the matrix.
I know it sounds incomprehensible. But, unfortunately, I couldn't find a way to explain it better. Hopefully, reading the code would shed some more light.
I hope this "staircase" method will be more reliable and useful.
import numpy as np
from matplotlib import pyplot as plt

'''
params:
N - base dimension
M - matrix length
R - matrix rank
high - max value of matrix
low - min value of the matrix
'''
N = 100
M = 600
R = 5
high = 20
low = 0

# base vectors of the matrix
base = low + np.random.rand(R-1, N)*(high-low)

def build_staircase(base, num_stairs, low, high):
    '''
    create a uniformly distributed matrix with rank 2 out of 'num_stairs'
    different vectors whose elements are all uniformly distributed like
    the values of 'base'.
    '''
    l = levels(num_stairs)
    vectors = []
    for l_i in l:
        for i in range(l_i):
            vector_dynamic = (base-low)/l_i
            vector_bias = low + np.ones_like(base)*i*((high-low)/l_i)
            vectors.append(vector_dynamic + vector_bias)
    return np.array(vectors)

def levels(total):
    '''
    create a sequence of strictly increasing numbers summing up to the total.
    '''
    l = []
    sum_l = 0
    i = 1
    while sum_l < total:
        l.append(i)
        i += 1
        sum_l = sum(l)
    i = 0
    while sum_l > total:
        l[i] -= 1
        if l[i] == 0:
            l.pop(i)
        else:
            i += 1
        if i == len(l):
            i = 0
        sum_l = sum(l)
    return l

n_rm = R-1  # number of matrix subsections
m_rm = M//n_rm
len_rms = [M//n_rm for i in range(n_rm)]
len_rms[-1] += M % n_rm

rm_list = []
for i, len_rm in enumerate(len_rms):
    # create a matrix with uniform entries with rank 2
    # out of the vector 'base[i]' and a ones vector.
    rm_list.append(build_staircase(
        base=base[i],
        num_stairs=len_rms[i],
        low=low,
        high=high,
    ))

rm = np.concatenate(rm_list)
plt.hist(rm.flatten(), bins=100)
A few example histograms, including one with N = 1000, M = 6000, empirically demonstrate the nearly asymptotic behavior (plots omitted).

Performance issue with Scipy's solve_bvp and coupled differential equations

I'm facing a problem while trying to implement the coupled differential equation below (also known as single-mode coupling equation) in Python 3.8.3. As for the solver, I am using Scipy's function scipy.integrate.solve_bvp, whose documentation can be read here. I want to solve the equations in the complex domain, for different values of the propagation axis (z) and different values of beta (beta_analysis).
The problem is that it is extremely slow (not manageable) compared with an equivalent implementation in Matlab using the functions bvp4c, bvpinit and bvpset. Evaluating the first few iterations of both executions, they return the same result, except for the resulting mesh which is a lot greater in the case of Scipy. The mesh sometimes even saturates to the maximum value.
The coupling equation is implemented below, along with the boundary conditions function.
import h5py
import numpy as np
from scipy import integrate

def coupling_equation(z_mesh, a):
    ka_z = k  # Global
    z_a = z   # Global
    a_p = np.empty_like(a).astype(complex)
    for idx, z_i in enumerate(z_mesh):
        beta_zf_i = np.interp(z_i, z_a, beta_zf)  # Get beta at the desired point of the mesh
        ka_z_i = np.interp(z_i, z_a, ka_z)        # Get ka at the desired point of the mesh
        coupling_matrix = np.empty((2, 2), complex)
        coupling_matrix[0] = [-1j * beta_zf_i, ka_z_i]
        coupling_matrix[1] = [ka_z_i, 1j * beta_zf_i]
        a_p[:, idx] = np.matmul(coupling_matrix, a[:, idx])  # Solve the coupling matrix
    return a_p

def boundary_conditions(a_a, a_b):
    return np.hstack(((a_a[0]-1), a_b[1]))
Moreover, I couldn't find a way to pass k, z and beta_zf as arguments of the function coupling_equation, given that the fun argument of the solve_bvp function must be a callable with the parameters (x, y). My approach is to define some global variables, but I would appreciate any help on this too if there is a better solution.
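For reference, one way around the globals could be to give the coupling equation explicit parameters and wrap it in a small closure, since solve_bvp only requires a callable of the form fun(x, y). The name coupling_equation_with_params below is a hypothetical variant of coupling_equation that takes k, z and beta_zf as extra arguments; this is just a sketch, not the code actually used in the question.

def make_coupling_equation(k, z, beta_zf):
    # returns a fun(x, y) callable that closes over the parameters
    def fun(z_mesh, a):
        return coupling_equation_with_params(z_mesh, a, k, z, beta_zf)
    return fun

a = integrate.solve_bvp(fun=make_coupling_equation(k, z, beta_zf),
                        bc=boundary_conditions,
                        x=mesh, y=a_init,
                        max_nodes=max_mesh, verbose=1)

functools.partial(coupling_equation_with_params, k=k, z=z, beta_zf=beta_zf) would achieve the same thing.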
The analysis function which I am trying to code is:
def analysis(k, z, beta_analysis, max_mesh):
    s11_analysis = np.empty_like(beta_analysis, dtype=complex)
    s21_analysis = np.empty_like(beta_analysis, dtype=complex)
    initial_mesh = np.linspace(z[0], z[-1], 10)  # Initial mesh of 10 samples along L
    mesh = initial_mesh
    # a_init must be complex in order to solve the problem in a complex domain
    a_init = np.vstack((np.ones(np.size(initial_mesh)).astype(complex),
                        np.zeros(np.size(initial_mesh)).astype(complex)))
    for idx, beta in enumerate(beta_analysis):
        print(f"Iteration {idx}: beta_analysis = {beta}")
        global beta_zf
        beta_zf = beta * np.ones(len(z))  # Global variable so as to use it in coupling_equation(x, y)
        a = integrate.solve_bvp(fun=coupling_equation,
                                bc=boundary_conditions,
                                x=mesh,
                                y=a_init,
                                max_nodes=max_mesh,
                                verbose=1)
        # mesh = a.x    # Mesh for the next iteration
        # a_init = a.y  # Initial guess for the next iteration, corresponding to the current solution
        s11_analysis[idx] = a.y[1][0]
        s21_analysis[idx] = a.y[0][-1]
    return s11_analysis, s21_analysis
I suspect that the problem has something to do with the initial guess that is being passed to the different iterations (see commented lines inside the loop in the analysis function). I try to set the solution of an iteration as the initial guess for the following (which must reduce the time needed for the solver), but it is even slower, which I don't understand. Maybe I missed something, because it is my first time trying to solve differential equations.
The parameters used for the execution are the following:
f2 = h5py.File(r'path/to/file', 'r')
k = np.array(f2['k']).squeeze()
z = np.array(f2['z']).squeeze()
f2.close()

analysis_points = 501
max_mesh = 1e6
beta_0 = 3e2
beta_low = 0      # Lower value of the frequency for the analysis
beta_up = beta_0  # Upper value of the frequency for the analysis
beta_analysis = np.linspace(beta_low, beta_up, analysis_points)

s11_analysis, s21_analysis = analysis(k, z, beta_analysis, max_mesh)
Any ideas on how to improve the performance of these functions? Thank you all in advance, and sorry if the question is not well-formulated, I accept any suggestions about this.
Edit: Added some information about performance and sizing of the problem.
In practice, I can't find a relation that determines the number of times coupling_equation is called. It must be a matter of the internal operation of the solver. I checked the number of calls in one iteration by printing a line, and it happened on 133 occasions (this was one of the fastest). This must be multiplied by the number of iterations of beta. For the analyzed one, the solver returned this:
Solved in 11 iterations, number of nodes 529.
Maximum relative residual: 9.99e-04
Maximum boundary residual: 0.00e+00
The shapes of a and z_mesh are correlated, since z_mesh is a vector whose length corresponds with the size of the mesh, recalculated by the solver each time it calls coupling_equation. Given that a contains the amplitudes of the progressive and regressive waves at each point of z_mesh, the shape of a is (2, len(z_mesh)).
In terms of computation times, I only managed to achieve 19 iterations in about 2 hours with Python. In this case, the initial iterations were faster, but they start to take more time as their mesh grows, until the point that the mesh saturates to the maximum allowed value. I think this is because of the value of the input coupling coefficients at that point, because it also happens when no loop over beta_analysis is executed (just the solve_bvp function for the intermediate value of beta). Instead, Matlab managed to return a solution for the entire problem in just 6 minutes, approximately. If I pass the result of the last iteration as the initial guess (commented lines in the analysis function), the mesh overflows even faster and it is impossible to get more than a couple of iterations.
Based on semi-random inputs, we can see that max_mesh is sometimes reached. This means that coupling_equation can be called with a quite big z_mesh and a arrays. The problem is that coupling_equation contains a slow pure-Python loop iterating on each column of the arrays. You can speed the computation up a lot using Numpy vectorization. Here is an implementation:
def coupling_equation_fast(z_mesh, a):
    ka_z = k  # Global
    z_a = z   # Global
    a_p = np.empty(a.shape, dtype=np.complex128)
    beta_zf_i = np.interp(z_mesh, z_a, beta_zf)  # Get beta at the desired point of the mesh
    ka_z_i = np.interp(z_mesh, z_a, ka_z)        # Get ka at the desired point of the mesh
    # Fast manual matrix multiplication
    a_p[0] = (-1j * beta_zf_i) * a[0] + ka_z_i * a[1]
    a_p[1] = ka_z_i * a[0] + (1j * beta_zf_i) * a[1]
    return a_p
This code provides a similar output with semi-random inputs compared to the original implementation but is roughly 20 times faster on my machine.
Furthermore, I do not know if max_mesh happens to be big with your inputs too and even if this is normal/intended. It may make sense to decrease the value of max_mesh in order to reduce the execution time even more.

Estimate velocity on a spring by iterative approach

The problem:
Consider a system with a mass and a spring as shown in the picture below. The stiffness of the spring and the mass of the object are known. Therefore, if the spring is stretched, the force the spring exerts can be calculated from Hooke's law and the instantaneous acceleration can be estimated from Newton's laws of motion. Integrating the acceleration twice yields the distance the spring would move, and subtracting that from the initial length results in a new position from which to calculate the acceleration and start the loop again. Therefore, as the acceleration decreases linearly, the speed levels off at a certain value (top right of the figure). Everything after that point, the spring compressing and decelerating, is neglected for this case.
My question is how I would go about coding that up in Python. So far I have written some pseudocode.
instantaneous_acceleration = lambda x: 5*x/10  # a = kx/m
delta_time = 0.01  # 10 milliseconds
a[0] = instantaneous_acceleration(12)  # initial acceleration when stretched to 12 m
v[0] = 0   # initial velocity 0 m/s
s[0] = 12  # initial length 12 m
i = 1
while a[i] > 12:
    v[i] = a[i-1]*delta_time + v[i-1]  # calculate the next velocity
    s[i] = v[i]*delta_time + s[i-1]    # calculate the next position
    a[i] = instantaneous_acceleration(s[i])  # use the position to derive the new acceleration
    i = i + 1
Any help or tips are greatly appreciated.
If you're going to integrate up front - which is a good idea and absolutely the way to go when you can - then you can just write down the equations as functions of t for everything:
x'' = -kx/m
x'' + (k/m)x = 0
r^2 + k/m = 0
r^2 = -(k/m)
r = i*sqrt(k/m)
x(t) = A*e^(i*sqrt(k/m)t)
= A*cos(sqrt(k/m)t + B) + i*A*sin(sqrt(k/m)t + B)
= A*cos(sqrt(k/m)t + B)
From initial conditions we know that
x(0) = 12 = A*cos(B)
v(0) = 0 = -sqrt(k/m)*A*sin(B)
The second of these equations is true only if we choose A = 0 or B = 0 or B = Pi.
if A = 0, then the first equation has no solution.
if B = 0, the first equation has solution A = 12.
if B = Pi, the first equation has solution A = -12.
We probably prefer B = 0 and A = 12. This gives
x(t) = 12*cos(sqrt(k/m)t)
v(t) = -12*sqrt(k/m)*sin(sqrt(k/m)t)
a(t) = -12*(k/m)cos(sqrt(k/m)t)
Thus, at any incremental time t[n+1] = t[n] + dt, we can simply calculate the precise position, velocity and acceleration for t[n] without any drift or inaccuracy ever accumulating.
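As a small sketch of evaluating that closed form numerically (assuming k = 5 and m = 10 from the question's lambda, so k/m = 0.5; the time range is an arbitrary choice):

import numpy as np

k_over_m = 0.5              # k = 5, m = 10 as in the question
A = 12.0                    # initial stretch
w = np.sqrt(k_over_m)       # angular frequency sqrt(k/m)

t = np.arange(0.0, 20.0, 0.01)       # sample times, 10 ms apart
x = A * np.cos(w * t)                # exact position
v = -A * w * np.sin(w * t)           # exact velocity
a = -A * k_over_m * np.cos(w * t)    # exact acceleration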
All that said, if you are interested in how to numerically find x(t) and v(t) and a(t) given an arbitrary ordinary differential equation, the answer is much harder. There are lots of good ways of doing what can be called numerical integration. Euler's method is the easiest:
// initial conditions
t[0] = 0
x[0] = …
x'[0] = …
…
x^(n-1)[0] = …
x^(n)[0] = 0
// iterative step
x^(n)[k+1] = f(x^(n-1)[k], …, x'[k], x[k], t[k])
x^(n-1)[k+1] = x^(n-1)[k] + dt * x^(n)[k]
…
x'[k+1] = x'[k] + dt * x''[k]
x[k+1] = x[k] + dt * x'[k]
t[k+1] = t[k] + dt
The smaller a value of dt you choose, the longer it takes to run for a fixed duration of time, but the more accurate the results you get. This is basically doing a Riemann sum of the function and all its derivatives up to the highest one involved in the ODE.
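Applied to the spring, a minimal forward-Euler sketch could look like the following (with the restoring force written as a = -k*x/m; the values of k, m, dt and the number of steps are assumptions for illustration):

import numpy as np

k, m = 5.0, 10.0   # spring constant and mass assumed from the question
dt = 0.01
steps = 2000

x = np.empty(steps); v = np.empty(steps); a = np.empty(steps)
x[0], v[0] = 12.0, 0.0
a[0] = -k/m * x[0]
for i in range(steps - 1):
    x[i+1] = x[i] + dt * v[i]     # position from the current velocity
    v[i+1] = v[i] + dt * a[i]     # velocity from the current acceleration
    a[i+1] = -k/m * x[i+1]        # restoring force: a = -k*x/m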
A more accurate version of this, Simpson's rule, does the same thing but takes the average value over the last time quantum (rather than either endpoint's value; the example above uses the beginning of the interval). The average value over the interval is guaranteed to be closer to the true value over the interval than either endpoint (unless the function was constant over that interval, in which case Simpson is at least as good).
Probably the best standard numerical integration methods for ODEs (assuming you don't need something like leapfrog methods for greater stability) are the Runge Kutta methods. An adaptive timestep Runge Kutta method of sufficient order should usually do the trick and give you accurate answers. Unfortunately, the mathematics to explain the Runge Kutta methods is probably too advanced and time consuming to cover here, but you can find information on these and other advanced techniques online or in e.g. Numerical Recipes, a series of books on numerical methods which contains lots of very useful code samples.
Even the Runge Kutta methods work basically by refining the guess at the function's value over the time quantum, though. They just do it in more sophisticated ways which provably reduce the error at each step.
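If you would rather lean on a library, scipy's solve_ivp uses an adaptive Runge-Kutta method (RK45) by default; a sketch for this spring, again assuming k = 5 and m = 10, might be:

import numpy as np
from scipy.integrate import solve_ivp

k, m = 5.0, 10.0   # assumed from the question

def spring(t, y):
    x, v = y
    return [v, -k/m * x]   # dx/dt = v, dv/dt = -k*x/m

sol = solve_ivp(spring, t_span=(0.0, 20.0), y0=[12.0, 0.0],
                method="RK45", dense_output=True)
t = np.linspace(0.0, 20.0, 500)
x, v = sol.sol(t)   # interpolated position and velocity on a regular grid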
You have a sign error in the force: for a spring, or any other oscillation, it should always be opposite to the excitation direction. Correcting this instantly gives an oscillation. However, your loop condition will now never be satisfied, so you have to adapt that as well.
You can immediately increase the order of your method by elevating it from the current symplectic Euler method to Leapfrog-Verlet. You only have to change the interpretation of v[i] to be the velocity at t[i]-dt/2. Then the first update uses the acceleration in the middle at t[i-1] to compute the velocity at t[i-1]+dt/2=t[i]-dt/2 from the velocity at t[i-1]-dt/2 using a midpoint formula. Then in the next line the position update is a similar midpoint formula using the velocity at the middle time between the position times. All you have to change in the code to get this advantage is to set the initial velocity to the one at time t[0]-dt/2 using the Taylor expansion at t[0].
import numpy as np
from matplotlib import pyplot as plt

instantaneous_acceleration = lambda x: -5*x/10  # a = -k*x/m
delta_time = 0.01  # 10 milliseconds
s0, v0 = 12, 0     # initial length 12 m, initial velocity 0 m/s
N = 1000

s = np.zeros(N+1); v = s.copy(); a = s.copy()
a[0] = instantaneous_acceleration(s0)  # initial acceleration when stretched to 12 m
v[0] = v0 - a[0]*delta_time/2
s[0] = s0
for i in range(N):
    v[i+1] = a[i]*delta_time + v[i]    # calculate the next velocity
    s[i+1] = v[i+1]*delta_time + s[i]  # calculate the next position
    a[i+1] = instantaneous_acceleration(s[i+1])  # use the position to derive the new acceleration

# produce plots of all these functions
t = np.arange(0, N+1)*delta_time
fig, ax = plt.subplots(3, 1, figsize=(5, 3*1.5))
for g, y in zip(ax, (s, v, a)):
    g.plot(t, y); g.grid()
plt.tight_layout(); plt.show()
This is obviously and correctly an oscillation. The exact solution is 12*cos(sqrt(0.5)*t), using it and its derivatives to compute the errors in the numerical solution (remember the leap-frogging of the velocities) gives via
w = 0.5**0.5; dt = delta_time
fig, ax = plt.subplots(3, 1, figsize=(5, 3*1.5))
for g, y in zip(ax, (s-12*np.cos(w*t), v+12*w*np.sin(w*(t-dt/2)), a+12*w**2*np.cos(w*t))):
    g.plot(t, y); g.grid()
plt.tight_layout(); plt.show()
the plot below, showing errors of the expected size delta_time**2.
An analytical approach is the simplest way to obtain the velocity of a simple system that obeys Hooke's law.
However, if you desire a physically accurate numerical/iterative approach, I strongly advise against methods like standard Euler or Runge-Kutta methods (suggested by Patrick87). [Correction: OP's method is a symplectic 1st-order method, if the sign of the acceleration term is corrected.]
You probably want to use a Hamiltonian approach and a suitable symplectic integrator such as the second order leapfrog (suggested also by Patrick87).
For Hooke's law, you can express the Hamiltonian H = T(p) + V(q), where p is momentum (associated with velocity) and q is position (associated with how far the spring is located from equilibrium).
You have the kinetic energy T and potential energy V
T(p) = 0.5*p^2/m
V(q) = 0.5*k*q^2
You simply need the derivatives of these two expressions to simulate the system
dT/dp = p/m
dV/dq = k*q
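As a rough sketch of what a leapfrog (kick-drift-kick) step built from exactly these two derivatives could look like, assuming k = 5 and m = 10 from the question (this is not the implementation linked below):

import numpy as np

k, m = 5.0, 10.0   # assumed from the question
dt = 0.01
steps = 2000

q = np.empty(steps); p = np.empty(steps)
q[0], p[0] = 12.0, 0.0          # initial position and momentum

for i in range(steps - 1):
    p_half = p[i] - 0.5*dt * k*q[i]        # kick:  dV/dq = k*q
    q[i+1] = q[i] + dt * p_half/m          # drift: dT/dp = p/m
    p[i+1] = p_half - 0.5*dt * k*q[i+1]    # kick with the updated position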
I provided a detailed example (although for another 2-dimensional system), including an implementation of 1st and a 4th order method here:
https://zymplectic.com/case3.html under method 0 and method 1
These are symplectic integrators, which have an energy-preserving property that means you can perform long simulations without dissipative errors.

Prevent gradient descent from stopping too far from a local minimum

I'm implementing an algorithm in Python to find the nearest minimum of a 2-dimensional function using the gradient descent method. It takes a precision interval eps as input and stops when the distance between the initial and newly-found point on a given iteration is less than eps. The code at that stage looked like this:
while(slow_steps <= 4):
    lbd_current = lbd(x_current)
    x_previous = x_current.copy()
    grad = normalized_gradient(f_multi, x_previous)
    x_current = [x_previous[i] + lbd_current * grad[i] for i in range(len(x_previous))]
    x_list.append(x_current.copy())
    iteration += 1
    if(distance(x_previous, x_current) <= eps):
        slow_steps += 1
However, I encountered a problem with the initial version of the algorithm: it frequently got stuck in 'valleys' such as this one, depending on the function.
So far I have attempted to add a second step to traverse valleys: if the algorithm detects that the descent is slow, instead of ending immediately, it takes the points found on the latest and third-to-latest iterations and finds the nearest low point on the line between those, a line that hopefully aligns with the direction of the valley.
if(distance(x_previous, x_current) <= eps * 10 and canyon_steps >= 3):
    x_canyon = x_list[len(x_list) - 2]
    vector_canyon = [x_current[i] - x_canyon[i] for i in range(len(x_canyon))]
    lbd_current = lbd_canyon()
    x_current = [x_canyon[i] + vector_canyon[i] * lbd_current for i in range(len(x_canyon))]
    if(distance(x_previous, x_current) > eps):
        canyon_steps = 0
        slow_steps = 0
if(distance(x_previous, x_current) <= eps * 10):
    canyon_steps += 1
This algorithm works for most starting positions that I've tried, but for others, such as this one, it fails if the precision is low and seems to take a very long time to finish otherwise. How can I ensure that the algorithm arrives at a local minimum with as good chances as possible?

Take and store only some particular values inside a "while" loop

I am solving a partial differential equation by discretizing time and space. To avoid complexity here, I leave that part out and just assume that I solve the problem iteratively using a function that I called "computation". The point is that I want to take (and store in a matrix called "Cn") some of the values of "y" from the "while" loop, but without storing all the values from every time iteration.
To be precise: I am doing a "while" loop for the time evolution with some time step dt. I am running it from t=1 up to t=100 using dt=0.001. My solution "y" is computed at each time step. The point is that I want to store "y" at some particular values of the time "t", not at every time step of the loop; for instance, I want to store the values at t=1.0, 2.0, 3.0, ..., 100.0 using the values that I compute inside the while loop. But I don't want to store the values of "y" at t=1.001, 1.002, 1.003, etc.
Here is the code that I wrote:
import numpy as np
import math
from matplotlib import pyplot as plt
import matplotlib.animation as animation

# grid in 1D
xmin = 0.0
xmax = 100.0
Nx = 120
dx = (xmax-xmin)/Nx
x = np.linspace(xmin, xmax, Nx)

# timing of the numerical simulation
t_initial = 1.0
t_final = 100.0
t = t_initial
dt = 10**(-2)

# initial profile
y = np.exp(-0.5*x**2)

# number of time points to store the numerical solution
dt_solution = 1.0  # time step at which to save the numerical data inside the while loop
Nt = int((t_final-t_initial)/dt_solution)

def computation(t, y):
    return np.log(t)+y

Cn = np.zeros((Nt, len(x)))  # holds the numerical solution
Cn[0,:] = y  # we put the initial y
ite = 0
while t < t_final:
    t += dt  # WE MAKE THE TIME STEP
    ite += 1
    y = computation(t, y)
    #Cn[ite,:] = y  # I WANT TO SAVE THE VECTOR Y FOR THE TIMES t THAT I AM INTERESTED IN, NOT ALL THE ONES INSIDE THE WHILE LOOP
Does anyone know how to do that? I was thinking that maybe I could solve this using two loops, but I would like to know if it's possible to use some more efficient way. Thanks! (I hope that my question is clear; if not, please tell me.)
You could use a modulo operator. This operator shows the remainder when one number is divided by another. For example:
10%2 = 0 # 10 is exactly divisible by 2
11%2 = 1 # 11 is divisible by 2 with remainder 1
We can use this with an if condition within the while loop.
#...
t = 0
dt = 0.001  # timestep for iteration
# set the condition threshold
threshold = dt/10
# choose the step you want to save values at
store_step = 0.1
while t < 100:
    t += dt
    y = computation(t, y)
    if (t%store_step < threshold) or (t%store_step > (store_step-threshold)):
        # store y values
        Cn[ite,:] = y
Note: if your timestep is an integer, you could use if (t%1 == 0) as your condition.
Add this to your while loop where you want to save y:
if t % 1 == 0:
    Cn[ite,:] = y
So this only saves y when t is divisible by 1, i.e., t is 1.000, 2.000...
Likewise, if there are other conditions under which you only want to save y, simply check against that condition in a way that can be computed. If not, a static list or set of target times is a viable option as well.
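For instance, a sketch of that last idea, reusing Cn, Nt, y, computation, t_initial, t_final, dt and dt_solution from the question: precompute the target times and advance an index whenever the simulation time passes the next one, which sidesteps the floating-point pitfalls of the modulo check.

import numpy as np

store_times = t_initial + dt_solution*np.arange(Nt)  # 1.0, 2.0, ... matching Cn's Nt rows
next_store = 1        # row 0 already holds the initial profile
t = t_initial
while t < t_final:
    t += dt
    y = computation(t, y)
    if next_store < Nt and t >= store_times[next_store]:
        Cn[next_store, :] = y
        next_store += 1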
