I found this old Stack Overflow question that is essentially exactly what I want:
Algorithm to optimize multiple variables more efficiently than trial-and-error
Unfortunately my more advanced maths are a bit lacking, and I have some questions about the answer by ElKamina. If anyone can take a look and explain some of these basic math concepts, it would hopefully help me out.
The answer I am referring to is as follows:
def simAnneal( w, seed_x, numSteps=100000, sigma=0.01 ):
    optimal_x = [i for i in seed_x]
    optimal_w = w(optimal_x)

    cur_w = w(seed_x)

    for i in range(numSteps):
        new_x = [i+random.gauss(0, sigma) for i in seed_x]
        new_w = w(new_x)

        if (new_w > cur_w) or (random.random() > new_w / cur_w) :
            cur_x = new_x
            cur_w = new_w
            if cur_w > optimal_w:
                optimal_w = cur_w
                optimal_x = cur_x
    return optimal_x
I am unfamiliar with seed_x, sigma and the Gaussian distribution, so I am not sure how they are coming up with new_x.
I am attempting to solve for a value that depends on many variables (>10), and I am trying to do better than random guessing, which would take forever.
Thanks!
Simulated Annealing TLDR:
We're trying to find a set of parameters that maximizes a function by adding random noise to the parameters. If a change leads to an improvement, it is accepted; once in a while we also accept negative changes, but the probability of doing so decreases with time and with how bad the change is.
In the snippet above, the function actually uses multiple parameters but accepts them as a list:
w is the function whose parameters are being optimized
seed_x is the initial guess for the parameters - it can be selected at random, but an informed guess is better
the Gaussian is just the "shape" of the noise, such that small values are more common. random.random()*sigma (where all values are equally likely) would work just fine there, too.
sigma is the magnitude of the noise to be injected. It should not exceed a couple of percent of typical parameter values. If the parameter values differ vastly in magnitude, consider using a list of sigmas, one per parameter.
MISSING: notion of temperature, which will actually make it simulated annealing
Rewriting it with temperature, more descriptive names, and more explicit parameter passing:

import random

def simAnneal(utility_func, initial_params, numSteps=100000,
              noise_magnitude=0.01, cooling_rate=0.999):
    optimal_params = initial_params
    params = initial_params.copy()  # lists are mutable, so .copy()
    best_utility = utility = utility_func(*initial_params)
    temperature = 1.0
    for i in range(numSteps):
        temperature *= cooling_rate
        # consider using numpy/scipy for params and noise
        new_params = [param + random.gauss(0, noise_magnitude)
                      for param in params]
        # explicitly passing multiple parameters
        new_utility = utility_func(*new_params)
        if (new_utility > best_utility
                or random.random() * temperature > new_utility / best_utility):
            params, utility = new_params, new_utility
            if new_utility > best_utility:
                optimal_params, best_utility = params, utility
    return optimal_params
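For illustration, here is a minimal usage sketch; the toy utility function below is my own example, not part of the original answer:

# toy utility: a quadratic bump peaking at (3, -1); names here are illustrative only
def toy_utility(a, b):
    return 100.0 - (a - 3.0)**2 - (b + 1.0)**2

best = simAnneal(toy_utility, [0.0, 0.0], numSteps=50000, noise_magnitude=0.05)
print(best)  # should end up reasonably close to [3.0, -1.0]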
Last but not least - unless the problem is extremely non-convex, I'd bet SGD would perform much better.
I'm working through a book called Bayesian Analysis in Python. The book focuses heavily on the package PyMC3 but is a little vague on the theory behind it so I'm quite confused.
Say I have data like this:
data = np.array([51.06, 55.12, 53.73, 50.24, 52.05, 56.40, 48.45, 52.34, 55.65, 51.49, 51.86, 63.43, 53.00, 56.09, 51.93, 52.31, 52.33, 57.48, 57.44, 55.14, 53.93, 54.62, 56.09, 68.58, 51.36, 55.47, 50.73, 51.94, 54.95, 50.39, 52.91, 51.5, 52.68, 47.72, 49.73, 51.82, 54.99, 52.84, 53.19, 54.52, 51.46, 53.73, 51.61, 49.81, 52.42, 54.3, 53.84, 53.16])
And I'm looking at a model like this: y ~ Normal(mu, sigma), with priors mu ~ Normal(M, S) and sigma ~ HalfNormal(G).
Using Metropolis sampling, how can I fit a model that estimates mu and sigma?
Here is my guess at pseudo-code from what I've read:
M, S = 50, 1
G = 1

# These are priors, right?
mu = stats.norm(loc=M, scale=S)
sigma = stats.halfnorm(scale=G)
target = stats.norm

steps = 1000
mu_samples = [50]
sigma_samples = [1]
for i in range(steps):
    # proposed sample...
    mu_i, sigma_i = mu.rvs(), sigma.rvs()

    # Something happens here
    # How do I calculate the likelihood??
    "..."

    # some evaluation of a likelihood ratio??
    a = "some"/"ratio"
    acceptance_bar = np.random.random()
    if a > acceptance_bar:
        mu_samples.append(mu_i)
        sigma_samples.append(sigma_i)
What am I missing??
I hope the following example helps you. In this example I am going to assume we know the value of sigma, so we only have a prior for mu.
import numpy as np
from scipy import stats

M, S = 50, 1  # prior parameters for mu, as in the question
sigma = data.std()  # we are assuming we know sigma
steps = 1000
mu_old = data.mean()  # initial value, just a good guess
mu_samples = []

# we evaluate the prior for the initial point
prior_old = stats.norm(M, S).pdf(mu_old)
# we evaluate the likelihood for the initial point
likelihood_old = np.prod(stats.norm(mu_old, sigma).pdf(data))
# Bayes' theorem (omitting the denominator) for the initial point
post_old = prior_old * likelihood_old

for i in range(steps):
    # proposal distribution: propose a new value from the old one
    mu_new = stats.norm.rvs(mu_old, 0.1)
    # we evaluate the prior
    prior_new = stats.norm(M, S).pdf(mu_new)
    # we evaluate the likelihood
    likelihood_new = np.prod(stats.norm(mu_new, sigma).pdf(data))
    # Bayes' theorem (omitting the denominator)
    post_new = prior_new * likelihood_new
    # the ratio of posteriors (we do not need to know the normalizing constant)
    a = post_new / post_old
    if np.random.random() < a:
        mu_old = mu_new
        post_old = post_new
    mu_samples.append(mu_old)
Notes:
Notice that I have defined a proposal distribution, which in this case is a Gaussian centered at mu_old with a standard deviation of 0.1 (an arbitrary value). In practice the efficiency of MH depends heavily on the proposal distribution, thus PyMC3 (as well as other practical implementations of MH) use some heuristic to tune the proposal distribution.
For simplicity I used the pdf in this example, but in practice it is convenient to work with the logpdf. This avoids underflow problems without changing the results (a short log-space sketch follows these notes).
The likelihood is computed as a product of the densities of the individual data points (the np.prod over the pdf values).
The ratio you were missing is the ratio of posteriors
If you do not accept the new proposed value, then you save (again) the old value.
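To make the logpdf point concrete, here is a minimal log-space sketch of the same sampler (my own sketch, not from the book); it assumes the data array and the M, S values from the question:

import numpy as np
from scipy import stats

M, S = 50, 1
sigma = data.std()   # still assuming sigma is known
steps = 1000
mu_old = data.mean()
mu_samples = []

# log-prior + log-likelihood (log of the unnormalized posterior)
log_post_old = (stats.norm(M, S).logpdf(mu_old)
                + stats.norm(mu_old, sigma).logpdf(data).sum())
for i in range(steps):
    mu_new = stats.norm.rvs(mu_old, 0.1)
    log_post_new = (stats.norm(M, S).logpdf(mu_new)
                    + stats.norm(mu_new, sigma).logpdf(data).sum())
    # accept with probability min(1, exp(log_post_new - log_post_old))
    if np.log(np.random.random()) < log_post_new - log_post_old:
        mu_old, log_post_old = mu_new, log_post_new
    mu_samples.append(mu_old)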
Remember to check this repository for the errata and for updated versions of the code. The major difference between the updated code and the code in the book is that the preferred way to run a model with PyMC3 is now to just use pm.sample() and let PyMC3 choose the samplers and initialization points for you.
Is it possible to vectorize (or otherwise speedup) an element-wise optimization with NumPy (and SciPy)?
In the most abstract sense, I have a function, y, which is parabolically shaped and could be expressed basically as y = x^2 + b*x + z, where x is an array of known values, and I want to find a z that makes the minimum value of y exactly zero (said another way, I want to find the value of z for which my parabola has exactly one zero). For this, I've chosen to implement a simple bisection-like method. The code for this is below:
import numpy as np

def find_single_root():
    x = np.arange(-5, 6, 0.1)  # domain
    z = 1                      # initial guess
    delta = 1                  # initial step size
    tol = 0.001                # tolerance
    while True:
        y = x**2 - 5*x + z
        minimum = np.nanmin(y)
        # update z
        print(delta)
        print(z)
        if minimum > 0:
            if delta > 0:
                delta = -1*delta/2
            z += delta
        else:
            if delta < 0:
                delta = -1*delta/2
            z += delta
        # check if step is smaller than tolerance
        if np.abs(delta) < tol:
            return z
Now let's say x = x(v, w), and I want to create a 2D array of z values, where each one is optimized. What I have right now is below (note the new function definition and domain):
def find_single_root(v, w):
    x = np.arange(-5*v/w, 6*w, 0.1)  # domain
    ...  # rest of the function

vs = np.arange(1, 5)
ws = np.arange(1, 5)
zs = np.zeros((len(vs), len(ws)))

for i, v in enumerate(vs):
    for j, w in enumerate(ws):
        zs[i][j] = find_single_root(v, w)
Right now I just have these simple nested for loops, but is there a way I can approach this differently or speed it up with NumPy vectorizing?
Vectorization may be applicable when the computations to be performed are precisely known in advance, like "take two arrays of numbers and multiply them pairwise".
Vectorization is not applicable when the computations adapt to the given data. Any kind of optimization algorithm is adaptive, because where you look for the minimum depends on what the function returns. If you have a bunch of functions and need to find the minimum of each, you are going to have to minimize them one at a time, in a loop. If this process is slow, it's because it takes long to minimize a bunch of functions, not because there is a for loop in the program.
Concerning your program, I would try using some of SciPy's methods for both minimization and root-finding. Have a function min_of_f(z) which finds the minimum for a given value of the parameter z, possibly using minimize_scalar. Then feed min_of_f to a root-finding routine. How long these take can be controlled by their tolerance parameters (xtol and others).
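A minimal sketch of that suggestion, assuming the parabola y = x**2 - 5*x + z from the question (the bracket passed to the root finder is an illustrative guess, not prescribed by the answer):

from scipy.optimize import minimize_scalar, brentq

def min_of_f(z):
    # minimum of y(x) = x**2 - 5*x + z over the domain, for a given z
    res = minimize_scalar(lambda x: x**2 - 5*x + z, bounds=(-5, 6), method='bounded')
    return res.fun

# find the z for which the minimum of y is exactly zero; the bracket [0, 20]
# must straddle the sign change of min_of_f
z_star = brentq(min_of_f, 0, 20, xtol=0.001)
print(z_star)  # analytically the answer is 6.25 for this parabola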
OP edit:
I wanted to give credit for this as a correct answer, but still provide more information.
I ended up using numpy.vectorize to vectorize without restructuring the problem. Although numpy.vectorize is not meant for increasing performance, in my specific use case it was a modest factor of two faster. Applying the same approach to the original problem in the question resulted in virtually no speed-up with 100x100 vectors, so YMMV.
Even though I wasn't able to vectorize this problem from a speed perspective for the reasons given in the above answer, being able to use plain vector syntax instead of nested for loops all over my code was useful.
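For reference, a hedged sketch of the np.vectorize approach described above, reusing the names from the question (np.vectorize keeps the per-element Python loop under the hood; it mainly cleans up the call site):

vectorized_root = np.vectorize(find_single_root)
# broadcasting a column of v values against a row of w values produces the full 2D grid
zs = vectorized_root(vs[:, np.newaxis], ws[np.newaxis, :])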
I am currently working on a machine learning project where - given a data matrix Z and a vector rho - I have to compute the value and slope of the logistic loss function at rho. The computation involves basic matrix-vector multiplication and log/exp operations, with a trick to avoid numerical overflow (described in this previous post).
I am currently doing this in Python using NumPy as shown below (as a reference, this code runs in 0.2s). Although this works well, I would like to speed it up since I call the function multiple times in my code (and it represents over 90% of the computation involved in my project).
I am looking for any way to improve the runtime of this code without parallelization (i.e. only 1 CPU). I am happy using any publicly available package in Python, or calling C or C++ (since I have heard that this improves runtimes by an order of magnitude). Preprocessing the data matrix Z would also be OK. Some things that could be exploited for better performance are that the vector rho is usually sparse (around 50% of its entries are 0) and that there are usually far more rows than columns (in most cases n_cols <= 100).
import time
import numpy as np

np.__config__.show()  # make sure BLAS/LAPACK is being used
np.random.seed(seed=0)

# initialize data matrix X and label vector Y
n_rows, n_cols = 1e6, 100
X = np.random.random(size=(n_rows, n_cols))
Y = np.random.randint(low=0, high=2, size=(n_rows, 1))
Y[Y==0] = -1
Z = X*Y  # all operations are carried out on Z

def compute_logistic_loss_value_and_slope(rho, Z):
    # compute the value and slope of the logistic loss function in a way that is numerically stable
    # loss_value: (1 x 1) scalar = 1/n_rows * sum(log( 1 .+ exp(-Z*rho))
    # loss_slope: (n_cols x 1) vector = 1/n_rows * sum(-Z*rho ./ (1+exp(-Z*rho))
    # see also: https://stackoverflow.com/questions/20085768/
    scores = Z.dot(rho)
    pos_idx = scores > 0
    exp_scores_pos = np.exp(-scores[pos_idx])
    exp_scores_neg = np.exp(scores[~pos_idx])

    # compute loss value
    loss_value = np.empty_like(scores)
    loss_value[pos_idx] = np.log(1.0 + exp_scores_pos)
    loss_value[~pos_idx] = -scores[~pos_idx] + np.log(1.0 + exp_scores_neg)
    loss_value = loss_value.mean()

    # compute loss slope
    phi_slope = np.empty_like(scores)
    phi_slope[pos_idx] = 1.0 / (1.0 + exp_scores_pos)
    phi_slope[~pos_idx] = exp_scores_neg / (1.0 + exp_scores_neg)
    loss_slope = Z.T.dot(phi_slope - 1.0) / Z.shape[0]

    return loss_value, loss_slope

# initialize a vector of integers where more than half of the entries = 0
rho_test = np.random.randint(low=-10, high=10, size=(n_cols, 1))
set_to_zero = np.random.choice(range(0, n_cols), size=(np.floor(n_cols/2), 1), replace=False)
rho_test[set_to_zero] = 0.0

start_time = time.time()
loss_value, loss_slope = compute_logistic_loss_value_and_slope(rho_test, Z)
print "total runtime = %1.5f seconds" % (time.time() - start_time)
Libraries of the BLAS family are already highly tuned for best performance. So no effort to link to some C/C++ code is likely to give you any benefits. You could however try various BLAS implementations, since there are quite a few of them around, including some specially tuned to some CPUs.
The other thing that comes to my mind is to use a library like theano (or Google's tensorflow) that is able to represent the entire computational graph (all of the operations in your function above) and apply global optimizations to it. It can then generate CPU code from that graph via C++ (and by flipping a simple switch also GPU code). It can also automatically compute symbolic derivatives for you. I've used theano for machine learning problems and it's a really great library for that, although not the easiest one to learn.
(I'm posting this as an answer because it's too long for a comment)
Edit:
I had a go at this in Theano, but the result is about 2x slower on the CPU; see below for why. I'll post it here anyway, maybe it's a starting point for someone else to do something better (this is only partial code, complete it with the code from the original post):
import theano

def make_graph(rho, Z):
    scores = theano.tensor.dot(Z, rho)
    # this is very inefficient... it calculates everything twice and
    # then picks one of them depending on scores being positive or not.
    # not sure how to express this in theano in a more efficient way
    pos = theano.tensor.log(1 + theano.tensor.exp(-scores))
    # numerically stable branch for scores <= 0, matching the NumPy version above
    neg = -scores + theano.tensor.log(1 + theano.tensor.exp(scores))
    loss_value = theano.tensor.switch(scores > 0, pos, neg)
    loss_value = loss_value.mean()
    # however computing the derivative is a real joy now:
    loss_slope = theano.tensor.grad(loss_value, rho)
    return loss_value, loss_slope

sym_rho = theano.tensor.col('rho')
sym_Z = theano.tensor.matrix('Z')
sym_loss_value, sym_loss_slope = make_graph(sym_rho, sym_Z)

compute_logistic_loss_value_and_slope = theano.function(
    inputs=[sym_rho, sym_Z],
    outputs=[sym_loss_value, sym_loss_slope]
)
# use function compute_logistic_loss_value_and_slope() as in original code
NumPy is already quite optimized. The best you can do is to try other libraries with data of the same size, initialized to random values (not initialized to 0), and do your own benchmark.
If you want to try, you can of course try BLAS directly. You should also give Eigen a try; I personally found it faster in one of my applications.
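If it helps, a minimal benchmarking sketch along those lines (the shapes mirror the question; the snippet itself is my own illustration, not from the answer):

import timeit
import numpy as np

Z = np.random.random((1000000, 100))
rho = np.random.random((100, 1))

# time the dominant matrix-vector product before committing to a library switch
t = timeit.timeit(lambda: Z.dot(rho), number=10) / 10
print(t)  # seconds per call with the BLAS that NumPy is linked against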
I spent some time these days on a problem. I have a set of data:
y = f(t), where y is a very small concentration (on the order of 10^-7), and t is in seconds. t varies from 0 to around 12000.
The measurements follow an established model:
y = Vs * t - ((Vs - Vi) * (1 - np.exp(-k * t)) / k)
And I need to find Vs, Vi, and k. So I used curve_fit, which returns the best fitting parameters, and I plotted the curve.
And then I used a similar model:
y = (Vs * t/3600 - ((Vs - Vi) * (1 - np.exp(-k * t/3600)) / k)) * 10**7
By doing that, t is a number of hours, and y is a number between 0 and about 10. The parameters returned are of course different. But when I plot each curve, here is what I get:
http://i.imgur.com/XLa4LtL.png
The green fit is the first model, the blue one with the "normalized" model. And the red dots are the experimental values.
The fitted curves are different. I don't think that is expected, and I don't understand why. Are the calculations more accurate if the numbers are "reasonable"?
The docstring for optimize.curve_fit says,
p0 : None, scalar, or M-length sequence
Initial guess for the parameters. If None, then the initial
values will all be 1 (if the number of parameters for the function
can be determined using introspection, otherwise a ValueError
is raised).
Thus, to begin with, the initial guess for the parameters is by default 1.
Moreover, curve fitting algorithms have to sample the function for various values of the parameters. The "various values" are initially chosen with an initial step size on the order of 1. The algorithm will work better if your data varies somewhat smoothly with changes in the parameter values that are on the order of 1.
If the function varies wildly with parameter changes on the order of 1, then the algorithm may tend to miss the optimum parameter values.
Note that even if the algorithm uses an adaptive step size when it tweaks the parameter values, if the initial tweak is so far off the mark as to produce a big residual, and if tweaking in some other direction happens to produce a smaller residual, then the algorithm may wander off in the wrong direction and miss the local minimum. It may find some other (undesired) local minimum, or simply fail to converge. So using an algorithm with an adaptive step size won't necessarily save you.
The moral of the story is that scaling your data can improve the algorithm's chances of finding the desired minimum.
Numerical algorithms in general all tend to work better when applied to data whose magnitude is on the order of 1. This bias enters into the algorithm in numerous ways. For instance, optimize.curve_fit relies on optimize.leastsq, and the call signature for optimize.leastsq is:
def leastsq(func, x0, args=(), Dfun=None, full_output=0,
col_deriv=0, ftol=1.49012e-8, xtol=1.49012e-8,
gtol=0.0, maxfev=0, epsfcn=None, factor=100, diag=None):
Thus, by default, the tolerances ftol and xtol are on the order of 1e-8. If finding the optimum parameter values requires much smaller tolerances, then these hard-coded default numbers will cause optimize.curve_fit to miss the optimum parameter values.
To make this more concrete, suppose you were trying to minimize f(x) = 1e-100*x**2. The factor of 1e-100 squashes the y-values so much that a wide range of x-values (the parameter values mentioned above) will fit within the tolerance of 1e-8. So, with un-ideal scaling, leastsq will not do a good job of finding the minimum.
Another reason to use floats on the order of 1 is that there are many more (IEEE 754) floats in the interval [-1, 1] than there are far away from 1. For example,

import struct

def floats_between(x, y):
    """
    http://stackoverflow.com/a/3587987/190597 (jsbueno)
    """
    a = struct.pack("<dd", x, y)
    b = struct.unpack("<qq", a)
    return b[1] - b[0]
In [26]: floats_between(0,1) / float(floats_between(1e6,1e7))
Out[26]: 311.4397707054894
This shows there are over 300 times as many floats representing numbers between 0 and 1 as there are in the interval [1e6, 1e7].
Thus, all else being equal, you'll typically get a more accurate answer if working with small numbers than very large numbers.
I would imagine it has more to do with the initial parameter estimates you are passing to curve_fit. If you are not passing any, I believe they all default to 1. Normalizing your data makes those initial estimates closer to the truth. If you don't want to use normalized data, just pass the initial estimates yourself and give them reasonable values.
Others have already mentioned that you probably need a good starting guess for your fit. In cases like this, I usually try to find some quick and dirty tricks to get at least a ballpark estimate of the parameters. In your case, for large t, the exponential decays pretty quickly to zero, so for large t, you have
y == Vs * t - (Vs - Vi) / k
Doing a first-order linear fit like
[slope1, offset1] = polyfit(t[t > 2000], y[t > 2000], 1)
you will get slope1 == Vs and offset1 == (Vi - Vs) / k.
Subtracting this straight line from all the points you have, you get the exponential
residual == y - slope1 * t - offset1 == (Vs - Vi) * exp(-t * k) / k
Taking the log of both sides, you get
log(residual) == log((Vs - Vi) / k) - t * k
So doing a second fit
[slope2, offset2] = polyfit(t, log(y - slope1 * t - offset1), 1)
will give you slope2 == -k and offset2 == log((Vs - Vi) / k), which you can solve for Vi since you already know Vs and now also k. You might have to limit the second fit to small values of t, otherwise you might be taking the log of negative numbers. Collect all the parameters you obtained with these fits and use them as the starting points for your curve_fit.
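Putting the above into code, here is a rough sketch of the two-stage initialization feeding curve_fit; it assumes NumPy arrays t and y holding the measurements from the question, and the 2000 s threshold is the same arbitrary cutoff used above:

import numpy as np
from scipy.optimize import curve_fit

def model(t, Vs, Vi, k):
    return Vs * t - (Vs - Vi) * (1 - np.exp(-k * t)) / k

# stage 1: linear fit of the tail gives Vs and the offset (Vi - Vs) / k
slope1, offset1 = np.polyfit(t[t > 2000], y[t > 2000], 1)
Vs0 = slope1

# stage 2: log of the residual at small t gives k, then Vi
mask = t < 2000  # keep the region where the residual is still clearly positive
residual = y[mask] - slope1 * t[mask] - offset1
slope2, offset2 = np.polyfit(t[mask], np.log(residual), 1)
k0 = -slope2
Vi0 = Vs0 - k0 * np.exp(offset2)  # since offset2 == log((Vs - Vi) / k)

popt, pcov = curve_fit(model, t, y, p0=[Vs0, Vi0, k0])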
Finally, you might want to look into doing some sort of weighted fit. The information about the exponential part of your curve is contained in just the first few points, so maybe you should give those a higher weight. Doing this in a statistically correct way is not trivial.
I've been trying to fit some histogram data with scipy.optimize.curve_fit, but so far I haven't once been able to produce fit parameters that differ significantly from my guess parameters.
I wouldn't be terribly surprised to find that the more arcane parameters in my fit get stuck in local minima, but even linear coefficients won't move from my initial guesses!
If you've seen anything like this before, I'd love some advice. Do least-squares minimization routines just not work for certain classes of functions?
I tried this:
import numpy as np
from matplotlib.pyplot import *
from scipy.optimize import curve_fit

def grating_hist(x, frac, xmax, x0):
    # model data to be turned into a histogram
    dx = x[1]-x[0]
    z = np.linspace(0, 1, 20000, endpoint=True)
    grating = np.cos(frac*np.pi*z)
    norm_grating = xmax*(grating-grating[-1])/(1-grating[-1])+x0
    # produce the histogram
    bin_edges = np.append(x, x[-1]+x[1]-x[0])
    hist, bin_edges = np.histogram(norm_grating, bins=bin_edges)
    return hist

x = np.linspace(0, 5, 512)
p_data = [0.7, 1.1, 0.8]
pct = grating_hist(x, *p_data)
p_guess = [1, 1, 1]
p_fit, pcov = curve_fit(grating_hist, x, pct, p0=p_guess)

plot(x, pct, label='Data')
plot(x, grating_hist(x, *p_fit), label='Fit')
legend()
show()

print 'Data Parameters:', p_data
print 'Guess Parameters:', p_guess
print 'Fit Parameters:', p_fit
print 'Covariance:', pcov
and I see this: http://i.stack.imgur.com/GwXzJ.png (I'm new here, so I can't post images)
Data Parameters: [0.7, 1.1, 0.8]
Guess Parameters: [1, 1, 1]
Fit Parameters: [ 0.97600854 0.99458336 1.00366634]
Covariance: [[ 3.50047574e-06 -5.34574971e-07 2.99306123e-07]
[ -5.34574971e-07 9.78688795e-07 -6.94780671e-07]
[ 2.99306123e-07 -6.94780671e-07 7.17068753e-07]]
Whaaa? I'm pretty sure this isn't a local minimum for variations in xmax and x0, and it's a long way from the global minimum best fit. The fit parameters still don't change, even with better guesses. Different choices for curve functions (e.g. the sum of two normal distributions) do produce new parameters for the same data, so I know it's not the data itself. I also tried the same thing with scipy.optimize.leastsq itself just in case, but no dice; the parameters still don't move. If you have any thoughts on this, I'd love to hear them!
The problem you're facing is actually not due to curve_fit (or leastsq). It is due to the landscape of the objective of your optimisation problem. In your case the objective is the sum of squared residuals, which you are trying to minimise. Now, if you look closely at your objective in a small neighbourhood of your initial conditions, for example using the code below, which only varies the first parameter:
import numpy as np
import matplotlib.pyplot as py

# assumes x, p_data, p_guess and grating_hist from the question are in scope
p_ind = 0
eps = 1e-6
n_points = 100
frac_surroundings = np.linspace(p_guess[p_ind] - eps, p_guess[p_ind] + eps, n_points)
obj = []
temp_guess = p_guess.copy()
for p in frac_surroundings:
    temp_guess[0] = p
    obj.append(((grating_hist(x, *p_data) - grating_hist(x, *temp_guess))**2.0).sum())
py.plot(frac_surroundings, obj)
py.show()
you will notice that the landscape is piecewise constant (you can easily check that the situation is the same for the other parameters). The problem is that these pieces are of the order of 10^-6, whereas the initial step of the fitting procedure is somewhere around 10^-8, hence the procedure quickly ends, concluding that it cannot improve on the given initial condition. You could try to fix it by changing the epsfcn parameter in curve_fit, but you would quickly notice that the landscape, on top of being piecewise constant, is also very "rugged". In other words, curve_fit is simply not well suited to such a problem, which is difficult for gradient-based methods, as it is highly non-convex. Probably some stochastic optimisation methods could do a better job. That is, however, a different question/problem.
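For completeness, this is roughly how the epsfcn suggestion would look (curve_fit forwards extra keyword arguments to the underlying leastsq call; as noted above, enlarging the step does not rescue this particular objective):

# same call as in the question, but with a larger finite-difference step for the Jacobian
p_fit, pcov = curve_fit(grating_hist, x, pct, p0=p_guess, epsfcn=1e-4)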
I think it is a local minimum, or the algorithm fails for a non-trivial reason. It is far easier to fit the data to the input directly, instead of fitting the statistical description of the data to the statistical description of the input.
Here's a modified version of the code doing so:
z = np.linspace(0, 1, 20000, endpoint=True)

def grating_hist_indicator(x, frac, xmax, x0):
    # model data to be turned into a histogram
    dx = x[1]-x[0]
    grating = np.cos(frac*np.pi*z)
    norm_grating = xmax*(grating-grating[-1])/(1-grating[-1])+x0
    return norm_grating

x = np.linspace(0, 5, 512)
p_data = [0.7, 1.1, 0.8]
pct = grating_hist(x, *p_data)
pct_indicator = grating_hist_indicator(x, *p_data)
p_guess = [1, 1, 1]
p_fit, pcov = curve_fit(grating_hist_indicator, x, pct_indicator, p0=p_guess)

plot(x, pct, label='Data')
plot(x, grating_hist(x, *p_fit), label='Fit')
legend()
show()