I’m new to PyMC and am trying to model a situation where you roll marbles at a wall and try to find a block mounted on it. The data consist only of the positions where a marble hit the block.
I first sample an x location and a size l, then draw a point from a uniform distribution defined by those, but I’m getting an error.
import pymc3 as pm
import theano.tensor as tt

basic_model = pm.Model()

with basic_model:
    # We are assuming independence of these.
    x = pm.Uniform("x", lower=1, upper=30)
    l = pm.Uniform("l", lower=1, upper=30)

    lower = pm.Deterministic('lower', x - 0.5*l)
    upper = pm.Deterministic('upper', x + 0.5*l)

    point_x = pm.Uniform('point_x', lower=lower, upper=upper, observed=x_vals)

    pm.sample()
With the error:
SamplingError: Initial evaluation of model at starting point failed!
Starting values:
{'x_interval__': array(0.), 'l_interval__': array(0.)}
Initial evaluation results:
x_interval__ -1.39
l_interval__ -1.39
point_x -inf
Name: Log-probability of test_point, dtype: float64
Clearly the issue is with point_x. I’m guessing the error has to do with the fact that the observed data may potentially fall outside the lower-upper range depending on the value of x and l sampled. But how might I fix this?
The sampler doesn't know how to handle starting off in an invalid region of the parameter space. A quick and dirty fix is to provide testval arguments that ensure sampling starts from a logically valid point. For example, we know the smallest block consistent with the data must have:
l_0 = np.max(x_vals) - np.min(x_vals)
x_0 = np.min(x_vals) + 0.5*l_0
and could use those as test values:
x = pm.Uniform("x", lower=1, upper=30, testval=x_0)
l = pm.Uniform("l", lower=1, upper=30, testval=l_0)
Also, the nature of this model leads to many rejections due to impossibility, so you may want to use Metropolis for sampling, which almost always needs more steps and more tuning:
pm.sample(tune=10000, draws=10000, step=pm.Metropolis())
Alternative Models
Otherwise, consider reparameterizing so that only valid solutions are in the parameter space. One approach would be to sample l and then use that to constrain x. Something like:
other_model = pm.Model()

x_min = np.min(x_vals)
x_max = np.max(x_vals)
l_0 = x_max - x_min

with other_model:
    # these have logical constraints from the data
    l = pm.Uniform("l", lower=l_0, upper=30)
    x = pm.Uniform("x", lower=x_max - 0.5*l, upper=x_min + 0.5*l)

    lower = pm.Deterministic('lower', x - 0.5*l)
    upper = pm.Deterministic('upper', x + 0.5*l)

    point_x = pm.Uniform('point_x', lower=lower, upper=upper, observed=x_vals)

    res = pm.sample(step=pm.NUTS(), return_inferencedata=True)
Another approach would be to sample lower and upper directly, and compute the x and l as deterministic variables from those.
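A minimal sketch of that idea (assuming the same x_vals data and the 1-30 box from the question; the bounds placed on lower and upper are illustrative choices):

with pm.Model() as bounds_model:
    # sample the block edges directly, so every draw is a valid interval
    lower = pm.Uniform('lower', lower=1, upper=x_min)
    upper = pm.Uniform('upper', lower=x_max, upper=30)

    # recover the original parameterization as deterministic quantities
    x = pm.Deterministic('x', 0.5*(lower + upper))
    l = pm.Deterministic('l', upper - lower)

    point_x = pm.Uniform('point_x', lower=lower, upper=upper, observed=x_vals)
    res = pm.sample(return_inferencedata=True)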
Related
In the following code I want to optimize a wind farm using a penalty function.
With the first function (newSite) I define the number of wind turbines and their layout. Then, in the next function, after importing x0 (c = x0 = initial guess), for each range of 10 wind directions (wd) I take the c values of the mean wd of that range. For instance, for wd [0, 10] the mean value is 5, so I take the c values at wd = 5 and apply them to every wd in [0, 10] and to every wind speed (ws). Note that c indicates whether a wind turbine is on or off (c = 0 means the turbine is off). I then define operating from c, so that if c = 0, operating is 0 and that turbine is off.
Then I define the penalty function to optimize the power output: wherever TI_eff > 0.14, a penalty must be subtracted from the original power output. For instance, if sim_res.TI_eff[1][2][3] > 0.14, then curr_func[1][2][3] = sim_res.Power[1][2][3] - 10000*(sim_res.TI_eff[1][2][3] - 0.14)**2.
The problem is that when I run this code it does not give me any results; I waited for many hours and I think it is stuck in a loop that never converges. What is the problem?
import time
from py_wake.examples.data.hornsrev1 import V80
from py_wake.examples.data.hornsrev1 import Hornsrev1Site # We work with the Horns Rev 1 site, which comes already set up with PyWake.
from py_wake import BastankhahGaussian
from py_wake.turbulence_models import GCLTurbulence
from py_wake.deflection_models.jimenez import JimenezWakeDeflection
from scipy.optimize import minimize
from py_wake.wind_turbines.power_ct_functions import PowerCtFunctionList, PowerCtTabular
import numpy as np
def newSite(x, y):
    xNew = np.array([x[0] + 560*i for i in range(4)])
    yNew = np.array([y[0] + 560*i for i in range(4)])
    x_newsite = np.array([xNew[0], xNew[0], xNew[0], xNew[1]])
    y_newsite = np.array([yNew[0], yNew[1], yNew[2], yNew[0]])
    return (x_newsite, y_newsite)
def wt_simulation(c):
    c = c.reshape(4, 360, 23)
    site = Hornsrev1Site()
    x, y = site.initial_position.T
    x_newsite, y_newsite = newSite(x, y)
    windTurbines = V80()

    for item in range(4):
        for j in range(10, 370, 10):
            for i in range(j-10, j):
                c[item][i] = c[item][j-5]

    windTurbines.powerCtFunction = PowerCtFunctionList(
        key='operating',
        powerCtFunction_lst=[PowerCtTabular(ws=[0, 100], power=[0, 0], power_unit='w', ct=[0, 0]),  # 0=No power and ct
                             windTurbines.powerCtFunction],  # 1=Normal operation
        default_value=1)

    operating = np.ones((4, 360, 23))  # shape=(#wt, wd, ws)
    operating[c <= 0.5] = 0

    wf_model = BastankhahGaussian(site, windTurbines, deflectionModel=JimenezWakeDeflection(), turbulenceModel=GCLTurbulence())

    # run wind farm simulation
    sim_res = wf_model(
        x_newsite, y_newsite,  # wind turbine positions
        h=None,                # wind turbine heights (defaults to the heights defined in windTurbines)
        wd=None,               # Wind direction (defaults to site.default_wd (0,1,...,360 if not overridden))
        ws=None,               # Wind speed (defaults to site.default_ws (3,4,...,25 m/s if not overridden))
        operating=operating
    )

    curr_func = np.ones((4, 360, 23))
    for i in range(4):
        for l in range(360):
            for k in range(23):
                if sim_res.TI_eff[i][l][k] - 0.14 > 0:
                    curr_func[i][l][k] = sim_res.Power[i][l][k] - 10000*(sim_res.TI_eff[i][l][k] - 0.14)**2
                else:
                    curr_func[i][l][k] = sim_res.Power[i][l][k]

    return -float(np.sum(curr_func))  # negative because of scipy minimize
t0 = time.perf_counter()

def solve():
    wt = 4   # for V80
    wd = 360
    ws = 23
    x0 = np.ones((wt, wd, ws)).reshape(-1)  # initial value for c
    b = (0, 1)
    bounds = np.full((wt, wd, ws, 2), b).reshape(-1, 2)
    res = minimize(wt_simulation, x0=x0, bounds=bounds)
    return res

res = solve()
print(f'success status: {res.success}')
print(f'aep: {-res.fun}')  # negative to get the true maximum aep
print(f'c values: {res.x}\n')
print(f'elapse: {round(time.perf_counter() - t0)}s')

sim_res = wt_simulation(res.x)
There are a number of things in your approach that are either wrong or incomprehensible to me. Just for fun I have tried your code. A few observations:
Your set of parameters (optimization variables) has a shape of (4, 360, 23), i.e. you are looking at 33,120 parameters. There is no nonlinear optimization algorithm that is going to give you any meaningful answer to a problem that big. Ever. But then again, you shouldn't be looking at SciPy optimize if your optimization variables should only assume 0/1 values.
Calling SciPy minimize like this:
res = minimize(wt_simulation, x0=x0, bounds=bounds)
is going to select one of the nonlinear optimizers BFGS, L-BFGS-B, or SLSQP (according to the documentation at https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html).
Those algorithms are gradient-based, and since you're not providing a gradient of your objective function, SciPy is going to estimate it numerically. Good luck with that when you have 33,000 parameters. Never going to finish.
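A rough back-of-the-envelope estimate (using the 20-25 second evaluation time mentioned below): a forward-difference gradient needs one extra objective evaluation per parameter, so roughly 33,120 evaluations per gradient, i.e. about 33,120 × 20 s ≈ 660,000 s, which is more than a week of runtime for a single gradient estimate.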
At the beginning of your objective function you are doing this:
for item in range(4):
    for j in range(10, 370, 10):
        for i in range(j-10, j):
            c[item][i] = c[item][j-5]
I don't understand why you're doing it, but you are overwriting the input values of c coming from the optimizer.
Your objective function takes 20-25 seconds to evaluate on my powerful workstation. Even if you had only 10-15 optimization parameters, it would take you several days to get any answer out of an optimizer. You have 33,000+ variables. No way.
I don't know why you are doing this and why you're doing it the way you're doing it. You should rethink your approach.
I am using emcee.EnsembleSampler to sample some likelihoods. I am doing this on millions of different data sets, and so performance is important.
It would be great to be able to get a chain to stop sampling once it has satisfied some convergence criterion on the autocorrelation time.
I cannot find any way of doing this in the documentation or through my searches, so I was hoping someone has a recipe for it.
While I too couldn't find anything in the current emcee docs, the latest version seems to have added a tutorial that shows exactly what you are trying to do: link
In case the link breaks or things change, here is the example:
import emcee
import numpy as np

np.random.seed(42)

# The definition of the log probability function
# We'll also use the "blobs" feature to track the "log prior" for each step
def log_prob(theta):
    log_prior = -0.5 * np.sum((theta - 1.0) ** 2 / 100.0)
    log_prob = -0.5 * np.sum(theta ** 2) + log_prior
    return log_prob, log_prior

# Initialize the walkers
coords = np.random.randn(32, 5)
nwalkers, ndim = coords.shape

# Set up the backend
# Don't forget to clear it in case the file already exists
filename = "tutorial.h5"
backend = emcee.backends.HDFBackend(filename)
backend.reset(nwalkers, ndim)

# Initialize the sampler
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, backend=backend)

max_n = 100000

# We'll track how the average autocorrelation time estimate changes
index = 0
autocorr = np.empty(max_n)

# This will be useful to testing convergence
old_tau = np.inf

# Now we'll sample for up to max_n steps
for sample in sampler.sample(coords, iterations=max_n, progress=True):
    # Only check convergence every 100 steps
    if sampler.iteration % 100:
        continue

    # Compute the autocorrelation time so far
    # Using tol=0 means that we'll always get an estimate even
    # if it isn't trustworthy
    tau = sampler.get_autocorr_time(tol=0)
    autocorr[index] = np.mean(tau)
    index += 1

    # Check convergence
    converged = np.all(tau * 100 < sampler.iteration)
    converged &= np.all(np.abs(old_tau - tau) / tau < 0.01)
    if converged:
        break
    old_tau = tau
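Once the loop exits, you would typically discard a burn-in and thin the chain based on the final autocorrelation estimate. A minimal sketch (the factors 2 and 0.5 are the conventional choices used in the emcee tutorial):

tau = sampler.get_autocorr_time()
burnin = int(2 * np.max(tau))
thin = int(0.5 * np.min(tau))
samples = sampler.get_chain(discard=burnin, thin=thin, flat=True)
print(samples.shape)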
I have a function Imaginary which describes a physics process and I want to fit this to a dataset x_interpolate, y_interpolate. The function is a form of a Lorentzian peak function and I have some initial values that are user given, except for f_peak (the peak location) which I find using a peak finding algorithm. All of the fit parameters, except for the offset, are expected to be positive and thus I have set bounds_I accordingly.
def Imaginary(freq, alpha, res, Ms, off):
    numerator = (2*alpha*freq*res**2)
    denominator = (4*(alpha*res*freq)**2) + (res**2 - freq**2)**2
    Im = Ms*(numerator/denominator) + off
    return Im
pI = np.array([alpha_init, f_peak, Ms_init, 0])
bounds_I = ([0, 0, 0, -np.inf], [np.inf, np.inf, np.inf, np.inf])
poptI, pcovI = curve_fit(Imaginary, x_interpolate, y_interpolate, pI, bounds=bounds_I)
In some situations I want to keep the parameter f_peak fixed during the fitting process. I tried an easy solution by changing bounds_I to:
bounds_I = ([0, f_peak-0.001, 0, -np.inf], [np.inf, f_peak+0.001, np.inf, np.inf])
This is, for many reasons, not an optimal way of doing it, so I was wondering if there is a more Pythonic approach? Thank you for your help.
If a parameter is fixed, it is not really a parameter, so it should be removed from the list of parameters. Define a model that has that parameter replaced by a fixed value, and fit that. Example below, simplified for brevity and to be self-contained:
import numpy as np
from scipy.optimize import curve_fit

x = np.arange(10)
y = np.sqrt(x)

def parabola(x, a, b, c):
    return a*x**2 + b*x + c

fit1 = curve_fit(parabola, x, y)  # popt: [-0.02989396, 0.56204598, 0.25337086]

b_fixed = 0.5
fit2 = curve_fit(lambda x, a, c: parabola(x, a, b_fixed, c), x, y)
The second call to fit returns [-0.02350478, 0.35048631], which are the optimal values of a and c. The value of b was fixed at 0.5.
Of course, the parameter should be removed from the initial vector pI and the bounds as well.
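Applied to the Imaginary function from the question, fixing res at f_peak could look something like this (a sketch reusing the question's x_interpolate, y_interpolate, alpha_init, Ms_init and f_peak):

pI_fixed = np.array([alpha_init, Ms_init, 0])               # f_peak removed from the parameters
bounds_fixed = ([0, 0, -np.inf], [np.inf, np.inf, np.inf])  # and from the bounds
poptI, pcovI = curve_fit(
    lambda freq, alpha, Ms, off: Imaginary(freq, alpha, f_peak, Ms, off),
    x_interpolate, y_interpolate, pI_fixed, bounds=bounds_fixed)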
You might find lmfit (https://lmfit.github.io/lmfit-py/) helpful. This library adds a higher-level interface to the scipy optimization routines, aiming for a more Pythonic approach to optimization and curve fitting. For example, it uses Parameter objects to allow setting bounds and fixing parameters without having to modify the objective or model function. For curve fitting, it defines a high-level Model class that can be used.
For your example, you could use your Imaginary function as you've written it with
from lmfit import Model
lmodel = Model(Imaginary)
and then create Parameters (lmfit will name the Parameter objects according to your function signature), providing initial values:
params = lmodel.make_params(alpha=alpha_init, res=f_peak, Ms=Ms_init, off=0)
By default all Parameters are unbounded and will vary in the fit, but you can modify these attributes (without rewriting the model function):
params['alpha'].min = 0
params['res'].min = 0
params['Ms'].min = 0
You can set one (or more) of the parameters to not vary in the fit with:
params['res'].vary = False
To be clear: this does not require altering the model function, making it much easier to change which parameters are fixed, what bounds are imposed, and so forth.
You would then perform the fit with the model and these parameters:
result = lmodel.fit(y_interpolate, params, freq=x_interpolate)
You can get a report of fit statistics, best-fit values, and uncertainties for parameters with
print(result.fit_report())
The best fit Parameters will be held in result.params.
FWIW, lmfit also has builtin Models for many common forms, including Lorentzian and a Constant offset. So, you could construct this model as
from lmfit.models import LorentzianModel, ConstantModel
mymodel = LorentzianModel(prefix='l_') + ConstantModel()
params = mymodel.make_params()
which will have Parameters named l_amplitude, l_center, l_sigma, and c (where c is the constant) and the model will use the name x for the independent variable (your freq). This approach can become very convenient when you may want to change the functional form of the peaks or background, or when fitting multiple peaks to a spectrum.
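Putting those pieces together for this problem might look like the following sketch (the initial values are illustrative; f_peak, x_interpolate and y_interpolate come from the question):

params = mymodel.make_params(l_amplitude=1, l_center=f_peak, l_sigma=0.1, c=0)
params['l_center'].vary = False   # keep the peak position fixed, as in the question
result = mymodel.fit(y_interpolate, params, x=x_interpolate)
print(result.fit_report())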
I was able to solve this for an arbitrary number of parameters and an arbitrary placement of the fixed parameters:
from numpy import asarray, append
from scipy.optimize import curve_fit

def d_fit(x, y, param, boundMi, boundMx, listparam):
    Sparam, SboundMi, SboundMx = asarray([]), asarray([]), asarray([])
    Nparam, NboundMi, NboundMx = asarray([]), asarray([]), asarray([])

    # split the parameters and bounds into fitted (S...) and fixed (N...) groups
    for i in range(len(param)):
        if listparam[i] == 1:
            Sparam = append(Sparam, asarray(param[i]))
            SboundMi = append(SboundMi, asarray(boundMi[i]))
            SboundMx = append(SboundMx, asarray(boundMx[i]))
        else:
            Nparam = append(Nparam, asarray(param[i]))

    # wrapper that rebuilds the full parameter vector before calling the root function
    def funF(x, Sparam):
        j = 0
        for i in range(len(param)):
            if listparam[i] == 1:
                param[i] = Sparam[i - j]
            else:
                param[i] = Nparam[j]
                j = j + 1
        return fun(x, param)

    return curve_fit(lambda x, *Sparam: funF(x, Sparam), x, y,
                     p0=Sparam, bounds=(SboundMi, SboundMx))
In this case:
param = [a,b,c,...] # parameters array (any size)
boundMi = [min_a, min_b, min_c,...] # minimum allowable value of each parameter
boundMx = [max_a, max_b, max_c,...] # maximum allowable value of each parameter
listparam = [0,1,1,0,...] # 1 = fit and 0 = fix the corresponding parameter in the fit routine
and the root function is defined as
def fun(x, param):
    a, b, c, d, ... = param
    return a*b/c...  # any function of the params a, b, c, d...
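For instance, a concrete (made-up) root function with three parameters could be:

def fun(x, param):
    a, b, c = param
    return a*x**2 + b*x + c   # example only; any function of the parameters works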
This way, you can change the root function and the number of parameters without changing the fit routine.
And, at any time, you can fix or free any parameter by changing "listparam".
Use like this:
popt, pcov = d_fit(x, y, param, boundMi, boundMx, listparam)
"popt" and "pcov" are 1D arrays of the size of the number of "1" in "listparam" bringing the results of the fitted parameters (best value and err matrix)
"param" will ramain an 1D array of the same size of the original (input) "param", HOWEVER IT WILL BE UPDATED AUTOMATICALLY TO THE FITTED VALUES (same as "popt") for the fitted values, keeping the fixed values according to "listparam"
Hope can be usefull!
Obs1: x = 1D-array independent values and y = 1D-array dependent values
Obs2: This is my first post. Please let me know if I can improove it!
I am trying to solve a simple example with the dopri5 integrator in scipy.integrate.ode. As the documentation states
This is an explicit runge-kutta method of order (4)5 due to Dormand & Prince (with stepsize control and dense output).
this should work. So here is my example:
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt

def MassSpring_with_force(t, state):
    """ Simple 1DOF dynamics model: m ddx(t) + k x(t) = f(t)"""
    # unpack the state vector
    x = state[0]
    xd = state[1]
    # these are our constants
    k = 2.5  # Newtons per metre
    m = 1.5  # Kilograms
    # force
    f = force(t)
    # compute acceleration xdd
    xdd = ((-k*x + f) / m)
    # return the two state derivatives
    return [xd, xdd]

def force(t):
    """ Excitation force """
    f0 = 1                     # force amplitude [N]
    freq = 20                  # frequency [Hz]
    omega = 2 * np.pi * freq   # angular frequency [rad/s]
    return f0 * np.sin(omega*t)

# Time range
t_start = 0
t_final = 1

# Main program
state_ode_f = ode(MassSpring_with_force)
state_ode_f.set_integrator('dopri5', rtol=1e-6, nsteps=500,
                           first_step=1e-6, max_step=1e-3)

state2 = [0.0, 0.0]  # initial conditions
state_ode_f.set_initial_value(state2, 0)

sol = np.array([[t_start, state2[0], state2[1]]], dtype=float)

print("Time\t\t Timestep\t dx\t\t ddx\t\t state_ode_f.successful()")
while state_ode_f.t < (t_final):
    state_ode_f.integrate(t_final, step=True)
    sol = np.append(sol, [[state_ode_f.t, state_ode_f.y[0], state_ode_f.y[1]]], axis=0)
    print("{0:0.8f}\t {1:0.4e} \t{2:10.3e}\t {3:0.3e}\t {4}".format(
        state_ode_f.t, sol[-1, 0] - sol[-2, 0], state_ode_f.y[0], state_ode_f.y[1], state_ode_f.successful()))
The result I get is:
Time Timestep dx ddx state_ode_f.successful()
0.49763822 4.9764e-01 2.475e-03 -8.258e-04 False
0.99863822 5.0100e-01 3.955e-03 -3.754e-03 False
1.00000000 1.3618e-03 3.950e-03 -3.840e-03 False
with a warning:
c:\python34\lib\site-packages\scipy\integrate\_ode.py:1018: UserWarning: dopri5: larger nmax is needed
  self.messages.get(idid, 'Unexpected idid=%s' % idid))
The result is incorrect. If I run the same code with the vode integrator, I get the expected result.
Edit
A similar issue is described here:
Using adaptive step sizes with scipy.integrate.ode
The suggested solution recommends setting nsteps=1, which solves the ODE correctly and with step-size control. However, the integrator then reports state_ode_f.successful() as False.
No, there is nothing wrong. You are telling the integrator to perform an integration step to t_final and it performs that step. Internal steps of the integrator are not reported.
The sensible thing to do is to give the desired sampling points as input to the algorithm: set, for example, dt=0.1 and use
state_ode_f.integrate( min(state_ode_f.t+dt, t_final) )
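In the context of the question's loop, that amounts to something like the following sketch (dt is an arbitrary output spacing; dopri5 still controls its internal step size):

dt = 0.1
while state_ode_f.successful() and state_ode_f.t < t_final:
    state_ode_f.integrate(min(state_ode_f.t + dt, t_final))
    sol = np.append(sol, [[state_ode_f.t, state_ode_f.y[0], state_ode_f.y[1]]], axis=0)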
There is no single-step mode in dopri5; only vode has one defined (see the source code at https://github.com/scipy/scipy/blob/v0.14.0/scipy/integrate/_ode.py#L376), which could account for the observed differences.
As you found in Using adaptive step sizes with scipy.integrate.ode, one can force single-step behavior by setting the iteration bound nsteps=1. This will produce a warning every time, so one has to suppress these specific warnings to see a sensible result.
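A rough sketch of that workaround, assuming the setup from the question (the filter silences the UserWarning that dopri5 emits on every forced step):

import warnings

state_ode_f.set_integrator('dopri5', rtol=1e-6, nsteps=1,
                           first_step=1e-6, max_step=1e-3)
state_ode_f.set_initial_value(state2, 0)

with warnings.catch_warnings():
    warnings.simplefilter("ignore", UserWarning)
    while state_ode_f.t < t_final:
        state_ode_f.integrate(t_final, step=True)
        sol = np.append(sol, [[state_ode_f.t, state_ode_f.y[0], state_ode_f.y[1]]], axis=0)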
You should not use a parameter (which is constant over the integration interval) for a time-dependent force. Evaluate f = force(t) inside MassSpring_with_force; alternatively, you could pass the function handle of force as a parameter.
I'm a bit new to Python and PyMC, but making rapid progress. I'm just confused about setting deterministic values in a 2D matrix. I have a model below that I cannot get to parse correctly. The problem relates to setting the value theta in the model.
import numpy as np
import pymc

# define known variables
N = 2
T = 10
tau = 1
Now define the model, which I cannot get to parse correctly. It's the allocation of theta that I'm having trouble with. The aim is to get samples of D and x. theta is just an intermediate variable, but I need to keep it as it's used in more complex variations of the model.
def NAFCgenerator():

    D = np.empty(T, dtype=object)
    theta = np.empty([N, T], dtype=object)
    x = np.empty([N, T], dtype=object)

    # true location of signal
    for t in range(T):
        D[t] = pymc.DiscreteUniform('D_%i' % t, lower=0, upper=N-1)

    for t in range(T):
        for n in range(N):

            @pymc.deterministic(plot=False)
            def temp_theta(dt=D[t], n=n):
                return dt == n
            theta[n, t] = temp_theta

            x[n, t] = pymc.Normal('x_%i,%i' % (n, t),
                                  mu=theta[n, t], tau=tau)

    return locals()
** EDIT **
Explicit indexing is useful for me as I'm learning both PyMC and Python. But it seems that extracting MCMC samples is a bit clunky, e.g.
D0values = pymc_generator.trace('D_0')[:]
But I am probably missing something. I did manage to get a vectorised version working:
# Approach 1b - actually quite promising
def NAFCgenerator():

    # NOTE TO SELF. It's important to declare these as objects
    D = np.empty(T, dtype=object)
    theta = np.empty([N, T], dtype=object)
    x = np.empty([N, T], dtype=object)

    # true location of signal
    D = pymc.Categorical('D', spatial_prior, size=T)

    # displayed stimuli
    @pymc.deterministic(plot=False)
    def theta(D=D):
        theta = np.zeros([N, T])
        theta[0, D == 0] = 1
        theta[1, D == 1] = 1
        return theta

    #for n in range(N):
    x = pymc.Normal('x', mu=theta, tau=tau)

    return locals()
This makes it easier to get at the MCMC samples, for example:
Dvalues = pymc_generator.trace('D')[:]
In PyMC2, when creating deterministic nodes with decorators, the default is to take the node name from the function name. Because the same function name (temp_theta) is reused for every (n, t) pair in the loop, the node names collide. The solution is simple: specify the node name as a parameter for the decorator.
@pymc.deterministic(name='temp_theta_%d_%d' % (t, n), plot=False)
def temp_theta(dt=D[t], n=n):
    return dt == n
theta[n, t] = temp_theta
Here is a notebook that puts this in context.