I have two sets of frequency data, one from experiment and one from a theoretical formula, and I want to fit them using scipy's minimize function.
Here's my code snippet,
where g is the coupling I want to find out,
and ind is the inductance used for the x-axis.
import numpy as np
from scipy.optimize import minimize

def eigenfreq1_func(ind, w_q, w_r, g):
    return (w_q+w_r) + np.sqrt((w_q+w_r)**2.0 - 4*(w_q+w_r-g**2.0))/2

def eigenfreq2_func(ind, w_q, w_r, g):
    return (w_q+w_r) - np.sqrt((w_q+w_r)**2.0 - 4*(w_q+w_r-g**2))/2.0

def err_func(y1, y1_fit, y2, y2_fit):
    return np.sqrt((y1-y1_fit)**2 + (y2-y2_fit)**2)

g_init = 80e6
res1 = eigenfreq1_func(ind, qubit_freq, readout_freq, g_init)
print(res1)
res2 = eigenfreq2_func(ind, qubit_freq, readout_freq, g_init)
print(res2)
fit = minimize(err_func, args=[qubit_freq, res1, readout_freq, res2])
But it's showing the following error :
"TypeError: minimize() takes at least 2 arguments (2 given)"
First, the indentation in your example is messed up; I hope you don't try to run it as posted.
Second, here is a baby example that minimizes the chi2 with scipy.optimize.minimize (note that you can minimize whatever objective you want: a likelihood, |chi| to some power, anything):
import numpy as np
import scipy.optimize as opt

def functionyouwanttofit(x, y, z, t, u):
    # baby test function; put whatever model you want here
    return np.array([x+y+z+t+u, x+y+z+t-u, x+y+z-t-u, x+y-z-t-u])

def calc_chi2(parameters):
    x, y, z, t, u = parameters
    data = np.array([100, 250, 300, 500])
    chi2 = sum((data - functionyouwanttofit(x, y, z, t, u))**2)
    return chi2
# baby example for init, min & max values
x_init = 0
x_min = -1
x_max = 10
y_init = 1
y_min = -2
y_max = 9
z_init = 2
z_min = 0
z_max = 1000
t_init = 10
t_min = 1
t_max = 100
u_init = 10
u_min = 1
u_max = 100
parameters = [x_init,y_init,z_init,t_init,u_init]
bounds = [[x_min,x_max],[y_min,y_max],[z_min,z_max],[t_min,t_max],[u_min,u_max]]
result = opt.minimize(calc_chi2,parameters,bounds=bounds)
In your example you don't give minimize an initial guess (the x0 argument), which, together with the broken indentation, is why the call fails: minimize needs at least the objective function and a starting point.
Third, note that the optimizers provided by scipy are not always suited to your needs; you may prefer a dedicated fitting package such as lmfit.
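Adapted to the question, the same pattern means wrapping the residuals into a single scalar objective of the fit parameter g and giving minimize an initial guess. The following is only a sketch of that idea: it reuses eigenfreq1_func and eigenfreq2_func from the question, and the names meas_freq1 and meas_freq2 stand for the two measured frequency branches (placeholders, not variables from the original post):

import numpy as np
from scipy.optimize import minimize

def chi2(params, ind, w_q, w_r, meas1, meas2):
    g = params[0]
    model1 = eigenfreq1_func(ind, w_q, w_r, g)
    model2 = eigenfreq2_func(ind, w_q, w_r, g)
    # sum of squared residuals over all inductance points
    return np.sum((meas1 - model1)**2 + (meas2 - model2)**2)

g_init = 80e6
# meas_freq1 / meas_freq2: placeholders for the measured branches of the spectrum
result = minimize(chi2, x0=[g_init],
                  args=(ind, qubit_freq, readout_freq, meas_freq1, meas_freq2))
print(result.x[0])  # best-fit coupling g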
I am solving a first order initial value problem of the form:
dy/dt = f(t,y(t)), y(0)=y0
I would like to obtain y(n+1) from a given numerical scheme, for example:
using the explicit Euler scheme, we have
y(i) = y(i-1) + f(t(i-1), y(i-1)) * dt
Example code:
# Test code to evaluate different time integrators for the following equation:
# y' = (1/2) y + 2sin(3t) ; y(0) = -24/37
def dy_dt(y,t):
func = (1/2)*y + 2*np.sin(3*t)
return func
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
tmin = 0
tmax = 50
delt= 1e-2
t = np.arange(tmin,tmax,delt)
total_steps = len(t)
y_explicit=np.zeros(total_steps)
#y_ODEint=np.zeros(total_steps)
y0 = -24/37
y_explicit[0]=y0
#y_ODEint[0]=y0
# exact solution
y_exact = -(24/37)*np.cos(3*t)- (4/37)*np.sin(3*t) + (y0+24/37)*np.exp(0.5*t)
# Solution using ODEint Python
y_ODEint = odeint(dy_dt,y0,t)
for i in range(1,total_steps):
# Explicit scheme
y_explicit[i] = y_explicit[i-1] + (dy_dt(y_explicit[i-1],t[i-1]))*delt
# Update using ODEint
# y_ODEint[i] = odeint(dy_dt,y_ODEint[i-1],[0,delt])[-1]
plt.figure()
plt.plot(t,y_exact)
plt.plot(t,y_explicit)
# plt.plot(t,y_ODEint)
The current issue I am having is that a function like odeint returns the entire solution y(t) at once, as in the line "y_ODEint = odeint(dy_dt,y0,t)", rather than one y(i) per step.
See in the code how I have written the explicit scheme, which gives y(i) at every time step. I want to do the same with odeint; I tried a few things (the commented lines) but they didn't work.
I want to obtain y(i) step by step rather than all of y at once using odeint. Is that possible?
Your system is time variant so you cannot translate the time step from (t[i-1], t[i]) to (0, delt).
The step-by-step integration is unstable for your differential equation, though.
Here is what I get
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def dy_dt(y, t):
    func = (1/2)*y + 2*np.sin(3*t)
    return func

tmin = 0
tmax = 40
delt = 1e-2
t = np.arange(tmin, tmax, delt)
total_steps = len(t)
y_explicit = np.zeros(total_steps)
#y_ODEint = np.zeros(total_steps)
y0 = -24/37
y_explicit[0] = y0

# exact solution
y_exact = -(24/37)*np.cos(3*t) - (4/37)*np.sin(3*t) + (y0+24/37)*np.exp(0.5*t)

# Solution using odeint in a single call
y_ODEint = odeint(dy_dt, y0, t)

# To be filled step by step
y_ODEint_2 = np.zeros_like(y_ODEint)
y_ODEint_2[0] = y0
for i in range(1, len(y_ODEint_2)):
    # update your code to run with the correct time interval
    y_ODEint_2[i] = odeint(dy_dt, y_ODEint_2[i-1], [tmin+(i-1)*delt, tmin+i*delt])[-1]

plt.figure()
plt.plot(t, y_ODEint, label='single run')
plt.plot(t, y_ODEint_2, label='step-by-step')
plt.plot(t, y_exact, label='exact')
plt.legend()
plt.ylim([-20, 20])
plt.grid()
It is important to notice that both methods are unstable here, but the step-by-step integration explodes slightly earlier than the single odeint call.
With, for example, dy_dt(y,t) returning -(1/2)*y + 2*np.sin(3*t), the integration becomes more stable; for instance, there is no noticeable error after integrating from zero to 200.
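To back up that last remark, here is a quick check one can run (my own sketch, not part of the original answer): swap the sign of the linear term and compare the single odeint call against the step-by-step loop over the longer interval.

import numpy as np
from scipy.integrate import odeint

def dy_dt_stable(y, t):
    # same forcing, but a decaying linear term: y' = -(1/2) y + 2 sin(3t)
    return -(1/2)*y + 2*np.sin(3*t)

tmin, tmax, delt = 0, 200, 1e-2
t = np.arange(tmin, tmax, delt)
y0 = -24/37

# single odeint call over the whole time array
y_single = odeint(dy_dt_stable, y0, t)[:, 0]

# step-by-step, restarting odeint on each small interval
y_steps = np.zeros_like(y_single)
y_steps[0] = y0
for i in range(1, len(t)):
    y_steps[i] = odeint(dy_dt_stable, y_steps[i-1], [t[i-1], t[i]])[-1, 0]

# for the stable equation the two stay close over the whole range
print(np.max(np.abs(y_single - y_steps)))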
In the following code I want to optimize (maximize the power output of) a wind farm using scipy.optimize. The decision variable in each iteration is c, where c=0 means a wind turbine is off and c=1 means it is running. So in each iteration I want to change the c values and get the power output. The problem is that the initial guess x0 (which is c) stays the same at every iteration and never updates, so the optimization does nothing useful: c does not change and neither does the power output. How can I solve this problem? I have added some prints to show that neither c nor the power output (the function being optimized) changes; whatever c values I put as the initial guess x0 give a power output that never changes afterwards.
import time
from py_wake.examples.data.hornsrev1 import V80
from py_wake.examples.data.hornsrev1 import Hornsrev1Site  # We work with the Horns Rev 1 site, which comes already set up with PyWake.
from py_wake import BastankhahGaussian
from py_wake.turbulence_models import GCLTurbulence
from py_wake.deflection_models.jimenez import JimenezWakeDeflection
from scipy.optimize import minimize
from py_wake.wind_turbines.power_ct_functions import PowerCtFunctionList, PowerCtTabular
import numpy as np

def newSite(x, y):
    xNew = np.array([x[0]+560*i for i in range(4)])
    yNew = np.array([y[0]+560*i for i in range(4)])
    x_newsite = np.array([xNew[0], xNew[0], xNew[0], xNew[1]])
    y_newsite = np.array([yNew[0], yNew[1], yNew[2], yNew[0]])
    return (x_newsite, y_newsite)

def wt_simulation(c):
    c = c.reshape(4, 360, 23)
    site = Hornsrev1Site()
    x, y = site.initial_position.T
    x_newsite, y_newsite = newSite(x, y)
    windTurbines = V80()
    for item in range(4):
        for j in range(10, 370, 10):
            for i in range(j-10, j):
                c[item][i] = c[item][j-5]
    windTurbines.powerCtFunction = PowerCtFunctionList(
        key='operating',
        powerCtFunction_lst=[PowerCtTabular(ws=[0, 100], power=[0, 0], power_unit='w', ct=[0, 0]),  # 0=No power and ct
                             windTurbines.powerCtFunction],  # 1=Normal operation
        default_value=1)
    operating = np.ones((4, 360, 23))  # shape=(#wt, wd, ws)
    operating[c <= 0.5] = 0
    wf_model = BastankhahGaussian(site, windTurbines, deflectionModel=JimenezWakeDeflection(), turbulenceModel=GCLTurbulence())
    # run wind farm simulation
    sim_res = wf_model(
        x_newsite, y_newsite,  # wind turbine positions
        h=None,   # wind turbine heights (defaults to the heights defined in windTurbines)
        wd=None,  # wind direction (defaults to site.default_wd (0,1,...,360 if not overridden))
        ws=None,  # wind speed (defaults to site.default_ws (3,4,...,25 m/s if not overridden))
        operating=operating
    )
    print(-float(np.sum(sim_res.Power)))
    return -float(np.sum(sim_res.Power))  # negative because scipy minimizes

t0 = time.perf_counter()

def solve():
    wt = 4  # for V80
    wd = 360
    ws = 23
    x0 = np.ones((wt, wd, ws)).reshape(-1)  # initial value for c
    b = (0, 1)
    bounds = np.full((wt, wd, ws, 2), b).reshape(-1, 2)
    res = minimize(wt_simulation, x0=x0, bounds=bounds)
    return res

res = solve()
print(f'success status: {res.success}')
print(f'aep: {-res.fun}')  # negate to get the true maximum aep
print(f'c values: {res.x}\n')
print(f'elapse: {round(time.perf_counter() - t0)}s')
sim_res = wt_simulation(res.x)
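The symptom described above (res.x identical to x0 and an unchanging power output) can be reproduced without py_wake. Since the power only enters through the thresholded array (operating[c <= 0.5] = 0), the objective is piecewise constant in c, so the finite-difference gradient estimated by the default bounded method (L-BFGS-B) is zero and the solver stays at the initial guess. The toy example below is my own illustration of that effect, not code from the original post:

import numpy as np
from scipy.optimize import minimize

def thresholded_objective(c):
    # the objective only depends on whether each entry is above 0.5 (on/off),
    # so tiny finite-difference perturbations of c do not change it at all
    operating = (c > 0.5).astype(float)
    return -np.sum(operating)

x0 = np.ones(10)            # start with everything "on"
bounds = [(0, 1)] * 10
res = minimize(thresholded_objective, x0=x0, bounds=bounds)

print(res.x)    # identical to x0: the estimated gradient is zero
print(res.nit)  # the solver stops almost immediately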
I am trying to run code that outputs a Gaussian distribution by integrating the 1-D Gaussian distribution equation with Monte Carlo integration, using the mcint module. I defined the Gaussian equation and the sampler function that mcint uses. I am not sure what the 'measure' argument of the mcint integrate function does or what it should be set to. Does anyone know what measure is supposed to be, and how do I know what to set it to?
from matplotlib import pyplot as mp
import numpy as np
import mcint
import random

# f equation
def gaussian(x, x0, sig0, time, var):
    [velocity, diffussion_coeffient] = var
    mu = x0 + (velocity*time)
    sig = sig0 + np.sqrt(2.0*diffussion_coeffient*time)
    return (1/(np.sqrt(2.0*np.pi*(sig**2.0))))*(np.exp((-(x-mu)**2.0)/(2.0*(sig**2.0))))

# random variables that are generated during the integration
def sampler(varinterval):
    while True:
        velocity = random.uniform(varinterval[0][0], varinterval[0][1])
        diffussion_coeffient = random.uniform(varinterval[1][0], varinterval[1][1])
        yield (velocity, diffussion_coeffient)

if __name__ == "__main__":
    x0 = 0
    # ranges for integration
    velocitymin = -3.0
    velocitymax = 3.0
    diffussion_coeffientmin = 0.01
    diffussion_coeffientmax = 0.89
    varinterval = [[velocitymin, velocitymax], [diffussion_coeffientmin, diffussion_coeffientmax]]
    time = 1
    sig0 = 0.05
    x = np.linspace(-20, 20, 120)
    res = []
    for i in np.linspace(-10, 10, 120):
        result, error = mcint.integrate(lambda v: gaussian(i, x0, sig0, time, v), sampler(varinterval), measure=1, n=1000)
        res.append(result)
    mp.plot(x, res)
    mp.show()
Is this the module you are talking about? If that's the case, the whole source is only 17 lines long (at the time of writing). The relevant line is the last one, which reads:
return (measure*sample_mean, measure*math.sqrt(sample_var/n))
As you can see, the measure argument (whose default value is unity) is used to scale the values returned by the integrate method.
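For plain Monte Carlo integration with a uniform sampler, measure is normally set to the size of the region you draw samples from (here the area of the velocity/diffusion-coefficient box), so that measure times the sample mean of the integrand approximates the integral. A sketch of how that could look inside the question's loop, reusing gaussian, sampler and the interval variables defined there:

# area of the 2-D sampling box: (velocity range) x (diffusion-coefficient range)
measure = (velocitymax - velocitymin) * (diffussion_coeffientmax - diffussion_coeffientmin)

# inside the for loop over i, as in the question
result, error = mcint.integrate(lambda v: gaussian(i, x0, sig0, time, v),
                                sampler(varinterval), measure=measure, n=1000)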
My program:
# -*- coding: utf-8 -*-
import numpy as np
import itertools
from scipy.optimize import minimize

width = 0.3

def time_builder(f, t0=0, tf=300):
    return list(np.round(np.arange(t0, tf, 1/f*1000), 3))

def duo_stim_overlap(t1, t2):
    """
    Function taking 2 timelines built by the time_builder function as input
    and returning the ids of overlapping pulses between the 2.
    len(t1) < len(t2)
    """
    pulse_id_t1 = [x for x in range(len(t1)) for y in range(len(t2)) if abs(t1[x] - t2[y]) < width]
    pulse_id_t2 = [x for x in range(len(t2)) for y in range(len(t1)) if abs(t2[x] - t1[y]) < width]
    return pulse_id_t1, pulse_id_t2

def optimal_delay(s):
    frequences = [20, 60, 80, 250, 500]
    t0 = 0
    tf = 150
    delay = 0  # delay between signals
    timelines = list()
    overlap = dict()
    for i in range(len(frequences)):
        timelines.append(time_builder(frequences[i], t0+delay, tf))
        overlap[i] = list()
        delay += s
    for subset in itertools.combinations(timelines, 2):
        p1_stim, p2_stim = duo_stim_overlap(subset[0], subset[1])
        overlap[timelines.index(subset[0])] += p1_stim
        overlap[timelines.index(subset[1])] += p2_stim
    optim_param = 0
    for key, items in overlap.items():
        optim_param += (len(list(set(items)))/len(timelines[key]))
    return optim_param

res = minimize(optimal_delay, 1.5, method='Nelder-Mead', tol=0.01, bounds=[(0, 5)], options={'disp': True})
So my goal is to minimize the value optim_param computed by the function optimal_delay.
First of all, gradient methods don't do anything. They stop at the first iteration.
Second, I would need to set bounds on the value of s in optimal_delay (between 0 and 5, for instance). I know this is not possible with the Nelder-Mead simplex method, but the other methods didn't work at all.
Third, I don't really know how to set the tol parameter for termination. Both tol = 0.01 and tol = 0.0000001 didn't give me good results (and gave very similar ones).
And finally, if I start at 1.8 for instance, minimize gives me a value far from being a minimum...
What am I doing wrong?
If you plot your optimal_delay function you'll see that it is far from convex. The search will just find a local minimum close to your starting point.
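One way to see this, and to still get a usable answer despite the non-convexity, is to evaluate optimal_delay on a grid over the bounded interval and pick the best point. A minimal sketch of that idea, reusing optimal_delay from the question (the grid resolution is arbitrary):

import numpy as np
import matplotlib.pyplot as plt

s_grid = np.linspace(0.0, 5.0, 501)
values = [optimal_delay(s) for s in s_grid]

plt.plot(s_grid, values)  # visualize the many local minima
plt.xlabel('s (delay)')
plt.ylabel('optim_param')
plt.show()

best_s = s_grid[int(np.argmin(values))]
print(best_s, min(values))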
I am trying to introduce myself to MCMC sampling with emcee. I simply want to draw samples from a Maxwell-Boltzmann distribution using the example code on GitHub, https://github.com/dfm/emcee/blob/master/examples/quickstart.py.
The example code is really excellent, but when I change the distribution from a Gaussian to a Maxwellian, I receive the error TypeError: lnprob() takes exactly 2 arguments (3 given).
However, as far as I can tell it is never called anywhere without the appropriate parameters. I need some guidance on how to define a Maxwellian curve and fit it into this example code.
Here is what I have;
from __future__ import print_function
import numpy as np
import emcee

try:
    xrange
except NameError:
    xrange = range

def lnprob(x, a, icov):
    pi = np.pi
    return np.sqrt(2/pi)*x**2*np.exp(-x**2/(2.*a**2))/a**3

ndim = 2
means = np.random.rand(ndim)
cov = 0.5 - np.random.rand(ndim**2).reshape((ndim, ndim))
cov = np.triu(cov)
cov += cov.T - np.diag(cov.diagonal())
cov = np.dot(cov, cov)
icov = np.linalg.inv(cov)

nwalkers = 50
p0 = [np.random.rand(ndim) for i in xrange(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=[means, icov])
pos, prob, state = sampler.run_mcmc(p0, 5000)
sampler.reset()
sampler.run_mcmc(pos, 100000, rstate0=state)
Thanks
I think there are a couple of problems that I see. The main one is that emcee wants you to give it the natural logarithm of the probability distribution function that you want to sample. So, rather than having:
def lnprob(x, a, icov):
    pi = np.pi
    return np.sqrt(2/pi)*x**2*np.exp(-x**2/(2.*a**2))/a**3
you would instead want, e.g.
def lnprob(x, a):
    pi = np.pi
    if x < 0:
        return -np.inf
    else:
        return 0.5*np.log(2./pi) + 2.*np.log(x) - (x**2/(2.*a**2)) - 3.*np.log(a)
where the if...else... statement is to explicitly say that negative values of x have zero probability (or -infinity in log-space).
You also shouldn't have to calculate icov and pass it to lnprob as that's only needed for the Gaussian case in the example you link to.
When you call:
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=[means, icov])
the args value should just be any additional arguments that your lnprob function requires; in your case this would be the value of a that you want to set your Maxwell-Boltzmann distribution up with. This should be a single value rather than the two randomly initialised values you set when creating means.
Overall, the following should work for you:
from __future__ import print_function
import emcee
import numpy as np
from numpy import pi as pi

# define the natural log of the Maxwell-Boltzmann distribution
def lnprob(x, a):
    if x < 0:
        return -np.inf
    else:
        return 0.5*np.log(2./pi) + 2.*np.log(x) - (x**2/(2.*a**2)) - 3.*np.log(a)

# choose a value of 'a' for the distributions
a = 5.  # I'm choosing 5!

# choose the number of walkers
nwalkers = 50

# set some initial points at which to calculate the lnprob
p0 = [np.random.rand(1) for i in range(nwalkers)]

# initialise the sampler
sampler = emcee.EnsembleSampler(nwalkers, 1, lnprob, args=[a])

# Run 5000 steps as a burn-in.
pos, prob, state = sampler.run_mcmc(p0, 5000)

# Reset the chain to remove the burn-in samples.
sampler.reset()

# Starting from the final position in the burn-in chain, sample for 100000 steps.
sampler.run_mcmc(pos, 100000, rstate0=state)

# let's check the samples look right
mbmean = 2.*a*np.sqrt(2./pi)  # mean of the Maxwell-Boltzmann distribution
print("Sample mean = {}, analytical mean = {}".format(np.mean(sampler.flatchain[:,0]), mbmean))
mbstd = np.sqrt(a**2*(3*np.pi-8.)/np.pi)  # std. dev. of the M-B distribution
print("Sample standard deviation = {}, analytical = {}".format(np.std(sampler.flatchain[:,0]), mbstd))