Scipy optimize minimize initial guess using SLSQP - python

In part of my code, I want to find the 5th-degree polynomial that best fits my data and is nondecreasing at certain points.
The sample code is:
import numpy as np
import scipy.optimize as optimize

def make_const(points):
    constr = []
    for point in points:
        c = {'type': 'ineq', 'fun': der, 'args': (point,)}
        constr.append(c)
    return constr

def der(args_pol, bod):
    a, b, c, d, e, f = args_pol
    return (5*a*bod**4 + 4*b*bod**3 + 3*c*bod**2 + 2*d*bod + e)

def squares(args_pol, x, y):
    a, b, c, d, e, f = args_pol
    return ((y - (a*x**5 + b*x**4 + c*x**3 + d*x**2 + e*x + f))**2).sum()

def ecdf(arr):
    arr = np.array(arr)
    F = [len(arr[arr <= t]) / len(arr) for t in arr]
    return np.array(F)

pH = np.array([8, 8, 8, 7, 7, 7, 7, 7, 7, 7, 7, 6, 3, 2, 2, 2, 1])
pH = np.sort(pH)
e = ecdf(pH)
ppoints = [1., 2.75, 4.5, 6.25, 8.]

constraints1 = make_const(ppoints)

p1 = optimize.minimize(squares, [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
                       method='SLSQP', args=(pH, e), constraints=constraints1)
p2 = optimize.minimize(squares, [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0],
                       method='SLSQP', args=(pH, e), constraints=constraints1)
Here p1 fails to optimize while p2 terminates successfully. In addition, with no constraints, i.e. ppoints = [], p1 terminates successfully while p2 fails. The message when the optimization fails is always:
'Inequality constraints incompatible'
The problem is obviously the initial guess in optimize.minimize. I thought that the parameters of that guess must meet my constraints, but here the initial guess [1.0, 1.0, 1.0, 1.0, 1.0, 1.0] does meet my constraints. Can anybody please explain where the problem is?

Yes, your initial point satisfies the constraints. But SLSQP works with linearized constraints, and is looking for a search direction that is compatible with all the linearizations (described here). Those may end up either incompatible, or poorly compatible in the sense that there is only a tiny range of directions that qualify, and the search fails to find them.
The starting point [1, 1, 1, 1, 1, 1] is not a good one. Consider that at x=8 the contribution of the leading coefficient 1 to the polynomial is 8**5, and since it gets squared in the objective function, you get about 8**10. This dwarfs the contribution of lower-order coefficients, which are nonetheless important for satisfying the constraints at points close to 0. So the algorithm is presented with a badly scaled problem when the initial point is all-ones.
Using np.zeros((6, )) as a starting point is a better idea; the search succeeds from there. Scaling the initial point as [7**(d-5) for d in range(6)] also works but just barely (replacing 7 by 6 or 8 yields another kind of error, "Positive directional derivative for linesearch").
So the summary is: the optimization problem has poor scaling, making the search difficult; and the error message is not very explicit about what actually went wrong.
Besides changing the initial point, you can try providing the Jacobians of the objective function and the constraints (both matter, since the method works with the Lagrangian).
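For instance, a rough sketch of what supplying those derivatives could look like, reusing squares, der, pH, e and ppoints from the question (a sketch only, not tested against your data; the expressions simply differentiate the polynomial and its derivative with respect to the six coefficients):
import numpy as np
import scipy.optimize as optimize

def squares_jac(args_pol, x, y):
    # gradient of the sum of squared residuals w.r.t. the coefficients a..f
    a, b, c, d, e, f = args_pol
    resid = y - (a*x**5 + b*x**4 + c*x**3 + d*x**2 + e*x + f)
    return np.array([-2 * np.sum(resid * x**k) for k in (5, 4, 3, 2, 1, 0)])

def der_jac(args_pol, bod):
    # gradient of the nondecreasing-derivative constraint w.r.t. a..f
    return np.array([5*bod**4, 4*bod**3, 3*bod**2, 2*bod, 1.0, 0.0])

constraints_jac = [{'type': 'ineq', 'fun': der, 'jac': der_jac, 'args': (point,)}
                   for point in ppoints]

p3 = optimize.minimize(squares, np.zeros(6), jac=squares_jac,
                       method='SLSQP', args=(pH, e), constraints=constraints_jac)
With analytic derivatives SLSQP no longer has to estimate them by finite differences, which tends to help on badly scaled problems.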

Related

Simulating a sub-system in a Python (non real-time) simulation environment through a transfer function

I am coding a dynamic system simulation (fixed step, non real-time; it runs on my desktop) and I want to model some of the system's components (e.g. filters...) through the tools made available by scipy.signal (e.g. dlsim). For those components, I know their representation in the classical form of transfer functions.
With scipy.signal it is pretty easy and straightforward to simulate the output of the transfer functions "statically", that is, once the time and input vectors are already known; on the other hand, I couldn't find a way to compute it within each simulation step. My simulator also includes some closed-loop controllers, so the outputs change dynamically as the simulation moves forward.
Any ideas?
PS I found this thread which seems to be quite similar, but I must admit that I do not understand the solution given by the author...: How to simulate one step to a transfer function in python
The solution described in the question How to simulate one step to a transfer function in python works as follows. You generate only two simulation steps for the inputs U (array-like) and T (also array-like). With both variables and your system you call scipy.signal.lsim (or scipy.signal.dlsim for discrete systems), and you also pass the initial values of the system states X. As a result you get the output values and the new states, which you store back in X.
In the next loop iteration you take the last values of U and T and append the next input and time step. Then you call lsim again, this time with the states X from the previous iteration, and so on.
Here is some sample code for a two-mass system (sorry, it's not beautiful, but it works):
import math
import numpy as np
import scipy.signal as signal
import matplotlib.pyplot as plt

class TwoMassSystem:
    def __init__(self):
        # Source: Feiler2003
        JM = 0.166  # kgm^2   moment of inertia (drive-mass)
        JL = 0.333  # kgm^2   moment of inertia (load-mass)
        d  = 0.025  # Nms/rad damping coefficient (elastic shaft)
        c  = 410.0  # Nm/rad  stiffness (elastic shaft)

        self.A = np.array([[-d/JM, -c/JM,  d/JM],
                           [  1.0,   0.0,  -1.0],
                           [ d/JL,  c/JL, -d/JL]])
        self.B = np.array([[ 1/JM, 0.0,   0.0],
                           [  0.0, 0.0,   0.0],
                           [  0.0, 0.0, -1/JL]])
        self.C = np.array([0.0, 0.0, 1.0])
        self.D = np.array([0.0, 0.0, 0.0])

        self.X = np.array([0.0, 0.0, 0.0])
        self.T = np.array([0.0])
        self.U = np.array([[0.0, 0.0, 0.0]])

        self.sys1 = signal.StateSpace(self.A, self.B, self.C, self.D)

    def resetStates(self):
        self.X = np.array([0.0, 0.0, 0.0])
        self.T = np.array([0.0])
        self.U = np.array([[0.0, 0.0, 0.0]])

    def test_sim(self):
        self.resetStates()
        h = 0.1
        ts = np.arange(0, 10, h)
        u = []
        t = []
        for i in ts:
            uM = 1.0
            if i > 1:
                uL = 1.0
            else:
                uL = 0.0
            u.append([uM, 0.0, uL])
            t.append(i)
        tout, y, x = signal.lsim(self.sys1, u, t, self.X)
        return t, y

    def test_step(self, uM, uL, tn):
        """
        test_step(uM, uL, tn)

        The call of the object instance simulates the two mass system with
        the given input values for a discrete time step.

        Parameters
        ----------
        uM : float
            input drive torque
        uL : float
            input load torque
        tn : float
            time step

        Returns
        -------
        nM : float
            angular velocity of the drive
        nL : float
            angular velocity of the load
        """
        u_new = [uM, 0.0, uL]
        self.T = np.array([self.T[-1], tn])
        self.U = np.array([self.U[-1], u_new])
        tout, y, x = signal.lsim(self.sys1, self.U, self.T, self.X)
        # x and y contain 2 simulation points; the newer one is the correct one.
        self.X = x[-1]  # update state
        return y[-1]

if __name__ == "__main__":
    a = TwoMassSystem()
    tsim, ysim = a.test_sim()

    h = 0.1
    ts = np.arange(h, 10, h)
    ys = []
    a.resetStates()
    for i in ts:
        uM = 1.0
        if i > 1:
            uL = 1.0
        else:
            uL = 0.0
        ys.append(a.test_step(uM, uL, i))

    plt.plot(tsim, ysim, ts, ys)
    plt.show()
However, as you can see in the plot, the results are not identical, and that is the problem: I don't know why. Therefore I created a new question: Why does the result of a simulated step differ from the complete simulation?
Sources:
@InProceedings{Feiler2003,
  author    = {Feiler, M. and Westermaier, C. and Schroder, D.},
  booktitle = {Proceedings of 2003 IEEE Conference on Control Applications, 2003. CCA 2003.},
  title     = {Adaptive speed control of a two-mass system},
  year      = {2003},
  volume    = {2},
  pages     = {1112-1117 vol.2},
  doi       = {10.1109/CCA.2003.1223166}
}

scipy curve_fit returns initial estimates

To fit a hyperbolic function I am trying to use the following code:
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(x, s_1, s_2, o_x, o_y, c):
    # x   > input x values
    # s_1 > slope of line 1
    # s_2 > slope of line 2
    # o_x > x offset of crossing of asymptotes
    # o_y > y offset of crossing of asymptotes
    # c   > curvature of hyperbola
    b_2 = (s_1 + s_2) / 2
    b_1 = (s_2 - s_1) / 2
    return o_y + b_1 * (x - o_x) + b_2 * np.sqrt((x - o_x) ** 2 + c ** 2 / 4)

min_fit = np.array([-3.0, 0.0, -2.0, -10.0, 0.0])
max_fit = np.array([0.0, 3.0, 3.0, 0.0, 10.0])
guess = np.array([-2.5/3.0, 4/3.0, 1.0, -4.0, 0.5])

vars, covariance = curve_fit(f=hyperbola, xdata=n_step, ydata=n_mean, p0=guess, bounds=(min_fit, max_fit))
Where n_step and n_mean are measurement values generated earlier on. The code runs fine and gives no error message, but it only returns the initial guess with a very small change. Also, the covariance matrix contains only zeros. I tried to do the same fit with a better initial guess, but that does not have any influence.
Further, I plotted the exact same function with the initial guess as input, and that indeed gives me a function which is close to the real values. Does anyone know where I am making a mistake here? Or am I using the wrong function to make my fit?
The issue must lie with n_step and n_mean (which are not given in the question as currently stated); when trying to reproduce the issue with some arbitrarily chosen set of input parameters, the optimization works as expected. Let's try it out.
First, let's define some arbitrarily chosen input parameters in the given parameter space by
params = [-0.1, 2.95, -1, -5, 5]
Let's see what that looks like:
import matplotlib.pyplot as plt
xs = np.linspace(-30, 30, 100)
plt.plot(xs, hyperbola(xs, *params))
Based on this, let us define some rather crude inputs for xdata and ydata by
xdata = np.linspace(-30, 30, 10)
ydata = hyperbola(xdata, *params)
With these, let us run the optimization and see if we match our given parameters:
vars, covariance = curve_fit(f=hyperbola, xdata=xdata, ydata=ydata, p0=guess, bounds=(min_fit, max_fit))
print(vars) # [-0.1 2.95 -1. -5. 5. ]
That is, the fit is perfect even though our params are rather different from our guess. In other words, if we are free to choose n_step and n_mean, then the method works as expected.
In order to try to challenge the optimization slightly, we could also try to add a bit of noise:
np.random.seed(42)
xdata = np.linspace(-30, 30, 10)
ydata = hyperbola(xdata, *params) + np.random.normal(0, 10, size=len(xdata))
vars, covariance = curve_fit(f=hyperbola, xdata=xdata, ydata=ydata, p0=guess, bounds=(min_fit, max_fit))
print(vars) # [ -1.18173287e-01 2.84522636e+00 -1.57023215e+00 -6.90851334e-12 6.14480856e-08]
plt.plot(xdata, ydata, '.')
plt.plot(xs, hyperbola(xs, *vars))
Here we note that the optimum ends up being different from both our provided params and the guess, yet still within the bounds given by min_fit and max_fit, and it still provides a good fit.

Problems with scipy.optimize using matrix as input, bounds, constraints

I have used Python to perform optimization in the past; however, I am now trying to use a matrix as the input for the objective function as well as set bounds on the individual element values and the sum of the value of each row in the matrix, and I am encountering problems.
Specifically, I would like to pass the objective function ObjFunc three parameters - w, p, ret - and then minimize the value of this function (technically I am trying to maximize the function by minimizing the value of -1*ObjFunc) by adjusting the value of w, subject to the bound that all elements of w should fall within the range [0, 1] and the constraint that each row of w should sum to 1.
I have included a simplified piece of example code below to demonstrate the issue I'm encountering. As you can see, I am using the minimize function from scipy.optimize. The problems begin in the first line of the objective function, x = np.dot(p, w), in which the optimization procedure attempts to flatten the matrix into a one-dimensional vector - a problem that does not occur when the function is called without performing optimization. The bounds = b and constraints = c are both producing errors as well.
I know that I am making an elementary mistake in how I am approaching this optimization and would appreciate any insight that can be offered.
import numpy as np
from scipy.optimize import minimize

def objFunc(w, p, ret):
    x = np.dot(p, w)
    y = np.multiply(x, ret)
    z = np.sum(y, axis=1)
    r = z.mean()
    s = z.std()
    ratio = r / s
    return -1 * ratio

# CREATE MATRICES
# returns, ret, of each of the three assets in the 5 periods
ret = np.matrix([[0.10, 0.05, -0.03], [0.05, 0.05, 0.50], [0.01, 0.05, -0.10],
                 [0.01, 0.05, 0.40], [1.00, 0.05, -0.20]])
# probability, p, of being in each state {X, Y, Z} in each of the 5 periods
p = np.matrix([[0, 0.5, 0.5], [0, 0.6, 0.4], [0.2, 0.4, 0.4], [0.3, 0.3, 0.4], [1, 0, 0]])
# initial equal weights, w
w = np.matrix([[0.33333, 0.33333, 0.33333], [0.33333, 0.33333, 0.33333], [0.33333, 0.33333, 0.33333]])

# OPTIMIZATION
b = [(0, 1)]
c = ({'type': 'eq', 'fun': lambda w_: np.sum(w, 1) - 1})
result = minimize(objFunc, w, (p, ret), method='SLSQP', bounds=b, constraints=c)
Digging into the code a bit: minimize calls optimize._minimize._minimize_slsqp. One of the first things it does is:
x = asfarray(x0).flatten()
So you need to design your objFunc to work with the flattened version of w. It may be enough to reshape it at the start of that function.
I read the code in an IPython session, but you can also find it in your scipy directory:
/usr/local/lib/python3.5/dist-packages/scipy/optimize/_minimize.py
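A minimal sketch of that reshape idea (my own illustration, not from the original answer; it assumes w is meant to be a 3x3 weight matrix, uses plain arrays for the p and ret data from the question, and supplies one bound per flattened element plus a row-sum constraint):
import numpy as np
from scipy.optimize import minimize

ret = np.array([[0.10, 0.05, -0.03], [0.05, 0.05, 0.50], [0.01, 0.05, -0.10],
                [0.01, 0.05, 0.40], [1.00, 0.05, -0.20]])
p = np.array([[0, 0.5, 0.5], [0, 0.6, 0.4], [0.2, 0.4, 0.4],
              [0.3, 0.3, 0.4], [1, 0, 0]])

def objFunc(w_flat, p, ret):
    w = w_flat.reshape(3, 3)               # undo the flattening done by minimize
    z = np.sum(np.multiply(np.dot(p, w), ret), axis=1)
    return -z.mean() / z.std()

w0 = np.full((3, 3), 1.0 / 3.0)
b = [(0, 1)] * w0.size                     # one (low, high) pair per flattened element
c = ({'type': 'eq',
      'fun': lambda w_flat: np.sum(w_flat.reshape(3, 3), axis=1) - 1},)

result = minimize(objFunc, w0.ravel(), args=(p, ret),
                  method='SLSQP', bounds=b, constraints=c)
print(result.x.reshape(3, 3))              # recover the optimised weights as a matrix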

Linear regression with leastsq() and global minimum not found

In Python scipy.optimize.leastsq() is normally used for non-linear regression. However, leastsq() should in principle be expected to work with linear fitting functions also. Here appears to be a simple linear regression problem that leastsq() apparently fails to solve properly. Data is fitted with the line y=mx.
Code sample is at the bottom of the post. When plot_real_data = False, then 100 points of linearly correlated data are generated randomly. Here leastsq() can effectively find the minimum of the sum-squared error function:
Graph of correct solution
However, when plot_real_data = True, then 100 data points are taken from a real data set. Here, leastsq() cannot, for some unknown reason, find the minimum of the sum-squared error function:
Graph of incorrect solution
leastsq() consistently reports an optimal gradient parameter m=1.082, regardless of the initial guess of the gradient. However m=1.082 is not the global minimum. The proper value is closer to m=1.25:
print sum(errorfunc([1.0], x, y))
3.9511006207
print sum(errorfunc([1.08], x, y))
3.59052114948
print sum(errorfunc([1.25], x, y))
3.37109033259 (near the minimum)
print sum(errorfunc([1.4], x, y))
3.79503789072
This is puzzling behaviour. In this case, the sum squared error function is a simple quadratic and there is no risk of local minima.
I know that direct methods exist for linear regression, but any ideas on this issue with leastsq()?
Python 2.7.11 :: Anaconda 4.0.0 (64-bit)
Scipy version 0.17.0
CODE:
from __future__ import division
import matplotlib.pyplot as plt
import numpy
import random
from scipy.optimize import leastsq

def errorfunc(params, x_data, y_data):
    """
    Return error at each x point, to a straight line of gradient m
    This 1-parameter error function has a clearly defined minimum
    """
    squared_errors = []
    for i, lm in enumerate(x_data):
        predicted_um = lm * params[0]
        squared_errors.append((y_data[i] - predicted_um)**2)
    return squared_errors

plt.figure()

###################################################################
# STEP 1: make a scatter plot of the data
plot_real_data = True
###################################################################

if plot_real_data:
    # 100 points of real data
    x = [0.85772, 0.17135, 0.03401, 0.17227, 0.17595, 0.1742, 0.22454, 0.32792, 0.19036, 0.17109, 0.16936, 0.17357, 0.6841, 0.24588, 0.22913, 0.28291, 0.19845, 0.3324, 0.66254, 0.1766, 0.47927, 0.47999, 0.50301, 0.16035, 0.65964, 0.0, 0.14308, 0.11648, 0.10936, 0.1983, 0.13352, 0.12471, 0.29475, 0.25212, 0.08334, 0.07697, 0.82263, 0.28078, 0.24192, 0.25383, 0.26707, 0.26457, 0.0, 0.24843, 0.26504, 0.24486, 0.0, 0.23914, 0.76646, 0.66567, 0.62966, 0.61771, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.79157, 0.06889, 0.07669, 0.1372, 0.11681, 0.11103, 0.13577, 0.07543, 0.10636, 0.09176, 0.10941, 0.08327, 1.19903, 0.20987, 0.21103, 0.21354, 0.26011, 0.28862, 0.28441, 0.2424, 0.29196, 0.20248, 0.1887, 0.20045, 1.2041, 0.20687, 0.22448, 0.23296, 0.25434, 0.25832, 0.25722, 0.24378, 0.24035, 0.17912, 0.18058, 0.13556, 0.97535, 0.25504, 0.20418, 0.22241]
    y = [1.13085, 0.19213, 0.01827, 0.20984, 0.21898, 0.12174, 0.38204, 0.31002, 0.26701, 0.2759, 0.26018, 0.24712, 1.18352, 0.29847, 0.30622, 0.5195, 0.30406, 0.30653, 1.13126, 0.24761, 0.81852, 0.79863, 0.89171, 0.19251, 1.33257, 0.0, 0.19127, 0.13966, 0.15877, 0.19266, 0.12997, 0.13133, 0.25609, 0.43468, 0.09598, 0.08923, 1.49033, 0.27278, 0.3515, 0.38368, 0.35134, 0.37048, 0.0, 0.3566, 0.36296, 0.35054, 0.0, 0.32712, 1.23759, 1.02589, 1.02413, 0.9863, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.19224, 0.12192, 0.12815, 0.2672, 0.21856, 0.14736, 0.20143, 0.1452, 0.15965, 0.14342, 0.15828, 0.12247, 0.5728, 0.10603, 0.08939, 0.09194, 0.1145, 0.10313, 0.13377, 0.09734, 0.12124, 0.11429, 0.09536, 0.11457, 0.76803, 0.10173, 0.10005, 0.10541, 0.13734, 0.12192, 0.12619, 0.11325, 0.1092, 0.11844, 0.11373, 0.07865, 1.28568, 0.25871, 0.22843, 0.26608]
else:
    # 100 points of test data with noise added
    x_clean = numpy.linspace(0, 1.2, 100)
    y_clean = [i * 1.38 for i in x_clean]
    x = [i + random.uniform(-1 * random.uniform(0, 0.1), random.uniform(0, 0.1)) for i in x_clean]
    y = [i + random.uniform(-1 * random.uniform(0, 0.5), random.uniform(0, 0.5)) for i in y_clean]

plt.subplot(2, 1, 1)
plt.scatter(x, y); plt.xlabel('x'); plt.ylabel('y')

# STEP 2: vary gradient m of a y = mx fitting line
# plot sum squared error with respect to gradient m
# here you can see by eye the optimal gradient of the fitting line
plt.subplot(2, 1, 2)
try_m = numpy.linspace(0.1, 4, 200)
sse = [sum(errorfunc([m], x, y)) for m in try_m]
plt.plot(try_m, sse); plt.xlabel('line gradient, m'); plt.ylabel('sum-squared error')

# STEP 3: use leastsq() to find optimal gradient m
params = [2]  # start with initial guess of 2 for gradient
params_fitted, cov, infodict, mesg, ier = leastsq(errorfunc, params[:], args=(x, y), full_output=1)
optimal_m = params_fitted[0]
print optimal_m

# optimal gradient m should be the minimum of the error function
plt.subplot(2, 1, 2)
plt.plot([optimal_m, optimal_m], [0, 100], 'r')

# optimal gradient m should give best fit straight line
plt.subplot(2, 1, 1)
plt.plot([0, 1.2], [0, 1.2 * optimal_m], 'r')
plt.show()

Error Using two objects A1 and A2 of class A as parameters of function on object B1 of class B: B1 not updating attributes?

Still Unsolved - the issue lies in the paralaxtrace2 method, in updating the Ray object between propagating through both lenses
I'm trying to build a raytracer in Python using two different classes: SphericalRefraction (containing plane, convex and concave lenses) and OutputPlane (containing just one infinitely large plane), both inheriting from one class OpticalElement. Each of these classes has an intercept method (to calculate where a ray with a given point and direction intersects the lens) and a refract method (to calculate the new direction vector of the ray from its most recent direction vector). Each optical element also has a propagate_ray method that appends the intercept() point to the ray as its newest point and the refract() direction as its newest direction. The ray itself merely holds 1-D arrays with x, y, z elements, one list for points and one for directions, e.g.:
class Ray:
    def __init__(self, p = [0.0, 0.0, 0.0], k = [0.0, 0.0, 0.0]):
        self._points = [np.array(p)]
        self._directions = [np.array(k)/np.sqrt(sum(n**2 for n in k))]
        self.checklength()

    def p(self):
        return self._points[len(self._points)-1]

    def k(self):
        return self._directions[len(self._directions)-1]


class SphericalRefraction(OpticalElement):
    def __init__(self, z0 = 0.0, c = 0.0, n1 = 1.0, n2 = 1.0, ar = 0.0):
        self.z0 = z0
        self.c = c
        self.n1 = n1
        self.n2 = n2
        self.ar = ar
        self.R = self.radius()
        self.s = self.surface()
        self.centre = self.centre()

    def intercept(self, ray):
        ar_z = np.sqrt(self.R**2 - self.ar**2)
        # ar_z = distance from aperture radius z intercept to centre of sphere
        r = ray.p() - self.centre
        r_mag = np.sqrt(sum(n**2 for n in r))
        rdotk = np.dot(r, ray.k())
        if (rdotk**2 - r_mag**2 + self.R**2) < 0:
            return None
        else:
            l1 = -rdotk + np.sqrt(rdotk**2 - r_mag**2 + self.R**2)
            l2 = -rdotk - np.sqrt(rdotk**2 - r_mag**2 + self.R**2)
            lplane = (self.z0 - ray.p()[2]) / ray.k()[2]
            if self.s == "convex":
                if (rdotk**2 - r_mag**2 + self.R**2) == 0:
                    if self.centre[2] - ar_z >= (ray.p() + -rdotk*ray.k())[2]:
                        return ray.p() + -rdotk*ray.k()

    def refract(self, ray):
        n_unit = self.unitsurfacenormal(ray)
        k1 = ray.k()
        ref = self.n1/self.n2
        ndotk1 = np.dot(n_unit, k1)
        if np.sin(np.arccos(ndotk1)) > (1/ref):
            return None
        else:
            return ref*k1 - (ref*ndotk1 - np.sqrt(1 - (ref**2)*(1-ndotk1**2)))*n_unit

    def propagate_ray(self, ray):
        if self.intercept(ray) is None or self.refract(ray) is None:
            return "Terminated"
        else:
            p = self.intercept(ray)
            k2 = self.refract(ray)
            ray.append(p, k2)
            return "Final Point: %s" % (ray.p()) + " and Final Direction: %s" % (ray.k())
When I pass two rays through one SphericalRefraction and one OutputPlane, I use this method:
def paralaxtrace(self, Ray, SphericalRefraction, OutputPlane):
    SphericalRefraction.propagate_ray(self)
    SphericalRefraction.propagate_ray(Ray)
    OutputPlane.propagate_ray(self)
    OutputPlane.propagate_ray(Ray)
    self.plotparalax(Ray)
I get a graph looking like this, for example:
I've implemented this method to put the rays through two SphericalRefraction objects and one OutputPlane, and for some reason the ray doesn't update between the SphericalRefraction elements:
def paralaxtrace2(self, Ray, sr1, sr2, OutputPlane):
    sr1.propagate_ray(self)
    sr1.propagate_ray(Ray)
    sr2.propagate_ray(self)
    sr2.propagate_ray(Ray)
    OutputPlane.propagate_ray(self)
    OutputPlane.propagate_ray(Ray)
    self.plotparalax(Ray)
As you can see, the intercept/refract methods always use ray.p(), i.e. the newest point, but for some reason the new point/direction is not actually appended after intersecting and refracting with the second spherical element. The graph looks exactly the same as the one above.
Am I passing objects wrongly? Is there another issue? If you need more of my code, please let me know, as I've put in the bare minimum to understand this issue.
Edit:
In the console:
>> import raytracer as rt
>> lense1 = rt.SphericalRefraction(50, .02, 1, 1.5168, 49.749)
>> lense2 = rt.SphericalRefraction(60, .02, 1, 1.5168, 49.749)
>> ray = rt.Ray([.1,.2,0],[0,0,1])
>> ray.paralaxtrace2(rt.Ray([-0.1, -0.2, 0],[0,0,1]), lense1, lense2, rt.OutputPlane(100))
x, y of 'ray' = [0.0, 50.000500002500019, 100.0] [0.20000000000000001, 0.20000000000000001, -0.13186017048586818]
x, y of passed ray: [0.0, 50.000500002500019, 100.0] [-0.20000000000000001, -0.20000000000000001, 0.13186017048586818]
For this, I get the graph above. Since the second convex lens is at z = 60, it should converge the rays even more; instead, it looks like nothing happens.
Edit 2:
The issue doesn't seem to be the mutable default argument in the ray; I still get the same error. For some reason, it has more to do with adding another lens into the function as an argument. Between the propagation in each lens, it doesn't update the coordinates. Does this have to do with a mutable default argument error in the lens' class?
In order to demonstrate the mutable default argument problem, see this example:
class Ray(object):
    def __init__(self, p = [0.0, 0.0, 0.0]):
        self.p = p

ray1 = Ray()
ray2 = Ray()
ray1.p.append('appended to ray1.p')
ray2.p.append('appended to ray2.p')
print ray1.p
print ray2.p
Output:
[0.0, 0.0, 0.0, 'appended to ray1.p', 'appended to ray2.p']
[0.0, 0.0, 0.0, 'appended to ray1.p', 'appended to ray2.p']
(Why are both strings seemingly appended to both lists?)
Does this surprising behavior match the behavior you're seeing? If so, mutable default arguments are actually the root of your problem.
Short explanation:
The default argument p = [0.0, 0.0, 0.0] is not evaluated at object construction time (when __init__ is called), but only once, when the function is defined. Therefore p refers to the same list (initially [0.0, 0.0, 0.0]) for all instances of the class, and if you change it on one instance, all the others change as well.
The way to avoid this is to use None (or a different, immutable marker value) as your default argument and set the actual default in __init__:
class Ray(object):
    def __init__(self, p=None):
        if p is None:
            p = [0.0, 0.0, 0.0]
        self.p = p
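Applied to the Ray class from your question, the same pattern might look like the sketch below (illustrative only; the non-zero default for k is an assumption so the normalisation does not divide by zero, and checklength() is whatever your original class defines):
import numpy as np

class Ray:
    def __init__(self, p=None, k=None):
        # fresh lists are built on every call, so instances no longer share state
        if p is None:
            p = [0.0, 0.0, 0.0]
        if k is None:
            k = [0.0, 0.0, 1.0]   # assumed non-zero default so the direction can be normalised
        self._points = [np.array(p)]
        self._directions = [np.array(k) / np.sqrt(sum(n**2 for n in k))]
        self.checklength()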
