Setting source in Python-Meep for FDTD simulation

I'm trying to use the Python-Meep package to run some FDTD simulations. First, I want to simulate a plane wave traveling through vacuum in the 'z' direction. I have problems properly setting up the source in the three-dimensional case. In the 2D case, I can make the source a line which touches the borders of the computational matrix; in 3D it looks like that is impossible. Below are simple examples.
2D case: The source is a line from (x,y)=(0, .1e-6) to (x,y)=(15e-6, .1e-6) (from border to border). Thanks to this, the plane wave travels unperturbed to the opposite end of the matrix (where it is reflected).
import meep_mpi as meep

x, y, voxelsize = 15e-6, 15e-6, 50e-9
vol = meep.vol2d(x, y, 1/voxelsize)

class Model(meep.Callback):
    def __init__(self):
        meep.Callback.__init__(self)
    def double_vec(self, r):
        return 1

model = Model()
meep.set_EPS_Callback(model.__disown__())
struct = meep.structure(vol, meep.EPS)
f = meep.fields(struct)
f.add_volume_source(meep.Ex,
        meep.continuous_src_time(473.755e12/3e8),   # 632.8 nm
        meep.volume(meep.vec(0e-6, .1e-6), meep.vec(15e-6, .1e-6)))

while f.time()/3e8 < 30e-15:
    f.step()

meep.del_EPS_Callback()
output = meep.prepareHDF5File("Ex1.h5")
f.output_hdf5(meep.Ex, vol.surroundings(), output)
del(output)
3D case: The source is a plane from (x,y,z)=(0, 0, .1e-6) to (x,y,z)=(15e-6, 15e-6, .1e-6). This should create a plane spanning border to border of the matrix. However, for an unknown reason the source does not touch the boundary (there is a small empty space), and whatever I do, I cannot force it to touch it. As a result, I cannot create a plane wave travelling in the 'z' direction. So far I have tried: (a) explicitly passing the no_pml argument, (b) passing pml(0), (c) changing the sampling, (d) changing the 'z' position of the source, all with no luck. I will be grateful for any suggestions.
import meep_mpi as meep

x, y, z, voxelsize = 15e-6, 15e-6, 15e-6, 50e-9
vol = meep.vol3d(x, y, z, 1/voxelsize)

class Model(meep.Callback):
    def __init__(self):
        meep.Callback.__init__(self)
    def double_vec(self, r):
        return 1

model = Model()
meep.set_EPS_Callback(model.__disown__())
struct = meep.structure(vol, meep.EPS)
f = meep.fields(struct)
f.add_volume_source(meep.Ex,
        meep.continuous_src_time(473.755e12/3e8),   # 632.8 nm
        meep.volume(meep.vec(0, 0, .1e-6), meep.vec(15e-6, 15e-6, .1e-6)))

while f.time()/3e8 < 30e-15:
    f.step()

meep.del_EPS_Callback()
output = meep.prepareHDF5File("Ex1.h5")
f.output_hdf5(meep.Ex, vol.surroundings(), output)
del(output)

Your inability to send a homogeneous plane wave with the electric field polarised along the X axis indeed manifests at the simulation-volume boundaries perpendicular to the Y axis, where the field amplitude drops to zero. This trouble does not occur at the two boundaries perpendicular to X.
This is, however, a fully physical solution: by default, the boundaries behave as a perfect electric/magnetic conductor, and the electric field component parallel to a PEC must be zero in its vicinity. (Good conductors screen the external electric field.)
If you need an exact plane wave, you will have to append another command after the initialisation of the fields, to define the boundaries as periodic:
f.use_bloch(meep.X, 0)
f.use_bloch(meep.Y, 0)
Note that the second parameter does not have to be zero, which enables the definition of arbitrarily inclined wave sources.
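For instance, here is a rough sketch (my own, untested against your exact setup) of a wave tilted in the x-z plane; the assumption is that use_bloch takes the transverse wavevector component in the same reciprocal-length units as the source frequency f/c used above (check this against the Meep docs):

import math
theta = math.radians(10)                    # tilt angle from the z axis
k_x = (473.755e12 / 3e8) * math.sin(theta)  # transverse wavevector component
f.use_bloch(meep.X, k_x)                    # inclined in the x-z plane
f.use_bloch(meep.Y, 0)                      # still periodic along y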
For a more advanced (and more convenient) example, see https://github.com/FilipDominec/python-meep-utils/blob/master/scatter.py

Related

How to implement an interactive tone curve in Python?

I want to implement a photo editor in python using flask. So far, I managed to apply an s curve to a photo, like this:
import cv2
import numpy as np

image = cv2.imread('apple.jpg')

def sToneCurve(frame):
    look_up_table = np.zeros((256, 1), dtype='uint8')
    for i in range(256):
        look_up_table[i][0] = 255 * (np.sin(np.pi * (i / 255 - 1 / 2)) + 1) / 2
    return cv2.LUT(frame, look_up_table)

image_contrasted = sToneCurve(image)
cv2.imwrite('apple_dark.jpg', image_contrasted)
How could I implement an interactive tone curve, so that the user could select how they would like to edit the photo (like this: tone curve), rather than having a predefined formula applied to it, as in the code above? What would be the best approach, and which libraries and visualizations should I use for the curve plots?
You implement this using "standard" polynomial fitting: you have N points that you need a curve through, so you find the polynomial of degree N-1 that does that, then use that polynomial as your mapping function.
You're already using numpy, so use numpy.polynomial.polynomial.polyfit with:
x: all your points' x coordinates, including your black and white points (which, in a proper tone curve, users should be able to move off of (0,0) and (1,1) respectively),
y: all your points' y coordinates,
deg: if the polynomial has to pass through all points, which it should, this should be equal to len(x) - 1, since two points define a line (a first-degree polynomial), three points a quadratic curve (a second-degree polynomial), etc. "The" polynomial through N points is a degree N-1 polynomial,
the rest of the args shouldn't particularly matter.
This gives you a numpy array of polynomial coefficients (let's call that array c) that you can then use for mapping: any pixel with lightness/intensity value i should get mapped to:
mapped = f(i) = c[0] * i**0 + c[1] * i**1 + c[2] * i**2 + ...
Which thankfully numpy can do for you by simply using the corresponding polyval function.
And of course, to make that fast, what you really want to do is build a LUT that you can just directly consult, every time the user changes a coordinate in the tone curve UI, so:
from numpy.polynomial.polynomial import polyfit, polyval

# How big of a LUT you actually need depends entirely
# on the bit depth you're working with, of course...
BIT_DEPTH = 2**16
TONE_LUT = range(0, BIT_DEPTH)

def update_from_tone_ui(coordinates):
    """
    Called on user value update, with coordinates being
    a list-of-lists a la [[0,0], [0.1,0.1], ...]
    """
    # rebind the module-level LUT rather than a local shadow
    global TONE_LUT
    x, y = zip(*coordinates)
    coefficients = polyfit(x, y, len(x) - 1)
    f = lambda i: clamp(polyval(i, coefficients), 0, 1)
    # And remember to make sure the input range to f() matches
    # the actual x/y domain that we used for the polyfit:
    divisor = BIT_DEPTH - 1
    TONE_LUT = [BIT_DEPTH * f(i/divisor) for i in range(0, BIT_DEPTH)]
with clamp coming from "somewhere", but if you don't already have one then it's trivially implemented with some shortcut returns:
def clamp(n, floor, ceiling):
    if n < floor: return floor
    if n > ceiling: return ceiling
    return n
(And of course make sure to adjust your clamping values if you don't want your tone curve x and y coordinates in [0,1])
Now, rather than running the mapping function every time, you just directly look up the mapped value. Note that you get a bit of freedom in terms of precision: you could use a tone curve in which the x and y values run from 0 to 1, or you can have them run from 0 to whatever bit depth you use (2^8, 2^16, what have you), but whatever you pick, make sure you scale your actual pixel intensities accordingly when you generate your LUT. Otherwise things will look really interesting.
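To connect this back to the original OpenCV snippet, here is a hedged sketch of applying the finished LUT to an 8-bit image (assuming TONE_LUT was built as above but with BIT_DEPTH = 2**8, and clamping to valid uint8 values):

import cv2
import numpy as np

# TONE_LUT as built above, with BIT_DEPTH = 2**8
lut = np.clip(np.array(TONE_LUT), 0, 255).astype('uint8').reshape(256, 1)
image = cv2.imread('apple.jpg')        # 8-bit BGR image
mapped = cv2.LUT(image, lut)           # look up every pixel, per channel
cv2.imwrite('apple_toned.jpg', mapped)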

Programmatical Change of basis for coordinate vectors with different origin of coordinates (python/general maths)

I have a Support Vector Machine that splits my data in two using a decision hyperplane (for visualisation purposes this is a sample dataset with three dimensions), like this:
Now I want to perform a change of basis, such that the hyperplane lies flatly on the x/y plane, such that the distance from each sample point to the decision hyperplane is simply their z-coordinate.
For that, I know that I need to perform a change of basis. The hyperplane of the SVM is given by its coefficient (a 3d vector) and intercept (a scalar), using (as far as I understand it) the general form for mathematical planes: ax+by+cz=d, with a, b, c being the components of the coefficient and d being the intercept. When plotted as a 3d vector, the coefficient is a vector orthogonal to the plane (in the image it's the cyan line).
Now to the change of basis: if there were no intercept, I could just take the coefficient vector as one vector of my new basis, another could be any vector lying in the plane, and the third is simply the cross product of both, resulting in three orthogonal vectors that can be the column vectors of the transformation matrix.
The z-function used in the code below comes from simple rearrangement of the general plane form: ax+by+cz=d <=> z=(d-ax-by)/c:

z_func = lambda interc, coef, x, y: (interc - coef[0]*x - coef[1]*y) / coef[2]

def generate_trafo_matrices(coefficient, z_func):
    # note: z_func is assumed here to already have interc and coef bound
    # (e.g. via functools.partial), so it is called with x and y only
    normalize = lambda vec: vec/np.linalg.norm(vec)
    uvec1 = normalize(coefficient)
    uvec2 = normalize(np.array([1, 0, z_func(1, 0)]))
    uvec3 = normalize(np.cross(uvec1, uvec2))
    # in this order such that the plane ends up on the xy-plane
    # instead of the yz-plane
    back_trafo_matrix = np.array([uvec2, uvec3, coefficient]).T
    trafo_matrix = np.linalg.inv(back_trafo_matrix)
    return trafo_matrix, back_trafo_matrix
This transformation matrix would then be applied to all points, like this:
def _transform(self, points, inverse=False):
    trafo_mat = self.inverse_trafo_mat if inverse else self.trafo_mat
    points = np.array([trafo_mat.dot(point) for point in points])
    return points
Now if the intercept were zero, this would work perfectly and the plane would lie flat on the xy-plane. However, as soon as I have an intercept != zero, the plane is not flat anymore:
I understand why that is the case: this is not a simple change of basis, because the coordinate origin of my other basis is not at (0,0,0) but at a different place (the hyperplane could be crossing the coefficient vector at any point). But my attempts at adding the intercept to the transformation all failed to produce the correct result:
def _transform(self, points, inverse=False):
    trafo_mat = self.inverse_trafo_mat if inverse else self.trafo_mat
    intercept = self.intercept if inverse else -self.intercept
    ursprung_translate = trafo_mat.dot(np.array([0,0,0]) + trafo_mat[:,0]*intercept)
    points = np.array([point + trafo_mat[:,0]*intercept for point in points])
    points = np.array([trafo_mat.dot(point) for point in points])
    points = np.array([point - ursprung_translate for point in points])
    return points
is, for example, wrong. I am asking this on StackOverflow and not on the Math StackExchange because I think I wouldn't be able to translate the respective math into code; I am glad I even got this far.
I have created a github gist with the code to do the transformation and create the plots at https://gist.github.com/cstenkamp/0fce4d662beb9e07f0878744c7214995, which can be launched using Binder under the link https://mybinder.org/v2/gist/jtpio/0fce4d662beb9e07f0878744c7214995/master?urlpath=lab%2Ftree%2Fchange_of_basis_with_translate.ipynb if somebody wants to play around with the code itself.
Any help is appreciated!
The problem here is that your plane is an affine space, not a vector space, so you can't use the usual transform matrix formula.
A coordinate system in affine space is given by an origin point and a basis (put together, they're called an affine frame). For example, if your origin is called O, the coordinates of the point M in the affine frame will be the coordinates of the OM vector in the affine frame's basis.
As you can see, the "normal" R^3 space is a special case of affine space where the origin is (0,0,0).
Once we've determined those, we can use the frame change formulas in affine spaces: if we have two affine frames R = (O, b) and R' = (O', b'), the base change formula for a point M is: M(R') = base_change_matrix_from_b'_to_b * (M(R) - O'(R)) (with O'(R) the coordinates of O' in the coordinate system defined by R).
In our case, we're trying to go from the frame with an origin at (0,0,0) and the canonical basis, to a frame whose origin is the orthogonal projection of (0,0,0) onto the plane, with, for instance, the basis described in your initial post.
Let's implement these steps:
To begin with, we'll define a Plane class to make our lives a bit easier:
from dataclasses import dataclass
import numpy as np

@dataclass
class Plane:
    a: float
    b: float
    c: float
    d: float

    @property
    def normal(self):
        return np.array([self.a, self.b, self.c])

    def __contains__(self, point: np.ndarray):
        return np.isclose(self.a*point[0] + self.b*point[1] + self.c*point[2] + self.d, 0)

    def project(self, point):
        x, y, z = point
        k = (self.a*x + self.b*y + self.c*z + self.d) / (self.a**2 + self.b**2 + self.c**2)
        return np.array([x - k*self.a, y - k*self.b, z - k*self.c])

    def z(self, x, y):
        return (-self.d - self.b*y - self.a*x) / self.c
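As a quick sanity check of the class (with hypothetical coefficients), projecting an arbitrary point onto the plane should land inside it:

plane = Plane(1.0, 2.0, 3.0, 4.0)
print(plane.project(np.array([5.0, 5.0, 5.0])) in plane)   # True
print(np.array([0.0, 0.0, plane.z(0.0, 0.0)]) in plane)    # True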
We can then implement make_base_changer, which takes a Plane as input and returns two lambda functions performing the forward and inverse transforms (each taking and returning a point); a quick sanity check follows the code below.
def normalize(vec):
    return vec / np.linalg.norm(vec)

def make_base_changer(plane):
    uvec1 = normalize(plane.normal)
    # a direction lying in the plane (assumes b, c and d are nonzero)
    uvec2 = normalize(np.array([0, -plane.d/plane.b, plane.d/plane.c]))
    uvec3 = normalize(np.cross(uvec1, uvec2))
    # normalizing all three makes the basis orthonormal, so the coordinate
    # along the normal is the signed distance to the plane
    transition_matrix = np.linalg.inv(np.array([uvec1, uvec2, uvec3]).T)

    origin = np.array([0, 0, 0])
    new_origin = plane.project(origin)

    forward = lambda point: transition_matrix.dot(point - new_origin)
    backward = lambda point: np.linalg.inv(transition_matrix).dot(point) + new_origin
    return forward, backward
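For example (hypothetical coefficients again), a point on the plane should map to a coordinate of ~0 along the normal, and the backward transform should round-trip. Note that with the basis ordering above the normal is the first basis vector, so the "distance" coordinate is index 0, not z; reorder uvec1/uvec2/uvec3 if you want it last:

plane = Plane(2.0, 3.0, 1.0, 5.0)
forward, backward = make_base_changer(plane)

p = np.array([1.0, 2.0, plane.z(1.0, 2.0)])   # a point on the plane
print(forward(p)[0])                          # ~0: no offset along the normal
print(np.allclose(backward(forward(p)), p))   # True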

OpenCV recoverPose camera coordinate system

I'm estimating the translation and rotation of a single camera using the following code.
E, mask = cv2.findEssentialMat(k1, k2,
                               focal = SCALE_FACTOR * 2868,
                               pp = (1920/2 * SCALE_FACTOR, 1080/2 * SCALE_FACTOR),
                               method = cv2.RANSAC,
                               prob = 0.999,
                               threshold = 1.0)
points, R, t, mask = cv2.recoverPose(E, k1, k2)
where k1 and k2 are my matching sets of key points: Nx2 matrices whose first column holds the x-coordinates and whose second column holds the y-coordinates.
I collect all the translations over several frames and generate a path that the camera traveled like this.
def generate_path(rotations, translations):
    path = []
    current_point = np.array([0, 0, 0])
    for R, t in zip(rotations, translations):
        path.append(current_point)
        # don't care about rotation of a single point
        current_point = current_point + t.reshape((3,))
    return np.array(path)
So, I have a few issues with this.
The OpenCV camera coordinate system suggests that if I want to view the 2D "top down" view of the camera's path, I should plot the translations along the X-Z plane.
plt.plot(path[:,0], path[:,2])
This is completely wrong.
However, if I write this instead
plt.plot(path[:,0], path[:,1])
I get the following (after doing some averaging)
This path is basically perfect.
So, perhaps I am misunderstanding the coordinate system convention used by cv2.recoverPose? Why should the "bird's-eye view" of the camera path lie in the XY plane and not the XZ plane?
Another, perhaps unrelated issue is that the reported Z-translation appears to decrease linearly, which doesn't really make sense.
I'm pretty sure there's a bug in my code since these issues appear systematic - but I wanted to make sure my understanding of the coordinate system was correct so I can restrict the search space for debugging.
First of all: your method is not actually producing a real path. The translation t produced by recoverPose() is always a unit vector, so in your 'path' every frame moves exactly 1 'meter' from the previous frame. The correct method would be: 1) initialize (featureMatch, findEssentialMat, recoverPose), then 2) track (triangulate, featureMatch, solvePnP). If you would like to dig deeper, tutorials on monocular visual SLAM would help.
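To make the first point concrete, here is a rough sketch (not a full pipeline; the scale must come from elsewhere, e.g. triangulated landmarks) of how relative poses from recoverPose() are usually chained, rather than summing raw translations:

import numpy as np

def accumulate_poses(rotations, translations, scale=1.0):
    cur_R = np.eye(3)
    cur_t = np.zeros((3, 1))
    path = [cur_t.ravel().copy()]
    for R, t in zip(rotations, translations):
        # move by the scaled relative translation, expressed in the
        # current orientation, then compose the rotations
        cur_t = cur_t + scale * cur_R.dot(t.reshape(3, 1))
        cur_R = R.dot(cur_R)
        path.append(cur_t.ravel().copy())
    return np.array(path)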
Secondly, you might have mixed up the camera coordinate system and the world coordinate system. If you want to plot the trajectory, you should use the world coordinate system rather than the camera coordinate system. The results of recoverPose() are also in the world coordinate system, which here has the x-axis pointing right, the y-axis pointing forward, and the z-axis pointing up. Thus, when you want to plot the 'bird view', it is correct to plot along the X-Y plane.

Two-body orbit modelling problems

Skip to Update 2 below, if you don't want to read too much background.
I'm trying to implement a model for simple orbital simulations (two body).
However, when I try to use the code I've written, the plots generated from the result look quite odd.
The program uses initial state vectors (position and velocity) to calculate the Keplerian orbital elements, which are used to then calculate the next position, and returned as the next two state vectors.
This seems to work fine, and by itself, plots correctly as long as I keep the plot on the orbital plane. But I would like to rotate the plot to the frame of reference (the parent body) so that I can see a cool 3D view of what the orbits look like (obvs).
Right now, I suspect that the bug is in how I convert the two state vectors from the orbital plane by rotating them to the frame of reference. I am using the equations from step 6 of this document to create the following code (but applying individual rotation matrices [copied from here]):
from numpy import sin, cos, matrix, newaxis, asarray, squeeze, dot

def Rx(theta):
    """
    Return a rotation matrix for the X axis and angle *theta*
    """
    return matrix([
        [1, 0,           0         ],
        [0, cos(theta), -sin(theta)],
        [0, sin(theta),  cos(theta)],
    ], dtype="float64")

def Rz(theta):
    """
    Return a rotation matrix for the Z axis and angle *theta*
    """
    return matrix([
        [cos(theta), -sin(theta), 0],
        [sin(theta),  cos(theta), 0],
        [0,           0,          1],
    ], dtype="float64")

def rotate1(vector, O, i, w):
    # The starting value of *vector* is just a 1-dimensional numpy
    # array.
    # Transform into a column vector.
    vector = vector[:, newaxis]
    # Perform the rotation
    R = Rz(-O) * Rx(-i) * Rz(-w)
    res2 = dot(R, vector)
    # Transform back into a row vector (because that's what
    # the rest of the program uses)
    return squeeze(asarray(res2))
(For context, this is the full class I am using for the orbit model.)
When I plot X and Y coordinates from the result, I get this:
But when I change the rotation matrix to R = Rz(-O) * Rx(-i), I get this more plausible plot (although obviously missing one rotation, and slightly off-center):
And when I reduce it further to R = Rx(-i), as one would expect, I get this:
So as I said, I am fairly sure that it is not the orbital calculation code that is behaving weirdly, but rather some error in the rotation code. But I'm not sure where to narrow this down, as I'm pretty new to both numpy and matrix math in general.
Update: Based on stochastic's answer I transposed the matrices (R = Rz(-O).T * Rx(-i).T * Rz(-w).T), but then got this plot:
which made me wonder if my conversion to screen coordinates was somehow wrong, but it looks correct to me (and is the same code as for the more-correct plots with fewer rotations), namely:
def recenter(v_position, viewport_width, viewport_height):
    x, y, z = v_position
    # the size of the viewport in meters
    bounds = 20000000
    # viewport_width is the screen pixels (800)
    scale = viewport_width/bounds
    # Perform the scaling operation
    x *= scale
    y *= scale
    # recenter to screen X and Y measured from the top-left corner
    # of the viewport
    x += viewport_width/2
    y = viewport_height/2 - y
    # Cast to int, because we don't care about pixel fractions
    return int(x), int(y)
Update 2
Although I have triple-checked my implementation of the equations, as well as the rotations with stochastic's help, I still can't get the orbits to come out right. They still appear basically the same as in the plots above.
Using data from the NASA Horizons system, I set up an orbit with specific state vectors from the ISS (2457380.183935185 = A.D. 2015-Dec-23 16:24:52.0000 (TDB)) and checked them against the Kepler orbital elements for the same moment in time, which produces this result:
element                   mine (calculated)    NASA Horizons
inclination               0.900246137041       0.900246137041
true_anomaly              0.11497063007        0.0982485984565
long_of_asc_node          3.80727461492        3.80727461492
eccentricity              0.000429082122137    0.000501850615905
semi_major_axis           6778560.7037         6779057.01374
mean_anomaly              0.114872215066       0.0981501816537
argument_of_periapsis     0.843226618347       0.85994864996
Obviously some floating-point precision error is to be expected, but the variations in mean_anomaly and true_anomaly struck me as larger than I expected. (I'm currently running all of my numpy calculations using float128 numbers on a 64-bit system.)
In addition, the resulting orbit still looks like the (quite) eccentric first plot, above (even though I know that this LEO ISS orbit is quite circular). So I'm a bit stumped as to what the source of the problem could be.
I believe you have at least two problems.
After looking more closely at the orbital simulation you are doing (see this additional document from the comments), I think the main problem is the initially-very-reasonable-but-yet-untrue assumption that the final plot should look like an ellipse. In general it will not, since an orbiting body will not necessarily stay in a single plane.
The other problem, I think, is that your rotation matrices are the transpose of what they should be, per the document you described (see below).
On transposed rotation matrices
The document you cited does not directly specify whether R_x and R_z should be right-handed rotations of the axes or of the vector they will multiply, though you can figure it out from equation 9 (or 10). It turns out that they should be right-handed rotations of the axes, not the vector. That means that they should be defined like this:
return matrix([
    [1, 0,          0         ],
    [0, cos(theta), sin(theta)],
    [0,-sin(theta), cos(theta)],
], dtype="float64")
instead of like this:
return matrix([
    [1, 0,           0         ],
    [0, cos(theta), -sin(theta)],
    [0, sin(theta),  cos(theta)],
], dtype="float64")
I found this out by reproducing equation 9 by hand on paper.
In that equation, look at the first component of the vector r(t).
There are two terms: one with o_x in it and one with o_y.
Look at the thing multiplying o_y. It is: -(sin(omega)*cos(Omega)+cos(omega)*cos(i)*sin(Omega)).
That leading minus sign is the key. It comes from the minus sign in the first row of your Rz matrix.
Since the Omega, i, and omega in equation 9 are all negated, that means that the minus sign needs to be on the second row of R_z, which would mean that R_z represents a right-handed rotation of the axes, not the vector.
Similarly, we can look at the o_y component of the last term and see that the minus sign needs to be on the second row of R_x, meaning (thank goodness for sanity) that both R_z and R_x are right-handed rotations of the axes.
Your Rx and Rz functions are currently defining right handed rotations of a vector, not the axes.
You can fix this in any of the following ways (all three are equivalent; a quick numeric check follows the list):
removing the minus signs on your Euler angles: Rz(O) * Rx(i) * Rz(w)
transposing your rotation matrices: Rz(-O).T * Rx(-i).T * Rz(-w).T
moving the minus sign in the definitions of Rx and Rz to the second-row sine term, as shown above
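For example, a quick numeric sketch (reusing the Rx/Rz definitions from the question, with arbitrary test angles) confirming that the first two fixes agree:

import numpy as np
O, i, w = 0.5, 0.3, 0.2
R1 = Rz(O) * Rx(i) * Rz(w)
R2 = Rz(-O).T * Rx(-i).T * Rz(-w).T
print(np.allclose(R1, R2))   # True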
I am going to mark stochastic's answer as right, because a) he deserves the points for being so helpful, and b) his advice was fundamentally correct.
However the source of the weird plot actually ended up being these lines in the linked Orbit class:
self.v_position = self.rotate(v_position, self.long_of_asc_node, self.inclination, self.argument_of_periapsis)
self.v_velocity = self.rotate(v_velocity, self.long_of_asc_node, self.inclination, self.argument_of_periapsis)
Notice that the self.v_position property is updated before the call that rotates the velocity vector happens; one might also notice, reading the code, that I in my cleverness decided to make all of the orbital-element values methods wrapped in @property decorators, to make the calculations more clear.
But of course, this also means the methods are called, and the values recalculated, every time a property is accessed. So the second call to self.rotate() happens with slightly different values of the orbital elements from the first call and, more importantly, with values that don't match up 100% correctly with the "current" position and velocity state vectors!
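A tiny hypothetical illustration of the pitfall (not the actual Orbit class): each access to a @property recomputes from the current state, so mutating that state between two accesses silently changes the derived values:

class Demo:
    def __init__(self):
        self.state = 1.0

    @property
    def element(self):
        # recomputed from self.state on *every* access
        return 2.0 * self.state

d = Demo()
first = d.element    # 2.0
d.state = 3.0        # analogous to updating self.v_position mid-update
second = d.element   # 6.0, no longer consistent with `first`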
So after a few days of banging my head against this bug, I figured it out from a bit of yak-shaving I was doing in the form of a refactoring, and now it all works perfectly.

Construct an array spacing proportional to a function or other array

I have a function (f: black line) which varies sharply in a specific, small region (derivative f': blue line, and second derivative f'': red line). I would like to integrate this function numerically, and if I distribute points evenly (in log space) I end up with fairly large errors in the sharply varying region (near 2E15 in the plot).
How can I construct an array spacing such that it is very well sampled in the area where the second derivative is large (i.e. a sampling frequency proportional to the second derivative)?
I happen to be using python, but I'm interested in a general algorithm.
Edit:
1) It would be nice to be able to still control the number of sampling points (at least roughly).
2) I've considered constructing a probability distribution function shaped like the second derivative and drawing randomly from that --- but I think this will offer poor convergence, and in general, it seems like a more deterministic approach should be feasible.
Assuming the second derivative f'' is stored in a NumPy array, d2f (since '' is not valid in a Python identifier), you could do the following:

# Scale these deltas as you see fit
deltas = 1/d2f
domain = deltas.cumsum()

To account only for order-of-magnitude swings, this could be adjusted as follows...

deltas = 1/(-np.log10(1/d2f))
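A runnable sketch of the idea (with a made-up function that has one sharp region, and a small floor to avoid dividing by zero):

import numpy as np

x_uniform = np.linspace(0.0, 2.0, 200)
# |f''| for f(x) = tanh(20*(x - 1)), which varies sharply near x = 1
d2f = np.abs(-2 * 20**2 * np.tanh(20*(x_uniform - 1)) / np.cosh(20*(x_uniform - 1))**2)
deltas = 1 / (d2f + 1e-3)            # small spacing where |f''| is large
domain = deltas.cumsum()
domain = domain / domain[-1] * 2.0   # rescale back onto [0, 2]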
I'm just spitballing here... (as I don't have time to try this out for real)...
Your data looks (roughly) linear on a log-log plot (at least, each segment does), so I might consider doing a sort-of integration in log space:

log_x = np.log(x)
log_y = np.log(y)
Now, for each of your points, you can get the slope (and intercept) in log-log space:
rise = np.diff(log_y)
run = np.diff(log_x)
slopes = rise / run
And, similarly, the intercept can be calculated:

# y = mx + b
# :. b = y - mx
intercepts = log_y[:-1] - slopes * log_x[:-1]
Alright, now we have a bunch of (straight) lines in log-log space. But a straight line in log-log space, log(y) = k*log(x) + b, corresponds to y = exp(b) * x^k in real space. We can integrate that easily enough (for k != -1): the antiderivative is exp(b)/(k+1) * x^(k+1), so...
def _eval_log_log_integrate(a, k, x):
    # a is the log-space intercept, so the real-space prefactor is exp(a)
    return np.exp(a)/(k+1) * x ** (k+1)

def log_log_integrate(a, k, x1, x2):
    return _eval_log_log_integrate(a, k, x2) - _eval_log_log_integrate(a, k, x1)

partial_integrals = []
for a, k, x_lower, x_upper in zip(intercepts, slopes, x[:-1], x[1:]):
    partial_integrals.append(log_log_integrate(a, k, x_lower, x_upper))

total_integral = sum(partial_integrals)
You'll want to check my math -- It's been a while since I've done this sort of thing :-)
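As a sanity check of the piecewise power-law integration above: for y = x**2 every log-log segment has slope 2 and intercept 0, and the exact integral over [1, 10] is (10**3 - 1)/3 = 333:

import numpy as np

x = np.logspace(0, 1, 50)
y = x**2
log_x, log_y = np.log(x), np.log(y)
slopes = np.diff(log_y) / np.diff(log_x)
intercepts = log_y[:-1] - slopes * log_x[:-1]
total = sum(log_log_integrate(a, k, x1, x2)
            for a, k, x1, x2 in zip(intercepts, slopes, x[:-1], x[1:]))
print(total)   # ~333.0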
1) The Cool Approach
At the moment I implemented an 'adaptive refinement' approach inspired by hydrodynamics techniques. I have a function which I want to sample, f, and I choose some initial array of sample points x_i. I construct a "sampling" function g, which determines where to insert new sample points.
In this case I chose g as the slope of log(f) --- since I want to resolve rapid changes in log space. I then divide the span of g into L=3 refinement levels. If g(x_i) exceeds a refinement level, that span is subdivided into N=2 pieces, those subdivisions are added into the samples and are checked against the next level. This yields something like this:
The solid grey line is the function I want to sample, and the black crosses are my initial sampling points.
The dashed grey line is the derivative of the log of my function.
The colored dashed lines are my 'refinement levels'
The colored crosses are my refined sampling points.
This is all shown in log-space.
2) The Simple Approach
After I finished (1), I realized that I probably could have just chosen a maximum spacing in y and chosen x-spacings to achieve that. Similarly, one can just divide the function evenly in y and find the corresponding x points. The results of this are shown below:
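A sketch of that simple approach (assuming the function is available as a callable f and is monotonically increasing over the domain, so np.interp can invert it; for a decreasing f, sort both arrays by y first):

import numpy as np

x_dense = np.logspace(13, 18, 10000)   # domain from the plot in the question
y_dense = f(x_dense)                   # f as in the question (assumed callable)
# evenly spaced targets in y, then invert to get the x sampling
y_targets = np.linspace(y_dense.min(), y_dense.max(), 300)
x_samples = np.interp(y_targets, y_dense, x_dense)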
A simple approach would be to split the x-axis-array into three parts and use different spacing for each of them. It would allow you to maintain the total number of points and also the required spacing in different regions of the plot. For example:
x = np.linspace(10**13, 10**15, 100)
x = np.append(x, np.linspace(10**15, 10**16, 100))
x = np.append(x, np.linspace(10**16, 10**18, 100))
You may want to choose a better spacing based on your data, but you get the idea.
