I'm new to Bayesian inference. I'm trying to adapt a grid-search code I wrote to a Bayesian Markov chain Monte Carlo (MCMC) approach, and I'm using PyMC3. My problem is that the model has to call a function that can't be rewritten in Theano syntax. (The function relies on a piece of Fortran code in an f2py wrapper.)
Here's the code I'm working with:
with pm.Model() as model:
    # Independent parameters (priors centred on the grid-search starting values)
    x = pm.Normal('x', sx, sd=incx*float(nrangex)/2.0)
    y = pm.Normal('y', sy, sd=incy*float(nrangey)/2.0)
    depth = pm.Normal('depth', sdepth, sd=incdepth*float(nrangedepth)/2.0)
    length = pm.Normal('length', slength, sd=inclength*float(nrangelength)/2.0)
    width = pm.Normal('width', swidth, sd=incwidth*float(nrangewidth)/2.0)
    strike = pm.Normal('strike', sstrike, sd=incstrike*float(nrangestrike)/2.0)
    dip = pm.Normal('dip', sdip, sd=incdip*float(nrangedip)/2.0)
    rake = pm.Normal('rake', srake, sd=incrake*float(nrangerake)/2.0)

    # Forward model (this is the part that doesn't work)
    los_disp = zne2los(getdisp(lon2km(dsidata['lon'], x), lat2km(dsidata['lat'], y),
                               depth, length, width, strike, dip, rake))

    # Likelihood
    odisp = pm.Normal('odisp', mu=los_disp, sd=0.5, observed=dsidata['disp'])

    # Sampling
    trace = pm.sample(100)
What this code is trying to do is invert for earthquake source parameters from ground displacement data. The dsidata data frame contains ground displacement data as a function of latitude and longitude. I'm trying to solve for the eight earthquake source parameters that produce the best fit to this ground surface displacement.
The getdisp function simply cannot be rewritten for Theano because it calls a piece of Fortran code that forward-models ground surface displacement from earthquake source parameters. Is there a way to wrap non-Theano code so it can be used inside a Theano graph? Is there another way to do this?
Since I'm new to this and I can't find many good examples to look at, there may well be other errors in the code. Are there other mistakes I'm making?
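One possibility I've come across is Theano's as_op decorator, which wraps an arbitrary Python function as a black-box Theano Op (without a gradient, so I would be limited to gradient-free samplers such as Metropolis). Here is a minimal sketch of what I have in mind; the input/output types are my guesses, assuming getdisp and zne2los take and return NumPy arrays:

import theano.tensor as tt
from theano.compile.ops import as_op

# Wrap the Fortran-backed forward model as a black-box Theano Op.
# itypes/otypes are assumptions: eight scalar source parameters in,
# one vector of line-of-sight displacements out.
@as_op(itypes=[tt.dscalar]*8, otypes=[tt.dvector])
def forward_model(x, y, depth, length, width, strike, dip, rake):
    return zne2los(getdisp(lon2km(dsidata['lon'], x), lat2km(dsidata['lat'], y),
                           depth, length, width, strike, dip, rake))

Inside the model block, los_disp would then become forward_model(x, y, depth, length, width, strike, dip, rake), and sampling would need a gradient-free step method, e.g. trace = pm.sample(1000, step=pm.Metropolis()).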
Context: I am trying to model ocean surface waves. I have experimented with a linear superposition of sinusoids, and with the fact that the phase speed (following a wave's crest) is twice the group speed (following a group of waves). More recently, I used such a random superposition of waves to model the wiggling lines formed by the refraction (the bending of a ray's trajectory at the interface between air and water) of light, called caustics... So far so good...
Observing real-life ocean waves taught me that, while a single wave is well approximated by a sinusoidal waveform, each wave is qualitatively a bit different. Typical for these surface gravity waves are the sharp crests and flat troughs. As a matter of fact, modelling ocean waves is on the one hand very useful (ocean dynamics and its impact on climate, modelling tides, tsunamis, diffraction in a bay to predict coastline evolution, ...) but quite demanding despite a well-known mathematical model. Starting from the Navier-Stokes equations for an incompressible fluid (water) in a gravitational field leads, under certain simplifying assumptions, to Luke's variational principle. Further simplifications lead to the approximate solution given by Stokes, which gives the following shape as a sum of different harmonics:
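In the notation of the code below (amplitude $a$, wavenumber $k$, keeping terms up to third order), this Stokes expansion of the surface elevation reads:

$$\eta(x) = a\left[\left(1 - \tfrac{1}{16}(ka)^2\right)\cos(kx) + \tfrac{1}{2}\,ka\,\cos(2kx) + \tfrac{3}{8}(ka)^2\cos(3kx)\right]$$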
This seems good enough for the moment; I will capture this shape in this notebook and, notably, check whether it applies to a random mixture of such waves... However, a simple implementation such as
import numpy as np
import matplotlib.pyplot as plt

def stokes(pos, a=.3/2/np.pi, k=2*np.pi):
    # k*a is a dimensionless expansion parameter
    elevation = (1 - 1/16*(k*a)**2) * np.cos(k*pos)
    elevation += 1/2*(k*a)**1 * np.cos(2*k*pos)
    elevation += 3/8*(k*a)**2 * np.cos(3*k*pos)
    return a * elevation

N_pos = 501
pos = np.linspace(0, 1, N_pos, endpoint=True)

# fig_width and phi (the golden ratio) are defined earlier in the notebook
fig, ax = plt.subplots(figsize=(fig_width, fig_width/phi**3))
ax.plot(pos, stokes(pos))
ax.set_xlim(np.min(pos), np.max(pos));

fig, ax = plt.subplots(figsize=(fig_width, fig_width/phi**3))
for a in np.linspace(.001, .2, 10, endpoint=True):
    elevation = stokes(pos, a=a)
    ax.plot(pos, elevation)
ax.set_xlim(np.min(pos), np.max(pos));
gives something similar to:
(The full implementation gives more details.)
My present implementation does not seem to match what is displayed on the Wikipedia page, and I cannot spot the bug I may have introduced... Does anyone spot the bug?
I have several equations as follows:
windGust8 = -53.3 + (28.3 * log(windSpeed))
windGust7to8 = -70.0 + (30.8 * log(windSpeed))
windGust6to7 = -29.2 + (17.7 * log(windSpeed))
windGust6 = -32.3 + (16.7 * log(windSpeed))
where windSpeed is the wind speed from a model at 850mb.
I then use lapse rates from a model to determine which equation to use, as follows:
windGustTemp = where(greater(lapseRate,8.00),windGust8,windGustTemp)
windGustTemp = where(logical_and(less_equal(lapseRate,8.00),greater(lapseRate,7.00)),windGust7to8,windGustTemp)
windGustTemp = where(logical_and(less_equal(lapseRate,7.00),greater(lapseRate,6.00)),windGust6to7,windGustTemp)
windGustTemp = where(less_equal(lapseRate,6.00),windGust6,windGustTemp)
return windGustTemp
When I plot these equations as filled contours on a map, there are sharp gradients, as you would expect. What I would like to do is interpolate between these equations and perhaps smooth a little to give the graphic a cleaner look. I assume I would use scipy.interpolate.interp1d, but I am not sure how to apply it in this circumstance. Any help would be much appreciated! Thanks!
EDIT to add output image example
This is an example of the output image generated currently. You can see how abrupt the edges are. I want to interpolate and smooth out this data.
There are data points every 1 km in the model data. For each point, the lapse rate is determined first...let's say 7.4. Based on the logic above, the code then knows to use the equation associated with lapse rates between 7 and 8. It then takes the model wind speed for that point, plugs it into the equation, and gets a number that it plots on the map. This is done for all the model data points and generates the image above.
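To clarify what I am after, here is a rough sketch of the kind of blending I have in mind: instead of switching equations abruptly at the lapse-rate thresholds, the intercepts and slopes could be interpolated linearly in lapse rate (the anchor lapse-rate values below are just my guesses for representative bin midpoints):

import numpy as np

# Representative lapse rates for each regression (assumed midpoints of the bins)
lapse_anchor = np.array([5.5, 6.5, 7.5, 8.5])
intercepts   = np.array([-32.3, -29.2, -70.0, -53.3])
slopes       = np.array([ 16.7,  17.7,  30.8,  28.3])

def smooth_gust(lapseRate, windSpeed):
    # Interpolate the coefficients in lapse rate, then apply the blended equation;
    # works elementwise on 2D model grids.
    a = np.interp(lapseRate, lapse_anchor, intercepts)
    b = np.interp(lapseRate, lapse_anchor, slopes)
    return a + b * np.log(windSpeed)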
I'm trying to use the Python-Meep package to run some FDTD simulations. First, I want to simulate a plane wave travelling through vacuum in the 'z' direction. I have problems setting the source properly in the three-dimensional case. In the 2D case, I can make the source a line that touches the borders of the computational matrix. In 3D it looks like that's impossible. Below are simple examples.
2D case: the source is a line from (x,y)=(0, .1e-6) to (x,y)=(15e-6, .1e-6) (from border to border). Thanks to this, the plane wave travels unperturbed to the opposite end of the matrix (where it is reflected).
import meep_mpi as meep

x, y, voxelsize = 15e-6, 15e-6, 50e-9
vol = meep.vol2d(x, y, 1/voxelsize)

class Model(meep.Callback):
    def __init__(self):
        meep.Callback.__init__(self)
    def double_vec(self, r):
        return 1  # vacuum everywhere (relative permittivity = 1)

model = Model()
meep.set_EPS_Callback(model.__disown__())
struct = meep.structure(vol, meep.EPS)
f = meep.fields(struct)
f.add_volume_source(meep.Ex,
                    meep.continuous_src_time(473.755e12/3e8),  # 632.8 nm
                    meep.volume(meep.vec(0e-6, .1e-6), meep.vec(15e-6, .1e-6)))

while f.time()/3e8 < 30e-15:
    f.step()

meep.del_EPS_Callback()
output = meep.prepareHDF5File("Ex1.h5")
f.output_hdf5(meep.Ex, vol.surroundings(), output)
del(output)
3D case: the source is a plane from (x,y,z)=(0, 0, .1e-6) to (x,y,z)=(15e-6, 15e-6, .1e-6). This should create a plane from border to border of the matrix. However, for some unknown reason, the source does not touch the boundary (there is a small empty space), and whatever I do, I cannot force it to touch it. As a result, I cannot create a plane wave travelling in the 'z' direction. So far I have tried: (a) explicitly giving the no_pml argument, (b) giving the pml(0) argument, (c) changing the sampling, (d) changing the 'z' position of the source. With no luck. I would be grateful for any suggestions.
import meep_mpi as meep

x, y, z, voxelsize = 15e-6, 15e-6, 15e-6, 50e-9
vol = meep.vol3d(x, y, z, 1/voxelsize)

class Model(meep.Callback):
    def __init__(self):
        meep.Callback.__init__(self)
    def double_vec(self, r):
        return 1  # vacuum everywhere (relative permittivity = 1)

model = Model()
meep.set_EPS_Callback(model.__disown__())
struct = meep.structure(vol, meep.EPS)
f = meep.fields(struct)
f.add_volume_source(meep.Ex,
                    meep.continuous_src_time(473.755e12/3e8),  # 632.8 nm
                    meep.volume(meep.vec(0, 0, .1e-6), meep.vec(15e-6, 15e-6, .1e-6)))

while f.time()/3e8 < 30e-15:
    f.step()

meep.del_EPS_Callback()
output = meep.prepareHDF5File("Ex1.h5")
f.output_hdf5(meep.Ex, vol.surroundings(), output)
del(output)
Your inability to launch a homogeneous plane wave with the electric field polarised along the X axis indeed manifests itself at the simulation-volume boundaries perpendicular to the Y axis, where the field amplitude drops to zero. This trouble does not occur at the two boundaries perpendicular to X.
This is, however, a fully physical solution: by default, the boundaries behave as perfect electric/magnetic conductors, and the electric-field component parallel to a PEC must vanish in its vicinity. (Good conductors screen the external electric field.)
If you need an exact plane wave, you will have to append two more commands after the initialisation of the fields, to define the boundaries as periodic:
f.use_bloch(meep.X, 0)
f.use_bloch(meep.Y, 0)
Note that the second parameter does not have to be zero; choosing a non-zero Bloch wavevector enables the definition of arbitrarily inclined wave sources.
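In your 3D script these calls would go right after the fields object is created, along these lines (sketch):

f = meep.fields(struct)
# Make the X and Y boundaries periodic (Bloch with k = 0) so the Ex plane wave
# is not forced to zero at the PEC walls perpendicular to Y.
f.use_bloch(meep.X, 0)
f.use_bloch(meep.Y, 0)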
For a more advanced (and more convenient) example, see https://github.com/FilipDominec/python-meep-utils/blob/master/scatter.py
I'm trying to implement Fabian Timm's pupil-tracking algorithm (http://www.inb.uni-luebeck.de/publikationen/pdfs/TiBa11b.pdf), and the equation requires that I find all possible distance vectors within a coordinate grid. My implementation isn't working correctly, and I think it's because of this step. I know I should speed it up using broadcasting, but any insight into other possible improvements would be appreciated. I would also be curious to know any tricks for testing code like this. Did I do this right?
eye_len = np.arange(eye.shape[0]) # this is usually in the range of 30-40
xx,yy = np.meshgrid(eye_len,eye_len) # coordinates of every point in the eye image
X1,X2 = np.meshgrid(xx.ravel(),xx.ravel())
Y1,Y2 = np.meshgrid(yy.ravel(),yy.ravel())
Dx,Dy = [X1-X2,Y1-Y2]
Dlen = np.sqrt(Dx**2+Dy**2)
Dx,Dy = [Dx/Dlen, Dy/Dlen] #normalized
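For reference, this is the broadcasting version I was considering (a rough sketch, not yet verified against the paper; it also zeroes out the self-pairs on the diagonal, where the length is zero, instead of leaving NaNs):

import numpy as np

n = eye.shape[0]                       # assume a square eye image, ~30-40 pixels per side
ys, xs = np.mgrid[0:n, 0:n]
pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)   # (n*n, 2) pixel coordinates

# All pairwise displacement vectors via broadcasting: shape (n*n, n*n, 2)
D = pts[None, :, :] - pts[:, None, :]
Dlen = np.linalg.norm(D, axis=-1)
with np.errstate(invalid='ignore', divide='ignore'):
    Dunit = D / Dlen[..., None]        # normalised displacement vectors
Dunit[Dlen == 0] = 0                   # self-pairs have zero length; set to 0 rather than NaN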
I've been trying to implement local ridge orientation for fingerprints in Python. I've used the gradient method, using the Sobel operator to get the gradients I need. However, it turned out that this method has quite a lot of flaws, especially around 90 degrees. I could include the code that I've written so far, but since it does not work as I want, I don't know if it's needed. I've also looked at the line-segment method; however, I'm working with latent fingerprints, so it is hard to know whether one should look for the maximum of black or white in the line segments. I've also tried to implement an algorithm to detect the area of maximum concentration of continuous lines, but I couldn't get this to work. Any suggestions for other algorithms to use?
EDIT:
I'm using a helper function to apply this function to blocks of the image, but that is hardly relevant:
import cv2
import math
import numpy
import scipy.ndimage

def lro(im_np):
    orientsmoothsigma = 3

    # Second-order Sobel derivatives
    Gxx = cv2.Sobel(im_np, -1, 2, 0)
    Gxy = cv2.Sobel(im_np, -1, 1, 1)
    Gyy = cv2.Sobel(im_np, -1, 0, 2)

    Gxx = scipy.ndimage.filters.gaussian_filter(Gxx, orientsmoothsigma)
    Gxy = numpy.multiply(scipy.ndimage.filters.gaussian_filter(Gxy, orientsmoothsigma), 2.0)
    Gyy = scipy.ndimage.filters.gaussian_filter(Gyy, orientsmoothsigma)

    denom = numpy.sqrt(numpy.add(numpy.power(Gxy, 2), numpy.power(numpy.subtract(Gxx, Gyy), 2)))  # + eps
    sin2theta = numpy.divide(Gxy, denom)                        # sine and cosine of doubled angles
    cos2theta = numpy.divide(numpy.subtract(Gxx, Gyy), denom)

    sze = math.floor(6*orientsmoothsigma)
    if not sze % 2:
        sze = sze + 1

    # Smoothed sine and cosine of the doubled angles
    cos2theta = scipy.ndimage.filters.gaussian_filter(cos2theta, orientsmoothsigma)
    sin2theta = scipy.ndimage.filters.gaussian_filter(sin2theta, orientsmoothsigma)  # filter2(f, sin2theta)

    orientim = math.pi/2. + numpy.divide(numpy.arctan2(sin2theta, cos2theta), 2.)
    return orientim
I worked on this a long time ago and wrote a paper on it. As I remember, look for both black and white ridges (invert the image and repeat the analysis) to get more results. I do remember some sensitivity at certain angles. You probably need something with more spatial extent than a pure Sobel kernel; try to reach out to as many pixels as practical.
You may want to have a look at the work of Raymond Thai (Fingerprint Image Enhancement and Minutiae Extraction), if you haven't already.
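For reference, here is a rough sketch of the more common first-derivative variant of the gradient method (smoothed products of Gx and Gy, then the doubled-angle arctangent), with a wider Sobel aperture for more spatial support; the parameter values are placeholders, not taken from your code or any particular paper:

import cv2
import numpy as np
import scipy.ndimage

def lro_firstorder(im, sigma=3, ksize=7):
    # First-order gradients with a larger Sobel aperture for more spatial support
    gx = cv2.Sobel(im.astype(np.float64), cv2.CV_64F, 1, 0, ksize=ksize)
    gy = cv2.Sobel(im.astype(np.float64), cv2.CV_64F, 0, 1, ksize=ksize)

    # Smoothed products of first derivatives (rather than second derivatives)
    gxx = scipy.ndimage.gaussian_filter(gx * gx, sigma)
    gyy = scipy.ndimage.gaussian_filter(gy * gy, sigma)
    gxy = scipy.ndimage.gaussian_filter(gx * gy, sigma)

    # Doubled-angle representation avoids the 0/180 degree wrap-around
    return 0.5 * np.arctan2(2.0 * gxy, gxx - gyy) + np.pi / 2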