I am trying to create an above-head-view heat map of relatively sparse EEG data (27 electrodes). I convert the x,y cartesian coordinates of the EEG electrodes to polar coordinates and attempt to map them as such. Each x,y coordinate has an associated value (the Hurst exponent, if you want to know), and I would like the color around that location to reflect it.
I started with working code from this page and tried to adapt it to my problem. Unfortunately, my adaptation is not working.
Here is my code:
from __future__ import division, print_function, absolute_import
from pylab import *
import numpy as np
from scipy.interpolate import griddata
# Setting the parameters that define the circle
max_r = 1
max_theta = 2.0 * np.pi
###Cartesian coordinates of the 27 electrodes
###x axis goes from back of head to nose, with nose being the positive direction
###y axis goes from ear to ear, with toward right ear (from perspective of self) being the negative direction
# X coordinates
X = [0.95, 0.95, 0.673, 0.673, 0.000000000000000044, 0.000000000000000044,
-0.673, -0.673, -0.95, -0.95, 0.587, 0.587, 0.0000000000000000612, 0.0000000000000000612,
-0.587, -0.587, 0.719, 0.00000000000000000000000000000000375, -0.719,
0.375, 0.375, 0.999, -0.999, -0.375, -0.375, -0.9139, -0.9139,.5,.6,.7,.8]
# Y coordinates
Y = [0.309, -0.309, 0.545, -0.545, 0.719, -0.719, 0.545, -0.545,
0.309, -0.309, 0.809, -0.809, 0.999, -0.999, 0.809, -0.809, 0, -0.0000000000000000612,
-0.0000000000000000881, 0.375, -0.375, 0, -0.000000000000000122, 0.375, -0.375, 0.2063, -0.2063,.5,.6,.7,.8]
# Convert cartesian coordinates to polar
def convert_to_polar(x, y):
    theta = np.arctan2(y, x)
    r = np.sqrt(x ** 2 + y ** 2)
    return theta, r
# Arrays that house the theta and radii from converted cartesian coordinates.
Thetas = []
Rs = []
# Converting cartesian coordinates to polar, for each electrode
for i in range(0, 31):
    theta, r = convert_to_polar(X[i], Y[i])
    Thetas.append(theta)
    Rs.append(r)
# Making a two column list that contains the converted thetas and radii, so the appropriate shape is attained.
points = [Thetas,Rs]
values = [[.51,.71,.81,.91,.72,.87,.90,.67,.78,.89,.56,.45,.68,.96,.69,.63,.37,.85,.92,.70,.74,.97,.35,.76,.68,.46,.68,90,91,92,93],
[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,30,78,56,90]]
# now we create a grid of values, interpolated from our random sample above
theta = np.linspace(0.0, max_theta, 100)
r = np.linspace(0, max_r, 200)
grid_r, grid_theta = np.meshgrid(r, theta)
data = griddata(points, values, (grid_r, grid_theta),fill_value=0)
# Create a polar projection
ax1 = plt.subplot(projection="polar")
ax1.pcolormesh(theta, r, data.T)
plt.show()
I get an error:
Traceback (most recent call last):
  File "/Users/mac/NIH/EEG/Python/Testing heat map", line 50, in <module>
    data = griddata(points, values, (grid_r, grid_theta), fill_value=0)
  File "/Users/mac/anaconda/lib/python2.7/site-packages/scipy/interpolate/ndgriddata.py", line 217, in griddata
    rescale=rescale)
  File "scipy/interpolate/interpnd.pyx", line 246, in scipy.interpolate.interpnd.LinearNDInterpolator.__init__ (scipy/interpolate/interpnd.c:4980)
  File "scipy/spatial/qhull.pyx", line 1747, in scipy.spatial.qhull.Delaunay.__init__ (scipy/spatial/qhull.c:15918)
  File "scipy/spatial/qhull.pyx", line 415, in scipy.spatial.qhull._Qhull.__init__ (scipy/spatial/qhull.c:5108)
**scipy.spatial.qhull.QhullError: QH6214 qhull input error: not enough points(2) to construct initial simplex (need 33)**

While executing: | qhull d Qbb Qt Q12 Qx Qz Qc
Options selected for Qhull 2015.2.r 2016/01/18: run-id 1980533833
  delaunay  Qbbound-last  Qtriangulate  Q12-no-wide-dup  Qxact-merge
  Qz-infinity-point  Qcoplanar-keep  _zero-centrum  Qinterior-keep
  Q3-no-merge-vertices-dim-high
The bolded portion is what I am trying to understand. When I add more points (that is, when I add more entries to the lists X and Y that become polar coordinates), the number of points the error claims to need keeps increasing, always staying two ahead of the number of points I have supplied.
Does anyone have any idea how to deal with this?
The first argument to griddata must have shape (n, D), where n is the number of points, and D is the dimension of those points. You passed in points = [Thetas, Rs], where Thetas and Rs are lists with length 31. When that input is converted to a two-dimensional array, it will have shape (2, 31). So griddata thinks you have passed in just two 31-dimensional points. That is also why the number of points the error "needs" stays two ahead of your input: each electrode you add becomes an extra dimension rather than an extra point, and qhull needs dimension + 2 points to build its initial simplex.
To fix this, you can create points using numpy.column_stack so that it is an array with shape (31, 2), e.g.:
points = np.column_stack((Thetas, Rs))
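For a minimal sketch of the fixed call (assuming one scalar value per electrode, e.g. the first row of your values list, since griddata expects exactly one value per point):

import numpy as np
from scipy.interpolate import griddata

points = np.column_stack((Thetas, Rs))  # shape (31, 2): one (theta, r) pair per electrode
vals = np.asarray(values[0])            # shape (31,): one scalar per electrode

theta = np.linspace(0.0, max_theta, 100)
r = np.linspace(0, max_r, 200)
grid_r, grid_theta = np.meshgrid(r, theta)

# query coordinates in the same order as the columns of `points`: (theta, r)
data = griddata(points, vals, (grid_theta, grid_r), method='linear', fill_value=0)

Note also that the query tuple has to list the coordinates in the same order as the columns of points, so it should be (grid_theta, grid_r) here rather than (grid_r, grid_theta).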
I was wondering why the values of the Weibull PDF computed with the prebuilt function dweibull.pdf are roughly half of what they should be.
I ran a test: for the same x I created the Weibull PDF with A=10 and K=2 twice, once by writing the formula myself and once with the prebuilt dweibull function.
import numpy as np
from scipy.stats import exponweib,dweibull
import matplotlib.pyplot as plt
from matplotlib.figure import Figure
K=2.0
A=10.0
x=np.arange(0.,20.,1)
#own function
def weib(data,a,k):
    return (k / a) * (data / a)**(k - 1) * np.exp(-(data / a)**k)
pdf1=weib(x,A,K)
print sum(pdf1)
#prebuilt function
dist=dweibull(K,1,A)
pdf2=dist.pdf(x)
print sum(pdf2)
f=plt.figure()
suba=f.add_subplot(121)
suba.plot(x,pdf1)
suba.set_title('pdf own function')
subb=f.add_subplot(122)
subb.plot(x,pdf2)
subb.set_title('pdf dweibull')
f.show()
It seems that with dweibull the PDF values are about half of what they should be, which looks wrong: the values should sum to roughly 1 in total, not around 0.5 as they do with dweibull. Writing the formula myself, the sum is around 1.
scipy.stats.dweibull implements the double Weibull distribution, whose support is the whole real line. Half of its probability mass lies on the negative axis, which is why your sum over x >= 0 comes out near 0.5 instead of 1. Your function weib corresponds to the PDF of scipy's weibull_min distribution.
Compare your function weib to weibull_min.pdf:
In [128]: from scipy.stats import weibull_min
In [129]: x = np.arange(0, 20, 1.0)
In [130]: K = 2.0
In [131]: A = 10.0
Your implementation:
In [132]: weib(x, A, K)
Out[132]:
array([ 0. , 0.019801 , 0.03843158, 0.05483587, 0.0681715 ,
0.07788008, 0.08372116, 0.0857677 , 0.08436679, 0.08007445,
0.07357589, 0.0656034 , 0.05686266, 0.04797508, 0.03944036,
0.03161977, 0.02473752, 0.01889591, 0.014099 , 0.0102797 ])
scipy.stats.weibull_min.pdf:
In [133]: weibull_min.pdf(x, K, scale=A)
Out[133]:
array([ 0. , 0.019801 , 0.03843158, 0.05483587, 0.0681715 ,
0.07788008, 0.08372116, 0.0857677 , 0.08436679, 0.08007445,
0.07357589, 0.0656034 , 0.05686266, 0.04797508, 0.03944036,
0.03161977, 0.02473752, 0.01889591, 0.014099 , 0.0102797 ])
By the way, there is a mistake in this line of your code:
dist=dweibull(K,1,A)
The order of the parameters is shape, location, scale, so you are setting the location parameter to 1. That's why the values in your second plot are shifted by one. That line should have been
dist = dweibull(K, 0, A)
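A quick sanity check of both points (a minimal sketch; Python 2 print to match the code in the question):

import numpy as np
from scipy.stats import dweibull, weibull_min

K, A = 2.0, 10.0
x = np.arange(0., 20., 1)

# For x >= 0 the double Weibull density is exactly half the weibull_min
# density, because the other half of its mass lies on the negative axis.
print np.allclose(dweibull.pdf(x, K, 0, A), 0.5 * weibull_min.pdf(x, K, scale=A))  # True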
If I

1. define an intrinsic camera matrix A and poses [rvec, ...], [tvec, ...],
2. use them as parameters in cv2.projectPoints to generate the images a camera would produce when it views a grid of circles,
3. detect the features (cv2.findCirclesGrid) in the resulting images, and
4. use cv2.calibrateCamera on those feature detections to recover the camera parameters,

shouldn't I recover the original intrinsic and extrinsic parameters?
The full code at the bottom of this question does this process, but does not
recover the original camera parameters:
Kept 4 full captures out of 4 images
calibration error 133.796093439
Simulation matrix:
[[ 5.00000000e+03 0.00000000e+00 3.20000000e+02]
[ 0.00000000e+00 5.00000000e+03 2.40000000e+02]
[ 0.00000000e+00 0.00000000e+00 1.00000000e+00]]
Estimated matrix:
[[ 1.0331118 0. 317.58445168]
[ 0. 387.49075886 317.98450481]
[ 0. 0. 1. ]]
I.e. the mean error is huge, and the estimated camera matrix does not look like the simulation camera matrix originally used to generate the test images. I'd expect this sort of closed-loop simulation to produce a very good estimate of the intrinsic camera matrix. What am I doing wrong that makes this approach to validating cv2.calibrateCamera fail?
Edits in response to AldurDisciple's comment
1) Added a new function, direct_generation_of_points, in the code below. It skips the image-generation step and uses cv2.projectPoints directly to compute the circle locations that are passed into cv2.calibrateCamera. This works correctly.
But this is confusing: the estimated circle locations (derived from my simulated images) are typically within about a tenth of a pixel of the exact ones; the main difference is that the points come out in a different order:
# compare the y-components
In [245]: S.dots[0][:,0,1]
Out[245]:
array([ 146.33618164, 146.30953979, 146.36413574, 146.26707458,
146.17976379, 146.30110168, 146.17236328, 146.35955811,
146.33454895, 146.36776733, 146.2612915 , 146.21359253,
146.23895264, 146.27839661, 146.27764893, 177.51347351,
177.57495117, 177.53858948, 177.48587036, 177.63012695,
177.48597717, 177.51727295, 177.5202179 , 177.52545166,
177.57287598, 177.51008606, 177.51296997, 177.53715515,
177.53053284, 177.58164978, 208.69573975, 208.7252655 ,
208.69616699, 208.73510742, 208.63375854, 208.66760254,
208.71517944, 208.74360657, 208.62438965, 208.59814453,
208.67456055, 208.72662354, 208.70921326, 208.63339233,
208.70820618, 239.8401947 , 240.06373596, 239.87176514,
240.04118347, 239.97781372, 239.97572327, 240.04475403,
239.95411682, 239.80995178, 239.94726562, 240.01327515,
239.82675171, 239.99989319, 239.90107727, 240.07745361,
271.31692505, 271.28417969, 271.28216553, 271.33111572,
271.33279419, 271.33584595, 271.30758667, 271.21173096,
271.28588867, 271.3387146 , 271.33770752, 271.2104187 ,
271.38504028, 271.25054932, 271.29376221, 302.52420044,
302.47903442, 302.41482544, 302.39868164, 302.47793579,
302.49789429, 302.45016479, 302.48071289, 302.50463867,
302.51422119, 302.46307373, 302.42077637, 302.60791016,
302.48162842, 302.46142578, 333.70709229, 333.75698853,
333.64157104, 333.64926147, 333.6647644 , 333.69546509,
333.73342896, 333.76846313, 333.57540894, 333.76605225,
333.74307251, 333.60968018, 333.7739563 , 333.70132446,
333.62057495], dtype=float32)
In [246]: S.exact_dots[0][:,0,1]
Out[246]:
array([ 146.25, 177.5 , 208.75, 240. , 271.25, 302.5 , 333.75,
146.25, 177.5 , 208.75, 240. , 271.25, 302.5 , 333.75,
<< snipped 10 identical rows >>
146.25, 177.5 , 208.75, 240. , 271.25, 302.5 , 333.75,
146.25, 177.5 , 208.75, 240. , 271.25, 302.5 , 333.75,
146.25, 177.5 , 208.75, 240. , 271.25, 302.5 , 333.75], dtype=float32)
Here's the working version of what I'm trying to do:
import scipy
import cv2
import itertools
def direct_generation_of_points():
    ''' Skip the part where we actually generate the image,
        just use cv2.projectPoints to generate the exact locations
        of the grid centers.
        ** This seems to work correctly **
    '''
    S=Setup()
    t=tvec(0.0,0.0,1.6) # keep the camera 1.6 meters away from target, looking at the origin
    rvecs=[ rvec(0.0,0.0,0.0), rvec(0.0, scipy.pi/6,0.0), rvec(scipy.pi/8,0.0,0.0), rvec(0.0,0.0,0.5) ]
    S.poses=[ (r,t) for r in rvecs ]
    S.images='No images: just directly generate the extracted circle locations'
    S.dots=S.make_locations_direct()
    calib_flags=cv2.CALIB_ZERO_TANGENT_DIST|cv2.CALIB_SAME_FOCAL_LENGTH
    calib_flags=calib_flags|cv2.CALIB_FIX_K3|cv2.CALIB_FIX_K4
    calib_flags=calib_flags|cv2.CALIB_FIX_K5|cv2.CALIB_FIX_K6
    S.calib_results=cv2.calibrateCamera( [S.grid,]*len(S.dots), S.dots, S.img_size, cameraMatrix=S.A, flags=calib_flags)
    print "calibration error ", S.calib_results[0]
    print "Simulation matrix: \n", S.A
    print "Estimated matrix: \n", S.calib_results[1]
    return S
def basic_test():
    ''' Uses a camera setup to
        (1) generate an image of a grid of circles
        (2) detect those circles
        (3) generate an estimated camera model from the circle detections
        ** This does not work correctly **
    '''
    S=Setup()
    t=tvec(0.0,0.0,1.6) # keep the camera 1.6 meters away from target, looking at the origin
    rvecs=[ rvec(0.0,0.0,0.0), rvec(0.0, scipy.pi/6,0.0), rvec(scipy.pi/8,0.0,0.0), rvec(0.0,0.0,0.5) ]
    S.poses=[ (r,t) for r in rvecs ]
    S.images=S.make_images()
    S.dots=extract_dots( S.images, S.grid_size[::-1] )
    S.exact_dots=S.make_locations_direct()
    calib_flags=cv2.CALIB_ZERO_TANGENT_DIST|cv2.CALIB_SAME_FOCAL_LENGTH
    calib_flags=calib_flags|cv2.CALIB_FIX_K3|cv2.CALIB_FIX_K4|cv2.CALIB_FIX_K5
    calib_flags=calib_flags|cv2.CALIB_FIX_K6
    S.calib_results=cv2.calibrateCamera( [S.grid,]*len(S.dots), S.dots, S.img_size, cameraMatrix=S.A, flags=calib_flags)
    print "calibration error ", S.calib_results[0]
    print "Simulation matrix: \n", S.A
    print "Estimated matrix: \n", S.calib_results[1]
    return S
class Setup(object):
    ''' Class to simulate a camera, produces images '''
    def __init__(self):
        self.img_size=(480,640)
        self.A=scipy.array( [ [5.0e3, 0.0, self.img_size[1]/2],
                              [ 0.0, 5.0e3, self.img_size[0]/2],
                              [ 0.0, 0.0, 1.0 ] ],
                            dtype=scipy.float32 )
        # Nx, Ny, spacing, dot-size
        self.grid_spec=( 15, 7, 0.01, 0.001 )
        self.grid=square_grid_xy( self.grid_spec[0], self.grid_spec[1], self.grid_spec[2])
        # a pose is a pair: rvec, tvec
        self.poses=[ ( rvec(0.0, scipy.pi/6, 0.0), tvec( 0.0,0.0,1.6) ),
                   ]
    @property # grid_size is accessed as an attribute (S.grid_size[::-1]) above
    def grid_size(self):
        return self.grid_spec[:2]
    def make_images(self):
        return [make_dots_image(self.img_size, self.A, rvec, tvec, self.grid, self.grid_spec[-1] ) for (rvec,tvec) in self.poses]
    def make_locations_direct(self):
        return [cv2.projectPoints( self.grid, pose[0], pose[1], self.A, None)[0] for pose in self.poses]
def square_grid_xy( nx, ny, dx ):
    ''' Returns a square grid in the xy plane, useful
        for defining test grids for camera calibration
    '''
    xvals=scipy.arange(nx)*dx
    yvals=scipy.arange(ny)*dx
    xvals=xvals-scipy.mean(xvals)
    yvals=yvals-scipy.mean(yvals)
    res=scipy.zeros( [3, nx*ny], dtype=scipy.float32 )
    for (i,(x,y)) in enumerate( itertools.product(xvals, yvals)):
        res[:,i]=scipy.array( [x,y,0.0] )
    return res.transpose()
# single pixel dots were not detected?
#def make_single_pixel_dots( img_size, A, rvec, tvec, grid, dist_k=None):
#    rgb=scipy.ones( img_size+(3,), dtype=scipy.uint8 )*0xff
#    (dot_locs, jac)=cv2.projectPoints( grid, rvec, tvec, A, dist_k)
#    for p in dot_locs:
#        (c,r)=(int(p[0][0]+0.5), int(p[0][1]+0.5))
#        if 0<=c<img_size[1] and 0<=r<img_size[0]:
#            rgb[r,c,:]=0
#    return rgb
def make_dots_image( img_size, A, rvec, tvec, grid, dotsize, dist_k=None):
    ''' Make the image of the dots, uses cv2.projectPoints to construct the image'''
    # make white image
    max_intensity=0xffffffff
    intensity=scipy.ones( img_size, dtype=scipy.uint32)*max_intensity
    # Monte-Carlo approach to draw the dots
    for dot in grid:
        deltas=2*dotsize*( scipy.rand(1024, 3 )-0.5) # no. of samples must be small relative to bit-depth of intensity array
        deltas[:,2]=0
        indicator=scipy.where( scipy.sum( deltas*deltas, 1)<dotsize*dotsize, 1, 0.0)
        print "inside fraction: ", sum(indicator)/len(indicator)
        (pts,jac)=cv2.projectPoints( dot+deltas, rvec, tvec, A, dist_k )
        pts=( p for (ind,p) in zip(indicator, pts) if ind )
        for p in pts:
            (c,r)=( int(p[0][0]+0.5), int( p[0][1]+0.5 ) )
            if r>=0 and c>=0 and c<img_size[1] and r<img_size[0]:
                intensity[r,c]=intensity[r,c]-6
            else:
                print "col, row ", (c,r), " point rejected"
    # rescale so that image goes from 0x0 to max intensity
    min_intensity=min(intensity.flat)
    # normalize the intensity
    intensity=0xff*( (intensity-min_intensity)/float(max_intensity-min_intensity) )
    pixel_img=scipy.ones( intensity.shape+(3,), dtype=scipy.uint8 )
    return (pixel_img*intensity[:,:,scipy.newaxis]).astype(scipy.uint8 )
def extract_dots( img_list, grid_size ):
    '''
    #arg img_list: usually a list of images, can be a single image
    '''
    # convert single array, into a 1-element list
    if type(img_list) is scipy.ndarray:
        img_list=[img_list,]
    def get_dots( img ):
        res=cv2.findCirclesGridDefault( img, grid_size)
        if not res[0]: # sometimes, reversing the grid size will make the detection successful
            return cv2.findCirclesGridDefault( img, grid_size[::-1] )
        return res
    all_dots=[ get_dots( img) for img in img_list]
    #all_dots=[cv2.findCirclesGrid( img, grid_size[::-1] ) for img in img_list ]
    full_captures=[x[1] for x in all_dots if x[0] ]
    print "Kept {0} full captures out of {1} images".format( len(full_captures), len(img_list) )
    if len(full_captures)<len(img_list):
        print "\t", [x[0] for x in all_dots]
    return [scipy.squeeze(x) for x in full_captures]
# convenience functions
def vec3_32(x,y,z):
    return scipy.array( [x,y,z], dtype=scipy.float32 )

rvec=vec3_32
tvec=vec3_32
if __name__=="__main__":
    basic_test()
The key issue is the organization of the grid points passed as the first argument of cv2.calibrateCamera: in the question they are generated in column-major order, so to speak, while cv2.findCirclesGrid returns the detected centers in row-major order, so the object points need to be generated in row-major order as well:
def square_grid_xy_fixed( nx, ny, dx ):
    ''' Returns a square grid in the xy plane, useful
        for defining test grids for camera calibration
    '''
    xvals=scipy.arange(nx)*dx
    yvals=scipy.arange(ny)*dx
    xvals=xvals-scipy.mean(xvals)
    yvals=yvals-scipy.mean(yvals)
    res=scipy.zeros( [3, nx*ny], dtype=scipy.float32 )
    # need to have "x" be the most rapidly varying index, i.e.
    # it must be the final argument to itertools.product
    for (i,(y,x)) in enumerate( itertools.product(yvals, xvals)):
        res[:,i]=scipy.array( [x,y,0.0] )
    return res.transpose()
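One minimal way to try the fix (a sketch reusing the Setup class from the question; this snippet is not part of the original script):

# Rebuild the object-point grid in row-major order so it matches the
# ordering of the circle centers returned by cv2.findCirclesGrid.
S = Setup()
S.grid = square_grid_xy_fixed(S.grid_spec[0], S.grid_spec[1], S.grid_spec[2])

With the grid in the same order as the detections, cv2.calibrateCamera matches each object point to the correct image point, which is what the closed-loop test needs.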