I'm trying to draw a horizontal line across a shape (an ellipse in this instance, with only the centroid and the boundary of the ellipse on a black background), starting from the centroid of the shape. I started off checking each and every pixel along the +x and -x axes from the centroid and replacing each non-green pixel with a white pixel (essentially drawing the line pixel by pixel), stopping as soon as I reach the first green (boundary) pixel. The code is given at the end.
According to my logic, the line (created from points) should stop as soon as it reaches the boundary, i.e. the first green pixel along a particular axis, but there is a slight offset in the detected boundary. In the given image, you can clearly see that the rightmost and leftmost points calculated this way are slightly off from the actual boundary.
The images are enlarged for a better view.
I checked my code multiple times, and I drew a fresh ellipse on every attempt to make sure there were no stray green pixels left on the image, but the offset is consistent each time.
So my question is: how do I get rid of this offset and make my line land exactly on the boundary? Is this a visual glitch, or am I doing something wrong?
Note: I know there are functions such as cv2.boundingRect and cv2.minAreaRect which I could use to draw perfect bounding boxes and get these points, but I wanted to know why this is happening. I'm not looking for the optimal method; I'm looking for the cause of, and a solution to, this issue.
If you can suggest a better/more accurate title, it's much appreciated. I think I have explained everything for the time being.
Code:
import cv2
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
#Function to plot images
def display_img(img, name):
    fig = plt.figure(figsize=(4,4))
    ax = fig.add_subplot(111)
    ax.imshow(img, cmap="gray")
    plt.title(name)
#A black canvas
canvas = np.zeros((1600,1200,3),np.uint8)
#Value obtained after ellipse fitting an object
val = ((654, 664),(264, 266),80)
centroid = (val[0][0],val[0][1])
#Drawing the ellipse on the canvas(green)
ell = cv2.ellipse(canvas,val,(0,255,0),1)
centroid_ = cv2.circle(canvas,centroid,1,(255,0,0),10) #High thickness to see it visibly (Red)
display_img(canvas,"Canvas w/ ellipse and centroid")
#variables for centers
y_center = centroid[1]
#Variables which iterate over time
right_pt = centroid[0]
left_pt = centroid[0]
#Using while loops to find the distance from the center to the
#nearby first green pixel (leftmost and rightmost boundary)
while(np.any(canvas[right_pt,y_center] != [0,255,0])):
    cv2.circle(canvas,(right_pt,y_center),1,(255,255,255),1)
    right_pt += 1
while(np.any(canvas[left_pt,y_center] != [0,255,0])):
    cv2.circle(canvas,(left_pt,y_center),1,(255,255,255),1)
    left_pt -= 1
#Drawing the obtained points
canvas = cv2.circle(canvas,(right_pt,y_center),1,(0,255,0),2)
canvas = cv2.circle(canvas,(left_pt,y_center),1,(0,255,0),2)
display_img(canvas,"Finale")
There are a couple of problems here, one hiding neatly behind the other.
The first issue is evident in this snippet of code extracted from your script:
# ...
val = ((654, 664),(264, 266),80)
centroid = (val[0][0],val[0][1])
y_center = centroid[1]
right_pt = centroid[0]
left_pt = centroid[0]
while(np.any(canvas[right_pt,y_center] != [0,255,0])):
    cv2.circle(canvas,(right_pt,y_center),1,(255,255,255),1)
    right_pt += 1
# ...
Notice that you use the X and Y coordinates of the point you want to process
(represented by right_pt and y_center respectively) in the same order
to do both of the following:
index a numpy array: canvas[right_pt,y_center]
specify a point coordinate to an OpenCV function: (right_pt,y_center)
That is a problem, because each of those libraries expects a different order:
numpy indexing is row-major by default, i.e. img[y,x]
points and sizes in OpenCV are given as (x,y)
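To illustrate the difference with a quick sketch:

import numpy as np
import cv2

img = np.zeros((100, 200, 3), np.uint8)          # 100 rows (y) by 200 columns (x)
cv2.circle(img, (150, 50), 0, (255,255,255), 1)  # OpenCV point order: (x=150, y=50)
print(img[50, 150])                              # numpy index order: [row=y, column=x]
# prints [255 255 255]; img[150, 50] would even be out of bounds here (only 100 rows)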
In this particular case, the error is in the order of indexes for the numpy array canvas.
To fix it, just switch them around:
while(np.any(canvas[y_center,right_pt] != [0,255,0])):
    cv2.circle(canvas,(right_pt,y_center),1,(255,255,255),1)
    right_pt += 1
# ditto for the second loop
Once you fix that and run your script, it will crash with an error like:
while(np.any(canvas[y_center,right_pt] != [0,255,0])):
IndexError: index 1200 is out of bounds for axis 1 with size 1200
Why didn't this happen before? Since the centroid was (654, 664) and the coordinates were swapped, you were searching down column 664 (varying the row) while drawing along row 664 (varying the column). The search therefore stopped where the vertical line x=664 crosses the ellipse boundary, not where your horizontal line does, which is exactly the small offset you observed. It also meant the white circles you drew (almost) never touched the pixels you were about to test, which is why the loops terminated at all, and the 1600 rows of the canvas gave right_pt enough room to stay in bounds.
The second problem lies in the fact that you're drawing white circles into the same image you're searching for green pixels in, combined with a perhaps mistaken interpretation of what the radius parameter of cv2.circle does. I suppose the best way to show this is with an image (representing 5 rows of 13 pixels):
The red dots are centers of respective circles,
white squares are the pixels drawn,
black squares are the pixels left untouched
and the yellow arrows indicate the direction of iteration along the row.
On the left side, you can see a circle with radius 1; on the right, one with radius 0.
Let's say we're approaching the green area we want to detect:
And make another iteration:
Oops: with a radius of 1, we just changed the green pixel we're looking for to white.
Hence we can never find any green pixels (with the exception of the very first point tested, since at that moment we haven't drawn anything yet, and only in the first loop), and the loop runs out of the bounds of the image.
There are several options for resolving this problem. The simplest one, if you're fine with a thinner line, is to change the radius to 0 in both calls to cv2.circle. Another possibility is to cache a copy of the "row of interest", so that any drawing you do on canvas won't affect the search:
target_row = canvas[y_center].copy()
while(np.any(target_row[right_pt] != [0,255,0])):
    cv2.circle(canvas,(right_pt,y_center),1,(255,255,255),1)
    right_pt += 1
or
target_row = canvas[y_center] != [0,255,0]
while(np.any(target_row[right_pt])):
    cv2.circle(canvas,(right_pt,y_center),1,(255,255,255),1)
    right_pt += 1
or even better
target_row = np.any(canvas[y_center] != [0,255,0], axis=1)
while(target_row[right_pt]):
    cv2.circle(canvas,(right_pt,y_center),1,(255,255,255),1)
    right_pt += 1
Finally, you could skip the drawing in the loops altogether, and draw a line connecting the two endpoints you found with a single call to cv2.line:
target_row = np.any(canvas[y_center] != [0,255,0], axis=1)
while(target_row[right_pt]):
    right_pt += 1
while(target_row[left_pt]):
    left_pt -= 1
#Drawing the obtained points
cv2.line(canvas, (left_pt,y_center), (right_pt,y_center), (255,255,255), 2)
cv2.circle(canvas, (right_pt,y_center), 1, (0, 255, 0), 2)
cv2.circle(canvas, (left_pt,y_center), 1, (0, 255, 0), 2)
Bonus: Let's get rid of the explicit loops.
left_pt, right_pt = np.where(np.all(canvas[y_center] == [0,255,0], axis=1))[0]
This will (obviously) work only if there are exactly two matching pixels on the row of interest. However, it is trivial to extend it to find the first match on each side of the ellipse's center, since np.where gives you an array of all X coordinates/columns that contain a green pixel in that row.
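For example, a sketch of that extension (assuming the row actually contains green pixels on both sides of the center):

green_cols = np.where(np.all(canvas[y_center] == [0,255,0], axis=1))[0]
cx = centroid[0]
left_pt = green_cols[green_cols < cx].max()   # nearest green pixel to the left
right_pt = green_cols[green_cols > cx].min()  # nearest green pixel to the right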
Cropped output (generated by cv2.imshow) of that implementation can be seen in the following image (the centroid is blue, since you used (255,0,0) to draw it, and OpenCV uses BGR order by default):
I'm trying to isolate the gray matter in a brain image and color it based on the cortical thickness at each point giving a result similar to this:
Cortical thickness map based on this original: Original brain scan
So far I have segmented the white matter boundary and the gray matter boundary giving me this:
White + Gray matter segmentation
The next step is where I'm stuck.
I need to find the distance between the 2 boundaries by finding the closest white boundary pixel for each gray boundary pixel and record the distance between them as shown here: Distance
This can be done simply with some for loops and Euclidean distance (a sketch of this step follows the code below).
My problem is how to then color the pixels in between the two boundaries, i.e. assign the recorded distance value to each of the pixels between them.
import numpy as np
import matplotlib.pyplot as plt
import nibabel as nib
from skimage import filters
from skimage import morphology
t1 = nib.load('raw_map1.nii').get_fdata()
t1map = nib.load('thickness_map1.nii').get_fdata()
filt_t1 = filters.gaussian(t1,sigma=1)
plt.imshow(filt_t1[:,128,:])
#Segment the white matter surface
wm = filt_t1 > 75
plt.imshow(wm[:,128,:])
med_wm = filters.median(wm)
plt.imshow(med_wm[:,128,:])
dilw = morphology.binary_dilation(med_wm)
edge_wm = dilw.astype(float) - med_wm
plt.imshow(edge_wm[:,128,:])
#Segment the gray matter surface
gm = (filt_t1 < 75) & (filt_t1 > 45)
plt.imshow(gm[:,128,:])
med_gm = filters.median(gm)
plt.imshow(med_gm[:,128,:])
dilg = morphology.binary_dilation(med_gm)
edge_gm = dilg.astype(float) - med_gm
plt.imshow(edge_gm[:,128,:])
dilw2 = morphology.binary_dilation(edge_wm)
plt.imshow(dilw2[:,128,:])
fedge_gm = edge_gm.astype(float) - dilw2
plt.imshow(fedge_gm[:,128,:])
fedge_gm2 = fedge_gm > 0
plt.imshow(fedge_gm2[:,128,:])
#Combine both surfaces
final = fedge_gm2 + edge_wm
plt.imshow(final[:,128,:])
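For reference, the nearest-neighbour step I mean could look something like this sketch, using scipy's cKDTree instead of explicit for loops (untested; it uses the edge_wm and fedge_gm2 masks from above, taken at the same slice):

import numpy as np
from scipy.spatial import cKDTree

wm_pts = np.argwhere(edge_wm[:,128,:] > 0)      # white matter boundary pixels
gm_pts = np.argwhere(fedge_gm2[:,128,:] > 0)    # gray matter boundary pixels
dists, nearest = cKDTree(wm_pts).query(gm_pts)  # closest white pixel per gray pixel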
You may use DL+DiReCT: https://github.com/SCAN-NRAD/DL-DiReCT
Starting from a brain scan (T1-weighted MRI) as input, DL+DiReCT labels anatomical regions including the cortex and calculates a voxel-wise cortical thickness map (T1w_norm_thickmap.nii.gz). For every voxel inside the cortex, the intensity indicates the thickness of the cortex in mm.
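For instance, the resulting map can be loaded the same way as the volumes in your code (a minimal sketch, assuming the output file is in the working directory):

import nibabel as nib

thickness = nib.load('T1w_norm_thickmap.nii.gz').get_fdata()
# each voxel inside the cortex holds the local cortical thickness in mm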
As part of writing a 3D game library, I am trying to implement frustum culling in order to avoid rendering objects that are outside of the camera's perspective frustum. To do this, I first need to calculate a bounding sphere for each mesh and see if it collides with any of the six sides of the viewing frustum. Here is my current (very) naive implementation of computing the bounding sphere for each model, as written in model.py in my code:
from pyorama.entity import Entity
from pyorama.math3d.vec3 import Vec3
from pyorama.math3d.mat4 import Mat4
from pyorama.physics.sphere import Sphere
import math
import numpy as np
import itertools as its
class Model(Entity):
    def __init__(self, mesh, texture, transform=Mat4.identity()):
        super(Model, self).__init__()
        self.mesh = mesh
        self.texture = texture
        self.transform = transform

    def compute_bounding_sphere(self):
        vertex_data = self.mesh.vertex_buffer.get_data()
        vertices = []
        for i in range(0, len(vertex_data), 3):
            vertex = Vec3(vertex_data[i: i+3])
            vertices.append(vertex)
        max_pair = None
        max_dist = 0
        for a, b in its.combinations(vertices, 2):
            dist = Vec3.square_distance(a, b)
            if dist > max_dist:
                max_pair = (a, b)
                max_dist = dist
        radius = math.sqrt(max_dist)/2.0
        center = Vec3.lerp(max_pair[0], max_pair[1], 0.5)
        return Sphere(center, radius)
I am just taking pairwise points from my mesh and using the largest distance I find as the diameter. Calling this on 100 simple cube test models every frame is extremely slow, driving my frame rate from 120 fps down to 1 fps! This is not surprising, since the time complexity of this pairwise code is O(n^2).
My question is: what algorithm is fast and reasonably simple to implement, and computes an (at least approximate) bounding sphere from a set of 3D points of a mesh? I looked at this Wikipedia page and saw there is an algorithm called "Ritter's bounding sphere" (my reading of it is sketched below). However, it requires choosing some random point x in the mesh and hoping that it is near the center, so that the resulting bounding sphere is reasonably tight. Is there a fast method for choosing a good starting point x? Any help or advice would be greatly appreciated!
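For reference, here is my numpy transcription of Ritter's algorithm as I understand it (a sketch, untested; the starting point is simply the first vertex):

import numpy as np

def ritter_bounding_sphere(points):
    # points: (N, 3) array of vertex positions
    x = points[0]                                         # arbitrary starting point
    y = points[np.argmax(((points - x)**2).sum(axis=1))]  # farthest point from x
    z = points[np.argmax(((points - y)**2).sum(axis=1))]  # farthest point from y
    center = (y + z) / 2.0
    radius = np.linalg.norm(z - y) / 2.0
    # Grow the sphere just enough to cover any point still outside it
    for p in points:
        d = np.linalg.norm(p - center)
        if d > radius:
            radius = (radius + d) / 2.0
            center = center + (1.0 - radius / d) * (p - center)
    return center, radius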
UPDATE:
Following @Aaron3468's answer, here is the code in my library that calculates the bounding box and the corresponding bounding sphere:
from pyorama.entity import Entity
from pyorama.math3d.vec3 import Vec3
from pyorama.math3d.mat4 import Mat4
from pyorama.physics.sphere import Sphere
from pyorama.physics.box import Box
import math
import numpy as np
import itertools as its
class Model(Entity):
    def __init__(self, mesh, texture, transform=Mat4.identity()):
        super(Model, self).__init__()
        self.mesh = mesh
        self.texture = texture
        self.transform = transform

    def compute_bounding_sphere(self):
        box = self.compute_bounding_box()
        a, b = box.min_corner, box.max_corner
        radius = Vec3.distance(a, b)/2.0
        center = Vec3.lerp(a, b, 0.5)
        return Sphere(center, radius)

    def compute_bounding_box(self):
        vertex_data = self.mesh.vertex_buffer.get_data()
        max_corner = Vec3(vertex_data[0:3])
        min_corner = Vec3(vertex_data[0:3])
        for i in range(0, len(vertex_data), 3):
            vertex = Vec3(vertex_data[i: i+3])
            min_corner = Vec3.min_components(vertex, min_corner)
            max_corner = Vec3.max_components(vertex, max_corner)
        return Box(min_corner, max_corner)
Iterate over the vertices once and collect the highest and lowest value for each dimension. This creates a bounding box made of Vec3(lowest.x, lowest.y, lowest.z) and Vec3(highest.x, highest.y, highest.z).
Use the midpoint of the lowest and highest value for each dimension. This gives the center of the box as Vec3((lowest.x + highest.x)/2, ...).
Then get the Euclidean distance between the center and each of the 8 corners of the box. Use the largest distance (by symmetry they are all equal) together with the center you found to make a bounding sphere.
You've iterated only once through the data set and have a good approximation of the bounding sphere! A sketch of these steps follows.
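A minimal numpy sketch of the three steps above (assuming the vertices are available as an (N, 3) array; untested):

import numpy as np

def bounding_sphere_from_box(points):
    lo = points.min(axis=0)       # lowest value in each dimension
    hi = points.max(axis=0)       # highest value in each dimension
    center = (lo + hi) / 2.0      # center of the bounding box
    # all 8 corners are equidistant from the center, so one corner suffices
    radius = np.linalg.norm(hi - center)
    return center, radius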
A bounding sphere computed this way is almost certainly going to be bigger than the mesh. To shrink it, you can set the radius to the distance from the center along the widest dimension. This approach does risk chopping off faces that sit in the corners.
You can iteratively shrink the radius and check that all points are still inside the sphere, but then you approach worse performance than your original algorithm.
I have created a triangle positioned in the centre of the screen.
from PIL import Image, ImageDraw

WHITE = (255, 255, 255, 255)  # assumed value; WHITE was not defined in the original snippet
GRAY = (190, 190, 190)

im = Image.new('RGBA', (400, 400), WHITE)
points = (250, 250), (100, 250), (250, 100)
draw = ImageDraw.Draw(im)
draw.polygon(points, GRAY)
How do I duplicate this image and reflect it along each side of the triangle at different random points? For example...
Plan: first find a random point on an edge of the big triangle where the smaller one should go, and then rotate the copy so it fits properly on that edge.
Suppose we can access the points of the triangle with something like this
triangle.edges[0].x,
triangle.edges[0].y,
triangle.edges[1].x,
etc
We can then find an arbitrary point by first selecting an edge and then "walking a random distance towards the next edge":
import random

r = random.randint(0, 2)                   # random integer between 0 and 2
first_edge = triangle.edges[r]
second_edge = triangle.edges[(r + 1) % 3]  # wrap around past the last edge
# The next lines are still pseudo-code: interpolate between the two corners
t = random.random()
random_point = (second_edge - first_edge)*t + first_edge
Our next problem is how to rotate a triangle. If you have done some algebra you might recognise this:
from math import sin, cos

def rotatePointAroundOrigin(point, angle):
    new_point = Point()
    new_point.x = cos(angle)*point.x - sin(angle)*point.y
    new_point.y = sin(angle)*point.x + cos(angle)*point.y
    return new_point
(see https://en.wikipedia.org/wiki/Rotation_matrix)
In addition to this, you need to determine just how much to rotate the triangle, and then apply the function above to all of its points. A rough end-to-end sketch follows.
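Here is that idea with plain (x, y) tuples (a sketch only; the pivot and angle are illustrative, and in practice you would derive the angle from the direction of the chosen edge):

import math

def rotate_around(point, angle, pivot):
    # rotate an (x, y) tuple by `angle` radians around `pivot`
    x, y = point[0] - pivot[0], point[1] - pivot[1]
    return (pivot[0] + math.cos(angle)*x - math.sin(angle)*y,
            pivot[1] + math.sin(angle)*x + math.cos(angle)*y)

points = [(250, 250), (100, 250), (250, 100)]  # triangle from the question
pivot = (175, 250)                             # an example point on the bottom edge
angle = math.radians(180)                      # point-reflect across the pivot
mirrored = [rotate_around(p, angle, pivot) for p in points]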
I have the following Python code to generate random circles in order to simulate Brownian motion. I need to find the total area of the small red circles so that I can compare it to the total area of a larger blue circle. Since the circles are generated randomly, many of them overlap making it difficult to find the area. I have read many other responses related to this question about pixel painting, etc. What is the best way to find the area of these circles? I do not want to modify the generation of the circles, I just need to find the total area of the red circles on the plot.
The code to generate the circles I need is as follows (Python v. 2.7.6):
import matplotlib.pyplot as plt
import numpy as np
new_line = []
new_angle = []
x_c = [0]
y_c = [0]
x_real = []
y_real = []
xy_dist = []
circ = []
range_value = 101
for x in range(0, range_value):
    mu, sigma = 0, 1
    new_line = np.random.normal(mu, sigma, 1)
    new_angle = np.random.uniform(0, 360)*np.pi/180
    x_c.append(new_line*np.cos(new_angle))
    y_c.append(new_line*np.sin(new_angle))
x_real = np.cumsum(x_c)
y_real = np.cumsum(y_c)
a = np.mean(x_real)
b = np.mean(y_real)
i = 0
while i <= range_value:
    xy_dist.append(np.sqrt((x_real[i]-a)**2+(y_real[i]-b)**2))
    i += 1
circ_rad = max(xy_dist)
small_rad = 0.2
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
circ1 = plt.Circle((a,b), radius=circ_rad+small_rad, color='b')
ax.add_patch(circ1)
j = 0
while j <= range_value:
    circ = plt.Circle((x_real[j], y_real[j]), radius=small_rad, color='r', fill=True)
    ax.add_patch(circ)
    j += 1
plt.axis('auto')
plt.show()
The package Shapely might be of some use:
https://gis.stackexchange.com/questions/11987/polygon-overlay-with-shapely
http://toblerity.org/shapely/manual.html#geometric-objects
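For instance, a minimal sketch of that approach, reusing x_real, y_real and small_rad from the question (untested):

from shapely.geometry import Point
from shapely.ops import unary_union

circles = [Point(x, y).buffer(small_rad) for x, y in zip(x_real, y_real)]
merged = unary_union(circles)  # overlapping circles are merged and counted once
print(merged.area)             # total area covered by the red circles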
I can think of an easy way to do it, though the result will have inaccuracies:
With Python draw all your circles on a white image, filling the circles as you draw them. At the end each "pixel" of your image will have one of 2 colors: white color is the background and the other color (let's say red) means that pixel is occupied by a circle.
You then need to count the red pixels and multiply the count by the area a single pixel represents (the square of the scale with which you drew). That gives you the total area.
This is inaccurate since there is no way of drawing an exact circle with square pixels, so you lose accuracy in the mapping. Keep in mind that the bigger you draw the circles, the smaller the inaccuracy becomes.
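One way to realize this idea with numpy instead of an actual image (a sketch, reusing x_real, y_real and small_rad from the question; the scale value is an assumption):

import numpy as np

scale = 0.01                                        # world units per pixel
xs = np.arange(x_real.min() - small_rad, x_real.max() + small_rad, scale)
ys = np.arange(y_real.min() - small_rad, y_real.max() + small_rad, scale)
X, Y = np.meshgrid(xs, ys)
covered = np.zeros(X.shape, dtype=bool)
for cx, cy in zip(x_real, y_real):                  # mark pixels inside any circle
    covered |= (X - cx)**2 + (Y - cy)**2 <= small_rad**2
area = covered.sum() * scale**2                     # occupied pixels times pixel area
print(area)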