How to detect the nearest square to the image center? [closed] - python

How do I find the square nearest to the center of the image?
I already have the squares' vertices and the center; I just need to find out which square is closest to the center.
I have a problem similar to this image. I would like to select the nearest square (the red square).

Two possibilities, one of which is mentioned in your answer and in the comment by @miki:
Square center
You do not need to implement your own distance function. SciPy already has a few; for example, Euclidean distance is scipy.spatial.distance.euclidean:
from scipy.spatial import distance
distance.euclidean((x0, y0), (x1, y1))
No need to reinvent the wheel.
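Picking the nearest square by its center then reduces to a one-liner. A minimal sketch, with hypothetical square centers:
from scipy.spatial import distance

image_center = (250, 250)  # hypothetical image center
square_centers = [(40, 60), (240, 265), (400, 100)]  # hypothetical square centers

nearest = min(square_centers, key=lambda c: distance.euclidean(image_center, c))
print(nearest)  # (240, 265)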
Square edges
In the following example it can be argued whether the red or the blue square is closer to the center. By Euclidean distance between centers, it is the blue one; the red one overlaps the center, though.
If you wanted to have the square with the closest pixel to the center, you could do something like
from scipy.spatial import distance

def square_distance(square, center):
    # distance from center to the closest pixel of an axis-aligned square
    # given as (upper_left_x, upper_left_y, lower_right_x, lower_right_y)
    upper_left_x, upper_left_y, lower_right_x, lower_right_y = square
    x, y = center
    if upper_left_x <= x <= lower_right_x and upper_left_y <= y <= lower_right_y:
        return 0  # point in square
    elif upper_left_x <= x <= lower_right_x:  # directly above or below the square
        return min(abs(upper_left_y - y), abs(lower_right_y - y))
    elif upper_left_y <= y <= lower_right_y:  # directly left or right of the square
        return min(abs(upper_left_x - x), abs(lower_right_x - x))
    else:  # the closest point is one of the four corners
        corners = [(cx, cy)
                   for cx in (upper_left_x, lower_right_x)
                   for cy in (upper_left_y, lower_right_y)]
        return min(distance.euclidean(center, c) for c in corners)
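Used, for example, like this to pick the square whose closest pixel is nearest to the center (the square tuples are made up):
squares = [(10, 10, 60, 60), (200, 220, 280, 300), (400, 40, 450, 90)]  # hypothetical
center = (250, 250)
closest = min(squares, key=lambda s: square_distance(s, center))
print(closest)  # (200, 220, 280, 300)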
Original answer before the edit to the question
You can split this up:
import image
find the squares
find the corners
connect them
compute distance to image center
The main point here may be any of them (except for step 1, probably).
There is a neat OpenCV Python tutorial, which tells you how to do some of that. Let us start:
import
import cv2
img = cv2.imread('L3h9H.png')
imports the image.
To see that you imported correctly, you can use
from matplotlib import pyplot as plt
plt.imshow(img, cmap = 'gray', interpolation = 'bicubic')
plt.xticks([]), plt.yticks([]) # to hide tick values on X and Y axis
plt.show()
which shows it to you.
find corners
import numpy as np

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)
dst = cv2.cornerHarris(gray, 2, 3, 0.04)
for i, valx in enumerate(dst):
    for j, valy in enumerate(valx):
        if valy > 0:
            print '%s, %s: %s' % (i, j, valy)
Finds corners using one of the built-in algorithms. The corners are printed afterwards.
Next steps would be:
compute lines between corners (maybe).
Show the minimum distance from each line to the image center.
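For that last step, a minimal point-to-segment distance sketch in plain NumPy (the function name and the sample coordinates are mine; it assumes a line given by two corner points):
import numpy as np

def point_segment_distance(p, a, b):
    # distance from point p to the line segment from a to b
    p, a, b = np.asarray(p, float), np.asarray(a, float), np.asarray(b, float)
    ab = b - a
    # projection parameter of p onto the line through a and b, clamped to the segment
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

print(point_segment_distance((100, 100), (50, 120), (200, 120)))  # 20.0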
Say if you need more help.

This is the solution I have for my problem, where x0 and y0 are the coordinates of the center of my image and x1 and y1 are the coordinates of the center of the square.
import math

def dist(x0, y0, x1, y1):
    a = (x1 - x0)**2 + (y1 - y0)**2
    b = math.sqrt(a)
    return b

print dist(5, 5, 4, 6)
print dist(5, 5, 9, 2)
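For what it's worth, the standard library already has this as a one-liner; math.hypot computes the same value:
import math
print math.hypot(4 - 5, 6 - 5)  # same as dist(5, 5, 4, 6)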

Related

Get points of any regular polygon that can be also rotated

I'm trying to draw any regular polygon, from triangles to polygons with so many corners they look like a circle. To make it easier, they must be regular, so a normal pentagon/hexagon/octagon etc., and I want to be able to rotate them. What I've tried is to draw a circle and divide 360 by the number of points I want, then create a point every nth degree around the circle. Putting these points into pygame.draw.polygon() then creates the shape I want; the problem is that it isn't the right size. I also want to be able to stretch the shapes, so they can have a different width and height.
import pygame
from math import sin, cos, radians, sqrt

def regular_polygon(hapi, x, y, w, h, n, rotation, angle_offset=0):
    # angle_offset is the starting angle on the circle, while rotation rotates the circle;
    # so when it's an oval, rotation rotates the oval and angle_offset is where on the oval to start
    if n < 3:
        n = 3
    midpoint = pygame.Vector2(x + w//2, y + h//2)
    r = sqrt(w**2 + h**2)
    #if angle_offset != 0:
    #    w = (w//2)//cos(angle_offset)
    #if angle_offset != 90:
    #    h = (h//2)//sin(angle_offset)
    w, h = r, r
    points = []
    for angle in range(0, 360, 360//n):
        angle = radians(angle + angle_offset)
        # the negative signs are because it was drawing upside down
        d = pygame.Vector2(-sin(angle)*w//2, -cos(angle)*h//2).rotate(rotation)
        points.append(midpoint + d)
    # draws the circle for debugging
    for angle in range(0, 360, 1):
        angle = radians(angle + angle_offset)
        d = pygame.Vector2(-sin(angle)*w//2, -cos(angle)*h//2).rotate(rotation)
        pygame.draw.rect(screen, (0, 255, 0), (midpoint[0] + d[0], midpoint[1] + d[1], 5, 5))
    pygame.draw.polygon(screen, (255, 0, 0), points)
The red square is what the function above makes; the blue one behind it is what it should be.
As you can see, the circle does line up with the edges of the rect, but because the angles around the circle are not evenly spaced relative to the oval, the rectangle the function makes is not right.
I think I need to change the circle to an oval, but I cannot find out how to get its width radius and height radius. Currently I've found the radius using Pythagoras.
This is what happens when I don't change the width or height.
I found the solution:
Doing w *= math.sqrt(2) and h *= math.sqrt(2) works perfectly. I don't quite understand the math, but after trial and error this works. You can probably find the maths here; I just multiplied the width and height by a number and printed that number when it lined up, and it was very close to sqrt(2).
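In hindsight the factor checks out: a square inscribed in a circle of diameter d has side d/sqrt(2), so scaling w and h by sqrt(2) puts the corners of the w-by-h box on the circle. A standalone sketch of the point generation with that fix applied (the function name and sample values are mine):
import math

def polygon_points(cx, cy, w, h, n, angle_offset=0):
    # points of a regular n-gon fitted to a w-by-h box centred on (cx, cy)
    w, h = w * math.sqrt(2), h * math.sqrt(2)  # the sqrt(2) fix from above
    pts = []
    for k in range(n):
        ang = math.radians(angle_offset) + 2 * math.pi * k / n
        pts.append((cx - math.sin(ang) * w / 2, cy - math.cos(ang) * h / 2))
    return pts

# a square of side 50 centred on (100, 100)
print(polygon_points(100, 100, 50, 50, 4, angle_offset=45))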

Distance between two points in OpenCv based on known measurement [closed]

I have an image in which I have two sets of coordinates, between which I have drawn lines.
import requests
import numpy as np
import cv2 as cv
from io import BytesIO
from PIL import Image

# Get image
im_res = requests.get(image_url)
img = Image.open(BytesIO(im_res.content))
img = np.asarray(img).copy()  # copy so the array is writable for OpenCV drawing

# Draw first line
lineThickness = 3
cv.line(img, (ax, ay), (bx, by), (0, 255, 0), lineThickness)

# Draw second line
cv.line(img, (cx, cy), (dx, dy), (0, 255, 0), lineThickness)

cv.imshow("Image", img)
cv.waitKey(0)
cv.destroyAllWindows()
The coordinates are A, B, C and D. I know the distance from C to D; however, the distance from A to B is unknown. What is the best way to calculate this in OpenCV?
Is there an OpenCV-specific function or method to do this, especially since the distance we are talking about is in pixels? I am sorry if this question is foolish; I really don't want to end up getting wrong values due to a lack of understanding of this topic.
I saw certain references to cv2.norm() and cv2.magnitude() as solutions to this problem. However, I didn't quite understand how to choose for my situation, keeping in mind that in this case the distance is within an image/photo.
Compute the Euclidean distance from C to D and find the ratio of the known measurement to that distance:
ratio = known / euclidean
Then find the Euclidean distance between A and B and use the ratio found earlier to convert it to an actual distance:
distance = euclidean * ratio
where euclidean = sqrt((x2-x1)**2 + (y2-y1)**2).

How would I achieve this in opencv with an affine transform?

I was wondering how I would replicate what is being done in this image:
To break it down:
Get facial landmarks using dlib (green dots)
Rotate the image so that the eyes are horizontal
Find the midpoint of the face by averaging the leftmost and rightmost landmarks (blue dot) and center the image on the x-axis
Fix the position along the y-axis by placing the eye center 45% from the top of the image, and the mouth center 25% from the bottom of the image
Right now this is what I have:
I am kind of stuck on step 3, which I think can be done with an affine transform. But I'm completely stumped on step 4; I have no idea how I would achieve it.
Please tell me if you need me to provide the code!
EDIT: So after looking at @Gal Dreiman's answer I was able to center the face perfectly, so that the blue dot is in the center of my image.
Although when I implemented the second part of his answer, I ended up getting something like this:
I see that the points have been transformed to the right places, but it's not the outcome I had desired, as it is sheared quite dramatically. Any ideas?
EDIT 2:
After switching the x, y coordinates for the center points around, this is what I got:
Regarding your section 3, the easiest way to do that is:
Find the faces in the image:
faces = faceCascade.detectMultiScale(
    gray,
    scaleFactor=1.1,
    minNeighbors=5,
    minSize=(30, 30),
    flags=cv2.cv.CV_HAAR_SCALE_IMAGE
)
For each face calculate the midpoint:
for (x, y, w, h) in faces:
    mid_x = x + int(w/2)
    mid_y = y + int(h/2)
Affine transform the image to center the blue dot you already calculated:
height, width = img.shape
x_dot = ...
y_dot = ...
dx_dot = int(width/2) - x_dot
dy_dot = int(height/2) - y_dot
M = np.float32([[1, 0, dx_dot], [0, 1, dy_dot]])
dst = cv2.warpAffine(img, M, (width, height))  # dsize is (width, height)
Hope it was helpful.
Edit:
Regarding section 4:
In order to stretch (resize) the image, all you have to do is perform an affine transform. To find the transformation matrix, we need three points from the input image and their corresponding locations in the output image.
p_1 = [eye_x, eye_y]
p_2 = [int(width/2), int(height/2)]  # default: center of the image
p_3 = [mouth_x, mouth_y]
target_p_1 = [eye_x, int(eye_y * 0.45)]
target_p_2 = [int(width/2), int(height/2)]  # don't want to change
target_p_3 = [mouth_x, int(mouth_y * 0.75)]
pts1 = np.float32([p_1, p_2, p_3])
pts2 = np.float32([target_p_1, target_p_2, target_p_3])
M = cv2.getAffineTransform(pts1, pts2)
output = cv2.warpAffine(image, M, (width, height))  # dsize is (width, height)
To clear things up:
eye_x / eye_y is the location of the eye center.
The same applies to mouth_x / mouth_y, for the mouth center.
target_p_1/2/3 are the target points.
Edit 2:
I see you are in trouble; I hope this time my suggestion will work for you.
There is another approach I can think of: you can perform a sort of "crop" of the image by picking 4 points. Let's define them as the 4 points which wrap the face, and change the image perspective according to their new positions:
up_left = [x, y]
up_right = [...]
down_left = [...]
down_right = [...]

pts1 = np.float32([up_left, up_right, down_left, down_right])
pts2 = np.float32([[0, 0], [300, 0], [0, 300], [300, 300]])
M = cv2.getPerspectiveTransform(pts1, pts2)
dst = cv2.warpPerspective(img, M, (300, 300))
So all you have to do is define those 4 points. My suggestion: calculate the contour around the face (which you already did) and then add (or subtract) delta_x and delta_y to the coordinates, as in the sketch below.
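A hedged sketch of picking those four points from a face rectangle plus a margin (the rectangle, the deltas, and the blank image are stand-ins for your real values):
import numpy as np
import cv2

img = np.zeros((400, 400, 3), np.uint8)  # stand-in for the real photo

# hypothetical face rectangle (e.g. from detectMultiScale) plus a margin
x, y, w, h = 120, 80, 160, 160
delta_x, delta_y = 30, 40

up_left = [x - delta_x, y - delta_y]
up_right = [x + w + delta_x, y - delta_y]
down_left = [x - delta_x, y + h + delta_y]
down_right = [x + w + delta_x, y + h + delta_y]

pts1 = np.float32([up_left, up_right, down_left, down_right])
pts2 = np.float32([[0, 0], [300, 0], [0, 300], [300, 300]])
M = cv2.getPerspectiveTransform(pts1, pts2)
dst = cv2.warpPerspective(img, M, (300, 300))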

Detecting edges of lasers/lights in images using Python

I am writing a program in Python to loop through images extracted from the frames of a video and detect lines within them. The images are of fairly poor quality and vary significantly in their content. Here are two examples:
Sample Image 1 | Sample Image 2
I am trying to detect the lasers in each image and look at their angles. Eventually I would like to look at the distribution of these angles and output a sample of three of them.
In order to detect the lines in the images, I have looked at various combinations of the following:
Hough Lines
Canny Edge Detection
Bilateral / Gaussian Filtering
Denoising
Histogram Equalising
Morphological Transformations
Thresholding
I have tried lots of combinations of lots of different methods and I can't seem to come up with anything that really works. What I have been trying is along these lines:
import cv2
import numpy as np

img = cv2.imread('testimg.jpg')
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equal = clahe.apply(grey)

denoise = cv2.fastNlMeansDenoising(equal, 10, 10, 7, 21)
blurred = cv2.GaussianBlur(denoise, (3, 3), 0)
blurred = cv2.medianBlur(blurred, 9)

(mu, sigma) = cv2.meanStdDev(blurred)
edge = cv2.Canny(blurred, mu - sigma, mu + sigma)
lines = cv2.HoughLines(edge, 1, np.pi/180, 50)

if lines is not None:
    print len(lines[0])
    for rho, theta in lines[0]:
        a = np.cos(theta)
        b = np.sin(theta)
        x0 = a*rho
        y0 = b*rho
        x1 = int(x0 + 1000*(-b))
        y1 = int(y0 + 1000*(a))
        x2 = int(x0 - 1000*(-b))
        y2 = int(y0 - 1000*(a))
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)

cv2.imshow("preview", img)
cv2.waitKey(0)
This is just one of many different attempts. Even if I can find a method that works slightly better for one of the images, it proves to be much worse for another one. I am not expecting completely perfect results, but I'm sure that they could be better than I've managed so far!
Could anyone suggest a tactic to help me move forward?
Here is one answer. It will help if your camera is in a fixed position and so are your lasers...and if your lasers emit from coordinates that you can determine. So, if you have many experiments that happen concurrently with the same setup, this can be a starting point.
The question "image information along a polar coordinate system" was helpful for getting a polar transform. I chose not to use OpenCV because not everybody can get it going (Windows). I took the code from the linked question and played around a bit. If you add its code to mine (without the imports or the main method), then you'll have the required functions.
import numpy as np
import scipy as sp
import scipy.ndimage
import matplotlib.pyplot as plt
import sys
import Image

def main():
    data = np.array(Image.open('crop1.jpg').convert('LA').convert('RGB'))
    origin = (188, -30)
    polar_grid, r, theta = reproject_image_into_polar(data, origin=origin)
    means, angs = mean_move(polar_grid, 10, 5)
    means = np.array(means)
    means -= np.mean(means)
    means[means < 0] = 0
    means *= means
    plt.figure()
    plt.bar(angs, means)
    plt.show()

def mean_move(data, width, stride):
    means = []
    angs = []
    x = 0
    while True:
        if x + width > data.shape[1]:
            break
        d = data[:, x:x+width]
        m = np.mean(d[d != 0])
        means.append(m)
        ang = 180./data.shape[1] * float(x + x+width)/2.
        angs.append(ang)
        x += stride
    return means, angs

# copy-paste Joe Kington code here
Image around the upper source.
Notice that I chose one laser and cropped a region around its source. This can be done automatically and repeated for each image. I also estimated the source coordinates (188, -30) (in x, y form) based on where I thought it emitted from. The following image (a GIMP screenshot!) shows my reasoning (it appeared that there was a very faint ray that I traced back, too, and I took the intersection)...it also shows the measurement of the angle, ~140 degrees.
Polar transform of the image (notice the vertical band of intensity...it is vertical because we chose the correct origin for the laser).
Then I used a very hastily created moving-window mean function and a rough mapping to degree angles, along with a diff from the mean + zeroing + squaring.
So your task becomes grabbing these peaks. Oh look, ~140! Who's your daddy!
To recap, if the setup is fixed, then this may help you! I really need to get back to work and stop procrastinating.

Area of overlapping circles

I have the following Python code to generate random circles in order to simulate Brownian motion. I need to find the total area of the small red circles so that I can compare it to the total area of a larger blue circle. Since the circles are generated randomly, many of them overlap making it difficult to find the area. I have read many other responses related to this question about pixel painting, etc. What is the best way to find the area of these circles? I do not want to modify the generation of the circles, I just need to find the total area of the red circles on the plot.
The code to generate the circles I need is as follows (Python v. 2.7.6):
import matplotlib.pyplot as plt
import numpy as np

new_line = []
new_angle = []
x_c = [0]
y_c = [0]
x_real = []
y_real = []
xy_dist = []
circ = []
range_value = 101

for x in range(0, range_value):
    mu, sigma = 0, 1
    new_line = np.random.normal(mu, sigma, 1)
    new_angle = np.random.uniform(0, 360)*np.pi/180
    x_c.append(new_line*np.cos(new_angle))
    y_c.append(new_line*np.sin(new_angle))

x_real = np.cumsum(x_c)
y_real = np.cumsum(y_c)
a = np.mean(x_real)
b = np.mean(y_real)

i = 0
while i <= range_value:
    xy_dist.append(np.sqrt((x_real[i]-a)**2+(y_real[i]-b)**2))
    i += 1

circ_rad = max(xy_dist)
small_rad = 0.2

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
circ1 = plt.Circle((a, b), radius=circ_rad+small_rad, color='b')
ax.add_patch(circ1)

j = 0
while j <= range_value:
    circ = plt.Circle((x_real[j], y_real[j]), radius=small_rad, color='r', fill=True)
    ax.add_patch(circ)
    j += 1

plt.axis('auto')
plt.show()
The package Shapely might be of some use:
https://gis.stackexchange.com/questions/11987/polygon-overlay-with-shapely
http://toblerity.org/shapely/manual.html#geometric-objects
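A hedged sketch of the Shapely route (unary_union merges the overlapping discs so shared regions are counted only once; the centres and radius below stand in for x_real, y_real, and small_rad from the question):
from shapely.geometry import Point
from shapely.ops import unary_union

centres = [(0.0, 0.0), (0.3, 0.1), (2.0, 2.0)]  # hypothetical circle centres
small_rad = 0.2

discs = [Point(x, y).buffer(small_rad) for x, y in centres]  # buffer() approximates a disc
red_area = unary_union(discs).area  # overlapping regions counted once
print(red_area)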
I can think of an easy way to do it, though the result will have inaccuracies:
With Python, draw all your circles on a white image, filling the circles as you draw them. At the end, each "pixel" of your image will have one of 2 colors: white is the background, and the other color (let's say red) means that the pixel is occupied by a circle.
You then need to count the red pixels and multiply the count by the area one pixel represents at the scale you drew with. That gives you the area.
This is inaccurate, since there is no way to draw a circle exactly using square pixels, so you lose accuracy in the mapping. Keep in mind that the bigger you draw the circles, the smaller the inaccuracy becomes.
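A minimal sketch of that pixel-counting idea using a NumPy grid instead of an actual drawing library (the centres, radius, and scale are placeholders):
import numpy as np

centres = [(0.0, 0.0), (0.3, 0.1), (2.0, 2.0)]  # hypothetical circle centres
radius = 0.2
scale = 500  # pixels per unit; a larger scale means a smaller error

# raster grid covering the circles, with some margin
xs = np.arange(-1.0, 3.0, 1.0/scale)
X, Y = np.meshgrid(xs, xs)

occupied = np.zeros(X.shape, dtype=bool)
for cx, cy in centres:
    occupied |= (X - cx)**2 + (Y - cy)**2 <= radius**2  # "paint" one disc

area = occupied.sum() * (1.0/scale)**2  # pixel count times per-pixel area
print(area)  # approximate area of the union of the red circles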
