I am trying to rotate an image by any given angle.
I am rotating with the center of the image as the origin.
But the code is not doing the rotation as expected.
I am attaching the code below.
import math
import numpy as np
import cv2
im = cv2.imread("Samples\\baboon.jpg", cv2.IMREAD_GRAYSCALE)
new = np.zeros(im.shape,np.uint8)
new_x = im.shape[0] // 2
new_y = im.shape[1] // 2
x = int(input("Enter the angle : "))
trans_mat = np.array([[math.cos(x), math.sin(x), 0],[-math.sin(x), math.cos(x), 0],[0, 0, 1]])
for i in range(-new_x, im.shape[0] - new_x):
    for j in range(-new_y, im.shape[1] - new_y):
        vec = np.matmul([i, j, 1], trans_mat)
        if round(vec[0] + new_x) < 512 and round(vec[1] + new_y) < 512:
            new[round(vec[0]+new_x), round(vec[1]+new_y)] = im[i+new_x, j+new_y]
cv2.imshow("rot",new)
cv2.imshow("1",im)
cv2.waitKey(0)
cv2.destroyAllWindows()
It looks like you are trying to implement a nearest-neighbor resampler. What you are doing is going through the image and mapping each input pixel to a new location in the output image. This can lead to problems like pixels incorrectly overwriting each other and output pixels being left empty.
I would suggest (based on experience) that you are looking at the problem backwards. Rather than looking at where an input pixel ends up in the output, you should consider where each output pixel originates in the input. That way, you have no ambiguity about nearest neighbors, and the entire image array will be filled.
You want to rotate about the center. The current rotation matrix you are using rotates about (0, 0). To compensate for that, you need to translate the center of the image to (0, 0), rotate, and then translate back. Rather than developing the full affine matrix, I will show you how to do the individual operations manually, and then how to combine them into the transform matrix.
Manual Computation
First get an input and output image:
im = cv2.imread("Samples\\baboon.jpg", cv2.IMREAD_GRAYSCALE)
new = np.zeros_like(im)
Then determine the center of rotation. Be clear about your dimensions: x is usually the column (dim 1), not the row (dim 0):
center_row = im.shape[0] // 2
center_col = im.shape[1] // 2
Compute the radial coordinates of each pixel in the image, shaped to the corresponding dimension:
row_coord = np.arange(im.shape[0])[:, None] - center_row
col_coord = np.arange(im.shape[1]) - center_col
row_coord and col_coord are the distances from the center in the output image. Now compute the locations they came from in the input. Notice that we can use broadcasting to avoid the need for a loop. I'm following your original convention for angle definitions here, and finding the inverse rotation to determine the source location. The big difference is that the input in degrees is converted to radians, since that's what the trigonometric functions expect:
angle = float(input('Enter Angle in Degrees: ')) * np.pi / 180.0
source_row = row_coord * np.cos(angle) - col_coord * np.sin(angle) + center_row
source_col = row_coord * np.sin(angle) + col_coord * np.cos(angle) + center_col
If all the indices were guaranteed to fall within the input image, you wouldn't even need to pre-allocate the output. You could literally just do new = im[source_row, source_col]. However, you need to mask the indices:
mask = (source_row >= 0) & (source_row < im.shape[0]) & (source_col >= 0) & (source_col < im.shape[1])
new[mask] = im[source_row[mask].round().astype(int), source_col[mask].round().astype(int)]
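If you want something smoother than nearest neighbor, the same source_row / source_col arrays can drive a bilinear interpolation. This is not part of the original approach, just an optional sketch reusing the variables defined above:

# Bilinear interpolation from the same source coordinates (optional sketch)
r0 = np.floor(source_row).astype(int)
c0 = np.floor(source_col).astype(int)
# The 2x2 neighborhood is needed, so stay one pixel away from the far edges
valid = (source_row >= 0) & (source_row < im.shape[0] - 1) & \
        (source_col >= 0) & (source_col < im.shape[1] - 1)
fr = (source_row - r0)[valid]
fc = (source_col - c0)[valid]
r0v, c0v = r0[valid], c0[valid]
top = im[r0v, c0v] * (1 - fc) + im[r0v, c0v + 1] * fc
bottom = im[r0v + 1, c0v] * (1 - fc) + im[r0v + 1, c0v + 1] * fc
smooth = np.zeros_like(im)
smooth[valid] = (top * (1 - fr) + bottom * fr).round().astype(im.dtype)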
Affine Transforms
Now let's take a look at using Affine transforms. First you want to subtract the center from your coordinates. Let's say you have a column vector [[r], [c], [1]]. A translation to zero would be the matrix
[[r'] [[1 0 -rc] [[r]
[c'] = [0 1 -cc] . [c]
[1 ]] [0 0 1 ]] [1]]
Then the (backwards) rotation is applied:
[[r''] [[cos(a) -sin(a) 0] [[r']
[c''] = [sin(a) cos(a) 0] . [c']
[ 1 ]] [ 0 0 1]] [1 ]]
And finally, you need to translate back to center:
[[r'''] [[1 0 rc] [[r'']
[c'''] = [0 1 cc] . [c'']
[ 1 ]] [0 0 1]] [ 1 ]]
If you multiply these three matrices out in order from right to left, you get
[[cos(a) -sin(a) cc * sin(a) - rc * cos(a) + rc]
M = [sin(a) cos(a) -cc * cos(a) - rc * sin(a) + cc]
[ 0 0 1 ]]
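If you want to double-check that algebra, you can multiply the three matrices numerically and compare against M. A quick sketch with an arbitrary angle and center, just for verification:

import numpy as np

a, rc, cc = np.radians(30), 100.0, 120.0  # arbitrary angle and center, only for the check

to_origin = np.array([[1, 0, -rc],
                      [0, 1, -cc],
                      [0, 0,   1]])
rotation = np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0,          0,         1]])
back = np.array([[1, 0, rc],
                 [0, 1, cc],
                 [0, 0,  1]])

M = np.array([[np.cos(a), -np.sin(a),  cc * np.sin(a) - rc * np.cos(a) + rc],
              [np.sin(a),  np.cos(a), -cc * np.cos(a) - rc * np.sin(a) + cc],
              [0,          0,          1]])

print(np.allclose(back @ rotation @ to_origin, M))  # True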
If you build a full matrix of output coordinates rather than the subset arrays we started with, you can use np.matmul, a.k.a. the @ operator, to do the multiplication for you. There is no need for this level of complexity for such a simple case though:
matrix = np.array([[np.cos(angle), -np.sin(angle), center_col * np.sin(angle) - center_row * np.cos(angle) + center_row],
                   [np.sin(angle), np.cos(angle), -center_col * np.cos(angle) - center_row * np.sin(angle) + center_col],
                   [0, 0, 1]])
coord = np.ones((*im.shape, 3, 1))
coord[..., 0, :] = np.arange(im.shape[0]).reshape(-1, 1, 1)
coord[..., 1, :] = np.arange(im.shape[1]).reshape(-1, 1)
source = (matrix @ coord)[..., :2, 0]
The remainder of the processing is fairly similar to the manual computations:
mask = ((source >= 0) & (source < im.shape)).all(axis=-1)
new[mask] = im[source[mask, 0].round().astype(int), source[mask, 1].round().astype(int)]
I tried to implement MadPhysicist's matrix multiplication method. Here is the implementation, for those who care:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path

path = Path(".")
img = plt.imread(path.resolve().parent / "img_align" / "faces_imgs" / "4.jpg")
angle = 15


def _transform(rot_mat, x, y):
    """
    Convenience method for matrix multiplication.
    """
    return np.matmul(rot_mat, np.array([x, y, 1]))


def rotate(img, angle):
    angle %= 360
    angle = np.radians(angle)
    new = np.zeros_like(img)
    cx, cy = tuple(x / 2 for x in img.shape[:2])

    # Angles are reversed as we are interpolating from destination to source
    rot_mat = np.array(
        [
            [np.cos(-angle), -np.sin(-angle), 0],
            [np.sin(-angle), np.cos(-angle), 0],
            [0, 0, 1],
        ]
    )
    rot_mat[0, 2], rot_mat[1, 2], _ = _transform(rot_mat, -cx, -cy)

    # build combined affine transformation matrix
    rot_mat[0, 2] += cx
    rot_mat[1, 2] += cy

    coord = np.ones((*img.shape, 3, 1))  # [576x336x3x3x1]
    coord[..., 0, :] = np.arange(img.shape[0]).reshape(-1, 1, 1, 1)
    coord[..., 1, :] = np.arange(img.shape[1]).reshape(-1, 1, 1)

    source = (rot_mat @ coord)[..., :2, 0]

    x_mask = source[..., 0]
    y_mask = source[..., 1]
    mask = (
        (x_mask >= 0)
        & (x_mask < img.shape[0])
        & (y_mask >= 0)
        & (y_mask < img.shape[1])
    ).all(axis=-1)

    # Clipping values to avoid IndexError
    new[mask] = img[
        x_mask[..., 0][mask].round().astype(int).clip(None, img.shape[0] - 1),
        y_mask[..., 1][mask].round().astype(int).clip(None, img.shape[1] - 1),
    ]
    plt.imsave("test.jpg", new)


if __name__ == "__main__":
    rotate(img, angle)
I think this is what you are looking for
Properly rotate image in OpenCV?
Here is the code
import cv2
import numpy as np

ang = int(input("Enter the angle : "))
im = cv2.imread("Samples\\baboon.jpg", cv2.IMREAD_GRAYSCALE)

def rotimage(image):
    row, col = image.shape[0:2]
    center = tuple(np.array([col, row]) / 2)
    rot_mat = cv2.getRotationMatrix2D(center, ang, 1.0)
    new_image = cv2.warpAffine(image, rot_mat, (col, row))
    return new_image

new_image = rotimage(im)
cv2.imshow("1", new_image)
cv2.waitKey(0)
I am currently completing a program in Python (3.6) as per an internal requirement. As part of it, I have to loop through a colour image (3 bytes per pixel, R, G & B) and distort the image pixel by pixel.
I have the same code in other languages (C++, C#), and the non-optimized code executes in about two seconds, while the optimized code executes in less than a second. By non-optimized code I mean that the matrix multiplication is performed by a 10-line function I implemented. The optimized version just uses external libraries for multiplication.
In Python, this code takes close to 300 seconds. I can't think of a way to vectorize this logic or speed it up, as there are a couple of "if"s inside the nested loop. Any help would be greatly appreciated.
import numpy as np
#for test purposes:
#roi = rect.rect(0, 0, 1200, 1200)
#input = DCImage.DCImage(1200, 1200, 3)
#correctionImage = DCImage.DCImage(1200,1200,3)
#siteToImage= np.zeros((3,3), np.float32)
#worldToSite= np.zeros ((4, 4))
#r11 = r12 = r13 = r21 = r22 = r23 = r31 = r32 = r33 = 0.0
#xMean = yMean = zMean = 0
#tx = ty = tz = 0
#epsilon = np.finfo(float).eps
#fx = fy = cx = cy = k1 = k2 = p1 = p2 = 0
for i in range(roi.x, roi.x + roi.width):
    for j in range(roi.y, roi.y + roi.height):
        if (input.pixels[i][j] == [255, 0, 0]).all():
            # Coordinates conversion
            siteMat = np.matmul(siteToImage, [i, j, 1])
            world = np.matmul(worldToSite, [siteMat[0], siteMat[1], 0.0, 1.0])
            xLocal = world[0] - xMean
            yLocal = world[1] - yMean
            zLocal = z_ortho - zMean
            # From World to camera
            xCam = r11*xLocal + r12*yLocal + r13*zLocal + tx
            yCam = r21*xLocal + r22*yLocal + r23*zLocal + ty
            zCam = r31*xLocal + r32*yLocal + r33*zLocal + tz
            if zCam > epsilon or zCam < -epsilon:
                xCam = xCam / zCam
                yCam = yCam / zCam
            # DISTORTIONS
            r2 = xCam*xCam + yCam*yCam
            a1 = 2*xCam*yCam
            a2 = r2 + 2*xCam*xCam
            a3 = r2 + 2*yCam*yCam
            cdist = 1 + k1*r2 + k2*r2*r2
            u = int((xCam * cdist + p1 * a1 + p2 * a2) * fx + cx + 0.5)
            v = int((yCam * cdist + p1 * a3 + p2 * a1) * fy + cy + 0.5)
            if 0 <= u < correctionImage.width and 0 <= v < correctionImage.height:
                input.pixels[i][j] = correctionImage.pixels[u][v]
You normally vectorize this kind of thing by making a displacement map.
Make a complex image where each pixel has the value of its own coordinate, apply the usual math operations to compute whatever transform you want, then apply the map to your source image.
For example, in pyvips you might write:
import sys
import pyvips
image = pyvips.Image.new_from_file(sys.argv[1])
# this makes an image where pixel (0, 0) (at the top-left) has value [0, 0],
# and pixel (image.width, image.height) at the bottom-right has value
# [image.width, image.height]
index = pyvips.Image.xyz(image.width, image.height)
# make a version with (0, 0) at the centre, negative values up and left,
# positive down and right
centre = index - [image.width / 2, image.height / 2]
# to polar space, so each pixel is now distance and angle in degrees
polar = centre.polar()
# scale sin(distance) by 1/distance to make a wavey pattern
# scale sin(distance) by 1/distance to make a wavy pattern
d = 10000 * (polar[0] * 3).sin() / (1 + polar[0])
# and back to rectangular coordinates again to make a set of vectors we can
# apply to the original index image
distort = index + d.bandjoin(polar[1]).rect()
# distort the image
distorted = image.mapim(distort)
# pick pixels from either the distorted image or the original, depending on some
# condition
result = ((d.abs() > 10) | (image[2] > 100)).ifthenelse(distorted, image)
result.write_to_file(sys.argv[2])
That's just a silly wobble pattern, but you can swap it for any distortion you want. Then run as:
$ /usr/bin/time -f %M:%e ./wobble.py ~/pics/horse1920x1080.jpg x.jpg
54572:0.31
300ms and 55MB of memory on this two-core, 2015 laptop to make the distorted output image.
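The same displacement-map idea works with plain NumPy plus OpenCV, which may be closer to the question's setup. Here is a rough sketch using cv2.remap, assuming OpenCV is available; the wobble itself is just an illustration, swap in whatever source-coordinate math your distortion needs (the file names are placeholders):

import cv2
import numpy as np

img = cv2.imread("input.jpg")          # any test image
h, w = img.shape[:2]

# Coordinate grids: map_x[i, j] and map_y[i, j] say where output pixel (i, j)
# should be sampled from in the input image.
ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

# Example displacement: a sinusoidal wobble around the image centre
dx = xs - w / 2
dy = ys - h / 2
r = np.sqrt(dx * dx + dy * dy)
offset = 10 * np.sin(r / 20)
map_x = (xs + offset).astype(np.float32)
map_y = (ys + offset).astype(np.float32)

# One vectorized call replaces the per-pixel Python loop
warped = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
cv2.imwrite("wobbled.jpg", warped)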
After much testing, the only way to speed up the function without writing it in C++ was to disassemble it and vectorize it. The way to do it in this particular instance is to create an array with the valid indexes at the beginning of the function and use them as tuples to index the final solution.
subArray[roi.y:roi.y+roi.height, roi.x:roi.x+roi.width] = input.pixels[roi.y:roi.y+roi.height, roi.x:roi.x+roi.width]

# Calculate valid XY indexes
y_index, x_index = np.where(np.all(subArray == np.array([255, 0, 0]), axis=-1))

# ....
# do stuff
# ....

# Join result values with XY indexes
ij_xy = np.column_stack((i, j, y_index, x_index))

# Only keep valid ij values
valids_ij_xy = ij_xy[(ij_xy[:, 0] >= 0) & (ij_xy[:, 0] < correctionImage.height) & (ij_xy[:, 1] >= 0) & (ij_xy[:, 1] < correctionImage.width)]

# Assign values
input.pixels[tuple(np.array(valids_ij_xy[:, 2:]).T)] = correctionImage.pixels[tuple(np.array(valids_ij_xy[:, :2]).T)]
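For completeness, the elided "do stuff" part can look roughly like the following: the per-pixel math from the question applied to all valid pixels at once. This is only a sketch reusing the question's variable names (siteToImage, worldToSite, r11..r33, tx..tz, k1, k2, p1, p2, fx, fy, cx, cy, epsilon), and the row/column convention should be double-checked against your own data:

# NOTE: which of (y_index, x_index) corresponds to the question's (i, j) depends on
# the row/column convention used for input.pixels; double-check before using.
ii = x_index.astype(float)
jj = y_index.astype(float)
ones = np.ones_like(ii)

# Coordinate conversions: one matmul for all pixels instead of one per pixel
site = siteToImage @ np.stack([ii, jj, ones])                          # shape (3, N)
world = worldToSite @ np.stack([site[0], site[1], 0.0 * ones, ones])   # shape (4, N)

xLocal = world[0] - xMean
yLocal = world[1] - yMean
zLocal = z_ortho - zMean

# World to camera
xCam = r11 * xLocal + r12 * yLocal + r13 * zLocal + tx
yCam = r21 * xLocal + r22 * yLocal + r23 * zLocal + ty
zCam = r31 * xLocal + r32 * yLocal + r33 * zLocal + tz

# Perspective divide only where zCam is not (almost) zero, as in the scalar code
safe = np.abs(zCam) > epsilon
xCam = np.where(safe, xCam / np.where(safe, zCam, 1.0), xCam)
yCam = np.where(safe, yCam / np.where(safe, zCam, 1.0), yCam)

# Radial / tangential distortion, exactly as in the scalar code
r2 = xCam * xCam + yCam * yCam
a1 = 2 * xCam * yCam
a2 = r2 + 2 * xCam * xCam
a3 = r2 + 2 * yCam * yCam
cdist = 1 + k1 * r2 + k2 * r2 * r2
i = ((xCam * cdist + p1 * a1 + p2 * a2) * fx + cx + 0.5).astype(int)
j = ((yCam * cdist + p1 * a3 + p2 * a1) * fy + cy + 0.5).astype(int)
# i and j now feed the np.column_stack call shown above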
So I have a 2D function which is sampled irregularly over a domain, and I want to calculate the volume underneath the surface. The data is organised in terms of [x, y, z]; taking a simple example:
import numpy as np

def f(x, y):
    return np.cos(10*x*y) * np.exp(-x**2 - y**2)

datrange1 = np.linspace(-5, 5, 1000)
datrange2 = np.linspace(-0.5, 0.5, 1000)
ar = []
for x in datrange1:
    for y in datrange2:
        ar += [[x, y, f(x, y)]]
# xrange2 / yrange2: a second, finer set of sample points around the origin (not defined here)
for x in xrange2:
    for y in yrange2:
        ar += [[x, y, f(x, y)]]
val_arr1 = np.array(ar)
data = np.unique(val_arr1, axis=0)
xlist, ylist, zlist = data.T
where np.unique sorts the data by the first column, then the second. The data is arranged in this way because I need to sample more heavily around the origin, as there is a sharp feature that must be resolved.
Now I wondered about constructing a 2D interpolating function using scipy.interpolate.interp2d, then integrating over this using dblquad. As it turns out, this is not only inelegant and slow, but also kicks out the error:
RuntimeWarning: No more knots can be added because the number of B-spline
coefficients already exceeds the number of data points m.
Is there a better way to integrate data arranged in this fashion, or to overcome this error?
If you can sample the data with high enough resolution around the feature of interest, then more sparsely everywhere else, the problem becomes how to define the area each sample is responsible for. This is easy with regular rectangular samples, and could likely be done stepwise in increments of resolution around the origin. The approach I went with is to generate the 2D Voronoi cells of the samples in order to determine their areas. I pulled most of the code from this answer, as it already had almost all the components needed.
import numpy as np
from scipy.spatial import Voronoi

# taken from: https://stackoverflow.com/questions/28665491/getting-a-bounded-polygon-coordinates-from-voronoi-cells
# computes voronoi regions bounded by a bounding box
def square_voronoi(xy, bbox):  # bbox: (min_x, max_x, min_y, max_y)
    # Select points inside the bounding box
    points_center = xy[np.where((bbox[0] <= xy[:, 0]) & (xy[:, 0] <= bbox[1]) & (bbox[2] <= xy[:, 1]) & (xy[:, 1] <= bbox[3]))]
    # Mirror points across each edge of the bounding box
    points_left = np.copy(points_center)
    points_left[:, 0] = bbox[0] - (points_left[:, 0] - bbox[0])
    points_right = np.copy(points_center)
    points_right[:, 0] = bbox[1] + (bbox[1] - points_right[:, 0])
    points_down = np.copy(points_center)
    points_down[:, 1] = bbox[2] - (points_down[:, 1] - bbox[2])
    points_up = np.copy(points_center)
    points_up[:, 1] = bbox[3] + (bbox[3] - points_up[:, 1])
    points = np.concatenate((points_center, points_left, points_right, points_down, points_up), axis=0)
    # Compute Voronoi
    vor = Voronoi(points)
    # Filter regions (center points should* be guaranteed to have a valid region)
    # center points should come first and not change in size
    regions = [vor.regions[vor.point_region[i]] for i in range(len(points_center))]
    vor.filtered_points = points_center
    vor.filtered_regions = regions
    return vor

# also stolen from: https://stackoverflow.com/questions/28665491/getting-a-bounded-polygon-coordinates-from-voronoi-cells
def area_region(vertices):
    # Polygon's signed area (shoelace formula)
    A = 0
    for i in range(0, len(vertices) - 1):
        s = vertices[i, 0] * vertices[i + 1, 1] - vertices[i + 1, 0] * vertices[i, 1]
        A = A + s
    return np.abs(0.5 * A)

def f(x, y):
    return np.cos(10*x*y) * np.exp(-x**2 - y**2)

# sampling could easily be shaped to sample the origin more heavily
sample_x = np.random.rand(1000) * 10 - 5  # same range as the example linspace
sample_y = np.random.rand(1000) - .5
sample_xy = np.array([sample_x, sample_y]).T

vor = square_voronoi(sample_xy, (-5, 5, -.5, .5))  # using bbox from the samples
points = vor.filtered_points
sample_areas = np.array([area_region(vor.vertices[verts + [verts[0]], :]) for verts in vor.filtered_regions])
sample_z = np.array([f(p[0], p[1]) for p in points])

volume = np.sum(sample_z * sample_areas)
I haven't exactly tested this, but the principle should work, and the math checks out.
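A quick way to sanity-check the result (not part of the answer above, just a suggestion) is to compare the Voronoi-weighted sum against a brute-force integral of f on a dense regular grid over the same bounding box:

xs = np.linspace(-5, 5, 2001)
ys = np.linspace(-0.5, 0.5, 201)
gx, gy = np.meshgrid(xs, ys)
reference = np.trapz(np.trapz(f(gx, gy), ys, axis=0), xs)
print(volume, reference)   # the two estimates should be in the same ballpark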
Edit: Sorry, I didn't think about writing tests. I will do so, and see if I can't find out what I've done wrong. Thanks to the person who suggested I write tests!
I am trying to write a computer simulation in Python that simulates the electric force and how atoms interact with it. For those that don't know: things with opposite (positive and negative) charges attract, like charges repel, and the magnitude of the force falls off as 1 / (distance squared). I place a negatively charged particle (an oxygen ion) and a positively charged particle (a hydrogen ion) into a coordinate system and expect them to attract and move closer together, but instead they always repel! Naturally, I thought it must just have been a typo, so I added a negative sign to my model of the electric force, but they still repel! I have no idea what's going on, and was hoping some of you folks might have a clue as to what's going wrong here. Below is the code. I've included everything so that you can run it from your own terminal and see for yourselves what happens.
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
import math
Å = 10 ** (-10)
u = 1.660539040 * (10 ** (-27))
k = 8.987551787368 * (10 ** 9)
e = 1.6021766208 * (10 ** (-19))
hydrogen1 = dict(
    name='Hydrogen 1',
    charge=1 * e,
    mass=1.00793 * u,
    position=[0.5 * Å, 0.5 * Å, 0.5 * Å],
    velocity=[0, 0, 0],
    acceleration=[0, 0, 0],
    force=[0, 0, 0]
)
oxygen1 = dict(
    name='Oxygen 1',
    charge=-2 * e,
    mass=15.9994 * u,
    position=[0, 0, 0],
    velocity=[0, 0, 0],
    acceleration=[0, 0, 0],
    force=[0, 0, 0]
)
atoms = [hydrogen1, oxygen1]
def magnitude(vector):
    magnitude = 0
    for coordinate in vector:
        magnitude += (coordinate ** 2)
    return math.sqrt(magnitude)

def scale_vector(vector, scalefactor):
    scaled_vector = vector
    i = 0
    while i < len(vector):
        vector[i] *= scalefactor
        i += 1
    return scaled_vector

def sum_vectors(vectors):
    resultant_vector = [0, 0, 0]
    for vector in vectors:
        i = 0
        while i < len(vector):
            resultant_vector[i] += vector[i]
            i += 1
    return resultant_vector

def distance_vector(point1, point2):
    if type(point1) is list and type(point2) is list:
        pos1 = point1
        pos2 = point2
    elif type(point1) is dict and type(point2) is dict:
        pos1 = point1['position']
        pos2 = point2['position']
    vector = []
    i = 0
    while i < len(pos1):
        vector.append(pos2[i] - pos1[i])
        i += 1
    return vector

def distance(point1, point2):
    return magnitude(distance_vector(point1, point2))

def direction_vector(point1, point2):
    vector = distance_vector(point1, point2)
    length = magnitude(vector)
    return scale_vector(vector, 1 / length)

def eletric_force(obj1, obj2):
    length = k * obj1['charge'] * \
        obj2['charge'] / ((distance(obj1, obj2)) ** 2)
    force_vector = scale_vector(direction_vector(obj1, obj2), length)
    return force_vector

def force_to_acceleration(force, mass):
    scalefactor = 1 / (mass)
    return scale_vector(force, scalefactor)
time = 10
t = 0
period = 1 / 1000
while t < time:
    i = 0
    while i < len(atoms):
        atom = atoms[i]
        position = atom['position']
        velocity = atom['velocity']
        acceleration = atom['acceleration']
        # Moving the atom
        atom['position'] = sum_vectors(
            [position, scale_vector(velocity, period)])
        # Accelerating the atom using its current acceleration vector
        atom['velocity'] = sum_vectors([
            velocity, scale_vector(acceleration, period)])
        # Calculating the net force on the atom
        force = [0, 0, 0]
        j = 0
        while j < len(atoms):
            if j != i:
                force = sum_vectors([force, eletric_force(atoms[i], atoms[j])])
            j += 1
        # Updating the force and acceleration on the atom
        atoms[i]['force'] = [force[0], force[1], force[2]]
        atom['acceleration'] = force_to_acceleration(
            [force[0], force[1], force[2]], atom['mass'])
        i += 1
    t += period
np.random.seed(19680801)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for atom in atoms:
    name = atom['name']
    position = atom['position']
    X = position[0]
    Y = position[1]
    Z = position[2]
    print(
        f'Position of {name}: [{X}, {Y}, {Z}]')
    color = 'green'
    if 'Oxygen' in atom['name']:
        color = 'red'
    ax.scatter(X, Y, Z, color=color)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()
Below are the tests:
from functions import *
from constants import *
from atoms import hydrogen1, hydrogen2, oxygen1
import math

def test_magnitude():
    assert magnitude({1, 3, -5}) == math.sqrt(35)

def test_sum_vectors():
    assert sum_vectors([[1, 2, 3], [0, -4, 8]]) == [1, -2, 11]

def test_scale_vector():
    assert scale_vector([1, 4, -3], -2) == [-2, -8, 6]

def test_distance_vector():
    assert distance_vector([1, 4, 3], [0, 1, 1]) == [-1, -3, -2]
    assert distance_vector(hydrogen2, hydrogen1) == [Å, Å, Å]

def test_distance():
    assert distance([1, 2, 3], [3, 2, 1]) == math.sqrt(8)
    assert distance(hydrogen1, oxygen1) == math.sqrt(0.75) * Å
    assert distance(hydrogen1, hydrogen2) == Å * math.sqrt(3)

def test_direction_vector():
    assert direction_vector([1, 1, 1], [7, 5, -3]) == [6 /
        math.sqrt(68), 4 / math.sqrt(68), -4 / math.sqrt(68)]
    m = 1 / math.sqrt(3)
    for component in direction_vector(hydrogen2, hydrogen1):
        assert abs(component - m) < 10 ** (-12)

def test_electric_force():
    m = 4.439972744 * 10 ** (-9)
    for component in electric_force(hydrogen1, hydrogen2):
        assert abs(component - m) < 10 ** (-12)

def test_force_to_acceleration():
    assert force_to_acceleration(
        [4, 3, -1], 5.43) == [4 / 5.43, 3 / 5.43, -1 / 5.43]
When you calculate the electric force, you take your direction vector, which points toward the partner atom, and multiply it by a negative number (because your mismatched charges give a negative product), which results in a force vector pointing away from your partner atom.
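In code, one way to express that fix, reusing the question's helper functions (just a sketch of the sign change, not the whole program; note the question spells the function eletric_force):

def electric_force(obj1, obj2):
    # Signed Coulomb magnitude: negative for opposite charges, positive for like charges
    strength = k * obj1['charge'] * obj2['charge'] / distance(obj1, obj2) ** 2
    # Direction from the partner (obj2) toward obj1, so a negative strength pulls
    # obj1 toward obj2 (attraction) and a positive one pushes it away (repulsion)
    return scale_vector(direction_vector(obj2, obj1), strength)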
You should also consider the risks of doing this modeling with these numbers (the epsilon for floats is somewhere around 1e-16). If you intend to model on angstrom scales, it might be best to go with angstroms as your unit. If you intend to model on meter scales, you might want to stick with what you have. Just be careful rescaling your constants.
The direction vector is the code problem, and if you fix that you get your next problem; at t=0, your atoms in your example have an acceleration of 2e19, and at the first point after t=0, they have a velocity of 2e16 (which, assuming I have your units right, is a bunch of orders of magnitude faster than the speed of light). They move so fast that they rocket toward and then past each other, and then the inverse square distance force of electrostatics functionally goes to 0 after the second tick and they'll never slow down from their hyperwarp.
There are options to deal with this; shorter ticks (femtoseconds?), changing to a relativistic velocity calculation, etc. You could also try modeling with a continuous curve instead of discrete points, but that will fall down so fast if you try to scale it to more than a couple atoms... ultimately this is just a core problem of physics modeling. Good luck!
I have an image and it has some shapes in it. I detected the lines using Hough lines. How can I detect which lines are parallel?
Equation of a line in Cartesian coordinates:
y = k * x + b
Two lines y = k1 * x + b1 and y = k2 * x + b2 are parallel if k1 = k2.
So you need to calculate the coefficient k for each detected line.
In order to uniquely identify the equation of a line, you need to know the coordinates of two points that belong to that line.
After having found lines with HoughLines (C++):
vector<Vec2f> lines;
HoughLines(dst, lines, 1, CV_PI/180, 100, 0, 0 );
you have the vector lines, which stores the parameters (r, theta) of the detected lines in polar coordinates. You need to convert them to Cartesian coordinates.
Here is an example in C++:
for( size_t i = 0; i < lines.size(); i++ )
{
    float rho = lines[i][0], theta = lines[i][1];
    Point pt1, pt2;
    double a = cos(theta), b = sin(theta);
    double x0 = a*rho, y0 = b*rho;
    pt1.x = cvRound(x0 + 1000*(-b)); // the first point
    pt1.y = cvRound(y0 + 1000*(a));  // the first point
    pt2.x = cvRound(x0 - 1000*(-b)); // the second point
    pt2.y = cvRound(y0 - 1000*(a));  // the second point
}
After having got these two points of a line you can calculate its equation.
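For example, the slope comparison might look like this small Python sketch; pt1 and pt2 are the two points computed above for each line, and a tolerance is used instead of exact equality because of floating-point noise:

def slope(pt1, pt2):
    """Return the slope k of the line through pt1 and pt2, or None for a vertical line."""
    dx = pt2[0] - pt1[0]
    if abs(dx) < 1e-9:
        return None            # vertical line: x = const, no finite slope
    return (pt2[1] - pt1[1]) / dx

def are_parallel(line_a, line_b, tol=1e-3):
    """line_a and line_b are ((x1, y1), (x2, y2)) pairs."""
    ka, kb = slope(*line_a), slope(*line_b)
    if ka is None or kb is None:
        return ka is None and kb is None   # parallel only if both are vertical
    return abs(ka - kb) < tol

print(are_parallel(((0, 0), (1, 2)), ((3, 1), (4, 3))))   # True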
HoughLines returns its results in Polar coordinates. So just check the 2nd value for the angle. No need to convert to x,y
def findparallel(lines):
    lines1 = []
    for i in range(len(lines)):
        for j in range(len(lines)):
            if i == j:
                continue
            # cv2.HoughLines returns an array of shape (N, 1, 2): (rho, theta)
            # exact equality of angles; in practice a small tolerance may be better
            if abs(lines[i][0][1] - lines[j][0][1]) == 0:
                # You've found a parallel line!
                lines1.append((i, j))
    return lines1
As John proposed, the easiest way is to detect similar angles. OpenCV's HoughLines function represents a line by means of its distance to the origin and an angle.
So what you could basically do is to cluster the different angles with a hierarchical clustering algorithm:
import cv2
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import ward, fcluster

img = cv2.imread('images/img01.bmp')
img_canny = cv2.Canny(img, 50, 200, 3)

lines = cv2.HoughLines(img_canny, 1, 5 * np.pi / 180, 150)

def find_parallel_lines(lines):
    lines_ = lines[:, 0, :]
    angle = lines_[:, 1]

    # Perform hierarchical clustering on the angles
    angle_ = angle[..., np.newaxis]
    y = pdist(angle_)
    Z = ward(y)
    cluster = fcluster(Z, 0.5, criterion='distance')

    parallel_lines = []
    for i in range(cluster.min(), cluster.max() + 1):
        temp = lines[np.where(cluster == i)]
        parallel_lines.append(temp.copy())

    return parallel_lines
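Usage is then just the following (assuming cv2.HoughLines actually found lines, i.e. lines is not None):

parallel_groups = find_parallel_lines(lines)
for group in parallel_groups:
    print(len(group), "line(s) with angle ~", group[0, 0, 1])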