I am trying to use the probabilistic Hough Line Transform.
After using HoughLinesP I get the following lines:
Note that the blue markings in the image are not part of it; I added them in Paint for demonstration purposes. I want the angles shown in blue.
So what I tried was taking the endpoints of the lines, calculating the slope, and taking the arctan, but I did not get a meaningful result. Note that the following function is part of a class.
def HoughLines(self):
    self.copy_image = self.img.copy()
    minLineLength = 10
    maxLineGap = 30
    self.lines = cv2.HoughLinesP(self.edges, 1, np.pi/180, 15,
                                 minLineLength=minLineLength, maxLineGap=maxLineGap)
    for line in range(0, len(self.lines)):
        for x1, y1, x2, y2 in self.lines[line]:
            cv2.line(self.copy_image, (x1, y1), (x2, y2), (0, 255, 0), 2)
    # cv2.imshow('hough', self.copy_image)
    # cv2.imwrite('test.jpg', self.copy_image)
    # cv2.waitKey(0)
    angle = 0.0
    self.nlines = self.lines.size
    for x1, y1, x2, y2 in self.lines[0]:
        angle += np.arctan2(y2 - y1, x2 - x1)
    print(angle)
Therefore, I am stuck and do not know how to proceed. What could a possible solution be?
Any help is appreciated. Thank you.
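For clarity, this is roughly the per-segment angle calculation I was aiming for (just a sketch; it assumes self.lines has the usual (N, 1, 4) shape that cv2.HoughLinesP returns):

import numpy as np

def segment_angles(lines):
    # lines is assumed to have the (N, 1, 4) shape returned by cv2.HoughLinesP
    segments = lines.reshape(-1, 4)                      # one (x1, y1, x2, y2) row per segment
    return np.degrees(np.arctan2(segments[:, 3] - segments[:, 1],
                                 segments[:, 2] - segments[:, 0]))

Calling segment_angles(self.lines) would give one angle (in degrees) per green line instead of a single summed value.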
I have tried several libraries and ways to detect faces and export them as images.
The problem is that all the algorithms crop off a lot of the head.
Example from the deepface doc:
While I want something like:
Is there a way of doing so, or of adding "padding" to the coordinates in a smart way?
I get start and end points.
I built a function to do that with simple math:
def increase_rectangle_size(points: list[int], increase_percentage: int) -> list[int]:
    delta_x = (points[0] - points[2]) * increase_percentage // 100
    delta_y = (points[1] - points[3]) * increase_percentage // 100
    new_points = [points[0] + delta_x, points[1] + delta_y,
                  points[2] - delta_x, points[3] - delta_y]
    return [(i > 0) * i for i in new_points]  # Clamp negative values to zero.
What it basically does is increase the distance between the two points (along the line through them).
I don't want values below 0, so I check for negative numbers at the end of the function. I do not care if the box ends up outside the frame (for larger values).
I get the two points as a list ([x1, y1, x2, y2]) because that is how the library I use returns them, but you can of course change it to two points.
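For example, with a hypothetical face box [x1, y1, x2, y2]:

box = [100, 120, 200, 260]                              # made-up detected face box
print(increase_rectangle_size(box, 20))                 # -> [80, 92, 220, 288]
print(increase_rectangle_size([10, 5, 60, 80], 50))     # -> [0, 0, 85, 118], negatives clamped to 0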
I have two different curves (y1 and y2), given as lists of points, and I want to find the area between the curves separately for the regions where:
y1 > y2
y1 < y2
I have found this post, but it only calculates the sum of both areas.
In the plot of what I want, the blue area and the red area should be computed separately.
Edit:
I noticed in hindsight that this solution is not exact, and there are probably cases where it doesn't work at all. As long as there is no better answer, I will leave it here.
You can use
diff = y1 - y2 # calculate difference
posPart = np.maximum(diff, 0) # only keep positive part, set other values to zero
negPart = -np.minimum(diff, 0) # only keep negative part, set other values to zero
to separate the blue from the red part. Then calculate their areas with np.trapz:
posArea = np.trapz(posPart)
negArea = np.trapz(negPart)
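A small self-contained sketch of the whole idea with made-up curves; passing the shared x grid to np.trapz so the point spacing is taken into account:

import numpy as np

x = np.linspace(0, 2 * np.pi, 200)   # made-up example data
y1 = np.sin(x)
y2 = np.sin(2 * x)

diff = y1 - y2
posPart = np.maximum(diff, 0)        # region where y1 > y2 (the blue area)
negPart = -np.minimum(diff, 0)       # region where y1 < y2 (the red area)

posArea = np.trapz(posPart, x)       # pass x so non-unit spacing is handled
negArea = np.trapz(negPart, x)
print(posArea, negArea)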
I am using Garden-MapView in my Kivy app.
My issue is that the user can pan outside the bounds of the map (pulled from OpenStreetMap) and continue into the surrounding blue 'sea' area. Image below:
I'm not sure if the issue is specific to Garden-MapView, or if a generic kivy/widget answer could solve it.
My best attempt to solve this (out of many) is the crude code posted below. When the map extents move past the edge of the screen, the code calculates the screen-center coordinate and pulls the center of the map back to it. It works better for longitude than for latitude, but it can slow the app down significantly because of how frequently the on_map_relocated event fires. I have also set MapView.min_zoom = 2:
class CustMapView(MapView):
    def on_map_relocated(self, *kwargs):
        x1, y1, x2, y2 = self.get_bbox()
        centerX, centerY = Window.center
        latRemainder = self.get_latlon_at(centerX, centerY, zoom=self.zoom)[0] - (x1 + x2) / 2
        if x1 < -85.8: self.center_on((x1 + x2) / 2 + latRemainder + .01, self.lon)
        if x2 > 83.6: self.center_on((x1 + x2) / 2 + latRemainder - .01, self.lon)
        if y1 == -180: self.center_on(self.lat, (y1 + y2) / 2 + 0.01)
        if y2 == 180: self.center_on(self.lat, (y1 + y2) / 2 - 0.01)
Full code to reproduce yourselves: https://pastebin.com/xX0GtPUb
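As a variation on the same idea, the clamping math could live in a plain helper that only decides where the new center should be. This is an untested sketch; it assumes the same (lat1, lon1, lat2, lon2) ordering of get_bbox() and the same hand-tuned limits as the code above:

def clamp_center(bbox, lat, lon,
                 lat_min=-85.8, lat_max=83.6, lon_min=-180.0, lon_max=180.0):
    # Return a (lat, lon) pulled back inside the limits, or None if no move is needed.
    lat1, lon1, lat2, lon2 = bbox
    new_lat, new_lon = lat, lon
    if lat1 < lat_min:
        new_lat = lat + (lat_min - lat1)   # push the center up by the overshoot
    elif lat2 > lat_max:
        new_lat = lat - (lat2 - lat_max)   # push the center down
    if lon1 <= lon_min:
        new_lon = lon + 0.01               # nudge east, as in the code above
    elif lon2 >= lon_max:
        new_lon = lon - 0.01               # nudge west
    if (new_lat, new_lon) != (lat, lon):
        return new_lat, new_lon
    return None

on_map_relocated would then only call self.center_on(*result) when clamp_center(self.get_bbox(), self.lat, self.lon) returns a value, which keeps the per-event work small.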
I need to find a line that splits the points so that the blue area is equal to the red area. I am doing this on a numpy array that holds all of the x and y points. I have tried splitting it up and taking the areas of the individual parts, but that is proving difficult given how many points I have.
My other idea was to put this function on its side and integrate that way; the areas would be equal when the integral is zero, but I can't find a function that lets me choose the "x-axis" in that case. Does anyone have advice on how I might go about this?
[Edit] Original Picture (before the bad color job)
[Edit]
The x-values I am using can be found here
and the y-values to go along with those are here
EDIT: The code below isn't very good at dealing with generic functions; this other version of area_difference is a little more robust. It will still fail if the passed x0 does not intersect the curve at least twice.
def area_difference(x0, x, y):
    transitions = np.where(np.diff(x < x0))[0]
    x_ = x[transitions[0]:transitions[-1]]
    y_ = y[transitions[0]:transitions[-1]]
    return np.sum(np.diff(y_) * (x_[:-1] - x0))
You can get the area if you consider your curve defined as a parametric curve, the index of the array being the parameter. I think the following code is more or less straightforward given that basic idea. I haven't worried too much about getting off-by-one errors right, but any differences should be minor.
import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize

x = np.genfromtxt('x.txt')
y = np.genfromtxt('y.txt')

def area_difference(x0, x, y):
    transitions = np.where(np.diff(x < x0))
    x_right = x[transitions[0][0]:transitions[0][1]]
    y_right = y[transitions[0][0]:transitions[0][1]]
    x_left = x[transitions[0][1]:transitions[0][2]]
    y_left = y[transitions[0][1]:transitions[0][2]]
    return (np.sum(np.diff(y_right) * (x_right[:-1] - x0)) +
            np.sum(np.diff(y_left) * (x_left[:-1] - x0)))

x0 = scipy.optimize.fsolve(area_difference, 3, args=(x, y))

plt.plot(x, y, 'b-')
plt.plot([x0, x0], [y.min(), y.max()], 'r-')
plt.show()
>>> x0
array([ 3.4174168])
I ended up solving my own problem with something fairly simple.
As can be seen in the image, I split the curve into its top, middle, and bottom sections (represented here by the different colors), then put a dividing line between them and did a Riemann sum of sorts, moving the red line until the areas were equal.
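A rough sketch of that idea, heavily simplified: it assumes a single-valued curve y(x) stored as numpy arrays (the real data is a closed curve, so the actual version needs the top/middle/bottom split described above), and it just accumulates trapezoid strips until half of the total area is reached:

import numpy as np

def equal_area_split(x, y):
    # Find x0 such that the area left of x0 roughly equals the area right of x0.
    order = np.argsort(x)
    x, y = x[order], y[order]
    strips = 0.5 * (y[1:] + y[:-1]) * np.diff(x)              # trapezoid strip areas
    cumulative = np.cumsum(strips)
    idx = np.searchsorted(cumulative, cumulative[-1] / 2.0)   # first strip past the halfway mark
    return x[idx + 1]

x = np.linspace(0, 3, 300)        # made-up data
y = x ** 2
print(equal_area_split(x, y))     # close to 3 / 2**(1/3) ≈ 2.38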
This question is related to Transformation between two set of points. However, this one is better specified, with some assumptions added.
I have an element image and a model image.
I have detected contours on both:
contoursModel0, hierarchyModel = cv2.findContours(model.copy(), cv2.RETR_LIST,
                                                  cv2.CHAIN_APPROX_SIMPLE)
contoursModel = [cv2.approxPolyDP(cnt, 2, True) for cnt in contoursModel0]

contours0, hierarchy = cv2.findContours(canny.copy(), cv2.RETR_LIST,
                                        cv2.CHAIN_APPROX_SIMPLE)
contours = [cv2.approxPolyDP(cnt, 2, True) for cnt in contours0]
Then I matched each contour against each of the others:
modelMassCenters = []
imageMassCenters = []
for cnt in contours:
    for cntModel in contoursModel:
        result = cv2.matchShapes(cnt, cntModel, cv2.cv.CV_CONTOURS_MATCH_I1, 0)
        if result != 0:
            if result < 0.05:
                # Here are matched contours
                momentsModel = cv2.moments(cntModel)
                momentsImage = cv2.moments(cnt)
                massCenterModel = (momentsModel['m10'] / momentsModel['m00'],
                                   momentsModel['m01'] / momentsModel['m00'])
                massCenterImage = (momentsImage['m10'] / momentsImage['m00'],
                                   momentsImage['m01'] / momentsImage['m00'])
                modelMassCenters.append(massCenterModel)
                imageMassCenters.append(massCenterImage)
Matched contours act more or less as features.
Now I want to detect the transformation between these two sets of points.
Assumptions: the element is a rigid body; only rotation, displacement, and scale change.
Some features may be mis-detected; how can I eliminate them? I have used cv2.findHomography before, and it takes two vectors and calculates the homography between them even if there are some mismatches.
cv2.getAffineTransform takes only three points (so it cannot cope with mismatches), and here I have multiple features.
The answer to my previous question explains how to calculate this transformation, but it does not handle mismatches. I also think it should be possible to return some quality measure from the algorithm (by checking how many points are mismatched after computing the transformation from the rest).
And the last question: should I use all the contour points to compute the transformation, or treat only the mass centers of these shapes as features?
To illustrate, I have added a simple image. Features in green are good matches; in red, bad matches. Here the transformation should be computed from the 3 green features, and the red mismatches should only lower the match quality.
I'm adding the fragments of a solution I have figured out so far (though I think it could be done much better):
for i in range(0, len(modelMassCenters) - 1):
    for j in range(i + 1, len(modelMassCenters)):   # include the last center as well
        x1, y1 = modelMassCenters[i]
        x2, y2 = modelMassCenters[j]
        modelVec = (x2 - x1, y2 - y1)
        x1, y1 = imageMassCenters[i]
        x2, y2 = imageMassCenters[j]
        imageVec = (x2 - x1, y2 - y1)
        rotation = angle(modelVec, imageVec)
        rotations.append((i, j, rotation))
        scale = length(modelVec) / length(imageVec)
        scales.append((i, j, scale))
After computing the scale and rotation given by each pair of corresponding lines, I'm going to take the median value and then average only the rotations that do not differ from the median by more than some delta. The same goes for scale. The points that produced those accepted values will then be used to compute the displacement.
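A sketch of that filtering step as I would write it (the delta values are arbitrary; rotations and scales are the (i, j, value) lists built above, and the surviving pair indices would then vote on the displacement):

import numpy as np

def filter_by_median(estimates, delta):
    # Keep (i, j, value) triples whose value lies within delta of the median value.
    values = np.array([v for (_, _, v) in estimates])
    median = np.median(values)
    return [(i, j, v) for (i, j, v) in estimates if abs(v - median) <= delta], median

good_rotations, rot_median = filter_by_median(rotations, delta=np.radians(3))
good_scales, scale_median = filter_by_median(scales, delta=0.05)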
Your second step (match contours to each other by doing a pairwise shape comparison) sounds very vulnerable to errors if features have a similar shape, e.g., you have several similar-sized circular contours. Yet if you have a rigid body with 5 circular features in one quadrant only, you could get a very robust estimate of the affine transform if you consider the body and its features as a whole. So don't discard information like a feature's range and direction from the center of the whole body when matching features. Those are at least as important in correlating features as size and shape of the individual contour.
I'd try something like (untested pseudocode):
"""
Convert from rectangular (x,y) to polar (r,w)
r = sqrt(x^2 + y^2)
w = arctan(y/x) = [-\pi,\pi]
"""
def polar(x, y): # w in radians
from math import hypot, atan2, pi
return hypot(x, y), atan2(y, x)
model_features = []
model = params(model_body_contour) # return tuple (center_x, center_y, area)
for contour in model_feature_contours:
f = params(countour)
range, angle = polar(f[0]-model[0], f[1]-model[1])
model_features.append((angle, range, f[2]))
image_features = []
image = params(image_body_contour)
for contour in image_feature_contours:
f = params(countour)
range, angle = polar(f[0]-image[0], f[1]-image[1])
image_features.append((angle, range, f[2]))
# sort image_features and model_features by angle, range
#
# correlate image_features against model_features across angle offsets
# rotation = angle offset of max correlation
# scale = average(model areas and ranges) / average(image areas and ranges)
If you have very challenging images, such as a ring of 6 equally-spaced similar-sized features, 5 of which have the same shape and one is different (e.g. 5 circles and a star), you could add extra parameters such as eccentricity and sharpness to the list of feature parameters, and include them in the correlation when searching for the rotation angle.
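In case it helps, one possible concrete form for the params() helper assumed in the pseudocode above, using only OpenCV calls that already appear in the question plus cv2.contourArea:

import cv2

def params(contour):
    # Return (center_x, center_y, area) for a contour, as assumed by the pseudocode above.
    m = cv2.moments(contour)
    return m['m10'] / m['m00'], m['m01'] / m['m00'], cv2.contourArea(contour)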