Find regions of high/low pixel density in OpenCV moire patterns - Python

I have a thresholded image of a moire pattern that looks like this (moire lines are a little bit inclined):
I want to find the x coordinate of the darkest region in the pattern (i.e., the x coordinate where in theory there is perfect overlap between two vertical lines).
By visual inspection I can easily approximate the expected x coordinate, but I am not sure how to find it more precisely using automatic computer vision techniques.
My first idea was to reduce the matrix to a vector by treating the matrix rows as a set of 1D vectors and averaging them until a single row is obtained. Then, plot this 1D array and hopefully find the x coordinate of the global minimum that corresponds to the overlap region (see reduced_OpenCV).
# average along axis 0 so each column collapses to its mean intensity;
# the float dtype is needed because cv2.reduce cannot average 8-bit into 8-bit
reduced = cv2.reduce(threshold, 0, cv2.REDUCE_AVG, dtype=cv2.CV_32F)
plt.plot(reduced[0, :])
The obtained signal is:
The results don't look good and I don't know how to proceed from here. Any ideas on how to process the signal and extract the x coordinate are more than welcome. New ideas on how to approach this problem are also welcome.
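If the projection idea is kept, one option is to smooth the 1D profile before taking its minimum, so that pixel noise does not pick the wrong column. A minimal sketch, assuming the thresholded image lives in a file (the file name and the sigma value are placeholders to tune):

import cv2
import numpy as np
from scipy.ndimage import gaussian_filter1d

# hypothetical file name standing in for the thresholded moire image
threshold = cv2.imread("moire_threshold.png", cv2.IMREAD_GRAYSCALE)

# column-wise average: one mean intensity per x coordinate
profile = cv2.reduce(threshold, 0, cv2.REDUCE_AVG, dtype=cv2.CV_32F)[0, :]

# smooth before locating the minimum so pixel noise does not dominate;
# sigma should be on the order of the fringe spacing
smoothed = gaussian_filter1d(profile, sigma=25)
x_darkest = int(np.argmin(smoothed))
print(x_darkest)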

Related

How to cluster a 3D array in Python?

I have to cluster a 3D array that looks like this:
a=([[[1,2,3],[4,5,6],[7,8,9]],[[1,4,7],[2,5,9],[3,6,8]]])
Imagine that this array represents the coordinates of a triangle in a time series, so the first 2D array represents the coordinates of the vertices in the first frame, the second array represents the coordinates in the second frame, and so on.
I need to cluster the position of this triangle in time, but the clustering algorithms of scikit-learn only work on 2D arrays. I have reshaped the 3D array to obtain this
b=([[1,2,3,4,5,6,7,8,9],[1,4,7,2,5,9,3,6,8]])
but the performance of the clustering algorithms is poor (please note that the triangle is an example; I have to cluster the position of a much more complex figure, so the dimensionality of the points in the 2D array is very high).
So I was wondering if there are other methods to cluster a 3D array besides reshaping and dimensionality reduction techniques. I've read that converting the 3D array into a distance matrix could be a solution, but I really don't know how to do this. If anyone has any advice on how to do this, or any other advice on how to solve this problem, I would really appreciate the help!
The clustering algorithms work with this format for a matrix: (n_samples, n_features).
So in your case n_samples is your position in time and n_features is your coordinates. I'm assuming you are trying to find the average location of your shapes across time. For this type of task I would advise calculating the centre point of your shape: that way, no matter the shape, you have one point in the middle of the object to track across time. It makes more sense than trying to track all corners of the object, which I assume can rotate.
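A minimal sketch of that centre-point idea, assuming the frames are stacked along the first axis of the array; KMeans and n_clusters=2 are arbitrary choices for illustration:

import numpy as np
from sklearn.cluster import KMeans

# frames stacked along axis 0: shape (n_frames, n_vertices, n_dims)
a = np.array([[[1, 2, 3], [4, 5, 6], [7, 8, 9]],
              [[1, 4, 7], [2, 5, 9], [3, 6, 8]]])

# one centre point per frame: average over the vertices
centres = a.mean(axis=1)            # shape (n_frames, n_dims)

# cluster the per-frame centres; n_clusters=2 is an arbitrary choice
labels = KMeans(n_clusters=2, n_init=10).fit_predict(centres)
print(labels)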
Hope it helps!

An algorithm that efficiently computes the distance of one labeled pixel to its nearest differently labeled pixel

I apologize for the lengthy title. I have two questions, where the second question builds on the first one.
(1). Suppose I have a matrix whose entries are either 0 or 1. Now, I pick an arbitrary 0 entry. Is there an efficient algorithm that searches for the nearest entry with label 1, or calculates the distance between the chosen 0 entry and its nearest entry with label 1?
(2). Suppose now the distribution of entries 1 has a geometric property. To make this statement clearer, imagine this matrix as an image. In this image, there are multiple continuous lines (not necessarily straight). These lines form several boundaries that partition the image into smaller pieces. Assume the boundaries are labeled 1, whereas all the pixels in the partitioned area are labeled 0. Now, similar to (1), I pick a random pixel labeled as 0, and I hope to find out the coordinate of the nearest pixel labeled as 1 or the distance between them.
A hint/direction for part (1) is enough for me. If typing up an answer takes too much time, it is okay just to tell me the name of the algorithm, and I will look it up.
p.s.: If I post this question in an incorrect section, please let me know. I will re-post it to an appropriate section. Thank you!
I think that if you have a matrix, you can run BFS, where the matrix A is your graph G and the vertex v is the arbitrary pixel you chose.
There is an edge between any two adjacent cells in the matrix, so the first 1-labeled cell the search reaches is the nearest one.
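A minimal sketch of that BFS over a 4-connected grid. Note that it measures distance in grid steps, i.e. it finds the nearest 1 in the Manhattan sense; a Euclidean nearest neighbour would need a different check:

from collections import deque

def nearest_one(grid, start):
    """BFS from `start` (row, col) over 4-connected cells; returns the
    coordinate of the nearest 1 entry and its distance in steps."""
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (r, c), d = queue.popleft()
        if grid[r][c] == 1:
            return (r, c), d
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return None, None  # no 1 entry in the matrix

grid = [[0, 0, 1],
        [0, 0, 0],
        [1, 0, 0]]
print(nearest_one(grid, (1, 1)))  # -> ((0, 2), 2)

If you need this for many query pixels, computing all distances at once (e.g. a distance transform over the whole matrix) would amortise the cost.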

How does the OpenCV findHomography method "internally" work? (return values) [duplicate]

I'm using OpenCV's findHomography function (with RANSAC) in Python to find the transformation between two sets of points.
Looking at the documentation, the output is a mask and a transformation matrix.
The documentation is not clear about what the mask represents or how the matrix is structured.
Is a 1 in the output mask a point that fits the found transformation or a point that was ignored?
And could you explain the makeup of the 3x3 output transformation matrix?
Thanks in advance and sorry if I missed some documentation which explains this.
Based on my limited search, the mask returned by findHomography() holds the inlier/outlier status of the point pairs, i.e. it is an array with one entry per match after the homography has been estimated: 1 for points consistent with the found transformation, 0 for points rejected as outliers.
This answer addresses your first question.
This answer addresses what a mask is and what its dimensions are.
Well, what do you need the mask for? That argument is optional, so you don't have to pass any mask.
As for the resulting matrix: it is called a homography matrix, or H matrix, and it represents the transformation of one point in an image plane to the same point in another image plane.
X1 = H * X2
The point X1 is the same point (X2) in a different plane.
So the H matrix is basically the description of how one point in, let's say, image 1 matches one point in image 2.
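A small self-contained sketch of both points, using made-up coordinates; note that with cv2.findHomography(src, dst, ...) the returned H maps src points onto dst points:

import cv2
import numpy as np

# eight pairs related by a known transform (scale 2, shift 5), plus one
# deliberate outlier appended at the end
src = np.float32([[0, 0], [1, 0], [0, 1], [1, 1],
                  [2, 0], [0, 2], [2, 2], [1, 2]])
dst = src * 2 + 5
src = np.vstack([src, [[3, 3]]]).astype(np.float32)
dst = np.vstack([dst, [[100, 100]]]).astype(np.float32)

H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(mask.ravel())  # expected: [1 1 1 1 1 1 1 1 0], 1 = inlier, 0 = outlier

# applying H: perspectiveTransform does the homogeneous multiply-and-divide
mapped = cv2.perspectiveTransform(src.reshape(-1, 1, 2), H)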

How to create a bidimensional Gaussian filter on a dense list of points

I am doing my best to replicate the algorithm described in this paper in order to build an inpainting algorithm. The idea is to get the contour or edge points of the part of the image that needs to be inpainted. In order to find the most linear point in the region, the orthogonal normal vector is found. On page 6, a short description of the implementation is given:
In our implementation the contour δΩ of the target region is modelled as a dense list of image point locations. Given a point p ∈ δΩ, the normal direction n_p is computed as follows: i) the positions of the "control" points of δΩ are filtered via a bi-dimensional Gaussian kernel and, ii) n_p is estimated as the unit vector orthogonal to the line through the preceding and the successive points in the list.
So it appears that I need to put all these points through a Gaussian filter. How do I set up a bi-dimensional Gaussian filter when all I have is a one-dimensional list of points?
Let's say our contour is a box shape, so I create a one-dimensional list of points: [1,1],[1,2],[1,3],[2,1],[2,3],[3,1],[3,2],[3,3]. Do I need to simply make a new 2D matrix, put the points in, leave the middle point [2,2] empty, and then run a Gaussian filter on it? This doesn't seem very dense, though.
I am trying to implement this with Python libraries.
"a dense list of image points" is simply a line.
You are basically applying a Gaussian filter to a black-and-white image where the line is black and the background is white, from what I understand. I think that by doing that, they approximate the curve model fitting.
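A rough sketch of that reading, with made-up points and kernel size: rasterise the point list as a black line on a white image, then blur it:

import numpy as np
import cv2

# rasterise a made-up contour: black line on a white image
img = np.full((100, 100), 255, dtype=np.uint8)
pts = np.array([[10, 10], [90, 10], [90, 90], [10, 90]], dtype=np.int32)
cv2.polylines(img, [pts], isClosed=True, color=0, thickness=1)

# the Gaussian blur softens the line, approximating a smoothed curve
blurred = cv2.GaussianBlur(img, (5, 5), 1.5)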
Convolve all of the points in the 2D region surrounding each point, and then overwrite the point with the result.
This will make any curve on the edge of the target region less sharp, lowering the noise in the calculation of the normal, which is the vector orthogonal to the line through the two points that surround the current one.
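Another way to read the paper's step i) is to smooth the x and y coordinate sequences of the point list as two parallel 1D signals, then take step ii)'s normal from the neighbouring points. A sketch under that assumption, using a made-up noisy circle as the contour:

import numpy as np
from scipy.ndimage import gaussian_filter1d

# made-up closed contour: a noisy circle as an ordered list of (x, y) points
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
contour = np.column_stack([np.cos(t), np.sin(t)])
contour += np.random.normal(0, 0.02, contour.shape)

# step i) filter the control points: smooth x and y as two 1D signals
# (mode='wrap' because the contour is closed)
smooth = np.column_stack([
    gaussian_filter1d(contour[:, 0], sigma=2, mode='wrap'),
    gaussian_filter1d(contour[:, 1], sigma=2, mode='wrap'),
])

# step ii) normal at point i: unit vector orthogonal to the line through
# the preceding and successive points in the list
tangent = np.roll(smooth, -1, axis=0) - np.roll(smooth, 1, axis=0)
normals = np.column_stack([-tangent[:, 1], tangent[:, 0]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)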

Estimating the boundary of arbitrarily distributed data

I have two-dimensional discrete spatial data. I would like to make an approximation of the spatial boundaries of this data so that I can produce a plot with another dataset on top of it.
Ideally, this would be an ordered set of (x,y) points that matplotlib can plot with the plt.Polygon() patch.
My initial attempt is very inelegant: I place a fine grid over the data, and where data is found in a cell, a square matplotlib patch is created for that cell. The resolution of the boundary thus depends on the sampling frequency of the grid. Here is an example, where the grey regions are the cells containing data and black is where no data exists.
1st attempt http://astro.dur.ac.uk/~dmurphy/data_limits.png
OK, problem solved - why am I still here? Well... I'd like a more "elegant" solution, or at least one that is faster (i.e. I don't want to get on with "real" work, I'd like to have some fun with this!). The best way I can think of is a ray-tracing approach, e.g.:
1. From xmin to xmax, at y=ymin, check if the data boundary is crossed in intervals of dx
2. Set y = ymin + dy and do step 1
3. Do steps 1-2, but now sample in y
An alternative is defining a centre and sampling in r-theta space, i.e. radial spokes in dtheta increments.
Both would produce a set of (x,y) points, but then how do I order/link neighbouring points to create the boundary?
A nearest neighbour approach is not appropriate as, for example (to borrow from Geography), an isthmus (think of Panama connecting N&S America) could then close off and isolate regions. This also might not deal very well with the holes seen in the data, which I would like to represent as a different plt.Polygon.
The solution perhaps comes from solving an area maximisation problem: for a set of points defining the data limits, what is the maximum contiguous area contained within those points? To form the enclosed area, what are the neighbouring points for the nth point? And how will the holes be treated in this scheme? Is this erring into topology now?
Apologies, much of this is me thinking out loud. I'd be grateful for some hints, suggestions or solutions. I suspect this is an oft-studied problem with many solution techniques, but I'm looking for something simple to code and quick to run... I guess everyone is, really!
~~~~~~~~~~~~~~~~~~~~~~~~~
OK, here's attempt #2 using Mark's idea of convex hulls:
attempt #2: http://astro.dur.ac.uk/~dmurphy/data_limitsv2.png
For this I used qconvex from the qhull package, getting it to return the extreme vertices. For those interested:
cat [data] | qconvex Fx > out
The sampling of the perimeter seems quite low, and although I haven't played much with the settings, I'm not convinced I can improve the fidelity.
I think what you are looking for is the convex hull of the data. That will give you a set of points such that, when connected, all your points lie on or inside the resulting polygon.
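A minimal sketch using scipy's ConvexHull with random stand-in data; in 2D the hull vertices come back already ordered, so they can be fed straight to plt.Polygon:

import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import ConvexHull

points = np.random.rand(200, 2)   # random stand-in for the spatial data
hull = ConvexHull(points)

# hull.vertices is already ordered (counter-clockwise in 2D), so the
# boundary points can go straight into a matplotlib Polygon patch
boundary = points[hull.vertices]
plt.gca().add_patch(plt.Polygon(boundary, fill=False))
plt.plot(points[:, 0], points[:, 1], '.')
plt.show()

This traces only the convex outline, so the concavities and holes mentioned above would still need a different construction.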
I may have missed something, but what's the motivation for not simply determining the maximum and minimum x and y values? Unless you have an enormous amount of data, you could simply iterate through your points determining the minima and maxima fairly quickly.
This isn't the most efficient example, but if your data set is small this won't be particularly slow:
import random

# 1000 random (x, y) points as stand-in data
data = [(random.randint(-100, 100), random.randint(-100, 100)) for _ in range(1000)]

# bounding box of the data: one linear pass per extreme
x_min = min(point[0] for point in data)
x_max = max(point[0] for point in data)
y_min = min(point[1] for point in data)
y_max = max(point[1] for point in data)
