I am trying to detect edges using Python.
There are hundreds of algorithms for edge detection; however, my image is very obscure and unclear. The most serious problem is that one edge is located at a local maximum of the values, while the other edge is shifted slightly away from the local maximum. On closer examination, I found that the other edge is located at one of the inflection points of the original values. I have depicted this situation in a simplified example.
Is there any simple and beautiful way to detect edges in various situations?
No, there is no simple and beautiful method to detect edges. This is an ill-posed problem. In particular, there is no absolute criterion to tell signal from noise.
A not-so-bad method is to consider the peaks of the derivative, provided they correspond to a sufficiently high step in the signal.
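For a 1-D signal, a minimal sketch of that idea with NumPy/SciPy might look like this; the smoothing width and the minimum step height are assumptions you would tune to your data:

```python
import numpy as np
from scipy.signal import find_peaks

def edge_candidates(signal, smooth=5, min_step=0.1):
    """Smooth the signal, take its derivative, and keep only peaks that
    correspond to a sufficiently large step (min_step is a placeholder)."""
    kernel = np.ones(smooth) / smooth
    smoothed = np.convolve(signal, kernel, mode="same")
    deriv = np.abs(np.gradient(smoothed))
    peaks, _ = find_peaks(deriv, height=min_step)
    return peaks
```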
I need help writing a Python algorithm capable of detecting the corners of an image. So far I have a thresholded image, and I was using cornerHarris from OpenCV to detect all the corners. My problem is filtering all those points to output only the ones I want. Maybe I can do this with a loop?
In my case, I want the two lowest and the two highest corner points. My main interest is to obtain the pixel coordinates of these corners. You can see an example of an image I'm processing here:
In this image I have drawn the corner points I'm interested in.
There are several ways to solve this problem. In real-world applications it's rare (that is, actually never occurs) that you need to solve a problem once for a single image. If you have additional images it would be nice to see how much the object of interest varies.
One method to find corners is the convex hull. This method is more generally used to find a convex shape encompassing scattered points, but it's worth knowing about and implementing.
https://en.wikipedia.org/wiki/Convex_hull
What's handy about the convex hull is that the concept of a "corner" (a vertex on the convex hull polygon) is easy to grasp and doesn't rely on parameter settings. You don't have to consider whether a corner is sharp enough, strong enough, pointy enough, unique in its neighborhood, etc.--the convex hull will simply make sense to you.
You should be able to write a functional version of a convex hull "gift wrapping" algorithm in a reasonable period of time.
https://en.wikipedia.org/wiki/Gift_wrapping_algorithm
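If you want to try writing it yourself, here is a minimal NumPy sketch of the gift-wrapping (Jarvis march) idea; `points` is assumed to be an (N, 2) array of edge-pixel coordinates, and collinear points are not handled with any special care:

```python
import numpy as np

def gift_wrap(points):
    """Gift-wrapping (Jarvis march) convex hull of an (N, 2) point array."""
    pts = np.unique(np.asarray(points, dtype=float), axis=0)
    if len(pts) < 3:
        return pts
    # The left-most (then lowest) point is guaranteed to be on the hull.
    start = int(np.lexsort((pts[:, 1], pts[:, 0]))[0])
    hull, current = [], start
    while True:
        hull.append(pts[current])
        candidate = (current + 1) % len(pts)
        for i in range(len(pts)):
            a = pts[candidate] - pts[current]
            b = pts[i] - pts[current]
            # Positive cross product: pts[i] lies to the left of the line
            # current -> candidate, so it wraps the point set more tightly.
            if a[0] * b[1] - a[1] * b[0] > 0:
                candidate = i
        current = candidate
        if current == start:
            break
    return np.array(hull)
```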
There are many ways to compute the convex hull, but don't get lost in all the different methods. Choose one that makes sense to you and implement it. The fastest known method may still be Seidel, but don't even think about running down that rabbit hole. Simple is good.
Before you compute the convex hull, you'll need to reduce your white shape to edge points; otherwise the hull algorithm will check far too many points. Reducing the number of points to be considered can be done using edge-finding on the connected component (the white "blob"), edge-finding without first segmenting foreground from background, or any of various simple kernels (e.g. Sobel).
Although the algorithm is called the "convex" hull, your shape doesn't have to be convex, especially if you're only interested in the top and bottom vertices/corners as shown in your sample image.
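As a concrete sketch of the whole idea with OpenCV (the file name, the OpenCV >= 4 findContours signature, and picking the two top-most and two bottom-most hull vertices are my assumptions about your use case):

```python
import cv2
import numpy as np

# Load the already-thresholded image (file name is hypothetical).
binary = cv2.imread("thresholded.png", cv2.IMREAD_GRAYSCALE)

# Boundary of the largest white blob, so the hull only sees edge points.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
blob = max(contours, key=cv2.contourArea)

# Convex hull vertices as (x, y) pixel coordinates.
hull = cv2.convexHull(blob).reshape(-1, 2)

# Two top-most and two bottom-most hull vertices (smallest / largest y).
order = np.argsort(hull[:, 1])
top_two = hull[order[:2]]
bottom_two = hull[order[-2:]]
```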
Corner finders can be a bit disappointing, frankly, especially since the name implies, "Hey, it'll just find corners all the time." There are some good ones out there, but you could spend a lot of time investigating all the alternatives. Even then you'll likely have to set thresholds, consider whether your application will yield the occasional weird result given the shape and scale of corners, and so on.
Although you mention wanting to find only the top and bottom points, if you wanted to find those two odd triangular outcroppings on the left side the corner-finding gets a little more complicated; using the convex hull keeps this very simple.
Although you want to find a robust solution to corner detection, preferably using a known algorithm for which performance can be understood easily, you also want to avoid overgeneralizing. In any case, review some list of corner detectors and see what strikes your fancy. If you see a promising algorithm that looks easy-ish to implement, why not try implementing it?
https://en.wikipedia.org/wiki/Corner_detection
I know how to do basic Canny edge detection using OpenCV. However, I need to discard all edges that do not fall within 15 degrees of a predetermined angle.
Any help would be greatly appreciated.
It's an old question, but here is the process you should use.
1. Start by filtering your source image (background subtraction, color, etc.).
2. Apply a generic edge detector or a steerable filter (or, if you want really good results and are doing this for research purposes, look at the Phase Stretch Transform algorithm).
3. Save those edge points in a vector (or whatever container you prefer).
4. Create a circle drawing algorithm (here is the main idea):
Your circle drawing algorithm (CDA from here on) will take every point returned by your edge filter.
For each point it will build circles of variable diameter in [Dmin, Dmax], based on the maximum/minimum distance you will accept for two points to be considered on the same line.
If no neighbouring pixel is present in the circle octant corresponding to your angle, simply dismiss the point.
Once you have the lines that match your angle, you can sort them by length to dismiss lines that are probably due to noise.
You should also note that there are other methods; this method has some good aspects:
1. It's robust against noise and low-quality images/video.
2. It's CUDA-friendly (i.e. easy to push into parallel processing).
3. It's fast and roughly more accurate than most basic line detectors.
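For the narrower behaviour actually asked about (dropping Canny edges more than 15 degrees from a given angle), a simpler starting point, separate from the CDA above, is to mask the Canny output with the local gradient orientation from Sobel derivatives. The thresholds below are placeholder values:

```python
import cv2
import numpy as np

def directional_canny(gray, target_deg, tol_deg=15.0, low=50, high=150):
    """Canny edges, keeping only pixels whose edge orientation lies
    within tol_deg of target_deg (thresholds are placeholders)."""
    edges = cv2.Canny(gray, low, high)

    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    grad = np.degrees(np.arctan2(gy, gx))

    # The edge runs perpendicular to the gradient; fold angles into [0, 180).
    edge_angle = (grad + 90.0) % 180.0
    diff = np.abs(edge_angle - (target_deg % 180.0))
    diff = np.minimum(diff, 180.0 - diff)          # wrap-around distance

    return np.where(diff <= tol_deg, edges, 0).astype(np.uint8)
```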
What is the hysteresis threshold, and why is it useful in edge detection?
I am trying to write an edge detection program in python, and it seems to work well without using hysteresis, but many sources include it. I was wondering why it would be useful.
If you just use a single threshold without hysteresis, then wherever the image is near that threshold you can get high and low transitions (spurious edges) very close to each other. What you likely want is to mark only genuine value transitions as edges. Hysteresis requires a sufficiently large change before switching from "edge" to "no edge" and back again, so small fluctuations around the threshold don't flip the decision. In Canny specifically, pixels above the high threshold seed strong edges, and pixels between the low and high thresholds are kept only if they connect to a strong edge.
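As a rough sketch, Canny-style hysteresis on a gradient-magnitude image can be implemented with scipy.ndimage; the two thresholds are placeholders to be tuned:

```python
import numpy as np
from scipy import ndimage

def hysteresis(magnitude, low, high):
    """Keep weak pixels (>= low) only where their connected region also
    contains at least one strong pixel (>= high)."""
    weak = magnitude >= low
    strong = magnitude >= high

    labels, n = ndimage.label(weak)
    # Which weak regions touch a strong pixel? Label 0 is the background.
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False
    return keep[labels]
```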
I have written a program in Python which automatically reads score sheets like this one
At the moment I am using the following basic strategy:
Deskew the image using ImageMagick
Read into Python using PIL, converting the image to B&W
Calculate the sums of pixels in the rows and the columns
Find peaks in these sums (see the sketch after this list)
Check the intersections implied by these peaks for fill.
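For reference, steps 3 and 4 could be sketched like this, assuming `binary` is a boolean NumPy array (True where there is ink) and `min_separation` is a rough guess at the grid spacing:

```python
import numpy as np
from scipy.signal import find_peaks

def grid_positions(binary, min_separation=20):
    """Project ink pixels onto rows/columns and take peaks as grid lines."""
    row_sums = binary.sum(axis=1)
    col_sums = binary.sum(axis=0)
    rows, _ = find_peaks(row_sums, distance=min_separation)
    cols, _ = find_peaks(col_sums, distance=min_separation)
    return rows, cols
```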
The result of running the program is shown in this image:
You can see the peak plots below and to the right of the image shown in the top left. The lines in the top left image are the positions of the columns and the red dots show the identified scores. The histogram bottom right shows the fill levels of each circle, and the classification line.
The problem with this method is that it requires careful tuning, and is sensitive to differences in scanning settings. Is there a more robust way of recognising the grid, which will require less a-priori information (at the moment I am using knowledge about how many dots there are) and is more robust to people drawing other shapes on the sheets? I believe it may be possible using a 2D Fourier Transform, but I'm not sure how.
I am using the EPD, so I have quite a few libraries at my disposal.
First of all, I find your initial method quite sound and I would have probably tried the same way (I especially appreciate the row/column projection followed by histogramming, which is an underrated method that is usually quite efficient in real applications).
However, since you want to go for a more robust processing pipeline, here is a proposal that can probably be fully automated (also removing at the same time the deskewing via ImageMagick):
Feature extraction: extract the circles via a generalized Hough transform. As suggested in other answers, you can use OpenCV's Python wrapper for that. The detector may miss some circles but this is not important.
Apply a robust alignment detector using the circle centers. You can use the Desolneux parameter-less detector described here. Don't be afraid of the math; the procedure is quite simple to implement (and you can find example implementations online).
Get rid of diagonal lines by a selection on the orientation.
Find the intersections of the lines to get the dots. You can use these coordinates for deskewing by assuming ideal fixed positions for these intersections.
This pipeline may be a bit CPU-intensive (especially step 2, which performs a kind of greedy search), but it should be quite robust and automatic.
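For step 1, a minimal sketch with OpenCV's Python wrapper might look like the following; the file name and every Hough parameter below are guesses to be tuned (OpenCV 3+ API):

```python
import cv2
import numpy as np

# Hypothetical file name; parameters must be tuned to the dot size/spacing.
gray = cv2.imread("sheet.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.medianBlur(gray, 5)

circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=15,
                           param1=100, param2=20, minRadius=5, maxRadius=15)
centers = []
if circles is not None:
    # circles has shape (1, N, 3): (x, y, radius) per detected circle.
    centers = np.around(circles[0, :, :2]).astype(int)
```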
The correct way to do this is to use connected component analysis on the image to segment it into "objects". Then you can use higher-level algorithms (e.g. a Hough transform on the component centroids) to detect the grid, and also determine for each cell whether it's on or off by looking at the number of active pixels it contains.
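A sketch of that route (the file name, the Otsu thresholding, and the OpenCV >= 3 API are my assumptions):

```python
import cv2

# Binarise the sheet so the dots become white "objects" on black.
gray = cv2.imread("sheet.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
# Label 0 is the background; keep (centroid, pixel count) for each object.
# The centroids feed the grid detection, the pixel counts the on/off decision.
dots = [(tuple(centroids[i]), int(stats[i, cv2.CC_STAT_AREA]))
        for i in range(1, n)]
```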
I know that this problem has been solved before, but I've had great difficulty finding any literature describing the algorithms used to process this sort of data. I'm essentially doing some edge finding on a set of 2D data. I want to be able to find a couple of points on an eye diagram (generally used to qualify high-speed communications systems), and as I have had no experience with image processing I am struggling to write efficient methods.
As you can probably see, these diagrams are so called because they resemble the human eye. They can vary a great deal in the thickness, slope, and noise, depending on the signal and the system under test. The measurements that are normally taken are jitter (the horizontal thickness of the crossing region) and eye height (measured at either some specified percentage of the width or the maximum possible point). I know this can best be done with image processing instead of a more linear approach, as my attempts so far take several seconds just to find the left side of the first crossing. Any ideas of how I should go about this in Python? I'm already using NumPy to do some of the processing.
Here's some example data; it is formatted as a 1D array with associated x-axis data. For this particular example, it should be split up every 666 points (2 * int((1.0 / 2.5e9) / 1.2e-12)), since the rate of the signal was 2.5 Gb/s, and the time between points was 1.2 ps.
Thanks!
Have you tried OpenCV (Open Computer Vision)? It's widely used and has a Python binding.
Not to be a PITA, but are you sure you wouldn't be better off with a numerical approach? All the tools I've seen for eye-diagram analysis go the numerical route; I haven't seen a single one that analyzes the image itself.
You say your algorithm is painfully slow on that dataset -- my next question would be why. Are you looking at an oversampled dataset? (I'm guessing you are.) And if so, have you tried decimating the signal first? That would at the very least give you fewer samples for your algorithm to wade through.
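Along those lines, here is a rough numerical sketch; the file name, the rail-splitting heuristic, and measuring the opening at its widest column are my assumptions rather than anything from your post:

```python
import numpy as np

# Segment length of 666 samples comes from the question.
segment_len = 666
trace = np.load("eye_trace.npy")          # hypothetical 1-D sample array

n_seg = len(trace) // segment_len
eye = trace[:n_seg * segment_len].reshape(n_seg, segment_len)

# At each time position, split samples into an upper and a lower rail about
# the global mean, and take the gap between them as a rough opening.
mid = eye.mean()
upper = np.where(eye > mid, eye, np.inf).min(axis=0)    # lowest "one" sample
lower = np.where(eye <= mid, eye, -np.inf).max(axis=0)  # highest "zero" sample
opening = upper - lower
opening[~np.isfinite(opening)] = 0.0      # columns missing one of the rails

eye_height = opening.max()                # widest vertical opening
eye_centre = int(opening.argmax())        # its position within the segment
```

If the data really are oversampled, you could also run scipy.signal.decimate on the trace first, as long as segment_len is scaled down by the same factor.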
Just going down your route for a moment: if you read those images into memory as they are, wouldn't it be pretty easy to do two flood fills (starting at the centre and at the middle of the left edge) that include all the "white" data? If the fill routine recorded the maximum and minimum height at each column, and the maximum horizontal extent, then you would have all you need.
In other words, I think you're over-thinking this. Edge detection is used in complex "natural" scenes where the edges are unclear. Here your edges are so completely obvious that you don't need to enhance them.
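A rough sketch of that flood-fill idea with scikit-image, assuming the diagram has been rasterised into a boolean array (True where a trace is drawn); only the centre fill is shown:

```python
import numpy as np
from skimage.segmentation import flood

def eye_extents(traces):
    """`traces`: boolean image of the eye diagram (True where a trace is drawn).
    Returns per-column top/bottom of the central opening and its width."""
    background = ~traces
    h, w = background.shape

    inner = flood(background, (h // 2, w // 2))     # fill from the image centre
    rows = np.arange(h)[:, None]
    top = np.where(inner, rows, h).min(axis=0)      # first open row per column
    bottom = np.where(inner, rows, -1).max(axis=0)  # last open row per column
    width = int(inner.any(axis=0).sum())            # horizontal extent
    return top, bottom, width
```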