Hysteresis in Edge Detection - python

What is the hysteresis threshold, and why is it useful in edge detection?
I am trying to write an edge detection program in python, and it seems to work well without using hysteresis, but many sources include it. I was wondering why it would be useful.

If you use a single threshold without hysteresis, then wherever the image response hovers near that threshold you can get low and high transitions (spurious edges) very close to each other. What you likely want is a genuine change in value before an edge is recognized. The hysteresis band requires a set amount of change before switching from a high (edge) state to a low (no edge) state and the other way around; in Canny-style detectors this takes the form of two thresholds, where weak edge pixels are kept only if they connect to strong ones.
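For a concrete picture, here is a minimal sketch of the double-threshold form of hysteresis used by Canny-style detectors, written with NumPy and SciPy: pixels above the high threshold are kept outright, and pixels between the two thresholds are kept only if they belong to a connected region that touches a strong pixel. The array name grad_mag and the threshold values are placeholders:

    import numpy as np
    from scipy import ndimage

    def hysteresis_threshold(grad_mag, low, high):
        """Keep weak pixels (>= low) only if they connect to a strong pixel (>= high)."""
        strong = grad_mag >= high
        candidates = grad_mag >= low            # weak or strong pixels
        labels, n = ndimage.label(candidates)   # connected regions of candidates
        if n == 0:
            return np.zeros_like(grad_mag, dtype=bool)
        keep = np.zeros(n + 1, dtype=bool)      # one flag per region label
        keep[np.unique(labels[strong])] = True  # regions containing a strong pixel
        keep[0] = False                         # label 0 is the background
        return keep[labels]

    # hypothetical usage with made-up thresholds:
    # edges = hysteresis_threshold(grad_mag, low=20, high=60)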

Related

Is there any way to detect an edge in this situation?

I am trying to detect edges using Python.
There are hundreds of algorithms for edge detection; however, my image is very obscure and unclear. The most serious problem is that one edge sits at a local maximum of the values, but the other edge sits slightly shifted away from a local maximum. On closer examination, I found that the other edge lies at one of the inflection points of the original values. I have depicted this as a simplified situation.
Is there any simple and elegant way to detect edges in various situations?
No, there is no simple and beautiful method to detect edges. This is an ill-posed problem. In particular, there is no absolute criterion to tell signal from noise.
A not-so-bad method is to consider the peaks of the derivative, provided they correspond to a sufficiently high step in the signal.
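As a rough illustration of that idea in 1-D, the sketch below smooths the signal, takes the derivative, and keeps only peaks above a minimum step size; the signal name, smoothing width, and min_step value are assumptions you would tune for your data:

    import numpy as np
    from scipy.signal import find_peaks

    def edges_from_derivative(signal, min_step=0.15, smooth=5):
        """Return indices where the (smoothed) derivative has a sufficiently tall peak."""
        kernel = np.ones(smooth) / smooth                    # light smoothing against noise
        smoothed = np.convolve(signal, kernel, mode="same")
        deriv = np.abs(np.diff(smoothed))                    # per-sample change
        peaks, _ = find_peaks(deriv, height=min_step)        # only large enough steps
        return peaks

    # hypothetical usage:
    # peaks = edges_from_derivative(signal, min_step=0.15)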

Python - Detecting desired corners of an image

I need help coding a Python algorithm capable of detecting the corners of an image. I have a thresholded image so far, and I was using cornerHarris from OpenCV to detect all the corners. My problem is filtering all those points to output only the ones I want. Maybe I can do a loop to achieve this?
In my case, I want the two lowest and the two highest corner points. My main interest is to obtain the pixel coordinates of these corners. You can see an example of an image I'm processing here:
In this image I drew the corner points I'm interested in.
There are several ways to solve this problem. In real-world applications it's rare (that is, actually never occurs) that you need to solve a problem once for a single image. If you have additional images it would be nice to see how much the object of interest varies.
One method to find corners is the convex hull. This method is more generally used to find a convex shape encompassing scattered points, but it's worth knowing about and implementing.
https://en.wikipedia.org/wiki/Convex_hull
What's handy about the convex hull is that the concept of a "corner" (a vertex on the convex hull polygon) is easy to grasp and doesn't rely on parameter settings. You don't have to consider whether a corner is sharp enough, strong enough, pointy enough, unique in its neighborhood, etc.--the convex hull will simply make sense to you.
You should be able to write a functional version of a convex hull "gift wrapping" algorithm in a reasonable period of time.
https://en.wikipedia.org/wiki/Gift_wrapping_algorithm
There are many ways to compute the convex hull, but don't get lost in all the different methods. Choose one that makes sense to you and implement it. The fastest known method may still be Seidel, but don't even think about running down that rabbit hole. Simple is good.
Before you compute the convex hull, you'll need to reduce your white shape to edge points; otherwise the hull algorithm will check far too many points. Reducing the number of points to be considered can be done using edge-finding on the connected component (the white "blob"), edge-finding without first segmenting foreground from background, or any of various simple kernels (e.g. Sobel).
Although the algorithm is called the "convex" hull, your shape doesn't have to be convex, especially if you're only interested in the top and bottom vertices/corners as shown in your sample image.
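As a rough sketch of how this could look with OpenCV (sticking with the library the question already uses), the snippet below assumes a binary image named thresh containing a single white blob, uses the OpenCV 4.x findContours return signature, and picks the "two highest / two lowest" hull vertices simply by sorting on the y coordinate:

    import cv2
    import numpy as np

    def top_and_bottom_corners(thresh):
        # Work on boundary points only, so the hull has far fewer points to check.
        contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        blob = max(contours, key=cv2.contourArea)       # largest white blob
        hull = cv2.convexHull(blob).reshape(-1, 2)      # hull vertices as (x, y) rows
        by_y = hull[np.argsort(hull[:, 1])]             # sort vertices by row (y)
        return by_y[:2], by_y[-2:]                      # two highest, two lowest points

    # hypothetical usage:
    # img = cv2.imread("blob.png", cv2.IMREAD_GRAYSCALE)
    # _, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
    # top_two, bottom_two = top_and_bottom_corners(thresh)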
Corner finders can be a bit disappointing, frankly, especially since the name implies, "Hey, it'll just find corners all the time." There are some good ones out there, but you could spend a lot of time investigating all the alternatives. Even then you'll likely have to set thresholds, consider whether your application will yield the occasional weird result given the shape and scale of corners, and so on.
Although you mention wanting to find only the top and bottom points, if you wanted to find those two odd triangular outcroppings on the left side the corner-finding gets a little more complicated; using the convex hull keeps this very simple.
Although you want to find a robust solution to corner detection, preferably using a known algorithm for which performance can be understood easily, you also want to avoid overgeneralizing. In any case, review some list of corner detectors and see what strikes your fancy. If you see a promising algorithm that looks easy-ish to implement, why not try implementing it?
https://en.wikipedia.org/wiki/Corner_detection

How to create steerable Edge Detection filters using Python or discard edges that don't conform to desired angle

I know how to do basic Canny edge detection using OpenCV. However I need to discard all edges that do not fall within 15 degrees of a predetermined angle.
Any help would be greatly appreciated.
It's an old question, but here is the process you should use.
1] Start by filtering your source image (background subtraction, color filtering, etc.).
2] Apply a generic edge detector or a steerable filter (or, if you want really good results and are doing it for research purposes, look into the Phase Stretch Transform algorithm).
3] Save those edge points in a vector (or whatever container you prefer).
4] Create a circle drawing algorithm; here is the main idea:
Your circle drawing algorithm (CDA from here on) will take every point returned by your edge filter.
For each point it will build circles of variable diameter [Dmin; Dmax] based on the maximum/minimum distance you can accept for two points to be considered on the same line.
If no next pixel is present in the circle octant corresponding to your angle, simply dismiss the point.
Once you have the lines that match your angle, you can sort them by length to dismiss lines that are probably due to noise.
You should also note that there are other methods; this one has some good aspects:
1- It's robust against noise and low-quality images/video.
2- It's CUDA compliant (i.e. easy to push into parallel processing).
3- It's fast and roughly more accurate than most basic line detectors.
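If all you need is to discard Canny edges outside a 15-degree band, a simpler (if cruder) alternative to the circle-drawing scheme above is to mask edge pixels by their local gradient orientation. The sketch below is illustrative only; the file name, target angle, and Canny thresholds are placeholders:

    import cv2
    import numpy as np

    def directional_canny(gray, target_deg, tol_deg=15.0, lo=50, hi=150):
        edges = cv2.Canny(gray, lo, hi)
        # Gradient orientation; the edge itself runs perpendicular to the gradient.
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
        edge_dir = (np.degrees(np.arctan2(gy, gx)) + 90.0) % 180.0
        # Angular distance on a 180-degree circle (a line has no sign).
        diff = np.abs(edge_dir - (target_deg % 180.0))
        diff = np.minimum(diff, 180.0 - diff)
        return np.where(diff <= tol_deg, edges, 0).astype(np.uint8)

    # hypothetical usage:
    # gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
    # kept = directional_canny(gray, target_deg=45)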

How to calculate the depth of an object in an image captured by an Android camera

I have an image captured by an Android camera. Is it possible to calculate the depth of an object in the image? The image contains an object and background only. Any suggestions, explanations, or links that you think could help will be appreciated.
OpenCV is the library you need.
I did some depth identification of water levels against a pure white background a few days ago. Generally, if you want to identify depth, you can convert the question into identifying the edge where the colors change. In this case, you can convert the color pictures to greyscale and identify the changes at the white-black-grey interface. OpenCV is capable of doing the job at high speed.
Hope it helps. Let me know if you need further help.
Edits:
If you want to find actual depths, you need to project the coordinate system of your pictures onto the real world, or vice versa. To do that, you have to know a fixed location as your reference and the relationship between pixels and real distances.
What I did was find the fixed location and set it as zero. Afterwards, I measured the real length of an object and counted how many pixels it spans in the picture, which gave me the relationship between pixels and real distances.
Note that these procedures may involve errors in the identification. I did it very carefully and the error was acceptable in my case.
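For illustration, that calibration boils down to a single scale factor; the reference lengths below are made-up numbers:

    REFERENCE_LENGTH_MM = 100.0   # known real-world length of a reference object
    REFERENCE_LENGTH_PX = 250.0   # the same object's length measured in pixels

    mm_per_px = REFERENCE_LENGTH_MM / REFERENCE_LENGTH_PX   # 0.4 mm per pixel

    def pixels_to_mm(distance_px):
        """Convert a distance measured in the image into real-world millimetres."""
        return distance_px * mm_per_px

    # e.g. a feature found 310 px below the zero reference line:
    # pixels_to_mm(310)  # -> 124.0 mm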
With only one image, accurate depth estimation is nearly impossible. However, there are various methods of estimating depth under certain assumptions or with the camera calibration matrix available. As mentioned by @WenlongLiu, OpenCV is a very good place to start.

Efficient 2D edge detection in Python

I know that this problem has been solved before, but I've had great difficulty finding any literature describing the algorithms used to process this sort of data. I'm essentially doing some edge finding on a set of 2D data. I want to be able to find a couple of points on an eye diagram (generally used to qualify high-speed communications systems), and as I have had no experience with image processing I am struggling to write efficient methods.
As you can probably see, these diagrams are so called because they resemble the human eye. They can vary a great deal in the thickness, slope, and noise, depending on the signal and the system under test. The measurements that are normally taken are jitter (the horizontal thickness of the crossing region) and eye height (measured at either some specified percentage of the width or the maximum possible point). I know this can best be done with image processing instead of a more linear approach, as my attempts so far take several seconds just to find the left side of the first crossing. Any ideas of how I should go about this in Python? I'm already using NumPy to do some of the processing.
Here's some example data; it is formatted as a 1D array with associated x-axis data. For this particular example, it should be split up every 666 points (2 * int((1.0 / 2.5e9) / 1.2e-12)), since the rate of the signal was 2.5 Gb/s and the time between points was 1.2 ps.
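For clarity, assuming the capture is already a NumPy array (called samples here), splitting it into overlaid segments would look something like this (the segment length is just the number computed above):

    import numpy as np

    SAMPLES_PER_SEGMENT = 2 * int((1.0 / 2.5e9) / 1.2e-12)   # = 666

    def fold_into_eye(samples):
        """Reshape the long 1-D capture into one row per overlaid eye trace."""
        n_segments = len(samples) // SAMPLES_PER_SEGMENT
        trimmed = samples[: n_segments * SAMPLES_PER_SEGMENT]
        return trimmed.reshape(n_segments, SAMPLES_PER_SEGMENT)

    # eye = fold_into_eye(samples)
    # eye.min(axis=0) and eye.max(axis=0) then give the lower/upper envelope per column.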
Thanks!
Have you tried OpenCV (Open Source Computer Vision)? It's widely used and has Python bindings.
Not to be a PITA, but are you sure you wouldn't be better off with a numerical approach? All the tools I've seen for eye-diagram analysis go the numerical route; I haven't seen a single one that analyzes the image itself.
You say your algorithm is painfully slow on that dataset -- my next question would be why. Are you looking at an oversampled dataset? (I'm guessing you are.) And if so, have you tried decimating the signal first? That would at the very least give you fewer samples for your algorithm to wade through.
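For instance, a single decimation pass with SciPy would shrink the dataset before any edge-finding; the factor of 8 below is an arbitrary example, not a recommendation:

    from scipy.signal import decimate

    def reduce_samples(samples, factor=8):
        """Low-pass filter and downsample, so later passes touch far fewer points."""
        return decimate(samples, factor, zero_phase=True)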
Just going down your route for a moment: if you read those images into memory as they are, wouldn't it be pretty easy to do two flood fills (starting at the centre and at the middle of the left edge) that include all "white" data? If the fill routine recorded the maximum and minimum height at each column, and the maximum horizontal extent, then you would have all you need.
In other words, I think you're over-thinking this. Edge detection is used in complex "natural" scenes where the edges are unclear. Here your edges are so completely obvious that you don't need to enhance them.
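A rough sketch of that idea, assuming the eye diagram has been rendered into a 2-D NumPy array called img in which the open regions are bright; the threshold and the centre seed are guesses that would need adjusting, and the seed must land inside the eye opening:

    import numpy as np
    from scipy import ndimage

    def eye_opening_extents(img, seed=None, white_level=200):
        """Flood-fill-style measurement: per-column top/bottom of the open region."""
        white = img >= white_level
        labels, _ = ndimage.label(white)                   # connected bright regions
        if seed is None:
            seed = (img.shape[0] // 2, img.shape[1] // 2)  # centre of the eye opening
        region = labels == labels[seed]                    # the region the seed falls in
        rows, cols = np.nonzero(region)
        col_top = np.full(img.shape[1], np.nan)
        col_bot = np.full(img.shape[1], np.nan)
        for c in np.unique(cols):
            in_col = rows[cols == c]
            col_top[c], col_bot[c] = in_col.min(), in_col.max()
        # Vertical extent per column plus the region's horizontal extent.
        return col_top, col_bot, (cols.min(), cols.max())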
