I think what I would get from 1 in the picture with a Gaussian filter is 2. I want to get number 3, i.e. the derivative, but only its absolute value.
I basically want to filter out the gradient of the big triangle and just keep the irregularities (arrow in the image).
I could use a sine instead of the triangle as well; would this make it easier?
How can I implement this in Python or Halcon? What should I look into to get a better understanding of what I want and how to achieve it?
edit: I want to find the "defects" and get rid of the pattern
theory:
Real Image with real defects:
A Gaussian filter does not give you a derivative. It's a weighted average.
Your assumption that a Gaussian filter would turn input 1 into 2 is incorrect.
Just suppress the low frequencies of your background, with a notch filter for example.
Quick and dirty example:
Also see Find proper notch filter to remove pattern from image
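To illustrate the idea in Python/NumPy (not the original example; zeroing the lowest FFT bins is a crude stand-in for a tuned notch filter, and the signal is synthetic):

import numpy as np

# Synthetic 1D profile: slow triangle background plus one small irregularity
x = np.linspace(0, 4 * np.pi, 1024)
signal = np.abs((x % (2 * np.pi)) - np.pi)  # triangle background
signal[500:510] += 0.5                      # the "defect"

# Suppress the low-frequency background in the Fourier domain
spectrum = np.fft.rfft(signal)
spectrum[:5] = 0                            # zero the lowest bins (crude notch)
residual = np.fft.irfft(spectrum, n=signal.size)
# The defect now dominates abs(residual) and can be thresholded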
Another simple approach is to use a row-wise threshold or background subtraction if the background is always aligned like that
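A rough NumPy sketch of that background subtraction (synthetic data; sizes and the threshold are made up):

import numpy as np

rng = np.random.default_rng(0)
# Synthetic image: every row follows the same slow background ramp
img = np.tile(np.linspace(0, 200, 256), (256, 1)) + rng.normal(0, 2, (256, 256))
img[100:105, 120:125] += 40  # a small defect

# Estimate the aligned background as the per-column median over all rows,
# then look at the absolute residuals
background = np.median(img, axis=0, keepdims=True)
residual = np.abs(img - background)
defects = residual > 10  # threshold tuned to the data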
Agree with Piglet that if the frequency of your pattern is substantially lower than that of your defects, a notch filter is the tool of first choice.
Also agree that if you have multiple frames of calibrated fringe patterns, then you have an array of options available. Recent versions of Halcon have built-in deflectometry operators.
For quick-n-dirty, you could also exploit the general orientation of your pattern using rectangular kernels. This is equivalent to an orthotropic high-pass filter.
read_image (imgInput, 'C:/Users/jpeyton/Documents/zzz_temp/FringePat_raw.jpg')
* smooth input image with a mean filter using a vertically oriented rectangular kernel
mean_image (imgInput, imgMean, 3, 15)
* subtract smoothed image from raw image to get local / high-frequency residuals
abs_diff_image (imgMean, imgInput, imgAbsDiff, 1)
* threshold away the background
threshold (imgAbsDiff, Regions, 8, 255)
Smooth with the mean operator, using a vertically oriented kernel (3x15 in this case):
Subtract smoothed image from raw image and threshold:
From there, you can run a connection operator and use region features to further accentuate defects. You'll notice this approach doesn't provide as strong a signal for the lower frequency defects (dents?).
So the tradeoffs are: an FFT/DFT filter doesn't exploit the direction of the pattern and leaves behind edge/harmonic artifacts, while a high-pass filter approach (like the above) will not be sensitive to defects as they approach/exceed the fringe frequency.
I am trying to approximate different shapes of a weld bead geometry cross section in additive manufacturing with a graph or ideally (but not necessarily) a function. The regions are the outer shape as well as the individual layers. (see following images)
Therefore, I applied some pre-processing methods to extract the relevant pixels which represent the geometry of a weld bead which are shown as white pixels. (see third image)
I derived this image with Canny edge detection and, prior to that, multiple morphological operations such as closing, erosion, and dilation, and of course conversion to grey-scale.
The "noisy" areas are the transition areas between individual layers of metal and only show up in this way, so in general there is no "better" or "sharper" transition and thus no less "noise". Pictures 3 and 4 are an example of some of the image pre-processing methods I used.
My main approach to treat the inner geometry so far was to split up the image in several sub-images and perform least squares regression on each individual one by interpreting the white pixels as data points. Afterwards I've stitched all those little approximation functions back together to form the image of the original size. I've tried it with different sizes of those sub-images. (see pictures 5 and 6)
However, this approach produces jumps between the functions, as well as adjacent functions where the pixels (or data points, in my case) should really be approximated by a single function (see attached image). My next approach would be to use multivariate adaptive regression on the sub-images.
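For reference, each per-sub-image fit is essentially least squares on the white-pixel coordinates; a toy sketch (synthetic data, not my real code):

import numpy as np

# Toy stand-in for one sub-image: white-pixel coordinates as (x, y) points
x = np.arange(50, dtype=float)
y = 0.02 * x**2 + np.random.default_rng(0).normal(0, 1, x.size)

# Least squares polynomial fit for this sub-image
coeffs = np.polyfit(x, y, deg=2)
y_fit = np.polyval(coeffs, x)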
Thus, I'm asking if anybody knows a better solution for my problem, maybe even an approximation on a global scale without splitting the image into sub-images. The approximation does not need to be a polynomial function; piecewise linear but connected functions are totally sufficient. I would be thankful for any method that is capable of achieving what I want to do, be it a pure non-linear regression method or something else. Unfortunately I don't have many images (only 64), hence I don't think I can use an ANN. (Please correct me if I'm wrong.)
If you need to take a look at my code, just let me know. Thank you! :)
The best I could obtain was with bilateral filtering for denoising, then adaptive binarization.
And on a reduced image:
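A minimal OpenCV sketch of that pipeline (the file name and all parameter values are placeholders to tune):

import cv2

img = cv2.imread('weld_bead.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file
# Bilateral filter: smooths noise while keeping layer boundaries sharp
denoised = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
# Adaptive (local) binarization copes with uneven illumination
binary = cv2.adaptiveThreshold(denoised, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 5)
cv2.imwrite('binary.png', binary)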
Here's the deal. I want to create a mask that visualizes all the changes between two images (GeoTIFFs which are converted to 2D NumPy arrays).
For that I simply subtract the pixel values and normalize the absolute value of the subtraction:
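In essence (toy arrays standing in for my real data):

import numpy as np

rng = np.random.default_rng(0)
img1 = rng.uniform(0, 255, (100, 100)).astype(np.float32)  # stand-ins for the
img2 = rng.uniform(0, 255, (100, 100)).astype(np.float32)  # two GeoTIFF arrays

diff = np.abs(img1 - img2)
diff = diff / diff.max() * 255.0  # normalize the absolute difference to [0, 255]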
Since the result will be covered in noise, I use a threshold and remove all pixels with a value below a certain limit:
def threshold(array, threshold_limit):
    print("Threshold...")
    # Keep pixel values above the limit; everything else becomes 0
    result = (array > threshold_limit) * array
    return result
This works without a problem. Now comes the issue. When applying the threshold, outliers remain, which is not intended:
What is a good way to remove those outliers?
Sometimes the outliers are small chunks of pixels, like 5-6 pixels together; how could those be removed?
Additionally, the images I use are about 10000x10000 pixels.
I would appreciate all advice!
EDIT:
Both images are Landsat satellite images, covering the exact same area.
The difference here is that one image shows cloud coverage and the other one is free of clouds.
The bright snaking line in the top right is part of a river that has been covered by a cloud. Since water bodies like the ocean or rivers are depicted as black in those images, the difference between the bright cloud and the dark river results in the river showing a high degree of change.
I hope the following images make this clear:
Source TIFFs:
Subtraction result:
I also tried to smooth the result of the thresholding by using a median filter, but the result was still covered in outliers:
import numpy as np
from scipy.ndimage import median_filter

def filter(array, limit):
    print("Median-Filter...")
    # Median filter suppresses isolated outlier pixels; `limit` is the kernel size
    filteredImg = median_filter(array, size=limit).astype(np.float32)
    return filteredImg
I would suggest the following:
Before proceeding, please double-check that the two images are 100% registered. To check that, you should overlay them using e.g. different color channels. Even minimal registration errors can render your task impossible.
Smooth both input images slightly (before the subtraction). For that I would suggest you use standard implementations. Play around with the filter parameters to find an acceptable compromise between smoothness (or reduction of the graininess of source image 1) and resolution.
Then try to match the image statistics by applying histogram normalization, using the histogram of image 2 as a target for the histogram of image 1. For this you can also use e.g. the OpenCV implementation.
Subtract the images
If you then still observe obvious noise, look at the histogram of the subtraction result and see if you can relate the noise to intensity outliers. If you can clearly separate signal and noise based on intensity, apply again a thresholding (informed by your histogram). Alternatively (or additionally), if the noise is structurally different from your signal (e.g. clustered), you could look into morphological operations to remove it.
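If you go the morphological route, here is a small SciPy sketch (the structuring element and minimum cluster size are placeholders; the random mask stands in for your thresholded difference):

import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(0)
mask = rng.random((200, 200)) > 0.995  # scattered outlier pixels
mask[50:80, 50:80] = True              # one large, genuine change region

# Binary opening removes clusters smaller than the structuring element
opened = ndi.binary_opening(mask, structure=np.ones((3, 3)))
# Alternatively, drop connected components below a minimum pixel count
labels, num = ndi.label(mask)
sizes = ndi.sum(mask, labels, range(1, num + 1))
keep = np.isin(labels, np.nonzero(sizes >= 20)[0] + 1)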
I'm making some photo-editing tools in Python using PIL (Python Imaging Library), and I was trying to make a program which converts a photo to its 'painted' version.
I've managed to make a program which converts a photo into its distinct colours, but the problem is that the algorithm I'm using is operating on every pixel, meaning that the resulting image has very jagged differences between colours.
Ideally, I'd like to smoothen out these edges, but I don't know how!
I've checked out this site for some help, but the method there produces quite different results to what I need.
My Starting Image:
My Image with Distinct Colours:
I would like to smoothen the edges in the image above.
Results of using the method which doesn't quite work:
As you can see, using the technique doesn't smoothen the edges into natural-looking curves; instead it creates jagged edges.
I know I should provide sample output, but surprisingly, I haven't actually got it, so I'll describe it as best as I can. Simply put, I want to smoothen the edges between the different colours.
I've seen something called a Gaussian blur, but I'm not quite sure how to apply it here: the answers I've seen always mention some sort of threshold and are usually to do with binary images, so I don't think it applies here.
Edge enhancement does the opposite of edge smoothing, so this is certainly not the tool you should use.
Unfortunately, there is little that you can do because edge smoothing will indeed smoothen the jaggies, but it will also destroy the true edges, resulting in a blurred image. Edge-preserving smoothing is also a dead-end.
You should have a look at the methods to extract the "cartoon part" of an image. There is a lot of literature on this topic, though often pretty sophisticated.
You can enhance the quality of your "Image with Distinct Colours" by applying a median filter with a radius of 2:
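For instance with PIL (hypothetical file name; MedianFilter takes a kernel size, so size=5 corresponds to a radius of 2):

from PIL import Image, ImageFilter

img = Image.open('distinct_colours.png')  # hypothetical file name
smoothed = img.filter(ImageFilter.MedianFilter(size=5))
smoothed.save('smoothed.png')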
If you want to get "comic-like" dark edges, you can calculate the edges of the original image using a Sobel filter, convert the edge map to grayscale, then multiply the resulting edge map by 2, invert the map, and add each non-white pixel of the edge map to the original image. This will result in:
This is of course only a starting point as the result leaves much to be desired, but it should give you a good idea about the basic concept.
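One way to sketch that edge overlay in PIL, swapping the Sobel for PIL's built-in FIND_EDGES kernel (file names and the x2 gain are illustrative):

from PIL import Image, ImageChops, ImageFilter, ImageOps

img = Image.open('original.png').convert('RGB')  # hypothetical file name
# Edge map in grayscale, strengthened and then inverted so edges become dark
edges = img.filter(ImageFilter.FIND_EDGES).convert('L')
edges = edges.point(lambda p: min(255, p * 2))
edges = ImageOps.invert(edges)
# Multiplying leaves white edge-map pixels unchanged and darkens the rest
result = ImageChops.multiply(img, Image.merge('RGB', (edges, edges, edges)))
result.save('comic.png')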
I'm a newbie in computer vision. My goal is to distinguish individual cells on a set of pictures like this: Example
Basically, I blur the whole image, find the regional maxima on it, and use them as seeds for a watershed algorithm run on the distance transform of the thresholded blurred image. In fact I'm following the tutorial which you can find here:
github/luispedro/python-image-tutorial
(sorry, can't post more than 2 links).
My problem is that some cells in my set have a very distinguishable dark nucleus (which you can see in the example) and my algorithm produces results like this, which are clearly wrong.
Of course it's possible to fix it by increasing the strength of the Gaussian blur, but it will merge some other cells together, which is even worse.
What can be done to solve this problem? What are other possibilities if watershed just isn't suitable for this case (keeping in mind that my set is pretty small and learning seems impossible)?
The watershed tends to over-segment if you don't use a watershed with markers.
Usually, we start with DNA/DAPI segmentation that is easy, and it provides the number of cells and the inner markers for the watershed.
If you blur the images, you smooth all the patterns. You should use an alternate sequential filter (opening / closing) in order to simplify each zone, and then try an ultimate erosion in order to find the inner seeds for your watershed.
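A rough scikit-image sketch along those lines (file name, filter sizes, and peak distance are placeholders; openings/closings of increasing size approximate an alternate sequential filter):

import numpy as np
from scipy import ndimage as ndi
from skimage import io, filters, morphology, feature, segmentation

img = io.imread('cells.png', as_gray=True)  # hypothetical file name

# Alternate sequential filter: openings/closings of increasing size
# simplify the texture (e.g. dark nuclei) inside each cell
simplified = img
for r in (1, 2, 3):
    simplified = morphology.opening(simplified, morphology.disk(r))
    simplified = morphology.closing(simplified, morphology.disk(r))

# Threshold, distance transform, and seeds from the distance-map maxima
binary = simplified > filters.threshold_otsu(simplified)
distance = ndi.distance_transform_edt(binary)
coords = feature.peak_local_max(distance, min_distance=10)
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

# Marker-controlled watershed avoids the over-segmentation
labels = segmentation.watershed(-distance, markers, mask=binary)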
I have written a program in Python which automatically reads score sheets like this one
At the moment I am using the following basic strategy:
Deskew the image using ImageMagick
Read into Python using PIL, converting the image to B&W
Calculate the sums of pixels in the rows and the columns
Find peaks in these sums
Check the intersections implied by these peaks for fill.
The result of running the program is shown in this image:
You can see the peak plots below and to the right of the image shown in the top left. The lines in the top left image are the positions of the columns and the red dots show the identified scores. The histogram bottom right shows the fill levels of each circle, and the classification line.
The problem with this method is that it requires careful tuning and is sensitive to differences in scanning settings. Is there a more robust way of recognising the grid, one which requires less a priori information (at the moment I am using knowledge about how many dots there are) and is more robust to people drawing other shapes on the sheets? I believe it may be possible using a 2D Fourier transform, but I'm not sure how.
I am using the EPD, so I have quite a few libraries at my disposal.
First of all, I find your initial method quite sound and I would probably have tried it the same way (I especially appreciate the row/column projection followed by histogramming, which is an underrated method that is usually quite efficient in real applications).
However, since you want to go for a more robust processing pipeline, here is a proposal that can probably be fully automated (also removing at the same time the deskewing via ImageMagick):
Feature extraction: extract the circles via a generalized Hough transform. As suggested in other answers, you can use OpenCV's Python wrapper for that (see the sketch below). The detector may miss some circles, but this is not important.
Apply a robust alignment detector using the circle centers. You can use the Desolneux parameter-less detector described here. Don't be put off by the math; the procedure is quite simple to implement (and you can find example implementations online).
Get rid of diagonal lines by a selection on the orientation.
Find the intersections of the lines to get the dots. You can use these coordinates for deskewing by assuming ideal fixed positions for these intersections.
This pipeline may be a bit CPU-intensive (especially step 2 that will proceed to some kind of greedy search), but it should be quite robust and automatic.
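For step 1, a minimal OpenCV circle-detection sketch (the file name and all Hough parameters are placeholders that will need tuning to your scans):

import cv2
import numpy as np

img = cv2.imread('score_sheet.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file
img = cv2.medianBlur(img, 5)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=20)
if circles is not None:
    centers = np.round(circles[0, :, :2]).astype(int)  # (x, y) of each circle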
The correct way to do this is to use connected component analysis on the image, to segment it into "objects". Then you can use higher-level algorithms (e.g. a Hough transform on the component centroids) to detect the grid, and also determine for each cell whether it's on/off by looking at the number of active pixels it contains.
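A short SciPy sketch of that idea (the file name and the dark/bright threshold are assumptions):

import numpy as np
from scipy import ndimage as ndi
from PIL import Image

sheet = np.array(Image.open('score_sheet.png').convert('L'))  # hypothetical file
binary = sheet < 128  # dots are dark on a light background

# Label connected components ("objects") and measure them
labels, num = ndi.label(binary)
centroids = ndi.center_of_mass(binary, labels, range(1, num + 1))
sizes = ndi.sum(binary, labels, range(1, num + 1))
# Feed `centroids` to the grid detection; use `sizes` to classify fill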