How to remove repetitive pattern from an image using FFT - python

I have an image of skin colour with a repetitive pattern (horizontal white lines) generated by a scanner that uses a line of sensors to perceive the photo.
My question is how to denoise the image effectively using FFT without affecting the image quality much. Somebody told me that I have to suppress the lines that appear in the magnitude spectrum manually, but I don't know how to do that. Can you please tell me how to do it?
My approach is to use the Fast Fourier Transform (FFT) to denoise the image channel by channel.
I have tried an HPF and an LPF in the Fourier domain, but the results were not good, as you can see:
My Code:
from skimage.io import imread, imsave
from matplotlib import pyplot as plt
import numpy as np
img = imread('skin.jpg')
R = img[...,0]  # skimage.io.imread returns RGB, so channel 0 is red
G = img[...,1]
B = img[...,2]
f1 = np.fft.fft2(R)
fshift1 = np.fft.fftshift(f1)
phase_spectrumR = np.angle(fshift1)
magnitude_spectrumR = 20*np.log(np.abs(fshift1))
f2 = np.fft.fft2(G)
fshift2 = np.fft.fftshift(f2)
phase_spectrumG = np.angle(fshift2)
magnitude_spectrumG = 20*np.log(np.abs(fshift2))
f3 = np.fft.fft2(B)
fshift3 = np.fft.fftshift(f3)
phase_spectrumB = np.angle(fshift3)
magnitude_spectrumB = 20*np.log(np.abs(fshift3))
#===============================
# LPF: keep only the central quarter of the shifted spectrum.
# (For the HPF variant, start from magR = fshift1 and set the central block to 0 instead.)
magR = np.zeros_like(R)
magR[magR.shape[0]//4:3*magR.shape[0]//4,
     magR.shape[1]//4:3*magR.shape[1]//4] = np.abs(
         fshift1[magR.shape[0]//4:3*magR.shape[0]//4,
                 magR.shape[1]//4:3*magR.shape[1]//4])
resR = np.abs(np.fft.ifft2(np.fft.ifftshift(magR)))
resR = R - resR
#===============================
plt.subplot(221)
plt.imshow(R, cmap='gray')
plt.title('Original')
plt.subplot(222)
plt.imshow(magnitude_spectrumR, cmap='gray')
plt.title('Magnitude Spectrum')
plt.subplot(223)
plt.imshow(phase_spectrumR, cmap='gray')
plt.title('Phase Spectrum')
plt.subplot(224)
plt.imshow(resR, cmap='gray')
plt.title('Processed')
plt.show()
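For reference, the "suppress the lines in the magnitude spectrum manually" suggestion mentioned in the question is usually implemented as a notch filter: zero out a narrow strip of the shifted spectrum along the vertical frequency axis (where the energy of horizontal line artifacts concentrates), while sparing the region around DC. Below is only a minimal sketch for a single channel; the strip half-width and the DC-protection radius are assumptions to tune:
import numpy as np

def notch_filter_channel(channel, half_width=2, dc_protect=10):
    """Suppress horizontal-line artifacts by zeroing a vertical strip of
    the shifted spectrum, except near the DC component."""
    F = np.fft.fftshift(np.fft.fft2(channel.astype(float)))
    rows, cols = channel.shape
    crow, ccol = rows // 2, cols // 2
    mask = np.ones((rows, cols))
    strip = slice(ccol - half_width, ccol + half_width + 1)
    mask[:crow - dc_protect, strip] = 0      # above the DC component
    mask[crow + dc_protect + 1:, strip] = 0  # below the DC component
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))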

Here is a simple and effective linear filtering strategy to remove the horizontal line artifact:
Outline:
Estimate the frequency of the distortion by looking for a peak in the image's power spectrum in the vertical dimension. The function scipy.signal.welch is useful for this.
Design two filters: a highpass filter with cutoff just below the distortion frequency and a lowpass filter with cutoff near DC. We'll apply the highpass filter vertically and the lowpass filter horizontally to try to isolate the distortion. We'll use scipy.signal.firwin to design these filters, though there are many ways this could be done.
Compute the restored image as "image − (hpf ⊗ lpf) ∗ image".
Code:
# Copyright 2021 Google LLC.
# SPDX-License-Identifier: Apache-2.0
import numpy as np
from scipy.ndimage import convolve1d
from scipy.signal import firwin, welch


def remove_lines(image, distortion_freq=None, num_taps=65, eps=0.025):
  """Removes horizontal line artifacts from scanned image.

  Args:
    image: 2D or 3D array.
    distortion_freq: Float, distortion frequency in cycles/pixel, or
      `None` to estimate from spectrum.
    num_taps: Integer, number of filter taps to use in each dimension.
    eps: Small positive param to adjust filters cutoffs (cycles/pixel).

  Returns:
    Denoised image.
  """
  image = np.asarray(image, float)
  if distortion_freq is None:
    distortion_freq = estimate_distortion_freq(image)

  hpf = firwin(num_taps, distortion_freq - eps, pass_zero='highpass', fs=1)
  lpf = firwin(num_taps, eps, pass_zero='lowpass', fs=1)
  return image - convolve1d(convolve1d(image, hpf, axis=0), lpf, axis=1)


def estimate_distortion_freq(image, min_frequency=1/25):
  """Estimates distortion frequency as spectral peak in vertical dim."""
  f, pxx = welch(np.reshape(image, (len(image), -1), 'C').sum(axis=1))
  pxx[f < min_frequency] = 0.0
  return f[pxx.argmax()]
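A minimal usage sketch (the filename is a placeholder; remove_lines works on a colour image directly, since the filters are applied along axes 0 and 1):
from skimage.io import imread, imsave
import numpy as np

img = imread('skin.jpg')   # placeholder filename
clean = remove_lines(img)  # distortion frequency estimated automatically
imsave('skin_clean.jpg', np.clip(clean, 0, 255).astype(np.uint8))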
Examples:
On the portrait image, estimate_distortion_freq estimates that the frequency of the distortion is 0.1094 cycles/pixel (period of 9.14 pixels). The transfer function of the filtering "image − (hpf ⊗ lpf) ∗ image" looks like this:
Here is the filtered output from remove_lines:
On the skin image, estimate_distortion_freq estimates that the frequency of the distortion is 0.08333 cycles/pixel (period of 12.0 pixels). Filtered output from remove_lines:
The distortion is mostly removed in both examples. It isn't perfect: on the portrait image, a couple of ripples are still visible near the top and bottom borders, a typical defect when using large filters or Fourier methods. Still, it's a good improvement over the original images.

Related

fitting ellipse in images with poor contrast

I am working on image processing in Python, on the topic of underwater photogrammetry. My goal is to fit an ellipse to fiducial markers, and retrieve its a) center, b) axes and c) orientation.
My markers are
radial,
white on black background, and some have a
binary code:
An ML model delivers a small image snippet for each marker in each image, containing only the center of the marker.
So far, I've implemented these approaches:
Using openCV:
a) Thresholding, which results in a binary image (cv2.threshold)
b) Find Contours (cv2.findContours)
c) fit ellipse (cv2.fitEllipse)
Using Scikit:
a) Detect edge (using Canny)
b) Apply Hough transform
Star operator (work in progress)
a) Estimate ellipse center
b) Send 360 rays in all directions
c) Build an array, comprising coordinates of the largest gradient on each ray
d) Calculate best-fit ellipse using the least-squares method
e) Use the new center to repeat process (possibly several iterations required)
I perform these methods for each colour channel separately. So far, the results between channels differ by several pixels for the ellipse center.
Do you have any suggestions on what pre-processing methods I should use prior to detecting/fitting the ellipse?
Any thoughts on which of the above methods will lead to the most accurate results?
This is amazing! Thank you. I just started to read about moments (e.g. https://www.pythonpool.com/opencv-moments/) and inertia.
However, there is a challenge applying your code to this example:
As you can see, the image was poorly cropped, and the inertia of the image is more in the image center than in the center of the expected ellipse.
My first attempt to fix this is to binarize the image first:
import cv2
T = int(cv2.mean(image)[0])
ret,image = cv2.threshold(image,T,255,0)
Is that a reasonable approach? I fear that the binarization will have an unwanted impact on the moments of inertia. Thank you for clarifying.
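For what it's worth, a hedged variant of that binarization uses Otsu's method, which picks the threshold from the histogram automatically instead of using the image mean (assuming image is an 8-bit single-channel array):
import cv2

# Otsu's method chooses the threshold T automatically; T is returned for reference
T, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)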
This code finds the center of mass of the image, and the main axis of symmetry by calculating the moments of inertia.
I tried many libraries that calculate moments of inertia of images, but they give strange results (like a 4x4 matrix for what should be a 2x2 inertia matrix).
Also, ndimage.measurements.center_of_mass() appears to return (Cy, Cx), i.e. (row, column).
So I resorted to manually calculating the moments of inertia:
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image as Pim
from io import BytesIO
import requests

photoURL = "https://i.stack.imgur.com/EcLYk.png"
response = requests.get(photoURL)
image = np.array(Pim.open(BytesIO(response.content)).convert('L'))  # convert to greyscale
plt.imshow(image)

# Calculate eigenvectors = main axes of inertia
# xCoord, yCoord are the column and row numbers in the image
xCoord, yCoord = np.meshgrid(np.arange(image.shape[1]), np.arange(image.shape[0]))
# mass M is the total sum of the image
M = np.sum(image)
# Cx, Cy are the coordinates of the center of mass
# Cx = sum(xCoord * image) / sum(image)
Cx = np.einsum('ij,ij', xCoord, image) / M
Cy = np.einsum('ij,ij', yCoord, image) / M
# Ixx is the second-order moment of the image about the horizontal axis
# through the center of mass: Ixx = sum(image * y^2)
Ixx = np.einsum('ij,ij,ij', yCoord - Cy, yCoord - Cy, image)
# Iyy is the second-order moment of the image about the vertical axis
# through the center of mass: Iyy = sum(image * x^2)
Iyy = np.einsum('ij,ij,ij', xCoord - Cx, xCoord - Cx, image)
# Ixy is the product moment about both axes through the center of mass:
# Ixy = sum(image * x * y)
Ixy = np.einsum('ij,ij,ij', xCoord - Cx, yCoord - Cy, image)
inertiaMatrix = np.array([[Ixx, Ixy],
                          [Ixy, Iyy]])
eigValues, eigVectors = np.linalg.eig(inertiaMatrix)

# Plot center of mass
plt.scatter(Cx, Cy, c='r')
# Plot eigenvectors from the center of mass in the direction of the eigenvectors
plt.quiver(Cx, Cy, eigVectors[0, 0], eigVectors[1, 0], color='r', scale=2)
plt.quiver(Cx, Cy, eigVectors[0, 1], eigVectors[1, 1], color='r', scale=2)
plt.show()
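As a follow-up sketch (reusing M, Cx, Cy, Ixx, Iyy, Ixy from the snippet above), the second-order central moments can be turned into equivalent-ellipse parameters, using the standard relation that the semi-axes are 2*sqrt(eigenvalues) of the normalized covariance matrix. Treat this as an approximation, since the image intensity is used as a density:
import numpy as np

# Covariance of the intensity distribution; note Ixx above is the y-variance
# and Iyy the x-variance, so they swap places here
cov = np.array([[Iyy, Ixy],
                [Ixy, Ixx]]) / M

eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
minor, major = 2.0 * np.sqrt(eigvals)      # semi-axes of the equivalent ellipse
angle = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))  # orientation of major axis

print(f"center=({Cx:.1f}, {Cy:.1f}), semi-axes=({major:.1f}, {minor:.1f}), angle={angle:.1f} deg")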

Sliding window on an image to calculate variance of pixels in that window

I am trying to build a function that uses a sliding window over an image, calculates the variance of the pixels in the window, and returns a bounding box where the most variance is observed.
I'm new to coding and I've tried the solutions from this post, but I don't know how to pass an image to them instead of an array.
I'm on a deadline and have been trying this for a while, so any help is much appreciated. TIA
Edit: Also, if someone could help me with how to call the rolling_window_lastaxis function and modify it for what I'm trying to do, it would mean a lot.
Here is one way to compute the sliding window variance (or standard deviation) using Python/OpenCV/Skimage.
This approach makes use of the following form for computing the variance (see https://en.wikipedia.org/wiki/Variance):
Variance = mean of square of image - square of mean of image
However, since the variance will be outside the 8-bit range, we take the square root to form the standard deviation.
I also use the (local) mean filter from the Skimage rank filter module.
Input:
import cv2
import numpy as np
from skimage.morphology import rectangle
import skimage.filters as filters
# Variance = mean of square of image - square of mean of image
# see https://en.wikipedia.org/wiki/Variance
# read the image
# convert to 16-bits grayscale since mean filter below is limited
# to single channel 8 or 16-bits, not float
# and variance will be larger than 8-bit range
img = cv2.imread('lena.png', cv2.IMREAD_GRAYSCALE).astype(np.uint16)
# compute square of image
img_sq = cv2.multiply(img, img)
# compute local mean in 5x5 rectangular region of each image
# note: python will give warning about slower performance when processing 16-bit images
region = rectangle(5,5)
mean_img = filters.rank.mean(img, selem=region)
mean_img_sq = filters.rank.mean(img_sq, selem=region)
# compute square of local mean of img
sq_mean_img = cv2.multiply(mean_img, mean_img)
# compute variance using float versions of images
var = cv2.add(mean_img_sq.astype(np.float32), -sq_mean_img.astype(np.float32))
# compute standard deviation and convert to 8-bit format
std = cv2.sqrt(var).clip(0,255).astype(np.uint8)
# save results
# multiply by 2 to make brighter as an example
cv2.imwrite('lena_std.png',2*std)
# show results
# multiply by 2 to make brighter as an example
cv2.imshow('std', 2*std)
cv2.waitKey(0)
cv2.destroyAllWindows()
Local Standard Deviation Image for 5x5 Sliding Window:
ADDITION
Here is a version that finds the bounding box with the maximum average variance (for the given bounding box size) and draws it on the variance image (actually standard deviation).
import cv2
import numpy as np
from skimage.morphology import rectangle
import skimage.filters as filters
# Variance = mean of square of image - square of mean of image
# see https://en.wikipedia.org/wiki/Variance
# set the bounding box size
bbox_size = 25
# read the image
# convert to 16-bits grayscale since mean filter below is limited
# to single channel 8 or 16-bits, not float
# and variance will be larger than 8-bit range
img = cv2.imread('lena.png', cv2.IMREAD_GRAYSCALE).astype(np.uint16)
# compute square of image
img_sq = cv2.multiply(img, img)
# compute local mean in bbox_size x bbox_size rectangular region of each image
# note: python will give warning about slower performance when processing 16-bit images
region = rectangle(bbox_size, bbox_size)
mean_img = filters.rank.mean(img, selem=region)
mean_img_sq = filters.rank.mean(img_sq, selem=region)
# compute square of local mean of img
sq_mean_img = cv2.multiply(mean_img, mean_img)
# compute variance using float versions of images
var = cv2.add(mean_img_sq.astype(np.float32), -sq_mean_img.astype(np.float32))
# compute standard deviation and convert to 8-bit format
std = cv2.sqrt(var).clip(0,255).astype(np.uint8)
# find bbox_size x bbox_size region with largest var (or std)
# get the moving window average at each pixel
std_ave = (cv2.sqrt(var)).astype(np.uint8)
# find the pixel x,y with the largest mean
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(std_ave)
x,y = max_loc
print("x:", x, "y:", y, "max:", max_val)
# draw rectangle for bounding box on copy of std image
result = std.copy()
result = cv2.merge([result, result, result])
cv2.rectangle(result, (x, y), (x+bbox_size, y+bbox_size), (0,0,255), 1)
# save results
cv2.imwrite('lena_std.png',std)
cv2.imwrite('lena_std_bbox.png',result)
# show results
cv2.imshow('std', std)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
x: 208 y: 67 max: 79.0
Resulting Bounding Box:
An alternative method to compute the windowed/rolling variance in regions of WxH is to use just numpy and scipy with convolutions, which are computed fairly quickly. An example:
import numpy as np
import scipy.signal
# Create image data
original = np.zeros((811,123))
img = original + np.random.normal(0, 1, original.shape)
# Create averaging kernel
H, W = 5, 5
mean_op = np.ones((H,W))/(H*W)
# Carry out convolution to compute mean of square, and square of mean
mean_of_sq = scipy.signal.convolve2d( img**2, mean_op, mode='same', boundary='symm')
sq_of_mean = scipy.signal.convolve2d( img , mean_op, mode='same', boundary='symm') **2
win_var = mean_of_sq - sq_of_mean
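To connect this back to the bounding-box part of the question, a small hedged follow-up that reuses win_var, H, W and img from the snippet above to locate the window with the largest variance:
# Pixel whose surrounding HxW window has the largest variance
row, col = np.unravel_index(np.argmax(win_var), win_var.shape)

# Convert the window centre into a bounding box, clipped to the image bounds
top, left = max(row - H // 2, 0), max(col - W // 2, 0)
bottom, right = min(top + H, img.shape[0]), min(left + W, img.shape[1])
print(f"max-variance window: rows {top}:{bottom}, cols {left}:{right}")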

What's the best way to find lines on a very poor quality image, knowing the angle of these lines?

I'm trying to find these two horizontal lines with the Hough lines transform. As you can see, the picture is very noisy! Currently my workflow looks like this:
crop the image
blur it
lower the noise (for that, I invert the image, then subtract the blurred image from the inverted one)
open it and dilate it with a "horizontal kernel" (kernel_1 = np.ones((10,1), np.uint8))
threshold
HoughLines
The results are not as good as expected... Is there a better strategy, knowing that I will always search for horizontal lines (hence, abs(theta) will always be close to 0 or pi)?
The issue is the noise and the faint signal. You can subdue the noise with averaging/integration while maintaining the signal, because the signal is replicated along a dimension (it's a line).
Your approach of using a very wide but narrow kernel can be extended to simply integrating along the whole image:
rotate the image so the suspected line is aligned with an axis (let's say horizontal)
sum up all pixels of each scanline (horizontal line) with np.sum(axis=1), or take the mean; either way, mind the data type. Working with floats is convenient.
work with the 1-dimensional series of values.
This will not tell you how long the line is, only that it's there and potentially spanning the whole width.
Edit: since my answer got a reaction, I'll elaborate as well:
I think you can lowpass that to get the "gray" baseline, then subtract ("difference of Gaussians"). That should give you a nice signal.
import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt
import scipy.ndimage
im = cv.imread("0gczo.png", cv.IMREAD_GRAYSCALE) / np.float32(255)
relief = im.mean(axis=1)
smoothed = scipy.ndimage.gaussian_filter(relief, sigma=2.0)
baseline = scipy.ndimage.gaussian_filter(relief, sigma=10.0)
difference = smoothed - baseline
std = np.std(difference)
level = 2
outliers = (difference <= std * -level)
plt.plot(difference)
plt.hlines([std * +level, std * -level], xmin=0, xmax=len(relief))
plt.plot(std * -level + outliers * std)
plt.show()
# where those peaks are:
edgemap = np.diff(outliers.astype(np.int8))
(edges,) = edgemap.nonzero()
print(edges) # [392 398 421 427]
print(edgemap[edges]) # [ 1 -1 1 -1]
Much the same as Christoph's answer, but I just wanted to share a processed image, which I can't do in the comments.
I just took the mean across the rows with np.mean(axis=1) and normalised the result. Hopefully you can see the two dark bands corresponding to your lines.
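A minimal sketch of that processing (the filename is a placeholder; tiling the normalised profile back out to the image width is just for display):
import cv2
import numpy as np

img = cv2.imread("lines.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename
profile = img.mean(axis=1)                           # mean across each row
norm = ((profile - profile.min()) / (np.ptp(profile) + 1e-9) * 255).astype(np.uint8)
# Tile the 1-D profile back to the original width so the dark bands are visible
visual = np.tile(norm.reshape(-1, 1), (1, img.shape[1]))
cv2.imwrite("row_profile.png", visual)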

modifying images to get sharp, non-noisy features for optical flow/image stabilization

I am attempting to develop a pipeline for stabilizing images of a fluid experiment using opencv in Python. Here is an example raw image (actual size: 1920x1460)
This pipeline should be able to stabilize both low-frequency drift and the high-frequency "jitter" that occasionally happens when valves are opened/closed during the experiment. My current approach, following the example here, is to apply a bilateral filter followed by adaptive thresholding to bring out the channels in the image. Then I use goodFeaturesToTrack to find corners in the thresholded image. However, there is a substantial amount of noise in the image because of low contrast and some optical effects in the corners of the image. Although I can find corners of the channel, as shown here, they move around quite a bit from frame to frame, seen here.
I tracked the amount of x and y pixel shift in each frame relative to the first frame, calculated from calcOpticalFlowPyrLK, and computed a rigid transform using estimateRigidTransform, shown here. In this plot I can see low-frequency drift from frames 0:200 and a sharp jump around frame ~225. These jumps match what is observed in the video. However, the substantial noise (with an amplitude of ~5-10 pixels) does not match what is observed in the video. If I apply these transforms to my image stack, I instead get increased jitter that does not stabilize the image. Moreover, if I try to compute the transform from one frame to the next (rather than all frames to the first), after processing a handful of frames I get None returns for the rigid transform matrix, probably because the noisiness prevents a rigid transform from being calculated.
Here's a sample of how I am calculating the transform:
# Load required libraries
import numpy as np
from skimage.external import tifffile as tif
import os
import cv2
import matplotlib.pyplot as plt
from sklearn.externals._pilutil import bytescale
#Read in file and convert to 8-bit so it can be processed
os.chdir(r"C:\Path\to\my\processingfolder\inputstack")
inputfilename = "mytestfile.tif"
input_image = tif.imread(inputfilename)
input_image_8 = bytescale(input_image)
n_frames, vid_height, vid_width = np.shape(input_image_8)
transforms = np.zeros((n_frames-1,3),np.float32)
prev_image = input_image_8[0]  # reference: the first frame
prev_f = cv2.bilateralFilter(prev_image,9,75,75)
prev_t = cv2.adaptiveThreshold(prev_f,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,49,2)
prev_pts = cv2.goodFeaturesToTrack(prev_t,maxCorners=100,qualityLevel=0.5,minDistance=10,blockSize=25,mask=None)
for i in range(1,n_frames-2):
    curr_image = input_image_8[i]
    curr_f = cv2.bilateralFilter(curr_image,9,75,75)
    curr_t = cv2.adaptiveThreshold(curr_f,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,49,2)
    #Detect features through optical flow:
    curr_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_t,curr_t,prev_pts,None)
    #Sanity check
    assert len(prev_pts) == len(curr_pts)
    #Filter to only the valid points
    idx = np.where(status==1)[0]
    prev_pts = prev_pts[idx]
    curr_pts = curr_pts[idx]
    #Find transformation matrix
    m = cv2.estimateRigidTransform(prev_pts,curr_pts, fullAffine=False) #will only work with OpenCV-3 or less
    # Extract translation
    dx = m[0,2]
    dy = m[1,2]
    # Extract rotation angle
    da = np.arctan2(m[1,0], m[0,0])
    # Store transformation
    transforms[i] = [dx,dy,da]
    print("Frame: " + str(i) + "/" + str(n_frames) + " - Tracked points : " + str(len(prev_pts)))
How can I process my images differently so that I am picking out the lines of these channels without the noisy corner detections? This stabilization/alignment does not need to happen on the fly; it can be applied to the whole stack after the fact.

Fit curve to segmented image

In my current data analysis I have some segmented images, like the example below.
My problem is that I would like to fit a polynomial or spline (something one-dimensional) to a certain area (red) in the segmented image (the result would be the black line).
Usually I would use something like orthogonal distance regression; the problem is that this needs some kind of fit function, which I don't have in this case.
So what would be the best approach to do this with python/numpy?
Is there maybe some standard algorithm for this kind of problem?
UPDATE:
It seems my drawing skills are probably not the best: the red area in the picture could also have some random noise and does not have to be completely connected (there could be small gaps due to noise).
UPDATE2:
The overall target would be to have a parametrized curve p(t) which returns the position, i.e. p(t) => (x, y) for t in [0, 1], where t=0 is the start of the black line and t=1 the end.
I used scipy.ndimage and this gist as a template. This gets you almost there; you'll have to find a reasonable way to parameterize the curve from the mostly skeletonized image (a sketch of one way to do that follows the code).
from scipy.misc import imread
import scipy.ndimage as ndimage
# Load the image
raw = imread("bG2W9mM.png")
# Convert the image to greyscale, using the red channel
grey = raw[:,:,0]
# Simple thresholding of the image
threshold = grey>200
radius = 10
distance_img = ndimage.distance_transform_edt(threshold)
morph_laplace_img = ndimage.morphological_laplace(distance_img,
                                                  (radius, radius))
skeleton = morph_laplace_img < morph_laplace_img.min()/2
import matplotlib.cm as cm
from pylab import *
subplot(221); imshow(raw)
subplot(222); imshow(grey, cmap=cm.Greys_r)
subplot(223); imshow(threshold, cmap=cm.Greys_r)
subplot(224); imshow(skeleton, cmap=cm.Greys_r)
show()
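A hedged sketch of that parameterization step, assuming the skeleton from above is a single curve that runs roughly left to right: order the skeleton pixels along x and fit a smoothing spline with scipy.interpolate.splprep, which directly gives a curve p(t) for t in [0, 1]. The smoothing factor s is an assumption to tune:
import numpy as np
from scipy import interpolate

# Coordinates of the skeleton pixels (row = y, column = x)
ys, xs = np.nonzero(skeleton)

# Order the points by x, assuming the curve runs roughly left to right
order = np.argsort(xs)
xs, ys = xs[order], ys[order]

# Fit a smoothing spline through the ordered points
tck, _ = interpolate.splprep([xs, ys], s=len(xs))

def p(t):
    """Parametrized curve: p(t) -> (x, y) for t in [0, 1]."""
    x, y = interpolate.splev(t, tck)
    return float(x), float(y)

print(p(0.0), p(0.5), p(1.0))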
You may find other answers that reference skeletonization useful, an example of that is here:
Problems during Skeletonization image for extracting contours
