Fitting an ellipse in images with poor contrast - Python

I am working on image processing in Python, on the topic of underwater photogrammetry. My goal is to fit an ellipse to fiducial markers and retrieve its a) center, b) axes and c) orientation.
My markers are radial, white on a black background, and some have a binary code:
An ML model delivers a small image snippet for each marker in each image, containing only the center of the marker.
So far, I've implemented these approaches:
Using OpenCV (a minimal sketch of this pipeline is included after the list of approaches below):
a) Thresholding, which results in a binary image (cv2.threshold)
b) Find contours (cv2.findContours)
c) Fit ellipse (cv2.fitEllipse)
Using scikit-image:
a) Detect edges (using Canny)
b) Apply the Hough transform
Star operator (work in progress)
a) Estimate ellipse center
b) Send 360 rays in all directions
c) Build an array, comprising coordinates of the largest gradient on each ray
d) Calculate best-fit ellipse using least-square method
e) Use the new center to repeat process (possibly several iterations required)
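For reference, a minimal sketch of the OpenCV pipeline from approach 1 (assuming OpenCV 4 and a hypothetical greyscale crop 'marker_crop.png'; Otsu thresholding stands in here for a fixed threshold):
import cv2
# Hypothetical single marker snippet, loaded as greyscale.
snippet = cv2.imread('marker_crop.png', cv2.IMREAD_GRAYSCALE)
# a) threshold (Otsu used here as an example)
_, binary = cv2.threshold(snippet, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# b) find contours (OpenCV 4 returns two values)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
largest = max(contours, key=cv2.contourArea)
# c) fit ellipse; fitEllipse needs at least 5 contour points
if len(largest) >= 5:
    (cx, cy), (major, minor), angle = cv2.fitEllipse(largest)
    print(cx, cy, major, minor, angle)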
I perform these methods for each color channel separately. So far, the results between channels differ by several pixels for the ellipse center.
Do you have any suggestions on what pre-processing methods I should use prior to detecting/fitting the ellipse?
Any thoughts on which of the above methods will lead to the most accurate results?

This is amazing! Thank you. I just started to read about moments (e.g. https://www.pythonpool.com/opencv-moments/) and inertia.
However, there is a challenge applying your code to this example:
As you can see, the image was poorly cropped, and the inertia of the image is more in the image center than in the center of the expected ellipse.
My first attempt to fix this is to binarize the image first:
import cv2
T = int(cv2.mean(image)[0])
ret,image = cv2.threshold(image,T,255,0)
Is that a reasonable approach? I fear that the binarization will have an unwanted impact on the moments of inertia. Thank you for clarifying.

This code finds the center of mass of the image, and the main axis of symmetry by calculating the moments of inertia.
I tried many libraries that calculate moments of inertia of images, but they give strange results (like a 4x4 matrix for what should be a 2x2 inertia matrix).
Also, ndimage.measurements.center_of_mass() appears to return (Cy, Cx), i.e. (row, column).
So, I resorted to manually calculating the moments of inertia:
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image as Pim
from io import BytesIO
import requests
photoURL = "https://i.stack.imgur.com/EcLYk.png"
response = requests.get(photoURL)
image = np.array(Pim.open(BytesIO(response.content)).convert('L')) # Convert to greyscale
plt.imshow(image)
if True:  # calculate eigenvectors = main axes of inertia
    # xCoord, yCoord are the column and row numbers in image
    xCoord, yCoord = np.meshgrid(np.arange(image.shape[1]), np.arange(image.shape[0]))
    # mass M is the total sum of image
    M = np.sum(image)
    # Cx, Cy are the coordinates of the center of mass
    # Cx = sum(xCoord * image) / sum(image)
    Cx = np.einsum('ij,ij', xCoord, image) / M
    Cy = np.einsum('ij,ij', yCoord, image) / M
    # Ixx is the second order moment of image with respect to the horizontal axis passing through the center of mass
    # Ixx = sum(image * y^2)
    Ixx = np.einsum('ij,ij,ij', yCoord - Cy, yCoord - Cy, image)
    # Iyy is the second order moment of image with respect to the vertical axis passing through the center of mass
    # Iyy = sum(image * x^2)
    Iyy = np.einsum('ij,ij,ij', xCoord - Cx, xCoord - Cx, image)
    # Ixy is the second order moment of image with respect to both axes passing through the center of mass
    # Ixy = sum(image * x * y)
    Ixy = np.einsum('ij,ij,ij', xCoord - Cx, yCoord - Cy, image)
    inertiaMatrix = np.array([[Ixx, Ixy],
                              [Ixy, Iyy]])
    eigValues, eigVectors = np.linalg.eig(inertiaMatrix)

# Plot center of mass
plt.scatter(Cx, Cy, c='r')
# Plot eigenvectors from the center of mass in the direction of the eigenvectors
plt.quiver(Cx, Cy, eigVectors[0, 0], eigVectors[1, 0], color='r', scale=2)
plt.quiver(Cx, Cy, eigVectors[0, 1], eigVectors[1, 1], color='r', scale=2)
plt.show()
nothing = 0
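If you also need the orientation angle explicitly (not part of the code above, just a possible follow-up), the standard second-moment formula can be applied to the same quantities; in the notation above, mu20 = Iyy, mu02 = Ixx and mu11 = Ixy:
# Orientation of the principal axis from the central second moments:
# theta = 0.5 * atan2(2*mu11, mu20 - mu02), with mu20 = Iyy, mu02 = Ixx, mu11 = Ixy.
theta = 0.5 * np.arctan2(2 * Ixy, Iyy - Ixx)
print('orientation (degrees):', np.degrees(theta))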

Related

How to remove repetitive pattern from an image using FFT

I have an image of skin colour with a repetitive pattern (horizontal white lines) generated by a scanner that uses a line of sensors to capture the photo.
My question is: how can I denoise the image effectively using FFT without affecting the image quality much? Somebody told me that I have to manually suppress the lines that appear in the magnitude spectrum, but I don't know how to do that. Can you please tell me how?
My approach is to use the Fast Fourier Transform (FFT) to denoise the image channel by channel.
I have tried an HPF and an LPF in the Fourier domain, but the results were not good, as you can see:
My Code:
from skimage.io import imread, imsave
from matplotlib import pyplot as plt
import numpy as np
img = imread('skin.jpg')
R = img[...,2]
G = img[...,1]
B = img[...,0]
f1 = np.fft.fft2(R)
fshift1 = np.fft.fftshift(f1)
phase_spectrumR = np.angle(fshift1)
magnitude_spectrumR = 20*np.log(np.abs(fshift1))
f2 = np.fft.fft2(G)
fshift2 = np.fft.fftshift(f2)
phase_spectrumG = np.angle(fshift2)
magnitude_spectrumG = 20*np.log(np.abs(fshift2))
f3 = np.fft.fft2(B)
fshift3 = np.fft.fftshift(f3)
phase_spectrumB = np.angle(fshift3)
magnitude_spectrumB = 20*np.log(np.abs(fshift3))
#===============================
# LPF # HPF
magR = np.zeros_like(R)  # = fshift1 #
magR[magR.shape[0]//4:3*magR.shape[0]//4,
     magR.shape[1]//4:3*magR.shape[1]//4] = np.abs(
         fshift1[magR.shape[0]//4:3*magR.shape[0]//4,
                 magR.shape[1]//4:3*magR.shape[1]//4])  # = 0 #
resR = np.abs(np.fft.ifft2(np.fft.ifftshift(magR)))
resR = R - resR
#===============================
magnitude_spectrumR
plt.subplot(221)
plt.imshow(R, cmap='gray')
plt.title('Original')
plt.subplot(222)
plt.imshow(magnitude_spectrumR, cmap='gray')
plt.title('Magnitude Spectrum')
plt.subplot(223)
plt.imshow(phase_spectrumR, cmap='gray')
plt.title('Phase Spectrum')
plt.subplot(224)
plt.imshow(resR, cmap='gray')
plt.title('Processed')
plt.show()
Here is a simple and effective linear filtering strategy to remove the horizontal line artifact:
Outline:
Estimate the frequency of the distortion by looking for a peak in the image's power spectrum in the vertical dimension. The function scipy.signal.welch is useful for this.
Design two filters: a highpass filter with cutoff just below the distortion frequency and a lowpass filter with cutoff near DC. We'll apply the highpass filter vertically and the lowpass filter horizontally to try to isolate the distortion. We'll use scipy.signal.firwin to design these filters, though there are many ways this could be done.
Compute the restored image as "image − (hpf ⊗ lpf) ∗ image".
Code:
# Copyright 2021 Google LLC.
# SPDX-License-Identifier: Apache-2.0
import numpy as np
from scipy.ndimage import convolve1d
from scipy.signal import firwin, welch
def remove_lines(image, distortion_freq=None, num_taps=65, eps=0.025):
  """Removes horizontal line artifacts from scanned image.

  Args:
    image: 2D or 3D array.
    distortion_freq: Float, distortion frequency in cycles/pixel, or
      `None` to estimate from spectrum.
    num_taps: Integer, number of filter taps to use in each dimension.
    eps: Small positive param to adjust filters cutoffs (cycles/pixel).

  Returns:
    Denoised image.
  """
  image = np.asarray(image, float)
  if distortion_freq is None:
    distortion_freq = estimate_distortion_freq(image)

  hpf = firwin(num_taps, distortion_freq - eps,
               pass_zero='highpass', fs=1)
  lpf = firwin(num_taps, eps, pass_zero='lowpass', fs=1)
  return image - convolve1d(convolve1d(image, hpf, axis=0), lpf, axis=1)


def estimate_distortion_freq(image, min_frequency=1/25):
  """Estimates distortion frequency as spectral peak in vertical dim."""
  f, pxx = welch(np.reshape(image, (len(image), -1), 'C').sum(axis=1))
  pxx[f < min_frequency] = 0.0
  return f[pxx.argmax()]
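For example (not part of the original answer, just a sketch assuming the 'skin.jpg' file from the question), it could be called like this:
from skimage.io import imread, imsave
# Hypothetical usage on the skin image from the question (2D or 3D arrays work).
img = imread('skin.jpg').astype(float)
restored = remove_lines(img)
imsave('skin_restored.png', np.clip(restored, 0, 255).astype(np.uint8))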
Examples:
On the portrait image, estimate_distortion_freq estimates that the frequency of the distortion is 0.1094 cycles/pixel (period of 9.14 pixels). The transfer function of the filtering "image − (hpf ⊗ lpf) ∗ image" looks like this:
Here is the filtered output from remove_lines:
On the skin image, estimate_distortion_freq estimates that the frequency of the distortion is 0.08333 cycles/pixel (period of 12.0 pixels). Filtered output from remove_lines:
The distortion is mostly removed on both examples. It isn't perfect: on the portrait image, a couple ripples are still visible near the top and bottom borders, a typical defect when using large filters or Fourier methods. Still, it's a good improvement over the original images.

How to draw thousands of circles at once in OpenCV faster (maybe use GPU)

I need to draw thousands of dots on a given area of an image (frame of a video).
Using a loop is the easiest way to do this.
while i < num:
    x = random.randint(min_x, max_x)
    y = random.randint(min_y, max_y)
    # this check ensures the random points are within the original shape
    if cv2.pointPolygonTest(contour, (int(x), int(y)), False) == 1:
        cv2.circle(img, (int(x), int(y)), 2, color, -1)
        i = i + 1
This process takes a long time to complete.
How can I achieve this more efficiently?
Maybe it's possible to speed it up using several tricks:
Try to get rid of the loop and vectorize your operations.
You can vectorize the (x, y) point generation by passing size to random.randint. If after filtering they don't amount to num, you can generate another set.
Instead of pointPolygonTest, maybe you can try matplotlib.path.Path.contains_points, which operates on a vector of points rather than a single point (a rough sketch follows below).
For the circle, once you filter all your circle centers, create an all zeros image to draw your circles, then mark centers in this image (again, vectorize). If you want color circles you'll have to set the pixel value to appropriate gray level for each channel. Then dilate using a circular structuring element having the required radius. For small radii, these circles should be fine. Or, you can try Gaussian blur as well, because, if you convolve a Gaussian with an impulse, it'll give you a Gaussian, and a symmetric Gaussian will resemble a circle in the image. You can also use filter2D to do this if your circles have the same size. If the dilation result isn't good, you can create your own kernel resembling the circle you want, then convolve it with centers image.
Copy all non-zero pixels from this circles image to your img using the circles image as a mask.
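To make the above concrete, here is a rough sketch of the vectorised generation and filtering (assuming contour, img, num, the bounds and color from the question; matplotlib.path.Path stands in for pointPolygonTest and np.random.randint replaces random.randint):
import numpy as np
import cv2
from matplotlib.path import Path

# Vectorised candidate generation, filtered against the polygon in one call.
poly = Path(contour.reshape(-1, 2))
pts = np.empty((0, 2), dtype=int)
while len(pts) < num:
    cand = np.column_stack((
        np.random.randint(min_x, max_x + 1, size=num),
        np.random.randint(min_y, max_y + 1, size=num)))
    pts = np.vstack((pts, cand[poly.contains_points(cand)]))
pts = pts[:num]

# Mark the centres in a blank mask, dilate with a circular element, then paint.
mask = np.zeros(img.shape[:2], dtype=np.uint8)
mask[pts[:, 1], pts[:, 0]] = 255
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.dilate(mask, se)
img[mask > 0] = color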
A simple example for creating circles:
import numpy as np
import cv2 as cv
# create random centers for circles
img = np.random.randint(low=0, high=1000, size=(256, 256))
img = np.uint8(img < 1) * 255
# use Gaussian blur to create circles
img1 = cv.GaussianBlur(img, (9, 9), 3)*20
# use morphological dilation to create circles
se = cv.getStructuringElement(cv.MORPH_ELLIPSE, (5, 5))
img2 = cv.dilate(img, se)
Centers:
Circles using dilation:
Circles using Gaussian blur:

Rotate incomplete box so it is vertical

I have a dataset of x-ray images that I am trying to clean by rotating the images so the arm is vertical and cropping the image of any excess space. Here are some examples from the dataset:
I am currently trying to work out the best way to determine the angle of the x-ray and rotate the image based on that.
My current approach is to detect the line of the side of the rectangle that the scan is in using the Hough transform, and rotate the image based on that.
I tried to run the Hough transform on the output of a Canny edge detector, but this doesn't work so well for images where the edge of the rectangle is blurred, like in the first image.
I can't use OpenCV's box detection, as sometimes the rectangle around the scan has an edge off screen.
So I currently use adaptive thresholding to find the edge of the box, then median filter it and try to find the longest line in this, but sometimes the wrong line is the longest and the image gets rotated completely wrong.
Adaptive thresholding is used because some scans have different brightnesses.
The current implementation I have is:
def get_lines(img):
    # threshold
    thresh = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 15, 4.75)
    median = cv2.medianBlur(thresh, 3)
    # detect lines
    lines = cv2.HoughLines(median, 1, np.pi/180, 175)
    return sorted(lines, key=lambda x: x[0][0], reverse=True)

def rotate(image, angle):
    (h, w) = image.shape[:2]
    (cX, cY) = (w // 2, h // 2)
    M = cv2.getRotationMatrix2D((cX, cY), angle, 1.0)
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))
    M[0, 2] += (nW / 2) - cX
    M[1, 2] += (nH / 2) - cY
    return cv2.warpAffine(image, M, (nW, nH))

def fix_rotation(input):
    lines = get_lines(input)
    rho, theta = lines[0][0]
    return rotate(input, theta * 180 / np.pi)
and produces the following results:
When it goes wrong:
I was wondering if there are any better techniques to use in order to improve the performance of this, and what the best way would be to go about cropping the images after they have been rotated?
The idea is to use the blob of the arm itself and fit an ellipse around it. Then, extract its major axis. I quickly tested the idea in Matlab – not OpenCV. Here's what I did, you should be able to use OpenCV's equivalent functions to achieve similar outputs.
First, compute the threshold value of your input via Otsu. Then add some bias to the threshold value to find a better segmentation and use this value to threshold the image.
In pseudo-code:
//the bias value
threshBias = 0.4;
//get the binary threshold via otsu:
thresholdLevel = graythresh( grayInput, “otsu” );
//add bias to the original value
thresholdLevel = thresholdLevel - threshSensitivity * thresholdLevel;
//get the fixed binary image:
thresholdLevel = imbinarize( grayInput, thresholdLevel );
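A rough OpenCV equivalent of that pseudo-code (just a sketch; gray_input stands for the greyscale input image and the variable names are mine):
import cv2

# Otsu threshold first, then bias it downwards before re-thresholding.
thresh_bias = 0.4
otsu_level, _ = cv2.threshold(gray_input, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
biased_level = otsu_level - thresh_bias * otsu_level
_, binary = cv2.threshold(gray_input, biased_level, 255, cv2.THRESH_BINARY)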
After small blob filtering, this is the output:
Now, get the contours/blobs and fit an ellipse for each contour. Check out the OpenCV example here: https://docs.opencv.org/3.4.9/de/d62/tutorial_bounding_rotated_ellipses.html
You end up with two ellipses:
We are looking for the biggest ellipse, the one with the biggest area and the biggest major and minor axis. I used the width and height of each ellipse to filter the results. The target ellipse is then colored in green. Finally, I get the major axis of the target ellipse, here colored in yellow:
Now, to implement these ideas in OpenCV you have these options:
Use fitEllipse to find the ellipses. The return value of this function is a RotatedRect object, which stores the center, size and rotation angle of the fitted ellipse.
Instead of fitting an ellipse, you could try using minAreaRect, which finds a rotated rectangle of the minimum area enclosing a blob.
You can use image moments to calculate the rotation angle.
Using OpenCV's moments function, calculate the second order central moments to construct a covariance matrix, and then obtain the orientation as shown in the image moment wiki page.
Obtain the normalized central moments nu20, nu11 and nu02 from opencv moments. Then the orientation is calculated as
0.5 * arctan(2 * nu11 / (nu20 - nu02))
Please refer to the linked page for details.
You can use the raw image itself or the preprocessed one for the calculation of orientation. See which one gives you better accuracy and use it.
As for the bounding-box, once you rotate the image, assuming you used the preprocessed one, get all the non-zero pixel coordinates of the rotated image and calculate their upright bounding-box using opencv boundingRect.
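A minimal sketch of that moments-based orientation and cropping (just an illustration; thresh is assumed to be the preprocessed binary image and rotate() the function from the question):
import cv2
import numpy as np

# Orientation from normalized central moments, as described above.
# The sign/offset of the angle may need adjusting depending on convention.
m = cv2.moments(thresh, binaryImage=True)
theta = 0.5 * np.arctan2(2 * m['nu11'], m['nu20'] - m['nu02'])
rotated = rotate(thresh, np.degrees(theta))

# Crop to the non-zero pixels of the rotated image using boundingRect.
x, y, w, h = cv2.boundingRect(cv2.findNonZero(rotated))
cropped = rotated[y:y+h, x:x+w]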

Determining the average colour of a given circular sample of an image

What I am trying to achieve is similar to photoshop/gimp's eyedropper tool: take a round sample of a given area in an image and return the average colour of that circular sample.
The simplest method I have found is to take a 'regular' square sample, mask it as a circle, then reduce it to 1 pixel, but this is very CPU-demanding (especially when repeated millions of times).
A more mathematically complex method is to take a square area and average only the pixels that fall within a circular area within that sample, but determining what pixel is or isn't within that circle, repeated, is CPU-demanding as well.
Is there a more succinct, less-CPU-demanding means to achieve this?
Here's a little example of skimage.draw.circle() which doesn't actually draw a circle but gives you the coordinates of points within a circle which you can use to index Numpy arrays with.
#!/usr/bin/env python3
import numpy as np
from skimage.io import imsave
from skimage.draw import circle
# Make rectangular canvas of mid-grey
w, h = 200, 100
img = np.full((h, w), 128, dtype=np.uint8)
# Get coordinates of points within a central circle
Ycoords, Xcoords = circle(h//2, w//2, 45)
# Make all points in circle=200, i.e. fill circle with 200
img[Ycoords, Xcoords] = 200
# Get mean of points in circle
print(img[Ycoords, Xcoords].mean()) # prints 200.0
# DEBUG: Save image for checking
imsave('result.png',img)
I'm sure that there's a more succinct way to go about it, but:
import math
import numpy as np
import imageio as ioimg # as scipy's i/o function is now deprecated
from skimage.draw import circle
import matplotlib.pyplot as plt
# base sample dimensions (rest below calculated on this).
# Must be an odd number.
wh = 49
# tmp - this placement will be programmed later
dp = 500
#load work image (from same work directory)
img = ioimg.imread('830.jpg')
# convert to numpy array (dropping the alpha while we're at it)
np_img = np.array(img)[:,:,:3]
# take sample of resulting array
sample = np_img[dp:wh+dp, dp:wh+dp]
#==============
# set up numpy circle mask
## this mask will be multiplied against each RGB layer in extracted sample area
# set up basic square array
sample_mask = np.zeros((wh, wh), dtype=np.uint8)
# set up circle centre coords and radius values
xy, r = math.floor(wh/2), math.ceil(wh/2)
# use these values to populate circle area with ones
rr, cc = circle(xy, xy, r)
sample_mask[rr, cc] = 1
# add axis to make array multiplication possible (do I have to do this)
sample_mask = sample_mask[:, :, np.newaxis]
result = sample * sample_mask
# count number of nonzero values (this will be our mean divisor)
nz = np.count_nonzero(sample_mask)
sample_color = []
for c in range(result.shape[2]):
    sample_color.append(int(round(np.sum(result[:, :, c]) / nz)))
print(sample_color) # will return array like [225, 205, 170]
plt.imshow(result, interpolation='nearest')
plt.show()
Perhaps asking this question here wasn't necessary (it has been a while since I've python-ed, and was hoping that some new library had been developed for this since), but I hope this can be a reference for others who have the same goal.
This operation will be performed for every pixel in the image (sometimes millions of times) for thousands of images (scanned pages), hence my worries about performance; but thanks to numpy, this code is pretty quick.
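As a further sketch (not from the original answers): since the circular mask depends only on wh, the index arrays from circle() can be computed once and reused for every sample position, avoiding the per-sample mask multiplication:
# Hypothetical reuse of the circle indices for many sample positions.
rr, cc = circle(wh // 2, wh // 2, math.ceil(wh / 2))   # compute once

def sample_mean(np_img, top, left):
    # mean RGB of the circular sample whose square patch starts at (top, left)
    patch = np_img[top:top + wh, left:left + wh]
    return patch[rr, cc].mean(axis=0)

print(sample_mean(np_img, dp, dp))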

How to change the K matrix as well as the distortion coefficients to simulate the barrel distortion in python with the help of OpenCV

I have the intrinsic parameters of my camera, as well as the distortion coefficients, and I know how to correct the barrel distortion - mainly from this blog post:
Barrel distortion calculation
However, now I wish to add barrel distortion, as it would be introduced by the camera itself.
The code for correcting the barrel distortion is the following:
import numpy as np
import cv2
from matplotlib import pyplot as plt
# Define camera matrix K
K = np.array([[1.051e+03,0,0],
[0, 1.0845e+03,0],
[964.4480,544.2625,1.]])
#Matrix was written in matlab style, hence it has to be transposed ...
K = K.transpose()
# Define distortion coefficients d
d = np.array([0.0719,-0.0833,0.0013,-6.1840e-04,0])
# Read an example image and acquire its size
img = cv2.imread("grid.png")
h, w = img.shape[:2]
# Generate new camera matrix from parameters
newcameramatrix, roi = cv2.getOptimalNewCameraMatrix(K, d, (w,h), 0)
# Generate look-up tables for remapping the camera image
mapx, mapy = cv2.initUndistortRectifyMap(K, d, None, newcameramatrix, (w, h), 5)
# Remap the original image to a new image
newimg = cv2.remap(img, mapx, mapy, cv2.INTER_LINEAR)
# Display old and new image
fig, (oldimg_ax, newimg_ax) = plt.subplots(1, 2)
oldimg_ax.imshow(img)
oldimg_ax.set_title('Original image')
newimg_ax.imshow(newimg)
newimg_ax.set_title('Unwarped image')
plt.show()
I tried to simulate the barrel distortion by using the inverse of the K matrix, or the transposed K matrix, as well as multiplying the d vector by -1.
I transposed it via:
K = K.transpose()
or inverted it via:
K = np.linalg.inv(K)
However, this gave me only a black image. If I do not invert / transpose it at all, I just get a negative radial distortion, but I need a positive radial distortion.
