I would like to implement a filter on a numpy array which computes, locally (given a footprint), the average distance to the central pixel.
This function is similar to the local standard deviation, but takes the center pixel as the reference instead of the average.
One part of the problem is that my arrays are multimodal 2D images (RGB, for example).
I have a full numpy implementation that uses indexing tricks to perform the task, but it is limited to a 3x3 neighborhood for technical reasons.
I was thinking of implementing this in Cython, following this for example, but it does not seem to work for vector images (with more than one value per pixel).
A numpy implementation (kind of pseudo code!) would look something like this:
img: np.ndarray   # source image of shape (n, m, 3), for example
mask: np.ndarray  # mask representing valid data, shape (n, m)

footprint = disk(5)
n_px_nh = footprint.sum()

out = np.zeros(img.shape)
for i, neighborhood, masked_px in local_2d_filter(img, footprint, mask):
    # going through the image
    center_px = neighborhood[n_px_nh // 2]  # works if n_px_nh is odd, of course
    center_diff = 0
    for px in neighborhood[masked_px]:
        center_diff += ((px - center_px) ** 2).sum() ** 0.5  # distance to the center pixel
    center_diff /= n_px_nh
    out[i] = center_diff
local_2d_filter would be a function, like scipy.ndimage.generic_filter, which walks through an image and returns the pixels in the footprint around the center pixel.
Does anyone have an idea of how to implement such a filter?
Thanks
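For reference, here is a rough sketch of one possible generalisation beyond 3x3, using numpy.lib.stride_tricks.sliding_window_view and skimage's disk. It ignores the validity mask for brevity, pads the border with zeros, and the function name center_distance_filter is just illustrative, not a tested solution:

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from skimage.morphology import disk

def center_distance_filter(img, footprint):
    # img: (n, m, c) float array; footprint: (k, k) array with k odd
    k = footprint.shape[0]
    r = k // 2
    pad = np.pad(img, ((r, r), (r, r), (0, 0)))
    # one k x k window per pixel and per channel: shape (n, m, c, k, k)
    windows = sliding_window_view(pad, (k, k), axis=(0, 1))
    center = img[..., None, None]
    # Euclidean distance of every neighbour to the central pixel
    dist = np.sqrt(((windows - center) ** 2).sum(axis=2))   # (n, m, k, k)
    return (dist * footprint).sum(axis=(-1, -2)) / footprint.sum()

out = center_distance_filter(img.astype(float), disk(5))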
I have images, each containing a single pixel with value 1 (a delta) and a previously known sigma. Reproduction of a single example:
img = np.zeros((40,40))
idx1 = np.random.randint(0, img.shape[0])
idx2 = np.random.randint(0, img.shape[1])
img[idx1, idx2] = 1
I wish to convolve each image with its respective sigma value, as in:
out_image = scipy.ndimage.filters.gaussian_filter(img, sigma, mode='constant')
The thing is, since the input is only a single delta, the output is just the Gaussian's values placed into the image, centered around the location of the delta. Would it be faster to implement this directly? If so, how do I generate the Gaussian kernel for a given sigma? Maybe there is a faster sparse representation in skimage or cv2 that could do the job more efficiently?
What would be the most efficient way (in terms of execution time) to repeatedly compute such a case, given that the location of the delta and the sigma change each time?
Why not construct the Gaussian directly?
idx1 = np.random.randint(0, img.shape[0])
idx2 = np.random.randint(0, img.shape[1])
# coordinate grids covering the image
yg, xg = np.mgrid[0:img.shape[0], 0:img.shape[1]]
sigma = 5
# evaluate the Gaussian centered on the delta location
img = np.exp(-0.5 / sigma**2 * ((xg - idx2)**2 + (yg - idx1)**2))
img = img / np.sum(img)  # normalize so the values sum to 1
UPDATE
If the Gaussian lies partially out of bounds, it does not sum to 1 (the tails are cut off), so the filtered image does not sum to 1 either. If you want to take the boundary conditions into account, you would need to calculate the double integral of the Gaussian over the image domain, which is not straightforward:
https://www.wolframalpha.com/input?i=integrate+%28integrate+e%5E%28-0.5%2Fs%5E2*%28%28x-x_0%29%5E2%2B%28y-y_0%29%5E2%29%29+dx+from+x%3D0+to+w+%29+dy+from+y%3D0+to+h
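For what it's worth, that double integral factorises into a product of two one-dimensional Gaussian integrals, each of which can be written with the error function. A small sketch (the helper name in_bounds_mass is mine; it returns the fraction of the Gaussian's mass that falls inside a w-by-h image):

import numpy as np
from scipy.special import erf

def in_bounds_mass(x0, y0, sigma, w, h):
    # fraction of a 2-D isotropic Gaussian centred at (x0, y0)
    # lying inside the rectangle [0, w] x [0, h]
    fx = 0.5 * (erf((w - x0) / (sigma * np.sqrt(2))) - erf((0 - x0) / (sigma * np.sqrt(2))))
    fy = 0.5 * (erf((h - y0) / (sigma * np.sqrt(2))) - erf((0 - y0) / (sigma * np.sqrt(2))))
    return fx * fy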
f = u + n: f is the noisy image, u is the desired reconstruction, and n is the noise.
The reconstruction error is ||u-f||_2^2 + lambda * ||gradient(u)||_2^2
Solve ||Ax - b||_2^2, where x is a vector obtained by vectorising f column-wise.
The above is my problem, and I can't understand what "solve ||Ax - b||_2^2" means.
What is 'A'? What is 'b'? How can I get 'the reconstruction'?
I know the simple way of minimising a least-squares problem using the pseudo-inverse.
But I have only used that to find θ in ||Aθ - b||^2.
I don't know what I have to do here, so I did what I could.
import matplotlib.pyplot as plt
import numpy as np
from scipy import signal
from skimage import io, color
from skimage import exposure
file_image = 'image.jpg'
im_color = io.imread(file_image)
im_gray = color.rgb2gray(im_color)
im = (im_gray - np.mean(im_gray)) / np.std(im_gray)
(row, col) = im.shape
noise_std = 0.2 # try with varying noise standard deviation
noise = np.random.normal(0, noise_std, (row, col))
im_noise = im + noise
I made a noisy image, but I don't know the next step.
Is there anyone who can explain?
This very much looks like a poorly phrased homework question. I have a fair background in mathematical image processing and inverse problems, so I rewrote it for you in the only way it makes sense.
Let f be a noisy image described by the relationship f = u+n,
where u is a noise-free image and n is the noise. The goal is to
recover u from f. To do this, we introduce the following function
||u - f||²,
which is equal to the squared summed difference between all pixels in
u and f, to measure the similarity between u and f. Furthermore, to measure the amount
of noise in the image, we introduce
||Du||²,
where Du(x, y) represents the magnitude of the gradient of u at
position (x, y). By ||Du||², we therefore mean the squared sum of the
gradient magnitudes over all pixels.
A way to measure how well we have reconstructed the noise-free image can then be represented by the following function
||u - f||² + λ ||Du||²,
where λ > 0 balances fidelity to the data against smoothness of u.
Solve the regularised least squares problem above.
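To make A and b concrete, here is one common construction (a sketch under assumptions of my own: forward differences for the gradient, λ folded into A as sqrt(λ)·D, zero boundary handling, and scipy.sparse.linalg.lsqr as the solver; the function name denoise_tikhonov is illustrative):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def denoise_tikhonov(f, lam):
    # Minimise ||u - f||^2 + lam * ||grad(u)||^2 by solving ||A x - b||^2,
    # with x the column-wise vectorisation of u.
    row, col = f.shape
    n = row * col
    I = sp.identity(n)
    # forward-difference operators acting on the column-wise vectorised image
    d_row = sp.diags([-1, 1], [0, 1], shape=(row, row))
    d_col = sp.diags([-1, 1], [0, 1], shape=(col, col))
    Dy = sp.kron(sp.identity(col), d_row)   # differences down each column
    Dx = sp.kron(d_col, sp.identity(row))   # differences across columns
    A = sp.vstack([I, np.sqrt(lam) * Dy, np.sqrt(lam) * Dx]).tocsr()
    b = np.concatenate([f.flatten(order='F'), np.zeros(2 * n)])
    x = lsqr(A, b)[0]
    return x.reshape((row, col), order='F')

u_hat = denoise_tikhonov(im_noise, lam=1.0)

So A stacks the identity on top of the (scaled) gradient operators, b stacks the noisy image on top of zeros, and the least-squares solution x is the reconstruction u.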
I want to apply a geometric mean filter on an image in OpenCV (Python). Is there a built-in function, or should I implement the filter myself? What is the most efficient way to implement a nonlinear filter in OpenCV?
Recall from logarithmic identities that
log((x1 * x2 * ... * xn)^(1/n)) = (1/n) * (log(x1) + log(x2) + ... + log(xn))
From Wikipedia:
The geometric mean can also be expressed as the exponential of the arithmetic mean of logarithms. By using logarithmic identities to transform the formula, the multiplications can be expressed as a sum and the power as a multiplication.
This means that a geometric mean can be simply calculated as an arithmetic mean, i.e. a cv2.boxFilter() of the logarithm of the image values. Then you just exponentiate the result and you're done!
For example, let's test the manual method against this method and check the results. First, load the image and define the kernel size:
import cv2
import numpy as np
img = cv2.imread('cameraman.png', cv2.IMREAD_GRAYSCALE).astype(float)
rows, cols = img.shape[:2]
ksize = 5
Next let's pad the image and calculate the geometric mean manually:
padsize = int((ksize-1)/2)
pad_img = cv2.copyMakeBorder(img, *[padsize]*4, cv2.BORDER_DEFAULT)
geomean1 = np.zeros_like(img)
for r in range(rows):
    for c in range(cols):
        geomean1[r, c] = np.prod(pad_img[r:r+ksize, c:c+ksize])**(1/(ksize**2))
geomean1 = np.uint8(geomean1)
cv2.imshow('1', geomean1)
cv2.waitKey()
Looks like what we'd expect. Now instead of this, if we use the logarithmic version, all we need to do is take the exponential of the box filter running on the log of the image:
geomean2 = np.uint8(np.exp(cv2.boxFilter(np.log(img), -1, (ksize, ksize))))
cv2.imshow('2', geomean2)
cv2.waitKey()
Well, they certainly look the same. Actually I cheated, this is the same uploaded image as above. But that's okay because:
print(np.array_equal(geomean1, geomean2))
True
I am trying to convert an RGB image to grayscale using the following paper.
The main algorithm using in the paper is this:
Novel PCA based algorithm to convert images to grayscale
However, when I try to extract eigenvectors from the image I get 500 eigenvalues instead of the 3 that are required. As far as I know, an NxN matrix usually gives N eigenvectors, but I am not really sure what I should be doing here to get only 3 eigenvectors.
Any help as to what I should do? Here's my code so far:
import numpy as np
import cv2

def pca_rgb2gray(img):
    """
    NOVEL PCA-BASED COLOR-TO-GRAY IMAGE CONVERSION
    Authors:
    - Ja-Won Seo
    - Seong Dae Kim
    2013 IEEE International Conference on Image Processing
    """
    I_re = cv2.resize(img, (500, 500))
    Iycc = cv2.cvtColor(I_re, cv2.COLOR_BGR2YCrCb)
    Izycc = Iycc - Iycc.mean()
    eigvals = []
    eigvecs = []
    final_im = []
    for i in range(3):
        res = np.linalg.eig(Izycc[:, :, i])
        eigvals.append(res[0])
        eigvecs.append(res[1])
    eignorm = np.linalg.norm(eigvals)
    for i in range(3):
        eigvals[i] /= eignorm
        eigvecs[i] /= np.linalg.norm(eigvecs[i])
        temp = eigvals[i] * np.dot(eigvecs[i], Izycc[:, :, i])
        final_im.append(temp)
    final_im = final_im[0] + final_im[1] + final_im[2]
    return final_im

if __name__ == '__main__':
    img = cv2.imread('image.png')
    gray = pca_rgb2gray(img)
The accepted answer by Ahmed unfortunately has the PCA math wrong, leading to a result quite different from the manuscript. Here are the images screen-captured from the manuscript.
The mean centring and SVD should be done along the other dimension, with the channels treated as the different samples. The mean centring is aimed at getting an average pixel response of zero, not an average channel response of zero.
The linked algorithm also clearly states that the projection of the PCA model involves multiplication of the image by the scores first and this product by the eigenvalues, not the other way round as in the other answer.
For further info on the math see my PCA math answer here
The difference in the code can be seen in the outputs. Since the manuscript did not provide an example output (that I found) there may be subtle differences between the results as the manuscript ones are captured screenshots.
For comparison, here is the downloaded colour file, which has a little more contrast than the screenshot, so one would expect the same from the output greyscale.
First the result from Ahmed's code:
Then the result from the updated code:
The corrected code (based on Ahmed's for ease of comparison) is
import numpy as np
import cv2
from numpy.linalg import svd, norm
# Read input image
Ibgr = cv2.imread('path/peppers.jpg')
#Convert to YCrCb
Iycc = cv2.cvtColor(Ibgr, cv2.COLOR_BGR2YCR_CB)
# Reshape the H by W by 3 array to a 3 by N array (N = W * H)
Izycc = Iycc.reshape([-1, 3]).T
# Remove mean along Y, Cr, and Cb *separately*!
Izycc = Izycc - Izycc.mean(0) #(1)[:, np.newaxis]
# Mean across channels is required (separate means for each channel is not a
# mathematically sensible idea) - each pixel's variation should centre around 0
# Make sure we're dealing with zero-mean data here: the mean for Y, Cr, and Cb
# should separately be zero. Recall: Izycc is 3 by N array.
# Original assertion was based on a false premise. Mean value for each pixel should be 0
assert(np.allclose(np.mean(Izycc, 0), 0.0))
# Compute data array's SVD. Ignore the 3rd return value: unimportant in this context.
(U, S, L) = svd(Izycc, full_matrices=False)
# Square the data's singular vectors to get the eigenvalues. Then, normalize
# the three eigenvalues to unit norm and finally, make a diagonal matrix out of
# them.
eigvals = np.diag(S**2 / norm(S**2))
# Eigenvectors are just the left-singular vectors.
eigvecs = U
# Project the YCrCb data onto the principal components and reshape to W by H
# array.
# This was performed incorrectly, the published algorithm shows that the eigenvectors
# are multiplied by the flattened image then scaled by eigenvalues
Igray = np.dot(eigvecs.T, np.dot(eigvals, Izycc)).sum(0).reshape(Iycc.shape[:2])
Igray2 = np.dot(eigvals, np.dot(eigvecs, Izycc)).sum(0).reshape(Iycc.shape[:2])
eigvals3 = eigvals*[1,-1,1]
Igray3 = np.dot(eigvals3, np.dot(eigvecs, Izycc)).sum(0).reshape(Iycc.shape[:2])
eigvals4 = eigvals*[1,-1,-1]
Igray4 = np.dot(eigvals4, np.dot(eigvecs, Izycc)).sum(0).reshape(Iycc.shape[:2])
# Rescale Igray to [0, 255]. This is a fancy way to do this.
from scipy.interpolate import interp1d
Igray = np.floor((interp1d([Igray.min(), Igray.max()],
                           [0.0, 256.0 - 1e-4]))(Igray))
Igray2 = np.floor((interp1d([Igray2.min(), Igray2.max()],
                            [0.0, 256.0 - 1e-4]))(Igray2))
Igray3 = np.floor((interp1d([Igray3.min(), Igray3.max()],
                            [0.0, 256.0 - 1e-4]))(Igray3))
Igray4 = np.floor((interp1d([Igray4.min(), Igray4.max()],
                            [0.0, 256.0 - 1e-4]))(Igray4))
# Make sure we don't accidentally produce a photographic negative (flip image
# intensities). N.B.: `norm` is often expensive; in real life, try to see if
# there's a more efficient way to do this.
if norm(Iycc[:,:,0] - Igray) > norm(Iycc[:,:,0] - (255.0 - Igray)):
    Igray = 255 - Igray
if norm(Iycc[:,:,0] - Igray2) > norm(Iycc[:,:,0] - (255.0 - Igray2)):
    Igray2 = 255 - Igray2
if norm(Iycc[:,:,0] - Igray3) > norm(Iycc[:,:,0] - (255.0 - Igray3)):
    Igray3 = 255 - Igray3
if norm(Iycc[:,:,0] - Igray4) > norm(Iycc[:,:,0] - (255.0 - Igray4)):
    Igray4 = 255 - Igray4
# Display result
if True:
    import pylab
    pylab.ion()
    fGray = pylab.imshow(Igray, cmap='gray')
    # Save result
    cv2.imwrite('peppers-gray.png', Igray.astype(np.uint8))
    fGray2 = pylab.imshow(Igray2, cmap='gray')
    # Save result
    cv2.imwrite('peppers-gray2.png', Igray2.astype(np.uint8))
    fGray3 = pylab.imshow(Igray3, cmap='gray')
    # Save result
    cv2.imwrite('peppers-gray3.png', Igray3.astype(np.uint8))
    fGray4 = pylab.imshow(Igray4, cmap='gray')
    # Save result
    cv2.imwrite('peppers-gray4.png', Igray4.astype(np.uint8))
EDIT
Following Nazlok's query about the instability of eigenvector direction: the direction in which any one eigenvector is oriented is arbitrary, so there is no guarantee that different algorithms (or a single algorithm without a reproducible standardisation step for orientation) would give the same result. I have now added two extra examples, where I have simply switched the sign of the eigenvectors (number 2, and numbers 2 and 3). The results are again different: switching only PC2 gives a much lighter tone, while switching 2 and 3 is similar (not surprising, as the exponential scaling relegates the influence of PC3 to very little). I'll leave that last one for people bothered to run the code.
Conclusion
Without clear additional steps taken to provide a repeatable and reproducible orientation of the PCs, this algorithm is unstable and I personally would not be comfortable employing it as is. Nazlok's suggestion of using the balance of positive and negative intensities could provide a rule, but it would need to be validated, so it is out of scope for this answer. Such a rule, however, would not guarantee a 'best' solution, just a stable one. Eigenvectors are unit vectors, so they are balanced in variance (square of intensity). Which side of zero has the largest sum of magnitudes only tells us which side has individual pixels contributing larger variances, which I suspect is generally not very informative.
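For completeness, one common way to make the orientation reproducible is to fix a sign convention, for example flipping each eigenvector so that its largest-magnitude component is positive (similar in spirit to scikit-learn's svd_flip). This is an illustrative sketch of such a convention, not a rule endorsed by the manuscript:

import numpy as np

def standardise_signs(eigvecs):
    # flip each column so its largest-magnitude entry is positive
    idx = np.argmax(np.abs(eigvecs), axis=0)
    signs = np.sign(eigvecs[idx, np.arange(eigvecs.shape[1])])
    return eigvecs * signs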
Background
When Seo and Kim ask for lambda_i, v_i <- PCA(Iycc), for i = 1, 2, 3, they want:
from numpy.linalg import eig
lambdas, vs = eig(np.dot(Izycc, Izycc.T))
for a 3×N array Izycc. That is, they want the three eigenvalues and eigenvectors of the 3×3 covariance matrix of Izycc, the 3×N array (for you, N = 500*500).
However, you almost never want to compute the covariance matrix, then find its eigendecomposition, because of numerical instability. There is a much better way to get the same lambdas, vs, using the singular value decomposition (SVD) of Izycc directly (see this answer). The code below shows you how to do this.
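A quick sanity check of that claim, with random data (this toy snippet is mine, not part of the original answer): the eigenvalues of dot(Izycc, Izycc.T) are the squared singular values of Izycc, and the eigenvectors are its left-singular vectors, up to ordering and sign.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 1000))
X = X - X.mean(1, keepdims=True)          # zero-mean rows, like Izycc

lambdas, vs = np.linalg.eig(X @ X.T)      # eigendecomposition of the 3x3 covariance
U, S, _ = np.linalg.svd(X, full_matrices=False)

print(np.allclose(np.sort(lambdas), np.sort(S**2)))   # True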
Just show me the code
First download http://cadik.posvete.cz/color_to_gray_evaluation/img/155_5572_jpg/155_5572_jpg.jpg and save it as peppers.jpg.
Then, run the following:
import numpy as np
import cv2
from numpy.linalg import svd, norm
# Read input image
Ibgr = cv2.imread('peppers.jpg')
# Convert to YCrCb
Iycc = cv2.cvtColor(Ibgr, cv2.COLOR_BGR2YCR_CB)
# Reshape the H by W by 3 array to a 3 by N array (N = W * H)
Izycc = Iycc.reshape([-1, 3]).T
# Remove mean along Y, Cr, and Cb *separately*!
Izycc = Izycc - Izycc.mean(1)[:, np.newaxis]
# Make sure we're dealing with zero-mean data here: the mean for Y, Cr, and Cb
# should separately be zero. Recall: Izycc is 3 by N array.
assert(np.allclose(np.mean(Izycc, 1), 0.0))
# Compute data array's SVD. Ignore the 3rd return value: unimportant.
(U, S) = svd(Izycc, full_matrices=False)[:2]
# Square the data's singular vectors to get the eigenvalues. Then, normalize
# the three eigenvalues to unit norm and finally, make a diagonal matrix out of
# them. N.B.: the scaling factor of `norm(S**2)` is, I believe, arbitrary: the
# rest of the algorithm doesn't really care if/how the eigenvalues are scaled,
# since we will rescale the grayscale values to [0, 255] anyway.
eigvals = np.diag(S**2 / norm(S**2))
# Eigenvectors are just the left-singular vectors.
eigvecs = U
# Project the YCrCb data onto the principal components and reshape to W by H
# array.
Igray = np.dot(eigvecs.T, np.dot(eigvals, Izycc)).sum(0).reshape(Iycc.shape[:2])
# Rescale Igray to [0, 255]. This is a fancy way to do this.
from scipy.interpolate import interp1d
Igray = np.floor((interp1d([Igray.min(), Igray.max()],
                           [0.0, 256.0 - 1e-4]))(Igray))
# Make sure we don't accidentally produce a photographic negative (flip image
# intensities). N.B.: `norm` is often expensive; in real life, try to see if
# there's a more efficient way to do this.
if norm(Iycc[:,:,0] - Igray) > norm(Iycc[:,:,0] - (255.0 - Igray)):
    Igray = 255 - Igray
# Display result
if True:
    import pylab
    pylab.ion()
    pylab.imshow(Igray, cmap='gray')

# Save result
cv2.imwrite('peppers-gray.png', Igray.astype(np.uint8))
This produces the following grayscale image, which seems to match the result in Figure 4 of the paper (though see caveat at the bottom of this answer!):
Errors in your implementation
Izycc = Iycc - Iycc.mean() WRONG. Iycc.mean() flattens the image and computes the mean. You want Izycc such that the Y channel, Cr channel, and Cb channel all have zero-mean. You could do this in a for dim in range(3)-loop, but I did it above with array broadcasting. I also have an assert above to make sure this condition holds. The trick where you get the eigendecomposition of the covariance matrix from the SVD of the data array requires zero-mean Y/Cr/Cb channels.
np.linalg.eig(Izycc[:,:,i]) WRONG. The contribution of this paper is to use principal components to convert color to grayscale. This means you have to combine the colors. The processing you were doing above was on a channel-by-channel basis—no combination of colors. Moreover, it was totally wrong to decompose the 500×500 array: the width/height of the array don’t matter, only pixels. For this reason, I reshape the three channels of the input into 3×whatever and operate on that matrix. Make sure you understand what’s happening after BGR-to-YCrCb conversion and before the SVD.
Not so much an error but a caution: when calling numpy.linalg.svd, the full_matrices=False keyword is important: this makes the “economy-size” SVD, calculating just three left/right singular vectors and just three singular values. The full-sized SVD will attempt to make an N×N array of right-singular vectors: with N = 114270 pixels (293 by 390 image), an N×N array of float64 will be N ** 2 * 8 / 1024 ** 3 or 97 gigabytes.
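To make those shapes concrete (a small illustrative note, reusing Izycc from the code above):

# economy-size SVD of the 3 x N data array
U, S, Vt = svd(Izycc, full_matrices=False)   # U: 3x3, S: (3,), Vt: 3xN
# with full_matrices=True, Vt would instead be N x N, which is enormous for real images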
Final note
The magic of this algorithm is really in a single line from my code:
Igray = np.dot(eigvecs.T, np.dot(eigvals, Izycc)).sum(0) # .reshape...
This is where The Math is thickest, so let’s break it down.
Izycc is a 3×N array whose rows are zero-mean;
eigvals is a 3×3 diagonal array containing the eigenvalues of the covariance matrix dot(Izycc, Izycc.T) (as mentioned above, computed via a shortcut, using SVD of Izycc),
eigvecs is a 3×3 orthonormal matrix whose columns are the eigenvectors corresponding to those eigenvalues of that covariance.
Because these are Numpy arrays and not matrices, we have to use dot(x, y) for matrix-matrix multiplication, and then we use sum, and both of these obscure the linear algebra. You can check for yourself, but the above calculation (before the .reshape() call) is equivalent to
np.ones([1, 3]) · eigvecs.T · eigvals · Izycc = dot([[-0.79463857, -0.18382267, 0.11589724]], Izycc)
where · is true matrix-matrix-multiplication, and the sum is replaced by pre-multiplying by a row-vector of ones. Those three numbers,
-0.79463857 multiplying each pixels’s Y-channel (luma),
-0.18382267 multiplying Cr (red-difference), and
0.11589724 multiplying Cb (blue-difference),
specify the "perfect" weighted average for this particular image: each pixel's Y/Cr/Cb channels are being aligned with the image's covariance matrix and summed. Numerically speaking, each pixel's Y-value is slightly attenuated, its Cr-value is significantly attenuated, and its Cb-value is even more attenuated but with an opposite sign. This makes sense: we expect the luma to be most informative for a grayscale image, so its contribution is the highest.
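You can verify that equivalence numerically (reusing eigvecs, eigvals, Izycc, and Iycc from the code above; the weights variable is just for illustration):

weights = np.ones([1, 3]).dot(eigvecs.T).dot(eigvals)   # a 1x3 row vector of channel weights
Igray_alt = weights.dot(Izycc).reshape(Iycc.shape[:2])
print(np.allclose(Igray_alt,
                  np.dot(eigvecs.T, np.dot(eigvals, Izycc)).sum(0).reshape(Iycc.shape[:2])))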
Minor caveat
I’m not really sure where OpenCV’s RGB to YCrCb conversion comes from. The documentation for cvtColor, specifically the section on RGB ↔︎ YCrCb JPEG doesn’t seem to correspond to any of the transforms specified on Wikipedia. When I use, say, the Colorspace Transformations Matlab package to just do the RGB to YCrCb conversion (which cites the Wikipedia entry), I get a nicer grayscale image which appears to be more similar to the paper’s Figure 4:
I’m totally out of my depth when it comes to these color transformations—if someone can explain how to get Wikipedia or Matlab’s Colorspace Transformations equivalents in Python/OpenCV, that’d be very kind. Nonetheless, this caveat is about preparing the data. After you make Izycc, the 3×N zero-mean data array, the above code fully-specifies the remaining processing.