skimage resize changes total sum of array - python

I want to resize an image in FITS format to smaller dimensions. For example, I would like to resize my 100x100 pixel image to a 58x58 pixel image. The values of the array are intensity or flux values, and I want the total intensity of the image to be conserved after the transformation. This does not work with skimage resize: the total changes depending on the factor by which I scale up or down. I have shown the code I tried so far below.
import numpy as np
from astropy.io import fits
from skimage.transform import resize

image = fits.open(directory + file1)
cutout = image[0].data
out = resize(cutout, (58, 58), order=1, preserve_range=True)
print(np.sum(out), np.sum(cutout))
My output is:
0.074657436655 0.22187 (I want these two values to be equal)
If I scale it to the same dimension using:
out = resize(cutout, (100,100), order=1, preserve_range=True)
print(np.sum(out),np.sum(cutout))
My output is very close to what I want:
0.221869631852 0.22187
I have the same problem if I try to increase the image size as well.
out = resize(cutout, (200,200), order=1, preserve_range=True)
print(np.sum(out),np.sum(cutout))
Output:
0.887316320731 0.22187
I would like to know if there is any workaround to this problem.
EDIT 1:
I just realized that if I multiply my resized image by the square of the factor by which I scaled it, then my total sum is conserved.
For example:
x=58
out = resize(cutout, (x,x), order=1, preserve_range=True)
test=out*(100/x)**2
print(np.sum(test),np.sum(cutout))
My output is very close to what I want but slightly higher:
0.221930548915 0.22187
I tried this with different dimensions and it works, except for really small values. Can anybody explain why this relation holds, or is it just a statistical coincidence?

If you treat an image I as a set of N = Width x Height pixels with intensities in the range [0,1], it is completely normal that after resizing it to M = newWidth x newHeight pixels the sum of intensities differs completely from before.
Assume that an image I with N pixels has intensities uniformly distributed in the range [0,1]. Then the sum of intensities will be approximately 0.5 * N. If you use skimage's resize, the image is resized by interpolating. Interpolating does not accumulate values (as you seem to expect); instead it averages values in a neighbourhood to predict the value of each pixel in the new image. Thus the intensity range of the image does not change (only the individual values are modified), and the sum of intensities of the resized image will be approximately 0.5 * M. If M != N, the two sums will differ a lot.
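For example, a quick check on synthetic data (a uniform random image standing in for your cutout; this sketch is illustrative, not your data):
>>> import numpy as np
>>> from skimage.transform import resize
>>> data = np.random.uniform(0, 1, (100, 100))
>>> out = resize(data, (58, 58), order=1, preserve_range=True)
>>> data.mean(), out.mean()  # nearly equal: interpolation preserves the mean
>>> data.sum(), out.sum()    # but out.sum() is roughly (58/100)**2 * data.sum()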
What you can do to solve this problem is:
Re-scale your new data proportionally to the change in size:
>>> y, x = (57, 58)
>>> out = resize(data, (y,x), order=1, preserve_range=True)
>>> out = out * (data.shape[0] / float(y)) * (data.shape[1] / float(x))
This is analogous to what you propose, but works for any image size (not just square images). However, it compensates every pixel with the same constant factor (out[i,j] *= X, with X identical across the image), while not all pixels are interpolated with the same weight, so it introduces small artifacts.
I think it is best to replace the total sum of the image (which depends on the number of pixels in the image) with the average intensity (which doesn't depend on the number of pixels):
>>> meanI = np.sum(I) / float(I.size) # Exactly the same as np.mean(I) or I.mean()
>>> meanInew = np.sum(out) / float(out.size)
>>> np.isclose(meanI, meanInew) # True
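A further option, if the new size divides the old one evenly, is block averaging with skimage.transform.downscale_local_mean: each output pixel is the mean of a block, so multiplying back by the block area conserves the sum exactly. A sketch for a factor of 2:
>>> from skimage.transform import downscale_local_mean
>>> small = downscale_local_mean(data, (2, 2))  # e.g. 100x100 -> 50x50
>>> small = small * 2 * 2  # undo the per-block averaging
>>> np.isclose(small.sum(), data.sum())
True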

Related

Find pixels with a combo of saturation and brightness over X and return the percentage of those in the total pixel count

I am trying to take an image and evaluate the total number of pixels that have a combination of saturation and brightness over a specific baseline. The following formula works properly:
sqrt(s/255. * v/255.)
A medium red, for example, would have a value of 0.5, whereas a totally saturated bright red would be 1.
Returning the saturation isn't enough for the type of metric I wish to collect.
I want to return a single percentage of those pixels over the baseline, versus the total pixels in the image. I am a bit stuck trying to get this filter to work and to do the total calculation of the result versus the total pixels.
BASELINE = 0.8
pixels = hsv.reshape((hsv.shape[0] * hsv.shape[1], 3))
for h, s, v in pixels:
    d[h] = sqrt(s/255. * v/255.)
    vibrant_px[h] = filter(lambda x: x > BASELINE, d)
How do I total vibrant_px versus the total pixel count, i.e. get the fraction of pixels over BASELINE?
For the most part this works, but getting the total results from the tuple is frustrating me.
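For reference, the metric described above can be computed without a loop; a minimal sketch, assuming hsv is an OpenCV-style HSV image with 8-bit channels (the names here are illustrative, not from the original code):
import numpy as np
BASELINE = 0.8
s = hsv[..., 1].astype(float) / 255.
v = hsv[..., 2].astype(float) / 255.
vibrancy = np.sqrt(s * v)                   # same per-pixel formula as above
pct = (vibrancy > BASELINE).mean() * 100.0  # percent of pixels over the baseline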

Sliding window on an image to calculate variance of pixels in that window

I am trying to build a function that slides a window over an image, calculates the variance of the pixels in the window, and returns a bounding box where the most variance is observed.
I'm new to coding and I've tried solutions from this post, but I don't know how to feed an image into that instead of an array.
I'm on a deadline and have been trying this for a while, so any help is much appreciated. TIA
Edit: Also, if someone could help me with how to call the rolling_window_lastaxis function and modify it for what I'm trying to do, that would mean a lot.
Here is one way to compute the sliding window variance (or standard deviation) using Python/OpenCV/Skimage.
This approach makes use of the following form for computing the variance (see https://en.wikipedia.org/wiki/Variance):
Variance = mean of square of image - square of mean of image
However, since the variance will be outside the 8-bit range, we take the square root to form the standard deviation.
I also use the (local) mean filter from the Skimage rank filter module.
import cv2
import numpy as np
from skimage.morphology import rectangle
import skimage.filters as filters
# Variance = mean of square of image - square of mean of image
# see https://en.wikipedia.org/wiki/Variance
# read the image
# convert to 16-bits grayscale since mean filter below is limited
# to single channel 8 or 16-bits, not float
# and variance will be larger than 8-bit range
img = cv2.imread('lena.png', cv2.IMREAD_GRAYSCALE).astype(np.uint16)
# compute square of image
img_sq = cv2.multiply(img, img)
# compute local mean in 5x5 rectangular region of each image
# note: python will give warning about slower performance when processing 16-bit images
region = rectangle(5,5)
mean_img = filters.rank.mean(img, selem=region)
mean_img_sq = filters.rank.mean(img_sq, selem=region)
# compute square of local mean of img
sq_mean_img = cv2.multiply(mean_img, mean_img)
# compute variance using float versions of images
var = cv2.add(mean_img_sq.astype(np.float32), -sq_mean_img.astype(np.float32))
# compute standard deviation and convert to 8-bit format
std = cv2.sqrt(var).clip(0,255).astype(np.uint8)
# save results
# multiply by 2 to make brighter as an example
cv2.imwrite('lena_std.png',2*std)
# show results
# multiply by 2 to make brighter as an example
cv2.imshow('std', 2*std)
cv2.waitKey(0)
cv2.destroyAllWindows()
Local Standard Deviation Image for 5x5 Sliding Window:
ADDITION
Here is a version that finds the bounding box with the largest average variance (actually standard deviation) for the given box size, and draws it on the variance image.
import cv2
import numpy as np
from skimage.morphology import rectangle
import skimage.filters as filters
# Variance = mean of square of image - square of mean of image
# see https://en.wikipedia.org/wiki/Variance
# set the bounding box size
bbox_size = 25
# read the image
# convert to 16-bits grayscale since mean filter below is limited
# to single channel 8 or 16-bits, not float
# and variance will be larger than 8-bit range
img = cv2.imread('lena.png', cv2.IMREAD_GRAYSCALE).astype(np.uint16)
# compute square of image
img_sq = cv2.multiply(img, img)
# compute local mean in bbox_size x bbox_size rectangular region of each image
# note: python will give warning about slower performance when processing 16-bit images
region = rectangle(bbox_size, bbox_size)
mean_img = filters.rank.mean(img, selem=region)
mean_img_sq = filters.rank.mean(img_sq, selem=region)
# compute square of local mean of img
sq_mean_img = cv2.multiply(mean_img, mean_img)
# compute variance using float versions of images
var = cv2.add(mean_img_sq.astype(np.float32), -sq_mean_img.astype(np.float32))
# compute standard deviation and convert to 8-bit format
std = cv2.sqrt(var).clip(0,255).astype(np.uint8)
# find the bbox_size x bbox_size region with the largest var (or std);
# std_ave holds the windowed standard deviation at each pixel
std_ave = cv2.sqrt(var).astype(np.uint8)
# find the pixel x,y with the largest mean
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(std_ave)
x,y = max_loc
print("x:", x, "y:", y, "max:", max_val)
# draw rectangle for bounding box on copy of std image
result = std.copy()
result = cv2.merge([result, result, result])
cv2.rectangle(result, (x, y), (x+bbox_size, y+bbox_size), (0,0,255), 1)
# save results
# multiply by 2 to make brighter as an example
cv2.imwrite('lena_std.png',std)
cv2.imwrite('lena_std_bbox.png',result)
# show results
# multiply by 2 to make brighter as an example
cv2.imshow('std', std)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
x: 208 y: 67 max: 79.0
Resulting Bounding Box:
An alternative method to compute the windowed/rolling variance over W x H regions is to use just NumPy and SciPy with convolutions, which are fairly fast. An example:
import numpy as np
import scipy.signal
# Create image data
original = np.zeros((811,123))
img = original + np.random.normal(0, 1, original.shape)
# Create averaging kernel
H, W = 5, 5
mean_op = np.ones((H,W))/(H*W)
# Carry out convolution to compute mean of square, and square of mean
mean_of_sq = scipy.signal.convolve2d(img**2, mean_op, mode='same', boundary='symm')
sq_of_mean = scipy.signal.convolve2d(img, mean_op, mode='same', boundary='symm')**2
win_var = mean_of_sq - sq_of_mean
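To get a bounding box out of this, similar to the answer above, one could locate the window with the largest variance; a sketch, reusing H, W and win_var from the code above:
# position of the window (center) with the largest variance
y, x = np.unravel_index(np.argmax(win_var), win_var.shape)
# top-left corner of the corresponding H x W box, clipped at the image border
top, left = max(y - H // 2, 0), max(x - W // 2, 0)
print("bounding box:", (left, top, left + W, top + H))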

Rescaling image formula with arrays

I'm doing an exercise about matplotlib in which they show me the procedure to improve image intensity.
After converting the image to an array and detecting the minimum and maximum values (RGB - 0 to 255):
# Extract minimum and maximum values from the image: pmin, pmax
pmin, pmax = image.min(), image.max()
print("The smallest & largest pixel intensities are %d & %d." % (pmin, pmax))
They propose the following:
# Rescale the pixels: rescaled_image
rescaled_image = 256*(image - pmin) / (pmax - pmin)
print("The rescaled smallest & largest pixel intensities are %.1f & %.1f." %
(rescaled_image.min(), rescaled_image.max()))
What is the logic behind this formula?
256 * (image - pmin) / (pmax - pmin)
Thanks :)
The original 2D array is an image with a narrow histogram. The rescaled 2D array is what you get after applying the formula 256 * (image - pmin) / (pmax - pmin): the same image with a wider (stretched) histogram. Note that the formula is applied elementwise, i.e. "image" in the formula refers to each pixel.
The idea behind this formula is to increase the dynamic range of the image's colors.
While the full dynamic range of 8-bit pixels is 0-255, the extreme values of that range don't necessarily occur in any given image. The proposed formula guarantees that at least one pixel is mapped to the bottom of the range (0) and one to the top. (Strictly, with the factor 256 the maximum lands at 256; using 255 instead keeps the result exactly within [0, 255].)
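A tiny numeric example of the formula (made-up values):
import numpy as np
image = np.array([[50, 100], [150, 200]], dtype=float)
pmin, pmax = image.min(), image.max()            # 50 and 200
rescaled = 256 * (image - pmin) / (pmax - pmin)  # stretch [50, 200] onto [0, 256]
print(rescaled)  # [[0. 85.33] [170.67 256.]] (values rounded)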

How to iterate through all pixels in a image and compare their RGB values with another RGB value without using for loop?

So, basically I have an array with 16 RGB color values, and I have to calculate the distance between the RGB value of a pixel in the input image and each of these 16. The RGB value with the lowest distance becomes the RGB value in the output image.
The problem is that I'm using nested for loops for these operations, and it's REALLY slow. Excerpt as follows:
for i in range(row):
    for j in range(columns):
        pixel = img[i, j]
        for color in colorsarray:
            dist.append(np.linalg.norm(pixel - color))
        img[i, j] = colorsarray[dist.index(min(dist))]
        dist.clear()
Is there a numpy function that can help me optimize this?
You can calculate the distances by broadcasting the arrays.
If your image has shape (x,y,3) and your palette has shape (n,3), then you can calculate the distance between each pixel and each color as an array with shape (x,y,n):
# distance[x,y,n] is the distance from pixel (x,y) to
# color n
distance = np.linalg.norm(
    img[:, :, None] - colors[None, None, :], axis=3)
The index : means "the entire axis" and the index None means "broadcast the value along this axis".
You can then choose the closest color index:
# pal_img[x,y] is the index of the color closest to
# pixel (x,y)
pal_img = np.argmin(distance, axis=2)
Finally, you can convert back to RGB:
# rgb_img[x,y] is the RGB color closest to pixel (x,y)
rgb_img = colors[pal_img]
This shows that you don't really need any special NumPy functions, just indexing and broadcasting; admittedly, the broadcasting can be a bit hard to read at first.
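Put together as a self-contained sketch (with a random image and palette as stand-ins for your data):
import numpy as np
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64, 3)).astype(float)  # stand-in input image
colors = rng.integers(0, 256, (16, 3)).astype(float)   # stand-in 16-color palette
distance = np.linalg.norm(img[:, :, None] - colors[None, None, :], axis=3)
pal_img = np.argmin(distance, axis=2)  # (64, 64) indices of the nearest palette color
rgb_img = colors[pal_img]              # (64, 64, 3) image quantized to the palette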
Untested, but you could try to vectorize your function:
# reshape to a 1D array of pixels
dimx = image.shape[0]
image = image.reshape(-1, 3)

def f(pixel):
    # TODO here: logic to return, given the pixel, the closest match in the list
    ...

# vectorize the function and apply it to the image
image = np.vectorize(f)(image)

# set the shape back to original
image = image.reshape(dimx, -1, 3)
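One caveat: np.vectorize applies f to scalar elements by default, so to pass whole (R, G, B) pixels to f you would need a generalized signature, and it remains a Python-level loop rather than a true vectorization:
f_vec = np.vectorize(f, signature='(c)->(c)')  # f maps one 3-vector pixel to one pixel
image = f_vec(image)                           # image has shape (n_pixels, 3) here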

Sum of colorvalues of an image

I am looking for a way to sum the color values of all pixels of an image. I require this to estimate the total flux of a bright source (say a distant galaxy) from its surface brightness image.
Could anyone please tell me how to sum the colour values of all pixels of an image?
For example:
Each pixel of the following image has a colour value between 0 and 1.
But when I read the image with imread, the colour value I get for each pixel is an array of 3 elements. I am very new to matplotlib and I do not know how to convert that array to a single value on the 0-to-1 scale and add them up.
If you have a PIL image, then you can convert to greyscale ("luminosity") like this:
from PIL import Image
col = Image.open('sample.jpg')
gry = col.convert('L') # returns grayscale version.
If you want to have more control over how the colors are added, convert to a numpy array first:
import numpy as np

arr = np.asarray(col)
tot = arr.sum(-1)  # sum over the color (last) axis
mn = arr.mean(-1)  # or a mean, to keep the same normalization (0-1)
Or you can weight the colors differently:
wts = [.25, .25, .5] # in order: R, G, B
tot = (arr*wts).sum(-1) # now blue has twice the weight of red and green
For large arrays, this is equivalent to the last line and faster, but possibly harder to read:
tot = np.einsum('ijk, k -> ij', arr, wts)
All of the above adds up the colors within each pixel, turning a color image into a grayscale (luminosity) image. The following adds up all the pixels to give the integral over the entire image:
tot = arr.sum(0).sum(0)  # sum over rows first, then over columns
If you have a color image, tot will still have three values. If your image is grayscale, it will be a single value. If you want the mean value, just replace sum with mean:
mn = arr.mean(0).mean(0)
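For the flux estimate in the original question, the pieces combine like this (a sketch, assuming sample.jpg as above):
import numpy as np
from PIL import Image
col = Image.open('sample.jpg')
arr = np.asarray(col, dtype=float) / 255.  # scale 8-bit channels to [0, 1]
lum = arr.mean(-1)                         # one luminosity value per pixel, in [0, 1]
total_flux = lum.sum()                     # integrated over the whole image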
