Rescaling image formula with arrays - python

I'm doing a matplotlib exercise in which they show me a procedure to improve image intensity.
After converting the image to an array and extracting the minimum and maximum pixel values (RGB, 0 to 255):
# Extract minimum and maximum values from the image: pmin, pmax
pmin, pmax = image.min(), image.max()
print("The smallest & largest pixel intensities are %d & %d." % (pmin, pmax))
They propose the following:
# Rescale the pixels: rescaled_image
rescaled_image = 256*(image - pmin) / (pmax - pmin)
print("The rescaled smallest & largest pixel intensities are %.1f & %.1f." %
(rescaled_image.min(), rescaled_image.max()))
What is the logic behind this formula?
256 * (image - pmin) / (pmax - pmin)
Thanks :)

The original 2D array is an image with a narrow histogram: its pixel values occupy only part of the 0-255 range. The rescaled 2D array is the final image after you apply the formula 256 * (image - pmin) / (pmax - pmin): the same image, but with a wider histogram. Note that image in the formula is the whole array, so the operation is applied to every pixel at once.

The idea behind this formula is to stretch the dynamic range of the image's colors.
While the possible dynamic range of the pixels is 0-255, pixels at the extreme ends of this range don't necessarily exist in any given image. The proposed formula guarantees that the rescaled image uses the full range: pmin maps to 0 and pmax maps to the scale factor. Strictly speaking, the factor should be 255 rather than 256, since 256 * (pmax - pmin) / (pmax - pmin) = 256, one step outside the valid range; with 255 the brightest pixel lands exactly on 255.
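A minimal sketch of the stretch on a toy array (the values are made up for illustration; 255 is used as the factor so the result stays in range):

import numpy as np

# Toy "image" whose values only span 50..180 (a narrow histogram)
image = np.array([[50, 100], [160, 180]], dtype=np.float64)

pmin, pmax = image.min(), image.max()
rescaled = 255 * (image - pmin) / (pmax - pmin)

print(rescaled.min(), rescaled.max())  # 0.0 255.0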

Related

Find pixels with a combo of saturation and brightness over X and return the percentage of those in the total pixel count

I am trying to take an image and evaluate the total number of pixels that have a combination of saturation and brightness over a specific baseline. This formula works properly:
sqrt(s/255. * v/255.)
A medium red, for example, comes out around 0.5, while a fully saturated, bright red comes out as 1.
Returning the saturation alone isn't enough for the kind of metric I want to collect.
I want to return a single percentage: the pixels over the baseline versus the total pixels in the image. I am a bit stuck getting this filter to work and computing that total.
BASELINE = 0.8
pixels = hsv.reshape((hsv.shape[0] * hsv.shape[1], 3))
for h, s, v in pixels:
    d[h] = sqrt(s/255. * v/255.)
    vibrant_px[h] = filter(lambda x: x > BASELINE, d)
How do I total vibrant_px versus the total pixels that are over BASELINE?
For the most part this works, but getting the total results out of the tuple is frustrating me.
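For reference, here is one vectorized way to get that percentage with NumPy. This is only a sketch: the random hsv array stands in for the real cv2.cvtColor(img, cv2.COLOR_BGR2HSV) output so the snippet runs on its own.

import numpy as np

BASELINE = 0.8

# Stand-in for the real HSV image (H x W x 3, uint8)
hsv = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

s = hsv[:, :, 1].astype(np.float64)
v = hsv[:, :, 2].astype(np.float64)

vibrancy = np.sqrt((s / 255.0) * (v / 255.0))    # the per-pixel metric
vibrant = np.count_nonzero(vibrancy > BASELINE)  # pixels over the baseline
percentage = 100.0 * vibrant / vibrancy.size     # vs. total pixel count

print("%.2f%% of pixels are over the baseline" % percentage)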

How to iterate through all pixels in a image and compare their RGB values with another RGB value without using for loop?

So, basically I have an array with 16 RGB color values, and I have to calculate the distance between the RGB value of each pixel in the input image and all 16 of them. The RGB value with the lowest distance becomes the RGB value in the output image.
The problem is: I'm using nested for loops to do these operations, and it's REALLY slow. Excerpt as follows:
for i in range(row):
    for j in range(columns):
        pixel = img[i, j]
        for color in colorsarray:
            dist.append(np.linalg.norm(pixel - color))
        img[i, j] = colorsarray[dist.index(min(dist))]
        dist.clear()
Is there a numpy function that can help me optimize this?
You can calculate the distances by broadcasting the arrays.
If your image has shape (x,y,3) and your palette has shape (n,3), then you can calculate the distance between each pixel and each color as an array with shape (x,y,n):
# distance[x,y,n] is the distance from pixel (x,y) to
# color n
distance = np.linalg.norm(
    img[:,:,None] - colors[None,None,:], axis=3)
The index : means "the entire axis" and the index None means "broadcast the value along this axis".
You can then choose the closest color index:
# pal_img[x,y] is the index of the color closest to
# pixel (x,y)
pal_img = np.argmin(distance, axis=2)
Finally, you can convert back to RGB:
# rgb_img[x,y] is the RGB color closest to pixel (x,y)
rgb_img = colors[pal_img]
This shows that you don't really need any special functions in NumPy, just broadcasting. Admittedly, the indexing can be a bit hard to read at first.
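Putting the pieces together as a runnable sketch (the toy image and three-color palette are made up for illustration):

import numpy as np

img = np.random.randint(0, 256, size=(4, 5, 3)).astype(np.float64)
colors = np.array([[0, 0, 0], [128, 128, 128], [255, 255, 255]], dtype=np.float64)

# distance[x, y, n] is the distance from pixel (x, y) to palette color n
distance = np.linalg.norm(img[:, :, None] - colors[None, None, :], axis=3)

pal_img = np.argmin(distance, axis=2)  # index of the closest color per pixel
rgb_img = colors[pal_img]              # back to RGB, shape (4, 5, 3)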
Untested, but you could try to vectorize your function:
# reshape to a 1D array of pixels
dimx = image.shape[0]
image = image.reshape(-1, 3)

def f(pixel):
    # TODO here: logic to return, given the pixel, the closest match in the list

# vectorize the function and apply it to the image; the signature makes
# np.vectorize pass whole (R, G, B) pixels to f instead of single scalars
image = np.vectorize(f, signature='(n)->(n)')(image)

# set the shape back to original
image = image.reshape(dimx, -1, 3)
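Keep in mind that np.vectorize is essentially a Python-level loop under the hood, so it won't be nearly as fast as the broadcasting approach above. For completeness, a sketch with the TODO filled in by a simple closest-color lookup (the palette is illustrative):

import numpy as np

colors = np.array([[0, 0, 0], [255, 0, 0], [255, 255, 255]])  # toy palette
image = np.random.randint(0, 256, size=(4, 5, 3))

def f(pixel):
    # return the palette color closest to this pixel
    return colors[np.argmin(np.linalg.norm(colors - pixel, axis=1))]

dimx = image.shape[0]
flat = image.reshape(-1, 3)
image = np.vectorize(f, signature='(n)->(n)')(flat).reshape(dimx, -1, 3)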

Calculate percentage of number of pixels of object detected to the total number of pixels in a picture

I want to calculate the percentage of pixels belonging to detected objects relative to the total number of pixels in a picture, in Python. Many objects are detected multiple times, so simply totalling the pixel counts of the detections gives the wrong answer.
Test image
In OpenCV, get the shape of the image with h, w = img.shape[:2] (height comes first). The area of the image in pixels is then h * w. The same goes for the detections: an object detection algorithm outputs bounding box coordinates, from which you can calculate the area (pixel count) of each box and sum them all up. Then divide that sum by the number of pixels in the image and multiply the result by 100.
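A minimal sketch of that calculation, assuming the boxes come back as (x1, y1, x2, y2) tuples (the box format and the sample values are assumptions). Painting the boxes into a boolean mask also fixes the double-counting problem, since overlapping detections then cover each pixel only once:

import numpy as np

h, w = 480, 640                                     # img.shape[:2] in practice
boxes = [(10, 20, 110, 120), (300, 200, 400, 330)]  # made-up detections

mask = np.zeros((h, w), dtype=bool)
for x1, y1, x2, y2 in boxes:
    mask[y1:y2, x1:x2] = True  # overlaps are only counted once

percentage = 100.0 * mask.sum() / (h * w)
print("%.2f%% of the image is covered by detections" % percentage)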

PIL: What is happening in Image.blend() when I use an alpha greater than 1.0?

From the docs:
Creates a new image by interpolating between the given images, using a constant alpha. Both images must have the same size and mode. out = image1 * (1.0 - alpha) + image2 * alpha
If the alpha is 0.0, a copy of the first image is returned. If the alpha is 1.0, a copy of the second image is returned. There are no restrictions on the alpha value. If necessary, the result is clipped to fit into the allowed output range.
So there is no restriction on alpha, but what actually happens when you use values greater than 1.0?
Image 1:
Image 2:
The image blended with an alpha of 100.0:
The results are fully explained by the formula you quoted. If alpha is 100, you get image1 * -99 + image2 * 100, and the result for each pixel is then clamped to the valid range. Let's look at what that means for an example point.
A pixel I sampled from the joker's forehead has RGB value (255, 194, 106). An approximately corresponding pixel from the other image (part of the seascape's blue sky) has RGB value (1, 109, 217). Combining them according to the equation above, gives (25401, 8609, -10883), which is obviously way out of bounds for all three bands. Clamping each color to be between 0 and 255 gives (255, 255, 0), which is the pure yellow you see in the output image for that area.
Almost all of the pixels in the output image will have values of either 0 or 255 in each color band, resulting in very saturated colors. Only for the very few pixels (maybe none in these images, I haven't checked) where the difference between the two images is very small will there be any intermediate values.
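You can reproduce the per-pixel arithmetic directly; the sketch below uses the two sampled RGB values from above, with a clamp mirroring what PIL does internally:

def blend_pixel(p1, p2, alpha):
    # out = p1 * (1 - alpha) + p2 * alpha, then clamped to 0..255
    raw = tuple(a * (1.0 - alpha) + b * alpha for a, b in zip(p1, p2))
    return raw, tuple(min(255, max(0, round(v))) for v in raw)

sky = (1, 109, 217)      # image1: the seascape's blue sky
joker = (255, 194, 106)  # image2: the joker's forehead
print(blend_pixel(sky, joker, 100.0))
# ((25401.0, 8609.0, -10883.0), (255, 255, 0)) -> pure yellow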

skimage resize changes total sum of array

I want to resize an image in FITS format to a smaller dimension. For example, I would like to resize my 100x100 pixel image to a 58x58 pixel image. The values of the array are intensity or flux values, and I want the total intensity of the image to be conserved after the transformation. This does not work with skimage's resize: the total changes depending on the factor by which I scale up or down. I have shown below the code I tried so far.
import numpy as np
from astropy.io import fits  # assuming astropy's FITS reader, given fits.open below
from skimage.transform import resize

image = fits.open(directory + file1)
cutout = image[0].data
out = resize(cutout, (58,58), order=1, preserve_range=True)
print(np.sum(out),np.sum(cutout))
My output is:
0.074657436655 0.22187 (I want these two values to be equal)
If I scale it to the same dimension using:
out = resize(cutout, (100,100), order=1, preserve_range=True)
print(np.sum(out),np.sum(cutout))
My output is very close to what I want:
0.221869631852 0.22187
I have the same problem if I try to increase the image size as well.
out = resize(cutout, (200,200), order=1, preserve_range=True)
print(np.sum(out),np.sum(cutout))
Output:
0.887316320731 0.22187
I would like to know if there is any workaround to this problem.
EDIT 1:
I just realized that if I multiply my resized image by the square of the factor by which I changed its size, then the total sum is conserved.
For example:
x=58
out = resize(cutout, (x,x), order=1, preserve_range=True)
test=out*(100/x)**2
print(np.sum(test),np.sum(cutout))
My output is very close to what I want but slightly higher:
0.221930548915 0.22187
I tried this with different dimensions and it works, except for really small values. Can anybody explain why this relation holds, or is it just a coincidence?
If you treat an image I with N = width x height pixels as a set of pixels with intensities in the range [0, 1], it is completely normal that after resizing to M = newWidth x newHeight pixels the sum of intensities differs from before.

Assume that an image I with N pixels has intensities uniformly distributed in the range [0, 1]. Then the sum of intensities will be approximately 0.5 * N. If you use skimage's resize, the new image is produced by interpolating: interpolation does not accumulate values (as you seem to expect); it instead averages values in a neighbourhood to predict the value of each pixel in the new image. The intensity range of the image therefore does not change, only the individual values, and so the sum of intensities of the resized image will be approximately 0.5 * M. If M != N, the two sums will differ a lot.
What you can do to solve this problem is:
Re-scale your new data proportional to its size:
>>> y, x = (57, 58)
>>> out = resize(data, (y,x), order=1, preserve_range=True)
>>> out = out * (data.shape[0] / float(y)) * (data.shape[1] / float(x))
This is analogous to what you propose, but for images of any shape (not just square ones). It does, however, compensate every pixel with the same constant factor out[i,j] *= X, where X is equal for every pixel in the image, even though not all pixels are interpolated with the same weight; this adds small artifacts.
I think it is best to just replace the total sum of the image (which depends on the number of pixels in the image) with the average intensity (which doesn't depend on it):
>>> meanI = np.sum(I) / float(I.size) # Exactly the same as np.mean(I) or I.mean()
>>> meanInew = np.sum(out) / float(out.size)
>>> np.isclose(meanI, meanInew) # True
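A self-contained check of both points on a synthetic array (the sizes and values are made up; no FITS file is needed):

import numpy as np
from skimage.transform import resize

rng = np.random.default_rng(0)
cutout = rng.random((100, 100)) * 0.01          # small flux-like values

out = resize(cutout, (58, 58), order=1, preserve_range=True)

# Compensating by the area ratio approximately restores the total flux...
corrected = out * (cutout.shape[0] / 58.0) * (cutout.shape[1] / 58.0)
print(np.sum(corrected), np.sum(cutout))        # close, but not identical

# ...while the mean intensity is preserved up to interpolation error
print(np.isclose(cutout.mean(), out.mean(), rtol=1e-2))  # True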
