Sum of color values of an image - Python

I am looking for a way to sum the color values of all pixels of an image. I require this to estimate the total flux of a bright source (say a distant galaxy) from its surface brightness image.
Could anyone please show me how to sum the colour values of all pixels of an image?
For example:
Each pixel of the following image has a colour value between 0 and 1.
But when I read the image with imread, the colour value I get for each pixel is an array of 3 elements. I am very new to matplotlib and I do not know how to convert that array to a single value on the 0 to 1 scale and add them all up.

If you have a PIL image, then you can convert to greyscale ("luminosity") like this:
from PIL import Image
col = Image.open('sample.jpg')
gry = col.convert('L') # returns grayscale version.
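If all you need is the integrated brightness, you can sum the greyscale pixels directly in PIL without Numpy; a small sketch using the gry image from above:
tot = sum(gry.getdata()) # pixel values are 0-255 for mode 'L'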
If you want to have more control over how the colors are added, convert to a numpy array first:
import numpy as np
arr = np.asarray(col)
tot = arr.sum(-1) # sum over color (last) axis
mn = arr.mean(-1) # or a mean, to keep the same normalization (0-1)
Or you can weight the colors differently:
wts = [.25, .25, .5] # in order: R, G, B
tot = (arr*wts).sum(-1) # now blue has twice the weight of red and green
For large arrays, this is equivalent to the last line and faster, but possibly harder to read:
tot = np.einsum('ijk, k -> ij', arr, wts)
All of the above adds up the colors of each pixel, to turn a color image into a grayscale (luminosity) image. The following will add up all the pixels together to see the integral of the entire image:
tot = arr.sum(0).sum(0) # first sums all the rows, second sums all the columns
If you have a color image, tot will still have three values. If your image is grayscale, it will be a single value. If you want the mean value, just replace sum with mean:
mn = arr.mean(0).mean(0)
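Putting the pieces together for the flux estimate, here is a minimal end-to-end sketch (the filename galaxy.jpg is just a placeholder for your image):
import numpy as np
from PIL import Image
# Load the image and rescale 8-bit values to the 0-1 range
arr = np.asarray(Image.open('galaxy.jpg').convert('RGB'), dtype=float) / 255.0
lum = arr.mean(-1) # average R,G,B to get one value per pixel
flux = lum.sum() # integrate over all pixels
print(flux)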

Related

Using python to fill an image with one color in steps from 0% to 100%

I have a black image that I need to fill with a new color.
I want to generate new images starting from 1% to 100% (generating an image for every 1% filled).
Examples for 4 fill-ratios
Heart image filled with 1%, 5%, 10% and 15%
Research I did
I did a lot of research on the internet and the closest I came was this link:
Fill an image with color but keep the alpha (Color overlay in PIL)
However, as I don't have much experience with Python for image editing, I couldn't move forward or modify the code as needed.
Edit:
I was trying this code from the link:
from PIL import Image
import numpy as np
# Open image
im = Image.open('2746646.png')
# Make into Numpy array
n = np.array(im)
# Set first three channels to red
n[..., 0:3] = [ 255, 0, 0 ]
# Convert back to PIL Image and save
Image.fromarray(n).save('result.png')
But it only generates a single image (as if it were 100% filled; I need 100 images, one for each percentage).
Updated Answer
Now that you have shared your actual starting image, it seems you don't really want to replace black pixels, but rather opaque pixels. If you split your image into its constituent RGBA channels and lay them out left-to-right as R, G, B then A, you can see you want to fill where the alpha (rightmost) channel is white, rather than where the RGB channels are black.
That changes the code to this:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Load image, ensure not palettised, and make into Numpy array
im = Image.open('muscle.png').convert('RGBA')
# Make Numpy array
RGBA = np.array(im)
# Get RGB part
RGB = RGBA[..., :3]
# Get alpha channel as Numpy array
alpha = RGBA[..., 3]
# Find X,Y coordinates of all opaque pixels in image
blkY, blkX = np.where(alpha==255)
# Just take one entry per row, even if multiple opaque pixels in it
uniqueRows = np.unique(blkY)
# How many rows are there with opaque pixels in?
nUniqueRows = len(uniqueRows)
for percent in range(2,101):
    # Work out filename based on percentage
    filename = f'result-{percent:03d}.png'
    # How many rows do we need to fill?
    nRows = int(nUniqueRows * percent/100.0)
    # Which rows are they? Negative index because filling bottom-up.
    rows = uniqueRows[-nRows:]
    print(f'DEBUG: filename: {filename}, percent: {percent}, nRows: {nRows}, rows: {rows}')
    # What are the indices onto blkY, blkX ?
    indices = np.argwhere(np.isin(blkY, rows))
    # Make those pixels green
    RGB[blkY[indices.ravel()], blkX[indices.ravel()], :3] = [0,255,0]
    Image.fromarray(RGBA).save(filename)
Original Answer
That was fun! This seems to work - though it's not that efficient. It is not a true "floodfill", see note at end.
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Load image, ensure not palettised, and make into Numpy array
im = Image.open('heart.png').convert('RGB')
# Make Numpy array
na = np.array(im)
# Get greyscale version of image as Numpy array
grey = np.array(im.convert('L'))
# Find X,Y coordinates of all black pixels in image
blkY, blkX = np.where(grey==0)
# Just take one entry per row, even if multiple black pixels in it
uniqueRows = np.unique(blkY)
# How many rows are there with black pixels in?
nUniqueRows = len(uniqueRows)
for percent in range(1,101):
    # Work out filename based on percentage
    filename = f'result-{percent:03d}.png'
    # How many rows do we need to fill?
    nRows = int(nUniqueRows * percent/100.0)
    # Which rows are they? Negative index because filling bottom-up.
    rows = uniqueRows[-nRows:]
    # print(f'DEBUG: filename: {filename}, percent: {percent}, nRows: {nRows}, rows: {rows}')
    # What are the indices onto blkY, blkX ?
    indices = np.argwhere(np.isin(blkY, rows))
    # Make those pixels green
    na[blkY[indices.ravel()], blkX[indices.ravel()], :] = [0,255,0]
    Image.fromarray(na).save(filename)
Note that this isn't actually a true "flood fill" - it is more naïve than that - because it doesn't seem necessary for your image. If you add another shape, it will fill that too.

How to change the colour of pixels in an image depending on their initial luminosity?

The aim is to take a coloured image, and change any pixels within a certain luminosity range to black. For example, if luminosity is the average of a pixel's RGB values, any pixel with a value under 50 is changed to black.
I’ve attempted to begin by using PIL to convert to grayscale, but I'm having trouble finding a solution that can identify a pixel's luminosity value and use that information to manipulate a pixel map.
There are many ways to do this, but the simplest and probably fastest is with Numpy, which you should get accustomed to using for image processing in Python:
from PIL import Image
import numpy as np
# Load image and ensure RGB, not palette image
im = Image.open('start.png').convert('RGB')
# Make into Numpy array
na = np.array(im)
# Make all pixels of "na" where the mean of the R,G,B channels is less than 50 into black (0)
na[np.mean(na, axis=-1)<50] = 0
# Convert back to PIL Image to save or display
result = Image.fromarray(na)
result.show()
That turns the input image into one where every pixel whose channel mean is below 50 is black.
Another slightly different way would be to convert the image to a more conventional greyscale, rather than averaging for the luminosity:
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Calculate greyscale version
grey = im.convert('L')
# Point process over pixels to make mask of darker ones
mask = grey.point(lambda p: 255 if p<50 else 0)
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=mask)
Notice that the blue channel is given considerably less significance in the ITU-R 601-2 luma transform that PIL uses (see the lower 114 weighting for Blue versus 299 for Red and 587 for Green) in the formula:
L = R * 299/1000 + G * 587/1000 + B * 114/1000
so the blue shades are considered darker and become black.
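For reference, here is a small Numpy sketch of that same luma calculation (assuming the same start.png input), which should match convert('L') up to rounding:
import numpy as np
from PIL import Image
na = np.array(Image.open('start.png').convert('RGB')).astype(float)
# ITU-R 601-2 luma weights for R, G and B
L = na[..., 0] * 299/1000 + na[..., 1] * 587/1000 + na[..., 2] * 114/1000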
Another way would be to make a greyscale and a mask as above, but then choose the darker pixel at each location when comparing the original and the mask:
from PIL import Image, ImageChops
im = Image.open('start.png').convert('RGB')
grey = im.convert('L')
mask = grey.point(lambda p: 0 if p<50 else 255)
res = ImageChops.darker(im, mask.convert('RGB'))
That gives the same result as above.
Another way, pure PIL and probably closest to what you actually asked, would be to derive a luminosity value by averaging the channels:
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Calculate greyscale version by averaging R,G and B
grey = im.convert('L', matrix=(0.333, 0.333, 0.333, 0))
# Point process over pixels to make mask of darker ones
mask = grey.point(lambda p: 255 if p<50 else 0)
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=mask)
Another approach could be to split the image into its constituent RGB channels, evaluate a mathematical function over the channels and mask with the result:
from PIL import Image, ImageMath
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Split into RGB channels
(R, G, B) = im.split()
# Evaluate mathematical function over channels
dark = ImageMath.eval('(((R+G+B)/3) <= 50) * 255', R=R, G=G, B=B)
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=dark)
I created a function that returns a list with True where the pixel's luminosity is below a threshold parameter, and False where it isn't. It includes an RGB or RGBA option (False or True):
def get_avg_lum(pic, avg=50, RGBA=False):
    # The RGBA flag indicates whether pixels carry an alpha channel; the
    # luminosity is always the average of the R, G and B channels
    li = [[False for y in range(pic.size[1])] for x in range(pic.size[0])]
    for x in range(pic.size[0]):
        for y in range(pic.size[1]):
            rgb = pic.getpixel((x, y))[:3]  # drops the alpha value when RGBA=True
            li[x][y] = sum(rgb) / 3 < avg
    return li
a = get_avg_lum(im)
The pixels match in the list, so (0,10) on the image is [0][10] in the list.
Hopefully this helps. The function works on standard PIL Image objects.

How to iterate through all pixels in an image and compare their RGB values with another RGB value without using a for loop?

So, basically I have an array with 16 RGB color values, and I have to calculate the distance between the RGB value of a pixel in the input image and all of these 16. The RGB value with the lowest distance will be the RGB value in the output image.
The problem is: I'm using nested for loops to do these operations, and it's REALLY slow. Excerpt as follows:
for i in range(row):
    for j in range(columns):
        pixel = img[i, j]
        for color in colorsarray:
            dist.append(np.linalg.norm(pixel - color))
        img[i, j] = colorsarray[dist.index(min(dist))]
        dist.clear()
Is there a numpy function that can help me optimize this?
You can calculate the distances by broadcasting the arrays.
If your image has shape (x,y,3) and your palette has shape (n,3), then you can calculate the distance between each pixel and each color as an array with shape (x,y,n):
# distance[x,y,n] is the distance from pixel (x,y) to
# color n
distance = np.linalg.norm(
    img[:,:,None] - colors[None,None,:], axis=3)
The index : means "the entire axis" and the index None means "broadcast the value along this axis".
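To see the shapes involved, here is a quick sketch with dummy arrays:
import numpy as np
img = np.zeros((4, 5, 3)) # a tiny 4x5 image
colors = np.zeros((16, 3)) # a 16-colour palette
print(img[:,:,None].shape) # (4, 5, 1, 3)
print(colors[None,None,:].shape) # (1, 1, 16, 3)
# broadcasting the subtraction gives shape (4, 5, 16, 3)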
You can then choose the closest color index:
# pal_img[x,y] is the index of the color closest to
# pixel (x,y)
pal_img = np.argmin(distance, axis=2)
Finally, you can convert back to RGB:
# rgb_img[x,y] is the RGB color closest to pixel (x,y)
rgb_img = colors[pal_img]
This shows how you don't really need special functions in NumPy. Unfortunately, this can be a bit hard to understand.
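Putting the three steps together, here is a self-contained sketch (the palette and image are just random placeholders):
import numpy as np
rng = np.random.default_rng(0)
colors = rng.integers(0, 256, size=(16, 3)) # hypothetical 16-colour palette
img = rng.integers(0, 256, size=(40, 60, 3)) # hypothetical test image
# distance[x,y,n] is the distance from pixel (x,y) to colour n
distance = np.linalg.norm(img[:,:,None] - colors[None,None,:], axis=3)
pal_img = np.argmin(distance, axis=2) # index of closest colour per pixel
rgb_img = colors[pal_img] # quantised RGB image, shape (40, 60, 3)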
Untested, but you could try to vectorize your function:
# reshape to a 1D array of pixels, shape (N, 3)
dimx = image.shape[0]
image = image.reshape(-1, 3)
def f(pixel):
    # TODO here: logic to return, given the pixel, the closest match in the list
    ...
# vectorize the function so it maps over whole (3,) pixels, and apply it to the image
image = np.vectorize(f, signature='(c)->(c)')(image)
# set the shape back to original
image = image.reshape(dimx, -1, 3)

Python 3: I am trying to find all green pixels in an image by traversing all pixels using an np.array, but can't get around an index error

My code currently consists of loading the image, which is successful and I don't believe has any connection to the problem.
Then I go on to transform the color image into an np.array named rgb:
# convert image into array
rgb = np.array(img)
red = rgb[:,:,0]
green = rgb[:,:,1]
blue = rgb[:,:,2]
To double-check my understanding of this array, in case that may be the root of the issue: it is an array such that rgb[x-coordinate, y-coordinate, color band], which holds a value between 0-255 for either red, green or blue.
Then, my idea was to make a nested for loop to traverse all pixels of my image (620px,400px) and sort them based on the ratio of green to blue and red in an attempt to single out the greener pixels and set all others to black or 0.
for i in range(xsize):
    for j in range(ysize):
        color = rgb[i,j] <-- Index error occurs here
        if(color[0] > 128):
            if(color[1] < 128):
                if(color[2] > 128):
                    rgb[i,j] = [0,0,0]
The error I am receiving when trying to run this is as follows:
IndexError: index 400 is out of bounds for axis 0 with size 400
I thought it may have something to do with the bounds I was giving i and j, so I tried only sorting through a small inner portion of the image, but still got the same error. At this point I am lost as to what is even the root of the error, let alone the solution.
In direct answer to your question, the y axis is given first in numpy arrays, followed by the x axis, so interchange your indices.
Less directly, you will find that for loops are very slow in Python and you are generally better off using numpy vectorised operations instead. Also, you will often find it easier to find shades of green in HSV colourspace.
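As a quick illustration of the vectorised approach, here is a sketch equivalent to the loop in the question (there is no explicit indexing, so the axis order no longer matters):
# Mask pixels where R > 128, G < 128 and B > 128, then set them to black
mask = (rgb[:,:,0] > 128) & (rgb[:,:,1] < 128) & (rgb[:,:,2] > 128)
rgb[mask] = [0, 0, 0]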
Let's start with an HSL colour wheel, and assume you want to make all the greens into black. So, from that Wikipedia page, the Hue corresponding to Green is 120 degrees, which means you could do this:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image
# Open image and make RGB and HSV versions
RGBim = Image.open("image.png").convert('RGB')
HSVim = RGBim.convert('HSV')
# Make numpy versions
RGBna = np.array(RGBim)
HSVna = np.array(HSVim)
# Extract Hue
H = HSVna[:,:,0]
# Find all green pixels, i.e. where 100 < Hue < 140
lo,hi = 100,140
# Rescale to 0-255, rather than 0-360 because we are using uint8
lo = int((lo * 255) / 360)
hi = int((hi * 255) / 360)
green = np.where((H>lo) & (H<hi))
# Make all green pixels black in original image
RGBna[green] = [0,0,0]
count = green[0].size
print("Pixels matched: {}".format(count))
Image.fromarray(RGBna).save('result.png')
Which gives an image with all the green pixels turned black.
Here is a slightly improved version that retains the alpha/transparency, and matches red pixels for extra fun:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image
# Open image
im = Image.open("image.png")
# Save Alpha if present, then remove it by converting to RGB
savedAlpha = None
if 'A' in im.getbands():
    savedAlpha = im.getchannel('A')
im = im.convert('RGB')
# Make HSV version
HSVim = im.convert('HSV')
# Make numpy versions
RGBna = np.array(im)
HSVna = np.array(HSVim)
# Extract Hue
H = HSVna[:,:,0]
# Find all red pixels, i.e. where Hue > 340 or Hue < 20 (Hue wraps around at 360)
lo,hi = 340,20
# Rescale to 0-255, rather than 0-360 because we are using uint8
lo = int((lo * 255) / 360)
hi = int((hi * 255) / 360)
red = np.where((H>lo) | (H<hi))
# Make all red pixels black in original image
RGBna[red] = [0,0,0]
count = red[0].size
print("Pixels matched: {}".format(count))
result = Image.fromarray(RGBna)
# Replace Alpha if originally present
if savedAlpha is not None:
    result.putalpha(savedAlpha)
result.save('result.png')
Keywords: Image processing, PIL, Pillow, Hue Saturation Value, HSV, HSL, color ranges, colour ranges, range, prime.

skimage resize changes total sum of array

I want to resize an image in FITS format to a smaller dimension. For example, I would like to resize my 100x100 pixel image to a 58x58 pixel image. The values of the array are intensity or flux values, and I want the total intensity of the image to be conserved after the transformation. This does not work with skimage resize: the total value changes depending on the factor by which I scale up or down. I have shown below the code I tried so far.
import numpy as np
from astropy.io import fits
from skimage.transform import resize
image = fits.open(directory + file1)
cutout=image[0].data
out = resize(cutout, (58,58), order=1, preserve_range=True)
print(np.sum(out),np.sum(cutout))
My output is:
0.074657436655 0.22187 (I want these two values to be equal)
If I scale it to the same dimension using:
out = resize(cutout, (100,100), order=1, preserve_range=True)
print(np.sum(out),np.sum(cutout))
My output is very close to what I want:
0.221869631852 0.22187
I have the same problem if I try to increase the image size as well.
out = resize(cutout, (200,200), order=1, preserve_range=True)
print(np.sum(out),np.sum(cutout))
Output:
0.887316320731 0.22187
I would like to know if there is any workaround to this problem.
EDIT 1:
I just realized that if I multiply my image by the square of the factor by which I scale the image up or down, then the total sum is conserved.
For example:
x=58
out = resize(cutout, (x,x), order=1, preserve_range=True)
test=out*(100/x)**2
print(np.sum(test),np.sum(cutout))
My output is very close to what I want but slightly higher:
0.221930548915 0.22187
I tried this with different dimensions and it works except for really small values. Can anybody explain why this relation holds, or is it just a statistical coincidence?
If you treat an image I with N = Width x Height pixels as a set of intensities in the range [0,1], it is completely normal that after resizing the image to M = newWidth x newHeight pixels the sum of intensities completely differs from before.
Assume that an image I with N pixels has intensities uniformly distributed in the range [0,1]. Then the sum of intensities will be approximately 0.5 * N. If you use skimage's resize, the image will be resized to a smaller (or larger) size by interpolating. Interpolating does not accumulate values (as you seem to expect); instead it averages values in a neighbourhood to predict the value of each pixel in the new image. Thus the intensity range of the image does not change, only the individual values do, and so the sum of intensities of the new resized image will be approximately 0.5 * M. If M != N then the sums of intensities will differ a lot.
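A quick sketch makes this concrete: with a constant image, interpolation leaves every value untouched, so the sum simply tracks the pixel count:
import numpy as np
from skimage.transform import resize
a = np.full((100, 100), 0.5)
b = resize(a, (58, 58), order=1, preserve_range=True)
print(a.sum()) # 5000.0 (0.5 * 100 * 100)
print(b.sum()) # ~1682.0 (0.5 * 58 * 58)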
What you can do to solve this problem is:
Re-scale your new data proportional to its size:
>>> y, x = (57, 58)
>>> out = resize(data, (y,x), order=1, preserve_range=True)
>>> out = out * (data.shape[0] / float(y)) * (data.shape[1] / float(x))
This is analogous to what you propose, but for any size of image (not just square ones). Note, however, that it compensates every pixel by the same constant factor (out[i,j] *= X with X identical across the image), even though not all pixels are interpolated with the same weight, so it can introduce small artefacts.
I think it is best to simply replace the total sum of the image (which depends on the number of pixels in the image) with the average intensity (which doesn't depend on the number of pixels):
>>> meanI = np.sum(I) / float(I.size) # Exactly the same as np.mean(I) or I.mean()
>>> meanInew = np.sum(out) / float(out.size)
>>> np.isclose(meanI, meanInew) # True
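If you do need the total flux itself, here is a small sketch wrapping the rescaling idea into a reusable helper (the function name is just illustrative):
import numpy as np
from skimage.transform import resize
def resize_conserve_flux(data, new_shape):
    # Resize, then multiply by the area ratio so the total sum is preserved
    out = resize(data, new_shape, order=1, preserve_range=True)
    out *= (data.shape[0] / new_shape[0]) * (data.shape[1] / new_shape[1])
    return out
small = resize_conserve_flux(np.random.rand(100, 100), (58, 58))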
