How to label different objects on a non-solid black background? - python

I know that scipy.ndimage.label can't label objects if the background color is not a solid black.
I have an image with a black background, but it's not solid black, so we can't assume that the RGB values are (0,0,0) at every pixel.
How can I prepare the image so I can use ndimage.label?
this is a similar image to test on:
test image http://imageshack.us/a/img4/8661/backgrf.png
Note:
(1) The image was converted from RGB to PNG grayscale.
(2) The background color varies.
(3) ndimage.label labels the whole image as a single object.
Thanks

This is a simple method for increasing the contrast as far as it can go, so that anything "light" becomes white and anything "dark" becomes black. Assuming an 8-bit grayscale image and adapting the code in @Warren Weckesser's answer:
from scipy.ndimage import label

img2 = img.copy()        # Copy the image.
img2[img2 < 128] = 0     # Set all values less than 128 to 0 (black).
img2[img2 >= 128] = 255  # Set all values of 128 or greater to 255 (white).
lbl, n = label(img2)
Let me know if this works for you.

You could set all values less than some threshold to 0, and then call label:
In [16]: img2 = img.copy() # Copy the image.
In [17]: img2[img2 < 20] = 0 # Set all values less than 20 to 0.
In [18]: lbl, n = label(img2)
In [19]: n
Out[19]: 2
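Once lbl is computed, you can also pull out each object's bounding box with scipy.ndimage.find_objects; a minimal sketch continuing from the session above:
from scipy.ndimage import find_objects
slices = find_objects(lbl)  # one bounding-box slice tuple per labelled object
for i, sl in enumerate(slices, start=1):
    print('object', i, 'occupies', sl)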

Related

How to change the colour of pixels in an image depending on their initial luminosity?

The aim is to take a coloured image, and change any pixels within a certain luminosity range to black. For example, if luminosity is the average of a pixel's RGB values, any pixel with a value under 50 is changed to black.
I've attempted to begin with PIL, converting to grayscale, but I'm having trouble finding a way to identify each pixel's luminosity and use that information to manipulate the pixel map.
There are many ways to do this, but the simplest and probably fastest is with Numpy, which you should get accustomed to using for image processing in Python:
from PIL import Image
import numpy as np
# Load image and ensure RGB, not palette image
im = Image.open('start.png').convert('RGB')
# Make into Numpy array
na = np.array(im)
# Make all pixels of "na" where the mean of the R,G,B channels is less than 50 into black (0)
na[np.mean(na, axis=-1)<50] = 0
# Convert back to PIL Image to save or display
result = Image.fromarray(na)
result.show()
Another slightly different way would be to convert the image to a more conventional greyscale, rather than averaging for the luminosity:
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Calculate greyscale version
grey = im.convert('L')
# Point process over pixels to make mask of darker ones
mask = grey.point(lambda p: 255 if p<50 else 0)
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=mask)
Notice that the ITU-R 601-2 luma transform that PIL uses gives the blue channel considerably less weight (114 for blue versus 299 for red and 587 for green) in the formula:
L = R * 299/1000 + G * 587/1000 + B * 114/1000
so the blue shades are considered darker and become black.
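For instance, plugging pure blue and pure green into that formula shows the asymmetry; a quick check (my numbers, not part of the original answer):
blue_l  = (0 * 299 + 0 * 587 + 255 * 114) // 1000   # 29
green_l = (0 * 299 + 255 * 587 + 0 * 114) // 1000   # 149
print(blue_l, green_l)  # 29 149 -- pure blue falls below the 50 threshold, pure green does not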
Another way would be to make a greyscale version and a mask as above, but then choose the darker pixel at each location by comparing the original and the mask:
from PIL import Image, ImageChops
im = Image.open('start.png').convert('RGB')
grey = im.convert('L')
mask = grey.point(lambda p: 0 if p<50 else 255)
res = ImageChops.darker(im, mask.convert('RGB'))
That gives the same result as above.
Another way, pure PIL and probably closest to what you actually asked, would be to derive a luminosity value by averaging the channels:
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Calculate greyscale version by averaging R,G and B
grey = im.convert('L', matrix=(0.333, 0.333, 0.333, 0))
# Point process over pixels to make mask of darker ones
mask = grey.point(lambda p: 255 if p<50 else 0)
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=mask)
Another approach could be to split the image into its constituent RGB channels, evaluate a mathematical function over the channels and mask with the result:
from PIL import Image, ImageMath
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Split into RGB channels
(R, G, B) = im.split()
# Evaluate mathematical function over channels
dark = ImageMath.eval('(((R+G+B)/3) <= 50) * 255', R=R, G=G, B=B)
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=dark)
I created a function that returns a 2-D list holding True where a pixel's luminosity is below a given threshold, and False where it isn't. It takes an RGBA option (True or False) so it can handle both RGB and RGBA images.
def get_avg_lum(pic, avg=50, RGBA=False):
    # Average only the first three channels (R, G and B); the [:3] slice
    # drops the trailing alpha channel when RGBA is True, and is harmless
    # for plain RGB, so the flag is kept only for the original API.
    li = [[False for y in range(pic.size[1])] for x in range(pic.size[0])]
    for x in range(pic.size[0]):
        for y in range(pic.size[1]):
            li[x][y] = sum(pic.getpixel((x, y))[:3]) / 3 < avg
    return li

a = get_avg_lum(im)
The list is indexed like the image, so pixel (0,10) in the image is li[0][10] in the list.
Hopefully this helps. The function works on standard PIL image objects.

HSV colour range in OpenCV

I wrote a program that used trackbars, to find out the appropriate HSV values (range) for segmenting out the white lines from the image.
For a long time this seemed like the best shot:
But it's still not very accurate; it's leaving out chunks of the line...
After messing around some more, I realised something:
This one is very accurate, apart from the fact that the black and white regions are swapped.
Is there any way to invert this colour scheme to swap the black and white regions?
If not, what exactly can I do to avoid leaving out chunks of the line, as in the first image? I have tried various HSV combinations and this seems to be the closest I can get.
code:
import cv2 as cv
import numpy as np

def nothing(x):
    pass

img = cv.imread("ti2.jpeg")
cv.namedWindow("image")  # create a window that will contain the trackbars for HSV values
cv.createTrackbar('HMin', 'image', 0, 179, nothing)
cv.createTrackbar('SMin', 'image', 0, 255, nothing)
cv.createTrackbar('VMin', 'image', 0, 255, nothing)
cv.createTrackbar('HMax', 'image', 0, 179, nothing)
cv.createTrackbar('SMax', 'image', 0, 255, nothing)
cv.createTrackbar('VMax', 'image', 0, 255, nothing)
cv.setTrackbarPos('HMax', 'image', 179)  # default the max HSV trackbars to their maxima
cv.setTrackbarPos('SMax', 'image', 255)
cv.setTrackbarPos('VMax', 'image', 255)
while True:
    hMin = cv.getTrackbarPos('HMin', 'image')  # get the current slider positions
    sMin = cv.getTrackbarPos('SMin', 'image')
    vMin = cv.getTrackbarPos('VMin', 'image')
    hMax = cv.getTrackbarPos('HMax', 'image')
    sMax = cv.getTrackbarPos('SMax', 'image')
    vMax = cv.getTrackbarPos('VMax', 'image')
    hsv = cv.cvtColor(img, cv.COLOR_BGR2HSV)
    lower = np.array([hMin, sMin, vMin])
    upper = np.array([hMax, sMax, vMax])
    mask = cv.inRange(hsv, lower, upper)
    # result = cv.bitwise_and(img, img, mask=mask)
    cv.imshow("img", img)
    cv.imshow("mask", mask)
    # cv.imshow("result", result)
    k = cv.waitKey(1)
    if k == 27:  # Esc
        break
cv.destroyAllWindows()
Test Image:
To invert the mask:
mask = 255 - mask  # if mask is uint8, which ranges 0 to 255
mask = 1 - mask    # if mask is boolean, which is either 0 or 1
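Alternatively, a one-line sketch using OpenCV's built-in inversion (mask being the uint8 output of cv.inRange above):
mask = cv.bitwise_not(mask)  # flips 0 <-> 255 in a uint8 mask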

How to create a mask for all relatively white parts of an image using numpy?

Say I have 2 white images (RGB, 800x600) that are 'dirty' at some unknown positions, and I want to create a final combined image that has all the dirty parts of both images.
Just adding the images together reduces the 'dirtiness' of each blob, since I halve the pixel values and then add them (to stay in the 0-255 RGB range); this is amplified when you have more than 2 images.
What I want to do is create a mask for all relatively white pixels in the 3-channel image. I've seen that if all the RGB values are within 10-15 of each other, a pixel is relatively white. How would I create this mask using numpy?
Pseudo code for what I want to do:
img = cv2.imread(img) #BGR image
mask = np.where( BGR within 10 of each other)
Then I can use the first image and replace pixels on it where the second picture is not masked, keeping the dirtiness at full strength. (I know some dirtiness of the second image will replace that of the first, but that's okay.)
Edit:
People asked for images, so I created some sample images. The white will not always be as exact as in these samples, which is why I need to use a 'within 10' BGR range.
Image 1
Image 2
Image 3 (combined, ignore the difference in yellow blob from image 2 to here, they should be the same)
What you asked for is a mask of the pixels in which the distances between the colour channels are all under 10.
Here it is, translated to numpy. The channels are cast to int so the differences can go negative instead of wrapping around uint8:
img = cv2.imread(img)  # cv2 loads channels in BGR order, but that doesn't matter for pairwise closeness
r = img[:, :, 0].astype(int)
g = img[:, :, 1].astype(int)
b = img[:, :, 2].astype(int)
rg_close = np.abs(r - g) < 10
gb_close = np.abs(g - b) < 10
br_close = np.abs(b - r) < 10
all_close = np.logical_and(np.logical_and(rg_close, gb_close), br_close)
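To apply the mask the way the question describes, a rough sketch (assuming all_close2 is the same mask computed for a second image img2; both names are mine):
combined = img.copy()
dirty2 = ~all_close2             # pixels of image 2 that are not 'relatively white'
combined[dirty2] = img2[dirty2]  # copy image 2's dirty pixels over image 1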
I do believe, however, that this is not what you REALLY want.
I think what you want is a mask that segments the background.
This is actually simpler, assuming the background is close to pure white:
img = cv2.imread(img)
# Sum the channels as ints to avoid uint8 overflow; near-white pixels sum to more than 245 * 3
background_mask = img.astype(int).sum(axis=2) > 245 * 3
Please note this threshold needs tuning per image; the code only shows the concept.
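If hand-tuning the threshold is a problem, one option (my addition, not from the original answer) is to let Otsu's method pick it on a grayscale version:
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Otsu chooses the dark/bright split automatically; bright (background) pixels become 255
_, background_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)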
I would suggest you convert to HSV colourspace and look for saturated (colourful) pixels like this:
import cv2
import numpy as np
# Load background and foreground images
bg = cv2.imread('A.jpg')
fg = cv2.imread('B.jpg')
# Convert to HSV colourspace and extract just the Saturation
Sat = cv2.cvtColor(fg, cv2.COLOR_BGR2HSV)[..., 1]
# Find best (Otsu) threshold to divide black from white, and apply it
_ , mask = cv2.threshold(Sat,0,1,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# At each pixel, choose foreground where mask is set and background elsewhere
res = np.where(mask[...,np.newaxis], fg, bg)
# Save the result
cv2.imwrite('result.png', res)
Note that you can modify this if it picks up too many or too few coloured pixels. If it picks up too few, you could dilate the mask; if it picks up too many, you could erode it. You could also blur the image a little before masking, which might not be a bad idea as it is a "nasty" JPEG with compression artefacts in it. You could also change the saturation test to make it more clinical and targeted if you only wanted to allow certain colours through, or a certain brightness, or a combination.
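A rough sketch of those adjustments (the 3x3 kernel size is only a starting guess to tune):
kernel = np.ones((3, 3), np.uint8)
grown = cv2.dilate(mask, kernel)            # if too few coloured pixels were picked up
shrunk = cv2.erode(mask, kernel)            # if too many were picked up
smoothed = cv2.GaussianBlur(fg, (3, 3), 0)  # blur before masking to tame the JPEG artefacts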

Python 3: I am trying to find all green pixels in an image by traversing all pixels using an np.array, but can't get around an index error

My code currently consists of loading the image, which is successful and I don't believe has any connection to the problem.
Then I go on to transform the color image into a np.array named rgb
# convert image into array
rgb = np.array(img)
red = rgb[:,:,0]
green = rgb[:,:,1]
blue = rgb[:,:,2]
To double check my understanding of this array, in case that may be the root of the issue, it is an array such that rgb[x-coordinate, y-coordinate, color band] which holds the value between 0-255 of either red, green or blue.
Then my idea was to use a nested for loop to traverse all pixels of my image (620px by 400px) and sort them based on the ratio of green to blue and red, in an attempt to single out the greener pixels and set all others to black (0).
for i in range(xsize):
    for j in range(ysize):
        color = rgb[i,j]  # <-- Index error occurs here
        if (color[0] > 128):
            if (color[1] < 128):
                if (color[2] > 128):
                    rgb[i,j] = [0,0,0]
The error I am receiving when trying to run this is as follows:
IndexError: index 400 is out of bounds for axis 0 with size 400
I thought it might have something to do with the bounds I was giving i and j, so I tried sorting through only a small inner portion of the image, but I still got the same error. At this point I am lost as to the root of the error, let alone the solution.
In direct answer to your question, the y axis is given first in numpy arrays, followed by the x axis, so interchange your indices.
Less directly, you will find that for loops are very slow in Python and you are generally better off using numpy vectorised operations instead. Also, you will often find it easier to find shades of green in HSV colourspace.
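As an aside, the nested loop in the question collapses to a couple of vectorised lines; a sketch using the question's own condition:
mask = (rgb[:, :, 0] > 128) & (rgb[:, :, 1] < 128) & (rgb[:, :, 2] > 128)
rgb[mask] = [0, 0, 0]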
Let's start with an HSL colour wheel and assume you want to make all the greens into black. From the Wikipedia page, the Hue corresponding to green is 120 degrees, which means you could do this:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image
# Open image and make RGB and HSV versions
RGBim = Image.open("image.png").convert('RGB')
HSVim = RGBim.convert('HSV')
# Make numpy versions
RGBna = np.array(RGBim)
HSVna = np.array(HSVim)
# Extract Hue
H = HSVna[:,:,0]
# Find all green pixels, i.e. where 100 < Hue < 140
lo,hi = 100,140
# Rescale to 0-255, rather than 0-360 because we are using uint8
lo = int((lo * 255) / 360)
hi = int((hi * 255) / 360)
green = np.where((H>lo) & (H<hi))
# Make all green pixels black in original image
RGBna[green] = [0,0,0]
count = green[0].size
print("Pixels matched: {}".format(count))
Image.fromarray(RGBna).save('result.png')
Here is a slightly improved version that retains the alpha/transparency, and matches red pixels for extra fun:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image
# Open image
im = Image.open("image.png")
# Save Alpha if present, then remove it
savedAlpha = None
if 'A' in im.getbands():
    savedAlpha = im.getchannel('A')
    im = im.convert('RGB')
# Make HSV version
HSVim = im.convert('HSV')
# Make numpy versions
RGBna = np.array(im)
HSVna = np.array(HSVim)
# Extract Hue
H = HSVna[:,:,0]
# Find all red pixels, i.e. where Hue > 340 or Hue < 20 (the red range wraps around 0)
lo,hi = 340,20
# Rescale to 0-255, rather than 0-360, because we are using uint8
lo = int((lo * 255) / 360)
hi = int((hi * 255) / 360)
red = np.where((H>lo) | (H<hi))
# Make all red pixels black in original image
RGBna[red] = [0,0,0]
count = red[0].size
print("Pixels matched: {}".format(count))
result = Image.fromarray(RGBna)
# Replace Alpha if originally present
if savedAlpha is not None:
    result.putalpha(savedAlpha)
result.save('result.png')

OpenCV Python: detecting whether the color of an object is white or black

I'm new to OpenCV. I've managed to detect the object and place an ROI around it, but I can't manage to detect whether the object is black or white. I've found something that I think may work, but I don't know if it is the right solution. The function should return True or False depending on whether the object is black or white. Anyone have experience with this?
def filter_color(img):
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    lower_black = np.array([0,0,0])
    upper_black = np.array([350,55,100])
    black = cv2.inRange(hsv, lower_black, upper_black)
If you are certain that the ROI is going to be basically black or white and not worried about misidentifying something, then you should be able to just average the pixels in the ROI and check if it is above or below some threshold.
In the code below, after you set an ROI using numpy slicing, you can pass the ROI into the function as if you were passing a full image.
Copy-Paste Sample
import cv2
import numpy as np

def is_b_or_w(image, black_max_bgr=(40, 40, 40)):
    # use this if you want to check the channels are all basically equal
    # (split into small steps to make it easy to see where any error comes from)
    mean_bgr_float = np.mean(image, axis=(0, 1))
    mean_bgr_rounded = np.round(mean_bgr_float)
    mean_bgr = mean_bgr_rounded.astype(np.uint8)
    # use this if you just want a simple threshold for simple grayscale,
    # or if you want to use an HSV (V) measurement as in your example
    mean_intensity = int(round(np.mean(image)))
    return 'black' if np.all(mean_bgr < black_max_bgr) else 'white'

# make a test image for ROIs (zero-initialised so no pixel is left undefined)
shape = (10, 10, 3)  # 10x10 BGR image
im_blackleft_white_right = np.zeros(shape, dtype=np.uint8)
im_blackleft_white_right[:, 0:4] = 10
im_blackleft_white_right[:, 5:9] = 255
roi_darkgray = im_blackleft_white_right[:, 0:4]
roi_white = im_blackleft_white_right[:, 5:9]

# test them with ROI
print('dark gray image identified as: {}'.format(is_b_or_w(roi_darkgray)))
print('white image identified as: {}'.format(is_b_or_w(roi_white)))
# output
# dark gray image identified as: black
# white image identified as: white
I don't know if this is the right approach but it worked for me.
import math

Thres = 50  # distance-from-black threshold (currently hard-coded)
h, w = img.shape[:2]
black = 0
not_black = 0
for y in range(h):
    for x in range(w):
        pixel = img[y][x]
        # Euclidean distance of the pixel from pure black (0, 0, 0);
        # the int() casts stop uint8 values overflowing when squared
        d = math.sqrt(int(pixel[0])**2 + int(pixel[1])**2 + int(pixel[2])**2)
        if d < Thres:
            black = black + 1
        else:
            not_black = not_black + 1
This one worked for me but, like I said, I don't know if it is the right approach. It demands a lot of processing power, so I defined a much smaller ROI. The Thres value is currently hard-coded...
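Since the per-pixel loop is the slow part, the same distance test can be vectorised in numpy; a sketch under the same hard-coded threshold:
import numpy as np

d = np.sqrt((img.astype(np.float32) ** 2).sum(axis=2))  # distance of every pixel from black
black = int((d < Thres).sum())
not_black = d.size - black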
