I wrote a program that uses trackbars to find the appropriate HSV range for segmenting out the white lines from the image.
For a long time this seemed like the best shot:
But it's still not very accurate; it's leaving out chunks of the line.
After messing around some more, I realised something:
This is very accurate, apart from the fact that the black and white regions are swapped.
Is there any way to invert this colour scheme to swap the black and white regions?
If not, what exactly can I do to avoid leaving out chunks of the line as in the first image? I have tried out various HSV combinations and it seems like this is the closest I can get.
Code:
import cv2 as cv
import numpy as np

def nothing(x):
    pass

img = cv.imread("ti2.jpeg")

# Create a window that will contain the trackbars for the HSV values
cv.namedWindow("image")
cv.createTrackbar('HMin', 'image', 0, 179, nothing)
cv.createTrackbar('SMin', 'image', 0, 255, nothing)
cv.createTrackbar('VMin', 'image', 0, 255, nothing)
cv.createTrackbar('HMax', 'image', 0, 179, nothing)
cv.createTrackbar('SMax', 'image', 0, 255, nothing)
cv.createTrackbar('VMax', 'image', 0, 255, nothing)

# Default the max HSV trackbars to their maximum values
cv.setTrackbarPos('HMax', 'image', 179)
cv.setTrackbarPos('SMax', 'image', 255)
cv.setTrackbarPos('VMax', 'image', 255)

while True:
    # Get the current slider positions
    hMin = cv.getTrackbarPos('HMin', 'image')
    sMin = cv.getTrackbarPos('SMin', 'image')
    vMin = cv.getTrackbarPos('VMin', 'image')
    hMax = cv.getTrackbarPos('HMax', 'image')
    sMax = cv.getTrackbarPos('SMax', 'image')
    vMax = cv.getTrackbarPos('VMax', 'image')

    hsv = cv.cvtColor(img, cv.COLOR_BGR2HSV)
    lower = np.array([hMin, sMin, vMin])
    upper = np.array([hMax, sMax, vMax])
    mask = cv.inRange(hsv, lower, upper)
    #result = cv.bitwise_and(img, img, mask=mask)

    cv.imshow("img", img)
    cv.imshow("mask", mask)
    #cv.imshow("result", result)

    k = cv.waitKey(1)
    if k == 27:
        break

cv.destroyAllWindows()
Test Image:
To invert the mask:
mask = 255 - mask  # if mask is uint8, ranging from 0 to 255
mask = 1 - mask    # if mask holds 0/1 integer values
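Since cv.inRange produces a uint8 mask of 0s and 255s, OpenCV's own inversion gives the same result in one call:

mask = cv.bitwise_not(mask)  # flips 0 <-> 255, swapping the black and white regions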
The aim is to take a coloured image, and change any pixels within a certain luminosity range to black. For example, if luminosity is the average of a pixel's RGB values, any pixel with a value under 50 is changed to black.
I’ve attempted to begin by using PIL and converting to grayscale, but I'm having trouble finding a solution that can identify a pixel's luminosity value and use that information to manipulate a pixel map.
There are many ways to do this, but the simplest and probably fastest is with Numpy, which you should get accustomed to using for image processing in Python:
from PIL import Image
import numpy as np
# Load image and ensure RGB, not palette image
im = Image.open('start.png').convert('RGB')
# Make into Numpy array
na = np.array(im)
# Make all pixels of "na" where the mean of the R,G,B channels is less than 50 into black (0)
na[np.mean(na, axis=-1)<50] = 0
# Convert back to PIL Image to save or display
result = Image.fromarray(na)
result.show()
That turns this:
Into this:
Another slightly different way would be to convert the image to a more conventional greyscale, rather than averaging for the luminosity:
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Calculate greyscale version
grey = im.convert('L')
# Point process over pixels to make mask of darker ones
mask = grey.point(lambda p: 255 if p<50 else 0)
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=mask)
Notice that the blue channel is given considerably less significance in the ITU-R 601-2 luma transform that PIL uses (see the lower 114 weighting for Blue versus 299 for Red and 587 for Green) in the formula:
L = R * 299/1000 + G * 587/1000 + B * 114/1000
so the blue shades are considered darker and become black.
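For example, a pure blue pixel (R, G, B) = (0, 0, 255) gets L = 255 * 114/1000 ≈ 29, which falls below the threshold of 50 and becomes black, whereas a pure green pixel (0, 255, 0) gets L ≈ 150 and is left alone.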
Another way would be to make a greyscale and a mask as above, but then choose the darker pixel at each location when comparing the original and the mask:
from PIL import Image, ImageChops
im = Image.open('start.png').convert('RGB')
grey = im.convert('L')
mask = grey.point(lambda p: 0 if p<50 else 255)
res = ImageChops.darker(im, mask.convert('RGB'))
That gives the same result as above.
Another way, pure PIL and probably closest to what you actually asked, would be to derive a luminosity value by averaging the channels:
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Calculate greyscale version by averaging R,G and B
grey = im.convert('L', matrix=(0.333, 0.333, 0.333, 0))
# Point process over pixels to make mask of darker ones
mask = grey.point(lambda p: 255 if p<50 else 0)
# Paste black (i.e. 0) into image where mask indicates it is dark
im.paste(0, mask=mask)
Another approach could be to split the image into its constituent RGB channels, evaluate a mathematical function over the channels and mask with the result:
from PIL import Image, ImageMath
# Load image and ensure RGB
im = Image.open('start.png').convert('RGB')
# Split into RGB channels
(R, G, B) = im.split()
# Evaluate mathematical function over channels
dark = ImageMath.eval('(((R+G+B)/3) <= 50) * 255', R=R, G=G, B=B)
# Paste black (i.e. 0) into image where mask indicates it is dark
# (convert the "I"-mode result to "L" so it is a valid paste mask)
im.paste(0, mask=dark.convert('L'))
I created a function that returns a 2D list with True where a pixel's luminosity is below a given threshold and False where it isn't. It takes an RGBA flag (True or False) so images with an alpha channel are handled too.
def get_avg_lum(pic, avg=50, RGBA=False):
    # Luminosity here is the average of the R, G and B channels. The first
    # three values of a pixel are R, G and B whether the image is RGB or RGBA,
    # so we always slice [:3] and divide by 3; the alpha channel is ignored.
    num = 3
    numd = 3
    li = [[False for y in range(0, pic.size[1])] for x in range(0, pic.size[0])]
    for x in range(0, pic.size[0]):
        for y in range(0, pic.size[1]):
            li[x][y] = sum(pic.getpixel((x, y))[:num]) / numd < avg
    return li

a = get_avg_lum(im)
The pixels match in the list, so (0,10) on the image is [0][10] in the list.
Hopefully this helps. The function works on standard PIL Image objects.
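For large images the per-pixel loop is slow; here is a minimal NumPy sketch of the same Boolean map, transposed so that a[x][y] matches the indexing above (assuming im is an RGB or RGBA PIL image as before):
import numpy as np

# Average the first three channels, compare to the threshold, and transpose
# so that a[x][y] addresses column x, row y as in get_avg_lum
a = (np.asarray(im, dtype=float)[..., :3].mean(axis=-1) < 50).T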
I have a numpy array and I am subtracting a constant value from it. I want the values to go negative if necessary (and not wrap around or floor to zero). I then need to extract all array values that are close to zero and produce a new binary array/image, so the resulting image shows white in the areas that were close to zero.
I have tried to implement this below, but it's hacky and I'm not sure if it's correct. Can you help with what I'm trying to do?
import cv2
import numpy as np

# roi is a numpy array/image in CIELAB colour space
swatch_colour = (255, 10, 30) # Cielab colour space
swatch_roi = np.full((roi.shape[0], roi.shape[1], 3), swatch_colour, dtype='int8')
int_roi = roi.astype('int8')
diff = np.subtract(int_roi, swatch_roi)
thresh = diff.copy()
# Get all pixels whose Cielab colour is close to zero
thresh[np.abs(thresh) < (12,6,12)] = 0
# the remaining pixels are greater than/less than the above threshold
thresh[np.abs(thresh) > (0,0,0)] = 255
thresh = thresh.astype('uint8')
# convert from 3 channels to 1 channel
thresh = cv2.cvtColor(thresh, cv2.COLOR_BGR2GRAY)
# Invert the image so that the pixels that were close to zero are white
thresh = cv2.bitwise_not(thresh)
NumPy supports logical indexing, so you can do something like:
image[np.logical_and(image > -5, image < 5)]
https://docs.scipy.org/doc/numpy/reference/generated/numpy.logical_and.html
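Applied to the question, a minimal sketch (assuming roi is the CIELAB uint8 image described there, and reusing the question's swatch colour and per-channel tolerances):
import numpy as np

swatch_colour = np.array([255, 10, 30], dtype=np.int16)  # CIELAB swatch from the question
tolerance = np.array([12, 6, 12], dtype=np.int16)

# int16 keeps the subtraction from wrapping around (uint8/int8 would overflow)
diff = roi.astype(np.int16) - swatch_colour
# True where all three channel differences are close to zero
close = np.all(np.abs(diff) < tolerance, axis=-1)
# Single-channel binary image: white where the colour matched
thresh = np.where(close, 255, 0).astype(np.uint8)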
My objective here is to replace each spot in mask_image with the colour of the corresponding spot in original_image. What I did here is to find connected components and label them, but I can't figure out how to find the corresponding labelled spot and replace it.
How can I put the n circles into n objects and fill them with the corresponding intensities?
Any help would be appreciated.
For example, the spot at (2, 1) in the mask image should be painted with the colour of the corresponding spot in the original image below.
mask image http://myfair.software/goethe/images/mask.jpg
original image http://myfair.software/goethe/images/original.jpg
import cv2
import numpy as np

def thresh(img):
    ret, threshold = cv2.threshold(img, 5, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return threshold

def spot_id(img):
    seed_pt = (5, 5)
    fill_color = 0
    mask = np.zeros_like(img)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    for th in range(5, 255):
        prev_mask = mask.copy()
        mask = cv2.threshold(img, th, 255, cv2.THRESH_BINARY)[1]
        mask = cv2.floodFill(mask, None, seed_pt, fill_color)[1]
        mask = cv2.bitwise_or(mask, prev_mask)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # here I labelled them
    n_centers, labels = cv2.connectedComponents(mask)
    label_hue = np.uint8(179 * labels / np.max(labels))  # hue must stay in OpenCV's 0-179 range
    blank_ch = 255 * np.ones_like(label_hue)
    labeled_img = cv2.merge([label_hue, blank_ch, blank_ch])
    labeled_img = cv2.cvtColor(labeled_img, cv2.COLOR_HSV2BGR)
    labeled_img[label_hue == 0] = 0

    print('There are %d bright spots in the image.' % n_centers)
    cv2.imshow("labeled_img", labeled_img)
    return mask, n_centers

image_thresh = thresh(img_greyscaled)
mask, centers = spot_id(img_greyscaled)
There is one very simple way of accomplishing this task. First one needs to sample the value at the center of each dot in mask_image. Next, one expands this color to fill the dot in that same image.
Here is some code using PyDIP (because I know it better than OpenCV; I'm one of its authors). I'm sure something similar can be done with OpenCV alone:
import PyDIP as dip
import cv2
import numpy as np
# Load the color image shown in the question
original_image = cv2.imread('/home/cris/tmp/BxW25.png')
# Load the mask image shown in the question
mask_image = cv2.imread('/home/cris/tmp/aqf3Z.png')[:,:,0]
# Get a single colored pixel in the middle of each spot of the mask
colors = dip.EuclideanSkeleton(mask_image > 50, 'loose ends away') * original_image
# Spread that color across the full spot
# (dilation and similar operators like this one don't work with color images,
# so we apply the operation on each channel separately)
for t in range(colors.TensorElements()):
colors.TensorElement(t).Copy(dip.MorphologicalReconstruction(colors.TensorElement(t), mask_image))
# Save the result
cv2.imwrite('/home/cris/tmp/so.png', np.array(colors))
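For reference, here is a rough OpenCV-only sketch of the same idea: label the spots in the mask, then paint each spot with the mean colour of the corresponding pixels in the original image. The file names are placeholders for the two images from the question.
import cv2
import numpy as np

original_image = cv2.imread('original.jpg')
mask_image = cv2.imread('mask.jpg', cv2.IMREAD_GRAYSCALE)

# Label each spot in the binarised mask
n_labels, labels = cv2.connectedComponents((mask_image > 50).astype(np.uint8))

# Paint every labelled spot with the mean colour of the original under it
result = np.zeros_like(original_image)
for label in range(1, n_labels):  # label 0 is the background
    spot = labels == label
    result[spot] = original_image[spot].mean(axis=0).astype(np.uint8)

cv2.imwrite('spots_coloured.png', result)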
I'm trying to extract a specific color from an image within a defined RGB range using OpenCV's Python module. In the example below I am trying to isolate the fire from the exhaust of the Space Shuttle between yellow and white RGB values, and then print out the percentage of RGB values within that range compared to the rest of the image.
Here is my minimal working example:
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt
import imageio
img = imageio.imread(r"shuttle.jpg")
plt.imshow(img)
This is the output image. It's from Wikipedia.
img = cv.cvtColor(img, cv.COLOR_BGR2HSV)
color1 = (255, 255, 0)    # yellow
color2 = (255, 255, 255)  # white
boundaries = [([color1[0], color1[1], color1[2]], [color2[0], color2[1], color2[2]])]
for (lower, upper) in boundaries:
    lower = np.array(lower, dtype=np.uint8)
    upper = np.array(upper, dtype=np.uint8)
    mask = cv.inRange(img, lower, upper)
    output = cv.bitwise_and(img, img, mask=mask)
    ratio = cv.countNonZero(mask)/(img.size/3)
    print('pixel percentage:', np.round(ratio*100, 2))
    plt.imshow(mask)
However this does not seem to work because I get 0% of pixels between the yellow and white values. I'm not really sure where I'm going wrong:
[([255, 255, 0], [255, 255, 255])]
pixel percentage: 0.0
And the output graph appears to be blank with a blue/purple image:
Note I haven't used OpenCV's built-in image viewers such as cv.imshow(), cv.waitKey() and cv.destroyAllWindows() because calling them kept crashing my IDE (Spyder 3.3.1) on Windows 8.1. Not sure if this is why the image is appearing blue/purple?
Also when I just try to output the original image, it appears in a strange inverted color format:
plt.imshow(img)
Anyway, I have tried following a similar method to detect a specific color range, previously described here; however, that particular method gave me problems during compilation and has frozen and crashed my computer several times when I tried to implement something like this:
imask = mask>0
exhaust_color = np.zeros_like(img, np.uint8)
green[imask] = img[exhaust_color]
I guess what I'm trying to achieve here is something like the image below, where only the colors between yellow and white are displayed, and then to print out the percentage of pixels consisting of these colors. For the image below I just filtered out all colors below RGB (255, 255, 0) using basic image processing software.
Is there a way to achieve this using the code I have already written or similar?
EDIT 1: Followed the advice below to convert to the HSV color space first. However, it still doesn't work and the yellow-to-white pixel percentage is still 0%. Output graphs are still the same, showing all black or purple. Also, I managed to get cv.imshow() working by passing 1 to cv.waitKey(1). (Doesn't work with 0 for some reason.)
#CONVERT TO HSV COLORS
hsv_img = cv.cvtColor(img, cv.COLOR_BGR2HSV)
color1 = np.uint8([[[0, 255, 255 ]]]) #yellow
color2 = np.uint8([[[255, 255, 255]]]) #white
hsv_color1 = cv.cvtColor(color1,cv.COLOR_BGR2HSV)
hsv_color2 = cv.cvtColor(color2,cv.COLOR_BGR2HSV)
print(hsv_color1)
print(hsv_color2)
#Define threshold color range to filter
mask = cv.inRange(hsv_img, hsv_color1, hsv_color2)
# Bitwise-AND mask and original image
res = cv.bitwise_and(hsv_img, hsv_img, mask=mask)
ratio = cv.countNonZero(mask)/(hsv_img.size/3)
print('pixel percentage:', np.round(ratio*100, 2))
#plt.imshow(mask)
cv.imshow('mask',res)
cv.waitKey(1)
Output
[[[ 30 255 255]]]
[[[ 0 0 255]]]
pixel percentage: 0.0
It was a pretty simple issue; you gave a larger color before a smaller one to cv.inRange, so there was no valid intersection! Here's some working code that shows the output. This should be easy to adapt into your own script.
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt
img = cv.imread('shuttle.jpg') # you can read in images with opencv
img_hsv = cv.cvtColor(img, cv.COLOR_BGR2HSV)
hsv_color1 = np.asarray([0, 0, 255]) # white!
hsv_color2 = np.asarray([30, 255, 255]) # yellow! note the order
mask = cv.inRange(img_hsv, hsv_color1, hsv_color2)
plt.imshow(mask, cmap='gray') # this colormap will display in black / white
plt.show()
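To also get the percentage the question asks for, count the nonzero mask pixels against the total pixel count, along these lines:
ratio = cv.countNonZero(mask) / (img.shape[0] * img.shape[1])
print('pixel percentage:', round(ratio * 100, 2))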
You can download the colorgram module from PyPI; with this method you can extract as many colors as you want from one picture.
Note: your image should be in the same folder as your main script, as you can see in my code (turtle.jpg). You also give colorgram the number of colors you want extracted, in my case 30.
import colorgram

# Extract up to 30 colours from the image
colors = colorgram.extract('turtle.jpg', 30)
# print(colors)

rgb_list = []
for color in colors:
    # print(color)
    r = color.rgb.r
    g = color.rgb.g
    b = color.rgb.b
    rgb_list.append((r, g, b))
print(rgb_list)
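If you also want to know how much of the picture each colour covers, colorgram's Color objects expose a proportion attribute alongside rgb:
for color in colors:
    print(color.rgb, color.proportion)  # proportion is the fraction of the image this colour covers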
I'm new to OpenCV. I've managed to detect the object and place an ROI around it, but I can't manage to detect whether the object is black or white. I've found something, I think, but I don't know if it is the right solution. The function should return True or False depending on whether it's black or white. Anyone have experience with this?
import cv2
import numpy as np

def filter_color(img):
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    lower_black = np.array([0, 0, 0])
    upper_black = np.array([350, 55, 100])
    black = cv2.inRange(hsv, lower_black, upper_black)
If you are certain that the ROI is going to be basically black or white and not worried about misidentifying something, then you should be able to just average the pixels in the ROI and check if it is above or below some threshold.
In the code below, after you set an ROI using the newer numpy method, you can pass the roi/image into the method as if you were passing a full image.
Copy-Paste Sample
import cv2
import numpy as np

def is_b_or_w(image, black_max_bgr=(40, 40, 40)):
    # use this if you want to check channels are all basically equal
    # I split this up into small steps to find out where your error is coming from
    mean_bgr_float = np.mean(image, axis=(0, 1))
    mean_bgr_rounded = np.round(mean_bgr_float)
    mean_bgr = mean_bgr_rounded.astype(np.uint8)
    # use this if you just want a simple threshold for simple grayscale
    # or if you want to use an HSV (V) measurement as in your example
    mean_intensity = int(round(np.mean(image)))
    return 'black' if np.all(mean_bgr < black_max_bgr) else 'white'

# make a test image for ROIs
shape = (10, 10, 3)  # 10x10 BGR image
im_blackleft_white_right = np.zeros(shape, dtype=np.uint8)  # zeros, so no uninitialised pixels
im_blackleft_white_right[:, 0:4] = 10
im_blackleft_white_right[:, 5:9] = 255

roi_darkgray = im_blackleft_white_right[:, 0:4]
roi_white = im_blackleft_white_right[:, 5:9]

# test them with ROI
print('dark gray image identified as: {}'.format(is_b_or_w(roi_darkgray)))
print('white image identified as: {}'.format(is_b_or_w(roi_white)))

# output
# dark gray image identified as: black
# white image identified as: white
I don't know if this is the right approach, but it worked for me.
import math

Thres = 50  # distance-from-black threshold
h, w = img.shape[:2]
black = 0
not_black = 0
for y in range(h):
    for x in range(w):
        pixel = img[y][x]
        # Euclidean distance of this pixel from pure black (0, 0, 0);
        # cast to int so the uint8 squares can't overflow
        d = math.sqrt(int(pixel[0])**2 + int(pixel[1])**2 + int(pixel[2])**2)
        if d < Thres:
            black = black + 1
        else:
            not_black = not_black + 1
This one worked for me, but like I said, I don't know if this is the right approach. It asks a lot of processing power, therefore I defined an ROI which is much smaller. The Thres is currently hard-coded...
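As a minimal sketch, the same distance test can be vectorised with NumPy to avoid the per-pixel Python loop (this assumes img is the same BGR array as above):
import numpy as np

# Distance of every pixel from black, computed for the whole image at once
d = np.sqrt(np.sum(img.astype(np.float64) ** 2, axis=-1))
black = int(np.count_nonzero(d < Thres))
not_black = d.size - black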