Python Boolean in Brackets? - python

I'm working on OpenCV using python, and in the edge detection script here I've encountered something I've never seen before. I apologize if this question has been asked before on here, but I'm not really sure what to search for.
I've pasted the relevant piece below:
while True:
    flag, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    thrs1 = cv2.getTrackbarPos('thrs1', 'edge')
    thrs2 = cv2.getTrackbarPos('thrs2', 'edge')
    edge = cv2.Canny(gray, thrs1, thrs2, apertureSize=5)
    vis = img.copy()
    vis /= 2
    vis[edge != 0] = (0, 255, 0)  # This is the line I'm trying to figure out
    cv2.imshow('edge', vis)
The code isn't mine, but is part of the OpenCV documentation. As best as I can tell, vis[edge != 0] is going through each element in edge, comparing it to true, and then somehow (this is the strange part to me) turning the result of the boolean evaluation into xy coordinates for vis, and then setting the image value to green.
It just seems a little magical to me, as I've never encountered anything like this, since I'm mostly a C/C++ programmer. Can someone point me to the docs where I can read up on it? I have STFW unsuccessfully because I don't know what to call this behavior.

vis is a NumPy array, and [edge != 0] is boolean (mask) indexing, which acts like syntactic sugar for the numpy.where() function... so it's thresholding the values with Canny and then drawing green on the vis image wherever the edges are.
Here is an analogous example.
import numpy as np
x = np.arange(10)
y = np.zeros(10)
print(y)
y[x > 3] = 10
print(y)
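For a concrete toy illustration of the same mechanism (the edge and vis arrays below are made up, not from the question):
import numpy as np

# toy stand-ins for the arrays in the question (made-up data)
edge = np.array([[0, 10],
                 [0,  0]], dtype=np.uint8)
vis = np.zeros((2, 2, 3), dtype=np.uint8)

mask = edge != 0          # boolean array, same shape as edge
vis[mask] = (0, 255, 0)   # paints every pixel where the mask is True

# equivalent coordinate-based version using numpy.where()
ys, xs = np.where(edge != 0)
vis[ys, xs] = (0, 255, 0)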

Related

Apply a brightness mask using Lab color space and Opencv to remove Vignetting

Good day everyone,
I'm trying to compensate for the vignetting effect of a Basler camera using a brightness mask. First things first, I took a picture of a white screen projected by a projector, to get an almost perfectly white image with the vignetting effect visible in the corners.
The idea was then to invert the vignetting (so that white=255 becomes 0, i.e. no change, and vice versa). Everything works pretty well using the Lab color space, apart from the fact that even though I use clipping (0-255), the lightness channel seems to overflow and the brightest zones become dark.
Below are an example and the code to reproduce it, as well as the image and vignette I'm using right now. Apart from the overflow zone, the method seems to work well and the sides/corners get brighter as I want.
Problem demonstration
Download Vignette image
Download Sample image
import cv2
import numpy as np

if __name__ == "__main__":
    image = cv2.imread("img/4mm_raw2.jpeg")
    mask = cv2.imread("img/vignetting.png")
    h, w, c = image.shape

    # Convert to LAB color space
    lab_img = cv2.cvtColor(image, cv2.COLOR_BGR2Lab)
    lab_mask = cv2.cvtColor(mask, cv2.COLOR_BGR2Lab)

    # Invert the vignetting (white no change, black increase brightness)
    inv_mask = (255 - lab_mask[:, :, 0])

    # Add the vignetting contribution, clipping to the channel limit (0-255)
    lab_img[:, :, 0] = np.clip(lab_img[:, :, 0] + inv_mask, 0, 255)

    # Back to BGR
    result = cv2.cvtColor(lab_img, cv2.COLOR_Lab2BGR)

    # Resize for visualization
    res_img = cv2.resize(image, (w // 2, h // 2))
    res_dst = cv2.resize(result, (w // 2, h // 2))
    stack = np.hstack((res_img, res_dst))
    cv2.imshow('Difference', stack)
    cv2.waitKey(0)
Using a double for loop and changing the image pixel by pixel works fine, but it's far too slow for the type of application I'm developing.
inv_mask = (255 - lab_mask[:, :, 0])
for y in range(0, h):
    for x in range(0, w):
        new = lab_img.item(y, x, 0) + inv_mask.item(y, x)
        lab_img[y, x, 0] = np.clip(new, 0, 255)
The question is: how can I solve this, and is there a better way to achieve the same result (other color spaces, ...)?
Thanks,
Best
Solution:
My bad, I found the solution myself on the last try before giving up.
The brightness channel needs to be converted to 16 bits, otherwise the uint8 addition overflows before it is even evaluated by NumPy's clip.
a = lab_img[:,:,0].astype(np.int16) + inv_mask
lab_img[:,:,0] = np.clip(a,0,255)
However, I leave the question open for any advice or better methodologies. Thanks
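For what it's worth, here is a sketch of an alternative that avoids the intermediate cast, assuming the same uint8 lab_img and lab_mask as in the code above: cv2.add performs saturating addition on uint8 arrays, so it clips at 255 instead of wrapping around.
import cv2

# assumes lab_img and lab_mask are the uint8 Lab images from the code above
inv_mask = 255 - lab_mask[:, :, 0]

# cv2.add saturates at 255 instead of wrapping around like plain uint8 addition
lab_img[:, :, 0] = cv2.add(lab_img[:, :, 0], inv_mask)
result = cv2.cvtColor(lab_img, cv2.COLOR_Lab2BGR)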

skeletonization (thinning) of small images not giving expected results - python

I am trying to implement skeletonization of small images, but I am not getting the expected results. I also tried thin() and medial_axis(), but nothing seems to work as expected. I suspect this problem occurs because of the small resolution of the images. Here is the code:
import cv2
from numpy import asarray
import numpy as np
# open image
file = "66.png"
img_grey = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
afterMedian = cv2.medianBlur(img_grey, 3)
thresh = 140
# threshold the image
img_binary = cv2.threshold(afterMedian, thresh, 255, cv2.THRESH_BINARY)[1]
# make binary image
arr = asarray(img_binary)
binaryArr = np.zeros(asarray(img_binary).shape)
for i in range(0, arr.shape[0]):
    for j in range(0, arr.shape[1]):
        if arr[i][j] == 255:
            binaryArr[i][j] = 1
        else:
            binaryArr[i][j] = 0
# perform skeletonization
from skimage.morphology import skeletonize
cv2.imshow("binary arr", binaryArr)
backgroundSkeleton = skeletonize(binaryArr)
# convert to non-binary image
bSkeleton = np.zeros(arr.shape)
for i in range(0, arr.shape[0]):
    for j in range(0, arr.shape[1]):
        if backgroundSkeleton[i][j] == 0:
            bSkeleton[i][j] = 0
        else:
            bSkeleton[i][j] = 255
cv2.imshow("background skeleton", bSkeleton)
cv2.waitKey(0)
The results are:
I would expect something more like this:
This applies to similar shapes also:
Expectation:
Am I doing something wrong? Or will it truly not be possible with such small pictures? I tried skeletonization on bigger images and it worked just fine. Original images:
You could try the skeleton in DIPlib (dip.EuclideanSkeleton):
import numpy as np
import diplib as dip
import cv2
file = "66.png"
img_grey = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
afterMedian = cv2.medianBlur(img_grey, 3)
thresh = 140
bin = afterMedian > thresh
sk = dip.EuclideanSkeleton(bin, endPixelCondition='three neighbors')
dip.viewer.Show(bin)
dip.viewer.Show(sk)
dip.viewer.Spin()
The endPixelCondition input argument can be used to adjust how many branches are preserved or removed. 'three neighbors' is the option that produces the most branches.
The code above produces branches also towards the corners of the image. Using 'two neighbors' prevents that, but produces fewer branches towards the object as well. The other way to prevent it is to set edgeCondition='object', but in this case the ring around the object becomes a square on the image boundary.
To convert the DIPlib image sk back to a NumPy array, do
sk = np.array(sk)
sk is now a Boolean NumPy array (values True and False). To create an array compatible with OpenCV simply cast to np.uint8 and multiply by 255:
sk = np.array(sk, dtype=np.uint8)
sk *= 255
Note that, when dealing with NumPy arrays, you generally don't need to loop over all pixels. In fact, it's worth trying to avoid doing so, as loops in Python are extremely slow.
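For example, the two pixel loops in the question could be replaced with vectorized expressions along these lines (reusing the variable names from the question):
import numpy as np

# 0/1 float array from the 0/255 thresholded image (replaces the first loop)
binaryArr = (img_binary == 255).astype(np.float64)

# 0/255 image from the boolean skeleton (replaces the second loop)
bSkeleton = np.where(backgroundSkeleton, 255.0, 0.0)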
It seems scikit-image is a much better choice than cv2 here.
Since the package provides functions for binary (black-and-white) images, try its ready-to-use skeletonize function.
Note: if the process misses image details, don't upsample the input right away; first try the other skimage.morphology functions to enhance the details, in which case your code will also work on larger areas of the images. You could look here.
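A minimal sketch of that approach, assuming img_binary is the 0/255 thresholded image from the question:
import cv2
import numpy as np
from skimage.morphology import skeletonize

# skeletonize expects a boolean (or 0/1) image
skeleton = skeletonize(img_binary > 0)

# back to a 0/255 uint8 image for display with OpenCV
skeleton_u8 = skeleton.astype(np.uint8) * 255
cv2.imshow("skeleton", skeleton_u8)
cv2.waitKey(0)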

Create gradient image in numpy for LUT (Look Up Tables)

What I'm trying to achieve: lookup tables to create a duotone effect, also called false color.
Say I have two colours: pure red and pure green, provided in hex format as ff0000 and 00ff00 respectively. We know that's essentially (255, 0, 0) and (0, 255, 0). I need to create a 256x1 gradient image in numpy with red and green at the two ends of the gradient.
I would strongly prefer to limit the dependencies to numpy and cv2.
Below is code that works just fine for me; however, all the RGB values are hardcoded, and I need to compute the LUT gradient map dynamically for any given left and right colors (LUT tables truncated for brevity):
lut = np.zeros((256, 1, 3), dtype=np.uint8)
lut[:, 0, 0] = [250,248,246,244,242,240,238,236,234,232,230, ...]
lut[:, 0, 1] = [109,107,105,103,101,99,97,95,93,91,89,87,85, ...]
lut[:, 0, 2] = [127,127,127,127,127,127,127,127,127,127,127, ...]
im_color = cv2.LUT(image, lut)
From here, modified to give numpy arrays:
def hex_to_rgb(hex):
    hex = hex.lstrip('#')
    hlen = len(hex)
    return np.array([int(hex[i:i + hlen // 3], 16) for i in range(0, hlen, hlen // 3)])
Then the numpy part:
def gradient(hex1, hex2):
    np1 = hex_to_rgb(hex1)
    np2 = hex_to_rgb(hex2)
    # shape (256, 1, 3), matching the hard-coded lut above
    return np.linspace(np1[None, :], np2[None, :], 256, dtype=int)
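For example, assuming image is a uint8 BGR image as in the question (and cv2/numpy are imported), the gradient could be applied with cv2.LUT roughly like this; cast the table to uint8 first, and note that OpenCV images are BGR, so you may want to swap the channel order of the colors:
lut = gradient('#ff0000', '#00ff00').astype(np.uint8)  # shape (256, 1, 3), uint8 for an 8-bit image
im_color = cv2.LUT(image, lut)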
I know the question has been answered, but I just want to ask the author if the code for the duotone effect can be shared. I have a brute-force solution that updates the image pixel by pixel; it works but is really inefficient. So I'm looking for a more efficient algorithm, and found this post inspiring, but haven't figured out a working solution from the clues. @Pono, it'd be great if you could share the code to create a duotone image using any 2 colors.
Never mind, I figured it out, and I'm sharing the code below in case someone else is looking for the same thing.
def gradient1d(rbg1, rbg2):
    bgr1 = np.array((rbg1[2], rbg1[1], rbg1[0]))
    bgr2 = np.array((rbg2[2], rbg2[1], rbg2[0]))
    return np.linspace(bgr2, bgr1, 256, dtype=int)

def duotone(image, color1, color2):
    img = image.copy()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    table = gradient1d(color1, color2)
    result = np.zeros((*gray.shape, 3), dtype=np.uint8)
    np.take(table, gray, axis=0, out=result)
    return result
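A hypothetical usage example (the file names are placeholders, not from the original post):
import cv2

img = cv2.imread("photo.jpg")                 # hypothetical input file
out = duotone(img, (255, 0, 0), (0, 255, 0))  # colors passed as RGB tuples
cv2.imwrite("duotone.jpg", out)               # hypothetical output file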

Why isn't my contour closed (Python, OpenCV)?

I've been trying to write a script in OpenCV for some greyscale image processing. However, I keep running into an issue when finding and drawing contours on thresholded images. Finding contours is easy and gives me the kind of result that I'm looking for. But when I choose the largest contour by area and try to draw it separately, I get a much more 'broken up' result. I've been trying to figure out what is wrong with my code for a while now, but frankly can't figure it out. Anybody else have a similar experience or a possible solution?
My (admittedly very messy) current code is:
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
import os
import math as m
import imutils as imt
read_directory = r'E:\Other\Ultrasound_Trad_Alg\Input'
write_directory = r'E:\Other\Ultrasound_Trad_Alg\Output'
os.chdir(read_directory)
image_files = os.listdir(read_directory)
for image_file in image_files:
    input_image_grey = cv.imread(image_file, 0)
    input_image_color = cv.imread(image_file)
    input_image_color_2 = input_image_color.copy()

    # Initial Black Background Masking Process:
    blurred_input = cv.GaussianBlur(input_image_grey, (7, 7), 0)
    _, thresholded_image_binary = cv.threshold(blurred_input, 0, 255, cv.THRESH_BINARY)
    input_contours, hierarchy = cv.findContours(thresholded_image_binary, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
    chosen_input_contour = max(input_contours, key=cv.contourArea)
    input_mask = np.zeros_like(input_image_grey)
    cv.drawContours(input_mask, chosen_input_contour, -1, 255, -1)

    # For Testing and Visualization:
    cv.drawContours(input_image_color, chosen_input_contour, -1, (0, 255, 0), 1)
    cv.drawContours(input_image_color_2, input_contours, -1, (0, 255, 0), 1)
    cv.imshow("test1", thresholded_image_binary)
    cv.imshow("test2", input_image_color)
    cv.imshow("test3", input_image_color_2)
    cv.imshow("mask", input_mask)
    print(cv.isContourConvex(chosen_input_contour))
    print(cv.contourArea(chosen_input_contour))
    cv.waitKey(0)
When I run this code, I get the following set of 4 images just to demonstrate what I am talking about:
https://i.stack.imgur.com/IP7Ip.jpg
As can be seen, the initial set of contours are pretty much exactly what I'm looking for. However, by isolating the largest contour in the image and drawing it separately, I get a broken up contour that does not work for me. I've also checked the contour areas and it shows that I should have exactly 5, meaning there shouldn't be any tiny hidden contours messing with results either. Any thoughts on what is happening?
I think your issue in your Python/OpenCV code is that you have:
input_mask = np.zeros_like(input_image_grey)
and it should be
input_mask = np.zeros_like(thresholded_image_binary)
The binary mask needs to be the same bit depth as your binary image, not your grayscale image. That is, it needs to be 1-bit and not 8-bit.
See if that fixes it.
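In context, that suggested change would look roughly like this (variable names taken from the question):
# build the mask from the thresholded (binary) image instead of the grayscale one
input_mask = np.zeros_like(thresholded_image_binary)
cv.drawContours(input_mask, chosen_input_contour, -1, 255, -1)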

Python & Numpy - Finding the Mode of Values in an Array that aren't Zero

I have a NumPy array (it's the red channel from an image).
I have masked a portion of it (setting those values to 0), and now I would like to find the mode of the values in the non-masked area.
The problem I'm running into is that the mode command keeps coming back with [0]. I want to exclude the 0 values (the masked area), but I'm not sure how to do this.
This is the command I was using to try and get mode:
#mR is the Numpy Array of the Red channel with the values of the areas I don't want at 0
print(stats.mode(mR[:, :], axis=None))
Returns 0 as my Mode.
How do I exclude 0 or the masked area?
Update - Full Code:
Here's my full code using the "face" image from scipy.misc. It still seems slow with that image, and the result is "107", which is way too high for the masked area (shadows), so it seems like it's processing the whole image, not just the area in the mask.
import cv2
import numpy as np
from scipy import stats
import scipy.misc
img = scipy.misc.face()
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
r, g, b = cv2.split(img_rgb)
img_lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l_channel, a_channel, b_channel = cv2.split(img_lab)
mask = cv2.inRange(l_channel, 5, 10)
cv2.imshow("mask", mask)
print(stats.mode(r[mask],axis=None))
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.waitKey(1)
You can just mask the array and use np.histogram:
counts, bins = np.histogram(mR[mR>0], bins=np.arange(256))
# mode
modeR = np.argmax(counts)
Update:
After the OP kindly posted their full code, I can confirm that stats.mode() is either extremely slow or never in fact completes (who knows why?).
On the other hand, @Quang Hoang's solution is as elegant as it is fast, and it also works for me in terms of respecting the mask.
I of course therefore throw my weight behind QH's answer.
My old answer:
Try
print(stats.mode(mR[mask],axis=None))
Except for the masking, calculating the mode of a numpy array efficiently is covered extensively here:
Most efficient way to find mode in numpy array
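If you do want to keep the cv2.inRange mask, note that it is a uint8 array of 0/255, so it needs to be turned into a boolean index before selecting pixels. A rough sketch combining that with the histogram idea (assuming r and mask come from the code in the question):
import numpy as np

selected = r[mask != 0]                                 # only pixels inside the mask
counts = np.bincount(selected.ravel(), minlength=256)
mode_r = counts.argmax()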
