What I'm trying to achieve: a lookup table to create a duotone effect, also called false color.
Say I have two colours: pure red and pure green, provided in hex format as ff0000 and 00ff00 respectively. We know that's essentially (255, 0, 0) and (0, 255, 0). I need to create a 256x1 gradient image in numpy with red and green at either end of the gradient.
I would strongly prefer to limit the dependencies to numpy and cv2.
Below is code that works for me just fine; however, all the RGB values are hardcoded, and I need to compute the LUT gradient map dynamically for any given left and right colors (LUT tables truncated for brevity):
lut = np.zeros((256, 1, 3), dtype=np.uint8)
lut[:, 0, 0] = [250,248,246,244,242,240,238,236,234,232,230, ...]
lut[:, 0, 1] = [109,107,105,103,101,99,97,95,93,91,89,87,85, ...]
lut[:, 0, 2] = [127,127,127,127,127,127,127,127,127,127,127, ...]
im_color = cv2.LUT(image, lut)
From here, modified to return numpy arrays:
def hex_to_rgb(hex):
    hex = hex.lstrip('#')
    hlen = len(hex)
    return np.array([int(hex[i:i+hlen//3], 16) for i in range(0, hlen, hlen//3)])
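For example, a quick sanity check with the two colours from the question:

hex_to_rgb('#ff0000')   # array([255,   0,   0])
hex_to_rgb('00ff00')    # array([  0, 255,   0])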
Then the numpy part:
def gradient(hex1, hex2):
    np1 = hex_to_rgb(hex1)
    np2 = hex_to_rgb(hex2)
    return np.linspace(np1[:, None], np2[:, None], 256, dtype=int)
I know the question has been answered, but I just want to ask the author if the code for the duotone effect can be shared. I have a brute-force solution that updates the image pixel by pixel; it works but is really inefficient. So I'm looking for a more efficient algorithm, and found this post inspiring, but I haven't figured out a working solution from the clues. @Pono, it'd be great if you could share the code to create a duotone image using any 2 colors.
Never mind, I figured it out and share the code below in case someone else is looking for the same thing.
def gradient1d(rgb1, rgb2):
    bgr1 = np.array((rgb1[2], rgb1[1], rgb1[0]))
    bgr2 = np.array((rgb2[2], rgb2[1], rgb2[0]))
    # uint8 so the table matches the dtype of the result array used in duotone() below
    return np.linspace(bgr2, bgr1, 256, dtype=np.uint8)
def duotone(image, color1, color2):
    img = image.copy()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    table = gradient1d(color1, color2)
    result = np.zeros((*gray.shape, 3), dtype=np.uint8)
    np.take(table, gray, axis=0, out=result)
    return result
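A minimal usage sketch (the input file name here is my own placeholder; the colour tuples are passed as RGB):

import cv2
import numpy as np

img = cv2.imread('input.jpg')                    # hypothetical input image (BGR)
out = duotone(img, (255, 0, 0), (0, 255, 0))     # duotone using pure red and pure green
cv2.imwrite('duotone.jpg', out)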
I'm trying to get the pixel coordinates of a specific ROI in an image. I created the ROI using a mask. The code and the result are shown below.
import cv2
import numpy as np
img = cv2.imread("Inxee.jpg")
img = cv2.resize(img, (640, 480))
mask = np.zeros(img.shape, np.uint8)
points = np.array([[273,167], [363, 167], [573, 353], [63, 353]]) ##taking random points for ROI.
cv2.fillPoly(mask, [points], (100, 0, 100))
img = cv2.addWeighted(img, 0.7, mask, 0.5, 0)
values = img[np.where((mask == (100, 0, 100)).all(axis=1))]
print(values)
##cv2.imshow("mask", mask)
cv2.imshow("img", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
result image
So in the image we can see the ROI.
I tried to use the
values = img[np.where((mask == (100, 0, 100)).all(axis=1))]
but here I'm getting only the values, not the coordinates.
So is there any way to get those coordinates?
Thanks for the solutions and possibilities, friends.
I just did,
val = np.where(mask > 0)
coordinate = list(zip(val[0], val[1]))
print(coordinate)
With this I got the coordinates!
Thanks!
To get the coordinates, you could do it like this:
pixels_in_roi = [(i_x, i_y) for i_x, col in enumerate(mask) for i_y, val in enumerate(col) if tuple(val) == (100, 0, 100)]
There are faster ways to do it.
I'm not sure what your goal is in the end, but it sounds like undistorting this area as if it was a top-down view could be the next step. In that case this might help you: https://pyimagesearch.com/2014/08/25/4-point-opencv-getperspective-transform-example/
Edit: Here is a better solution using np.where: https://stackoverflow.com/a/27175491/7438122
The input 'mask' has to be a numpy array then (which it should already be in your case), not a list.
Christoph is right though: If you tell us what you want to achieve, there might be a way without extracting the indices at all.
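For reference, a minimal sketch of the np.where route from the linked answer, assuming mask is the 3-channel BGR mask from the question:

ys, xs = np.where((mask == (100, 0, 100)).all(axis=2))   # row and column indices inside the ROI
coordinates = list(zip(xs, ys))                          # as (x, y) pairs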
I am working on an image processing/building problem. I have a smaller image that I want to place into a larger one. As normal, the image is represented as a 3D array. This works fine with the following code (both element_pixels and image_pixels are 3D ndarrays with depth 3 representing RGB; element_pixels is equal to or smaller than image_pixels in the other dimensions):
element_pixels = element.get_pixels()
image_pixels[element.position[0]:element.position[0]+element.height, element.position[1]:element.position[1]+element.width, :] = element_pixels
However I want to treat black pixels in the element as transparent. The simplest way to do this seems to be to mask the element so I don't modify image_pixels where element_pixel is black. I tried the following, but I am tying myself in knots:
element_pixels = element.get_pixels()
b = np.all(element_pixels == [0, 0, 0], axis=-1)
black_pixels_mask = np.dstack([b,b,b])
image_pixels[element.position[0]:element.position[0]+element.height, element.position[1]:element.position[1]+element.width, :][black_pixels_mask] = element_pixels
This looks to be correctly generating a mask but I can't figure out how to use it. I get the following error:
image_pixels[element.position[0]:element.position[0]+element.height, element.position[1]:element.position[1]+element.width, :][black_pixels_mask] = element_pixels
TypeError: NumPy boolean array indexing assignment requires a 0 or 1-dimensional input, input has 3 dimensions
The masking kind-of works (i.e. runs without exceptions) if I replace the final = element_pixels with a constant, but I'm struggling to extrapolate this to a solution.
Extra detail of sizes
element_pixels.shape=(40, 40,3)
image_pixels.shape=(100, 100,3)
image_pixels[element.position[0]:element.position[0]+element.height, element.position[1]:element.position[1]+element.width, :].shape = (40,40,3)
An MRE in 2D
This captures what I'm trying to do without the complexity of the extra dimension.
import numpy as np
bg = np.ones((10,10))*0.5
img = np.concatenate([np.zeros((5,1)),np.ones((5,1))], axis=1)
mask = img == 0
# copy the *non-zero* pixel values of img to a particular location in bg
bg[5:10,5:7][mask] = img # this throws exception
print(bg)
I discovered after some experimentation that the (perhaps obvious in hindsight) answer is that you have to apply the mask to both sides.
So taking my MRE:
import numpy as np
bg = np.ones((10,10))*0.5
img = np.concatenate([np.zeros((5,1)),np.ones((5,1))], axis=1)
mask = img > 0
bg[5:10,5:7][mask] = img[mask]
print(bg)
Or going back to my original code, the only line that changes is:
image_pixels[element.position[0]:element.position[0]+element.height, element.position[1]:element.position[1]+element.width, :][~black_pixels_mask] = element_pixels[~black_pixels_mask]
Well, you can use a 2D mask on a 3D array. Something like this will replace all black pixels of img with those of background:
img = np.random.randint(0, 2, (10, 10, 3))
background = np.random.randint(0, 2, (10, 10, 3))
mask = np.all(img == [0,0,0], axis=2)
img[mask] = background[mask]
I'm not sure I understand what is in image_pixels but I think you can do something similar.
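To connect that back to the sub-region assignment from the question, here is a minimal sketch; the shapes and the target position are made up:

bg = np.zeros((100, 100, 3), dtype=np.uint8)                   # stand-in for image_pixels
elem = np.random.randint(0, 256, (40, 40, 3), dtype=np.uint8)  # stand-in for element_pixels
mask = np.any(elem != 0, axis=2)                               # 2D mask: True where the element is NOT black
region = bg[10:50, 20:60]                                      # view into the target area
region[mask] = elem[mask]                                      # copies only the non-black pixels into bg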
I'm trying to apply a function over all pixels of an image in an efficient way (in my specific case I want to do some colour approximation per pixel, but I don't think this is relevant to the problem). The thing is that I've found different approaches to do so, but all of them apply a function over each component of a pixel, whereas what I want is to apply a function that receives a whole pixel (not a pixel component), that is, the 3 RGB components (I guess as a tuple, but I don't care about the format as long as I have the 3 components as parameters in my function).
If you are interested in what I have, this is the non-efficient solution to my issue (it works fine but is too slow):
def closest_colour(pixel, colours):
    closest_colours = sorted(colours, key=lambda colour: colours_distance(colour, pixel))
    return closest_colours[0]
# reduces the colours of the image based on the results of KMean
# img is image open with opencv.imread()
# colours is an array of colours
def image_color_reduction(img, colours):
    start = time.time()
    print("Color reduction...")
    reduced_img = img.copy()[...,::-1]
    width = reduced_img.shape[0]
    height = reduced_img.shape[1]
    for x in range(width):
        for y in range(height):
            reduced_img[x,y] = closest_colour(reduced_img[x,y], colours)
    end = time.time()
    print(f"Successfully reduced in {end-start} seconds")
    return reduced_img
I've followed this post: PIL - apply the same operation to every pixel, which seemed to be pretty clear and aligned with my issue. I've tried every kind of image formatting, I've tried multithreading (both with pool.map and pool.imap), I've tried numpy.apply_along_axis, and I finally tried PIL.point(), which I thought was the most similar solution to what I was looking for. Indeed, if you take a look at the official documentation for .point(), it says exactly: The function is called once for each possible pixel value. I find this really misleading, because after trying it I realized pixel value in this context does not refer to an RGB tuple but to each of the 3 RGB components (seriously, in what world?).
I would really appreciate it if somebody could share a bit of their experience and give me some light on this issue. Thank you in advance!!
(UPDATE)
As per your request I add more information on the specific problem I am approaching:
Given
an image M of size 1022*1080
an array of colours N with size 1 < |N| < 16
Reduce the colours of M by replacing each pixel's colour by the most similar one in N (thanks to your answers I know this is defined as Nearest Neighbor Color Quantization).
This is the missing implementation of colours_distance:
def colours_distance(c1, c2):
    (r1, g1, b1) = c1
    (r2, g2, b2) = c2
    return math.sqrt((r1 - r2)**2 + (g1 - g2)**2 + (b1 - b2)**2)
And this are the imports needed to run this code:
import cv2
import time
import math
The solution shown in my question solves the problem described in slightly less than 40s on average.
Assuming that your image is an (M, N, 3) numpy array, your color table is (K, 3), and you measure color distance as some sane vector norm, you can use scipy's cKDTree (or just KDTree) to optimize and vectorize the lookup for you.
First make a tree out of your color table:
from scipy.spatial import cKDTree

colors = ...  # K, 3 array
color_tree = cKDTree(colors)
Now you can query the tree directly to get the output image:
_, output = color_tree.query(img)
output will be an (M, N) index array into colors. The important thing is that KD trees are optimized to perform O(log K) lookups per pixel, not O(K) or O(K log K) as in your current implementation. Since the loops are implemented in C, you'll get a major boost from that too.
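A minimal end-to-end sketch of that idea (the random stand-in image and the 4-colour palette are my own assumptions):

import numpy as np
from scipy.spatial import cKDTree

img = np.random.randint(0, 256, (90, 160, 3))                      # stand-in (M, N, 3) image
colors = np.array([[0, 0, 0], [255, 0, 0], [0, 255, 0], [0, 0, 255]])
_, idx = cKDTree(colors).query(img)                                # (M, N) indices into colors
quantized = colors[idx]                                            # (M, N, 3) quantized image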
Note about "vectorization"
With numpy, there isn't a generic, efficient way to apply an arbitrary function across some axis of your image. To do calculations efficiently, numpy needs to be able to do those calculations in the backend for you, not using Python at all. The same is true of OpenCV. When you call e.g. np.mean() or cv.meanStdDev() or similar, these libraries iterate through your images in C/C++/Fortran/etc., so the code that gets executed needs to live there. However, you want to apply a function you've defined in Python to these values, which means you need to operate on Python objects directly, which removes all of the efficiency of doing operations in numpy/OpenCV/etc. and is why there's no instantly fast way to do these calculations. You mentioned df.apply() from Pandas in your post; note that apply() is actually slow, it does the looping in Python like you're currently doing, and generally you want to stay away from it for that reason. Numpy and OpenCV don't expose a method like apply() because it is not really a good way to do things.
Generally, to do operations efficiently, you need to vectorize your code, which in Python-land means to only utilize built-in numpy/opencv/etc functions that can operate on your data all at once, without writing loops (or without them implicitly being called in Python, like df.apply()).
Note that nothing here is specific to working on pixels (or their individual components), this is a general problem in trying to achieve fast computations in Python. That is, even if any of the solutions you've tried were to work on a pixel (as opposed to components), it'd still be slow anyways.
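As a tiny illustration of the gap (a sketch; the array size is arbitrary and timings will vary by machine):

import time
import numpy as np

a = np.random.rand(1000, 1000)

t = time.time()
s = sum(v for row in a for v in row) / a.size   # pure-Python loop over every element
print("python loop:", time.time() - t, "mean =", s)

t = time.time()
m = a.mean()                                    # the same reduction done inside numpy
print("numpy:", time.time() - t, "mean =", m)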
Solution
The specific problem you give as an example (nearest neighbor color quantization) is non-trivial to make fast, since for each pixel you need to figure out where you sit nearest in the list of colors. If you only have a few colors, like 8, it isn't terrible to just calculate the distance to all, but if you are trying to reduce the palette to 256 colors or something like that, it's a lot of compute. If you have only a few colors, then you can vectorize the whole operation by creating a 3d array representing the distance to each color at each x, y location, and taking the argmin across the color axis, which you can then use with a lookup table.
Here's an example implementation, reducing an image to 8 colors. We'll start with an image and some defined colors:
In [80]: img.shape
Out[80]: (90, 160, 3)
In [81]: colors
Out[81]:
array([[ 0, 0, 0],
[255, 0, 0],
[ 0, 255, 0],
[ 0, 0, 255],
[255, 255, 0],
[255, 0, 255],
[ 0, 255, 255],
[255, 255, 255]])
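(If you want to reproduce the session, the two arrays could be set up roughly like this; the random image is a stand-in of my own, the palette is the one shown above:)

import numpy as np
img = np.random.randint(0, 256, (90, 160, 3))
colors = np.array([[0, 0, 0], [255, 0, 0], [0, 255, 0], [0, 0, 255],
                   [255, 255, 0], [255, 0, 255], [0, 255, 255], [255, 255, 255]])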
Now we want, for each pixel location in the image, the distance to each color (we'll use the absolute-difference distance function as an example, but any vectorizable operation here will do). Here we can utilize broadcasting to get a resulting array of shape (h, w, n_colors):
In [83]: distances = np.sum(np.abs(img[..., np.newaxis] - colors.T), axis=2)
In [84]: distances.shape
Out[84]: (90, 160, 8)
Now you want to know which color resulted in the minimum distance for each pixel:
In [87]: nearest_colors = np.argmin(distances, axis=2)
In [88]: nearest_colors
Out[88]:
array([[7, 4, 3, ..., 2, 2, 4],
[5, 7, 6, ..., 3, 2, 5],
[5, 3, 7, ..., 3, 5, 7],
...,
[6, 5, 0, ..., 7, 6, 1],
[1, 6, 5, ..., 2, 0, 3],
[0, 1, 0, ..., 7, 5, 4]])
So at the first pixel, the closest color was the last one in my color list (all white), at the next pixel to the right the closest color was [255, 255, 0], and so on. Now you can use a lookup table to map from these to their actual color values. The way to do this with numpy is to use fancy indexing:
In [91]: quantized = colors[nearest_colors]
In [92]: quantized.shape
Out[92]: (90, 160, 3)
And here's your image with the new quantized colors.
A more efficient solution for this problem is to utilize a kd-tree, as MadPhysicist answered. However, color distance functions can be non-linear, and those distances may not map well to spatial data structures, in which case there are usually specialized implementations or very specific ways to make them faster, but this is closer to research and not appropriate for SO.
For other color quantization algorithms, this question has a lot of good examples: Fast color quantization in OpenCV
Try this solution and see if it is faster:
img = cv2.imread(path)
result = np.zeros_like(img)
colors_arr = [[0, 0, 255], [255, 0, 0], [0, 255, 0], [0, 255, 255], [255, 0, 255], [255, 255, 0]]
# Normalizing images and colors to 1.
colors = np.array(colors_arr, np.float32) / 255
img = img.astype(np.float32) / 255
# For each color making an array of weights.
weights = []
for i in range(colors.shape[0]):
    weights.append(np.sum(np.square(img - colors[i]), axis=2))
weights = np.array(weights, np.float32)
# Finding the index of minimum weight
weights = np.transpose(weights, axes=[1, 2, 0])
color_inds = np.argmin(weights, axis=2)
# Depending on minimum weight index assigning the color to the result
for i in range(len(colors_arr)):
    idx = np.where(color_inds == i)
    result[idx] = colors_arr[i]
cv2.imshow('', result)
cv2.waitKey()
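As a side note, the final per-colour loop could plausibly be collapsed into a single fancy-indexing step (a sketch, not part of the original answer):

result = np.array(colors_arr, np.uint8)[color_inds]   # maps each minimum-weight index straight to its BGR colour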
I was interested in the relative performance of using linalg.norm() versus cKDTree() for your dataset size: a 1022x1080 image with N (palette length) in the range 1..16.
#!/usr/bin/env python3
import numpy as np
import cv2
def QuantizeToGivenPalette(im, palette):
    """Quantize image to a given palette.
    The input image is expected to be a Numpy array.
    The palette is expected to be a list of R,G,B values."""
    # Calculate the distance to each palette entry from each pixel
    distance = np.linalg.norm(im[:,:,None].astype(float) - palette[None,None,:].astype(float), axis=3)
    # Now choose whichever one of the palette colours is nearest for each pixel
    palettised = np.argmin(distance, axis=2).astype(np.uint8)
    return palettised
################################################################################
# main
################################################################################
# Let's get some repeatable randomness
np.random.seed(42)
# Open a colorwheel, resize to match dimensions of question
M = 1022, 1080
im = cv2.imread('colorwheel.png', cv2.IMREAD_COLOR)
im = cv2.resize(im, M, interpolation = cv2.INTER_AREA)
# Make a full 256-entry palette of random colours, but we'll just use the first N
palette = np.random.randint(0,256,(256,3),dtype=np.uint8)
# Try quantizing with linalg.norm, for various palette lengths
pLengths = [4,8,12,16]
for pLength in pLengths:
    indices = QuantizeToGivenPalette(im, palette[:pLength])
    # Write image of just palette indices
    cv2.imwrite(f'DEBUG-indices-linalg{pLength}.png', indices)
    # Look up each pixel in the palette and revert to BGR and save
    BGR = palette[indices]
    cv2.imwrite(f'DEBUG-result-linalg{pLength}.png', BGR)
################################################################################
# NOW DO SAME THING BUT WITH KDTREE
################################################################################
from scipy.spatial import cKDTree
# Try quantizing with cKDTree, for various palette lengths
for pLength in pLengths:
    # Build our tree from the palette, only necessary once for any given palette
    treeFromPalette = cKDTree(palette[:pLength])
    # Lookup nearest indices for each pixel in image
    _, indices = treeFromPalette.query(im)
    # Write image of just palette indices
    cv2.imwrite(f'DEBUG-indices-cKDTree{pLength}.png', indices)
    # Look up each pixel in the palette and revert to BGR and save
    BGR = palette[indices]
    cv2.imwrite(f'DEBUG-result-cKDTree{pLength}.png', BGR)
I used this image as input and resized it to your stated dimensions:
The results were the same for both methods:
4 colours:
8 colours:
16 colours:
The interesting thing was the timings - all in milliseconds:
N norm() cKDTree()
4 147 485
8 307 308
12 449 530
16 601 542
If we plot those, you can see that cKDTree() only really comes into its own at the higher end of your N-values:
Keywords: Python, image processing, KDTree, linalg.norm, palette, quantisation, prime.
I would like to apply a filter/kernel to an image to alter it (for instance, perform vertical edge detection, diagonal blur, etc). I found this wikipedia page with some interesting examples of kernels.
When I look online, filters are implemented using OpenCV or default matplotlib/Pillow functions. I want to be able to modify an image using only numpy arrays and functions like matrix multiplication and such (there doesn't appear to be a default numpy function to perform the convolution operation). I've tried very hard to figure it out, but I keep making errors, and I'm also relatively new to numpy.
I worked out this code to convert an image to greyscale:
import numpy as np
from PIL import Image
img = Image.open("my_path/my_image.jpeg")
img = np.array(img.resize((180, 320)))
grey = np.zeros((320, 180))
grey_avg_array = (np.sum(img,axis=-1,keepdims=False)/3)
grey_avg_array = grey_avg_array.astype(np.uint8)
grey_image = Image.fromarray(grey_avg_array)
I have tried to multiply my image by a numpy array [[1, 0, -1], [1, 0, -1], [1, 0, -1]] to implement edge detection but that gave me a broadcasting error. What would some sample code/useful functions that can do this without errors look like?
Also: a minor problem I've faced all day is that PIL can't display (x, x, 1) shaped arrays as images. Why is this? How do I get it to fix this? (np.squeeze didn't work)
Note: I would highly recommend checking out OpenCV, which has a large variety of built-in image filters.
Also: a minor problem I've faced all day is that PIL can't display (x, x, 1) shaped arrays as images. Why is this? How do I get it to fix this? (np.squeeze didn't work)
I assume the issue here is with processing grayscale float arrays. To fix this issue, you have to convert the float arrays to np.uint8 and use the 'L' mode in PIL.
img_arr = np.random.rand(100, 100) # Our float array in the range (0, 1)
uint8_img_arr = np.uint8(img_arr * 255) # Converted to the np.uint8 type
img = Image.fromarray(uint8_img_arr, 'L') # Create PIL Image from img_arr
As for doing convolutions, SciPy provides functions for doing convolutions with kernels that you may find useful.
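For instance, a minimal sketch with scipy.ndimage, assuming grey_avg_array from the question and the asker's vertical-edge kernel:

import numpy as np
from scipy.ndimage import convolve

kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float)
# Note: ndimage.convolve flips the kernel (true convolution); use scipy.ndimage.correlate for a plain sliding dot product
edges = convolve(grey_avg_array.astype(float), kernel, mode='constant')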
But since we're solely using NumPy, let's implement it!
Note: To make this as general as possible, I am adding a few extra parameters that may or may not be important to you.
# Assuming the image has channels as the last dimension.
# filter.shape -> (kernel_size, kernel_size, channels)
# image.shape -> (width, height, channels)
def convolve(image, filter, padding = (1, 1)):
    # For this to work neatly, filter and image should have the same number of channels
    # Alternatively, filter could have just 1 channel or 2 dimensions
    if(image.ndim == 2):
        image = np.expand_dims(image, axis=-1) # Convert 2D grayscale images to 3D
    if(filter.ndim == 2):
        filter = np.repeat(np.expand_dims(filter, axis=-1), image.shape[-1], axis=-1) # Same with filters
    if(filter.shape[-1] == 1):
        filter = np.repeat(filter, image.shape[-1], axis=-1) # Give filter the same channel count as the image

    #print(filter.shape, image.shape)
    assert image.shape[-1] == filter.shape[-1]

    size_x, size_y = filter.shape[:2]
    width, height = image.shape[:2]

    output_array = np.zeros(((width - size_x + 2*padding[0]) + 1,
                             (height - size_y + 2*padding[1]) + 1,
                             image.shape[-1])) # Convolution Output: [(W−K+2P)/S]+1
    padded_image = np.pad(image, [
        (padding[0], padding[0]),
        (padding[1], padding[1]),
        (0, 0)
    ])

    for x in range(padded_image.shape[0] - size_x + 1): # -size_x + 1 is to keep the window within the bounds of the image
        for y in range(padded_image.shape[1] - size_y + 1):
            # Creates the window with the same size as the filter
            window = padded_image[x:x + size_x, y:y + size_y]
            # Sums over the product of the filter and the window
            output_values = np.sum(filter * window, axis=(0, 1))
            # Places the calculated value into the output_array
            output_array[x, y] = output_values

    return output_array
Here is an example of its usage:
Original Image (saved as original.png):
filter = np.array([
    [1, 1, 1],
    [1, 1, 1],
    [1, 1, 1]
], dtype=np.float32)/9.0 # Box Filter
image = Image.open('original.png')
image_arr = np.array(image)/255.0
convolved_arr = convolve(image_arr, filter, padding=(1, 1))
convolved = Image.fromarray(np.uint8(255 * convolved_arr), 'RGB') # Convolved Image
Convolved Image:
A few things:
OpenCV, SciPy and scikit-image all use Numpy arrays as the standard way to store and manipulate images and are all largely interoperable with Numpy and each other
as regards plotting im with shape (x,y,1), you can just take the zeroth plane and plot that, i.e. newim = im[...,0]
When converting an RGB image to greyscale, rather than add all the RGB components up and divide by 3, you could just calculate the mean:
grey = np.mean(im, axis=2)
Actually the recommended weightings in ITU-R 601-2 are
L = 0.299 * Red + 0.587 * Green + 0.114 * Blue
So, you can use np.dot() to do that:
grey = np.dot(RGBimg[...,:3], [0.299, 0.587,0.114]).astype(np.uint8)
As regards finding vertical edges, you can do this with Numpy by subtracting each pixel from the one to its immediate right, i.e. differencing. Here is a little example, I also drew the shapes with Numpy so you can see a way to do that without using OpenCV since it seems to upset you so much ;-)
#!/usr/bin/env python3
import numpy as np
# Create a test image with a white square on black
rect = np.zeros((200,200), dtype=np.uint8)
rect[40:-40,40:-40] = 255
# Create a test image with a white circle on black
xx, yy = np.mgrid[:200, :200]
circle = (xx - 100) ** 2 + (yy - 100) ** 2
circle = (circle<4096).astype(np.uint8)*255
# Concatenate side-by-side to make our test image
im = np.hstack((rect,circle))
That now looks like this:
# Calculate horizontal differences only finding increasing brightnesses
d = im[:,1:] - im[:,0:-1]
# Calculate horizontal differences finding increasing or decreasing brightnesses
d = np.abs(im[:,1:].astype(np.int16) - im[:,0:-1].astype(np.int16))
Not very efficient, but you could extend your code with the following to detect edges:
edge = np.zeros([322, 182])
for i in range(grey_avg_array.shape[0]-2):
    for j in range(grey_avg_array.shape[1]-2):
        edge[i+1, j+1] = np.sum(grey_avg_array[i:i+3, j:j+3]*[[1, 0, -1], [1, 0, -1], [1, 0, -1]])
edge = edge.astype(np.uint8)
edge_img = Image.fromarray(edge)
edge_img
To show image in the (say) Jupyter Notebook, you could just type the variable name (after you have done Image.fromarray()) as I have written above in the last line.
I'm working with OpenCV in Python, and in the edge detection script here I've encountered something I've never seen before. I apologize if this question has been asked before on here, but I'm not really sure what to search for.
I've pasted the relevant piece below:
while True:
    flag, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    thrs1 = cv2.getTrackbarPos('thrs1', 'edge')
    thrs2 = cv2.getTrackbarPos('thrs2', 'edge')
    edge = cv2.Canny(gray, thrs1, thrs2, apertureSize=5)
    vis = img.copy()
    vis /= 2
    vis[edge != 0] = (0, 255, 0) #This is the line I'm trying to figure out
    cv2.imshow('edge', vis)
The code isn't mine, but is part of the OpenCV documentation. As best as I can tell, vis[edge != 0] is going through each element in edge, comparing it to true, and then somehow (this is the strange part to me) turning the result of the boolean evaluation into xy coordinates for vis, and then setting the image value to green.
It just seems a little magical to me, as I've never encountered anything like this, since I'm mostly a C/C++ programmer. Can someone point me to the docs where I can read up on it? I have STFW'd unsuccessfully because I don't know what to call this behavior.
vis is a numpy array, and [edge != 0] is boolean (mask) indexing, similar in effect to using numpy.where(), so it's thresholding the values with Canny and then drawing a green line on the vis image where the edges are.
Here is an analogous example.
import numpy as np
x = np.arange(10)
y = np.zeros(10)
print(y)
y[x > 3] = 10
print(y)
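And here is a closer analogue to the vis/edge case, as a minimal sketch (the tiny shapes are made up):

import numpy as np
vis = np.zeros((4, 5, 3), dtype=np.uint8)   # tiny stand-in colour image
edge = np.zeros((4, 5), dtype=np.uint8)     # stand-in edge map
edge[1, 2] = 255
vis[edge != 0] = (0, 255, 0)                # the matching pixel gets painted green
print(vis[1, 2])                            # [  0 255   0]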