Placing a Laplacian Pyramid Image onto a black canvas changes the background - python

I've built a function that takes an image and builds a Laplacian pyramid from it. I want to take, say, the first image of the Laplacian pyramid and place it onto a black canvas (built with np.zeros).
I've done this, but the black canvas takes on a color similar to the Laplacian image instead of remaining black.
The code simply replaces an NxM region of the canvas with the Laplacian image:
canvas[0:768, 0:1024] = laplace_image
I was wondering what exactly I'm missing here, as trying this with a grayscale image yields the correct canvas.
And the plotting code which is probably the issue:
plt.figure()
plt.imshow(canvas, cmap='gray')
plt.show()
Here is an example of the values in a Laplacian Image
[[0.00206756 0.00217308 0.00229568 0.00241833 0.00253975 0.0026407
0.0027411 0.00283026 0.00289416 0.00295967 0.00302006 0.003061
0.00308811 0.00310638 0.00311357 0.00311655 0.00312005 0.00312285
0.00311985 0.00311802 0.003109 0.00308746 0.00304459 0.00298541
0.00291537 0.00283966 0.00276133 0.00267244 0.00255839 0.00242822
0.002288 0.002139 ]
[0.00066538 0.00070738 0.00075546 0.00080446 0.00084945 0.00087207
0.00088813 0.00091252 0.0009471 0.00099087 0.00103915 0.00107427
0.00109442 0.00109901 0.00110466 0.00110936 0.0011094 0.0011042
0.0010959 0.00109445 0.00109941 0.00108648 0.00105162 0.00103264
0.00101328 0.00098499 0.00094468 0.00089966 0.00084997 0.00079252
0.00072701 0.00066181]]

Setting vmin=0 will ensure that all zeros in canvas get interpreted as black by imshow:
plt.figure()
plt.imshow(canvas, cmap='gray', vmin=0)
plt.show()
Before it's fed into your colormap, the data in canvas is first normalized so that the smallest value corresponds to black and the largest value corresponds to white. You can control the normalization by passing in the vmin and vmax arguments to imshow. For cmap=gray, any values x <= vmin will get displayed as black, and any values x >= vmax will get displayed as white.
You can reproduce a problem similar to the one you describe if there are any negative values in the image data:
img = np.zeros((500,1000))
img[:, :250] = -2
img[:, 250:500] = 2
plt.imshow(img, cmap='gray')
Passing in vmin=0 will cause the zeros in the second half of img to be displayed as black instead of gray:
plt.imshow(img, cmap='gray', vmin=0)
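If you also want the pasted Laplacian values to keep their absolute brightness rather than being stretched to white, you can pin both ends of the range. A minimal sketch, assuming the canvas holds float data in [0, 1] (adapt vmax to whatever range your pyramid level actually uses):
plt.figure()
plt.imshow(canvas, cmap='gray', vmin=0, vmax=1)  # fixed range: 0 maps to black, 1 maps to white
plt.show()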

Related

when I apply median filter to image, it turns purple. why?

I have an image. I added salt & pepper noise to it, and then applied a 2D median filter to remove the noise. But after this process the image turned purple.
Here is my code.
M = 3
N = 3
modifyA = np.pad(image, [(math.floor(M/2), math.floor(N/2))])
B = np.zeros([(image.shape[0]), (image.shape[1])])
med_indx = round((M*N)/2)  # median index
for i in range((modifyA.shape[0])-(M-1)-1):
    for j in range((modifyA.shape[1])-(N-1)-1):
        temp = modifyA[i:i+(M-1), j:j+(N-1)]
        # red, green and blue channels are traversed separately
        for k in range(2):
            tmp = temp[:, :, k]
            B[i, j] = np.median(tmp[:])
B = B.astype(np.uint8)
imgplot = plt.imshow(B)
plt.show()
Where could the error be?
As @gre_gor wrote in their comment, imshow is rendering your data as a pseudocolor image. More specifically, it uses the default colormap viridis whenever the input is not RGB(A).
Take a look at the documentation: https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html
To display a grayscale version of your image refer to this part of the doc:
The input may either be actual RGB(A) data, or 2D scalar data, which will be rendered as a pseudocolor image. For displaying a grayscale image set up the colormapping using the parameters cmap='gray', vmin=0, vmax=255.
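Applied to the code above, that would look like the following (a minimal sketch; the vmin/vmax of 0 and 255 assume an 8-bit image, which matches the uint8 cast):
imgplot = plt.imshow(B, cmap='gray', vmin=0, vmax=255)
plt.show()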

Plotting HSV channel histograms from a BGR image Opencv

I have B,G,R histograms that look like the following:
Image Histogram for B channel of an image
Description: on the X axis I have the values from 0-255 that each pixel can take, and on the Y axis I have the number of pixels that have that particular X value.
My code for the same is:
hist1 = cv2.calcHist([image],[0],None,[256],[0,256])
plt.plot(hist1, color='b')
plt.xlabel("Value (blue)")
plt.ylabel("Number of pixels")
plt.title('Image Histogram For Blue Channel')
plt.show()
My question is that I need to get the same plot - X axis with values, Y axis with the number of pixels - for the HSV channels. Basically, instead of B, G, and R plots, I need the same histograms but for H, S, and V.
I got the following code:
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = img2[:,:,0], img2[:,:,1], img2[:,:,2]
hist_h = cv2.calcHist([h],[0],None,[256],[0,256])
#hist_s = cv2.calcHist([s],[0],None,[256],[0,256])
#hist_v = cv2.calcHist([v],[0],None,[256],[0,256])
plt.plot(hist_h, color='r', label="hue")
Which gives me the following plot: Hue plot for an image
But from what I've read so far, BGR and HSV are different color spaces. So, I want to know, that when I'm using the calcHist function after converting to HSV, and then splitting into three channels, those channels by default are H,S and V? It's not that they're actually only BGR, but just simply mislabelled H, S and V? I just want to verify how both the methods are practically the same, but BGR and HSV are different color spaces.
Edit: Here's the source image
Image
Most likely you have a synthetic image with nothing in the red and green channels and some random data centred on 128 in the blue channel.
When you go to HSV colourspace, all the hues are centred on 110 which corresponds to 220 degrees which is blue in the regular 0..360 HSV range. Remember OpenCV uses a range of 0..180 for Hue when using uint8 so that it fits in uint8's range of 0..255. So you effectively need to multiply the 110 you see in your Hue histogram by 2... making 220 which is blue.
See bottom part of this figure.
As you seem so uncertain of your plotting, I made histograms of the HSV channels for you below. I used a different tool to generate them, but don't let that bother you - in fact confirmation from a different tool is always a good sign.
First, here are the Hue (left), Saturation (centre) and Value (right) channels side-by-side:
Now the Hue histogram:
This tells you all the hues in the image are similar - i.e. various shades of blue.
Now the Saturation histogram:
This tells you that the colours in the image are generally low-to-medium saturated with no really vivid colours.
And finally, the Value histogram:
This tells you the image is generally mid-brightness, with no dark shadows and a small peak of brighter areas on the right of the histogram corresponding to where the white parts are in the original.
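If you want to produce the same style of plot directly in matplotlib, a minimal sketch along the lines of your own code might look like this (the names follow your snippet; note the hue bins only run to 180 because of OpenCV's 0..180 hue range):
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(img2)
hist_h = cv2.calcHist([h], [0], None, [180], [0, 180])  # hue is 0..179 for uint8 in OpenCV
hist_s = cv2.calcHist([s], [0], None, [256], [0, 256])
hist_v = cv2.calcHist([v], [0], None, [256], [0, 256])
plt.plot(hist_h, color='r', label='hue')
plt.plot(hist_s, color='g', label='saturation')
plt.plot(hist_v, color='b', label='value')
plt.xlabel("Value")
plt.ylabel("Number of pixels")
plt.legend()
plt.show()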

Problem with defining a hole filling function (according to the mathematical definition) in python

I'm new to python, so please consider that :)
I was trying to fill the holes of a binary image after applying "closing" to it, without using any built-in functions. I wanted to do something like this (below) on my head CT picture:
repeated erosion with a structuring element (+) (similar to the equation for it)
So I wrote this code, but I still have a problem with the initial point.
def holefiling(img, B):
    initial = np.zeros_like(img)
    imginv = cv.bitwise_not(img)
    i = np.random.choice(img.shape[0])
    j = np.random.choice(img.shape[1])
    while img[i, j] != np.max(img):  # finding a white hole pixel with the brightest intensity
        i = np.random.choice(img.shape[0])
        j = np.random.choice(img.shape[1])
    initial[i, j] = imginv[i, j]
    #initial = cv.bitwise_not(initial)  #?
    for k in (0, 10000):  # 10000 iterations of this algorithm might solve all of it
        erosion = cv.erode(initial, B, iterations=1)  # X_k eroded with structuring element B
        X_k = initial & erosion
        initial = X_k
    Output = X_k | img
    return Output
kernel=cv.getStructuringElement(cv.MORPH_CROSS,(3,3))# defining a 3*3 cross(+) structure
Holefilled=holefiling(closing,kernel)
#showing the result
plt.figure()
plt.suptitle('fingerprint.tif')
plt.subplot(131)
plt.title(r'${Original_{img}}$')
plt.imshow(img, cmap='gray', vmin=0, vmax=np.max(img))
plt.axis(False)
plt.subplot(132)
plt.title(r'${closing}$')
plt.imshow(closing, cmap='gray', vmin=0, vmax=np.max(closing))
plt.axis(False)
plt.subplot(133)
plt.title(r'${Holefilled}$')
plt.imshow(Holefilled, cmap='gray', vmin=0, vmax=np.max(Holefilled))
plt.axis(False)
The result doesn't show any changes to the picture, and I'm confused about why. (In fact, I thought the initial point had to be in the brightest area, and also that increasing the number of iterations should make a difference, yet nothing happened.) Here you can see the result:
zooming in on the results
I think the problem might be related to one of these:
1. the initial point must be somewhere inside the holes, but I don't know how to find it :|
2. the process of updating X_k in the code has a small problem
3. the structuring element (cross +) should be something else (the chances of this are way too low :D)
So, to everyone who can help me with this: thank you so much in advance.
Initial Point of a Hole-Filling Process
I have found an answer to my question, but since it may be someone else's question too, I won't delete the question and will add my answer instead.
I solved the problem of finding the initial point, and you may want to know how: I look for a hole pixel that has a zero value while the sums along the four lines around it look like filled lines. (If you want to fill white holes inside a black area, you should instead look for a pixel with value 1 while the sums along the four lines around it are zero.)
Here is my code:
def holefiling(img, B):
    img1 = np.array(img)
    # build the complement (inverse) of the binary image
    thresh = 0/5
    binarr = np.where(img1 > thresh, 0, 1)  # np.where(img > thresh, 1, 0) would give the reversed colors
    # convert the numpy array back to an image matrix
    binimg = np.asmatrix(binarr)
    imginv = binimg.astype('uint8')
    hf = 13  # depends on the situation of the hole
    # finding an initial point:
    # 1. define a black "initial" image
    initial = np.zeros_like(img)
    # look for a hole pixel inside a filled area, skipping the image edges ([hf : shape - hf]):
    for i in range(0 + hf, img.shape[0] - hf):
        for j in range(0 + hf, img.shape[1] - hf):
            # check whether pixel (i, j) is black while the four lines around it are white (and vice versa)
            if (np.sum(img[i-hf:i+hf, j-hf]) >= hf + hf/2 and np.sum(img[i-hf:i+hf, j+hf]) >= hf + hf/2
                    and np.sum(img[i-hf, j-hf:j+hf]) >= 1/2 and np.sum(img[i+hf, j-hf:j+hf]) >= 1/2
                    and img[i, j] == 0):
                # (i, j) is an initial point for the hole-filling process
                i_init = i
                j_init = j
                print(i_init, j_init)
                initial[i_init, j_init] = imginv[i_init, j_init]
    print(img[i_init, j_init])
    # show the initial points that were found
    plt.figure()
    plt.title(r'${InitialPoints}$')
    plt.imshow(initial, cmap='gray', vmin=0, vmax=np.max(initial))
    plt.axis(False)
    plt.show()
    for k in range(100):  # 100 iterations of the algorithm should be enough to fill everything
        dilation = cv.dilate(initial, B, iterations=1)
        X_k = imginv & dilation
        # to see how the algorithm evolves, plot initial, dilation, imginv and X_k here on each iteration
        initial = X_k
    # img is the complement of the set used in the definition, so the intersection of the two
    # complements is equivalent to the union of the two original images
    Output = X_k | img
    return Output
To check how it works, I run the code below:
img = cv.imread('HeadCT.tif', 0)
img = np.array(img)
# set the threshold to 100 to make the image binary
thresh = 100
binarr = np.where(img > thresh, 1, 0)
# convert the numpy array back to an image matrix
binimg = np.asmatrix(binarr)
binimg1 = binimg.astype('uint8')
kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE, (20, 20))  # a 20x20 ellipse structuring element
closing = cv.morphologyEx(binimg1, cv.MORPH_CLOSE, kernel)  # closing with kernel
kernel1 = cv.getStructuringElement(cv.MORPH_ELLIPSE, (5, 5))  # a 5x5 ellipse structuring element
# testing my function
Holefilled1 = holefiling(closing, kernel1)  # for one of the holes
plt.figure()
plt.suptitle('HeadCT.tif')
plt.subplot(131)
plt.title(r'${Original_{img}}$')
plt.imshow(binimg1, cmap='gray', vmin=0, vmax=np.max(binimg1))
plt.axis(False)
plt.subplot(132)
plt.title(r'${closing_{on_{original img}}}$')
plt.imshow(closing, cmap='gray', vmin=0, vmax=np.max(closing))
plt.axis(False)
plt.subplot(133)
plt.title(r'${Holefilled_{after_{closing}}}$')
plt.imshow(Holefilled1, cmap='gray', vmin=0, vmax=np.max(Holefilled1))
plt.axis(False)
plt.show()
Here is the input image: HeadCT.tif
And here is the output: results of hole-filling after closing
You can use this result also for finding the boundaries of an object.
I hope you enjoyed it as much as I did. :)
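For comparison, here is a minimal sketch of a common variant of the same iteration (dilate X_{k-1} by B, then intersect with the complement of the image): instead of searching for a seed inside each hole, it seeds the dilation from the image border and treats every complement pixel the flood never reaches as a hole. This assumes a binary uint8 image with values 0/1 whose background touches the border; the function name fill_holes is just for illustration:
import cv2 as cv
import numpy as np

def fill_holes(img, B, iterations=100):
    comp = np.where(img > 0, 0, 1).astype('uint8')  # complement of the binary image
    X = np.zeros_like(comp)
    X[0, :], X[-1, :], X[:, 0], X[:, -1] = comp[0, :], comp[-1, :], comp[:, 0], comp[:, -1]  # border seed
    for _ in range(iterations):
        X = cv.dilate(X, B, iterations=1) & comp  # grow the background inside the complement
    holes = comp & np.where(X > 0, 0, 1).astype('uint8')  # complement pixels never reached from the border
    return img | holes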

OpenCV copyMakeBorder changes grayscale value

I'm trying to add padding to my grayscale image using copyMakeBorder. It does work and adds the padding I want, but the grayscale values change and the image gets brighter. I want to keep my values and just add padding. Why does it even interact with colors?
padded_img = cv2.copyMakeBorder( img, 0, 0, 0, pad_value, cv2.BORDER_CONSTANT)
You are displaying the image using a colormap with adaptive range. From the documentation of matplotlib's imshow:
norm : Normalize, optional
The Normalize instance used to scale scalar data to the [0, 1] range before mapping to colors using cmap. By default, a linear scaling mapping the lowest value to 0 and the highest to 1 is used. This parameter is ignored for RGB(A) data.
As your image only contains relatively light colors (high intensity values), it appears to you as if the image would lighten up with the border. In fact, the first display of your image without the border was darkened (contrast-enhanced) by imshow.
Pass a Normalize object to your imshow call to specify the correct value range of your image, e.g.
imshow(..., norm=matplotlib.colors.Normalize(vmin=0, vmax=255))
Do this for both before and after outputs.
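Equivalently, since vmin and vmax are shorthand for a linear Normalize, a minimal sketch for an 8-bit grayscale image (the 0-255 range is an assumption about your data) would be:
plt.figure(); plt.imshow(img, cmap='gray', vmin=0, vmax=255)         # before padding
plt.figure(); plt.imshow(padded_img, cmap='gray', vmin=0, vmax=255)  # after padding
plt.show()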
ypnos' answer is perfectly fine. Alternatively, you can make these changes to the pyplot params and use imshow without worrying about having to add the colormap every time you display something. For example:
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (20, 20)
# Grayscale mapping
plt.rcParams["image.cmap"] = 'gray'
# Now simply use imshow anywhere in your code
plt.imshow(img)

matplotlib markers / mask on image pixels

So I have an image and a pixel mask for that image, where the mask is the same size as the image and contains values of 0 and 1: if it is 0 I don't want to modify the image, and if it is 1 I want to add a transparent color over that pixel of the image.
Basically I want to highlight certain segments of the image but still see what is underneath.
Now I have searched high and low but haven't found a simple way to do this. I used np.where with the mask to get the pixel locations of the 1's to use with the plot functions. I first tried scatter plots with a small marker size and no edge color (small scatter plot markers in matplotlib are always black), but the markers are not one image pixel in size; they seem to have an absolute size, so depending on the size of the figure the transparency is affected and weird patterns are created by the overlapping markers.
Just the regular pyplot plot function created the exact look I desired (where the coloring was smooth and invariant to figure size) but it also colored horizontal connections between disjoint segments in the mask (since it is drawing lines I guess), so I couldn't use that.
What worked the best was patches, which I came across in this question: (How to set a fixed/static size of circle marker on a scatter plot?). I found that rectangular patches with width and height of 1 gave me the exact desired effect, where I could put a transparent color over certain pixels of the image. However, this produced a ton (tens of thousands) of rectangles for certain images, and so it was quite slow. Even when using a PatchCollection instead of calling add_patch every time, it was still slow.
Now I can probably just join adjacent rectangles to reduce the number of things needing to be drawn, but I was just wondering if there was an easier way to do this?
Thanks.
You can do a semitransparent overlay either using masked arrays or by setting the alpha values in an RGBA image. Here are both worked through (using the example of three semitransparent red squares placed over a circular pattern), and they give similar images (so I'll only show one):
from pylab import *
from numpy import ma
x = y = linspace(-6, 6, 100)
X, Y = meshgrid(x, y)
z3 = X*X + Y*Y # circular pattern
# first, do this with a masked array
figure()
# z4 = 3 diagonal square
# zm = a uniform image (ones), with a mask of squares (~z4)
z4 = np.repeat(np.repeat(eye(3, dtype=bool), 40, axis=0), 40, axis=1)
zm = ma.masked_where(~z4, ones((120,120)))
imshow(z3, cmap=cm.jet)
imshow(zm, cmap=cm.bwr, alpha=.3, vmin=0, vmax=1) #cm.bwr is an easy way to get red
# do this by changing alpha for each pixel
figure()
z5 = zeros((120, 120, 4), dtype=float)
z5[..., 0] = 1
z5[..., 3] = .4*z4.astype(float)
imshow(z3, cmap=cm.jet)
imshow(z5)
show()
I think both approaches can produce the same results for all cases, but:
1. the masked arrays can be a more direct approach if the mask or composition becomes complicated, and masking gives you more flexibility in drawing your overlay image since, for example, you can use colormaps rather than specifying the full RGBA for every pixel, but,
2. the masked array approach doesn't give full pixel-by-pixel control over the alpha value like RGBA does.
Here's a more complicated image using masked arrays:
z1 = sin(X*Y)
z1 = cos(2*X)
z2 = cos(5*(X+Y))
zm = ma.masked_where((z2<.5) & (Y>0), z1)
figure()
imshow(z3)
imshow(zm, cmap=cm.gray, alpha=.4, vmin=-2, vmax=2)
show()
It's a bit crazy, but here's what's going on: the primary image is a circular pattern that goes from blue to red (z3). Then there are vertical bars that faintly shade this (z1), but only in half of the figure and in narrow alternate diagonal bands on the other half (due to the mask).
Just to add on to what tom10 has posted, the masked arrays do work great with colormaps, but I also wrote a small function in the meantime that should work with any RGB color tuple.
def overlayImage(im, mask, col, alpha):
    maskRGB = np.tile(mask[..., np.newaxis], 3)
    untouched = (maskRGB == False) * im
    overlayComponent = alpha * np.array(col) * maskRGB
    origImageComponent = (1 - alpha) * maskRGB * im
    return untouched + overlayComponent + origImageComponent
im is the rgb image
mask is a boolean mask of the image, such that mask.shape + (3,) = im.shape
col is just the 3-tuple rgb value you want to mask the image with
alpha is just the alpha value / transparency for the mask
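A minimal usage sketch (the filename, mask condition and red color tuple below are just placeholders; for a uint8 image you would use e.g. (255, 0, 0) and cast the result back to uint8):
im = plt.imread('photo.png')[..., :3]   # hypothetical file; RGB floats in [0, 1]
mask = im[..., 0] > 0.5                 # any boolean mask with shape im.shape[:2]
highlighted = overlayImage(im, mask, (1.0, 0.0, 0.0), 0.4)
plt.imshow(highlighted)
plt.show()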
I also needed a clear contour on my areas. Thus, you can easily add a contour plot on top: e.g., create a dummy numpy array and set a different value in each area of interest.
Here's an example built on top of tom10's answer with a different condition:
x = y = linspace(-6, 6, 100)
X, Y = meshgrid(x, y)
z3 = X*X + Y*Y # circular pattern
# first, do this with a masked array
figure()
imshow(z3, cmap=cm.jet, extent = (-6,6,-6,6));
zm = ma.masked_where((z3>=0.7) & (z3<=1.5), ones(np.shape(z3)));
imshow(zm, cmap=cm.bwr, alpha=.4, vmin=0, vmax=1, extent = (-6,6,-6,6)) #cm.bwr is an easy way to get red
# Build dummy array of 1s and 0s (you can play with different values to obtain different contours for different regions):
temp_vector = ones(np.shape(z3));
temp_vector[(z3>=0.7) & (z3<=1.5)] = 0.0;
temp_vector[(z3>8.2)] = 2.0; # etc.
# Create contour. I found only one contour necessary:
contour(X, Y, temp_vector, 1, colors=['r','g']);
show()
Which yields:
