Marking boundary given mask - python

I have a volume of image slices and their corresponding masks. I've been trying to use the skimage.segmentation library to mark the object of interest in each slice according to its mask.
import numpy as np
from skimage.segmentation import mark_boundaries
import matplotlib.pyplot as plt

def plot_marked_volume(marked_image_volume, mask):
    for slice_idx in range(len(marked_image_volume)):
        if np.count_nonzero(mask[slice_idx, :, :]):
            plt.figure(figsize=(10, 10))
            edges_pz = mark_boundaries(marked_image_volume[slice_idx, :, :],
                                       mask[slice_idx].astype(int),
                                       color=(1, 0, 0), mode='thin')
            plt.imshow(edges_pz)
            plt.title('slice ' + str(slice_idx))
            plt.show()
Here's a sample image and mask slice:
However, running the code produces the boundaries on a black background.
I am expecting an output like the following yellow boundary (ignore the 'CG'):
Any thoughts and suggestions as to what might be the issue are appreciated.

I couldn't fully understand from the data you provided what you are trying to do, but if you just want the mask to be shown on top of the original image, this is what you may like to do:
import cv2
import matplotlib.pyplot as plt

fig, axarr = plt.subplots(1, 3, figsize=(15, 40))
axarr[0].axis('off')
axarr[1].axis('off')
axarr[2].axis('off')
imgPath = "download.jpeg"
image = cv2.imread(imgPath)
# Show the original image; pass the image array, not its path. It should have
# the same shape as edges_pz or the mask.
axarr[0].imshow(image)
# Show the mask (edges_pz in your case).
axarr[1].imshow(edges_pz)
# Show the image with the mask blended on top; the shapes of the image and
# the mask must match.
axarr[2].imshow(image)
axarr[2].imshow(edges_pz, alpha=0.4)
I hope this helps.
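One more thing worth checking (an assumption on my part, since the question does not show the volume's dtype): mark_boundaries converts its input with img_as_float, so a uint16 volume that only uses a fraction of the 16-bit range ends up with very small float values and renders almost black. A minimal sketch of rescaling a slice to [0, 1] before marking:
import numpy as np
from skimage.exposure import rescale_intensity
from skimage.segmentation import mark_boundaries

# Rescale the slice so its intensities span [0, 1], then mark boundaries.
slice_img = rescale_intensity(marked_image_volume[0].astype(np.float64),
                              out_range=(0.0, 1.0))
edges_pz = mark_boundaries(slice_img, mask[0].astype(int),
                           color=(1, 0, 0), mode='thin')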

Related

Apply colormap on grayscale image and save it

I want to apply a plt colormap to a medical grayscale image (a 14-bit image stored as np.uint16, so 16383 is the maximum pixel value) and save it as a single-channel grayscale image. However, when I do:
import matplotlib.pyplot as plt
import numpy as np
rand_img = np.random.randint(low=0, high=16383,size=(500,500),dtype=np.uint16)
# rand_img.shape = (500,500)
cm = plt.cm.gist_yarg
norm = plt.Normalize(vmin=0, vmax=((2**14)-1))
img = cm(norm(rand_img))
# img.shape = (500,500,4)
the resulting img.shape is a 4-channel (RGBA) image, whereas what I want is a one-channel image. The first 3 channels are all the same, so I could just slice out one channel and use it. However, the image turns out significantly darker than when I display it with, e.g.,
plt.imshow(rand_img, cmap=plt.cm.gist_yarg)
So how can I apply the colormap and save the image so that it looks exactly like when I use plt.imshow?
PS: When I use plt.imsave with the colormap, the saved image looks as expected; however, it is still stored as a 4-channel image and not as a single-channel image.
I am almost certain that the difference you are seeing is coming from the default dpi setting in plt.savefig(). See this question for details. Since the default DPI is 100, it is likely that the difference you are seeing comes from down-sampling during image saving. Trying plt.savefig() with the default DPI setting on your example, I can clearly see that the fine details are missing.
Modifying your code slightly, I can take your example and get two reasonably close looking plots:
import matplotlib.pyplot as plt
import numpy as np
SAVEFIG_DPI = 1000
np.random.seed(1000) # Added to make example repeatable
rand_img = np.random.randint(low=0, high=16383, size=(500, 500), dtype=np.uint16)
cm = plt.cm.gist_yarg
norm = plt.Normalize(vmin=0, vmax=((2**14) - 1))
norm_cm_img = cm(norm(rand_img))
plt.imshow(norm_cm_img)
plt.savefig("test.png", dpi=SAVEFIG_DPI)
plt.imshow(norm_cm_img)
plt.show()
I get the following output, with show() on the left, and the image file on the right:
It's worth noting that I did try using fig.dpi as suggested in the linked question, but I was not able to get results that looked as close as this using that approach.
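If the goal is a single-channel file rather than a rendered figure, another option (a sketch, not part of the original answer) is to skip savefig entirely and write the colormapped array pixel for pixel. Since gist_yarg is a gray colormap, its R, G and B channels are identical, so one channel can be saved as an 8-bit L image with Pillow:
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image

rand_img = np.random.randint(low=0, high=16383, size=(500, 500), dtype=np.uint16)
norm = plt.Normalize(vmin=0, vmax=(2**14) - 1)
rgba = plt.cm.gist_yarg(norm(rand_img))       # (500, 500, 4) floats in [0, 1]
gray = (rgba[..., 0] * 255).astype(np.uint8)  # one channel suffices: R == G == B
Image.fromarray(gray, mode="L").save("test_gray.png")  # single-channel output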

Turn an image to grayscale in Python

I'm a newbie to TensorFlow and Keras, and I'm trying to create a CNN model for the Street View House Numbers (SVHN) dataset. The dataset contains color images, and I want to turn them to grayscale. I found some code on the web that claims to turn an image to grayscale, but it just changes the colors.
People are just reading the second image with a gray colormap. Is there any way to actually turn this image to grayscale?
(I do not know how to process an image in this kind of programming language. If this is a dumb question, please forgive me and provide a brief explanation.)
I have provided images and code below; I'll be grateful for any help.
Code:
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
#Read picture:
picture = plt.imread('google.jpg')
print("google logo's shape is: ",picture.shape) #(500, 500, 3)
#saving picture as an np array:
pic_array = np.array(picture)
#Turning image to grayscale
grayscale_pic = np.expand_dims(np.dot(pic_array[...,:3],[0.299, 0.587, 0.144]),axis = 0)
#Dimensions shifted (probably my mistake):
grayscale_pic = np.moveaxis(grayscale_pic, 0, -1)
print("shape of grayscale pic = ", grayscale_pic.shape) # (500, 500, 1)
plt.imshow(picture) #Figure_1
plt.show()
plt.imshow(grayscale_pic) #Figure_2
plt.show()
You can convert a normal image to grayscale using OpenCV like this:
import cv2
gray = cv2.cvtColor(picture, cv2.COLOR_RGB2GRAY)
If you prefer NumPy over OpenCV, you can use this:
gray = np.dot(picture[..., :3], [0.2989, 0.5870, 0.1140])
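Either way, the result is a 2-D float array, so you have to pass a gray colormap explicitly when displaying it; otherwise matplotlib applies its default colormap, which is the "just changes colors" effect from the question. A minimal sketch:
import matplotlib.pyplot as plt

plt.imshow(gray, cmap='gray')  # without cmap='gray', the default colormap is applied
plt.show()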
You can use matplotlib with weights:
import numpy as np
import matplotlib.pyplot as plt
an_image = plt.imread('google.png')
rgb_weights = [0.2989, 0.5870, 0.1140]
grayscale_image = np.dot(an_image[..., :3], rgb_weights)
plt.axis('off')
plt.imshow(grayscale_image, cmap=plt.get_cmap("gray"), aspect='auto')
plt.show()
Output:
If you remove aspect='auto' parameter:
or you can use opencv
import cv2
an_image = cv2.imread("google.png")
grey_image = cv2.cvtColor(an_image, cv2.COLOR_BGR2GRAY)
or you can use PIL library
from PIL import Image
img = Image.open('google.png').convert('LA')
LA mode is L (8-bit pixels, black and white) with alpha, designed for .gif and .png. If your images are .jpeg, use L.
Output:
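Following on from the note above, for a .jpeg the equivalent conversion with L mode would be (a sketch):
from PIL import Image

img = Image.open('google.jpg').convert('L')  # L: single-channel 8-bit grayscale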
There can be several ways to do this. One potential way is to utilize the PIL (Pillow) library:
from PIL import Image
import matplotlib.pyplot as plt
picture = Image.open('google.jpg')
grayscale_pic = picture.convert('LA')
grayscale_pic.save('grayscale.png')
fig, ax = plt.subplots(nrows=1, ncols=2)
ax[0].imshow(picture)
ax[1].imshow(grayscale_pic)
plt.show()
Output:

Plotting segmented color images using numpy masked array and imshow

I'm new to numpy's masked array data structure, and I want to use it to work with segmented color images.
When I use matplotlib's plt.imshow(masked_gray_image, "gray") to display a masked gray image, the invalid regions are displayed as transparent, which is what I want.
However, when I do the same for color images, it doesn't seem to work.
Interestingly, the data-point cursor shows empty values [] rather than the RGB values [r, g, b], yet the color values are still displayed instead of being transparent.
Am I doing something wrong, or is this not yet provided by matplotlib's imshow?
import numpy as np
import matplotlib.pyplot as plt
from scipy.misc import face
img_col = face() #example image from scipy
img_gray = np.dot(img_col[...,:3], [0.299, 0.587, 0.114]) #convert to gray
threshold = 25
mask2D = img_gray < threshold # some exemplary mask
mask3D = np.atleast_3d(mask2D)*np.ones_like(img_col) # expand to 3D with broadcasting...
# using numpy's masked array to specify where data is valid
m_img_gray = np.ma.masked_where( mask2D, img_gray)
m_img_col = np.ma.masked_where( mask3D, img_col)
fig,axes=plt.subplots(1,4,num=2,clear=True)
axes[0].imshow(mask2D.astype(np.float32)) # plot mask
axes[0].set_title("simple mask")
axes[1].imshow(m_img_gray,"gray") #plot gray verison => works
axes[1].set_title("(works)\n masked gray")
axes[2].imshow(m_img_col) #plot color version, => does not work
axes[2].set_title("(doesn't work)\n masked color")
# manually adding mask as alpha channel to show what I want
alpha = 255 * (1 - (0 < np.sum(m_img_col.mask, axis=2, keepdims=True)).astype(np.uint8))
axes[3].imshow(np.append(m_img_col.data, alpha, axis=2))
axes[3].set_title("(desired) \n alpha channel set manually")
Here is an example image:
[update]:
some minor changes to code and images for better clarity...
I do not know whether matplotlib provides this feature yet, but you can
just set all values to 255 where your mask is True:
m_img_col.data[m_img_col.mask] = 255
This way the invalid regions are rendered white, which reads as transparent against the white figure background.
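If actual transparency is needed (as in the last panel of the question), the manual alpha-channel trick from the question can be wrapped in a small helper. This is only a sketch of that same approach:
import numpy as np

def masked_to_rgba(img_col, mask2D):
    # Append an alpha channel: 0 (transparent) where the mask is True,
    # 255 (opaque) elsewhere; img_col is an (H, W, 3) uint8 array.
    alpha = np.where(mask2D, 0, 255).astype(np.uint8)
    return np.dstack([img_col, alpha])

# axes[3].imshow(masked_to_rgba(img_col, mask2D)) then matches the
# "alpha channel set manually" panel.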

How to apply watershed on grayscale image with opencv and python?

Based on a solution that I read at How to define the markers for Watershed in OpenCV?, I am trying to apply watershed to grayscale data (not very visible, but not all black) extracted from netCDF (precipitation data).
Here is a black and white version of the data (thresholded at 0) so that you can see it more easily, along with the markers I want to use to define the different basins (basically just another threshold, where precipitation is more intense).
The code I'm running is as follows:
from netCDF4 import Dataset as nc
import cv2
import numpy as np
import matplotlib.pyplot as mpl
from scipy import ndimage
filename=["Cmorph-1999_01_03.nc"]
nc_data=nc(filename[0])
data=nc_data.variables["CMORPH"][23,0:250,250:750]
new_data=np.flipud(data)
ma_data=np.ma.masked_where(new_data<=0,new_data)
ma_conv=np.ma.masked_where(new_data<=2,new_data)
## Borders
tmp_data=ma_data.filled(0)
tmp_data[np.where(tmp_data!=0)]=255
bw_data=tmp_data.astype(np.uint8)
border = cv2.dilate(bw_data, None, iterations=5)
border = border - cv2.erode(border, None)
## Markers
tmp_conv=ma_conv.filled(0)
tmp_conv[np.where(tmp_conv!=0)]=255
bw_conv=tmp_conv.astype(np.uint8)
lbl, ncc = ndimage.label(bw_conv)
lbl = lbl * (255/ncc)
lbl[border == 255] = 255
lbl = lbl.astype(np.int32)
## Apply watershed
cv2.watershed(ma_data, lbl)
lbl[lbl == -1] = 0
lbl = lbl.astype(np.uint8)
result = 255 - lbl
I have the following error for the watershed in opencv-2.4.11/modules/imgproc/src/segmentation.cpp:
error: (-210) Only 8-bit, 3-channel input images are supported in function cvWatershed
From what I saw on the internet, this is due to the fact that the grayscale data is a 2D image, while watershed needs a 3D (RGB) image. Indeed, I tried the script with a jpg image and it worked perfectly.
This problem is mentioned here, but the answer given was eventually rejected, and I can't find any more recent link answering the question.
To try to solve this, I created a 3D array from the 2D new_data:
new_data = new_data[..., np.newaxis]
test=np.append(new_data, new_data, axis=2)
test=np.append(new_data, test, axis=2)
But, as expected, it didn't solve the problem (same error message).
I also tried to save the plot from matplotlib to get RGB data:
fig = mpl.figure()
fig.add_subplot(111)
fig.tight_layout(pad=0)
mpl.contourf(ma_data,levels=np.arange(0,255.1,0.1))
fig.canvas.draw()
test_data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
test_data = test_data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
But the size of the test_data created is different from ma_data (+ I can't get rid of the labels).
So, I am stuck here. Ideally, I want to apply the watershed on the 2D grayscale image directly and/or limit the number of operations as much as possible.
As yapws87 mentioned, there was indeed a problem with the format I was presenting to the watershed function.
Doing try_data=ma_data.astype(np.uint8) removed the error message.
Here is a minimal example that works now:
from netCDF4 import Dataset as nc
import numpy as np
from scipy import ndimage
from skimage.morphology import watershed
basename="/home/dcop696/Data/CMORPH/precip/CMORPH_V1.0/CRT/8km-30min/1999/"
filename=["Cmorph-1999_01_03.nc"]
fileslm=["/home/dcop696/Data/LSM/Cmorph_slm_8km.nc"]
nc_data=nc(basename+filename[0])
data=nc_data.variables["CMORPH"][23,0:250,250:750]
new_data=np.flipud(data)
ma_data=np.ma.masked_where(new_data<=0,new_data)
try_data=ma_data.astype(np.uint8)
## Building threshold
tmp_data=ma_data.filled(0)
tmp_data[np.where(tmp_data!=0)]=255
bw_data=tmp_data.astype(np.uint8)
## Building markers
ma_conv=np.ma.masked_where(new_data<=2,new_data)
tmp_conv=ma_conv.filled(0)
tmp_conv[np.where(tmp_conv!=0)]=255
bw_conv=tmp_conv.astype(np.uint8)
markers = ndimage.label(bw_conv)[0]
## Watershed
labels = watershed(-try_data, markers, mask=bw_data)  # negated: watershed floods from minima, so intense precipitation becomes the basin interiors
You can try changing your image from gray to a BGR color space using
cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
before passing your image to the watershed algorithm.
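Putting the two fixes together for the OpenCV route, a minimal sketch (assuming new_data is a plain 2-D array with any masked values already filled, and lbl is the int32 marker image built in the question):
import cv2
import numpy as np

# Scale the 2-D precipitation field into 0-255 and convert to uint8...
img8 = cv2.normalize(new_data, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
# ...then expand it to the 8-bit, 3-channel input cvWatershed expects.
img_bgr = cv2.cvtColor(img8, cv2.COLOR_GRAY2BGR)
cv2.watershed(img_bgr, lbl)  # lbl is modified in place; boundary pixels become -1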

crop image in skimage?

I'm using skimage to crop a rectangle in a given image. I have (x1, y1, x2, y2) as the rectangle coordinates, and I have loaded the image:
image = skimage.io.imread(filename)
cropped = image(x1,y1,x2,y2)
However, this is the wrong way to crop the image. How would I do it the right way in skimage?
This is a simple syntax error. In Matlab you can use parentheses to extract a pixel or an image region, but in Python, with numpy.ndarray, you should use brackets to slice a region of your image. Besides, this code uses the wrong way to cut out a rectangle.
The right way to crop is with the : (slice) operator.
Thus,
from skimage import io
image = io.imread(filename)
cropped = image[y1:y2, x1:x2]  # rows (y) first, then columns (x)
One could use skimage.util.crop() function too, as shown in the following code:
import numpy as np
from skimage.io import imread
from skimage.util import crop
import matplotlib.pylab as plt
A = imread('lena.jpg')
# crop_width{sequence, int}: Number of values to remove from the edges of each axis.
# ((before_1, after_1), … (before_N, after_N)) specifies unique crop widths at the
# start and end of each axis. ((before, after),) specifies a fixed start and end
# crop for every axis. (n,) or n for integer n is a shortcut for before = after = n
# for all axes.
B = crop(A, ((50, 100), (50, 50), (0,0)), copy=False)
print(A.shape, B.shape)
# (220, 220, 3) (70, 120, 3)
plt.figure(figsize=(20,10))
plt.subplot(121), plt.imshow(A), plt.axis('off')
plt.subplot(122), plt.imshow(B), plt.axis('off')
plt.show()
with the following output (with original and cropped image):
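Note that crop() takes amounts to trim from each edge, not corner coordinates. Converting the question's (x1, y1, x2, y2) rectangle would look like this (a sketch, with H and W the image height and width):
H, W = A.shape[:2]
# Trim y1 rows from the top, H - y2 from the bottom, x1 columns from the
# left and W - x2 from the right; leave the color axis untouched.
B = crop(A, ((y1, H - y2), (x1, W - x2), (0, 0)))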
You can crop an image using skimage just by slicing the image array, like below:
image = image_name[y1:y2, x1:x2]
Example code:
from skimage import io
import matplotlib.pyplot as plt
image = io.imread(image_path)
cropped_image = image[y1:y2, x1:x2]
plt.imshow(cropped_image)
You can also go ahead with the Image module of the PIL library:
from PIL import Image
im = Image.open("image.png")
im = im.crop((0, 50, 777, 686))
im.show()
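PIL's crop() box is (left, upper, right, lower), so with the question's (x1, y1, x2, y2) coordinates the call would be (a sketch):
from PIL import Image

im = Image.open("image.png")
cropped = im.crop((x1, y1, x2, y2))  # box = (left, upper, right, lower)
cropped.show()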
