I've been running into problems recently where the local binary pattern method in Python's skimage is producing unexpected results.
Have a look at the cartoon example below. It shows two flat-color circles on a flat-color background.
The local binary pattern (P=8 samples, Radius=1) output is:
(The image is color-coded with the jet colormap.) The gray color correctly represents 255, while the blue color is 85 (binary 01010101).
So while the method correctly shows the background and the right circle as 255, it shows the left circle as 85. Apparently, the local binary pattern method in skimage thinks the region is completely noisy (hence the alternating binary pattern 01010101). This is not true, however: I have double-checked the individual pixels in the region shown in blue above, and their values are identical (i.e. it's a flat color, just like the flat-color background and the other flat-color circle).
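A check along these lines (with placeholder coordinates y0:y1, x0:x1 standing in for the blue circle) is how I confirmed the region is flat:

import numpy as np
# y0:y1, x0:x1 are placeholder coordinates covering the left circle
patch = img[y0:y1, x0:x1]
print(np.unique(patch))   # a single value means the region really is flat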
Has anyone experienced a similar problem before?
Here is the code if you want to replicate this:
from skimage.feature import local_binary_pattern
from skimage.color import rgb2gray
import matplotlib.pyplot as plt

img = plt.imread('circles.png')   # read as a float array
img = rgb2gray(img)               # convert to grayscale
lbp = local_binary_pattern(img, 8, 1, 'default')   # P=8 samples, radius=1
plt.imshow(lbp, cmap='nipy_spectral')
plt.title('Standard lbp (8,1)')
I guess the issue is due to numeric errors. When the color image is read using
img = plt.imread('circles.png')
you get an array of type float32, and in the subsequent conversion to grayscale
img = skimage.color.rgb2gray(img)
the resulting image is of type float64.
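You can verify this dtype chain directly:

import matplotlib.pyplot as plt
from skimage.color import rgb2gray

img = plt.imread('circles.png')
print(img.dtype)               # float32
print(rgb2gray(img).dtype)     # float64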
I recommend you avoid the intermediate step. You could read the image with double precision (i.e. float64) from the very beginning, like this (note that newer scikit-image versions spell the argument as_gray):
In [63]: from skimage.feature import local_binary_pattern
In [64]: from skimage import io
In [65]: img = io.imread('circles.png', as_grey=True)
In [66]: img.dtype
Out[66]: dtype('float64')
In [67]: lbp = local_binary_pattern(img, 8, 1, 'default')
In [68]: io.imshow(lbp/255., cmap='nipy_spectral')
Out[68]: <matplotlib.image.AxesImage at 0x10bdd780>
Related
I have a multi-class segmented image consisting of labels of 4 different classes, represented in 4 different colors (dark blue, red, yellow and sky blue). I would like to calculate the total area in pixels of each class label of the segmented prediction.
I tried writing the code below to obtain the total number of pixels in each label, but I am not able to get any result containing the total number of pixels per class label.
import matplotlib.pyplot as plt
import numpy as np
from skimage import io, img_as_ubyte, measure
from skimage.filters import threshold_multiotsu

# Read an image
image = io.imread("images/Ulcer_segmented.jpg")

# Apply multi-Otsu threshold
thresholds = threshold_multiotsu(image, classes=5)

# Digitize (segment) original image into multiple classes.
# np.digitize assigns values 0, 1, 2, 3, ... to pixels in each class.
regions = np.digitize(image, bins=thresholds)
output = img_as_ubyte(regions)  # convert 64-bit integer values to uint8
plt.imsave("images/Ulcer_segmented..jpg", output)

props = measure.regionprops_table(label_image, output,
                                  properties=['label',
                                              'area', 'equivalent_diameter',
                                              'mean_intensity', 'solidity'])
This is described in the docs:
from skimage import io
from skimage.measure import label, regionprops

# Read an image
image = io.imread("your/image.jpg")

# Label image regions
label_image = label(image)

for region in regionprops(label_image):
    print(region.area)
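Alternatively, if all you need is the pixel count per class, a short sketch like this (assuming regions is the class map produced by np.digitize in your code) may already be enough:

import numpy as np

# count the pixels belonging to each class label directly
labels, counts = np.unique(regions, return_counts=True)
for lbl, cnt in zip(labels, counts):
    print('class', lbl, ':', cnt, 'pixels')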
Looks like you want an image histogram. The issue with using np.histogram or skimage.exposure.histogram is that your image is not single-channel, so these functions would give you a histogram of the flattened image, which would not yield the expected results.
The way you chose to overcome this problem is Otsu thresholding, which I'm not sure works here, as the documentation states that it expects a single-channel (grayscale) image.
Knowing the colors used to represent your classes would help here; you could do something like
colors = [
    [cls_0_rgb_color],
    [cls_1_rgb_color],
    [cls_2_rgb_color],
    [cls_3_rgb_color]
]
areas = [np.count_nonzero(np.all(img == c, axis=-1)) for c in colors]
If you don't know exactly which colors the classes have, you probably have to reduce the last dimension of your image so that each 3-dimensional color is represented uniquely (I'm not sure exactly how to do this correctly; maybe someone smarter than me can answer this in a new question). What I would do is convert the image to HSV format and use the hue component as a class representation.
from skimage.color import rgb2hsv
hsv = rgb2hsv(image)
hue = hsv[:, :, 0]
areas, bin_edges = np.histogram(hue, bins=4)
What could be tricky here is deciphering which area corresponds to which class. But knowing approximately which colors to expect, and knowing how colors are arranged in hue space, we can say that the order would be either red, yellow, light_blue, dark_blue or yellow, light_blue, dark_blue, red, since the red hue is symmetrical around 0 (or 360) degrees. Checking the bin_edges vector should do the trick here:
# set red_threshold experimentally
if bin_edges[1] < red_threshold:
    class_order = ('red', 'yellow', 'light_blue', 'dark_blue')
else:
    class_order = ('yellow', 'light_blue', 'dark_blue', 'red')
I am trying to detect edges in this lane image. I first blurred the image using a Gaussian filter and then applied Canny edge detection, but it gives only a blank image without any detected edges.
I have done it like this:
# imports
import matplotlib.pyplot as plt
import numpy as np
import cv2
import matplotlib.image as mpimg

image = mpimg.imread("Screenshot from Lane Detection Test Video 01.mp4.png")
image = image[:, :, :3]
image_g = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
image_blurred = cv2.GaussianBlur(image_g, (3, 3), 0)
threshold_low = 50
threshold_high = 100
image_blurred = image_blurred.astype(np.uint8)
image_canny = cv2.Canny(image_blurred, threshold_low, threshold_high)
plt.imshow(image_canny, cmap='gray')
You should always examine your data. Simply running your script step by step and examining intermediate values shows what is going wrong: mpimg.imread reads the image as a floating-point array with values between 0 and 1. After blurring, you cast it to uint8, which sets almost all values to 0. Simply multiplying the image by 255 at some point before casting to uint8 solves your issue.
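For example, a minimal fix along these lines, reusing the variable names from your script, should do it:

# scale the 0..1 float image up to 0..255 before the uint8 cast
image_blurred = (image_blurred * 255).astype(np.uint8)
image_canny = cv2.Canny(image_blurred, threshold_low, threshold_high)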
I'm working with Python and trying to do Otsu thresholding on an image, but only inside a mask (yes, I have an image and a mask image). That means fewer pixels of the image will be included in the histogram used to calculate the Otsu threshold.
I'm currently using the cv2.threshold function without the mask image and have no idea how to do this kind of job.
ret, OtsuMat = cv2.threshold(GaborMat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
Since this function also incorporates the pixels outside the mask, I think it will give a less accurate threshold.
This is an example of the image and its mask:
https://drive.google.com/drive/folders/1p8JMhncJs19oOWO9RdkWuEADVGqE-gzQ?usp=sharing
Hope there is an OpenCV or other library function to do this easily (and also with fast computation), but any kind of help will be appreciated.
I had a try at this using the threshold_otsu() method from skimage and a NumPy masked array. I don't know if there are faster ways; skimage is normally pretty well optimised. If anyone else wants to take my sample data and try other ideas on it, please feel free, although there is a service charge of one upvote ;-)
#!/usr/bin/env python3
import cv2
import numpy as np
import numpy.ma as ma
from skimage.filters import threshold_otsu

# Set up some repeatable test data: 4 blocks of 100x100 pixels, each of
# random normal np.uint8s centred on 32, 64, 160 and 192
np.random.seed(42)
a = np.random.normal(size=(100, 100), loc=32, scale=10).astype(np.uint8)
b = np.random.normal(size=(100, 100), loc=64, scale=10).astype(np.uint8)
c = np.random.normal(size=(100, 100), loc=160, scale=10).astype(np.uint8)
d = np.random.normal(size=(100, 100), loc=192, scale=10).astype(np.uint8)

# Stack (concatenate) the 4 squares horizontally across the page
im = np.hstack((a, b, c, d))

# Next line is just for debug
cv2.imwrite('start.png', im)
That gives us this:
# Now make a mask revealing only the left half of the image, centred on 32 and 64
mask = np.zeros((100, 400))
mask[:, 200:] = 1
masked = ma.masked_array(im, mask)
print(threshold_otsu(masked.compressed()))   # prints 47

# Now do the same revealing only the right half, centred on 160 and 192
masked = ma.masked_array(im, 1 - mask)
print(threshold_otsu(masked.compressed()))   # prints 175
The histogram of the test data looks like this (the x-axis runs 0..255):
Adapting to your own sample data, I get this:
#!/usr/bin/env python3
import cv2
import numpy as np
import numpy.ma as ma
from skimage.filters import threshold_otsu

# Load image and mask
im = cv2.imread('eye.tif', cv2.IMREAD_UNCHANGED)
mask = cv2.imread('mask.tif', cv2.IMREAD_UNCHANGED)

# Calculate Otsu threshold on the entire image
print(threshold_otsu(im))                    # prints 130

# Now do the same for the masked image
masked = ma.masked_array(im, mask > 0)
print(threshold_otsu(masked.compressed()))   # prints 124
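If you would rather stay in OpenCV: Otsu only depends on the histogram of the pixels you feed it, so a sketch like this (selecting the same pixels the masked array keeps above) should produce the same threshold:

# keep the pixels where mask == 0, matching the masked array above,
# and hand them to cv2.threshold as a 1xN single-channel image
pixels = im[mask == 0].reshape(1, -1)
ret, _ = cv2.threshold(pixels, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(ret)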
I am researching ways of detecting changes in grayscale levels in images, but only working within a certain area of them, and I have come across the integral image. I think it can be used for this, by selecting an area of the image and comparing its mean gray level (or something like that) with other areas.
But my question is: is it possible (or is there a way) to compute the integral image of just the specific region of the general image I am interested in (the important region is mixed into different parts of the general image)?
Cheers
I don't think the integral image is the most appropriate tool for this task. Detecting intensity changes in a ROI can be easily implemented by comparing the intensity values within the ROI through NumPy's any and slicing, as shown below.
To begin with, we import the necessary modules and load some sample images:
import numpy as np
from skimage import io
import matplotlib.pyplot as plt
reference = io.imread('https://i.stack.imgur.com/9fmvl.png')
same = io.imread('https://i.stack.imgur.com/u1wlT.png')
changed = io.imread('https://i.stack.imgur.com/H2dIu.png')
This is how the images look:
fig, [ax0, ax1, ax2] = plt.subplots(1, 3)
ax0.imshow(reference)
ax0.axis('off')
ax0.set_title('Reference')
ax1.imshow(same)
ax1.axis('off')
ax1.set_title('Same')
ax2.imshow(changed)
ax2.axis('off')
ax2.set_title('Changed')
plt.show()
Then we define a function that returns True whenever there is at least one ROI pixel whose intensity in the test image is different from that of the reference image:
def detect_change(ref, img, roi):
    upper, left, lower, right = roi
    return np.any(ref[upper:lower, left:right] != img[upper:lower, left:right])
Finally we just need to set up the ROI (the red square) and call detect_change with proper arguments:
In [73]: roi = [32, 32, 96, 96]
In [74]: detect_change(reference, same, roi)
Out[74]: False
In [75]: detect_change(reference, changed, roi)
Out[75]: True
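If you want the comparison of mean gray levels mentioned in the question rather than exact equality, a variation could look like this (tol is a hypothetical tolerance you would tune):

def detect_mean_change(ref, img, roi, tol=1.0):
    # compare mean intensities inside the ROI instead of exact equality
    upper, left, lower, right = roi
    ref_mean = ref[upper:lower, left:right].mean()
    img_mean = img[upper:lower, left:right].mean()
    return abs(ref_mean - img_mean) > tol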
Based on a solution that I read at How to define the markers for Watershed in OpenCV?, I am trying to apply watershed to grayscale data (not very visible, but not all black) extracted from netCDF (precipitation data).
Here is a black and white version of the data (thresholded at 0) so that you can see it more easily, along with the markers I want to use to define the different basins (basically just another threshold, where precipitation is more intense).
The code I'm running is as follows:
import os,sys,string
from netCDF4 import Dataset as nc
import cv2
import numpy as np
import matplotlib.pyplot as mpl
import scipy.ndimage as ndimage
import scipy.spatial as spatial
from skimage import filter
from skimage.morphology import watershed
filename = ["Cmorph-1999_01_03.nc"]
nc_data = nc(filename[0])
data = nc_data.variables["CMORPH"][23, 0:250, 250:750]
new_data = np.flipud(data)
ma_data = np.ma.masked_where(new_data <= 0, new_data)
ma_conv = np.ma.masked_where(new_data <= 2, new_data)

## Borders
tmp_data = ma_data.filled(0)
tmp_data[np.where(tmp_data != 0)] = 255
bw_data = tmp_data.astype(np.uint8)
border = cv2.dilate(bw_data, None, iterations=5)
border = border - cv2.erode(border, None)

## Markers
tmp_conv = ma_conv.filled(0)
tmp_conv[np.where(tmp_conv != 0)] = 255
bw_conv = tmp_conv.astype(np.uint8)
lbl, ncc = ndimage.label(bw_conv)
lbl = lbl * (255 / ncc)
lbl[border == 255] = 255
lbl = lbl.astype(np.int32)

## Apply watershed
cv2.watershed(ma_data, lbl)
lbl[lbl == -1] = 0
lbl = lbl.astype(np.uint8)
result = 255 - lbl
I get the following error for the watershed, from opencv-2.4.11/modules/imgproc/src/segmentation.cpp:
error: (-210) Only 8-bit, 3-channel input images are supported in function cvWatershed
From what I saw on the internet, this is due to the fact that the grayscale data is a 2D image, while watershed needs a 3D (RGB) image. Indeed, I tried the script with a jpg image and it worked perfectly.
This problem is mentioned here, but the answer given there was eventually rejected, and I can't find any more recent link answering the question.
To try to solve this, I created a 3D array from the 2D new_data:
new_data = new_data[..., np.newaxis]
test = np.append(new_data, new_data, axis=2)
test = np.append(new_data, test, axis=2)
But, as expected, it didn't solve the problem (same error message).
I also tried to save the plot from matplotlib to get RGB data:
fig = mpl.figure()
fig.add_subplot(111)
fig.tight_layout(pad=0)
mpl.contourf(ma_data,levels=np.arange(0,255.1,0.1))
fig.canvas.draw()
test_data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
test_data = test_data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
But the size of the created test_data is different from that of ma_data (plus, I can't get rid of the axis labels).
So, I am stuck here. Ideally, I want to apply the watershed on the 2D grayscale image directly and/or limit the number of operations as much as possible.
As yapws87 mentioned, there was indeed a problem with the format I was presenting to the watershed function.
Doing try_data = ma_data.astype(np.uint8) removed the error message.
Here is a minimal example that works now:
from netCDF4 import Dataset as nc
import numpy as np
import scipy.ndimage as ndimage
from skimage.morphology import watershed

basename = "/home/dcop696/Data/CMORPH/precip/CMORPH_V1.0/CRT/8km-30min/1999/"
filename = ["Cmorph-1999_01_03.nc"]
fileslm = ["/home/dcop696/Data/LSM/Cmorph_slm_8km.nc"]

nc_data = nc(basename + filename[0])
data = nc_data.variables["CMORPH"][23, 0:250, 250:750]
new_data = np.flipud(data)
ma_data = np.ma.masked_where(new_data <= 0, new_data)
try_data = ma_data.astype(np.uint8)

## Building threshold
tmp_data = ma_data.filled(0)
tmp_data[np.where(tmp_data != 0)] = 255
bw_data = tmp_data.astype(np.uint8)

## Building markers
ma_conv = np.ma.masked_where(new_data <= 2, new_data)
tmp_conv = ma_conv.filled(0)
tmp_conv[np.where(tmp_conv != 0)] = 255
bw_conv = tmp_conv.astype(np.uint8)
markers = ndimage.label(bw_conv)[0]

## Watershed
labels = watershed(-try_data, markers, mask=bw_data)
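To inspect the result, a quick visualisation sketch (assuming matplotlib) can help:

import matplotlib.pyplot as mpl

# each basin gets its own integer label, so a qualitative colormap works well
mpl.imshow(labels, cmap='nipy_spectral')
mpl.colorbar()
mpl.show()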
You can try changing your image from gray to a BGR color space using
cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
before passing your image to the watershed algorithm.
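For instance, reusing the names from your script, a sketch might look like this:

# cv2.watershed expects an 8-bit 3-channel image and int32 markers
img_bgr = cv2.cvtColor(bw_data, cv2.COLOR_GRAY2BGR)
cv2.watershed(img_bgr, lbl)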