I am trying to implement skeletonization of small images, but I am not getting the expected results. I also tried thin() and medial_axis(), but nothing seems to work as expected. I suspect this problem occurs because of the small resolution of the images. Here is the code:
import cv2
from numpy import asarray
import numpy as np
# open image
file = "66.png"
img_grey = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
afterMedian = cv2.medianBlur(img_grey, 3)
thresh = 140
# threshold the image
img_binary = cv2.threshold(afterMedian, thresh, 255, cv2.THRESH_BINARY)[1]
# make binary image
arr = asarray(img_binary)
binaryArr = np.zeros(asarray(img_binary).shape)
for i in range(0, arr.shape[0]):
    for j in range(0, arr.shape[1]):
        if arr[i][j] == 255:
            binaryArr[i][j] = 1
        else:
            binaryArr[i][j] = 0
# perform skeletonization
from skimage.morphology import skeletonize
cv2.imshow("binary arr", binaryArr)
backgroundSkeleton = skeletonize(binaryArr)
# convert to non-binary image
bSkeleton = np.zeros(arr.shape)
for i in range(0, arr.shape[0]):
    for j in range(0, arr.shape[1]):
        if backgroundSkeleton[i][j] == 0:
            bSkeleton[i][j] = 0
        else:
            bSkeleton[i][j] = 255
cv2.imshow("background skeleton", bSkeleton)
cv2.waitKey(0)
The results are:
I would expect something more like this:
This applies to similar shapes also:
Expectation:
Am I doing something wrong? Or will it truly not be possible with such small pictures? I tried skeletonization on bigger images and it worked just fine. Original images:
You could try the skeleton in DIPlib (dip.EuclideanSkeleton):
import numpy as np
import diplib as dip
import cv2
file = "66.png"
img_grey = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
afterMedian = cv2.medianBlur(img_grey, 3)
thresh = 140
bin = afterMedian > thresh
sk = dip.EuclideanSkeleton(bin, endPixelCondition='three neighbors')
dip.viewer.Show(bin)
dip.viewer.Show(sk)
dip.viewer.Spin()
The endPixelCondition input argument can be used to adjust how many branches are preserved or removed. 'three neighbors' is the option that produces the most branches.
The code above also produces branches towards the corners of the image. Using 'two neighbors' prevents that, but produces fewer branches towards the object as well. The other way to prevent it is to set edgeCondition='object', but in that case the ring around the object becomes a square on the image boundary.
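For illustration, a quick sketch of those two variants, reusing bin from the code above (the parameter values are the ones discussed in this answer):
sk2 = dip.EuclideanSkeleton(bin, endPixelCondition='two neighbors')  # fewer branches towards the corners
sk3 = dip.EuclideanSkeleton(bin, endPixelCondition='three neighbors', edgeCondition='object')  # border treated as object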
To convert the DIPlib image sk back to a NumPy array, do
sk = np.array(sk)
sk is now a Boolean NumPy array (values True and False). To create an array compatible with OpenCV simply cast to np.uint8 and multiply by 255:
sk = np.array(sk, dtype=np.uint8)
sk *= 255
Note that, when dealing with NumPy arrays, you generally don't need to loop over all pixels. In fact, it's worth trying to avoid doing so, as loops in Python are extremely slow.
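For example, the two double loops in the question collapse to one line each; a sketch using the question's own variable names:
binaryArr = (arr == 255).astype(np.float64)  # replaces the first double loop
bSkeleton = np.where(backgroundSkeleton, 255, 0).astype(np.uint8)  # replaces the second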
It seems scikit-image is a much better choice than cv2 here. Since the package defines functions that operate on binary images, if you are working with black-and-white images, try its ready-to-use skeletonize function.
Note: if the skeletonization loses image details, don't upsample the input right away; first try the other skimage morphology functions to enhance the details, in which case your code will work on bigger areas of the images too. You could look here.
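A minimal sketch of that suggestion, reusing the file name and preprocessing from the question (skeletonize expects a boolean or 0/1 image):
import cv2
from skimage.morphology import skeletonize
img_grey = cv2.imread("66.png", cv2.IMREAD_GRAYSCALE)
binary = cv2.medianBlur(img_grey, 3) > 140  # same median filter and threshold as the question
skeleton = skeletonize(binary)
cv2.imshow("skeleton", skeleton.astype('uint8') * 255)
cv2.waitKey(0)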
I've been trying to add reproducible Gaussian noise by fixing a random seed, saving the image, then reading the image, regenerating the Gaussian noise, and 'subtracting' it to recover the original image. Here's the (pseudo-)code for what I've tried so far:
SEED = 1234
np.random.seed(SEED)
img = cv2.imread(path, -1) # (32,32,3)
noise = np.random.normal(loc=0, scale=20, size=(32,32,3)).astype(np.uint8)
temp = img + noise
# ignore the noise if value exceeds 255 or is below 0
temp = np.where(temp<255, temp, img)
temp = np.where(temp>0, temp, img)
cv2.imwrite(some_path_and_file_name, temp)
Then, I read the image file with Gaussian noise in the same way. When I 'ignore' the noise, I keep track of the matrix indices where the 'ignoring' happened, and use this information to recover the original data:
img = cv2.imread(path_to_noise_img, -1)
SEED = 1234
np.random.seed(SEED)
noise = np.random.normal(loc=0, scale=20, size=(32,32,3)).astype(np.uint8)
temp = img - noise
# flag is the matrix with the indices of 'ignoring'
recovered_img = np.where(flag == 0, temp, img)
cv2.imwrite(some_path_and_file_name, recovered_img)
However, when I open the two images, they are different. I have checked that the Gaussian Noise is the same all the time, and it feels like something is going wrong (some sort of irreversible conversion is happening) when I read or write the image file.
However, I am having trouble debugging this since I am new to Python.
Any help would be appreciated. Thanks!
Edit
I tried saving the image and loading the image using OpenCV function calls without modifying anything, and compared the values. From what I see, the values being read in are different. What should I fix in my code to prevent this from happening?
Solution
The code below worked like a charm (Thanks to all the comments and the answer).
np.random.seed(SEED)
original = cv2.imread("C:/Users/user/project/1.png", cv2.IMREAD_UNCHANGED)
# do the math in int32 so the addition cannot wrap around
n = np.random.normal(loc=0, scale=20, size=original.shape).astype(np.int32)
n = original.astype(np.int32) + n
# record where the noise was ignored because the sum left the uint8 range
floor = np.where(n < 255, 0, 1)
t = np.where(n < 255, n, original)
ceil = np.where(t > 0, 0, 1)
arr = np.where(t > 0, t, original)
flag = floor + ceil
arr = arr.astype(np.uint8)
cv2.imwrite("C:/Users/user/project/1-1.png", arr)
np.random.seed(SEED)
recover = cv2.imread("C:/Users/user/project/1-1.png", cv2.IMREAD_UNCHANGED).astype(np.int32)
noise = np.random.normal(loc=0, scale=20, size=recover.shape).astype(np.int32)
ans = np.where(flag==0, recover-noise, recover)
assert (ans.astype(np.uint8) == original.astype(np.uint8)).all()
Multiple issues...
Compression
JPEG is a lossy compression, which means you will surely not be getting the exact same values back as you had before compression.
PNG is lossless, so it will give you the exact values you had before compression.
Integer math
Adding two uint8 values results in a uint8 again.
That means your math will always produce values in the range 0..255; uint8 values can never be <0 or >255.
Your np.where checks are therefore useless: the values are uint8 even after the addition, so those conditions can never trigger.
Further, whenever you add/subtract values, if the result exceeds the range, that has to be handled in some way. Numpy simply wraps the values around, as is usual with integer math. Another option is to saturate, meaning to clip. OpenCV functions tend to do that. You can produce either with either library, with some care.
If there is any saturating math in your code, you will definitely not be able to subtract the noise and recover the original image. If there is merely wrapping math in your code, you can recover the original image by subtracting the noise.
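A minimal demonstration of the difference (assuming 8-bit data): NumPy's + wraps, while cv2.add saturates:
import numpy as np
import cv2 as cv
a = np.uint8([250])
b = np.uint8([10])
print(a + b)         # [4] -- NumPy wraps around: (250 + 10) % 256
print(cv.add(a, b))  # [[255]] -- OpenCV saturates (clips) at 255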
Debugging
You are discarding so much information, reducing the answer to "are they bit-exact equal or not?"
You should use a small image, 3x3 pixels or something, and then look at the values when you print those numpy arrays.
Demo
import numpy as np
import cv2 as cv
img = cv.imread("image.png", cv.IMREAD_UNCHANGED)
SEED = 1234
np.random.seed(SEED)
noise = np.random.normal(loc=0, scale=20, size=img.shape).astype(np.uint8)
img_with_noise = img + noise # this will wrap around
cv.imwrite("image_with_noise.png", img_with_noise)
import numpy as np
import cv2 as cv
img = cv.imread("image.png", cv.IMREAD_UNCHANGED)
img_with_noise = cv.imread("image_with_noise.png", cv.IMREAD_UNCHANGED)
# generate noise the exact same way
SEED = 1234
np.random.seed(SEED)
noise = np.random.normal(loc=0, scale=20, size=img.shape).astype(np.uint8)
img_recovered = img_with_noise - noise # wrapping around again, backwards
cv.imwrite("image_recovered.png", img_recovered)
# images should be equal in all values of all pixels
assert (img_recovered == img).all()
EDIT: Code is working now, thanks to Mark and zephyr. zephyr also has two alternate working solutions below.
I want to divide blend two images with PIL. I found ImageChops.multiply(image1, image2) but I couldn't find a similar divide(image, image2) function.
Divide Blend Mode Explained (I used the first two images here as my test sources.)
Is there a built-in divide blend function that I missed (PIL or otherwise)?
My test code below runs and is getting close to what I'm looking for. The resulting image output is similar to the divide blend example image here: Divide Blend Mode Explained.
Is there a more efficient way to do this divide blend operation (less steps and faster)? At first, I tried using lambda functions in Image.eval and ImageMath.eval to check for black pixels and flip them to white during the division process, but I couldn't get either to produce the correct result.
EDIT: Fixed code and shortened thanks to Mark and zephyr. The resulting image output matches the output from zephyr's numpy and scipy solutions below.
# PIL Divide Blend test
from PIL import Image, ImageMath
import os
imgA = Image.open('01background.jpg')
imgA.load()
imgB = Image.open('02testgray.jpg')
imgB.load()
# split RGB images into 3 channels
rA, gA, bA = imgA.split()
rB, gB, bB = imgB.split()
# divide each channel (image1/image2)
rTmp = ImageMath.eval("int(a/((float(b)+1)/256))", a=rA, b=rB).convert('L')
gTmp = ImageMath.eval("int(a/((float(b)+1)/256))", a=gA, b=gB).convert('L')
bTmp = ImageMath.eval("int(a/((float(b)+1)/256))", a=bA, b=bB).convert('L')
# merge channels into RGB image
imgOut = Image.merge("RGB", (rTmp, gTmp, bTmp))
imgOut.save('PILdiv0.png', 'PNG')
os.system('start PILdiv0.png')
You are asking:
Is there a more efficient way to do this divide blend operation (less steps and faster)?
You could also use the Python package blend_modes. It is written with vectorized NumPy math and is generally fast. Install it via pip install blend_modes. I have written the commands in a more verbose way to improve readability; it would be shorter to chain them. Use blend_modes like this to divide your images:
from PIL import Image
import numpy
import os
from blend_modes import blend_modes
# Load images
imgA = Image.open('01background.jpg')
imgA = numpy.array(imgA)
# append alpha channel
imgA = numpy.dstack((imgA, numpy.ones((imgA.shape[0], imgA.shape[1], 1))*255))
imgA = imgA.astype(float)
imgB = Image.open('02testgray.jpg')
imgB = numpy.array(imgB)
# append alpha channel
imgB = numpy.dstack((imgB, numpy.ones((imgB.shape[0], imgB.shape[1], 1))*255))
imgB = imgB.astype(float)
# Divide images
imgOut = blend_modes.divide(imgA, imgB, 1.0)
# Save images
imgOut = numpy.uint8(imgOut)
imgOut = Image.fromarray(imgOut)
imgOut.save('PILdiv0.png', 'PNG')
os.system('start PILdiv0.png')
Be aware that for this to work, both images need to have the same dimensions, e.g. imgA.shape == (240,320,3) and imgB.shape == (240,320,3).
There is a mathematical definition for the divide function here:
http://www.linuxtopia.org/online_books/graphics_tools/gimp_advanced_guide/gimp_guide_node55_002.html
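In NumPy terms, the definition there amounts to roughly the following (a is the top layer, b the bottom layer, both as floats; the +1 avoids division by zero and the 256 factor rescales the result):
out = np.minimum(255, 256 * a / (b + 1))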
Here's an implementation with numpy/matplotlib (scipy.misc's image helpers have since been removed from SciPy, so the reading, showing, and saving are done with matplotlib):
import numpy as np
import matplotlib.pyplot as plt
a = plt.imread('01background.jpg')
b = plt.imread('02testgray.jpg')
c = a/((b.astype('float')+1)/256)
# use >=, otherwise pixels that are exactly 255 would be zeroed out
d = c*(c < 255)+255*np.ones(np.shape(c))*(c >= 255)
e = d.astype('uint8')
plt.imshow(e)
plt.show()
plt.imsave('output.png', e)
If you don't want to use matplotlib, you can do it like this (I assume you have numpy):
from PIL import Image
from numpy import asarray, ones, shape
imgA = Image.open('01background.jpg')
imgA.load()
imgB = Image.open('02testgray.jpg')
imgB.load()
a = asarray(imgA)
b = asarray(imgB)
c = a/((b.astype('float')+1)/256)
d = c*(c < 255)+255*ones(shape(c))*(c >= 255)  # >= so pixels of exactly 255 aren't zeroed
e = d.astype('uint8')
imgOut = Image.fromarray(e)
imgOut.save('PILdiv0.png', 'PNG')
The problem you're having is when you have a zero in image B: it causes a divide by zero. If you convert all of those values to one instead, I think you'll get the desired result. That will eliminate the need to check for zeros and fix them in the result.
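A minimal sketch of that idea, reusing the arrays from the NumPy variant above (note this removes the +1 fudge from the division):
b = asarray(imgB).astype('float')
b[b == 0] = 1  # convert zeros to ones so the division is always defined
c = a / (b / 256)  # then clip to 0..255 as before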
I have a NumPy array (it's the red channel from an image).
I have masked a portion of it (setting those values to 0), and now I would like to find the mode of the values in the non-masked area.
The problem I'm running into is that the mode command keeps coming back with [0]. I want to exclude the 0 values (the masked area), but I'm not sure how to do this.
This is the command I was using to try and get mode:
#mR is the Numpy Array of the Red channel with the values of the areas I don't want at 0
print(stats.mode(mR[:, :], axis=None))
This returns 0 as the mode.
How do I exclude 0 or the masked area?
Update - Full Code:
Here's my full code using the "face" image from scipy.misc. It still seems slow with that image, and the result is 107, which is way too high for the masked area (shadows), so it seems like it's processing the whole image, not just the area in the mask.
import cv2
import numpy as np
from scipy import stats
import scipy.misc
img = scipy.misc.face()
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
r, g, b = cv2.split(img_rgb)
img_lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l_channel ,a_channel, b_channel = cv2.split(img_lab)
mask = cv2.inRange(l_channel, 5, 10)
cv2.imshow("mask", mask)
print(stats.mode(r[mask],axis=None))
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.waitKey(1)
You can just mask the array and use np.histogram:
counts, bins = np.histogram(mR[mR > 0], bins=np.arange(257))  # 257 edges -> one bin per intensity value 0..255
# mode
modeR = np.argmax(counts)
Update:
After the OP kindly posted their full code, I can confirm that stats.mode() is either extremely slow or never in fact completes (who knows why?).
On the other hand, @Quang Hoang's solution is as elegant as it is fast - and it also works for me in terms of respecting the mask.
I of course therefore throw my weight behind QH's answer.
My old answer:
Try
print(stats.mode(mR[mask],axis=None))
Except for the masking, calculating the mode of a numpy array efficiently is covered extensively here:
Most efficient way to find mode in numpy array
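For reference, a minimal sketch of the bincount approach from that link, adapted to skip the masked zeros (assuming mR holds uint8 values):
import numpy as np
vals = mR[mR > 0]  # drop the masked (zero) pixels
modeR = np.bincount(vals).argmax()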
I'm working with Python and trying to do Otsu thresholding on an image, but only inside a mask (yes, I have an image and a mask image). That means fewer pixels of the image will be included in the histogram for calculating the Otsu threshold.
I'm currently using the cv2.threshold function without the mask image and have no idea how to do this kind of job.
ret, OtsuMat = cv2.threshold(GaborMat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
Since this function also incorporates the pixels outside the mask, I think it will give a less accurate threshold.
This is the example of the image and its mask:
https://drive.google.com/drive/folders/1p8JMhncJs19oOWO9RdkWuEADVGqE-gzQ?usp=sharing
Hope there is an OpenCV or other library function to do it easily (and also with fast computation), but any kind of help will be appreciated.
I had a try at this using the threshold_otsu() method from skimage and a NumPy masked array. I don't know if there are faster ways - skimage is normally pretty well optimised. If anyone else wants to take my sample data and try other ideas on it, please feel free - although there is a service charge of one upvote ;-)
#!/usr/bin/env python3
import cv2
import numpy as np
import numpy.ma as ma
from skimage.filters import threshold_otsu
# Set up some repeatable test data: 4 blocks of 100x100 pixels, each of random normal np.uint8s centred on 32, 64, 160, 192
np.random.seed(42)
a = np.random.normal(size=(100,100), loc=32, scale=10).astype(np.uint8)
b = np.random.normal(size=(100,100), loc=64, scale=10).astype(np.uint8)
c = np.random.normal(size=(100,100), loc=160, scale=10).astype(np.uint8)
d = np.random.normal(size=(100,100), loc=192, scale=10).astype(np.uint8)
# Stack (concatenate) the 4 squares horizontally across the page
im = np.hstack((a,b,c,d))
# Next line is just for debug
cv2.imwrite('start.png',im)
That gives us this:
# Now make a mask revealing only left half of image, centred on 32 and 64
mask=np.zeros((100,400))
mask[:,200:]=1
masked = ma.masked_array(im,mask)
print(threshold_otsu(masked.compressed())) # Prints 47
# Now do same revealing only right half of image, centred on 160 and 192
masked = ma.masked_array(im,1-mask)
print(threshold_otsu(masked.compressed())) # Prints 175
The histogram of the test data looks like this, x-axis is 0..255
Adapting to your own sample data, I get this:
#!/usr/bin/env python3
import cv2
import numpy as np
import numpy.ma as ma
from skimage.filters import threshold_otsu
# Load images
im = cv2.imread('eye.tif', cv2.IMREAD_UNCHANGED)
mask = cv2.imread('mask.tif', cv2.IMREAD_UNCHANGED)
# Calculate Otsu threshold on entire image
print(threshold_otsu(im)) # prints 130
# Now do same for masked image
masked = ma.masked_array(im,mask>0)
print(threshold_otsu(masked.compressed()))  # prints 124
I am trying to use a DICOM image and manipulate it with OpenCV in a Python environment. So far I have used the pydicom library to read the DICOM (.dcm) image data and the pixel_array attribute to display the picture with OpenCV's imshow method, but the output is just a blank window. Here is the snippet of code I am using at this moment.
import numpy as np
import cv2
import pydicom as dicom
ds=dicom.dcmread('sample.dcm')
cv2.imshow('sample image dicom',ds.pixel_array)
cv2.waitKey(0)
If I print out the array used here, the output is different from what I would get with a normal NumPy array. I have tried using matplotlib's imshow method as well, and it was able to display the image with some colour distortions. Is there a way to convert the array into a legible format for OpenCV?
Faced a similar issue. I used exposure.equalize_adapthist() (source). The resulting image isn't a hundred percent identical to what you would see in a DICOM viewer, but it's the best I was able to get.
import numpy as np
import cv2
import pydicom as dicom
from skimage import exposure
ds=dicom.dcmread('sample.dcm')
dcm_sample=ds.pixel_array
dcm_sample=exposure.equalize_adapthist(dcm_sample)
cv2.imshow('sample image dicom',dcm_sample)
cv2.waitKey(0)
I have figured out a way to get the image to show. As Dan mentioned in the comments, the values of the matrix were scaled down, and due to the imshow function the output was too dark for the human eye to differentiate. So, in the end, the only thing I needed to do was multiply the entire mat data by 128. The image is showing perfectly now. Multiplying the matrix by 255 overexposes the picture and blows out certain features. Here is the revised code.
import numpy as np
import cv2
import pydicom as dicom
ds=dicom.dcmread('sample.dcm')
dcm_sample=ds.pixel_array*128
cv2.imshow('sample image dicom',dcm_sample)
cv2.waitKey(0)
I don't think that is a correct answer. It works for that particular image because most of your pixel values are in the lower range. Check this: OpenCV: How to visualize a depth image. It is for C++ but easily adapted to Python.
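For example, a minimal sketch of that idea: min-max normalise whatever range the DICOM data happens to have, instead of multiplying by a magic constant:
import cv2
import pydicom as dicom
ds = dicom.dcmread('sample.dcm')
# scale the full data range to 0..255 rather than using a fixed factor
dcm_sample = cv2.normalize(ds.pixel_array, None, 0, 255, cv2.NORM_MINMAX).astype('uint8')
cv2.imshow('sample image dicom', dcm_sample)
cv2.waitKey(0)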
This is the best way (in my opinion) to open an image in OpenCV as a NumPy array while preserving the image quality:
import numpy as np
import pydicom, os, cv2
def dicom_to_numpy(ds):
    DCM_Img = ds
    rows = DCM_Img.get(0x00280010).value  # Get number of rows from tag (0028, 0010)
    cols = DCM_Img.get(0x00280011).value  # Get number of cols from tag (0028, 0011)
    Instance_Number = int(DCM_Img.get(0x00200013).value)  # Get actual slice instance number from tag (0020, 0013)
    Window_Center = int(DCM_Img.get(0x00281050).value)  # Get window center from tag (0028, 1050)
    Window_Width = int(DCM_Img.get(0x00281051).value)  # Get window width from tag (0028, 1051)
    Window_Max = int(Window_Center + Window_Width / 2)
    Window_Min = int(Window_Center - Window_Width / 2)

    if DCM_Img.get(0x00281052) is None:
        Rescale_Intercept = 0
    else:
        Rescale_Intercept = int(DCM_Img.get(0x00281052).value)

    if DCM_Img.get(0x00281053) is None:
        Rescale_Slope = 1
    else:
        Rescale_Slope = int(DCM_Img.get(0x00281053).value)

    New_Img = np.zeros((rows, cols), np.uint8)
    Pixels = DCM_Img.pixel_array

    for i in range(0, rows):
        for j in range(0, cols):
            Pix_Val = Pixels[i][j]
            Rescale_Pix_Val = Pix_Val * Rescale_Slope + Rescale_Intercept

            if Rescale_Pix_Val > Window_Max:  # if intensity is greater than max window
                New_Img[i][j] = 255
            elif Rescale_Pix_Val < Window_Min:  # if intensity is less than min window
                New_Img[i][j] = 0
            else:
                New_Img[i][j] = int(((Rescale_Pix_Val - Window_Min) / (Window_Max - Window_Min)) * 255)  # normalize the intensities

    return New_Img
file_path = "C:/example.dcm"
image = pydicom.dcmread(file_path)
image = dicom_to_numpy(image)
#show image
cv2.imshow('sample image dicom',image)
cv2.waitKey(0)
cv2.destroyAllWindows()
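As a footnote, the double loop in dicom_to_numpy above can be vectorized; here is a sketch of an equivalent windowing step (window_to_uint8 is a hypothetical helper, taking the same rescale and window values computed in the function):
import numpy as np
def window_to_uint8(pixels, slope, intercept, w_min, w_max):
    # apply the modality rescale, then map the window to 0..255 in one shot
    rescaled = pixels.astype(np.float64) * slope + intercept
    clipped = np.clip(rescaled, w_min, w_max)  # values outside the window become 0 or 255 below
    return (((clipped - w_min) / (w_max - w_min)) * 255).astype(np.uint8)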