Using DICOM Images with OpenCV in Python - python

I am trying to use a DICOM image and manipulate it with OpenCV in a Python environment. So far I have used the pydicom library to read the DICOM (.dcm) image data and the pixel_array attribute to display the picture with OpenCV's imshow method. But the output is just a blank window. Here is the snippet of code I am using at the moment.
import numpy as np
import cv2
import pydicom as dicom
ds=dicom.dcmread('sample.dcm')
cv2.imshow('sample image dicom',ds.pixel_array)
cv2.waitKey()
If I print out the array used here, the output is different from what I would get with a normal numpy array. I have also tried matplotlib's imshow method, and it was able to display the image with some colour distortion. Is there a way to convert the array into a legible format for OpenCV?

I faced a similar issue and used exposure.equalize_adapthist() (source). The resulting image isn't a hundred percent identical to what you would see in a DICOM viewer, but it's the best I was able to get.
import numpy as np
import cv2
import pydicom as dicom
from skimage import exposure
ds=dicom.dcmread('sample.dcm')
dcm_sample=ds.pixel_array
dcm_sample=exposure.equalize_adapthist(dcm_sample)
cv2.imshow('sample image dicom',dcm_sample)
cv2.waitKey()

I have figured out a way to get the image to show. As Dan mentioned in the comments, the values in the matrix were scaled down and, due to the way imshow works, the output was too dark for the human eye to differentiate. So, in the end, the only thing I needed to do was multiply the entire mat data by 128. The image is showing perfectly now. Multiplying the matrix by 255 overexposes the picture and causes certain features to blow out. Here is the revised code.
import numpy as np
import cv2
import pydicom as dicom
ds=dicom.dcmread('sample.dcm')
dcm_sample=ds.pixel_array*128
cv2.imshow('sample image dicom',dcm_sample)
cv2.waitKey()

I don't think that is a correct answer. It works for that particular image because most of your pixel values are in the lower range. Check this: OpenCV: How to visualize a depth image. It is for C++ but easily adapted to Python.
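For reference, here is a minimal sketch of the kind of min-max normalization the linked depth-image answer uses, adapted to the DICOM snippet above (the file name is just a placeholder); it scales whatever range this particular image has onto 0-255 instead of multiplying by a hard-coded constant:
import numpy as np
import cv2
import pydicom as dicom
ds = dicom.dcmread('sample.dcm')
img = ds.pixel_array.astype(np.float32)
# Stretch this image's own min..max range onto 0..255
img_8u = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow('sample image dicom', img_8u)
cv2.waitKey(0)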

This is the best way (in my opinion) to open an image in OpenCV as a numpy array while preserving the image quality:
import numpy as np
import pydicom, os, cv2
def dicom_to_numpy(ds):
    DCM_Img = ds

    rows = DCM_Img.get(0x00280010).value  # Get number of rows from tag (0028, 0010)
    cols = DCM_Img.get(0x00280011).value  # Get number of cols from tag (0028, 0011)

    Instance_Number = int(DCM_Img.get(0x00200013).value)  # Get actual slice instance number from tag (0020, 0013)

    Window_Center = int(DCM_Img.get(0x00281050).value)  # Get window center from tag (0028, 1050)
    Window_Width = int(DCM_Img.get(0x00281051).value)  # Get window width from tag (0028, 1051)

    Window_Max = int(Window_Center + Window_Width / 2)
    Window_Min = int(Window_Center - Window_Width / 2)

    if DCM_Img.get(0x00281052) is None:
        Rescale_Intercept = 0
    else:
        Rescale_Intercept = int(DCM_Img.get(0x00281052).value)

    if DCM_Img.get(0x00281053) is None:
        Rescale_Slope = 1
    else:
        Rescale_Slope = int(DCM_Img.get(0x00281053).value)

    New_Img = np.zeros((rows, cols), np.uint8)
    Pixels = DCM_Img.pixel_array

    for i in range(0, rows):
        for j in range(0, cols):
            Pix_Val = Pixels[i][j]
            Rescale_Pix_Val = Pix_Val * Rescale_Slope + Rescale_Intercept

            if Rescale_Pix_Val > Window_Max:  # if intensity is greater than max window
                New_Img[i][j] = 255
            elif Rescale_Pix_Val < Window_Min:  # if intensity is less than min window
                New_Img[i][j] = 0
            else:
                New_Img[i][j] = int(((Rescale_Pix_Val - Window_Min) / (Window_Max - Window_Min)) * 255)  # Normalize the intensities

    return New_Img
file_path = "C:/example.dcm"
image = pydicom.dcmread(file_path)
image = dicom_to_numpy(image)
#show image
cv2.imshow('sample image dicom',image)
cv2.waitKey(0)
cv2.destroyAllWindows()
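As an aside, the per-pixel loop above is quite slow for large slices. Here is a sketch of the same windowing done with NumPy array operations, assuming the same tags are present and single-valued:
import numpy as np
import pydicom
import cv2
def dicom_to_numpy_vectorized(ds):
    center = float(ds.get(0x00281050).value)  # window center (0028, 1050)
    width = float(ds.get(0x00281051).value)   # window width (0028, 1051)
    intercept = float(ds.get(0x00281052).value) if ds.get(0x00281052) is not None else 0.0
    slope = float(ds.get(0x00281053).value) if ds.get(0x00281053) is not None else 1.0
    win_min = center - width / 2
    win_max = center + width / 2
    # Rescale, clip to the window, then map the window onto 0..255
    pixels = ds.pixel_array.astype(np.float32) * slope + intercept
    pixels = np.clip(pixels, win_min, win_max)
    return (((pixels - win_min) / (win_max - win_min)) * 255).astype(np.uint8)
image = dicom_to_numpy_vectorized(pydicom.dcmread("C:/example.dcm"))
cv2.imshow('sample image dicom', image)
cv2.waitKey(0)
cv2.destroyAllWindows()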

Related

skeletonization (thinning) of small images not giving expected results - python

I am trying to implement skeletonization of small images, but I am not getting the expected results. I also tried thin() and medial_axis(), but nothing seems to work as expected. I suspect this problem occurs because of the small resolution of the images. Here is the code:
import cv2
from numpy import asarray
import numpy as np
# open image
file = "66.png"
img_grey = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
afterMedian = cv2.medianBlur(img_grey, 3)
thresh = 140
# threshold the image
img_binary = cv2.threshold(afterMedian, thresh, 255, cv2.THRESH_BINARY)[1]
# make binary image
arr = asarray(img_binary)
binaryArr = np.zeros(asarray(img_binary).shape)
for i in range(0, arr.shape[0]):
    for j in range(0, arr.shape[1]):
        if arr[i][j] == 255:
            binaryArr[i][j] = 1
        else:
            binaryArr[i][j] = 0
# perform skeletonization
from skimage.morphology import skeletonize
cv2.imshow("binary arr", binaryArr)
backgroundSkeleton = skeletonize(binaryArr)
# convert to non-binary image
bSkeleton = np.zeros(arr.shape)
for i in range(0, arr.shape[0]):
    for j in range(0, arr.shape[1]):
        if backgroundSkeleton[i][j] == 0:
            bSkeleton[i][j] = 0
        else:
            bSkeleton[i][j] = 255
cv2.imshow("background skeleton", bSkeleton)
cv2.waitKey(0)
The results are:
I would expect something more like this:
This applies to similar shapes also:
Expectation:
Am I doing something wrong? Or will it truly not be possible with such small pictures? I tried skeletonization on bigger images and it worked just fine. Original images:
You could try the skeleton in DIPlib (dip.EuclideanSkeleton):
import numpy as np
import diplib as dip
import cv2
file = "66.png"
img_grey = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
afterMedian = cv2.medianBlur(img_grey, 3)
thresh = 140
bin = afterMedian > thresh
sk = dip.EuclideanSkeleton(bin, endPixelCondition='three neighbors')
dip.viewer.Show(bin)
dip.viewer.Show(sk)
dip.viewer.Spin()
The endPixelCondition input argument can be used to adjust how many branches are preserved or removed. 'three neighbors' is the option that produces the most branches.
The code above produces branches also towards the corners of the image. Using 'two neighbors' prevents that, but produces fewer branches towards the object as well. The other way to prevent it is to set edgeCondition='object', but in this case the ring around the object becomes a square on the image boundary.
To convert the DIPlib image sk back to a NumPy array, do
sk = np.array(sk)
sk is now a Boolean NumPy array (values True and False). To create an array compatible with OpenCV simply cast to np.uint8 and multiply by 255:
sk = np.array(sk, dtype=np.uint8)
sk *= 255
Note that, when dealing with NumPy arrays, you generally don't need to loop over all pixels. In fact, it's worth trying to avoid doing so, as loops in Python are extremely slow.
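For example, the two conversion loops in the question collapse to one-liners (a sketch reusing the question's variable names, so img_binary and backgroundSkeleton are assumed to exist already):
import numpy as np
# 0/255 threshold output to a 0/1 array without loops
binaryArr = (img_binary == 255).astype(np.uint8)
# boolean skeleton back to a 0/255 uint8 image for cv2.imshow
bSkeleton = (backgroundSkeleton != 0).astype(np.uint8) * 255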
It seems scikit-image is a much better choice than cv2 here.
Since the package provides functions for binary (black-and-white) images, you can use its ready-made skeletonize.
Note: if the process loses image detail, don't upsample the input right away; try the other skimage morphology functions first to enhance the details, so your code also works on a bigger area of the image. You could look here.
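In case a concrete snippet helps, this is a minimal sketch of that scikit-image approach, reusing the question's file name and threshold:
import cv2
import numpy as np
from skimage.morphology import skeletonize
img_grey = cv2.imread("66.png", cv2.IMREAD_GRAYSCALE)
after_median = cv2.medianBlur(img_grey, 3)
binary = after_median > 140          # boolean image, no per-pixel loop needed
skeleton = skeletonize(binary)       # boolean skeleton
cv2.imshow("skeleton", skeleton.astype(np.uint8) * 255)
cv2.waitKey(0)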

Simplest algorithm for zooming an image in Python by K factor

I'm a total newbie to Python.
What's the simplest algorithm by which I can zoom an image by a factor of 3?
I don't want to use the already made zoom functions available.
The task is moderately cumbersome, so I have shown a simple way to implement row zooming. You can similarly modify the indexes to implement column zooming for new_image as well.
# loading the image
from PIL import Image
import numpy as np
image = np.asarray( Image.open("img.jpg") )
import matplotlib.pyplot as plt
# create new image of correct size
m = len(image[0])
n = len(image)
factor = 3
new_image = np.zeros((factor*(n-1) + 1,factor*(m-1) + 1,3), dtype=int)
# implement row zooming
for i in range(n):
    row = image[i]
    for k in range(len(row)-1):
        new_image[i][k*factor], new_image[i][(k+1)*factor] = row[k], row[k+1]
        for mode in range(3):
            # need mode as three colour channels in RGB
            lo = int(min(row[k][mode], row[k+1][mode]))
            hi = int(max(row[k][mode], row[k+1][mode]))
            diff = int((hi-lo)//factor)
            for x in range(factor-1):
                new_image[i][k*factor+1+x][mode] = lo + (x*diff)
Let us say you have a .png image named lenna.png on your file system. You can load it and convert it to a numpy array like this:
from PIL import Image
import numpy as np
image = np.asarray( Image.open("lenna.png") )
import matplotlib.pyplot as plt
plt.imshow(image)
plt.show()
Numpy offers a simple way to increase the pixel resolution like this:
# Simply increase the resolution of the image by repeating the pixels
zoom_factor = 3
for i in range(2):
    image = np.repeat(image, zoom_factor, axis=i)
If we plot the image it now simply has more pixels in each dimension:
You could then display only part of the image by cropping your new high resolution image like this
# Focus on any particular region by cropping it out
image = image[700:1000, 700:1000]
plt.imshow(image)
plt.show()
The result looks like this
Cheers!

Python: Convert image from RGB to YDbDr color space

Trying to convert image from RGB color space to YDbDr color space according to the formula:
Y = 0.299R + 0.587G + 0.114B
Db = -0.45R - 0.883G +1.333B
Dr = -1.333R + 1.116G + 0.217B
With the following code I'm trying to show only the Y channel, which should be a grayscale image, but I keep getting an image that is all blue:
import numpy as np
from PIL import Image
import cv2
import matplotlib.pyplot as plt
img = cv2.imread("./pics/Slike_modela/Test/Proba/1_Color.png")
new_img = []
for row in img:
    new_row = []
    for pixel in row:
        Y = 0.299*pixel[2]+0.587*pixel[1]+0.114*pixel[0]
        Db = -0.45*pixel[2]-0.883*pixel[1]+1.333*pixel[0]
        Dr = -1.333*pixel[2]+1.116*pixel[1]+0.217*pixel[0]
        new_pixel = [Y, Db, Dr]
        new_row.append(new_pixel)
    new_img.append(new_row)
new_img_arr = np.array(new_img)
new_img_arr_y = new_img_arr.copy()
new_img_arr_y[:,:,1] = 0
new_img_arr_y[:,:,2] = 0
print (new_img_arr_y)
cv2.imshow("y image", new_img_arr_y)
key = cv2.waitKey(0)
When printing the result array I see the correct numbers according to the formula, and the array has the correct shape.
What is my mistake? How do I get the Y-channel image, i.e. a grayscale image?
When processing images with Python, you really, really should try to avoid:
treating images as lists and appending millions and millions of pixels, each of which creates a whole new object and takes space to administer
processing images with for loops, which are very slow
The better way to deal with both of these is through using Numpy or other vectorised code libraries or techniques. That is why OpenCV, wand, scikit-image open and handle images as Numpy arrays.
So, you basically want to do a dot product of the colour channels with a set of 3 weights:
import cv2
import numpy as np
# Load image
im = cv2.imread('paddington.png', cv2.IMREAD_COLOR)
# Calculate Y using Numpy "dot()"
Y = np.dot(im[...,:3], [0.114, 0.587, 0.299]).astype(np.uint8)
That's it.
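If you also need the Db and Dr channels, the same idea extends to a single matrix product; here is a sketch assuming the BGR channel order that cv2.imread returns:
import cv2
import numpy as np
im = cv2.imread('paddington.png', cv2.IMREAD_COLOR)   # BGR order
# Rows are the Y, Db, Dr weights; columns are ordered B, G, R
M = np.array([[ 0.114,  0.587,  0.299],
              [ 1.333, -0.883, -0.450],
              [ 0.217,  1.116, -1.333]])
ydbdr = im[..., :3].astype(np.float32) @ M.T
Y = ydbdr[..., 0].astype(np.uint8)                     # grayscale luminance image
cv2.imshow('Y channel', Y)
cv2.waitKey(0)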

Why 16bit to 8bit conversion produces striped image?

I am testing a segmentation algorithm on several VHSR satellite images, which originally come in 16-bit format, but when I convert them to 8-bit images, the produced images show a striped appearance.
I've been trying different python libraries (skimage, cv2, scipy) getting similar results.
1) The original 16-bit image is a 4-band image (NIR, B, G, R), so you need to choose the right bands to create a true-colour RGB image (bands 4, 3, 2). Thanks in advance. It can be downloaded from this link:
16bit image
2) I use this code to convert each pixel value from a 16-bit integer into the 8-bit range:
import numpy as np
import matplotlib.pyplot as plt
from skimage import io
from scipy.misc import bytescale
SS = io.imread('Imag16bit.tif')
SS = bytescale(SS)
SS = np.asarray(SS)
plt.imshow(SS)
This is the result of the above code:
bytescale works for me. I think the asarray step messes up something.
import cv2
from skimage import io
from scipy.misc import bytescale
image = io.imread('SkySat_16bit.tif')
cv2.imshow('Original', image)
print(image.dtype)
image = bytescale(image)
print(image.dtype)
cv2.imshow('Converted', image)
cv2.waitKey(0)
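Note that scipy.misc.bytescale has been removed from newer SciPy releases; if it's not available, a rough NumPy equivalent of the same min-max scaling (a sketch, not the exact SciPy implementation) is:
import numpy as np
def bytescale_np(img):
    # Map the image's own min..max range onto 0..255, as bytescale did by default
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)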
I think this is a way to do it:
#!/usr/local/bin/python3
from PIL import Image
from tifffile import imsave, imread
import numpy as np
# Load image
im = imread('SkySat_16bit.tif')
# Extract Red, Green and Blue bands into separate 8-bit arrays
R = (im[:,:,3]/256).astype(np.uint8)
G = (im[:,:,2]/256).astype(np.uint8)
B = (im[:,:,1]/256).astype(np.uint8)
# Combine bands into RGB array
RGB = np.dstack((R,G,B))
# Save to disk
Image.fromarray(RGB).save('result.png')
You may want to adjust the contrast a bit, and check I selected the correct bands.
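If the result still needs contrast adjustment, a simple per-band percentile stretch is one option; this is a sketch that assumes the RGB array from the snippet above:
import numpy as np
def stretch_band(band, low_pct=2, high_pct=98):
    # Clip to the chosen percentiles, then rescale to 0..255
    lo, hi = np.percentile(band, (low_pct, high_pct))
    band = np.clip(band.astype(np.float64), lo, hi)
    return ((band - lo) / (hi - lo) * 255).astype(np.uint8)
stretched = np.dstack([stretch_band(RGB[..., i]) for i in range(3)])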

Quality loss after combining two images with PIL and numpy

I'm using PIL and numpy to combine two images: one is a .jpg and the other is represented by a numpy array that defines a mask I want to put on top of the original image (basically just a matrix with one and zero entries, the same size as the .jpg). PIL's composite function works just fine for that, but for some reason, after saving the composite image, the file size shrinks to approximately 1/3 of the original image size. Can someone explain this behavior to me?
Here's a code snippet:
import numpy as np
import PIL
from PIL import Image
from PIL import ImageColor
rgb = ImageColor.getrgb('black')
# Read image and write into numpy array
image = Image.open('test_image.jpg')
(im_width, im_height) = image.size
# Create empty mask
mask = np.zeros((im_width, im_height))
# Composite image and mask
solid_color = np.expand_dims(np.ones_like(mask), axis=2) * np.reshape(list(rgb), [1, 1, 3])
pil_solid_color = Image.fromarray(np.uint8(solid_color)).convert('RGBA')
pil_mask = Image.fromarray(np.uint8(255.*mask)).convert('L')
image = Image.composite(pil_solid_color, image, pil_mask)
# save image
image.save('test_image_with_mask.jpg')
The code was inspired by TensorFlow's object detection API. Thanks in advance.
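One thing worth checking (an assumption, not a confirmed diagnosis): image.save re-encodes the JPEG with Pillow's default quality setting, which is usually lower than what the original file was saved with, so the output file shrinks. You can pass an explicit quality when saving:
# Re-encode with a higher JPEG quality to keep the file size closer to the original
image.save('test_image_with_mask.jpg', quality=95)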
