I am working with Python 3.6 and performing the following operations on an image (read, pad, crop, rotate), using either skimage or basic Python functions. If I don't rotate the image there is no warning, but if I do, I get the following warning:
/anaconda3/lib/python3.6/site-packages/skimage/util/dtype.py:122:
UserWarning: Possible precision loss when converting from int64 to
float64 .format(dtypeobj_in, dtypeobj_out))
When I try to view the image using imshow, the values appear to be on the order of 1e-17 (https://ibb.co/cSfzb7). I guess the image values got normalized somewhere, but I am not able to find where.
If I try to save the rotated image, I get a completely black image.
im = img_as_uint(image)
imsave(image_path, im, plugin='imageio')
I tried removing img_as_uint, using the default imsave, and using (freeimage, imageio) as the plugin option:
/anaconda3/lib/python3.6/site-packages/skimage/util/dtype.py:122:
UserWarning: Possible precision loss when converting from int64 to
float64 .format(dtypeobj_in, dtypeobj_out))
/anaconda3/lib/python3.6/site-packages/skimage/util/dtype.py:122:
UserWarning: Possible precision loss when converting from float64 to
uint16 .format(dtypeobj_in, dtypeobj_out))
/anaconda3/lib/python3.6/site-packages/skimage/io/_io.py:132:
UserWarning: /Users/ashisharora/google_Drive/madcat_arabic/lines/AAW_ARB_20061105.0017-S1_1_LDC0372_00z1.png is a low contrast image warn('%s is a low contrast image' % fname)
import sys
import argparse
import os
from math import degrees  # needed for the degrees() call below
import numpy as np
import matplotlib
matplotlib.use('TkAgg')
import skimage
from skimage.io import imread, imsave, imshow
from skimage.transform import rotate
from skimage import img_as_uint
im_wo_pad = imread(image_file_name)
im = pad_image(im_wo_pad)
region_initial = im[281:480, 2509:4766]
rotation_angle_in_rad = -0.00708
img2 = rotate(region_initial, degrees(rotation_angle_in_rad))
region_final = img2[15:298, 7:2263]
imshow(region_final)
def pad_image(image):
    # Pad the image with a 200-pixel white (255) border on all four sides.
    offset = 200
    max_val = 255
    height = image.shape[0]
    # Left and right borders (note: dtype=int yields int64 arrays, so the
    # padded image becomes int64 as well).
    im_pad = np.concatenate((max_val * np.ones((height, offset), dtype=int), image), axis=1)
    im_pad1 = np.concatenate((im_pad, max_val * np.ones((height, offset), dtype=int)), axis=1)
    width = im_pad1.shape[1]
    # Top and bottom borders.
    im_pad2 = np.concatenate((max_val * np.ones((offset, width), dtype=int), im_pad1), axis=0)
    im_pad3 = np.concatenate((im_pad2, max_val * np.ones((offset, width), dtype=int)), axis=0)
    return im_pad3
I tried solutions posted on Stack Overflow, but they are not working in my case. I would appreciate it if someone could help.
By default, the skimage rotate function converts the image to float and scales its values to the range 0-1 (see http://scikit-image.org/docs/dev/api/skimage.transform.html#skimage.transform.rotate).
Try using the preserve_range parameter and some type casting, e.g.,
img_rot = skimage.transform.rotate(img, rot_angle, preserve_range=True).astype(np.uint8)
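Applied to the snippet in the question, a minimal sketch might look like this (the uint8 casts are my addition, assuming the source image is 8-bit):
import numpy as np
from math import degrees
from skimage.transform import rotate
# Keep the padded image 8-bit so no int64 -> float64 conversion occurs.
im = pad_image(im_wo_pad).astype(np.uint8)
region_initial = im[281:480, 2509:4766]
# preserve_range=True keeps the 0-255 values instead of rescaling to [0, 1].
img2 = rotate(region_initial, degrees(-0.00708), preserve_range=True).astype(np.uint8)
region_final = img2[15:298, 7:2263]
With the values back in an integer range, imsave should write the image without the precision-loss warnings and without producing a black file.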
I am trying to use a DICOM image and manipulate it using OpenCV in a Python environment. So far I have used the pydicom library to read the DICOM (.dcm) image data and the pixel_array attribute to display the picture using OpenCV's imshow method, but the output is just a blank window. Here is the snippet of code I am using at the moment:
import numpy as np
import cv2
import pydicom as dicom
ds=dicom.dcmread('sample.dcm')
cv2.imshow('sample image dicom',ds.pixel_array)
cv2.waitKey(0)
If I print out the array used here, the output is different from what I would get with a normal numpy array. I have tried matplotlib's imshow method as well, and it was able to display the image with some colour distortions. Is there a way to convert the array into a legible format for OpenCV?
I faced a similar issue and used exposure.equalize_adapthist() (source). The resulting image isn't a hundred percent identical to what you would see in a DICOM viewer, but it's the best I was able to get.
import numpy as np
import cv2
import pydicom as dicom
from skimage import exposure
ds=dicom.dcmread('sample.dcm')
dcm_sample=ds.pixel_array
dcm_sample=exposure.equalize_adapthist(dcm_sample)
cv2.imshow('sample image dicom',dcm_sample)
cv2.waitKey(0)
I have figured out a way to get the image to show. As Dan mentioned in the comments, the values in the matrix were scaled down, and with the way imshow works the output was too dark for the human eye to differentiate. So in the end, the only thing I needed to do was multiply the entire mat data by 128; the image displays perfectly now. Multiplying the matrix by 255 overexposes the picture and causes certain features to blow out. Here is the revised code:
import numpy as np
import cv2
import pydicom as dicom
ds=dicom.dcmread('sample.dcm')
dcm_sample=ds.pixel_array*128
cv2.imshow('sample image dicom',dcm_sample)
cv2.waitKey(0)
I don't think that is a correct answer. It works for that particular image only because most of your pixel values are in the lower range. Check this: OpenCV: How to visualize a depth image. It is for C++ but easily adapted to Python.
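For illustration, a minimal sketch of that min-max approach in Python (the file name is a placeholder; cv2.normalize stretches whatever range the data actually spans to the full 0-255 display range):
import cv2
import pydicom as dicom
ds = dicom.dcmread('sample.dcm')
# Stretch the actual dynamic range of the data to 0-255 for display,
# regardless of where the pixel values happen to sit.
disp = cv2.normalize(ds.pixel_array, None, 0, 255, cv2.NORM_MINMAX).astype('uint8')
cv2.imshow('sample image dicom', disp)
cv2.waitKey(0)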
This is the best way (in my opinion) to open a DICOM image in OpenCV as a numpy array while preserving the image quality:
import numpy as np
import pydicom, os, cv2

def dicom_to_numpy(ds):
    DCM_Img = ds

    rows = DCM_Img.get(0x00280010).value  # number of rows from tag (0028, 0010)
    cols = DCM_Img.get(0x00280011).value  # number of columns from tag (0028, 0011)

    Instance_Number = int(DCM_Img.get(0x00200013).value)  # slice instance number from tag (0020, 0013)

    Window_Center = int(DCM_Img.get(0x00281050).value)  # window center from tag (0028, 1050)
    Window_Width = int(DCM_Img.get(0x00281051).value)  # window width from tag (0028, 1051)

    Window_Max = int(Window_Center + Window_Width / 2)
    Window_Min = int(Window_Center - Window_Width / 2)

    if DCM_Img.get(0x00281052) is None:
        Rescale_Intercept = 0
    else:
        Rescale_Intercept = int(DCM_Img.get(0x00281052).value)

    if DCM_Img.get(0x00281053) is None:
        Rescale_Slope = 1
    else:
        Rescale_Slope = int(DCM_Img.get(0x00281053).value)

    New_Img = np.zeros((rows, cols), np.uint8)
    Pixels = DCM_Img.pixel_array

    for i in range(0, rows):
        for j in range(0, cols):
            Pix_Val = Pixels[i][j]
            Rescale_Pix_Val = Pix_Val * Rescale_Slope + Rescale_Intercept

            if Rescale_Pix_Val > Window_Max:  # intensity above the window maximum
                New_Img[i][j] = 255
            elif Rescale_Pix_Val < Window_Min:  # intensity below the window minimum
                New_Img[i][j] = 0
            else:
                New_Img[i][j] = int(((Rescale_Pix_Val - Window_Min) / (Window_Max - Window_Min)) * 255)  # normalize the intensities

    return New_Img

file_path = "C:/example.dcm"
image = pydicom.read_file(file_path)
image = dicom_to_numpy(image)

# show image
cv2.imshow('sample image dicom', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
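As a side note, the per-pixel loop above can be replaced by a few vectorized numpy operations. A sketch of an equivalent windowing step (same rescale/clip/normalize logic; the function name and signature are mine):
import numpy as np

def window_image(pixels, window_center, window_width, slope=1, intercept=0):
    # Apply the DICOM rescale, then map the window interval to 0-255.
    w_min = window_center - window_width / 2
    w_max = window_center + window_width / 2
    rescaled = pixels.astype(np.float64) * slope + intercept
    # Values below the window clip to 0, above it to 255; in between they scale linearly.
    scaled = (rescaled - w_min) / (w_max - w_min) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)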
So I wrote this little piece of code to try to convert an RGB image to grayscale, taken from this accepted answer. The problem is that it shows a cryptic image with no likeness to the original, even when I am just re-creating the original. What do you think the problem is, and how should we go about solving it? After that, I want to convert the image to grayscale as per the given array b.
Here's my code:
import matplotlib.image as img
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
a = img.imread('hCeFA.png')
a = a * 255 #Matplotlib gives float values between 0-1
b = a * np.array([0.3, 0.59, 0.11])
b = np.sum(b, axis = 2) / 3 #Grayscale conversion
img1 = Image.fromarray(a, 'RGB')
img1.save('my.png')
img1.show() #Gives a cryptic image
plt.imshow(a/255, interpolation='nearest') #Works fine
plt.show()
Better to read the image with the 'I' flag, like:
imread('imageName.imgFormat', "I")
That will resolve the issue. It will read the image in uint8 format, which I think is what you want.
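For what it's worth, the cryptic image in the question comes from passing a float array to Image.fromarray with mode 'RGB'. A minimal sketch of a cast-based fix (the uint8 casts are my addition, assuming the PNG has three channels):
import matplotlib.image as img
import numpy as np
from PIL import Image

a = img.imread('hCeFA.png')      # matplotlib gives floats in [0, 1]
a = (a * 255).astype(np.uint8)   # Image.fromarray expects uint8 for mode 'RGB'
Image.fromarray(a, 'RGB').save('my.png')

# Grayscale with the usual luminance weights; no extra division by 3,
# since the weights already sum to roughly 1.
b = (a * np.array([0.3, 0.59, 0.11])).sum(axis=2)
Image.fromarray(b.astype(np.uint8), 'L').save('gray.png')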
Based on a solution that I read at How to define the markers for Watershed in OpenCV?, I am trying to apply watershed to grayscale data (not very visible but not all black) extracted from a netCDF file (precipitation data).
Here is a black-and-white version of the data (thresholded at 0) so that you can see it more easily, along with the markers I want to use to define the different basins (basically just another threshold where precipitation is more intense).
The code I'm running is as follows:
import os,sys,string
from netCDF4 import Dataset as nc
import cv2
import numpy as np
import matplotlib.pyplot as mpl
import scipy.ndimage as ndimage
import scipy.spatial as spatial
from skimage import filter
from skimage.morphology import watershed
from scipy import ndimage
filename=["Cmorph-1999_01_03.nc"]
nc_data=nc(filename[0])
data=nc_data.variables["CMORPH"][23,0:250,250:750]
new_data=np.flipud(data)
ma_data=np.ma.masked_where(new_data<=0,new_data)
ma_conv=np.ma.masked_where(new_data<=2,new_data)
## Borders
tmp_data=ma_data.filled(0)
tmp_data[np.where(tmp_data!=0)]=255
bw_data=tmp_data.astype(np.uint8)
border = cv2.dilate(bw_data, None, iterations=5)
border = border - cv2.erode(border, None)
## Markers
tmp_conv=ma_conv.filled(0)
tmp_conv[np.where(tmp_conv!=0)]=255
bw_conv=tmp_conv.astype(np.uint8)
lbl, ncc = ndimage.label(bw_conv)
lbl = lbl * (255/ncc)
lbl[border == 255] = 255
lbl = lbl.astype(np.int32)
## Apply watershed
cv2.watershed(ma_data, lbl)
lbl[lbl == -1] = 0
lbl = lbl.astype(np.uint8)
result = 255 - lbl
I get the following error from the watershed, in opencv-2.4.11/modules/imgproc/src/segmentation.cpp:
error: (-210) Only 8-bit, 3-channel input images are supported in function cvWatershed
From what I saw on the internet, this is due to the fact that the grayscale data is a 2D image, while watershed needs a 3D (RGB) image. Indeed, I tried the script with a jpg image and it worked perfectly.
This problem is mentioned here, but the answer given was ultimately rejected, and I can't find any more recent link answering the question.
To try to solve this, I created a 3D array from the 2D new_data:
new_data = new_data[..., np.newaxis]
test=np.append(new_data, new_data, axis=2)
test=np.append(new_data, test, axis=2)
But, as expected, it didn't solve the problem (same error message).
I also tried to save the plot from matplotlib to get RGB data:
fig = mpl.figure()
fig.add_subplot(111)
fig.tight_layout(pad=0)
mpl.contourf(ma_data,levels=np.arange(0,255.1,0.1))
fig.canvas.draw()
test_data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')
test_data = test_data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
But the size of the created test_data is different from ma_data (plus I can't get rid of the labels).
So I am stuck here. Ideally, I want to apply the watershed to the 2D grayscale image directly and/or limit the number of operations as much as possible.
As yapws87 mentioned, there was indeed a problem with the format I was presenting to the watershed function.
Doing try_data=ma_data.astype(np.uint8) removed the error message.
Here is a minimal example that works now:
import os,sys
from netCDF4 import Dataset as nc
import cv2
import numpy as np
import scipy.ndimage as ndimage
from skimage.morphology import watershed
from scipy import ndimage
basename="/home/dcop696/Data/CMORPH/precip/CMORPH_V1.0/CRT/8km-30min/1999/"
filename=["Cmorph-1999_01_03.nc"]
fileslm=["/home/dcop696/Data/LSM/Cmorph_slm_8km.nc"]
nc_data=nc(basename+filename[0])
data=nc_data.variables["CMORPH"][23,0:250,250:750]
new_data=np.flipud(data)
ma_data=np.ma.masked_where(new_data<=0,new_data)
try_data=ma_data.astype(np.uint8)
## Building threshold
tmp_data=ma_data.filled(0)
tmp_data[np.where(tmp_data!=0)]=255
bw_data=tmp_data.astype(np.uint8)
## Building markers
ma_conv=np.ma.masked_where(new_data<=2,new_data)
tmp_conv=ma_conv.filled(0)
tmp_conv[np.where(tmp_conv!=0)]=255
bw_conv=tmp_conv.astype(np.uint8)
markers = ndimage.label(bw_conv)[0]
## Watershed
labels = watershed(-try_data, markers, mask=bw_data)
You can try changing your image from gray to a BGR color space using
cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
before passing it to the watershed algorithm.
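A minimal sketch of that conversion applied to the data above (a guess building on the accepted answer; lbl is the int32 marker image from the question):
import cv2
import numpy as np

# cv2.watershed wants an 8-bit, 3-channel image, so convert the
# single-channel uint8 data to BGR first.
gray8 = ma_data.filled(0).astype(np.uint8)
bgr = cv2.cvtColor(gray8, cv2.COLOR_GRAY2BGR)
cv2.watershed(bgr, lbl)  # lbl is modified in place; boundaries become -1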
I'm trying to run the canny edge detector on this image:
With this code:
def edges(img):
    from skimage import feature
    img = Image.open(img)
    img.convert('L')
    array = np.array(img)
    out = feature.canny(array, sigma=1)
    return Image.fromarray(out, 'L')

edges('Q_3.jpg').save('Q_3_edges.jpg')
But I'm just getting a black image back. Any ideas what I could be doing wrong? I tried a sigma of 1 and of 3.
I had the same situation and this worked for me: before using the Canny filter, convert the elements of your image array to float32:
array = np.array(img)
array = array.astype('float32')
out = feature.canny(array, sigma=1)
Your images need to be in the correct range for the relevant dtype, as discussed in the user manual here: http://scikit-image.org/docs/stable/user_guide/data_types.html
This should be automatically handled if you use the scikit-image image I/O functions:
from skimage import io
img = io.imread('Q_3.jpg')
So the issue was with the canny function returning an array of type boolean.
Oddly, setting the Image.fromarray mode to '1' didn't help. Instead, the only way I could get it working was to convert the output array to grayscale:
def edges(img):
    from skimage import feature
    img = Image.open(img)
    img.convert('L')
    array = np.array(img)
    out = np.uint8(feature.canny(array, sigma=1) * 255)
    return Image.fromarray(out, mode='L')
The problem happens when the image is loaded as float (i.e. in the range 0-1); the loader does that for some types of images. You can check the dtype of the loaded image with:
print(img.dtype)
If the output is something like float64 (i.e. not uint8), then your image is in the range 0-1.
Canny expects an image in the range 0-255. Therefore, the solution is as easy as:
from skimage import img_as_ubyte
img = io.imread("an_image.jpg")
img = img_as_ubyte(img)
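Putting it together with the Canny call, a minimal sketch (the as_gray flag is my assumption for loading the file as a single channel; older scikit-image versions spell it as_grey):
from skimage import io, img_as_ubyte, feature

img = img_as_ubyte(io.imread("an_image.jpg", as_gray=True))
edges = feature.canny(img, sigma=1)  # boolean array, True on edge pixels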
Hope this helps.
The problem happens when the image is saved. You can save the image with another library, like matplotlib:
import numpy as np
import matplotlib.pyplot as plt
from skimage import feature
from skimage import io
def edges(img):
    img = io.imread(img)
    array = np.array(img)
    out = feature.canny(array, sigma=1)
    return out
plt.imsave("canny.jpg", edges("input.jpg"), cmap="Greys")
I am trying to save a numpy array of dimensions 128x128 pixels as a grayscale image.
I simply thought that the pyplot.imsave function would do the job, but it doesn't: it somehow converts my array into an RGB image.
I tried to force the colormap to Gray during conversion, but even though the saved image appears in grayscale, it still has 128x128x4 dimensions.
Here is a code sample I wrote to show the behaviour :
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mplimg
from matplotlib import cm
x_tot = 10e-3
nx = 128
x = np.arange(-x_tot/2, x_tot/2, x_tot/nx)
[X, Y] = np.meshgrid(x,x)
R = np.sqrt(X**2 + Y**2)
diam = 5e-3
I = np.exp(-2*(2*R/diam)**4)
plt.figure()
plt.imshow(I, extent = [-x_tot/2, x_tot/2, -x_tot/2, x_tot/2])
print(I.shape)
plt.imsave('image.png', I)
I2 = plt.imread('image.png')
print(I2.shape)
mplimg.imsave('image2.png',np.uint8(I), cmap = cm.gray)
testImg = plt.imread('image2.png')
print(testImg.shape)
In both cases the result of the print call is (128, 128, 4).
Can anyone explain why the imsave function creates those dimensions even though my input array is of a luminance type?
And of course, does anyone have a solution to save the array into a standard grayscale format?
Thanks!
With PIL it should work like this:
from PIL import Image
I8 = (((I - I.min()) / (I.max() - I.min())) * 255.9).astype(np.uint8)
img = Image.fromarray(I8)
img.save("file.png")
There is also the alternative of using imageio. It provides an easy and convenient API, and it is bundled with Anaconda. It can save grayscale images as a single-color-channel file.
Quoting the documentation
>>> import imageio
>>> im = imageio.imread('imageio:astronaut.png')
>>> im.shape # im is a numpy array
(512, 512, 3)
>>> imageio.imwrite('astronaut-gray.jpg', im[:, :, 0])
I didn't want to use PIL in my code, and as noted in the question I ran into the same problem with pyplot: even in grayscale, the file is saved as an MxNx3 matrix.
Since the actual image on disk wasn't important to me, I ended up writing the matrix as is and reading it back "as-is" using numpy's save and load methods:
np.save("filename", image_matrix)
And:
np.load("filename.npy")
There is also the possibility of using scikit-image; then there is no need to convert the numpy array into a PIL object.
from skimage import io
io.imsave('output.tiff', I.astype(np.uint16))
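Reading the file back should confirm that it stayed single-channel (a quick check, assuming the same I as above):
from skimage import io
img = io.imread('output.tiff')
print(img.shape, img.dtype)  # expected: (128, 128) uint16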