Why does cv2.imwrite save black images? - python

Hi folks, greetings.
I am using this code, which I found on the web, to apply a Wiener filter to an image. The code:
import numpy as np
import matplotlib.pyplot as plt
import cv2
from scipy.signal import convolve2d
from skimage import color, data, restoration

img = data.astronaut()  # assumed input; the original presumably loads an image earlier
img = color.rgb2gray(img)
psf = np.ones((5, 5)) / 25
img = convolve2d(img, psf, 'same')
img += 0.1 * img.std() * np.random.standard_normal(img.shape)
deconvolved_img = restoration.wiener(img, psf, 1100)
f, (plot1, plot2) = plt.subplots(1, 2)
plot1.imshow(img)
plot2.imshow(deconvolved_img)
plt.show()
cv2.imwrite("wiener result 2.jpeg", deconvolved_img)
The issue is that when I plot the result using Matplotlib I get this:
but when I type cv2.imwrite("wiener result 2.jpeg", deconvolved_img) to save the image, I get this:
Why do I get a black image when I save it?

There are two ways to save your images as a file:
Method 1: Using matplotlib
Since you are already using the matplotlib library to show the image (plt.show()), you can use the same library to save your plots as well, with plt.savefig():
plot1.imshow(img)
plot2.imshow(deconvolved_img)
plt.savefig('path_to_save')  # the path where you want to save the plot; call before plt.show()
plt.show()
Method 2: Using OpenCV
You can also save your file using OpenCV.
But prior to saving your image, a conversion is required. The image variable deconvolved_img is of float data type, with values ranging between [0, 1]. cv2.imwrite expects 8-bit integer values in [0, 255], so when you save such an image directly, almost every pixel rounds down to 0 or 1 out of 255 and the result is perceived as a black image.
In OpenCV you can convert your image to an 8-bit integer type and scale the pixel intensities to the expected [0, 255] range using the cv2.normalize() function:
result = cv2.normalize(deconvolved_img, dst=None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)
cv2.imwrite('path_to_save', result)  # the path where you want to save the result
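Alternatively, if you already know the float image lies in [0, 1], a plain scale-and-cast is enough. A minimal sketch, assuming that value range:

import numpy as np
import cv2

# Assumes deconvolved_img is a float array with values in [0, 1]
result = np.clip(deconvolved_img * 255, 0, 255).astype(np.uint8)
cv2.imwrite('wiener_result.jpeg', result)

Note that cv2.normalize with NORM_MINMAX always stretches the actual min/max of the array to [0, 255], while the scale-and-cast preserves the original intensities; which one you want depends on your use case.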

Related

Convert gray pixel_array from DICOM to RGB image

I'm reading a DICOM gray image file as:
gray = dicom.dcmread(file).pixel_array
There I get an (x, y) shape, but I need an RGB (x, y, 3) shape. I'm trying to convert it using OpenCV:
img = cv2.cvtColor(gray, cv2.COLOR_GRAY2RGB)
And for testing I'm writing it to a file: cv2.imwrite('dcm.png', img)
I get an extremely dark image on output, which is wrong. What is the correct way to convert a pydicom image to RGB?
To answer your question, you need to provide a bit more info, and be a bit clearer.
First, what are you trying to do? Are you trying to get only an (x, y, 3) array in memory, or are you trying to convert the DICOM file to a .png file? They are very different things.
Secondly, what modality is your DICOM image?
It's likely (unless it's ultrasound or perhaps nuc med) a 16-bit greyscale image, meaning your gray array above holds 16-bit data.
So the first thing to understand is window levelling and how to display a 16-bit image in 8 bits. Have a look here: http://www.upstate.edu/radiology/education/rsna/intro/display.php.
If it's a 16-bit image and you want to view it as a greyscale image in RGB format, then you need to know what window/level you're using or need, and adjust appropriately before saving.
Thirdly, as lenik mentioned above, you need to apply the DICOM slope/intercept values to your pixel data prior to using it.
If your problem is just making a new array with an extra dimension for RGB (so sizes (r, c) to (r, c, 3)), then it's easy:
# orig is your 2D array read in with dcmread:
r, c = orig.shape
new = np.empty((r, c, 3), dtype=orig.dtype)
new[:, :, 2] = new[:, :, 1] = new[:, :, 0] = orig
# or with broadcasting
new[:, :, :] = orig[:, :, np.newaxis]
That will give you the 3rd dimension. BUT the values will still all be 16-bit, not 8-bit as needed if you want it to be RGB. (Assuming the image you read with dcmread is CT, MR or an equivalent 16-bit DICOM, the dtype is likely uint16.)
If you want it to be RGB, then you need to convert the values from 16-bit to 8-bit. For that you'll need to decide on a window/level and apply it to select the 8-bit values from the full 16-bit data range.
Your problem above ("I've got extremely dark image on output, which is wrong") is likely not wrong at all: the image looks dark either because of the default window/level cv is using, or because you didn't apply the slope/intercept.
If what you want to do is convert the DICOM to png (or jpg), then you should probably use PIL or matplotlib rather than cv. Both offer easy ways to save a 16-bit 2D array (which is what your gray is in the code above), and both allow you to specify window and level when saving to png or jpg. CV is complete overkill (meaning much bigger/slower to load, and a much higher learning curve).
Some pseudo code using matplotlib. You'll need to adjust the vmin/vmax values; the ones here would be approximately OK for a CT image.
import matplotlib.pyplot as plt
from pydicom import dcmread

df = dcmread(file)  # file: path to your DICOM file
slope = float(df.RescaleSlope)
intercept = float(df.RescaleIntercept)
df_data = intercept + df.pixel_array * slope
# tell matplotlib to 'plot' the image, with 'gray' colormap and set the
# min/max values (ie 'black' and 'white') to correspond to
# values of -100 and 300 in your array
plt.imshow(df_data, cmap='gray', vmin=-100, vmax=300)
# save as a png file
plt.savefig('png-copy.png')
That will save a png version, but with axes drawn as well. To save just the image, without axes and without whitespace, use this:
inches = (3, 3)
dpi = 150
fig, ax = plt.subplots(figsize=inches, dpi=dpi)
fig.subplots_adjust(left=0, right=1, top=1, bottom=0, wspace=0, hspace=0)
ax.imshow(df_data, cmap='gray', vmin=-100, vmax=300)
fig.savefig('copy-without-whitespace.png')
The full tutorial on reading DICOM files is here: https://www.kaggle.com/gzuidhof/full-preprocessing-tutorial
Basically, you have to extract the slope and intercept parameters from the DICOM file and do the math for every pixel: hu = pixel_value * slope + intercept. All of this is explained in the tutorial with code samples and pictures.

How to create synthetic blurred image from sharp image using PSF kernel (in image format)

Update: as suggested by @Fix, I converted BGR to RGB, but the outputs are still not the same as the paper's output.
(Small note: this post was already posted on https://dsp.stackexchange.com/posts/60670, but since I need help quickly I reposted it here; I hope this doesn't violate any policy.)
I tried to create a synthetic blurred image from a ground-truth image using PSF kernels (in png format). Some papers only mention that I need to do a convolve operation on it, but it seems I need more than that.
What I did
import matplotlib.pyplot as plt
import cv2 as cv
import scipy
from scipy import ndimage
import matplotlib.image as mpimg
import numpy as np
img = cv.imread('../dataset/text_01.png')
norm_image = cv.normalize(img, None, alpha=-0.1, beta=1.8, norm_type=cv.NORM_MINMAX, dtype=cv.CV_32F)
f = cv.imread('../matlab/uniform_kernel/kernel_01.png')
norm_f = cv.normalize(f, None, alpha=0, beta=1, norm_type=cv.NORM_MINMAX, dtype=cv.CV_32F)
result = ndimage.convolve(norm_image, norm_f, mode='nearest')
result = np.clip(result, 0, 1)
imgplot = plt.imshow(result)
plt.show()
And this only gives me an entirely white image.
I tried to decrease beta to a lower number, like norm_f = cv.normalize(f, None, alpha=0, beta=0.03, norm_type=cv.NORM_MINMAX, dtype=cv.CV_32F), and then the image appears, but its colours are very different.
The paper I got the idea from, together with the dataset (images with ground truth and PSF kernels in PNG format), is here
This is what they said:
We create the synthetic saturated images in a way similar to [3, 10].
Specifically, we first stretch the intensity range of the latent image
from [0,1] to [−0.1,1.8], and convolve the blur kernels with the
images. We then clip the blurred images into the range of [0,1]. The
same process is adopted for generating non-uniform blurred images.
These are some images I got from my source.
And this is the ground-truth image:
And this is the PSF kernel in PNG format file:
And this is their output (synthetic image):
Please help me out; any solution is fine, even if it's a piece of software, another language, or another tool. All I care about is eventually having a synthetic blurred image produced from the original (sharp) image and a PSF kernel, with good performance. (I tried Matlab with imfilter and hit a similar problem; another issue with Matlab is that it's slow.)
(Please don't judge me for caring only about the output of the process; I'm not using a deconvolution method to restore the blurred image back to the original one, I just want enough (original, blurred) dataset pairs to test my hypothesis/method.)
Thanks.
OpenCV reads/writes images in BGR format, while Matplotlib uses RGB. So if you want to display the right colours, you should first convert to RGB:
result_rgb = cv.cvtColor(result, cv.COLOR_BGR2RGB)
imgplot = plt.imshow(result_rgb)
plt.show()
Edit: You could convolve each channel separately and normalise your convolved image like this:
f = cv.cvtColor(f, cv.COLOR_BGR2GRAY)
norm_image = img / 255.0
norm_f = f / 255.0
result0 = ndimage.convolve(norm_image[:,:,0], norm_f)/(np.sum(norm_f))
result1 = ndimage.convolve(norm_image[:,:,1], norm_f)/(np.sum(norm_f))
result2 = ndimage.convolve(norm_image[:,:,2], norm_f)/(np.sum(norm_f))
result = np.stack((result0, result1, result2), axis=2).astype(np.float32)
Then you should get the right colours. Note, though, that this normalises both the image and the kernel to [0.0, 1.0], rather than stretching the image to [−0.1, 1.8] as the paper suggests.
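Putting the two together with the paper's recipe (stretch [0, 1] to [−0.1, 1.8], convolve each channel, clip back to [0, 1]), a sketch might look like this; the file names are placeholders:

import numpy as np
import cv2 as cv
from scipy import ndimage

img = cv.imread('sharp.png').astype(np.float32) / 255.0   # BGR in [0, 1]
f = cv.cvtColor(cv.imread('kernel.png'), cv.COLOR_BGR2GRAY).astype(np.float32)
f /= f.sum()                                              # kernel sums to 1

stretched = img * 1.9 - 0.1                               # [0, 1] -> [-0.1, 1.8]
result = np.stack([ndimage.convolve(stretched[:, :, c], f)
                   for c in range(3)], axis=2)
result = np.clip(result, 0, 1)                            # clip back to [0, 1]
cv.imwrite('blurred.png', (result * 255).astype(np.uint8))  # still BGR for imwrite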

Store calculated filter banks into spectrogram image with Python

I am using the following code to calculate the frequency or MFCC coefficients of a wavelet signal. Once I have calculated my signals (frequency over time) as 2D numpy arrays, I try to store them locally as .png images. I am trying to do so in two different ways. Firstly, by using:
matplotlib.image.imsave("my_img.png", filter_banks)
That leads to:
and secondly, using the librosa tool:
import librosa.display
import matplotlib.pyplot as plt
from matplotlib import cm

fig = plt.figure(figsize=(..., ...), dpi=1)  # figure size elided in the original
librosa.display.specshow(filter_banks.T, cmap=cm.jet)
plt.tight_layout()
plt.savefig("_plot_static_conv.png")
plt.show()
and the result looks like:
My issue is that there is some white margin around the image, which is not desired. How can I get the same size in the second case as well, and avoid the white margin around the image, which I guess is caused by plt.figure?
EDIT: I tried the answer from the following post, but it did not solve my issue.
Probably as a workaround: if your white margin is 4 pixels, you could save your second image with 8 more pixels in height and width, then crop it using cv2:
import cv2

img = cv2.imread("image.png")
# x, y: top-left corner of the crop; w, h: its width and height
crop_img = img[y:y+h, x:x+w]
cv2.imshow("cropped", crop_img)
cv2.waitKey(0)
as proposed in:
https://stackoverflow.com/a/15589825/4610938
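Alternatively, you can avoid the margin at save time by giving the figure a single axes that spans the whole canvas. A sketch, where filter_banks stands in for your 2D array:

import numpy as np
import matplotlib.pyplot as plt

filter_banks = np.random.rand(40, 400)   # placeholder for your data

fig = plt.figure(figsize=(4, 4))
ax = fig.add_axes([0, 0, 1, 1])          # axes filling the figure, no margin
ax.axis('off')
ax.imshow(filter_banks.T, aspect='auto', cmap='jet')
fig.savefig('no_margin.png')
plt.close(fig)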

Why does 16-bit to 8-bit conversion produce a striped image?

I am testing a segmentation algorithm on several VHSR satellite images, which originally come in 16-bit format, but when I convert them to 8-bit the produced images show a striped appearance.
I've been trying different Python libraries (skimage, cv2, scipy) and getting similar results.
1) The original 16-bit image is a 4-band image (NIR, B, G, R), so you need to choose the right bands (4, 3, 2) to create a true-colour RGB image. Thanks in advance. It can be downloaded from this link:
16bit image
2) I use this code to convert each pixel value from a 16-bit integer into the 8-bit range:
import numpy as np
import matplotlib.pyplot as plt
from skimage import io
from scipy.misc import bytescale  # note: removed in SciPy >= 1.2

SS = io.imread('Imag16bit.tif')
SS = bytescale(SS)   # rescale 16-bit values into the 8-bit range
SS = np.asarray(SS)
plt.imshow(SS)
This is my result from the above code:
bytescale works for me. I think the asarray step messes something up.
import cv2
from skimage import io
from scipy.misc import bytescale
image = io.imread('SkySat_16bit.tif')
cv2.imshow('Original', image)
print(image.dtype)
image = bytescale(image)
print(image.dtype)
cv2.imshow('Converted', image)
cv2.waitKey(0)
I think this is a way to do it:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image
from tifffile import imread
# Load image
im = imread('SkySat_16bit.tif')
# Extract Red, Green and Blue bands into separate 8-bit arrays
R = (im[:,:,3]/256).astype(np.uint8)
G = (im[:,:,2]/256).astype(np.uint8)
B = (im[:,:,1]/256).astype(np.uint8)
# Combine bands into RGB array
RGB = np.dstack((R,G,B))
# Save to disk
Image.fromarray(RGB).save('result.png')
You may want to adjust the contrast a bit, and check that I selected the correct bands.
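If the result looks flat, a simple per-band percentile stretch is one way to adjust the contrast. A sketch, reusing the im array from above; the 2%/98% cut-offs are an assumption:

import numpy as np

def stretch(band):
    # Clip each band to its 2nd-98th percentile range, then rescale to [0, 255]
    lo, hi = np.percentile(band, (2, 98))
    return np.clip((band.astype(np.float32) - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)

RGB = np.dstack([stretch(im[:, :, b]) for b in (3, 2, 1)])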

Displaying a grayscale Image

My aim:
Read an image into the PIL format.
Convert it to grayscale.
Plot the image using pylab.
Here is the code I'm using:
from PIL import Image
from pylab import *
import numpy as np

inputImage = r'C:\Test\Test1.jpg'   # raw string so backslashes aren't treated as escapes
##outputImage = r'C:\Test\Output\Test1.jpg'
pilImage = Image.open(inputImage)
pilImage.draft('L', (500, 500))     # fast greyscale downscale (applies to JPEG loading)
imageArray = np.asarray(pilImage)
imshow(imageArray)
##pilImage.save(outputImage)
axis('off')
show()
My Problem:
The image gets displayed as if the colours are inverted.
But I know that the image is being converted to grayscale, because when I write it to disk it appears as a grayscale image (just as I expect).
I feel that the problem is somewhere in the numpy conversion.
I've just started programming in Python for image processing.
Any tips and guidelines will also be appreciated.
You want to override the default color map:
imshow(imageArray, cmap="Greys_r")
Here's a page on plotting images and pseudocolor in matplotlib.
This produces a B&W image:
pilImage = Image.open(inputImage)
pilImage = pilImage.convert('1')  # convert to bilevel black & white
pilImage.draft('L', (500, 500))   # no effect here; draft only applies while loading a JPEG
pilImage.save('outfile.png')
From the convert method docs:
convert
im.convert(mode) => image
Returns a converted copy of an image.
When translating from a palette image, this translates pixels through the palette.
If mode is omitted, a mode is chosen so that all information in the image and the palette can be represented without a palette.
When translating a colour image to black and white, the library uses the ITU-R 601-2 luma transform:
L = R * 299/1000 + G * 587/1000 + B * 114/1000
When converting to a bilevel image (mode "1"), the source image is first converted to black and white.
Resulting values larger than 127 are then set to white, and the image is dithered.
To use other thresholds, use the point method.
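For instance, a sketch of a custom threshold via point(); the file name and the 200 cut-off are arbitrary assumptions:

from PIL import Image

img = Image.open('input.jpg').convert('L')                 # 8-bit greyscale first
bw = img.point(lambda p: 255 if p > 200 else 0, mode='1')  # threshold at 200, no dithering
bw.save('bilevel.png')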
