OpenCV vs LabVIEW greyscale (U16) images - difference in values - Python

I am trying to understand why LabVIEW shows one set of values for an image while OpenCV shows another.
I have two U16 grayscale PNG images that I am trying to combine vertically to create one continuous image. The majority of the pixels are near zero or low-valued, with the ROI having pixel values in the middle of the U16 range. In Python, this is achieved by reading the files with OpenCV, combining the images with NumPy, and then displaying the values with Matplotlib:
image_one = cv2.imread("../filename_one.png", cv2.IMREAD_UNCHANGED)
image_two = cv2.imread("../filename_two.png", cv2.IMREAD_UNCHANGED)
combined_image = numpy.concatenate((image_one, image_two), axis=0)

plt.figure(figsize=(15, 15), dpi=18)
plt.imshow(combined_image, cmap="gray", vmin=0, vmax=65535)  # Sliced to show the ROI
Dual Exposure Image
As seen above, this shows the two images as having two different dynamic ranges, resulting in different apparent exposures. To normalize the images, we can try to rescale each one so that both use the same dynamic range.
rescaled_one = ((image_one - image_one.min()) / (image_one.max() - image_one.min())) * 65535
rescaled_two = ((image_two - image_two.min()) / (image_two.max() - image_two.min())) * 65535
combined_rescaled = numpy.concatenate((rescaled_one, rescaled_two), axis=0)

plt.figure(figsize=(15, 15), dpi=18)
plt.imshow(combined_rescaled, cmap="gray", vmin=0, vmax=65535)  # Sliced to show the ROI
Rescaled Image - Dual Exposure
This still shows the same issue with the images.
In LabVIEW, to combine the images vertically, I adapted a VI that was published to stitch images horizontally:
https://forums.ni.com/t5/Example-Code/Stitch-Images-Together-in-LabVIEW-with-Vision-Development-Module/ta-p/3531092?profile.language=en
The Final VI Block Diagram looks as follows:
VI Block Diagram - Vertically Combine Images using IMAQ
The Output you see on the Front Panel:
Singular continuous Image - Front Panel
The dual exposure issue appears to have disappeared, and the image now appears as a single continuous image. This didn't make any sense to me, so I plotted the results using Plotly as follows:
fig = plotly.subplots.make_subplots(1, 1, horizontal_spacing=0.05)
fig.append_trace(go.Histogram(x=image_one.ravel(), name="cv2_top",
                              showlegend=True, nbinsx=13107), 1, 1)
fig.append_trace(go.Histogram(x=image_two.ravel(), name="cv2_bottom",
                              showlegend=True, nbinsx=13107), 1, 1)
fig.append_trace(go.Histogram(x=lv_joined[:1024, :].ravel(), name="LabView_joined_top",
                              showlegend=True, nbinsx=13107), 1, 1)  # First image
fig.append_trace(go.Histogram(x=lv_joined[1024:, :].ravel(), name="LabView_joined_bottom",
                              showlegend=True, nbinsx=13107), 1, 1)  # Second image
fig.update_layout(height=800)
fig.show()
Histogram - Python vs LabVIEW respective halves - Focus on low pixel values
Here it shows that the second image's pixel values have been "compressed" to fit the same distribution as the first image. I don't understand why this is the case. Have I configured something wrong in LabVIEW, or have I not considered something when reading in a file with OpenCV?
Original Images:

Please refer to the answer posted here: https://forums.ni.com/t5/LabVIEW/OpenCV-vs-Labview-Images-Greyscale-U16-Difference-in-values/td-p/4172150/highlight/false

Related

Matplotlib overlaying multiple images with different colors

I have a list (image_list) with images in the form of numpy arrays. I want to overlay all of these images into one single image, and I want each image to have its own color.
Attached is a screenshot showing some of the individual images; I want each of the "curves" (the gray stuff) to be a different color, not the actual background itself. This way, when I overlay all the images, the colors will help me identify each specific image.
individual images
Attached is a screenshot of what I tried. Additionally, to clarify, by "overlay" I mean I want all the images in ONE single image, not multiple plots. Thus, I don't think changing cmap will work, because that also changes the background color; I only want the color of the actual "data"/image to change.
Any help would be appreciated.
what i tried
If your images are just binary data (zeros and ones), then you could change the data in each subsequent image by one count. Then you plot them all in the same image and matplotlib will take care of the coloring. You could also merge them into a single array first, with the same result:
1st image: zeros and ones
2nd image: zeros and 2s
3rd image: zeros and 3s
....
in pseudo code:
import numpy as np
import matplotlib.pyplot as plt

new_image_list = []
for idx, img in enumerate(img_list):
    img = np.where(img > threshold, 1, 0)  # binarize
    img *= (idx + 1)                       # give each image its own count
    new_image_list.append(img)

for img in new_image_list:
    plt.imshow(img)
plt.show()
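A minimal sketch of the merge-into-a-single-array variant mentioned above (assuming equally shaped binary input images, with img_list and threshold as above):

import numpy as np
import matplotlib.pyplot as plt

# One integer label per image; pixels belonging to no curve stay 0,
# so the background keeps a single color in the colormap.
merged = np.zeros(img_list[0].shape, dtype=int)
for idx, img in enumerate(img_list):
    merged[img > threshold] = idx + 1

plt.imshow(merged)  # matplotlib assigns one color per label value
plt.colorbar()
plt.show()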

When I apply a median filter to an image, it turns purple. Why?

I have an image. I added salt & pepper noise to it. After that, I applied a 2D median filter to remove the noise. But after this process, the image turned purple.
And here is my code:
M = 3
N = 3
modifyA = np.pad(image, [(math.floor(M/2), math.floor(N/2))])
B = np.zeros([(image.shape[0]), (image.shape[1])])
med_indx = round((M*N)/2)  # MEDIAN INDEX
for i in range((modifyA.shape[0]) - (M-1) - 1):
    for j in range((modifyA.shape[1]) - (N-1) - 1):
        temp = modifyA[i:i+(M-1), j:j+(N-1)]
        # RED, GREEN AND BLUE CHANNELS ARE TRAVERSED SEPARATELY
        for k in range(2):
            tmp = temp[:, :, k]
            B[i, j] = np.median(tmp[:])
B = B.astype(np.uint8)
imgplot = plt.imshow(B)
plt.show()
Where could the error be?
As @gre_gor wrote in their comment, imshow is using a pseudocolor mapping. More specifically, it uses the common colormap viridis by default if the image is not RGB(A).
Take a look at the documentation: https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html
To display a grayscale version of your image, refer to this part of the docs:
The input may either be actual RGB(A) data, or 2D scalar data, which will be rendered as a pseudocolor image. For displaying a grayscale image set up the colormapping using the parameters cmap='gray', vmin=0, vmax=255.
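Applied to the code in the question, that would look like the following minimal sketch (B is the uint8 result array from the question):

import matplotlib.pyplot as plt

# Render the single-channel array as true grayscale instead of the
# default viridis pseudocolor mapping.
plt.imshow(B, cmap='gray', vmin=0, vmax=255)
plt.show()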

Convert gray pixel_array from DICOM to RGB image

I'm reading DICOM gray image file as
gray = dicom.dcmread(file).pixel_array
There I've got an (x,y) shape, but I need an RGB (x,y,3) shape.
I'm trying to convert it using OpenCV:
img = cv2.cvtColor(gray, cv2.COLOR_GRAY2RGB)
And for testing, I'm writing it to a file: cv2.imwrite('dcm.png', img)
I've got an extremely dark image on output, which is wrong. What is the correct way to convert a pydicom image to RGB?
To answer your question, you need to provide a bit more info and be a bit clearer.
First, what are you trying to do? Are you trying to only get an (x,y,3) array in memory, or are you trying to convert the DICOM file to a .png file? They are very different things.
Secondly, what modality is your DICOM image?
It's likely (unless it's ultrasound or perhaps nuc med) a 16-bit greyscale image, meaning the data is 16-bit, meaning your gray array above is 16-bit data.
So the first thing to understand is window levelling and how to display a 16-bit image in 8 bits. Have a look here: http://www.upstate.edu/radiology/education/rsna/intro/display.php.
If it's a 16-bit image and you want to view it as a greyscale image in RGB format, then you need to know what window/level you're using or need, and adjust appropriately before saving.
Thirdly, as lenik mentioned above, you need to apply the DICOM slope/intercept values to your pixel data prior to using it.
If your problem is just making a new array with an extra dimension for RGB (so sizes (r,c) to (r,c,3)), then it's easy:
# orig is your 2D array read in with dcmread:
r, c = orig.shape
new = np.empty((r, c, 3), dtype=orig.dtype)
new[:, :, 2] = new[:, :, 1] = new[:, :, 0] = orig

# or with broadcasting
new[:, :, :] = orig[:, :, np.newaxis]
That will give you the 3rd dimension. BUT the values will still all be 16-bit, not 8-bit as needed if you want it to be RGB. (Assuming the image you read with dcmread is CT, MR or an equivalent 16-bit DICOM, the dtype is likely uint16.)
If you want it to be RGB, then you need to convert the values to 8-bit from 16-bit. For that you'll need to decide on a window/level and apply it to select the 8-bit values from the full 16-bit data range.
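A minimal sketch of such a window/level conversion (the level and window defaults here are placeholders you would choose for your modality); the uint8 result can then be stacked into the (r,c,3) array as above:

import numpy as np

def window_to_uint8(img16, level=40, window=400):
    # Map [level - window/2, level + window/2] to [0, 255],
    # clipping everything outside the window.
    lo = level - window / 2.0
    hi = level + window / 2.0
    scaled = (img16.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)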
Likely your problem above - "I've got extremely dark image on output which is wrong" - is actually correct behaviour: it's dark because the window/level cv uses by default makes it 'look' dark, or it's correct but you didn't apply the slope/intercept.
If what you want to do is convert the DICOM to png (or jpg), then you should probably use PIL or matplotlib rather than cv. Both of those offer easy ways to save a 16-bit 2D array (which is what your 'gray' is in your code above), and both allow you to specify window and level when saving to png or jpg. CV is complete overkill (meaning much bigger/slower to load, and a much higher learning curve).
Some pseudocode using matplotlib. You'll need to adjust the vmin/vmax values - the ones here would be approximately OK for a CT image.
import matplotlib.pyplot as plt
from pydicom import dcmread

df = dcmread(file)
slope = float(df.RescaleSlope)
intercept = float(df.RescaleIntercept)
df_data = intercept + df.pixel_array * slope

# tell matplotlib to 'plot' the image, with 'gray' colormap and set the
# min/max values (ie 'black' and 'white') to correspond to
# values of -100 and 300 in your array
plt.imshow(df_data, cmap='gray', vmin=-100, vmax=300)

# save as a png file
plt.savefig('png-copy.png')
That will save a PNG version, but with the axes drawn as well. To save just the image, without axes and without whitespace, use this:
inches = (3, 3)
dpi = 150
fig, ax = plt.subplots(figsize=inches, dpi=dpi)
fig.subplots_adjust(left=0, right=1, top=1, bottom=0, wspace=0, hspace=0)
ax.imshow(df_data, cmap='gray', vmin=-100, vmax=300)
fig.savefig('copy-without-whitespace.png')
The full tutorial on reading DICOM files is here: https://www.kaggle.com/gzuidhof/full-preprocessing-tutorial
Basically, you have to extract the slope and intercept parameters from the DICOM file and do the math for every pixel: hu = pixel_value * slope + intercept -- all of this is explained in the tutorial with code samples and pictures.
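For reference, a minimal sketch of that conversion with pydicom (the file name is a placeholder):

from pydicom import dcmread

ds = dcmread('scan.dcm')  # placeholder path
hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)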

Comparing and plotting regions of the same color over a dataset of a few hundred images

A chem student asked me for help with plotting an image segmentation:
A stationary camera takes a picture of the experimental setup every second over a period of a few minutes, yielding around 300 images.
The relevant parts in the setup are two adjacent layers of differently-colored foams observed from the side, a 2-color sandwich shrinking from both sides, basically, except one of the foams evaporates a bit faster.
I'd like to segment each of the images in a way that would let me plot both foam regions' "width" against time.
Here is a "diagram" :)
I want to go from here --> To here
Ideally, given a few hundred such shots, in which only the widths change, I'd get back an array of scalars that I can plot. (It's going to look like a harmonic series on either side of the x-axis.)
I have a bit of Python and MATLAB experience, but have never used OpenCV or the Image Processing Toolbox in MATLAB, and have actually never dealt with any computer vision in general. Could you guys throw me a roadmap of which packages/functions to use or which steps one should take, and I'll take it from there?
I'm not sure how to address these things:
- selecting at which slice along the length of the foams the algorithm measures the width (i.e. if the foams are a bit uneven), although this can be ignored;
- which library to use to segment regions of the image based on their color (some k-means shenanigans, probably), and how to selectively store the spatial parameters of the resulting segments;
- how to iterate the above over a number of files.
Thank you kindly in advance!
Assuming your intensities will be different after converting to grayscale (if not, just convert to another color space like HSV or LAB, and then use one of the components):
img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
First, threshold your grayscale input into a few bands:
ret, thresh1 = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
ret, thresh2 = cv2.threshold(img, 27, 255, cv2.THRESH_BINARY_INV)
ret, thresh3 = cv2.threshold(img, 77, 255, cv2.THRESH_TRUNC)
ret, thresh4 = cv2.threshold(img, 97, 255, cv2.THRESH_TOZERO)
ret, thresh5 = cv2.threshold(img, 227, 255, cv2.THRESH_TOZERO_INV)
The values should be tuned against your actual data; here I'm just giving an example.
Clean up the segmented image using a median filter with a radius of 9 or larger, since I do expect some noise. You can also use an ROI here to help remove part of the noise. But personally I'm lazy; I just wrote the program to handle all cases and angles.
thresholded_image_smoothed = cv2.medianBlur(thresholded_image, 9)
Each band will correspond to one color (layer). Now you should have N segmented images from one source, where N is the number of layers you wish to track.
Second, use the OpenCV function boundingRect to find the location and width/height of each layer, i.e. run boundingRect on each thresholded_image_smoothed.
C++:    Rect boundingRect(InputArray points)
Python: cv2.boundingRect(points) → retval
Last, the rect has x, y, height, and width properties. You can use a simple sort to order the layers from top to bottom based on the rect attribute x. Run through the whole video to obtain the (layer id, height) vs. time graph.
Rect API
Public Attributes:
_Tp height  // this is what you are looking for
_Tp width
_Tp x       // this tells you the position of the band
_Tp y
By plotting the corresponding heights (|AB| or |CD|) over time, you can obtain the graph you need.
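A minimal sketch of that per-frame measurement, under the assumptions above (the threshold values are placeholders to tune against your data):

import cv2

def layer_heights(frame_bgr):
    # Grayscale, then one binary band per layer (placeholder thresholds).
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, band1 = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    _, band2 = cv2.threshold(gray, 27, 255, cv2.THRESH_BINARY_INV)
    boxes = []
    for band in (band1, band2):
        band = cv2.medianBlur(band, 9)       # clean up noise
        pts = cv2.findNonZero(band)          # coordinates of the band's pixels
        x, y, w, h = cv2.boundingRect(pts)   # box around those pixels
        boxes.append((y, h))
    # sort by position so each layer keeps a stable id across frames
    # (use x instead of y if your layers are separated horizontally)
    return [h for _, h in sorted(boxes)]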
The more correct way is to use a Kalman filter to track the position and height, as I would expect some bubbles to occur and interfere with the height of the layers.
To be honest, I didn't expect a chem student to be good at this. Haha, good luck!
If anything goes wrong, you can find me here, or email me if I'm not watching Stack Overflow.
You can select a region of interest straight down the middle of the foams, a few pixels wide. If you stack these regions for each image, it will show the shrinkage over time.
If, for example, you use a 3-pixel-wide ROI, the result of 300 images will be a 900-pixel-wide image, where the left is the start of the experiment and the right is the end. The following image can help you understand:
Though I have not fully tested it, this code should work. Note that there must only be images in the folder you reference.
import cv2
import numpy as np
import os

# path to folder that holds the images
path = '.'

# dimensions of roi
x = 0
y = 0
w = 3
h = 100

# store references to all images
all_images = os.listdir(path)
# sort images
all_images.sort()

# create empty result array
result = np.empty([h, 0, 3], dtype=np.uint8)

for image in all_images:
    # load image
    img = cv2.imread(path + '/' + image)
    # get the region of interest
    roi = img[y:y+h, x:x+w]
    # add the roi to previous results
    result = np.hstack((result, roi))

# optional: save result as image
# cv2.imwrite('result.png', result)

# display result - can also plot with matplotlib
cv2.imshow('Result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Update after question edit:
If the foams have different colors, you can easily separate them by color by converting the image to HSV and using inRange (example). This creates a mask (a 2D array with values from 0-255, one for each pixel) that you can use to calculate the average height and to extract the parameters and area of each region.
You can find a script to help you find the HSV colors for separation on this GitHub.
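A minimal sketch of that separation (the HSV bounds and file name are placeholders you would tune with the script above):

import cv2
import numpy as np

img = cv2.imread('frame.png')                 # placeholder filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

lower = np.array([100, 50, 50])               # placeholder bounds for one foam
upper = np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# average height of the foam band: count mask pixels per column, then average
heights = (mask > 0).sum(axis=0)
print(heights.mean())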

What's the fastest way to increase color image contrast with OpenCV in python (cv2)?

I'm using OpenCV to process some images, and one of the first steps I need to perform is increasing the image contrast on a color image. The fastest method I've found so far uses this code (where np is the numpy import) to multiply and add as suggested in the original C-based cv1 docs:
if self.array_alpha is None:
    self.array_alpha = np.array([1.25])
    self.array_beta = np.array([-100.0])

# add a beta value to every pixel
cv2.add(new_img, self.array_beta, new_img)

# multiply every pixel value by alpha
cv2.multiply(new_img, self.array_alpha, new_img)
Is there a faster way to do this in Python? I've tried using numpy's scalar multiply instead, but the performance is actually worse. I also tried using cv2.convertScaleAbs (the OpenCV docs suggested using convertTo, but cv2 seems to lack an interface to this function) but again the performance was worse in testing.
Simple arithmetic in numpy arrays is the fastest, as Abid Rahaman K commented.
Use this image for example: http://i.imgur.com/Yjo276D.png
Here is a bit of image processing that resembles brightness/contrast manipulation:
'''
Simple and fast image transforms to mimic:
- brightness
- contrast
- erosion
- dilation
'''
import cv2
from pylab import array, plot, show, axis, arange, figure, uint8
# Image data
image = cv2.imread('imgur.png',0) # load as 1-channel 8bit grayscale
cv2.imshow('image',image)
maxIntensity = 255.0 # depends on dtype of image data
x = arange(maxIntensity)
# Parameters for manipulating image data
phi = 1
theta = 1
# Increase intensity such that
# dark pixels become much brighter,
# bright pixels become slightly bright
newImage0 = (maxIntensity/phi)*(image/(maxIntensity/theta))**0.5
newImage0 = array(newImage0,dtype=uint8)
cv2.imshow('newImage0',newImage0)
cv2.imwrite('newImage0.jpg',newImage0)
y = (maxIntensity/phi)*(x/(maxIntensity/theta))**0.5
# Decrease intensity such that
# dark pixels become much darker,
# bright pixels become slightly dark
newImage1 = (maxIntensity/phi)*(image/(maxIntensity/theta))**2
newImage1 = array(newImage1,dtype=uint8)
cv2.imshow('newImage1',newImage1)
z = (maxIntensity/phi)*(x/(maxIntensity/theta))**2
# Plot the figures
figure()
plot(x,y,'r-') # Increased brightness
plot(x,x,'k:') # Original image
plot(x,z, 'b-') # Decreased brightness
#axis('off')
axis('tight')
show()
# Close figure window and click on other window
# Then press any keyboard key to close all windows
closeWindow = -1
while closeWindow < 0:
    closeWindow = cv2.waitKey(1)
cv2.destroyAllWindows()
Original image in grayscale:
Brightened image that appears to be dilated:
Darkened image that appears to be eroded, sharpened, with better contrast:
How the pixel intensities are being transformed:
If you play with the values of phi and theta you can get really interesting outcomes. You can also implement this trick for multichannel image data.
--- EDIT ---
Have a look at the concepts of 'levels' and 'curves' in this YouTube video showing image editing in Photoshop. An equation for a linear transform creates the same amount, i.e. 'level', of change on every pixel. If you write an equation which can discriminate between types of pixels (e.g. those which are already of a certain value), then you can change the pixels based on the 'curve' described by that equation.
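As an illustration of such a 'curve' (not from the original answer; the gain and midpoint values are placeholders), an S-shaped mapping applied through a lookup table boosts mid-tone contrast more than shadows or highlights:

import cv2
import numpy as np

# Build an S-shaped (sigmoid) tone curve over the 8-bit range.
x = np.arange(256, dtype=np.float64)
gain, midpoint = 10.0, 0.5                  # placeholder curve parameters
curve = 255.0 / (1.0 + np.exp(-gain * (x / 255.0 - midpoint)))
lut = curve.astype(np.uint8)

image = cv2.imread('imgur.png', 0)          # same grayscale test image as above
contrasted = cv2.LUT(image, lut)            # apply the curve to every pixel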
Try this code:
import cv2
img = cv2.imread('sunset.jpg', 1)
cv2.imshow("Original image",img)
# CLAHE (Contrast Limited Adaptive Histogram Equalization)
clahe = cv2.createCLAHE(clipLimit=3., tileGridSize=(8,8))
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB) # convert from BGR to LAB color space
l, a, b = cv2.split(lab) # split on 3 different channels
l2 = clahe.apply(l) # apply CLAHE to the L-channel
lab = cv2.merge((l2,a,b)) # merge channels
img2 = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR) # convert from LAB to BGR
cv2.imshow('Increased contrast', img2)
#cv2.imwrite('sunset_modified.jpg', img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
Sunset before:
Sunset after increased contrast:
Use the cv2.addWeighted function. It will be faster than any of the other methods presented thus far. It's designed to work on two images:
dst = cv.addWeighted( src1, alpha, src2, beta, gamma[, dst[, dtype]] )
But if you use the same image twice AND you set beta to zero, you can get the effect you want:
dst = cv.addWeighted( src1, alpha, src1, 0, gamma)
The big advantage to using this function is that you will not have to worry about what happens when values go below 0 or above 255. In numpy, you have to figure out how to do all of the clipping yourself. Using the OpenCV function, it does all of the clipping for you and it's fast.
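A minimal sketch of that trick (the alpha/gamma values and file names are placeholders):

import cv2

img = cv2.imread('input.png')               # placeholder filename
alpha, gamma = 1.25, -100.0                 # contrast gain and brightness shift
# computes img*alpha + img*0 + gamma, saturating to [0, 255] automatically
contrasted = cv2.addWeighted(img, alpha, img, 0, gamma)
cv2.imwrite('output.png', contrasted)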
