Creating an RGB composite SAR image - python

I am quite new to Python programming and I need your help. I always research my problem before posting.
I have a dual-polarization SAR image (2^16 gray-level values) in TIFF format. The TIFF contains two bands: the first band (HH_band) is the horizontal (co-polarized) channel and the second (HV_band) is the cross-polarized channel. I want to create an RGB composite image. To do this, I need to layer-stack the two channels as follows:
get the first band (HH_band)
get the second band (HV_band)
get the ratio (HH_band/HV_band)
I know that many people have posted about something similar to this (an RGB composite image in natural colors). I tried to use cv2.merge and cv2.split from the OpenCV library, but it didn't work. I thought it would be relatively easy to create a SAR RGB image in Python (as I have seen a few posts about creating RGB images from LANDSAT), but I got stuck in my case.
I would much appreciate any help.

Here is a possible way to accomplish the band composition programmatically:
import numpy as np
from skimage import io  # io.imread below comes from scikit-image
tif = io.imread('dual_polarization_image.tif')
band = {'HH': 0, 'HV': 1}
r = tif[:, :, band['HH']]
g = tif[:, :, band['HV']]
hh = r.astype(np.float64)
hv = g.astype(np.float64)
b = np.divide(hh, hv, out=np.zeros_like(hh), where=hv!=0)
rgb = np.dstack((r, g, b.astype(np.uint16)))
Remarks:
It would be possible to deal with different arrangements of the bands in the TIFF image by simply redefining the values of the dictionary band.
Prior to calculating the band ratio, it is necessary to convert the data to np.float64.
I have taken advantage of the where option for universal functions to avoid zero division warnings.
In order for the composition to be possible, the band ratio (blue channel) has to be converted back to the same type (i.e. np.uint16) as the original bands (red and green channels).
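As a hedged follow-up (not part of the original answer): the raw 16-bit composite is often too dark to inspect directly, so a simple per-band rescaling to 8 bit can help when saving or displaying it, using the r, g and b arrays from the snippet above; the percentile clipping is only an example choice:
def stretch_to_uint8(band, low=2, high=98):
    # Clip to the given percentiles and rescale to 0-255 for display.
    lo, hi = np.percentile(band, (low, high))
    scaled = np.clip((band.astype(np.float64) - lo) / (hi - lo), 0, 1)
    return (scaled * 255).astype(np.uint8)
rgb_display = np.dstack([stretch_to_uint8(c) for c in (r, g, b)])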

It's difficult to test without sample images, but you should be able to do this simply at the command line with ImageMagick, which is included in most Linux distributions and is available for OSX and Windows.
The command will look like:
convert HH.tif HV.tif \( -clone 0 -clone 1 -compose divide -composite \) \
-combine -auto-level result.png

Related

How does cv2.merge((r,g,b)) work?

I am trying to apply a linear filter to an image with RGB colors. I found that one way to do this is by splitting the image into separate color layers and then merging them back.
i.e.:
cv2.split(img)
Sobel(b...)
Sobel(g...)
Sobel(r...)
cv2.merge((b,g,r))
I want to find out how cv2.merge((b,g,r)) works and how the final image will be constructed.
cv2.merge takes single channel images and combines them to make a multi-channel image. You've run the Sobel edge detection algorithm on each channel on its own, and you are then combining the results into a final output image. The combined image may not make sense visually at first, but what you would be displaying are the edge detection results of all three planes combined into a single image.
Ideally, hues of red will tell you the strength of the edge detection in the red channel, hues of green giving the strength of the detection for the green channel, and finally blue hues for the strength of detection in the blue.
Sometimes this is a good debugging tool so that you can semantically see all of the edge information for each channel in a single image. However, this will most likely be very hard to interpret for very highly complicated images with lots of texture and activity.
What is more usually done is to actually do an edge detection using a colour edge detection algorithm, or convert the image to grayscale and do the detection on that image instead.
As an example of the former, one can decompose the RGB image into HSV and use the colour information in this space to do a better edge detection. See this answer by Micka: OpenCV Edge/Border detection based on color.
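As an illustrative sketch of the per-channel Sobel plus merge described above (not from the original answer; the filenames are placeholders):
import cv2
import numpy as np
img = cv2.imread('input.png')            # hypothetical filename, loaded in BGR order
b, g, r = cv2.split(img)
def sobel_mag(channel):
    # Per-channel gradient magnitude, scaled back to 8 bit for display
    gx = cv2.Sobel(channel, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(channel, cv2.CV_64F, 0, 1, ksize=3)
    return cv2.convertScaleAbs(cv2.magnitude(gx, gy))
edges = cv2.merge((sobel_mag(b), sobel_mag(g), sobel_mag(r)))
cv2.imwrite('edges.png', edges)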
This is my understanding. In OpenCV the function split() takes in a packed image (a multi-channel array) and splits it into several separate single-channel arrays.
Within an image, each pixel occupies a position in the array, and each pixel has its own small array holding its channel values (r, g and b), hence the term multi-channel. This setup allows any type of image, such as BGR, RGB, or HSV, to be split using the same function.
As Example (pretend these are separate examples so no variables are being overwritten)
b,g,r = cv2.split(bgrImage)
r,g,b = cv2.split(rgbImage)
h,s,v = cv2.split(hsvImage)
Take the b, g, r arrays for example. Each is a single-channel array containing one channel of the split image.
This means the image is being split out into three separate arrays:
rgbImage[0] = [234,28,19]
r[0] = 234
g[0] = 28
b[0] = 19
rgbImage[41] = [119,240,45]
r[41] = 119
g[41] = 240
b[41] = 45
Merge does the reverse by taking several single channel arrays and merging them together:
newRGBImage = cv2.merge((r,g,b))
The order in which the separated channels are passed in becomes important with this function.
Pseudo code:
cv2.merge((r,g,b)) != cv2.merge((b,g,r))
As an aside: cv2.split() is an expensive operation, and using numpy indexing is much more efficient.
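For illustration, a small sketch (my own, with a placeholder filename) of the equivalent numpy slicing:
import cv2
img = cv2.imread('input.png')    # hypothetical filename, loaded in BGR order
b = img[:, :, 0]                 # numpy slices are views, no copies are made
g = img[:, :, 1]
r = img[:, :, 2]
merged = cv2.merge((b, g, r))    # reassemble the channels into one image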
For more information, check out the OpenCV Python tutorials.

Comparing and plotting regions of the same color over a dataset of a few hundred images

A chemistry student asked me for help with plotting image segmentation:
A stationary camera takes a picture of the experimental setup every second over a period of a few minutes, yielding roughly 300 images.
The relevant parts in the setup are two adjacent layers of differently-colored foams observed from the side; basically a 2-color sandwich shrinking from both sides, except that one of the foams evaporates a bit faster.
I'd like to segment each of the images in the way that would let me plot both foam regions' "width" against time.
Here is a "diagram" :)
I want to go from here --> To here
Ideally, given a few hundred of such shots, in which only the widths change, I get an array of scalars back that I can plot. (Going to look like a harmonic series on either side of the x-axis)
I have a bit of python and matlab experience, but have never used OpenCV or Image Processing toolbox in matlab, or actually never dealt with any computer vision in general. Could you guys throw like a roadmap of what packages/functions to use or steps one should take and i'll take it from there?
I'm not sure how to address these things:
-selecting at which slice along the length of the foam the algorithm measures the width (i.e. if the foams are a bit uneven), although this can be ignored.
-which library to use to segment regions of the image based on their color (some k-means shenanigans probably), and how to selectively store the spatial parameters of the resulting segments?
-how to iterate the above over a number of files.
Thank you kindly in advance!
Assume the layers have different intensities after converting to grayscale (if not, just convert to another color space like HSV or LAB and use one of its components):
img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
First, threshold your grayscaled input into a few bands:
ret,thresh1 = cv2.threshold(img,128,255,cv2.THRESH_BINARY)
ret,thresh2 = cv2.threshold(img,27,255,cv2.THRESH_BINARY_INV)
ret,thresh3 = cv2.threshold(img,77,255,cv2.THRESH_TRUNC)
ret,thresh4 = cv2.threshold(img,97,255,cv2.THRESH_TOZERO)
ret,thresh5 = cv2.threshold(img,227,255,cv2.THRESH_TOZERO_INV)
The threshold values should be tuned to your actual data; the numbers here are just an example.
Clean up the segmented image using a median filter with a kernel size of 9 or larger, since I do expect some noise. You could also use an ROI here to help remove part of the noise, but personally I'm lazy and just wrote the program to handle all cases and angles:
thresholded_images_aftersmoothing = cv2.medianBlur(thresholded_images, 9)
Each band will correspond to one color (layer). You should now have N segmented images from one source, where N is the number of layers you wish to track.
Second, use the OpenCV function boundingRect to find the location and width/height of each layer, i.e. of each thresholded_images_aftersmoothing. E.g. run boundingRect on each sub-segmented image.
C++: Rect boundingRect(InputArray points)
Python: cv2.boundingRect(points) → retval
Last, the rect has x, y, height and width properties. You can use a simple sort to order the layers from top to bottom based on the rect attribute x. Run through the whole video to obtain the x (layer id) and height vs. time graph.
Rect API
Public Attributes
_Tp height  // this is what you are looking for
_Tp width
_Tp x       // this tells you the position of the band
_Tp y
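A minimal sketch of this bounding-rect step (my own illustration; masks_per_frame is a hypothetical list holding one thresholded, smoothed mask of a single layer per frame):
import cv2
heights_over_time = []                      # one entry per frame
for mask in masks_per_frame:                # hypothetical list of binary masks
    pts = cv2.findNonZero(mask)             # coordinates of all non-zero pixels
    if pts is None:                         # the layer has fully disappeared
        heights_over_time.append(0)
        continue
    x, y, w, h = cv2.boundingRect(pts)      # bounding box of the layer
    heights_over_time.append(h)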
By plotting the corresponding heights (|AB| or |CD|) over time, you can obtain the graph you need.
A more correct way is to use a Kalman filter to track the position and height, as I would expect some bubbles to occur and interfere with the measured height of the layers.
To be honest, I didn't expect a chem student to be good at this. Haha, good luck!
If anything goes wrong you can find me here, or email me if I'm not watching Stack Overflow.
You can select a region of interest straight down the middle of the foams, a few pixels wide. If you stack these regions for each image it will show the shrink over time.
If for example you use 3 pixel width for the roi, the result of 300 images will be a 900 pixel wide image, where the left is the start of the experiment and the right is the end. The following image can help you understand:
Though I have not fully tested it, this code should work. Note that there must only be images in the folder you reference.
import cv2
import numpy as np
import os
# path to folder that holds the images
path = '.'
# dimensions of roi
x = 0
y = 0
w = 3
h = 100
# store references to all images
all_images = os.listdir(path)
# sort images
all_images.sort()
# create empty result array
result = np.empty([h, 0, 3], dtype=np.uint8)
for image in all_images:
    # load image
    img = cv2.imread(path + '/' + image)
    # get the region of interest
    roi = img[y:y+h, x:x+w]
    # add the roi to previous results
    result = np.hstack((result, roi))
# optional: save result as image
# cv2.imwrite('result.png', result)
# display result - can also plot with matplotlib
cv2.imshow('Result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Update after question edit:
If the foams have different colors, you can easily separate them by color by converting the image to HSV and using inRange (example). This creates a mask (a 2D array with a value from 0-255 for each pixel) that you can use to calculate the average height of each foam and extract its parameters and area.
You can find a script to help you find the HSV color ranges for separation on this GitHub.
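A small sketch of that color-separation idea (my own illustration; the filename and HSV bounds are placeholders you would tune to the actual foam colors):
import cv2
import numpy as np
img = cv2.imread('frame_0001.png')            # hypothetical frame filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower = np.array([35, 50, 50])                # placeholder lower HSV bound
upper = np.array([85, 255, 255])              # placeholder upper HSV bound
mask = cv2.inRange(hsv, lower, upper)         # 255 where the foam color matches
width_per_row = (mask > 0).sum(axis=1)        # foam width in pixels for each image row
mean_width = width_per_row[width_per_row > 0].mean()  # average over rows where foam appears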

remove pixel annotations in dicom image

I am analyzing medical images. All images have a marker with the position. It looks like this
It is the "TRH RMLO" annotation in this image, but it can be different in other images. Also the size varies. The image is cropped but you see that the tissue is starting on the right side.
I found that the presence of these markers distort my analysis.
How can I remove them?
I load the image in python like this
import dicom
import numpy as np
img = dicom.read_file('my_image.dcm')
img_array = img.pixel_array
The image is then a numpy array. The white text is always surrounded by a large black area (black has value zero). The marker is in a different position in each image.
How can I remove the white text without hurting the tissue data.
UPDATE
added a second image
UPDATE2:
Here are two of the original DICOM files. All personal information has been removed. (edit: removed)
Looking at the actual pixel values of the image you supplied, you can see that the marker is almost (99.99%) pure white and this doesn't occur elsewhere in the image so you can isolate it with a simple 99.99% threshold.
I prefer ImageMagick at the command-line, so I would do this:
convert sample.dcm -threshold 99.99% -negate mask.png
convert sample.dcm mask.png -compose darken -composite result.jpg
Of course, if the sample image is not representative, you may have to work harder. Let's look at that...
If the simple threshold doesn't work for your images, I would look at "Hit and Miss Morphology". Basically, you threshold your image to pure black and white - at around 90% say, and then you look for specific shapes, such as the corner markers on the label. So, if we want to look for the top-left corner of a white rectangle on a black background, and we use 0 to mean "this pixel must be black", 1 to mean "this pixel must be white" and - to mean "we don't care", we would use this pattern:
0 0 0 0 0
0 1 1 1 1
0 1 - - -
0 1 - - -
0 1 - - -
Hopefully you can see the top left corner of a white rectangle there. That would be like this in the Terminal:
convert sample.dcm -threshold 90% \
-morphology HMT '5x5:0,0,0,0,0 0,1,1,1,1 0,1,-,-,- 0,1,-,-,- 0,1,-,-,-' result.png
Now we also want to look for top-right, bottom-left and bottom-right corners, so we need to rotate the pattern, which ImageMagick handily does when you add the > flag:
convert sample.dcm -threshold 90% \
-morphology HMT '5x5>:0,0,0,0,0 0,1,1,1,1 0,1,-,-,- 0,1,-,-,- 0,1,-,-,-' result.png
Hopefully you can see dots demarcating the corners of the logo now, so we could ask ImageMagick to trim the image of all extraneous black and just leave the white dots and then tell us the bounding box:
convert sample.dcm -threshold 90% \
-morphology HMT '5x5>:0,0,0,0,0 0,1,1,1,1 0,1,-,-,- 0,1,-,-,- 0,1,-,-,-' -format %# info:
308x198+1822+427
So, if I now draw a red box around those coordinates, you can see where the label has been detected - of course in practice I would draw a black box to cover it but I am explaining the idea:
convert sample.dcm -fill "rgba(255,0,0,0.5)" -draw "rectangle 1822,427 2130,625" result.png
If you want a script to do that automagically, I would use something like this, saving it as HideMarker:
#!/bin/bash
input="$1"
output="$2"
# Find corners of overlaid marker using Hit and Miss Morphology, then get crop box
IFS="x+" read w h x1 y1 < <(convert "$input" -threshold 90% -morphology HMT '5x5>:0,0,0,0,0 0,1,1,1,1 0,1,-,-,- 0,1,-,-,- 0,1,-,-,-' -format %# info:)
# Calculate bottom-right corner from top-left and dimensions
((x1=x1-1))
((y1=y1-1))
((x2=x1+w+1))
((y2=y1+h+1))
convert "$input" -fill black -draw "rectangle $x1,$y1 $x2,$y2" "$output"
Then you would do this to make it executable:
chmod +x HideMarker
And run it like this:
./HideMarker someImage.dcm result.png
I have another idea. This solution uses OpenCV in Python. It is a rather simple solution.
First, obtain the binary threshold of the image.
ret,th = cv2.threshold(img,2,255, 0)
Perform morphological dilation (the kernel below is an example choice):
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))  # example kernel size
dilate = cv2.morphologyEx(th, cv2.MORPH_DILATE, kernel, iterations=3)
To join the gaps, I then used median filtering:
median = cv2.medianBlur(dilate, 9)
Now you can use contour properties to eliminate the smallest contour (the annotation) and retain the other one containing the tissue.
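A hedged sketch of that contour step (my own illustration, continuing from the median mask above and the img_array loaded from the DICOM file):
import cv2
import numpy as np
# OpenCV 4.x return signature for findContours
contours, _ = cv2.findContours(median, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)           # the tissue is the biggest blob
tissue_mask = np.zeros(median.shape, dtype=np.uint8)
cv2.drawContours(tissue_mask, [largest], -1, 255, thickness=-1)
cleaned = cv2.bitwise_and(img_array, img_array, mask=tissue_mask)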
It also works for the second image:
If these annotations are in the DICOM file there are a couple ways they could be stored (see https://stackoverflow.com/a/4857782/1901261). The currently supported method can be cleaned off by simply removing the 60xx group attributes from the files.
For the deprecated method (which is still commonly used) you can clear out the unused high bit annotations manually without messing up the other image data as well. Something like:
int position = object.getInt( Tag.OverlayBitPosition, 0 );
if( position == 0 ) return;
int bit = 1 << position;
int[] pixels = object.getInts( Tag.PixelData );
int count = 0;
for( int pix : pixels )
{
int overlay = pix & bit;
pixels[ count++ ] = pix - overlay;
}
object.putInts( Tag.PixelData, VR.OW, pixels );
If these are truly burned into the image data, you're probably stuck using one of the other recommendations here.
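For the first case (overlays stored in the 60xx groups), a minimal sketch in Python using the pydicom package; the filenames are placeholders:
import pydicom                                   # modern successor of the old `dicom` module
ds = pydicom.dcmread('my_image.dcm')             # hypothetical input filename
# Overlay planes live in the repeating groups 0x6000-0x60FF; delete them all.
overlay_tags = [t for t in ds.keys() if (t.group & 0xFF00) == 0x6000]
for t in overlay_tags:
    del ds[t]
ds.save_as('my_image_no_overlays.dcm')           # hypothetical output filename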
The good thing is that these watermarks are probably in an isolated, totally black area, which makes it easier (although it's questionable whether removing them complies with the indicated usage; license stuff).
Without being an expert, here is one idea. It might be a sketch of some very powerful approach tailored to this problem, but you have to decide whether the implementation complexity and algorithmic complexity (very dependent on image statistics) are worth it:
Basic idea
Detect the semi-cross like borders (4)
Calculate the defined rectangle from these
Black-out this rectangle
Steps
0
Binarize
1
Use some gradient-based edge-detector to get all the horizontal edges
There may be multiple; you can try to give min-length (maybe some morphology needed to connect pixels which are not connected based on noise in source or algorithm)
2
Use some gradient-based edge-detector to get all the vertical edges
Like the above, but a different orientation
3
Do some connected-component calculation to get some objects which are vertical and horizontal lines
Now you can try different choices of candidate components (8 real ones) with the following knowledge:
two of these components can be described by the same line (slope-intercept form; linear regression problem) -> a line which borders the rectangle
it is probable that the best 4 pair choices (according to linear-regression loss) are the valid borders of this rectangle
you might add the assumption, that vertical borders and horizontal borders are orthogonal to each other
4
- Calculate the rectangle from these borders
- Widen it by a few pixels (hyper-parameter)
- Black-out that rectangle
That's the basic approach.
Alternative
This one is much less work, uses more specialized tools, and assumes the facts stated in the opening:
the stuff to remove is on some completely black part of the image
it's kind of isolated; distance to medical-data is high
Steps
Run some general OCR to detect characters
Get the occupied pixels / borders somehow (I'm not sure what OCR tools return)
Calculate some outer rectangle and black-out (using some predefined widening-gap; this one needs to be much bigger than the one above)
Alternative 2
Sketch only: the idea is to use something like binary closing on the image to build fully connected components out of the source pixels (while small gaps/holes are filled), so that we get one big component describing the medical data and one for the watermark. Then just remove the smaller one.
I am sure this can be optimized, but ... You could create 4 patches of size 3x3 or 4x4, and initialize them with the exact content of the pixel values for each of the individual corners of the frame surrounding the annotation text. You could then iterate over the whole image (or have some smart initialization looking only in the black area) and find the exact match for those patches. It is not very likely you will have the same regular structure (90 deg corner surrounded by near 0) in the tissue, so this might give you the bounding box.
An even simpler approach is still possible!
Just add the following after img_array = img.pixel_array:
img_array[img_array > X] = Y
Here X is the intensity threshold above which values are eliminated, and Y is the intensity value to use instead.
For example:
img_array[img_array > 4000] = 0
This replaces white values greater than 4000 with black (intensity 0).

Interpreting numpy array obtained from tif file

I need to work with some greyscale tif files and I have been using PIL to import them as images and convert them into numpy arrays:
np.array(Image.open(src))
I want a transparent understanding of exactly what the values of these arrays correspond to; in particular, it was not clear what value was appropriate as a white point or black point for my images. For instance, I might want to convert this array into an array of floats with pixel values of 1 for white and 0 for black, with other values scaled linearly in between.
I have tried some naive methods, including scaling by the maximum value in the array, but when opening the resulting files there is always some amount of shift in the color levels.
Is there any documentation for the proper way to understand the values stored in these tif arrays?
A TIFF is basically a computer file format for storing raster graphics images. It has a lot of specs, and a quick search on the web will get you the resources you need.
The thing is, you are using PIL as your input library. The array you have likely uses a uint8 data type, which means your data can be anywhere from 0 to 255. To obtain the 0 to 1 range, do the following:
im = np.array(Image.open(src)).astype('float32')/255
Notice your array will likely have 4 layers in the third dimension, im[:, :, here] (im.shape = (i, j, k)). So each trace im[i, j, :] (which represents a pixel) is going to be a quadruplet holding an RGBA value.
The R stands for Red (or quantity of Red), G for Green, B for Blue. A is the alpha channel and it is what enables you to have transparency (lower values means less opacity and more transparency).
It can also have three layers for only RGB, or one layer if intended to be plotted in the grey-scale.
In case you have RGB (or RGBA, not considering alpha) but need a single value, you should understand that there are quite a few different ways of doing this. In this post @denis recommends the following formulation:
Y = .2126 * R^gamma + .7152 * G^gamma + .0722 * B^gamma
where gamma is 2.2 for many PCs. The usual R G B are sometimes written
as R' G' B' (R' = Rlin ^ (1/gamma)) (purists tongue-click) but here
I'll drop the '.
And finally L* = 116 * Y ^ 1/3 - 16 to obtain the luminance.
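A small sketch of that formula (my own illustration, assuming im is the float RGB(A) array in the 0-1 range from above and gamma = 2.2):
import numpy as np
gamma = 2.2
rgb_lin = im[:, :, :3] ** gamma                  # undo the display gamma per channel
Y = (0.2126 * rgb_lin[:, :, 0]
     + 0.7152 * rgb_lin[:, :, 1]
     + 0.0722 * rgb_lin[:, :, 2])                # relative luminance
L_star = 116 * np.cbrt(Y) - 16                   # lightness, roughly in the 0-100 range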
I recommend you read his post. Also consider looking into the following concepts:
RGB Colors model
Gamma correction
Tagged Image File Format
Pillow documentation of TIFF
Working with TIFFs (import, export) in Python using numpy

What is intensity conversion in image processing?

I want to select the green channel of an image and perform intensity conversion. I have selected the green channel of image. I would like to know how to do intensity conversion. I am currently working in python.
By selecting the green channel, you're technically already doing an intensity conversion. This is represented as a grayscale image which denotes how much green is experienced at each pixel in the image.
However, @MarkSetchell is correct that the canonical approach to converting a colour image to intensity is a weighted combination of the colour channels. Some people average all of them, others weight the green channel more heavily because we perceive that colour more clearly, but the ITU-R BT.601 weighting is amongst the most popular: Y' = 0.299 R' + 0.587 G' + 0.114 B'.
Take a look at these informative links for more details on the conversion:
https://en.wikipedia.org/wiki/Luma_(video)
http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/
https://en.wikipedia.org/wiki/Grayscale
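As an illustration of that weighted combination, a hedged sketch (not from the original answer) assuming a uint8 BGR image as loaded by OpenCV, with a placeholder filename:
import cv2
import numpy as np
im = cv2.imread('image.png')                     # hypothetical filename, BGR order
b, g, r = im[:, :, 0], im[:, :, 1], im[:, :, 2]
gray = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)  # Rec. 601 luma weights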
However, since you are using OpenCV, you can simply call cv2.cvtColor with the correct flag to convert an image from colour to grayscale:
import numpy as np
import cv2
im = cv2.imread('...') # Place filename here
im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
Alternatively, you can specify 0 as the extra flag to cv2.imread to automatically convert any image to grayscale without needing to call cv2.cvtColor:
im = cv2.imread('...', 0)
You need to be more precise. The "green channel" probably means you have green luma, a correlate of green intensity. They are related via a "transfer function", e.g. as defined as a part of sRGB:
https://en.wikipedia.org/wiki/SRGB
This will allow you to flip between luminous intensity of green and luma of green.
Equally likely, you are interested in luminance (CIE Y) or luma. Google for "Gamma FAQ" if that is the case.
