I have two sets of satellite images. One is a set of classical satellite images, and the other is a set of infrared satellite images. I am trying to detect the vegetation inside the yellow polygon and outside the red area using the Normalized Difference Vegetation Index (NDVI).
Visible image
Infrared image
According to the image documentation, a shift in the color spectrum was made on the infrared images: the infrared band occupies the red band, the red band occupies the green band, and so on.
To calculate the NDVI image, I'm doing the following:
# Images are in BGR color space; channel 2 is NIR in the infrared image and red in the visible image.
nir = img_irc[:, :, 2].astype(float)  # cast to float: uint8 arithmetic would overflow in the sum
red = img_visible[:, :, 2].astype(float)
ndvi = (nir - red) / (nir + red + 0.001)
Then, I use an Otsu threshold to extract the following mask:
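A minimal sketch of that Otsu step, assuming the ndvi array from above (float values roughly in [-1, 1]); cv2's Otsu implementation expects an 8-bit image, so the NDVI is rescaled first:

import cv2
import numpy as np

# Rescale NDVI from [-1, 1] to [0, 255] so cv2's Otsu threshold accepts it.
ndvi_u8 = cv2.normalize(ndvi, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(ndvi_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)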
To better see the effects, I overlay a semi-transparent mask of the detection on the satellite photo:
The result is not too bad, but there are sometimes wrong detections, like here on the roofs. I suspect that my way of extracting the spectral reflectance measurements acquired in the red (visible) and near-infrared regions, by simply selecting the red channels, is not perfect. Are there better methods to do so?
The NIR channel has apparently already been preprocessed by the translation of the spectrum. You should get closer to the source and obtain the values directly from the IR spectrum, rather than re-creating them from the shifted images.
I'm currently overlaying a thermal camera feed on a normal RGB camera feed; the thermal camera outputs a grayscale image.
I'm converting it to RGB using a colormap to add colour to the image, then overlaying it on the normal RGB image. This is working fine:
thermal = cv2.applyColorMap(thermal, cv2.COLORMAP_HOT)
alpha = 0.5
# Blend the colorized thermal foreground over the matching region of the background.
blended_portion = cv2.addWeighted(foreground, alpha, background[:foreground_height, :foreground_width, :], 1 - alpha, 0)
background[:foreground_height, :foreground_width, :] = blended_portion
cv2.imshow('composited image', background)
However, this code applies that 0.5 opacity to the entire image. Is there any way to apply the opacity only above a certain threshold?
So, for example, if it's cold the opacity is 0, but when it's hot the opacity is 100%?
Kind of like this:
Sorry for not providing code, but with these resources you should be able to figure it out:
See the thresholding tutorial to create a binarized version of your thermal image.
Then apply this binary image as a mask to your thermal image so that only the 'hot parts' remain (see this answer).
Now, in your call to cv2.addWeighted(), use the masked thermal image.
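A rough sketch of those three steps, assuming a grayscale thermal frame of the same size as the RGB background; the file names and the threshold value of 200 are placeholders:

import cv2
import numpy as np

background = cv2.imread('rgb.png')                        # normal RGB frame
thermal = cv2.imread('thermal.png', cv2.IMREAD_GRAYSCALE) # grayscale thermal frame

# 1. Binarize the thermal image: only 'hot' pixels survive.
_, mask = cv2.threshold(thermal, 200, 255, cv2.THRESH_BINARY)

# 2. Colorize the thermal image and mask out everything but the hot parts.
thermal_color = cv2.applyColorMap(thermal, cv2.COLORMAP_HOT)
hot_only = cv2.bitwise_and(thermal_color, thermal_color, mask=mask)

# 3. Blend as before, but keep the original background wherever the mask is cold,
#    so the 0.5 opacity only affects hot regions.
alpha = 0.5
blended = cv2.addWeighted(hot_only, alpha, background, 1 - alpha, 0)
composited = np.where(mask[..., None] == 255, blended, background)

cv2.imshow('composited image', composited)
cv2.waitKey(0)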
I have a dataset of face images which were captured against a uniform gray background. Because of lighting variations during collection, the images no longer have the same color tone; the background color looks different in each image.
I want to find the exact skin color, and I want to implement color correction using the fact that all images had a uniform gray background. How can I implement this in Python?
Assume your pixels are converted to floats in the range 0,0,0 (black) to 1,1,1 (white). You then have a vector in 3D (RGB) space from the picture's background color to the known background value. To correct, multiply each component by the magnitude of the correction needed: if dR, dG, dB are the differences, all between 0.0 and 1.0, and R,G,B is a pixel, then Rnew = R * (1.0 + dR), clipping the maximum at 1.0 (and likewise for G and B). This multiplicative correction keeps black pixels black.
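A minimal sketch of that correction, assuming the image is already a float array in [0, 1]; the corner patch used to sample the background and the target gray of 0.5 are assumptions:

import numpy as np

def gray_background_correct(img, target_gray=0.5):
    # Sample the measured background color from a corner patch (assumed location).
    bg = img[:20, :20].reshape(-1, 3).mean(axis=0)
    # Per-channel difference between the target gray and what was captured.
    d = target_gray - bg
    # Multiplicative correction: black stays black; clip the maximum at 1.0.
    return np.clip(img * (1.0 + d), 0.0, 1.0)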
I have trained a face segmentation model on the CelebA-Mask-HQ dataset (https://github.com/switchablenorms/CelebAMask-HQ) that is able to create a color segmentation mapping of an image, with different colours for the background, eyes, face, hair, etc. The model produces a numpy array of shape (1024, 1024, 3). The outputted segmentation maps are a bit noisy, with some random pixels in the face labelled as eyes, for example, or cloth labels popping up where it is actually background. Please see the image below:
As you can see, there are green pixels in the top left corner of the image and in the face around the mustache (above the yellow upper-lip map).
I would like to remove this 'noise' from the segmentation map by automatically changing these wrongly labelled small segments, which are surrounded by larger correctly labelled areas, to the most dominant color in that area (with an adaptable window size). I could not find built-in OpenCV functionality for this. Do you know an efficient way to do this? (I need to 'denoise' a large set of images, so ideally in a vectorized, numpy-only way.)
It is very important that the image after denoising only contains the set of predefined label colors (19 different colors in total), so the noise needs to be recolored in an absolute manner, without averaging (which would introduce new colors to the color palette of the image).
Thank you!
I can point you away from OpenCV and towards scikit-image, which I am more familiar with. I would tackle this using an approach borrowed from this tutorial.
Specifically, I would do something like this:
import numpy as np
from skimage.measure import label, regionprops

label_image = label(image)
for region in regionprops(label_image):
    # only recolor areas that are under a certain threshold size
    if region.area <= 100:
        # get creative with which color to recolor with...
        minr, minc, maxr, maxc = region.bbox
        # count how often each label occurs inside the bounding box
        counts = np.bincount(label_image[minr:maxr, minc:maxc].ravel())
        counts[region.label] = 0    # ignore the region's own label
        dominant = counts.argmax()  # most frequent surrounding label
        crop = label_image[minr:maxr, minc:maxc]
        crop[crop == region.label] = dominant
I haven't tried this code out...but I think something like this may work. Let me know if it is helpful or not.
I am trying to count cells in a microscopy image, and I need to differentiate between the membrane signal and the organelles within.
There is only one color, as we are visualizing a protein within the cells using GFP.
Right now I am using the skimage package (measure, label). This method kind of works, as it can find connected black regions, and by using their convex hulls together with the bounding boxes, I can achieve the following (inside: red, membrane: blue):
I am, however, having problems with organelles (bright spots inside) that touch the membrane: I then lose signal from the inside, which instead gets added to the membrane signal (which is a problem).
Any suggestions for a better method?
import numpy as np
from scipy import ndimage
from skimage import measure

# Lightly smooth the raw image before masking.
image = ndimage.gaussian_filter(raw_image, sigma=(0.5, 0.5), order=0)
median = np.median(image)
# Mask out pixels brighter than 1.5x the median (was 5x).
mask_inv = np.ma.masked_where(image > median * 1.5, image)
img_contour_inv = np.array(np.zeros(image.shape) + mask_inv, dtype=float)
mask_inverse_bool = img_contour_inv > 0
labels = measure.label(mask_inverse_bool, connectivity=1)
df = measure.regionprops(labels, intensity_image=raw_image)
Followed by some sorting by size and plotting, this yields image 2.
You can try this method:
1. Find the black spots as you have done in the Inside image, and make the Inside image black and white.
2. Now invert the Inside image and multiply it with the raw image. Only the bright spots will be left behind.
3. To get the cell contours, subtract the bright spots from the raw image.
Another method could be to use thresholding to find the bright spots.
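A rough sketch of the first method, assuming raw_image and the mask_inverse_bool array from the question (where True marks the dark 'inside' regions):

import numpy as np

# Black-and-white 'Inside' image: 1.0 inside the cells, 0.0 elsewhere.
inside_bw = mask_inverse_bool.astype(float)

# Invert the Inside image and multiply with the raw image:
# only the bright spots (organelles) are left behind.
bright_spots = (1.0 - inside_bw) * raw_image

# Subtract the bright spots from the raw image to recover the
# membrane (cell contour) signal.
membrane = raw_image - bright_spots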
I am doing a project in Python on hand gesture recognition. Using the LAB color space should help to improve the recognition accuracy, because our skin color mainly comprises a ratio of red and yellow, and in the Lαβ color space the α component represents the pixel's position between red and green while the β component represents its position between yellow and blue, making it less vulnerable to noise.
But the problem is that when I tried to convert the Lab image into binary using the threshold function provided in OpenCV, it returned some errors, because the input of the threshold function should be a grayscale image. Does anybody know how to solve this problem?
lab = cv2.cvtColor(img,cv2.COLOR_BGR2LAB)
blur = cv2.GaussianBlur(gray,(5,5),0)
ret,thresh1 = cv2.threshold(blur,70,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
The error returned is 'Assertion Failed'.
Does anybody know how to threshold a Lab image?
The simplest thresholding methods replace each pixel in an image with a black pixel if the image intensity is less than some fixed constant T, or with a white pixel if the intensity is greater than that constant. Hence, thresholding is intended for grayscale images.
In OpenCV, cv2.threshold takes the source image, which should be a grayscale image, as its first argument, and the threshold value used to classify the pixel values as its second.
But Wikipedia notes that you can threshold color images by designating a separate threshold for each of the RGB components of the image and then combining the results with an AND operation.
From the OpenCV threshold documentation:
src: input array (single-channel, 8-bit or 32-bit floating point).
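A minimal sketch of that per-channel AND approach, assuming a BGR image img and an arbitrary threshold of 70 for each channel:

import cv2

b, g, r = cv2.split(img)
_, tb = cv2.threshold(b, 70, 255, cv2.THRESH_BINARY)
_, tg = cv2.threshold(g, 70, 255, cv2.THRESH_BINARY)
_, tr = cv2.threshold(r, 70, 255, cv2.THRESH_BINARY)
# Combine the three binary masks with an AND: a pixel stays white only
# if it passed the threshold in all three channels.
combined = cv2.bitwise_and(tb, cv2.bitwise_and(tg, tr))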
You can't threshold a color image directly. And where did gray come from? You never use the Lab-converted image.
The input image should be a single-channel, 8-bit or 32-bit float, like M4rtini said. However, RGB, Lab, and HSV images are all built up from three 8-bit channels. If you split the channels
L, a, b = cv2.split(lab)
the result will be three single-channel images. These you can input into the function:
ret,thresh_L = cv2.threshold(L,70,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
ret,thresh_a = cv2.threshold(a,70,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
ret,thresh_b = cv2.threshold(b,70,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
Basically, you can input any 2D numpy array into the threshold function as long as it is 8-bit or 32-bit float. OpenCV scales the Lab color space to 0-255.
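Putting this together with the question's pipeline, a sketch that blurs and thresholds the single a channel (the red-green axis, where skin tones should stand out) instead of the undefined gray; the file name is a placeholder:

import cv2

img = cv2.imread('hand.png')
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab)
# Blur and Otsu-threshold the single-channel a image.
blur = cv2.GaussianBlur(a, (5, 5), 0)
ret, thresh1 = cv2.threshold(blur, 70, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)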