I'm currently overlaying a thermal camera feed on a normal RGB camera feed. The thermal camera outputs a grayscale image, which I convert to RGB with a colormap to add colour, then overlay onto the RGB camera image. This is working fine:
thermal = cv2.applyColorMap(thermal, cv2.COLORMAP_HOT)
alpha = 0.5
blended_portion = cv2.addWeighted(foreground, alpha,
                                  background[:foreground_height, :foreground_width, :],
                                  1 - alpha, 0)
background[:foreground_height, :foreground_width, :] = blended_portion
cv2.imshow('composited image', background)
However, this code applies the 0.5 opacity to the entire image. Is there any way to vary the opacity based on a threshold? For example, where the image is cold the opacity would be 0, but where it is hot the opacity would be 100%.
Sorry for not providing code, but with these resources you should be able to figure it out:
See the thresholding tutorial to create a binarized version of your thermal image.
Then apply this binary image as a mask to your thermal image so that only the 'hot' parts remain (see this answer).
Now, in your call to cv2.addWeighted(), use the masked thermal image.
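For example, here is a minimal sketch of those three steps (the file names and the threshold value of 127 are assumptions, not from the question):
import cv2
import numpy as np

# thermal_gray: single-channel thermal frame; background: the RGB frame
# (assumed here to be the same size).
thermal_gray = cv2.imread('thermal.png', cv2.IMREAD_GRAYSCALE)
background = cv2.imread('rgb.png')

# Colorize the thermal image as in the question.
thermal_color = cv2.applyColorMap(thermal_gray, cv2.COLORMAP_HOT)

# Binarize: pixels above the threshold count as 'hot'.
_, mask = cv2.threshold(thermal_gray, 127, 255, cv2.THRESH_BINARY)

# Blend as before, then copy the blended pixels back only where the mask
# is set, so cold regions keep the original background (0% opacity).
alpha = 0.5
blended = cv2.addWeighted(thermal_color, alpha, background, 1 - alpha, 0)
composited = background.copy()
composited[mask > 0] = blended[mask > 0]

cv2.imshow('composited image', composited)
cv2.waitKey(0)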
I have two sets of satellite images. One is a set of classical satellite images, and the other is a set of infrared satellite images. I am trying to detect the vegetation inside the yellow polygon and outside the red area using the Normalized Difference Vegetation Index (NDVI).
Visible image
Infrared image
According to the image documentation, a shift in the color spectrum was made on the infrared images: the infrared band occupies the red band, the red band occupies the green band, and so on.
To calculate the NDVI image, I'm doing the following:
# Images are in BGR color space. Cast to int so the uint8 channels
# don't overflow in either the difference or the sum.
ndvi = (img_irc[:, :, 2].astype(int) - img_visible[:, :, 2].astype(int)) / \
       (img_irc[:, :, 2].astype(int) + img_visible[:, :, 2].astype(int) + 0.001)
Then I use an Otsu threshold to extract the following mask:
To better see the effect, I add a semi-transparent mask showing the impact of the detection on the satellite photo:
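A minimal sketch of these two steps (scaling the NDVI to 8-bit for Otsu and the exact overlay colors are assumptions about the approach):
import cv2
import numpy as np

# Scale NDVI from [-1, 1] to [0, 255] so cv2.threshold can consume it.
ndvi_8u = ((ndvi + 1) / 2 * 255).astype(np.uint8)

# Otsu picks the threshold automatically; the fixed value 0 is ignored.
_, mask = cv2.threshold(ndvi_8u, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Semi-transparent overlay: tint the detected vegetation green on the
# visible image.
overlay = img_visible.copy()
overlay[mask > 0] = (0, 255, 0)  # BGR green
blended = cv2.addWeighted(overlay, 0.4, img_visible, 0.6, 0)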
The result is not too bad, but there are sometimes wrong detections, like here on the roofs. I suspect that my way of extracting the spectral reflectance measurements acquired in the red (visible) and near-infrared regions, by simply selecting the red channels, is not perfect. Are there better methods to do so?
The NIR channel has apparently already been preprocessed by the translation of the spectrum. You should get closer to the source and obtain the values directly from the IR spectrum, rather than re-creating them from the shifted images.
I have a dataset of face images which were captured against a uniform gray background. Because of lighting variations during collection, the images no longer have the same color tone; the background color looks different in every image.
I want to find the exact skin color and would like to implement color correction using the fact that all images had a uniform gray background. How can I implement this in Python?
Assume your pixels are converted to floats in the range 0,0,0 (black) to 1,1,1 (white). You then have a vector in 3D (RGB) space from the picture's background color to the known background value. To correct, scale each component by the magnitude of the correction needed: if dR, dG, dB are the per-channel differences (each between 0.0 and 1.0) and R,G,B is a pixel, then Rnew = R * (1.0 + dR), clipping the maximum at 1.0. This will keep black pixels black.
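A minimal sketch of that idea (the corner patch used to sample the background and the target gray value of 0.5 are assumptions):
import cv2
import numpy as np

img = cv2.imread('face.jpg').astype(np.float32) / 255.0

# Sample the observed background from a corner patch assumed to contain
# only background.
bg = img[0:20, 0:20].mean(axis=(0, 1))

# Known background value: a uniform mid gray (an assumption here).
target = np.array([0.5, 0.5, 0.5], dtype=np.float32)

# Per-channel differences between the known and observed background.
d = target - bg

# Scale each channel by (1 + d); black stays black, and we clip at 1.0.
corrected = np.clip(img * (1.0 + d), 0.0, 1.0)

cv2.imshow('corrected', (corrected * 255).astype(np.uint8))
cv2.waitKey(0)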
How do I do color balance with OpenCV? I have an image with a white paper in it; how can I balance the whole image's color based on that white paper? Is there any code implementing this in Python?
I've tried this method:
Get the RGB mean value of the "white" background. Then compute the ratios 255/r, 255/g, 255/b using the r,g,b values from that "white" RGB color. Then multiply the red, green and blue channels by those ratios.
But there is a problem I encounter:
I have to imwrite the balanced image and read it back to display the right result. If I imshow it directly (after converting the datatype to np.uint8), the displayed image is strange and not right.
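That symptom is usually caused by scaled values exceeding 255 and wrapping around when cast to np.uint8; clipping before the cast avoids it. A minimal sketch, assuming the white paper lies in a known region of the image:
import cv2
import numpy as np

img = cv2.imread('photo.jpg')

# Mean BGR of a region assumed to contain the white paper.
patch = img[100:200, 100:200].astype(np.float64)
b, g, r = patch[..., 0].mean(), patch[..., 1].mean(), patch[..., 2].mean()

# Scale the channels so the paper maps to pure white. Clip BEFORE the
# cast to uint8 -- without np.clip, values above 255 wrap around, which
# is the likely cause of the strange imshow result.
balanced = img.astype(np.float64) * np.array([255.0 / b, 255.0 / g, 255.0 / r])
balanced = np.clip(balanced, 0, 255).astype(np.uint8)

cv2.imshow('balanced', balanced)
cv2.waitKey(0)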
I am analyzing an image to find brown objects. I threshold the image and take the darkest parts as brown cells. However, depending on the quality of the image, the objects sometimes cannot be identified. Is there any solution for this in OpenCV with Python, such as pre-processing the grayscale image and defining what "brown" means for that particular image?
The code that I am using to find brown dots is as follows:
import cv2
import pymorph
from scipy import ndimage

def countBrownDots(imageFile):
    im = cv2.imread(imageFile)
    # Changing color space to grayscale.
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    gray = increaseBrighntness(gray)  # user-defined helper (not shown)
    # Invert-threshold the darkest pixels, smooth, and re-threshold.
    l1, thresh = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY_INV)
    thresh = ndimage.gaussian_filter(thresh, 16)
    l2, thresh = cv2.threshold(thresh, 70, 255, cv2.THRESH_BINARY)
    thresh = ndimage.gaussian_filter(thresh, 16)
    cv2.imshow("thresh22", thresh)
    # Regional maxima become seeds; labeling them counts the dots.
    rmax = pymorph.regmax(thresh)
    nim = pymorph.overlay(thresh, rmax)
    seeds, nr_nuclei = ndimage.label(rmax)
    cv2.imshow("original", im)
    cv2.imshow("browns", nim)
Here is an input image example:
Have a look at the image in HSV color space; here are the three planes stacked side by side.
Although people have suggested segmenting on the basis of hue, there is actually more discriminative information in the saturation and value planes. For this particular image you would probably get a better result with the grayscale (i.e. value) plane than with the hue. However, that is no reason to discard the color information.
As a proof of concept for color segmentation (using Gimp), I just randomly picked a brown spot and changed all colors with a color distance of less than 60 from that spot to green, to get this:
If you play with the parameters a bit you will probably get what you want. Then write the code, for example along these lines:
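(A rough translation of that Gimp experiment; the seed pixel coordinates are an assumption, and distance is measured in BGR space rather than whatever metric Gimp uses.)
import cv2
import numpy as np

im = cv2.imread('cells.jpg')

# Seed color: a pixel sampled from a brown spot (coordinates are a guess).
seed = im[150, 200].astype(np.float64)

# Euclidean color distance of every pixel from the seed color.
dist = np.linalg.norm(im.astype(np.float64) - seed, axis=2)

# Mark everything within distance 60 of the sampled brown, as in the
# Gimp experiment, and paint it green for inspection.
mask = dist < 60
preview = im.copy()
preview[mask] = (0, 255, 0)

cv2.imshow('brown-ish pixels', preview)
cv2.waitKey(0)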
I tried pre-processing with mean shift filtering to posterize the image, but that didn't really help.
I am doing a project in Python on hand gesture recognition. Using the Lab color space should help improve recognition accuracy, because skin color mainly comprises a ratio of red and yellow, and in the Lαβ color space the α component represents the pixel's position between red and green while the β component represents its position between yellow and blue, making it less vulnerable to noise.
But the problem is that when I tried to convert the Lab image into binary using the threshold function provided in OpenCV, it returned errors, because the input of the threshold function should be a grayscale image. Does anybody know how to solve this problem?
lab = cv2.cvtColor(img,cv2.COLOR_BGR2LAB)
blur = cv2.GaussianBlur(gray,(5,5),0)
ret,thresh1 = cv2.threshold(blur,70,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
The error returned is an assertion failure.
Does anybody know how to threshold a Lab image?
The simplest thresholding methods replace each pixel in an image with a black pixel if the image intensity is less than some fixed constant T, or a white pixel if the image intensity is greater than that constant. Hence, thresholding is normally done on grayscale images.
In OpenCV, the first two arguments of cv2.threshold are the source image, which should be a grayscale image, and the threshold value, which is used to classify the pixel values.
But on Wikipedia there is a reference saying that we can threshold color images by designating a separate threshold for each of the RGB components of the image and then combining the results with an AND operation.
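A minimal sketch of that per-channel approach (the three threshold values here are placeholders, not recommendations):
import cv2

img = cv2.imread('input.png')
b, g, r = cv2.split(img)

# Threshold each channel separately.
_, mask_b = cv2.threshold(b, 100, 255, cv2.THRESH_BINARY)
_, mask_g = cv2.threshold(g, 120, 255, cv2.THRESH_BINARY)
_, mask_r = cv2.threshold(r, 140, 255, cv2.THRESH_BINARY)

# AND them together: a pixel passes only if it passes in all three channels.
mask = cv2.bitwise_and(mask_b, cv2.bitwise_and(mask_g, mask_r))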
The OpenCV threshold documentation says the input must be a single-channel, 8-bit or 32-bit floating point array.
You can't threshold a color image directly. And where did gray come from? You never use the Lab-converted image.
The input image should be a single-channel 8-bit or 32-bit float, like M4rtini said. However, RGB, Lab, and HSV images are all built up from three 8-bit channels. If you split the channels
L, a, b = cv2.split(lab)
the result will be three single-channel images. These you can input into the function:
ret,thresh_L = cv2.threshold(L,70,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
ret,thresh_a = cv2.threshold(a,70,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
ret,thresh_b = cv2.threshold(b,70,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
Basically, you can input any 2D numpy array into the threshold function as long as it is 8-bit or 32-bit float. OpenCV scales the Lab color space to the 0-255 range for 8-bit images.