How does cv2.merge((r,g,b)) work? - python

I am trying to apply a linear filter to an image with RGB colors. I found that one way to do this is to split the image into separate color layers, filter each one, and then merge them back, i.e.:
b, g, r = cv2.split(img)
Sobel(b, ...)
Sobel(g, ...)
Sobel(r, ...)
cv2.merge((b, g, r))
I want to find out how cv2.merge((b,g,r)) works and how the final image will be constructed.

cv2.merge takes single-channel images and combines them into a multi-channel image. You've run the Sobel edge detector on each channel on its own and are now combining the results into a final output image. The combined result may not make sense visually at first, but what you would be displaying is the edge detection result of all three planes in a single image.
Ideally, red hues tell you the strength of the edge response in the red channel, green hues the strength in the green channel, and blue hues the strength in the blue channel.
Sometimes this is a good debugging tool so that you can semantically see all of the edge information for each channel in a single image. However, this will most likely be very hard to interpret for very highly complicated images with lots of texture and activity.
What is more usually done is to actually do an edge detection using a colour edge detection algorithm, or convert the image to grayscale and do the detection on that image instead.
As an example of the former, one can decompose the RGB image into HSV and use the colour information in this space to do a better edge detection. See this answer by Micka: OpenCV Edge/Border detection based on color.
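As a rough sketch of both routes mentioned above (the per-channel Sobel-and-merge from the question and the simpler grayscale pass), assuming a BGR image on disk; the filename, derivative order and kernel size are placeholders:
import cv2

img = cv2.imread("input.jpg")  # BGR image; filename is a placeholder

# route 1: Sobel on each channel separately, then merge back into one 3-channel image
b, g, r = cv2.split(img)
b_edges = cv2.Sobel(b, cv2.CV_8U, 1, 0, ksize=3)  # horizontal gradient per channel
g_edges = cv2.Sobel(g, cv2.CV_8U, 1, 0, ksize=3)
r_edges = cv2.Sobel(r, cv2.CV_8U, 1, 0, ksize=3)
merged_edges = cv2.merge((b_edges, g_edges, r_edges))  # colour hints at which channel responded

# route 2: convert to grayscale first and run a single Sobel pass
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray_edges = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)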

This is my understanding. In OpenCV the function split() takes a packed image input (a multi-channel array) and splits it into several separate single-channel arrays.
Within a multi-channel image, each pixel occupies one slot in the array and holds its own small array of values (r, g and b), hence the term multi-channel. This setup allows any type of image, such as BGR, RGB, or HSV, to be split using the same function.
As an example (pretend these are separate examples, so no variables are being overwritten):
b,g,r = cv2.split(bgrImage)
r,g,b = cv2.split(rgbImage)
h,s,v = cv2.split(hsvImage)
Take the r, g, b arrays for example. Each is a single-channel array containing one channel of the split RGB image.
This means the image is being split out into three separate arrays:
rgbImage[0] = [234,28,19]
r[0] = 234
g[0] = 28
b[0] = 19
rgbImage[41] = [119,240,45]
r[41] = 119
g[41] = 240
b[41] = 45
Merge does the reverse by taking several single channel arrays and merging them together:
newRGBImage = cv2.merge((r,g,b))
The order in which the separated channels are passed in becomes important with this function.
Pseudo code:
cv2.merge((r,g,b)) != cv2.merge((b,g,r))
As an aside: cv2.split() is an expensive function and the use of NumPy indexing is much more efficient.
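As a minimal sketch of that NumPy-indexing alternative (the filename is a placeholder; the logic mirrors cv2.split/cv2.merge):
import cv2
import numpy as np

img = cv2.imread("image.jpg")  # BGR image of shape (height, width, 3); filename is a placeholder

# "split" by indexing the last axis instead of calling cv2.split
b = img[:, :, 0]
g = img[:, :, 1]
r = img[:, :, 2]

# "merge" by stacking along the last axis instead of calling cv2.merge
bgr_again = np.dstack((b, g, r))    # same layout as the original image
rgb_swapped = np.dstack((r, g, b))  # different channel order -> different image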
For more information, check out the OpenCV-Python tutorials.

Related

matplotlib imshow gray colormap additional preprocess

I have a bunch of images, and by chance I figured out that the best preprocessing for my images is using matplotlib's imshow with cmap='gray'. This is my RGB image (I can't publish the original images; this is a sample I created to make my point, so the original images are not noiseless and perfect like this):
When I use plt.imshow(img, cmap='gray') the image becomes:
I wanted to implement this process in OpenCV. I tried to use OpenCV colormaps, but there wasn't a gray one there. I used these solutions but the result is like the first image, not the second one. (result here)
So I was wondering: besides changing colormaps, what preprocessing does matplotlib apply to images when we call imshow?
P.S.: You might suggest binarization; I've tested both techniques, but on my data binarization will ruin some of the samples, which this method (matplotlib) won't.
cv::normalize with NORM_MINMAX should help you. it can map intensity values so the darkest becomes black and the lightest becomes white, regardless of what the absolute values were.
this section of OpenCV docs contains example code. it's a permalink.
or so that min_I dst(I) = alpha, max_I dst(I) = beta when normType=NORM_MINMAX (for dense arrays only)
that means, for NORM_MINMAX, alpha=0, beta=255. these two params have different meanings for different normTypes. for NORM_MINMAX it seems that the code automatically swaps them so the lower value of either is used as the lower bound etc.
further, the range for uint8 type data is 0 .. 255. giving 1 only makes sense for float data.
example:
import numpy as np
import cv2 as cv
im = cv.imread("m78xj.jpg")
normalized = cv.normalize(im, dst=None, alpha=0, beta=255, norm_type=cv.NORM_MINMAX)
cv.imshow("normalized", normalized)
cv.waitKey(-1)
cv.destroyAllWindows()
apply a median blur to remove noisy pixels (which go beyond the average gray of the text):
blurred = cv.medianBlur(im, ksize=5)
# ...normalize...
or do the scaling manually. apply the median blur, find the maximum value in that version, then apply it to the original image.
output = im.astype(np.uint16) * 255 / blurred.max()
output = np.clip(output, 0, 255).astype(np.uint8)
# ...

Difference between images of different sizes

My problem is as follows. I have an image img0 (array shape (A,B,3)) and then a face img1 cut out from the middle of that image (by an algorithm I don't have access to: my input is only the whole image, and the face cut out from it), now an array shaped (C,D,3) where C<A and D<B. Now, I want to perform operations on the face (e.g., colour it differently) and then stick it back inside the original background (which is not coloured differently) -- these operations will not affect the shape of img1 array containing the face alone, it will remain (C,D,3). Something like img0-img1 doesn't work because of the shape mismatch.
I guess an approach like finding the starting coordinate of the face in img0 would work in the case that the face cut out is rectangular (which is possible for me to use, though not ideal), since it is guaranteed that the face is exactly identical in img1 and img0. That means, to get the background, we only need to find the starting coordinate of the img1 array in img0, cut out the subsequent elements (that correspond to img1) from img0, and we're left with the background. After I've done whatever I want to the face, I can use the new (C,D,3) array in place of the previous img1 part of the whole image (img0).
Is there a way to do this in Python? i.e., compute the difference between two images of different sizes, where one image is a 'subimage' of the other? Or, failing that, if we can find the starting coordinate of the rectangular portion of an image (img0) which corresponds to a rectangular cutout available to us (img1)?
One easy way to do that would be to cross-correlate your zero-mean cut-out with the zero-mean original image. As you have no noise added to the image, any maximum of the cross-correlation is a possible candidate.
However:
(i) If you don't use faces but e.g. blocks, there will be multiple maxima and you don't have an unique solution.
(ii) It is not exactly an elegant solution to your problem.
I modified the code example from [1] to make it clearer:
from scipy import signal, misc
import numpy as np
face = misc.face(gray=True)
face = face - np.mean(face)
face_cutout = np.copy(face[300:365, 670:750])
face_cutout = face_cutout - np.mean(face_cutout)
corr = signal.correlate2d(face, face_cutout, mode='valid')
y, x = np.unravel_index(np.argmax(corr), corr.shape) # find the match
print(f'x: {x} y: {y}')
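Once the top-left corner (x, y) is known, sticking the (possibly modified) cut-out back into the original is plain NumPy slicing. A minimal sketch with placeholder arrays standing in for the real images:
import numpy as np

# placeholders for the real data: full image (A, B, 3) and recoloured cut-out (C, D, 3)
img0 = np.zeros((200, 300, 3), dtype=np.uint8)
modified_face = np.full((65, 80, 3), 255, dtype=np.uint8)
y, x = 50, 100  # top-left corner of the cut-out, as found by the correlation above

C, D = modified_face.shape[:2]
result = img0.copy()
result[y:y + C, x:x + D] = modified_face  # paste the modified face back in place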
[1] https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.correlate2d.html

Comparing and plotting regions of the same color over a dataset of a few hundred images

A chem student asked me for help with plotting image segmentation:
A stationary camera takes a picture of the experimental setup every second over a period of a few minutes, so it yields roughly 300 images.
The relevant parts of the setup are two adjacent layers of differently-colored foams observed from the side: basically a 2-color sandwich shrinking from both sides, except one of the foams evaporates a bit faster.
I'd like to segment each of the images in a way that would let me plot both foam regions' "width" against time.
Here is a "diagram" :)
I want to go from here --> To here
Ideally, given a few hundred of such shots, in which only the widths change, I get an array of scalars back that I can plot. (Going to look like a harmonic series on either side of the x-axis)
I have a bit of Python and MATLAB experience, but I have never used OpenCV or the Image Processing Toolbox in MATLAB, and have never dealt with any computer vision in general. Could you throw me a roadmap of what packages/functions to use or what steps one should take, and I'll take it from there?
I'm not sure how to address these things:
- selecting at which position along the length of the foam the algorithm measures the width (i.e. if the foams are a bit uneven), although this can be ignored.
- which library to use to segment regions of the image based on their color (some k-means shenanigans, probably), and how to selectively store the spatial parameters of the resulting segments.
- how to iterate the above over a number of files.
Thank you kindly in advance!
Assuming the two foams have different intensities after converting to grayscale (if not, just convert to another color space like HSV or LAB and use one of its components):
img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
First, Threshold your grayscaled input into a few bands
ret, thresh1 = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
ret, thresh2 = cv2.threshold(img, 27, 255, cv2.THRESH_BINARY_INV)
ret, thresh3 = cv2.threshold(img, 77, 255, cv2.THRESH_TRUNC)
ret, thresh4 = cv2.threshold(img, 97, 255, cv2.THRESH_TOZERO)
ret, thresh5 = cv2.threshold(img, 227, 255, cv2.THRESH_TOZERO_INV)
The threshold values should be tuned on your actual data; here I'm just giving an example.
Clean up each segmented image using a median filter with a fairly large kernel (e.g. 9), as I do expect some noise. You could also use an ROI here to help remove part of the noise, but personally I'm lazy and just write the program to handle all cases and angles.
thresh1_smoothed = cv2.medianBlur(thresh1, 9)  # repeat for each band
Each band will correspond to one color (layer). Now you should have N segmented images from one source, where N is the number of layers you wish to track.
Second, use the OpenCV function boundingRect to find the location and width/height of each layer, i.e. run boundingRect on each smoothed, thresholded image.
C++: Rect boundingRect(InputArray points)
Python: cv2.boundingRect(points) → retval
Last, the rect has x, y, height and width attributes. You can use a simple sort to order the layers from top to bottom based on the rect attribute x. Run through the whole video to obtain the x (layer id) and height vs. time graph.
Rect API - public attributes:
_Tp height // this is what you are looking for
_Tp width
_Tp x // this tells you the position of the band
_Tp y
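A minimal sketch of that step, looping over hypothetical per-frame masks for one layer and collecting the bounding-rect height (the mask data here is synthetic):
import numpy as np
import cv2

# hypothetical list of cleaned, thresholded masks for one layer, one per frame
masks_over_time = [np.zeros((100, 60), dtype=np.uint8) for _ in range(3)]
masks_over_time[0][20:80, 10:50] = 255  # fake band in the first frame

heights_over_time = []
for mask in masks_over_time:
    points = cv2.findNonZero(mask)  # coordinates of all white pixels
    if points is None:  # layer has disappeared in this frame
        heights_over_time.append(0)
        continue
    x, y, w, h = cv2.boundingRect(points)
    heights_over_time.append(h)  # h corresponds to |AB| or |CD| in the diagram

print(heights_over_time)  # plot this against time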
By plotting the corresponding heights (|AB| or |CD|) over time, you can obtain the graph you need.
A more robust approach is to use a Kalman filter to track the position and height, as I would expect some bubbles to occur and interfere with the height of the layers.
To be honest, I didn't expect a chem student to be doing this. Haha, good luck!
If anything goes wrong you can find me here, or email me if I'm not watching Stack Overflow.
You can select a region of interest straight down the middle of the foams, a few pixels wide. If you stack these regions for each image, it will show the shrinkage over time.
If, for example, you use a 3-pixel width for the ROI, the result for 300 images will be a 900-pixel-wide image, where the left is the start of the experiment and the right is the end. The following image can help you understand:
Though I have not fully tested it, this code should work. Note that there must only be images in the folder you reference.
import cv2
import numpy as np
import os

# path to folder that holds the images
path = '.'

# dimensions of roi
x = 0
y = 0
w = 3
h = 100

# store references to all images
all_images = os.listdir(path)

# sort images
all_images.sort()

# create empty result array
result = np.empty([h, 0, 3], dtype=np.uint8)

for image in all_images:
    # load image
    img = cv2.imread(path + '/' + image)
    # get the region of interest
    roi = img[y:y+h, x:x+w]
    # add the roi to previous results
    result = np.hstack((result, roi))

# optional: save result as image
# cv2.imwrite('result.png', result)

# display result - can also plot with matplotlib
cv2.imshow('Result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Update after question edit:
If the foams have different colors, you can easily separate them by color by converting the image to HSV and using inRange (example). This creates a mask (a 2D array with values from 0 to 255, one for each pixel) that you can use to calculate the average height and extract the parameters and area of the image.
You can find a script to help you find the HSV colors for separation in this GitHub repository.
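A minimal sketch of that masking idea for one frame; the filename and the HSV bounds are placeholders that need tuning to the actual foam colors:
import cv2
import numpy as np

img = cv2.imread("frame_0001.png")  # one frame; filename is a placeholder
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# HSV range for one of the foams - these bounds are placeholders
lower = np.array([35, 50, 50])
upper = np.array([85, 255, 255])
mask = cv2.inRange(hsv, lower, upper)  # 255 where the colour matches, 0 elsewhere

# average height of that foam: matching pixels per column, averaged over all columns
height_per_column = mask.sum(axis=0) / 255
print(height_per_column.mean())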

ICA on RGB channels in python

I have processed a fingertip video and split it into r, g and b channels. I filtered each channel using a Butterworth band-pass filter. Now I want to do ICA on them to remove the noise, but I don't understand how to construct the matrix for the ICA.
Do I have to do ICA separately on each channel, or should it be done on all channels combined?
These are the images of the r, g and b channels after processing, respectively, all with respect to time on the x axis.
Red channel:
Green channel:
Blue channel:
It is based on this paper
This is a hip shot on my part.
As I remember from uni, you can make one large array out of every row of pixels, per image. I think the important part is that you are consistent (i.e. you could use columns instead of rows). By consistent I mean that when you reconstruct the image afterwards, you splice the array the same way you constructed it.
I think they do the FFT, PCA and ICA on each color separately, but I can't see them stating it explicitly. I think so because I don't see the point of filtering and normalizing the colors separately and then combining them before the ICA, FFT and PCA.
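As a rough sketch of that per-channel construction (this is my reading of the approach above, not the paper's exact pipeline; sklearn's FastICA, the synthetic data and the number of components are assumptions):
import numpy as np
from sklearn.decomposition import FastICA

# stand-in for one filtered colour channel of the video: (n_frames, height, width)
frames_r = np.random.rand(120, 32, 32)
n_frames, height, width = frames_r.shape

# build the observation matrix: one flattened frame per row
X = frames_r.reshape(n_frames, -1)

# run ICA on this channel; n_components is a free choice to be tuned
ica = FastICA(n_components=5, random_state=0)
sources = ica.fit_transform(X)  # shape (n_frames, n_components)

# optionally zero out components judged to be noise, then reconstruct
X_clean = ica.inverse_transform(sources)

# reshape back exactly the way the matrix was built
frames_r_clean = X_clean.reshape(n_frames, height, width)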

Pytorch: Normalize Image data set

I want to normalize a custom dataset of images. For that I need to compute the mean and standard deviation by iterating over the dataset. How can I normalize my entire dataset before creating the data set?
Well, let's take this image as an example:
The first thing you need to do is decide which library you want to use: Pillow or OpenCV. In this example I'll use Pillow:
from PIL import Image
import numpy as np
img = Image.open("test.jpg")
pix = np.asarray(img.convert("RGB")) # Open the image as RGB
Rchan = pix[:,:,0] # Red color channel
Gchan = pix[:,:,1] # Green color channel
Bchan = pix[:,:,2] # Blue color channel
Rchan_mean = Rchan.mean()
Gchan_mean = Gchan.mean()
Bchan_mean = Bchan.mean()
Rchan_var = Rchan.var()
Gchan_var = Gchan.var()
Bchan_var = Bchan.var()
And the results are:
Red Channel Mean: 134.80585625
Red Channel Variance: 3211.35843945
Green Channel Mean: 81.0884125
Green Channel Variance: 1672.63200823
Blue Channel Mean: 68.1831375
Blue Channel Variance: 1166.20433566
Hope it helps for your needs.
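For the dataset-wide statistics the question asks about, a minimal sketch along the same lines but with PyTorch/torchvision; the folder path, batch size and resize transform are assumptions:
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# load the images as tensors in [0, 1]; Resize is only needed if the images differ in size
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
dataset = datasets.ImageFolder("path/to/images", transform=transform)
loader = DataLoader(dataset, batch_size=64)

# accumulate per-channel sums over every pixel in the dataset
channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
n_pixels = 0

for images, _ in loader:  # images: (batch, 3, H, W)
    channel_sum += images.sum(dim=[0, 2, 3])
    channel_sq_sum += (images ** 2).sum(dim=[0, 2, 3])
    n_pixels += images.numel() // 3

mean = channel_sum / n_pixels
std = (channel_sq_sum / n_pixels - mean ** 2).sqrt()
print(mean, std)  # these values can then go into transforms.Normalize(mean, std)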
What normalization tries to do is maintain the overall information in your dataset even when there are differences in the values. In the case of images, it tries to factor out issues like brightness and contrast that in certain cases do not contribute to the general information the image carries. There are several ways to do this, each with pros and cons, depending on the image set you have and the processing effort you want to spend on them. Just to name a few:
Linear histogram stretching: you do a linear map on the current range of values in your image and stretch it to match the 0 and 255 values in RGB.
Nonlinear histogram stretching: you use a nonlinear function to map the input pixels to a new image. Commonly used functions are logarithms and exponentials. My favorite function is the cumulative probability function of the original histogram; it works pretty well.
Adaptive histogram equalization: you do a linear histogram stretching in certain regions of your image, to avoid doing an identity mapping where your original image already spans the maximum range of values.
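For concreteness, OpenCV ships ready-made implementations of these ideas; a minimal sketch (the filename is a placeholder):
import cv2

gray = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)  # filename is a placeholder

# linear histogram stretching: map the current min..max range to 0..255
stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)

# global histogram equalization (uses the cumulative histogram as the mapping)
equalized = cv2.equalizeHist(gray)

# adaptive histogram equalization (CLAHE): the mapping is computed per tile
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
adaptive = clahe.apply(gray)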
