I have processed a fingertip video and split it into R, G and B channels, then filtered each channel with a Butterworth band-pass filter. Now I want to run ICA on them to remove the noise, but I don't understand how to construct the input matrix for the ICA.
Do I have to run ICA on each channel separately, or should it be run on all channels combined?
These are the plots of the R, G and B channels after processing, each with time on the x axis.
Red channel:
Green channel:
Blue channel:
It is based on this paper.
This is a shot in the dark on my part.
As I remember from uni, you can make one large array out of every row of pixels, per image. I think the important part is that you are consistent (i.e. you could use columns instead of rows). By consistent I mean that when you reconstruct the image afterwards, you slice the array the same way you constructed it.
I think that they do the FFT, PCA and ICA on each color separately, but I can't see them explicitly stating it. I think so because I don't see the point in filtering and normalizing the colors separately and then combining them before the ICA, FFT and PCA.
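For what it's worth, a minimal sketch of how the observation matrix is usually built in this kind of setup: reduce each filtered frame to one scalar per channel (e.g. the spatial mean), stack the three resulting time series as the columns of a T x 3 matrix, and run ICA on that. The variable names, the synthetic signals and the use of scikit-learn's FastICA below are my assumptions, not something stated in the paper.

# Sketch only: build a (T, 3) observation matrix from three filtered channel
# signals and run FastICA on it. The signals here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import FastICA

T = 300
t = np.arange(T) / 30.0                                               # assume 30 fps for this toy example
r_sig = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(T)        # stand-ins for the spatially
g_sig = np.sin(2 * np.pi * 1.2 * t + 0.5) + 0.1 * np.random.randn(T)  # averaged, Butterworth-filtered
b_sig = np.sin(2 * np.pi * 1.2 * t + 1.0) + 0.1 * np.random.randn(T)  # channel signals

X = np.column_stack((r_sig, g_sig, b_sig))       # shape (T, 3): rows = time samples, columns = channels
ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(X)                   # shape (T, 3): estimated independent sources

One would then typically keep the source whose spectrum peaks in the expected pulse band, rather than reconstructing the original channels.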
My problem is as follows. I have an image img0 (an array of shape (A,B,3)) and a face img1 cut out from the middle of that image by an algorithm I don't have access to: my input is only the whole image and the face cut out from it. The face is an array of shape (C,D,3), where C<A and D<B. Now I want to perform operations on the face (e.g., colour it differently) and then put it back into the original background (which is not coloured differently). These operations will not affect the shape of the img1 array containing the face alone; it will remain (C,D,3). Something like img0-img1 doesn't work because of the shape mismatch.
I guess an approach based on finding the starting coordinate of the face in img0 would work in the case where the cut-out face is rectangular (which is possible for me to use, though not ideal), since the face is guaranteed to be exactly identical in img1 and img0. That means that, to get the background, we only need to find the starting coordinate of the img1 array in img0 and cut out the corresponding elements from img0, and we're left with the background. After I've done whatever I want to the face, I can use the new (C,D,3) array in place of the previous img1 part of the whole image (img0).
Is there a way to do this in Python? i.e., to compute the difference between two images of different sizes, where one image is a 'subimage' of the other? Or, failing that, can we find the starting coordinate of the rectangular portion of an image (img0) which corresponds to a rectangular cutout available to us (img1)?
One easy way to do that would be to cross-correlate your zero-mean cut-out with the zero-mean original image. As you have no noise added to the image, any maximum of the cross-correlation is a possible candidate.
However:
(i) If you don't use faces but e.g. blocks, there will be multiple maxima and you won't have a unique solution.
(ii) It is not exactly an elegant solution to your problem.
I modified the code example from [1] to make it clearer:
from scipy import signal, misc
import numpy as np
face = misc.face(gray=True)
face = face - np.mean(face)                    # zero-mean the full image
face_cutout = np.copy(face[300:365, 670:750])  # take a rectangular cut-out
face_cutout = face_cutout - np.mean(face_cutout)
corr = signal.correlate2d(face, face_cutout, mode='valid')
y, x = np.unravel_index(np.argmax(corr), corr.shape)  # position of the best match
print(f'x: {x} y: {y}')
[1] https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.correlate2d.html
I am trying to apply a linear filter to an image with RGB colors. I found that one way to do this is to split the image into separate color layers and then merge them.
i.e.:
b, g, r = cv2.split(img)
b = Sobel(b, ...)
g = Sobel(g, ...)
r = Sobel(r, ...)
cv2.merge((b, g, r))
I want to find out how cv2.merge((b,g,r)) works and how the final image will be constructed.
cv2.merge takes single-channel images and combines them into a multi-channel image. You've run the Sobel edge detector on each channel on its own, and you are then combining the results into a final output image. The combined image may not make sense visually at first, but what you would be displaying are the edge-detection results of all three planes combined into a single image.
Ideally, hues of red will tell you the strength of the edge detection in the red channel, hues of green the strength of the detection in the green channel, and blue hues the strength of the detection in the blue channel.
Sometimes this is a good debugging tool so that you can semantically see all of the edge information for each channel in a single image. However, this will most likely be very hard to interpret for very highly complicated images with lots of texture and activity.
What is more usually done is either to use a colour edge-detection algorithm directly, or to convert the image to grayscale and run the detection on that image instead.
As an example of the former, one can decompose the RGB image into HSV and use the colour information in this space to do a better edge detection. See this answer by Micka: OpenCV Edge/Border detection based on color.
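For concreteness, a short sketch of the per-channel route and the grayscale route described above; the particular Sobel parameters and the input file name are my own choices, not something implied by the question:

import cv2
import numpy as np

img = cv2.imread('input.jpg')                      # any BGR image; the file name is a placeholder

# Route 1: per-channel Sobel, then merge the three edge maps back into one image.
b, g, r = cv2.split(img)
edges = [cv2.convertScaleAbs(cv2.Sobel(c, cv2.CV_16S, 1, 0, ksize=3)) for c in (b, g, r)]
edge_bgr = cv2.merge(edges)                        # hues indicate which channel responded

# Route 2: convert to grayscale first and run a single Sobel pass.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edge_gray = cv2.convertScaleAbs(cv2.Sobel(gray, cv2.CV_16S, 1, 0, ksize=3))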
This is my understanding. In OpenCV the function split() takes the packed image input (a multi-channel array) and splits it into several separate single-channel arrays.
Within an image, each pixel occupies a position in the array, and each pixel carries its own small array of values (r, g and b), hence the term multi-channel. This setup allows any type of image, such as BGR, RGB, or HSV, to be split using the same function.
As examples (pretend these are separate cases, so no variables are being overwritten):
b,g,r = cv2.split(bgrImage)
r,g,b = cv2.split(rgbImage)
h,s,v = cv2.split(hsvImage)
Take the b, g, r arrays for example. Each is a single-channel array containing a portion of the split RGB image.
This means the image is being split out into three separate arrays:
rgbImage[0] = [234,28,19]
r[0] = 234
g[0] = 28
b[0] = 19
rgbImage[41] = [119,240,45]
r[41] = 119
g[41] = 240
b[41] = 45
Merge does the reverse by taking several single channel arrays and merging them together:
newRGBImage = cv2.merge((r,g,b))
The order in which the separated channels are passed in becomes important with this function.
Pseudo code:
cv2.merge((r,g,b)) != cv2.merge((b,g,r))
As an aside: cv2.split() is an expensive operation, and using NumPy indexing is much more efficient.
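A quick illustration of the NumPy-indexing alternative (a sketch only; the file name is a placeholder, and the channel order for an image loaded with OpenCV is B, G, R):

import cv2

img = cv2.imread('image.jpg')   # BGR image; the file name is a placeholder
b = img[:, :, 0]                # blue channel (a view into img, no copy made)
g = img[:, :, 1]                # green channel
r = img[:, :, 2]                # red channel

Unlike cv2.split(), these slices are views rather than copies, which is part of why they are cheaper.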
For more information, check out the OpenCV Python tutorials.
I want to normalize a custom dataset of images. For that I need to compute the mean and standard deviation by iterating over the dataset. How can I compute these statistics over my entire dataset before creating the dataset object?
Well, let's take this image as an example:
The first thing you need to do is decide which library you want to use: Pillow or OpenCV. In this example I'll use Pillow:
from PIL import Image
import numpy as np
img = Image.open("test.jpg")
pix = np.asarray(img.convert("RGB")) # Open the image as RGB
Rchan = pix[:,:,0] # Red color channel
Gchan = pix[:,:,1] # Green color channel
Bchan = pix[:,:,2] # Blue color channel
Rchan_mean = Rchan.mean()
Gchan_mean = Gchan.mean()
Bchan_mean = Bchan.mean()
Rchan_var = Rchan.var()
Gchan_var = Gchan.var()
Bchan_var = Bchan.var()
And the results are:
Red Channel Mean: 134.80585625
Red Channel Variance: 3211.35843945
Green Channel Mean: 81.0884125
Green Channel Variance: 1672.63200823
Blue Channel Mean: 68.1831375
Blue Channel Variance: 1166.20433566
Hope this helps.
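If the goal is statistics over a whole dataset rather than a single image, the same idea can be accumulated across files. A rough sketch, assuming the images sit in a folder matched by a placeholder glob pattern and are all opened as RGB with Pillow, as above:

import glob
import numpy as np
from PIL import Image

# Accumulate per-channel sums over every image, then derive mean and std.
total = np.zeros(3)
total_sq = np.zeros(3)
pixel_count = 0

for path in glob.glob('dataset/*.jpg'):             # placeholder pattern
    pix = np.asarray(Image.open(path).convert('RGB'), dtype=np.float64)
    total += pix.sum(axis=(0, 1))
    total_sq += (pix ** 2).sum(axis=(0, 1))
    pixel_count += pix.shape[0] * pix.shape[1]

mean = total / pixel_count                          # per-channel mean
std = np.sqrt(total_sq / pixel_count - mean ** 2)   # per-channel standard deviation
print('mean:', mean, 'std:', std)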
What normalization tries to do is maintain the overall information in your dataset even when there are differences in the values. In the case of images, it tries to factor out issues like brightness and contrast that in certain cases do not contribute to the general information the image carries. There are several ways to do this, each with pros and cons, depending on the image set you have and the processing effort you want to spend on them. Just to name a few:
Linear histogram stretching: you apply a linear map to the current range of values in your image and stretch it to cover 0 to 255 in RGB (see the sketch after this list).
Nonlinear histogram stretching: you use a nonlinear function to map the input pixels to a new image. Commonly used functions are logarithms and exponentials. My favourite is the cumulative probability function of the original histogram; it works pretty well.
Adaptive histogram equalization: you apply a linear histogram stretch in certain regions of your image, to avoid doing an identity mapping where your original image already spans the full range of values.
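A minimal sketch of the linear option, applied per channel to an 8-bit RGB array (the function name and the assumption of a uint8 input are mine):

import numpy as np

def stretch_linear(img):
    # Linearly map each channel's [min, max] range onto [0, 255].
    out = np.empty_like(img, dtype=np.uint8)
    img = img.astype(np.float64)
    for c in range(img.shape[2]):
        lo, hi = img[:, :, c].min(), img[:, :, c].max()
        scale = 255.0 / (hi - lo) if hi > lo else 0.0
        out[:, :, c] = ((img[:, :, c] - lo) * scale).astype(np.uint8)
    return out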
I am quite new to Python programming and I need your help. I always research my problem before posting.
I have a SAR dual-polarization image (2^16 gray-level values) in TIFF format. The TIFF image contains two bands: the first band (HH_band) is the horizontal polarization channel and the second one (HV_band) is the vertical polarization channel. I want to create an RGB composite image. For this to happen, I need to layer-stack the two channels as follows:
get the first band (HH_band)
get the second band (HV_band)
get the ratio (HH_band/HV_band)
I know that many people have posted about something similar to this (RGB composite images of natural colors). I tried to use cv2.merge and cv2.split from the OpenCV library, but it didn't work. I thought it would be relatively easy to create a SAR RGB image in Python (as I have seen a few posts about creating RGB images from LANDSAT), but I got stuck in my case.
I would much appreciate any help.
Here is a possible way to accomplish the band composition programmatically:
import numpy as np
from skimage import io   # assuming scikit-image for io.imread; the import was missing from the original snippet

tif = io.imread('dual_polarization_image.tif')
band = {'HH': 0, 'HV': 1}
r = tif[:, :, band['HH']]    # red   = HH band
g = tif[:, :, band['HV']]    # green = HV band
hh = r.astype(np.float64)
hv = g.astype(np.float64)
b = np.divide(hh, hv, out=np.zeros_like(hh), where=hv!=0)   # blue = HH/HV ratio, zero where HV is zero
rgb = np.dstack((r, g, b.astype(np.uint16)))
Remarks:
It would be possible to deal with different arrangements of the bands in the TIFF image by simply redefining the values of the dictionary band.
Prior to calculating the band ratio, it is necessary to convert the data to np.float64.
I have taken advantage of the where option for universal functions to avoid zero division warnings.
In order for the composition to be possible, the band ratio (blue channel) has to be converted back to the same type (i.e. np.uint16) as the original bands (red and green channels).
It's difficult to test without sample images, but you should be able to do this simply at the command line with ImageMagick, which is included in most Linux distributions and is available for OSX and Windows.
The command will look like:
convert HH.tif HV.tif \( -clone 0 -clone 1 -compose divide -composite \) \
-combine -auto-level result.png
I have a map with a scale like this one: (the numbers are just an example)
which describes a single variable on a map. However, I don't have access to the original data and know pretty close to nothing about image processing. What I have done is use PIL to get the pixel coordinates and RGB values of each point on the map, simply using pix = im.load() and saving pix[x,y] for each x, y. Now I would like to guess the value of each point using the gradient above.
Is there a standard formula for such a gradient? Does it look familiar to the trained eye? I have visited the Digital Library of Mathematical Functions for some examples ... but I'm not sure if it's using the hue, the RGB height function or something else (to make things easier, I'm also colorblind to some greens/browns/reds) :)
Any tips on how to proceed, libraries, links or ideas are appreciated. Thank you!
edit:
Following the replies and martineau's suggestion, I've tried to catch the colors at the top and bottom:
import colorsys

def rgb2hls(colortup):
    '''converts 255-based RGB to 360-based HLS
    `input`: (222,98,32) tuple'''
    dec_rgb = [x/255.0 for x in colortup]  # use decimal 0.0 - 1.0 notation for RGB
    hsl_col = colorsys.rgb_to_hls(dec_rgb[0], dec_rgb[1], dec_rgb[2])
    # PIL uses hsl(360,x%,y%) notation and throws errors on float, so I use int
    return (int(hsl_col[0]*360), int(hsl_col[1]*100), int(hsl_col[2]*100))
def pil_hsl_string(hsltup):
    '''returns a string PIL can use as an HSL color
    from a tuple (x,y,z) -> "hsl(x,y%,z%)"'''
    return 'hsl(%s,%s%%,%s%%)' % (hsltup[0], hsltup[1], hsltup[2])
BottomRed = (222,98,32) # taken with gimp
TopBlue = (65, 24, 213)
hue_red = pil_hsl_string(rgb2hls(BottomRed))
hue_blue = pil_hsl_string(rgb2hls(TopBlue))
However, they come out pretty different ... which makes me worry about using the rgb_to_hls function to extract the values. Or am I doing something very wrong? Here's what the colors convert to with the code:
Interesting question..
If you do a clockwise walk in HSL colour-space from 250,85%,85% --> 21,85%,85%, you get a gradient very close to the one you've shown. The obvious difference is that your image exhibits a fairly narrow band of greenish values.
So, if you have the four magic numbers, then you can interpolate to any point within the map.
These are of course the first and last colour, and the first and last scale value.
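A rough sketch of that interpolation, assuming a straight linear hue gradient between the two endpoint colours; the endpoint hues and the scale range below are placeholders, not values read off the real colour bar:

import colorsys

HUE_TOP, HUE_BOTTOM = 250.0, 21.0      # endpoint hues in degrees (blue at the top, red-orange at the bottom)
VAL_TOP, VAL_BOTTOM = 0.0, 100.0       # the scale values those endpoints represent (placeholders)

def pixel_value(r, g, b):
    # Map an (r, g, b) pixel in 0-255 to a scale value by linear hue interpolation.
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    frac = (HUE_TOP - h * 360.0) / (HUE_TOP - HUE_BOTTOM)   # 0 at the top of the bar, 1 at the bottom
    frac = min(max(frac, 0.0), 1.0)                         # clamp pixels slightly outside the range
    return VAL_TOP + frac * (VAL_BOTTOM - VAL_TOP)

print(pixel_value(65, 24, 213))   # a bluish pixel maps near the top of the scale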
Here's the image I got with a straight linear gradient on the H channel (made with GIMP).
EDIT: I've since whipped up a program to grab the pixel values for each row and graph the results. You can see that, indeed, the hue isn't linear; you can also see the S and V channels taking a definite dip at around 115 pixels from the top of the image. This indeed corresponds with the green band.
Given the shape of the curves, I'm inclined to think that perhaps they are intended to model something. But don't have the experience in related fields to recognise the shape of the curves.
Below, I've added the graphs for the change in both the HSV and RGB models.
The left of the graph represents the top of the bar.
The X-axis labels represent pixels
Quite interesting, me thinks. Bookmarked.
The scale in the image looks like an HSV gradient to me, something like what is mentioned in this question. If so, you could use the colorsys.rgb_to_hls() or colorsys.rgb_to_hsv() functions to obtain a hue color value between 0 and 1 from the r,g,b values in a pixel. That can then be mapped accordingly.
However, short of doing OCR, I have no idea how to determine the range of values being represented unless it's some consistent range you can just hardcode.
I would recommend defining an area where you want to compare the colour and taking an FFT of that region. Each colour is defined by a frequency. You do the same on the contour scale, then compare and narrow in on a value.
I have found a link to understand it better:
http://www.imagemagick.org/Usage/fourier/
You can get something like that by varying the hue with a fixed saturation and luminance.
http://en.wikipedia.org/wiki/HSL_and_HSV
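For illustration, a small sketch that generates such a bar by sweeping the hue at a fixed saturation and lightness; the endpoint hues, the fixed S/L values and the image size are arbitrary choices of mine:

import colorsys
import numpy as np
from PIL import Image

HEIGHT, WIDTH = 256, 40
bar = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)

for y in range(HEIGHT):
    # Sweep the hue from 250 degrees (top) down to 21 degrees (bottom); S and L stay fixed.
    hue = (250.0 - (250.0 - 21.0) * y / (HEIGHT - 1)) / 360.0
    r, g, b = colorsys.hls_to_rgb(hue, 0.5, 0.85)   # note the (h, lightness, saturation) argument order
    bar[y, :] = [int(r * 255), int(g * 255), int(b * 255)]

Image.fromarray(bar).save('gradient_bar.png')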