Proper way to overlay multiband images? - python

I want to overlay two views of the same scene - one is a white-light image (monochrome, used for reference) and the other is an image in a specific band (that has the real data I'm showing).
The white-light image is "reference", the data image is "data". They're ordinary 2D numpy arrays of identical dimensions. I want to show the white reference image using the 'gray' color map, and the data image using the 'hot' color map.
What is the "proper" way to do this?
I started with this:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
hotm = cm.ScalarMappable(cmap='hot')
graym = cm.ScalarMappable(cmap='gray')
ref_rgb = graym.to_rgba(reference) # rgba reference image, 'gray' color map
data_rgb = hotm.to_rgba(data) # rgba data image, 'hot' color map
plt.imshow(ref_rgb + data_rgb)
That didn't work well because in the plt.imshow() call the sum overflowed the range 0..1 (or maybe 0..255; this is confusing) and gave me crazy colors.
Then I replaced the last line with this:
plt.imshow(ref_rgb/2 + data_rgb/2)
That worked, but gives me a very washed-out, low-contrast image.
Finally, I tried this:
plt.imshow(np.maximum(ref_rgb, data_rgb))
That seems to give the best result, but I'm worried that much of my "data" is lost by having lower r, g, or b values than the reference image.
What is the "proper", or "usual" way to do this?

I'm not exactly sure what you're trying to achieve, but hopefully this will give you some ideas. :)
I've never used matplotlib, but from a quick look at the docs, it looks like matplotlib.cm gives you the option to have the pixel data as integers in the 0..255 range or as floats in the 0.0..1.0 range. The float format is more convenient for arithmetic image processing, so I'll assume that's the case in the rest of this answer.
We can do basic image processing by doing simple arithmetic on the RGB pixel values. Roughly speaking, adding (or subtracting) a constant to the RGB value of all your pixels changes the image brightness, multiplying your pixels by a constant changes the image contrast, and raising your pixels to a constant (positive) power changes the image gamma. Of course, you do need to make sure that these operations don't cause the colour values to go out of range. That's not a problem for gamma adjustment, or contrast adjustment (assuming the constant is in the 0.0..1.0 range), but it can be a problem for brightness modification. More subtle brightness & contrast modification can be achieved by suitable combinations of addition and multiplication.
When doing this sort of thing it's often a Good Idea to normalize the pixel values in your image data to the 0.0..1.0 range, either before &/or after you've done your main processing.
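As a rough sketch of these arithmetic operations (the array name rgb is just a placeholder, assumed to be a float image in the 0.0..1.0 range such as the output of to_rgba()):

import numpy as np

rgb = np.random.rand(64, 64, 3)  # placeholder for a float RGB image in 0.0..1.0

# Rescale pixel values to cover the full 0.0..1.0 range (normalization).
rgb = (rgb - rgb.min()) / (rgb.max() - rgb.min())

brighter = np.clip(rgb + 0.2, 0.0, 1.0)  # addition: brightness (clip to stay in range)
lower_contrast = rgb * 0.5               # multiplication by a 0..1 constant: contrast
gamma_adjusted = rgb ** 0.8              # positive power: gamma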
Your code above is essentially treating the grey reference data as a kind of mask and using its values, instead of using a constant, to operate on the colour data pixel by pixel. As you've seen, taking the mean of ref_rgb & data_rgb results in a washed-out image because you are reducing the contrast. But see what happens when you multiply ref_rgb & data_rgb: contrast will generally be increased because dark areas in ref_rgb will darken the corresponding pixels in data_rgb but bright areas in ref_rgb will leave the corresponding pixels in data_rgb virtually untouched.
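For example, a multiplicative overlay could look like this (a sketch only; reference and data stand in for the two 2D arrays from the question):

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm

reference = np.random.rand(64, 64)  # placeholder for the white-light image
data = np.random.rand(64, 64)       # placeholder for the narrow-band data image

ref_rgb = cm.ScalarMappable(cmap='gray').to_rgba(reference)
data_rgb = cm.ScalarMappable(cmap='hot').to_rgba(data)

# Element-wise product: dark reference pixels darken the data image,
# while bright reference pixels leave it almost untouched.
plt.imshow(ref_rgb * data_rgb)
plt.show()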
ImageMagick has some nice examples of arithmetic image processing.
Another thing to try is to convert your data_rgb to HSV format, and replace the V (value) data with the greyscale data from ref_rgb. And you can do similar tricks with the S (saturation) data, although the effect is generally a bit subtler.
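A sketch of that HSV idea, using matplotlib's rgb_to_hsv/hsv_to_rgb (again with placeholder arrays, and assuming both images are scaled to 0..1):

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

reference = np.random.rand(64, 64)  # placeholder grayscale reference, already in 0..1
data = np.random.rand(64, 64)       # placeholder data image

data_rgb = cm.ScalarMappable(cmap='hot').to_rgba(data)[..., :3]  # drop the alpha channel
data_hsv = rgb_to_hsv(data_rgb)
data_hsv[..., 2] = reference        # replace V (value) with the grayscale reference
plt.imshow(hsv_to_rgb(data_hsv))
plt.show()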

Related

Python - fill colors of an image with closest non zero color

I have an image that is created by ray casting a bunch of vectors onto a mesh with a UV map (in Blender). There are not enough vectors to completely cover the image, so I'd like a way to fill the rest of the image with the closest non-zero color. I've been looking into some techniques with convolutions in numpy etc., but can't really find what I need; attached is an example of an image I'm working with - a PNG with RGBA.
[Edited to add]
Possibly a better description of what I am trying to do:
for each pixel that doesn't have a cast color (i.e. black) I need to find the closest pixel that does have a cast color, based on spatial distance, not on how similar the RGB values are.
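One common way to do this kind of nearest-pixel fill (a sketch, not from this thread; the array names are placeholders) is scipy's Euclidean distance transform, which can return the indices of the nearest non-empty pixel for every empty pixel:

import numpy as np
from scipy import ndimage

img = np.zeros((64, 64, 4), dtype=np.uint8)   # placeholder RGBA image
img[20:25, 30:35] = [255, 0, 0, 255]          # a small patch of cast color

# Pixels with no cast color at all (RGB channels all zero) are the ones to fill.
empty = (img[..., :3] == 0).all(axis=-1)

# For every empty pixel, get the indices of the nearest non-empty pixel...
_, (rows, cols) = ndimage.distance_transform_edt(empty, return_indices=True)
filled = img[rows, cols]                      # ...and copy its color.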

Image Operations with Python

I hope you're all doing well!
I'm new to image manipulation, so I want to apologize right here for my simple question. I'm currently working on a problem that involves classifying an object called a jet into two known categories. This object is made of sub-objects. My idea is to use these sub-objects to transform each jet into a pixel image, and then apply convolutional neural networks to find the patterns.
Here is an example of the pixel images:
jet's constituents pixel distribution
To standardize all the images, I want to find the two most intense pixels and make sure the axis connecting them is in the vertical direction, as well as make sure that the most intense pixel is at the top. It also would be good to impose that one of the sides (left or right) of the image contains the majority of the intensity and to normalize the intensity of the whole image to 1.
My question is: as I'm new to this kind of processing, I don't know if there is a library in Python that can handle these operations. Are you aware of any?
PS: the picture was taken from here: https://arxiv.org/abs/1407.5675
You can look into OpenCV library for Python:
https://docs.opencv.org/master/d6/d00/tutorial_py_root.html.
It supports a lot of image processing functions.
In your case, it would probably be easier to convert the image into a color space in which one axis stands for intensity (e.g. HSI, HSL, HSV) and then find the indices of the maximum values along that axis (this should return the pixels with the highest intensity in the image).
Generally, in Python, we use the PIL library for basic manipulations of images and OpenCV for advanced ones.
But, if I understand your task correctly, you can just think of an image as a multidimensional array and use numpy to manipulate it.
For example, if your image is stored in a variable of type numpy.array called img, you can find the maximum value along the desired axis just by writing:
img.max(axis=0)
To normalize the image you can use:
img /= img.max()
To find which part of the image is brighter, you can split the img array into the desired parts and compare their means:
left = img[:, :int(img.shape[1]/2), :]
right = img[:, int(img.shape[1]/2):, :]
left_mean = left.mean()
right_mean = right.mean()
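Building on that, here is a rough sketch of the standardisation described in the question (the array names, the use of scipy.ndimage.rotate, and the sign/flip conventions are my own assumptions, not something from this answer):

import numpy as np
from scipy import ndimage

img = np.random.rand(32, 32)  # placeholder 2D intensity image of one jet

# Locate the two most intense pixels (brightest last).
second, brightest = [np.unravel_index(i, img.shape)
                     for i in np.argsort(img, axis=None)[-2:]]

# Rotate so that the line joining them becomes vertical
# (the sign of the angle may need flipping for your orientation convention).
dr, dc = brightest[0] - second[0], brightest[1] - second[1]
img = ndimage.rotate(img, np.degrees(np.arctan2(dc, dr)), reshape=False, order=1)

# Put the brightest pixel in the top half and the heavier side on the left.
if np.unravel_index(img.argmax(), img.shape)[0] > img.shape[0] // 2:
    img = img[::-1, :]
if img[:, img.shape[1] // 2:].sum() > img[:, :img.shape[1] // 2].sum():
    img = img[:, ::-1]

img /= img.sum()  # normalise the total intensity to 1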

Remove outliers in an image after applying treshold

Here's the deal. I want to create a mask that visualizes all the changes between two images (GeoTiffs which are converted to 2D numpy arrays).
For that I simply subtract the pixel values and normalize the absolute value of the subtraction:
Since the result will be covered in noise, I use a threshold and remove all pixels with a value below a certain limit.
def treshold(array, thresholdLimit):
    print("Treshold...")
    result = (array > thresholdLimit) * array
    return result
This works without a problem. Now comes the issue: when applying the threshold, outliers remain, which is not intended:
What is a good way to remove those outliers?
Sometimes the outliers are small chunks of pixels, like 5-6 pixels together, how could those be removed?
Additionally, the images I use are about 10000x10000 pixels.
I would appreciate all advice!
EDIT:
Both images are Landsat satellite images, covering the exact same area.
The difference here is that one image shows cloud coverage and the other one is free of clouds.
The bright snakey line in the top right is part of a river that has been covered by a cloud. Since water bodies like the ocean or rivers are depicted black in those images, the difference between the bright cloud and the dark river results in the river showing a high degree of change.
I hope the following images make this clear:
Source tiffs:
Subtraction result:
I also tried to smooth the result of the thresholding by using a median filter, but the result was still covered in outliers:
from scipy.ndimage import median_filter
def filter(array, limit):
    print("Median-Filter...")
    filteredImg = np.array(median_filter(array, size=limit)).astype(np.float32)
    return filteredImg
I would suggest the following:
Before proceeding, please double-check that the two images are 100% registered. To check that, you should overlay them using e.g. different color channels. Even minimal registration errors can render your task impossible.
Smooth both input images slightly (before the subtraction). For that I would suggest you use standard implementations (e.g. a Gaussian filter). Play around with the filter parameters to find an acceptable compromise between smoothness (or reduction of graininess of source image 1) and resolution.
Then try to match the image statistics by applying histogram normalization, using the histogram of image 2 as a target for the histogram of image 1. For this you can also use e.g. the OpenCV implementation.
Subtract the images.
If you then still observe obvious noise, look at the histogram of the subtraction result and see if you can relate the noise to intensity outliers. If you can clearly separate signal and noise based on intensity, apply again a thresholding (informed by your histogram). Alternatively (or additionally), if the noise is structurally different from your signal (e.g. clustered), you could look into morphological operations to remove it.
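As a sketch of that last clean-up step (one possible approach with scipy.ndimage; the array name diff and the minimum component size are placeholders): morphological opening removes isolated pixels, and labelling connected components lets you drop the small 5-6 pixel chunks mentioned in the question.

import numpy as np
from scipy import ndimage

# Placeholder for the thresholded difference image (non-zero where change was detected).
diff = np.random.rand(200, 200) * (np.random.rand(200, 200) > 0.98)
mask = diff > 0

# Morphological opening removes isolated pixels and very thin structures.
opened = ndimage.binary_opening(mask, structure=np.ones((3, 3)))

# Alternatively, label connected components and drop the small ones.
labels, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
keep = np.isin(labels, np.arange(1, n + 1)[sizes >= 10])

cleaned = diff * keep  # keep only the changes that survive the size filter

For the histogram-matching step, skimage.exposure.match_histograms could be another option besides OpenCV.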

Apply "reverse" colormap/lookup-table to generate grayscale image from RGB

We have a large dataset of thermal/infrared images. Due to some error, we received the data not as single-layer TIFFs or something similar; the camera software had already applied a colormap, and I'm now looking at RGB JPG files.
I was able to "reconstruct" the used colormap from an image found online, and now I'm looking for an efficient way to revert the RGB images to grayscale so I can work with them. Small problem: not all of the images' RGB triplets may be represented in my reconstructed colormap, so right now my Python script does something like this:
import cv2
import numpy as np

I = cv2.imread('image.jpg')
Iout = I[:, :, 0] * 0
for i in range(0, I.shape[0]):
    for j in range(0, I.shape[1]):
        # calculate square difference between value and colormap and find idx,
        # e.g. (with cmap being the reconstructed (N, 3) colormap array):
        idx = np.argmin(((cmap.astype(int) - I[i, j]) ** 2).sum(axis=1))
        Iout[i, j] = idx
This works, but is painfully slow because of the for-loops.
Is there any way to use a lookup table with the RGB values (3D or something) which can be applied to the image as a whole? For values not in the colormap it should select the "closest" one, like I did with the squared differences above.
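One way to vectorise this kind of nearest-colour lookup (a sketch, not from this thread; cmap stands for the reconstructed colormap as an (N, 3) array in the same channel order as the image) is to build a KD-tree over the colormap entries and query it with all pixels at once:

import numpy as np
from scipy.spatial import cKDTree

cmap = np.stack([np.linspace(0, 255, 256)] * 3, axis=1)       # placeholder colormap
I = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)  # placeholder for cv2.imread('image.jpg')

tree = cKDTree(cmap)                   # index the colormap colours once
_, idx = tree.query(I.reshape(-1, 3))  # nearest colormap entry for every pixel
Iout = idx.reshape(I.shape[:2])        # grayscale index image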

python imshow pixel size varies within plot

Dear stackoverflow community!
I need to plot a 2D-map in python using imshow. The command used is
plt.imshow(ux_map, interpolation='none', origin='lower', extent=[lonhg_all.min(), lonhg_all.max(), lathg_all.min(), lathg_all.max()])
The image is then saved as follows
plt.savefig('rdv_cr%s_cmlon%s_ux.png' % (2097, cmlon_ref))
and looks like
The problem is that when zooming into the plot one can notice that the pixels have different shapes (e.g. different widths). This is illustrated in the zoomed part below (taken from the top region of the first image):
Is there any reason for this behaviour? I input a rectangular grid for my data, but the problem does not have to do with the data itself, I suppose; instead it is probably something related to rendering. I'd expect all pixels to be of equal shape, but as can be seen they differ in both width and height. By the way, this also occurs in the interactive matplotlib plot. However, when zooming in there, they suddenly all become equally shaped.
I'm not sure as to whether
https://github.com/matplotlib/matplotlib/issues/3057/ and the link therein might be related, but I can try playing around with dpi values. In any case, if anybody knows why this happens, could that person provide some background on why the computer cannot display the plot as intended using the commands from above?
Thanks for your responses!
This is related to the way the image is mapped to the screen. To determine the color of a pixel on the screen, the corresponding color is sampled from the image. If the screen area and the image size do not match, either upsampling (image too small) or downsampling (image too large) occurs.
You observed a case of upsampling. For example, consider drawing a 4x4 image on a region of 6x6 pixels on the screen. Sometimes two screen pixels fall into an image pixel, and sometimes only one. Here, we observe an extreme case of differently sized pixels.
When you zoom in in the interactive view, this effect seems to disappear. That is because you suddenly map the image to a much larger number of screen pixels. If one image pixel is enlarged to, say, 10 screen pixels and another to 11, you hardly notice the difference. The effect is most apparent when the image size nearly matches the screen resolution.
A solution to work around this effect is to use interpolation, which may lead to an undesirable blurred look. To reduce the blur you can...
play with different interpolation functions. Try for example 'kaiser'
or up-scale the image by a constant factor using nearest neighbor interpolation (e.g. replace each pixel in the image by a block of pixels with the same color). Then any blurring will only affect the edges of the block.
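A sketch of that second option (variable names are placeholders; the real call would keep the original extent/origin arguments):

import numpy as np
import matplotlib.pyplot as plt

ux_map = np.random.rand(50, 80)              # placeholder for the 2D map
ux_big = np.kron(ux_map, np.ones((10, 10)))  # replace each pixel by a 10x10 block

# Any interpolation blur now only affects the edges of each 10x10 block.
plt.imshow(ux_big, interpolation='none', origin='lower')
plt.savefig('ux_upscaled.png', dpi=200)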
