Matplotlib overlaying multiple images with different colors - python

I have a list (image_list) with images in the form of numpy arrays. I want to overlay all of these images into one single image, and I want each image to have its own color.
Attached is a screenshot showing some of the individual images; I want each of the "curves" (the gray stuff) to be a different color, not the background itself. That way, when I overlay all the images, the colors will help me identify each specific image. [individual images]
Attached is a screenshot of what I tried. To clarify, by "overlay" I mean I want all the images in ONE single image, not multiple plots. I don't think changing cmap will work, because that also changes the background color; I only want the color of the actual "data" in each image to change.
Any help would be appreciated. [what I tried]

If your images are just binary data (zeros and ones), then you could offset the data in each subsequent image by one count. Then you plot them all in the same figure and matplotlib will take care of the coloring. You could also merge them into a single array first, with the same result:
1st image: zeros and ones
2nd image: zeros and 2s
3rd image: zeros and 3s
...
In code:
import numpy as np
import matplotlib.pyplot as plt

new_image_list = []
for idx, img in enumerate(img_list):
    # binarise, then shift each image to its own integer label (1, 2, 3, ...)
    img = np.where(img > threshold, 1, 0)
    img *= (idx + 1)
    new_image_list.append(img)

for img in new_image_list:
    plt.imshow(img)
plt.show()
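The merge-into-one-array variant mentioned above could look roughly like this (a sketch, assuming img_list and threshold exist as in the snippet above):
import numpy as np
import matplotlib.pyplot as plt

# start from an all-zero canvas and write each image's label value into it
combined = np.zeros(img_list[0].shape, dtype=int)
for idx, img in enumerate(img_list):
    combined[img > threshold] = idx + 1  # pixels of image idx get the value idx+1

# one imshow call; matplotlib maps each integer level to a different color
plt.imshow(combined)
plt.colorbar()
plt.show()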

Related

OpenCV vs Labview Images Greyscale (U16) - Difference in values

I am trying to understand why LabView shows one set of values for an image, while OpenCV shows another set of values.
I have two U16 grayscale PNG images that I am trying to combine vertically to create one continuous image. The majority of the pixels are near zero or low-valued, with the ROI having pixel values in the middle of the U16 range. In Python, this is achieved by reading the files using OpenCV, combining the images using numpy, and then displaying the result using Matplotlib:
image_one = cv2.imread(r"..\filename_one.png", cv2.IMREAD_UNCHANGED)
image_two = cv2.imread(r"..\filename_two.png", cv2.IMREAD_UNCHANGED)
combined_image = numpy.concatenate((image_one, image_two), axis=0)
plt.figure(figsize=(15, 15), dpi=18)
plt.imshow(combined_image, cmap="gray", vmin=0, vmax=65535)  # sliced to show the ROI
Dual Exposure Image
As seen above, this shows the image as having two different dynamic ranges, resulting in different exposures. To normalize the images, we can try rescaling each one so that both use the same dynamic range.
rescaled_one = ((image_one - image_one.min()) / (image_one.max() - image_one.min())) * 65535
rescaled_two = ((image_two - image_two.min()) / (image_two.max() - image_two.min())) * 65535
combined_rescaled = numpy.concatenate((rescaled_one, rescaled_two), axis=0)
plt.figure(figsize=(15, 15), dpi=18)
plt.imshow(combined_rescaled, cmap="gray", vmin=0, vmax=65535)  # sliced to show the ROI
Rescaled Image - Dual Exposure
This still shows the same issue with the images.
In LabVIEW, to combine images vertically, I adapted a VI that was published to stitch images together horizontally:
https://forums.ni.com/t5/Example-Code/Stitch-Images-Together-in-LabVIEW-with-Vision-Development-Module/ta-p/3531092?profile.language=en
The Final VI Block Diagram looks as follows:
VI Block Diagram - Vertically Combine Images using IMAQ
The Output you see on the Front Panel:
Singular continuous Image - Front Panel
The dual exposure issue appears to have disappeared and the image now appears as a single continuous image. This didn't make any sense to me, so I plotted the results using Plotly as follows:
import plotly as plty
import plotly.subplots  # makes plty.subplots available
import plotly.graph_objects as go

fig = plty.subplots.make_subplots(1, 1, horizontal_spacing=0.05)
fig.append_trace(go.Histogram(x=image_one.ravel(), name="cv2_top", showlegend=True, nbinsx=13107), 1, 1)
fig.append_trace(go.Histogram(x=image_two.ravel(), name="cv2_bottom", showlegend=True, nbinsx=13107), 1, 1)
# lv_joined is the single combined image exported from LabVIEW
fig.append_trace(go.Histogram(x=lv_joined[:1024, :].ravel(), name="LabView_joined_top", showlegend=True, nbinsx=13107), 1, 1)    # first image
fig.append_trace(go.Histogram(x=lv_joined[1024:, :].ravel(), name="LabView_joined_bottom", showlegend=True, nbinsx=13107), 1, 1)  # second image
fig.update_layout(height=800)
fig.show()
Histogram - Python vs Labview respective halves - Focus on low pixels
Here it shows that the second image's pixel values have been "compressed" to match the same distribution as the first image. I don't understand why this is the case. Have I configured something wrong in LabVIEW, or have I not considered something when reading in a file with OpenCV?
Please refer to the answer posted here: https://forums.ni.com/t5/LabVIEW/OpenCV-vs-Labview-Images-Greyscale-U16-Difference-in-values/td-p/4172150/highlight/false

matplotlib imshow gray colormap additional preprocess

I have a bunch of images. By chance, I figured out that the best preprocessing for my images is using matplotlib's imshow with cmap='gray'. This is my RGB image (I can't publish the original images; this is a sample that I created to make my point, so the original images are not noiseless and perfect like this):
When I use plt.imshow(img, cmap='gray') the image will be:
I wanted to implement this process in OpenCV. I tried to use OpenCV colormaps, but there wasn't a gray one there. I used these solutions, but the result is like the first image, not the second one. (result here)
So I was wondering besides changing colormaps, what preprocessing does matplotlib apply on images when we call imshow?
P.S: You might suggest binarization, I've tested both techniques but on my data binarization will ruin some of the samples which this method (matplotlib) won't.
cv::normalize with NORM_MINMAX should help you. It can map intensity values so the darkest becomes black and the lightest becomes white, regardless of what the absolute values were.
This section of the OpenCV docs contains example code (it's a permalink):
or so that min_I dst(I) = alpha, max_I dst(I) = beta when normType=NORM_MINMAX (for dense arrays only)
That means, for NORM_MINMAX, alpha=0, beta=255. These two params have different meanings for different normTypes; for NORM_MINMAX it seems that the code automatically swaps them, so the lower value of either is used as the lower bound, etc.
Further, the range for uint8 data is 0..255. Passing 1 as beta only makes sense for float data.
example:
import numpy as np
import cv2 as cv
im = cv.imread("m78xj.jpg")
# stretch intensities so the darkest pixel maps to 0 and the brightest to 255
normalized = cv.normalize(im, dst=None, alpha=0, beta=255, norm_type=cv.NORM_MINMAX)
cv.imshow("normalized", normalized)
cv.waitKey(-1)
cv.destroyAllWindows()
Apply a median blur to remove noisy pixels (which go beyond the average gray of the text):
blurred = cv.medianBlur(im, ksize=5)
# ...normalize...
Or do the scaling manually: apply the median blur, find the maximum value in that blurred version, then use it to scale the original image.
output = im.astype(np.uint16) * 255 / blurred.max()
output = np.clip(output, 0, 255).astype(np.uint8)
# ...
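Put together, a self-contained version of that manual approach might look like this (a sketch using the same file name as above):
import numpy as np
import cv2 as cv

im = cv.imread("m78xj.jpg")
# the median blur suppresses isolated bright pixels that would otherwise dominate the maximum
blurred = cv.medianBlur(im, ksize=5)
# scale the original image so that the blurred maximum maps to 255
output = im.astype(np.uint16) * 255 / blurred.max()
output = np.clip(output, 0, 255).astype(np.uint8)
cv.imshow("scaled", output)
cv.waitKey(-1)
cv.destroyAllWindows()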

Comparing and plotting regions of the same color over a dataset of a few hundred images

A chem student asked me for help with plotting image segmentation:
A stationary camera takes a picture of the experimental setup every second over a period of a few minutes, so the yield is roughly 300 images.
The relevant parts in the setup are two adjacent layers of differently-colored foams observed from the side, a 2-color sandwich shrinking from both sides, basically, except one of the foams evaporates a bit faster.
I'd like to segment each of the images in the way that would let me plot both foam regions' "width" against time.
Here is a "diagram" :)
I want to go from here --> To here
Ideally, given a few hundred of such shots, in which only the widths change, I get an array of scalars back that I can plot. (Going to look like a harmonic series on either side of the x-axis)
I have a bit of Python and MATLAB experience, but have never used OpenCV or the Image Processing Toolbox in MATLAB, and have actually never dealt with any computer vision in general. Could you guys throw out a roadmap of what packages/functions to use or steps one should take, and I'll take it from there?
I'm not sure how to address these things:
- selecting at which point along the length of the foam the algorithm measures the width (i.e. if the foams are a bit uneven), although this can be ignored;
- which library to use to segment regions of the image based on their color (some k-means shenanigans, probably), and how to selectively store the spatial parameters of the resulting segments;
- how to iterate the above over a number of files.
Thank you kindly in advance!
Assume your intensity will be different after converting into grayscale (if not, just convert to another color space like HSV or LAB, then use one of the components):
img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
First, threshold your grayscale input into a few bands:
ret, thresh1 = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
ret, thresh2 = cv2.threshold(img, 27, 255, cv2.THRESH_BINARY_INV)
ret, thresh3 = cv2.threshold(img, 77, 255, cv2.THRESH_TRUNC)
ret, thresh4 = cv2.threshold(img, 97, 255, cv2.THRESH_TOZERO)
ret, thresh5 = cv2.threshold(img, 227, 255, cv2.THRESH_TOZERO_INV)
The threshold values should be tuned on your actual data; here I'm just giving an example.
Clean up the segmented image using a median filter with a radius larger than 9, as I do expect some noise. You can also use an ROI here to help remove part of the noise, but personally I'm lazy, so I just wrote the program to handle all cases and angles:
thresholded_images_aftersmoothing = cv2.medianBlur(thresholded_images, 9)
Each band will correspond to one color (layer). Now you should have N segmented images from one source, where N is the number of layers you wish to track.
Second, use the OpenCV function boundingRect to find the location and width/height of each layer, i.e. run boundingRect on each thresholded_images_aftersmoothing (each sub-segmented image).
C++: Rect boundingRect(InputArray points)
Python: cv2.boundingRect(points) → retval
Last, the rect has x, y, height, and width properties. You can use a simple sort to order the layers from top to bottom based on the rect attribute y. Run through the whole video to obtain the height vs. time graph for each layer (its y gives the layer id).
Rect API, public attributes:
_Tp x
_Tp y       // this tells you the position of the band
_Tp width
_Tp height  // this is what you are looking for
By plotting the corresponding heights (|AB| or |CD|) over time, you can obtain the graph you need.
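A minimal sketch of the boundingRect step, assuming masks is a list with one cleaned-up binary mask per layer (the variable and function names here are illustrative, not from the original answer):
import cv2

def measure_layers(masks):
    rects = []
    for mask in masks:
        points = cv2.findNonZero(mask)      # non-zero pixels of this band
        if points is None:
            rects.append((0, 0, 0, 0))      # the layer has vanished in this frame
            continue
        rects.append(cv2.boundingRect(points))  # (x, y, w, h)
    rects.sort(key=lambda r: r[1])          # sort bands from top to bottom by y
    return [h for (x, y, w, h) in rects]    # one height per layer
Calling this once per frame and appending the returned heights gives the height-vs-time series for each layer.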
The more correct way is to use a Kalman filter to track the position and height over time, as I would expect some bubbles to occur and interfere with the height of the layers.
To be honest, I didn't expect a chem student to be good at this. Haha, good luck.
If anything goes wrong you can find me here, or email me if I'm not watching Stack Overflow.
You can select a region of interest straight down the middle of the foams, a few pixels wide. If you stack these regions from each image, the result will show the shrinkage over time.
If, for example, you use a 3-pixel-wide ROI, the result for 300 images will be a 900-pixel-wide image, where the left is the start of the experiment and the right is the end. The following image can help you understand:
Though I have not fully tested it, this code should work. Note that there must only be images in the folder you reference.
import cv2
import numpy as np
import os
# path to folder that holds the images
path = '.'
# dimensions of roi
x = 0
y = 0
w = 3
h = 100
# store references to all images
all_images = os.listdir(path)
# sort images
all_images.sort()
# create empty result array
result = np.empty([h, 0, 3], dtype=np.uint8)
for image in all_images:
    # load image
    img = cv2.imread(path + '/' + image)
    # get the region of interest
    roi = img[y:y+h, x:x+w]
    # add the roi to previous results
    result = np.hstack((result, roi))
# optional: save result as image
# cv2.imwrite('result.png', result)
# display result - can also plot with matplotlib
cv2.imshow('Result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Update after question edit:
If the foams have different colors, you can easily separate them by color by converting the image to HSV and using inRange (example). This creates a mask (a 2D array with values from 0-255, one for each pixel) that you can use to calculate the average height and extract the parameters and area of the image.
You can find a script to help you find the HSV colors for separation on this GitHub
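A minimal sketch of that idea; the HSV bounds below are illustrative placeholders that you would tune (e.g. with the helper script mentioned above), and the file name is hypothetical:
import cv2
import numpy as np

img = cv2.imread('frame_0001.png')            # hypothetical frame from the series
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower = np.array([100, 50, 50])               # placeholder bounds for one foam color
upper = np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower, upper)         # 255 where the color matches, 0 elsewhere
# average height of the colored foam, in pixels per image column
avg_height = cv2.countNonZero(mask) / mask.shape[1]
print(avg_height)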

How to hide overlapping pixels using Pillow?

I have two images of similar dimensions as such:
Since the outer circle should have close to overlapping pixels, I would like to have a resultant image that has the inner circle from image A and the square from image B. I thought inverting image A and then calling PIL.Image.composite(imageA, imageB, mask) would do something but it just gave me a combination of imageA and imageB.
Is there a way to do what I want using Pillow or perhaps using numpy somehow to make white the pixels that are similar between both images?
I think you are looking for an XOR between the two images.
I'll work up to it slowly in case you don't do many logical expression evaluations. Starting with an OR, you get white pixels out wherever either image A OR image B has white pixels. With an AND, you get white pixels out only where both image A AND image B are white. Finally, with an XOR, you get white pixels out where either image A or image B is white, but exclusively one or the other, not both.
In code, that looks like this:
#!/usr/local/bin/python3
from PIL import Image, ImageChops
# Load up the two images, discarding any alpha channel
im1 = Image.open('im1.png').convert('1')
im2 = Image.open('im2.png').convert('1')
# XOR the images together: white wherever exactly one of the two images is white
result = ImageChops.logical_xor(im1,im2)
# Invert so the pixels that are the same in both images come out white
result = ImageChops.invert(result)
# Save the result
result.save('result.png')
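For comparison, the OR and AND stages described above map onto ImageChops in the same way (a short sketch using the same mode-"1" images):
from PIL import Image, ImageChops

im1 = Image.open('im1.png').convert('1')
im2 = Image.open('im2.png').convert('1')
# White where either image is white
ImageChops.logical_or(im1, im2).save('or_result.png')
# White only where both images are white
ImageChops.logical_and(im1, im2).save('and_result.png')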

Edit image pixel by pixel - Python

I have two images, one overlay and one background.
I want to create a new image by editing the overlay image and manipulating it to show only the pixels where the background image has blue colour.
I don't want to add the background; it is only for selecting the pixels.
The rest should be transparent.
Any hints or ideas, please? PS: I edited the result image with Paint, so it's not perfect.
Image 1 is background image.
Image 2 is overlay image.
Image 3 is the check I want to perform. (to find out which pixels have blue in background and making remaining pixels transparent)
Image 4 is the result image after editing.
I renamed your images according to my way of thinking, so I took this as image.png:
and this as mask.png:
Then I did what I think you want as follows. I wrote it quite verbosely so you can see all the steps along the way:
#!/usr/local/bin/python3
from PIL import Image
import numpy as np
# Open input images
image = Image.open("image.png")
mask = Image.open("mask.png")
# Get dimensions - PIL's size is (width, height)
w, h = image.size
# Resize mask to match image, taking care not to introduce new colours (Image.NEAREST)
mask = mask.resize((w, h), Image.NEAREST)
mask.save('mask_resized.png')
# Convert both images to numpy equivalents
npimage = np.array(image)
npmask = np.array(mask)
# Make image transparent where mask is not blue
# Blue pixels in mask seem to show up as RGB(163 204 255)
npimage[:,:,3] = np.where((npmask[:,:,0]<170) & (npmask[:,:,1]<210) & (npmask[:,:,2]>250),255,0).astype(np.uint8)
# Identify grey pixels in image, i.e. R=G=B, and make transparent also
RequalsG=np.where(npimage[:,:,0]==npimage[:,:,1],1,0)
RequalsB=np.where(npimage[:,:,0]==npimage[:,:,2],1,0)
grey=(RequalsG*RequalsB).astype(np.uint8)
npimage[:,:,3] *= 1-grey
# Convert numpy image to PIL image and save
PILrgba=Image.fromarray(npimage)
PILrgba.save("result.png")
And this is the result:
Notes:
a) Your image already has an (unused) alpha channel present.
b) Any lines starting:
npimage[:,:,3] = ...
are just modifying the 4th channel, i.e. the alpha/transparency channel of the image
