How to plot a limited region of an image? - python

If I have an image of [556, 556] pixels and I would like to plot only a certain range of it, say:
Image Size: [556,556]
Plot -> XPixels [224,300] YPixels [224,300]
This was my attempt, and it sort of does what I need: I have the correct pixels selected, but unfortunately this only labels the given range and doesn't actually plot it. Presumably the next step would be to plot that sub-range of the image, but how would I go about it?
import pydicom as pdi
import matplotlib.pyplot as plt

openDicom = pdi.dcmread(filePath)
plt.imshow(openDicom.pixel_array, cmap=plt.cm.Spectral, origin='lower', interpolation='nearest')
plt.xticks([308, 341])
plt.yticks([234, 271])
Please see my "Amazing Drawing" for further reference. Thanks!

Just pass the sub-area of the array to imshow. Note that the first index is rows (y) and the second is columns (x):
plt.imshow(image[y_start:y_end, x_start:x_end])
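Applied to the question's numbers, a minimal sketch (with a synthetic array standing in for openDicom.pixel_array, since the DICOM file isn't available here); passing extent keeps the tick labels in the original image's coordinates:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for openDicom.pixel_array (556 x 556)
image = np.arange(556 * 556).reshape(556, 556)

# Desired pixel range from the question
x_start, x_end = 224, 300
y_start, y_end = 224, 300

# Rows (y) come first when slicing a 2D array
sub = image[y_start:y_end, x_start:x_end]

# extent maps the sub-image back onto the original pixel coordinates
plt.imshow(sub, cmap=plt.cm.Spectral, origin='lower', interpolation='nearest',
           extent=(x_start, x_end, y_start, y_end))
plt.show()
```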

Related

Python matplotlib plot background

I want to compare a plot from a paper to my simulation results.
It would therefore be convenient to plot my results with the reference picture in the background.
I choose the x-value range to be the same as in the reference picture, and y should also be close to it.
Alternatively, I can set the y range to the same values as the reference picture too.
I tried
import matplotlib.pyplot as plt

x = range(len(P))
plt.plot(x, P)
img = plt.imread("REF.jpg")
plt.imshow(img)
plt.show()
But it seems to choose the scale according to pixels, so my own plot ends up as a miniature on top of it.
If I use extent and one dimension is much larger than the other, nothing is visible in the picture because of the bad aspect ratio.
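For reference, a sketch of the usual approach (with stand-ins for P and REF.jpg, which aren't available here): pass extent to imshow so the picture is mapped onto the data's coordinate ranges, and set aspect='auto' so the axes aren't forced to equal scaling.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

# Stand-ins for the simulation data and the reference picture
P = np.sin(np.linspace(0, 10, 200))
img = np.random.rand(50, 80, 3)  # stand-in for plt.imread("REF.jpg")

x = range(len(P))
fig, ax = plt.subplots()
# extent = (left, right, bottom, top) in data coordinates
ax.imshow(img, extent=(0, len(P), P.min(), P.max()), aspect='auto')
ax.plot(x, P)
fig.savefig("overlay.png")
```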

plt.imshow doesn't display the image outside of its original domain

I ran a simple script to register the landmarks (via translation, rotation and scaling) of the Helen data set. I decided to center all faces on the point between the two eyes, and to define that as the origin of my system (position 0,0). I have successfully managed to create a dataset based on the registered landmarks. Great, that was the first step.
The second step is to register the actual images, i.e. what's in between the landmarks. I'm using skimage, and things are technically working:
dst = np.array(Data[1])
dst[:,1] = -dst[:,1]
fig, ax = plt.subplots()
ax.imshow(Helen1)
plt.scatter(dst[:,0], dst[:,1], color="red", s=1)
plt.show()
Image With Landmarks:
src = np.array(Data_Aligned[1])
src[:,1] = src[:,1]
tform3 = PiecewiseAffineTransform()
tform3.estimate(src,dst)
warped = warp(Helen1, tform3,clip=False)
The problem seems to be in the display function: it seems like the original image lives in [0, ∞)², and plotting anything outside of this domain comes out blank.
fig, ax = plt.subplots()
plt.xlim([700, 1500])
plt.ylim([700, 1500])
plt.imshow(warped)
plt.scatter(src[:,0], src[:,1], color="red", s=1)
plt.show()
Registered:
Anyone knows the solution to this problem ? I want the middle of the eyes to be the origin (0,0). Obviously I can move the center around to be within the domain of the original image and it works, here (1000,1000):
Centered at 1000,1000:
Anyone knows a way to allow imshow to display outside the original domain of the image ?
This is not an imshow problem, but a warp problem.
The output of warp has the same size as the input. So the "domain" of that image is [0,1600] or something like that, for both axes. So, indeed, you cannot warp your image so its center is at (0,0), because the image domain always starts there.
You need to pick some positive coordinate (preferably the center of the output image) as the origin of your system. Say this origin is o = (1000, 1000). You then display your image with
ysz, xsz, nchan = warped.shape
plt.imshow(warped, extent=(-o[0]-0.5, xsz-o[0]-0.5, ysz-o[1]-0.5, -o[1]-0.5))
This will shift the image within the plot's coordinate system.
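A self-contained sketch of that extent computation (with a random array standing in for warped):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

# Stand-in for the warped image
warped = np.random.rand(1600, 1600, 3)
o = (1000, 1000)  # chosen origin within the image

ysz, xsz, nchan = warped.shape
# extent = (left, right, bottom, top); shifts the pixel grid so that
# image coordinate o lands at (0, 0) in the plot
extent = (-o[0] - 0.5, xsz - o[0] - 0.5,
          ysz - o[1] - 0.5, -o[1] - 0.5)

fig, ax = plt.subplots()
ax.imshow(warped, extent=extent)
fig.savefig("shifted.png")
```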

Comparing and plotting regions of the same color over a dataset of a few hundred images

A chem student asked me for help with plotting image segmentation:
A stationary camera takes a picture of the experimental setup every second over a period of a few minutes, yielding around 300 images.
The relevant parts in the setup are two adjacent layers of differently-colored foams observed from the side, a 2-color sandwich shrinking from both sides, basically, except one of the foams evaporates a bit faster.
I'd like to segment each of the images in the way that would let me plot both foam regions' "width" against time.
Here is a "diagram" :)
I want to go from here --> To here
Ideally, given a few hundred of such shots, in which only the widths change, I get an array of scalars back that I can plot. (Going to look like a harmonic series on either side of the x-axis)
I have a bit of python and matlab experience, but have never used OpenCV or Image Processing toolbox in matlab, or actually never dealt with any computer vision in general. Could you guys throw like a roadmap of what packages/functions to use or steps one should take and i'll take it from there?
I'm not sure how to address these things:
- selecting at which slice along the length of the foams the algorithm measures the width (i.e. if the foams are a bit uneven), although this can be ignored;
- which library to use to segment regions of the image based on their color (some k-means shenanigans, probably), and how to selectively store the spatial parameters of the resulting segments;
- how to iterate the above over a number of files.
Thank you kindly in advance!
Assuming your intensity will be different after converting to grayscale (if not, just convert to another color space like HSV or LAB and use one of its components):
img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
First, threshold your grayscaled input into a few bands:
ret,thresh1 = cv2.threshold(img,128,255,cv2.THRESH_BINARY)
ret,thresh2 = cv2.threshold(img,27,255,cv2.THRESH_BINARY_INV)
ret,thresh3 = cv2.threshold(img,77,255,cv2.THRESH_TRUNC)
ret,thresh4 = cv2.threshold(img,97,255,cv2.THRESH_TOZERO)
ret,thresh5 = cv2.threshold(img,227,255,cv2.THRESH_TOZERO_INV)
The threshold values should be tuned on your actual data; these are just examples.
Clean up each segmented image using a median filter with a radius larger than 9, since some noise is to be expected. You can also use an ROI here to help remove part of the noise, but personally I'm lazy; I just wrote the program to handle all cases and angles.
threshholed_images_aftersmoothing = cv2.medianBlur(threshholed_images, 9)
Each band will correspond to one color (layer). You should now have N segmented images from one source, where N is the number of layers you wish to track.
Second, use the OpenCV function boundingRect to find the location and width/height of each layer, i.e. run boundingRect on each sub-segmented image (each threshholed_images_aftersmoothing):
C++: Rect boundingRect(InputArray points)
Python: x, y, w, h = cv2.boundingRect(points)
Last, the rect has x, y, width and height properties. You can use a simple sort to order the layers from top to bottom based on the rect's y attribute. Run through the whole video to obtain the height-vs-time graph for each layer.
Rect API public attributes:
_Tp x
_Tp y      // this tells you the position of the band
_Tp width
_Tp height // this is what you are looking for
By plotting the corresponding heights (|AB| or |CD|) over time, you can obtain the graph you need.
A more robust way is to use a Kalman filter to track the position and height, as I would expect some bubbles to occur and interfere with the height of the layers.
To be honest, I didn't expect a chem student to be doing this. Haha, good luck!
If anything goes wrong you can find me here, or email me if I'm not watching Stack Overflow.
You can select a region of interest straight down the middle of the foams, a few pixels wide. If you stack these regions for each image, the result shows the shrinkage over time.
If, for example, you use a 3-pixel-wide ROI, the result for 300 images will be a 900-pixel-wide image, where the left is the start of the experiment and the right is the end. The following image can help you understand:
Though I have not fully tested it, this code should work. Note that the folder you reference must contain only images.
import cv2
import numpy as np
import os

# path to folder that holds the images
path = '.'
# dimensions of roi
x = 0
y = 0
w = 3
h = 100
# store references to all images
all_images = os.listdir(path)
# sort images
all_images.sort()
# create empty result array
result = np.empty([h, 0, 3], dtype=np.uint8)
for image in all_images:
    # load image
    img = cv2.imread(path + '/' + image)
    # get the region of interest
    roi = img[y:y+h, x:x+w]
    # add the roi to previous results
    result = np.hstack((result, roi))
# optional: save result as image
# cv2.imwrite('result.png', result)
# display result - can also plot with matplotlib
cv2.imshow('Result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Update after question edit:
If the foams have different colors, you can easily separate them by color by converting the image to HSV and using inRange (example). This creates a mask (a 2D array with values from 0-255, one for each pixel) that you can use to calculate the average height and to extract the parameters and area of each region.
You can find a script that helps you find the HSV colors for separation on this GitHub.

Extracting actual values from RGB-bands

I have a (geographic) raster image in RGB. I also have an external legend displaying the heights corresponding to each color. In the figure below, I have sampled this legend, hopefully revealing its RGB characteristics, and plotted these values against their actual height values on the x-axis.
Now, is it possible to directly derive height from a pixel's RGB value? Ultimately, I'm looking for a simple formula that translates my RGB values into one height value (e.g. H = a*R + b*G + c*B). Any hints or tips? Is it even possible at all?
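If the legend's color ramp is (approximately) linear in RGB, one way to get such a formula is a least-squares fit over the sampled legend: stack the sampled (R, G, B) triples, solve for the coefficients, then apply them to every pixel. A sketch with made-up sample data (the real samples would come from your legend):

```python
import numpy as np

# Hypothetical legend samples: each row is (R, G, B) with a known height
rgb = np.array([[ 10,  20, 200],
                [ 60, 120, 150],
                [120, 180,  90],
                [200, 230,  30]], dtype=float)
heights = np.array([0.0, 100.0, 200.0, 300.0])

# Solve H ≈ a*R + b*G + c*B + d  (least squares, with an intercept term)
A = np.column_stack([rgb, np.ones(len(rgb))])
coeffs, residuals, rank, _ = np.linalg.lstsq(A, heights, rcond=None)

# Apply the fitted formula to any pixel
def rgb_to_height(r, g, b):
    return coeffs @ np.array([r, g, b, 1.0])

print(rgb_to_height(60, 120, 150))
```

Whether this works on real data depends on the legend actually being linear in RGB; for a nonlinear ramp, a nearest-neighbor lookup against the sampled legend colors is a common fallback.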

How to plot bokeh image with non-uniform pixel width

I've been working on a web-app to produce images using Bokeh in Python and have been having trouble making images with non-uniform pixel width. The type of behavior that I'd like to have is similar to the NonUniformImage function from the matplotlib.image module, but I want it to be interactive in the browser, which is why I use Bokeh.
The data that I want to plot has a fixed pixel width in the vertical direction, but each column can have a different pixel width. Now, the only way that I could figure out how to make a variable width column in an image plot was to slice each column into its own image and plot them all as separate images with the appropriate widths. While this does plot things with the widths I want, it has a rendering issue in between each of the pixels where white lines show up depending on the level of zoom. These white lines will translate to the saved images as well. I've written up some sample code below:
import numpy as np
from bokeh.models import Range1d
from bokeh.plotting import figure, show, output_file
# Sample Data
img = np.array([[(x/10.*255,y/10.*255,100,255) for x in range(10)] for y in range(10)])
# Convert to RGBA array that can be plotted
d = np.empty((10, 10), dtype=np.uint32)
view = d.view(dtype=np.uint8).reshape((10, 10, 4))
view[:,:,:] = img
# Set output file
output_file("image.html", title="image.py example")
# Setup the figure
rng = Range1d(0,10,bounds='auto')
p = figure(x_range=rng, y_range=rng, plot_width=500, plot_height=500,active_scroll='wheel_zoom')
# Slice the images
imgs = [d[:,n:n+1] for n in range(10)]
dhs = [10 for n in range(10)]
ys = [0 for n in range(10)]
dws = [0.5 if n%2 == 0 else 1.5 for n in range(10)]
xs = [sum(dws[:n]) for n in range(10)]
# Plot the image
p.image_rgba(image=imgs, x=xs, y=ys, dw=dws, dh=dhs)
show(p)
Now, my real data is much denser than this sample data, so the rendering is dominated by the white vertical lines. If you zoom in far enough, you can see that the pixels are right next to each other.
So, my question is the following: Is there a better way to plot a non-uniform image in Bokeh? Something that can take in x,y position information for each pixel would be preferable. Or, is there a way I can get the rendering to work better using this method to avoid the white stripes?
EDIT: It seems that if I give the pixels some overlap then it gets rid of the striping. But there is a limit to that, the overlap needs to be sufficiently large, which seems like a pretty sketchy way of doing things. I'll think about this some more, as it could be a work-around, but I'd like to have a more reasonable solution.
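For what it's worth, the overlap workaround from the edit can be sketched as a small change to the slicing code above: stretch each column's dw by a small epsilon while keeping the x positions, so adjacent sub-images overlap just enough to hide the seams. The epsilon here is an arbitrary illustrative value:

```python
# Column widths and positions as in the example above
dws = [0.5 if n % 2 == 0 else 1.5 for n in range(10)]
xs = [sum(dws[:n]) for n in range(10)]

# Overlap workaround: widen each column slightly; the epsilon needed
# is empirical and depends on zoom level
eps = 0.05
dws_overlap = [dw + eps for dw in dws]
# p.image_rgba(image=imgs, x=xs, y=ys, dw=dws_overlap, dh=dhs)
```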
