Before running my processing algorithm on an image, I need the user to draw a circle with the mouse to create a clipping mask. This mask will be used to remove areas of the image that would cause my algorithm to fail.
How can I allow the user to:
drag the ROI (to adjust x-y position on the image)
adjust the shape of the ROI (i.e. the size of the circle by dragging)
In the future I will need some feature detection to make the ROI choice automatically, but for now I really need the user to be able to define the ROI in a way that is easy for them.
If you have scikit-image installed, you can use the following to do a rectangular selection (modifying the skimage code to do a circle instead would not be hard, though):
import matplotlib.pyplot as plt
from skimage import data
from skimage.viewer.canvastools import RectangleTool
f, ax = plt.subplots()
ax.imshow(data.camera(), interpolation='nearest', cmap='gray')
props = {'facecolor': '#000070',
         'edgecolor': 'white',
         'alpha': 0.3}
rect_tool = RectangleTool(ax, rect_props=props)
plt.show()
print("Final selection:")
rect_tool.callback_on_enter(rect_tool.extents)
You press enter to finalize the selection.
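If you would rather have a draggable, resizable circle right away, matplotlib's own EllipseSelector widget supports both with interactive=True, and needs no skimage.viewer. A minimal sketch, assuming a reasonably recent matplotlib (the onselect callback name is my own):
import matplotlib.pyplot as plt
from matplotlib.widgets import EllipseSelector
from skimage import data

def onselect(eclick, erelease):
    # Called on mouse release; extents are (xmin, xmax, ymin, ymax).
    print('ROI extents:', selector.extents)

f, ax = plt.subplots()
ax.imshow(data.camera(), interpolation='nearest', cmap='gray')
selector = EllipseSelector(ax, onselect, interactive=True)
plt.show()
Once drawn, the selection can be moved by dragging its centre and resized by dragging its handles.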
The code given by Stefan no longer seems to be supported (it fails when ax is passed to RectangleTool); RectangleTool now only accepts a skimage ImageViewer as its argument. Here is a piece of code adapted from Stefan's example and the skimage documentation. It provides an interactive way of retrieving the ROI coordinates.
from skimage import data
from skimage.viewer.canvastools import RectangleTool
from skimage.viewer import ImageViewer

im = data.camera()

def get_rect_coord(extents):
    # Called when ENTER is pressed; stores the current selection.
    global viewer, coord_list
    coord_list.append(extents)

def get_ROI(im):
    global viewer, coord_list
    selecting = True
    while selecting:
        viewer = ImageViewer(im)
        coord_list = []
        rect_tool = RectangleTool(viewer, on_enter=get_rect_coord)
        print("Draw your selections, press ENTER to validate one and close the window when you are finished")
        viewer.show()
        finished = input('Is the selection correct? [y]/n: ')
        if finished != 'n':
            selecting = False
    return coord_list

a = get_ROI(im)
print(a)
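Whichever tool you use to collect the coordinates, turning them into the circular clipping mask the question asks for only takes a few lines of numpy. A sketch; the circular_mask helper and the centre/radius values are hypothetical, and im is the image from the example above:
import numpy as np

def circular_mask(shape, cx, cy, r):
    # Boolean mask, True inside the circle of radius r centred at (cx, cy).
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2

# Hypothetical centre/radius; in practice derive them from the selection.
masked = im.copy()
masked[~circular_mask(im.shape, cx=256, cy=256, r=100)] = 0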
I want to apply a matplotlib colormap to a medical grayscale image (14-bit, so 16383 is the maximum pixel value, stored as np.uint16) and save it as a single-channel grayscale image. However, when I do:
import matplotlib.pyplot as plt
import numpy as np
rand_img = np.random.randint(low=0, high=16383,size=(500,500),dtype=np.uint16)
# rand_img.shape = (500,500)
cm = plt.cm.gist_yarg
norm = plt.Normalize(vmin=0, vmax=((2**14)-1))
img = cm(norm(rand_img))
# img.shape = (500,500,4)
the resulting img.shape is a 4-channel (RGBA) image, whereas what I want is a one-channel image. The first 3 channels are all the same, so I could just slice out one channel and use it. However, the image turns out significantly darker than when I display it with, e.g.,
plt.imshow(rand_img, cmap=plt.cm.gist_yarg)
So how can I apply the colormap and save the image so that it looks exactly like when I use plt.imshow?
PS: When I use plt.imsave with the colormap, the saved image looks as expected; however, it is still stored as a 4-channel image and not as a single-channel image.
I am almost certain that the difference you are seeing is coming from the default dpi setting in plt.savefig(). See this question for details. Since the default DPI is 100, it is likely that the difference you are seeing comes from down-sampling during image saving. Trying plt.savefig() with the default DPI setting on your example, I can clearly see that the fine details are missing.
Modifying your code slightly, I can take your example and get two reasonably close looking plots:
import matplotlib.pyplot as plt
import numpy as np
SAVEFIG_DPI = 1000
np.random.seed(1000) # Added to make example repeatable
rand_img = np.random.randint(low=0, high=16383, size=(500, 500), dtype=np.uint16)
cm = plt.cm.gist_yarg
norm = plt.Normalize(vmin=0, vmax=((2**14) - 1))
norm_cm_img = cm(norm(rand_img))
plt.imshow(norm_cm_img)
plt.savefig("test.png", dpi = SAVEFIG_DPI)
plt.imshow(norm_cm_img)
plt.show()
I get the following output, with show() on the left, and the image file on the right:
It's worth noting that I did try using fig.dpi as suggested in the linked question, but I was not able to get results that looked as close as this using that approach.
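If the goal is a pixel-exact single-channel file rather than a rendered figure, another option is to bypass savefig entirely and write the colormapped array itself. Since gist_yarg is a grayscale colormap, its R, G and B channels are identical, so one channel carries all the information. A minimal sketch assuming Pillow is installed, reusing norm_cm_img from above (the file name is arbitrary):
from PIL import Image

# Take one channel of the RGBA colormap output and save it as an
# 8-bit single-channel PNG; no figure is rendered, so no DPI resampling.
gray = (norm_cm_img[..., 0] * 255).astype(np.uint8)
Image.fromarray(gray, mode='L').save('test_gray.png')
This quantizes the output to 8 bits, which matches the default 256-entry lookup table of matplotlib colormaps anyway.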
I have an image here (DMM_a01_s01_e01_sdepth.PNG, it is basically a human depth map or something, I don't really know the details :( ):
It's very small (54x102) so here is a visualization:
But when I tried to resize it to 20x20 using this piece of code that I've made:
from scipy import misc
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import math
import cv2
im = misc.imread('DMM_a01_s01_e01_sdepth.PNG')
def rgb2gray(rgb):
    return np.dot(rgb[..., :3], [0.299, 0.587, 0.114])

if len(im.shape) == 3:
    im = rgb2gray(im)  # Convert RGB to grayscale
# Show image
plt.imshow(im, cmap = cm.Greys_r)
plt.show()
# Resize image
boxSize = 20
newImage = misc.imresize(im, (boxSize, boxSize), interp="bicubic")
plt.imshow(newImage, cmap = cm.Greys_r)
plt.show()
the resized image is no longer the same as the original one:
How do I resize and still keep the structure of the image? Please help me, thank you very much :)
What you are asking for is impossible. Resizing an image is a destructive operation. You have 54x102 pixels (5508 pixels of data) and you are trying to fit that amount of data into a 20x20 image - that's just 400 pixels! You will always lose some detail and structure; how much depends on the algorithm used - in this case scipy's bicubic interpolation.
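That said, if the complaint is specifically that bicubic interpolation smears the sharp depth discontinuities, nearest-neighbour interpolation at least keeps the original pixel values, at the cost of blockiness. A sketch using cv2, which the question already imports (note that cv2.resize takes the target size as (width, height)):
import cv2

# INTER_NEAREST copies existing pixel values instead of blending them,
# which preserves hard edges in depth/label maps.
newImage = cv2.resize(im, (boxSize, boxSize), interpolation=cv2.INTER_NEAREST)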
This question is kind of a follow-up to my two previous questions: Python Image tutorial works, other images behaves differently (showing images with Pylab) and Detect objects on a white background in Python .
What I am trying to achieve is being able to programmatically count the number of individual objects on a white background. As seen in the other 2 posts I've been able to achieve this to some extent. At this moment I am able to count the number of objects when virtually no shadow is present on the image. The image I'm trying to analyse (bottom of this post) does have some shadows which causes objects to merge and be seen as one individual object.
I need some simple way of getting rid of the shadows, I already tried adaptive thresholding with scikit-image (http://scikit-image.org/docs/dev/auto_examples/plot_threshold_adaptive.html#example-plot-threshold-adaptive-py), but I stumbled upon some problems (https://imgur.com/uYnj6af). Is there any not too complicated way to get this to work? There's already a lot of code in my previous posts but please let me know if you want to see any additional code!
Thanks in advance.
Perhaps it's easier to operate on a binary image. In the code below, I obtain such an image by computing the variance over a sliding window and thresholding it.
from skimage import io, exposure, color, util
import matplotlib.pyplot as plt
image = color.rgb2gray(io.imread('tools.jpg'))
image = exposure.equalize_adapthist(image)  # assign the result; equalize_adapthist does not operate in place
Z = util.view_as_windows(image, (5, 5))
Z = Z.reshape(Z.shape[0], Z.shape[1], -1)
variance_map = Z.var(axis=2)
plt.imshow(variance_map > 0.005, cmap='gray')
plt.savefig('tools_thresh.png')
plt.show()
Update:
This extended version of the code identifies the 8 tools.
from skimage import io, exposure, color, util, measure, morphology
from scipy import ndimage as ndi
import numpy as np
import matplotlib.pyplot as plt
image = color.rgb2gray(io.imread('tools.jpg'))
image = exposure.equalize_adapthist(image)  # assign the result; equalize_adapthist does not operate in place
Z = util.view_as_windows(image, (5, 5))
Z = Z.reshape(Z.shape[0], Z.shape[1], -1)
variance_map = Z.var(axis=2)
tools_bw = variance_map > 0.005
tools_bw = morphology.binary_closing(tools_bw, np.ones((5, 5)))
tools_bw = ndi.binary_fill_holes(tools_bw)
labels = measure.label(tools_bw)
regions = measure.regionprops(labels)
regions = [r for r in regions if r.perimeter > 500 and r.major_axis_length > 200]
print(len(regions))
out = np.zeros_like(tools_bw, dtype=int)
for i, r in enumerate(regions):
out[labels == r.label] = i + 1
plt.imshow(out, cmap='nipy_spectral')  # 'spectral' was renamed to 'nipy_spectral' in newer matplotlib versions
plt.savefig('tools_identified.png', bbox_inches='tight')
plt.show()
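As an optional extra (not part of the original answer), skimage's label2rgb can overlay the labelled regions on the source image, which is often easier to inspect than the raw label array:
from skimage import color

# Paint each labelled tool in a distinct colour on top of the grayscale image.
overlay = color.label2rgb(out, image=image, bg_label=0)
plt.imshow(overlay)
plt.show()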
I'm trying to fill holes in the below image.
When I use SciPy's binary_fill_holes(), I am generally successful, with the exception of objects that touch the image's border.
Are there any existing Python functions that can fill holes in objects that touch the border? I tried adding a white border around the image, but that just resulted in the entire image being filled.
This assumes that there is more background than anything else. It basically performs a connected-component analysis on the image, extracts the largest component (assumed to be the background), and sets everything else to white.
import numpy as np
import matplotlib.pyplot as plt
import skimage.io, skimage.measure

# Load as grayscale and binarize to 0/255; label() needs an integer image.
img = (skimage.io.imread('j1ESv.png', as_gray=True) > 0.5).astype(np.uint8) * 255

# background=-1 (a value that never occurs) makes label() label every
# region, black and white alike, instead of skipping 0-valued pixels.
labels = skimage.measure.label(img, background=-1)
labelCount = np.bincount(labels.ravel())
background = np.argmax(labelCount)
img[labels != background] = 255
plt.imshow(img, cmap=plt.cm.gray)
plt.show()
I have a plot of spatial data that I display with imshow().
I need to be able to overlay the crystal lattice that produced the data. I have a png file of the lattice that loads as a black and white image. The parts of this image I want to overlay are the black lines that form the lattice; the white background between the lines should not be visible.
I'm thinking that I need to set the alpha of each background (white) pixel to transparent (0?).
I'm so new to this that I don't really know how to ask this question.
EDIT:
import matplotlib.pyplot as plt
import numpy as np
lattice = plt.imread('path')
im = plt.imshow(data[0,:,:],vmin=v_min,vmax=v_max,extent=(0,32,0,32),interpolation='nearest',cmap='jet')
im2 = plt.imshow(lattice,extent=(0,32,0,32),cmap='gray')
#thinking of making a mask for the white background
mask = np.ma.masked_where(lattice < 1, lattice)  # confusion here b/c even though the image is grayscale uint8 (0-255), the numpy array lattice holds 0-1.0 floats...?
Without your data I can't test this, but something like:
import matplotlib.pyplot as plt
import numpy as np
import copy
my_cmap = copy.copy(plt.cm.get_cmap('gray')) # get a copy of the gray color map
my_cmap.set_bad(alpha=0) # set how the colormap handles 'bad' values
lattice = plt.imread('path')
im = plt.imshow(data[0,:,:],vmin=v_min,vmax=v_max,extent=(0,32,0,32),interpolation='nearest',cmap='jet')
lattice[lattice > thresh] = np.nan # insert 'bad' values into your lattice (the white background is the *high* values)
im2 = plt.imshow(lattice,extent=(0,32,0,32),cmap=my_cmap)
Alternatively, you can hand imshow an NxMx4 np.array of RGBA values; that way you don't have to muck with the color map:
im2 = np.zeros(lattice.shape + (4,))
im2[:, :, 3] = lattice # assuming lattice is already a bool array
plt.imshow(im2)
The easy way is to simply use your image as a background rather than an overlay. Other than that, you will need to use PIL or Python ImageMagick bindings to convert the selected colour to transparent.
Don't forget you will probably also need to resize either your plot or your image so that they match in size.
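For the PIL route, here is a minimal sketch of converting the white background to transparent (the file names are placeholders):
from PIL import Image

img = Image.open('lattice.png').convert('RGBA')
# Zero the alpha of every pure-white pixel so only the lattice lines remain.
pixels = [(r, g, b, 0) if (r, g, b) == (255, 255, 255) else (r, g, b, a)
          for (r, g, b, a) in img.getdata()]
img.putdata(pixels)
img.save('lattice_transparent.png')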
Update:
If you follow the tutorial here with your image and then plot your data over it, you should get what you need. Note that the tutorial uses PIL, so you will need that installed as well.