I have an image and a pixel mask for that image, where the mask is the same size as the image and contains values of 0 and 1. Where the mask is 0 I don't want to modify the image, and where it is 1 I want to add a semitransparent color over that pixel of the image.
Basically I want to highlight certain segments of the image but still see what is underneath.
Now I have searched high and low but haven't found a simple way to do this. I used np.where with the mask to get the pixel locations of the 1's to use with the plot functions. I first tried scatter plots with a small marker size and no edge color (small scatter plot markers in matplotlib are always black), but the markers are not one image pixel in size; they seem to have an absolute size, so depending on the size of the figure the transparency is affected and weird patterns are created by the overlapping markers.
Just the regular pyplot plot function created the exact look I desired (where the coloring was smooth and invariant to figure size) but it also colored horizontal connections between disjoint segments in the mask (since it is drawing lines I guess), so I couldn't use that.
What worked the best was patches, which I came across in this question: (How to set a fixed/static size of circle marker on a scatter plot?). I found that rectangular patches with width and height of 1 gave me the exact desired effect, where I could put a transparent color over certain pixels of the image. However, this produced a ton (tens of thousands) of rectangles for certain images, and so it was quite slow. Even when using a PatchCollection instead of calling add_patch every time, it was still slow.
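For reference, here is a minimal sketch of that patches approach (assuming mask is the 0/1 array described above and the image has already been drawn with imshow):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from matplotlib.collections import PatchCollection
rows, cols = np.where(mask == 1)
# one 1x1 rectangle per masked pixel, centered on the pixel
rects = [Rectangle((c - 0.5, r - 0.5), 1, 1) for r, c in zip(rows, cols)]
plt.gca().add_collection(PatchCollection(rects, facecolor='red', alpha=0.3, edgecolor='none'))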
Now I can probably just join adjacent rectangles to reduce the number of things needing to be drawn, but I was just wondering if there was an easier way to do this?
Thanks.
You can do a semitransparent overlay either using masked arrays or by setting the alpha values in an RGBA image. Here are both worked through (using the example of three semitransparent red squares placed over a circular pattern), and they give similar images (so I'll only show one):
from pylab import *
from numpy import ma
x = y = linspace(-6, 6, 120)  # 120 points so z3 lines up with the 120x120 mask below
X, Y = meshgrid(x, y)
z3 = X*X + Y*Y # circular pattern
# first, do this with a masked array
figure()
# z4 = three diagonal squares
# zm = a uniform image (ones), with a mask of squares (~z4)
z4 = np.repeat(np.repeat(eye(3, dtype=bool), 40, axis=0), 40, axis=1)
zm = ma.masked_where(~z4, ones((120,120)))
imshow(z3, cmap=cm.jet)
imshow(zm, cmap=cm.bwr, alpha=.3, vmin=0, vmax=1) #cm.bwr is an easy way to get red
# do this by changing alpha for each pixel
figure()
z5 = zeros((120, 120, 4), dtype=float)
z5[..., 0] = 1
z5[..., 3] = .4*z4.astype(float)
imshow(z3, cmap=cm.jet)
imshow(z5)
show()
I think both approaches can produce the same results in all cases, but:
1. masked arrays can be a more direct approach if the mask or composition becomes complicated, and masking gives you more flexibility in drawing your overlay image since, for example, you can use colormaps rather than specifying the full RGBA for every pixel; but
2. the masked-array approach doesn't give full pixel-by-pixel control over the alpha value the way RGBA does.
Here's a more complicated image using masked arrays:
z1 = cos(2*X)  # vertical bars
z2 = cos(5*(X+Y))
zm = ma.masked_where( (z2<.5) & (Y>0), z1)
figure()
imshow(z3)
imshow(zm, cmap=cm.gray, alpha=.4, vmin=-2, vmax=2)
show()
It's a bit crazy, but here's what's going on: the primary image is a circular pattern that goes from blue to red (z3). Then there are vertical bars that faintly shade this (z1), but only in half of the figure and in narrow alternating diagonal bands on the other half (due to the mask).
Just to add on to what tom10 has posted, the masked arrays do work great with colormaps, but I also wrote a small function in the meantime that should work with any RGB color tuple.
import numpy as np

def overlayImage(im, mask, col, alpha):
    # replicate the 2D boolean mask across the three color channels
    maskRGB = np.tile(mask[..., np.newaxis], 3)
    untouched = ~maskRGB * im                           # pixels outside the mask, unchanged
    overlayComponent = alpha * np.array(col) * maskRGB  # overlay color, weighted by alpha
    origImageComponent = (1 - alpha) * maskRGB * im     # original pixels under the mask, dimmed
    return untouched + overlayComponent + origImageComponent
im is the RGB image
mask is a boolean mask of the image, such that mask.shape + (3,) == im.shape
col is just the 3-tuple RGB value you want to mask the image with
alpha is just the alpha value / transparency for the mask
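A hypothetical call (assuming im is a uint8 RGB array and mask a matching 2D boolean array):
import matplotlib.pyplot as plt
highlighted = overlayImage(im, mask, (255, 0, 0), 0.4)  # red overlay at 40% opacity
plt.imshow(highlighted.astype(np.uint8))
plt.show()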
I also needed a clear contour around my areas. You can easily add a contour plot on top: create a dummy numpy array and set a different value in each area of interest.
Here's an example built on top of tom10's answer, with a different condition:
x = y = linspace(-6, 6, 100)
X, Y = meshgrid(x, y)
z3 = X*X + Y*Y # circular pattern
# first, do this with a masked array
figure()
imshow(z3, cmap=cm.jet, extent=(-6, 6, -6, 6))
zm = ma.masked_where((z3 >= 0.7) & (z3 <= 1.5), ones(np.shape(z3)))
imshow(zm, cmap=cm.bwr, alpha=.4, vmin=0, vmax=1, extent=(-6, 6, -6, 6))  # cm.bwr is an easy way to get red
# Build a dummy array of 1s and 0s (you can play with different values to obtain different contours for different regions):
temp_vector = ones(np.shape(z3))
temp_vector[(z3 >= 0.7) & (z3 <= 1.5)] = 0.0
temp_vector[z3 > 8.2] = 2.0  # etc.
# Create the contour. I found only one contour necessary:
contour(X, Y, temp_vector, 1, colors=['r', 'g'])
show()
Which yields:
I have contour plots created in Matplotlib that I need to analyze further to see if they are closed curves, and then look at area, convexity, solidity, etc. for cellular structures. In Matplotlib, they are of type LineCollection and Path.
In OpenCV, I cannot pass a float array to cv2.contourArea or similar functions. On the other hand, converting to uint8 coordinates loses important data like nesting structure. In this case, I need to get to the inner nested convex contours.
Are there any options to find information like area, convex hull, bounding rectangle in Python?
I could enlarge the image, but I'm worried it might skew the picture unpredictably.
For example: Attached image with floating point and integer coordinates.
I assume you have full control over the Matplotlib part, so let's try to get an image from there which you can easily use for further image processing with OpenCV.
We start with some common contour plot as shown in your question:
You can set the levels parameter to get a single contour level. That's helpful to work on several levels individually. In the following, I will focus on levels=[1.75] (the most inner green ellipse). Later, you can simply loop through all desired levels, and perform your analyses.
For our custom contour plot, we will set a fixed x, y domain, for example [-3, 3] x [-2, 2], using xlim and ylim. So, we have known dimensions for the actual canvas. We get rid of the axes using axis('off'), and the margins around the canvas using tight_layout(pad=0). What's left is the plain canvas in full size (figure size was adjusted to (10, 5), and colors are automatically adjusted to the number of levels):
Now, we save the canvas to some NumPy array, cf. this Q&A. From there, we can perform any OpenCV operation. For finding the combined area of this level contours, we might threshold the grayscaled image, find all contours, and calculate their areas using cv2.contourArea. We sum those areas, and get the whole area in pixels. From the known canvas dimensions, we know the whole canvas area in "units", and from the image dimensions, we know the whole canvas area in pixels. So, we just need to divide the whole contour area (in pixels) by the whole canvas area (in pixels), and multiply with the whole canvas area (in "units").
That'd be the whole code:
import cv2
import matplotlib.pyplot as plt
import numpy as np
# Generate some data for some contour plot
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = np.exp(-(X + 1.5)**2 - Y**2)
Z2 = np.exp(-(X - 1.5)**2 - Y**2)
Z = (Z1 + Z2) * 2
# Custom contour plot
x_min, x_max = -3, 3
y_min, y_max = -2, 2
fig = plt.figure(2, figsize=(10, 5)) # Set large figure size
plt.contour(X, Y, Z, levels=[1.75]) # Set single levels if needed
plt.xlim([x_min, x_max]) # Explicitly set x limits
plt.ylim([y_min, y_max]) # Explicitly set y limits
plt.axis('off') # No axes shown at all
plt.tight_layout(pad=0) # No margins at all
# Get figure's canvas as NumPy array, cf. https://stackoverflow.com/a/7821917/11089932
fig.canvas.draw()
img = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
img = img.reshape(fig.canvas.get_width_height()[::-1] + (3,))
# Grayscale, and threshold image
mask = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
mask = cv2.threshold(mask, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY_INV)[1]
# Find contours, calculate areas (pixels), sum to get whole area (pixels) for certain level
cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
area = np.sum(np.array([cv2.contourArea(cnt) for cnt in cnts]))
# Whole area (coordinates) from canvas area (pixels), and x_min, x_max, etc.
area = area / np.prod(mask.shape[:2]) * (x_max - x_min) * (y_max - y_min)
print('Area:', area)
The output area seems reasonable:
Area: 0.861408
Now, you're open to do any image processing with OpenCV you like. Always remember to convert any results in pixels to some result in "units".
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
PyCharm: 2021.1.1
Matplotlib: 3.4.1
NumPy: 1.20.2
OpenCV: 4.5.1
I need to introduce a non-constant alpha value using pcolormesh (imshow is a priori not a possible substitute because I need to use log scale for the axes -- hence non-regular spacing along each coordinate).
Following this post, I tried to change the alpha value of the faces a posteriori. However, I can't get rid of the edges that appear in the result.
Here is a minimal example, where I plot a 2D gaussian bump (with very few points), with alpha increasing from the lower left to the upper right corner:
from matplotlib import pyplot as plt
import numpy as np
# start with coordinates, corresponding meshgrid to compute the "shading" value and
# extended coordinate array for pcolormesh (center mesh)
xx = np.linspace(-4,4,7)
xmesh, ymesh = np.meshgrid(xx,xx)
xplot = np.pad(0.5*(xx[1:]+xx[:-1]),1,'reflect',reflect_type="odd") # center & extend
yy = np.exp(-xx[None,:]**2-xx[:,None]**2) # data to plot
# plot the data
fig = plt.figure()
hpc = plt.pcolormesh(xplot, xplot, yy, shading="flat", edgecolor=None)
plt.gca().set_aspect(1)
# change alpha of the faces: lower-left to upper-right gradient
fig.canvas.draw() # this generates the face color array
colors = hpc.get_facecolor()
grad = ( (xmesh.ravel()+ymesh.ravel())/2. - xx.min() ) / ( xx.max()-xx.min() )
colors[:,3] = grad.ravel() # change alpha
hpc.set_facecolor(colors) # update face colors
fig.canvas.draw() # make the modification appear
The result looks like this (the 2D gaussian bump, with alpha increasing from the lower left to the upper right corner):
Is it possible to get rid of these edges? My problem is that I don't even know where they come from... I tried adding hpc.set_antialiased(True), hpc.set_rasterized(True), explicitly adding edges with hpc.set_edgecolor('face'), tuning the linewidth to very small values -- none of these worked.
Thanks a lot for your help
The problem is that the squares overlap a tiny bit, and they are somewhat transparent (you're setting their alpha values != 1) -- so at the overlaps, they're less transparent than they should be, and it looks like a line.
You can fix it by making the squares opaque, but with a colour as if they had the stated transparency, with a white background:
def alpha_to_white(color):
    # blend an RGBA color onto a white background, returning an opaque RGB color
    white = np.array([1, 1, 1])
    alpha = color[-1]
    color = color[:-1]
    return alpha*color + (1 - alpha)*white

colors = np.array([alpha_to_white(color) for color in colors])
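Then reapply the blended (now opaque) colors to the mesh, reusing hpc and fig from the question's code -- a sketch:
hpc.set_facecolor(colors)  # faces are opaque now, so overlaps no longer darken
fig.canvas.draw()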
I have an image patch that I want to insert into another image at a floating point location. In fact, what I need is something like the opposite of what the opencv getRectSubPix function does.
I guess I could implement it by doing a subpixel warp of the patch into another patch and inserting this other patch into the target image at an integer location. However, it's not clear to me what to do with the empty fraction of the pixels in the warped patch, or how I would blend the border of the new patch with the target image.
I would rather use a library function than implement this operation myself. Does anybody know if there is any library function that can do this type of operation, in opencv or any other image processing library?
UPDATE:
I discovered that opencv warpPerspective can be used with a borderMode = BORDER_TRANSPARENT which means that the pixels in the destination image corresponding to the "outliers" in the source image are not modified by the function. So I thought I could implement this subpixel patch insertion with just a warpPerspective and an adequate transformation matrix. So I wrote this function in python to perform the operation:
import cv2
import numpy as np

def insert_patch_subpixel(im, patch, p):
    """
    im: numpy array with source image.
    patch: numpy array with patch to be inserted into the source image.
    p: tuple with the center of the position (can be float) where the patch is to be inserted.
    """
    ths = patch.shape[0]/2
    xpmin = p[0] - ths
    ypmin = p[1] - ths
    # translation-only homography that moves the patch to (xpmin, ypmin)
    Ho = np.array([[1, 0, xpmin],
                   [0, 1, ypmin],
                   [0, 0, 1]], dtype=float)
    h, w = im.shape
    im2 = cv2.warpPerspective(patch, Ho, (w, h), dst=im,
                              flags=cv2.INTER_LINEAR,
                              borderMode=cv2.BORDER_TRANSPARENT)
    return im2
Unfortunately, the interpolation doesn't seem to work for the outlier pixels if BORDER_TRANSPARENT is used. I tested this function with a small 10x10 image (filled with value 30), inserting a 4x4 patch (filled with value 100) at p=(5,5) (left figure) and p=(5.5,5.5) (middle figure), and we can see in the figures below that there is no interpolation at the border. However, if I change the borderMode to BORDER_CONSTANT the interpolation works (right figure), but that also fills the destination image with 0s for the outlier values.
It's a shame that interpolation doesn't work with BORDER_TRANSPARENT. I'll suggest this as an improvement to the opencv project.
Resize the patch image to the size you want in the destination. Then set alpha along the edges based on 1.0 - fraction for the left edge, fraction for the right edge. Then blend.
It's not quite perfect, because you're not resampling all the pixels properly, but that would also damage resolution. It's probably your best compromise.
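A rough sketch of this idea for a grayscale image (hypothetical names throughout; it assumes the shifted patch stays inside the destination and blends only the one-pixel border, leaving the interior unresampled):
import numpy as np

def paste_with_edge_alpha(im, patch, p):
    # p = (x, y) float center of where the patch should land
    ph, pw = patch.shape
    x0, y0 = p[0] - pw / 2.0, p[1] - ph / 2.0   # float top-left corner
    ix, iy = int(np.floor(x0)), int(np.floor(y0))
    fx, fy = x0 - ix, y0 - iy                   # fractional parts of the offset
    # the shifted patch covers (ph+1) x (pw+1) destination pixels;
    # replicate the last row/column to fill the extra pixel
    big = np.pad(patch.astype(float), ((0, 1), (0, 1)), mode='edge')
    # coverage-based alpha: 1 in the interior, edge rows/columns weighted by
    # 1-fraction (left/top) and fraction (right/bottom)
    alpha = np.ones((ph + 1, pw + 1))
    alpha[:, 0] *= 1.0 - fx
    alpha[:, -1] *= fx
    alpha[0, :] *= 1.0 - fy
    alpha[-1, :] *= fy
    roi = im[iy:iy + ph + 1, ix:ix + pw + 1].astype(float)
    blended = alpha * big + (1 - alpha) * roi
    im[iy:iy + ph + 1, ix:ix + pw + 1] = blended.astype(im.dtype)
    return im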
Actually you should use getRectSubPix().
Use it to extract your patch from the source image with the fractional part of your desired offset then just set it into the destination image with a simple copy (or blend as needed).
You might want to add a 1 pixel border around the patch where you can do the blend.
This function essentially does a translation only (subpixel) warp.
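Here is a sketch of what that could look like for a grayscale image (a hypothetical helper; the center arithmetic follows getRectSubPix's convention of sampling a width x height window around a float center with bilinear interpolation, and the final paste is a plain copy, so the one-pixel border is not blended):
import numpy as np
import cv2

def insert_patch_getRectSubPix(im, patch, p):
    ph, pw = patch.shape[:2]
    x0, y0 = p[0] - pw / 2.0, p[1] - ph / 2.0   # float top-left corner
    ix, iy = int(np.floor(x0)), int(np.floor(y0))
    fx, fy = x0 - ix, y0 - iy                   # fractional parts of the offset
    # pad by one pixel so the fractionally shifted sample stays inside the patch
    padded = cv2.copyMakeBorder(patch, 1, 1, 1, 1, cv2.BORDER_REPLICATE)
    # choose the sampling center so the patch content lands shifted by (fx, fy)
    center = (pw / 2.0 + 1 - fx, ph / 2.0 + 1 - fy)
    shifted = cv2.getRectSubPix(padded, (pw + 1, ph + 1), center)
    im[iy:iy + ph + 1, ix:ix + pw + 1] = shifted  # plain copy; blend here if needed
    return im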
I found a solution based on what I found in my question update.
As I could see the interpolation happening when using borderMode = BORDER_CONSTANT in the warpPerspective function, I thought I could use this as a weighting mask for a blending between the original image and the subpixel inserted patch on a black background. See the new function and test code:
import cv2
import numpy as np
import matplotlib.pyplot as plt
def insert_patch_subpixel2(im, patch, p):
    """
    im: numpy array with source image.
    patch: numpy array with patch to be inserted into the source image.
    p: tuple with the center of the position (can be float) where the patch is to be inserted.
    """
    ths = patch.shape[0]/2
    xpmin = p[0] - ths
    ypmin = p[1] - ths
    Ho = np.array([[1, 0, xpmin],
                   [0, 1, ypmin],
                   [0, 0, 1]], dtype=float)
    h, w = im.shape
    im2 = cv2.warpPerspective(patch, Ho, (w, h),
                              flags=cv2.INTER_LINEAR,
                              borderMode=cv2.BORDER_CONSTANT)
    patch_mask = np.ones_like(patch, dtype=float)
    blend_mask = cv2.warpPerspective(patch_mask, Ho, (w, h),
                                     flags=cv2.INTER_LINEAR,
                                     borderMode=cv2.BORDER_CONSTANT)
    # I don't multiply im2 by blend_mask because im2 has already
    # been interpolated with a zero background.
    im3 = im*(1 - blend_mask) + im2
    im4 = cv2.convertScaleAbs(im3)
    return im4
if __name__ == "__main__":
    x, y = np.mgrid[0:10:1, 0:10:1]
    im = (x + y).astype('uint8') * 5
    #im = np.ones((10,10), dtype='uint8')*30
    patch = np.ones((4, 4), dtype='uint8') * 100
    p = (5.5, 5.5)
    im = insert_patch_subpixel2(im, patch, p)
    plt.gray()
    plt.imshow(im, interpolation='none', extent=(0, 10, 10, 0))
    ax = plt.gca()
    ax.grid(color='r', linestyle='-', linewidth=1)
    ax.set_xticks(np.arange(0, 10, 1))
    ax.set_yticks(np.arange(0, 10, 1))

    def format_coord(x, y):
        col = int(x)
        row = int(y)
        z = im[row, col]
        return 'x=%1.4f, y=%1.4f %s' % (x, y, z)

    ax.format_coord = format_coord
    plt.show()
The images below show the results of a test with a small 10x10 image (filled with value 30), inserting a 4x4 patch (filled with value 100) at p=(5,5) (left figure) and p=(5.5,5.5) (middle figure). Now we can see that there is bilinear interpolation at the border. To show that the interpolation works with an arbitrary background, I also show a test with a gradient 10x10 image background (right figure). The test script creates a figure that lets you inspect the pixel values and verify that the correct interpolation is done at each border pixel.
I have a 2D numpy array and would like to generate an image such that the pixels corresponding to numbers that have a high value (relative to other pixels) are coloured with a more intense colour. For example if the image is in gray scale, and a pixel has value 0.4849 while all the other pixels correspond to values below 0.001 then that pixel would probably be coloured black, or something close to black.
Here is an example image, the array is 28x28 and contains values between 0 and 1.
All I did to plot this image was run the following code:
import matplotlib.pyplot as plt
im = plt.imshow(myArray, cmap='gray')
plt.show()
However, for some reason this only works if the values are between 0 and 1. If they are on some other scale which may include negative numbers, then the image does not make much sense.
You can use different colormaps too, like in the example below (note that I removed the interpolation):
happy_array = np.random.randn(28, 28)
im = plt.imshow(happy_array, cmap='seismic', interpolation='none')
cbar = plt.colorbar(im)
plt.show()
And even gray is going to work:
happy_array = np.random.randn(28, 28)
im = plt.imshow(happy_array, cmap='gray', interpolation='none')
cbar = plt.colorbar(im)
plt.show()
You can normalize the data to the range (0,1) by dividing everything by the maximum value of the array:
normalized = array / np.amax(array)
plt.imshow(normalized)
If the array contains negative values you have two logical choices. Either plot the magnitude:
mag = np.fabs(array)
normalized = mag / np.amax(mag)
plt.imshow(normalized)
or shift the array so that everything is positive:
positive = array - np.amin(array)
normalized = positive / np.amax(positive)
plt.imshow(normalized)
I am trying to add shading to a map of some data by calculating the gradient of the data and using it to set alpha values.
I start by loading my data (unfortunately I cannot share the data as it is being used in a number of manuscripts in preparation. EDIT - December, 2020: the published paper is available with open access on the Society of Exploration Geophysicists library, and the data is available with an accompanying Jupyter Notebook):
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
from pylab import imread, imshow, gray, mean
import matplotlib.colors as cl
%matplotlib inline
data = np.loadtxt('data.txt')
plt.imshow(data, cmap='cubehelix')
plt.show()
gets me a plot of the data:
Then I calculate the total horizontal gradient and normalize it to use for shading:
dx,dy = np.gradient(data, 1, 1)
tdx=np.sqrt(dx*dx + dy*dy)
tdx_n=(tdx-tdx.min())/(tdx.max()-tdx.min())
tdx_n=1-tdx_n
which looks as I expected:
plt.imshow(tdx_n[4:-3,4:-3], cmap='bone')
plt.show()
To create the shading effect I thought I would get the colour from the plot of the data, then replace the opacity with the gradient so as to have dark shading proportional to the gradient, like this:
img_array = plt.get_cmap('cubehelix')(data[4:-3,4:-3])
img_array[..., 3] = (tdx_n[4:-3,4:-3])
plt.imshow(img_array)
plt.show()
But the result is not what I expected:
This is what I am looking for (created in Matlab, colormap is different):
Any suggestion as to how I may modify my code?
UPDATED
With Ran Novitsky's method, using the code suggested in the answer by titusjan, I get this result:
which gives the effect I was looking for. In terms of shading, though, I do like titusjan's own suggestion of using HSV, which gives this result:
However, I could not get the colormap to be cubehelix, even though I called for it:
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb
hsv = rgb_to_hsv(img_array[:, :, :3])
hsv[:, :, 2] = tdx_n
rgb = hsv_to_rgb(hsv)
plt.imshow(rgb[4:-3,4:-3], cmap='cubehelix')
plt.show()
First of all, Matplotlib includes a hill shading implementation. This calculates the intensity by comparing the gradient with a light source at a certain angle. So it's not exactly what you are implementing, but close and may even give better results.
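For example, a minimal use of matplotlib's LightSource (sketched against the question's data array, which is assumed here):
from matplotlib.colors import LightSource
import matplotlib.pyplot as plt
ls = LightSource(azdeg=315, altdeg=45)  # light from the upper left
shaded = ls.shade(data, cmap=plt.cm.cubehelix)
plt.imshow(shaded)
plt.show()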
Ran Novitsky has made another hill shading implementation that differs from Matplotlib in the way how the color and intensity values are merged. I can't tell which is better but it's worth a look.
Perhaps the best way of combining color and intensity would be to use Gouraud shading, which is used in 3D computer graphics. My own approach, which I have implemented in the past, was to put the intensity in the value layer of the HSV color of the image.
I don't think I agree with your approach of placing the intensity (tdx_n in your case) in the alpha layer of the image. This means that where the gradient is low the image will be transparent and you will see data that was plotted earlier. I think that's what's happening in your screen shot.
Furthermore I think you need to normalize your data before you pass it through the cmap, just as you normalize your intensity:
data_n=(data-data.min())/(data.max()-data.min())
img_array = plt.get_cmap('cubehelix')(data_n)
We then can use the approach of Ran Novitsky to merge the color with the intensity:
rgb = img_array[:, :, :3]
# form an rgb equivalent of intensity
d = tdx_n.repeat(3).reshape(rgb.shape)
# simulate illumination based on pegtop algorithm.
rgb = 2 * d * rgb + (rgb ** 2) * (1 - 2 * d)
plt.imshow(rgb[4:-3,4:-3])
plt.show()
Or you can follow my past approach and put the intensity in the value layer of the HSV triplet.
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb
hsv = rgb_to_hsv(img_array[:, :, :3])
hsv[:, :, 2] = tdx_n
rgb = hsv_to_rgb(hsv)
plt.imshow(rgb[4:-3,4:-3])
plt.show()
Edit 2015-05-23:
Your question has prompted me to finish my hill shading implementation that I started a year ago. I've put it on Github here.
It uses a blending mechanism that is similar to Gouraud shading, which is used in 3D computer graphics. It's labeled RGB blending below. I think this is the best blending algorithm, HSV blending gives erroneous results when the color is close to black (note the blue color in the center of the HSV image, which is not present in the un-shaded data).
RGB blending is also the simplest algorithm: it just multiplies the intensity with the RGB triplet (adding an extra dimension of length 1 to allow broadcasting in the multiplication).
rgb = img_array[:, :, :3]
tdx_n_exp = np.expand_dims(tdx_n, axis=2)
result = tdx_n_exp * rgb
plt.imshow(result[4:-3,4:-3])