In other words, I want to make a heatmap (or surface plot) where the color varies as a function of 2 variables. (Specifically, luminance = magnitude and hue = phase.) Is there any native way to do this?
Some examples of similar plots:
Several good examples of exactly(?) what I want to do.
More examples from astronomy, but with non-perceptual hue
Edit: This is what I did with it: https://github.com/endolith/complex_colormap
imshow can take an array of [r, g, b] entries. So you can convert the absolute values to intensities and the phases to hues.
I will use complex numbers as an example, because that is where this makes the most sense. If needed, you can always combine two NumPy arrays as Z = X + 1j * Y.
So for your data Z you can use e.g.
imshow(complex_array_to_rgb(Z))
where (EDIT: made it quicker and nicer thanks to this suggestion)
import numpy as np
import matplotlib.colors

def complex_array_to_rgb(X, theme='dark', rmax=None):
    '''Takes an array of complex numbers and converts it to an array of [r, g, b],
    where phase gives hue and saturation/value are given by the absolute value.
    Especially for use with imshow for complex plots.'''
    absmax = rmax or np.abs(X).max()
    Y = np.zeros(X.shape + (3,), dtype='float')
    Y[..., 0] = np.angle(X) / (2 * np.pi) % 1
    if theme == 'light':
        Y[..., 1] = np.clip(np.abs(X) / absmax, 0, 1)
        Y[..., 2] = 1
    elif theme == 'dark':
        Y[..., 1] = 1
        Y[..., 2] = np.clip(np.abs(X) / absmax, 0, 1)
    Y = matplotlib.colors.hsv_to_rgb(Y)
    return Y
So, for example:
Z = np.array([[3 * (x + 1j*y)**3 + 1 / (x + 1j*y)**2
               for x in np.arange(-1, 1, 0.05)] for y in np.arange(-1, 1, 0.05)])

imshow(complex_array_to_rgb(Z, rmax=5), extent=(-1, 1, -1, 1))
imshow(complex_array_to_rgb(Z, rmax=5, theme='light'), extent=(-1, 1, -1, 1))
imshow will take an NxMx3 (RGB) or NxMx4 (RGBA) array, so you can do your color mapping 'by hand'.
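For instance, here is a minimal sketch of that 'by hand' mapping, where one variable drives hue and the other drives value; the function name and the min/max normalisation are just illustrative choices, not something prescribed by matplotlib:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

def bivariate_rgb(hue_data, value_data):
    # rescale both variables to [0, 1]
    h = (hue_data - hue_data.min()) / np.ptp(hue_data)
    v = (value_data - value_data.min()) / np.ptp(value_data)
    # hue from the first variable, full saturation, value from the second
    hsv = np.stack([h, np.ones_like(h), v], axis=-1)
    return hsv_to_rgb(hsv)

# e.g. for complex data Z: plt.imshow(bivariate_rgb(np.angle(Z), np.abs(Z)))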
You might be able to get a bit of traction by sub-classing Normalize to map your vector to a scalar and laying out a custom color map very cleverly (but I think this will end up having to bin one of your dimensions).
I have done something like this (pdf link, see figure on page 24), but the code is in MATLAB (and buried someplace in my archives).
I agree a bi-variate color map would be useful (primarily for representing very dense vector fields, where you're kind of up the creek no matter what you do).
I think the obvious extension is to let color maps take complex arguments. It would require specialized sub-classes of Normalize and Colormap, and I am going back and forth on whether it would be a lot of work to implement. I suspect that if you get it working by hand, it will just be a matter of API wrangling.
I created an easy-to-use 2D colormap class that takes two NumPy arrays and maps them to an RGB image, based on a reference image.
I used @GjjvdBurg's answer as a starting point. With a bit of work, this could still be improved and possibly turned into a proper Python module; if you want, feel free to do so, and I grant you full credit.
TL;DR:
# read reference image
cmap_2d = ColorMap2D('const_chroma.jpeg', reverse_x=True) # , xclip=(0,0.9))
# map the data x and y to the RGB space, defined by the image
rgb = cmap_2d(data_x, data_y)
# generate a colorbar image
cbar_rgb = cmap_2d.generate_cbar()
The ColorMap2D class:
import numpy as np
import matplotlib.pyplot as plt


class ColorMap2D:
    def __init__(self, filename: str, transpose=False, reverse_x=False, reverse_y=False, xclip=None, yclip=None):
        """
        Maps two 2D arrays to an RGB color space based on a given reference image.

        Args:
            filename (str): reference image to read the x-y colors from
            transpose (bool): if True, transpose the reference image (swap x and y axes)
            reverse_x (bool): if True, reverse the x scale on the reference
            reverse_y (bool): if True, reverse the y scale on the reference
            xclip (tuple): clip the image to this portion on the x scale; (0, 1) is the whole image
            yclip (tuple): clip the image to this portion on the y scale; (0, 1) is the whole image
        """
        self._colormap_file = filename or COLORMAP_FILE
        self._img = plt.imread(self._colormap_file)
        if transpose:
            self._img = self._img.transpose()
        if reverse_x:
            self._img = self._img[::-1, :, :]
        if reverse_y:
            self._img = self._img[:, ::-1, :]
        if xclip is not None:
            imin, imax = map(lambda x: int(self._img.shape[0] * x), xclip)
            self._img = self._img[imin:imax, :, :]
        if yclip is not None:
            imin, imax = map(lambda x: int(self._img.shape[1] * x), yclip)
            self._img = self._img[:, imin:imax, :]
        if issubclass(self._img.dtype.type, np.integer):
            self._img = self._img / 255.0

        self._width = len(self._img)
        self._height = len(self._img[0])

        self._range_x = (0, 1)
        self._range_y = (0, 1)

    @staticmethod
    def _scale_to_range(u: np.ndarray, u_min: float, u_max: float) -> np.ndarray:
        return (u - u_min) / (u_max - u_min)

    def _map_to_x(self, val: np.ndarray) -> np.ndarray:
        xmin, xmax = self._range_x
        val = self._scale_to_range(val, xmin, xmax)
        rescaled = (val * (self._width - 1))
        return rescaled.astype(int)

    def _map_to_y(self, val: np.ndarray) -> np.ndarray:
        ymin, ymax = self._range_y
        val = self._scale_to_range(val, ymin, ymax)
        rescaled = (val * (self._height - 1))
        return rescaled.astype(int)

    def __call__(self, val_x, val_y):
        """
        Take val_x and val_y, and associate the RGB values
        from the reference picture to each item. val_x and val_y
        must have the same shape.
        """
        if val_x.shape != val_y.shape:
            raise ValueError(f'x and y arrays must have the same shape, but have {val_x.shape} and {val_y.shape}.')
        self._range_x = (np.amin(val_x), np.amax(val_x))
        self._range_y = (np.amin(val_y), np.amax(val_y))
        x_indices = self._map_to_x(val_x)
        y_indices = self._map_to_y(val_y)
        i_xy = np.stack((x_indices, y_indices), axis=-1)
        rgb = np.zeros((*val_x.shape, 3))
        for indices in np.ndindex(val_x.shape):
            img_indices = tuple(i_xy[indices])
            rgb[indices] = self._img[img_indices]
        return rgb

    def generate_cbar(self, nx=100, ny=100):
        "generate an image that can be used as a 2D colorbar"
        x = np.linspace(0, 1, nx)
        y = np.linspace(0, 1, ny)
        return self.__call__(*np.meshgrid(x, y))
Usage:
Full example, using the constant chroma reference taken from here as a screenshot:
# generate data
x = y = np.linspace(-2, 2, 300)
xx, yy = np.meshgrid(x, y)
ampl = np.exp(-(xx ** 2 + yy ** 2))
phase = (xx ** 2 - yy ** 2) * 6 * np.pi
data = ampl * np.exp(1j * phase)
data_x, data_y = np.abs(data), np.angle(data)
# Here is the 2D colormap part
cmap_2d = ColorMap2D('const_chroma.jpeg', reverse_x=True) # , xclip=(0,0.9))
rgb = cmap_2d(data_x, data_y)
cbar_rgb = cmap_2d.generate_cbar()
# plot the data
fig, plot_ax = plt.subplots(figsize=(8, 6))
plot_extent = (x.min(), x.max(), y.min(), y.max())
plot_ax.imshow(rgb, aspect='auto', extent=plot_extent, origin='lower')
plot_ax.set_xlabel('x')
plot_ax.set_ylabel('y')
plot_ax.set_title('data')
# create a 2D colorbar and make it fancy
plt.subplots_adjust(left=0.1, right=0.65)
bar_ax = fig.add_axes([0.68, 0.15, 0.15, 0.3])
cmap_extent = (data_x.min(), data_x.max(), data_y.min(), data_y.max())
bar_ax.imshow(cbar_rgb, extent=cmap_extent, aspect='auto', origin='lower',)
bar_ax.set_xlabel('amplitude')
bar_ax.set_ylabel('phase')
bar_ax.yaxis.tick_right()
bar_ax.yaxis.set_label_position('right')
for item in ([bar_ax.title, bar_ax.xaxis.label, bar_ax.yaxis.label] +
             bar_ax.get_xticklabels() + bar_ax.get_yticklabels()):
    item.set_fontsize(7)
plt.show()
I know this is an old post, but I want to help out others who may arrive late. Below is a Python function that implements complex_to_rgb from Sage. Note: this implementation isn't optimal, but it is readable. See links: (examples) (source code)
Code:
import numpy as np
def complex_to_rgb(z_values):
    width = z_values.shape[0]
    height = z_values.shape[1]
    rgb = np.zeros(shape=(width, height, 3))
    for i in range(width):
        row = z_values[i]
        for j in range(height):
            # define value, real(value), imag(value)
            zz = row[j]
            x = np.real(zz)
            y = np.imag(zz)
            # define magnitude and argument
            magnitude = np.hypot(x, y)
            arg = np.arctan2(y, x)
            # define lightness
            lightness = np.arctan(np.log(np.sqrt(magnitude) + 1)) * (4 / np.pi) - 1
            if lightness < 0:
                bot = 0
                top = 1 + lightness
            else:
                bot = lightness
                top = 1
            # define hue
            hue = 3 * arg / np.pi
            if hue < 0:
                hue += 6
            # set ihue and use it to define rgb values based on cases
            ihue = int(hue)
            # case 1
            if ihue == 0:
                r = top
                g = bot + hue * (top - bot)
                b = bot
            # case 2
            elif ihue == 1:
                r = bot + (2 - hue) * (top - bot)
                g = top
                b = bot
            # case 3
            elif ihue == 2:
                r = bot
                g = top
                b = bot + (hue - 2) * (top - bot)
            # case 4
            elif ihue == 3:
                r = bot
                g = bot + (4 - hue) * (top - bot)
                b = top
            # case 5
            elif ihue == 4:
                r = bot + (hue - 4) * (top - bot)
                g = bot
                b = top
            # case 6
            else:
                r = top
                g = bot
                b = bot + (6 - hue) * (top - bot)
            # set rgb array values
            rgb[i, j, 0] = r
            rgb[i, j, 1] = g
            rgb[i, j, 2] = b
    return rgb
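If speed matters, the same case logic can be vectorized with NumPy. The sketch below is my own translation of the loops above (np.select picks the branch per pixel), so it should be checked against the loop version before relying on it:
import numpy as np

def complex_to_rgb_vectorized(z_values):
    x, y = np.real(z_values), np.imag(z_values)
    magnitude = np.hypot(x, y)
    arg = np.arctan2(y, x)
    # same lightness mapping as above
    lightness = np.arctan(np.log(np.sqrt(magnitude) + 1)) * (4 / np.pi) - 1
    bot = np.where(lightness < 0, 0.0, lightness)
    top = np.where(lightness < 0, 1 + lightness, 1.0)
    # hue in [0, 6), split into six integer cases
    hue = 3 * arg / np.pi
    hue = np.where(hue < 0, hue + 6, hue)
    ihue = hue.astype(int)
    span = top - bot
    cases = [ihue == 0, ihue == 1, ihue == 2, ihue == 3, ihue == 4]
    r = np.select(cases, [top, bot + (2 - hue) * span, bot, bot, bot + (hue - 4) * span],
                  default=top)
    g = np.select(cases, [bot + hue * span, top, top, bot + (4 - hue) * span, bot],
                  default=bot)
    b = np.select(cases, [bot, bot, bot + (hue - 2) * span, top, top],
                  default=bot + (6 - hue) * span)
    return np.dstack((r, g, b))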
TL;DR: NumPy's FFT creates non-uniform output when I expect the output to be uniform. I want the output to be a uniform corona.
I am trying to eventually run a Gerchberg-Saxton phase retrieval algorithm. I have been trying to make sure that I understand how the FFT works in NumPy. I have used fftshift to create the correct-looking output, but the image does not have uniform intensity afterwards.
My input image is a circle, output should be a coronagraph looking thing from the circle aperture. I am trying to reproduce the results detailed in https://www.osapublishing.org/optica/fulltext.cfm?uri=optica-2-2-147&id=311836#articleSupplMat.
My algorithm to produce the error:
Initial image, f
FT(f)
Multiply by exp(i * phase_mask)
IFT(FT(f) * exp(i * phase_mask))
Happy to clear anything up.
import numpy as np
import matplotlib.pyplot as plt
#Create 'pixels' for circle
pixels = 400
edge = np.linspace(-10, 10, pixels)
xv, yv = np.meshgrid(edge, edge)
def circle(x, y, r):
    '''
    x, y : dimensions of grid to place circle on
    r : radius
    Function defines aperture
    '''
    x0 = 0
    y0 = 0
    return np.select([((x-x0)**2 + (y-y0)**2) >= r**2,
                      ((x-x0)**2 + (y-y0)**2) < r**2],
                     [0,
                      1.])
#Create input and output images
radius = 4
input_img = circle(xv, yv, radius)
constraint_img = circle(xv, yv, radius)
img = input_img
constraint = 1 - img
max_iter = 10
re,im = np.mgrid[-1:1:400j, -1:1:400j] #Creates grid of values, 400=pixels
mask = 2*np.angle(re + 1j*im) #Gets angle from centre of grid
mask_i = mask
#Initial focal plane field, F. Initial image f.
f = np.sqrt(img)
F = np.fft.fftshift(np.fft.fft2(f)) * np.exp(mask * 1j) #Focal plane field
F_1 = F
am_f = np.abs(F_1) #Initial amplitude
g = np.fft.ifft2(F)
mask = np.angle(F/(F_1+1e-18)) #Final phase mask
recovery = (np.fft.ifft2(F*np.exp(-1j * mask)))
im3 = plt.imshow(np.abs(g)**2, cmap='gray')
plt.title('Recovered image')
plt.tight_layout()
plt.show()
plt.imshow(mask_i)
plt.colorbar()
plt.show()
Your issue is in this bit of code:
pixels = 400
edge = np.linspace(-10, 10, pixels)
as well as this one:
re,im = np.mgrid[-1:1:400j, -1:1:400j]
Because you use fftshift*, you need the origin to be at pixels//2. However, you don't sample the origin at all; it falls in between two samples.
* You should really be using ifftshift instead, which moves the origin from pixels//2 to 0. fftshift moves the origin from 0 to pixels//2. For an even number of samples, these two do the same thing though.
To properly sample the origin, create edge as follows:
edge = np.linspace(-10, 10, pixels, endpoint=False)
We now see that edge[pixels//2] is equal to 0.
For np.mgrid there's no equivalent option. You will have to do this manually by creating one more sample, then deleting the last sample:
re,im = np.mgrid[-1:1:401j, -1:1:401j] #Creates grid with one extra sample (401 = pixels + 1)
mask = 2*np.angle(re + 1j*im) #Gets angle from centre of grid
mask = mask[:-1, :-1]
With these two changes, you will see a symmetric output.
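Putting both changes together, a minimal sketch (reusing the question's pixels = 400, grid ranges and radius; the aperture is rebuilt here with np.where just for brevity) looks like this:
import numpy as np

pixels = 400
edge = np.linspace(-10, 10, pixels, endpoint=False)
print(edge[pixels // 2])   # 0.0 -- the origin now falls exactly on a sample

# one extra sample in the grid, then drop the last row and column
re, im = np.mgrid[-1:1:401j, -1:1:401j]
mask = 2 * np.angle(re + 1j * im)
mask = mask[:-1, :-1]

# circular aperture of radius 4, origin at index pixels//2
xv, yv = np.meshgrid(edge, edge)
f = np.where(xv**2 + yv**2 < 4**2, 1.0, 0.0)

# one common arrangement: ifftshift moves the centred origin to index 0 before
# the FFT, and fftshift re-centres the spectrum afterwards for display
F = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(f))) * np.exp(1j * mask)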
I have the following code that generates a Mandelbrot image. The white space around the image has to be removed.
import numpy as np
import matplotlib.pyplot as plt
from pylab import *
from numpy import NaN
def mandelbrot(C):
    z = 0
    for n in range(1, 10):
        z = z**2 + C
        if abs(z) > 2:
            return n
    return NaN

def plot():
    X = np.arange(-2.0, 1.0, 0.05)
    Y = np.arange(-1.5, 1.5, 0.05)
    pixel = np.zeros((len(Y), len(X)))
    for x_iter, x in enumerate(X):
        for y_iter, y in enumerate(Y):
            pixel[y_iter, x_iter] = mandelbrot(x + 1j * y)
    imshow(pixel, cmap='gray', extent=(X.min(), X.max(), Y.min(), Y.max()))
    return pixel
pixel = mandelbrot(-0.7 + 0.27015j)
plt.axis('off')
plot()
plt.show()
from PIL import Image
min_value = np.nanmin(pixel)
max_value = np.nanmax(pixel)
pixel_int = (255*(pixel-min_value)/(max_value-min_value)).astype(np.uint8)
# sample LUT from matplotlib
lut = (plt.cm.viridis(np.arange(256)) * 255).astype(np.uint8) # CHOOSE COLORMAP HERE viridis, jet, rainbow
pixel_rgb = lut[pixel_int]
# changing NaNs to a chosen color
nan_color = [0,0,0,0] # Transparent NaNs
for i,c in enumerate(nan_color):
    pixel_rgb[:,:,i] = np.where(np.isnan(pixel),c,pixel_rgb[:,:,i])
# apply LUT and display
img = Image.fromarray(pixel_rgb, 'RGBA')
print(pixel)
But it raises IndexError: too many indices for array at the line
pixel_rgb[:,:,i] = np.where(np.isnan(pixel),c,pixel_rgb[:,:,i])
How can I fix it, please?
Actually, the same code (same line) for getting rid of the white space around the image had worked for a Julia set instead of the Mandelbrot set a few weeks ago. The following code, which generates the Julia image, does get rid of the white space around the image.
import numpy as np
import matplotlib.pyplot as plt
def julia(C):
    X = np.arange(-1.5, 1.5, 0.05)
    Y = np.arange(-1.5, 1.5, 0.05)
    pixel = np.zeros((len(Y), len(X)))
    for x_iter, x in enumerate(X):
        for y_iter, y in enumerate(Y):
            z = x + 1j * y
            intensity = np.nan
            r = np.empty((100, 100))  # Unused at the moment
            for n in range(1, 1024):
                if abs(z) > 2:
                    intensity = n
                    break
                z = z**2 + C
            pixel[y_iter, x_iter] = intensity
            r.fill(intensity)  # Unused at the moment
    # We return pixel matrix
    return pixel
# Compute Julia set image
pixel = julia(-0.7 + 0.27015j)
# Plotting
print(pixel)
plt.show()
from PIL import Image
min_value = np.nanmin(pixel)
max_value = np.nanmax(pixel)
#want to set all the 255 pixels to removed
pixel_int = (255*(pixel-min_value)/(max_value-min_value)).astype(np.uint8)
# sample LUT from matplotlib,If lut is not None it must be an integer giving the number of entries desired in the lookup table
lut = (plt.cm.viridis(np.arange(256)) * 255).astype(np.uint8) # CHOOSE COLORMAP HERE viridis, jet, rainbow
pixel_rgb = lut[pixel_int]
# changing NaNs to a chosen color
nan_color = [0,0,0,0] # Transparent NaNs
for i,c in enumerate(nan_color):
    pixel_rgb[:,:,i] = np.where(np.isnan(pixel),c,pixel_rgb[:,:,i])
# apply LUT and display
img = Image.fromarray(pixel_rgb, 'RGBA')
img.save('julia.tiff')
Image.open('julia.tiff').show()
print(min_value, max_value)
Now, I just don't know why this code for getting rid of the white space around the image doesn't work for the Mandelbrot set. Please help me figure out the problem!
Your direct problem is that in the Julia case, pixel_rgb is a three-dimensional array, whereas in the Mandelbrot case, pixel_rgb is a one-dimensional array. So you're trying to apply a three-dimensional transform to each of them, and this blows up for the Mandelbrot case, because what you're operating on has only a single dimension, not three.
I don't have more time to completely understand and play with your code, but in the Mandelbrot case, it seems that the mandelbrot() function only returns a single value, whereas the julia() function returns a 2D array. It is the plot() function that returns a 2D array in the Mandelbrot case. So my quick guess at the change you want to make is to change this:
pixel = mandelbrot(-0.7 + 0.27015j)
plt.axis('off')
plot()
to this:
# pixel = mandelbrot(-0.7 + 0.27015j)
plt.axis('off')
pixel = plot()
This allows the Mandelbrot code to run without crashing. I don't know if it's doing exactly what you want though.
I have an image:
>>> image.shape
(720, 1280)
My image is a binary array of 0s and 255s. I've done some cursory edge detection, and now I want to fit a polynomial through the points.
I want to see these points back on my original image, in image-space.
As far as I can tell, the standard way to do this is to unwrap the x,y image data with a reshape, fit on the unwrapped version, then reshape back into the original image.
pts = np.array(image).reshape((-1, 2))
xdata = pts[:,0]
ydata = pts[:,1]
z1 = np.polyfit(xdata, ydata, 1)
z2 = np.polyfit(xdata, ydata, 2) # or quadratic...
f = np.poly1d(z1)  # or z2 for the quadratic fit
Now that I have this function, f, how do I use it to paint my lines in the original image space?
In particular:
What's the right inverse indexing of .reshape() to get back into image space?
This seems a bit cumbersome. Is this reshape-reshape dance a common thing in image processing? Is what is described above the standard way to do this, or is there a different approach?
If mapping onto the 720, 1280, 1 array is called the image space, what is the reshaped space called? data-space? Linearized space?
You don't need to do this. You can combine np.nonzero, np.polyfit and np.polyval to do this. It would look like this:
import numpy as np
from matplotlib import pyplot as plt
# in your case, you would read your image
# > cv2.imread(...) # import cv2 before
# but we are going to create an image based on a polynomial
img = np.zeros((400, 400), dtype=np.uint8)
h, w = img.shape
xs = np.arange(150, 250)
ys = np.array(list(map(lambda x: 0.01 * x**2 - 4*x + 600, xs))).astype(int)
img[h - ys, xs] = 255
# I could use the values I have, but if you have a binary image,
# you will need to get them, and you could do something like this
ys, xs = np.nonzero(img) # use (255-img) if your image is inverted
ys = h - ys
# compute the coefficients
coefs = np.polyfit(xs, ys, 2)
xx = np.arange(0, w).astype(int)
yy = h - np.polyval(coefs, xx)
# filter those ys out of the image, because we are going to use as index
xx = xx[(0 <= yy) & (yy < h)]
yy = yy[(0 <= yy) & (yy < h)].astype(int) # convert to int to use as index
# create and display a color image just to viz the result
color_img = np.repeat(img[:, :, np.newaxis], 3, axis=2)
color_img[yy, xx, 0] = 255 # 0 because pyplot is RGB
f, ax = plt.subplots(1, 2)
ax[0].imshow(img, cmap='gray')
ax[0].set_title('Binary')
ax[1].imshow(color_img)
ax[1].set_title('Polynomial')
plt.show()
The results look like this:
If you print coefs, you will have [ 1.00486819e-02 -4.01966712e+00 6.01540472e+02] which are very close to the [0.01, -4, 600] we chose.
I need to find the extent of a plot including its related artists (in this case just ticks and ticklabels) in axis coordinates (as defined in the matplotlib transformations tutorial).
The background to this is that I am automatically creating thumbnail plots (as in this SO question) for a large number of charts, but only when I can position the thumbnail so that it does not obscure data in the original plot.
This is my current approach:
Create a number of candidate rectangles to test, starting at the top-right of the original plot and working left, then the bottom-right of the original plot and move left.
For each candidate rectangle:
Using code from this SO question, convert the left- and right-hand sides of the rect (in axis coordinates) into data coordinates, to find which slice of the x-data the rectangle will cover.
Find the minimum / maximum y-value for the slice of data the rectangle covers.
Find the top and bottom of the rectangle in data coordinates.
Using the above, determine whether the rectangle overlaps with any data. If not, draw the thumbnail plot in the current rectangle, otherwise continue.
The problem with this approach is that axis coordinates give you the extent of the axes from (0,0) (bottom-left) to (1,1) (top-right) and do not include ticks and ticklabels (the thumbnail plots do not have titles, axis labels, legends or other artists).
All charts use the same font sizes, but the charts have ticklabels of different lengths (e.g. 1.5 or 1.2345 * 10^6), although these are known before the inset is drawn. Is there a way to convert from font sizes / points to axis coordinates? Alternatively, maybe there is a better approach than the one above (bounding boxes?).
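One rough idea (only a sketch; I have not integrated it into the code below, and the helper name is hypothetical) is to convert a length in points to a fraction of the axes via the figure DPI and the axes' window extent:
import matplotlib.pyplot as plt

def points_to_axes_fraction(axis, points):
    # 1 point = 1/72 inch, so points -> pixels via the figure DPI,
    # then pixels -> fraction of the axes width/height
    fig = axis.get_figure()
    pixels = points * fig.dpi / 72.0
    bbox = axis.get_window_extent()
    return pixels / bbox.width, pixels / bbox.height

fig, ax = plt.subplots()
fig.canvas.draw()  # make sure the layout is up to date
print(points_to_axes_fraction(ax, 8))  # rough size of an 8 pt label in axis coordinates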
The following code implements the algorithm above:
import math

from matplotlib import pyplot, rcParams

rcParams['xtick.direction'] = 'out'
rcParams['ytick.direction'] = 'out'

INSET_DEFAULT_WIDTH = 0.35
INSET_DEFAULT_HEIGHT = 0.25
INSET_PADDING = 0.05
INSET_TICK_FONTSIZE = 8


def axis_data_transform(axis, xin, yin, inverse=False):
    """Translate between axis and data coordinates.
    If 'inverse' is True, data coordinates are translated to axis coordinates,
    otherwise the transformation is reversed.
    Code by Covich, from: https://stackoverflow.com/questions/29107800/
    """
    xlim, ylim = axis.get_xlim(), axis.get_ylim()
    xdelta, ydelta = xlim[1] - xlim[0], ylim[1] - ylim[0]
    if not inverse:
        xout, yout = xlim[0] + xin * xdelta, ylim[0] + yin * ydelta
    else:
        xdelta2, ydelta2 = xin - xlim[0], yin - ylim[0]
        xout, yout = xdelta2 / xdelta, ydelta2 / ydelta
    return xout, yout


def add_inset_to_axis(fig, axis, rect):
    left, bottom, width, height = rect
    def transform(coord):
        return fig.transFigure.inverted().transform(
            axis.transAxes.transform(coord))
    fig_left, fig_bottom = transform((left, bottom))
    fig_width, fig_height = transform([width, height]) - transform([0, 0])
    return fig.add_axes([fig_left, fig_bottom, fig_width, fig_height])


def collide_rect((left, bottom, width, height), fig, axis, data):
    # Find the values on the x-axis of left and right edges of the rect.
    x_left_float, _ = axis_data_transform(axis, left, 0, inverse=False)
    x_right_float, _ = axis_data_transform(axis, left + width, 0, inverse=False)
    x_left = int(math.floor(x_left_float))
    x_right = int(math.ceil(x_right_float))
    # Find the highest and lowest y-value in that segment of data.
    minimum_y = min(data[int(x_left):int(x_right)])
    maximum_y = max(data[int(x_left):int(x_right)])
    # Convert the bottom and top of the rect to data coordinates.
    _, inset_top = axis_data_transform(axis, 0, bottom + height, inverse=False)
    _, inset_bottom = axis_data_transform(axis, 0, bottom, inverse=False)
    # Detect collision.
    if ((bottom > 0.5 and maximum_y > inset_bottom) or  # inset at top of chart
            (bottom < 0.5 and minimum_y < inset_top)):  # inset at bottom
        return True
    return False


if __name__ == '__main__':
    x_data, y_data = range(0, 100), [-1.0] * 50 + [1.0] * 50  # Square wave.
    y_min, y_max = min(y_data), max(y_data)
    fig = pyplot.figure()
    axis = fig.add_subplot(111)
    axis.set_ylim(y_min - 0.1, y_max + 0.1)
    axis.plot(x_data, y_data)
    # Find a rectangle that does not collide with data. Start top-right
    # and work left, then try bottom-right and work left.
    inset_collides = False
    left_offsets = [x / 10.0 for x in xrange(6)] * 2
    bottom_values = (([1.0 - INSET_DEFAULT_HEIGHT - INSET_PADDING] * (len(left_offsets) / 2))
                     + ([INSET_PADDING * 2] * (len(left_offsets) / 2)))
    for left_offset, bottom in zip(left_offsets, bottom_values):
        # rect: (left, bottom, width, height)
        rect = (1.0 - INSET_DEFAULT_WIDTH - left_offset - INSET_PADDING,
                bottom, INSET_DEFAULT_WIDTH, INSET_DEFAULT_HEIGHT)
        inset_collides = collide_rect(rect, fig, axis, y_data)
        print 'TRYING:', rect, 'RESULT:', inset_collides
        if not inset_collides:
            break
    if not inset_collides:
        inset = add_inset_to_axis(fig, axis, rect)
        inset.set_ylim(axis.get_ylim())
        inset.set_yticks([y_min, y_min + ((y_max - y_min) / 2.0), y_max])
        inset.xaxis.set_tick_params(labelsize=INSET_TICK_FONTSIZE)
        inset.yaxis.set_tick_params(labelsize=INSET_TICK_FONTSIZE)
        inset_xlimit = (0, int(len(y_data) / 100.0 * 2.5))  # First 2.5% of data.
        inset.set_xlim(inset_xlimit[0], inset_xlimit[1], auto=False)
        inset.plot(x_data[inset_xlimit[0]:inset_xlimit[1] + 1],
                   y_data[inset_xlimit[0]:inset_xlimit[1] + 1])
    fig.savefig('so_example.png')
And the output of this is:
TRYING: (0.6, 0.7, 0.35, 0.25) RESULT: True
TRYING: (0.5, 0.7, 0.35, 0.25) RESULT: True
TRYING: (0.4, 0.7, 0.35, 0.25) RESULT: True
TRYING: (0.30000000000000004, 0.7, 0.35, 0.25) RESULT: True
TRYING: (0.2, 0.7, 0.35, 0.25) RESULT: True
TRYING: (0.10000000000000002, 0.7, 0.35, 0.25) RESULT: False
My solution doesn't seem to detect tick marks, but does take care of the tick labels, axis labels and the figure title. Hopefully it's enough though, since a fixed pad value should be fine to account for the tick marks.
Use axes.get_tightbbox to obtain a rectangle that fits around the axes including labels.
from matplotlib import tight_layout
renderer = tight_layout.get_renderer(fig)
inset_tight_bbox = inset.get_tightbbox(renderer)
Your original rectangle corresponds to the axis bbox, inset.bbox. Find the rectangles in axis coordinates for these two bboxes:
inv_transform = axis.transAxes.inverted()
xmin, ymin = inv_transform.transform(inset.bbox.min)
xmin_tight, ymin_tight = inv_transform.transform(inset_tight_bbox.min)
xmax, ymax = inv_transform.transform(inset.bbox.max)
xmax_tight, ymax_tight = inv_transform.transform(inset_tight_bbox.max)
Now calculate a new rectangle for the axis itself, such that the outer tight bbox will be reduced in size to the old axis bbox:
xmin_new = xmin + (xmin - xmin_tight)
ymin_new = ymin + (ymin - ymin_tight)
xmax_new = xmax - (xmax_tight - xmax)
ymax_new = ymax - (ymax_tight - ymax)
Now, just switch back to figure coordinates and reposition the inset axes:
[x_fig,y_fig] = axis_to_figure_transform([xmin_new, ymin_new])
[x2_fig,y2_fig] = axis_to_figure_transform([xmax_new, ymax_new])
inset.set_position ([x_fig, y_fig, x2_fig - x_fig, y2_fig - y_fig])
The function axis_to_figure_transform is based on your transform function from add_inset_to_axis:
def axis_to_figure_transform(coord, axis):
    return fig.transFigure.inverted().transform(
        axis.transAxes.transform(coord))
Note: this doesn't work with fig.show(), at least on my system; tight_layout.get_renderer(fig) causes an error. However, it works fine if you're only using savefig() and not displaying the plot interactively.
Finally, here's your full code with my changes and additions:
import math

from matplotlib import pyplot, rcParams, tight_layout

rcParams['xtick.direction'] = 'out'
rcParams['ytick.direction'] = 'out'

INSET_DEFAULT_WIDTH = 0.35
INSET_DEFAULT_HEIGHT = 0.25
INSET_PADDING = 0.05
INSET_TICK_FONTSIZE = 8


def axis_data_transform(axis, xin, yin, inverse=False):
    """Translate between axis and data coordinates.
    If 'inverse' is True, data coordinates are translated to axis coordinates,
    otherwise the transformation is reversed.
    Code by Covich, from: http://stackoverflow.com/questions/29107800/
    """
    xlim, ylim = axis.get_xlim(), axis.get_ylim()
    xdelta, ydelta = xlim[1] - xlim[0], ylim[1] - ylim[0]
    if not inverse:
        xout, yout = xlim[0] + xin * xdelta, ylim[0] + yin * ydelta
    else:
        xdelta2, ydelta2 = xin - xlim[0], yin - ylim[0]
        xout, yout = xdelta2 / xdelta, ydelta2 / ydelta
    return xout, yout


def axis_to_figure_transform(coord, axis):
    return fig.transFigure.inverted().transform(
        axis.transAxes.transform(coord))


def add_inset_to_axis(fig, axis, rect):
    left, bottom, width, height = rect
    fig_left, fig_bottom = axis_to_figure_transform((left, bottom), axis)
    fig_width, fig_height = axis_to_figure_transform([width, height], axis) \
                            - axis_to_figure_transform([0, 0], axis)
    return fig.add_axes([fig_left, fig_bottom, fig_width, fig_height], frameon=True)


def collide_rect((left, bottom, width, height), fig, axis, data):
    # Find the values on the x-axis of left and right edges of the rect.
    x_left_float, _ = axis_data_transform(axis, left, 0, inverse=False)
    x_right_float, _ = axis_data_transform(axis, left + width, 0, inverse=False)
    x_left = int(math.floor(x_left_float))
    x_right = int(math.ceil(x_right_float))
    # Find the highest and lowest y-value in that segment of data.
    minimum_y = min(data[int(x_left):int(x_right)])
    maximum_y = max(data[int(x_left):int(x_right)])
    # Convert the bottom and top of the rect to data coordinates.
    _, inset_top = axis_data_transform(axis, 0, bottom + height, inverse=False)
    _, inset_bottom = axis_data_transform(axis, 0, bottom, inverse=False)
    # Detect collision.
    if ((bottom > 0.5 and maximum_y > inset_bottom) or  # inset at top of chart
            (bottom < 0.5 and minimum_y < inset_top)):  # inset at bottom
        return True
    return False


if __name__ == '__main__':
    x_data, y_data = range(0, 100), [-1.0] * 50 + [1.0] * 50  # Square wave.
    y_min, y_max = min(y_data), max(y_data)
    fig = pyplot.figure()
    axis = fig.add_subplot(111)
    axis.set_ylim(y_min - 0.1, y_max + 0.1)
    axis.plot(x_data, y_data)
    # Find a rectangle that does not collide with data. Start top-right
    # and work left, then try bottom-right and work left.
    inset_collides = False
    left_offsets = [x / 10.0 for x in xrange(6)] * 2
    bottom_values = (([1.0 - INSET_DEFAULT_HEIGHT - INSET_PADDING] * (len(left_offsets) / 2))
                     + ([INSET_PADDING * 2] * (len(left_offsets) / 2)))
    for left_offset, bottom in zip(left_offsets, bottom_values):
        # rect: (left, bottom, width, height)
        rect = (1.0 - INSET_DEFAULT_WIDTH - left_offset - INSET_PADDING,
                bottom, INSET_DEFAULT_WIDTH, INSET_DEFAULT_HEIGHT)
        inset_collides = collide_rect(rect, fig, axis, y_data)
        print 'TRYING:', rect, 'RESULT:', inset_collides
        if not inset_collides:
            break
    if not inset_collides:
        inset = add_inset_to_axis(fig, axis, rect)
        inset.set_ylim(axis.get_ylim())
        inset.set_yticks([y_min, y_min + ((y_max - y_min) / 2.0), y_max])
        inset.xaxis.set_tick_params(labelsize=INSET_TICK_FONTSIZE)
        inset.yaxis.set_tick_params(labelsize=INSET_TICK_FONTSIZE)
        inset_xlimit = (0, int(len(y_data) / 100.0 * 2.5))  # First 2.5% of data.
        inset.set_xlim(inset_xlimit[0], inset_xlimit[1], auto=False)
        inset.plot(x_data[inset_xlimit[0]:inset_xlimit[1] + 1],
                   y_data[inset_xlimit[0]:inset_xlimit[1] + 1])

        # borrow this function from tight_layout
        renderer = tight_layout.get_renderer(fig)
        inset_tight_bbox = inset.get_tightbbox(renderer)

        # uncomment this to show where the two bboxes are
        # def show_bbox_on_plot(ax, bbox, color='b'):
        #     inv_transform = ax.transAxes.inverted()
        #     xmin, ymin = inv_transform.transform(bbox.min)
        #     xmax, ymax = inv_transform.transform(bbox.max)
        #     axis.add_patch(pyplot.Rectangle([xmin, ymin], xmax - xmin, ymax - ymin,
        #                                     transform=axis.transAxes, color=color))
        #
        # show_bbox_on_plot(axis, inset_tight_bbox)
        # show_bbox_on_plot(axis, inset.bbox, color='g')

        inv_transform = axis.transAxes.inverted()
        xmin, ymin = inv_transform.transform(inset.bbox.min)
        xmin_tight, ymin_tight = inv_transform.transform(inset_tight_bbox.min)
        xmax, ymax = inv_transform.transform(inset.bbox.max)
        xmax_tight, ymax_tight = inv_transform.transform(inset_tight_bbox.max)

        # shift actual axis bounds inwards by "margin" so that new size + margin
        # is original axis bounds
        xmin_new = xmin + (xmin - xmin_tight)
        ymin_new = ymin + (ymin - ymin_tight)
        xmax_new = xmax - (xmax_tight - xmax)
        ymax_new = ymax - (ymax_tight - ymax)

        [x_fig, y_fig] = axis_to_figure_transform([xmin_new, ymin_new], axis)
        [x2_fig, y2_fig] = axis_to_figure_transform([xmax_new, ymax_new], axis)

        inset.set_position([x_fig, y_fig, x2_fig - x_fig, y2_fig - y_fig])

    fig.savefig('so_example.png')
To get the tight bbox of an axis in figure coordinates, use
from matplotlib.transforms import TransformedBbox

def tight_bbox(ax):
    fig = ax.get_figure()
    tight_bbox_raw = ax.get_tightbbox(fig.canvas.get_renderer())
    tight_bbox_fig = TransformedBbox(tight_bbox_raw, fig.transFigure.inverted())
    return tight_bbox_fig
This can for example be used to place labels relative to the axes in figure coordinates just outside the tight bounding box.
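As a minimal usage sketch (the figure, data and label text here are only for illustration), the returned bbox can anchor a figure-level label just outside the axes' tight extent:
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
fig.canvas.draw()                    # ensure a renderer exists

bbox = tight_bbox(ax)                # tight bbox in figure coordinates
# place text just above the tight bounding box, flush with its left edge
fig.text(bbox.x0, bbox.y1 + 0.01, 'label outside the tight bbox', va='bottom')
fig.savefig('labelled.png')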