Matplotlib draw vertical lines up to a curve - python

I am currently using Rectangle in an attempt to fill an area under a curve with a single colour per rectangle. However, the Rectangles end up more than 1 pixel wide. I want to draw lines 1 pixel wide so that they don't overlap. Currently the vertical rectangles under the curve overlap horizontally by one or two pixels.
def rect(x, y, w, h, c):
    ax = plt.gca()
    polygon = plt.Rectangle((x, y), w, h, color=c, antialiased=False)
    ax.add_patch(polygon)

def mask_fill(X, Y, fa, cmap='Set1'):
    plt.plot(X, Y, lw=0)
    plt.xlim([X[0], X[-1]])
    plt.ylim([0, MAX])
    dx = X[1] - X[0]
    for n, (x, y, f) in enumerate(zip(X, Y, fa)):
        color = cmap(f)
        rect(x, 0, dx, y, color)
If I use the code below to draw vertical lines instead, the overlap is reduced but still present:
def vlines(x_pos, y1, y2, c):
    plt.vlines(x_pos, ymin=y1, ymax=y2, color=c)

def draw_lines(X, Y, trend_len, cmap='Blues_r'):
    plt.plot(X, Y, lw=0)
    plt.xlim([X[0], X[-1]])
    plt.ylim([0, MAX])
    dx = X[1] - X[0]
    ydeltas = y_trend(Y, trend_len)
    for n, (x, y, yd) in enumerate(zip(X, Y, ydeltas)):
        color = cmap(y / MAX)
        vlines(x, y1=0, y2=y, c=color)
Printing the first three iterations of the values passed to vlines, we can see that x_pos increments by 1, yet the red line clearly overlaps the first blue line, as shown in the image below (NB: the first (LHS) blue line is 1 pixel wide):
x_pos: 0, y1: 0, y2: 143.51, c: (0.7816378316032295, 0.8622683583237216, 0.9389773164167627, 1.0)
x_pos: 1, y1: 0, y2: 112.79092811646952, c: (0.9872049211841599, 0.5313341022683583, 0.405843906189927, 1.0)
x_pos: 2, y1: 0, y2: 123.53185623293905, c: (0.9882352941176471, 0.6059669357939254, 0.4853671664744329, 1.0)
Sample data:
47.8668447889, 1
78.5668447889, 1
65.9768447889, 1
139.658525932, 2
123.749454049, 2
116.660382165, 3
127.771310282, 3
114.792238398, 3
The first column above corresponds to the y value of the series (the x values are simply the index of each value, counting from 0).
The second column corresponds to the class.
I am generating two images:
The first uses unique values per class (0-6), each a different colour (7 unique colours), with the colour filled up to the y value; this will be used as a mask over the data image below.
The second image (example shown) uses a different colour map for each class value (e.g. 0=Blues_r, 1=Reds_r, etc.), and the intensity of the colour is given by the value of y.
The code for calculating the colours is fine, but I just can't get matplotlib to plot vertical lines a single pixel wide.

Since your goal is not to create an interactive figure, and you are trying to manipulate columns of pixels, you can use numpy instead of matplotlib to generate the result.
Here is a function that will take in y and category arrays, and create an image that's as wide as y is long, with the specified height. Color scaling is done similarly to your solution, where y is divided by the max.
from matplotlib import pyplot as plt
import numpy as np

def draw_lines(y, category, filename, cmap='Set1', max=None, height=None):
    y = np.asanyarray(y).ravel()
    category = np.asanyarray(category).ravel()
    assert y.size == category.size

    if max is None:
        max = y.max()
    if height is None:
        height = int(np.ceil(max))
    if isinstance(cmap, str):
        cmap = plt.get_cmap(cmap)

    colors = cmap(category)
    colors[:, 3] = y / max
    colors = (255 * colors).astype(np.uint8)
    output = np.repeat(colors[None, ...], height, axis=0)

    heights = np.round(height * (y / max))
    mask = np.arange(height)[:, None] >= heights
    mask = np.broadcast_to(mask[::-1, :, None], output.shape)
    output[mask] = 0

    plt.imsave(filename, output)
    return output
The first part just sets up the input values. The second part gets the color values. Calling a colormap with an array of n values returns an (n, 4) array of colors in the range [0, 1.0]. colors[:, 3] = y / max sets the alpha channel proportional to the height. The colors are then smeared vertically to the desired height. The last part creates a mask to set the top part of each column to zero, according to the method proposed here.
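For reference, here is a minimal usage sketch of the function above, using the (rounded) sample data from the question; the output filename is an arbitrary choice:
import numpy as np

# y values and class labels from the sample data in the question (rounded)
y = np.array([47.87, 78.57, 65.98, 139.66, 123.75, 116.66, 127.77, 114.79])
cls = np.array([1, 1, 1, 2, 2, 3, 3, 3])

# one 1-pixel-wide column per sample; the height defaults to ceil(max(y))
img = draw_lines(y, cls, 'mask.png', cmap='Set1')
print(img.shape)  # (height, number of samples, 4) RGBA array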
This version uses transparency to turn off the colors, and to trim the shape. You can do the same thing with a white background, if you are willing to scale the colors instead of adjusting the transparency:
def draw_lines_b(y, category, filename, cmap='Set1', max=None, height=None):
    y = np.asanyarray(y).ravel()
    category = np.asanyarray(category).ravel()
    assert y.size == category.size

    if max is None:
        max = y.max()
    if height is None:
        height = int(np.ceil(max))
    if isinstance(cmap, str):
        cmap = plt.get_cmap(cmap)

    colors = cmap(category)
    colors[..., :3] *= (y / max)[..., None]
    colors = (255 * colors).astype(np.uint8)
    output = np.repeat(colors[None, ...], height, axis=0)

    heights = np.round(height * (y / max))
    mask = np.arange(height)[:, None] >= heights
    mask = np.broadcast_to(mask[::-1, :, None], output.shape)
    output[mask] = 255

    plt.imsave(filename, output)
    return output
In both cases, as you can imagine, matplotlib is not strictly necessary. You can define your own list of colors, and use a more appropriate library, such as PIL, to save the images.
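As a rough sketch of that idea (assuming Pillow is installed), the RGBA array returned by draw_lines above can be handed straight to PIL:
from PIL import Image

# 'output' is the (height, width, 4) uint8 RGBA array returned by draw_lines
output = draw_lines([47.87, 78.57, 65.98], [1, 1, 2], 'mask.png', cmap='Set1')

# Image.fromarray interprets a uint8 (H, W, 4) array directly as an RGBA image
Image.fromarray(output, mode='RGBA').save('mask_pil.png')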

Related

How to let matplotlib show a colormap with two orthogonal scales [duplicate]

In other words, I want to make a heatmap (or surface plot) where the color varies as a function of 2 variables. (Specifically, luminance = magnitude and hue = phase.) Is there any native way to do this?
Some examples of similar plots:
Several good examples of exactly(?) what I want to do.
More examples from astronomy, but with non-perceptual hue
Edit: This is what I did with it: https://github.com/endolith/complex_colormap
imshow can take an array of [r, g, b] entries. So you can convert the absolute values to intensities and the phases to hues.
I will use complex numbers as an example, because for them this makes the most sense. If needed, you can always combine two NumPy arrays as Z = X + 1j * Y.
So for your data Z you can use e.g.
imshow(complex_array_to_rgb(Z))
where (EDIT: made it quicker and nicer thanks to this suggestion)
import numpy as np
import matplotlib.colors
import matplotlib.pyplot as plt

def complex_array_to_rgb(X, theme='dark', rmax=None):
    '''Takes an array of complex numbers and converts it to an array of [r, g, b],
    where phase gives hue and saturation/value are given by the absolute value.
    Especially for use with imshow for complex plots.'''
    absmax = rmax or np.abs(X).max()
    Y = np.zeros(X.shape + (3,), dtype='float')
    Y[..., 0] = np.angle(X) / (2 * np.pi) % 1
    if theme == 'light':
        Y[..., 1] = np.clip(np.abs(X) / absmax, 0, 1)
        Y[..., 2] = 1
    elif theme == 'dark':
        Y[..., 1] = 1
        Y[..., 2] = np.clip(np.abs(X) / absmax, 0, 1)
    Y = matplotlib.colors.hsv_to_rgb(Y)
    return Y
So, for example:
Z = np.array([[3 * (x + 1j*y)**3 + 1 / (x + 1j*y)**2
               for x in np.arange(-1, 1, 0.05)] for y in np.arange(-1, 1, 0.05)])
plt.imshow(complex_array_to_rgb(Z, rmax=5), extent=(-1, 1, -1, 1))
plt.imshow(complex_array_to_rgb(Z, rmax=5, theme='light'), extent=(-1, 1, -1, 1))
imshow will take an NxMx3 (rgb) or NxMx4 (rgba) array, so you can do your color mapping 'by hand'.
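For instance, here is a minimal sketch of mapping two variables by hand into an RGB array (the channel assignments are arbitrary choices for illustration):
import numpy as np
import matplotlib.pyplot as plt

# two independent scalar fields on the same grid
u, v = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))

# map one variable to red and the other to blue, keep green fixed
rgb = np.dstack([u, 0.5 * np.ones_like(u), v])

plt.imshow(rgb, origin='lower')  # imshow accepts an NxMx3 float array in [0, 1]
plt.show()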
You might be able to get a bit of traction by sub-classing Normalize to map your vector to a scalar and laying out a custom color map very cleverly (but I think this will end up having to bin one of your dimensions).
I have done something like this (pdf link, see figure on page 24), but the code is in MATLAB (and buried someplace in my archives).
I agree a bi-variate color map would be useful (primarily for representing very dense vector fields where you're kinda up the creek no matter what you do).
I think the obvious extension is to let color maps take complex arguments. It would require specialized sub-classes of Normalize and Colormap, and I am going back and forth on whether I think it would be a lot of work to implement. I suspect if you get it working by hand it will just be a matter of API wrangling.
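As a very rough, hypothetical sketch of the Normalize part of that idea (it only collapses complex values to their magnitude before the usual scaling; it is not the bi-variate colormap itself):
import numpy as np
from matplotlib.colors import Normalize

class MagnitudeNormalize(Normalize):
    """Hypothetical example: normalize complex data by its magnitude."""
    def __call__(self, value, clip=None):
        # reduce complex values to a scalar before the usual [0, 1] scaling
        return super().__call__(np.abs(value), clip)

norm = MagnitudeNormalize(vmin=0, vmax=5)
print(norm(np.array([1 + 1j, 3 + 4j])))  # magnitudes ~1.41 and 5.0 mapped into [0, 1]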
I created an easy-to-use 2D colormap class that takes 2 NumPy arrays and maps them to an RGB image, based on a reference image.
I used @GjjvdBurg's answer as a starting point. With a bit of work, this could still be improved, and possibly turned into a proper Python module; if you want to, feel free to do so, and I grant you full credit.
TL;DR:
# read reference image
cmap_2d = ColorMap2D('const_chroma.jpeg', reverse_x=True) # , xclip=(0,0.9))
# map the data x and y to the RGB space, defined by the image
rgb = cmap_2d(data_x, data_y)
# generate a colorbar image
cbar_rgb = cmap_2d.generate_cbar()
The ColorMap2D class:
import numpy as np
from matplotlib import pyplot as plt

class ColorMap2D:
    def __init__(self, filename: str, transpose=False, reverse_x=False, reverse_y=False, xclip=None, yclip=None):
        """
        Maps two 2D arrays to an RGB color space based on a given reference image.
        Args:
            filename (str): reference image to read the x-y colors from
            transpose (bool): if True, transpose the reference image (swap x and y axes)
            reverse_x (bool): if True, reverse the x scale on the reference
            reverse_y (bool): if True, reverse the y scale on the reference
            xclip (tuple): clip the image to this portion on the x scale; (0,1) is the whole image
            yclip (tuple): clip the image to this portion on the y scale; (0,1) is the whole image
        """
        self._colormap_file = filename or COLORMAP_FILE
        self._img = plt.imread(self._colormap_file)
        if transpose:
            self._img = self._img.transpose()
        if reverse_x:
            self._img = self._img[::-1, :, :]
        if reverse_y:
            self._img = self._img[:, ::-1, :]
        if xclip is not None:
            imin, imax = map(lambda x: int(self._img.shape[0] * x), xclip)
            self._img = self._img[imin:imax, :, :]
        if yclip is not None:
            imin, imax = map(lambda x: int(self._img.shape[1] * x), yclip)
            self._img = self._img[:, imin:imax, :]
        if issubclass(self._img.dtype.type, np.integer):
            self._img = self._img / 255.0

        self._width = len(self._img)
        self._height = len(self._img[0])

        self._range_x = (0, 1)
        self._range_y = (0, 1)

    @staticmethod
    def _scale_to_range(u: np.ndarray, u_min: float, u_max: float) -> np.ndarray:
        return (u - u_min) / (u_max - u_min)

    def _map_to_x(self, val: np.ndarray) -> np.ndarray:
        xmin, xmax = self._range_x
        val = self._scale_to_range(val, xmin, xmax)
        rescaled = (val * (self._width - 1))
        return rescaled.astype(int)

    def _map_to_y(self, val: np.ndarray) -> np.ndarray:
        ymin, ymax = self._range_y
        val = self._scale_to_range(val, ymin, ymax)
        rescaled = (val * (self._height - 1))
        return rescaled.astype(int)

    def __call__(self, val_x, val_y):
        """
        Take val_x and val_y, and associate the RGB values
        from the reference picture to each item. val_x and val_y
        must have the same shape.
        """
        if val_x.shape != val_y.shape:
            raise ValueError(f'x and y array must have the same shape, but have {val_x.shape} and {val_y.shape}.')
        self._range_x = (np.amin(val_x), np.amax(val_x))
        self._range_y = (np.amin(val_y), np.amax(val_y))
        x_indices = self._map_to_x(val_x)
        y_indices = self._map_to_y(val_y)
        i_xy = np.stack((x_indices, y_indices), axis=-1)
        rgb = np.zeros((*val_x.shape, 3))
        for indices in np.ndindex(val_x.shape):
            img_indices = tuple(i_xy[indices])
            rgb[indices] = self._img[img_indices]
        return rgb

    def generate_cbar(self, nx=100, ny=100):
        "generate an image that can be used as a 2D colorbar"
        x = np.linspace(0, 1, nx)
        y = np.linspace(0, 1, ny)
        return self.__call__(*np.meshgrid(x, y))
Usage:
Full example, using the constant chroma reference taken from here as a screenshot:
# generate data
x = y = np.linspace(-2, 2, 300)
xx, yy = np.meshgrid(x, y)
ampl = np.exp(-(xx ** 2 + yy ** 2))
phase = (xx ** 2 - yy ** 2) * 6 * np.pi
data = ampl * np.exp(1j * phase)
data_x, data_y = np.abs(data), np.angle(data)

# Here is the 2D colormap part
cmap_2d = ColorMap2D('const_chroma.jpeg', reverse_x=True)  # , xclip=(0,0.9))
rgb = cmap_2d(data_x, data_y)
cbar_rgb = cmap_2d.generate_cbar()

# plot the data
fig, plot_ax = plt.subplots(figsize=(8, 6))
plot_extent = (x.min(), x.max(), y.min(), y.max())
plot_ax.imshow(rgb, aspect='auto', extent=plot_extent, origin='lower')
plot_ax.set_xlabel('x')
plot_ax.set_ylabel('y')
plot_ax.set_title('data')

# create a 2D colorbar and make it fancy
plt.subplots_adjust(left=0.1, right=0.65)
bar_ax = fig.add_axes([0.68, 0.15, 0.15, 0.3])
cmap_extent = (data_x.min(), data_x.max(), data_y.min(), data_y.max())
bar_ax.imshow(cbar_rgb, extent=cmap_extent, aspect='auto', origin='lower')
bar_ax.set_xlabel('amplitude')
bar_ax.set_ylabel('phase')
bar_ax.yaxis.tick_right()
bar_ax.yaxis.set_label_position('right')
for item in ([bar_ax.title, bar_ax.xaxis.label, bar_ax.yaxis.label] +
             bar_ax.get_xticklabels() + bar_ax.get_yticklabels()):
    item.set_fontsize(7)
plt.show()
I know this is an old post, but I want to help out others that may arrive late. Below is a Python function implementing complex_to_rgb from Sage. Note: this implementation isn't optimal, but it is readable. See links: (examples) (source code)
Code:
import numpy as np

def complex_to_rgb(z_values):
    width = z_values.shape[0]
    height = z_values.shape[1]
    rgb = np.zeros(shape=(width, height, 3))
    for i in range(width):
        row = z_values[i]
        for j in range(height):
            # define value, real(value), imag(value)
            zz = row[j]
            x = np.real(zz)
            y = np.imag(zz)
            # define magnitude and argument
            magnitude = np.hypot(x, y)
            arg = np.arctan2(y, x)
            # define lightness
            lightness = np.arctan(np.log(np.sqrt(magnitude) + 1)) * (4 / np.pi) - 1
            if lightness < 0:
                bot = 0
                top = 1 + lightness
            else:
                bot = lightness
                top = 1
            # define hue
            hue = 3 * arg / np.pi
            if hue < 0:
                hue += 6
            # set ihue and use it to define rgb values based on cases
            ihue = int(hue)
            # case 1
            if ihue == 0:
                r = top
                g = bot + hue * (top - bot)
                b = bot
            # case 2
            elif ihue == 1:
                r = bot + (2 - hue) * (top - bot)
                g = top
                b = bot
            # case 3
            elif ihue == 2:
                r = bot
                g = top
                b = bot + (hue - 2) * (top - bot)
            # case 4
            elif ihue == 3:
                r = bot
                g = bot + (4 - hue) * (top - bot)
                b = top
            # case 5
            elif ihue == 4:
                r = bot + (hue - 4) * (top - bot)
                g = bot
                b = top
            # case 6
            else:
                r = top
                g = bot
                b = bot + (6 - hue) * (top - bot)
            # set rgb array values
            rgb[i, j, 0] = r
            rgb[i, j, 1] = g
            rgb[i, j, 2] = b
    return rgb
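A short usage sketch of the function above (the sampled function and grid are just illustrative choices):
import numpy as np
import matplotlib.pyplot as plt

# sample a simple complex function on a grid
x = np.linspace(-2, 2, 200)
X, Y = np.meshgrid(x, x)
Z = (X + 1j * Y) ** 2 - 1

plt.imshow(complex_to_rgb(Z), extent=(-2, 2, -2, 2), origin='lower')
plt.show()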

NumPy FFT producing off centre output

TL;DR: The NumPy FFT produces non-uniform output where the output should be uniform. I want the output to be a uniform corona.
I am trying to eventually run a Gerchberg-Saxton phase retrieval algorithm. I have been trying to make sure that I understand how the FFT works in NumPy. I have used fftshift to create the correct-looking output, but the image does not have uniform intensity afterwards.
My input image is a circle; the output should be a coronagraph-like pattern produced by the circular aperture. I am trying to reproduce the results detailed in https://www.osapublishing.org/optica/fulltext.cfm?uri=optica-2-2-147&id=311836#articleSupplMat.
My algorithm that produces the error:
1. Start from the initial image, f
2. Compute FT(f)
3. Multiply by exp(i * phase_mask)
4. Compute IFT(FT(f) * exp(i * phase_mask))
Happy to clear anything up.
import numpy as np
import matplotlib.pyplot as plt

# Create 'pixels' for circle
pixels = 400
edge = np.linspace(-10, 10, pixels)
xv, yv = np.meshgrid(edge, edge)

def circle(x, y, r):
    '''
    x, y : dimensions of grid to place circle on
    r : radius
    Function defines aperture
    '''
    x0 = 0
    y0 = 0
    return np.select([((x - x0)**2 + (y - y0)**2) >= r**2,
                      ((x - x0)**2 + (y - y0)**2) < r**2],
                     [0,
                      1.])

# Create input and output images
radius = 4
input_img = circle(xv, yv, radius)
constraint_img = circle(xv, yv, radius)
img = input_img
constraint = 1 - img
max_iter = 10

re, im = np.mgrid[-1:1:400j, -1:1:400j]  # Creates grid of values, 400=pixels
mask = 2 * np.angle(re + 1j * im)        # Gets angle from centre of grid
mask_i = mask

# Initial focal plane field, F. Initial image f.
f = np.sqrt(img)
F = np.fft.fftshift(np.fft.fft2(f)) * np.exp(mask * 1j)  # Focal plane field
F_1 = F
am_f = np.abs(F_1)  # Initial amplitude
g = np.fft.ifft2(F)
mask = np.angle(F / (F_1 + 1e-18))  # Final phase mask
recovery = (np.fft.ifft2(F * np.exp(-1j * mask)))

im3 = plt.imshow(np.abs(g)**2, cmap='gray')
plt.title('Recovered image')
plt.tight_layout()
plt.show()

plt.imshow(mask_i)
plt.colorbar()
plt.show()
Your issue is in this bit of code:
pixels = 400
edge = np.linspace(-10, 10, pixels)
as well as this one:
re,im = np.mgrid[-1:1:400j, -1:1:400j]
Because you use fftshift*, you need the origin to be at pixels//2. However, you don't sample the origin at all; it falls in between two samples.
* You should really be using ifftshift instead, which moves the origin from pixels//2 to 0. fftshift moves the origin from 0 to pixels//2. For an even number of samples, these two do the same thing though.
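A quick sketch to confirm that claim (the arrays are purely illustrative):
import numpy as np

even = np.arange(6)
odd = np.arange(7)

# identical for an even number of samples...
print(np.array_equal(np.fft.fftshift(even), np.fft.ifftshift(even)))  # True
# ...but different for an odd number
print(np.array_equal(np.fft.fftshift(odd), np.fft.ifftshift(odd)))    # False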
To properly sample the origin, create edge as follows:
edge = np.linspace(-10, 10, pixels, endpoint=False)
We now see that edge[pixels//2] is equal to 0.
For np.mgrid there's no equivalent option. You will have to do this manually by creating one more sample, then deleting the last sample:
re, im = np.mgrid[-1:1:401j, -1:1:401j]  # Creates grid of values, 401 = pixels + 1
mask = 2 * np.angle(re + 1j * im)        # Gets angle from centre of grid
mask = mask[:-1, :-1]
With these two changes, you will see a symmetric output.
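A minimal check of the two corrected grids (assuming pixels = 400 as in the question):
import numpy as np

pixels = 400

# the origin is now sampled exactly at index pixels//2
edge = np.linspace(-10, 10, pixels, endpoint=False)
print(edge[pixels // 2])  # 0.0

# one extra sample, then trim the last one, so the angle grid is also centred on a sample
re, im = np.mgrid[-1:1:(pixels + 1) * 1j, -1:1:(pixels + 1) * 1j]
re, im = re[:-1, :-1], im[:-1, :-1]
print(re[pixels // 2, 0], im[0, pixels // 2])  # 0.0 0.0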

I don't know why the same code works for Julia but not for Mandelbrot

I have the following code that generates a Mandelbrot image. The white space around the image has to be removed.
import numpy as np
import matplotlib.pyplot as plt
from pylab import *
from numpy import NaN

def mandelbrot(C):
    z = 0
    for n in range(1, 10):
        z = z**2 + C
        if abs(z) > 2:
            return n
    return NaN

def plot():
    X = np.arange(-2.0, 1.0, 0.05)
    Y = np.arange(-1.5, 1.5, 0.05)
    pixel = np.zeros((len(Y), len(X)))
    for x_iter, x in enumerate(X):
        for y_iter, y in enumerate(Y):
            pixel[y_iter, x_iter] = mandelbrot(x + 1j * y)
    imshow(pixel, cmap='gray', extent=(X.min(), X.max(), Y.min(), Y.max()))
    return pixel

pixel = mandelbrot(-0.7 + 0.27015j)
plt.axis('off')
plot()
plt.show()

from PIL import Image

min_value = np.nanmin(pixel)
max_value = np.nanmax(pixel)
pixel_int = (255 * (pixel - min_value) / (max_value - min_value)).astype(np.uint8)
# sample LUT from matplotlib
lut = (plt.cm.viridis(np.arange(256)) * 255).astype(np.uint8)  # CHOOSE COLORMAP HERE viridis, jet, rainbow
pixel_rgb = lut[pixel_int]
# changing NaNs to a chosen color
nan_color = [0, 0, 0, 0]  # Transparent NaNs
for i, c in enumerate(nan_color):
    pixel_rgb[:, :, i] = np.where(np.isnan(pixel), c, pixel_rgb[:, :, i])
# apply LUT and display
img = Image.fromarray(pixel_rgb, 'RGBA')
print(pixel)
But it raises IndexError: too many indices for array on the line
pixel_rgb[:,:,i] = np.where(np.isnan(pixel),c,pixel_rgb[:,:,i])
How can I fix it?
Actually, the same code (same line) for getting rid of the white space around the image worked for Julia instead of Mandelbrot a few weeks ago. The following code, which generates the Julia image, does get rid of the white space around the image.
import numpy as np
import matplotlib.pyplot as plt

def julia(C):
    X = np.arange(-1.5, 1.5, 0.05)
    Y = np.arange(-1.5, 1.5, 0.05)
    pixel = np.zeros((len(Y), len(X)))
    for x_iter, x in enumerate(X):
        for y_iter, y in enumerate(Y):
            z = x + 1j * y
            intensity = np.nan
            r = np.empty((100, 100))  # Unused at the moment
            for n in range(1, 1024):
                if abs(z) > 2:
                    intensity = n
                    break
                z = z**2 + C
            pixel[y_iter, x_iter] = intensity
            r.fill(intensity)  # Unused at the moment
    # We return pixel matrix
    return pixel

# Compute Julia set image
pixel = julia(-0.7 + 0.27015j)
# Plotting
print(pixel)
plt.show()

from PIL import Image

min_value = np.nanmin(pixel)
max_value = np.nanmax(pixel)
# want to set all the 255 pixels to removed
pixel_int = (255 * (pixel - min_value) / (max_value - min_value)).astype(np.uint8)
# sample LUT from matplotlib; if lut is not None it must be an integer giving the number of entries desired in the lookup table
lut = (plt.cm.viridis(np.arange(256)) * 255).astype(np.uint8)  # CHOOSE COLORMAP HERE viridis, jet, rainbow
pixel_rgb = lut[pixel_int]
# changing NaNs to a chosen color
nan_color = [0, 0, 0, 0]  # Transparent NaNs
for i, c in enumerate(nan_color):
    pixel_rgb[:, :, i] = np.where(np.isnan(pixel), c, pixel_rgb[:, :, i])
# apply LUT and display
img = Image.fromarray(pixel_rgb, 'RGBA')
img.save('julia.tiff')
Image.open('julia.tiff').show()
print(min_value, max_value)
Now I just don't know why this code for getting rid of the white space around the image doesn't work for the Mandelbrot. Please help me figure out the problem!
Your direct problem is that in the Julia case, pixel_rgb is a three-dimensional array, whereas in the Mandelbrot case, pixel_rgb is a one-dimensional array. So you're trying to apply a three-dimensional transform to each of them, and this blows up for the Mandelbrot case, because what you're operating on has only a single dimension, not three.
I don't have more time to completely understand and play with your code, but in the Mandelbrot case, it seems that the mandelbrot() function only returns a single value, whereas the julia() function returns a 2D array. It is the plot() function that returns a 2D array in the Mandelbrot case. So my quick guess at the change you want to make is to change this:
pixel = mandelbrot(-0.7 + 0.27015j)
plt.axis('off')
plot()
to this:
# pixel = mandelbrot(-0.7 + 0.27015j)
plt.axis('off')
pixel = plot()
This allows the Mandelbrot code to run without crashing. I don't know if it's doing exactly what you want though.

Plotting image rgb value against function of time

Is there a way to compute a single number to represent the RGB value of a pixel in an image? I am trying to visualize how my ROI's colour changes over time, with x as time and y as the RGB value. Initially I averaged the pixel's RGB values, for example [84 90 135] = 103, and plotted that as my first point, but I realised this might be a wrong representation: [135 90 84] gives the same average value but actually represents a different colour. This means I will get a wrong graph.
EDIT: Sorry for the late update; I was trying to fix my graph. I do not know why, but I could not draw a line graph for my data; it only works with a point or round marker.
I was trying to track the colour data of images as they approach white. I was expecting the value to keep increasing as it approaches white, since the decimal code for white is 255 255 255, so the trend should incline upwards. But I got the opposite result; this is what I got when I plotted the b, g, r values of the images, and it doesn't really show me much info. Code is shown below:
import cv2
import numpy as np
import matplotlib.pyplot as plt

path = 'R:\\xx\\'
path1 = 'R:\\xx\\'

def BlueComponent(im_file):
    im = cv2.imread(im_file)  # return blue value
    im1 = im[788, 526]
    b = im1[0]
    return b

def GreenComponent(im_file):
    im = cv2.imread(im_file)  # return green value
    im1 = im[788, 526]
    g = im1[1]
    return g

def RedComponent(im_file):  # return red value
    im = cv2.imread(im_file)
    im1 = im[788, 526]
    r = im1[2]
    return r

myBlueList = []
myGreenList = []
myRedList = []
myList = []
num_images = 99  # number of images
dotPos = 0

for i in range(1770, 1869):  # loop to auto-generate image names and run prior function
    image_name = path + 'Cropped_Aligned_IMG_' + str(i) + '.png'  # for loop runs from image number 1770 to 1868
    myBlueList.append(BlueComponent(image_name))
    myGreenList.append(GreenComponent(image_name))
    myRedList.append(RedComponent(image_name))
    myList.append(dotPos)
    dotPos = dotPos + 0.5

print(myList)
print(myBlueList)
print(myGreenList)
print(myRedList)

for k in range(1770, 1869):
    a = 'Cropped_Aligned_IMG_' + str(k)
    image_name = path + a + '.png'
    img_file = cv2.imread(image_name)

y = [myGreenList]
x = [myList]
y1 = [myBlueList]
y2 = [myRedList]

plt.xticks(np.arange(0.0, 50.0, 0.5), rotation='vertical')
plt.plot(x, y, 'g.-')
plt.plot(x, y1, 'b.-')
plt.plot(x, y2, 'r.-')
plt.title('Color Decimal Code Against Time')
plt.xlabel('Time(Hours)', labelpad=10)
plt.ylabel('Colour Code')
plt.show()
If you are only interested in colour, you can convert your RGB tuples to hue values. If saturation and intensity also matter, this is of course not sufficient.
This will of course fail for neutral (grey) values, where the hue is undefined.
Please search the web for details.
MIN = min(r, g, b)
MAX = max(r, g, b)

Hue = 0                                  if MIN == MAX
Hue = 60° * (g - b) / (MAX - MIN)        if MAX == r
Hue = 60° * (2 + (b - r) / (MAX - MIN))  if MAX == g
Hue = 60° * (4 + (r - g) / (MAX - MIN))  if MAX == b
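A small Python sketch of that piecewise formula (the function name is just an illustrative choice):
def rgb_to_hue(r, g, b):
    """Return the hue in degrees for an RGB triple (components in 0-255)."""
    lo, hi = min(r, g, b), max(r, g, b)
    if lo == hi:                      # neutral/grey: hue is undefined, use 0
        return 0.0
    if hi == r:
        hue = 60.0 * (g - b) / (hi - lo)
    elif hi == g:
        hue = 60.0 * (2 + (b - r) / (hi - lo))
    else:                             # hi == b
        hue = 60.0 * (4 + (r - g) / (hi - lo))
    return hue % 360                  # wrap negative angles into [0, 360)

print(rgb_to_hue(84, 90, 135))   # a bluish pixel from the question (~233°)
print(rgb_to_hue(135, 90, 84))   # same average, but an orange hue (~7°)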
If you are only interested in the change, but not in which colour, you could for example use the distance between the RGB tuples.
Another option that has already been suggested in the comments is to compose a single 3-byte value.
You just cannot fully visualize a 3D change in 1D in an intuitive way.
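For completeness, here is a rough sketch of the two alternatives just mentioned, the distance between RGB tuples and a packed 3-byte value (the pixel values are taken from the question; the packing order is an arbitrary choice):
import numpy as np

p1 = np.array([84, 90, 135])
p2 = np.array([135, 90, 84])

# Euclidean distance between the two RGB tuples: how much the colour changed
print(np.linalg.norm(p1 - p2))  # ~72.1, even though both average to 103

# pack r, g, b into a single 3-byte integer
r, g, b = p1
print((int(r) << 16) | (int(g) << 8) | int(b))  # 0x545A87 = 5528199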

Normalize histogram2d by bin area

I have a 2D histogram that I generate with numpy:
H, xedges, yedges = np.histogram2d(y, x, weights=mass * (1.0 - pf),
bins=(yrange,xrange))
Note that I'm currently weighing the bins with a function of mass (mass is a numpy array with the same dimensions as x and y). The bins are logarithmic (generated via xrange = np.logspace(minX, maxX, 100)).
However, I really want to weight the bins by the mass function but then normalize (i.e. divide) each bin by its area: e.g. each bin has area xrange[i] * yrange[i]. However, since xrange and yrange don't have the same dimensions as mass, x and y ... I can't simply put the normalization in the np.histogram2d call.
How can I normalize the bin counts by the area in each log bin?
For reference, here's the plot (I've added x and y 1D histograms that I'll also need to normalize by the width of the bin, but once I figure out how to do it for 2D it should be analogous).
FYI - I generate the main (and axes-histograms) with matplotlib:
X,Y=np.meshgrid(xrange,yrange)
H = np.log10(H)
masked_array = np.ma.array(H, mask=np.isnan(H)) # mask out all nan, i.e. log10(0.0)
cax = (ax2dhist.pcolormesh(X,Y,masked_array, cmap=cmap, norm=LogNorm(vmin=1,vmax=8)))
I think you just want to pass normed=True to np.histogram2d:
normed: bool, optional
If False, returns the number of samples in each bin. If True, returns the bin density bin_count / sample_count / bin_area.
If you wanted to compute the bin areas and do the normalization manually, the simplest way would probably be to use broadcasting:
x, y = np.random.rand(2, 1000)
xbin = np.logspace(-1, 0, 101)
ybin = np.logspace(-1, 0, 201)
# raw bin counts
counts, xe, ye = np.histogram2d(x, y, [xbin, ybin])
# size of each bin in x and y dimensions
dx = np.diff(xbin)
dy = np.diff(ybin)
# compute the area of each bin using broadcasting
area = dx[:, None] * dy
# normalized counts
manual_norm = counts / area / x.shape[0]
# using normed=True
counts_norm, xe, ye = np.histogram2d(x, y, [xbin, ybin], normed=True)
print(np.allclose(manual_norm, counts_norm))
# True
