2D alpha shape / concave hull problem in Python

I have a large set of 2D points that I've downsampled into a 44x2 numpy array (defined below). I am trying to find the bounding shape of those points, which is effectively a concave hull. In the second image I've manually marked an approximate bounding shape that I am hoping to get.
I have tried using alphashape and the Delaunay triangulation method from here; both methods give the same answer.
Unfortunately, I don't seem to be able to achieve what I need, regardless of the alpha parameters. I've tried some manual settings and alphaoptimize, some examples of which are below.
Is there something critical I'm misunderstanding about alphashape? The documentation seems very clear, but obviously I'm missing something.
import numpy as np
import alphashape
from descartes import PolygonPatch
import matplotlib.pyplot as plt
points = np.array(
[[0.16,3.98],
[-0.48,3.33],
[-0.48,4.53],
[0.1,3.67],
[0.04,5.67],
[-7.94,3.02],
[-18.16,3.07],
[-0.15,5.67],
[-0.26,5.14],
[-0.1,5.11],
[-0.96,5.48],
[-0.03,3.86],
[-0.12,3.16],
[0.32,4.64],
[-0.1,4.32],
[-0.84,4.28],
[-0.56,3.16],
[-6.85,3.28],
[-0.7,3.24],
[-7.2,3.03],
[-1.0,3.28],
[-1.1,3.28],
[-2.4,3.28],
[-2.6,3.28],
[-2.9,3.28],
[-4.5,3.28],
[-12.3,3.28],
[-14.8,3.28],
[-16.7,3.28],
[-17.8,3.28],
[-0,3.03],
[-1,3.03],
[-2.1,3.03],
[-2.8,3.03],
[-3.2,3.03],
[-5,3.03],
[-12,3.03],
[-14,3.03],
[-17,3.03],
[-18,3.03],
[-0.68,4.86],
[-1.26,3.66],
[-1.71,3.51],
[-9.49,3.25]])
alpha = 0.1
alpha_shape = alphashape.alphashape(points, alpha)  # renamed so the alphashape module isn't shadowed
fig = plt.figure()
ax = plt.gca()
ax.scatter(points[:,0],points[:,1])
ax.add_patch(PolygonPatch(alpha_shape, alpha=0.2))
plt.show()

The plots that you attached are misleading, since the scales on the x-axis and the y-axis are very different. If you set both axes to the same scale, you obtain the following plot:
Since the differences between the x-coordinates of points are on average much larger than the differences between the y-coordinates, you cannot obtain an alpha shape resembling your desired result. For larger values of alpha, points scattered along the x-axis will not be connected by edges, since the alpha shape uses circles too small to connect them. For values of alpha small enough that these points do get connected, you will obtain the long edges on the right-hand side of the plot.
You can fix this issue by rescaling the y-coordinates of all points, effectively stretching the plot in the vertical direction. For example, multiplying the y-coordinates by 7 and setting alpha = 0.4 gives the following picture:
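As a minimal sketch of that rescaling, reusing points, alphashape, PolygonPatch and plt from the question above (scaling the resulting polygon back with shapely.affinity.scale is an optional extra step so it can be overlaid on the original, unscaled points):
from shapely.affinity import scale as shapely_scale

# Stretch the y-axis so x- and y-spacings are comparable, then compute the alpha shape.
stretch = 7.0
scaled = points.copy()
scaled[:, 1] *= stretch

hull = alphashape.alphashape(scaled, 0.4)

# Undo the stretch on the resulting polygon before plotting it with the original points.
hull_original = shapely_scale(hull, xfact=1.0, yfact=1.0/stretch, origin=(0, 0))

fig, ax = plt.subplots()
ax.scatter(points[:, 0], points[:, 1])
ax.add_patch(PolygonPatch(hull_original, alpha=0.2))
plt.show()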

Related

How to get colour value from cmap with RGB tuple? [duplicate]

Rotating an ellipse but axes have vastly different scales which leads to distortion

So, I'm trying to plot some data I have and draw a confidence ellipse with a given mean and covariance on top of it. I'm very new to Python, but I thought it would be way easier to visualize this than it would be in Java (where I calculated the values used here).
The problem I have right now is that I simply can't rotate the ellipse I'm drawing without distorting it. I'm not entirely sure if that is actually the case, but I think it's because of the vastly different scales in the axes (x from 0 to 6, y from 40 to 100). Once again I'm guessing, but I would imagine that the rotation doesn't scale the height (which is set to fit the y axis from 40 to 100 obviously) so it's way too long for my short x axis.
My question: Is there a better way to do this or a way to scale the ellipse after the rotation so it fits again? Since I don't really know too much about Python, I just went through different tutorials to see what gets me a result quickly. Not sure if this is the "right" way to begin with. What I know, however, is that all of the examples I saw have similarly scaled axes so I could not find an adequate solution to my issue.
This is just a part of the code, the actual data is not included but it should still be copy-paste-able for the ellipse so you guys can see what I mean if you change the angle of rotation (45 in this example).
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
meanX = 2.0374
meanY = 54.4897
secondLongest = 0.0700
longest = 33.7679
ax = plt.subplot(111)
plt.axis([0, 6, 40, 100])
#plt.plot(duration, waitperiod, "rx")
plt.xlabel("Duration")
plt.ylabel("Time between eruptions")
#if rotation is set to 0, it works perfectly fine but I want a different angle
ellipse = Ellipse((meanX, meanY), 2*np.sqrt(5.991*secondLongest), 2*np.sqrt(5.991*longest), angle=45)
ax.add_artist(ellipse)
plt.show()
It works perfectly fine; the apparent distortion you see is indeed because of the different scaling of your X and Y axes in the plot.
E.g. try with these axis limits to see that the angle doesn't "distort" anymore:
plt.axis([-20, 20, 40, 80])
(and make sure your plot window is square too)
If this ellipse represents confidence, I doubt you want to rescale it to fit your arbitrary axis limits; instead, you probably want to set your axis limits to fit your data and confidence ellipse...
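As a minimal sketch of that suggestion, using the question's values; ax.set_aspect('equal') is one way to force the same scale on both axes without having to make the window square by hand:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse

meanX, meanY = 2.0374, 54.4897
secondLongest, longest = 0.0700, 33.7679

fig, ax = plt.subplots()
ax.add_artist(Ellipse((meanX, meanY),
                      2*np.sqrt(5.991*secondLongest),
                      2*np.sqrt(5.991*longest),
                      angle=45))
# Wide, symmetric limits plus an equal aspect ratio, so 45 degrees looks like 45 degrees.
ax.set_xlim(-20, 20)
ax.set_ylim(40, 80)
ax.set_aspect('equal')
plt.show()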

How to normalise plotted points and get a circle?

Given 2000 random 2D points (drawn using numpy.random.normal(0,1)), I want to normalize them such that the output is a circle. How do I do that?
I was requested to show my efforts. This is part of a larger question: Write a program that samples 2000 points uniformly from the circumference of a unit circle. Plot and show it is indeed picked from the circumference. To generate a point (x,y) from the circumference, sample (x,y) from std normal distribution and normalise them.
I'm almost certain my code isn't correct, but this is where I am up to. Any advice would be helpful.
This is the new updated code, but it still doesn't seem to be working.
import numpy as np
import matplotlib.pyplot as plot
def plot():
    xy = np.random.normal(0,1,(2000,2))
    for i in range(2000):
        s = np.linalg.norm(xy[i,])
        xy[i,] = xy[i,]/s
    plot.plot(xy)
    plot.show()
I think the problem is in
plot.plot(xy)
even if I use
plot.plot(xy[:,0],xy[:,1])
it doesn't work.
Connected lines are not a good visualization here. You essentially connect random points on the circle. Since you do this quite often, you will get a filled circle. Try drawing points instead.
Also avoid namespace clashes. You import matplotlib.pyplot as plot and also name your function plot; this leads to name conflicts.
import numpy as np
import matplotlib.pyplot as plt
def plot():
    xy = np.random.normal(0,1,(2000,2))
    for i in range(2000):
        s = np.linalg.norm(xy[i,])
        xy[i,] = xy[i,]/s
    fig, ax = plt.subplots(figsize=(5,5))
    # scatter draws dots instead of lines
    ax.scatter(xy[:,0], xy[:,1])
If you use dots instead, you will see that your points indeed lie on the unit circle.
Your code has many problems:
Why use np.random.normal (a Gaussian distribution) when the problem text is about uniform (flat) sampling?
To pick points on a circle you need to correlate x and y; i.e. sampling x and y independently will not give a point on the circle, since x**2+y**2 must equal 1 (for the unit circle centered at (x=0, y=0)).
A couple of ways to address the second point are to either "project" a random point from [-1...1]x[-1...1] onto the unit circle, or to pick the angle uniformly and compute the corresponding point on the circle, as sketched below.
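A minimal sketch of the angle-based approach (2000 points, as in the question):
import numpy as np
import matplotlib.pyplot as plt

# Sample the angle uniformly, then map it onto the unit circle.
theta = np.random.uniform(0.0, 2*np.pi, 2000)
x, y = np.cos(theta), np.sin(theta)

fig, ax = plt.subplots(figsize=(5, 5))
ax.set_aspect('equal')
ax.scatter(x, y, s=2)
plt.show()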
First of all, if you look at the documentation for numpy.random.normal (and, by the way, you could just use numpy.random.randn), it takes an optional size parameter, which lets you create as large an array as you'd like. You can use this to get a large number of values at once. For example, xy = numpy.random.normal(0,1,(2000,2)) will give you all the values that you need.
At that point, you need to normalize them such that xy[:,0]**2 + xy[:,1]**2 == 1. This should be relatively trivial after computing what xy[:,0]**2 + xy[:,1]**2 is. Simply using norm on each dimension separately isn't going to work.
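For example, a vectorized sketch of that normalization (np.linalg.norm with axis=1 and keepdims=True is one convenient spelling):
import numpy as np

xy = np.random.normal(0, 1, (2000, 2))
# Row-wise norms, kept as a (2000, 1) column so the division broadcasts per point.
r = np.linalg.norm(xy, axis=1, keepdims=True)
xy = xy / r   # now xy[:,0]**2 + xy[:,1]**2 == 1 up to rounding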
Usual boilerplate
import numpy as np
import matplotlib.pyplot as plt
generate the random sample with two rows, so that it's more convenient to refer to x's and y's
xy = np.random.normal(0,1,(2,2000))
normalize the random sample using a library function to compute the norm; axis=0 means the norm is taken over the subarrays obtained by varying the first index, so the result is a (2000,)-shaped array that can be broadcast in xy /= ..., giving points with unit norm, hence lying on the unit circle
xy /= np.linalg.norm(xy, axis=0)
Eventually, the plot... here the key is the add_subplot() method, and in particular the keyword argument aspect='equal', which requires that the scale from data units to screen units be the same for both axes
plt.figure().add_subplot(111, aspect='equal').scatter(xy[0], xy[1])
plt.show()
to obtain a plot of the points lying on the unit circle.

Contouring non-uniform 2d data in python/matplotlib above terrain

I am having trouble contouring some data in matplotlib. I am trying to plot a vertical cross-section of temperature that I sliced from a 3d field of temperature.
My temperature array (T) has shape 50x300, where 300 is the number of horizontal levels, which are evenly spaced. However, 50 is the number of vertical levels, which are a) non-uniformly spaced and b) start at a different level for each vertical column. That is, there are always 50 vertical levels, but sometimes they span from 100 to 15000 m and sometimes from 300 to 20000 m (due to terrain differences).
I also have a 2d array of height (Z; same shape as T), a 1d array of horizontal location (LAT), and a 1d array of terrain height (TER).
I am trying to get a similar plot to one like here in which you can see the terrain blacked out and the data is contoured around it.
My first attempt to plot this was to create a meshgrid of horizontal distance and height, and then contourf temperature with those arguments as well. However, numpy.meshgrid requires 1D inputs, and my height is a 2D variable. Doing something like this only begins contouring upwards from the first column:
ax1 = plt.gca()
z1, x1 = np.meshgrid(LAT, Z[:,0])
plt.contourf(z1, x1, T)
ax1.fill_between(z1[0,:], 0, TER, facecolor='black')
Which produces this. If I use Z[:,-1] in the meshgrid, it contours underground for columns to the left, which obviously I don't want. What I really would like is to use some 2d array for Z in the meshgrid but I'm not sure how to go about that.
I've also looked into the griddata function but that requires 1D inputs as well. Anyone have any ideas on how to approach this? Any help is appreciated!
From what I understand, your data is structured. Then you can directly use the contourf or contour option in matplotlib. The code you present has the right idea, but you should use
x1, z1 = np.meshgrid(LAT, Z[:,0])
plt.contourf(x1, Z, T)
for the contours. I have an example below
import numpy as np
import matplotlib.pyplot as plt
L, H = np.pi*np.mgrid[-1:1:100j, -1:1:100j]
T = np.cos(L)*np.cos(2*H)
H = np.cos(L) + H
plt.contourf(L, H, T, cmap="hot")
plt.show()
Note that the grid is generated with the original bounding box, but the plot is made with the transformed height rather than the initial one. Also, you can use tricontour for unstructured data (or in general), but then you will need to generate the triangulation (which in your case is straightforward); a sketch of that route follows.
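For completeness, here is a rough sketch of the tricontour route; the LAT, Z and T arrays below are made-up stand-ins with the shapes described in the question:
import numpy as np
import matplotlib.pyplot as plt

# Made-up stand-ins: LAT is (300,), Z and T are (50, 300).
LAT = np.linspace(0.0, 10.0, 300)
Z = np.linspace(100.0, 15000.0, 50)[:, None] + 500.0*np.sin(LAT)[None, :]
T = 280.0 - 0.006*Z + 2.0*np.cos(LAT)[None, :]

# tricontourf takes flat 1D x, y, z arrays and triangulates the scattered points itself.
x = np.broadcast_to(LAT, Z.shape).ravel()
plt.tricontourf(x, Z.ravel(), T.ravel(), levels=20, cmap="hot")
plt.colorbar(label="T")
plt.show()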

How to reverse a color map image to scalar values?

How do I invert a color mapped image?
I have a 2D image which plots data on a colormap. I'd like to read the image in and 'reverse' the color map, that is, look up a specific RGB value, and turn it into a float.
For example:
using this image: http://matplotlib.sourceforge.net/_images/mri_demo.png
I should be able to get a 440x360 matrix of floats, knowing the colormap was cm.jet
from pylab import imread
import matplotlib.cm as cm
a=imread('mri_demo.png')
b=colormap2float(a,cm.jet) #<-tricky part
There may be better ways to do this; I'm not sure.
If you read help(cm.jet) you will see the algorithm used to map values in the interval [0,1] to RGB 3-tuples. You could, with a little paper and pencil, work out formulas to invert the piecewise-linear functions which define the mapping.
However, there are a number of issues which make the paper and pencil solution somewhat unappealing:
It's a lot of laborious algebra, and the solution is specific for cm.jet. You'd have to do all this work again if you change the color map. How to automate the solving of these algebraic equations is interesting, but not a problem I know how to solve.
In general, the color map may not be invertible (more than one value may be mapped to the same color). In the case of cm.jet, values between 0.11 and 0.125 are all mapped to the RGB 3-tuple (0,0,1), for example. So if your image contains a pure blue pixel, there is really no way to tell if it came from a value of 0.11 or a value of, say, 0.125.
The mapping from [0,1] to 3-tuples is a curve in 3-space. The colors in your image may not lie perfectly on this curve. There might be round-off error, for example. So any practical solution has to be able to interpolate or somehow project points in 3-space onto the curve.
Due to the non-uniqueness issue, and the projection/interpolation issue, there can be many possible solutions to the problem you pose. Below is just one possibility.
Here is one way to resolve the uniqueness and projection/interpolation issues:
Create a gradient which acts as a "code book". The gradient is an array of RGBA 4-tuples in the cm.jet color map. The colors of the gradient correspond to values from 0 to 1. Use scipy's vector quantization function scipy.cluster.vq.vq to map all the colors in your image, mri_demo.png, onto the nearest color in gradient.
Since a color map may use the same color for many values, the gradient may contain duplicate colors. I leave it up to scipy.cluster.vq.vq to decide which (possibly) non-unique code book index to associate with a particular color.
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import scipy.cluster.vq as scv
def colormap2arr(arr,cmap):
    # http://stackoverflow.com/questions/3720840/how-to-reverse-color-map-image-to-scalar-values/3722674#3722674
    gradient=cmap(np.linspace(0.0,1.0,100))
    # Reshape arr to something like (240*240, 4), all the 4-tuples in a long list...
    arr2=arr.reshape((arr.shape[0]*arr.shape[1],arr.shape[2]))
    # Use vector quantization to shift the values in arr2 to the nearest point in
    # the code book (gradient).
    code,dist=scv.vq(arr2,gradient)
    # code is an array of length arr2 (240*240), holding the code book index for
    # each observation. (arr2 are the "observations".)
    # Scale the values so they are from 0 to 1.
    values=code.astype('float')/gradient.shape[0]
    # Reshape values back to (240,240)
    values=values.reshape(arr.shape[0],arr.shape[1])
    values=values[::-1]
    return values

arr=plt.imread('mri_demo.png')
values=colormap2arr(arr,cm.jet)
# Proof that it works:
plt.imshow(values,interpolation='bilinear', cmap=cm.jet,
           origin='lower', extent=[-3,3,-3,3])
plt.show()
The image you see should be close to reproducing mri_demo.png:
(The original mri_demo.png had a white border. Since white is not a color in cm.jet, note that scipy.cluster.vq.vq maps white to the closest point in the gradient code book, which happens to be a pale green color.)
Here is a simpler approach that works for many colormaps, e.g. viridis, though not for LinearSegmentedColormaps such as 'jet'.
The colormaps are stored as lists of [r,g,b] values. For lots of colormaps, this map has exactly 256 entries. A value between 0 and 1 is looked up using its nearest neighbor in the color list. So, you can't get the exact value back, only an approximation.
Some code to illustrate the concepts:
from matplotlib import pyplot as plt
def find_value_in_colormap(tup, cmap):
    # for a cmap like viridis, the result of the colormap lookup is a tuple (r, g, b, a), with a always being 1,
    # but the colors array is stored as a list [r, g, b];
    # for some colormaps the situation is reversed: the lookup returns a list, while the colors array contains tuples
    tup = list(tup)[:3]
    colors = cmap.colors
    if tup in colors:
        ind = colors.index(tup)
    elif tuple(tup) in colors:
        ind = colors.index(tuple(tup))
    else:  # tup was not generated by this colormap
        return None
    return (ind + 0.5) / len(colors)

val = 0.3
tup = plt.cm.viridis(val)
print(find_value_in_colormap(tup, plt.cm.viridis))
This prints the approximate value:
0.298828125
i.e. (76 + 0.5)/256: the input 0.3 lands in entry 76 of the 256-color list, and the function returns the midpoint of that entry's bin as the value corresponding to the color triple.
To illustrate what happens, here is a visualization of the function looking up a color for a value, followed by getting the value corresponding to that color.
from matplotlib import pyplot as plt
import numpy as np
x = np.linspace(-0.1, 1.1, 10000)
y = [find_value_in_colormap(plt.cm.viridis(xi), plt.cm.viridis) for xi in x]
fig, axes = plt.subplots(ncols=3, figsize=(12,4))
for ax in axes.ravel():
    ax.plot(x, x, label='identity: y = x')
    ax.plot(x, y, label='lookup, then reverse')
    ax.legend(loc='best')
axes[0].set_title('overall view')
axes[1].set_title('zoom near x=0')
axes[1].set_xlim(-0.02, 0.02)
axes[1].set_ylim(-0.02, 0.02)
axes[2].set_title('zoom near x=1')
axes[2].set_xlim(0.98, 1.02)
axes[2].set_ylim(0.98, 1.02)
plt.show()
For a colormap with only a few colors, a plot can show the exact position where one color changes to the next. The plot is colored corresponding to the x-values.
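For instance, here is a small sketch along those lines; the 8-color map below is a made-up ListedColormap sampled from viridis, so the find_value_in_colormap lookup above applies to it:
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap

# A hypothetical 8-color colormap; its .colors attribute is a plain list of RGB triples.
few_colors = ListedColormap([tuple(c[:3]) for c in plt.cm.viridis(np.linspace(0, 1, 8))])

x = np.linspace(0, 1, 2000)
y = [find_value_in_colormap(few_colors(xi), few_colors) for xi in x]

# Color each marker by its x-value so the 8 bands and the step positions are visible.
plt.scatter(x, y, c=x, cmap=few_colors, s=4)
plt.xlabel('input value')
plt.ylabel('value recovered from the color')
plt.show()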
Hi unutbu,
Thanks for your reply. I understand the process you explain and have reproduced it. It works very well; I use it to reverse IR camera shots into temperature grids, since a picture can easily be reworked/reshaped for my purpose using GIMP.
I'm able to create grids of scalars from camera shots, which is really useful in my tasks.
I use a palette file that I'm able to create using GIMP + Sample a Gradient Along a Path.
I pick the color bar of my original picture, convert it to a palette, then export it as a hex color sequence.
I read this palette file to create a colormap, normalized by a temperature sample, to be used as the code book.
I read the original image and use vector quantization to reverse colors into values.
I slightly improved the Pythonic style of the code by using the code book indices to index into the temperature sample array, and I apply some filter passes to smooth my results.
from numpy import linspace, savetxt
from matplotlib.colors import Normalize, LinearSegmentedColormap
from scipy.cluster.vq import vq

# hex_color_list, flat_image and image_file_name are defined elsewhere in the script

# sample the values to find from the colorbar extrema
vmin = -20.
vmax = 120.
precision = 1.
resolution = int(1 + (vmax - vmin)/precision)
sample = linspace(vmin, vmax, resolution)

# create code_book from sample
cmap = LinearSegmentedColormap.from_list('Custom', hex_color_list)
norm = Normalize()
code_book = cmap(norm(sample))

# quantize colors
indices = vq(flat_image, code_book)[0]

# filter sample from quantization results **(improved)**
values = sample[indices]
savetxt(image_file_name[:-3]+'.csv', values, delimiter=',', fmt='%-8.1f')
The results are finally exported to .csv.
The most important thing is to create a representative palette file to obtain good precision. I start to obtain a good gradient (code book) from about 12 colors or more.
This process is useful since sometimes camera shots cannot be translated to gray-scale easily and linearly.
Thanks to all contributors unutbu, Rob A, scipy community ;)
LinearSegmentedColormap doesn't give me the same interpolation as when I do it manually in my tests, so I prefer to use my own:
As an advantage, matplotlib is no longer required, since I integrate my code within existing software.
import numpy as np

def codeBook(color_list, N=256):
    """
    Return N colors interpolated from an rgb color list.
    !!! workaround to matplotlib colormap to avoid dependency !!!
    """
    # separate the r, g, b channels
    rgb = np.array(color_list).T
    # normalized sets of data points
    new_x = np.linspace(0., 1., N)
    x = np.linspace(0., 1., len(color_list))
    # interpolate each color channel
    rgb = [np.interp(new_x, x, channel) for channel in rgb]
    # round elements of the array to the nearest integer
    return np.rint(np.column_stack(rgb)).astype('int')
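A small usage sketch (the palette below is made up; in practice it would come from the exported palette file, as 0-255 RGB triples):
# hypothetical palette of 0-255 RGB triples read from a palette file
palette = [(0, 0, 128), (0, 255, 255), (255, 255, 0), (255, 0, 0)]

code_book = codeBook(palette, N=256)   # (256, 3) integer array
print(code_book.shape)                 # -> (256, 3)
print(code_book[0], code_book[-1])     # endpoints match the first and last palette colors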
