I'm new to Python and matplotlib and I was wondering whether anyone knew if there were any utilities available to do the equivalent of histogram equalization, but applied to a matplotlib color table? There is a function called matplotlib.colors.Normalize which, given an image array, will automatically set the bottom and top levels, but I want something more intelligent than this. I could always just apply histogram equalization to the data itself, but I would rather not touch the data. Any thoughts?
You have to create your own image-specific colormap, but it's not too tricky:
import pylab
import matplotlib.colors
import numpy

im = pylab.imread('lena.png').sum(axis=2)  # collapse the RGB channels to grayscale
pylab.imshow(im, cmap=pylab.cm.gray)
pylab.title('orig')

# Sample ~256 evenly spaced quantiles of the sorted pixel values and map each
# one to an evenly spaced gray level, i.e. follow the image's CDF.
imvals = numpy.sort(im.flatten())
lo = imvals[0]
hi = imvals[-1]
steps = (imvals[::len(imvals) // 256] - lo) / (hi - lo)  # integer step so the slice also works on Python 3
num_steps = float(len(steps))
interps = [(s, idx / num_steps, idx / num_steps) for idx, s in enumerate(steps)]
interps.append((1, 1, 1))
cdict = {'red': interps,
         'green': interps,
         'blue': interps}
histeq_cmap = matplotlib.colors.LinearSegmentedColormap('HistEq', cdict)

pylab.figure()
pylab.imshow(im, cmap=histeq_cmap)
pylab.title('histeq')
pylab.show()
Histogram equalization can be applied by modifying the palette (or LUT) of your image, so what you need is a palette definition that is equalized.
I searched a bit and couldn't find source code for computing an equalized palette, so unless something already exists you would have to code it yourself.
You could start with the description of the algorithm in the Wikipedia article.
You could also ask for help on the matplotlib lists.
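For reference, here is a minimal sketch of how such an equalized palette could be computed with NumPy; the choice of 256 LUT entries and of a gray base colormap are my own assumptions, not something required by matplotlib:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

def equalized_cmap(image, base_cmap='gray', nbins=256):
    # Sample the base colormap at positions given by the image's normalized
    # cumulative histogram, so equal-area intensity ranges get equal shares
    # of the colormap.
    hist, _ = np.histogram(np.asarray(image).ravel(), bins=nbins)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]
    base = plt.get_cmap(base_cmap)
    return ListedColormap(base(cdf), name='HistEq')

# Usage with a hypothetical image array `im`:
# plt.imshow(im, cmap=equalized_cmap(im))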
I have larger-than-memory uniform (regularly gridded) 2d binary data which I am trying to interactively plot using any combination of Dask, Datashader and Holoviews. I am open to using other python-based tools, but the internet has led me to these ones for now.
The data files are ~11 GB and consist of a (600000, 4800) array of float32s.
I want to plot them at a different aspect ratio (1000x1000 px), and have a callback handle the data loading/shading on zoom/pan. I am serving to a browser instead of using notebooks.
Within a 1000x1000px datashader canvas I have plotted:
4800x4800 points (which filled the canvas)
600000x4800 points (which filled only the bottom few pixels of the canvas, since the colored pixels had an aspect ratio of 600000/4800)
Neither was interactive.
What I have so far, using Python 3.10, is:
import numpy as np
import datashader as ds
from datashader import transfer_functions as tf
import xarray as xr
import holoviews as hv
import panel as pn
hv.extension('bokeh', logo=False)
hv.output(backend="bokeh")
filename = 'path/to/binary/datafile'
arr = np.memmap(filename, shape=(4800,600000), offset=0, dtype=np.dtype("f4"), mode='r')
arr = xr.DataArray(arr, dims=("x", "y"), coords={'x': np.arange(4800), "y": np.arange(600000)})
cvs = ds.Canvas(plot_width=1000, plot_height=1000, x_range=(0, 4800), y_range=(0, 4800))
# the following line works too but does not fill the canvas
# cvs = ds.Canvas(plot_width=1000, plot_height=1000, x_range=(0, 4800), y_range=(0, 600000))
agg = cvs.raster(arr)
sh = tf.shade(agg)
pn.Row(sh).show()
Any advice is appreciated!
I'm not sure precisely what the ask is here, but the HoloViz way of approaching this problem would be to use dask without .persist() or .compute(). The np.memmap approach may also work.
And then you'd use holoviews as described at https://examples.pyviz.org/census/census.html, or hvplot as described at https://hvplot.holoviz.org . Without having the actual data or a synthesized version of it it's hard to be more specific than that.
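For what it's worth, here is a minimal sketch of that approach; the chunk size, the (y, x) dims and the use of hvplot's rasterize option are assumptions on my part, and 'path/to/binary/datafile' is just your placeholder path:
import numpy as np
import dask.array as da
import xarray as xr
import hvplot.xarray  # noqa: registers the .hvplot accessor
import panel as pn

filename = 'path/to/binary/datafile'

# Lazily wrap the memmap in dask so nothing is loaded until datashader asks for it.
mm = np.memmap(filename, shape=(4800, 600000), dtype=np.dtype('f4'), mode='r')
darr = da.from_array(mm, chunks=(4800, 10000))  # chunk size is a guess; tune it

# Note the dims: rows map to y, columns to x.
xda = xr.DataArray(darr, dims=('y', 'x'),
                   coords={'y': np.arange(4800), 'x': np.arange(600000)})

# rasterize=True re-aggregates with datashader on every zoom/pan in the browser.
plot = xda.hvplot.image(x='x', y='y', rasterize=True,
                        frame_width=1000, frame_height=1000)
pn.panel(plot).show()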
BTW, I think you have x and y switched in your x_range and y_range above: a NumPy shape of (4800, 600000) corresponds to a y_range of (0, 4800) and an x_range of (0, 600000), because NumPy shapes are (row, column), with row on y and column on x.
I am confused about how matplotlib handles fp32 pixel intensities. To my understanding, it rescales the values between the max and min values of the image. However, when I try to view images originally in [0,1] by rescaling their pixel intensities to [-1,1] (via im*2-1) using imshow(), the image appears differently colored. How do I rescale so that the images don't differ?
EDIT: Please look at the image -
PS: I need to do this as part of a program that outputs those values in [-1,1]
Following is the code used for this:
img = np.float32(misc.face(gray=False))
fig,ax = plt.subplots(1,2)
img = img/255 # Convert to 0,1 range
print (np.max(img), np.min(img))
img0 = ax[0].imshow(img)
plt.colorbar(img0,ax=ax[0])
print (np.max(2*img-1), np.min(2*img-1))
img1 = ax[1].imshow(2*img-1) # Convert to -1,1 range
plt.colorbar(img1,ax=ax[1])
plt.show()
The max, min output is:
(1.0, 0.0)
(1.0, -1.0)
You are probably using matplotlib incorrectly here.
The normalization step should work correctly, if it's active. The docs tell us that it is only active by default if the input image is of type float!
Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc
fig, ax = plt.subplots(2,2)
# This usage shows different colors because there is no normalization
# FIRST ROW
f = misc.face(gray=True)
print(f.dtype)
g = f*2 # just some operation to show the difference between usages
ax[0,0].imshow(f)
ax[0,1].imshow(g)
# This usage makes sure that the input-image is of type float
# -> automatic normalization is used!
# SECOND ROW
f = np.asarray(misc.face(gray=True), dtype=float) # TYPE!
print(f.dtype)
g = f*2 # just some operation to show the difference between usages
ax[1,0].imshow(f)
ax[1,1].imshow(g)
plt.show()
Output
uint8
float64
Analysis
The first row shows the wrong usage, because the input is of type int and therefore no normalization will be used.
The second row shows the correct usage!
EDIT:
sascha has correctly pointed out in the comments that rescaling is not applied to RGB images, so the inputs must be in the [0, 1] range.
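Tying this back to the question: for an RGB image the simplest fix is to map the [-1, 1] values back to [0, 1] right before calling imshow; for single-channel data you can instead pass vmin/vmax explicitly. A minimal sketch (the variable names are mine):
import numpy as np
import matplotlib.pyplot as plt
from scipy import misc

img01 = np.float32(misc.face(gray=False)) / 255   # RGB in [0, 1]
img_pm1 = 2 * img01 - 1                           # the [-1, 1] output of your program

fig, ax = plt.subplots(1, 2)
ax[0].imshow(img01)                # RGB floats must already be in [0, 1]
ax[1].imshow((img_pm1 + 1) / 2)    # map back to [0, 1] -> identical display
plt.show()

# For single-channel data you could keep the [-1, 1] range and fix the mapping:
# ax.imshow(gray_pm1, cmap='gray', vmin=-1, vmax=1)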
I have x, y, height variables to build a contour in Python.
I created a Triangulation grid using x, y, height (x, y, height and triang are numpy arrays):
tri = Tri.Triangulation(x, y, triang)
Then I did a contour using tricontourf:
tricontourf(tri,height)
How can I get the output of tricontourf into a numpy array? I can display the image using pyplot, but I don't want to.
When I tried this:
triout = tricontourf(tri,height)
print triout
I got:
<matplotlib.tri.tricontour.TriContourSet instance at 0xa9ab66c>
I need to get the image data, and if I could get a numpy array it would be easy for me.
Is it possible to do this?
If it's not possible, can I do what tricontourf does without matplotlib in Python?
You should try this:
cs = tricontourf(tri,height)
for collection in cs.collections:
    for path in collection.get_paths():
        print path.to_polygons()
as I learned on:
https://github.com/matplotlib/matplotlib/issues/367
(it is better to use path.to_polygons() )
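If the goal is a set of plain NumPy arrays, the loop above can simply collect the polygon vertices. A minimal sketch, assuming x, y, triang and height are the arrays from the question (note that ContourSet.collections is deprecated in recent matplotlib releases):
import numpy as np
import matplotlib.tri as Tri
import matplotlib.pyplot as plt

tri = Tri.Triangulation(x, y, triang)
cs = plt.tricontourf(tri, height)

# One list of (N, 2) vertex arrays per filled contour level.
level_polygons = []
for collection in cs.collections:
    polys = []
    for path in collection.get_paths():
        polys.extend(np.asarray(p) for p in path.to_polygons())
    level_polygons.append(polys)

# level_polygons[i] now holds the polygons filled between cs.levels[i] and cs.levels[i+1]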
Is it possible to change the colour of an iso-surface depending on the height of the points (in Python / Mayavi)?
I can create an iso-surface visualization with my script, but I don't know how to make the iso-surface change colour along the z axis so that it will be, let's say, black at the bottom and white at the top of the plot.
I need this in order to make sense of the visualization when it is viewed from directly above the graph.
If you know any other way to achieve this, please let me know as well.
I only want to show one iso_surface plot.
I managed to do this by combining some code from the examples http://docs.enthought.com/mayavi/mayavi/auto/example_atomic_orbital.html#example-atomic-orbital and http://docs.enthought.com/mayavi/mayavi/auto/example_custom_colormap.html . Basically you must create a surface as in the atomic-orbital example and then make it change colour depending on the height coordinate, which means adding an array of coordinate values as a second point-data array. The relevant part of my code is:
#src.image_data.point_data.add_array(np.indices(list(self.data.shape)[self.nx,self.ny,self.nz])[2].T.ravel())
src.image_data.point_data.add_array(np.indices(list(self.data.shape))[0].T.ravel())
src.image_data.point_data.get_array(1).name = 'z'
# Make sure that the dataset is up to date with the different arrays:
src.image_data.point_data.update()
# We select the 'scalar' attribute, ie the norm of Phi
src2 = mlab.pipeline.set_active_attribute(src, point_scalars='scalar')
# Cut isosurfaces of the norm
contour = mlab.pipeline.contour(src2)
# contour.filter.contours=[plotIsoSurfaceContours]
# contour.filter.contours=[plotIsoSurfaceContours[0]]
min_c = min(contour.filter._data_min * 1.05,contour.filter._data_max)
max_c = max(contour.filter._data_max * 0.95,contour.filter._data_min)
plotIsoSurfaceContours = [ max(min(max_c,x),min_c) for x in plotIsoSurfaceContours ]
contour.filter.contours= plotIsoSurfaceContours
# Now we select the 'angle' attribute, ie the phase of Phi
contour2 = mlab.pipeline.set_active_attribute(contour, point_scalars='z')
# And we display the surface. The colormap is the current attribute: the phase.
# mlab.pipeline.surface(contour2, colormap='hsv')
xxx = mlab.pipeline.surface(contour2, colormap='gist_ncar')
colorbar = xxx.module_manager.scalar_lut_manager
colorbar.reverse_lut = True
lut = xxx.module_manager.scalar_lut_manager.lut.table.to_array()
lut[:,-1] = int(plotIsoSurfaceOpacity * 254)
xxx.module_manager.scalar_lut_manager.lut.table = lut
# mlab.colorbar(title='Phase', orientation='vertical', nb_labels=3)
self.data is my data and, for unknown reasons, if you want to set the opacity of your surface you must reverse the lut first and then set the opacity. The multiplication by 254 instead of 255 is done to avoid a possible bug in mayavi.
I hope this helps someone.
I have a series of lines that each need to be plotted with a separate colour. Each line is actually made up of several data sets (positive, negative regions etc.) and so I'd like to be able to create a generator that will feed one colour at a time across a spectrum, for example the gist_rainbow map shown here.
I have found that the following works, but it seems very complicated and, more importantly, difficult to remember:
from pylab import *
NUM_COLORS = 22
mp = cm.datad['gist_rainbow']
get_color = matplotlib.colors.LinearSegmentedColormap.from_list(mp, colors=['r', 'b'], N=NUM_COLORS)
...
# Then in a for loop
this_color = get_color(float(i)/NUM_COLORS)
Moreover, it does not cover the range of colours in the gist_rainbow map; I have to redefine a map.
Maybe a generator is not the best way to do this, if so what is the accepted way?
To index colors from a specific colormap you can use:
import pylab
NUM_COLORS = 22
cm = pylab.get_cmap('gist_rainbow')
for i in range(NUM_COLORS):
    color = cm(1.*i/NUM_COLORS)  # color will now be an RGBA tuple

# or if you really want a generator:
cgen = (cm(1.*i/NUM_COLORS) for i in range(NUM_COLORS))
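As a usage sketch (the sine-wave data is just an invented placeholder), the sampled colors can be passed straight to the plotting calls:
import numpy as np
import matplotlib.pyplot as plt

NUM_COLORS = 22
cmap = plt.get_cmap('gist_rainbow')

x = np.linspace(0, 1, 200)
fig, ax = plt.subplots()
for i in range(NUM_COLORS):
    # Each line gets an evenly spaced color from the full gist_rainbow range.
    ax.plot(x, np.sin(2 * np.pi * (x + i / NUM_COLORS)),
            color=cmap(i / NUM_COLORS))
plt.show()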