Printing one color using imshow [closed] - python

I want to print a color on the screen using RGB values and the output should be just a single color. For example if I give RGB values of red, I want the output to show me a red color. But when I try this code, it isn't working. What am I missing?
import matplotlib.pyplot as plt
plt.imshow([(255, 0, 0)])
plt.show()
The output is:
[Output image: one yellow pixel and two violet pixels instead of the expected solid red.]
The issue is that you are trying to display a 2D array with 1 row and 3 columns, which imshow treats as scalar data rather than as a color. The pixel values from left to right are 255, 0, and 0. As @Ben K. correctly pointed out in the comments, by doing so the intensity values are scaled to the range 0..1 and displayed using the current colormap. That's why your code displays one yellow pixel and two violet pixels.
If you wish to specify the RGB values you should create a 3D array of m rows, n columns and 3 color channels (one chromatic channel for each RGB component).
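For instance, a minimal sketch of that approach, displaying a single solid red swatch via a 1x1x3 uint8 array:
import numpy as np
import matplotlib.pyplot as plt

# A 1x1 RGB "image"; imshow interprets uint8 values as 0..255.
red = np.zeros((1, 1, 3), dtype=np.uint8)
red[..., 0] = 255  # set the red channel

plt.imshow(red)
plt.axis('off')
plt.show()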
Demo
The snippet below generates a random array of indices of a color palette and displays the result:
In [14]: import numpy as np
In [15]: import matplotlib.pyplot as plt
In [16]: from skimage import io
In [17]: palette = np.array([[255,   0,   0],    # index 0: red
    ...:                     [  0, 255,   0],    # index 1: green
    ...:                     [  0,   0, 255],    # index 2: blue
    ...:                     [255, 255, 255],    # index 3: white
    ...:                     [  0,   0,   0],    # index 4: black
    ...:                     [255, 255,   0],    # index 5: yellow
    ...:                     ], dtype=np.uint8)
    ...:
In [18]: m, n = 4, 6
In [19]: indices = np.random.randint(0, len(palette), size=(m, n))
In [20]: indices
Out[20]:
array([[2, 4, 0, 1, 4, 2],
       [1, 1, 5, 5, 2, 0],
       [4, 4, 3, 3, 0, 4],
       [2, 5, 0, 5, 2, 3]])
In [21]: io.imshow(palette[indices])
Out[21]: <matplotlib.image.AxesImage at 0xdbb8ac8>
You could also generate a random color pattern rather than using a color palette:
In [24]: random_colors = np.uint8(np.random.randint(0, 255, size=(m, n, 3)))
In [24]: random_colors
Out[27]:
array([[[137,  40,  84],
        [ 42, 142,  25],
        [ 48, 240,  90],
        [ 22,  27, 205],
        [253, 130,  22],
        [137,  33, 252]],

       [[144,  67, 156],
        [155, 208, 130],
        [187, 243, 200],
        [ 88, 171, 116],
        [ 51,  15, 157],
        [ 39,  64, 235]],

       [[ 76,  56, 135],
        [ 20,  38,  46],
        [216,   4, 102],
        [142,  60, 118],
        [ 93, 222, 117],
        [ 53, 138,  39]],

       [[246,  88,  20],
        [219, 114, 172],
        [208,  76, 247],
        [  1, 163,  65],
        [ 76,  83,   8],
        [191,  46,  53]]], dtype=uint8)
In [26]: io.imshow(random_colors)
Out[26]: <matplotlib.image.AxesImage at 0xe6c6a90>

This is the output produced by
import matplotlib.pyplot as plt
plt.imshow([(3,0,0),(0,2,0),(0,0,1)])
plt.colorbar()
plt.show()
You see that the three tuples I provided to imshow are interpreted as rows of a matrix:
3 0 0
0 2 0
0 0 1
The numeric values are mapped to colors for the plot. The colorbar function shows the mapping between colors and numeric values.
To draw rectangles, refer to this SO question, but replace the value of the facecolor parameter with one of the following possibilities:
A color name, as a string.
A Hex color code, given as a string with a leading # sign. For example, facecolor='#FF0000' is bright red.
A triple with three values between 0 and 1, which specify the (Red, Green, Blue) parts of your color. (Not 0 to 255 like you assumed in your question!)
Use the edgecolor parameter in the same manner to determine the color of the rectangle border, or use 'None' to draw no border.
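As a quick illustration (a sketch of my own, not taken from the linked question), the three facecolor forms applied to matplotlib.patches.Rectangle:
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

fig, ax = plt.subplots()
# Three equivalent ways of specifying red as the facecolor.
ax.add_patch(Rectangle((0.1, 0.1), 0.2, 0.3, facecolor='red', edgecolor='None'))
ax.add_patch(Rectangle((0.4, 0.1), 0.2, 0.3, facecolor='#FF0000', edgecolor='black'))
ax.add_patch(Rectangle((0.7, 0.1), 0.2, 0.3, facecolor=(1.0, 0.0, 0.0), edgecolor='black'))
plt.show()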

Related

How to compare against and modify values of NumPy array

I am trying to convert a numpy array to a .vox file. .vox files have a limit where they can only store 255 unique colors. My numpy array is being somewhat randomly generated, so its length and values are not always the same. However, its shape is always (N, 3) and the color values are usually similar. For instance, if there is a "red" part of the array, most of the reds are close enough to be visually the same. I've created another numpy array with a set of 19 sample colors equally spaced between 13 points in the RGB color space, which produces a shape of (247, 3).
eg. ([13, 0, 0], [26, 0, 0], [39, 0, 0], [52, 0, 0], [65, 0, 0], [78, 0, 0], [91, 0, 0],
[104, 0, 0], [117, 0, 0], [130, 0, 0], [143, 0, 0], [156, 0, 0], [169, 0, 0], [182, 0, 0],
[195, 0, 0], [208, 0, 0], [221, 0, 0], [234, 0, 0], [247, 0, 0]) x 13 other sets
How can I compare every color in my original numpy array to my array of sample colors and change its value to the closest match? It is ok if the length of the array is greater than 255, so long as there are only 255 or fewer unique colors.
The most usual way to compare everything to everything (and, generally speaking, to do in numpy the equivalent of nested for loops) is to use broadcasting.
Let's consider a smaller example
colorTable = np.array([[0,0,0], [120,0,0], [0,120,0], [0,0,120], [255,255,255]])
randomColors = np.array([[10,10,10], [255,0,0], [140,140,140], [0,0,130], [20,200,80]])
So, the idea is to compare all colors from randomColors to all from colorTable.
Numpy broadcasting consists of assigning a different axis to each dimension you want to iterate over in the implicit nested for loop.
For example, before applying it to our case:
a=np.array([1,2,3])
b=np.array([4,5,6,7])
a[:,None]*b[None, :]
# array([[ 4,  5,  6,  7],
#        [ 8, 10, 12, 14],
#        [12, 15, 18, 21]])
See that we place ourselves in 2D, making a a column of 3 numbers and b a row of 4 numbers, and letting numpy broadcasting perform the 12 matching multiplications.
So, in our case,
colorTable[:,None,:]-randomColors[None,:,:]
computes the difference between each color of colorTable (along axis 0) and each color of randomColors (along axis 1). Note that axis 2 holds the three r, g, b components. Since this axis is present in both operands, there is no broadcasting along it.
array([[[ -10,  -10,  -10],
        [-255,    0,    0],
        [-140, -140, -140],
        [   0,    0, -130],
        [ -20, -200,  -80]],

       [[ 110,  -10,  -10],
        [-135,    0,    0],
        [ -20, -140, -140],
        [ 120,    0, -130],
        [ 100, -200,  -80]],

       [[ -10,  110,  -10],
        [-255,  120,    0],
        [-140,  -20, -140],
        [   0,  120, -130],
        [ -20,  -80,  -80]],

       [[ -10,  -10,  110],
        [-255,    0,  120],
        [-140, -140,  -20],
        [   0,    0,  -10],
        [ -20, -200,   40]],

       [[ 245,  245,  245],
        [   0,  255,  255],
        [ 115,  115,  115],
        [ 255,  255,  125],
        [ 235,   55,  175]]])
As you can see, this is a 3D array, which you can view as a 2D array of rgb triplets (one color of colorTable per row, one color of randomColors per column).
((colorTable[:,None,:]-randomColors[None,:,:])**2).sum(axis=2)
sums the square of this difference along axis 2. So what we have here, for each pair (r,g,b), (r',g',b') of colors from the two arrays, is (r-r')²+(g-g')²+(b-b')².
array([[   300,  65025,  58800,  16900,  46800],
       [ 12300,  18225,  39600,  31300,  56400],
       [ 12300,  79425,  39600,  31300,  13200],
       [ 12300,  79425,  39600,    100,  42000],
       [180075, 130050,  39675, 145675,  88875]])
This is a 2D array of square of euclidean distance between each color of colorTable (on each row) and each color of randomColors (on each column).
If we want to find the index in colorTable of the closest color to randomColors[3], all we have to do is to compute argmin of column 3 of this table.
((colorTable[:,None,:]-randomColors[None,:,:])**2).sum(axis=2)[:,3].argmin()
Result is, correctly, 3.
Or, even better, we can do that for all columns at once, by telling argmin to compute the minimum only along axis 0, that is along rows, that is along all colors of colorTable:
((colorTable[:,None,:]-randomColors[None,:,:])**2).sum(axis=2).argmin(axis=0)
# array([0, 1, 1, 3, 2])
You can see that the result is, correctly, for each column, that is for each color of randomColors, the index of the color of colorTable that is closest (in Euclidean distance) to it. That is, the index of the smallest number in each column of the previous table.
So, all that remains is to extract the color of colorTable matching this index:
colorTable[((colorTable[:,None,:]-randomColors[None,:,:])**2).sum(axis=2).argmin(axis=0)]
This gives a table of the same shape as randomColors (that is, with as many rows as the previous result has indices), made of colors from colorTable (the one closest to each row):
array([[  0,   0,   0],
       [120,   0,   0],
       [120,   0,   0],
       [  0,   0, 120],
       [  0, 120,   0]])
Note that the result is not always intuitive: (140,140,140) is closer to (120,0,0) than it is to (255,255,255).
But that is a matter of defining the distance.
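Putting the pieces together, here is a small helper of my own (a sketch, not part of the original answer) that maps every color of an (N, 3) array to its nearest palette entry; the cast to a signed integer type guards against uint8 overflow in the subtraction:
import numpy as np

def quantize_to_palette(colors, palette):
    # Squared Euclidean distances via broadcasting: shape (len(palette), len(colors)).
    d2 = ((palette[:, None, :].astype(np.int64) - colors[None, :, :]) ** 2).sum(axis=2)
    # For each input color, pick the closest palette entry.
    return palette[d2.argmin(axis=0)]

colorTable = np.array([[0, 0, 0], [120, 0, 0], [0, 120, 0], [0, 0, 120], [255, 255, 255]])
randomColors = np.array([[10, 10, 10], [255, 0, 0], [140, 140, 140], [0, 0, 130], [20, 200, 80]])
print(quantize_to_palette(randomColors, colorTable))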

reducing colors with numpy

I'm writing a script to reduce the number of colors in a list by finding clusters. The problem I seem to run into is that the clusters will have different dimensions. Here is my jumping-off point after the original list of 6 colors has already been separated into 3 clusters:
import numpy
a = numpy.array([
    [12, 44, 52],
    [27, 0, 71],
    [81, 99, 92]
])
b = numpy.array([
    [ 12, 13, 93],
    [128, 128, 128]
])
c = numpy.array([
    [ 57, 14, 255]
])
clusters = numpy.array([a,b,c])
print(numpy.min(clusters, axis=1))
However now the function numpy.min() starts to throw an error - I suspect it's because of the differently sized arrays.
The cluster arrays will always have the shape (x, 3) (x number of colors, 3 components). I want to get an array with the minimums of all components of the colors in one cluster (n, 3) (n is number of clusters) - so array([12, 0, 52], [12, 13, 93], [57, 14, 255]) in this case.
Is there a way to do this? As I mentioned it works as long as all clusters have multiple values.
Since your arrays a, b and c don't have equal shapes, you can't put them into the same array (at least not without padding with some value). You could calculate the minimum of each cluster first and then build an array from these minima:
numpy.array([arr.min(axis=0) for arr in (a, b, c)])
Which gives you:
array([[ 12,   0,  52],
       [ 12,  13,  93],
       [ 57,  14, 255]])

Convert multi-dimensional Numpy array to 2-dimensional array based on color values

I have an image which is read as a uint8 array with the shape (512,512,3).
Now I would like to convert this array to a uint8 array of shape (512,512,1), where each pixel value in the third axis is converted from a color value [255,0,0] to a single class label value [3], based on the following color/class encoding:
1 : [0, 0, 0],
2 : [0, 0, 255],
3 : [255, 0, 0],
4 : [150, 30, 150],
5 : [255, 65, 255],
6 : [150, 80, 0],
7 : [170, 120, 65],
8 : [125, 125, 125],
9 : [255, 255, 0],
10 : [0, 255, 255],
11 : [255, 150, 0],
12 : [255, 225, 120],
13 : [255, 125, 125],
14 : [200, 100, 100],
15 : [0, 255, 0],
16 : [0, 150, 80],
17 : [215, 175, 125],
18 : [220, 180, 210],
19 : [125, 125, 255]
What is the most efficient way to do this? I thought of looping through all classes and using numpy.where, but this is obviously time-consuming.
You could use a giant lookup table. Let cls be [[0,0,0], [0,0,255], ...] of dtype=np.uint8.
LUT = np.zeros((256, 256, 256), dtype='u1')
LUT[cls[:, 0], cls[:, 1], cls[:, 2]] = np.arange(cls.shape[0]) + 1
img_as_cls = LUT[img[..., 0], img[..., 1], img[..., 2]]
This solution is O(1) per pixel. It is also quite cache-efficient because only a small part of the entries in LUT is actually used. It takes circa 10 ms to process a 1000x1000 image on my machine.
The solution can be slightly improved by converting 3-color channels to 24-bit integers.
Here is the code
def scalarize(x):
    # compute x[...,2]*65536 + x[...,1]*256 + x[...,0] in an efficient way
    y = x[..., 2].astype('u4')
    y <<= 8
    y += x[..., 1]
    y <<= 8
    y += x[..., 0]
    return y

LUT = np.zeros(2**24, dtype='u1')
LUT[scalarize(cls)] = 1 + np.arange(cls.shape[0])
simg = scalarize(img)
img_to_cls = LUT[simg]
After optimization it takes about 5 ms to process a 1000x1000 image.
One way: separately create the boolean arrays with True values where the input's pixel value matches one of the palette values, and then use arithmetic to combine them. Thus:
import numpy as np

palette = [
    [0, 0, 0],
    [0, 0, 255],
    [255, 0, 0],
    # etc.
]

def palettized(data, palette):
    # Initialize result array of shape (H, W, 1)
    shape = list(data.shape)
    shape[-1] = 1
    result = np.zeros(shape)
    # Loop and add each palette index into the single output channel.
    for value, colour in enumerate(palette, 1):
        result[..., 0] += (data == colour).all(axis=2) * value
    return result
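A brief usage sketch of my own (the test image here is hypothetical, assembled from the palette itself):
import numpy as np

palette_arr = np.array(palette, dtype=np.uint8)
img = palette_arr[np.random.randint(0, len(palette), size=(64, 64))]  # (64, 64, 3)

labels = palettized(img, palette)  # -> shape (64, 64, 1), values 1..len(palette)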
Here's one based on views -
# https://stackoverflow.com/a/45313353/ #Divakar
def view1D(a, b):  # a, b are arrays
    # This function gets 1D view into 2D input arrays
    a = np.ascontiguousarray(a)
    b = np.ascontiguousarray(b)
    void_dt = np.dtype((np.void, a.dtype.itemsize * a.shape[-1]))
    return a.view(void_dt).ravel(), b.view(void_dt).ravel()

def img2label(a, maps):
    # Get one-dimension reduced view into input image and map arrays.
    # We need to reshape image to 2D, then feed it to view1D to get 1D
    # outputs and then reshape 1D image back to 2D.
    A, B = view1D(a.reshape(-1, a.shape[-1]), maps)
    A = A.reshape(a.shape[:2])
    # Trace back positions of A in B using searchsorted. This gives us
    # original order, which is the final output.
    sidx = B.argsort()
    return sidx[np.searchsorted(B, A, sorter=sidx)]
Given that your labels start from 1, you might want to add 1 to the output.
Sample run -
In [100]: # Mapping array
     ...: maps = np.array([[0, 0, 0],[0, 0, 255],\
     ...:                  [255, 0, 0],[150, 30, 150]],dtype=np.uint8)
     ...:
     ...: # Setup random image array
     ...: idx = np.array([[0,2,1,3],[1,3,2,0]])
     ...: img = maps[idx]
In [101]: img2label(img, maps)  # should retrieve back idx
Out[101]:
array([[0, 2, 1, 3],
       [1, 3, 2, 0]])

Efficiently zero out all but largest n elements for each image pixel

So I have an image I of size (H x W x C), where C is some number of channels. The challenge is to obtain a new image J, again of size (H x W x C), in which J[i, j] contains only the maximum n entries in I[i, j].
Equivalently, think about iterating through each image pixel in I and zero-ing out all but the highest n entries.
What I've tried:
# NOTE: bone_weight_matrix is a matrix of size (256 x 256 x 43)
argsort_four = np.argsort(bone_weight_matrix, axis=2)[:, :, -4:]

# For each pixel, retain only the top four influencing bone weights
proc_matrix = np.zeros(bone_weight_matrix.shape)
for i in range(bone_weight_matrix.shape[0]):
    for j in range(bone_weight_matrix.shape[1]):
        proc_matrix[i, j, argsort_four[i, j]] = bone_weight_matrix[i, j, argsort_four[i, j]]
return proc_matrix
Problem is this method seems to be super slow and doesn't feel very pythonic. Any advice would be great.
Cheers.
Generic case: Keeping the largest or smallest n elements along an axis
Basically, two steps are involved:
Get the n indices to be kept along the specified axis with np.argpartition.
Initialize a zeros array and use those indices with advanced indexing to select from the input array as well as assign into the zeros array.
Let's solve the generic problem of selecting n elements along a specified axis, with the ability to keep either the largest n or the smallest n elements.
The implementation would look like this -
def keep(ar, n, axis=-1, order='largest'):
    axis = np.core.multiarray.normalize_axis_index(axis, ar.ndim)
    slice_l = [slice(None, None, None)] * ar.ndim
    if order == 'largest':
        slice_l[axis] = slice(-n, None, None)
        # index with a tuple; indexing with a plain list of slices is deprecated
        idx = np.argpartition(ar, kth=-n, axis=axis)[tuple(slice_l)]
    elif order == 'smallest':
        slice_l[axis] = slice(None, n, None)
        idx = np.argpartition(ar, kth=n, axis=axis)[tuple(slice_l)]
    else:
        raise Exception('Invalid order value')
    grid = np.ogrid[tuple(map(slice, ar.shape))]
    grid[axis] = idx
    out = np.zeros_like(ar)
    out[tuple(grid)] = ar[tuple(grid)]
    return out
Sample runs
Input array:
In [208]: np.random.seed(0)
     ...: I = np.random.randint(11,99,(2,2,6))
In [209]: I
Out[209]:
array([[[55, 58, 75, 78, 78, 20],
        [94, 32, 47, 98, 81, 23]],

       [[69, 76, 50, 98, 57, 92],
        [48, 36, 88, 83, 20, 31]]])
Keep largest 2 elements along last axis:
In [210]: keep(I, n=2, axis=-1, order='largest')
Out[210]:
array([[[ 0,  0,  0, 78, 78,  0],
        [94,  0,  0, 98,  0,  0]],

       [[ 0,  0,  0, 98,  0, 92],
        [ 0,  0, 88, 83,  0,  0]]])
Keep largest 1 element along first axis:
In [211]: keep(I, n=1, axis=1, order='largest')
Out[211]:
array([[[ 0, 58, 75,  0,  0,  0],
        [94,  0,  0, 98, 81, 23]],

       [[69, 76,  0, 98, 57, 92],
        [ 0,  0, 88,  0,  0,  0]]])
Keep smallest 2 elements along last axis:
In [212]: keep(I, n=2, axis=-1, order='smallest')
Out[212]:
array([[[55,  0,  0,  0,  0, 20],
        [ 0, 32,  0,  0,  0, 23]],

       [[ 0,  0, 50,  0, 57,  0],
        [ 0,  0,  0,  0, 20, 31]]])
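Applied to the question's setup, a short usage sketch (bone_weight_matrix here is just a random stand-in for the real (256, 256, 43) array):
import numpy as np

bone_weight_matrix = np.random.rand(256, 256, 43)  # hypothetical stand-in

# Keep only the four largest weights per pixel, zero out the rest.
proc_matrix = keep(bone_weight_matrix, n=4, axis=2, order='largest')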

Multiply NumPy ndarray with every element in another binary ndarray of different size

I have two ndarrays:
a = [[30, 40],
     [60, 90]]
b = [[0, 0, 1],
     [1, 0, 1],
     [1, 1, 1]]
Please notice that a's shape might be larger, but it is always a square array: (50,50), (100,100).
The wanted result is:
Result = [[a*0, a*0, a*1],
          [a*1, a*0, a*1],
          [a*1, a*1, a*1]]
I managed to get the right answer with this code, but I think there should be a built-in function in numpy that accomplishes this task in a faster manner:
totalrows = []
for row in range(b.shape[0]):
    cells = []
    for column in range(b.shape[1]):
        print(row, column)
        cells.append(b[row, column] * a)
    totalrows.append(np.concatenate(cells, axis=1))
return np.concatenate(totalrows, axis=0)
Indeed there's a NumPy built-in np.kron for such block-based elementwise multiplication problems. To solve your case, it could be used like so -
np.kron(b,a)
Sample run -
In [50]: a
Out[50]:
array([[30, 40],
       [60, 90]])
In [51]: b
Out[51]:
array([[0, 0, 1],
       [1, 0, 1],
       [1, 1, 1]])
In [52]: np.kron(b,a)
Out[52]:
array([[ 0,  0,  0,  0, 30, 40],
       [ 0,  0,  0,  0, 60, 90],
       [30, 40,  0,  0, 30, 40],
       [60, 90,  0,  0, 60, 90],
       [30, 40, 30, 40, 30, 40],
       [60, 90, 60, 90, 60, 90]])
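As a quick sanity check (my own sketch, not part of the original answer), the kron result matches the block layout the question's nested loops build, with the (i, j) block of np.kron(b, a) equal to b[i, j] * a:
import numpy as np

a = np.array([[30, 40], [60, 90]])
b = np.array([[0, 0, 1], [1, 0, 1], [1, 1, 1]])

expected = np.block([[b[i, j] * a for j in range(b.shape[1])] for i in range(b.shape[0])])
print(np.array_equal(np.kron(b, a), expected))  # True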
3D array case
Now, let's say we are working with a as a 3D array (m,n,p) and b as (q,r) and assuming you are looking to perform such a block-wise multiplication iteratively along the last axis of a. Thus, the shapes are to be multiplied along the first two axes on the two inputs to get the output array. To achieve such an output, we need to extend the dimension of b by introducing a singleton dimension as the last axis. The final output would be of shape (m*q,n*r,p*1). The implementation would be simply -
np.kron(b[...,None],a)
Shape check -
In [161]: a = np.random.randint(0,99,(4,5,2))
     ...: b = np.random.randint(0,99,(6,7))
     ...:
In [162]: np.kron(b[...,None],a).shape
Out[162]: (24, 35, 2)
