How to use numpy.where to change all pixels of an image? - python

I have an image of shape (300,300,3) consisting of these pixels: [255, 194, 7], [224, 255, 8], [230, 230, 230], [11, 102, 255]. I want to change the pixel [230, 230, 230] to [255, 255, 255], and all other pixels to [0, 0, 0]. So I'm applying the numpy.where function to switch the pixels. Below is the code:
import numpy as np
im = np.array([[[255, 194, 7],[224, 255, 8],[230, 230, 230],[11, 102, 255]]])
im[np.where((im == [230, 230, 230]).all(axis = 2))] = [255,255,255]
im[np.where((im != [255,255,255]).all(axis = 2))] = [0,0,0]
The first line works fine, but any pixel that contains a 255, like [11, 102, 255], does not get flipped at all by the second line, and the image stays the same. Can anyone tell me what I'm doing wrong?

Like this? Make a mask and use it to change the values.
import numpy as np
im = np.array([[[255, 194, 7],[224, 255, 8],[230, 230, 230],[11, 102, 255]]])
>>> mask = im == 230
>>> im[mask] = 255
>>> im[np.logical_not(mask)] = 0
>>> im
=> array([[[ 0, 0, 0],
[ 0, 0, 0],
[255, 255, 255],
[ 0, 0, 0]]])
Or using numpy.where
>>> np.where(im==230, 255, 0)
=> array([[[ 0, 0, 0],
[ 0, 0, 0],
[255, 255, 255],
[ 0, 0, 0]]])
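As a note on the original question: the second line does nothing because (im != [255,255,255]).all(axis = 2) is only True where every channel differs from 255, so a pixel like [11, 102, 255] is never selected. The element-wise im == 230 comparison above also only works here because 230 happens to appear in no other colour. A sketch that builds one boolean per pixel and reuses it:
import numpy as np

im = np.array([[[255, 194, 7], [224, 255, 8], [230, 230, 230], [11, 102, 255]]])

# One boolean per pixel: True where the whole pixel equals [230, 230, 230]
grey = (im == [230, 230, 230]).all(axis=-1)

im[grey] = [255, 255, 255]   # matching pixels become white
im[~grey] = [0, 0, 0]        # everything else becomes black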

try
np.array_equal(arr1, arr2)

Related

Concerning euclidean distance and loops. How can I fix my code?

canonical=[]
purple = [181, 126, 212]
red = [242, 0, 86]
white = [229, 229, 229]
brown = [109, 59, 24]
black = [37, 23, 40]
pink = [254, 180, 218]
orange = [255, 97, 20]
grey = [97, 97, 97]
blue = [0, 104, 149]
green = [0, 231, 160]
yellow = [227, 239, 79]
element=[]
purple1 = [160, 32, 240]
red1 = [255, 0, 0]
white1 = [255, 255, 255]
brown1 = [165, 42, 42]
black1 = [0, 0, 0]
pink1 = [255, 192, 203]
orange1 = [255, 165, 0]
grey1 = [190, 190, 190]
blue1 = [0, 0, 255]
green1 = [0, 255, 0]
yellow1 = [255, 255, 0]
start = time.time()
euclidean[element[3]] = math.sqrt((canonical[0]-element[0])**2 + (canonical[1]-element[1])**2 + (canonical[2]-element[2])**2)
end = time.time()
times[element[3]] = end-start
I want the formula to be applied to every colour, and I want output similar to the following:
euclidean: {'black': 46.3033476111609, 'blue': 136.24610086163935, 'brown': 41.916583830269374, 'green': 118.86547017532047, 'orange': 104.75686135046239, 'pink': 106.68645649753299, 'purple': 45.98912915026767, 'red': 76.2954782408499, 'white': 41.53311931459037, 'yellow': 127.45587471748802}
The numbers could be different.
I can't quite understand exactly what your colour differencing is doing, but broadly speaking I would approach this with different data structures. For example, put the colours in a dictionary:
colours = {
    'white1': [255, 255, 255],
    'brown1': [165, 42, 42],
}
...and so on.
Then write a function for the distance:
import math

def distance(r, g, b):
    return math.sqrt(r**2 + g**2 + b**2)
Then you can do something like this:
distances = {}
for colour, rgb in colours.items():
    distances[colour] = distance(*rgb)
That *rgb 'explodes' the rgb tuple into the three arguments the distance() function wants. Later on you can try replacing this explicit loop with a dictionary comprehension.
You will eventually also want to learn about NumPy, which provides n-dimensional arrays and makes computing vector norms and so on much easier.
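Putting the pieces together, a minimal sketch of the per-colour comparison the question seems to want, using hypothetical canonical and reference dictionaries built from the lists above and a dictionary comprehension in place of the explicit loop:
import math

# Hypothetical dictionaries; fill in the remaining colours from the question.
canonical = {'purple': [181, 126, 212], 'red': [242, 0, 86], 'white': [229, 229, 229]}
reference = {'purple': [160, 32, 240], 'red': [255, 0, 0], 'white': [255, 255, 255]}

def colour_distance(c1, c2):
    # Euclidean distance between two RGB triples
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

# One distance per colour name, keyed like the desired output
euclidean = {name: colour_distance(canonical[name], reference[name]) for name in canonical}
print(euclidean)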

Replacing ones and zeros in a 2D numpy array with another array?

I have a simple problem that I am trying to solve efficiently with NumPy. The gist of it is that I have a simple 2D array of ones and zeros representing an image mask.
What I want to do is convert these ones and zeros into their RGB equivalent where one is a white pixel [255, 255, 255] and zero is a black pixel [0, 0, 0].
How would I go about doing this using NumPy?
mask = [[0, 0, 1],
        [1, 0, 0]]
# something
result = [
    [[0, 0, 0], [0, 0, 0], [255, 255, 255]],
    [[255, 255, 255], [0, 0, 0], [0, 0, 0]],
]
The intent is to take the result and feed it into PIL to save into a PNG.
I've tried using numpy.where but can't seem to coax it into broadcasting another array out.
A possible solution, with mask as a NumPy array:
np.stack([255 * mask, 255 * mask, 255 * mask], axis=2)
Output:
array([[[ 0, 0, 0],
[ 0, 0, 0],
[255, 255, 255]],
[[255, 255, 255],
[ 0, 0, 0],
[ 0, 0, 0]]])
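If you specifically want np.where to broadcast a colour triple out, a sketch (assuming mask is first converted to a NumPy array):
import numpy as np

mask = np.array([[0, 0, 1],
                 [1, 0, 0]])

# The trailing axis makes the (2, 3) mask broadcast against the (3,) colour triples,
# giving a (2, 3, 3) result that PIL can consume as an RGB image.
result = np.where(mask[..., None] == 1, [255, 255, 255], [0, 0, 0]).astype(np.uint8)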
As your image contains only two colours, I would suggest you consider saving it as a palette image, a.k.a. an indexed image.
Rather than needlessly inflating your image by a factor of 3 so it can store 16.7 million colours, you can just store one byte per pixel, which still gives you 256 colours; that seems plenty when you only have 2 "colours", namely black and white.
That looks like this:
import numpy as np
from PIL import Image
# Make Numpy array "na" from your list
na = np.array(mask, dtype=np.uint8)
# Make PIL Image from Numpy array - this image will be 'L' mode
im = Image.fromarray(na)
# Now push a palette into the image that says:
# index 0 => black, i.e. [0,0,0]
# index 1 => white, i.e. [255,255,255]
#  all other 254 indices are black
# Afterwards the image will be 'P' mode
im.putpalette([0,0,0, 255,255,255] + [0,0,0]*254)
# Save
im.save('result.png')
Since you need to repeat each item three times, np.repeat in conjunction with reshape could be used:
mask = np.array([[0, 0, 1], [1, 0, 0]])
255 * np.repeat(mask, 3, axis=1).reshape(*mask.shape, -1)
>>> array([[[ 0, 0, 0],
[ 0, 0, 0],
[255, 255, 255]],
[[255, 255, 255],
[ 0, 0, 0],
[ 0, 0, 0]]])

How to get unique pixels from a 2D numpy array?

I have a 2D array of RGB pixel data (2 rows with 3 pixels per row).
[[[255, 255, 255],[3, 0, 2],[255, 255, 255]],[[255, 255, 255],[3, 0, 2],[255, 255, 255]]]
How can I get the unique pixels? I want to get
[[255, 255, 255], [3, 0, 2]]
I tried using np.unique and np.transpose with np.reshape, but I wasn't able to get the desired result.
Reshape the array to 2D and then use np.unique with axis=0
arr = np.array([[[255, 255, 255],[3, 0, 2],[255, 255, 255]],[[255, 255, 255],[3, 0, 2],[255, 255, 255]]])
shape = arr.shape
arr = arr.reshape((shape[0] * shape[1], shape[2]))
print(np.unique(arr, axis=0))
Output
[[ 3 0 2]
[255 255 255]]
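The reshape can also be written without unpacking the shape, which avoids the temporary variables:
print(np.unique(arr.reshape(-1, arr.shape[-1]), axis=0))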
How about this?
import itertools
np.unique(np.array(list(itertools.chain(*arr))), axis=0)
array([[ 3, 0, 2],
[255, 255, 255]])

Convert multi-dimensional Numpy array to 2-dimensional array based on color values

I have an image which is read as a uint8 array with the shape (512,512,3).
Now I would like to convert this array to a uint8 array of shape (512,512,1), where each pixel value along the third axis is converted from a colour value such as [255, 0, 0] to a single class label value such as [3], based on the following colour/class encoding:
1 : [0, 0, 0],
2 : [0, 0, 255],
3 : [255, 0, 0],
4 : [150, 30, 150],
5 : [255, 65, 255],
6 : [150, 80, 0],
7 : [170, 120, 65],
8 : [125, 125, 125],
9 : [255, 255, 0],
10 : [0, 255, 255],
11 : [255, 150, 0],
12 : [255, 225, 120],
13 : [255, 125, 125],
14 : [200, 100, 100],
15 : [0, 255, 0],
16 : [0, 150, 80],
17 : [215, 175, 125],
18 : [220, 180, 210],
19 : [125, 125, 255]
What is the most efficient way to do this? I thought of looping through all classes and using numpy.where, but this is obviously time-consuming.
You could use a giant lookup table. Let cls be [[0,0,0], [0,0,255], ...] of dtype=np.uint8.
# 256^3 lookup table: one entry per possible 8-bit RGB colour
LUT = np.zeros((256, 256, 256), dtype='u1')
# assign labels 1..N to the class colours
LUT[cls[:, 0], cls[:, 1], cls[:, 2]] = np.arange(cls.shape[0]) + 1
# label every pixel with a single fancy-indexing lookup
img_as_cls = LUT[img[..., 0], img[..., 1], img[..., 2]]
This solution is O(1) per pixel. It is also quite cache-efficient, because only a small fraction of the LUT entries are actually used. It takes roughly 10 ms to process a 1000x1000 image on my machine.
The solution can be improved slightly by packing the three colour channels into a single 24-bit integer.
Here is the code
def scalarize(x):
    # compute x[...,2]*65536 + x[...,1]*256 + x[...,0] in an efficient way
    y = x[..., 2].astype('u4')
    y <<= 8
    y += x[..., 1]
    y <<= 8
    y += x[..., 0]
    return y
LUT = np.zeros(2**24, dtype='u1')
LUT[scalarize(cls)] = 1 + np.arange(cls.shape[0])
simg = scalarize(img)
img_to_cls = LUT[simg]
After optimization it takes about 5ms to process 1000x1000 image.
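For completeness, a sketch of how cls might be built from the encoding above and how the output can be given the (512, 512, 1) shape the question asks for; colour_map and img are stand-ins here:
# Build the class-colour array in label order (only the first few entries shown)
colour_map = {1: [0, 0, 0], 2: [0, 0, 255], 3: [255, 0, 0]}  # ... continue up to 19 as listed above
cls = np.array([colour_map[k] for k in sorted(colour_map)], dtype=np.uint8)
# after building LUT as shown above:
labels = LUT[scalarize(img)][..., None]  # shape (512, 512, 1); 0 marks colours not in the encoding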
One way: separately create the boolean arrays with True values where the input's pixel value matches one of the palette values, and then use arithmetic to combine them. Thus:
palette = [
[0, 0, 0],
[0, 0, 255],
[255, 0, 0],
# etc.
]
def palettized(data, palette):
    # Initialize the result array with a single-channel last axis
    shape = list(data.shape)
    shape[-1] = 1
    result = np.zeros(shape)
    # Loop and add each palette index where the pixel matches that colour;
    # keepdims keeps the trailing axis so the addition broadcasts into result.
    for value, colour in enumerate(palette, 1):
        result += (data == colour).all(axis=2, keepdims=True) * value
    return result
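A possible usage, assuming img is the (512, 512, 3) uint8 image and palette lists all 19 colours in label order:
labels = palettized(img, palette).astype(np.uint8)  # shape (512, 512, 1), values 1..19 (0 = no match)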
Here's one based on views -
# https://stackoverflow.com/a/45313353/ #Divakar
def view1D(a, b):  # a, b are arrays
    # This function gets a 1D view into 2D input arrays
    a = np.ascontiguousarray(a)
    b = np.ascontiguousarray(b)
    void_dt = np.dtype((np.void, a.dtype.itemsize * a.shape[-1]))
    return a.view(void_dt).ravel(), b.view(void_dt).ravel()

def img2label(a, maps):
    # Get one-dimension reduced views into the input image and map arrays.
    # We need to reshape the image to 2D, feed it to view1D to get 1D
    # outputs, and then reshape the 1D image back to 2D.
    A, B = view1D(a.reshape(-1, a.shape[-1]), maps)
    A = A.reshape(a.shape[:2])
    # Trace back positions of A in B using searchsorted. This gives us
    # the original order, which is the final output.
    sidx = B.argsort()
    return sidx[np.searchsorted(B, A, sorter=sidx)]
Given that your labels start from 1, you might want to add 1 to the output.
Sample run -
In [100]: # Mapping array
...: maps = np.array([[0, 0, 0],[0, 0, 255],\
...: [255, 0, 0],[150, 30, 150]],dtype=np.uint8)
...:
...: # Setup random image array
...: idx = np.array([[0,2,1,3],[1,3,2,0]])
...: img = maps[idx]
In [101]: img2label(img, maps) # should retrieve back idx
Out[101]:
array([[0, 2, 1, 3],
[1, 3, 2, 0]])

Converting a PNG image to 2D array

I have a PNG file which, when converted to a NumPy array, has shape 184 x 184 x 4. The image is 184 by 184 and each pixel is in RGBA format, hence the 3D array.
This is a B&W image and the pixels are either [255, 255, 255, 255] or [0, 0, 0, 255].
I want to convert this to a 184 x 184 2D array where each pixel is now either 1 or 0, depending on whether it was [255, 255, 255, 255] or [0, 0, 0, 255].
Any ideas how to do a straightforward conversion of this?
There are several ways to do the comparison to get a boolean array; then we just need to convert it to an int array. For the comparison, one simple way is to compare against 255 and check for ALL matches along the last axis, which corresponds to checking for [255, 255, 255, 255]. Thus, one approach would be like so -
((arr == 255).all(-1)).astype(int)
Sample run -
In [301]: arr
Out[301]:
array([[[255, 255, 255, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255]],
[[ 0, 0, 0, 255],
[255, 255, 255, 255],
[255, 255, 255, 255]]])
In [302]: ((arr == 255).all(-1)).astype(int)
Out[302]:
array([[1, 0, 0],
[0, 1, 1]])
If there are really only two pixel values in the array as you say, simply scale one of the channels and return it:
(arr[:,:,0] / 255).astype(int)
