The code works well but is very slow. How can I vectorize the color substitution to avoid the Python for loop?
processed_image = np.empty(initial_image.shape)
for i, j in np.ndindex(initial_image.shape[:2]):
    l_, a, b = initial_image[i, j, :]
    idx = mapping[a + 128, b + 128]
    a, b = new_colors[tuple(idx)]
    processed_image[i, j] = l_, a, b
I have an image initial_image in CIELAB space as a numpy array of shape (some height, some width, 3). I need to produce a corrected image by changing the a and b color components of the image using mapping. mapping is a numpy array of shape (255, 255, 2): it gives indices that can be used to look up the corrected a and b colors in new_colors. new_colors has shape (table height, table width, 2).
Solutions that use scikit-image will also be helpful.
You can use advanced indexing:
# chain the two maps
chained = new_colors[(*np.moveaxis(mapping, 2, 0),)]
# split color channels
c1, *c23 = np.moveaxis(initial_image, 2, 0)
# add 128
c23 = *map(np.add, c23, (128, 128)),
# apply chained map
processed_image_2 = np.concatenate([c1[..., None], chained[c23]], axis=2)
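A quick way to sanity-check the vectorized version against the original loop is to run both on small made-up inputs (a minimal sketch; mapping is sized 256x256 here, rather than the 255x255 stated in the question, so that a + 128 and b + 128 are always valid indices, and the channels are integers so they can be used for indexing):

import numpy as np

# made-up sizes: image h x w, lookup table th x tw
h, w, th, tw = 6, 7, 8, 9
initial_image = np.random.randint(-128, 128, (h, w, 3))
new_colors = np.random.randint(-128, 128, (th, tw, 2))
mapping = np.stack([np.random.randint(0, th, (256, 256)),
                    np.random.randint(0, tw, (256, 256))], axis=2)

# loop version from the question
processed_image = np.empty(initial_image.shape)
for i, j in np.ndindex(initial_image.shape[:2]):
    l_, a, b = initial_image[i, j, :]
    idx = mapping[a + 128, b + 128]
    a, b = new_colors[tuple(idx)]
    processed_image[i, j] = l_, a, b

# vectorized version from above
chained = new_colors[(*np.moveaxis(mapping, 2, 0),)]
c1, *c23 = np.moveaxis(initial_image, 2, 0)
c23 = *map(np.add, c23, (128, 128)),
processed_image_2 = np.concatenate([c1[..., None], chained[c23]], axis=2)

assert np.array_equal(processed_image, processed_image_2)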
Given a Numpy array (actually a 3-channel image) I need to map a function over it, but only where a triplet (i.e. an RGB pixel) satisfies a predefined condition. All the rest should be kept untouched.
I know how to set a constant value when a pixel meets a certain condition, but I don't know how to apply a function that takes the current pixel value as its parameter.
For instance, the following example sets to 128 all the pixels whose channels are all greater than 128:
import numpy as np
L = 128
img = np.random.randint(0, 255, (5, 5, 3))
img[(img > L).all(axis=2)] = np.array([128, 128, 128])
But what if I have to set a value that depends on the current value of the pixel?
The following code of course does not work:
import numpy as np

def smart_function(v):
    return v//2

L = 128
img = np.random.randint(0, 255, (5, 5, 3))
img[(img > L).all(axis=2)] = smart_function(img)
I also tried with vectorize with no success:
import numpy as np

def smart_function(v):
    return v//2

vf = np.vectorize(smart_function)

L = 128
img = np.random.randint(0, 255, (5, 5, 3))
img[(img > L).all(axis=2)] = vf(img)
Edit
To explain my request better, this is the expected behaviour written in plain Python. Obviously this code is very slow and therefore unusable, but it gives the idea:
for y in range(img.shape[0]):
    for x in range(img.shape[1]):
        pixel = img[y, x]
        if pixel[0] > L and pixel[1] > L and pixel[2] > L:
            img[y, x] = smart_function(pixel)
You could use np.frompyfunc together with ufunc.at:
import numpy as np

xs = np.random.randn(5, 5)

def f(x):
    return np.round(x)

f = np.frompyfunc(f, nin=1, nout=1)
f.at(xs, xs > 0)
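Applied to the three-channel example from the question (a minimal sketch, assuming the same smart_function and threshold), the frompyfunc-wrapped function can be evaluated on the masked pixels and the result written back with boolean-mask assignment; frompyfunc produces object-dtype output, so it is cast back to the image dtype:

import numpy as np

def smart_function(v):
    return v // 2

# wrap the scalar function so it maps element-wise over arrays
vf = np.frompyfunc(smart_function, nin=1, nout=1)

L = 128
img = np.random.randint(0, 255, (5, 5, 3))

mask = (img > L).all(axis=2)                    # pixels with all channels > L
img[mask] = vf(img[mask]).astype(img.dtype)     # apply only to those pixels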
Let's assume we have a tensor representing an image of shape (920, 270, 1), i.e. width 920 and height 270, which assigns a number (some index) to each pixel.
We also have a numpy array of shape (N, 3) which maps each index to a 3-tuple.
I now want to create a new numpy array of shape (920, 270, 3) which holds, for each pixel, the 3-tuple selected by that pixel's index in the original tensor. How do I do this assignment without for loops and other expensive iteration?
This would look something like:
color_image = np.zeros((self._w, self._h, 3), dtype=np.int32)
self._colors = np.array(N,3) # this is already present
indexed_image = torch.tensor(920,270,1) # this is already present
#how do I assign it to this numpy array?
color_image[indexed_image.w, indexed_image.h] = self._colors[indexed_image.flatten()]
Assuming you have _colors and indexed_image, something that resembles:
>>> indexed_image = torch.randint(0, 10, (920, 270, 1))
>>> _colors = np.random.randint(0, 255, (N, 3))
A common way of converting a dense label map to an RGB map is to loop over the label set:
>>> _colors = torch.FloatTensor(_colors)
>>> rgb = torch.zeros(indexed_image.shape[:-1] + (3,))
>>> for lbl in range(N):
... rgb[lbl == indexed_image[...,0]] = _colors[lbl]
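If N is large, the per-label loop can be avoided entirely with advanced indexing; a minimal sketch with a made-up N, shown with numpy arrays (the same style of indexing also works on torch tensors):

import numpy as np

N = 10
_colors = np.random.randint(0, 255, (N, 3))
indexed_image = np.random.randint(0, N, (920, 270, 1))

# each pixel's label directly selects the matching RGB row
rgb = _colors[indexed_image[..., 0]]    # shape (920, 270, 3)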
I have a stack of images stored in a 4D array, e.g. [0, 0, :, :] is the image at location (0, 0). Now I want to make a montage of the images, store it in a 2D array, do something with the images, and then transfer the montage back into a 4D array. How can I manage this with numpy? Following is a schematic of what I want to do. It is shown with a 3D array, but I think you can get the idea.
The first part of the operation can be carried out using np.block. You would need to convert to a non-array sequence type for the outer dimensions:
l = [list(x) for x in arr]
montage = np.block(l)
Alternatively, you can just arrange your dimensions the way you like first, then reshape. The key is to remember that later dimensions get raveled together. So if you have an array with (A, B) elements, each of which is an (M, N) image, the result should be an (A * M, B * N) image. You want the original image pixels from each row to stay contiguous, but the rows to be concatenated. So transpose and reshape like this:
a, b, m, n = arr.shape
montage = arr.transpose(0, 2, 1, 3).reshape(a * m, b * n)
You can reshape back using the inverse operation fairly easily:
stack = montage.reshape(a, m, b, n).transpose(0, 2, 1, 3)
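A small round-trip check of the transpose/reshape approach (a sketch with made-up sizes):

import numpy as np

# a 2x3 grid of 4x5 images
arr = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)

a, b, m, n = arr.shape
montage = arr.transpose(0, 2, 1, 3).reshape(a * m, b * n)    # shape (8, 15)
stack = montage.reshape(a, m, b, n).transpose(0, 2, 1, 3)    # back to (2, 3, 4, 5)

assert np.array_equal(stack, arr)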
This is actually the default behavior of np.reshape(). Just calculate how wide/tall the collage image will be, and then call np.reshape. Calling reshape again will reverse it.
import numpy as np
# placeholder data -- 4 images that are 5x5
image = np.arange(4 * 5 * 5 * 3).reshape(4, 5, 5, 3)
# 2x2 grid of images
collage = image.reshape(10, 10, 3)
result = collage.reshape(4, 5, 5, 3)
assert np.array_equal(image, result)
Edit: I misunderstood the question. I assumed that the 4D array was a 1D list of NxMx3 RGB images. If, instead, it is a 2D grid of 2D (single-channel) images, I can't think of a clever way to do it with numpy operations. But it shouldn't be too slow to just use a Python for-loop.
(assuming row-major order)
# rows   = number of rows in the image grid
# cols   = number of cols in the image grid
# height = height of each image
# width  = width of each image
rows, cols, height, width = images.shape
collage = np.empty((rows * height, cols * width), dtype=images.dtype)
for i in range(rows):
    for j in range(cols):
        y = i * height
        x = j * width
        collage[y:y+height, x:x+width] = images[i, j]
Then, to reverse it, just flip the assignment:
result = np.empty((rows, cols, height, width), dtype=collage.dtype)
for i in range(rows):
    for j in range(cols):
        y = i * height
        x = j * width
        result[i, j, :, :] = collage[y:y+height, x:x+width]
I have an issue using Python with matrix multiplication and reshape. For example, I have a column vector S of size (16, 1) and another matrix H of size (4, 4). I need to reshape the column S into (4, 4) in order to multiply it with H, and then reshape the result back into (16, 1). I did that in Matlab as below:
clear all; clc; clear
H = randn(4,4,16) + 1j.*randn(4,4,16);
S = randn(16,1) + 1j.*randn(16,1);
for ij = 1 : 16
    y(:,:,ij) = reshape(H(:,:,ij)*reshape(S,4,[]),[],1);
end
y = mean(y,3);
Coming to Python:
import numpy as np
H = np.random.randn(4,4,16) + 1j * np.random.randn(4,4,16)
S = np.random.randn(16,) + 1j * np.random.randn(16,)
y = np.zeros((4,4,16),dtype=complex)
for ij in range(16):
    y[:,:,ij] = np.reshape(H[:,:,ij]@S.reshape(4,4),16,1)
But I get an error here that we can't reshape the matrix y of size 256 into 16x1.
Does anyone have an idea about how to solve this problem?
Simply do this:
S.shape = (4,4)
for ij in range(16):
    y[:,:,ij] = H[:,:,ij] @ S
S.shape = -1 # equivalent to 16
np.dot sums over the last axis of its first operand and the second-to-last axis of its second operand when both have two or more dimensions. You can move your axes around to take advantage of this.
Keep in mind that reshape(S, 4, 4) in Matlab is likely equivalent to S.reshape(4, 4).T in Python.
So given H of shape (4, 4, 16) and S of shape (16,), you can multiply each channel of H by a reshaped S using
np.moveaxis(np.dot(np.moveaxis(H, -1, 0), S.reshape(4, 4).T), 0, -1)
The inner moveaxis call makes H into (16, 4, 4) for easy multiplication. The outer one reverses the effect.
Alternatively, you can use the identity A @ B.T == (B @ A.T).T, so that S.reshape(4, 4) never has to be transposed explicitly:
np.transpose(np.dot(S.reshape(4, 4), np.transpose(H)), (2, 0, 1))
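A quick check of both expressions against the explicit per-channel loop (a small self-contained sketch):

import numpy as np

H = np.random.randn(4, 4, 16) + 1j * np.random.randn(4, 4, 16)
S = np.random.randn(16) + 1j * np.random.randn(16)

# explicit loop, one 4x4 product per channel
looped = np.zeros((4, 4, 16), dtype=complex)
for ij in range(16):
    looped[:, :, ij] = H[:, :, ij] @ S.reshape(4, 4).T

via_moveaxis = np.moveaxis(np.dot(np.moveaxis(H, -1, 0), S.reshape(4, 4).T), 0, -1)
via_transpose = np.transpose(np.dot(S.reshape(4, 4), np.transpose(H)), (2, 0, 1))

assert np.allclose(looped, via_moveaxis)
assert np.allclose(looped, via_transpose)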
There are two issues in your solution
1) The reshape method takes the shape as a single tuple argument, not as multiple arguments.
2) The shape of your y array should be 16x1x16, not 4x4x16. In Matlab there is no issue, since y is grown and reshaped automatically as you assign to it.
The correct version would be the following:
import numpy as np
H = np.random.randn(4,4,16) + 1j * np.random.randn(4,4,16)
S = np.random.randn(16,) + 1j * np.random.randn(16,)
y = np.zeros((16,1,16),dtype=complex)
for ij in range(16):
    y[:,:,ij] = np.reshape(H[:,:,ij]@S.reshape((4,4)),(16,1))
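Going one step further (a hedged sketch, not part of the original answer, continuing from the snippet above): the remaining loop can itself be replaced by np.einsum, which performs all 16 matrix products in one call and then reshapes to the (16, 1, 16) layout:

# same per-channel product H[:,:,ij] @ S.reshape((4,4)), done in a single einsum call
y_vectorized = np.einsum('abk,bc->ack', H, S.reshape((4, 4))).reshape((16, 1, 16))
assert np.allclose(y, y_vectorized)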
This is my first nontrivial use of numpy, and I'm having some trouble in one spot.
So, I have colors, a (xsize + 2, ysize + 2, 3) ndarray, and newlife, a (xsize + 2, ysize + 2) ndarray of booleans. I want to add a random value between -5 and 5 to all three values in colors at every position where newlife is true. In other words, newlife maps each 2D position to whether or not I want to add a random value to the color in colors at that position.
I've tried a million variations on this:
colors[np.nonzero(newlife)] += (np.random.random_sample((xsize + 2,ysize + 2, 3)) * 10 - 5)
but I keep getting stuff like
ValueError: operands could not be broadcast together with shapes (589,3) (130,42,3) (589,3)
How do I do this?
I think this does what you want:
import numpy as np

# example data
colors = np.random.randint(0, 100, (5,4,3))
newlife = np.random.randint(0, 2, (5,4), bool)
# create values to add, then mask with newlife
to_add = np.random.randint(-5,6, (5,4,3))
to_add[~newlife] = 0
# modify in place
colors += to_add
This changes the colors in place, assuming uint8 dtype. Neither assumption (in-place modification, uint8) is essential:
import numpy as np
n_x, n_y = 2, 2
colors = np.random.randint(5, 251, (n_x+2, n_y+2, 3), dtype=np.uint8)
mask = np.random.randint(0, 2, (n_x+2, n_y+2), dtype=bool)
n_change = np.count_nonzero(mask)
print(colors)
print(mask)
# offsets in [-5, 5] as int8, viewed as uint8: the addition wraps modulo 256, which
# effectively subtracts for negative offsets (colors were drawn from [5, 250], so
# no unintended wrap-around can occur)
colors[mask] += np.random.randint(-5, 6, (n_change, 3), dtype=np.int8).view(np.uint8)
print(colors)
The easiest way of understanding this is to look at the shape of colors[mask].
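For instance, continuing from the snippet above, colors[mask] collapses the masked positions into a 2-D array of shape (n_change, 3), which is why the random offsets are drawn with exactly that shape:

print(colors[mask].shape)    # (n_change, 3): one row of three channels per masked position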