Adding an array to a dimension of an array - python

I have 2 numpy arrays img and mask that I want to combine into a single array. The shapes of the arrays are as follows:
img.shape = (512, 366, 3) and mask.shape = (512, 366). I want the final array to have a shape of (512, 366, 4), such that the mask array occupies the 4th channel.
What's the best way to achieve this, please?

The suggestion from Julien works:
from numpy import dstack
new_image = dstack((img, mask))
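As a quick check with dummy arrays of the shapes described above (a sketch, not the original data):
import numpy as np
img = np.zeros((512, 366, 3))
mask = np.ones((512, 366))
new_image = np.dstack((img, mask))   # mask becomes the 4th channel
print(new_image.shape)               # (512, 366, 4)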

Related

Reshaping a numpy array with padding

I created a mask with the shape (128, 128, 128). I then took np.sum on that mask, n_voxels_flattened = np.sum(mask) (I removed the zero voxels so that I can do the transformation on the non-zero voxels only), which gives n_voxels_flattened = 962517. I then iterated over all 300 images, which resulted in an array with shape (962517, 300). I did some adjustments to this array, and the output has the same shape as the input: (962517, 300). I now want to reshape this array and put back the zero voxels I removed, so that it has the shape (128, 128, 128, 300).
This is what I tried, but it resulted in a weird looking image when visualized.
zero_array = np.zeros((128*128*128 * 300))
zero_array[:len(result.reshape(-1))]=result.reshape(-1)
Any help would be much appreciated.
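One possible approach is to scatter the rows back with boolean indexing instead of writing into a flat zero array; a minimal sketch with dummy stand-ins for mask and result (the shapes match the ones described above):
import numpy as np
mask = np.random.rand(128, 128, 128) > 0.5     # boolean mask of the kept voxels
result = np.random.rand(int(mask.sum()), 300)  # (n_voxels_flattened, 300)
restored = np.zeros(mask.shape + (300,), dtype=result.dtype)
restored[mask] = result                        # each row goes back to its voxel position
print(restored.shape)                          # (128, 128, 128, 300)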

Convert 3 Dimensional Numpy Array to 4 Dimensional Numpy Array

I want to make a simple program which outputs a video as a webcam, but the cam wants an RGBA numpy array and I only have RGB from the video. How can I convert the 3 dimensional array to 4 dimensions?
You're actually not converting a 3-dimensional array to a 4-dimensional array. You're changing the size of one of the dimensions from three to four.
Let's say you have an NxMx3 image. You then need to:
temp = np.zeros((N, M, 4))
temp[:, :, 0:3] = image
temp[:, :, 3] = 1.0  # or whatever default alpha you choose to use
Generalize as you see fit.
Assuming your existing array is shaped (xsize, ysize, 3) and you want to create alpha as a 4th entry all filled with 1, you should be able to do something like
alpha = np.ones((*rgb.shape[0:2], 1))
rgba = np.concatenate((rgb, alpha), axis=2)
If you wanted a different uniform alpha value you could use np.full with that value instead of np.ones, but normally when converting RGB to RGBA you want fully opaque.
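For instance, a sketch of a uniform alpha of 128 for uint8 images via np.full (the value 128 and the dummy rgb array are just for illustration):
import numpy as np
rgb = np.zeros((4, 4, 3), dtype=np.uint8)                   # dummy RGB image
alpha = np.full((*rgb.shape[:2], 1), 128, dtype=rgb.dtype)  # constant alpha channel
rgba = np.concatenate((rgb, alpha), axis=2)                 # shape (4, 4, 4)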
You can np.dstack your original im with np.ones(im.shape[:2])
new_im = np.dstack((im, np.ones(im.shape[:2])))
Update: this is equivalent to @hobbs' solution using np.concatenate(..., axis=2).
Maybe try something like this (with import numpy as np):
arr # shape (n_bands, y_pixels, x_pixels)
swapped = np.moveaxis(arr, 0, 2) # shape (y_pixels, x_pixels, n_bands)
arr4d = np.expand_dims(swapped, 0) # shape (1, y_pixels, x_pixels, n_bands)

How to slice ndarray after hstacking, back to original pieces

Hi, I would like to recover the two pieces of a composite numpy array that was made by stacking two smaller arrays. I need the slicing for each piece, if you could help me.
I have two ndarrays that I hstacked onto each other:
frame = np.hstack([thought1, pix])
The shape of both thought1 and pix is (1080, 1920, 3). After stacking, the shape of frame is (1080, 3840, 3). I want to recover thought1 and pix from frame by slicing it.
The 'inverse' of np.hstack would be np.hsplit
thought1, pix = np.hsplit(np.hstack([thought1,pix]), [thought1.shape[1]])
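Since the split point is known (1920 columns each), plain slicing along axis 1 recovers the same halves; a minimal sketch with dummy arrays:
import numpy as np
thought1 = np.zeros((1080, 1920, 3), dtype=np.uint8)   # dummy stand-ins
pix = np.ones((1080, 1920, 3), dtype=np.uint8)
frame = np.hstack([thought1, pix])                     # (1080, 3840, 3)
left, right = np.hsplit(frame, [thought1.shape[1]])    # as in the answer above
left2, right2 = frame[:, :1920], frame[:, 1920:]       # equivalent explicit slices
print(left.shape, right.shape)                         # (1080, 1920, 3) (1080, 1920, 3)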

ImageDataGenerator: how to add the 4th dimension to a numpy array?

I have the following code that reads an image with opencv and displays it:
import cv2, matplotlib.pyplot as plt
img = cv2.imread('imgs_soccer/soccer_10.jpg',cv2.IMREAD_COLOR)
img = cv2.resize(img, (128, 128))
plt.imshow(img)
plt.show()
I want to generate some random images by using keras so I define this generator:
from keras.preprocessing.image import ImageDataGenerator
image_gen = ImageDataGenerator(rotation_range=15,
                               width_shift_range=0.1,
                               height_shift_range=0.1,
                               shear_range=0.01,
                               zoom_range=[0.9, 1.25],
                               horizontal_flip=True,
                               vertical_flip=False,
                               fill_mode='reflect',
                               data_format='channels_last',
                               brightness_range=[0.5, 1.5])
but, when I use it in this way:
image_gen.flow(img)
I get this error:
'Input data in `NumpyArrayIterator` should have rank 4. You passed an array with shape', (128, 128, 3))
And it seems obvious to me: an RGB image, of course it is 3-dimensional!
What am I missing here?
The documentation says that it wants a 4-dim array, but does not specify what I should put in the 4th dimension.
And how should this 4-dim array be made? For now I have (width, height, channels); does the 4th dimension go at the start or at the end?
I am also not very familiar with numpy: how can I alter the existing img array to add a 4th dimension?
Use np.expand_dims():
import numpy as np
img = np.expand_dims(img, 0)
print(img.shape) # (1, 128, 128, 3)
The first dimension specifies the number of images (in your case 1 image).
Alternatively, you can use numpy.newaxis or None for promoting your 3D array to 4D as in:
img = img[np.newaxis, ...]
# or use None
img = img[None, ...]
The first dimension is usually the batch_size. This gives you a lot of flexibility when you want to fully utilize modern hardware such as GPUs, as long as your tensor fits in the GPU memory. For example, you can pass 64 images by stacking them along the first dimension, in which case your 4D array would be of shape (64, width, height, channels).
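Putting it together, a sketch of feeding the expanded array to the generator defined in the question (reusing img, image_gen, plt and np from the snippets above; flow() yields batches endlessly, so stop after one):
batch = np.expand_dims(img, 0)                            # (1, 128, 128, 3)
for augmented in image_gen.flow(batch, batch_size=1):
    out = np.clip(augmented[0], 0, 255).astype(np.uint8)  # keep values displayable
    plt.imshow(out)
    plt.show()
    break                                                 # flow() loops forever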

Use PIL to convert a grayscale image to a (1, H, W) numpy array

Using PIL to convert an RGB image to a (H, W, 3) numpy array is very fast.
im = np.array(PIL.Image.open(path))
However, I cannot find a fast way to convert a grayscale image to a (1, H, W) array. I tried two approaches, but they are both much slower than the above:
im = np.array(PIL.Image.open(path)) # returns an (H, W) array
im = np.expand_dims(im, axis=0)
im = im.astype(int)
This approach is slow too:
img = PIL.Image.open(path)
im = np.array(img.getdata()).reshape(img.size[1], img.size[0], 1)
Please advise.
You can use np.asarray() to get the array view, then append a new axis with None/np.newaxis, and then use type conversion with copy set to False (in case you were converting from the same dtype, to save on memory) -
im = np.asarray(PIL.Image.open(path))
im_out = im[None].astype(dtype=int, copy=False)
This appends the new axis at the start, resulting in (1, H, W) as the output array shape. To append it at the end instead, for an array shape of (H, W, 1), use im[..., None] instead of im[None].
A simpler way would be -
im_out = np.asarray(img, dtype=int)[None]
If the input is already in uint8 dtype and we want an output array of the same dtype, use dtype=np.uint8 and that should be pretty fast.
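A minimal sketch of that uint8 round trip, assuming a grayscale file at a hypothetical path 'gray.png':
import numpy as np
from PIL import Image
im = np.asarray(Image.open('gray.png'))          # (H, W) view, typically uint8
im_out = im[None].astype(np.uint8, copy=False)   # (1, H, W), no copy if already uint8
print(im_out.shape)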
