How to slice ndarray after hstacking, back to original pieces - python

Hi, I would like to recover the two pieces of a composite NumPy array that was made by stacking two smaller arrays. I need the slicing for each piece; I would appreciate your help.
I have two ndarrays that I hstacked onto each other:
frame = np.hstack([thought1,pix])
The shape of both thought1 and pix is (1080, 1920, 3).
After stacking, the shape of frame is (1080, 3840, 3).
I want to recover thought1 and pix from frame by slicing it:
frame = np.hstack([thought1,pix])

The 'inverse' of np.hstack would be np.hsplit:
thought1, pix = np.hsplit(np.hstack([thought1,pix]), [thought1.shape[1]])
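Equivalently, since np.hstack concatenates along axis 1, plain slicing on that axis recovers both halves. A minimal sketch, using placeholder arrays with the shapes from the question:

```python
import numpy as np

# placeholder frames mirroring the shapes in the question
thought1 = np.zeros((1080, 1920, 3), dtype=np.uint8)
pix = np.ones((1080, 1920, 3), dtype=np.uint8)
frame = np.hstack([thought1, pix])     # shape (1080, 3840, 3)

# hstack concatenates along axis 1, so slicing axis 1 undoes it
split = thought1.shape[1]              # 1920
left = frame[:, :split]                # recovers thought1
right = frame[:, split:]               # recovers pix
```

Slicing returns views into frame rather than copies, so this costs no extra memory unless you modify the results.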

Related

Convert 3 Dimensional Numpy Array to 4 Dimensional Numpy Array

I want to make a simple program which outputs a video as a webcam, but the cam wants an RGBA NumPy array and I only have RGB from the video. How can I convert the 3-dimensional array to 4 dimensions?
You're actually not converting a 3-dimensional array to a 4-dimensional array. You're changing the size of one of the dimensions from three to four.
Let's say you have an NxMx3 image. You then need to:
temp = np.zeros((N, M, 4))
temp[:,:,0:3] = image
temp[:,:,3] = 1.0  # or whatever default alpha you choose to use
Generalize as you see fit.
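A runnable sketch of this approach on a small placeholder image (the shape and the alpha value of 1.0 are assumptions for illustration):

```python
import numpy as np

N, M = 4, 5
image = np.random.rand(N, M, 3)   # placeholder NxMx3 RGB image

# allocate an NxMx4 array, copy the RGB channels, fill the alpha channel
temp = np.zeros((N, M, 4))
temp[:, :, 0:3] = image
temp[:, :, 3] = 1.0               # fully opaque alpha
```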
Assuming your existing array is shaped (xsize, ysize, 3) and you want to create alpha as a 4th entry all filled with 1, you should be able to do something like
alpha = np.ones((*rgb.shape[0:2], 1))
rgba = np.concatenate((rgb, alpha), axis=2)
If you wanted a different uniform alpha value you could use np.full with that value instead of np.ones, but normally when converting RGB to RGBA you want fully opaque.
You can np.dstack your original im with np.ones(im.shape[:2])
new_im = np.dstack((im, np.ones(im.shape[:2])))
Update: this is equivalent to @hobbs' solution np.concatenate(..., axis=2).
Maybe try something like this (with import numpy as np):
arr # shape (n_bands, y_pixels, x_pixels)
swapped = np.moveaxis(arr, 0, 2) # shape (y_pixels, x_pixels, n_bands)
arr4d = np.expand_dims(swapped, 0) # shape (1, y_pixels, x_pixels, n_bands)

Subtract 2D array from 4D array

I have a greyscale image, represented by a 2D array of integers, shape (1000, 1000).
I then use sklearn.feature_extraction.image.extract_patches_2d() to generate an array of 3x3 'patches' from this image, resulting in an array of shape (1000000, 3, 3): one million 3x3 arrays, one for each pixel in the original image.
I reshape this to (1000, 1000, 3, 3), which is a 1000x1000 array of 3x3 arrays, one 3x3 array for each pixel in the original image.
I now want to effectively subtract the 2D array from the 4D array. I have already found a method to do this, but I would like one that uses vectorisation.
I currently iterate through each pixel and subtract the value there from the 3x3 array at the same index. This is a little bit slow.
This is the code that currently loads the image, formats the arrays beforehand, and then performs the subtraction:
from PIL import Image, ImageOps
from skimage import io
from sklearn.feature_extraction import image
import numpy
jitter = 1
patchsize = (jitter*2)+1
#load image as greyscale image using PIL
original = load_image_greyscale(filename)
#create a padded version of the image so that 1000x1000 patches are made
#instead of 998x998
padded = numpy.asarray(ImageOps.expand(original,jitter))
#extract these 3x3 patches using sklearn
patches = image.extract_patches_2d(padded,(patchsize,patchsize))
#convert image to numpy array
pixel_array = numpy.asarray(original)
#then reshape the array of patches so it matches array_image
patch_array = numpy.reshape(patches, (pixel_array.shape[0],pixel_array.shape[1],patchsize,patchsize))
#create a copy for results
patch_array_copy = numpy.copy(patch_array)
#iterate over each 3x3 array in the patch array and subtract the pixel value
#at the same index in the pixel array
for x in range(pixel_array.shape[0]):
    for y in range(pixel_array.shape[1]):
        patch_array_copy[x,y] = patch_array[x,y] - pixel_array[x,y]
I would like a way to perform the final step in the for loop using matrix operations.
I would also like to extend this at some point to work with RGB images, effectively making it a subtraction of an array with shape(1000,1000,3) from an array with shape(1000,1000,3,3,3). But i'm trying to go one step at a time here.
Any help or tips or suggestions or links to helpful resources would be greatly appreciated.
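One way to vectorise the final step is broadcasting: appending two singleton axes to the 2D array makes its shape compatible with the 4D patch array, so a single subtraction replaces the loop. A sketch on small stand-in arrays (the question's arrays are (1000, 1000) and (1000, 1000, 3, 3); smaller shapes are used here for brevity):

```python
import numpy as np

# small stand-ins for the (1000, 1000) image and (1000, 1000, 3, 3) patch array
pixel_array = np.arange(16, dtype=np.int64).reshape(4, 4)
patch_array = np.arange(16 * 9, dtype=np.int64).reshape(4, 4, 3, 3)

# append two singleton axes so (4, 4) broadcasts against (4, 4, 3, 3):
# each pixel value is subtracted from its entire 3x3 patch at once
patch_array_copy = patch_array - pixel_array[:, :, None, None]
```

The same idea should extend to the RGB case: pixel_array[:, :, :, None, None] gives shape (1000, 1000, 3, 1, 1), which broadcasts against (1000, 1000, 3, 3, 3).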

Adding an array to a dimension of an array

I have 2 numpy arrays img and mask that I want to combine into a single array. The shapes of the arrays are as follows:
img.shape = (512, 366, 3) and mask.shape = (512, 366). I want the final array to have a shape of (512, 366, 4), such that the mask array occupies the 4th channel.
What's the best way to achieve this, please?
The suggestion from Julien works:
from numpy import dstack
new_image = dstack((img, mask))
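A self-contained sketch confirming the resulting shape (placeholder arrays stand in for img and mask):

```python
import numpy as np

img = np.zeros((512, 366, 3))   # placeholder RGB image
mask = np.ones((512, 366))      # placeholder mask

# dstack appends mask as a 4th channel along the last axis
new_image = np.dstack((img, mask))   # shape (512, 366, 4)
```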

How to reshape an matrix of grayscale images in Numpy?

I have a numpy matrix of images with shape (50, 17500). Each image has shape (50, 50), so my matrix is like a long row of 350 grayscale images.
I want to use plt.imshow() to display those images. How do I change the dimension of the concatenated images matrix? Say reshaping the array to shape (1750, 500), which is 35 rows and 10 columns of images.
Some posts suggest to use np.reshape(), but if I use my_array.reshape((1750, 500)) the individual image in the new matrix is broken.
My question is how to reshape while preserving each individual (50,50) image?
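One way to preserve each (50, 50) image is to split the long axis into (grid row, grid column, x), move the grid-row axis to the front, and then collapse back to 2D. A sketch under the shapes stated above, with each image filled with its own index so the layout can be checked:

```python
import numpy as np

# 350 (50, 50) images concatenated horizontally, as in the question
images = np.hstack([np.full((50, 50), i) for i in range(350)])  # (50, 17500)

rows, cols = 35, 10   # desired grid of images
grid = (images
        .reshape(50, rows, cols, 50)     # split axis 1 into (grid row, grid col, x)
        .transpose(1, 0, 2, 3)           # bring the grid-row axis to the front
        .reshape(rows * 50, cols * 50))  # (1750, 500)
```

Image i ends up at grid position (i // 10, i % 10); a plain reshape((1750, 500)) breaks the images because it reorders raw elements without respecting the 50-pixel image boundaries.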

Use PIL to convert a grayscale image to a (1, H, W) numpy array

Using PIL to convert an RGB image to an (H, W, 3) numpy array is very fast.
im = np.array(Image.open(path))
However, I cannot find a fast way to convert a grayscale image to a (H, W, 1) array. I tried two approaches but they are both much slower than above:
im = np.array(Image.open(path)) # returns an (H, W) array
im = np.expand_dims(im, axis=0)
im = im.astype(int)
This approach is slow too:
img = Image.open(path)
im = np.array(img.getdata()).reshape(img.size[1], img.size[0], 1)
Please advise...
You can use np.asarray() to get an array view, then append a new axis with None/np.newaxis, and then do the type conversion with copy set to False (in case you were converting from the same dtype, to save on memory) -
im = np.asarray(PIL.open(path))
im_out = im[None].astype(dtype=int, copy=False)
This appends the new axis at the start, giving an output shape of (1, H, W). To append it at the end instead, for an output shape of (H, W, 1), use im[..., None] instead of im[None].
A simpler way would be -
im_out = np.asarray(img, dtype=int)[None]
If the input is already in uint8 dtype and we want an output array of the same dtype, use dtype=np.uint8 and that should be pretty fast.
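The axis-insertion part of this answer can be sketched with NumPy alone (a small placeholder array stands in for the opened grayscale image, since no image file is assumed here):

```python
import numpy as np

# placeholder for np.asarray(Image.open(path)) on a grayscale image
im = np.arange(12, dtype=np.uint8).reshape(3, 4)    # shape (H, W)

chw = im[None].astype(int, copy=False)              # shape (1, H, W)
hwc = im[..., None].astype(int, copy=False)         # shape (H, W, 1)
```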
