I have a numpy matrix of images with shape (50, 17500). Each image has shape (50, 50), so my matrix is like a long row of 350 grayscale images.
I want to use plt.imshow() to display those images. How do I change the dimensions of the concatenated image matrix? Say, reshaping the array to shape (1750, 500), which would be 35 rows and 10 columns of images.
Some posts suggest using np.reshape(), but if I use my_array.reshape((1750, 500)) the individual images in the new matrix are broken.
My question is: how do I reshape while preserving each individual (50, 50) image?
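A minimal sketch of one way to do this, assuming the 350 images sit side by side along axis 1: split out an image axis first, then interleave the grid and pixel axes before the final reshape (a plain reshape((1750, 500)) mixes pixels from different images, which is why the result looks broken).
import numpy as np

# stand-in for the real data: 350 images of 50x50, concatenated horizontally
arr = np.zeros((50, 17500))

images = arr.reshape(50, 350, 50).transpose(1, 0, 2)   # (350, 50, 50) image stack
grid = (images.reshape(35, 10, 50, 50)                 # (grid_row, grid_col, h, w)
              .swapaxes(1, 2)                          # (grid_row, h, grid_col, w)
              .reshape(35 * 50, 10 * 50))              # (1750, 500)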
I created a mask which has the shape (128, 128, 128). I then took np.sum on that mask, n_voxels_flattened = np.sum(mask) (I removed the zero voxels so that I could do the transformation on the non-zero voxels only), where n_voxels_flattened = 962517. I then iterated over all 300 images, which resulted in an array of shape (962517, 300). I made some adjustments to this array, and the output has the same shape as the input: (962517, 300). I now want to reshape this array, putting back the zero voxels I removed, so that it has the shape (128, 128, 128, 300).
This is what I tried, but it resulted in a weird-looking image when visualized.
zero_array = np.zeros((128*128*128 * 300))
zero_array[:len(result.reshape(-1))]=result.reshape(-1)
Any help would be much appreciated.
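A sketch of one way to scatter the values back, assuming mask is the original (128, 128, 128) binary mask and the rows of result are in the same C-order in which the voxels were extracted. Boolean indexing restores each row of 300 values to its voxel and leaves zeros elsewhere; filling the flat array from the front, as above, ignores where the kept voxels actually live.
import numpy as np

mask_bool = mask.astype(bool)                      # True where voxels were kept
restored = np.zeros(mask.shape + (300,), dtype=result.dtype)
restored[mask_bool] = result                       # one row of 300 values per kept voxel
# restored now has shape (128, 128, 128, 300), zero wherever the mask was zero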
I have a dataset which comprises the binary data of pixelated 50x50 images. The array shape is (50, 50, 90245). I want to access the 50x50 pixels of each of the 90245 images. How can I slice the array?
If data is the variable storing the image data, and i is the index of the image you want to access, then you can do:
data[:,:,i]
to get the desired image data.
If data is the variable storing the image data, and i is the index of the image you want to access, then you can do as @BrokenBenchmark suggested. In case you want a (50, 50, 1) 3D array as the output, you could do:
data[:,:,i:i+1]
to get the image as a 3D array.
Edit1: If you reshaped your data matrix to be of shape (90245,50,50), you can get the ith image by doing data[i,:,:] or just data[i] to get a (50,50) image. Similarly, to get a (1,50,50) image, you could do data[i:i+1,:,:] or just data[i:i+1].
Edit2: To rearrange the axes like that, you could use the swapaxes() function in numpy rather than reshape().
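A small sketch of that reordering; note that swapaxes(0, 2) also transposes each individual image, while np.moveaxis keeps the row/column orientation intact:
import numpy as np

data = np.zeros((50, 50, 90245))          # stand-in for the image data

stacked_t = data.swapaxes(0, 2)           # (90245, 50, 50), each image transposed
stacked = np.moveaxis(data, 2, 0)         # (90245, 50, 50), orientation preserved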
Hi, I would like to recover the two pieces of a composite numpy array that was made by stacking two smaller arrays. I need the slicing for each piece, if you could help me.
I have two ndarrays that I hstacked onto each other:
frame = np.hstack([thought1, pix])
The shape of both thought1 and pix is (1080, 1920, 3).
After stacking, the shape of frame is (1080, 3840, 3).
I want to recover thought1 and pix from frame by slicing it.
The 'inverse' of np.hstack would be np.hsplit
thought1, pix = np.hsplit(np.hstack([thought1,pix]), [thought1.shape[1]])
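If you want the slicing itself, the equivalent recovery is to cut frame along axis 1 at the width of the first block (a small sketch):
import numpy as np

thought1 = np.zeros((1080, 1920, 3))      # stand-ins for the real frames
pix = np.ones((1080, 1920, 3))
frame = np.hstack([thought1, pix])        # (1080, 3840, 3)

w = thought1.shape[1]                     # 1920, width of the first block
thought1_rec = frame[:, :w]               # (1080, 1920, 3)
pix_rec = frame[:, w:]                    # (1080, 1920, 3)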
I have a greyscale image, represented by a 2D array of integers, shape (1000, 1000).
I then use sklearn.feature_extraction.image.extract_patches_2d() to generate an array of 3x3 'patches' from this image, resulting in an array of shape (1000000, 3, 3): one 3x3 patch for each of the 1 million pixels in the original image.
I reshape this to (1000, 1000, 3, 3), which is a 1000x1000 array of 3x3 arrays, one 3x3 array for each pixel in the original image.
I now want to effectively subtract the 2D array from the 4D array. I have already found a method that does this, but I would like one that uses vectorisation.
I currently iterate through each pixel and subtract the value there from the 3x3 array at the same index. This is a little bit slow.
This is what currently loads the image, formats the arrays beforehand, and then performs the subtraction.
from PIL import Image, ImageOps
from skimage import io
from sklearn.feature_extraction import image
import numpy
jitter = 1
patchsize = (jitter*2)+1
#load image as greyscale image using PIL
original = load_image_greyscale(filename)
#create a padded version of the image so that 1000x1000 patches are made
#instead of 998x998
padded = numpy.asarray(ImageOps.expand(original,jitter))
#extract these 3x3 patches using sklearn
patches = image.extract_patches_2d(padded,(patchsize,patchsize))
#convert image to numpy array
pixel_array = numpy.asarray(original)
#then reshape the array of patches so it matches array_image
patch_array = numpy.reshape(patches, (pixel_array.shape[0],pixel_array.shape[1],patchsize,patchsize))
#create a copy for results
patch_array_copy = numpy.copy(patch_array)
#iterate over each 3x3 array in the patch array and subtract the pixel value
#at the same index in the pixel array
for x in range(pixel_array.shape[0]):
    for y in range(pixel_array.shape[1]):
        patch_array_copy[x,y] = patch_array[x,y] - pixel_array[x,y]
I would like a way to perform the final step in the for loop using matrix operations.
I would also like to extend this at some point to work with RGB images, effectively making it a subtraction of an array with shape (1000, 1000, 3) from an array with shape (1000, 1000, 3, 3, 3). But I'm trying to go one step at a time here.
Any help or tips or suggestions or links to helpful resources would be greatly appreciated.
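Broadcasting can replace the loop: adding two trailing singleton axes to pixel_array lets numpy pair each pixel value with its whole 3x3 patch. A sketch, assuming the shapes from the question:
import numpy as np

pixel_array = np.zeros((1000, 1000))          # stand-in for the greyscale image
patch_array = np.zeros((1000, 1000, 3, 3))    # stand-in for the reshaped patches

# (1000, 1000) -> (1000, 1000, 1, 1) broadcasts against every 3x3 patch
patch_array_copy = patch_array - pixel_array[:, :, None, None]

# For the RGB case, if the channel axis comes last in both arrays,
# the analogous form would be pixel_array[:, :, None, None, :]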
I have an issue with expanding the images in the sklearn digits dataset from 8*8 to 32*32 pixels.
My approach is to take the 8*8 array and then flatten and expand it, that is, enlarge it from 64 to 1024 pixels in total. Therefore I simply want to replicate each value 16 times along the flattened row:
1. Create a new array (newfeatures) with 1024 NaN values.
2. Replace every 16th value of the newfeatures array with the values of the original array, that is (0=0), (16=1), (32=2), ..., (1008=63).
3. Replace the remaining NaN values with fillna(method="ffill") to "expand" the original image to a 32*32 pixel image.
Therefore I use the following code:
#Load in the training dataset
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets

digits = datasets.load_digits()
features = digits.data
targets = digits.target

fig, ax = plt.subplots(1, 2)

#Plot original digit
ax[0].imshow(features[0].reshape((8,8)))

#Expand 8*8 image to a 32*32 image (64 to 1024)
newfeatures = np.ndarray((1797, 16*len(features[0])))
newfeatures[:] = np.NaN
newfeatures = pd.DataFrame(newfeatures)
for row in range(1797):
    for i in range(64):   # range(0, 63) would skip the last pixel
        newfeatures.iloc[row, 16*i] = features[row][i]
newfeatures.fillna(method="ffill", axis=1, inplace=True)

#Plot expanded image with 32*32 pixels
ax[1].imshow(newfeatures.values[0].reshape((32,32)))
As you can see, the result is not as expected: forward-filling the flattened row turns each original pixel into a run of 16 consecutive values, which wraps across the rows of the 32*32 image instead of forming a 4x4 block.
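For reference, a sketch of the expansion the steps above describe, done directly on the 2D image with numpy's repeat (reusing features from the code above; each pixel becomes a 4x4 block):
img = features[0].reshape(8, 8)                  # one 8x8 digit
big = img.repeat(4, axis=0).repeat(4, axis=1)    # (32, 32), each pixel -> 4x4 block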
You can use skimage's resize, as shown below:
import numpy as np
from skimage import transform

new_features = np.array(list(map(
    lambda img: transform.resize(
        img.reshape(8, 8),     # old shape
        (32, 32),              # new shape
        mode='constant',
        preserve_range=True
    ).ravel(),                 # flatten the resized image back to 1D
    features)))
new_features shape will be (1797, 1024), and displaying the first image will show the upscaled digit.
Based on the above solution, I think the following is a slightly neater way:
import matplotlib.pyplot as plt
from skimage import transform

newfeatures = [transform.resize(features[i].reshape(8, 8), (32, 32))
               for i in range(len(features))]
plt.imshow(newfeatures[0].reshape((32, 32)))