I've got a NumPy array called X_train with the following properties:
X_train.shape = (139,)
X_train[0].shape = (210, 224, 3)
X_train[1].shape = (220, 180, 3)
In other words, there are 139 observations. Each image has a different width and height, but they all have 3 channels. So the dimension should be (139, None, None, 3) where None = variable.
Since you don't include the dimension for the number of observations in the layer, for the Conv2D layer I used input_shape=(None,None,3). But that gives me the error:
expected conv2d_1_input to have 4 dimensions, but got array with shape (139, 1)
My guess is that the problem is that the input shape is (139,) instead of (139, None, None, 3), but I'm not sure how to convert it.
One possible solution to your problem is to pad the arrays with zeros so that they all have the same size. Afterwards, your input shape will be (139, max_x_dimension, max_y_dimension, 3).
The following functions will do the job:
import numpy as np

def fillwithzeros(inputarray, outputshape):
    """
    Zero-pads every array in 'inputarray' (a numpy array with dtype 'object')
    so that they all have the shape 'outputshape'.
    inputarray: input numpy array
    outputshape: max dimensions in inputarray (obtained with the function 'findmaxshape')
    output: inputarray padded with zeros
    """
    length = len(inputarray)
    output = np.zeros((length,) + outputshape, dtype=np.uint8)
    for i in range(length):
        output[i][:inputarray[i].shape[0], :inputarray[i].shape[1], :] = inputarray[i]
    return output
def findmaxshape(inputarray):
    """
    Finds the maximum x, y and z in an inputarray with dtype 'object' and 3 dimensions
    inputarray: input numpy array
    output: detected maximum shape
    """
    max_x, max_y, max_z = 0, 0, 0
    for array in inputarray:
        x, y, z = array.shape
        if x > max_x:
            max_x = x
        if y > max_y:
            max_y = y
        if z > max_z:
            max_z = z
    return (max_x, max_y, max_z)
# Create random data similar to your data
random_data1 = np.random.randint(0, 255, 210*224*3).reshape((210, 224, 3))
random_data2 = np.random.randint(0, 255, 220*180*3).reshape((220, 180, 3))
# dtype=object is needed so numpy builds a ragged (139,)-style array
# instead of trying (and, on recent numpy, failing) to broadcast the shapes.
X_train = np.array([random_data1, random_data2], dtype=object)

# Convert X_train so that all images have the same shape
new_shape = findmaxshape(X_train)
new_X_train = fillwithzeros(X_train, new_shape)
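Once the images are padded to a common shape, the Conv2D input_shape becomes fixed instead of (None, None, 3). A minimal sketch of feeding the padded data into a model (the Keras import and layer sizes are assumptions, not from the question):

from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential()
# input_shape is now (max_x_dimension, max_y_dimension, 3) instead of (None, None, 3)
model.add(Conv2D(32, (3, 3), activation='relu',
                 input_shape=new_X_train.shape[1:]))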
I would like to create a numpy array by concatenating two or more numpy arrays with shape (1, x, 1) where x is variable.
Here is the problem in detail.
x1 = ...  # numpy array with shape (x,)
x2 = ...  # numpy array with shape (y,)
#create batch
x1 = np.expand_dims(x1, 0) #shape (1, x)
x2 = np.expand_dims(x2, 0) #shape (1, y)
#add channel dimension
x1 = np.expand_dims(x1, -1) #shape (1, x, 1)
x2 = np.expand_dims(x2, -1) #shape (1, y, 1)
#merge the two arrays
x = np.concatenate((x1, x2), axis=0)
#expected shape (2, ??, 1)
Note the expected shape (2, ??, 1). I am wondering if what I am trying to do is doable.
Executing this code raises a ValueError:
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 138241 and the array at index 1 has size 104321
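The arrays indeed can't be concatenated while their second dimensions differ. One way to make the expected (2, ??, 1) shape concrete is the same zero-padding idea as in the first answer above: pad the shorter array up to the longer one's length. A minimal sketch, with the sizes taken from the error message:

import numpy as np

x1 = np.random.rand(1, 138241, 1)
x2 = np.random.rand(1, 104321, 1)

# Zero-pad the variable axis of each array up to the longest length.
target = max(x1.shape[1], x2.shape[1])
x1 = np.pad(x1, ((0, 0), (0, target - x1.shape[1]), (0, 0)), mode='constant')
x2 = np.pad(x2, ((0, 0), (0, target - x2.shape[1]), (0, 0)), mode='constant')

x = np.concatenate((x1, x2), axis=0)
print(x.shape)  # (2, 138241, 1)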
I've got this error:
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 1 and the array at index 1 has size 2
The error occurs when I call GPR's fit method, so the concatenation must be happening inside it. But I have tried it with x and y of the same shape:
x.shape = y.shape = (# of rows,)
This is my code:
def interpolate_(x, y, x_fine):
    print('shape:', x.shape, y.shape)
    gp = GaussianProcessRegressor(kernel=get_combined_kernel())
    gp.fit(np.atleast_2d(x).T, y)
    y_mean, y_std = gp.predict(x_fine[:, None], return_std=True)
    return y_mean
Output of print('shape:', x.shape, y.shape):
Let's assume we have a tensor representing an image of shape (920, 270, 1), which assigns a number (some index) to each pixel, with width=920 and height=270.
We also have a numpy array of size (N, 3) which maps a 3-tuple to an index.
I now want to create a new numpy array of shape (920, 270, 3) which holds a 3-tuple for each pixel, based on the original tensor's index and the index-to-3-tuple mapping array. How do I do this assignment without for loops and other time-consuming iterations?
This would look something like:
color_image = np.zeros((self._w, self._h, 3), dtype=np.int32)
self._colors = ...   # numpy array of shape (N, 3), already present
indexed_image = ...  # torch tensor of shape (920, 270, 1), already present
# How do I assign it to this numpy array?
color_image[indexed_image.w, indexed_image.h] = self._colors[indexed_image.flatten()]
Assuming you have _colors and indexed_image, something that resembles:
>>> N = 10
>>> indexed_image = torch.randint(0, N, (920, 270, 1))
>>> _colors = np.random.randint(0, 255, (N, 3))
A common way of converting a dense map to an RGB map is to loop over the label set:
>>> _colors = torch.FloatTensor(_colors)
>>> rgb = torch.zeros(indexed_image.shape[:-1] + (3,))
>>> for lbl in range(N):
... rgb[lbl == indexed_image[...,0]] = _colors[lbl]
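Since the question asks to avoid loops: the same mapping can also be done in one vectorized step, because indexing the (N, 3) color table with an integer tensor of shape (920, 270) broadcasts the lookup to shape (920, 270, 3). A sketch under the same assumptions as the snippet above:
>>> rgb = _colors[indexed_image[..., 0]]  # shape (920, 270, 3)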
I have a numpy array image_stack of size 64x28x28x3, which corresponds to 64 images of size 28x28x3. What I want is to construct an image of size 224x224x3 which contains all the images from the initial array. How can I do this in numpy? So far I have code that stacks the images in a single row, but I want 8 rows of 8 columns instead. My code so far:
def tile_images(image_stack):
    """Given a stacked tensor of images, reshapes them into a horizontal tiling for display."""
    assert len(image_stack.shape) == 4
    image_list = [image_stack[i, :, :, :] for i in range(image_stack.shape[0])]
    tiled_images = np.concatenate(image_list, axis=1)
    return tiled_images
Does the following reshape, transpose, reshape trick work?
x.shape # (64, 28, 28, 3)
mosaic = x.reshape(8, 8, 28, 28, 3).transpose((0, 2, 1, 3, 4)).reshape(224, 224, 3)
The first reshape splits your 64 images into grid rows and columns; the transpose rearranges the axes so the final reshape collapses them in a meaningful way.
Your function would then look like:
def tile_images(x):
    dims = x.shape
    assert len(dims) == 4
    stack_dim = int(np.sqrt(dims[0]))
    res = x.reshape(stack_dim, stack_dim, *dims[1:]).transpose((0, 2, 1, 3, 4))
    # res now has shape (stack_dim, H, stack_dim, W, C); collapse the grid axes.
    return res.reshape(stack_dim * dims[1], stack_dim * dims[2], dims[3])
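A quick sanity check of the helper on random data shaped like the question's stack:
x = np.random.rand(64, 28, 28, 3)
print(tile_images(x).shape)  # (224, 224, 3)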
Given
batch_images: 4D tensor of shape (B, H, W, C)
x: 3D tensor of shape (B, H, W)
y: 3D tensor of shape (B, H, W)
Goal
How can I index into batch_images using the x and y coordinates to obtain a 4D tensor of shape (B, H, W, C)? That is, I want to obtain, for each batch and for each pair (x, y), a tensor of shape (C,).
In numpy, this would be achieved using input_img[np.arange(B)[:,None,None], y, x] for example but I can't seem to make it work in tensorflow.
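As a minimal check of that numpy expression (the sizes here are made up): np.arange(B)[:, None, None] broadcasts against the (B, H, W) coordinate arrays, so the result has shape (B, H, W, C).
import numpy as np
B, H, W, C = 3, 4, 5, 6
input_img = np.random.rand(B, H, W, C)
x = np.random.randint(0, W, (B, H, W))
y = np.random.randint(0, H, (B, H, W))
print(input_img[np.arange(B)[:, None, None], y, x].shape)  # (3, 4, 5, 6)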
My attempt so far
def get_pixel_value(img, x, y):
    """
    Utility function to get pixel value for
    coordinate vectors x and y from a 4D tensor image.
    """
    H = tf.shape(img)[1]
    W = tf.shape(img)[2]
    C = tf.shape(img)[3]
    # flatten image
    img_flat = tf.reshape(img, [-1, C])
    # flatten idx
    idx_flat = (x*W) + y
    return tf.gather(img_flat, idx_flat)
which is returning an incorrect tensor of shape (B, H, W).
It should be possible to do it by flattening the tensor as you've done, but the batch dimension has to be taken into account in the index calculation.
In order to do this, you'll have to make an additional dummy batch index tensor with the same shape as x and y that always contains the index of the current batch.
This is basically the np.arange(B) from your numpy example, which is missing from your TensorFlow code.
You can also simplify things a bit by using tf.gather_nd, which does the index calculations for you.
Here's an example:
import numpy as np
import tensorflow as tf

# Example tensors
M = np.random.uniform(size=(3, 4, 5, 6))
# Cast the coordinates to int32 so they match the dtype of tf.range below.
x = np.random.randint(0, 5, size=(3, 4, 5)).astype(np.int32)
y = np.random.randint(0, 4, size=(3, 4, 5)).astype(np.int32)
def get_pixel_value(img, x, y):
    """
    Utility function that composes a new image, with pixels taken
    from the coordinates given in x and y.
    The shapes of x and y have to match.
    The batch order is preserved.
    """
    # We assume that x and y have the same shape.
    shape = tf.shape(x)
    batch_size = shape[0]
    height = shape[1]
    width = shape[2]
    # Create a tensor that indexes into the same batch.
    # This is needed for gather_nd to work.
    batch_idx = tf.range(0, batch_size)
    batch_idx = tf.reshape(batch_idx, (batch_size, 1, 1))
    b = tf.tile(batch_idx, (1, height, width))
    # tf.pack was renamed to tf.stack in TensorFlow 1.0.
    indices = tf.stack([b, y, x], 3)
    return tf.gather_nd(img, indices)
s = tf.Session()
print(s.run(get_pixel_value(M, x, y)).shape)
# Should print (3, 4, 5, 6).
# We've composed a new image of the same size from randomly picked x and y
# coordinates of each original image.
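On TensorFlow 2.x (where tf.pack is gone and sessions are no longer used), the dummy batch index can be avoided entirely, since tf.gather_nd accepts a batch_dims argument; a sketch with the same tensors as above:
indices = tf.stack([y, x], axis=-1)           # shape (B, H, W, 2)
out = tf.gather_nd(M, indices, batch_dims=1)  # shape (B, H, W, C)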