I have a numpy array with a shape similar to (3114, 7, 36, 64, 1). This example is a batch of 3114 sets of 7 images, each image 36 by 64 grayscale pixels. This is for a 3d Convolutional Neural Network, but what I want to do is double the batch size for more data for my model. Basically, I want to take these 3114 sets of 7 and duplicate each one to create a batch of size 6228. I know this is a pretty basic question, but I am not too familiar with numpy arrays. Thanks in advance!
You can use either numpy's np.repeat() or np.concatenate() function for this.
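For example, a minimal sketch (where data stands in for your actual array):

import numpy as np

data = np.zeros((3114, 7, 36, 64, 1))  # stand-in for your actual array

# Option 1: repeat each set of 7 images back-to-back (a, a, b, b, ...)
doubled = np.repeat(data, 2, axis=0)

# Option 2: append a full copy of the batch at the end (a, b, ..., a, b, ...)
doubled = np.concatenate([data, data], axis=0)

print(doubled.shape)  # (6228, 7, 36, 64, 1)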
I am preparing data for a convolutional neural network model. I am new to deep learning and want to start with seeing how LeNet 5 will work with my data (as it has few parameters).
I prepared 9 NumPy arrays; each array has shape (10530, 32, 32, 1), i.e. 10530 images, each 32 x 32 pixels. I want to make one array with shape (10530, 32, 32, 9).
I tried np.concatenate but it is not working. Do you have a suggestion?
np.concatenate is the way:
np.concatenate(list_of_arrays, axis=-1)
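For instance, with placeholder arrays of the shapes you describe (just a sketch, not your actual data):

import numpy as np

# nine single-channel stacks, each of shape (10530, 32, 32, 1); uint8 just to keep the sketch light
list_of_arrays = [np.zeros((10530, 32, 32, 1), dtype=np.uint8) for _ in range(9)]

merged = np.concatenate(list_of_arrays, axis=-1)
print(merged.shape)  # (10530, 32, 32, 9)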
I am using TensorFlow to build a CNN model. In this model I need to concatenate two 4-D tensors: tensor A with shape [16,128,128,3] and tensor B with shape [16,128,128,3] (16 is the batch size, 128 is the image block size, and 3 is the number of channels). The concatenation result should be a tensor C with shape [16,128,128,6].
I know that we could use the 'tf.concat' function to achieve this; however, that function copies tensor A and tensor B and uses a large amount of GPU memory.
How could I achieve the concatenation in TensorFlow without using 'tf.concat'?
Thanks in advance!
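For reference, the tf.concat call described in the question is sketched below with placeholder tensors; it does produce a new [16,128,128,6] tensor, which is the memory cost in question:

import tensorflow as tf

# placeholders standing in for the two feature tensors described above
A = tf.placeholder(tf.float32, shape=[16, 128, 128, 3])
B = tf.placeholder(tf.float32, shape=[16, 128, 128, 3])

# concatenate along the channel axis
C = tf.concat([A, B], axis=-1)
print(C.shape)  # (16, 128, 128, 6)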
So far, I've been practicing neural networks on numerical datasets in pandas, but now I need to create a model that will take an image as input and output a binary mask of that image.
I have my training data as numpy arrays of shape (602, 2048, 2048, 1): 602 images of dimensions 2048x2048 with one channel. The array of output masks has the same dimensions.
What I can't figure out is how to define the first layer or how to correctly feed the data into the model. I would greatly appreciate your help on this issue.
Well, this is not a "rule", but you will probably be mostly using 2D convolutional and related layers.
You feed everything as numpy arrays, as usual, maybe normalizing the values. Common options are:
Between 0 and 1 (just divide by 255.)
Between -1 and 1 (divide by 255., multiply by 2, subtract 1)
Caffe style: subtract from each channel a specific value to "center" the values based on their usual mean without rescaling them.
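As a quick sketch of the first two options (using a small placeholder array rather than your real (602, 2048, 2048, 1) data):

import numpy as np

images = np.random.randint(0, 256, size=(4, 64, 64, 1)).astype(np.uint8)  # small stand-in

x_01 = images.astype(np.float32) / 255.                 # values in [0, 1]
x_pm1 = images.astype(np.float32) / 255. * 2. - 1.      # values in [-1, 1]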
Your model should start with something like:
inputTensor = Input((2048,2048,1))
output = Conv2D(filters, kernel_size, .....)(inputTensor)
Or, in sequential models: model.add(Conv2D(...., input_shape=(2048,2048,1)))
Later, it's up to you to decide which layers to use.
Conv2D
MaxPooling2D
UpSampling2D
Whether you're going to create a linear model or if you're going to divide branches, join branches, etc. is also your call.
Models in a U-Net style should be a good start for you.
What you can't do:
Don't use Flatten layers (actually you can, if you later reshape the output back to image dimensions... but why?)
Don't use Global Pooling layers (you don't want to sacrifice your spatial dimensions)
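Putting those pieces together, a minimal encoder-decoder sketch (not a full U-Net; the filter counts and depth here are just placeholders):

from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model

inputTensor = Input((2048, 2048, 1))

x = Conv2D(16, 3, padding='same', activation='relu')(inputTensor)
x = MaxPooling2D(2)(x)                              # 1024 x 1024
x = Conv2D(32, 3, padding='same', activation='relu')(x)
x = UpSampling2D(2)(x)                              # back to 2048 x 2048
output = Conv2D(1, 1, activation='sigmoid')(x)      # one-channel binary mask

model = Model(inputTensor, output)
model.compile(optimizer='adam', loss='binary_crossentropy')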
I'm trying to concatenate 2 numpy arrays of features predicted by the convolution layers in a VGG16 model.
Basically I have used the bottom layers of a VGG16 model to predict the features for my full dataset, and now I want to load parts of the dataset dynamically based on some settings, to train some models with them.
So, I have 2 array of shape:
(724, 512, 6, 8) and (3376, 512, 6, 8)
Basically the first one contains features predicted from 724 image files (each prediction has shape (512, 6, 8)).
I want to concatenate these 2 arrays into one of shape (4100, 512, 6, 8)
I have tried using:
np.array([np.concatenate(arr, axis=0) for arr in false_train_list])
where false_train_list is the list containing the 2 arrays with the above shapes.
Also tried with np.stack, tf.stack...
All of these result in an array with shape (2,)
Can someone explain why? I haven't found any good resources to understand how exactly np.concatenate() works.
Thank you!
I think you simply need this instead:
np.concatenate(false_train_list, axis=0)
https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.concatenate.html
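To see why your attempt gave shape (2,): inside the list comprehension, np.concatenate(arr, axis=0) treats each 4-D array as a sequence of 3-D arrays and merges them, producing shapes (370688, 6, 8) and (1728512, 6, 8); wrapping those two differently-shaped arrays with np.array then gives a length-2 object array (newer NumPy versions raise an error instead). A small sketch with dummy arrays of the same rank:

import numpy as np

# small stand-ins for the real (724, 512, 6, 8) and (3376, 512, 6, 8) arrays
a = np.zeros((7, 5, 6, 8))
b = np.zeros((33, 5, 6, 8))
false_train_list = [a, b]

merged = np.concatenate(false_train_list, axis=0)
print(merged.shape)  # (40, 5, 6, 8) -- (4100, 512, 6, 8) with the real arrays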
Has anyone tried using sparse tensors for text analysis with TensorFlow with success? Everything is ready and I manage to feed the feed_dict in tf.Session for a softmax layer with numpy arrays, but I am unable to feed the dictionary with SparseTensorValues.
I have not found any documentation about using sparse matrices to train a model (softmax, for example) with TensorFlow either, which is strange, since the SparseTensor and SparseTensorValues classes and the tf.sparse_to_dense method seem ready for it, yet there is no documentation on how to feed such values through the feed_dict dictionary in session.run(fetches, feed_dict=None).
Thanks a lot,
I have found a way of putting sparse images into tensorflow including batch processing if that is of any help.
I create a 4-D sparse matrix in a dictionary where the dimensions are batchSize, xLen, yLen, zLen (where zLen is 3 for colour, for example). The following pseudo code is for a batch of 50 32x96 pixel 3-colour images. Values are the intensity of each pixel. In the snippet below I show the first 2 pixels of the first batch being initialised...
shape = [50, 32, 96, 3]
indices = [[0, 20, 31, 0],[0, 22, 33, 1], etc...]
values = [12, 24, etc...]
batch = {"indices": indices, "values": values, "shape": shape}
When setting up the computational graph I create a sparse-placeholder of the correct dimensions
images = tf.sparse_placeholder(tf.float32, shape=[None, 32, 96, 3])
'None' is used so I can vary the batch size.
When I first want to use the images, e.g. to feed into a batch convolution, I convert them back to a dense tensor:
dense_images = tf.sparse_tensor_to_dense(images)
Then when I am ready to run a session, e.g. for training, I pass the 3 components of the batch into the dictionary so that they will be picked up by the sparse_placeholder:
train_dict = {images: (batch['indices'], batch['values'], batch['shape']), etc...}
sess.run(train_step, feed_dict=train_dict)
If you are not needing to do batch processing just leave off the first dimension and remove 'none' from the placeholder shape.
I couldn't find any way of passing the images across in batch as an array of sparse matrices. It only worked if I created the 4th dimension. I'd be interested to know of alternatives.
Whilst this doesn't give an exact answer to your question I hope it is of use as I have been struggling with similar issues.
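For completeness, a minimal end-to-end sketch of the approach described above, in the same TensorFlow 1.x style (the indices/values are just the toy numbers from the example, and the training step is omitted):

import tensorflow as tf

# one batch of 50 sparse 32x96x3 images, stored as indices / values / shape
batch = {
    "indices": [[0, 20, 31, 0], [0, 22, 33, 1]],  # only two non-zero pixels shown
    "values": [12.0, 24.0],
    "shape": [50, 32, 96, 3],
}

# sparse placeholder; None lets the batch size vary
images = tf.sparse_placeholder(tf.float32, shape=[None, 32, 96, 3])

# densify before feeding into convolution layers etc.
dense_images = tf.sparse_tensor_to_dense(images)

with tf.Session() as sess:
    feed = {images: (batch["indices"], batch["values"], batch["shape"])}
    out = sess.run(dense_images, feed_dict=feed)
    print(out.shape)  # (50, 32, 96, 3)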