How to make a multidimensional array from one dimensional arrays? - python

I am preparing data for a convolutional neural network model. I am new to deep learning and want to start by seeing how LeNet-5 works with my data (as it has few parameters).
I prepared 9 NumPy arrays, each with shape (10530, 32, 32, 1): 10530 images, each 32 x 32 pixels. I want to combine them into one array of shape (10530, 32, 32, 9).
I tried np.concatenate but it is not working. Do you have a suggestion?

np.concatenate is the way:
np.concatenate(list_of_arrays, axis=-1)
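For example, a minimal sketch with small dummy arrays standing in for the nine (10530, 32, 32, 1) channel arrays (first dimension reduced here to keep it quick):
import numpy as np

# nine single-channel image stacks with identical leading dimensions
channels = [np.random.rand(10, 32, 32, 1) for _ in range(9)]

combined = np.concatenate(channels, axis=-1)  # join along the last (channel) axis
print(combined.shape)  # (10, 32, 32, 9)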

Related

How to double elements in numpy array

I have a numpy array with a shape similar to (3114, 7, 36, 64, 1). This example is a batch of 3114 sets of 7 images, each image 36 by 64 grayscale pixels. This is for a 3d Convolutional Neural Network, but what I want to do is double the batch size for more data for my model. Basically, I want to take these 3114 sets of 7 and duplicate each one to create a batch of size 6228. I know this is a pretty basic question, but I am not too familiar with numpy arrays. Thanks in advance!
You can use either numpy's .repeat() or .concatenate() functions
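A minimal sketch with a small dummy batch in place of the (3114, 7, 36, 64, 1) array, showing both options:
import numpy as np

batch = np.random.rand(4, 7, 36, 64, 1)             # dummy stand-in for the real batch

doubled_a = np.concatenate([batch, batch], axis=0)  # appends a copy of the whole batch
doubled_b = np.repeat(batch, 2, axis=0)             # duplicates each sample in place
print(doubled_a.shape, doubled_b.shape)             # (8, 7, 36, 64, 1) (8, 7, 36, 64, 1)
Both calls double the first (batch) dimension; they only differ in the order in which the duplicates appear.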

Can 2D convolutional neural network be converted into 1D convolutional neural network?

I designed a neural network using 2D convolutional layers and max-pooling layers. The input is one-hot encoded sequences stored as a 2D array, which is reshaped before being fed to the model.
data = np.zeros( (100, 21 * 1000), dtype=np.float32 )
#reshape
x_data = tf.reshape( data, [-1, 1, 1000, 21] )
However, I also trained on the same dataset using 1D convolutional layers, changing the model and the input array, without reshaping since the input is already 1D:
data = np.zeros( (100, 1000, 21), dtype=np.float32 )
Finally, the 1D convolutional model performed well with 96% accuracy, while the 2D CNN gave 93%. Can someone explain to me what actually happens there to increase the accuracy?
Can someone explain to me what actually happens there to increase the accuracy?
That's hard to tell and depends on your specific dataset, network, hyperparameters etc.
Generally, in a Conv2D layer the filter shifts both horizontally and vertically during the convolution. In a Conv1D layer the filter shifts along only one axis.
So which one is best? That depends on your problem. For time series, Conv1D may be the better choice; for images, Conv2D often is.
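As an illustration, a minimal Keras-style sketch of the two variants for this data layout (the filter counts, kernel sizes, and the two-class output are illustrative assumptions, not taken from the question):
import tensorflow as tf

# 1D variant: each sample is a length-1000 sequence with 21 channels
model_1d = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, kernel_size=5, activation='relu', input_shape=(1000, 21)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation='softmax'),
])

# 2D variant: the same data reshaped to a (1, 1000, 21) "image"; the kernel now has a height of 1
model_2d = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, kernel_size=(1, 5), activation='relu', input_shape=(1, 1000, 21)),
    tf.keras.layers.MaxPooling2D((1, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation='softmax'),
])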

Python numpy concatenate 4D

I'm trying to concatenate 2 numpy arrays of features predicted by the convolution layers in a vgg16 model.
Basically, I have used the bottom layers of a vgg16 model to predict the features for my full dataset, and now I want to load parts of the dataset dynamically, based on some settings, to train some models with it.
So, I have 2 array of shape:
(724, 512, 6, 8) and (3376, 512, 6, 8)
Basically the first one contains features predicted from 724 image files (each prediction has shape (512, 6, 8)).
I want to concatenate these 2 arrays into one of shape (4100, 512, 6, 8)
I have tried using:
np.array([np.concatenate(arr, axis=0) for arr in false_train_list])
where false_train_list is the list containing the 2 arrays with the above shapes.
Also tried with np.stack, tf.stack...
All of these result in an array with shape (2,)
Can someone explain why? I haven't found any good resources to understand how exactly np.concatenate() works.
Thank you!
I think you simply need this instead:
np.concatenate(false_train_list, axis=0)
https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.concatenate.html
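A minimal sketch with dummy arrays (smaller first dimensions) standing in for the two feature arrays:
import numpy as np

a = np.zeros((7, 512, 6, 8))     # stand-in for the (724, 512, 6, 8) array
b = np.zeros((33, 512, 6, 8))    # stand-in for the (3376, 512, 6, 8) array

merged = np.concatenate([a, b], axis=0)
print(merged.shape)  # (40, 512, 6, 8)
As for the (2,) result: np.concatenate(arr, axis=0) applied to a single 4D array joins its sub-arrays along the first axis, and wrapping the two differently shaped results in np.array([...]) then produces an object array of length 2, which is likely what you were seeing.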

How to reshape the res5c layer of ResNet (3D to 2D)?

I extract the features of an image with ResNet from the 'res5c' layer, resulting in a numpy array of shape (2048, 14, 14).
I have trouble manipulating these dimensions. I understand there are 14*14 features of size 2048, and I would like to iterate over them to access each feature one at a time.
Therefore, how can I reshape this to an array of shape (14*14, 2048) without mistakes, and then easily iterate over it with a for loop?
You can read the features after net.forward():
feat = net.blobs['res5c'].data.copy() # copy to be on the safe side.
As you describe, feat is an np.array with shape = (2048, 14, 14).
You can reshape it:
feat = feat.reshape((2048, -1)) # keep the first dimension at 2048; -1 lets numpy infer the second (14*14 = 196)
Now you can iterate over features:
for fi in range(feat.shape[1]):
    f = feat[:, fi]  # get the fi-th feature
    # do something with the feature f
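If you would rather have the (14*14, 2048) layout asked about, a small sketch (with a random array standing in for the actual blob) is to transpose after reshaping:
import numpy as np

feat = np.random.rand(2048, 14, 14)  # stand-in for the res5c blob
feat_2d = feat.reshape(2048, -1).T   # (2048, 14, 14) -> (196, 2048)

for f in feat_2d:
    pass  # f is one 2048-dimensional feature vector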

Using Sparse Tensors to feed a placeholder for a softmax layer in TensorFlow

Has anyone tried using sparse tensors for text analysis with TensorFlow with success? Everything is ready and I manage to feed feed_dict in tf.Session for a softmax layer with numpy arrays, but I am unable to feed the dictionary with SparseTensorValue objects.
I have not found any documentation about using sparse matrices to train a model (a softmax, for example) with TensorFlow either, which is strange, as the SparseTensor and SparseTensorValue classes and the tf.sparse_to_dense method are ready for it, but there is no documentation about how to feed the feed_dict dictionary of values to the session.run(fetches, feed_dict=None) method.
Thanks a lot,
I have found a way of putting sparse images into TensorFlow, including batch processing, if that is of any help.
I create a 4-D sparse matrix in a dictionary where the dimensions are batchSize, xLen, yLen, zLen (where zLen is 3 for colour, for example). The following pseudo code is for a batch of 50 32x96 pixel 3-colour images. Values are the intensity of each pixel. In the snippet below I show the first 2 pixels of the first image in the batch being initialised...
shape = [50, 32, 96, 3]
indices = [[0, 20, 31, 0],[0, 22, 33, 1], etc...]
values = [12, 24, etc...]
batch = {"indices": indices, "values": values, "shape": shape}
When setting up the computational graph I create a sparse-placeholder of the correct dimensions
images = tf.sparse_placeholder(tf.float32, shape=[None, 32, 96, 3])
'None' is used so I can vary the batch size.
When I first want to use the images, e.g. to feed into a batch convolution, I convert them back to a dense tensor:
dense_images = tf.sparse_tensor_to_dense(images)
Then when I am ready to run a session, e.g. for training, I pass the 3 components of the batch into the dictionary so that they will be picked up by the sparse_placeholder:
train_dict = {images: (batch['indices'], batch['values'], batch['shape']), etc...}
sess.run(train_step, feed_dict=train_dict)
If you do not need to do batch processing, just leave off the first dimension and remove 'None' from the placeholder shape.
I couldn't find any way of passing the images across in batch as an array of sparse matrices. It only worked if I created the 4th dimension. I'd be interested to know of alternatives.
Whilst this doesn't give an exact answer to your question I hope it is of use as I have been struggling with similar issues.
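Putting the pieces together, a minimal end-to-end sketch against the TF 1.x API used above (no real model here; the dense tensor is simply fetched, and the two indices/values are the placeholder examples from the snippet):
import tensorflow as tf  # TF 1.x-style API

# graph: sparse placeholder, densified before it would reach any conv layers
images = tf.sparse_placeholder(tf.float32, shape=[None, 32, 96, 3])
dense_images = tf.sparse_tensor_to_dense(images)

# one batch in the dictionary format described above (only two pixels set)
batch = {"indices": [[0, 20, 31, 0], [0, 22, 33, 1]],
         "values": [12.0, 24.0],
         "shape": [50, 32, 96, 3]}

with tf.Session() as sess:
    dense = sess.run(dense_images,
                     feed_dict={images: (batch["indices"], batch["values"], batch["shape"])})
    print(dense.shape)  # (50, 32, 96, 3)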
