Pooling for 1D tensor - python

I am looking for a way to reduce the length of a 1D tensor by applying a pooling operation. How can I do it? If I apply MaxPool1d, I get the error max_pool1d() input tensor must have 2 or 3 dimensions but got 1.
Here is my code:
import numpy as np
import torch
import torch.nn as nn
A = np.random.rand(768)
m = nn.MaxPool1d(4,4)
A_tensor = torch.from_numpy(A)
output = m(A_tensor)

Your initialization is fine: you've set the first two parameters of nn.MaxPool1d, kernel_size and stride. For one-dimensional max pooling both should be integers, not tuples.
The issue is with your input: it needs at least two dimensions (a leading batch/channel axis is missing):
>>> m = nn.MaxPool1d(4, 4)
>>> A_tensor = torch.rand(1, 768)
Running the pooling then gives:
>>> output = m(A_tensor)
>>> output.shape
torch.Size([1, 192])

I think you meant the following instead:
m = nn.MaxPool1d((4,), 4)
As mentioned in the docs, the arguments are:
torch.nn.MaxPool1d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
As you can see, there is a single kernel_size argument; there is no kernel_size1/kernel_size2 pair, just one kernel_size.

For posterity: the solution is to reshape the tensor using A_tensor.reshape(1, 768), so that the pooling runs over the last (length) dimension.
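Putting it together with the original numpy array, a minimal working sketch of the snippet from the question could look like this (the reshape adds the missing leading axis):
import numpy as np
import torch
import torch.nn as nn

A = np.random.rand(768)
m = nn.MaxPool1d(4, 4)

# Add a leading axis so pooling is applied over the last (length) dimension
A_tensor = torch.from_numpy(A).reshape(1, 768)   # or .unsqueeze(0)
output = m(A_tensor)
print(output.shape)   # torch.Size([1, 192])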

Related

How can I apply a keras layer on each 2d slice of a 4d tensor by using lambda and map_fn?

Let us assume we have a tensor x with shape (64,100,5,32), which corresponds to (batchSize, Length, Height, Channels). Now I want to apply a 2D conv layer to each 2D matrix of size (100,5), i.e. to each of the 32 channels separately. So I need to extract 32 slices and process them with the same 2D conv layer (shared parameters). I don't know how to start with Lambda and map_fn (please do not use a TimeDistributed layer). Finally, I want a tensor of size (64,100,5,32).
Thanks for a short code snippet showing how to do this.
You can simply use a for loop with index slicing (no Lambda layer needed). Here is a dummy example:
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Concatenate
from tensorflow.keras.models import Model

n_sample = 3
H, W, C = 100, 5, 32
X = np.random.uniform(0, 1, (n_sample, H, W, C))

inp = Input((H, W, C))
convs = []
conv = Conv2D(1, 3, padding='same')  # this is always the same (shared) for all the slices
for c in range(inp.shape[-1]):
    _x = tf.expand_dims(inp[:, :, :, c], -1)  # pick channel c and restore a channel axis
    convs.append(conv(_x))
convs = Concatenate()(convs)

model = Model(inp, convs)
model.compile('adam', 'mse')
model.fit(X, X, epochs=2)
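A quick sanity check on the dummy example above (not part of the original answer): the shared Conv2D keeps each (100, 5, 1) slice's spatial size, and concatenating the 32 results restores the channel axis, so the output matches the requested (batch, Length, Height, Channels) pattern.
print(model.predict(X).shape)        # expected (3, 100, 5, 32)
print(len(conv.trainable_weights))   # 2: one shared kernel and one bias used for all slices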

TensorFlow vs PyTorch convolution confusion

I am confused on how to replicate Keras (TensorFlow) convolutions in PyTorch.
In Keras, I can do something like this (the input size is (256, 237, 1, 21) and the output size is (256, 237, 1, 1024)):
import tensorflow as tf
x = tf.random.normal((256,237,1,21))
y = tf.keras.layers.Conv1D(filters=1024, kernel_size=5,padding="same")(x)
print(y.shape)
(256, 237, 1, 1024)
However, in PyTorch, when I try to do the same thing I get a different output size:
import torch
import torch.nn as nn
x = torch.randn(256,237,1,21)
m = nn.Conv1d(in_channels=237, out_channels=1024, kernel_size=(1,5))
y = m(x)
print(y.shape)
torch.Size([256, 1024, 1, 17])
I want PyTorch to give me the same output size that Keras does:
A previous question seems to imply that Keras filters correspond to PyTorch's out_channels, but that's what I already have. I tried adding padding in PyTorch with padding=(0,503), but that gives me torch.Size([256, 1024, 1, 1023]), which is still not correct. It also takes much longer than Keras, so I suspect I have assigned a parameter incorrectly.
How can I replicate what Keras did with convolution in PyTorch?
In TensorFlow, tf.keras.layers.Conv1D takes in a tensor of shape (batch_shape + (steps, input_dim)), which means that what is commonly known as channels appears on the last axis. For instance, in 2D convolution you would have (batch, height, width, channels). This is different from PyTorch, where the channel dimension is right after the batch axis: torch.nn.Conv1d takes in shapes of (batch, channel, length). So you will need to permute two axes.
For torch.nn.Conv1d:
in_channels is the number of channels in the input tensor
out_channels is the number of filters, i.e. the number of channels the output will have
stride the step size of the convolution
padding the zero-padding added to both sides
In PyTorch there was no padding='same' option (newer versions have it; see the other answer), so you need to choose padding yourself. Here stride=1, so padding must equal kernel_size // 2 (i.e. padding=2) in order to maintain the length of the tensor.
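As a quick check of that claim, the standard 1D output-length formula (with dilation 1) gives back the input length:
L_in, kernel_size, stride = 237, 5, 1
padding = kernel_size // 2                                # 2
L_out = (L_in + 2 * padding - kernel_size) // stride + 1
print(L_out)                                              # 237, same as the input length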
In your example, since x has a shape of (256, 237, 1, 21), in TensorFlow's terminology it will be considered as an input with:
a batch shape of (256, 237),
steps=1, so the length of your 1D input is 1,
21 input channels.
Whereas in PyTorch, x of shape (256, 237, 1, 21) would be:
batch shape of (256, 237),
1 input channel
a length of 21.
I have kept the input in both examples below (TensorFlow vs. PyTorch) as x.shape=(256, 237, 21), assuming 256 is the batch size, 237 is the length of the input sequence, and 21 is the number of channels (i.e. the input dimension, the dimension at each timestep).
In TensorFlow:
>>> x = tf.random.normal((256, 237, 21))
>>> m = tf.keras.layers.Conv1D(filters=1024, kernel_size=5, padding="same")
>>> y = m(x)
>>> y.shape
TensorShape([256, 237, 1024])
In PyTorch:
>>> x = torch.randn(256, 237, 21)
>>> m = nn.Conv1d(in_channels=21, out_channels=1024, kernel_size=5, padding=2)
>>> y = m(x.permute(0, 2, 1))
>>> y.permute(0, 2, 1).shape
torch.Size([256, 237, 1024])
So in the latter, you would simply work with x = torch.randn(256, 21, 237)...
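For completeness, here is a quick check of that channels-second layout (no permutes needed):
>>> x = torch.randn(256, 21, 237)                        # (batch, channels, length)
>>> y = nn.Conv1d(21, 1024, kernel_size=5, padding=2)(x)
>>> y.shape
torch.Size([256, 1024, 237])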
PyTorch now has 'same' convolutions out of the box; take a look at this link: [Same convolution][1].
class InceptionNet(nn.Module):
    def __init__(self, in_channels, in_1x1, in_3x3reduce, in_3x3, in_5x5reduce, in_5x5, in_1x1pool):
        super(InceptionNet, self).__init__()
        # ConvBlock is the author's own helper layer (not defined in this snippet)
        self.incep_1 = ConvBlock(in_channels, in_1x1, kernel_size=1, padding='same')
Note that a 'same' convolution only supports the default stride of 1; any other stride will not work.
[1]: https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html
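For example, assuming a recent PyTorch version (1.9 or later, where padding='same' is available for unit-stride convolutions), the Keras layer from the question can be reproduced directly:
>>> x = torch.randn(256, 237, 21)                                   # Keras-style (batch, length, channels)
>>> m = nn.Conv1d(in_channels=21, out_channels=1024, kernel_size=5, padding='same')
>>> y = m(x.permute(0, 2, 1)).permute(0, 2, 1)                      # to (batch, channels, length) and back
>>> y.shape
torch.Size([256, 237, 1024])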

Why isn't Tensorflow/Keras Flatten layer flattening my array?

I am trying to use the tensorflow.keras.layers.Flatten layer outside of a model to flatten a 4x4 tensor. I can't figure out why the Flatten layer isn't actually flattening my array.
Here is my code:
import tensorflow as tf
import numpy as np
flayer = tf.keras.layers.Flatten()
X = tf.constant(np.random.random((4,4)),dtype=tf.float32)
Xf = flayer(X)
print(Xf)
and print(Xf) shows
tf.Tensor(
[[0.9866459 0.52488756 0.86211777 0.06254051]
[0.32552275 0.23201537 0.8646714 0.80754006]
[0.55823076 0.51929855 0.538077 0.4111973 ]
[0.95845264 0.14468837 0.30223057 0.09648433]], shape=(4, 4), dtype=float32)
Why doesn't my flatten layer output a 16x1 tensor?
That's because the Flatten() layer assumes that the first dimension is the number of samples, so it returns 4 flattened rows. You have 4 observations, and 1D input for each of these already. It would behave differently if you had data with shape (32, 28, 28, 1), for example, which has a higher dimensionality for each row.
import tensorflow as tf
import numpy as np
flayer = tf.keras.layers.Flatten()
X = tf.constant(np.random.random((32, 28, 28, 1)),dtype=tf.float32)
Xf = flayer(X)
print(Xf.shape)
(32, 784)
If you meant to flatten one observation with shape (4, 4), you should add a batch dimension for it to work:
X = tf.constant(np.random.random((1, 4, 4)),dtype=tf.float32)
Xf = flayer(X)
print(Xf.shape)
(1, 16)
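As a side note (not part of the answer above), if the goal is really just a rank-1 tensor with 16 elements rather than a batch of flattened rows, a plain reshape applied to the original 4x4 tensor X from the question also works:
Xf = tf.reshape(X, [-1])
print(Xf.shape)   # (16,)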

list of dense as outputs

I'm making a model whose output I want to be of dims (A,B).
So I'm making a list of Dense layers (A of them, with B outputs each), and I want the model output to have shape (No_samples, A, B).
What I actually get is a list of A tensors of shape (No_samples, B). Using a single Dense with A*B outputs does not help, because I want the softmax applied across each row separately.
I've attempted tf.concatenate and tf.reshape, but there is always either an error or the same undesirable output. My difficulty is that in order to proceed I have to do some really weird reshaping, which I want to avoid. Here is what I currently have:
outputs = []
for i in range(0, A):
    outputs.append(Dense(B, activation="softmax")(out))  # out is the previous layer's output
And I've tried everything below (separately):
outputs = tf.stack(outputs)
outputs = Reshape(self.output_shape)(outputs)
outputs = tf.convert_to_tensor(outputs)
With these, the output ends up with shape (A, ?, B) instead of the desired (?, A, B). Is there another method to have multiple Dense layers in parallel with the above behaviour?
Simple example with A=3, B=1.
from keras import backend as K
from keras.layers import Concatenate, Dense, Input, Lambda
from keras.models import Model
import numpy as np
def expand_dims(x):
    return K.expand_dims(x, axis=-2)  # expand (None, 1) to (None, 1, 1)

x = Input((2,))
A = 3
B = 1
y = Lambda(expand_dims)(Dense(B, activation="softmax")(x))
for i in range(0, A-1):
    # Concatenate on the newly added dimension
    y = Concatenate(axis=-2)([y, Lambda(expand_dims)(Dense(B, activation="softmax")(x))])
model = Model(x, y)
print(model.predict(np.ones((4,2))).shape)
(4, 3, 1) # Output shape is (No_samples, A,B)
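An equivalent variant of the same idea (my sketch, not part of the original answer) is to build the A parallel Dense heads in a list and stack them on axis 1 inside a single Lambda; the output shape is the same (No_samples, A, B):
from keras import backend as K
from keras.layers import Dense, Input, Lambda
from keras.models import Model
import numpy as np

A, B = 3, 1
x = Input((2,))
heads = [Dense(B, activation="softmax")(x) for _ in range(A)]   # A independent Dense heads
y = Lambda(lambda ts: K.stack(ts, axis=1))(heads)               # stack to (None, A, B)
model = Model(x, y)
print(model.predict(np.ones((4, 2))).shape)                     # (4, 3, 1)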

Simple network for arbitrary shape input

I am trying to create an autoencoder in Keras with Tensorflow backend. I followed this tutorial in order to make my own. The input to the network has a somewhat arbitrary shape, i.e. each sample is a 2D array with a fixed number of columns (12 in this case), but the number of rows ranges between 4 and 24.
What I have tried so far is:
# Generating random data
import random
import numpy as np

myTraces = []
for i in range(100):
    num_events = random.randint(4, 24)
    traceTmp = np.random.randint(2, size=(num_events, 12))
    myTraces.append(traceTmp)
myTraces = np.array(myTraces)  # (read Note down below)
And here is my sample model:
from keras.layers import Input, Conv1D, MaxPool1D, UpSampling1D
from keras.models import Model

input = Input(shape=(None, 12))
x = Conv1D(64, 3, padding='same', activation='relu')(input)
x = MaxPool1D(strides=2, pool_size=2)(x)
x = Conv1D(128, 3, padding='same', activation='relu')(x)
x = UpSampling1D(2)(x)
x = Conv1D(64, 3, padding='same', activation='relu')(x)
x = Conv1D(12, 1, padding='same', activation='relu')(x)

model = Model(input, x)
model.compile(optimizer='adadelta', loss='binary_crossentropy')
model.fit(myTraces, myTraces, epochs=50, batch_size=10, shuffle=True, validation_data=(myTraces, myTraces))
NOTE: As per the Keras docs, the input should be a numpy array; if I do that, I get the following error:
ValueError: Error when checking input: expected input_1 to have 3 dimensions, but got array with shape (100, 1)
And if I don't convert it into a numpy array and leave it as a list of numpy arrays, I get the following error:
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 100 arrays: [array([[0, 1, 0, 0 ...
I don't know what I am doing wrong here. Also I am kind of new to Keras. I would really appreciate any help regarding this.
Numpy does not know how to handle a list of arrays with varying row sizes (see this answer). When you call np.array on myTraces, it will return an array of individual arrays, not a 3D array (an array with shape (100, 1) is effectively a list of 100 arrays).
Keras will need a homogeneous array as well, meaning all input arrays should have the same shape.
What you can do is pad the arrays with zeroes such that they all have the shape (24, 12); then np.array can return a 3-dimensional array of shape (100, 24, 12) and the Keras input layer does not complain.
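A minimal sketch of that padding idea, assuming myTraces is still the Python list of per-trace arrays built in the question (before the np.array call):
import numpy as np

max_len = 24
padded = np.zeros((len(myTraces), max_len, 12), dtype=np.float32)
for i, trace in enumerate(myTraces):
    padded[i, :trace.shape[0], :] = trace   # copy the trace, leaving zero rows at the end

print(padded.shape)   # (100, 24, 12), a homogeneous 3D array Keras accepts
Keras also ships a helper for exactly this kind of padding (keras.preprocessing.sequence.pad_sequences with maxlen=24 and padding='post'), which should produce the same result.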
