Hello everyone, I have a problem.
I am using Python 3.6.5 and TensorFlow 1.8.0.
My input is 1000 max_textlength * 64 embedding * 4 steps and 3 protocols = 64007
neuronumber = 10
A normal RNN works, but I wanted to improve it with
tf.contrib.rnn.AttentionCellWrapper(cell, attn_length=2, state_is_tuple=True)
I received the following message:
ValueError: Dimension 1 in both shapes must be equal, but are 10 and 20. Shapes are [?, 10] and [?, 20].
From merging shape 1 with other shapes for 'fully_connected/packed' (op: 'Pack') with input shapes: [?, 10], [?, 10], [?, 20]
Why is this so? Has anyone else had this problem?
I also experimented with state_is_tuple=False;
no error message was given, but Python suddenly crashed :(
By the way, when I changed the attention_length from e.g. 2 to 3 or 4, the error changed to
ValueError: Dimension 1 in both shapes must be equal, but are 10 and 30. Shapes are [?, 10] and [?, 30].
and
ValueError: Dimension 1 in both shapes must be equal, but are 10 and 40. Shapes are [?, 10] and [?, 40].
It seems as if the attention length is multiplied by the state size.
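A minimal sketch of where that factor seems to come from (assuming tf.contrib.rnn.AttentionCellWrapper around a 10-unit GRUCell, which matches my setup):

import tensorflow as tf  # TensorFlow 1.8

cell = tf.nn.rnn_cell.GRUCell(10)
wrapped = tf.contrib.rnn.AttentionCellWrapper(cell, attn_length=2, state_is_tuple=True)
print(cell.state_size)     # 10
print(wrapped.state_size)  # (10, 10, 20): the last entry is attn_length * num_units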
Thanks so much for help!
My code:
import torch
import torch.nn.utils.rnn as r

a = torch.ones([1, 20])
b = torch.ones([1, 25])
c = r.pad_sequence([a, b], batch_first=True, padding_value=0)
The traceback of this code is:
RuntimeError: The size of tensor a (20) must match the size of tensor b (25) at non-singleton dimension 1
Can anybody explain what this error is and how to solve it?
All I wanted was to pad zeros to tensor a to make its shape equal to b's.
In your example you have two sequences of length 20 and 25 samples, respectively. Both sequences have a 1-dimensional element per time step.
PyTorch expects the element dimension to be the last one, therefore you need:
c = r.pad_sequence([a.T, b.T], batch_first=True)
The output shape of c is then [2, 25, 1].
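A self-contained version of the fix; pad_sequence pads along dim 0 of each sequence, so every sequence has to be laid out as [time, features]:

import torch
import torch.nn.utils.rnn as r

a = torch.ones([1, 20])
b = torch.ones([1, 25])

# transpose to [time, features] so dim 0 is the (padded) time dimension
c = r.pad_sequence([a.T, b.T], batch_first=True, padding_value=0)
print(c.shape)  # torch.Size([2, 25, 1])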
I am currently working to train a dataset stored as numpy arrays using
train_dataset = tf.data.Dataset.from_tensor_slices(train_data)
Here, train_data is a numpy array of data without the associated labels. The model I am running was created to work on datasets as DatasetV1Adapters (MNIST and a dataset for pix2pix GANs). I have been looking for documentation on the required correction for quite a while now (around 4 weeks), and this method hasn't solved my problem.
For the training process, I was running:
for images in train_dataset:
    # images = np.expand_dims(images, axis=0)
    disc_loss += train_discriminator(images)
Which would give me an error of
ValueError: Input 0 of layer conv2d_2 is incompatible with the layer: expected ndim=4, found ndim=3.
The array shape was [32,32,3], so the leading 100 (the number of images) was lost. I tried running the commented-out line images=np.expand_dims(images, axis=0), which gave me [1,32,32,3] and matched the required dimensionality. I thought my problem would be solved, but instead I now have the following error:
ValueError: Input 0 of layer conv2d_4 is incompatible with the layer: expected axis -1 of input shape to have value 1 but received input with shape [1, 32, 32, 3]
Which I don't fully understand. The error seems definitely related to the DatasetV1Adapter, as I get the same type of error with various codes. I have tried uploading my dataset to GitHub, but as a 10GB folder I am unable to actually upload it. Any help will be appreciated.
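For reference, a minimal sketch of this behavior with random stand-in data (since the real dataset is too big to share): from_tensor_slices slices away the leading dimension, and batching the dataset is the usual way to get it back:

import numpy as np
import tensorflow as tf

train_data = np.random.rand(100, 32, 32, 3).astype(np.float32)  # stand-in for the real arrays
train_dataset = tf.data.Dataset.from_tensor_slices(train_data)  # elements are [32, 32, 3]
train_dataset = train_dataset.batch(16)                         # elements become [16, 32, 32, 3]

for images in train_dataset:
    print(images.shape)  # (16, 32, 32, 3), with a smaller final batch
    break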
EDIT: Followed @Sebastian-Sz's advice (kind of). I set the channels in the model to three to accommodate RGB instead of grayscale. Running this code gave me
TypeError: Value passed to parameter 'input' has DataType uint8 not in list of allowed values: float16, bfloat16, float32, float64
So I added
train_data = np.asarray(train_data, dtype=np.float)
Now I get an error saying:
Input 0 of layer dense_5 is incompatible with the layer: expected axis -1 of input shape to have value 6272 but received input with shape [1, 8192]
Which makes no sense to me
I keep getting the error that my input shape should have 3 dimensions, but it has 2, and I don't know how to reshape it to make it work. I've checked similar questions, but here I'll lay out my specific problem.
My dataset is a series of .wav audio files, for which I have a path, and which I've already matched with the corresponding word and MFCC.
I have 75859 arrays, in which each array consists of 99 lists, and each list has 13 values.
Here's my x_train:
x_train = x_train.reshape(x_train.shape[0],coeff, time_step)
len(x_train[1]) = 99
len(x_train[1][0]) = 13
x_train[1][0][0] = a single number, e.g. 0.10
x_train.shape[0] = 75859
(I do trust my Conv1D model and so far I have no suspicions about it)
Here's the error I get:
ValueError: Error when checking input: expected conv1d_61_input to have 3 dimensions, but got array with shape (18965, 1)
The input_shape parameter of the first layer of your neural network needs to correspond to the input. Set it to input_shape=x_train.shape[1:]. If that doesn't work, update your post with your entire model architecture.
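A minimal sketch of what that looks like (assuming x_train has already been reshaped to (75859, 99, 13); the single Conv1D layer is a hypothetical stand-in for your first layer):

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D

x_train = np.zeros((100, 99, 13), dtype=np.float32)  # stand-in for the real (75859, 99, 13) array

model = Sequential()
# input_shape covers everything after the sample dimension: (99, 13)
model.add(Conv1D(filters=64, kernel_size=3, activation='relu',
                 input_shape=x_train.shape[1:]))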
I have an input like this
x=[[0,0,0,0,1,0,0,0]....[n,n,n,n,n,n,n,n]]
x.shape=(18998,8)
An output like this
y= 11 11 11 11 ... 10
y.shape=(18998,)
I build the model like this
from keras.models import Sequential
from keras.layers import Dense

env_model = Sequential()
env_model.add(Dense(8, activation='relu', input_dim=8))
env_model.add(Dense(128, activation='relu'))
env_model.add(Dense(256, activation='relu'))
env_model.add(Dense(512, activation='relu'))
env_model.add(Dense(14, activation='softmax'))
env_model.summary()
env_model.save('model_weights/weights.environment.h5')
I thought the model should not have any issue, but I kept getting an error like the following:
ValueError: Error when checking target: expected dense_8 to have shape (14,) but got array with shape (1,)
Could you help me point out what is wrong with my input, output or model? I am looking forward to your help. Thank you very much!
In your model,
env_model.add(Dense(14, activation='softmax'))
the final Dense layer expects a target array of shape (batch_size, 14), but you are providing an array of shape (batch_size,).
Note: in NumPy, a shape of (12,) means a 1-D array of 12 scalar elements; the trailing comma is just Python's tuple syntax for a one-element tuple. Your y with shape (18998,) is therefore a flat vector of class indices, not one row of length 14 per sample.
So you need to convert your target data into shape (18998, dim), where dim is the length of the target vector. To achieve this you can try two approaches:
As mentioned by @giser_yugang, convert the target data into categorical (one-hot) data using keras.utils.to_categorical(), as shown in the sketch below.
Pad the target vectors to a fixed length with keras.preprocessing.sequence.pad_sequences().
Either way, you end up with a fixed-length target.
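A minimal sketch of the first approach (with a few hypothetical labels standing in for your real y):

import numpy as np
from keras.utils import to_categorical

y = np.array([11, 11, 11, 10])             # hypothetical labels in [0, 13]
y_cat = to_categorical(y, num_classes=14)  # one-hot rows of length 14
print(y_cat.shape)  # (4, 14)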
Trying to implement a paper and running into some brick walls due to dimensionality problems. My input is mono audio data where 128 frames of 50 ms of 16 kHz-sampled audio are fed into the network. So my input shape is:
[128, 0.05*16000, 1], i.e. [128, 800, 1]
Here are the layer details:
1.) conv-bank block: Conv1d-bank-8, LReLU, IN (instance normalization)
I achieve this using:
bank_width = 8
conv_bank_outputs = tf.concat([tf.layers.conv1d(input, 1, k, activation=tf.nn.leaky_relu, padding="same") for k in range(1, bank_width + 1)], axis=-1)
2.) conv-block: C-512-5, LReLU --> C-512-5, stride=2, LReLU, IN, RES (residual)
This is where I get stuck: the shapes of the output of the second convolution and the input to this block are mismatched, and I can't get my head around it.
I achieve this using:
block_1 = tf.layers.conv1d(input,filters=512,kernel_size=5,activation=tf.nn.leaky_relu,padding="same")
block_2 = tf.layers.conv1d(block_1,filters=512,kernel_size=5,strides=2,activation=tf.nn.leaky_relu,padding="same")
IN = tf.contrib.layers.instance_norm(block_2)
RES = IN + input
Error: ValueError: Dimensions must be equal, but are 400 and 800 for 'add' (op: 'Add') with input shapes: [128,400,512], [128,800,1024].
When you run conv1d on block_1 with stride=2, the time dimension is halved, because the convolution effectively samples only every other position, and you have also changed the number of channels. This is usually worked around by downsampling the input with a 1x1 conv with stride 2 and 512 filters, though I can be more specific if you can share the paper.
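A minimal sketch of that workaround, reusing the layers from the question (the 1x1 projection shortcut is an assumption about how the paper handles the residual, not something taken from it):

import tensorflow as tf  # TF 1.x

input = tf.placeholder(tf.float32, [128, 800, 1024])  # conv-bank output shape from the error

block_1 = tf.layers.conv1d(input, filters=512, kernel_size=5,
                           activation=tf.nn.leaky_relu, padding="same")
block_2 = tf.layers.conv1d(block_1, filters=512, kernel_size=5, strides=2,
                           activation=tf.nn.leaky_relu, padding="same")
IN = tf.contrib.layers.instance_norm(block_2)

# project the input to match IN: 1x1 conv, stride 2, 512 filters,
# taking [128, 800, 1024] -> [128, 400, 512]
shortcut = tf.layers.conv1d(input, filters=512, kernel_size=1, strides=2,
                            padding="same")
RES = IN + shortcut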