I see that TensorFlow support is pretty slim, but I'll try anyway …
When running my agent:
import tensorflow as tf
from tf_agents.agents.reinforce import reinforce_agent

optimizer = tf.keras.optimizers.Adam()
train_step_counter = tf.Variable(0)

tf_agent = reinforce_agent.ReinforceAgent(
    train_py_env.time_step_spec(),
    train_py_env.action_spec(),
    actor_network=actor_net,
    optimizer=optimizer,
    normalize_returns=True,
    train_step_counter=train_step_counter)
I get a ValueError from _make_gin_wrapper (Line 1605). The error text is:

ValueError: Exception encountered when calling layer "lambda_12" (type Lambda).

Shapes (1, 9) and (9, 9) are incompatible

Call arguments received by layer "lambda_12" (type Lambda):
  • inputs=tf.Tensor(shape=(1, 9), dtype=int32)
  • mask=None
  • training=None

In call to configurable 'ReinforceAgent' (<class 'tf_agents.agents.reinforce.reinforce_agent.ReinforceAgent'>)
So apparently there is some incompatibility between the (1, 9) and (9, 9) shapes. The environment is taken from https://towardsdatascience.com/creating-a-custom-environment-for-tensorflow-agent-tic-tac-toe-example-b66902f73059. It is TicTacToe on a [0,0,0,0,0,0,0,0,0] board, which has shape (9,), so I can see where the 9s come from, but I don't know which objects have the (1, 9) and (9, 9) shapes or what I could do to get the agent running.
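To narrow it down, a minimal debugging sketch that prints the specs the agent is built from (assuming the tutorial's train_py_env and actor_net are already constructed):

# Debugging sketch (assumes the tutorial's train_py_env and actor_net exist).
print(train_py_env.time_step_spec().observation)  # board observation, expect shape (9,)
print(train_py_env.action_spec())                 # action spec the agent validates against
print(actor_net)                                  # network whose output must match the action spec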
I am a novice in TensorFlow. I am trying to use BERT embeddings in an LSTM model. This is my model function:
import tensorflow as tf
import torch
from transformers import TFAutoModel

def bert_tweets_model():
    Bertmodel = TFAutoModel.from_pretrained(model_name, output_hidden_states=True)

    input_word_ids = tf.keras.Input(shape=(max_length,), dtype=tf.int32, name="input_ids")
    input_masks_in = tf.keras.Input(shape=(max_length,), name='masked_token', dtype='int32')

    with torch.no_grad():
        last_hidden_states = Bertmodel(input_word_ids, attention_mask=input_masks_in)[0]

    x = tf.keras.layers.LSTM(100, dropout=0.1, activation='relu', recurrent_dropout=0.3, return_sequences=True)(last_hidden_states)
    x = tf.keras.layers.LSTM(50, dropout=0.1, activation='relu', recurrent_dropout=0.3, return_sequences=True)(x)
    x = tf.keras.layers.Flatten()(x)
    output = tf.keras.layers.Dense(units=2, activation='sigmoid')(x)

    model = tf.keras.Model(inputs=[input_word_ids, input_masks_in], outputs=output)
    return model
with strategy.scope():
    model = bert_tweets_model()
    adam_optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5)
    model.compile(loss='binary_crossentropy', optimizer=adam_optimizer, metrics=['accuracy'])
    model.summary()
validation_data = [dev_encoded, y_val]
train2 = [input_id, attention_mask]

history = model.fit(
    x=train2, y=y_train, batch_size=batch_size,
    epochs=3,
    validation_data=validation_data,
    verbose=2)
I received this error from the fit function when I tried to input the data:
"ValueError: Layer "model_1" expects 2 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 512) dtype=int32>]"
Also, I received these warning messages and I do not know what they mean:
WARNING:tensorflow:Layer lstm_2 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
WARNING:tensorflow:Layer lstm_3 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Can someone help me? Thanks in advance.
Reproducing your error:
_input1 = tf.random.uniform((1, 100), 0, 10)
_input2 = tf.random.uniform((1, 100), 0, 10)

model(_input1, _input2)
After running this code I am getting the same error...
Layer "model" expects 2 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor: shape=(1, 100), ...
Now, the problem is that you have to enclose the inputs in a tuple or list and then pass them to the model, like this:
model((_input1, _input2))
<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[0.5324366, 0.3743334]], dtype=float32)>
Remember: if you are using tf.data.Dataset, enclose the inputs in a tuple while building the dataset, like this:
tf.data.Dataset.from_tensor_slices((words_id, words_mask))
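For completeness, a sketch of the full input pipeline with labels included (assuming the input_id, attention_mask, and y_train names from your question):

# Sketch (assumed names from the question: input_id, attention_mask, y_train).
# The two model inputs are grouped in a tuple; labels stay separate.
dataset = tf.data.Dataset.from_tensor_slices(
    ((input_id, attention_mask), y_train)
).batch(batch_size)

history = model.fit(dataset, epochs=3, verbose=2)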
Second problem, as you asked:
You are getting the warning because your LSTM layers do not meet the criteria for the fast cuDNN kernel, which requires, among other things, the default tanh activation and recurrent_dropout=0 (you use activation='relu' and recurrent_dropout=0.3). TensorFlow is just telling you that it will fall back to a slower, generic GPU kernel for those layers.
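If you want the fast cuDNN path, here is a sketch of a compatible configuration (assuming you can accept the default tanh activation and no recurrent dropout):

# These layers meet the cuDNN fast-path criteria: default tanh activation,
# recurrent_dropout=0 (plain input dropout is still allowed).
x = tf.keras.layers.LSTM(100, dropout=0.1, return_sequences=True)(last_hidden_states)
x = tf.keras.layers.LSTM(50, dropout=0.1, return_sequences=True)(x)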
I have been stuck on this error for a long time. Is there any way to resolve it without downgrading my TensorFlow version? All the solutions I have found so far recommend TF < 2.0, which I don't want to use. Current TF version = 2.4.1, Keras version = 2.4.3, running on Google Colab.
I am trying to use SHAP's GradientExplainer with the VGG16 model to see how a particular layer impacts predictions. The code is:
e = shap.GradientExplainer((model.layers[7].input, model.layers[-1].output), map2layer(preprocess_input(X.copy()), 7))
shap_values, indexes = e.shap_values(map2layer(to_predict, 7), ranked_outputs=2)
index_names = np.vectorize(lambda x: class_names[str(x)][1])(indexes)
index_names
The error is:
TypeError Traceback (most recent call last)
<ipython-input-13-b3a265bc3cde> in <module>()
----> 1 e = shap.GradientExplainer((model.layers[7].input, model.layers[-1].output), map2layer(preprocess_input(X.copy()), 7))
2 shap_values, indexes = e.shap_values(map2layer(to_predict, 7), ranked_outputs=2)
3 index_names = np.vectorize(lambda x: class_names[str(x)][1])(indexes)
4 index_names
<ipython-input-11-f110beabf449> in map2layer(x, layer)
1 def map2layer(x, layer):
----> 2 feed_dict = dict(zip([model.layers[0].input], [preprocess_input(x.copy())]))
3 return K.get_session().run(model.layers[layer].input, feed_dict)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/keras_tensor.py in __hash__(self)
259 def __hash__(self):
260 raise TypeError('Tensors are unhashable. (%s)'
--> 261 'Instead, use tensor.ref() as the key.' % self)
262
263 # Note: This enables the KerasTensor's overloaded "right" binary
TypeError: Tensors are unhashable. (KerasTensor(type_spec=TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name='input_1'), name='input_1', description="created by layer 'input_1'"))Instead, use tensor.ref() as the key.
It looks like you're passing a Tensor or KerasTensor as a key in your feed_dict. Python tries to hash the dictionary keys (a Python dict is a hash map), which raises the error you're seeing, because Tensors are not hashable (they don't implement the __hash__ method).
To solve this, make sure that the feed_dict keys are actual graph tensors (such as TF1-style placeholders) rather than KerasTensors.
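Since you don't want to downgrade, one workaround I believe fits this SHAP-plus-TF2 combination (assumption: TF1-style graph mode is acceptable for your workflow) is to disable v2 behavior before building the model:

import tensorflow as tf

# Assumption: TF1-style graph mode is acceptable for your SHAP workflow.
# This must run before the model is built, so that K.get_session() and
# feed_dict operate on hashable graph tensors again.
tf.compat.v1.disable_v2_behavior()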
Let's begin by creating a very basic deep neural network in MXNet Gluon (inspired by this tutorial):
import mxnet as mx
from mxnet import gluon

ctx = mx.cpu()
net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Conv2D(channels=20, kernel_size=5, activation='relu'))
    net.add(gluon.nn.MaxPool2D(pool_size=2, strides=2))
Now, if we want to print out the dimensions of a layer, all we have to do is...
print(net[0])
# prints: Conv2D(None -> 20, kernel_size=(5, 5), stride=(1, 1), Activation(relu))
print(net[1])
# prints: MaxPool2D(size=(2, 2), stride=(2, 2), padding=(0, 0), ceil_mode=False)
However, instead of printing it out, what if we want to programmatically inspect the padding of net[1]?
When I try net[1].padding, I get the error AttributeError: 'MaxPool2D' object has no attribute 'padding'.
When I try net[1]['padding'], I get the error TypeError: 'MaxPool2D' object is not subscriptable.
So, what's the right way to programmatically access the dimensions of a neural network layer in MXNet Gluon?
print(net[1]._kwargs["pad"])
Try getting them from the _kwargs dictionary. Look for other keys at this source.
This is the Colab link for the code.
Other keys include kernel for the kernel size and stride for the stride.
To get all the keys and values:
for k, v in net[1]._kwargs.items():
    print(k, v)
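On the network above this should print, among other entries, values consistent with the repr shown earlier (exact keys can vary between MXNet versions):

kernel (2, 2)
stride (2, 2)
pad (0, 0)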
I am using a bidirectional LSTM with batch_first=True. However, it is throwing an error regarding dimensions.
Error: Expected hidden[0] size (6, 5, 40), got (5, 6, 40)
When I checked the source code, the error comes from the function below:
if is_input_packed:
    mini_batch = int(batch_sizes[0])
else:
    mini_batch = input.size(0) if self.batch_first else input.size(1)
num_directions = 2 if self.bidirectional else 1
expected_hidden_size = (self.num_layers * num_directions,
                        mini_batch, self.hidden_size)

def check_hidden_size(hx, expected_hidden_size, msg='Expected hidden size {}, got {}'):
    if tuple(hx.size()) != expected_hidden_size:
        raise RuntimeError(msg.format(expected_hidden_size, tuple(hx.size())))
By default, expected_hidden_size is written with respect to sequence first. I believe this is causing the problem. Can someone advise whether I am right and whether the issue needs to be fixed?
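Update: the PyTorch docs note that batch_first affects only the input and output tensors, not the hidden states, so the (num_layers * num_directions, batch, hidden_size) layout appears to be intended behavior rather than a bug. A minimal sketch using the sizes implied by the error (input_size and seq_len below are made up):

import torch
import torch.nn as nn

# Assumed sizes: input_size=13 and seq_len=7 are illustrative; the rest
# follows the error (batch=5, hidden_size=40, 3 layers, bidirectional -> 3*2=6).
lstm = nn.LSTM(input_size=13, hidden_size=40, num_layers=3,
               batch_first=True, bidirectional=True)
x = torch.randn(5, 7, 13)    # (batch, seq, feature): batch_first applies here
h0 = torch.zeros(6, 5, 40)   # hidden is always (layers * dirs, batch, hidden)
c0 = torch.zeros(6, 5, 40)
out, (hn, cn) = lstm(x, (h0, c0))
print(out.shape)             # torch.Size([5, 7, 80])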
I want to build a neural network using neupy. Therefore I constructed the following architecture:
from neupy import layers

network = layers.join(
    layers.Input(10),
    layers.Linear(500),
    layers.Relu(),
    layers.Linear(300),
    layers.Relu(),
    layers.Linear(10),
    layers.Softmax(),
)
My data is shaped as follows:
x_train.shape = (32589,10)
y_train.shape = (32589,1)
When I try to train this network using:

model.train(x_train, y_train)
I get the following error:
ValueError: Input dimension mis-match. (input[0].shape[1] = 10, input[1].shape[1] = 1)
Apply node that caused the error: Elemwise{sub,no_inplace}(SoftmaxWithBias.0, algo:network/var:network-output)
Toposort index: 26
Inputs types: [TensorType(float64, matrix), TensorType(float64, matrix)]
Inputs shapes: [(32589, 10), (32589, 1)]
Inputs strides: [(80, 8), (8, 8)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[Elemwise{Composite{((i0 * i1) / i2)}}(TensorConstant{(1, 1) of 2.0}, Elemwise{sub,no_inplace}.0, Elemwise{mul,no_inplace}.0), Elemwise{Sqr}[(0, 0)](Elemwise{sub,no_inplace}.0)]]
How do I have to edit my network so that it can map this kind of data?
Thank you a lot!
Your architecture has 10 outputs instead of 1. I assume that your y_train is a 0-1 class identifier. If so, then you need to change your structure to this:
network = layers.join(
    layers.Input(10),
    layers.Linear(500),
    layers.Relu(),
    layers.Linear(300),
    layers.Relu(),
    layers.Linear(1),   # Single output
    layers.Sigmoid(),   # Sigmoid works better for 2-class classification
)
You can make it even simpler:
network = layers.join(
    layers.Input(10),
    layers.Relu(500),
    layers.Relu(300),
    layers.Sigmoid(1),
)
This works because layers.Linear(10) > layers.Relu() is the same as layers.Relu(10). You can learn more in the official documentation: http://neupy.com/docs/layers/basics.html#mutlilayer-perceptron-mlp
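A hypothetical training sketch for the simplified network (the Momentum algorithm and the epoch count are my assumptions, not something from the question):

from neupy import algorithms

# Assumed training setup; Momentum and epochs=100 are illustrative choices.
model = algorithms.Momentum(network, verbose=True)
model.train(x_train, y_train, epochs=100)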