NameError: name 'Subtract' is not defined - python

I'm working on a neural network that I will be using in a Dueling DQN algorithm, but I have encountered a problem with the Keras Subtract layer. When I use this layer I get this error:
AttributeError: module 'keras.layers' has no attribute 'Subtract'
The method where I use Subtract:
def DDDQN(self):
    inp = Input(shape=(self.state_size,))
    x = Dense(units=32, activation='relu', kernel_initializer='he_uniform')(inp)
    x = Dense(units=16, activation='relu', kernel_initializer='he_uniform')(x)

    # value stream (one scalar) and advantage stream (one value per action)
    value_ = Dense(units=1, activation='linear', kernel_initializer='he_uniform')(x)
    ac_activation = Dense(units=self.action_size, activation='linear', kernel_initializer='he_uniform')(x)

    # mean advantage, repeated to match the action dimension
    avg_ac_activation = Lambda(lambda x: K_back.mean(x, axis=1, keepdims=True))(ac_activation)
    concat_value = Concatenate(axis=-1)([value_, value_])
    concat_avg_ac = Concatenate(axis=-1)([avg_ac_activation, avg_ac_activation])
    for i in range(1, self.action_size - 1):
        concat_value = Concatenate(axis=-1)([concat_value, value_])
        concat_avg_ac = Concatenate(axis=-1)([concat_avg_ac, avg_ac_activation])

    # Q = V + (A - mean(A))
    ac_activation = Subtract()([ac_activation, concat_avg_ac])
    merged_layers = Add()([concat_value, ac_activation])

    final_model = Model(inputs=inp, outputs=merged_layers)
    final_model.compile(loss='mean_squared_error', optimizer=Adam(lr=self.learning_rate))
    return final_model
Other layers like Dense, Lambda, or Multiply work correctly. Any suggestions on how to solve this problem?

Basics first: do you have the appropriate Python interpreter version installed?
Try updating to Python 3.6, for example (if that is relevant to you).
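Before anything else, it is worth confirming which interpreter and which Keras the script is actually using; a minimal check (just a sketch, assuming Keras is importable at all) could look like this:

import sys
import keras

print(sys.version)        # Python interpreter version
print(keras.__version__)  # Keras version; Subtract only ships in newer Keras 2.x releases

from keras.layers import Subtract  # raises ImportError on versions that do not include it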

Related

TypeError: tensor() got an unexpected keyword argument 'names'

So I started reading Deep Learning with PyTorch and got to the point of setting names for the dimensions inside a tensor, to make it friendlier, but as soon as I use the names argument I get the error:
TypeError: tensor() got an unexpected keyword argument 'names'
Can anyone help me out?
The code is simple:
import torch
weights_named = torch.tensor([0.2126, 0.7152, 0.0722], names=['channels'])
weights_named
I just want to run this to see how to set names for the dimensions. Thanks in advance.
This is due to your PyTorch version. Upgrade PyTorch and it should work. In my case,
import torch
torch.__version__ #1.7
weights_named = torch.tensor([0.2126, 0.7152, 0.0722], names=['channels'])
# __main__:1: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change.
Named tensors and the functions supporting them are on the latest 1.9 release, but on 1.7 (my version) they are still experimental.
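Once you are on a version that accepts the names argument (it appeared as an experimental feature around PyTorch 1.3; the snippet below is my own sketch, not from the book), the names can be used directly in operations:

import torch

weights_named = torch.tensor([0.2126, 0.7152, 0.0722], names=['channels'])
print(weights_named.names)            # ('channels',)
print(weights_named.sum('channels'))  # reductions can refer to the dimension by name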

AttributeError: 'Tensor' object has no attribute 'numpy' in custom loss function (Tensorflow 2.1.0)

I want to train a model with a custom loss function. In order to do that, I need to convert the tensor to a NumPy array inside the method below:
def median_loss_estimation(y_true, y_predicted):
    a = y_predicted.numpy()
but I have this error:
AttributeError: 'Tensor' object has no attribute 'numpy'
Why?
How can I convert the tensor to a numpy array?
The answer is: put run_eagerly=True in model.compile!
You're doing the right thing; it's just that TensorFlow 2.1 is currently broken in that respect. This would normally happen if you run the code without eager mode enabled. However, TensorFlow 2 runs in eager mode by default... or at least it should. The issue is tracked here.
There are at least two solutions to this:
Install the latest nightly build.
Set model.run_eagerly = True (or pass run_eagerly=True to model.compile), as sketched below.
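A minimal sketch of the second option (the toy model and loss body here are placeholders, not the asker's actual code):

import tensorflow as tf

def median_loss_estimation(y_true, y_predicted):
    a = y_predicted.numpy()  # the conversion now succeeds because the loss runs eagerly
    return tf.reduce_mean(tf.abs(y_true - y_predicted))

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam',
              loss=median_loss_estimation,
              run_eagerly=True)  # force eager execution so .numpy() is available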

How to use layer normalization in tensorflow 1.12?

I am stuck with TensorFlow 1.12, and I need to use layer normalization. I can't find any examples of this, and as I am new to TensorFlow I am unable to figure out where I am going wrong.
tf.contrib.layers.layer_norm is the function that I want to include in my tf.keras.Sequential() like this -
self.module = K.Sequential([
    tf.contrib.layers.layer_norm(trainable=True),
    K.layers.Activation(self.activation),
    K.layers.Dense(units=self.output_size, activation=None, kernel_initializer=self.initializer)
])
I also tried using
self.ln = tf.contrib.layers.layer_norm(trainable=True)
### and in call()
self.ln(self.module)
In all cases, it throws the error at the line calling tf.contrib.layers.layer_norm(trainable=True):
TypeError: layer_norm() missing 1 required positional argument: 'inputs'
I understand that the inputs need to be given as an argument to layer_norm, but if I want it to be trainable, it can only be defined in __init__(). Where am I going wrong?
I mainly use PyTorch, so it is quite obvious that I have not grasped the ideology of TF. Any suggestions will be very helpful!
Sequential needs to be initialized with a list of Layer instances, such as tf.keras.layers.Activation or tf.keras.layers.Dense. tf.contrib.layers.layer_norm is a function, not a Layer instance.
There is a third-party implementation of layer normalization in Keras style, keras-layer-normalization, but I haven't tested it with TensorFlow.
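If adding a third-party package is not an option, layer normalization can also be written as a small custom Layer so it fits in Sequential. The sketch below is my own illustration for tf.keras around 1.12 (not the linked package); it normalizes over the last axis with a trainable scale and offset:

import tensorflow as tf
K = tf.keras

class LayerNorm(K.layers.Layer):
    def __init__(self, epsilon=1e-6, **kwargs):
        super(LayerNorm, self).__init__(**kwargs)
        self.epsilon = epsilon

    def build(self, input_shape):
        dim = input_shape[-1]
        if hasattr(dim, 'value'):  # TF 1.x TensorShape dimensions
            dim = dim.value
        # trainable scale (gamma) and offset (beta), one per feature
        self.gamma = self.add_weight(name='gamma', shape=(dim,), initializer='ones', trainable=True)
        self.beta = self.add_weight(name='beta', shape=(dim,), initializer='zeros', trainable=True)
        super(LayerNorm, self).build(input_shape)

    def call(self, x):
        mean = tf.reduce_mean(x, axis=-1, keepdims=True)
        variance = tf.reduce_mean(tf.square(x - mean), axis=-1, keepdims=True)
        return self.gamma * (x - mean) / tf.sqrt(variance + self.epsilon) + self.beta

# a Layer instance, so it can go straight into Sequential
module = K.Sequential([
    LayerNorm(),
    K.layers.Activation('relu'),
    K.layers.Dense(10),
])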

How do I mix Non-Tensorflow code with Tensorflow ops?

I am trying to use a third party library (neural renderer) in my code alongside Tensorflow. This library has a function which I would like to run between Tensorflow executions. However, whenever I feed a placeholder into the function, it just gives me this problem:
AttributeError: 'Tensor' object has no attribute 'transpose'
How do I combine a Tensorflow session and a non-Tensorflow function? For example, how do I do something like this:
input = not_tensorflow(foo, bar)
op1 = tf.assign(var, input)
sess.run(op1, feed_dict={foo:stuff})
I thought I would make foo a simple Placeholder, but apparently that doesn't work.
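One common pattern (a sketch assuming a TF 1.x session, with not_tensorflow standing in for the library call from the snippet above) is to fetch concrete NumPy values first, run the plain-Python function on them, and then feed the result back into the graph:

import numpy as np
import tensorflow as tf

var = tf.Variable(np.zeros(3, dtype=np.float32))

def not_tensorflow(a, b):
    # stand-in for the third-party call; it expects NumPy arrays, not tf.Tensors
    return a.transpose() + b

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    foo_value = np.array([1.0, 2.0, 3.0], dtype=np.float32)  # plain NumPy, not a placeholder
    result = not_tensorflow(foo_value, 1.0)                  # runs entirely outside the graph
    sess.run(tf.assign(var, result))                         # push the result back in

If the function has to run as part of the graph itself, tf.py_func can wrap a NumPy-level function as a TensorFlow op, which is another way to mix the two.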

Keras Concatenate Layers: Difference between different types of concatenate functions

I just recently started playing around with Keras and got into making custom layers. However, I am rather confused by the many different types of layers with slightly different names but with the same functionality.
For example, there are 3 different forms of the concatenate function from https://keras.io/layers/merge/ and https://www.tensorflow.org/api_docs/python/tf/keras/backend/concatenate
keras.layers.Concatenate(axis=-1)
keras.layers.concatenate(inputs, axis=-1)
tf.keras.backend.concatenate()
I know the second one is used for the functional API, but what is the difference between the three? The documentation seems a bit unclear on this.
Also, for the third one, I have seen code that does the following. Why must the ._keras_shape line come after the concatenation?
# Concatenate the summed atom and bond features
atoms_bonds_features = K.concatenate([atoms, summed_bond_features], axis=-1)
# Compute fingerprint
atoms_bonds_features._keras_shape = (None, max_atoms, num_atom_features + num_bond_features)
Lastly, under keras.layers there always seem to be two duplicates, for example Add() and add(), and so on.
First, the backend: tf.keras.backend.concatenate()
Backend functions are supposed to be used "inside" layers. You'd only use this in Lambda layers, custom layers, custom loss functions, custom metrics, etc.
It works directly on "tensors".
It's not the right choice unless you're going deep into customizing. (And it was a bad choice in your example code; see details at the end.)
If you dive deep into keras code, you will notice that the Concatenate layer uses this function internally:
import keras.backend as K

class Concatenate(_Merge):
    #blablabla
    def _merge_function(self, inputs):
        return K.concatenate(inputs, axis=self.axis)
    #blablabla
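To illustrate the "inside a layer" usage (my own sketch, not code from the question), the backend call would typically be wrapped in a Lambda:

from keras.layers import Input, Lambda
from keras.models import Model
import keras.backend as K

inp1 = Input(shape=(4,))
inp2 = Input(shape=(4,))

# the backend function works on tensors, so it lives inside a Lambda layer
out = Lambda(lambda t: K.concatenate(t, axis=-1))([inp1, inp2])

model = Model([inp1, inp2], out)  # output shape: (None, 8)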
Then, the Layer: keras.layers.Concatenate(axis=-1)
As with any other Keras layer, you instantiate it and then call it on tensors.
Pretty straightforward:
#in a functional API model:
inputTensor1 = Input(shape) #or some tensor coming out of any other layer
inputTensor2 = Input(shape2) #or some tensor coming out of any other layer
#first parentheses are creating an instance of the layer
#second parentheses are "calling" the layer on the input tensors
outputTensor = keras.layers.Concatenate(axis=someAxis)([inputTensor1, inputTensor2])
This is not suited for sequential models, unless the previous layer outputs a list (this is possible but not common).
Finally, the concatenate function from the layers module: keras.layers.concatenate(inputs, axis=-1)
This is not a layer. This is a function that will return the tensor produced by an internal Concatenate layer.
The code is simple:
def concatenate(inputs, axis=-1, **kwargs):
    #blablabla
    return Concatenate(axis=axis, **kwargs)(inputs)
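So the two are interchangeable in functional-style code; a quick sketch (the tensor names are just illustrative):

from keras.layers import Input, Concatenate, concatenate

inp1 = Input(shape=(4,))
inp2 = Input(shape=(4,))

# these two lines produce equivalent output tensors
out_a = Concatenate(axis=-1)([inp1, inp2])   # explicit layer instance
out_b = concatenate([inp1, inp2], axis=-1)   # convenience function wrapping the same layer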
Older functions
In Keras 1, people had functions that were meant to receive "layers" as input and return an output "layer". Their names were related to the merge word.
But since Keras 2 doesn't mention or document these, I'd probably avoid using them, and if old code is found, I'd probably update it to proper Keras 2 code.
Why the _keras_shape word?
This backend function was not supposed to be used in high-level code. The coder should have used a Concatenate layer.
atoms_bonds_features = Concatenate(axis=-1)([atoms, summed_bond_features])
#just this line is perfect
Keras layers add the _keras_shape property to all their output tensors, and Keras uses this property to infer the shapes of the entire model.
If you use any backend function "outside" a layer or loss/metric, your output tensor will lack this property and an error will appear saying that _keras_shape doesn't exist.
The coder is creating a bad workaround by adding the property manually, when it should have been added by a proper Keras layer. (This may work now, but if Keras is updated this code will break, while proper code will keep working.)
Keras historically supports two different interfaces for its layers: the newer functional one and the older one that requires model.add() calls, hence the two different functions.
As for TF, its concatenate() function does not do everything that Keras requires, hence the additional call to set the ._keras_shape attribute manually so as not to upset Keras, which expects that attribute to hold a particular value.
