weird problem with Pytorch's mse_loss function - python

Traceback (most recent call last):
File "c:/Users/levin/Desktop/programming/nn.py", line 208, in <module>
agent.train(BATCHSIZE)
File "c:/Users/levin/Desktop/programming/nn.py", line 147, in train
output = F.mse_loss(prediction, target)
File "C:\Users\levin\Anaconda3\lib\site-packages\torch\nn\functional.py", line 2203, in mse_loss
if not (target.size() == input.size()):
AttributeError: 'NoneType' object has no attribute 'size'
The above is the error I keep getting, and I really don't know how to fix it.
Here is some code that might be relevant:
def train(self, BATCHSIZE):
    trainsample = random.sample(self.memory, BATCHSIZE)
    for state, action, reward, new_state, gameovertemp in trainsample:
        if gameovertemp:
            target = torch.tensor(reward).grad_fn
        else:
            target = reward + self.gamma * torch.max(self.dqn.forward(new_state))
        self.dqn.zero_grad()
        prediction = torch.max(self.dqn.forward(state))
        #print(prediction, "prediction")
        #print(target, "target")
        output = F.mse_loss(prediction, target)
        output.backward()
        self.optimizer.step()

As stated in a comment, the error is due to either target or input being None; it is not related to the size() attribute.
The problem is probably at this line:
target = torch.tensor(reward).grad_fn
Here you convert reward to a new Tensor. However, a Tensor created by the user always has a grad_fn equal to None (as explained in the PyTorch Autograd documentation).
To have a grad_fn a Tensor must be the result of some computation, not a static value.
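For illustration, a quick check in a Python shell (this snippet is generic PyTorch, not part of the question's code):
import torch

a = torch.tensor(3.0, requires_grad=True)  # tensor created directly by the user (a leaf)
print(a.grad_fn)                           # None

b = a * 2                                  # result of a computation
print(b.grad_fn)                           # <MulBackward0 object at 0x...>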
The thing is that mse_loss does not expect target to be differentiable; as the name suggests, it is just the value to be compared.
Try removing the .grad_fn from this line; the raw Tensor should be sufficient.
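For example, the gameover branch could simply keep the reward as a plain tensor (a minimal sketch; the explicit dtype is an assumption to keep both branches as float tensors):
if gameovertemp:
    # no .grad_fn here, just the raw tensor
    target = torch.tensor(reward, dtype=torch.float32)
else:
    target = reward + self.gamma * torch.max(self.dqn.forward(new_state))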

Related

What is the ideal way, in tensorflow, of feeding the output of a model back into itself, for predicting data that changes over time?

I am working on a model that trains on simulation data, which should ideally be able to predict N timesteps forward from a given state in a simulation. I have attempted to model this by feeding the output of the model back into itself N times, where N is a hyperparameter of the model. I have done this in the call function of the tensorflow.keras.Model() class.
The relevant code:
def call(self, inputs):
    x = inputs[0]
    outputs = tf.TensorArray(
        dtype=tf.float32, size=0, dynamic_size=True, infer_shape=False
    )
    window = inputs[1]
    for i in tf.range(window):
        x = self.model(x)
        outputs = outputs.write(i, x)
    outputs = tf.transpose(outputs.stack(), [1, 2, 3, 0, 4])
    return outputs
This works, and the model trains, but I want to save the model using the tensorflow.keras.Model.save() function. Trying this leads to the following error:
Traceback (most recent call last):
File "/zhome/22/4/118839/Masters_Thesis/Model_files/Unet.py", line 562, in <module>
model_.save(savepath + "/saved_model/Model")
File "/zhome/22/4/118839/Masters_Thesis/Menv/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/appl/python/3.9.11/lib/python3.9/contextlib.py", line 126, in __exit__
next(self.gen)
File "/zhome/22/4/118839/Masters_Thesis/Model_files/Unet.py", line 485, in call
for i in tf.range(4):
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: Iterating over a symbolic `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
Is there a better way of doing what I'm trying to do? Any other threads I have found recommend using the tf.map_fn() function, but this does not work for me due to the sequential nature of the model. Any help is appreciated!

TypeError although same shape: if not (target.size() == input.size()): 'int' object is not callable

This is the error message I get. In the first line, I output the shapes of predicted and target. From my understanding, the error arises from those shapes not being the same but here they clearly are.
torch.Size([6890, 3]) torch.Size([6890, 3])
Traceback (most recent call last):
File "train.py", line 251, in <module>
main()
File "train.py", line 230, in main
train(net, training_dataset, targets, device, criterion, optimizer, epoch, args.epochs)
File "train.py", line 101, in train
loss = criterion(predicted, target.detach().cpu().numpy())
File "/home/hb119056/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/home/hb119056/.local/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 443, in forward
return F.mse_loss(input, target, reduction=self.reduction)
File "/home/hb119056/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 2244, in mse_loss
if not (target.size() == input.size()):
TypeError: 'int' object is not callable
I hope all the relevant context information is provided and if not, please let me know. Thanks for any suggestions!
EDIT: This is the part of the code where this error occurs:
target = torch.from_numpy(np.load(file_dir + '/points/points{:03}.npy'.format(i))).to(device)
rv = torch.zeros(12 * outputs.shape[0])
for j in [x for x in range(10) if x != i]:
    source = torch.from_numpy(np.load(file_dir + '/points/points{:03}.npy'.format(j))).to(device)
    rv = factor.ransac(source, target, prob, n_iter, tol, device)  # some self-written RANSAC-like method
    predicted = factor.predict(source, rv, outputs)
    print(target.shape, predicted.shape)
    loss = criterion(predicted, target.detach().cpu().numpy())  ## error occurs here
criterion is nn.MSELoss().
A little bit late, but maybe it will help someone else. I just solved the same problem myself.
As Alpha said in his answer, we cannot call .size() on a numpy array.
But we can call .size() on a tensor.
Therefore, we need to make our target a tensor. You can do it like this:
target = torch.from_numpy(target)
I'm using a GPU, so I also needed to send my target to the GPU. You can do it like this:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
target = target.to(device)
The loss function should then work as expected.
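Put together, a minimal sketch of the fixed call (assuming target starts out as a numpy array and predicted is a tensor on the same device):
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
target = torch.from_numpy(target).to(device)   # numpy array -> tensor, moved to the GPU if available
loss = criterion(predicted, target)            # pass tensors to nn.MSELoss, not numpy arrays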
It probably means that you are trying to call a method when a property with the same name is available. If this is indeed the problem, the solution is easy. Simply change the method call into a property access.
If you are comparing in the following way:
compare = (X.method() == Y.method())
Change it to:
compare = (X.method == Y.method)
If this does not answer your question, kindly share the code which you have used to compare the shapes.
That's because your target is a numpy object, created here:
File "train.py", line 101, in train:
target.detach().cpu().numpy()
In your code, this converts the target to a numpy array; keep the target as a tensor instead.
TL;DR: try changing
loss = criterion(predicted, target.detach().cpu().numpy()) ## error occurs here
to
loss = criterion(predicted, target)
for example:
In [6]: b = np.ones(3)
In [7]: b.size
Out[7]: 3
In [8]: b.size()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-365705555409> in <module>
----> 1 b.size()
TypeError: 'int' object is not callable
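For contrast, a torch tensor's size() is a method, so calling it works there (an illustrative snippet, not from the original answer):
import numpy as np
import torch

a = np.ones(3)
print(a.size)       # 3 -- an int attribute; a.size() raises "'int' object is not callable"

t = torch.ones(3)
print(t.size())     # torch.Size([3]) -- a method on tensors
print(t.shape)      # torch.Size([3]) -- the equivalent attribute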

TensorFlow range() integer argument expected, got Tensor

I am trying to define a log loss function for a multi class classification problem as:
self.loss = tf.losses.log_loss(
    labels=self.sampled_actions,
    predictions=[self.probability[i][self.sampled_actions[i]] for i in range(tf.shape(self.sampled_actions)[0])],
    weights=self.discounted_rewards)
Here, self.sampled_actions is a 1D tensor of 0/1/2 (e.g. [0,1,2,1,0,2]) indicating which action is the ground truth. self.probability is defined as:
h = tf.layers.dense(
    self.observations,
    units=hidden_layer_size,
    activation=tf.nn.relu,
    kernel_initializer=tf.contrib.layers.xavier_initializer())
self.probability = tf.layers.dense(
    h,
    units=3,
    activation=tf.sigmoid,
    kernel_initializer=tf.contrib.layers.xavier_initializer())
This gives the probabilities of all three actions (0, 1, 2) for any given observation in the input.
However, when I run this program, I get the error:
Traceback (most recent call last):
File "spaceinvaders.py", line 68, in <module>
hidden_layer_size, learning_rate, checkpoints_dir='checkpoints')
File "/home/elfarouk/Desktop/opengym/policy_network_space_invaders.py", line 49, in __init__
predictions= [self.probability[i][self.sampled_actions[i]] for i in range(tf.shape(self.sampled_actions)[0])],
TypeError: range() integer end argument expected, got Tensor.
Is there a way to specify that my prediction in the loss function should be dependent on the sampled_actions?

TensorFlow throws error only when using MultiRNNCell

I'm building an encoder-decoder model in TensorFlow 1.0.1 using the legacy sequence-to-sequence framework. Everything works as it should when I have one layer of LSTMs in the encoder and decoder. However, when I try with >1 layers of LSTMs wrapped in a MultiRNNCell, I get an error when calling tf.contrib.legacy_seq2seq.rnn_decoder.
The full error is at the end of this post, but in brief, it's caused by a line
(c_prev, m_prev) = state
in TensorFlow that throws TypeError: 'Tensor' object is not iterable. I'm confused by this, since the initial state I'm passing to rnn_decoder is indeed a tuple, as it should be. As far as I can tell, the only difference between using 1 or >1 layers is that the latter involves MultiRNNCell. Are there any API quirks I should know about when using it?
This is my code (based on the example in this GitHub repo). Apologies for its length; this is as minimal as I could make it while still being complete and verifiable.
import tensorflow as tf
import tensorflow.contrib.legacy_seq2seq as seq2seq
import tensorflow.contrib.rnn as rnn

seq_len = 50
input_dim = 300
output_dim = 12
num_layers = 2
hidden_units = 100

sess = tf.Session()

encoder_inputs = []
decoder_inputs = []

for i in range(seq_len):
    encoder_inputs.append(tf.placeholder(tf.float32, shape=(None, input_dim),
                                         name="encoder_{0}".format(i)))

for i in range(seq_len + 1):
    decoder_inputs.append(tf.placeholder(tf.float32, shape=(None, output_dim),
                                         name="decoder_{0}".format(i)))

if num_layers > 1:
    # Encoder cells (bidirectional)
    # Forward
    enc_cells_fw = [rnn.LSTMCell(hidden_units)
                    for _ in range(num_layers)]
    enc_cell_fw = rnn.MultiRNNCell(enc_cells_fw)
    # Backward
    enc_cells_bw = [rnn.LSTMCell(hidden_units)
                    for _ in range(num_layers)]
    enc_cell_bw = rnn.MultiRNNCell(enc_cells_bw)
    # Decoder cell
    dec_cells = [rnn.LSTMCell(2*hidden_units)
                 for _ in range(num_layers)]
    dec_cell = rnn.MultiRNNCell(dec_cells)
else:
    # Encoder
    enc_cell_fw = rnn.LSTMCell(hidden_units)
    enc_cell_bw = rnn.LSTMCell(hidden_units)
    # Decoder
    dec_cell = rnn.LSTMCell(2*hidden_units)

# Make sure input and output are the correct dimensions
enc_cell_fw = rnn.InputProjectionWrapper(enc_cell_fw, input_dim)
enc_cell_bw = rnn.InputProjectionWrapper(enc_cell_bw, input_dim)
dec_cell = rnn.OutputProjectionWrapper(dec_cell, output_dim)

_, final_fw_state, final_bw_state = \
    rnn.static_bidirectional_rnn(enc_cell_fw,
                                 enc_cell_bw,
                                 encoder_inputs,
                                 dtype=tf.float32)

# Concatenate forward and backward cell states
# (The state is a tuple of previous output and cell state)
if num_layers == 1:
    initial_dec_state = tuple([tf.concat([final_fw_state[i],
                                          final_bw_state[i]], 1)
                               for i in range(2)])
else:
    initial_dec_state = tuple([tf.concat([final_fw_state[-1][i],
                                          final_bw_state[-1][i]], 1)
                               for i in range(2)])

decoder = seq2seq.rnn_decoder(decoder_inputs, initial_dec_state, dec_cell)

tf.global_variables_initializer().run(session=sess)
And this is the error:
Traceback (most recent call last):
File "example.py", line 67, in <module>
decoder = seq2seq.rnn_decoder(decoder_inputs, initial_dec_state, dec_cell)
File "/home/tao/.virtualenvs/example/lib/python2.7/site-packages/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py", line 150, in rnn_decoder
output, state = cell(inp, state)
File "/home/tao/.virtualenvs/example/lib/python2.7/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 426, in __call__
output, res_state = self._cell(inputs, state)
File "/home/tao/.virtualenvs/example/lib/python2.7/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 655, in __call__
cur_inp, new_state = cell(cur_inp, cur_state)
File "/home/tao/.virtualenvs/example/lib/python2.7/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 321, in __call__
(c_prev, m_prev) = state
File "/home/tao/.virtualenvs/example/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 502, in __iter__
raise TypeError("'Tensor' object is not iterable.")
TypeError: 'Tensor' object is not iterable.
Thank you!
The problem is in the format of the initial state (initial_dec_state) passed to seq2seq.rnn_decoder.
When you use rnn.MultiRNNCell, you're building a multilayer recurrent network, so you need to provide an initial state for each of these layers.
Hence, you should provide a list of tuples as the initial state, where each element of the list is the previous state coming from the corresponding layer of the recurrent network.
So your initial_dec_state, initialized like this:
initial_dec_state = tuple([tf.concat([final_fw_state[-1][i],
                                      final_bw_state[-1][i]], 1)
                           for i in range(2)])
should instead look like this:
initial_dec_state = [
    tuple([tf.concat([final_fw_state[j][i], final_bw_state[j][i]], 1)
           for i in range(2)]) for j in range(len(final_fw_state))
]
which creates a list of tuples in the format:
[(state_c1, state_m1), (state_c2, state_m2) ...]
In more detail, the 'Tensor' object is not iterable. error happens because seq2seq.rnn_decoder internally calls your rnn.MultiRNNCell (dec_cell), passing the initial state (initial_dec_state) to it.
rnn.MultiRNNCell.__call__ iterates through the list of initial states, and for each one of them extracts the tuple (c_prev, m_prev) (in the statement (c_prev, m_prev) = state).
So if you pass just a tuple, rnn.MultiRNNCell.__call__ will iterate over it, and as soon as it reaches (c_prev, m_prev) = state it will find a tensor (where it expects a tuple) as state and will throw the 'Tensor' object is not iterable. error.
A good way to know which format of initial state seq2seq.rnn_decoder expects is to call dec_cell.zero_state(batch_size, dtype=tf.float32). This method returns zero-filled state tensor(s) in the exact format needed to initialize the recurrent module that you're using.
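For example, something along these lines prints the expected nested structure (the batch size here is just a placeholder):
# Inspect the state structure that dec_cell expects (illustrative batch size)
zero_state = dec_cell.zero_state(batch_size=32, dtype=tf.float32)
print(zero_state)  # typically a tuple with one LSTMStateTuple(c=..., h=...) per layer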

How to format training/testing sets for Deep Belief neural network

I'm trying to implement the code from this page, but I can't work out how to format the data (training set / testing set) correctly. My code:
numpy_rng = numpy.random.RandomState(123)
dbn = DBN(numpy_rng=numpy_rng, n_ins=2, hidden_layers_sizes=[50, 50, 50], n_outs=1)
train_set_x = [
    ([1,2],[2,]),  # first element in the tuple is the input, the second is the output
    ([4,5],[5,])
]
testing_set_x = [
    ([6,1],[3,]),  # same format as the training set
]
# when I looked at the load_data function found elsewhere in the tutorial (I'll show the code they used at the bottom for ease) I found it rather confusing, but this was my first attempt at recreating what they did
train_set_xPrime = [theano.shared(numpy.asarray(train_set_x[0][0], dtype=theano.config.floatX), borrow=True),
                    theano.shared(numpy.asarray(train_set_x[0][1], dtype=theano.config.floatX), borrow=True)]
pretraining_fns = dbn.pretraining_functions(train_set_x=train_set_xPrime, batch_size=10, k=1)
which produced this error:
Traceback (most recent call last):
File "/Users/spudzee1111/Desktop/Code/NNChatbot/DeepBeliefScratch.command", line 837, in <module>
pretraining_fns = dbn.pretraining_functions(train_set_x=train_set_xPrime,batch_size=10,k=1)
File "/Users/spudzee1111/Desktop/Code/NNChatbot/DeepBeliefScratch.command", line 532, in pretraining_functions
n_batches = train_set_x.get_value(borrow=True).shape[0] / batch_size
AttributeError: 'list' object has no attribute 'get_value'
I can't work out how the input is supposed to be formatted. I tried using theano.shared on the list, so that it would be:
train_set_xPrime = theano.shared(
    [theano.shared(numpy.asarray(train_set_x[0][0], dtype=theano.config.floatX), borrow=True),
     theano.shared(numpy.asarray(train_set_x[0][1], dtype=theano.config.floatX), borrow=True)],
    borrow=True)
but then it said:
Traceback (most recent call last):
File "/Users/spudzee1111/Desktop/Code/NNChatbot/DeepBeliefScratch.command", line 834, in <module>
train_set_xPrime = theano.shared([theano.shared(numpy.asarray(train_set_x[0][0],dtype=theano.config.floatX),borrow=True),theano.shared(numpy.asarray(train_set_x[0][1],dtype=theano.config.floatX),borrow=True)],borrow=True) #,borrow=True),numpy.asarray(train_set_x[0][1],dtype=theano.config.floatX),borrow=True))
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/compile/sharedvalue.py", line 228, in shared
(value, kwargs))
TypeError: No suitable SharedVariable constructor could be found. Are you sure all kwargs are supported? We do not support the parameter dtype or type. value="[<TensorType(float64, vector)>, <TensorType(float64, vector)>]". parameters="{'borrow': True}"
I tried other combinations but none of them worked.
This should work:
numpy_rng = numpy.random.RandomState(123)
dbn = DBN(numpy_rng=numpy_rng, n_ins=2, hidden_layers_sizes=[50, 50, 50], n_outs=1)
train_set = [
    ([1,2],[2,]),
    ([4,5],[5,])
]
train_set_x = [train_set[i][0] for i in range(len(train_set))]
nparray = numpy.asarray(train_set_x, dtype=theano.config.floatX)
train_set_x = theano.shared(nparray, borrow=True)
pretraining_fns = dbn.pretraining_functions(train_set_x=train_set_x, batch_size=10, k=1)
The pretraining_functions method expects as input a shared variable of shape (number of samples, input dimension). You can check this by looking at the shape of the MNIST dataset, the standard input for this example.
It doesn't take a list as an input because this method builds only the pre-training functions. DBNs are pre-trained with an unsupervised learning algorithm, so it doesn't make sense to use the labels.
Furthermore, the input list used to build your numpy array doesn't make sense: train_set_x[0][0] yields only the first training example, but you want train_set_xPrime to hold all training examples. Even if you used train_set_x[0], you would still have only the first training example, together with its label.
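If in doubt, a quick sanity check on the shared variable (illustrative; the shape refers to the toy data above):
# pretraining_functions reads this as (n_samples, n_ins), so the toy set above gives (2, 2)
print(train_set_x.get_value(borrow=True).shape)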
