When using tf_agents.metrics.tf_metrics.ChosenActionHistogram with TF-Agents' dynamic step driver and my own environment, I encounter the following error:
ValueError: Shapes must be equal rank, but are 1 and 0 for '{{node ResourceScatterUpdate}} = ResourceScatterUpdate[Tindices=DT_INT32, dtype=DT_INT32](ResourceScatterUpdate/resource, FloorMod, value)' with input shapes: [], [], [1]
I've attached observers to the step driver like so:
self.average_episode_length_metric = tf_metrics.AverageEpisodeLengthMetric()
self.average_return_metric = tf_metrics.AverageReturnMetric()
self.selected_action_histogram_metric = tf_metrics.ChosenActionHistogram()
self.observers = [
    self.average_episode_length_metric,
    self.average_return_metric,
    self.selected_action_histogram_metric,
]
self.eval_step_driver = dynamic_step_driver.DynamicStepDriver(
    self.eval_env,
    self.agent.policy,
    num_steps=self.num_eval_steps,
    observers=self.observers,
)
and then run the step driver like so:
self.eval_step_driver.run()
Some more of the error trace is as follows:
File "./bot/DQN.py", line 109, in record_policy_metrics
self.eval_step_driver.run()
tf_agents-0.4.0-py3.8.egg/tf_agents/metrics/tf_metrics.py:50 extend *
self.add(v)
I understand the premise of the issue (the tensor shapes don't match), but I can't figure out why that might be happening. Removing ChosenActionHistogram from the observers resolves the error, and the other metrics work correctly. What could be going on here? Could the trajectory tensors be missing some value?
For anyone who comes across this issue: I solved it in my case. I had mistakenly defined the action spec as (1,), a one-dimensional vector, instead of (), a scalar value. The incorrect spec seemed to work with every other metric, but not with tf_metrics.ChosenActionHistogram(). Ensuring my action spec used () instead of (1,) resolved the issue.
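For reference, a minimal sketch of a scalar action spec (the bounds below, four discrete actions, are an assumption for illustration):
import numpy as np
from tf_agents.specs import array_spec

# Scalar action spec: shape=() rather than (1,). The minimum/maximum
# values here are assumed placeholders for a four-action environment.
action_spec = array_spec.BoundedArraySpec(
    shape=(), dtype=np.int32, minimum=0, maximum=3, name='action')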
I'm new to Stack Overflow; I hope this post respects all the requirements.
As in the title, I was wondering how to change the dtype of a tensor from torch.int32 to torch.long, as I get this error in my code:
ValueError: Argument edge_index needs to be of type torch.long but found type torch.int32.
Thank you in advance.
There are two easy ways to convert tensor data to torch.long, and they do the same thing. Check the snippet below.
# Example tensor
a = torch.tensor([1, 2, 3], dtype=torch.int32)

# One way
a = a.to(torch.long)

# Second way
a = a.type(torch.long)

# Test it out (should print the long version of the dtype)
print(a.dtype)
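For completeness, tensors also have a .long() shorthand method that performs the same cast:
# Third way: shorthand method, equivalent to a.to(torch.long)
a = a.long()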
Sarthak Jain
I'm trying to execute these functions:
def evaluate(sentence):
    sentence = preprocess_sentence(sentence)
    sentence = tf.expand_dims(
        START_TOKEN + tokenizer.encode(sentence) + END_TOKEN, axis=0)
    output = tf.expand_dims(START_TOKEN, 0)
    for i in range(MAX_LENGTH):
        predictions = model(inputs=[sentence, output], training=False)
        # select the last word from the seq_len dimension
        predictions = predictions[:, -1:, :]
        predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
        # return the result if the predicted_id is equal to the end token
        if tf.equal(predicted_id, END_TOKEN[0]):
            break
        # check()
        # tf.cond(tf.equal(predicted_id, END_TOKEN[0]), true_fn=break, false_fn=lambda: tf.no_op())
        # concatenate the predicted_id to the output, which is given to the
        # decoder as its input
        output = tf.concat([output, predicted_id], axis=-1)
    return tf.squeeze(output, axis=0)
def predict(sentence):
    prediction = evaluate(sentence)
    predicted_sentence = tokenizer.decode(
        [i for i in prediction if i < tokenizer.vocab_size])
    print('Input: {}'.format(sentence))
    print('Output: {}'.format(predicted_sentence))
    return predicted_sentence
However, I'm getting the following error:
OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
I understand that I have to rewrite the if condition in the form of tf.cond(); however, I don't know how to write break in TensorFlow. I'm also not sure which if condition is causing the problem, as exactly the same function works properly in this notebook:
https://colab.research.google.com/github/tensorflow/examples/blob/master/community/en/transformer_chatbot.ipynb#scrollTo=_NURhwYz5AXa
Any help?
The code in the notebook works because it uses TF 2.0, which has eager execution enabled by default. You can turn it on in older versions with tf.enable_eager_execution().
Alternatively, you can use break in graph mode without writing tf.cond if you use tf.function or tf.autograph, but they have some restrictions on the code you can run.
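For example, here is a minimal sketch (assuming TF 2.x) of AutoGraph converting a Python if/break on a tensor condition into graph ops:
import tensorflow as tf

@tf.function
def first_match(values, target):
    # AutoGraph rewrites the Python `if`/`break` into graph control flow
    # because the loop iterates over a tensor range.
    result = tf.constant(-1)
    for i in tf.range(tf.shape(values)[0]):
        if tf.equal(values[i], target):
            result = i
            break
    return result

print(first_match(tf.constant([3, 5, 7]), 5))  # tf.Tensor(1, shape=(), dtype=int32)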
There is nothing wrong with the break statement. The problem is elsewhere.
if tf.equal(predicted_id, END_TOKEN[0]):
    break
will give an error about using a Python bool in tensor ops. Since you have already used a tf.equal condition, this could be confusing. The solution is simple: the error is being thrown for the
if (boolean): Python syntax.
You have to take care of this (bool) Python syntax and convert it to tensor style, based on what you are planning to achieve. Remember, the condition returns a tensor of boolean values. Read this tensor and then proceed to do what you want. So, for example, the snippet below would run unconditionally, irrespective of the value of the condition:
if tf.equal(predicted_id, END_TOKEN[0]) is not None:
    break
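With eager execution enabled (the TF 2.x default), one way to actually read the condition tensor is to pull its value out with .numpy() before the Python if (a sketch, assuming predicted_id holds a single id):
# Extract the boolean value so the Python `if` sees a plain bool
if tf.equal(predicted_id, END_TOKEN[0]).numpy():
    break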
I'm running into problems trying to use a PyTorch model exported as an ONNX model with Caffe2. Here is my export code:
the_model = torchvision.models.densenet121(pretrained=True)
garbage, model_inputs = preprocessing("test.jpg")
torch_out = torch.onnx._export(the_model,
                               model_inputs,
                               "model_weights/chexnet-py.onnx",
                               export_params=True)
Now here is my testing code:
model = onnx.load("model_weights/chexnet-py.onnx")
garbage, model_inputs = preprocessing("text.jpg")
prepared_backend = onnx_caffe2.backend.prepare(model)
W = {model.graph.input[0].name: model_inputs.numpy()}
c2_out = prepared_backend.run(W)[0]
This returns the following error:
ValueError: Don't know how to translate op Unsqueeze when running converted PyTorch Model
Additional information: PyTorch version 1.0.0a0+6f664d3; Caffe2 is the latest version (I attempted building from source, pip, and conda; all gave the same result).
Try looking into this: you may have to edit the package called onnx-caffe2 to add a mapping between Unsqueeze and ExpandDims. See https://github.com/onnx/onnx/issues/1481 and look for this answer:
I found that the Caffe2 equivalent of ONNX's Unsqueeze is ExpandDims, and there is a special mapping in onnx_caffe2/backend.py around line 121 for operators that differ only in their names and attribute names, but somehow Unsqueeze isn't present there (I have no idea why). So I manually added the mapping rules for it in the _renamed_operators and _per_op_renamed_attrs dicts, and the code looks like:
_renamed_operators = {
    'Caffe2ConvTranspose': 'ConvTranspose',
    'GlobalMaxPool': 'MaxPool',
    'GlobalAveragePool': 'AveragePool',
    'Pad': 'PadImage',
    'Neg': 'Negative',
    'BatchNormalization': 'SpatialBN',
    'InstanceNormalization': 'InstanceNorm',
    'MatMul': 'BatchMatMul',
    'Upsample': 'ResizeNearest',
    'Equal': 'EQ',
    'Unsqueeze': 'ExpandDims',  # add this line
}

_global_renamed_attrs = {'kernel_shape': 'kernels'}

_per_op_renamed_attrs = {
    'Squeeze': {'axes': 'dims'},
    'Transpose': {'perm': 'axes'},
    'Upsample': {'mode': ''},
    'Unsqueeze': {'axes': 'dims'},  # add this line
}
And everything works as expected.
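As a sanity check, you can compare the Caffe2 output against the PyTorch output (a sketch, assuming torch_out is the output captured by torch.onnx._export in the export code above):
import numpy as np

# Compare the Caffe2 backend's output with the PyTorch output captured
# at export time; mismatches beyond 3 decimal places raise an error.
np.testing.assert_almost_equal(torch_out.data.cpu().numpy(), c2_out, decimal=3)
print("Caffe2 and PyTorch outputs match")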
I am not the author of that answer, but thanks to them.
I get all shapes assigned to the baseMaterial, select the shapes, and then assign the occlusionShader:
for materialClass in materialClassList:
    select(materialClass.baseMaterial)
    hyperShade(objects="")
    hyperShade(a=materialClass.occlusionShader)
This works just fine, but if I use it as a pre-render script, I get:
Error: line 0: hyperShade command not supported in batch mode
What can I change the last two lines of my function to in order to make this work?
Here is an example with cmds.sets() to assign a shader:
all = cmds.ls(type='mesh')
shadingEngine = 'initialShadingGroup'
cmds.sets(all, e=True, forceElement=shadingEngine)
As you can guess, to query the meshes with the material:
lamb1_mshs = cmds.sets(shadingEngine, q=True)
I got it to work with:
for materialClass in materialClassList:
    sets(materialClass.occlusionShadingGroup, e=True, forceElement=materialClass.meshList)
I collect the meshes when I create the materialClass now, which makes much more sense than selecting them for each render layer.
I need to create a fixed-length zero vector in Theano (with length equal to the size of another tensor vector that is passed in).
def some_fun(self, y):
    x_h = T.fvector('x_h')
    ret = T.alloc(0, x_h)
    vec_h = theano.function(inputs=[x_h], outputs=ret)
    vec = vec_h(y.shape[0])
    vec[T.arange(y.shape[0]), y] = 1
When I run this, I get the error "Shape arguments to Alloc must be integers, but argument 0 is not for apply node: x_h".
It may be a very big mistake, as I am new to Theano.
Thanks
Have you tried theano.tensor.zeros_like? It seems like that should be a shortcut to what you're trying to do.
Then, when you get
"TypeError: 'TensorVariable' object does not support item assignment"
you can replace the line vec[T.arange(y.shape[0]), y] = 1 with theano.tensor.set_subtensor instead.
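Putting the two suggestions together, here is a minimal sketch of building a one-hot matrix with set_subtensor (num_classes is an assumed placeholder, and T.zeros is used instead of zeros_like since the target shape is derived from y):
import numpy as np
import theano
import theano.tensor as T

num_classes = 5  # assumed; replace with your actual number of classes

y = T.ivector('y')  # integer class labels
zeros = T.zeros((y.shape[0], num_classes))
# set_subtensor returns a new tensor rather than assigning in place
one_hot = T.set_subtensor(zeros[T.arange(y.shape[0]), y], 1)

f = theano.function(inputs=[y], outputs=one_hot)
print(f(np.array([0, 2, 4], dtype=np.int32)))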