I'm new to Stack Overflow; I hope this post respects all the requirements.
As in the title, I was wondering how to change the dtype of a tensor from torch.int32 to torch.long, as I get this error in my code:
ValueError: Argument edge_index needs to be of type torch.long but found type torch.int32.
Thank you in advance.
There are two easy ways to convert tensor data to torch.long, and they do the same thing. Check the snippet below.
import torch

# Example tensor
a = torch.tensor([1, 2, 3], dtype=torch.int32)
# One way
a = a.to(torch.long)
# Second way
a = a.type(torch.long)
# Test it out (should print torch.int64, which is what torch.long refers to)
print(a.dtype)
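For completeness, tensors also have a .long() method that performs the same conversion:

# Equivalent shorthand for the two ways above
a = a.long()
print(a.dtype)  # torch.int64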
I found a somewhat similar question here, "What is the difference between model.to(device) and model=model.to(device)?", but I would like to check again whether the same applies to my example:
Using .to(self.device)
mask = torch.tril(torch.ones(len_q, len_k)).type(torch.BoolTensor).to(self.device)
and
Using device=self.device
mask = torch.tril(torch.ones((trg_len, trg_len), device = self.device)).bool()
Are they both accomplishing the same thing - ensuring that mask goes to the GPU?
The torch.Tensor.to function will make a copy of your tensor on the destination device, while setting the device option on initialization places the tensor there from the start, so no copy is involved.
So in your case you would rather do:
>>> mask = torch.tril(torch.ones(len_q, len_k, device=self.device))
But to answer your question: both have the effect of placing mask on self.device. The only difference is that with the former you will have a copy of your data on both devices.
The same can be said for torch.Tensor.bool vs. initializing with dtype:
>>> torch.randint(0, 1, (10,)).bool()
Will make a copy, while the following won't:
>>> torch.randint(0, 1, (10,), dtype=torch.bool)
However, torch.tril itself doesn't provide a dtype option, so that part is not relevant here; you would pass dtype to torch.ones instead.
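Putting both together, a minimal sketch (assuming a reasonably recent PyTorch version, where torch.tril accepts bool tensors; the sizes here are made up for illustration):

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
len_q = len_k = 4  # hypothetical sizes

# Allocated directly on the target device with the target dtype: no copy
mask = torch.tril(torch.ones(len_q, len_k, device=device, dtype=torch.bool))
print(mask.device, mask.dtype)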
When using tf_agents.metrics.tf_metrics.ChosenActionHistogram with TF-Agents' dynamic step driver and my own environment, I encounter the following error:
ValueError: Shapes must be equal rank, but are 1 and 0 for '{{node ResourceScatterUpdate}} = ResourceScatterUpdate[Tindices=DT_INT32, dtype=DT_INT32](ResourceScatterUpdate/resource, FloorMod, value)' with input shapes: [], [], [1]
I've attached observers to the step driver like so:
self.average_episode_length_metric = tf_metrics.AverageEpisodeLengthMetric()
self.average_return_metric = tf_metrics.AverageReturnMetric()
self.selected_action_histogram_metric = tf_metrics.ChosenActionHistogram()
self.observers = [
    self.average_episode_length_metric,
    self.average_return_metric,
    self.selected_action_histogram_metric,
]
self.eval_step_driver = dynamic_step_driver.DynamicStepDriver(
    self.eval_env,
    self.agent.policy,
    num_steps=self.num_eval_steps,
    observers=self.observers,
)
and then run the step driver like so:
self.eval_step_driver.run()
Some more of the error trace is as follows:
File "./bot/DQN.py", line 109, in record_policy_metrics
self.eval_step_driver.run()
tf_agents-0.4.0-py3.8.egg/tf_agents/metrics/tf_metrics.py:50 extend *
self.add(v)
I understand the premise of the issue, that tensor shapes are not matching, but I can't figure out why that might be happening. Removing ChosenActionHistogram from the observers resolves the error, and the other metrics work correctly. What could be going on here? Could the trajectory tensors be missing some value?
For anyone who comes across this issue: I solved it in my case. I had mistakenly defined the action spec as (1,), a one-dimensional vector, instead of (), a scalar value. This seemed to work for every other metric except tf_metrics.ChosenActionHistogram().
Ensuring my action spec used () instead of (1,) resolved the issue.
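For reference, a scalar action spec looks roughly like the sketch below, which assumes a hypothetical environment with 4 discrete actions:

import numpy as np
from tf_agents.specs import array_spec

# shape=() makes each action a scalar, which ChosenActionHistogram expects;
# the bounds here are made up for illustration
action_spec = array_spec.BoundedArraySpec(
    shape=(), dtype=np.int32, minimum=0, maximum=3, name='action')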
I have created a very basic decision tree using the sklearn library. The tree is trained on 4 features:
feat1 INT
feat2 INT
feat3 FLOAT
feat4 FLOAT
And the label/target feature is a boolean value (0 or 1).
I converted the tree into ONNX format and now I want to use the onnxruntime Python library to make a prediction. I found example code on the internet to do this. The problem is that I don't understand exactly what happens in all parts of this code, its functions and parameters, which leads to me getting an error. I searched for documentation but couldn't find any.
In the code below I convert the tree model to ONNX format. This is successful, but there are parts of the code I don't understand. In the initial_type variable, what do I have to enter based on the 4 feature columns and the label/target feature I mentioned earlier? For now I have entered FloatTensorType([None, 4]) because I have 4 feature columns, but I have no idea what the None does.
## Convert to ONNX format
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

initial_type = [('float_input', FloatTensorType([None, 4]))]
onx = convert_sklearn(treeModel, initial_types=initial_type)
with open("path", "wb") as f:
    f.write(onx.SerializeToString())
In the code below I want to make a prediction using the onnxruntime library, but I get this error:
RuntimeError: Either type_proto was null or it was not of sequence type
This is because I don't understand the last line of the code below. I entered {input_name: [4, 8, 77.8, 143.45]} because these are four values for the feature columns. What am I doing wrong here?
import onnxruntime as rt

sess = rt.InferenceSession("pathToONNXModel")
input_name = sess.get_inputs()[0].name
label_name = sess.get_outputs()[0].name
pred_onx = sess.run([label_name], {input_name: [4, 8, 77.8, 143.45]})[0]
Did you try {input_name: numpy.array([[4, 8, 77.8, 143.45]], dtype=numpy.float32)}? onnxruntime requires numpy arrays as inputs, and because the input was declared as FloatTensorType([None, 4]), the array must be two-dimensional: None stands for an arbitrary batch size, so each row is one sample.
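Here is a minimal end-to-end sketch tying both halves together (the training data is made up purely for illustration):

import numpy as np
import onnxruntime as rt
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
from sklearn.tree import DecisionTreeClassifier

# Toy data: 2 samples, 4 features, boolean target
X = np.array([[4, 8, 77.8, 143.45],
              [1, 2, 3.5, 4.5]], dtype=np.float32)
y = np.array([0, 1])
treeModel = DecisionTreeClassifier().fit(X, y)

# None = arbitrary batch size, 4 = number of feature columns
initial_type = [('float_input', FloatTensorType([None, 4]))]
onx = convert_sklearn(treeModel, initial_types=initial_type)

# An InferenceSession can be created directly from the serialized bytes
sess = rt.InferenceSession(onx.SerializeToString())
input_name = sess.get_inputs()[0].name
label_name = sess.get_outputs()[0].name

# The input must be a 2-D float32 array: one row per sample
pred = sess.run([label_name], {input_name: X[:1]})[0]
print(pred)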
I am working on a problem that involves a batch of 19 tokens, each with 400 features. I get the shape (19, 1, 400) when concatenating two vectors of size (1, 200) into the final feature vector. If I squeeze the 1 out I am left with (19,), but I am trying to get (19, 400). I have tried converting to a list, squeezing, and raveling, but nothing has worked.
Is there a way to convert this array to the correct shape?
def attn_output_concat(sample):
    out_h, state_h = get_output_and_state_history(agent.model, sample)
    attns = get_attentions(state_h)
    inner_outputs = get_inner_outputs(state_h)
    if len(attns) != len(inner_outputs):
        print('Length err')
    else:
        tokens = [np.zeros(400)] * largest
        print(np.array(tokens).shape)
        for j, (attns_token, inner_token) in enumerate(zip(attns, inner_outputs)):
            tokens[j] = np.concatenate([attns_token, inner_token], axis=1)
        print(np.array(tokens).shape)
        return tokens
The easiest way would be to declare tokens as a numpy array of shape (19, 400) to start with. That's also more memory- and time-efficient. Here's the relevant portion of your code, revised...
import numpy as np
attns_token = np.zeros(shape=(1,200))
inner_token = np.zeros(shape=(1,200))
largest = 19
tokens = np.zeros(shape=(largest,400))
for j in range(largest):
    tokens[j] = np.concatenate([attns_token, inner_token], axis=1)
print(tokens.shape)
BTW... It makes it difficult for people to help you if you don't include a self-contained, runnable segment of code (which is probably why you haven't gotten a response yet). Something like the above snippet is preferred and will get you better answers, because there's less guessing at what you're trying to accomplish.
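As a side note, if the concatenated result is already a proper (19, 1, 400) ndarray rather than a ragged list, removing the singleton axis directly also gives the target shape:

arr = np.zeros((19, 1, 400))  # stand-in for the concatenated result
print(arr.squeeze(axis=1).shape)   # (19, 400)
print(arr.reshape(19, 400).shape)  # (19, 400)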
I need to create a fixed-length zero vector in theano, with the length equal to the size of another tensor vector that is passed in.
def some_fun(self, y):
    x_h = T.fvector('x_h')
    ret = T.alloc(0, x_h)
    vec_h = theano.function(inputs=[x_h], outputs=ret)
    vec = vec_h(y.shape[0])
    vec[T.arange(y.shape[0]), y] = 1
When I run this I get the error "Shape arguments to Alloc must be integers, but argument 0 is not for apply node: x_h".
It may be a very basic mistake, as I am new to theano.
Thanks
Have you tried theano.tensor.zeros_like? It seems like that should be a shortcut to what you're trying to do.
Then, when you get
"TypeError: 'TensorVariable' object does not support item assignment"
you can replace the line vec[T.arange(y.shape[0]), y] = 1 by using theano.tensor.set_subtensor instead.
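A minimal sketch of both suggestions together (assuming the original assignment was meant as a one-hot encoding; n_classes is a made-up value here):

import numpy as np
import theano
import theano.tensor as T

y = T.ivector('y')

# Zero vector with the same length (and dtype) as y, as suggested above
vec = T.zeros_like(y)

# Symbolic tensors don't support item assignment, so use set_subtensor;
# this builds a (len(y), n_classes) one-hot matrix matching the original intent
n_classes = 4  # hypothetical
onehot = T.zeros((y.shape[0], n_classes))
onehot = T.set_subtensor(onehot[T.arange(y.shape[0]), y], 1)

f = theano.function([y], onehot)
print(f(np.array([2, 0, 1], dtype='int32')))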