Tensorflow mask from one-hot encoding - python

I have labels for a one-hot-encoded output in the form examples = tf.placeholder(tf.int32, [batch_size]), where each example is an int in the range 0:ohe_size (a class index into the one-hot dimension).
My output is in the form of a softmax probability distribution with a shape [batch_size, ohe_size]
I'm trying to work out how to create a mask that will give me just the probability assigned to each example's label, e.g.
probs = [[0.1, 0.6, 0.3],
         [0.2, 0.1, 0.7],
         [0.9, 0.1, 0.0]]
examples = [2, 2, 0]
some_mask_func(probs, examples)  # <- Need this function
> [0.3, 0.7, 0.9]

If I understood your example correctly, you need tf.gather_nd:
row_range = tf.range(tf.shape(examples)[0])        # [0, 1, ..., batch_size - 1]
indices = tf.stack([row_range, examples], axis=1)  # tf.pack was renamed tf.stack in TF 1.0
result = tf.gather_nd(probs, indices)
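For reference, a minimal check of the same idea in TF 2.x eager mode (the numbers are taken from the question):

import tensorflow as tf

probs = tf.constant([[0.1, 0.6, 0.3],
                     [0.2, 0.1, 0.7],
                     [0.9, 0.1, 0.0]])
examples = tf.constant([2, 2, 0])

rows = tf.range(tf.shape(examples)[0])        # [0, 1, 2]
indices = tf.stack([rows, examples], axis=1)  # [[0, 2], [1, 2], [2, 0]]
print(tf.gather_nd(probs, indices).numpy())   # [0.3 0.7 0.9]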

Related

Randomly get index of one of the maximum values in a PyTorch tensor

I need to perform something similar to the built-in torch.argmax() function on a one-dimensional tensor, but instead of picking the index of the first of the maximum values, I want to be able to pick a random index of one of the maximum values. For example:
my_tensor = torch.tensor([0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.1])
index_1 = random_max_val_index_fn(my_tensor)
index_2 = random_max_val_index_fn(my_tensor)
print(f"{index_1}, {index_2}")
> 5, 1
You can get the indices of all the maxima first and then choose randomly from them:
import numpy as np
import torch

def rand_argmax(tens):
    # indices of every entry equal to the maximum
    max_inds, = torch.where(tens == tens.max())
    return np.random.choice(max_inds)
sample runs:
>>> my_tensor = torch.tensor([0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.1])
>>> rand_argmax(my_tensor)
2
>>> rand_argmax(my_tensor)
5
>>> rand_argmax(my_tensor)
2
>>> rand_argmax(my_tensor)
1
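If you would rather avoid the NumPy round trip and stay on-device, a minimal sketch of the same idea in pure PyTorch (rand_argmax_torch is a made-up name):

import torch

def rand_argmax_torch(tens):
    # indices of every entry equal to the maximum
    max_inds, = torch.where(tens == tens.max())
    # pick one of them uniformly at random
    choice = torch.randint(len(max_inds), (1,))
    return max_inds[choice].item()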
I think this should work:
import numpy as np
import torch

your_tensor = torch.tensor([0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.1])
argmaxes = np.argwhere(your_tensor == torch.max(your_tensor)).flatten()
rand_argmax = np.random.choice(argmaxes)
print(rand_argmax)
Note that np.random.choice samples with replacement by default; pass replace=False if you ever draw more than one index at a time.

Tensorflow 2: Sort a 3D tensor according to a 2D tensor

I have a 3D tensor with batch, sequence, and feature dimensions (N, s, e). It is a sequence of probability distributions. I want to order the distributions according to the integer index of the highest prediction. So, say
x_probabs = [[[0.5, 0.1, 0.4], [0.3, 0.3, 0.4], [0.1, 0.8, 0.1]]]  # shape (N, s, e)
x = tf.argmax(x_probabs, axis=-1)  # [[0, 2, 1]], shape (N, s)
or another example would be
x_probabs=[[[0.6, 0.1, 0.1, 0.1, 0.1], [0.1,0.1,0.1,0.1,0.6], [0.1,0.1,0.1,0.6,0.1]]];
x = [[0, 4, 3]];
If I wanted to order x I could do ordered_x = tf.sort(x, axis=-1), and to get the ordering I could do indices_sorted_x = tf.argsort(x, axis=-1). I want the same ordering applied to x_probabs, and I am confused about how to do that. I have tried sorted_x_probabs = tf.gather(x_probabs, indices_sorted_x), but it doesn't work because the indices are for a 2D tensor, not a 3D one. I'm stuck here.
The following is what it would look like for the first example
sorted_x = [[0,1,2]];
sorted_x_probabs = [[[0.5, 0.1, 0.4], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]];
This would be for the 2nd example
sorted_x = [[0,3,4]];
sorted_x_probabs = [[[0.6, 0.1, 0.1, 0.1, 0.1],[0.1,0.1,0.1,0.6,0.1],[0.1,0.1,0.1,0.1,0.6]]];
Thank you very much in advance.
You can add the batch_dims argument so that tf.gather starts gathering below the batch dimension, and pass it the sort order you already computed:
sorted_x_probabs = tf.gather(x_probabs, tf.argsort(x, axis=-1), batch_dims=1)
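As a quick end-to-end check, here is what that looks like in TF 2.x eager mode on the second example from the question:

import tensorflow as tf

x_probabs = tf.constant([[[0.6, 0.1, 0.1, 0.1, 0.1],
                          [0.1, 0.1, 0.1, 0.1, 0.6],
                          [0.1, 0.1, 0.1, 0.6, 0.1]]])
x = tf.argmax(x_probabs, axis=-1)             # [[0, 4, 3]]
order = tf.argsort(x, axis=-1)                # [[0, 2, 1]]
sorted_x = tf.gather(x, order, batch_dims=1)  # [[0, 3, 4]]
sorted_x_probabs = tf.gather(x_probabs, order, batch_dims=1)
print(sorted_x_probabs.numpy())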

python - np.random.random round up to 1 decimal

Trying to generate numbers using np.random.random:
for portfolio in range(2437):
    weights = np.random.random(3)
    weights /= np.sum(weights)
    print(weights)
It works just as expected:
[0.348674 0.329747 0.321579]
[0.215606 0.074008 0.710386]
[0.350316 0.589782 0.059901]
[0.639651 0.025353 0.334996]
[0.697505 0.171061 0.131434]
...
However, how do I change the numbers so that each row is limited to 1 decimal place, like:
[0.1 0.2 0.7]
[0.2 0.2 0.6]
[0.5 0.4 0.1]
...
You can use
In [1]: weights.round(1)
Out[1]: array([0.4, 0.5, 0.2])
The argument to round is the number of decimal digits you want. It also accepts negative arguments, which round to the left of the decimal point (tens, hundreds, and so on):
In [2]: np.array([123, 321, 332]).round(-1)
Out[2]: array([120, 320, 330])
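One caveat worth flagging: rounding after normalizing means a row may no longer sum exactly to 1. A minimal sketch to see this (the seed is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
weights = rng.random(3)
weights /= weights.sum()
rounded = weights.round(1)
print(rounded, rounded.sum())  # the rounded sum can come out as 0.9 or 1.1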
For visualization only, you can use np.set_printoptions:
import numpy as np

np.set_printoptions(precision=1, suppress=True)
np.random.rand(4, 4)
array([[0.8, 0.8, 0.3, 0.3],
       [0.1, 0.2, 0. , 0.2],
       [0.8, 0.2, 1. , 0.2],
       [0.2, 0.7, 0.6, 0.2]])
You can try np.round:
weights = np.round(weights, 1)
Maybe my answer is not the most efficient, but here it is:
for portfolio in range(2437):
    weights = np.random.random(3)
    weights /= np.sum(weights)
    t_weights = []
    for num in weights:
        num *= 10
        num = int(num)  # int() truncates, so this floors rather than rounds
        num = float(num) / 10
        t_weights.append(num)
    weights = t_weights
    print(weights)
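For what it's worth, the same truncation can be written in one vectorized line (like the loop, this floors toward zero rather than rounding):

import numpy as np

weights = np.random.random(3)
weights /= np.sum(weights)
t_weights = np.floor(weights * 10) / 10  # one-decimal truncation, same result as the loop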

Using scatter_nd with top_k output

I've been trying to do something seemingly simple, with no success.
I have a (?,4) tensor, where each row will be 4 floats between 0 and 1.
I want to replace this with a new tensor where each row has only the top 2 entries and zeros everywhere else.
Example with a (2, 4):
source = [[0.1, 0.2, 0.5, 0.6],
          [0.8, 0.7, 0.2, 0.1]]
result = [[0.0, 0.0, 0.5, 0.6],
          [0.8, 0.7, 0.0, 0.0]]
I tried using top_k on the source and then using scatter_nd with the indices returned by top_k, but it has literally been 4 hours of mismatched shapes and rank errors in scatter_nd.
I'm ready to give up, but I thought I would ask for help here first.
I've found a couple of questions here closely related, but I'm failing to generalize the info in there for my case.
Another approach I just tried is this:
tensor = tf.constant([[0.1, 0.2, 0.8], [0.1, 0.2, 0.7]])
values, indices = tf.nn.top_k(tensor, 1)
elems = (tensor, values)
masked_a = tf.map_fn(
    lambda a: tf.where(tf.greater_equal(a[0], a[1]), a[0], tf.zeros_like(a[0])),
    elems)
but this one gives me the following error:
ValueError: The two structures don't have the same number of elements.
First structure (2 elements): (tf.float32, tf.float32)
Second structure (1 elements): Tensor("map/while/Select:0", shape=(3,), dtype=float32)
I'm relatively new with TensorFlow, so apologies if I'm missing something simple or being unclear.
Thanks!
You can do it with tf.scatter_nd by appending the row index to the column indices returned by top_k.
import tensorflow as tf

source = tf.constant([
    [0.1, 0.2, 0.5, 0.6],
    [0.8, 0.7, 0.2, 0.1]])
# get indices of top k
k = 2
top_k, top_k_inds = tf.nn.top_k(source, k)
# the indices are only columns; stack them with a tensor
# of row numbers so each entry is a (row, col) pair, i.e.
# [[0, 0],
#  [1, 1],
#  ...]
num_rows = tf.shape(source)[0]
row_range = tf.range(num_rows)
row_tensor = tf.tile(row_range[:, None], (1, k))
# stack along the final dimension, as this is what
# scatter_nd uses as the indices
top_k_row_col_indices = tf.stack([row_tensor, top_k_inds], axis=2)
# to mask off everything else we multiply by a {0, 1} mask,
# so all the updates are just 1
updates = tf.ones([num_rows, k], dtype=tf.float32)
# build the mask
zero_mask = tf.scatter_nd(top_k_row_col_indices, updates, [num_rows, 4])
with tf.Session() as sess:
    zeroed = source * zero_mask
    print(zeroed.eval())
This should print
[[0.  0.  0.5 0.6]
 [0.8 0.7 0.  0. ]]
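For comparison, a sketch of the same masking without tf.scatter_nd, written for TF 2.x eager mode: sum one-hot rows over the top-k column indices to build the {0, 1} mask (tf.nn.top_k returns k distinct positions per row, so the sum never exceeds 1):

import tensorflow as tf

source = tf.constant([[0.1, 0.2, 0.5, 0.6],
                      [0.8, 0.7, 0.2, 0.1]])
k = 2
_, top_k_inds = tf.math.top_k(source, k)
# (rows, k, cols) one-hot stack, summed over k -> (rows, cols) mask
mask = tf.reduce_sum(tf.one_hot(top_k_inds, tf.shape(source)[1]), axis=1)
print((source * mask).numpy())  # [[0. 0. 0.5 0.6] [0.8 0.7 0. 0. ]]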
Just pasting some lines of code :)
import tensorflow as tf

def attach_indice(tensor, top_k=None):
    flatty = tf.reshape(tensor, [-1])
    orig_shape = tf.shape(tensor)
    length = tf.shape(flatty)[0]
    if top_k is not None:
        orig_shape = orig_shape[:-1]  # drop the dim for top_k
        length //= top_k
    indice = tf.unravel_index(tf.range(length), orig_shape)
    indice = tf.transpose(indice)
    if indice.dtype != tensor.dtype:
        indice = tf.cast(indice, tensor.dtype)
    if top_k is not None:
        _dims = len(tensor.shape) - 1  # rank of the index part
        shape = [1 for _ in range(_dims)]
        shape[-1] *= top_k
        indice = tf.reshape(tf.tile(indice, shape), [-1, _dims])
    return tf.concat([indice, flatty[:, None]], -1)

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
# tf.enable_eager_execution()
from time import time

top_k = 3
shape = [50, 40, 100]
q = tf.random_uniform(shape)

# fast: 4.376221179962158 (GPU) / 2.483684778213501 (CPU)
v, k = tf.nn.top_k(q, top_k)
k = attach_indice(k, top_k)
s = tf.scatter_nd(k, tf.reshape(v, [-1]), shape)

# very slow: 281.82796931266785 (GPU) / 35.163344860076904 (CPU)
# s = tf.map_fn(lambda v__k__: tf.map_fn(lambda v_k_: tf.scatter_nd(v_k_[1][:, None], v_k_[0], [shape[-1]]), v__k__, q.dtype), tf.nn.top_k(q, top_k), q.dtype)

start = time()
with tf.Session() as sess:
    for _ in range(1000):
        sess.run(s)
print('time', time() - start)

python mask matrice for selecting a list of vertices

I have a numpy matrix of booleans, whose shape is (N,N), e.g.:
[[True False False True]
 [...]
 [True True True False]]
and a numpy array of vertices, whose shape is (N,3), e.g:
[[0.1, 0.2, 0.3],
 [0.4, 0.5, 0.6],
 [0.7, 0.8, 0.9],
 [1.0, 1.1, 1.2]]
I would like to compute a matrix, with shape (N, varying), in which each row is a list of vertices selected with each line of the boolean matrix.
From the examples above:
[[[0.1, 0.2, 0.3], [1.0, 1.1, 1.2]],
 [...],
 [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9]]]
Is it possible?
Thanks in advance.
Here's one approach after extracting rows, columns from the mask -
r,c = np.where(mask)
start = np.r_[0,np.flatnonzero(r[1:] != r[:-1])+1]
stop = np.r_[start[1:], r.size]
data_rep = data[c]
out = [data_rep[start[i]:stop[i]] for i in range(len(start))]
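To make that concrete, here is a quick run using the vertices from the question and a fully specified mask (the question elides the middle rows, so rows 2 and 3 here are made up). One assumption to be aware of: every mask row needs at least one True, since fully empty rows drop out of r entirely:

import numpy as np

mask = np.array([[True, False, False, True],
                 [False, True, True, False],
                 [True, False, False, False],
                 [True, True, True, False]])
data = np.array([[0.1, 0.2, 0.3],
                 [0.4, 0.5, 0.6],
                 [0.7, 0.8, 0.9],
                 [1.0, 1.1, 1.2]])

r, c = np.where(mask)
start = np.r_[0, np.flatnonzero(r[1:] != r[:-1]) + 1]
stop = np.r_[start[1:], r.size]
data_rep = data[c]
out = [data_rep[start[i]:stop[i]] for i in range(len(start))]
print(out[0])  # [[0.1 0.2 0.3] [1.  1.1 1.2]]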
Thanks Divakar!!
I tried your solution and it works fine.
However, I also tried a solution with a loop:
result = []
for i in range(len(data)):
    result.append(data[mask[i]])
and it's faster than doing:
result = extract_rows_using_mask(data, mask)
Weird, isn't it?
