Tensorflow numpy repeat - python

I wish to repeat each number a different number of times, as shown below:
x = np.array([0,1,2])
np.repeat(x,[3,4,5])
>>> array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2])
(The 0 is repeated 3 times, the 1 four times, etc.)
This answer (https://stackoverflow.com/a/35367161/2530674) seems to suggest that I can use a combination of tf.tile and tf.reshape to get the same effect. However, I believe this is only the case if the repetitions are a constant amount.
How can I get the same effect in Tensorflow?
edit1: there is no tf.repeat unfortunately.

This is a kind of "brute force" solution to the problem, simply tiling every value as many times as the largest number of repetitions and then picking the right elements:
import tensorflow as tf
# Repeats across the first dimension
def tf_repeat(arr, repeats):
    arr = tf.expand_dims(arr, 1)
    max_repeats = tf.reduce_max(repeats)
    tile_repeats = tf.concat(
        [[1], [max_repeats], tf.ones([tf.rank(arr) - 2], dtype=tf.int32)], axis=0)
    arr_tiled = tf.tile(arr, tile_repeats)
    mask = tf.less(tf.range(max_repeats), tf.expand_dims(repeats, 1))
    result = tf.boolean_mask(arr_tiled, mask)
    return result

with tf.Graph().as_default(), tf.Session() as sess:
    print(sess.run(tf_repeat([0, 1, 2], [3, 4, 5])))
Output:
[0 0 0 1 1 1 1 2 2 2 2 2]
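Footnote for future readers: newer TensorFlow releases (2.x, and I believe 1.15) ship a built-in tf.repeat that mirrors np.repeat directly, so the workaround above is only needed on older versions. A minimal sketch, assuming TF 2.x eager mode:
import tensorflow as tf
x = tf.constant([0, 1, 2])
print(tf.repeat(x, repeats=[3, 4, 5]).numpy())
# [0 0 0 1 1 1 1 2 2 2 2 2]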

Monte Carlo Simulation with multiple distributions in each loop

I have an array of NaNs 10 columns wide and 5 rows long.
I have a 5x3 array of poisson random number generations. This represents 5 runs of each A, B, and C, where each column has a different lambda value for the poisson distribution.
 A  B  C
[1, 1, 2,
 1, 2, 2,
 2, 1, 4,
 1, 2, 3,
 0, 1, 2]
Each row represents the number of events. That is, the first row would produce one event of type A, one event of type B, and two events of type C.
I would like to loop through each row and produce a set of uniform random numbers. For A, it would be between 1 and 100; for B, between 101 and 200; and for C, between 201 and 300.
The output of the first row would have four numbers, one number between 1 and 100, one number between 101 and 200, and two numbers between 201 and 300. So a sample output of the first row might be:
[34, 105, 287, 221]
The second output row would have five numbers in it, the third row would have seven, etc. I would like to store it in my array of NaNs by overwriting the NaNs that get replaced in each row. Can anyone please help with this? Thanks!
I've got a rather inefficient/unvectorised method which may or may not be what you're looking for, because one part of your question is unclear to me. Do you want the final array to have rows of different sizes, or to be the same size but padded with nans?
This solution assumes padding with nans, since you talked about the nans being overwritten and didn't mention the extra/unused nans being deleted. I'm also assuming that your ABC thing is structured into a numpy array of size (5,3), and I'm calling the array of nans results_arr.
import numpy as np
from random import randint
# Initializing the arrays
results_arr = np.full((5,10), np.nan)
abc = np.array([[1, 1, 2], [1, 2, 2], [2, 1, 4], [1, 2, 3], [0, 1, 2]])
# Loop through each row in abc
for row_idx in range(len(abc)):
    a, b, c = abc[row_idx]
    # Here, I'm getting a number in the specified uniform distribution as many
    # times as is specified in the A column. The other 2 loops do the same for
    # the B and C columns.
    for i in range(0, a):
        results_arr[row_idx, i] = randint(1, 100)
    for j in range(a, a + b):
        results_arr[row_idx, j] = randint(101, 200)
    for k in range(a + b, a + b + c):
        results_arr[row_idx, k] = randint(201, 300)
Hope that helps!
P.S. Here's a solution with uneven rows. The result is stored in a list of lists because numpy doesn't support ragged arrays (i.e. rows of different lengths).
import numpy as np
from random import randint
# Initializations
results_arr = []
abc = np.array([[1, 1, 2], [1, 2, 2], [2, 1, 4], [1, 2, 3], [0, 1, 2]])
# Same code logic as before, just storing the results differently
for row_idx in range(len(abc)):
    a, b, c = abc[row_idx]
    results_this_row = []
    for i in range(0, a):
        results_this_row.append(randint(1, 100))
    for j in range(a, a + b):
        results_this_row.append(randint(101, 200))
    for k in range(a + b, a + b + c):
        results_this_row.append(randint(201, 300))
    results_arr.append(results_this_row)
I hope these two solutions cover what you're looking for!
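If you prefer NumPy's own generator over the random module, here is a minimal sketch of the same per-row logic (the names rng, lows and highs are my own, and this assumes the ragged list-of-lists output):
import numpy as np
rng = np.random.default_rng()
abc = np.array([[1, 1, 2], [1, 2, 2], [2, 1, 4], [1, 2, 3], [0, 1, 2]])
lows = np.array([1, 101, 201])
highs = np.array([100, 200, 300])
# Each row concatenates a draws from [1, 100], b draws from [101, 200]
# and c draws from [201, 300]; a count of 0 just yields an empty draw.
results = [
    np.concatenate([rng.integers(lo, hi, size=n, endpoint=True)
                    for n, lo, hi in zip(row, lows, highs)])
    for row in abc
]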

How could you rewrite a list of lists so that "islands" of values are unique from one another?

Let's say I've got a list of lists (or more conceptually accurate a 2D array):
list = [[1,1,0,0,0],
        [1,1,2,0,0],
        [0,2,2,2,0],
        [0,0,0,2,0],
        [0,0,0,1,0]]
I'd like to identify the different regions of identical values and rewrite the list so that each region has a unique value, like so:
list = [[1,1,2,2,2],
        [1,1,3,2,2],
        [0,3,3,3,2],
        [0,0,0,3,2],
        [0,0,0,4,2]]
I've mostly tried writing variations of a loop parsing the array per value and setting adjacent values equal to each other (which, yes, is redundant I guess), BUT ensuring that the island of 1s in the top left is distinct from the 1 in the bottom right was just not working. My attempts were spotty at best and non-functional at worst. Examples:
for x in list_length:
    for y in sublist_length:
        try:
            if list[x][y] == list[x+1][y]:
                list[x+1][y] = list[x][y]
        except:
            pass
or
predetermined_unique_value = 0
for x in list_length:
    for y in sublist_length:
        try:
            if list[x][y] == list[x+1][y]:
                list[x+1][y] = predetermined_unique_value
                predetermined_unique_value += 1
        except:
            pass
and many slight variations on which directions (up, down, left, right from current spot/point) to check, brute forcing the loop by running it until all spots had been assigned a new value, etc.
Clearly I am missing something here. I suspect the answer is actually super simple, but I can't seem to find anything on google or reddit, or other answers here (I'm probably just conceptualizing it weirdly so searching for the wrong thing).
Just to reiterate, how could you parse that list of lists to organize values into adjacent regions based on identical data and rewrite it to ensure that those regions all have unique values? (I.E. so that there is only one region of the 0 value, one region of the 1 value, etc. etc.)
I hope this is enough information to help you help me, but in truth I just as much am not sure how to do this as I am doing it wrong. Please don't hesitate to ask for more.
Based on this answer, you can do it with ndimage from the scipy library.
I applied your data to that answer, and this is the result:
from scipy import ndimage
import numpy as np

data_tup = ((1,1,0,0,0),
            (1,1,2,0,0),
            (0,2,2,2,0),
            (0,0,0,2,0),
            (0,0,0,1,0))
data_list = [[1,1,0,0,0],
             [1,1,2,0,0],
             [0,2,2,2,0],
             [0,0,0,2,0],
             [0,0,0,1,0]]

def find_clusters(array):
    array = np.asarray(array)  # also accepts lists of lists / tuples of tuples
    clustered = np.empty_like(array)
    unique_vals = np.unique(array)
    cluster_count = 0
    for val in unique_vals:
        labelling, label_count = ndimage.label(array == val)
        for k in range(1, label_count + 1):
            clustered[labelling == k] = cluster_count
            cluster_count += 1
    return clustered, cluster_count

clusters, cluster_count = find_clusters(data_list)
clusters_tup, cluster_count_tup = find_clusters(data_tup)
print(" With list of lists, Found {} clusters:".format(cluster_count))
print(clusters, '\n')
print(" With tuples of tuple, Found {} clusters:".format(cluster_count_tup))
print(clusters_tup)
Output:
 With list of lists, Found 5 clusters:
[[2 2 0 0 0]
 [2 2 4 0 0]
 [1 4 4 4 0]
 [1 1 1 4 0]
 [1 1 1 3 0]]

 With tuples of tuple, Found 5 clusters:
[[2 2 0 0 0]
 [2 2 4 0 0]
 [1 4 4 4 0]
 [1 1 1 4 0]
 [1 1 1 3 0]]
In both cases the output is a NumPy array; if you want a plain list of lists back, call .tolist() on the result or change the function accordingly.
You can use skimage.measure.label:
>>> import numpy as np
>>> from skimage import measure
>>>
>>> a = np.array([[1,1,0,0,0],
...               [1,1,2,0,0],
...               [0,2,2,2,0],
...               [0,0,0,2,0],
...               [0,0,0,1,0]])
>>> measure.label(a, background=a.max()+1)
array([[1, 1, 2, 2, 2],
       [1, 1, 3, 2, 2],
       [4, 3, 3, 3, 2],
       [4, 4, 4, 3, 2],
       [4, 4, 4, 5, 2]])
Note that the label function has an argument connectivity which determines how blobs/clusters are identified. The default for a 2D array is to consider diagonal neighbors. If that is undesired, connectivity=1 will consider only horizontal/vertical neighbors.
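For this particular a the regions happen to come out the same either way, but the call with explicit connectivity would look like:
>>> measure.label(a, background=a.max()+1, connectivity=1)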
I'm not sure how good the performance of this solution is, but here's a recursive approach to identify a connected segment. It takes a coordinate and returns the same list of islands, with every coordinate that was part of the same island as the given coordinate set to True.
islands = [[1,1,0,0,0],
           [1,1,2,0,0],
           [0,2,2,2,0],
           [0,0,0,2,0],
           [0,0,0,1,0]]

def print_islands():
    for row in islands:
        print(row)

def get_bool_map(i, j):
    checked_cords = []

    def check_island_indexes(island_value, m, i, j):
        # Stop at the grid edges (negative indexes would wrap around).
        if i < 0 or j < 0:
            return
        try:
            if m[i][j] != island_value:
                return
            else:
                if [i, j] in checked_cords:
                    return
                else:
                    checked_cords.append([i, j])
                    m[i][j] = True
        except IndexError:
            return
        # Recurse into the four direct neighbours.
        check_island_indexes(island_value, m, i - 1, j)
        check_island_indexes(island_value, m, i + 1, j)
        check_island_indexes(island_value, m, i, j - 1)
        check_island_indexes(island_value, m, i, j + 1)

    check_island_indexes(islands[i][j], islands, i, j)

get_bool_map(0, 4)
print_islands()
[1, 1, True, True, True]
[1, 1, 2, True, True]
[0, 2, 2, 2, True]
[0, 0, 0, 2, True]
[0, 0, 0, 1, True]
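One caveat: the recursion depth grows with the island size, so on large grids this can hit Python's default recursion limit (roughly 1000 frames). Raising it is a quick workaround, though rewriting the search with an explicit stack would be more robust:
import sys
sys.setrecursionlimit(10000)  # call before processing large grids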

Reduce sum with condition in tensorflow

I am given a 2D Tensor with stochastic rows. After applying tf.math.greater() and tf.cast(tf.int32) I am left with a Tensor of 0's and 1's. I now want to apply a reduce sum to that matrix, but with a condition: if at least one 1 has been summed and a 0 follows, I want to drop all subsequent 1's as well, meaning 1 0 1 should result in 1 instead of 2.
I have tried to solve the problem with tf.scan(), but I have not yet been able to come up with a function that can handle leading 0's, because a row might look like: 0 0 0 1 0 1
One idea was to set the lower part of the matrix to one (because I know everything left of the diagonal will always be 0) and then have a function like tf.scan() run to filter out the spots (see code and error message below).
Let z be the matrix after tf.cast.
helper = tf.matrix_band_part(tf.ones_like(z), -1, 0)
z = tf.math.logical_or(tf.cast(z, tf.bool), tf.cast(helper, tf.bool))
z = tf.cast(z, tf.int32)
z = tf.scan(lambda a, x: x if a == 1 else 0, z)
Resulting in:
ValueError: Incompatible shape for value ([]), expected ([5])
IIUC, this is one way to do what you want without scanning or looping. It may be a bit convoluted, and is actually iterating the columns twice (one cumsum and one cumprod), but being vectorized operations I think it is probably faster. Code is TF 2.x but runs the same in TF 1.x (except for the last line obviously).
import tensorflow as tf

# Example data
a = tf.constant([[0, 0, 0, 0],
                 [1, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 0, 1],
                 [1, 1, 1, 0],
                 [1, 1, 0, 1],
                 [0, 1, 1, 1],
                 [1, 1, 1, 1]])
# Cumulative sum along each row
c = tf.math.cumsum(a, axis=1)
# Find points where we should not sum anymore
# (the cumsum is not zero but the current element is)
cutoff = tf.equal(a, 0) & tf.not_equal(c, 0)
# Mask: 1 up to the first cutoff point of each row, 0 afterwards
mask = tf.math.cumprod(tf.dtypes.cast(~cutoff, tf.uint8), axis=1)
# The result is the largest masked cumsum value per row
result = tf.reduce_max(c * tf.dtypes.cast(mask, c.dtype), axis=1)
print(result.numpy())
# [0 1 2 1 3 2 3 4]
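For completeness, in TF 1.x (graph mode) the last line would instead be run in a session, along these lines:
with tf.Graph().as_default(), tf.Session() as sess:
    # ... build a, c, cutoff, mask and result as above ...
    print(sess.run(result))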

How to randomly throw numbers in a 2D board

I have a 50x50 2D board with empty cells. I want to fill 20% of the cells with 0, 30% with 1, 30% with 2 and 20% with 3. How can I randomly throw these four numbers onto the board in those proportions?
import numpy as np
from numpy import random
dim = 50
map = [[" "for i in range(dim)] for j in range(dim)]
print(map)
One way to get this kind of randomness would be to start with a random permutation of the numbers from 0 to the total number of cells you have minus one.
perm = np.random.permutation(2500)
Now you split the permutation according to the proportions you want and treat the entries of the permutation as the indices of the array.
array = np.empty(2500)
p1 = int(0.2*2500)
p2 = int(0.3*2500)
p3 = int(0.3*2500)
array[perm[range(0, p1)]] = 0
array[perm[range(p1, p1 + p2)]] = 1
array[perm[range(p1 + p2, p1 + p2 + p3)]] = 2
array[perm[range(p1 + p2 + p3, 2500)]] = 3
array = array.reshape(50, 50)
This way you ensure the proportions for each number.
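A sketch of an even shorter variant of the same exact-proportions idea (the variable names are my own):
import numpy as np
counts = [500, 750, 750, 500]  # 20%, 30%, 30%, 20% of the 2500 cells
board = np.random.permutation(np.repeat([0, 1, 2, 3], counts)).reshape(50, 50)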
Since the percentages sum up to 1, you can start with a board of zeros
bsize = 50
board = np.zeros((bsize, bsize))
In this approach the board positions are interpreted as 1D positions; we then need a set of positions equivalent to 80% of all positions.
for i, pos in enumerate(np.random.choice(bsize**2, int(0.8*bsize**2), replace=False)):
    # the first 30% will be set to 1
    if i < int(0.3*bsize**2):
        board[pos//bsize][pos%bsize] = 1
    # the second 30% (between 30% and 60%) will be set to 2
    elif i < int(0.6*bsize**2):
        board[pos//bsize][pos%bsize] = 2
    # the remaining 20% (between 60% and 80%) will be set to 3
    else:
        board[pos//bsize][pos%bsize] = 3
At the end, the last 20% of positions will remain as zeros.
As suggested by @alexis in the comments, this approach can be made simpler by using the shuffle method from the random module:
from random import shuffle
bsize = 50
board = np.zeros((bsize, bsize))
l = list(range(bsize**2))
shuffle(l)
for i, pos in enumerate(l):
    # the first 30% will be set to 1
    if i < int(0.3*bsize**2):
        board[pos//bsize][pos%bsize] = 1
    # the second 30% (between 30% and 60%) will be set to 2
    elif i < int(0.6*bsize**2):
        board[pos//bsize][pos%bsize] = 2
    # the remaining 20% (between 60% and 80%) will be set to 3
    elif i < int(0.8*bsize**2):
        board[pos//bsize][pos%bsize] = 3
The last 20% of positions will remain as zeros again.
A different approach (admittedly it's probabilistic, so you won't get the exact proportions that the solution proposed by Brad Solomon achieves):
import numpy as np
res = np.random.random((50, 50))
zeros = np.where(res <= 0.2, 0, 0)  # always 0; kept for symmetry with the lines below
ones = np.where(np.logical_and(res <= 0.5, res > 0.2), 1, 0)
twos = np.where(np.logical_and(res <= 0.8, res > 0.5), 2, 0)
threes = np.where(res > 0.8, 3, 0)
final_result = zeros + ones + twos + threes
Running
np.unique(final_result, return_counts=True)
yielded
(array([0, 1, 2, 3]), array([499, 756, 754, 491]))
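As an aside, the four np.where calls can be collapsed into a single bucketing step with np.searchsorted (same probabilistic behaviour; this one-liner is my own formulation):
final_result = np.searchsorted([0.2, 0.5, 0.8], np.random.random((50, 50)))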
Here's an approach with np.random.choice to shuffle indices, then filling those indices with repeats of the inserted ints. It will fill the array in the exact proportions that you specify:
import numpy as np
np.random.seed(444)
board = np.zeros(50 * 50, dtype=np.uint8).flatten()
# The "20% cells with 0" can be ignored since that is the default.
#
# This will work as long as the proportions are "clean" ints
# (I.e. mod to 0; 2500 * 0.2 is a clean 500. Otherwise, need to do some rounding.)
rpt = (board.shape[0] * np.array([0.3, 0.3, 0.2])).astype(int)
repl = np.repeat([1, 2, 3], rpt)
idx = np.random.choice(board.shape[0], size=repl.size, replace=False)
board[idx] = repl
board = board.reshape((50, 50))
Resulting frequencies:
>>> np.unique(board, return_counts=True)
(array([0, 1, 2, 3], dtype=uint8), array([500, 750, 750, 500]))
>>> board
array([[1, 3, 2, ..., 3, 2, 2],
       [0, 0, 2, ..., 0, 2, 0],
       [1, 1, 1, ..., 2, 1, 0],
       ...,
       [1, 1, 2, ..., 2, 2, 2],
       [1, 2, 2, ..., 2, 1, 2],
       [2, 2, 2, ..., 1, 0, 1]], dtype=uint8)
Approach
Flatten the board. Easier to work with indices when the board is (temporarily) one-dimensional.
rpt is a 1d vector of the number of repeats per int. It gets "zipped" together with [1, 2, 3] to create repl, which is length 2000. (80% of the size of the board; you don't need to worry about the 0s in this example.)
The indices of the flattened array are effectively shuffled (idx), and the length of this shuffled array is constrained to the size of the replacement candidates. Lastly, those indices in the 1d board are filled with the replacements, after which it can be made 2d again.
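Regarding the rounding caveat in the code comment above: one minimal way to handle proportions that don't divide the board size cleanly (my own sketch) is to floor the counts and let the default value absorb the remainder:
import numpy as np
size = 50 * 50
props = np.array([0.3, 0.3, 0.2])
rpt = np.floor(size * props).astype(int)  # any leftover cells simply stay 0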

Tensorflow scan multiple matrix rows with offset

Question
I want to scan a matrix analogous to Tensorflow's tf.scan(), but using multiple rows at a time. So given a [n, m] matrix, I want to be able to iterate the m rows (with n elements) from i + j to m giving m - j slices of shape [i - j, n].
How can this be achieved?
I know how tf.scan does something like this, returning the accumulated value of each iteration. But I don't think shifting the matrix as multiple inputs solves this, since the values that have an offset cannot be precomputed.
Example
To give an example for n = 3 and m = 5, let's say I have a matrix that looks like the following:
# [[1 0 0]
#  [1 1 0]
#  [0 0 0]  row 3
#  [0 0 0]  row 4
#  [0 0 0]] row 5
matrix_shape = [5, 3]
matrix_idx = tf.constant([[0, 0], [1, 0], [1, 1]])
matrix = tf.scatter_nd(matrix_idx,
                       tf.ones(tf.shape(matrix_idx)[0],
                               dtype=tf.int32),
                       matrix_shape)
I want to apply the following function from row 3 to row 5:
# [[ 1  0  0] ┌ a
#  [ 1  1  0] ├ b
#  [ 6  4  2] <─┴ output / current line
#  [16 12  6]
#  [46 34 18]]
def compute(x):
    a = x[0]
    b = x[1]
    return (a + b + 1) * 2
Does Tensorflow have a function specific to this problem?
The following code I wrote does exactly what I wanted.
The important part here is the return value of the function used by tf.scan, which gives back not only the current computation c but also the row b from the previous step. This excess must later be cut off from the computation by selecting only the second tensor of the accumulator with [1].
#!/usr/bin/env python3
import tensorflow as tf

def compute(x, _):
    a = x[0]
    b = x[1]
    c = (a + b + 1) * 2
    return (b, c)

matrix_shape = tf.constant([3, 3])
init_data = [[1, 0, 0], [1, 1, 0]]
initializer = (
    tf.constant(init_data[0]),
    tf.constant(init_data[1]),
)
matrix = tf.zeros(matrix_shape, dtype=tf.int32)
# The accumulator carries (previous row, current row); [1] keeps only
# the newly computed rows.
computation = tf.scan(compute, matrix, initializer)[1]
result = tf.concat((tf.constant(init_data), computation), axis=0)

with tf.Session() as sess:
    print(sess.run(result))
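For reference, with this example data the print should yield the matrix from the question:
[[ 1  0  0]
 [ 1  1  0]
 [ 6  4  2]
 [16 12  6]
 [46 34 18]]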
Since I still lack experience: might this solution be bad for performance, because the function returns a tuple and therefore may not use TensorFlow's speed optimizations?
