Let's say I've got a list of lists (or, to be more conceptually accurate, a 2D array):
list = [[1,1,0,0,0],
        [1,1,2,0,0],
        [0,2,2,2,0],
        [0,0,0,2,0],
        [0,0,0,1,0]]
I'd like to identify the different regions of identical values and rewrite the list so that each region has a unique value, like so:
list = [[1,1,2,2,2],
        [1,1,3,2,2],
        [0,3,3,3,2],
        [0,0,0,3,2],
        [0,0,0,4,2]]
I've mostly tried writing variations of a loop that parses the array value by value and sets adjacent values equal to each other (which, yes, is redundant), BUT ensuring that the island of 1s in the top left stays distinct from the 1 in the bottom right just wasn't working. My attempts were spotty at best and non-functional at worst. Examples:
for x in list_length:
    for y in sublist_length:
        try:
            if list[x][y] == list[x+1][y]:
                list[x+1][y] = list[x][y]
        except:
            pass
or
predetermined_unique_value = 0
for x in list_length:
    for y in sublist_length:
        try:
            if list[x][y] == list[x+1][y]:
                list[x+1][y] = predetermined_unique_value
                predetermined_unique_value += 1
        except:
            pass
and many slight variations on which directions (up, down, left, right from the current spot) to check, brute-forcing the loop by running it until all spots had been assigned a new value, etc.
Clearly I am missing something here. I suspect the answer is actually super simple, but I can't seem to find anything on Google or Reddit, or in other answers here (I'm probably just conceptualizing it weirdly and searching for the wrong thing).
Just to reiterate: how could you parse that list of lists to group values into adjacent regions of identical data and rewrite it so that those regions all have unique values (i.e. so that there is only one region with the value 0, one region with the value 1, etc.)?
I hope this is enough information to help you help me; in truth I'm as unsure of how to approach this as I am of what I'm doing wrong. Please don't hesitate to ask for more.
Based on this answer you can do it with ndimage from the scipy library.
I applied your data to that answer, and this is the result:
from scipy import ndimage
import numpy as np

data_tup = ((1,1,0,0,0),
            (1,1,2,0,0),
            (0,2,2,2,0),
            (0,0,0,2,0),
            (0,0,0,1,0))

data_list = [[1,1,0,0,0],
             [1,1,2,0,0],
             [0,2,2,2,0],
             [0,0,0,2,0],
             [0,0,0,1,0]]

def find_clusters(array):
    clustered = np.empty_like(array)
    unique_vals = np.unique(array)
    cluster_count = 0
    for val in unique_vals:
        # Label the connected components of the mask for this value,
        # then give each component its own global cluster number.
        labelling, label_count = ndimage.label(array == val)
        for k in range(1, label_count + 1):
            clustered[labelling == k] = cluster_count
            cluster_count += 1
    return clustered, cluster_count
clusters, cluster_count = find_clusters(data_list)
clusters_tup, cluster_count_tup = find_clusters(data_tup)

print("With list of lists, found {} clusters:".format(cluster_count))
print(clusters, '\n')
print("With tuple of tuples, found {} clusters:".format(cluster_count_tup))
print(clusters_tup)
Output:
With list of lists, found 5 clusters:
[[2 2 0 0 0]
 [2 2 4 0 0]
 [1 4 4 4 0]
 [1 1 1 4 0]
 [1 1 1 3 0]]
With tuple of tuples, found 5 clusters:
[[2 2 0 0 0]
 [2 2 4 0 0]
 [1 4 4 4 0]
 [1 1 1 4 0]
 [1 1 1 3 0]]
In both cases the output is a NumPy array, not a list of lists. If you want a different return type, change it inside the function (e.g. return clustered.tolist() instead).
You can use skimage.measure.label:
>>> import numpy as np
>>> from skimage import measure
>>>
>>> a = np.array([[1,1,0,0,0],
...               [1,1,2,0,0],
...               [0,2,2,2,0],
...               [0,0,0,2,0],
...               [0,0,0,1,0]])
>>> measure.label(a, background=a.max()+1)
array([[1, 1, 2, 2, 2],
       [1, 1, 3, 2, 2],
       [4, 3, 3, 3, 2],
       [4, 4, 4, 3, 2],
       [4, 4, 4, 5, 2]])
Note that the label function has an argument connectivity which determines how blobs/clusters are identified. The default for a 2D array is to consider diagonal neighbors. If that is undesired, connectivity=1 will consider only horizontal/vertical neighbors.
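For example (my own toy array, not from the question), two pixels that touch only diagonally should be merged under the default connectivity but separated with connectivity=1:

>>> b = np.array([[1, 0],
...               [0, 1]])
>>> measure.label(b, background=2)                  # default: diagonal neighbors merge
array([[1, 2],
       [2, 1]])
>>> measure.label(b, background=2, connectivity=1)  # orthogonal neighbors only
array([[1, 2],
       [3, 4]])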
I'm not sure how good the performance of this solution is, but here's a recursive approach to identify a connected segment. It takes a coordinate and returns the same list of islands, with every coordinate that was part of the same island as the given coordinate set to True.
islands = [[1,1,0,0,0],
           [1,1,2,0,0],
           [0,2,2,2,0],
           [0,0,0,2,0],
           [0,0,0,1,0]]
def print_islands():
    for row in islands:
        print(row)

def get_bool_map(i, j):
    checked_cords = []

    def check_island_indexes(island_value, m, i, j):
        # Negative indexes would silently wrap around, so reject them explicitly.
        if i < 0 or j < 0:
            return
        try:
            if m[i][j] != island_value:
                return
            else:
                if [i, j] in checked_cords:
                    return
                else:
                    checked_cords.append([i, j])
                    m[i][j] = True
        except IndexError:
            return
        # Recurse into the four orthogonal neighbors.
        check_island_indexes(island_value, m, i - 1, j)
        check_island_indexes(island_value, m, i + 1, j)
        check_island_indexes(island_value, m, i, j - 1)
        check_island_indexes(island_value, m, i, j + 1)

    check_island_indexes(islands[i][j], islands, i, j)

get_bool_map(0, 4)
print_islands()
[1, 1, True, True, True]
[1, 1, 2, True, True]
[0, 2, 2, 2, True]
[0, 0, 0, 2, True]
[0, 0, 0, 1, True]
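For bigger grids the recursion above can hit Python's recursion limit. Here is an iterative flood-fill sketch of the same idea (my own names, not the answerer's code): it relabels every 4-connected region with its own unique id rather than marking a single island, which is what the question ultimately asks for:

from collections import deque

def relabel_regions(grid):
    """Give every 4-connected region of equal values its own label."""
    rows, cols = len(grid), len(grid[0])
    labels = [[None] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue
            value = grid[r][c]
            labels[r][c] = next_label
            queue = deque([(r, c)])
            while queue:  # breadth-first flood fill from (r, c)
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and labels[ny][nx] is None
                            and grid[ny][nx] == value):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels

for row in relabel_regions([[1, 1, 0, 0, 0],
                            [1, 1, 2, 0, 0],
                            [0, 2, 2, 2, 0],
                            [0, 0, 0, 2, 0],
                            [0, 0, 0, 1, 0]]):
    print(row)

This prints the desired unique-region labelling (up to the numbering of the labels).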
I want to maximize the following function:
f(i, j, k) = min(A(i, j), B(j, k))
Where A and B are matrices and i, j and k are indices that range up to the respective dimensions of the matrices. I would like to find (i, j, k) such that f(i, j, k) is maximized. I am currently doing that as follows:
import numpy as np
import itertools
shape_a = (100, 150)
shape_b = (shape_a[1], 200)
A = np.random.rand(shape_a[0], shape_a[1])
B = np.random.rand(shape_b[0], shape_b[1])
# All the different i,j,k
combinations = itertools.product(np.arange(shape_a[0]), np.arange(shape_a[1]), np.arange(shape_b[1]))
combinations = np.asarray(list(combinations))
A_vals = A[combinations[:, 0], combinations[:, 1]]
B_vals = B[combinations[:, 1], combinations[:, 2]]
f = np.min([A_vals, B_vals], axis=0)
best_indices = combinations[np.argmax(f)]
print(best_indices)
[ 49 14 136]
This is faster than iterating over all (i, j, k), but a lot of (in fact most of) the time is spent constructing the A_vals and B_vals arrays. This is unfortunate, because they contain many duplicate values, as the same i, j and k appear multiple times. Is there a way to do this where (1) the speed of numpy's array computation is preserved and (2) I don't have to construct the memory-intensive A_vals and B_vals arrays?
In other languages you could perhaps construct matrices of pointers into A and B, but I do not see how to achieve this in Python.
Perhaps you could re-evaluate how you look at the problem in the context of what min and max actually do. Say you have the following concrete example:
>>> np.random.seed(1)
>>> print(A := np.random.randint(10, size=(4, 5)))
[[5 8 9 5 0]
[0 1 7 6 9]
[2 4 5 2 4]
[2 4 7 7 9]]
>>> print(B := np.random.randint(10, size=(5, 3)))
[[1 7 0]
[6 9 9]
[7 6 9]
[1 0 1]
[8 8 3]]
You are looking for a pair of numbers, one from A and one from B, such that the column of the entry in A is the same as the row of the entry in B, and the smaller of the two numbers is as large as possible.
For any set of numbers, the largest pairwise minimum happens when you take the two largest numbers. You are therefore looking for the max in each column of A and in each row of B, then the minimum of each such pair, and then the maximum over those. Here is a relatively simple formulation of the solution:
candidate_i = A.argmax(axis=0)
candidate_k = B.argmax(axis=1)
j = np.minimum(A[candidate_i, np.arange(A.shape[1])], B[np.arange(B.shape[0]), candidate_k]).argmax()
i = candidate_i[j]
k = candidate_k[j]
And indeed, you see that
>>> i, j, k
(0, 2, 2)
>>> A[i, j]
9
>>> B[j, k]
9
If there are collisions, argmax will always pick the first option.
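If you want to sanity-check this, a brute force over all triples (my addition, re-creating the seeded example above) should agree:

import itertools
import numpy as np

np.random.seed(1)
A = np.random.randint(10, size=(4, 5))
B = np.random.randint(10, size=(5, 3))

# Evaluate min(A[i, j], B[j, k]) for every triple and keep the best one.
best = max(
    itertools.product(range(A.shape[0]), range(A.shape[1]), range(B.shape[1])),
    key=lambda t: min(A[t[0], t[1]], B[t[1], t[2]]),
)
print(best)  # (0, 2, 2), matching i, j, k above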
Your values i, j, k are determined by the index of the maximum value in A and B combined. You can simply use np.argmax():
if np.max(A) < np.max(B):
    ind = np.unravel_index(np.argmax(A), A.shape)
else:
    ind = np.unravel_index(np.argmax(B), B.shape)
It returns only two of the indices: i, j if max(A) < max(B) (the min is then bottlenecked by A), or j, k otherwise. If, for example, you get i, j, then k can be any value that fits the shape of the array B, so pick one of those values at random.
If you also need to maximize the other value then:
if np.max(A) < np.max(B):
    ind = np.unravel_index(np.argmax(A), A.shape)
    ind = ind + (np.argmax(B[ind[1], :]),)
else:
    ind = np.unravel_index(np.argmax(B), B.shape)
    ind = (np.argmax(A[:, ind[0]]),) + ind
I have a 2d array r. What I want to do is to take the product of each row (excluding the zero elements in that row). For example if I have:
r = [[1 2 0 3 4],
     [0 2 5 0 1],
     [1 2 3 4 0]]
Then what I want is to have another 2d array result such that:
result = [[24],
          [10],
          [24]]
How can I achieve this using numpy.prod?
I think I figured it out:
np.prod(r, axis=1, where=r > 0, keepdims=True)
Output:
array([[24],
[10],
[24]])
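One caveat worth adding (my note): where=r > 0 also drops negative entries. If r can contain negative values and you only mean to skip exact zeros, where=r != 0 is the safer mask:

import numpy as np

r = np.array([[1, -2, 0, 3, 4],
              [0, 2, 5, 0, 1]])

# Only the zeros are excluded; negative values still take part in the product.
print(np.prod(r, axis=1, where=r != 0, keepdims=True))
# [[-24]
#  [ 10]]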
I want to generate a lexicographic series of numbers such that for each number the sum of digits is a given constant. It is somewhat similar to the 'subset sum problem'. For example, if I wish to generate 4-digit numbers with sum = 3, then I have a series like:
[3 0 0 0]
[2 1 0 0]
[2 0 1 0]
[2 0 0 1]
[1 2 0 0] ... and so on.
I was able to do it successfully in Python with the following code:
import numpy as np

M = 4  # No. of digits
N = 3  # Target sum

a = np.zeros((1, M), int)
b = np.zeros((1, M), int)
a[0][0] = N
jj = 0

while a[jj][M-1] != N:
    ii = M - 2
    while a[jj][ii] == 0:
        ii = ii - 1
    kk = ii
    if kk > 0:
        b[0][0:kk-1] = a[jj][0:kk-1]
    b[0][kk] = a[jj][kk] - 1
    b[0][kk+1] = N - sum(b[0][0:kk+1])
    b[0][kk+2:] = 0
    a = np.concatenate((a, b), axis=0)
    jj += 1

for ii in range(0, len(a)):
    print(a[ii])
print(len(a))
I don't think this is a very efficient way (as I am a Python newbie). It works fine for small values of M and N (< 10) but is really slow beyond that. I wish to use it for M ~ 100 and N ~ 6. How can I make my code more efficient, or is there a better way to code it?
A very efficient algorithm, adapted from Jörg Arndt's book "Matters Computational"
(chapter 7.2, "Co-lexicographic order for compositions into exactly k parts"):
n = 4
k = 3

x = [0] * n
x[0] = k

while True:
    print(x)
    v = x[-1]
    if v == k:
        break
    x[-1] = 0
    j = -2
    while x[j] == 0:
        j -= 1
    x[j] -= 1
    x[j+1] = 1 + v
[3, 0, 0, 0]
[2, 1, 0, 0]
[2, 0, 1, 0]
[2, 0, 0, 1]
[1, 2, 0, 0]
[1, 1, 1, 0]
[1, 1, 0, 1]
[1, 0, 2, 0]
[1, 0, 1, 1]
[1, 0, 0, 2]
[0, 3, 0, 0]
[0, 2, 1, 0]
[0, 2, 0, 1]
[0, 1, 2, 0]
[0, 1, 1, 1]
[0, 1, 0, 2]
[0, 0, 3, 0]
[0, 0, 2, 1]
[0, 0, 1, 2]
[0, 0, 0, 3]
Number of compositions and time in seconds for plain Python (perhaps numpy arrays would be faster), for n = 100 and k = 2, 3, 4, 5 (2.8 GHz Celeron G1840):

k  compositions  seconds
2  5050          0.040
3  171700        0.990
4  4421275       20.022
5  91962520      372.036
I expect about 2 hours for the 100/6 generation.
The same with numpy arrays (x = np.zeros((n,), dtype=int)) gives worse results, but perhaps that's because I don't know how to use them properly:
k  compositions  seconds
2  5050          0.080
3  171700        2.390
4  4421275       54.745
Native code (this is Delphi; C/C++ compilers might optimize better) generates 100/6 in 21 seconds:

k  compositions  seconds
3  171700        0.012
4  4421275       0.125
5  91962520      1.544
6  1609344100    20.748
Can't go to sleep until all the measurements are done :)
MSVS VC++: 18 seconds! (O2 optimization)

k  compositions  seconds
5  91962520      1.466
6  1609344100    18.283

So about 100 million variants per second.
A lot of time is wasted checking empty cells (because the fill ratio is small). The speed described by Arndt is reached at higher k/n ratios and is about 300-500 million variants per second:

n=25, k=15: 25140840660 compositions in 60.981 s (about 400 million per second)
My recommendations:
Rewrite it as a generator utilizing yield, rather than a loop that concatenates to a global array on each iteration.
Keep a running sum instead of recomputing the sum of a slice of the array representation of the number.
Operate on a single instance of your working number representation instead of splicing a copy into a temporary variable on each iteration.
Note that no particular order is implied; a sketch along these lines follows.
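A minimal sketch of what those recommendations look like when applied to the Arndt-style loop above (my own function name, not benchmarked against the timings quoted):

def compositions(n, k):
    """Yield all length-n compositions of k in co-lexicographic order."""
    x = [0] * n
    x[0] = k
    while True:
        yield list(x)        # yield a copy so callers can keep the row
        v = x[-1]
        if v == k:           # the last cell holds everything: we are done
            return
        x[-1] = 0
        j = -2
        while x[j] == 0:     # find the rightmost nonzero cell before the end
            j -= 1
        x[j] -= 1            # move one unit to the right
        x[j + 1] = 1 + v

for row in compositions(4, 3):
    print(row)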
I have a better solution using itertools, as follows:
from itertools import product

n = 4  # number of elements
s = 3  # sum of elements

r = []
for x in range(n):
    r.append(x)

result = [p for p in product(r, repeat=n) if sum(p) == s]
print(len(result))
print(result)
I am saying this is better because it took 0.1 s on my system, while your numpy code took 0.2 s.
But for n = 100 and s = 6 this code has to go through all the combinations, and I think it would take days to compute the results.
I found a solution using itertools as well (source: https://bugs.python.org/msg144273). Code follows:
import itertools
import operator

def combinations_with_replacement(iterable, r):
    # combinations_with_replacement('ABC', 2) --> AA AB AC BB BC CC
    pool = tuple(iterable)
    n = len(pool)
    if not n and r:
        return
    indices = [0] * r
    yield tuple(pool[i] for i in indices)
    while True:
        for i in reversed(range(r)):
            if indices[i] != n - 1:
                break
        else:
            return
        indices[i:] = [indices[i] + 1] * (r - i)
        yield tuple(pool[i] for i in indices)
int_part = lambda n, k: (tuple(map(c.count, range(k)))
                         for c in combinations_with_replacement(range(k), n))

for item in int_part(3, 4):
    print(item)
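To see why this works (my note, not part of the linked source): each tuple from combinations_with_replacement picks, for each of the n units of the sum, which of the k digit positions it lands in; map(c.count, range(k)) then counts the units per position:

c = (0, 1, 1)                         # one unit in position 0, two units in position 1
print(tuple(map(c.count, range(4))))  # (1, 2, 0, 0)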
I have a matrix A, defined as a tensor in tensorflow, with n rows and p columns. Moreover, I have, say, k matrices B1, ..., Bk, each with p rows and q columns. My goal is to obtain a resulting matrix C with n rows and q columns, where each row of C is the matrix product of the corresponding row of A with one of the B matrices. Which B to choose is determined by an index vector I of dimension n whose values range from 1 to k. In my case, the B are weight variables while I is another tensor variable given as input.
An example of code in numpy would look as follows:
import numpy as np

A = np.array([[1, 0, 1],
              [0, 0, 1],
              [1, 1, 0],
              [0, 1, 0]])
B1 = np.array([[1, 1],
               [2, 1],
               [3, 6]])
B2 = np.array([[1, 5],
               [3, 2],
               [0, 2]])
B = [B1, B2]
I = [1, 0, 0, 1]

n = A.shape[0]
p = A.shape[1]
q = B1.shape[1]

C = np.zeros(shape=(n, q))
for i in range(n):
    C[i, :] = np.dot(A[i, :], B[I[i]])
How can this be translated to TensorFlow?
In my specific case the variables are defined as:
A = tf.placeholder("float", [None, p])
B1 = tf.Variable(tf.random_normal(p,q))
B2 = tf.Variable(tf.random_normal(p,q))
I = tf.placeholder("float",[None])
This is a bit tricky and there are probably better solutions. Taking your first example, my approach computes C as follows:
C = diag([0,1,1,0]) * A * B1 + diag([1,0,0,1]) * A * B2
where diag([0,1,1,0]) is the diagonal matrix having vector [0,1,1,0] in its diagonal. This can be achieved through tf.diag() in TensorFlow.
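To see the row-masking effect of a diagonal matrix, a small NumPy illustration (my addition):

import numpy as np

d = np.diag([0, 1, 1, 0])
X = np.arange(8).reshape(4, 2)
print(d @ X)  # rows 0 and 3 are zeroed out; rows 1 and 2 pass through unchanged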
For convenience, let me assume that k <= n (otherwise some B matrices would remain unused). The following script obtains those diagonal values from vector I and computes C as described above:
import tensorflow as tf
from numpy import array

k = 2
n = 4
p = 3
q = 2

a = array([[1, 0, 1],
           [0, 0, 1],
           [1, 1, 0],
           [0, 1, 0]])
index_input = [1, 0, 0, 1]

# Creates a dim·dim tensor having the same vector 'vector' in every row
def square_matrix(vector, dim):
    return tf.reshape(tf.tile(vector, [dim]), [dim, dim])

A = tf.placeholder(tf.float32, [None, p])
B = tf.Variable(tf.random_normal(shape=[k, p, q]))
# For the first example (with k=2):
# B = tf.constant([[[1, 1], [2, 1], [3, 6]], [[1, 5], [3, 2], [0, 2]]], tf.float32)
C = tf.Variable(tf.zeros((n, q)))
I = tf.placeholder(tf.int32, [None])

# Create an n·n tensor 'indices_matrix' having indices_matrix[i]=I for 0<=i<n (each row vector is I)
indices_matrix = square_matrix(I, n)

# Create an n·n tensor 'row_matrix' having row_matrix[i]=[i,...,i] for 0<=i<n (each row is a vector of i's)
row_matrix = tf.transpose(square_matrix(tf.range(0, n, 1), n))

# Find the diagonal values by comparing tensors indices_matrix and row_matrix
equal = tf.cast(tf.equal(indices_matrix, row_matrix), tf.float32)

# Compute C
for i in range(k):
    diag = tf.diag(tf.gather(equal, i))
    mul = tf.matmul(diag, tf.matmul(A, tf.gather(B, i)))
    C = C + mul

sess = tf.Session()
sess.run(tf.initialize_all_variables())
print(sess.run(C, feed_dict={A: a, I: index_input}))
As an improvement, C may be computed using a vectorized implementation instead of using a for loop.
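As a sketch of that vectorized version (my addition, assuming a TensorFlow build that provides tf.einsum, and B stacked as a [k, p, q] tensor with integer indices in I, as above):

# Pick the matching B for every row of A, then contract away the p axis.
B_per_row = tf.gather(B, I)                # shape [n, p, q]; B_per_row[i] = B[I[i]]
C = tf.einsum('np,npq->nq', A, B_per_row)  # C[i] = A[i] @ B[I[i]]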
Just do 2 matrix multiplications:

A1 = A[1:3]   # the rows of A that use B1 (rows 1 and 2), as a new matrix
A2 = A[::3]   # the rows that use B2 (rows 0 and 3; step 3 picks the first and last row)
in tensorflow:

A1 = tf.constant([matrix elements go here])
A2 = tf.constant([matrix elements go here])
C1 = tf.matmul(A1, B1)
C2 = tf.matmul(A2, B2)
C = tf.concat(0, [C1, C2])   # rows now ordered [1, 2, 0, 3]
granted, if you need to reorganize the C tensor back into the original row order, you can also use gather:

C = tf.gather(C, [2, 0, 1, 3])