Let's say I've got a list of lists (or, more conceptually accurate, a 2D array):
list = [[1,1,0,0,0],
        [1,1,2,0,0],
        [0,2,2,2,0],
        [0,0,0,2,0],
        [0,0,0,1,0]]
I'd like to identify the different regions of identical values and rewrite the list so that each region has a unique value, like so:
list = [[1,1,2,2,2],
        [1,1,3,2,2],
        [0,3,3,3,2],
        [0,0,0,3,2],
        [0,0,0,4,2]]
I've mostly tried writing variations of a loop that parses the array value by value and sets adjacent values equal to each other (which, yeah, is redundant I guess), BUT ensuring that the island of 1s in the top left stays distinct from the 1 in the bottom right just wasn't working. My attempts were spotty at best and non-functional at worst. Examples:
for x in range(list_length):
    for y in range(sublist_length):
        try:
            if list[x][y] == list[x+1][y]:
                list[x+1][y] = list[x][y]
        except IndexError:
            pass
or
predetermined_unique_value = 0
for x in range(list_length):
    for y in range(sublist_length):
        try:
            if list[x][y] == list[x+1][y]:
                list[x+1][y] = predetermined_unique_value
                predetermined_unique_value += 1
        except IndexError:
            pass
and many slight variations on which directions (up, down, left, right from the current spot) to check, brute-forcing the loop by running it until all spots had been assigned a new value, etc.
Clearly I am missing something here. I suspect the answer is actually super simple, but I can't seem to find anything on Google or Reddit, or in other answers here (I'm probably just conceptualizing it weirdly and searching for the wrong thing).
Just to reiterate: how could you parse that list of lists to organize values into adjacent regions based on identical data and rewrite it so that those regions all have unique values? (i.e., so that there is only one region of the 0 value, one region of the 1 value, etc.)
I hope this is enough information to help you help me, but in truth I'm as unsure of how to do this as I am of what I'm doing wrong. Please don't hesitate to ask for more.
Based on this answer, you can do it with ndimage from the SciPy library.
I applied your data to that answer, and this is the result I got:
from scipy import ndimage
import numpy as np

data_tup = ((1,1,0,0,0),
            (1,1,2,0,0),
            (0,2,2,2,0),
            (0,0,0,2,0),
            (0,0,0,1,0))

data_list = [[1,1,0,0,0],
             [1,1,2,0,0],
             [0,2,2,2,0],
             [0,0,0,2,0],
             [0,0,0,1,0]]

def find_clusters(array):
    clustered = np.empty_like(array)
    unique_vals = np.unique(array)
    cluster_count = 0
    for val in unique_vals:
        # Label the connected components of the cells holding this value
        labelling, label_count = ndimage.label(array == val)
        for k in range(1, label_count + 1):
            # Give each component its own global cluster number
            clustered[labelling == k] = cluster_count
            cluster_count += 1
    return clustered, cluster_count
clusters, cluster_count = find_clusters(data_list)
clusters_tup, cluster_count_tup = find_clusters(data_tup)
print(" With list of lists, Found {} clusters:".format(cluster_count))
print(clusters, '\n')
print(" With tuples of tuple, Found {} clusters:".format(cluster_count_tup))
print(clusters_tup)
Output:
 With list of lists, Found 5 clusters:
[[2 2 0 0 0]
 [2 2 4 0 0]
 [1 4 4 4 0]
 [1 1 1 4 0]
 [1 1 1 3 0]]

 With tuples of tuple, Found 5 clusters:
[[2 2 0 0 0]
 [2 2 4 0 0]
 [1 4 4 4 0]
 [1 1 1 4 0]
 [1 1 1 3 0]]
Both times the output is a NumPy array. If you need a plain list of lists instead, call .tolist() on the result or change the function internally.
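Note that ndimage.label only connects horizontal/vertical neighbors by default. If diagonally touching cells should count as one region, you can pass a structuring element; a minimal sketch of the difference:
from scipy import ndimage
import numpy as np

# A 3x3 structuring element of all True: diagonal neighbors also connect.
# The default structure connects only horizontal/vertical neighbors.
diag = ndimage.generate_binary_structure(2, 2)

mask = np.array([[1, 0],
                 [0, 1]]) == 1
print(ndimage.label(mask)[1])                  # 2 components with the default
print(ndimage.label(mask, structure=diag)[1])  # 1 component with diagonals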
You can use skimage.measure.label:
>>> import numpy as np
>>> from skimage import measure
>>>
>>> a = np.array([[1,1,0,0,0],
...               [1,1,2,0,0],
...               [0,2,2,2,0],
...               [0,0,0,2,0],
...               [0,0,0,1,0]])
>>> measure.label(a, background=a.max()+1)
array([[1, 1, 2, 2, 2],
       [1, 1, 3, 2, 2],
       [4, 3, 3, 3, 2],
       [4, 4, 4, 3, 2],
       [4, 4, 4, 5, 2]])
Note that the label function has an argument connectivity which determines how blobs/clusters are identified. The default for a 2D array is to consider diagonal neighbors. If that is undesired, connectivity=1 will consider only horizontal/vertical neighbors.
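For example, two blobs that touch only at a corner are merged under the default connectivity but kept separate with connectivity=1; a quick sketch:
>>> b = np.array([[1, 0],
...               [0, 1]])
>>> measure.label(b, background=0)  # default: diagonal neighbors connect
array([[1, 0],
       [0, 1]])
>>> measure.label(b, background=0, connectivity=1)  # orthogonal neighbors only
array([[1, 0],
       [0, 2]])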
I'm not sure how good the performance of this solution is, but here's a recursive approach to identify a connected segment. It takes a coordinate and rewrites the grid so that every cell belonging to the same island as the given coordinate is set to True.
islands = [[1,1,0,0,0],
           [1,1,2,0,0],
           [0,2,2,2,0],
           [0,0,0,2,0],
           [0,0,0,1,0]]

def print_islands():
    for row in islands:
        print(row)

def get_bool_map(i, j):
    checked_cords = []

    def check_island_indexes(island_value, m, i, j):
        # Negative indices would wrap around to the other side of the grid
        if i < 0 or j < 0:
            return
        try:
            if m[i][j] != island_value:
                return
            else:
                if [i, j] in checked_cords:
                    return
                else:
                    checked_cords.append([i, j])
                    m[i][j] = True
        except IndexError:
            return
        # Recurse into the four orthogonal neighbors
        check_island_indexes(island_value, m, i - 1, j)
        check_island_indexes(island_value, m, i + 1, j)
        check_island_indexes(island_value, m, i, j - 1)
        check_island_indexes(island_value, m, i, j + 1)

    check_island_indexes(islands[i][j], islands, i, j)

get_bool_map(0, 4)
print_islands()
[1, 1, True, True, True]
[1, 1, 2, True, True]
[0, 2, 2, 2, True]
[0, 0, 0, 2, True]
[0, 0, 0, 1, True]
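Note that a long, winding island can exceed Python's default recursion limit (roughly 1000 frames). A non-recursive variant of the same flood fill, sketched with an explicit stack over the same islands grid:
def get_bool_map_iterative(i, j):
    island_value = islands[i][j]
    visited = set()
    stack = [(i, j)]
    while stack:
        i, j = stack.pop()
        if (i, j) in visited:
            continue
        # Reject out-of-range coordinates (negative ones would wrap around)
        if i < 0 or j < 0 or i >= len(islands) or j >= len(islands[0]):
            continue
        if islands[i][j] != island_value:
            continue
        visited.add((i, j))
        islands[i][j] = True
        # Push the four orthogonal neighbors
        stack.extend([(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)])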
I would like to replace the N smallest elements in each row with 0, such that the resulting array keeps the same order and shape as the original array.
Specifically, if the original numpy array is:
import numpy as np
x = np.array([[0,50,20],[2,0,10],[1,1,0]])
With N = 2, I would like the result to be the following:
x = np.array([[0,50,0],[0,0,10],[0,1,0]])
I tried the following, but in the last row it replaces 3 elements instead of 2 (because it replaces both 1s, not just one of them):
import numpy as np

N = 2
x = np.array([[0,50,20],[2,0,10],[1,1,0]])
x_sorted = np.sort(x, axis=1)
x_sorted[:, N:] = 0
replace = x_sorted.copy()
final = np.where(np.isin(x, replace), 0, x)
Note that this is a small example and I would like it to work for a much bigger matrix.
Thanks for your time!
One way using numpy.argsort:
N = 2
x[x.argsort().argsort() < N] = 0
Output:
array([[ 0, 50,  0],
       [ 0,  0, 10],
       [ 0,  1,  0]])
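Calling argsort twice yields the rank of each element within its row (0 for the smallest), so comparing the ranks with N marks exactly the N smallest entries per row. For example, with the original x:
x = np.array([[0, 50, 20], [2, 0, 10], [1, 1, 0]])
ranks = x.argsort().argsort()  # rank of each element within its row
print(ranks)
# [[0 2 1]
#  [1 0 2]
#  [1 2 0]]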
Use numpy.argpartition to find the index of N smallest elements, and then use the index to replace values:
N = 2
idy = np.argpartition(x, N, axis=1)[:, :N]
x[np.arange(len(x))[:,None], idy] = 0
x
array([[ 0, 50,  0],
       [ 0,  0, 10],
       [ 1,  0,  0]])
Notice that if there are ties, which values get replaced can depend on the algorithm used; here the second 1 in the last row was replaced rather than the first.
I have two numpy arrays of equal size. They contain the values 1, 0, and -1. I can count the number of matching ones and negative ones, but I'm not sure how to count the matching elements that have the same index and a value of zero.
I'm a little confused about how to proceed here.
Here is some code:
print(actual_direction.shape)
print(predicted_direction.shape)
act = actual_direction
pre = predicted_direction
part1 = act[pre == 1]
part2 = part1[part1 == 1]
result1 = part2.sum()
part3 = act[pre == -1]
part4 = part3[part3 == -1]
result2 = part4.sum() * -1
non_zeros = result1 + result2
zeros = len(act) - non_zeros
print(f'zeros : {zeros}\n')
print(f'non_zeros : {non_zeros}\n')
final_result = non_zeros + zeros
print(f'result1 : {result1}\n')
print(f'result2 : {result2}\n')
print(f'final_result : {final_result}\n')
Here is the printout:
(11279,)
(11279,)
zeros : 5745.0
non_zeros : 5534.0
result1 : 2217.0
result2 : 3317.0
final_result : 11279.0
So what I've done here is simply subtract the sum of the ones and negative ones from the total length of the array. I can't assume that the difference (zeros: 5745) contains ALL matching elements that contain zeros, can I?
You could try this:
import numpy as np
a = np.array([1, 0, 0, 1, -1, -1, 0, 0])
b = np.array([1, 0, 0, 1, -1, -1, 0, 1])
summ = np.sum((a == 0) & (b == 0))
print(summ)
Output:
3
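The same boolean-mask pattern works for any value, and summing a == b directly counts every position where the two arrays agree:
matches = np.sum(a == b)  # positions where a and b hold the same value
print(matches)
# 7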
You can use numpy.ravel() to flatten out the arrays, then use zip() to compare each element side by side:
import numpy as np

ar1 = np.array([[1, 0, 0],
                [0, 1, 1],
                [0, 1, 0]])
ar2 = np.array([[0, 0, 0],
                [1, 0, 1],
                [0, 1, 0]])

count = 0
for e1, e2 in zip(ar1.ravel(), ar2.ravel()):
    if e1 == e2:
        count += 1
print(count)
Output:
6
You can also do this to list all the matches found, as well as print out the amount:
dup = [e1 for e1, e2 in zip(ar1.ravel(), ar2.ravel()) if e1 == e2]
print(dup)
print(len(dup))
Output:
[0, 0, 1, 0, 1, 0]
6
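For large arrays, the same count can be obtained without a Python-level loop by comparing the arrays element-wise:
count = np.count_nonzero(ar1 == ar2)  # vectorized element-wise comparison
print(count)
# 6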
You have two arrays and want to count the positions where both of these are 0, right?
You can check where an array meets your required condition (a == 0), and then use the element-wise 'and' operator & to check where both arrays meet it:
import numpy as np
a = np.array([1, 0, -1, 0, -1, 1, 1, 1, 1])
b = np.array([1, 0, -1, 1, 0, -1, 1, 0, 1])
both_zero = (a == 0) & (b == 0)  # [False, True, False, False, False, False, False, False, False]
both_zero.sum() # 1
In your updated question you appear to be interested in the similarities and differences between actual values and predictions. For this, a confusion matrix is ideally suited.
from sklearn.metrics import confusion_matrix
confusion_matrix(a, b, labels=[-1, 0, 1])
will give you a confusion matrix as output telling you how many -1s were predicted as -1, 0 and 1, and the same for 0 and +1:
[[1 1 0] # -1s predicted as -1, 0 and 1
[0 1 1] # 0s predicted as -1, 0 and 1
[1 1 3]] # 1s predicted as -1, 0 and 1
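In particular, the count of positions where both arrays are 0 is the middle entry of that matrix, and the diagonal holds the per-class matches:
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(a, b, labels=[-1, 0, 1])
zeros_matched = cm[1, 1]  # 0s predicted as 0 -> 1 for the arrays above
all_matched = cm.trace()  # positions where a and b agree -> 5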
I have an array y composed of 0s and 1s that occur with different frequencies.
For example:
y = np.array([0, 0, 1, 1, 1, 1, 0])
And I have an array x of the same length.
x = np.array([0, 1, 2, 3, 4, 5, 6])
The idea is to filter out elements until there are the same number of 0s and 1s.
A valid solution would be to remove index 5:
x = np.array([0, 1, 2, 3, 4, 6])
y = np.array([0, 0, 1, 1, 1, 0])
A naive method I can think of is to get the difference between the value frequencies of y (in this case 4-3=1), create a mask for y == 1, and switch random elements from True to False until the difference is 0. Then create a mask for y == 0, OR the two masks together, and apply the result to both x and y.
This doesn't really seem the best "python/numpy way" of doing it though.
Any suggestions? Something like randomly select n elements from the highest count, where n is the count of the lowest value.
If this is easier with pandas then that would work for me too.
Naive algorithm, assuming there are more 1s than 0s:
mask_pos = y == 1
mask_neg = y == 0
pos = len(y[mask_pos])
neg = len(y[mask_neg])
diff = pos - neg
while diff > 0:
    rand = np.random.randint(0, len(y))
    if mask_pos[rand]:
        mask_pos[rand] = False
        diff -= 1
mask_final = mask_pos | mask_neg
y_new = y[mask_final]
x_new = x[mask_final]
This naive algorithm is really slow, though.
One way to do that with NumPy is this:
import numpy as np

# Makes a mask to balance ones and zeros
def balance_binary_mask(binary_array):
    # View as boolean so that ~ flips zeros and ones
    binary_array = np.asarray(binary_array, dtype=bool).ravel()
    # Count number of ones
    z = np.count_nonzero(binary_array)
    # If ones are not the majority
    if z <= len(binary_array) // 2:
        # Invert the array
        binary_array = ~binary_array
    # Find ones
    idx = np.nonzero(binary_array)[0]
    # Number of elements to remove
    rem = 2 * len(idx) - len(binary_array)
    # Pick random indices to remove
    rem_idx = np.random.choice(idx, size=rem, replace=False)
    # Make mask
    mask = np.ones_like(binary_array, dtype=bool)
    # Mask elements to remove
    mask[rem_idx] = False
    return mask
# Test
np.random.seed(0)
y = np.array([0, 0, 1, 1, 1, 1, 0])
x = np.array([0, 1, 2, 3, 4, 5, 6])
m = balance_binary_mask(y)
print(m)
# [ True True True True False True True]
y = y[m]
x = x[m]
print(y)
# [0 0 1 1 1 0]
print(x)
# [0 1 2 3 5 6]
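Since you mentioned pandas would also work: a sketch using groupby().sample() (available in pandas >= 1.1) that downsamples the majority class and keeps the original order:
import numpy as np
import pandas as pd

y = np.array([0, 0, 1, 1, 1, 1, 0])
x = np.array([0, 1, 2, 3, 4, 5, 6])

df = pd.DataFrame({"x": x, "y": y})
n = df["y"].value_counts().min()                     # size of the minority class
balanced = df.groupby("y").sample(n=n).sort_index()  # downsample, keep order
x_new, y_new = balanced["x"].to_numpy(), balanced["y"].to_numpy()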
I am given a 2D tensor with stochastic rows. After applying tf.math.greater() and tf.cast(tf.int32) I am left with a tensor of 0s and 1s. I now want to apply a reduce sum to that matrix, but with a condition: if at least one 1 has been summed and a 0 follows, I want to drop all following 1s as well, meaning 1 0 1 should result in 1 instead of 2.
I have tried to solve the problem with tf.scan(), but I was not able to come up with a function that can handle leading 0s, because a row might look like: 0 0 0 1 0 1
One idea was to set the lower part of the matrix to one (because I know everything left of the diagonal will always be 0) and then have a function like tf.scan() run to filter out the spots (see code and error message below).
Let z be the matrix after tf.cast.
helper = tf.matrix_band_part(tf.ones_like(z), -1, 0)
z = tf.math.logical_or(tf.cast(z, tf.bool), tf.cast(helper, tf.bool))
z = tf.cast(z, tf.int32)
z = tf.scan(lambda a, x: x if a == 1 else 0, z)
Resulting in:
ValueError: Incompatible shape for value ([]), expected ([5])
IIUC, this is one way to do what you want without scanning or looping. It may be a bit convoluted, and it actually iterates the columns twice (one cumsum and one cumprod), but since these are vectorized operations I think it is probably faster. The code is TF 2.x but runs the same in TF 1.x (except for the last line, obviously).
import tensorflow as tf

# Example data
a = tf.constant([[0, 0, 0, 0],
                 [1, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 0, 1],
                 [1, 1, 1, 0],
                 [1, 1, 0, 1],
                 [0, 1, 1, 1],
                 [1, 1, 1, 1]])
# Cumulative sum along each row
c = tf.math.cumsum(a, axis=1)
# Find points where we should not sum anymore (value is 0 but cumsum is not)
cutoff = tf.equal(a, 0) & tf.not_equal(c, 0)
# Make mask that is 1 up to the first cutoff in each row and 0 afterwards
mask = tf.math.cumprod(tf.dtypes.cast(~cutoff, tf.uint8), axis=1)
# The result is the largest masked cumulative sum in each row
result = tf.reduce_max(c * tf.dtypes.cast(mask, c.dtype), axis=1)
print(result.numpy())
# [0 1 2 1 3 2 3 4]
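For comparison, a sketch of what the tf.scan approach the question was attempting could look like, scanning column by column while carrying a per-row "still counting" flag (assumes the same a as above):
def step(state, col):
    alive, total = state
    # Stop counting at the first 0 that follows at least one 1
    alive = alive & ~(tf.equal(col, 0) & tf.greater(total, 0))
    total = total + col * tf.dtypes.cast(alive, col.dtype)
    return alive, total

rows = tf.shape(a)[0]
init = (tf.ones([rows], tf.bool), tf.zeros([rows], a.dtype))
# Scan over columns: transposing makes axis 0 the column axis
_, totals = tf.scan(step, tf.transpose(a), initializer=init)
print(totals[-1].numpy())
# [0 1 2 1 3 2 3 4]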