I am looking for an efficient way of implementing uniform crossover in numpy/pandas.
Every solution consists of a numpy array and a number:
population = pd.DataFrame({
    "mask": [get_random_genotype() for _ in range(pop_size)],
    "X": [np.random.random() for _ in range(pop_size)]})
I would like to do a parallel uniform crossover of a chosen sub-population, e.g.:
pairs = np.array([[0, 2], [1, 3]])
for male, female in pairs:
    mask = random_mask()  # [True, False, False, True]
    new_male.mask = where(mask, male, female)
    new_female.mask = where(mask, female, male)
but in a completely parallel manner. I have already tried:
temp: pd.DataFrame = population.copy()
draw: np.ndarray = np.random.choice(
    a=[True, False],
    size=np.stack(temp["mask"][pairs[X, 0]]).shape,
)
population.loc[pairs[X, 0], "mask"] = pd.Series(np.where(draw, np.stack(temp["mask"][pairs[X, 0]]), np.stack(temp["mask"][pairs[X, 1]])).tolist())
population.loc[pairs[X, 1], "mask"] = pd.Series(np.where(draw, np.stack(temp["mask"][pairs[X, 1]]), np.stack(temp["mask"][pairs[X, 0]])).tolist())
but it didn't work: apparently some of my masks became NaNs. I have no idea whether I should solve it this way. I think a solution that works the same way on a float/int instead of an array would be sufficient as well:
X = pd.DataFrame({"x": [x // 2 for x in range(10)]})
mask = [True, False, False, True]
X.loc[[1, 2, 3, 4], "x"] = pd.Series(np.where(mask, X.loc[[1, 2, 3, 4], "x"], X.loc[[5, 6, 7, 8], "x"]).tolist(), dtype=int)
NaNs are still appearing.
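One likely cause (my assumption; the question does not confirm it) is pandas index alignment: the pd.Series built from the np.where result gets a fresh 0..n-1 index that does not match the row labels targeted by .loc, so the non-matching rows are filled with NaN. A minimal sketch that avoids the intermediate Series entirely:

import numpy as np
import pandas as pd

X = pd.DataFrame({"x": [x // 2 for x in range(10)]})
mask = np.array([True, False, False, True])

left = X.loc[[1, 2, 3, 4], "x"].to_numpy()
right = X.loc[[5, 6, 7, 8], "x"].to_numpy()

# assigning a plain numpy array bypasses index alignment, so no NaNs appear
X.loc[[1, 2, 3, 4], "x"] = np.where(mask, left, right)

Passing index=[1, 2, 3, 4] when constructing the Series would work as well; the same idea applies to the array-valued "mask" column.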
I have a function that takes in 4 single-value inputs and returns a single float output, for example:
import numpy as np
from scipy.stats import multivariate_normal

grid_step = 0.25  # in units of sigma
grid_x, grid_y = np.mgrid[-2:2+grid_step:grid_step, -2:2+grid_step:grid_step]
pos = np.dstack((grid_x, grid_y))
rv = multivariate_normal([0.0, 0.0], [[1.0, 0], [0, 1.0]])
grid_pdf = rv.pdf(pos)*grid_step**2
norm_pdf = np.sum(rv.pdf(pos))*grid_step**2

def cal_prob(x, x_err, y, y_err):
    x_grid = grid_x*x_err + x
    y_grid = grid_y*y_err + y
    PSB_grid = ((x_grid > 3) & (y_grid < 10) & (y_grid < 10**(0.23*x_grid - 0.46)))
    PSB_prob = np.sum(PSB_grid*grid_pdf)/norm_pdf
    return PSB_prob
This function estimates the probability that some x-y measurement lies within a defined region of x-y space, given the uncertainties on x and y. It assumes the uncertainties are Gaussian and uncorrelated. Using the pre-made grid_pdf, it checks which grid points (scaled by x_err/y_err and shifted by x/y) fall within the defined region, multiplies the True/False grid by grid_pdf, and normalizes by norm_pdf. The probability is the sum of the resulting array.
I want this function to be applied element-wise, with those 4 inputs stored in 4 separate numpy masked arrays of the same shape (with possibly different masks), and then use the function outputs to create a new array of the same shape. Is there a way that doesn't use a for loop?
Thanks!
My current solution is this:
mask1 = np.array([[False, True, False], [True, True, True], [True, False, False]])
mask2 = np.array([[True, True, True], [True, True, False], [False, False, True]])
# the only overlaps should be [0,1], [1,0] and [1,1]
x = np.ma.array(np.random.randn(*mask1.shape), mask=~mask1)
x_err = np.ma.array(np.abs(np.random.randn(*mask1.shape))*0.1, mask=~mask1)
y = np.ma.array(np.random.randn(*mask2.shape), mask=~mask2)
y_err = np.ma.array(np.abs(np.random.randn(*mask2.shape))*0.1, mask=~mask2)
# a combined masked array to iterate through (its mask is the union of all four masks)
all_mask = x + x_err + y + y_err
prob = np.zeros(mask1.shape)
prob = np.ma.masked_where(np.ma.getmask(all_mask), prob)
for i, xi in np.ma.ndenumerate(all_mask):
    prob[i] = cal_prob(xi, x_err[i], y[i], y_err[i])
A test of np.vectorize with a masked array input:
In [180]: def foo(x):
     ...:     print(x)
     ...:     return 2*x
     ...:
In [181]: np.vectorize(foo)(np.ma.masked_array([1,2,3],[True,False,True]))
1
1
2
3
Out[181]:
masked_array(data=[--, 4, --],
mask=[ True, False, True],
fill_value=999999)
In [182]: _.data
Out[182]: array([2, 4, 6])
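For the original four-input case, a minimal sketch (my own assembly of the pieces above, not a confirmed answer, using x, x_err, y, y_err and cal_prob as defined earlier): vectorize cal_prob over the filled data, then re-apply the combined mask so masked entries never influence the result. Note that np.vectorize still calls cal_prob once per element in Python, so it removes the explicit loop but not the per-call overhead.

import numpy as np

# union of all four masks (True = masked)
combined_mask = (np.ma.getmaskarray(x) | np.ma.getmaskarray(x_err)
                 | np.ma.getmaskarray(y) | np.ma.getmaskarray(y_err))

vec_prob = np.vectorize(cal_prob)

# fill masked slots with harmless placeholder values; those results are discarded by the mask
prob = np.ma.masked_array(
    vec_prob(np.ma.filled(x, 0.0), np.ma.filled(x_err, 1.0),
             np.ma.filled(y, 0.0), np.ma.filled(y_err, 1.0)),
    mask=combined_mask,
)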
My goal is to set some labels in a 2D array to zero without using a for loop. Is there a faster numpy way to do this? The ideal scenario would be something like temp_arr[labeled_array not in labels] = 0, but that does not work the way I'd like it to.
labeled_array = np.array([[1, 2, 3],
                          [4, 5, 6],
                          [7, 8, 9]])
labels = [2, 4, 5, 6, 8]
temp_arr = np.zeros(labeled_array.shape).astype(int)
for label in labels:
    temp_arr[labeled_array == label] = label

>>> temp_arr
[[0 2 0]
 [4 5 6]
 [0 8 0]]
The for loop gets quite slow when there are a lot of iterations to go through, so it is important to improve the execution time with numpy.
You can use temp_arr = np.where(np.isin(labeled_array, labels), labeled_array, 0). For such a small array, though, the timing difference does not seem to be significant.
import numpy as np
import time

labeled_array = np.array([[1, 2, 3],
                          [4, 5, 6],
                          [7, 8, 9]])
labels = [2, 4, 5, 6, 8]

start = time.time()
temp_arr_0 = np.zeros(labeled_array.shape).astype(int)
for label in labels:
    temp_arr_0[labeled_array == label] = label
end = time.time()
print(f"Loop takes {end - start}")

start = time.time()
temp_arr_1 = np.where(np.isin(labeled_array, labels), labeled_array, 0)
end = time.time()
print(f"np.where takes {end - start}")

labels = {2, 4, 5, 6, 8}
start = time.time()
temp_arr_2 = np.where(np.isin(labeled_array, labels), labeled_array, 0)
end = time.time()
print(f"np.where with set takes {end - start}")
outputs
Loop takes 5.3882598876953125e-05
np.where takes 0.00010514259338378906
np.where with set takes 3.314018249511719e-05
Note, though, that per the NumPy documentation np.isin does not handle a Python set as expected: the set is converted to a one-element object array rather than to its values, so the last variant is fast but its result is not equivalent. Keep labels as a list or array.
In case the values in labels are unique (and memory isn't a concern), here's a way to go.
As the very first step, we convert labels to an ndarray:
labels = np.array(labels)
Then, we produce two broadcastable arrays from labeled_array and labels
labeled_row = labeled_array.ravel()[np.newaxis, :]
labels_col = labels[:, np.newaxis]
The above code block produces, respectively, a row array of shape (1, 9)
array([[1, 2, 3, 4, 5, 6, 7, 8, 9]])
and a column array of shape (5,1)
array([[2],
[4],
[5],
[6],
[8]])
Now the two shapes are broadcastable (see the NumPy broadcasting rules), so we can perform elementwise comparison, e.g.
mask = labeled_row == labels_col
which returns a (5,9)-shaped boolean mask
array([[False, True, False, False, False, False, False, False, False],
[False, False, False, True, False, False, False, False, False],
[False, False, False, False, True, False, False, False, False],
[False, False, False, False, False, True, False, False, False],
[False, False, False, False, False, False, False, True, False]])
If the assumption above is fulfilled, each row will have a number of True values equal to the number of times the corresponding label appears in your labeled_array. You can also have all-False rows, e.g. when a label in labels never appears in your labeled_array.
To find out which labels actually appeared in your labeled_array, you can use np.nonzero on the boolean mask
indices = np.nonzero(mask)
which returns a tuple containing the row and column indices of the non-zero (i.e. True) elements
(array([0, 1, 2, 3, 4], dtype=int64), array([1, 3, 4, 5, 7], dtype=int64))
By construction, the first element of the tuple above tells you which labels actually appeared in your labeled_array, e.g.
appeared_labels = labels[indices[0]]
(note that appeared_labels can contain repeated consecutive entries if that specific label appears more than once in your labeled_array).
We can now build and fill the output array:
out = np.zeros(labeled_array.size, dtype=int)
out[indices[1]] = labels[indices[0]]
and bring it back to the original shape
out = out.reshape(*labeled_array.shape)
array([[0, 2, 0],
[4, 5, 6],
[0, 8, 0]])
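Putting the steps together, a compact sketch assembling the whole approach (same names as above):

import numpy as np

labeled_array = np.array([[1, 2, 3],
                          [4, 5, 6],
                          [7, 8, 9]])
labels = np.array([2, 4, 5, 6, 8])

# (n_labels, n_pixels) boolean mask via broadcasting
mask = labels[:, np.newaxis] == labeled_array.ravel()[np.newaxis, :]
rows, cols = np.nonzero(mask)

out = np.zeros(labeled_array.size, dtype=int)
out[cols] = labels[rows]          # place each matched label at its flat position
out = out.reshape(labeled_array.shape)
print(out)
# [[0 2 0]
#  [4 5 6]
#  [0 8 0]]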
Background:
I have an RGB image with three dimensions (W, H, C), where C = 3. I want to mask a few colors like (0, 0, 255) and (0, 255, 255) in this image. The problem becomes matching the last axis of the image against a list of colors I defined:
color_list = [[255,0,0], [255,255,0], [255,0,255]]  # just an example
It is easy to do with one color:
mask = np.all(image == [255,0,0], axis = 2)
But I have to run a for loop if I have multiple colors.
masks = [np.all(image == color, axis = 2) for color in color_list]
mask = np.any(masks, axis=0)
Question:
Any elegant way to get the mask with multiple colors?
Here is one way using broadcasting, which is more efficient since the looping happens in C. The idea is to make the arrays comparable. It might look difficult at the beginning, but once you know how it works this is all you will use [conditions apply].
import numpy as np

x = np.array([[[255, 0, 0], [0, 255, 0], [0, 255, 0], [0, 255, 0]],
              [[255, 0, 0], [0, 255, 0], [0, 255, 0], [0, 255, 0]]])
print(x.shape)
# (2, 4, 3)
color_list = np.array([[255, 0, 0], [255, 255, 0], [255, 0, 255]])
print(color_list.shape)
# (3, 3)

### Analogy for interpreting broadcasting
# "Repeating" here is only an analogy; broadcasting does not allocate new copies in memory.
# x[:, :, np.newaxis, :] has shape (2, 4, 1, 3), which makes the arrays compatible.
# Conceptually, broadcasting repeats it along axis=2 to (2, 4, 3, 3), and repeats
# color_list over (2, 4) to (2, 4, 3, 3) as well, so the element-wise == comparison
# produces a (2, 4, 3, 3) boolean array.
f1 = np.all(x[:, :, np.newaxis, :] == color_list, axis=3)
#array([[[ True, False, False],
# [False, False, False],
# [False, False, False],
# [False, False, False]],
#
# [[ True, False, False],
# [False, False, False],
# [False, False, False],
# [False, False, False]]])
mask = np.any(f1, axis=2)
We have a target array with shape (W, H, C) == (2, 4, 3), and we need to find which of its size-3 color triples appear in color_list == [[255,0,0], [255,255,0], [255,0,255]].
Ideally we would like to do a cross comparison: if one side has M entries and the other has N, then after the comparison we want M * N results. This amounts to repeating each of the M entries N times and comparing. That may not seem possible at first glance, but numpy provides broadcasting, which conceptually repeats the entries like your for loop does (in practice it is highly memory efficient and does not create actual copies).
So we need to broadcast, but the two arrays are not yet compatible: as the broadcasting rules state, shapes are compared right to left, and each pair of dimensions must either be equal or one of them must be 1.
color_list has shape (3, 3) and x has shape (2, 4, 3). We add a new axis to x to make it compatible for broadcasting: x[:, :, np.newaxis, :] has shape (2, 4, 1, 3). Now both are compatible and we can compare.
Comparing along the last axis (the color channel, axis=3) and then reducing with np.any over the next axis (axis=2) gives a (W, H) boolean array where each entry is True if that pixel's color triple appears in color_list.
This is exactly the same technique used to calculate a distance matrix between two arrays of points, as in: Fast way to calculate min distance between two numpy arrays of 3D points.
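For illustration only (not taken from that linked post), the same newaxis-and-reduce pattern produces a pairwise distance matrix between two point sets:

import numpy as np

a = np.random.rand(5, 3)   # 5 points in 3D
b = np.random.rand(7, 3)   # 7 points in 3D

# (5, 1, 3) - (1, 7, 3) broadcasts to (5, 7, 3); reduce over the coordinate axis
dists = np.sqrt(np.sum((a[:, np.newaxis, :] - b[np.newaxis, :, :])**2, axis=-1))
print(dists.shape)  # (5, 7)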
I have a volume represented by a 3D ndarray, X, with values between, say, 0 and 255, and I have another 3D ndarray, Y, that is an arbitrary mask of the first array, with values of either 0 or 1.
I want to find the indices of a random sample of 50 voxels that are both greater than zero in X, the 'image', and equal to 1 in Y, the 'mask'.
My experience is with R, where the following would work:
idx <- sample(which(X>0 & Y==1), 50)
Maybe the advantage in R is that I can index 3D arrays linearly, whereas using a single index in numpy just gives me a 2D slice.
I guess it probably involves numpy.random.choice, but it doesn't seem like I can use that conditionally, let alone conditioned on two different arrays. Is there another approach I should be using instead?
Here's one way -
N = 50 # number of samples needed (50 for your actual case)
# Get mask based on conditionals
mask = (X>0) & (Y==1)
# Get corresponding linear indices (easier to random sample in next step)
idx = np.flatnonzero(mask)
# Get random sample
rand_idx = np.random.choice(idx, N)
# Format into three columnar output (each col for each dim/axis)
out = np.c_[np.unravel_index(rand_idx, X.shape)]
If you need the random sample without replacement, use np.random.choice() with the optional argument replace=False.
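For example (assuming idx has at least N entries):

rand_idx = np.random.choice(idx, N, replace=False)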
Sample run -
In [34]: np.random.seed(0)
...: X = np.random.randint(0,4,(2,3,4))
...: Y = np.random.randint(0,2,(2,3,4))
In [35]: N = 5 # number of samples needed (50 for your actual case)
...: mask = (X>0) & (Y==1)
...: idx = np.flatnonzero(mask)
...: rand_idx = np.random.choice(idx, N)
...: out = np.c_[np.unravel_index(rand_idx, X.shape)]
In [37]: mask
Out[37]:
array([[[False, True, True, False],
[ True, False, True, False],
[ True, False, True, True]],
[[False, True, True, False],
[False, False, False, True],
[ True, True, True, True]]], dtype=bool)
In [38]: out
Out[38]:
array([[1, 0, 1],
[0, 0, 1],
[0, 0, 2],
[1, 1, 3],
[1, 1, 3]])
Correlate the output out against the places of True values in mask for a quick verification.
If you don't want to flatten to get the linear indices and would rather get the indices per dim/axis directly, it can be done like so -
i0,i1,i2 = np.where(mask)
rand_idx = np.random.choice(len(i0), N)
out = np.c_[i0,i1,i2][rand_idx]
For performance, index first and then concatenate with np.c_ at the last step -
out = np.c_[i0[rand_idx], i1[rand_idx], i2[rand_idx]]
I want to invert the true/false values in my numpy masked array.
So in the example below, I don't want to mask out the second value in the data array; I want to mask out the first and third values.
Below is just an example. My masked array is created by a longer process that runs before, so I cannot change the mask array itself. Is there another way to invert the values?
import numpy
data = numpy.array([[ 1, 2, 5 ]])
mask = numpy.array([[0,1,0]])
numpy.ma.masked_array(data, mask)
import numpy
data = numpy.array([[ 1, 2, 5 ]])
mask = numpy.array([[0,1,0]])
numpy.ma.masked_array(data, ~mask)  # note this probably won't work right for non-boolean (T/F) values
#or
numpy.ma.masked_array(data, numpy.logical_not(mask))
for example
>>> a = numpy.array([False,True,False])
>>> ~a
array([ True, False, True], dtype=bool)
>>> numpy.logical_not(a)
array([ True, False, True], dtype=bool)
>>> a = numpy.array([0,1,0])
>>> ~a
array([-1, -2, -1])
>>> numpy.logical_not(a)
array([ True, False, True], dtype=bool)
In NumPy, the '~' operator (bitwise invert) also acts as a logical not on boolean arrays. For example:
import numpy
data = numpy.array([[ 1, 2, 5 ]])
mask = numpy.array([[False,True,False]])
result = data[~mask]
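Note that data[~mask] above returns only the selected elements (array([1, 5])) rather than a masked array. If what you actually have is an already-built masked array (as in the question) rather than a separate mask array, here is a small sketch of one way to rebuild it with the mask inverted, using np.ma.getmaskarray (which returns a full boolean mask even when nothing is masked):

import numpy as np

data = np.array([[1, 2, 5]])
mask = np.array([[False, True, False]])
a = np.ma.masked_array(data, mask)        # the masked array produced earlier

# invert the existing mask: previously masked entries become visible and vice versa
inverted = np.ma.masked_array(a.data, ~np.ma.getmaskarray(a))
print(inverted)  # [[-- 2 --]]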