How to turn 1D array into symmetrical 3D array? - python

I have a symmetrical 1D numpy array, for example, something like this:
0 1 2 1 0
How could I turn this into a 3D array (kinda similar to a gaussian kernel), with the value 2 at the center?
As an example of what I mean (though the math is likely not right), in 2D this would be something like this (though I need it to be 3D):
0 0 0 0 0
0 0.5 1 0.5 0
0 1 2 1 0
0 0.5 1 0.5 0
0 0 0 0 0

Acknowledging that this is not a Gaussian kernel, here's how you calculate it:
import numpy as np

a = np.array([0, 1, 2, 1, 0])
center = a[a.size // 2]

result = ((a[:, np.newaxis].repeat(a.size, axis=1) * a)
          [:, :, np.newaxis].repeat(a.size, axis=2) * a
          / center ** 2)
(Not gonna paste the whole output here.)
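For reference, the same result can be written more compactly with plain broadcasting. This is a sketch of my own rather than part of the answer above, and it assumes the example array from the question:

import numpy as np

a = np.array([0, 1, 2, 1, 0])          # symmetric 1D profile from the question
center = a[a.size // 2]

# triple outer product via broadcasting: kernel_3d[i, j, k] = a[i] * a[j] * a[k] / center**2
kernel_3d = a[:, None, None] * a[None, :, None] * a[None, None, :] / center ** 2

print(kernel_3d.shape)       # (5, 5, 5)
print(kernel_3d[2, 2, 2])    # 2.0 at the center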

Related

Evenly Split 3D Numpy Arrays of Varying Sizes [duplicate]

I have a 3D image of size Deep x Weight x Height (for example 10x20x30, meaning 10 images, each of size 20x30).
Given a patch size pd x pw x ph (with pd < Deep, pw < Weight, ph < Height), for example 4x4x4, the center point of a patch is at pd/2 x pw/2 x ph/2. Let the distance between the center points of two consecutive patches be the stride, for example stride = 2.
I want to split the original 3D image into patches with the size and stride given above. How can I do it in Python? Thank you
Use np.lib.stride_tricks.as_strided. This solution does not require the strides to divide the corresponding dimensions of the input stack. It even allows for overlapping patches (just do not write to the result in this case, or make a copy). It is therefore more flexible than other approaches:
import numpy as np
from numpy.lib import stride_tricks

def cutup(data, blck, strd):
    sh = np.array(data.shape)
    blck = np.asanyarray(blck)
    strd = np.asanyarray(strd)
    nbl = (sh - blck) // strd + 1
    strides = np.r_[data.strides * strd, data.strides]
    dims = np.r_[nbl, blck]
    data6 = stride_tricks.as_strided(data, strides=strides, shape=dims)
    return data6  # .reshape(-1, *blck)
#demo
x = np.zeros((5, 6, 12), int)
y = cutup(x, (2, 2, 3), (3, 3, 5))
y[...] = 1
print(x[..., 0], '\n')
print(x[:, 0, :], '\n')
print(x[0, ...], '\n')
Output:
[[1 1 0 1 1 0]
[1 1 0 1 1 0]
[0 0 0 0 0 0]
[1 1 0 1 1 0]
[1 1 0 1 1 0]]
[[1 1 1 0 0 1 1 1 0 0 0 0]
[1 1 1 0 0 1 1 1 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0]
[1 1 1 0 0 1 1 1 0 0 0 0]
[1 1 1 0 0 1 1 1 0 0 0 0]]
[[1 1 1 0 0 1 1 1 0 0 0 0]
[1 1 1 0 0 1 1 1 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0]
[1 1 1 0 0 1 1 1 0 0 0 0]
[1 1 1 0 0 1 1 1 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0]]
Explanation: Numpy arrays are organised in terms of strides, one for each dimension; data point [x, y, z] is located in memory at address base + stride_x * x + stride_y * y + stride_z * z.
The stride_tricks.as_strided factory allows you to directly manipulate the strides and shape of a new array that shares its memory with a given array. Try this only if you know what you're doing, because no checks are performed, meaning you can shoot yourself in the foot by addressing out-of-bounds memory.
The code uses this function to split up each of the three existing dimensions into two new ones: one for the corresponding within-block coordinate (this will have the same stride as the original dimension, because adjacent points in a block correspond to adjacent points in the whole stack) and one for the block index along this axis; this will have stride = original stride x block stride.
All the code does is compute the correct strides and dimensions (= block dimensions and block counts along the three axes).
Since the data are shared with the original array, when we set all points of the 6D array to 1, they are also set in the original array, exposing the block structure in the demo. Note that the commented-out reshape in the last line of the function breaks this link, because it forces a copy.
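As a quick sanity check of that explanation (my own sketch, assuming a C-contiguous int64 array as in the demo), you can print the strides and dimensions that cutup computes for the demo call above:

import numpy as np

x = np.zeros((5, 6, 12), np.int64)                        # C-contiguous: strides are (576, 96, 8) bytes
blck = np.asanyarray((2, 2, 3))
strd = np.asanyarray((3, 3, 5))

nbl = (np.array(x.shape) - blck) // strd + 1              # blocks per axis
strides = np.r_[np.array(x.strides) * strd, x.strides]    # block-index strides, then within-block strides
dims = np.r_[nbl, blck]                                   # 6D shape: block counts, then block shape

print(nbl)        # [2 2 2]
print(strides)    # [1728  288   40  576   96    8]
print(dims)       # [2 2 2 2 2 3]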
The skimage module offers an integrated solution with view_as_blocks.
The source is available online.
Take care to choose Deep, Weight, Height as multiples of pd, pw, ph, because as_strided does not check bounds.
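For completeness, here is a minimal sketch of my own for the skimage route mentioned above. Note that view_as_blocks only handles non-overlapping blocks and requires the block shape to divide the array shape exactly, while view_as_windows accepts a step for strided or overlapping patches:

import numpy as np
from skimage.util import view_as_blocks

x = np.zeros((10, 20, 30), int)                    # Deep x Weight x Height from the question
blocks = view_as_blocks(x, block_shape=(5, 4, 6))  # block shape must divide the array shape exactly
print(blocks.shape)                                # (2, 5, 5, 5, 4, 6): block indices, then within-block indices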

Pick random coordinates in Numpy array based on condition

I have used convolution2d to generate some statistics on conditions of local patterns. To be complete: I'm working with images, the value 0.5 is my 'gray-screen', and unfortunately I cannot use masks before this step (it depends on some other packages). I want to add new objects to my image, but they should overlap at least 75% of the non-gray-screen area. Let's assume the new object is square: I mask the image into gray-screen versus the rest and do a 2D convolution with an n by n matrix filled with 1s, so I get the number of gray-screen pixels in each patch. This all works, so I end up with a matrix of suitable places to put my new object. How do I efficiently pick a random one from this matrix?
Here is a small example with a 5x5 image and a 2x2 convolution matrix, where I want a random coordinate in my last matrix with a 1 (because there is at most one 0.5 in that patch).
Image:
1 0.5 0.5 0 1
0.5 0.5 0 1 1
0.5 0.5 1 1 0.5
0.5 1 0 0 1
1 1 0 0 1
Convolution matrix:
1 1
1 1
Convoluted image:
3 3 1 0
4 2 0 1
3 1 0 1
1 0 0 0
Conditioned on <= 1:
0 0 1 1
0 0 1 1
0 1 1 1
1 1 1 1
How do I get a uniformly distributed coordinate of the 1s efficiently?
np.where and np.random.randint should do the trick:
# grab the indices of the ones
x, y = np.where(convoluted_image <= 1)
# choose one index at random
i = np.random.randint(len(x))
random_pos = [x[i], y[i]]
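An equivalent variant (my own sketch, same idea): collect the qualifying coordinates with np.argwhere and pick one row at random. Using the convoluted image from the example:

import numpy as np

convoluted_image = np.array([[3, 3, 1, 0],
                             [4, 2, 0, 1],
                             [3, 1, 0, 1],
                             [1, 0, 0, 0]])

candidates = np.argwhere(convoluted_image <= 1)    # one (row, col) pair per qualifying cell
random_pos = candidates[np.random.randint(len(candidates))]
print(random_pos)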

Modifying numpy array to get minimum number of values between elements

I have a numpy array of the form: arr = 0 0 0 1 0 0 0 0 0 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 1
I would like to modify it such that there are at least seven 0s between any two 1s. If there are fewer than seven 0s, then convert the intervening 1s to 0.
I am thinking that numpy.where could work here, something like numpy.where(arr[:] > 1.0, 1.0, 0.0), but I am not sure how to do it in a succinct, Pythonic manner.
The output should look like this:
0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
The following code is a really ugly hack, but it gets the job done in linear time (assuming 7 is fixed) without resorting to Python loops and without needing anything like Numba or Cython. I don't recommend using it, especially if 7 might be 700 next month.
import numpy as np

def rolling_window(a, window):
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)

arr2 = np.append(1 - arr, [0] * 7)
np.power.at(rolling_window(arr2[1:], 7), np.arange(len(arr)), arr2[:-7, None])
arr = 1 - arr2[:-7]
It works by setting 1s to 0s and vice versa, then for each element x, setting each element y in the next 7 spots to y**x, then undoing the 0/1 switch. The power operation sets everything within 7 spaces of a 0 to 1, in such a way that the effect is immediately visible to power operations further down the array.
This is just a simple implementation using for loops and ifs, but I am pretty sure it can be condensed (a lot!). And there is no need to use NumPy for this; it would only complicate things.
question = [0,0,0,1,0,0,0,0,0,0,0,1,0,1,0,1,0,0,0,0,0,0,0,0,0,0,1]
result   = [0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1]

indexesOf1s = []
for index, item in enumerate(question):  # first collect the indices of all 1s
    if item == 1:
        indexesOf1s.append(index)

lastKept = None                          # index of the most recent 1 that was kept
for i in indexesOf1s:                    # drop any 1 with fewer than seven 0s since the last kept 1
    if lastKept is not None and i - lastKept < 8:
        question[i] = 0
    else:
        lastKept = i

print(question)
print(result)

Reshape 1D numpy array to 3D with x,y,z ordering

Say I have a 1D array of values corresponding to x, y, and z values like this:
x y z arr_1D
0 0 0 0
1 0 0 1
0 1 0 2
1 1 0 3
0 2 0 4
1 2 0 5
0 0 1 6
...
0 2 3 22
1 2 3 23
I want to get arr_1D into a 3D array arr_3D with shape (nx, ny, nz) (in this case (2, 3, 4)). I'd like the values to be referenceable as arr_3D[x_index, y_index, z_index], so that, for example, arr_3D[1,2,0] = 5. Using numpy.reshape(arr_1D, (2,3,4)) gives me a 3D matrix with the right dimensions, but not ordered the way I want. I know I can use the following code, but I'm wondering if there's a way to avoid the clunky nested for loops.
import numpy as np

arr_1d = np.arange(24)
nx = 2
ny = 3
nz = 4
arr_3d = np.empty((nx, ny, nz))
count = 0
for k in range(nz):
    for j in range(ny):
        for i in range(nx):
            arr_3d[i, j, k] = arr_1d[count]
            count += 1
print(arr_3d[1, 2, 0])
output: 5.0
What would be the most pythonic and/or fast way to do this? I'll typically want to do this for arrays of length on the order of 100,000.
You were really close, but since you want the x axis to be the one that is iterated through the fastest, you need to use something like
arr_3d = arr_1d.reshape((4,3,2)).transpose()
So you create an array with the elements in the right order but the dimensions in the wrong order, and then you correct the order of the dimensions.
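Equivalently (a note of my own, not from the answer), you can ask reshape for Fortran ordering directly, so the first axis varies fastest:

import numpy as np

arr_1d = np.arange(24)
arr_3d = arr_1d.reshape((2, 3, 4), order='F')   # x varies fastest, then y, then z
print(arr_3d[1, 2, 0])                          # 5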

Generate all the possible undirected graphs

What is an efficient solution to generate all the possible graphs using an incidence matrix?
The problem is equivalent to generating all the possible binary triangular matrices.
My first idea was to use Python with itertools. For instance, to generate all the possible 4x4 matrices:
for b in itertools.combinations_with_replacement((0, 1), n - 3):
    b_1 = [i for i in b]
    for c in itertools.combinations_with_replacement((0, 1), n - 2):
        c_1 = [i for i in c]
        for d in itertools.combinations_with_replacement((0, 1), n - 1):
            d_1 = [i for i in d]
and then you create the matrix by adding the respective number of zeroes...
But this is not correct, since we skip some graphs...
So, any ideas?
Perhaps I can use the isomorphism between n x n matrices and vectors of length n*n, generate all the possible vectors of 0s and 1s, and then cut each one into my matrix, but I think there's a more efficient solution.
Thank you
I added the matlab tag because it's a problem you can also have in numerical analysis and MATLAB.
I assume you want lower triangular matrices, and that the diagonal need not be zero. The code can be easily modified if that's not the case.
n = 4;                                  % matrix size
vals = dec2bin(0:2^(n*(n+1)/2)-1)-'0';  % each row of vals codes a matrix
mask = tril(reshape(1:n^2, n, n))>0;    % decoding mask
for v = vals.'                          % the loop picks one column each time
    matrix = zeros(n);                  % initialize to zeros
    matrix(mask) = v;                   % decode into matrix
    disp(matrix)                        % do something with matrix
end
Each iteration gives one possible matrix. For example, the first matrices for n=4 are
matrix =
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
matrix =
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 1
matrix =
0 0 0 0
0 0 0 0
0 0 0 0
0 0 1 0
matrix =
0 0 0 0
0 0 0 0
0 0 0 0
0 0 1 1
Here is an example solution using numpy that generates all simple graphs:
It first generates the indices iu of the upper triangular part. The loop converts the number k to its binary representation and then assigns it to the upper triangular part G[iu].
import numpy as np

n = 4
iu = np.triu_indices(n, 1)  # start at the first minor diagonal
G = np.zeros([n, n])

def dec2bin(k, bitlength=0):
    return [1 if digit == '1' else 0 for digit in bin(k)[2:].zfill(bitlength)]

for k in range(0, 2**(iu[0].size)):
    G[iu] = dec2bin(k, iu[0].size)
    print(G)
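An alternative way to enumerate the bit patterns (my own sketch) is itertools.product, which avoids the manual dec2bin helper:

import itertools
import numpy as np

n = 4
iu = np.triu_indices(n, 1)                  # strictly upper-triangular positions
G = np.zeros((n, n))

for bits in itertools.product((0, 1), repeat=iu[0].size):
    G[iu] = bits                            # one simple graph per bit pattern
    # use G here; e.g. G + G.T gives the full symmetric adjacency matrix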
