I am trying to implement a 3D DFT but I am running into some trouble. What I believe I should do is to just do 3 consecutive 1D DFTs, one in each direction. Assuming that the 1D DFT is correct, can you see what is wrong with this code:
def dft3d(self, real3d, img3d, nx, ny, nz, dir):
    # Transform depth
    for i in range(nx):
        for j in range(ny):
            real = numpy.zeros(nz)
            img = numpy.zeros(nz)
            for k in range(nz):
                real[k] = real3d[i][j][k]
                img[k] = img3d[i][j][k]
            self.dft(real, img, nz, 1)  # This was indented too much. It should work now.
            for k in range(nz):
                real3d[i][j][k] = real[k]
                img3d[i][j][k] = img[k]
    # Transform cols
    for k in range(nz):
        for i in range(nx):
            real = numpy.zeros(ny)
            img = numpy.zeros(ny)
            for j in range(ny):
                real[j] = real3d[i][j][k]
                img[j] = img3d[i][j][k]
            self.dft(real, img, ny, 1)
            for j in range(ny):
                real3d[i][j][k] = real[j]
                img3d[i][j][k] = img[j]
    # Transform rows
    for j in range(ny):
        for k in range(nz):
            real = numpy.zeros(nx)
            img = numpy.zeros(nx)
            for i in range(nx):
                real[i] = real3d[i][j][k]
                img[i] = img3d[i][j][k]
            self.dft(real, img, nx, 1)
            for i in range(nx):
                real3d[i][j][k] = real[i]
                img3d[i][j][k] = img[i]
I know there are built-in versions of this in Python, but I can't use those: I'm only prototyping my algorithm in Python so I can compare its results against the built-in ones. As far as I can tell it works fine for both 1D and 2D transforms, but once I expanded it to 3D the results no longer match. Does anyone know what is wrong?
The first instance of self.dft is indented too far.
Other than that, I see nothing wrong with the code provided.
As a side note, if you are using numpy as your code suggests, you can simplify your code significantly even without resorting to the built-in DFT/FFT.
For example, you can index a 3D numpy array like data3D[i, j, k]. You can slice by doing data3D[:, j, k], data3D[i, :, k], data3D[:, :, k], etc., instead of assigning individual elements one at a time within a for loop.
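For instance, because basic slices of a numpy array are views into the same memory, each copy-transform-copy-back block collapses into a single call on a slice. A minimal sketch, assuming self.dft reads and writes its real/img array arguments element-wise in place (which your copy-back pattern implies):

def dft3d(self, real3d, img3d, nx, ny, nz, dir):
    for i in range(nx):          # transform depth
        for j in range(ny):
            self.dft(real3d[i, j, :], img3d[i, j, :], nz, 1)
    for k in range(nz):          # transform cols
        for i in range(nx):
            self.dft(real3d[i, :, k], img3d[i, :, k], ny, 1)
    for j in range(ny):          # transform rows
        for k in range(nz):
            self.dft(real3d[:, j, k], img3d[:, j, k], nx, 1)

Since the slices are views, the transformed values land directly in real3d and img3d with no explicit copy-back.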
I'm currently working on a project aimed at finding blurred regions using the Walsh-Hadamard transform. The basic idea is to extract a local patch around each pixel and apply the Walsh-Hadamard transform to that patch. To do the transform, I generate the Hadamard matrix H in advance and compute H × local_patch × H_transpose. This operation costs 5 ms per pixel, which is too slow. Is there some technique to speed up the matrix multiplication in numpy, or some fast Walsh-Hadamard transform technique that could replace the H × T × H' computation? Any help would be appreciated.
for i in range(h):
    for j in range(w):
        local_patch_gray = gray_pad[i:i+patch_size, j:j+patch_size]
        local_patch_gray = local_patch_gray[1:, 1:]  # extract 2^n×2^n part
        local_patch_blur = blur_pad[i:i + patch_size, j:j + patch_size]
        local_patch_blur = local_patch_blur[1:, 1:]
        patch_WHT = np.dot(np.dot(H, local_patch_gray), H)
        blur_WHT = np.dot(np.dot(H, local_patch_blur), H)
        num = np.power(np.sum(np.power(np.abs(blur_WHT), p)), 1/p)
        denomi = np.power(np.sum(np.power(np.abs(patch_WHT), p)), 1/p)
        if denomi == 0:
            blur_map[i, j] = 0
            continue
        blur_map[i, j] = num / denomi
It sounds like this is a job for Numba, check out their 5-minute starting guide.
In short, Numba compiles a function to fast machine code the first time it is called, so every subsequent call of the same function runs at light speed. Numba also has options which can push function calls to ludicrous speed. The options that pertain to your example are likely fastmath and parallel.
As a starting point, here's what your new numba function might look like:
from numba import njit, prange
import numpy as np

# Assumes gray_pad/blur_pad are float arrays (cast beforehand if they are
# integer images), since numba's np.dot only supports float inputs.
@njit(fastmath=True, parallel=True)
def lightning_fast_numba_function(gray_pad, blur_pad, H, blur_map, h, w, patch_size, p):
    for i in prange(h):  # prange lets numba run the outer loop in parallel
        for j in range(w):
            # extract the 2^n×2^n part of each local patch;
            # np.ascontiguousarray because numba's np.dot wants contiguous inputs
            local_patch_gray = np.ascontiguousarray(gray_pad[i+1:i+patch_size, j+1:j+patch_size])
            local_patch_blur = np.ascontiguousarray(blur_pad[i+1:i+patch_size, j+1:j+patch_size])
            patch_WHT = np.dot(np.dot(H, local_patch_gray), H)
            blur_WHT = np.dot(np.dot(H, local_patch_blur), H)
            num = np.power(np.sum(np.power(np.abs(blur_WHT), p)), 1/p)
            denomi = np.power(np.sum(np.power(np.abs(patch_WHT), p)), 1/p)
            if denomi == 0:
                blur_map[i, j] = 0
            else:
                blur_map[i, j] = num / denomi

lightning_fast_numba_function(gray_pad, blur_pad, H, blur_map, h, w, patch_size, p)

Note that the loops now live inside the compiled function (with the outer one using prange), so the Python-level call overhead is paid once per image instead of once per pixel.
Other options you may consider include np.nditer instead of range. But don't hesitate to cross-check options against NumPy's iteration docs.
Lastly, I noticed that the Wikipedia article for your algorithm has a section on the fast transform, with Python code. You might find it useful.
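For reference, a minimal sketch of that fast transform (the textbook in-place butterfly, O(n log n) per axis instead of a full matrix product; a 2D transform is the 1D transform over rows, then over columns). This assumes H is the Sylvester ("natural"-ordered) Hadamard matrix; a sequency-ordered H would permute the coefficients, which doesn't change the p-norm you compute anyway:

import numpy as np

def fwht_inplace(a):
    # In-place fast Walsh-Hadamard transform of a 1D array
    # whose length is a power of two (unnormalized).
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2

def fwht2d(patch):
    # Equivalent to H @ patch @ H for Sylvester-ordered H.
    out = patch.astype(np.float64)
    for row in out:
        fwht_inplace(row)
    for col in out.T:
        fwht_inplace(col)
    return out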
I have a numpy array that is an N×3 matrix of [u, v, I]: pixel positions and the intensity I for each pixel.
I need to fill an image with the corresponding I values from that set of pixels. Right now I use a for loop to do it, but it is quite slow. What is a faster way to do this?
dmap_raw = np.zeros((raw_img_size[1], raw_img_size[0])).astype(np.float32)
for i in range(0, velodata_cam_proj.shape[0]):
    u = velodata_cam_proj[i, 0]
    v = velodata_cam_proj[i, 1]
    Z = velodata_cam_proj[i, 2]
    dmap_raw[int(v), int(u)] = Z*100
Try this:
dmap_raw = np.zeros((raw_img_size[1], raw_img_size[0]), dtype=np.float32)
# vectorized scatter: integer fancy indexing writes all pixels in one shot
u = velodata_cam_proj[:, 0].astype(int)
v = velodata_cam_proj[:, 1].astype(int)
Z = velodata_cam_proj[:, 2]
dmap_raw[v, u] = Z*100
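One caveat worth knowing: if several rows share the same (u, v), the fancy-indexed assignment keeps one of the competing values (in practice the last one, matching what the loop does). A tiny self-contained demo with made-up numbers:

import numpy as np

pts = np.array([[2.0, 1.0, 0.50],   # hypothetical [u, v, I] rows
                [0.0, 3.0, 1.20],
                [2.0, 1.0, 0.75]])  # duplicate (u, v) = (2, 1)
img = np.zeros((4, 4), dtype=np.float32)
img[pts[:, 1].astype(int), pts[:, 0].astype(int)] = pts[:, 2] * 100
print(img[1, 2])  # 75.0: the later duplicate wins, as in the loop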
The sample code is below.
I want to get dataNew(h, w, length) from data(h, w, c) and ind(h, w). Here length < c, meaning each dataNew[i, j] is a slice of data[i, j].
length and ind[i, j] are guaranteed to fit within c (i.e. ind[i, j] + length <= c).
I have implemented it with for loops, but I want the pythonic way. Please help, thanks!
import numpy as np

h, w, c = 3, 4, 5
data = np.arange(60).reshape((h, w, c))
print(data)
length = 3
ind = np.random.randint(0, 3, 12).reshape(h, w)
print(ind)
dataNew = np.empty((h, w, length), np.int16)
for i in range(h):
    for j in range(w):
        st = ind[i, j]
        dataNew[i, j] = data[i, j][st : st + length]
print(dataNew)
We can leverage scikit-image's view_as_windows, which is built on np.lib.stride_tricks.as_strided, to get sliding windows. More info on the use of as_strided-based view_as_windows.
from skimage.util.shape import view_as_windows

# Get all sliding windows along the last axis
windows = view_as_windows(data, (1, 1, length))[..., 0, 0, :]

# Index into the window-start axis (axis 2) with the start indices,
# then slice out the singleton dim
out = np.take_along_axis(windows, ind[..., None, None], axis=2)[..., 0, :]
The last step is basically advanced indexing into the windows with those start indices. This could be made a bit simpler and easier to understand; alternatively, we could do:
m, n = ind.shape
I, J = np.ogrid[:m, :n]
out = windows[I, J, ind]
One way would be to create an indexing array using broadcasting and use np.take_along_axis to index the array:
ix = ind[...,None] + np.arange(length)
np.take_along_axis(data, ix, -1)
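As a quick sanity check against the loop version from the question (same data, ind, and length as above):

out = np.take_along_axis(data, ind[..., None] + np.arange(length), -1)
print(np.array_equal(dataNew, out))  # True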
Hoping this is an easy problem and I just don't know the correct syntax.
I currently have a small 3D volume defined by a (100, 100, 100) numpy array.
For the problem I am testing I want to put this volume into a larger array (how big doesn't matter right now, but I am testing on a (1000, 1000, 100) array).
Currently I am just making an empty numpy array using the following:
BigArray = np.zeros((1000,1000,100),np.float16)
Then I have my smaller array, which for the purpose of this example can just be a randomly filled array:
SmallArray = np.random.rand(100,100,100)
From here I want to loop through and fill the (1000, 1000, 100) array with the (100, 100, 100) array, placing each cube next to the other. The large array starts with '0' values, so it should be as simple as adding the small array at the correct coordinates of the larger array, but I have no idea of the syntax to do this. Could someone help?
Thanks
This should do it -- just use a standard nested for loop and numpy array assignment syntax:
small = np.random.rand(100, 100, 100)
big = np.zeros((1000, 1000, 100), dtype=np.float16)  # float dtype, so the random floats aren't truncated to 0
for i in range(0, 1000, 100):
    for j in range(0, 1000, 100):
        big[i:i+100, j:j+100, :] = small
For generically sized 3D arrays:

def inset_into(small, big):
    sx, sy, sz = small.shape
    bx, by, bz = big.shape
    # make sure the shapes tile evenly
    assert bx % sx == 0
    assert by % sy == 0
    assert bz == sz
    for i in range(0, bx, sx):
        for j in range(0, by, sy):
            big[i:i+sx, j:j+sy, :] = small
    return big
This should just be numpy slicing.
small = np.random.rand(100, 100, 100)
big = np.zeros((1000, 1000, 100), dtype=np.float16)  # float dtype, matching small
If you want to make big out of a bunch of smalls here is another way.
big = np.concatenate([small] * (big.shape[0] // small.shape[0]), axis=1)
big = np.concatenate([big] * (big.shape[1] // small.shape[1]), axis=0)
There is a speed difference. Looping is better.
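For what it's worth, np.tile expresses the same tiling in a single call; a sketch (whether it beats the explicit loop depends on array sizes, so time both):

reps = (big.shape[0] // small.shape[0], big.shape[1] // small.shape[1], 1)  # repetitions per axis
big = np.tile(small, reps)  # shape (1000, 1000, 100)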
I need to shift a 2D array field, i.e. I have a "previous_data" array which I access through shifted indices to create my "new_data" array.
I can do this in a nonpythonic (and slow) loop, but would very much appreciate some help in finding a pythonic (and faster) solution!
Any help and hints are very much appreciated!
import numpy as np
import matplotlib.pyplot as plt

def nonpythonic():
    # this works, but is slow (for large arrays)
    new_data = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            # go through each item, check if it is within the bounds
            # and assign the data to the new_data array
            i_new = ix[j, i]
            j_new = iy[j, i]
            if (i_new >= 0) and (i_new < nx) and (j_new >= 0) and (j_new < ny):
                new_data[j, i] = previous_data[j_new, i_new]
    ef, axar = plt.subplots(1, 2)
    im = axar[0].pcolor(previous_data, vmin=0, vmax=2)
    ef.colorbar(im, ax=axar[0], shrink=0.9)
    im = axar[1].pcolor(new_data, vmin=0, vmax=2)
    ef.colorbar(im, ax=axar[1], shrink=0.9)
    plt.show()

def pythonic():
    # tried a few things here, but none are working
    # - tried assigning NaNs to indices (ix, iy) which are out of bounds, but NaNs don't work as indices
    # - tried masked arrays, but they also don't work as indices
    # - tried boolean arrays, but ended in shape mismatches
    # just as in the nonworking code below
    ind_y_good = np.where(iy >= 0) and np.where(iy < ny)
    ind_x_good = np.where(ix >= 0) and np.where(ix < nx)
    new_data = np.zeros((ny, nx))
    new_data[ind_y_good, ind_x_good] = previous_data[iy[ind_y_good], ix[ind_x_good]]

# some 2D array:
nx = 20
ny = 30
# array indices:
iy, ix = np.indices((ny, nx))
# modify indices (shift):
iy = iy + 1
ix = ix - 4
# create some out-of-range indices (which might happen in my real scenario)
iy[0, 2:7] = -9999
ix[0:3, -1] = 6666
# some previous data which is the basis for the new_data:
previous_data = np.ones((ny, nx))
previous_data[2:8, 10:20] = 2

nonpythonic()
pythonic()
This is the result of the working (nonpythonic) code above: [figure omitted: side-by-side pcolor plots of previous_data (left) and the shifted new_data (right)]
I implemented a version of pythonic that replicates nonpythonic with some masking and index fiddling - see below. By the way I think the "new" indices should be the ones corresponding to the new array, rather than the old ones, but I've left it as in your existing function.
The main thing to realise is that in your attempt in the question, your conditions
ind_y_good = np.where(iy>=0) and np.where(iy<ny)
ind_x_good = np.where(ix>=0) and np.where(ix<nx)
must be combined, since we always need matched pairs of x and y indices: if the x index of a point is invalid, then so is its y index.
Finally, if the indices are really all shifted by a constant factor, you can make this even simpler by using NumPy's roll function and taking a slice of the indices corresponding to the valid area.
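To illustrate the roll idea on the constant shift in this example (iy = j + 1, ix = i - 4, ignoring the handful of deliberately out-of-range indices), a minimal sketch; np.roll wraps values around the edges, so the wrapped rows/columns have to be cleared by hand:

shifted = np.roll(np.roll(previous_data, -1, axis=0), 4, axis=1)
shifted[-1, :] = 0   # last row wrapped around from the top edge
shifted[:, :4] = 0   # first four columns wrapped around from the right edge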
import numpy as np
import matplotlib.pyplot as plt

def nonpythonic(previous_data, ix, iy, nx, ny):
    # this works, but is slow (for large arrays)
    new_data = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            # go through each item, check if it is within the bounds
            # and assign the data to the new_data array
            i_new = ix[j, i]
            j_new = iy[j, i]
            if (i_new >= 0) and (i_new < nx) and (j_new >= 0) and (j_new < ny):
                new_data[j, i] = previous_data[j_new, i_new]
    return new_data

def pythonic(previous_data, ix, iy):
    ny, nx = previous_data.shape
    iy_old, ix_old = np.indices(previous_data.shape)
    # note you must apply the same condition to both
    # index arrays
    valid = (iy >= 0) & (iy < ny) & (ix >= 0) & (ix < nx)
    new_data = np.zeros((ny, nx))
    new_data[iy_old[valid], ix_old[valid]] = previous_data[iy[valid], ix[valid]]
    return new_data

def main():
    # some 2D array:
    nx = 20
    ny = 30
    # array indices:
    iy, ix = np.indices((ny, nx))
    # modify indices (shift):
    iy = iy + 1
    ix = ix - 4
    # create some out-of-range indices (which might happen in my real scenario)
    iy[0, 2:7] = -9999
    ix[0:3, -1] = 6666
    # some previous data which is the basis for the new_data:
    previous_data = np.ones((ny, nx))
    previous_data[2:8, 10:20] = 2

    data_nonpythonic = nonpythonic(previous_data, ix, iy, nx, ny)
    data_pythonic = pythonic(previous_data, ix, iy)

    new_data = data_nonpythonic
    ef, axar = plt.subplots(1, 2)
    im = axar[0].pcolor(previous_data, vmin=0, vmax=2)
    ef.colorbar(im, ax=axar[0], shrink=0.9)
    im = axar[1].pcolor(new_data, vmin=0, vmax=2)
    ef.colorbar(im, ax=axar[1], shrink=0.9)
    plt.show()

    print(np.allclose(data_nonpythonic, data_pythonic))

if __name__ == "__main__":
    main()