I'm applying YOLOv5 to KITTI raw images with [C, H, W] = [3, 375, 1242], so I need to pad each image so that H and W are divisible by 32. I'm using nn.ReplicationPad2d to do the padding: [3, 375, 1242] -> [3, 384, 1248].
The official documentation for nn.ReplicationPad2d says that a 4-tuple gives the padding sizes for left, right, top and bottom.
The problem is:
When I give a 4-tuple (0, pad1, 0, pad2), it raises: 3D tensors expect 2 values for padding.
When I give a 2-tuple (pad1, pad2), the padding runs, but it seems that only W is padded, by pad1+pad2, while H stays unchanged: I get a tensor of size [3, 375, 1257].
1257 - 1242 = 15 = 9 + 6, where 9 was supposed to pad H and 6 to pad W.
I can't figure out what the problem is here.
Thanks in advance.
Here is my code:
def paddingImage(img, divider=32):
    if img.shape[1] % divider != 0 or img.shape[2] % divider != 0:
        padding1_mult = int(img.shape[1] / divider) + 1
        padding2_mult = int(img.shape[2] / divider) + 1
        pad1 = (divider * padding1_mult) - img.shape[1]
        pad2 = (divider * padding2_mult) - img.shape[2]
        # pad1 = 32 - (img.shape[1] % 32)
        # pad2 = 32 - (img.shape[2] % 32)
        # pad1 = 384 - 375   # 9
        # pad2 = 1248 - 1242 # 6

        #################### PROBLEM ####################
        padding = nn.ReplicationPad2d((pad1, pad2))
        #################### PROBLEM ####################

        return padding(img)
    else:
        return img
Where img was given as a torch.Tensor in the main function:
# ...
image_tensor = torch.from_numpy(image_np).type(torch.float32)
image_tensor = paddingImage(image_tensor)
image_np = image_tensor.numpy()
# ...
PyTorch expects the input to ReplicationPad2d to be a batched image tensor. Therefore, we can unsqueeze to add a 'batch dimension'.
def paddingImage(img, divider=32):
    if img.shape[1] % divider != 0 or img.shape[2] % divider != 0:
        padding1_mult = int(img.shape[1] / divider) + 1
        padding2_mult = int(img.shape[2] / divider) + 1
        pad1 = (divider * padding1_mult) - img.shape[1]
        pad2 = (divider * padding2_mult) - img.shape[2]
        # pad1 = 32 - (img.shape[1] % 32)
        # pad2 = 32 - (img.shape[2] % 32)
        # pad1 = 384 - 375   # 9
        # pad2 = 1248 - 1242 # 6

        # Pad as (left, right, top, bottom): pad2 widens W, pad1 heightens H
        padding = nn.ReplicationPad2d((0, pad2, 0, pad1))
        # Add an extra batch dimension, pad, and then remove the batch dimension
        return torch.squeeze(padding(torch.unsqueeze(img, 0)), 0)
    else:
        return img
Hope this helps!
EDIT
As GoodDeeds mentions, this is resolved with later versions of PyTorch. Either upgrade PyTorch, or, if that's not an option, use the code above.
The issue is likely that your PyTorch version is too old. The documentation corresponds to the latest version, and support for padding tensors without a batch dimension was added in v1.10.0. To resolve this, either upgrade your PyTorch version or add a batch dimension to your image, for example by using unsqueeze(0).
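To illustrate the padding semantics, here is a minimal sketch of my own (not from the question): F.pad follows the same convention as the nn padding layers, consuming padding values starting from the last dimension, which is why a 2-tuple only widened W:

import torch
import torch.nn.functional as F

img = torch.zeros(3, 375, 1242)

# A 2-tuple is interpreted as (left, right) on the last axis (W) only:
out = F.pad(img, (9, 6), mode='replicate')
print(out.shape)  # torch.Size([3, 375, 1257])

# A 4-tuple is (left, right, top, bottom): pad W by 6 and H by 9
out = F.pad(img.unsqueeze(0), (0, 6, 0, 9), mode='replicate').squeeze(0)
print(out.shape)  # torch.Size([3, 384, 1248])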
This was my piece of code initially:
Here X is the array of data points with dimensions (m x n), where m is the number of data points to predict and n is the number of features without the bias term.
y is the array of data labels, with shape (m,).
lambda_ is the regularization term.
from scipy import optimize

def oneVsAll(X, y, num_labels, lambda_):
    # used to find the optimal parameters theta for each label against the others
    # X (m,n)
    # y (m,)
    # num_labels : possible number of labels
    # lambda_ : regularization param
    # all_theta : trained params for logistic reg for each class,
    #             hence (k,n+1) where k is #labels and n+1 is #features with bias
    m, n = X.shape
    all_theta = np.array((num_labels, n+1))
    X = np.concatenate([np.ones((m, 1)), X], axis=1)
    for k in np.arange(num_labels):
        # y == k will generate an array with the shape of y, with 1 at indices
        # whose value equals k and 0 elsewhere
        initial_theta = np.zeros(n+1)
        options = {"maxiter": 50}
        res = optimize.minimize(lrCostFunction,
                                initial_theta, args=(X, y == k, lambda_),
                                jac=True, method='CG',
                                options=options)
        all_theta[k] = res.x
    return all_theta

lambda_ = 0.1
all_theta = oneVsAll(X, y, num_labels, lambda_)
The error I got was:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-45-f9501694361e> in <module>()
1 lambda_ = 0.1
----> 2 all_theta = oneVsAll(X,y,num_labels,lambda_)
<ipython-input-44-05a9b582ccaf> in oneVsAll(X, y, num_labels, lambda_)
20 jac = True,method = 'CG',
21 options = options)
---> 22 all_theta[k] = res.x
23 return all_theta
ValueError: setting an array element with a sequence.
Then after debugging, I changed the code to :
from scipy import optimize

def oneVsAll(X, y, num_labels, lambda_):
    # used to find the optimal parameters theta for each label against the others
    # X (m,n)
    # y (m,)
    # num_labels : possible number of labels
    # lambda_ : regularization param
    # all_theta : trained params for logistic reg for each class,
    #             hence (k,n+1) where k is #labels and n+1 is #features with bias
    m, n = X.shape
    all_theta = np.array((num_labels, n+1), dtype="object")
    X = np.concatenate([np.ones((m, 1)), X], axis=1)
    for k in np.arange(num_labels):
        # y == k will generate an array with the shape of y, with 1 at indices
        # whose value equals k and 0 elsewhere
        initial_theta = np.zeros(n+1)
        options = {"maxiter": 50}
        res = optimize.minimize(lrCostFunction,
                                initial_theta, args=(X, y == k, lambda_),
                                jac=True, method='CG',
                                options=options)
        all_theta[k] = res.x
    return all_theta
Now the error I am getting is:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-47-f9501694361e> in <module>()
1 lambda_ = 0.1
----> 2 all_theta = oneVsAll(X,y,num_labels,lambda_)
<ipython-input-46-383fc22e26cc> in oneVsAll(X, y, num_labels, lambda_)
20 jac = True,method = 'CG',
21 options = options)
---> 22 all_theta[k] = res.x
23 return all_theta
IndexError: index 2 is out of bounds for axis 0 with size 2
How can I correct this?
You create all_theta by running:
all_theta = np.array((num_labels, n+1), dtype="object")
This instruction actually creates an array containing just 2 elements (its shape is (2,)), holding the two passed values, whereas you probably intended to pass the shape of the array to be created.
Change this instruction to:
all_theta = np.empty((num_labels, n+1))
Specifying the dtype is (in my opinion) not necessary.
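A minimal sketch of the difference (the contents of np.empty are arbitrary, since the memory is not initialized):

import numpy as np

a = np.array((3, 5))   # an array holding the two values 3 and 5
print(a.shape)         # (2,)

b = np.empty((3, 5))   # an uninitialized array of shape (3, 5)
print(b.shape)         # (3, 5)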
I'm working on a preprocessing function that takes DICOM files as input and returns a 3D np.array (an image stack). The problem is that I need to keep the association between ImagePositionPatient[2] and the relative position of the processed images in the output array.
For example, if a slice with ImagePositionPatient[2] == 5 is mapped to a processed slice at position 3 in the returned stack, I need to return another array that has 5 at the third position, and the same for all original slices. For slices created during processing by interpolation or padding, the array shall contain a placeholder value like -99999 instead.
I've pasted my code below.
EDIT: new simplified version
from glob import glob

import numpy as np
import pydicom as dicom  # the original code used the legacy dicom.read_file API
from skimage.transform import resize

def lung_segmentation(patient_dir):
    """
    Load the dicom files of a patient, build a 3D image of the scan, normalize it to (1mm x 1mm x 1mm) and segment
    the lungs
    :param patient_dir: directory of dcm files
    :return: a numpy array of size (384, 288, 384)
    """

    """ LOAD THE IMAGE """

    # Initialize image and get dcm files
    dcm_list = glob(patient_dir + '/*.dcm')
    img = np.zeros((len(dcm_list), 512, 512), dtype='float32')  # initialize an
    # array of len(dcm_list) zero matrices of size 512x512
    z = []

    # For each dcm file, get the corresponding slice, normalize HU values, and store the Z position of the slice
    for i, f in enumerate(dcm_list):
        dcm = dicom.read_file(f)
        img[i] = float(dcm.RescaleSlope) * dcm.pixel_array.astype('float32') + float(dcm.RescaleIntercept)
        z.append(dcm.ImagePositionPatient[-1])

    # Get spacing and reorder slices
    spacing = list(map(float, dcm.PixelSpacing)) + [np.median(np.diff(np.sort(z)))]
    print("THE SPACING is: " + str(spacing))
    # spacing = list(map(lambda dcm, z: dcm.PixelSpacing + [np.median(np.diff(np.sort(z)))]))
    img = img[np.argsort(z)]

    """ NORMALIZE HU AND RESOLUTION """

    # Clip and normalize
    img = np.clip(img, -1024, 4000)  # clip with minimum at -1024 and maximum at 4000
    img = (img + 1024.) / (4000 + 1024.)

    # Rescale to 1mm x 1mm x 1mm
    new_shape = list(map(lambda x, y: int(x * y), img.shape, spacing[::-1]))
    old_shape = img.shape
    img = resize(img, new_shape, preserve_range=True)
    print('new shape computed: ' + str(img.shape) + ' based on img_shape: ' + str(old_shape) + ' * ' + str(spacing[::-1]))

    lungmask = np.zeros(img.shape)  # WE NEED LUNGMASK FOR CODE BELOW
    lungmask[int(img.shape[0]/2 - img.shape[0]/4) : int(img.shape[0]/2 + img.shape[0]/4),
             int(img.shape[1]/2 - img.shape[1]/4) : int(img.shape[1]/2 + img.shape[1]/4),
             int(img.shape[2]/2 - img.shape[2]/4) : int(img.shape[2]/2 + img.shape[2]/4)] = 1
    # I set some pixels to value = 1 so the code below can run; feel free to change

    """ CENTER AND PAD TO GET SHAPE (384, 288, 384) """

    # Center the image
    sum_x = np.sum(lungmask, axis=(0, 1))
    sum_y = np.sum(lungmask, axis=(0, 2))
    sum_z = np.sum(lungmask, axis=(1, 2))
    mx = np.nonzero(sum_x)[0][0]
    Mx = len(sum_x) - np.nonzero(sum_x[::-1])[0][0]
    my = np.nonzero(sum_y)[0][0]
    My = len(sum_y) - np.nonzero(sum_y[::-1])[0][0]
    mz = np.nonzero(sum_z)[0][0]
    Mz = len(sum_z) - np.nonzero(sum_z[::-1])[0][0]

    img = img * lungmask
    img = img[mz:Mz, my:My, mx:Mx]

    # Pad the image to (384, 288, 384)
    nz, nr, nc = img.shape

    pad1 = int((384 - nz) / 2)
    pad2 = 384 - nz - pad1

    pad3 = int((288 - nr) / 2)
    pad4 = 288 - nr - pad3

    pad5 = int((384 - nc) / 2)
    pad6 = 384 - nc - pad5

    # Crop images that are too big (each crop acts on the axis its pads belong to)
    if pad1 < 0:
        img = img[-pad1:384 - pad2]
        pad1 = pad2 = 0
        if img.shape[0] == 383:
            pad1 = 1
    if pad3 < 0:
        img = img[:, -pad3:288 - pad4]
        pad3 = pad4 = 0
        if img.shape[1] == 287:
            pad3 = 1
    if pad5 < 0:
        img = img[:, :, -pad5:384 - pad6]
        pad5 = pad6 = 0
        if img.shape[2] == 383:
            pad5 = 1

    # Pad
    img = np.pad(img, pad_width=((pad1 - 4, pad2 + 4), (pad3, pad4), (pad5, pad6)), mode='constant')
    # The -4 / +4 is here for "historical" reasons, but it can be removed

    return img
The reference library for the resize methods etc. is skimage.
I will try to give at least some hints toward the answer. As has been discussed in the comments, resizing may move the processed data away from the original positions due to the needed interpolation, so in the end you have to come up with a solution for that, either by changing the resizing target to a multiple of the actual resolution, or by returning the interpolated positions instead.
The basic idea is to have your positions array z be transformed the same way the images are in the z direction. So for each processing operation that changes the z location of the processed image, a similar operation has to be done for z.
Let's say you have 5 slices with a slice distance of 3mm:
>>> z
[0, 6, 3, 12, 9]
We can make a numpy array from it for easier handling:
z_out = np.array(z)
This corresponds to the unprocessed img list.
Now you sort the image list, so you have to also sort z_out:
img = img[np.argsort(z)]
z_out = np.sort(z_out)
>>> z_out
[0, 3, 6, 9, 12]
Next, the image is resized, introducing interpolated slices.
I will assume here that the resizing is done so that the slice distance is a multiple of the target resolution. In this case you have to calculate the number of interpolated slices and fill the new position array with corresponding placeholder values:
slice_distance = int((max(z) - min(z)) / (len(z) - 1))
nr_interpolated = slice_distance - 1  # you may adapt this to your algorithm
index_diff = np.arange(len(z) - 1)  # used to adapt the insertion index
for i in range(nr_interpolated):
    index = index_diff * (i + 1) + 1  # insertion indices for the placeholders
    z_out = np.insert(z_out, index, -99999)  # insert placeholder for interpolated positions
This gives you the z array filled with the placeholder value where interpolated slices occur in the image array:
>>> z_out
[0, -99999, -99999, 3, -99999, -99999, 6, -99999, -99999, 9, -99999, -99999, 12]
Then you have to do the same padding as for the image in the z direction:
img = np.pad(img, pad_width=((pad1 - 4, pad2 + 4), (pad3, pad4), (pad5, pad6)), mode='constant')
# use 'minimum' so that the placeholder is used
z_out = np.pad(z_out, pad_width=(pad1 - 4, pad2 + 4), mode='minimum')
Assuming padding values 1 and 3 for simplicity, this gives you:
>>> z_out
[-99999, 0, -99999, -99999, 3, -99999, -99999, 6, -99999, -99999, 9, -99999, -99999, 12, -99999, -99999, -99999]
If you have more transformations in the z direction, you have to make the corresponding changes to z_out. When you are done, you can return your position list together with the image list:
return img, z_out
As an aside: your code will only work as intended if your image has a transverse (axial) orientation; otherwise you have to calculate the z position array from Image Position Patient and Image Orientation Patient, instead of just using the z component of the image position.
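For reference, a minimal sketch of that more general computation (my own illustration, not part of the original answer): the slice coordinate is the projection of Image Position (Patient) onto the slice normal, which is the cross product of the row and column direction cosines from Image Orientation (Patient):

import numpy as np

def slice_position(dcm):
    # Row and column direction cosines of the image plane
    iop = np.array(dcm.ImageOrientationPatient, dtype=float)
    row, col = iop[:3], iop[3:]
    normal = np.cross(row, col)  # slice normal direction
    ipp = np.array(dcm.ImagePositionPatient, dtype=float)
    return float(np.dot(ipp, normal))  # position along the normal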
My Model:
class myNet(nn.Module):
    def __init__(self):
        super(myNet, self).__init__()
        self.act1 = Dynamic_relu_b(64)
        self.conv1 = nn.Conv2d(3, 64, 3)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(64, 20)  # the pooled conv output has 64 features

    def forward(self, x):
        x = self.conv1(x)
        x = self.act1(x)
        x = self.pool(x)
        x = x.view(x.shape[0], -1)
        x = self.fc(x)
        return x
Code that replicates the experiment is provided below:
def one_hot_smooth_label(x, num_class, smooth=0.1):
    num = x.shape[0]
    labels = torch.zeros((num, 20))
    for i in range(num):
        labels[i][x[i]] = 1
    labels = (1 - (num_class - 1) / num_class * smooth) * labels + smooth / num_class
    return labels

images = torch.rand((4, 3, 300, 300))
images = images.cuda()
labels = torch.from_numpy(np.array([1, 0, 0, 1]))

model = myNet()
model = model.cuda()

output = model(images)
labels = one_hot_smooth_label(labels, 20)
labels = labels.cuda()
criterion = nn.BCEWithLogitsLoss()

loss = criterion(output, labels)
loss.backward()
The error:
RuntimeError Traceback (most recent call last)
<ipython-input-42-1268777e87e6> in <module>()
21
22 loss=criterion(output,labels)
---> 23 loss.backward()
1 frames
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
98 Variable._execution_engine.run_backward(
99 tensors, grad_tensors, retain_graph, create_graph,
--> 100 allow_unreachable=True) # allow_unreachable flag
101
102
RuntimeError: Function AddBackward0 returned an invalid gradient at index 1 - expected type TensorOptions(dtype=float, device=cpu, layout=Strided, requires_grad=false) but got TensorOptions(dtype=float, device=cuda:0, layout=Strided, requires_grad=false) (validate_outputs at /pytorch/torch/csrc/autograd/engine.cpp:484)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x46 (0x7fcf7711b536 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x2d84224 (0x7fcfb1bad224 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #2: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&) + 0x548 (0x7fcfb1baed58 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #3: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&, bool) + 0x3d2 (0x7fcfb1bb0ce2 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #4: torch::autograd::Engine::thread_init(int) + 0x39 (0x7fcfb1ba9359 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_cpu.so)
frame #5: torch::autograd::python::PythonEngine::thread_init(int) + 0x38 (0x7fcfbe2e8378 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #6: <unknown function> + 0xbd6df (0x7fcfe23416df in /usr/lib/x86_64-linux-gnu/libstdc++.so.6)
frame #7: <unknown function> + 0x76db (0x7fcfe34236db in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #8: clone + 0x3f (0x7fcfe375c88f in /lib/x86_64-linux-gnu/libc.so.6)
After many experiments, I found that act1 in the model was the problem. If you delete act1, the error does not appear!
But I don't know why act1 has this problem.
The suspicious part of the error seems to be requires_grad=False, and I don't know which part set this.
This is the code for act1 (Dynamic_relu_b):
class Residual(nn.Module):
    def __init__(self, in_channel, R=8, k=2):
        super(Residual, self).__init__()
        self.avg = nn.AdaptiveAvgPool2d((1, 1))
        self.relu = nn.ReLU(inplace=True)
        self.R = R
        self.k = k
        out_channel = int(in_channel / R)
        self.fc1 = nn.Linear(in_channel, out_channel)
        fc_list = []
        for i in range(k):
            fc_list.append(nn.Linear(out_channel, 2 * in_channel))
        self.fc2 = nn.ModuleList(fc_list)

    def forward(self, x):
        x = self.avg(x)
        x = torch.squeeze(x)
        x = self.fc1(x)
        x = self.relu(x)
        result_list = []
        for i in range(self.k):
            result = self.fc2[i](x)
            result = 2 * torch.sigmoid(result) - 1
            result_list.append(result)
        return result_list
class Dynamic_relu_b(nn.Module):
    def __init__(self, inchannel, R=8, k=2):
        super(Dynamic_relu_b, self).__init__()
        self.lambda_alpha = 1
        self.lambda_beta = 0.5
        self.R = R
        self.k = k
        self.init_alpha = torch.zeros(self.k)
        self.init_beta = torch.zeros(self.k)
        self.init_alpha[0] = 1
        self.init_beta[0] = 1
        for i in range(1, k):
            self.init_alpha[i] = 0
            self.init_beta[i] = 0
        self.residual = Residual(inchannel)

    def forward(self, input):
        delta = self.residual(input)
        in_channel = input.shape[1]
        bs = input.shape[0]
        alpha = torch.zeros((self.k, bs, in_channel))
        beta = torch.zeros((self.k, bs, in_channel))
        for i in range(self.k):
            for j, c in enumerate(range(0, in_channel * 2, 2)):
                alpha[i, :, j] = delta[i][:, c]
                beta[i, :, j] = delta[i][:, c + 1]
        alpha1 = alpha[0]
        beta1 = beta[0]
        max_result = self.dynamic_function(alpha1, beta1, input, 0)
        for i in range(1, self.k):
            alphai = alpha[i]
            betai = beta[i]
            result = self.dynamic_function(alphai, betai, input, i)
            max_result = torch.max(max_result, result)
        return max_result

    def dynamic_function(self, alpha, beta, x, k):
        init_alpha = self.init_alpha[k]
        init_beta = self.init_beta[k]
        alpha = init_alpha + self.lambda_alpha * alpha
        beta = init_beta + self.lambda_beta * beta
        bs = x.shape[0]
        channel = x.shape[1]
        results = torch.zeros_like(x)
        for i in range(bs):
            for c in range(channel):
                results[i, c, :, :] = x[i, c] * alpha[i, c] + beta[i, c]
        return results
How should I solve this problem?
In PyTorch, two tensors need to be on the same device to perform any mathematical operation between them. But in your case, one is on the CPU and the other on the GPU. The error is not as clear as it normally is, because it happened in the backward pass. You were (un)lucky that your forward pass did not fail. That's because there is an exception to the same-device restriction, namely for scalar values in a mathematical operation, e.g. tensor * 2, and it even applies when the scalar is a tensor: cpu_tensor * tensor(2, device='cuda:0'). You are using a lot of loops and accessing individual scalars to calculate further results.
While the forward pass works like that, in the backward pass, when the gradients are calculated, the gradients are multiplied with the previous ones (an application of the chain rule), and at that point the two are on different devices.
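A minimal sketch of that pattern (my own illustration; the forward behavior is as described above for the PyTorch version in question):

import torch

weights = torch.ones(2, 2, requires_grad=True)  # CPU leaf tensor
scale = torch.tensor(2.0, device='cuda:0')      # 0-dim GPU tensor, treated as a scalar

out = weights * scale  # forward passes despite the device mismatch
out.sum().backward()   # backward sends a GPU gradient to a CPU leaf and fails on affected versions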
You have identified that the problem is in Dynamic_relu_b. In there, you need to make sure that every tensor you create is on the same device as the input. The two tensors you create in the forward method are:
alpha = torch.zeros((self.k, bs, in_channel))
beta = torch.zeros((self.k, bs, in_channel))
These are created on the CPU, but your input is on the GPU, so you need to put them on the GPU as well. To be generic, they should be put onto the device where the input is located:
alpha = torch.zeros((self.k, bs, in_channel), device=input.device)
beta = torch.zeros((self.k, bs, in_channel), device=input.device)
The biggest problem in your code is the loops. Not only did they obfuscate a bug, they are very harmful to performance, since they can neither be parallelised nor vectorised, and those are the reasons why GPUs are so fast. I'm certain that these loops can be replaced with more efficient operations, but you'll have to get out of the mindset of creating an empty tensor and then filling it one element at a time.
I'll give you one example from dynamic_function:
results = torch.zeros_like(x)
for i in range(bs):
    for c in range(channel):
        results[i, c, :, :] = x[i, c] * alpha[i, c] + beta[i, c]
You're multiplying x (size: [bs, channel, height, width]) with alpha (size: [bs, channel]), where every plane (height, width) of x is multiplied by a different element of alpha (a scalar). That is the same as an element-wise multiplication with a tensor of the same size as the plane [height, width], but where all elements are the same scalar.
Thankfully, you don't need to repeat the elements yourself, since singleton dimensions (dimensions with size 1) are automatically expanded to match the size of the other tensor; see PyTorch - Broadcasting Semantics for details. That means you only need to reshape alpha to have size [bs, channel, 1, 1].
The loop can therefore be replaced with:
results = x * alpha.view(bs, channel, 1, 1) + beta.view(bs, channel, 1, 1)
By eliminating that loop, you gain a lot of performance, and your initial error becomes much clearer, because the forward pass now fails with the following message:
File "main.py", line 78, in dynamic_function
results = x * alpha.view(bs, channel, 1, 1) + beta.view(bs, channel, 1, 1)
RuntimeError: expected device cuda:0 but got device cpu
Now you would know that one of these is on the CPU and the other on the GPU.
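The same mindset applies to the alpha/beta extraction loop in Dynamic_relu_b.forward. As a sketch of one possible vectorisation (my own, not from the original answer): stacking the k outputs of the residual block and slicing the even/odd columns replaces both nested loops, and the result automatically lives on the input's device:

d = torch.stack(delta)  # [k, bs, 2 * in_channel]
alpha = d[..., 0::2]    # even columns -> [k, bs, in_channel]
beta = d[..., 1::2]     # odd columns  -> [k, bs, in_channel]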
Here is the code that I am trying:
def normalizeImages(data):
    x = np.shape(data)[0]
    y = np.shape(data)[1]
    z = np.shape(data)[2]
    w = np.shape(data)[3]
    darray = np.array(data)
    dflat = darray.flatten()
    mindflat = min(dflat)
    maxdflat = max(dflat)
    for x in range(0, len(dflat)):
        dflat[x] = (dflat[x] - mindflat)/(maxdflat - mindflat) * 255
    arraynd = np.reshape(dflat, (x, y, z, w))
    return arraynd.tolist()

X_valid = normalizeImages(X_valid)
It fails with:
ValueError Traceback (most recent call last)
<ipython-input-33-eb3d96398060> in <module>()
19 return arraynd.tolist();
20
---> 21 X_valid = normalizeImages(X_valid)
22 #X_train = normalizeImages(X_train)
23 #X_test = normalizeImages(X_test)
<ipython-input-33-eb3d96398060> in normalizeImages(data)
16 for x in range(0, len(dflat)):
17 dflat[x] = (dflat[x] - mindflat)/(maxdflat - mindflat)* 255
---> 18 arraynd = np.reshape(dflat, (x, y, z, w))
19 return arraynd.tolist();
20
C:\ProgramData\Miniconda3\envs\carnd-term1\lib\site-packages\numpy\core\fromnumeric.py in reshape(a, newshape, order)
222 except AttributeError:
223 return _wrapit(a, 'reshape', newshape, order=order)
--> 224 return reshape(newshape, order=order)
225
226
ValueError: total size of new array must be unchanged
The input X_valid is a 4-dimensional list of shape (4410, 32, 32, 3). Can anyone explain why this fails and how to fix it?
You are changing the value of the variable x in your loop, so the shape passed to np.reshape no longer matches the data. Do this instead; you don't even need a loop for this:
def normalizeImages(data):
    darray = np.array(data)
    x, y, z, w = np.shape(darray)
    dflat = darray.flatten()
    mindflat = min(dflat)
    maxdflat = max(dflat)
    dflat = (dflat - mindflat)/(maxdflat - mindflat) * 255
    return dflat.reshape(x, y, z, w).tolist()
You don't even need to reshape your data:
def normalizeImages(data):
    darray = np.array(data)
    mind, maxd = darray.min(), darray.max()
    darray = (darray - mind)/(maxd - mind) * 255
    return darray.tolist()
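A quick usage sketch (with hypothetical random stand-in data) to check the result:

import numpy as np

X_valid = np.random.rand(4410, 32, 32, 3).tolist()  # hypothetical stand-in data
X_valid = normalizeImages(X_valid)

arr = np.array(X_valid)
print(arr.shape, arr.min(), arr.max())  # (4410, 32, 32, 3) 0.0 255.0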