I'm currently processing images that come in stacks of 18 images each. I then deconvolve these images to produce cleaner, sharper results. However, when doing this I get border artifacts. I have spent some time writing code to determine how wide a pad I would need for these images, but I am unsure how to use np.pad to produce the padded images. This is my code so far:
xextra = pad_width_x / 2
yextra = pad_width_y / 2
print(xextra)
print(yextra)
Where xextra and yextra are the pad widths I will be using. I understand that I will need to use this line of code to pad the array:
no_borders = np.pad(sparsebeadmix_sheet_cubic_deconvolution, pad_width_x, mode='constant', constant_values=0)
However, how will I be able to process my stack of 18 images through this and save them as outputs?
I hope this makes sense!
If your stack is an (nx, ny, 18) array:
import numpy as np

image_stack = np.ones((2, 2, 18))

extra_left, extra_right = 1, 2
extra_top, extra_bottom = 3, 1

# Pad (top, bottom) rows and (left, right) columns; leave the stack axis alone
np.pad(image_stack, ((extra_top, extra_bottom), (extra_left, extra_right), (0, 0)),
       mode='constant', constant_values=3)
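To connect this to your variables, a rough sketch (the shapes and pad widths are stand-ins; I'm assuming pad_width_x and pad_width_y are total widths split evenly between the two sides, and that the stack axis is last):

import numpy as np

stack = np.zeros((256, 256, 18))    # stand-in for your deconvolved stack
pad_width_x, pad_width_y = 20, 12   # hypothetical total pad widths

xextra = pad_width_x // 2           # padding on each side along x
yextra = pad_width_y // 2           # padding on each side along y

# Pad only the two spatial axes; the 18-image stack axis is left untouched
no_borders = np.pad(stack, ((yextra, yextra), (xextra, xextra), (0, 0)),
                    mode='constant', constant_values=0)
print(no_borders.shape)             # (268, 276, 18)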
I am training my model with several images.
When training, I realized that I could increase my accuracy by replacing the zero elements in my image array with other values, so I replaced them with the median value of my image, as shown in the following code.
import cv2
import imutils
import numpy as np

r_val_all = np.zeros((2000, 112, 112))
for r in range(len(r_val)):  # r_val: list of image file paths
    # LOAD IMAGES
    r_image_v = cv2.imread(r_val[r])
    r_gray_v = cv2.cvtColor(r_image_v, cv2.COLOR_BGR2GRAY)
    r_gray_v = imutils.resize(r_gray_v, width=112, height=112)
    # Replace zero pixels with the median of the non-zero pixels
    n = np.median(r_gray_v[r_gray_v > 0])
    r_gray_v[r_gray_v == 0] = n
    r_val_all[r, :, :] = r_gray_v
The accuracy did improve, but it is not quite there yet.
What I actually require is something where the zero elements are replaced with a continuation of the pre-existing array values.
However, I was not sure how to tackle such a problem. Are there any tools that perform the operation I require?
I used the second answer from the link; tell me if this is close to what you want, because it appeared to be.
Creating one sample image and centering it, so it's somewhat close to your first example image:
import numpy as np
import matplotlib.pyplot as plt
image = np.zeros((100, 100))
center_noise = np.random.normal(loc=10, size=(50, 50))
image[25:75, 25:75] = center_noise
plt.imshow(image, cmap='gray')
Inspired by the line rr_gray = np.where(rr_gray == 0, np.nan, rr_gray) # convert zero elements to nan in your code, I'm replacing the zeros with NaN.
image_centered = np.where(image == 0, np.nan, image)
plt.imshow(image_centered, cmap='gray')
Now I used the function in the second answer of the link, fill.
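In case the link goes dead: a minimal sketch of what such a fill function can look like. This one uses nearest-neighbour interpolation via scipy.interpolate.griddata, which may differ in detail from the linked answer.

import numpy as np
from scipy import interpolate

def fill(data):
    # Replace NaN pixels with the value of their nearest non-NaN neighbour
    mask = np.isnan(data)
    yy, xx = np.indices(data.shape)
    known = ~mask
    return interpolate.griddata(
        (yy[known], xx[known]),  # coordinates of the valid pixels
        data[known],             # values at those pixels
        (yy, xx),                # evaluate over the full grid
        method='nearest')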
test = fill(image_centered)
plt.imshow(test, cmap='gray')
This is the result (result image removed).
I'm sorry I can't help you more. I wish I could, I'm just not very well versed in image processing. I looked at your code and couldn't figure out why it's not working, sorry.
I am generating multiple 3D numpy arrays of size (22, 6, 2840), each containing 22 arrays of size (6, 2840). Now I want to save each (22, 6, 2840) array as images. I don't know if I can do that. I tried to do this using plt.savefig but it didn't work. I have been trying for more than 2 weeks to find out how I can do it.
Any help would be appreciated.
signals = np.zeros((22, 6, 2840))
t = 0
movement = int(S * 256)
if S == 0:
    movement = _SIZE_WINDOW_SPECTOGRAM
while data.shape[1] - (t * movement + _SIZE_WINDOW_SPECTOGRAM) > 0:
    for i in range(0, 22):
        start = t * movement
        stop = start + _SIZE_WINDOW_SPECTOGRAM
        signals[i, :] = wavelet(data[i, start:stop])
    if signalsBlock is None:
        signalsBlock = np.array([signals])
    else:
        signalsBlock = np.append(signalsBlock, [signals], axis=0)
    nSpectogram = nSpectogram + 1
    if signalsBlock.shape[0] == 50:
        saveSignalsOnDisk(signalsBlock, nSpectogram)
        signalsBlock = None
    t = t + 1
Try using the PyPNG library. You will have to reshape your array to a 2-D format and then write it as a PNG. The link to the library is here.
# Flatten the 3-D array to 2-D: one row per image row, planes side by side
image_2d = numpy.reshape(image_3d, (-1, column_count * plane_count))
pngWriter.write(out, image_2d)
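To make that concrete for one of your (22, 6, 2840) arrays, a rough sketch (the 0-255 scaling is an assumption, since your value range isn't stated; here each (6, 2840) slice becomes one grayscale PNG):

import numpy as np
import png  # pip install pypng

signals = np.random.rand(22, 6, 2840)  # stand-in for one generated array

for i, plane in enumerate(signals):
    # Scale each slice to 0-255 for 8-bit grayscale output
    span = plane.max() - plane.min()
    scaled = ((plane - plane.min()) / (span if span else 1.0) * 255).astype(np.uint8)
    png.from_array(scaled, mode='L').save('signal_%02d.png' % i)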
Also, one more method using PIL's Image module is provided here. However, that works mostly with RGB-style 3-channel images.
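Pillow can in fact write a single-channel slice directly as well; a minimal sketch with a stand-in slice:

import numpy as np
from PIL import Image

plane = (np.random.rand(6, 2840) * 255).astype(np.uint8)  # stand-in slice
Image.fromarray(plane, mode='L').save('signal_pil.png')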
I have already achieved the goal described in the title, but I was wondering if there was a more efficient (or generally better) way to do it. First of all, let me introduce the problem.
I have a set of images of different sizes but with a width/height ratio less than or equal to 2 (it could be anything, but let's say 2 for now). I want to normalize each one, meaning I want all of them to have the same size. Specifically, I am going to do it like this:
Extract the max height across all images
Zoom each image so that it reaches the max height while keeping its ratio
Add padding of white pixels to the right until the image has a width/height ratio of 2
Keep in mind the images are represented as numpy matrices of grey scale values [0,255].
This is how I'm doing it now in Python:
import numpy as np
from scipy import ndimage

max_height = np.max([len(obs) for obs in data if len(obs[0]) / len(obs) <= 2])
for obs in data:
    if len(obs[0]) / len(obs) <= 2:
        new_img = ndimage.zoom(obs, round(max_height / len(obs), 2), order=3)
        missing_cols = max_height * 2 - len(new_img[0])
        norm_img = []
        for row in new_img:
            norm_img.append(np.pad(row, (0, missing_cols), mode='constant', constant_values=255))
        norm_img = np.resize(norm_img, (max_height, max_height * 2))
A note about this code:
I'm rounding the zoom ratio because it makes the final height equal to max_height; I'm sure this is not the best approach, but it's working (any suggestion is appreciated here). What I'd like to do is expand the image, keeping its ratio, until it reaches a height equal to max_height. This is the only solution I found so far, and it worked right away; the interpolation works pretty well.
So my final questions are:
Is there a better approach to achieve what is explained above (image normalization)? Do you think I could have done this differently? Is there a common good practice I'm not following?
Thanks in advance for your time.
Instead of ndimage.zoom you could use scipy.misc.imresize. This function allows you to specify the target size as a tuple, instead of by zoom factor. Thus you won't have to call np.resize later to get the size exactly as desired.
Note that scipy.misc.imresize calls PIL.Image.resize under the hood, so PIL (or Pillow) is a dependency.
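(A caveat for current readers: scipy.misc.imresize was removed in SciPy 1.3. A rough Pillow-based stand-in is sketched below; note it skips imresize's automatic intensity rescaling, so it assumes uint8-like input.)

import numpy as np
from PIL import Image

def imresize_bicubic(arr, target_size):
    # target_size is (height, width); PIL's resize takes (width, height)
    th, tw = target_size
    img = Image.fromarray(np.asarray(arr, dtype=np.uint8))
    return np.asarray(img.resize((tw, th), resample=Image.BICUBIC))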
Instead of using np.pad in a for-loop, you could allocate space for the desired array, norm_arr, first:
norm_arr = np.full((max_height, max_width), fill_value=255)
and then copy the resized image, new_arr into norm_arr:
nh, nw = new_arr.shape
norm_arr[:nh, :nw] = new_arr
For example,
from __future__ import division
import numpy as np
from scipy import misc

data = [np.linspace(255, 0, i * 10).reshape(i, 10)
        for i in range(5, 100, 11)]

max_height = np.max([len(obs) for obs in data if len(obs[0]) / len(obs) <= 2])
max_width = 2 * max_height

result = []
for obs in data:
    norm_arr = obs
    h, w = obs.shape
    if float(w) / h <= 2:
        scale_factor = max_height / float(h)
        target_size = (max_height, int(round(w * scale_factor)))
        new_arr = misc.imresize(obs, target_size, interp='bicubic')
        norm_arr = np.full((max_height, max_width), fill_value=255)
        # check the shapes
        # print(obs.shape, new_arr.shape, norm_arr.shape)
        nh, nw = new_arr.shape
        norm_arr[:nh, :nw] = new_arr
    result.append(norm_arr)
    # visually check the result
    # misc.toimage(norm_arr).show()
I'm loading a bunch of 16x16 images from a .csv file with NumPy. Each row is a list of 256 grayscale values stored in column-major order (hence the order='F' below), so the shape is (n, 256) where n is the number of images. This means I can display any individual image with pyplot as:
plot.imshow(np.reshape(images[index], (16, 16), order='F'), cmap=cm.Greys_r)
I want to tile these images with a certain number of images per row. I do have a working solution:
def TileImage(imgs, picturesPerRow=16):
    # Convert to a true list of 16x16 images
    tmp = np.reshape(imgs, (-1, 16, 16), order='F')
    img = ""
    for i in range(0, tmp.shape[0], picturesPerRow):
        # On the last iteration, we may not have exactly picturesPerRow
        # images left, so we need to pad
        if tmp.shape[0] - i >= picturesPerRow:
            mid = np.concatenate(tmp[i:i + picturesPerRow], axis=1)
        else:
            padding = np.zeros((picturesPerRow - (tmp.shape[0] - i), 16, 16))
            mid = np.concatenate(np.concatenate((tmp[i:tmp.shape[0]], padding), axis=0), axis=1)
        if img == "":
            img = mid
        else:
            img = np.concatenate((img, mid), axis=0)
    return img
This works perfectly fine, but it feels like there should be a much cleaner way to do this sort of thing. I'm a bit of a novice at Numpy and I was wondering if there was a cleaner way to tile the flattened data in a way without all the manual padding and conditional concatenation.
Usually these sorts of simple array reshaping operations can be done in a couple of lines with Numpy, so I feel like I'm missing something. (Also, using a "" as a flag as if it were a null pointer seems a bit messy)
Here is a simplified version of your implementation.
I could not think of a simpler way of doing it.
def TileImage(imgs, picturesPerRow=16):
    """Tile a stack of flattened 16x16 images into one big 2-D image."""
    # Pad with blank images so the count is a multiple of picturesPerRow
    rowPadding = (-imgs.shape[0]) % picturesPerRow
    if rowPadding:
        imgs = np.vstack([imgs, np.zeros((rowPadding, imgs.shape[1]))])
    # Unflatten each row into a 16x16 image (order='F' to match your
    # column-major data layout)
    imgs = imgs.reshape(-1, 16, 16, order='F')
    # Tiling loop (the conditionals are not necessary anymore)
    tiled = []
    for i in range(0, imgs.shape[0], picturesPerRow):
        tiled.append(np.hstack(imgs[i:i + picturesPerRow]))
    return np.vstack(tiled)
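As an aside, the loop can also be eliminated entirely with a reshape/transpose; a sketch under the same assumptions (flattened 16x16 column-major images, blank padding; the function name is hypothetical):

def TileImageNoLoop(imgs, picturesPerRow=16, h=16, w=16):
    # Pad with blank images so the count is a multiple of picturesPerRow
    pad = (-imgs.shape[0]) % picturesPerRow
    imgs = np.vstack([imgs, np.zeros((pad, imgs.shape[1]))])
    tiles = imgs.reshape(-1, h, w, order='F')       # unflatten each image
    rows = tiles.reshape(-1, picturesPerRow, h, w)  # group picturesPerRow images
    # Swap the picture axis and the pixel-row axis, then flatten to 2-D
    return rows.transpose(0, 2, 1, 3).reshape(-1, picturesPerRow * w)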
Hope it helps.
I'm trying to add two images together using NumPy and PIL. The way I would do this in MATLAB would be something like:
>> M1 = imread('_1.jpg');
>> M2 = imread('_2.jpg');
>> resM = M1 + M2;
>> imwrite(resM, 'res.jpg');
I get something like this:
(MATLAB result image removed; dead link)
Using a compositing program and adding the images, the MATLAB result seems to be right.
In Python I'm trying to do the same thing like this:
from PIL import Image
from numpy import *
im1 = Image.open('/Users/rem7/Desktop/_1.jpg')
im2 = Image.open('/Users/rem7/Desktop/_2.jpg')
im1arr = asarray(im1)
im2arr = asarray(im2)
addition = im1arr + im2arr
resultImage = Image.fromarray(addition)
resultImage.save('/Users/rem7/Desktop/a.jpg')
and I get something like this:
(Python result image removed; dead link)
Why am I getting all those funky colors? I also tried using ImageMath.eval("a+b", a=im1, b=im2), but I get an error about RGB unsupported.
I also saw that there is an Image.blend() but that requires an alpha.
What's the best way to achieve what I'm looking for?
Source images (images have been removed).
Humm, OK, well I added the source images using the add image icon and they show up when I'm editing the post, but for some reason the images don't show up in the post.
As everyone has suggested already, the weird colors you're observing are overflow. And as you point out in the comment on schnaader's answer, you still get overflow if you add your images like this:
addition = (im1arr + im2arr) / 2
The reason for this overflow is that your NumPy arrays (im1arr, im2arr) are of the uint8 type (i.e. 8-bit). This means each element of the array can only hold values up to 255, so when your sum exceeds 255, it wraps around to 0:
>>> array([255, 10, 100], dtype='uint8') + array([1, 10, 160], dtype='uint8')
array([ 0, 20,  4], dtype=uint8)
To avoid overflow, your arrays should be able to contain values beyond 255. You need to convert them to floats for instance, perform the blending operation and convert the result back to uint8:
im1arrF = im1arr.astype('float')
im2arrF = im2arr.astype('float')
additionF = (im1arrF+im2arrF)/2
addition = additionF.astype('uint8')
You should not do this:
addition = im1arr/2 + im2arr/2
as you lose information by squashing the dynamic range of the images (you effectively make them 7-bit) before you perform the blending operation.
MATLAB note: the reason you don't see this problem in MATLAB is probably because MATLAB saturates instead of wrapping on uint8 arithmetic (255 + 10 gives 255, not 9).
Using PIL's blend() with an alpha value of 0.5 would be equivalent to (im1arr + im2arr)/2. Blend does not require that the images have alpha layers.
Try this:
from PIL import Image
im1 = Image.open('/Users/rem7/Desktop/_1.jpg')
im2 = Image.open('/Users/rem7/Desktop/_2.jpg')
Image.blend(im1,im2,0.5).save('/Users/rem7/Desktop/a.jpg')
To clamp numpy array values (the sum must be computed in a wider dtype first, since a uint8 sum has already wrapped by the time you clamp, and 8-bit values top out at 255, not 256):
>>> c = a.astype('int16') + b
>>> c[c > 255] = 255
It seems the code you posted just sums up the values, and values bigger than 255 overflow. You want something like "(a + b) / 2" or "min(a + b, 255)". The latter seems to be the way your MATLAB example does it.
Your sample images are not showing up for me, so I am going to do a bit of guessing.
I can't remember exactly how the NumPy-to-PIL conversion works, but there are two likely cases. I am 95% sure it is case 1, but I'm giving case 2 just in case I am wrong.
1) im1arr is an MxN array of integers (ARGB), and when you add im1arr and im2arr together you are overflowing from one channel into the next if the components b1 + b2 > 255. I am guessing MATLAB represents its images as MxNx3 arrays so each color channel is separate. You can solve this by splitting the PIL image channels and then making numpy arrays.
2) im1arr is an MxNx3 array of bytes, and when you add im1arr and im2arr together you are wrapping the components around.
You are also going to have to rescale the range back to 0-255 before displaying. Your choices are to divide by 2, scale by 255/array.max(), or clip. I don't know what MATLAB does.
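A sketch of the clip option with stand-in arrays (the int16 upcast is needed so the sum doesn't wrap before clipping):

import numpy as np

im1arr = np.array([[255, 10, 100]], dtype='uint8')  # stand-in images
im2arr = np.array([[1, 10, 160]], dtype='uint8')

summed = im1arr.astype('int16') + im2arr            # upcast to avoid wrap-around
clipped = np.clip(summed, 0, 255).astype('uint8')
print(clipped)                                      # [[255  20 255]]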