I want to create a salt and pepper noise function (PIL and NumPy) - Python

I want to create a salt and pepper noise function.
The input is noise_density, i.e. the fraction of pixels in the output image that should be noise, and the return value should be the noisy image data source:
def salt_pepper(noise_density):
    noisesource = ColumnDataSource(data={'image': [noiseImage]})
    return noisesource

This function returns an image that is [density]x[density] pixels, using numpy to generate a random array and using PIL to generate the image itself from the array.
def salt_pepper(density):
    imarray = numpy.random.rand(density, density, 3) * 255
    return Image.fromarray(imarray.astype('uint8')).convert('L')
Now, for example, you could run
salt_pepper(500)
To generate an image file that is 500x500px.
Of course, make sure to
import numpy
from PIL import Image

I came up with a vectorized solution which I'm sure can be improved/simplified. Although the interface is not exactly the one requested, the code is pretty straightforward (and fast 😬) and I'm sure it can easily be adapted.
import numpy as np
from PIL import Image
def salt_and_pepper(image, prob=0.05):
    # If the specified `prob` is negative or zero, we don't need to do anything.
    if prob <= 0:
        return image

    arr = np.asarray(image)
    original_dtype = arr.dtype

    # Derive the number of intensity levels from the array datatype.
    intensity_levels = 2 ** (arr[0, 0].nbytes * 8)
    min_intensity = 0
    max_intensity = intensity_levels - 1

    # Generate an array with the same shape as the image's.
    # Each entry will have:
    #   1 with probability: 1 - prob
    #   0 or np.nan (50% each) with probability: prob
    random_image_arr = np.random.choice(
        [min_intensity, 1, np.nan], p=[prob / 2, 1 - prob, prob / 2], size=arr.shape
    )

    # This results in an image array with the following properties:
    # - With probability 1 - prob: the pixel KEEPS ITS VALUE (it was multiplied by 1)
    # - With probability prob/2: the pixel has value zero (it was multiplied by 0)
    # - With probability prob/2: the pixel has value np.nan (it was multiplied by np.nan)
    # We need `arr.astype(float)` to make sure np.nan is a valid value.
    salt_and_peppered_arr = arr.astype(float) * random_image_arr

    # Since we want SALT instead of NaN, we replace it.
    # We cast the array back to its original dtype so we can pass it to PIL.
    salt_and_peppered_arr = np.nan_to_num(
        salt_and_peppered_arr, nan=max_intensity
    ).astype(original_dtype)

    return Image.fromarray(salt_and_peppered_arr)
You can load a black and white version of Lena like so:
lena = Image.open("lena.ppm")
bwlena = Image.fromarray(np.asarray(lena).mean(axis=2).astype(np.uint8))
Finally, you can save a couple of examples:
salt_and_pepper(bwlena, prob=0.1).save("sp01lena.png", "PNG")
salt_and_pepper(bwlena, prob=0.3).save("sp03lena.png", "PNG")
Results:
https://i.ibb.co/J2y9HXS/sp01lena.png
https://i.ibb.co/VTm5Vy2/sp03lena.png

Related

Image turns dark after converting

I am working with a dataset that contains .mha files. I want to convert these files to either PNG or TIFF for some work. I am using the MedPy library for the conversion.
from medpy.io import load, save

image_data, image_header = load('image_path/c0001.mha')
save(image_data, 'image_save_path/new_image.png', image_header)
I can actually convert the image into png/tiff format, but the converted image turns dark after the conversion. I am attaching the screenshot below. How can I convert the images successfully?
Your data is clearly limited to 12 bits (white is 2**12-1, i.e., 4095), while a PNG image in this context is 16 bits (white is 2**16-1, i.e., 65535). For this reason your PNG image is so dark that it appears almost black (but if you look closely it isn't).
The most precise transformation you can apply is the following:
import numpy as np
from medpy.io import load, save
def convert_to_uint16(data, source_max):
    target_max = 65535  # 2 ** 16 - 1
    # build a linear lookup table (LUT) indexed from 0 to source_max
    source_range = np.arange(source_max + 1)
    lut = np.round(source_range * target_max / source_max).astype(np.uint16)
    # apply it
    return lut[data]

image_data, image_header = load('c0001.mha')
new_image_data = convert_to_uint16(image_data, 4095)  # 2 ** 12 - 1
save(new_image_data, 'new_image.png', image_header)
N.B.: new_image_data = image_data * 16 corresponds to replacing 65535 with 65520 (4095 * 16) in convert_to_uint16
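To see the difference concretely, here is a quick illustrative check (not part of the original answer) on a few sample 12-bit values:
import numpy as np
samples = np.array([0, 1, 2048, 4095], dtype=np.uint16)
scale = 65535 / 4095.0  # the exact scaling the LUT above applies
# Exact scaling vs. the simple multiply-by-16 shortcut.
print(np.round(samples * scale).astype(np.uint16))  # [    0    16 32776 65535]
print(samples * 16)                                  # [    0    16 32768 65520]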
You may apply "contrast stretching".
The dynamic range of image_data is about [0, 4095]: the minimum value is about 0 and the maximum value is about 4095 (2^12-1).
You are saving the image as a 16-bit PNG.
When you display the PNG file, the viewer assumes the maximum value is 2^16-1 (the dynamic range of 16 bits is [0, 65535]).
The viewer assumes 0 is black, 2^16-1 is white, and values in between scale linearly.
In your case the white pixel value is about 4095, so it translates to a very dark gray in the [0, 65535] range.
The simplest solution is to multiply image_data by 16:
from medpy.io import load, save
image_data, image_header = load('image_path/c0001.mha')
save(image_data*16, 'image_save_path/new_image.png', image_header)
A more complicated solution is applying linear "contrast stretching".
We may transform the lower 1% of all pixels to 0, the upper 1% of the pixels to 2^16-1, and scale the pixels in between linearly.
import numpy as np
from medpy.io import load, save
image_data, image_header = load('image_path/c0001.mha')
tmp = image_data.copy()
tmp[tmp == 0] = np.median(tmp) # Ignore zero values by replacing them with median value (there are a lot of zeros in the margins).
tmp = tmp.astype(np.float32) # Convert to float32
# Get the value of lower and upper 1% of all pixels
lo_val, up_val = np.percentile(tmp, (1, 99)) # (for current sample: lo_val = 796, up_val = 3607)
# Linear stretching: Lower 1% goes to 0, upper 1% goes to 2^16-1, other values are scaled linearly
# Clip to range [0, 2^16-1], round and convert to uint16
# https://stackoverflow.com/questions/49656244/fast-imadjust-in-opencv-and-python
img = np.round(((tmp - lo_val)*(65535/(up_val - lo_val))).clip(0, 65535)).astype(np.uint16) # (for current sample: subtract 796 and scale by 23.31)
img[image_data == 0] = 0 # Restore the original zeros.
save(img, 'image_save_path/new_image.png', image_header)
The above method enhances the contrast but loses some of the original information.
In case you want higher contrast, you may use non-linear methods, improving the visibility but losing some "integrity".
Here is the "linear stretching" result (downscaled):

Having trouble getting the maximum value from a log ratio matrix

I'm trying to normalize a matrix of log ratios, so in order to do that, I want to find the maximum of the matrix. But I got infinity, which should be impossible.
The code I've written:
import imageio as im
import numpy as np
imagepath1 = 'Andasol_09051987.jpg'
imagepath2 = 'Andasol_09122013.jpg'
image1 = im.imread(imagepath1)
image2 = im.imread(imagepath2)
Ds = np.abs(image1 - image2)
Dl = np.abs(np.log(image2+1)-np.log(image1+1))
Dsmax = Ds.max()
Dsmin = Ds.min()
Ds = ((Ds - Dsmax)/(Dsmax - Dsmin))*255
Dlmax = np.amax(Dl)
Dlmin = Dl.min()
Dl = ((Dl - Dlmax)/(Dlmax - Dlmin))*255
For the subtraction (Ds) part, things work well, but the Dl part does not. The value of Dlmax is infinite.
And for the calculation of the log ratio
Dl = np.abs(np.log(image2+1)-np.log(image1+1))
It gives a warning:
RuntimeWarning: divide by zero encountered in log
  Dl = np.abs(np.log(image2+1)-np.log(image1+1))
I really want to avoid division by 0; that's why I add 1 to every pixel.
Both images are grayscale, so the value of each pixel lies in [0, 255].
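A likely cause, assuming the images load as uint8 arrays: image2 + 1 wraps around modulo 256, so pixels with value 255 become 0 and np.log(0) is -inf, which is where both the warning and the infinite Dlmax come from. A minimal sketch of the wrap-around and the float cast that avoids it:
import numpy as np
# With uint8 data, 255 + 1 wraps to 0, so np.log(image + 1) hits log(0) = -inf.
image = np.array([[10, 255]], dtype=np.uint8)
print(np.log(image + 1))    # [[ 2.3979    -inf]] plus the divide-by-zero warning
# Casting to float before the arithmetic avoids the wrap-around.
image_f = image.astype(np.float64)
print(np.log(image_f + 1))  # [[2.3979 5.5452]]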

Count the number of pixels by color from an image loaded into a numpy array

I'm trying to count the number of pixels in a weather radar image for each dBZ reflectivity level (the colored blocks of green, orange, yellow, red, etc.) so I can "score" the radar image based on the type of echoes.
I'm new to numpy and numpy arrays, but I know it can be very efficient when I'm working with the individual pixels in an image, so I'd like to learn more.
I'm not even sure I'm selecting the pixels correctly, but I think I'm getting close.
I have a sample of using both numpy and basic pixel iteration to count the number of green pixels with an RGBA of (1, 197, 1, 255).
Hopefully I'm close and someone can give me guidance on how to select the pixels using numpy and then count them:
import io
import numpy as np
import PIL.Image
import urllib2
import sys
color_dbz_20 = (2, 253, 2, 255)
color_dbz_25 = (1, 197, 1, 255)
color_dbz_30 = (0, 142, 0, 255)
url = 'http://radar.weather.gov/ridge/RadarImg/N0R/DLH_N0R_0.gif'
image_bytes = io.BytesIO(urllib2.urlopen(url).read())
image = PIL.Image.open(image_bytes)
image = image.convert('RGBA')
total_pixels = image.height * image.width
# Count using numpy
np_pixdata = np.array(image)
# Didn't work, gave me the total size:
# np_counter = np_pixdata[(np_pixdata == color_dbz_20)].size
np_counter = np.count_nonzero(np_pixdata[(np_pixdata == color_dbz_20)])
# Count using pillow
pil_pixdata = image.load()
pil_counter = 0
for y in xrange(image.size[1]):
    for x in xrange(image.size[0]):
        if pil_pixdata[x, y] == color_dbz_20:
            pil_counter += 1
print "Numpy Count: %d" % np_counter
print "Pillow Count: %d" % pil_counter
Output:
Numpy Count: 134573
Pillow Count: 9967
The problem is that the numpy array has shape X * Y * 4, but you compare each of its elements (a single number) with a tuple. That's the reason why your:
np_counter = np_pixdata[(np_pixdata == color_dbz_20)].size
didn't exclude any elements.
You got different counts in the end because you counted the nonzero elements. But some array elements are zero (just for one color channel, but nevertheless 0), and those are excluded even though you don't want that!
First, since you want to compare numpy arrays, better convert the color tuples too:
color_dbz_20 = np.array([2, 253, 2, 255]), ...
To get the real result for your condition you must use np.all along axis=2:
np.all(np_pixdata == color_dbz_20, axis=2)
This checks, for each pixel, whether the values along axis 2 (the colors) are equal to the ones in your color_dbz_20. To get the sum of all the matches:
np.sum(np.all(np_pixdata == color_dbz_20, axis=2)) # Sum of boolean array is integer!
which gives you the number of pixels where the condition is True. True is interpreted as 1 and False as 0, so taking the sum works; alternatively you could use count_nonzero instead of sum here. This always assumes you created your color_dbz_20 array with np.array.
If the image has a different dimensionality and it's not width * height * depth, you just need to adjust the axis in np.all to the dimension where the colors are (the one with length 4).
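For a concrete illustration of this counting pattern (a small made-up array, not the radar image from the question):
import numpy as np
# Hypothetical 2x2 RGBA array, just to show the masking/counting approach.
np_pixdata = np.array([
    [[1, 197, 1, 255], [0, 0, 0, 255]],
    [[1, 197, 1, 255], [2, 253, 2, 255]],
], dtype=np.uint8)
color_dbz_25 = np.array([1, 197, 1, 255])
# Per-pixel mask: True where all four channels match the target color.
mask = np.all(np_pixdata == color_dbz_25, axis=2)
print(mask.sum())  # 2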

Better image normalization with numpy

I already achieved the goal described in the title, but I was wondering if there was a more efficient (or generally better) way to do it. First of all, let me introduce the problem.
I have a set of images of different sizes but with a width/height ratio less than (or equal to) 2 (it could be anything, but let's say 2 for now). I want to normalize each one, meaning I want all of them to have the same size. Specifically, I am going to do so like this:
Extract the max height above all images
Zoom the image so that each image reaches the max height keeping its ratio
Add a padding to the right with just white pixels until the image has a width/height ratio of 2
Keep in mind the images are represented as numpy matrices of grey scale values [0,255].
This is how I'm doing it now in Python:
max_height = numpy.max([len(obs) for obs in data if len(obs[0])/len(obs) <= 2])
for obs in data:
    if len(obs[0])/len(obs) <= 2:
        new_img = ndimage.zoom(obs, round(max_height/len(obs), 2), order=3)
        missing_cols = max_height * 2 - len(new_img[0])
        norm_img = []
        for row in new_img:
            norm_img.append(np.pad(row, (0, missing_cols), mode='constant', constant_values=255))
        norm_img = np.resize(norm_img, (max_height, max_height*2))
A note about this code:
I'm rounding the zoom ratio because it makes the final height equal to max_height; I'm sure this is not the best approach, but it's working (any suggestion is appreciated here). What I'd like to do is to expand the image, keeping the ratio, until it reaches a height equal to max_height. This is the only solution I found so far and it worked right away; the interpolation works pretty well.
So my final questions are:
Is there a better approach to achieve what is explained above (image normalization)? Do you think I could have done this differently? Is there a common good practice I'm not following?
Thanks in advance for your time.
Instead of ndimage.zoom you could use scipy.misc.imresize. This function allows you to specify the target size as a tuple, instead of by zoom factor, so you won't have to call np.resize later to get the size exactly as desired.
Note that scipy.misc.imresize calls PIL.Image.resize under the hood, so PIL (or Pillow) is a dependency.
Instead of using np.pad in a for-loop, you could allocate space for the desired array, norm_arr, first:
norm_arr = np.full((max_height, max_width), fill_value=255)
and then copy the resized image, new_arr, into norm_arr:
nh, nw = new_arr.shape
norm_arr[:nh, :nw] = new_arr
For example,
from __future__ import division
import numpy as np
from scipy import misc

data = [np.linspace(255, 0, i*10).reshape(i, 10)
        for i in range(5, 100, 11)]

max_height = np.max([len(obs) for obs in data if len(obs[0])/len(obs) <= 2])
max_width = 2*max_height

result = []
for obs in data:
    norm_arr = obs
    h, w = obs.shape
    if float(w)/h <= 2:
        scale_factor = max_height/float(h)
        target_size = (max_height, int(round(w*scale_factor)))
        new_arr = misc.imresize(obs, target_size, interp='bicubic')
        norm_arr = np.full((max_height, max_width), fill_value=255)
        # check the shapes
        # print(obs.shape, new_arr.shape, norm_arr.shape)
        nh, nw = new_arr.shape
        norm_arr[:nh, :nw] = new_arr
    result.append(norm_arr)
    # visually check the result
    # misc.toimage(norm_arr).show()
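As a side note, scipy.misc.imresize has since been removed from SciPy, so on current versions you would call Pillow directly. A minimal sketch under that assumption (resize_to_height is a made-up helper name, not part of the answer above):
import numpy as np
from PIL import Image

def resize_to_height(obs, max_height):
    # Resize a 2-D grayscale array to max_height, keeping the aspect ratio.
    h, w = obs.shape
    target_w = int(round(w * max_height / float(h)))
    img = Image.fromarray(obs.astype(np.uint8))
    # PIL's resize takes (width, height), the reverse of the numpy shape order.
    resized = img.resize((target_w, max_height), resample=Image.BICUBIC)
    return np.asarray(resized)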

Benchmarking Functions for Computing Average RGB Value of Images

I implemented the computation of the average RGB value of a Python Imaging Library image in two ways:
1 - using lists
def getAverageRGB(image):
    """
    Given PIL Image, return average value of color as (r, g, b)
    """
    # no. of pixels in image
    npixels = image.size[0]*image.size[1]
    # get colors as [(cnt1, (r1, g1, b1)), ...]
    cols = image.getcolors(npixels)
    # get [(c1*r1, c1*g1, c1*b1), ...]
    sumRGB = [(x[0]*x[1][0], x[0]*x[1][1], x[0]*x[1][2]) for x in cols]
    # calculate (sum(ci*ri)/np, sum(ci*gi)/np, sum(ci*bi)/np)
    # the zip gives us [(c1*r1, c2*r2, ...), (c1*g1, c2*g2, ...), ...]
    avg = tuple([sum(x)/npixels for x in zip(*sumRGB)])
    return avg
2 - using numpy
def getAverageRGBN(image):
    """
    Given PIL Image, return average value of color as (r, g, b)
    """
    # get image as numpy array
    im = np.array(image)
    # get shape
    w, h, d = im.shape
    # change shape
    im.shape = (w*h, d)
    # get average
    return tuple(np.average(im, axis=0))
I was surprised to find that #1 runs about 20% faster than #2.
Am I using numpy correctly? Is there a better way to implement the average computation?
Surprising indeed.
You may want to use:
tuple(im.mean(axis=0))
to compute your mean (r, g, b), but I doubt it's going to improve things much. Have you tried profiling getAverageRGBN to find the bottleneck?
A one-liner, without changing the dimensions or writing getAverageRGBN:
np.array(image).mean(axis=(0,1))
Again, it might not improve performance at all.
In PIL or Pillow, in Python 3.4+:
from statistics import mean
average_color = [mean(image.getdata(band)) for band in range(3)]
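If you want to time the variants yourself, here is a rough sketch (assuming the getAverageRGB and getAverageRGBN functions from the question are defined; the image size is arbitrary):
import timeit
import numpy as np
from PIL import Image

# Random RGB test image, purely for benchmarking purposes.
test_image = Image.fromarray((np.random.rand(480, 640, 3) * 255).astype('uint8'), 'RGB')

print(timeit.timeit(lambda: getAverageRGB(test_image), number=100))
print(timeit.timeit(lambda: getAverageRGBN(test_image), number=100))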
