Python: Greyscale image: Make everything white, except for black pixels

I tried to open (already greyscale) images and change all non-black pixels to white pixels. I implemented the following code:
from scipy.misc import fromimage, toimage
from PIL import Image
import numpy as np
in_path = 'E:\\in.png'
out_path = 'E:\\out.png'
# Open gray-scale image
img = Image.open(in_path).convert('L')
# Just for testing: the image is saved correctly
#img.save(out_path)
# Make all non-black colors white
imp_arr = fromimage(img)
imp_arr = (np.ceil(imp_arr / 255.0) * 255.0).astype(int)
# Save the image
img = toimage(imp_arr, mode='L')
img.save(out_path)
The calculation to make all pixels white except for the black ones is quite simple and also very fast. It is especially important for my use case that it works very fast, which is why I used numpy. For some reason, though, this code does not work with all images.
An example: The following image is the input.
It contains a grey rectangle and also a white border. The output should be a completely white image, but for some reason it is a black image:
With some other images it works quite well. What am I doing wrong? I don't think floating point should be a big issue here, because this code does not require high calculation accuracy to work.
Thank you very much

toimage expects a byte array, so convert to uint8 not int:
imp_arr = (np.ceil(imp_arr / 255.0) * 255.0).astype('uint8')
It seems to work with int if there is a mix of black and white pixels in the output, but not if they are all white. I can't find any explanation for this in the documentation.
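Separately, note that scipy.misc.fromimage and toimage have been deprecated and removed in later SciPy releases. A minimal sketch of the same operation with plain numpy and Pillow, reusing the in_path and out_path variables from the question:
import numpy as np
from PIL import Image
img = Image.open(in_path).convert('L')
arr = np.array(img)  # replaces fromimage()
# Any non-black pixel becomes white; avoids the ceil/divide trick entirely
arr = np.where(arr > 0, 255, 0).astype(np.uint8)
Image.fromarray(arr, mode='L').save(out_path)  # replaces toimage()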

Related

How to remove white fuzziness from image in python

I'm trying to remove the background from product images and save them as transparent PNGs, but I've got to a point where I can't figure out how and why I get a white line around the products, like a fuzziness (see second image); I don't know the real word for the effect. Also I'm losing the Nike swoosh, which is white too :(
from PIL import Image
img = Image.open('test.jpg')
img = img.convert("RGBA")
datas = img.getdata()
newData = []
for item in datas:
    if item[0] > 247 and item[1] > 247 and item[2] > 247:
        newData.append((255, 255, 255, 0))
    else:
        newData.append(item)
img.putdata(newData)
img.save("test.png", "PNG")
Any ideas how I can fix this so I get clean selections and edges?
Take a copy of your image and use PIL/Pillow's ImageDraw.floodfill() to flood fill from the top-left corner using a reasonable tolerance - that way you will only fill to the edges of the shirt and avoid the Nike logo.
Then take the background outline and make it white and everything else black and try applying some morphology (from scikit-image maybe) to dilate the white a little larger to hide the jaggies.
Finally, put the resulting new layer into the image with putalpha().
I am really pushed for time, but here are the bones of it. Just missing the copy of the original image at the start and the putalpha() of the new alpha layer back at the end...
from PIL import Image, ImageDraw
import numpy as np
import skimage.morphology
# Open the shirt
im = Image.open('shirt.jpg')
# Make all background pixels (not including Nike logo) into magenta (255,0,255)
ImageDraw.floodfill(im,xy=(0,0),value=(255,0,255),thresh=10)
# DEBUG
im.show()
Experiment with the threshold (thresh) here. If you make it 50, it works much more cleanly and may be good enough to stop.
# Make into Numpy array
n = np.array(im)
# Mask of magenta background pixels
bgMask = (n[:, :, 0:3] == [255, 0, 255]).all(2)
# DEBUG
Image.fromarray((bgMask*255).astype(np.uint8)).show()
# Make a disk-shaped structuring element
strel = skimage.morphology.disk(13)
# Perform a morphological closing with structuring element
closed = skimage.morphology.binary_closing(bgMask,selem=strel)
# DEBUG
Image.fromarray((closed*255).astype(np.uint8)).show()
If you are unfamiliar with morphology, Anthony Thyssen has some excellent notes worth reading here.
By the way, you could also use potrace to smooth the outline somewhat.
I had a bit more time today so here is a more complete version. You can experiment with the morphology disk sizes and floodfill thresholds according to your images till you find something tailored for your needs:
#!/bin/env python3
from PIL import Image, ImageDraw
import numpy as np
import skimage.morphology
# Open the shirt and make a clean copy before we dink with it too much
im = Image.open('shirt.jpg')
orig = im.copy()
# Make all background pixels (not including Nike logo) into magenta (255,0,255)
ImageDraw.floodfill(im,xy=(0,0),value=(255,0,255),thresh=50)
# DEBUG
im.show()
# Make into Numpy array
n = np.array(im)
# Mask of magenta background pixels
bgMask = (n[:, :, 0:3] == [255, 0, 255]).all(2)
# DEBUG
Image.fromarray((bgMask*255).astype(np.uint8)).show()
# Make a disk-shaped structuring element
strel = skimage.morphology.disk(13)
# Perform a morphological closing with structuring element to remove blobs
newalpha = skimage.morphology.binary_closing(bgMask,selem=strel)
# Perform a morphological dilation to expand mask right to edges of shirt
newalpha = skimage.morphology.binary_dilation(newalpha, selem=strel)
# Make a PIL representation of newalpha, converting from True/False to 0/255
newalphaPIL = (newalpha*255).astype(np.uint8)
newalphaPIL = Image.fromarray(255-newalphaPIL, mode='L')
# DEBUG
newalphaPIL.show()
# Put new, cleaned up image into alpha layer of original image
orig.putalpha(newalphaPIL)
orig.save('result.png')
As regards using potrace to smooth the outline, you would save newalphaPIL as a PGM-format image because that is what potrace likes as input. So that would be:
newalphaPIL.save('newalpha.pgm')
Now you can play around, oops I meant "experiment carefully" with potrace to smooth the alpha outline. The basic command is:
potrace -b pgm newalpha.pgm -o smoothalpha.pgm
You can then re-load the image smoothalpha.pgm back into your Python and use it on the last line in the putalpha() call. Here is an animation of the difference between the original unsmoothed alpha and the smoothed one:
Look carefully at the edges to see the difference. You may want to experiment with resizing the alpha either to twice the size or half the size before smoothing to see what effect that has.
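The re-load step would look something like this, a minimal sketch assuming the orig image from the script above (the output filename is just illustrative):
from PIL import Image
smooth = Image.open('smoothalpha.pgm').convert('L')
orig.putalpha(smooth)
orig.save('result-smoothed.png')  # hypothetical output name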

Why 16bit to 8bit conversion produces striped image?

I am testing a segmentation algorithm on several VHSR satellite images, which originally come in 16-bit format, but when I convert them to 8-bit images, the produced images show a striped appearance.
I've been trying different Python libraries (skimage, cv2, scipy), getting similar results.
1) The original 16-bit image is a 4-band image (NIR, B, G, R), so you need to choose the right bands to create a true-colour RGB image (bands 4, 3, 2). Thanks in advance. It can be downloaded from this link:
16bit image
2) I use this code to convert each pixel value from a 16-bit integer to one fitting within the 8-bit range:
from scipy.misc import bytescale
from skimage import io
import numpy as np
import matplotlib.pyplot as plt
SS = io.imread('Imag16bit.tif')
SS = bytescale(SS)
SS = np.asarray(SS)
plt.imshow(SS)
plt.show()
This is my result of above code:
bytescale works for me. I think the asarray step messes up something.
import cv2
from skimage import io
from scipy.misc import bytescale
image = io.imread('SkySat_16bit.tif')
cv2.imshow('Original', image)
print(image.dtype)
image = bytescale(image)
print(image.dtype)
cv2.imshow('Converted', image)
cv2.waitKey(0)
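As an aside, scipy.misc.bytescale has been removed from recent SciPy releases; a roughly equivalent helper in plain numpy would be something like this (a sketch of the default min/max scaling, not an exact port):
import numpy as np
def bytescale_np(arr):
    # Linearly rescale the array's min..max range onto 0..255
    a = arr.astype(np.float64)
    lo, hi = a.min(), a.max()
    if hi == lo:
        return np.zeros(arr.shape, dtype=np.uint8)
    return ((a - lo) / (hi - lo) * 255).astype(np.uint8)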
I think this is a way to do it:
#!/usr/local/bin/python3
from PIL import Image
from tifffile import imsave, imread
import numpy as np
# Load image
im = imread('SkySat_16bit.tif')
# Extract Red, Green and Blue bands into separate 8-bit arrays
R = (im[:,:,3]/256).astype(np.uint8)
G = (im[:,:,2]/256).astype(np.uint8)
B = (im[:,:,1]/256).astype(np.uint8)
# Combine bands into RGB array
RGB = np.dstack((R,G,B))
# Save to disk
Image.fromarray(RGB).save('result.png')
You may want to adjust the contrast a bit, and check I selected the correct bands.
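If the result looks flat, a simple percentile-based contrast stretch is one way to adjust it. A sketch, assuming the same 4-band im array as above (the stretch helper and percentile values are just illustrative):
import numpy as np
def stretch(band, lo_pct=2, hi_pct=98):
    # Clip to the given percentiles, then rescale onto the full 0..255 range
    lo, hi = np.percentile(band, (lo_pct, hi_pct))
    out = np.clip((band.astype(np.float32) - lo) / (hi - lo), 0, 1)
    return (out * 255).astype(np.uint8)
RGB = np.dstack([stretch(im[:, :, b]) for b in (3, 2, 1)])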

Edit image pixel by pixel - Python

I have two images, one overlay and one background.
I want to create a new image by editing the overlay image, manipulating it to show only the pixels where the background image has blue colour.
I don't want to add the background; it is only for selecting the pixels.
The rest should be transparent.
Any hints or ideas, please? PS: I edited the result image with Paint, so it's not perfect.
Image 1 is background image.
Image 2 is overlay image.
Image 3 is the check I want to perform (finding which pixels are blue in the background and making the remaining pixels transparent).
Image 4 is the result image after editing.
I renamed your images according to my way of thinking, so I took this as image.png:
and this as mask.png:
Then I did what I think you want as follows. I wrote it quite verbosely so you can see all the steps along the way:
#!/usr/local/bin/python3
from PIL import Image
import numpy as np
# Open input images
image = Image.open("image.png")
mask = Image.open("mask.png")
# Get dimensions - note that PIL's size is (width, height)
w, h = image.size
# Resize mask to match image, taking care not to introduce new colours (Image.NEAREST)
mask = mask.resize((w, h), Image.NEAREST)
mask.save('mask_resized.png')
# Convert both images to numpy equivalents
npimage = np.array(image)
npmask = np.array(mask)
# Make image transparent where mask is not blue
# Blue pixels in mask seem to show up as RGB(163 204 255)
npimage[:,:,3] = np.where((npmask[:,:,0]<170) & (npmask[:,:,1]<210) & (npmask[:,:,2]>250),255,0).astype(np.uint8)
# Identify grey pixels in image, i.e. R=G=B, and make transparent also
RequalsG=np.where(npimage[:,:,0]==npimage[:,:,1],1,0)
RequalsB=np.where(npimage[:,:,0]==npimage[:,:,2],1,0)
grey=(RequalsG*RequalsB).astype(np.uint8)
npimage[:,:,3] *= 1-grey
# Convert numpy image to PIL image and save
PILrgba=Image.fromarray(npimage)
PILrgba.save("result.png")
And this is the result:
Notes:
a) Your image already has an (unused) alpha channel present.
b) Any lines starting:
npimage[:,:,3] = ...
are just modifying the 4th channel, i.e. the alpha/transparency channel of the image.
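As an aside, the grey-pixel test in the middle of the script can be collapsed into a single boolean mask; a sketch with the same behaviour:
# True where R == G == B, i.e. the pixel is a shade of grey
grey = (npimage[:, :, 0] == npimage[:, :, 1]) & (npimage[:, :, 0] == npimage[:, :, 2])
npimage[grey, 3] = 0  # make grey pixels fully transparent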

Displaying a grayscale Image

My aim:
Read an image into the PIL format.
Convert it to grayscale.
Plot the image using pylab.
Here is the code I'm using:
from PIL import Image
from pylab import *
import numpy as np
inputImage = r'C:\Test\Test1.jpg'
##outputImage = r'C:\Test\Output\Test1.jpg'
pilImage=Image.open(inputImage)
pilImage.draft('L',(500,500))
imageArray= np.asarray(pilImage)
imshow(imageArray)
##pilImage.save(outputImage)
axis('off')
show()
My Problem:
The image gets displayed as if the colours are inverted.
But I know that the image is being converted to grayscale, because when I write it to disk it appears as a grayscale image (just as I expect).
I feel that the problem is somewhere in the numpy conversion.
I've just started programming in Python for Image Processing.
Any tips and guidelines will also be appreciated.
You want to override the default color map:
imshow(imageArray, cmap="Greys_r")
Here's a page on plotting images and pseudocolor in matplotlib.
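In context, a minimal sketch reusing imageArray from the question:
from pylab import imshow, axis, show
imshow(imageArray, cmap="Greys_r")  # force a greyscale colour map
axis('off')
show()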
This produces a B&W image:
pilImage = Image.open(inputImage)
pilImage = pilImage.convert('1')  # this converts to black & white
pilImage.draft('L', (500, 500))
pilImage.save('outfile.png')
From the convert method docs:
convert
im.convert(mode) => image
Returns a converted copy of an image.
When translating from a palette image, this translates pixels through the palette.
If mode is omitted, a mode is chosen so that all information in the image and the palette can be represented without a palette.
When translating from a colour image to black and white, the library uses the ITU-R 601-2 luma transform:
L = R * 299/1000 + G * 587/1000 + B * 114/1000
When converting to a bilevel image (mode "1"), the source image is first converted to black and white.
Resulting values larger than 127 are then set to white, and the image is dithered.
To use other thresholds, use the point method.
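For example, thresholding at 100 rather than the default 127 might look like this (a sketch):
# Convert to greyscale, then apply a custom threshold via point()
bw = pilImage.convert('L').point(lambda p: 255 if p > 100 else 0, mode='1')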

Drawing semi-transparent polygons in PIL

How do you draw semi-transparent polygons using the Python Imaging Library?
Can you draw the polygon on a separate RGBA image then use the Image.paste(image, box, mask) method?
Edit: This works.
from PIL import Image
from PIL import ImageDraw
back = Image.new('RGBA', (512,512), (255,0,0,0))
poly = Image.new('RGBA', (512,512))
pdraw = ImageDraw.Draw(poly)
pdraw.polygon([(128, 128), (384, 384), (128, 384), (384, 128)],
              fill=(255, 255, 255, 127), outline=(255, 255, 255, 255))
back.paste(poly,mask=poly)
back.show()
http://effbot.org/imagingbook/image.htm#image-paste-method
I think @Nick T's answer is good, but you need to be careful when using his code as written with a very large background image, especially if you may be annotating several polygons on it. This is something I do when processing huge satellite images with object detection code, annotating the detections with transparent rectangles. To keep the code efficient no matter the size of the background image, I make the following suggestion.
I would modify the solution to specify that the polygon image that you will paste be only as large as required to hold the polygon, not the same size as the back image. The coordinates of the polygon are specified with respect to the local bounding box, not the global image coordinates. Then you paste the polygon image at the offset in the larger background image.
from PIL import Image, ImageDraw

img_size = (512, 512)
poly_size = (256, 256)
poly_offset = (128, 128)  # location in larger image

back = Image.new('RGBA', img_size, (255, 0, 0, 0))
poly = Image.new('RGBA', poly_size)
pdraw = ImageDraw.Draw(poly)
pdraw.polygon([(0, 0), (256, 256), (0, 256), (256, 0)],
              fill=(255, 255, 255, 127), outline=(255, 255, 255, 255))
back.paste(poly, poly_offset, mask=poly)
back.show()
Using the Image.paste(image, box, mask) method will convert the alpha channel in the pasted area of the background image into the corresponding transparency value of the polygon image.
The Image.alpha_composite(im1,im2) method utilizes the alpha channel of the "pasted" image, and will not turn the background transparent. However, this method again needs two equally sized images.
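A minimal sketch of the alpha_composite route, reusing the same 512x512 example as above:
from PIL import Image, ImageDraw
back = Image.new('RGBA', (512, 512), (255, 0, 0, 255))
poly = Image.new('RGBA', (512, 512))
pdraw = ImageDraw.Draw(poly)
pdraw.polygon([(128, 128), (384, 384), (128, 384), (384, 128)],
              fill=(255, 255, 255, 127), outline=(255, 255, 255, 255))
# alpha_composite requires two RGBA images of identical size
out = Image.alpha_composite(back, poly)
out.show()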
