How to remove white fuzziness from an image in Python

I'm trying to remove the background from product images and save them as transparent PNGs. I got to a point where I can't figure out how and why I get a white line around the products, like a fuzziness (see second image); I don't know the real word for the effect. Also I'm losing the Nike swoosh, which is white too :(
from PIL import Image
img = Image.open('test.jpg')
img = img.convert("RGBA")
datas = img.getdata()
newData = []
for item in datas:
    if item[0] > 247 and item[1] > 247 and item[2] > 247:
        newData.append((255, 255, 255, 0))
    else:
        newData.append(item)
img.putdata(newData)
img.save("test.png", "PNG")
Any ideas how I can fix this so I get clean selections and edges?

Take a copy of your image and use PIL/Pillow's ImageDraw.floodfill() to flood fill from the top-left corner using a reasonable tolerance - that way you will only fill to the edges of the shirt and avoid the Nike logo.
Then take the background outline and make it white and everything else black and try applying some morphology (from scikit-image maybe) to dilate the white a little larger to hide the jaggies.
Finally, put the resulting new layer into the image with putalpha().
I am really pushed for time, but here are the bones of it. Just missing the copy of the original image at the start and the putalpha() of the new alpha layer back at the end...
from PIL import Image, ImageDraw
import numpy as np
import skimage.morphology
# Open the shirt
im = Image.open('shirt.jpg')
# Make all background pixels (not including Nike logo) into magenta (255,0,255)
ImageDraw.floodfill(im,xy=(0,0),value=(255,0,255),thresh=10)
# DEBUG
im.show()
Experiment with the threshold (thresh) here. If you make it 50, it works much more cleanly and may be good enough to stop there.
# Make into Numpy array
n = np.array(im)
# Mask of magenta background pixels
bgMask = (n[:, :, 0:3] == [255,0,255]).all(2)
# DEBUG
Image.fromarray((bgMask*255).astype(np.uint8)).show()
# Make a disk-shaped structuring element
strel = skimage.morphology.disk(13)
# Perform a morphological closing with the structuring element
# (on scikit-image < 0.19 this parameter is called selem= instead of footprint=)
closed = skimage.morphology.binary_closing(bgMask, footprint=strel)
# DEBUG
Image.fromarray((closed*255).astype(np.uint8)).show()
If you are unfamiliar with morphology, Anthony Thyssen has some excellent notes worth reading here.
By the way, you could also use potrace to smooth the outline somewhat.
I had a bit more time today, so here is a more complete version. You can experiment with the morphology disk sizes and floodfill thresholds according to your images until you find something tailored to your needs:
#!/usr/bin/env python3
from PIL import Image, ImageDraw
import numpy as np
import skimage.morphology
# Open the shirt and make a clean copy before we dink with it too much
im = Image.open('shirt.jpg')
orig = im.copy()
# Make all background pixels (not including Nike logo) into magenta (255,0,255)
ImageDraw.floodfill(im,xy=(0,0),value=(255,0,255),thresh=50)
# DEBUG
im.show()
# Make into Numpy array
n = np.array(im)
# Mask of magenta background pixels
bgMask = (n[:, :, 0:3] == [255,0,255]).all(2)
# DEBUG
Image.fromarray((bgMask*255).astype(np.uint8)).show()
# Make a disk-shaped structuring element
strel = skimage.morphology.disk(13)
# Perform a morphological closing with the structuring element to remove blobs
newalpha = skimage.morphology.binary_closing(bgMask, footprint=strel)
# Perform a morphological dilation to expand the mask right to the edges of the shirt
newalpha = skimage.morphology.binary_dilation(newalpha, footprint=strel)
# Make a PIL representation of newalpha, converting from True/False to 0/255
newalphaPIL = (newalpha*255).astype(np.uint8)
newalphaPIL = Image.fromarray(255-newalphaPIL, mode='L')
# DEBUG
newalphaPIL.show()
# Put new, cleaned up image into alpha layer of original image
orig.putalpha(newalphaPIL)
orig.save('result.png')
As regards using potrace to smooth the outline, you would save newalphaPIL as a PGM-format image because that is what potrace likes as input. So that would be:
newalphaPIL.save('newalpha.pgm')
Now you can play around, oops I meant "experiment carefully" with potrace to smooth the alpha outline. The basic command is:
potrace -b pgm newalpha.pgm -o smoothalpha.pgm
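Back in Python, you can then re-load smoothalpha.pgm and use it in the final putalpha() call. A minimal sketch, assuming orig and the imports from the script above:
# Load the potrace-smoothed alpha and apply it to the original image
smooth = Image.open('smoothalpha.pgm').convert('L')
orig.putalpha(smooth)
orig.save('result-smoothed.png')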
Here is an animation of the difference between the original unsmoothed alpha and the smoothed one:
Look carefully at the edges to see the difference. You may want to experiment with resizing the alpha either to twice the size or half the size before smoothing to see what effect that has.
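If you want to try the doubling experiment, here is a hedged sketch; Image.NEAREST keeps the mask strictly binary, which suits potrace, but other resampling filters are worth comparing:
# Upscale the alpha to twice the size before tracing (assumes newalphaPIL from above)
big = newalphaPIL.resize((newalphaPIL.width * 2, newalphaPIL.height * 2), Image.NEAREST)
big.save('newalpha2x.pgm')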

Related

Change background color for thresholded image

I have been trying to write code to extract cracks from an image using thresholding. However, I want to keep the background black. What would be a good solution to keep the outer boundary visible and the background black? Attached below are the original image, the thresholded image, and the code used to extract it.
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Read image
img = cv2.imread('Original.png')
# Convert into gray scale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Image processing ( smoothing )
# Averaging
blur = cv2.blur(gray,(3,3))
ret,th1 = cv2.threshold(blur,145,255,cv2.THRESH_BINARY)
inverted = np.invert(th1)
plt.figure(figsize = (20,20))
plt.subplot(121),plt.imshow(img)
plt.title('Original'),plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(inverted,cmap='gray')
plt.title('Threshold'),plt.xticks([]), plt.yticks([])
Method 1
Assuming the circle in your images stays in one spot throughout your image set, you can manually create a black 'mask' image with a white hole in the middle, then overlay it on the final inverted image.
You can easily make the mask image using your favorite image editor's magic wand tool.
I made this¹ by also expanding the circle inwards by one pixel to take into account some of the pixels the magic wand tool couldn't catch.
You would then use the mask image like this:
mask = cv2.imread('/path/to/mask.png', cv2.IMREAD_GRAYSCALE)  # the mask must be single-channel
masked = cv2.bitwise_and(inverted, inverted, mask=mask)
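If you would rather synthesize the mask in code than with an image editor, a rough sketch follows; the centre and radius are hypothetical values you would tune to your images:
import cv2
import numpy as np

# Hypothetical circle position and size -- adjust to your images
mask = np.zeros(inverted.shape[:2], dtype=np.uint8)
cv2.circle(mask, (250, 250), 200, 255, thickness=-1)  # filled white disk on black
masked = cv2.bitwise_and(inverted, inverted, mask=mask)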
Method 2
If the circle does NOT stay in the same spot throughout your entire image set, you can try to make the mask from all the fully black pixels in your original image. This assumes that the 'sample' itself (the thing with the cracks) does not contain fully black pixels, although it will leave the text in the bottom left white.
# make all the non black pixels white
_,mask = cv2.threshold(gray,1,255,cv2.THRESH_BINARY)
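You would then apply it the same way as in Method 1:
masked = cv2.bitwise_and(inverted, inverted, mask=mask)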
¹ The original is not the same size as your inverted image, so the mask I made won't actually fit; you're going to have to make it yourself.

Python Pillow (PIL) not completely recoloring gradients?

I am having an issue where I'm using Pillow to recolor an image that has a lot of soft gradients, but it seems not to completely color the most translucent parts of these gradients, and the recolored image has a gradient that is not as smooth. Is there a way to fix this issue? Example images and current code below.
Original gradient: https://i.stack.imgur.com/VFi75.png
Recolored gradient: https://i.stack.imgur.com/e5iNa.png
Here is the Original transparent PNG of the image
import random
import Owl_Attributes
from PIL import Image, ImageColor
# I create the image here and convert the color code to RGBA
RGB_im = image_base_accent3.convert("RGBA")
datas = RGB_im.getdata()
newData = []
for item in datas:
    if item[0] == 208 and item[1] == 231 and item[2] == 161:
        newData.append((255, 0, 0, item[3]))
    else:
        newData.append(item)
RGB_im.putdata(newData)
RGB_im.save('Owl_project_pictures\_final_RGB.png')
First, a couple of things to consider:
Inspect your images before you start work. Yours has an alpha channel that is pretty much pointless and irrelevant, so I would discard it to save space and processing time.
Using for loops over Python lists of pixels is slow, inefficient, and error-prone. Try to use built-in functions based on C code, or vectorised functions like Numpy; see the sketch below.
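For instance, the per-pixel loop from the question could be vectorised like this; a sketch, assuming RGB_im is the RGBA image from your code:
import numpy as np
from PIL import Image

n = np.array(RGB_im)  # RGBA array, shape (h, w, 4)
match = (n[:, :, 0] == 208) & (n[:, :, 1] == 231) & (n[:, :, 2] == 161)
n[match, 0:3] = (255, 0, 0)  # recolour matching pixels, leaving their alpha intact
RGB_im = Image.fromarray(n)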
On to your image. There is a whole load of shades and gradations of tone in your image, and dealing with each one separately through if statements is going to be difficult. I would suggest you use HSV colourspace instead.
I think you want the basic result to be a very saturated red with the lightness dictated by the lightness of the original image.
So, I would make an image with:
Hue=0 (see lower part of this diagram), and
Saturation=255 (i.e. fully saturated), and
Value (i.e. brightness) of the original image.
In code that might look like this:
#!/usr/bin/env python3
# ImageMagick command-line "equivalent"
# magick -size 599x452 xc:black xc:white \( VFi75.png -colorspace gray +level 0,60% \) +combine HSL result.png
from PIL import Image
# Load image and create HSV version
im = Image.open('VFi75.png')
HSV = im.convert('HSV')
# Split into separate channels for processing, discarding Hue and Saturation
_, _, V = HSV.split()
# Synthesize Hue channel, same size as input image, filled with 0, to make Red
H = Image.new('L', (im.width, im.height), 0)
# Synthesize Saturation channel, same size as input image, filled with 255, to make fully saturated
S = Image.new('L', (im.width, im.height), 255)
# Recombine synthesized H, S and V (based on original image brightness) back into a recombined image
RGB = Image.merge('HSV', (H,S,V)).convert('RGB')
# Save processed result
RGB.save('result.png')
If you wanted to make it lime green, you would change the Hue angle like this:
# Synthesize Hue channel, same size as input image, filled with 120, to make Lime Green
H = Image.new('L', (im.width, im.height), 120)
If you wanted to make it less saturated, you would change the saturation like this:
# Synthesize Saturation channel, same size as input image, filled with 64, to make less saturated
S = Image.new('L', (im.width, im.height), 64)

How to automatically detect a specific feature from one image and map it to another mask image? Then how to smooth only the corners of the image?

Using the dlib library I was able to mask the mouth feature from one image (masked).
masked
Similarly, I have another cropped image of the mouth that does not have the mask (colorlip).
colorlip
I scaled and replaced the images (replaced) using np.where, as shown in the code below.
replaced
import cv2
import numpy as np
# Get the values of the lip and the target mask
lip = pred_toblackscreen[bbox_lip[0]:bbox_lip[1], bbox_lip[2]:bbox_lip[3],:]
target = roi[bbox_mask[0]:bbox_mask[1], bbox_mask[2]:bbox_mask[3],:]
cv2.namedWindow('masked', cv2.WINDOW_NORMAL)
cv2.imshow('masked', target)
#Resize the lip to be the same scale/shape as the mask
lip_h, lip_w, _ = lip.shape
target_h, target_w, _ = target.shape
fy = target_h / lip_h
fx = target_w / lip_w
scaled_lip = cv2.resize(lip,(0,0),fx=fx,fy=fy)
cv2.namedWindow('colorlip', cv2.WINDOW_NORMAL)
cv2.imshow('colorlip', scaled_lip)
update = np.where(target==[0,0,0],scaled_lip,target)
cv2.namedWindow('replaced', cv2.WINDOW_NORMAL)
cv2.imshow('replaced', update)
But the feature shape (lip) in 'colorlip' does not match the 'masked' image, so there is a misalignment, and the edges of the mask look sharp, as if the image has been overlaid. How can I solve this problem? And how can I make the final replaced image look more subtle and natural?
Update #2: OpenCV image inpainting to smooth jagged borders.
OpenCV's Python inpainting should help with rough borders. The border location can be found using the mouth landmark model, a mouth segmentation mask from a DL model, or whatever was used originally. From that, draw a border of a small chosen width around the mouth contour in a new image and use it as the mask for inpainting. The masks I provided need to be inverted to work.
Of the input masks, one is wider, one has a shadow, and the last is narrow. The six output images are generated with radius values of 5 and 20 for all three masks.
Code
import numpy as np
import cv2
img = cv2.imread('images/lip_img.png')
#mask = cv2.imread('images/lip_img_border_mask.png',0)
mask = cv2.imread('images/lip_img_border_mask2.png',0)
#mask = cv2.imread('images/lip_img_border_mask3.png',0)
mask = np.invert(mask)
# Choose appropriate method and radius.
radius = 20
dst = cv2.inpaint(img, mask, radius, cv2.INPAINT_TELEA)
# dst = cv2.inpaint(img, mask, radius, cv2.INPAINT_NS)
cv2.imwrite('images/inpainted_lip.jpg', dst)
cv2.imshow('dst',dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
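For drawing the border mask itself, a rough sketch; mouth_contour (an Nx1x2 array of boundary points from your landmark or segmentation step) and the border width are assumptions, not from the post:
import cv2
import numpy as np

# Draw a thin white border along the mouth contour on a black canvas;
# this becomes the inpainting mask (mouth_contour is assumed to exist)
border_mask = np.zeros(img.shape[:2], dtype=np.uint8)
cv2.drawContours(border_mask, [mouth_contour], -1, 255, thickness=5)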
Input Image and Masks
Output Images
Update #1: Added deep image harmonization based blending methods.
Try OpenCV seamless cloning for subtle replacement and to get rid of sharp edges. Deep-learning-based image inpainting on sharp corners, or combining it with seamless cloning, may also provide better results.
Deep-learning-based image harmonization can be another approach to blend two images so that the cropped part matches the style of the background image. Even in this case the pixel intensities will change to match the background, but the blending will be smoother. Links are added at the bottom of the post.
Example
This code example is based on the LearnOpenCV seamless cloning example:
import cv2
import numpy as np
src = cv2.imread("images/src_img.jpg")
dst = cv2.imread("images/dest_img.jpg")
src_mask = cv2.imread("images/src_img_rough_mask.jpg")
src_mask = np.invert(src_mask)
cv2.namedWindow('src_mask', cv2.WINDOW_NORMAL)
cv2.imshow('src_mask', src_mask)
cv2.waitKey(0)
# Where to place image.
center = (500,500)
# Clone seamlessly.
output = cv2.seamlessClone(src, dst, src_mask, center, cv2.NORMAL_CLONE)
# Write result
cv2.imwrite("images/opencv-seamless-cloning-example.jpg", output)
cv2.namedWindow('output', cv2.WINDOW_NORMAL)
cv2.imshow('output', output)
cv2.waitKey(0)
Source Image
Rough Mask Image
Destination Image
Final Image
Reference
https://docs.opencv.org/4.5.4/df/da0/group__photo__clone.html
https://learnopencv.com/seamless-cloning-using-opencv-python-cpp/
https://learnopencv.com/face-swap-using-opencv-c-python/
https://github.com/JiahuiYu/generative_inpainting
https://docs.opencv.org/4.x/df/d3d/tutorial_py_inpainting.html
Deep Image Harmonization
https://github.com/bcmi/Image-Harmonization-Dataset-iHarmony4
https://github.com/wasidennis/DeepHarmonization
https://github.com/saic-vul/image_harmonization
https://github.com/wuhuikai/GP-GAN
https://github.com/junleen/RainNet
https://github.com/bcmi/BargainNet-Image-Harmonization
https://github.com/vinthony/s2am

Edit image pixel by pixel - Python

I have two images, one overlay and one background.
I want to create a new image by editing the overlay image, manipulating it to show only the pixels where the background image is blue.
I don't want to add the background; it is only for selecting the pixels.
The rest should be transparent.
Any hints or ideas, please? PS: I edited the result image with Paint, so it's not perfect.
Image 1 is background image.
Image 2 is overlay image.
Image 3 is the check I want to perform. (to find out which pixels have blue in background and making remaining pixels transparent)
Image 4 is the result image after editing.
I renamed your images according to my way of thinking, so I took this as image.png:
and this as mask.png:
Then I did what I think you want as follows. I wrote it quite verbosely so you can see all the steps along the way:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Open input images
image = Image.open("image.png")
mask = Image.open("mask.png")
# Get dimensions (PIL's size is width, height)
w, h = image.size
# Resize mask to match image, taking care not to introduce new colours (Image.NEAREST)
mask = mask.resize((w, h), Image.NEAREST)
mask.save('mask_resized.png')
# Convert both images to numpy equivalents
npimage = np.array(image)
npmask = np.array(mask)
# Make image transparent where mask is not blue
# Blue pixels in mask seem to show up as RGB(163 204 255)
npimage[:,:,3] = np.where((npmask[:,:,0]<170) & (npmask[:,:,1]<210) & (npmask[:,:,2]>250),255,0).astype(np.uint8)
# Identify grey pixels in image, i.e. R=G=B, and make transparent also
RequalsG=np.where(npimage[:,:,0]==npimage[:,:,1],1,0)
RequalsB=np.where(npimage[:,:,0]==npimage[:,:,2],1,0)
grey=(RequalsG*RequalsB).astype(np.uint8)
npimage[:,:,3] *= 1-grey
# Convert numpy image to PIL image and save
PILrgba=Image.fromarray(npimage)
PILrgba.save("result.png")
And this is the result:
Notes:
a) Your image already has an (unused) alpha channel present.
b) Any lines starting:
npimage[:,:,3] = ...
are just modifying the 4th channel, i.e. the alpha/transparency channel of the image
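For example, a hypothetical one-liner that would make the whole image 50% opaque:
npimage[:, :, 3] = 128  # 0 = fully transparent, 255 = fully opaque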

Fill in a hollow shape using Python and pillow (PIL)

I am trying to write a method that will fill in a given shape so that it becomes solid black.
Example:
This octagon, which initially is only an outline, will turn into a solid black octagon; however, this should work with any shape as long as all edges are closed.
Octagon
from PIL import ImageChops, ImageOps

def img_filled(im_1, im_2):
    img_fill_neg = ImageChops.subtract(im_1, im_2)
    img_fill = ImageOps.invert(img_fill_neg)
    img_fill.show()
I have read the docs 10x over and have found several other ways to manipulate the image; however, I cannot find an example of filling in a pre-existing shape within the image. I see that using floodfill() is an option, although I'm not sure how to grab the shape I want to fill.
Note: I do not have access to any other image processing libraries for this task.
There are several ways of doing this. You could do as I do here, and fill all the areas outside the outline with magenta, then make everything that is not magenta into black, and then revert all artificially magenta-coloured pixels to white.
I have interspersed intermediate images in the code, but you can just grab all the bits of code and collect them together in order to have a working lump of code.
#!/usr/bin/env python3
from PIL import Image, ImageDraw
import numpy as np
# Open the image
im = Image.open('octagon.png').convert('RGB')
# Make all background (exterior to octagon) pixels magenta (255,0,255)
ImageDraw.floodfill(im,xy=(0,0),value=(255,0,255),thresh=200)
# DEBUG
im.save('intermediate.png')
# Make everything not magenta black
n = np.array(im)
n[(n[:, :, 0:3] != [255,0,255]).any(2)] = [0,0,0]
# Revert all artifically filled magenta pixels to white
n[(n[:, :, 0:3] == [255,0,255]).all(2)] = [255,255,255]
Image.fromarray(n).save('result.png')
Or, you could fill all the background with magenta, then locate a white pixel and flood-fill with black using that white pixel as a seed. The method you choose depends on the expected colours of your images, and the degree to which you wish to preserve anti-aliasing and so on.
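A minimal sketch of that second approach, assuming the shape interior is pure white and re-using octagon.png from above:
#!/usr/bin/env python3
from PIL import Image, ImageDraw
import numpy as np

im = Image.open('octagon.png').convert('RGB')
# Fill the exterior with magenta so only the outline and white interior remain
ImageDraw.floodfill(im, xy=(0, 0), value=(255, 0, 255), thresh=200)

# Locate any remaining white pixel and use it as the seed for a black fill
n = np.array(im)
ys, xs = np.where((n == [255, 255, 255]).all(axis=2))
ImageDraw.floodfill(im, xy=(int(xs[0]), int(ys[0])), value=(0, 0, 0), thresh=200)

# Revert the artificial magenta background to white and save
n = np.array(im)
n[(n == [255, 0, 255]).all(axis=2)] = [255, 255, 255]
Image.fromarray(n).save('result.png')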
