Image processing and for loops [duplicate] - python

How would I take an RGB image in Python and convert it to black and white? Not grayscale, I want each pixel to be either fully black (0, 0, 0) or fully white (255, 255, 255).
Is there any built-in functionality for this in the popular Python image-processing libraries? If not, would the best way be to just loop through each pixel and set it to white if it's closer to white, or to black if it's closer to black?
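For reference, the per-pixel loop I have in mind would be something like this (just a sketch; the filenames are placeholders):
from PIL import Image

img = Image.open("input.png").convert("RGB")
pixels = img.load()
for y in range(img.height):
    for x in range(img.width):
        r, g, b = pixels[x, y]
        # compare the average brightness to the midpoint of 0..255
        if (r + g + b) / 3 < 128:
            pixels[x, y] = (0, 0, 0)        # closer to black
        else:
            pixels[x, y] = (255, 255, 255)  # closer to white
img.save("output.png")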

Scaling to Black and White
Convert to grayscale and then scale to white or black (whichever is closest).
Original:
Result:
Pure Pillow implementation
Install pillow if you haven't already:
$ pip install pillow
Pillow (or PIL) can help you work with images effectively.
from PIL import Image
col = Image.open("cat-tied-icon.png")
gray = col.convert('L')
bw = gray.point(lambda x: 0 if x<128 else 255, '1')
bw.save("result_bw.png")
Alternatively, you can use Pillow with numpy.
Pillow + Numpy Bitmasks Approach
You'll need to install numpy:
$ pip install numpy
np.asarray gives a read-only view of the image data, so we take a copy to operate on, but the result is the same.
from PIL import Image
import numpy as np
col = Image.open("cat-tied-icon.png")
gray = col.convert('L')
# Let numpy do the heavy lifting for converting pixels to pure black or white
bw = np.asarray(gray).copy()
# Pixel range is 0...255, 256/2 = 128
bw[bw < 128] = 0 # Black
bw[bw >= 128] = 255 # White
# Now we put it back in Pillow/PIL land
imfile = Image.fromarray(bw)
imfile.save("result_bw.png")
Black and White using Pillow, with dithering
Using Pillow you can convert it directly to black and white. It will look like it has shades of grey, but your brain is tricking you! (Black and white pixels near each other look like grey.)
from PIL import Image
image_file = Image.open("cat-tied-icon.png") # open colour image
image_file = image_file.convert('1') # convert image to black and white
image_file.save('/tmp/result.png')
Original:
Converted:
Black and White using Pillow, without dithering
from PIL import Image
image_file = Image.open("cat-tied-icon.png") # open color image
image_file = image_file.convert('1', dither=Image.NONE) # convert image to black and white
image_file.save('/tmp/result.png')

I would suggest converting to grayscale, then simply applying a threshold (halfway, or the mean or median, if you so choose) to it.
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
col = Image.open('myimage.jpg')
gry = col.convert('L')
grarray = np.asarray(gry)
bw = (grarray > grarray.mean()) * 255
plt.imshow(bw, cmap='gray')
plt.show()
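If you'd rather threshold at the median or at the fixed halfway point instead of the mean, only the threshold value changes, for example:
bw_median = (grarray > np.median(grarray)) * 255
bw_halfway = (grarray > 127) * 255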

Using OpenCV, you can convert to grayscale and apply Otsu's threshold:
import cv2
img_rgb = cv2.imread('image.jpg')
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
(threshi, img_bw) = cv2.threshold(img_gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
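To keep the result, you can write it back out with cv2.imwrite (the output filename here is just an example):
cv2.imwrite('image_bw.jpg', img_bw)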


You can also use colorsys (in the standard library) to convert RGB to HLS and use the lightness value to decide between black and white:
import colorsys
# convert RGB values from 0-255 to the 0-1 range
r = 120 / 255.0
g = 29 / 255.0
b = 200 / 255.0
h, l, s = colorsys.rgb_to_hls(r, g, b)
if l >= .5:
    # color is lighter
    result_rgb = (255, 255, 255)
else:
    # color is darker
    result_rgb = (0, 0, 0)
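To apply this to a whole image you could run the same test per pixel with Pillow; a minimal sketch (the filenames are placeholders):
from PIL import Image
import colorsys

img = Image.open("input.png").convert("RGB")
px = img.load()
for y in range(img.height):
    for x in range(img.width):
        r, g, b = (v / 255.0 for v in px[x, y])
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        # lighter pixels become white, darker ones become black
        px[x, y] = (255, 255, 255) if l >= 0.5 else (0, 0, 0)
img.save("output.png")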

Using OpenCV you can easily convert an RGB image to a binary image.
import cv2
%matplotlib inline
import matplotlib.pyplot as plt
from skimage import io
import numpy as np
img = io.imread('http://www.bogotobogo.com/Matlab/images/MATLAB_DEMO_IMAGES/football.jpg')
print(img.shape)
# skimage reads the image as RGB; take only the red channel as a rough single-channel image
imR = img[:,:,0]
# Gray image
plt.imshow(imR, cmap=plt.get_cmap('gray'))
plt.title('my picture')
plt.show()
# Histogram analysis
imgg = imR.copy()
hist = cv2.calcHist([imgg],[0],None,[256],[0,256])
plt.hist(imgg.ravel(),256,[0,256])
# show the histogram of the image
plt.show()
# Black and white: threshold every pixel against 60
height, width = imgg.shape
for i in range(0, height):
    for j in range(0, width):
        if imgg[i][j] > 60:
            imgg[i][j] = 255
        else:
            imgg[i][j] = 0
plt.imshow(imgg, cmap=plt.get_cmap('gray'))
plt.show()

Here is the code for creating a binary image using opencv-python:
import cv2
img = cv2.imread('in.jpg', cv2.IMREAD_GRAYSCALE) # load directly as a single-channel grayscale image
ret, bw_img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
cv2.imshow("Output - Binary Image", bw_img)
cv2.waitKey(0)

If you don't want to use cv methods for the segmentation and want to understand what you are doing, treat the RGB image as a matrix.
import matplotlib.image as mpimg
import numpy as np
image = mpimg.imread('image_example.png') # your image
# matplotlib reads PNGs as floats in [0, 1]; rescale to 0-255 so the thresholds below make sense
if image.dtype != np.uint8:
    image = (image * 255).astype(np.uint8)
R, G, B = image[:,:,0], image[:,:,1], image[:,:,2] # the 3 RGB channels
thresh = [100, 200, 50] # example of a triple threshold
# First, create an array of 0's as the default value
binary_output = np.zeros_like(R)
# Then screen all pixels and change the array based on the RGB thresholds
binary_output[(R < thresh[0]) & (G > thresh[1]) & (B < thresh[2])] = 255
The result is an array of 0's and 255's based on a triple condition.
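If you want to look at or save that mask, a quick sketch with matplotlib (the output name is just an example):
import matplotlib.pyplot as plt
plt.imshow(binary_output, cmap='gray')
plt.show()
plt.imsave('binary_output.png', binary_output, cmap='gray')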

Related

Why is openCV mask turning the whole image black?

The first image is the original, the second is the HSV image, and the third is the mask.
The yellowest color in the HSV image is between the boundaries set. Why is the whole image turning black?
import os
import numpy as np
import cv2
import imutils
directory = r"C:\\Users\\colin\\Documents\\projects\\dataset\\"
i = 0
for entry in os.scandir(directory):
    if entry.path.endswith(".png") and i == 0:
        img = cv2.imread(directory + str(entry.name))
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        i += 1
lower_range = np.array([27,132,156])
upper_range = np.array([33,138,162])
mask = cv2.inRange(hsv, lower_range, upper_range)
cv2.imwrite('C:\\Users\\colin\\Documents\\projects\\mask.png', mask)
It's all about your color ranges. The range you set (hue 27-33, saturation 132-138, value 156-162) is extremely narrow, so hardly any pixels fall inside it and the mask comes out black. You can either tweak the values manually, or take a look at an HSV color separator script; it can be very helpful.
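For example, a much looser range for yellow (these numbers are only an illustrative starting point, not tuned to your image) gives cv2.inRange far more pixels to match:
lower_range = np.array([20, 100, 100])
upper_range = np.array([40, 255, 255])
mask = cv2.inRange(hsv, lower_range, upper_range)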

Image editing using python

I have an image.
I want to erase its background and get a white or transparent background, preferably without the black dots. I am trying PIL in Python:
img1 = img.convert("L")
img1.show()
img1.save("im1.jpg")
and also
img2 = img.convert("1")
img2.show()
img1.save("im1.jpg")
This gives something like this:
Trying with matplotlib:
import matplotlib.pyplot as plt
plt.imshow(img1,cmap=plt.cm.binary)
plt.show()
plt.savefig("im1.jpg")
makes the background black, renders the original green as grey, and the original black dots as white.
Can you suggest what I can try?
I agree with @Piglet that it is not fun if we just tell you how to do it. To get you excited, here is something I got when I played around:
import cv2
import numpy as np
image = cv2.imread('captcha.jpg')
hsv=cv2.cvtColor(image,cv2.COLOR_BGR2HSV)
# Set minimum and maximum HSV values to display
hMin=0
sMin = 37
vMin = 0
hMax = 179
sMax = 255
vMax = 255
lower = np.array([hMin, sMin, vMin])
upper = np.array([hMax, sMax, vMax])
mask = cv2.inRange(hsv, lower, upper)
result = cv2.bitwise_and(image, image, mask=mask)
cv2.imshow('image', result)
while True:
    k = cv2.waitKey(10)
    if k == 27:  # Esc key closes the window
        cv2.destroyAllWindows()
        break
Hope this gets you going...

Remove background

I am doing OCR to extract information from the ID card. However, accuracy is quite low.
My assumption is that removing the background will make OCR more accurate.
I use the ID scanner machine (link) to obtain the grey image below. It seems that the machine uses IR instead of image processing.
Does anyone know how to get the same result using OpenCV or other tools (Photoshop, GIMP, etc.)?
Thanks in advance.
Here are two more methods: adaptive thresholding and division normalization.
Input:
import cv2
import numpy as np
# read image
img = cv2.imread("green_card.jpg")
# convert img to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# do adaptive threshold on gray image
thresh1 = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 51, 25)
# write results to disk
cv2.imwrite("green_card_thresh1.jpg", thresh1)
# apply morphology
kernel = cv2.getStructuringElement(cv2.MORPH_RECT , (11,11))
morph = cv2.morphologyEx(gray, cv2.MORPH_DILATE, kernel)
# divide gray by morphology image
division = cv2.divide(gray, morph, scale=255)
# threshold
thresh2 = cv2.threshold(division, 0, 255, cv2.THRESH_OTSU )[1]
# write results to disk
cv2.imwrite("green_card_thresh2.jpg", thresh2)
# display it
cv2.imshow("thresh1", thresh1)
cv2.imshow("thresh2", thresh2)
cv2.waitKey(0)
Adaptive Thresholding Result:
Division Normalization Result:
EDIT:
Since there are different lighting conditions, contrast adjustment is added here.
The simple approach in my mind to solve your issue is this: since the undesired background colours are green and red, and the desired font colour is black, simply suppress the red and green colours as follows:
import numpy as np
import matplotlib.pyplot as plt
from skimage.io import imread, imsave
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
from skimage import exposure
def adjustContrast(img):
    p2, p98 = np.percentile(img, (2, 98))
    img_rescale = exposure.rescale_intensity(img, in_range=(p2, p98))
    return img_rescale
# Read the image
img = imread('ID_OCR.jpg')
# Contrast Adjustment for each channel
img[:,:,0] = adjustContrast(img[:,:,0]) # R
img[:,:,1] = adjustContrast(img[:,:,1]) # G
img[:,:,2] = adjustContrast(img[:,:,2]) # B
# Suppress unwanted colors
img[img[...,0] > 100] = 255 # R
img[img[...,1] > 100] = 255 # G
# Convert the image to graylevel
img = rgb2gray(img)
# Rescale into 0-255
img = (255 * img).astype(np.uint8)
# Save the results
imsave('Result.png', img)
The image will look like this:
The results are not optimal, partly because your image resolution isn't high.
In the end there are many possible solutions and improvements; you can also use morphology to make the result look nicer (see the sketch below). This is just a simple proposal to solve the problem.
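As a pointer in that direction, here is a minimal morphology sketch with scikit-image (it assumes the uint8 grayscale img produced above; the structuring-element size is just a guess):
import numpy as np
from skimage.io import imsave
from skimage.filters import threshold_otsu
from skimage.morphology import opening, closing, disk

binary = img > threshold_otsu(img)                    # binarize the grayscale result from above
cleaned = closing(opening(binary, disk(1)), disk(1))  # remove speckles, then close small gaps
imsave('Result_morph.png', (255 * cleaned).astype(np.uint8))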

How to blur an ellipse with ImageDraw

I implemented an algorithm that detects faces and I want to blur faces. I'm using PIL for blurring it.
image = Image.open(path_img)
draw = ImageDraw.Draw(image)
draw.ellipse((top, left, bottom, right), fill = 'white', outline ='white')
I got this with my code
Face to blur
I would like to use :
blurred_image = cropped_image.filter(ImageFilter.GaussianBlur(radius=10 ))
But I can't use it because I'm using ImageDraw, and it works only with the Image class. How can I blur just the face, inside an ellipse (circle)? Is blurring the ellipse the best way?
Thank you
With Pillow, the best way to do this is to use the blurred ellipse as a blending mask for composite.
from PIL import Image, ImageDraw, ImageFilter
def make_ellipse_mask(size, x0, y0, x1, y1, blur_radius):
    img = Image.new("L", size, color=0)
    draw = ImageDraw.Draw(img)
    draw.ellipse((x0, y0, x1, y1), fill=255)
    return img.filter(ImageFilter.GaussianBlur(radius=blur_radius))
kitten_image = Image.open("kitten.jpg")
overlay_image = Image.new("RGB", kitten_image.size, color="orange") # This could be a bitmap fill too, but let's just make it orange
mask_image = make_ellipse_mask(kitten_image.size, 150, 70, 350, 250, 5)
masked_image = Image.composite(overlay_image, kitten_image, mask_image)
masked_image.show()
Given this adorable kitten as input, the output is
EDIT: Inspired by Mark Setchell's answer, simply changing the overlay_image line to
overlay_image = kitten_image.filter(ImageFilter.GaussianBlur(radius=15))
gives us this blur variant (with smooth edges for the blur :) )
Not sure if you want to composite something over the image to conceal the contents, or blur it. This is more blurry :-)
Starting with Paddington:
You can go to "Stealth Mode" like this:
#!/usr/bin/env python3
from PIL import Image, ImageDraw, ImageFilter
import numpy as np
# Open image
im = Image.open('paddington.png')
# Make a mask the same size as the image filled with black
mask = Image.new('RGB',im.size)
# Draw a filled white circle onto the black mask
draw = ImageDraw.Draw(mask)
draw.ellipse([90,40,300,250],fill=(255,255,255))
# Blur the entire image
blurred = im.filter(ImageFilter.GaussianBlur(radius=15))
# Select either the original or the blurred image at each pixel, depending on the mask
res = np.where(np.array(mask)>0,np.array(blurred),np.array(im))
# Convert back to PIL Image and save
Image.fromarray(res).save('result.png')
Or, as suggested by @AKX, you can remove the NumPy dependency and make the code a bit smaller, and still get the same result:
#!/usr/bin/env python3
from PIL import Image, ImageDraw, ImageFilter
# Open image
im = Image.open('paddington.png')
# Make a mask the same size as the image filled with black
mask = Image.new('L',im.size)
# Draw a filled white circle onto the black mask
draw = ImageDraw.Draw(mask)
draw.ellipse([90,40,300,250],fill=255)
# Blur the entire image
blurred = im.filter(ImageFilter.GaussianBlur(radius=15))
# Composite blurred image over sharp one within mask
res = Image.composite(blurred, im, mask)
# Save
res.save('result.png')

remove foreground from segmented image

I have trained a model to produce a segmentation of the image, and the output image looks like this.
The original image is like this.
I have tried using OpenCV to subtract the two images:
import cv2
image1 = cv2.imread("cristiano-ronaldo.jpg")
image2 = cv2.imread("cristiano-ronaldo_seg.png")
image3 = cv2.absdiff(image1, image2)
But the output is not what I need. I would like to have Cristiano on a white background. How can I achieve that?
Explanation:
As your files already have the right shapes (BGR and A), it is very easy to accomplish what you are trying to do. Here are the steps:
1) Load the original image as BGR (in OpenCV the channel order is reversed RGB)
2) Load the "mask" image as a single channel A
3) Merge the original image's BGR channels with your mask image as the A (alpha) channel
Code:
import numpy as np
import cv2
img1 = cv2.imread('ronaldo.png',3) # read as a colour (BGR) image
img2 = cv2.imread('ronaldoMask.png',0) # read as a single channel, used as the alpha mask
kernel = np.ones((2,2), np.uint8) # create a kernel for the erosion
img2 = cv2.erode(img2, kernel, iterations=2) # erode the mask using the kernel
height, width, depth = img1.shape
combinedImage = cv2.merge((img1, img2))
cv2.imwrite('ronaldocombine.png',combinedImage)
Output:
After reading the segmented image, convert it to grayscale, then threshold it to get a foreground mask and a background mask. Then use cv2.bitwise_and to "crop" the foreground or background as you want.
#!/usr/bin/python3
# 2017.11.26 09:56:40 CST
# 2017.11.26 10:11:40 CST
import cv2
import numpy as np
## read
img = cv2.imread("img.jpg")
seg = cv2.imread("seg.png")
## create fg/bg mask
seg_gray = cv2.cvtColor(seg, cv2.COLOR_BGR2GRAY)
_,fg_mask = cv2.threshold(seg_gray, 0, 255, cv2.THRESH_BINARY|cv2.THRESH_OTSU)
_,bg_mask = cv2.threshold(seg_gray, 0, 255, cv2.THRESH_BINARY_INV|cv2.THRESH_OTSU)
## convert mask to 3-channels
fg_mask = cv2.cvtColor(fg_mask, cv2.COLOR_GRAY2BGR)
bg_mask = cv2.cvtColor(bg_mask, cv2.COLOR_GRAY2BGR)
## cv2.bitwise_and to extract the region
fg = cv2.bitwise_and(img, fg_mask)
bg = cv2.bitwise_and(img, bg_mask)
## save
cv2.imwrite("fg.png", fg)
cv2.imwrite("bg.png", bg)
