I'm using the term "vectorize" because that is what has been used to describe the process I'm writing about. I don't know what it's actually called, but what I'm trying to do is take the elements of an image and separate them into different images.
Here is an example picture I'm trying to "vectorize":
What I would like to do is (using OpenCV) separate the corncob from the green stalk it's attached to, and separate each corncob chunk into its own image.
What I have tried is the following:
import cv2
import numpy as np
import math
import random
from tqdm import tqdm

def kmeansSegmentation(path_to_images, image_name, path_to_save_segments):
    img = cv2.imread(path_to_images + image_name)
    img_blur = cv2.GaussianBlur(img, (3, 3), 0)
    img_gray = cv2.cvtColor(img_blur, cv2.COLOR_BGR2GRAY)
    img_reshaped = img_gray.reshape((-1, 3))
    img_reshaped = np.float32(img_reshaped)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    K = 5
    attempts = 10
    ret, label, center = cv2.kmeans(img_reshaped, K, None, criteria, attempts, cv2.KMEANS_PP_CENTERS)
    center = np.uint8(center)
    res = center[label.flatten()]

    v = np.median(res)
    sigma = 0.33
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    edges = cv2.Canny(img_gray, lower, upper)

    contours, hierarchy = cv2.findContours(edges.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    sorted_contours = sorted(contours, key=cv2.contourArea, reverse=True)

    mask = np.zeros(img.shape[:2], dtype=img.dtype)
    array_of_contour_areas = [cv2.contourArea(contour) for contour in contours]
    contour_avg = sum(array_of_contour_areas) / len(array_of_contour_areas)
    contour_var = sum(pow(x - contour_avg, 2) for x in array_of_contour_areas) / len(array_of_contour_areas)
    contour_std = math.sqrt(contour_var)

    print("Saving segments", len(sorted_contours))
    for (i, c) in tqdm(enumerate(sorted_contours)):
        if cv2.contourArea(c) > contour_avg - contour_std * 2:
            x, y, w, h = cv2.boundingRect(c)
            cropped_contour = img[y:y+h, x:x+w]
            cv2.drawContours(mask, [c], 0, (255), -1)
            #tmp_image_name = image_name + "-kmeans-" + str(K) + str(random.random()) + ".jpg"
            #cv2.imwrite(path_to_save_segments + tmp_image_name, cropped_contour)

    result = cv2.bitwise_and(img, img, mask=mask)
    """
    scale_percent = 30 # percent of original size
    width = int(edges.shape[1] * scale_percent / 100)
    height = int(edges.shape[0] * scale_percent / 100)
    dim = (width, height)
    resized = cv2.resize(result, dim, interpolation = cv2.INTER_AREA)
    cv2.imshow("edges", resized)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    """
    #tmp_image_name = image_name + "-kmeans-" + str(K) + str(random.random()) + ".png"
    #cv2.imwrite(path_to_save_segments + tmp_image_name, result)
    return result
Excuse the commented out code; that's me just observing the changes I make to the image as I modify the algorithm.
I believe what you are trying to do is something called "Instance Segmentation". This process is best done with deep learning techniques which might not suit you unless you can find a pre-trained model. Here is an article on how you can do that: https://www.analyticsvidhya.com/blog/2019/04/introduction-image-segmentation-techniques-python/
A simpler (but far less accurate) solution might be to use RGB threshold values to create an "outline" of the picture and then use a flood fill algorithm to discern specific pixels in the resulting image. To make a rough outline of the image, you'll have to experiment with different threshold values. First, convert the whole image to grayscale with OpenCV (cv2.imread('image-name', 0)). To test different threshold values, simply check whether each pixel value is above or below a value you pick:
import numpy as np
import cv2
import matplotlib.pyplot as plt

def apply_threshold(img_array, threshold):
    threshold_applied = []
    for row in img_array:
        threshold_applied.append([])
        for pixel in row:
            if pixel > threshold:
                threshold_applied[len(threshold_applied)-1].append([255, 255, 255])
            else:
                threshold_applied[len(threshold_applied)-1].append([0, 0, 0])
    new_img = np.array(threshold_applied, np.uint8)
    cv2.imwrite(str(threshold) + ".jpg", new_img)

img = cv2.imread("Ek1Dx.jpg", 0)
apply_threshold(img, 210)
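As an aside, the same per-pixel check can be done in a single vectorized call with cv2.threshold; this is just an equivalent sketch of the loop above (same input file and threshold value), which will run much faster on large images:

import cv2

# Equivalent to apply_threshold(img, 210): pixels strictly above the
# threshold become 255, everything else becomes 0 (single-channel output).
img = cv2.imread("Ek1Dx.jpg", 0)
_, thresholded = cv2.threshold(img, 210, 255, cv2.THRESH_BINARY)
cv2.imwrite("210.jpg", thresholded)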
If you run the code, you can see that the output is extremely inaccurate and that the cob and the kernels of corn are barely visible. Instance segmentation and dividing images into objects have actually been major computer vision problems for the past decade or so. There may be other filters you can use to get more accurate results, but I think thresholding is a baseline image segmentation technique. If you want, you can try tweaking the threshold value in the program to see if it gives you a more cleanly divided image.
I am trying to build a script capable of counting how many Euros (for now just coins) are in a picture. To accomplish this, I am thinking of first locating the coins and then comparing their relative sizes to determine the value of each one, as I've seen done elsewhere. My difficulty lies in the first step: preprocessing the image.
Note that this problem arises only when the contrast between the background and certain coins is very low.
I've tried various preprocessing steps combined with different detection methods such as connectedComponentsWithStats(), findContours(), and SimpleBlobDetector, but the most successful combination I've achieved is:
import numpy as np
import cv2
import os

path = 'GenericImages/TP2/'
path_coins_highlighted = 'GenericImages/Highlights'
path_gaussian_blurs = 'GenericImages/Gaussian_Blurs'

dirs = os.listdir(path)

i = 0
for file in dirs:
    path2img = os.path.join(path, file)
    img = cv2.imread(path2img)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # clahe = cv2.createCLAHE(clipLimit=40, tileGridSize=(8, 8))
    # equalized = clahe.apply(gray)

    gray_blur = cv2.GaussianBlur(gray, (15, 15), 0)
    # gray_blur = cv2.bilateralFilter(gray, 9, 65, 9)

    circles = cv2.HoughCircles(gray_blur, cv2.HOUGH_GRADIENT, 1, 15, param1=50, param2=30, minRadius=0, maxRadius=0)
    circles = np.uint16(np.around(circles))

    for x in circles[0, :]:
        cv2.circle(img, (x[0], x[1]), x[2], (0, 255, 0), 2)
        cv2.circle(img, (x[0], x[1]), 2, (0, 0, 255), 3)

    cv2.imshow('Gray', gray)
    cv2.imshow('Gaussian Blur', gray_blur)
    path_save_gaussian_blur = os.path.join(path_gaussian_blurs, str(i) + '_gaussian_blur.jpg')
    cv2.imwrite(path_save_gaussian_blur, gray_blur)

    # cv2.imshow('equalized', equalized)

    cv2.imshow('Highlights', img)
    path_save_highlights = os.path.join(path_coins_highlighted, str(i) + '_highlight.jpg')
    cv2.imwrite(path_save_highlights, img)

    i += 1
    cv2.waitKey(0)
The problem lies in the consistency of the detection. I believe that when it fails, it is because there is little to no contrast between the background and the coins, so HoughCircles does not detect them. The sets of images below show the cases in which the algorithm fails.
SET 0:
SET 1:
I've tried tweaking with equalization and a bilateral filter with different parameters in order to remove noise but keep the transition zones (contours of the coin) but I haven't found significant improvements.
I would appreciate some direction or ideas of what I should be looking for to solve this issue.
The lighting is non-uniform and your images are small and heavily compressed. These are the two factors that hinder a good detection. It might be difficult to control the lighting, but at least make sure you use a lossless image format (such as PNG) to avoid compression artifacts.
Anyway, the non-uniform lighting makes this a good case for a lighting normalization method called gain division. The idea is to build a model of the background and then weight each input pixel by that model. The output gain should be relatively constant across most of the image. This is very useful because, once we eliminate the non-uniform lighting, we can create a foreground mask for the coins and then simply fit circles to the coins' contours.
Let's give it a try:
# imports:
import cv2
import numpy as np
# image path
path = "D://opencvImages//"
fileName = "FHlbm.jpg"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Deep copy for results:
inputImageCopy = inputImage.copy()
# Get local maximum:
kernelSize = 30
maxKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
localMax = cv2.morphologyEx(inputImage, cv2.MORPH_CLOSE, maxKernel, None, None, 1, cv2.BORDER_REFLECT101)
# Perform gain division
gainDivision = np.where(localMax == 0, 0, (inputImage / localMax))
# Clip the values to [0,255]
gainDivision = np.clip((255 * gainDivision), 0, 255)
# Convert the mat type from float to uint8:
gainDivision = gainDivision.astype("uint8")
cv2.imshow("Gain Division", gainDivision)
cv2.waitKey(0)
Which yields:
This is the result of applying gain division to the first image. Note that now the background is almost uniform. This is excellent, because we can apply a simple auto threshold to create a binary mask containing just the foreground objects, like this:
# Convert RGB to grayscale:
grayscaleImage = cv2.cvtColor(gainDivision, cv2.COLOR_BGR2GRAY)
# Get binary image via Otsu:
_, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
This is the binary image:
Now, we have a problem here. The compression artifacts make this mask noisy. We could apply a little bit of morphology to improve the binary blobs, but your image is really small, so I have skipped this step. If you have access to larger, lossless images, you might want to include a cleaning step.
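If you do get your hands on larger, lossless images, a minimal sketch of that cleaning step could look like the following (this is an assumption on my part, not part of the original pipeline, and the kernel size would need tuning):

# Hypothetical cleanup: an opening removes small noise specks, a closing
# fills small holes inside the coin blobs.
cleanKernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
binaryImage = cv2.morphologyEx(binaryImage, cv2.MORPH_OPEN, cleanKernel, iterations=1)
binaryImage = cv2.morphologyEx(binaryImage, cv2.MORPH_CLOSE, cleanKernel, iterations=2)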
For now I'll simply try to compute the Minimum Enclosing Circle of each blob larger than a threshold, and I should get a detection a little bit more robust than Hough's. Let's see:
# Find the circle blobs on the binary mask:
contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contoursPoly = [None] * len(contours)

# Store the circles here:
detectedCircles = []

# Alright, just look for the outer bounding boxes:
for i, c in enumerate(contours):
    # Get blob area:
    blobArea = cv2.contourArea(c)
    print(blobArea)

    # Set min area:
    minArea = 100

    # Process only big blobs:
    if blobArea > minArea:
        # Approximate the contour to a circle:
        (x, y), radius = cv2.minEnclosingCircle(c)

        # Compute the center and radius:
        center = (int(x), int(y))
        radius = int(radius)

        # Draw the circles:
        cv2.circle(inputImageCopy, center, radius, (0, 0, 255), 1)
        cv2.line(inputImageCopy, center, center, (0, 255, 0), 2)

        # Store the center and radius:
        detectedCircles.append([center, radius])

cv2.imshow("Circles", inputImageCopy)
cv2.waitKey(0)
Let's see the results drawn onto a deep copy of the original image:
Not bad. All of the circles' data (center and radius) is stored in the detectedCircles list. We can print the info like this:
# Check out the detected circles:
for i in range(len(detectedCircles)):
    center, r = detectedCircles[i]
    print("Circle #: " + str(i) + " x: " + str(center[0]) + " y: " + str(center[1]) + " r: " + str(r))
I'm working on a shooting simulator project where I have to detect bullet holes in images. I'm trying to differentiate two images so I can detect the new hole between them, but it's not working as expected. Between the two images, there are minor changes in the previous bullet holes because of slight camera movement between frames.
My first image is here
before.png
and the second one is here
after.png
I tried this code for checking differences
import cv2
import numpy as np
before = cv2.imread("before.png")
after = cv2.imread("after.png")
result = after - before
cv2.imwrite("result.png", result)
The result I'm getting in result.png is the image below:
result.png
but this is not what I expected. I only want to detect the new hole,
but the diff also includes some pixels from the previous image.
The result I'm expecting is
expected.png
Please help me figure it out so it can only detect big differences.
Thanks in advance.
Any new idea will be appreciated.
In order to find the differences between two images, you can utilize the Structural Similarity Index (SSIM) which was introduced in Image Quality Assessment: From Error Visibility to Structural Similarity. This method is already implemented in the scikit-image library for image processing. You can install scikit-image with pip install scikit-image.
Using the skimage.metrics.structural_similarity() function from scikit-image, it returns a score and a difference image, diff. The score represents the structural similarity index between the two input images and falls in the range [-1, 1], with values closer to one representing higher similarity. But since you're only interested in where the two images differ, the diff image is what you're looking for. The diff image contains the actual image differences between the two images.
Next, we find all contours using cv2.findContours() and filter for the largest contour. The largest contour should represent the newly detected difference, as slight differences should be smaller than the added bullet.
Here is the largest detected difference between the two images:
Here are the actual differences between the two images. Notice how all of the differences were captured, but since a new bullet is most likely the largest contour, we can filter out all the other slight movements between camera frames.
Note: this method works pretty well if we assume that the new bullet will have the largest contour in the diff image. If the newest hole were smaller, you might have to mask out the existing regions, and whatever contours remain in the new image would then be the new hole (assuming the image is a uniform black background with white holes); a rough sketch of that idea appears at the end of this answer.
from skimage.metrics import structural_similarity
import cv2

# Load images
image1 = cv2.imread('1.png')
image2 = cv2.imread('2.png')

# Convert to grayscale
image1_gray = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
image2_gray = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)

# Compute SSIM between the two images
(score, diff) = structural_similarity(image1_gray, image2_gray, full=True)

# The diff image contains the actual image differences between the two images
# and is represented as a floating point data type in the range [0,1],
# so we must convert the array to 8-bit unsigned integers in the range
# [0,255] before we can use it with OpenCV
diff = (diff * 255).astype("uint8")
print("Image Similarity: {:.4f}%".format(score * 100))

# Threshold the difference image, followed by finding contours to
# obtain the regions of the two input images that differ
thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]

contour_sizes = [(cv2.contourArea(contour), contour) for contour in contours]
result = image2.copy()

# The largest contour should be the new detected difference
if len(contour_sizes) > 0:
    largest_contour = max(contour_sizes, key=lambda x: x[0])[1]
    x, y, w, h = cv2.boundingRect(largest_contour)
    cv2.rectangle(result, (x, y), (x + w, y + h), (36, 255, 12), 2)

cv2.imshow('result', result)
cv2.imshow('diff', diff)
cv2.waitKey()
Here's another example with different input images. SSIM is pretty good for detecting differences between images.
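For completeness, here is one way the masking idea from the note above could be sketched, assuming white holes on a uniform black background: threshold both frames, dilate the before mask a little to absorb camera shift, and keep only what is new in the after frame. Treat this as an untested outline rather than a drop-in solution.

import cv2

# Hypothetical sketch of the "mask out the existing regions" alternative.
before_gray = cv2.cvtColor(cv2.imread('before.png'), cv2.COLOR_BGR2GRAY)
after_gray = cv2.cvtColor(cv2.imread('after.png'), cv2.COLOR_BGR2GRAY)

# Binarize both frames (white holes on a black background assumed).
_, before_mask = cv2.threshold(before_gray, 127, 255, cv2.THRESH_BINARY)
_, after_mask = cv2.threshold(after_gray, 127, 255, cv2.THRESH_BINARY)

# Grow the old holes slightly so small camera movement doesn't leak through.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
before_mask = cv2.dilate(before_mask, kernel, iterations=1)

# Anything white in "after" but not in the dilated "before" is a new hole.
new_holes = cv2.bitwise_and(after_mask, cv2.bitwise_not(before_mask))
cv2.imwrite('new_holes.png', new_holes)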
This is my approach: after we subtract one image from the other, some noise still remains, so I try to remove that noise. I divide the image into sections based on a percentage of its size and, for each small section, compare before and after, so that only significant chunks of white pixels remain. This algorithm lacks precision when there is occlusion, that is, whenever the new shot overlaps an existing one.
import cv2
import numpy as np

# This is the percentage of the width/height we're gonna cut
# 0.99 < percent < 0.1
percent = 0.01

before = cv2.imread("before.png")
after = cv2.imread("after.png")

result = after - before  # Here, we eliminate the biggest differences between before and after

h, w, _ = result.shape
hPercent = percent * h
wPercent = percent * w

def isBlack(crop):  # Function that tells if the crop is black
    mask = np.zeros(crop.shape, dtype=int)
    return not (np.bitwise_or(crop, mask)).any()

for wFrom in range(0, w, int(wPercent)):  # Here we are gonna remove that noise
    for hFrom in range(0, h, int(hPercent)):
        wTo = int(wFrom + wPercent)
        hTo = int(hFrom + hPercent)

        # NumPy indexes rows first, so the vertical (h) range goes in the
        # first axis and the horizontal (w) range in the second.
        crop = result[hFrom:hTo, wFrom:wTo]  # Crop the image
        if isBlack(crop):  # If it is black, there is no shot in it
            continue  # We don't need to continue with the algorithm
        beforeCrop = before[hFrom:hTo, wFrom:wTo]  # Crop the "before" image
        if not isBlack(beforeCrop):  # If the crop of "before" is not black, there was a shot there already
            result[hFrom:hTo, wFrom:wTo] = [0, 0, 0]  # So, we erase it from the result

cv2.imshow("result", result)
cv2.imshow("before", before)
cv2.imshow("after", after)
cv2.waitKey(0)
As you can see, it worked for the use case you provided. A good next step would be to keep an array of the positions of previous shots, so that you can handle the occlusion case where a new shot overlaps an existing one.
My code:
from skimage.measure import compare_ssim
import argparse
import imutils
import cv2
import numpy as np

# load the two input images
imageA = cv2.imread('./Input_1.png')
cv2.imwrite("./org.jpg", imageA)
# imageA = cv2.medianBlur(imageA,29)
imageB = cv2.imread('./Input_2.png')
cv2.imwrite("./test.jpg", imageB)
# imageB = cv2.medianBlur(imageB,29)

# convert the images to grayscale
grayA = cv2.cvtColor(imageA, cv2.COLOR_BGR2GRAY)
grayB = cv2.cvtColor(imageB, cv2.COLOR_BGR2GRAY)

##########################################################################################################
difference = cv2.subtract(grayA, grayB)
result = not np.any(difference)
if result is True:
    print("Pictures are the same")
else:
    cv2.imwrite("./open_cv_subtract.jpg", difference)
    print("Pictures are different, the difference is stored.")

##########################################################################################################
diff = cv2.absdiff(grayA, grayB)
cv2.imwrite("./tabsdiff.png", diff)

##########################################################################################################
grayB = cv2.resize(grayB, (grayA.shape[1], grayA.shape[0]))
(score, diff) = compare_ssim(grayA, grayB, full=True)
diff = (diff * 255).astype("uint8")
print("SSIM: {}".format(score))

#########################################################################################################
thresh = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
#s = imutils.grab_contours(cnts)
count = 0

# loop over the contours
for c in cnts:
    # images differ
    count = count + 1
    (x, y, w, h) = cv2.boundingRect(c)
    cv2.rectangle(imageA, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.rectangle(imageB, (x, y), (x + w, y + h), (0, 0, 255), 2)

##########################################################################################################
print(count)
cv2.imwrite("./original.jpg", imageA)
# cv2.imshow("Modified", imageB)
cv2.imwrite("./test_image.jpg", imageB)
cv2.imwrite("./compare_ssim.jpg", diff)
cv2.imwrite("./thresh.jpg", thresh)
cv2.waitKey(0)
Another approach, using ImageMagick's compare tool via subprocess:
import subprocess
# -fuzz 5% # ignore minor difference between two images
# -density 300
# miff:- | display
# -metric phash
# -highlight-color White # by default its RED
# -lowlight-color Black
# -compose difference # src
# -threshold 0
# -separate -evaluate-sequence Add
cmd = 'compare -highlight-color black -fuzz 5% -metric AE Input_1.png ./Input_2.png -compose src ./result.png x: '
a = subprocess.call(cmd, shell=True)
The code above shows various image comparison approaches for finding image differences using OpenCV, ImageMagick, NumPy, scikit-image, etc.
Hope you find this helpful.
I'm trying to crop an object from an image, and paste it on another image. Examining the method in this answer, I've successfully managed to do that. For example:
The code (show_mask_applied.py):
import sys
from pathlib import Path
from helpers_cv2 import *
import cv2
import numpy
img_path = Path(sys.argv[1])
img = cmyk_to_bgr(str(img_path))
threshed = threshold(img, 240, type=cv2.THRESH_BINARY_INV)
contours = find_contours(threshed)
mask = mask_from_contours(img, contours)
mask = dilate_mask(mask, 50)
crop = cv2.bitwise_or(img, img, mask=mask)
bg = cv2.imread("bg.jpg")
bg_mask = cv2.bitwise_not(mask)
bg_crop = cv2.bitwise_or(bg, bg, mask=bg_mask)
final = cv2.bitwise_or(crop, bg_crop)
cv2.imshow("debug", final)
cv2.waitKey(0)
cv2.destroyAllWindows()
helpers_cv2.py:
from pathlib import Path
import cv2
import numpy
from PIL import Image
from PIL import ImageCms
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
def cmyk_to_bgr(cmyk_img):
    img = Image.open(cmyk_img)
    if img.mode == "CMYK":
        img = ImageCms.profileToProfile(img, "Color Profiles\\USWebCoatedSWOP.icc", "Color Profiles\\sRGB_Color_Space_Profile.icm", outputMode="RGB")
    return cv2.cvtColor(numpy.array(img), cv2.COLOR_RGB2BGR)

def threshold(img, thresh=128, maxval=255, type=cv2.THRESH_BINARY):
    if len(img.shape) == 3:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    threshed = cv2.threshold(img, thresh, maxval, type)[1]
    return threshed

def find_contours(img):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
    morphed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
    contours = cv2.findContours(morphed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours[-2]

def mask_from_contours(ref_img, contours):
    mask = numpy.zeros(ref_img.shape, numpy.uint8)
    mask = cv2.drawContours(mask, contours, -1, (255, 255, 255), -1)
    return cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)

def dilate_mask(mask, kernel_size=11):
    kernel = numpy.ones((kernel_size, kernel_size), numpy.uint8)
    dilated = cv2.dilate(mask, kernel, iterations=1)
    return dilated
Now, instead of sharp edges, I want to crop with feathered/smooth edges. For example (the right one; created in Photoshop):
How can I do that?
All images and codes can be found that at this repository.
You are using a mask to select parts of the overlay image. The mask currently looks like this:
Let's first add a Gaussian blur to this mask.
mask_blurred = cv2.GaussianBlur(mask,(99,99),0)
We get to this:
Now, the remaining task is to blend the images using the alpha values in the mask, rather than using it as a logical operator like you do currently.
mask_blurred_3chan = cv2.cvtColor(mask_blurred, cv2.COLOR_GRAY2BGR).astype('float') / 255.
img = img.astype('float') / 255.
bg = bg.astype('float') / 255.
out = bg * (1 - mask_blurred_3chan) + img * mask_blurred_3chan
The above snippet is quite simple. First, transform the mask into a 3-channel image (since we want to mask all the channels). Then convert the images to float, since the masking is done in floating point. The last line does the actual work: for each pixel, it blends the bg and img images according to the value in the mask. The result looks like this:
The amount of feathering is controlled by the size of the kernel in the Gaussian blur. Note that it has to be an odd number.
After this, out (the final image) is still in floating point. It can be converted back to int using:
out = (out * 255).astype('uint8')
While Paul92's answer is more than enough, I wanted to post my code anyway for any future visitor.
I'm doing this cropping to get rid of white background in some product photos. So, the main goal is to get rid of the whites while keeping the product intact. Most of the product photos have shadows on the ground. They are either the ground itself (faded), or the product's shadow, or both.
While the object detection works fine, these shadows also count as part of the object. Differentiating the shadows from the objects isn't strictly necessary, but including them results in some images that are less than desirable. For example, examine the left and bottom sides of the image (shadow). The cut/crop is clearly visible and doesn't look all that nice.
To get around this problem, I wanted to do non-rectangular crops. Using masks seems to do the job just fine. The next problem was to do the cropping with feathered/blurred edges so that I can get rid of these visible shadow cuts. With the help of Paul92, I've managed to do that. Example output (notice the missing shadow cuts, the edges are softer):
Operations on the image(s):
The code (show_mask_feathered.py, helpers_cv2.py)
import sys
from pathlib import Path
import cv2
import numpy
from helpers_cv2 import *
img_path = Path(sys.argv[1])
img = cmyk_to_bgr(str(img_path))
threshed = threshold(img, 240, type=cv2.THRESH_BINARY_INV)
contours = find_contours(threshed)
dilation_length = 51
blur_length = 51
mask = mask_from_contours(img, contours)
mask_dilated = dilate_mask(mask, dilation_length)
mask_smooth = smooth_mask(mask_dilated, odd(dilation_length * 1.5))
mask_blurred = cv2.GaussianBlur(mask_smooth, (blur_length, blur_length), 0)
mask_blurred = cv2.cvtColor(mask_blurred, cv2.COLOR_GRAY2BGR)
mask_threshed = threshold(mask_blurred, 1)
mask_contours = find_contours(mask_threshed)
mask_contour = max_contour(mask_contours)
x, y, w, h = cv2.boundingRect(mask_contour)
img_cropped = img[y:y+h, x:x+w]
mask_cropped = mask_blurred[y:y+h, x:x+w]
background = numpy.full(img_cropped.shape, (200,240,200), dtype=numpy.uint8)
output = alpha_blend(background, img_cropped, mask_cropped)
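The helpers odd, smooth_mask, max_contour, and alpha_blend are defined in the repository linked earlier; the sketches below are only my guesses at their behavior, based on their names and on the blending formula from Paul92's answer, so refer to the repository for the actual implementations.

def odd(number):
    # Round up to the nearest odd integer (Gaussian kernels must be odd).
    number = int(number)
    return number if number % 2 == 1 else number + 1

def smooth_mask(mask, kernel_size=11):
    # Close small gaps and smooth the mask with an elliptical kernel.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

def max_contour(contours):
    # Return the contour with the largest area.
    return max(contours, key=cv2.contourArea)

def alpha_blend(bg, fg, mask):
    # Blend fg over bg, using the (blurred) mask as a per-pixel alpha value.
    alpha = mask.astype("float") / 255.0
    blended = bg.astype("float") * (1 - alpha) + fg.astype("float") * alpha
    return blended.astype("uint8")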
I'm trying to use OpenCV to scale down numbers in an image. I am currently able to identify the contours, but I am having trouble figuring out how to scale down the numbers once I have identified them.
Here is an example image:
Here are the contours I have identified:
Here is the code I am using to achieve this:
import cv2

image = cv2.imread("numbers.png")
edged = cv2.Canny(image, 10, 250)

# applying closing function
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
closed = cv2.morphologyEx(edged, cv2.MORPH_CLOSE, kernel)

_, cnts, _ = cv2.findContours(closed.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

contours = []
for c in cnts:
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    contours.append(approx)
    cv2.drawContours(image, [approx], -1, (0, 255, 0), 2)

cv2.imshow("Output", image)
cv2.waitKey(0)
I want to be able to use the contours to scale down the numbers without affecting the size of the image. Is this possible? Thanks!
Assuming you have an input image named "numbers.png".
First of all, import useful libraries and load the input image:
import cv2
import numpy as np
img = cv2.imread("./numbers.png", 1)
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
Secondly, you need to binarize the input image and find the external contours of the numbers:
_, im_th = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
_, contours, _ = cv2.findContours(255-im_th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
So you can see the detected contours will be around the numbers.
Thirdly, find the relative bounding boxes around the numbers and find the middle point coordinates of the boxes (I assume the numbers should be resized and put in the center of the bottom line):
number_imgs = []
number_btm_mid_pos = []

for cnt in contours:
    (x, y, w, h) = cv2.boundingRect(cnt)
    number_imgs.append(img[y:y+h, x:x+w])
    number_btm_mid_pos.append((int(x+w/2), y+h))
Finally, resize the numbers, put them back to the image, and display the result:
# resize images and put them back
output_img = np.ones_like(img) * 255
resize_ratio = 0.5

for (i, num_im) in enumerate(number_imgs):
    num_im = cv2.resize(num_im, (0, 0), fx=resize_ratio, fy=resize_ratio)
    (img_h, img_w) = num_im.shape[:2]

    # x1, y1, x2, y2
    btm_x, btm_y = number_btm_mid_pos[i]
    x1 = btm_x - int(img_w / 2)
    y1 = btm_y - img_h
    x2 = x1 + img_w
    y2 = y1 + img_h
    output_img[y1:y2, x1:x2] = num_im

cv2.imshow("Output Image", output_img)
cv2.imshow("Original Input", img)
cv2.waitKey()
You can adjust the variable "resize_ratio" to make sure the ratio is what you expected. The result should be something like this image here:
You may notice the last number, "10", is split apart. That is because "1 0" was recognized as two separate digits. To make it perfect, you could write some code to test the gap/distance between every two digits. However, that is not closely related to this question, and it would be hard to generalize from the limited test input, so I'll stop here.
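If you do want to experiment with that, a rough, untested sketch of such a gap test could merge bounding boxes whose horizontal gap is small compared to their height (the 0.5 factor is a guess and would need tuning per input):

# Hypothetical digit-grouping step: sort boxes left to right and merge
# neighbours whose horizontal gap is small relative to their height.
boxes = sorted([cv2.boundingRect(cnt) for cnt in contours], key=lambda b: b[0])
merged = []
for (x, y, w, h) in boxes:
    if merged and x - (merged[-1][0] + merged[-1][2]) < 0.5 * h:
        mx, my, mw, mh = merged[-1]
        nx, ny = min(mx, x), min(my, y)
        nw = max(mx + mw, x + w) - nx
        nh = max(my + mh, y + h) - ny
        merged[-1] = (nx, ny, nw, nh)
    else:
        merged.append((x, y, w, h))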
Anyway, good luck and have fun.
I have an image and a circular zone. I need to blur everything except the circular zone, and I also need to make the border of the circle smooth.
The input:
The output (made in an image editor with a mask, but I think OpenCV only uses bitmap masks):
For now I have code in Python which doesn't blur the border of the circle.
def blur_image(cv_image, radius, center, gaussian_core, sigma_x):
    blurred = cv.GaussianBlur(cv_image, gaussian_core, sigma_x)
    h, w, d = cv_image.shape

    # masks
    circle_mask = np.ones((h, w), cv_image.dtype)
    cv.circle(circle_mask, center, radius, (0, 0, 0), -1)
    circle_not_mask = np.zeros((h, w), cv_image.dtype)
    cv.circle(circle_not_mask, center, radius, (2, 2, 2), -1)

    # Computing
    blur_around = cv.bitwise_and(blurred, blurred, mask=circle_mask)
    image_in_circle = cv.bitwise_and(cv_image, cv_image, mask=circle_not_mask)
    res = cv.bitwise_or(blur_around, image_in_circle)
    return res
Current version:
How can I blur the border of the circle? In the example output I used a gradient mask in an image editor. Is there something similar in OpenCV?
UPDATE 04.03
So, I've tried the formula from this answered topic, and here is what I have:
Code:
def blend_with_mask_matrix(src1, src2, mask):
    res = src2 * (1 - cv.divide(mask, 255.0)) + src1 * cv.divide(mask, 255.0)
    return res
This code should work similarly to the previous version, but it doesn't. The image inside the circle is slightly different; it has some problems with color. The question is still open.
I think maybe you want something like this.
This is the source image:
The source/blurred pair:
The mask/alpha-blended pair:
The code, with descriptions in the comments:
#!/usr/bin/python3
# 2018.01.16 13:07:05 CST
# 2018.01.16 13:54:39 CST
import cv2
import numpy as np

def alphaBlend(img1, img2, mask):
    """ alphaBlend img1 and img2 (of CV_8UC3) with mask (CV_8UC1 or CV_8UC3)
    """
    if mask.ndim == 3 and mask.shape[-1] == 3:
        alpha = mask / 255.0
    else:
        alpha = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR) / 255.0
    blended = cv2.convertScaleAbs(img1 * (1 - alpha) + img2 * alpha)
    return blended

img = cv2.imread("test.png")
H, W = img.shape[:2]

mask = np.zeros((H, W), np.uint8)
cv2.circle(mask, (325, 350), 40, (255, 255, 255), -1, cv2.LINE_AA)
mask = cv2.GaussianBlur(mask, (21, 21), 11)

blured = cv2.GaussianBlur(img, (21, 21), 11)

blended1 = alphaBlend(img, blured, mask)
blended2 = alphaBlend(img, blured, 255 - mask)

cv2.imshow("blended1", blended1)
cv2.imshow("blended2", blended2)
cv2.waitKey()
cv2.destroyAllWindows()
Some useful links:
Alpha Blending in OpenCV C++ : Combining 2 images with transparent mask in opencv
Alpha Blending in OpenCV Python:
Gradient mask blending in opencv python
So the main problem with (mask/255) * blur + (1 - mask/255) * another img was the operators: they worked on only one channel. The next problem was working with float numbers for the "smoothing".
I changed the alpha-channel blending code to this:
1) Take each channel of the source images and the mask
2) Apply the formula per channel
3) Merge the channels back together
def blend_with_mask_matrix(src1, src2, mask):
    res_channels = []
    for c in range(0, src1.shape[2]):
        a = src1[:, :, c]
        b = src2[:, :, c]
        m = mask[:, :, c]
        res = cv.add(
            cv.multiply(b, cv.divide(np.full_like(m, 255) - m, 255.0, dtype=cv.CV_32F), dtype=cv.CV_32F),
            cv.multiply(a, cv.divide(m, 255.0, dtype=cv.CV_32F), dtype=cv.CV_32F),
            dtype=cv.CV_8U)
        res_channels += [res]
    res = cv.merge(res_channels)
    return res
And as a gradient mask I'm just using a blurred circle.
def blur_image(cv_image, radius, center, gaussian_core, sigma_x):
    blurred = cv.GaussianBlur(cv_image, gaussian_core, sigma_x)

    circle_not_mask = np.zeros_like(cv_image)
    cv.circle(circle_not_mask, center, radius, (255, 255, 255), -1)
    # Smoothing borders
    cv.GaussianBlur(circle_not_mask, (101, 101), 111, dst=circle_not_mask)

    # Computing
    res = blend_with_mask_matrix(cv_image, blurred, circle_not_mask)
    return res
Result:
It runs a bit slower than the very first version without smooth borders, but it's okay.
Closing question.
You can easily apply a mask to an image using the following function:
def transparentOverlay(src, overlay, pos=(0, 0), scale=1):
    overlay = cv2.resize(overlay, (0, 0), fx=scale, fy=scale)
    h, w, _ = overlay.shape  # Size of foreground
    rows, cols, _ = src.shape  # Size of background Image
    y, x = pos[0], pos[1]  # Position of foreground/overlay image

    # loop over all pixels and apply the blending equation
    for i in range(h):
        for j in range(w):
            if x + i >= rows or y + j >= cols:
                continue
            alpha = float(overlay[i][j][3] / 255.0)  # read the alpha channel
            src[x + i][y + j] = alpha * overlay[i][j][:3] + (1 - alpha) * src[x + i][y + j]
    return src
You need to pass the source image, then the overlay mask and position where you want to set the mask.
You can even set the masking scale by calling it like this:
transparentOverlay(face_cigar_roi_color,cigar,(int(w/2),int(sh_cigar/2)))
For details you can look at this link: Face masking and Overlay using OpenCV python
Output:
You can try using a function from the PIL library. For example:
from PIL import Image, ImageFilter

blur_factor = 3  # for smooth borders as you have mentioned
blurred_mask = mask.filter(ImageFilter.GaussianBlur(blur_factor))  # your circle = 255, background = 0
# Image.composite takes the first image where the mask is white, so the sharp original_img
# stays inside the circle and the already-blurred blurred_img is used everywhere else
final_img = Image.composite(original_img, blurred_img, blurred_mask)
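A more complete sketch of how those pieces fit together might look like this (the file names, blur radius, and circle coordinates are placeholders, not values from the question):

from PIL import Image, ImageDraw, ImageFilter

original_img = Image.open("input.jpg")          # placeholder file name
blurred_img = original_img.filter(ImageFilter.GaussianBlur(15))

# Circular mask: 255 inside the circle (kept sharp), 0 outside (blurred).
mask = Image.new("L", original_img.size, 0)
draw = ImageDraw.Draw(mask)
draw.ellipse((285, 310, 365, 390), fill=255)    # placeholder circle position

blur_factor = 3
blurred_mask = mask.filter(ImageFilter.GaussianBlur(blur_factor))

# Take the sharp original inside the feathered circle, the blurred copy elsewhere.
final_img = Image.composite(original_img, blurred_img, blurred_mask)
final_img.save("output.jpg")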