I am trying to extract the text from a captcha image using OpenCV, to build a dataset for training. My extracted captcha text is not smooth or clean. Can someone suggest the best method to remove the unwanted noise and get smooth captcha text?
Right now, I first add some padding, threshold the image, and find the contours.
Issues:
1. Adding padding with cv2.BORDER_REPLICATE may introduce noise.
2. The thresholding is not smooth or clean.
Original Image:
Threshold Image:
Contour Image:
Desired Image/Result:
Final separated numbers required:
Code:
import os
import os.path
import cv2
import glob
import imutils
import matplotlib.pyplot as plt

CAPTCHA_IMAGE_FOLDER = "generated_captcha_images"
OUTPUT_FOLDER = "extracted_letter_images"

# Get a list of all the captcha images we need to process
captcha_image_files = glob.glob(os.path.join(CAPTCHA_IMAGE_FOLDER, "*"))
counts = {}

# loop over the image paths
for (i, captcha_image_file) in enumerate(captcha_image_files):
    print("[INFO] processing image {}/{}".format(i + 1, len(captcha_image_files)))

    # Since the filename contains the captcha text (i.e. "2A2X.png" has the text "2A2X"),
    # grab the base filename as the text
    filename = os.path.basename(captcha_image_file)
    captcha_correct_text = os.path.splitext(filename)[0]

    # Load the image and convert it to grayscale
    image = cv2.imread(captcha_image_file)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Add some extra padding around the image
    gray = cv2.copyMakeBorder(gray, 8, 8, 8, 8, cv2.BORDER_REPLICATE)

    # threshold the image (convert it to pure black and white)
    thresh = cv2.threshold(gray, 36, 255, cv2.THRESH_BINARY_INV)[1]

    # find the contours (continuous blobs of pixels) in the image
    contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Hack for compatibility with different OpenCV versions
    contours = contours[1] if imutils.is_cv3() else contours[0]
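(Note: OUTPUT_FOLDER and counts are defined above but unused in the snippet, so the loop presumably continues by cropping each contour's bounding box and saving the letters. A sketch of that missing step, continuing inside the for loop; the wide-blob split heuristic is an assumption to tune for your captchas:)

    letter_image_regions = []
    for contour in contours:
        (x, y, w, h) = cv2.boundingRect(contour)
        # Assumed heuristic: very wide blobs are probably two letters stuck
        # together, so split them in half
        if w / h > 1.25:
            half_width = int(w / 2)
            letter_image_regions.append((x, y, half_width, h))
            letter_image_regions.append((x + half_width, y, half_width, h))
        else:
            letter_image_regions.append((x, y, w, h))

    # Sort left-to-right so regions line up with the characters in the filename
    letter_image_regions = sorted(letter_image_regions, key=lambda r: r[0])

    for (x, y, w, h), letter_text in zip(letter_image_regions, captcha_correct_text):
        # Crop with a 2-pixel margin and save under a per-character folder
        letter_image = gray[y - 2:y + h + 2, x - 2:x + w + 2]
        save_path = os.path.join(OUTPUT_FOLDER, letter_text)
        if not os.path.exists(save_path):
            os.makedirs(save_path)
        count = counts.get(letter_text, 1)
        cv2.imwrite(os.path.join(save_path, "{}.png".format(str(count).zfill(6))), letter_image)
        counts[letter_text] = count + 1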
You might be looking for a skeleton rather than contours here. Check out the skeletonize function in scikit-image:

import numpy as np
from skimage.morphology import skeletonize

skeleton = skeletonize(thresh.astype(np.float32) / 255)

f, axs = plt.subplots(2, 1, figsize=(10, 6))
for ax, img, t in zip(axs, [thresh, skeleton], ['Thresholded Image', 'SKImage Skeleton']):
    ax.imshow(img, 'gray')
    ax.set_title(t)
    ax.axis('off')
plt.show()
Related
I am trying to digitize kids' drawings into SVG or transparent PNG format so that they can be used in Scratch. The white paper should be replaced by a transparent background while all of the drawing is preserved.
My plan is to get the outermost contour of the drawing and generate a mask, then use the mask to extract the drawing without the paper background.
The problem is that the drawing may not be continuous: small gaps break the single outer contour into many small contours.
Now I want to join the nearby outer contours to form one big outer contour for masking.
The original drawing and the processed result are attached.
Code:
from __future__ import print_function
import cv2 as cv
import numpy as np
import argparse
import random as rng

rng.seed(12345)

def thresh_callback(val):
    threshold = val
    # Detect edges using Canny
    canny_output = cv.Canny(src_gray, threshold, threshold * 2)
    # Find contours
    contours, hierarchy = cv.findContours(canny_output, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
    # Draw contours
    drawing = np.zeros((canny_output.shape[0], canny_output.shape[1], 3), dtype=np.uint8)
    for i in range(len(contours)):
        color = (rng.randint(0,256), rng.randint(0,256), rng.randint(0,256))
        cv.drawContours(drawing, contours, i, color, 2, cv.LINE_8, hierarchy, 0)
    # Show in a window
    cv.imshow('Contours', drawing)

# Load source image
parser = argparse.ArgumentParser(description='Code for Finding contours in your image tutorial.')
parser.add_argument('--input', help='Path to input image.', default='IMG_4446.jpg')
args = parser.parse_args()
src = cv.imread(cv.samples.findFile(args.input))
if src is None:
    print('Could not open or find the image:', args.input)
    exit(0)
# Convert image to gray and blur it
src_gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
src_gray = cv.blur(src_gray, (3,3))
# Create Window
source_window = 'Source'
cv.namedWindow(source_window)
cv.imshow(source_window, src)
max_thresh = 255
thresh = 100 # initial threshold
cv.createTrackbar('Canny Thresh:', source_window, thresh, max_thresh, thresh_callback)
thresh_callback(thresh)
cv.waitKey()
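The code below takes a different route: instead of trying to merge the broken contours, it draws every contour with a thick stroke (which bridges the small gaps), then flood-fills the background from a corner and keeps only the flood-filled region as a mask. Inverting that mask isolates the drawing.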
import cv2, numpy as np
# Read Image
img = cv2.imread('/home/stephen/Desktop/test_img.png')
img = cv2.resize(img, (750, 1000))
# Find the gray image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Gray
gray = cv2.blur(gray, (2,2))
cv2.imwrite('/home/stephen/Desktop/gray.png',gray)
# Find the canny image
canny = cv2.Canny(gray, 30, 150) # Canny
# Find contours
contours, _ = cv2.findContours(canny,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
# Draw contours on canny (this connects the contours)
cv2.drawContours(canny, contours, -1, 255, 6)
cv2.imwrite('/home/stephen/Desktop/contours.png',canny)
# Get mask for floodfill
h, w = canny.shape[:2]
mask = np.zeros((h+2, w+2), np.uint8)
# Floodfill from point (0, 0)
cv2.floodFill(canny, mask, (0,0), 123)
cv2.imwrite('/home/stephen/Desktop/floodfill.png',canny)
# Exclude everything but the floodfill region
canny = cv2.inRange(canny, 122, 124)
cv2.imwrite('/home/stephen/Desktop/inrange.png',canny)
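To get the transparent PNG the question asks for, one possibility (a sketch, not part of the original answer) is to invert that background mask and attach it as an alpha channel:

# Invert: background becomes 0 (transparent), drawing region becomes 255 (opaque)
alpha = cv2.bitwise_not(canny)
# Attach the mask as an alpha channel and save as PNG
b, g, r = cv2.split(img)
rgba = cv2.merge([b, g, r, alpha])
cv2.imwrite('/home/stephen/Desktop/transparent.png', rgba)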
When I run the code below, I am not able to find consistent circles in the image. The image I am using as input is:
import numpy as np
import matplotlib.pyplot as plt
import cv2

img = cv2.imread("pipe.jpg")
# convert the image to RGB
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# copy the RGB image
cimg = img.copy()
# convert the RGB image to grayscale
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# detect circles using hough transformation
circles = cv2.HoughCircles(image=img, method=cv2.HOUGH_GRADIENT, dp=3,
                           minDist=60, param1=100, param2=39, maxRadius=200)
for co, i in enumerate(circles[0, :], start=1):
    i = [round(num) for num in i]
    cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
    cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
print("Number of circles detected:", co)
plt.imshow(cimg)
plt.show()
The result I get is:
As a pre-processing step, you usually smooth the image prior to the detection. Not smoothing will prompt a lot of false detections. Sometimes you don't see pre-smoothing in some tutorials because the images dealt with have nice clean edges with very little noise. Try using a median blur or Gaussian blur before you perform the detection.
Therefore, try something like:
import cv2
img = cv2.imread("pipe.jpg")
# convert the image to RGB
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# copy the RGB image
cimg = img.copy()
# convert the RGB image to grayscale
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
### NEW - Smooth the image first
blur = cv2.GaussianBlur(img, (5, 5), 0)
# or try
# blur = cv2.medianBlur(img, 5)
# detect circles using hough transformation
circles = cv2.HoughCircles(image=blur, method=cv2.HOUGH_GRADIENT, dp=3,
                           minDist=60, param1=100, param2=39, maxRadius=200)
Other than that, getting the detection to find all of the circles in the image comes down to tuning the hyperparameters of the Circular Hough Transform through trial and error.
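If the trial and error gets tedious, a small parameter sweep can narrow things down. A sketch, continuing from the snippet above (the range of param2 values is an arbitrary assumption):

# Try a few accumulator thresholds and report how many circles each finds
for param2 in range(25, 60, 5):
    circles = cv2.HoughCircles(image=blur, method=cv2.HOUGH_GRADIENT, dp=3,
                               minDist=60, param1=100, param2=param2, maxRadius=200)
    n = 0 if circles is None else len(circles[0])
    print("param2={}: {} circles".format(param2, n))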
from PIL import Image
import pytesseract
from pdf2image import convert_from_path
import os
import pandas as pd
import cv2
import numpy as np

os.chdir("C:/Users/abhishek_kumar1/Desktop/New folder")
#print(os.getcwd())
pages = convert_from_path("d.pdf", 190, single_file=True,
                          poppler_path='C:/Users/abhishek_kumar1/Downloads/poppler-0.68.0_x86/poppler-0.68.0/bin')
image_counter = 1
for page in pages:
    filename = "page_" + str(image_counter) + ".jpg"
    page.save(filename, 'JPEG')
    img = cv2.imread(filename)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cv2.imwrite('grey.png', gray)
    binary, thresh1 = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY_INV)
    cv2.imwrite('Thresh1.png', thresh1)
    rect_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
    dilation = cv2.dilate(thresh1, rect_kernel, iterations=6)
    contours, hierarchy = cv2.findContours(dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    im2 = img.copy()
    ROI_number = 0
    for cnt in contours[::-1]:
        [x, y, w, h] = cv2.boundingRect(cnt)
        ROI = im2[y:y+h, x:x+w]
        #print(str(w),str(h))
        #cv2.putText(im2, str(h), (x,y - 10 ), cv2.FONT_HERSHEY_SIMPLEX, 0.1, (255, 0, 0), 1)
        #cv2.putText(im2, str(w), (x,y + 10 ), cv2.FONT_HERSHEY_SIMPLEX, 0.1, (0, 0, 255), 1)
        cv2.imwrite('ROI_{}.jpg'.format(ROI_number), ROI)
        cv2.rectangle(im2, (x, y), (x+w, y+h), (36, 255, 12), 1)
        ROI_number += 1
    cv2.imwrite('contours1.png', im2)
How can I isolate only this part of the image with the code above? Is there any way to detect the font style (bold, italic, or something else) in an image?
I am having trouble extracting only the bold lines of text from all of the images.
If anybody has a suggestion regarding this, please help me out.
Alex Alex's answer did not work for me. Here is my alternative described in words.
The general idea is to compare the number of text pixels with the minimum number of pixels needed to still form the characters (the skeleton). That ratio differs between normal and bold text, so we can quite clearly separate the two.
1. Use OCR software to extract the bounding boxes of individual words. Optional: combine individual words into lines of words, for example by word_num in Pytesseract.
2. Convert the image to grayscale and invert the image colors.
3. Perform Zhang-Suen thinning on the selected area of text on the image (OpenCV contribution: cv2.ximgproc.thinning).
4. Sum the white pixels in the thinned image, i.e. the pixels equal to 255 (white pixels are letters).
5. Sum the white pixels in the inverted image.
6. Finally, compute the thickness as (sum_inverted_pixels - sum_skeleton_pixels) / sum_skeleton_pixels; see the sketch after this list. (There can be a zero division error, so check whether the sum of the skeleton is 0 and return 0 instead.)
7. Normalize the thickness by the minimum and maximum values.
8. Apply a threshold for deciding when a word/line of text is bold, e.g. 0.6 or 0.7.
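A minimal sketch of steps 2-6, assuming opencv-contrib-python is installed for cv2.ximgproc.thinning (the helper name is illustrative, not from the original answer):

import cv2
import numpy as np

def stroke_thickness(word_img_gray):
    # Invert so letters are white on black, then binarize
    inverted = cv2.bitwise_not(word_img_gray)
    binary = cv2.threshold(inverted, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
    # Zhang-Suen thinning (requires the opencv-contrib package)
    skeleton = cv2.ximgproc.thinning(binary)
    sum_skeleton = np.sum(skeleton == 255)
    sum_inverted = np.sum(binary == 255)
    if sum_skeleton == 0:
        return 0
    # Bold strokes have many more pixels than their one-pixel-wide skeleton
    return (sum_inverted - sum_skeleton) / sum_skeleton

The per-word values can then be normalized and thresholded as in steps 7-8.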
See the Python code and result:

import cv2
import numpy as np

img = cv2.imread('C.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 160, 255, cv2.THRESH_BINARY)[1]
kernel = np.ones((5,5), np.uint8)
kernel2 = np.ones((3,3), np.uint8)
# Morphological reconstruction: erode the marker until it stabilizes against the mask
marker = cv2.dilate(thresh, kernel, iterations=1)
mask = cv2.erode(thresh, kernel, iterations=1)
while True:
    tmp = marker.copy()
    marker = cv2.erode(marker, kernel2)
    marker = cv2.max(mask, marker)
    difference = cv2.subtract(tmp, marker)
    if cv2.countNonZero(difference) == 0:
        break
marker_color = cv2.cvtColor(marker, cv2.COLOR_GRAY2BGR)
out = cv2.bitwise_or(img, marker_color)
cv2.imwrite('out.png', out)
cv2.imshow('result', out)
cv2.waitKey(0)
I am trying to code a tool that automatically identifies and alphabetically sorts the images based on equipment number (19-V1083AI). I used the pytesseract library to convert the image to a string after the contours of the equipment label were identified. Although the code runs correctly, it never outputs the equipment number. It's my first time using the pytesseract library and the goodFeaturesToTrack function. Any help would be greatly appreciated!
Original Image:
import numpy as np
import cv2
import imutils  # resize image
import pytesseract  # convert img to string
from matplotlib import pyplot as plt

pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"

# Read the image file
image = cv2.imread('Car Images/s3.JPG')
# Resize the image - change width to 500
image = imutils.resize(image, width=500)
# Display the original image
cv2.imshow("Original Image", image)
cv2.waitKey(0)
# RGB to Gray scale conversion
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow("1 - Grayscale Conversion", gray)
cv2.waitKey(0)
# Noise removal with iterative bilateral filter (removes noise while preserving edges)
gray = cv2.bilateralFilter(gray, 11, 17, 17)
cv2.imshow("2 - Bilateral Filter", gray)
cv2.waitKey(0)
corners = cv2.goodFeaturesToTrack(gray, 60, 0.001, 10)
corners = np.int0(corners)
for i in corners:
    x, y = i.ravel()
    cv2.circle(image, (x, y), 0, 255, -1)
coord = np.where(np.all(image == (255, 0, 0), axis=-1))
plt.imshow(image)
# Use tesseract to convert image into string
text = pytesseract.image_to_string(image, lang='eng')
print("Equipment Number is:", text)
plt.show()
Output:
I'm trying to crop an object from an image, and paste it on another image. Examining the method in this answer, I've successfully managed to do that. For example:
The code (show_mask_applied.py):
import sys
from pathlib import Path
from helpers_cv2 import *
import cv2
import numpy
img_path = Path(sys.argv[1])
img = cmyk_to_bgr(str(img_path))
threshed = threshold(img, 240, type=cv2.THRESH_BINARY_INV)
contours = find_contours(threshed)
mask = mask_from_contours(img, contours)
mask = dilate_mask(mask, 50)
crop = cv2.bitwise_or(img, img, mask=mask)
bg = cv2.imread("bg.jpg")
bg_mask = cv2.bitwise_not(mask)
bg_crop = cv2.bitwise_or(bg, bg, mask=bg_mask)
final = cv2.bitwise_or(crop, bg_crop)
cv2.imshow("debug", final)
cv2.waitKey(0)
cv2.destroyAllWindows()
helpers_cv2.py:
from pathlib import Path

import cv2
import numpy
from PIL import Image
from PIL import ImageCms
from PIL import ImageFile

ImageFile.LOAD_TRUNCATED_IMAGES = True

def cmyk_to_bgr(cmyk_img):
    img = Image.open(cmyk_img)
    if img.mode == "CMYK":
        img = ImageCms.profileToProfile(img, "Color Profiles\\USWebCoatedSWOP.icc", "Color Profiles\\sRGB_Color_Space_Profile.icm", outputMode="RGB")
    return cv2.cvtColor(numpy.array(img), cv2.COLOR_RGB2BGR)

def threshold(img, thresh=128, maxval=255, type=cv2.THRESH_BINARY):
    if len(img.shape) == 3:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    threshed = cv2.threshold(img, thresh, maxval, type)[1]
    return threshed

def find_contours(img):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11,11))
    morphed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
    contours = cv2.findContours(morphed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours[-2]

def mask_from_contours(ref_img, contours):
    mask = numpy.zeros(ref_img.shape, numpy.uint8)
    mask = cv2.drawContours(mask, contours, -1, (255,255,255), -1)
    return cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)

def dilate_mask(mask, kernel_size=11):
    kernel = numpy.ones((kernel_size,kernel_size), numpy.uint8)
    dilated = cv2.dilate(mask, kernel, iterations=1)
    return dilated
Now, instead of sharp edges, I want to crop with feathered/smooth edges. For example (the right one; created in Photoshop):
How can I do that?
All images and code can be found at this repository.
You are using a mask to select parts of the overlay image. The mask currently looks like this:
Let's first add a Gaussian blur to this mask.
mask_blurred = cv2.GaussianBlur(mask,(99,99),0)
We get to this:
Now, the remaining task is to blend the images using the alpha values in the mask, rather than using it as a logical operator as you do currently.
mask_blurred_3chan = cv2.cvtColor(mask_blurred, cv2.COLOR_GRAY2BGR).astype('float') / 255.
img = img.astype('float') / 255.
bg = bg.astype('float') / 255.
out = bg * (1 - mask_blurred_3chan) + img * mask_blurred_3chan
The above snippet is quite simple. First, transform the mask into a 3-channel image (since we want to mask all the channels). Then transform the images to float, since the masking is done in floating point. The last line does the actual work: for each pixel, it blends the bg and img images according to the value in the mask. The result looks like this:
The amount of feathering is controlled by the size of the kernel in the Gaussian blur. Note that it has to be an odd number.
After this, out (the final image) is still in floating point. It can be converted back to int using:
out = (out * 255).astype('uint8')
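Regarding the odd kernel size mentioned above, a small helper (hypothetical, not from this answer) can round any desired feather amount to a valid odd value:

def odd(num):
    # Round to the nearest integer, then bump even values to the next odd number
    num = int(round(num))
    return num if num % 2 == 1 else num + 1

blur_length = odd(50)  # -> 51, a valid kernel size for cv2.GaussianBlur
mask_blurred = cv2.GaussianBlur(mask, (blur_length, blur_length), 0)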
While Paul92's answer is more than enough, I wanted to post my code anyway for any future visitor.
I'm doing this cropping to get rid of white background in some product photos. So, the main goal is to get rid of the whites while keeping the product intact. Most of the product photos have shadows on the ground. They are either the ground itself (faded), or the product's shadow, or both.
While the object detection works fine, these shadows also count as part of the object. Differentiating the shadows from the objects is not really necessary, but it results in some images that are not so desired. For example, examine the left and bottom sides of the image (shadow). The cut/crop is obviously visible, and doesn't look all that nice.
To get around this problem, I wanted to do non-rectangular crops. Using masks seems to do the job just fine. The next problem was to do the cropping with feathered/blurred edges so that I can get rid of these visible shadow cuts. With the help of Paul92, I've managed to do that. Example output (notice the missing shadow cuts, the edges are softer):
Operations on the image(s):
The code (show_mask_feathered.py, helpers_cv2.py):
import sys
from pathlib import Path
import cv2
import numpy
from helpers_cv2 import *
img_path = Path(sys.argv[1])
img = cmyk_to_bgr(str(img_path))
threshed = threshold(img, 240, type=cv2.THRESH_BINARY_INV)
contours = find_contours(threshed)
dilation_length = 51
blur_length = 51
mask = mask_from_contours(img, contours)
mask_dilated = dilate_mask(mask, dilation_length)
mask_smooth = smooth_mask(mask_dilated, odd(dilation_length * 1.5))
mask_blurred = cv2.GaussianBlur(mask_smooth, (blur_length, blur_length), 0)
mask_blurred = cv2.cvtColor(mask_blurred, cv2.COLOR_GRAY2BGR)
mask_threshed = threshold(mask_blurred, 1)
mask_contours = find_contours(mask_threshed)
mask_contour = max_contour(mask_contours)
x, y, w, h = cv2.boundingRect(mask_contour)
img_cropped = img[y:y+h, x:x+w]
mask_cropped = mask_blurred[y:y+h, x:x+w]
background = numpy.full(img_cropped.shape, (200,240,200), dtype=numpy.uint8)
output = alpha_blend(background, img_cropped, mask_cropped)
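The script also relies on a few helpers (odd, smooth_mask, max_contour, alpha_blend) that are not in the helpers_cv2.py excerpt above; they live in the linked repository. Hedged sketches consistent with how they are called here, assuming the same cv2/numpy imports as helpers_cv2.py, with smooth_mask in particular being a guess:

def odd(num):
    # Force a (possibly fractional) size to the nearest odd integer
    num = int(round(num))
    return num if num % 2 == 1 else num + 1

def max_contour(contours):
    # Largest contour by enclosed area
    return max(contours, key=cv2.contourArea)

def smooth_mask(mask, kernel_size):
    # Guess: blur the jagged mask edges, then re-threshold to a binary mask
    blurred = cv2.GaussianBlur(mask, (kernel_size, kernel_size), 0)
    return cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)[1]

def alpha_blend(bg, img, mask):
    # Per-pixel blend, as in Paul92's answer above
    alpha = mask.astype('float') / 255.0
    img_f = img.astype('float') / 255.0
    bg_f = bg.astype('float') / 255.0
    out = bg_f * (1 - alpha) + img_f * alpha
    return (out * 255).astype('uint8')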