I have an image from which I am extracting bounding boxes. The code I am using is as follows:
from trp import Document

doc = Document(response)

BB = []
for page in doc.pages:
    for field in page.form.fields:
        BB.append([field.value.geometry.boundingBox.left, field.value.geometry.boundingBox.top,
                   field.value.geometry.boundingBox.width, field.value.geometry.boundingBox.height])
The result is an array of all of the bounding boxes (in the format [left, top, width, height]), as follows:
[[0.6512874960899353,
0.240500807762146,
0.09759742021560669,
0.024829255416989326],
[0.08765672147274017,
0.2444963902235031,
0.08454835414886475,
0.025759655982255936],
[0.3960501253604889,
0.24606066942214966,
0.08616191148757935,
0.020310387015342712],
[0.881671130657196,
0.24629370868206024,
0.03400468826293945,
0.01618446409702301],...]
Amazon makes the top left of the image the origin, and makes the maximum X and Y value 1.0. To translate this back to the original image dimensions, you are required to multiply the values by the corresponding values in image.shape
I print my image.shape, which corresponds to:
(3300,2550,3)
So I use the following code to translate this array:
Translated_BBs = []
for i in BB:
    Translated_BBs.append([int(i[3][1]*image.shape[0]), int(i[3][1]*image.shape[0]),
                           int(i[3][2]*image.shape[1]), int(i[3][3]*image.shape[0])])
However, when I use the following code to draw rectangles around the bounding boxes:
for i in Translated_BBs:
    start_point = (i[0], i[1])
    end_point = (i[1]+i[3], i[0]+i[2])
    color = (255, 0, 0)
    thickness = 4
    cv2.rectangle(imm, start_point, end_point, color, thickness)
My bounding boxes do not appear where they should. Can anyone find the flaw in my code?
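For reference, here is a minimal sketch of the translation described above (not the fix itself, just how the scaling is usually applied), assuming each entry of BB really is [left, top, width, height] in normalized coordinates and image.shape is (rows, cols, channels): x values and widths scale by shape[1], y values and heights by shape[0], and cv2.rectangle expects (x, y) corner points.

# Sketch only - assumes BB holds [left, top, width, height] per box
# and that imm is the image being drawn on:
translated_bbs = []
for left, top, width, height in BB:
    x = int(left * image.shape[1])     # columns scale x
    y = int(top * image.shape[0])      # rows scale y
    w = int(width * image.shape[1])
    h = int(height * image.shape[0])
    translated_bbs.append([x, y, w, h])

for x, y, w, h in translated_bbs:
    cv2.rectangle(imm, (x, y), (x + w, y + h), (255, 0, 0), 4)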
I generated a dataset of (200 x 200 x 3) images in which each image contains a 40 x 40 box of a different color. I want to create a model using TensorFlow which can predict the coordinates of this 40 x 40 box.
The code I used for generating these images:
from PIL import Image, ImageDraw
from random import randrange

colors = ["#ffd615", "#f9ff21", "#00d1ff",
          "#0e153a", "#fc5c9c", "#ac3f21",
          "#40514e", "#492540", "#ff8a5c",
          "#000000", "#a6fff2", "#f0f696",
          "#d72323", "#dee1ec", "#fcb1b1"]

def genrate_image(color):
    img = Image.new(mode="RGB", size=(200, 200), color=color)
    return img

def save_image(img, imgname):
    img.save(imgname)

def draw_rect(image, color, x, y):
    draw = ImageDraw.Draw(image)
    coords = ((x, y), (x+40, y), (x+40, y+40), (x, y+40))
    draw.polygon(coords, fill=color)
    #return image, str(coords)
    return image, coords[0][0], coords[2][0], coords[0][1], coords[2][1]

FILE_NAME = "train_annotations.txt"

for i in range(0, 100):
    img = genrate_image(colors[randrange(0, len(colors))])
    img, x0, x1, y0, y1 = draw_rect(img, colors[randrange(0, len(colors))],
                                    randrange(200 - 50), randrange(200 - 50))
    save_image(img, "dataset/train_images/img" + str(i) + ".png")
    with open(FILE_NAME, "a+") as f:
        f.write(f"{x0} {x1} {y0} {y1}\n")
Can anyone help me by suggesting how I can build a model which can predict the coordinates of the box in a new image?
Well, the easiest way to split out the box is K-means clustering with K = 2. You basically record the RGB values of all the pixels, then use K-means to group the pixels into 2 clusters: one will be the background group, the other the box-color group. Then, with the box-color group, map those colors back to their original coordinates and take the mean of those coordinates to get the location of the 40x40 box. A rough sketch of this idea follows the documentation link below.
https://www.tensorflow.org/api_docs/python/tf/compat/v1/estimator/experimental/KMeans
Above is the source documentation on how to do K-means.
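Here is that rough sketch, using cv2.kmeans rather than the TensorFlow estimator linked above; the file path is just an assumed example taken from the generator script in the question.

import cv2
import numpy as np

# Assumed example image produced by the generator script above:
img = cv2.imread("dataset/train_images/img0.png")
pixels = np.float32(img.reshape(-1, 3))

# Cluster the pixel colors into 2 groups (background vs. box):
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, 2, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)
labels = labels.flatten()

# The smaller cluster should be the 40x40 box (1600 of the 40000 pixels):
box_label = 0 if np.count_nonzero(labels == 0) < np.count_nonzero(labels == 1) else 1

# Map the box-color pixels back to coordinates and take their mean:
ys, xs = np.divmod(np.where(labels == box_label)[0], img.shape[1])
cx, cy = xs.mean(), ys.mean()
print("Approximate box centre:", (cx, cy))
print("Approximate top-left corner:", (cx - 20, cy - 20))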
It is enough to perform bounding box regression; for this you just need to add a fully connected layer after the CNN with 4 output values: x1, y1, x2, y2, where (x1, y1) is the top-left corner and (x2, y2) the bottom-right. A minimal Keras sketch is shown after the link below.
Something similar can be found here https://github.com/sabhatina/bounding-box-regression.
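Here is that minimal sketch; it assumes X is a float array of shape (N, 200, 200, 3) scaled to [0, 1] and y an (N, 4) array of box coordinates (also scaled to [0, 1]). The layer sizes are arbitrary choices, not anything prescribed by the linked repository.

import tensorflow as tf

# Small CNN with a 4-value regression head: x1, y1, x2, y2
# (top-left and bottom-right corners).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(200, 200, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4)  # regression outputs, no activation
])
model.compile(optimizer="adam", loss="mse")

# model.fit(X, y, epochs=20, validation_split=0.1)  # X, y are assumed arrays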
I have screenshots received from an iPhone, in both dark and light mode.
I need to use OCR to extract the URL but am unable to do so with the underlining that appears.
What would be the best way to remove the horizontal lines from the message? Apart from the phone number, it doesn't matter if other parts of the screenshot are distorted.
I've tried the approaches described in:
Removing Horizontal Lines in image (OpenCV, Python, Matplotlib)
https://docs.opencv.org/3.2.0/d1/dee/tutorial_moprh_lines_detection.html
https://legacy.imagemagick.org/discourse-server/viewtopic.php?t=22338
and none of them seem to work well at all.
Here's a possible solution for your problem. I'm using mock screenshots, since, like I suggested, it is better to use lossless images to get a better result. The main idea here is to extract the color of the text box and to fill the rest of the image with that color, then threshold the image. By doing this, we will reduce the intensity variation and obtain a better thresholded image - since the image histogram will contain fewer intensity values. These are the steps:
Crop the image to a ROI (Region Of Interest)
Get the colors in that ROI via K-Means
Get the color of the text box
Flood-fill the ROI with the color of the text box
Apply Otsu's thresholding to get a binary image
Get OCR of the image
Suppose these are our test images; one uses a "light" theme while the other uses a "dark" theme:
I'll be using pyocr as the OCR engine. Let's use image one; the code would be this:
# imports:
from PIL import Image
import numpy as np
import cv2
import pyocr
import pyocr.builders
tools = pyocr.get_available_tools()
# The tools are returned in the recommended order of usage
tool = tools[0]
langs = tool.get_available_languages()
lang = langs[0]
# image path
path = "D://opencvImages//"
fileName = "mockText.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Convert RGB to grayscale:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Set the ROI location:
roiX = 0
roiY = 235
roiWidth = 750
roiHeight = 1080
# Crop the ROI:
smsROI = grayscaleImage[roiY:roiHeight, roiX:roiWidth]
The first bit crops the ROI - everything that is of interest, leaving out the "header" and the "footer" of the image, where there's info that we really don't need. This is the current ROI:
Wouldn't it be nice to (approximately) get all the colors used in the image? Fortunately, that's what Color Quantization gives us - a reduced palette of the average colors present in an image, given the number of colors we are looking for. Let's apply K-Means with 3 clusters to group these colors.
In our test images, most of the pixels are background - so the largest cluster of pixels will belong to the background. The text represents the smallest cluster of pixels. That leaves the remaining cluster as our target - the color of the text box. Let's apply K-Means, then. We need to format the data first, though, because K-Means needs re-arranged float arrays:
# Reshape the ROI to a column of samples (one grayscale value per row):
kmeansData = smsROI.reshape((-1, 1))
# convert the data to np.float32
kmeansData = np.float32(kmeansData)
# define criteria, number of clusters(K) and apply kmeans():
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 5, 1.0)
# Define number of clusters (3 colors):
K = 3
# Run K-means:
_, _, center = cv2.kmeans(kmeansData, K, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# Convert the centers to uint8:
center = np.uint8(center)
# Sort centers from smallest to largest:
center = sorted(center, reverse=False)
# Get text color and min color:
textBoxColor = int(center[1][0])
minColor = min(center)[0]
print("Minimum Color is: "+str(minColor))
print("Text Box Color is: "+str(textBoxColor))
The info of interest is in center. That's where our colors are. After sorting this list and getting the minimum color value (that I'll use later to distinguish between a light and a dark theme) we can print the values. For the first test image, these values are:
Minimum Color is: 23
Text Box Color is: 225
Alright, so far so good. We have the color of the text box. Let's use that and flood-fill the entire ROI at position (x=0, y=0):
# Apply flood-fill at seed point (0,0):
cv2.floodFill(smsROI, mask=None, seedPoint=(0, 0), newVal=textBoxColor)
The result is this:
Very nice. Let's apply Otsu's thresholding on this bad boy:
# Threshold via Otsu:
_, binaryImage = cv2.threshold(smsROI, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
Now, here comes the minColor part. If you process a dark theme screenshot and threshold it, you will get white text on a black background. If you process a light theme screenshot, you will get black text on a white background. We want to always produce the same output no matter the input: white text on a black background. So let's check the min color: if it equals 0 (black), you just received a dark theme screenshot and don't need to invert the image. Otherwise, invert the image:
# Process "Dark Theme / Light Theme":
if minColor != 0:
# Invert image if is not already inverted:
binaryImage = 255 - binaryImage
cv2.imshow("binaryImage", binaryImage)
cv2.waitKey(0)
For our first test image, the result is:
Notice the little bits of small noise. Let's apply an area filter (function defined at the end of the post) to get rid of pixels below a certain area threshold:
# Run a minimum area filter:
minArea = 10
binaryImage = areaFilter(minArea, binaryImage)
This is the filtered image:
Very nice. Lastly, I write this image and use pyocr to get the text as a string:
cv2.imwrite(path + "ocrText.png", binaryImage)
txt = tool.image_to_string(
    Image.open(path + "ocrText.png"),
    lang=lang,
    builder=pyocr.builders.TextBuilder()
)
print("Image text is: "+txt)
Which results in:
Image text is: 301248 is your Amazon
verification code
If you test the second image you get the same exact result. This is the definition and implementation of the areaFilter function:
def areaFilter(minArea, inputImage):
    # Perform an area filter on the binary blobs:
    componentsNumber, labeledImage, componentStats, componentCentroids = \
        cv2.connectedComponentsWithStats(inputImage, connectivity=4)

    # Get the indices/labels of the remaining components based on the area stat
    # (skip the background component at index 0):
    remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]

    # Filter the labeled pixels based on the remaining labels and
    # assign a pixel intensity of 255 (uint8) to the remaining pixels:
    filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels), 255, 0).astype('uint8')

    return filteredImage
Links to all images at the bottom
I have drawn a line over an arrow which captures the angle of that arrow. I would like to then remove the arrow, keep only the line, and use cv2.minAreaRect to determine the angle. So far I've got everything to work except removing the original arrow, which results in an incorrect angle generated by the cv2.minAreaRect bounding box.
Really, I just want the bold black line running through the arrow to use to measure the angle, not the arrow itself. If anyone has an idea to make this work, or knows a simpler way, please let me know. Thanks!
Code:
import numpy as np
import cv2
image = cv2.imread("templates/a_15.png")
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(image, 127, 255, 0)
contours, hierarchy = cv2.findContours(thresh, 1, 2)
cont = contours[0]
rows,cols = image.shape[:2]
[vx,vy,x,y] = cv2.fitLine(cont, cv2.DIST_L2,0,0.01,0.01)
leftish = int((-x*vy/vx) + y)
rightish = int(((cols-x)*vy/vx)+y)
line = cv2.line(image,(cols-1,rightish),(0,leftish),(0,255,0),10)
# thresholding
thresh = cv2.threshold(line, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
# compute rotated bounding box based on all pixel values > 0 and
# use coordinates to compute a rotated bounding box of those coordinates
coordinates = np.column_stack(np.where(thresh > 0))
w = coordinates[0]
h = coordinates[1]
# Compute minimum rotated angle that contains entire image.
# Return angle values in the range [-90, 0).
# As the rectangle rotates clockwise, angle values increase towards 0.
# Once 0 is reached, angle is set back to -90 degrees.
angle = cv2.minAreaRect(coordinates)[-1]
# for angles less than -45 degrees, add 90 degrees to angle to take the inverse.
if angle < -45:
    angle = -(90 + angle)
else:
    angle = -angle
# rotate image
(h, w) = image.shape[:2]
center = (w // 2, h // 2) # image center
RM = cv2.getRotationMatrix2D(center, angle, 1.0)
rotated = cv2.warpAffine(image, RM, (w, h),
                         flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)
# correction angle for validation
cv2.putText(rotated, "Angle {:.2f} degrees".format(angle),
            (10, 30), cv2.FONT_HERSHEY_DUPLEX, 0.9, (0, 255, 0), 2)
# output
print("[INFO] angle: {:.3f}".format(angle))
cv2.imshow("Line", line)
cv2.imshow("Input", image)
cv2.imshow("Rotated", rotated)
cv2.waitKey(0)
Images
original
current results
goal
Here's a possible solution. The main idea is to identify the "tip" and the "tail" of the arrow by approximating some key points. After you have identified both ends, you can draw a line joining them. It is also an advantage to know which of the endpoints is the tip, because that way you can measure the angle from a constant point.
There's more than one way to achieve this. I chose something that I have applied in the past: I will use this approach to identify the endpoints of the overall shape. My assumption is that the tip will yield more points than the tail. After that, I'll cluster all the endpoints into two groups: tip and tail. I can use K-Means for that, as it will return the mean centers for both clusters. After that, we have our tip and tail points, which can be joined easily with a line. These are the steps:
Convert the image to grayscale
Get the skeleton of the image, to normalize the shape to a width of 1 pixel
Apply the method described in the link to get the arrow's endpoints
Divide the endpoints in two clusters and use K-Means to get their centers
Join both endpoints with a line
Let's see the code:
# imports:
import cv2
import numpy as np
# image path
path = "D://opencvImages//"
fileName = "CoXeb.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Grayscale conversion:
grayscaleImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
grayscaleImage = 255 - grayscaleImage
# Extend the borders for the skeleton:
extendedImg = cv2.copyMakeBorder(grayscaleImage, 5, 5, 5, 5, cv2.BORDER_CONSTANT)
# Store a deep copy of the crop for results:
grayscaleImageCopy = cv2.cvtColor(extendedImg, cv2.COLOR_GRAY2BGR)
# Compute the skeleton:
skeleton = cv2.ximgproc.thinning(extendedImg, None, 1)
The first step is to get the skeleton of the arrow. As I said, this step is needed prior to the convolution-based method that identifies the endpoints of a shape. Computing the skeleton normalizes the shape to a one pixel width. However, sometimes, if the shape is too close to the "canvas" borders, the skeleton could show some artifacts. This is avoided with a border extension. The skeleton of the arrow is this:
Check that image out. If we identify the endpoints, the tip will exhibit at least 3 points, while the tail at least 1. That's handy - the tip will always have more points than the tail. If only we could detect those points... Luckily, we can:
# Threshold the image so that skeleton (white) pixels get a value of 10
# and background (black) pixels a value of 0:
_, binaryImage = cv2.threshold(skeleton, 128, 10, cv2.THRESH_BINARY)
# Set the end-points kernel:
h = np.array([[1, 1, 1],
              [1, 10, 1],
              [1, 1, 1]])
# Convolve the image with the kernel:
imgFiltered = cv2.filter2D(binaryImage, -1, h)
# Extract only the end-points pixels, those with
# an intensity value of 110:
binaryImage = np.where(imgFiltered == 110, 255, 0)
# The above operation converted the image to 32-bit float,
# convert back to 8-bit uint
binaryImage = binaryImage.astype(np.uint8)
This endpoint-detecting method convolves the skeleton with a special kernel: a skeleton pixel with exactly one neighbor - an endpoint - produces a value of 110 in the filtered image. After thresholding this intermediate result, we get this image, which represents the arrow's endpoints:
Nice. As you see, we can group the points into two clusters and get their cluster centers. Sounds like a job for K-Means, because that's exactly what it does. We first need to prepare our data, though, because K-Means operates on float arrays of a defined shape:
# Find the X, Y location of all the end-points
# pixels:
Y, X = binaryImage.nonzero()
# Reshape the arrays for K-means
Y = Y.reshape(-1,1)
X = X.reshape(-1,1)
Z = np.hstack((X, Y))
# K-means operates on 32-bit float data:
floatPoints = np.float32(Z)
# Set the convergence criteria and call K-means:
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
ret, label, center = cv2.kmeans(floatPoints, 2, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# Set the cluster count, find the points belonging
# to cluster 0 and cluster 1:
cluster1Count = np.count_nonzero(label)
cluster0Count = np.shape(label)[0] - cluster1Count
print("Elements of Cluster 0: "+str(cluster0Count))
print("Elements of Cluster 1: " + str(cluster1Count))
The last two lines print the number of endpoints assigned to Cluster 0 and Cluster 1, respectively. That outputs this:
Elements of Cluster 0: 3
Elements of Cluster 1: 2
Just as expected - well, kinda. It seems that Cluster 0 is the tip and Cluster 1 the tail! But the tail actually got 2 points. If you look at the image of the skeleton closely, you'll see there's a small bifurcation at the tail. That's why we, in reality, got two points instead of just one. Alright, let's get the center points and draw them on the original input:
# Look for the cluster of max number of points
# That cluster will be the tip of the arrow:
maxCluster = 0
if cluster1Count > cluster0Count:
    maxCluster = 1

# Check out the centers of each cluster:
matRows, matCols = center.shape

# Store the ordered end-points here:
orderedPoints = [None] * 2

# Let's identify and draw the two end-points
# of the arrow:
for b in range(matRows):
    # Get cluster center:
    pointX = int(center[b][0])
    pointY = int(center[b][1])
    # Get the "tip"
    if b == maxCluster:
        color = (0, 0, 255)
        orderedPoints[0] = (pointX, pointY)
    # Get the "tail"
    else:
        color = (255, 0, 0)
        orderedPoints[1] = (pointX, pointY)
    # Draw it:
    cv2.circle(grayscaleImageCopy, (pointX, pointY), 3, color, -1)

cv2.imshow("End-Points", grayscaleImageCopy)
cv2.waitKey(0)
This is the resulting image:
The tip always gets drawn in red while the tail is drawn in blue. Very cool, let's store these points in the orderedPoints list and draw the final line in a new "canvas" with the same dimensions as the original image:
# Store the tip and tail points:
p0x = orderedPoints[1][0]
p0y = orderedPoints[1][1]
p1x = orderedPoints[0][0]
p1y = orderedPoints[0][1]
# Create a new "canvas" (image) using the input dimensions:
imageHeight, imageWidth = binaryImage.shape[:2]
newImage = np.zeros((imageHeight, imageWidth), np.uint8)
newImage = 255 - newImage
# Draw a line using the detected points:
(x1, y1) = orderedPoints[0]
(x2, y2) = orderedPoints[1]
lineColor = (0, 0, 0)
cv2.line(newImage, (x1, y1), (x2, y2), lineColor, thickness=2)
cv2.imshow("Detected Line", newImage)
cv2.waitKey(0)
The line overlaid on the original image and the new image containing only the line:
It sounds like you want to measure the angle of the line, but because you drew that line onto the original image, you now have to filter the original image back out to get an accurate measurement of the line... a line whose endpoint coordinates you already know?
I guess:
make a better filter?
draw the line in a blank image and detect angle there?
determine the angle from the known coordinates? (a sketch of this follows the snippet below)
Since you were asking for just a line, I tried that...just made a blank image, drew your detected line on it and then used that downstream...
blankIm = np.ones((height, width, channels), dtype=np.uint8)
blankIm.fill(255)
line = cv2.line(blankIm,(cols-1,rightish),(0,leftish),(0,255,0),10)
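For the third suggestion, a small sketch: the endpoints of the fitted line, (0, leftish) and (cols - 1, rightish), are already known, so the angle can be computed directly with arctan2 and no further image processing. Note that image y coordinates grow downward, hence the negation to match the usual mathematical convention.

# Sketch: angle straight from the known endpoints of the fitted line.
angle_from_points = -np.degrees(np.arctan2(rightish - leftish, cols - 1))
print("[INFO] angle from endpoints: {:.3f}".format(angle_from_points))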
I am building a project on Detectron2 with some mushrooms as a topic. The prediction works OK-well and I'm now trying to generate COCO-like annotations of the images with the predicted region (all XY coordinates of the region). For this, I need to do two things:
Retrieve the XY coordinates of the predicted shape/region and "downscale it" to only save the main edges (in order to avoid saving too many data points)
Plot the "saved" points back onto the main image for the user to judge if enough points are being saved
Unfortunately I'm failing at both. On the first point, I (think) I have it as a binary numpy object, but I'm surprised by its size and I can't manage to transform it into sets of XY coordinates.
On the second point, I am getting an error that I can't figure out how to debug:
/usr/local/lib/python3.6/dist-packages/google/colab/patches/__init__.py in cv2_imshow(a)
20 image.
21 """
---> 22 a = a.clip(0, 255).astype('uint8')
23 # cv2 stores colors as BGR; convert to RGB
24 if a.ndim == 3:
AttributeError: 'cv2.UMat' object has no attribute 'clip'
The portion of the code I'm using is here:
from detectron2.utils.visualizer import ColorMode
## Predicts some random image
dataset_dicts = get_all_mushroom_dicts(mushroom_categories, "mushroom_dataset_small/val")
d = random.sample(dataset_dicts, 1)
im = cv2.imread(d["file_name"])
mushroom_outputs = mushroom_predictor(im)
v = Visualizer(im[:, :, ::-1],
               metadata=mushroom_metadata,
               scale=0.8,
               instance_mode=ColorMode.IMAGE_BW  # remove the colors of unsegmented pixels
)
instances = mushroom_outputs["instances"].to("cpu")
mush_out = v.draw_instance_predictions(instances)
image = mush_out.get_image()[:, :, ::-1]
masks = np.asarray(instances.pred_masks)
print ("NP array shape", masks.shape)
print("Image array shape", image.shape)
print("Type before", type(image))
cv2_imshow(image) ## works fine
## ?? How to get the coordinates of the boundaries? And then take only some of them
## ??
## Assuming that (128, 128) is one of these coordinates for now
image2 = cv2.circle(image, (128, 128), 10, (255, 0, 0), 20)
print("Type after", type(image2))
cv2_imshow(image2) ## crashes
FYI I have also tried to find and draw the contour but this doesn't seem to work (see my post here https://github.com/facebookresearch/detectron2/issues/1702#event-3501434732 ).
Do you have any clue?
Thanks!
I managed to find a solution that works but part of it is not elegant:
Retrieve the XY coordinates:
masks = np.asarray(instances.pred_masks)
cnt, heirarchy = cv2.findContours(masks[0].astype("uint8"), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
border = cnt[0]
gap = 20  # take only every 20th point
for i in list(range(border.shape[0]))[0::gap]:
    y = int(border[i][0][1]*0.8)
    x = int(border[i][0][0]*0.8)
    print("New XY coordinates", (x, y))
Draw on the picture:
image_old = mush_out.get_image()[:, :, ::-1]
cv2.imwrite("img.png", image_old)
image = cv2.imread("img.png",cv2.IMREAD_COLOR) # Still no clue why this is needed but doesnt work if you dont save and re-read...
radius = 3
color = (255, 0, 0)
cv2.circle(image, (x, y), radius, color, -1) # my previous code was wrong; cv2.circle returns void and edits directly the image
cv2_imshow(image)
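For what it's worth, the save-and-reload step is most likely working around the fact that get_image()[:, :, ::-1] returns a non-contiguous view, which OpenCV's drawing functions cannot modify in place (that also explains the cv2.UMat result in the original error). A contiguous copy appears to achieve the same thing - a sketch, not verified against Detectron2 itself:

# Sketch: take a contiguous copy instead of saving and re-reading the image.
image = np.ascontiguousarray(mush_out.get_image()[:, :, ::-1])
cv2.circle(image, (x, y), radius, color, -1)
cv2_imshow(image)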
My idea is to use the multiple bounding box coordinates of the abnormal regions for a given image and crop these regions to save them to a separate folder. I have written the code shown below to crop these multiple bounding box regions for a single image; however, I also get the drawn bounding box in the crops, which I have to get rid of.
import pandas as pd
import cv2
import numpy as np
df = pd.read_csv('excel1.csv')
image = cv2.imread('image2.png')
im_name = 'image2.png'
for i in range(len(df)):
    name = df.loc[i]['filename']
    if name == im_name:
        start_point = (df.loc[i]['x'], df.loc[i]['y'])
        end_point = (df.loc[i]['x']+df.loc[i]['width'], df.loc[i]['y']+df.loc[i]['height'])
        color = (128, 0, 0)
        thickness = 2
        image = cv2.rectangle(image, start_point, end_point, color, thickness)
        crop = image[df.loc[i]['y']:df.loc[i]['y']+df.loc[i]['height'],
                     df.loc[i]['x']:df.loc[i]['x']+df.loc[i]['width']]
        cv2.imwrite("cropped/crop_{0}.png".format(i), crop)

cv2.imwrite('bb.png', image)
Use numpy slicing in the loop and then write that crop with Python/OpenCV imwrite(), also inside the loop, with a different name for each iteration of the loop:
crop = image[ystart:ystop, xstart:xstop]
cv2.imwrite("crop_{0}.png".format(i), crop)
You can also add a different path for each image you want to write if you want them to go to different folders.
For numpy slicing, see https://www.w3schools.com/python/numpy_array_slicing.asp
I found a solution. I removed cv2.rectangle because I wanted to store only the bounding box regions, so that the drawn bounding boxes do not appear in the saved crops.
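For reference, a trimmed sketch of the same loop with the cv2.rectangle call removed (same CSV columns assumed as above), so the saved crops come straight from the untouched image:

for i in range(len(df)):
    if df.loc[i]['filename'] == im_name:
        x, y = df.loc[i]['x'], df.loc[i]['y']
        w, h = df.loc[i]['width'], df.loc[i]['height']
        # Crop directly from the unmodified image; nothing is drawn on it,
        # so no box outlines appear in the saved regions.
        crop = image[y:y + h, x:x + w]
        cv2.imwrite("cropped/crop_{0}.png".format(i), crop)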