How can I merge these two images with Python, NumPy and OpenCV?

I have two binary images. The first is like this:
and the second one is like this:
They don't have the same sized curve. I want to add the second one's two white zones, contained in its black zone, to the first one's black zone.
My code runs like this, but it gives a wrong answer:
To restate the question: I want to get the final image, which I have drawn in the picture:
How can I achieve this task?

Assuming img1 is your first array (the larger solid blob) and img2 is the second (the smaller blob with holes), you need a method to identify and remove the outer region of the second image. The flood fill algorithm is a good candidate. It is implemented in OpenCV as cv2.floodFill.
The easiest thing to do would be to fill the outer edge, then just add the results together:
import cv2
import numpy as np

mask = np.zeros((img2.shape[0] + 2, img2.shape[1] + 2), dtype=np.uint8)
cv2.floodFill(img2, mask, (0, 0), 0)
result = img1 + img2  # fine here because the white regions do not overlap
Here is a toy example that shows mini-images topologically equivalent to your originals:
img1 = np.full((9, 9), 255, dtype=np.uint8)
img1[1:-1, 1:-1] = 0
img2 = np.full((9, 9), 255, dtype=np.uint8)
img2[2:-2, 2:-2] = 0
img2[3, 3] = img2[5, 5] = 255
The images look like this:
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(img1)
ax2.imshow(img2)
plt.show()
After the flood fill, the images look like this:
Adding the resulting images together looks like this:
Keep in mind that floodFill operates in-place, so you may want to make a copy of img2 before going down this road.
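For example, a minimal sketch that flood-fills a copy, leaving the original img2 intact (np.maximum is used here instead of + to be safe against uint8 overflow):
img2_filled = img2.copy()  # work on a copy; floodFill modifies its input
mask = np.zeros((img2.shape[0] + 2, img2.shape[1] + 2), dtype=np.uint8)
cv2.floodFill(img2_filled, mask, (0, 0), 0)
result = np.maximum(img1, img2_filled)  # union of the white pixels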

I think you want this:
#!/usr/local/bin/python3
from PIL import Image, ImageDraw, ImageColor, ImageChops
# Load images
im1 = Image.open('im1.jpg')
im2 = Image.open('im2.jpg')
# Flood fill white edges of image 2 with black
seed = (0, 0)
black = ImageColor.getrgb("black")
ImageDraw.floodfill(im2, seed, black, thresh=127)
# Now select lighter pixel of image1 and image2 at each pixel location and save it
result = ImageChops.lighter(im1, im2)
result.save('result.png')
If you prefer OpenCV, it might look like this:
#!/usr/local/bin/python3
import cv2
import numpy as np
# Load images
im1 = cv2.imread('im1.jpg', cv2.IMREAD_GRAYSCALE)
im2 = cv2.imread('im2.jpg', cv2.IMREAD_GRAYSCALE)
# Threshold, because JPEG is dodgy!
ret, im1 = cv2.threshold(im1, 127, 255, cv2.THRESH_BINARY)
ret, im2 = cv2.threshold(im2, 127, 255, cv2.THRESH_BINARY)
# Flood fill white edges of image 2 with black
h, w = im2.shape[:2]
mask = np.zeros((h+2, w+2), np.uint8)
cv2.floodFill(im2, mask, (0,0), 0)
# Now select lighter of image1 and image2 and save it
result = np.maximum(im1, im2)
cv2.imwrite('result.png', result)

Related

Preprocess images using OpenCV for pytesseract OCR

I want to use OCR (pytesseract) to recognize the text located in images like these:
I have thousands of these arrows. Until now the procedure is as follows:
1. I first resize the image (for another process).
2. Then I crop the image to get rid of most of the arrow.
3. Next I draw a white rectangle as a frame to remove further noise, but still keep distance between the text and the image borders for better text recognition.
4. I resize the image again to ensure a height of capital letters of ~30 px (https://groups.google.com/forum/#!msg/tesseract-ocr/Wdh_JJwnw94/24JHDYQbBQAJ).
5. Finally I binarize the image with a threshold of 150.
Full code:
import cv2
image_file = '001.jpg'
# load the input image and grab the image dimensions
image = cv2.imread(image_file, cv2.IMREAD_GRAYSCALE)
(h_1, w_1) = image.shape[:2]
# resize the image and grab the new image dimensions
image = cv2.resize(image, (int(w_1*320/h_1), 320))
(h_1, w_1) = image.shape
# crop image
image_2 = image[70:h_1-70, 20:w_1-20]
# get image_2 height, width
(h_2, w_2) = image_2.shape
# draw white rectangle as a frame around the number -> remove noise
cv2.rectangle(image_2, (0, 0), (w_2, h_2), (255, 255, 255), 40)
# resize image, that capital letters are ~ 30 px in height
image_2 = cv2.resize(image_2, (int(w_2*50/h_2), 50))
# image binarization
ret, image_2 = cv2.threshold(image_2, 150, 255, cv2.THRESH_BINARY)
# save image to file
cv2.imwrite('processed_' + image_file, image_2)
# tesseract part can be commented out
import pytesseract
config_7 = ("-c tessedit_char_whitelist=0123456789AB --oem 1 --psm 7")
text = pytesseract.image_to_string(image_2, config=config_7)
print("OCR TEXT: " + "{}\n".format(text))
The problem is that the text located in the arrow is never centered. Sometimes I remove part of the text with the method described above (e.g. in image 50A).
Is there a method in image processing to get rid of the arrow in a more elegant way? For instance using contour detection and deletion? I am more interested in the OpenCV part than the tesseract part to recognize the text.
Any help is appreciated.
If you look at the pictures you will see that there is a white arrow in the image, which is also the biggest contour (especially if you draw a black border on the image). If you make a blank mask and draw the arrow (the biggest contour on the image), then erode it a little, you can perform a per-element bitwise conjunction of the actual image and the eroded mask. If this is not clear, look at the code and comments below; it is actually pretty simple.
# imports
import cv2
import numpy as np
img = cv2.imread("number.png") # read image
# you can resize the image here if you like - it should still work for both sizes
h, w = img.shape[:2] # get the actual images height and width
img = cv2.resize(img, (int(w*320/h), 320))
h, w = img.shape[:2]
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # transform to grayscale
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1] # perform OTSU threshold
cv2.rectangle(thresh, (0, 0), (w, h), (0, 0, 0), 2)
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0] # search for contours
max_cnt = max(contours, key=cv2.contourArea) # select biggest one
mask = np.zeros((h, w), dtype=np.uint8) # create a black mask
cv2.drawContours(mask, [max_cnt], -1, (255, 255, 255), -1) # draw biggest contour on the mask
kernel = np.ones((15, 15), dtype=np.uint8) # make a kernel with appropriate values - in both cases (resized and original) 15 is ok
erosion = cv2.erode(mask, kernel, iterations=1) # erode the mask with given kernel
reverse = cv2.bitwise_not(img.copy()) # invert the image - 0 becomes 255 and 255 becomes 0
img = cv2.bitwise_and(reverse, reverse, mask=erosion) # per-element bitwise conjunction of the inverted image and the eroded mask
img = cv2.bitwise_not(img) # invert the image again
# save image to file and display
cv2.imwrite("res.png", img)
cv2.imshow("img", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
You can try a simple Python script:
import cv2
import numpy as np
img = cv2.imread('mmubS.png', cv2.IMREAD_GRAYSCALE)
thresh = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY_INV)[1]
im_flood_fill = thresh.copy()
h, w = thresh.shape[:2]
im_flood_fill = cv2.rectangle(im_flood_fill, (0, 0), (w - 1, h - 1), 255, 2)
mask = np.zeros((h + 2, w + 2), np.uint8)
cv2.floodFill(im_flood_fill, mask, (0, 0), 0)
im_flood_fill = cv2.bitwise_not(im_flood_fill)
cv2.imshow('clear text', im_flood_fill)
cv2.imwrite('text.png', im_flood_fill)
Result:

How to get the RGB values of two separate lines in an image into two variables using OpenCV and Python

I have detected two lines in an image using cv2. Now I want to get the RGB values of both lines in separate variables, like left_line_variable = ['rgb values'], right_line_rgb_values = ['rgb values'].
Here is my code:
import cv2
import numpy as np
image = cv2.imread('tape.png')
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Apply adaptive threshold (the adaptive method comes before the threshold type in the signature)
image_thr = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 81, 2)
# Apply morphological opening with a vertical line kernel (any non-zero value counts)
kernel = np.ones((image.shape[0], 1), dtype=np.uint8) * 255
image_mop = cv2.morphologyEx(image_thr, cv2.MORPH_OPEN, kernel)
color_detected_img = cv2.bitwise_and(image, image, mask=image_mop)
cv2.imshow('image', color_detected_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
This is the image from which I want to get both line's RGB values in two variables as described above:
Maybe it is not the most optimal way, but it is not hard to do. As I said in my comments, you can label the image to segment the lines, then get the mean of the RGB values within each label and the average position to know which one is left and which is right. Here is a small script to demonstrate what I am saying. The last part is just to show the results.
import cv2
import numpy as np
# load img and get the greyscale
img = cv2.imread("x.png")
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# label the image
ret, thres = cv2.threshold(grey, 1, 255, cv2.THRESH_BINARY)
labelAmount, labels = cv2.connectedComponents(thres)
# get the mean of the color and position
values = []
# first label (0) is background
for i in range(1, labelAmount):
    mask = np.zeros(labels.shape, dtype=np.uint8)
    mask[labels == i] = 255
    mean = cv2.mean(img, mask)[:-1]
    meanPos = np.mean(cv2.findNonZero(mask), axis=0)[0]
    values.append((mean, meanPos))
# sort them by x value (left to right)
values = sorted(values, key = lambda v : v[1][0])
left_line_color = values[0][0]
right_line_color = values[1][0]
# just to show the results
left_only = np.zeros(img.shape, dtype=np.uint8)
right_only = np.zeros(img.shape, dtype=np.uint8)
left_only = cv2.line(left_only, (int(values[0][1][0]), 0), (int(values[0][1][0]), img.shape[0]), left_line_color, 5)
right_only = cv2.line(right_only, (int(values[1][1][0]), 0), (int(values[1][1][0]), img.shape[0]), right_line_color, 5)
cv2.imshow("left_line", left_only)
cv2.imshow("right_line", right_only)
cv2.imshow("original", img)
cv2.waitKey(0)

Feather cropped edges

I'm trying to crop an object from an image, and paste it on another image. Examining the method in this answer, I've successfully managed to do that. For example:
The code (show_mask_applied.py):
import sys
from pathlib import Path
from helpers_cv2 import *
import cv2
import numpy
img_path = Path(sys.argv[1])
img = cmyk_to_bgr(str(img_path))
threshed = threshold(img, 240, type=cv2.THRESH_BINARY_INV)
contours = find_contours(threshed)
mask = mask_from_contours(img, contours)
mask = dilate_mask(mask, 50)
crop = cv2.bitwise_or(img, img, mask=mask)
bg = cv2.imread("bg.jpg")
bg_mask = cv2.bitwise_not(mask)
bg_crop = cv2.bitwise_or(bg, bg, mask=bg_mask)
final = cv2.bitwise_or(crop, bg_crop)
cv2.imshow("debug", final)
cv2.waitKey(0)
cv2.destroyAllWindows()
helpers_cv2.py:
from pathlib import Path
import cv2
import numpy
from PIL import Image
from PIL import ImageCms
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
def cmyk_to_bgr(cmyk_img):
    img = Image.open(cmyk_img)
    if img.mode == "CMYK":
        img = ImageCms.profileToProfile(img, "Color Profiles\\USWebCoatedSWOP.icc", "Color Profiles\\sRGB_Color_Space_Profile.icm", outputMode="RGB")
    return cv2.cvtColor(numpy.array(img), cv2.COLOR_RGB2BGR)

def threshold(img, thresh=128, maxval=255, type=cv2.THRESH_BINARY):
    if len(img.shape) == 3:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    threshed = cv2.threshold(img, thresh, maxval, type)[1]
    return threshed

def find_contours(img):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
    morphed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
    contours = cv2.findContours(morphed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours[-2]

def mask_from_contours(ref_img, contours):
    mask = numpy.zeros(ref_img.shape, numpy.uint8)
    mask = cv2.drawContours(mask, contours, -1, (255, 255, 255), -1)
    return cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)

def dilate_mask(mask, kernel_size=11):
    kernel = numpy.ones((kernel_size, kernel_size), numpy.uint8)
    dilated = cv2.dilate(mask, kernel, iterations=1)
    return dilated
Now, instead of sharp edges, I want to crop with feathered/smooth edges. For example (the right one; created in Photoshop):
How can I do that?
All images and code can be found at this repository.
You are using a mask to select parts of the overlay image. The mask currently looks like this:
Let's first add a Gaussian blur to this mask.
mask_blurred = cv2.GaussianBlur(mask,(99,99),0)
We get to this:
Now, the remaining task is to blend the images using the alpha value in the mask, rather than using it as a logical operator as you do currently.
mask_blurred_3chan = cv2.cvtColor(mask_blurred, cv2.COLOR_GRAY2BGR).astype('float') / 255.
img = img.astype('float') / 255.
bg = bg.astype('float') / 255.
out = bg * (1 - mask_blurred_3chan) + img * mask_blurred_3chan
The above snippet is quite simple. First, transform the mask into a 3 channel image (since we want to mask all the channels). Then transform the images to float, since the masking is done in floating point. The last line does the actual work: for each pixel, it blends the bg and img images according to the value in the mask. The result looks like this:
The amount of feathering is controlled by the size of the kernel in the Gaussian blur. Note that it has to be an odd number.
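For instance, a minimal sketch of deriving a valid kernel size from a desired feather radius (feather and make_odd are illustrative names, not from the answer above):
def make_odd(n):
    # GaussianBlur kernel sizes must be odd; bump even values up by one
    n = int(n)
    return n if n % 2 == 1 else n + 1

feather = 50                                      # example feather amount in pixels
k = make_odd(feather)                             # 50 -> 51
mask_blurred = cv2.GaussianBlur(mask, (k, k), 0)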
After this, out (the final image) is still in floating point. It can be converted back to int using:
out = (out * 255).astype('uint8')
While Paul92's answer is more than enough, I wanted to post my code anyway for any future visitor.
I'm doing this cropping to get rid of white background in some product photos. So, the main goal is to get rid of the whites while keeping the product intact. Most of the product photos have shadows on the ground. They are either the ground itself (faded), or the product's shadow, or both.
While the object detection works fine, these shadows also count as part of the object. Differentiating the shadows from the objects is not really necessary, but it results in some images that are not so desired. For example, examine the left and bottom sides of the image (shadow). The cut/crop is obviously visible, and doesn't look all that nice.
To get around this problem, I wanted to do non-rectangular crops. Using masks seems to do the job just fine. The next problem was to do the cropping with feathered/blurred edges so that I can get rid of these visible shadow cuts. With the help of Paul92, I've managed to do that. Example output (notice the missing shadow cuts, the edges are softer):
Operations on the image(s):
The code (show_mask_feathered.py, helpers_cv2.py)
import sys
from pathlib import Path
import cv2
import numpy
from helpers_cv2 import *
img_path = Path(sys.argv[1])
img = cmyk_to_bgr(str(img_path))
threshed = threshold(img, 240, type=cv2.THRESH_BINARY_INV)
contours = find_contours(threshed)
dilation_length = 51
blur_length = 51
mask = mask_from_contours(img, contours)
mask_dilated = dilate_mask(mask, dilation_length)
mask_smooth = smooth_mask(mask_dilated, odd(dilation_length * 1.5))
mask_blurred = cv2.GaussianBlur(mask_smooth, (blur_length, blur_length), 0)
mask_blurred = cv2.cvtColor(mask_blurred, cv2.COLOR_GRAY2BGR)
mask_threshed = threshold(mask_blurred, 1)
mask_contours = find_contours(mask_threshed)
mask_contour = max_contour(mask_contours)
x, y, w, h = cv2.boundingRect(mask_contour)
img_cropped = img[y:y+h, x:x+w]
mask_cropped = mask_blurred[y:y+h, x:x+w]
background = numpy.full(img_cropped.shape, (200,240,200), dtype=numpy.uint8)
output = alpha_blend(background, img_cropped, mask_cropped)
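Note that show_mask_feathered.py calls several helpers (odd, smooth_mask, max_contour, alpha_blend) that live in the repository's helpers_cv2.py but are not reproduced above. Here are minimal sketches of what they could look like, inferred only from how they are called; the actual implementations in the repository may differ:
def odd(num):
    # round to an integer and make it odd, since kernel sizes must be odd
    num = int(num)
    return num if num % 2 == 1 else num + 1

def max_contour(contours):
    # the contour enclosing the largest area
    return max(contours, key=cv2.contourArea)

def smooth_mask(mask, kernel_size):
    # soften jagged mask edges with a morphological close followed by an open
    kernel = numpy.ones((kernel_size, kernel_size), numpy.uint8)
    closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)

def alpha_blend(bg, fg, mask):
    # per-pixel blend of fg over bg, using the (3-channel) mask as alpha,
    # in the spirit of Paul92's answer above
    alpha = mask.astype(float) / 255.0
    blended = bg.astype(float) * (1.0 - alpha) + fg.astype(float) * alpha
    return blended.astype(numpy.uint8)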

Get pixel locations

I have a question about Python and OpenCV. I would like to change the part of the picture which is black to some other color (no matter what). After changing it, I would like to get the pixel locations of the 8 points marked with red circles. How to do it?
https://imgur.com/2E1Wwqg
import cv2
import numpy as np
img = cv2.imread("image.jpg");
img[np.where((img == [0,0,0]).all(axis = 2))] = [50,150,166]
cv2.imwrite('output.png', img)
cv2.imshow("shapes", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
You can do that using OpenCV findContours() and minAreaRect() like this:
#!/usr/bin/env python3
import numpy as np
import cv2
# Load image
im = cv2.imread('start.png')
# Convert to grayscale
imgray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
# Threshold inverse so the black comes out white, because findContours() looks for white objects
ret,thresh = cv2.threshold(imgray,16,255,cv2.THRESH_BINARY_INV)
cv2.imwrite('thresh.png',thresh)
# Remove noise specks
thresh = cv2.medianBlur(thresh,5)
cv2.imwrite('thresh-m.png',thresh)
# Find contours, draw on image and save
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]  # [-2:] works on both OpenCV 3 (3 return values) and OpenCV 4 (2 return values)
cv2.drawContours(im, contours, -1, (0,255,0), 3)
# Show user what we found
i = 0
for cnt in contours:
    rect = cv2.minAreaRect(cnt)
    box = cv2.boxPoints(rect)
    box = np.intp(box)  # np.int0 was an alias for np.intp and was removed in NumPy 2.0
    cv2.drawContours(im, [box], 0, (255, 0, 0), 2)
    print('Contour:{}\n{}'.format(i, box))
    i = i + 1
cv2.imwrite('result.png',im)
The thresholded image looks like this:
And the result image look like this:
The program output is the 4 corner points of each of the 4 minimum-area rectangles, one rectangle per line (a sketch for reducing these to the 8 line endpoints follows after the output).
Contour:0
[[416 776]
[410 767]
[659 607]
[664 616]]
Contour:1
[[297 780]
[ 77 599]
[ 83 592]
[303 773]]
Contour:2
[[518 695]
[507 694]
[519 176]
[530 177]]
Contour:3
[[226 688]
[224 174]
[233 173]
[235 687]]
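If you need the 8 line endpoints rather than the 16 rectangle corners, here is a minimal sketch building on the box arrays printed above (line_endpoints is an illustrative helper, not part of the answer): the midpoints of the two short sides of each rotated rectangle are good endpoint estimates.
def line_endpoints(box):
    # box: 4x2 array from cv2.boxPoints(), corners in order around the rectangle
    sides = [(box[i], box[(i + 1) % 4]) for i in range(4)]
    sides.sort(key=lambda s: np.linalg.norm(s[0] - s[1]))  # two shortest sides first
    (a1, a2), (b1, b2) = sides[0], sides[1]
    return (a1 + a2) / 2, (b1 + b2) / 2                    # midpoints of the short sides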

OpenCV - Apply mask to a color image

How can I apply mask to a color image in latest python binding (cv2)? In previous python binding the simplest way was to use cv.Copy e.g.
cv.Copy(dst, src, mask)
But this function is not available in cv2 binding. Is there any workaround without using boilerplate code?
Here, you could use the cv2.bitwise_and function if you already have the mask image.
Check the code below:
import cv2

img = cv2.imread('lena.jpg')
mask = cv2.imread('mask.png', 0)
res = cv2.bitwise_and(img, img, mask=mask)
The output will be as follows for a lena image and a rectangular mask.
Well, here is a solution if you want the background to be something other than a solid black color. We only need to invert the mask and apply it to a background image of the same size, and then combine the background and foreground. A pro of this solution is that the background could be anything (even another image).
This example is modified from Hough Circle Transform. First image is the OpenCV logo, second the original mask, third the background + foreground combined.
# http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_houghcircles/py_houghcircles.html
import cv2
import numpy as np
# load the image
img = cv2.imread('E:\\FOTOS\\opencv\\opencv_logo.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# detect circles
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_RGB2GRAY), 5)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 20, param1=50, param2=50, minRadius=0, maxRadius=0)
circles = np.uint16(np.around(circles))
# draw mask
mask = np.full((img.shape[0], img.shape[1]), 0, dtype=np.uint8)  # mask is only one channel
for i in circles[0, :]:
    cv2.circle(mask, (i[0], i[1]), i[2], (255, 255, 255), -1)
# get first masked value (foreground)
fg = cv2.bitwise_or(img, img, mask=mask)
# get second masked value (background) mask must be inverted
mask = cv2.bitwise_not(mask)
background = np.full(img.shape, 255, dtype=np.uint8)
bk = cv2.bitwise_or(background, background, mask=mask)
# combine foreground+background
final = cv2.bitwise_or(fg, bk)
Note: it is better to use the OpenCV methods because they are optimized.
import cv2 as cv
im_color = cv.imread("lena.png", cv.IMREAD_COLOR)
im_gray = cv.cvtColor(im_color, cv.COLOR_BGR2GRAY)
At this point you have a color and a gray image. We are dealing with 8-bit, uint8 images here. That means the images can have pixel values in the range of [0, 255] and the values have to be integers.
Let's do a binary thresholding operation. It creates a black and white masked image. The black regions have value 0 and the white regions 255.
_, mask = cv.threshold(im_gray, thresh=180, maxval=255, type=cv.THRESH_BINARY)
im_thresh_gray = cv.bitwise_and(im_gray, mask)
The mask can be seen below on the left. The image on its right is the result of applying the bitwise_and operation between the gray image and the mask. What happened is that the spatial locations where the mask had pixel value zero (black) became zero in the result image, while the locations where the mask had pixel value 255 (white) retained their original gray value.
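For a strictly 0/255 mask, the same masking can be written in plain NumPy; this is an equivalent restatement, not code from the answer:
import numpy as np

# equivalent to cv.bitwise_and(im_gray, mask) when mask holds only 0 or 255:
# AND with 255 (all bits set) keeps the pixel, AND with 0 zeroes it
im_thresh_gray = np.where(mask == 255, im_gray, 0).astype(np.uint8)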
To apply this mask to our original color image, we need to convert the mask into a 3 channel image as the original color image is a 3 channel image.
mask3 = cv.cvtColor(mask, cv.COLOR_GRAY2BGR) # 3 channel mask
Then, we can apply this 3 channel mask to our color image using the same bitwise_and function.
im_thresh_color = cv.bitwise_and(im_color, mask3)
mask3 from the code is the image below on the left, and im_thresh_color is on its right.
You can plot the results and see for yourself.
cv.imshow("original image", im_color)
cv.imshow("binary mask", mask)
cv.imshow("3 channel mask", mask3)
cv.imshow("im_thresh_gray", im_thresh_gray)
cv.imshow("im_thresh_color", im_thresh_color)
cv.waitKey(0)
The original image is lenacolor.png that I found here.
The answer given by Abid Rahman K is not completely correct. I also tried it and found it very helpful, but I got stuck.
This is how I copy image with a given mask.
import numpy as np

x, y = np.where(mask != 0)
pts = zip(x, y)
# Assuming dst and src are of the same size
for pt in pts:
    dst[pt] = src[pt]
This is a bit slow but gives correct results.
EDIT:
Pythonic way.
idx = (mask!=0)
dst[idx] = src[idx]
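As a side note, recent OpenCV builds (4.x) also expose Mat::copyTo in the Python bindings, which behaves much like the old cv.Copy; a minimal sketch, assuming your OpenCV version provides it:
# copies src pixels into dst wherever mask is non-zero (OpenCV 4.x)
dst = cv2.copyTo(src, mask, dst)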
The other methods described assume a binary mask. If you want to use a real-valued single-channel grayscale image as a mask (e.g. from an alpha channel), you can expand it to three channels and then use it for interpolation:
assert len(mask.shape) == 2 and issubclass(mask.dtype.type, np.floating)
assert len(foreground_rgb.shape) == 3
assert len(background_rgb.shape) == 3
alpha3 = np.stack([mask]*3, axis=2)
blended = alpha3 * foreground_rgb + (1. - alpha3) * background_rgb
Note that mask needs to be in range 0..1 for the operation to succeed. It is also assumed that 1.0 encodes keeping the foreground only, while 0.0 means keeping only the background.
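If the mask arrives as an 8-bit alpha channel instead, here is a minimal sketch of normalizing it first (mask_uint8 is an illustrative name):
import numpy as np

# scale a 0..255 uint8 alpha mask into the required 0..1 float range
mask = mask_uint8.astype(np.float32) / 255.0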
If the mask may have the shape (h, w, 1), this helps:
alpha3 = np.squeeze(np.stack([np.atleast_3d(mask)]*3, axis=2))
Here np.atleast_3d(mask) makes the mask (h, w, 1) if it is (h, w) and np.squeeze(...) reshapes the result from (h, w, 3, 1) to (h, w, 3).
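Alternatively, NumPy broadcasting avoids the stacking entirely; this is an equivalent formulation, not code from the original answer:
# adding a trailing axis turns (h, w) into (h, w, 1), which broadcasts against (h, w, 3)
blended = mask[..., None] * foreground_rgb + (1. - mask[..., None]) * background_rgb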
