Rotating an image in OpenCV without the black box and without cropping - Python

So I looked here and there, and it seems like this is a common problem. Though I have a specific need that wasn't addressed in the existing threads.
The main idea of my school project is to simulate a tattoo on your skin. For that, OpenCV detects the arm thanks to the skin color.
Thanks to cv2.getRotationMatrix2D and cv2.warpAffine, I can rotate the tattoo with the arm.
#areaAngle is the inclination in degrees of the region of interest
N = cv2.getRotationMatrix2D((tatWidth/2,tatHeight/2),areaAngle,1)
#imgTatoo is the tattoo image
roTatoo=cv2.warpAffine(imgTatoo,N,(tatWidth,tatHeight))
But my problem is this one:
When the arm is straight, everything is fine (image).
But when I tilt the arm, a magnificent black box appears (image again).
One of the proposed solutions was to crop the image using the biggest rectangle in the area. The thing is, I want to keep the full tattoo and not just a cropped part.
Any idea how to do that?
Thanks
EDIT: I tried to resize the mask to match the diagonal height, but the problem is that, because of these lines of code:
tatoo=cv2.resize(imgTatoo,(tatWidth,tatHeight),interpolation=cv2.INTER_AREA)
mask2=cv2.resize(tatMask,(tatWidth,tatHeight),interpolation=cv2.INTER_AREA)
mask2inv=cv2.resize(invTatMask,(tatWidth,tatHeight),interpolation=cv2.INTER_AREA)
and, further down:
#create a ROI mask
roi = frame[fy1:fy2,fx1:fx2]
# print(roi.shape)
#merge the roi mask with the tatoo and the inverted tatoo masks
roi_bg = cv2.bitwise_and(roi,roi,mask = mask2inv)
roi_fg = cv2.bitwise_and(tatoo,tatoo,mask = mask2)
# print(roi_bg.shape,roi_fg.shape)
#merge the background and foreground ROI masks
dst = cv2.add(roi_fg,roi_bg)
if I try to resize the mask, I have to resize the tattoo image as well, since the arrays need to be the same size.
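A common way around this (just a sketch of the usual technique, not something from the original thread) is to enlarge the output canvas so the rotated image fits entirely and shift the rotation matrix's translation accordingly; imgTatoo and areaAngle are the names from the question.
import cv2
import numpy as np

def rotate_without_cropping(img, angle):
    # rotate around the image centre and grow the canvas so nothing is cut off
    (h, w) = img.shape[:2]
    center = (w / 2, h / 2)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)

    # new bounding dimensions, derived from the rotation matrix
    cos, sin = abs(M[0, 0]), abs(M[0, 1])
    new_w = int(h * sin + w * cos)
    new_h = int(h * cos + w * sin)

    # shift the translation so the image stays centred in the larger canvas
    M[0, 2] += (new_w / 2) - center[0]
    M[1, 2] += (new_h / 2) - center[1]
    return cv2.warpAffine(img, M, (new_w, new_h))

# e.g. roTatoo = rotate_without_cropping(imgTatoo, areaAngle)
Rotating the tattoo's mask with the exact same matrix and canvas size keeps the padded border out of the composited region, so the black box never reaches the frame.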

Related

Change background color for Thresholded image

I have been trying to write code to extract cracks from an image using thresholding. However, I wanted to keep the background black. What would be a good solution to keep the outer boundary visible and the background black? Attached below is the original image along with the threshold image and the code used to extract this image.
import cv2
import numpy as np
from matplotlib import pyplot as plt

#Read Image
img = cv2.imread('Original.png')
# Convert into gray scale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Image processing ( smoothing )
# Averaging
blur = cv2.blur(gray,(3,3))
ret,th1 = cv2.threshold(blur,145,255,cv2.THRESH_BINARY)
inverted = np.invert(th1)
plt.figure(figsize = (20,20))
plt.subplot(121),plt.imshow(img)
plt.title('Original'),plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(inverted,cmap='gray')
plt.title('Threshold'),plt.xticks([]), plt.yticks([])
Method 1
Assuming the circle in your images stays in one spot throughout your image set, you can manually create a black 'mask' image with a white hole in the middle, then overlay it on the final inverted image.
You can easily make the mask image using your favorite image editor's magic wand tool.
I made this [1] by also expanding the circle inwards by one pixel to take into account some of the pixels the magic wand tool couldn't catch.
You would then use the mask image like this:
mask = cv2.imread('/path/to/mask.png', cv2.IMREAD_GRAYSCALE)  # load as single channel so it can be used as a mask
masked = cv2.bitwise_and(inverted, inverted, mask=mask)
Method 2
If the circle does NOT stay in the same spot throughout your entire image set, you can try to make the mask from all the fully black pixels in your original image. This assumes that the 'sample' itself (the thing with the cracks) does not contain fully black pixels, although this will result in the text on the bottom left being left white.
# make all the non black pixels white
_,mask = cv2.threshold(gray,1,255,cv2.THRESH_BINARY)
[1] The original is not the same size as your inverted image and thus the mask I made won't actually fit; you're going to have to make it yourself.
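Putting Method 2 together with the question's own pipeline, a minimal sketch (the file name is the one assumed in the question):
import cv2
import numpy as np

img = cv2.imread('Original.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.blur(gray, (3, 3))
_, th1 = cv2.threshold(blur, 145, 255, cv2.THRESH_BINARY)
inverted = np.invert(th1)

# mask: every pixel that is not fully black in the original becomes white
_, mask = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)

# keep the cracks inside the sample, force everything outside it to black
masked = cv2.bitwise_and(inverted, inverted, mask=mask)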

I want to check if the product label is correct or has slipped - Python and OpenCV

I am doing a project that checks whether the labels on the ketchup bottles are outside the boundaries we want or whether they are correct. I am using Python and OpenCV.
My goal is to set boundaries and check if the label has exceeded those limits.
I want to add boundaries like this:
I mean, for example, in the red area between the green rectangle and the edge of the ketchup bottle, I'm aiming to check whether there are any pixels (which would mean the label has slipped). --> example areas
So far I have blurred the image and then found the edges with Canny edge detection.
Blurred image:
After finding edges:
After this point, I want to add a frame to the edge image and check whether any pixels fall outside that border, but I'm stuck here.
I'm open to suggestions on how to do this.
This is my code:
import cv2
image = cv2.imread('ketchup1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
canny = cv2.Canny(blur, 75, 225)
cv2.imshow("blurred image", blur)
cv2.imshow("canny image", canny)
cv2.waitKey(0)
cv2.destroyAllWindows()
Such inspections will always start with a global localization of the object, because the bottles will not always be exactly positioned. This can be done by template matching, or by detection of the external edges (in a horizontal strip).
First solution:
Take a sample image and define a binary mask such as the one below. Then during inspection, after registering the mask on the image, count the edge pixels inside the mask area.
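A minimal sketch of that counting step, assuming the mask has already been registered to the current frame, canny comes from the question's code, and the file name and tolerance are placeholders:
import cv2

# mask.png: white where label slippage would show up, black elsewhere
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)

# keep only edge pixels that fall inside the inspection area
edges_in_area = cv2.bitwise_and(canny, canny, mask=mask)
slip_pixels = cv2.countNonZero(edges_in_area)

if slip_pixels > 10:  # tolerance to be tuned on good/bad samples
    print("label appears to have slipped")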
Second solution:
Use template matching to locate just the label and compare its position to that of the bottle.
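For the second solution, a rough sketch with cv2.matchTemplate; the template file and the reference position are assumptions, and gray is the grayscale frame from the question's code:
import cv2

# a crop of a correctly placed label, taken from a reference image
label_template = cv2.imread('label_template.png', cv2.IMREAD_GRAYSCALE)
result = cv2.matchTemplate(gray, label_template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

# max_loc is the top-left corner of the best match; compare it with the
# expected label position (relative to the located bottle) to detect slippage
expected_x, expected_y = 120, 300  # hypothetical reference position
dx, dy = abs(max_loc[0] - expected_x), abs(max_loc[1] - expected_y)
print("label offset:", dx, dy, "match score:", max_val)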

Improving background extraction with shiny objects (OpenCV/Python)

I've been working on a 'fun' solution that combines multiple videos from a security camera into one video, the idea being to overlay foreground motion over many hours into a few seconds for a 'quick preview'. Thanks to Stephen and Bahramdum for getting me on the right path.
The project is open source, for anyone to see. So far, I've played with the following background extractions:
OpenCV BackgroundSubtraction using a variety of algorithms
Mask R-CNN
Yolov3/TinyYolo
Optical flow
(I haven't yet tried detection+centroid tracking, will do so next)
Based on my experiments so far, OpenCV's background extraction generally works the best, due to the fact that it extracts the foreground purely based on motion. Plus, it's very fast. True, it also extracts things like moving leaves etc., but we can work on removing those.
Here is an example of 3 hours of video blended into one short video.
https://youtu.be/C-mJfzvFTdg
My current challenge is that it is doing a bad job with shiny objects, like cars.
Here is an example:
Background subtraction consistently does a bad job with extracting polygons for shiny objects and findContours does no better either.
I've tried several options, but my current approach is documented here, the gist of which is:
1. Convert the frame to HSV
2. Remove intensity (I read this in another SO thread for shiny objects)
3. Apply background subtraction
4. Clean up outside noise with MORPH_OPEN
5. Blur the mask to hopefully connect nearby white blobs
6. Find contours on the new mask
7. Only keep contours of a certain minimum area
8. Create a new mask, where we draw only these contours with fill
9. Do a final dilation to connect close filled contours of the new mask
10. Use this new mask to extract the foreground from the frame and overlay it onto the current blended video
Would anyone have suggestions on how to improve extraction for reflective objects?
self.fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows=False,
history=self.history)
frame_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
frame_hsv[:,:,0] = 0 # see if removing intensity helps
# gray = cv2.cvtColor(frame_hsv, cv2.COLOR_BGR2GRAY)
# create initial background subtraction
frame_mask = self.fgbg.apply(frame_hsv)
# remove noise
frame_mask = cv2.morphologyEx(frame_mask, cv2.MORPH_OPEN, self.kernel_clean)
# blur to merge nearby masks, hopefully
frame_mask = cv2.medianBlur(frame_mask,15)
#frame_mask = cv2.GaussianBlur(frame_mask,(5,5),cv2.BORDER_DEFAULT)
#frame_mask = cv2.blur(frame_mask,(20,20))
h,w,_ = frame.shape
new_frame_mask = np.zeros((h,w),dtype=np.uint8)
copy_frame_mask = frame_mask.copy()
# find contours of mask
relevant = False
ctrs,_ = cv2.findContours(copy_frame_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rects = []
# only select relevant contours
for contour in ctrs:
    area = cv2.contourArea(contour)
    if area >= self.min_blend_area:
        x,y,w,h = cv2.boundingRect(contour)
        pts = Polygon([[x,y], [x+w,y], [x+w, y+h], [x,y+h]])
        if g.poly_mask is None or g.poly_mask.intersects(pts):
            relevant = True
            cv2.drawContours(new_frame_mask, [contour], -1, (255, 255, 255), -1)
            rects.append([x,y,w,h])
# do a dilation to again, combine the contours
frame_mask = cv2.dilate(new_frame_mask,self.kernel_fill,iterations = 5)
frame_mask = new_frame_mask
I found way too many variations in different conditions to get something predictable using OpenCV's background extraction.
So I switched to YOLOv3 for object identification (added it as a new option, actually). While TinyYOLO was pretty terrible, YOLOv3 seems to be adequate, albeit much slower.
Unfortunately, given that YOLOv3 only produces rectangles and not subject masks, I had to switch to cv2's addWeighted method to blend a new object on top of the blended frame.
Example: https://github.com/pliablepixels/zmMagik/blob/master/sample/1.gif
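A rough sketch of that addWeighted blending for one detected rectangle, assuming frame is the current video frame, blended is the accumulated output frame, and the box and weights are placeholders:
import cv2

x, y, w, h = 100, 50, 80, 120            # hypothetical YOLO detection box
roi_new = frame[y:y+h, x:x+w]            # object from the current frame
roi_blend = blended[y:y+h, x:x+w]        # same region in the accumulated frame

# mix the two regions; alpha controls how strongly the new object shows through
blended[y:y+h, x:x+w] = cv2.addWeighted(roi_new, 0.7, roi_blend, 0.3, 0)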

OpenCV: How can I get the eye color?

I am using dlib to get the eyes of a face. Below are some examples of the results.
I have tried several methods to accomplish the objective. For instance, I tried to detect the center of the eye based on this project; from that, it would be easy to detect the pupil and the iris, however, I did not achieve good results. I also have tried to use Hough Circles but in some cases the results are quite bad.
My best bet is to detect the pupil, which is the only part of the eye with a common color (black) for every eye. I would like to get some ideas to do so.
My first idea is to set a region (between 20 and 60 on the x axis), then, in grayscale, make the dark pixels (less than 25, for instance) black and the rest white. That would create a mask that can be blurred before using Hough Circles to detect the region of the pupil. Finally, I can set a radius for the iris.
Any idea would be appreciated.
Thanks.
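For reference, the idea described above could be sketched roughly like this; eye_roi (the eye crop from dlib), the thresholds and the Hough parameters are all assumptions:
import cv2

gray_eye = cv2.cvtColor(eye_roi, cv2.COLOR_BGR2GRAY)

# dark pixels (the pupil) become white in the mask, everything else black
_, pupil_mask = cv2.threshold(gray_eye, 25, 255, cv2.THRESH_BINARY_INV)
pupil_mask = cv2.medianBlur(pupil_mask, 5)

circles = cv2.HoughCircles(pupil_mask, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=50, param2=10, minRadius=3, maxRadius=30)
if circles is not None:
    x, y, r = circles[0][0]  # pupil centre and radius; the iris sits just outside r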
Actually, your idea of detecting the shape of the pupil is good, but your pictures are not good enough to do it directly. An easy way is to pre-process them to remove all useless data.
I did an example with one of your original pics to show you (in Gimp):
Go to grey scale
Do a high pass filter to remove all small color fluctuations (you have very distinct colors so it should enhance borders very well)
Link to example filtered pic
Apply a threshold on your picture to remove remaining fluctuations (you can calculate the reference threshold value by analyzing your grey scale image color histogram)
Link to example thresholded pic
After those three steps you should have enough data to run your shape detection.
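If you want to reproduce those three Gimp steps in OpenCV, a rough sketch (the high pass is approximated by subtracting a blurred copy, eye_img is one of the eye crops, and the threshold value is a placeholder to be read off the histogram):
import cv2

gray = cv2.cvtColor(eye_img, cv2.COLOR_BGR2GRAY)

# crude high pass: original minus a heavily blurred copy, re-centred around 128
low = cv2.GaussianBlur(gray, (21, 21), 0)
high_pass = cv2.addWeighted(gray, 1.0, low, -1.0, 128)

# threshold to keep only the strong structures
_, binary = cv2.threshold(high_pass, 140, 255, cv2.THRESH_BINARY)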
Most of the answers I have read till now say to use the Hough circle method to detect the iris region, but it doesn't really work on all images.
So my approach is pretty simple, involving the following steps:
Detect face from the image
Find eye region from the face
Get the RGB values just below the pupil region (thereby getting the iris region's RGB values)
Pass the obtained RGB values to the find_color function
NOTE: Pass a high-resolution image as the input for better results. If you pass low-resolution images such as 480x620 or 320x240, you might end up getting poor results.
Below is the code for the same
import cv2
import imutils
from imutils import face_utils
import dlib
import numpy as np
import webcolors
flag=0
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
img= cv2.imread('blue2.jpg')
img_rgb= cv2.cvtColor(img,cv2.COLOR_BGR2RGB) #convert to RGB
#cap = cv2.VideoCapture(0) #turns on the webcam
(left_Start, left_End) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
#points for left eye and right eye
(right_Start, right_End) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
def find_color(requested_colour): #finds the color name from RGB values
    min_colours = {}
    for name, key in webcolors.CSS3_HEX_TO_NAMES.items():
        r_c, g_c, b_c = webcolors.hex_to_rgb(name)
        rd = (r_c - requested_colour[0]) ** 2
        gd = (g_c - requested_colour[1]) ** 2
        bd = (b_c - requested_colour[2]) ** 2
        min_colours[(rd + gd + bd)] = key
    closest_name = min_colours[min(min_colours.keys())]
    return closest_name
#ret, frame=cap.read()
#frame = cv2.flip(frame, 1)
#cv2.imshow(winname='face',mat=frame)
gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
# detect dlib face rectangles in the grayscale frame
dlib_faces = detector(gray, 0)
for face in dlib_faces:
    eyes = [] # store 2 eyes
    # convert dlib rect to a bounding box
    (x,y,w,h) = face_utils.rect_to_bb(face)
    cv2.rectangle(img_rgb,(x,y),(x+w,y+h),(255,0,0),1) #draws blue box over face
    shape = predictor(gray, face)
    shape = face_utils.shape_to_np(shape)
    leftEye = shape[left_Start:left_End]
    # indexes for left eye key points
    rightEye = shape[right_Start:right_End]
    eyes.append(leftEye) # wrap in a list
    eyes.append(rightEye)
    for index, eye in enumerate(eyes):
        flag+=1
        left_side_eye = eye[0] # left edge of eye
        right_side_eye = eye[3] # right edge of eye
        top_side_eye = eye[1] # top side of eye
        bottom_side_eye = eye[4] # bottom side of eye
        # calculate height and width of dlib eye keypoints
        eye_width = right_side_eye[0] - left_side_eye[0]
        eye_height = bottom_side_eye[1] - top_side_eye[1]
        # create bounding box with buffer around keypoints
        eye_x1 = int(left_side_eye[0] - 0 * eye_width)
        eye_x2 = int(right_side_eye[0] + 0 * eye_width)
        eye_y1 = int(top_side_eye[1] - 1 * eye_height)
        eye_y2 = int(bottom_side_eye[1] + 0.75 * eye_height)
        # draw bounding box around eye roi
        #cv2.rectangle(img_rgb,(eye_x1, eye_y1), (eye_x2, eye_y2),(0,255,0),2)
        roi_eye = img_rgb[eye_y1:eye_y2 ,eye_x1:eye_x2] # desired EYE Region(RGB)
        if flag==1:
            break
x=roi_eye.shape
row=x[0]
col=x[1]
# this is the main part,
# where you pick RGB values from the area just below pupil
array1=roi_eye[row//2:(row//2)+1,int((col//3)+3):int((col//3))+6]
array1=array1[0][2]
array1=tuple(array1) #store it in a tuple and pass this tuple to the "find_color" function
print(find_color(array1))
cv2.imshow("frame",roi_eye)
cv2.waitKey(0)
cv2.destroyAllWindows()
Below are some examples.
An actress with blue eyes
Now this is the output of our code when the above image is given as the input: lightsteelblue
An actress with brown eyes
The output of our code when the above image is given as the input: saddlebrown
Mila Kunis (one brown eye and the other green)
The output of our code when the above image is given as the input: sienna(shade of brown)
An actress with grey eyes
The output of our code when the above image is given as the input: darkgrey
So, you can see how close the results are to the actual eye color. This works pretty well with high-resolution images as I already mentioned.
PS: Correct me if I am wrong; open to suggestions.

Processing different quality images with OpenCV

I am analyzing an image to find brown objects in it. I am thresholding the image and taking the darkest parts as brown cells. However, depending on the quality of the image, objects sometimes cannot be identified. Is there any solution for that in OpenCV with Python, such as pre-processing the grayscale image and defining what brown means for that particular image?
The code that I am using to find brown dots is as follows:
import cv2
import pymorph
from scipy import ndimage

def countBrownDots(imageFile):
    im = cv2.imread(imageFile)
    #changing color space
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    gray = increaseBrighntness(gray)  # the question's own helper (not shown)
    l1,thresh = cv2.threshold(gray,10,255,cv2.THRESH_BINARY_INV)
    thresh = ndimage.gaussian_filter(thresh, 16)
    l2,thresh = cv2.threshold(thresh,70,255,cv2.THRESH_BINARY)
    thresh = ndimage.gaussian_filter(thresh, 16)
    cv2.imshow("thresh22",thresh)
    rmax = pymorph.regmax(thresh)
    nim = pymorph.overlay(thresh, rmax)
    seeds,nr_nuclei = ndimage.label(rmax)
    cv2.imshow("original",im)
    cv2.imshow("browns",nim)
Here is an input image example:
Have a look at the image in HSV color space; here are the 3 planes stacked side by side:
Although people have suggested segmenting on the basis of hue, there is actually more discriminative information in the saturation and value planes. For this particular image, you would probably get a better result with the grayscale (i.e. the value plane) than with the hue. However, that is no reason to discard the color information.
As proof of concept (using Gimp) for color segmentation, I just randomly picked a brown spot and changed all colors with a color distance of less than 60 from that spot to green to get this:
If you play with the parameters a bit you will probably get what you want. Then write the code.
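As a sketch of that color-distance idea in Python (the file name, the picked pixel coordinates and the distance of 60 are all placeholders):
import cv2
import numpy as np

im = cv2.imread('browns.png')
ref = im[150, 200].astype(np.float32)   # hypothetical coordinates of a picked brown spot

# Euclidean distance of every pixel's BGR value from the reference colour
dist = np.linalg.norm(im.astype(np.float32) - ref, axis=2)
brown_mask = (dist < 60).astype(np.uint8) * 255

# visualize: turn the selected pixels green, as in the Gimp proof of concept
out = im.copy()
out[brown_mask > 0] = (0, 255, 0)
cv2.imwrite('browns_marked.png', out)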
I tried pre-processing mean shift filtering to posterize the image, but that didn't really help.
