I am very new to OpenCV and Python. So, for my first project to recognize objects, I am using a small test file in Python. This is the test file:
import cv2
from matplotlib import pyplot as plt

# Opening image
img = cv2.imread("image.jpg")

# OpenCV opens images as BGR,
# but we want RGB. We'll
# also need a grayscale version
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Use minSize so we don't bother with
# extra-small dots that would look like STOP signs
stop_data = cv2.CascadeClassifier('cascade.xml')

found = stop_data.detectMultiScale(img_gray, minSize=(20, 20))

amount_found = len(found)

if amount_found != 0:
    # There may be more than one
    # sign in the image
    for (x, y, width, height) in found:
        # We draw a green rectangle around
        # every recognized sign
        cv2.rectangle(img_rgb, (x, y),
                      (x + width, y + height),
                      (0, 255, 0), 5)

# Creates the environment of
# the picture and shows it
plt.subplot(1, 1, 1)
plt.imshow(img_rgb)
plt.show()
The tutorial that I saw told me to use a premade cascade.xml file. However, I wanted to train my own cascade file to recognize a simple apple logo, which I cropped from a photo. I copy-pasted the same image multiple times into one folder titled p. For the negative images, I used a folder titled n. After this, I used the GUI cascade trainer and trained a cascade file. However, when I use the same image (uncropped) in the Python program, there is no output.
This is the image which I used to train:
This is the original image which I put in the Python script:
The original image is of size 422×612 and the cropped image is of size 285×354. I had set the sample size in the trainer to height=20 and width=15.
These are my settings:
Folder p:
Output of program:
Cascade GUI log (end):
NEG count : acceptanceRatio 0 : 0
Required leaf false alarm rate achieved. Branch training terminated.
Could someone tell me where I am going wrong? Please ask if I need to provide any more information. Is there some ratio relationship between the positive image and the actual image that I am missing?
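As a sanity check (a minimal sketch, using the same file name as in the script above): cv2.CascadeClassifier does not raise an error when the XML fails to load, so it is worth verifying explicitly.

import cv2

# Verify the trained cascade actually loaded; empty() returns True on failure
stop_data = cv2.CascadeClassifier('cascade.xml')
if stop_data.empty():
    print("cascade.xml could not be loaded - check the path and the trainer output")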
UPDATE: I added some more negative images and am now getting this output:
I have a script that loads an image and, using selectROI(), lets me select and crop a specific part of that image and get the contours of just that part alone. But how can I search whether there are any other contours in the original image like the one I selected and cropped? My goal is to teach a shape and verify whether that shape occurs in any other part of that image, or in any other image that I load afterwards, ideally with a certain tolerance of correspondence.
I could try something like object detection using a Haar cascade or YOLO, but I am positive that there is a way to do it without relying on heavyweight AI models, especially because I want to use it on static images, not on video. I say that because that is how it is done in industrial vision systems: you only need to load a single image and select the object that you want to detect so the contours can be drawn. When you load another image, the software looks for these contours up to a certain level of correspondence.
import cv2 as cv
import numpy as np

# Load image
img = cv.imread('C:/Users/ALEMAC/Downloads/geometricShapes.jpg')

# Select ROI
imgdraw = cv.selectROI(img)
cropimg = img[int(imgdraw[1]):int(imgdraw[1] + imgdraw[3]), int(imgdraw[0]):int(imgdraw[0] + imgdraw[2])]
# Display the cropped image on the screen
cv.imshow('Cropped_image', cropimg)

blank = np.zeros(cropimg.shape[:2], dtype='uint8')  # blank image the same size as the cropped region
gray = cv.cvtColor(cropimg, cv.COLOR_BGR2GRAY)
blur = cv.GaussianBlur(gray, (3, 3), cv.BORDER_DEFAULT)

# Find edges using the contours method
ret, thresh = cv.threshold(blur, 125, 255, cv.THRESH_BINARY)
# cv.imshow('Thresh', thresh)
contours, hierarchies = cv.findContours(thresh, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)
cv.drawContours(blank, contours, -1, (255, 255, 255), thickness=1)
cv.imshow('Contours', blank)

cv.waitKey(0)
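One classical, lightweight way to do this kind of search is contour matching via Hu moments with cv.matchShapes. A minimal sketch, reusing contours (from the cropped ROI) and img (the full image) from the code above; the 0.1 tolerance is an illustrative value to tune:

template = max(contours, key=cv.contourArea)  # largest contour in the crop

gray_full = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
_, thresh_full = cv.threshold(gray_full, 125, 255, cv.THRESH_BINARY)
candidates, _ = cv.findContours(thresh_full, cv.RETR_TREE, cv.CHAIN_APPROX_SIMPLE)

for cnt in candidates:
    # matchShapes compares Hu moments; a lower score means a closer shape,
    # and the measure is tolerant to scale and rotation
    score = cv.matchShapes(template, cnt, cv.CONTOURS_MATCH_I1, 0.0)
    if score < 0.1:
        cv.drawContours(img, [cnt], -1, (0, 255, 0), 2)

cv.imshow('Matches', img)
cv.waitKey(0)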
I have an image processing problem that I can't solve. I have a set of 375 images like the one below (1). I'm trying to remove the background, i.e. to do background subtraction (or "foreground extraction"), and keep only the waste on a plain background (black/white/...).
(1) Image example
I tried many things, including createBackgroundSubtractorMOG2 from OpenCV, or threshold. I also tried to remove the background pixel by pixel by subtracting it from the foreground because I have a set of 237 background images (2) (the carpet without the waste, but which is a little bit offset from the image with the objects). There are also variations in brightness on the background images.
(2) Example of a background image
Here is a code example that I was able to test and that gives me the results below (3) and (4). I use Python 3.8.3.
import cv2

# Function to black out the left/right sides of an image
def delete_side(img, x_left, x_right):
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if j <= x_left or j >= x_right:
                img[i, j] = (0, 0, 0)
    return img

# Initialize the background model
backSub = cv2.createBackgroundSubtractorMOG2(history=250, varThreshold=2, detectShadows=True)

# Read the frames and update the background model
# (FRAMES_FOLDER, RESULT_FOLDER and frames are defined elsewhere)
for frame in frames:
    if frame.endswith(".png"):
        filepath = FRAMES_FOLDER + '/' + frame
        img = cv2.imread(filepath)
        img_cut = delete_side(img, x_left=190, x_right=1280)
        gray = cv2.cvtColor(img_cut, cv2.COLOR_BGR2GRAY)
        mask = backSub.apply(gray)
        newimage = cv2.bitwise_or(img, img, mask=mask)
        img_blurred = cv2.GaussianBlur(newimage, (5, 5), 0)
        gray2 = cv2.cvtColor(img_blurred, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray2, 10, 255, cv2.THRESH_BINARY)
        final = cv2.bitwise_or(img, img, mask=binary)
        newpath = RESULT_FOLDER + '/' + frame
        cv2.imwrite(newpath, final)
I was inspired by many other cases found on Stack Overflow and elsewhere (example: removing pixels less than n size(noise) in an image - open CV python).
(3) The result obtained with the code above
(4) Result when increasing the varThreshold argument to 10
Unfortunately, there is still a lot of noise in the resulting pictures.
As a beginner in background subtraction, I don't have all the keys to an optimal solution. If someone has an idea for doing this task in a more efficient and cleaner way (Is there a special method for handling transparent objects? Can the noise on objects be removed more effectively? etc.), I'm interested :)
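For reference, a minimal sketch of the direct differencing I mentioned, assuming one background frame roughly aligned with the object frame (file names are illustrative):

import cv2

img = cv2.imread("frame_with_waste.png")
bg = cv2.imread("background_only.png")

# Per-pixel absolute difference, then threshold to get a foreground mask
diff = cv2.absdiff(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY),
                   cv2.cvtColor(bg, cv2.COLOR_BGR2GRAY))
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

# Morphological opening removes small speckles, closing fills small holes
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

result = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite("foreground.png", result)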
Thanks
Thanks for your answers. For information, I simply changed methodology and used a segmentation model (U-Net) with 2 labels (foreground, background) to identify the background. It works quite well.
I work at a studio that does school photos and we are trying to make a script to eliminate the job of cropping each photo to a template. The photos we work with are fairly uniform, but they vary a bit in resolution and head position. I took up the mantle of trying to write the script with my fairly limited Python knowledge and, through a lot of trial and error and online resources, I think I have gotten most of the way there.
At the moment I am trying to figure out the best way to have the image crop from the NumPy array with the head where I want, and I just can't find a good flexible solution. The head needs to be positioned slightly differently for pose 1 and pose 2, so it needs to be easy to change on the fly (I will probably implement some sort of simple GUI to input things like that, but for now I can just change the code).
I also need to be able to change the output resolution of the photos so that they are all uniform (2000x2500). Anyone have any ideas?
At the moment this is my current code; it just saves the detected face square:
import cv2
import os.path
import glob

# Cascade path
cascPath = 'haarcascade_frontalface_default.xml'

# Create the Haar cascade
faceCascade = cv2.CascadeClassifier(cascPath)

# Check for the output folder and create it if it's not there
if not os.path.exists('output'):
    os.makedirs('output')

# Read images
images = glob.glob('*.jpg')

for c, i in enumerate(images):
    image = cv2.imread(i, 1)

    # Convert to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Find face(s) using the cascade
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,    # scale step between detection passes
        minNeighbors=5,     # how many neighbouring detections a face needs to be valid
        minSize=(500, 500)  # minimum face size in pixels
    )

    # Output number of faces found in the image
    print('Found {0} faces!'.format(len(faces)))

    # Crop each detected face
    for (x, y, w, h) in faces:
        imgCrop = image[y:y+h, x:x+w]

    if len(faces) > 0:
        # Save the crop to the output folder under the original name
        cv2.imwrite('output/' + i, imgCrop)
I can crop using it like this:
# Crop padding
left = 300
right = 300
top = 400
bottom = 1000

for (x, y, w, h) in faces:
    imgCrop = image[y-top:y+h+bottom, x-left:x+w+right]
but that outputs fairly random resolutions that change based on the image resolution.
TL;DR
To set a new output resolution, you can use cv2.resize. Resizing loses pixel information, so choose an interpolation method.
Also mind the channel order: face_recognition loads images as RGB, while cv2.imwrite expects BGR, so swap the channels before saving.
crop = cv2.resize(src=crop, dsize=(2000, 2500), interpolation=cv2.INTER_LANCZOS4)
crop = cv2.cvtColor(crop, cv2.COLOR_RGB2BGR)  # face_recognition gives RGB; cv2.imwrite expects BGR
cv2.imwrite("image-1.png", crop)
Suggestion:
One approach is to use Python's face_recognition library.
The approach uses two sample images for training.
It then predicts the faces in the next image based on the training images.
For instance, the following are the training images:
We want to predict the faces in the image below:
We get the facial encodings of the training images and apply them to the next image:
import face_recognition
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image, ImageDraw

# Load a sample picture and learn how to recognize it.
first_image = face_recognition.load_image_file("images/ex.jpg")
first_face_encoding = face_recognition.face_encodings(first_image)[0]

# Load a second sample picture and learn how to recognize it.
second_image = face_recognition.load_image_file("images/index.jpg")
sec_face_encoding = face_recognition.face_encodings(second_image)[0]

# Create arrays of known face encodings and their names
known_face_encodings = [
    first_face_encoding,
    sec_face_encoding
]
print('Learned encoding for', len(known_face_encodings), 'images.')

# Load an image with an unknown face
unknown_image = face_recognition.load_image_file("images/babes.jpg")

# Find all the faces and face encodings in the unknown image
face_locations = face_recognition.face_locations(unknown_image)
face_encodings = face_recognition.face_encodings(unknown_image, face_locations)

# Convert the image to a PIL-format image so that we can draw on top of it with the Pillow library
# See http://pillow.readthedocs.io/ for more about PIL/Pillow
pil_image = Image.fromarray(unknown_image)

# Create a Pillow ImageDraw Draw instance to draw with
draw = ImageDraw.Draw(pil_image)

# Loop through each face found in the unknown image
for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
    matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
    face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
    best_match_index = np.argmin(face_distances)
    draw.rectangle(((left, top), (right, bottom)), outline=(0, 0, 255), width=5)

# Remove the drawing library from memory as per the Pillow docs
del draw

# Display the resulting image
plt.imshow(pil_image)
plt.show()
The output will be:
The above is my suggestion. When you resize the current image to a new resolution, there will be some pixel loss. Therefore you need to use an interpolation method.
For instance: after finding the face locations, select the coordinates in the original image.
# Add after draw.rectangle function.
crop = unknown_image[top:bottom, left:right]
Set the new resolution to 2000 x 2500 and interpolate with cv2.INTER_LANCZOS4.
Possible question: why cv2.INTER_LANCZOS4?
Of course, you can select whatever you like, but in this post cv2.INTER_LANCZOS4 was suggested.
crop = cv2.resize(src=crop, dsize=(2000, 2500), interpolation=cv2.INTER_LANCZOS4)
Save the image
crop = cv2.cvtColor(crop, cv2.COLOR_RGB2BGR)  # face_recognition gives RGB; cv2.imwrite expects BGR
cv2.imwrite("image-1.png", crop)
The outputs are around 4.3 MB, therefore I can't display them here.
From the final result, we can clearly see and identify the faces. The library precisely finds the faces in the image.
Here is what you can do:
Either use training images from your own set, or use the example above.
Apply the face_recognition functions to each image using the found face locations, and save the results to a directory, as in the sketch below.
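A minimal batch sketch of that idea (the input/output folder names and the 2000 x 2500 target are illustrative assumptions):

import glob
import os
import cv2
import face_recognition

for path in glob.glob("input/*.jpg"):
    image = face_recognition.load_image_file(path)  # loaded as RGB
    for (top, right, bottom, left) in face_recognition.face_locations(image):
        crop = image[top:bottom, left:right]
        crop = cv2.resize(crop, (2000, 2500), interpolation=cv2.INTER_LANCZOS4)
        crop = cv2.cvtColor(crop, cv2.COLOR_RGB2BGR)  # cv2.imwrite expects BGR
        cv2.imwrite(os.path.join("output", os.path.basename(path)), crop)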
Here is how I got it to crop how I wanted; this is added right below the "output number of faces" line:
#Get the face position and output values into variables, might not be needed but I did it
for (x, y, w, h) in faces:
    xdis = x
    ydis = y
    w = w
    h = h

    #Get scale value by dividing wanted head height by detected head height
    ws = 600/w
    hs = 600/h

    #Scale image to get head to the right size; uses bilinear interpolation by default
    #(fx is the horizontal factor, fy the vertical one)
    scale = cv2.resize(image, (0, 0), fx=ws, fy=hs)

    #Calculate head position for the given values
    sxdis = int(xdis*ws) #applying scale to x distance and turning it into an integer
    sydis = int(ydis*hs) #applying scale to y distance and turning it into an integer
    sycent = sydis+300 #adding half the head height to get the center
    ystart = sycent-700 #subtract where you want the head center to be in pixels, this is for the vertical
    yend = ystart+2500 #add whatever you want the vertical resolution to be
    xcent = sxdis+300 #adding half the head width to get the center
    xstart = xcent-1000 #subtract where you want the head center to be in pixels, this is for the horizontal
    xend = xstart+2000 #add whatever you want the horizontal resolution to be

    #Crop the image
    cropped = scale[ystart:yend, xstart:xend]
It's a mess, but it works exactly how I wanted it to work.
I ended up going with OpenCV instead of switching to face_recognition because of speed, but I might switch over if I can get multithreading to work in face_recognition.
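One caveat with the crop above: if the computed window extends past the image edge, the negative indices wrap around in NumPy slicing. A hedged guard is to pad the scaled image first; the pad sizes mirror the window arithmetic above, and the white fill assumes a light studio backdrop:

pad_y, pad_x = 700, 1000  # the furthest the window can extend above/left of the head
scale = cv2.copyMakeBorder(scale, pad_y, 2500, pad_x, 2000,
                           cv2.BORDER_CONSTANT, value=(255, 255, 255))
cropped = scale[ystart + pad_y:yend + pad_y, xstart + pad_x:xend + pad_x]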
I want to automate the task of entering a set of images into a number-generating system, and before that I'd like to remove a dotted watermark which is common across these images.
I tried using Google, Tesseract and ABBYY Reader, and found that the part of the image that does not contain the watermark is recognized well, but the watermarked part is almost impossible to recognize.
I would like to remove the watermark using image processing. I already tried a few sample codes in OpenCV, Python, MATLAB etc., but none matched my requirements...
Here is a sample code in Python that I tried which changes the brightness & darkness:
import cv2
import numpy as np

img = cv2.imread("d:\\Docs\\WFH_Work\\test.png")

# Linear transform: new = alpha*img + beta (contrast/brightness)
alpha = 2.5
beta = -250
new = alpha * img + beta
new = np.clip(new, 0, 255).astype(np.uint8)

cv2.imshow("my window", new)
cv2.waitKey(0)  # keep the window open until a key is pressed
Unfortunately, I don't know how many pixels the watermark in this image consists of. Is there a way to get rid of this watermark, or to make the digits darker and the watermark lighter, via code?
Here is the watermarked image:
I am using dilation to remove the figures, then finding the edges to detect the watermark, and removing it using the main gray level inside the watermark.
import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('test.png', 0)

# Morphological closing (dilate then erode) removes the thin digits
kernel = np.ones((10, 10), np.uint8)
dilation = cv2.dilate(img, kernel, iterations=1)
erosion = cv2.erode(dilation, kernel, iterations=1)
plt.imshow(erosion, cmap='gray')
plt.show()

# Edge detection on the smoothed image to locate the watermark
gray = cv2.bilateralFilter(erosion, 11, 17, 17)
edged = cv2.Canny(gray, 30, 200)
plt.imshow(edged, cmap='gray')
plt.show()
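To actually remove the watermark once a mask for it is known, one option (my suggestion, not part of the code above) is OpenCV's inpainting; building the mask from the Canny edges above is an assumption:

mask = cv2.dilate(edged, np.ones((3, 3), np.uint8), iterations=2)  # thicken edges to cover the dots
clean = cv2.inpaint(cv2.imread('test.png'), mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite('test_clean.png', clean)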
Hi, I run this blur detection code in Python (source: https://www.pyimagesearch.com/2015/09/07/blur-detection-with-opencv/):
# import the necessary packages
from imutils import paths
import argparse
import cv2

def variance_of_laplacian(image):
    # compute the Laplacian of the image and then return the focus
    # measure, which is simply the variance of the Laplacian
    return cv2.Laplacian(image, cv2.CV_64F).var()

# loop over the input images
for imagePath in paths.list_images("images/"):
    # load the image, convert it to grayscale, and compute the
    # focus measure of the image using the Variance of Laplacian
    # method
    image = cv2.imread(imagePath)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    fm = variance_of_laplacian(gray)
    text = "Not Blurry"

    # if the focus measure is less than the supplied threshold,
    # then the image should be considered "blurry"
    if fm < 100:
        text = "Blurry"

    # show the image
    cv2.putText(image, "{}: {:.2f}".format(text, fm), (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 3)
    cv2.imshow("Image", image)
    print("{}: {:.2f}".format(text, fm))
    key = cv2.waitKey(0)
with this 2173 x 3161 input file
input image
and this is the output shown:
the output image
The image is zoomed in and not shown in full.
In the source code, they use a 450 x 600 px input image:
input in source code
and this is the output :
output in source code
I think the pixel dimensions of the image influence the output. So, how can I get output like the source code's output for every image?
Do I have to resize the input image? How? If I do, I'm afraid it will affect the blur measurement.
Excerpt from the DOCUMENTATION.
There is a special case where you can already create a window and load image to it later. In that case, you can specify whether window is resizable or not. It is done with the function cv2.namedWindow(). By default, the flag is cv2.WINDOW_AUTOSIZE. But if you specify flag to be cv2.WINDOW_NORMAL, you can resize window. It will be helpful when image is too large in dimension and adding track bar to windows.
I just used the code placed in the question but added the line cv2.namedWindow("Image", cv2.WINDOW_NORMAL), as mentioned in the comments.
# import the necessary packages
from imutils import paths
import argparse
import cv2

def variance_of_laplacian(image):
    # compute the Laplacian of the image and then return the focus
    # measure, which is simply the variance of the Laplacian
    return cv2.Laplacian(image, cv2.CV_64F).var()

# loop over the input images
for imagePath in paths.list_images("images/"):
    # load the image, convert it to grayscale, and compute the
    # focus measure of the image using the Variance of Laplacian
    # method
    image = cv2.imread(imagePath)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    fm = variance_of_laplacian(gray)
    text = "Not Blurry"

    # if the focus measure is less than the supplied threshold,
    # then the image should be considered "blurry"
    if fm < 100:
        text = "Blurry"

    # show the image
    cv2.putText(image, "{}: {:.2f}".format(text, fm), (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 3)
    cv2.namedWindow("Image", cv2.WINDOW_NORMAL)  #---- Added THIS line
    cv2.imshow("Image", image)
    print("{}: {:.2f}".format(text, fm))
    key = cv2.waitKey(0)
In case you want to use the exact same resolution as the example you've given, you can just use the cv2.resize() method (https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#resize) or, in case you want to keep the x/y aspect ratio, use the imutils package provided at https://www.pyimagesearch.com/2015/02/02/just-open-sourced-personal-imutils-package-series-opencv-convenience-functions/.
You still have to decide if you want to do the resizing first. It shouldn't matter in which order you greyscale or resize.
Command you can add:
resized_image = cv2.resize(image, (450, 600))
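Or, to keep the aspect ratio with imutils (a sketch; the 450 px width mirrors the example above, and imutils computes the matching height):

import imutils

resized_image = imutils.resize(image, width=450)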