Find image inside another in SimpleCV - Python

I'm using Python and SimpleCV (but OpenCV is fine too) and I have an image:
Furthermore, I have some small images, like this one, which were cropped from the original image:
Assuming the first image contains the second, I would like to get the second image's coordinates relative to the first, before cropping. How can I do this?

Use matchTemplate in OpenCV:
import cv2
import numpy as np

diff = cv2.matchTemplate(img1, img2, cv2.TM_CCORR_NORMED)
# np.unravel_index returns (row, col), i.e. (y, x)
y, x = np.unravel_index(np.argmax(diff), diff.shape)


How to remove noise from an image using pillow?

I am trying to de-noise an image that I've generated, in order to read the numbers on it using Tesseract:
Noisy image.
Is there any way to do so? I am fairly new to image manipulation.
from PIL import Image, ImageFilter

im = Image.open("noisy.png")
im1 = im.filter(ImageFilter.BLUR)
im2 = im.filter(ImageFilter.MinFilter(3))
im3 = im.filter(ImageFilter.MinFilter)  # defaults to a 3x3 window
The Pillow library provides the ImageFilter module, which can be used to enhance images. Per the documentation:
The ImageFilter module contains definitions for a pre-defined set of filters, which can be used with the Image.filter() method.
These filters work by passing a window (kernel) over the image and computing some function of the pixels in that window to modify the pixels (usually the central pixel).
The MedianFilter seems to be widely used and resembles the description given in nishthaneeraj's answer.
See the Pillow documentation:
https://pillow.readthedocs.io/en/stable/
and the ImageFilter module reference:
https://pillow.readthedocs.io/en/stable/reference/ImageFilter.html#module-PIL.ImageFilter
How do you remove noise from an image in Python?
The mean filter is used to blur an image in order to remove noise. It determines the mean of the pixel values within an n x n kernel, and the pixel intensity of the center element is then replaced by that mean. This eliminates some of the noise and smooths the image's edges.
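As a concrete sketch of the median-filter approach mentioned above, here is a self-contained example; the synthetic noisy image is made up for illustration, and in practice you would use Image.open on your file:

```python
import random
from PIL import Image, ImageFilter

# Synthetic stand-in for the noisy scan: a flat grey image sprinkled with
# salt-and-pepper pixels (in practice you'd use Image.open("noisy.png"))
random.seed(0)
im = Image.new("L", (64, 64), 128)
px = im.load()
for _ in range(200):
    px[random.randrange(64), random.randrange(64)] = random.choice((0, 255))

# MedianFilter(3) replaces each pixel with the median of its 3x3
# neighbourhood, removing isolated specks while keeping edges sharper
# than a plain blur would
denoised = im.filter(ImageFilter.MedianFilter(3))
```

After filtering, nearly all the isolated noise pixels revert to the background value, which is usually enough for Tesseract to pick up the digits.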

How do I crop the black background of the image using OpenCV in Python?

So I have an image processing task at hand which requires me to crop a certain portion of an image. I have no prior experience with OpenCV, and I would like to know what approach I should take.
Sample Input Image:
Sample Output Image:
What I initially thought was to convert the image to a bitmap and remove pixels that are below or above a certain threshold. Since I am free to use OpenCV and Python, I would like to know of any automated algorithm that does this, and if there is none, what the right approach for such a problem would be. Thank you.
Applying a simple threshold should get rid of the background, provided it's always darker than the foreground. If you use the Otsu thresholding algorithm, it should choose a good partition for you. Using your example as input, this gives:
Next you could compute the bounding box to select the region of the foreground. Provided the background is distinct enough and there are no holes, this gives you the resulting rect:
[619 x 96 from (0, 113)]
You can then use this rect to crop the original, to produce the desired result:
I wrote the code to solve this in C++. A rough translation into Python would look something like this:
import sys
import cv2 as cv

img = cv.imread(sys.argv[1])
grayscale = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
# cv.threshold returns (threshold_value, image); keep the image
_, thresholded = cv.threshold(grayscale, 0, 255, cv.THRESH_OTSU)
cv.imwrite("otsu.png", thresholded)
bbox = cv.boundingRect(thresholded)
x, y, w, h = bbox
print(bbox)
foreground = img[y:y+h, x:x+w]
cv.imwrite("foreground.png", foreground)
This method is fast and simple. If you find you have some white holes in your background which enlarge the bounding box, try applying an erosion operator.
FWIW I very much doubt you would get results like this as predictably or reliably using NNs.
Thresholding seems like a good approach. A neural network would be overkill, and you probably don't have enough data to train one anyway :D. In any case, check out this link.
You should be able to do something like:
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt

img = cv.imread('img.png')
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
ret, thresh = cv.threshold(gray, 0, 255, cv.THRESH_BINARY_INV + cv.THRESH_OTSU)
A NN would be overkill! You can do edge detection, take the extreme horizontal lines as boundaries, and then crop only the ROI between those two lines.

How to do image processing on a certain area of an image in python OpenCV 3?

Considering I already have the coordinates of the area of the image I want to process: it was already explained here using Rect, but how do you do this in Python with OpenCV 3?
From the link you gave, it seems you don't want the output in a different image variable, given that you know the coordinates of the region you want to process. Taking cv2.blur() as the example processing function, it would be:
image[y:y+height, x:x+width] = cv2.blur(image[y:y+height, x:x+width], (11, 11))
Here, x and y are the ROI's top-left coordinates, and height and width are the ROI's height and width.
Hope this is what you wanted; if it's anything different, provide more details in your question.
It would be very useful if you would provide more details and maybe some code you've tried.
From my understanding, you want to do image processing on a region of an image array only. You can do something like
foo(im[i1:i2, j1:j2, :])
Where foo is your image processing function.
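One detail worth knowing here: a NumPy slice of an image is a view, not a copy, so a function that modifies its argument in place will change the original image through the slice. A small self-contained sketch, where foo is a made-up stand-in for your processing function:

```python
import numpy as np

# Toy "image": slicing a NumPy array yields a view, not a copy, so an
# in-place operation on the slice modifies the original image too
im = np.zeros((6, 6), dtype=np.uint8)

def foo(region):
    # hypothetical stand-in for an image processing function
    region += 100  # in-place: writes through the view

foo(im[2:4, 2:4])
print(im[3, 3])  # the original changed through the view

# If the original must stay untouched, pass an explicit copy instead:
foo(im[2:4, 2:4].copy())
```

Functions that instead return a new array (like cv2.blur) leave the original alone unless you assign the result back into the slice, as in the answer above.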

I need to break an original image into sub-parts based on shape

I'm working with the following input image:
I want to extract all the boxes inside the original image as individual images, along with their positions, so that I can reconstruct the image after doing some operations on them. Currently I'm trying to detect contours on the image using OpenCV, but the problem is that it also extracts all the words inside the boxes. The output looks something like this:
Is there any way to set the dimensions of the boxes to be extracted, or is something else required for this?
Fairly simple approach:
Convert to grayscale.
Invert the image (to avoid getting the top-level contour detected around the whole image -- we want the lines white and the background black).
Find external contours only (we don't have any nested boxes).
Filter contours by area, discard the small ones.
You could possibly also filter by bounding box dimensions, etc. Feel free to experiment.
Example Script
Note: written for OpenCV 2.4.x, where findContours returns (contours, hierarchy); OpenCV 3.x returns three values, so unpack accordingly.
import cv2

img = cv2.imread('cnt2.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = 255 - gray  # invert
contours, hierarchy = cv2.findContours(gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for contour in contours:
    area = cv2.contourArea(contour)
    if area > 500.0:
        cv2.drawContours(img, [contour], -1, (0, 255, 0), 1)
cv2.imwrite('cnt2_out.png', img)
Example Output

Python OpenCV, only read part of image

I have thousands of large .png images (screenshots). I'm using OpenCV to do image recognition on a small portion of each image. I'm currently doing:
image = cv2.imread(path)
x,y,w,h = bounds
image = image[y:y + h, x:x + w]
The profiler tells me cv2.imread is a bottleneck. I'm wondering if I can make the script faster by only reading the part of each image I'm interested in rather than loading the entire image and then cropping to the bounds. I can't find an OpenCV flag for that though. Am I missing one?
AFAICT, there's no way to do this with OpenCV, but I did find a solution here: Load just part of an image in python.
Simply using PIL to save only the cropped region of interest when generating the screenshots works.
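A sketch of that workflow: crop with PIL at screenshot-generation time, so later reads stay small. The save_roi helper and the bounds tuple are hypothetical names for illustration:

```python
from PIL import Image

# Hypothetical helper: crop the region of interest while the screenshot
# is still in memory, and save only that region
def save_roi(im, bounds, path):
    x, y, w, h = bounds
    im.crop((x, y, x + w, y + h)).save(path)

# Demo with a synthetic image standing in for a screenshot
screenshot = Image.new("RGB", (1920, 1080), "black")
save_roi(screenshot, (100, 200, 300, 150), "roi.png")
print(Image.open("roi.png").size)  # (300, 150)
```

The saved files are then small enough that cv2.imread no longer dominates the profile.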
