I have a region of an image selected, like this:
http://slideplayer.com/4593320/15/images/9/Intelligent+scissors+http%3A%2F%2Frivit.cs.byu.edu%2FEric%2FEric.html.jpg
and now, using OpenCV, I would like to extract the selected region.
How could I do it? I have already done some research but found nothing useful.
Thanks in advance.
First of all, you have to import your pixel locations into the program and create a contour object from those points. I guess you know how to do this.
The following link shows how to create a contour object:
Creating your own contour in opencv using python
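For reference, a contour in OpenCV is just a NumPy array of points. A minimal sketch with made-up coordinates (replace them with your own selected pixel locations):
import numpy as np
# made-up polygon vertices -- replace with your selected pixel locations
points = np.array([[10, 10], [200, 15], [190, 180], [20, 170]], dtype=np.int32)
contours = [points.reshape((-1, 1, 2))]  # list of contours, as cv2 functions expect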
You can black out everything outside your selected region with the following code:
black = np.zeros(img.shape).astype(img.dtype)  # all-black image, same size and type as img
color = [1, 1, 1]                              # fill value of 1 per channel
cv2.fillPoly(black, contours, color)           # mask: 1 inside the contour, 0 outside
new_img = img * black                          # keep pixels inside the contour, zero out the rest
I guess you know (or can find out) how to crop the image after blacking out the rest, using the contour pixels.
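For completeness, a minimal sketch of that cropping step, assuming the contours and new_img variables from above (cv2.boundingRect gives the tight axis-aligned box around the contour):
x, y, w, h = cv2.boundingRect(contours[0])  # bounding box of the selected region
cropped = new_img[y:y+h, x:x+w]             # crop the masked image to that box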
I am trying to crop an image of a piece of card/paper or similar so that the card/paper is in focus. I tried the code below, but the problem is that it only works when the object in question is alone in the picture. If it is on a blank background with nothing else in it, the cropping is flawless; otherwise it does not work as expected.
I am attempting to create a system which crops different kinds of images, puts them through a classifier, and then extracts text from them.
import cv2
import numpy as np
filenames = "img.jpg"
img = cv2.imread(filenames)
blurred = cv2.blur(img, (3,3))
canny = cv2.Canny(blurred, 50, 200)
## find the non-zero min-max coords of canny
pts = np.argwhere(canny>0)
y1,x1 = pts.min(axis=0)
y2,x2 = pts.max(axis=0)
## crop the region
cropped = img[y1:y2, x1:x2]
filename_cropped = filenames.split('.')
filename_cropped[0] = filename_cropped[0] + '_cropped'
filename_cropped = '.'.join(filename_cropped)
cv2.imwrite(filename_cropped, cropped)
A sample image that works is:
Something that does not work is:
Can anyone help with this?
The first image works because the entire image besides your target is empty. Canny will give different results when there is more in the image.
If you are looking for those specific cards, I suggest you try some colour filtering first. You can try to filter for the blue/purple hue of the card.
Increasing the Canny thresholds could also work, but in this image you will still always find the hand as well unless you add some colour filtering.
You can also try Sobel edge detection. This will probably highlight the edges of the card pretty well. But then again, it will also show the hand, so you can't just take the whole Sobel/Canny output. You need processing before it that isolates the card, or after it that can find the rectangular shape of the card in the Sobel/Canny result.
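To illustrate the colour-filtering idea, here is a rough sketch; the HSV bounds below are guesses for a blue/purple card and would need tuning for your actual images:
import cv2
import numpy as np
img = cv2.imread("img.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# assumed blue/purple range -- tune these bounds for the real card colour
lower = np.array([100, 50, 50])
upper = np.array([160, 255, 255])
mask = cv2.inRange(hsv, lower, upper)
# take the largest blob in the mask, assumed to be the card
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    card = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(card)
    cropped = img[y:y+h, x:x+w]
    cv2.imwrite("img_cropped.jpg", cropped)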
I'm working to create bounding boxes around the data I need to extract from an image. (I am using Jupyter notebook for python and OpenCV).
For this, I am drawing rectangles of desired coordinates and am using the following line of code:
cv2.rectangle(img,(50,82),(440,121), (0, 255, 0), 1)
This is for some reason giving only a black rectangle even though (0,255,0) is supposed to give green. What's more, if I use any other colour, for example (255,255,0), the box doesn't appear at all.
Thanks in advance for your help!
Is the image img that you are drawing on binary or grayscale? If so, make it colour by merging the same image three times so that you have a 3-channel image with B=G=R, or convert it from grayscale to BGR using cvtColor(). That is, in Python/OpenCV do either
img = cv2.merge([img,img,img])
or
img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
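For example, a quick sketch assuming the image was loaded as grayscale:
import cv2
img = cv2.imread("img.jpg", cv2.IMREAD_GRAYSCALE)  # single-channel image
img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)        # now 3 channels, so colours can show
cv2.rectangle(img, (50, 82), (440, 121), (0, 255, 0), 1)  # green rectangle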
I have two questions.
I am working with OpenCV and Python and I am trying to get an image's contours. I am successful at that, but I wanted to see the difference between using cv2.drawContours() and letting cv2.findContours() edit the image directly by not passing a copy of the original image as the source parameter. I have tried this on some images but I couldn't see anything happening at all.
I am trying to get the contours of a square I created with Paint's square tool. But when I use the cv2.CHAIN_APPROX_SIMPLE method, it gives me the coordinates of 6 points, and no combination of them fits my square. Why does it do that?
Can someone explain?
Here is my code for both problems:
import cv2
import numpy as np
image = cv2.imread(r"C:\Users\fazil\Desktop\12.png")
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
gray = cv2.Canny(gray,75,200)
gray = cv2.threshold(gray,127,255,cv2.THRESH_BINARY_INV)[1]
cv2.imshow("s",gray)
contours, hierarchy = cv2.findContours(gray,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
print(contours[1])
cv2.drawContours(image,contours,1,(45,67,89),5)
cv2.imshow("k",gray)
cv2.imshow("j",image)
cv2.waitKey(0)
cv2.destroyAllWindows()
I'm using pyzbar + OpenCV to detect QR codes. I need to draw something on the top edge of the QR code for printing purposes. I know pyzbar can detect the bounding box of a QR code, but it's hard to know which edge is the top.
Any suggestions?
I need to detect the top edge, like in these examples:
If the top edge always starts and ends with a rectangle (the finder pattern) at each corner, you can try to detect where those two rectangles are in the image with cv2.findContours and then use cv2.line to draw a line between the two.
Use this great tutorial to detect the squares at the corners and then get the starting x,y point of each.
I hope this helps; if you get stuck, please tell me and I'll try to help you.
If you want your QR picture to be identified, rotated and scaled correctly, you need a combination of two techniques:
First step: detect the x,y coordinates of the 4 corners of each QR code. The Python library zbar is useful for this:
print(symbol.location) gives the coordinates.
Second step: now a deskewing / perspective correction / "homography" is needed. Here is how to do it with Python + OpenCV.
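A minimal sketch of that second step; the file name and corner coordinates below are placeholders for the values you would get from symbol.location, assumed to be ordered top-left, top-right, bottom-right, bottom-left:
import cv2
import numpy as np
img = cv2.imread("qr.jpg")  # placeholder file name
# hypothetical corner coordinates from symbol.location, ordered
# top-left, top-right, bottom-right, bottom-left
corners = [(35, 40), (310, 52), (300, 330), (28, 315)]
side = 300  # output size in pixels, an arbitrary choice
src = np.float32(corners)
dst = np.float32([[0, 0], [side, 0], [side, side], [0, side]])
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (side, side))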
I know it is an old question but maybe it will help someone in the future:
PyZbar detects QR codes well (it gives an almost perfect position of the QR code corners) but you don't get information about where the top of the QR code is.
OpenCV in my test was not as good, but it returns the corner points in the order [top-left, top-right, bottom-right, bottom-left]. This behaviour is not documented, but it always works like this.
[tested with opencv-python==4.5.5.62]
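A rough sketch of that approach with cv2.QRCodeDetector; the corner order relied on here is the undocumented behaviour described above, so treat it as an assumption:
import cv2
img = cv2.imread("qr.png")  # placeholder file name
detector = cv2.QRCodeDetector()
found, points = detector.detect(img)
if found:
    # points has shape (1, 4, 2); order observed as
    # top-left, top-right, bottom-right, bottom-left
    tl, tr, br, bl = points[0]
    # draw the top edge
    cv2.line(img, tuple(map(int, tl)), tuple(map(int, tr)), (0, 0, 255), 5)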
You can just use pyzbar to do this.
Use decoded.polygon. It will return the polygon of the QR code.
In this example, we are using this image as input:
from PIL import Image, ImageDraw
from pyzbar import pyzbar
# Open image with PIL
img_input = Image.open('image.png').convert('RGB')
# Create a draw object
draw = ImageDraw.Draw(img_input)
Now we will use pyzbar to decode any QR code in the image.
Then we'll use decoded.polygon to get the corner coordinates of the QR code and use PIL to draw the edges and a diagonal line of the QR code:
# Decode the image into QR code, if any
decoder = pyzbar.decode(img_input)
if len(decoder) != 0:
    for decoded in decoder:
        if decoded.type == 'QRCODE':
            # Draw only the top edge of the polygon with PIL
            draw.line(decoded.polygon[2:4], fill = '#0F16F1', width = 5)
            # Draw only the bottom edge of the polygon with PIL
            draw.line(decoded.polygon[0:2], fill = '#D40FF1', width = 5)
            # Draw only the left edge of the polygon with PIL
            draw.line(decoded.polygon[0:4:3], fill = '#FD7A24', width = 5)
            # Draw only the right edge of the polygon with PIL
            draw.line(decoded.polygon[1:3], fill = '#00D4F1', width = 5)
            # Draw a diagonal line of the polygon with PIL
            draw.line(decoded.polygon[1:4:2], fill = '#00D354', width = 5)
Now we can simply save the result image with the following command:
img_input.save('image_and_polygon.png')
And the output will be:
I'm trying to crop a region from an image using a binary image that has already been produced from the original. Suppose I have the original image:
and I got this binary image from the original:
and I want to crop only the white area of the image, using blob analysis.
How can I do that?
In C++ you can use:
cv::Mat output_Mat = cv::Mat::zeros(RGB_Mat.size(), RGB_Mat.type());
RGB_Mat.copyTo(output_Mat, Binary_Mat);
Hope you can find the corresponding Python methods.
points = cv2.findNonZero(binary_image)  # coordinates of all non-zero (white) pixels
min_rect = cv2.boundingRect(points)     # tight bounding box (x, y, w, h) around the white area
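Putting the two ideas together in Python, a sketch with placeholder file names:
import cv2
# placeholder file names -- substitute your own images
rgb = cv2.imread("original.png")
binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)
# keep only the pixels under the white blob (the equivalent of copyTo with a mask)
masked = cv2.bitwise_and(rgb, rgb, mask=binary)
# crop to the bounding box of the white area
points = cv2.findNonZero(binary)
x, y, w, h = cv2.boundingRect(points)
cropped = masked[y:y+h, x:x+w]
cv2.imwrite("cropped.png", cropped)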