I'm using pyzbar + OpenCV to detect QR codes. I need to draw something along the top edge of the QR code for printing purposes. I know pyzbar can detect the bounding box of a QR code, but it's hard to know which edge is the top.
Any suggestions?
I need to detect the top edge, as in these examples:
If the top edge always starts and ends with a square (finder pattern) in the corners, you can try to detect where the two squares are in the image with cv2.findContours and then use cv2.line to draw a line between them.
Use this great tutorial to detect the squares in the corners and then get the starting x,y point of each.
I hope this helps; if you get stuck, please tell me and I'll try to help you.
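If it helps, here is a rough sketch of that idea. Finding the finder patterns through the contour hierarchy (squares nested inside squares) and assuming the code is roughly upright are my own assumptions, and the threshold value is just a guess you will need to tune:
import cv2

img = cv2.imread('qr.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY_INV)[1]

# Finder patterns are squares nested inside squares, so look for contours
# that have at least two levels of children in the hierarchy
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
centers = []
for i, cnt in enumerate(contours):
    depth, child = 0, hierarchy[0][i][2]
    while child != -1:
        depth += 1
        child = hierarchy[0][child][2]
    if depth >= 2:  # candidate finder pattern
        x, y, w, h = cv2.boundingRect(cnt)
        centers.append((x + w // 2, y + h // 2))

# Assuming the QR code is roughly upright, the two centers with the
# smallest y are the top-left and top-right finder patterns
if len(centers) >= 2:
    a, b = sorted(centers, key=lambda c: c[1])[:2]
    cv2.line(img, a, b, (0, 0, 255), 3)
    cv2.imwrite('qr_top_edge.png', img)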
If you want your QR picture to be identified, rotated and scaled correctly, you need to combine two techniques:
First step: detect the x,y coordinates of the 4 corners of each QR code. The Python library zbar is useful for this :
print(symbol.location) gives the coordinates.
Second step: now a deskewing / perspective correction / "homography" is needed. Here is how to do it with Python + OpenCV.
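For example, here is a minimal sketch of those two steps, using pyzbar (rather than the raw zbar bindings) for step one and cv2.getPerspectiveTransform for step two; the 300x300 output size and the assumption that the polygon corners come out in a consistent order are mine:
import cv2
import numpy as np
from pyzbar import pyzbar

img = cv2.imread('qr.png')

# First step: the 4 corner coordinates of the QR code
decoded = pyzbar.decode(img)[0]
src = np.array(decoded.polygon, dtype=np.float32)  # 4 x 2

# Second step: map the corners onto an upright square (perspective correction)
side = 300
dst = np.array([[0, 0], [side, 0], [side, side], [0, side]], dtype=np.float32)
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (side, side))
cv2.imwrite('qr_deskewed.png', warped)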
I know it is an old question but maybe it will help someone in the future:
PyZbar detects QR codes well (it gives an almost perfect position for the QR code corners), but you don't get any information about which side is the top of the QR code.
OpenCV in my tests was not as good, but it returns the corner points as a list in the order [top-left, top-right, bottom-right, bottom-left]. This is not documented behavior, but it has always worked like this for me.
[tested with opencv-python==4.5.5.62]
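For example, a small sketch relying on that corner order (undocumented, so treat it as an assumption):
import cv2

img = cv2.imread('qr.png')
found, points = cv2.QRCodeDetector().detect(img)

if found:
    # points has shape (1, 4, 2); the order appears to be
    # [top-left, top-right, bottom-right, bottom-left] (undocumented)
    corners = [tuple(int(v) for v in p) for p in points[0]]
    top_left, top_right = corners[0], corners[1]
    cv2.line(img, top_left, top_right, (255, 0, 0), 3)
    cv2.imwrite('qr_top_edge.png', img)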
You can just use pyzbar to do this.
Use decoded.polygon. It will return the polygon of the QR code.
In this example, we are using this image as input:
from PIL import Image, ImageDraw
from pyzbar import pyzbar
# Open image with PIL
img_input = Image.open('image.png').convert('RGB')
# Create a draw object
draw = ImageDraw.Draw(img_input)
Now we will use pyzbar to decode any QR codes in the image.
Then we'll use decoded.polygon to get the corner coordinates of the QR code and use PIL to draw its edges and a diagonal line:
# Decode the image into QR codes, if any
decoder = pyzbar.decode(img_input)
if len(decoder) != 0:
    for decoded in decoder:
        if decoded.type == 'QRCODE':
            # Draw only the top edge of the polygon with PIL
            draw.line(decoded.polygon[2:4], fill = '#0F16F1', width = 5)
            # Draw only the bottom edge of the polygon with PIL
            draw.line(decoded.polygon[0:2], fill = '#D40FF1', width = 5)
            # Draw only the left edge of the polygon with PIL
            draw.line(decoded.polygon[0:4:3], fill = '#FD7A24', width = 5)
            # Draw only the right edge of the polygon with PIL
            draw.line(decoded.polygon[1:3], fill = '#00D4F1', width = 5)
            # Draw a diagonal line of the polygon with PIL
            draw.line(decoded.polygon[1:4:2], fill = '#00D354', width = 5)
Now we can simply save the result image with the following command:
img_input.save('image_and_polygon.png')
And the output will be:
Related
I am trying to crop an image of a piece of card/paper or such so that the card/paper is in focus. I tried the code below, but the problem is that it only works when the object in question is alone in the picture. If it is on a blank background with nothing else in it, the cropping is flawless; otherwise it does not work as expected.
I am attempting to create a system which crops different kinds of images, puts them through a classifier, and then extracts text from them.
import cv2
import numpy as np
filenames = "img.jpg"
img = cv2.imread(filenames)
blurred = cv2.blur(img, (3,3))
canny = cv2.Canny(blurred, 50, 200)
## find the non-zero min-max coords of canny
pts = np.argwhere(canny>0)
y1,x1 = pts.min(axis=0)
y2,x2 = pts.max(axis=0)
## crop the region
cropped = img[y1:y2, x1:x2]
filename_cropped = filenames.split('.')
filename_cropped[0] = filename_cropped[0] + '_cropped'
filename_cropped = '.'.join(filename_cropped)
cv2.imwrite(filename_cropped, cropped)
A sample image that works is:
Something that does not work is:
Can anyone help with this?
The first image works because the entire image besides your target is empty. Canny will give different results when there is more in the image.
If you are looking for those specific cards, I suggest you try some colour filtering first. You can try to filter for the blue/purple hue of the card.
Increasing the Canny thresholds could also work, but in this image you would still pick up the hand unless you add some colour filtering.
You can also try Sobel edge detection. It will probably highlight the edges of the card pretty well, but then again it will also show the hand, so you can't just take the whole Sobel/Canny output. You need to add processing before it that isolates the card, or after it that finds the rectangular shape of the card in the Sobel/Canny result.
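As a rough starting point for the colour-filtering idea, something like the sketch below might work; the HSV range is only a guess for a blue/purple card and will need tuning on your actual photos:
import cv2
import numpy as np

img = cv2.imread('img.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Rough blue/purple range in HSV; adjust lower/upper to match the card
lower = np.array([100, 50, 50])
upper = np.array([150, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Close small holes in the mask and keep the largest blob (hopefully the card)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((9, 9), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    card = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(card)
    cropped = img[y:y + h, x:x + w]
    cv2.imwrite('img_cropped.jpg', cropped)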
I have two questions.
I am working with OpenCV and Python and I am trying to get an image's contours. I am successful at that, but when I try to see what the difference is between using the cv2.drawContours() function and letting cv2.findContours() edit the image directly (without passing a copy of the original image as the source parameter), I can't see anything happening in the images I tried.
I am trying to get the contours of a square I created with Paint's square tool. But when I use the cv2.CHAIN_APPROX_SIMPLE method, it gives me the coordinates of 6 points, and no combination of them fits my square. Why does it do that?
Can someone explain?
Here is my code for both problems:
import cv2
import numpy as np
image = cv2.imread(r"C:\Users\fazil\Desktop\12.png")
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
gray = cv2.Canny(gray,75,200)
gray = cv2.threshold(gray,127,255,cv2.THRESH_BINARY_INV)[1]
cv2.imshow("s",gray)
contours, hierarchy = cv2.findContours(gray,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
print(contours[1])
cv2.drawContours(image,contours,1,(45,67,89),5)
cv2.imshow("k",gray)
cv2.imshow("j",image)
cv2.waitKey(0)
cv2.destroyAllWindows()
I have a region of an image selected, like this:
http://slideplayer.com/4593320/15/images/9/Intelligent+scissors+http%3A%2F%2Frivit.cs.byu.edu%2FEric%2FEric.html.jpg
and now, using OpenCV I would like to extract the region selected.
How could I do it? I have already done some research but found nothing useful.
Thanks in advance.
First of all you have to import your pixel locations into the program and create a contour object from those points. I guess you know how to do this.
You can find from following link how to create contour object:
Creating your own contour in opencv using python
You can black out everything outside your selected region using the following code:
# mask that is 1 inside the selected contour and 0 everywhere else
black = np.zeros(img.shape).astype(img.dtype)
color = [1, 1, 1]
cv2.fillPoly(black, contours, color)
# keep only the selected region; everything else becomes black
new_img = img * black
I guess you know (or can find out) how to crop the image to the selection afterwards, using the contour pixels.
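To round it off, here is a minimal end-to-end sketch; the example points are made up and only stand in for your real pixel locations:
import cv2
import numpy as np

img = cv2.imread('image.jpg')

# Hypothetical selection; replace with your own pixel locations
points = np.array([[50, 40], [400, 60], [420, 300], [60, 280]], dtype=np.int32)
contours = [points]

# Black out everything outside the selection (as above)
black = np.zeros(img.shape).astype(img.dtype)
cv2.fillPoly(black, contours, [1, 1, 1])
new_img = img * black

# Crop to the bounding box of the selection
x, y, w, h = cv2.boundingRect(points)
cropped = new_img[y:y + h, x:x + w]
cv2.imwrite('cropped.jpg', cropped)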
Is it possible to rotate an image around a point that isn't the image center using PIL?
If not, what would you folks recommend to achieve the desired behavior?
To answer the commenter's question: by default it rotates around the center of the image. Otherwise, you can specify the center coordinates, measured from the top left.
from PIL import Image
im = Image.new("RGB", (100, 100))
resultIm = im.rotate(45, center=(25, 25))
See https://pillow.readthedocs.io/en/5.2.x/reference/Image.html#PIL.Image.Image.rotate for documentation.
I am trying to create a web based foreground extraction service (similar to clippingmagic.com).
I am using marker based watershed image segmentation (opencv, python).
I am using sketch.js for allowing users to draw the markers on the image.
I need to call the python script from my php code.
Here's what the script needs to do:
Read the image which has the markers drawn on it.
Create a matrix of integers where different colored markers are labelled with different integers.
Feed the markers matrix and input image to the watershed algorithm and store the output to a local file.
Extract the foreground which is marked with the corresponding marker.
Display the output to the user.
I am facing a problem with step 2.
Here is my code so far:
#!/usr/bin/env python
import numpy as np
import cv2
from common import Sketcher
img_m = cv2.imread('1_m.jpg')  # my image which has colored marks on it
h, w = img_m.shape[:2]
markers = np.zeros((h, w), np.int32)
cv2.imshow("Image", img_m)
# trying to put '1' at all places where the image is marked with color (179,230,29)
markers = np.where((img_m == [179,230,29]), 1, 0)
# trying to put '2' at all places where the image is marked with color (238,27,34)
markers = np.where((img_m == [238,27,34]), 2, 0)
cv2.watershed(img_m, markers)  # gives me the error "markers should be 1-channel 32-bit image"
cv2.waitKey(50)
Can somebody help me with this? Thanks.