image segmentation - How to detect this kind of vein junction? (landmarks) - python

I need to detect the vein junctions of bee wings (the image is just one example). I am using OpenCV with Python.
PS: the image may have lost a little quality, but the skeleton is fully connected and one pixel wide.

This is an interesting question. The result I got is not perfect, but it might be a good start. I filtered the image with a kernel that only looks at the edges of the kernel. The idea is that a junction has at least 3 lines crossing the kernel's edge, where a regular line has only 2. This means that when the kernel is over a junction, the resulting value will be higher, so a threshold will reveal them.
Due to the nature of the lines there are some false positives and some false negatives. A single junction will most likely be found several times, so you'll have to account for that. You can make them unique by drawing small dots and detecting those dots.
Result:
Code:
import cv2
import numpy as np

# load the image as grayscale
img = cv2.imread('xqXid.png', 0)
# make a copy to display the result
im_or = img.copy()
# create a kernel that is hollow in the middle, so only the kernel's edge is sampled
kernel = np.ones((7,7))
kernel[2:5,2:5] = 0
print(kernel)
# apply kernel; ddepth=3 (CV_16S) widens the output so the sums don't overflow uint8
res = cv2.filter2D(img, 3, kernel)
# filter results
loc = np.where(res > 2800)
print(len(loc[0]))
# draw circles on found locations
for x in range(len(loc[0])):
    cv2.circle(im_or, (loc[1][x], loc[0][x]), 10, (127), 5)
# display result
cv2.imshow('Result', im_or)
cv2.waitKey(0)
cv2.destroyAllWindows()
Note: you can try to tweak the kernel and the threshold. For example, with the code above I got 126 matches. But when I use
kernel = np.ones((5,5))
kernel[1:4,1:4] = 0
with threshold
loc = np.where(res > 1550)
I got 33 matches in these locations:
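To make the repeated hits unique, as suggested above, you could draw a small filled dot at every match and then keep one centroid per blob of dots. A minimal sketch, reusing loc and im_or from the code above (the dot radius is an arbitrary choice):
# draw a filled dot at every raw match so overlapping hits merge into blobs
dots = np.zeros(im_or.shape[:2], np.uint8)
for y, x in zip(loc[0], loc[1]):
    cv2.circle(dots, (int(x), int(y)), 3, 255, -1)
# each blob of dots is one unique junction; centroids[0] is the background
n, labels, stats, centroids = cv2.connectedComponentsWithStats(dots)
for cx, cy in centroids[1:]:
    cv2.circle(im_or, (int(cx), int(cy)), 10, (127), 2)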

You can use the Harris corner detector algorithm to detect vein junctions in the above image. Compared to previous techniques, the Harris corner detector takes the differential of the corner score into account with reference to direction directly, instead of using shifting patches for every 45-degree angle, and has been proved to be more accurate in distinguishing between edges and corners (source: Wikipedia).
code:
import cv2
import numpy as np

img = cv2.imread('wings-bee.png')
# convert image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)
'''
cv2.cornerHarris args:
    src       - input image; it should be grayscale and float32 type
    blockSize - size of the neighbourhood considered for corner detection
    ksize     - aperture parameter of the Sobel derivative used
    k         - Harris detector free parameter in the equation
'''
dst = cv2.cornerHarris(gray, 9, 5, 0.04)
# result is dilated for marking the corners
dst = cv2.dilate(dst, None)
# threshold for an optimal value; it may vary depending on the image
img_thresh = cv2.threshold(dst, 0.32*dst.max(), 255, 0)[1]
img_thresh = np.uint8(img_thresh)
# get the matrix with the x and y locations of each centroid
centroids = cv2.connectedComponentsWithStats(img_thresh)[3]
stop_criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# refine corner coordinates to subpixel accuracy
corners = cv2.cornerSubPix(gray, np.float32(centroids), (5,5), (-1,-1), stop_criteria)
for i in range(1, len(corners)):
    cv2.circle(img, (int(corners[i,0]), int(corners[i,1])), 5, (0,255,0), 2)
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
output:
You can check the theory behind the Harris corner detector algorithm here.

Related

Extract most central area in a Binary Image

I am processing binary images, and was previously using this code to find the largest area in the binary image:
# Use the hue value to convert to binary
thresh = 20
thresh, thresh_img = cv2.threshold(h, thresh, 255, cv2.THRESH_BINARY)
cv2.imshow('thresh', thresh_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Finding Contours
# Use a copy of the image since findContours alters the image
contours, _ = cv2.findContours(thresh_img.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
#Extract the largest area
c = max(contours, key=cv2.contourArea)
This code isn't really doing what I need it to do; now I think it would be better to extract the most central area in the binary image.
Binary Image
Largest Image
This is currently what the code is extracting, but I am hoping to get the central circle in the first binary image extracted.
OpenCV comes with a point-polygon test function (for contours). It even gives a signed distance, if you ask for that.
I'll find the contour that is closest to the center of the picture. That may be a contour actually overlapping the center of the picture.
Timings, on my quadcore from 2012, give or take a millisecond:
findContours: ~1 millisecond
all pointPolygonTests and argmax: ~1 millisecond
import cv2 as cv
import numpy as np

mask = cv.imread("fkljm.png", cv.IMREAD_GRAYSCALE)
(height, width) = mask.shape
# threshold required because the sample picture isn't exactly clean
ret, mask = cv.threshold(mask, 128, 255, cv.THRESH_BINARY)
# get contours (retrieval modes are mutually exclusive, so pick one)
contours, hierarchy = cv.findContours(mask, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE)
center = (np.array([width, height]) - 1) / 2
# find contour closest to center of picture
distances = [
    cv.pointPolygonTest(contour, tuple(center), True) # most positive = deepest inside; negative is outside
    for contour in contours
]
iclosest = np.argmax(distances)
print("closest contour is", iclosest, "with distance", distances[iclosest])
# draw closest contour
canvas = cv.cvtColor(mask, cv.COLOR_GRAY2BGR)
cv.drawContours(image=canvas, contours=[contours[iclosest]], contourIdx=-1, color=(0, 255, 0), thickness=5)
closest contour is 45 with distance 65.19202405202648
A cv.floodFill() on the center point can also quickly yield a labeling of that blob... assuming the mask is positive there. Otherwise, there needs to be a search.
(cx, cy) = center.astype(int)
assert mask[cy, cx], "floodFill not applicable"
# trying cv.floodFill on the image center
mask2 = mask >> 1 # turns everything else gray
cv.floodFill(image=mask2, mask=None, seedPoint=(int(cx), int(cy)), newVal=255)
# use (mask2 == 255) to identify that blob
This also takes less than a millisecond.
Some practically faster approaches might involve a pyramid scheme (low-res versions of the mask) to quickly identify areas of the picture that are candidates for an exact test (distance/intersection):
Test the target pixel. Hit (positive)? Done.
Calculate a low-res mask. Per block, if any pixel is positive, the block is positive.
Find the positive blocks, sort them by distance, and examine more closely all those within sqrt(2) * blocksize of the best distance.
A minimal sketch of the block-level mask follows.
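Here, mask is the binary image from the snippet above and blocksize is an arbitrary choice; each block is reduced with max(), so a block is positive if any pixel inside it is positive:
import numpy as np
import cv2 as cv

blocksize = 32
h, w = mask.shape
# pad so both dimensions divide evenly into blocks
padded = cv.copyMakeBorder(mask, 0, -h % blocksize, 0, -w % blocksize,
                           cv.BORDER_CONSTANT, value=0)
# reshape into (rows, blocksize, cols, blocksize) tiles and reduce each tile with max()
blocks = padded.reshape(padded.shape[0] // blocksize, blocksize,
                        padded.shape[1] // blocksize, blocksize)
lowres = blocks.max(axis=(1, 3)) > 0  # True where a block contains any positive pixel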
There are several ways you could define "most central." I chose to define it as the region with the closest distance to the point you're searching for. If the point is inside the region, then that distance will be zero.
I also chose to do this with a pixel-based approach rather than a polygon-based approach, like you're doing with findContours().
Here's a step-by-step breakdown of what this code is doing.
Load the image, convert it to grayscale, and threshold it. You're already doing these things.
Identify connected components of the image. Connected components are places where there are white pixels which are directly connected to other white pixels. This breaks up the image into regions.
Using np.argwhere(), convert a true/false mask into an array of coordinates.
For each coordinate, compute the Euclidean distance between that point and search_point.
Find the minimum within each region.
Across all regions, find the smallest distance.
import cv2
import numpy as np

img = cv2.imread('test197_img.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh_img = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
n_groups, comp_grouped = cv2.connectedComponents(thresh_img)
components = []
search_point = [600, 150]
for i in range(1, n_groups):
    mask = (comp_grouped == i)
    # argwhere gives (row, col); reverse to (x, y) to match search_point
    component_coords = np.argwhere(mask)[:, ::-1]
    min_distance = np.sqrt(((component_coords - search_point) ** 2).sum(axis=1)).min()
    components.append({
        'mask': mask,
        'min_distance': min_distance,
    })
closest = min(components, key=lambda x: x['min_distance'])['mask']
Output:

How to Segment Image by Physical Borders with Similar Color Range Throughout?

I have an image like so (my apologies for anyone who finds this to be too much):
And would like to get an image like this (with the borders filled in as a segmentation should be):
As you can see, the segmentation should be defined by the "physical" borders present in the image, perhaps taking into account shadows, edges, etc.
I have tried using the Canny edge filter, but this seems to show me edges that are not desirable (even changing the parameters) and I'm not sure how to go forward in that direction.
My closest attempt has been using K-means clustering, but there seem to be two downsides to using this:
Completely unrelated portions of the image are labeled as the same cluster just because their RGB values are similar.
Because the algorithm depends on the average color values in a cluster, more lit parts of an image are labeled different clusters than darker ones even though I need them to be the same.
Here is the image I get using K-means:
And here is the code I used to get it:
import cv2
import numpy as np
original = cv2.imread('liver_annotation_yiHXgxp.png')
alpha = 3
beta = 0
contrast = cv2.convertScaleAbs(original, alpha=alpha, beta=beta)
kernel = np.ones((5,5),np.float32)/25
blur = cv2.filter2D(contrast,-1,kernel)
image = cv2.cvtColor(contrast, cv2.COLOR_BGR2RGB)
pixel_values = image.reshape((-1, 3))
pixel_values = np.float32(pixel_values)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
k = 5
_, labels, centers = cv2.kmeans(pixel_values, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
centers = np.uint8(centers)
labels = labels.flatten()
segmented_image = centers[labels]
segmented_image = segmented_image.reshape(image.shape)
# Results
cv2.imshow('original', original)
cv2.imshow('contrast', contrast)
cv2.imshow('blur', blur)
cv2.imshow('adjusted', segmented_image)
cv2.waitKey()

Edge detection in noisy binary image

I'm trying to clean the following image in order to perform edge and then polygon detection to extract the key buildings/features. Ideally I wish to end up with an image where the contours around houses and buildings are extracted correctly. The input image is (the full sized image is given here)
Currently my processing has worked as follows:
Read in image and perform canny edge detection
Apply Gaussian and median blurs
Perform a probabilistic Hough transform
Draw the lines given from the Hough transform in red
Remove non-red lines, apply blurs to the red and perform contour detection
Which is done with the following code:
import cv2
import numpy as np
from PIL import Image
# note: here() and params come from the question author's project setup

def read_tif(PATH):
    img = Image.open(PATH)
    # turn into a np array (scale the boolean mask to 0-255)
    img = np.array(img, dtype=np.uint8)*255
    return img

# Read in image
img = read_tif(here("Images/" + params['img']))
dst = cv2.Canny(img, 50, 200, None, 3)
# Apply blurs
dst = cv2.GaussianBlur(dst, (5, 5), 0)
dst = cv2.medianBlur(dst, 3)
cdst = cv2.cvtColor(dst, cv2.COLOR_GRAY2BGR)
cdstP = np.copy(cdst)
# Probabilistic Hough Transform
linesP = cv2.HoughLinesP(dst, 1, np.pi / 180, 50, None, 50, 10)
# Draw lines from Hough
if linesP is not None:
    for i in range(0, len(linesP)):
        l = linesP[i][0]
        cv2.line(cdstP, (l[0], l[1]), (l[2], l[3]), (0,0,255), 6, cv2.LINE_AA)
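Step 5 of the pipeline (removing non-red lines and running contour detection) isn't shown above; roughly, it looks like this (the color bounds and blur size here are my own choices):
# keep only the red Hough lines drawn on cdstP
red_mask = cv2.inRange(cdstP, (0, 0, 200), (60, 60, 255))
# blur to fuse nearby line fragments, then re-binarize before contouring
red_mask = cv2.GaussianBlur(red_mask, (5, 5), 0)
_, red_mask = cv2.threshold(red_mask, 1, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(red_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)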
Unfortunately this method has not had much success: although lines are detected well, many houses are returned as several irregular polygons:
(see the full image here). I have tried playing around with various other methods, like dilation (to increase the width of the house boundary lines), but these don't seem to improve the results and they amplify some of the noise in the image.
Any advice or help on methods/approaches that can help in improving these results is much appreciated, TIA!!

Local Contrast Enhancement for Digit Recognition with cv2 / pytesseract

I want to use pytesseract to read digits from images. The images look as follows:
The digits are dotted and in order to be able to use pytesseract, I need black connected digits on a white background. To do so, I thought about using erode and dilate as preprocessing techniques. As you can see, the images are similar, yet quite different in certain aspects. For example, the dots in the first image are darker than the background, while the dots in the second are whiter. That means, in the first image I can use erode to get black connected lines and in the second image I can use dilate to get white connected lines and then inverse the colors. This leads to the following results:
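Roughly, that preprocessing looks like this (the file names, kernel size, and iteration counts here are placeholders):
import cv2
import numpy as np

kernel = np.ones((3, 3), np.uint8)
# first image: dots darker than background, so erode grows the dark dots together
img1 = cv2.imread('digits_dark.png', 0)   # hypothetical file name
connected1 = cv2.erode(img1, kernel, iterations=2)
# second image: dots lighter than background, so dilate and then invert the colors
img2 = cv2.imread('digits_light.png', 0)  # hypothetical file name
connected2 = 255 - cv2.dilate(img2, kernel, iterations=2)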
Using an appropriate threshold, the first image can easily be read with pytesseract. The second image, however, is more tricky. The problem is that, for example, parts of the "4" are darker than the background around the "3". So a simple threshold is not going to work. I need something like a local threshold or local contrast enhancement. Does anybody have an idea here?
Edit:
OTSU, mean threshold and gaussian threshold lead to the following results:
Your images are pretty low-res, but you can try a method called gain division. The idea is that you try to build a model of the background and then weight each input pixel by that model. The output gain should be relatively constant across most of the image.
After gain division is performed, you can try to improve the image by applying an area filter and morphology. I only tried your first image, because it is the "least worst".
These are the steps to get the gain-divided image:
Apply a soft median blur filter to get rid of high frequency noise.
Get the model of the background via local maximum. Apply a very strong close operation, with a big structuring element (I’m using a rectangular kernel of size 15).
Perform gain adjustment by dividing 255 by each local-maximum pixel, and weight each input image pixel by this value.
You should get a nice image where the background illumination is pretty much normalized; threshold this image to get a binary mask of the characters.
Now, you can improve the quality of the image with the following, additional steps:
Threshold via Otsu, but add a little bit of bias. (This, unfortunately, is a manual step depending on the input).
Apply an area filter to filter out the smaller blobs of noise.
Let's see the code:
import numpy as np
import cv2
# image path
path = "C:/opencvImages/"
fileName = "iA904.png"
# Reading an image in default mode:
inputImage = cv2.imread(path+fileName)
# Remove small noise via median:
filterSize = 5
imageMedian = cv2.medianBlur(inputImage, filterSize)
# Get local maximum:
kernelSize = 15
maxKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
localMax = cv2.morphologyEx(imageMedian, cv2.MORPH_CLOSE, maxKernel, None, None, 1, cv2.BORDER_REFLECT101)
# Perform gain division
gainDivision = np.where(localMax == 0, 0, (inputImage/localMax))
# Clip the values to [0,255]
gainDivision = np.clip((255 * gainDivision), 0, 255)
# Convert the mat type from float to uint8:
gainDivision = gainDivision.astype("uint8")
# Convert RGB to grayscale:
grayscaleImage = cv2.cvtColor(gainDivision, cv2.COLOR_BGR2GRAY)
This is what gain division gets you:
Note that the lighting is more balanced. Now, let's apply a little bit of contrast enhancement:
# Contrast Enhancement:
grayscaleImage = np.uint8(cv2.normalize(grayscaleImage, grayscaleImage, 0, 255, cv2.NORM_MINMAX))
You get this, which creates a little bit more contrast between the foreground and the background:
Now, let's try to threshold this image to get a nice, binary mask. As I suggested, try Otsu's thresholding but add (or subtract) a little bit of bias to the result. This step, as mentioned, is dependent on the quality of your input:
# Threshold via Otsu + bias adjustment:
threshValue, binaryImage = cv2.threshold(grayscaleImage, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
threshValue = 0.9 * threshValue
_, binaryImage = cv2.threshold(grayscaleImage, threshValue, 255, cv2.THRESH_BINARY)
You end up with this binary mask:
Invert this and filter out the small blobs. I set an area threshold value of 10 pixels:
# Invert image:
binaryImage = 255 - binaryImage
# Perform an area filter on the binary blobs:
componentsNumber, labeledImage, componentStats, componentCentroids = \
cv2.connectedComponentsWithStats(binaryImage, connectivity=4)
# Set the minimum pixels for the area filter:
minArea = 10
# Get the indices/labels of the remaining components based on the area stat
# (skip the background component at index 0)
remainingComponentLabels = [i for i in range(1, componentsNumber) if componentStats[i][4] >= minArea]
# Filter the labeled pixels based on the remaining labels,
# assign pixel intensity to 255 (uint8) for the remaining pixels
filteredImage = np.where(np.isin(labeledImage, remainingComponentLabels), 255, 0).astype("uint8")
And this is the final binary mask:
If you plan on sending this image to an OCR, you might want to apply some morphology first. Maybe a closing to try and join the dots that make up the characters. Also be sure to train your OCR classifier with a font that is close to what you are actually trying to recognize. This is the (inverted) mask after a size 3 rectangular closing operation with 3 iterations:
Edit:
To get the last image, process the filtered output as follows:
# Set kernel (structuring element) size:
kernelSize = 3
# Set operation iterations:
opIterations = 3
# Get the structuring element:
maxKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
# Perform closing:
closingImage = cv2.morphologyEx(filteredImage, cv2.MORPH_CLOSE, maxKernel, None, None, opIterations, cv2.BORDER_REFLECT101)
# Invert image to obtain black numbers on white background:
closingImage = 255 - closingImage

Image Processing: Algorithm Improvement for Real-Time FedEx Logo Detector

I've been working on a project involving image processing for logo detection. Specifically, the goal is to develop an automated system for a real-time FedEx truck/logo detector that reads frames from an IP camera stream and sends a notification on detection. Here's a sample of the system in action with the recognized logo surrounded by the green rectangle.
Some constraints on the project:
Uses raw OpenCV (no deep learning, AI, or trained neural networks)
Image background can be noisy
The brightness of the image can vary greatly (morning, afternoon, night)
The FedEx truck/logo can have any scale, rotation, or orientation since it could be parked anywhere on the sidewalk
The logo could potentially be fuzzy or blurry with different shades depending on the time of day
There may be many other vehicles with similar sizes or colors in the same frame
Real-time detection (~25 FPS from IP camera)
The IP camera is in a fixed position and the FedEx truck will always be in the same orientation (never backwards or upside down)
The Fedex Truck will always be the "red" variation instead of the "green" variation
Current Implementation/Algorithm
I have two threads:
Thread #1 - Captures frames from the IP camera using cv2.VideoCapture() and resizes the frame for further processing. I decided to handle grabbing frames in a separate thread to improve FPS by reducing I/O latency, since cv2.VideoCapture() is blocking. By dedicating an independent thread just to capturing frames, the main processing thread always has a frame available to perform detection on (a rough sketch of such a reader follows the thread descriptions).
Thread #2 - Main processing/detection thread to detect FedEx logo using color thresholding and contour detection.
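A minimal sketch of the Thread #1 reader (the class name and details here are simplified):
import threading
import cv2

class ThreadedCapture:
    """Keep the freshest frame available so the detection thread never blocks on I/O."""
    def __init__(self, src):
        self.cap = cv2.VideoCapture(src)
        self.frame = None
        self.running = True
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        # continuously overwrite self.frame with the newest grab
        while self.running:
            ok, frame = self.cap.read()
            if ok:
                self.frame = frame

    def read(self):
        return self.frame

    def stop(self):
        self.running = False
        self.cap.release()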
Overall Pseudo-algorithm
For each frame:
    Find bounding box for purple color of logo
    Find bounding box for red/orange color of logo
    If both bounding boxes are valid/adjacent and contours pass checks:
        Combine bounding boxes
        Draw combined bounding boxes on original frame
        Play sound notification for detected logo
Color thresholding for logo detection
For color thresholding, I have defined HSV (low, high) thresholds for purple and red to detect the logo.
colors = {
    'purple': ([120,45,45], [150,255,255]),
    'red': ([0,130,0], [15,255,255])
}
To find the bounding box coordinates for each color, I follow this algorithm (sketched as a single function after the list):
Blur the frame
Erode and dilate the frame with a kernel to remove background noise
Convert frame from BGR to HSV color format
Perform a mask on the frame using the lower and upper HSV color bounds with set color thresholds
Find largest contour in the mask and obtain bounding coordinates
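A minimal sketch of those five steps, where frame is the current camera frame and the blur and kernel sizes are my own choices:
import cv2
import numpy as np

def color_bbox(frame, lower, upper):
    # 1. blur the frame
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    # 2. erode and dilate to remove background noise
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.dilate(cv2.erode(blurred, kernel), kernel)
    # 3. convert BGR to HSV
    hsv = cv2.cvtColor(cleaned, cv2.COLOR_BGR2HSV)
    # 4. mask with the lower/upper HSV bounds
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    # 5. largest contour -> bounding coordinates
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest), largest

purple_box, purple_contour = color_bbox(frame, *colors['purple'])
red_box, red_contour = color_bbox(frame, *colors['red'])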
After performing a mask, I obtain these isolated purple (left) and red (right) sections of the logo.
False positive checks
Now that I have the two masks, I perform checks to ensure that the found bounding boxes actually form a logo. To do this, I use cv2.matchShapes() which compares the two contours and returns a metric showing the similarity. The lower the result, the higher the match. In addition, I use cv2.pointPolygonTest() which finds the shortest distance between a point in the image and a contour for additional verification. My false positive process involves:
Checking if the bounding boxes are valid
Ensuring the two bounding boxes are adjacent based on their relative proximity
If the bounding boxes pass the adjacency and similarity metric test, the bounding boxes are combined and a FedEx notification is triggered.
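A rough sketch of those checks, reusing the boxes and contours from the sketch above (the distance and similarity thresholds here are assumptions):
# shape similarity: lower is a better match
similarity = cv2.matchShapes(purple_contour, red_contour, cv2.CONTOURS_MATCH_I1, 0)
# adjacency: the red section should sit immediately to the right of the purple one
px, py, pw, ph = purple_box
rx, ry, rw, rh = red_box
adjacent = abs((px + pw) - rx) < 20 and abs(py - ry) < 20
if similarity < 0.5 and adjacent:
    # combine the two boxes into a single detection
    x, y = min(px, rx), min(py, ry)
    w = max(px + pw, rx + rw) - x
    h = max(py + ph, ry + rh) - y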
Results
This check algorithm is not really robust as there are many false positives and failed detections. For instance, these false positives were triggered.
While this color thresholding and contour detection approach worked in basic cases where the logo was clear, it was severely lacking in some areas:
There are latency problems from having to compute bounding boxes on each frame
It occasionally false detects when the logo is not present
Brightness and time of day had a great impact on detection accuracy
When the logo was at a skewed angle, the color thresholding still found the two regions, but the check algorithm rejected the detection.
Would anyone be able to help me improve my algorithm or suggest alternative detection strategies? Is there any other way to perform this detection since color thresholding is highly dependent on exact calibration? If possible, I would like to move away from color thresholding and the multiple layers of filters since it's not very robust. Any insight or advice is greatly appreciated!
You might want to take a look at feature matching. The goal is to find features in two images, a template image, and a noisy image and match them. This would allow you to find the template (the logo) in the noisy image (the camera image).
Features are, in essence, things that humans would find interesting in an image, such as corners or open spaces. I would recommend using a scale-invariant feature transform (SIFT) as the feature detection algorithm. The reason I suggest using SIFT is that it is invariant to image translation, scaling, and rotation, partially invariant to illumination changes, and robust to local geometric distortion. This matches your specification.
I generated the above image using code modified from the OpenCV docs on SIFT feature detection:
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('main.jpg',0) # target Image
# Create the sift object
sift = cv2.xfeatures2d.SIFT_create(700)
# Find keypoints and descriptors directly
kp, des = sift.detectAndCompute(img, None)
# Add the keypoints to the final image
img2 = cv2.drawKeypoints(img, kp, None, (255, 0, 0), 4)
# Show the image
plt.imshow(img2)
plt.show()
You will notice when doing this that a large number of the features do land on the FedEx logo (Above).
The next thing I did was try matching the features from the video feed to the features in the FedEx logo. I did this using the FLANN feature matcher. You could have gone with many approaches (including brute force), but because you are working on a video feed this is probably your best option. The code below is inspired by the OpenCV docs on feature matching:
import numpy as np
import cv2
from matplotlib import pyplot as plt
logo = cv2.imread('logo.jpg', 0) # query Image
img = cv2.imread('main2.jpg',0) # target Image
# Create the sift object
sift = cv2.xfeatures2d.SIFT_create(700)
# Find keypoints and descriptors directly
kp1, des1 = sift.detectAndCompute(img, None)
kp2, des2 = sift.detectAndCompute(logo,None)
# FLANN parameters
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50) # or pass empty dictionary
flann = cv2.FlannBasedMatcher(index_params,search_params)
matches = flann.knnMatch(des1,des2,k=2)
# Need to draw only good matches, so create a mask
matchesMask = [[0,0] for i in range(len(matches))]
# ratio test as per Lowe's paper
for i,(m,n) in enumerate(matches):
    if m.distance < 0.7*n.distance:
        matchesMask[i] = [1,0]
# Draw lines
draw_params = dict(matchColor = (0,255,0),
                   singlePointColor = (255,0,0),
                   matchesMask = matchesMask,
                   flags = 0)
# Display the matches
img3 = cv2.drawMatchesKnn(img, kp1, logo, kp2, matches, None, **draw_params)
plt.imshow(img3)
plt.show()
Using this I was able to get the following features matched, as seen below. You will notice that there are outliers; however, the majority of features match:
The final step would then be to simply draw a bounding box around the matched region. I will link you to another Stack Overflow question which does something similar but with the ORB detector. Here is another way to get a bounding box using the OpenCV docs.
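Following the homography pattern from the OpenCV docs, that last step might look roughly like this (the ratio threshold and minimum match count are assumptions; note that in the matching code above, kp1 belongs to the camera image and kp2 to the logo):
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
if len(good) >= 4:  # findHomography needs at least 4 point pairs
    src_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    M, inliers = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    # project the logo's corners into the camera image and draw the box
    h, w = logo.shape
    corners = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
    box = cv2.perspectiveTransform(corners, M)
    cv2.polylines(img, [np.int32(box)], True, 255, 3, cv2.LINE_AA)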
I hope this helps!
You can help the detector by preprocessing the image; then you don't need as many training images.
First we reduce the barrel distortion.
import cv2
import numpy as np

img = cv2.imread('fedex.jpg')
margin = 150
# add a border, as the undistorted image is going to be larger
img = cv2.copyMakeBorder(img, margin, margin, margin, margin, cv2.BORDER_CONSTANT, 0)
width = img.shape[1]
height = img.shape[0]
# only the first radial distortion coefficient is non-zero here
distCoeff = np.zeros((4,1), np.float64)
distCoeff[0,0] = -4.5e-5  # k1
distCoeff[1,0] = 0.0      # k2
distCoeff[2,0] = 0.0      # p1
distCoeff[3,0] = 0.0      # p2
cam = np.eye(3, dtype=np.float32)
cam[0,2] = width/2.0  # define center x
cam[1,2] = height/2.0 # define center y
cam[0,0] = 12.        # define focal length x
cam[1,1] = 12.        # define focal length y
dst = cv2.undistort(img, cam, distCoeff)
Then we transform the image as if the camera were facing the FedEx truck head-on. That is, wherever along the curb the truck is parked, the FedEx logo will have almost the same size and orientation.
# use four points for homography estimation, coordinates taken from the undistorted image
# 1. top-left corner of F
# 2. bottom-left corner of F
# 3. top-right of E
# 4. bottom-right of E
pts_src = np.array([[1083, 235], [1069, 343], [1238, 301],[1201, 454]])
pts_dst = np.array([[1069, 235],[1069, 320],[1201, 235],[1201, 320]])
h, status = cv2.findHomography(pts_src, pts_dst)
im_out = cv2.warpPerspective(dst, h, (dst.shape[1], dst.shape[0]))
