With Python and OpenCV I rotate an image that already has bounding boxes, using the getRotationMatrix2D function. How can I use this matrix to calculate the new coordinates of the bounding boxes?
Here is my source code:
import cv2
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
def rotateImage(image, angle, bboxes):
    # getRotationMatrix2D expects the center as (x, y) = (width/2, height/2)
    h, w = image.shape[:2]
    center = (w / 2, h / 2)
    rot_mat = cv2.getRotationMatrix2D(center, angle, 1.0)
    # warpAffine expects the output size as (width, height)
    return cv2.warpAffine(image, rot_mat, (w, h), flags=cv2.INTER_LINEAR)
img = cv2.imread('input.jpg')
img = img[:, :, ::-1]  # BGR -> RGB so matplotlib shows the correct colors
img_new = rotateImage(img, 5.0, [(0,0,50,50)])
plt.subplot(121),plt.imshow(img),plt.title('Input')
plt.subplot(122),plt.imshow(img_new),plt.title('Output')
plt.show()
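One way to get the new box coordinates (a minimal sketch, not from the original post) is to apply the same 2x3 affine matrix to the four corners of each box with cv2.transform and then take the axis-aligned bounds of the transformed corners. It assumes the boxes are (x, y, w, h) tuples and that the matrix built inside rotateImage is also made available to the caller, for example by returning it alongside the warped image:
def rotate_bboxes(bboxes, rot_mat):
    # rot_mat is the 2x3 matrix returned by cv2.getRotationMatrix2D
    new_bboxes = []
    for x, y, w, h in bboxes:
        # the four corners of the box, shaped (4, 1, 2) for cv2.transform
        corners = np.array([[x, y], [x + w, y], [x + w, y + h], [x, y + h]],
                           dtype=np.float32).reshape(-1, 1, 2)
        rotated = cv2.transform(corners, rot_mat).reshape(-1, 2)
        # axis-aligned bounding box of the rotated corners
        x_min, y_min = rotated.min(axis=0)
        x_max, y_max = rotated.max(axis=0)
        new_bboxes.append((x_min, y_min, x_max - x_min, y_max - y_min))
    return new_bboxes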
I created some code to isolate the big (white) circles, and I would like to color those circles and draw contours on the thresholded image to calculate their areas.
The thing is, the thresholded image is an 8-bit (uint8) binary image. How can I do that on a binary image?
Thanks for the help
import os
#from skimage import measure, io, img_as_ubyte
from skimage.color import label2rgb, rgb2gray
from skimage.segmentation import clear_border
import matplotlib.pyplot as plt
import numpy as np
import cv2
import pandas as pd
import sys
import glob
original = cv2.imread('D:/2022/Python program/NBC_new2022_/images/image1.jpg',-1)
original = cv2.resize(original, (864, 648)) #resize of original image
img = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
median = cv2.medianBlur(img, 3)
ret, th = cv2.threshold(median, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
kernel = np.ones((9,9), np.uint8)
opening = cv2.morphologyEx(th, cv2.MORPH_OPEN, kernel)
edge_touching_removed = clear_border(opening)
cv2.imshow('original', original)
cv2.imshow("Theshrold image", edge_touching_removed)
cv2.waitKey(0)
cv2.destroyAllWindows()
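One possible way to get contours and areas from the binary result (a sketch, assuming OpenCV 4.x, where findContours returns two values, and reusing edge_touching_removed and original from the code above):
# find the external contours on the binary (uint8) image
contours, hierarchy = cv2.findContours(edge_touching_removed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# draw the contours in color on a copy of the resized original
colored = original.copy()
cv2.drawContours(colored, contours, -1, (0, 0, 255), 2)
# print the area of each detected circle in pixels
for i, cnt in enumerate(contours):
    print(f"circle {i}: area = {cv2.contourArea(cnt)} px")
cv2.imshow("Contours", colored)
cv2.waitKey(0)
cv2.destroyAllWindows()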
I've written a Python script that detects tracking points in an image:
import numpy as np
import cv2
image = cv2.imread("image.jpg")
# MAGIC! (detection code omitted; it produces a binary image called `mask`)
cv2.imshow("Image", image)
cv2.imshow("Tracking points", mask)
cv2.waitKey(0)
This is the result:
How can I get the coordinates of the white dots?
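A minimal sketch of one way to get those coordinates (assuming mask is a binary uint8 image in which the dots are white):
# coordinates of all non-zero (white) pixels as (x, y) pairs
points = cv2.findNonZero(mask)   # shape (N, 1, 2), columns are x and y
coords = points.reshape(-1, 2) if points is not None else np.empty((0, 2), int)
print(coords)
# if each dot covers several pixels, connected components give one centroid per dot
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
print(centroids[1:])   # skip label 0, which is the background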
I'm trying to scatter all white pixels of this gradient image in matplotlib.pyplot:
import numpy as np
from PIL import Image
import cv2
from matplotlib import pyplot
img = Image.open("/root/.../aec.png").convert("L")
img = np.array(img)
kernel = np.ones((2, 2), np.uint8)
gradient = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel)
ox, oy = np.where(gradient == 255)
pyplot.plot(ox, oy, ".k")
pyplot.show()
The original picture (...) has a resolution of 2123x1269 and looks like this:
My graph in pyplot shows my gradient picture rotated 270° clockwise and I don't understand why.
I tried pyplot.plot(oy, ox, ".k"), but then it is flipped about the x-axis compared to the original image. Rotating the original image with gradient = cv2.rotate(gradient, cv2.cv2.ROTATE_90_CLOCKWISE) gives me coordinates different from the pixel coordinates of my original image. (The x-y pixel coordinates have to be the ones of the gradient image.) Also, the resolution of 2123x1269 should remain and the program should run as fast as possible.
How can I display the pixel coordinates of the gradient image in matplotlib.pyplot correctly?
That is because the origin in OpenCV is at the top-left. Try inverting the y-axis in pyplot and exchanging x and y.
EDIT: Just use plt.imshow(); it is the right function for displaying image data.
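A small sketch of the first suggestion (assuming ox, oy are the row and column indices returned by np.where as in the question):
# np.where returns (rows, cols) = (y, x), so plot the columns on the x-axis
pyplot.plot(oy, ox, ".k")
# match image coordinates, where y grows downward from the top-left corner
pyplot.gca().invert_yaxis()
pyplot.show()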
For anyone who ever encounters this problem, this is my final code:
import numpy as np
from PIL import Image
import cv2
from matplotlib import pyplot
img = Image.open("/root/.../aec.png").convert("L")
img = np.array(img)
kernel = np.ones((2, 2), np.uint8)
gradient = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel)
ox, oy = np.where(gradient == 255)
pyplot.plot(oy, ox, ".k")
pyplot.imshow(gradient)
pyplot.show()
Zoomed-in version of the plot: correct plot; so far it serves my purpose.
Thanks #frab
I am generating material for a computer vision (CV) class and I would like to calculate the area of this highlighted part through conventional CV techniques:
So I applied Canny to detect edges and a circular Hough transform to try to find the respective area. These are my results:
I tried to use watershed, with the markers at the centers of the circles I found, but I had no success. Does anyone have an idea how I can continue, or any other suggestions?
Here is the code:
import numpy as np
import matplotlib.pyplot as plt
import cv2 as cv
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.draw import circle_perimeter
from skimage.segmentation import watershed
import urllib.request
urllib.request.urlretrieve("https://github.com/LAVI-USP/SEL0339-SEL5886/raw/master/imagens/pratica_07/head_CT.tif", "head_CT.tif")
# Read image
img = cv.imread("head_CT.tif",-1)
# Edge detector
edges = canny(img, sigma=2.0, low_threshold=19, high_threshold=57)
# Hough_circle
hough_radii = np.arange(29, 32, 1)
hough_res = hough_circle(edges, hough_radii)
accums, cx, cy, radii = hough_circle_peaks(hough_res, hough_radii, total_num_peaks=4,
                                           min_xdistance=200, min_ydistance=200, threshold=0.25)
# Remove the false-positive circle
sortX = np.argsort(cx)
cx = cx[sortX[:-1]]
cy = cy[sortX[:-1]]
radii = radii[sortX[:-1]]
# Draw red circles
img_rgb = np.tile(np.expand_dims(img, axis=-1), (1, 1, 3))
for center_y, center_x, radius in zip(cy, cx, radii):
    circy, circx = circle_perimeter(center_y, center_x, radius, shape=img_rgb.shape)
    img_rgb[circy, circx] = (220, 20, 20)
# Plot images
imgs = [img_rgb, edges]
r,c = 1,2
fig, axs = plt.subplots(r, c, figsize=(15,15))
for i in range(r):
    for j in range(c):
        axs[j].imshow(imgs[i*c+j], cmap='gray')
        axs[j].axis('off')
Here is the head_CT.tif image.
Thanks for any help.
*This image is from Gonzalez & Woods, Digital Image Processing book.
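One possible continuation (a sketch, not part of the original post): instead of watershed, build a filled mask from each detected circle and count the thresholded bright pixels inside it, which gives an area in pixels. It reuses img, cx, cy and radii from the code above and assumes an Otsu threshold separates the highlighted structure well enough; adjust this to the actual target region:
from skimage.filters import threshold_otsu
from skimage.draw import disk
# Otsu threshold to separate the bright structure from the background
bright = img > threshold_otsu(img)
for center_y, center_x, radius in zip(cy, cx, radii):
    # filled mask of the detected circle
    circle_mask = np.zeros(img.shape, dtype=bool)
    rr, cc = disk((center_y, center_x), radius, shape=img.shape)
    circle_mask[rr, cc] = True
    # area of the circle itself and of the bright pixels inside it, in pixels
    print("circle area:", circle_mask.sum(),
          "bright pixels inside:", np.count_nonzero(bright & circle_mask))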
I would like to create a black rectangle (a square whose side length is the larger value of img.shape) and then put the image into the center of this black rectangle.
I coded this:
from skimage.io import imread
import numpy as np
#load the file_name
file_name = "/path/to/image/img.png"
#read in the image
img = imread(file_name)
#shape of image
img.shape
#create a black square with side length equal to the max of img.shape[0] and img.shape[1]
longSide = max(img.shape[0], img.shape[1])
#create a black square
rectangle = np.zeros((longSide, longSide), dtype="bool")
I now would like to paste the image in the center of this black rectangle (the black rectangle in the background). In the end it should look like this:
You can try using the PIL (Pillow) image library:
from skimage.io import imread
import numpy as np
from PIL import Image
#load the file_name
file_name = "/path/to/image/img.png"
#read in the image
img = imread(file_name)
#side length of the square: the larger of the two image dimensions
longSide = max(img.shape[0], img.shape[1])
#create the black square as a PIL image (a plain NumPy array has no paste() method)
rectangle = Image.new("RGB", (longSide, longSide), (0, 0, 0))
final_im = rectangle.copy()
final_im.paste(Image.fromarray(img), (100, 50))
# the final command pastes the previous image onto the rectangle and positions it using
# the (x, y) coordinates of the pasted image's upper-left corner.
More info: https://note.nkmk.me/en/python-pillow-paste/
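To actually center the image, as asked in the question, the offsets can be computed from the two shapes (a small sketch using the same variables as above; the output path is hypothetical):
# horizontal and vertical offsets that center img inside the square
x_offset = (longSide - img.shape[1]) // 2   # shape[1] is the width
y_offset = (longSide - img.shape[0]) // 2   # shape[0] is the height
final_im = rectangle.copy()
final_im.paste(Image.fromarray(img), (x_offset, y_offset))
final_im.save("/path/to/image/img_centered.png")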