OpenCV RetinaFace image coordinates - python

How do I get the image coordinates of the faces that were detected? I can't find relevant info or documentation. Any pointers?
My code is simple and detects faces, but I need to store the coordinates so I can crop each face and replace it with another image.
Code:
import cv2
from matplotlib import pyplot as plt
from retinaface.pre_trained_models import get_model
from retinaface.utils import vis_annotations
image = cv2.imread("crowd.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
model = get_model("resnet50_2020-07-20", max_size=2048)
model.eval()
annotation = model.predict_jsons(image)  # list of dicts with "bbox", "score", "landmarks"
After this I can't figure out how to store the x, y coordinate values.
plt.imshow(vis_annotations(image, annotation))
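In case it helps: per the library's README, each entry in annotation carries the box as [x_min, y_min, x_max, y_max] under the "bbox" key, so the coordinates can be stored and the faces cropped with plain numpy slicing. A minimal sketch of storing the boxes, cropping each face, and pasting a replacement patch over it ("replacement.jpg" is a made-up file name):
replacement = cv2.cvtColor(cv2.imread("replacement.jpg"), cv2.COLOR_BGR2RGB)  # hypothetical image
boxes = []
for face in annotation:
    if not face["bbox"]:  # an empty list means no detection
        continue
    x_min, y_min, x_max, y_max = map(int, face["bbox"])
    boxes.append((x_min, y_min, x_max, y_max))       # store the coordinates
    crop = image[y_min:y_max, x_min:x_max].copy()    # the cropped face, if you need it
    # resize the replacement patch to the box and paste it in
    image[y_min:y_max, x_min:x_max] = cv2.resize(replacement, (x_max - x_min, y_max - y_min))
plt.imshow(image)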

Related

How do I retrieve the resultant image as a matrix (numpy array) from results given back by YOLOv5 in PyTorch?

I have been learning how to run a pretrained YOLO model with PyTorch, and I want to display the output image using OpenCV's cv2.imshow() method.
The output image can be displayed with .show() and saved with .save(); however, I want to display it using cv2.imshow(), and for that I need the image as a numpy array.
I'm not sure how to do that, or whether it is possible at all.
Here's the code for it.
import torch
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
imgs = ['img.png'] # batch of images
results = model(imgs)
results.print()
results.show() # or .save(), shows/saves the same image with bounding boxes around detected objects
# Show 'results' using openCV's cv2.imshow() method?
results.xyxy[0] # img1 predictions (tensor)
print(results.pandas().xyxy[0]) # img1 predictions (pandas)
A longer way of solving this problem would be to create bounding boxes ourselves over the detected objects in the image and display it, but consider me lazy :p .
I am lazy like you :) and you can display the bounding boxes without drawing them manually. When you call results.save(), it saves a version of the image with the boxes drawn to the folder 'runs/detect/exp/'. Then you can display that image using cv2.
results.save()
import cv2
img = cv2.imread("runs/detect/exp/zidane.jpg")
# cv2.imshow does not work on Google Colab, so this is a workaround.
# You should get the same results if you use cv2.imshow locally.
from google.colab.patches import cv2_imshow
cv2_imshow(img)
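A side note, hedged because the torch-hub API has shifted between YOLOv5 releases: the results object also has a render() method that draws the boxes in memory and returns the annotated images as numpy arrays, which avoids the save/read round-trip entirely (reusing the cv2 and cv2_imshow imports above):
annotated = results.render()[0]  # RGB numpy array with the boxes drawn on it
cv2_imshow(cv2.cvtColor(annotated, cv2.COLOR_RGB2BGR))  # cv2 expects BGR channel order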
I had the same issue, so I wrote a small method to quickly draw the image without saving it.
def drawRectangles(image, dfResults):
    for index, row in dfResults.iterrows():
        print((row['xmin'], row['ymin']))
        image = cv2.rectangle(image, (row['xmin'], row['ymin']), (row['xmax'], row['ymax']), (255, 0, 0), 2)
    cv2_imshow(image)
Then call it with the predictions cast to int:
results = model(image)
dfResults = results.pandas().xyxy[0]
drawRectangles(image, dfResults[['xmin', 'ymin', 'xmax', 'ymax']].astype(int))
Just open the image via cv2 and then add rectangles, drawing them at the points given by your result arrays.
import cv2
img = cv2.imread("image_path")
(See the OpenCV documentation on how to draw a rectangle with cv2.)
@ChristophRackwitz here you have it:
cv2.rectangle(image, start_point, end_point, color, thickness)
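Putting that together with the YOLOv5 output above, a minimal sketch, reusing the results object from the question and assuming each row of results.xyxy[0] is x1, y1, x2, y2, confidence, class:
import cv2
img = cv2.imread("img.png")
for *box, conf, cls in results.xyxy[0].tolist():
    x1, y1, x2, y2 = map(int, box)  # corner coordinates come back as floats
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imshow('detections', img)
cv2.waitKey(0)
cv2.destroyAllWindows()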

Displaying Hole Filled image in python with OpenCV

I am working on some image analysis in Python using OpenCV. I want to display an image whose holes I filled in with scipy.ndimage.binary_fill_holes. Upon doing this I could not see anything being displayed when I used cv2.imshow, so I used plt.imshow and saw that the holes in my original image were filled. I want to use the cv2.imshow function to display the image. I did convert the image so that its datatype is uint8, yet still nothing shows up. Any help would be appreciated.
import cv2
import matplotlib.pyplot as plt
import numpy as np
import scipy.ndimage
img = cv2.imread('Funky 647.jpg', cv2.IMREAD_GRAYSCALE)
dst = cv2.fastNlMeansDenoising(img,None,10,7,21)
ret, thresh2 = cv2.threshold(dst, 40, 255, cv2.THRESH_BINARY)
hole_filled= np.uint8(scipy.ndimage.binary_fill_holes(thresh2))
# plt.imshow(hole_filled)
cv2.imshow('No Holes', hole_filled)
cv2.waitKey(0)
cv2.destroyAllWindows()
Hole Filled Image via matplotlib:
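For what it's worth, a likely explanation: binary_fill_holes returns a boolean array, so np.uint8(...) yields values of 0 and 1, and on the 0 to 255 scale that cv2.imshow expects both are essentially black. matplotlib hides this because it normalizes the data range before plotting. Scaling the mask to 0/255 should make the cv2 window show the image; a minimal sketch, picking up from the code above:
# scale the boolean mask to the 0-255 range cv2.imshow expects
hole_filled = scipy.ndimage.binary_fill_holes(thresh2).astype(np.uint8) * 255
cv2.imshow('No Holes', hole_filled)
cv2.waitKey(0)
cv2.destroyAllWindows()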

How do I fix this cv2.imshow() syntax error?

I cannot get the following code to run:
import cv2 # import Open Cv moudle
import numpy as np
import face_recognition
import matplotlib.pyplot as plt
imgEloun=face_recognition.load_image_file ("image/elounMask.jpg") # load Image
imgEloun=cv2.cvtColor(imgEloun,cv2.COLOR_BGR2RGB) # changing BGR clr to RGB
imgTest=face_recognition.load_image_file ("image/steve jobs.jpg") # Load Image
imgTest=cv2.cvtColor(imgTest,cv2.COLOR_BGR2RGB) # changing BGR clr to RGB
faceLocation=face_recognition.face_locations(imgEloun)[0] #Detect face loaction in image
encodeELOUN=face_recognition.face_encodings(imgEloun)[0] #encoding face location
cv2.rectangle(imgEloun,(faceLocation[3],(faceLocation[0]),(faceLocation[1],(faceLocation[2]),(255,0,255))
cv2.imshow('elounMask',imgEloun) ######## here is error
cv2.imshow('steve jobs',imgTest) # Show Image as output
cv2.waitKey(0)
Here is a screenshot of my error in PyCharm.
Is anyone able to help out?
cv2.rectangle(imgEloun, (faceLocation[3], faceLocation[0]), (faceLocation[1], faceLocation[2]), (255, 0, 255), 2)
You missed the parentheses grouping the coordinates for the rectangle. Each corner has to be passed as its own (x, y) tuple, in this format:
eg: cv2.rectangle(image, (5,5), (200,200), (255,255,0), 2)
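For reference, face_recognition.face_locations returns each box as (top, right, bottom, left), which is why indices 3 and 0 form the top-left corner and 1 and 2 the bottom-right. Unpacking the tuple makes the call easier to read:
top, right, bottom, left = faceLocation  # face_locations uses (top, right, bottom, left) order
cv2.rectangle(imgEloun, (left, top), (right, bottom), (255, 0, 255), 2)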

Extract car images without Mask RCNN

I want to extract car images without using Mask RCNN. I tried a couple of methods but couldn't decide how to proceed with any of them. I need a recommendation on which method would be best and how to go through with it.
Method 1 - Using XML files and haar cascade classifier
I was thinking of using XML cascade files to detect and crop car images. The problems I faced were:
It only detects cars as rectangular boxes. I needed the car itself cropped out, so cropping the boxes just gave me smaller images that still contained background. This didn't solve my problem.
The detections often covered small parts of a car rather than the whole car, possibly due to the XML file's configuration.
My code:
!wget https://raw.githubusercontent.com/shaanhk/New-GithubTest/master/cars.xml
import numpy as np
import cv2
car_cascade=cv2.CascadeClassifier('cars.xml')
img = cv2.imread('im1.jpg')
cars = car_cascade.detectMultiScale(img, 1.1, 1)
for (x,y,w,h) in cars:
    cv2.rectangle(img,(x,y),(x+w,y+h),(0,0,255),2)
cv2.imshow('image',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Resulting image:
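As an aside, if rectangular crops are acceptable, the boxes the cascade returns can be sliced out directly; a minimal sketch (the output file names are made up):
for i, (x, y, w, h) in enumerate(cars):
    crop = img[y:y+h, x:x+w]           # detectMultiScale boxes are (x, y, width, height)
    cv2.imwrite(f'car_{i}.jpg', crop)  # hypothetical output name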
Method 2 - Using Canny Edge Detection
I tried to perform Canny edge detection on the car. It worked to some extent, in that I managed to reduce the edges to mostly the car object. But I don't know how to proceed from there.
My code:
import cv2
import numpy as np
image= cv2.imread('im1.jpg')
imagecopy= np.copy(image)
grayimage = cv2.cvtColor(imagecopy, cv2.COLOR_BGR2GRAY)  # cv2.imread loads BGR, not RGB
canny= cv2.Canny(grayimage, 300,150)
cv2.imshow('Highway Edge Detection Image', canny)
cv2.waitKey(0)
cv2.destroyAllWindows()
Resulting Image:
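Not sure of the best route, but a common way to proceed from an edge map is to close the gaps and crop the bounding box of the largest contour; a minimal sketch, assuming OpenCV 4.x (where findContours returns two values) and that the car forms the dominant contour:
kernel = np.ones((5, 5), np.uint8)
closed = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, kernel)  # bridge small gaps in the edges
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    car_crop = image[y:y+h, x:x+w]
    cv2.imshow('Largest contour crop', car_crop)
    cv2.waitKey(0)
    cv2.destroyAllWindows()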
Method 3 - Extract car image using color gradients
While googling I found a method that uses an HSV transformation and then a custom mask to extract cars. But I don't know much about this method and have no idea how to go about it. I used the code provided and am posting it below.
Code:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
image = mpimg.imread('im1.jpg')
hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
# HSV channels
h = hsv[:,:,0]
s = hsv[:,:,1]
v = hsv[:,:,2]
background_hue = h[10,10]
lower_hue = np.array([background_hue-10,0,0])
upper_hue = np.array([background_hue+10,255,255])
mask = cv2.inRange(hsv, lower_hue, upper_hue)
# Mask the image to let the car show through
masked_image = np.copy(image)
masked_image[mask != 0] = [0, 0, 0]
cv2.imwrite('mask.jpg',masked_image)
# Display it!
plt.imshow(masked_image)
Image:
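If the goal is to then crop the car out of the masked result, one option is to take the bounding box of the pixels the mask kept; a small sketch, assuming the car is the only thing left unmasked:
ys, xs = np.where(mask == 0)  # pixels that did NOT match the background hue range
if ys.size:
    car_crop = masked_image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    plt.imshow(car_crop)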
I'd like to mention that I'm a complete beginner in computer vision and am trying to learn by doing small projects like this. My code is probably very flawed, and hopefully I can improve it along the way. Please feel free to mention any other method (except Mask RCNN) or any problems with my code.

Counting Objects in an image using OPENCV and Python

I'm currently trying to count the number of shrimp in a given image. I'm using this test image:
The code I have used so far is the following:
import cv2
import numpy as np
from matplotlib import pyplot as plt
#Load img
path = r'C:\Users\...' #the path to the image
original = cv2.imread(path)  # note: imread's second argument is an imread flag, not a color-conversion code
img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
#Hist to proceed with the binarization
hist = cv2.calcHist([img],[0],None,[256],[0,256])
#do the threshold
ret,thresh = cv2.threshold(img,60,255,cv2.THRESH_BINARY_INV)
From this point I have tried different morphological transformations, such as erode, dilate, open, and close, but they don't seem to separate the objects the way I want.
I've read that I can apply a watershed transform to separate touching elements, but I don't have experience with it (I'm working on that at the moment).
After that I am planning to use a Simple Blob Detector to count the blobs; I don't know if these steps are correct.
Any help is very welcomed!
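For reference, a minimal sketch of the standard distance-transform plus watershed recipe from the OpenCV tutorials, picking up from the thresh and img images above (the 0.5 peak factor and kernel sizes are assumptions to tune):
kernel = np.ones((3, 3), np.uint8)
# remove speckles, then grow the regions to get sure background
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)
sure_bg = cv2.dilate(opening, kernel, iterations=3)
# peaks of the distance transform are sure foreground (ideally one peak per shrimp)
dist = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg, sure_fg)
# label the sure-foreground blobs; reserve 0 for the unknown band
n_labels, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
# watershed needs an 8-bit 3-channel image and int32 markers
markers = cv2.watershed(cv2.cvtColor(img, cv2.COLOR_GRAY2BGR), markers)
# resulting labels: -1 boundaries, 1 background, 2..N objects
print("objects found:", len(np.unique(markers)) - 2)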
