Overall, I am hoping to access an IP camera using OpenCV, crop and adjust its image properties (saturation, contrast, brightness), and then output the result as a new stream.
I have very little knowledge of Python/OpenCV and am doing my best to piece this together from what I can find.
I have been able to access the MJPEG stream; however, every way of cropping I have found seems to fail. The code below seems the most promising, but I am open to alternative methods.
I have achieved the result I'm after using Max MSP and Syphon; however, my hope is that with OpenCV I will be able to make this completely web-based and accessible.
I am hoping to avoid splitting the stream into individual JPEGs, but if that is the only way to achieve what I'm after, please let me know.
Any and all guidance is greatly appreciated.
import cv2
import numpy as np

cap = cv2.VideoCapture('http://89.29.108.38:80/mjpg/video.mjpg')

(x, y, w, h) = cv2.boundingRect(c)
cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 20)
roi = frame[y:y+h, x:x+w]

while True:
    ret, frame = cap.read()
    cv2.imshow('Video', frame)

    if cv2.waitKey(1) == 27:
        exit(0)
Traceback (most recent call last):
  File "mjpeg-crop.py", line 6, in <module>
    (x, y, w, h) = cv2.boundingRect(c)
NameError: name 'c' is not defined
Too much for a comment, but try this to get started:
import cv2
import numpy as np

cap = cv2.VideoCapture('http://89.29.108.38:80/mjpg/video.mjpg')

# (x, y, w, h) = cv2.boundingRect(c)
# cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 20)
# roi = frame[y:y+h, x:x+w]

while True:
    ret, frame = cap.read()

    # (height, width) = frame.shape[:2]
    sky = frame[0:100, 0:200]
    cv2.imshow('Video', sky)

    if cv2.waitKey(1) == 27:
        exit(0)
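Since the question also asks about adjusting saturation, contrast and brightness, here is a minimal sketch of one way to do that on each cropped frame before displaying (or re-streaming) it. The crop region and the scaling factors (1.3, 1.2, 10) are arbitrary placeholders, not tuned values:

import cv2
import numpy as np

cap = cv2.VideoCapture('http://89.29.108.38:80/mjpg/video.mjpg')

while True:
    ret, frame = cap.read()
    if not ret:
        break

    roi = frame[0:100, 0:200]                      # crop first

    # saturation: scale the S channel in HSV space
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[:, :, 1] = np.clip(hsv[:, :, 1] * 1.3, 0, 255)
    roi = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

    # contrast (alpha) and brightness (beta)
    roi = cv2.convertScaleAbs(roi, alpha=1.2, beta=10)

    cv2.imshow('Video', roi)
    if cv2.waitKey(1) == 27:
        break

cap.release()
cv2.destroyAllWindows()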
I've written a very simple script for detecting cars when given footage:
import cv2 as cv

cap = cv.VideoCapture(1)
car_cascade = cv.CascadeClassifier('assets/cars.xml')

while True:
    ret, frame = cap.read()
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    cars = car_cascade.detectMultiScale(gray, 1.1, 1)

    for (x, y, w, h) in cars:
        cv.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)

    # Display the resulting frame
    cv.imshow('frame', frame)
    if cv.waitKey(1) == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv.destroyAllWindows()
I'm using the following file for my cars.xml: https://github.com/Aman-Preet-Singh-Gulati/Vehicle-count-detect/blob/main/Required%20Files/cars.xml. I've seen several projects that utilize this same Cascade file as well.
My problem is that when I run the video I see a screen like this, where hundreds of elements in the video are categorized as "cars" by the detectMultiScale function. I've been struggling to find anything on why this might be occurring.
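For reference, the third positional argument in the detectMultiScale call above is minNeighbors, and a value of 1 keeps almost every candidate window, which tends to produce exactly this flood of boxes. A sketch of the same call with stricter (but still arbitrary, untuned) parameters:

cars = car_cascade.detectMultiScale(
    gray,
    scaleFactor=1.1,   # how much the image is shrunk between scales
    minNeighbors=5,    # require several overlapping detections before accepting a car
    minSize=(40, 40)   # ignore very small candidate windows
)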
I need some help with a project. My intention is to crop videos of sonographies with OpenCV and Python in order to process them further. The features I am looking for are:
Loop through all available videos in a folder
Find the contours and crop
Export each video with one fixed size and resolution
Now I am a bit stuck on the contour-finding and cropping part. I would like OpenCV to automatically recognize a bounding box around the shape of the sonography, knowing that all videos have the particular cone shape. Besides, it would be great if the non-relevant clutter could be removed. Can you help me? Attached you can find one original frame of the videos and the desired result.
import cv2
import numpy as np

cap = cv2.VideoCapture('video.mjpg')

# (x, y, w, h) = cv2.boundingRect(c)
# cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 20)
# roi = frame[y:y+h, x:x+w]

while True:
    ret, frame = cap.read()

    # (height, width) = frame.shape[:2]
    sky = frame[0:100, 0:200]
    cv2.imshow('Video', sky)

    if cv2.waitKey(1) == 27:
        exit(0)
For the first frame of the video, you can use this to detect the bounding box of the image, and then you can crop it or do whatever else you want :)
import sys
import cv2
import numpy as np
# Load our image
dir = sys.path[0]
org = cv2.imread(dir+'/im.png')
im=org.copy()
H,W=im.shape[:2]
# Convert image to grayscale
im=cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
# remove noise
im=cv2.GaussianBlur(im,(21,21),21)
im=cv2.erode(im,np.ones((5,5)))
# remove horizontal line
im=cv2.GaussianBlur(im,(5,0),21)
blr=im.copy()
# make binary image
im=cv2.threshold(im,5,255,cv2.THRESH_BINARY)[1]
# draw black border around image to better detect blobs:
cv2.rectangle(im,(0,0),(W,H),0,thickness=W//25)
bw=im.copy()
# Invert the black and white colors
im=~im
# Find contours and sort them by width
cnts, _ = cv2.findContours(im, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
cnts = sorted(cnts, key=lambda c: cv2.boundingRect(c)[2], reverse=True)  # use sorted() so this also works when findContours returns a tuple
# Change the type and channels of image copies
im=cv2.cvtColor(im,cv2.COLOR_GRAY2BGR)
bw=cv2.cvtColor(bw,cv2.COLOR_GRAY2BGR)
blr=cv2.cvtColor(blr,cv2.COLOR_GRAY2BGR)
# Find the second biggest blob
x, y, w, h = cv2.boundingRect(cnts[1])
cv2.rectangle(org, (x, y), (x+w, y+h), (128, 0, 255), 10)
cv2.rectangle(im, (x, y), (x+w, y+h), (128, 255, 0), 10)
print(x,y,w,h)
# Save final result
top=np.hstack((blr,bw))
btm=np.hstack((im,org))
cv2.imwrite(dir+'/img_.png',np.vstack((top,btm)))
Bounding-Box area:
133 25 736 635
Cut and save the final image:
org = cv2.imread(dir+'/im.png')
cv2.imwrite(dir+'/img_.png',org[y:y+h,x:x+w])
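Once (x, y, w, h) is known from the first frame, here is a rough sketch of applying that same crop to every frame of a video and exporting it at one fixed resolution, as the question asks. The output size, codec and file names are only placeholders:

import cv2

x, y, w, h = 133, 25, 736, 635            # values found above
out_size = (640, 480)                     # arbitrary fixed output resolution

cap = cv2.VideoCapture('video.mjpg')
fps = cap.get(cv2.CAP_PROP_FPS) or 25     # fall back if the container reports no FPS
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
writer = cv2.VideoWriter('cropped.mp4', fourcc, fps, out_size)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    roi = frame[y:y+h, x:x+w]             # apply the detected bounding box
    writer.write(cv2.resize(roi, out_size))

cap.release()
writer.release()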
I'm hoping to be able to isolate a small rectangular section of the area that is returned by a haar cascade (the cascade I'm using detects faces, so for example I would like to be able to isolate just the forehead within a given face). I know that training it specifically to detect the area I want is an option, but I'm hoping that it is easy to specify an arbitrary area within the face (for example, the top 20% of the rectangle). I include the code I'm using below:
import cv2
import numpy as np
from matplotlib import pyplot as plt

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture("resources/video/EXAMPLE.mp4")

while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 9)

    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)

    cv2.imshow('img', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()
Is there a way to manipulate/gain info about the pixels in "faces"? Any help/pointers would be hugely appreciated.
Basically, you can divide h by 3 to get the forehead:

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x+w, int(y+h/3)), (255, 0, 0), 2)

but if you want more precise results you can use landmark detection.
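To actually isolate that region rather than just draw it, you can slice the frame along the same lines. A small sketch; the 1/3 fraction is just an example, and for the "top 20%" mentioned in the question you could use h // 5 instead:

for (x, y, w, h) in faces:
    forehead = img[y:y + h // 3, x:x + w]   # top third of the detected face box
    cv2.imshow('forehead', forehead)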
I have to detect faces using OpenCV and Python, and then identify whether each detected face is on the right, on the left, or in the middle of the screen.
I have already succeeded in detecting faces using the code below, but I still need to work out the position of the faces. Could someone please help me?
import cv2
import sys
import numpy as np

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(1)

while True:
    # capture frame by frame
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray,
                                          scaleFactor=1.1,
                                          minNeighbors=5,
                                          minSize=(30, 30),
                                          flags=cv2.CASCADE_SCALE_IMAGE)  # cv2.cv.CV_HAAR_SCALE_IMAGE on OpenCV 2.x

    # Draw a rectangle around the faces
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)

    cv2.imshow('video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
You could get the centre of the rectangle:

centre_x = x + w/2
centre_y = y + h/2

Then compare it with the size of the image. Assuming you have the image shape information:

height, width, channels = frame.shape  # it could be gray.shape too

You can tell, for example, that the face was detected on the left half of the image by checking centre_x < width/2.
You have all the information needed to divide the image into a grid and work out where the rectangle places itself.
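A small sketch of that idea, splitting the frame into three vertical bands; the thresholds are just the obvious thirds, so adjust them if you want a narrower "middle":

for (x, y, w, h) in faces:
    centre_x = x + w / 2
    height, width = frame.shape[:2]

    if centre_x < width / 3:
        position = 'left'
    elif centre_x < 2 * width / 3:
        position = 'middle'
    else:
        position = 'right'

    print(position)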
I need to find the centre of a rectangle that gets put around a face when it's detected in OpenCV. I am using Python in Visual Studio.
Here is the code I am running:
#!/usr/bin/env python
from cv2 import *
import sys

cascPath = sys.argv[1]
faceCascade = CascadeClassifier(cascPath)

video_capture = VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    gray = cvtColor(frame, COLOR_BGR2GRAY)

    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=5,
        minSize=(30, 30),
        flags=CASCADE_SCALE_IMAGE
    )

    # Draw a rectangle around the faces
    for (x, y, w, h) in faces:
        rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)

    font = FONT_HERSHEY_SIMPLEX
    # Draw text on the frame
    putText(frame, 'Hayden', (10, 100), font, 2, (255, 255, 255), 2, LINE_AA)

    # Display the resulting frame
    imshow('Video', frame)
    if waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
destroyAllWindows()
All I want to do is find the centre of the rectangle, any help will be greatly appreciated!
I'm really sorry but I don't know Python. The code for this in C++ is:

Point center = Point(rectangle.x + rectangle.width/2, rectangle.y + rectangle.height/2);

I'd be surprised if this didn't translate almost exactly to Python.
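For reference, a direct Python equivalent, assuming (x, y, w, h) is one of the tuples returned by detectMultiScale in the question's loop:

for (x, y, w, h) in faces:
    center = (x + w // 2, y + h // 2)   # (centre_x, centre_y) of the face rectangle
    print(center)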