I'm trying to use the GSOC background subtractor from OpenCV:
fgbg = cv.bgsegm_BackgroundSubtractorGSOC()
fgmask = fgbg.apply(frame)
but this gives me the following error:
fgmask = fgbg.apply(frame)
TypeError: Incorrect type of self (must be 'bgsegm_BackgroundSubtractorGSOC' or its derivative)
and
fgmask = cv.bgsegm_BackgroundSubtractorGSOC.apply(frame)
gives me this error:
fgmask = cv.bgsegm_BackgroundSubtractorGSOC.apply(frame)
TypeError: descriptor 'apply' requires a 'cv2.bgsegm_BackgroundSubtractorGSOC' object but received a 'numpy.ndarray'
The documentation for .apply() says I only need to supply an input array (the frame), the output location, and the learning rate. Changing .apply(frame) to .apply(frame, output, -1) does not fix the error.
How do I correctly create a bgsegm_BackgroundSubtractorGSOC object and use it on my image?
I read this post, but it seems I am failing at a step before that one already.
GSOC and the other background subtraction methods (other than MOG2 and KNN) are located in the extra modules and require the opencv-contrib library to be installed.
Once it is installed, the module can be used by writing:
backSub = cv.bgsegm.createBackgroundSubtractorGSOC()
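A minimal sketch of how the factory function might be used end to end (assuming opencv-contrib-python is installed and the webcam is used as the frame source):

import cv2 as cv

cap = cv.VideoCapture(0)                               # any frame source
backSub = cv.bgsegm.createBackgroundSubtractorGSOC()   # factory, not the class constructor

while True:
    ret, frame = cap.read()
    if not ret:
        break
    fgmask = backSub.apply(frame)                      # foreground mask for this frame
    cv.imshow('FG Mask', fgmask)
    if cv.waitKey(30) & 0xFF == ord('q'):
        break

cap.release()
cv.destroyAllWindows()

The key difference from the code in the question is that the object comes from the createBackgroundSubtractorGSOC() factory rather than from calling the class name directly, which is why the original call produced an object that .apply() refused as self.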
Related
I'm taking pictures with the OpenCV library:
def display_frame(self, frame, dt):
    texture = Texture.create(size=(frame.shape[1], frame.shape[0]), colorfmt='bgr')
    texture.blit_buffer(frame.tobytes(order=None), colorfmt='bgr', bufferfmt='ubyte')
    texture.flip_vertical()
    self.image.texture = texture
    cv2.imwrite('/home/mark/frontend/picture_taken.jpg', self.image)
    cam.release()
    cv2.destroyAllWindows()
The above throws:
cv2.imwrite('/home/mark/frontend/picture_taken.jpg', self.image)
TypeError: Expected Ptr<cv::UMat> for argument 'img'
I'm not including all parts of the code, but the above gives you an idea of what kind of image I'm trying to write to disk. On GitHub I found that this error is usually thrown when the image is passed in the wrong form. In some cases this can be solved by adding img = numpy.array(self.image) right above the line where the image is written. However, that didn't work for me.
How can this be fixed?
Hi, this is just a guess, but from my own experience I think your image variable is None. Can you try printing out whether your self.image has any value?
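A quick sketch of that check, plus one thing worth trying (this assumes the ndarray you actually want to save is the frame argument of display_frame, rather than the Kivy widget stored in self.image):

# Print what is actually being passed to imwrite.
print(type(self.image), self.image)

# imwrite needs a numpy array; if `frame` is the decoded camera image,
# writing it directly may be what was intended here.
cv2.imwrite('/home/mark/frontend/picture_taken.jpg', frame)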
I'm getting the error below in my code:
TypeError: 'NoneType' object is not subscriptable
line : crop_img = img[70:300, 70:300]
Can anyone please help me with this?
Thanks a lot.
img_dofh = cv2.imread("D.png", 0)
ret, img = cap.read()
cv2.rectangle(img, (60, 60), (300, 300), (255, 255, 2), 4)  # outermost rectangle
crop_img = img[70:300, 70:300]
crop_img_2 = img[70:300, 70:300]
grey = cv2.cvtColor(crop_img, cv2.COLOR_BGR2GRAY)
You don't show where your img variable comes from. But somehow it is None instead of containing image data.
Often this happens when you write a function that is supposed to return a valid object for img, but you forget to include a return statement in the function, so it automatically returns None instead.
Check the code that creates img.
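A tiny illustration of that pitfall (this is a hypothetical helper function, not taken from your code):

def grab_frame(cap):
    ret, img = cap.read()
    # oops: no "return img" here, so the function implicitly returns None

img = grab_frame(cap)
crop_img = img[70:300, 70:300]  # TypeError: 'NoneType' object is not subscriptable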
UPDATE
Responding to your code posting:
It would be helpful if you could provide a minimal, reproducible example. That might look something like this:
import cv2

cap = cv2.VideoCapture(0)
if cap.isOpened():
    ret, img = cap.read()
    if img is None:
        print("img was not returned.")
    else:
        crop_img = img[70:300, 70:300]
        print(crop_img)  # should show an array of image data
Looking at the documentation, it appears that your camera may not have grabbed any frames by the time you reach this point in your code. The documentation says "If no frames has been grabbed (camera has been disconnected, or there are no more frames in video file), the methods return false and the functions return NULL pointer." I would bet that the .read() function is returning a NULL pointer, which gets converted to None when it is sent back to Python.
Unfortunately, since no one else has your particular camera setup, other people may have trouble reproducing your problem.
The code above works fine on my MacBook Pro, but I have to give Terminal permission to use the camera the first time I try it. Have you tried restarting your terminal app? Does your program have access to the camera?
Repost of my question here: https://github.com/ageitgey/face_recognition/issues/441
I would like to load an OpenCV image object into the face_recognition library.
https://github.com/ageitgey/face_recognition
I am using Django (Python), and I have used OpenCV to crop and run some face quality checks.
Here is roughly what I did:
im_obj = request.data['image_file']
im_cv = cv2.imdecode(np.fromstring(im_obj.read(), np.uint8), cv2.IMREAD_UNCHANGED)
result, checked_cv_image = process(im_cv)  # this does some stuff like crop, etc.
unknown_image = face_recognition.load_image_file(checked_cv_image)  # this doesn't work
It raises this error:
AttributeError: 'numpy.ndarray' object has no attribute 'read'
My question:
Is there a way to load an OpenCV image into a face_recognition object?
UPDATE:
I tried this:
ret, buf_checked_cv_image = cv2.imencode( '.png', checked_cv_image )
unknown_image = face_recognition.load_image_file(buf_checked_cv_image)
This doesn't work either; same error:
AttributeError: 'numpy.ndarray' object has no attribute 'read'
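For what it's worth, a hedged sketch of one possible approach (assuming checked_cv_image is a BGR ndarray, which is what OpenCV normally produces): face_recognition's detection functions accept an RGB numpy array directly, so the load_image_file() step, which expects a file path or file-like object, can be skipped entirely.

import cv2
import face_recognition

# face_recognition expects RGB channel order; OpenCV arrays are BGR.
rgb_image = cv2.cvtColor(checked_cv_image, cv2.COLOR_BGR2RGB)

# Pass the array straight to the detection/encoding functions.
face_locations = face_recognition.face_locations(rgb_image)
face_encodings = face_recognition.face_encodings(rgb_image, face_locations)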
I'm writing a program in Python using OpenCV which detects edges (Canny edge detector) in the footage my webcam records. I'm also using two trackbars to control the threshold values (so I can understand how these values change the output of the edge detector).
The code I wrote is the following:
import cv2
import numpy as np

def nothing(x):
    pass

img = np.zeros((300, 512, 3), np.uint8)
cv2.namedWindow('cannyEdge')
cv2.createTrackbar("minVal", "cannyEdge", 0, 100, nothing)
cv2.createTrackbar("maxVal", "cannyEdge", 100, 200, nothing)

cap = cv2.VideoCapture(0)

while True:
    minVal = cv2.getTrackbarPos("minVal", "cannyEdge")
    maxVal = cv2.getTrackbarPos("maxVal", "cannyEdge")

    # capture frame by frame
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    edge = cv2.Canny(frame, minVal, maxVal)

    # display the resulting frame
    cv2.imshow('frame', edge)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
This program is for educational purposes only as I'm currently learning to use OpenCV.
Every time I run the program above, the code seems to work just fine, but I get the following error:
GLib-GObject-CRITICAL **: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
I've searched for the reason this error occurs but haven't found anything helpful. My instinct tells me that my trackbar implementation is wrong and is causing this error.
The tutorials I used are the following:
OpenCV tutorials - Canny Edge Detector
OpenCV tutorials - Trackbars
Does anybody know why this error occurs? Any help will be appreciated!
I am running Ubuntu 14.04, OpenCV 3.2.0 and Python 2.7.6
Try creating the trackbars and displaying the image in the same window and see if the error persists; I bet it won't. Change the display call to: cv2.imshow('cannyEdge', edge)
Have you created another window named "frame"? If not, it looks like you should change 'frame' to 'cannyEdge':
cv2.imshow('cannyEdge', frame)
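A sketch of what the loop might look like with both suggestions applied, i.e. displaying into the same 'cannyEdge' window that owns the trackbars (everything else unchanged from the question):

while True:
    minVal = cv2.getTrackbarPos("minVal", "cannyEdge")
    maxVal = cv2.getTrackbarPos("maxVal", "cannyEdge")

    ret, frame = cap.read()
    if not ret:
        break

    edge = cv2.Canny(frame, minVal, maxVal)
    cv2.imshow('cannyEdge', edge)  # same window the trackbars were created in

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break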
I am running into strange problems with the Python wrapper for OpenCV. I am using the cv2 binding and have been able to do a lot with it, but the latest problem is my inability to create a VideoWriter.
When I try to create a video writer using this command:
cv2.VideoWriter('foo.out.mov', cv2.cv.CV_FOURCC('m','p','4','v'), 25, (704, 480), 1)
I get the following error:
error: /builddir/build/BUILD/OpenCV-2.3.1/modules/highgui/src/cap_gstreamer.cpp:483: error: (-210) Gstreamer Opencv backend doesn't support this codec acutally. in function CvVideoWriter_GStreamer::open
When I create a VideoCapture, I can successfully retrieve frames using the read method, but any calls to the get method to retrieve parameters such as frame width, frame height, or the FOURCC code all return 0.0.
I wanted to get the exact codec from the file I am opening so I could pass it into VideoWriter, but since get only returns 0.0 I don't know what to do.
Any help would be greatly appreciated.
Try passing -1 as the fourcc parameter. This should pop up a dialog and let you choose a video codec. I use it this way, and it works great.
cv2.VideoWriter('foo.out.mov', -1, 25, (704, 480), 1)
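If the codec-selection dialog isn't available on your platform, a hedged alternative sketch is to name a codec explicitly; MJPG in an .avi container is a commonly supported combination, but that choice is an assumption about your setup rather than something taken from it:

# OpenCV 2.x helper for building a FOURCC code
# (newer versions use cv2.VideoWriter_fourcc instead).
fourcc = cv2.cv.CV_FOURCC('M', 'J', 'P', 'G')
writer = cv2.VideoWriter('foo.out.avi', fourcc, 25, (704, 480), 1)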