How to use caffe convnet for object detection in video frames? - python

I have used code from this link and successfully done the detection, but the problem is that it only works from the webcam. I tried to modify the code so that it can read from a file. The part I have modified is the following. I have written this:
print("[INFO] starting video stream...")
vs = cv2.VideoCapture('cars.avi')
time.sleep(2.0)
fps = FPS().start()
# loop over the frames from the video stream
while True:
    # grab the frame from the threaded video stream and resize it
    # to have a maximum width of 400 pixels
    frame = vs.read()
instead of this (code from the above link)
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
time.sleep(2.0)
fps = FPS().start()
# loop over the frames from the video stream
while True:
    # grab the frame from the threaded video stream and resize it
    # to have a maximum width of 400 pixels
    frame = vs.read()
For running the program from terminal I am using this command for both the cases:
python real_time_object_detection.py --prototxt
MobileNetSSD_deploy.prototxt.txt --model MobileNetSSD_deploy.caffemodel
The error I am getting when reading from the file is:
C:\Users\DEBASMITA\AppData\Local\Programs\Python\Python35\real-time-object-
detection>python videoobjectdetection.py --prototxt
MobileNetSSD_deploy.prototxt.txt --model MobileNetSSD_deploy.caffemodel
[INFO] loading model...
Traceback (most recent call last):
File "videoobjectdetection.py", line 54, in <module>
frame = imutils.resize(frame, width=400)
File "C:\Users\DEBASMITA\AppData\Local\Programs\Python\Python35\lib\site-
packages\imutils\convenience.py", line 69, in resize
(h, w) = image.shape[:2]
AttributeError: 'tuple' object has no attribute 'shape'
I don't know what I am doing wrong. Please guide me.

I am unfamiliar with the code you are referencing, but the error is straightforward, and similar errors have been answered in other questions: you're trying to use an attribute that a plain tuple object doesn't have. Here's an example of this Python concept using a common package, numpy, for arrays:
#an example of the error you are getting with a plain tuple
>>> tup = (1, 2, 3, 4)
>>> len(tup)
4
>>> tup.shape
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'tuple' object has no attribute 'shape'
#an example that uses an attribute called 'shape'
>>> import numpy as np
>>> x = np.array([1,2,3,4])
>>> x.shape
(4,)
>>> x.shape.shape
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'tuple' object has no attribute 'shape'
As you can see in my last two lines, the first time I call .shape on the numpy array, the call is valid; it returns a tuple, so the final .shape.shape call is invalid because it is operating on the tuple (4,). As for how to fix it, I don't know. For example, in this question the original poster thought they were getting back some kind of image object, when instead they were getting a tuple (perhaps a tuple containing an image object).

Something similar is happening to you: your vs.read() call is returning a tuple, so when you call imutils.resize(frame, width=400) you are passing in a tuple, not an image or frame, and when that function tries to access .shape you get the error. cv2.VideoCapture.read() may return a tuple by design, or as an error condition; you'd have to read up on it to be sure.
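For what it's worth, here is a minimal sketch of a file-based loop that unpacks that tuple, assuming the cars.avi file and the imutils package from the question:
import cv2
import imutils

vs = cv2.VideoCapture('cars.avi')
while True:
    grabbed, frame = vs.read()      # cv2.VideoCapture.read() returns a (flag, frame) tuple
    if not grabbed:                 # end of file or a read error
        break
    frame = imutils.resize(frame, width=400)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
vs.release()
cv2.destroyAllWindows()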

Related

AttributeError: 'NoneType' object has no attribute 'copy' opencv error coming when running code [duplicate]

This question already has answers here:
Why do I get AttributeError: 'NoneType' object has no attribute 'something'?
I am having an issue with this Python code:
Code And Other Things
The Error Is:
Traceback (most recent call last):
File "C:\Users\thaku\Desktop\projects\pythoncode-tutorials-master\machine-learning\face-age-prediction\predict_age.py", line 156, in <module>
predict_age(image_path)
File "C:\Users\thaku\Desktop\projects\pythoncode-tutorials-master\machine-learning\face-age-prediction\predict_age.py", line 113, in predict_age
frame = img.copy()
AttributeError: 'NoneType' object has no attribute 'copy'
I was trying OpenCV while learning Python. The project is to predict age, but this error has wasted days of my time, so please help me.
To run the code: python .\predict_age.py /tmp
(running it any other way just gives different errors)
def predict_age(input_path: str):
    """Predict the age of the faces showing in the image"""
    # Read Input Image
    img = cv2.imread(input_path)
    # Take a copy of the initial image and resize it
    frame = img.copy()
It seems your error happens here because img is None, so it has no copy() method to call.
You said you are running the code like this:
.\predict_age.py /tmp
I can see that the code initialises img with input_path, which is passed as sys.argv[1]. Well, /tmp is not really an image; could you try passing an actual image, like .\predict_age.py /tmp/my_image.png?
The error means that the img variable from cv2.imread(input_path) is None. I.e., something went wrong with reading the image from input_path.
In your main code, you write
import sys
image_path = sys.argv[1]
predict_age(image_path)
So the image path is given by the first argument to the program. Are you running the code as python predict_age.py 3-people.jpg?
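As a minimal sketch of the check both answers point to (the image filename here is hypothetical):
import sys
import cv2

image_path = sys.argv[1]            # e.g. 3-people.jpg, not a directory like /tmp
img = cv2.imread(image_path)        # cv2.imread returns None if the path is not a readable image
if img is None:
    sys.exit("Could not read image: " + image_path)
frame = img.copy()                  # safe now that img is a real array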

rioxarray open netCDF file: result is a list, not an xarray

I am trying to open a netCDF file using rioxarray:
import rioxarray
import xarray
import raster
xds = rioxarray.open_rasterio(file, crs='+proj=latlong', masked=True)
but:
type(xds)
list
and xds has none of the attributes or methods of an xarray.
xds_lonlat = xds.rio.reproject("epsg:4326")
AttributeError: 'list' object has no attribute 'rio'

clipped = xds.rio.clip(mask.geometry, mask.crs, drop=False, invert=True)
AttributeError: 'list' object has no attribute 'rio'
Can anyone advise?
I recently encountered this when I was opening a netCDF (with rioxarray) that had multiple variables. Since it returns a list, you would not expect it to have any of the rioxarray attributes or methods.
The documentation for the function is here: https://corteva.github.io/rioxarray/stable/rioxarray.html
One of the return types is List[xarray.Dataset], so I think this behavior is expected.
My guess is that you want one of the entries in the list like xds=xds[0], though it's hard to know without having more information about the file that you are opening.
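For illustration, a sketch of handling both possible return types; the filename here is hypothetical:
import rioxarray

result = rioxarray.open_rasterio("my_file.nc", masked=True)  # hypothetical file
if isinstance(result, list):
    # multi-variable netCDF: a list of datasets is returned
    xds = result[0]  # or pick whichever dataset holds the variable you need
else:
    xds = result
xds_lonlat = xds.rio.reproject("epsg:4326")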

How to use "xphoto_WhiteBalancer.balanceWhite" with Python and OpenCV?

I am looking for a straightforward way of applying automatic white balance to an image.
I found some official documentation about a balanceWhite() method: cv::xphoto::WhiteBalancer Class Reference
However, I have an obscure error when I try to call the function as shown in the example.
image = cv2.xphoto_WhiteBalancer.balanceWhite(image)
Raises:
Traceback (most recent call last):
File "C:\Users\Delgan\main.py", line 80, in load
image = cv2.xphoto_WhiteBalancer.balanceWhite(image)
TypeError: descriptor 'balanceWhite' requires a 'cv2.xphoto_WhiteBalancer' object but received a 'numpy.ndarray'
If then I try to use a cv2.xphoto_WhiteBalancer object as required:
balancer = cv2.xphoto_WhiteBalancer()
cv2.xphoto_WhiteBalancer.balanceWhite(balancer, image)
It raises:
Traceback (most recent call last):
File "C:\Users\Delgan\main.py", line 81, in load
cv2.xphoto_WhiteBalancer.balanceWhite(balancer, image)
TypeError: Incorrect type of self (must be 'xphoto_WhiteBalancer' or its derivative)
Did anyone succeed in using this feature with Python 3.6 and OpenCV 3.4?
I also tried the derived classes GrayworldWB, LearningBasedWB and SimpleWB, but the errors are the same.
The answer can be found in the xphoto documentation.
The appropriate methods to create the WB algorithms are createSimpleWB(), createLearningBasedWB() and createGrayworldWB().
Example:
wb = cv2.xphoto.createGrayworldWB()
wb.setSaturationThreshold(0.99)
image = wb.balanceWhite(image)
Here is a sample file in the official OpenCV repository: modules/xphoto/samples/color_balance_benchmark.py
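For completeness, a self-contained sketch of the same approach, assuming opencv-contrib-python is installed (the xphoto module lives in the contrib package) and using a hypothetical input file:
import cv2

image = cv2.imread("input.jpg")            # hypothetical input image
wb = cv2.xphoto.createGrayworldWB()        # factory function from the xphoto module
wb.setSaturationThreshold(0.99)
balanced = wb.balanceWhite(image)          # returns the white-balanced image
cv2.imwrite("balanced.jpg", balanced)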

Error in OpenCV Python while loading video

Actually I was loading a video using k=cv2.VideoCapture("it.mp4"), which is in the same folder, but when I check whether it is opened, it returns False. And when I use k.open() to open it, it shows me this error:
Traceback (most recent call last):
File "", line 1, in
TypeError: Required argument 'filename' (pos 1) not found
I think it is not finding the file, but the video is in the same folder. I have been stuck on this for a long time.
Here is the code:
import numpy as np
import cv2
cap=cv2.VideoCapture("it.mp4")
k=cap.isOpened()
if k==False:
    cap.open()
And it shows the error below:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Required argument 'filename' (pos 1) not found
By looking at your code it is easy to figure out why you are getting this error. The reason is that you are using cap.open() without any arguments. You need to pass the filename to cap.open() in order to initialize the cv2.VideoCapture. So your code should be
import numpy as np
import cv2
cap=cv2.VideoCapture("it.mp4")
k=cap.isOpened()
if k==False:
cap.open("it.mp4")
In order to read the frames from cap you can use a loop like this
while True:
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
You need to pass an argument to cap.open(). In your case:
cap.open("it.mp4")
It must be either the device id, if you are using a camera, or the filename of the video you want to read. Check out the page here.
But the actual issue here, I think, is that your OpenCV is not able to read the video you passed, which is what you are trying to fix. Either the file name or the extension is wrong.
If it's neither, go to the path C:\opencv\build\x86\vc12\bin, copy opencv_ffmpegabcd.dll and paste it into your Python root directory ("abcd" here is your OpenCV version). If it's a 64-bit setup, copy the corresponding one.
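As a quick sanity check for the wrong-filename case mentioned above, a small sketch using the path from the question:
import os
import cv2

video_path = "it.mp4"
if not os.path.isfile(video_path):          # rules out a wrong name or wrong working directory
    raise FileNotFoundError(video_path)
cap = cv2.VideoCapture(video_path)
if not cap.isOpened():                      # file exists but the backend/codec could not open it
    raise RuntimeError("OpenCV could not open " + video_path)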

Memory leak with VideoCapture in Python OpenCV

I am using 3 webcams to occasionally take snapshots in OpenCV. They are connected to the same usb bus, which does not allow for all 3 connections at the same time due to usb bandwidth limitations (lowering the resolutions allows at most 2 simultaneous connections and I don't have more usb buses).
Because of this, I have to switch webcam connections every time I want to take a snapshot, but this causes a memory leak after some 40 switches.
This is the error I get:
libv4l2: error allocating conversion buffer
mmap: Cannot allocate memory
munmap: Invalid argument
munmap: Invalid argument
munmap: Invalid argument
munmap: Invalid argument
Unable to stop the stream.: Bad file descriptor
munmap: Invalid argument
munmap: Invalid argument
munmap: Invalid argument
munmap: Invalid argument
libv4l1: error allocating v4l1 buffer: Cannot allocate memory
HIGHGUI ERROR: V4L: Mapping Memmory from video source error: Invalid argument
HIGHGUI ERROR: V4L: Initial Capture Error: Unable to load initial memory buffers.
OpenCV Error: Bad flag (parameter or structure field) (Unrecognized or
unsupported array type) in cvGetMat, file
/build/buildd/opencv-2.3.1/modules/core/src/array.cpp, line 2482
Traceback (most recent call last):
File "/home/irobot/project/test.py", line 7, in <module>
cv2.imshow('cam', img)
cv2.error: /build/buildd/opencv-2.3.1/modules/core/src/array.cpp:2482:
error: (-206) Unrecognized or unsupported array type in function cvGetMat
This is a simple piece of code that generates this error:
import cv2
for i in range(0, 100):
    print i
    cam = cv2.VideoCapture(0)
    success, img = cam.read()
    cv2.imshow('cam', img)
    del(cam)
    if cv2.waitKey(5) > -1:
        break
cv2.destroyAllWindows()
It may be worth noting that I get VIDIOC_QUERYMENU: Invalid argument errors every time the camera connects, although I can then still use the camera.
As some extra info, this is my v4l2-ctl -V output of the webcam:
~$ v4l2-ctl -V
Format Video Capture:
Width/Height : 640/480
Pixel Format : 'YUYV'
Field : None
Bytes per Line: 1280
Size Image : 614400
Colorspace : SRGB
What causes these errors and how can I fix them?
The relevant snippet of the error message is Unrecognized or unsupported array type in function cvGetMat. The cvGetMat() function converts arrays into a Mat, the matrix data type that OpenCV uses in the world of C/C++. (Note: the Python OpenCV interface you are using works with NumPy arrays, which are converted behind the scenes into Mat arrays.) With that background in mind, the problem appears to be that the img array you're passing to cv2.imshow() is poorly formed. Two ideas:
1. This could be caused by quirky behaviour of your webcam: on some cameras null frames are returned from time to time. Before you pass the img array to imshow(), try ensuring that it is not None.
2. If the error occurs on every frame, then eliminate some of the processing that you are doing and call cv2.imshow() immediately after you grab the frame from the webcam. If that still doesn't work, you'll know it's a problem with your webcam. Otherwise, add back your processing line by line until you isolate the problem. For example, start with this:
while True:
    # Grab frame from webcam
    retVal, image = capture.read()  # note: ignore retVal
    # faces = cascade.detectMultiScale(image, scaleFactor=1.2, minNeighbors=2, minSize=(100,100), flags=cv.CV_HAAR_DO_CANNY_PRUNING)
    # Draw rectangles on image, and then show it
    # for (x, y, w, h) in faces:
    #     cv2.rectangle(image, (x, y), (x+w, y+h), 255)
    cv2.imshow("Video", image)
    i += 1
source: Related Question: OpenCV C++ Video Capture does not seem to work
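For illustration, a minimal sketch of the null-frame guard from the first point, also releasing each capture explicitly between snapshots (cv2.VideoCapture.release() is the standard cleanup call):
import cv2

for i in range(100):
    cam = cv2.VideoCapture(0)
    success, img = cam.read()
    if success and img is not None:     # skip null frames instead of passing them to imshow
        cv2.imshow('cam', img)
    cam.release()                       # free the capture's buffers instead of relying on del
    if cv2.waitKey(5) > -1:
        break
cv2.destroyAllWindows()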
