Cvlib not showing boxes, labels and confidence - python

I am trying to replicate a simple object detection example that I found on a website.
import cv2
import matplotlib.pyplot as plt
import cvlib as cv
from cvlib.object_detection import draw_bbox
im = cv2.imread('downloads.jpeg')
bbox, label, conf = cv.detect_common_objects(im)
output_image = draw_bbox(im, bbox, label, conf)
plt.imshow(output_image)
plt.show()
All required libraries are installed and there are no errors running the code. However, it does not show the output image with the boxes, labels and confidence. How do I fix it?

After loading the image, use an assert to make sure it was actually read; cv2.imread returns None instead of raising an error when the file can't be found:
img = cv2.imread('downloads.jpeg')
assert not isinstance(img, type(None)), 'image not found'
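For reference, a minimal sketch that puts this check together with the original pipeline (the filename 'downloads.jpeg' is taken from the question; the BGR-to-RGB conversion is an extra suggestion, since cv2 loads images in BGR order while matplotlib expects RGB):

import cv2
import matplotlib.pyplot as plt
import cvlib as cv
from cvlib.object_detection import draw_bbox

im = cv2.imread('downloads.jpeg')
assert not isinstance(im, type(None)), 'image not found'

bbox, label, conf = cv.detect_common_objects(im)
print(bbox, label, conf)  # empty lists mean the detector found nothing, so no boxes get drawn
output_image = draw_bbox(im, bbox, label, conf)
plt.imshow(cv2.cvtColor(output_image, cv2.COLOR_BGR2RGB))  # convert BGR to RGB for matplotlib
plt.show()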

plt.show() not showing images

My code is as follows:
import torch
import torchvision
import torchvision.transforms as transforms
import numpy
import matplotlib
import matplotlib.pyplot as plt
torch.set_printoptions(linewidth=120)
train_set = torchvision.datasets.FashionMNIST(
    root='./data/FashionMNIST',
    train=True,
    download=True,
    transform=transforms.Compose([
        transforms.ToTensor()
    ])
)
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=10
)
sample = next(iter(train_set))
image, label = sample
plt.imshow(image.squeeze(), cmap='gray')
plt.show()
print(f"label:{label}")
I am trying to display an image via matplotlib.pyplot, but nothing happens.
Also, I'm running this on my Linux server, while the same code works fine locally in VS Code.
As an alternative to viewing your matplotlib window remotely, you can always save your plot as an image file and copy it to your local machine. This is as simple as using plt.savefig:
plt.savefig(f'label:{label}.png')
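If you would rather keep the script unchanged except for the output step, here is a minimal sketch for a headless server (assuming no display is available; the output filename is an arbitrary choice) that selects the non-interactive Agg backend and writes the figure to disk:

import matplotlib
matplotlib.use('Agg')  # non-interactive backend, needs no display; set before importing pyplot
import matplotlib.pyplot as plt
import torchvision
import torchvision.transforms as transforms

train_set = torchvision.datasets.FashionMNIST(
    root='./data/FashionMNIST', train=True, download=True,
    transform=transforms.Compose([transforms.ToTensor()])
)
image, label = next(iter(train_set))
plt.imshow(image.squeeze(), cmap='gray')
plt.savefig(f'label_{label}.png')  # copy this file to your local machine to view it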

Frame and keras model

I have a model that predicts the steering angle of a car from a picture, and I would like to use it in an Android project by feeding it frames from the camera.
In my Python code I use an .h5 file instead of the .tflite file; the picture is converted to a NumPy array and processed with the cv2 library.
Python Code:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model
from PIL import Image
import cv2
def img_preprocess(img):
    img = img[60:135, :, :]                     # crop to the road region of the frame
    img = cv2.cvtColor(img, cv2.COLOR_RGB2YUV)  # convert RGB to YUV
    img = cv2.GaussianBlur(img, (3, 3), 0)      # light blur to reduce noise
    img = cv2.resize(img, (200, 66))            # resize to the network's input size
    img = img / 255                             # scale pixel values to [0, 1]
    return img

if __name__ == '__main__':
    model = load_model('model.h5')
    image1 = Image.open("Gta2.png")
    image1 = np.asarray(image1)
    image1 = img_preprocess(image1)
    image1 = np.array([image1])                 # add the batch dimension
    steering_angle = float(model.predict(image1))
    if steering_angle > 0:
        print('turn right')
        print('turn wheel : {}'.format(steering_angle))
    else:
        print('turn left')
        print('turn wheel : {}'.format(steering_angle))
I've imported model.tflite into my project's assets, and now I need to process the CameraBridgeViewBase.CvCameraViewFrame object from the camera so that it fits my model's input.
the model input and output.
So my questions are:
How do I process the CvCameraViewFrame object the way the Python img_preprocess function does?
How do I get the model's input specs?
My Android code:
You can first convert your Mat frame to a Bitmap, as in the solution to this question.
Then use the TFLite Task Library's ImageClassifier to run the inference. Here is an example of how to use ImageClassifier, taken from the TFLite image classification reference app.
Alternatively, you can write your own code to process the image and run inference with the Interpreter. See the example here.
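If it helps to see the TFLite side in Python first, here is a minimal sketch (assuming the .tflite file was converted from the same model.h5, and reusing img_preprocess from the question) that prints the input specs and runs one image through tf.lite.Interpreter:

import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details[0]['shape'], input_details[0]['dtype'])  # e.g. [1, 66, 200, 3] float32

img = img_preprocess(np.asarray(Image.open("Gta2.png")))     # img_preprocess from the question above
interpreter.set_tensor(input_details[0]['index'], np.float32([img]))
interpreter.invoke()
steering_angle = float(interpreter.get_tensor(output_details[0]['index']))
print('turn wheel : {}'.format(steering_angle))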

How can I plot my trained model result on video using Detectron2?

I am new to Detectron2. I want to load a video from my local drive and then run detection on it with my trained model, using Detectron2's VideoVisualizer.
I tried to find a tutorial about this, but couldn't find one. Could you please tell me what to do?
Thank you.
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
# import some common libraries
import os
import numpy as np
import tqdm
import cv2
# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.video_visualizer import VideoVisualizer
from detectron2.utils.visualizer import ColorMode, Visualizer
from detectron2.data import MetadataCatalog
import time
video = cv2.VideoCapture('gdrive/My Drive/video.mp4')
width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.OUTPUT_DIR = 'gdrive/My Drive/mask_rcnn/'
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7 # set threshold for this model
predictor = DefaultPredictor(cfg)
v = VideoVisualizer(MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), ColorMode.IMAGE)
First, check the following tutorial (you can skip the training parts if you don't want to train on your own data):
https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5#scrollTo=Vk4gID50K03a
Then, look at the following code to see how to run inference on video:
https://github.com/facebookresearch/detectron2/blob/master/demo/demo.py
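For reference, a minimal sketch of the frame loop, modeled on the demo.py linked above (the output path and codec are assumptions; video, predictor, and v come from the question's code):

import cv2

def run_on_video(video, predictor, visualizer, output_path, fps=30.0):
    width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    while True:
        ok, frame = video.read()      # OpenCV returns frames in BGR order
        if not ok:
            break
        outputs = predictor(frame)    # DefaultPredictor expects a BGR image
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        vis = visualizer.draw_instance_predictions(rgb, outputs["instances"].to("cpu"))
        writer.write(cv2.cvtColor(vis.get_image(), cv2.COLOR_RGB2BGR))
    writer.release()
    video.release()

run_on_video(video, predictor, v, 'gdrive/My Drive/output.mp4')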

In pillow library, BICUBIC is not working

This is my code.
import sys, os
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from scipy import *
sys.path.insert(0, 'C:/research')
im = Image.open('C:/research/1.jpg')
hei, wei = im.height, im.width
im_bicubic = im.resize((wei,hei), im.BICUBIC)
im.save('C:/research/1ori.jpg') #original image
im_bicubic.save('C:/research/1bic.jpg') #Images with bicubic applied
But I get this error.
AttributeError: 'JpegImageFile' object has no attribute 'BICUBIC'
Why is this message coming up?
With .bmp, the same message pops up.
What should I do?
You need to use PIL.Image.BICUBIC instead of im.BICUBIC.
So you need to change:
im_bicubic = im.resize((wei,hei), im.BICUBIC)
to
im_bicubic = im.resize((wei, hei), PIL.Image.BICUBIC)
You also need to import PIL like so:
import PIL
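Since the question already does from PIL import Image, the same fix also works without the extra import; a minimal corrected version, keeping the paths from the question, would be:

from PIL import Image

im = Image.open('C:/research/1.jpg')
hei, wei = im.height, im.width
# BICUBIC is a constant on the PIL.Image module, not on the opened image object
im_bicubic = im.resize((wei, hei), Image.BICUBIC)
im_bicubic.save('C:/research/1bic.jpg')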

Error while converting rgb to lab : python

import skimage
from skimage import io, color
import numpy as np
import scipy.ndimage as ndi
rgb = io.imread('img.jpg')
lab = color.rgb2lab(skimage.img_as_float(rgb))
l_chan1 = lab[:,:,0]
l_chan1 /= np.max(np.abs(l_chan1))
l_chan_med = ndi.median_filter(l_chan1, size=5)
skimage.io.imshow(l_chan_med)
I am trying to do some image processing. While changing the color space, I get an error from the rgb2lab function: "'module' object has no attribute 'rgb2lab'". I have imported all the required libraries. Any suggestions will be appreciated.
Try this:
import skimage
from skimage import io
from skimage.color import rgb2lab
import numpy as np
import scipy.ndimage as ndi
rgb = io.imread('img.jpg')
lab = rgb2lab(skimage.img_as_float(rgb))
l_chan1 = lab[:,:,0]
l_chan1 /= np.max(np.abs(l_chan1))
l_chan_med = ndi.median_filter(l_chan1, size=5)
skimage.io.imshow(l_chan_med)
I don't know what's wrong, but your code runs fine on my machine (Python 2.7.6, OS X Yosemite). I would try reinstalling scikit-image.
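As a quick sanity check (just an illustration) before or after reinstalling, you can confirm which scikit-image is actually being imported and that it exposes rgb2lab:

import skimage
from skimage import color

print(skimage.__version__)        # version of the scikit-image actually being imported
print(skimage.__file__)           # should point into site-packages, not a local file shadowing the package
print(hasattr(color, 'rgb2lab'))  # should print True on a healthy install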
