I'm using OpenCV 3.0.0 and Python 2.7.9 to pull images of detected objects out of a live video stream and to categorize them, using the OpenCV Machine Learning (cv2.ml) Support Vector Machine (SVM), as either belonging to a specific object class or not.
The code that I use to train the SVM generates SIFT keypoints in the images, uses KMEANS clustering, and then feeds into the SVM training algorithm. All of that works fine, but because it isn't necessarily a part of the required operational code, I did it separately and saved the SVM model to a .dat file using:
svm = cv2.ml.SVM_create()
# ... training as described above ...
svm.save('datafile.dat')
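(For context, the feature-extraction and clustering part of that pipeline looks roughly like the sketch below. This assumes training_images is a list of grayscale images and that opencv_contrib is installed, since SIFT lives in the xfeatures2d module in OpenCV 3.x; the resulting visual-word histograms are what get fed to svm.train().)

import cv2
import numpy as np

# Collect SIFT descriptors from every training image
sift = cv2.xfeatures2d.SIFT_create()
descriptors = []
for img in training_images:
    _, des = sift.detectAndCompute(img, None)
    descriptors.append(des)
all_des = np.float32(np.vstack(descriptors))

# Cluster the descriptors into a visual vocabulary (50 clusters, chosen arbitrarily)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(all_des, 50, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)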
The problem is that the svm.load() function is not implemented at all in OpenCV 3.0.0.
I've also tried using StatModel(model) to load it, and that didn't work either.
I'm pretty invested in the Python portion of this project so far and would rather not reprogram it in C++; and now that I have the SVM working on the training side, I would prefer not to switch to something in SciPy.
I'm hoping that the load feature is somehow renamed and just not well documented. Any ideas?
Unfortunately it is a bug. See also this question.
If you check the help for SVM_create(), you will see that there is no read() or load() function, only save() (inherited from the Algorithm class):
>>> import cv2
>>> help(cv2.ml.SVM_create())
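For what it's worth, later OpenCV releases added a module-level loader (I believe from 3.1 onward; it is certainly present in 3.4/4.x), so if upgrading is an option, the saved model can be read back with:

>>> svm = cv2.ml.SVM_load('datafile.dat')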
I have a real-time problem that is aimed at detecting 9 objects. As far as I understand, YOLO gives promising results on real-time object detection problems, so I am searching for good instructions on training a pre-trained YOLO model with my own custom dataset.
My dataset is already labeled, with bounding-box coordinates in .txt files in YOLO format. However, it is hard to find good instructions on the web about training YOLO on a custom dataset, since most instructions use generic datasets such as COCO or PASCAL, or are not detailed enough to implement an object detection model on one's own dataset.
TL;DR
My question is: are there any handy instructions for implementing YOLO object detection on one's own dataset? I am looking for frameworks to implement the YOLO model rather than the darknet C implementation, since I am more familiar with Python, so it would be perfect if you could point to a PyTorch or TensorFlow implementation.
It would be even more appreciated if you have already implemented YOLOv3/v4 on your own dataset with the help of instructions you found on the web and are willing to share them.
Thanks in advance.
For training purposes I would highly recommend AlexeyAB's repository, as it is highly optimised for accuracy and speed, although it too is written in C. As far as testing and deployment are concerned, you have several options:
OpenCV's DNN module: see this article.
Tensorflow Model
Pytorch Model
Out of these, OpenCV's DNN implementation is the fastest for testing/inference; a minimal loading sketch follows.
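Here is a minimal sketch of that route: running a trained YOLO model through OpenCV's DNN module. The .cfg/.weights file names are placeholders for whatever your darknet training produced.

import cv2

# Load the darknet config and the trained weights (placeholder file names)
net = cv2.dnn.readNetFromDarknet('yolov4-custom.cfg', 'yolov4-custom.weights')

img = cv2.imread('test.jpg')
# Scale to [0, 1], resize to the network input size, and swap BGR -> RGB
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())
# Each row of each output is [cx, cy, w, h, objectness, per-class scores...]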
I want to fine-tune the existing OpenCV DNN face detector on a face image database that I own. I have the opencv_face_detector.pbtxt and opencv_face_detector_uint8.pb TensorFlow files provided by OpenCV. I wonder if, based on these files, there is any way to fit the model to my data. So far I have not managed to find any TensorFlow training script for this model in the OpenCV git repository, and I only know that the model is an SSD with ResNet-10 as a backbone. I am also not sure, from what I have read on the internet, whether I can resume training from a .pb file. Are you aware of any available scripts defining the model that could be used for training? Would the .pbtxt and .pb files be enough to continue training on new data?
Also, I noticed that there is a git repository containing a Caffe version of this model: https://github.com/weiliu89/caffe/tree/ssd. Although I have never worked with Caffe before, would it be possible/easier to use the existing weights (the Caffe model and config files are also available in OpenCV's GitHub) and fit the model to my dataset?
I don't see a way to do this in OpenCV, but I think you'd be able to load the model into TensorFlow and use model.fit() to retrain.
The usual advice about transfer learning applies. You'd probably want to freeze most of the early layers and only retrain the last one or two. A slow learning rate would be advised as well.
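As a generic illustration (not specific to the OpenCV .pb graph, which would first have to be reconstructed as a trainable model; the file name and the train_images/train_labels arrays are placeholders), the Keras version of that advice looks like:

import tensorflow as tf

# Hypothetical: assumes the detector has been converted into a trainable Keras model
model = tf.keras.models.load_model('face_detector.h5')

# Freeze everything except the last two layers
for layer in model.layers[:-2]:
    layer.trainable = False

# Use a slow learning rate, as suggested above
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss='binary_crossentropy')
model.fit(train_images, train_labels, epochs=5)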
I have an ensemble model that combines TensorFlow and scikit-learn, and I would like to save this ensemble model as a single box that I can feed data into to generate output. My code is below:
from sklearn.ensemble import BaggingRegressor
from joblib import dump, load

def model_base_LSTM(***):
    ***

model = model_base_LSTM(***)
ensem_model = BaggingRegressor(base_estimator=model, n_estimators=15)
ensem_model.fit(x_train, y_train)
bag_mod_pred = ensem_model.predict(x_test_bag)

dump(ensem_model, 'LSTM_Ensemble.joblib')
The dump call fails with:
TypeError: can't pickle _thread._local objects
So, how can I solve this problem?
You can save your TensorFlow (and even PyTorch) models with Scikit-Learn, but only if you use Neuraxle and its saving mechanics.
Neuraxle is an extension of Scikit-Learn to make it more compatible with all deep learning libraries.
The trick is performed by using Neuraxle-TensorFlow or Neuraxle-PyTorch.
Why so?
Using Neuraxle-TensorFlow or Neuraxle-PyTorch gives you a saver that allows your model to be serialized correctly. You want it serialized correctly so as to ensure compatibility between scikit-learn and your deep learning framework when it comes time to save or parallelize things. You can read how Neuraxle solves this with savers here.
Code Examples
Here is a full project example from A to Z where TensorFlow is used with Neuraxle as if it were scikit-learn.
Here is another practical example where TensorFlow is used within a scikit-learn-like pipeline.
I would like to know whether it is possible to train an SVM classifier using scikit-learn in Python (I love this module and its documentation) and then import that trained model into C++ for making predictions.
Here is how far I got:
I have written a Python script which uses scikit-learn to create a reasonable SVM classifier
I can also store that model in pickle format
Now, I have had a look at libSVM for C++, but I do not see how it can import such a model; either the documentation is lacking or I have missed something.
However, I also thought that instead of storing the whole model, I could just store the parameters of the SVM classifier and load only those (I think the needed ones for a linear SVM classifier are the support vectors, C, and the degree). Unfortunately, I cannot find any libSVM documentation on how to do that.
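For illustration, here is a minimal sketch of that idea on the scikit-learn side: exporting the parameters of a trained linear SVM to a plain text file that C++ code could then parse (X and y are placeholders for the training data).

from sklearn import svm
import numpy as np

clf = svm.SVC(kernel='linear', C=1.0)
clf.fit(X, y)

# For a linear kernel the decision function is sign(w . x + b)
w = clf.coef_[0]
b = clf.intercept_[0]
np.savetxt('svm_params.txt', np.append(w, b))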
A last option, which I would prefer less, would be to go with OpenCV, in which I could train an SVM classifier, store it, and load it back, all in C++. But this would introduce yet another (and especially large) library dependency for my program; if there is a good way to avoid that, I would love to do so.
As always I thank you in advance!
Best,
Tukk
I am working on a computer vision/machine learning problem in Python and have so far not needed OpenCV. However, I now need to use a fast cascade classifier to detect objects in a set of images. I came across the Haar-based classifier, which is written in C++. The testing phase can be called from Python (see the sketch below), but I cannot find a way to train the classifier to recognize other objects from Python (I was only able to find this, which uses C++).
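(For reference, the detection side really is available from Python; a minimal sketch, assuming a trained cascade XML file such as the frontal-face model that ships with OpenCV:)

import cv2

# Load a trained cascade (the frontal-face model ships with OpenCV)
cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

img = cv2.imread('scene.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
objects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in objects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)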
Is the training functionality of the Haar cascade only available through C++ or can it be used with Python?
If only C++, any pointers on what I need to install to get started?
Thanks!