Easy.py (libsvm) crashing before creating model - python

I've been trying to run the easy.py script provided by libsvm-3.17, but it crashes before creating the model. It does generate the range, scale, and output files and the gnuplot image, but not the model, which I believe I need in order to classify the test data. Any input is greatly appreciated :) Thanks.
The error is :
Traceback (most recent call last):
File "tools\easy.py", line 61, in
c,g,rate = map(float,last_line.split())
ValueError: could not convert string to float: b'[0x7FFFB2243810]'
I've tried several data sets, and this error pops up every time.
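For context, line 61 of easy.py expects the last line of grid.py's output to be three whitespace-separated numbers (the best c, the best g, and the cross-validation rate); the bracketed address in the traceback cannot be parsed that way. A small illustration (the numeric values below are made up):
# what the parse on line 61 expects, e.g. "2.0 0.5 96.35"
good_line = "2.0 0.5 96.35"
c, g, rate = map(float, good_line.split())  # three floats, parses fine

# a line like the one in the traceback cannot be converted to floats
bad_line = "[0x7FFFB2243810]"
try:
    c, g, rate = map(float, bad_line.split())
except ValueError as exc:
    print("parse failed:", exc)  # could not convert string to float: '[0x7FFFB2243810]'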

Related

How to convert caffe model and weight to pytorch

Hello, I need help converting a Caffe model and its weights to PyTorch. I have tried the GitHub repo that most other posts suggest (this github), but when I used it I ran into a lot of problems, since it was written for Python 2 and I am using Python 3. I have already tried removing some layers that the repo doesn't cover and manually updating the old syntax to the new syntax, but the last error refers to the nn module from PyTorch and I have no idea how to fix it.
Traceback (most recent call last):
File "caffe2pytorch.py", line 30, in <module>
pytorch_blobs, pytorch_models = forward_pytorch(protofile, weightfile)
File "caffe2pytorch.py", line 17, in forward_pytorch
net = caffenet.CaffeNet(protofile)
File "/home/cgal/reference/SfSNet/caffe2pytorch/caffenet.py", line 384, in __init__
self.add_module(name, model)
File "/home/cgal/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 186, in add_module
raise KeyError("module name can't contain \".\"")
KeyError: 'module name can\'t contain "."'
So is there any suggestion on how to convert Caffe weights and models to PyTorch?
This is the Caffe model that I want to convert: download here
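One observation that may help, purely as an illustration of the error rather than a full fix for the conversion script: torch.nn.Module.add_module raises exactly this KeyError whenever the submodule name contains a dot, so Caffe-style layer names with dots have to be sanitized before they are registered. A minimal sketch with made-up layer names:
import torch.nn as nn

# made-up Caffe-style layer names containing dots
caffe_layers = {"conv1.1": nn.Conv2d(3, 16, 3), "relu1.1": nn.ReLU()}

model = nn.Module()
for name, layer in caffe_layers.items():
    safe_name = name.replace(".", "_")  # add_module rejects names containing "."
    model.add_module(safe_name, layer)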

Facing issue while loading the pre-trained model

I've trained my model using Google Colab and saved it as model.pkl. When I try to load the model on my laptop, it throws the error below:
Traceback (most recent call last):
File "app.py", line 8, in <module>
model = pickle.load(open('model.pkl', 'rb'))
File "sklearn\tree\_tree.pyx", line 606, in sklearn.tree._tree.Tree.__cinit__
ValueError: Buffer dtype mismatch, expected 'SIZE_t' but got 'long long'
I've done some research on the above error and learned that the random forest code uses different types for indices on 32-bit and 64-bit machines. I've seen a similar question on this platform but am NOT satisfied with the accepted answer, because it suggests training the model again, which is not suitable in my case: there is a lot to re-do and I don't want to put load on the server again.
Any suggestions or solutions?
Not sure about the '.pkl' format, but can you try saving it as
model.save('modelweight.h5') and then loading it with model.load('modelweight.h5')?
That should work.
Thanks.
Try using cPickle instead of pickle:
try:
    import cPickle as pickle  # Python 2: the faster C implementation
except ImportError:
    import pickle  # Python 3: pickle already uses the C implementation

# pickle files must be opened in binary mode
with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)  # save the model to a file

with open('model.pkl', 'rb') as f:
    model = pickle.load(f)  # load the model back
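Regarding the 32-bit vs. 64-bit diagnosis in the question: a quick way to confirm whether the two environments actually differ is to check the pointer size of each Python interpreter (run this in Colab and on the laptop and compare):
import struct
import sys

print(sys.version)                       # interpreter version
print(struct.calcsize("P") * 8, "bit")   # 64 on a 64-bit build, 32 on a 32-bit build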

Truth Value error in plotting SVM predicted values

I am using an SVM to train on an image set for a machine learning project at postgraduate level.
The error displayed when the plot function is called:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
The traceback reads as:
Traceback (most recent call last):
File "<ipython-input-5-10061f33ba63>", line 16, in <module>
plt.show(ix_train)
File "/home/.local/lib/python2.7/site-packages/matplotlib/pyplot.py", line 253, in show
return _show(*args, **kw)
File "/usr/lib/python2.7/dist-packages/ipykernel/pylab/backend_inline.py", line 41, in show
if close and Gcf.get_all_fig_managers():
Is there a plotting statement that I am missing, or a variable mismatch somewhere?
I have followed the sklearn docs while trying to implement this function in my code.
Thanks.
You are supplying an argument to show, as in plt.show(something). That is not how show is meant to be used.
Instead, you want to plot something and then show the previously created plot:
plt.plot(something)
plt.show()
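A self-contained version of that pattern looks like this (the data below is invented purely for illustration):
import numpy as np
import matplotlib.pyplot as plt

# some made-up values standing in for the SVM predictions
ix_train = np.arange(20)
y_pred = np.sin(ix_train / 3.0)

plt.plot(ix_train, y_pred, marker="o")  # create the plot first
plt.xlabel("training index")
plt.ylabel("predicted value")
plt.show()                              # then show it, with no arguments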

Dilation image-recognition algorithms cityScapes model

I am trying to use this image-recognition algorithm with the Cityscapes model:
https://github.com/fyu/dilation
However, I keep on getting the following error:
bash-4.2$ python predict.py cityscapes sunny_1336601.png --gpu 0
Using GPU 0
WARNING: Logging before InitGoogleLogging() is written to STDERR
Traceback (most recent call last):
File "predict.py", line 133, in <module>
main()
File "predict.py", line 129, in main
predict(args.dataset, args.input_path, args.output_path)
File "predict.py", line 98, in predict
color_image = dataset.palette[prediction.ravel()].reshape(image_size)
ValueError: cannot reshape array of size 12582912 into shape (1090,1920,3)
I tried reshaping the image to every common resolution I could think of, including 640x480, but I have been getting the same error.
Any help or tips is highly appreciated.
Thanks!
I don't have enough reputation to comment, so I am posting my hunch as an answer (forgive me if I'm wrong): the given size 12582912 has to be the product of the three numbers in the tuple. A quick factorisation shows 12582912 = 1024*768*16 = 2048*1536*4, so if the image is a 4-channel image, the resolution is 2048 x 1536, which is the standard 4:3 aspect ratio.
It turns out that the Cityscapes model only accepts a specific size: the width should be twice the height.
If you know Python well, you will see that the ValueError is an internal code error. It has nothing to do with missing dependencies or the environment.
It comes from the fact that the image had one total size to begin with, and it is then cast to an array and reshaped into different dimensions.
That is not something that can or should be fixed by tampering with the input data, but by addressing the bug in the provided library itself.
This kind of restriction is very common with NN classifiers: once the layers are trained they cannot be changed, so the input must be very specific. Of course, the input can still be "cooked" before it is given to the NN, but that is usually non-destructive/basic scaling, so the proportions must be preserved, which is what the library gets wrong.
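To make the size arithmetic in the answers above concrete, the failure can be reproduced with a bare numpy array of the same size (the numbers come from the traceback, not from the model itself):
import numpy as np

flat = np.zeros(12582912)          # same total size as the array in the traceback
try:
    flat.reshape((1090, 1920, 3))  # the target shape predict.py asks for
except ValueError as exc:
    print(exc)                     # cannot reshape array of size 12582912 into shape (1090,1920,3)

# reshape only succeeds when the element counts match:
print(1090 * 1920 * 3)             # 6278400 elements needed by the target shape
print(flat.size // 3)              # 4194304 pixels available if there really are 3 channels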

How to fit multiple sequences with GMMHMM?

I have a problem with the Python hmmlearn library: I have several training sets, and I would like to fit a single Gaussian mixture HMM to all of them.
Here is an example working with multiple sequences.
import numpy as np
from hmmlearn import hmm

X = np.concatenate([X1, X2])
lengths = [len(X1), len(X2)]
hmm.GaussianHMM(n_components=3).fit(X, lengths)
When I change GaussianHMM to GMMHMM, it returns the following error:
hmm.GMMHMM(n_components=3).fit(X, lengths)
Traceback (most recent call last):
File "C:\Users\Cody\workspace\QuickSilver_HMT\hmm_list_sqlite.py", line 141, in hmm_list_pickle
hmm.GMMHMM(n_components=3).fit(X, lengths)
File "build\bdist.win32\egg\hmmlearn\hmm.py", line 998, in fit
raise ValueError("'lengths' argument is not supported yet")
ValueError: 'lengths' argument is not supported yet
How can one fit multiple sequences with GMMHMM?
The current master version contains a rewrite of GMMHMM, which at some point did not support multiple sequences. Now it does, so updating should help, as #ppasler suggested.
The rewrite is still a work in progress. Please report any issues you encounter on the hmmlearn issue tracker.
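For reference, with an hmmlearn version whose GMMHMM accepts the lengths argument, the multi-sequence call looks just like the GaussianHMM example above. A sketch with placeholder data (X1 and X2 are random stand-ins for real sequences, and n_mix is chosen arbitrarily):
import numpy as np
from hmmlearn import hmm

X1 = np.random.rand(100, 2)  # placeholder sequences; substitute your own data
X2 = np.random.rand(80, 2)

X = np.concatenate([X1, X2])
lengths = [len(X1), len(X2)]

model = hmm.GMMHMM(n_components=3, n_mix=2)  # n_mix = Gaussians per state
model.fit(X, lengths)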
