Truth-value error when plotting SVM predicted values - Python

I am using an SVM to train on an image set for a postgraduate machine-learning project.
When the plot function is called, the following error is displayed:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
The traceback reads as:
Traceback (most recent call last):
File "<ipython-input-5-10061f33ba63>", line 16, in <module>
plt.show(ix_train)
File "/home/.local/lib/python2.7/site-packages/matplotlib/pyplot.py", line 253, in show
return _show(*args, **kw)
File "/usr/lib/python2.7/dist-packages/ipykernel/pylab/backend_inline.py", line 41, in show
if close and Gcf.get_all_fig_managers():
Is there a plotting statement I am missing, or a variable mismatch somewhere?
I have followed the sklearn docs while trying to implement this function in my code.
Thanks.

You supply an argument to show, as in plt.show(something). That is not how show is meant to be used: show takes no data.
Instead, plot the data first and then show the previously created figure,
plt.plot(something)
plt.show()
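A minimal runnable sketch of that pattern (the ix_train data is made up, and the Agg backend is selected only so the snippet runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this sketch runs anywhere
import matplotlib.pyplot as plt

ix_train = [1, 4, 9, 16]  # stand-in for the asker's training data
plt.plot(ix_train)        # the data goes to plot(), not show()
plt.show()                # show() takes no data; it displays the current figure
```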

Related

CDLIB: NMI (Normalized Mutual Information) function is not working, can anyone help here?

I'm trying to evaluate the result of a community-detection algorithm (a comparison between the detected communities and the ground-truth communities).
I used the DBLP dataset, but the function does not work; it raises this exception:
Traceback (most recent call last):
File "C:\Users\w\Dropbox\thesis_motaz_ben_hassine\implementations\h_clustering_new_Sim.py", line 323, in <module>
nmi=evaluation.normalized_mutual_information(res_omega_nmi,communities_omega_nmi)
File "c:\users\w\appdata\local\programs\python\python39\lib\site-packages\cdlib\evaluation\comparison.py", line 69, in normalized_mutual_information
__check_partition_coverage(first_partition, second_partition)
File "c:\users\w\appdata\local\programs\python\python39\lib\site-packages\cdlib\evaluation\comparison.py", line 36, in __check_partition_coverage
raise ValueError("Both partitions should cover the same node set")
ValueError: Both partitions should cover the same node set
NMI requires both partitions to cover exactly the same node set, so you should compare two community-detection results computed on the same graph.
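A minimal stdlib sketch of the coverage check that cdlib performs before computing NMI (the community lists here are made up):

```python
# NMI is only defined when both partitions assign every node, so cdlib
# first verifies that the two partitions cover the same node set.
detected = [{0, 1, 2}, {3, 4}]          # communities from one algorithm
ground_truth = [{0, 1}, {2, 3, 4, 5}]   # covers an extra node: 5

nodes_a = set().union(*detected)
nodes_b = set().union(*ground_truth)
print(nodes_a == nodes_b)  # → False, so cdlib raises the ValueError above
```

If the two node sets differ, as here, restricting both partitions to their common nodes (or detecting communities on the same graph) resolves the error.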

How to convert caffe model and weight to pytorch

Hello, I need help converting a Caffe model and its weights to PyTorch. I tried the GitHub repository that most other posts suggest, but when I used it I encountered a lot of problems, since it is written for Python 2 and I am using Python 3. I have already removed some layers that the repository doesn't cover and manually changed old syntax to new syntax, but the last error references the nn module from PyTorch and I have no idea how to fix that.
Traceback (most recent call last):
File "caffe2pytorch.py", line 30, in <module>
pytorch_blobs, pytorch_models = forward_pytorch(protofile, weightfile)
File "caffe2pytorch.py", line 17, in forward_pytorch
net = caffenet.CaffeNet(protofile)
File "/home/cgal/reference/SfSNet/caffe2pytorch/caffenet.py", line 384, in __init__
self.add_module(name, model)
File "/home/cgal/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 186, in add_module
raise KeyError("module name can't contain \".\"")
KeyError: 'module name can\'t contain "."'
So, is there any suggestion on how to convert Caffe weights and models to PyTorch?
This is the Caffe model that I want to convert: download here
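For context on the traceback: torch.nn.Module.add_module rejects names containing ".", because dots are reserved for the submodule path syntax (e.g. "features.0.weight"). A common workaround is to sanitize the Caffe layer names before registering them; sanitize() and the layer names below are hypothetical, shown without torch so the sketch stays self-contained:

```python
# Caffe layer names like "conv1.1" are legal in a prototxt but invalid as
# PyTorch module names. Replacing the dot keeps the name unique and valid.
def sanitize(name):
    return name.replace(".", "_")

caffe_layers = ["conv1.1", "relu1.1", "fc8"]
print([sanitize(n) for n in caffe_layers])  # → ['conv1_1', 'relu1_1', 'fc8']
```

In caffenet.py this would mean calling self.add_module(sanitize(name), model) instead of self.add_module(name, model).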

Python sklearn ValueError: array is too big

I made a simple script in Python (3.7) that classifies a satellite image, but it can classify only a clip of the satellite image. When I try to classify the whole satellite image, it returns this:
Traceback (most recent call last):
File "v0-3.py", line 219, in classification_tool
File "sklearn\cluster\k_means_.py", line 972, in fit
File "sklearn\cluster\k_means_.py", line 312, in k_means
File "sklearn\utils\validation.py", line 496, in check_array
File "numpy\core\_asarray.py", line 85, in asarray
ValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size.
I tried using MiniBatchKMeans instead of KMeans (from Sklearn.KMeans : how to avoid Memory or Value Error?), but it still doesn't work. How can I avoid or solve this error? Maybe there are some mistakes in my code?
Oh, I'm an idiot: I used the 32-bit version of Python instead of the 64-bit one.
Reinstalling the 64-bit version of Python may well solve your problem, since a 32-bit process cannot address enough memory for an array that large.
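A quick stdlib check of which build is running, for anyone hitting the same error:

```python
import struct
import sys

# A 32-bit Python can address at most ~2-4 GB, so huge arrays fail even
# when the machine has plenty of RAM. Pointer size reveals the build.
bits = struct.calcsize("P") * 8  # bytes per pointer * 8 = 32 or 64
print("Python build: %d-bit" % bits)
print(sys.maxsize > 2**32)  # → True only on a 64-bit build
```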

How to fit multiple sequences with GMMHMM?

I have a problem with the Python hmmlearn library: I have several training sequences and I would like one Gaussian-mixture HMM model to fit them all.
Here is an example working with multiple sequences.
X = np.concatenate([X1, X2])
lengths = [len(X1), len(X2)]
hmm.GaussianHMM(n_components=3).fit(X, lengths)
When I change GaussianHMM to GMMHMM, it returns the following error:
hmm.GMMHMM(n_components=3).fit(X, lengths)
Traceback (most recent call last):
File "C:\Users\Cody\workspace\QuickSilver_HMT\hmm_list_sqlite.py", line 141, in hmm_list_pickle
hmm.GMMHMM(n_components=3).fit(X, lengths)
File "build\bdist.win32\egg\hmmlearn\hmm.py", line 998, in fit
raise ValueError("'lengths' argument is not supported yet")
ValueError: 'lengths' argument is not supported yet
How can one fit multiple sequences with GMMHMM?
The current master version contains a rewrite of GMMHMM, which at one point did not support multiple sequences. It does now, so updating should help, as #ppasler suggested.
The re-write is still a work-in-progress. Please report any issues you encounter on the hmmlearn issue tracker.
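After updating, the same concatenate-plus-lengths pattern shown above for GaussianHMM should carry over to GMMHMM. A sketch of preparing the inputs (the sequences are made-up random data, and the fit call is left commented out so the sketch does not require hmmlearn):

```python
import numpy as np

# Two hypothetical 2-D observation sequences of different lengths.
X1 = np.random.RandomState(0).randn(100, 2)
X2 = np.random.RandomState(1).randn(60, 2)

# hmmlearn expects all sequences stacked row-wise, plus their lengths,
# so it can tell where one sequence ends and the next begins.
X = np.concatenate([X1, X2])
lengths = [len(X1), len(X2)]
print(X.shape)  # → (160, 2)

# With a recent hmmlearn this should then work for GMMHMM as well:
# from hmmlearn import hmm
# model = hmm.GMMHMM(n_components=3).fit(X, lengths)
```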

Easy.py (libsvm) crashing before creating model

I've been trying to run the easy.py script provided by libsvm-3.17; however, it crashes before creating the model. It does generate the range, scale, and output files and the gnuplot image, but not the model, which I believe I need to classify the test data. Any input is greatly appreciated :) Thanks.
The error is :
Traceback (most recent call last):
File "tools\easy.py", line 61, in
c,g,rate = map(float,last_line.split())
ValueError: could not convert string to float: b'[0x7FFFB2243810]'
I've tried several data sets; this error pops up every time.
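For context, the failing line in easy.py expects the last line of grid.py's output to be three floats (c, g, rate); the bytes b'[0x7FFFB2243810]' look like stray gnuplot output mixed into that stream. A hedged sketch of more defensive parsing, assuming that three-float output format (parse_grid_result and the sample lines are hypothetical):

```python
def parse_grid_result(lines):
    """Return (c, g, rate) from grid.py output, skipping stray
    non-numeric lines such as gnuplot handles like '[0x7FFFB2243810]'."""
    for line in reversed(lines):
        parts = line.split()
        if len(parts) == 3:
            try:
                return tuple(map(float, parts))
            except ValueError:
                continue  # not a result line; keep scanning backwards
    raise ValueError("no 'c g rate' line found in grid.py output")

# Hypothetical output with a gnuplot handle mixed into the stream.
out = ["log2c=1 log2g=-1 rate=85.0", "2.0 0.5 85.0", "[0x7FFFB2243810]"]
print(parse_grid_result(out))  # → (2.0, 0.5, 85.0)
```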