Error in scikit-neuralnetwork classifier - python

I am using Python 2.7.6 on Ubuntu 14.04.2 LTS, with numpy 1.11.3 and scikit-learn 0.18.1, but the following code throws an exception. Here is the link to the official documentation.
# imports as in the scikit-neuralnetwork documentation
from sknn.mlp import Classifier, Layer

nn = Classifier(
    layers=[
        Layer("Maxout", units=100, pieces=2),
        Layer("Softmax")],
    learning_rate=0.001,
    n_iter=25)
Error:
Traceback (most recent call last):
File "LeadScore.py", line 19, in <module>
Layer("Maxout", units=100, pieces=2),
TypeError: __init__() got an unexpected keyword argument 'pieces'

(Disclaimer: I have never used this library.)
(1) scikit-neuralnetwork has little to do with scikit-learn, so you should probably mention the version of scikit-neuralnetwork you are using.
(2) According to this and this, Maxout was removed from the library. If you search the project sources for pieces or maxout (search-link), no code is found!
(3) The basic problem here seems to be a mismatch between the example and your version. Maybe there was a version with Maxout but without the pieces parameter; I don't know.
(4) My opinion: this library/project no longer seems very active (at least compared to Keras and co.), and while it used PyBrain in the past (dead), it now seems to use Lasagne (a somewhat dying project too). Together with these mismatches between the examples and the code, this would give me a lot of headaches, and I would switch libraries.
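If you want to keep using scikit-neuralnetwork despite the points above, a hedged workaround (a sketch only, assuming the rest of the documented example still matches your installed version) is to swap the removed Maxout layer for a layer type that current releases still ship, such as Rectifier:
from sknn.mlp import Classifier, Layer

# Sketch: the example from the question, with the removed Maxout layer (and its
# unsupported 'pieces' argument) replaced by a Rectifier layer.
nn = Classifier(
    layers=[
        Layer("Rectifier", units=100),
        Layer("Softmax")],
    learning_rate=0.001,
    n_iter=25)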

Related

Bayesian Logistic Regression Using Tensorflow Probability

I am having issues trying to run the Bayesian logistic regression example on TensorFlow Probability, as shown in "An introduction to probabilistic programming, now available in TensorFlow Probability".
If I just run the code on the site I get the following error:
Traceback (most recent call last):
File "<input>", line 75, in <module>
TypeError: make_simple_step_size_update_policy() missing 1 required positional argument: 'num_adaptation_steps'
Then, when I specify num_adaptation_steps=5, I get the following error:
FailedPreconditionError (see above for traceback): Error while reading resource variable step_size_hmc from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/step_size_hmc)
[[node mcmc_sample_chain/transformed_kernel_bootstrap_results/Identity_2/ReadVariableOp (defined at /home/abeer/PycharmProjects/TensorFlowProbability/venv/lib/python3.6/site-packages/tensorflow_probability/python/mcmc/hmc.py:127) ]]
I don't know what I am doing wrong and any help would be greatly appreciated. Thanks!!
The Challenger code in the current Colab for chapter 2 should work:
https://colab.sandbox.google.com/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter2_MorePyMC/Ch2_MorePyMC_TFP.ipynb#scrollTo=oHU-MbPxs8iL
hmc = tfp.mcmc.TransformedTransitionKernel(
    inner_kernel=tfp.mcmc.HamiltonianMonteCarlo(
        target_log_prob_fn=unnormalized_posterior_log_prob,
        num_leapfrog_steps=40,
        step_size=step_size,
        step_size_update_fn=tfp.mcmc.make_simple_step_size_update_policy(
            num_adaptation_steps=int(burnin * 0.8)),
        state_gradients_are_stopped=True),
    bijector=unconstraining_bijectors)
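For the second error (the FailedPreconditionError about step_size_hmc being uninitialized), the usual cause in TF1 graph mode is that the step-size variable used by the update policy is never initialized before the chain is evaluated. Below is a hedged sketch of how the kernel above might be run; the variable name is taken from your error message, and unnormalized_posterior_log_prob, initial_state and unconstraining_bijectors are placeholders from the question/Colab rather than anything verified here:
import tensorflow as tf
import tensorflow_probability as tfp

burnin = 1000

# Step size as a (resource) variable so the update policy can adapt it.
step_size = tf.get_variable(
    name='step_size_hmc',
    initializer=tf.constant(0.5, dtype=tf.float32),
    trainable=False,
    use_resource=True)

hmc = tfp.mcmc.TransformedTransitionKernel(
    inner_kernel=tfp.mcmc.HamiltonianMonteCarlo(
        target_log_prob_fn=unnormalized_posterior_log_prob,
        num_leapfrog_steps=40,
        step_size=step_size,
        step_size_update_fn=tfp.mcmc.make_simple_step_size_update_policy(
            num_adaptation_steps=int(burnin * 0.8)),
        state_gradients_are_stopped=True),
    bijector=unconstraining_bijectors)

samples, kernel_results = tfp.mcmc.sample_chain(
    num_results=2000,
    num_burnin_steps=burnin,
    current_state=initial_state,
    kernel=hmc)

with tf.Session() as sess:
    # Initializing global variables is what avoids the
    # "step_size_hmc ... uninitialized" FailedPreconditionError.
    sess.run(tf.global_variables_initializer())
    samples_ = sess.run(samples)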
I just noticed that the earlier HMC examples in that chapter are missing num_adaptation_steps, so I'll do a PR soon to fix that. Or feel free to do so as well.
Thanks
mike

Updating to the latest Yocto version sanity.bbclass issues

I have taken over a project that was using the Yocto Fido release from 2015, and I need to update it to the latest stable version, Thud.
I have cloned the poky (Thud) repository, cloned the latest versions of the layers required by our customized layer (such as meta-openembedded), and added our customized layer back in.
I wasn't expecting this to build straight away without issues by any means, but I just don't understand the errors I'm getting with the new layers. There are more errors like this relating to "not enough values"; one is posted below.
There seems to be an interface issue in meta/classes/sanity.bbclass. I can't just revert to the older version of meta to solve this, and I don't think it makes sense to modify the code myself. Any ideas why this happens and how to solve it?
ERROR: Execution of event handler 'config_reparse_eventhandler' failed
Traceback (most recent call last):
  File "/home/ubuntu/new-repo/poky-thud/build-bbgw/../meta/classes/sanity.bbclass", line 971, in config_reparse_eventhandler(e=<bb.event.ConfigParsed object at 0x7ff4103bf3c8>):
    python config_reparse_eventhandler() {
    >    sanity_check_conffiles(e.data)
         }
  File "/home/ubuntu/new-repo/poky-thud/build-bbgw/../meta/classes/sanity.bbclass", line 572, in sanity_check_conffiles(d=<bb.data_smart.DataSmart object at 0x7ff4108d35c0>):
    for func in funcs:
    >    conffile, current_version, required_version, func = func.split(":")
         if check_conf_exists(conffile, d) and d.getVar(current_version) is not None and \
ValueError: not enough values to unpack (expected 4, got 1)
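For reference, the unpack that fails expects every entry iterated by sanity_check_conffiles() to carry four colon-separated fields (conf file, current-version variable, required-version variable, update function), so a likely culprit is a stale entry carried over from the old Fido-era configuration. A minimal sketch of the failing pattern (the field layout reflects recent Poky; the offending one-field entry is purely hypothetical):
# Each entry sanity_check_conffiles() splits on ":" must have exactly four fields:
# "conffile:current_version_var:required_version_var:update_func".
entries = [
    "conf/bblayers.conf:LCONF_VERSION:LAYER_CONF_VERSION:oecore_update_bblayers",  # 4 fields: OK
    "conf/local.conf:CONF_VERSION:LOCALCONF_VERSION:oecore_update_localconf",      # 4 fields: OK
    "conf/site.conf",  # hypothetical stale one-field entry: reproduces "expected 4, got 1"
]

for entry in entries:
    fields = entry.split(":")
    if len(fields) != 4:
        print("Malformed entry (this is what raises the ValueError):", entry)
        continue
    conffile, current_version_var, required_version_var, update_func = fields
    print("OK:", conffile, "->", update_func)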

OpenCV Error: Bad argument in ERClassifierNM1

I am running OpenCV 3.2.0 on Ubuntu 14.04 with the latest opencv_contrib.
I am running the example:
https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/textdetection.py
But it shows this error:
$ python textdetection.py scenetext_word01.jpg
textdetection.py
A demo script of the Extremal Region Filter algorithm described in:
Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012
Extracting Class Specific Extremal Regions from 9 channels ...
(...) this may take a while (...)
OpenCV Error: Bad argument (Default classifier file not found!) in ERClassifierNM1, file /home/vietnam/opencv_and_contri/opencv_contrib/modules/text/src/erfilter.cpp, line 1022
Traceback (most recent call last):
File "textdetection.py", line 38, in <module>
erc1 = cv2.text.loadClassifierNM1(pathname+'/trained_classifierNM1.xml')
cv2.error: /home/vietnam/opencv_and_contri/opencv_contrib/modules/text/src/erfilter.cpp:1022: error: (-5) Default classifier file not found! in function ERClassifierNM1
How to solve this?
Try using relative paths in the parameters for cv2.text.loadClassifierNM1() and cv2.text.loadClassifierNM2()
So now that part of the code looks like this:
erc1 = cv2.text.loadClassifierNM1('./trained_classifierNM1.xml')
er1 = cv2.text.createERFilterNM1(erc1,16,0.00015,0.13,0.2,True,0.1)
erc2 = cv2.text.loadClassifierNM2('./trained_classifierNM2.xml')
er2 = cv2.text.createERFilterNM2(erc2,0.5)
I'm not sure why this works (it did for me), but I tried this after looking at a solution posted for a similar problem in VS2015 here: https://github.com/cesardelgadof/OpenCVBinaries/issues/1
Hope this helps.
Trying an absolute path, e.g. "/usr/lib/opencv-3.2.0/opencv_contrib-3.2.0/modules/text/samples/trained_classifierNM1.xml", worked in my case on Ubuntu 16.04 with C++.
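If neither the relative nor the absolute path works for you, another hedged variant (a sketch; the file names match the sample script, but path handling on your particular build isn't verified) is to resolve the XML files against the script's own directory rather than the current working directory:
import os
import cv2

# Resolve the trained classifier files relative to this script's location,
# so the sample works no matter which directory it is launched from.
script_dir = os.path.dirname(os.path.abspath(__file__))
nm1_path = os.path.join(script_dir, 'trained_classifierNM1.xml')
nm2_path = os.path.join(script_dir, 'trained_classifierNM2.xml')

erc1 = cv2.text.loadClassifierNM1(nm1_path)
er1 = cv2.text.createERFilterNM1(erc1, 16, 0.00015, 0.13, 0.2, True, 0.1)
erc2 = cv2.text.loadClassifierNM2(nm2_path)
er2 = cv2.text.createERFilterNM2(erc2, 0.5)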

Code for gensim Word2vec as an HTTP service 'KeyedVectors' Attribute error

I am using the w2v_server_googlenews code from the word2vec HTTP server running at https://rare-technologies.com/word2vec-tutorial/#bonus_app. I changed the loaded file to a file of vectors trained with the original C version of word2vec. I load the file with
gensim.models.KeyedVectors.load_word2vec_format(fname, binary=True)
and it seems to load without problems. But when I test the HTTP service with, let's say
curl 'http://127.0.0.1/most_similar?positive%5B%5D=woman&positive%5B%5D=king&negative%5B%5D=man'
I get an empty result containing only the execution time:
{"taken": 0.0003361701965332031, "similars": [], "success": 1}
I put a traceback.print_exc() in the except block of the relevant method, which in this case is def most_similar(self, *args, **kwargs):, and I got:
Traceback (most recent call last):
File "./w2v_server.py", line 114, in most_similar
topn=5)
File "/usr/local/lib/python2.7/dist-packages/gensim/models/keyedvectors.py", line 304, in most_similar
self.init_sims()
File "/usr/local/lib/python2.7/dist-packages/gensim/models/keyedvectors.py", line 817, in init_sims
self.syn0norm = (self.syn0 / sqrt((self.syn0 ** 2).sum(-1))[..., newaxis]).astype(REAL)
AttributeError: 'KeyedVectors' object has no attribute 'syn0'
Any idea why this might happen?
Note: I use python 2.7 and I installed gensim using pip, which gave me gensim 2.1.0.
FYI, that demo code was based on gensim 0.12.3 (from 2015, as listed in its requirements.txt) and would need updating to work with the latest gensim.
It might be sufficient to add a line to w2v_server.py at line 70 (just after the load_word2vec_format()) to force the creation of the needed syn0norm property (which in older gensims was auto-created on load), while discarding the raw syn0 values. Specifically:
self.model.init_sims(replace=True)
(You would leave out the replace=True if you were going to perform operations other than most_similar() that might require the raw vectors.)
If this works to fix the problem for you, a pull-request to the w2v_server_googlenews repo would be favorably received!
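If you want to verify the fix outside the HTTP server first, here is a minimal sketch (assuming the gensim 2.x you installed; 'vectors.bin' is a placeholder for the C-format file your server loads):
import gensim

# Load vectors produced by the original C word2vec, then force-build the
# unit-normalized vectors (syn0norm) that most_similar() relies on.
model = gensim.models.KeyedVectors.load_word2vec_format('vectors.bin', binary=True)
model.init_sims(replace=True)  # replace=True discards the raw syn0 to save memory

print(model.most_similar(positive=['woman', 'king'], negative=['man'], topn=5))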

Error using cv.CreateHist in Python OpenCV as well as strange absence of certain cv attributes

I am getting an error (see below) when trying to use cv.CreateHist in Python. I am also noticing another alarming problem: if I dump all of the attributes of the cv module into a file and search them, I find that a ton of common things are missing.
For example, cv.TermCriteria() is not there, cv.ConnectedComp is not there, and cv.CvRect is not there.
Everything else about my installation, with OpenCV 2.2, works just fine. I can plot images, make CvScalars, and call plenty of the functions, like cv.CamShift... but there are a dozen or so of these hit-or-miss functions or data structures that are simply missing, with no explanation.
Here's my code for cv.CreateHist:
import cv
q = cv.CreateHist([1],1,cv.CV_HIST_ARRAY)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: x9y��
The weird wingding stuff is actually what it spits out at the command line, not a copy-paste error. Can anyone help figure this out? It's incredibly puzzling.
Ely
As for CvRect, see the documentation. It says such types are represented as Pythonic tuples.
As for your call to CreateHist, you may be passing the arguments in the wrong order. See CreateHist in the Python OpenCV docs.
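For reference, the legacy API's signature is roughly CreateHist(dims, type, ranges, uniform), so a call in the expected order would look something like the sketch below (the bin count and range values here are only illustrative):
import cv  # legacy OpenCV 2.x Python bindings

# The histogram type is the second argument and the per-dimension bin ranges
# the third; the call in the question passes them in a different order.
hist = cv.CreateHist([32], cv.CV_HIST_ARRAY, [[0, 255]], 1)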
