AdaBoost ML algorithm Python implementation

Does anyone have any ideas on how to implement the AdaBoost (BoosTexter) algorithm in Python?
Cheers!

It looks as if the sdpy project has an AdaBoost implementation; specifically, look at the sdpy/cs/ml/cla/boosting.py file.
Perhaps you can get some inspiration from there.
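
If you mostly want to see the mechanics of the algorithm itself, below is a rough numpy-only sketch of discrete AdaBoost with decision stumps (my own simplification, not the sdpy code). The exhaustive stump search and the fixed number of rounds keep it short, so treat it as a starting point rather than a BoosTexter replacement.

    import numpy as np

    def train_adaboost(X, y, n_rounds=20):
        """Discrete AdaBoost with threshold stumps; y must be in {-1, +1}."""
        n_samples, n_features = X.shape
        w = np.full(n_samples, 1.0 / n_samples)    # sample weights
        ensemble = []                              # (feature, threshold, sign, alpha)
        for _ in range(n_rounds):
            best, best_err = None, np.inf
            # exhaustive search over stumps: feature, threshold, polarity
            for j in range(n_features):
                for thr in np.unique(X[:, j]):
                    for sign in (1, -1):
                        pred = sign * np.where(X[:, j] <= thr, 1, -1)
                        err = np.sum(w[pred != y])
                        if err < best_err:
                            best_err, best = err, (j, thr, sign)
            err = max(best_err, 1e-10)             # avoid division by zero
            alpha = 0.5 * np.log((1 - err) / err)  # weight of this weak learner
            j, thr, sign = best
            pred = sign * np.where(X[:, j] <= thr, 1, -1)
            w *= np.exp(-alpha * y * pred)         # up-weight misclassified samples
            w /= w.sum()
            ensemble.append((j, thr, sign, alpha))
        return ensemble

    def predict_adaboost(ensemble, X):
        score = np.zeros(X.shape[0])
        for j, thr, sign, alpha in ensemble:
            score += alpha * sign * np.where(X[:, j] <= thr, 1, -1)
        return np.sign(score)

If you just need something that works rather than an implementation exercise, scikit-learn's AdaBoostClassifier is also an option.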

Thanks a million, Steve! Your suggestion actually had some compatibility issues with Mac OS X (a particular library was incompatible with the system), BUT it helped me find a more interesting package: icsi.boost.macosx. I'm just noting that in case any other Mac user finds it interesting!
Thank you again!
Tim

Related

Fitting data with coupled ODEs using python package "bumps"

I've found the toolbox bumps online (https://pypi.org/project/bumps/), which looks like a well-rounded and easy-to-use approach to fitting data.
I'm interested in fitting data described by two coupled ODEs, but, unfortunately, I haven't found any information regarding this procedure in the docs (https://bumps.readthedocs.io/en/latest/index.html).
Does anyone know how to do it?
Thanks in advance
I asked the developer on GitHub and he provided two complete examples.
Here's the link: https://github.com/bumps/bumps/issues/26
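
A rough sketch of one way to set this up, assuming the Curve/FitProblem interface from bumps.names and using scipy's odeint for the integration; the Lotka-Volterra-style system, initial conditions and data below are placeholders, so the linked issue is the authoritative reference:

    import numpy as np
    from scipy.integrate import odeint
    from bumps.names import Curve, FitProblem

    def coupled_model(t, a=1.0, b=0.1, c=1.5, d=0.075):
        """Integrate two coupled ODEs and return the first component at times t."""
        def rhs(state, t):
            u, v = state
            return [a * u - b * u * v, -c * v + d * u * v]
        y0 = [10.0, 5.0]                      # placeholder initial conditions
        sol = odeint(rhs, y0, t)
        return sol[:, 0]                      # fit the first variable only

    # placeholder data; replace with your measured time series
    t_data = np.linspace(0, 15, 50)
    y_data = coupled_model(t_data, a=1.1, b=0.12) + np.random.normal(0, 0.5, t_data.size)

    curve = Curve(coupled_model, t_data, y_data,
                  dy=0.5 * np.ones_like(y_data),
                  a=1.0, b=0.1, c=1.5, d=0.075)
    curve.a.range(0.1, 5.0)                   # mark which parameters are free
    curve.b.range(0.01, 1.0)

    problem = FitProblem(curve)               # run with e.g.: bumps this_script.py --fit=lm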

Running CHAID on continuous predictors

Has anyone tried running the CHAID algorithm on continuous predictors?
At first I used SPSS Modeler and it worked fine, but when I tried it in Python 3.6, it didn't work for me.
Thanks :)
P.S. The CHAID package can be found here:
https://github.com/Rambatino/CHAID
I'm the author of that library.
It's usually better to post on the issues tab on the github repo as questions have more visibility there.
Unfortunately, with regards to continuous predictors, they need to be binned first before they can be run using CHAID. We haven't implemented a binning strategy as it's very subjective (SPSS makes a lot of decisions under the hood).
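
To illustrate the binning step, here is a small sketch that discretizes a continuous predictor with pandas before handing it to CHAID; the quartile split and the Tree.from_pandas_df call are based on the package README, so double-check them against the current version:

    import pandas as pd
    from CHAID import Tree   # https://github.com/Rambatino/CHAID

    df = pd.DataFrame({
        'age': [23, 45, 31, 62, 54, 38, 29, 70],   # continuous predictor
        'bought': [0, 1, 0, 1, 1, 0, 0, 1],        # dependent variable
    })

    # CHAID needs categorical predictors, so bin the continuous column first,
    # e.g. into quartiles (the binning strategy is up to you).
    df['age_bin'] = pd.qcut(df['age'], q=4, labels=False)

    # Treat the binned column as ordinal; the exact call is taken from the
    # package README, so verify it against the version you have installed.
    tree = Tree.from_pandas_df(df, {'age_bin': 'ordinal'}, 'bought')
    tree.print_tree()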

Any way to reorder variables for binary decision diagrams?

I am working on a teaching tool for binary decision diagrams in which there is also a feature for variable reordering. Can anyone suggest a suitable library which implements variable reordering while building the tree, or some kind of algorithm which does the same?
It would be best if I could work with a library like pyeda, BuDDy or pycudd, because I am already familiar with these libraries.
Thanks, and comment if you need any kind of clarification.
Have you looked at dd, by Ioannis Filippidis?
I'm the author of pyeda. Implementing ROBDDs in Python was definitely fun, and it can probably have some educational value, but it doesn't do any automatic variable reordering, so if that's a requirement I would recommend looking at dd or the other libraries on your list.
My group at the University of Maribor is developing BDD Scout ( http://biddy.meolic.com/ ), a tool for visualization of BDDs. Currently, ROBDDs with complemented edges and 0-sup-BDDs with complemented edges are supported, as are conversions between them. Reordering (i.e. variable swapping and the sifting algorithm) is supported for both of them. BDD Scout works on GNU/Linux and MS Windows (source and binary packages are available). We hope that our tool will one day become a good teaching tool, but we need some feedback to improve it. Besides robustness, the set of functionalities is the most critical part to improve. If you find some time to try it, do not hesitate to send us any comments and questions.
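
As a concrete starting point with dd, here is a small sketch showing both automatic reordering (sifting) and reordering to an explicit variable order; it assumes the pure-Python dd.autoref backend, so check the dd documentation for the exact names in your version:

    from dd import autoref as _bdd

    bdd = _bdd.BDD()
    bdd.declare('x1', 'x2', 'x3', 'x4')

    # A function whose BDD size is sensitive to the variable order.
    u = bdd.add_expr('(x1 & x3) | (x2 & x4)')
    print('before reordering:', len(bdd), 'nodes')

    # Automatic reordering (Rudell's sifting) over the whole manager.
    _bdd.reorder(bdd)
    print('after sifting:', len(bdd), 'nodes')

    # Manual reordering: request an explicit variable-to-level mapping.
    _bdd.reorder(bdd, {'x1': 0, 'x3': 1, 'x2': 2, 'x4': 3})
    print('after manual order:', len(bdd), 'nodes')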

Orthogonalize in Python. Which package?

I need to orthogonalize vectors in Python. So far I have only found algorithms.orthogonalize.
However, it looks like "algorithms" is some kind of package (module?) that I cannot find to install. Has anybody done an orthogonalization? Please advise me on a good package/module for this procedure. I am quite new to Python.
numpy.linalg.qr turns out to be the best option for orthogonalizing vectors, since the vectors I consider have complex components. If one does it with the orthogonalize method mentioned above, the complex parts are LOST!
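
For reference, a minimal sketch of that approach: stack the vectors as columns, call numpy.linalg.qr, and read the orthonormal basis off the columns of Q (the complex parts are preserved):

    import numpy as np

    # Three linearly independent vectors with complex components, as columns.
    V = np.array([[1+1j, 0+2j, 1+0j],
                  [0+1j, 1+0j, 2-1j],
                  [1-1j, 1+1j, 0+1j]])

    Q, R = np.linalg.qr(V)          # 'reduced' mode by default

    # Columns of Q are orthonormal w.r.t. the Hermitian inner product.
    print(np.allclose(Q.conj().T @ Q, np.eye(3)))   # True
    print(np.allclose(Q @ R, V))                    # True: V is recovered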
That package is part of the Spectral Python project.
The orthogonalize method is documented here:
"Performs Gram-Schmidt Orthogonalization on a set of vectors"
It is installable via pip and easy_install.

How is string.find implemented in CPython?

I was wondering if the 'find' method on strings was implemented with a linear search, or if python did something more sophisticated. The Python documentation doesn't discuss implementation details, so http://docs.python.org/library/stdtypes.html is of no help. Could someone please point me to the relevant source code?
The comment on the implementation has the following to say:
fast search/count implementation,
based on a mix between boyer-moore
and horspool, with a few more bells
and whistles on the top.
for some more background, see: http://effbot.org/zone/stringlib.htm
—https://github.com/python/cpython/blob/master/Objects/stringlib/fastsearch.h#L5
You should be able to find it in Objects/stringlib/find.h, although the real code is in fastsearch.h.
It looks like the algorithm used originates from the Boyer-Moore-Horspool algorithm.
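
To make the idea concrete, here is a simplified pure-Python sketch of the Horspool-style skip table; this is not the actual CPython code (fastsearch.h adds a Bloom-filter-like mask and other tricks on top), just an illustration of why the search can skip ahead by more than one character on a mismatch:

    def horspool_find(haystack, needle):
        """Simplified Boyer-Moore-Horspool search; returns index or -1."""
        n, m = len(haystack), len(needle)
        if m == 0:
            return 0
        # For each character of the needle (except the last), how far the
        # window may shift when that character is aligned with the window's end.
        skip = {c: m - i - 1 for i, c in enumerate(needle[:-1])}
        i = 0
        while i <= n - m:
            if haystack[i:i + m] == needle:
                return i
            last = haystack[i + m - 1]
            i += skip.get(last, m)       # default: shift the whole window
        return -1

    print(horspool_find("hello world", "world"))   # 6
    print(horspool_find("hello world", "wards"))   # -1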
