Hyper-parameter Optimization with keras models: GridSearchCV or talos? [closed] - python

I want to tune hyper-parameters on Keras models and have been exploring the alternatives at hand. The first and most obvious one is the scikit-learn wrappers (https://keras.io/scikit-learn-api/), which make all the fabulous things in the scikit-learn workflow available, but I also came across Talos (https://github.com/autonomio/talos), which seems very promising and most likely offers a speed boost.
If anyone has used both, could you point me towards the better solution (flexibility, speed, features)? The scikit-learn workflow with pipelines and custom estimators provides a world of flexibility, but Talos seems geared more directly towards Keras specifically, so it presumably offers some advantages (I guess they would not have made a new standalone package otherwise) that I am not able to see. Some benefits are highlighted here (https://github.com/autonomio/talos/blob/master/docs/roadmap.rst), but such things seem to be adequately covered within the scikit-learn framework.
Any insights?

Personal opinions:
A train/valid/test split is a better choice than cross-validation for deep learning (the cost of k trainings is too high).
Random search is a good way to start exploring the hyper-parameters; it's not really hard to code yourself, but yes, Talos or hyperas (which is quite famous) can be helpful. A minimal sketch of the scikit-learn wrapper route follows below.
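For concreteness, here is a rough sketch of the scikit-learn wrapper route mentioned in the question, using the older keras.wrappers.scikit_learn wrapper from the linked docs with a random search over a single train/validation split. The architecture, parameter ranges, and toy data are placeholders, not recommendations.

```python
# Rough sketch of the scikit-learn wrapper route; the architecture,
# parameter ranges, and toy data below are placeholders, not recommendations.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import RandomizedSearchCV, ShuffleSplit

def build_model(units=32, lr=1e-3):
    # Arguments of build_fn become tunable hyper-parameters.
    model = Sequential([
        Dense(units, activation="relu", input_shape=(20,)),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=Adam(lr), loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

X = np.random.rand(200, 20)
y = np.random.randint(0, 2, 200)

clf = KerasClassifier(build_fn=build_model, epochs=10, batch_size=32, verbose=0)
param_dist = {"units": [16, 32, 64], "lr": [1e-2, 1e-3, 1e-4], "batch_size": [16, 32]}

# Random search over a single train/validation split (instead of full k-fold CV).
search = RandomizedSearchCV(clf, param_dist, n_iter=5,
                            cv=ShuffleSplit(n_splits=1, test_size=0.2),
                            random_state=0)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Talos covers the same ground with its own scanning interface, so in practice the comparison mostly comes down to how much of the surrounding scikit-learn tooling (pipelines, custom estimators) you want to reuse.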

Related

Why Wasserstein GAN (WGAN) is not widely used compared to DCGAN? [closed]

Wasserstein GAN (https://arxiv.org/abs/1701.07875) is a big improvement over DCGAN, offering better training stability and less mode collapse. Yet, looking at available implementations, WGAN is used remarkably less than the original DCGAN.
What is the cause of this?
I don't have a definitive answer, but one possibility is simply ease of use and the availability of open-source implementations. A quick search shows a PyTorch implementation of WGAN and a TensorFlow tutorial on DCGAN. TensorFlow was previously the more popular option (according to this link), so people probably opted for the simpler option when implementing a comparison.
Also, bear in mind that a stable implementation, where you know you have probably implemented it correctly and your competing technique surpasses it, is more desirable than learning a new framework for a GAN that will be harder to beat.
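For what it's worth, the code change WGAN actually asks for is small. Below is a rough sketch (not the paper's reference implementation) of the critic loss and weight clipping in PyTorch; the toy networks, data, and hyper-parameters are placeholders.

```python
# Rough sketch of the core WGAN changes (critic loss + weight clipping);
# the networks, data, and hyper-parameters are toy placeholders.
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # no sigmoid
generator = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))

opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
opt_g = torch.optim.RMSprop(generator.parameters(), lr=5e-5)
clip = 0.01  # weight-clipping constant

for step in range(100):
    # The paper updates the critic several times per generator step;
    # a single update is shown here to keep the sketch short.
    real = torch.randn(64, 2)                      # stand-in for real samples
    fake = generator(torch.randn(64, 8)).detach()

    # Critic: maximize E[D(real)] - E[D(fake)], i.e. minimize the negative.
    loss_c = critic(fake).mean() - critic(real).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    for p in critic.parameters():                  # crude Lipschitz constraint
        p.data.clamp_(-clip, clip)

    # Generator: maximize E[D(G(z))].
    loss_g = -critic(generator(torch.randn(64, 8))).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```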

Is there a Python package that can do temporal logic model-checking for finite state machines? [closed]

I want to be able to model a system as a finite state machine and test the properties of the model against temporal logic specifications.
I am aware of Stateflow's model-checking capabilities, but if possible I would prefer to use Python because it is open-source. I am also aware of TuLiP as a solid option for designing and simulating finite state machines, but as far as I can tell it does not do model-checking. The FSM package list on the Python wiki seems to be full of similarly implementation-focused packages.
Does anyone know of a different Python package which is capable of model-checking per temporal logic design specifications?
There are plenty of free model checkers, such as NuSMV and Spin: https://en.wikipedia.org/wiki/List_of_model_checking_tools
https://github.com/johnyf/tool_lists/blob/master/verification_synthesis.md
I doubt that you will find many Python-based tools, but there are a few available:
PyNuSMV - a Python frontend to NuSMV, an industrial-strength free model checker: https://github.com/sbusard/pynusmv
Spot - an LTL/omega-automata library for model checking with a Python binding: https://spot.lrde.epita.fr/
pyModelChecking - a small CTL, CTL* and LTL (Büchi automata) model checker (a short usage sketch follows below): https://github.com/albertocasagrande/pyModelChecking
PyBoolNet - a frontend to NuSMV, along with miscellaneous Boolean-network tooling: https://github.com/hklarner/PyBoolNet
Intrepyd: https://github.com/formalmethods/intrepyd
CoSA - a hardware LTL model checker: https://github.com/cristian-mattarei/CoSA
Hylaa - a hybrid-systems model checker: https://github.com/stanleybak/hylaa
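To give an idea of what the lightweight end of that list looks like, here is a rough sketch of a CTL check with pyModelChecking; the three-state Kripke structure is made up, and the exact API may differ between versions.

```python
# Rough sketch of CTL model checking with pyModelChecking (API may vary by version);
# the three-state Kripke structure below is an arbitrary toy example.
from pyModelChecking import Kripke
from pyModelChecking.CTL import AG, EF, modelcheck

# Transition relation R over states 0-2, plus a labelling with atomic propositions.
K = Kripke(R=[(0, 1), (1, 2), (2, 2), (2, 0)],
           L={0: {'ready'}, 1: {'busy'}, 2: {'done'}})

# "From every reachable state it is always possible to eventually reach 'done'."
phi = AG(EF('done'))
print(modelcheck(K, phi))   # returns the set of states satisfying phi
```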

PuLP and OR-Tools Alternatives [closed]

I currently have a MIP model formulated in Gurobi's Python API, but recently I have been looking into tools such as PuLP and OR-Tools that let me build a model once and feed it to multiple different optimizers. One feature of Gurobi used extensively in my model is the ability to write constraints using functions such as and, or, min, max, and abs. However, it seems that PuLP and OR-Tools do not support these. Are there any alternatives that do? Or would I have to reformulate my model if I want to use something like this?
For or-tools, we only provide the minimal API for the linear solver.
If your problem is more structured (scheduling, routing, CP-like constraints), you can have a look at the CP-SAT interface:
https://developers.google.com/optimization/
https://github.com/google/or-tools/blob/master/ortools/sat/doc/index.md
Python examples are here:
https://github.com/google/or-tools/tree/master/examples/python
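For illustration, here is a small CP-SAT sketch showing the kinds of constraints asked about (max, abs, boolean or, plus a reified linear constraint); the variables, bounds, and objective are arbitrary.

```python
# Small CP-SAT sketch showing max, abs, boolean-or, and reified constraints;
# the variables, bounds, and objective are arbitrary illustrative choices.
from ortools.sat.python import cp_model

model = cp_model.CpModel()
x = model.NewIntVar(-10, 10, 'x')
y = model.NewIntVar(-10, 10, 'y')
m = model.NewIntVar(-10, 10, 'm')
a = model.NewIntVar(0, 10, 'a')
b1 = model.NewBoolVar('b1')
b2 = model.NewBoolVar('b2')

model.AddMaxEquality(m, [x, y])            # m == max(x, y)
model.AddAbsEquality(a, x)                 # a == |x|
model.AddBoolOr([b1, b2])                  # b1 or b2
model.Add(x + y >= 3).OnlyEnforceIf(b1)    # linear constraint active only if b1
model.Maximize(m - a)

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(solver.Value(x), solver.Value(y), solver.Value(m), solver.Value(a))
```

Note that CP-SAT works on integer variables only, so a model with continuous variables would need scaling or a different reformulation.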
You might also want to have a look at Pyomo. It supports a variety of modeling tools and can call different solvers.

Clustering using SOM in python [closed]

I am trying to perform text summarization using a self-organizing map (SOM) as the clustering model. Are there any libraries for performing SOM in Python?
There is one here, but in general SOM implementations are not part of the main machine learning libraries. There are two reasons:
SOMs, although nice to look at, don't really perform well on real problems.
It is too easy to construct one yourself.
I would suggest making it yourself. It is very easy and a great way to introduce yourself to Python. The main code of the SOM itself is about 3 lines (a loop and one update); the remaining code would be for loading the data and plotting it, and you won't avoid that part by using an external library.
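To give a feel for how little code that is, here is a minimal, unoptimized SOM sketch in plain NumPy; the grid size, decay schedules, and toy data are arbitrary illustrative choices.

```python
# Minimal self-organizing map in plain NumPy; grid size, decay schedules,
# and toy data are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((500, 3))                          # toy data: 500 points in 3-D

grid_h, grid_w, dim = 10, 10, data.shape[1]
weights = rng.random((grid_h, grid_w, dim))          # one prototype per map node
yy, xx = np.mgrid[0:grid_h, 0:grid_w]                # node coordinates on the map

n_steps, lr0, sigma0 = 5000, 0.5, max(grid_h, grid_w) / 2
for t in range(n_steps):
    x = data[rng.integers(len(data))]
    lr = lr0 * np.exp(-t / n_steps)                  # decaying learning rate
    sigma = sigma0 * np.exp(-t / n_steps)            # shrinking neighbourhood

    # Best matching unit: the node whose prototype is closest to the sample.
    dists = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(dists), dists.shape)

    # Gaussian neighbourhood pulls nearby prototypes towards the sample.
    h = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)
```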

Python module for multiple variable global optimization [closed]

I have been looking for a python module that implements the common techniques of global optimization (finding the global minimum of a function in N dimensions) without success.
If you have heard of a simulated annealing or genetic algorithm implementation in Python, please share.
SciPy's optimize module has a dual_annealing function that might fit your needs. Also, check out the PyEvolve module for genetic algorithms.
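As a quick sketch of the dual_annealing route (the Rastrigin test function and the bounds are just an illustration):

```python
# Quick sketch of scipy.optimize.dual_annealing on a standard multimodal
# test function (Rastrigin); the function and bounds are just an illustration.
import numpy as np
from scipy.optimize import dual_annealing

def rastrigin(x):
    x = np.asarray(x)
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 4                 # 4-dimensional search space
result = dual_annealing(rastrigin, bounds, seed=42)
print(result.x, result.fun)                  # global minimum is at the origin
```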
I'm not an expert, but have you looked at:
Scipy's optimize: http://docs.scipy.org/doc/scipy/reference/optimize.html#global
NLOpt: http://ab-initio.mit.edu/wiki/index.php/NLopt_Introduction
OpenOpt: http://openopt.org/Foreword
One of the most common is scipy.optimize.
For genetic algorithms, there's pygene.
Also, the aima-python project has implementations of algorithms described in Russell and Norvig's "Artificial Intelligence: A Modern Approach".
I've been working on a detailed comparison of many Python global optimizers (I assume you are interested in derivative-free optimization, where there are plenty of local minima).
hyperopt
optuna
pysot
scipy.optimize
pymoo
many more (see list of some I left out)
To summarize, I'd recommend scipy.optimize, and if you're in fewer than, say, ten dimensions, the SHGO algorithm there is really solid. You might want to read up on it if you have a passing interest in homology. It is better than some earlier methods, such as basin-hopping, because it cleverly tries to avoid redundant local searches.
The full list and comparisons are in the report
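As a rough illustration of the SHGO recommendation (the objective, bounds, and sampling settings are arbitrary):

```python
# Rough illustration of scipy.optimize.shgo on a toy multimodal objective;
# the function, bounds, and sampling settings are arbitrary.
import numpy as np
from scipy.optimize import shgo

def objective(x):
    # A bumpy 2-D surface with many local minima.
    return np.sin(3 * x[0]) * np.cos(3 * x[1]) + 0.1 * (x[0] ** 2 + x[1] ** 2)

bounds = [(-3, 3), (-3, 3)]
result = shgo(objective, bounds, n=64, sampling_method='sobol')
print(result.x, result.fun)   # best minimum found
print(len(result.xl))         # all local minima SHGO located
```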
Simulated Annealing:
frigidum is a Python package for simulated annealing.
