Why is Wasserstein GAN (WGAN) not widely used compared to DCGAN? [closed]

Wasserstein GAN (https://arxiv.org/abs/1701.07875) is a big improvement over DCGAN, offering better training stability and less mode collapse. Yet, looking at available implementations, WGAN is used remarkably less often than the original DCGAN.
What explains this?

I don’t have a definitive answer, but one possibility is simply ease of use and the availability of open-source implementations. A quick search turns up a PyTorch implementation of WGAN and a TensorFlow tutorial on DCGAN. TensorFlow was previously the more popular option (according to this link), so people probably opted for the simpler route when implementing a comparison.
Also, bear in mind that a stable baseline you know you have implemented correctly, and that your competing technique surpasses, is more attractive than learning a new framework for a GAN that will be harder to beat.
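For context, the practical difference is small but easy to get wrong: WGAN replaces DCGAN's sigmoid/binary-cross-entropy discriminator with an unbounded critic trained on a Wasserstein objective plus weight clipping (the paper also recommends RMSprop). Below is a minimal, hypothetical PyTorch sketch of one critic update; `critic`, `generator`, `real`, and the clipping value are placeholders, not code from either paper.

```python
import torch

# One WGAN critic step (sketch): no sigmoid, no BCE, clip weights afterwards.
def critic_step(critic, generator, real, opt_critic, noise_dim=100, clip=0.01):
    noise = torch.randn(real.size(0), noise_dim)
    fake = generator(noise).detach()
    # maximize E[critic(real)] - E[critic(fake)]  ->  minimize the negative
    loss = -(critic(real).mean() - critic(fake).mean())
    opt_critic.zero_grad()
    loss.backward()
    opt_critic.step()
    # weight clipping keeps the critic (approximately) Lipschitz-bounded
    for p in critic.parameters():
        p.data.clamp_(-clip, clip)
    return loss.item()
```

Compare that with the single `BCEWithLogitsLoss` call a DCGAN discriminator needs; the extra moving parts (clipping value, several critic steps per generator step) are where most implementation mistakes tend to happen.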

Related

Why does numpy use row-based data as opposed to column-based data? [closed]

My problem is a conceptual rather than a practical one:
What are the reasons numpy uses row-based (row-major) data instead of column-based (column-major) data?
I know that row-major data can be traversed faster by the CPU row by row, thus increasing performance. Column-major data, on the other hand, would arguably be more "mathematically natural".
Performance alone would justify the convention, but I wanted to know whether there are other reasons it is used. (I am aware that this convention is not unique to numpy but is common in general, so I suppose another reason is simply consistency with other libraries.)
Note that I already asked this question on the numpy GitHub, but I wanted to see if I can reach people with different knowledge here.
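To make the distinction concrete, numpy lets you request either layout explicitly. A small illustrative sketch (the array names and the int64 assumption are mine, not from the question):

```python
import numpy as np

a_c = np.arange(6).reshape(2, 3, order='C')  # row-major (numpy's default)
a_f = np.asfortranarray(a_c)                 # same values, column-major layout

# Strides show how many bytes separate consecutive elements along each axis.
print(a_c.strides)  # (24, 8) for int64: elements within a row are contiguous
print(a_f.strides)  # (8, 16) for int64: elements within a column are contiguous

# Iterating in the order that matches the layout touches memory sequentially,
# which is why row-wise loops are faster on C-ordered arrays.
```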

Finding a model for a machine learning problem with a sensor [closed]

I'm doing a project where I have data from 100 sensors and their cycles until they break. The data shows a number of characteristics up to each sensor's failure, and then continues for the replacement sensor. With this data, I have to build a model that predicts how long a sensor will keep working until it fails, using only part of the cycle rather than the full history. I have no idea which machine learning model is suitable for this.
The type of problem you are describing is known as survival analysis. A wide range of both statistical and machine learning methods is available to help you solve these types of problems.
What is great about these methods is that they also allow you to use data points where the event you are interested in has not yet occurred (censored observations). In your example, that means you can potentially extend your dataset by including data from sensors that have not failed yet.
When you look at the methods, I suggest you also spend some time examining how to evaluate these types of models, since the evaluation metrics differ slightly from those in typical machine learning problems.
A comprehensive overview of techniques is available at: http://dmkd.cs.vt.edu/TUTORIAL/Survival/Slides.pdf
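As a hedged illustration of how this looks in Python, the lifelines package implements a classic starting point (a Cox proportional-hazards model). The tiny frame and column names below are hypothetical stand-ins for your sensor features, just to show the shape of the workflow:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical frame: one row per sensor, 'cycles' = observed lifetime so far,
# 'failed' = 1 if the sensor actually failed, 0 if it is still running (censored).
df = pd.DataFrame({
    "cycles":      [120, 340, 210, 500],
    "failed":      [1,   1,   0,   0],
    "temperature": [71.2, 68.5, 74.0, 69.9],
    "vibration":   [0.3, 0.7, 0.5, 0.2],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="cycles", event_col="failed")
cph.print_summary()  # hazard ratios per feature

# Median-survival-style predictions for (here, the same) sensors.
print(cph.predict_median(df[["temperature", "vibration"]]))
```

A real dataset would of course need far more rows than this sketch for the fit to be meaningful.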

How to use R models in Python [closed]

I have been working on an algorithmic trading project where I used R to fit a random forest on historical data, while the real-time trading system is in Python.
I have fitted the model I'd like to use in R and am now wondering how I can use this model for prediction in the Python system.
Thanks.
There are several options:
(1) Random forest is a well-researched algorithm and is available in Python through scikit-learn. Consider refitting the model natively in Python if that is the end goal.
(2) If that is not an option, you can call R from within Python using the rpy2 library. There is plenty of online help available for this library, so just do a Google search for it.
Hope this helps.
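For option (2), a minimal, hypothetical sketch of what the rpy2 route can look like, assuming the R model was saved with saveRDS; the file names are placeholders:

```python
import rpy2.robjects as ro

# Load the serialized random forest that was trained and saved in R.
ro.r('library(randomForest)')
ro.r('model <- readRDS("rf_model.rds")')

# Score new observations; here they are read from a CSV the Python side wrote,
# but a pandas DataFrame could also be converted via rpy2's pandas2ri helpers.
preds = ro.r('predict(model, newdata = read.csv("latest_features.csv"))')
print(list(preds))
```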

Hyper-parameter Optimization with keras models: GridSearchCV or talos? [closed]

I want to tune hyper-parameters of Keras models and I was exploring the alternatives at hand. The first and most obvious one is to use the scikit-learn wrappers, as shown here (https://keras.io/scikit-learn-api/), which gives access to all the fabulous things in the scikit-learn workflow. But I also came across this package (https://github.com/autonomio/talos), which seems very promising and most likely offers a speed boost.
If anyone has used both, could you point me towards the better solution (flexibility, speed, features)? The sklearn workflow with pipelines and custom estimators provides a world of flexibility, but talos seems geared more directly towards Keras specifically, so it presumably offers some advantages (I guess they would not have made a new standalone package otherwise) that I am not able to see. Some benefits are highlighted here (https://github.com/autonomio/talos/blob/master/docs/roadmap.rst), but they seem to be adequately covered within the scikit-learn framework.
Any insights?
Personal opinions:
A train/valid/test split is a better choice than cross-validation for deep learning (the cost of k trainings is too high).
Random search is a good way to start exploring the hyper-parameters, and it is not really hard to code yourself (see the sketch below), but yes, talos or hyperas (which is quite well known) could be helpful.
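To illustrate the "code it yourself" point, here is a minimal, hypothetical random-search loop for a Keras model; the search space, toy data, and epoch budget are placeholders, not a recommended configuration:

```python
import random
import numpy as np
from tensorflow import keras

# Toy stand-in data; replace with your real training set.
x = np.random.rand(500, 20)
y = np.random.randint(0, 2, size=500)

search_space = {"units": [16, 32, 64, 128],
                "lr": [1e-2, 1e-3, 1e-4],
                "dropout": [0.0, 0.2, 0.5]}
best = {"val_loss": float("inf"), "params": None}

for _ in range(10):  # number of random trials
    params = {k: random.choice(v) for k, v in search_space.items()}
    model = keras.Sequential([
        keras.layers.Dense(params["units"], activation="relu", input_shape=(20,)),
        keras.layers.Dropout(params["dropout"]),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=params["lr"]),
                  loss="binary_crossentropy")
    hist = model.fit(x, y, validation_split=0.2, epochs=5, verbose=0)
    val_loss = hist.history["val_loss"][-1]
    if val_loss < best["val_loss"]:
        best = {"val_loss": val_loss, "params": params}

print(best)
```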

Is it possible to create your own Convolutional Neural Network (CNN) by hand (without using any frameworks like Keras, Caffe, TF)? [closed]

I want to create my own simple CNN, but I need some reference implementations.
Can you share links or articles where I can find ready implementations of a CNN (without any frameworks like Keras, but possibly with numpy/scipy), where I can see how each operation, like matrix multiplication, is implemented?
Yes, it is possible to implement your own bare-bones CNN without the help of any frameworks like Keras, TF, etc. You can check out this simple implementation of CNNs using numpy/cython and the code repository here.
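As a small taste of what such an implementation involves, here is a hedged numpy sketch of the core operation, a naive "valid" 2D cross-correlation; the function name and shapes are illustrative, not taken from the linked code:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2D cross-correlation: slide the kernel over the image
    and take the element-wise product sum at every position."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Quick shape check on random data.
image = np.random.rand(8, 8)
kernel = np.random.rand(3, 3)
print(conv2d_valid(image, kernel).shape)  # (6, 6)
```

A full CNN then adds an activation, pooling, a fully connected layer, and backpropagation through each of these steps, all of which can be written with the same kind of explicit numpy loops.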
