I would like to implement an artificial neural network via Hebbian learning for MNIST multiclass classification.
Is there a way to set or change the learning rule in Keras to define a Hebbian ANN?
Or a method to change how the network's weights are updated, based on a rule I define, instead of the standard Keras settings?
Otherwise, I had thought of defining the neural network without the aid of libraries, but that would certainly give lower accuracy, and I would lose the convenience the libraries provide.
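For what it's worth, Keras ships no learning rules other than gradient-based optimizers, but you can bypass the gradient machinery entirely and apply your own update inside a custom layer. Below is a minimal sketch of that idea, not an established Keras API; the class name `HebbianDense`, the rate `eta`, and the plain rule Δw = η·xᵀy are all illustrative choices:

```python
import tensorflow as tf

class HebbianDense(tf.keras.layers.Layer):
    """Dense layer whose weights are updated by a plain Hebbian rule
    (delta_w = eta * x^T y) instead of backpropagation."""

    def __init__(self, units, eta=0.01):
        super().__init__()
        self.units = units
        self.eta = eta

    def build(self, input_shape):
        # trainable=False keeps the optimizer's gradient updates away
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer="random_normal",
                                 trainable=False)

    def call(self, x):
        y = tf.sigmoid(tf.matmul(x, self.w))
        # Hebbian update: strengthen weights between co-active units
        self.w.assign_add(self.eta * tf.matmul(x, y, transpose_a=True))
        return y
```

Since there is no loss to minimize, you would drive this with a plain loop over batches rather than `model.fit`; for the MNIST read-out you would still need some classifier on top of the Hebbian features.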
My problem is that after creating and training a neural net with TensorFlow (version 2.1.0) I need to extract all the basic parameters: the network architecture, the functions used, and the weight values found through training.
These parameters will then be read by a library that generates VHDL code to port the neural network created in Python onto an FPGA.
So I wanted to ask whether there are one or more methods to get all this information in a non-binary format. The most important of these values is the weights found at the end of training.
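For reference, Keras models expose both of these directly; a minimal sketch, assuming the model was saved to an HDF5 file (the filename is illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("my_model.h5")  # illustrative path

# Architecture and per-layer settings (units, activations, ...) as JSON/dicts
print(model.to_json())
for layer in model.layers:
    print(layer.name, layer.get_config())

# Weight values as plain NumPy arrays, dumped to text (one file per tensor)
for layer in model.layers:
    for i, w in enumerate(layer.get_weights()):
        np.savetxt(f"{layer.name}_{i}.txt", w.reshape(w.shape[0], -1))
```

`model.get_weights()` also returns the full flat list of weight arrays if you don't care which layer each tensor belongs to.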
I am new to programming, Python, and all of this. I have been tasked with schoolwork that requires me to develop and evaluate abusive-language detection models from a given dataset. My proposed model must be a Convolutional Neural Network with an appropriate embedding layer as its first layer. My problem is that I don't know where to start, as I am very new to this with no prior knowledge.
First, read up on how a CNN works:
https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53
Then, you can check a few sample implementations here; a minimal starter sketch follows the links.
Keras:
https://towardsdatascience.com/building-a-convolutional-neural-network-cnn-in-keras-329fbbadc5f5
PyTorch:
https://adventuresinmachinelearning.com/convolutional-neural-networks-tutorial-in-pytorch/
TensorFlow:
https://www.tensorflow.org/tutorials/images/cnn
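To make that concrete, here is a minimal Keras sketch of the kind of model the task describes; the vocabulary size, sequence length, and binary output are assumptions you would adapt to your dataset:

```python
import tensorflow as tf

vocab_size, seq_len, embed_dim = 20000, 100, 128  # illustrative values

model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len,)),                   # token-id sequences
    tf.keras.layers.Embedding(vocab_size, embed_dim),   # embedding first
    tf.keras.layers.Conv1D(128, 5, activation="relu"),  # n-gram filters
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # abusive / not
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```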
I used GridSearchCV to determine which hyperparameters of the MLPClassifier can raise my neural network's accuracy. I figured out that the number of layers and nodes makes a difference, but I'm trying to figure out which other configurations can make a difference in accuracy (F1 score, actually). From my experience it looks like parameters such as "activation", "learning_rate", and "solver" don't really change anything.
I need to research which other hyperparameters can make a difference in the accuracy of the network's predictions.
Does anyone have tips or ideas on which parameters, other than the number of layers and nodes, can make a difference in the accuracy of my neural network's predictions?
It all depends on your dataset. Neural networks are not magic tools that can learn everything, and they require a lot of data compared to traditional machine learning models. In the case of an MLP, making the model extremely complex by adding a lot of layers is never a good idea: it makes the model slower and can lead to overfitting as well. The learning rate is an important factor: a model makes mistakes and learns from them, and the learning rate controls the speed of that learning. If the learning rate is too small, your model will take a long time to reach the best possible solution, but if it is too high the model might skip right past it. The choice of activation function again depends on the use case and the data, but for simpler datasets the activation function will not make a huge difference.
In traditional deep learning models, a neural network is built up of several layers, which are not always dense. All the layers in an MLP are dense, i.e. feed-forward. To improve your model, you can try a combination of dense layers along with CNN, RNN, LSTM, GRU, or other layers. Which layer to use depends completely on your dataset. If you are using a very simple dataset for a school project, then also experiment with traditional machine learning methods like random forests, as you might get better results.
If you want to stick to neural nets, read about other types of layers, dropout, regularization, pooling, etc.
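As a starting point for that research, here is a minimal GridSearchCV sketch over MLPClassifier hyperparameters beyond layer sizes (regularization strength, initial learning rate, batch size, early stopping); `X` and `y` are assumed to be your prepared data, and the grid values are illustrative:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

param_grid = {
    "hidden_layer_sizes": [(50,), (100,), (50, 50)],
    "alpha": [1e-4, 1e-3, 1e-2],          # L2 regularization strength
    "learning_rate_init": [1e-3, 1e-2],   # initial step size
    "batch_size": [32, 128],
    "early_stopping": [True, False],      # hold out data to stop on
}

search = GridSearchCV(MLPClassifier(max_iter=500), param_grid,
                      scoring="f1", cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```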
Recently I was at a Data Science meetup in my city, where there was a talk about connecting Neural Networks with SVMs. Unfortunately, the presenter had to leave right after the presentation, so I wasn't able to ask any questions.
I was wondering how that is possible. He was talking about using neural networks for his classification, and later on he used an SVM classifier to improve his accuracy and precision by about 10%.
I am using Keras for neural networks and sklearn for the rest of my ML work.
This is completely possible and actually quite common. You just select the output of a layer of the neural network and use that as a feature vector to train an SVM. Generally one normalizes the feature vectors as well.
Features learned by (Convolutional) Neural Networks are powerful enough that they generalize to different kinds of objects and even completely different images. For examples see the paper CNN Features off-the-shelf: an Astounding Baseline for Recognition.
As for implementation, you just have to train a neural network, then select one of its layers (usually the one right before the fully connected layers, or the first fully connected one), run the network on your dataset, store all the feature vectors, and then train an SVM with a different library (e.g. sklearn).
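A minimal sketch of that recipe, assuming `model` is a trained Keras model and `X_train`, `y_train` are your data:

```python
import tensorflow as tf
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Reuse everything up to the penultimate layer as a feature extractor
feature_extractor = tf.keras.Model(inputs=model.input,
                                   outputs=model.layers[-2].output)

features = feature_extractor.predict(X_train)
features = StandardScaler().fit_transform(features)  # normalize features

svm = SVC(kernel="rbf")
svm.fit(features, y_train)
```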
I need to implement a classification application for neuron signals. In the first step, I need to train a denoising autoencoder (DAE) layer for signal cleaning; then I will feed its output to a DBN network for classification. I tried to find support for these model types in TensorFlow, but all I found were the CNN and RNN models. Does anyone have an idea about a robust implementation of these two models using TensorFlow?
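TensorFlow/Keras has no DBN implementation out of the box (RBM pre-training would need a third-party library or hand-rolled code), but the DAE stage can be built from standard layers. A minimal sketch, where the signal dimension and noise level are assumptions:

```python
import tensorflow as tf

signal_dim, noise_std = 64, 0.1  # illustrative values

inputs = tf.keras.Input(shape=(signal_dim,))
noisy = tf.keras.layers.GaussianNoise(noise_std)(inputs)  # corrupts only during training
encoded = tf.keras.layers.Dense(32, activation="relu")(noisy)
decoded = tf.keras.layers.Dense(signal_dim)(encoded)      # linear reconstruction

dae = tf.keras.Model(inputs, decoded)
dae.compile(optimizer="adam", loss="mse")
# Train to reconstruct the clean signal from its corrupted version:
# dae.fit(clean_signals, clean_signals, epochs=20)
```

The cleaned output (`dae.predict(...)`) or the encoder's activations can then be fed to whatever classifier stands in for the DBN stage.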