I am new to programming and to Python. I have been given a school assignment that requires me to develop and evaluate abusive language detection models from a given dataset. My proposed model must be a Convolutional Neural Network (CNN) with an appropriate embedding layer as the first layer. My problem is that I don't know how to start, as I am very new to this and have no prior knowledge.
First, start by reading up on CNNs so you understand how they work:
https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53
Lastly, you can check a few sample implementations here:
Keras:
https://towardsdatascience.com/building-a-convolutional-neural-network-cnn-in-keras-329fbbadc5f5
PyTorch:
https://adventuresinmachinelearning.com/convolutional-neural-networks-tutorial-in-pytorch/
TensorFlow:
https://www.tensorflow.org/tutorials/images/cnn
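Since the task specifically asks for a CNN with an embedding layer over text, here is a minimal sketch of such a model in Keras. It assumes your texts have already been tokenized into padded integer sequences; `vocab_size`, `max_len`, and all hyperparameters below are placeholder assumptions you will need to adapt to your dataset.

```python
# Minimal sketch of a text CNN for binary abusive-language classification in Keras.
# vocab_size, max_len, and the layer sizes are placeholder assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size = 20000   # number of distinct tokens kept by your tokenizer (assumption)
max_len = 100        # length sequences are padded/truncated to (assumption)

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(input_dim=vocab_size, output_dim=128),  # embedding as first layer
    layers.Conv1D(filters=128, kernel_size=5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),   # 1 = abusive, 0 = not abusive
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, validation_split=0.1, epochs=5, batch_size=32)
```

Once you have a tokenizer (for example `tf.keras.preprocessing.text.Tokenizer`) producing the integer sequences, you can train and evaluate this model with `fit` and `evaluate` as in the linked tutorials.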
Related
I would like to implement an artificial neural network via Hebbian learning for MNIST multiclass classification.
Is there a way to set or change the learning rule in Keras to define a Hebbian ANN?
Or is there a method to change how the network's weights are updated, based on a rule I define instead of the standard Keras optimizers?
Otherwise I would have to define the neural network without the help of these libraries, but that would certainly cost accuracy and I would lose the convenience the libraries provide.
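Keras optimizers are gradient-based, so a Hebbian rule usually means bypassing the optimizer and updating the weights yourself in a custom loop. Below is a rough sketch of one manual Hebbian step (delta_w = lr * x^T y) applied to a single Dense layer; the layer size, learning rate, and the flattened-MNIST input shape are illustrative assumptions, not a built-in Keras mechanism.

```python
# Sketch: manual Hebbian-style update of a Dense layer's kernel, skipping Keras optimizers.
import tensorflow as tf

layer = tf.keras.layers.Dense(10, activation="softmax", use_bias=False)
layer.build(input_shape=(None, 784))   # MNIST images flattened to 784 features (assumption)

lr = 0.001

def hebbian_step(x_batch):
    """One Hebbian update: strengthen weights between co-active pre/post units."""
    y = layer(x_batch)                                        # post-synaptic activity
    delta_w = lr * tf.matmul(x_batch, y, transpose_a=True)    # outer product summed over batch
    layer.kernel.assign_add(delta_w)

# Example usage with random data standing in for an MNIST batch:
x = tf.random.uniform((32, 784))
hebbian_step(x)
```

The same pattern (compute activations, compute your own weight delta, `assign_add` it to the layer's variables) lets you keep Keras layers and models while replacing the learning rule entirely.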
I am trying to re-create a neural network mentioned in the following paper: "https://www.frontiersin.org/articles/10.3389/fneur.2020.00375/full"
However, there is a lot of missing information about the neural network that I am trying to design.
I am a bit confused about the model above: it doesn't look like they concatenate the outputs of the 3 pretrained networks. It seems like they are using the imported networks as filters.
So my question is the following:
Is it possible to train the 3 models (4, counting the last two dense layers and the output layer) separately, use the 3 networks as filters, pass their (possibly flattened) outputs to the last fully connected layers, and get a classification? How is this usually accomplished, training-wise?
Or is the best approach to use the Keras functional API to create a multi-input model where the outputs are concatenated before being fed into the last part of the network? (FYI: I've never used this API before, so I don't know its limitations...)
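For reference, here is a rough sketch of what the functional-API version could look like: three frozen pretrained backbones applied to the same input, their pooled outputs concatenated and passed to the final dense layers. The choice of VGG16/ResNet50/InceptionV3, the input size, and the output layer are assumptions for illustration; substitute whatever networks the paper actually uses.

```python
# Sketch: concatenating the outputs of three pretrained backbones with the Keras functional API.
import tensorflow as tf
from tensorflow.keras import layers, models, applications

inp = layers.Input(shape=(224, 224, 3))

def backbone(base, name):
    base.trainable = False                       # freeze pretrained weights
    x = base(inp)
    return layers.GlobalAveragePooling2D(name=f"{name}_gap")(x)

b1 = backbone(applications.VGG16(include_top=False, weights="imagenet"), "vgg")
b2 = backbone(applications.ResNet50(include_top=False, weights="imagenet"), "resnet")
b3 = backbone(applications.InceptionV3(include_top=False, weights="imagenet"), "inception")

merged = layers.Concatenate()([b1, b2, b3])      # fuse the three feature vectors
x = layers.Dense(256, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid")(x)   # adjust for your number of classes

model = models.Model(inputs=inp, outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

With the backbones frozen, only the concatenation head is trained end to end; unfreezing them later for fine-tuning is also possible.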
My problem is that after creating and training a neural net with TensorFlow (version 2.1.0) I need to extract all the basic parameters: the network architecture, the functions used, and the weight values found through training.
These parameters will then be read by a library that will generate the VHDL code to port the neural network built in Python onto an FPGA.
So I wanted to ask if there are one or more methods to get all this information in a non-binary format. Of all these values, the most important is the extraction of the weights found at the end of training.
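As a starting point, a tf.keras model already exposes both pieces in Python objects you can write out as plain text. The sketch below assumes `model` is the trained tf.keras model; the file names and the text layout are arbitrary choices for illustration.

```python
# Sketch: dumping a trained tf.keras model's architecture and weights in non-binary form.
import json

# Architecture (layer types, activations, shapes) as a JSON string:
with open("architecture.json", "w") as f:
    f.write(model.to_json())

# Trained weights, layer by layer, written as plain-text numbers:
with open("weights.txt", "w") as f:
    for layer in model.layers:
        for i, w in enumerate(layer.get_weights()):   # typically [kernel, bias] per layer
            f.write(f"# {layer.name} param {i} shape {w.shape}\n")
            f.write(" ".join(str(v) for v in w.flatten()) + "\n")
```

`layer.get_weights()` returns plain NumPy arrays, so you can reshape or reformat them however the VHDL-generation library expects.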
I need to implement a neural network which is NOT layer based, meaning that ANY neuron may be connected to any other neuron, and that there's no way to logically organize them in consecutive layers.
What I'm asking for is an example or a reference to proper and clear documentation about how to implement the following:
Originally I had my own implementation in MATLAB. However, I've been using TensorFlow and Keras to test simple models; they let you tune networks very fast and the implementations are quite efficient, so I decided to try out more complex models. However, I got stuck creating this type of network.
HINT: It MAY be OK to create single-neuron layers, as long as you can connect a layer to ANY layer (without caring if it is not adjacent) and to MORE THAN ONE LAYER.
I'm new to TF and Keras, so a simple Python example would be appreciated, although just pointing me in the right direction would also be OK.
This is an example network (loops are intentional!):
I don't need to train at the moment, just to evaluate models. However, keep in mind that evaluation of this kind of network is different too: one possible way is to keep propagating the signal until the output stabilizes, but that is just an example.
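Since only evaluation is needed, one simple way to represent arbitrary (even cyclic) connectivity is a single NxN weight matrix and an iterative update, done here in plain NumPy rather than Keras layers. Everything below (the tanh activation, the clamped input neurons, the convergence test) is an illustrative assumption of the "iterate until the output stabilizes" idea, not a standard TensorFlow construct.

```python
# Sketch: evaluating an arbitrarily connected (possibly cyclic) network with a weight matrix.
# W[i, j] is the weight from neuron j to neuron i; any neuron may feed any other.
import numpy as np

n_neurons = 6
rng = np.random.default_rng(0)

W = rng.normal(scale=0.5, size=(n_neurons, n_neurons))
b = np.zeros(n_neurons)
input_idx = [0, 1]      # neurons that receive external input (assumption)
output_idx = [5]        # neuron(s) read as output (assumption)

def evaluate(x_in, max_steps=100, tol=1e-6):
    a = np.zeros(n_neurons)
    for _ in range(max_steps):
        a_next = np.tanh(W @ a + b)
        a_next[input_idx] = x_in                   # clamp input neurons to the external signal
        if np.max(np.abs(a_next - a)) < tol:       # stop once activations stabilize
            break
        a = a_next
    return a[output_idx]

print(evaluate(np.array([0.3, -0.7])))
```

Missing connections are simply zeros in W, so single-neuron "layers" and connections to non-adjacent or multiple targets all fall out of the same matrix representation.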
I need to implement a classification application for neuron signals. In the first step, I need to train a denoising autoencoder (DAE) layer for signal cleaning; then I will feed its output to a DBN network for classification. I tried to find support for these model types in TensorFlow, but all I found were CNN and RNN models. Does anyone have an idea of a robust implementation for these two models using TensorFlow?
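TensorFlow has no built-in DAE or DBN models, but a denoising autoencoder can be assembled from ordinary Keras layers. Below is a minimal sketch for 1-D signals; `signal_len`, the noise level, and the bottleneck size are placeholder assumptions, and the DBN part would still have to be built or borrowed separately.

```python
# Sketch: a denoising autoencoder for 1-D neuron-signal windows in Keras.
import tensorflow as tf
from tensorflow.keras import layers, models

signal_len = 256   # length of one signal window (assumption)

inp = layers.Input(shape=(signal_len,))
noisy = layers.GaussianNoise(0.1)(inp)            # corruption is applied only during training
encoded = layers.Dense(64, activation="relu")(noisy)
decoded = layers.Dense(signal_len, activation="linear")(encoded)

dae = models.Model(inp, decoded)
dae.compile(optimizer="adam", loss="mse")
# dae.fit(X_train, X_train, epochs=20, batch_size=64)   # target is the clean signal itself

# The encoder half can then produce the cleaned/compressed features for the classifier:
encoder = models.Model(inp, encoded)
```

After training, `encoder.predict(...)` gives the denoised representation that the downstream classifier (DBN or any other model) can consume.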