I am trying to re-create a neural network mentioned in the following paper: "https://www.frontiersin.org/articles/10.3389/fneur.2020.00375/full"
However, a lot of information is missing. The neural network that I am trying to design:
I am a bit confused about the model above: it doesn't look like they concatenate the outputs from the 3 pretrained networks. It seems they are using the imported networks as filters.
So my question is the following:
Is it possible to train 3 models (4, counting the last two dense layers and the output layer) separately, use the 3 networks as filters, and pass their output (possibly flattened) to the final fully connected layers to get a classification? How is this usually accomplished, training-wise?
Or is the best approach to use the Keras functional API to create a multi-input model where the outputs are concatenated before being fed into the last network? (FYI: I've never used this API before, so I don't know its limitations...)
I have time series which I embed using a CNN.
I also have image-like data which I embed using an RNN.
What I would like to do is merge the output of both encoder networks to train a decoder network which would predict the next timestamp of the input time series.
Could you offer any help on how to do this?
I am using Python (Keras library) on Windows 10.
Thanks in advance
What you are trying to achieve can be accomplished by building a complex model using the Keras Functional API. The resource below can help you understand it.
https://machinelearningmastery.com/keras-functional-api-deep-learning/
Note: The main concept you would want to focus on is concatenation of the layers.
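For the time-series + image-like merge described in the question, a minimal Functional API sketch might look like the following. All input shapes, layer sizes, and the 30-step window are placeholder assumptions; the final Dense(1) regresses the next timestamp:

```python
from tensorflow import keras
from tensorflow.keras import layers

series_in = keras.Input(shape=(30, 1), name="time_series")   # 30 steps assumed
image_in = keras.Input(shape=(32, 32, 1), name="image_like")

# Time-series encoder (the question uses a CNN here).
s = layers.Conv1D(16, 3, activation="relu")(series_in)
s = layers.GlobalAveragePooling1D()(s)

# Image-like encoder (the question uses an RNN): treat rows as timesteps.
r = layers.Reshape((32, 32))(image_in)
r = layers.LSTM(16)(r)

# Concatenate both embeddings and decode the next timestamp.
merged = layers.Concatenate()([s, r])
x = layers.Dense(32, activation="relu")(merged)
next_value = layers.Dense(1, name="next_timestamp")(x)

model = keras.Model(inputs=[series_in, image_in], outputs=next_value)
model.compile(optimizer="adam", loss="mse")
```

Training then just takes a list of two input arrays, e.g. `model.fit([series_batch, image_batch], targets)`.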
I want to build a neural network that will take the feature map from the last layer of a CNN (VGG or ResNet, for example), concatenate an additional vector (for example, a 1x768 BERT vector), and re-train the last layer on a classification problem.
So the architecture should be like in:
but I want to concatenate an additional vector to each feature vector (I have a sentence describing each frame).
I have 5 possible labels and 100 input frames.
Can someone help me with how to implement this type of network?
I would recommend looking into the Keras functional API.
Unlike a sequential model (which is usually enough for many introductory problems), the functional API allows you to create any acyclic graph you want. This means that you can have two input branches, one for the CNN (image data) and the other for any NLP you need to do (relating to the descriptive sentence that you mentioned). Then, you can feed the combined outputs of these two branches into the final layers of your network and produce your result.
Even if you've already created your model using models.Sequential(), it shouldn't be too hard to rewrite it to use the functional API.
For more information and implementation details, look at the official documentation here: https://keras.io/guides/functional_api/
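A hedged sketch of the setup in the question: a frozen CNN backbone produces a pooled per-frame feature vector, a 768-dim sentence vector (computed offline with BERT) is concatenated to it, and only the new head is trained on the 5 labels. Layer sizes are assumptions, and `weights=None` is used here only to avoid a download; you would pass `weights="imagenet"` in practice:

```python
from tensorflow import keras
from tensorflow.keras import layers

image_in = keras.Input(shape=(224, 224, 3), name="frame")
text_in = keras.Input(shape=(768,), name="bert_vector")

# Pretrained backbone; pooling="avg" yields one feature vector per frame.
backbone = keras.applications.ResNet50(
    include_top=False, weights=None, pooling="avg", input_tensor=image_in)
backbone.trainable = False  # re-train only the new head

# Concatenate the 2048-dim image features with the 768-dim BERT vector.
merged = layers.Concatenate()([backbone.output, text_in])
x = layers.Dense(256, activation="relu")(merged)
output = layers.Dense(5, activation="softmax")(x)  # 5 labels

model = keras.Model(inputs=[image_in, text_in], outputs=output)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```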
I am new to programming, Python, and all of this. I have been given a task at school that requires me to develop and evaluate abusive-language detection models from a given dataset. My proposed model must be a convolutional neural network with an appropriate embedding layer as its first layer. My problem is that I don't know how to start, as I am very new to this with no prior knowledge.
First, start by reading up on CNNs to build an understanding:
https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53
Then, you can check a few sample implementations here:
Keras:
https://towardsdatascience.com/building-a-convolutional-neural-network-cnn-in-keras-329fbbadc5f5
Pytorch:
https://adventuresinmachinelearning.com/convolutional-neural-networks-tutorial-in-pytorch/
TensorFlow:
https://www.tensorflow.org/tutorials/images/cnn
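Putting those pieces together for this task, a minimal Keras text-CNN with an embedding layer first might look like this. Vocabulary size, sequence length, and all layer sizes are assumptions, and the single sigmoid output treats "abusive / not abusive" as a binary label:

```python
from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 10000   # assumed tokenizer vocabulary size
max_len = 100        # assumed padded sequence length

model = keras.Sequential([
    layers.Input(shape=(max_len,)),          # integer-encoded tokens
    layers.Embedding(vocab_size, 64),        # required embedding first layer
    layers.Conv1D(128, 5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # probability of "abusive"
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

You would first tokenize and pad your texts (e.g. with `keras.preprocessing` or `TextVectorization`) so each sample is an integer sequence of length `max_len`.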
I'm a beginner in the field of deep learning. I'm trying to use Keras for an LSTM in a regression problem. I would like to build an ANN that can exploit the memory cell between one prediction and the next.
In more detail: I have a Keras neural network with 2 LSTM hidden layers and 1 output layer, in a regression context.
The batch_size is 7, the timestep is 1, and I have 5749 samples.
I'm only interested in understanding whether using timestep == 1 is the same as using an MLP instead of an LSTM. By time_step, I'm referring to the reshape phase for the input of the Sequential model in Keras. The output is a single regression.
I'm not interested in the previous inputs; I'm only interested in the output of the network as information for the next prediction.
Thank you in advance!
You can say so :)
You're right in thinking that you won't have any recurrence anymore.
But internally, there will still be more operations than in regular Dense layers, due to the existence of more kernels.
But be careful:
If you use stateful=True, it will still be a recurrent LSTM!
If you use initial states properly, you can still make it recurrent.
If you're interested in creating custom operations with the memory/state of the cells, you could try creating your own recurrent cell, taking the LSTMCell code as a template.
Then you'd use that cell in an RNN(CustomCell, ...) layer.
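To make the "more kernels" point concrete, you can compare parameter counts: with timesteps == 1 the LSTM sees no sequence, but its four internal kernels (input, forget, cell, and output gates) give it roughly 4x the parameters of a Dense layer of the same width. The sizes below are arbitrary:

```python
from tensorflow import keras
from tensorflow.keras import layers

features, units = 8, 16

lstm = keras.Sequential([layers.Input(shape=(1, features)),
                         layers.LSTM(units)])
dense = keras.Sequential([layers.Input(shape=(features,)),
                          layers.Dense(units, activation="tanh")])

# LSTM: 4 * units * (features + units + 1) parameters,
# Dense: units * (features + 1).
print(lstm.count_params(), dense.count_params())
```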
I need to implement a classification application for neuron signals. In the first step, I need to train a denoising autoencoder (DAE) layer for signal cleaning; then I will feed its output to a DBN network for classification. I tried to find support for these types in TensorFlow, but all I found was two models, CNN and RNN. Does anyone have an idea about a robust implementation of these two models using TensorFlow?
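Neither DAEs nor DBNs ship as ready-made models in TensorFlow, but a denoising autoencoder is straightforward to express in Keras: train an ordinary autoencoder to map artificially corrupted inputs back to the clean signal. A minimal sketch, where the signal length, noise level, and layer sizes are all placeholder assumptions:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

signal_len = 64  # assumed length of one neuron-signal window

dae = keras.Sequential([
    layers.Input(shape=(signal_len,)),
    layers.Dense(32, activation="relu"),       # encoder
    layers.Dense(16, activation="relu"),       # bottleneck
    layers.Dense(32, activation="relu"),       # decoder
    layers.Dense(signal_len, activation="linear"),
])
dae.compile(optimizer="adam", loss="mse")

# Training pairs: corrupted input -> clean target (dummy data here).
clean = np.random.randn(256, signal_len).astype("float32")
noisy = clean + 0.1 * np.random.randn(256, signal_len).astype("float32")
dae.fit(noisy, clean, epochs=1, batch_size=32, verbose=0)
```

After training, you can wrap the encoder half in a second `keras.Model` to extract the bottleneck activations and feed them to your classifier. For the DBN stage there is no built-in TensorFlow implementation; a common substitute is a stack of dense layers, or RBM-based pretraining from a third-party library.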