My problem is that after creating and training a neural network with TensorFlow (version 2.1.0) I need to extract all of its basic parameters: the network architecture, the activation functions used, and the weight values found through training.
These parameters will then be read by a library that generates VHDL code to bring the neural network created in Python onto an FPGA.
So I wanted to ask whether there are one or more methods to obtain all this information in a non-binary format. Of all these values, the most important is the extraction of the weight values found at the end of training.
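As a starting point, here is a minimal sketch of how a Keras model's architecture and weights can be read out in plain (non-binary) form; the tiny two-layer model below is only a stand-in for your trained network:

```python
import numpy as np
import tensorflow as tf

# A small stand-in model; replace with your trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Architecture (layer types, shapes, activations) as a JSON string.
architecture_json = model.to_json()

# Weights and biases as NumPy float arrays, layer by layer.
for layer in model.layers:
    weights, biases = layer.get_weights()
    print(layer.name, layer.activation.__name__)
    print(weights)  # shape: (inputs, units)
    print(biases)   # shape: (units,)

# Or dump a layer's weight matrix to a text file for the
# VHDL generator to parse.
np.savetxt("layer0_weights.txt", model.layers[0].get_weights()[0])
```

From there, the format the text files take (fixed-point conversion, ordering, etc.) is up to whatever the VHDL-generating library expects.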
Related
I am new to programming, Python and all. I have been given a school assignment that requires me to develop and evaluate abusive-language detection models from a given dataset. My proposed model must be a Convolutional Neural Network with an appropriate embedding layer as the first layer. My problem is that I don't know how to start, as I am very new to this with no prior knowledge.
First, you can start by reading up on CNNs to get an understanding of how they work:
https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53
Lastly, you can check a few sample implementations here:
Keras:
https://towardsdatascience.com/building-a-convolutional-neural-network-cnn-in-keras-329fbbadc5f5
PyTorch:
https://adventuresinmachinelearning.com/convolutional-neural-networks-tutorial-in-pytorch/
TensorFlow:
https://www.tensorflow.org/tutorials/images/cnn
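To tie the tutorials above back to the assignment, here is a minimal sketch of a text CNN with an embedding layer as the first layer, written with Keras; the vocabulary size, sequence length, and layer sizes are placeholder values that depend on your dataset:

```python
import numpy as np
import tensorflow as tf

vocab_size, seq_len = 10000, 100  # assumed: depends on your dataset

model = tf.keras.Sequential([
    # Embedding: maps each token id to a dense 64-dimensional vector.
    tf.keras.layers.Embedding(vocab_size, 64),
    # 1-D convolution over the token sequence, then global max pooling.
    tf.keras.layers.Conv1D(128, 5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # abusive vs. not
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# A batch of 2 padded token-id sequences, just to check the shapes.
dummy = np.random.randint(0, vocab_size, size=(2, seq_len))
probs = model(dummy)
```

You would train it with `model.fit` on tokenized, padded sequences and binary labels; how you tokenize the raw text is a separate preprocessing step covered in the tutorials above.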
I am looking to develop a simple Neural Network in PyTorch or TensorFlow to predict one numeric value based on several inputs.
For example, if one has data describing the interior comfort parameters for a building, the NN should predict the numeric value for the energy consumption.
Both the PyTorch and TensorFlow documented examples and tutorials are generally focused on classification and time-dependent series (which is not my case). Any idea which type of NN available in those libraries is best for this kind of problem? I'm just looking for a hint about the type, not code.
Thanks!
The type of problem you are talking about is called a regression problem. In such types of problems, you would have a single output neuron with a linear activation (or no activation). You would use MSE or MAE to train your network.
If your problem is a time series problem (where you are using previous values to predict the current/next value), then you could try multi-variate time series forecasting using LSTMs.
If your problem is not a time series one, then you could just use a vanilla feed-forward neural network. This article explains the concepts of data correlation really well, and you might find it useful in deciding what type of neural network to use based on the type of data and output you have.
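A minimal sketch of such a feed-forward regression network in Keras, with a single linear output neuron trained on MSE as described above; the 8 input features, layer sizes, and the random training data are all placeholders:

```python
import numpy as np
import tensorflow as tf

n_features = 8  # assumed number of comfort parameters per building

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(n_features,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),  # single linear output neuron: regression
])
# MSE loss (MAE would also work, as noted above).
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Train on (X, y) pairs; random data here only to show the call.
X = np.random.rand(64, n_features)
y = np.random.rand(64, 1)
model.fit(X, y, epochs=2, verbose=0)
```

The same structure works in PyTorch with `nn.Linear` layers and `nn.MSELoss`; only the training loop differs.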
I was asked to create a machine learning algorithm using TensorFlow and Python that could detect anomalies by establishing a range of "normal" values. I have two parameters: a large array of floats around 1.5, and timestamps. I have not seen similar threads using TensorFlow in a basic sense, and since I am new to the technology I am looking to build a more basic model. However, I would like it to be unsupervised, meaning that I do not specify what an anomaly is; rather, a large amount of past data does. Thank you. I am running Python 3.5 and TensorFlow 1.2.1.
Deep Learning - Anomaly and Fraud Detection
https://exploreai.org/p/deep-learning-anomaly-and-fraud-detection
Simply normalize the values and feed them to a TensorFlow autoencoder model.
So, autoencoders are deep neural networks used to reproduce the input at the output layer, i.e. the number of neurons in the output layer is exactly the same as the number of neurons in the input layer.
The encoder part of the architecture breaks the input data down into a compressed version, ensuring that important information is not lost while the overall size of the data is reduced significantly. This concept is called dimensionality reduction. The decoder then reconstructs the input from that compressed representation, and a high reconstruction error on new data signals an anomaly.
Check this repo for code: Autoencoder in TensorFlow
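To make the idea concrete for the unsupervised case described above, here is a minimal sketch (written against the modern tf.keras API, not TensorFlow 1.2): train a small autoencoder on normalized "normal" values, then flag inputs whose reconstruction error is large. The window size, layer sizes, and synthetic data near 1.5 are all assumptions:

```python
import numpy as np
import tensorflow as tf

window = 10  # assumed: feed the series in sliding windows of 10 values

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu",
                          input_shape=(window,)),  # encoder: compress
    tf.keras.layers.Dense(window),                 # decoder: reconstruct
])
autoencoder.compile(optimizer="adam", loss="mse")

# Past "normal" data, values near 1.5, normalized before training.
normal = np.random.normal(1.5, 0.01, size=(256, window))
mean, std = normal.mean(), normal.std()
autoencoder.fit((normal - mean) / std, (normal - mean) / std,
                epochs=5, verbose=0)

def reconstruction_error(x):
    # Per-window mean squared reconstruction error; large values
    # (above a threshold you pick from the training errors) = anomaly.
    z = (x - mean) / std
    return np.mean((autoencoder.predict(z, verbose=0) - z) ** 2, axis=1)
```

A common way to pick the threshold is a high percentile (e.g. the 99th) of the reconstruction errors on the training data itself.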
Lately I was at a Data Science meetup in my city, where there was a talk about connecting neural networks with SVMs. Unfortunately the presenter had to leave right after the presentation, so I wasn't able to ask some questions.
I was wondering how that is possible. He was talking about using neural networks for his classification, and later on he used an SVM classifier to improve his accuracy and precision by about 10%.
I am using Keras for neural networks and sklearn for the rest of the ML.
This is completely possible and actually quite common. You just select the output of a layer of the neural network and use that as a feature vector to train an SVM. Generally one normalizes the feature vectors as well.
Features learned by (Convolutional) Neural Networks are powerful enough that they generalize to different kinds of objects and even completely different images. For examples see the paper CNN Features off-the-shelf: an Astounding Baseline for Recognition.
Regarding implementation, you just have to train a neural network, then select one of its layers (usually one right before the fully connected layers, or the first fully connected one), run the neural network on your dataset, store all the feature vectors, then train an SVM with a different library (e.g. sklearn).
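The steps above can be sketched with Keras and sklearn as follows; the tiny untrained network, input size, and labels are stand-ins, and in practice you would use your trained model and pick one of its later layers:

```python
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in "trained" network built with the functional API; the
# penultimate layer is the one we take features from.
inp = tf.keras.Input(shape=(4,))
h = tf.keras.layers.Dense(16, activation="relu")(inp)
feat = tf.keras.layers.Dense(8, activation="relu", name="features")(h)
out = tf.keras.layers.Dense(2, activation="softmax")(feat)
model = tf.keras.Model(inp, out)

# Second model that outputs the chosen layer's activations.
feature_extractor = tf.keras.Model(inp, feat)

# Placeholder dataset: 100 samples, 4 inputs, binary labels.
X = np.random.rand(100, 4)
y = np.random.randint(0, 2, size=100)

# Run the network over the dataset, normalize the feature vectors,
# then train the SVM on them.
features = feature_extractor.predict(X, verbose=0)
features = StandardScaler().fit_transform(features)
svm = SVC(kernel="rbf").fit(features, y)
```

At prediction time you run new samples through the same feature extractor and scaler before calling `svm.predict`.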
I have trained a neural network using the MATLAB Neural Network Toolbox, in particular the command nntool. The detection is basically for traffic signs: I have used a database of 90 traffic images (no-entry, no-right and stop signs), 30 images per class, each of size 8*8 pixels, of which the no-entry signs are taken as positive. My input is 64*90 and the target is 1*90. Now I need to use this neural network in Python for real-time recognition. What parameters do I need? I am completely new to neural networks. Here's the link to an image. Here's my link to the weights.
First you need to take the weights of your neural network.
An example:
[x,t] = simplefit_dataset;   % built-in sample dataset
net = feedforwardnet(20);    % feed-forward net, 20 hidden neurons
net = train(net,x,t);
wb = getwb(net)              % all weights and biases as one vector
Then I also suggest that you read up on ANN structure; this will help you understand how the output of a neural network is calculated given the weights. You can then adapt it to your own needs and use it to calculate an output in Python.
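To illustrate that last step, here is a sketch of the forward pass in plain NumPy for a one-hidden-layer network like MATLAB's default feedforwardnet (tansig hidden activation, linear output). The weight values below are placeholders to be replaced with those exported via getwb/nntool; note that a real MATLAB net also applies input/output normalization (mapminmax by default), which you would need to replicate for the outputs to match:

```python
import numpy as np

def tansig(x):
    # MATLAB's tansig is mathematically equivalent to tanh.
    return np.tanh(x)

def forward(x, W1, b1, W2, b2):
    """One hidden layer with tansig, linear (purelin) output."""
    hidden = tansig(W1 @ x + b1)
    return W2 @ hidden + b2

# Placeholder weights; replace with the values exported from MATLAB.
W1 = np.array([[0.5, -0.2], [0.1, 0.4]])  # hidden_units x inputs
b1 = np.array([0.0, 0.1])
W2 = np.array([[1.0, -1.0]])              # outputs x hidden_units
b2 = np.array([0.2])

x = np.array([0.3, 0.7])  # one input sample
y = forward(x, W1, b1, W2, b2)
```

The vector returned by getwb is flat; use MATLAB's separatewb (or the net's layer dimensions) to split it into the per-layer matrices used here.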