How to train a neural network in Python? [closed]

I am in my final year project. For this project, I will collect data from a specific road on which I have chosen 5 points. At each point I will collect GPS data: the day of the week, the time of day, and the time taken to reach that point from the previous point.
I want to train a neural network on this data.
So the inputs are the day of the week, the time, the source and the destination, and the output will be the time needed to reach the destination point from the source point.
What is the easiest way to do this in Python? Which library should I choose?

I don't actually know the requirements of your final-year project, but just a few side notes:
Using 4 inputs to your perceptron layer (weekday, hour of day, source, destination) to predict one final neuron (time delta), you will most likely not need the non-linear power of a neural network.
If you collect the data on your own, you will most likely have too few observations to actually train a neural network, and with too few observations it will probably overfit to your data.
You are very likely perfectly fine with a linear regression (see the sketch below).
If you want to try a neural network anyway, take a look at h2o - it offers a broad variety of machine learning / AI functionality to train models and make predictions.
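To make the linear-regression suggestion concrete, here is a minimal sketch using scikit-learn. The column names and example rows are hypothetical; weekday, source and destination are one-hot encoded because their numeric codes carry no ordinal meaning.

```python
# Minimal sketch of a linear-regression baseline with scikit-learn.
# Feature names and example values are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Each row: day of week, hour of day, source point, destination point, travel time (s)
data = pd.DataFrame({
    "weekday":     [0, 0, 1, 4, 4, 5],
    "hour":        [8, 17, 8, 9, 18, 12],
    "source":      [1, 1, 2, 3, 4, 1],
    "destination": [2, 2, 3, 4, 5, 2],
    "travel_time": [95, 140, 80, 75, 160, 90],
})
X = data[["weekday", "hour", "source", "destination"]]
y = data["travel_time"]

# One-hot encode the categorical inputs, then fit ordinary least squares.
model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"),
          ["weekday", "source", "destination"])],
        remainder="passthrough")),   # keep "hour" as a numeric column
    ("regress", LinearRegression()),
])
model.fit(X, y)

# Predict the travel time for a new trip (Friday, 18:00, point 1 -> 2).
print(model.predict(pd.DataFrame(
    [{"weekday": 4, "hour": 18, "source": 1, "destination": 2}])))
```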
However, it seems to me that you may need some additional reading on this topic. You should understand how to interpret the results (if any), and you should know the pros and cons of each method - this includes knowing which data types and value ranges are appropriate for a given model.

Related

Neural Network Architecture for Graph Inputs [closed]

I have an undirected graph with edges of equal length and 7 features per node. I want to train a neural network that takes this graph as input and outputs a scalar. What network architecture do I need so that the network can analyse the graph locally (for example, a node and its neighbours) and generalise, much like convolutional neural networks operate on grid data? I have heard of Graph Neural Networks, but I don't know if that is what I'm looking for. Will a GNN be able to analyse my graph much like a CNN does an image, sharing the generalisation benefits that convolution kernels bring?
I want to implement the solution in TensorFlow, ideally with Keras.
Thank you.
The performance will most likely depend on the exact output you're hoping to get. From your description, a 2D CNN should be good enough and is easier to implement in Keras than a GNN.
However, there are some advantages to retaining the graph structure of your data. That is too much to cover here, but you can find a proper explanation in "Spatio-Temporal Analysis and Prediction of Cellular Traffic in Metropolis" by Wang et al.
That paper also has the benefit of describing how to preprocess the data for input to the network.
If you don't want to assemble your own GNN from basic Keras models, you may also want to take a look at Spektral, a Python library for graph deep learning.
Without any other constraints, I would start with a CNN, because it will be faster to implement with the almost ready-to-use models in Keras.
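As a rough illustration of the CNN route, here is a minimal Keras model that maps a grid-shaped input to a single scalar. The grid size is hypothetical, and laying the 7 node features out on a 2D grid is an assumption about how you would rasterise the graph.

```python
# Minimal sketch of a 2D CNN that outputs one scalar, assuming the graph has
# been rasterised onto a HEIGHT x WIDTH grid with 7 features per cell.
import tensorflow as tf

HEIGHT, WIDTH, FEATURES = 16, 16, 7   # hypothetical grid size

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(HEIGHT, WIDTH, FEATURES)),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),   # single scalar output (regression)
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```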

Tensorflow gradient wrt input [closed]

I'm experimenting with recent ideas from adversarial training, and I'm specifically interested in a loss function that includes the input. This means I would like to differentiate the loss function with respect to the input (not only with respect to the model parameters).
One solution I can see is the function tf.conv2d_backprop_input(...). This can work for conv layers, but I also need a solution for fully connected layers. Another way to approach the problem is the Cleverhans library written by Ian Goodfellow and Nicolas Papernot. That may be a more "complete" solution, but its usage is not exactly clear to me (I need a simple example, not a complete API).
I would love to hear your thoughts and methodology on creating a custom deep learning simulation with adversarial training.
The dependence of an output node on the input can be calculated by backpropagation and is called saliency. It can be used to understand which parts of an input contribute most strongly to a neuron's output, for any differentiable neural network. This repository contains a collection of methods for calculating saliency, with links to the corresponding papers.
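If all you need is the gradient of the loss with respect to the input (which works uniformly for conv and fully connected layers), TensorFlow's automatic differentiation already provides it. A minimal sketch using the TF 2 GradientTape API; the model, shapes and labels are placeholders:

```python
# Minimal sketch: gradient of the loss w.r.t. the input via tf.GradientTape.
# The model, input shape and labels are placeholders for illustration.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

x = tf.random.normal([8, 28, 28])                        # dummy input batch
y = tf.random.uniform([8], maxval=10, dtype=tf.int32)    # dummy labels

with tf.GradientTape() as tape:
    tape.watch(x)            # x is a plain tensor (not a Variable), so watch it
    loss = loss_fn(y, model(x))

grad_wrt_input = tape.gradient(loss, x)   # same shape as x
# An FGSM-style perturbation would then be: x + epsilon * tf.sign(grad_wrt_input)
```

In TF 1.x graph mode, tf.gradients(loss, x) gives the same gradient without needing layer-specific ops.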

Machine learning to create an equation from a set of data [closed]

I have a data set (currently only 20 pairs, but I may be able to produce over 500). My inputs are a1, a2, a3, a4, a5, a6, a7 and my output is b. I have no idea what the underlying equation looks like.
I am new to machine learning: which algorithm, library or framework in Python should I use to predict the equation from these data?
Thanks in advance.
This is called a regression problem, and many approaches are available for it. The easiest is to start with a linear regression model, as described here: http://benalexkeen.com/linear-regression-in-python-using-scikit-learn/
If you think the relationship between input and output is more complex, move on to nonlinear models such as gradient-boosted trees:
https://machinelearningmastery.com/develop-first-xgboost-model-python-scikit-learn/
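As a rough sketch of the nonlinear route, here is how XGBoost's scikit-learn wrapper could be fitted to 7 inputs and 1 output. The data and hyperparameters are illustrative; with only 20 pairs a model like this will almost certainly overfit, so it makes more sense once you have the ~500 pairs.

```python
# Minimal sketch: gradient-boosted regression on 7 inputs -> 1 output.
# Data and hyperparameters are illustrative, not tuned.
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Stand-in data for your a1..a7 columns and target b.
X = np.random.rand(500, 7)
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0, 0.0, 1.5]) + 0.1 * np.random.randn(500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("MAE:", mean_absolute_error(y_test, pred))
```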
There are many kinds of machine learning algorithms out there, and comprehensive libraries to go with them. TensorFlow is generally regarded as a good choice for implementing neural networks, but with so few samples (assuming the 20 pairs are examples, not features) you will probably not have enough data to train one. You also need to decide whether you are classifying values or doing regression on them (do you have a finite set of output values, or are you predicting a continuous range?). If you're using Python, check out the scikit-learn library and try simple linear or polynomial regression, or something like KNN for classification. If you want a more comprehensive tutorial, Kaggle has good resources and data science tutorials to get you started.

Deep learning with data from simulations [closed]

While reading the great book by F. Chollet, I'm experimenting with Keras / Tensorflow, on a simple Sequential model that I train on simulated images, which come from a physical analytical model.
Since I have full control of the simulations, I wrote a generator that produces an infinite stream of data and label batches, which I use with fit_generator in Keras. The generated data are never identical, and I can also add some random noise to each image.
Now I'm wondering: is it a problem if the model never sees the same input data from one epoch to the next?
Can I assume that my problems in getting the loss down are not due to the data being "infinite" (so that I only have to concentrate on hyperparameter tuning)?
Please share any advice you have for doing deep learning on simulated data.
A well-trained network will pick up on the patterns in the data, weighting recent data more heavily than older data. If your data come from a constant distribution this does not matter, but if that distribution changes over time the network should adapt (slowly) to the more recent distribution.
The fact that the data are never identical does not matter. Most networks are trained with some form of data augmentation (e.g. in image processing it is common to randomly crop, rotate, resize and colour-shift the images, so no two examples are identical even when they come from the same base image).
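For reference, an infinite generator with on-the-fly noise is just data augmentation taken to its limit, and Keras handles it as long as you define how many batches make up an "epoch". A minimal sketch; the image shape, noise level and model are placeholders, and current Keras lets you pass the generator directly to model.fit instead of fit_generator:

```python
# Minimal sketch: training a Keras model on an infinite stream of simulated
# images. Image shape, noise level and steps_per_epoch are placeholders.
import numpy as np
import tensorflow as tf

def simulated_batches(batch_size=32, shape=(64, 64, 1), noise_std=0.05):
    """Yield (images, labels) batches forever, with fresh noise every time."""
    while True:
        x = np.random.rand(batch_size, *shape).astype("float32")   # stand-in for the physical model
        y = x.mean(axis=(1, 2, 3))                                  # stand-in label
        x += np.random.normal(0.0, noise_std, size=x.shape).astype("float32")
        yield x, y

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64, 1)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# steps_per_epoch defines what an "epoch" means when the data stream is infinite.
model.fit(simulated_batches(), steps_per_epoch=100, epochs=5)
```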

Object detection with Keras: which R-CNN model is best? (recognising navigation symbols) [closed]

I'm working on a project these days.
The goal of the project is to recognise approximately 200 symbols.
The symbols are used in navigation (turn_right, turn_left, etc.).
I'm using a YOLO model at the moment.
To train this model, I think I need to improve the training speed.
The program will be used when testing new navigation.
Are there any better models?
The model needs very fast training speed and high accuracy.
YOLO is one of the best models for real-time object detection. Fast training and high accuracy are competing goals - did you mean test speed (with an already trained model)?
Anyway, if you need fast training, I highly suggest the cyclical learning rate strategy proposed by Leslie N. Smith.
YOLO also comes in several versions, so take a look at those as well.
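To give an idea of what a cyclical learning rate looks like in practice, here is a rough sketch of the "triangular" schedule from Smith's paper, written as a Keras LearningRateScheduler. The bounds and step size are illustrative and would need tuning; Smith's paper updates the rate per batch, while this sketch updates it per epoch for simplicity.

```python
# Minimal sketch of a triangular cyclical learning rate (after Leslie N. Smith,
# "Cyclical Learning Rates for Training Neural Networks").
# BASE_LR, MAX_LR and STEP_SIZE are illustrative, not tuned.
import math
import tensorflow as tf

BASE_LR = 1e-4     # lower bound of the cycle
MAX_LR = 1e-2      # upper bound of the cycle
STEP_SIZE = 8      # epochs per half-cycle

def triangular_clr(epoch, lr):
    """Linearly ramp the learning rate between BASE_LR and MAX_LR, back and forth."""
    cycle = math.floor(1 + epoch / (2 * STEP_SIZE))
    x = abs(epoch / STEP_SIZE - 2 * cycle + 1)
    return BASE_LR + (MAX_LR - BASE_LR) * max(0.0, 1.0 - x)

clr_callback = tf.keras.callbacks.LearningRateScheduler(triangular_clr, verbose=1)

# Usage: pass the callback to training, e.g.
# model.fit(train_data, epochs=40, callbacks=[clr_callback])
```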
