Is there an easy native way to implement tfa.optimizers.CyclicalLearningRate w/ QNetwork on DqnAgent?
Trying to avoid writing my own DqnAgent.
I guess the better question might be, what is a proper way to implement callbacks on DqnAgent?
From the tutorial you linked, the part where they set the optimizer is
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)
train_step_counter = tf.Variable(0)

agent = dqn_agent.DqnAgent(
    train_env.time_step_spec(),
    train_env.action_spec(),
    q_network=q_net,
    optimizer=optimizer,
    td_errors_loss_fn=common.element_wise_squared_loss,
    train_step_counter=train_step_counter)
agent.initialize()
So you can replace optimizer with whatever optimizer you would rather use. One caveat: tfa.optimizers.CyclicalLearningRate is a learning-rate schedule, not an optimizer, so rather than passing the bare class you instantiate it with its cycle parameters and hand the resulting schedule to a Keras optimizer's learning_rate argument. That should work, barring any compatibility issues from the tutorial using the TF 1.x Adam.
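A minimal sketch of wiring the schedule into the agent; the cycle hyperparameters here (initial_learning_rate, maximal_learning_rate, step_size, scale_fn) are placeholder values, not recommendations:

import tensorflow as tf
import tensorflow_addons as tfa

# Instantiate the schedule; passing the bare class will not work.
clr = tfa.optimizers.CyclicalLearningRate(
    initial_learning_rate=1e-4,
    maximal_learning_rate=1e-2,
    step_size=2000,               # half-cycle length, in optimizer steps
    scale_fn=lambda x: 1.0,       # triangular policy: constant amplitude
    scale_mode='cycle')

optimizer = tf.keras.optimizers.Adam(learning_rate=clr)

agent = dqn_agent.DqnAgent(
    train_env.time_step_spec(),
    train_env.action_spec(),
    q_network=q_net,
    optimizer=optimizer,
    td_errors_loss_fn=common.element_wise_squared_loss,
    train_step_counter=train_step_counter)
agent.initialize()

Because the schedule is driven by the optimizer's own step count, the learning rate cycles automatically during agent.train() calls, with no callback machinery needed.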
I'm trying to use the Keras library in Python, specifically this example: https://www.tensorflow.org/tutorials/keras/classification.
What are the best loss, hyperparameters, and optimizer for this example?
You can look up any standard example, such as MNIST classification, for reference.
For classification, cross-entropy loss is usually used. The optimizer is subjective and depends on the problem; SGD and Adam are common choices.
For the learning rate, you can start with 10^(-3) and keep reducing it if the validation loss doesn't decrease after a certain number of iterations.
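A minimal sketch of that advice applied to the linked Fashion-MNIST tutorial; the model mirrors the tutorial, train_images and train_labels come from the tutorial's data loading step, and the ReduceLROnPlateau settings (factor, patience) are assumptions:

import tensorflow as tf

# Model from the linked classification tutorial.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10)
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])

# Cut the learning rate when the validation loss plateaus.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor='val_loss', factor=0.1, patience=3)

model.fit(train_images, train_labels, epochs=20,
          validation_split=0.1, callbacks=[reduce_lr])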
I have an Excel file with 1000 different values, and I am trying to train my model with these values. With a test size of 0.33, the model should be trained on 670 values, but only 21 values appear to be trained. What is the source of the problem?
You are probably seeing the number of batches per epoch that fit reports, not the number of samples. By default every batch comprises 32 items, and ceil(670 / 32) = 21, so the progress bar shows 21 steps even though all 670 samples are used.
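A quick sanity check of that arithmetic (the variable names are illustrative):

import math

n_train = 670      # 1000 samples * (1 - 0.33) test split
batch_size = 32    # Keras default when batch_size is not passed to fit
print(math.ceil(n_train / batch_size))  # -> 21 steps per epoch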
In a data frame with 1000 texts, after lemmatizing during preprocessing, how can I find out how many words were lemmatized in each text?
Why did you run your model for just 3 epochs? I would suggest running it for about 20 epochs and then checking whether the validation accuracy has stopped improving. Also, you need to change this line of code:
model.add(Embedding(300000, 60, input_length=300))
To this:
model.add(Embedding(k, 60, input_length=300))
where you can set k to 256 or 512 or a number close to them. 300000 would be far too large: your network would spend most of its capacity on the embedding layer, when the main job belongs to the encoder and decoder.
Another thing: increase your LSTM units (maybe to 128 or 256) in both the encoder and the decoder, and remove the recurrent_dropout parameter (since you are already dropping out via the Dropout layer after the encoder). If that still doesn't help, you can add Batch Normalization layers to your model.
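A sketch combining these suggestions; k, the unit counts, and the layer order are assumptions, since the full encoder-decoder model was not shown:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, BatchNormalization

k = 256                                        # reduced embedding input size
model = Sequential()
model.add(Embedding(k, 60, input_length=300))
model.add(LSTM(128, return_sequences=True))    # larger units, no recurrent_dropout
model.add(Dropout(0.2))                        # dropout as a separate layer
model.add(BatchNormalization())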
I am wondering if there is a way to link a deep neural network built in Python and use it in MATLAB code. For example, suppose I built a deep neural network as a function in Python; can I call it in MATLAB code as a function, in order to use it with MATLAB? Is that possible? If so, can anyone provide guidance or the steps to do that?
Thank you.
You can import pretrained networks into MATLAB if you have access to the Deep Learning Toolbox.
You can use the importKerasNetwork function for TensorFlow-Keras networks, importCaffeNetwork for Caffe networks, or importONNXNetwork for ONNX networks.
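A minimal sketch of the Python side of that workflow: save the trained Keras model to HDF5, then load it in MATLAB with importKerasNetwork (the model architecture and file name here are placeholders):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')
# ... train the model ...
model.save('my_model.h5')  # in MATLAB: net = importKerasNetwork('my_model.h5')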
I'm experimenting with recent ideas coming from adversarial training, and I'm specifically interested in a loss function that includes the input. This means I would like to differentiate the loss function with respect to the input (not only the model parameters).
One solution I can see is the function tf.conv2d_backprop_input(...). This can work for conv layers, but I also need a solution for fully connected layers. Another way to approach this problem is the Cleverhans library written by Ian Goodfellow and Nicolas Papernot. It may be a more "complete" solution, but its usage is not exactly clear (I need a simple example, not a complete API).
I would love to hear your thoughts and methodology on creating a custom deep learning simulation with adversarial training.
The dependence of an output node on the input can be calculated by backpropagation and is called saliency. It can be used to understand which parts of an input contribute most strongly to a neuron's output, for any differentiable neural network. This repository contains a collection of methods for calculating saliency, with links to papers.
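A minimal sketch of taking this gradient directly with tf.GradientTape; model, x, and y are assumed to be a Keras classifier and a batch of inputs and integer labels:

import tensorflow as tf

x = tf.convert_to_tensor(x)
with tf.GradientTape() as tape:
    tape.watch(x)                  # track the input, not just trainable variables
    logits = model(x)
    loss = tf.keras.losses.sparse_categorical_crossentropy(
        y, logits, from_logits=True)
input_grad = tape.gradient(loss, x)  # same shape as x

This avoids layer-specific ops like conv2d_backprop_input: the tape differentiates through conv and fully connected layers alike.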