Artificial Neural Network - compile error - Python

I am teaching myself Deep Learning and am running into an issue while building an ANN. Here's what I am doing:
Initializing the ANN (I've split the dataset beforehand):
from keras.models import Sequential
from keras.layers import Dense

classifier = Sequential()
Adding the input layer and the first hidden layer:
classifier.add(Dense(input_dim = 11, kernel_initializer = 'uniform', activation = 'relu', units = 6))
Adding the second hidden layer:
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
Adding the output layer:
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
Compiling the ANN by employing Stochastic gradient descent:
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
After this, when I select and run the last command, I get an error that reads:
TypeError: sigmoid_cross_entropy_with_logits() got an unexpected keyword argument 'labels'
I noticed that when I use loss = 'mean_squared_error', it compiles fine. Can you tell me what's going on?
Spyder and Python: latest as of the day I am posting this.
Windows 10.
Theano, TensorFlow and Keras: latest.
Thanks in advance.

Update your TensorFlow version with a nightly build:
https://github.com/tensorflow/tensorflow#installation
See this issue: https://github.com/carpedm20/DCGAN-tensorflow/issues/84

TensorFlow changed the keyword names for this function, and you are probably using an outdated version of either TensorFlow or Keras. Update both and you should be good to go.
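As a quick sanity check before and after upgrading, you can print the installed versions (a minimal sketch; only the two packages named above are involved):
import tensorflow as tf
import keras

# The 'labels' keyword for sigmoid_cross_entropy_with_logits was introduced
# in later TensorFlow releases, so a new Keras on an old TF raises this TypeError.
print(tf.__version__)
print(keras.__version__)
Then upgrade both, e.g. pip install -U tensorflow keras, and restart the Python kernel so the new versions are picked up.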

pip install -U tensorflow fixed the issue for me

Related

Keras.Sequential model throws "TypeError: Model.compile() missing 1 required positional argument: 'self'" when using model.compile()

I have been following some tutorials on TensorFlow and have run into a problem with my model that I can't find an answer to online.
I have this code that tries to create a convolutional neural network and then configure its compile settings:
cnn = keras.Sequential
([
    # convolutional layers
    layers.Conv2D(filters=32, kernel_size=(3,3), activation="relu", input_shape=(32,32,3)),
    layers.MaxPooling2D((2,2)),
    layers.Conv2D(filters=64, kernel_size=(3,3), activation="relu"),
    layers.MaxPooling2D((2,2)),
    # dense layers
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax")
])
cnn.compile(
    optimizer = "adam",
    loss = "sparse_categorical_crossentropy",
    metrics = ["accuracy"]
)
Whenever I run this code, the error
TypeError: Model.compile() missing 1 required positional argument: 'self'
gets thrown on the line cnn.compile(.
I have tried using the .compile code from another program that I know for sure works, but it still threw the error. So my guess is that there is a problem with how I create cnn. I have also looked into the debugger, and it shows that it recognizes that cnn has a .compile method.
Thank you in advance for the help!
Edit: changing the declaration from
cnn = keras.Sequential
([
to
CNN = keras.Sequential(
[
fixed the issue.
Apparently the parenthesis needs to be on the same line for the initializer to actually be called. Thank you xdurch0!
Thanks for the answer @xdurch0 and for the confirmation by @Lorvarz. (Posting the comment in the answer section for the benefit of the community.)
Whitespace and line breaks are important in Python. Your code does not do what you think it does. You are assigning cnn = keras.Sequential, and that's it.
changing the declaration from
cnn = keras.Sequential
([
to
CNN = keras.Sequential(
[
fixed the issue.
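For completeness, a runnable version of the fixed declaration, assembled from the question's own layers (it assumes from tensorflow import keras and from tensorflow.keras import layers):
from tensorflow import keras
from tensorflow.keras import layers

# The opening parenthesis sits on the same line as keras.Sequential,
# so the constructor is actually called rather than merely referenced.
cnn = keras.Sequential([
    layers.Conv2D(filters=32, kernel_size=(3,3), activation="relu", input_shape=(32,32,3)),
    layers.MaxPooling2D((2,2)),
    layers.Conv2D(filters=64, kernel_size=(3,3), activation="relu"),
    layers.MaxPooling2D((2,2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax")
])
cnn.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])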

How to fix "WARNING:tensorflow:@custom_gradient grad_fn has 'variables' in signature, but no ResourceVariables were used on the forward pass"?

I have been getting this warning when using model.fit() or even model.summary(). Using (on Windows 10):
Tensorflow 2.6.2
Tensorflow Probability 0.14
Keras 2.6
Also tested on Google Colab with tf==2.8, tfp==0.16, keras==2.8, without any change.
I tried to downgrade TensorFlow and TF-Probability as suggested here (for a different implementation), but it did not work. My model is below; however, I've had the same issue with different hidden layers and also when using the functional API:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
import tensorflow_probability as tfp

tfkl = tf.keras.layers
tfpl = tfp.layers
tfd = tfp.distributions

model = Sequential()
model.add(keras.Input(shape=(dataset.shape[1], 1)))
model.add(tfkl.Conv1D(128, kernel_size=dataset.shape[1], activation='relu'))
model.add(tfkl.Conv1D(16, kernel_size=1, activation='softplus'))
model.add(tfkl.Flatten())
model.add(tfkl.Dense(16, activation='softplus'))
model.add(tfkl.Dense(1, use_bias=True))
model.add(tfpl.DistributionLambda(
    lambda t: tfd.Chi2(df=abs(t[..., :1]))))
Note that I do not get the same warning when using tfd.Normal (only tfd.Chi2 and tfd.Gamma misbehave). Has anyone faced the same issue?
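For reference, a minimal sketch of a head that, per the note above, does not trigger the warning; the two-unit Dense and the softplus scale transform are my assumptions, not code from the question:
# Hypothetical variant: parameterize a Normal instead of a Chi2.
# Replaces the final Dense(1) and the Chi2 DistributionLambda above.
model.add(tfkl.Dense(2))
model.add(tfpl.DistributionLambda(
    lambda t: tfd.Normal(loc=t[..., :1],
                         scale=1e-3 + tf.math.softplus(t[..., 1:]))))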

TensorFlow Lite on Arduino Nano 33 BLE: Didn't find op for builtin opcode 'EXPAND_DIMS' version '1'

I used TensorFlow Lite 2.1.1-ALPHA-PRECOMPILED for the Arduino Nano 33 BLE with headers.
Import
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Dropout, Conv1D, MaxPooling1D
Model Definition
def get_model(n_timesteps, n_features, n_outputs):
    model = Sequential()
    model.add(Conv1D(filters=32, kernel_size=3, activation='relu', input_shape=(n_timesteps, n_features)))
    model.add(Conv1D(filters=32, kernel_size=3, activation='relu'))
    model.add(Dropout(0.5))
    model.add(MaxPooling1D(pool_size=2))
    model.add(Conv1D(filters=16, kernel_size=5, activation='relu'))
    model.add(MaxPooling1D(pool_size=2))
    model.add(Flatten())
    model.add(Dense(n_outputs, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy', tf.keras.metrics.Precision()])
    return model

model = get_model(128, 6, n_outputs=4)
Model Summary
The TF Lite converter works, but adds an ExpandDims operation
# Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)

def representative_dataset():
    for _, samp in enumerate(trainX):
        yield [samp.astype(np.float32).reshape(1, 128, 6)]

# Set the optimization flag.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Enforce integer-only quantization
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
# Provide a representative dataset to ensure we quantize correctly.
converter.representative_dataset = representative_dataset

model_tflite = converter.convert()

# Save the model to disk
open('default_tf/model0_1.tflite', "wb").write(model_tflite)
When I check the .tflite structure via Netron, I find that the ExpandDims operation is included, as shown in the following image.
I already tried to include
#include "tensorflow/lite/micro/all_ops_resolver.h"
in my sketch, but it did not resolve the problem. I also tried to include
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/kernels/micro_ops.h"
static tflite::MicroMutableOpResolver<1> micro_op_resolver;

void setup() {
    micro_op_resolver.AddExpandDims();
}
In this case I get an error:
micro_op_resolver.AddExpandDims();
^~~~~~~~~~~~~
exit status 1
'class tflite::MicroMutableOpResolver<1>' has no member named 'AddExpandDims'
I resolved the problem by downgrading the Python version of TensorFlow from 2.3.0 (where I trained the model) to 2.1.1. When converting a model to TensorFlow Lite, 2.1.1 uses a Reshape op, which was replaced in 2.3.0 by an ExpandDims op.
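A minimal sketch of that downgrade with pip (the version pin comes from the answer above; a fresh virtual environment is advisable):
pip install tensorflow==2.1.1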
We ran into the same problem but found a solution that avoids downgrading TensorFlow (so we can use the latest TensorFlow 2.4.1). The TensorFlow 1D operations, e.g., Conv1D, are specialized versions of their higher-dimensional counterparts, e.g., Conv2D. Therefore, you can "upgrade" your 1D operations without sacrificing accuracy. The TensorFlow Lite converter does the same thing, but replaces the Conv1D with a Conv2D AND an additional ExpandDims layer before it - hence the problem.
Here is the procedure:
Adjust the input_shape for your network by adding another dimension of size 1, e.g., input_shape=(3,3,1) instead of input_shape=(3,3)
Replace the 1D operations with their 2D counterpart (Conv1D to Conv2D, MaxPooling1D to MaxPooling2D)
Adjust the kernel_size for your Conv2D layers, e.g., kernel_size=(3,1) instead of kernel_size=3
Adjust the pool_size for the pooling layers (such as MaxPooling2D or GlobalAveragePooling2D), e.g., pool_size=(2,1) instead of pool_size=2
Retrain the network
Convert your model to Tensorflow Lite Micro
Your model after these adjustments:
def get_model(n_timesteps, n_features, n_outputs):
    model = Sequential()
    model.add(Conv2D(filters=32, kernel_size=(3,1), activation='relu', input_shape=(n_timesteps, n_features, 1)))
    model.add(Conv2D(filters=32, kernel_size=(3,1), activation='relu'))
    model.add(Dropout(0.5))
    model.add(MaxPooling2D(pool_size=(2,1)))
    model.add(Conv2D(filters=16, kernel_size=(5,1), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2,1)))
    model.add(Flatten())
    model.add(Dense(n_outputs, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy', tf.keras.metrics.Precision()])
    return model
Downsides
You have to adjust your training data to account for the extra dimension (see the sketch after this list)
You have to adjust the input data for your network whenever you run inference
You have to retrain your network
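A minimal sketch of the training-data adjustment mentioned above, assuming trainX is a NumPy array shaped (num_samples, n_timesteps, n_features) as in the question's representative dataset:
import numpy as np

# Append a trailing channel axis of size 1 so each sample matches the
# new input_shape=(n_timesteps, n_features, 1).
trainX = trainX[..., np.newaxis]   # e.g. (N, 128, 6) -> (N, 128, 6, 1)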
We tried to avoid the adjustments to our training data and inference inputs by adding a Reshape layer right after the Input layer of the model. Unfortunately, this led to another problem after pruning, int8 quantizing, and converting our model to TensorFlow Lite Micro. Without the Reshape layer everything works just fine.
I had a similar issue using the Arduino IDE. Installing the latest version of TFLite Micro resolved all issues: https://github.com/tensorflow/tflite-micro-arduino-examples

Keras Tutorial Error: NameError: name 'layers' is not defined

I am trying to follow this Keras tutorial, but I encounter the following error when I run the script with python3 test.py:
Traceback (most recent call last):
  File "test.py", line 13, in <module>
    layers.Dense(64, activation='sigmoid')
NameError: name 'layers' is not defined
My code is as follows:
import tensorflow as tf
from tensorflow import keras
model = keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(keras.layers.Dense(64, activation='relu'))
# Add another:
model.add(keras.layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(keras.layers.Dense(10, activation='softmax'))
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
Python version: 3.6.6
Operating System: MacOS High Sierra
I am also doing this all at the command line, inside the (tensorflow)$ environment.
What is wrong
First of all, Python is signalling that no object named layers is present within the scope of the script.
The actual mistake is that the code was copied out of TensorFlow's Keras documentation, where the second part of the code serves only to explain what is being instantiated within the model.add(...) calls.
So just drop all the lines that begin with layers, as they are only explanatory.
import tensorflow as tf
from tensorflow import keras
model = keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(keras.layers.Dense(64, activation='relu'))
# Add another:
model.add(keras.layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(keras.layers.Dense(10, activation='softmax'))
Further reading
You should consider learning more about Keras from the Keras documentation.
For me, importing layers with from tensorflow.keras import layers did the job.
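A minimal sketch of that alternative fix, i.e. the tutorial snippet with the extra import so the bare name layers resolves:
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

# The documentation-style lines now also run on their own:
layers.Dense(64, activation='sigmoid')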

Bus Error 10 from filter count (Keras/TensorFlow)

So I am using Keras with TensorFlow as the backend for an image classification problem. This is done in Python 3 on a Mac, using an Anaconda virtual environment.
Anyway the issue is this:
I am using a simple 3D convolutional neural net here:
convo_net = Sequential()
convo_net.add(Conv3D(32, kernel_size=(3,3,3), input_shape=(1, 21, 256, 176), data_format='channels_first'))
convo_net.add(MaxPooling3D(pool_size=(2, 2, 2)))
convo_net.add(Dropout(.5))
convo_net.add(Flatten())
convo_net.add(Dense(512, activation='relu'))
convo_net.add(Dropout(.5))
convo_net.add(Dense(256, activation='relu'))
convo_net.add(Dropout(.5))
convo_net.add(Dense(2, activation='softmax'))
convo_net.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
convo_net.fit(training_set, training_labels, epochs=1, batch_size=3)
scores = convo_net.evaluate(testing_set, testing_labels)
sc = scores[1]
print(sc*100)
And so the thing is, that network I listed above runs just fine.
HOWEVER,
If I change the number of filters used for convolution (the first parameter in the Conv3D declaration) to anything less than 32, I get this error (I haven't tried more than 32 filters, but that's beside the point, since I would like fewer filters to improve runtime):
Bus error: 10
And that's it, nothing else. I am absolutely certain nothing else (in the code, at least) is causing the error, as I have checked thoroughly. It is definitely caused by lowering the number of filters below 32. All of my packages and dependencies are up to date, too. I am at a complete loss as to why this error is happening and can't find anyone else having a similar issue!
