I have constructed, fitted, and saved the following model:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import preprocessing
from tensorflow.keras.models import Sequential
import config
from tensorflow.keras import applications
model = Sequential()
model.add(layers.Flatten(input_shape=input_shape.shape[1:]))
model.add(layers.Dense(100, activation=keras.layers.LeakyReLU(alpha=0.3)))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(50, activation=keras.layers.LeakyReLU(alpha=0.3)))
model.add(layers.Dropout(0.3))
model.add(layers.Dense(num_classes, activation='softmax'))
I am using the load_model function for evaluation, and I have not had any trouble up until now, but I am now getting the following error:
ValueError: Unknown activation function: LeakyReLU
Are there any syntactic changes to the architecture I should make, or is there a deeper issue here? Any advice would be appreciated; I have already tried setting some custom objects as described here: https://github.com/BBQuercus/deepBlink/issues/107
Edit:
My imports in the file where I am calling load_model are the following:
import config
import numpy as np
from tensorflow.keras.preprocessing.image import img_to_array, load_img
from models.create_image_model import make_vgg
import argparse
from tensorflow.keras.models import load_model
import time
from tensorflow import keras
from tensorflow.keras import layers
There seem to be some issues when saving & loading models with such "non-standard" activations, as implied also in this SO thread; the safest way would seem to be to rewrite your model with the LeakyReLU as a layer, and not as an activation:
model = Sequential()
model.add(layers.Flatten(input_shape=input_shape.shape[1:]))
model.add(layers.Dense(100)) # no activation here
model.add(layers.LeakyReLU(alpha=0.3)) # activation layer here instead
model.add(layers.Dropout(0.5))
model.add(layers.Dense(50)) # no activation here
model.add(layers.LeakyReLU(alpha=0.3)) # activation layer here instead
model.add(layers.Dropout(0.3))
model.add(layers.Dense(num_classes, activation='softmax'))
This is exactly equivalent to your own model, and more consistent with the design choices of Keras - which, for good or bad, includes LeakyReLU as a layer, and not as a standard activation function.
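For what it's worth, the layer-based version round-trips through save/load without any custom_objects; here is a minimal sketch (the (8, 8) input shape and num_classes=3 are placeholders, not your actual values):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential, load_model

num_classes = 3  # placeholder value
model = Sequential([
    keras.Input(shape=(8, 8)),  # placeholder input shape
    layers.Flatten(),
    layers.Dense(100),
    layers.LeakyReLU(alpha=0.3),
    layers.Dropout(0.5),
    layers.Dense(50),
    layers.LeakyReLU(alpha=0.3),
    layers.Dropout(0.3),
    layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.save('leaky_model.h5')

# LeakyReLU deserializes as a registered layer, so no custom_objects needed
reloaded = load_model('leaky_model.h5')
preds = reloaded.predict(np.zeros((1, 8, 8)))
print(preds.shape)  # (1, 3)
```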
Related
I have completed a Udacity nanodegree for NLP. I used the Udacity platform for the project, but I am now trying to use my own local machine to train models etc.
I've finally gotten my GPU/TensorFlow issues worked out (I think), but I'm running into some problems that I believe are related to the versions of TensorFlow that Udacity was using.
I am currently using TensorFlow 2.2.
Specifically, I am getting an error from a validation step the project uses to print the loss function.
def _test_model(model, input_shape, output_sequence_length, french_vocab_size):
    if isinstance(model, Sequential):
        model = model.model
    print(model.loss_functions)
When this is called I get the "'Model' object has no attribute 'loss_functions'" error.
The model is built with the below code.
def simple_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
    """
    Build and train a basic RNN on x and y
    :param input_shape: Tuple of input shape
    :param output_sequence_length: Length of output sequence
    :param english_vocab_size: Number of unique English words in the dataset
    :param french_vocab_size: Number of unique French words in the dataset
    :return: Keras model built, but not trained
    """
    # TODO: Build the layers
    learning_rate = 0.01

    # Configure the model
    inputs = Input(shape=input_shape[1:])
    hidden_layer = GRU(output_sequence_length, return_sequences=True)(inputs)
    outputs = TimeDistributed(Dense(french_vocab_size, activation='softmax'))(hidden_layer)

    # Create the model from the layers defined above
    model = keras.Model(inputs=inputs, outputs=outputs)
    loss_fn = keras.losses.SparseCategoricalCrossentropy()
    model.compile(loss=loss_fn, optimizer=Adam(learning_rate), metrics=['accuracy'])
    return model
I am using the below libraries along the way
import tensorflow as tf
from tensorflow.keras.losses import sparse_categorical_crossentropy
from tensorflow.keras.optimizers import Adam
from tensorflow import keras
import collections
import helper
import numpy as np
import project_tests as tests
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import GRU, Input, Dense, TimeDistributed, Activation, RepeatVector, Bidirectional, Dropout
from tensorflow.keras.layers import Embedding
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import sparse_categorical_crossentropy
I can just comment out this check for the loss functions, but I would really like to understand what happened.
Thanks
I think the API changed in TensorFlow 2; does the following work:
model.compiled_loss._get_loss_object(model.compiled_loss._losses).fn
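Since the compiled_loss internals are private and shift between releases, a version-tolerant probe may be a safer sketch (the attribute names checked here are assumptions that vary by TF/Keras version):

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense

inputs = keras.Input(shape=(4,))
outputs = Dense(2, activation='softmax')(inputs)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(loss=keras.losses.SparseCategoricalCrossentropy(),
              optimizer='adam', metrics=['accuracy'])

# The TF 1.x attribute the notebook relied on is gone in TF 2.2+
print(hasattr(model, 'loss_functions'))  # False

# Probe the attributes newer versions use instead of loss_functions
compiled = getattr(model, 'loss', None) or getattr(model, 'compiled_loss', None)
print(type(compiled).__name__)
```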
I am trying to save my ANN model using SavedModel format. The command that I used was:
model.save("my_model")
It is supposed to give me a folder named "my_model" that contains saved_model.pb, the variables directory, and assets; instead it gives me an HDF5 file named my_model. I am using Keras v2.3.1 and TensorFlow v2.2.0.
Here is a bit of my code:
from keras import optimizers
from keras import backend
from keras.models import Sequential
from keras.layers import Dense
from keras.activations import relu,tanh,sigmoid
network_layout = []
for i in range(3):
    network_layout.append(8)

model = Sequential()

# Adding input layer and first hidden layer
model.add(Dense(network_layout[0],
                name="Input",
                input_dim=inputdim,
                kernel_initializer='he_normal',
                activation=activation))

# Adding the rest of the hidden layers
for numneurons in network_layout[1:]:
    model.add(Dense(numneurons,
                    kernel_initializer='he_normal',
                    activation=activation))

# Adding the output layer
model.add(Dense(outputdim,
                name="Output",
                kernel_initializer="he_normal",
                activation="relu"))

# Compiling the model
model.compile(optimizer=opt, loss='mse', metrics=['mse', 'mae', 'mape'])
model.summary()

# Training the model
history = model.fit(x=Xtrain, y=ytrain, validation_data=(Xtest, ytest), batch_size=32, epochs=epochs)
model.save('my_model')
I have read the API documentation on the TensorFlow website and did what it said, using model.save("my_model") without any file extension, but I can't get it right.
Your help will be very appreciated. Thanks a bunch!
If you would like to use the TensorFlow SavedModel format, then use:
tms_model = tf.saved_model.save(model, "export/1")
This will create a folder export with a subfolder 1 inside it. Inside the 1 folder you can see the assets, variables, and the .pb file.
Hope this will help you out.
Make sure to change your imports like this
from tensorflow.keras import optimizers
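As a sketch of the tf.saved_model route above (the layer sizes and the 4-input/1-output shape here are stand-ins for the question's inputdim and outputdim):

```python
import os
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# Stand-in network; fixed sizes replace the question's inputdim/outputdim
model = Sequential([
    tf.keras.Input(shape=(4,)),
    Dense(8, kernel_initializer='he_normal', activation='relu'),
    Dense(1, kernel_initializer='he_normal', activation='relu'),
])
model.compile(optimizer='adam', loss='mse')

# tf.saved_model.save always writes the SavedModel directory layout
tf.saved_model.save(model, 'export/1')
print(sorted(os.listdir('export/1')))  # includes 'saved_model.pb' and 'variables'
```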
Can you help me with this error?
TypeError: The added layer must be an instance of class Layer. Found: <keras.layers.convolutional.Conv2DTranspose object at 0x7f5dc629f240>
I get this when I try to execute the following line
decoder.add(Deconvolution2D(64, 3, 3, subsample=(1, 1), border_mode='same'))
My imports are:
from keras.layers import Layer
from keras.layers import Input
from keras.layers.convolutional import Deconvolution2D
According to Installing Keras: to use Keras, you need to have the TensorFlow package installed. Once TensorFlow is installed, import Keras as shown below:
from tensorflow import keras
The Deconvolution2D layer has since been renamed Conv2DTranspose. You can now import the layers as shown below:
from tensorflow.keras.layers import Input, Conv2DTranspose
For more information you can refer here
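For illustration, the old Keras 1 call maps onto Conv2DTranspose like this (the (8, 8, 16) input shape is just an assumed placeholder):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2DTranspose

# Keras 1 -> tf.keras argument names:
#   subsample=(1, 1)   ->  strides=(1, 1)
#   border_mode='same' ->  padding='same'
decoder = Sequential([
    tf.keras.Input(shape=(8, 8, 16)),  # placeholder input shape
    Conv2DTranspose(64, (3, 3), strides=(1, 1), padding='same'),
])

# With unit strides and 'same' padding, spatial dims are preserved
out = decoder.predict(np.zeros((1, 8, 8, 16)))
print(out.shape)  # (1, 8, 8, 64)
```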
I have issues with saving a sequential model produced by Keras to SavedModel format.
As described in https://www.tensorflow.org/guide/keras/save_and_serialize#export_to_savedmodel, to save a Keras model in a format usable by TensorFlow, I need to call model.save() with save_format='tf', but what I get is:
Traceback (most recent call last):
File "load_file2.py", line 14, in <module>
classifier.save('/tmp/keras-model.pb', save_format='tf')
My code example is:
import pandas as pd
import tensorflow as tf
import keras
from keras import Sequential
from keras.layers import Dense
import json
import numpy as np
classifier = Sequential()
classifier.add(Dense(4, activation='relu', kernel_initializer='random_normal', input_dim=4))
classifier.add(Dense(1, activation='sigmoid', kernel_initializer='random_normal'))
classifier.compile(optimizer ='adam',loss='binary_crossentropy', metrics = ['accuracy'])
classifier.save('/tmp/keras-model.pb', save_format='tf')
My python is 3.6.10.
My tensorflow is 1.14 and 2.0 (I tested on both, my result is the same).
My keras is 2.3.1.
What is wrong there or what should I change to make my model saved and then used by tensorflow?
Or, maybe, there is another way of saving models from Keras with TensorFlow2 as backend?
Thanks.
I ran your code. With TensorFlow 1.15 I got a TypeError saying save_format is not a known parameter. With TensorFlow 2 I got the suggestion to use tf.keras instead of native keras. So I tried tf.keras instead of keras, and this time the code ran with no error.
Also, I don't see a fit call before saving the model.
With TF2.0:
import pandas as pd
import tensorflow as tf
# Change: use tf.keras instead of native keras
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
import json
import numpy as np
classifier = Sequential()
classifier.add(Dense(4, activation='relu', kernel_initializer='random_normal', input_dim=4))
classifier.add(Dense(1, activation='sigmoid', kernel_initializer='random_normal'))
classifier.compile(optimizer ='adam',loss='binary_crossentropy', metrics = ['accuracy'])
classifier.save('/tmp/keras-model.pb', save_format='tf')
Result:
INFO:tensorflow:Assets written to: /tmp/keras-model.pb/assets
I'm training models with a genetic algorithm, and the fitness function first builds the model with certain parameters, and then runs inference on the dataset with that model.
Obviously, genetic algorithms are very parallelizable, but I am running into problems loading and running multiple models at once. I am using Python's multiprocessing library and running the function that loads the model and runs inference using the Pool method. I get an error when doing this:
Blas GEMM launch failed : a.shape=(32, 50), b.shape=(50, 200), m=32, n=200, k=50
[[{{node lstm_1/while/body/_1/MatMul_1}}]] [Op:__inference_keras_scratch_graph_494]
Function call stack:
keras_scratch_graph
Not sure what's happening here, but the error isn't thrown when the models are not parallelized.
Any help is super appreciated.
Here is the code:
import tensorflow as tf
from keras import regularizers
from keras.optimizers import SGD
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Flatten, ELU, LSTM, PReLU, GRU, CuDNNGRU, CuDNNLSTM
from keras.callbacks import EarlyStopping, ModelCheckpoint
import numpy as np
from sklearn import preprocessing
from sklearn.preprocessing import minmax_scale
import matplotlib.pyplot as plt
import math, random, copy, pickle, time, statistics
import pandas as pd
import multiprocessing as mp
def buildModel(windowSize):
    # model to use
    sample_model = Sequential()
    sample_model.add(CuDNNLSTM(50, input_shape=(windowSize, 5), return_sequences=False))
    sample_model.add(Dense(50, activation="relu"))
    sample_model.add(Dense(3, activation="softmax"))
    sample_model.compile(optimizer="adam", loss="categorical_crossentropy")
    sample_model.build()

    # record weight and bias shapes
    modelShape = []
    for i in range(len(sample_model.layers)):
        layerShape = []
        for x in range(len(sample_model.layers[i].get_weights())):
            layerShape.append(sample_model.layers[i].get_weights()[x].shape)
        modelShape.append(layerShape)
    return (sample_model, modelShape)
model, modelShape = buildModel(120)

pool = mp.Pool(mp.cpu_count())
results = [pool.apply(model.predict,
                      args=(np.array(features[x]),),
                      kwds={'batch_size': len(features[x])})
           for x in range(len(features))]
pool.close()
The features I'm using aren't important; they could all be lists of 120 random numbers or something. I didn't include the actual features because they are huge and come from a really big file.
I just want to be able to run model.predict inside pool.apply() so that I can run multiple predictions concurrently.