I'm trying to do a simple deep learning task to learn how to use TensorFlow (and especially its Dataset tool). The task is the following: training a model which can tell whether the sum of a given fixed-length sequence of floats is positive (labelled 1) or negative (labelled 0).
I did the following without using tf.data.Dataset and it works well.
import random as rand
import numpy as np
import tensorflow as tf

def get_rand_seq():
    return [rand.uniform(-1, 1) for _ in range(6)]
n = 1000
X = np.array([get_rand_seq() for _ in range(n)])
y = np.array([0 if sum(seq) < 0 else 1 for seq in X])
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(16, input_shape=(6, ), activation='relu'))
model.add(tf.keras.layers.Dense(4, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
model.fit(X, y, epochs=10, batch_size=4)
Still, when I try to do the same using a tf.data.Dataset input, I get an error at the training step model.fit(...).
Here is my code:
ds_X = tf.data.Dataset.from_tensor_slices(X)
ds_y = tf.data.Dataset.from_tensor_slices(y)
ds = tf.data.Dataset.zip((ds_X, ds_y))
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(16, input_shape=(6, ), activation='relu'))
model.add(tf.keras.layers.Dense(4, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
model.fit(ds, epochs=10, batch_size=4)
I get the following error:
ValueError: Input 0 of layer sequential_5 is incompatible with the layer: expected axis -1 of input shape to have value 6 but received input with shape [6, 1]
Even changing the input_shape to (6, 1) doesn't make it work.
Is there a kind soul to enlighten a lost sheep like me?
Do not use the batch_size argument in model.fit when passing a tf.data.Dataset; you should act on the dataset itself. Remember that dataset operations such as batching and shuffling do not change the dataset in place: they return a copy of the dataset with the new properties, so the variable has to be overwritten.
Also, there is no need to create two distinct datasets and zip them. You can provide a tuple directly to the factory method tf.data.Dataset.from_tensor_slices.
import tensorflow as tf
import numpy as np
def get_rand_seq():
    return [np.random.uniform(-1, 1) for _ in range(6)]
n = 1000
X = np.array([get_rand_seq() for _ in range(n)])
y = np.array([0 if sum(seq) < 0 else 1 for seq in X])
ds = tf.data.Dataset.from_tensor_slices((X, y)).batch(4)
# equivalent is
# ds = tf.data.Dataset.from_tensor_slices((X, y))
# ds = ds.batch(4) # not in-place
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(16, input_shape=(6, ), activation='relu'))
model.add(tf.keras.layers.Dense(4, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
model.fit(ds, epochs=1000)
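As a side note, if you also want the shuffling mentioned above, the same not-in-place rule applies; a minimal sketch (buffer size chosen arbitrarily):
ds = tf.data.Dataset.from_tensor_slices((X, y))
ds = ds.shuffle(buffer_size=n).batch(4)  # both calls return new datasets, so reassign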
Related
It is my first day with TensorFlow and Keras. I followed a quick tutorial which worked fine, but it left me with a lot of questions.
Can someone show me how to get two data inputs instead of one?
import keras
import numpy as np
model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd',loss='mean_squared_error')
xs = np.array([1,2,3,4,5,6,7], dtype=int) # input data 1
ys = np.array([8,11,14,17,20,23,26], dtype=int)
# formula is: 3*x + 5
model.fit(xs, ys, epochs=500)
print(model.predict([10.0]))
Add a few hidden layers for feature detection. If you want multiple features, then you will need to change the shape of X and the input shape:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from tensorflow.keras import Sequential, layers
from tensorflow.keras.layers import Dense

X = np.array([1, 2, 3, 4, 5, 6, 7], dtype=int)
y = list(map(lambda x: 3 * x + 5, X.tolist()))  # formula is: 3*x + 5
plt.plot(X, y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
model = Sequential()
model.add(layers.Input(shape=(1,), name='main_input'))
model.add(Dense(200, activation='tanh'))
model.add(Dense(100, activation='tanh'))
model.add(Dense(32, activation='tanh'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse', metrics=['mse'])
history = model.fit(X_train, y_train, epochs=1000, verbose=0)
predictionResults = model.predict(X_test)
results = predictionResults.flatten()
for index, value in enumerate(X_test):
    plt.scatter(value, results[index])
plt.plot(X, y)
plt.show()
Going off of Golden's answer, here is an example of adding another feature "x". You should just mess around with the layers and sizes.
import keras
import numpy as np
xs = np.array([1,2,3,4,5,6,7], dtype=int) # input data 1
x = np.array([3,5,7,9,11,13,15], dtype=int) # input data 2
ys = np.array([3,10,21,36,55,78,105], dtype=int)
# formula is: xs * x
input_data = np.array([xs, x]).T  # shape (7, 2): one row per sample, two features
model = keras.models.Sequential()
model.add(keras.Input(shape=input_data.shape[1:]))
model.add(keras.layers.Dense(500, activation='tanh'))
model.add(keras.layers.Dense(200, activation='tanh'))
model.add(keras.layers.Dense(1))
model.compile(optimizer='adam', loss='mse', metrics=['mse'])
model.fit(input_data, ys, epochs=500)
print(model(np.array([[10, 21]])).numpy())
See https://www.pyimagesearch.com/2019/02/04/keras-multiple-inputs-and-mixed-data/ for how to create a simple feedforward neural network with 10 inputs:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(8, input_shape=(10,), activation="relu"))
model.add(Dense(4, activation="relu"))
model.add(Dense(1, activation="linear"))
This network is a simple feedforward neural network with 10 inputs, a first hidden layer with 8 nodes, a second hidden layer with 4 nodes, and a final output layer used for regression.
Keras also allows you to create multiple networks, each with its own inputs, and concatenate them into a dense layer with one or more outputs.
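As a hedged sketch of that idea (layer sizes and names here are illustrative, not taken from the linked tutorial), the functional API lets you merge two input branches with Concatenate:
from tensorflow.keras.layers import Input, Dense, Concatenate
from tensorflow.keras.models import Model

# two branches, each with its own input
input_a = Input(shape=(10,))
branch_a = Dense(8, activation="relu")(input_a)
input_b = Input(shape=(4,))
branch_b = Dense(8, activation="relu")(input_b)

# merge the branches into one regression head
merged = Concatenate()([branch_a, branch_b])
output = Dense(1, activation="linear")(merged)
model = Model(inputs=[input_a, input_b], outputs=output)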
I changed the data type, but I could not resolve the error.
I tried one-hot encoding, but it doesn't work either.
I don't know what's wrong :(
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from keras.models import Sequential
from keras.layers import Dense

seed = 0
np.random.seed(seed)
tf.set_random_seed(seed)
df = pd.read_csv('HW01_dataset_tae.txt', sep=',' ,header=None, names = ["Native", "Instructor", "Course", "Semester", "Class Size", "Evaluation"])
dataset = df.values # dataframe to int64
X = dataset[:,0:5] # attribute
Y_Eva = dataset[:,5] # class
e = LabelEncoder()
e.fit(Y_Eva)
Y = e.transform(Y_Eva)
K = 10
kFold = StratifiedKFold(n_splits=K, shuffle=True, random_state=seed)
accuracy = []
for train_index, test_index in kFold.split(X, Y):
    model = Sequential()
    model.add(Dense(16, input_dim=5, activation='relu'))
    model.add(Dense(10, activation='relu'))
    model.add(Dense(3, activation='softmax'))
    model.compile(loss='mean_squared_error',
                  optimizer='adam',
                  metrics=['accuracy'])
    model.fit(X[train_index], Y[train_index], epochs=100, batch_size=2)
The error, "Error when checking target: expected dense_2 to have shape (3,) but got array with shape (1,)",
is raised here: model.fit(X[train_index], Y[train_index], epochs=100, batch_size=2).
What should I do?
I solved the problem.
In this code,
model.fit(X[train_index], Y[train_index], epochs=100, batch_size=2)
each label in Y[train_index] must be a one-hot vector of length three, because there are three classes.
The error came up because each entry of Y[train_index] was a single integer rather than a vector of length three.
So, I used One-Hot Encoding and changed the code like this.
e = LabelEncoder()
e.fit(Y_Eva)
Y = e.transform(Y_Eva)
from keras.utils import np_utils
Y_encoded = np_utils.to_categorical(Y)  # changed code
K = 10
kFold = StratifiedKFold(n_splits=K, shuffle=True, random_state=seed)
accuracy = []
for train_index, test_index in kFold.split(X, Y):
    model = Sequential()
    model.add(Dense(32, input_dim=5, activation='relu'))
    model.add(Dense(16, activation='relu'))
    model.add(Dense(3, activation='softmax'))
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])
    model.fit(X[train_index], Y_encoded[train_index], epochs=100, batch_size=2)  # changed code
Finally, I was able to run the code.
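As an aside (an alternative not used above), sparse_categorical_crossentropy accepts the integer labels Y directly, so the one-hot step could be skipped:
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X[train_index], Y[train_index], epochs=100, batch_size=2)  # integer labels, no to_categorical needed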
TensorFlow has documentation on the Dense layer, and if you say input_shape instead of input_dim, you can specify the preferred shape.
https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense
model = Sequential()
model.add(Dense(16, input_shape=(5,))) # Then your data has to be of shape (batch x 5)
When you then add another dense layer, you actually don't have to provide the input_shape:
model.add(Dense(10))
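For reference, a minimal sketch (layer size reused from above; this describes the tf.keras 2.x behaviour) showing that input_dim=5 and input_shape=(5,) declare the same input:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model_a = Sequential([Dense(16, input_dim=5, activation='relu')])        # via input_dim
model_b = Sequential([Dense(16, input_shape=(5,), activation='relu')])   # via input_shape
print(model_a.input_shape, model_b.input_shape)  # both (None, 5)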
I'm trying to build a 1D CNN model by processing ECG signals to diagnose sleep apnea.
I am using the sklearn library and encountered an error in train_test_split. Here is my code:
import csv
import numpy as np
from numpy import dstack
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Dropout, Flatten, Dense

lis, data = [], []
# loading the file
with open("ApneaData.csv") as csvDataFile:
    csvReader = csv.reader(csvDataFile)
    for line in csvReader:
        lis.append(line[0].split())  # create a list of lists
# making a list of all x-variables
for i in range(1, len(lis)):
    data.append(list(map(int, lis[i])))
# a list of all y-variables (either 0 or 1)
target = Extract(data) # sleep apn or not
# converting to numpy arrays
data = np.array(data)
target = np.array(target)
# stacking data into 3D
loaded = dstack(data)
change = dstack(target)
trainX, testX, trainy, testy = train_test_split(loaded, change, test_size=0.3)
# the model
verbose, epochs, batch_size = 0, 10, 32
n_timesteps, n_features, n_outputs = trainX.shape[0], trainX.shape[1], trainy.shape[0]
model = Sequential()
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(n_timesteps,n_features)))
model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(n_outputs, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# fitting the model
model.fit(trainX, trainy, epochs=epochs, batch_size=batch_size, verbose=verbose)
# evaluate model
_, accuracy = model.evaluate(testX, testy, batch_size=batch_size, verbose=0)
I get the error:
ValueError: Error when checking input: expected conv1d_15_input to have shape (11627, 6001) but got array with shape (6001, 1)
I don't understand what I'm doing wrong. Any help would be much appreciated.
I think that n_timesteps and n_features should be shape[1] and shape[2]; the first dimension is your number of samples.
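In code, that would be (a sketch; using trainy.shape[1] for n_outputs assumes the labels are one-hot encoded):
n_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], trainy.shape[1]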
First,
# a list of all y-variables (either 0 or 1)
target = Extract(data) # sleep apn or not
This suggests you're doing binary classification, and it seems you haven't applied one-hot encoding. So your last layer should be sigmoid.
Second, the first dimension denotes the number of samples. So:
trainX = trainX.reshape(trainX.shape[0], trainX.shape[1], -1)
(this adds a third dimension if it isn't there already)
n_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], 1
Finally, change your model.
model.add(Dense(n_outputs, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
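Putting those pieces together, a minimal sketch of the revised pipeline (assuming trainX and testX start out 2-D with shape (samples, timesteps)):
trainX = trainX.reshape(trainX.shape[0], trainX.shape[1], -1)  # add a channel dimension
testX = testX.reshape(testX.shape[0], testX.shape[1], -1)
n_timesteps, n_features, n_outputs = trainX.shape[1], trainX.shape[2], 1

model = Sequential()
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(n_timesteps, n_features)))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(n_outputs, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])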
I want to create a neural network in Python, using Keras, that tells whether a number is even or odd. I know that this can be done in many ways and that using a NN for it is overkill, but I want to do it for educational purposes.
I'm running into an issue: the accuracy of my model is about 50%, which means that it's unable to tell whether a number is even or odd.
I'll detail the steps that I went through, and hopefully we'll find a solution together :)
Step one: creation of the data and labels.
Basically my data are the numbers from 0 to 99 (in binary) and the labels are 0 (odd) and 1 (even).
data = []
labels = []
for i in range(100):
    string = np.binary_repr(i, 8)
    array = []
    for k in string:
        array.append(int(k))
    data.append(np.array(array))
    labels.append(-1 * (i % 2 - 1))
Then I'm creating the model, which is made of 3 layers:
- Layer 1 (input): one neuron that takes any numpy array of size 8 (the 8-bit representation of integers)
- Layer 2 (hidden): two neurons
- Layer 3 (output): one neuron
# creating a model
model = Sequential()
model.add(Dense(1, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(2, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
Then I'm compiling the model using binary_crossentropy as the loss function, since I want a binary classification of integers:
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
Then I'm training the model and evaluating it:
#training
model.fit(data, labels, epochs=10, batch_size=2)
#evaluate the model
scores = model.evaluate(data, labels)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
And that's where I'm lost, because of that 50% accuracy.
I think I misunderstood something about NNs or the Keras implementation, so any help would be appreciated.
Thank you for reading
Edit: I modified my code according to Stefan Falk's comment.
The following gives me an accuracy on the test set of 100%:
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# Number of samples (digits from 0 to N-1)
N = 10000
# Input size depends on the number of digits
input_size = int(np.log2(N)) + 1
# Generate data
y = list()
X = list()
for i in range(N):
    binary_string = np.binary_repr(i, input_size)
    array = np.zeros(input_size)
    for j, binary in enumerate(binary_string):
        array[j] = int(binary)
    X.append(array)
    y.append(int(i % 2 == 0))
X = np.asarray(X)
y = np.asarray(y)
# Make train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)
# Create the model
model = Sequential()
model.add(Dense(2, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
# Train
model.fit(X_train, y_train, epochs=3, batch_size=10)
# Evaluate
print("Evaluating model:")
scores = model.evaluate(X_test, y_test)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
Why does it work that well?
Your problem is very simple. The network only needs to know whether the last bit (the least significant one) is set (1) or not (0). For this you actually don't need a hidden layer or any non-linearities; the problem can be solved with simple linear regression.
This
model = Sequential()
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
will do the job as well. Further, on the topic of feature engineering,
X = [v % 2 for v in range(N)]
is also enough. You'll see that X in that case will have the same content as y.
Maybe try a non-linear example such as XOR. Note that we do not have a test set here, because there's nothing to generalize to, nor any "unseen" data that may surprise the network.
import numpy as np
from tensorflow.python.keras import Sequential
from tensorflow.python.keras.layers import Dense
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])
model = Sequential()
model.add(Dense(5, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, batch_size=1, epochs=1000)
probs = model.predict(X)
print(probs)
print(probs > 0.5)
Look at this link and play around with the example.
I am trying to implement a system that encodes inputs using a CNN. After the CNN, I need to get a vector and use it in another deep learning method.
def get_input_representation(self):
    # get word vectors from embedding
    inputs = tf.nn.embedding_lookup(self.embeddings, self.input_placeholder)

    sequence_length = inputs.shape[1]  # 56
    vocabulary_size = 160  # 18765
    embedding_dim = 256
    filter_sizes = [3, 4, 5]
    num_filters = 3
    drop = 0.5
    epochs = 10
    batch_size = 30

    # this returns a tensor
    print("Creating Model...")
    inputs = Input(shape=(sequence_length,), dtype='int32')
    embedding = Embedding(input_dim=vocabulary_size, output_dim=embedding_dim, input_length=sequence_length)(inputs)
    reshape = Reshape((sequence_length, embedding_dim, 1))(embedding)

    conv_0 = Conv2D(num_filters, kernel_size=(filter_sizes[0], embedding_dim), padding='valid', kernel_initializer='normal', activation='relu')(reshape)
    conv_1 = Conv2D(num_filters, kernel_size=(filter_sizes[1], embedding_dim), padding='valid', kernel_initializer='normal', activation='relu')(reshape)
    conv_2 = Conv2D(num_filters, kernel_size=(filter_sizes[2], embedding_dim), padding='valid', kernel_initializer='normal', activation='relu')(reshape)

    maxpool_0 = MaxPool2D(pool_size=(sequence_length - filter_sizes[0] + 1, 1), strides=(1, 1), padding='valid')(conv_0)
    maxpool_1 = MaxPool2D(pool_size=(sequence_length - filter_sizes[1] + 1, 1), strides=(1, 1), padding='valid')(conv_1)
    maxpool_2 = MaxPool2D(pool_size=(sequence_length - filter_sizes[2] + 1, 1), strides=(1, 1), padding='valid')(conv_2)

    concatenated_tensor = Concatenate(axis=1)([maxpool_0, maxpool_1, maxpool_2])
    flatten = Flatten()(concatenated_tensor)
    dropout = Dropout(drop)(flatten)
    output = Dense(units=2, activation='softmax')(dropout)

    model = Model(inputs=inputs, outputs=output)
    adam = Adam(lr=1e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
    model.compile(optimizer=adam, loss='binary_crossentropy', metrics=['accuracy'])
print("Traning Model...")
model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, callbacks=[checkpoint], validation_data=(X_test, y_test)) # starts training
return ??
The above code trains the model using X_train and y_train and then tests it. However, in my system I do not have Y_train or Y_test; I only need the vector from the last hidden layer, before the softmax layer. How can I obtain it?
For that you can define a backend function to get the output of arbitrary layer(s):
from keras import backend as K
func = K.function([model.input], [model.layers[index_of_layer].output])
You can find the index of your desired layer using model.summary(), where the layers are listed starting from index zero. If you need the layer before the last one, you can use -2 as the index (the .layers attribute is actually a list, so you can index it like any list in Python). Then you can use the function you have defined by passing a list of input array(s):
outputs = func(inputs)
Alternatively, you can define a model for this purpose. This has been covered in the Keras documentation more thoroughly, so I advise you to read that.
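For instance, a minimal sketch of that alternative (assuming, as above, that the vector you want is the output of the layer at index -2, i.e. the Dropout layer just before the softmax):
from keras.models import Model

feature_extractor = Model(inputs=model.input, outputs=model.layers[-2].output)
features = feature_extractor.predict(X_train)  # vectors from the last hidden layer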