Not able to train a simple Char RNN - python

I have been working on a vanilla char RNN in TensorFlow. I am not able to get it to produce anything sensible even after training it for a couple of hours. The code is a TensorFlow version of the Keras code from Chollet's Deep Learning with Python (GitHub).
I tried playing around with the hyperparameters without much success. Chollet mentions in the book that the model produced good output after 80 epochs. I haven't been able to get anything reasonable after 50K+ epochs :( Curious if there is something I missed while converting this code to TensorFlow.
n_layers = 1
num_units = 128
batch_size = 150
X = tf.placeholder(tf.float32, [None, maxlen, len(unique_chars)], name="Placeholder_X")
y = tf.placeholder(tf.int64, [None, len(unique_chars)], name="Placeholder_Y")
lstm_cells = [tf.contrib.rnn.BasicLSTMCell(num_units=num_units) for layer in range(n_layers)]
multi_cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
outputs, current_state = tf.nn.dynamic_rnn(multi_cell, X, dtype=tf.float32)
top_layer_h_state = current_state[-1][1]
logits = tf.layers.dense(top_layer_h_state, len(unique_chars), name="softmax")
xentropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)
loss = tf.reduce_mean(xentropy, name="loss")
optimizer = tf.train.RMSPropOptimizer(learning_rate=0.001)
training_op = optimizer.minimize(loss)
pred = tf.nn.softmax(logits)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
Sampling Code:
with tf.Session() as sess:
    init.run()
    saver.restore(sess, model_name)
    # Output some data
    start_index = random.randint(0, len(text) - maxlen - 1)
    generated_text = text[start_index: start_index + maxlen]
    print("Seed: ", generated_text)
    final_string = ""
    sampled = np.zeros((1, maxlen, len(unique_chars)))
    for i in range(50):
        for t, char in enumerate(generated_text):
            sampled[0, t, char_to_idx[char]] = 1.
        preds_eval = sess.run([pred], feed_dict={X: sampled})
        preds = preds_eval[0][0]
        next_index = sample(preds, 0.5)
        next_char = unique_chars[next_index]
        generated_text += next_char
        final_string += next_char
        generated_text = generated_text[1:]
    print("New String: ", final_string)
Sample Input Seed: is,
as is generally acknowledged nowadays, no better sopori
Input generation:
maxlen = 60
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
    sentences.append(text[i:i + maxlen])
    next_chars.append(text[i + maxlen])
unique_chars = sorted(list(set(text)))
char_to_idx = dict((char, unique_chars.index(char)) for char in unique_chars)
data_X = np.zeros((len(sentences), maxlen, len(unique_chars)), dtype=np.float32)
data_Y = np.zeros((len(sentences), len(unique_chars)), dtype=np.int64)
for idx, sentence in enumerate(sentences):
    for t, char in enumerate(sentence):
        data_X[idx, t, char_to_idx[char]] = 1
    data_Y[idx, char_to_idx[next_chars[idx]]] = 1
Output from the model: vatsoéätlæéättire

It looks like you are trying to build a language model. I didn't read your entire code carefully; just from the first part I noticed a couple of things. Why is your placeholder for X of type tf.float32 instead of an integer type? More importantly, why is the shape of y equal to batch size by vocab size? It should be batch_size by maxlen - 1 by vocab_size. In a language model you are trying to predict the next character at every step; reading a whole sequence of characters and then predicting just one more at the end is not a good way to train it.
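As a rough sketch of that suggestion (TF1-style, reusing multi_cell, X, maxlen and unique_chars from the question; the Y_seq placeholder is hypothetical), you can apply the dense layer to every time step of the RNN outputs and average the loss over all positions:
outputs, _ = tf.nn.dynamic_rnn(multi_cell, X, dtype=tf.float32)   # [batch, maxlen, num_units]
logits_seq = tf.layers.dense(outputs, len(unique_chars))          # [batch, maxlen, vocab]
Y_seq = tf.placeholder(tf.int64, [None, maxlen], name="Y_seq")    # index of the next char at every step
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=Y_seq, logits=logits_seq)
loss = tf.reduce_mean(xentropy)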

Related

Out of memory error when trying to train a character-level recurrent sequence-to-sequence model with a 2 million sample dataset

First, I'm a newbie, and I'm trying to train my character-level recurrent sequence-to-sequence model on 2 million tab-separated sentence pairs; each time I get an out-of-memory error. How can I avoid this? Here's my commented code so that you can better understand why this is happening:
import numpy as np
import tensorflow as tf
from tensorflow import keras
batch_size = 64 # Batch size for training.
epochs = 100 # Number of epochs to train for.
latent_dim = 256 # Latent dimensionality of the encoding space.
num_samples = 2000000 # Number of samples to train on.
# Path to the data txt file on disk.
data_path = "es.txt" #tab separated file with english and spanish sentences
# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
with open(data_path, "r", encoding="utf-8") as f:
    lines = f.read().split("\n")
for line in lines[: min(num_samples, len(lines) - 1)]:
    input_text, target_text, _ = line.split("\t")
    # We use "tab" as the "start sequence" character
    # for the targets, and "\n" as "end sequence" character.
    target_text = "\t" + target_text + "\n"
    input_texts.append(input_text)
    target_texts.append(target_text)
    for char in input_text:
        if char not in input_characters:
            input_characters.add(char)
    for char in target_text:
        if char not in target_characters:
            target_characters.add(char)
input_characters = sorted(list(input_characters))
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters)
num_decoder_tokens = len(target_characters)
max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])
print("Number of samples:", len(input_texts))
print("Number of unique input tokens:", num_encoder_tokens)
print("Number of unique output tokens:", num_decoder_tokens)
print("Max sequence length for inputs:", max_encoder_seq_length)
print("Max sequence length for outputs:", max_decoder_seq_length)
input_token_index = dict([(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict([(char, i) for i, char in enumerate(target_characters)])
encoder_input_data = np.zeros(
    (len(input_texts), max_encoder_seq_length, num_encoder_tokens), dtype="float32"
)
decoder_input_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype="float32"
)
decoder_target_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens), dtype="float32"
)
for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
    for t, char in enumerate(input_text):
        encoder_input_data[i, t, input_token_index[char]] = 1.0
    encoder_input_data[i, t + 1 :, input_token_index[" "]] = 1.0
    for t, char in enumerate(target_text):
        # decoder_target_data is ahead of decoder_input_data by one timestep
        decoder_input_data[i, t, target_token_index[char]] = 1.0
        if t > 0:
            # decoder_target_data will be ahead by one timestep
            # and will not include the start character.
            decoder_target_data[i, t - 1, target_token_index[char]] = 1.0
    decoder_input_data[i, t + 1 :, target_token_index[" "]] = 1.0
    decoder_target_data[i, t:, target_token_index[" "]] = 1.0
# Define an input sequence and process it.
encoder_inputs = keras.Input(shape=(None, num_encoder_tokens))
encoder = keras.layers.LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = keras.Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = keras.layers.LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = keras.layers.Dense(num_decoder_tokens, activation="softmax")
decoder_outputs = decoder_dense(decoder_outputs)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(
    optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"]
)
model.fit(
    [encoder_input_data, decoder_input_data],
    decoder_target_data,
    batch_size=batch_size,
    epochs=epochs,
    validation_split=0.2,
)
# Save model
model.save("ess2s")
Can someone explain to me how to avoid this problem, please? I read that I should use a CsvDataset or another dataset type that TensorFlow offers, but I don't understand how to use it in my case.
The solution I found was to use my unused hard disk as virtual memory (swap), which helped me get around this problem.
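For scale, each of the three one-hot arrays in the question holds len(input_texts) × max_seq_length × num_tokens float32 values; with 2 million sentences and, say, a max length of 100 and ~100 tokens, one array alone would need roughly 2e6 × 100 × 100 × 4 bytes ≈ 80 GB. An alternative is to build the one-hot batches on the fly. A minimal sketch (reusing the variables defined in the question; the generator itself is illustrative, not the original code):
def batch_generator(batch_size=64):
    # Build the one-hot tensors for one batch at a time instead of all samples at once
    while True:
        for start in range(0, len(input_texts), batch_size):
            ins = input_texts[start:start + batch_size]
            tgts = target_texts[start:start + batch_size]
            enc = np.zeros((len(ins), max_encoder_seq_length, num_encoder_tokens), dtype="float32")
            dec_in = np.zeros((len(ins), max_decoder_seq_length, num_decoder_tokens), dtype="float32")
            dec_tgt = np.zeros((len(ins), max_decoder_seq_length, num_decoder_tokens), dtype="float32")
            for i, (inp, tgt) in enumerate(zip(ins, tgts)):
                for t, ch in enumerate(inp):
                    enc[i, t, input_token_index[ch]] = 1.0
                for t, ch in enumerate(tgt):
                    dec_in[i, t, target_token_index[ch]] = 1.0
                    if t > 0:
                        dec_tgt[i, t - 1, target_token_index[ch]] = 1.0
            yield [enc, dec_in], dec_tgt

steps_per_epoch = int(np.ceil(len(input_texts) / batch_size))
model.fit(batch_generator(batch_size), steps_per_epoch=steps_per_epoch, epochs=epochs)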

Hugging Face distilbert-base-uncased-finetuned-sst-2-english runs out of RAM with only a few KB?

My dataset is only 10 thousand sentences. I run it in batches of 100 and clear the memory on each run. I manually slice the sentences to only 50 characters. After running for 32 minutes, it crashes... on Google Colab with 25 GB of RAM.
I must be doing something terribly wrong.
I'm using the out-of-the-box model and tokenizer.
def eval_model(model, tokenizer_, X, y, batchsize, maxlength):
    assert len(X) == len(y)
    labels = ["negative", "positive"]
    correctCounter = 0
    epochs = int(np.ceil(len(dev_sent) / batchsize))
    accuracies = []
    for i in range(epochs):
        print(f"Epoch {i}")
        # slicing up the data into batches
        X_ = X[i:((i+1)*100)]
        X_ = [x[:maxlength] for x in X_]  # make sure sentences are only of maxlength
        y_ = y[i:((i+1)*100)]
        encoded_input = tokenizer(X_, return_tensors='pt', padding=True, truncation=True, max_length=maxlength)
        output = model(**encoded_input)
        scores = output[0][0].detach().numpy()
        scores = softmax(scores)
        for i, scores in enumerate([softmax(logit) for logit in output["logits"].detach().numpy()]):
            print("--------------------")
            print(f'Sentence no. {len(accuracies)+1}')
            print("Sentence: " + X_[i])
            print("Score: " + str(scores))
            ranking = np.argsort(scores)
            print(f"Ranking: {ranking}")
            pred = labels[np.argmax(np.argsort(scores))]
            print(f"Prediction: {pred}, annotation: {y_[i]}")
            if pred == y_[i]:
                print("SUCCES!")
                correctCounter += 1
            else:
                print("FAILURE!")
        # garbage collection (to not run out of RAM... which it shouldn't, it's just a few KB, but it does.... ?!)
        del(encoded_input)
        del(output)
        del(scores)
        gc.collect()
        accuracies.append(correctCounter / len(y_))
        #print(f"current accuracy: {np.mean(np.asarray(accuracies))}")
    return np.mean(np.asarray(accuracies))

task = 'sentiment'
MODEL = f"distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
tokenizer.save_pretrained(MODEL)
accuracy = eval_model(model, tokenizer, dev_sent, dev_sentiment, 100, 50)
EDIT: here is the code on google colab https://colab.research.google.com/drive/1qKTabPTNYWEILoua0gIvlYDly8jcibr0?usp=sharing

Skip-gram word2vec loss doesn't decrease

I'm working on an implementation of the word2vec architecture from scratch, but my model doesn't converge.
class SkipGramBatcher:
    def __init__(self, text):
        self.text = text.results

    def get_batches(self, batch_size):
        n_batches = len(self.text)//batch_size
        pairs = []
        for idx in range(0, len(self.text)):
            window_size = 5
            idx_neighbors = self._get_neighbors(self.text, idx, window_size)
            #one_hot_idx = self._to_one_hot(idx)
            #idx_pairs = [(one_hot_idx, self._to_one_hot(idx_neighbor)) for idx_neighbor in idx_neighbors]
            idx_pairs = [(idx, idx_neighbor) for idx_neighbor in idx_neighbors]
            pairs.extend(idx_pairs)
        for idx in range(0, len(pairs), batch_size):
            X = [pair[0] for pair in pairs[idx:idx+batch_size]]
            Y = [pair[1] for pair in pairs[idx:idx+batch_size]]
            yield X, Y

    def _get_neighbors(self, text, idx, window_size):
        text_length = len(text)
        start = max(idx-window_size, 0)
        end = min(idx+window_size+1, text_length)
        neighbors_words = set(text[start:end])
        return list(neighbors_words)

    def _to_one_hot(self, indexes):
        n_values = np.max(indexes) + 1
        return np.eye(n_values)[indexes]
I use the text8 corpus and have applied preprocessing techniques such as stemming, lemmatization and subsampling. I've also excluded English stop words and limited the vocabulary:
vocab_size = 20000
text_len = len(text)
test_text_len = int(text_len*0.15)
preprocessed_text = PreprocessedText(text,vocab_size)
I use TensorFlow for the graph computation:
train_graph = tf.Graph()
with train_graph.as_default():
    inputs = tf.placeholder(tf.int32, [None], name='inputs')
    labels = tf.placeholder(tf.int32, [None, None], name='labels')

n_embedding = 300
with train_graph.as_default():
    embedding = tf.Variable(tf.random_uniform((vocab_size, n_embedding), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs)
And apply negative sampling
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
    softmax_w = tf.Variable(tf.truncated_normal((vocab_size, n_embedding)))  # create softmax weight matrix here
    softmax_b = tf.Variable(tf.zeros(vocab_size), name="softmax_bias")  # create softmax biases here
    # Calculate the loss using negative sampling
    loss = tf.nn.sampled_softmax_loss(
        weights=softmax_w,
        biases=softmax_b,
        labels=labels,
        inputs=embed,
        num_sampled=n_sampled,
        num_classes=vocab_size)
    cost = tf.reduce_mean(loss)
    optimizer = tf.train.AdamOptimizer().minimize(cost)
Finally I train my model
epochs = 10
batch_size = 64
avg_loss = []
with train_graph.as_default():
    saver = tf.train.Saver()

with tf.Session(graph=train_graph) as sess:
    iteration = 1
    loss = 0
    sess.run(tf.global_variables_initializer())
    for e in range(1, epochs+1):
        batches = skip_gram_batcher.get_batches(batch_size)
        start = time.time()
        for batch_x, batch_y in batches:
            feed = {inputs: batch_x,
                    labels: np.array(batch_y)[:, None]}
            train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
            loss += train_loss
            if iteration % 100 == 0:
                end = time.time()
                print("Epoch {}/{}".format(e, epochs),
                      "Iteration: {}".format(iteration),
                      "Avg. Batch loss: {:.4f}".format(loss/iteration),
                      "{:.4f} sec/batch".format((end-start)/100))
                #loss = 0
                avg_loss.append(loss/iteration)
                start = time.time()
            iteration += 1
    save_path = saver.save(sess, "checkpoints/text8.ckpt")
But after running this model, my average batch loss doesn't decrease dramatically.
I guess I must have made a mistake somewhere. Any help is appreciated.
What makes you say "my average batch loss doesn't decrease dramatically"? The graph you've attached shows some (unlabeled) value decreasing significantly, and still decreasing at a steep slope towards the end of the data.
"Convergence" would show up as the improvement in loss first slowing, then stopping.
But if your loss is still noticeably dropping, just keep training! Using more epochs can be especially important on small datasets, like the tiny text8 corpus you're using.

TensorFlow simple XOR example not converging

I have the following code to learn a simple XOR network:
import tensorflow as tf
import numpy as np

def generate_xor(length=1000):
    x = np.random.randint(0, 2, size=(length, 2))
    y = []
    for pair in x:
        y.append(int(np.logical_xor(pair[0], pair[1])))
    return x, np.array(y)

n_inputs = 2
n_hidden = n_inputs*4
n_outputs = 1

x = tf.placeholder(tf.float32, shape=[1, n_inputs])
y = tf.placeholder(tf.float32, [1, n_outputs])

W = tf.Variable(tf.random_uniform([n_inputs, n_hidden], -1, 1))
b = tf.Variable(tf.zeros([n_hidden]))
W2 = tf.Variable(tf.random_uniform([n_hidden, n_outputs], -1, 1))
b2 = tf.Variable(tf.zeros([n_outputs]))

def xor_model(data):
    x = data
    hidden_layer = tf.nn.relu(tf.matmul(x, W) + b)
    output = tf.nn.relu(tf.matmul(hidden_layer, W2) + b2)
    return output

xor_nn = xor_model(x)
cost = tf.reduce_mean(tf.abs(xor_nn - y))
train_step = tf.train.AdagradOptimizer(0.05).minimize(cost)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

x_data, y_data = generate_xor(length=100000)
errors = []
count = 0
out_freq = 1000
for xor_in, xor_out in zip(x_data, y_data):
    _, err = sess.run([train_step, cost], feed_dict={x: xor_in.reshape(1, 2), y: xor_out.reshape(1, n_outputs)})
    errors.append(err)
    count += 1
    if count == out_freq:
        tol = np.mean(errors[-out_freq:])
        print(tol)
        count = 0
        if tol < 0.005:
            break

n_tests = 100
correct = 0
count = 0
x_test, y_test = generate_xor(length=n_tests)
for xor_in, xor_out in zip(x_test, y_test):
    output = sess.run([xor_nn], feed_dict={x: xor_in.reshape(1, 2)})[0]
    guess = int(output[0][0])
    truth = int(xor_out)
    if guess == truth:
        correct += 1
    count += 1
    print("Model %d : Truth %d - Pass Rate %.2f" % (int(guess), int(xor_out), float(correct*100.0)/float(count)))
However, I can't get the code to reliably converge. I have tried varying the size of the hidden layer, using different optimizers / step sizes, and different initializations of the weights and biases.
I'm clearly making an elementary error. If anyone could help I'd be grateful.
EDIT:
Thanks to Prem and Alexander Svetkin I managed to spot my errors. Firstly, I wasn't rounding the outputs when I cast them to ints - a schoolboy mistake. Secondly, I had a ReLU on the output layer which wasn't needed - a copy-and-paste mistake. Thirdly, ReLU is indeed a bad choice of activation function for this task; using a sigmoid function works much better.
So this:
hidden_layer = tf.nn.relu(tf.matmul(x,W)+b)
output = tf.nn.relu(tf.matmul(hidden_layer, W2)+b2)
becomes this:
hidden_layer = tf.nn.sigmoid(tf.matmul(x,W)+b)
output = tf.matmul(hidden_layer, W2)+b2
and this:
guess = int(output[0][0])
becomes this:
guess = int(output[0][0]+0.5)
Shouldn't you return just the output layer without the ReLU activation?
output = tf.matmul(hidden_layer, W2) + b2
ReLU just isn't the right activation function for a binary classification task; use something different, like a sigmoid.
Pay attention to your float output values: should 0.99 mean 1 or 0? Use rounding.
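Putting the suggested fixes together, one possible shape of the corrected model (a sketch only; tf.nn.sigmoid_cross_entropy_with_logits is a common alternative to the absolute-error cost in the question, not something the asker used):
def xor_model(data):
    hidden_layer = tf.nn.sigmoid(tf.matmul(data, W) + b)   # sigmoid hidden layer
    logits = tf.matmul(hidden_layer, W2) + b2              # linear output, no ReLU
    return logits

xor_nn = xor_model(x)
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=xor_nn))
train_step = tf.train.AdagradOptimizer(0.05).minimize(cost)

prob = tf.nn.sigmoid(xor_nn)   # at test time, round the probability: guess = int(prob_value + 0.5)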

Tensorflow: Using neural network to classify positive or negative phrases

I am following through the tutorial here:
https://pythonprogramming.net/train-test-tensorflow-deep-learning-tutorial/
I can get the Neural Network trained and print out the accuracy.
However, I do not know how to use the Neural Network to make a prediction.
Here is my attempt. Specifically, the issue is these lines - I believe I cannot get my input string into the format the model expects:
features = get_features_for_input("This was the best store i've ever seen.")
result = (sess.run(tf.argmax(prediction.eval(feed_dict={x:features}),1)))
Here is a larger listing:
def train_neural_network(x):
    prediction = neural_network_model(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(hm_epochs):
            epoch_loss = 0
            i = 0
            while i < len(train_x):
                start = i
                end = i + batch_size
                batch_x = np.array(train_x[start:end])
                batch_y = np.array(train_y[start:end])
                _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})
                epoch_loss += c
                i += batch_size
            print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)
        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy', accuracy.eval({x: test_x, y: test_y}))
        # pos: [1,0] , argmax: 0
        # neg: [0,1] , argmax: 1
        features = get_features_for_input("This was the best store i've ever seen.")
        result = (sess.run(tf.argmax(prediction.eval(feed_dict={x: features}), 1)))
        if result[0] == 0:
            print('Positive:', input_data)
        elif result[0] == 1:
            print('Negative:', input_data)

def get_features_for_input(input):
    current_words = word_tokenize(input.lower())
    current_words = [lemmatizer.lemmatize(i) for i in current_words]
    features = np.zeros(len(lexicon))
    for word in current_words:
        if word.lower() in lexicon:
            index_value = lexicon.index(word.lower())
            # OR DO +=1, test both
            features[index_value] += 1
    features = np.array(list(features))

train_neural_network(x)
Following your comment above, it looks like your error ValueError: Cannot feed value of shape () is due to the fact that features is None, because your function get_features_for_input doesn't return anything.
I added the return features line and gave features the correct shape of [1, len(lexicon)] to match the shape of the placeholder.
def get_features_for_input(input):
    current_words = word_tokenize(input.lower())
    current_words = [lemmatizer.lemmatize(i) for i in current_words]
    features = np.zeros((1, len(lexicon)))
    for word in current_words:
        if word.lower() in lexicon:
            index_value = lexicon.index(word.lower())
            # OR DO +=1, test both
            features[0, index_value] += 1
    return features
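With that change, the prediction call from the question should accept the returned features directly (illustrative usage only, reusing the names from the question):
features = get_features_for_input("This was the best store i've ever seen.")   # shape (1, len(lexicon))
result = sess.run(tf.argmax(prediction, 1), feed_dict={x: features})
print('Positive' if result[0] == 0 else 'Negative')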
Your get_features_for_input function returns a single list representing the features of one sentence, but for feed_dict the input needs to be of size [num_examples, features_size]; here num_examples is 1.
The following code should work.
def get_features_for_input(input):
    current_words = word_tokenize(input.lower())
    current_words = [lemmatizer.lemmatize(i) for i in current_words]
    features = np.zeros(len(lexicon))
    for word in current_words:
        if word.lower() in lexicon:
            index_value = lexicon.index(word.lower())
            # OR DO +=1, test both
            features[index_value] += 1
    batch_features = []
    batch_features.append(features)  # wrap the single example in a batch of size 1
    return np.array(batch_features)
The basic principle of any machine learning algorithm is that dimensions should be the same during training and testing.
During training you created a matrix of shape (number of training samples, len(lexicon)). You are using a bag-of-words approach here, and the lexicon is simply the set of unique words in your training data.
During testing, your input vector size should be the same as your vector size for training, and that is just the size of the lexicon created during training. Each element of the test vector corresponds to the word at the same index in the lexicon.
Now, coming to your problem: in get_features_for_input(input) you used the lexicon, which you must have defined somewhere in your program. Given the error, what I conclude is that your lexicon list is empty, so in get_features_for_input the line features = np.zeros(len(lexicon)) produces an array of zero shape and the loop body is never entered.
A few expected modifications:
You can find the function create_feature_sets_and_labels in your tutorial. It returns your cleaned, formatted training data. Change its return statement to return the lexicon list along with the data:
return train_x,train_y,test_x,test_y,lexicon
Make a small change to collect the lexicon list:
train_x,train_y,test_x,test_y,lexicon = create_feature_sets_and_labels('/path/to/pos.txt','/path/to/neg.txt')
Then just pass this lexicon list along with your input to the get_features_for_input function:
features = get_features_for_input("This was the best store i've ever seen.",lexicon)
Make a small change in the get_features_for_input function:
def get_features_for_input(text, lexicon):
    featureset = []
    current_words = word_tokenize(text.lower())
    current_words = [lemmatizer.lemmatize(i) for i in current_words]
    features = np.zeros(len(lexicon))
    for word in current_words:
        if word.lower() in lexicon:
            index_value = lexicon.index(word.lower())
            features[index_value] += 1
    featureset.append(features)
    return np.asarray(featureset)
