I want to train a FastText model in Python using the "gensim" library. First, I tokenize each sentence into its words, converting each sentence to a list of words. Each such list is then appended to a final list, so at the end I have a nested list containing all tokenized sentences:
import nltk

word_punctuation_tokenizer = nltk.WordPunctTokenizer()
word_tokenized_corpus = []
for line in open('sentences.txt'):
    new = line.strip()
    new = word_punctuation_tokenizer.tokenize(new)
    if len(new) != 0:
        word_tokenized_corpus.append(new)
Then, the model is built as follows:
from gensim.models import FastText

embedding_size = 60
window_size = 40
min_word = 5
down_sampling = 1e-2
ft_model = FastText(word_tokenized_corpus,
                    size=embedding_size,
                    window=window_size,
                    min_count=min_word,
                    sample=down_sampling,
                    sg=1,
                    iter=100)
However, the number of sentences in "word_tokenized_corpus" is very large and the program can't handle it. Is it possible to train the model by feeding it each tokenized sentence one by one, like the following?
for line in open('sentences.txt'):
    new = line.strip()
    new = word_punctuation_tokenizer.tokenize(new)
    if len(new) != 0:
        ft_model = FastText(new,
                            size=embedding_size,
                            window=window_size,
                            min_count=min_word,
                            sample=down_sampling,
                            sg=1,
                            iter=100)
Does this make any difference to the final results? Is it possible to train the model without having to build such a large list and keep it in memory?
Since the volume of the data is very high, it is better to convert the text file into a .cor file and read it in the following way:
from gensim.test.utils import datapath
corpus_file = datapath('sentences.cor')
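(Note that datapath just resolves a filename inside gensim's bundled test-data folder; for your own corpus, an ordinary path string works as well.) The expected format is one sentence per line, with tokens separated by whitespace. A minimal sketch of producing such a file from sentences.txt, reusing the tokenizer above (the .cor extension is only a convention):
with open('sentences.txt') as src, open('sentences.cor', 'w') as dst:
    for line in src:
        tokens = word_punctuation_tokenizer.tokenize(line.strip())
        if tokens:
            dst.write(' '.join(tokens) + '\n')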
As for the next step:
model = FastText(size=embedding_size,
                 window=window_size,
                 min_count=min_word,
                 sample=down_sampling,
                 sg=1,
                 iter=100)
model.build_vocab(corpus_file=corpus_file)
total_words = model.corpus_total_words
model.train(corpus_file=corpus_file, total_words=total_words, epochs=5)
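Once trained, the word vectors can be queried as usual; a quick sketch (the query word is arbitrary, and FastText can also build vectors for out-of-vocabulary words from subword n-grams):
vector = model.wv['sentence']
similar = model.wv.most_similar('sentence', topn=5)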
If you want to use the official fastText API instead, here is how you can do it:
import fasttext

path = "path/to/all/the/texts/in/a/single/txt/file.txt"
training_param = {
    'ws': window_size,
    'minCount': min_word,
    'dim': embedding_size,
    't': down_sampling,
    'epoch': 5,
    'seed': 0
}
# for all the parameters: https://fasttext.cc/docs/en/options.html
model = fasttext.train_unsupervised(path, **training_param)
model.save_model("embeddings_300_fr.bin")
The advantages of using the fastText API are that (1) it is implemented in C++ with a Python wrapper (much faster than Gensim, and multithreaded), and (2) it handles reading the text better. It is also possible to use it directly from the command line.
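For completeness, a small sketch (not from the original answer) of loading the saved model back and querying it with the same API:
model = fasttext.load_model("embeddings_300_fr.bin")
vector = model.get_word_vector("sentence")           # works for out-of-vocabulary words too, via subwords
neighbours = model.get_nearest_neighbors("sentence") # list of (score, word) pairs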
Related
I'm having trouble improving the speed of my script with Numba.
I'm fairly new to programming but I realize the script below is suboptimal in many ways (e.g. using lists instead of np.arrays). I'm currently still figuring out how to write better code, but in the meantime I'd like to get started with Numba njit. I read that this module can drastically improve speed for even poorly optimized scripts.
Unfortunately, when I try to run the script below with the @njit decorator added, it gives me the following error:
numba.core.errors.UnsupportedError: Failed in nopython mode pipeline (step: analyzing bytecode)
Use of unsupported opcode (DICT_MERGE) found
File "Z test file 2.py", line 92:
def analyze_text(excel_sheet):
<source elided>
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
^
Below you can find my script. I have an Excel sheet where one column has 100,000 snippets of text that I want to evaluate one by one.
For each snippet of text, the sentiment is analyzed using Hugging Face's sentiment analysis model. Each snippet is always partially positive, negative, or neutral, but the percentages vary. For each snippet, the positivity, negativity, and neutrality percentages are appended to the lists "Positive", "Negative", and "Neutral" respectively. These lists are turned into new columns in a new dataframe, and the new dataframe is exported along with the snippets. The script works completely fine without the @njit decorator, but it just takes forever to go over all 100,000 snippets.
After a long search I found a vague answer that the problem might have something to do with Numba not being able to read the tokenized "encoded_input" variable, but I have absolutely no idea where to go from here.
Any solution to use Numba (or to rewrite the script for speed) would be greatly appreciated.
import numpy as np
import pandas as pd
from numba import njit
from scipy.special import softmax  # assumption: the softmax used in the cardiffnlp example
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer

sheet = pd.read_csv("Text_export_to_analyze.csv")
# Preprocess text (username and link placeholders)
def preprocess(text):
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)
MODEL = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
config = AutoConfig.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
# add values to a list based on label
def add_values_in_lists(label, value):
    if label == "Positive":
        Positive.append(value)
    if label == "Negative":
        Negative.append(value)
    if label == "Neutral":
        Neutral.append(value)
Positive = list()
Negative = list()
Neutral = list()
@njit()
def analyze_text(excel_sheet):
    for snippet in excel_sheet["Snippet"]:
        text = snippet
        text = preprocess(text)
        encoded_input = tokenizer(text, return_tensors='pt')
        output = model(**encoded_input)
        scores = output[0][0].detach().numpy()
        scores = softmax(scores)
        # # TF
        # model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
        # model.save_pretrained(MODEL)
        # text = "Good night 😊"
        # encoded_input = tokenizer(text, return_tensors='tf')
        # output = model(encoded_input)
        # scores = output[0][0].numpy()
        # scores = softmax(scores)
        # Print labels and scores
        ranking = np.argsort(scores)
        ranking = ranking[::-1]
        for i in range(scores.shape[0]):
            l = config.id2label[ranking[i]]
            s = scores[ranking[i]]
            add_values_in_lists(l, s)
            print(f"{i+1}) {l} {np.round(float(s), 4)}")
analyze_text(sheet)
sentiment_df = pd.DataFrame(
    {'Positive': Positive,
     'Negative': Negative,
     'Neutral': Neutral
     })
final_text_df = pd.concat([sheet, sentiment_df], axis=1)
final_text_df.to_csv("Text_export_final.csv", index=False, encoding='utf-8-sig')
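For what it's worth, Numba's nopython mode cannot compile calls into transformers/PyTorch objects (tokenizer and model are ordinary Python objects, and the DICT_MERGE opcode in the error comes from the **encoded_input unpacking), so @njit will not help here. One common speedup instead is to batch the snippets through the tokenizer and model; a hedged sketch reusing the names above (the batch size of 32 is an arbitrary choice):
import torch

def analyze_text_batched(excel_sheet, batch_size=32):
    snippets = [preprocess(s) for s in excel_sheet["Snippet"]]
    for start in range(0, len(snippets), batch_size):
        batch = snippets[start:start + batch_size]
        encoded = tokenizer(batch, return_tensors='pt', padding=True, truncation=True)
        with torch.no_grad():  # inference only, no gradients needed
            logits = model(**encoded).logits
        for row in torch.softmax(logits, dim=-1).numpy():
            for idx in row.argsort()[::-1]:  # highest score first
                add_values_in_lists(config.id2label[idx], row[idx])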
I'm having difficulties working with the tf.contrib.data.Dataset API and wondered if some of you could help. I wanted to transform the entire skip-gram pre-processing of word2vec into this paradigm to play with the API a little bit; it involves the following operations:
Sequences of tokens are loaded dynamically (to avoid loading the whole dataset in memory at once), so we start with a stream (in Scala's sense: data is not all in memory but loaded when access is needed) of sequences of tokens: seq_tokens.
From any of these seq_tokens we extract skip-grams with a Python function that returns a list of tuples (token, context).
Select the column of tokens as features and the column of contexts as labels.
In pseudo-code, to make it clearer, it would look like below. We should take advantage of the framework's parallelism rather than load the data ourselves, so I would first load in memory only the indices of the sequences, then load the sequences inside a map (so even if not all lines are processed synchronously, data is loaded asynchronously and there is no OOM to fear), and apply a function on those sequences of tokens that creates a varying number of skip-grams, which need to be flattened. In the end, I would formally end up with data of shape (#rows = number of skip-grams generated, #columns = 2).
data = range(1:N)
    .map(i => load(i): Seq[String])   // load: Int -> Seq[String] dynamically loads a sequence of tokens (sequences have varying length)
    .flat_map(s => skip_gram(s))      // skip_gram: Seq[String] -> Seq[(String, String)], output length varies with the sequence
features = data[0] // features
labels = data[1]   // labels
I've tried naively to do so with the Dataset API, but I'm stuck. I can do something like:
iterator = (
    tf.contrib.data.Dataset.range(N)
    .map(lambda i: tf.py_func(load_data, [i], [tf.int32, tf.int32]))  # (1)
    .flat_map(?)  # (2)
    .make_one_shot_iterator()
)
(1) TensorFlow's not happy here because the loaded sequences have different lengths...
(2) Haven't managed yet to do the skip-gram part... I actually just want to call a Python function that computes a sequence (of variable size) of skip-grams and flattens it, so that if the return type is a matrix, each row should be understood as a new row of the output Dataset.
Thanks a lot if anyone has any idea, and don't hesitate to ask if I forgot to mention a useful detail...
I'm just implementing the same thing; here's how I solved it:
dataset = tf.data.TextLineDataset(filename)
if mode == ModeKeys.TRAIN:
    dataset = dataset.shuffle(buffer_size=batch_size * 100)
dataset = dataset.flat_map(lambda line: string_to_skip_gram(line))
dataset = dataset.batch(batch_size)
In my dataset, I treat every line as standalone, so I'm not worrying about contexts that span multiple lines.
I therefore flat map each line through a function string_to_skip_gram that returns a Dataset of a length that depends on the number of tokens in the line.
string_to_skip_gram turns the line into a series of tokens, represented by IDs (using the method tokenize_str) using tf.py_func:
def string_to_skip_gram(line):
    def handle_line(line):
        token_ids = tokenize_str(line)
        (features, labels) = skip_gram(token_ids)
        return np.array([features, labels], dtype=np.int64)

    res = tf.py_func(handle_line, [line], tf.int64)
    features = res[0]
    labels = res[1]
    return tf.data.Dataset.from_tensor_slices((features, labels))
Finally, skip_gram returns a list of all possible context words and target words:
def skip_gram(token_ids):
    skip_window = 1
    features = []
    labels = []
    context_range = [i for i in range(-skip_window, skip_window + 1) if i != 0]
    for word_index in range(skip_window, len(token_ids) - skip_window):
        for context_word_offset in context_range:
            features.append(token_ids[word_index])
            labels.append(token_ids[word_index + context_word_offset])
    return features, labels
Note that I'm not sampling the context words here; just using all of them.
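To round this out, here is a sketch (not from the original answer) of consuming the resulting dataset in the same TF 1.x style, assuming filename and batch_size are defined:
iterator = dataset.make_one_shot_iterator()
next_features, next_labels = iterator.get_next()
with tf.Session() as sess:
    batch_features, batch_labels = sess.run([next_features, next_labels])  # one batch of (target, context) pairs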
I have read a description of how to apply random forest regression here. In this example, the authors use the following code to create the features:
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(analyzer = "word",max_features = 5000)
train_data_features = vectorizer.fit_transform(clean_train_reviews)
train_data_features = train_data_features.toarray()
I am thinking of combining several possibilities as features and turning them on and off, and I don't know how to do it.
What I have so far is that I define a class, where I will be able to turn on and off the features and see if it brings something (for example, all unigrams and 20 most frequent unigrams, it could be then 10 most frequent adjectives, tf-idf). But for now I don't understand how to combine them together.
The code looks like this, and in the function part I am lost (the kind of function I have would replicate what they do in the tutorial, but it doesn't seem to be really helpful the way I do it):
class FeatureGen: # for example, feat = FeatureGen(unigrams=False) creates a feature set without the turned-off feature
    def __init__(self, unigrams=True, unigrams_freq=True):
        self.unigrams = unigrams
        self.unigrams_freq = unigrams_freq

    def get_features(self, input):
        vectorizer = CountVectorizer(analyzer="word", max_features=5000)
        tokens = input["token"]
        if self.unigrams:
            train_data_features = vectorizer.fit_transform(tokens)
        return train_data_features
What should I do to add one more feature possibility, like "contains the 10 most frequent words"?
if self.unigrams:
    train_data_features = vectorizer.fit_transform(tokens)
if self.unigrams_freq:
    # something else
return features # and this should be a combination somehow
Looks like you need np.hstack.
However, you need each feature array to have one row per training case.
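For illustration, a hedged sketch of what get_features could look like; the max_features=10 vectorizer is one possible way to get the 10 most frequent words, since CountVectorizer keeps only the most frequent terms when max_features is set:
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

def get_features(self, input):
    tokens = input["token"]
    parts = []
    if self.unigrams:
        vec = CountVectorizer(analyzer="word", max_features=5000)
        parts.append(vec.fit_transform(tokens).toarray())
    if self.unigrams_freq:
        vec_freq = CountVectorizer(analyzer="word", max_features=10)  # counts of the 10 most frequent words
        parts.append(vec_freq.fit_transform(tokens).toarray())
    return np.hstack(parts)  # each part has one row per training case, so columns just concatenate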
How to get document vectors of two text documents using Doc2vec?
I am new to this, so it would be helpful if someone could point me in the right direction / help me with some tutorial
I am using gensim.
doc1 = ["This is a sentence", "This is another sentence"]
documents1 = [doc.strip().split(" ") for doc in doc1]
model = doc2vec.Doc2Vec(documents1, size=100, window=300, min_count=10, workers=4)
I get
AttributeError: 'list' object has no attribute 'words'
whenever I run this.
If you want to train a Doc2Vec model, your data set needs to contain lists of words (similar to the Word2Vec format) and tags (ids of documents). It can also contain some additional info (see https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/doc2vec-IMDB.ipynb for more information).
# Import libraries
from gensim.models import doc2vec
from collections import namedtuple
# Load data
doc1 = ["This is a sentence", "This is another sentence"]
# Transform data (you can add more data preprocessing steps)
docs = []
analyzedDocument = namedtuple('AnalyzedDocument', 'words tags')
for i, text in enumerate(doc1):
    words = text.lower().split()
    tags = [i]
    docs.append(analyzedDocument(words, tags))
# Train model (set min_count = 1, if you want the model to work with the provided example data set)
model = doc2vec.Doc2Vec(docs, size = 100, window = 300, min_count = 1, workers = 4)
# Get the vectors
model.docvecs[0]
model.docvecs[1]
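As an aside (not part of the original answer): for a new, unseen document, Doc2Vec also provides infer_vector, which takes a list of words and returns a vector:
new_vector = model.infer_vector(['this', 'is', 'an', 'unseen', 'sentence'])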
UPDATE (how to train in epochs):
This example became outdated, so I deleted it. For more information on training in epochs, see this answer or @gojomo's comment.
Gensim was updated. The syntax of LabeledSentence no longer contains labels; there are now tags. See the documentation for LabeledSentence: https://radimrehurek.com/gensim/models/doc2vec.html
However, @bee2502 was right with
docvec = model.docvecs[99]
It will show the 100th vector's value for the trained model; it works with both integers and strings.
doc1 = ["This is a sentence", "This is another sentence"]
documents = [doc.strip().split(" ") for doc in doc1]
model = doc2vec.Doc2Vec(documents, size=100, window=300, min_count=10, workers=4)
I got AttributeError: 'list' object has no attribute 'words' because the input documents to Doc2Vec() were not in the correct LabeledSentence format.
I hope the example below will help you understand the format.
documents = LabeledSentence(words=[u'some', u'words', u'here'], labels=[u'SENT_1'])
More details are here : http://rare-technologies.com/doc2vec-tutorial/
However, I solved the problem by taking input data from a file using TaggedLineDocument().
File format: one document = one line = one TaggedDocument object.
Words are expected to be already preprocessed and separated by whitespace, tags are constructed automatically from the document line number.
sentences = doc2vec.TaggedLineDocument(file_path)
model = doc2vec.Doc2Vec(sentences, size=100, window=300, min_count=10, workers=4)
To get a document vector, you can use docvecs. More details here: https://radimrehurek.com/gensim/models/doc2vec.html#gensim.models.doc2vec.TaggedDocument
docvec = model.docvecs[99]
where 99 is the id of the document whose vector we want. If the tags are in integer format (the default if you load using TaggedLineDocument()), use the integer id directly, as I did. If the tags are in string format, use "SENT_99". This is similar to Word2vec.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
documents = [TaggedDocument(doc, [i]) for i, doc in enumerate(doc1)]
model = Doc2Vec(documents, ...)  # fill in the other parameters as needed
This should work fine. You need to tag your documents to train a doc2vec model.
My first post here!
I have problems using the nltk NaiveBayesClassifier. I have a training set of 7000 items. Each training item has a description of 2 or 3 words and a code. I would like to use the code as the class label and each word of the description as a feature.
An example:
"My name is Obama", 001
...
Training set = {[feature['My']=True, feature['name']=True, feature['is']=True, feature['Obama']=True], 001}
Unfortunately, using this approach, the training procedure NaiveBayesClassifier.train uses up to 3 GB of RAM.
What's wrong in my approach?
Thank you!
from nltk import classify, NaiveBayesClassifier

def document_features(document): # feature extractor
    document = set(document)
    return dict((w, True) for w in document)
...
words = set()
entries = []
train_set = []
train_length = 2000
readfile = open("atcname.pl", 'r')
t = readfile.readline()
while (t != ""):
    t = t.split("'")
    code = t[0] # class
    desc = t[1] # description
    s = desc.split() # assumption: `s` was undefined in the original; presumably the words of the description
    words = words.union(s) # update dictionary with the new words in the description
    entries.append((s, code))
    t = readfile.readline()
train_set = classify.util.apply_features(document_features, entries[:train_length])
classifier = NaiveBayesClassifier.train(train_set) # Training
Use nltk.classify.apply_features, which returns an object that acts like a list but does not store all the feature sets in memory.
from nltk.classify import apply_features
More information and an example here
You are loading the whole file into memory anyway; you will need to use some form of lazy loading that reads data on an as-needed basis.
Consider looking into this
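For instance, a hedged sketch (reusing names from the question) of streaming the labeled feature sets through a generator instead of building the full entries list; NaiveBayesClassifier.train only iterates over them once, so a generator is enough:
def stream_featuresets(path, limit):
    with open(path) as f:
        for n, line in enumerate(f):
            if n >= limit:
                break
            parts = line.split("'")
            code, desc = parts[0], parts[1] # class code and description, as in the question
            yield (document_features(desc.split()), code)

classifier = NaiveBayesClassifier.train(stream_featuresets("atcname.pl", 2000))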