In my spare time, I am transcribing a very old, rare book written in Romanian (in fact, it is the only remaining copy, to my knowledge). It was written over a hundred years ago, well before any computers existed. As such, no digital copies exist, and I am manually transcribing and digitizing it.
The book is thousands of pages long, and it is surprisingly time-consuming (for me, at least) to add diacritic and accent marks (ă/â/î/ş/ţ) to every single word as I type. If I omit the marks and just type the bare letters (i.e. a instead of ă/â), I am able to type more than twice as fast, which is a huge benefit. Currently I am typing everything directly into a .tex file to apply special formatting for the pages and illustrations.
However, I know that eventually I will have to add all these marks back into the text, and it seems tedious/unnecessary to do all that manually, since I already have all the letters. I'm looking for some way to automatically/semi-automatically ADD diacritic/accent marks to a large body of text (not remove - I see plenty of questions asking how to remove the marks on SO).
I tried searching for large corpora of Romanian words (this and this were the most promising two), but everything I found fell short, missing at least a few words on any random sample of text I fed it (I used a short python script). It doesn't help that the book uses many archaic/uncommon words or uncommon spellings of words.
Does anyone have any ideas on how I might go about this? There are no dumb ideas here - any document format, machine learning technique, coding language, professional tool, etc that you can think of that might help is appreciated.
I should also note that I have substantial coding experience, and would not consider it a waste of time to build something myself. Tbh, I think it might be beneficial to the community, since I could not find such a tool for any Western language (French, Czech, Serbian, etc). Just need some guidance on how to get started.
What comes to my mind is a simple replacement. About 10% of the words are differentiated only by the diacritics, e.g. abandona and abandonă; those will not be fixed. But the other 90% will be.
const dictUrl = 'https://raw.githubusercontent.com/ManiacDC/TypingAid/master/Wordlists/Wordlist%20Romanian.txt';

async function init(){
  console.log('init');
  const response = await fetch(dictUrl);
  const text = await response.text();
  console.log(`${text.length} characters`);
  const words = text.split(/\s+/mg);
  console.log(`${words.length} words`);
  const denormalize = {};
  let unique_count = 0;
  for(const w of words){
    // NFD-decompose, then strip everything outside a-z (drops combining marks)
    const nw = w.normalize('NFD').replace(/[^a-z]/ig, '');
    if(!Object.hasOwnProperty.call(denormalize, nw)){
      denormalize[nw] = [];
      unique_count += 1;
    }
    denormalize[nw].push(w);
  }
  console.log(`${unique_count} unique normalized words`);
  for(const el of document.querySelectorAll('textarea')){
    handleSpellings(el, denormalize);
  }
}

function handleSpellings(el, dict){
  el.addEventListener("keypress", function (e) {
    if(e.key == ' ')
      setTimeout(function () {
        const restored = el.value.replace(
          /\b\S+(?=[\x20-\x7f])/g,
          (s) => {
            const s2 = dict[s] ? dict[s][0] : s;
            console.log([s, dict[s], s2]);
            return s2;
          }
        );
        el.value = restored;
      }, 0);
  });
}

window.addEventListener('load', init);
<body>
<textarea width=40 height=10 style="width: 40em; height:10em;">
</textarea>
</body>
Bob's answer is a static approach which will work depending on how good the word-list is.
So if a word is missing from this list, it will never be handled.
Moreover, as in many other languages, there are cases where two (or more) words exist with the same characters but different diacritics.
For Romanian I found the following example: peste = over vs. pește = fish.
These cases cannot be handled in a straightforward way either.
This is especially an issue if the text you're converting contains words which aren't used anymore in today's language, especially diacritised ones.
In this answer I will present an alternative using machine learning.
The only caveat to this is that I couldn't find a publicly available trained model doing diacritic restoration for Romanian.
You may find some luck in contacting the authors of the papers I will mention here to see if they'd be willing to send their trained models for you to use.
Otherwise, you'll have to train yourself, which I'll give some pointers on.
I will try to give a comprehensive overview to get you started, but further reading is encouraged.
Although this process may be laborious, it can give you 99% accuracy with the right tools.
Language Model
A language model can be thought of as having a high-level "understanding" of the language.
It's typically pre-trained on raw text corpora.
Although you can train your own, be wary that these models are quite expensive to pre-train.
Whilst multilingual models can be used, language-specific models typically fare better if trained with enough data.
Luckily, there are publicly available language models for Romanian, such as RoBERT.
This language model is based on BERT, an architecture used extensively in Natural Language Processing & more or less the standard in the field, as it attained state-of-the-art results in English & other languages.
In fact there are three variants: base, large, & small.
The larger the model, the better the results, due to the larger representation power.
But larger models will also have a higher footprint in terms of memory.
Loading these models is very easy with the transformers library.
For instance, the base model:
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-base")
model = AutoModel.from_pretrained("readerbench/RoBERT-base")
inputs = tokenizer("exemplu de propoziție", return_tensors="pt")
outputs = model(**inputs)
The outputs above will contain vector representations of the inputted texts, more commonly known as "word embeddings".
Language models are then fine-tuned to a downstream task — in your case, diacritic restoration — and would take these embeddings as input.
Fine-tuning
I couldn't find any publicly available fine-tuned models, so you'll have to fine-tune your own unless you manage to obtain one (e.g. from the paper authors).
To fine-tune a language model, we need to build a task-specific architecture which will be trained on some dataset.
The dataset is used to tell the model what the input looks like & how we'd like the output to be.
Dataset
From Diacritics Restoration using BERT with Analysis on Czech language, there's a publicly available dataset for a number of languages including Romanian.
The dataset annotations will also depend on which fine-tuning architecture you use (more on that below).
In general, you'd choose a dataset which you trust has high-quality diacritics.
From this text you can then build annotations automatically by producing the undiacritised variants of the words as well as the corresponding labels.
Keep in mind that this or any other dataset you'll use will contain biases especially in terms of the domain the annotated texts originate from.
Depending on how much data you have already transcribed, you may also want to build a dataset using your texts.
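Producing the undiacritised variants mechanically is straightforward with Unicode normalization; here's a minimal sketch (assuming the annotations are derived simply by stripping combining marks from trusted diacritised text):

```python
import unicodedata

def strip_diacritics(text):
    # NFD splits base letters from combining marks; drop the marks, recompose
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    return unicodedata.normalize("NFC", stripped)

# Each diacritised word yields an (input, label) training pair
pairs = [(strip_diacritics(w), w) for w in ["pește", "mână", "țară"]]
```

Running this gives pairs like ("peste", "pește"), which is exactly the input/output relationship the fine-tuned model has to learn.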
Architecture
The architecture you choose will have a bearing on the downstream performance you attain & the amount of custom code you'll have to write.
Word-level
The aforementioned work, Diacritics Restoration using BERT with Analysis on Czech language, uses a token-level classification mechanism where each word is labelled with a set of instructions for the type of diacritic marks to insert at which character index.
For example, the undiacritised word "dite" with instruction set 1:ACUTE;3:CARON indicates adding the appropriate diacritic marks at index 1 and index 3 to result in "dítě".
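Applying such an instruction set is mechanical; a rough sketch using Unicode combining marks (the mark names here are illustrative, not necessarily the paper's exact label set):

```python
import unicodedata

# Combining-mark codepoints for a few instruction labels
MARKS = {"ACUTE": "\u0301", "CARON": "\u030c",
         "BREVE": "\u0306", "CIRCUMFLEX": "\u0302", "COMMA_BELOW": "\u0326"}

def apply_instructions(word, instructions):
    chars = list(word)
    for item in instructions.split(";"):
        idx, mark = item.split(":")
        # attach the combining mark, then recompose to a single character
        chars[int(idx)] = unicodedata.normalize("NFC", chars[int(idx)] + MARKS[mark])
    return "".join(chars)

apply_instructions("dite", "1:ACUTE;3:CARON")  # → "dítě"
```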
Since this is a token-level classification task, there's not much custom code you have to write, as you can directly use a BertForTokenClassification.
Refer to the authors' code for a more complete example.
One sidenote is that the authors use a multilingual language model.
This can be easily replaced with another language model such as RoBERT mentioned above.
Character-level
Alternatively, the RoBERT paper uses a character-level model.
From the paper, each character is annotated as one of the following:
make no modification to the current character (e.g., a → a), add circumflex mark (e.g., a → â and i → î), add breve mark (e.g., a → ă), and two more classes for adding comma below (e.g., s → ş and t → ţ)
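That annotation scheme amounts to a per-character classification; here's a toy sketch of decoding such labels back into text (the class names are my own shorthand, not the paper's):

```python
# (character, class) → restored character; anything else stays unchanged
RESTORE = {
    ("a", "CIRCUMFLEX"): "â", ("i", "CIRCUMFLEX"): "î",
    ("a", "BREVE"): "ă",
    ("s", "COMMA_BELOW"): "ș", ("t", "COMMA_BELOW"): "ț",
}

def decode(undiacritised, labels):
    # One predicted class per character of the undiacritised input
    return "".join(RESTORE.get((c, l), c) for c, l in zip(undiacritised, labels))

decode("tara", ["COMMA_BELOW", "KEEP", "KEEP", "BREVE"])  # → "țară"
```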
Here you will have to build your own custom model (instead of the BertForTokenClassification above).
But, the rest of the training code will largely be the same.
Here's a template for the model class which can be used with the transformers library:
from transformers import BertModel, BertPreTrainedModel

class BertForDiacriticRestoration(BertPreTrainedModel):

    def __init__(self, config):
        super().__init__(config)
        self.bert = BertModel(config)
        ...

    def forward(
        self,
        input_ids=None,
        attention_mask=None,
        token_type_ids=None
    ):
        ...
Evaluation
In each section there's a plethora of options for you to choose from.
A bit of pragmatic advice I'll offer is to start simple & complicate things only if you want to improve further.
Keep a testing set to measure if the changes you're making result in improvements or degradation over your previous setup.
Crucially, I'd suggest that at least a small part of your testing set consists of texts you have transcribed yourself; the more you use, the better.
Primarily, this is data you annotated yourself, so you are more sure of its quality than of any other publicly available source.
Secondly, when you test on data coming from the target domain, you stand a better chance of evaluating your systems accurately on your target task, due to biases which might be present in other domains.
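For the measurement itself, word-level accuracy against a trusted reference is a simple starting metric; a minimal sketch:

```python
def word_accuracy(gold_tokens, predicted_tokens):
    # Fraction of words restored exactly as in the reference text
    assert len(gold_tokens) == len(predicted_tokens)
    correct = sum(g == p for g, p in zip(gold_tokens, predicted_tokens))
    return correct / len(gold_tokens)

word_accuracy(["pește", "peste"], ["pește", "pesțe"])  # → 0.5
```

Character-level accuracy (same idea over characters) is also worth tracking, since it is more forgiving towards models that get most of a word right.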
Related
I am trying to detect switches from one language to another within a sentence.
For instance, with the sentence "Je parle francais as well as English", I would like to return both "Fr" and "En".
So far, it seems that the tools available focus on the sentence as a whole.
I also tried working with langdetect's detect_langs, as it returns probabilities and is thus able to return two potential languages. However, I have found it to be quite inaccurate for this task.
This brought me to think about creating my own model with Keras.
However I am far from mastering that tool and have many questions about what can be done or not with it.
Which is why I was wondering whether I can train a model on data where elements have two labels. As in, can I feed it a sentence containing two languages, with the labels French and English? I would do so while also training it with sentences only in French or only in English.
Does this make any sense? Are there otherwise alternative ways to go at it?
On top of that, if anyone has suggestions as to how to conduct this language detection task, please share them, even if they do not involve Keras.
So, I have a task where I need to measure the similarity between two texts. These texts are short descriptions of products from a grocery store. They always include a name of a product (for example, milk), and they may include a producer and/or size, and maybe some other characteristics of a product.
I have a whole set of such texts, and then, when a new one arrives, I need to determine whether there are similar products in my database and measure how similar they are (on a scale from 0 to 100%).
The thing is: the texts may be in two different languages: Ukrainian and Russian. Also, if there is a foreign brand (like, Coca Cola), it will be written in English.
My initial idea on solving this task was to get multilingual word embeddings (where similar words in different languages are located nearby) and find the distance between those texts. However, I am not sure how efficient this will be, and if it is ok, what to start with.
Because each text I have is just a set of product characteristics, word embeddings based on context may not work (I'm not sure about this statement; it is just my assumption).
So far, I have tried to get familiar with the MUSE framework, but I encountered an issue with faiss installation.
Hence, my questions are:
Is my idea with word embeddings worth trying?
Is there maybe a better approach?
If the idea with word embeddings is okay, which ones should I use?
Note: I have Windows 10 (in case some libraries don't work on Windows), and I need the library to work with Ukrainian and Russian languages.
Thanks in advance for any help! Any advice would be highly appreciated!
You could try Milvus, which adopted Faiss to search similar vectors. It's easy to install with Docker on Windows.
Word embeddings are meaningful inside a language but aren't transferable to other languages. An observation behind this statement: if two words co-occur a lot within sentences, their embeddings can be near each other. Hence, as there is no one-to-one mapping between two general languages, you cannot compare word embeddings.
However, if two languages are similar enough for a one-to-one mapping of words, you may count on your idea.
In sum, without translation, your idea is not applicable to two general languages.
Does the data contain lots of numerical information (e.g. nutritional facts)? If yes, this could be used to compare the products to some extent. My advice is to think of it not as a linguistic problem, but as pattern matching, since these texts have presumably been produced using semi-automatic methods and translation memories. Therefore similar texts across languages may have similar form, and if so this should be used for comparison.
Multilingual text comparison is not a trivial task and I don't think there are any reasonably good out-of-box solutions for that. Yes, multilingual embeddings exist, but they have to be fine-tuned to work on specific downstream tasks.
Let's say that your task is about fine-grained entity recognition. I think you have well-defined entities: brand, size, etc...
So each of these features that define a product could be a vector, which means your products could be represented with a matrix.
You can potentially represent each feature with an embedding.
Or mixture of the embedding and one-hot vectors.
Here is how.
Define a list of product features:
product name, brand name, size, weight.
For each product feature, you need a text recognition model:
E.g. with brand recognition you find what part of the text is its brand name.
Use machine translation if possible to make a unified language representation for all sub-texts. E.g. Coca Cola:
ru Кока-Кола, en Coca Cola.
Use contextual embeddings (i.e. huggingface multilingual BERT or something better) to convert the prompted text into one vector.
In order to compare two products, compare their feature vectors: what is the average similarity between two feature array. You can also decide what is the weight on each feature.
Try other vectorization methods. Perhaps you don't want to mix brand knockoffs: "Coca Cola" is similar to "Cool Cola". So maybe embeddings aren't good for brand names, size, and weight, but are good enough for product names. If you want an exact match, you need a hash function over their text, i.e. over their multilingual prompt-engineered text.
You can also extend each feature vectors, with concatenations of several embeddings or one hot vector of their source language and things like that.
There is no definitive answer here; you need to experiment and test to see what the best solution is. You can create a test set and make benchmarks for your solutions.
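The feature-vector comparison described above (average similarity over features, with per-feature weights) could be sketched like this, assuming feature extraction and embedding are done elsewhere:

```python
import numpy as np

def product_similarity(feats_a, feats_b, weights):
    # Cosine similarity per aligned feature vector, then a weighted average
    sims = []
    for a, b in zip(feats_a, feats_b):
        sims.append(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return float(np.average(sims, weights=weights))

# Two features (e.g. product-name and brand embeddings), name weighted higher
a = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
b = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]
product_similarity(a, b, weights=[0.7, 0.3])  # → 0.7
```

Scaling the result to the 0-100% range asked for in the question is then just a multiplication.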
I'm trying to write a python program that will decide if a given post is about the topic of volunteering. My data-sets are small (only the posts, which are examined one by one), so approaches like LDA do not yield results.
My end goal is a simple True/False, a post is about the topic or not.
I'm trying this approach:
Using Google's word2vec model, I'm creating a "cluster" of words that are similar to the word: "volunteer".
CLUSTER = [x[0] for x in MODEL.most_similar_cosmul("volunteer", topn=120)]
Getting the posts and translating them to English, using Google translate.
Cleaning the translated posts using NLTK (removing stopwords, punctuation, and lemmatize the post)
Making a BOW out of the translated, clean post.
This stage is difficult for me. I want to calculate a "distance" / "similarity" / something that will help me get the True/False answer that I'm looking for, but I can't think of a good way to do that.
Thank you for your suggestions and help in advance.
You are attempting to intuitively improvise a set of steps that, in the end, will classify these posts into the two categories, "volunteering" and "not-volunteering".
You should look for online examples that do "text classification" and are similar to your task, work through them (with their original demo data) for understanding, then adapt them incrementally to work with your data instead.
At some point, word2vec might be a helpful contributor to your task - but I wouldn't start with it. Similarly, eliminating stop-words, performing lemmatization, etc might eventually be helpful, but need not be important up front.
You'll typically want to start by acquiring (by hand-labeling if necessary) a training set of text for which you know the "volunteering" or "not-volunteering" value (known labels).
Then, create some feature-vectors for the texts – a simple starting approach that offers a quick baseline for later improvements is a "bag of words" representation.
Then, feed those representations, with the known labels, to some existing classification algorithm. The popular scikit-learn package in Python offers many. That is: you don't yet need to worry about choosing ways to calculate a "distance" / "similarity" / something that will guide your own ad hoc classifier. Just feed the labeled data into one (or many) existing classifiers, and check how well they're doing. Many will be using various kinds of similarity/distance calculations internally - but that's handled automatically by choosing & configuring the algorithm.
Finally, when you have something working start-to-finish, no matter how modest in results, then try alternate ways of preprocessing text (stop-word removal, lemmatization, etc), featurizing text, and alternate classifiers/algorithm parameterizations - to compare results, and thus discover what works well given your specific data, goals, and practical constraints.
The scikit-learn "Working With Text Data" guide is worth reviewing & working-through, and their "Choosing the right estimator" map is useful for understanding the broad terrain of alternate techniques and major algorithms, and when different ones apply to your task.
Also, scikit-learn contributors/educators like Jake Vanderplas (github.com/jakevdp) and Olivier Grisel (github.com/ogrisel) have many online notebooks/tutorials/archived-video-presentations which step through all the basics, often including text-classification problems much like yours.
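The baseline described above (bag-of-words features fed to an off-the-shelf scikit-learn classifier) might be sketched like this, with toy hand-labeled posts standing in for a real training set:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data; a real task needs far more hand-labeled posts
texts = [
    "join us to volunteer at the animal shelter this weekend",
    "volunteers needed for the beach cleanup event",
    "the new phone model was released today",
    "stock markets closed higher after the announcement",
]
labels = [True, True, False, False]

# Bag-of-words featurization followed by a standard classifier
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(texts, labels)
prediction = clf.predict(["we are looking for volunteers"])
```

Swapping in other featurizers (e.g. TfidfVectorizer) or classifiers is a one-line change in the pipeline, which is what makes the compare-and-iterate loop cheap.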
I am using a word embeddings model (FastText via Gensim library) to expand the terms of a search.
So, basically if the user writes "operating system", my goal is to expand that term with very similar terms like "os", "windows", "ubuntu", "software" and so on.
The model works very well, but now the time has come to improve it with "external information"; by "external information" I mean OOV (out-of-vocabulary) terms OR terms that do not have good context.
Following the example I wrote above, when the user writes "operating system" I would like to expand the query with the "general" terms:
Terms built in the FastText model:
windows
ubuntu
software
AND
terms that represent organizations/companies, like "Microsoft" or "Apple", so the complete query will be:
term: operating system
query: operating system, os, software, windows, ios, Microsoft, Apple
My problem is that I DO NOT have companies inside the corpus OR, if present, I do not have much context to "link" Microsoft to "operating system".
For example, if I extract a piece of the corpus I can read "... i have started working at Microsoft in November 2000 with my friend John ...", so, as you can see, I cannot contextualize the word "Microsoft" because I do not have good context, indeed.
A small recap:
I have a corpus where the companies (terms) have poor context
I have a big database with companies and the description of what they do.
What i need to do:
I would like to include the companies in my FastText model and "manually" set their word context/cloud of related terms.
Ideas?
There is no easy way to do it. The FastText algorithm uses character-level information, so it can infer embeddings for unseen words. This is what the FastText paper says about representing the words:
However, this makes sense only in the case of words where you can infer what they mean from knowing the parts. E.g., if you had a reliable embedding for "walk", but not for "walking" and there were plenty of words ending with "ing", FastText would be able to infer the embedding. But this obviously cannot work with words like "Microsoft".
The best thing you can do is train your embeddings on data that contains the words you want the model to work with, of a genre as similar as possible. If your text is in English, it should not be too difficult.
These kinds of models need numerous, varied usage examples to place a token in a relatively good place, at meaningful distances/directions from other related tokens. If you don't have such examples, or your examples are few/poor, there's little way the algorithm can help.
If you somehow know, a priori, that 'microsoft' should appear in some particular vector coordinates, then sure, you could patch the model to include that word->vector mapping. (Though, such model classes often don't include convenient methods for such incremental additions, because it's expected words are trained in bulk from corpuses, not dictated individually.)
But if you don't have example text for some range of tokens, like company names, you probably don't have independent ideas of where those tokens should be, either.
Really, you need to find adequate training data. And then, assuming you want the vectors for these new terms to be in the "same space" as and comparable to your existing word-vectors, combine that with your prior data and train all the data together into one combined model. (And further, for an algorithm like FastText to synthesize reasonable guess-vectors for never-before-seen OOV words, it needs lots of examples of words which have overlapping meanings and overlapping character-n-gram fragments.)
Expanding your corpus to have better training data for, say, 100 target organization names might be as simple as scraping sentences/paragraphs including those names from available sources, like Wikipedia or the web.
By gathering dozens (or even better hundreds or thousands) of independent examples of the organization names in real language contexts, and because those contexts include many mutually-shared other words, or names of yet other related organizations, you'd be able to induce reasonable vectors for those terms, and related terms.
I want to pull abstracts out of a large corpus of scientific papers using a python script. The papers are all saved as strings in a large csv. I want to do something like this: extracting text between two headers. I can write a regex to find the 'Abstract' heading. However, finding the next section heading is proving difficult. Headers vary wildly from paper to paper. They can be ALL CAPS or Just Capitalized. They can be one word or a long phrase and span two lines. They are usually followed by one-two newlines. This is what I came up with:
abst = re.findall(r'(?:ABSTRACT\s*\n+|Abstract\s*\n+)(.*?)((?:[A-Z]+|(?:\n(?:[A-Z]+|(?:[A-Z][a-z]+\s*)+)\n+)',row[0],re.DOTALL)
Here is an example of an abstract:
'...\nAbstract\nFactorial Hidden Markov Models (FHMMs) are powerful models for
sequential\ndata but they do not scale well with long sequences. We
propose a scalable inference and learning algorithm for FHMMs that
draws on ideas from the stochastic\nvariational inference, neural
network and copula literatures. Unlike existing approaches, the
proposed algorithm requires no message passing procedure among\nlatent
variables and can be distributed to a network of computers to speed up
learning. Our experiments corroborate that the proposed algorithm does
not introduce\nfurther approximation bias compared to the proven
structured mean-field algorithm,\nand achieves better performance with
long sequences and large FHMMs.\n\n1\n\nIntroduction\n\n...'
So I'm trying to find 'Abstract' and 'Introduction' and pull out the text that is between them. However it could be 'ABSTRACT' and 'INTRODUCTION', or ABSTRACT and 'A SINGLE LAYER NETWORK AND THE MEAN FIELD\nAPPROXIMATION\n'
Help?
Recognizing the next section is a bit vague - perhaps we can rely on the Abstract section ending with two newlines?
ABSTRACT\n(.*)\n\n
Or maybe we'll just assume that the next section title will start with an uppercase letter and be followed by any number of word characters. (That's rather vague, too, and assumes there'll be no \n\n within the Abstract.)
ABSTRACT\n(.*)\n\n\U[\w\s]*\n\n
Maybe that stimulates further fiddling on your end... Feel free to post examples where this did not match - maybe we can stepwise refine it.
N.B.: as Wiktor pointed out, I could not use inline case-insensitive modifiers. So the whole rx should be used with switches for case-insensitive matching.
Update 1: the challenge here is really how to identify that a new section has begun... and not to confuse that with paragraph breaks within the Abstract. Perhaps that can also be dealt with by changing the rather tolerant [\w\s]* to [\w\s]{1,100}, which would only recognize text in a new paragraph as the title of the "abstract successor" if it had between 2 and 100 characters (note: 2 characters, although the limit is set to 1, because of the \U (uppercase character)).
ABSTRACT\n(.*)\n\n\U[\w\s]{1,100}\n\n
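In Python, the switch-based (rather than inline) case-insensitive matching for the simplest pattern above could look like this; the sample text is abbreviated from the question:

```python
import re

text = ("...\nAbstract\nWe propose a scalable inference algorithm "
        "for FHMMs.\n\n1\n\nIntroduction\n\n...")

# Non-greedy capture from the Abstract heading up to the first blank line,
# with case-insensitivity supplied via flags instead of inline modifiers
m = re.search(r"abstract\n(.*?)\n\n", text, re.DOTALL | re.IGNORECASE)
abstract = m.group(1) if m else None
```

Note this inherits the caveat above: a \n\n inside the abstract will cut it short, which is where the stricter next-heading patterns come in.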