Just a few weeks ago, Meta presented an artificial intelligence model capable of translating between 200 languages. This technology, known as ‘No Language Left Behind’ (NLLB-200), is part of a project developed by Mark Zuckerberg’s company to strengthen its bet on the metaverse.
Almost all the technology giants, with the exception of Apple and Google, are undertaking projects to position themselves in this booming new virtual universe. But other, more modest companies, some of them local, began research in this field long ago. For some years now, the company Incyta and the GRIAL research group of the Arts and Humanities Department of the Universitat Oberta de Catalunya (UOC) have been collaborating on a series of research and technology transfer projects related to neural machine translation. The aim of this research is to develop neural machine translation systems that can be integrated into Incyta’s workflow.
This Barcelona-based language services company has been using machine translation systems for years, combined with human post-editing. This machine translation plus post-editing workflow allows it to offer a more efficient and economical translation service, while maintaining the same level of quality, to its wide range of clients: the written press, publishing houses, public administration, universities, etc.
Until a few years ago, machine translation systems offered sufficient quality only for similar language pairs, such as Spanish-Catalan or Spanish-French. For slightly more distant pairs, such as Spanish-English, the quality of machine translation was not good enough: it was more efficient to translate the document manually from scratch.
The emergence of today’s neural machine translation systems has made it possible to obtain remarkable quality even for very distant language pairs, such as Chinese-Spanish. These systems have brought about a true revolution in the world of professional translation, since they open the door to applying the machine translation plus post-editing workflow to most translation jobs.
Rule-based machine translation and corpus-based machine translation
But to understand what this technological revolution is all about, it is worth recalling the two main paradigms of machine translation: rule-based machine translation and corpus-based machine translation. In the first, the rule-based paradigm, machine translation systems are developed by computer engineers and linguists who write programs, dictionaries and rules to translate a sentence in a source language into a sentence in the target language.
The development of these systems usually involves many months of work by teams of several people. Among rule-based systems, syntactic transfer systems stand out. In these systems, the sentence in the source language is syntactically parsed to automatically obtain a parse tree. This parse tree, which can be deep or shallow, is transferred to an equivalent tree in the target language using a set of rules.
Once this syntactic tree in the target language is obtained, the words are translated using bilingual dictionaries and inflected to produce a correct sentence in the target language. This paradigm has worked very well for closely related languages with similar syntactic structures. Excellent systems built with this methodology are still in use for language pairs such as Spanish-Catalan.
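As a very rough illustration of this idea (a toy sketch, not the code of any of the systems mentioned here), the rule-based approach can be reduced to two ingredients: a bilingual dictionary for lexical transfer and a handful of rules for structural transfer, such as reordering a Spanish noun-adjective pair into English adjective-noun order.

```python
# Toy sketch of rule-based transfer: a hypothetical bilingual dictionary
# plus a single reordering rule. Real systems rely on full parsers, large
# dictionaries and hundreds of rules; this only illustrates the principle.

DICTIONARY = {"el": "the", "gato": "cat", "negro": "black", "duerme": "sleeps"}
NOUNS = {"gato"}
ADJECTIVES = {"negro"}

def transfer_rule(words):
    """Swap noun + adjective pairs (Spanish order) into adjective + noun (English order)."""
    out = list(words)
    for i in range(len(out) - 1):
        if out[i] in NOUNS and out[i + 1] in ADJECTIVES:
            out[i], out[i + 1] = out[i + 1], out[i]
    return out

def translate(sentence):
    words = sentence.lower().split()
    reordered = transfer_rule(words)                          # structural transfer
    return " ".join(DICTIONARY.get(w, w) for w in reordered)  # lexical transfer

print(translate("El gato negro duerme"))  # -> "the black cat sleeps"
```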
In the second paradigm, that of corpus-based systems, the systems are not developed but trained. That is, they learn to translate from texts in the source and target languages. These systems are normally trained on parallel corpora, i.e. sets of segments or sentences in one language paired with their translation equivalents in another language.
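To give an idea of what this training material looks like, the following invented example shows a tiny parallel corpus as a list of aligned Spanish-English sentence pairs; the corpora actually used to train these systems contain millions of such pairs.

```python
# A minimal, invented illustration of a parallel corpus: aligned pairs of
# (source sentence, target sentence). Real corpora contain millions of pairs
# extracted from previously translated documents.
parallel_corpus = [
    ("El gato duerme.", "The cat sleeps."),
    ("La casa es grande.", "The house is big."),
    ("Mañana iremos a la playa.", "Tomorrow we will go to the beach."),
]

# Corpus-based systems, whether statistical or neural, learn to translate
# from examples like these rather than from hand-written rules.
for source, target in parallel_corpus:
    print(f"{source}  ->  {target}")
```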
Chronology of machine translation
The first corpus-based systems were statistical machine translation systems, which burst onto the market around 2005. These systems are based on the calculation of two probabilities: the probability that a given sentence in the target language is the translation of the sentence in the source language, and the probability that a given sentence in the target language is a correct sentence in that language. The first probability is estimated from statistics obtained from the parallel corpus, while the second is estimated from statistics obtained from a monolingual corpus of the target language. This monolingual corpus can be obtained from the target-language side of the parallel corpus.
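This description corresponds to the classic noisy-channel formulation used in statistical machine translation (stated here in textbook form, not as the formula of any particular system): given a source sentence s, the system searches for the target sentence t that maximizes the product of a translation model, estimated from the parallel corpus, and a language model, estimated from the monolingual corpus.

```latex
\hat{t} \;=\; \arg\max_{t} P(t \mid s)
        \;=\; \arg\max_{t} \underbrace{P(s \mid t)}_{\text{translation model}} \cdot \underbrace{P(t)}_{\text{language model}}
```

The Bayes denominator P(s) is the same for every candidate translation of a given source sentence, so it can be dropped from the search.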