Neural Machine Translation (NMT)

Deep learning has driven Neural Machine Translation to levels few had imagined just a few years ago. It is now improving so rapidly that it is set to play a major role in every future translation workflow. But errors remain, and we believe the best way to catch them is to have NMT output enhanced by Subject Matter Experts with a true understanding of the topic at hand.

How NMT works

NMT (Neural Machine Translation) is built on deep learning, a type of machine learning based on artificial neural networks in which multiple layers of processing extract progressively higher-level features from the data. As the name suggests, these processes loosely mimic the biological neural networks found in the brains of living species. The operation of NMT systems is therefore based not so much on task-specific programming as on problem-solving: finding connections and patterns in the available examples.

NMT systems discover their own patterns of operation; they do not need precise instructions in this regard. An NMT model is built around a two-part recurrent neural network: the Encoder, which converts the source content into an internal numerical representation, and the Decoder, which uses that representation to predict, word by word, the text in the target language.
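The encoder-decoder idea can be sketched in a few lines of code. The snippet below is a purely structural illustration with random, untrained weights and a made-up two-word vocabulary; real NMT systems use trained networks with attention and subword vocabularies, so the "translation" produced here is arbitrary.

```python
import math
import random

random.seed(0)
DIM = 8  # size of the internal (hidden) representation

# Toy vocabularies (hypothetical; real systems use large subword vocabularies).
SRC_VOCAB = {"guten": 0, "morgen": 1}
TGT_VOCAB = {"<bos>": 0, "<eos>": 1, "good": 2, "morning": 3}
TGT_WORDS = {i: w for w, i in TGT_VOCAB.items()}

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

src_embed = rand_matrix(len(SRC_VOCAB), DIM)   # source word vectors
tgt_embed = rand_matrix(len(TGT_VOCAB), DIM)   # target word vectors
W_enc = rand_matrix(DIM, DIM)                  # encoder recurrence weights
W_dec = rand_matrix(DIM, DIM)                  # decoder recurrence weights
W_out = rand_matrix(DIM, len(TGT_VOCAB))       # projection onto the vocabulary

def step(h, x, W):
    # One recurrent step: h' = tanh(W @ h + x)
    return [math.tanh(sum(W[r][c] * h[c] for c in range(DIM)) + x[r])
            for r in range(DIM)]

def encode(src_ids):
    """Fold the source sentence into a single context vector."""
    h = [0.0] * DIM
    for i in src_ids:
        h = step(h, src_embed[i], W_enc)
    return h

def decode(h, max_len=5):
    """Greedily emit target words, one step at a time."""
    out, tok = [], TGT_VOCAB["<bos>"]
    for _ in range(max_len):
        h = step(h, tgt_embed[tok], W_dec)
        logits = [sum(h[c] * W_out[c][k] for c in range(DIM))
                  for k in range(len(TGT_VOCAB))]
        tok = max(range(len(logits)), key=logits.__getitem__)
        if tok == TGT_VOCAB["<eos>"]:
            break
        out.append(TGT_WORDS[tok])
    return out

translation = decode(encode([SRC_VOCAB["guten"], SRC_VOCAB["morgen"]]))
print(translation)  # untrained weights, so the output words are arbitrary
```

The key point is the division of labour: the encoder only reads, the decoder only writes, and the two communicate through the learned internal representation.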

Exponential improvement in quality

First available in the public domain around 2015, NMT has seen rapid growth in usage since 2018. Such are the resources thrown at improving it by the likes of Amazon, Google, and Microsoft that achieving human parity should no longer be considered science fiction. In some blind tests, neural machine translation has even been ranked above human translators, who tend to make more mistakes than the machines. NMT is also faster, cheaper, and more consistent than humans.

In short, in the 2020s there is no translation workflow in the world that can credibly say no to using NMT in some shape or form.

Exfluency™ and NMT

But machines do make mistakes and they need humans to spot them. To be more precise, Subject Matter Experts who can spot the errors in seemingly fluent texts. It is this combination of NMT + SME that Exfluency™ is leveraging with its AI and blockchain technology.

Source data is first run through the Exfluency Asset Store where it is checked for anything that can be recycled from the gated community in question.

The remaining text is scrambled and sent to four NMT engines, from which the best is selected using Levenshtein distance technology. The resulting version allows the Subject Matter Expert to enhance and complete the texts more quickly and accurately than in other translation workflows.
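One plausible way to use Levenshtein distance for this selection, assuming the goal is to pick the candidate the engines most agree on, is a consensus rule: choose the output with the smallest total edit distance to the other outputs. The article does not specify Exfluency's exact rule, so the following is a sketch of that assumption.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def pick_consensus(candidates):
    """Return the candidate closest (in total edit distance) to all the rest."""
    return min(candidates,
               key=lambda c: sum(levenshtein(c, o) for o in candidates))

engines = [
    "the patient should take one tablet daily",
    "the patient should take a tablet daily",
    "the patient should take one tablet every day",
    "patients must take single tablet per days",
]
print(pick_consensus(engines))
```

The outlier from the fourth engine accumulates a large total distance and is never chosen, while near-identical outputs reinforce each other.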

In a pilot project that generated around 600,000 words of translation in languages as disparate as Chinese, Japanese, French, Polish, and Spanish, a third of the new data generated by the four NMT engines we used achieved Human Parity quality. In other words, neither the Enhancer nor the Trust Miner felt there was a need to change the data provided by the engines.

By marrying blockchain with language technology and coupling SMEs to NMT output, Exfluency meets the challenges outlined above head-on. Multilingual data is created faster and at a lower price, while the SMEs themselves are paid more. The Trust Chain addresses the quality issue by automatically matching our clients with the most qualified and highest-performing SMEs.
