2016-06-10

A few weeks back, we shared the news that Facebook, the social giant, was replacing Bing Translation with its own internally developed automated machine translation (MT) system. We even ran a brief test to see how the new in-house system compared to Bing Translate.

Now come the deeper, more striking details. It's not just what Facebook is doing; it's how it aims to achieve its results. In a recent Slator article, Facebook's Alan Packer was quoted as saying, “We believe, along with most of the research and academic community…that the current approach of statistical, phrase-based MT has kind of reached the end of its natural life.”

Over the past two years, Facebook has applied advanced AI research to Machine Translation, moving toward an internally developed neural network-based Machine Translation solution. The present implementation, which replaced Bing, appears to be a stop-gap, with a new neural network system to be deployed by the end of 2016.

Facebook is not alone in this innovation. A related Slator article from 24 May described how Google filed for a patent in 2015 (published in April 2016) covering neural Machine Translation. In that article Daniel Marcu, formerly of SDL and now of FairTradeTranslation.com, was quoted as saying most researchers are moving “from advancing the state of the art in a traditional, statistical MT framework to a neural MT framework.”

Artificial neural networks (ANN) operate on threshold logic (a myriad of weighted factors affecting a decision must sum past a certain quantitative value before a unit activates, or fires) and produce emergent results (patterns or characteristics of the final result are not necessarily reflected in any of the individual units that produced it). They are far more complex and adaptive than statically trained models. Consequently, they also require far more processing power and more sophisticated architectures. For instance, Facebook and Google have been investing heavily in NVIDIA’s CUDA GPU programming environment for AI.
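To make the threshold idea concrete, here is a minimal sketch in Python of a single artificial neuron: weighted inputs are summed, and the unit fires only when that sum crosses a threshold. The weights and threshold values below are illustrative choices of our own, not anything from Facebook's or Google's systems.

```python
def threshold_unit(inputs, weights, threshold):
    """Return 1 (fire) if the weighted sum of inputs reaches the threshold, else 0."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# Example: with these weights and threshold, the unit fires only when
# both inputs are present (a logical AND). The behavior emerges from
# the combination of weights and threshold, not from any single input.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", threshold_unit([a, b], [0.6, 0.6], 1.0))
```

A real neural MT system stacks millions of such units in layers and learns the weights from data, which is why the emergent behavior of the whole network is so hard to read off from any one unit, and why training demands so much GPU horsepower.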

So what does this all mean for you and your business? Or, for the translation industry in general?

First, it means you can expect even greater adaptability and context-specificity in machine translation. As Facebook’s Packer notes, their social network is “extremely informal. It’s full of slang, it’s very regional,” full of social shorthand and idiom. Statistical MT, by contrast, is often trained on the rote corpora of government and business, whose outputs, Packer says, “don’t sound like they came from a human. They’re not natural, they don’t flow well.”

This new contextual awareness, powered by deep learning, is indeed the wave of the future for MT. It is more than a translation memory: it means understanding the domain in which a conversation takes place and the linguistic norms of that environment, and having a system that can keep pace with the lingo of the day and the mood of the moment.

Second, for all their advances, do not expect magic or miracles. Neural networks still can’t write a comprehensible movie script, which is to say there is still a role for human writers, editors, reviewers, translators, and language experts in this ever-evolving world.

At e2f, we believe advances such as neural network-based MT will continue to bring together the traditional disciplines of computer science and linguistics. Computer-aided human translation, human post-editing of machine translation, and other current practices are just the beginning. Hybrid, synthesizing models will bring human translators and computer systems ever closer, pushing toward the highest levels of quality while scaling to the massive quantities of data of the modern Internet and social web.
