Google AI Invents its Own Language to Translate Between Languages



Late in December 2016, when the news world’s attention was focused on terrorist attacks, a seismic change in artificial intelligence passed by almost unnoticed.

Google Translate, the machine translation program, took a great leap forward in the quality of the translations it produced. This sudden improvement seems to have been the result of a breakthrough in machine learning and artificial intelligence (AI) – one that Google's AI arrived at on its own.

Google’s AI had previously used phrase-based translation, a slightly crude way to translate between languages the system already recognises.

This approach maps similar words and phrases without seeking to understand the linguistic structures they are based on. This meant the context was effectively ignored.
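To see why ignoring context matters, here is a toy sketch of phrase-based substitution – purely illustrative, not Google's actual system. The phrase table entries are invented; the point is that each phrase is swapped for a fixed equivalent with no awareness of the words around it.

```python
# Toy phrase-based "translation": look up each known phrase in a fixed
# table and substitute it, with no understanding of context.
PHRASE_TABLE = {  # hypothetical English -> French entries
    "the bank": "la banque",          # always the financial sense,
    "of the river": "de la rivière",  # even when context suggests otherwise
    "good morning": "bonjour",
}

def phrase_based_translate(sentence: str) -> str:
    """Greedily replace known phrases; unknown words pass through untouched."""
    result = sentence.lower()
    for source, target in PHRASE_TABLE.items():
        result = result.replace(source, target)
    return result

print(phrase_based_translate("Good morning"))           # -> "bonjour"
print(phrase_based_translate("The bank of the river"))  # -> "la banque de la rivière"
```

Notice that "the bank" becomes "la banque" (the financial institution) even next to "of the river" – the lookup has no way to use the surrounding words to pick the right sense.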

Phrase-based translation

The translated text resulting from phrase-based machine translation is often grammatically inaccurate and needs to be reviewed by a human fluent in the target language before it reads naturally.

Another disadvantage is that this phrase-based approach to translating isn’t able to make informed guesses about any vocabulary it doesn’t recognise. It can’t learn from new input in the same way that a human translator could accommodate new findings or language insights, which limits the usefulness of this style of machine translation.

Back in September 2016, Google added new AI technology that aimed to overcome these drawbacks and enable machine learning to take place. It's called the Google Neural Machine Translation system (GNMT) and, in essence, it means that the system can learn from what's inputted into it.

Google’s translation AI suddenly learned how to make educated guesses to cover missing information, in the same way a human translator might problem-solve and apply linguistic creativity to improve their translation. The update meant that the AI could infer the content, tone and meaning of phrases based on the context of other words and phrases in the text it was translating.

Google’s new language

What happened next was a surprise for many people. The AI behind Google Translate decided the best approach to bridge these gaps in its knowledge was effectively to invent its own language to help it translate between unfamiliar languages. What’s astonishing is that it came up with this solution on its own, without being instructed to do so via programming.

Before the machine-learning update was made, the system was unable to do much more than substitute words and phrases in one language for those it was taught had the same meaning in another. Effectively, it flicked through a translation dictionary and rendered whatever was in it to the user.

But under the new approach, the system managed to make connections between words and phrases without being explicitly taught to do so. It started to better understand language, rather than just look it up in a reference database.

Google’s AI managed to achieve a deeper understanding of meaning by looking at the commonality of meaning between languages.
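One way to picture this "interlingua" idea is as a shared vector space in which sentences from different languages land close together when they mean the same thing. The sketch below is a minimal illustration with hand-invented vectors – real systems learn these representations from data – but it shows how meaning, rather than surface wording or language, can determine closeness.

```python
# Minimal sketch of a shared "meaning space": sentences from different
# languages map to vectors, and semantic similarity is measured by
# cosine similarity. The vectors are invented for illustration only.
import math

EMBEDDINGS = {  # hypothetical 3-dimensional "meaning vectors"
    ("en", "the cat sleeps"):    [0.90, 0.10, 0.00],
    ("fr", "le chat dort"):      [0.88, 0.12, 0.02],
    ("ja", "neko ga nemuru"):    [0.87, 0.14, 0.01],
    ("en", "stock prices rose"): [0.05, 0.20, 0.95],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# The English and French sentences about the cat sit close together,
# while the unrelated English sentence sits far away -- closeness is
# driven by meaning, not by which language the sentence is written in.
same_meaning = cosine(EMBEDDINGS[("en", "the cat sleeps")],
                      EMBEDDINGS[("fr", "le chat dort")])
different_meaning = cosine(EMBEDDINGS[("en", "the cat sleeps")],
                           EMBEDDINGS[("en", "stock prices rose")])
assert same_meaning > different_meaning
```

It is this kind of shared representation that lets a system bridge language pairs it was never directly trained on: if two languages both map into the same space, translating between them becomes possible even without direct examples.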

It’s not only a great leap forward for AI as a technology, but it also has implications for linguists, who have been talking about the concept of a universal language (or ‘interlingua’) for many years.

The implications of this change

Those few people who did notice the spontaneous and sudden improvement were astonished. There’s considerable excitement about this development from fields of study as varied as psychology, linguistics and computer intelligence.

The development indicates that machine learning is already taking place and we can perhaps assume that further innovations will continue to emerge as the system improves itself further.

The world is now standing at the cusp of a new era of artificial intelligence, and there’s an AI arms race as companies such as Google, Baidu, Facebook and Microsoft compete to innovate in this area.

Soon we’ll be incorporating machine intelligence into our everyday lives, communicating with machines on a regular basis.

We’re getting ready for driverless cars, and surgeons are experimenting with robotic technology for conducting medical procedures. AI is coming into increasing use in manufacturing and in dangerous industries such as mining and welding.

In Japan, the first ‘companion’ robots that went on sale sold out almost instantaneously, and there’s talk of using this kind of technology to enable older people to stay independent in their own homes for longer.

Already the trend has been incorporated into the political landscape as society starts to think about the impact AI will have on jobs. As a translation company, we’re watching the developments in our area with interest.

Whilst we don’t feel that machine translation can yet compete with human translators in terms of quality of output, developments in the area of language are likely to have an impact on our field.

We’re particularly curious to see how this concept of a universal language influences linguistics as an academic field. It seems remarkable to think that machines could really teach us something about how human language works.

We also think that this technology could be really helpful when it comes to language learning. The better we understand language and how it works, the more effectively we can teach it.

Possibly these new developments could help unlock a new era of translation efficiency that will help us improve what we do.

The speed at which machine learning dazzled us all – from a technology update in September to a machine learning innovation only three months later – shows how quickly things are now moving in the field of AI.

From a business perspective, things are likely to change quickly as AI changes the rules of many industries. The challenge to all businesses is how to respond effectively to the radical innovations, which may emerge faster than human organisations are prepared for.
