End of an Era in Machine Translation

Terms such as ‘state-of-the-art’, used to describe the groundbreaking quality of a discovery or a solution, are heard so often in relation to machine translation that they have almost lost their evocative power. Despite this slight desensitisation, innovation in the machine translation field really does happen at incredible speed. Up until 2016, statistical machine translation was at the forefront of MT frameworks, only to be superseded by neural machine translation in recent years.

In one of our September posts, we talked about statistical machine translation (SMT) in detail, going over its main characteristics. This technology has contributed considerably towards improvements in MT quality and to making machine translation more widely accessible.

Although some companies still rely on statistical MT or hybrid solutions combining statistical and neural MT as part of their localisation strategy, there are very clear signs that SMT is becoming a thing of the past.

To stress the importance of this change, let’s have a quick look at the timeline of developments in machine translation. The technology first emerged in the 1950s, when MT systems relied on linguistic rules that were manually specified by researchers and ‘taught’ to the system.


The history of machine translation dates back to the mid-20th century. In 1954, researchers from IBM and Georgetown University in the United States demonstrated the automatic translation of more than sixty Russian sentences into English.

Rule-based machine translation was later briefly superseded by example-based machine translation, in which translations were produced mainly by analogy with previously translated sentences.

The early 1990s brought yet another framework – statistical machine translation, designed to analyse parallel texts in two languages and learn the patterns governing how one maps onto the other. SMT reigned for almost 25 years – a period long enough to make everyone think it was all that was ever going to happen in machine translation.
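The core idea can be sketched in a few lines of code: given pairs of sentences in two languages, estimate how likely each target word is as a translation of each source word, purely from co-occurrence patterns. The sketch below is a toy word-alignment trainer in the spirit of IBM Model 1, run over a hypothetical three-pair English–French corpus; real SMT systems layer phrase tables and language models on top of this, so this is an illustration, not a production recipe.

```python
from collections import defaultdict

# Hypothetical toy parallel corpus standing in for the bilingual
# texts an SMT system learns from.
corpus = [
    ("the house", "la maison"),
    ("the book", "le livre"),
    ("a house", "une maison"),
]

def train_word_translation(pairs, iterations=10):
    """Estimate word translation probabilities t(f|e) with EM,
    as in IBM Model 1."""
    tokenised = [(e.split(), f.split()) for e, f in pairs]
    f_vocab = {f for _, f_words in tokenised for f in f_words}
    # Start from a uniform distribution over target-language words.
    t = defaultdict(lambda: 1.0 / len(f_vocab))
    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        # E-step: distribute each target word's probability mass
        # over the source words it co-occurs with.
        for e_words, f_words in tokenised:
            for f in f_words:
                norm = sum(t[(f, e)] for e in e_words)
                for e in e_words:
                    frac = t[(f, e)] / norm
                    count[(f, e)] += frac
                    total[e] += frac
        # M-step: renormalise the fractional counts.
        for (f, e), c in count.items():
            t[(f, e)] = c / total[e]
    return t

t = train_word_translation(corpus)
# 'maison' co-occurs with 'house' in every sentence containing it,
# so it emerges as the likeliest translation of 'house'.
best = max(["la", "le", "une", "maison", "livre"],
           key=lambda f: t[(f, "house")])
```

The ‘governing patterns’ the article mentions are exactly these learned probabilities: no linguist writes a rule saying that ‘maison’ translates ‘house’; the statistics of the parallel corpus reveal it.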

Then, neural machine translation came into the picture, slowly deposing SMT in favour of its more natural and stylistically appealing translation output. Strategic moves from some of the tech giants confirm that statistical machine translation is now outdated.

For instance, until now Microsoft has offered the opportunity to create, train and customise your own statistical machine translation models through its Microsoft Translator Hub. This month, however, Microsoft announced that it will retire this service by the end of April 2019 in favour of a newer neural framework, Custom Translator.

This means that Microsoft will no longer offer SMT and will focus solely on neural machine translation. One of the contributing factors to the displacement of SMT is the ease with which neural machine translation models can be trained: creating them requires far less supervision and therefore less overhead.

This goes to show that there is no such thing as static technology; once a solution is conceived and developed, it needs to be continuously reviewed and updated, otherwise it will eventually become outdated and irrelevant.

Even the current state-of-the-art neural machine translation is bound to have a limited lifespan. Although it is hard to tell what type of technology will replace it in the future, it is clear that no framework is completely future-proof.

As TAUS puts it, ‘Neural MT systems are data-hungry’, meaning they need a lot of bilingual training data, typically translation memories, to become robust and produce good quality translation results.


The future of machine translation and artificial intelligence in general will be largely defined by how they can be leveraged to enhance human experience.

Perhaps the future will bring an answer to the resource-heaviness of neural MT systems. Maybe in a few years’ time, they will be able to generate training data on their own, without humans having to collect it manually, making such systems even easier to create. This, in turn, could lead to machine translation becoming even more widely accessible and penetrating even more aspects of our daily lives.

The future of machine translation is still unknown, but it is bound to be exciting. With the capabilities of natural language processing still not fully explored, in a few years’ time we might see machine translation applied in ways that are completely unfamiliar to us today.
