Saving Time and Money with the Latest Translation Technologies


Insight from the Creators of memoQ

István Lengyel is COO at Kilgray Translation Technologies, who develop the successful translation memory tool, memoQ. István has been with Kilgray since 2005, and has worked in localization since 1998. He lives in Gyula, Hungary, and travels the world meeting clients, working as an IT consultant, and attending and running industry events.

Interview by Matt Train

You have been with Kilgray since the beginning and seen it grow into a real competitor in the translation memory tools market… it must have been quite a journey, with some ups and downs?

Yes, it was, or rather it still is, but it taught me one thing: to value hard work and responsibility over any amount of genius. Yes, we are a talented bunch, and yes, we always need a very clear vision, but like everyone else we have made plenty of mistakes. Instead of paying for those mistakes with losses, we chose a very modest approach: stay approachable, and whenever we slip up, management jumps in and helps out. Although there were some important deals in the history of the company, there were very few real turning points; most of the growth we achieved was organic, though fast.


What advice would you give to organisations looking for a translation memory management system – what should they be looking out for?

Translation memory management? Huh, that's a hard nut to crack… Translation management would be so much easier. The problem with translation memory management is that one mistake can mess up your entire system. This is why we created our TM Repository, which is considerably less popular than memoQ: it is only for translation memory management, and it protects quality rather than saving time or money in the first place. You don't even understand why it is useful… until the moment you make a mistake. Then you would pay a hefty amount for that proverbial Undo button.

I guess the most important advice is to know what you are doing, or at least to have a vision of how you want to handle translations. Know who you are working with and what the preferences of those people or organisations are. Know what you want to achieve by introducing such a system and who you want to rely on: whether you want to insource and pay at least one person part-time, or outsource the whole translation memory maintenance, and if so, to whom (my advice: outsource it to a company that does not also provide your translations). Know where the content is coming from and where it is going; that is one of the areas where we see the most question marks at organisations. Technology matters a lot, but unless you understand your organisational needs, you will probably fall into the trap of a translation technology vendor or the lock-in of a translation provider. Translation technology is not rocket science at all, and most people are willing to help, so let them; and if you don't buy from them, offer them some information, some learning, in exchange.


You’re about to release memoQ version 6.0. What features are you most excited about in this release?

My favourite feature is the subvendor group, and the reason I like it is that it is a completely non-technical feature. When we started this collaborative translation story, where the focus is on the server rather than the desktop application and everything happens simultaneously, we had to create a model for user management on the server. Everybody makes the same mistake at first: there is one admin who creates all the users, and the users are generally assumed to be translators. In reality this is not the case, because there are companies embedded in the process: multilingual vendors, single-language vendors, and so on. The justification is simple: nobody speaks 50 languages, and nobody can rate translators in many languages. I speak reasonably fluent German and Spanish besides my mother tongue, Hungarian, but whenever I try to rate an English-German translator, I fail. Maybe because I really don't know the language pair that well, maybe because I trust that translators are better than I am in a pair they work in professionally and I have never touched in translation, and I am not a native speaker of either. So the smaller translation companies specialising in a few languages will remain, and therefore the old server story, with one server and many equal users, has to go.

A few years ago people used to exchange bilingual files or packages by email. It was slow, but it could pass through ten people without any complication. In today's server-based world we still need to protect the interests of everyone involved, and the only way to protect smaller translation companies, who have spent a lot of time finding the best-value translators out there, is to prevent the customer from seeing the names of their translators. The subvendor group does exactly this: you, as the server manager, create a group for a subvendor company and delegate its project manager as a user into that group. The subvendor's project manager can then create and assign freelance translator and reviewer users; she sees them individually, while the server manager sees the group only as a single unit. One limitation remains: this cannot cascade down several levels, but I think it is already a big step forward.
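To make the visibility rule concrete, here is a minimal, hypothetical sketch of the idea in Python. None of the class or method names come from memoQ's actual API; they simply model a subvendor group whose project manager sees individual translators while the server manager sees the group only as one unit.

```python
# Hypothetical illustration of the "subvendor group" visibility idea.
# Names and structures are invented for this sketch, not memoQ's API.

from dataclasses import dataclass, field


@dataclass
class User:
    name: str
    role: str  # e.g. "server_manager", "project_manager", "translator"


@dataclass
class SubvendorGroup:
    name: str
    project_manager: User
    members: list[User] = field(default_factory=list)

    def add_member(self, user: User) -> None:
        """The subvendor's PM adds freelance translators and reviewers."""
        self.members.append(user)

    def visible_assignees(self, viewer: User) -> list[str]:
        """The PM sees individual members; anyone else sees the group as one unit."""
        if viewer is self.project_manager:
            return [m.name for m in self.members]
        return [self.name]


if __name__ == "__main__":
    pm = User("Eva", "project_manager")
    group = SubvendorGroup("Acme Translations", pm)
    group.add_member(User("Translator A", "translator"))
    group.add_member(User("Reviewer B", "reviewer"))

    server_manager = User("Client Admin", "server_manager")
    print(group.visible_assignees(pm))              # ['Translator A', 'Reviewer B']
    print(group.visible_assignees(server_manager))  # ['Acme Translations']
```

The design choice being illustrated is simply that identity is resolved per viewer: the same assignment record answers differently depending on who asks, which is what shields the subvendor's translator roster from the end customer.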

But this release is about improving performance, stability and scalability in the first place. It's the details that count here: perfecting features that we added back when we still had to release very quickly to catch up with the competition. It is our biggest release so far, with over a million lines of code changed, but what I actually hope is that most users will not even notice any change in the core functionality.


Translation technologies seem to be developing fast, and converging. What are your predictions for translation technology over the next 5-10 years?

Predictions? I can only offer wild guesses… Translation technology moves together with general IT trends: we are now getting competition from cloud-based solutions, and the cloud did not even exist as a recognised or widespread concept in IT when we started the business (and that wasn't THAT long ago). There is also a lot of room for simplification, but whether that will really happen, I don't know. A lot of legacy is still there, and some people really don't want to change their processes. Although the web has improved an awful lot recently (three years ago I dreaded the thought that we would have to start working on a browser-based environment, and then we released an editor that I actually like), I don't think web-only systems will become widespread. Technically, web-based systems still break very easily, and the fact that there are more mainstream browsers than before (mobile phones, tablets, Chrome) does not make web-based development any easier. However, this might change in the future. I think translation technology will ultimately become more open, with more interchange possibilities between the tools, but that still does not mean the barriers to entry will get lower. Too many people think building a simple translation tool is very easy (we made the same mistake ourselves), but a translation tool depends on so many other tools that just keeping the whole thing up to date is already a challenging task.

