As Microsoft, Xiaomi, Skype, YouTube, Google Meet, Zoom and several other companies and platforms introduce live translation features for everyone’s convenience, we wonder how this affects the business of real translators and interpreters.
What Is Live Translation?
First off, let’s see what live translation is. As you may already know, it is based on an automatic speech-to-text solution, the output of which is machine translated, producing instant, almost real-time results.
These features are indeed a great help for someone who doesn’t speak the target language at all and just wants to interact with people who don’t share a common language. On the other hand, experience shows that speech-to-text quality, and consequently live translation quality, is not always impeccable (remember the blunder with Justin Trudeau’s speech, for one).
Machine Translation is not perfect
Another factor affecting the quality of live translation is the quality of machine translation itself. Some MT engines do better than others, but none of them is perfect (just yet). Therefore, live translation features will only match professional simultaneous or consecutive interpretation services provided by real interpreters if both speech recognition and machine translation improve enough to produce comparable results.
Until then, the services provided by live translation features and a real interpreter/translator remain different, and different services attract different target audiences.
As live translation features gain momentum and become increasingly common, more and more people will have the opportunity to familiarize themselves with them and to weigh their advantages and drawbacks. Sooner or later, the audience will also learn to what extent they can rely on these features and when to choose professional services instead.
When, after making an informed decision, people still go for live translation features, it is because their purposes don’t justify the extra costs of real, professional interpretation services, and for those purposes, they would never have used professional services in the first place.
However, for purposes that do justify such extra costs, people will always choose real interpretation/translation services. If an important transaction is at stake, they just can’t afford not to.
Real Translators for important projects
In the next few years, while live translation features are not yet fully accurate and reliable but are already in use, LSPs should concentrate on these latter occasions, and to tell the truth, there is nothing new about this.
Our real market has always been businesses that were up to something big, rather than private individuals only wanting to know what’s written on a label. Coordination of high-value contracts, political negotiations, and serious professional discourse have always represented the bulk of LSPs’ interpretation workload, and with the onset of the live translation age, this will only become more distinct.
At least for now, providers of real interpretation/translation services are not in immediate danger of losing their profits to live translation features. But will this remain so in the long run? How long will it take for those features to improve to the point of making LSPs redundant?
Automatic speech recognition goes back to the 1950s. (Who would have thought that?) Until the 1970s, these systems only recognized a limited vocabulary, which also limited their use. In the mid-1980s, Fred Jelinek and his colleagues integrated a typewriter into the speech recognition system, which marked the emergence of automatic speech-to-text services.
The first attempts at computer-based machine translation date back to 1949, building on efforts to decode messages during the Second World War. The first machine translation program had a vocabulary of only 250 words and worked from Russian to English only, but it sparked interest worldwide, resulting in continuous development over time.
Live Translation Future
It took more than half a century to arrive at the breakthrough point, which was brought about by the application of artificial intelligence to both fields, leading to the results we know now.
And the development continues. Though the process of recognizing speech, converting it into text, and translating this text into another language is an extremely complex task, I believe that through persistent fine-tuning and improvements, the time will come when the flaws of these automatic processes will be reduced to the level where human interpretation or translation services will become redundant.
Redundant, yes. But dead? Nope. Adaptation and resilience will help interpreters along the way to put their linguistic skills to another good use.