Major technological advancements are made every single day, and the driving force behind these life-saving, world-altering advancements is always human. Machines accelerate processes and provide alternative solutions for many of the repetitive, everyday tasks humans tire of, and the translation industry is no exception. Not all translation tasks are repetitive, however. For these automated solutions to work and improve, the involvement of human linguists is imperative. It’s important to acknowledge that behind every great machine is a super human. Let’s take a closer look at how humans are shaping AI and fueling translation technologies.
Machine translation technology has made massive strides in recent years. Long gone are the days of rule-based machine translation. Thanks to the introduction of AI and machine learning, machine translation output quality has been improving at an ever-faster rate. Still, there are limitations to this technology, and overcoming them requires human intervention. For machine translation engines to learn, someone has to teach them, so linguists must be involved both before and after a translation takes place. Through post-editing work, linguists “train” the engines to predict how a human translator would proceed.
Post-editing occurs when a human translator corrects the machine-generated text and explains why each correction was made; this feedback helps improve the machine translation’s capabilities. Engineers then process this information and feed the engine more data. The goal is that, over time, less human intervention will be required to produce human-quality translations. To that end, more and more industry-specific machine translation engines are being created, and companies now have the option of training machine translation engines for their own use.
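The post-editing loop described above can be sketched in code. This is an illustrative Python sketch only; the class and function names are my own invention, and real MT training pipelines are far more involved. It shows the core idea: each corrected segment becomes a (source, target) pair that engineers can feed back into the engine.

```python
from dataclasses import dataclass

@dataclass
class PostEdit:
    """One segment from a post-editing job (hypothetical structure)."""
    source: str      # original-language text
    mt_output: str   # raw machine translation
    post_edit: str   # linguist-corrected translation
    reason: str      # linguist's explanation of the correction

def to_training_pairs(edits: list[PostEdit]) -> list[tuple[str, str]]:
    """Turn post-edited segments into (source, target) pairs for retraining.
    Segments the linguist changed carry new signal, so they are listed first;
    unchanged segments confirm the engine's current behavior."""
    changed = [(e.source, e.post_edit) for e in edits if e.post_edit != e.mt_output]
    unchanged = [(e.source, e.mt_output) for e in edits if e.post_edit == e.mt_output]
    return changed + unchanged

# Example: one accepted segment, one corrected literal translation
edits = [
    PostEdit("Hola, mundo", "Hello, world", "Hello, world", "No change needed"),
    PostEdit("Banco de datos", "Bank of data", "Database", "Fixed a literal translation"),
]
pairs = to_training_pairs(edits)
# pairs[0] == ("Banco de datos", "Database")
```

The linguist’s written reason travels with each segment, which is what lets engineers decide how much weight a correction should carry.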
In the case of transcription services, we’re seeing an increase in the use of speech recognition software. It’s important to pause here and draw a line between speech recognition and dictation. Dictation occurs when the speaker purposefully modulates their speech and uses commands to be understood, which makes it easier to decode because the speaker is intentionally clear. The difficult task is decoding speech that isn’t dictated, such as a lecture or an interview.
Similar to machine translation, speech recognition software requires training. The developers of speech recognition apps collect massive amounts of data from users’ recorded sentences and correct the transcribed text to train the software, making it more accurate with elements like accents, jargon, and speed. This is no easy task: no two people speak precisely the same way, so deviations in speech patterns and accents must be taken into account. Any anomaly, such as an unfamiliar accent, can cause speech recognition software to misinterpret parts of a conversation. This is why human review of the output is imperative, no matter how powerful the technology. If you’re struggling to visualize this technology, pick up your smartphone. When you get a new smartphone, you often have to train your phone’s digital assistant (like Apple’s Siri) to recognize your specific voice. In many cases, the phone will respond only to your voice and won’t work for someone else.
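The human-review step above is commonly quantified with word error rate (WER): the word substitutions, insertions, and deletions a reviewer makes, divided by the length of the reference transcript. A minimal sketch, assuming a plain word-level Levenshtein alignment (production toolkits compute this over large aligned corpora):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with a word-level Levenshtein alignment."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming table of edit distances between word prefixes
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[-1][-1] / max(len(ref), 1)

# A reviewer compares the software's transcript against what was actually said:
wer = word_error_rate("the lecture starts at noon", "the lecture starts at moon")
# one substitution out of five words -> 0.2
```

A misheard accent or piece of jargon shows up directly as a higher WER, which is exactly the signal developers use to decide where more training data is needed.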
As impressive as machine translation and tools like speech recognition software are, they still require human supervision to achieve the best results. A human touch can take the content created by AI and make it more accurate and effective. Post-editing machine-translated texts and running QA checks on transcriptions are necessary steps a human linguist must take: when the material is sensitive, as in medical translation, no risks can be taken, and humans must be the ones making the important decisions.