When we talk about translating, most of us think about translating texts: books, contracts, manuals, you name it. But there’s a whole different world out there called subtitling. Let me give you a peek into this creative and challenging trade.
Is a subtitler a translator? Yes. Is a translator a subtitler? Not necessarily. Let me start with a short explanation of what subtitling is, before I cover the differences between text translation and subtitling.
Most of you see subtitles on a near daily basis, whether it’s a movie on TV, a video on social media, or a series on your favorite streaming service. Subtitles come in handy when the audio is in a language you don’t understand, or when you can’t or don’t want to have the audio on, perhaps because you’re already listening to music, don’t want to disturb other people, or for whatever other reason.
With subtitles, you can read what’s going on, instead of having to listen to the audio. When the subtitles are in the same language as the audio, we actually call them “captions”. Captions usually include guidance for viewers who are deaf or hard of hearing. When captions are translated, they’re “subtitles”, although in everyday speech, “subtitles” covers both.
So, what’s the big deal then?
Isn’t subtitling just translating text too? Yes and no. Of course, it basically is translating captions. But it is so much more. Most subtitlers do start with translating captions, as that’s the easiest way to get a rough draft of your subtitles. The challenging and creative part starts once you have that rough draft.
With subtitles, you’re severely limited in your translation. Of course, when you’re translating texts, you may also have formatting requirements, but it usually doesn’t really matter if, say, a translated book ends up a few pages longer than the original. With subtitles, there’s a maximum number of characters allowed per subtitle. For languages that use the Latin alphabet, that’s usually something like 42 characters per line and a maximum of two lines per subtitle. Those subtitles also need to stay on screen long enough for people to read them, so you can’t just squeeze in more text by adding subtitles that flash by in half a second.
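To make those limits concrete, here’s a minimal sketch of a subtitle “cue” checker. The specific numbers (42 characters per line, two lines, and a reading speed of roughly 17 characters per second) are common guidelines rather than universal rules; real style guides vary by client and language, so treat these constants as illustrative assumptions.

```python
# Illustrative limits only; actual style guides differ per client/language.
MAX_CHARS_PER_LINE = 42
MAX_LINES = 2
READING_SPEED_CPS = 17  # characters per second, a rough guideline

def check_cue(lines, start_sec, end_sec):
    """Return a list of problems with one subtitle cue."""
    problems = []
    if len(lines) > MAX_LINES:
        problems.append(f"too many lines: {len(lines)} > {MAX_LINES}")
    for i, line in enumerate(lines, 1):
        if len(line) > MAX_CHARS_PER_LINE:
            problems.append(f"line {i} too long: {len(line)} chars")
    # Too little screen time for the amount of text also counts as a problem.
    duration = end_sec - start_sec
    char_count = sum(len(line) for line in lines)
    min_duration = char_count / READING_SPEED_CPS
    if duration < min_duration:
        problems.append(
            f"on screen {duration:.2f}s, needs ~{min_duration:.2f}s to read"
        )
    return problems

print(check_cue(["This line is fine."], 0.0, 2.0))  # []
print(check_cue(["A" * 50], 0.0, 1.0))  # flags both length and screen time
```

In practice this is exactly the kind of check subtitling software runs continuously while you type, which is why shortening the text, rather than the tool, is the hard part.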
Those limitations force you to be creative, especially when you’re translating into a language that needs more words to convey the same message as the original. In that case, you have to condense the dialogue while still retaining its essence. That means that, even more than with text translation, you have to translate for meaning. In practice, tweaking the text to find the sweet spot where the subtitles are short enough, the timing is long enough, and the meaning of the audio is preserved takes more time than the initial rough translation of the captions.
Another difference lies in translating jokes and puns. In a book, for example, when someone makes a joke, you usually have quite a bit of freedom in how you translate it. You have to make sure it fits the context, of course, but with subtitling, that context has an extra dimension: what’s visible on screen.
Remember the episode of The Big Bang Theory where Sheldon does the Higgs boson particle rebus? The first part of that rebus is a drawing of pigs and “p=h”. Subtitlers have clearly struggled there. In Dutch, for example, “pigs” would be “varkens” or “biggen”, but that doesn’t make sense when the drawing shows a “p” being changed into an “h” and the outcome has to be “Higgs boson particle”, or “Higgsboson-deeltje” in Dutch.
In a text translation, where the actual rebus isn’t visible, you just make up a different rebus that makes sense. When subtitling, with the actual rebus visible on screen, you have to come up with a rebus that not only makes sense in the language you’re translating into, but that also fits what’s visible on screen.
In addition to the translated dialogue, subtitles may also contain information about audio other than the dialogue. That’s especially important for viewers who are deaf or hard of hearing. For example, the subtitles can indicate who’s speaking, or can describe sounds in the audio that are important for the plot. These are called SDH (Subtitles for the Deaf & Hard of Hearing), or open or closed captions.
More than language
Finally, as a subtitler, you don’t just take care of the content of the subtitles; you’re also responsible for timing them correctly. The timing of a subtitle has to match the timing of the audio. That is, when a speaker starts speaking, that’s when the subtitle has to appear on screen, and when the speaker stops, the subtitle needs to disappear. Of course, when a speaker talks for a long time, the text may have to be split into multiple subtitles, each matching the audio as closely as possible. It’s also important that subtitles don’t turn into spoilers: don’t give away the punchline before the speaker voices it. Some clients have specific requirements that make timing even more of a puzzle. Netflix, for example, requires that subtitles start and end within a certain number of milliseconds of a scene change.
Those additional requirements on timing, subtitle length, and visible context make subtitling a Creative process with a capital C.