An Inside Look at Closed Captioning
I’ve corrected the spelling and grammatical errors, and timed the captions frame by frame (1/30 of a second) on the computer, so that a caption carrying one character’s transcribed speech doesn’t linger on-screen after another speaker appears.
It’s very exacting work, and a half-hour television show can take an experienced closed caption editor 10 hours to complete. (The prerecorded television shows I work with are received in advance, and most often use pop-up captions, which literally “pop up” and disappear. News and sports use roll-up captions, which roll upwards before disappearing. The latter, requiring live captioners, tend to have mistakes, because this form of captioning is done completely on the spot.)
The job requires good typing skills and an eye for detail, as well as a strong command of the English language. One of my colleagues has a background in journalism, and two of us (including myself) are graduates in English literature.
The last caption I have to add to this particular episode of “foodessence” (topic: corn) is “corn is the kernel that captains the Western world.” A cute pun, but a captioner’s nightmare. One idiosyncrasy of the English language is that a significant minority of words are not spelled the way they sound. This is due to the diversity of English linguistic roots, as well as its characteristic fondness for absorbing words from other languages. While “kernel” can be broken down phonetically, the word it plays on, “colonel,” cannot. If a caption user is not aware that these two words sound the same, not only is the humour lost, but so is any sense of clarity. A hearing person learning English as a second language would likely be just as confused. I type “colonel” in brackets, with the hope that this makes the statement clearer for some caption users.
I resigned myself long ago to the fact that captioning will never be a perfect medium. The rules of spoken and written language are simply too different… and, in English, riddled with exceptions.
Spelling and grammatical mistakes are, of course, unacceptable. What’s ambiguous, however, is the ethics of editing text. Unfortunately for caption users and editors, speaking speed is faster than reading speed. While a person may speak 250 words per minute, a comfortable reading speed for adults is 140-180 words per minute.
A “scene change” is an abrupt or fading transition from one scene to another. During a drama, a sitcom or an interview, the shift from one speaker’s on-screen presence to another’s is called a scene change. A scene change can also be a shift in setting. Probably the only kind of video without scene changes is a lecture.
Whenever possible, captions must flow with scene changes. If a caption crosses a scene change (that is, if one speaker’s text remains on-screen when a different person appears), it looks sloppy. Frequent scene changes, especially prevalent in suspense-oriented programs like police dramas, make an editor’s job challenging. Shifts from one speaker’s on-screen presence to another’s can occur in under one second, but all captions must be on-screen for at least one second, and a full line of text needs a good 1.5 seconds to be read at a comfortable speed.
If there are frequent scene changes and a lot of dialogue, as an editor, I must decide whether to let a caption go over a scene change or to edit the text to fit within the time frame before the next scene change occurs. Editing itself has to be done very carefully, in such a way that the text’s meaning is not changed. I try to stick to taking away excessive “ands” and “thats,” but sometimes need to do more. If an interviewee speaks quickly and with excessive repetition, I will sometimes edit out the repeated words. A close relative of mine who uses captions is often informed by his wife when text is edited. He finds it helpful.
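The arithmetic behind that decision can be sketched in a few lines of code. This is purely an illustration, not an actual captioning tool: it uses the reading-speed figures cited above (the 160 words-per-minute constant is simply the midpoint of the 140-180 range, an assumption on my part) to estimate whether a caption can be read comfortably before the next scene change.

```python
# Illustrative sketch only -- not a real captioning tool.
# Uses the article's figures: a comfortable adult reading speed of
# 140-180 wpm (midpoint 160 assumed here), and a minimum of one
# second of on-screen time per caption.

COMFORTABLE_WPM = 160          # assumed midpoint of the 140-180 wpm range
MIN_ON_SCREEN_SECONDS = 1.0    # every caption must stay up at least 1 second

def seconds_needed(text: str) -> float:
    """Seconds a viewer needs to read `text` at a comfortable pace."""
    words = len(text.split())
    return max(MIN_ON_SCREEN_SECONDS, words * 60 / COMFORTABLE_WPM)

def fits_before_scene_change(text: str, gap_seconds: float) -> bool:
    """True if the caption can be read before the next scene change."""
    return seconds_needed(text) <= gap_seconds

# The nine-word pun from earlier in the article needs over three
# seconds of screen time; if scene changes come faster than that,
# the editor must trim the text or let the caption cross the cut.
line = "Corn is the kernel that captains the Western world."
print(seconds_needed(line))
print(fits_before_scene_change(line, 2.0))
```

If the function returns False for a given gap, that is exactly the dilemma described above: either the caption crosses the scene change or the text gets edited down.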
Meanwhile, there are other things to be considered. If a character is speaking off-screen, he or she needs to be identified by name in brackets, effectively requiring more time for the caption to be read. Captions also need to be placed according to the speaker’s location, and they must not cover credits, supers, subtitles, eyes or mouths.
The correct spelling of archaic words must also be sought. Once, when I was captioning the television show “Nature Walk,” an obscure and mispronounced word uttered by an archeologist in Arizona required me to call the state park where he works for the correct spelling. On another occasion, I sent an e-mail to everyone at the station, asking for the aid of any German speakers at Life Network. The correct spellings were finally confirmed by the mother of one of our engineers.
Of course, I wouldn’t do this job if I didn’t enjoy it. My favourite task is describing sound effects. It’s especially entertaining during a program with excessive fighting, where eight hours of typing “thwack,” “kapow” and “boom” bring me back to those carefree childhood days of watching the ’70s version of “Batman and Robin” with a teddy bear in one hand and cream cheese-stuffed celery sticks in the other. While Life Network does not show such programming, our parent company, Atlantis Communications, does produce some action dramas, some of which my colleagues and I caption.
It should be noted at this point that closed captions are no longer used only by the communities of people who are Deaf and hard of hearing. Public places such as sports bars and workout clubs often display captions on their television sets to simultaneously please customers who want to watch TV and customers who would rather not be distracted by the audio. Educators have also found that captions are an effective tool for teaching literacy, because captions provide visual cues to the words that are heard.
The Canadian Radio-television and Telecommunications Commission (CRTC) requires television stations making more than $10 million in revenue to provide closed captions for 90 per cent of all programming by the end of their licensing terms. The 10 per cent difference is to allow for errors when systems break down.
My hope is to one day see captioning in movie theatres as well. While many videotapes are captioned, they can’t compare to the wonderful visual experience of the big screen. The absence of captioning deprives an entire community of an important medium in our society. And as Deaf culture and sign language enter mainstream cinema through films such as “Children of a Lesser God” and “Mr. Holland’s Opus,” it hardly seems appropriate to systematically discriminate against a community whose culture is increasingly adding to the richness of the entertainment industry, as well as to mainstream hearing culture.
(Mordecai Drache is a freelance writer living in Toronto, Ontario.)