I’m watching the Apple livestream (yeah, bored). There’s an optional large section, about a third of the screen, with someone interpreting into ASL. Most government livestreams do the same, as do many others.
It would *seem* that anyone capable of following onscreen ASL would also be capable of reading closed captions. I understand that ASL is its own language rather than just English in signed form, but it’s easy to add fully automated multilingual CC.
So what is the purpose of a separate ASL stream, as opposed to just having multilingual CC?