The sensor was probably an electrode like the ones used in electromyography (EMG) studies, which can detect the tiny electrical pulses that travel through the nerves to the muscles. It could also have been a tiny strain sensor that detected the change in shape of his cheek as he flexed it.
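Either way, the raw sensor reading has to be turned into a clean yes/no "flex" event before the software can use it. Here is a rough Python sketch of one common approach (rectify, smooth, threshold); the function name, the numbers, and the sample signal are all invented for illustration, not taken from his actual system:

```python
# Rough sketch: turn a noisy sensor signal into discrete "flex" events by
# rectifying, smoothing, and thresholding it. The numbers and the sample
# signal are invented for illustration.

def detect_flexes(samples, threshold=0.5, window=5):
    """Yield True on each sample where the smoothed signal first rises
    above the threshold, i.e. one event per flex."""
    history = []
    above = False
    for s in samples:
        history.append(abs(s))                  # rectify: EMG swings +/-
        if len(history) > window:
            history.pop(0)
        level = sum(history) / len(history)     # moving-average smoothing
        if level > threshold and not above:
            above = True                        # rising edge: a new flex
            yield True
        else:
            if level <= threshold:
                above = False
            yield False

# A quiet signal with one burst of muscle activity in the middle:
signal = [0.02, -0.03, 0.01, 0.9, -0.8, 0.85, -0.9, 0.03, -0.02, 0.01]
print([i for i, flex in enumerate(detect_flexes(signal)) if flex])  # -> [5]
```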
In terms of software, he would probably have been using something like the T9 predictive text found on non-smartphones, which can suggest whole words from just a few button presses (or cheek flexes). If he really was limited to a single action (such as flexing a cheek), the software might present each letter of the alphabet one at a time, and a flex would select the currently highlighted letter; this approach is usually called single-switch scanning. If he could control more than one muscle, he might have been able to move a cursor or pointer to select letters from an on-screen keyboard.
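To make that concrete, here is a toy Python sketch of single-switch scanning combined with prefix-based word prediction. Nothing here is Hawking's actual software; the vocabulary and the flex timings are invented for illustration:

```python
import string

# Toy sketch of single-switch scanning plus prefix-based word prediction.

ALPHABET = string.ascii_lowercase

def scan_select(flex_at, alphabet=ALPHABET):
    """Simulate one scanning pass: letters are highlighted one per step,
    and the letter highlighted when the user flexes (step flex_at) wins."""
    for step, letter in enumerate(alphabet):
        if step == flex_at:        # the single action: one cheek flex
            return letter
    return alphabet[-1]            # no flex during the pass

def suggest(prefix, vocabulary, k=3):
    """Offer up to k words matching the prefix, so the user can pick a
    whole word instead of scanning out every remaining letter."""
    return [w for w in vocabulary if w.startswith(prefix)][:k]

vocab = ["the", "they", "there", "hello", "help", "physics", "photon"]
# Spell "ph" with two scanning passes (flexing on steps 15 and 7)...
prefix = scan_select(15) + scan_select(7)       # -> "p" + "h"
# ...then jump straight to a predicted word:
print(prefix, suggest(prefix, vocab))           # ph ['physics', 'photon']
```

The big speed win comes from the prediction step: with a good predictor, most words need only a flex or two rather than a full scanning pass per letter.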
In any case, communicating this way would have been incredibly slow. When you saw him in an interview or public speaking engagement, he would have prepared his statements or answers long before the event and triggered the computer to speak the pre-composed words at the appropriate time.