Adobe hasn't yet published an official press release, so this feature may still be undocumented and in early beta form for now.
They also have not disclosed which engine (if not a proprietary one) they are using to parse spoken audio tracks for speech recognition indexing.
We've blogged about the two most popular such engines: EveryZing's "Ramp" and its predecessor and core technology, the "Lecture Browser" from MIT's CSAIL (Computer Science and Artificial Intelligence Laboratory). It's very likely (IOHO) that this indexing uses one of those platforms or a variation thereof, although new "automated indexing of video's spoken audio" technology is emerging from several other entities, including IBM.
Labels: Adobe, automated speech recognition, EveryZing, indexing through speech recognition, Lecture Browser, MIT, video's spoken audio indexing