The great news is that, among the big players in the speech recognition marketplace, SRI International, makers of the advanced speaker-independent DynaSpeak SDK, have incorporated Distributed Speech Recognition as an integral part of the SDK.
From the DSR section of the DynaSpeak pages on SRI's website:
"To date, speech recognition systems have been deployed in two ways: on a remote server or pre-loaded on a mobile device. Either approach forced makers of mobile phones, PDAs, PCs, and consumer and automotive electronics products to accept tradeoffs. To eliminate design sacrifices, SRI has created a third mode of deploying speech recognition: DynaSpeak with Distributed Speech Recognition (DSR). With DSR, a user's speech is preprocessed on the user device and transmitted over a low bandwidth channel to a full-featured server-side system. The benefits are numerous: higher quality audio capture, lower cost per device, and centralized management of speech applications."
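The architecture SRI describes, where the client extracts compact features from speech and ships only those to a full-featured server, can be sketched in a few lines. The following is a minimal illustration of the idea, not SRI's implementation or the ETSI DSR front end: it uses a toy per-frame log-energy feature in place of a real MFCC front end, and the frame rate and per-frame payload size are assumptions chosen for the example.

```python
import math

SAMPLE_RATE = 8000   # Hz, typical telephony sampling rate
FRAME_MS = 10        # one feature vector every 10 ms (assumed)
FEATURE_BYTES = 6    # assumed quantized payload per frame

def frame_log_energies(samples, sample_rate=SAMPLE_RATE, frame_ms=FRAME_MS):
    """Split 16-bit PCM samples into frames and return one log-energy
    feature per frame -- a toy stand-in for a real MFCC front end."""
    frame_len = sample_rate * frame_ms // 1000
    feats = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        feats.append(math.log(energy + 1e-9))
    return feats

def bitrates():
    """Compare raw-audio bandwidth with the assumed feature-stream bandwidth."""
    raw_bps = SAMPLE_RATE * 16                        # raw 16-bit PCM
    dsr_bps = (1000 // FRAME_MS) * FEATURE_BYTES * 8  # feature stream
    return raw_bps, dsr_bps

# One second of a synthetic 440 Hz tone standing in for captured speech:
samples = [int(10000 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE))
           for n in range(SAMPLE_RATE)]
feats = frame_log_energies(samples)   # 100 feature values for 1 s of audio
raw_bps, dsr_bps = bitrates()         # 128000 vs 4800 bits per second
```

Even with these rough assumptions, the feature stream needs well under a tenth of the raw-audio bandwidth, which is the "low bandwidth channel" benefit the quote refers to; only the server needs the heavyweight recognition engine.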
SRI isn't the only big player to incorporate DSR; Hewlett-Packard has been developing distributed speech recognition as a power-saving solution for wireless devices:
"We have shown that DSR can reduce the required systemwide energy consumption for a speech recognition task by over 95% compared to a software based client-side speech recognition system. These savings include the software optimizations of the DSR front-end as well as the savings from the decreased duty cycle of the wireless interface."
Nuance also built DSR into their OpenSpeech Recognizer 2.0, which is available with their Network Speech Solutions.
We cheer these efforts, and hopefully handset manufacturers will begin to support DSR inside 3G mobile phones. The "chicken or the egg" dilemma that David Pearce, DSR's chief developer, spoke of both at SpeechTEK 2005 and in his VoiceXML articles is solved; now that it's "hatched," let's hope it continues to grow!