Re: [Accessibility] Call to Arms

From: Eric S. Johansson
Subject: Re: [Accessibility] Call to Arms
Date: Mon, 26 Jul 2010 13:39:50 -0400
User-agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv: Gecko/20100713 Thunderbird/3.1.1

 On 7/26/2010 8:14 AM, Steve Holmes wrote:
On Sun, Jul 25, 2010 at 10:52:41PM -0400, Richard Stallman wrote:
Something like that might be independent enough of the recognizer
to be a valid project.  But there ARE free software packages for
speech recognition.  So people should develop it to work with them.
If users can also run it with NaturallySpeaking, that is ok,
as long as we don't suggest it.

However, I think we should not include such things in THIS project,
because we need to focus energy on the goal of making those free
recognizers better.  For us, replacing important proprietary software
takes priority over advancing the capabilities of software.
When I hear this statement, I feel a bit of a problem here.  Maybe
it's a matter of pragmatics or something, but if the current free
solutions are inadequate or don't work at all, and a suitable
replacement for a proprietary product like Naturally Speaking
is going to be 7 to 10 years away, the people needing such a tool
would have no choice but to go with a Windows solution and abandon any
hope of using GNU/Linux and its other numerous advantages.  I like the
idea of developing the support utilities to be compatible
with Naturally Speaking for now, but with the firm understanding that
a good replacement for Naturally Speaking be developed in parallel and
maybe even given higher priority to shorten the development cycle.
The context of the above statement of replacing proprietary software
before enhancing existing software sounds a lot like "all or nothing"
to me.  Couldn't something like the LGPL be applied to this speech
recognition effort?  Develop the tools without locking them to Naturally
Speaking; maybe adjust the existing recognition layer to be compatible
with what is expected by Naturally Speaking so a smooth transition
could take place and we could soon be rid of depending on the proprietary
recognizer.

Thank you for expressing it this way. The basic idea I've had in the background for a few years has been something like this:

(Interface side)----[LH bridge]==[RH bridge]----(Recognizer)

You replace the right-hand bridge and recognizer depending on your needs. In the very beginning, the right-hand bridge and recognizer would be NaturallySpeaking plus something we build. As the years go by and a Free Software Foundation recognizer comes to life, its developers can create their own right-hand bridge for testing and usage.
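To make the two-bridge idea concrete, here is a minimal sketch in Python. All class and method names are hypothetical, invented for illustration; a real right-hand bridge would wrap NaturallySpeaking or a free recognizer behind the same interface:

```python
from abc import ABC, abstractmethod


class RecognizerBridge(ABC):
    """Right-hand bridge: adapts one specific recognizer to a common protocol."""

    @abstractmethod
    def recognize(self, audio: bytes) -> str:
        """Turn an audio utterance into recognized text."""


class InterfaceBridge:
    """Left-hand bridge: delivers recognized text to the user's applications."""

    def __init__(self, recognizer: RecognizerBridge):
        self.recognizer = recognizer

    def handle_utterance(self, audio: bytes) -> str:
        # The interface side never knows which recognizer produced the text,
        # so swapping recognizers never touches the interface side.
        return self.recognizer.recognize(audio)


class EchoRecognizer(RecognizerBridge):
    """Stand-in recognizer for testing: just decodes the bytes it is given."""

    def recognize(self, audio: bytes) -> str:
        return audio.decode("utf-8")


bridge = InterfaceBridge(EchoRecognizer())
result = bridge.handle_utterance(b"hello world")
print(result)
```

The point of the sketch is only the seam: the right-hand side is replaceable without the left-hand side noticing.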

The other reason I recommend this approach is that the connection between the halves can span many thousands of miles. There's no reason to put the recognizer on the same machine as the interface. This model lets us support speech recognition on many more machines than you have recognizers.[1] It also lets you put a recognizer on a host and speak to a virtual machine, or put a recognizer inside of Wine and speak to another machine.
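The remote-recognizer idea above can be sketched with an ordinary TCP socket. The wire format here (raw bytes in, recognized text back) is an assumption for illustration only, and the "recognizer" is a stand-in that just upper-cases what it receives:

```python
import socket
import threading


def recognizer_server(server: socket.socket) -> None:
    """Pretend recognizer host: reads 'audio' bytes, returns recognized text."""
    conn, _ = server.accept()
    with conn:
        data = conn.recv(1024)                       # stand-in for an audio stream
        text = data.decode("utf-8").upper()          # stand-in for real recognition
        conn.sendall((text + "\n").encode("utf-8"))


# Recognizer side: could be any machine; here it is a local thread on an
# ephemeral port so the example is self-contained.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=recognizer_server, args=(server,), daemon=True).start()

# Interface side: connects to wherever the recognizer lives.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
result = client.recv(1024).decode("utf-8").strip()
print(result)
client.close()
server.close()
```

Nothing in the interface side depends on the recognizer being local, which is exactly what lets one recognizer host serve many machines.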

[1] To insist that a user put the entire recognition environment (macros, configuration, etc.) on every machine they use is the path to madness. It's bad enough keeping your shell environment consistent; a speech recognition environment is much more complex and correspondingly difficult to synchronize. Another difficulty is transporting the audio stream from where you are to where the recognizer is. Keeping all of your recognition configuration on a single machine reduces or eliminates all these problems and enhances the user experience.

Let me also point out that if I had the right tools to let me write code, I could have started on something like this already. Waiting for the Free Software Foundation model, I wouldn't be writing code for at least five to six years, and by that time I probably would have lost interest.
