
Re: [Accessibility] Why not first an IDE that recognizes speech?


From: Eric S. Johansson
Subject: Re: [Accessibility] Why not first an IDE that recognizes speech?
Date: Wed, 28 Jul 2010 13:12:47 -0400
User-agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.7) Gecko/20100713 Thunderbird/3.1.1

 On 7/28/2010 12:35 PM, Susan Jolly wrote:
> I'm a Tab.  Capital T to remind me to emphasize that at my age temporary is
> very likely to be true.  I'm a retired computational scientist (numerical
> solution of coupled nonlinear PDEs, etc.).  I saw the announcement for this
> list as a result of my interest in braille software.
>
> Never knew much about speech recognition before but the posts that point out
> that it is REALLY HARD have convinced me.  I know really hard from an
> algorithmic standpoint.  Sounds like a bad first project.
>
> My impression is that a lot of posters here really want to be able to use
> speech recognition to do software development.  Why not start with that?
> Interesting important problem with IMHO a greater chance of success and
> might well support the harder project both as a developer tool and as an
> extensible code base.

Yeah, it's a really good idea, but this is where the conflict comes in. Honoring Richard's philosophy of doing nothing that encourages or leads users to use proprietary software, we can't do this. NaturallySpeaking is the only useful system out there (see my other post about programming issues with speech recognition), and to do programming by voice right, you need to deeply couple the speech recognition application to the programming environment. And we are back in conflict again.

Haven't found a way to solve that problem yet. Not sure it's even possible short of forking the work into tools and application/recognizer bridges as a non-GNU project, and leaving behind a minimal component that can be built with the Free Software Foundation philosophy in mind.

I need to go back and reread some of the messages, but I believe (and please correct me if I'm wrong, because this could help solve part of the problem) that the Free Software Foundation philosophy would not allow us to build all of the toolkits and bridge software if the only recognizer available was NaturallySpeaking. Of course, one way to solve that might be to put a Sphinx recognizer in place and let it exist as a time-wasting honey trap until the real free-software recognizer was working. That way, the Free Software Foundation philosophy is satisfied (completely free tool chain), and those of us who know better... I mean who need to get work done... er... are totally evil could use NaturallySpeaking as part of a non-free component that works with the free tool chain.
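To show what I mean by keeping the free tool chain recognizer-agnostic, here is a rough sketch. Every name in it (Recognizer, SphinxBackend, dispatch) is made up for illustration; it is not code from any existing project, and the Sphinx backend is just a stub where a real one would wrap CMU Sphinx/PocketSphinx:

```python
# Hypothetical sketch only: the free tooling talks to a recognizer
# through a narrow interface, so Sphinx can be the default backend and
# a NaturallySpeaking bridge can live as a separate, non-free add-on.
from abc import ABC, abstractmethod


class Recognizer(ABC):
    """Narrow interface the editor/IDE tooling codes against."""

    @abstractmethod
    def add_grammar(self, name: str, phrases: list) -> None:
        """Register a command grammar (editor commands, symbol names, ...)."""

    @abstractmethod
    def listen(self) -> str:
        """Block until an utterance is recognized; return its text."""


class SphinxBackend(Recognizer):
    """Free-software placeholder backend (stubbed here; a real one
    would do audio capture and decoding with CMU Sphinx)."""

    def __init__(self):
        self._grammars = {}

    def add_grammar(self, name, phrases):
        self._grammars[name] = phrases

    def listen(self):
        # Stand-in for actual audio capture + decoding.
        return input("say> ")


def dispatch(recognizer: Recognizer, commands: dict) -> None:
    """Tooling side: map utterances to editor actions, whatever the backend."""
    recognizer.add_grammar("editor", list(commands))
    while True:
        utterance = recognizer.listen()
        action = commands.get(utterance)
        if action:
            action()


if __name__ == "__main__":
    dispatch(SphinxBackend(), {"save file": lambda: print("saving...")})
```

The point is only that the interface stays tiny, so the free part of the chain never has to know which recognizer is on the other side of it.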

My snarky humor aside, would that work? Would using Sphinx as a placeholder satisfy everyone in the short term as we build for the future?


