
Re: [Accessibility] Call to Arms


From: Eric S. Johansson
Subject: Re: [Accessibility] Call to Arms
Date: Sun, 25 Jul 2010 00:41:21 -0400
User-agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.7) Gecko/20100713 Thunderbird/3.1.1

 On 7/24/2010 6:59 PM, Richard Stallman wrote:
     I would propose organizing the project to first satisfy the economic needs of the disabled community, so they can make money, they can be independent and as a result, be able to make choices about software freedom.

That is very abstract, so I am not sure what it would imply at a
concrete level.  I don't know what I would think of your
practical suggestions.  But at this abstract level, I see a possible
misunderstanding about our goals and ideals.
Let me try rewording it as something a little more concrete.

If I can't make money, software philosophy doesn't matter. When I can make money, I am in a position to make choices. Any philosophy that serves itself ahead of the needs of the disabled is actively harming the disabled. (I apologize for the harshness of this sentence.) The philosophy of the Free Software Foundation, as I interpret it, is actively harmful to disabled users.

Disability solutions will require a hybrid approach, because some of the core components may take decades to reach parity with existing commercial solutions, and it is unethical to ask a population to take a back seat economically and socially while the Free Software Foundation writes a replacement for a commercial component. That component took at least five years and some $60 million (1997 dollars), a cadre of about 100 PhD computational linguists and other equivalent researchers, plus a data-gathering effort to put together a high-quality corpus for training and testing. Yes, a large-vocabulary continuous speech recognition engine is what is needed to do everyday tasks, not a special-case hack for a limited application domain. Your first test should be to try to write an e-mail message like this one. If you can't do it, the system has failed.

From the perspective of a disabled person, a toolkit, a hybrid of free and proprietary, that is available now is worth more than some folks can possibly imagine. This hybrid is liberating. It is freeing. It enables the disabled to participate in the wider world. Waiting is death of the spirit, skills, and body.

On the other hand, this is your organization. If you wish to put the free software philosophy ahead of the needs of the disabled, that is your choice. I apologize, but I could not participate with a clear conscience, because it violates my ethics of how you treat people.

This is not an open source project.  This is the free software
movement -- a totally different idea.  So please don't think or speak
of it as "open source", because that would lead you astray.

My apologies. I spoke carelessly and I will try to be more consistent in the future.

There are many technical projects in which that difference of
philosophies and values has no effect.  But here we are discussing a
point where it is absolutely crucial.

Our goal is to establish freedom for software users, and freedom is
much broader and deeper than "freedom of choice".  Thus, our aim is
not just that people should be able to "make choices about software
freedom", but rather that they should actually HAVE software freedom.

Proprietary software is digital colonization, unjust and evil.  Our
goal is therefore to eliminate proprietary software.  We cannot
eliminate it this year, but what we can and must do now is refuse to
legitimize it.

In the same way, the abolitionists did not seek to give people
the power to make choices about freedom or slavery.  They sought
to abolish slavery.
Yeah, I thought so. Look higher up in the definition of freedom, to economic freedom: a disabled user does not have economic freedom, which means they lose many other forms of freedom. They frequently do not have the freedom that other people do, because they can't type. They can't use LinkedIn or write an e-mail message. Web forums, Facebook, Google, Usenet, and IRC are all off limits to them. The Foundation philosophy as expressed above is telling them that their needs are secondary, that they need to wait and wait and wait until the right software is available. This is unbelievably cruel, because I've seen 15 years of promises and vaporware with speech recognition from the free and open software worlds. It hasn't happened. I have no evidence that it will.

I see a number of factual and conceptual points that need correction.

Thank you. Always glad to get new information.

* "Giving people the power to make choices about free software or not"
is not the right way to think of our goal (see above).  Our goal was,
and is, to liberate the users from proprietary software.

How about liberating people from economic misfortune, social isolation, and disconnection from governmental services because of their disability? How long should they wait?

* We never had a policy of developing "tools" first, and that's not
what happened.  We developed all sorts of system components in the
1980s, including a chess game, a PDF interpreter, and a spreadsheet.

I understand, but from most developer perspectives, the toolchain was first and foremost. If you ask developers of our generation, most of them will tell you it's the tools they need first. My first encounter was Emacs 17 on VMS. I was so grateful, because it saved me from Digital's shell environment hell.

* I don't think those GNU tools were _necessary_ to enable people to
write the other components of GNU.  Some of them were advances in
convenience and power, and that may have helped people develop all
sorts of things -- but people COULD have written other GNU components
with vi and debugged them with dbx.

But they didn't, or at least in my world they didn't. There was incredible loyalty to the new toolchain, and any move by management in half a dozen companies to replace it with something commercial kicked up a hell of a s*it storm.

* Making GNU programs run on many platforms was never a high priority
goal.  The main purpose of GNU packages is to be parts of the GNU
system.  However, users ported some GNU packages to many platforms,
and we accepted their changes in a spirit of cooperation.

But again, the needs of the programmer outweighed the needs of GNU. People needed cross-platform support. They had no choice in their OS, but they wanted to build an ecosystem where free tools could make their lives easier. Tools first, then down to the core of the OS.

* I had to consider the ethical question of whether it was legitimate
to use Unix (unethical nonfree software) in order to write GNU
components.  My conclusion was that it is ethical to use a nonfree
package to bootstrap a free replacement for that package.  By doing
this, we would participate in the evil of Unix in a secondary way in
order to put an end to it completely.

That's another way of putting my rationale behind using NaturallySpeaking. As Barbie would say, "recognizers are hard." Language models are hard. Collecting a rational corpus is unbelievably hard. This is why I say it will take close to a decade or even more to complete this project. And then we will still have nothing in place in applications, toolkits, and communications to make speech recognition work with applications.

Using NaturallySpeaking as the core, you can start connecting applications and toolkits to a working recognizer. It gives you experience, now. It gives you insights into the application-recognizer interface, now. And more importantly, it gives you information about how you need to build your recognition engine. The recognition engine should never be the tail that wags the dog; if it is, you usually end up with a handful of poo.
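One way to picture that application-recognizer interface is as a thin abstraction layer that applications code against, so the engine behind it (NaturallySpeaking today, a free recognizer later) can be swapped without touching the application side. This is a minimal, hypothetical Python sketch; the names `RecognitionEngine`, `StubEngine`, and `dictate_into` are illustrative and not part of any real NaturallySpeaking or NatLink API:

```python
from abc import ABC, abstractmethod


class RecognitionEngine(ABC):
    """Boundary between the recognition engine and applications.

    Anything behind this interface (a proprietary engine now, a
    free engine later) can be replaced without changing the
    applications that dictate through it.
    """

    @abstractmethod
    def recognize(self, audio: bytes) -> str:
        """Return the best-hypothesis transcript for a chunk of audio."""


class StubEngine(RecognitionEngine):
    """Placeholder engine used while no real recognizer is wired up."""

    def recognize(self, audio: bytes) -> str:
        return "<no recognizer available>"


def dictate_into(engine: RecognitionEngine, audio: bytes) -> str:
    # Applications talk only to the interface, never to a vendor API.
    return engine.recognize(audio)


print(dictate_into(StubEngine(), b"\x00\x01"))
```

The design point is simply that the experience gained wiring applications to a working engine now is captured at this interface, not inside any one vendor's recognizer.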

     Cultivate resources to put NaturallySpeaking under Wine. It's very close to ready and the final push to make it real isn't happening. My reason for this step is that it removes all nonfree software except the speech recognition engine. This gives us a place to work from, to experiment with various techniques for cross-machine speech recognition work.

I agree that using NaturallySpeaking on Wine is less bad than using it
on Windows.  But this is not a step on the path to replacing
NaturallySpeaking, thus, for us to do this would be a detour.

That's a valid argument if you can deliver a NaturallySpeaking-equivalent application engine in 6 to 12 months. From what I've seen so far, you're not even close. As I said above, I expect 8 to 15 years, and by then I'll probably be dead or retired. Think about what this delay means. You are preventing disabled users from having a freer choice. Without the hybrid approach I'm proposing, they will choose Windows with NaturallySpeaking; it's the only option available to them. If you can accept that it's going to take roughly a decade to build your recognizer, and can get behind NaturallySpeaking in the meantime, we could see everything free but the recognition application in 1/3 to 1/4 of the time. Nothing will stop you from continuing on the replacement application, but in the intervening X years, upper-extremity disabled users will be able to work, play, and maybe even write code. I'd forgotten about that benefit: if you have programming by voice working using NaturallySpeaking, you have a potential pool of programmers who might be able to help. If you wait for your own recognizer, you've lost that population almost entirely by the time you're done.

     Develop advanced dictation box which is the simplest model for
     reliable data injection into applications.  This is the first
     layer that will enable upper extremity disabled users to make
     money. This is where you can start writing documentation, e-mails
     etc.

This sounds good as long as it works without proprietary software.
But if it would be an add-on to NaturallySpeaking, it would
constitute enhancing proprietary software -- a counterproductive
distraction, and improper for us to work on or even endorse.

I was speaking in shorthand. It's not an add-on to NaturallySpeaking; it is an add-on to the communications framework between the recognition application and the user application.

I'm not sure if I made this point before, but I believe fundamentally that there should be a divide between the recognition environment and the user's working environment. I work with approximately 100 machines a year. I am not going to install a recognizer on each of them, nor am I going to train it, install a microphone, or any of that, because 95% of those machines are on the far end of the network. Recognition and task management take place locally; data injection and command-and-control take place remotely.

I hope you can see how an application like a dictation box would plug into this framework.

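The split described above (recognition local, injection remote) can be sketched as a dictation box that buffers recognized text on the local machine until the user confirms it, then injects the whole utterance through a pluggable transport to the remote application. This is a hypothetical Python illustration of the idea, not an existing tool; the names `DictationBox` and `RecordingTransport` are made up, and the in-memory transport stands in for whatever network channel would carry the text:

```python
class DictationBox:
    """Local buffer for recognized text.

    Nothing touches the remote machine until the user confirms;
    then the corrected utterance is injected in one piece through
    a transport object.
    """

    def __init__(self, transport):
        self.transport = transport  # anything with a send(text) method
        self.buffer = []

    def hear(self, text: str) -> None:
        # Recognition results accumulate locally, where they can be
        # reviewed and corrected before injection.
        self.buffer.append(text)

    def confirm(self) -> None:
        # On confirmation, inject the corrected text into the remote
        # application and clear the box.
        self.transport.send(" ".join(self.buffer))
        self.buffer.clear()


class RecordingTransport:
    """Stand-in for a real network channel to a remote machine."""

    def __init__(self):
        self.sent = []

    def send(self, text: str) -> None:
        self.sent.append(text)


transport = RecordingTransport()
box = DictationBox(transport)
box.hear("please restart")
box.hear("the web server")
box.confirm()
print(transport.sent)  # one injected utterance on the "remote" side
```

The key property is that the 100 remote machines need nothing installed beyond whatever receives the injected text; all recognition, training, and microphone hardware stays on the one local machine.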
If what we need is to improve the free software for speech
recognition, so as to replace NaturallySpeaking, then that
is what we should focus on.

That might be where our focuses differ. I'm looking at increasing disabled users' freedom through functionality supplied by free software. If we fix what's broken first, and replace the nonfree component either in parallel or later, we will deliver the most important thing: a solution for disabled users. I don't believe we should keep NaturallySpeaking around for the long term, but I do believe it is an essential component to get disabled users running now, and running until the free solution exists.

Thank you, Richard. I apologize if I've come across as testy in this, but disability access is obviously very important to me, since I am disabled. I appreciate your taking the time to reply and correct some of my misunderstandings. I hope you better understand my philosophical foundation with regard to disabled users and where I would want to push the project.


