
Re: [Accessibility] resident evil

From: Eric S. Johansson
Subject: Re: [Accessibility] resident evil
Date: Sat, 31 Jul 2010 12:57:46 -0400
User-agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv: Gecko/20100713 Thunderbird/3.1.1

 On 7/31/2010 7:09 AM, Chris Hofstader wrote:

> rms: However, there is no reason to focus narrowly on the needs of people
> who can't type.  The more people the program is usable by, the better,
> and that will mean more contributions of a kind that are directly
> useful.  That ought to benefit all the users.
cdh: I think it might actually be harder to exclude populations than to prefer a population in the recognition engine. We already know we will have multiple UIs to serve different use cases: programming by voice, dictation, command and control, etc. I think that it is at the UI level where we may include features that apply more strongly to someone who cannot type versus someone who prefers dictation while typing a little.

Great point. And this is also the point where we can hurt people very badly. We need to have early and frequent usability trials. I fully expect that the user interface work could take as much as 5 to 7 years before it is refined into a form we believe we can live with. NaturallySpeaking is a great example of this. It became apparent by version 6 that there was no way in hell they were going to enable every single application on the face of the planet, because requiring application developers to do anything is a fool's errand. Even accessibility by accident was really difficult to achieve, which prompted the development of the dictation box: accessibility from the outside. Unfortunately, they never really spent the time and effort to refine the concept as I did with the enhanced dictation box.

So if it takes 10 years to write the recognition engine and another five years to work on the user interface, that's longer than if you were able to work on them in parallel, which you can't, because you need a target for testing.

cdh: as a blind user with RSI, I can say that while using DNS, saying, "Go back four words..." pretty much really sucks if you lose count of how many words you have typed since the item that you want to change. It's a strange cognitive model to be composing text while also trying to count words and characters, one that I never quite figured out how to manage without spending a bunch of time.

you: say last few
system: on my trip down
you: sounds good
system: my dog sits next to me
you: sounds good
system: and pees everything in South Carolina
you: sounds bad
system: and pees everything in South Carolina
you: replace and pees everything
system: found
you: with and sees everything
system: and sees everything in South Carolina
you: sounds good
system: from the car end of utterance

Simple grammar. "Say [the] last few" is a global command asking the system to repeat the last few utterances. During the editing process, the local command sequence is sounds good/sounds bad, plus replace (phrase) with (phrase). This is an example of reducing ambiguity of commands by reduction of scope.
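The review loop above could be sketched as a small state machine. This is a minimal illustration of the grammar, not a real speech-engine integration; the function name, the plain-string commands, and the iterator-based session are all invented for the sketch.

```python
def review_last_few(utterances, commands):
    """Replay recent utterances one at a time, applying the local edit grammar.

    utterances: recently dictated strings, oldest first.
    commands: iterator of user responses, e.g. "sounds good",
              "sounds bad", "replace <old phrase>", "with <new phrase>".
    """
    result = []
    for text in utterances:
        # System reads the utterance aloud; user responds with a verdict.
        verdict = next(commands)
        while verdict != "sounds good":
            if verdict == "sounds bad":
                # System repeats the bad utterance; expect a replace command.
                verdict = next(commands)
            elif verdict.startswith("replace "):
                old = verdict[len("replace "):]
                new = next(commands)[len("with "):]   # "with <new phrase>"
                text = text.replace(old, new)
                # System reads the corrected utterance for confirmation.
                verdict = next(commands)
            else:
                verdict = next(commands)  # unrecognized; ask again
        result.append(text)
    return result

# A session mirroring the dialogue above:
session = iter([
    "sounds good",                      # "on my trip down"
    "sounds good",                      # "my dog sits next to me"
    "sounds bad",                       # "and pees everything in South Carolina"
    "replace and pees everything",
    "with and sees everything",
    "sounds good",                      # corrected utterance accepted
])
fixed = review_last_few(
    ["on my trip down",
     "my dog sits next to me",
     "and pees everything in South Carolina"],
    session,
)
# fixed[-1] is now "and sees everything in South Carolina"
```

Because "sounds good"/"sounds bad"/"replace .../with ..." only exist inside the review loop, the recognizer never has to distinguish them from ordinary dictation, which is the scope reduction the grammar relies on.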

This user interface was created on the fly this morning, and I'm wondering if any of it would match what you need?

And in the flogging-a-dead-horse category: Chris, you are a prime example of a user who is actively harmed by the rejection of a hybrid approach. You have been royally screwed by loss of vision and now loss of hand function. Please take care of your voice and work with a speech therapist periodically to make sure everything is okay.
