From: Jason White
Subject: [gnuspeech-contact] Thoughts on GNUSpeech and possible accessibility applications
Date: Mon, 6 Apr 2009 20:03:27 +1000
User-agent: Mutt/1.5.18 (2008-05-17)

Hello,

I've been monitoring the archives for a while, but I thought it was time to
subscribe to the list.

I would be interested in contributing beta testing to the project, as well as
any ideas or experience that might be of benefit. As a computer user who
happens to be blind, I have been using speech synthesis since the early
1980s. (These days I rely more on braille devices, but speech still plays a
significant role.)

Although there are excellent free (as in freedom) screen readers and
speech-based user interfaces available for GNU/Linux, such as Emacspeak
(http://emacspeak.sourceforge.net/), SpeakUp (http://www.linux-speakup.org/)
and Orca (http://www.gnome.org/projects/orca/), the quality of free
text-to-speech systems is, in my judgment at least, somewhat inadequate. To
be specific, I haven't heard any free software that even comes close to
competing with the DECTalk synthesizer on my desk here. Moreover, the
proprietary speech synthesis systems for GNU/Linux (available as software
only, rather than hardware) all incur licensing fees, and owing to the lack
of access to source code, bugs can't be fixed by the developers of screen
readers and speech-based user interfaces, or by users with programming
skills.

A high-quality, free synthesizer could also be integrated by default into
GNU/Linux distributions, and made available in devices that employ free and
open-source software, for example mobile telephones
(http://eyes-free.googlecode.com/ exemplifies this, and currently uses
eSpeak as its synthesizer).

Is there interest among participants in the GNUSpeech project in its
potential to support such applications? If so, porting the text-to-speech
server to GNU/Linux would be a prerequisite, but the development environment
would also need to be available to enable the implementation of additional
languages. I am also interested in whether the possible accessibility
applications of the project might help to attract development resources. I
don't know of any potential sources of funding or developers at present, but
I would gladly participate in any such discussions.

Since the text-to-speech system doesn't run under GNU/Linux yet, I haven't
been able to test it. However, the paper and sample files at
http://pages.cpsc.ucalgary.ca/~hill/papers/avios95/menu.htm
were very useful. My initial impression is that I find GNUSpeech difficult
to understand, partly due to the mixture of British phonetics and North
American pronunciation, which leads, for example, to pronounced "r" and "l"
sounds where they occur in American English but not in British English.
However, I like the rhythm and intonation, which I know from having read the
papers are the subject of substantial research. I don't understand speech
synthesis well enough to know whether the quality of the speech could be
easily improved by fine-tuning the dictionaries and databases, making use of
the part-of-speech information, and so on.

For the accessibility applications mentioned above, there are other
requirements that would need to be satisfied, and again I would be pleased
to contribute to such discussions should they become relevant to the
project. I am also aware that I am by no means alone in desiring a
high-quality text-to-speech system, available as free software, that is
suitable for such applications in a GNU/Linux environment.




