retrieving synthesized audio data?
Thu, 04 Feb 2010 20:26:05 -0700
I need to generate the audio data, though probably not as wave files. In
short, I need to be able to generate the audio data and have it passed
back to a buffer in my program for writing to a file of my choosing, as
Rockbox uses a few different files structured in different ways. So yes,
I need it for further processing, and an output module probably wouldn't
be the way to go for this. I don't want to have to switch output
backends for this program to work; I want it to work seamlessly for the
user. Since speech dispatcher isn't able to do this yet, I'll probably
end up just implementing espeak support directly for now, abstracted so
that switching over to speech dispatcher once this is supported will be
straightforward.
On Fri, 2010-02-05 at 04:10 +0100, Halim Sahin wrote:
> Hi Jacob and Luke,
> @Jacob: do you need the audio data for further processing?
> Or do you only need to create wave files from the synthesized text?
> Maybe a good start is to add a dummy audio output driver in speechd
> which writes its
> output data into a fifo.
> This wouldn't need any API work and could be implemented (in my
> opinion) quickly and without much work!
> On Thu, Feb 04, 2010 at 12:04:00PM -0800, Luke Yelavich wrote:
> > I intend to write up some roadmap/specification documentation as to
> > what I would like to work on with speech-dispatcher next. I think
> > first, we get a 0.6.8 release out the door, then start thinking what
> > needs major work, to ensure speech-dispatcher is still usable both as
> > a system service for those who want it, and for the ever changing
> > multi-user desktop environment.
> Making pulse optional for Ubuntu would solve this problem without
> any new code.
> > One such idea I have, is to consider
> > dbus as a client/server communication transport layer. This could even
> > go so far as to solve the issue of using system level clients like
> > BrlTTY with a system level speech-dispatcher, which would then
> > communicate with a user level speech-dispatcher for example.
> Luke! It's only an issue because you and others prefer the wrong audio
> system. I hope one day you start thinking about other stuff to do for
> speech-dispatcher than the ..... user session integration.
> The decision to use pulseaudio (only) for Ubuntu produced tons of mail from
> many unhappy users on the orca/speechd/ubuntu accessibility mailing lists.
> Almost every day people ask how to use sd as a system service, etc.
> BTW: it works really well this way!
> Starting parallel processes and letting them communicate through dbus will
> add more and more overhead to speechd and its dependencies.
> And it will only produce new issues, adding complexity without bringing
> really new features.
> Many other audio apps would need to be rewritten to be compatible with this
> new approach. Thanks to PA for that.
> Just my two cents.
> PS: @Luke, it doesn't make sense to ignore the users' wishes in this area.
> Read the mailing lists and talk with the people who are not able to use
> pulse with speechd.
> Talk also with other a11y projects and speechd users.
> Speechd mailing list
> Speechd at lists.freebsoft.org