speechd-discuss

WIP audio in server


From: Luke Yelavich
Subject: WIP audio in server
Date: Fri, 12 Feb 2016 17:16:27 +1100

On Fri, Feb 12, 2016 at 02:45:55PM AEDT, Jeremy Whiting wrote:
> Hi Andrei,
> 
> On Thu, Feb 11, 2016 at 1:51 PM, Andrei Kholodnyi
> <andrei.kholodnyi at gmail.com> wrote:
> > Hi Jeremy,
> >
> > I'm glad to see that we have common understanding on this topic.
> > The server shall handle client connections, and the client shall handle data.
> >
> > Currently it is not like this, and I think we need to put effort into fixing it.
> > I really like your idea to get audio back from the modules, but it shall go
> > directly to the client.
> 
> Yeah, sending the audio data back to each client makes sense.
> Especially as most libspeechd users likely have some sound output
> mechanism of their own. Recently a client evaluated speech-dispatcher
> and decided to write their own library that does much of what it
> does, but gives them the audio back rather than playing it itself.
> There were other reasons they decided to write their own rather than
> use speech-dispatcher (proprietary speech synthesizer, etc.) but
> that's one of the reasons.

OK, so what about clients like Orca? Orca is getting support for playing audio 
for progress bar beeps, but that uses GStreamer, and is likely being developed 
such that latency is not a concern. I am pretty sure that it doesn't make sense 
for Orca to manage the audio for its speech.
> 
> > Also I'm not sure we need to mix metadata and audio in one stream.
> 
> Yeah, I don't like mixing them either, but I wasn't sure how to
> separate them. I guess we could have two sockets, one for metadata and
> the other for the raw audio data, or something like that. Do you have
> something in mind?

Yeah, sounds reasonable.
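
Something like the sketch below is what I'd imagine on the client side: one
connection for the SSIP metadata and a second one on which the server pushes
the raw audio back. This is only an illustration of the idea, not how
speech-dispatcher works today; the audio socket path and the raw PCM stream
are made up for the example.

/*
 * Sketch only: a client talking to the server over two Unix sockets,
 * one for SSIP commands/metadata and one for raw audio pushed back by
 * the server.  The audio socket path and the raw stream are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int connect_unix(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr;

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror(path);
        exit(1);
    }
    return fd;
}

int main(void)
{
    /* Metadata (SSIP) connection: commands, replies, index marks. */
    int ctl = connect_unix("/run/user/1000/speech-dispatcher/speechd.sock");
    /* Hypothetical second connection carrying only raw audio. */
    int audio = connect_unix("/run/user/1000/speech-dispatcher/audio.sock");

    /* Ask for some speech on the metadata socket. */
    const char *cmd = "SPEAK\r\nHello world\r\n.\r\n";
    if (write(ctl, cmd, strlen(cmd)) < 0)
        perror("write");

    /* The client, not the server, now decides what to do with the
     * samples: play them, buffer them, or hand them to its own mixer. */
    char buf[4096];
    ssize_t n;
    while ((n = read(audio, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(audio);
    close(ctl);
    return 0;
}

With a split like that, a client such as Orca could keep using the metadata
socket exactly as it does now and simply never open the audio one, so the
server would keep playing audio for clients that don't want to manage it
themselves.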

Luke


