
Speech Dispatcher roadmap discussion.


From: kendell clark
Subject: Speech Dispatcher roadmap discussion.
Date: Wed, 08 Oct 2014 18:17:38 -0500

Hi,
I'll add one more note. Given all this talk about speakup
permissions, logind, etc., will this affect espeakup? It's been
largely abandoned if I recall correctly, but it's still very stable.
I've tried, and failed, to get speechd-up running under Arch here.
This isn't likely to get solved soon, but it *does* need to get
solved eventually, and preferably in a way that isn't
systemd-specific, so that BSD, Solaris, etc. can take advantage of
it too, if possible.

Thanks,
Kendell Clark
On 10/08/2014 05:36 PM, Luke Yelavich wrote:
> CCing to the speechd list, and thanks for your feedback.
> 
> On Wed, Oct 08, 2014 at 08:02:35PM AEDT, Halim Sahin wrote:
>> Hi, on 08.10.2014 09:32, Luke Yelavich wrote:
>>> * Implement event-based main loops in the server and modules
>> 
>> Pros and cons?
> 
> OK, I can think of a few pros, but cannot think of any cons.
> Someone feel free to chime in if you can think of any.
> 
> * Pro - It is non-blocking, which allows Speech Dispatcher to do
>   other tasks even whilst it is waiting for client activity.
> * Pro - Allows the possibility of timer-based events, i.e. if there
>   are no clients connected and the server has waited for a defined
>   and configurable period of time, it can perform an action, e.g.
>   shut itself down.
> * Pro - Allows event prioritization, i.e. if you want the servicing
>   of currently connected clients to come first, that can be
>   assigned a higher priority.
> * Pro - Allows for the possibility of inotify-based file monitoring
>   if desired, which means the file descriptors are not constantly
>   being polled, which means less system resource usage. It should
>   be noted that inotify is Linux-specific, but other platforms do
>   offer something similar. Since GLib would likely be doing the
>   work, it will depend on support being implemented in GLib for
>   the platform in question.
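> 
> To make the idea concrete, here is a minimal sketch of such a loop
> using GLib through PyGObject. The socket and the 30 second idle
> timeout are hypothetical stand-ins, not actual Speech Dispatcher
> code:
> 
>     import socket
>     from gi.repository import GLib  # PyGObject
> 
>     IDLE_TIMEOUT = 30  # hypothetical configurable shutdown delay
> 
>     # Stand-in for the SSIP listening socket.
>     srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>     srv.bind(("127.0.0.1", 0))
>     srv.listen(5)
> 
>     loop = GLib.MainLoop()
> 
>     def on_client(fd, condition):
>         conn, addr = srv.accept()
>         print("client connected:", addr)
>         conn.close()
>         return GLib.SOURCE_CONTINUE  # keep watching the socket
> 
>     def on_idle():
>         print("idle for %d seconds, shutting down" % IDLE_TIMEOUT)
>         loop.quit()
>         return GLib.SOURCE_REMOVE  # one-shot timer
> 
>     # Non-blocking watch on the socket, plus a timer-based event.
>     GLib.io_add_watch(srv.fileno(), GLib.PRIORITY_DEFAULT,
>                       GLib.IO_IN, on_client)
>     GLib.timeout_add_seconds(IDLE_TIMEOUT, on_idle)
>     loop.run()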
> 
>>> * Assess whether the SSIP protocol needs to be extended to
>>> better support available synthesizer features
>> ok.
>> 
>>> * Assess DBus use for IPC between client and server
>> 
>> I see _no_ advantage to using D-Bus at all. I don't think that a
>> console screen reader like sbl or brltty should use D-Bus to talk
>> to speechd. This would only add more complexity to clients!
> 
> I personally agree with this, but it was on the original roadmap,
> so I decided to add it anyway. If enough evidence can be presented
> as to why we should use DBus, then I am willing to reconsider.
> 
>>> * SystemD/LoginD integration
>> Ok In fact most distros are using unfortunately systemd. Please
>> keep backward compatibility. Maybe other systems like **bsd*
>> should be able to use speechd without having systemd running.
> 
> systemd is Linux-specific, so any support for ConsoleKit/systemd
> etc. would be enabled at build time, and it will have to be
> modular, given the need to support different *nix flavours.
> 
>>> * Rework of the settings mechanism to use DConf/GSettings
>> If you realy plan to change the settings stuff to gsettings, make
>> sure that: 1. speechd can be configured without running complete
>> Desktop like gnome.
> 
> That is certainly possible. GSettings does have a command-line
> tool (gsettings) to allow for this.
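> 
> For instance, a hypothetical sketch of what that could look like
> from Python (the schema ID and key are made up; a real schema
> would have to ship with speechd and be installed first):
> 
>     from gi.repository import Gio
> 
>     # Hypothetical schema ID and key, for illustration only.
>     settings = Gio.Settings.new("org.freedesktop.speechd")
>     settings.set_int("default-rate", 50)
>     print(settings.get_int("default-rate"))
> 
>     # Equivalent from a plain text console, no desktop required:
>     #   gsettings set org.freedesktop.speechd default-rate 50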
> 
>> 2. Settings should be accessible without installing a desktop,
>> because many embedded systems don't have GNOME installed. In that
>> case speechd needs to be configurable from a plain text console.
> 
> What I had in mind was to refactor spd-conf to be a multi-UI
> configuration tool, in that it can provide a text UI on the
> console and, if desired, a GUI for graphical desktops. Given that
> approach, I am sure we can satisfy point 2 that you raise.
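> 
> A trivial sketch of that kind of multi-UI dispatch (the front-end
> functions are stubs, purely for illustration):
> 
>     import os
> 
>     def run_text():
>         print("text-mode configuration UI (stub)")
> 
>     def run_gui():
>         print("graphical configuration UI (stub)")
> 
>     # Fall back to the text UI when no graphical session is
>     # present.
>     if os.environ.get("DISPLAY"):
>         run_gui()
>     else:
>         run_text()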
> 
>>> * Separate compilation and distribution of modules
>> 
>> FULLACK.
>>> * Consider refactoring client API code such that we only have
>>> one client API codebase to maintain, i.e python bindings
>>> wrapping the C library etc
>> 
>> My thought: maintain only the C API and use something like ctypes
>> to use it from Python.
> 
> Given I am not strong in Python, this is one option I always
> forget about. Indeed, using ctypes is an option, but perhaps we
> could wrap it in a little Python code to make it easier for
> developers, so that they do not need to know how to use ctypes
> directly.
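> 
> A minimal sketch of such a wrapper. The spd_open/spd_say/spd_close
> functions are the real libspeechd C API; the Speaker class is just
> an illustration, with enum values copied from libspeechd.h:
> 
>     import ctypes
> 
>     lib = ctypes.CDLL("libspeechd.so.2")
>     lib.spd_open.restype = ctypes.c_void_p
>     lib.spd_open.argtypes = [ctypes.c_char_p, ctypes.c_char_p,
>                              ctypes.c_char_p, ctypes.c_int]
>     lib.spd_say.argtypes = [ctypes.c_void_p, ctypes.c_int,
>                             ctypes.c_char_p]
>     lib.spd_close.argtypes = [ctypes.c_void_p]
> 
>     SPD_TEXT = 3         # SPDPriority, from libspeechd.h
>     SPD_MODE_SINGLE = 0  # SPDConnectionMode, from libspeechd.h
> 
>     class Speaker:
>         """Thin wrapper hiding the ctypes plumbing."""
>         def __init__(self, name):
>             self._conn = lib.spd_open(name.encode(), b"main",
>                                       None, SPD_MODE_SINGLE)
>             if not self._conn:
>                 raise RuntimeError("could not connect to speechd")
> 
>         def say(self, text):
>             lib.spd_say(self._conn, SPD_TEXT, text.encode())
> 
>         def close(self):
>             lib.spd_close(self._conn)
> 
>     s = Speaker("demo")
>     s.say("Hello from ctypes")
>     s.close()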
> 
>>> * Moving audio drivers from the modules to the server
>> Hmm, this will make things unstable for a long time. I am not
>> sure this is really a problem; perhaps it's only a problem with
>> PulseAudio.
> 
> As I said, it's not entirely PulseAudio-specific. Even with libao
> or ALSA, you may want to have one Speech Dispatcher client coming
> out of one sound card and another coming out of a second sound
> card, while both clients may wish to use the same speech
> synthesizer. Moving audio output management to the server would
> allow for this. This is something that would likely require a
> separate feature branch to be developed, and kept in sync with
> master, until it is stable enough to be merged.
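> 
> A hypothetical sketch of the kind of server-side routing this
> would enable (device and client names are examples only; a real
> implementation would write to ALSA/libao/PulseAudio rather than
> print):
> 
>     class AudioRouter:
>         """Route each client's audio to its chosen output device."""
>         def __init__(self, default_device="hw:0"):
>             self.default = default_device
>             self.routes = {}  # client id -> device name
> 
>         def set_device(self, client, device):
>             self.routes[client] = device
> 
>         def play(self, client, samples):
>             device = self.routes.get(client, self.default)
>             print("%s: %d bytes -> %s" % (client, len(samples),
>                                           device))
> 
>     # Two clients, same synthesizer, different sound cards.
>     router = AudioRouter()
>     router.set_device("client-a", "hw:0")
>     router.set_device("client-b", "hw:1")
>     router.play("client-a", b"\x00" * 512)
>     router.play("client-b", b"\x00" * 512)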
> 
>> Since the PulseAudio integration, there has been no easy way to
>> get audio running in a plain text console. This produces many
>> problems for new users. Please give it a higher priority and
>> let's try to find a working solution for it.
> 
> This issue, while affecting PulseAudio, also affects plain ALSA
> use. With distros now using session and seat management with
> ConsoleKit/LoginD, the problem has to do with the permissions of
> the various device nodes needed for hardware access, and yes, even
> speakup falls into this category. As it stands now, it is possible
> to run Speech Dispatcher et al in your text console user session,
> and all should work. The problem arises when you want an
> accessible text login. The reason why GUI login systems work is
> that the GUI login manager runs as its own user and is treated
> like a logged-in session, allowing audio devices etc. to work.
> 
> What is needed is a special session type added to
> ConsoleKit/LoginD. This special session would be run as a separate
> user, but that user would get access to audio, console, etc.
> whenever the active VT does not have a logged-in user. In other
> words, you switch to a text console with a login prompt, and this
> special session type is marked as the active session. As a result,
> all processes running under that special session, i.e. Speech
> Dispatcher and speechd-up, get access to the audio device hardware
> and the speakup device node, allowing the user to read the console
> at login.
> 
>> BTW: the loading of output modules should be reworked. In my
>> opinion, only modules for installed synths should be loaded, not
>> all available output modules.
> 
> Agreed.
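> 
> A minimal sketch of that check (module and executable names are
> examples only):
> 
>     import shutil
> 
>     # Output module -> synthesizer executable it depends on.
>     MODULES = {"espeak": "espeak",
>                "festival": "festival",
>                "flite": "flite"}
> 
>     # Only load modules whose synthesizer is actually installed.
>     to_load = [m for m, exe in MODULES.items()
>                if shutil.which(exe)]
>     print("loading modules:", to_load)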
> 
> Thanks again for your feedback.
> 
> Luke


