RFC: #2 Separating audio output system for session integration


From: Halim Sahin
Subject: RFC: #2 Separating audio output system for session integration
Date: Sat, 28 Aug 2010 04:24:26 +0200

Hi,
On Fri, Aug 27, 2010 at 06:30:52 +0000, Andrei.Kholodnyi at gmail.com wrote:
>    Hi Halim,
>    Yes, it helps.
>    I have taken a look at speechd.conf and
>    you are right, there are no user-specific options in it.

:-).

>    > The approach of binding the speechd server to the currently active
>    > session would increase complexity in the screen reader start process.
>    Could you please explain it in more detail?

Yes: AFAIK the plan for integrating speechd into the user session was to
start it in several sessions:
1. a pseudo session for login (console),
2. the gdm session (for graphical login a11y),
3. the user session (after login).

This approach has some problems:
1. We can't move the text-mode screen readers into userspace, because sbl
and brltty need access to device nodes which can't be accessed as a
normal user.
2. During startup sbl would need many restarts or reconnections to
speechd, because speechd would be restarted frequently as described.
When sbl starts in the init process it will connect to the speechd of a
pseudo/idle session.
This would make text-mode login accessible.
After login that speechd would be invalid, so sbl would need to
reconnect to a speechd running in the user session, and so on.
3. Using the unix-socket feature is difficult when sd runs in a
user session and sbl runs as a system service:
different users would have different locations for speechd's unix
socket :-(.
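
To illustrate point 3: a system service like sbl can't guess the per-user
socket path of the currently active session, while a system-wide sd could
listen on one well-known path. A rough Python sketch (the paths are only
assumptions of mine, not speechd's real defaults):

  # Minimal sketch of the socket-path problem. The paths below are
  # illustrative assumptions, not the real speech-dispatcher defaults.
  import socket

  def connect(path):
      s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
      s.connect(path)
      return s

  # Per-user socket: a system service like sbl has no reliable way to know
  # which user (and therefore which path) belongs to the active session.
  # per_user = "/home/<some user>/.speech-dispatcher/speechd.sock"

  # System-wide socket: a single well-known path, reachable no matter
  # which session is currently active.
  conn = connect("/var/run/speech-dispatcher/speechd.sock")
  conn.close()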

This could be avoided by simply running sd as a system service.
In fact, sd is a server and not a user client.
Only the audio part needs some work to integrate it with the active
session.
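
To make that last point a bit more concrete, the session-specific piece
could be as small as the following rough Python sketch. Everything in it is
hypothetical: the control socket path, the SET AUDIO_TARGET message and the
PulseAudio locations are assumptions of mine, not existing speechd features.
It only shows the idea of a per-session helper announcing its sound server
to a system-wide sd.

  # Hypothetical per-session helper: announce this session's sound server
  # to a system-wide sd. The socket path and the control message are
  # assumptions, not part of the existing speechd protocol.
  import os
  import socket

  CONTROL_SOCKET = "/var/run/speech-dispatcher/control.sock"  # assumed path

  def announce_session_audio():
      # Guess where this session's PulseAudio daemon listens; both values
      # are just common conventions, not something sd defines.
      pulse = os.environ.get("PULSE_SERVER",
                             os.path.expanduser("~/.pulse/native"))
      s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
      s.connect(CONTROL_SOCKET)
      # Hypothetical protocol extension: point audio output at this session.
      s.sendall(("SET AUDIO_TARGET %s\r\n" % pulse).encode("utf-8"))
      s.close()

  if __name__ == "__main__":
      announce_session_audio()

Such a helper would simply be started from the user's session (e.g. by the
desktop autostart mechanism), while sd itself stays a system service.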
HTH.
Halim



