Speech Dispatcher roadmap discussion.
From: Luke Yelavich
Subject: Speech Dispatcher roadmap discussion.
Date: Fri, 10 Oct 2014 12:37:24 +1100
On Fri, Oct 10, 2014 at 06:55:08AM AEDT, Trevor Saunders wrote:
> On Wed, Oct 08, 2014 at 06:32:09PM +1100, Luke Yelavich wrote:
> > Hey folks.
> > This has been a long time coming. I originally promised a roadmap shortly
> > after taking up Speech Dispatcher maintainership. Unfortunately, as is
> > often the case, real life and other work related tasks got in the way,
> > however I am now able to give some attention to thinking about where to
> > take the project from here. It should be noted that a lot of what is here
> > is based on roadmap discussions back in 2010(1) and roadmap documents on
> > the project website.(2) Since then, much has changed in the wider *nix
> > ecosystem, and there have been some changes in underlying system services,
> > and there are now additional requirements that need to be considered.
> >
> > I haven't given any thought as to version numbering at this point, I'd say
> > all of the below is 0.9. If we find any critical bugs that need fixing, we
> > can always put out another 0.8 bugfix release in the meantime.
> >
> > The roadmap items, as well as my thoughts are below.
> >
> > * Implement event-based main loops in the server and modules
> >
> > I don't think this requires much explanation. IMO this is one of the first
> > things to be done, as it lays some important groundwork for other
> > improvements as mentioned below. Since we use GLib, my proposal is to use
> > the GLib main loop system. It is very flexible, and easy to work with.
>
> I'm not seeing how any of the things below actually depend on changing
> this, or how you're distinguishing select(2) from "event based".
Ok, currently we use select with no timeout, so the main server loop waits for
select to return activity on any of the file descriptors. We would have to
change the main loop implementation such that we can receive events when the
active session changes with LoginD/ConsoleKit, as well as any settings change
events when a setting is changed. Even if we were to still use a file-based
config system, we could use file monitoring via GLib to watch for file activity
on the config files, and act on those events.
I am of the opinion that it is easier to use code that is already written as
part of one of the supporting libraries we use, rather than re-implement a main
loop ourselves, so that we can spend more time improving Speech Dispatcher
itself.
> > * Assess DBus use for IPC between client and server
> >
> > Brailcom raised this back in 2010, and the website mentions analysis being
> > required, however I have no idea what they had in mind. Nevertheless, using
> > DBus as the client-server IPC is worth considering, particularly with
> > regards to application confinement, and client API, see below. Work is
> > ongoing to put the core part of DBus into the kernel, so once that is done,
> > performance should be much improved.
> >
> > It's worth noting that DBus doesn't necessarily have to be used for
> > everything. DBus could be used only to spawn the server daemon and nothing
> > else, or the client API library could use DBus just to initiate a
> > connection, setting up a unix socket per client. I haven't thought this through,
> > so I may be missing the mark on some of these ideas, but we should look at
> > all options.
>
> I'm not really sure what the point would be, especially since we'd want
> to keep unix sockets / tcp for backwards compat with things that don't
> use libspeechd. In theory using an existing IPC framework seems nice,
> but given the dbus code I've read I'm not convinced it's actually any
> better.
Yeah, as I said in my reply to Halim, I don't personally agree with this, but I
added it since it was on the original roadmap.
>
> > * Support confined application environments
> >
> > Like it or not, ensuring applications have access to only what they need is
> > becoming more important, and even open source desktop environments are
> > looking into implementing confinement for applications. Unfortunately no
> > standard confinement framework is being used, so this will likely need to
> > be modular to support apparmor/whatever GNOME is using. Apparmor is what
> > Ubuntu is using for application confinement going forward.
>
> Well, in principle it makes sense, although getting that right on unix
> within user ids is pretty footgun prone. Anyway, presumably people will
> use policies that don't restrict things that don't specify what they
> need; if you don't do that, of course things will break, and I'll
> probably say that's not my fault.
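To make the confinement idea a little more concrete, a confined client's AppArmor policy might need rules along these lines. This is a hypothetical fragment; the socket and config paths are assumptions and vary by Speech Dispatcher version and distribution:

```
# Hypothetical AppArmor profile fragment for a client confined while
# talking to Speech Dispatcher over its unix socket (paths assumed):
/run/user/[0-9]*/speech-dispatcher/speechd.sock rw,
owner @{HOME}/.speech-dispatcher/ r,
owner @{HOME}/.speech-dispatcher/** r,
```

Whatever framework GNOME settles on would need an equivalent grant, which is why keeping this modular matters.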