There has, in my personal opinion, been very poor collaboration between
the accessibility community and Brailcom.
One example, in my opinion, of poor collaboration has been Brailcom's
extremely minimal involvement in the development process. A development
repository was opened by Luke so that some of us would be able to get
changes into speech-dispatcher. Many patches were sent to this list over
a long period, but there has been extremely minimal participation from
Brailcom, so the official repository was never updated and Brailcom
hasn't been responding to our patches.
We repeatedly asked Brailcom for direction on this list and were told
that they couldn't commit any resources to speech-dispatcher at this
time. So, speech-dispatcher was becoming a more important project to
the accessibility community, but the maintainers were not working on it,
and they had no idea when they would be able to work on it again.
Another concern I personally have is Brailcom deprecating their
speechd-up project, as well as the comments on the speechd-up project
page about speakup itself being replaced by other technologies. On the
contrary, speakup has a very active user base and shows no signs of
being replaced.
In my opinion, the fork happened because of poor collaboration and slow
responsiveness from the maintainers.
I understand that what Brailcom spends their time on is controlled by
funding, but they made minimal effort, if any, to work with the
community.
Yes, we could continue doing unofficial releases and waiting for
Brailcom to do official releases when they get funding, etc., but the
problem is that if we add functionality in the community releases, there
is no guarantee that it would be accepted by Brailcom in their official
releases.
I personally would be open to working on speech dispatcher, but there
would need to be a big change in the way Brailcom collaborates with the
community for me to be comfortable with that.
William
Regarding the remaining errors that you discussed in your message,
here's my log.
festival.c: In function `_festival_speak':
festival.c:630:3: warning: format `%ld' expects type `long int', but argument 3 has type `unsigned int'
festival.c:630:3: warning: format `%ld' expects type `long int', but argument 3 has type `unsigned int'
In file included from ibmtts.c:55:0:
/opt/IBM/ibmtts/inc/eci.h:366:1: warning: useless storage class specifier in empty declaration
espeak.c: In function `module_speak':
espeak.c:402:4: warning: format `%d' expects type `int', but argument 2 has type `wchar_t'
espeak.c: In function `espeak_play_file':
espeak.c:1149:2: warning: format `%ld' expects type `long int', but argument 3 has type `sf_count_t'
espeak.c:1149:2: warning: format `%ld' expects type `long int', but argument 3 has type `sf_count_t'
In file included from ivona.c:83:0:
ivona_client.c: In function `ivona_get_msgpart':
ivona_client.c:156:3: warning: pointer targets in passing argument 3 of `module_get_message_part' differ in signedness
module_utils.h:169:5: note: expected `unsigned int *' but argument is of type `int *'
ivona.c: In function `_ivona_speak':
ivona.c:341:7: warning: passing argument 2 of `ivona_get_msgpart' from incompatible pointer type
ivona_client.c:84:5: note: expected `char *' but argument is of type `char (*)[64]'
ivona.c:364:10: warning: passing argument 2 of `ivona_get_msgpart' from incompatible pointer type
ivona_client.c:84:5: note: expected `char *' but argument is of type `char (*)[64]'
The warnings in ivona.c are dangerous.
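To make the pattern concrete, here is a tiny self-contained example of the two
things gcc is warning about in the ivona code. The take_part() function is just
a stand-in with the same parameter shapes; it is not the real
module_get_message_part or ivona_get_msgpart prototype:

#include <stdio.h>

/* Stand-in with the same parameter shapes as the calls gcc flags above;
 * not the real speech-dispatcher prototypes. */
static void take_part(char *part, unsigned int *pos)
{
    part[0] = 'x';
    (*pos)++;
}

int main(void)
{
    char buf[64] = "";
    int pos = 0;            /* passing &pos would warn: int * where
                               unsigned int * is expected               */
    unsigned int upos = 0;  /* matches the prototype, no warning        */

    /* take_part(&buf, &upos); -- would warn: &buf has type char (*)[64],
                                  a pointer to the whole array, not char * */
    take_part(buf, &upos);      /* the array name decays to char *         */

    printf("%s %u %d\n", buf, upos, pos);
    return 0;
}

Cleaning those up in ivona.c/ivona_client.c should mostly be a matter of
making the declarations match what the prototypes expect.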
-- Chris
From the way I read the cooperation document, the final decision is the
reviewer's to make, so, since you pushed Chris's patch, that is what we
have. Also, since it was better than what was there before, I would say
it deserved to be pushed.
I will open another thread at a later time about my patch.
William
From what little I know of the patch, I'd suggest upgrading to the
latest Voxin.
-- Chris
From my understanding so far, confinement is something that is also
enforced at the kernel level, and I think it goes well beyond just
user/group ID restrictions; it even goes so far as to prevent an
application from using particular services unless it clearly declares
that they are needed for its operation.
I am still researching and trying to come to an understanding of
confinement; I just know that it is a thing for multiple desktop
environments going forward.
> > * Rework of the settings mechanism to use DConf/GSettings
> >
> > There was another good discussion about this back in 2010. You will find
> > this discussion in the same link I linked to above with regards to
> > Consolekit/LoginD. GSettings has seen many improvements since then, which
> > will help in creating some sort of configuration application/interface for
> > users to use to configure Speech Dispatcher, should they need to configure
> > it at all. Using GSettings, a user can make a settings change, and it can
> > be acted on immediately without a server or module restart. GSettings also
> > solves the system/user configuration problem, in that if the user has not
> > changed a setting, the system-wide setting is used as the default until the
> > user changes that setting. We could also extend the client API to allow
> > clients to have more control over Speech Dispatcher settings that affect
> > them, and have those settings be applied on a client by client basis. I
> > think we already have something like this now, but the client cannot change
> > those settings via an API.
>
> So, I think we can classify the config options into 3 categories.
>
> * server config (socket to listen on, log file etc)
>
> I think if you want to change this sort of thing then you don't really
> care about a nice UI, and text files are fine.
Or maybe even command-line only, with a reasonable set of defaults set
at build time.
Having said that, there may be use cases where an admin is deploying
systems in which tight control of logging content is required. With the
right backend, GSettings values can be locked down so that users cannot
change them; dconf certainly supports this. A text file stored only in a
system location for these values also works, but GSettings additionally
allows for vendor override files that can be put in place to set the
defaults. A text file would likely mean an admin has to edit it every
time the system or Speech Dispatcher package is updated.
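To make this concrete, here is a minimal sketch of how the server or a module
could consume a setting and react to changes on the fly via GSettings. The
schema id "org.freedesktop.SpeechDispatcher" and the key "default-rate" below
are hypothetical; nothing like them exists yet, and a real schema would have
to be designed and installed:

#include <gio/gio.h>

/* Called as soon as the user (or a vendor override) changes the key,
 * with no server or module restart needed. */
static void
on_rate_changed (GSettings *settings, gchar *key, gpointer user_data)
{
    gint rate = g_settings_get_int (settings, key);
    g_print ("default-rate is now %d\n", rate);
}

int
main (void)
{
    /* Hypothetical schema; g_settings_new() aborts if it is not installed. */
    GSettings *settings = g_settings_new ("org.freedesktop.SpeechDispatcher");
    GMainLoop *loop = g_main_loop_new (NULL, FALSE);

    g_print ("default-rate at startup: %d\n",
             g_settings_get_int (settings, "default-rate"));
    g_signal_connect (settings, "changed::default-rate",
                      G_CALLBACK (on_rate_changed), NULL);

    g_main_loop_run (loop);
    g_object_unref (settings);
    g_main_loop_unref (loop);
    return 0;
}

The same key could then be locked down or given a different default through
dconf without touching the code at all.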
> * audio
>
> I think this is somewhat the same as the previous, though maybe we need
> to get better at automatically doing the right thing first.
True, but I think the above applies here as well.
>
> * module stuff.
>
> I think we should allow clients to control that and then rip out the
> configuration options. I think in practice the only time we see people
> change this is when they want to control things they
> can't do from Orca.
Right, agreed.
> > * Separate compilation and distribution of modules
> >
> > As much as many of us prefer open source synthesizers, there are instances
> > where users would prefer to use proprietary synthesizers. We cannot always
> > hope to be able to provide a driver for all synthesizers, so Speech
> > Dispatcher needs an interface to allow synthesizer driver developers to
> > write support for Speech Dispatcher, and build it, outside the Speech
> > Dispatcher source tree.
>
> How is this not possible today? I expect if you drop an executable in
> /usr/lib/speech-dispatcher-modules/ and ask to use it speech dispatcher
> will use it, and the protocol is at least sort of documented.
Sure, but this would allow for scenarios like Debian being able to ship
a module for Speech Dispatcher that works with Pico, given that Pico is
non-free according to Debian's guidelines. At the moment, Debian users
have to use a generic config file via sd_generic, or rebuild Speech
Dispatcher themselves with Pico installed.
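For what it's worth, as I understand the current setup, a separately packaged
module binary dropped into /usr/lib/speech-dispatcher-modules/ would still
need to be registered in speechd.conf with something along these lines (the
"pico" module name, the sd_pico binary and pico.conf here are hypothetical
examples, not something that exists today):

AddModule "pico" "sd_pico" "pico.conf"

A cleaner out-of-tree interface would then mostly be about documenting the
module protocol properly and keeping it stable, so third-party packages like
that can rely on it.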
> > * Moving audio drivers from the modules to the server
> >
> > Another one that was not raised previously, but needs to be considered. I
> > thought about this after considering various use cases for Speech
> > Dispatcher and its clients, particularly Orca. This is one that is likely
> > going to benefit pulse users more than other audio driver users, but I am
> > sure people can think of other reasons.
> >
> > At the moment, when using pulseaudio, Speech Dispatcher connects to
> > pulseaudio per synthesizer, and not per client. This means that if a user
> > has Orca configured to use different synthesizers for say the system and
> > hyperlink voices, then these synthesizers have individual connections to
> > PulseAudio. When viewing a list of currently connected PulseAudio clients,
> > you see names like sd_espeak, or sd_ibmtts, and not Orca, as you would
> > expect. Furthermore, if you adjust the volume of one of these pulse
> > clients, the change will only affect that particular speech synthesizer,
> > and not the entire audio output of Orca. What is more, multiple Speech
> > Dispatcher clients may be using that same synthesizer, so if volume is
> > changed at the PulseAudio level, then an unknown number of Speech
> > Dispatcher clients using that synthesizer are affected. In addition, if the
> > user wishes to send Orca output to another audio device, then they have to
> > change the output device for multiple Pulse clients, a
> nd as a result they may also be moving the output of another Speech
> Dispatcher client to a different audio device where they don't want it.
>
> The first part of this seems like a shortcoming of using the pulse
> volume control instead of the one in orca, but anyway.
I'd argue that it has to do with the way audio output support is
implemented on the Speech Dispatcher side. It has nothing to do with
PulseAudio or any other audio output driver that is being used.
>
> couldn't we accomplish the same thing with less movement of lots of
> data by changing when we connect modules to the audio output?
We certainly could, but we would need to extend the audio framework so
that audio connections are handled per client, and probably use a
separate thread per client in the modules so that multiple lines of text
can be synthesized simultaneously if required. This is probably a better
way to go, as it is less disruptive.
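Roughly, I picture something like the following shape for the per-client
handling in a module. The audio_open/audio_play/audio_close calls and the
struct below are made up for illustration; they are not the current audio
API, just the general idea of one audio connection and one worker thread per
client:

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical per-client state: one audio connection and one worker
 * thread per Speech Dispatcher client. */
typedef struct {
    int client_id;
    void *audio;       /* e.g. one PulseAudio stream named after the client */
    pthread_t thread;
    char *text;        /* text currently queued for synthesis */
} client_audio_t;

/* Stand-ins for a real audio backend. */
static void *audio_open(const char *client_name) { (void)client_name; return NULL; }
static void audio_play(void *audio, const char *text) { (void)audio; (void)text; }
static void audio_close(void *audio) { (void)audio; }

static void *synth_worker(void *arg)
{
    client_audio_t *ca = arg;
    /* Because each client owns its connection, its stream can be moved or
     * volume-adjusted without touching other clients of the same module. */
    audio_play(ca->audio, ca->text);
    return NULL;
}

client_audio_t *client_audio_new(int client_id, const char *client_name,
                                 const char *text)
{
    client_audio_t *ca = calloc(1, sizeof *ca);
    ca->client_id = client_id;
    ca->audio = audio_open(client_name);
    ca->text = strdup(text);
    pthread_create(&ca->thread, NULL, synth_worker, ca);
    return ca;
}

void client_audio_free(client_audio_t *ca)
{
    pthread_join(ca->thread, NULL);
    audio_close(ca->audio);
    free(ca->text);
    free(ca);
}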
>
> > Actually, the choice of what sound device to use per Speech Dispatcher
> > client can be applied to all audio output drivers. In other words, moving
> > management of output audio to the server would allow us to offer clients
> > the ability to choose the sound device that their audio is sent to.
>
> I think allowing clients to choose where audio goes is valuable, and the
> way to implement audio file retrieval, but it seems to me we can manage
> the client -> audio output management in the server and just reconfig
> the module when the client it is synthesizing for changes.
Yep, agreed.
>
> > Please feel free to respond with further discussion points about anything I
> > have raised here, or if you have another suggestion for roadmap inclusion,
> > I'd also love to hear it.
>
> Well, I'm not actually sure if I think it's a good idea or not, but I
> know Chris has wished we used C++, and at this point I may agree with
> him.
If you can come up with good reasons why we should spend time rewriting
Speech Dispatcher in another language, re-solving problems that we have
previously faced and solved, all whilst still delivering improvements in
a reasonable amount of time, it would be worth considering. I was
thinking the same myself for a while, but I am no longer convinced it is
worth the time spent. Rather, I think it would be time wasted,
particularly since Speech Dispatcher in its current form works
reasonably well; it just needs some improvement.
Luke