Re: NSCharacterSet bloat?
From: Richard Frith-Macdonald
Subject: Re: NSCharacterSet bloat?
Date: Wed, 25 Jan 2006 10:11:25 +0000
On 24 Jan 2006, at 18:53, Derek Zhou wrote:
Richard Frith-Macdonald <richard@brainstorm.co.uk> writes:
On 23 Jan 2006, at 22:40, Derek Zhou wrote:
If that is not too big, can we merge them into base or make?
Of course we could ... but I can see no reason to do so, any more
than we should merge gcc into the base or make package (less reason,
in fact ... gcc is generally useful).
gcc (or any library that gnustep depends on) is publicly known free
software, while the utility used to produce the characterset is not
even mentioned on gnustep.org
Not sure what you are trying to say.
True, all the software (gcc, charset utilities, libraries etc) is
'publicly known free software', but that has nothing to do with being
mentioned on gnustep.org and has nothing to do with whether it gets
included in the base library (other than the fact that we would not
include anything that isn't free software, of course). I'm also not
sure why you talk about libraries that gnustep depends upon. Perhaps
you think that we are using private, non-free software here ... if
so, that is an incorrect and rather strange assumption which ought to
have been dispelled by looking at the readme file and/or the source
code.
Perhaps you want to do something worthwhile here though ... I can see
some value if you want to produce a new utility to reliably download
charset data from the unicode website, convert it to the format in
which it is used in the base library, compare it with the built-in
data, and produce a warning if the base library is no longer up to
date. That would save us the effort of doing it manually once every
few years.
I haven't taken a look at the current utility that compiles the
characterset, but assuming that works, modifying it to automate the
download from unicode.org, compare with the currently installed
characterset, and update if necessary shouldn't be too hard.
I was thinking that the value of an automated utility to check for
changes would be that we could just run it before each gnustep
release to make sure that we made a release which had the very latest
changes in unicode in it. In practice changes to unicode rules
requiring changes to the charset tables are extremely infrequent
(several years apart), so we could quite easily fail to notice them
for a while.
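The check described above could be sketched along these lines (a minimal sketch: the UCD URL is the standard public location on unicode.org, but the cache filename, function names, and warning format are assumptions for illustration, not part of any actual GNUstep tooling):

```python
import hashlib
import os
import urllib.request

# Public location of the Unicode character database file.
UCD_URL = "https://www.unicode.org/Public/UCD/latest/ucd/UnicodeData.txt"

def fetch_ucd(url=UCD_URL):
    """Download the current UnicodeData.txt and return its raw bytes."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def data_changed(new_data, cached_path):
    """Compare freshly downloaded data against a cached copy by SHA-256.

    Returns True if the cache is missing or the content differs,
    i.e. the built-in charset tables may be out of date.
    """
    if not os.path.exists(cached_path):
        return True
    with open(cached_path, "rb") as f:
        cached = f.read()
    return hashlib.sha256(new_data).hexdigest() != hashlib.sha256(cached).hexdigest()

def check_before_release(cached_path="UnicodeData.txt.cache"):
    """Intended to run once before each release: warn if unicode.org
    has published data newer than what the tables were built from."""
    new_data = fetch_ucd()
    if data_changed(new_data, cached_path):
        print("WARNING: unicode.org data differs from the cached copy; "
              "regenerate the charset tables before release.")
    else:
        print("Charset data is up to date.")
```

Since changes are years apart, a hash comparison like this is enough; the existing conversion utility would only need to be rerun when the warning fires.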
However, to be able to
update a live GNUstep system, I have to separate the characterset from
the binary, so it goes back to my original argument: separation
between code and data is a good thing.
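The separation being argued for is the familiar pattern of loading data from an external file at run time, with the compiled-in copy as a fallback. A minimal sketch of that pattern (the path, variable names, and byte values are all hypothetical, and this is not how GNUstep actually stores its charset tables):

```python
import os

# Hypothetical stand-in for charset tables compiled into the library.
BUILTIN_CHARSET_DATA = b"\x00\x01\x02"

def load_charset_data(path="/usr/local/share/charsets/unicode.bitmap"):
    """Prefer an external data file if one is installed; otherwise fall
    back to the built-in copy. Replacing the file then updates the data
    without rebuilding or replacing the library binary."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()
    return BUILTIN_CHARSET_DATA
```

As the reply below points out, this changes what must be replaced during an update but still requires restarting running software, so it is a trade-off rather than a clear win.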
If you want to change a live system, there are options:
1. extremely 'live' ... you patch the memory in the running
executables ... fairly straightforward, but even so, I would never
recommend it.
2. semi-live ... you replace the dynamic library and restart any
running software.
3. routine ... at the point where you make a new release of your
software live on your system, you also install a new release of gnustep.
Having the charset information in a separate file would change option
2 to 'you replace the data files and restart any running software',
but wouldn't make anything easier ... it just means you would have to
replace/update several data files rather than one library.
A good example is Linux: it used to have
some firmware embedded in the code; now Linux loads firmware at
run time. Also, there may be a legal concern here: by linking
unicode.org's data into the binary, gnustep may become a derivative
work of unicode.org.
I think it's better to stick to saying that you have a personal
aesthetic issue here ...
- our characterset data is not firmware
- our characterset data is not owned by unicode.org
- even if we included unicode datafiles, their licensing terms don't
differentiate between distribution as standalone data or in software.
Thanks