Andrew Haley wrote:
Maybe, but that's not the only thing. It's possible to define jbyte
so that it is an 8 bit signed value but not a character type, and JNI
does not forbid this. I suspect that all the platforms we use define
jbyte to be a character type, but I can see no overpowering reason to
introduce a dependency on that.
"jbyte" must have a single platform-specific definition, as all JVMs on that
platform should be able to execute the same JNI library code (no recompilation
required). I would argue that if a char type has 8 bits on a platform, there
is a strong case for it to be defined as "typedef signed char jbyte", and I
would guess VM implementors would be very unhappy if Sun (or any other vendor)
decided to define jbyte otherwise on that platform.
BUT.... I agree, it could be false on some system. So, assuming the worst-case
scenario, I have attached an updated version of my byte array proposal that
is, as far as I can tell, robust across all possible platforms.
It contains 2 utility "inline" functions: wrap and unwrap. I provide 2
versions of each: one for systems where jbyte == signed char, and one for
systems where jbyte != signed char. A test function, ideal for use in an
autoconf macro, is provided that issues a warning when jbyte != signed char.
So, Andrew, does this version pass the portability test?
I would really like to see the native counterpart of your opaque types and
compare the "theoretical" performance of it relative to the byte array
proposal.
Etienne