
Re: Broken dream of mine :(


From: William Leslie
Subject: Re: Broken dream of mine :(
Date: Sun, 11 Oct 2009 18:17:29 +1100

2009/10/8 Jonathan S. Shapiro <address@hidden>:
> On Mon, Oct 5, 2009 at 8:14 PM, William Leslie
> <address@hidden> wrote:
>>
>> 2009/10/6 Jonathan S. Shapiro <address@hidden>:
>> > Trivial example 1: The "readonly" keyword in C# is (correctly and
>> > necessarily) ignored by most C# compilers. Exercise for the reader:
>> > explain why.
>> Any discussion of optimisation needs to keep in mind the premise that
>> optimisations are only applicable if they preserve the semantics of
>> the language.  Any attempt to take advantage of readonly would need to
>> show that, on the domain of interest, no paths modify the region of
>> interest, and that there are no memory barriers; effect analysis of
>> this depth is very expensive if all you are getting out of it is to
>> show that a readonly is a loop invariant...
>
> Ahh. You are forgetting that both the Java and the C# runtime
> environments have preemptive concurrency.

Perhaps I understand the memory model of neither, then.  I mean to say
that effects of different threads only need to be ordered at memory
barriers, so effects on different regions can be coalesced within a
thread.  Consider the canonical example:

global_flag = True

def some_loop():
    while global_flag:
        something()

start_new_thread(some_loop)

something()

global_flag = False

In the absence of synchronisation primitives, under most memory
models, the some_loop thread is never guaranteed to see the update to
global_flag.  In these cases, a compiler is free to fold away
its lookup.  Do you have an example where, despite there being no memory
barriers and no writes on the region of interest, constant folding the
lookup violates the semantics of some interesting language?

(hint: writing to a region with callable type and other dynamic
dispatch doesn't work, because that must be part of the effect
analysis.)
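
To make the folding concrete: under such a model a compiler may hoist
the flag lookup out of the loop entirely.  CPython performs no such
transformation itself, so the sketch below applies it by hand; the
Event and the iteration bound are scaffolding I added so that the
demonstration terminates deterministically.

```python
import threading

global_flag = True
iterations = 0
hoisted = threading.Event()

def something():
    global iterations
    iterations += 1

def some_loop_as_compiled():
    # With no barrier in the loop body, a compiler may fold the
    # global_flag lookup into one read taken before the loop.
    flag = global_flag
    hoisted.set()
    while flag and iterations < 5:   # bound added so the sketch terminates
        something()

t = threading.Thread(target=some_loop_as_compiled)
t.start()
hoisted.wait()          # the hoisted read has already happened...
global_flag = False     # ...so this update is never observed by the loop
t.join()
print(iterations)       # prints 5: the loop hit the bound, not the flag
```

The loop runs to the artificial bound even though the flag was cleared
almost immediately, which is exactly the behaviour the folded lookup
licenses.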

> In many interesting cases,
> the analysis you want simply cannot be done unless you prohibit
> runtime class loading.

So in some cases you fail to prove that the optimisation can be
performed over the domain of interest.  This actually happens a lot;
you've just got to suck it up and concentrate on code you /can/
optimise.

>> , indeed, if you are doing
>> that kind of analysis, readonly is redundant.
>
> You misunderstand what readonly does. It allows you to state, for
> example, that a class member field is unchanging. In that situation,
> it is not at all redundant.

Indeed, I do.  I thought it was more like a const pointer (where the
compiler must ignore the constness because there may be non-const
aliases).

>> > Yes. They eliminate between 50% and 60% of current vulnerabilities.
>> >
>> > But be careful. You need to test and calibrate the runtime cost of this...
>
> I am reminded of a humorous comment that Jochen Liedtke once made:
> "Fast, yah! But correct? Eh."

Proving that, even in reply to yourself, you are always a quick wit.

>> > JIT code is bad because we don't know how to assure anything as
>> > complex as a JIT compiler.
>>
>> Any transformation a JIT compiler makes must preserve the semantics of
>> the original program, otherwise it would not be useful.
>
> That's a fine statement on paper. And like I said, it is
> *considerably* beyond the current state of the art to know whether a
> given JIT compiler actually meets this requirement.

A given JIT compiler, definitely.  JIT compilers are a lot of code,
which quickly grows bugs as it is modified to support more language
features and more special cases for optimisation.  There is a lot of
work that a JIT must do, such as generate guards and enumerate
assumptions, patch loop or function entry points, do register
allocation and transliteration into machine code, on top of the actual
optimisations that are made.  They also have to be fast to be useful
(although I know of at least one JIT written entirely in a safe
language, it spends a significant fraction of its time in the GC).
Are safety and speed here mutually exclusive goals?  I don't think I
know enough to say no.
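The guard-and-assumption bookkeeping mentioned above can be
caricatured in a few lines.  This is a hand-rolled sketch, not
modelled on any particular JIT, and all the names are invented:

```python
def interpret_add(x, y):
    # The slow, fully general path the interpreter always has available.
    return x + y

def specialize_add(x_type, y_type):
    # "Compile" a version of add that is only valid under the recorded
    # type assumptions; the guard re-checks them on every entry.
    def compiled(x, y):
        if type(x) is not x_type or type(y) is not y_type:
            return interpret_add(x, y)   # guard failed: deoptimise
        return x + y                     # fast path, types known
    return compiled

fast_add = specialize_add(int, int)
```

Here fast_add(2, 3) takes the specialised path, while
fast_add("a", "b") trips the guard and falls back to the interpreter.
A real JIT additionally has to invalidate compiled code when an
assumption is broken globally (by class loading, say), which is where
much of the complexity, and many of the bugs, live.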

Further, even if it *were* an effort you decided to undertake, I don't
think it would have been the ideal starting point.  The majority of
the enormous body of code we use on a daily basis is either written
in an unsafe language or runs on a VM written in an unsafe language.
Rewriting VMs and applications to take advantage of a safe execution
environment is certainly a good idea, but good ideas mean nothing
while they are not yet feasible.  That's why a mechanism for unsafe
execution had to be created first; the optimisation for safe languages
can come later.

>>  Since the
>> program must have been shown to be safe to have been compiled the
>> first time...
>
> How do you know that this check was actually performed?

The how is probably beyond the scope of this discussion, but I
imagined that this would be information that the constructor would
know.  A thread could hold 'jit safe' capabilities, which identify the
address spaces into which the constructor has placed the code for this
safe program.  Communicating processes could then hand these to a JIT
which operates on their behalf.  The JIT would then verify that both
constructors trust each other.
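A sketch of that handshake, with every name hypothetical (Constructor,
JitSafeCap and the trust sets are mine for illustration, not any
existing Coyotos or Hurd API):

```python
class Constructor:
    """Stands in for the component that verified and placed the code."""
    def __init__(self, name):
        self.name = name
        self.trusted = set()    # constructors this one trusts
        self.placed = set()     # address spaces it has filled with code

    def place_code(self, address_space):
        self.placed.add(address_space)
        return JitSafeCap(self, address_space)

class JitSafeCap:
    """A 'jit safe' capability naming verified code in an address space."""
    def __init__(self, constructor, address_space):
        self.constructor = constructor
        self.address_space = address_space

def jit_may_operate(cap_a, cap_b):
    # The JIT acts on behalf of both processes only if their
    # constructors trust each other; otherwise it refuses.
    a, b = cap_a.constructor, cap_b.constructor
    return b in a.trusted and a in b.trusted
```

Until both sides register trust, jit_may_operate returns False, which
is the cooperative-and-verified property: the JIT never touches an
address space whose constructor has not opted in.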

The way I see it, the mechanism would have to be cooperative and
verified.  I guess that does diminish its usefulness, though.

William Leslie



