[Axiom-developer] [#18 Axiom domains and Aldor return types] (new)


From: anonymous
Subject: [Axiom-developer] [#18 Axiom domains and Aldor return types] (new)
Date: Mon, 17 Jan 2005 21:28:45 -0600

Re: [Axiom-developer] Axiom domains and Aldor return types
From: William Sit
Subject: Re: [Axiom-developer] Axiom domains and Aldor return types
Date: Mon, 17 Jan 2005 00:56:39 -0500

Steve:

Stephen Wilson wrote:

> First, let me thank you for your analysis! I am finding the questions
> raised here very interesting. But I'm still on the other side of the
> fence for the moment :)

I have no problem with that :-) If you read my toss-and-turn part, you'll see I know
what your reasons are and certainly respect that point of view as well. In some
sense, I am playing devil's advocate.
 
> > (I) The algorithm, which is really for S, is now an algorithm for R.
> > It hides the existence of S completely.  It also hides the relevance
> > of p:P.
> 
> The algorithm never needs to know S explicitly. If anything, the
> algorithm involves `something' which satisfies C(R,p). However, the
> meaning of a call to bar(p:R, ...)$Foo will in general carry a level
> of meaning all its own, which need not communicate a dependence on S
> (or C(R,p)). I don't see how this level of semantics involves the
> problems we are discussing.

On the contrary, it has a lot to do with it! Let me recap your Foo package here for
discussion:
---------
   Foo(R: ModularComputation): with { ... } == add {
      bar(r: R, p: R): R == {
         elem: ResidueClassRing(R, p) :=
            modularRep(r)$residueClassRing(p)

         -- hairy computation...

         canonicalPreImage(elem)
      }
   }
---------

Two comments: (1) Axiom can also hide S as you prefer:

    Foo(R, p): T == C where
      R: ModularCategory
      p: Ideal(R)
      Q ==> any domain constructed from R, p
      T == with
        bar: R -> Q
      C == add
        S := SomeConstructionForResidueClassRing(R, p)
        import S
        bar(r) ==
          elem: S := modularRep(r)$S
          -- hairy computations
          q: Q := ...

    Calling sequence in some other package, assuming R, p are already
    defined:

       bar(r)$Foo(R,p)
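The contrast between the two packagings can be sketched in Python. This is only an analogy, not Axiom or Aldor code: IntegerMod, modular_rep, and canonical_preimage are illustrative stand-ins for the domains and operations discussed above.

```python
class IntegerMod:
    """Stand-in for one representation of a residue class ring Z/pZ."""
    def __init__(self, p):
        self.p = p
    def modular_rep(self, r):
        return r % self.p
    def canonical_preimage(self, elem):
        return elem  # representative in 0..p-1

def bar_with_explicit_S(r, S):
    """Analogue of Foo(R, p, S): the residue ring S is a parameter,
    so the caller can swap in any representation of S."""
    elem = S.modular_rep(r)
    # ... hairy computation on elem ...
    return S.canonical_preimage(elem)

def bar_with_hardwired_S(r, p):
    """Analogue of Foo(R, p) with S hidden: the construction of S is
    fixed inside, at the cost of hardwiring one representation."""
    S = IntegerMod(p)            # the hardwired choice
    elem = S.modular_rep(r)
    return S.canonical_preimage(elem)

print(bar_with_explicit_S(17, IntegerMod(5)))   # caller chose S -> 2
print(bar_with_hardwired_S(17, 5))              # S chosen for you -> 2
```

Both calls compute the same thing; the difference is only in who gets to choose the representation of S, which is the point being argued.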

But notice that in such a case, S is fixed (hardwired), much like your
residueClassRing(p) is fixed because of encapsulation in ModularComputation. The
original way, with S as a parameter, allows easy changes of the representation of S
(hey, except for the signature, bar(r)$Foo(R,p,S) is really a function of 4
parameters).  The Aldor way would need to replace R by R1, which is another
wrapper around the base ring R. On the other hand, in the original way Q can
depend on S, and now it cannot. If Aldor allows signatures like:

    bar:(r:R, p:Ideal(R)) -> BAE(R, p, residueClassRing(p))

that would be more general. But I worry about efficiency here: each
evaluation of bar would require a new instantiation of
residueClassRing(p) even if p is fixed! I doubt very much that the compiler will
optimize the code for this special, but perhaps common, situation. In the Axiom
version, because bar does not depend on p (or rather, p is fixed with respect to
bar), only one instantiation will be done.
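The efficiency worry can be made concrete with a small Python sketch: if the domain constructor is memoized (here via `functools.lru_cache`, standing in for whatever caching an optimizing compiler might or might not perform), repeated calls with the same p cost only one instantiation. The names residue_class_ring and bar are illustrative, not real library calls.

```python
from functools import lru_cache

instantiations = 0

@lru_cache(maxsize=None)
def residue_class_ring(p):
    """Stand-in domain constructor: building a domain is expensive,
    so repeated calls with the same p should reuse one instance."""
    global instantiations
    instantiations += 1
    return {"p": p}  # placeholder for a real domain object

def bar(r, p):
    S = residue_class_ring(p)   # looks like a fresh instantiation...
    return r % S["p"]           # ...but the cache makes it one per p

for r in range(2049):           # echoing the 2049 evaluations below
    bar(r, 7)
print(instantiations)           # 1: one instantiation despite 2049 calls
```

Without the cache decorator, the constructor would run 2049 times, which is exactly the repeated-selection behavior described in the bug report that follows.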
In a recent message, 

   [Axiom-developer] A terrible bug: romberg+simplify/expand: slowdown of 125/300 times

Vladimir gave an example where Axiom does repeatedly invoke a map function from
EF for each of the 2049 evaluations of the function in:

   romberg(z+->simplify(%i^2), 0, 1, 0.1, 0.1, 10, 12)

(use ")set mess bot on" to see the 2049 selections pass you by), even after the
function z+->simplify(%i^2) is compiled as in

   g: (Complex Integer, Integer)->Integer

The interpreter somehow invokes a lift from Complex Integer->Integer
to work over EXPR COMPLEX INT -> EXPR INT simply because %i was used.

But I agree that if you want to hide p, then Axiom cannot handle this because
the signature for bar would not be available.

(2) If you push your argument further, we can say that everything we compute
only depends on what we know about the integers. We could say there is no need to
communicate a dependence on, say, a polynomial ring over the integers because all
algorithms only work on the integers (the exponents of the monomials are also
integer vectors). But the way mathematics is developed is to create new objects.
If the algorithm is really for the new object (the residue class ring or the
polynomial ring), then that should be reflected in the signature of the code. If
you think the algorithm is only for R, sure, you can hide the new object even if
you need it for computation.  Of course, one can, like algebraic geometers, take
the view of Spec R, which encapsulates all its local rings, but that would be
fine only if you have algorithms on sheaves. On the other hand, I think at these
high levels of abstraction, it is important to distinguish the data structure for
an object from the algorithms for it or its derived objects, and not to mix them
up. That is the reason package constructors in Axiom are separated from domain
constructors. They are like two "independent" branches of government.

> > (II) For each R:A for which a method to construct S for p:P exists, we
> > must *retroactively* make R:ComputationMethod.
> 
> Again, true. However, I fail to see why this is an issue. When one
> writes the algebra, decisions made during initial design will always
> be rethought. I don't think this is an argument against the
> residueClassRing example as much as it is a proof that `hindsight is
> 20/20'.

I am not arguing against residueClassRing, but arguing more generally, which is why
I abstracted the example. We will never be able to envision all the possible
structures that can be put on a mathematical object at its first implementation.
The Integer domain is a prime example, but there are many others too. For
example, we have trees implemented, but tree domains can also carry a Hopf
algebra structure. That is not what most would think of when implementing trees.
So that structure, when needed, say for doing renormalization with Feynman graphs
in quantum field theory, would have to be added on later. Whether one wants to add
this directly to the initial construction or not is a matter of choice. But in the
end, as my abstract example shows, there is no inherent gain of computational
power or even convenience, and the two calling sequences:

   bar(...)$Algorithm(R, p, Method1(R, p))
   bar(p:P, ...)$Algorithm(R)
(the second should more correctly be written
   bar(p:P, ...)$Algorithm(R1))

are not that much different. So in Aldor, you can choose whichever you like:
encapsulating more structure into R, or separating the residueClassRing
construction (and I indicated that there is more freedom of expression in
Aldor).  In the analogous example I gave, there are many representations
(different data structures, different term orderings) for the polynomial ring, and
it would be crazy to encapsulate such structure into the coefficient domain R.
Can you tell me, structurally, both in terms of mathematics and coding, any
difference between the polynomial ring example and the residue class ring
example? Why should one be treated differently from the other in the matter of
encapsulation?
(Remember, the P for residue class rings is Ideal(R), not R, so it is an
externally defined domain, much like the P for the polynomial ring is List Symbol,
external to R.)

> > (III) Now if R has several methods, then we must *rename* R using
> > three different names in (II), even though R is still the same object
> > in category A.  This would be very undesirable if R is Integer.
> 
> For Integer we might say
> 
> ------
>    residueClassRing(p:%):ResidueClassRing(%, p) ==
>       if p < 2^32 then
>          ZechPrimeField(p)
>       else
>          IntegerMod(%,p);
> ------

But this can also be done in Axiom in the domain constructor for the
ResidueClassRing. 
----
     ResidueClassRingForInteger(p) ==
       if p < 2^32 then
          ZechPrimeField(p)
       else
          IntegerMod(%,p);
----
The only difference is that you call the domain constructor by a uniform name (but
then have to rename the ring R for each separate data structure for the residue
class ring S when making R into ModularComputation).
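The uniform-constructor idea can be sketched in Python: a single factory that picks a representation based on the size of p. ZechPrimeField and IntegerMod here are illustrative class names echoing the snippets above, not real library types, and the 2^32 threshold is just the one from the example.

```python
class ZechPrimeField:
    """Stand-in for a small-prime representation (e.g. one whose
    log tables fit comfortably in memory)."""
    def __init__(self, p):
        self.p = p

class IntegerMod:
    """Stand-in for a general representation valid for any modulus."""
    def __init__(self, p):
        self.p = p

def residue_class_ring_for_integer(p):
    """Uniform constructor: one name, with the choice of data
    structure for Z/pZ made internally from the size of p."""
    if p < 2**32:
        return ZechPrimeField(p)
    return IntegerMod(p)

print(type(residue_class_ring_for_integer(101)).__name__)
print(type(residue_class_ring_for_integer(2**61 - 1)).__name__)
```

Whether this dispatch lives in a domain constructor (the Axiom version) or behind a residueClassRing export of ModularComputation (the Aldor version) is precisely the stylistic question under discussion; the branching itself is the same either way.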

[Remark: it would be harder if you implemented the residue class ring more
generally, using ideals instead of numbers, but I am going along with your version
for now.]
   
But as a matter of programming style, I would not inline the two segments of code
as above (the then-part and the else-part would be separated by many lines of
code). So in the end, one would still have two separate constructors. (Notice that
if not inlined, two names are used, ZechPrimeField and IntegerMod; and
ResidueClassRingForInteger, or residueClassRing, are both wrappers, in addition
to the extra ModularComputation wrapper level.) What I am saying is that this
extra wrapped level (the "push" direction) is not necessary in Aldor (just as
wrapping a function with a dependent signature into a package is not necessary in
Aldor, whereas in Axiom that is the unavoidable "lift" direction).


> If for a given situation we need more information than just what R:A
> and p:P can provide, then variants on the name of a domain constructor
> is an option, but not the only one (or the best one). We could just as
> easily add a second residueClassRing(p, moreInformation) to
> ModularComputation's list of exports to express the dependency on such
> information explicitly. We could alternatively write
> ModularComputation2 with the extra signature. In short, we let the
> structure of the algebra communicate the interrelations as much as
> possible rather than relying on naming conventions alone. This is a
> problem of design, and one which dependently typed functions provide
> an elegant solution.

You may have missed my point. By creating more categories, such as
ModularComputation or ModularComputation2, you REQUIRE the domain R to be
declared as such in any actual computation. If R is a new domain, fine. If R
already exists, it has to be wrapped into new domains, using new names. But in
any case, R is now tied to the residueClassRing map that makes it a
ModularComputation domain, its identity as the original commutative ring is
suppressed, and R splits into many versions depending on the way
residueClassRing is implemented.

I would much prefer that the new names refer to the methods to construct
residueClassRing than tie it to the ring R, because one method may work for
many R, and yet your ModularComputation works only for ONE R. An example would
be when R is a polynomial ring over some field k. We can construct residue class
rings uniformly not just for prime polynomial ideals, but for all coefficient
fields and for all choices of indeterminates. In your setup, each would need a
separate residueClassRing map. Unless convinced otherwise, I think overloading
(or polymorphism) is best used for operations and not for domain constructions.

There is a big difference between mathematical objects and computational objects.
In mathematics, one may argue that all the residue class rings are part of the
ring R (each residue class ring is uniquely defined). So there is still just one
mathematical object even when all its residue class rings are included.
Computationally, however, one has to consider both the data structure
representation of the residue class rings AND the algorithmic aspect of
operations and other problem-solving techniques for the residue class rings. We
cannot lump all these into the same domain R. Each must be separately identified.
That is the reason for so many polynomial categories, even for a fixed
coefficient domain and a fixed set of variables. Pushing these domain
constructors to the level of a function (with a dependent signature) may look
neat, but it does not eliminate the bifurcation in computational objects.

> 
> > (IV) The "advantage" at first glance, seems to be operator
> > overloading:  using one single name "method" to refer to all the
> > Method[i] in constructions of S.  But as (II) shows, this "powerful"
> > and categorical construction is nothing but a wrapper, which is the
> > reverse of what I proposed for Martin's example:  instead of
> > lifting a function g(n,k) to the level of a package, where the
> > dependence on parameters in the signature belongs, the construction of
> > ComputationMethod pushed a domain constructor (which is what each
> > Method[i] is) down to the level of a function.  I don't think
> > that is a convincing example of the need for allowing dependence on
> > parameters for functions.
> 
> But this act of `pushing' a domain constructor to a level of a
> function is what allows one to communicate the structure of the algebra
> by *using the structure itself*. I'll say more below.

If you treat creating more abstract categories when they are not absolutely
needed, and hiding the existence of the residue class rings, as *using the
structure itself*, then I do not agree. But if what you are trying to do is an
initial step towards the idea of sheaves, that is different, and in that case
perhaps you should start with the category of sheaves.
> 
> > The power of that construction in Aldor, as I pointed out in another
> > email, is allowing such dependence in functions on the SOURCE side,
> > not on the TARGET side of the map. In short notation:
> >
> >        F(a:A, b:B(a), c:C(a,b)):D(a,b,c)
> > ...
> 
> I do not argue these points at all. Note that in some languages, like
> ocaml, the signature for a function like this would take the form
> (more or less):
> 
>        F: (a:A) -> (b: B(a)) -> (c: C(a,b)) -> D(a,b,c).
> 
> And calling F x, would return a function F': B(a) -> C(a,b) ->
> D(a,b,c), etc.

> Depending on your point of view, a function with multiple, dependently
> typed arguments is just notation to express a chain of functions where
> the only dependence is in the type of the result. I find this
> interpretation quite satisfying.

This is indeed a nice way to define the signatures recursively, but I won't call
these "functions".
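The chain-of-functions reading of a dependently typed signature can be mimicked in Python with nested closures (Python cannot express the dependent *types*, only the curried *values*; A, B, C, D exist here merely as comments):

```python
def F(a):
    """Curried form of F(a, b, c): mirrors the signature
    F: (a:A) -> (b:B(a)) -> (c:C(a,b)) -> D(a,b,c),
    where each argument fixes what the next stage may depend on."""
    def F_b(b):
        def F_c(c):
            return (a, b, c)  # stand-in for a value in D(a, b, c)
        return F_c
    return F_b

partial = F(1)        # the `function' B(a) -> C(a,b) -> D(a,b,c)
print(partial(2)(3))  # (1, 2, 3)
```

Each intermediate value like `partial` is a perfectly ordinary function at runtime; the question William raises is whether the typed intermediate stages deserve the name "function" at all.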
 
> Your comments on `lifting' the aldor example are perfect in that this
> is probably the best way to write such things in axiom today. I'll
> just recap the result:
> 
> > Foo(R,p,S): T == C where
> >   R: ModularCategory
> >   p: Ideal(R)
> >   S: ResidueClassRing(R,p)
> >   Q ==> any domain constructed from R, p, S
> >   T == with
> >     bar: R -> Q
> >   C == add
> >     bar(r)==
> >        elem:S:=modularRep(r)$S
> >         -- hairy computations
> >        q:Q:= ...
> >
> > Calling sequence in some other package, assuming R, p, S are already
> > defined:
> >
> >    bar(r)$Foo(R,p,S)
> >
> > If you want to use the default implementation when R is a domain in
> > IntegerCategory, you can use:
> >
> >   if R has IntegerCategory then
> >       S:=IntegerMod(R,p)
> >   bar(r)$Foo(R,p,S)
> 
> The last part is the kicker for me, asking `if R has IntegerCategory
> then ...' is where the two approaches diverge. I claimed that I
> doubted one could write an equivalent Foo without the use of some
> hypothetical RESCLASS package/domain which made use of copious `if R
> has ..'  constructs. Although the model I had in mind of how this
> would look in Axiom is different from your solution, the main point is
> still there. The questions are being asked, just at a different level
> -- either code which uses Foo needs to provide an answer, or a user
> typing into the interpreter.

Well, I was only using "if R has IntegerCategory" because you defined a
default implementation (which actually does not work for all R:IntegerCategory).
The Axiom implementation allows you to put ANY RESCLASS domain in the parameter
S, and there is no need for "if R has ...".  In Axiom, I am not aware of any
default construction for domains (all constructions are given "equal
opportunity"), only default implementations for operations (for example, ** in
monoid).
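The distinction drawn here, a category supplying a default *operation* rather than a default *domain construction*, can be sketched in Python with inheritance. Monoid and IntMult are illustrative names; the default ** is written as binary exponentiation in terms of whatever * the concrete domain supplies:

```python
class Monoid:
    """Category-like base class: provides a default implementation of
    ** (here __pow__) for any domain supplying identity() and *."""
    def __pow__(self, n):
        result = self.identity()
        base = self
        while n > 0:            # binary exponentiation via repeated *
            if n & 1:
                result = result * base
            base = base * base
            n >>= 1
        return result

class IntMult(Monoid):
    """A concrete domain: integers under multiplication. It defines
    only identity() and *, and inherits ** from the category."""
    def __init__(self, value):
        self.value = value
    def identity(self):
        return IntMult(1)
    def __mul__(self, other):
        return IntMult(self.value * other.value)

print((IntMult(3) ** 4).value)  # 81, computed by the inherited default
```

The default adds an algorithm on top of existing operations; it never manufactures a new domain, which is exactly the asymmetry William points to.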
 
> The crux issue in my mind is that the aldor example illustrates how
> one can encapsulate questions like `if R has ..' into the structure of
> the algebra itself. residueClassRing() is more than just a wrapper. It
> communicates a relationship between R, p, and C(r,p). The axiom
> solution you propose communicates a relationship between R, p, and
> S:C(R,p). The problem of needing to know S is removed in the aldor
> example, as questions regarding it are not raised ( clearly for given
> R and p there *is* a relationship to S:C(R,p). 

The difference is only syntactic, not semantic. The S in the Foo package is only
a symbol needed to make a package call. It is equivalent to the symbol method(p)
(or residueClassRing(p)) for making the same package call. In either case, the
actual knowledge of S or method(p) is not needed; only the categorical properties
as defined in C(R,p) are. In Aldor, you need to use method(p), and in Axiom, you
need to use S, both for the package call. If you hide S (as given above), you
hardwire S into R and make it less convenient to change S.

I fail to see the connection between this hiding and the "if R has ...", and I
already commented on that. You are correct that SOME "if R has ..." may be
eliminated by signatures like F(a:A, b:B(a), c:C(a,b)):D(a,b,c) (recall that I
proposed the converse: to implement these signatures in Axiom *using* "if R has
...").  In other words, "if R has ..." is a more general (or more powerful)
construct than the signatures. Even if (in 30 years?) Axiom or Aldor is
integrated with theorem proving, there will still be "if R has ..." conditions
that cannot be verified by the theorem provers and need a mathematician to
declare them (if only for efficiency). To quote a cliché: who is going to prove
the theorem provers?

> But this relationship
> is expressed in code at the domain level, where it `belongs'. The
> domains are just implementing the requirements of their exports).

Absolutely. That's what Axiom and Aldor both do. What's the difference?
You push the construction of domains (residue class rings) to the function level
(like hidden domains within a domain; second-class citizens?). Surely you don't
think residue class rings do not deserve their own domain?

> There are other reasons to consider dependent types in a function
> target. If we are interested in future integration with a proof engine
> like ACL2, then we will want to consider the role such functions can
> play in such a setting. Some of the leading proof assistants, like
> LEGO and COQ, make explicit use of dependent type information.

I don't argue against the usefulness of dependent types. Surely, they are more
convenient to use, but that does not mean they must be used. I do not totally
object to your construction of ModularComputation. In Axiom, there is at least
one similar example: EuclideanDomain, where we absorb the Euclidean algorithm as
part of the domain. But there is a difference: the functions that are added do
not construct domains; they only implement algorithms. So this is "pushing"
packages into new domains, while your example is "pushing" domains into
functions. Maybe someone will pick up this discussion and explore all the pushing
and lifting possible among categories, domains, packages and "functions", and
determine some general guidelines for what should be done under what
circumstances. That might make a good master's thesis.

This discussion started with whether Axiom can simulate dependent signatures, and
I believe you now agree that it can.

William




--
forwarded from http://page.axiom-developer.org/zope/mathaction/address@hidden



