
## Re: [Axiom-developer] Curiosities with Axiom mathematical structures

From: William Sit
Subject: Re: [Axiom-developer] Curiosities with Axiom mathematical structures
Date: Tue, 07 Mar 2006 14:52:08 -0500

Hi Ralf:

Ralf Hemmecke wrote:

> >> William Sit <address@hidden> writes:
>
> > Agreed in theory, not in practice.  We should distinguish two issues: (1)
> > derived  operations that depend only on the defining operations should be
> > generically implemented and inherited, and (2) how to handle the notations
> > (equivalently, identifiers in computer science terminology) for the defining
> > operations and derived operations.
>
> By (1) you probably mean to say something like
>
> define PrimitiveType: Category == with {
>         =: (%, %) -> Boolean;
>         ~=: (%, %) -> Boolean;
>      default { (a:%) ~= (b:%):Boolean == ~(a = b); }
> }

That is right, but I forgot to point out that the generic implementation may be
overridden.
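Ralf's PrimitiveType example can be mimicked outside Aldor. Here is a Python sketch (class and method names are my own, not Axiom's): the derived operation `~=` gets a generic body in terms of the defining operation `=`, and a concrete domain may still replace it.

```python
class PrimitiveType:
    """Sketch of Ralf's PrimitiveType: 'eq' is defining, 'neq' is derived."""
    def eq(self, other):
        raise NotImplementedError("'=' must be supplied by the domain")

    def neq(self, other):
        # generic default, the analogue of (a ~= b) == ~(a = b);
        # a domain may override this with something faster
        return not self.eq(other)

class IntDomain(PrimitiveType):
    """A toy domain that supplies only the defining operation."""
    def __init__(self, n):
        self.n = n

    def eq(self, other):
        return self.n == other.n
```

`IntDomain` inherits `neq` for free; overriding it in a subclass is exactly the override William describes.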

> I must say that I liked this idea of "default implementations" when I've
> encountered it. But it also introduces all the complications with
> multiple inheritance.
>
> For example, assume we have
>
> define ContainerType(T: Type): Category == with {
>    bracket: Tuple Type -> %;    -- constructor
>    generator: % -> Generator T; -- yields all the elements
>    #: % -> Integer;             -- size
>    default {
>      #(x: %): Integer == {
>        n: Integer := 0;
>        for element in x repeat n := n + 1;
>        return n;
>      }
>    }
> }
>
> define FixedArrayType(n: Integer, T: Type): Category == with {
>    ContainerType(T);
>    default {#(x: %): Integer == n;}
> }
>
> Theoretically, there is no difference in the two # functions, but
> practically, one would prefer the second one for FixedArray's since it
> is a little bit faster.

Yes, a default implementation, because it is more general, may be less
efficient. However, in Axiom (and I trust in Aldor as well), any default
implementation can be overridden simply by defining the same operation in the
domain constructor. So in the example above, the compiler will compile the
function '#' for 'FixedArrayType' using the simpler code.
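The generic-default-versus-override pattern can be sketched in Python (an illustration, not Axiom semantics; the names are mine): the base class supplies an O(n) size computed from the generator, and the fixed-size class overrides it with the O(1) version.

```python
class Container:
    """Analogue of ContainerType: 'generator' is the defining operation."""
    def generator(self):
        raise NotImplementedError

    def size(self):
        # generic default: count the elements one by one, O(n)
        n = 0
        for _ in self.generator():
            n += 1
        return n

class ListContainer(Container):
    """Uses the inherited generic default for size."""
    def __init__(self, data):
        self.data = list(data)

    def generator(self):
        return iter(self.data)

class FixedArray(Container):
    """Analogue of FixedArrayType(n, T): the size is known statically."""
    def __init__(self, n, fill=0):
        self.n = n
        self.data = [fill] * n

    def generator(self):
        return iter(self.data)

    def size(self):
        # override the default: O(1) instead of O(n)
        return self.n
```

Both classes expose the same `size`, but the compiler (here, plain method dispatch) picks the overriding body for `FixedArray`.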

> Now, assume I have some other category MyFancyCat that inherits from
> ContainerType, but not from FixedArrayType. Then I create
>
> define MyFixedCat(n: Integer, T: Type): Category == with {
>    MyFancyCat T;
>    FixedArrayType(n, T);
> }
>
> Of course, MyFixedCat also inherits defaults, but which one? (The
> current compiler takes the first one.) Changing the order of the
> categories results in more efficient code. However, think of a second
> default function that could be implemented in both categories, but the
> more efficient one in ContainerType. Then you can choose any order for
> MyFixedCat and get always only one efficient default.
>
In the case of multiple inheritance, it is the responsibility of the user to
choose, not of the compiler (asking the compiler to choose for you would be like
asking the Interpreter to coerce and then complaining about the result or the
resulting type). It would be trivial to override (and choose) with something
like:

#(x: %): Integer == #(x)$(MyFancyCat T)   or   #(x: %): Integer == #(x)$FixedArrayType(n, T)

The compiler should not make this choice silently; it should flag the ambiguity
as a user error.
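The order-dependence Ralf describes, and the explicit user choice I am advocating, both have direct analogues in Python's multiple inheritance (a sketch with made-up names, not Aldor semantics):

```python
class ContainerDefaults:
    """Carries the generic O(n) default for size."""
    def size(self):
        return sum(1 for _ in self.generator())

class FixedDefaults:
    """Carries the O(1) default for size."""
    def size(self):
        return self.n

class MyFixed1(FixedDefaults, ContainerDefaults):
    # Like the current Aldor compiler, Python's MRO takes the first
    # parent's default -- here the fast one, only because of the order.
    def __init__(self, n):
        self.n = n
        self.data = [0] * n

    def generator(self):
        return iter(self.data)

class MyFixed2(ContainerDefaults, FixedDefaults):
    # Same parents in the other order, but the user chooses explicitly,
    # analogous to '#(x: %): Integer == #(x)$FixedArrayType(n, T)'.
    def __init__(self, n):
        self.n = n
        self.data = [0] * n

    def generator(self):
        return iter(self.data)

    size = FixedDefaults.size   # explicit, order-independent choice
```

In `MyFixed2` the explicit assignment makes the selection visible in the source, so reordering the parents can no longer silently change which default is compiled in.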

> When I started with Aldor, I used defaults a lot, but I encountered
> exactly the above scenario and then removed the defaults in almost all
> places and shifted it to the actual domains. Default implementations
> should be used with great care.

Are you telling me that Aldor does not allow overriding a default
implementation? I know Axiom does.

> Computation time is never an issue in mathematics, but one should not
> (totally) neglect it in computer algebra.

Well, you need mathematics to prove computation time complexities. Then you let
computer programs worry about the details.

> Your (2) from above refers to the idea of renaming operators. Currently,
> apart from the "renaming during inheritance", we are restricted anyway
> quite a bit. For example, I would like my multiplication sign to look
> like \times or \circ. Well, Aldor does not (yet) handle Unicode. But
> even if it did, I would probably have to disambiguate certain
> mathematical expressions so that they become Aldor programs.

First of all, it is not I who suggested this renaming. If anything, I am
opposing it as a solution to the problem raised because Axiom did not consider
AbelianMonoid as inheriting from Monoid. I think "duplicating code", to some
extent, eases the problem, but down the road it may not be a satisfactory
option.

How to handle renaming is of course a very difficult problem. The idea, in both
mathematics and symbolic computation, of overloading operators (symbols) is to
reduce the number of different notations when the operators have essentially the
same properties. Let's not even discuss the complication that arises in
typesetting (where \cdot or \times or simply concatenation may be used for the
same multiplication). Thus * is commonly used for commutative multiplication
(and sometimes for noncommutative multiplication as well), + is always for
addition, \circ for composition (non-commutative of course), \cdot for matrix
product or vector dot product, etc. When an algebraic structure involves more
than one kind of multiplication, mathematicians sometimes "abuse" notation by
not distinguishing them: k(yz) could mean the scalar multiple by k of the
algebra product of y and z. In computer algebra this is clearly not allowed
because, as I said, computer languages are rigid. So there has to be a
proliferation of symbols (operators, function names, etc.) to make every
operation unique and unambiguous. In a CAS, the best that can be done is to
retain a small set of commonly used symbols and then use long function names for
the others. Overloading helps to hide this proliferation. In choosing this set,
we must agree on the properties of each, so that + will never be used for a
non-commutative binary operation. Allowing arbitrary user-selected symbols is
simply looking for trouble.

> But actually that is another issue: How to write a good user interface
> (notations) that maps nice looking expressions to the right underlying
> algorithm. Maple and Mathematica know already quite a bit of such user
> interfaces. However, they don't have the beauty of types.

I don't see how Maple or Mathematica handles this any better than Axiom. They
still fix the properties of commonly used notations (perhaps not in a way that
everyone agrees with, but if you use their system, you have to: for example,
Mathematica uses ** for non-commutative multiplication). What Maple and
Mathematica do better for user I/O is their display technology, not their input
technology. You can type TeX expressions like \lambda, and it will display the
Greek letter lambda, but it is treated just like any other symbol for input
purposes. There is no difference (other than display) if you use the identifier
'lambda' instead. The palettes that allow you to enter nice-looking expressions
as input are really mouse-click functions that display one thing and enter
something else to the kernel.

I believe we were not discussing user interfaces earlier. We were discussing
whether one should be allowed to use arbitrary operators (or functions) for,
say, the multiplication of a monoid. We are trying to find a way to express the
mathematical fact that an abelian monoid is a monoid, despite the difference in
notation for the monoid operation. As I pointed out in a previous email, the
problem is not simply "renaming" (and certainly not I/O); rather, the concept of
isomorphism lies behind it.

> There is Chapter 9 of Doye's PhD thesis
> http://portal.axiom-developer.org/refs/articles/doye-aldor-phd.pdf
> which deals in part with the renaming.
>
>  From what I understand, it simply makes two mathematically distinct
> things (operator symbols and operator names) distinct in the programming
> language. (Though I don't quite like the syntax.)

Thanks for the pointer. Glad that someone actually thought about this issue
already. After glancing at Chap. 9 and guessing at the notation from earlier
chapters, it seems to me Doye is making each defining operation into an
equivalence class, or maybe just a set, (\sigma (source target)), where \sigma
would be the default, hard-coded symbol for the operation, and the source and
target are uniquely identifiable as sorted domains. The user can then declare
another symbol, say \lambda, to belong to this equivalence class or set, thus
changing the notation (renaming the function). So in effect, \lambda is tagged
by (\sigma (source target)). This seems to be the reverse of overloading. I'll
reserve judgement until I have read the thesis in more detail.

By the way, I don't follow his example at the bottom of p. 127: if \phi is a
functor (he said homomorphism, but that is wrong, because a list and a set are
from different categories), he wants the equation to hold:

\phi(#([1,1])) = #(\phi([1,1]))

The right-hand side makes sense, with answer 1, but the left-hand side does not,
since #([1,1]) = 2 is a number, not a list. He claimed \phi(2) = 2.
In addition, there is no requirement that a functor commute with operations. A
functor should, in addition to taking objects (lists) to objects (sets), also
take each morphism *between* source objects (lists) to a morphism *between* the
image objects (sets). The map '#' is not a morphism in the category of Lists,
nor in that of Sets. If one must, then '#' is a functor from the category of
Lists (and likewise from the category of Sets) to the category whose objects are
non-negative integers. But the composition of functors \phi \circ # does not
make sense. What Doye has in mind is the following diagram:

             #
     Lists -----> NNI
       |           |
  \phi |           | id
       v     #     v
     Sets  -----> NNI

But there is no reason to expect this to be a commutative diagram of functors.
So I think his example illustrates nothing.
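The failure of that square to commute is easy to check on his own example; here is a Python sketch with `len` standing in for '#' and set conversion standing in for \phi:

```python
def phi(xs):
    """Doye's map from lists to sets (forgets multiplicity and order)."""
    return set(xs)

xs = [1, 1]
across_then_down = len(xs)        # apply '#' on lists first: counts both 1s
down_then_across = len(phi(xs))   # apply phi first, then '#' on sets
# The two paths around the square disagree, so the diagram does not commute.
```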

> It would certainly make sense to print "a + a" as "2*a", but how an
> element of a domain is printed is decided by the domain.

It is more than just "certainly make sense to print". In a ring, a+a IS 2a. The
additive group structure is the same as a Z-module structure and hence 2a is the
correct way to denote the answer. How it is displayed in a CAS is something
else, and my argument is that this answer not only "makes sense", but it
"should" be displayed as 2a.

> If we have
>
> define MyMonoid: Category == with {
>    1: %;
>    *: (%, %) -> %;
>    power: (%, Integer) -> %;
> }
>
> -- and (now fancy notation with renaming)
>
> define MyTimesPlus: Category == with
>    MyMonoid;
>    MyMonoid where {
>      0: % == 1;
>      +: (%, %) -> % == +;
>      ntimes: (%, Integer) -> % == power;
>    }
> }

Some typo above? Should it be '+: (%, %) -> % == *'? I am also not following
your syntax for MyTimesPlus (why use "where"?), and the braces are not balanced.
I thought you meant:

define MyTimesPlus: Category == MyMonoid with {
   0: %;
   +: (%, %) -> %;
   ntimes: (%, Integer) -> %;
   default {
     0: % == 1;
     +: (%, %) -> % == *;
     ntimes: (%, Integer) -> % == power;
   }
}

which says a domain of category MyTimesPlus belongs to the category MyMonoid but
has three additional operations 0, + and ntimes, and here are their default
implementations. Or do you really mean the defaults are compulsory? (If so, then
I don't know why in MyInteger1 (and MyInteger2) below you define 0 (and +). You
would be overriding the defaults.)

> Then MyTimesPlus has 6 different signatures.
> If one removes the line "ntimes" then it would be only 5 and "power"
> would correspond to the multiplication (or (more correctly) to the
> actual implementation of it in a corresponding domain).
>
> So, operator symbols and operator names agree as long as they are not
> renamed.
>
> And one could also build
>
> MyInteger1: MyTimesPlus == add {
>    Rep == Integer;
>    0: % == per 0;
>    1: % == per 1;
>    ...
> }
>
> MyInteger2: MyTimesPlus == add {
>    Rep == Integer;
>    0: % == per 1;
>    1: % == per 0;
>    (x: %) + (y: %): % == per (rep x * rep y);
>    (x: %) * (y: %): % == per (rep x + rep y);
>    ...
> }
>
> Now, clearly, it can easily be figured out what 0, 1, +, etc. mean if
> MyInteger1 or MyInteger2 is in scope.

That depends. Most likely, MyInteger1 and MyInteger2 would have coercions to and
from Integer (to enable input). Unless you type-qualify the constants, it is not
clear what an interpreter would do.

> If both are in scope then nobody
> can infer what "0 + 1" should return. In fact, constructing
> implementations like that is also currently possible without renaming.

No one can tell what "0 + 1" is unless the 0 and 1 are qualified as to which
domain each belongs to. You are deliberately creating an ambiguous situation,
and so you have to make it unambiguous again by package call (which is in some
sense the same as renaming). But what is your point?
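The ambiguity is real. Here is a Python sketch of Ralf's MyInteger2, with the two monoid structures of Integer deliberately swapped (the names ZERO and ONE are mine):

```python
class MyInteger2:
    """Ralf's MyInteger2: Integer with its two monoid structures swapped."""
    def __init__(self, rep):
        self.rep = rep

    def __add__(self, other):      # '+' is implemented as the product
        return MyInteger2(self.rep * other.rep)

    def __mul__(self, other):      # '*' is implemented as the sum
        return MyInteger2(self.rep + other.rep)

ZERO = MyInteger2(1)   # additive identity: per 1
ONE  = MyInteger2(0)   # multiplicative identity: per 0

# "0 + 1" here evaluates to the element represented by 0 -- the monoid
# axioms all hold, yet the result is thoroughly confusing without
# qualification, which is exactly the point.
```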

You know, there is some mathematics behind these constructions, but it is bad
mathematics: it may be fun to confuse students by switching the two monoid
structures on Integer, and even by interchanging the notations for 0 and 1 and
for + and *. But no real mathematics is done that way, even if it is "allowed"
AND correct! We don't say this is a bug in the mathematical system, and we don't
view it as a problem at all. So why should computer algebra systems be
different?

> > So I hope we need not argue a third issue: (3) maintaining standard
> > notations
> > for the defining operations in common algebraic structures.
>
> Hmmm, don't some people use \cdot, \times, \circ, \odot, etc to denote
> multiplication? ;-)

There are many kinds of multiplication. Sometimes people overload a symbol, and
sometimes, to emphasize the distinction, they use different ones. Very often
these distinctions are unnecessary, as they can be deduced from context (by an
intelligent human). In symbolic computation, that is different: without a
detailed package call or domain declaration, the computer cannot tell the
context. Computers can handle "overloading", so let's use it, and use it in a
way that agrees with mathematics when mathematical symbols for operations are
used as identifiers.

> But let's continue with MyInteger1/2.
> If I now ask
>
> MyInteger1 has MyMonoid
>
> then that refers to the multiplicative structure. I have, however, to say
>
> macro MyAdditiveMonoidMacro == {
>    MyMonoid where {
>      0: % == 1;
>      +: (%, %) -> % == +;
>      ntimes: (%, Integer) -> % == power;
>    }
> }
> MyInteger1 has MyAdditiveMonoidMacro

Do you mean you have to use the macro to ask:

MyInteger1 has MyTimesPlus

while 'MyInteger1 has MyMonoid' works?

> So what do I gain by a "renaming feature"? Not much, since now I must
> add the corresponding "with respect to addition" or create another category.
> (Isn't that very similar to the case when Monoid and AbelianMonoid are
> totally unrelated?)
>
> In fact, if I now define a new category
>
> define MyAdditiveMonoidCategory == {
>    MyMonoid where {
>      0: % == 1;
>      +: (%, %) -> % == +;
>      ntimes: (%, Integer) -> % == power;
>    }
> }
>
> and ask
>
> MyInteger1 has MyAdditiveMonoidCategory
>
> then this should (and must) return false since MyInteger1 is not
> explicitly declared to be a member of that category.
>
> Ralf

I think you have just demonstrated the rigidity of computer languages. I believe
allowing "user renaming" of functions sets up another level of abstraction that
will only cause more confusion and will not solve the problem of multiple
inheritance (I may change my mind after reading Doye). The designers of Axiom
made a good compromise, even though it does not reflect the true mathematical
hierarchy of algebraic structures. One possible improvement (but far from a
total solution) is to include inheritance via isomorphisms, but even that is a
lot of work.

William


