
## [Axiom-developer] Re: Mixing up variables: (was Re: conditionally defined functions)

From: William Sit
Subject: [Axiom-developer] Re: Mixing up variables: (was Re: conditionally defined functions)
Date: Wed, 29 Sep 2004 01:02:01 -0400

Dear Martin:

Martin Rubey wrote:

> "interpreting polynomials as functions" -> "use SUPs, please!"

OK, I see what you meant: the variable in SUP is unspecified. But there is
really not that much difference between UP and SUP: UP is a special extension
of SUP. In fact, UP == SUP add coerce: Variable(x) -> UP(x, R), where
coerce(v) == monomial(1,1), plus another coerce to OutputForm that prints
"x" instead of "?". So x is identified with ? under coercion on input, and ?
is printed as x on output. The only changes are in input/output. See also the
next comment.
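As a toy illustration of this (a Python sketch of my own, not Axiom source), a UP that differs from SUP only in input coercion and printing might look like:

```python
# Hypothetical sketch: SUP stores coefficients with an anonymous
# variable "?"; UP only adds a variable name used for coercing
# Variable(x) on input and for printing on output.

class SUP:
    def __init__(self, coeffs):
        self.coeffs = list(coeffs)      # coeffs[i] is the coefficient of ?^i
    def __eq__(self, other):
        return self.coeffs == other.coeffs
    def __str__(self):
        terms = [f"{c}*?^{i}" for i, c in enumerate(self.coeffs) if c]
        return " + ".join(terms) or "0"

class UP(SUP):
    def __init__(self, coeffs, var):
        super().__init__(coeffs)
        self.var = var                  # the only extra datum
    @classmethod
    def coerce_variable(cls, name, var):
        """coerce: Variable(x) -> UP(x, R), giving monomial(1,1)."""
        if name != var:
            raise TypeError(f"{name} is not the variable of UP({var}, R)")
        return cls([0, 1], var)
    def __str__(self):                  # print the variable instead of "?"
        return super().__str__().replace("?", self.var)

# Arithmetic and equality are untouched: x in UP and ? in SUP agree.
assert UP([0, 1], "x") == SUP([0, 1])
assert str(UP.coerce_variable("x", "x")) == "1*x^1"
```

The point of the sketch is that equality and arithmetic live entirely in the SUP part; the variable name only matters at the input/output boundary.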

[...] Regarding [equation] and [equal] :

> I think you are right. When I get to it, I'll set up a MathAction page
> summarizing all this.

That would be very much appreciated. But perhaps not so fast:
here is an example worth examining:

supint:=monomial(1,1)$SUP(INT)
suppoly:=monomial(1,1)$SUP(POLY INT)
g:Boolean:= (supint = suppoly)

The interpreter coerces supint: SUP(INT) to SUP(POLY INT) and then returns g
as true. Mathematically (see Lang, Algebra; more discussion later), supint is
a function from Variable(?) -> INT with supint(?) = 1$INT, and suppoly is a
function from Variable(?) -> POLY INT with suppoly(?) = 1$(POLY INT), so as
functions they are different. The coercion means we identify supint with the
composition of supint followed by the embedding INT -> POLY INT (so that 1$INT
is identified with 1$(POLY INT)). This is all mathematically acceptable. So
for equality testing a = b, coercion may be acceptable in certain cases,
contrary to what I advocated last time. I was thinking about the case where
a: A, b: B, and A is a subdomain of B, but could not find an example where
a = b is true as a statement in B, but b is not in A. In the above example, A
is the domain of functions from Variable(?) -> INT (or, if you like, powers of
the unspecified variable ?), and similarly for B. So A is NOT a subdomain of
B, but there is an embedding A -> B induced by INT -> POLY INT. Of course, if
we do make this identification, then suppoly is in SUP(INT). (Axiom does allow
this in the package SUP2(R,S).)

suppoly::SUP INT
?
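The embedding and the retraction can be pictured with a small Python toy (my own representation, not Axiom code): SUP(R) elements as coefficient lists, POLY INT coefficients as dicts from monomial tuples to integers.

```python
# Toy model (my own representation, not Axiom code): an element of SUP(R)
# is a list of R-coefficients; a POLY INT coefficient is a dict from
# monomial tuples to integers, with () the constant monomial.

def embed_int(n):
    """INT -> POLY INT: the constant polynomial n."""
    return {(): n}

def retract_poly(p):
    """POLY INT -> INT when the polynomial is constant (cf. SUP2's map)."""
    if set(p) <= {()}:
        return p.get((), 0)
    raise ValueError("not a constant polynomial")

def map_sup(fn, coeffs):
    """Lift a coefficient map to SUP, as the package SUP2(R,S) does."""
    return [fn(c) for c in coeffs]

supint  = [0, 1]                        # monomial(1,1)$SUP(INT)
suppoly = [embed_int(0), embed_int(1)]  # monomial(1,1)$SUP(POLY INT)

# Equality only makes sense after coercing along the embedding INT -> POLY INT:
assert map_sup(embed_int, supint) == suppoly
# ...and suppoly happens to retract back into SUP(INT), like suppoly::SUP INT:
assert map_sup(retract_poly, suppoly) == supint
```

The two asserts correspond to the two directions: the embedding that makes g true, and the retraction that the session above performs.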

So I am no longer sure about banning all automatic coercions for [equal]. But I
don't know how else to avoid contradictions in Axiom if coercion remains the way
it works.

upx:=monomial(1,1)$UP(x,INT)

g:= (upx = supint)
   true
g:= (supint = suppoly)
   true
g:= (upx = suppoly)
   false

Coercing to the common domain SUP POLY INT for the last test may be the cause
of the contradiction. More interestingly:

upx * supint * suppoly

    2
   x ?

supint * suppoly * upx

      2
   x ?

g:= (upx * supint * suppoly = supint * suppoly * upx)
   false

Exercise: try the other four permutations and predict the results.

> Yes. Given x^2 sin(x) - cos(x)/x, no sane mathematician will say that this
> could be viewed as a polynomial. Of course, this argument is based on the
> idea that x=x, i.e., there shouldn't be different x's in one expression.
>
> I'm pretty sure that it is next to impossible to do this conversion
> consistently. For example:
>
> (14) -> x::UP(x, EXPR INT)/x::UP(x, EXPR INT)
>
>    (14)  1
>          Type: Fraction UnivariatePolynomial(x,Expression Integer)
>
> (15) -> x::UP(x, EXPR INT)/x::EXPR INT
>
>          1
>    (15)  - x
>          x
>          Type: UnivariatePolynomial(x,Expression Integer)
>
> are both completely correct, given the strange semantics of UP(x, EXPR INT).

The problem is caused by the user in (15). An undeclared x (as in "x ::") is
treated by the interpreter (you must declare it for the compiler) as the
unique element of Variable(x). Any time such an x is coerced into UP(y, ?),
the interpreter checks whether y is x, and if so returns the monomial x in
UP(y, ?); otherwise it tries to coerce x into ? when possible, or gives an
error if not. As I have pointed out before, Axiom is not confused; the user
is. Axiom cannot possibly rule out such user behavior, because each operation
(x::UP(x,?), x::EXPR INT, and /) is correctly performed. To do so would
require Axiom to "look ahead" when processing x::UP(x,?). This is unreasonable
as well as impossible -- how far ahead? What if the x::EXPR INT is something
returned by another routine?
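The failure of transitivity in the equality chain above (upx = supint and supint = suppoly, yet upx /= suppoly) can be mimicked with a small Python toy. This is my own model of pairwise common-domain resolution, not Axiom's actual algorithm:

```python
# Toy model of the three domains above (assumed semantics, not Axiom's
# actual type resolver).  A monomial is (domain, exponent, coefficient);
# eq() first coerces both sides to a pairwise common domain.  Because
# different pairs resolve to different common domains, "=" loses
# transitivity.

def coerce(mono, target):
    domain, exp, coef = mono
    if domain == target:
        return (target, exp, coef)
    if domain == "SUP(INT)" and target == "UP(x,INT)":
        return (target, exp, coef)          # ? is identified with x
    if domain == "SUP(INT)" and target == "SUP(POLY INT)":
        return (target, exp, coef)          # embed INT -> POLY INT
    if domain == "UP(x,INT)" and target == "SUP(POLY INT)":
        # x itself is coerced into the coefficient ring POLY INT, so the
        # degree in ? drops to 0 and x^exp moves into the coefficient:
        return (target, 0, f"x^{exp}*{coef}")
    raise TypeError(f"cannot coerce {domain} to {target}")

def eq(a, b):
    # the common domain the interpreter is assumed to resolve, per pair:
    resolve = {
        frozenset({"UP(x,INT)", "SUP(INT)"}):      "UP(x,INT)",
        frozenset({"SUP(INT)", "SUP(POLY INT)"}):  "SUP(POLY INT)",
        frozenset({"UP(x,INT)", "SUP(POLY INT)"}): "SUP(POLY INT)",
    }
    target = resolve.get(frozenset({a[0], b[0]}), a[0])
    return coerce(a, target)[1:] == coerce(b, target)[1:]

upx     = ("UP(x,INT)",     1, 1)    # monomial(1,1)$UP(x,INT)
supint  = ("SUP(INT)",      1, 1)    # monomial(1,1)$SUP(INT)
suppoly = ("SUP(POLY INT)", 1, 1)    # monomial(1,1)$SUP(POLY INT)

assert eq(upx, supint) and eq(supint, suppoly) and not eq(upx, suppoly)
```

The culprit is the last coercion rule: moving upx into SUP(POLY INT) pushes x into the coefficients, so the outer degrees no longer match.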
I see no reason to allow POLY EXPR INT or similar constructs **as Axiom now
stands.** Unfortunately, the general and categoric constructs in Axiom make it
difficult to "exclude" specific domains: POLY(R) is valid for any ring R, and
the user is allowed to construct POLY(R) for whatever R, even if R is POLY(S).
In fact, I don't know of any Axiom mechanism to exclude specific domains in a
constructor's parameters. (Axiom allows declaring domains to HAVE certain
attributes, but I believe does not allow them to NOT HAVE certain attributes.)

Below, I'll try to analyse where the present problem lies and propose a way to
correct it. Even if you agree with my analysis, whether the correction should
be implemented is still subject to discussion.

The problem lies in the original construction of POLY or EXPR: these domains
were meant to be user-friendly; they include any polynomial (or expression) in
whatever variables the user inputs. When a user is "lazy" and works in such
domains, it is implicitly assumed that (s)he is not aware of, or does not care
about, any particular data structure. So it is implicitly assumed that (s)he
would not extend such domains by adding more variables, because these domains
are already all-encompassing as far as variables go (though the coefficient
domain may be extended to one that does not involve new variables).

Now if we are to "generalize" the construction of POLY (and in fact the
construction of ANY domain with "variables," whether the set of "variables" is
a specific finite set of symbols or the set of all symbols), then we should go
back to the mathematical meaning of the polynomial ring (see Lang, Algebra,
1965). The set of indeterminates can be ANY set, so towers like Q[S][T] are
mathematically meaningful for ANY sets S, T (even if S and T are the same
set), because the sets are only used to *index* the indeterminates: recall
(Lang, p. 110) that a (primitive) monomial over S is a function f: S -> N (N
the monoid of non-negative integers) of finite support, and a polynomial ring
over a ring R with indeterminates (indexed by) S is the monoid algebra R[S]
generated over R by all (primitive) monomials. Normally, we would simply
denote the variable indexed by s: S by s, but this is "abuse of language"
because the "variable" indexed by s is actually a function and should be named
f_s, where f_s(s) = 1 and f_s(t) = 0 for t not equal to s. With this more
pedantic notation, R[S] is really $R[(f_s)_{s \in S}]$ (using TeX). So if,
after forming R[S], we want to form R[S][T] where T = S, then by definition a
duplicate set of monomials g: T -> N is used, and R[S][T] is
$R[(f_s)_{s \in S}][(g_t)_{t \in T}]$, and no confusion would arise even when
S = T. (You may think of the row index set and column index set of a square
nxn matrix; both are [1..n], but we would use different notation, say i for
rows and j for columns. Anyone using i for both indices, especially in
programs, ...)

Now the problem with the current implementation of any polynomial ring (or
Expression(R)) is that such duplicates are not anticipated (or are ignored,
perhaps for efficiency). If we were to generalize the construction by
identifying the "variables" with an indexed notation (for example, in R[x,y],
the x and y are actually (x,R) and (y,R); in programming terms, prefix or
postfix x and y with an identifier string unique to R, much like C++), then we
could create towers without any ambiguity -- assuming no user would know these
secret identifiers. So "the set of all symbols" does not really mean ALL
symbols, but only those within some limited constructive rule for symbols. In
outputs, we may provide a switch to turn the identifier part on or off. In the
new generalized version,

)set mess scope on
x::UP(x,EXPR INT)/x::EXPR INT

would output something like

x$UP(x, EXPR INT)/x$(EXPR INT)

and not 1. The tags show the user exactly what is done without having to
examine the sometimes very long output from )set mess bot on.

Let me say that there is perhaps a role for constructs like POLY POLY INT or
POLY EXPR INT. Examples: computing syzygies of polynomials generating an
ideal, or more generally the defining set of algebraic polynomial (equations)
for algebraic functions -- a (non-linear) system of differential equations is
simply a set of algebraic equations for functions and their derivatives.
However, just as POLY INT is for the "lazy" user, so would be the new POLY
POLY INT. Any serious computation should identify (and separate) the variables
in towers. The ultimate question is: is it worth the trouble to generalize? If
it is easy to modify the compiler to add the identifiers, then that would be
the easier route. If not, I'll vote no unless someone shows me an example in a
real computation where this new type of construct is necessary, cannot be done
otherwise, and the benefit outweighs the work required.

> I wouldn't think in terms of data representations yet. My credo: First
> determine good semantics, then worry about how to implement/represent
> them.
>
> > We need a clearer way to make the choice. Using a hidden and probably
> > unjustified convention that the "outer domain takes all the variables"
> > is questionable.
>
> I disagree: I am convinced that this convention is clear. The only other
> choice for me would be to deprecate these domains.

Unfortunately, the current convention is not "outer domain takes all the
variables."

)clear all
a:=monomial(1,1)$UP(x, INT)
x

a::SUP INT
?

a::SUP UP(x,INT)
x

Even though a tower of polynomial rings is involved in the above, these are
legitimate mathematical constructs (and correctly done) -- the unspecified
variable is intended to be transcendental over ANY coefficient domain, and is
often used to work with the minimal polynomial of an element algebraic over
the coefficient domain. Here it shows that in coercing a variable into
SUP(R), the interpreter first tries to coerce it into R, and failing that,
maps it to the unspecified variable. So in this case, SUP only takes a
variable when necessary.
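The indexed construction sketched earlier (monomials as finite-support functions, variables tagged per ring) can be made concrete in a few lines of Python. The level tags here are my own stand-in for the proposed unique identifier strings:

```python
# A hedged sketch of Lang's construction as described above (my own
# representation): a primitive monomial over an index set S is a
# finite-support function f: S -> N, represented as a frozenset of
# (index, exponent) pairs.  Tagging each symbol with its tower level
# keeps R[S][T] unambiguous even when T = S.

def f(s, level):
    """The 'variable' f_s at a given tower level: the finite-support
    function sending (s, level) to 1 and everything else to 0."""
    return frozenset({((s, level), 1)})

# R[x][x]: the same symbol x at different levels gives genuinely
# different monomials -- no confusion arises.
inner_x = f("x", 0)     # the x of the coefficient ring R[x]
outer_x = f("x", 1)     # the x of the outer ring R[x][x]

# A polynomial is a finitely supported map monomial -> coefficient;
# the element (outer x)*(inner x) of R[x][x] is then:
p = {outer_x: {inner_x: 1}}

assert inner_x != outer_x
```

Printing could show or hide the level tag, exactly as the proposed output switch would show or hide the identifier part.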

> > > The domain EXPR POLY INT would be strictly the same as EXPR INT, I
> > > think.
>
> > No, we are not talking about mathematics, but data structure. I think we
> > agree that the current semantics allowed is ambiguous and for this
> > reason, any reformatting should be done with specific variables.
>
> I would like to have a mapping from maths to axiom as clear as possible.
> And given the convention that the "outermost" domain "takes" all the
> variables (it can), EXPR POLY INT and EXPR INT would be the same
> mathematically, since any polynomial is an EXPR. Of course, the
> representations could be different in the two domains.

See above discussion on indexed method constructing polynomial rings.

> I agree on everything you say about expressions with specified variables.

Good, specified variables are the way to go. After all, what is strong typing
without specificity?

> > > ---------------------------------------------------------------------
>
> I don't have anything clever to say right now about defining a wrapper of
> "variables" that has values in Any.
>
> If we want to follow the convention "outermost domain takes all variables
> it can" we can simply add a function "variables: % -> List Symbol" to
> QFCAT (or wherever it should go), since we only want to test for symbols
> right now.

I am not convinced the convention is desirable.

> Note that only the interpreter catches bogus like UP(x, UP(x, INT)) right
> now. It may be an option to change the interpreter rather than the
> library, but I think this is the wrong direction to go. BTW: Do you
> (or anybody else reading this) know how the interpreter catches this?
> In fact, it is broken anyway:
>
> (12) -> x::UP(x,UP(x,INT))
>
>    UnivariatePolynomial(x,UnivariatePolynomial(x,Integer)) is not a
>       valid type.
> (12) -> monomial(1,1)$UP(x,UP(x,INT))
>
>    (12)  x
>          Type: UnivariatePolynomial(x,UnivariatePolynomial(x,Integer))

This is the same "error" you found some time ago, where somehow :: is handled
differently from : or coerce. In the second (12), the interpreter can find the
monomial function in $UP(x, *) and so it is happy -- never mind the *. In the
first (12), the interpreter must find a coerce function but doesn't know which
to pick, even if it did find them. If you use a package call, everything is
OK.

(1) -> coerce(x)$UP(x, UP(x, INT))
   x
(2) -> degree %
   1
(3) -> coerce(coerce(x)$UP(x, INT))$UP(x, UP(x, INT))
   x
(4) -> degree %
   0

I think the failure in (12) is an interpreter problem: it is unable to find
the functions, or cannot decide which the user wants, (1) or (3). The error
message is an "error" in itself. UP(x, UP(x, INT)) seems a perfectly good
domain in the current Axiom system, and the two x's are different (I believe
in SUP the variable is indexed in the sense above; for SUPs with different
coefficient rings, these become equal only when coerced into a SUP with an
extended coefficient ring). This is the nature of SUP, from which UP is
derived. It is only the user who assigns the same symbol to both for I/O, but
shouldn't have.
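A minimal Python model (my own nested-list representation, not Axiom's) of UP(x, UP(x, INT)) shows why (2) gives degree 1 while (4) gives degree 0, even though both elements print as x:

```python
# Toy model of UP(x, UP(x, INT)): outer[i] is the UP(x, INT) coefficient
# of the outer variable to the power i.

inner_zero = []          # 0 in UP(x, INT)
inner_one  = [1]         # 1 in UP(x, INT)
inner_x    = [0, 1]      # x as an element of UP(x, INT)

# (1): coerce x into the *outer* ring -> the outer monomial, degree 1
outer_x = [inner_zero, inner_one]

# (3): coerce x into UP(x, INT) first, then embed it as a constant
# coefficient -> outer degree 0
embedded_inner_x = [inner_x]

def degree(p):
    """Degree in the outer variable (0 for constants, as in Axiom)."""
    return max((i for i, c in enumerate(p) if any(c)), default=0)

assert degree(outer_x) == 1            # matches (2)
assert degree(embedded_inner_x) == 0   # matches (4)
```

The two representations are distinct data even though both render as "x", which is exactly the ambiguity the interpreter cannot resolve in (12).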

Generally, the Axiom compiler language is more precise, and the interpreter,
because it tries to be smart, does not always succeed in interpreting user
intentions. Nonetheless, it still does quite a good job.

> Thus I think the direction to go is to put these checks into the compiler.

I don't know enough about compilers: Can a compiler single out (12) to disallow
it? For now, just don't use the same symbol for two different things, and
perhaps fix the error message with a warning instead.

> What would DPOLYCAT or DEXPR be?

DPOLCAT already exists (Differential Polynomial Category; sorry for the
misspelled abbreviation -- it is actually a restriction of Axiom's
abbreviation rules to 7 characters). DEXPR would be Differential Expression;
these would involve derivatives or integrals of functions that cannot be
simplified, such as derivatives of Bessel functions, but also derivatives of
other special functions satisfying higher-order differential equations.

William