a stack-trace for the segfault would be good (command: gdb apl,
then 'run', and finally 'bt' after the segfault has occurred).
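A minimal session sketch, assuming the apl binary is on PATH and was built
with debug symbols (the fault itself is whatever you do to reproduce the crash):

    $ gdb apl
    (gdb) run        # starts apl under gdb; now reproduce the crash
    ...              # interpreter output until the fault
    Program received signal SIGSEGV, Segmentation fault.
    (gdb) bt         # prints the stack trace (backtrace)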
No idea what AST is.
You could try TAB-expansion to get options in various situations,
and try e.g. )HELP
to get help for APL primitives. Currently system functions and
variables are not in )help,
but I suppose extending file src/Help.def could easily add them.
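A quick sketch of what that looks like in the interpreter, assuming )HELP
accepts a primitive as its argument (the help texts come from src/Help.def,
so the exact wording depends on the installed version):

    $ apl -q
          )HELP ⍴
    ... help text for monadic and dyadic ⍴ is printed here ...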
Compiling APL is IMHO a wrong path. Too many problems, too little gain.
On 10/16/19 5:01 PM, Rowan Cannaday wrote:
Thank you for the explanation Jürgen. That makes intuitive
sense; a shared-memory single-threaded service seems like a reasonable fit
given that constraint.
My idea is to compile a subset of APL to an intermediate
representation. Is there a way to export the AST?
In addition - is
there an in-repl method of viewing help and/or arguments for
system variables & functions?
By the way, a
minor regression: segfaulting, but only after exiting.
it is sort of working, but I could well use some help in
tracking down the remaining problems. I can help fixing them, but
finding their root cause
(and making them reproducible) is a different story.
My current interpretation of various benchmarks that Elias and
myself did some years ago is that the bandwidth of the path
between the CPUs (or cores) and the memory is the limiting
factor, and no
matter how efficient the APL interpreter is, this
bottleneck will dictate the
speedup that can be achieved.
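To make the bandwidth argument concrete, here is a rough back-of-the-envelope
sketch in APL; the 25 GB/s figure is an assumed, purely illustrative number,
not taken from those benchmarks:

      bw  ← 25E9      ⍝ assumed memory bandwidth in bytes per second (illustrative)
      bpe ← 24        ⍝ dyadic + on 8-byte floats: 2 operand reads + 1 result write
      bw ÷ bpe        ⍝ ≈ 1.04E9 elements per second, regardless of core count

Once the cores together reach that rate, adding more of them buys nothing,
which is exactly the ceiling described above.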
As an example, from 1985 to 1990, myself and 4 students
had built the
hardware of a parallel APL machine with 32 CPUs and
measured a speedup
of close to 32 for sufficiently large vectors.
In contrast, if I remember correctly, then Elias achieved
a speedup of 12 with
80 CPUs using the parallel feature of GNU APL. The only
relevant difference I can see between our 1990 machine (called Datis-P-256
because the architecture
could be scaled up to 256 processors) and today's machines was the memory
organization: Datis-P had one separate memory for each CPU, while
contemporary multi-core boxes share their memory module(s) among different
cores. This boils down to the fact that the memory bandwidth of
Datis-P scaled with the
number of processors, while the memory bandwidth of a
typical multi-core box
does not scale with the number of cores. As long as this is the case,
parallel APL remains severely limited
in terms of the speedup that can be achieved.
On 10/16/19 12:58 PM, Blake McBride wrote:
I think getting the parallel processing working
is important. It may be that for various reasons
the speedup in general cases is minimal and not
worth the effort. However, I'd imagine that there
are particular use-cases utilizing large arrays
where the speedup would be substantial. That is
when those types of enhancements would make APL a
particularly attractive choice.
Fixed in SVN 1191.
You should not be too enthusiastic, though,
because the speed-ups that
can be achieved are somewhat disappointing. And
due to that, I
haven't put too much effort into fixing faults
(sometimes apl hangs
on a semaphore when parallel execution is enabled).
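For completeness: parallel execution is an opt-in build feature. A minimal
sketch, assuming the CORE_COUNT_WANTED ./configure option of recent GNU APL
versions (see the README files in the source tree for the exact spelling and
accepted values):

    $ ./configure CORE_COUNT_WANTED=4    # assumed option: build with up to 4 parallel cores
    $ make
    $ sudo make install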
On 10/16/19 5:15 AM, Rowan Cannaday wrote:
Intrigued by the ability to parallelize APL,
I thought I'd try to test it:
`apl --cfg`, followed by a line of '='
signs, followed by `apl -q`:
sizeof(Value) : 456 bytes
sizeof(Cell) : 24 bytes
sizeof(Value header): 168 bytes
how ./configure was (probably) called:
Project: GNU APL
Version / SVN: 1.8 / 1190M
Build Date: 2019-10-16 02:45:24 UTC
Build OS: Linux 5.2.0-3-amd64
Archive SVN: 1161
$ apl -q
thread # 0: 0 RUN job: 0
thread #-1: 0 RUN job: 0
-- Stack trace at main.cc:88
0x5631406CAD8D init_apl(int, char const**)