
Re: [Bug-apl] miscellāneum


From: Peter Teeson
Subject: Re: [Bug-apl] miscellāneum
Date: Tue, 5 Mar 2019 14:54:50 -0500

In the days when I. P. Sharp ran a commercial time-sharing APL system,
I was fortunate to work in the Zoo
(the R & D group that was responsible for the system software).

We made a number of ad hoc, special-case, hand-crafted optimizations.
In my case those were for inner and outer product.

Luckily I shared an office with a senior coder who had a particular
interest in algorithms in general. He gently guided me, and I will
never forget his kindness.

If we were not able to get at least a 10% improvement we didn’t bother.
As you can imagine, those two operators were good candidates, and we did
achieve significant improvements in a number of frequently occurring special cases.

That was on various mainframes - a 360 Model 50 at first (later twinned), an Amdahl,
twin Model 158’s, and even a Burroughs 6700 which never made it into production.

But these optimizations were hand-picked based on analyses of the most fertile
areas - not generic algorithms. (Long live Knuth! Still waiting for the next volume.)

Good memories of the early days and what a blessing to have been able to participate.

respect….

Peter

On Mar 5, 2019, at 11:24 AM, Dr. Jürgen Sauermann <address@hidden> wrote:

On 03/05/2019 02:20 AM, Hudson Flavio Meneses Lacerda wrote:
On Sun, 3 Mar 2019 16:21:25 +0100
Dr. Jürgen Sauermann <address@hidden> wrote:
[...]
As far as GNU APL is concerned, first of all apply statement 1. on
page 38 to the interpreter itself rather than to the APL code that is
being interpreted. [...]
Hi.

Do you suggest, when writing GNU APL, putting the emphasis on
readability and basically ignoring optimization? Are there any tips
for writing better code for GNU APL?

[...]
No. Optimization has always been a concern. However, optimizing design patterns
that look impressive on particular examples that rarely exist in real life (and even then
only use a small fraction of the CPU time of an application) is a waste of time.

Mr. Bergquist formulated this for APL programs (which were terribly slow
around 1987), but his statement is true in a much wider context (where it is
known as Amdahl's law).
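
For reference, Amdahl's law in short form: if a fraction p of the total run time is
affected by an optimization that speeds that part up by a factor s, the overall
speedup is 1/((1-p) + p/s). For example, making code that accounts for only 5% of
the run time infinitely fast still yields at most 1/0.95, i.e. about a 5% improvement.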

There are no tips where GNU APL behaves differently from other APLs. But
there have been performance problems reported earlier that could be traced
back to the use of an O(N²) algorithm where an O(N) algorithm exists.

A typical example is this:

Z←⍳0
for 1 million times: Z←Z,X


This takes time O(N²) (since Z is growing, making Z,X slower and slower).
Instead, the following:

Z←1 million ⍴ 0
for 1 million times: Z[n]←X

does the same and takes only O(N).
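
Spelled out as GNU APL defined functions, the two loops might look like the
sketch below (the names SLOW and FAST, the counter K, and appending K itself
instead of some fixed X are just illustrative):

    ∇Z←SLOW N;K            ⍝ grow Z by catenation: O(N²) total work
     Z←⍳0 ⋄ K←0
    LOOP: →(N<K←K+1)/0     ⍝ exit after N iterations
     Z←Z,K ⋄ →LOOP         ⍝ each Z,K copies the whole of Z
    ∇

    ∇Z←FAST N;K            ⍝ preallocate Z, then index-assign: O(N) total work
     Z←N⍴0 ⋄ K←0
    LOOP: →(N<K←K+1)/0
     Z[K]←K ⋄ →LOOP        ⍝ indexed assignment touches one element
    ∇

    (SLOW 1000) ≡ FAST 1000   ⍝ 1: both produce the same vector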


      
In particular today I would argue that:
[...]
8. I tend to measure time with a (CPU-)clock rather than a ruler. The
remarks on scalars are partly incorrect in the context of scalar
extension (e.g. multiplication of a vector and a scalar).
Those examples seem to involve only scalars and scalar items of vectors.


BTW, are there any significant differences between these expressions
(where f is a scalar, and t is a lengthy vector)?:

a)	cos ← {2○⍵}
	PI ← ○1
	S ← cos 2 × PI × f × t
b)	S ← 2○ ○t×f×2
c)	S ← 2○ (2×○1×f) × t
d)	S ← 2○ (○2×f) × t
e)	S ← 2○ (○f+f) × t
f)	S ← cos(2×PI×f)×t
g)	S ← cos t×(2×PI×f)
I believe that g) is the fastest, but the difference is minor. As a general rule, try to move
the longer vectors to the left and the shorter ones to the right. In a) every ×, and also the
monadic cos, is done ⍴,t times, while in g) only the leftmost multiplication is done that often.
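
A quick way to check this in GNU APL itself is a crude timing with the standard ⎕TS
timestamp (a minimal immediate-mode sketch; the sample values for f and t are just
made-up assumptions):

    f←440 ⋄ PI←○1 ⋄ t←(⍳1000000)÷44100     ⍝ a lengthy vector, one million samples
    T0←24 60 60 1000⊥¯4↑⎕TS                 ⍝ milliseconds since midnight
    Sa←2○2×PI×f×t                           ⍝ like a): three vector × plus one vector 2○
    T1←24 60 60 1000⊥¯4↑⎕TS
    Sg←2○t×2×PI×f                           ⍝ like g): 2×PI×f is scalar, only one vector ×
    T2←24 60 60 1000⊥¯4↑⎕TS
    (T1-T0),T2-T1                           ⍝ elapsed milliseconds for each variant
    ⌈/|Sa-Sg                                ⍝ any difference is only rounding noise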
Regards,
Hudson


