Re: more advanced bytevector => supervectors

From: Stefan Israelsson Tampe
Subject: Re: more advanced bytevector => supervectors
Date: Wed, 8 Sep 2021 04:04:06 +0200

Today I added structure to support supervectors of Scheme values and refactored the code with some nice macrology. So now we have an infrastructure that the compiler can turn into very efficient code. All loops must be dispatched on the type of the ref and set operations, but the good part is that this selection is done only once per bin, so its cost is amortised over all the work in the loop. The downside is that there can be an explosion in the number of combinations, so the design lets one select the set of variants used through a nice interface. Let's discuss the setup to illustrate this point, after a small detour into algebra.

After investigating the structure of bit-like vectors, today I turned to algebraic operations and wondered what structure we can have for supervectors. First, note that storing an offset in the bin can be of value if we only consider + and - operations: for a bin with values X we store it as a + X. Now adding (a1 + X1) + (a2 + X2) gives the final value (a1 + a2) + (X1 + X2), which means that a1 and a2 can be general values while X1 and X2 could be compressed into a smaller byte structure.
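A minimal sketch of the offset idea (the record and procedure names here are illustrative, not the actual supervector code): the payload X can live in a compact representation while the offset absorbs the large common part of the values, and adding two bins costs one offset addition plus an element-wise payload sum.

```scheme
(use-modules (srfi srfi-9))

;; A bin stored as offset + payload, representing the values (a + X[i]).
(define-record-type <offset-bin>
  (make-offset-bin offset payload)
  offset-bin?
  (offset  bin-offset)
  (payload bin-payload))          ;; payload kept as a plain list here

;; (a1 + X1) + (a2 + X2) = (a1 + a2) + (X1 + X2)
(define (offset-bin-add b1 b2)
  (make-offset-bin (+ (bin-offset b1) (bin-offset b2))
                   (map + (bin-payload b1) (bin-payload b2))))
```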

For multiplication we can have a scaling operation, and if we only consider * and / we have
(b1 * X1) * (b2 * X2) = (b1 * b2) * (X1 * X2)

Mixing additions and multiplications leads to problems, but essentially if you add numbers you can still gain when the scaling is the same, e.g.
(a1 + b*X1) + (a2 + b*X2) = (a1 + a2) + b*(X1 + X2).

Complexity-wise one does not win here, but X1 and X2 could be of one (smaller) number type while the scale and offset are of a larger one.

All operations + - * /  of a constant bin with a fat bin can be fast if we store a scaling and an offset. 

This is the basic structure when considering basic algebra, and one could possibly benefit from storing an offset and a scale factor in the bin information. I am not sure of the use cases, though.
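As a sketch of that last point (illustrative names, not the actual bin layout): if the bin header carries both an offset a and a scale b, so a bin represents the values (a + b * X[i]), then combining a constant with a fat bin only touches the header, never the payload.

```scheme
(use-modules (srfi srfi-9))

;; Bin header with offset a and scale b; the values are (a + b * X[i]).
(define-record-type <affine-bin>
  (make-affine-bin offset scale payload)
  affine-bin?
  (offset  bin-offset)
  (scale   bin-scale)
  (payload bin-payload))

;; Adding a constant k:  k + (a + b*X) = (k + a) + b*X
(define (bin-add-constant bin k)
  (make-affine-bin (+ k (bin-offset bin))
                   (bin-scale bin)
                   (bin-payload bin)))

;; Scaling by a constant k:  k * (a + b*X) = k*a + (k*b)*X
(define (bin-mul-constant bin k)
  (make-affine-bin (* k (bin-offset bin))
                   (* k (bin-scale bin))
                   (bin-payload bin)))
```

Both operations are O(1) per bin regardless of the payload size, which is where the win against a plain bytevector comes from.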

Now over to the macrology ...

The basic tool for finding out about a ref or a set is defined as:

(define-syntax-rule (mk-get-setter get-setter do-inv? AS a b c d)
  (define-inlinable (get-setter do-inv2? bits bits1 loop)
    (lambda* (#:key (is-typed?   #f)
                    (is-scm      #f)
                    (is-fixed    #f)
                    (is-float    #f)
                    (is-integer  #f)
                    (is-single   #f)
                    (is-unsigned #f)
                    (is-endian   #f)
                    (is-m        #f)
                    (is-inv      #f))
      ...)))   ;; body elided in this mail


AS can be AR, AS, or ARS (ref, set!, ref+set!), i.e. depending on the type of the bin we will find a ref if AR is used, a set! if AS is used, and both the ref and the set for the ARS version. do-inv?, if false, skips the lognot of the stored info in the bin. get-setter is the name we define for the new setter, and a, b, c and d are all of the form
AR:   ((1 ref-u8) (2 ref-u16)) ...
AS:   ((1 set-s8) (2 set-u16)) ...
ARS:  ((1 ref-u8 set-s8) (2 ref-u16 set-u16)) ...

So a is the signed versions used, b is the unsigned versions to use, c is the floating-point operations used, and finally d is the scm-value versions used.

It will define an inlinable function like:
(define-inlinable (get-setter do-inv? bits bits1 loop) ...)
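To make the mechanism concrete, here is a much-simplified, hypothetical sketch of the idea (none of these names are the actual supervector API): the keyword filters narrow a small table of accessors down to one, outside the loop, so the loop body itself runs with a single specialized ref.

```scheme
(use-modules (rnrs bytevectors))

;; Hypothetical, simplified dispatcher: pick one ref procedure from a
;; small table according to the keyword filters.
(define* (select-ref bits #:key (is-unsigned #f))
  (cond
   ((and is-unsigned (= bits 8))  bytevector-u8-ref)
   ((and is-unsigned (= bits 16)) bytevector-u16-native-ref)
   ((= bits 8)                    bytevector-s8-ref)
   (else                          bytevector-s16-native-ref)))

;; The dispatch happens once, before the loop; the loop body is then
;; monomorphic in the chosen accessor.
(define (sum-u8 bv)
  (let ((ref (select-ref 8 #:is-unsigned #t)))
    (let loop ((i 0) (acc 0))
      (if (< i (bytevector-length bv))
          (loop (+ i 1) (+ acc (ref bv i)))
          acc))))
```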

So using get-setter typically means:

((get-setter #f bin1 #f
             (lambda (set) (set v 2 val)))
 #:is-endian   'little  ;; only consider little-endian setters
 #:is-unsigned #t       ;; only use unsigned
 #:is-integer  #t       ;; only use integer representations
 #:is-fixed    #t)      ;; do not use the scm value vector versions
This is a version where we only consider handling non-negative integers of up to 64 bits. The gain is faster compilation, as this idiom will dispatch between 4 different versions of the loop lambda, and the compiler could inline all of them, or detect the one that is actually used and hot-compile that version (a feature we do not yet have in Guile). When you select between a ref and a set you will similarly end up with 4*4 = 16 different loops. The full version is very large: a double loop with all features consists of (2*2 + 3*2*2*2 + 4 + 1)^2 = 33*33 ~ 1000 versions of the loop, which is crazy if we were to expand the loop for all cases in the compilation. Guile would instead just use a functional approach and not expand the loop everywhere. We will have parameterised versions of the libraries so that one can select which versions to compile for. For example, the general function that transforms one supervector into another is a generic idiom that would use the full dispatch, which is not practical.
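The variant count quoted above can be checked directly:

```scheme
(expt (+ (* 2 2)        ; term grouping as in the count above
         (* 3 2 2 2)
         4
         1)
      2)
;; => 1089, i.e. 33 * 33, the roughly 1000 loop versions mentioned
```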

So for e.g. bitwise operations one could do:

(define-values (l-not l-and l-or l-xor)
  ...                       ;; generator expression elided in this mail
  #:is-endian   'little     ;; only consider little-endian setters
  #:is-unsigned #t          ;; only use unsigned
  #:is-integer  #t          ;; only use integer representations
  #:is-fixed    #t)         ;; do not use the scm value vector versions

As you realise, although the compiler can inline stuff, the complexity is too big for the compiler to expand all the loops. In the end this facility should not be needed, but currently I think it's a good idea.

That's the plan!


On Thu, Sep 2, 2021 at 5:45 PM Stefan Israelsson Tampe <> wrote:
Hi guilers!

My next project is to explore a more advanced bytevector structure than today's bytevectors. I think it would be worth having this infrastructure in Guile, with core code taking advantage of it, such as building strings on top of it and allowing our string library to use those (the difficult case is probably getting regexps to work properly).

Anyhow this is my initial comment in the code:

The idea of this data structure is to note that when we employ full large-datastructure scans, we can allow for a much richer and more featureful datastructure than a simple bytevector. The reason is that we can divide the data into byte chunks and spend most of the time scanning, copying and mapping those areas with the usual methods; even optimized C code taking advantage of advanced CPU opcodes is possible here. And by dividing it into chunks we get a lot of new features at essentially no cost beyond some complexity, which we manage mostly in Scheme. We gain many things:

1. Many new features that can vary per page

2. Less memory hogging, as
   a. it can use copy-on-write semantics
   b. it does not need to find 1GB continuous blocks

3. Potentially faster operations, as we can fast-forward the zero-on-write
   pages compared to pure bytevectors

4. We can allow for swapping and referential datastructures to speed up
   copying even further

5. We can get better fiber behaviour than C programs that spend seconds or
   minutes performing an operation, because it will just spend a microsecond
   or so in C-land and then swap back to Scheme. CTRL-C will also work nicely.

6. We could potentially build a string library on top of these datastructures,
   and we could also add features to pages that lead to a much richer interface.

7. Resizing is much faster and more efficient

8. Reversing is super quick

9. Queues and stacks of byte data can have very efficient implementations

The drawbacks:

1. More complexity, as we need to consider boundaries
2. Slower one-off operations like bytevector-u8-ref, as guile compiles the
   core operations to quite effective JIT CPU encodings. But maybe we can
   design some caching to make those operations much faster and even have
   supporting JIT operations.
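As an illustration of why one-off accesses pay a price, here is a minimal sketch (illustrative names, not the actual API) of chunked addressing: every access must first locate its chunk before the ordinary bytevector ref can run.

```scheme
(use-modules (rnrs bytevectors))

(define chunk-size 4096)          ;; bytes per chunk; illustrative choice

;; chunks is a Scheme vector of equally sized bytevectors.
(define (super-u8-ref chunks i)
  (let ((chunk  (vector-ref chunks (quotient i chunk-size)))
        (offset (remainder i chunk-size)))
    (bytevector-u8-ref chunk offset)))
```

With a power-of-two chunk size the quotient/remainder pair compiles down to a shift and a mask, which is what a caching or JIT-assisted scheme would try to make as cheap as a flat bytevector access.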


