
From: Janne V. Kujala
Subject: [Gzz-commits] manuscripts/AGPU paper.txt Makefile
Date: Mon, 14 Apr 2003 06:54:26 -0400

CVSROOT:        /cvsroot/gzz
Module name:    manuscripts
Changes by:     Janne V. Kujala <address@hidden>        03/04/14 06:54:24

Modified files:
        AGPU           : paper.txt 
Added files:
        AGPU           : Makefile 

Log message:
        irregu figs

CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/manuscripts/AGPU/Makefile?rev=1.1
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/manuscripts/AGPU/paper.txt.diff?tr1=1.9&tr2=1.10&r1=text&r2=text

Patches:
Index: manuscripts/AGPU/paper.txt
diff -u manuscripts/AGPU/paper.txt:1.9 manuscripts/AGPU/paper.txt:1.10
--- manuscripts/AGPU/paper.txt:1.9      Wed Apr  9 06:47:01 2003
+++ manuscripts/AGPU/paper.txt  Mon Apr 14 06:54:24 2003
@@ -12,10 +12,9 @@
 The user can
 thus identify an item at a glance, even if only a *fragment* of the
 item is shown, without reading the title (which the fragment may not
-even show).  
+even show). See Fig.1.
 The user should be able to learn the textures of the most
 often visited documents, as per Zipf's law.
-See figxupdfdiag.
 An initial experiment has shown that the generated textures are indeed
 recognizable.
 
@@ -48,7 +47,7 @@
 combiner parameters chosen randomly from the seed number.  For this,
 we use dot products of texture values with each other and with random
 constant vectors, and scale up with the register combiner output mappings
-to sharpen the result (see Fig.~\ref{fig-regcomb}).  The resulting values
+to sharpen the result (see Fig. 4).  The resulting values
 are used for interpolating between the palette colors.
 On the NV25, we use offset textures to allow the creation of new 
 shapes by texture shading.
@@ -58,7 +57,7 @@
 
 --- Figures
 
-figxupdfdiag: The motivating example for unique backgrounds: the
+Fig.1. The motivating example for unique backgrounds: the
 BuoyOING focus+context interface for browsing bidirectionally
 hyperlinked documents.  The interface shows the relevant *fragments*
 of the other ends of the links and animates them fluidly to the focus
@@ -71,7 +70,7 @@
 document (1) which was in the focus in the first keyframe.  Our (as
 yet untested) hypothesis is that this will aid user orientation.
 
-fig-perceptual: The qualitative model of visual perception used to
+Fig.2. The qualitative model of visual perception used to
 create the algorithm.  The visual input is transformed into a feature
 vector, which contains numbers (activation levels) corresponding to
 e.g. colors, edges, curves and small patterns.  The feature vector is
@@ -79,24 +78,24 @@
 recognizable textures, random seed values should produce a
 distribution of feature vectors with maximum entropy.
 
-fig-basis: The complete set of 2D basis textures used by our
+Fig.3. The complete set of 2D basis textures used by our
 implementation.  All textures shown in this article are built from
 these textures and the corresponding HILO textures for offsetting.
 
-fig-regcomb: How the limited register combiners of the NV10
+Fig.4. How the limited register combiners of the NV10
 architecture can be used to generate shapes.  Top: the two basis
 textures.  Bottom left: dot product of the basis textures:
-2(2a-1)\cdot(2b-1)+1/2, where a and b are the texture RGB values.
+2(2a-1) . (2b-1)+1/2, where a and b are the texture RGB values.
 Bottom right: dot product of the basis textures squared: 32(
-(2a-1)\cdot(2b-1) )^2.  This term can then be used to modulate between
+(2a-1) . (2b-1) )^2.  This term can then be used to modulate between
 two colors.
 
-fig-examples: A number of unique backgrounds generated by our system.
+Fig.5. A number of unique backgrounds generated by our system.
 This view can be rendered, without pre-rendering the textures, in 20
 ms on a GeForce4 Ti 4200 in a 1024x768 window (fill-rate/bandwidth
 limited).
 
-figxanalogicalexample: Two different screenshots of a structure of PDF
+Figs.6-7. Two different screenshots of a structure of PDF
 documents viewed in a focus+context view.  The user interface shows
 relationships between specific points in the documents.  Each document
 has a unique background, which makes it easy to see that the fragment
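
As a reading aid for the register-combiner arithmetic quoted above (the Fig. 4
caption), below is a minimal NumPy sketch that emulates the same per-pixel math
on the CPU. It is an illustration only, not code from the gzz repository; the
array names basis_a and basis_b, the seed value, and the palette colors are
hypothetical, and the explicit clamping stands in for the combiners' output
saturation.

    import numpy as np

    rng = np.random.default_rng(42)        # hypothetical stand-in for the per-document seed
    basis_a = rng.random((64, 64, 3))      # RGB values of one basis texture, in [0, 1]
    basis_b = rng.random((64, 64, 3))      # RGB values of the other basis texture, in [0, 1]

    # Expand [0, 1] texture values to the signed [-1, 1] range used by the combiners.
    a = 2.0 * basis_a - 1.0
    b = 2.0 * basis_b - 1.0

    # Per-pixel dot product over the RGB channels: (2a-1) . (2b-1)
    dot = np.sum(a * b, axis=-1)

    # Bottom-left image of Fig. 4: 2 (2a-1).(2b-1) + 1/2, saturated to [0, 1].
    sharpened = np.clip(2.0 * dot + 0.5, 0.0, 1.0)

    # Bottom-right image of Fig. 4: 32 ((2a-1).(2b-1))^2, saturated to [0, 1].
    modulator = np.clip(32.0 * dot ** 2, 0.0, 1.0)

    # Use the modulator term to interpolate between two (hypothetical) palette colors.
    color0 = np.array([0.9, 0.8, 0.6])
    color1 = np.array([0.2, 0.3, 0.5])
    background = (1.0 - modulator[..., None]) * color0 + modulator[..., None] * color1

On the GPU the paper describes this as register-combiner operations with the
output mappings doing the scaling; the sketch only mirrors the arithmetic, not
the hardware path.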



