
[Gzz-commits] manuscripts/pointers article.rst


From: Benja Fallenstein
Subject: [Gzz-commits] manuscripts/pointers article.rst
Date: Mon, 10 Nov 2003 06:35:49 -0500

CVSROOT:        /cvsroot/gzz
Module name:    manuscripts
Branch:         
Changes by:     Benja Fallenstein <address@hidden>      03/11/10 06:35:49

Modified files:
        pointers       : article.rst 

Log message:
        twid intro

CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/manuscripts/pointers/article.rst.diff?tr1=1.213&tr2=1.214&r1=text&r2=text

Patches:
Index: manuscripts/pointers/article.rst
diff -u manuscripts/pointers/article.rst:1.213 manuscripts/pointers/article.rst:1.214
--- manuscripts/pointers/article.rst:1.213      Mon Nov 10 06:17:00 2003
+++ manuscripts/pointers/article.rst    Mon Nov 10 06:35:49 2003
@@ -122,9 +122,9 @@
 servers.
 If the Web worked like a filesharing system, there would be
 no central point of failure for a web page; a page could be downloaded
-from any host with a copy. This would save bandwidth
+from any host that has a copy. This would save bandwidth
 and increase availability; pages would stay online as long as
-any user keeps a copy of them on their local harddisk.
+any user keeps them on their local harddisk.
 
 Such permanence is an important concern, as seen by the following example.
 In 1997, NASA launched the Cassini-Huygens spacecraft
@@ -138,7 +138,7 @@
 SpaceViews, a publication of the National Space Society,
 published a list of links to web pages of both
 Cassini opponents and supporters [#rtg-links]_.
-In the year 2003,
+In 2003,
 only six years after the launch, 
 only 29 
 of the 83 links provided by SpaceViews continue to work.
@@ -161,26 +161,26 @@
 
 
 We hold that a P2P Web should provide
-the following features of filesharing systems for its users: 
+the following features of filesharing systems to its users: 
 
 Off-line capability
        The ability to 
-       download version(s) and use them off-line without 
+       download versions and use them off-line without 
        a change in the user interface or functionality
-       (clicking on a link should "just work" when off-line,
-       as long as the target is available locally)
        and to use a neighbouring computer's cache
-       if the LAN is disconnected from the internet 
+       if the LAN is disconnected from the Internet.
+       Clicking on a link should "just work" when off-line,
+       as long as the target is available locally.
 
 Heterogeneity
        The possibility to
        use different P2P networks interchangeably, even
        simultaneously, sharing the same data in all of them,
        a) to withstand
-       exploits of single network; b) because one size doesn't fit all
+       exploits of a single network; b) because one size doesn't fit all
        (e.g. anonymity vs. efficiency); 
        c) because people can use their favorite network
-       while agreeing on a common data model (network effect)
+       while agreeing on a common data model (network effect).
 
 Distributed archivability
        The ability to
@@ -188,7 +188,7 @@
        someone else online, *accessible to others*,
        under its original URI, for any period of time
        (so that when the original publisher loses interest,
-       a page does not fall off the Web).
+       a page does not have to fall off the Web).
 
 In a P2P Web, pages could be
 linked using  permanent URIs 
@@ -212,25 +212,23 @@
 a versioning mechanism which is similar to OceanStore's heartbeats,
 but allows the clients to download and store the pointer records
 along with the corresponding version of a document.
-Instead of having a primary replica, we search
+To find the current version, we search
 the whole P2P network for pointer records
 related to a document.
 
 An additional contribution is the Storm data model,
 an API formalizing the notion of searching for data
-by hash and content. Indexing files by content
-is implemented through application-specific plug-ins.
-Allowing files to be found by any particular property
-of their content (such as the 'album' field in OGG metadata)
-is as easy as writing a simple plug-in.
-Pointer records may be implemented
-on top of the Storm indexing, rather than separately
-for each P2P network.
+by hash and content. 
 The API is used by applications
 such as browsers and implemented for each P2P network
 accessed. 
+Indexing files, to allow them to be found by any particular property
+of their content (such as the \`album' field in an ogg file's metadata),
+is as easy as writing an application-specific plug-in.
+The Storm model allows pointer records to be implemented
+only once, rather than separately for each P2P network.
 
-.. The indices generated by indexing plugins
+.. The indices generated by indexing plug-ins
    are automatically published on each P2P network.
 
 ..  Like in OceanStore, old
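
[Editor's note: the pointer-record lookup described in the hunk above (no primary replica; the whole network is searched and the newest verifiable record wins) can be pictured with a short sketch. Every name here -- PointerRecord, PointerIndex, targetHash and so on -- is an illustrative assumption, not the actual Storm API.]

    // Illustrative sketch only; names and fields are assumptions, not the real Storm API.
    import java.util.Collection;

    /** A pointer record binds a permanent pointer ID to one concrete version (block). */
    class PointerRecord {
        String pointerId;     // permanent identifier of the document
        String targetHash;    // hash of the block holding this version's content
        long   timestamp;     // when this version was declared current
        byte[] signature;     // signed by the pointer's owner, so peers can check it
    }

    interface PointerIndex {
        /** Ask every reachable peer for records about this pointer (no primary replica). */
        Collection<PointerRecord> findRecords(String pointerId);
    }

    class PointerLookup {
        /** Resolve a pointer to its newest verifiable version. */
        static PointerRecord currentVersion(PointerIndex network, String pointerId) {
            PointerRecord newest = null;
            for (PointerRecord r : network.findRecords(pointerId)) {
                if (!verify(r)) continue;                              // skip unverifiable records
                if (newest == null || r.timestamp > newest.timestamp) newest = r;
            }
            return newest;   // may be null if no peer currently stores a record
        }
        static boolean verify(PointerRecord r) { return r.signature != null; } // placeholder check
    }

[The design point the hunk makes is that the records travel with the document versions themselves, so any peer holding a copy can answer the query.]
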
@@ -447,7 +445,7 @@
 and content-based search present in many existing P2P systems
 (although allowing files to be found by specific properties
 of their content is made much easier to implement, using 
-a plugin architecture).
+a plug-in architecture).
 
 .. [#] For STORage Module. Storm is also the name of our
    Free Software implementation, discussed in
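
[Editor's note: for a concrete picture of the two lookups the hunks keep contrasting -- exact lookup by hash versus content-based search backed by plug-ins -- a minimal sketch follows. The interface is hypothetical; the real Storm API may differ.]

    // Minimal sketch of the two kinds of lookup described in the text.
    // Interface and method names are assumptions for illustration only.
    import java.util.Set;

    interface StormPool {
        /** Exact lookup: fetch an immutable block by the cryptographic hash of its bytes. */
        byte[] getBlock(String hash);

        /** Content lookup: return hashes of all blocks that an indexing plug-in
            has filed under this key (e.g. "album=Abbey Road"). */
        Set<String> findBlocks(String indexKey);
    }

[As the abstract says, applications such as browsers would program only against this API, while each P2P network that is accessed provides its own implementation.]
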
@@ -467,7 +465,7 @@
     for showing a block in a Web browser.)
 
 Reverse indices
-    Reverse indices are application-specific plugins
+    Reverse indices are application-specific plug-ins
     which index blocks by properties of their content
     rather than by hash. Reverse indices examine each
     block in a local data store and return hashtable keys
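
[Editor's note: the "Reverse indices" glossary entry above suggests that a plug-in only has to map a block to the hashtable keys it should be findable under. A minimal sketch, with a hypothetical interface name and a stubbed-out OGG tag parser, both assumptions of this illustration.]

    // Sketch of an application-specific reverse-index plug-in as described in the
    // glossary entry; the interface name and the OGG parsing are assumptions.
    import java.util.Collections;
    import java.util.Set;

    interface ReverseIndexPlugin {
        /** Examine one block and return the hashtable keys it should be findable under. */
        Set<String> keysFor(String blockHash, byte[] blockContent);
    }

    /** Example: file OGG blocks under their 'album' metadata field. */
    class AlbumIndexer implements ReverseIndexPlugin {
        public Set<String> keysFor(String blockHash, byte[] blockContent) {
            String album = readAlbumTag(blockContent);     // hypothetical tag parser
            if (album == null) return Collections.emptySet();
            return Collections.singleton("album=" + album);
        }
        private String readAlbumTag(byte[] content) {
            return null;  // a real plug-in would parse the Vorbis comment header here
        }
    }

[A query for "album=..." would then return the hashes of all blocks the plug-in filed under that key, matching the 'album' example used in the abstract.]
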



