[Gzz-commits] manuscripts/storm article.rst


From: Benja Fallenstein
Subject: [Gzz-commits] manuscripts/storm article.rst
Date: Sun, 09 Feb 2003 01:18:25 -0500

CVSROOT:        /cvsroot/gzz
Module name:    manuscripts
Changes by:     Benja Fallenstein <address@hidden>      03/02/09 01:18:23

Modified files:
        storm          : article.rst 

Log message:
        more

CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/manuscripts/storm/article.rst.diff?tr1=1.118&tr2=1.119&r1=text&r2=text

Patches:
Index: manuscripts/storm/article.rst
diff -u manuscripts/storm/article.rst:1.118 manuscripts/storm/article.rst:1.119
--- manuscripts/storm/article.rst:1.118 Sat Feb  8 22:53:09 2003
+++ manuscripts/storm/article.rst       Sun Feb  9 01:18:23 2003
@@ -233,7 +233,7 @@
 2.2. Alternative versions
 -------------------------
 
-Likewise, version control systems like CVS or RCS [ref] usually assume
+Version control systems like CVS or RCS [ref] usually assume
 a central server hosting a repository. The WebDAV/DeltaV protocols,
 designed for interoperability between version control systems, inherit
 this assumption [ref]. On the other hand, Arch [ref] places all repositories
@@ -335,6 +335,19 @@
    In the broadcasting approach, implementations' differences mostly lie in the
    *structural level* of the overlay network, i.e. super peers and peer clusters.
 
+The basic definition of a distributed hashtable does not specify
+how large the keys and values may be. Intuitively, we expect keys
+to be small, maybe a few hundred bytes at most; however, there are different
+approaches to the size of values. Consider a file-sharing application:
+if the keys are keywords from the titles of shared files, are the values
+the files themselves, or the addresses of peers from which the files may be
+downloaded? Iyer et al. [ref Squirrel] call the former approach
+a *home-store* and the latter a *directory* scheme (they call the peer
+responsible for a hashtable item its 'home node', thus 'home-store').
+
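
As a rough illustration of the difference, here is a toy sketch in Python
(not code from Storm or Squirrel; the class and function names are invented).
Under the same keyword key, the home-store scheme stores the data itself,
while the directory scheme stores only a pointer to a peer holding it::

    # A local dict standing in for a distributed hashtable; in a real DHT
    # each item lives on the peer responsible for its key (the 'home node').
    class ToyDHT:
        def __init__(self):
            self.table = {}

        def put(self, key, value):
            self.table.setdefault(key, []).append(value)

        def get(self, key):
            return self.table.get(key, [])

    # Home-store scheme: the value stored under the keyword *is* the file's
    # content, so the home node serves the data itself.
    def publish_home_store(dht, keyword, file_bytes):
        dht.put(keyword, file_bytes)

    # Directory scheme: the value is only the address of a peer holding the
    # file; the data must then be fetched from that peer in a separate step.
    def publish_directory(dht, keyword, peer_address):
        dht.put(keyword, peer_address)
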
+.. Should we discuss applications of p2p systems (CFS, OceanStore, Squirrel, ...)
+   here? If so, which ones?
+
 CFS [ref], which is built upon the Chord DHT peer-to-peer routing layer [ref], stores
 data as blocks. However, CFS *splits* data (files) into several miniblocks and
 spreads the blocks over the available CFS servers. Freenet [ref] and PAST [ref],
@@ -342,26 +355,39 @@
 as whole files. All of the previously mentioned systems lack the
 immutability property of Storm blocks.
 
-
-2.4. Peer-to-peer hypermedia
-----------------------------
-
-Related work: we need something about p2p hypermedia: 
-[ref Bouvin, Wiil ("Peer-to-Peer Hypertext")]
-
-.. (Probabilistic access to documents may be ok in e.g. workgroups,
-   but does not really seem desirable. (At the ht'02 panel, Bouvin
-   said they might be ok, which others found very... bold.) 
-   One example may be a user's public comments on documents; 
-   these might be only available when that user is online.
+Recently there has been some interest in peer-to-peer hypermedia.
+Thompson and de Roure [ref ht01] examine the discovery
+of documents and links available at and relating to
+a user's physical location. For example, this could include
+a linkbase constructed from links made available by different
+participants of a meeting [thompson00weaving]. 
+Bouvin [ref 02] focuses on the scalability and ease of entry
+of peer-to-peer systems, examining ways in which p2p can serve
+as a basis for Open Hypermedia, while our own work has focused
+on implementing Xanalogical storage [ref 02].
+
+At the Hypertext'02 panel on peer-to-peer hypertext [ref],
+there was a lively discussion on whether the probabilistic access
+to documents that results from peers joining and leaving the network
+would be tolerable for hypermedia publishing. For many documents,
+the answer is probably no; however, for personal links,
+comments and notes about documents, this behavior may be acceptable,
+especially since this kind of publication would not require
+setting up a webspace account first and could therefore
+encourage publication.
+
+In the end, some peers will necessarily be more equal than others:
+published data will be hosted on servers
+that are permanently online, but act as ordinary peers
+in the indexing overlay network.
    
 
 3. Block storage
 ================
 
 In Storm, all data is stored
-as *blocks*, byte sequences identified by a SHA-1 cryptographic content hash 
-[ref SHA-1 and our ht'02 paper]. 
+as *blocks*, byte sequences identified by a SHA-1 
+cryptographic content hash [ref SHA-1]. 
 Being purely a function of a block's content, block ids
 are completely independent of network location.
 Blocks have a similar granularity
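
To make the content-addressing concrete, a minimal sketch (using Python's
standard hashlib; the hex-digest form is only an illustration, not Storm's
actual id syntax) of a block id as a pure function of the block's bytes::

    import hashlib

    def block_id(block_bytes: bytes) -> str:
        # The id is derived from nothing but the content, so every peer
        # holding the same bytes computes the same id, independent of
        # where the block happens to be stored.
        return hashlib.sha1(block_bytes).hexdigest()

    print(block_id(b"example block content"))
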
@@ -471,6 +497,10 @@
 a published document from the network; whether this is
 a good or a bad property we leave for the reader to judge.
 
+Finally, because blocks are easy to move from system to system, block
+storage may be more *durable* than files: blocks can be copied to a new
+system wholesale without breaking the identifiers that refer to them.
+
 These advantages are bought by an utter incompatibility with
 the dominant paradigms of file names and URLs. We hope that
 it would be possible to port existing applications to use Storm
@@ -513,7 +543,7 @@
 Unfortunately, we have not put a p2p-based implementation
 into use yet and can therefore only report on our design.
 Currently, we are working on a prototype implementation
-based on the GISP distributed hashtable [ref]
+based on UDP, the GISP distributed hashtable [ref],
 and the directory approach (using the DHT to find a peer
 with a copy of the block, then using HTTP to download the block).
 Many practical problems have to be overcome before this
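
A minimal sketch of the retrieval path just described (look up peers in the
DHT, then download over HTTP), assuming a ``dht`` object whose ``get`` method
returns the addresses of peers advertising the block; the GISP interface and
the URL layout shown here are assumptions, not the actual prototype::

    import hashlib
    import urllib.request

    def fetch_block(dht, block_id):
        # Directory approach: the DHT maps the block id to peers that
        # advertise a copy; the block itself is downloaded over HTTP.
        for peer in dht.get(block_id):                # e.g. "host:port"
            url = "http://%s/%s" % (peer, block_id)   # hypothetical URL layout
            try:
                with urllib.request.urlopen(url) as response:
                    data = response.read()
            except OSError:
                continue                              # peer unreachable; try the next
            # Verify the download: the id must equal the SHA-1 hash of the content.
            if hashlib.sha1(data).hexdigest() == block_id:
                return data
        raise KeyError("no reachable peer served block %s" % block_id)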



