
From: Nathaniel Smith
Subject: Re: [Monotone-devel] announcing preliminary "dumb server" support for monotone
Date: Fri, 14 Oct 2005 10:43:55 -0700
User-agent: Mutt/1.5.9i

On Fri, Oct 14, 2005 at 04:37:04PM +0200, Zbynek Winkler wrote:
> Yeah, I've noticed that too. I suppose the trees are rebuilt for each 
> push/pull because different branches can be included in it every time, 
> right? Would it make sense to precompute the merkle trees for each 
> branch and store it in the db? It could get updated in a lazy manner - 
> when adding new things to the branch, delete the hash indexes that would 
> need to be updated; on the next sync rebuild all the missing hash
> indexes...

Maybe.  There isn't a separate merkle trie for each branch; there's a
single merkle trie for a _set_ of branches.  (Or something like that
-- there's some subtlety where if we have A -> B, where A and B are in
separate branches, any merkle trie that includes B will still
include A, to make sure the receiving db stays consistent.)
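
The closure rule described above (if B is in the synced set, its ancestor A must come along too) could be sketched roughly like this -- a toy illustration only, not monotone's actual netsync code, with all names invented for the example:

```python
# Sketch of the ancestry-closure rule: if revision B is in the requested
# set and A is an ancestor of B, A must be included too, even when A
# lives on a branch outside the set, so the receiving db stays
# consistent.  `parents` maps a revision id to its parent ids.

def closure(requested_revs, parents):
    """Return requested_revs plus every ancestor, via iterative DFS."""
    seen = set()
    stack = list(requested_revs)
    while stack:
        rev = stack.pop()
        if rev in seen:
            continue
        seen.add(rev)
        stack.extend(parents.get(rev, ()))
    return seen

# A -> B with A on a different branch: syncing {B} still pulls in A.
parents = {"B": ["A"], "A": []}
print(sorted(closure({"B"}, parents)))  # ['A', 'B']
```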

In the normal case, the user always syncs the same set, so this kind
of caching might be a win.  It's not really related to monotone-dumb,
though :-).
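
The lazy caching Zbynek suggests -- keep a trie root per synced set, drop it when a branch in the set changes, recompute on the next sync -- might look something like this. Everything here (TrieCache, root_for, the sha1-of-sorted-ids stand-in) is a hypothetical sketch, not how monotone stores its tries:

```python
# Rough sketch of lazy per-set merkle caching: cache a root hash keyed
# by the frozenset of branches synced, invalidate any cached set that
# contains a branch which just received new revisions, and rebuild
# lazily at the next sync.
import hashlib

class TrieCache:
    def __init__(self):
        self._roots = {}  # frozenset(branches) -> cached root hash

    def invalidate(self, branch):
        # A commit touched `branch`: forget every cached set containing it.
        for key in [k for k in self._roots if branch in k]:
            del self._roots[key]

    def root_for(self, branches, compute):
        key = frozenset(branches)
        if key not in self._roots:
            self._roots[key] = compute(key)  # rebuilt only when missing
        return self._roots[key]

def fake_compute(key):
    # Stand-in for the real trie build; any deterministic hash will do.
    return hashlib.sha1("".join(sorted(key)).encode()).hexdigest()

cache = TrieCache()
r1 = cache.root_for({"net.venge.monotone"}, fake_compute)
cache.invalidate("net.venge.monotone")  # next sync recomputes
```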

> >However, this doesn't actually solve any of the problems that
> >merkle_dir.py solves, since all its heavy lifting has to do with the
> >other end -- how do you deal with a simple remote filesystem to make
> >it possible to efficiently push and pull.  It isn't trivial to
> >integrate this with something like the above 'automate merkle_hash'
> >idea, because then you'd have to make sure you could efficiently
> >calculate hashes _in the same way monotone does_, and that takes a bit
> >of thought.
> >
> >Oh, however however, you're actually quite right -- you could do
> >something very conceptually simple in just monotone-dumb, which is
> >implement a class that acts like a MerkleDir, but that is constructed
> >in memory directly off a monotone database.  Basically, you'd iterate
> >over the db like do_export does, but instead of actually fetching
> >stuff, you could just keep track of which ids exist, build HASHES
> >files in memory, and use them to sync.  Then you pull stuff out of the
> >db as necessary, when it turns out you want to send it to the remote
> >side.  That'd be neat.
> > 
> >
> Yes, that would. I might give it a shot... How hard would it be to 
> implement something like the above (including the precomputed per-branch 
> merkle trees) outside of monotone first, let's say as a python wrapper? 
> When adding stuff to a branch it would invalidate part of the stored 
> cache and recompute it on next sync. One merkle dir would store only one 
> branch...?

The precomputed per-branch merkle tries seem totally orthogonal to
me; maybe I'm missing something.
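
For what it's worth, the in-memory MerkleDir idea quoted above could be sketched along these lines. The names (InMemoryMerkleDir, get_ids, fetch_data) and the prefix-bucketed HASHES layout are illustrative assumptions, not the real merkle_dir.py interface:

```python
# Sketch: iterate over the db's ids (the way do_export walks the db,
# but without fetching data), build HASHES chunks in memory keyed by id
# prefix, and pull actual data out of the db only for ids the remote
# side turns out to be missing.
import hashlib

class InMemoryMerkleDir:
    def __init__(self, db):
        self.db = db  # anything with get_ids() and fetch_data(rid)
        self.hashes = {}  # prefix -> list of ids (in-memory HASHES files)
        for rid in db.get_ids():  # ids only; no data fetched yet
            self.hashes.setdefault(rid[:2], []).append(rid)

    def chunk_hash(self, prefix):
        # Hash of one HASHES chunk; comparing these prunes the sync.
        ids = sorted(self.hashes.get(prefix, []))
        return hashlib.sha1("\n".join(ids).encode()).hexdigest()

    def missing_in(self, other):
        # Yield (id, data) for ids the other side lacks, fetching from
        # the db only at this point, when we know we must send them.
        for prefix, ids in self.hashes.items():
            if self.chunk_hash(prefix) != other.chunk_hash(prefix):
                theirs = set(other.hashes.get(prefix, []))
                for rid in ids:
                    if rid not in theirs:
                        yield rid, self.db.fetch_data(rid)
```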

-- Nathaniel

-- 
"The problem...is that sets have a very limited range of
activities -- they can't carry pianos, for example, nor drink
beer."



