
Re: [Monotone-devel] url schemes


From: Derek Scherger
Subject: Re: [Monotone-devel] url schemes
Date: Sat, 22 Mar 2008 18:41:55 -0600
User-agent: Thunderbird 2.0.0.12 (X11/20080303)

Markus Schiltknecht wrote:
> Then, there are the planned nuskool commands. Those are currently encoded entirely in JSON. The HTTP client requests the same URL every time and encodes the query in JSON. At the moment, nuskool doesn't support branch inclusion or exclusion patterns. The commands currently are:
>
>  * inquiring about revisions: asking the server whether it has certain revisions
>  * getting descendants: querying the server's ancestry map
>  * getting (pulling) a revision
>  * putting (pushing) a revision
>  * getting file data
>  * putting file data
>  * getting a file delta
>  * putting a file delta

I'm not convinced they ought to stay this way, though. That's just where I ended up after picking up what graydon had started, and I don't know whether I went in the direction he had in mind. I have the feeling that it's a bit verbose, but I'm not sure what to do about that yet.
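
Just to make the shape concrete, here's a rough sketch of what one such JSON exchange could look like over HTTP. The command name, field names and the "known" response key are all invented for illustration; they are not the actual nuskool wire format.

    # Hypothetical sketch of a nuskool-style JSON query. Every command is
    # POSTed to the same URL and the query itself is JSON; the command and
    # field names here are invented, not the real wire format.
    import json
    import urllib.request

    def inquire_revisions(url, revision_ids):
        body = json.dumps({
            "command": "inquire_revisions",   # hypothetical command name
            "revisions": list(revision_ids),  # revision ids to ask about
        }).encode("utf-8")
        req = urllib.request.Request(url, data=body,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            # Hypothetical response: the subset of ids the server already has.
            return json.loads(resp.read().decode("utf-8"))["known"]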

On the bright side, I have managed to pull files and revisions from my monotone database using the nuskool branch (which doesn't yet pull certs or keys, or care about branch epochs, but does basically seem to work). It is rather slow at the moment (71 minutes vs 25 minutes with netsync, which *does* pull certs, keys, etc.). I haven't done any profiling yet, but I would expect two things to show up.

(1) The cost of printing/parsing basic_io has come up in the past, and nuskool adds very similar json_io printing/parsing, so it will probably double the printing/parsing time.

(2) It's currently very granular: request one revision, receive one revision, then for each file changed in that revision request one file data or delta and receive it, and so on until all the content for the revision has been received, then move on to the next revision. The latency of these request/response round trips is probably a big factor.
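
As a rough, made-up illustration of how much the per-file round trips could cost on their own (none of these numbers are measurements):

    # Back-of-the-envelope latency estimate; all numbers are invented.
    revisions = 10000        # revisions to pull
    files_per_rev = 5        # average files changed per revision
    rtt = 0.05               # seconds of request/response latency

    requests = revisions * (1 + files_per_rev)  # one per rev, plus one per file
    print("round trips:", requests)                               # 60000
    print("latency alone: %.0f minutes" % (requests * rtt / 60))  # ~50 minutes

With numbers in that ballpark, latency alone can dominate before any transfer or parsing time is counted.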

I went with the fine-grained get/put request/response pairs so that neither side would end up having to hold too many files in memory at any one time. If we instead requested all the file data/deltas for one rev, the number of round trips would be reduced, but we'd end up having to hold at least one copy (probably more) of the whole works in memory, which didn't seem so good. I'm open to suggestions. ;)
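
One possible middle ground, purely as a sketch (this is not something nuskool currently does, and the get_files command and its fields are invented): fetch a revision's file contents in fixed-size batches, so round trips drop by roughly the batch size while memory stays bounded.

    # Hypothetical batched fetch: request a revision's files N at a time.
    # 'get_files' and its fields are invented for illustration.
    import json
    import urllib.request

    def pull_files_batched(url, file_ids, store, batch_size=20):
        # 'store' writes one file's content into the local database.
        for i in range(0, len(file_ids), batch_size):
            batch = file_ids[i:i + batch_size]
            body = json.dumps({"command": "get_files",
                               "files": batch}).encode("utf-8")
            req = urllib.request.Request(url, data=body,
                                         headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                files = json.loads(resp.read().decode("utf-8"))["files"]
                for fid, content in files.items():
                    store(fid, content)
            # At most batch_size contents are held per request, so memory
            # stays bounded while round trips shrink by ~batch_size.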

In terms of the printing/parsing, Zack mentioned a while ago the idea of a binary encoding for revs, and I had been thinking along the same lines. A serialization that is very simple to read and write would be good. I'm not sure whether the JSON form has any real benefit (whether arbitrary web clients would be interested in the rev formats, and so on) or whether a simple binary form would be better.
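
For what it's worth, the sort of "simple to read and write" form I have in mind is something like length-prefixed fields; this is only an illustration of the general idea, not a proposal for the actual rev encoding.

    # Sketch of a trivially parseable binary framing: each field is a
    # 4-byte big-endian length followed by that many bytes. Illustrative
    # only; not the real rev format.
    import struct

    def write_fields(fields):
        out = bytearray()
        for f in fields:
            out += struct.pack(">I", len(f)) + f
        return bytes(out)

    def read_fields(data):
        fields, pos = [], 0
        while pos < len(data):
            (n,) = struct.unpack_from(">I", data, pos)
            pos += 4
            fields.append(data[pos:pos + n])
            pos += n
        return fields

    blob = write_fields([b"new_manifest", b"abc123", b"patch", b"foo.c"])
    assert read_fields(blob) == [b"new_manifest", b"abc123", b"patch", b"foo.c"]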

Cheers,
Derek




