Re: [Monotone-devel] url schemes

From: Derek Scherger
Subject: Re: [Monotone-devel] url schemes
Date: Sun, 23 Mar 2008 20:43:28 -0600
User-agent: Thunderbird (X11/20080303)

Markus Schiltknecht wrote:
> Hello Derek,
>
> first of all: nice work in nuskool! Thanks for ripping out my silly code, which re-implemented a kind of toposort. Dunno what I was thinking there...

Haha, I remember looking at that and thinking, "there must be a simpler way" and toposort was it.

> [ A small side note: I'd have had an easier life reading your cool patches if you had committed whitespace changes separately. ]

Yeah, sorry about that. Emacs cleaned up a bunch of things I didn't notice until I had made some changes, and I didn't take the time to commit this as two separate changes.

Xxdiff does work reasonably well to look over whitespace polluted diffs if you turn off display of whitespace. ;)

> Too verbose, maybe. But also very simple to understand.

Indeed. It did turn out to be very simple. The multiplicity of encode/decode request/response things just seems a bit over the top.

>> On the bright side, I have managed to pull files and revisions from my monotone database using the nuskool branch (which doesn't yet pull certs or keys or care about branch epochs but does basically seem to work). It is rather slow at the moment (71 minutes vs 25 minutes with netsync, which *does* pull certs, keys etc.). I haven't done any profiling yet but I would expect two things to show up.

> Uh.. that is the time to pull the complete net.venge.monotone repository, right? While that certainly sounds awful, let me point out that that's not the case where nuskool is supposed to be the winner.

I'm assuming that if this works out it will replace netsync, and it just can't be slower and still be successful, imho.

> It's rather optimized for subsequent pulls and it's already faster than netsync there:

Yeah, the revision refinement phase is really quick. Side note: I'm not 100% sure it's correct yet. I recall a push reporting X outbound revs while the corresponding pull, with the databases reversed, reported some other number of inbound revs. We need to double-check this.

> # time ./mtn gsync -d ../test.db
> mtn: 13,850 common revisions
> mtn: 130 frontier revisions
> mtn: 0 outbound revisions
> mtn: 0 inbound revisions

Oh, another note here. I purposely set things up in run_gsync_protocol so that the client knows exactly which revisions are inbound and outbound, thinking that we really want something like push/pull/sync --check to list (but not transfer) revisions that will be transferred. The mercurial equivalents are the incoming/outgoing commands.
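The bookkeeping for a dry-run --check mode could be as simple as two set differences over the revision ids established during refinement. A sketch (the function name and shapes are my own, not gsync's actual code):

```python
def incoming_outgoing(local_revs, remote_revs):
    # Revisions a dry-run push/pull would report (cf. mercurial's
    # outgoing/incoming): each side's revisions that the other lacks.
    # Illustrative only; gsync's real refinement exchanges more state.
    outbound = local_revs - remote_revs
    inbound = remote_revs - local_revs
    return inbound, outbound

inbound, outbound = incoming_outgoing({"r1", "r2"}, {"r2", "r3"})
# inbound == {"r3"}, outbound == {"r1"}
```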

This may require a bit more information coming back in the descendants response, including author/date/changelog/branch certs for example. The thought of combining author/date/changelog/branch into one commit cert crossed my mind here again. The current certs don't allow us to tie the correct things together. Maybe we should start another branch to combine these certs into a single commit cert.
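For illustration, a combined commit cert could bundle the four values that currently travel as separate certs into one unit. The field names and layout below are purely hypothetical, not monotone's actual cert format:

```python
import json

# Hypothetical shape for a single combined commit cert; the field names
# are illustrative and not part of monotone's real cert representation.
commit_cert = {
    "revision": "abc123",  # revision id (placeholder)
    "author": "someone@example.com",
    "date": "2008-03-23T20:43:28-06:00",
    "branch": "net.venge.monotone",
    "changelog": "one-line summary",
}

wire = json.dumps(commit_cert, sort_keys=True)
assert json.loads(wire) == commit_cert  # lossless round trip
```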

> ./mtn gsync -d ../test.db 1.48s user 0.13s system 38% cpu 4.172 total

> (Avg ping time from here to is ~60 ms)

> (Agreed, that's not a fair comparison either, because gsync doesn't pull certs.)

Yeah, but it is encouraging, nonetheless.

>> (1) printing/parsing basic_io has come up in the past, and nuskool adds very similar json_io printing/parsing, so it will probably double the printing/parsing time.

> That applies to the current http channel. Other channels might or might not use JSON. Or maybe we even want to add different content-types for http, i.e. return JSON or raw binary, depending on the HTTP Accept header.

Yeah, both ideas have crossed my mind as well.
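A sketch of what per-request content negotiation might look like on the server side; the handler and framing here are assumptions for illustration, not mtn's actual code:

```python
import base64
import json

def respond(accept_header, file_id, content):
    # Hypothetical handler: serve the same file as JSON (base64 payload)
    # for scripting clients, or as raw bytes when the client asks for them.
    if "application/json" in accept_header:
        body = json.dumps({
            "id": file_id,
            "content": base64.b64encode(content).decode("ascii"),
        }).encode("utf-8")
        return "application/json", body
    return "application/octet-stream", content

ctype, body = respond("application/octet-stream", "abc123", b"\x00\xffdata")
# body is the raw file content, untouched
```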

>> (2) it's currently very granular: request one revision, receive one revision, then for all files changed in the revision request one file data or delta, receive one file data or delta, etc., until all the content for the revision has been received, then move on to the next revision. Latency of request/response round trips is probably a big factor.

> Agreed. However, merging multiple get requests for a single resource into one multiplex request is just one option to solve that problem. Another one would be running multiple queries in parallel. Dunno how feasible that is, though.

I may just try having get_revision include all of the file data/delta details as well, and see how big these get in the monotone database. If we didn't first encode the JSON object as a string and subsequently write it to the network, we could just start writing bytes until we were done and wouldn't have to hold them all in memory. However, that causes problems with setting the Content-Length header.

I'm not sure what to think of issuing several requests (one for each file data/delta in a revision, perhaps up to some limit). Actually, I don't think it would help, because the server can only handle one request at a time afaict, or there will be multiple scgi processes running and there will be database lock issues.
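One standard way around the Content-Length problem is HTTP/1.1 chunked transfer coding, which lets the sender stream pieces as they are produced without knowing the total size up front. A minimal sketch of the encoding:

```python
def chunked(parts):
    # Encode an iterable of byte strings with HTTP/1.1 chunked transfer
    # coding: each chunk is "<hex length>\r\n<data>\r\n", terminated by
    # a zero-length chunk. No Content-Length header is needed up front.
    for part in parts:
        if part:  # empty parts are skipped; only "0" marks end-of-body
            yield b"%x\r\n" % len(part) + part + b"\r\n"
    yield b"0\r\n\r\n"

wire = b"".join(chunked([b'{"rev":', b'"abc"}']))
# wire == b'7\r\n{"rev":\r\n6\r\n"abc"}\r\n0\r\n\r\n'
```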

> Probably doing a bit of profiling first would be the best idea!

> (Using threads could also help hash calculation... considering our commodity hardware boxes are getting more and more cores per box, that might be worth it in the long run.)

So would a hand-optimized sha1 implementation. Would someone just write one of these already! ;)

> Plus: having that simplicity would allow us to handle dumb servers pretty equally.

I went with the fine-grained get/put request/response pairs so that neither side would end up having to hold too many files in memory at any one time. If we instead requested all file data/deltas for one rev the number of round trips would be reduced but we'd end up having to hold at least one copy (probably more) of the works in memory which didn't seem so good. I'm open to suggestions. ;)
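A middle ground between one-file-per-request and whole-revision transfers would be batching requests under a byte budget, which cuts round trips while bounding peak memory on both ends. A sketch (the batching helper is my invention, not part of nuskool):

```python
def batch_requests(files, max_batch_bytes=4 << 20):
    # Group (file_id, size) pairs so each round trip fetches several
    # files while neither side ever buffers much more than the budget.
    # Illustrative only: gsync currently fetches one file per request.
    batch, used = [], 0
    for file_id, size in files:
        if batch and used + size > max_batch_bytes:
            yield batch
            batch, used = [], 0
        batch.append(file_id)
        used += size
    if batch:
        yield batch

batches = list(batch_requests([("a", 3000000), ("b", 2000000), ("c", 100)]))
# batches == [["a"], ["b", "c"]]  -- "b" would blow the 4 MiB budget
```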

> I don't think files necessarily need to be put together by revision - that would be a rather useless collection for small changes. Instead, we should be able to collect any number of files together - and defer writing the revision until we have all of them.

I'm not really sure where you're going with this.

> I certainly think of JSON as a good exchange format. It doesn't only help JavaScript, but provides a good mixture between well structured data (think XML) and raw binary data. It provides some structure, but it's not overly verbose. And it's easily usable from pretty much any scripting language.

Agreed; however, I'm wondering how popular or useful scripted pushing/pulling is going to be. When I first saw the JSON format I thought it might have been nice to have that rather than basic_io, but JSON probably didn't exist at the time basic_io was invented.

> However, one of the downsides of JSON is: it cannot encode binary data. Or more precisely: strings are interpreted as UTF-8 encoded, so you had better not write binary data in there.

Yeah, the base64 encoding/decoding of file content is another extra step that shouldn't really be needed.
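The cost is concrete: base64 expands every 3 input bytes to 4 output characters, roughly 33% extra on the wire, plus an encode/decode pass at each end:

```python
import base64

raw = bytes(range(256)) * 16        # 4 KiB of arbitrary binary data
wrapped = base64.b64encode(raw)     # what would sit inside the JSON string
assert base64.b64decode(wrapped) == raw
assert len(wrapped) == 4 * ((len(raw) + 2) // 3)  # ~4/3 size expansion
```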

> Thus, JSON and binary encoding for revs don't seem to mix well here. As much as I like binary encoded stuff for internal things, I also like to be able to read the revision's contents.

> Once again, this makes me think about using the revisions solely for synchronization, and not storing them in the database, but using (binary) rosters instead.

Or storing the revisions in the database as binary rather than text - but I guess we don't actually use the revisions themselves that much, do we? Seems like a reasonable idea.

In general, I think it would be great if we had a few people working together on all of these things, rather than one poor lonely soul on each of them. You and Zack seem to have been doing a bit of this on the compaction and encapsulation branches and I'm sure it's more fun and produces better results that way.

