Re: Deep vs Shallow trace: Removing the tradeoff?

From: ilmu
Subject: Re: Deep vs Shallow trace: Removing the tradeoff?
Date: Sun, 28 Mar 2021 23:16:23 +0000

First of all, thanks for engaging with me, and sorry for not putting more work into the initial email. Note that I have attached inline images from the paper; the paper itself is also attached to this email.

What I understand from the above is that there is not so much of a difference between Bazel and Nix/Guix: the "shallow" tracing is in fact a deep tracing; it just records different information.


The framework they present is that a build system can be classified by how it decides whether a rebuild is necessary and how it schedules the necessary rebuilds (i.e. how it sorts the work to be done). They consider these two orthogonal strategies a complete description of "what a build system is".


The constructive trace allows you to keep intermediate artifacts, but creates the extra overhead of managing them.


Since a deep constructive trace only looks at the terminal dependencies, you have something of a Merkle DAG. What I was suggesting is to be able to do "shallow builds" while still supporting "early cutoff". For reference:


Early cutoff is a very desirable property that Nix does not have (and I assume therefore that neither does Guix).


Basically, if we have a proof that the immediate dependencies are complete, then we don't need to fetch anything deeper in the dependency tree.
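To make the early-cutoff idea concrete, here is a minimal sketch in Python, with a made-up two-package graph and a stand-in "compiler" (all names here are hypothetical, not from the paper or Nix). Each key's trace records only its own recipe plus the output hashes of its *immediate* dependencies, i.e. a constructive trace of depth 1. A change deep in the graph that leaves an intermediate output bitwise identical then stops propagating upward:

```python
import hashlib

def digest(*parts):
    return hashlib.sha256("\x00".join(parts).encode()).hexdigest()

# Hypothetical graph: key -> (source text, immediate dependencies).
graph = {
    "lib": ("int f(){return 1;} // v1", []),
    "app": ("int main(){return f();}", ["lib"]),
}

traces = {}   # key -> (fingerprint of immediate inputs, output hash)
runs = []     # records which recipes actually ran

def run_recipe(key, src, dep_outputs):
    runs.append(key)
    # Stand-in compiler: the output ignores comments, so a comment-only
    # edit reruns the recipe but yields a bitwise-identical result.
    code = src.split("//")[0].strip()
    return digest(code, *dep_outputs)

def build(key):
    src, deps = graph[key]
    dep_outputs = [build(d) for d in deps]
    # Depth-1 trace: fingerprint covers only the recipe and the
    # *immediate* dependencies' outputs, not the whole graph.
    fingerprint = digest(src, *dep_outputs)
    cached = traces.get(key)
    if cached and cached[0] == fingerprint:
        return cached[1]              # verification succeeded: skip rebuild
    out = run_recipe(key, src, dep_outputs)
    traces[key] = (fingerprint, out)
    return out
```

After a cold build of "app", a comment-only edit to "lib" reruns lib's recipe, but because lib's output hash is unchanged, app's fingerprint still matches its trace and app is skipped. That skip is exactly early cutoff.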

In Nix/Guix, we recursively compute the hash of a package from its inputs, recipe and other metadata. This hash is unrelated to the checksum of the result and changes as soon as one element changes. It essentially records the whole source dependency graph.
From what you write, Bazel records the checksums of inputs and recipe, so it essentially encodes the whole binary dependency graph. It's not "shallow", as a real change down the graph can have repercussions higher up.
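As an illustration of the first point, here is a small sketch (hypothetical three-package graph, not real Nix code) of the input-addressed hash: it recurses over the whole source dependency graph and never looks at build outputs, so editing a recipe at the bottom changes every hash above it, even when the rebuilt outputs would be bitwise identical:

```python
import hashlib

def digest(*parts):
    return hashlib.sha256("\x00".join(parts).encode()).hexdigest()

# Hypothetical mini package set: key -> (recipe text, input packages).
pkgs = {
    "glibc": ("build glibc-2.33", []),
    "gcc":   ("build gcc-10", ["glibc"]),
    "hello": ("build hello-2.10", ["gcc", "glibc"]),
}

def input_hash(key):
    # Input-addressed: hash of the recipe plus the recursively computed
    # hashes of the inputs; no build output is ever consulted.
    recipe, inputs = pkgs[key]
    return digest(recipe, *(input_hash(i) for i in inputs))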
Actually, Eelco described something similar for Nix in his thesis: the intensional model. As you noticed, different recipes or inputs can lead to the same output (bitwise). The idea is to declare the expected checksum of the result, and use this to compute the store item name. As long as you apply changes that do not affect the output, different recipes (updates) can be used to generate the same item. If an update does affect a package's output checksum, you need to change the declared checksum. Dependents might still be unaffected, in which case you can block update propagation there.
The question then is how to detect whether a package needs to be rebuilt or still produces the same hash.

I think the pictures above should clarify this; I should have looked at the paper again before sending my previous email, as I read it quite a long time ago. The property I was trying to recover in the Nix-style build systems is this idea of early cutoff. The strategy for achieving that is to use constructive traces with depth 1 (what I was calling "shallow traces"), and then recursive succinct arguments of knowledge to retain the shallow-build property that you get from the "infinitely deep" traces.

I don't really understand why it must be zero-knowledge. My understanding is that to have a zero-knowledge proof, you first need a proof of the claim, then find a clever way to prove you have the proof without disclosing it. I am not sure that's helpful here. If you have a proof that you don't need to rebuild, why not share that proof with everyone in the first place?

Sure, you only need the succinctness property. I should have been more precise: what I am mainly after is resolving the tradeoff between shallow and deep traces.

I hope that this discussion can at least be helpful for people who are not familiar with these ideas, even if, as it seems from your response, the idea does not have merit.

Kind regards,
- ilmu

Attachment: build-systems.pdf
Description: Adobe PDF document
