Re: Handling nars/narinfos at scale, some ideas...

From: Ludovic Courtès
Subject: Re: Handling nars/narinfos at scale, some ideas...
Date: Wed, 10 Feb 2021 22:04:01 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)

Hi Chris!

Christopher Baines <> skribis:

> When serving from a store, you can use guix gc to remove items, and gc
> roots to protect the items you want to keep. I'm not aware of similar
> tooling when you just have a bunch of nars+narinfo files. This means you
> either just delete files based on when you generated them, or don't
> delete anything and potentially have an ever growing collection of nars.

Nitpick: ‘guix publish’ has a simple LRU policy for its cache, based on
the atime of cached narinfos, which allows it to eventually reclaim
unpopular items.

> When serving the substitutes, there's advantages to having low latency
> access to the narinfo files, since they're very small. If you're trying
> to serve the whole world, one way of doing this would be to store the
> narinfos on several machines around the world, and direct requests for
> them to a machine that's close in terms of network latency. The relevant
> bit here is storing the narinfos on multiple machines, and keeping them
> in sync. This also may improve resilience if through this there's not a
> single point of failure with the one machine storing the narinfo files.
> I think both of these needs (garbage collection across narinfo data
> and storing narinfo data on multiple machines) can be met with one
> approach. I'm also thinking this might be a good place to try and store
> analytics about the fetching of nars+narinfos.

I think what’s appropriate here is “cache eviction” rather than “garbage
collection”: in the former case, time locality is the driving factor to
determine what to remove, whereas in the latter case, reachability from
some roots is what matters.  That’s the difference between
/var/cache/guix/publish and /gnu/store.
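
The distinction can be sketched in a few lines (illustrative Python;
the references map stands in for the References fields of narinfos):

```python
def live_items(roots, references):
    """Compute the set of store items reachable from ROOTS.
    REFERENCES maps each item to the items it refers to.  Under
    garbage collection, everything outside the result is garbage,
    no matter how recently it was accessed."""
    live = set()
    stack = list(roots)
    while stack:
        item = stack.pop()
        if item not in live:
            live.add(item)
            stack.extend(references.get(item, ()))
    return live
```

Cache eviction, by contrast, never consults this graph: it only looks
at access times.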

I believe here you’d typically want policies similar to that of ‘guix
publish’: LRU + minimum time-to-live.  When things are distributed, it’s
a bit harder though: do you need to gather usage stats from all the
mirrors to the head? or do you perform cache eviction on each mirror
with purely local knowledge?
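
The first option could look roughly like this (a sketch; the
per-mirror stats layout and the head-node merge step are assumptions):

```python
def merge_last_access(per_mirror):
    """Combine per-mirror last-access timestamps into a global view
    by taking, for each item, the most recent access seen anywhere.
    PER_MIRROR is a list of {item: unix_timestamp} dicts, one per
    mirror (hypothetical data layout)."""
    merged = {}
    for stats in per_mirror:
        for item, ts in stats.items():
            if ts > merged.get(item, float("-inf")):
                merged[item] = ts
    return merged

def evictable(merged, now, min_ttl):
    """Items whose most recent access anywhere is older than MIN_TTL
    seconds: candidates under an LRU + minimum time-to-live policy."""
    return {item for item, ts in merged.items() if now - ts > min_ttl}
```

With purely local knowledge, each mirror would instead run ‘evictable’
on its own stats, at the cost of possibly dropping an item that is
still popular elsewhere.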

In any case, you need to make sure that the ‘Cache-Control’ header sent
to the client with its narinfo reply is honored—that the nar will remain
available for the specified time, no matter which replica the client
ends up talking to.
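
One way to express that invariant (a sketch; the bookkeeping of
per-replica promises is hypothetical):

```python
def promise_expiry(served_at, ttl):
    """When a narinfo is served with 'Cache-Control: max-age=TTL',
    the nar is implicitly promised to exist until served_at + ttl."""
    return served_at + ttl

def safe_to_delete(now, promises):
    """A nar may be removed only once the latest promise made by any
    replica has expired.  PROMISES is a list of (served_at, ttl)
    pairs gathered across replicas (hypothetical bookkeeping)."""
    return all(now > promise_expiry(t, ttl) for t, ttl in promises)
```

In other words, eviction must lag behind the most recent narinfo
reply by at least its advertised max-age.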

> This new tool/service would be a standalone thing, but I'm very much
> thinking about deploying it alongside a Guix Build Coordinator
> instance. Again, while the Guix Build Coordinator can help with serving
> substitutes, that approach doesn't stretch yet to doing the things
> above.
> Note that while this does similar things to guix publish, it's not
> designed to replace it. This approach is probably only worth it if you
> want to store/serve nars+narinfos from more than one machine.
> I also don't see this as something to do instead of things like IPFS
> distribution for substitutes, but I do think it would be good to have a
> way of providing substitutes over HTTP which is reliable and works at a
> global scale.

Agreed on all points.

> The architecture I'm currently thinking about for this is to store the
> narinfo data in a PostgreSQL database. This will allow for storing the
> equivalent of "roots" in the graph, using SQL queries to traverse the
> graph to find the "garbage" and using logical replication to sync the
> data between multiple machines. Additionally, I'm thinking that the
> narinfo's can be served directly from the database, and maybe analytics
> data (counts of narinfo requests) can be saved back to the database.
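
A rough sketch of that idea, using SQLite in place of PostgreSQL and
hypothetical table/column names, with a recursive query to find the
live narinfos:

```python
import sqlite3

# Hypothetical "narinfos + refs + roots" schema; SQLite stands in
# for PostgreSQL purely for illustration.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE narinfos (store_path TEXT PRIMARY KEY);
CREATE TABLE refs (referrer TEXT, referenced TEXT);
CREATE TABLE roots (store_path TEXT);
""")
db.executemany("INSERT INTO narinfos VALUES (?)",
               [("app",), ("libc",), ("old-app",)])
db.executemany("INSERT INTO refs VALUES (?, ?)",
               [("app", "libc"), ("old-app", "libc")])
db.execute("INSERT INTO roots VALUES ('app')")

# Everything reachable from the roots is live; the rest is garbage.
live = [row[0] for row in db.execute("""
WITH RECURSIVE live(store_path) AS (
  SELECT store_path FROM roots
  UNION
  SELECT refs.referenced FROM refs
  JOIN live ON live.store_path = refs.referrer
)
SELECT store_path FROM live ORDER BY store_path
""")]

all_paths = [row[0] for row in db.execute("SELECT store_path FROM narinfos")]
garbage = sorted(set(all_paths) - set(live))
```

PostgreSQL supports the same WITH RECURSIVE construct, so the garbage
query could run server-side, and logical replication would handle the
multi-machine syncing.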

What about nars, BTW?  :-)

> My testbed for this will probably be, so I'll probably
> need to look at doing something to direct requests to different servers
> (maybe GeoIP with knot) and getting Letsencrypt to work across multiple
> servers, but that can come later.
> Anyway, I haven't actually implemented this yet, but maybe after sending
> this email I'll be one step closer...
> Please let me know if you have any thoughts or questions!

That’s a pretty exciting project, and if it can address the
single-point-of-failure issue with and also provide a
general solution to mirroring (rather than the ad-hoc solutions
discussed so far), that’s great!

