
Re: Software Heritage fifth anniversary event

From: zimoun
Subject: Re: Software Heritage fifth anniversary event
Date: Thu, 2 Dec 2021 14:17:39 +0100


On Wed, 1 Dec 2021 at 19:04, Timothy Sample <> wrote:
> Ludovic Courtès <> writes:
> > I gave a 10–15mn talk on how Guix uses SWH, what Disarchive is, what
> > the current status of the “preservation of Guix” is, and what remains
> > to be done:
> >
> >   
> >

Thank you Ludo for this nice write-up!  I hope the stream was recorded
and will soon be available to all. :-)

> > I chatted with the SWH tech team; they’re obviously very busy solving
> > all sorts of scalability challenges :-) but they’re also truly
> > interested in what we’re doing and in supporting our use case.  Off the
> > top of my head, here are some of the topics discussed:
> >
> >   • ingesting past revisions: if we can give them ‘sources.json’ for
> >     past revisions, they’re happy to ingest them;
> This is something I can probably coax out of the Preservation of Guix
> database.  That might be the cheapest way to do it.  Alternatively, when
> we get “sources.json” built with Cuirass, we could tell Cuirass to build
> out a sample of previous commits to get pretty good coverage.  (Side
> note: eventually we could verify the coverage of the sampling approach
> using the Data Service, which has a processed a very exhaustive list of
> commits.)

Let's avoid quirks, because the ingestion currently requires too many
manual checks. :-)

For instance, "guix lint -c archival" works well, but it is not
systematically run by contributors or pushers, especially for quickly
updated packages.  This is mainly what we see: 35 vs. 24 missing
type:git items in PoG [1,2].

On the other hand, 'sources.json' is built with the Guix website, but
SWH only ingests the tarball items from there.

It is not clear to me how to add both to CI: sending save requests for
git-fetch packages and building 'sources.json'.
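To make the idea concrete, here is a minimal Python sketch (hypothetical helper, not Guix code) of how a CI job might turn git-type 'sources.json' entries into SWH "save code now" request URLs.  The entry field names ("type", "git_url") and the endpoint shape follow the sources.json format and the SWH API docs as I remember them, so treat both as assumptions:

```python
# Sketch: derive SWH "save code now" request URLs from sources.json
# entries.  Field names and endpoint shape are assumptions based on
# the sources.json format and the SWH API documentation.

SWH_API = "https://archive.softwareheritage.org/api/1"

def save_request_url(entry):
    """Return the save-request URL for a git-type entry, else None."""
    if entry.get("type") != "git":
        return None  # tarball ("url") entries are already ingested
    return f"{SWH_API}/origin/save/git/url/{entry['git_url']}/"

sources = [
    {"type": "url", "urls": ["https://example.org/foo-1.0.tar.gz"]},
    {"type": "git", "git_url": "https://example.org/bar.git"},
]

urls = [u for u in (save_request_url(e) for e in sources) if u]
# A real job would POST each URL, authenticated and rate-limit aware.
```

The point is only that the two tasks (save requests and 'sources.json') could be driven from the same data.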

Last, not all packages are equal.  We could have 99.99% coverage, but
if the missing 0.01% of packages are deep in the graph, then the whole
house of cards falls down.  Somehow, we need to work on the graph and
spot the "important" packages, or at least sort them.  Argh, it is
something I have wanted to do for a long time (it would help when a
release is coming), but days only have 24 hours. ;-)
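As a sketch of what "spotting the important packages" could mean, here is a small Python example (a toy, not Guix code) that ranks packages by how many others transitively depend on them, which is one plausible priority order for archival coverage:

```python
from collections import defaultdict

def transitive_dependents(depends_on):
    """Map each package to the number of packages that transitively
    depend on it.  'depends_on' maps package -> direct dependencies."""
    # Reverse the edges: dependency -> its direct dependents.
    rev = defaultdict(set)
    for pkg, deps in depends_on.items():
        for dep in deps:
            rev[dep].add(pkg)
    counts = {}
    for pkg in set(depends_on) | set(rev):
        seen, stack = set(), [pkg]
        while stack:
            for dependent in rev[stack.pop()]:
                if dependent not in seen:
                    seen.add(dependent)
                    stack.append(dependent)
        counts[pkg] = len(seen)
    return counts

# Toy graph: losing "glibc" breaks everything above it.
graph = {"hello": ["gcc", "glibc"], "gcc": ["glibc"], "glibc": []}
ranking = sorted(transitive_dependents(graph).items(),
                 key=lambda kv: -kv[1])
# → [("glibc", 2), ("gcc", 1), ("hello", 0)]
```

A package missing from the archive hurts in proportion to its dependent count, so the head of this ranking is where coverage matters most.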


> >   • rate limit: we can find an arrangement to raise it for the purposes
> >     of statistics gathering like Simon and Timothy have been doing (we
> >     can discuss the details off-list);
> Cool!  So far it hasn’t been a concern for me, but it would help in the
> future if we want to try and track down Git repositories that have gone
> missing.

Timothy, could you provide again the entry point you use?

> >     they’re not opposed to the idea of eventually hosting or maintaining
> >     the Disarchive database (in fact one of the developers thought we
> >     were hosting it in Git and that as such they were already archiving
> >     it—maybe we could go back to Git?);
> It’s a possibility, but right now I’m hopeful that the database will be
> in the care of SWH directly before too long.  I’d rather wait and see at
> this point.  I’m sure we could manage it, but the uncompressed size of
> the Disarchive specification of a Chromium tarball is 366M.  Storing all
> the XZ specifications uncompressed is over 20G.  It would be a big Git
> repo!

Hehe!  That's something we discussed at the very beginning of Disarchive. :-)

If the Disarchive-DB is managed by SWH, maybe some people would be
worried by security concerns.  I mean: today, SWH ingests an archive,
and this archive is checksummed using a robust algorithm, say Foo.
Using the content from SWH and the metadata from the Disarchive-DB,
the archive is rebuilt, and because Foo is robust, it is possible to
check that the rebuild matches the expectation.  Later, Foo becomes
weak and a preimage attack is possible.  All one has is the
expectation using Foo.  Therefore, SWH could cheat and introduce
something in the content and/or metadata that matches the expectation
using Foo.  If the two databases are independent, then this is
harder. :-)

Well, the assumption is that SWH will still be there when Foo is
broken.  Currently Foo is SHA-256, so who knows. :-)
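A tiny Python sketch of the verification step in this scenario (hypothetical function name): the guarantee is exactly as strong as the hash algorithm behind the recorded expectation:

```python
import hashlib

def rebuild_matches(rebuilt: bytes, expected_digest: str,
                    algo: str = "sha256") -> bool:
    """Check a rebuilt archive against the recorded expectation.
    If 'algo' is ever broken (preimage attacks), a matching digest
    no longer proves the bytes are the original ones."""
    return hashlib.new(algo, rebuilt).hexdigest() == expected_digest

original = b"archive bytes"
expected = hashlib.sha256(original).hexdigest()
assert rebuild_matches(original, expected)         # honest rebuild
assert not rebuild_matches(b"tampered", expected)  # detected, today
```

Keeping the expectations (the Guix checksums) independent from the party that stores both content and metadata is what makes cheating harder once 'algo' weakens.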

From a scientific standpoint, this scenario (SWH corrupted) is really
low on the list of issues. ;-)

> >   • bit-for-bit archival: there’s a tension between making SWH a
> >     “canonical” representation of VCS repos and making it a faithful,
> >     bit-for-bit identical copy of the original, and there are different
> >     opinions in the team here; our use case pretty much requires
> >     bit-for-bit copies, and fortunately this is what SWH is giving us in
> >     practice for Git repos, so checkout authentication (for example)
> >     should work even when fetching Guix from SWH.

The main issue is the lookup.  Non-bit-for-bit archival implies that
people store a SWH lookup key (the SWHID, I guess) at ingestion time;
otherwise it becomes nearly impossible to find the content again.  To
me, the tension is in the meaning of "preservation of source code",
i.e., between archiving for reading and archiving for compiling.  In
the case of compilation, all the lookups must be automated, so
non-bit-for-bit archival means making the SWHID THE standard for
serialization, somehow replacing all the other checksums.
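For reference, the intrinsic identifier SWH uses for file contents is, as far as I understand the SWHID spec, the same as Git's blob hash, with a "swh:1:cnt:" prefix.  A minimal sketch (double-check against the spec before relying on it):

```python
import hashlib

def swhid_content(data: bytes) -> str:
    """Compute the SWHID of raw file content.  Per the SWHID spec,
    for contents this is the Git blob SHA-1:
    sha1(b"blob <length>\\0" + data)."""
    header = b"blob %d\x00" % len(data)
    return "swh:1:cnt:" + hashlib.sha1(header + data).hexdigest()

# Same digest as `git hash-object` on the same bytes:
print(swhid_content(b"hello\n"))
# → swh:1:cnt:ce013625030ba8dba906f756967f9e9ca394464a
```

Because the identifier is computed from the bytes themselves, it only works as THE serialization standard if the archived bytes are bit-for-bit identical to the original.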

> > Anyway I think we can take this as an opportunity to increase bandwidth
> > with the SWH developers!

Yeah, let's make it a good story! :-)

