
[Savannah-hackers-public] Re: New savannah site

From: Henrik Sandklef
Subject: [Savannah-hackers-public] Re: New savannah site
Date: Mon, 21 Dec 2009 09:13:30 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv: Gecko/20091204 Thunderbird/3.0

On 12/20/2009 06:32 AM, Sylvain Beucler wrote:

(Fw'ing to savannah-hackers-public)

On Sat, Dec 19, 2009 at 10:47:43AM +0100, Henrik Sandklef wrote:

  is there a place to discuss new features for next savannah*?

Not specifically, but that would be a good thing to do.
savannah-hackers-public would be a good place to discuss meanwhile.

  for me it would be super cool to have RSS feeds from 'everything':
  * VCS (svn, cvs, bazaar**, mercurial, git** .....)
  * bug reported  (including change of status)
  * tasks (including change of status)

  this can be used to get a good overview of the activity of a
  project, be it the entire GNU project or just a small project. Jose,
  Nacho, Rikard (whom you may have met in Gothenburg) wrote a small
  piece of software that tries to compile feeds into "an extended planet".

For VCSes, I'm not sure that Savane is the place to add this.

The problem as I see it is that CVS, for example, does not provide an
RSS feed the way the newer VCSs (Bazaar, Git, ...) do. So in order to
have RSS feeds from CVS we need to either:

1. Constantly scrape the repo (from an external computer) to find out if any new stuff was added

2. Add some kind of software to do this on the CVS 'server' (read: Savannah)

Comments on (1):
 + keeps the CVS setup at Savannah clean
 - may generate too many "cvs update" requests from the external site

Comments on (2):

I see
Savane as a super-glue that binds various tools together - instead of
a monolithic all-integrated bloatware that would be impossible to
maintain (which is what the competition tends to become).

Good point. See comment above.
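Whichever option is chosen, the feed-generation half is straightforward. A minimal sketch, assuming we have already obtained recent commits somehow (e.g. by parsing `cvs rlog` output) — the commit fields and sample data below are made up:

```python
import xml.etree.ElementTree as ET

def commits_to_rss(commits, title="CVS activity"):
    """Turn a list of commit dicts into a minimal RSS 2.0 document."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    for c in commits:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = f"{c['author']}: {c['file']}"
        ET.SubElement(item, "description").text = c["message"]
        ET.SubElement(item, "pubDate").text = c["date"]
    return ET.tostring(rss, encoding="unicode")

# Hypothetical commit data, as a scraper might produce it:
feed = commits_to_rss([
    {"author": "hesa", "file": "xnee/main.c",
     "message": "fix typo", "date": "Mon, 21 Dec 2009 09:13:30 +0100"},
])
print(feed)
```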

For the trackers, this sounds like a good idea.

We can easily build project sites such as:

BTW, I wrote (well, adapted) a small script that scrapes the MLs at GNU:
    (uh oh, i am not a web designer)
  I am using that and some other feeds for the site:
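For reference, the scraping half of such a script needs nothing beyond the standard library. A sketch, assuming a mailman-style archive index page; the HTML fragment below is made up:

```python
from html.parser import HTMLParser

class ArchiveLinkParser(HTMLParser):
    """Collect the href of every <a> tag on an archive index page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Made-up fragment of a mailman archive index:
page = '<a href="2009-December/thread.html">Thread</a>'
parser = ArchiveLinkParser()
parser.feed(page)
print(parser.links)  # -> ['2009-December/thread.html']
```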

Btw, I'd like to know if you did any work on the privacy matters that
are related to scraping.

AFAIU, you're working in this field as part of your thesis, and other
people/companies in the world also work on this.  This makes it
possible to get data on projects, which I think is fine, but also on
individuals, which I think is a problem.  For example, one can easily
compute the average work hours (and more generally the work habits)
of a specific developer.

At least the work hours spent committing code. This unfortunately accounts for only a small part of the work of an engineer.
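To illustrate how little is needed for such profiling, once commit dates have been scraped a few lines suffice (a naive sketch; real code would have to handle the wrap-around at midnight):

```python
def average_commit_hour(hours):
    """Average hour-of-day of a list of commit hours (0-23).
    Naive: ignores the wrap-around at midnight."""
    return sum(hours) / len(hours)

# Hypothetical hours-of-day extracted from scraped commit dates:
print(average_commit_hour([9, 10, 23, 22, 21]))  # -> 17.0
```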

Previously I felt somehow protected by 1) the amount of noise around
the traces I produce, making it hard to gather them, and 2) the fact
that digging up such traces and showing them off would amount to
voyeurism and would be discredited as such.  With the development of
scraping technologies, these protections are destroyed, so is there
any progress on re-improving privacy?

I think you're very right about (1). Given that there are already sites doing stat digging, I think your suggestion is good and valid.

One could argue that this should be up to every developer to solve. As an example, I could commit to a secret repo and later push the commits from that repo to the real (public) repo.

I think there is also a risk of a change in commit behaviour to please the VCS stat software. (BTW, my commits on Xnee last night may give many points, since I made so many stupid small errors, resulting in tons of small commits....)

E.g. (wild idea) one could use a git frontend that would reset all the
commit hours to 'this morning, midnight', which wouldn't affect the
stats, but avoid leaking privacy info.

Interesting. And one could also anonymise the commits.
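A sketch of the timestamp part of that idea, assuming commit dates are plain Unix epoch seconds (as git stores them); such a frontend could hand the normalized value to git via e.g. GIT_AUTHOR_DATE:

```python
import datetime

def midnight_timestamp(ts):
    """Reset a commit timestamp to midnight (local time) of the same
    day, hiding the hour-of-day from VCS activity statistics."""
    dt = datetime.datetime.fromtimestamp(ts)
    midnight = dt.replace(hour=0, minute=0, second=0, microsecond=0)
    return int(midnight.timestamp())

# Mon, 21 Dec 2009 08:13:30 UTC -> midnight of that (local) day:
print(midnight_timestamp(1261383210))
```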



*) yes, I am referring to the installation of the software ;)
**) exists already
