Re: [Social-discuss] Privacy-over-Webfinger Draft
Thu, 15 Jul 2010 11:24:43 -0400
On Thu, 2010-07-15 at 12:42 +0100, Blaine Cook wrote:
> Thanks [everyone] for the feedback! These are exactly the sorts of
> details and trade-offs we need to sort out.
> On 15 July 2010 03:58, Ted Smith <address@hidden> wrote:
> > On Wed, 2010-07-14 at 02:34 +0100, Blaine Cook wrote:
> > An editing note - you use the phrase "reader" in section 3, but it seems
> > like after that, you use "user" where you want to say "reader".
> Yes - there was a change in terminology, something got mixed up.
> Thanks for catching that. :-)
> > The sort of thing in Figure 4 unnerves me - that sort of transparency in
> > the origin of a request reveals the social graph to both the client and
> > the publishing server. I'm not quite sure if there's a way to avoid this
> > while still doing authentication of the type you propose; in Social-P2P
> > we avoid this by simply giving encrypted data to anyone who asks
> > (Diaspora, I hear, will do the same thing). It seems to me that the only
> > way to protect the social graph in this case is to use public-key
> > cryptography; if the user and publisher can have an exchange that's
> > encrypted end-to-end, then they can utilize crypto at key points to
> > ensure that the client and content server don't know who's talking to
> > whom. This also allows you to forgo the question of whether the user has
> > delegated to that client, since the client is no longer trusted with
> > either the data being requested or the social graph.
> The trade-offs here are not easy. If the data is just up on the
> internet, how do you signal to someone that they should fetch it? You
> could use Tor and poll your friends for updates, but that means that
> your latency will be *extremely* high, and that Tor would become a
> massive heat score (i.e., the incentive for attackers to "own"
> sufficiently large chunks of the Tor network becomes high enough that
> they will, and the privacy that you're trying to precipitate
> disappears, without you knowing it).
If the system is P2P and it's even possible to poll friends for updates,
it's possible to push notifications of updates to your friends in the
same way this spec talks about (in some cases). That push is just as
anonymous as a pull - all we need to do is secure some communications
channel; whether the sender is requesting data or providing data over
that channel is beside the point.
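To make the push/pull symmetry concrete, here's a toy sketch (all names are mine, not from the spec): over an opaque, anonymized channel, the far end only sees "bytes in, bytes out", and pushing versus pulling is just the direction of the payload inside the same round trip.

```python
# Toy illustration of push/pull symmetry over an opaque channel.
# LoopbackChannel is a hypothetical stand-in for a real anonymized
# transport (e.g. a Tor stream); nothing here is from the draft spec.

class LoopbackChannel:
    """In-memory stand-in for a secured, anonymized transport."""
    def __init__(self, responder):
        self.responder = responder  # callable: bytes -> bytes
        self._reply = None

    def send(self, data: bytes) -> None:
        self._reply = self.responder(data)

    def recv(self) -> bytes:
        return self._reply


def exchange(channel, outgoing: bytes) -> bytes:
    """One round trip over some secured channel."""
    channel.send(outgoing)
    return channel.recv()


def pull_updates(channel) -> bytes:
    # Reader speaks first; publisher answers with an (encrypted) blob.
    return exchange(channel, b"GET updates")


def push_notification(channel, update: bytes) -> bytes:
    # Publisher speaks first; reader merely acknowledges.
    return exchange(channel, update)
```

From the transport's point of view both calls are the same exchange, which is why the push inherits whatever anonymity the pull had.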
I use Tor for my RSS feeds already - images take longer than usual to
load (1-5 seconds instead of instantly), but aside from that there are
no problems, and if my reader cached images instead of pulling them when
I read a given item, there wouldn't be any problems at all.
I'm not sure we need to worry about the Sybil attack - for one, there
are much better attacks on Tor, but there are also ways we can exploit
the fact that this is a social network and create a friend-to-friend
small-world network à la Freenet.
> Keep in mind that in order to discover the social graph (assuming the
> use of SSL), an attacker would need to gain control of either the
> Client or the Content Server, and then would only be able to determine
> links between users on the Client and Content Server. In a P2P
> scenario, this could be accomplished by installing malware on the
> target user's computer – something that's not at all difficult to do
> for the vast majority of cases.
I would argue it's more difficult to install malware on a *particular*
user's computer than it is to subpoena a server, but you make a good
point.
> Moreover, it's important to not lose sight of the goal; people will
> only use a social network if it's possible to see the relationships
> and interconnectedness inherent in the graph. Our real-world social
> interactions do not happen in "cones of silence",
There's no reason why these interactions can't be first-order data
objects in themselves - hosted on a given node, and propagated out in
the same way any other data object is.
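As a sketch of what "relationships as first-order data objects" could look like (the object model and field names here are mine, purely illustrative), a relationship can be content-addressed and replicated exactly like a post:

```python
# Hypothetical sketch: relationships stored and propagated as ordinary
# content-addressed data objects, not as server-side graph state.
import hashlib
import json


def make_object(kind: str, payload: dict) -> dict:
    """Wrap any payload as a content-addressed data object."""
    body = {"kind": kind, "payload": payload}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "id": digest}


# A relationship is just another object, hosted on a node and
# propagated the same way any post or note would be:
follow = make_object("relationship",
                     {"subject": "alice@example.org",
                      "rel": "follows",
                      "object": "bob@example.net"})
note = make_object("note",
                   {"author": "alice@example.org",
                    "content": "hello"})
```

Because the id is derived deterministically from the content, any node can verify and re-propagate a relationship object without trusting the node it received it from.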
> A thought experiment (I don't know if this will address your concerns
> fully, but it's worth a shot):
> Given the scenario where sender identity is to be hidden from
> attackers who have access to either the Client or Content Server, what
> if we used an HTTPS-based mix network? If we can send messages from
> HTTP point A to HTTP point B, then we can essentially construct
> This provides both the desired addressability and anonymity
> properties, and decomposes into something I can build and experiment
> with using little more than netcat and telnet.
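For concreteness, the layered construction being proposed can be sketched as onion wrapping. This is a toy, not Blaine's design: XOR one-time pads stand in for per-hop public-key encryption, and real mixes also carry next-hop routing headers, batching, and delays, all omitted here.

```python
# Toy onion layering: each hop holds a key that removes exactly one
# layer, so no single hop sees both the plaintext and the endpoints.
# XOR one-time pads are a stand-in for real per-hop encryption.
import os


def xor(data: bytes, pad: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, pad))


def build_onion(message: bytes, hops: list) -> tuple:
    """Wrap `message` once per hop, innermost layer first.
    Returns the onion plus each hop's pad (its 'key')."""
    keys = {}
    onion = message
    for hop in reversed(hops):
        pad = os.urandom(len(onion))
        keys[hop] = pad
        onion = xor(onion, pad)
    return onion, keys


def peel(onion: bytes, hop: str, keys: dict) -> bytes:
    """Each relay removes exactly its own layer."""
    return xor(onion, keys[hop])
```

Whether the hops speak raw TCP or HTTPS is incidental to this structure, which is what makes the "build it with netcat" framing plausible.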
How is this better than using Tor? I don't know if I could find fault in
it right now, but all that shows is that you and I can't find fault in
it, and as The Bruce says, anyone can design a system that they can't
crack themselves (and probably, anyone can design a system that someone
else can't crack in an email response to that system being proposed). It
seems like it would be better to not re-invent the wheel if possible.
However, yes, no matter what transport you use - HTTP or XMPP or
TCP/IP/carrier pigeon - if your mix is secure, it addresses my concerns.