
Re: [igraph] 'decompose.graph' versus 'clusters'

From: David Hunkins
Subject: Re: [igraph] 'decompose.graph' versus 'clusters'
Date: Fri, 25 Jul 2008 08:23:04 -0700

And by the way, betweenness.estimate(G,cutoff=5) worked on my weakly connected graph G with 2M vertices and 2M edges. It only took two hours on the 'fast' EC2 machine. Thanks again,
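For reference, the idea behind betweenness.estimate's cutoff is to count only shortest paths of length at most `cutoff`, which lets each source's BFS stop early. This is not igraph's code, just a minimal plain-Python sketch of that idea (Brandes-style accumulation over a truncated BFS; the adjacency-dict representation and function name are my own):

```python
from collections import deque

def betweenness_cutoff(adj, cutoff):
    """Approximate vertex betweenness by counting only shortest paths
    of length <= cutoff (the idea behind igraph's betweenness.estimate).
    adj: dict mapping vertex -> list of neighbours (undirected graph)."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        # Truncated BFS from s, recording shortest-path counts (sigma)
        # and predecessors, as in Brandes' algorithm.
        stack = []
        pred = {v: [] for v in adj}
        sigma = dict.fromkeys(adj, 0)
        dist = dict.fromkeys(adj, -1)
        sigma[s], dist[s] = 1, 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            if dist[v] == cutoff:   # do not look past the cutoff
                continue
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Back-propagate path dependencies toward the source.
        delta = dict.fromkeys(adj, 0.0)
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Undirected graph: each pair was counted from both endpoints.
    return {v: b / 2 for v, b in bc.items()}
```

With a cutoff at least as large as the diameter this reduces to exact betweenness; smaller cutoffs trade accuracy for the kind of speedup reported above.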


David Hunkins
im: davehunkins
415 336-8965

On Jul 25, 2008, at 8:17 AM, Gabor Csardi wrote:

On Fri, Jul 25, 2008 at 08:13:39AM -0700, David Hunkins wrote:
Okay, so the 8-way partition did the trick (decompose.graph was able to pull apart the 300k-node graphs but not the 600k-node graphs, which had generated the protection faults when run). I think I'm calculating real
betweenness values for each connected component, because the order of
operations is this:
1. remove the largest cluster (the 2M-plus node cluster that breaks decompose.graph)
2. remove the smallest clusters (the 1- and 2-node clusters that I'm not
interested in)
3. take the remaining clusters (about 200,000 of them) and divide them up
into 8 groups
4. for each of the 8 groups, run decompose.graph to return a list of subgraphs
5. run betweenness on each of the graphs in the list of subgraphs (so I am only ever running betweenness on something that's a maximal connected
component in the original graph)
If I still haven't understood something about betweenness, please let me know.
This was surprisingly fast (just 8 hours of CPU time).
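Not igraph code, but steps 1-2 of the recipe above (find the connected components, then drop the giant one and the tiny ones) can be sketched in plain Python with a BFS. The adjacency-dict representation, the function names, and the `min_size=3` threshold are illustrative assumptions:

```python
from collections import deque

def components(adj):
    """Return connected components as lists of vertices -- the same
    information igraph's clusters() reports as a membership vector and
    decompose.graph() returns as actual subgraph objects.
    adj: dict mapping vertex -> list of neighbours (undirected graph)."""
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, queue = [], deque([s])
        seen.add(s)
        while queue:
            v = queue.popleft()
            comp.append(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        comps.append(comp)
    return comps

def keep_mid_sized(adj, min_size=3):
    """Drop the largest component and any component smaller than
    min_size, keeping only the mid-sized ones for further analysis."""
    comps = sorted(components(adj), key=len, reverse=True)
    return [c for c in comps[1:] if len(c) >= min_size]
```

Because each kept component is maximal and connected, running betweenness per component (steps 4-5) gives the same values as running it on the whole graph minus the removed clusters.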

Oh, ok, that is fine, I thought you wanted to break the giant
component into eight pieces...

Next month I'll be trying such a strategy again on a much larger
dataset; I'll be using the faster decompose.graph (presumably that's in
your latest 0.6 tarfile) and let you know how it goes.

Not yet! I'll email you when it is uploaded.



Csardi Gabor <address@hidden>    UNIL DGM

igraph-help mailing list
