[Octave-bug-tracker] [bug #55642] isosurface is slow

From: Guillaume
Subject: [Octave-bug-tracker] [bug #55642] isosurface is slow
Date: Mon, 4 Feb 2019 08:59:29 -0500 (EST)
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0


                 Summary: isosurface is slow
                 Project: GNU Octave
            Submitted by: gyom
            Submitted on: Mon 04 Feb 2019 01:59:26 PM UTC
                Category: Octave Function
                Severity: 3 - Normal
                Priority: 5 - Normal
              Item Group: Performance
                  Status: None
             Assigned to: None
         Originator Name: Guillaume
        Originator Email: 
             Open/Closed: Open
         Discussion Lock: Any
                 Release: dev
        Operating System: Any



isosurface becomes very slow when the size of the input data increases:

n = 64;
[x, y, z] = meshgrid (1:n, 1:n, 1:n);
v = (x-n/2).^2 + (y-n/2).^2 + (z-n/2).^2;
tic; fv = isosurface (x, y, z, v, max (v(:))/2); toc

Octave takes 11.34s while Matlab takes 0.04s.

If you increase n to 128, Octave takes 200s and Matlab 0.25s.

And for n = 256, Matlab takes 2s while Octave was still running after an …

Most of the time is spent in __unite_shared_vertices__ where the loop over
vertices is known to be slow (see bug #46946 and patch #8912). 
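
For reference, the hotspot can be confirmed with Octave's built-in profiler (the cutoff of 10 entries below is just an example):

n = 64;
[x, y, z] = meshgrid (1:n, 1:n, 1:n);
v = (x-n/2).^2 + (y-n/2).^2 + (z-n/2).^2;
profile on;
fv = isosurface (x, y, z, v, max (v(:))/2);
profile off;
profshow (profile ("info"), 10)   # top 10 functions by self time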

One way would be to move __unite_shared_vertices__ into C++. Otherwise, I
wonder if one could implement a 'unique with tolerance' by first sorting the
vertices, along these lines:

tol = sqrt (eps);                # example merging tolerance
v = sortrows (fvc.vertices);     # near-duplicates become adjacent rows
d = diff (v, 1, 1);              # difference between consecutive rows
dup = all (abs (d) < tol, 2);    # rows duplicating their predecessor
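
Expanding on that idea, here is a rough, untested sketch of how such a
tolerance-based merge might look (the function name, tolerance handling and
index mapping are my assumptions, not code from any existing patch):

function [vv, ff] = unite_vertices_tol (vertices, faces, tol)
  ## Sort lexicographically so near-equal vertices become adjacent rows.
  [sv, sidx] = sortrows (vertices);
  ## Row i duplicates row i-1 if every coordinate differs by less than tol.
  dup = [false; all (abs (diff (sv, 1, 1)) < tol, 2)];
  runid = cumsum (! dup);          # run number of each sorted row
  vv = sv(! dup, :);               # one representative vertex per run
  ## Renumber the faces through the old-index -> new-index map.
  map = zeros (rows (vertices), 1);
  map(sidx) = runid;
  ff = map(faces);
endfunction

Note that this merges transitively: a long run of vertices, each within tol of
its neighbor, collapses to one point even if the endpoints are farther apart,
so it is a sketch of the idea rather than a drop-in replacement.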

I also notice that the help text of isosurface still mentions the use of
@code{unique}:
## If given the string input argument @qcode{"noshare"}, vertices may be
## returned multiple times for different faces.  The default behavior is to
## eliminate vertices shared by adjacent faces with @code{unique} which may be
## time consuming.

