
Re: [Gnash-dev] Re: point test

From: Sandro Santilli
Subject: Re: [Gnash-dev] Re: point test
Date: Mon, 5 Nov 2007 12:05:23 +0100

On Mon, Nov 05, 2007 at 10:16:54AM +0100, Udo Giacomozzi wrote:

> >> Ok, but what is effectively "normalizing"?
> SS> Building valid topologies from invalid ones (if worth it).
> Sorry, still don't get it :( Can you use simple words instead?

Topologies are made of nodes, edges and faces.
Two edges that cross without a node at the intersection form an invalid
topology (the node information is missing).
Normalizing would mean finding all such intersections, inserting nodes
there, and defining every face by the edges that bound it.
I'm not sure we need such a representation.

> Hmmmm, maybe you mean the query point itself is transformed so that it
> is relative to the normal size and orientation of the shape? Ok, that
> makes sense. Would be interesting to know if scaling makes any
> difference since the shape might contain small parts which are
> invisible at its normal size but are relevant when the shape is
> greatly magnified. From the renderer point of view this might be very
> relevant (like missing detail when upscaling a bitmap).

Scaling the query point would only mean reducing its precision to
match the precision of the source coordinates (twips).

The triangle below has sides of 1 twip:


If you scale it by (say) x100, you'll be able to put the mouse pointer
inside the fill, but the core lib will never find the point *in* the
fill: scaled back down to twips, it ends up falling on a corner.

Now the question would be: is it worth using higher precision?
At what cost? I can't think of a real-world case in which this would
be relevant.
