
Re: [Chicken-users] gui api design -- some thought -- long mail


From: Brandon J. Van Every
Subject: Re: [Chicken-users] gui api design -- some thought -- long mail
Date: Sat, 10 Feb 2007 23:52:57 -0800
User-agent: Thunderbird 1.5.0.9 (Windows/20061207)

Shawn Rutledge wrote:

> This sounds weird at first but I guess it's normal for a game
> developer, because when the whole screen is a virtual world, every
> little action can potentially result in a change to every pixel,
> right?

Right.

> Do game developers usually make OpenGL calls to clear the
> screen, create vertices, set properties, and render the scene, every
> frame, and then start over and repeat as often as possible, and then
> hopefully brag about how many FPS they get in spite of all that?  Or
> do they typically expect the OpenGL implementation to hold a lot of
> data, like pre-defined shapes that can be re-used?

They do both. It depends on whether the geometry is moving, is dynamically generated, or is completely static, and on how much VRAM the video card has. Typically the geometry is placed in a "vertex buffer." Where that vertex buffer is allocated depends on whether it's specified as static or dynamic. There is a notion of a write-once vertex buffer, designed to be passed quickly to the 3D hardware and not intended to be read back or computed on.

Buffers that are going to see a lot of reads, writes, and computation have to be allocated in system memory; otherwise performance will completely suck when doing memory-mapped IO reads back from the card. At least, this was true 3 years ago when I last cared about the issue. With the advent of programmable vertex shaders, it's possible that the 3D hardware guys have finally started designing their VRAM access symmetrically, instead of as a write-out, don't-read device. Even if everything has changed, which I doubt, there's still the installed base of older 3D cards that do behave this way.
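In modern OpenGL this static/dynamic split shows up as the usage hints passed to glBufferData (GL_STATIC_DRAW, GL_DYNAMIC_DRAW, GL_STREAM_DRAW), which let the driver decide where the buffer lives. Here is a minimal toy sketch of the placement decision described above. The enum and function names are made up for illustration; they are not any real driver API.

```c
/* Toy sketch of a driver-style placement decision: write-once static
   geometry goes to VRAM for fast draws; anything the CPU will read back
   or recompute stays in system memory, because reading back over the
   bus from VRAM is slow. Names here are hypothetical. */

typedef enum { USAGE_STATIC, USAGE_DYNAMIC } BufferUsage;
typedef enum { MEM_VRAM, MEM_SYSTEM } MemoryPool;

/* cpu_reads: nonzero if the application will read the buffer back
   or do computations on its contents. */
MemoryPool choose_pool(BufferUsage usage, int cpu_reads)
{
    /* CPU reads from VRAM via memory-mapped IO are very slow, so
       anything the CPU touches again lives in system memory. */
    if (cpu_reads || usage == USAGE_DYNAMIC)
        return MEM_SYSTEM;

    /* Write-once static geometry goes straight to VRAM. */
    return MEM_VRAM;
}
```

The GL usage hints express the same trade-off declaratively: the application promises a usage pattern, and the implementation picks the pool.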


> But in 2D UIs you usually have the concept of "damaged areas" that
> need repainting, so as to avoid re-drawing pixels that didn't change.
> To me, that idea has always been integral to the idea of writing an
> efficient GUI.  Maybe it will become an obsolete idea though as GUIs
> get more complex.

It's not relevant to modern 3D graphics. The geometry is very complex and projects to all sorts of places in screen space. If the geometry is sufficiently complex, you cannot disentangle it in object space. Instead, an image-space technique such as a Z-buffer is used, per pixel.
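The per-pixel image-space idea is simple enough to sketch in a few lines of C. This is a toy software Z-buffer, not anything from the thread's code; real cards do this in hardware. Each pixel remembers the nearest depth drawn so far, so overlapping geometry resolves correctly no matter what order it's submitted in, and no object-space "damage" tracking is needed.

```c
#include <float.h>

/* Toy software Z-buffer over a tiny 4x4 framebuffer. */
#define W 4
#define H 4

static float zbuf[H][W];   /* nearest depth seen at each pixel */
static int   color[H][W];  /* color of the fragment that won */

/* Per-frame reset: every pixel starts "infinitely far away". */
void clear_buffers(void)
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            zbuf[y][x]  = FLT_MAX;
            color[y][x] = 0;   /* background */
        }
}

/* Plot one fragment: it wins the pixel only if it is nearer
   than whatever was drawn there before, regardless of order. */
void plot(int x, int y, float z, int c)
{
    if (z < zbuf[y][x]) {
        zbuf[y][x]  = z;
        color[y][x] = c;
    }
}
```

With this, the game-style "clear and redraw everything every frame" loop is just clear_buffers() followed by plotting all fragments; occlusion falls out per pixel.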



> So I'm not yet convinced that even if you want a really minimal UI,
> that there is anything wrong with at least separating the painting
> code from the event-handling code, and putting those functions plus
> some metadata into a data structure.

There's no point even bothering with high-flying, cantankerous designs unless you've actually got some code that obviously needs them. It's better to lazily evaluate these sorts of design problems. I've gone bankrupt overdesigning stuff. People have to code up demos, make use of stuff, and attempt real apps before correct designs become apparent. It generally takes me 6 tries to get a design right.

> Especially, any text which
> the user enters himself ought to be considered sacred and never
> garbage-collected unless the user asks for it to be deleted.

Lousy security model, that.

It's actually ok in programming to make a decision.  Really.


Cheers,
Brandon Van Every




