gnustep-dev

Re: Painter Fuzzy Node in github


From: Ivan Vučica
Subject: Re: Painter Fuzzy Node in github
Date: Thu, 18 Dec 2014 11:42:09 +0000

I think I get what you want.

You want to predict which areas of the screen might require an update in the near future, and pre-render the updated graphics. Then, when the time comes, you quickly blit the pre-rendered update onto the screen.
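Concretely, in AppKit terms the idea would look something like the sketch below. PredictiveView, -prerenderAnticipatedState and whatever predictor calls it are all made up for illustration; only the NSImage/NSView calls are real API.

#import <AppKit/AppKit.h>

// Hypothetical view that keeps a pre-rendered copy of its anticipated
// future appearance.
@interface PredictiveView : NSView
{
  NSImage *cachedUpdate; // pre-rendered pixels
}
- (void)prerenderAnticipatedState;
@end

@implementation PredictiveView

// Called ahead of time by some predictor: draw the anticipated
// appearance into an offscreen image instead of onto the screen.
- (void)prerenderAnticipatedState
{
  NSImage *img = [[NSImage alloc] initWithSize: [self bounds].size];

  [img lockFocus];
  // ... draw the predicted future appearance here ...
  [img unlockFocus];
  [cachedUpdate release];
  cachedUpdate = img;
}

// When the update actually arrives, just blit the cached image.
- (void)drawRect: (NSRect)dirtyRect
{
  if (cachedUpdate != nil)
    {
      [cachedUpdate drawInRect: [self bounds]
                      fromRect: NSZeroRect
                     operation: NSCompositeSourceOver
                      fraction: 1.0];
    }
}
@end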

Now, here are the issues.

GNUstep currently doesn't have animated widgets, especially not dynamically rendered ones, so you'll have a very, very hard time finding a proper use case for this. And even then you won't see noticeable speed-ups from caching; GPUs can composite things faster than you can cache them.

I doubt GNUstep will have buttons animated with a glow effect before it can render things as layers on the GPU, and once it can, the advantages you'd get from prerendering will probably be lost.

A cool thing about Cocoa and GNUstep is that, as complex as the rendering process already is, it is still understandable and debuggable. Adding unpredictable update triggers may make debugging harder.

tl;dr: updates are too rare and too computationally cheap for predictive caching to pay off.

You are welcome to develop the idea and prove me wrong. I just think there may be better uses of your time. :-)

On Thu Dec 18 2014 at 9:53:03 AM Johan Ceuppens <address@hidden> wrote:
Hi D,

I'll try to explain what could use AI, as you asked:

2014-12-18 9:23 GMT+01:00 David Chisnall <address@hidden>:
On 18 Dec 2014, at 09:07, Johan Ceuppens <address@hidden> wrote:

> As it stands, the system (CoreX etc.) presumably maps a window. X11 (sub)windows also get mapped, with or without the main window, AFAIK. X11 paints once per map cycle. If you paint in X you have to loop constantly, repainting the screen (GS also has a root window, which is again a window). If you do not paint in X11, your window's look and feel or sub-elements will not be updated on the screen.

I'm not sure that you understand how drawing works with the Cocoa model.


Thanks a lot for the explanation.
 
You have two hierarchies:

 - The view hierarchy, which corresponds to nested view objects in the window.  Views are natural units for decomposition.


I know an NSView's drawRect: is called to update the screen after [self setNeedsDisplay:YES]; and so on. Although that view class is very minimal. This is where UIKit comes in, for example with UIView.
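For reference, a minimal sketch of that invalidation cycle; ClockView and its timer method are made up for illustration, the rest is standard AppKit:

#import <AppKit/AppKit.h>

@interface ClockView : NSView
- (void)tick: (NSTimer *)timer;
@end

@implementation ClockView

- (void)tick: (NSTimer *)timer
{
  // Do not draw here; just invalidate.  AppKit coalesces these
  // requests and calls -drawRect: during the next update cycle.
  [self setNeedsDisplay: YES];
}

- (void)drawRect: (NSRect)dirtyRect
{
  [[NSColor whiteColor] set];
  NSRectFill(dirtyRect);
  // ... draw the current time here ...
}
@end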

It should be possible to put these subviews into an AI-capable drawing system that uses the mouse context as the attractor, without game-style continuous updating of, e.g., enemies on screen. That should be a fine interface for AI-driven rendering of the view hierarchy.
 
 - The CoreAnimation layer hierarchy, which is similar, except that some views will render into their parent's layer and some will contain more than one layer.  Layers are natural units for caching.

Layers are more or less equivalent to textures.  Once they are rendered, they are pushed to the GPU and remain there. 
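To illustrate (Cocoa API; GNUstep's QuartzCore/Opal support is incomplete, so treat this as a sketch of the model rather than working GNUstep code):

#import <AppKit/AppKit.h>
#import <QuartzCore/QuartzCore.h>

// Once a view is layer-backed, its drawn content lives on as a GPU
// texture; geometry or opacity changes re-composite that texture
// without -drawRect: being called again.
static void demoLayerCaching(NSView *view)
{
  [view setWantsLayer: YES];       // back the view with a CALayer
  CALayer *layer = [view layer];   // the rendered content, as a texture

  // These only re-composite the cached texture on the GPU:
  [layer setOpacity: 0.5];
  [layer setPosition: CGPointMake(100.0, 100.0)];
}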

By layer you probably mean CALayer. These could benefit from AI applied to the cache, or to the parent-child layer (decision) tree, or to a greedy fuzzy set instead of the tree. In bare X11, parent and children are an array of child windows plus a parent window of the X11 type Window.
 
When a view is redrawn, two things happen:

First, if the view is marked as needing redisplay, then its drawRect: method is invoked (possibly multiple times for different rectangles), which will update a known dirty region (the XDAMAGE extension can be used for this on X11).  This then draws into the underlying layer.  This (on OS X) can happen in parallel if the views are marked as supporting threaded rendering (the only reason why not is if they are data views that share a datasource and it would add too much synchronisation overhead for it to be worthwhile).  Any updated layers are then shipped to the GPU.
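For example, a -drawRect: can honour the exact damaged rectangles rather than the union rect it is handed; -getRectsBeingDrawn:count: is real NSView API, while -drawExpensiveContentInRect: is a hypothetical helper:

// Inside a custom NSView subclass:
- (void)drawRect: (NSRect)dirtyRect
{
  const NSRect *rects;
  NSInteger count;
  NSInteger i;

  [self getRectsBeingDrawn: &rects count: &count];

  for (i = 0; i < count; i++)
    {
      // Redraw only what is actually damaged.
      [self drawExpensiveContentInRect: rects[i]];
    }
}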


[self setNeedsDisplay:YES]; marks the view for redisplay, as you say, and leads to drawRect: being called.
 
Once the layers are updated, the GPU then composites them.
 
When the mouse moves, no redraw events happen because the mouse is in a separate compositing context.  If you expose a part of a window, no -drawRect: invocations need to happen if the CA layers are still valid; they're just composited by the GPU. This can be very cheap, because you're compositing a few dozen textures on a processor designed to composite a few million textures per second.

The cheapness you mention is where the AI would have spare time to do its calculations.

By catching mouse events (XButtonEvent in X11) you can learn patterns of mouse clicks in your application, or, as things stand, in QuartzCore.
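At the AppKit level that could look like the sketch below. +addLocalMonitorForEventsMatchingMask:handler: is Cocoa (10.6+) API that needs blocks support, so its availability on GNUstep isn't guaranteed, and counting clicks per view is a deliberately crude stand-in for "learning":

#import <AppKit/AppKit.h>

static NSCountedSet *clickHistogram; // click counts per view

static void installClickMonitor(void)
{
  clickHistogram = [[NSCountedSet alloc] init];

  [NSEvent addLocalMonitorForEventsMatchingMask: NSLeftMouseDownMask
                                        handler: ^NSEvent *(NSEvent *event)
    {
      NSView *hit = [[[event window] contentView]
                        hitTest: [event locationInWindow]];

      if (hit != nil)
        {
          [clickHistogram addObject: hit]; // learn which views get clicked
        }
      return event; // pass the event through unmodified
    }];
}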
 
With this in mind, which part do you think can be sped up by applying AI techniques?


Lots. It comes down to starting with any window subhierarchy, such as subviews or CALayers, anything that gets rendered, even if it is composited by the GPU. While idle, the AI does the caching and builds the rendering system up, bottom-up, into its own data structures, such as a tree or a graph (a tree for a decision tree, a graph for a fuzzy network, and a list for fuzzy sets).
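A toy version of that idle-time caching, reusing the hypothetical clickHistogram and -prerenderAnticipatedState from the sketches above:

#import <AppKit/AppKit.h>

// During idle time, pre-render whichever view the click histogram says
// is most likely to need an update next.
static void idlePrerender(NSCountedSet *clickHistogram)
{
  NSEnumerator *e = [clickHistogram objectEnumerator];
  NSView *v;
  NSView *likely = nil;
  NSUInteger best = 0;

  while ((v = [e nextObject]) != nil)
    {
      NSUInteger n = [clickHistogram countForObject: v];
      if (n > best)
        {
          best = n;
          likely = v;
        }
    }

  if ([likely respondsToSelector: @selector(prerenderAnticipatedState)])
    {
      [likely performSelector: @selector(prerenderAnticipatedState)];
    }
}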

There might also be a chance for AI in the full GS system, such as a menu of idle AI tasks (a web spider and so on). This may look minimal, but if the interfaces of gnustep-fuzzy stay stable you can call the library (it has no dependencies and works in parallel with tree hierarchies, as a decision tree plus a fuzzy logic network or set rule system).
Again, it's written in portable Objective-C on GCC 4.2.1.

`Enry

 
