First of all, thanks. I'll have a look; it sounds very interesting.
I've built some experimental add-ons to the gtk renderer that allow it
to use XVideo, if that will help you out. I've tested it on 1 GHz
Geode "thin client" devices with an old SiS video chipset. It runs
Ubuntu 7.10 (256 MB RAM, 512 MB flash), and can scale Flash content up to
1280x1024 with very little CPU use.
I managed video acceleration. How did you tackle the GUI acceleration problem? And by the way, a 1 GHz Geode is a supercomputer next to what I can afford to work with. Right now AGG is absolutely killing me even at very low resolutions; profiling revealed optimization hell. The OpenGL renderer still relies very heavily on CPU processing, which again seems very difficult to optimize by offloading work onto special-purpose hardware.
PS: Optimization hell:
1. Each Flash clip has a completely different profiler map.
2. AGG seems to work very serially.
3. The top profiled functions are very heavily templated and buried way deep inside AGG, e.g.:

agg::span_image_filter_rgba_bilinear<agg::image_accessor_clone<agg::pixfmt_alpha_blend_rgba<agg::blender_rgba_pre<agg::rgba8, agg::order_rgba>, agg::row_accessor<unsigned char>, unsigned int> >, agg::span_interpolator_linear_subdiv<agg::trans_affine, 8u> >::generate(agg::rgba8*, int, int, unsigned int)