

Re: [Gnash-dev] Gnash0.8.2 memory usage improvement on ARMv6 -- seeking support

From: Hong Yu
Subject: Re: [Gnash-dev] Gnash0.8.2 memory usage improvement on ARMv6 -- seeking support
Date: Tue, 29 Apr 2008 16:24:13 +0800
User-agent: Thunderbird (X11/20080213)

Thanks for the suggestions! If the jemalloc option becomes available soon, we will try Gnash CVS on ARM, together with the GNASH_GC_TRIGGER_THRESHOLD=1 option.

We have run 'valgrind --tool=massif ./gtk-gnash <movie>' with our .swf file on Ubuntu Edgy, and the summary of the report is as follows (what does it indicate?):
==24736== Total spacetime: 1,974,594,504,528 ms.B
==24736== heap: 90.5%
==24736== heap admin: 9.2%
==24736== stack(s): 0.1%
==18509== Total spacetime: 1,337,694,253,067 ms.B
==18509== heap: 86.1%
==18509== heap admin: 11.2%
==18509== stack(s): 2.6%
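To read these figures: in this generation of massif, "spacetime" is heap usage integrated over time, in ms·B, so dividing it by the run's duration gives the average live footprint. A minimal sketch of that arithmetic, using the first report above and a purely hypothetical 60-second run length (the duration is not stated in the thread):

```shell
# massif's "spacetime" is heap usage integrated over time (ms * bytes).
# Dividing by the run's duration gives the average live footprint.
# The 60 s duration below is an assumed stand-in, not from the report.
awk 'BEGIN {
    spacetime = 1974594504528   # ms.B, first report above (PID 24736)
    duration  = 60000           # ms: hypothetical 60 s run
    printf "avg %.1f MiB\n", spacetime / duration / 1048576
}'
```

With these assumed numbers it prints an average footprint of about 31 MiB; the second report (PID 18509) can be read the same way. The percentages below each total show how the spacetime splits between user heap data, allocator bookkeeping ("heap admin"), and stacks.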

On the other hand, we have also tried profiling with gprof, and it appears that graphics rendering for animation accounts for a considerable percentage of the total execution time on the PC.

Best regards,

Hong Yu

strk wrote:
Another thing is that current CVS supports use of an environment
variable to specify how many new GC-managed objects should be allocated
before a new collection cycle starts.
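As a sketch, assuming the variable is the GNASH_GC_TRIGGER_THRESHOLD named earlier in this thread (the binary name and movie path are illustrative):

```shell
# Run gnash with an aggressive GC trigger: collect after every newly
# allocated GC-managed object, trading CPU time for a smaller peak heap.
# Variable name is taken from this thread; the movie path is illustrative.
GNASH_GC_TRIGGER_THRESHOLD=1 ./gtk-gnash movie.swf
```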


On Tue, Apr 29, 2008 at 08:38:18AM +0200, strk wrote:
On Tue, Apr 29, 2008 at 11:51:47AM +0800, Hong Yu wrote:
We have ported Gnash 0.8.2 to an ARMv6 platform. However, Gnash 0.8.2 fails to play one of our .swf files satisfactorily, ending with a 'std::bad_alloc' message and indicating that it consumes 120 MB of memory or more. We therefore wish to improve Gnash 0.8.2's performance on ARM. Does anyone have suggestions or clues on how we can gradually achieve the goal of improving Gnash for low-end platforms? Thanks.
My first suggestion is to find out what is taking up all the memory. One of the valgrind tools (massif) should help:

valgrind --tool=massif ./gtk-gnash <movie>


Gnash-dev mailing list
